accuracy()

audmetric.accuracy(truth, prediction, labels=None)

Classification accuracy.

\text{accuracy} = \frac{\text{number of correct predictions}}{\text{number of total predictions}}
Parameters
  • truth (Sequence[Any]) – ground truth values/classes

  • prediction (Sequence[Any]) – predicted values/classes

  • labels (Optional[Sequence[Union[str, int]]]) – included labels in preferred ordering. A sample is considered in the computation if either its prediction or its ground truth value (logical OR) is contained in labels. If no labels are supplied, they will be inferred from \{\text{prediction}, \text{truth}\} and ordered alphabetically.

Return type

float

Returns

accuracy of prediction \in [0, 1]

Raises

ValueError – if truth and prediction differ in length

Examples

>>> accuracy([0, 0], [0, 1])
0.5
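The behavior documented above, including the logical-OR filtering by labels, can be sketched in plain Python. This is a simplified re-implementation for illustration, not audmetric's actual source code; how the library handles the edge case where no sample matches labels is an assumption here.

```python
from typing import Any, Optional, Sequence, Union


def accuracy(
    truth: Sequence[Any],
    prediction: Sequence[Any],
    labels: Optional[Sequence[Union[str, int]]] = None,
) -> float:
    """Sketch of classification accuracy with label filtering."""
    if len(truth) != len(prediction):
        raise ValueError("truth and prediction differ in length")
    if labels is None:
        # Infer labels from truth and prediction, ordered alphabetically
        labels = sorted(set(truth) | set(prediction))
    # Keep a sample if its truth OR its prediction is in labels
    pairs = [
        (t, p)
        for t, p in zip(truth, prediction)
        if t in labels or p in labels
    ]
    if not pairs:
        # Assumption: result is undefined when no sample is included
        return float("nan")
    correct = sum(t == p for t, p in pairs)
    return correct / len(pairs)
```

With labels=[1], the pair (0, 0) is excluded (neither value is in labels), while (0, 1) is kept because its prediction is in labels, so accuracy([0, 0], [0, 1], labels=[1]) yields 0.0.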