accuracy()

audmetric.accuracy(truth, prediction, labels=None)

Classification accuracy.

\text{accuracy} = \frac{\text{number of correct predictions}}{\text{number of total predictions}}
Parameters:
  • truth (Sequence[object]) – ground truth values/classes

  • prediction (Sequence[object]) – predicted values/classes

  • labels (Sequence[str | int]) – included labels in preferred ordering. A sample is included in the computation if either its prediction or its ground truth (logical OR) is contained in labels. If no labels are supplied, they are inferred from \{\text{prediction}, \text{truth}\} and ordered alphabetically.

Return type:

float

Returns:

accuracy of prediction \in [0, 1]

Raises:

ValueError – if truth and prediction differ in length

Examples

>>> from audmetric import accuracy
>>> accuracy([0, 0], [0, 1])
0.5
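To illustrate how the labels parameter filters samples, here is a minimal pure-Python sketch of the documented behavior. This is an illustration of the formula and filtering rule described above, not the audmetric implementation itself:

```python
from collections.abc import Sequence
from typing import Optional


def accuracy(
    truth: Sequence[object],
    prediction: Sequence[object],
    labels: Optional[Sequence] = None,
) -> float:
    # Sketch of the documented behavior (not audmetric's actual code).
    if len(truth) != len(prediction):
        raise ValueError("truth and prediction differ in length")
    if labels is None:
        # Infer labels from truth and prediction, ordered alphabetically
        labels = sorted(set(truth) | set(prediction))
    # Keep a sample if its truth OR its prediction is in labels
    pairs = [
        (t, p)
        for t, p in zip(truth, prediction)
        if t in labels or p in labels
    ]
    # accuracy = correct predictions / total (considered) predictions
    return sum(t == p for t, p in pairs) / len(pairs)


print(accuracy([0, 0], [0, 1]))  # matches the doctest above: 0.5
# Restricting to labels=[0, 1] still keeps the third sample,
# because its prediction (0) is in labels even though its truth (2) is not.
print(accuracy([0, 1, 2], [0, 1, 0], labels=[0, 1]))
```

Note that because the filter is a logical OR, restricting labels does not simply drop all samples whose ground truth is outside the label set; a sample survives whenever either side of the pair is listed.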