fscore_per_class()

audmetric.fscore_per_class(truth, prediction, labels=None, *, zero_division=0)

F-score per class.

$$\text{fscore}_k = \frac{\text{true positive}_k}{\text{true positive}_k + \frac{1}{2}\left(\text{false positive}_k + \text{false negative}_k\right)}$$
Parameters
  • truth (Sequence[Any]) – ground truth values/classes

  • prediction (Sequence[Any]) – predicted values/classes

  • labels (Optional[Sequence[Any]]) – included labels in preferred ordering. If no labels are supplied, they will be inferred from prediction and truth and ordered alphabetically.

  • zero_division (float) – value to return for a class when its denominator is zero

Return type

Dict[str, float]

Returns

dictionary with label as key and F-score as value

Examples

>>> fscore_per_class([0, 0], [0, 1])
{0: 0.6666666666666666, 1: 0.0}
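The formula above can be illustrated with a short self-contained sketch that counts true positives, false positives, and false negatives per class. This is an assumption-laden re-implementation for clarity, not audmetric's actual source code; `fscore_per_class_sketch` is a hypothetical name.

```python
from typing import Any, Dict, Optional, Sequence


def fscore_per_class_sketch(
    truth: Sequence[Any],
    prediction: Sequence[Any],
    labels: Optional[Sequence[Any]] = None,
    *,
    zero_division: float = 0,
) -> Dict[Any, float]:
    """Per-class F-score computed directly from the formula above.

    Illustrative sketch only; audmetric's implementation may differ.
    """
    if labels is None:
        # Infer labels from truth and prediction, ordered alphabetically
        labels = sorted(set(truth) | set(prediction))
    result = {}
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(truth, prediction))
        fp = sum(t != label and p == label for t, p in zip(truth, prediction))
        fn = sum(t == label and p != label for t, p in zip(truth, prediction))
        denominator = tp + 0.5 * (fp + fn)
        # Fall back to zero_division when the denominator is zero
        result[label] = tp / denominator if denominator > 0 else zero_division
    return result
```

Running it on the documented example reproduces the output: class 0 has one true positive and one false negative (F-score 1 / 1.5 ≈ 0.667), while class 1 has one false positive and no true positives (F-score 0).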