identification_error_rate()

audmetric.identification_error_rate(truth, prediction, *, num_workers=1, multiprocessing=False)

Identification error rate.

\text{IER} = \frac{\text{confusion} + \text{false alarm} + \text{miss}}{\text{total}}

where confusion is the total confusion duration, false alarm is the total duration of predictions without an overlapping ground truth, miss is the total duration of ground truth without an overlapping prediction, and total is the total duration of ground truth segments. [1]
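As a minimal sketch of the arithmetic behind this formula, with made-up durations in seconds (the numbers below are purely illustrative and not taken from any dataset):

>>> confusion = 3.0  # duration assigned a wrong label
>>> false_alarm = 1.0  # duration of predictions without overlapping ground truth
>>> miss = 2.0  # duration of ground truth without overlapping prediction
>>> total = 20.0  # total duration of ground truth segments
>>> (confusion + false_alarm + miss) / total
0.3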

The identification error rate should be used when the labels are known by the prediction model. If this isn’t the case, consider using audmetric.diarization_error_rate().

[1] Hervé Bredin. pyannote.metrics: a toolkit for reproducible evaluation, diagnostic, and error analysis of speaker diarization systems. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association. Stockholm, Sweden, August 2017.
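As a sketch of the alternative mentioned in the note above: when the predicted labels are arbitrary identifiers (e.g. cluster IDs) rather than the known label set, audmetric.diarization_error_rate() is the better choice. The call below assumes it accepts the same kind of segmented truth and prediction series as this function; see its own documentation for details.

>>> import pandas as pd
>>> import audformat
>>> import audmetric
>>> truth = pd.Series(
...     index=audformat.segmented_index(
...         files=["f1.wav"],
...         starts=[0.0],
...         ends=[0.2],
...     ),
...     data=["a"],
... )
>>> prediction = pd.Series(
...     index=audformat.segmented_index(
...         files=["f1.wav"],
...         starts=[0.0],
...         ends=[0.2],
...     ),
...     data=["speaker-1"],  # arbitrary cluster label instead of "a"
... )
>>> der = audmetric.diarization_error_rate(truth, prediction)  # assumed call pattern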

Parameters
  • truth (Series) – ground truth labels with a segmented index conforming to audformat

  • prediction (Series) – predicted labels with a segmented index conforming to audformat

  • num_workers (int) – number of threads, or 1 for sequential processing (see the usage sketch after the examples below)

  • multiprocessing (bool) – use multiprocessing instead of multithreading

Return type

float

Returns

identification error rate

Raises

ValueError – if truth or prediction do not have a segmented index conforming to audformat

Examples

>>> import pandas as pd
>>> import audformat
>>> from audmetric import identification_error_rate
>>> truth = pd.Series(
...     index=audformat.segmented_index(
...         files=["f1.wav", "f1.wav"],
...         starts=[0.0, 0.1],
...         ends=[0.1, 0.2],
...     ),
...     data=["a", "b"],
... )
>>> prediction = pd.Series(
...     index=audformat.segmented_index(
...         files=["f1.wav", "f1.wav", "f1.wav"],
...         starts=[0, 0.1, 0.1],
...         ends=[0.1, 0.15, 0.2],
...     ),
...     data=["a", "b", "a"],
... )
>>> identification_error_rate(truth, prediction)
0.5
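Continuing the same example, a minimal sketch of the parallelization arguments from the signature above. The metric value does not depend on the number of workers; whether parallelization pays off for such a tiny series is not the point here.

>>> identification_error_rate(truth, prediction, num_workers=2)
0.5
>>> identification_error_rate(truth, prediction, num_workers=2, multiprocessing=True)
0.5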