identification_error_rate_detailed()
- audmetric.identification_error_rate_detailed(truth, prediction, *, num_workers=1, multiprocessing=False)
Detailed identification error rate result components.
The identification error rate is defined as

identification error rate = (confusion + false alarm + miss) / total

where confusion is the total confusion duration, false alarm is the total duration of predictions without an overlapping ground truth, miss is the total duration of ground truth without an overlapping prediction, and total is the total duration of ground truth segments. [1]
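As a quick sanity check of the formula, the sketch below restates it in plain Python. The helper is illustrative only and not part of audmetric; the duration values are taken from the worked example in this section (0.05 s confusion, 0.05 s false alarm, no miss, 0.2 s of ground truth).

```python
# Illustrative sketch only: this helper is NOT part of audmetric,
# it just restates the identification error rate formula in code.
def ier_from_durations(confusion, false_alarm, miss, total):
    """Identification error rate from its duration components (in seconds)."""
    return (confusion + false_alarm + miss) / total

# Components matching the example in this section:
# 0.05 s confusion, 0.05 s false alarm, 0.0 s miss, 0.2 s ground truth.
ier = ier_from_durations(0.05, 0.05, 0.0, 0.2)
print(ier)  # 0.5
```

Dividing each component by the total ground truth duration yields the individual rates (conf_rate = 0.25, fa_rate = 0.25, miss_rate = 0.0), which sum to the identification error rate.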
Compared to audmetric.identification_error_rate(), this function returns the identification error rate as well as the audmetric.ErrorRateDetails containing the confusion rate, the false alarm rate, and the miss rate.

The identification error rate should be used when the labels are known by the prediction model. If this isn't the case, consider using audmetric.diarization_error_rate_detailed().

- Parameters:
  - truth (Series) – ground truth labels with a segmented index conform to audformat
  - prediction (Series) – predicted labels with a segmented index conform to audformat
  - num_workers (int) – number of threads or 1 for sequential processing
  - multiprocessing (bool) – use multiprocessing instead of multithreading
- Return type:
  tuple[float, ErrorRateDetails]
- Returns:
  identification error rate and audmetric.ErrorRateDetails containing conf_rate, fa_rate, miss_rate
- Raises:
  ValueError – if truth or prediction do not have a segmented index conform to audformat
Examples
>>> import pandas as pd
>>> import audformat
>>> truth = pd.Series(
...     index=audformat.segmented_index(
...         files=["f1.wav", "f1.wav"],
...         starts=[0.0, 0.1],
...         ends=[0.1, 0.2],
...     ),
...     data=["a", "b"],
... )
>>> prediction = pd.Series(
...     index=audformat.segmented_index(
...         files=["f1.wav", "f1.wav", "f1.wav"],
...         starts=[0, 0.1, 0.1],
...         ends=[0.1, 0.15, 0.2],
...     ),
...     data=["a", "b", "a"],
... )
>>> identification_error_rate_detailed(truth, prediction)
(0.5, ErrorRateDetails(conf_rate=0.25, fa_rate=0.25, miss_rate=0.0))