equal_error_rate()

audmetric.equal_error_rate(truth, prediction)[source]

Equal error rate for verification tasks.

The equal error rate (EER) is the point where the false non-match rate (FNMR) and the false match rate (FMR) are identical. The FNMR indicates how often an enrolled speaker was missed. The FMR indicates how often an impostor was verified as the enrolled speaker.

In practice the score distribution is not continuous, so an interval is returned instead. The EER value is set to the midpoint of this interval [1]:

\text{EER} = \frac{\min(\text{FNMR}[t], \text{FMR}[t]) + \max(\text{FNMR}[t], \text{FMR}[t])}{2}

with t = \text{argmin}(|\text{FNMR} - \text{FMR}|).
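The definition above can be sketched in a few lines of NumPy. This is an illustration only, not the audmetric implementation; it assumes candidate thresholds are the unique prediction scores and that a score greater than or equal to the threshold counts as a match:

```python
import numpy as np


def eer_sketch(truth, prediction):
    """Sketch of the EER computation described above.

    Returns the EER (midpoint between FNMR and FMR at the threshold
    where their absolute difference is smallest) and that threshold.
    """
    truth = np.asarray(truth, dtype=bool)
    scores = np.asarray(prediction, dtype=float)
    # Candidate thresholds: the unique scores (assumption of this sketch)
    thresholds = np.unique(scores)
    # FNMR: fraction of genuine scores rejected (score < threshold)
    fnmr = np.array([np.mean(scores[truth] < t) for t in thresholds])
    # FMR: fraction of impostor scores accepted (score >= threshold)
    fmr = np.array([np.mean(scores[~truth] >= t) for t in thresholds])
    # t = argmin(|FNMR - FMR|), EER = midpoint of the interval
    t = np.argmin(np.abs(fnmr - fmr))
    eer = (min(fnmr[t], fmr[t]) + max(fnmr[t], fmr[t])) / 2
    return eer, thresholds[t]
```

Under these assumptions the sketch reproduces the example below: `eer_sketch([0, 1, 0, 1, 0], [0.2, 0.8, 0.4, 0.5, 0.5])` yields an EER of 1/6 at threshold 0.5.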

truth may only contain binary entries such as [1, 0, True, False, ...], whereas prediction may also contain similarity scores, e.g. [0.8, 0.1, ...].

The implementation is identical to the one provided by the pyeer package.

[1] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2000: fingerprint verification competition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:402–412, 2002. doi:10.1109/34.990140.

Parameters
  • truth (Sequence[Union[bool, int]]) – ground truth classes

  • prediction (Sequence[Union[bool, int, float]]) – predicted classes or similarity scores

Return type

Tuple[float, namedtuple]

Returns

  • equal error rate (EER)

  • namedtuple containing fmr, fnmr, thresholds, threshold, where threshold is the decision threshold corresponding to the returned EER

Raises

ValueError – if truth contains values other than 1, 0, True, False

Examples

>>> truth = [0, 1, 0, 1, 0]
>>> prediction = [0.2, 0.8, 0.4, 0.5, 0.5]
>>> eer, stats = equal_error_rate(truth, prediction)
>>> eer
0.16666666666666666
>>> stats.threshold
0.5