accuracy_score

usencrypt.ai.metrics.accuracy_score(y_true, y_pred, normalize=True)

Computes the accuracy score between labels and predictions.

For \(m\) ground-truth labels \(y\) and predicted labels \(\hat{y}\), this is defined as:

\[\text{accuracy}(y,\hat{y}) = \frac{1}{m} \sum_{i = 1}^{m}1(\hat{y}_{i}=y_{i})\]
Parameters
  • y_true (list or numpy.ndarray) – The ground-truth labels.

  • y_pred (list or numpy.ndarray) – The predicted labels.

  • normalize (bool) – If True, returns the fraction of correctly classified samples. Otherwise, returns the number of correctly classified samples. Defaults to True.

Returns

If normalize is True, returns the fraction of correctly classified samples as a float. Otherwise, returns the number of correctly classified samples as an int.

Return type

int, float, usencrypt.cipher.Int, or usencrypt.cipher.Float

Examples
>>> import numpy as np
>>> import usencrypt as ue
>>> y_true = np.random.randint(0, 2, size=(10,))
>>> y_true
array([0, 1, 1, 1, 0, 1, 0, 1, 0, 0])
>>> y_pred = np.random.randint(0, 2, size=(10,))
>>> y_pred
array([1, 0, 1, 1, 0, 1, 1, 1, 1, 1])
>>> accuracy = ue.ai.metrics.accuracy_score(y_true, y_pred)
>>> accuracy
0.5
>>> accuracy = ue.ai.metrics.accuracy_score(y_true, y_pred, normalize=False)
>>> accuracy
5
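
For reference, the values above match the definition applied directly with plain NumPy. This is an illustrative check of the formula, not part of the usencrypt API:

>>> correct = int(np.sum(y_true == y_pred))  # number of matching labels
>>> correct
5
>>> correct / len(y_true)  # fraction of correct predictions, as with normalize=True
0.5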