binary_cross_entropy#

usencrypt.ai.metrics.binary_cross_entropy(y_true, y_pred, label_smoothing=0.0, axis=0, _grad_check=False)#

Computes the binary cross-entropy between labels and predictions.

For \(m\) ground-truth labels \(y\) and predicted labels \(\hat{y}\), this is defined as:

\[\text{BCE}(y, \hat{y}) = \frac{1}{m} \sum^m_{i = 1} \Big(-y_i \log{\hat{y}_i} - (1 - y_i) \log(1 - \hat{y}_i)\Big)\]
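
As a point of reference, the same reduction can be written directly in plain NumPy. This is an illustrative sketch of the formula above (the name bce_reference is hypothetical), not the library's encrypted implementation:

>>> import numpy as np
>>> def bce_reference(y_true, y_pred, axis=0):
...     # Element-wise BCE terms from the formula, then the mean along `axis`.
...     return np.mean(-y_true * np.log(y_pred)
...                    - (1 - y_true) * np.log(1 - y_pred), axis=axis)

With axis=0 the mean is taken down the columns, which matches the example output at the bottom of this page.
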
Parameters
  • y_true (list or numpy.ndarray of float or usencrypt.cipher.Float) – The ground truth labels.

  • y_pred (list or numpy.ndarray of float or usencrypt.cipher.Float) – The predicted labels.

  • label_smoothing (float) – The label smoothing parameter in the range [0, 1]. If label_smoothing > 0, the labels are smoothed by squeezing them towards 0.5 (i.e., 1 - 0.5 * label_smoothing for the target class and 0.5 * label_smoothing for the non-target class); a short sketch of this arithmetic follows the parameter list. Defaults to 0.0.

  • axis (int) – The axis along which the mean is computed. Defaults to 0.
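
To illustrate the smoothing described for label_smoothing, here is the equivalent plain-NumPy arithmetic. This is an assumption based on the parameter description above, not a view of the library's internals:

>>> import numpy as np
>>> label_smoothing = 0.2
>>> y_true = np.array([1.0, 0.0, 1.0])
>>> # Squeeze labels towards 0.5: targets become 1 - 0.5 * label_smoothing,
>>> # non-targets become 0.5 * label_smoothing.
>>> y_true * (1 - label_smoothing) + 0.5 * label_smoothing
array([0.9, 0.1, 0.9])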

Returns

The computed binary cross-entropy.

Return type

float or usencrypt.cipher.Float

Warning

Overflow can occur for arguments near x = 60 due to the exponentiation performed during the Newton-Raphson iteration in usencrypt.log(). See the warnings for usencrypt.log().
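
Separately from the encrypted-domain overflow above, the logarithms in the formula diverge when a prediction is exactly 0 or 1. A common guard (a general numerical-stability sketch, not part of this API) is to clip predictions first:

>>> import numpy as np
>>> eps = 1e-7
>>> y_pred_safe = np.clip(y_pred, eps, 1 - eps)  # keeps log(y_pred) and log(1 - y_pred) finite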

Examples
>>> import numpy as np
>>> import usencrypt as ue
>>> y_true = np.random.rand(3, 3)
>>> y_true
array([[0.29968769, 0.05661954, 0.56347932],
       [0.13040524, 0.46424133, 0.67697876],
       [0.01753717, 0.71559143, 0.34469593]])
>>> y_pred = np.random.rand(3, 3)
>>> y_pred
array([[0.96913705, 0.09175659, 0.58708196],
       [0.53064407, 0.41190663, 0.92961167],
       [0.38228412, 0.63935748, 0.49057806]])
>>> bce = ue.ai.metrics.binary_cross_entropy(y_true, y_pred)
>>> bce
array([1.22524972, 0.51078447, 0.76010041])
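
As a cross-check, the same values fall out of applying the formula directly in plain NumPy (ordinary floating-point arithmetic, not an encrypted computation):

>>> np.mean(-y_true * np.log(y_pred) - (1 - y_true) * np.log(1 - y_pred), axis=0)
array([1.22524972, 0.51078447, 0.76010041])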