MeanSquaredError

class usencrypt.ai.losses.MeanSquaredError

Computes the mean squared error between labels and predictions. It is used to set up the cost function in a neural network model.

For \(m\) ground-truth labels \(y\) and predicted labels \(\hat{y}\), this is defined as:

\[\text{MSE}(y, \hat{y}) = \frac{1}{2m} \sum_{i = 1}^m (y_i - \hat{y}_i)^2\]
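
For illustration, the formula can be reproduced directly with NumPy. The sketch below is a hypothetical reference implementation of the expression above (the helper name mse_reference is not part of usencrypt):

>>> import numpy as np
>>> def mse_reference(y_true, y_pred):
...     # Reference computation of (1 / (2m)) * sum((y_i - y_hat_i)^2);
...     # illustrative only, not the usencrypt implementation.
...     y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
...     m = y_true.shape[0]
...     return np.sum((y_true - y_pred) ** 2) / (2 * m)
...
>>> mse_reference([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])
0.20833333333333334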
Inheritance
Call arguments

y_true – Ground-truth labels \(y\).

y_pred – Predicted labels \(\hat{y}\).

Returns

The mean squared error of the input values.

Return type

float or usencrypt.cipher.Float

Examples

In addition to being set as the loss function of a neural network model, an instance of the MeanSquaredError class can be called directly, as follows:

>>> import usencrypt as ue
>>> import numpy as np
>>> y_true = np.random.rand(3, 3)
>>> y_true
array([[0.29968769, 0.05661954, 0.56347932],
       [0.13040524, 0.46424133, 0.67697876],
       [0.01753717, 0.71559143, 0.34469593]])
>>> y_pred = np.random.rand(3, 3)
>>> y_pred
array([[0.96913705, 0.09175659, 0.58708196],
       [0.53064407, 0.41190663, 0.92961167],
       [0.38228412, 0.63935748, 0.49057806]])
>>> mse = ue.ai.losses.MeanSquaredError()
>>> error = mse(y_true, y_pred)
>>> error
array([0.2471313 , 0.00326172, 0.02855402])
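
The per-column values above correspond to the mean of the squared differences along axis 0, and they can be cross-checked with plain NumPy (a verification sketch, assuming that reduction axis; not library code):

>>> np.mean((y_true - y_pred) ** 2, axis=0)
array([0.2471313 , 0.00326172, 0.02855402])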