## Log Loss: Penalising False Classifications

As we have seen, pure accuracy of our machine learning model is an evaluation metric that leaves a lot to be desired.

Now we move on to a slightly more complex, but much more effective metric: log loss.

At its core, log loss works a little like an inverse of model accuracy, but it focuses on penalising false classifications.

### Calculating Log Loss

To work with log loss, we need to have our model assign probabilities to the outcomes, not just the predicted end result.

This is also why we use log loss with classifiers. In this example, we are going to consider the binary classification problem, where the outcome is either a 1 or a 0.

Once your model has assigned every observation in the set a probability, we can use the following formula for every value:

$$-\big(y \log(p) + (1 - y)\log(1 - p)\big)$$

where $y$ is the true label (is it actually a 0 or a 1), and $p$ is the predicted probability that the observation belongs to class 1, so $\log(p)$ is the logarithm of the predicted probability of the result.

This is then averaged over the full set of observations, giving an overall metric value we can use.

Because this is a logarithmic function, when we optimise for log loss we penalise predictions that are confident but incorrect far more heavily than ones that are merely uncertain. The uncertainty involved is taken into account.
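The behaviour above is easy to verify with a small sketch. The helper below (`binary_log_loss` is a name chosen for illustration, not a standard library function) computes the mean binary log loss and clips probabilities away from 0 and 1 so the logarithm never blows up:

```python
import math

def binary_log_loss(y_true, p_pred, eps=1e-15):
    """Mean binary log loss over a set of observations.

    y_true: true labels (0 or 1); p_pred: predicted probabilities of class 1.
    Probabilities are clipped to (eps, 1 - eps) to avoid log(0).
    """
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip away from 0 and 1
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# An uncertain wrong prediction incurs a modest penalty...
print(binary_log_loss([1], [0.4]))
# ...while a confidently wrong prediction is penalised far more heavily.
print(binary_log_loss([1], [0.01]))
```

Running this shows the confidently wrong prediction scoring several times worse than the uncertain one, which is exactly the penalty structure described above.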

### Generalising to Multi-Class Classification

When our model is trying to predict more than two classes, say any number of M classes, our formula summed over all the observations becomes:

$$-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij}\log(p_{ij})$$

where:

- N is the total number of observations,
- M is the number of different classes,
- i represents the i-th observation, from 1 to N,
- j represents the j-th class, from 1 to M,
- $y_{ij}$ is 1 if observation i truly belongs to class j and 0 otherwise, and $p_{ij}$ is the predicted probability that it does.
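The double sum can be written out directly. In this sketch (the function name and example numbers are mine, for illustration), the true labels are one-hot rows and the predictions are rows of probabilities, matching the $y_{ij}$ and $p_{ij}$ in the formula:

```python
import math

def multiclass_log_loss(y_true, p_pred, eps=1e-15):
    """Mean multi-class log loss.

    y_true: N rows of one-hot labels (y_ij).
    p_pred: N rows of predicted class probabilities (p_ij).
    """
    total = 0.0
    for y_row, p_row in zip(y_true, p_pred):      # sum over i = 1..N
        for y_ij, p_ij in zip(y_row, p_row):      # sum over j = 1..M
            p_ij = min(max(p_ij, eps), 1 - eps)   # clip away from 0 and 1
            total += y_ij * math.log(p_ij)
    return -total / len(y_true)

# Three observations, three classes (N = 3, M = 3):
y = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
p = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]]
print(multiclass_log_loss(y, p))
```

Because each $y_{ij}$ row is one-hot, only the probability assigned to the true class contributes, so the multi-class formula reduces to the binary one when M = 2.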

### Summary of Log Loss

Log loss is a penalty function: we want it to be as low as possible when we optimise our model. A value close to 0 represents a more accurate model, while larger values indicate a less accurate one. Note that log loss is not capped at 1; a confidently wrong model can score far higher.

Log loss is almost always preferred over basic classification accuracy, because it accounts for how confident the model is in each prediction rather than just whether the prediction was right.

In our next post, we'll be moving on to the confusion matrix, which, whilst not an evaluation metric itself, gives us a great way to see how our classifier is performing.

See you in the next post!