Model Evaluation Methods in Machine Learning

Machine learning models are fantastic tools at our disposal.


They can be used to predict continuous outputs, like house prices or population figures, or discrete, categorical outputs, such as what category a person falls into, or whether a picture shows an apple or a cat.


However, we need to validate the performance of the model, as bad models are almost as useless as bad data!

When we’ve trained our model, we want some confidence in its ability to generalise to unseen data.

This means running some tests and keeping an eye on particular metrics that are relevant to the problem we are trying to solve.


We want to know how well the model is likely to work in practice, on data it was never trained on.
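
To make that concrete, here is a minimal sketch of the usual approach: hold some data back from training and score the model only on that held-out portion. (This example assumes scikit-learn, and uses the built-in iris dataset and a logistic regression purely as placeholders.)

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder dataset and model; any estimator with fit/predict works the same way.
X, y = load_iris(return_X_y=True)

# Hold 25% of the data back; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score only on the held-out set to estimate performance on unseen data.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The number that matters is the score on the held-out test set, not the training set, because the test set stands in for the unseen data the model will meet in the real world.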


In this series, we’ll go through some of the key evaluation metrics and methods for machine learning models, one at a time.


With that in mind, this is what we are going to cover:

  1. Accuracy
  2. Log Loss
  3. Confusion Matrix
  4. AUC
  5. Precision vs Recall
  6. F1
  7. Mean Absolute Error
  8. Mean Squared Error
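
As a quick preview, the sketch below shows one way each of these could be computed with scikit-learn (an assumed library choice; the labels, probabilities, and regression values are made-up placeholders). What each number actually tells you is the subject of the posts that follow.

```python
from sklearn.metrics import (
    accuracy_score, confusion_matrix, f1_score, log_loss,
    mean_absolute_error, mean_squared_error,
    precision_score, recall_score, roc_auc_score,
)

# Classification: made-up true labels, predicted labels and predicted
# probabilities of the positive class.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
y_prob = [0.9, 0.2, 0.4, 0.8, 0.1, 0.7]

print("Accuracy:        ", accuracy_score(y_true, y_pred))
print("Log loss:        ", log_loss(y_true, y_prob))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("AUC:             ", roc_auc_score(y_true, y_prob))
print("Precision:       ", precision_score(y_true, y_pred))
print("Recall:          ", recall_score(y_true, y_pred))
print("F1:              ", f1_score(y_true, y_pred))

# Regression: made-up true and predicted house prices.
y_true_reg = [200_000, 350_000, 150_000]
y_pred_reg = [210_000, 330_000, 160_000]

print("MAE:", mean_absolute_error(y_true_reg, y_pred_reg))
print("MSE:", mean_squared_error(y_true_reg, y_pred_reg))
```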

We’ll see you in the next post – Accuracy!