In later posts, we will go through some very helpful formulae for calculating performance metrics to assess your model, all built on these values at their core.

But even without going that far, it should be pretty obvious that this gives us a great way of seeing where our model performs well, and where it performs poorly.

If there are no observations that our model falsely predicts, then it is 100% accurate, for both negative and positive outcomes. This is the ideal situation, as long as it can maintain that performance on real-world data!

If there are a lot of true positives, but also a lot of false positives, it may indicate that our model is too heavily inclined to label an observation as positive, to the detriment of overall performance.
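To make this concrete, here is a minimal sketch of tallying these four values for a binary classifier. The label lists are made-up illustrative data, and `confusion_counts` is a hypothetical helper name, not from any particular library:

```python
def confusion_counts(actual, predicted):
    """Count true positives, false positives, true negatives,
    and false negatives for binary labels (1 = positive, 0 = negative)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return tp, fp, tn, fn

# Made-up example labels for eight observations.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 1, 1, 0, 0, 1, 1, 0]

tp, fp, tn, fn = confusion_counts(actual, predicted)
print(f"TP={tp} FP={fp} TN={tn} FN={fn}")  # TP=3 FP=2 TN=2 FN=1

# Accuracy is simply the fraction of predictions that were correct.
accuracy = (tp + tn) / (tp + fp + tn + fn)
print(f"accuracy={accuracy:.3f}")  # accuracy=0.625
```

Note the two false positives here: out of only four actual negatives, the model mislabelled half of them as positive, which is exactly the kind of imbalance the paragraph above describes.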