Sensitivity
Sensitivity, also known as recall or the true positive rate, is a metric that measures how effectively a classification model identifies positive cases. It is the proportion of true positives detected out of all actual positives: TP / (TP + FN).
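This definition translates directly into code. The sketch below uses a small made-up label set purely for illustration:

```python
def sensitivity(y_true, y_pred):
    """Sensitivity (recall) = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Toy example: 4 actual positives, of which 3 are detected.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
print(sensitivity(y_true, y_pred))  # 0.75
```

Note that the false positive on the last-but-one sample does not affect the result: sensitivity only looks at how the actual positives were handled.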
Why it matters
Sensitivity is essential in scenarios where missing a positive case has severe consequences. While accuracy may give a broad view of performance, sensitivity focuses specifically on the model’s ability to catch true positives, making it critical in high-stakes domains.
Examples
- Medical diagnostics: a COVID-19 test with high sensitivity correctly identifies most infected patients, reducing the risk of false reassurance.
- Fraud detection: in banking, prioritizing sensitivity ensures suspicious transactions are flagged, even at the cost of false alarms.
- Search engines: a highly sensitive algorithm retrieves most of the relevant documents, though it may also bring in irrelevant ones.
Trade-offs
High sensitivity often comes at the expense of specificity (true negative rate). A model tuned for maximum sensitivity may produce many false positives, which can overwhelm human reviewers or downstream systems. Sensitivity must therefore be balanced with other metrics such as precision and the F1-score.
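The trade-off can be made concrete by computing all four metrics side by side from the confusion-matrix counts. The toy model below is deliberately aggressive, flagging most samples as positive; the data is invented for illustration:

```python
def classification_metrics(y_true, y_pred):
    """Compute sensitivity, specificity, precision, and F1 from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# An aggressive model that flags most samples as positive:
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0]
m = classification_metrics(y_true, y_pred)
# Sensitivity is perfect (1.0), but specificity (0.4) and
# precision (0.5) suffer from the three false positives.
```

Catching every positive is easy if you are willing to cry wolf; the cost shows up in specificity and precision, which is why these metrics are read together.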
💡 Sensitivity reflects how good a model is at catching positive cases. It is especially critical when missing a positive is costly—for example, overlooking a cancer diagnosis, letting a fraudulent transaction go unnoticed, or failing to detect a cyberattack.
A model with very high sensitivity rarely misses positives, but this often comes at the cost of generating more false alarms. For instance, a highly sensitive medical test might flag many healthy patients as potentially ill, creating anxiety and additional work for doctors.
Because of this, sensitivity is rarely interpreted alone. It is almost always balanced with specificity, which measures how well negatives are recognized, and with other metrics like precision or the F1-score, which combine perspectives for a fuller picture of model performance.
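In practice, the balance between sensitivity and specificity is often set by choosing a decision threshold on the model's scores. The sketch below, using invented scores and labels, shows how lowering the threshold raises sensitivity while specificity falls:

```python
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]  # hypothetical model scores
labels = [1,   1,   0,   1,   0,   0]    # ground-truth classes

def rates(threshold):
    """Return (sensitivity, specificity) at a given decision threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, labels))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, labels))
    tn = sum(p == 0 and t == 0 for p, t in zip(preds, labels))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

for th in (0.7, 0.5, 0.25):
    sens, spec = rates(th)
    print(f"threshold={th}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Sweeping the threshold over all values and plotting sensitivity against the false positive rate (1 − specificity) yields the ROC curve discussed in Fawcett (2006) below.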
📚 Further Reading
- Fawcett, T. (2006). An Introduction to ROC Analysis. Pattern Recognition Letters.
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.