True Positive Rate

The True Positive Rate (TPR), also called sensitivity or recall, is a metric used to measure the effectiveness of a classification model. It represents the proportion of actual positive cases that the model correctly identifies. In other words, it answers the question: “Out of all the positives in reality, how many did my model find?”
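
Formally, TPR = TP / (TP + FN), where TP counts the positives the model correctly flagged and FN counts the positives it missed. A minimal sketch in Python (the helper name and counts below are illustrative, not from any particular library):

```python
def true_positive_rate(tp: int, fn: int) -> float:
    """Fraction of actual positives the model found: TP / (TP + FN)."""
    return tp / (tp + fn)

# Illustrative counts: the model caught 80 of 100 actual positives.
print(true_positive_rate(tp=80, fn=20))  # 0.8
```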

Why it matters
Accuracy alone can be misleading, especially with imbalanced datasets. A medical diagnostic model trained on data where 95% of patients are healthy could reach 95% accuracy simply by always predicting “healthy.” But if it fails to detect the true cases of illness, the model is clinically useless. TPR becomes essential in these contexts, as it focuses on the detection of positives.
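
A hedged illustration of this trap, using scikit-learn's metric helpers and synthetic labels that mirror the 95/5 split above:

```python
from sklearn.metrics import accuracy_score, recall_score

y_true = [0] * 95 + [1] * 5   # 95 healthy patients, 5 sick (synthetic)
y_pred = [0] * 100            # degenerate model: always predicts "healthy"

print(accuracy_score(y_true, y_pred))  # 0.95 -- looks impressive
print(recall_score(y_true, y_pred))    # 0.0  -- not a single illness detected
```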

Applications

  • Medical diagnostics: detecting cancer or rare diseases.
  • Fraud detection: ensuring fraudulent transactions are flagged.
  • Security: identifying malicious activity in cyber systems.
  • Search and retrieval: ensuring relevant documents or images are not missed.

Trade-offs
Improving TPR often means lowering precision (more false positives). For example, a spam filter that labels almost every email as spam will have a high TPR but frustrate users. Balancing TPR with other metrics (precision, F1-score, ROC-AUC) is key.
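
The spam-filter example can be made concrete; a sketch with synthetic labels, assuming scikit-learn's built-in metrics:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1] * 20 + [0] * 80  # 20 spam emails, 80 legitimate ones (synthetic)
y_pred = [1] * 100            # a filter that flags every email as spam

print(recall_score(y_true, y_pred))     # 1.0  -- every spam email caught
print(precision_score(y_true, y_pred))  # 0.2  -- 4 out of 5 flags are wrong
print(f1_score(y_true, y_pred))         # ~0.33 -- the trade-off is exposed
```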

The True Positive Rate should rarely be looked at in isolation. In practice, decision-makers often consider it alongside specificity (true negative rate) and positive predictive value, to ensure the model not only catches positives but also avoids excessive false alarms. This holistic view is especially critical in medical and financial domains where mistakes carry high costs.
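
As a sketch, all three quantities fall out of a single confusion matrix; the labels below are made up for illustration:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # synthetic ground truth
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]  # synthetic predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr         = tp / (tp + fn)  # sensitivity: share of positives caught
specificity = tn / (tn + fp)  # true negative rate: negatives correctly cleared
ppv         = tp / (tp + fp)  # positive predictive value: share of real alarms
print(tpr, specificity, ppv)  # 0.75, ~0.67, 0.6
```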

Another key dimension is threshold tuning. The probability cut-off applied to a model’s scores directly affects the TPR: lowering the threshold increases sensitivity but may overload the system with false positives. This is why ROC curves and precision–recall curves are valuable tools to visualize the trade-offs and select the most appropriate operating point.
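
A sketch of this with scikit-learn's roc_curve, using synthetic scores: each candidate threshold trades false positive rate against TPR.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic ground truth and model scores, for illustration only.
y_true   = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1])
y_scores = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
for t, f, s in zip(thresholds, fpr, tpr):
    # Lower thresholds appear later: TPR rises, but so does FPR.
    print(f"threshold={t:.2f}  FPR={f:.2f}  TPR={s:.2f}")
```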

Handling imbalanced datasets is also central to improving TPR. Techniques like oversampling minority classes, applying cost-sensitive learning, or using ensemble methods help models pay attention to rare but critical positive cases. Without such adjustments, models often default to majority-class predictions that artificially inflate accuracy while hurting recall.
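
One hedged sketch of the cost-sensitive route: scikit-learn's class_weight="balanced" reweights the rare positive class during training. The dataset is synthetic and the exact recall numbers will vary from run to run, but the weighted model typically recovers more positives.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced problem: ~95% negatives, ~5% positives.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain    = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

print("TPR, unweighted:    ", recall_score(y_te, plain.predict(X_te)))
print("TPR, class-weighted:", recall_score(y_te, weighted.predict(X_te)))
```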

In business settings, TPR is often used as a communication bridge between data scientists and stakeholders. It answers a straightforward question: “How many of the cases we truly care about did the model catch?”—making it a relatable, actionable metric beyond technical circles.

📚 Further Reading

  • Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4), 427–437.
  • Géron, A. (2019). Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow (2nd ed.). O’Reilly Media.