Tennis Player Actions Dataset
A structured visual dataset for sports action recognition, illustrating 4 key tennis gestures (serve, backhand, forehand, waiting position) with COCO annotations and body keypoints following the OpenPose standard.
Description
The Tennis Player Actions Dataset brings together more than 2,000 images classified into 4 tennis actions: forehand, backhand, serve, and waiting position. Each image is annotated in COCO format with 18 body keypoints (following the OpenPose convention), making it a solid basis for analyzing human movement in sport.
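As a minimal sketch of how such annotations can be read, the snippet below uses pycocotools; the annotation file name is a placeholder, not the dataset's actual layout.

```python
from pycocotools.coco import COCO

# Hypothetical path -- adjust to the dataset's actual annotation file.
coco = COCO("annotations/tennis_actions.json")

# Each annotation stores 18 keypoints as a flat [x1, y1, v1, x2, y2, v2, ...]
# list, where v is the COCO visibility flag (0 = not labeled, 1 = occluded,
# 2 = visible).
img_id = coco.getImgIds()[0]
image_info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

for ann in anns:
    kps = ann["keypoints"]
    points = [(kps[i], kps[i + 1], kps[i + 2]) for i in range(0, len(kps), 3)]
    print(image_info["file_name"], len(points), "keypoints")
```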
What is this dataset for?
- Training models to recognize sports actions from still images (a minimal sketch follows this list)
- Analyzing body movements using fine-grained keypoint annotations
- Developing coaching or automatic scoring assistants for tennis
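For the first use case, a simple classification baseline could look like the sketch below. It assumes the images have been exported into one folder per action (a layout not necessarily shipped with the dataset) and uses a stock torchvision backbone.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: data/train/{serve,forehand,backhand,waiting}/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# 4 output classes: serve, forehand, backhand, waiting position.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 4)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```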
Can it be enriched or improved?
Yes: you can add other viewpoints, integrate more specific actions, or cross-reference this data with video sequences to improve the performance of temporal models. The quality of the annotations also makes the dataset suitable for enrichment through 3D modeling or data augmentation, as sketched below.
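One hedged example of keypoint-aware augmentation is a horizontal flip that mirrors both the image and the keypoints; the left/right joint pairing below assumes the standard OpenPose/COCO-18 ordering and should be checked against the dataset's own metadata.

```python
import numpy as np

# Assumed OpenPose/COCO-18 left/right joint pairs (shoulders, elbows, wrists,
# hips, knees, ankles, eyes, ears) -- verify against the dataset metadata.
FLIP_PAIRS = [(2, 5), (3, 6), (4, 7), (8, 11), (9, 12), (10, 13), (14, 15), (16, 17)]

def hflip_with_keypoints(image: np.ndarray, keypoints: np.ndarray):
    """Horizontally flip an HxWx3 image and its (18, 3) keypoint array."""
    h, w = image.shape[:2]
    flipped_img = image[:, ::-1].copy()
    kps = keypoints.astype(float).copy()
    visible = kps[:, 2] > 0
    kps[visible, 0] = (w - 1) - kps[visible, 0]   # mirror x coordinates
    for a, b in FLIP_PAIRS:                        # swap left/right joints
        kps[[a, b]] = kps[[b, a]]
    return flipped_img, kps
```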
🔎 In summary
🧠 Recommended for
- Sports vision developers
- Biomechanics researchers
- Motion analysis students
🔧 Compatible tools
- OpenPose
- Detectron2
- PyTorch
- TensorFlow
💡 Tip
Use keypoints to generate motion vectors or animated skeletons to model sequences in 3D.
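As a small sketch of that idea: given keypoints for two consecutive frames (each as an (18, 3) array of x, y, visibility), per-joint motion vectors are simply the coordinate differences wherever the joint is visible in both frames.

```python
import numpy as np

def motion_vectors(kps_prev: np.ndarray, kps_next: np.ndarray) -> np.ndarray:
    """Per-joint displacement between two (18, 3) keypoint arrays.

    Returns an (18, 2) array of (dx, dy); joints not visible in both
    frames are set to NaN so they can be ignored downstream.
    """
    vectors = (kps_next[:, :2] - kps_prev[:, :2]).astype(float)
    visible = (kps_prev[:, 2] > 0) & (kps_next[:, 2] > 0)
    vectors[~visible] = np.nan
    return vectors
```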
Frequently Asked Questions
How many tennis strokes are covered in this dataset?
Four: backhand, forehand, serve, and waiting position, each with 500 annotated images.
Are annotations compatible with known frameworks?
Yes, they follow the COCO format with OpenPose keypoints and are compatible with most vision tools such as Detectron2.
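For instance, a COCO keypoint file in this format could be registered with Detectron2 roughly as follows; the JSON and image paths are placeholders, and the keypoint names assume the standard OpenPose/COCO-18 layout.

```python
from detectron2.data.datasets import register_coco_instances
from detectron2.data import MetadataCatalog

# Hypothetical paths -- substitute the actual annotation file and image root.
register_coco_instances(
    "tennis_actions_train",
    {},
    "annotations/tennis_actions.json",
    "images/",
)

# Assumed 18-keypoint naming; check it against the dataset's category definition.
MetadataCatalog.get("tennis_actions_train").keypoint_names = [
    "nose", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist", "right_hip", "right_knee",
    "right_ankle", "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
]
```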
Can we use this dataset for video?
Indirectly, yes: the images were extracted from videos, so it is possible to reconstruct sequences from the annotated frames.