PlanesNet
A dataset of satellite images from the PlanetScope platform, annotated for aircraft detection. It can be used to train robust computer vision models on geospatial data.
Description
PlanesNet is a satellite imagery dataset of 32,000 small (20×20 pixel) RGB images captured by Planet satellites over several California airports. Each image carries a binary label: “plane” (an aircraft is visible) or “no-plane”. The dataset includes difficult cases such as partial aircraft, confuser objects, and patches that earlier models misclassified.
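If you are working with the public release, the patches are typically distributed as a single planesnet.json file; the loading sketch below assumes that layout (keys data, labels, locations, scene_ids, with each image stored as a flat list of 1,200 values in channel-major order) and may need adjusting to your copy of the archive.

```python
import json
import numpy as np

# Hypothetical path; adjust to wherever the archive was extracted.
with open("planesnet.json") as f:
    planesnet = json.load(f)

# Assumed layout (as in the public release): each entry of "data" is a flat
# list of 1,200 values, channel-major (400 R values, then 400 G, then 400 B).
X = np.array(planesnet["data"], dtype=np.uint8).reshape(-1, 3, 20, 20)
X = X.transpose(0, 2, 3, 1)          # -> (N, 20, 20, 3) for image tooling
y = np.array(planesnet["labels"])    # 1 = plane, 0 = no-plane

print(X.shape, y.shape, y.mean())    # shape check and fraction of "plane" patches
```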
What is this dataset for?
- Train aircraft detection models on satellite imagery (embedded vision, drones); see the training sketch after this list
- Automate the monitoring of airports or air traffic with AI
- Serve as a benchmark for binary classification on small, challenging image patches
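As a concrete starting point for the first use case, here is a minimal PyTorch training sketch over the 20×20 patches. It reuses the X and y arrays from the loading sketch above; the architecture, subset size, and hyperparameters are illustrative, not prescribed by the dataset.

```python
import torch
import torch.nn as nn

# Minimal CNN for 20x20 RGB patches; layer sizes are illustrative only.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 20 -> 10
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 10 -> 5
    nn.Flatten(),
    nn.Linear(32 * 5 * 5, 2),
)

# Small subset to keep the demo quick; use a DataLoader for real training.
inputs = torch.from_numpy(X[:2048]).permute(0, 3, 1, 2).float() / 255.0
targets = torch.from_numpy(y[:2048]).long()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                       # a few epochs for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)   # full-batch on the subset, for brevity
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```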
Can it be enriched or improved?
Yes. You can combine PlanesNet with other geospatial or meteorological sources, or upsample the patches by interpolation as a simple form of super-resolution (sketched below). It is also possible to annotate other objects on the full scenes provided, or to generate segmentation masks from them.
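As a simple illustration of the interpolation idea, the sketch below upsamples a patch with bicubic resampling via Pillow; the 4× factor is arbitrary and this is a naive stand-in for learned super-resolution.

```python
import numpy as np
from PIL import Image

def upsample_patch(patch: np.ndarray, factor: int = 4) -> np.ndarray:
    """Bicubic upsampling of a (20, 20, 3) uint8 patch."""
    img = Image.fromarray(patch)
    big = img.resize((patch.shape[1] * factor, patch.shape[0] * factor),
                     Image.BICUBIC)
    return np.asarray(big)

# Example with a random patch standing in for a real PlanesNet image.
demo = (np.random.rand(20, 20, 3) * 255).astype(np.uint8)
print(upsample_patch(demo).shape)   # (80, 80, 3)
```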
🔎 In summary
🧠 Recommended for
- Satellite vision engineers
- Defense and geointelligence researchers
- Object detection students
🔧 Compatible tools
- PyTorch
- TensorFlow
- FastAI
- RasterVision
- Keras
💡 Tip
Combine this dataset with the full scenes to validate your models in realistic conditions, for example with sliding-window inference (sketched below).
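One way to do this, assuming the scenes are readable as standard RGB images, is to slide the patch classifier over a full scene. The stride, threshold, and file name below are illustrative.

```python
import numpy as np
import torch
from PIL import Image

def sliding_window_predict(model, scene_path: str, stride: int = 10):
    """Score every 20x20 window of a full scene with a trained patch
    classifier; returns (row, col, plane_probability) tuples."""
    scene = np.asarray(Image.open(scene_path).convert("RGB"))
    h, w, _ = scene.shape
    model.eval()
    detections = []
    with torch.no_grad():
        for r in range(0, h - 20 + 1, stride):
            for c in range(0, w - 20 + 1, stride):
                patch = np.ascontiguousarray(scene[r:r + 20, c:c + 20])
                x = torch.from_numpy(patch).permute(2, 0, 1).float() / 255.0
                prob = torch.softmax(model(x.unsqueeze(0)), dim=1)[0, 1].item()
                if prob > 0.9:                     # arbitrary confidence threshold
                    detections.append((r, c, prob))
    return detections

# Hypothetical file name; replace with one of the provided scenes.
# hits = sliding_window_predict(model, "scene_1.png")
```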
Frequently Asked Questions
Does this dataset contain geographic coordinates?
Yes, each image is geotagged with a longitude/latitude pair, which makes geospatial intersections with other data layers possible.
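Assuming the locations metadata of the public release (one [longitude, latitude] pair per patch in planesnet.json), a simple bounding-box filter could look like this; the coordinates below are illustrative, not part of the dataset.

```python
import numpy as np

# Assumed metadata layout: planesnet["locations"] holds one [longitude,
# latitude] pair per image, as in the public release.
locations = np.array(planesnet["locations"])   # shape (N, 2)

# Keep only patches inside a rough bounding box (illustrative coordinates).
lon, lat = locations[:, 0], locations[:, 1]
mask = (lon > -123.0) & (lon < -121.5) & (lat > 37.0) & (lat < 38.5)
X_box, y_box = X[mask], y[mask]
print(mask.sum(), "patches inside the bounding box")
```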
Are the full scenes included in the archive?
Yes, four entire satellite scenes are provided to test the models on real images.
Can a segmentation model be trained with this dataset?
Not directly: the labels are patch-level only. However, masks can be generated from the full scenes included in the archive (see the sketch below).
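One coarse approach, building on the sliding-window sketch above, is to mark each confidently detected 20×20 window in a binary mask; this yields weak, block-level masks rather than pixel-accurate segmentations.

```python
import numpy as np

def detections_to_mask(detections, scene_shape):
    """Turn (row, col, prob) sliding-window hits into a coarse binary mask
    by marking each detected 20x20 window; block-level, not pixel-accurate."""
    mask = np.zeros(scene_shape[:2], dtype=np.uint8)
    for r, c, _ in detections:
        mask[r:r + 20, c:c + 20] = 1
    return mask

# Example, reusing the hypothetical "hits" from the sliding-window sketch:
# mask = detections_to_mask(hits, scene_shape=(3000, 3000, 3))
```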