A comprehensive YOLOv8-based waste classification system that automatically identifies and categorizes waste materials into 5 main categories: Glass, Organic, Others, Packaged, and Plastic.
- Overview
- Model Categories
- Project Structure
- Quick Start
- Training
- Inference
- Web Application
- Category Management
- Troubleshooting
- Contributing
This project implements a state-of-the-art waste classification system using YOLOv8 (You Only Look Once version 8) for real-time object detection and classification. The system is designed to help improve waste management practices by automatically sorting waste materials.
- ✅ Real-time Detection: Fast and accurate waste classification
- ✅ 5 Waste Categories: Glass, Organic, Others, Packaged, Plastic
- ✅ Web Interface: User-friendly Streamlit application
- ✅ Category Mapping: Intelligent mapping from model categories to desired categories
- ✅ Multiple Input Sources: Images, webcam, and video files
- ✅ Confidence Control: Adjustable detection sensitivity
- ✅ Detailed Results: Category counts and confidence scores
The model classifies waste into the following categories:
| Category | Description | Examples |
|---|---|---|
| Glass | Glass containers and bottles | Wine bottles, jars, glass containers |
| Organic | Biodegradable waste | Food scraps, leaves, organic matter |
| Others | Miscellaneous waste | Metal, ceramics, non-recyclable items |
| Packaged | Packaging materials | Cardboard, paper, packaging |
| Plastic | Plastic containers and items | Plastic bottles, bags, containers |
The model uses intelligent category mapping to convert the original training categories into the five output categories:

```
BIODEGRADABLE → Organic
CARDBOARD     → Packaged
GLASS         → Glass
METAL         → Others
PAPER         → Packaged
PLASTIC       → Plastic
```
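The mapping above amounts to a simple lookup table. The sketch below is illustrative only (the real logic lives in `category_mapper.py`); the `map_category` helper and the `"Others"` fallback are assumptions, not the project's actual implementation:

```python
# Hypothetical sketch of the model-to-output category mapping;
# see category_mapper.py for the real implementation.
CATEGORY_MAP = {
    "BIODEGRADABLE": "Organic",
    "CARDBOARD": "Packaged",
    "GLASS": "Glass",
    "METAL": "Others",
    "PAPER": "Packaged",
    "PLASTIC": "Plastic",
}

def map_category(model_label: str) -> str:
    """Translate a raw model label into one of the 5 output categories."""
    return CATEGORY_MAP.get(model_label.upper(), "Others")
```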
```
Train/
├── Waste-Classification-using-YOLOv8/
│   ├── dataset/                              # Training and validation data
│   │   ├── train/images/                     # Training images
│   │   ├── train/labels/                     # Training labels
│   │   ├── val/images/                       # Validation images
│   │   └── val/labels/                       # Validation labels
│   ├── streamlit-detection-tracking - app/   # Web application
│   │   ├── app.py                            # Main Streamlit app
│   │   ├── helper.py                         # Helper functions
│   │   ├── settings.py                       # App configuration
│   │   ├── category_mapper.py                # Category mapping utility
│   │   └── weights/
│   │       └── best.pt                       # Trained model weights
│   ├── data.yaml                             # Dataset configuration
│   ├── category_mapper.py                    # Category mapping utility
│   ├── verify_model_categories.py            # Model verification script
│   ├── test_inference.py                     # Inference testing script
│   ├── train_new_model.py                    # Model training script
│   ├── quick_start.py                        # Quick start script
│   └── requirements.txt                      # Python dependencies
└── README.md                                 # This file
```
- Python 3.8 or higher
- CUDA-compatible GPU (recommended for training)
- 8GB+ RAM
- Clone the repository (if not already done):

  ```bash
  git clone <repository-url>
  cd WasteNet/Train
  ```

- Navigate to the project directory:

  ```bash
  cd Waste-Classification-using-YOLOv8
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Verify the model:

  ```bash
  python verify_model_categories.py
  ```

- Launch the web application:

  ```bash
  python quick_start.py
  ```

Test the model on a sample image:

```bash
python test_inference.py --model "streamlit-detection-tracking - app/weights/best.pt" --image "dataset/train/images/0.jpg"
```

The dataset should be organized in YOLO format:
- Images in `dataset/train/images/` and `dataset/val/images/`
- Labels in `dataset/train/labels/` and `dataset/val/labels/`
- Each label file should contain bounding box annotations in YOLO format
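For reference, each line of a YOLO-format label file encodes one object as `class_id x_center y_center width height`, with coordinates normalized to [0, 1]. A minimal parser sketch (the field names are illustrative):

```python
def parse_yolo_label(line: str) -> dict:
    """Parse one YOLO annotation line:
    'class_id x_center y_center width height' (coordinates normalized 0-1)."""
    class_id, x, y, w, h = line.split()
    return {
        "class_id": int(class_id),
        "x_center": float(x),
        "y_center": float(y),
        "width": float(w),
        "height": float(h),
    }

# Example line: one object of class index 4, centered in the image
print(parse_yolo_label("4 0.5 0.5 0.25 0.3"))
```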
- Prepare your dataset (if using custom data):

  ```bash
  python prepare_dataset.py
  ```

- Start training:

  ```bash
  python train_new_model.py
  ```

- Monitor training progress:
  - Check the `runs/detect/` directory for training logs
  - View training plots and metrics
  - The best model will be saved as `runs/detect/waste_classification_new/weights/best.pt`
Key training parameters (configurable in `train_new_model.py`):
- Epochs: 50 (default)
- Batch Size: 16 (GPU) / 8 (CPU)
- Image Size: 640x640
- Learning Rate: 0.01
- Device: CUDA (if available) / CPU
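The parameters above translate into a training call roughly like the sketch below. The `training_args` helper is hypothetical (check `train_new_model.py` for the real values); the keyword names match the `ultralytics` training API:

```python
# Sketch of how train_new_model.py might assemble its settings
# (illustrative only; the actual script may differ).
def training_args(cuda_available: bool) -> dict:
    return {
        "data": "data.yaml",
        "epochs": 50,
        "imgsz": 640,
        "lr0": 0.01,
        "batch": 16 if cuda_available else 8,  # smaller batch on CPU
        "device": 0 if cuda_available else "cpu",
    }

# With ultralytics installed, training would then look like:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")
#   model.train(**training_args(cuda_available=True))
print(training_args(cuda_available=False))
```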
```bash
# Basic inference
python test_inference.py --model "path/to/model.pt" --image "path/to/image.jpg"

# With a custom confidence threshold
python test_inference.py --model "path/to/model.pt" --image "path/to/image.jpg" --confidence 0.3

# Without saving the result image
python test_inference.py --model "path/to/model.pt" --image "path/to/image.jpg" --no-save
```

```python
from ultralytics import YOLO
from category_mapper import WasteCategoryMapper

# Load the model
model = YOLO("path/to/model.pt")

# Load the category mapper
mapper = WasteCategoryMapper()

# Run inference
results = model.predict("image.jpg", conf=0.25)

# Map raw model classes to the desired categories
mapped_results = mapper.map_prediction(model, results[0])

# Process results
for item in mapped_results:
    print(f"Category: {item['mapped_class']}, Confidence: {item['confidence']:.3f}")
```

```bash
cd "streamlit-detection-tracking - app"
streamlit run app.py
```

The app will be available at http://localhost:8501.
- Image Upload: Upload images for waste classification
- Webcam Support: Real-time classification using your webcam
- Confidence Control: Adjust detection sensitivity
- Category Mapping: See how model categories map to your desired categories
- Detailed Results: View category counts and confidence scores
- Main Page: Shows supported waste categories
- Sidebar: Model configuration and input source selection
- Results Panel: Detailed detection results with category mapping
- Mapping Info: Explanation of category conversions
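Under the hood, the results panel boils down to counting detections per mapped category. A minimal sketch, assuming each detection is a dict with a `mapped_class` key (a hypothetical shape, not the app's exact data structure):

```python
from collections import Counter

def summarize_detections(detections: list) -> dict:
    """Count detections per mapped category for a results panel.
    Assumes each detection dict carries a 'mapped_class' key."""
    return dict(Counter(d["mapped_class"] for d in detections))

detections = [
    {"mapped_class": "Plastic", "confidence": 0.91},
    {"mapped_class": "Plastic", "confidence": 0.62},
    {"mapped_class": "Glass", "confidence": 0.78},
]
print(summarize_detections(detections))  # {'Plastic': 2, 'Glass': 1}
```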
```bash
python category_manager.py
python category_mapper.py
```

If you need to modify categories:
- Edit `data.yaml`:

  ```yaml
  names:
    - Glass
    - Organic
    - Others
    - Packaged
    - Plastic
  nc: 5
  ```

- Retrain the model (if needed):

  ```bash
  python train_new_model.py
  ```
- Model not found:
  - Check that `best.pt` exists in the weights directory
  - Verify the file path in `settings.py`

- CUDA out of memory:
  - Reduce the batch size in the training parameters
  - Use CPU instead of GPU

- Category mismatch:
  - Run `python verify_model_categories.py` to check categories
  - Use the category mapping solution

- Poor detection accuracy:
  - Adjust the confidence threshold
  - Ensure good image quality
  - Consider retraining with more data
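Raising the confidence threshold is the quickest fix, since it simply filters out low-confidence detections. A sketch of that filtering, assuming detections are dicts with a `confidence` key (a hypothetical shape for illustration):

```python
def filter_by_confidence(detections, threshold=0.25):
    """Drop detections below the confidence threshold.
    Higher thresholds trade recall for precision."""
    return [d for d in detections if d["confidence"] >= threshold]

dets = [
    {"label": "Plastic", "confidence": 0.9},
    {"label": "Glass", "confidence": 0.2},
]
print(filter_by_confidence(dets, threshold=0.3))
```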
- Check the console output for error messages
- Verify all dependencies are installed
- Ensure image paths are correct
- Review the category mapping if results seem incorrect
- mAP50: ~0.85 (on validation set)
- Inference Speed: ~50ms per image (GPU)
- Model Size: ~50MB
- Input Size: 640x640 pixels
- Minimum: 4GB RAM, CPU-only
- Recommended: 8GB RAM, CUDA-compatible GPU
- Optimal: 16GB RAM, RTX 3060 or better
- Support for more waste categories
- Mobile app development
- Real-time video processing
- Integration with IoT devices
- Multi-language support
- Batch processing capabilities
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Ultralytics for YOLOv8
- Streamlit for the web interface
- OpenCV for image processing
- The waste classification research community
Ready to classify waste? Start with the Quick Start guide above! 🚀