Introduction
The AI-Based Waste Classifier is a web application that helps users correctly identify and recycle waste items. Using a convolutional neural network (described under Machine Learning Implementation below), the system classifies waste into seven categories: e-waste, glass, metal, organic, paper, plastic, and trash.
Note: This documentation provides technical guidance for users and developers interacting with the AI Waste Classifier system.
Key Features
- Real-time Classification: Upload images of waste items for instant classification
- Image Processing Tools: Enhance your images for better results
- Recycling Instructions: Get specific guidance for each waste category
- User Feedback System: Help improve the system through feedback
- Performance Analytics: View classification statistics
Getting Started
Prerequisites
Before setting up the AI Waste Classifier, ensure you have the following installed:
For Frontend:
- Node.js (v18.0.0 or higher)
- npm (v6.0.0 or higher)
For Backend:
- Python (v3.10 or higher)
- pip (latest version)
- MongoDB account (for database)
- Cloudinary account (for image storage)
Installation
Frontend Setup
Clone the repository and install dependencies:
git clone https://github.com/Israr-11/Frontend-AI-waste-classifier.git frontend_ai_based_waste_classifier
cd frontend_ai_based_waste_classifier
npm install
Backend Setup
Clone the repository and install dependencies:
git clone https://github.com/Israr-11/Backend-AI-Waste-Classifier.git backend_ai_based_waste_classifier
cd backend_ai_based_waste_classifier
pip install -r requirements.txt
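Optionally (not required by the repository), create and activate a virtual environment before installing dependencies:
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate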
Configuration
Frontend Configuration
Create a .env file in the frontend root directory with the following:
REACT_APP_API_URL=http://localhost:8000
Backend Configuration
Create a .env file in the backend root directory with the following:
MONGO_URI=your_mongodb_connection_string
DB_NAME=waste_classifier
COLLECTION_FOR_PREDICTION=predictions
COLLECTION_FOR_FEEDBACK=feedback
COLLECTION_FOR_INSTRUCTIONS=recycling_instructions
CLOUDINARY_CLOUD_NAME=your_cloud_name
CLOUDINARY_API_KEY=your_api_key
CLOUDINARY_API_SECRET=your_api_secret
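How these variables are consumed is defined in utils/database.py and utils/uploadImage.py; the following is only a minimal sketch of loading them with python-dotenv, PyMongo, and the Cloudinary SDK (variable names match the .env above):
import os
import cloudinary
from dotenv import load_dotenv
from pymongo import MongoClient

load_dotenv()  # read the .env file in the backend root

client = MongoClient(os.environ["MONGO_URI"])
db = client[os.environ["DB_NAME"]]
predictions = db[os.environ["COLLECTION_FOR_PREDICTION"]]

cloudinary.config(
    cloud_name=os.environ["CLOUDINARY_CLOUD_NAME"],
    api_key=os.environ["CLOUDINARY_API_KEY"],
    api_secret=os.environ["CLOUDINARY_API_SECRET"],
)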
Running the Application
Frontend
Start the React development server:
npm run dev
The frontend will be available at http://localhost:3000.
Backend
Start the FastAPI server:
cd backend_ai_based_waste_classifier
uvicorn app:app --reload
The backend API will be available at http://localhost:8000.
Frontend Implementation
Architecture
The frontend is built using React.js with a focus on responsive design and user experience. The application follows a component-based architecture for modularity and reusability.
- Framework: React.js
- Styling: CSS with Tailwind CSS
- State Management: React Hooks
- Routing: React Router
- Image Processing: React-Cropper, custom filters
- Charts: Chart.js with React-Chartjs-2
Key Components
Core Components
- App.js - Main application component with routing
- Navbar.jsx - Navigation header with responsive menu
- Footer.jsx - Application footer with links
Feature Components
- Process.jsx - Image upload and processing interface
- ImageQualityCheck.jsx - Image quality assessment utilities
- FeedBackForm.jsx - User feedback collection form
- Statistics.jsx - Performance metrics and analytics display
UI Components
- Hero.jsx - Landing page hero section
- Overview.jsx - System overview explanation
- Flow.jsx - Visualization of the classification workflow
- WhyThis.jsx - Benefits and motivation section
User Flow
The frontend implements a streamlined user flow:
- Home Page - Introduction to the application
- Process Page - Image upload and processing
  - Upload image or take photo
  - Adjust image with filters (brightness, contrast, grayscale, sepia)
  - Crop and prepare image
  - Submit for classification
- Results Display - Shows classification results
  - Waste category
  - Confidence level
  - Recycling instructions
  - Feedback form
- Statistics Page - Performance analytics and metrics
Backend Implementation
Architecture
The backend is built with FastAPI, a modern Python framework for building APIs. It follows a structured architecture with controllers, services, and models.
- Framework: FastAPI
- ML Integration: TensorFlow
- Image Processing: OpenCV, PIL
- Database: MongoDB with PyMongo
- Cloud Storage: Cloudinary
- Authentication: JWT (planned)
Directory Structure
backend_ai_based_waste_classifier/
├── app.py                      # Main application entry point
├── controllers/                # API route handlers
│   ├── imageProcessingController.py
│   ├── feedbackController.py
│   └── statsController.py
├── services/                   # Business logic
│   ├── imageProcessingService.py
│   ├── mlModelService.py
│   ├── feedbackService.py
│   ├── image_quality_service.py
│   └── statsService.py
├── models/                     # Data models
│   ├── prediction.py
│   └── feedback.py
├── utils/                      # Utility functions
│   ├── database.py
│   ├── uploadImage.py
│   └── populate_recycling_instructions.py
└── machineLearning/            # ML models and training
    └── waste_classification_model_v2.keras
API Endpoints
Image Processing
| Endpoint | Method | Description |
|---|---|---|
| /upload_image | POST | Upload and classify waste image |
Feedback
| Endpoint | Method | Description |
|---|---|---|
| /feedback | POST | Submit user feedback on classification |
Statistics
| Endpoint | Method | Description |
|---|---|---|
| /prediction-stats | GET | Get classification statistics with optional filters |
Request and Response Examples
Image Upload Request:
POST /upload_image
Content-Type: multipart/form-data
form-data:
image: [binary file data]
Image Upload Response:
{
  "image_hash": "a1b2c3d4e5f6g7h8i9j0",
  "category": "plastic",
  "confidence": 0.95,
  "image_url": "https://res.cloudinary.com/example/image/upload/v1234567890/uploads/example.jpg",
  "recycling_instructions": {
    "title": "Plastic Recycling",
    "description": "Clean and separate plastic items before recycling.",
    "steps": [
      "Remove labels and caps",
      "Rinse container",
      "Check for recycling code",
      "Place in appropriate bin"
    ],
    "additional_info": "Not all plastics are recyclable. Check with your local recycling center."
  }
}
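For quick testing outside the frontend, the endpoint can be exercised with a short Python script (a sketch using the requests library; the file path is illustrative):
import requests

API_URL = "http://localhost:8000"

# Field name "image" matches the multipart request example above
with open("sample_bottle.jpg", "rb") as f:
    response = requests.post(f"{API_URL}/upload_image", files={"image": f})
response.raise_for_status()

result = response.json()
print(result["category"], result["confidence"])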
Database Structure
The application uses MongoDB Atlas with three main collections:
Predictions Collection
{
  "_id": ObjectId("60d21b4667d0d8992e610c85"),
  "image_hash": "a1b2c3d4e5f6g7h8i9j0",
  "original_prediction": "plastic",
  "correct_prediction": "plastic",
  "category": "plastic",
  "entryTime": ISODate("2023-08-24T14:25:30.123Z")
}
Feedback Collection
{
  "_id": ObjectId("60d21b4667d0d8992e610c86"),
  "image_hash": "a1b2c3d4e5f6g7h8i9j0",
  "is_correct": false,
  "correct_category": "metal",
  "tried_techniques": [
    "Adjust brightness",
    "Use a plain background when possible"
  ],
  "entryTime": ISODate("2023-08-24T14:28:10.456Z")
}
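The feedback endpoint can be exercised the same way; the payload below merely mirrors the collection document above, while the authoritative request schema lives in models/feedback.py:
import requests

# Hypothetical payload shaped after the Feedback collection document
payload = {
    "image_hash": "a1b2c3d4e5f6g7h8i9j0",
    "is_correct": False,
    "correct_category": "metal",
    "tried_techniques": ["Adjust brightness"],
}
response = requests.post("http://localhost:8000/feedback", json=payload)
print(response.status_code)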
Recycling Instructions Collection
{
  "_id": ObjectId("60d21b4667d0d8992e610c87"),
  "category": "plastic",
  "title": "Plastic Recycling",
  "description": "Clean and separate plastic items before recycling.",
  "steps": [
    "Remove labels and caps",
    "Rinse container",
    "Check for recycling code",
    "Place in appropriate bin"
  ],
  "additional_info": "Not all plastics are recyclable. Check with your local recycling center."
}
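Looking up instructions for a predicted category is a single document query; a sketch with PyMongo (db comes from a MongoClient as in the configuration sketch earlier; the collection name matches COLLECTION_FOR_INSTRUCTIONS):
def get_recycling_instructions(db, category: str) -> dict | None:
    # Drop the ObjectId so the result is directly JSON-serializable
    return db["recycling_instructions"].find_one({"category": category}, {"_id": 0})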
Machine Learning Implementation
Model Architecture
The waste classification system uses a Convolutional Neural Network (CNN) optimized specifically for waste identification.
Model Specifications
- Input: 128×128 color images (RGB)
- Output: 7 waste categories (e-waste, glass, metal, organic, paper, plastic, trash)
- Architecture: Sequential CNN with convolutional, pooling, and dense layers
Model Code
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(7, activation='softmax')
])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
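For inference, the saved model is loaded and fed preprocessed images; a minimal sketch (the path matches the directory structure above, the class order is assumed to be alphabetical, and any rescaling applied during training must be mirrored here):
import numpy as np
import tensorflow as tf

CATEGORIES = ["e-waste", "glass", "metal", "organic", "paper", "plastic", "trash"]

model = tf.keras.models.load_model("machineLearning/waste_classification_model_v2.keras")

def classify(image_path: str) -> tuple[str, float]:
    # Resize to the 128x128 RGB input the network expects
    img = tf.keras.utils.load_img(image_path, target_size=(128, 128))
    batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
    probs = model.predict(batch)[0]  # softmax over the 7 categories
    idx = int(np.argmax(probs))
    return CATEGORIES[idx], float(probs[idx])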
Training Process
The model underwent multiple training iterations to achieve optimal performance.
Version 1
- Dataset: 2,527 images
- Training/Test Split: 80/20 (2,022 training, 505 test)
- Batch Size: 32
- Epochs: 9
- Accuracy: 80%
Version 2 (Current)
- Dataset: 20,534 images (8× increase)
- Training/Test Split: 80/20
- Batch Size: 32
- Epochs: 15
- Accuracy: 94.22%
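A training run with these settings could be reproduced along the following lines (a sketch assuming the images are organized into one folder per category under dataset/; the exact pipeline is in the training repositories linked below):
import tensorflow as tf

# 80/20 split, batch size 32, per the Version 2 configuration; integer labels
# from image_dataset_from_directory match sparse_categorical_crossentropy
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", validation_split=0.2, subset="training",
    seed=42, image_size=(128, 128), batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", validation_split=0.2, subset="validation",
    seed=42, image_size=(128, 128), batch_size=32,
)

# model is the compiled network from the Model Code section
history = model.fit(train_ds, validation_data=val_ds, epochs=15)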
Training Code Access
The complete training code is available in the following repositories:
- V1 Model Training - Initial training with smaller dataset
- V2 Model Training - Improved training with expanded dataset
- Full Factorial Analysis - Comprehensive model configuration testing
Optimization Techniques
Hyperparameter Tuning
The model was optimized through systematic testing of various parameters:
- Batch Size Optimization: Tested sizes from 16 to 64 to find optimal training efficiency
- Epoch Tuning: Increased from 9 to 15 epochs for better pattern learning
- Network Architecture: Adjusted CNN layers for optimal feature extraction
Data Augmentation
Enhanced the training dataset through transformations:
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.1),
])
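One common way to apply these transformations during training only (not necessarily how the training repositories wire it up) is to map them over the training dataset:
# Augment training batches on the fly; validation data stays untouched
train_ds = train_ds.map(
    lambda images, labels: (data_augmentation(images, training=True), labels)
)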
Full Factorial Analysis
Conducted systematic testing of 100+ model configurations:
| Parameter | Values Tested | Optimal Value |
|---|---|---|
| Filter Sizes | 3×3, 5×5, 7×7 | 3×3 |
| Network Depths | 2, 3, 4, 5 layers | 4 layers |
| Architectures | Simple CNN, VGG, MobileNet, ResNet | Simple CNN |
| Regularization | Dropout, BatchNorm, L2, None | BatchNorm |
The optimal configuration achieved 94.88% accuracy.
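A sweep like this reduces to a grid over the tabulated parameters; a sketch of the bookkeeping, where build_and_train is a hypothetical helper standing in for the repository's training loop:
import itertools

filter_sizes = [3, 5, 7]
depths = [2, 3, 4, 5]
regularizers = ["dropout", "batchnorm", "l2", None]

results = {}
for fs, depth, reg in itertools.product(filter_sizes, depths, regularizers):
    # build_and_train (hypothetical) builds a CNN with these settings,
    # trains it, and returns validation accuracy
    results[(fs, depth, reg)] = build_and_train(fs, depth, reg)

best = max(results, key=results.get)
print("Best configuration:", best, "accuracy:", results[best])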
Image Processing Pipeline
The system implements a multi-layer image processing approach to ensure optimal classification results.
Frontend Processing
- Resolution Validation: Ensures adequate image detail
- Blur Detection: Identifies and warns about blurry images
- User-controlled Cropping: Allows focus on the waste item
- Format Verification: Checks image format compatibility
Backend Processing
- Clarity Assessment: Measures image sharpness
- Aspect Ratio Normalization: Standardizes image proportions
- Size Standardization: Resizes to 128×128 pixels
- Quality Verification: Ensures adequate processing quality
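For the clarity assessment step above, one widely used measure is the variance of the Laplacian; a sketch (the threshold is illustrative and would need tuning against real uploads):
import cv2

def is_sharp_enough(image_bgr, threshold: float = 100.0) -> bool:
    # A low variance of the Laplacian means few edges, i.e. a blurry image
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold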
ML-specific Processing (OpenCV)
import cv2

def enhance_image(image):
    # Smooth noise with a normalized 5x5 box filter
    image_cv = cv2.boxFilter(image, -1, (5, 5), normalize=True)
    # Convert to grayscale
    image_gray = cv2.cvtColor(image_cv, cv2.COLOR_BGR2GRAY)
    # Apply histogram equalization to improve contrast
    image_eq = cv2.equalizeHist(image_gray)
    # Enhance sharpness with an unsharp mask
    gaussian = cv2.GaussianBlur(image_eq, (0, 0), 3)
    image_sharp = cv2.addWeighted(image_eq, 1.5, gaussian, -0.5, 0)
    return image_sharp
Performance Metrics
Model Accuracy
| Model | Architecture | Accuracy |
|---|---|---|
| SpotGarbage | AlexNet | 63.20% |
| TrashNet | VGG | 75.25% |
| WasteNet | DenseNet121 | 81.50% |
| DeepWaste | MobileNetV2 | 91.12% |
| Our Model | Custom CNN | 94.22% |
Processing Performance
- API Response Time: 10.15 seconds (71% improvement from initial version)
- Image Quality Improvement: +27% classification accuracy with preprocessing
- Feedback Integration: +10-15% accuracy improvement from user corrections
Category-specific Performance
| Waste Category | Precision | Recall | F1-Score |
|---|---|---|---|
| E-waste | 0.95 | 0.92 | 0.93 |
| Glass | 0.94 | 0.96 | 0.95 |
| Metal | 0.93 | 0.95 | 0.94 |
| Organic | 0.96 | 0.97 | 0.96 |
| Paper | 0.94 | 0.92 | 0.93 |
| Plastic | 0.93 | 0.91 | 0.92 |
| Trash | 0.91 | 0.89 | 0.90 |
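Per-category tables like this are typically produced with scikit-learn; a sketch that reuses model, val_ds, and CATEGORIES from the sketches above, collecting predictions batch by batch so labels and outputs stay aligned:
import numpy as np
from sklearn.metrics import classification_report

y_true, y_pred = [], []
for images, labels in val_ds:
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(model.predict(images, verbose=0), axis=1))

print(classification_report(y_true, y_pred, target_names=CATEGORIES))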
Usage Guide
Image Upload
- Navigate to the Process page
- Select Upload Image or Take Photo
- Wait for the image to load in the editor
Best Practices for Image Capture
- Ensure good lighting conditions
- Center the waste item in the frame
- Use a plain background when possible
- Hold your device steady to avoid blur
- Clean the item before photographing
- Remove packaging or labels if possible
Image Processing and Classification
- Use the cropping tool to focus on the waste item
- Adjust image properties as needed:
  - Brightness: slider from 0-200%
  - Contrast: slider from 0-200%
  - Grayscale: toggle on/off
  - Sepia: toggle on/off
- Click Process Image to submit for classification
- Wait for the system to process and classify the image
- View the classification results and recycling instructions
Note: Processing may take up to 15 seconds depending on server load and image complexity.
Providing Feedback
After receiving classification results, you can provide feedback to help improve the system:
- Indicate whether the classification was correct (Yes or No)
- If incorrect, select the correct waste category from the dropdown
- Optionally, select techniques you've tried to improve the image quality:
  - Preprocessing techniques (brightness, contrast, etc.)
  - Photography techniques (lighting, background, etc.)
- Submit your feedback
Why Feedback Matters: Your feedback helps improve the model's accuracy. User feedback has already contributed to a 10-15% improvement in classification accuracy.
Troubleshooting
Common Issues and Solutions
Issue: Image fails to upload or process.
Solutions:
- Ensure image is in a supported format (JPG, PNG, WEBP)
- Check that image size is under 5MB
- Try a different browser or device
- Clear browser cache and cookies
Issue: Low confidence or incorrect classifications.
Solutions:
- Improve lighting conditions
- Use a contrasting background
- Focus on a single waste item
- Try adjusting image brightness and contrast
- Ensure the item is clean and clearly visible
Issue: Classification process times out or takes too long.
Solutions:
- Check your internet connection
- Reduce image resolution before uploading
- Try processing during off-peak hours
- Use the cropping tool to focus on just the waste item
Additional Resources
Source Code Repositories
- Frontend Repository (https://github.com/Israr-11/Frontend-AI-waste-classifier) - React application code
- Backend Repository (https://github.com/Israr-11/Backend-AI-Waste-Classifier) - FastAPI server code
- ML Model V1 - Initial model training code
- ML Model V2 - Improved model training code
- Full Factorial Analysis - Comprehensive model testing
Recycling Resources
- Local Recycling Guidelines - Official recycling information for your area
- E-waste Disposal Centers - Locations for proper electronic waste disposal
- Composting Guide - Best practices for organic waste management