Inspiring examples to guide your AI development journey
About These Sample Projects
These example projects demonstrate the range of possibilities when creating AI applications using beginner-friendly platforms. Each project includes detailed descriptions, implementation steps, training data strategies, and expected outcomes to help you understand what makes a successful AI project.
How to Use These Examples: Review these projects to understand project scope, training data requirements, and successful implementation strategies. You can use these as inspiration for your own unique projects or adapt the concepts to different contexts.
Project 1: Smart Recycling Sorter
Environmental AI for Waste Classification
Teachable Machine · Beginner · Image Classification
Project Overview
The Smart Recycling Sorter helps users properly categorize waste items into Plastic, Paper, Metal, and Glass. This AI addresses the real-world problem of recycling confusion, where people are often unsure which recycling bin to use. The AI analyzes images of common household items and provides immediate classification guidance.
4 Categories · 80+ Samples per Category · 87% Accuracy Achieved · 45 Minutes to Build
📸 Screenshot: Recycling Sorter Interface (shows the Teachable Machine interface with four categories and a webcam preview)
Training Data Strategy
Paper Category (82 samples): Newspapers, cardboard boxes, paper bags, magazines, envelopes, notebook paper - different textures, colors, crumpled and flat
Metal Category (78 samples): Aluminum cans, tin cans, metal lids, foil, aerosol cans - various sizes, clean and dirty, different angles
Glass Category (80 samples): Glass bottles, jars, drinking glasses - clear and colored glass, different sizes and shapes, various lighting
📸 Screenshot: Training Data Examples (grid showing diverse samples from each recycling category)
Implementation Steps
Navigate to teachablemachine.withgoogle.com and select "Image Project"
Create four classes: Plastic, Paper, Metal, Glass
Collect 75-85 webcam samples per class, ensuring variety in items, angles, and lighting
Train the model (takes approximately 3-5 minutes)
Test with held-out samples and items not in training set
Iterate: Add more samples to categories with lower accuracy
Export the model for use in a web application or mobile app (see the loading sketch below)
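Teachable Machine's export step produces either downloadable model files or a hosted model URL plus a small browser library. The following is a minimal TypeScript sketch of the web-application side, assuming the @teachablemachine/image library and a placeholder model URL rather than this project's actual exported link.

```typescript
import * as tmImage from '@teachablemachine/image';

// Placeholder: replace with the shareable URL from your own exported model.
const MODEL_BASE = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

async function runRecyclingSorter(): Promise<void> {
  // Load the trained model and its class metadata.
  const model = await tmImage.load(MODEL_BASE + 'model.json', MODEL_BASE + 'metadata.json');

  // Set up a 200x200 webcam feed (flipped horizontally for a mirror view).
  const webcam = new tmImage.Webcam(200, 200, true);
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  // Classify the current frame on every animation tick.
  const loop = async () => {
    webcam.update();
    // One {className, probability} entry per class (Plastic, Paper, Metal, Glass).
    const predictions = await model.predict(webcam.canvas);
    const best = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
    console.log(`${best.className}: ${(best.probability * 100).toFixed(1)}%`);
    window.requestAnimationFrame(loop);
  };
  window.requestAnimationFrame(loop);
}

runRecyclingSorter();
```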
Key Features
Real-time Classification: Instant feedback as items are shown to the camera
Confidence Scores: Shows percentage confidence for each category
Visual Feedback: Color-coded results (green for high confidence, yellow for medium, red for low; see the threshold sketch below)
Instructions Display: Shows proper disposal instructions for each category
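The color-coded feedback can reduce to a simple threshold check on the top prediction's probability. The cut-off values below are illustrative, not the ones used in the original project.

```typescript
// Illustrative confidence thresholds; tune them against your own model's behavior.
type FeedbackColor = 'green' | 'yellow' | 'red';

function confidenceColor(probability: number): FeedbackColor {
  if (probability >= 0.7) return 'green';  // high confidence: show disposal instructions
  if (probability >= 0.4) return 'yellow'; // medium confidence: suggest repositioning the item
  return 'red';                            // low confidence: ask for a clearer view
}
```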
📸 Screenshot: Live Classification Demo (AI correctly identifying a plastic water bottle with 94% confidence)
What Made This Project Successful
Clear, distinct categories that don't overlap significantly
Diverse training data including different lighting conditions and angles
Practical, real-world application that solves an actual problem
Multiple iteration cycles to improve accuracy from initial 72% to final 87%
Tips for Similar Projects
Avoid confusion by keeping categories visually distinct
Include "dirty" or "crumpled" versions of items, not just pristine examples
Test with items that weren't in your training data
Consider adding a fifth "Trash/Not Recyclable" category for completeness
Project 2: Facial Emotion Detector
Understanding Human Expressions Through AI
Teachable Machine · Intermediate · Image Classification
Project Overview
The Facial Emotion Detector recognizes four primary emotions: Happy, Sad, Surprised, and Neutral. This AI can be used in educational settings to help students understand emotional recognition, in games for emotion-based controls, or as an accessibility tool for individuals who struggle with reading facial expressions.
4 Emotions · 120+ Samples per Emotion · 82% Accuracy Achieved · 60 Minutes to Build
📸 Screenshot: Emotion Detector Training Interface (shows the four emotion categories with webcam samples)
Training Data Strategy
Happy (125 samples): Big smiles, small smiles, laughing, grinning - different people, angles, with/without glasses
Sad (120 samples): Frowning, downturned mouth, sad eyes, looking down - various intensities and expressions
Surprised (118 samples): Wide eyes, open mouth, raised eyebrows - genuine and exaggerated surprise
Neutral (122 samples): Relaxed face, no expression, calm demeanor - looking at camera and away
📸 Screenshot: Emotion Training Samples Grid (diverse examples showing variation in each emotion category)
Implementation Steps
Create new Image Project in Teachable Machine
Set up four classes for different emotions
Recruit multiple participants (if possible) for diverse facial features
Capture 100+ samples per emotion with varied lighting and angles
Include samples with accessories (glasses, hats, different hairstyles)
Train model and test with live webcam
Refine by adding samples where confusion occurs
Integrate with simple game or application
Key Features
Live Emotion Tracking: Real-time analysis of facial expressions
Emoji Overlay: Displays corresponding emoji for detected emotion
Confidence Meter: Visual bar graph showing confidence levels (see the sketch below)
📸 Screenshot: Interface detecting the "Happy" emotion with emoji overlay and confidence bars
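A minimal sketch of the emoji overlay and confidence meter, assuming the same kind of per-class prediction array that @teachablemachine/image returns. The emoji choices and bar rendering are placeholders.

```typescript
// Map each trained emotion class to an emoji (placeholder choices).
const EMOJI: Record<string, string> = {
  Happy: '😊',
  Sad: '😢',
  Surprised: '😮',
  Neutral: '😐',
};

// Given one {className, probability} entry per emotion, pick the top result
// and build the overlay text plus simple text bars for the confidence meter.
function renderEmotion(predictions: { className: string; probability: number }[]): string {
  const best = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
  const bars = predictions
    .map(p => `${p.className.padEnd(9)} ${'#'.repeat(Math.round(p.probability * 20))}`)
    .join('\n');
  return `${EMOJI[best.className] ?? '?'} ${best.className}\n${bars}`;
}
```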
What Made This Project Successful
Large, diverse training dataset with multiple participants
Variety in accessories, lighting, and angles increased generalization
Clear visual differences between emotion categories
Engaging presentation with emoji overlays and sound effects
Challenges and Solutions
Challenge: Initial confusion between Neutral and Sad. Solution: Added more exaggerated sad expressions and ensured neutral faces were truly relaxed.
Challenge: Poor performance with glasses. Solution: Collected additional samples specifically with various types of eyewear.
Challenge: Difficulty with side profiles. Solution: Added training samples from multiple angles, not just straight-on.
Project 3: Musical Instrument Sound Classifier
Audio Recognition for Music Education
Teachable Machine · Intermediate · Audio Classification
Project Overview
This AI identifies sounds from five different musical instruments: Guitar, Piano, Drums, Violin, and Human Voice. Perfect for music education, this tool helps students learn to distinguish between different instrument timbres and could be used in interactive music games or as an aid for young musicians.
5 Instruments · 75+ Samples per Instrument · 91% Accuracy Achieved · 50 Minutes to Build
📸 Screenshot: Audio Classification Interface (shows the five instrument categories with waveform visualizations)
Training Data Strategy
Guitar (78 samples): Acoustic strumming, electric riffs, fingerpicking, chords, single notes - different playing styles
Piano (82 samples): Single notes, chords, arpeggios, low and high registers - various dynamics and tempos
Drums (76 samples): Bass drum, snare, cymbals, tom-toms, full beats - different patterns and intensities
Violin (75 samples): Long notes, short notes, pizzicato, different bow pressures - high and low strings
Voice (80 samples): Singing different pitches, humming, vowel sounds - male and female voices, different ranges
📸 Screenshot: Audio Sample Collection (waveforms showing variety in each instrument category)
Implementation Steps
Select "Audio Project" in Teachable Machine
Create five classes for different instruments
Record live samples using microphone (or use pre-recorded audio)
Ensure quiet environment to minimize background noise
Include variety in pitch, dynamics, and playing techniques
Record 70+ samples per instrument category
Train model and test with live microphone input (see the listening sketch below)
Add confusing samples to improve accuracy
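Teachable Machine's audio models are built on the TensorFlow.js speech-commands recognizer, so the browser side of live testing might look like the sketch below. Note that Teachable Machine audio projects also include a required Background Noise class, which appears as one of the labels; the model URL and probability threshold here are placeholders.

```typescript
import * as speechCommands from '@tensorflow-models/speech-commands';

// Placeholder: replace with the shareable URL of your own exported audio model.
const MODEL_BASE = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

async function listenForInstruments(): Promise<void> {
  // 'BROWSER_FFT' matches the transfer-learning setup Teachable Machine uses for audio.
  const recognizer = speechCommands.create(
    'BROWSER_FFT',
    undefined,
    MODEL_BASE + 'model.json',
    MODEL_BASE + 'metadata.json',
  );
  await recognizer.ensureModelLoaded();

  // e.g. Background Noise, Guitar, Piano, Drums, Violin, Voice
  const labels = recognizer.wordLabels();

  await recognizer.listen(
    async result => {
      const scores = result.scores as Float32Array;
      // Report the most likely label for the current audio window.
      let best = 0;
      for (let i = 1; i < scores.length; i++) {
        if (scores[i] > scores[best]) best = i;
      }
      console.log(`${labels[best]}: ${(scores[best] * 100).toFixed(1)}%`);
    },
    {
      probabilityThreshold: 0.75, // illustrative threshold, not from the original project
      overlapFactor: 0.5,
    },
  );
}

listenForInstruments();
```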
Key Features
Real-time Audio Analysis: Identifies instruments as they play
Visual Waveform Display: Shows audio input with frequency spectrum
Project 4: AI Gesture Game Controller
Touchless Game Control Through Hand Gestures
Teachable Machine + Scratch · Pose Classification
Project Overview
This innovative AI controller recognizes five hand gestures to control a game: Thumbs Up (jump), Peace Sign (left), Fist (right), Open Palm (stop), and OK Sign (select). Integrated with Scratch, this creates a touchless gaming experience that demonstrates how AI can transform human movement into digital commands.
5 Gestures · 90+ Samples per Gesture · 88% Accuracy Achieved · 90 Minutes to Build
📸 Screenshot: Gesture Training Interface (shows five hand gesture categories with webcam samples)
Training Data Strategy
Thumbs Up (95 samples): Various hand angles, distances from camera, left and right hands
Peace Sign (92 samples): Different orientations, hand positions, finger spreads
Fist (88 samples): Tight and loose fists, different angles, hand rotations
Open Palm (94 samples): Fingers together and spread, facing camera and angled
OK Sign (91 samples): Different finger circle sizes, various hand orientations
📸 Screenshot: Gesture Sample Grid (diverse hand gesture examples from multiple angles and positions)
Implementation Steps
Create "Pose Project" in Teachable Machine
Set up five gesture classes
Record 85+ samples per gesture with variety in position, angle, distance
Include both left and right hands for better generalization
Train the model thoroughly
Test and iterate to improve gesture recognition
Export the model from Teachable Machine (upload it to get a shareable model link)
Import model into Scratch using ML extension
Create simple game in Scratch that responds to gestures
Program Scratch to trigger actions based on detected gestures (the underlying prediction loop is sketched below)
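Inside Scratch, the ML extension runs the prediction loop for you, but it can help to see what it is doing under the hood. Below is a hedged TypeScript sketch using the @teachablemachine/pose library with this project's gesture-to-action mapping; the model URL, confidence threshold, and action strings are placeholders for whatever your Scratch sprites actually respond to.

```typescript
import * as tmPose from '@teachablemachine/pose';

// Placeholder: replace with the shareable URL of your own exported pose model.
const MODEL_BASE = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/';

// Gesture-to-action mapping taken from the project description.
const ACTIONS: Record<string, string> = {
  'Thumbs Up': 'jump',
  'Peace Sign': 'move left',
  'Fist': 'move right',
  'Open Palm': 'stop',
  'OK Sign': 'special ability',
};

async function runGestureController(): Promise<void> {
  const model = await tmPose.load(MODEL_BASE + 'model.json', MODEL_BASE + 'metadata.json');

  const webcam = new tmPose.Webcam(200, 200, true);
  await webcam.setup();
  await webcam.play();

  const loop = async () => {
    webcam.update();
    // First estimate the pose keypoints, then classify them against the trained gestures.
    const { posenetOutput } = await model.estimatePose(webcam.canvas);
    const predictions = await model.predict(posenetOutput);
    const best = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
    if (best.probability > 0.8) { // illustrative threshold
      console.log(`Gesture: ${best.className} -> ${ACTIONS[best.className] ?? 'no action'}`);
    }
    window.requestAnimationFrame(loop);
  };
  window.requestAnimationFrame(loop);
}

runGestureController();
```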
Scratch Game Integration
The Teachable Machine model was integrated into a Scratch platform game where:
Thumbs Up: Makes the character jump
Peace Sign: Moves character left
Fist: Moves character right
Open Palm: Character stops and crouches
OK Sign: Activates special ability
📸 Screenshot: Scratch Game with Gesture Controls (shows the platform game responding to hand gestures in real time)
What Made This Project Successful
Distinct, easily distinguishable hand gestures
Large training dataset with variety in hand positions and angles
Successful integration between Teachable Machine and Scratch
Responsive game controls with minimal lag
Engaging, interactive demonstration of AI in gaming
Advanced Integration Tips
Start with simple game mechanics before adding complexity
Test gesture recognition in the actual lighting conditions where the game will be played
Add a calibration mode to adjust for different users and environments
Consider adding cooldown periods to prevent accidental multiple triggers (see the sketch below)
Include visual feedback showing which gesture is currently detected
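One way to implement the cooldown idea above is a small debounce wrapper around whatever fires the game action; the 800 ms window below is an arbitrary example value, not one from the original project.

```typescript
// Fire a game action only if enough time has passed since the last trigger,
// so a gesture held in front of the camera does not fire repeatedly.
function makeCooldownTrigger(cooldownMs = 800): (gesture: string, fire: (g: string) => void) => void {
  let lastFired = 0;
  let lastGesture = '';
  return (gesture, fire) => {
    const now = Date.now();
    if (gesture !== lastGesture || now - lastFired >= cooldownMs) {
      fire(gesture);
      lastFired = now;
      lastGesture = gesture;
    }
  };
}

// Usage: const trigger = makeCooldownTrigger();
// trigger('Thumbs Up', g => console.log(`action for ${g}`));
```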
Project 5: Plant Health Identifier
AI-Powered Garden Care Assistant
MIT App Inventor · Advanced · Image Classification
Project Overview
This mobile app helps gardeners identify plant health issues by analyzing leaf photos. The AI classifies plants as Healthy, Drought-Stressed, Disease-Infected, or Pest-Damaged, then provides specific care recommendations. This demonstrates how AI can be used in agriculture and home gardening to diagnose problems early.
4 Health Categories · 100+ Samples per Category · 85% Accuracy Achieved · 120 Minutes to Build
📸 Screenshot: MIT App Inventor Designer View (shows the app layout with camera, image display, and classification result components)
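MIT App Inventor builds this logic from visual blocks rather than typed code, but the underlying mapping from classification result to care advice is straightforward. A TypeScript sketch of that mapping, with placeholder recommendation text rather than the app's actual wording:

```typescript
// Placeholder care recommendations keyed by the model's health categories.
const CARE_ADVICE: Record<string, string> = {
  'Healthy': 'No action needed. Keep the current watering and light schedule.',
  'Drought-Stressed': 'Water deeply and check that the soil drains but retains some moisture.',
  'Disease-Infected': 'Isolate the plant, remove affected leaves, and consider a fungicide.',
  'Pest-Damaged': 'Inspect leaf undersides for insects and treat with insecticidal soap.',
};

// Given the classifier's top label and confidence, build the message shown in the app.
function adviceFor(label: string, confidence: number): string {
  const advice = CARE_ADVICE[label] ?? 'Unrecognized result. Retake the photo in better light.';
  return `${label} (${Math.round(confidence * 100)}% confident): ${advice}`;
}
```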
Training Data Strategy
Healthy (105 samples): Vibrant green leaves, no discoloration, various plant species, different leaf shapes
Drought-Stressed (98 samples): Wilted leaves, brown edges, drooping, curled leaves, various severities