Hand Commands: Training image classification models
Actua's AI Series - Activity 5
In this activity, participants will train an AI model to complete an image classification task. They will use Google’s Teachable Machine platform to classify images of hand gestures that can be used to control a computer program. Participants will create the datasets used to train the AI model and, in doing so, will learn how AI programs make sense of the information they are presented with.
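As a preview of where the activity ends up, here is a minimal sketch of how a model trained in Teachable Machine could be used outside the platform. It assumes the model was exported in Teachable Machine’s TensorFlow (Keras) format, which produces a model file and a labels file; the file names keras_model.h5, labels.txt, and hand_gesture.jpg below are placeholders for illustration.

```python
# Minimal sketch: classifying one image with a model exported from
# Teachable Machine in its TensorFlow (Keras) format. The file names
# (keras_model.h5, labels.txt, hand_gesture.jpg) are placeholders.
import numpy as np
from PIL import Image, ImageOps
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5")                  # exported model
class_names = open("labels.txt").read().splitlines()  # one label per line

# Teachable Machine image models expect 224x224 RGB input,
# with pixel values scaled to the range [-1, 1].
image = Image.open("hand_gesture.jpg").convert("RGB")
image = ImageOps.fit(image, (224, 224))
array = np.asarray(image, dtype=np.float32)
array = (array / 127.5) - 1.0
batch = np.expand_dims(array, axis=0)  # the model expects a batch dimension

prediction = model.predict(batch)[0]   # one confidence score per class
best = int(np.argmax(prediction))
print(f"Predicted gesture: {class_names[best]} "
      f"(confidence {prediction[best]:.2f})")
```

In the activity itself, participants train and test their models entirely inside the Teachable Machine interface; a sketch like this only becomes relevant if they later want to use the trained model in their own program.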
If you’re accessing this activity directly, note that there are eight other activities in this series available on our website. When completed in order, the activities follow a space exploration narrative. We recommend completing them in order, but each activity can also be done on its own.
If you are unfamiliar with any of the AI concepts or terminology introduced in these activities, please refer to our AI Glossary. For more information about artificial intelligence and how to incorporate it into your classroom, we suggest exploring our AI Handbook.
Here we go:
You and your group-mates are astronauts and scientists aboard the Actua Orbital Station. Unfortunately, your station just got bombarded by magnetic rays and your electronics have begun to shut down! The only one who can save you is the orbital station’s AI, DANN. DANN stands for Dedicated Actua Neural Network, and it’s gone a little loopy. Brush up on your technical skills, learn about AI, and save yourself and your crewmates!
DANN’s visual core is almost complete, thanks to your hard work in “Sort Things Out: Exploring Image Classification”. But we aren’t done yet! Before the magnetic storm, DANN was a state-of-the-art AI system that you could interact with using your voice. During the shutdown, however, DANN’s audio core was knocked offline, and it doesn’t look like we can access it without DANN’s help. Our mission specialists think you might be able to communicate through DANN’s visual core instead, but the visual core isn’t working either! We need to train an image classification model to recognize our hand shapes as commands, and then make sure it’s refined and ready to go in “What Machines See: Digging into Machine Vision”!