What Machines See: Digging into Machine Vision
Actua's AI Series - Activity 6
In this activity, participants will learn about machine vision by comparing and contrasting how humans and computers see. They will also explore how computers understand what they see, and apply different data collection strategies that help make machine vision models more accurate.
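Before diving in, it can help to see what "computer vision" means at the lowest level: a computer does not see shapes or objects, only a grid of numbers. The following is a minimal illustrative sketch (not part of the official activity materials), assuming Python with NumPy, that builds a tiny grayscale "image" and inspects it the way a machine would:

```python
# A minimal sketch of how a computer "sees" an image: as a grid of
# brightness numbers, not as shapes or objects. (Illustrative only;
# the 4x4 image below is invented for this example.)
import numpy as np

# A tiny 4x4 grayscale image (values 0-255; 0 = black, 255 = white).
# A real photo works the same way, just with millions of these numbers.
image = np.array([
    [  0,   0, 255, 255],
    [  0,   0, 255, 255],
    [255, 255,   0,   0],
    [255, 255,   0,   0],
], dtype=np.uint8)

print(image.shape)   # (4, 4): 4 rows by 4 columns of pixels
print(image[0, 2])   # 255: the brightness value of a single pixel

# To a machine vision model, "seeing" means finding patterns in these
# numbers -- which is why the data we collect shapes what it can recognize.
```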
If you’re accessing this activity directly, did you know there are eight other activities in this series on our website? When completed in order, the activities follow a space exploration narrative. We recommend completing them in order, but they can also be done on their own.
If you find yourself unfamiliar with any of the AI concepts and terminology introduced in these activities, please refer to our AI Glossary. For more information about Artificial Intelligence and how to incorporate it into your classroom, we suggest exploring our AI Handbook.
Here we go:
You and your group mates are astronauts and scientists aboard the Actua Orbital Station. Unfortunately, your station just got bombarded by magnetic rays and your electronics have begun to shut down! The only one who can save you is the orbital station’s AI, DANN. DANN stands for Dedicated Actua Neural Network, and it’s gone a little loopy. Brush up on your technical skills, learn about AI, and save yourself and your crewmates!
Our access to DANN’s audio core is almost complete. In “Hand Commands: Training Image Classification Models”, you trained a model that recognizes hand shapes so that you can give instructions to DANN to continue with repairs. Now we need to look deeper: how does that model work? How can it tell whether you’re making one hand shape or another? The hand command model we use to give DANN instructions isn’t working as well as it should. What can we do to fix it? Let’s find out so we can access DANN’s audio core and repair it in “Voice Activated AI: Training Audio Recognition Models”!