Sort Things Out: Exploring Image Classification

Actua's AI Series - Activity 4

In this activity, participants will experiment with applying machine vision to classification tasks. They will learn to identify classification schemes and classes. Participants will then explore and compare two pre-trained machine vision models: COCO-SSD and MobileNet. Participants will evaluate these models and test their suitability for a defined task. This activity lays the foundation for participants to train their own image classification model in a later activity in this series.
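For educators who would like a preview of how the two models differ before running the activity, the short sketch below loads the TensorFlow.js builds of COCO-SSD and MobileNet and runs both on the same image: COCO-SSD returns labelled bounding boxes for objects it detects, while MobileNet assigns class labels to the image as a whole. This is only an illustrative sketch, not part of the activity materials; the image element id ("photo") and the browser setup are assumptions, and the activity itself may use different tooling.

```typescript
// Illustrative sketch only: compares the TensorFlow.js builds of the two models.
// Assumes an <img id="photo"> element is already loaded on the page.
import * as cocoSsd from '@tensorflow-models/coco-ssd';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function compareModels(): Promise<void> {
  const img = document.getElementById('photo') as HTMLImageElement;

  // COCO-SSD is an object-detection model: it finds and labels objects
  // inside the image, each with a bounding box and a confidence score.
  const detector = await cocoSsd.load();
  const detections = await detector.detect(img);
  detections.forEach((d) =>
    console.log(`COCO-SSD: ${d.class} (${(d.score * 100).toFixed(1)}%) at`, d.bbox)
  );

  // MobileNet is an image-classification model: it labels the whole image
  // with its most likely classes and their probabilities.
  const classifier = await mobilenet.load();
  const labels = await classifier.classify(img);
  labels.forEach((l) =>
    console.log(`MobileNet: ${l.className} (${(l.probability * 100).toFixed(1)}%)`)
  );
}

compareModels();
```

Comparing the two kinds of output on the same photo is a quick way to see the difference participants will explore: detection answers "what is where in this image?", while classification answers "what is this image of?".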

If you’re accessing this activity directly, did you know there are eight other activities in this series on our website? When completed in order, the activities follow a space exploration narrative. We recommend doing them in order, but each one can also stand on its own.

If you find yourself unfamiliar with any of the AI concepts and terminology introduced in these activities, please refer to our AI Glossary. For more information about Artificial Intelligence and how to incorporate it into your classroom, we suggest exploring our AI Handbook.

Here we go:

You and your group-mates are astronauts and scientists aboard the Actua Orbital Station. Unfortunately, your station just got bombarded by magnetic rays and your electronics have begun to shut down! The only one who can save you is the station’s AI, DANN. DANN stands for Dedicated Actua Neural Network, and it’s gone a little loopy. Brush up on your technical skills, learn about AI, and save yourself and your crewmates! So far, we’ve restored DANN’s basic thinking abilities and its mathematical skills. Now we need to make sure we can communicate with it!

DANN has finished its diagnostic, and we have finished our study from “Regression Analysis: Making Predictions using Data”. Now, we can begin to fix DANN! When it was functioning, DANN was a state-of-the-art AI system that you could interact with using your voice. Because of the damage, however, DANN’s audio core was knocked offline and, unfortunately, you can’t access the audio core without DANN’s help. Our mission specialists think that you might be able to use DANN’s visual core to communicate, but the visual core isn’t currently set up to recognize our requests. They suggest training an image classification model to recognize our poses or hand shapes so that we can get access to the audio core. Once we do that, we can bring DANN’s other senses back online in “Hand Commands: Training Image Classification Models”!
