Teachable Machine & Critical AI Literacy: Teaching AI Bias

Tess Butler-Ulrich

Themes: Ethical challenges in using AI, How I’ve been using AI, Specific AI Tool(s)
Audience & Subject: Grades 4-6, Grades 7-8, Grades 9-12; Science & Technology, Social Studies, History & Geography, Ethics, AI Education

Introduction

AI is revolutionizing education. With advances in personalized feedback systems, generative AI, and other AI-powered learning tools, educators have noticed the tremendous impact (for better or for worse) that AI has and will continue to have on education (Cardona et al., 2023). As students interact with AI at home, with friends, and at school, there is no doubt that AI is becoming increasingly integrated into every facet of our lives. Despite this, there is currently a lack of educational resources that correct misinformation and support socially responsible AI use, especially in the younger grades of K-12 education. One such topic is AI bias. Though AI is ubiquitous, it does not benefit everyone equally. There have been many instances of algorithmic biases reinforcing systemic barriers related to gender (Dastin, 2018), race, and religion (European Union Agency for Fundamental Rights, 2022), deepening inequalities already present in society (Panch et al., 2019). Helping youth develop a balanced understanding of AI and the biases it can propagate is urgent, both to build equity in STEM and to advance a comprehensive AI literacy grounded in critical thinking. Teachable Machine is a tool that can introduce students to algorithms and algorithmic biases without requiring any coding competencies.

Description

Teachable Machine is a free, browser-based tool for building classifiers from images, audio, or body poses. Below is a simplified description of how to introduce Teachable Machine and build a classifier to engage in critical discussion and action around AI in your classroom.

  1. Ask students what they know about AI. You can facilitate a short conversation about students’ perspectives and knowledge about AI. Have they used AI before? What kinds of AI have they used? What kind of problems have they encountered? What kind of problems did AI help solve? How would they define AI?
  2. Ask students if artificial intelligence is fair. A common misconception is that AI is an objective computer program that produces unbiased, fair results. In reality, AI is built by humans, each of whom holds unique implicit biases that can be coded into AI applications. The result is algorithmic bias: outputs that are more accurate for some groups than for others. These skewed results can then lead to actions that disadvantage certain groups. Speak about real-world examples of AI bias.
  3. Open Teachable Machine. Navigate to the Teachable Machine website and click “Get Started.”
  4. Choose a model type. “Image Project” uses images uploaded from your PC or captured by webcam; “Audio Project” uses sounds uploaded from a PC or recorded by microphone (you must record a background-noise sample); “Pose Project” identifies body movements from PC files or the webcam.
  5. Demonstrate a biased model. To complete this demonstration, you must first build the biased model by following the steps below:
    • Presave several training images in two distinct categories (I use cats as Class 1, or C1, and dogs as Class 2, or C2). The items in C1 should be visually similar: for example, save eight to ten images of the same cat breed. For C2, save a varied set: eight to ten images of dogs of different breeds. Then set aside two to three test images per class that are not in your training set. For the cat category, save one image of the same breed used in training, but also save images of cats that look nothing like the training data.
    • Then, train your model by clicking the “Train Model” button. Do not navigate away from the tab while the model is training.
    • Once the model has been trained, toggle the input switch to “on” and click “File,” which opens a file-picker window. Upload a test image (only one can be tested at a time). Your classifier should be much more accurate for test images that visually match the training items in C1. For instance, if C1 contains eight to ten pictures of grey tabby cats, a test image of a grey tabby cat will score much higher than a picture of an orange tabby or a hairless cat. Then, upload an image that does not visually match the C1 data; the algorithm’s confidence will likely be much lower. You can repeat this with C2; however, because a wider variety of images is represented (e.g., many dog breeds), most test images will likely be classified accurately.
  6. Facilitate a discussion around algorithmic AI bias. Was the machine learning algorithm biased? Why might C2 have been more accurate for more types of images? Why might C1 only have been accurate for certain images? Were certain groups overrepresented? What might the real-world implications be if I used this algorithm for a function?
  7. Facilitate a discussion about how you might improve the algorithm. What could I do to improve the accuracy of both C1 and C2? How could I make C1 more accurate for a wider variety of images within the category? Do I need more images or fewer?
  8. Allow students to create their own classifier – Some ideas include an expression classifier, fruits and vegetables, ecosystems, hand signals, and arrows.
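The biased-model demonstration in steps 5–7 can be sketched in a few lines of Python. This is a hypothetical, simplified stand-in for Teachable Machine’s neural network: each “image” is reduced to a single made-up feature value, and a nearest-centroid rule plays the role of the classifier. The point it illustrates is the same: a class trained only on near-identical examples recognizes only lookalikes.

```python
# A minimal sketch with hypothetical feature values: each "image" is
# reduced to one number (imagine fur shade on a 0-10 scale), and a test
# item is assigned to the class whose training mean is closest.

def train_means(dataset):
    """Return the mean feature value for each class."""
    return {label: sum(values) / len(values) for label, values in dataset.items()}

def classify(means, x):
    """Nearest-centroid rule: pick the class whose mean is closest to x."""
    return min(means, key=lambda label: abs(means[label] - x))

# C1 ("cat"): eight near-identical grey tabbies clustered around 2.0.
# C2 ("dog"): eight varied breeds spread across 4.0-9.0.
training = {
    "cat": [1.8, 1.9, 2.0, 2.0, 2.1, 2.1, 2.2, 2.2],
    "dog": [4.0, 4.8, 5.5, 6.2, 7.0, 7.8, 8.5, 9.0],
}
means = train_means(training)

print(classify(means, 2.0))  # grey tabby like the training set -> "cat"
print(classify(means, 4.5))  # a cat unlike the training set -> misread as "dog"
print(classify(means, 8.0))  # an unusual dog breed -> still "dog"
```

Because the cat class is so narrow, any cat that falls outside its tight cluster lands closer to the diverse dog class and is misclassified, which mirrors what students observe in the webcam demonstration.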

General Guidelines

  • PC use: Teachable Machine works best on a PC. There are often problems with using cell phones and tablets.
  • Saving your model: You can save the project to your Google Drive to reuse later or to integrate into another project, export the trained model as a shareable link, or download it as a TensorFlow model.
  • Have a wide representation of data within each class: To create a more accurate and fair model, ensure you collect and upload a wide range of data.
  • Add classes: Click the “plus” sign below C2 to add a class. There is no limit to the number of classes you can add. Compare what happens with more classes or add a control group – does accuracy improve?
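A model downloaded in TensorFlow (Keras) format can be reused outside the browser. Below is a minimal sketch, assuming the default download name `keras_model.h5` and a local TensorFlow install; the `normalize` helper mirrors the scaling in Teachable Machine’s own exported sample code, which maps 8-bit pixel values into the [-1, 1] range its image models expect.

```python
def normalize(pixel):
    """Scale an 8-bit pixel value (0-255) into [-1, 1], matching the
    preprocessing in Teachable Machine's exported Keras sample code."""
    return (pixel / 127.5) - 1.0

def load_classifier(path="keras_model.h5"):
    """Load a model downloaded from Teachable Machine's "Export Model"
    panel. The import is local so the helper above still works where
    TensorFlow is not installed."""
    from tensorflow.keras.models import load_model  # requires tensorflow
    return load_model(path)
```

This is only a starting point; the exact preprocessing for your project (image size, audio clip length) should be taken from the sample code Teachable Machine generates when you export.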

Key Benefits

Key benefits of using Teachable Machine to illustrate how AI bias can be generated and “built in” to algorithms include:

  1. Supports applied AI knowledge and critical thinking;
  2. Provides an introduction to social justice and equity in AI;
  3. Demonstrates a “behind the scenes” approach to algorithms and algorithmic bias;
  4. Supports applied AI knowledge through hands-on algorithm development.

Possible Challenges

  1. Ensure students have permission to use a webcam if deciding to create a webcam-based image project. Although the images are not saved or stored on Teachable Machine or uploaded to any server, some parents may find using the webcam to be a privacy concern. Therefore, always seek parental or guardian consent before engaging in webcam-based activities (See Support Materials for privacy policy).
  2. If building an audio project, Teachable Machine can only recognize audio in 1-second clips, which may be limiting in some cases.
  3. The types of machine learning models are limited to three: image, audio, and body pose. If you would like to explore text-based models, consider Machine Learning for Kids at https://machinelearningforkids.co.uk/
  4. You may encounter technical difficulties if using Teachable Machine on a phone or tablet (webpage formatting or inability to access the website).

Support Materials

Further Resources

References

Cardona, M. A., Rodriguez, R. J., & Ishmael, K. (2023, May). Artificial intelligence and the future of teaching and learning: Insights and recommendations. U.S. Department of Education, Office of Educational Technology. https://www2.ed.gov/documents/ai-report/ai-report.pdf

Dastin, J. (2018, October 10). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK0AG/

European Union Agency for Fundamental Rights (2022, December 8). Bias in algorithms – Artificial intelligence and discrimination. Retrieved May 2024, from https://fra.europa.eu/en/publication/2022/bias-algorithm

Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: Implications for health systems. Journal of Global Health, 9(2). https://doi.org/10.7189/jogh.09.020318


About the author

Tess Butler-Ulrich is a Doctor of Education student at Ontario Tech University and an OCT-certified teacher. As a research assistant at the STEAM-3D Maker Lab, she focuses on maker pedagogies, STEM education, and AI. She recently completed her Master’s degree, which focused on developing critical thinking and applied AI knowledge in youth. Her doctoral research expands on her previous work, focusing on teacher candidates, reflection, and critical AI literacy.
