Thoughts on “Performing AI Literacy” by Ben Williamson

I have been following Williamson for a few months now. He is a voice urging us to hit pause and reconsider our priors, so we can steer education in the right direction as technology, with all its good and bad, becomes pervasive.

In his article, Performing AI Literacy, Williamson critiques the OECD’s new international assessment, the PISA 2029 Media & Artificial Intelligence Literacy (MAIL) test.

MAIL measures young people’s competencies in engaging with digital and AI tools. This test, scheduled for 2029 with results in 2031, aims to standardize what counts as AI literacy and influence global education systems in the years leading up to it.

Williamson is concerned with how standardizing AI literacy will shape how countries define, teach, and measure it. The goal of comparing outcomes across countries internationally could lead to a very narrow and superficial scope. Nonetheless, education systems will feel pressure to align with the assessment framework, privileging certain knowledge and approaches over others.

Williamson is also worried about how the definition of AI literacy combines technical understanding with ethical awareness, but lacks clarity and inclusivity. He questions who decides what counts as AI literacy and whose voices are included. He cautions that the result may be a narrow focus on tool use and surface-level ethics, leaving out deeper questions about power, bias, and critical thinking in AI systems.

Williamson also raises concern about the political and commercial interests shaping AI education. With actors like the OECD, code.org, and EU partnerships involved, he warns that education is being steered toward market-driven, tech-centered goals. This risks turning schools into sites of AI performance and compliance, rather than spaces of critical inquiry and democratic learning.

The Top 3 Pressing Questions I have after reading Performing AI Literacy:

  1. Who gets to define what AI Literacy is, and whose knowledge and values are included—or excluded—in that definition?
    • Will it be EdTech leaders who have no background in ethics yet have made a career of branding themselves as Apple and Google Educators?
  2. How can schools balance the technical teaching of AI tools with fostering critical thinking about their ethical and societal impacts?
    • Similar to the previous answer, I am not so sure critical thinking is well defined, or that it can be modeled by our EdTech leaders, many of whom have no background in analytical thinking, science, engineering, computer science, etc. Their commercial certifications and branding are not sufficient.
  3. What are the risks of AI literacy becoming a tool of governance or compliance, rather than a path to genuine understanding and agency?
    • The large international schools where I currently work are top-down enterprises, and the pressure to project relevance to the community will likely lead us toward AI literacy becoming a tool of governance and compliance, again because of the failures I mentioned in answers #1 and #2.

Reference:

Williamson, B. (2024, May 17). What is AI literacy? Code Acts in Education. https://codeactsineducation.wordpress.com/2024/05/17/what-is-ai-literacy/

Posted by jamierhouse