Since they became publicly available at the end of 2022, generative AI tools have been hotly discussed by educators: what role should these tools for generating human-seeming text, images, and other media play in teaching and learning?
Two years later, the one thing most people agree on is that, like it or not, generative AI is here to stay. And as a computing educator, you probably have your learners and colleagues looking to you for guidance about this technology. We’re sharing how educators like you are approaching generative AI in issue 25 of Hello World, out today for free.
Generative AI and teaching
Since our ‘Teaching and AI’ issue a year ago, educators have been making strides in grappling with generative AI’s place in their classrooms, and with the technology’s potential risks to young people. In this issue, you’ll hear from a wide range of educators who are approaching this technology in different ways.
For example:
Laura Ventura from Gwinnett County Public Schools (GCPS) in Georgia, USA, shares how the GCPS team has integrated AI throughout their K–12 curriculum
Mark Calleja from our team guides you through using the OCEAN prompt process to reliably get the results you want from an LLM
Kip Glazer, principal at Mountain View High School in California, USA, shares a framework for AI implementation aimed at school leaders
Stefan Seegerer, a researcher and educator in Germany, discusses why unplugged activities help us focus on what’s really important in teaching about AI
This issue also includes practical solutions to problems that are unique to computer science educators:
Graham Hastings in the UK shares his solution to tricky crocodile clips when working with micro:bits
Riyad Dhuny shares his case study of home-hosting a learning management system with his students in Mauritius
And there is lots more for you to discover in issue 25.
Whether or not you use generative AI as part of your teaching practice, it’s important for you to be aware of AI technologies and how your young people may be interacting with them. In his article “A problem-first approach to the development of AI systems”, Ben Garside from our team affirms that:
“A big part of our job as educators is to help young people navigate the changing world and prepare them for their futures, and education has an essential role to play in helping people understand AI technologies so that they can avoid the dangers.
Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. […]
Our call to action to educators, carers, and parents is to have conversations with your young people about generative AI. Get to know their opinions on it and how they view its role in their lives, and help them to become critical thinkers when interacting with technology.”
Share your thoughts & subscribe to Hello World
Computing teachers are being asked once again to teach something that they didn’t study. With generative AI, as with all things computing, we want to support your teaching and share your successes. We hope you enjoy this issue of Hello World, and please get in touch with your article ideas or what you would like to see in the magazine.
Share your thoughts and ideas about Hello World and the new issue with us on social media by tagging the Hello World Twitter/X or Facebook accounts
To empower every educator to confidently bring AI into their classroom, we’ve created a new online training course called ‘Understanding AI for educators’ in collaboration with Google DeepMind. By taking this course, you will gain a practical understanding of the crossover between AI tools and education. The course includes a conceptual look at what AI is, how AI systems are built, different approaches to problem-solving with AI, and how to use current AI tools effectively and ethically.
In this post, I will share our approach to designing the course and some of the key considerations behind it — all of which you can apply today to teach your learners about AI systems.
Design decisions: Nurturing knowledge and confidence
We know educators have different levels of confidence with AI tools — we designed this course to help create a level playing field. Our goal is to uplift every educator, regardless of their prior experience, to a point where they feel comfortable discussing AI in the classroom.
AI literacy is key to understanding the implications and opportunities of AI in education. The course provides educators with a solid conceptual foundation, enabling them to ask the right questions and form their own perspectives.
As with all our AI learning materials that are part of Experience AI, we’ve used specific design principles for the course:
Choosing language carefully: We never anthropomorphise AI systems, replacing phrases like “The model understands” with “The model analyses”. We do this to make it clear that AI is just a computer system, not a sentient being with thoughts or feelings.
Accurate terminology: We avoid using AI as a singular noun, opting instead for the more accurate ‘AI tool’ when talking about applications or ‘AI system’ when talking about underlying component parts.
Ethics: The social and ethical impacts of AI are not an afterthought but highlighted throughout the learning materials.
Three main takeaways
The course offers three main takeaways any educator can apply to their teaching about AI systems.
1. Communicating effectively about AI systems
Deciding the level of detail to use when talking about AI systems can be difficult, especially if you’re not very confident about the topic. The SEAME framework offers a solution by breaking down AI into four levels: social and ethical, application, model, and engine. Educators can focus on the level most relevant to their lessons and also use the framework as a useful structure for classroom discussions.
You might discuss the impact a particular AI system is having on society, without the need to explain to your learners how the model itself has been trained or tested. Equally, you might focus on a specific machine learning model to look at where the data used to create it came from and consider the effect the data source has on the output.
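To make the four levels concrete, they can be written down as a simple lookup table of guiding questions. The questions below are our own illustrative examples, not part of the SEAME materials:

```python
# The four SEAME levels as a simple lookup table, each paired with a
# guiding question an educator might pose at that level.
# The questions are our own examples, not from the framework itself.
SEAME = {
    "social and ethical": "Who is affected when this AI system gets things wrong?",
    "application": "What task does this AI tool perform for its users?",
    "model": "What data was this model trained on, and how was it tested?",
    "engine": "How does the underlying technology actually work?",
}

for level, question in SEAME.items():
    print(f"{level}: {question}")
```

A classroom discussion might then pick one row of the table and stay at that level, without needing to cover the others.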
2. Problem-solving approaches: Predictive vs. generative AI
AI applications can be broadly separated into two categories: predictive and generative. These two types of AI model represent two vastly different approaches to problem-solving.
People create predictive AI models to make predictions about the future. For example, you might create a model to make weather forecasts based on previously recorded weather data, or to recommend new movies to you based on your previous viewing history. In developing predictive AI models, the problem is defined first — then a specific dataset is assembled to help solve it. Therefore, each predictive AI model is usually only useful for a small number of applications.
Generative AI models are used to generate media (such as text, code, images, or audio). The possible applications of these models are much more varied because people can use media in many different kinds of ways. You might say that the outputs of generative AI models could be used to solve — or at least to partially solve — any number of problems, without these problems needing to be defined before the model is created.
3. Using generative AI tools: The OCEAN process
Generative AI systems rely on user prompts to generate outputs. The OCEAN process, outlined in the course, offers a simple yet powerful framework for prompting AI tools like Gemini, Stable Diffusion, or ChatGPT.
The first three steps of the process help you write better prompts that will result in an output that is as close as possible to what you are looking for, while the last two steps outline how to improve the output:
Objective: Clearly state what you want the model to generate
Context: Provide necessary background information
Examples: Offer specific examples to fine-tune the model’s output
Assess: Evaluate the output
Negotiate: Refine the prompt to correct any errors in the output
The final step in using any generative AI tool should be to closely review or edit the output yourself. These tools will very quickly get you started, but you’ll always have to rely on your own human effort to ensure the quality of your work.
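As a rough illustration, the first three OCEAN steps can be assembled into a prompt with a few lines of Python. The helper function and wording below are our own sketch, not part of the course materials:

```python
# Illustrative sketch: assembling the first three OCEAN steps
# (Objective, Context, Examples) into a single prompt string.
# The function name and prompt wording are our own, not from the course.

def build_ocean_prompt(objective: str, context: str, examples: list[str]) -> str:
    """Combine Objective, Context, and Examples into one prompt."""
    example_lines = "\n".join(f"- {e}" for e in examples)
    return (
        f"Objective: {objective}\n"
        f"Context: {context}\n"
        f"Examples of what I want:\n{example_lines}"
    )

prompt = build_ocean_prompt(
    objective="Write a 100-word summary of photosynthesis for 12-year-olds",
    context="The class has already covered the parts of a plant cell",
    examples=["Plain language", "One everyday analogy"],
)
print(prompt)
```

The remaining two steps, Assess and Negotiate, happen after the tool responds: you evaluate the output against your objective and refine the prompt accordingly.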
Helping educators to be critical users
We believe the knowledge and skills our ‘Understanding AI for educators’ course teaches will help any educator determine the right AI tools and concepts to bring into their classroom, regardless of their specialisation. Here’s what one course participant had to say:
“From my inexperienced viewpoint, I kind of viewed AI as a cheat code. I believed that AI in the classroom could possibly be a real detriment to students and eliminate critical thinking skills.
After learning more about AI [on the course] and getting some hands-on experience with it, my viewpoint has certainly taken a 180-degree turn. AI definitely belongs in schools and in the workplace. It will take time to properly integrate it and know how to ethically use it. Our role as educators is to stay ahead of this trend as opposed to denying AI’s benefits and falling behind.” – ‘Understanding AI for educators’ course participant
All our Experience AI resources — including this online course and the teaching materials — are designed to foster a generation of AI-literate educators who can confidently and ethically guide their students in navigating the world of AI.
A version of this article also appears in Hello World issue 25, which will be published on Monday 23 September and will focus on all things generative AI and education.
As discussions of how artificial intelligence (AI) will impact teaching, learning, and assessment proliferate, I was thrilled to be able to add one of my own research projects to the mix. As a research scientist at the Raspberry Pi Foundation, I’ve been working on a pilot research study in collaboration with Jane Waite to explore the topic of program error messages (PEMs).
PEMs can be a significant barrier to learning for novice coders, as they are often confusing and difficult to understand. This can hinder troubleshooting and progress in coding, and lead to frustration.
Recently, various teams have been exploring how generative AI, specifically large language models (LLMs), can be used to help learners understand PEMs. My research in this area specifically explores secondary teachers’ views of the explanations of PEMs generated by an LLM, as an aid for learning and teaching programming, and I presented some of my results in our ongoing seminar series.
Understanding program error messages is hard at the start
I started the seminar by setting the scene and describing the current background of research on novices’ difficulty in using PEMs to fix their code, and the efforts made to date to improve these. The three main points I made were that:
PEMs are often difficult to decipher, especially by novices, and there’s a whole research area dedicated to identifying ways to improve them.
Recent studies have employed LLMs as a way of enhancing PEMs. However, the evidence on what makes an ‘effective’ PEM for learning is limited, variable, and contradictory.
There is limited research in the context of K–12 programming education, and little research conducted in collaboration with teachers to better understand the practical and pedagogical implications of integrating LLMs into the classroom more generally.
My pilot study aims to fill this gap directly, by reporting K–12 teachers’ views of the potential use of LLM-generated explanations of PEMs in the classroom, and how their views fit into the wider theoretical paradigm of feedback literacy.
What did the teachers say?
To conduct the study, I interviewed eight expert secondary computing educators. The interviews were semi-structured activity-based interviews, where the educators got to experiment with a prototype version of the Foundation’s publicly available Code Editor. This version of the Code Editor was adapted to generate LLM explanations when the question mark next to the standard error message is clicked (see Figure 1 for an example of an LLM-generated explanation). The Code Editor version called the OpenAI GPT-3.5 interface to generate explanations based on the following prompt: “You are a teacher talking to a 12-year-old child. Explain the error {error} in the following Python code: {code}”.
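For illustration, the study’s prompt template can be reproduced with ordinary string formatting. The sample code snippet and error message below are invented, and this sketch is not the Code Editor’s actual implementation:

```python
# Reconstructing the study's prompt with ordinary string formatting.
# Only the template wording comes from the study; the sample code
# and error message below are invented for illustration.

PROMPT_TEMPLATE = (
    "You are a teacher talking to a 12-year-old child. "
    "Explain the error {error} in the following Python code: {code}"
)

code = 'print("Hello"'
error = "SyntaxError: '(' was never closed"

prompt = PROMPT_TEMPLATE.format(error=error, code=code)
print(prompt)
```

A string like this would then be sent to the model, which returns the child-friendly explanation shown next to the standard error message.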
Fifteen themes were derived from the educators’ responses and these were split into five groups (Figure 2). Overall, the educators’ views of the LLM feedback were that, for the most part, a sensible explanation of the error messages was produced. However, all educators experienced at least one example of invalid content (LLM “hallucination”). Also, despite not being explicitly requested in the LLM prompt, a possible code solution was always included in the explanation.
Matching the themes to PEM guidelines
Next, I investigated how the teachers’ views correlated to the research conducted to date on enhanced PEMs. I used the guidelines proposed by Brett Becker and colleagues, which consolidate a lot of the research done in this area into ten design guidelines. The guidelines offer best practices on how to enhance PEMs based on cognitive science and educational theory empirical research. For example, they outline that enhanced PEMs should provide scaffolding for the user, increase readability, reduce cognitive load, use a positive tone, and provide context to the error.
Out of the 15 themes identified in my study, 10 correlated closely to the guidelines. However, the themes that correlated well were, for the most part, those related to the content, presentation, and validity of the explanations (Figure 3). On the other hand, the themes concerning the teaching and learning process did not fit as well to the guidelines.
Does feedback literacy theory fit better?
However, when I looked at feedback literacy theory, I was able to correlate all fifteen themes — the theory fits.
Feedback literacy theory positions the feedback process (which includes explanations) as a social interaction, and accounts for the actors involved in the interaction — the student and the teacher — as well as the relationships between the student, the teacher, and the feedback. We can explain feedback literacy theory using three constructs: feedback types, student feedback literacy, and teacher feedback literacy (Figure 4).
From the feedback literacy perspective, feedback can be grouped into four types: telling, guiding, developing understanding, and opening up new perspectives. The feedback type depends on the role of the student and teacher when engaging with the feedback (Figure 5).
From the student perspective, the competencies and dispositions students need in order to use feedback effectively can be stated as: appreciating the feedback processes, making judgements, taking action, and managing affect. Finally, from a teacher perspective, teachers apply their feedback literacy skills across three dimensions: design, relational, and pragmatic.
In short, according to feedback literacy theory, effective feedback processes entail well-designed feedback with a clear pedagogical purpose, as well as the competencies students and teachers need in order to make sense of the feedback and use it effectively.
This theory therefore provided a promising lens for analysing the educators’ perspectives in my study. When the educators’ views were correlated to feedback literacy theory, I found that:
Educators prefer the LLM explanations to fulfil a guiding and developing understanding role, rather than a telling one. For example, educators prefer to either remove or delay the code solution from the explanation, and they like the explanations to include keywords based on concepts they are teaching in the classroom, to guide and develop students’ understanding rather than simply telling them the answer.
Related to students’ feedback literacy, educators talked about the ways in which the LLM explanations help or hinder students to make judgements and action the feedback in the explanations. For example, they talked about how detailed, jargon-free explanations can help students make judgments about the feedback, but invalid explanations can hinder this process. Therefore, teachers talked about the need for ways to manage such invalid instances. However, for the most part, the educators didn’t talk about eradicating them altogether. They talked about ways of flagging them, using them as counter-examples, and having visibility of them to be able to address them with students.
Finally, from a teacher feedback literacy perspective, educators discussed the need for professional development to manage feedback processes inclusive of LLM feedback (design) and address issues resulting from reduced opportunities to interact with students (relational and pragmatic). For example, if using LLM explanations results in a reduction in the time teachers spend helping students debug syntax errors from a pragmatic time-saving perspective, then what does that mean for the relationship they have with their students?
Conclusion from the study
By correlating educators’ views to feedback literacy theory as well as enhanced PEM guidelines, we can take a broader perspective on how LLMs might not only shape the content of the explanations, but the whole social interaction around giving and receiving feedback. Investigating ways of supporting students and teachers to practise their feedback literacy skills matters just as much, if not more, than focusing on the content of PEM explanations.
This study was a first-step exploration of eight educators’ views on the potential impact of using LLM explanations of PEMs in the classroom. Exactly what the findings of this study mean for classroom practice remains to be investigated, and we also need to examine students’ views on the feedback and its impact on their journey of learning to program.
If you want to hear more, you can watch my seminar:
If any of these ideas resonated with you as an educator, student, or researcher, do reach out — we’d love to hear from you. You can contact me directly at [email protected] or drop us a line in the comments below.
Join our next seminar
The focus of our ongoing seminar series is on teaching programming with or without AI. Check out the schedule of our upcoming seminars.
To take part in the next seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.
“I’ve enjoyed actually learning about what AI is and how it works, because before I thought it was just a scary computer that thinks like a human,” a student learning with Experience AI at King Edward’s School, Bath, UK, told us.
This is the essence of what we aim to do with our Experience AI lessons, which demystify artificial intelligence (AI) and machine learning (ML). Through Experience AI, teachers worldwide are empowered to confidently deliver engaging lessons with a suite of resources that inspire and educate 11- to 14-year-olds about AI and the role it could play in their lives.
“I learned new things and it changed my mindset that AI is going to take over the world.” – Student, Malaysia
Developed by us with Google DeepMind, our first set of Experience AI lesson resources was aimed at a UK audience and launched in April 2023. Next, we released tailored versions of the resources for five other countries, working in close partnership with organisations in Malaysia, Kenya, Canada, Romania, and India. Thanks to new funding from Google.org, we’re now expanding Experience AI to 16 more countries and creating new resources on AI safety, with the aim of providing leading-edge AI education for more than 2 million young people across Europe, the Middle East, and Africa.
In this blog post, you’ll hear directly from students and teachers about the impact the Experience AI lessons have had so far.
Case study: Experience AI in Malaysia
Penang Science Cluster in Malaysia is among the first organisations we’ve partnered with for Experience AI. Speaking to Malaysian students learning with Experience AI, we found that the lessons were often very different from what they had expected.
“I actually thought it was going to be about boring lectures and not much about AI but more on coding, but we actually got to do a lot of hands-on activities, which are pretty fun. I thought AI was just about robots, but after joining this, I found it could be made into chatbots or could be made into personal helpers.” – Student, Malaysia
“Actually, I thought AI was mostly related to robots, so I was expecting to learn more about robots when I came to this programme. It widened my perception on AI.” – Student, Malaysia
The Malaysian government actively promotes AI literacy among its citizens, and working with local education authorities, Penang Science Cluster is using Experience AI to train teachers and equip thousands of young people in the state of Penang with the understanding and skills to use AI effectively.
“We envision a future where AI education is as fundamental as mathematics education, providing students with the tools they need to thrive in an AI-driven world”, says Aimy Lee, Chief Operating Officer at Penang Science Cluster. “The journey of AI exploration in Malaysia has only just begun, and we’re thrilled to play a part in shaping its trajectory.”
Giving non-specialist teachers the confidence to introduce AI to students
“Our Key Stage 3 Computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.” – Dave Cross, North Liverpool Academy, UK
“The feedback we’ve received from both teachers and learners has been overwhelmingly positive. They consistently rave about how accessible, fun, and hands-on these resources are. What’s more, the materials are so comprehensive that even non-specialists can deliver them with confidence.” – Storm Rae, The National Museum of Computing, UK
“[The lessons] go above and beyond to ensure that students not only grasp the material but also develop a genuine interest and enthusiasm for the subject.” – Teacher, Changamwe Junior School, Mombasa, Kenya
Sparking debates on bias and the limitations of AI
When learners gain an understanding of how AI works, it gives them the confidence to discuss areas where the technology doesn’t work well or its output is incorrect. These classroom debates deepen and consolidate their knowledge, and help them to use AI more critically.
“Students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.” – Tracey Mayhead, Arthur Mellows Village College, Peterborough, UK
“The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it.” – Jacky Green, Waldegrave School, UK
“I have enjoyed learning about how AI is actually programmed, rather than just hearing about how impactful and great it could be.” – Student, King Edward’s School, Bath, UK
“It has changed my outlook on AI because now I’ve realised how much AI actually needs human intelligence to be able to do anything.” – Student, Arthur Mellows Village College, Peterborough, UK
“I didn’t really know what I wanted to do before this but now knowing more about AI, I probably would consider a future career in AI as I find it really interesting and I really liked learning about it.” – Student, Arthur Mellows Village College, Peterborough, UK
If you’d like to get involved with Experience AI as an educator and use our free lesson resources with your class, you can start by visiting experience-ai.org.
Since we launched the Experience AI learning programme in the UK in April 2023, educators in 130 countries have downloaded Experience AI lesson resources. They estimate reaching over 630,000 young people with the lessons, helping them to understand how AI works and to build the knowledge and confidence to use AI tools responsibly. Just last week, we announced another exciting expansion of Experience AI: thanks to $10 million in funding from Google.org, we will be able to work with local partner organisations to provide research-based AI education to an estimated 2 million or more young people across Europe, the Middle East, and Africa.
This blog post explains how we use research to continue to shape our Experience AI resources, including the new AI safety resources we are developing.
The beginning of Experience AI
Artificial intelligence (AI) and machine learning (ML) applications are part of our everyday lives — we use them every time we scroll through social media feeds organised by recommender systems or unlock an app with facial recognition. For young people, there is more need than ever to gain the skills and understanding to critically engage with AI technologies.
We wanted to design free lesson resources to help teachers in a wide range of subjects confidently introduce AI and ML to students aged 11 to 14 (Key Stage 3). This led us to develop Experience AI, in collaboration with Google DeepMind, offering materials including lesson plans, slide decks, videos (both teacher- and student-facing), student activities, and assessment questions.
SEAME: The research-based framework behind Experience AI
The Experience AI resources were built on rigorous research from the Raspberry Pi Computing Education Research Centre as well as from other researchers, including those we hosted at our series of seminars on AI and data science education. The Research Centre’s work involved mapping and categorising over 500 resources used to teach AI and ML, and found that the majority were one-off activities, and that very few resources were tailored to a specific age group.
To analyse the content that existing AI education resources covered, the Centre developed a simple framework called SEAME. The framework gives you an easy way to group concepts, knowledge, and skills related to AI and ML based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works).
Through Experience AI, learners also gain an understanding of the models underlying AI applications, and the processes used to train and test ML models.
Our Experience AI lessons cover all four levels of SEAME and focus on applications of AI that are relatable for young people. They also introduce learners to AI-related issues such as privacy or bias concerns, and the impact of AI on employment.
The six foundation lessons of Experience AI
What is AI?: Learners explore the current context of AI and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society.
How computers learn: Focusing on the role of data-driven models in AI systems, learners are introduced to ML and find out about three common approaches to creating ML models. Finally, they explore classification, a specific application of ML.
Bias in, bias out: Students create their own ML model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
Decision trees: Learners take their first in-depth look at a specific type of ML model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means.
Solving problems with ML models: Students are introduced to the AI project lifecycle and use it to create an ML model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their ML model. To complete the unit, they explore a range of AI-related careers, hear from people working in AI research at Google DeepMind, and explore how they might apply AI and ML to their interests.
We also offer two additional stand-alone lessons: one on large language models, how they work, and why they’re not always reliable, and the other on the application of AI in ecosystems research, which lets learners explore how AI tools can be used to support animal conservation.
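The ‘data-driven’ idea at the heart of the decision tree lesson can be sketched in a few lines of Python: the same training procedure produces a different model when the training data changes. The feature values and helper names below are our own toy illustration, not material from the lessons:

```python
# Toy illustration of a data-driven model: a one-question decision
# stump whose threshold is learned from the training data.
# The 'redness' scores (0-10) are invented for illustration, and the
# toy data assumes tomatoes score redder than apples.

def train_stump(examples):
    """Learn a redness threshold separating apples from tomatoes.

    examples: list of (redness, label) pairs.
    Returns the midpoint between the reddest apple and the
    least-red tomato.
    """
    apples = [r for r, label in examples if label == "apple"]
    tomatoes = [r for r, label in examples if label == "tomato"]
    return (max(apples) + min(tomatoes)) / 2

def classify(redness, threshold):
    return "tomato" if redness > threshold else "apple"

dataset_a = [(3, "apple"), (4, "apple"), (8, "tomato"), (9, "tomato")]
dataset_b = [(5, "apple"), (6, "apple"), (8, "tomato"), (9, "tomato")]

# The same training procedure yields different models from different data.
print(train_stump(dataset_a))  # 6.0
print(train_stump(dataset_b))  # 7.0
```

Changing the dataset changes the learned threshold, which is exactly the first-hand experience of ‘data-driven’ that the lesson aims for, and also hints at how a limited or skewed dataset produces a flawed model.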
New AI safety resources: Empowering learners to be critical users of technology
We have also been developing a set of resources for educator-led sessions on three topics related to AI safety, funded by Google.org.
AI and your data: With the support of this resource, young people reflect on the data they have already provided to AI applications in their daily lives, and think about how the prevalence of AI tools might change the way they protect their data.
Media literacy in the age of AI: This resource highlights the ways AI tools can be used to perpetuate misinformation and how AI applications can help people combat misleading claims.
Using generative AI responsibly: With this resource, young people consider their responsibilities when using generative AI, and their expectations of developers who release generative AI tools.
Other research principles behind our free teaching resources
As well as using the SEAME framework, we have incorporated a whole host of other research-based concepts in the design principles for the Experience AI resources. For example, we avoid anthropomorphism — that is, words or imagery that can lead learners to wrongly believe that AI applications have sentience or intentions like humans do — and we instead promote the understanding that it’s people who design AI applications and decide how they are used. We also teach about data-driven application design, which is a core concept in computational thinking 2.0.
Share your feedback
We’d love to hear your thoughts and feedback about using the Experience AI resources. Your comments help us to improve the current materials, and to develop future resources. You can tell us what you think using this form.
And if you’d like to start using the Experience AI resources as an educator, you can download them for free at experience-ai.org.
Last week, we were honoured to attend UNESCO’s Digital Learning Week conference to present our free Experience AI resources and how they can help teachers demystify AI for their learners.
The conference drew a worldwide audience in person and online to hear about the work educators and policy makers are doing to support teachers’ use of AI tools in their teaching and learning. Speaker after speaker reiterated that the shared goal of our work is to support learners to become critical consumers and responsible creators of AI systems.
In this blog, we share how our conference talk demonstrated the use of Experience AI for pursuing this globally shared goal, and how the Experience AI resources align with UNESCO’s newly launched AI competency framework for students.
Presenting the design principles behind Experience AI
Our talk about Experience AI, our learning programme developed with Google DeepMind, focused on the research-informed approach we are taking in our resource development. Specifically, we spoke about three key design principles that we embed in the Experience AI resources:
Firstly, using AI and machine learning to solve problems requires learners and educators to think differently from traditional computational thinking and to take a data-driven approach instead, as laid out in the research around computational thinking 2.0.
Secondly, we avoid anthropomorphising AI systems in our language and imagery, to make clear that AI applications are designed by people and do not have thoughts, feelings, or intentions of their own.
Thirdly, we described how we used the SEAME framework, which we adapted from work by Jane Waite (Raspberry Pi Foundation) and Paul Curzon (Queen Mary University of London), to categorise hundreds of AI education resources and inform the design of our Experience AI resources. The framework offers a common language for educators when assessing the content of resources, and when supporting learners to understand the different aspects of AI systems.
By presenting our design principles, we aimed to give educators, policy makers, and attendees from non-governmental organisations practical recommendations and actionable considerations for designing learning materials on AI literacy.
How Experience AI aligns with UNESCO’s new AI competency framework for students
At Digital Learning Week, UNESCO launched two AI competency frameworks:
A framework for students, intended to help teachers around the world with integrating AI tools in activities to engage their learners
A framework for teachers, “defining the knowledge, skills, and values teachers must master in the age of AI”
AI competency framework for students
We have had the chance to map the Experience AI resources to UNESCO’s AI framework for students at a high level, finding that the resources cover 10 of the 12 areas of the framework (see image below).
For instance, throughout the Experience AI resources runs a thread of promoting “citizenship in the AI era”: the social and ethical aspects of AI technologies are highlighted in all the lessons and activities. In this way, they provide students with the foundational knowledge of how AI systems work, and where they may work badly. Using the resources, educators can teach their learners core AI and machine learning concepts and make these concepts concrete through practical activities where learners create their own models and critically evaluate their outputs. Importantly, by learning with Experience AI, students not only learn to be responsible users of AI tools, but also to consider fairness, accountability, transparency, and privacy when they create AI models.
Teacher competency framework for AI
UNESCO’s AI competency framework for teachers outlines 15 competencies across 5 dimensions (see image below). We enjoyed listening to the launch panel members talk about the strong ambitions of the framework as well as the realities of teachers’ global and local challenges. The three key messages of the panel were:
AI will not replace the expertise of classroom teachers
Supporting educators to build AI competencies is a shared responsibility
Individual countries’ education systems have different needs in terms of educator support
All three messages resonate strongly with the work we’re doing at the Raspberry Pi Foundation. Supporting all educators is a fundamental part of our resource development. For example, Experience AI offers everything a teacher with no technical background needs to deliver the lessons, including lesson plans, videos, worksheets and slide decks. We also provide a free online training course on understanding AI for educators. And in our work with partner organisations around the world, we adapt and translate Experience AI resources so they are culturally relevant, and we organise locally delivered teacher professional development.
The teachers’ competency framework is meant as guidance for educators, policy makers, training providers, and application developers to support teachers in using AI effectively, and in helping their learners gain AI literacy skills. We will certainly consult the document as we develop our training and professional development resources for teachers further.
Towards AI literacy for all young people
Across this year’s UNESCO Digital Learning Week, we saw that the role of AI in education took centre stage across the presentations and the informal conversations among attendees. It was a privilege to present our work and see how well Experience AI was received, with attendees recognising that our design principles align with the values and principles in UNESCO’s new AI competency frameworks.
We look forward to continuing this international conversation about AI literacy and working in aligned ways to support all young people to develop a foundational understanding of AI technologies.
Two years ago, we announced Experience AI, a collaboration between the Raspberry Pi Foundation and Google DeepMind to inspire the next generation of AI leaders.
Today I am excited to announce that we are expanding the programme with the aim of reaching more than 2 million students over the next 3 years, thanks to a generous grant of $10m from Google.org.
Why do kids need to learn about AI?
AI technologies are already changing the world and we are told that their potential impact is unprecedented in human history. But just like every other wave of technological innovation, along with all of the opportunities, the AI revolution has the potential to leave people behind, to exacerbate divisions, and to create more problems than it solves.
Part of the answer to this dilemma lies in ensuring that all young people develop a foundational understanding of AI technologies and the role that they can play in their lives.
That’s why the conversation about AI in education is so important. A lot of the focus of that conversation is on how we harness the power of AI technologies to improve teaching and learning. Enabling young people to use AI to learn is important, but it’s not enough.
We need to equip young people with the knowledge, skills, and mindsets to use AI technologies to create the world they want. And that means supporting their teachers, who once again are being asked to teach a subject that they didn’t study.
Experience AI
That’s the work that we’re doing through Experience AI, an ambitious programme to provide teachers with free classroom resources and professional development, enabling them to teach their students about AI technologies and how they are changing the world. All of our resources are grounded in research that defines the concepts that make up AI literacy, they are rooted in real world examples drawing on the work of Google DeepMind, and they involve hands-on, interactive activities.
The Experience AI resources have already been downloaded 100,000 times across 130 countries and we estimate that 750,000 young people have taken part in an Experience AI lesson already.
In November 2023, we announced that we were building a global network of partners that we would work with to localise and translate the Experience AI resources, to ensure that they are culturally relevant, and organise locally delivered teacher professional development. We’ve made a fantastic start working with partners in Canada, India, Kenya, Malaysia, and Romania; and it’s been brilliant to see the enthusiasm and demand for AI literacy from teachers and students across the globe.
Thanks to an incredibly generous donation of $10m from Google.org – announced at Google.org’s first Impact Summit – we will shortly be welcoming new partners in 17 countries across Europe, the Middle East, and Africa, with the aim of reaching more than 2 million students in the next three years.
AI Safety
Alongside the expansion of the global network of Experience AI partners, we are also launching new resources that focus on critical issues of AI safety.
AI and Your Data: Helping young people reflect on the data they are already providing to AI applications in their lives and how the prevalence of AI tools might change the way they protect their data.
Media Literacy in the Age of AI: Highlighting the ways AI tools can be used to perpetuate misinformation and how AI applications can help combat misleading claims.
Using Generative AI Responsibly: Empowering young people to reflect on their responsibilities when using Generative AI and their expectations of developers who release AI tools.
Get involved
In many ways, this moment in the development of AI technologies reminds me of the internet in the 1990s (yes, I am that old). We all knew that it had potential, but no-one could really imagine the full scale of what would follow.
We failed to rise to the educational challenge of that moment and we are still living with the consequences: a dire shortage of talent; a tech sector that doesn’t represent all communities and voices; and young people and communities who are still missing out on economic opportunities and unable to utilise technology to solve the problems that matter to them.
We have an opportunity to do a better job this time. If you’re interested in getting involved, we’d love to hear from you.
If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). I admit to sometimes feeling this way myself. However, one recent update really caught my attention: OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice around presentation techniques, sounding friendly and jovial along the way.
Adding a voice to these AI models was perhaps inevitable as big tech companies try to compete for market share in this space, but it got me thinking, why would they add a voice? Why does the model have to flirt with the presenter?
Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech.
What problem are we trying to solve with GenAI?
The fact is that I’m really not sure. That’s not to suggest that I think GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, and to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.
Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems.
Anthropomorphism
Anthropomorphism relates to assigning human characteristics to things that aren’t human. This is something that we all do, all of the time, usually without consequences. The problem with doing this with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human.
As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the ChatGPT-4o launch and the model:
ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?” Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.” ChatGPT: “Oh stop it, you’re making me blush.”
“Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”
The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this exposes vulnerable young people to be manipulated in ways we can’t predict. Relying on AI for emotional support could lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships.
Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are, who are the creators trying to have influence over? What influence are they trying to gain that they couldn’t get before they created a virtual woman?
DeepFake tools
Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip.
Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are the more likely victims, and can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women.
Why we should focus on using AI as a problem-solving tool
Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate the changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies so they can avoid the dangers.
Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. Having an understanding of how these technologies work goes a long way towards achieving sufficient AI literacy skills to make informed choices, and this is where our Experience AI programme comes in.
Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it.
Taking a problem-first approach doesn’t by default avoid an AI system causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.
Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop models ethically and responsibly. The principles also apply to consumers, as we need to get to a place in society where we expect these principles to be adhered to and consumer power means that any models that don’t, simply won’t succeed.
Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency about their models. Much like how nutritional information on food labels allows the consumer to make an informed choice about whether or not to buy the food, model cards give information about an AI model such as the purpose of the model, its accuracy, and known limitations such as what bias might be in the data. Students write their own model cards based on the AI solutions they have created.
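The model card idea can be made concrete with a small sketch. Below is a hypothetical example in Python: the field names and values are invented for illustration (real model cards use richer sections), but the principle is the same one described above — a label, like nutritional information, that lets a consumer make an informed choice about a model:

```python
# A hypothetical model card for a student's apple/tomato image classifier.
# Field names and values are illustrative, not a formal standard.
model_card = {
    "model_name": "Apple vs tomato classifier",
    "purpose": "Sort photos of apples and tomatoes for a class project",
    "training_data": "40 photos taken by students (20 apples, 20 tomatoes)",
    "accuracy": "85% on a held-out set of 20 photos",
    "known_limitations": [
        "Most photos were taken on a white background",
        "Green apples are often misclassified as unripe tomatoes",
    ],
}

def render_card(card):
    """Format the model card as a readable label, like a food label."""
    lines = [f"{key.replace('_', ' ').title()}: {value}"
             for key, value in card.items() if not isinstance(value, list)]
    if card.get("known_limitations"):
        lines.append("Known Limitations:")
        for item in card["known_limitations"]:
            lines.append(f"  - {item}")
    return "\n".join(lines)

print(render_card(model_card))
```

Writing out a card like this forces the student to state the model's purpose, data, and known failure modes explicitly, which is exactly the transparency the approach is meant to promote.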
What else can we do?
At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources and aligning them with the Foundation’s mission to help young people to:
Be critical consumers of AI technology
Understand the limitations of AI
Expect fairness, accountability, transparency, privacy, and security, and work toward reducing inequities caused by technology
See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures
Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology.
The world of education is loud with discussions about the uses and risks of generative AI — tools for outputting human-seeming media content such as text, images, audio, and video. In answer, there’s a new practical guide on using generative AI aimed at Computing teachers (and others), written by a group of classroom teachers and researchers at the Raspberry Pi Computing Education Research Centre and Faculty of Education at the University of Cambridge.
Their new guide is a really useful overview for everyone who wants to:
Understand the issues generative AI tools present in the context of education
Find out how to help their schools and students navigate them
Discover ideas on how to make use of generative AI tools in their teaching
Since generative AI tools have become publicly available, issues around data privacy and plagiarism are at the front of educators’ minds. At the same time, many educators are coming up with creative ways to use generative AI tools to enhance teaching and learning. The Research Centre’s guide describes the areas where generative AI touches on education, and lays out what schools and teachers can do to use the technology beneficially and help their learners do the same.
It’s widely accepted that AI tools can bring benefits but can also be used in unhelpful or harmful ways. Basic knowledge of how AI and machine learning work is key to being able to get the best from them. The Research Centre’s guide shares recommended educational resources for teaching learners about AI.
One of the recommendations is Experience AI, a set of free classroom resources we’re creating. It includes a set of 6 lessons for providing 11- to 14-year-olds with a foundational understanding of AI systems, as well as a standalone lesson specifically for teaching about large language model-based AI tools, such as ChatGPT and Google Gemini. These materials are for teachers of any specialism, not just for Computing teachers.
You’ll find that even a brief introduction to how large language models work is likely to make students’ ideas about using these tools to do all their homework much less appealing. The guide outlines creative ways you can help students see some of generative AI’s pitfalls, such as asking students to generate outputs and compare them, paying particular attention to inaccuracies in the outputs.
Generative AI tools and teaching computing
We’re still learning about the best ways to teach programming to novice learners. Generative AI has the potential to change how young people learn text-based programming, as AI functionality is now integrated into many of the major programming environments, generating example solutions or helping to spot errors.
The Research Centre’s guide acknowledges that there’s more work to be done to understand how and when to support learners with programming tasks through generative AI tools. (You can follow our ongoing seminar series on the topic.) In the meantime, you may choose to support established programming pedagogies with generative AI tools, such as prompting an AI chatbot to generate a PRIMM activity on a particular programming concept.
As ethics and the impact of technology play an important part in any good Computing curriculum, the guide also shares ways to use generative AI tools as a focus for your classroom discussions about topics such as bias and inequality.
Using generative AI tools to support teaching and learning
Teachers have been using generative AI applications as productivity tools to support their teaching, and the Research Centre’s guide gives several examples you can try out yourself. Examples include creating summaries of textual materials for students, and creating sets of questions on particular topics. As the guide points out, when you use generative AI tools like this, it’s important to always check the accuracy of the generated materials before you give any of them to your students.
Putting a school-wide policy in place
Importantly, the Research Centre’s guide highlights the need for a school-wide acceptable use policy (AUP) that informs teachers, other school staff, and students on how they may use generative AI tools. This section of the guide suggests websites that offer sample AUPs that can be used as a starting point for your school. Your AUP should aim to keep users safe, covering e-safety, privacy, and security issues as well as offering guidance on being transparent about the use of generative tools.
It’s not uncommon for schools to look to specialist Computing teachers to act as the experts on questions around the use of digital tools. However, to develop trust in how generative AI tools are used in the school, it’s important to consult as wide a range of stakeholders as possible in the process of creating an AUP.
A source of support for teachers and schools
As the Research Centre’s guide recognises, the landscape of AI and our thinking about it might change. In this uncertain context, the document offers a sensible and detailed overview of where we are now in understanding the current impact of generative AI on Computing as a subject, and on education more broadly. The example use cases and thought-provoking next steps on how this technology can be used and what its known risks and concerns are should be helpful for all interested educators and schools.
I recommend that all Computing teachers read this new guide, and I hope you feel inspired about the key role that you can play in shaping the future of education affected by AI.
Developed by us and Google DeepMind, Experience AI provides teachers with free resources to help them confidently deliver lessons that inspire and educate young people about artificial intelligence (AI) and the role it could play in their lives.
Tracy Mayhead is a computer science teacher at Arthur Mellows Village College in Cambridgeshire. She recently taught Experience AI to her KS3 pupils. In this blog post, she shares 4 key learnings from this experience.
1. Preparation saves time
The Experience AI lesson plans provided a clear guide on how to structure our lessons.
Each lesson includes teacher-facing intro videos, a lesson plan, a slide deck, activity worksheets, and student-facing videos that help to introduce each new AI concept.
It was handy to know in advance which websites needed unblocking so students could access them.
“My favourite bit was making my own model, and choosing the training data. I enjoyed seeing how the amount of data affected the accuracy of the AI and testing the model.” – Student, Arthur Mellows Village College, UK
2. The lessons can be adapted to meet students’ needs
It was clear from the start that I could adapt the lessons to make them work for myself and my students.
Having estimated times and corresponding slides for activities was beneficial for adjusting the lesson duration. The balance between learning and hands-on tasks was just right.
I felt fairly comfortable with my understanding of AI basics. However, teaching it was a learning experience, especially in tailoring the lessons to cater to students with varying knowledge. Their misconceptions sometimes caught me off guard, like their belief that AI is never wrong. Adapting to their needs and expectations was a learning curve.
“It has definitely changed my outlook on AI. I went from knowing nothing about it to understanding how it works, why it acts in certain ways, and how to actually create my own AI models and what data I would need for that.” – Student, Arthur Mellows Village College, UK
3. Young people are curious about AI and how it works
My students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.
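The apples-versus-tomatoes activity can be mimicked in a few lines of Python. This is only a toy sketch with invented data: each fruit is reduced to two made-up features, (redness, roundness), and a nearest-centroid rule assigns new samples to a class. Samples near the class boundary are exactly where such a simple model can misidentify objects, which is the limitation students noticed:

```python
# Toy nearest-centroid classifier. Each fruit is described by two
# invented features, (redness, roundness), both on a 0-1 scale.
def centroid(points):
    """Mean position of a class's training points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(sample, training):
    """Assign the sample to the class with the nearest centroid."""
    best_label, best_dist = None, float("inf")
    for label, points in training.items():
        cx, cy = centroid(points)
        dist = (sample[0] - cx) ** 2 + (sample[1] - cy) ** 2
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

training = {
    "apple":  [(0.6, 0.7), (0.5, 0.8), (0.7, 0.75)],
    "tomato": [(0.9, 0.95), (0.85, 0.9), (0.95, 0.92)],
}

print(classify((0.9, 0.9), training))    # clearly tomato-like features
print(classify((0.75, 0.85), training))  # near the boundary: easy to get wrong
```

Adding or removing training points and watching the predictions change mirrors what students observed about the amount of training data affecting accuracy.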
It’s important to acknowledge that as a teacher you won’t always have all the answers, especially when teaching AI literacy, which is such a new area. This is something that can be explored in class alongside students.
If you are at all nervous, there is an online course that can help you get started with teaching about AI.
“I learned a lot about AI and the possibilities it holds to better our futures as well as how to train it and problems that may arise when training it.” – Student, Arthur Mellows Village College, UK
4. Engaging young people with AI is important
Students are fascinated by AI and they recognise its significance in their future. It is important to equip them with the knowledge and skills to fully engage with AI.
Experience AI provides a valuable opportunity to explore these concepts and empower students to shape and question the technology that will undoubtedly impact their lives.
“It has changed my outlook on AI because I now understand it better and feel better equipped to work with AI in my working life.” – Student, Arthur Mellows Village College, UK
What is your experience of teaching Experience AI lessons?
We completely agree with Tracy. AI literacy empowers people to critically evaluate AI applications and how they are being used. Our Experience AI resources help to foster critical thinking skills, allowing learners to use AI tools to address challenges they are passionate about.
We’re also really interested to learn what misconceptions students have about AI and how teachers are addressing them. If you come across misconceptions that surprise you while you’re teaching with the Experience AI lesson materials, please let us know via the feedback form linked in the final lesson of the six-lesson unit.
If you would like to teach Experience AI lessons to your students, download the free resources from experience-ai.org.
Generative artificial intelligence (AI) tools are becoming more easily accessible to learners and educators, and increasingly better at generating code solutions to programming tasks, code explanations, computing lesson plans, and other learning resources. This raises many questions for educators in terms of what and how we teach students about computing and AI, and AI’s impact on assessment, plagiarism, and learning objectives.
We were honoured to have Professor Brett Becker (University College Dublin) join us as part of our ‘Teaching programming (with or without AI)’ seminar series. He is uniquely placed to comment on teaching computing using AI tools, having been involved in many initiatives relevant to computing education at different levels, in Ireland and beyond.
Brett’s talk focused on what educators and education systems need to do to prepare all students — not just those studying Computing — so that they are equipped with sufficient knowledge about AI to make their way from primary school to secondary and beyond, whether it be university, technical qualifications, or work.
How do AI tools currently perform?
Brett began his talk by illustrating the increase in performance of large language models (LLMs) in solving first-year undergraduate programming exercises: he compared the findings from two recent studies he was involved in as part of an ITiCSE Working Group. In the first study — from 2021 — the results generated by GPT-3 were similar to those of students in the top quartile. By the second study in 2023, GPT-4’s performance matched that of a top student (Figure 1).
Figure 1: Student scores on Exam 1 and Exam 2, represented by circles. GPT-3’s 2021 score is represented by the blue ‘x’, and GPT-4’s 2023 score on the same questions is represented by the red ‘x’.
Brett also explained that the study found some models were capable of solving current undergraduate programming assessments almost error-free, and could solve the Irish Leaving Certificate and UK A level Computer Science exams.
What are challenges and opportunities for education?
This level of performance raises many questions for computing educators about what is taught and how to assess students’ learning. To address this, Brett referred to his 2023 paper, which included findings from a literature review and a survey on students’ and instructors’ attitudes towards using LLMs in computing education. This analysis has helped him identify several opportunities as well as the ethical challenges education systems face regarding generative AI.
The opportunities include:
The generation of unique content, lesson plans, programming tasks, or feedback to help educators with workload and productivity
More accessible content and tools generated by AI apps to make Computing more broadly accessible to more students
More engaging and meaningful student learning experiences, including using generative AI to enable creativity and using conversational agents to augment students’ learning
The impact on assessment practices, both in terms of automating the marking of current assessments as well as reconsidering what is assessed and how
Some of the challenges include:
The lack of reliability and accuracy of outputs from generative AI tools
The need to educate everyone about AI to create a baseline level of understanding
The legal and ethical implications of using AI in computing education and beyond
How to deal with questionable or even intentionally harmful uses of AI and mitigating the consequences of such uses
Programming as a basic skill for all subjects
Next, Brett talked about concrete actions that he thinks we need to take in response to these opportunities and challenges.
He also discussed the increased relevance of programming to all subjects, not only Computing, in a similar way to how reading and mathematics transcend the boundaries of their subjects, and the need he sees to adapt subjects and curricula to that effect.
As an example of how rapidly curricula may need to change with increasing AI use by students, Brett looked at the Irish Computer science specification for “senior cycle” (final two years of second-level, ages 16–18). This curriculum was developed in 2018 and remains a strong computing curriculum in Brett’s opinion. However, he pointed out that it only contains a single learning outcome on AI.
To help educators bridge this gap, in the book Brett wrote alongside Keith Quille to accompany the curriculum, they included two chapters dedicated to AI, machine learning, and ethics and computing. Brett believes these types of additional resources may be instrumental for teaching and learning about AI, as they are more adaptable and easier to update than curricula.
Generative AI in computing education
Taking the opportunity to use generative AI to reimagine new types of programming problems, Brett and colleagues have developed Promptly, a tool that allows students to practise prompting AI code generators. This tool provides a combined approach to learning about generative AI while learning programming with an AI tool.
Promptly is intended to help students learn how to write effective prompts. It encourages students to specify and decompose the programming problem they want to solve, read the code generated, compare it with test cases to discern why it is failing (if it is), and then update their prompt accordingly (Figure 2).
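This specify–generate–test–refine cycle can be sketched in a few lines of Python. This is a hypothetical illustration of the loop a student works through, not Promptly’s actual implementation: `generate_code` is a stand-in stub for a call to an AI code generator, and the prompts and test cases are invented.

```python
# Hypothetical sketch of the prompt-test-refine loop a tool like
# Promptly guides students through. `generate_code` is a stub that
# stands in for a real AI code generator.

def generate_code(prompt: str) -> str:
    # Stand-in: a real tool would send the prompt to an LLM here.
    if "negative" in prompt:
        return "def absolute(n):\n    return n if n >= 0 else -n"
    return "def absolute(n):\n    return n"  # incomplete first attempt

def run_tests(code: str, cases: list[tuple[int, int]]) -> list[bool]:
    namespace: dict = {}
    exec(code, namespace)  # define the generated function
    func = namespace["absolute"]
    return [func(arg) == expected for arg, expected in cases]

# The student's test cases reveal that the first prompt is underspecified.
cases = [(3, 3), (-4, 4)]
prompt = "Write a function absolute(n) that returns n"
results = run_tests(generate_code(prompt), cases)
if not all(results):
    # Reading the failing case, the student refines the prompt.
    prompt += ", and for negative n returns -n"
    results = run_tests(generate_code(prompt), cases)
print(results)  # → [True, True]
```

The point of the exercise is that the student improves the prompt, not the code: reading the generated function and its failing test case is what drives the next, more precise prompt.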
Figure 2: Example of a student’s use of Promptly.
Early undergraduate student feedback points to Promptly being a useful way to teach programming concepts and encourage metacognitive programming skills. The tool is further described in a paper, and whilst the initial evaluation was aimed at undergraduate students, Brett positioned it as a secondary school–level tool as well.
Brett hopes that by using generative AI tools like this, it will be possible to better equip a larger and more diverse pool of students to engage with computing.
Re-examining the concept of programming
Brett concluded his seminar by broadening the relevance of programming to all learners, while challenging us to expand our perspectives of what programming is. If we define programming as a way of prompting a machine to get an output, LLMs allow all of us to do so without the need for learning the syntax of traditional programming languages. Taking that view, Brett left us with a question to consider: “How do we prepare for this from an educational perspective?”
You can watch Brett’s presentation here:
Join our next seminar
The focus of our ongoing seminar series is on teaching programming with or without AI.
For our next seminar on Tuesday 11 June at 17:00 to 18:30 GMT, we’re joined by Veronica Cucuiat (Raspberry Pi Foundation), who will talk about whether LLMs could be employed to help understand programming error messages, which can present a significant obstacle to anyone new to coding, especially young people.
To take part in the seminar, click the button below to sign up, and we will send you information about how to join. We hope to see you there.
It’s been almost a year since we launched our first set of Experience AI resources in the UK, and we’re now working with partner organisations to bring AI literacy to teachers and students all over the world.
Developed by the Raspberry Pi Foundation and Google DeepMind, Experience AI provides everything that teachers need to confidently deliver engaging lessons that will inspire and educate young people about AI and the role that it could play in their lives.
Over the past six months we have been working with partners in Canada, Kenya, Malaysia, and Romania to create bespoke localised versions of the Experience AI resources. Here is what we’ve learned in the process.
Creating culturally relevant resources
The Experience AI Lessons address a variety of real-world contexts to support the concepts being taught. Including real-world contexts in teaching is a pedagogical strategy we at the Raspberry Pi Foundation call “making concrete”. This strategy significantly enhances the learning experience for learners because it bridges the gap between theoretical knowledge and practical application.
The initial aim of Experience AI was for the resources to be used in UK schools. While we put particular emphasis on using culturally relevant pedagogy to make the resources relatable to learners from backgrounds that are underrepresented in the tech industry, the contexts we included in them were for UK learners. As many of the resource writers and contributors were also based in the UK, we also unavoidably brought our own lived experiences and unintentional biases to our design thinking.
Therefore, when we began thinking about how to adapt the resources for schools in other countries, we knew we needed to make sure that we didn’t just convert what we had created into different languages. Instead we focused on localisation.
Localisation goes beyond translating resources into a different language. In educational resources, for example, the real-world contexts used to make abstract concepts concrete need to be culturally relevant, accessible, and engaging for students in a specific place. In properly localised resources, these contexts have been adapted to give educators a more relatable and effective learning experience that resonates with students’ everyday lives and cultural backgrounds.
Working with partners on localisation
Recognising our UK-focused design process, we were careful to make no assumptions during localisation. We worked with partner organisations in the four countries — Digital Moment, Tech Kidz Africa, Penang Science Cluster, and Asociația Techsoup — drawing on their expertise regarding their educational context and the real-world examples that would resonate with young people in their countries.
We asked our partners to look through each of the Experience AI resources and point out the things that they thought needed to change. We then worked with them to find alternative contexts that would resonate with their students, whilst ensuring the resources’ intended learning objectives would still be met.
Spotlight on localisation for Kenya
Tech Kidz Africa, our partner in Kenya, challenged some of the assumptions we had made when writing the original resources.
Relevant applications of AI technology
Tech Kidz Africa wanted the contexts in the lessons to not just be relatable to their students, but also to demonstrate real-world uses of AI applications that could make a difference in learners’ communities. They highlighted that as agriculture is the largest contributor to the Kenyan economy, there was an opportunity to use this as a key theme for making the Experience AI lessons more culturally relevant.
This conversation with Tech Kidz Africa led us to identify a real-world use case where farmers in Kenya were using an AI application that identifies disease in crops and provides advice on which pesticides to use. This helped the farmers to increase their crop yields.
We included this example when we adapted an activity where students explore the use of AI for “computer vision”. A Google DeepMind research engineer, who is one of the General Chairs of the Deep Learning Indaba, recommended a data set of images of healthy and diseased cassava crops (1). We were therefore able to include an activity where students build their own machine learning models to solve this real-world problem for themselves.
Access to technology
While designing the original set of Experience AI resources, we made the assumption that the vast majority of students in UK classrooms have access to computers connected to the internet. This is not the case in Kenya; neither is it the case in many other countries across the world. Therefore, while we localised the Experience AI resources with our Kenyan partner, we made sure that the resources allow students to achieve the same learning outcomes whether or not they have access to internet-connected computers.
Assuming teachers in Kenya are able to download files in advance of lessons, we added “unplugged” options to activities where needed, as well as videos that can be played offline instead of being streamed on an internet-connected device.
What we’ve learned
The work with our first four Experience AI partners has taught us a lot about localisation, and we will apply these lessons as we continue to expand the programme with more partners across the globe:
Cultural specificity: We gained insight into which contexts are not appropriate for non-UK schools, and which contexts all our partners found relevant.
Importance of local experts: We know we need to make sure we involve not just people who live in a country, but people who have a wealth of experience working with learners and understand what is relevant to them.
Adaptation vs standardisation: We have learned about the balance between adapting resources and maintaining the same progression of learning across the Experience AI resources.
Throughout this process we have also reflected on the design principles for our resources and the choices we can make while we create more Experience AI materials in order to make them more amenable to localisation.
Join us as an Experience AI partner
We are very grateful to our partners for collaborating with us to localise the Experience AI resources. Thank you to Digital Moment, Tech Kidz Africa, Penang Science Cluster, and Asociația Techsoup.
We now have the tools to create resources that support a truly global community to access Experience AI in a way that resonates with them. If you’re interested in joining us as a partner, you can register your interest here.
(1) The cassava data set was published open source by Ernest Mwebaze, Timnit Gebru, Andrea Frome, Solomon Nsumba, and Jeremy Tusubira. Read their research paper about it here.
We’re really excited to see that Experience AI Challenge mentors are starting to submit AI projects created by young people. There’s still time for you to get involved in the Challenge: the submission deadline is 24 May 2024.
If you want to find out more about the Challenge, join our live webinar on Wednesday 3 April at 15:30 BST on our YouTube channel.
During the webinar, you’ll have the chance to:
Ask your questions live. Get any Challenge-related queries answered by us in real time. Whether you need clarification on any part of the Challenge or just want advice on your young people’s project(s), this is your chance to ask.
Get introduced to the submission process. Understand the steps of submitting projects to the Challenge. We’ll walk you through the requirements and offer tips for making your young people’s submission stand out.
Learn more about our project feedback. Find out how we will deliver our personalised feedback on submitted projects (UK only).
Find out how we will recognise your creators’ achievements. Learn more about our showcase event taking place in July, and the certificates and posters we’re creating for you and your young people to celebrate submitting your projects.
The Experience AI Challenge, created by the Raspberry Pi Foundation in collaboration with Google DeepMind, guides young people under the age of 18, and their mentors, through the exciting process of creating their own unique artificial intelligence (AI) project. Participation is completely free.
Central to the Challenge is the concept of project-based learning, a hands-on approach that gets learners working together, thinking critically, and engaging deeply with the materials.
In the Challenge, young people are encouraged to seek out real-world problems and create possible AI-based solutions. By taking part, they become problem solvers, thinkers, and innovators.
And to every young person based in the UK who creates a project for the Challenge, we will provide personalised feedback and a certificate of achievement, in recognition of their hard work and creativity. Any projects our experts consider outstanding will be selected as favourites, and their creators will be invited to a showcase event in the summer.
Resources ready for your classroom or club
You don’t need to be an AI expert to bring this Challenge to life in your classroom or coding club. Whether you’re introducing AI for the first time or looking to deepen your young people’s knowledge, the Challenge’s step-by-step resource pack covers all you and your young people need, from the basics of AI, to training a machine learning model, to creating a project in Scratch.
In the resource pack, you will find:
A mentor guide with everything you need to set up and run the Challenge with your young people
A creator guide that supports young people throughout the Challenge and contains talking points to help with planning and designing projects
A blueprint workbook to help creators keep track of their inspiration, ideas, and plans during the Challenge
The pack offers a safety net of scaffolding, support, and troubleshooting advice.
Find out more about the Experience AI Challenge
By bringing the Experience AI Challenge to young people, you’re inspiring the next generation of innovators, thinkers, and creators. The Challenge encourages young people to look beyond the code, to the impact of their creations, and to the possibilities of the future.
You can find out more about the Experience AI Challenge, and download the resource pack, from the Experience AI website.
In the rapidly evolving digital landscape, students are increasingly interacting with AI-powered applications when listening to music, writing assignments, and shopping online. As educators, it’s our responsibility to equip them with the skills to critically evaluate these technologies.
A key aspect of this is understanding ‘explainability’ in AI and machine learning (ML) systems. The explainability of a model is how easy it is to ‘explain’ how a particular output was generated. Imagine having a job application rejected by an AI model, or facial recognition technology failing to recognise you — you would want to know why.
Establishing standards for explainability is crucial. Otherwise we risk creating a world where decisions impacting our lives are made by opaque systems we don’t understand. Learning about explainability is key for students to develop digital literacy, enabling them to navigate the digital world with informed awareness and critical thinking.
Why AI explainability is important
AI models can have a significant impact on people’s lives in various ways. For instance, if a model determines a child’s exam results, parents and teachers would want to understand the reasoning behind it.
Artists might want to know if their creative works have been used to train a model and could be at risk of plagiarism. Likewise, coders will want to know if their code is being generated and used by others without their knowledge or consent. If you came across an AI-generated artwork that features a face resembling yours, it’s natural to want to understand how a photo of you was incorporated into the training data.
There will also be instances where a model seems to be working for some people but is inaccurate for a certain demographic of users. This happened with Twitter’s (now X’s) face detection model in photos; the model didn’t work as well for people with darker skin tones, who found that it could not detect their faces as effectively as their lighter-skinned friends and family. Explainability allows us not only to understand but also to challenge the outputs of a model if they are found to be unfair.
In essence, explainability is about accountability, transparency, and fairness, which are vital lessons for children as they grow up in an increasingly digital world.
Routes to AI explainability
Some models, like decision trees, regression curves, and clustering, have an in-built level of explainability. There is a visual way to represent these models, so we can pretty accurately follow the logic implemented by the model to arrive at a particular output.
A decision tree works like a flowchart, and you can follow the conditions used to arrive at a prediction. Regression curves can be shown on a graph to understand why a particular piece of data was treated the way it was, although this wouldn’t give us insight into exactly why the curve was placed at that point. Clustering collects similar pieces of data together into groups (or clusters); we can then interrogate the model to determine which characteristics were used to create those groupings.
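To see why a decision tree is explainable, it helps to write one out by hand: every prediction is just a path of readable conditions. Here is a minimal, hand-written sketch in Python; the feature names and thresholds are invented for illustration, not taken from any real model.

```python
# A tiny, hand-written decision tree. Because the model is a flowchart
# of conditions, the path taken to a prediction IS its explanation.
# Feature names and thresholds below are illustrative assumptions.

def classify_fruit(weight_g: float, is_smooth: bool) -> tuple[str, list[str]]:
    """Follow the tree like a flowchart, recording each decision."""
    path = []
    if weight_g > 150:
        path.append(f"weight {weight_g}g > 150g")
        label = "grapefruit"
    else:
        path.append(f"weight {weight_g}g <= 150g")
        if is_smooth:
            path.append("skin is smooth")
            label = "apple"
        else:
            path.append("skin is not smooth")
            label = "lemon"
    return label, path

label, path = classify_fruit(120, is_smooth=True)
print(label)                 # → apple
print(" -> ".join(path))     # the human-readable explanation
```

A neural network offers no equivalent of `path`: its “reasoning” is distributed across millions of weights, which is exactly the explainability gap the next paragraph describes.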
However, the more powerful the model, the less explainable it tends to be. Neural networks, for instance, are notoriously hard to understand — even for their developers. The networks used to generate images or text can contain millions of nodes spread across thousands of layers. Trying to work out what any individual node or layer is doing to the data is extremely difficult.
Regardless of the complexity, it is still vital that developers find a way of providing essential information to anyone looking to use their models in an application or to a consumer who might be negatively impacted by the use of their model.
Model cards for AI models
One suggested strategy to add transparency to these models is using model cards. When you buy an item of food in a supermarket, you can look at the packaging and find all sorts of nutritional information, such as the ingredients, macronutrients, allergens they may contain, and recommended serving sizes. This information is there to help inform consumers about the choices they are making.
Model cards attempt to do the same thing for ML models, providing essential information to developers and users of a model so they can make informed choices about whether or not they want to use it.
Model cards include details such as the developer of the model, the training data used, the accuracy across diverse groups of people, and any limitations the developers uncovered in testing.
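The kinds of fields a model card records can be sketched as a simple data structure. Real model cards (such as Google’s) are documents rather than code, so the field names and values below are illustrative assumptions only, chosen to mirror the details listed above.

```python
# Illustrative sketch of model-card fields as a data structure.
# Real model cards are documents; these names and values are invented.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    developer: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    accuracy_by_group: dict[str, float] = field(default_factory=dict)

    def lowest_performing_group(self) -> str:
        # Publishing per-group accuracy lets users spot unfair gaps.
        return min(self.accuracy_by_group, key=self.accuracy_by_group.get)

card = ModelCard(
    model_name="face-detector-demo",
    developer="Example Lab",
    intended_use="Detecting faces in consumer photos",
    training_data="Publicly licensed photo dataset (illustrative)",
    known_limitations=["Lower accuracy in low-light images"],
    accuracy_by_group={"lighter skin tones": 0.97, "darker skin tones": 0.89},
)
print(card.lowest_performing_group())  # → darker skin tones
```

Structuring the card this way makes the fairness question concrete: a consumer can ask directly which group the model serves worst before deciding whether to use it.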
A real-world example of a model card is Google’s Face Detection model card. This details the model’s purpose, architecture, performance across various demographics, and any known limitations of their model. This information helps developers who might want to use the model to assess whether it is fit for their purpose.
Transparency and accountability in AI
As the world settles into the new reality of having the amazing power of AI models at our disposal for almost any task, we must teach young people about the importance of transparency and responsibility.
As a society, we need to have hard discussions about where and when we are comfortable implementing models and the consequences they might have for different groups of people. By teaching students about explainability, we are not only educating them about the workings of these technologies, but also teaching them to expect transparency as they grow to be future consumers or even developers of AI technology.
Most importantly, model cards should be accessible to as many people as possible, presenting this information in a clear and understandable way. They are a great way to show your students what information people need to know about an AI model and why they might want to know it, and they can help students understand the importance of transparency and accountability in AI.
Google DeepMind’s Aimee Welch discusses our partnership on the Experience AI learning programme and why equal access to AI education is key. This article also appears in issue 22 of Hello World on teaching and AI.
From AI chatbots to self-driving cars, artificial intelligence (AI) is here and rapidly transforming our world. It holds the potential to solve some of the biggest challenges humanity faces today — but it also has many serious risks and inherent challenges, like reinforcing existing patterns of bias or “hallucinating”, a term that describes AI making up false outputs that do not reflect real events or data.
Teachers want to build young people’s AI literacy
As AI becomes an integral part of our daily lives, it’s essential that younger generations gain the knowledge and skills to navigate and shape this technology. Young people who have a foundational understanding of AI are able to make more informed decisions about using AI applications in their daily lives, helping ensure safe and responsible use of the technology. This has been recognised, for example, by the UK government’s AI Council, whose AI Roadmap sets out the goal of ensuring that every child in the UK leaves school with a basic sense of how AI works.
But while AI literacy is a key skill in this new era, not every young person currently has access to sufficient AI education and resources. In a recent survey by the EdWeek Research Center in the USA, only one in 10 teachers said they knew enough about AI to teach its basics, and very few reported receiving any professional development related to the topic. Similarly, our work with the Raspberry Pi Computing Education Research Centre has suggested that UK-based teachers are eager to understand more about AI and how to engage their students in the topic.
Bringing AI education into classrooms
Ensuring broad access to AI education is also important for improving diversity in the field of AI, which in turn supports the safe and responsible development of the technology. There are currently stark disparities in the field, and these begin early, with school-level barriers contributing to the underrepresentation of certain groups of people. By increasing diversity in AI, we bring diverse values, hopes, and concerns into the design and deployment of the technology — something that’s critical for AI to benefit everyone.
By focusing on AI education from a young age, there is an opportunity to break down some of these long-standing barriers. That’s why we partnered with the Raspberry Pi Foundation to co-create Experience AI, a new learning programme with free lesson plans, slide decks, worksheets and videos, to address gaps in AI education and support teachers in engaging and inspiring young people in the subject.
The programme aims to help young people aged 11–14 take their first steps in understanding the technology, making it relevant to diverse learners, and encouraging future careers in the field. All Experience AI resources are freely available to every school across the UK and beyond.
The partnership is built on a shared vision to make AI education more inclusive and accessible. Bringing together the Foundation’s expertise in computing education and our cutting-edge technical knowledge and industry insights has allowed us to create a holistic learning experience that connects theoretical concepts and practical applications.
Experience AI: Informed by AI experts
A group of 15 research scientists and engineers at Google DeepMind contributed to the development of the lessons. From drafting definitions for key concepts, to brainstorming interesting research areas to highlight, and even featuring in the videos included in the lessons, the group played a key role in shaping the programme in close collaboration with the Foundation’s educators and education researchers.
To bring AI concepts to life, the lessons include interactive activities as well as real-life examples, such as a project where Google DeepMind collaborated with ecologists and conservationists to develop machine learning methods to study the behaviour of an entire animal community in the Serengeti National Park and Grumeti Reserve in Tanzania.
Member of the working group, Google DeepMind Research Scientist Petar Veličković, shares: “AI is a technology that is going to impact us all, and therefore educating young people on how to interact with this technology is likely going to be a core part of school education going forward. The project was eye-opening and humbling for me, as I learned of the challenges associated with making such a complex topic accessible — not only to every pupil, but also to every teacher! Observing the thoughtful approach undertaken by the Raspberry Pi Foundation left me deeply impressed, and I’m taking home many useful ideas that I hope to incorporate in my own AI teaching efforts going forward.”
The lessons have been carefully developed to:
Follow a clear learning journey, underpinned by the SEAME framework which guides learners sequentially through key concepts and acts as a progression framework.
Build foundational knowledge and provide support for teachers. Focus on teacher training and support is at the core of the programme.
Embed ethics and responsibility. Crucially, key concepts in AI ethics and responsibility are woven into each lesson and progressively built on. Students are introduced to concepts like data bias, user-focused approaches, model cards, and how AI can be used for social good.
Ensure cultural relevance and inclusion. Experience AI was designed with diverse learners in mind and includes a variety of activities to enable young people to pick topics that most interest them.
What teachers say about the Experience AI lessons
To date, we estimate the resources have reached 200,000+ students in the UK and beyond. We’re thrilled to hear from teachers already using the resources about the impact they are having in the classroom, such as Mrs J Green from Waldegrave School in London, who says: “I thought that the lessons covered a really important topic. Giving the pupils an understanding of what AI is and how it works will become increasingly important as it becomes more ubiquitous in all areas of society. The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it. It also started some really interesting discussions with the students around areas such as bias.”
At North Liverpool Academy, teacher Dave Cross tells us: “AI is such a current and relevant topic in society that [these lessons] will enable Key Stage 3 computing students [ages 11–14] to gain a solid foundation in something that will become more prevalent within the curriculum, and wider subjects too as more sectors adopt AI and machine learning as standard. Our Key Stage 3 computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.”
A stronger global AI community
Our hope is that the Experience AI programme instils confidence in both teachers and students, helping to address some of the critical school-level barriers leading to underrepresentation in AI and playing a role in building a stronger, more inclusive AI community where everyone can participate irrespective of their background.
Today’s young people are tomorrow’s leaders — and as such, educating and inspiring them about AI is valuable for everybody.
Teachers can visit experience-ai.org to download all Experience AI resources for free.
We are now building a network of educational organisations around the world to tailor and translate the Experience AI resources so that more teachers and students can engage with them and learn key AI literacy skills. Find out more.
I am delighted to announce that the Raspberry Pi Foundation and Google DeepMind are building a global network of educational organisations to bring AI literacy to teachers and students all over the world, starting with Canada, Kenya, and Romania.
Experience AI
We launched Experience AI in September 2022 to help teachers and students learn about AI technologies and how they are changing the world.
Developed by the Raspberry Pi Foundation and Google DeepMind, Experience AI provides everything that teachers need to confidently deliver engaging lessons that will inspire and educate young people about AI and the role that it could play in their lives.
We provide lesson plans, classroom resources, worksheets, hands-on activities, and videos that introduce a wide range of AI applications and the underlying technologies that make them work. The materials are designed to be relatable to young people and can be taught by any teacher, whether or not they have a technical background. Alongside the classroom resources, we provide teacher professional development, including an online course that provides an introduction to machine learning and AI.
The materials are grounded in real-world contexts and emphasise the potential for young people to positively change the world through a mastery of AI technologies.
Since launching the first resources, we have seen significant demand from teachers and students all over the world, with over 200,000 students already learning with Experience AI.
Experience AI network
Building on that initial success and in response to huge demand, we are now building a global network of educational organisations to expand the reach and impact of Experience AI by translating and localising the materials, promoting them to schools, and supporting teacher professional development.
Obum Ekeke OBE, Head of Education Partnerships at Google DeepMind, says:
“We have been blown away by the interest we have seen in Experience AI since its launch and are thrilled to be working with the Raspberry Pi Foundation and local partners to expand the reach of the programme. AI literacy is a critical skill in today’s world, but not every young person currently has access to relevant education and resources. By making AI education more inclusive, we can help young people make more informed decisions about using AI applications in their daily lives, and encourage safe and responsible use of the technology.”
Today we are announcing the first three organisations that we are working with, each of which is already doing fantastic work to democratise digital skills in their part of the world. All three are already working in partnership with the Raspberry Pi Foundation and we are excited to be deepening and expanding our collaboration to include AI literacy.
Digital Moment, Canada
Digital Moment is a Montreal-based nonprofit focused on empowering young changemakers through digital skills. Founded in 2013, Digital Moment has a track record of supporting teachers and students across Canada to learn about computing, coding, and AI literacy, including through supporting one of the world’s largest networks of Code Clubs.
“We’re excited to be working with the Raspberry Pi Foundation and Google DeepMind to bring Experience AI to teachers across Canada. Since 2018, Digital Moment has been introducing rich training experiences and educational resources to make sure that Canadian teachers have the support to navigate the impacts of AI in education for their students. Through this partnership, we will be able to reach more teachers and with more resources, to keep up with the incredible pace and disruption of AI.”
Indra Kubicek, President, Digital Moment
Tech Kidz Africa, Kenya
Tech Kidz Africa is a Mombasa-based social enterprise that nurtures creativity in young people across Kenya through digital skills including coding, robotics, app and web development, and creative design thinking.
“With the retooling of teachers as a key objective of Tech Kidz Africa, working with Google DeepMind and the Raspberry Pi Foundation will enable us to build the capacity of educators to empower the 21st century learner, enhancing the teaching and learning experience to encourage innovation and prepare the next generation for the future of work.”
Grace Irungu, CEO, Tech Kidz Africa
Asociația Techsoup, Romania
Asociația Techsoup works with teachers and students across Romania and Moldova, training Computer Science, ICT, and primary school teachers to build their competencies around coding and technology. A longstanding partner of the Raspberry Pi Foundation, they foster a vibrant community of CoderDojos and support young people to participate in Coolest Projects and the European Astro Pi Challenge.
“We are enthusiastic about participating in this global partnership to bring high-quality AI education to all students, regardless of their background. Given the current exponential growth of AI tools and instruments in our daily lives, it is crucial to ensure that students and teachers everywhere comprehend and effectively utilise these tools to enhance their human, civic, and professional potential. Experience AI is the best available method for AI education for middle school students. We couldn’t be more thrilled to work with the Raspberry Pi Foundation and Google DeepMind to make it accessible in Romanian for teachers in Romania and the Republic of Moldova, and to assist teachers in fully integrating it into their classes.”
Elena Coman, Director of Development, Asociația Techsoup
Get involved
These are the first of what will become a global network of organisations supporting tens of thousands of teachers to equip millions of students with a foundational understanding of AI technologies through Experience AI. If you want to get involved in inspiring the next generation of AI leaders, we would love to hear from you.
We are pleased to announce a new AI-themed challenge for young people: the Experience AI Challenge invites and supports young people aged up to 18 to design and make their own AI applications. This is their chance to have a taste of getting creative with the powerful technology of machine learning. And equally exciting: every young creator will get feedback and encouragement from us at the Raspberry Pi Foundation.
As you may have heard, we recently launched a series of classroom lessons called Experience AI in partnership with Google DeepMind. The lesson materials make it easy for teachers of all subjects to teach their learners aged up to 18 about artificial intelligence and machine learning. Now the Experience AI Challenge gives young people the opportunity to develop their skills further and build their own AI applications.
Key information
Starts on 08 January 2024
Free to take part in
Designed for beginners, based on the tools Scratch and Machine Learning for Kids
Open for official submissions made by UK-based young people aged up to 18 and their mentors
Young people and their mentors around the world are welcome to access the Challenge resources and make AI projects
Tailored resources for young people and mentors to support you to take part
For the Experience AI Challenge, you and the young people you work with will learn how to make a machine learning (ML) classifier that organises data types such as audio, text, or images into different groupings that you specify.
The Challenge resources show young people the basic principles of using the tools and training ML models. Then they will use these new skills to create their own projects, and it’s a chance for their imaginations to run free. Here are some examples of projects your young tech creators could make:
An instrument classifier to identify the type of musical instrument being played in pieces of music
An animal sound identifier to determine which animal is making a particular sound
A voice command recogniser to detect voice commands like ‘stop’, ‘go’, ‘left’, and ‘right’
A photo classifier to identify what kind of food is shown in a photograph
All creators will receive expert feedback on their projects.
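In the Challenge itself, creators build their classifiers in Scratch using Machine Learning for Kids. For mentors who are curious about what a classifier is doing under the hood, here is a minimal sketch in plain Python: it learns a "typical" example (a centroid) for each label from training data, then assigns new inputs to the nearest label. The two-number features and the animal-sound data are hypothetical stand-ins for the real audio or image features a tool like Machine Learning for Kids would extract.

```python
# A minimal sketch of what an ML classifier does: learn a "typical"
# example (centroid) per label, then assign new inputs to the nearest one.
# The features and labels below are hypothetical toy data.

def train(examples):
    """examples: list of (features, label). Returns a dict of label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(model, features):
    """Return the label whose centroid is closest to the input features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Hypothetical training set: two made-up features per animal sound clip.
training = [
    ([0.9, 0.1], "dog"), ([0.8, 0.2], "dog"),
    ([0.1, 0.9], "cat"), ([0.2, 0.8], "cat"),
]
model = train(training)
print(classify(model, [0.85, 0.15]))  # a new clip that resembles the "dog" examples
```

The same idea scales up: real tools extract many more features automatically and use more sophisticated models, but "sort new examples into the groupings you specified during training" is the core of every project listed above.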
To make the Experience AI Challenge as familiar and accessible as possible for young people who may be new to coding, we designed it for beginners. We chose the free, easy-to-use, online tool Machine Learning for Kids for young people to train their machine learning models, and Scratch as the programming environment for creators to code their projects. If you haven’t used these tools before, don’t worry. The Challenge resources will provide all the support you need to get up to speed.
Training an ML model and creating a project with it teaches many skills beyond coding, including computational thinking, ethical programming, data literacy, and developing a broader understanding of the influence of AI on society.
The three Challenge stages
Our resources for creators and mentors walk you through the three stages of the Experience AI Challenge.
Stage 1: Explore and discover
The first stage of the Challenge is designed to ignite young people’s curiosity. Through our resources, mentors help participants explore the world of AI and ML and discover how these technologies are revolutionising industries like healthcare and entertainment.
Stage 2: Get hands-on
In the second stage, young people choose a data type and embark on a guided example project. They create a training dataset, train an ML model, and develop a Scratch application as the user interface for their model.
Stage 3: Design and create
In the final stage, mentors support young people to apply what they’ve learned to create their own ML project that addresses a problem they’re passionate about. They submit their projects to us online and receive feedback from our expert panel.
Recent developments in artificial intelligence are changing how the world sees computing and challenging computing educators to rethink their approach to teaching. In the brand-new issue of Hello World, out today for free, we tackle some big questions about AI and computing education. We also get practical with resources for your classroom.
Teaching and AI
In their articles for issue 22, educators explore a range of topics related to teaching and AI, including what AI literacy is and how we teach it, gender bias in AI and what we can do about it, how to speak to young children about AI, and why anthropomorphism hinders learners’ understanding of AI.
Our feature articles also include a research digest on AI ethics for children, and of course hands-on examples of AI lessons for your learners.
A snapshot of AI education
Hello World issue 22 is a comprehensive snapshot of the current landscape of AI education. Ben Garside, Learning Manager for our Experience AI programme and guest editor of this issue, says:
“When I was teaching in the classroom, I used to enjoy getting to grips with new technological advances and finding ways in which I could bring them into school and excite the students I taught. Occasionally, during the busiest of times, I’d also look longingly at other subjects and be jealous that their curriculum appeared to be more static than ours (probably a huge misconception on my behalf).”
“It’s inspiring for me to see how the education community is reacting to the opportunities that AI can provide. Of course, there are elements of AI where we need to tread carefully and be very cautious in our approach, but what you’ll see in this magazine is educators who are thinking creatively in this space.”
Download Hello World issue 22 for free
AI is a topic we’ve addressed before in Hello World, and we’ll keep covering this rapidly evolving area in future. We hope this issue gives you plenty of ideas to take away and build upon.
Also in issue 22:
Vocational training for young people
Making the most of online educator training
News about BBC micro:bit
An insight into the WiPSCE 2023 conference for teachers and educators
And much, much more
You can download your free PDF issue now, or purchase a print copy from our store. UK-based subscribers to the free print edition can expect their copies to arrive in the mail this week.
Send us a message or tag us on social media to let us know which articles have made you think and, most importantly, which will help you with your teaching.
It’s been less than a year since ChatGPT catapulted generative artificial intelligence (AI) into mainstream public consciousness, reigniting the debate about the role that these powerful new technologies will play in all of our futures.
‘Will AI save or destroy humanity?’ might seem like an extreme title for a podcast, particularly if you’ve played with these products and enjoyed some of their obvious limitations. The reality is that we are still at the foothills of what AI technology can achieve (think World Wide Web in the 1990s), and lots of credible people are predicting an astonishing pace of progress over the next few years, promising the radical transformation of almost every aspect of our lives. Comparisons with the Industrial Revolution abound.
At the same time, there are those saying it’s all moving too fast; that regulation isn’t keeping pace with innovation. One of the UK’s leading AI entrepreneurs, Mustafa Suleyman, said recently: “If you don’t start from a position of fear, you probably aren’t paying attention.”
What does all this mean for education, and particularly for computing education? Is there any point trying to teach children about AI when it is all changing so fast? Does anyone need to learn to code anymore? Will teachers be replaced by chatbots? Is assessment as we know it broken?
If we’re going to seriously engage with these questions, we need to understand that we’re talking about three different things:
AI literacy: What it is and how we teach it
Rethinking computer science (and possibly some other subjects)
Enhancing teaching and learning through AI-powered technologies
AI literacy: What it is and how we teach it
For young people to thrive in a world that is being transformed by AI systems, they need to understand these technologies and the role they could play in their lives.
The first problem is defining what AI literacy actually means. What are the concepts, knowledge, and skills that it would be useful for a young person to learn?
In the past couple of years there has been a huge explosion in resources that claim to help young people develop AI literacy. Our research team mapped and categorised over 500 resources, and undertook a systematic literature review to understand what research has been done on K–12 AI classroom interventions (spoiler: not much).
The reality is that — with a few notable exceptions — the vast majority of AI literacy resources available today are probably doing more harm than good. For example, in an attempt to be accessible and fun, many materials anthropomorphise AI systems, using human terms to describe them and their functions and thereby perpetuating misconceptions about what AI systems are and how they work.
What emerged from this work at the Raspberry Pi Foundation is the SEAME model, which articulates the concepts, knowledge, and skills that are essential ingredients of any AI literacy curriculum. It separates out the social and ethical, application, model, and engine levels of AI systems — all of which are important — and gets specific about age-appropriate learning outcomes for each.
This research has formed the basis of Experience AI (experience-ai.org), a suite of resources, lesson plans, videos, and interactive learning experiences created by the Raspberry Pi Foundation in partnership with Google DeepMind, which is already being used in thousands of classrooms.
Defining AI literacy and developing resources is part of the challenge, but that doesn’t solve the problem of how we get them into the hands and minds of every young person. This will require policy change. We need governments and education system leaders to grasp that a foundational understanding of AI technologies is essential for creating economic opportunity, ensuring that young people have the mindsets to engage positively with technological change, and avoiding a widening of the digital divide. We’ve messed this up before with digital skills. Let’s not do it again.
More than anything, we need to invest in teachers and their professional development. While there are some fantastic computing teachers with computer science qualifications, the reality is that most of the computing lessons taught anywhere on the planet are taught by a non-specialist teacher. That is even more so the case for anything related to AI. If we’re serious about AI literacy for young people, we have to get serious about AI literacy for teachers.
Rethinking computer science
Alongside introducing AI literacy, we also need to take a hard look at computer science. At the very least, we need to make sure that computer science curricula include machine learning models, explaining how they constitute a new paradigm for computing, and give more emphasis to the role that data will play in the future of computing. Adding anything new to an already packed computer science curriculum means tough choices about what to deprioritise to make space.
And, while we’re reviewing curricula, what about biology, geography, or any of the other subjects that are just as likely to be revolutionised by big data and AI? As part of Experience AI, we are launching some of the first lessons focusing on ecosystems and AI, which we think should be at the heart of any modern biology curriculum.
There is already a lively debate about the extent to which the new generation of AI technologies will make programming as we know it obsolete. In January, the prestigious journal Communications of the ACM ran an opinion piece from Matt Welsh, founder of an AI-powered programming start-up, in which he said: “I believe the conventional idea of ‘writing a program’ is headed for extinction, and indeed, for all but very specialised applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed.”
With GitHub (now part of Microsoft) claiming that their pair programming technology, Copilot, is now writing 46 percent of developers’ code, it’s perhaps not surprising that some are saying young people don’t need to learn how to code. It’s an easy political soundbite, but it just doesn’t stand up to serious scrutiny.
Even if AI systems can improve to the point where they generate consistently reliable code, it seems to me that it is just as likely that this will increase the demand for more complex software, leading to greater demand for more programmers. There is historical precedent for this: the invention of high-level programming languages such as Python dramatically simplified the act of humans providing instructions to computers, leading to more complex software and a much greater demand for developers.
However these AI-powered tools develop, it will still be essential for young people to learn the fundamentals of programming and to get hands-on experience of writing code as part of any credible computer science course. Practical experience of writing computer programs is an essential part of learning how to analyse problems in computational terms; it brings the subject to life; it will help young people understand how the world around them is being transformed by AI systems; and it will ensure that they are able to shape that future, rather than it being something that is done to them.
Enhancing teaching and learning through AI-powered technologies
Technology has already transformed learning. YouTube is probably the most important educational innovation of the past 20 years, democratising both the creation and consumption of learning resources. Khan Academy, meanwhile, integrated video instruction into a learning experience that gamified formative assessment. Our own edtech platform, Ada Computer Science, combines comprehensive instructional materials, a huge bank of questions designed to help learning, and automated marking and feedback to make computer science easier to teach and learn. Brilliant though these are, none of them have even begun to harness the potential of AI systems like large language models (LLMs).
One area where I think we’ll see huge progress is feedback. It’s well-established that good-quality feedback makes a huge difference to learning, but a teacher’s ability to provide feedback is limited by their time. No one is seriously claiming that chatbots will replace teachers, but — if we can get the quality right — LLM applications could provide every child with unlimited, on-demand feedback. AI-powered feedback — not giving students the answers, but coaching, suggesting, and encouraging in the way that great teachers already do — could be transformational.
We are already seeing edtech companies racing to bring new products and features to market that leverage LLMs, and my prediction is that the pace of that innovation is going to increase exponentially over the coming years. The challenge for all of us working in education is how we ensure that ethics and privacy are at the centre of the development of these technologies. That’s important for all applications of AI, but especially so in education, where these systems will be unleashed directly on young people. How much data from students will an AI system need to access? Can that data — aggregated from millions of students — be used to train new models? How can we communicate transparently the limitations of the information provided back to students?
Ultimately, we need to think about how parents, teachers, and education systems (the purchasers of edtech products) will be able to make informed choices about what to put in front of students. Standards will have an important role to play here, and I think we should be exploring ideas such as an AI kitemark for edtech products that communicates whether they meet a set of standards around bias, transparency, and privacy.
Realising potential in a brave new world
We may very well be entering an era in which AI systems dramatically enhance the creativity and productivity of humanity as a species. Whether the reality lives up to the hype or not, AI systems are undoubtedly going to be a big part of all of our futures, and we urgently need to figure out what that means for education, and what skills, knowledge, and mindsets young people need to develop in order to realise their full potential in that brave new world.
That’s the work we’re engaged in at the Raspberry Pi Foundation, working in partnership with individuals and organisations from across industry, government, education, and civil society.
If you have ideas and want to get involved in shaping the future of computing education, we’d love to hear from you.
This article will also appear in issue 22 of Hello World magazine, which focuses on teaching and AI. We are publishing this new issue on Monday 23 October. Sign up for a free digital subscription to get the PDF straight to your inbox on the day.
New artificial intelligence (AI) tools have had a profound impact on many areas of our lives in the past twelve months, including on education. Teachers and schools have been exploring how AI tools can transform their work, and how they can teach their learners about this rapidly developing technology. As enabling all schools and teachers to help their learners understand computing and digital technologies is part of our mission, we’ve been working hard to support educators with high-quality, free teaching resources about AI through Experience AI, our learning programme in partnership with Google DeepMind.
In this article, we take you through the updates we’ve made to the Experience AI Lessons based on teachers’ feedback, reveal two new lessons on large language models (LLMs) and biology, and give you the chance to shape the future of the Experience AI programme.
Updated lessons based on your feedback
In April we launched the first Experience AI Lessons as a unit of six lessons for secondary school students (ages 11 to 14, Key Stage 3) that gives you everything you need to teach AI, including lesson plans, slide decks, worksheets, and videos. Since the launch, we’ve worked closely with teachers and learners to make improvements to the lesson materials.
The first big update you’ll see now is an additional project for students to do across Lesson 5 and Lesson 6. Before, students could choose between two projects to create their own machine learning model, either to classify data from the world’s oceans or to identify fake news. The new project we’ve added gives students the chance to use images to train a machine learning model to identify whether or not an item is biodegradable and therefore suitable to be put in a food waste bin.
Our second big update is a new set of teacher-focused videos that summarise each lesson and highlight possible talking points. We hope these videos will help you feel confident and ready to deliver the Experience AI Lessons to your learners.
A new lesson on large language models
As well as updating the six existing lessons, we’ve just released a new seventh lesson consisting of a set of activities to help students learn about the capabilities, opportunities, and downsides of LLMs, the models that AI chatbots are based on.
With the LLM lesson’s activities you can help your learners to:
Explore the purpose and functionality of LLMs and examine the critical aspect of trustworthiness of these models’ outputs
Examine the reasons why the output of LLMs may not always be reliable and understand that LLMs are machines that make predictions
Compare LLMs to other technologies to assess their suitability for different purposes
Evaluate the appropriateness of using LLMs in a variety of authentic scenarios
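The lesson’s central point that LLMs are machines that make predictions can be illustrated with a toy model. The sketch below (our illustration, not part of the lesson materials) builds a bigram predictor: it counts which word follows which in some training text, then predicts the most frequent follower. A real LLM does the same job of next-token prediction, at a vastly larger scale and with learned neural representations instead of raw counts; crucially, neither the toy nor the real model “knows” whether its prediction is true, which is why their outputs may not always be reliable.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in the
# training text, then predict the most frequent follower. This is
# prediction, not understanding -- the same reason LLM outputs need
# to be checked for reliability.

def train_bigrams(text):
    """Return a mapping of word -> Counter of the words that follow it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    """Predict the most likely next word, or None for unseen words."""
    counts = model.get(word.lower())
    if not counts:
        return None  # the model can only predict from what it has seen
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" -- the most common follower of "the"
```

A classroom-friendly takeaway: the model confidently outputs “cat” after “the” because that pattern dominated its training data, not because it is correct in any given sentence.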
All Experience AI Lessons are designed to be cross-curricular, and for England-based teachers, the LLM lesson is particularly useful for teaching PSHE (Personal, Social, Health and Economic education).
The LLM lesson is designed as a set of five 10-minute activities, so you have the flexibility to teach the material as a single lesson or over a number of sessions. While we recommend that you teach the activities in the order they come, you can easily adapt them for your learners’ interests and needs. Feel free to take longer than our recommended time and have fun with them.
A new lesson on biology: AI for the Serengeti
We have also been working on an exciting new lesson to introduce AI to secondary school students (ages 11 to 14, Key Stage 3) in the biology classroom. This stand-alone lesson focuses on how AI can help conservationists with monitoring an ecosystem in the Serengeti.
We worked alongside members of the Biology Education Research Group (BERG) at the UK’s Royal Society of Biology to make sure the lesson is relevant and accessible for Key Stage 3 teachers and their learners.
Register your interest if you would like to be one of the first teachers to try out this thought-provoking lesson.
Webinars to support your teaching
If you want to use the Experience AI materials but would like more support, our new webinar series will help you. You will get your questions answered by the people who created the lessons. Our first webinar covered the six-lesson unit and you can watch the recording now:
September’s webinar: How to use Machine Learning for Kids
Join us to learn how to use Machine Learning for Kids (ML4K), a child-friendly tool for training AI models that is used for project work throughout the Experience AI Lessons. The September webinar will be with Dale Lane, who has spent his career developing AI technology and is the creator of ML4K.
Become part of our teacher feedback panel. We meet every half term, and our first session will be held mid-October. Email us to register your interest and we’ll be in touch.
To find out more about how you can use Experience AI to teach AI and machine learning to your learners this school year, visit the Experience AI website.