All posts by Jane Waite

AI education resources: What do we teach young people?

Post Syndicated from Jane Waite original https://www.raspberrypi.org/blog/ai-education-resources-what-to-teach-seame-framework/

People have many different reasons to think that children and teenagers need to learn about artificial intelligence (AI) technologies. Whether it’s that AI impacts young people’s lives today, or that understanding these technologies may open up careers in their future — there is broad agreement that school-level education about AI is important.

A young person writes Python code.

But how do you actually design lessons about AI, a technical area that is entirely new to young people? That was the question we needed to answer as we started Experience AI, our exciting collaboration with DeepMind, a leading AI company.

Our approach to developing AI education resources

As part of Experience AI, we are creating a free set of lesson resources to help teachers introduce AI and machine learning (ML) to KS3 students (ages 11 to 14). In England this area is not currently part of the national curriculum, but it’s starting to appear in all sorts of learning materials for young people. 

Two learners and a teacher in a physical computing lesson.

While developing the six Experience AI lessons, we took a research-informed approach. We built on insights from the series of research seminars on AI and data science education we had hosted in 2021 and 2022, and on research we ourselves have been conducting at the Raspberry Pi Computing Education Research Centre.

We reviewed over 500 existing resources that are used to teach AI and ML.

As part of this research, we reviewed over 500 existing resources that are used to teach AI and ML. We found that the vast majority of them were one-off activities, and many claimed to be appropriate for learners of any age. There were very few sets of lessons, or units of work, that were tailored to a specific age group. Activities often had vague learning objectives, or none at all. We rarely found associated assessment activities. These were all shortcomings we wanted to avoid in our set of lessons.

To analyse the content of AI education resources, we use a simple framework called SEAME. This framework is based on work I did in 2018 with Professor Paul Curzon at Queen Mary University of London, running professional development for educators on teaching machine learning.


The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works). We hope that it will be a useful tool for anyone who is interested in looking at resources to teach AI. 

What do AI education resources focus on?

The four levels of the SEAME framework do not indicate a hierarchy or sequence. Instead, they offer a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities.

Social and ethical aspects (SE)

The SE level covers activities that relate to the impact of AI on everyday life, and to its implications for society. Learning objectives and their related resources categorised at this level introduce students to topics such as privacy and bias concerns, the impact of AI on employment, misinformation, and the potential benefits of AI applications.

A slide from a lesson about AI that describes an AI application related to timetables.
An example activity in the Experience AI lessons where learners think about the social and ethical issues of an AI application that predicts what subjects they might want to study. This activity is mostly focused on the social and ethical level of the SEAME framework, but also links to the applications and models levels.

Applications (A)

The A level refers to activities related to applications and systems that use AI or ML models. At this level, learners do not learn how to train models themselves, or how such models work. Learning objectives at this level include knowing a range of AI applications and starting to understand the difference between rule-based and data-driven approaches to developing applications.

Models (M)

The M level concerns the models underlying AI and ML applications. Learning objectives at this level include learners understanding the processes used to train and test models. For example, through resources focused on the M level, students could learn about the different learning paradigms of ML (i.e., supervised, unsupervised, or reinforcement learning).

A slide from a lesson about AI that describes an ML model to classify animals.
An example activity in the Experience AI lessons where students learn about classification. This activity is mostly focused on the models level of the SEAME framework, but also links to the social and ethical and the applications levels.

Engines (E)

The E level is related to the engines that make AI models work. This is the most hidden and complex level, and for school-aged learners it may need to be taught using unplugged activities and visualisations. Learning objectives could include understanding the basic workings of systems such as data-driven decision trees and artificial neural networks.
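To give a flavour of the engines level, the short sketch below (which is not taken from the Experience AI lessons) trains a tiny data-driven decision tree using the scikit-learn library; the animal data and feature names are invented purely for illustration.

```python
# A minimal sketch of a data-driven decision tree, using scikit-learn.
# Not taken from the Experience AI lessons; the toy data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example animal is described by two features: [has_fur, lays_eggs]
X = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
y = ["mammal", "mammal", "bird", "bird", "mammal"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["has_fur", "lays_eggs"]))  # the learned rules
print(tree.predict([[0, 1]]))  # an unseen animal with no fur that lays eggs -> ['bird']
```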

Covering the four levels

Some learning activities may focus on a single level, but activities can also span more than one level. For example, an activity may start with learners trying out an existing ‘rock-paper-scissors’ application that uses an ML model to recognise hand shapes. This would cover the applications level. If learners then move on to train the model to improve its accuracy by adding more image data, they work at the models level.

A teacher helps a young person with a coding project.

Other activities cover several SEAME levels to address a specific concept. For example, an activity focused on bias might start with an example of the societal impact of bias (SE level). Learners could then discuss the AI applications they use and reflect on how bias impacts them personally (A level). The activity could finish with learners exploring related data in a simple ML model and thinking about how representative the data is of all potential application users (M level).

The set of lessons on AI we are developing in collaboration with DeepMind covers all four levels of SEAME.

The set of Experience AI lessons we are developing in collaboration with DeepMind covers all four levels of SEAME. The lessons are based on carefully designed learning objectives and specifically targeted to KS3 students. Lesson materials include presentations, videos, student activities, and assessment questions.

We’re releasing the Experience AI lessons very soon — if you want to be the first to hear news about them, please sign up here.

The SEAME framework as a tool for research on AI education

For researchers, we think the SEAME framework will, for example, be useful to analyse school curriculum material to see whether some age groups have more learning activities available at one level than another, and whether this changes over time. We may find that primary school learners work mostly at the SE and A levels, and secondary school learners move between the levels with increasing clarity as they develop their knowledge. It may also be the case that some learners or teachers prefer activities focused on one level rather than another. However, we can’t be sure: research is needed to investigate the teaching and learning of AI and ML across all year groups.
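As a very small illustration of that kind of analysis, here is a hypothetical sketch of tagging learning objectives with SEAME levels and counting how many objectives touch each level; the objectives, names, and tags are all invented for the example.

```python
# A hypothetical sketch of using SEAME to tag and count learning objectives.
# The objectives and their tags are invented; only the level names come from the framework.
from enum import Enum

class SEAME(Enum):
    SE = "social and ethical"
    A = "applications"
    M = "models"
    E = "engines"

objectives = {
    "Discuss how bias in data can affect different groups of people": {SEAME.SE, SEAME.M},
    "Identify everyday applications that use machine learning": {SEAME.A},
    "Describe how a model is trained and tested": {SEAME.M},
    "Trace a decision through a simple decision tree": {SEAME.E},
}

# Count how many objectives touch each level (an objective can span several levels)
coverage = {level.name: sum(level in tags for tags in objectives.values()) for level in SEAME}
print(coverage)  # e.g. {'SE': 1, 'A': 1, 'M': 2, 'E': 1}
```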

That’s why we’re excited to welcome Salomey Afua Addo to the Raspberry Pi Computing Education Research Centre. Salomey joined the Centre as a PhD student in January, and her research will focus on approaches to the teaching and learning of AI. We’re looking forward to seeing the results of her work.

The post AI education resources: What do we teach young people? appeared first on Raspberry Pi Foundation.

Are you technocentric? Shifting from technology to people

Post Syndicated from Jane Waite original https://www.raspberrypi.org/blog/technocentrism-shifting-from-technology-to-people-computing-education-pratim-sengupta-research-seminar/

When we teach children and young people about computing, do we consider how the subject has developed over time, how it relates to our students’ lives, and importantly, what our values are? In our June 2022 research seminar, Professor Pratim Sengupta shared some of the research that he and his colleagues have been working on related to these questions.

Pratim Sengupta.
Prof. Pratim Sengupta

Pratim revealed a complex landscape where we as educators can be easily trapped by what may seem like good intentions, thereby limiting learning and excluding some students. His presentation, entitled Computational heterogeneity in STEM education, introduced me to the concept of technocentrism and profoundly impacted my thinking about the essence of programming and how I research it. In this blog post, particularly for those unable to attend this stimulating seminar, I give my simplified view of the rich philosophy shared by Pratim, and my fledgling steps to admit to my technocentrism and overcome it.

Our seminars on teaching cross-disciplinary computing

Between May 2022 and November 2022, we are hosting a new series of free research seminars about teaching computing in different ways and in different contexts. This second seminar of the series was well attended with participants from the USA, Asia, Africa, and Europe, including teachers, researchers, and industry professionals, who contributed to a lively and thought-provoking discussion.

Two teachers and a group of learners are gathered around a laptop screen.

Pratim is a learning scientist based in Canada with a long and distinguished career. He has studied how to teach computational modelling in K-12 STEM classrooms and investigates the complexity of learning. Grounded in working with teachers and students, he brings together computing, science, education, and social justice. Based on his work at Northwestern University, Vanderbilt University, and now with the Mind, Matter and Media lab at the University of Calgary, Pratim has published hundreds of academic papers over some 20 years. Pratim and his team challenge how we focus on making technological artefacts — code for code’s sake — in computing education, and refocus us on the human experience of coding and learning to code.

What is technocentrism?

Pratim started the seminar by giving us an overview of some of the key ideas that underpin the way that computing is usually taught in schools, including technocentrism (Figure 1).

Pratim Sengupta's summary of technocentrism: device-centred approaches for pedagogy and computational design; ignores teaching, social and institutional infrastructures, cultural histories; transparency or universality of code as symbolic power; recursive methods for education research, experience measured by being folded back onto devices; leads to symbolic violence, misrecognition of experience, muting and omission of voices, affect and moral dimensions of experience.
Figure 1: The features of technocentrism, a way of thinking about how we teach computing, particularly programming (Sengupta, 2022). Click to enlarge.

I have come to a simplified understanding of technocentrism. To me, it appears to be a way of looking at how we learn about computer science, where one might:

  • Focus on the finished product (e.g. a computer program), rather than thinking about the people who create, learn about, or use a program
  • Ignore the context and the environment, rather than paying attention to the history, the political situation, and the social context of the task at hand
  • View computing tasks as being implemented (enacted) by writing code, rather than seeing computing activities as rich and complex jumbles of meaning-making and communication that involve people using chatter, images, and lots of gestures
  • Anchor learning in concepts and skills, rather than placing the values and viewpoints of learners at the heart of teaching 

Examples of technocentrism and how to overcome it

Pratim recounted several research activities that he and his team have engaged with. These examples highlight instances of potential technocentrism and investigate how we might overcome it.

In the first example research activity, Pratim explained how in maths and physics lessons, middle school students were asked to develop models to solve time and distance problems. Rather than immediately coding a potential solution, the researcher and teacher supported the learners in spending time developing a shared perspective to understand and express the problems first. Students grappled with different ways of representing the context, including graphs and diagrams (see Figure 2). Gradually and carefully, the teachers guided students to recognise what was important and what was not, moving them toward a meaningful language to describe and solve the problems.

Research results from Pratim Sengupta showing students' graph designs and how much time they spent on various activities during the graphing task.
Figure 2: Two graphs from students showing different representations of a context, and a researcher’s bar chart representing how students’ shared understanding emerged over time (Sengupta, 2022). Click to enlarge.

In a second example research activity, students were asked to build a machine that draws shapes using sensors, motors, and code. Rather than jumping straight to a solution, the students spent time with authentic users of their machines. Throughout the process, students worked with others, expressing the context through physical movement, clarifying their thoughts by drawing diagrams, and finding the sweet spot between coding, engineering design, and maths (see Figure 3).

Research results from Pratim Sengupta showing images documenting a physical computing design activity and how learners explained their design.
Figure 3:  Students used physical movements and user guides to be with others and publicly share and experience the task with authentic users (Sengupta, 2022). Click to enlarge.

In a third example research activity, racial segregation of US communities was discussed with pre-service teachers. The predominantly white teachers found talking about the topic very difficult at the beginning of the activity. To overcome this hesitancy, teachers were first asked to work with a simulation that modelled the process of segregation through abstracted dots (or computational agents), a transitional other. Following this hypothetical representation, the context was then recontextualised through a map of real data points of the ethnicity of residents in an area of the US; this kind of map, based on US census data, is called a Racial Dot Map. When the teachers were able to interpret the link between the abstracted dot simulation and the real-world data, they were able to talk about racism and segregation in a way they could not do before. The initial simulation and the recontextualisation were a pedagogical tool to reveal racism and provide a space where students felt comfortable discussing values and beliefs that would otherwise have remained implicit.

Pratim Sengupta explains a research activity with predominantly white pre-service teachers who learned to discuss racism and segregation through a transitional othering activity using maps and graphing census data.
Figure 4: To facilitate discussion of racial segregation, a simulation was used that bridges abstracted dots and real people, giving pre-service teachers a space to reflect on discrimination  (Sengupta, 2022). Click to enlarge.

My takeaways

Pratim shared four implications of this research for computing pedagogy (see Figure 5).

Pratim Sengupta presents the pedagogical implications of shifting from technocentrism to perspectival heterogeneity in education: code as utterances and intertext; heterogeneity and transformation of representational genres, code lives in translation; teachers' voice needs to be centred in system and activity design and classroom work, researchers must listen; uncertainty and ambiguity play central roles, recognition takes time.
Figure 5: Pratim’s four implications for pedagogy. Click to enlarge.

As a researcher of pedagogy, these points provide takeaways that I can relate to my own research practice:

  • Code is a voice within an experience rather than symbols at a point in time. For example, when I listen to students predicting what a snippet of code will do, I think of the active nature of each carefully chosen command and how the code corresponds differently with each student.
  • Code lives as a translation bridging many dimensions, such as data representation, algorithms, syntax, and user views. This statement resonates deeply with my liking of Carsten Schulte’s block model [1] but extends to include the people involved.
  • We should listen carefully and attentively to teachers, rather than making assumptions about what happens in classrooms. Teachers create new ideas. This takeaway is very important and reminds me about the trust and relationships built between teachers and researchers and how important it is to listen.
  • Uncertainty and ambiguity exist in learning, and this can take time to recognise. This final point makes me smile. As a developer, teacher, and researcher, I have found dealing with ambiguity hard at various points in my career. Still, over time, I think I am getting better at seeing it and celebrating it. 

Listening to Pratim share his research on the teaching and learning of computing and the pitfalls of technocentrism has made me think deeply about how I view computer science as a subject and do research about it. I have shared some of my reflections in this blog, and I plan to incorporate the underlying theory and ideas in my ongoing research projects.

If you would like to find out more about Pratim’s work, please look over his slides, watch his presentation, read the upcoming chapter in our seminar proceedings, or respond to this blog by leaving a comment so we can discuss!

Join our next seminar

We have another four seminars in our current series on cross-disciplinary computing.

At our next seminar on 12 July 2022 at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Prof. Yasmin Kafai and Elaine Griggs, who are going to present research on introductory equity-oriented computer science with electronic textiles for high school students.

We look forward to meeting you there.


[1] You can learn more in the Hello World article where our Chief Learning Officer Sue Sentance talks about the block model.

The post Are you technocentric? Shifting from technology to people appeared first on Raspberry Pi.

170 research papers about teaching programming, summarised

Post Syndicated from Jane Waite original https://www.raspberrypi.org/blog/research-report-teaching-programming/

Computer programming is now part of the school curriculum in England and many other countries. Although not necessarily the primary focus of the computing curriculum, programming can be the area teachers find most challenging to teach. There is much evidence emerging from research on how to teach programming, particularly from projects with undergraduate learners. That’s why I recently wrote a report summarising over 170 programming pedagogy papers: Teaching programming in schools: A review of approaches and strategies.

In a computing classroom, a smiling girl raises her hand.

I hope this blog post about how I approached writing the report whets your appetite to read it, and encourages you to read more research summaries in general.

My approach to summarising research papers

Summarising findings from more than 170 research papers into 34 pages was not a task for the faint-hearted. I could not have embarked on this task without previous experience of writing similar, smaller reviews; working on a host of research projects; and writing reports about research for many different audiences.

A computing teacher and a learner do physical computing in the primary school classroom.

I love reading about computer science education. It evokes very strong emotions, making me by turns happy, curious, impressed, alarmed, and even cross. When I summarise the papers of other researchers, I am very careful when deciding what to include and what to leave out, in order to do the researchers’ work justice while not overselling it or misleading readers. Sometimes research papers can be hard to fathom, with lots of jargon and statistics. In other papers, the conclusions drawn have many limitations: the project the paper describes hasn’t produced robust enough evidence to give a clear, generalisable message. Academic integrity and not misrepresenting the work of others is paramount. And naturally, there are many more than 170 papers about teaching programming, but I had to stop somewhere. All this makes summarising research a tricky task that one has to undertake with great care.

a teenage boy does coding during a computer science lesson.

Another important aspect of summarising research is how to group papers. A long list saying “this paper said this”, “this paper said that” would not be easy to access and would not draw out overall themes. Often research studies span many topics. What might be a helpful grouping for one reader might not be interesting for another.

For this report, I grouped papers into three sections:

  1. Classroom strategies: Here I included well-researched classroom strategies that teachers can use to teach programming in schools
  2. Contexts and environments for learning programming: Here I outlined research related to opportunities for teaching programming, including different programming languages and the classroom context
  3. Supporting learners: Here I summarised research that helps teachers support learners, particularly learners who have difficulties with programming

Why you as a teacher should read research summaries

Teachers, as very busy professionals, have little time to replan lessons, and programming lessons are challenging to start with. However, the potential long-term benefit may outweigh the short-term cost when it comes to reading research summaries: new insights from firmly grounded research can improve your teaching and enable more of your learners to be successful.

In a computing classroom, a girl laughs at what she sees on the screen.

The process of translating research into practice is an area that I and the research team here are particularly interested in investigating. We are looking forward to working with teachers to explore this.

The Raspberry Pi Foundation regularly shares research summaries in a range of formats.

You can also check out other computing education podcasts (e.g. CSEdPod.org), as well as computing education books (e.g. The Cambridge Handbook of Computing Education Research, Computer Science Education: Perspectives on Teaching and Learning, and many others), and other researchers’ blogs about computing education (e.g. Amy Ko, article summaries on CSEdresearch.org).

The post 170 research papers about teaching programming, summarised appeared first on Raspberry Pi.

Creating better online multiple choice questions

Post Syndicated from Jane Waite original https://www.raspberrypi.org/blog/better-online-multiple-choice-questions-education-edtech/

In this blog post we explore good practices around creating online computing questions, specifically multiple choice questions (MCQs). Multiple choice questions are a popular way to help teachers and learners work out the next steps in learning, and to assess learning in examinations. As a case study, we look at some data related to learner responses to computing questions on the Oak National Academy platform.

Someone fills in a standardised test with multiple choice questions using a pencil.

The case study illustrates the many things MCQ authors have to think about while designing questions, and that there is much more research needed to understand how to get an MCQ “just right”.

Uses of multiple choice questions

Online auto-marked MCQs are now being integrated into classroom activities, set as homework, and used in self-led learning at home. Software products involving MCQs, such as Kahoot and Socratic, are easy for many people to use, and have become popular in some learning contexts. MCQs may have become more prevalent due to increased online teaching and the availability of whole curricula through platforms such as the Oak National Academy.

A girl does school work at a laptop at home.

An international group of researchers from China, Spain, Singapore, and the UK recently looked into the reasons why MCQ-based testing might improve learning. Chunliang Yang and his co-authors concluded that there are three main ways that MCQ tests help learners learn:

  • They provide learners with additional exposure to learning content
  • They provide learners with content in the same format in which they will later be assessed
  • They motivate learners, for example by prompting them to commit more effort to learning in general

What does the research say about creating multiple choice questions?

In recent research reviewing the use of MCQs, Andrew Butler from Washington University in St Louis looked at the effectiveness of MCQs in relation to learning, rather than assessment. Andrew gives the following advice for educators creating MCQs for learning:

  • Think about the thinking processes the learner will use when answering the question, and make sure the processes are productive for their learning
  • Don’t make the question super easy or too difficult, but make it challenging — the difficulty needs to be “just right”
  • Keep the phrasing of the question simple 
  • Ensure that all answers are plausible; providing three or four answers is usually a good idea
  • Be aware that if learners pick the wrong answer, this can reinforce the wrong thinking
  • Provide corrective feedback to learners who pick the wrong answer

What I find particularly interesting about Andrew’s advice is the need to make the difficulty of the MCQ “just right” for learners. But what does “just right” look like in practice? More research is needed to work this out.

The anatomy of a multiple choice question

When talking about MCQs, there are technical terms to describe question features, e.g.:

  • Incorrect answers are called distractors (or lures)
  • A distractor is defined as plausible if a layperson would see it as a reasonable answer
  • Plausible distractors are called working distractors

Here at the Foundation, we created MCQs for the Oak National Academy when we adapted our Teach Computing Curriculum classroom materials into video lessons and accompanying home learning content to support learners and teachers during school closures. Data about what questions are attempted on the Oak platform, and what answer options are chosen, is stored securely by Oak National Academy. The Oak team kindly provided us with four months of anonymous data related to responses to the MCQs in the ‘GCSE Computer Science – Data representations’ unit.

Over this period of four months, learners on the platform made more than 29,000 question attempts on the thirty-five questions across the nine lessons that make up this data representation unit. Here is a breakdown of the questions by topic area:

Data about responses to a set of multiple choice questions on the Oak Academy platform.
Responses to MCQs in the GCSE Computer Science data representation unit on Oak National Academy, data from February 2021 to end of May 2021 (click to enlarge)

As shown in the table, more questions relate to binary arithmetic than to any other topic area. This was a specific design decision, as it is well-known that learners need lots of practice of the processes involved in answering binary arithmetic questions.

Part of the graph of learning objectives for the Teach Computing Curriculum unit GCSE Computer Science data representation.
Part of the graph of learning objectives for the Teach Computing Curriculum unit GCSE Computer Science — Data representations (click to enlarge)

Let’s look at an example question from the binary arithmetic topic area, with one correct answer and two distractors. The learning objective being addressed with this question is ‘Perform addition in binary on two binary numbers’.
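As a reminder of what that objective involves, here is a small worked example of my own (not the question shown in the screenshot below): adding two 4-bit binary numbers, with Python used to check the answer.

```python
# My own worked example of binary addition (not the question in the screenshot below).
a = 0b0110             # 6
b = 0b0011             # 3
print(f"{a + b:04b}")  # 1001, i.e. 6 + 3 = 9
```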

Screenshot of a multiple choice question on the Oak Academy platform.
One of the MCQs in the GCSE Computer Science data representation unit on the Oak National Academy, as displayed on the online platform

As shown in the table below, in four months, 1170 attempts were made to answer the example question. 65% of the attempts were correct responses, and 35% were not, with 21% of responses being distractor b, and 14% distractor c. These distractors appear to be working distractors, as they were chosen by more than 5% of learners, which has been suggested as a rule-of-thumb threshold that distractors have to clear to be classed as working.

Data about responses to a multiple choice question on the Oak Academy platform.
Example MCQ in the GCSE Computer Science data representation unit on the Oak National Academy, plus response data from February 2021 to end of May 2021 (click to enlarge)
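The rule-of-thumb check described above is easy to express in code. The sketch below uses the response proportions quoted in the text; the exact counts are approximations derived from the percentages, and the function name is my own.

```python
# A sketch of the rule-of-thumb check for "working" distractors.
# Counts are approximated from the percentages quoted above; the 5% threshold is from the text.
def working_distractors(responses, threshold=0.05):
    total = sum(responses.values())
    return [option for option, count in responses.items()
            if option != "correct" and count / total >= threshold]

responses = {"correct": 761, "b": 246, "c": 163}  # roughly 65%, 21% and 14% of 1170 attempts
print(working_distractors(responses))  # ['b', 'c'] -> both distractors count as working
```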

However, because of the lack of research into MCQs, we cannot say for certain that this question is “just right” — it may be too hard. We need to do further research to find this out.

Creating multiple choice questions is not easy

The process of creating good MCQs is not an easy task, because question authors need to think about many things, including:

  • What learning objectives are to be addressed
  • What plausible distractors can be used
  • What level of difficulty is right for learners
  • What type of thinking the questions are encouraging, and how this is useful for learners

In order for MCQs to be useful for learners and teachers, much more research is needed in this area to show how to reliably produce MCQs that are “just right” and encourage productive thinking processes. We are very much looking forward to exploring this topic in our research work.

To find out more about the computing education research we are doing, you can browse our website, take part in our monthly seminars, and read our publications.

The post Creating better online multiple choice questions appeared first on Raspberry Pi.

The machine learning effect: Magic boxes and computational thinking 2.0

Post Syndicated from Jane Waite original https://www.raspberrypi.org/blog/machine-learning-education-school-computational-thinking-2-0-research-seminar/

How does teaching children and young people about machine learning (ML) differ from teaching them about other aspects of computing? Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland shared some answers at our latest research seminar.

Three smiling young learners in a computing classroom.
We need to determine how to teach young people about machine learning, and what teachers need to know to help their learners form correct mental models.

Their presentation, titled ‘ML education for K-12: emerging trajectories’, had a profound impact on my thinking about how we teach computational thinking and programming. For this blog post, I have simplified some of the complexity associated with machine learning for the benefit of readers who are new to the topic.

a 3D-rendered grey box.
Machine learning is not magic — what needs to change in computing education to make sure learners don’t see ML systems as magic boxes?

Our seminars on teaching AI, ML, and data science

We’re currently partnering with The Alan Turing Institute to host a series of free research seminars about how to teach artificial intelligence (AI) and data science to young people.

The seminar with Matti and Henriikka, the third one of the series, was very well attended. Over 100 participants from San Francisco to Rajasthan, including teachers, researchers, and industry professionals, contributed to a lively and thought-provoking discussion.

Representing a large interdisciplinary team of researchers, Matti and Henriikka have been working on how to teach AI and machine learning for more than three years, which in this new area of study is a long time. So far, the Finnish team has written over a dozen academic papers based on their pilot studies with kindergarten-, primary-, and secondary-aged learners.

Current teaching in schools: classical rule-driven programming

Matti and Henriikka started by giving an overview of classical programming and how it is currently taught in schools. Classical programming can be described as rule-driven. Example features of classical computer programs and programming languages are:

  • A classical language has a strict syntax, and a limited set of commands that can only be used in a predetermined way
  • A classical language is deterministic, meaning we can guarantee what will happen when each line of code is run
  • A classical program is executed in a strict, step-wise order following a known set of rules

When we teach this type of programming, we show learners how to use a deductive problem solving approach or workflow: defining the task, designing a possible solution, and implementing the solution by writing a stepwise program that is then run on a computer. We encourage learners to avoid using trial and error to write programs. Instead, as they develop and test a program, we ask them to trace it line by line in order to predict what will happen when each line is run (glass-box testing).

A list of features of rule-driven computer programming, also included in the text.
The features of classical (rule-driven) programming approaches as taught in computer science education (CSE) (Tedre & Vartiainen, 2021).
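As a tiny illustration of that glass-box, predict-then-run style (my own example, not one from the seminar), a learner might annotate a short snippet with their predictions before executing it:

```python
# My own toy example of line-by-line (glass-box) tracing, not one from the seminar.
total = 0                 # predict: total starts at 0
for n in [2, 4, 6]:       # predict: n takes the values 2, 4, 6 in order
    total = total + n     # predict: total becomes 2, then 6, then 12
print(total)              # predict: prints 12 -- now run the program to check
```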

Classical programming underpins the current view of computational thinking (CT). Our speakers called this version of CT ‘CT 1.0’. So what’s the alternative Matti and Henriikka presented, and how does it affect what computational thinking is or may become?

Machine learning (data-driven) models and new computational thinking (CT 2.0) 

Rule-based programming languages are not being eradicated. Instead, software systems are being augmented through the addition of machine learning (data-driven) elements. Many of today’s successful software products, such as search engines, image classifiers, and speech recognition programs, combine rule-driven software and data-driven models. However, the workflows for these two approaches to solving problems through computing are very different.

A table comparing problem solving workflows using computational thinking 1.0 versus computational thinking 2.0, info also included in the text.
Problem solving is very different depending on whether a rule-driven computational thinking (CT 1.0) approach or a data-driven computational thinking (CT 2.0) approach is used (Tedre & Vartiainen, 2021).

Significantly, while in rule-based programming (and CT 1.0), the focus is on solving problems by creating algorithms, in data-driven approaches, the problem solving workflow is all about the data. To highlight the profound impact this shift in focus has on teaching and learning computing, Matti introduced us to a new version of computational thinking for machine learning, CT 2.0, which is detailed in a forthcoming research paper.

Because of the focus on data rather than algorithms, developing a machine learning model is not at all like developing a classical rule-driven program. In classical programming, programs can be traced, and we can predict what will happen when they run. But in data-driven development, there is no flow of rules, and no absolutely right or wrong answer.

A table comparing conceptual differences between computational thinking 1.0 versus computational thinking 2.0, info also included in the text.
There are major differences between rule-driven computational thinking (CT 1.0) and data-driven computational thinking (CT 2.0), which impact what computing education needs to take into account (Tedre & Vartiainen, 2021).

Machine learning models are created iteratively using training data and must be cross-validated with test data. A tiny change in the data provided can make a model useless. We rarely know exactly why the output of an ML model is as it is, and we cannot explain each individual decision that the model might have made. When evaluating a machine learning system, we can only say how well it works based on statistical confidence and efficiency. 
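To make that workflow a little more concrete, here is a minimal sketch of my own (using scikit-learn, not any tool discussed in the seminar): the data is split into training and test sets, a model is fitted, and its quality is reported as an accuracy score on unseen data rather than by tracing individual decisions.

```python
# A minimal sketch of the data-driven workflow, using scikit-learn.
# My own example, not a tool from the seminar.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
predictions = model.predict(X_test)

# We can only say how well the model works statistically; we cannot trace each decision.
print(f"Accuracy on unseen test data: {accuracy_score(y_test, predictions):.2f}")
```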

Machine learning education must cover ethical and societal implications 

The ethical and societal implications of computer science have always been important for students to understand. But machine learning models open up a whole new set of topics for teachers and students to consider, because of these models’ reliance on large datasets, the difficulty of explaining their decisions, and their usefulness for automating very complex processes. This includes privacy, surveillance, diversity, bias, job losses, misinformation, accountability, democracy, and veracity, to name but a few.

I see the shift in problem solving approach as a chance to strengthen the teaching of computing in general, because it opens up opportunities to teach about systems, uncertainty, data, and society.

Jane Waite

Teaching machine learning: the challenges of magic boxes and new mental models

For teaching classical rule-driven programming, much time and effort has been put into researching learners’ understanding of what a program will do when it is run. This kind of understanding is called a learner’s mental model or notional machine. An approach teachers often use to help students develop a useful mental model of a program is to hide the detail of how the program works and only gradually reveal its complexity. This approach is described with the metaphor of hiding the detail of elements of the program in a box. 

Data-driven models in machine learning systems are highly complex and make little sense to humans. Therefore, they may appear like magic boxes to students. This view needs to be banished. Machine learning is not magic. We have just not figured out yet how to explain the detail of data-driven models in a way that allows learners to form useful mental models.

An example of a representation of a machine learning model in TensorFlow, an online machine learning tool (Tedre & Vartiainen, 2021).

Some existing ML tools aim to help learners form mental models of ML, for example through visual representations of how a neural network works (see Figure 2). But these explanations are still very complex. Clearly, we need to find new ways to help learners of all ages form useful mental models of machine learning, so that teachers can explain to them how machine learning systems work and banish the view that machine learning is magic.

Some tools and teaching approaches for ML education

Matti and Henriikka’s team piloted different tools and pedagogical approaches with different age groups of learners. In terms of tools, since large amounts of data are needed for machine learning projects, our presenters suggested that tools that enable lots of data to be easily collected are ideal for teaching activities. Media-rich education tools provide an opportunity to capture still images, movements, sounds, or other sensor inputs and then use these as data in machine learning teaching activities. For example, to create a machine learning–based rock-paper-scissors game, students can take photographs of their hands to train a machine learning model using Google Teachable Machine.

Photos of hands are used to train a machine learning model as part of a project to create a rock-paper-scissors game.
Photos of hands are used to train a Teachable Machine machine learning model as part of a project to create a rock-paper-scissors game (Tedre & Vartiainen, 2021).
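Teachable Machine itself is a point-and-click web tool, but the same idea can be sketched in code. The example below is a generic Keras image classifier of my own (not Teachable Machine’s exported code); the folder layout, image size, and network architecture are all assumptions made for illustration.

```python
# A generic sketch of the rock-paper-scissors idea in Keras.
# Not Teachable Machine's exported code; the folder layout and parameters are assumptions.
import tensorflow as tf

# Hypothetical folder of photos organised as hand_photos/rock, hand_photos/paper, hand_photos/scissors
train = tf.keras.utils.image_dataset_from_directory(
    "hand_photos", image_size=(96, 96), batch_size=16)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),  # rock, paper, scissors
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=5)  # adding more photos generally improves accuracy
```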

Similar to tools that teach classical programming to novice students (e.g. Scratch), some of the new classroom tools for teaching machine learning have a drag-and-drop interface (e.g. Cognimates). Using such tools means that lessons can focus less on one of the more complex aspects of learning to program: programming language syntax. However, not all machine learning education products include drag-and-drop interaction; some instead have their own complex languages (e.g. Wolfram Programming Lab), which are less attractive to teachers and learners. In their pilot studies, the Finnish team found that drag-and-drop machine learning tools appeared to work well with students of all ages.

The different pedagogical approaches the Finnish research team used in their pilot studies included an exploratory approach with preschool children, who investigated machine learning recognition of happy or sad faces; and a project-based approach with older students, who co-created machine learning apps with web-based tools such as Teachable Machine and Learn Machine Learning (built by the research team), supported by machine learning experts.

Example of a middle school (age 8 to 11) student’s pen and paper design for a machine learning app that recognises different instruments and chords.
Example of a middle school (age 8 to 11) student’s design for a machine learning app that recognises different instruments and chords (Tedre & Vartiainen, 2021).

What impact these pedagogies have on students’ long-term mental models about machine learning has yet to be researched. If you want to find out more about the classroom pilot studies, the academic paper is a very accessible read.

My take-aways: new opportunities, new research questions

We all learned a tremendous amount from Matti and Henriikka and their perspectives on this important topic. Our seminar participants asked them many questions about the pedagogies and practicalities of teaching machine learning in class, and raised concerns about squeezing more into an already packed computing curriculum.

For me, the most significant take-away from the seminar was the need to shift focus from algorithms to data and from CT 1.0 to CT 2.0. Learning how to best teach classical rule-driven programming has been a long journey that we have not yet completed. We are forming an understanding of what concepts learners need to be taught, the progression of learning, key mental models, pedagogical options, and assessment approaches. For teaching data-driven development, we need to do the same.  

The question of how we make sure teachers have the necessary understanding is key.

Jane Waite

I see the shift in problem solving approach as a chance to strengthen the teaching of computing in general, because it opens up opportunities to teach about systems, uncertainty, data, and society. I think it will help us raise awareness about design, context, creativity, and student agency. But I worry about how we will introduce this shift. In my view, there is a considerable risk that we will be sucked into open-ended, project-based learning, with busy and fun but shallow learning experiences that result in restricted conceptual development for students.

I also worry about how we can best help teachers build up the knowledge and experience to support their students. In the Q&A after the seminar, I asked Matti and Henriikka about the role of their team’s machine learning experts in their pilot studies. It seemed to me that without them, the pilot lessons would not have worked, as the participating teachers and students would not have had the vocabulary to talk about the process and would not have known what was doable given the available time, tools, and student knowledge.

The question of how we make sure teachers have the necessary understanding is key. Many existing professional development resources for teachers wanting to learn about ML seem to imply that teachers will all need a PhD in statistics and neural network optimisation to engage with machine learning education. This is misleading. But teachers do need to understand the machine learning concepts that their students need to learn about, and I think we don’t yet know exactly what these concepts are. 

In summary, clearly more research is needed. There are fundamental questions still to be answered about what, when, and how we teach data-driven approaches to software systems development and how this impacts what we teach about classical, rule-based programming. But to me, that is exciting, and I am very much looking forward to the journey ahead.

Join our next free seminar

To find out what others recommend about teaching AI and ML, catch up on last month’s seminar with Professor Carsten Schulte and colleagues on centring data instead of code in the teaching of AI.

We have another four seminars in our monthly series on AI, machine learning, and data science education. Find out more about them on this page, and catch up on past seminar blogs and recordings here.

At our next seminar on Tuesday 7 December at 17:00–18:30 GMT, we will welcome Professor Rose Luckin from University College London. She will be presenting on what it is about AI that makes it useful for teachers and learners.

We look forward to meeting you there!

PS You can build your understanding of machine learning by joining our latest free online course, where you’ll learn foundational concepts and train your own ML model!

The post The machine learning effect: Magic boxes and computational thinking 2.0 appeared first on Raspberry Pi.