All posts by Sue Sentance

Gender Balance in Computing — the big picture

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/gender-balance-in-computing-big-picture/

Improving gender balance in computing is part of our work to ensure equitable learning opportunities for all young people. Our Gender Balance in Computing (GBIC) research programme has been the largest effort to date to explore ways to encourage more girls and young women to engage with Computing.

A girl in a university computing classroom.

Commissioned by the Department for Education in England and led by the Raspberry Pi Foundation as part of our National Centre for Computing Education work, the GBIC programme was a collaborative effort involving the Behavioural Insights Team, Apps for Good, and the WISE Campaign.

Gender Balance in Computing ran from 2019 to 2022 and comprised seven studies relating to five different research areas:

  • Teaching Approach: Trialling teaching approaches that can engage girls, including storytelling, peer instruction, and pair programming
  • Belonging: Supporting learners to feel that they “belong” in computer science
  • Non-formal Learning: Establishing the connections between in-school and out-of-school computing
  • Relevance: Making computing relatable to everyday life
  • Subject Choice: How computer science is presented to young people as a subject choice 

In December we published the last of seven reports describing the results of the programme. In this blog post I summarise our overall findings and reflect on what we’ve learned through doing this research.

Gender balance in computing is not a new problem

I was fascinated to read a paper by Deborah Butler from 2000 which starts by summarising themes from research into gender balance in computing from the 1980s and 1990s, for example that boys may have access to more role models in computing and may receive more encouragement to pursue the subject, and that software may be developed with a bias towards interests traditionally considered to be male. Butler’s paper summarises research from at least two decades ago — have we really made progress?

A computing classroom filled with learners.

In England, it’s true that making Computing a mandatory subject from age 5 means we have taken great strides forward; the need for young people to make a choice about studying the subject only arises at age 14. However, statistics for England’s externally assessed high-stakes Computer Science courses taken at ages 14–16 (GCSE) and 16–18 (A level) clearly show that, although there is a small upwards trend in the proportion of female students, particularly for A level, gender balance among the students achieving GCSE/A level qualifications remains an issue:

Computer Science qualification (England)   In 2018   In 2021   In 2022
GCSE (age 16)                              20.41%    20.77%    21.37%
A level (age 18)                           11.74%    14.71%    15.17%
Percentage of girls among the students achieving Computer Science qualifications in England’s secondary schools

What did we do in the Gender Balance in Computing programme?

In GBIC, we carried out a range of research studies involving more than 14,500 pupils and 725 teachers in England. Implementation teams came from the Foundation, Apps for Good, the WISE Campaign, and the Behavioural Insights Team (BIT). A separate team at BIT acted as the independent evaluators of all the studies.

In total we conducted the following studies:

  • Two feasibility studies: Storytelling; Relevance (the latter led to a full randomised controlled trial, RCT)
  • Five RCTs: Belonging; Peer Instruction; Pair Programming; Relevance; Non-formal Learning (primary)
  • One quasi-experimental study: Non-formal Learning (secondary)
  • One exploratory research study: Subject Choice (Subject choice evenings and option booklets)

Each study (apart from the exploratory research study) involved a 12-week intervention in schools. Bespoke materials were developed for all the studies, and teachers received training on how to deliver the intervention they were a part of. For the RCTs, randomisation was done at school level: schools were randomly divided into treatment and control groups. The independent evaluators collected both quantitative and qualitative data to ensure that we gained comprehensive insights from the schools’ experiences of the interventions. The evaluators’ reports and our associated blog posts give full details of each study.
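To make the school-level randomisation concrete, here is a minimal sketch in Python. This is our own illustration, not the evaluators' actual procedure, and the school identifiers are invented:

```python
import random

def randomise_schools(school_ids, seed=None):
    """Randomly split schools into equally sized treatment and control groups.

    Assignment happens per school, so every pupil in a given school
    receives the same condition (treatment or control).
    """
    rng = random.Random(seed)  # seeded for a reproducible allocation
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "treatment": sorted(shuffled[:half]),
        "control": sorted(shuffled[half:]),
    }

# Allocate ten (hypothetical) schools to the two arms of a trial
groups = randomise_schools([f"school_{i:02d}" for i in range(10)], seed=42)
print(groups["treatment"])
print(groups["control"])
```

Randomising whole schools rather than individual pupils keeps pupils in the same school in the same condition, which is the usual reason for choosing this unit of randomisation.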

The impact of the pandemic

The research programme ran from 2019 to 2022, and because it was based in schools, it faced significant challenges due to the coronavirus pandemic. Many research programmes that were meant to take place in schools were cancelled as soon as schools shut.

A learner and a teacher in a computing classroom.

Although we were fortunate that GBIC was allowed to continue, we were not allowed to extend the end date of the programme. Thus our studies were compressed into the period after schools reopened and primarily delivered in the academic year 2021/2022. When schools were open again, the implementation of the studies was affected by teacher and pupil absences, and by schools necessarily focusing on making up some of the lost time for learning.

The overall results of Gender Balance in Computing

Quantitatively, none of the RCTs showed a statistically significant impact on the primary outcome measured, which was different in different trials but related to either learners’ attitudes to computer science or their intention to study computer science. Most of the RCTs showed a positive impact that fell just short of statistical significance. The evaluators went to great lengths to control for pandemic-related attrition, and the implementation teams worked hard to support teachers in still delivering the interventions as designed, but attrition and disruptions due to the pandemic may have played a part in the results.

Woman teacher and female students at a computer

The qualitative research results were more encouraging. Teachers were enthusiastic about the approaches we had chosen in order to address known barriers to gender balance, and the qualitative data indicated that pupils reacted positively to the interventions. One key theme across the Teaching Approach (and other) studies was that girls valued collaboration and teamwork. The data also offered insights that enable us to improve on the interventions.

We designed the studies so they could act as pilots for interventions that might be rolled out at a national scale. While we have gained sufficient understanding of what works to be able to run the interventions at a larger scale, two key lessons shape our view of what a large-scale study should look like:

1. A single intervention may not be enough to have an impact

The GBIC results highlight that there is no quick fix and suggest that we should combine some of the approaches we’ve been trialling to provide a more holistic approach to teaching Computing in an equitable way. We would recommend that schools adopt several of the approaches we’ve tested; the materials associated with each intervention are freely available (see our blog posts for links).

2. Age matters

One of the very interesting overall findings from this research programme was the difference in intent to study Computing between primary school and secondary school learners; fewer secondary school learners reported intent to study the subject further. This difference was observed for both girls and boys, but was more marked for girls, as shown in the graph below. This suggests that we need to double down on supporting children, especially girls, to maintain their interest in Computing as they enter secondary school at age 11. It also points to a need for more longitudinal research to understand more about the transition period from primary to secondary school and how it impacts children’s engagement with computer science and technology in general.

Bar graph showing that in the Gender Balance in Computing research programme, learners’ intent to continue studying computing was lower in secondary school than in primary school, and that this difference is more pronounced for girls.
Compared to primary school age girls, girls aged 12 to 13 show dramatically reduced intent to continue studying computing.

What’s next?

We think that more time (in excess of 12 weeks) is needed to both deliver the interventions and measure their outcome, as the change in learners’ attitudes may be slow to appear, and we’re hoping to engage in more longitudinal research moving forward.

In a computing classroom, a girl looks at a computer screen.

We know that an understanding of computer science can improve young people’s access to highly skilled jobs involving technology and their understanding of societal issues, and we need that to be available to all. However, gender balance relating to computing and technology is a deeply structural issue that has existed for decades throughout the computing education and workplace ecosystem. That’s why we intend to pursue more work around a holistic approach to improving gender balance, aligning with our ongoing research into making computing education culturally relevant.

Stay in touch

We are very keen to continue to build on our research on gender balance in computing. If you’d like to support us in any way, we’d love to hear from you. To explore the research projects we’re currently involved in, check out our research pages and visit the website of the Raspberry Pi Computing Education Research Centre at the University of Cambridge.

The post Gender Balance in Computing — the big picture appeared first on Raspberry Pi.

Data ethics for computing education through ballet and biometrics

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/data-ethics-computing-education-ballet-biometrics-research-seminar/

For our seminar series on cross-disciplinary computing, it was a delight to host Genevieve Smith-Nunes this September. Her research work involving ballet and augmented reality was a perfect fit for our theme.

Genevieve Smith-Nunes.
Genevieve Smith-Nunes

Genevieve has a background in classical ballet and was also a computing teacher for several years before starting Ready Salted Code, an educational initiative around data-driven dance. She is now coming to the end of her doctoral studies at the University of Cambridge, in which she focuses on raising awareness of data ethics using ballet and brainwave data as narrative tools, working with student Computing teachers.

Why dance and computing?

You may be surprised that there are links between dance, particularly ballet, and computing. Genevieve explained that classical ballet has a strict repetitive routine, using rule-based choreography and algorithms. Her work on data-driven dance had started at the time of the announcement of the new Computing curriculum in England, when she realised the lack of gender balance in her computing classroom. As an expert in both ballet and computing, she was driven by a desire to share the more creative elements of computing with her learners.

Two photographs of data-driven ballets.
Two of Genevieve’s data-driven ballet dances: [arra]stre and [PAIN]byte

Genevieve has been working with a technologist and a choreographer for several years to develop ballets that generate biometric data and include visualisation of such data — hence her term ‘data-driven dance’. This has led to her developing a second focus in her PhD work on how Computing students can discuss questions of ethics based on the kind of biometric and brainwave data that Genevieve is collecting in her research. Students need to learn about the ethical issues surrounding data as part of their Computing studies, and Genevieve has been working with student teachers to explore ways in which her research can be used to give examples of data ethics issues in the Computing curriculum.

Collecting data during dances

Throughout her talk, Genevieve described several examples of dances she had created. One example was [arra]stre, a project that involved a live performance of a dance, plus a series of workshops breaking down the computer science theory behind the performance, including data visualisation, wearable technology, and images triggered by the dancers’ data.

A presentation slide describing technologies necessary for motion capture of ballet.

Much of Genevieve’s seminar was focused on the technologies used to capture movement data from the dancers and the challenges this involves. For example, some existing biometric tools don’t capture foot movement — which is crucial in dance — and also can’t capture movements when dancers are in the air. For some of Genevieve’s projects, dancers also wear headsets that allow collection of brainwave data.

A presentation slide describing technologies necessary for turning motion capture data into 3D models.

Due to interruptions to her research design caused by the COVID-19 pandemic, much of Genevieve’s PhD research took place online via video calls. New tools had to be created to capture dance performances within a digital online setting. Her research uses webcams and mobile phones to record the biometric data of dancers at 60 frames per second. A number of processes are then followed to create a digital representation of the dance: isolating the dancer in the raw video; tracking the skeleton data; applying pose estimation machine learning algorithms; and using additional software to map the joints to the correct position and rotation.
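As a toy illustration of the last step in such a pipeline, the sketch below derives a joint angle from 2D skeleton keypoints of the kind pose estimation produces. This is our own simplified example, not Genevieve's software; the keypoint names and pixel coordinates are invented:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b, formed by the points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from b to a
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from b to c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical keypoints for one video frame (pixel coordinates)
frame = {"hip": (320, 200), "knee": (330, 300), "ankle": (325, 400)}
angle = joint_angle(frame["hip"], frame["knee"], frame["ankle"])
print(f"knee angle: {angle:.1f} degrees")
```

Repeating such calculations for every joint in every frame is one way downstream software can turn raw keypoints into the rotations needed to pose a 3D model.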

A presentation slide describing technologies necessary for turning a 3D computer model into an augmented reality object.

Are your brainwaves personal data?

It’s clear from Genevieve’s research that she is collecting a lot of data from her research participants, particularly the dancers. The projects include collecting both biometric data and brainwave data. Ethical issues tied to brainwave data are part of the field of neuroethics, which comprises the ethical questions raised by our increasing understanding of the biology of the human brain.

A graph of brainwaves placed next to ethical questions related to brainwave data.

Teaching learners to be mindful about how to work with personal data is at the core of the work that Genevieve is doing now. She mentioned that there are a number of ethics frameworks that can be used in this area, and highlighted the UK government’s Data Ethics Framework as being particularly straightforward with its three guiding principles of transparency, accountability, and fairness. Frameworks such as this can help to guide a classroom discussion around the security of the data, and whether the data can be used in discriminatory ways.

Brainwave data visualisation using the Emotiv software.

Data ethics provides lots of material for discussion in Computing classrooms. To exemplify this, Genevieve recorded her own brainwaves during dance, research, and rest activities, and then shared the data during workshops with student computing teachers. In our seminar Genevieve showed two visualisations of her own brainwave data (see the images above) and discussed how the student computing teachers in her workshops had felt that one was more “personal” than the other. The same brainwave data can be presented as a spreadsheet, or a moving graph, or an image. Student computing teachers felt that the graph data (shown above) felt more medical, and more like permanent personal data than the visualisation (shown above), but that the actual raw spreadsheet data felt the most personal and intrusive.

Watch the recording of Genevieve’s seminar to see her full talk:

You can also access her slides and the links she shared in her talk.

More to explore

There are a variety of online tools you can use to explore augmented reality: for example try out Posenet with the camera of your device.

Genevieve’s seminar used the title ME++, which refers to the data self and the human self: both are important and of equal value. Genevieve’s use of this term is inspired by William J. Mitchell’s book Me++: The Cyborg Self and the Networked City. Within his framing, the ‘I’ of the digital world is more than the ‘I’ of the physical world, highlighting the posthuman blurring of the boundary between the human and the non-human.

Genevieve’s work is also inspired by Luciano Floridi’s philosophical work, and his book The Ethics of Information might be something you want to investigate further. You can also read ME++ Data Ethics of Biometrics Through Ballet and AR, a paper by Genevieve about her doctoral work.

Join our next seminar

In our final two seminars for this year we are exploring further aspects of cross-disciplinary computing. Just this week, Conrad Wolfram of Wolfram Technologies joined us to present his ideas on maths and a core computational curriculum. We will share a summary and recording of his talk soon.

On 2 November, Tracy Gardner and Rebecca Franks from our team will close out this series by presenting work we have been doing on computing education in non-formal settings. Sign up now to join us for this session:

We will shortly be announcing the theme of a brand-new series of seminars starting in January 2023.  

The post Data ethics for computing education through ballet and biometrics appeared first on Raspberry Pi.

Join us at the launch event of the Raspberry Pi Computing Education Research Centre

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/raspberry-pi-computing-education-research-centre-launch-event-invitation/

Last summer, the Raspberry Pi Foundation and the University of Cambridge Department of Computer Science and Technology created a new research centre focusing on computing education research for young people in both formal and non-formal education. The Raspberry Pi Computing Education Research Centre is an exciting venture through which we aim to deliver a step-change for the field.

school-aged girls and a teacher using a computer together.

Computing education research that focuses specifically on young people is relatively new, particularly in contrast to established research disciplines such as those focused on mathematics or science education. However, computing is now a mandatory part of the curriculum in several countries, and being taken up in education globally, so we need to rigorously investigate the learning and teaching of this subject, and do so in conjunction with schools and teachers.

You’re invited to our in-person launch event

To celebrate the official launch of the Raspberry Pi Computing Education Research Centre, we will be holding an in-person event in Cambridge, UK on Weds 20 July from 15.00. This event is free and open to all: if you are interested in computing education research, we invite you to register for a ticket to attend. By coming together in person, we want to help strengthen a collaborative community of researchers, teachers, and other education practitioners.

The launch event is your opportunity to meet and mingle with members of the Centre’s research team and listen to a series of short talks. We are delighted that Prof. Mark Guzdial (University of Michigan), who many readers will be familiar with, will be travelling from the US to join us in cutting the ribbon. Mark has worked in computer science education for decades and won many awards for his research, so I can’t think of anybody better to be our guest speaker. Our other speakers are Prof. Alastair Beresford from the Department of Computer Science and Technology, and Carrie Anne Philbin MBE, our Director of Educator Support at the Foundation.

The event will take place at the Department of Computer Science and Technology in Cambridge. It will start at 15.00 with a reception where you’ll have the chance to talk to researchers and see the work we’ve been doing. We will then hear from our speakers, before wrapping up at 17.30. You can find more details about the event location on the ticket registration page.

Our research at the Centre

The aim of the Raspberry Pi Computing Education Research Centre is to increase our understanding of teaching and learning computing, computer science, and associated subjects, with a particular focus on young people who are from backgrounds that are traditionally under-represented in the field of computing or who experience educational disadvantage.

Young learners at computers in a classroom.

We have been establishing the Centre over the last nine months. In October, I was appointed Director, and in December, we were awarded funding by Google for a one-year research project on culturally relevant computing teaching, following on from a project at the Raspberry Pi Foundation. The Centre’s research team is uniquely positioned, straddling both the University and the Foundation. Our two organisations complement each other very well: the University is one of the highest-ranking universities in the world and renowned for its leading-edge academic research, and the Raspberry Pi Foundation works with schools, educators, and learners globally to pursue its mission to put the power of computing into the hands of young people.

In our research at the Centre, we will make sure that:

  1. We collaborate closely with teachers and schools when implementing and evaluating research projects
  2. We publish research results in a number of different formats, as promptly as we can and without a paywall
  3. We translate research findings into practice across the Foundation’s extensive programmes and with our partners

We are excited to work with a large community of teachers and researchers, and we look forward to meeting you at the launch event.

Stay up to date

At the end of June, we’ll be launching a new website for the Centre at computingeducationresearch.org. This will be the place for you to find out more about our projects and events, and to sign up to our newsletter. For announcements on social media, follow the Raspberry Pi Foundation on Twitter or Linkedin.

The post Join us at the launch event of the Raspberry Pi Computing Education Research Centre appeared first on Raspberry Pi.

A teaspoon of computing in every subject: Broadening participation in computer science

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/guzdial-teaspoon-computing-tsp-language-broadening-participation-school/

From May to November 2022, our seminars focus on the theme of cross-disciplinary computing. Through this seminar series, we want to explore the intersections and interactions of computing with all aspects of learning and life, and think about how they can help us teach young people. We were delighted to welcome Prof. Mark Guzdial (University of Michigan) as our first speaker.

Mark Guzdial.
Professor Mark Guzdial, University of Michigan

Mark has worked in computer science (CS) education for decades and won many awards for his research, including the prestigious ACM SIGCSE Outstanding Contribution to Computing Education award in 2019. He has written hundreds of papers about CS education, and he authors an extremely popular computing education research blog that keeps us all up to date with what is going on in the field.

Young learners at computers in a classroom.

In his talk, Mark focused on his recent work around developing task-specific programming (TSP) languages, with which teachers can add a teaspoon (also abbreviated TSP) of programming to a wide variety of subject areas in schools. Mark’s overarching thesis is that if we want everyone to have some exposure to CS, then we need to integrate it into a range of subjects across the school curriculum. And he explained that this idea of “adding a teaspoon” embraces some core principles; for TSP languages to be successful, they need to:

  • Meet the teachers’ needs
  • Be relevant to the context or lesson in which they appear
  • Be technically easy to get to grips with

Mark neatly summarised this as ‘being both usable and useful’. 

Historical views on why we should all learn computer science

We can learn a lot from going back in time and reflecting on the history of computing. Mark started his talk by sharing the views of some of the eminent computer scientists of the early days of the subject. C. P. Snow maintained, way back in 1961, that all students should study CS, because it was too important to be left to a small handful of people.

A quote by computer scientist C. P. Snow from 1961: A handful of people, having no relation to the will of society, having no communication with the rest of society, will be taking decisions in secret which are going to affect our lives in the deepest sense.

Alan Perlis, also in 1961, argued that everyone at university should study one course in CS rather than a topic such as calculus. His reason was that CS is about process, and thus gives students tools that they can use to change the world around them. I’d never heard of this work from the 1960s before, and it suggests incredible foresight. Perhaps we don’t need to even have the debate of whether computer science is for everyone — it seems it always was!

What’s the problem with the current situation?

In many of our seminars over the last two years, we have heard about the need to broaden participation in computing in school. In England, computing is mandatory for ages 5 to 16 in theory, though in practice it is only offered to all children from ages 5 to 14; other countries don’t have any computing for younger children at all. And once computing becomes optional, numbers drop, wherever you are.

Not enough students are experiencing computer science in school.

Mark shared with us that in US high schools, only 4.7% of students are enrolled in a CS course. However, students are studying other subjects, which brought him to the conclusion that CS should be introduced where the students already are. For example, Mark described that, at the Advanced Placement (AP) level in the US, many more students choose to take history than CS (399,000 vs 114,000), and the History AP cohort has a more even gender balance and a higher proportion of Black and Hispanic students.

The teaspoon approach to broadening participation

A solution to low uptake of CS being proposed by Mark and his colleagues is to add a little computing to other subjects, and in his talk he gave us some examples from history and mathematics, both subjects taken by a high proportion of US students. His focus is on high school, meaning learners aged 14 and upwards (upper secondary in Europe, or key stage 4 and 5 in England). To introduce a teaspoon of CS to other subjects, Mark’s research group builds tools using a participatory design approach; his group collaborates with teachers in schools to identify the needs of the teachers and students and design and iterate TSP languages in conjunction with them.

Three teenage boys do coding at a shared computer during a computer science lesson.

Mark demonstrated a number of TSP language prototypes his group has been building for use in particular contexts. The prototypes seem like simple apps, but can be classified as languages because they specify a process for a computational agent to execute. These small languages are designed to be used at a specific point in the lesson and should be learnable in ten minutes. For example, students can use a small ‘app’ specific to their topic, look at a script that generates a visualisation, and change some variables to find out how they impact the output. Students may also be able to access some program code, edit it, and see the impact of their edits. In this way, they discover through practical examples the way computer programs work, and how they can use CS principles to help build an understanding of the subject area they are currently studying. Even if the language is never used again, the learning cost is low enough to be outweighed by the value of adding computation to that one lesson.
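To give a flavour of the idea, here is a hypothetical teaspoon-style script in plain Python. This is not the syntax of any of Mark's actual TSP languages, and the population figures are invented; a student would edit only the two marked variables and re-run the script to change the visualisation:

```python
# Invented example data: population in millions, by country and year
POPULATION_MILLIONS = {
    "Nigeria": {1960: 45, 1990: 95, 2020: 206},
    "Kenya":   {1960: 8,  1990: 23, 2020: 54},
}

# --- the part a student would edit ---
country = "Nigeria"
years = [1960, 1990, 2020]
# -------------------------------------

def bar_chart(country, years, scale=5):
    """Render one text bar per year; each '#' stands for `scale` million people."""
    lines = []
    for year in years:
        value = POPULATION_MILLIONS[country][year]
        lines.append(f"{year} | {'#' * (value // scale)} {value}M")
    return "\n".join(lines)

print(bar_chart(country, years))
```

The point of the design is that all the programming machinery stays out of sight: the learner touches only the variables that matter to the history question at hand.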

We have recorded the seminar and will be sharing the video very soon, so bookmark this page.

Try TSP languages yourself

You can try out the TSP language prototypes Mark shared yourself, which will give you a good idea of how much a teaspoon is!

DV4L: For history students, the team and participating teachers have created a prototype called DV4L, which visualises historical data. The default example script shows population growth in Africa. Students can change some of the variables in the script to explore data related to other countries and other historical periods.

Pixel Equations: Mathematics and engineering students can use the Pixel Equations tool to learn about the way that pictures are made up of individual pixels. This can be introduced into lessons using a variety of contexts. One example lesson activity looks at images in the context of maps. This prototype is available in English and Spanish.

Counting Sheets: Another example given by Mark was Counting Sheets, an interactive tool to support the exploration of counting problems, such as how many possible patterns can come from flipping three coins. 
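For instance, the three-coin problem can be checked in a few lines of plain Python (our own illustration, not the Counting Sheets tool itself):

```python
from itertools import product

# Enumerate every pattern from flipping three coins (H = heads, T = tails)
patterns = list(product("HT", repeat=3))
for p in patterns:
    print("".join(p))
print(f"{len(patterns)} possible patterns")
```

Each additional coin doubles the count, so n coins give 2**n patterns.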

Have a go yourself. What subjects could you imagine adding a teaspoon of computing to?

Join our next free research seminar

We’d love you to join us for the next seminar in our series on cross-disciplinary computing. On 7 June, we will hear from Pratim Sengupta, of the University of Calgary, Canada. He has conducted studies in science classrooms and non-formal learning environments, focusing on providing open and engaging experiences for anyone to explore code. Pratim will share his thoughts on the ways that more of us can become involved with code when we open up its richness and depth to a wider audience. He will also introduce us to his ideas about countering technocentrism, a key focus of his new book.

And finally… save another date!

We will shortly be sharing details about the official in-person launch event of the Raspberry Pi Computing Education Research Centre at the University of Cambridge on 20 July 2022. And guess who is going to be coming to Cambridge, UK, from Michigan to officially cut the ribbon for us? That’s right, Mark Guzdial. More information coming soon on how you can sign up to join us for free at this launch event.

The post A teaspoon of computing in every subject: Broadening participation in computer science appeared first on Raspberry Pi.

AI literacy research: Children and families working together around smart devices

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-literacy-children-families-working-together-ai-education-research/

Between September 2021 and March 2022, we partnered with The Alan Turing Institute to host a series of free research seminars about how to teach young people about AI and data science.

In the final seminar of the series, we were excited to hear from Stefania Druga from the University of Washington, who presented on the topic of AI literacy for families. Stefania’s talk highlighted the importance of families in supporting children to develop AI literacy. Her talk was a perfect conclusion to the series and very well-received by our audience.

Stefania Druga.
Stefania Druga, University of Washington

Stefania is a third-year PhD student who has been working on AI literacy in families, and since 2017 she has conducted a series of studies that she presented in her seminar talk. She presented some new work to us that was to be formally shared at the HCI conference in April, and we were very pleased to have a sneak preview of these results. It was a fascinating talk about the ways in which the interactions between parents and children using AI-based devices in the home, and the discussions they have while learning together, can facilitate an appreciation of the affordances of AI systems. You’ll find my summary as well as the seminar recording below.

“AI literacy practices and skills led some families to consider making meaningful use of AI devices they already have in their homes and redesign their interactions with them. These findings suggest that family has the potential to act as a third space for AI learning.”

– Stefania Druga

AI literacy: Growing up with AI systems, growing used to them

Back in 2017, public interest in Alexa and other so-called ‘smart’, AI-based devices was only just developing, and such devices would have been very novel to most people. That year, Stefania and her colleagues conducted a first pilot study of children’s and parents’ interactions with ‘smart’ devices, including robots, talking dolls, and the sort of voice assistants we are used to now.

A slide from Stefania Druga's AI literacy seminar. Content is described in the blog text.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

Working directly with families, the researchers explored the level of understanding that children had about ‘smart’ devices, and were surprised by the level of insight very young children had into the potential of this type of technology.

In this AI literacy pilot study, Stefania and her colleagues found that:

  • Children perceived AI-based agents (i.e. ‘smart’ devices) as friendly and truthful
  • They treated different devices (e.g. two different Alexas) as completely independent
  • Whether children described a device as ‘smart’ depended on their age, with older children more likely to do so

AI literacy: Influence of parents’ perceptions, influence of talking dolls

Stefania’s next study, undertaken in 2018, showed that parents’ perceptions of the implications and potential of ‘smart’ devices shaped what their children thought. Even when parents and children were interviewed separately, if the parent thought that, for example, robots were smarter than humans, then the child did too.

A slide from Stefania Druga's AI literacy seminar.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

Another part of this study showed that talking dolls could influence children’s moral decisions (e.g. “Should I give a child a pillow?”). In some cases, these ‘smart’ toys would influence the child more than another human did. Certain ‘smart’ dolls have been banned in some European countries because of security concerns. In the light of these concerns, Stefania pointed out how important it is to help children develop a critical understanding of AI-based technology: its potential, its fallibility, and the limits of its guidance.

A slide from Stefania Druga's AI literacy seminar.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

AI literacy: Programming ‘smart’ devices, algorithmic bias

Another study Stefania discussed involved children who programmed ‘smart’ devices. She used the children’s drawings to find out about their mental models of how the technology worked.

She found that when children had the opportunity to train machine learning models or ‘smart’ devices, they became more sceptical about the appropriate use of these technologies and asked better questions about when and for what they should be used. Another finding was that children and adults had different ideas about algorithmic bias, particularly relating to the meaning of fairness.

A parent and child work together at a Raspberry Pi computer.

AI literacy: Kinaesthetic activities, sharing discussions

The final study Stefania talked about was conducted with families online during the pandemic, when children were learning at home. Fifteen families, comprising 18 children (aged 5 to 11) and 16 parents, participated in five weekly sessions. Each session was made up of learning activities demonstrating features of AI. These are all available at aiplayground.me.

A slide from Stefania Druga's AI literacy seminar, describing two research questions about how children and parents learn about AI together, and about how to design learning supports for family AI literacies.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

The fact that children and parents, or other family members, worked through the activities together seemed to generate fruitful discussions about the usefulness of AI-based technology. Many families were concerned about privacy and what was happening to their personal data when they were using ‘smart’ devices, and also expressed frustration with voice assistants that couldn’t always understand the way they spoke.

A slide from Stefania Druga's AI literacy seminar. Content described in the blog text.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

In one of the sessions, with a focus on machine learning, families were introduced to a kinaesthetic activity involving moving around their home to train a model. Through this activity, parents and children had more insight into the constraints facing machine learning. They used props in the home to experiment and find out ways of training the model better. In another session, families were encouraged to design their own devices on paper, and Stefania showed some examples of designs children had drawn.

A slide from Stefania Druga's AI literacy seminar. Content described in the blog text.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

This study identified a number of different roles that parents or other adults played in supporting children’s learning about AI, and found that embodied and tangible activities worked well for encouraging joint work between children and their families.

Find out more

You can catch up with Stefania’s seminar below in the video, and download her presentation slides.

You can learn more about Stefania’s work in her paper on children’s training of ML models, and in her latest paper about the five weekly AI literacy sessions with families.

Recordings and slides of all our previous seminars on AI education are available online for you, and you can see the list of AI education resources we’ve put together based on recommendations from seminar speakers and participants.

Join our next free research seminar

We are delighted to start a new seminar series on cross-disciplinary computing, with seminars in May, June, July, and September to look forward to. It’s not long now before we begin: Mark Guzdial will speak to us about task-specific programming languages (TSP) in history and mathematics classes on 3 May, 17:00 to 18:30 UK time. I can’t wait!

Sign up to receive the Zoom details for the seminar with Mark:

The post AI literacy research: Children and families working together around smart devices appeared first on Raspberry Pi.

Exploring cross-disciplinary computing education in our new seminar series

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/cross-disciplinary-computing-education-research-seminars/

We are delighted to launch our next series of free online seminars, this time on the topic of cross-disciplinary computing, running monthly from May to November 2022. As always, our seminars are for all researchers, educators, and anyone else interested in research related to computing education.

An educator helps two learners set up a Raspberry Pi computer.

Crossing disciplinary boundaries

What do we mean by cross-disciplinary computing? Through this upcoming seminar series, we want to embrace the intersections and interactions of computing with all aspects of learning and life, and think about how they can help us teach young people. The researchers we’ve invited as our speakers will help us shed light on cross-disciplinary areas of computing through the breadth of their presentations.

In a computing classroom, a girl looks at a computer screen.

At the Raspberry Pi Foundation our mission is to make computing accessible to all children and young people everywhere, and because computing and technology appear in all aspects of our and young people’s lives, in this series of seminars we will consider what computing education looks like in a multiplicity of environments.

Mark Guzdial on computing in history and mathematics

We start the new series on 3 May, and are beyond delighted to be kicking off with a talk from Mark Guzdial (University of Michigan). Mark has worked in computer science education for decades and won many awards for his research, including the prestigious ACM SIGCSE Outstanding Contribution to Computing Education award in 2019. Mark has written hundreds of papers about computer science education, and he authors an extremely popular computing education research blog that keeps us all up to date with what is going on in the field.

Mark Guzdial.

Recently, he has been researching the ways in which programming education can be integrated into other subjects, so he is a perfect speaker to start us thinking about our theme of cross-disciplinary computing. His talk will focus on how we can add a teaspoon of computing to history and mathematics classes.

Pratim Sengupta on countering technocentrism

On 7 June, our speaker will be Pratim Sengupta (University of Calgary), who I feel will really challenge us to think about programming and computing education in a new way. He has conducted studies in science classrooms and non-formal learning environments which focus on providing open and engaging experiences for the public to explore code, for example through the Voice your Celebration installation. Recently, he has co-authored a book called Voicing Code in STEM: A Dialogical Imagination (MIT Press, available open access).

Pratim Sengupta.

In Pratim’s talk, he will share his thoughts about the ways that more of us can become involved with code through opening up its richness and depth to a wider public audience, and he will introduce us to his ideas about countering technocentrism, a key focus of his new book. I’m so looking forward to being challenged by this talk.

Yasmin Kafai on curriculum design with e-textiles

On 12 July, we will hear from Yasmin Kafai (University of Pennsylvania), who is another legend in computing education in my eyes. Yasmin started her long career in computing education with Seymour Papert, internationally known for his work on Logo and on constructionism as a theoretical lens for understanding the way we learn computing. Yasmin was part of the team that created Scratch, and for many years now has been working on projects revolving around digital making, electronic textiles, and computational participation.

Yasmin Kafai.

In Yasmin’s talk she will present, alongside a panel of teachers she’s been collaborating with, some of their work to develop a high school curriculum that uses electronic textiles to introduce students to computer science. This promises to be a really engaging and interactive seminar.

Genevieve Smith-Nunes on exploring data ethics

In August we will take a holiday, to return on 6 September to hear from the inspirational Genevieve Smith-Nunes (University of Cambridge), whose research is focused on dance and computing, in particular data-driven dance. Her work helps us to focus on the possibilities of creative computing, but also to think about the ethics of applications that involve vast amounts of data.

Genevieve Smith-Nunes.

Genevieve’s talk will prompt us to think about some really important questions: Is there a difference in sense of self (identity) between the human and the virtual? How does sharing your personal biometric data make you feel? How can biometric and immersive development tools be used in the computing classroom to raise awareness of data ethics? Impossible to miss!

Sign up now to attend the seminars

Do enter all these dates in your diary so you don’t miss out on participating — we are very excited about this series. Sign up below, and ahead of every seminar, we will send you the information for joining.

As usual, the seminars will take place online on a Tuesday at 17:00 to 18:30 local UK time. Later on in the series, we will also host a talk by our own researchers and developers at the Raspberry Pi Foundation about our non-formal learning research. Watch this space for details about the October and November seminars, which we are still finalising.

The post Exploring cross-disciplinary computing education in our new seminar series appeared first on Raspberry Pi.

Bias in the machine: How can we address gender bias in AI?

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/gender-bias-in-ai-machine-learning-biased-data/

At the Raspberry Pi Foundation, we’ve been thinking about questions relating to artificial intelligence (AI) education and data science education for several months now, inviting experts to share their perspectives in a series of very well-attended seminars. At the same time, we’ve been running a programme of research trials to find out what interventions in school might successfully improve gender balance in computing. We’re learning a lot, and one primary lesson is that these topics are not discrete: there are relationships between them.

We can’t talk about AI education — or computer science education more generally — without considering the context in which we deliver it, and the societal issues surrounding computing, AI, and data. For this International Women’s Day, I’m writing about the intersection of AI and gender, particularly with respect to gender bias in machine learning.

The quest for gender equality

Gender inequality is everywhere, and researchers, activists, initiatives, and governments themselves have struggled since the 1960s to tackle it. As women and girls around the world continue to suffer from discrimination, the United Nations has pledged, in its Sustainable Development Goals, to achieve gender equality and to empower all women and girls.

While progress has been made, new developments in technology may be threatening to undo this. As Susan Leavy, a machine learning researcher from the Insight Centre for Data Analytics, puts it:

Artificial intelligence is increasingly influencing the opinions and behaviour of people in everyday life. However, the over-representation of men in the design of these technologies could quietly undo decades of advances in gender equality.

Susan Leavy, 2018 [1]

Gender-biased data

In her 2019 award-winning book Invisible Women: Exploring Data Bias in a World Designed for Men [2], Caroline Criado Perez discusses the effects of gender-biased data. She describes, for example, how the designs of cities, workplaces, smartphones, and even crash test dummies are all based on data gathered from men. She also notes that medical research has historically been conducted by men, on male bodies.

Looking at this problem from a different angle, researcher Mayra Buvinic and her colleagues highlight that in most countries of the world, there are no sources of data that capture the differences between male and female participation in civil society organisations, or in local advisory or decision making bodies [3]. A lack of data about girls and women will surely impact decision making negatively. 

Bias in machine learning

Machine learning (ML) is a type of artificial intelligence technology that relies on vast datasets for training. ML is currently being used in various systems for automated decision making. Bias in datasets for training ML models can arise in several ways. For example, datasets can be biased because they are incomplete or skewed (as is the case in datasets which lack data about women). Datasets can also be biased because of the use of incorrect labels by people who annotate the data. Annotating data is necessary for supervised learning, where machine learning models are trained to categorise data into categories decided upon by people (e.g. pineapples and mangoes).

A banana, a glass flask, and a potted plant on a white surface. Each object is surrounded by a white rectangular frame with a label identifying the object.
Max Gruber / Better Images of AI / Banana / Plant / Flask / CC-BY 4.0

In order for a machine learning model to categorise new data appropriately, it needs to be trained with data that is gathered from everyone, and is, in the case of supervised learning, annotated without bias. Failing to do this creates a biased ML model. Bias has been demonstrated in different types of AI systems that have been released as products. For example:

Facial recognition: AI researcher Joy Buolamwini discovered that existing AI facial recognition systems do not identify dark-skinned and female faces accurately. Her discovery, and her work to push for the first-ever piece of legislation in the USA to govern against bias in the algorithms that impact our lives, is narrated in the 2020 documentary Coded Bias.

Natural language processing: Imagine that an AI system tasked with filling in the missing word in “Man is to king as woman is to X” comes up with “queen”. But what if the system completes “Man is to software developer as woman is to X” with “secretary” or some other word that reflects stereotypical views of gender and careers? AI models called word embeddings learn by identifying patterns in huge collections of texts. In addition to the structural patterns of the text language, word embeddings learn the human biases expressed in the texts. You can read more about this issue in this Brookings Institution report.
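To make the analogy mechanism concrete, here is a minimal sketch using hand-invented three-dimensional ‘embeddings’. Real word embeddings have hundreds of dimensions and are learned from text; the vectors below, including the stereotyped gender component, are fabricated purely for illustration:

```python
# Toy word vectors illustrating analogy arithmetic in word embeddings.
# Dimensions (hand-picked for this sketch): [royalty, office-work, gender]
vectors = {
    "man":       (0.0, 0.0,  1.0),
    "woman":     (0.0, 0.0, -1.0),
    "king":      (1.0, 0.0,  1.0),
    "queen":     (1.0, 0.0, -1.0),
    "developer": (0.0, 1.0,  0.8),   # invented stereotyped 'male' lean
    "secretary": (0.0, 1.0, -0.8),   # invented stereotyped 'female' lean
}

def analogy(a, b, c):
    """Return the word closest to vector(b) - vector(a) + vector(c),
    excluding the three query words themselves."""
    va, vb, vc = vectors[a], vectors[b], vectors[c]
    target = tuple(x - y + z for x, y, z in zip(vb, va, vc))
    def dist(word):
        return sum((p - q) ** 2 for p, q in zip(vectors[word], target))
    candidates = [w for w in vectors if w not in (a, b, c)]
    return min(candidates, key=dist)

print(analogy("man", "king", "woman"))       # → queen
print(analogy("man", "developer", "woman"))  # → secretary
```

Because the toy ‘developer’ and ‘secretary’ vectors encode a gender stereotype, the same arithmetic that sensibly recovers ‘queen’ also reproduces the biased completion: the model faithfully reflects whatever patterns, fair or not, are in its data.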

Not noticing

There is much debate about the level of bias in systems using artificial intelligence, and some AI researchers worry that this will cause distrust in machine learning systems. Thus, some scientists are keen to emphasise the breadth of their training data across the genders. However, other researchers point out that despite all good intentions, gender disparities are so entrenched in society that we literally are not aware of all of them. White and male dominance in our society may be so unconsciously prevalent that we don’t notice all its effects.

Three women discuss something while looking at a laptop screen.

As sociologist Pierre Bourdieu famously asserted in 1977: “What is essential goes without saying because it comes without saying: the tradition is silent, not least about itself as a tradition.” [4]. This view holds that people’s experiences are deeply, or completely, shaped by social conventions, even those conventions that are biased. That means we cannot be sure we have accounted for all disparities when collecting data.

What is being done in the AI sector to address bias?

Developers and researchers of AI systems have been trying to establish rules for how to avoid bias in AI models. An example rule set is given in an article in the Harvard Business Review, which describes how speech recognition systems originally performed poorly for female speakers compared to male ones, because the systems analysed and modelled speech based on taller speakers with longer vocal cords and lower-pitched voices (typically men).
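As a rough illustration of how a model trained on data from one group can fail another, here is a deliberately simplified sketch: a ‘model’ that accepts input as speech only if its pitch falls within the range seen during training. The pitch-only model and all numbers are invented for illustration; real speech recognition systems are vastly more sophisticated.

```python
# Sketch: fit an accepted pitch range from training samples, then test
# whether new voices fall inside it. Illustrative numbers only.
def fit_pitch_range(training_pitches_hz, margin=1.5):
    """Learn an accepted range as mean ± margin * standard deviation."""
    mean = sum(training_pitches_hz) / len(training_pitches_hz)
    var = sum((p - mean) ** 2 for p in training_pitches_hz) / len(training_pitches_hz)
    std = var ** 0.5
    return (mean - margin * std, mean + margin * std)

def accepts(pitch_hz, pitch_range):
    lo, hi = pitch_range
    return lo <= pitch_hz <= hi

# Train on male speakers only (typical male fundamental frequency
# is roughly 85-155 Hz).
male_only = [95, 110, 120, 130, 140]
model = fit_pitch_range(male_only)

print(accepts(125, model))  # male-range voice: True
print(accepts(210, model))  # typical female-range voice: False, rejected

# Retraining with female speakers included (roughly 165-255 Hz)
# widens the accepted range, so the same voice is now recognised:
balanced = male_only + [170, 195, 210, 225, 245]
model = fit_pitch_range(balanced)
print(accepts(210, model))  # True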

A woman looks at a computer screen.

The article recommends four ways for people who work in machine learning to try to avoid gender bias:

  • Ensure diversity in the training data (in the example from the article, including as many female audio samples as male ones)
  • Ensure that a diverse group of people labels the training data
  • Measure the accuracy of an ML model separately for different demographic categories to check whether the model is biased against some of these categories
  • Establish techniques to encourage ML models towards unbiased results
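The third recommendation, measuring accuracy separately per demographic category, can be sketched in a few lines of Python. The predictions, labels, and group assignments below are made up to show how an overall accuracy figure can hide a complete failure for one group:

```python
# Sketch: disaggregating a model's accuracy by demographic group to
# surface bias that a single overall figure hides. Data is invented.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += (pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results for a binary classifier:
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

overall = sum(p == l for p, l in zip(preds, labels)) / len(labels)
print(overall)                                   # 0.5
print(accuracy_by_group(preds, labels, groups))  # {'m': 1.0, 'f': 0.0}
```

The overall accuracy of 0.5 looks merely mediocre, but disaggregating shows the model is perfect for one group and fails entirely for the other, which is exactly the kind of disparity this check is meant to catch.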

What can everybody else do?

The above points can help people in the AI industry, which is of course important — but what about the rest of us? It’s important to raise awareness of the issues around gender data bias and AI, lest we find out too late that we are reintroducing gender inequalities we have fought so hard to remove. Awareness is a good start, and some further suggestions, drawn from others’ work in this area, are:

Improve the gender balance in the AI workforce

Having more women in AI and data science, particularly in both technical and leadership roles, will help to reduce gender bias. A 2020 report by the World Economic Forum (WEF) on gender parity found that women account for only 26% of data and AI positions in the workforce. The WEF suggests five ways in which the AI workforce gender balance could be addressed:

  1. Support STEM education
  2. Showcase female AI trailblazers
  3. Mentor women for leadership roles
  4. Create equal opportunities
  5. Ensure a gender-equal reward system

Ensure the collection of and access to high-quality and up-to-date gender data

We need high-quality datasets on women and girls, with good coverage, including country coverage. Data needs to be comparable across countries in terms of concepts, definitions, and measures. Data should have both complexity and granularity, so it can be cross-tabulated and disaggregated, following the recommendations from the Data2x project on mapping gender data gaps.

A woman works at a multi-screen computer setup on a desk.

Educate young people about AI

At the Raspberry Pi Foundation we believe that introducing some of the potential (positive and negative) impacts of AI systems to young people through their school education may help to build awareness and understanding at a young age. The jury is out on what exactly to teach in AI education, and how to teach it. But we think educating young people about new and future technologies can help them to see AI-related work opportunities as being open to all, and to develop critical and ethical thinking.

Three teenage girls at a laptop

In our AI education seminars we heard a number of perspectives on this topic, and you can revisit the videos, presentation slides, and blog posts. We’ve also been curating a list of resources that can help to further AI education — although there is a long way to go until we understand this area fully. 

We’d love to hear your thoughts on this topic.


References

[1] Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. Proceedings of the 1st International Workshop on Gender Equality in Software Engineering, 14–16.

[2] Perez, C. C. (2019). Invisible Women: Exploring Data Bias in a World Designed for Men. Random House.

[3] Buvinic M., Levine R. (2016). Closing the gender data gap. Significance 13(2):34–37 

[4] Bourdieu, P. (1977). Outline of a Theory of Practice (No. 16). Cambridge University Press. (p.167)

The post Bias in the machine: How can we address gender bias in AI? appeared first on Raspberry Pi.

Linking AI education to meaningful projects

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-education-meaningful-projects-tara-chklovski/

Our seminars in this series on AI and data science education, co-hosted with The Alan Turing Institute, have been covering a range of different topics and perspectives. This month was no exception. We were delighted to be able to host Tara Chklovski, CEO of Technovation, whose presentation was called ‘Teaching youth to use AI to tackle the Sustainable Development Goals’.

Tara Chklovski.
Tara Chklovski

The Technovation Challenge

Tara started Technovation, formerly called Iridescent, in 2007 with a family science programme in one school in Los Angeles. The nonprofit has grown hugely, and Technovation now runs computing education activities across the world. We heard from Tara that over 350,000 girls from more than 100 countries take part in their programmes, and that the nonprofit focuses particularly on empowering girls to become tech entrepreneurs. The girls, with support from industry volunteers, parents, and the Technovation curriculum, work in teams to solve real-world problems through an annual event called the Technovation Challenge. Working at scale with young people has given the Technovation team the opportunity to investigate the impact of their programmes as well as more generally learn what works in computing education. 

Tara Chklovski describes the Technovation Challenge in an online seminar.
Click to enlarge

Tara’s talk was extremely engaging (you’ll find the recording below), with videos of young people who had participated in recent years. Technovation works with volunteers and organisations to reach young people in communities where opportunities may be lacking, focussing on low- and middle-income countries. Tara spoke about the 900 million teenage girls in the world, a  substantial number of whom live in countries where there is considerable inequality. 

To illustrate the impact of the programme, Tara gave a number of examples of projects that students had developed, including:

  • An air quality sensor linked to messaging about climate change
  • A support circle for girls living in domestic violence situations
  • A project helping mothers communicate with their daughters
  • Support for water collection in Kenya

Early on, the Technovation Challenge had involved the creation of mobile apps, but in recent years, the projects have focused on using AI technologies to solve problems. A key message that Tara wanted to get across was that the focus on real-world problems and teamwork was as important as, if not more important than, the technical skills the young people were developing.

Technovation has designed an online curriculum to support teams, who may have no prior computing experience, to learn how to design an AI project. Students work through units on topics such as data analysis and building datasets. As well as the technical activities, young people also work through activities on problem-solving approaches, design, and systems thinking to help them tackle a real-world problem that is relevant to them. The curriculum supports teams to identify problems in their community and find a path to prototype and share an invention to tackle that problem.

Tara Chklovski describes the Technovation Challenge in an online seminar.
Click to enlarge

While working through the curriculum, teams develop AI models to address the problem that they have chosen. They then submit them to a global competition for beginners, juniors, and seniors. Many of the girls enjoy the Technovation Challenge so much that they come back year on year to further develop their team skills. 

AI Families: Children and parents using AI to solve problems

Technovation runs another programme, AI Families, that focuses on families working together to learn AI concepts and skills and use them to develop projects together. Families worked together with the help of educators to identify meaningful problems in their communities, and developed AI prototypes to address them.

A list of lessons in the AI Families programme from Technovation.

There were 20,000 participants from under-resourced communities in 17 countries through 2018 and 2019. 70% of them were women (mothers and grandmothers) who wanted their children to participate; in this way the programme encouraged parents to be role models for their daughters, and enabled families to understand that AI is a tool and to think about which problems in their community could be solved with AI skills and principles. Tara was keen to emphasise that, given the importance of AI in the world, the more people know about it, the more impact they can make on their local communities.

Tara shared links to the curriculum to demonstrate what families in this programme would learn week by week. The AI modules use tools such as Machine Learning for Kids.

The results of the AI Families project, as investigated over 2018 and 2019, are reported in this paper. The findings of the programme included:

  • Learning needs to focus on more than just content; interviews showed that learners needed to see how the content applied to real-world problems
  • Engaging parents and other family members can support retention and a sense of community, and support a culture of lifelong learning
  • It takes around 3 to 5 years to iteratively develop a fun, engaging, and effective curriculum, training, and scalable programme delivery methods. This level of patience and commitment is needed from all community and industry partners and funders.

The research describes how the programme worked pre-pandemic. Tara highlighted that although the pandemic has prevented so much face-to-face team work, it has allowed some young people to access education online that they would not have otherwise had access to.

Many perspectives on AI education

Our goal is to listen to a variety of perspectives through this seminar series, and I felt that Tara really offered something fresh and engaging to our seminar audience, many of them (many of you!) regular attendees who we’ve got to know since we’ve been running the seminars. The seminar combined real-life stories with videos, as well as links to the curriculum used by Technovation to support learners of AI. The ‘question and answer’ session after the seminar focused on ways in which people could engage with the programme. On Twitter, one of the seminar participants declared this seminar “my favourite thus far in the series”.  It was indeed very inspirational.

As we near the end of this series, we can start to reflect on what we’ve been learning from all the various speakers, and I intend to do this more formally in a month or two as we prepare Volume 3 of our seminar proceedings. While Tara’s emphasis is on motivating children to want to learn the latest technologies because they can see what they can achieve with them, some of our other speakers have considered the actual concepts we should be teaching, whether we have to change our approach to teaching computer science if we include AI, and how we should engage young learners in the ethics of AI.

Join us for our next seminar

I’m really looking forward to our final seminar in the series, with Stefania Druga, on Tuesday 1 March at 17:00–18:30 GMT. Stefania, a PhD candidate at the University of Washington Information School, will also focus on families. In her talk ‘Democratising AI education with and for families’, she will consider the ways that children engage with smart, AI-enabled devices that are becoming part of their everyday lives. It’s a perfect way to finish this series, and we hope you’ll join us.

Thanks to our seminars series, we are developing a list of AI education resources that seminar speakers and attendees share with us, plus the free resources we are developing at the Foundation. Please do take a look.

You can find all blog posts relating to our previous seminars on this page.

The post Linking AI education to meaningful projects appeared first on Raspberry Pi.

Calling all Computing and ICT teachers in the UK and Ireland: Have your say

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/computing-ict-teacher-survey-uk-ireland-ukicts-call-for-responses/

Back in October, I wrote about a report that the Brookings Institution, a US think tank, had published about the provision of computer science in schools around the world. Brookings conducted a huge amount of research on computer science curricula in a range of countries, and the report gives a very varied picture. However, we believe that, to see a more complete picture, it’s also important to gather teachers’ own perspectives on their teaching.

school-aged girls and a teacher using a computer together.

Complete our survey for computing teachers

Experiences shared by teachers on the ground can give important insights to educators and researchers as well as to policymakers, and can be used to understand both gaps in provision and what is working well. 

Today we launch a survey for computing teachers across Ireland and the UK. The purpose of this survey is to find out about the experiences of computing teachers across the UK and Ireland, including what you teach, your approaches to teaching, and professional development opportunities that you have found useful. You can access it by clicking one of these buttons:

The survey is:

  • Open to all early years, primary, secondary, sixth-form, and further education teachers in Ireland, England, Northern Ireland, Scotland, and Wales who have taught any computing or computer science (even a tiny bit) in the last year
  • Available in English, Welsh, Gaelic, and Irish/Gaeilge
  • Anonymous, and we aim to make the data openly available, in line with our commitment to open-source data; the survey collects no personal data
  • Designed to take you 20 to 25 minutes to complete

The survey will be open for four weeks, until 7 March. When you complete the survey, you’ll have the opportunity to enter a weekly prize draw for a £50 book token, so if you complete it in the first week, you automatically get four chances to win!

We’re aiming for 1000 teachers to complete the survey, so please do fill it in and share it with your colleagues. If you can help us now, we’ll be able to share the survey findings on this website and other channels in the summer.

“Computing education in Ireland — as in many other countries — has changed so much in the last decade, and perhaps even more so in the last few years. Understanding teachers’ views is vital for so many reasons: to help develop, inform, and steer much-needed professional development; to inform policymakers on actions that will have positive effects for teachers working in the classroom; and to help researchers identify and conduct research in areas that will have real impact on and for teachers.”

– Keith Quille (Technological University Dublin), member of the research project team

What computing is taught in the UK and Ireland?

There are key differences in the provision of computer science and computing education across the UK and Ireland, not least what we all call the subject.

In England, the mandatory national curriculum subject is called Computing, but for learners electing to take qualifications such as GCSE and A level, the subject is called computer science. Computing is taught in all schools from age 5, and is a broad subject covering digital literacy as well as elements of computer science, such as algorithms and programming; networking; and computer architecture.

Male teacher and male students at a computer

In Northern Ireland, the teaching curriculum involves developing Cross-Curricular Skills (CCS) and Thinking Skills and Personal Capabilities. This means that from the Early Years Foundation Stage to the end of key stage 3, “using ICT” is one of the three statutory CCS, alongside “communication” and “using mathematics”, which must be included in lessons. At GCSE and A level, the subject (for those who select it) is called Digital Technology, with GCSE students being able to choose between GCSE Digital Technology (Multimedia) and GCSE Digital Technology (Programming).

In Scotland, the Curriculum for Excellence is divided into two phases: the broad general education (BGE) and the senior phase. In the BGE, from age 3 to 15 (the end of the third year of secondary school), all children and young people are entitled to a computing science curriculum as part of the Technologies framework. In S4 to S6, young people may choose to extend and deepen their learning in computing science through National and Higher qualification courses.

A computing teacher and students in the classroom.

In Wales, computer science will be part of a new Science & Technology area of learning and experience for all learners aged 3–16. Digital competence is also a statutory cross-curricular skill alongside literacy and numeracy; it comprises four strands: Citizenship; Interacting and collaborating; Producing; and Data and computational thinking. Wales offers a new GCSE and A level in Digital Technology, as well as GCSE and A level Computer Science.

Ireland has introduced Computer Science for the Leaving Certificate as an optional subject (typically taken between ages 15 and 18), after a pilot phase that began in 2018. The Leaving Certificate subject includes three strands: practices and principles; core concepts; and computer science in practice. At junior cycle level (typically ages 12 to 15), an optional short course in coding is now available. The short course has three strands: Computer science introduction; Let’s get connected; and Coding at the next level.

What is the survey?

The survey is a localised and slightly adapted version of METRECC, which is a comprehensive and validated survey tool developed in 2019 to benchmark and measure developments of the teaching and learning of computing in formal education systems around the world. METRECC stands for ‘MEasuring TeacheR Enacted Computing Curriculum’. The METRECC survey has ten categories of questions and is designed to be completed by practising computing teachers.

Using existing standardised survey instruments is good research practice, as it increases the reliability and validity of the results. In 2019, METRECC was used to survey teachers in England, Scotland, Ireland, Italy, Malta, Australia, and the USA. It was subsequently revised and has been used more recently to survey computing teachers in South Asia and in four countries in Africa.

A computing teacher and a learner do physical computing in the primary school classroom.

With sufficient responses, we hope to be able to report on the resources and classroom practices of computing teachers, as well as on their access to professional development opportunities. This will enable us not only to compare the UK’s four devolved nations and Ireland, but also to report on aspects of the teaching of computing in general, and on how teachers perceive the teaching of the subject. As computing is a relatively new subject whichever country you are in, it’s crucial to gather and analyse this information so that we can develop our understanding of the teaching of computing.

The research team

For this project, we are working as a team of researchers across the UK and Ireland. Together we have a breadth of experience around the development of computing as a school subject (using this broad term to also cover digital competencies and digital technology) in our respective countries. We also have experience of quantitative research and reporting, and we are aiming to publish the results in an academic journal as well as disseminate them to a wider audience. 

In alphabetical order, on the team are:

  • Elizabeth Cole, who researches early years and primary programming education at the Centre for Computing Science Education (CCSE), University of Glasgow
  • Tom Crick, who is Professor of Digital Education & Policy at Swansea University and has been involved in policy development around computing in Wales for many years
  • Diana Kirby, who is a Programme Coordinator at the Raspberry Pi Foundation
  • Nicola Looker, who is a Lecturer in Secondary Education at Edgehill University, and a PhD student at CCSE, University of Glasgow, researching programming pedagogy
  • Keith Quille, who is a Senior Lecturer in Computing at Technological University Dublin
  • Sue Sentance, who is the Director of the Raspberry Pi Computing Education Research Centre at the University of Cambridge, and Chief Learning Officer at the Raspberry Pi Foundation

In addition, Dr Irene Bell, Stranmillis University College, Belfast, has been assisting the team to ensure that the survey is applicable for teachers in Northern Ireland. Keith, Sue, and Elizabeth were part of the original team that designed the survey in 2019.

How can I find out more?

On this page, you’ll see more information about the survey and our findings once we start analysing the data. You can bookmark the page, as we will keep it updated with the results of the survey and any subsequent publications.

The post Calling all Computing and ICT teachers in the UK and Ireland: Have your say appeared first on Raspberry Pi.

The Roots project: Implementing culturally responsive computing teaching in schools in England

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/culturally-responsive-computing-teaching-schools-england-roots-research-project/

Since last year, we have been investigating culturally relevant pedagogy and culturally responsive teaching in computing education. This is an important part of our research to understand how to make computing accessible to all young people. We are now continuing our work in this area with a new project called Roots, bridging our research team here at the Foundation and the team at the Raspberry Pi Computing Education Research Centre, which we jointly created with the University of Cambridge in its Department of Computer Science and Technology.

Across both organisations, we’ve got great ambitions for the Centre, and I’m delighted to have been appointed as its Director. It’s a great privilege to lead this work. 

What do we mean by culturally relevant pedagogy?

Culturally relevant pedagogy is a framework for teaching that emphasises the importance of incorporating and valuing all learners’ knowledge, ways of learning, and heritage. It promotes the development of learners’ critical consciousness of the world and encourages them to ask questions about ethics, power, privilege, and social justice. Culturally relevant pedagogy emphasises opportunities to address issues that are important to learners and their communities.

Culturally responsive teaching builds on the framework above to identify a range of teaching practices that can be implemented in the classroom. These include:

  • Drawing on learners’ cultural knowledge and experiences to inform the curriculum
  • Providing opportunities for learners to choose personally meaningful projects and express their own cultural identities
  • Exploring issues of social justice and bias

The story so far

The overall objective of our work in this area is to further our understanding of ways to engage underrepresented groups in computing. In 2021, funded by a Special Projects Grant from ACM’s Special Interest Group in Computer Science Education (SIGCSE), we established a working group of teachers and academics who met up over the course of three months to explore and discuss culturally relevant pedagogy. The result was a collaboratively written set of practical guidelines about culturally relevant and responsive teaching for classroom educators.

The video below is an introduction for teachers who may not be familiar with the topic, showing the perspectives of three members of the working group and their students. You can also find other resources that resulted from this first phase of the work, and read our Special Projects Report.

We’re really excited that, having developed the guidelines, we can now focus on how culturally responsive computing teaching can be implemented in English schools through the Roots project, a new, related project supported by funding from Google. This funding continues Google’s commitment to grow the impact of computer science education in schools, which included a £1 million donation to support us and other organisations to develop online courses for teachers.

The next phase of work: Roots

In our new Roots project, we want to learn from practitioners how culturally responsive computing teaching can be implemented in classrooms in England, by supporting teachers to plan activities, and listening carefully to their experiences in school. Our approach is similar to the Research-Practice-Partnership (RPP) approach used extensively in the USA to develop research in computing education; this approach hasn’t yet been used in the UK. In this way, we hope to further develop and improve the guidelines with exemplars and case studies, and to increase our understanding of teachers’ motivations and beliefs with respect to culturally responsive computing teaching.

The pilot phase of the Roots project starts this month and will run until December 2022. During this phase, we will work with a small group of schools around London, Essex, and Cambridgeshire. Longer-term, we aim to scale up this work across the UK.

The project will be centred around two workshops held in participating teachers’ schools during the first half of the year. In the first workshop, teachers will work together with facilitators from the Foundation and the Raspberry Pi Computing Education Research Centre to discuss culturally responsive computing teaching and how to make use of the guidelines in adapting existing lessons and programmes of study. The second workshop will take place after the teachers have implemented the guidelines in their classroom, and it will be structured around a discussion of the teachers’ experiences and suggestions for iteration of the guidelines. We will also be using a visual research methodology to create a number of videos representing the new knowledge gleaned from all participants’ experiences of the project. We’re looking forward to sharing the results of the project later on in the year. 

We’re delighted that Dr Polly Card will be leading the work on this project at the Raspberry Pi Computing Education Research Centre, University of Cambridge, together with Saman Rizvi in the Foundation’s research team and Katie Vanderpere-Brown, Assistant Headteacher, Saffron Walden County High School, Essex and Computing Lead of the NCCE London, Hertfordshire and Essex Computing Hub.

More about equity, diversity, and inclusion in computing education

We hold monthly research seminars here at the Foundation, and in the first half of 2021, we invited speakers who focus on a range of topics relating to equity, diversity, and inclusion in computing education.

As well as holding seminars and building a community of interested people around them, we share the insights from speakers and attendees through video recordings of the sessions, blog posts, and the speakers’ presentation slides. We also publish a series of seminar proceedings with referenced chapters written by the speakers.

You can download your copy of the proceedings of the equity, diversity, and inclusion series now.  

The post The Roots project: Implementing culturally responsive computing teaching in schools in England appeared first on Raspberry Pi.

The AI4K12 project: Big ideas for AI education

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-education-ai4k12-big-ideas-ai-thinking/

What is AI thinking? What concepts should we introduce to young people related to AI, including machine learning (ML), and data science? Should we teach with a glass-box or an opaque-box approach? These are the questions we’ve been grappling with since we started our online research seminar series on AI education at the Raspberry Pi Foundation, co-hosted with The Alan Turing Institute.

Over the past few months, we’d already heard from researchers from the UK, Germany, and Finland. This month we virtually travelled to the USA, to hear from Prof. Dave Touretzky (Carnegie Mellon University) and Prof. Fred G. Martin (University of Massachusetts Lowell), who have pioneered the influential AI4K12 project together with their colleagues Deborah Seehorn and Christina Gardner-McLure.

The AI4K12 project

The AI4K12 project focuses on teaching AI in K-12 in the US. The AI4K12 team have aligned their vision for AI education to the CSTA standards for computer science education. These Standards, published in 2017, describe what should be taught in US schools across the discipline of computer science, but they say very little about AI. This was the stimulus for starting the AI4K12 initiative in 2018. A number of members of the AI4K12 working group are practitioners in the classroom who’ve made a huge contribution in taking this project from ideas into the classroom.

Dave Touretzky presents the five big ideas of the AI4K12 project at our online research seminar.
Dave gave us an overview of the AI4K12 project (click to enlarge)

The project has a number of goals: one is to develop a curated resource directory for K-12 teachers, and another is to create a community of K-12 resource developers. On the AI4K12.org website, you can find links to many resources and sign up for their mailing list. I’ve been subscribed to this list for a while now, and fascinating discussions and resources have been shared.

Five Big Ideas of AI4K12

If you’ve heard of AI4K12 before, it’s probably because of the Five Big Ideas the team has set out to encompass the AI field from the perspective of school-aged children. These ideas are: 

  1. Perception — the idea that computers perceive the world through sensing
  2. Representation and reasoning — the idea that agents maintain representations of the world and use them for reasoning
  3. Learning — the idea that computers can learn from data
  4. Natural interaction — the idea that intelligent agents require many types of knowledge to interact naturally with humans
  5. Societal impact — the idea that artificial intelligence can impact society in both positive and negative ways
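To make the third big idea concrete, here is a toy sketch of a computer learning from data: a classic perceptron that learns the logical AND function purely from labelled examples. This example is our own illustration, not part of the AI4K12 materials.

```python
# A minimal perceptron, illustrating "computers can learn from data".
# The model starts with zero weights and nudges them after each mistake.

def train_perceptron(examples, epochs=20):
    """examples: list of ((x1, x2), label) with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - prediction  # -1, 0, or +1
            w1 += error * x1
            w2 += error * x2
            b += error
    return (w1, w2, b)

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The logical AND function, given only as labelled data
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train_perceptron(data)
print([predict(model, p) for p, _ in data])  # [0, 0, 0, 1]
```

Nothing about AND is written into the code; the rule emerges from the examples, which is exactly the shift in thinking the ‘Learning’ idea asks young people to grasp.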

Sometimes we hear concerns that resources being developed to teach AI concepts to young people are narrowly focused on machine learning, particularly supervised learning for classification. It’s clear from the AI4K12 Five Big Ideas that the team’s definition of the AI field encompasses much more than one area of ML. Despite being developed for a US audience, I believe the description laid out in these five ideas is immensely useful to all educators, researchers, and policymakers around the world who are interested in AI education.

Fred Martin presents one of the five big ideas of the AI4K12 project at our online research seminar.
Fred explained how ‘representation and reasoning’ is a big idea in the AI field (click to enlarge)

During the seminar, Dave and Fred shared some great practical examples. Fred explained how the big ideas translate into learning outcomes at each of the four age groups (ages 5–8, 9–11, 12–14, 15–18). You can find out more about their examples in their presentation slides or the seminar recording (see below). 

I was struck by how much the AI4K12 team has thought about progression — what you learn when, and in which sequence — which we really need to understand well before we can start to teach AI in any formal way. Looking at how we might teach visual perception, for example: very young children might start with a tool such as Teachable Machine to understand that they can teach a computer to recognise what they want it to see; they might then move on to building an application using Scratch plugins or Calypso; and later they could learn about the different levels of visual structure and the abstraction pipeline, the hierarchy of increasingly abstract representations. Staying with visual perception, Fred used the example of self-driving cars and how they represent images.

A diagram of the levels of visual structure.
Fred used this slide to describe how young people might learn abstracted elements of visual structure

AI education with an age-appropriate, glass-box approach

Dave and Fred support teaching AI to children using a glass-box approach. By ‘glass-box approach’ we mean giving students information about how AI systems work and showing them the inner workings, so to speak. The opposite would be an ‘opaque-box approach’, by which we mean showing students only an AI system’s inputs and outputs to demonstrate what AI is capable of, without trying to teach any technical detail.
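To make the contrast concrete, here is a small, hypothetical sketch (our own, not from the seminar) using a nearest-centroid classifier: the opaque-box view reveals only an answer, while the glass-box view also exposes the learned internals for learners to inspect and discuss.

```python
# Glass-box vs opaque-box teaching, sketched with a nearest-centroid
# classifier. Training simply averages the points seen for each label;
# classification picks the label whose centroid is closest.

def train_centroids(examples):
    """examples: list of ((x, y), label). Learns one centroid per label."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def classify(centroids, point):
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

data = [((1, 1), "cat"), ((2, 1), "cat"), ((8, 9), "dog"), ((9, 8), "dog")]
model = train_centroids(data)

# Opaque-box view: only the answer...
print(classify(model, (2, 2)))   # cat
# Glass-box view: ...plus the learned internals, open to inspection
print(model)                     # {'cat': (1.5, 1.0), 'dog': (8.5, 8.5)}
```

With the internals visible, learners can ask why the model answered ‘cat’, and even predict by hand what it will say about a new point, building exactly the kind of mental model the speakers advocate.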

AI4K12 advice for educators supporting K-12 students: 1. Use transparent AI demonstrations. 2. Help students build mental models. 3. Encourage students to build AI applications.
AI4K12 teacher guidelines for AI education

Our speakers are keen for learners to understand, at an age-appropriate level, what is going on “inside” an AI system, not just what the system can do. They believe it’s important for young people to build mental models of how AI systems work, so that as they get older, they can use their increasing knowledge and skills to develop their own AI applications. This aligns with the views of some of our previous seminar speakers, including Finnish researchers Matti Tedre and Henriikka Vartiainen, who presented at our seminar series in November.

What is AI thinking?

Dave addressed the question of what AI thinking looks like in school. His approach was to start with computational thinking (he used the example of the Barefoot project’s description of computational thinking as a starting point) and describe AI thinking as an extension that includes the following skills:

  • Perception 
  • Reasoning
  • Representation
  • Machine learning
  • Language understanding
  • Autonomous robots

Dave described AI thinking as furthering the ideas of abstraction and algorithmic thinking commonly associated with computational thinking, stating that in the case of AI, computation actually is thinking. My own view is that to fully define AI thinking, we need to dig a bit deeper into, for example, what is involved in developing an understanding of perception and representation.

An image demonstrating that AI systems for object recognition may not distinguish between a real banana on a desk and the photo of a banana on a laptop screen.
Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

Thinking back to Matti Tedre and Henriikka Vartiainen’s description of CT 2.0, which focuses only on the ‘Learning’ aspect of the AI4K12 Five Big Ideas, and on the distinct ways of thinking underlying data-driven programming and traditional programming, we can see some differences between how the two groups of researchers describe the thinking skills young people need in order to understand and develop AI systems. Tedre and Vartiainen are working on a more finely granular description of ML thinking, which has the potential to impact the way we teach ML in school.

There is also another description of AI thinking. Back in 2020, Juan David Rodríguez García presented his system LearningML at one of our seminars. Juan David drew on a paper by Van Brummelen, Shen, and Patton, who extended Brennan and Resnick’s CT framework of concepts, practices, and perspectives to include concepts such as classification, prediction, and generation, together with practices such as training, validating, and testing.

What I take from this is that there is much still to research and discuss in this area! It’s a real privilege to be able to hear from experts in the field and compare and contrast different standpoints and views.

Resources for AI education

The AI4K12 project has already made a massive contribution to the field of AI education, and we were delighted to hear that Dave, Fred, and their colleagues have just been awarded the AAAI/EAAI Outstanding Educator Award for 2022 for AI4K12.org. An amazing achievement! Particularly useful about this website is that it links to many resources, and that the Five Big Ideas give a framework for these resources.

Through our seminars series, we are developing our own list of AI education resources shared by seminar speakers or attendees, or developed by us. Please do take a look.

Join our next seminar

Through these seminars, we’re learning a lot about AI education and what it might look like in school, and we’re having great discussions during the Q&A section.

On Tuesday 1 February at 17:00–18:30 GMT, we’ll hear from Tara Chklovski, who will talk about AI education in the context of the Sustainable Development Goals. To participate, sign up and we will send you information about joining. I really hope you’ll be there for this seminar!

The schedule of our upcoming seminars is online. You can also (re)visit past seminars and recordings on the blog.

The post The AI4K12 project: Big ideas for AI education appeared first on Raspberry Pi.

How do we develop AI education in schools? A panel discussion

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-education-schools-panel-uk-policy/

AI is a broad and rapidly developing field of technology. Our goal is to make sure all young people have the skills, knowledge, and confidence to use and create AI systems. So what should AI education in schools look like?

To hear a range of insights into this, we organised a panel discussion as part of our seminar series on AI and data science education, which we co-host with The Alan Turing Institute. Here our panel chair Tabitha Goldstaub, Co-founder of CogX and Chair of the UK government’s AI Council, summarises the event. You can also watch the recording below.

As part of the Raspberry Pi Foundation’s monthly AI education seminar series, I was delighted to chair a special panel session to broaden the range of perspectives on the subject. The members of the panel were:

  • Chris Philp, UK Minister for Tech and the Digital Economy
  • Philip Colligan, CEO of the Raspberry Pi Foundation 
  • Danielle Belgrave, Research Scientist, DeepMind
  • Caitlin Glover, A level student, Sandon School, Chelmsford
  • Alice Ashby, student, University of Brighton

The session explored the commitment made in the recently published UK National AI Strategy, which states that “the [UK] government will continue to ensure programmes that engage children with AI concepts are accessible and reach the widest demographic.” We discussed what it will take to make this a reality, and how we will ensure young people have a seat at the table.

Two teenage girls do coding during a computer science lesson.

Why AI education for young people?

It was clear that the Minister felt it is very important for young people to understand AI. He said, “The government takes the view that AI is going to be one of the foundation stones of our future prosperity and our future growth. It’s an enabling technology that’s going to have almost universal applicability across our entire economy, and that is why it’s so important that the United Kingdom leads the world in this area. Young people are the country’s future, so nothing is complete without them being at the heart of it.”

A teacher watches two female learners code in Code Club session in the classroom.

Our panelist Caitlin Glover, an A level student at Sandon School, reiterated this from her perspective as a young person. She told us that her passion for AI started initially because she wanted to help neurodiverse young people like herself. Her idea was to start a company that would build AI-powered products to help neurodiverse students.

What careers will AI education lead to?

A theme of the Foundation’s seminar series so far has been how learning about AI early may impact young people’s career choices. Our panelist Alice Ashby, who studies Computer Science and AI at Brighton University, told us about her own process of deciding on her course of study. She pointed to the fact that terms such as machine learning, natural language processing, self-driving cars, chatbots, and many others are currently all under the umbrella of artificial intelligence, but they’re all very different. Alice thinks it’s hard for young people to know whether it’s the right decision to study something that’s still so ambiguous.

A young person codes at a Raspberry Pi computer.

When I asked Alice what gave her the courage to take a leap of faith with her university course, she said, “I didn’t know it was the right move for me, honestly. I took a gamble, I knew I wanted to be in computer science, but I wanted to spice it up.” The AI ecosystem is very lucky that people like Alice choose to enter the field even without being taught what precisely it comprises.

We also heard from Danielle Belgrave, a Research Scientist at DeepMind with a remarkable career in AI for healthcare. Danielle explained that she was lucky to have had a Mathematics teacher who encouraged her to work in statistics for healthcare. She said she wanted to ensure she could use her technical skills and her love for maths to make an impact on society, and to really help make the world a better place. Danielle works with biologists, mathematicians, philosophers, and ethicists as well as with data scientists and AI researchers at DeepMind. One possibility she suggested for improving young people’s understanding of what roles are available was industry mentorship. Linking people who work in the field of AI with school students was an idea that Caitlin was eager to confirm as very useful for young people her age.

We need investment in AI education in school

The AI Council’s Roadmap stresses how important it is not only to teach the skills needed to foster a pool of people who are able to research and build AI, but also to ensure that every child leaves school with the AI and data literacy needed to become an engaged, informed, and empowered user of the technology. During the panel, the Minister, Chris Philp, spoke about the fact that people don’t have to be technical experts to come up with brilliant ideas; we need more people who can think creatively and have the confidence to adopt AI, and that starts in schools.

A class of primary school students do coding at laptops.

Caitlin is a perfect example of a young person who has been inspired about AI while in school. But sadly, among young people and especially girls, she’s in the minority by choosing to take computer science, which meant she had the chance to hear about AI in the classroom. But even for young people who choose computer science in school, at the moment AI isn’t in the national Computing curriculum or part of GCSE computer science, so much of their learning currently takes place outside of the classroom. Caitlin added that she had had to go out of her way to find information about AI; the majority of her peers are not even aware of opportunities that may be out there. She suggested that we ensure AI is taught across all subjects, so that every learner sees how it can make their favourite subject even more magical and thinks “AI’s cool!”.

A primary school boy codes at a laptop with the help of an educator.

Philip Colligan, the CEO here at the Foundation, also described how AI could be integrated into existing subjects including maths, geography, biology, and citizenship classes. Danielle thoroughly agreed and made the very good point that teaching this way across the school would help prepare young people for the world of work in AI, where cross-disciplinary science is so important. She reminded us that AI is not one single discipline. Instead, many different skill sets are needed, including engineering new AI systems, integrating AI systems into products, researching problems to be addressed through AI, or investigating AI’s societal impacts and how humans interact with AI systems.

On hearing about this multitude of different skills, our discussion turned to the teachers who are responsible for imparting this knowledge, and to the challenges they face. 

The challenge of AI education for teachers

When we shifted the focus of the discussion to teachers, Philip said: “If we really want to equip every young person with the knowledge and skills to thrive in a world that is shaped by these technologies, then we have to find ways to evolve the curriculum and support teachers to develop the skills and confidence to teach that curriculum.”

Teenage students and a teacher do coding during a computer science lesson.

I asked the Minister what he thought needed to happen to ensure we achieved data and AI literacy for all young people. He said, “We need to work across government, but also across business and society more widely as well.” He went on to explain how important it was that the Department for Education (DfE) gets the support to make the changes needed, and that he and the Office for AI were ready to help.

Philip explained that the Raspberry Pi Foundation is one of the organisations in the consortium running the National Centre for Computing Education (NCCE), which is funded by the DfE in England. Through the NCCE, the Foundation has already supported thousands of teachers to develop their subject knowledge and pedagogy around computer science.

A recent study recognises that the investment made by the DfE in England is the most comprehensive effort globally to implement the computing curriculum, so we are starting from a good base. But Philip made it clear that now we need to expand this investment to cover AI.

Young people engaging with AI out of school

Philip described how brilliant it is to witness young people who choose to get creative with new technologies. As an example, he shared that the Foundation is seeing more and more young people employ machine learning in the European Astro Pi Challenge, where participants run experiments using Raspberry Pi computers on board the International Space Station. 

Three teenage boys do coding at a shared computer during a computer science lesson.

Philip also explained that, in the Foundation’s non-formal CoderDojo club network and its Coolest Projects tech showcase events, young people build their dream AI products supported by volunteers and mentors. Among these have been autonomous recycling robots and AI anti-collision alarms for bicycles. Like Caitlin with her company idea, this shows that young people are ready and eager to engage and create with AI.

We closed out the panel by going back to a point raised by Mhairi Aitken, who presented at the Foundation’s research seminar in September. Mhairi, an Alan Turing Institute ethics fellow, argues that children don’t just need to learn about AI, but that they should actually shape the direction of AI. All our panelists agreed on this point, and we discussed what it would take for young people to have a seat at the table.

A Black boy uses a Raspberry Pi computer at school.

Alice advised that we start by looking at our existing systems for engaging young people, such as Youth Parliament, student unions, and school groups. She also suggested adding young people to the AI Council, which I’m going to look into right away! Caitlin agreed and added that it would be great to make these forums virtual, so that young people from all over the country could participate.

The panel session was full of insight and felt very positive. Although the challenge of ensuring we have a data- and AI-literate generation of young people is tough, it’s clear that if we include them in finding the solution, we are in for a bright future. 

What’s next for AI education at the Raspberry Pi Foundation?

In the coming months, our goal at the Foundation is to increase our understanding of the concepts underlying AI education and how to teach them in an age-appropriate way. To that end, we will start to conduct a series of small AI education research projects, which will involve gathering the perspectives of a variety of stakeholders, including young people. We’ll make more information available on our research pages soon.

In the meantime, you can sign up for our upcoming research seminars on AI and data science education, and peruse the collection of related resources we’ve put together.

The post How do we develop AI education in schools? A panel discussion appeared first on Raspberry Pi.

Computer science education is a global challenge

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/brookings-report-global-computer-science-education-policy/

For the last two years, I’ve been one of the advisors to the Center for Universal Education at the Brookings Institution, a US-based think tank, on their project to survey formal computing education systems across the world. The resulting education policy report, Building skills for life: How to expand and improve computer science education around the world, pulls together the findings of their research. I’ll highlight key lessons policymakers and educators can benefit from, and what elements I think have been missed.

Woman teacher and female students at a computer

Why a global challenge?

Work on this new Brookings report was motivated by the belief that if our goal is to create an equitable, global society, then we need computer science (CS) in school to be accessible around the world; countries need to educate their citizens about computer science, both to strengthen their economic situation and to tackle inequality between countries. The report states that “global development gaps will only be expected to widen if low-income countries’ investments in these domains falter while high-income countries continue to move ahead” (p. 12).

Student using a Raspberry Pi computer

The report makes an important contribution to our understanding of computer science education policy, providing a global overview as well as in-depth case studies of education policies around the world. The case studies look at 11 countries and territories, including England, South Africa, British Columbia, Chile, Uruguay, and Thailand. The map below shows an overview of the Brookings researchers’ findings. It indicates whether computer science is a mandatory or elective subject, whether it is taught in primary or secondary schools, and whether it is taught as a discrete subject or across the curriculum.

A world map showing countries' situation in terms of computing education policy.
Computer science education across the world. Figure courtesy of Brookings Institution (click to enlarge).

It’s a patchy picture, demonstrating both countries’ level of capacity to deliver computer science education and the different approaches countries have taken. Analysis in the Brookings report shows a correlation between a country’s economic position and implementation of computer science in schools: no low-income countries have implemented it at all, while over 20% of high-income countries have mandatory computer science education at both primary and secondary level. 

Capacity building: IT infrastructure and beyond

Given these disparities, there is a significant focus in the report on what IT infrastructure countries need in order to deliver computer science education. This infrastructure needs to be preceded by investment (funds to afford it) and policy (a clear statement of intent and an implementation plan). Many countries that the Brookings report describes as having no computer science education may still be struggling to put these in place.

A young woman codes in a computing classroom.

The recently developed CAPE (capacity, access, participation, experience) framework offers another way of assessing disparities in education. To have the capacity to make computer science part of formal education, a country needs a number of elements to be in place.

My view is that countries that are at the beginning of this process need to focus on IT infrastructure, but also on the other elements of capacity. The Brookings report touches on these elements of capacity as well. Once these are in place in a country, the focus can shift to the next level: access for learners.

Comparing countries — what policies are in place?

In their report, the Brookings researchers identify seven complementary policy actions that a country can take to facilitate implementation of computer science education:

  1. Introduction of ICT (information and communications technology) education programmes
  2. Requirement for CS in primary education
  3. Requirement for CS in secondary education
  4. Introduction of in-service CS teacher education programmes
  5. Introduction of pre-service teacher CS education programmes
  6. Setup of a specialised centre or institution focused on CS education research and training
  7. Regular funding allocated to CS education by the legislative branch of government

The figure below compares the 11 case-study regions in terms of how many of the seven policy actions have been taken, what IT infrastructure is in place, and when the process of implementing CS education started.

A graph showing the trajectory of 11 regions of the world in terms of computing education policy.
Trajectories of regions in the 11 case studies. Figure courtesy of Brookings Institution (click to enlarge).

England is the only country that has taken all seven of the identified policy actions, having already had nation-wide IT infrastructure and broadband connectivity in place. Chile, Thailand, and Uruguay have made impressive progress, both on infrastructure development and on policy actions. However, it’s clear that making progress takes many years — Chile started in 1992, and Uruguay in 2007 —  and requires a considerable amount of investment and government policy direction.

Computing education policy in England

The first case study that Brookings produced for this report, back in 2019, related to England. Over the last 8 years in England, we have seen the development of computing education in the curriculum as a mandatory subject in primary and secondary schools. Initially, funding for teacher education was limited, but in 2018, the government provided £80 million of funding to us and a consortium of partners to establish the National Centre for Computing Education (NCCE). Thus, in-service teacher education in computing has been given more priority in England than probably anywhere else in the world.

Three young people learn coding at laptops supported by a volunteer at a CoderDojo session.

Alongside teacher education, the funding also covered our development of classroom resources to cover the whole CS curriculum, and of Isaac Computer Science, our online platform for 14- to 18-year-olds learning computer science. We’re also working on a £2m government-funded research project looking at approaches to improving the gender balance in computing in English schools, which is due to report results next year.

The future of education policy in the UK as it relates to AI technologies is the topic of an upcoming panel discussion I’m inviting you to attend.

school-aged girls and a teacher using a computer together.

The Brookings report highlights the way in which the English government worked with non-profit organisations, including us here at the Raspberry Pi Foundation, to deliver on the seven policy actions. Partnerships and engagement with stakeholders appear to be key to effectively implementing computer science education within a country. 

Lessons learned, lessons missed

What can we learn from the Brookings report’s helicopter view of 11 case studies? How can we ensure that computer science education is going to be accessible for all children? The Brookings researchers draw out six lessons learned in their report, which I have taken the liberty of rewording and shortening here:

  1. Create demand
  2. Make it mandatory
  3. Train teachers
  4. Start early
  5. Work in partnership
  6. Make it engaging

In the report, the sixth lesson is phrased as, “When taught in an interactive, hands-on way, CS education builds skills for life.” The Brookings researchers conclude that focusing on project-based learning and maker spaces is the way for schools to achieve this, which I don’t find convincing. The problem with project-based learning in maker spaces is one of scale: in my experience, this approach only works well in a non-formal, small-scale setting. The other reason is that maker spaces, while being very engaging, are also very expensive. Therefore, I don’t see them as a practicable aspect of a nationally rolled-out, mandatory, formal curriculum.

When we teach computer science, it is important that we encourage young people to ask questions about ethics, power, privilege, and social justice.

Sue Sentance

We have other ways to make computer science engaging to all learners, using a breadth of pedagogical approaches. In particular, we should focus on cultural relevance, an aspect of education the Brookings report does not centre. Culturally relevant pedagogy is a framework for teaching that emphasises the importance of incorporating and valuing all learners’ knowledge, heritage, and ways of learning, and promotes the development of learners’ critical consciousness of the world. When we teach computer science, it is important that we encourage young people to ask questions about ethics, power, privilege, and social justice.

Three teenage boys do coding at a shared computer during a computer science lesson.

The Brookings report states that we need to develop and use evidence on how to teach computer science, and I agree with this. But to properly support teachers and learners, we need to offer them a range of approaches to teaching computing, rather than just focusing on one, such as project-based learning, however valuable that approach may be in some settings. Through the NCCE, we have embedded twelve pedagogical principles in the Teach Computing Curriculum, which is being rolled out to six million learners in England’s schools. In time, through this initiative, we will gain firm evidence on what the most effective approaches are for teaching computer science to all students in primary and secondary schools.

Moving forward together

I believe the Brookings Institution’s report has a huge contribution to make as countries around the world seek to introduce computer science in their classrooms. As we can conclude from the patchiness of the CS education world map, there is still much work to be done. I feel fortunate to be living in a country that has been able and motivated to prioritise computer science education, and I think that partnerships and working across stakeholder groups, particularly with schools and teachers, have played a large part in the progress we have made.

To my mind, the challenge now is to find ways in which countries can work together towards more equity in computer science education around the world. The findings in this report will help us make that happen.


PS We invite you to join us on 16 November for our online panel discussion on what the future of the UK’s education policy needs to look like to enable young people to navigate and shape AI technologies. Our speakers include UK Minister Chris Philp, our CEO Philip Colligan, and two young people currently in education. Tabitha Goldstaub, Chair of the UK government’s AI Council, will be chairing the discussion.

Sign up for your free ticket today and submit your questions to our panel!

The post Computer science education is a global challenge appeared first on Raspberry Pi.

Should we teach AI and ML differently to other areas of computer science? A challenge

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/research-seminar-data-centric-ai-ml-teaching-in-school/

Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host a series of free research seminars about how to teach AI and data science to young people.

In the second seminar of the series, we were excited to hear from Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who presented on the topic of teaching AI and machine learning (ML) from a data-centric perspective. Their talk raised the question of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

Machine behaviour — a new field of study?

The rationale behind the speakers’ work is a concept they call hybrid interaction system, referring to the way that humans and machines interact. To explain this concept, Carsten referred to a 2019 article published in Nature by Iyad Rahwan and colleagues: Machine behaviour. The article’s authors propose that the study of AI agents (complex and simple algorithms that make decisions) should be a separate, cross-disciplinary field of study, because of the ubiquity and complexity of AI systems, and because these systems can have both beneficial and detrimental impacts on humanity, which can be difficult to evaluate. (Our previous seminar by Mhairi Aitken highlighted some of these impacts.) The authors state that to study this field, we need to draw on scientific practices from across different fields, as shown below:

Machine behaviour as a field sits at the intersection of AI engineering and behavioural science. Quantitative evidence from machine behaviour studies feeds into the study of the impact of technology, which in turn feeds questions and practices into engineering and behavioural science.
The interdisciplinarity of machine behaviour. (Image taken from Rahwan et al [1])

In establishing their argument, the authors compare the study of animal behaviour and machine behaviour, noting that both fields consider aspects such as mechanism, development, evolution, and function. They describe how part of this proposed machine behaviour field may focus on the behaviour of individual machines, while the behaviour of collectives of machines and what they call ‘hybrid human-machine behaviour’ can also be studied. By focusing on the complexities of the interactions between machines and humans, we can think both about machines shaping human behaviour and humans shaping machine behaviour, and a sort of ‘co-behaviour’ as they work together. Thus, the authors conclude that machine behaviour is an interdisciplinary area that we should study in a different way to computer science.

Carsten and his team said that, as educators, we will need to draw on the parameters and frameworks of this machine behaviour field to be able to effectively teach AI and machine learning in school. They argue that our approach should be centred on data, rather than on code. I believe this is a challenge to those of us developing tools and resources to support young people, and that we should be open to these ideas as we forge ahead in our work in this area.

Ideas or artefacts?

In her 2006 article that popularised computational thinking, Jeannette Wing introduced it as being about ‘ideas, not artefacts’. When we, the computing education community, started to think about computational thinking, we moved from focusing on specific technology — and how to understand and use it — to the ideas or principles underlying the domain. The challenge now is: have we gone too far in that direction?

Carsten argued that, if we are to understand machine behaviour, and in particular, human-machine co-behaviour, which he refers to as the hybrid interaction system, then we need to be studying artefacts as well as ideas.

Throughout the seminar, the speakers reminded us to keep in mind artefacts, issues of bias, the role of data, and potential implications for the way we teach.

Studying machine learning: a different focus

In addition, Carsten highlighted a number of differences between learning ML and learning other areas of computer science, including traditional programming:

  1. The process of problem-solving is different. Traditionally, we might try to understand the problem, derive a solution in terms of an algorithm, then understand the solution. In ML, the data shapes the model, and we do not need a deep understanding of either the problem or the solution.
  2. Our tolerance of inaccuracy is different. Traditionally, we teach young people to design programs that lead to an accurate solution. However, the nature of ML means that there will be an error rate, which we strive to minimise. 
  3. The role of code is different. Rather than the code doing the work as in traditional programming, the code is only a small part of a real-world ML system. 
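These contrasts can be made concrete with a few lines of code. The sketch below is my own illustration, not an example from the talk: a 1-nearest-neighbour classifier in which no classification rules are written by hand, the training data itself shapes every prediction, and we evaluate the result as an error rate rather than expecting exact correctness.

```python
# A 1-nearest-neighbour classifier on toy 2D data. There is no
# hand-written rule for what makes a point "blue" or "red" --
# the training data alone shapes every prediction.

def predict(train, point):
    """Label a point with the class of its nearest training example."""
    nearest = min(
        train,
        key=lambda ex: (ex[0] - point[0]) ** 2 + (ex[1] - point[1]) ** 2,
    )
    return nearest[2]

# Toy training data: (x, y, label)
train = [(1, 1, "blue"), (2, 1, "blue"), (8, 9, "red"), (9, 8, "red")]

# Held-out test points with their true labels
test = [((1, 2), "blue"), ((9, 9), "red"), ((5, 5), "red")]

errors = sum(1 for point, label in test if predict(train, point) != label)
error_rate = errors / len(test)
print(f"error rate: {error_rate:.2f}")  # one of the three test points is misclassified
```

Unlike in a traditional program, improving this classifier means changing the data (for example, adding labelled examples near the ambiguous point), not rewriting the logic — and some error rate typically remains, which we strive to minimise rather than eliminate.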

These differences imply that our teaching should adapt too.

A graphic demonstrating that in machine learning as compared to other areas of computer science, the process of problem-solving, tolerance of inaccuracy, and role of code is different.
Click to enlarge.

ProDaBi: a programme for teaching AI, data science, and ML in secondary school

In Germany, education is devolved to state governments. Although computer science (known as informatics) was only last year introduced as a mandatory subject in lower secondary schools in North Rhine-Westphalia, where Paderborn is located, it has been taught at the upper secondary levels for many years. ProDaBi is a project that researchers have been running at Paderborn University since 2017, with the aim of developing a secondary school curriculum around data science, AI, and ML.

The ProDaBi curriculum includes:

  • Two modules for 11- to 12-year-olds covering decision trees and data awareness (ethical aspects), introduced this year
  • A short course for 13-year-olds covering aspects of artificial intelligence, through the game Hexapawn
  • A set of modules for 14- to 15-year-olds, covering data science, data exploration, decision trees, neural networks, and data awareness (ethical aspects), using Jupyter notebooks
  • A project-based course for 18-year-olds, including the above topics at a more advanced level, using CODAP and Jupyter notebooks to develop practical skills through projects; this course has been running the longest and is currently in its fourth iteration

Although the ProDaBi project site is in German, an English translation is available.

Learning modules developed as part of the ProDaBi project.
Modules developed as part of the ProDaBi project

Our speakers described example activities from three of the modules:

  • Hexapawn, a two-player game inspired by the work of Donald Michie in 1961. The purpose of this activity is to support learners in reflecting on the way the machine learns. Children can then relate the activity to the behaviour of AI agents such as autonomous cars. An English version of the activity is available. 
  • Data cards, a series of activities to teach about decision trees. The cards are designed in a ‘Top Trumps’ style, and based on food items, with unplugged and digital elements. 
  • Data awareness, a module focusing on the amount of data an individual can generate as they move through a city, in this case through the mobile phone network. Children are encouraged to reflect on personal data in the context of the interaction between the human and data-driven artefact, and how their view of the world influences their interpretation of the data that they are given.
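To give a rough flavour of where a decision-tree activity like the data cards might lead, here is a minimal hand-built tree. The food attributes and thresholds are my own invented examples for illustration, not taken from the ProDaBi materials.

```python
# A tiny hand-built decision tree of the kind learners might construct
# from 'Top Trumps'-style food cards. The attributes (sugar and fat per
# 100 g) and the thresholds are invented for illustration.

def classify(food):
    """Classify a food item as a 'treat' or an 'everyday' food."""
    if food["sugar_g"] > 15:      # first split: is it high in sugar?
        return "treat"
    elif food["fat_g"] > 17:      # second split: is it high in fat?
        return "treat"
    else:
        return "everyday"

apple = {"name": "apple", "sugar_g": 10, "fat_g": 0}
cake = {"name": "cake", "sugar_g": 35, "fat_g": 20}

print(classify(apple), classify(cake))
```

The unplugged version of such an activity lets children derive the splits themselves by sorting the cards, before seeing the same structure expressed digitally.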

Questioning how we should teach AI and ML at school

There was a lot to digest in this seminar: challenging ideas and some new concepts, for me anyway. An important takeaway for me was how much we do not yet know about the concepts and skills we should be teaching in school around AI and ML, and about the approaches that we should be using to teach them effectively. Research such as that being carried out in Paderborn, demonstrating a data-centric approach, can really augment our understanding, and I’m looking forward to following the work of Carsten and his team.

Carsten and colleagues ended with this summary and discussion point for the audience:

“‘AI education’ requires developing an adequate picture of the hybrid interaction system — a kind of data-driven, emergent ecosystem which needs to be made explicitly to understand the transformative role as well as the technological basics of these artificial intelligence tools and how they are related to data science.”

You can catch up on the seminar, including the Q&A with Carsten and his colleagues, here:

Join our next seminar

This seminar really extended our thinking about AI education, and we look forward to introducing new perspectives from different researchers each month. At our next seminar on Tuesday 2 November at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Matti Tedre and Henriikka Vartiainen (University of Eastern Finland). The two Finnish researchers will talk about emerging trajectories in ML education for K-12. We look forward to meeting you there.

Carsten and his colleagues are also running a series of seminars on AI and data science: you can find out about these on their registration page.

You can increase your own understanding of machine learning by joining our latest free online course!


[1] Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., … & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486.

The post Should we teach AI and ML differently to other areas of computer science? A challenge appeared first on Raspberry Pi.

What’s a kangaroo?! AI ethics lessons for and from the younger generation

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-ethics-lessons-education-children-research/

Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host speakers from the UK, Finland, Germany, and the USA presenting a series of free research seminars about AI and data science education for young people. These rapidly developing technologies have a huge and growing impact on our lives, so it’s important for young people to understand them both from a technical and a societal perspective, and for educators to learn how to best support them to gain this understanding.

Mhairi Aitken.

In our first seminar we were beyond delighted to hear from Dr Mhairi Aitken, Ethics Fellow at The Alan Turing Institute. Mhairi is a sociologist whose research examines social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. You can catch up on her full presentation and the Q&A with her in the video below.

Why we need AI ethics

The increased use of AI in society and industry is bringing some amazing benefits. In healthcare for example, AI can facilitate early diagnosis of life-threatening conditions and provide more accurate surgery through robotics. AI technology is also already being used in housing, financial services, social services, retail, and marketing. Concerns have been raised about the ethical implications of some aspects of these technologies, and Mhairi gave examples of a number of controversies to introduce us to the topic.

“Ethics considers not what we can do but rather what we should do — and what we should not do.”

Mhairi Aitken

One such controversy in England took place during the coronavirus pandemic, when an AI system was used to make decisions about school grades awarded to students. The system’s algorithm drew on grades awarded in previous years to other students of a school to upgrade or downgrade grades given by teachers; this was seen as deeply unfair and raised public consciousness of the real-life impact that AI decision-making systems can have.

An AI system was used in England last year to make decisions about school grades awarded to students — this was seen as deeply unfair.

Another high-profile controversy was caused by biased machine learning-based facial recognition systems and explored in Shalini Kantayya’s documentary Coded Bias. Such facial recognition systems have been shown to be much better at recognising a white male face than a black female one, demonstrating the inequitable impact of the technology.

What should AI be used for?

There is a clear need to consider both the positive and negative impacts of AI in society. Mhairi stressed that using AI effectively and ethically is not just about mitigating negative impacts but also about maximising benefits. She told us that bringing ethics into the discussion means that we start to move on from what AI applications can do to what they should and should not do. To outline how ethics can be applied to AI, Mhairi first outlined four key ethical principles:

  • Beneficence (do good)
  • Nonmaleficence (do no harm)
  • Autonomy
  • Justice

Mhairi shared a number of concrete questions that ethics raise about new technologies including AI: 

  • How do we ensure the benefits of new technologies are experienced equitably across society?
  • Do AI systems lead to discriminatory practices and outcomes?
  • Do new forms of data collection and monitoring threaten individuals’ privacy?
  • Do new forms of monitoring lead to a Big Brother society?
  • To what extent are individuals in control of the ways they interact with AI technologies or how these technologies impact their lives?
  • How can we protect against unjust outcomes, ensuring AI technologies do not exacerbate existing inequalities or reinforce prejudices?
  • How do we ensure diverse perspectives and interests are reflected in the design, development, and deployment of AI systems? 

Who gets to inform AI systems? The kangaroo metaphor

To mitigate negative impacts and maximise benefits of an AI system in practice, it’s crucial to consider the context in which the system is developed and used. Mhairi illustrated this point using the story of an autonomous vehicle, a self-driving car, developed in Sweden in 2017. It had been thoroughly safety-tested in the country, including tests of its ability to recognise wild animals that may cross its path, for example elk and moose. However, when the car was used in Australia, it was not able to recognise kangaroos that hopped into the road! Because the system had not been tested with kangaroos during its development, it did not know what they were. As a result, the self-driving car’s safety and reliability significantly decreased when it was taken out of the context in which it had been developed, jeopardising people and kangaroos.

A parent kangaroo with a young kangaroo in its pouch stands on grass.
Mitigating negative impacts and maximising benefits of AI systems requires actively involving the perspectives of groups that may be affected by the system — ‘kangaroos’ in Mhairi’s metaphor.

Mhairi used the kangaroo example as a metaphor to illustrate ethical issues around AI: the creators of an AI system make certain assumptions about what an AI system needs to know and how it needs to operate; these assumptions always reflect the positions, perspectives, and biases of the people and organisations that develop and train the system. Therefore, AI creators need to include metaphorical ‘kangaroos’ in the design and development of an AI system to ensure that their perspectives inform the system. Mhairi highlighted children as an important group of ‘kangaroos’. 

AI in children’s lives

AI may have far-reaching consequences in children’s lives, where it’s being used for decision-making around access to resources and support. Mhairi explained the impact that AI systems are already having on young people’s lives through these systems’ deployment in children’s education, in apps that children use, and in children’s lives as consumers.

A young child sits at a table using a tablet.
AI systems are already having an impact on children’s lives.

Children can be taught not only that AI impacts their lives, but also that it can get things wrong and that it reflects human interests and biases. However, Mhairi was keen to emphasise that we need to find out what children know and want to know before we make assumptions about what they should be taught. Moreover, engaging children in discussions about AI is not only about them learning about AI, it’s also about ethical practice: what can people making decisions about AI learn from children by listening to their views and perspectives?

AI research that listens to children

UNICEF, the United Nations Children’s Fund, has expressed concerns about the impact of new AI technologies used on children and young people. They have developed the UNICEF Requirements for Child-Centred AI.

UNICEF’s Requirements for Child-Centred AI:

  • Support children’s development and well-being
  • Ensure inclusion of and for children
  • Prioritise fairness and non-discrimination for children
  • Protect children’s data and privacy
  • Ensure safety for children
  • Provide transparency, explainability, and accountability for children
  • Empower governments and businesses with knowledge of AI and children’s rights
  • Prepare children for present and future developments in AI
  • Create an enabling environment for child-centred AI
  • Engage in digital cooperation
UNICEF’s requirements for child-centred AI, as presented by Mhairi. Click to enlarge.

Together with UNICEF, Mhairi and her colleagues working on the Ethics Theme in the Public Policy Programme at The Alan Turing Institute are engaged in new research to pilot UNICEF’s Child-Centred Requirements for AI, and to examine how these impact public sector uses of AI. A key aspect of this research is to hear from children themselves and to develop approaches to engage children to inform future ethical practices relating to AI in the public sector. The researchers hope to find out how we can best engage children and ensure that their voices are at the heart of the discussion about AI and ethics.

We all learned a tremendous amount from Mhairi and her work on this important topic. After her presentation, we had a lively discussion where many of the participants relayed the conversations they had had about AI ethics and shared their own concerns and experiences and many links to resources. The Q&A with Mhairi is included in the video recording.

What we love about our research seminars is that everyone attending can share their thoughts, and as a result we learn so much from attendees as well as from our speakers!

It’s impossible to cover more than a tiny fraction of the seminar here, so I do urge you to take the time to watch the seminar recording. You can also catch up on our previous seminars through our blogs and videos.

Join our next seminar

We have six more seminars in our free series on AI, machine learning, and data science education, taking place every first Tuesday of the month. At our next seminar on Tuesday 5 October at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who will be presenting on the topic of teaching AI and machine learning (ML) from a data-centric perspective (find out more here). Their talk will raise the questions of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

Sign up now and we’ll send you the link to join on the day of the seminar — don’t forget to put the date in your diary.

I look forward to meeting you there!

In the meantime, we’re offering a brand-new, free online course that introduces machine learning with a practical focus — ideal for educators and anyone interested in exploring AI technology for the first time.

The post What’s a kangaroo?! AI ethics lessons for and from the younger generation appeared first on Raspberry Pi.

Delivering a culturally relevant computing curriculum: new guide for teachers

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/culturally-relevant-computing-curriculum-guidelines-for-teachers/

In computing education, designing equitable and authentic learning experiences requires a conscious effort to take into account the characteristics of all learners and their social environments. Doing this allows teachers to address topics that are relevant to a diverse range of learners. To support computing and computer science teachers with this work, we’re now sharing a practical guide document for culturally responsive teaching in schools.

Why we need to make computing culturally relevant

Making computing culturally relevant means that learners with a range of cultural identities will be able to identify with the examples chosen to illustrate computing concepts, to engage effectively with the teaching methods, and to feel empowered to use computing to address problems that are meaningful to them and their communities. This will enable a more diverse group of learners to feel that they belong in computing and encourage them to choose to continue with it as a discipline in qualifications and careers.

Such an approach can empower all our students and support their skills and understanding of the integral role that computing can play in promoting social justice.

Yota Dimitriadi, Associate Professor at the University of Reading, member of the project working group

We introduced our work on this new document to you previously here on the blog. Check out that blog post to find out more about the project’s funding and background, and the external working group of teachers and academics we convened to develop the guide.

Some shared definitions

To get the project off to the best start possible once we had assembled the working group, we first spent time drawing on research from the USA and discussing within the working group to come to a shared understanding of key terms:

  • Culture: A person’s knowledge, beliefs, and understanding of the world, which are affected by multiple personal characteristics, as well as social and economic factors.
  • Culturally relevant pedagogy: A framework for teaching that emphasises the importance of incorporating and valuing all learners’ knowledge, ways of learning, and heritage, and that promotes critical consciousness in teachers and learners.
  • Culturally responsive teaching: A range of teaching practices that draw on learners’ personal experiences and cultural identities to make learning more relevant to them, and that support the development of critical consciousness.
  • Social justice: The extent to which all members of society have a fair and equal chance to participate in all aspects of social life, develop to their full potential, contribute to society, and be treated as equals.
  • Equity: The extent to which different groups in society have access to particular activities or resources, and the work of ensuring that opportunities for access and participation are equal across different groups.

To bring the voices of young people into the project, we asked teachers in the working group to consult with their learners to understand their perspectives on computing and how schools can engage more diverse groups of learners in elective computer science courses. The main reason learners reported for being put off computing was complex or boring coding lessons that focused on theory rather than on practical outcomes. Many said that they were inspired by tasks such as producing their own games, and suggested that early experiences in primary school and Key Stage 3 had been very important for their engagement in computing.

Curriculum, teaching approaches, and learning materials

The guide shows you that a culturally relevant pedagogy applies in three aspects of education, which we liken to a tree to indicate how these aspects connect to each other: the tree’s root system, the basis of culturally relevant pedagogy, is the focus of the curriculum; the tree’s trunk and branches are the teaching approaches taken to deliver the curriculum; the learning materials, represented by the tree’s crown of leaves, are the most widely visible aspect of computing lessons.

A tree with the roots labeled 'curriculum', the trunk labeled 'teaching approaches', and the crown labeled 'learning materials'.

Each aspect plays an important role in culturally relevant pedagogy:

  • Within the curriculum, it is important to think about the contexts in which computing concepts are taught, and about how you make connections with issues that are meaningful to your learners
  • Equitable teaching approaches, such as open-ended, inquiry-led activities and discussion-based collaborative tasks, are key if you want to provide opportunities for all your learners to express their ideas and their identities through computing
  • Finally, inclusive representations of a range of cultures, and making learning materials accessible, are both of great importance to ensure that all your learners feel that computing is relevant to them

You can download the guide on culturally relevant pedagogy for computing teachers now to explore the resources provided:

  • You’ll find a lot more information, practical tips, and links to resources to support you to implement culturally relevant pedagogy in all these aspects of your teaching
  • The document links to different available curricula, and we have highlighted materials we’ve created for the Teach Computing Curriculum that promote key aspects of the approach
  • We’ve also included links to academic papers and books if you want to learn more, as well as to videos and courses that you can use for professional development

What was being part of the working group like?

One of the teachers who was part of the working group is Joe Arday from Woodbridge High School in Essex, UK. Joe originally worked in the technology sector and has been teaching computing for ten years. We asked him about his experience of being part of the project and how he plans to use the guide in his own classroom practice:

“It has been an absolute privilege to play a part in working towards producing the guide that my own children will be beneficiaries of when they are studying the computing curriculum throughout their education. I have been able to reflect on how to further improve my teaching practice and pedagogy to ensure that the curriculum taught is culturally diverse and caters for all learners that I teach. (Also, having the opportunity to work with academics from both the UK and US has made me think about becoming an academic in the field of computing at some point in the future!)”

Computer science teacher Joe Arday.

Joe also says: “I plan to review the computing curriculum taught in my computing department and sit down with my colleagues to work on how we can implement the guide in our units of work for Key Stages 3 to 5. The guide will also help my department to work towards one of my school’s aims to encourage an anti-racism community and curriculum in my school.“

Continuing the work

We hope you find this resource useful for your own practice, and for conversations within your school and network of fellow educators! Please spread the word about the guide to anyone in your circles who you think might benefit.

We plan to keep working with learners on their perspectives on culturally relevant teaching, and to develop professional development opportunities for teachers, initially in conjunction with a small number of schools. As always with our research projects, we will investigate what works well and share all our findings widely and promptly.

Many thanks to the teachers and academics in the working group for being wonderful collaborators, to the learners who contributed their time and ideas, and to Hayley Leonard and Diana Kirby from our team for all the time and energy they devoted to this project!

Working group

Joseph Arday, FCCT, Woodbridge High School, Essex, UK

Lynda Chinaka, University of Roehampton, UK

Mike Deutsch, Kids Code Jeunesse, Canada

Dr Yota Dimitriadi, University of Reading, UK

Amir Fakhoury, St Anne’s Catholic School and Sixth Form College, Hampshire, UK

Dr Samuel George, Ark St Alban’s Academy, West Midlands, UK

Professor Joanna Goode, University of Oregon, USA

Alain Ndabala, St George Catholic College, Hampshire, UK

Vanessa Olsen-Dry, North Cambridge Academy, Cambridgeshire, UK

Rohini Shah, Queens Park Community School, London, UK

Neelu Vasishth, Hampton Court House, Surrey, UK

The post Delivering a culturally relevant computing curriculum: new guide for teachers appeared first on Raspberry Pi.

Educating young people in AI, machine learning, and data science: new seminar series

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ai-machine-learning-data-science-education-seminars/

A recent Forbes article reported that over the last four years, the use of artificial intelligence (AI) tools in many business sectors has grown by 270%. AI has a history dating back to Alan Turing’s work in the 1940s, and we can define AI as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

A woman explains a graph on a computer screen to two men.
Recent advances in computing technology have accelerated the rate at which AI and data science tools are coming to be used.

Four key areas of AI are machine learning, robotics, computer vision, and natural language processing. Other advances in computing technology mean we can now store and efficiently analyse colossal amounts of data (big data); consequently, data science was formed as an interdisciplinary field combining mathematics, statistics, and computer science. Data science is often presented as intertwined with machine learning, as data scientists commonly use machine learning techniques in their analysis.

Venn diagram showing the overlaps between computer science, AI, machine learning, statistics, and data science.
Computer science, AI, statistics, machine learning, and data science are overlapping fields. (Diagram from our forthcoming free online course about machine learning for educators)

AI impacts everyone, so we need to teach young people about it

AI and data science have recently received huge amounts of attention in the media, as machine learning systems are now used to make decisions in areas such as healthcare, finance, and employment. These AI technologies cause many ethical issues, for example as explored in the film Coded Bias. This film describes the fallout of researcher Joy Buolamwini’s discovery that facial recognition systems do not identify dark-skinned faces accurately, and her journey to push for the first-ever piece of legislation in the USA to govern against bias in the algorithms that impact our lives. Many other ethical issues concerning AI exist and, as highlighted by UNESCO’s examples of AI’s ethical dilemmas, they impact each and every one of us.

Three female teenagers and a teacher use a computer together.
We need to make sure that young people understand AI technologies and how they impact society and individuals.

So how do such advances in technology impact the education of young people? In the UK, a recent Royal Society report on machine learning recommended that schools should “ensure that key concepts in machine learning are taught to those who will be users, developers, and citizens” — in other words, every child. The AI Roadmap published by the UK AI Council in 2020 declared that “a comprehensive programme aimed at all teachers and with a clear deadline for completion would enable every teacher confidently to get to grips with AI concepts in ways that are relevant to their own teaching.” As of yet, very few countries have incorporated any study of AI and data science in their school curricula or computing programmes of study.

A teacher and a student work on a coding task at a laptop.
Our seminar speakers will share findings on how teachers can help their learners get to grips with AI concepts.

Partnering with The Alan Turing Institute for a new seminar series

Here at the Raspberry Pi Foundation, AI, machine learning, and data science are important topics both in our learning resources for young people and educators, and in our programme of research. So we are delighted to announce that starting this autumn we are hosting six free, online seminars on the topic of AI, machine learning, and data science education, in partnership with The Alan Turing Institute.

A woman teacher presents to an audience in a classroom.
Everyone with an interest in computing education research is welcome at our seminars, from researchers to educators and students!

The Alan Turing Institute is the UK’s national institute for data science and artificial intelligence and does pioneering work in data science research and education. The Institute conducts many different strands of research in this area and has a special interest group focused on data science education. As such, our partnership around the seminar series enables us to explore our mutual interest in the needs of young people relating to these technologies.

This promises to be an outstanding series drawing from international experts who will share examples of pedagogic best practice […].

Dr Matt Forshaw, The Alan Turing Institute

Dr Matt Forshaw, National Skills Lead at The Alan Turing Institute and Senior Lecturer in Data Science at Newcastle University, says: “We are delighted to partner with the Raspberry Pi Foundation to bring you this seminar series on AI, machine learning, and data science. This promises to be an outstanding series drawing from international experts who will share examples of pedagogic best practice and cover critical topics in education, highlighting ethical, fair, and safe use of these emerging technologies.”

Our free seminar series about AI, machine learning, and data science

At our computing education research seminars, we hear from a range of experts in the field and build an international community of researchers, practitioners, and educators interested in this important area. Our new free series of seminars runs from September 2021 to February 2022, with some excellent and inspirational speakers:

  • Tues 7 September: Dr Mhairi Aitken from The Alan Turing Institute will share a talk about AI ethics, setting out key ethical principles and how they apply to AI before discussing the ways in which these relate to children and young people.
  • Tues 5 October: Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from Paderborn University in Germany will use a series of examples from their ProDaBi programme to explore whether and how AI and machine learning should be taught differently from other topics in the computer science curriculum at school. The speakers will suggest that these topics require a paradigm shift for some teachers, and that this shift has to do with the changed role of algorithms and data, and of the societal context.
  • Tues 3 November: Professor Matti Tedre and Dr Henriikka Vartiainen from the University of Eastern Finland will focus on machine learning in the school curriculum. Their talk will map the emerging trajectories in educational practice, theory, and technology related to teaching machine learning in K-12 education.
  • Tues 7 December: Professor Rose Luckin from University College London will be looking at the breadth of issues impacting the teaching and learning of AI.
  • Tues 11 January: We’re delighted that Dr Dave Touretzky and Dr Fred Martin (Carnegie Mellon University and University of Massachusetts Lowell, respectively) from the AI4K12 Initiative in the USA will present some of the key insights into AI that the researchers hope children will acquire, and how they see K-12 AI education evolving over the next few years.
  • Tues 1 February: Speaker to be confirmed

How you can join our online seminars

All seminars start at 17:00 UK time (18:00 Central European Time, 12 noon Eastern Time, 9:00 Pacific Time) and take place in an online format, with a presentation, breakout discussion groups, and a whole-group Q&A.

Sign up now and we’ll send you the link to join on the day of each seminar — don’t forget to put the dates in your diary!

In the meantime, you can explore some of our educational resources related to machine learning and data science:

The post Educating young people in AI, machine learning, and data science: new seminar series appeared first on Raspberry Pi.

The digital divide: interactions between socioeconomic disadvantage and computing education

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/digital-divide-socioeconomic-disadvantage-computing-education/

Digital technology is developing at pace, impacting us all. Most of us use screens and all kinds of computers much more than we did five years ago. The total number of apps downloaded globally each quarter has doubled since 2015, reflecting both increased smartphone penetration and the increasingly prominent role of apps in our lives. However, access to digital technology and the internet is not yet equal: there is still a ‘digital divide’, i.e. some people do not have as much access to digital technologies as others, if any at all.

This month we welcomed Dr Hayley Leonard and Thom Kunkeler at our research seminar series, to present findings on ‘Why the digital divide does not stop at access: understanding the complex interactions between socioeconomic disadvantage and computing education’. Both Hayley and Thom work as researchers at the Raspberry Pi Foundation, where we have a focus on increasing our understanding of computing education for all. They shared some results of a research project they’d carried out with a group of young people who benefitted from our Learn at Home campaign.

Digital inequality: beyond the dichotomy of access

Hayley introduced some of the existing research and thinking around digital inequality, and Thom presented the results of their research project. Setting the scene, Hayley explained that the term ‘digital divide’ can create a dichotomous have/have-not view of the world, as can the concept of a ‘gap’. However, the research presents a more nuanced picture. Rather than describing digital inequality as purely centred on access to technology, some researchers characterise three levels of the digital divide:

  • Level 1: Access
  • Level 2: Skills (digital skills, internet skills) and uses (what you do once you have access)
  • Level 3: Outcomes (what you achieve)

This characterisation is useful because it enables us to look beyond access and also towards what happens once people have access to technology. This is where our Learn At Home campaign came in.

The presenters gave a brief overview of the impact of the campaign, in which the Raspberry Pi Foundation has partnered with 80 youth and community organisations and, to date, thanks to generous donors, has given 5100 Raspberry Pi desktop computer kits (including monitors, headphones, etc.) to young people in the UK who didn’t have the resources to buy their own computers.

Hayley Leonard presents an online slide describing the interview responses of recipients of Raspberry Pi desktop computer kits, which revolved around five themes: ease of homework completion; connecting with others; having their own device; new opportunities for learning; improved understanding of schoolwork.
Click on the image to enlarge it. Learn more in the first Learn at Home campaign impact report.

Computing, identity, and self-efficacy

As part of the Learn At Home campaign, Hayley and Thom conducted a pilot study of how young people from underserved communities feel about computing and their own digital skills. They interviewed and analysed responses of fifteen young people, who had received hardware through Learn At Home, about computing as a subject, their confidence with computing, stereotypes, and their future aspirations.

Thom Kunkeler presents an online slide describing the background and research question of the 'Learn at Home campaign' pilot study: underrepresentation, belonging, identity, archetypes, and the question "How do young people from underserved communities feel about computing and their own digital skills?".
Click on the image to enlarge it.

The notion of a ‘computer person’ was used in the interview questions, following work conducted by Billy Wong at the University of Reading, which found that young people experienced a difference between being a ‘computer person’ and ‘doing computing’. The study carried out by Hayley and Thom largely supports this finding. Thom described two major themes that emerged from their analysis: a mismatch between computing and interviewees’ own identities, and low self-reported self-efficacy.

Showing that stereotypes still persist of what a ‘computer person’ is like, a 13-year-old female interviewee described them as “a bit smart. Very, very logical, because computers are very logical. Things like smart, clever, intelligent because computers are quite hard.” Four of the interviewees were also more likely to associate a ‘computer person’ with being male.

Thom Kunkeler presents an online slide of findings of the 'Learn at Home campaign' pilot study. The young people interviewed associated the term 'computing person' with the attributes smart, clever, intelligent, nerdy/geeky, problem-solving ability.
The young people interviewed associated a ‘computing person’ with the following characteristics: smart, clever, intelligent, nerdy/geeky, problem-solving ability. Click on the image to enlarge it.

The majority of the young people in the study said that they could be this ’computer person’. Even for those who did not see themselves working with computers in the future, being a ’computer person’ was still a possibility. One interviewee said, “I feel like maybe I’m quite good at using a computer. I know my way around. Yes, you never know. I could be, eventually.”

Five of the young people indicated relatively low self-efficacy in computing, and thought there were more barriers to becoming a computer person, for example needing to be better at mathematics. 

In terms of future career goals, only two (White male) participants in the study considered computing as a career, with one (White female) interviewee understanding that choosing computing as a qualification might be important for her future career. This aligns with research into computer science (CS) qualification choice at age 14 in England, explored in a previous seminar, which highlighted the interaction between income, gender, and ethnicity: White girls from lower-income families were more likely to choose a CS qualification than White girls from more affluent families, while very few Asian, Black, and Chinese girls from low-income backgrounds chose a CS qualification.

Evaluating computing education opportunities using the CAPE framework

An interesting aspect of this seminar was how Hayley and Thom situated their work in the relatively new CAPE framework, which describes different levels at which to evaluate computer science education opportunities. The CAPE framework highlights that capacity and access to computing (C and A in the framework) are only part of the challenge of making computer science education equitable; students’ participation (P) in and experience (E) of computing are key factors in keeping them engaged longer-term.

A diagram illustrating the CAPE framework for assessing computing education opportunities according to four aspects. 1, capacity, which relates to availability of resources. 2, access, which relates to whether learners have the opportunity to engage in the subject. 3, participation, which relates to whether learners choose to engage with the subject. 4, experience, which relates to what the outcome of learners' participation is.
Socioeconomic status (SES) can affect learner engagement with computing education at four levels set out in the CAPE framework.

As we develop computing education in the curriculum, we can use the CAPE framework to evaluate our provision. For example, where I’m writing from in England:

  • Capacity: we have the capacity to teach computing through the availability of professional development training for teachers, fully developed curriculum materials such as the Teach Computing Curriculum, and community support for teachers through organisations such as Computing at School and the National Centre for Computing Education.
  • Access: we have an established national curriculum in the subject, but access to it has been interrupted for many due to the coronavirus pandemic.
  • Participation: we know that gender and economic status can impact whether young people choose computer science as an elective subject post-14, and taking an intersectional view reveals that the issue of participation is more complex than that.
  • Experience: according to our seminar speakers, young people’s experience of computing education can be impacted by their digital or technological capital, by their self-efficacy, and by the relevance of the subject to their career aspirations and goals.

This analysis really enhances our understanding of digital inequality, as it moves us away from the have/have-not language of the digital divide and starts to unpack the complexity of the impacting factors.

Although this was not covered in this month’s seminar, I also want to draw out that the CAPE framework also supports our understanding of global computing education: we may need to focus on capacity building in order to create a foundation for the other levels. Lots to think about! 

If you’d like to find out more about this project, you can read the paper that relates to the research and the impact report of the early phases of the Learn At Home initiative.

If you missed the seminar, you can find the presentation slides on our seminars page and watch the recording of the researchers’ talk:

Join our next seminar

The next seminar will be the final one in the current series focused on diversity and inclusion, which we’re co-hosting with the Royal Academy of Engineering. It will take place on Tuesday 13 July at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, and we’ll welcome Prof Ron Eglash, a prominent researcher in the area of ethnocomputing. The title of Ron’s seminar is Computing for generative justice: decolonizing the circular economy.

To join this free event, click below and sign up with your name and email address:

We’ll email you the link and instructions. See you there!

This was our 17th research seminar — you can find all the related blog posts here, and download the first volume of our seminar proceedings with contributions from previous guest speakers.

The post The digital divide: interactions between socioeconomic disadvantage and computing education appeared first on Raspberry Pi.

What does equity-focused teaching mean in computer science education?

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/equity-focused-teaching-in-computer-science-education/

Today, I discuss the second research seminar in our series of six free online research seminars focused on diversity and inclusion in computing education, where we host researchers from the UK and USA together with the Royal Academy of Engineering. By diversity, we mean any dimension that can be used to differentiate groups and people from one another. This might be, for example, age, gender, socio-economic status, disability, ethnicity, religion, nationality, or sexuality. The aim of inclusion is to embrace all people irrespective of difference. 

In this seminar, we were delighted to hear from Prof Tia Madkins (University of Texas at Austin), Dr Nicol R. Howard (University of Redlands), and Shomari Jones (Bellevue School District) (find their bios here), who talked to us about culturally responsive pedagogy and equity-focused teaching in K-12 Computer Science.

Equity-focused computer science teaching

Tia began the seminar with an audience-engaging task: she asked all participants to share their own definition of equity in the seminar chat. Amongst their many suggestions were “giving everybody the same opportunity”, “equal opportunity to access high-quality education”, and “everyone has access to the same resources”. I found Shomari’s own definition of equity very powerful: 

“Equity is the fair treatment, access, opportunity, and advancement of all people, while at the same time striving to identify and eliminate barriers that have prevented the full participation of some groups. Improving equity involves increasing justice and fairness within the procedures and processes of institutions or systems, as well as the distribution of resources. Tackling equity requires an understanding of the root cause of outcome disparity within our society.”

Shomari Jones

This definition is drawn directly from the young people Shomari works with, and it goes beyond access and opportunity to the notion of increasing justice and fairness and addressing the causes of outcome disparity. Justice was a theme throughout the seminar, with all speakers referring to the way that their work looks at equity in computer science education through a justice-oriented lens.

Removing deficit thinking

Using a justice-oriented approach means that learners should be encouraged to use their computer science knowledge to make a difference in areas that are important to them. It means that just having access to a computer science education is not sufficient for equity.

Tia Madkins presents a slide: "A justice-oriented approach to computer science teaching empowers students to use CS knowledge for transformation, moves beyond access and achievement frames, and is an asset- or strengths-based approach centering students and families"

Tia spoke about the need to reject “deficit thinking” (i.e. focusing on what learners lack) and instead focus on learners’ strengths or assets and how they bring these to the school classroom. For researchers and teachers to do this, we need to be aware of our own mindset and perspective, to think about what we value about ethnic and racial identities, and to be willing to reflect and take feedback.

Activities to support computer science teaching

Nicol talked about some of the ways of designing computing lessons to be equity-focused. She highlighted the benefits of pair programming and other peer pedagogies, where students teach and learn from each other through feedback and sharing ideas and completed work. She suggested using a variety of different programs and environments, to ensure a range of different pathways to understanding. Teachers and schools can aim to base teaching around tools that are open and accessible and, where possible, available in many languages. Accessible software environments and tasks open the door for students to move on to more advanced materials. To demonstrate to learners that computer science is applicable across domains, the topic can also be introduced in the context of mathematics and other subjects.

Nicol Howard presents a slide: "Considerations for equity-focused computer science teaching include your beliefs (and your students' beliefs) and how they impact CS classrooms; tiered activities and pair programming; self-expressions versus CS preparation; equity-focused lens"

Learners can benefit from learning computer science regardless of whether they want to become computer scientists. Computing offers them skills that they can use for self-expression or to be creative in other areas of their lives. They can use their knowledge for a specific purpose and to become more autonomous, particularly when their teacher avoids deficit thinking. In addition, culturally relevant teaching in the classroom demonstrates a teacher’s deliberate and explicit acknowledgement that they value all students in their classroom and expect them to excel.

Engaging family and community

Shomari talked about the importance of working with parents and families of ethnically diverse students in order to hear their voices and learn from their experiences.

Shomari Jones presents a slide: “Parents without backgrounds and insights into the changing landscape of technology struggle to negotiate what roles they can play, such as how to work together in computing activities or how to find learning opportunities for their children.”

He described how the experiences of young people can be drastically affected when their parents and carers lack a background in technology.

“Parents without backgrounds and insights into the changing landscape of technology struggle to negotiate what roles they can play, such as how to work together in computing activities or how to find learning opportunities for their children.”

Betsy DiSalvo, Cecili Reid, and Parisa Khanipour Roshan. 2014

Shomari drew on an example from the Pacific Northwest in the US, a region with many successful technology companies. In this location, young people from wealthy white and Asian communities can engage fully in informal learning of computer science and can aspire to enter technology-related fields, whereas amongst the Black and Latino communities, there are significant barriers to any form of engagement with technology. This existing inequity has been exacerbated by the coronavirus pandemic: once so much of education moved online, it became widely apparent that many families had never owned, or even used, a computer. Shomari highlighted the importance of working with pre-service teachers to support them in understanding the necessity of family and community engagement.

Building classroom communities

Building a classroom community starts by fostering and maintaining relationships with students, families, and their communities. Our speakers emphasised how important it is to understand the lives of learners and their situations. Through this understanding, learning experiences can be designed that connect with the learners’ lived experiences and cultural practices. In addition, by tapping into what matters most to learners, teachers can inspire them to be change agents in their communities. Tia gave the example of learning to code or learning to build an app, which provides learners with practical tools they can use for projects they care about, and with skills to create artefacts that challenge and document injustices they see happening in their communities.

Find out more

If you want to learn more about this topic, a great place to start is the recent paper Tia and Nicol have co-authored that lays out more detail on the work described in the seminar: Engaging Equity Pedagogies in Computer Science Learning Environments, by Tia C. Madkins, Nicol R. Howard and Natalie Freed, 2020.

You can access the presentation slides via our seminars page.

Join our next free seminar

In our next seminar on Tuesday 2 March at 17:00–18:30 GMT / 12:00–13:30 EST / 9:00–10:30 PST / 18:00–19:30 CET, we’ll welcome Jakita O. Thomas (Auburn University), who is going to talk to us about Designing STEM Learning Environments to Support Computational Algorithmic Thinking and Black Girls: A Possibility Model for Changing Hegemonic Narratives and Disrupting STEM Neoliberal Projects. To join this free online seminar, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended Peter’s and Billy’s seminar, the link remains the same.

The post What does equity-focused teaching mean in computer science education? appeared first on Raspberry Pi.

Computing education and underrepresentation: the data from England

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/computing-education-underrepresentation-data-england-schools/

In this blog post, I’ll discuss the first research seminar in our six-part series about diversity and inclusion. Let’s start by defining our terms. Diversity is any dimension that can be used to differentiate groups and people from one another. This might be, for example, age, gender, socio-economic status, disability, ethnicity, religion, nationality, or sexuality. The aim of inclusion is to embrace all people irrespective of difference.

It’s vital that we are inclusive in computing education, because we need to ensure that everyone can access and learn the empowering and enabling technical skills they need to support all aspects of their lives.

One male and two female teenagers at a computer

Between January and June of this year, we’re partnering with the Royal Academy of Engineering to host speakers from the UK and USA for a series of six research seminars focused on diversity and inclusion in computing education.

We kicked off the series with a seminar from Dr Peter Kemp and Dr Billy Wong focused on computing education in England’s schools post-14. Peter is a Lecturer in Computing Education at King’s College London, where he leads on initial teacher education in computing. His research areas are digital creativity and digital equity. Billy is an Associate Professor at the Institute of Education, University of Reading. His areas of research are educational identities and inequalities, especially in the context of higher education and STEM education.

Computing in England’s schools

Peter began the seminar with a comprehensive look at the history of curriculum change in Computing in England. This was very useful given our very international audience for these seminars, and I will summarise it below. (If you’d like more detail, you can look over the slides from the seminar. Note that these changes refer to England only, as education in the UK is devolved, and England, Northern Ireland, Scotland, and Wales each has a different education system.)

In 2014, England switched from mandatory ICT (Information and Communication Technology) to mandatory Computing (encompassing information technology, computer science, and digital literacy). This shift was complemented by a change in the qualifications for students aged 14–16 and 16–18, where the primary qualifications are GCSEs and A levels respectively:

  • At GCSE, there has been a transition from GCSE ICT to GCSE Computer Science over the last five years, with GCSE ICT being discontinued in 2017
  • At A level before 2014, ICT and Computing were on offer as two separate A levels; now there is only one, A level Computer Science

One of the issues is that in the English education system, the curriculum narrows at age 14: students have to choose between Computer Science and other subjects such as Geography, History, Religious Studies, Drama, and Music. This means that students who choose not to take GCSE Computer Science (CS) may find that their digital education is curtailed from that point onwards. Peter’s and Billy’s view is that having a more specialist subject on offer for age 14+ (Computer Science as opposed to ICT) means that fewer students take it, and they showed evidence of this from qualifications data. The number of students taking CS at GCSE has risen considerably since its introduction, but it is not yet at the level of GCSE ICT uptake.

GCSE computer science and equity

Only 64% of schools in England offer GCSE Computer Science, meaning that just 81% of students have the opportunity to take the subject (some schools also add selection criteria). A higher percentage (90%) of selective grammar schools offer GCSE CS than do comprehensive schools (80%) or independent schools (39%). Peter suggested that this was making Computer Science a “little more elitist” as a subject.
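The school-level and student-level figures differ because schools vary in size. As a toy illustration (the pupil numbers below are invented, chosen only to roughly reproduce the 64% and 81% figures, and are not from Peter’s data), 64% of schools can cover about 81% of students when the schools that offer the subject are larger on average:

```python
# Invented figures, not National Pupil Database data: 64 of 100 schools
# offer GCSE CS, and those schools are larger on average than the rest.
offering = [1200] * 64      # pupils per school that offers GCSE CS
not_offering = [500] * 36   # pupils per school that does not

share_of_schools = 64 / 100
share_of_students = sum(offering) / (sum(offering) + sum(not_offering))

print(f"{share_of_schools:.0%} of schools, {share_of_students:.0%} of students")
# → 64% of schools, 81% of students
```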

Peter analysed data from England’s National Pupil Database (NPD) to thoroughly investigate the uptake of Computer Science post-14 with respect to the diversity of entrants.

He found that the gender gap for GCSE CS uptake is greater than it was for GCSE ICT. Now girls make up 22% of the cohort for GCSE CS (2020 data), whereas for the ICT qualification (2017 data), 43% of students were female.

Peter’s analysis showed that there is also a lower representation of black students and of students from socio-economically disadvantaged backgrounds in the cohort for GCSE CS. In contrast, students with Chinese ancestry are proportionally more highly represented in the cohort. 

Another part of Peter’s analysis related gender data to the Income Deprivation Affecting Children Index (IDACI), which is used as an indicator of the level of poverty in England’s local authority districts. In the graphs below, a higher IDACI decile means more deprivation in an area. Relating gender data of GCSE CS uptake against the IDACI shows that:

  • Girls from more deprived areas are more likely to take up GCSE CS than girls from less deprived areas are
  • The opposite is true for boys

Two bar charts relating gender data of GCSE uptake against the Income Deprivation Affecting Children Index. The graph plotting GCSE ICT data shows that students from areas with higher deprivation are slightly more likely to choose the GCSE, irrespective of gender. The graph plotting GCSE Computer Science data shows that girls from more deprived areas are more likely to take up GCSE CS than girls from less deprived areas, and the opposite is true for boys.
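An uptake rate of this kind is simply entrants as a percentage of the eligible cohort, computed separately per gender and IDACI decile. The sketch below uses invented counts (not NPD data), picked only so that the pattern matches the one described above; decile 1 is the least deprived, decile 10 the most deprived:

```python
# Hypothetical counts, for illustration only:
# (gender, IDACI decile) -> (GCSE CS entrants, cohort size)
counts = {
    ("girls", 1): (40, 1000),    # least deprived decile
    ("girls", 10): (70, 1000),   # most deprived decile
    ("boys", 1): (300, 1000),
    ("boys", 10): (220, 1000),
}

def uptake_rate(entrants, cohort):
    """Entrants as a percentage of the eligible cohort."""
    return round(100 * entrants / cohort, 1)

rates = {group: uptake_rate(entrants, cohort)
         for group, (entrants, cohort) in counts.items()}

# With these invented numbers, girls' uptake rises with deprivation
# while boys' uptake falls, mirroring the pattern in the charts.
assert rates[("girls", 10)] > rates[("girls", 1)]
assert rates[("boys", 10)] < rates[("boys", 1)]
```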

Peter covered much more data in the seminar, so do watch the video recording (below) if you want to learn more.

Peter’s analysis shows a lack of equity (i.e. equality of outcome in the form of proportional representation) in uptake of GCSE CS after age 14. It is also important to recognise, however, that England does mandate — not simply provide or offer — Computing for all pupils at both primary and secondary levels; making a subject mandatory is the only way to ensure that we do give access to all pupils.

What can we do about the lack of equity?

Billy presented some of the potential reasons for why some groups of young people are not fully represented in GCSE Computer Science:

  • There are many stereotypes surrounding the image of ‘the computer scientist’, and young people may not be able to identify with the perception they hold of ‘the computer scientist’
  • There is inequality in access to resources, as indicated by the research on science and STEM capital being carried out within the ASPIRES project

More research is needed to understand the subject choices young people make and their reasons for choosing as they do.

We also need to look at how the way we teach Computing to students aged 11 to 14 (and younger) affects whether they choose CS as a post-14 subject. Our next seminar revolves around equity-focused teaching practices, such as culturally relevant pedagogy or culturally responsive teaching, and how educators can use them in their CS learning environments. 

Meanwhile, our own research project at the Raspberry Pi Foundation, Gender Balance in Computing, investigates particular approaches in school and non-formal learning and how they can impact gender balance in Computer Science. For an overview of recent research around barriers to gender balance in school computing, look back on the research seminar by Katharine Childs from our team.

Peter and Billy themselves have recently been successful in obtaining funding for a research project to explore female computing performance and subject choice in English schools, a project they will be starting soon!

If you missed the seminar, you can watch the recording here. You can also find Peter and Billy’s presentation slides on our seminars page.

Next up in our seminar series

In our next research seminar on Tuesday 2 February at 17:00–18:30 GMT / 12:00–13:30 EST / 9:00–10:30 PST / 18:00–19:30 CET, we’ll welcome Prof Tia Madkins (University of Texas at Austin), Dr Nicol R. Howard (University of Redlands), and Shomari Jones (Bellevue School District), who are going to talk to us about culturally responsive pedagogy and equity-focused teaching in K-12 Computer Science. To join this free online seminar, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended Peter’s and Billy’s seminar, the link remains the same.

The post Computing education and underrepresentation: the data from England appeared first on Raspberry Pi.