All posts by Sue Sentance

Block-based programming: does it help students learn?

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/block-based-programming-does-it-help-students-learn-research-seminar/

At the Raspberry Pi Foundation, we are continually inspired by young learners in our community: they embrace digital making and computing to build creative projects, supported by our resources, clubs, and volunteers. While creating their projects, they are learning the core programming skills that underlie digital making.

Over the years, many tools and environments have been developed to make programming more accessible to young people. Scratch is one example of a block-based programming environment designed for young learners, and it has been shown to lower the barriers to getting started with programming; on our projects site we offer many step-by-step Scratch project resources.

A Scratch code-along, led by one of our educators on our weekly Digital Making at Home live stream

But does block-based programming actually help learning? Does it increase motivation and support students? Where is the hard evidence? In our latest research seminar, we were delighted to hear from Dr David Weintrop, an Assistant Professor at the University of Maryland who has done research in this area for several years and published widely on the differences between block-based and text-based programming environments.

David Weintrop

A variety of block-based programming environments

The first useful insight David shared was that we should avoid thinking of block-based programming as synonymous with the well-known Scratch environment. In his talk, David referred to several other environments with different affordances, such as Snap, Pencil Code, and Blockly.

Logos of block-based programming environments

Some of these, for example Pencil Code, offer a dual-modality (or hybrid) environment, where learners can write the same program side by side in a block-based and a text-based form. This side-by-side approach is based on the assumption that being able to match a text-based program to its block-based equivalent helps learners develop an understanding of the syntax of a text-based language.

Screenshot of the Pencil Code dual-modality programming environment
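
To make the idea concrete, here is a minimal sketch of the kind of pairing a dual-modality environment presents. Pencil Code itself pairs blocks with a text language such as CoffeeScript; the snippet below uses Python purely as an illustration, with each line of text code annotated with the block a learner would match it against.

    import turtle

    pen = turtle.Turtle()      # "when run": set up a turtle/sprite
    for _ in range(4):         # [repeat 4 times] block
        pen.forward(100)       # [move forward 100 steps] block
        pen.left(90)           # [turn left 90 degrees] block
    turtle.done()              # keep the drawing window open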

As a tool for transitioning to text-based programming

Another aspect of the research around block-based programming focuses on its usefulness as a transition to a text-based language. David described a 15-week study he conducted in high schools in the USA to investigate differences in student learning caused by use of block-based, text-based, and hybrid (a mixture of both using a dual-modality platform) programming tools.

Details of the study design: classroom-based, 3 conditions, 2 phases, quasi-experimental mixed method study

The 90 students in the study (14 to 16 years old) were divided into three groups, each with a different intervention but taught by the same teacher. In the first phase of the study (5 weeks), the groups were set the same tasks with the same learning objectives, but they used either block-based programming, text-based programming, or the hybrid environment.

After 5 weeks, students were given a test to assess learning outcomes, and they were asked questions about their attitudes to programming (specifically their perception of computing and their confidence). In the second phase (10 weeks), all the students were taught Java (a common language taught in the USA for end-of-school assessment), and then the test and attitudinal questions were repeated.

The results showed that at the 5-week point, the students who had used block-based programming scored higher in their learning outcome assessment, but at the final assessment after 15 weeks, all groups’ scores were roughly equivalent.  

A graph of assessment scores of the three groups in the study. The final scores are not significantly different.

In terms of students’ perception of computing and confidence, the responses of the Blocks group were very positive at the 5-week point, while at the 15-week point, the responses were less positive. The responses from the Text group showed a gradual increase in positivity between the 5- and 15-week points. The Hybrid group’s responses weren’t as negative as those of the Text group at the 5-week point, and their positivity didn’t decrease like the Blocks group’s did.

Taking both methods of assessment into account, the Hybrid group showed the best results in the study. The gains associated with the block-based introduction to programming did not translate to those students being further ahead when learning Java, but starting with block-based programming also did not hamper students’ transition to text-based programming.

David concluded his talk by recommending dual-modality environments (such as Pencil Code) for teaching programming, as used by the Hybrid group in his study.

More research is needed

The seminar audience raised many questions about David’s study, for example whether the actual teaching (pedagogy) may have differed between the three groups, and whether the results simply reflect the specific tools or environments that were used. This is definitely an area for further research.

It seems that students may benefit from different tools at different times, which is why a dual-modality environment can be very useful. Of course, competence in programming takes a long time to develop, so there is room on the research agenda for longitudinal studies that monitor students’ progress over many months and even years. Such studies could take into account both the teaching approach and the programming environment in order to determine what factors impact a deep understanding of programming concepts, and students’ desire to carry on with their programming journey. 

Next up in our series

If you missed the seminar, you can find David’s presentation slides and a recording of his talk on our seminars page.

Our next free online seminar takes place on Tuesday 5 January at 17:00–18:00 GMT / 12:00–13:00 EST / 9:00–10:00 PST / 18:00–19:00 CET. We’ll welcome Peter Kemp and Billy Wong, who are going to share insights from their research on computing education for underrepresented groups. To join this free online seminar, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended David’s seminar, the link remains the same.

The January seminar will be the first one in our series focusing on diversity and inclusion in computing education, which we’re co-hosting with the Royal Academy of Engineering. We hope to see you there!

The post Block-based programming: does it help students learn? appeared first on Raspberry Pi.

Diversity and inclusion in computing education — new research seminars

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/diversity-inclusion-computing-education-research-seminars/

At the Raspberry Pi Foundation, we host a free online research seminar once a month to explore a wide variety of topics in the area of digital and computing education. This year, we’ve hosted eleven seminars — you can (re)discover slides and recordings on our website.

A classroom of young learners and a teacher at laptops

Now we’re getting ready for new seminars in 2021! In the coming months, our seminars are going to focus on diversity and inclusion in computing education. This topic is extremely important, as we want to make sure that computing is accessible to all, that we understand how to actively remove barriers to participation for learners, and that we understand how to teach computing in an inclusive way. 

We are delighted to announce that these seminars focusing on diversity and inclusion will be co-hosted by the Royal Academy of Engineering. The Royal Academy of Engineering is harnessing the power of engineering to build a sustainable society and an inclusive economy that works for everyone.

Royal Academy of Engineering logo

We’re very excited to be partnering with the Academy because of our shared interest in ensuring that computing and engineering are inclusive and accessible to all.

Our upcoming seminars

The seminars take place on the first Tuesday of the month at 17:00–18:30 GMT / 12:00–13:30 EST / 9:00–10:30 PST / 18:00–19:30 CET.

  • 5 January 2021: Peter Kemp (King’s College London) and Billy Wong (University of Reading) will be looking at computing education in England, particularly GCSE computer science, and how it is accessed by groups typically underrepresented in computing.
  • 2 February 2021: Professor Tia Madkins (University of Texas at Austin), Nicol R. Howard (University of Redlands), and Shomari Jones (Bellevue School District) will be talking about equity-focused teaching in K–12 computer science. Find out more.
  • 2 March 2021: Dr Jakita O. Thomas (Auburn University, Alabama) will be talking about her research on supporting computational algorithmic thinking in the context of intersectional computing.
  • April 2021: event to be confirmed
  • 4 May 2021: Dr Cecily Morrison (Microsoft Research) will be speaking about her work on physical programming for people with visual impairments.

Join the seminars

We’d love to welcome you to these seminars so we can learn and discuss together. To get access, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended our seminars in the past, the link remains the same.

The post Diversity and inclusion in computing education — new research seminars appeared first on Raspberry Pi.

Formative assessment in the computer science classroom

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/research-seminar-formative-assessment-computer-science-classroom/

In computing education research, considerable focus has been put on the design of teaching materials and learning resources, and on investigating how young people learn computing concepts. But there has been less focus on assessment, particularly assessment for learning, which is called formative assessment. As classroom teachers are engaged in assessment activities all the time, it is surprising that researchers in school-level computing and computer science education have not focused on it more.

Shuchi Grover

That’s why in our most recent seminar, we were delighted to hear about formative assessment — assessment for learning — from Dr Shuchi Grover, of Looking Glass Ventures and Stanford University in the USA. Shuchi has a long track record of work in the learning sciences (called education research in the UK), and her contribution in the area of computational thinking has been hugely influential and widely drawn on in subsequent research.

Two types of assessment

Assessment is typically divided into two types:

  1. Summative assessment (i.e. assessing what has been learned), which typically takes place through examinations, final coursework, projects, etcetera.
  2. Formative assessment (i.e. assessment for learning), which is not aimed at giving grades and typically takes place through questioning, observation, plenary classroom activities, and dialogue with students.

Through formative assessment, teachers seek to find out where students are at, in order to use that information both to direct their preparation for the next teaching activities and to give students useful feedback to help them progress. Formative assessment can be used to surface misconceptions (or alternate conceptions) and for diagnosis of student difficulties.

Venn diagram of how formative assessment practices intersect with teacher knowledge and skills

As Shuchi outlined in her talk, a variety of activities can be used for formative assessment, for example:

  • Self- and peer-assessment activities (commonly used in schools).
  • Different forms of questioning and quizzes to support learning (not graded tests).
  • Rubrics and self-explanations (for assessing projects).

A framework for formative assessment

Shuchi described her own research on this topic, including a framework she has developed for formative assessment. This comprises three pillars:

  1. Assessment design.
  2. Teacher or classroom practice.
  3. The role of the community in furthering assessment practice.

Shuchi Grover's framework for formative assessment

Shuchi’s presentation then focused on part of the first pillar in the framework: types of assessments, and particularly types of multiple-choice questions that can be automatically marked or graded using software tools. Tools obviously don’t replace teachers, but they can be really useful for providing timely and short-turnaround feedback for students.
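
As a sketch of what such tooling does (hypothetical, not a specific product), the snippet below shows the core of an automatically marked quiz: each response is checked against an answer key, and a wrong answer triggers a short, teacher-written hint rather than just a grade.

    answer_key = {"q1": "B", "q2": "D"}

    # A hint for each wrong option the teacher wants to respond to specifically.
    feedback = {
        ("q1", "C"): "Remember that range(1, 4) stops before 4.",
        ("q2", "A"): "The loop body runs once for every item in the list.",
    }

    def mark(question_id, chosen):
        """Return immediate feedback for one multiple-choice response."""
        if chosen == answer_key[question_id]:
            return "Correct!"
        return feedback.get((question_id, chosen), "Not quite - try tracing the code again.")

    print(mark("q1", "B"))  # Correct!
    print(mark("q1", "C"))  # Remember that range(1, 4) stops before 4.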

As part of formative assessment, carefully chosen questions can also be used to reveal students’ misconceptions about the subject matter — these are called diagnostic questions. Shuchi discussed how in a classroom setting, teachers can employ this kind of question to help them decide what to focus on in future lessons, and to understand their students’ alternate or different conceptions of a topic. 

Formative assessment of programming skills

The remainder of the seminar focused on the formative assessment of programming skills. There are many ways of assessing developing programming skills (see Shuchi’s slides), including Parsons problems, microworlds, hotspot items, rubrics (for artifacts), and multiple-choice questions. As an MCQ example, the figure below shows some snippets of block-based code; students need to read each snippet and work out what the outcome of running it will be.

Example multiple-choice question using snippets of block-based code

Questions such as this highlight that it’s important for learners to engage in code comprehension and code reading activities when learning to program. This really underlines the fact that such assessment exercises can be used to support learning just as much as to monitor progress.
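
Here is a hypothetical example of such a code-reading question, written in Python rather than blocks: learners predict the output before running the program, and each distractor is chosen to surface a common misconception.

    total = 0
    for i in range(1, 4):      # i takes the values 1, 2, 3
        total = total + i
    print(total)

    # What does this program print?
    #   A) 3   (misconception: only the final value of i counts)
    #   B) 6   (correct: 1 + 2 + 3)
    #   C) 10  (misconception: range(1, 4) includes 4)
    #   D) 4   (misconception: the loop body runs only once)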

Formative assessment: our support for teachers

Interestingly, Shuchi commented that in her experience, teachers in the UK are more used to using code reading activities than US teachers. This may be because code comprehension activities are embedded into the curriculum materials and support for pedagogy, both of which the Raspberry Pi Foundation developed as part of the National Centre for Computing Education in England. We explicitly share approaches to teaching programming that incorporate code reading, for example the PRIMM approach. Moreover, our work at the Raspberry Pi Foundation includes the Isaac Computer Science online learning platform for A level computer science students and teachers, which is centered around different types of questions designed as tools for learning.

All these materials are freely available to teachers wherever they are based.

Further work on formative assessment

Based on her work in US classrooms researching this topic, Shuchi’s call to action for teachers was to pay attention to formative assessment in computer science classrooms and to investigate what useful tools can support them to give feedback to students about their learning. 

Advice from Shuchi Grover on how to embed formative assessment in classroom practice

Shuchi is currently involved in an NSF-funded research project called CS Assess to further develop formative assessment in computer science via a community of educators. For further reading, there are two chapters related to formative assessment in computer science classrooms in the recently published book Computer Science in K-12 edited by Shuchi.

There was much to take away from this seminar, and we are really grateful to Shuchi for her input and look forward to hearing more about her developing project.

Join our next seminar

If you missed the seminar, you can find the presentation slides and a recording of Shuchi’s talk on our seminars page.

In our next seminar on Tuesday 3 November at 17:00–18:30 GMT / 12:00–13:30 EST / 9:00–10:30 PST / 18:00–19:30 CET, I will be presenting my work on PRIMM, particularly focusing on language and talk in programming lessons. To join, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended this past seminar, the link remains the same.

The post Formative assessment in the computer science classroom appeared first on Raspberry Pi.

How is computing taught in schools around the world?

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/international-computing-curriculum-metrecc-research-seminar/

Around the world, formal education systems are bringing computing knowledge to learners. But what exactly is set down in different countries’ computing curricula, and what are classroom educators teaching? This was the topic of the first in the autumn series of our Raspberry Pi research seminars on Tuesday 8 September.

A glowing globe floating above an open hand in the dark

We heard from an international team (Monica McGill, USA; Rebecca Vivian, Australia; Elizabeth Cole, Scotland) who represented a group of researchers also based in England, Malta, Ireland, and Italy. As a researcher working at the Raspberry Pi Foundation, I myself was part of this research group. The group developed METRECC, a comprehensive and validated survey tool that can be used to benchmark and measure developments in the teaching and learning of computing in formal education systems around the world. Monica, Rebecca, and Elizabeth presented how the research group developed and validated the METRECC tool, and shared some findings from their pilot study.

What’s in a curriculum? Developing a survey tool

Those of us who work or have worked in school education use the word ‘curriculum’ frequently, although it’s an example of education terminology that means different things in different contexts, and to different people. Following Porter and Smithson (2001)1, we can distinguish between the intended curriculum and the enacted curriculum:

  • Intended curriculum: Policy tools such as curriculum standards, frameworks, or guidelines that outline the curriculum teachers are expected to deliver.
  • Enacted curriculum: Actual curricular content in which students engage in the classroom, and adopted pedagogical approaches; for computer science (CS) curricula, this also includes students’ use of technology, physical computing devices, and tools in CS lessons.

To compare the intended and enacted computing curriculum in as many countries as possible, at particular points in time, the research group that Monica, Rebecca, Elizabeth, and I were part of developed the METRECC survey tool.

A classroom of students in North America

METRECC stands for MEasuring TeacheR Enacted Computing Curriculum. The METRECC survey has 11 categories of questions and is designed to be completed by computing teachers within 35–40 minutes. Following best practice in research, which calls for standardised research instruments, the research group ensured that the survey produces valid, reliable results (meaning that it works as intended) before using it to gather data.

Using METRECC in a pilot study

In their pilot study, the research group gathered data from 7 countries. The intended curriculum for each country was determined by examining standards and policies in place for each country/state under consideration. Teachers’ answers in the METRECC survey provided the countries’ enacted curricula. (The complete dataset from the pilot study is publicly available at csedresearch.org, a very useful site for CS education researchers where many surveys are shared.)

Two girls coding at a computer under supervision of a female teacher

The researchers then mapped the intended to the enacted curricula to find out whether teachers were actually teaching the topics that were prescribed for them. Overall, the results of the mapping showed a good match between intended and enacted curricula. Examples of mismatches include fewer primary school teachers than expected reporting that they taught visual or symbolic programming, even though the topic did appear on their curriculum.

A table listing computer science topics
This table shows the computer science topics the METRECC tool asks teachers about, and what percentage of respondents in the pilot study stated that they teach these to their students.
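
As a rough illustration of the mapping step (hypothetical code and made-up data, not the actual METRECC analysis), the sketch below computes, for each topic in an intended curriculum, the share of surveyed teachers who report teaching it, and flags low-coverage topics as possible mismatches between the intended and enacted curriculum.

    intended_topics = ["algorithms", "visual programming", "networks", "data representation"]

    # Each response is the set of topics one teacher reports teaching (made-up data).
    responses = [
        {"algorithms", "visual programming", "data representation"},
        {"algorithms", "networks"},
        {"algorithms", "data representation"},
    ]

    THRESHOLD = 0.5  # flag topics reported by fewer than half of respondents

    for topic in intended_topics:
        coverage = sum(topic in taught for taught in responses) / len(responses)
        flag = "  <- possible mismatch" if coverage < THRESHOLD else ""
        print(f"{topic:20} {coverage:.0%}{flag}")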

Another aspect of the METRECC survey allows researchers to measure teachers’ confidence, self-efficacy, and self-esteem. The results of the pilot study showed a relationship between years of experience and CS self-esteem; in particular, after four years of teaching, teachers started to report high self-esteem in relation to computer science. Moreover, primary teachers reported significantly lower self-esteem than secondary teachers did, and female teachers reported lower self-esteem than male teachers did.

Adapting the survey’s language

The METRECC survey has also been used in South Asia, namely Bangladesh, Nepal, Pakistan, and Sri Lanka (where computing is taught under ICT). Amongst other things, what the researchers learned from that study was that some of the survey questions needed to be adapted to be relevant to these countries. For example, while in the UK we use the word ‘gifted’ to mean ‘high-attaining’, in the South Asian countries involved in the study, to be ‘gifted’ means having special needs.

Two girls coding at a computer under supervision of a female teacher

The study highlighted how important it is to ensure that surveys intended for an international audience use terminology and references that are pertinent to many countries, or that the survey language is adapted to make sense in each context in which it is delivered.

Let’s keep this monitoring of computing education moving forward!

The seminar presentation was well received, and because we now hold our seminars for 90 minutes instead of an hour, we had more time for questions and answers.

My three main take-aways from the seminar were:

1. International collaboration is key

It is very valuable to be able to form international working groups of researchers collaborating on a common project; we have so much to learn from each other. Our Raspberry Pi research seminars attract educators and researchers from many different parts of the world, and we can truly push the field’s understanding forward when we listen to experiences and lessons of people from diverse contexts and cultures.

2. Making research data publicly available

Increasingly, it is expected that research datasets are made available in publicly accessible repositories. While this is becoming the norm in healthcare and the sciences, it’s not yet as prevalent in computing education research. It was great to be able to publicly share the dataset from the METRECC pilot study, and we encourage other researchers in this field to do the same.

3. Extending the global scope of this research

Finally, this work is only just beginning. Over the last decade, there has been an increasing move towards teaching aspects of computer science in school in many countries around the world, and being able to measure change and progress is important. Only a handful of countries were involved in the pilot study, and it would be great to see this research extend to more countries, with larger numbers of teachers involved, so that we can really understand the global picture of formal computing education. Budding research students, take heed!

Next up in our seminar series

If you missed the seminar, you can find the presentation slides and a recording of the researchers’ talk on our seminars page.

In our next seminar on Tuesday 6 October at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PT / 18:00–19:30 CEST, we’ll welcome Shuchi Grover, a prominent researcher in the area of computational thinking and formative assessment. The title of Shuchi’s seminar is Assessments to improve student learning in introductory CS classrooms. To join, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended this past seminar, the link remains the same.


1. Andrew C. Porter and John L. Smithson (2001). Defining, Developing and Using Curriculum Indicators. CPRE Research Reports, 12-2001.

The post How is computing taught in schools around the world? appeared first on Raspberry Pi.