Tag Archives: research

Culturally relevant Computing: Experiences of primary learners

Post Syndicated from Alex Hadwen-Bennett original https://www.raspberrypi.org/blog/culturally-relevant-pedagogy-experiences-primary-computing/

Today’s blog is written by Dr Alex Hadwen-Bennett, who we worked with to find out primary school learners’ experiences of engaging with culturally relevant Computing lessons. Alex is a Lecturer in Computing Education at King’s College London, where he undertakes research focusing on inclusive computing education and the pedagogy of making.

Despite many efforts to make a career in Computing more accessible, many groups of people are still underrepresented in the field. For instance, a 2022 report revealed that only 22% of people currently working in the IT industry in the UK are women. Additionally, among learners who study Computing at schools in England, Black Caribbean students are currently one of the most underrepresented groups. One approach that has been suggested to address this underrepresentation at school is culturally relevant pedagogy.

In a computing classroom, a girl laughs at what she sees on the screen.

For this reason, a particular focus of the Raspberry Pi Foundation’s academic research programme is to support Computing teachers in the use of culturally relevant pedagogy. This pedagogy involves developing learning experiences that deliberately aim to enable all learners to engage with and succeed in Computing, including by bringing their culture and interests into the classroom.

The Foundation’s work in this area started with the development of guidelines for culturally relevant and responsive teaching together with a group of teachers and external researchers. The Foundation’s researchers then explored how a group of Computing teachers employed the guidelines in their own teaching. In a follow-on study funded by Cognizant, the team worked with 13 primary school teachers in England to adapt Computing lessons to make them culturally relevant for their learners. In this process, the teachers adapted a unit on photo editing for Year 4 (ages 8–9), and a unit about vector graphics for Year 5 (ages 9–10). As part of the project, I worked with the Foundation team to analyse and report on data gathered from focus groups of primary learners who had engaged with the adapted units.

At the beginning of this study, teachers adapted two units of work that cover digital literacy skills

Conducting the focus groups

For the focus groups, the Foundation team asked teachers from three schools to each choose four learners to take part. All children in the three focus groups had taken part in all the lessons involving the culturally adapted resources. The groups included both boys and girls and, where possible, children from diverse cultural backgrounds.

The questions for the focus groups were prepared in advance and covered:

  • Perceptions of Computing as a subject
  • Reflections on their experiences of engaging with the culturally adapted resources
  • Perceptions of who does Computing

Outcomes from the focus groups

“I feel happy that I see myself represented in some way.”

“It was nice to do something that actually represented you in many different ways, like your culture and your background.”

– Statements of learners who participated in the focus groups

When the learners were asked about what they did in their Computing lessons, most of them made references to working with and manipulating graphics; fewer made references to programming and algorithms. This emphasis on graphics is likely related to this being the most recent topic the learners engaged with. The learners were also asked about their reflections on the culturally adapted graphics unit that they had recently completed. Many of them felt that the unit gave them the freedom to incorporate things that related to their interests or culture. The learners’ responses also suggested that they felt represented in the work they completed during the unit. Most of them indicated that their interests were acknowledged, whereas fewer mentioned that they felt their cultural backgrounds were highlighted.

“Anyone can be good at computing if they have the passion to do it.”

– Statement by a learner who participated in a focus group

When considering who does computing, the learners made multiple references to people who keep trying or do not give up, whereas only a couple of learners said that computer scientists need to be clever or intelligent to do computing. A couple of learners suggested that anyone can do computing. It is encouraging that the learners seemed to associate being good at computing with effort rather than with ability. However, it is unclear whether this view is connected to the learners engaging with the culturally adapted resources.

Reflections and next steps

While this was a small-scale study, the focus group findings do suggest that engaging with culturally adapted resources can make primary learners feel more represented in their Computing lessons. In particular, engaging with an adapted unit led learners to feel that their interests were recognised as well as, to a lesser extent, their cultural backgrounds. This suggests that primary-aged learners may identify their practical interests as the most important part of their background, and want to share this in class.

Two children code on laptops while an adult supports them.

Finally, the responses of the learners suggest that they feel that perseverance is a more important quality than intelligence for success in computing and that anyone can do it. While it is not possible to say whether this is directly related to their engagement with a culturally adapted unit, it would be an interesting area for further research.

More information and resources

You can find out more about culturally relevant pedagogy and the Foundation's research on it on the Foundation's website.

The Foundation would like to extend thanks to Cognizant for funding this research, and to the primary computing teachers and learners who participated in the project. 

The post Culturally relevant Computing: Experiences of primary learners appeared first on Raspberry Pi Foundation.

Engaging primary Computing teachers in culturally relevant pedagogy through professional development

Post Syndicated from Claire Johnson original https://www.raspberrypi.org/blog/culturally-relevant-pedagogy-areas-opportunity-adapting-lessons/

Underrepresentation in computing is a widely known issue, in industry and in education. To cite some statistics from the UK:

  • A Black British Voices report from August 2023 noted that 95% of respondents believe the UK curriculum neglects black lives and experiences
  • Fewer students from working class backgrounds study GCSE Computer Science
  • When they leave formal education, fewer female, BAME, and white working class people are employed in the field of computer science (Kemp 2021)
  • Only 21% of GCSE Computer Science students are female, 15% at A level, and 22% at undergraduate level (JCQ 2020, Ofqual 2020, UCAS 2020)
  • Students with additional needs are also underrepresented

In a computing classroom, two girls concentrate on their programming task.

Such statistics have been the status quo for too long. Many Computing teachers already endeavour to bring about positive change where they can and engage learners by including their interests in the lessons they deliver, so how can we support them to do this more effectively? Extending the reach of computing so that it is accessible to all also means that we need to consider what formal and informal values predominate in the field of computing. What is the ‘hidden’ curriculum in computing that might be excluding some learners? Who is and who isn’t represented?

Katharine Childs.
Katharine Childs (Raspberry Pi Foundation)

In a recent research seminar, Katharine Childs from our team outlined a research project we conducted, which included a professional development workshop to increase primary teachers’ awareness of and confidence in culturally relevant pedagogy. In the workshop, teachers considered how to effectively adapt curriculum materials to make them culturally relevant and engaging for the learners in their classrooms. Katharine described the practical steps teachers took to adapt two graphics-related units, and invited seminar participants to apply their learning to a graphics activity themselves.

What is culturally relevant pedagogy?

Culturally relevant pedagogy is a teaching framework which values students’ identities, backgrounds, knowledge, and ways of learning. By drawing on students’ own interests, experiences, and cultural knowledge, educators can increase the likelihood that the curriculum they deliver is relevant, engaging, and accessible to all.

The idea of culturally relevant pedagogy was first introduced in the US in the 1990s by African-American academic Gloria Ladson-Billings (Ladson-Billings 1995). Its aim was threefold: to raise students’ academic achievement, to develop students’ cultural competence, and to promote students’ critical consciousness. The idea of culturally responsive teaching was later advanced by Geneva Gay (2000) and more recently brought into focus in US computer science education by Kimberly Scott and colleagues (2015). The approach has been localised for England by Hayley Leonard and Sue Sentance (2021) in work they undertook here at the Foundation.

Ten areas of opportunity

Katharine began her presentation by explaining that the professional development workshop in the Primary culturally adapted resources for computing project built on two of our previous research projects, in which we developed guidelines for culturally relevant and responsive computing and explored how teachers used them in practice. This third project ran as a pilot study funded by Cognizant, starting in Autumn 2022 with a one-day, in-person workshop for 13 primary computing teachers.

The research structure was a workshop, followed by resource adaptation, then delivery of the resources, and evaluation through a parent survey, teacher interviews, and student focus groups.

Katharine then introduced us to the 10 areas of opportunity (AO) our research at the Raspberry Pi Computing Education Research Centre had identified for culturally relevant pedagogy. These 10 areas were used as practical prompts to frame the workshop discussions:

  1. Find out about learners
  2. Find out about ourselves as teachers
  3. Review the content
  4. Review the context
  5. Make the learning accessible to all
  6. Provide opportunities for open-ended and problem solving activities
  7. Promote collaboration and structured group discussion
  8. Promote student agency through choice
  9. Review the learning environment
  10. Review related policies, processes, and training in your school and department

At first glance it is easy to think that you do most of those things already, or to disregard some items as irrelevant to the computing curriculum. What would your own cultural identity (see AO2) have to do with computing, you might wonder. But taking a less complacent perspective might lead you to consider all the different facets that make up your identity and then to think about the same for the students you teach. You may discover that there are many areas which you have left untapped in your lesson planning.

Two young people learning together at a laptop.

Katharine explained how this is where the professional development workshop showed itself as beneficial for the participants. It gave teachers the opportunity to reflect on how their cultural identity impacted on their teaching practices — as a starting point to learning more about other aspects of the culturally relevant pedagogy approach.

Our researchers were interested in how they could work alongside teachers to adapt two computing units to make them more culturally relevant for teachers’ specific contexts. They used the Computing Curriculum units on Photo Editing (Year 4) and Vector Graphics (Year 5).

A slide about adapting an emoji teaching activity to make it culturally relevant.

Katharine illustrated some of the adaptations teachers and researchers working together had made to the emoji activity above, and which areas of opportunity (AO) had been addressed; this aspect of the research will be reported in later publications.

Results after the workshop

Although the number of participants in this pilot study was small, the findings show that the professional development workshop significantly increased teachers’ awareness of culturally relevant pedagogy and their confidence in adapting resources to take account of local contexts:

  • After the workshop, 10/13 teachers felt more confident to adapt resources to be culturally relevant for their own contexts, and 8/13 felt more confident in adapting resources for others.
  • Before the workshop, 5/13 teachers strongly agreed that it was an important part of being a computing teacher to examine one’s own attitudes and beliefs about race, gender, disability, and sexual orientation. After the workshop, the number in agreement rose to 12/13.
  • After the workshop, 13/13 strongly agreed that part of a computing teacher’s responsibility is to challenge teaching practices which maintain social inequities (compared to 7/13 previously).
  • Before the workshop, 4/13 teachers strongly agreed that it is important to allow student choice when designing computing activities; this increased to 9/13 after the workshop.

These quantitative shifts in perspective indicate a positive effect of the professional development pilot. 

Katharine described that in our qualitative interviews with the participating teachers, they said that their understanding of culturally relevant pedagogy had increased and that they recognised the many benefits of the approach for learners. They valued the opportunity to discuss their contexts and to adapt materials they currently used alongside other teachers, because this made for a more ‘authentic’ and practical professional development experience.

The seminar ended with breakout sessions inviting viewers to consider possible adaptations that could be made to the graphics activities which had been the focus of the workshop.

In the breakout sessions, attendees also discussed specific examples of culturally relevant teaching practices that had been successful in their own classrooms, and they considered how schools and computing educational initiatives could support teachers in their efforts to integrate culturally relevant pedagogy into their practice. Some attendees observed that it was not always possible to change schemes of work without a ‘whole-school’ approach, senior leadership team support, and commitment to a research-based professional development programme.

Where do you see opportunities for your teaching?

The seminar reminds us that the education system is not culture neutral and that teachers generally transmit the dominant culture (which may be very different from their students’) in their settings (Vrieler et al, 2022). Culturally relevant pedagogy is an attempt to address the inequities and biases that exist, which result in many students feeling marginalised, disenfranchised, or underachieving. It urges us to incorporate learners’ cultures and experiences in our endeavours to create a more inclusive computing curriculum, and to adopt an intersectional lens so that all can thrive.

Secondary school age learners in a computing classroom.

As a pilot study, the workshop was offered to a small cohort of 13, yet the findings show that the intervention significantly increased participants’ awareness of culturally relevant pedagogy and their confidence in adapting resources to take account of local contexts.

Of course there are many ways in which teachers already adapt resources to make them interesting and accessible to their pupils. Further examples of the sort of adaptations you might make using these areas of opportunity include:

  • AO1: You could find out to what extent learners feel like they ‘belong’ or are included in a particular computing-related career. This is sure to yield valuable insights into learners’ knowledge and/or preconceptions of computing-related careers. 
  • AO3: You could introduce topics such as the ethics of AI, data bias, investigations of accessibility and user interface design. 
  • AO4: You might change the context of a unit of work on the use of conditional statements in programming, from creating a quiz about ‘Vikings’ to focus on, for example, aspects of youth culture which are more engaging to some learners such as football or computer games, or to focus on religious celebrations, which may be more meaningful to others.
  • AO5: You could experiment with a particular pedagogical approach to maximise the accessibility of a unit of work. For example, you could structure a programming unit by using the PRIMM model, or follow the Universal Design for Learning framework to differentiate for diversity.
  • AO6/7: You could offer more open-ended and collaborative activities once in a while, to promote engagement and to allow learners to express themselves autonomously.
  • AO8: By allowing learners to choose topics which are relevant or familiar to their individual contexts and identities, you can increase their feeling of agency. 
  • AO9: You could review both your learning materials and your classroom to ensure that all your students are fully represented.
  • AO10: You can bring colleagues on board too; the whole enterprise of embedding culturally relevant pedagogy will be more successful when school- as well as department-level policies are reviewed and prioritised.

Can you see an opportunity for integrating culturally relevant pedagogy in your classroom? We would love to hear about examples of culturally relevant teaching practices that you have found successful. Let us know your thoughts or questions in the comments below.

You can watch Katharine’s seminar here:

You can download her presentation slides on our ‘previous seminars’ page, and you can read her research paper.

To get a practical overview of culturally relevant pedagogy, read our 2-page Quick Read on the topic and download the guidelines we created with a group of teachers and academic specialists.

Tomorrow we’ll be sharing a blog about how the learners who engaged with the culturally adapted units found the experience, and how it affected their views of computing. Follow us on social media to not miss it!

Join our upcoming seminars live

On 12 December we’ll host the last seminar session in our series on primary (K-5) computing. Anaclara Gerosa will share her work on how to design and structure early computing activities that promote and scaffold students’ conceptual understanding. As always, the seminar is free and takes place online at 17:00–18:30 GMT / 12:00–13:30 ET / 9:00–10:30 PT / 18:00–19:30 CET. Sign up and we’ll send you the link to join on the day.

In 2024, our new seminar series will be about teaching and learning programming, with and without AI tools. If you’re signed up to our seminars, you’ll receive the link to join every monthly seminar.

The post Engaging primary Computing teachers in culturally relevant pedagogy through professional development appeared first on Raspberry Pi Foundation.

Netflix Original Research: MIT CODE 2023

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/netflix-original-research-mit-code-2023-9340b879176a

Netflix was thrilled to be the premier sponsor for the 2nd year in a row at the 2023 Conference on Digital Experimentation (CODE@MIT) in Cambridge, MA. The conference features a balanced blend of academic and industry research from some wicked smart folks, and we’re proud to have contributed a number of talks and posters along with a plenary session.

Our contributions kicked off with a concept that is crucial to our understanding of A/B tests: surrogates!

Our first talk was given by Aurelien Bibaut (with co-authors Nathan Kallus, Simon Ejdemyr, and Michael Zhao), in which we discussed how to confidently measure long-term outcomes using short-term surrogates in the presence of bias. For example, how do we estimate the effects of innovations on retention a year later without running all our experiments for a year? We proposed an estimation method using cross-fold procedures, and constructed valid confidence intervals for long-term effects before those effects are fully observed.

Later on, Michael Zhao (with Vickie Zhang, Anh Le and Nathan Kallus) spoke about the evaluation of surrogate index models for product decision making. Using 200 real A/B tests performed at Netflix, we showed that surrogate-index models, constructed using only 2 weeks of data, lead to the same product ship decisions ~95% of the time when compared to making a call based on 2 months of data. This means we can reliably run shorter tests with confidence without needing to wait months for results!
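As a deliberately simplified illustration of the surrogate-index idea (simulated data and a linear model here stand in for the real metrics and models, which are not public): a mapping from short-term surrogates to the long-term outcome is learned on historical experiments, and the fitted index is then used to score a new experiment’s arms after only two weeks.

```python
# Toy sketch of a surrogate index: all data below is simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Historical data: 2-week surrogates (e.g. engagement metrics) and the
# 2-month outcome (e.g. retention) observed for the same users.
S_hist = rng.normal(size=(10_000, 3))
y_hist = S_hist @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=10_000)

index = LinearRegression().fit(S_hist, y_hist)   # the surrogate index

# New experiment: only 2 weeks of surrogate data is available per arm.
S_treat = rng.normal(loc=0.05, size=(5_000, 3))
S_ctrl = rng.normal(loc=0.00, size=(5_000, 3))

# Estimate the long-term treatment effect via the surrogate index
ate_hat = index.predict(S_treat).mean() - index.predict(S_ctrl).mean()
print(f"estimated long-term ATE: {ate_hat:.4f}")
```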

Our next topic focused on how to understand and balance competing engagement metrics; for example, should 1 hour of gaming equal 1 hour of streaming? Michael Zhao and Jordan Schafer shared a poster on how they built an Overall Evaluation Criterion (OEC) metric that provides holistic evaluation for A/B tests, appropriately weighting different engagement metrics to serve a single overall objective. This new framework has enabled fast and confident decision making in tests, and is being actively adapted as our business continues to expand into new areas.

In the second plenary session of the day, Martin Tingley took us on a compelling and fun journey of complexity, exploring key challenges in digital experimentation and how they differ from the challenges faced by agricultural researchers a century ago. He highlighted different areas of complexity and provided perspectives on how to tackle the right challenges based on business objectives.

Our final talk was given by Apoorva Lal (with co-authors Samir Khan and Johan Ugander), in which we showed how partial identification of the dose-response function (DRF) under non-parametric assumptions can provide more insightful analyses of experimental data than standard ATE analysis does. We revisited a study that algorithmically reduced like-minded content, and showed how we could extend the binary ATE analysis to answer how the amount of like-minded content a user sees affects their political attitudes.

We had a blast connecting with the CODE@MIT community and bonding over our shared enthusiasm for not only rigorous measurement in experimentation, but also stats-themed stickers and swag!

One of our stickers this year, can you guess what this is showing?!

We look forward to next year’s iteration of the conference and hope to see you there!

Psst! We’re hiring Data Scientists across a variety of domains at Netflix — check out our open roles.


Netflix Original Research: MIT CODE 2023 was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Spotlight on teaching programming with and without AI in our 2024 seminar series

Post Syndicated from Bonnie Sheppard original https://www.raspberrypi.org/blog/teaching-programming-ai-seminar-series-2024/

How do you best teach programming in school? It’s one of the core questions for primary and secondary computing teachers. That’s why we’re making it the focus of our free online seminars in 2024. You’re invited to attend and hear about the newest research about the teaching and learning of programming, with or without AI tools.

Two smiling adults learn about computing at desktop computers.

Building on the success and the friendly, accessible session format of our previous seminars, this coming year we will delve into the latest trends and innovative approaches to programming education in school.

Secondary school age learners in a computing classroom.

Our online seminars are for everyone interested in computing education

Our monthly online seminars are not only for computing educators but also for everyone else who is passionate about teaching young people to program computers. The seminar participants are a diverse community of teachers, technology enthusiasts, industry professionals, coding club volunteers, and researchers.

Two adults learn about computing at desktop computers.

With the seminars we aim to bridge the gap between the newest research and practical teaching. Whether you are an educator in a traditional classroom setting or a mentor guiding learners in a CoderDojo or Code Club, you will gain insights from leading researchers about how school-age learners engage with programming. 

What to expect from the seminars

Each online seminar begins with an expert presenter delivering their latest research findings in an accessible way. We then move into small groups to encourage discussion and idea exchange. Finally, we come back together for a Q&A session with the presenter.

Here’s what attendees had to say about our previous seminars:

“As a first-time attendee of your seminars, I was impressed by the welcoming atmosphere.”

“[…] several seminars (including this one) provided valuable insights into different approaches to teaching computing and technology.”

“I plan to use what I have learned in the creation of curriculum […] and will pass on what I learned to my team.”

“I enjoyed the fact that there were people from different countries and we had a chance to see what happens elsewhere and how that may be similar and different to what we do here.”

January seminar: AI-generated Parsons Problems

Computing teachers know that, for some students, learning about the syntax of programming languages is very challenging. Working through Parsons Problems can be a way for students to learn to make sense of the order of lines of code and how syntax is organised. But for teachers it can be hard to precisely diagnose their students’ misunderstandings, which in turn makes it hard to create activities that address these misunderstandings.
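For readers new to the format, a Parsons Problem presents the correct lines of a program in scrambled order, and learners rearrange them. A tiny, made-up illustration:

```python
# A miniature Parsons Problem. Learners receive these lines shuffled:
#
#     print(total)
#     for n in numbers:
#     total = 0
#         total = total + n
#     numbers = [1, 2, 3]
#
# and must arrange (and indent) them into a working program:
numbers = [1, 2, 3]
total = 0
for n in numbers:
    total = total + n
print(total)  # prints 6
```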

A group of students and a teacher at the Coding Academy in Telangana.

At our first 2024 seminar on 9 January, Dr Barbara Ericson and Xinying Hou (University of Michigan) will present a promising new approach to helping teachers solve this difficulty. In one of their studies, they combined Parsons Problems and generative AI to create targeted activities for students based on the errors students had made in previous tasks. Thus they were able to provide personalised activities that directly addressed gaps in the students’ learning.

Sign up now to join our seminars

All our seminars start at 17:00 UK time (18:00 CET / 12:00 noon ET / 9:00 PT) and are held online on Zoom. To ensure you don’t miss out, sign up now to receive calendar invitations and access links for each seminar on the day.

If you sign up today, we’ll also invite you to our 12 December seminar with Anaclara Gerosa (University of Glasgow) about how to design and structure computing activities for young learners, the final session in our 2023 series about primary (K-5) computing education.

The post Spotlight on teaching programming with and without AI in our 2024 seminar series appeared first on Raspberry Pi Foundation.

Support for new computing teachers: A tool to find Scratch programming errors

Post Syndicated from Bonnie Sheppard original https://www.raspberrypi.org/blog/support-new-computing-teachers-debugging-scratch-litterbox/

We all know that learning to program, and specifically learning how to debug or fix code, can be frustrating and leave beginners overwhelmed and disheartened. In a recent blog article, our PhD student Lauria at the Raspberry Pi Computing Education Research Centre highlighted the pivotal role that teachers play in shaping students’ attitudes towards debugging. But what about teachers who are coding novices themselves?

Two adults learn about computing at desktop computers.

In many countries, primary school teachers are holistic educators and often find themselves teaching computing despite having little or no experience in the field. In a recent seminar of our series on computing education for primary-aged children, Luisa Greifenstein told attendees that struggling with debugging and negative attitudes towards programming were among the top ten challenges mentioned by teachers.

Luisa Greifenstein.

Luisa is a researcher at the University of Passau, Germany, and has been working closely with both teacher trainees and experienced primary school teachers in Germany. She’s found that giving feedback to students can be difficult for primary school teachers, and especially for teacher trainees, as programming is still new to them. Luisa’s seminar introduced a tool to help.

A unique approach: Visualising debugging with LitterBox

To address this issue, the University of Passau has initiated the primary::programming project. One of its flagship tools, LitterBox, offers a unique solution to debugging and is specifically designed for Scratch, a beginners’ programming language widely used in primary schools.

A screenshot from the LitterBox tool.
You can upload Scratch program files to LitterBox to analyse them. Click to enlarge.

LitterBox serves as a static code debugging tool that transforms code examination into an engaging experience. With a nod to the Scratch cat, the tool visualises the debugging of Scratch code as checking the ‘litterbox’, categorising issues into ‘bugs’ and ‘smells’:

  • Bugs represent code patterns that have gone wrong, such as missing loops or specific blocks
  • Smells indicate that the code couldn’t be processed correctly because of duplications or unnecessary elements
A screenshot from the LitterBox tool.
The code patterns LitterBox recognises. Click to enlarge.

What sets LitterBox apart is that it also rewards correct code by displaying ‘perfumes’. For instance, it will praise correct broadcasting or the use of custom blocks. For every identified problem or achievement, the tool provides short and direct feedback.

A screenshot from the LitterBox tool.
LitterBox also identifies good programming practice. Click to enlarge.

Luisa and her team conducted a study to gauge the effectiveness of LitterBox. In the study, teachers were given fictitious student code with bugs and were asked to first debug the code themselves and then explain in a manner appropriate to a student how to do the debugging.

The results were promising: teachers using LitterBox outperformed a control group with no access to the tool. However, the team also found that not all hints proved equally helpful. When hints lacked direct relevance to the code at hand, teachers found them confusing, which highlighted the importance of refining the tool’s feedback mechanisms.

A bar chart showing that LitterBox helps computing teachers.

Despite its limitations, LitterBox proved helpful in another important aspect of the teachers’ work: coding task creation. Novice students require structured tasks and help sheets when learning to code, and teachers often invest substantial time in developing these resources. LitterBox does not guide educators in generating new tasks or adapting them to their students’ needs. However, in a second study conducted by Luisa’s team, teachers who had access to LitterBox not only received support in debugging their own code, but also provided more scaffolding in the task instructions they created for their students than teachers without LitterBox.

How to maximise the impact of new tools: use existing frameworks and materials

One important realisation that we had in the Q&A phase of Luisa’s seminar was that many different research teams are working on solutions for similar challenges, and that the impact of this research can be maximised by integrating new findings and resources. For instance, what the LitterBox tool cannot offer could be filled by:

  • Pedagogical frameworks to enhance teachers’ lessons and feedback structures. Frameworks such as PRIMM (Predict, Run, Investigate, Modify, and Make) or TIPP&SEE for Scratch projects (Title, Instructions, Purpose, Play & Sprites, Events, Explore) can serve as valuable resources. These frameworks provide a structured approach to lesson design and teaching methodologies, making it easier for teachers to create engaging and effective programming tasks. Additionally, by adopting semantic waves in the feedback for teachers and students, a deeper understanding of programming concepts can be fostered. 
  • Existing courses and materials to aid task creation and adaptation. Our expert educators at the Raspberry Pi Foundation have not only created free lesson plans and courses for teachers and educators, but also dedicated non-formal learning paths for Scratch, Python, Unity, web design, and physical computing that can serve as a starting point for classroom tasks.

Exploring innovative ideas in computing education

As we navigate the evolving landscape of programming education, it’s clear that innovative tools like LitterBox can make a significant difference in the journey of both educators and students. By equipping educators with effective debugging and task creation solutions, we can create a more positive and engaging learning experience for students.

If you’re an educator, consider exploring how such tools can enhance your teaching and empower your students in their coding endeavours.

You can watch the recording of Luisa’s seminar here:

Sign up now to join our next seminar

If you’re interested in the latest developments in computing education, join us at one of our free, monthly seminars. In these sessions, researchers from all over the world share their innovative ideas and are eager to discuss them with educators and students. In our December seminar, Anaclara Gerosa (University of Edinburgh) will share her findings about how to design and structure early-years computing activities.

This will be the final seminar in our series about primary computing education. Look out for news about the theme of our 2024 seminar series, which is coming soon.

The post Support for new computing teachers: A tool to find Scratch programming errors appeared first on Raspberry Pi Foundation.

Is That Smart Home Technology Secure? Here’s How You Can Find Out.

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2023/10/30/is-that-smart-home-technology-secure-heres-how-you-can-find-out/


As someone who likes the convenience of smart home Internet of Things (IoT) technology, I am regularly on the lookout for products that meet my expectations while also addressing security and privacy concerns. Smart home technology should be treated no differently from other products we buy as consumers, such as an automobile. With a vehicle, we search for one that meets our visual and performance expectations but will also keep us and our family safe. Shouldn’t we likewise seek smart home technologies that are secure and protect our privacy?

I can’t tell you which solution will work for your specific case, but I can give you some pointers on technology security to help you do that research and determine which solution best meets your needs while keeping you secure. Most of these recommendations apply no matter what IoT product you’re looking to purchase, and I recommend taking the time to perform these basic product security research steps.

The first thing I recommend is to visit the vendor site and search to see what they have to say about their products’ security. Also, do they have a vulnerability disclosure program (VDP)? If an organization that manufactures and sells IoT technology doesn’t have much to say about their products’ security or an easy way for you or someone else to report a security issue, then I highly recommend you move on.

This would indicate that product security probably doesn’t matter to them as much as it should. I also say this to the product vendors out there: If you don’t take product security seriously enough to help educate us consumers on why your products are the best when it comes to security, then why should we buy your products?

Next, I always recommend searching the Common Vulnerability Exposure (CVE) database and the Internet for the product you’re looking to buy and/or the vendor’s name. The information you find is sometimes very telling in terms of how an organization handles security vulnerability disclosure and follow-up patching of their products.
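If you like, that lookup can even be scripted against the public NVD API. A minimal sketch (the keyword is just an example, and the API is rate-limited for unauthenticated use):

```python
import requests

def cve_count(keyword):
    """Count CVEs mentioning a vendor or product name via the public NVD API."""
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    r = requests.get(url, params={"keywordSearch": keyword}, timeout=30)
    r.raise_for_status()
    return r.json().get("totalResults", 0)

print(cve_count("smart garage door"))  # keyword is illustrative
```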

The existence of a vulnerability in an IoT product isn’t necessarily a bad thing; we’re always going to find vulnerabilities in IoT products. The question we’re looking to answer by doing this search is this: How does this vendor handle reported vulnerabilities? For example, do they patch them quickly, or does it take months (or years!) for them to react – or will they ultimately do nothing? If there is no vulnerability information published on a specific IoT product, it may be that no one has bothered to test the security of the product. It’s also possible that the vendor has silently patched their issues and never issued any CVEs.

It is unlikely, but not impossible, that a product will never contain a vulnerability. Over the years I’ve encountered products where I was unsuccessful in finding any issues; however, not being successful in finding vulnerabilities within a product doesn’t mean they couldn’t possibly exist.

Recently, I became curious to learn how vendors that produce and/or retrofit garage door openers stack up in terms of security, so I followed the research process discussed above. I took a look at multiple vendors to see, are any of them following my recommendations? The sad part is, practically none of them even mentioned the word “security” on their websites. One clear exception was Tuya, a global IoT hardware and IoT software-as-a-service (SaaS) organization.

When I examined the Tuya website, I quickly located their security page and it was full of useful information. On this page, Tuya points out their security policies, standards, and compliance. Along with having a VDP, they also run a bug bounty program. Bug bounty programs allow researchers to work with a vendor to report security issues – and get paid to do it. Tuya’s bug bounty information is located at the Tuya Security Response Center. Vendors take note: This is how an IoT product vendor should present themselves and their security program.

In closing, consumers, if you’re looking to spend your hard-earned money, please take the time to do some basic research to see if the vendor has a proactive security program. Also, vendors, remember that consumers are becoming more aware and concerned about product security. If you want your product to rise to the status of “best solution around,” I highly recommend you start taking product security seriously as well as share details and access to your security program for your business and products. This data will help consumers make more informed decisions on which product best meets their needs and expectations.

Young children’s ScratchJr coding projects: Assessment and support

Post Syndicated from Diana Kirby original https://www.raspberrypi.org/blog/childrens-scratchjr-projects-assessment-support/

Block-based programming applications like Scratch and ScratchJr provide millions of children with an introduction to programming; they are a fun and accessible way for beginners to explore programming concepts and start making with code. ScratchJr, in particular, is designed specifically for children between the ages of 5 and 7, enabling them to create their own interactive stories and games. So it’s no surprise that they are popular tools for primary-level (K–5) computing teachers and learners. But how can teachers assess coding projects built in ScratchJr, where the possibilities are many and children are invited to follow their imagination?

Aim Unahalekhala
Aim Unahalekhala

In the latest seminar of our series on computing education for primary-aged children, attendees heard about two research studies that explore the use of ScratchJr in K–2 education. The speaker, Apittha (Aim) Unahalekhala, is a graduate researcher at the DevTech Research Group at Tufts University. The two studies looked at assessing young children’s ScratchJr coding projects and understanding how they create projects. Both of the studies were part of the Coding as Another Language project, which sees computer science as a new literacy for the 21st century, and is developing a literacy-based coding curriculum for K–2.

How to evaluate children’s ScratchJr projects

ScratchJr offers children 28 blocks to choose from when creating a coding project. Some of these are simple, such as blocks that determine the look of a character or setting, while others are more complex, such as messaging blocks and loops. Children can combine the blocks in many different ways to create projects of different levels of complexity.

A child selects blocks for a ScratchJr project on a tablet.
Selecting blocks for a ScratchJr project

At the start of her presentation, Aim described a rubric that she and her colleagues at DevTech have developed to assess three key aspects of a ScratchJr coding project. These aspects are coding concepts, project design, and purposefulness.

  • Coding concepts in ScratchJr are sequencing, repeats, events, parallelism, coordination, and the number parameter
  • Project design includes elaboration (number of settings and characters, use of speech bubbles) and originality (character and background customisation, animated looks, sounds)

The rubric lets educators or researchers:

  • Assess learners’ ability to use their coding knowledge to create purposeful and creative ScratchJr projects
  • Identify the level of mastery of each of the three key aspects demonstrated within the project
  • Identify where learners might need more guidance and support
The elements covered by the ScratchJr project evaluation rubric.
The elements covered by the ScratchJr project evaluation rubric. Click to enlarge.

As part of the study, Aim and her colleagues collected coding projects from two schools at the start, middle, and end of a curriculum unit. They used the rubric to evaluate the coding projects and found that project scores increased over the course of the unit.

They also found that, overall, the scores for the project design elements were higher than those for coding concepts: many learners enjoyed spending lots of time designing their characters and settings, but made less use of other features. However, the two scores were correlated, meaning that learners who devoted a lot of time to the design of their project also got higher scores on coding concepts.

The rubric is a useful tool for any teachers using ScratchJr with their students. If you want to try it in your classroom, the validated rubric is free to download from the DevTech research group’s website.

How do young children create a project?

The rubric assesses the output created by a learner using ScratchJr. But learning is a process, not just an end outcome, and the final project might not always be an accurate reflection of a child’s understanding.

By understanding more about how young children create coding projects, we can improve teaching and curriculum design for early childhood computing education.

In the second study Aim presented, she set out to explore this question. She conducted a qualitative observation of children as they created coding projects at different stages of a curriculum unit, and used Google Analytics data to conduct a quantitative analysis of the steps the children took.

A ScratchJr project creation process involving iteration.
A project creation process involving iteration

Her findings highlighted the importance of encouraging young learners to explore the full variety of blocks available, both by guiding them in how to find and use different blocks, and by giving them the time and tools they need to explore on their own.

She also found that different teaching strategies are needed at different stages of the curriculum unit to support learners. This helps them to develop their understanding of both basic and advanced blocks, and to explore, customise, and iterate their projects.

Early-unit strategy:

  • Encourage free play to self-discover different functions, especially basic blocks

Mid-unit strategy:

  • Set plans for how long children will spend on customising versus coding
  • More guidance on the advanced blocks, then let children explore

End-of-unit strategy:

  • Provide multiple sessions for children to work on their projects
  • Promote iteration by encouraging children to keep improving code and adding details
Teaching strategies for different stages of a ScratchJr curriculum.
Teaching strategies for different stages of the curriculum

You can watch Aim’s full presentation here:

You can also access the seminar slides here.

Join our next seminar on primary computing education

At our next seminar, we welcome Aman Yadav (Michigan State University), who will present research on computational thinking in primary school. The session will take place online on Tuesday 7 November at 17:00 UK time. Don’t miss out and sign up now:

To find out more about connecting research to practice for primary computing education, you can find the rest of our upcoming monthly seminars on primary (K–5) teaching and learning and watch the recordings of previous seminars in this series.

The post Young children’s ScratchJr coding projects: Assessment and support appeared first on Raspberry Pi Foundation.

The Risks of Exposing DICOM Data to the Internet

Post Syndicated from Christiaan Beek original https://blog.rapid7.com/2023/10/11/the-risks-of-exposing-dicom-data-to-the-internet/

Introduction


Digital Imaging and Communications in Medicine (DICOM) is the international standard for the transmission, storage, retrieval, print, and display of medical images and related information. While DICOM has revolutionized the medical imaging industry, allowing for enhanced patient care through the easy exchange of imaging data, it also presents potential vulnerabilities when exposed to the open internet.

About five years ago, I was in the hospital while an ultrasound was taken of my pregnant wife. While the doctor took the images, a small message on the screen caught my attention: “writing image to disk – transfer DICOM.” Digging into the DICOM standard at the time, I was able to discover exposed systems on the internet, retrieve medical images, use demo software, and 3D-print a pelvis. An example of that research is still available online here. It’s now five years later, so I was curious to see if things had changed (and no worries—I will not 3D-print another body part 😉).

This article delves into the risks associated with the unintended exposure of DICOM data and the importance of safeguarding this data.

Understanding DICOM

DICOM is more than just an image format; it encompasses a suite of protocols that allow different medical imaging devices and systems, such as MRI machines, X-ray devices, and computer workstations, to communicate with each other. A typical DICOM file not only contains the image but also the associated metadata, which may have patient demographic information, clinical data, and sometimes even the patient’s full name, date of birth, and other personal identifiers.

What Are the Exposure Risks?

  1. Breach of Patient Confidentiality: The most pressing concern is the breach of patient confidentiality. If DICOM data is exposed online, there’s a high risk of unauthorized access to sensitive patient information. Such breaches have the potential to result in legal consequences, financial penalties, and damage to the reputations of medical institutions.
  2. Data Manipulation: An unprotected system might allow malicious entities not only to view but also to alter medical data. Such manipulations have the potential to lead to misdiagnoses, inappropriate treatments, or other medical errors.
  3. Ransomware Attacks: In recent years, healthcare institutions have become prime targets for ransomware attacks. Exposing DICOM data could potentially provide a gateway for cybercriminals to encrypt vital medical information and demand a ransom for its release.
  4. Data Loss: Without proper security measures, data could be accidentally or maliciously deleted, leading to loss of crucial medical records.
  5. Service Interruptions: Unprotected DICOM servers could be vulnerable to denial-of-service (DoS) attacks, disrupting medical services and interfering with patient care.

Research

While previously I focused on the imaging part of the protocol, this time I looked into the possibility of retrieving PII data* from openly exposed DICOM servers.

Using Sonar, Rapid7’s proprietary internet scan engine, a study was conducted to scan for DICOM ports exposed to the internet. Using the output of the scan, a simple Python script was created that took the discovered IP addresses as input and queried a basic set of DICOM descriptors from the “PATIENT” root level. The standard itself is very extensive and contains many fields that can be retrieved, including PII-related data such as name, date of birth, comments on the treatment, and many more.
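The original script isn’t published, but a minimal sketch of this kind of PATIENT-root query, using the open source pynetdicom library, might look like the following (the AE title and the set of queried fields are illustrative assumptions, not the study’s actual parameters):

```python
# pip install pynetdicom pydicom
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

def query_patient_root(ip, port=104):
    """Attempt an unauthenticated C-FIND at the PATIENT query level."""
    ae = AE(ae_title="RESEARCH")
    ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

    assoc = ae.associate(ip, port)
    if not assoc.is_established:
        return None                      # server refused the association

    ds = Dataset()
    ds.QueryRetrieveLevel = "PATIENT"
    ds.PatientName = "*"                 # wildcard: match any patient
    ds.PatientID = ""                    # empty fields ask the server to return them
    ds.PatientBirthDate = ""

    matches = []
    for status, identifier in assoc.send_c_find(
        ds, PatientRootQueryRetrieveInformationModelFind
    ):
        if status and status.Status in (0xFF00, 0xFF01):  # "pending" = a match
            matches.append(identifier)
    assoc.release()
    return matches
```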

Unfortunately, we were able to quickly retrieve sensitive patient information. No need for authentication; we received the information simply by requesting it. The following screenshot is an example of what we retrieved, with the PII altered for privacy purposes.

A screenshot of retrieved DICOM patient records (PII altered).

In some cases, we were able to get more details on the study and status of the patient:

A screenshot showing additional study details and patient status.

Importantly, our results not only discovered hospitals, but also private practice and veterinary clinics.

When scanning for systems connected to the internet, we focused on the two main TCP ports: port 104 and port 11112. We ignored TCP port 4242, since that is mostly used to send images. In total, we discovered more than 3600 systems that replied on these two ports.

Although it might be interesting to geolocate all of these systems, we believe it is better to first identify which systems are realistic candidates for data retrieval, and to geolocate those.

TCP port 104 stats

After retrieving the list of IP addresses that responded on the open port with a DICOM reply, we checked each address with a custom script that tested whether a connection could be established. The following diagram shows the results of this scan.

Connection test results for systems responding on TCP port 104.

In 45% of cases, the remote server was accepting a connection that could be used for retrieving information.
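As an illustration, a connection check of this kind can be built around a DICOM association and a C-ECHO, the DICOM equivalent of a ping. This is a minimal sketch using pynetdicom, not the actual script used in the study:

```python
from pynetdicom import AE
from pynetdicom.sop_class import Verification

def dicom_ping(ip, port):
    """Return True if a DICOM association can be established and C-ECHO succeeds."""
    ae = AE(ae_title="RESEARCH")
    ae.add_requested_context(Verification)
    ae.acse_timeout = 5                  # fail fast on unresponsive hosts

    assoc = ae.associate(ip, port)
    if not assoc.is_established:
        return False                     # TCP port open, but DICOM refused us
    status = assoc.send_c_echo()
    assoc.release()
    return bool(status) and status.Status == 0x0000  # 0x0000 = success
```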

TCP port 11112 stats

Next, we used the list of IP addresses that responded to a DICOM ping on TCP port 11112. Again we used our script to test whether a connection could be established. The diagram below shows the results of this particular scan.

Connection test results for systems responding on TCP port 11112.

Of the 1921 discovered systems responding to our DICOM connection verification script, 43% accepted a connection that could be used for retrieving data.

Now that we know how many systems accept connections that allow information retrieval, let’s map them out, where each orange-colored country is a country where systems were discovered:

A world map highlighting the countries where exposed DICOM systems were discovered.

Not much seems to have changed since my initial research in 2018; even searching for medical images using a fairly simple Google query results in the ability to download images from DICOM systems, including complete MRI sets. The image below showcases an innocent example from a veterinary clinic where an X-ray of an unfortunate pet was made.

An X-ray of a pet, downloaded from a veterinary clinic’s DICOM system.

Conclusion

While DICOM has proven invaluable in the world of medical imaging, its exposure to the internet poses significant risks. Healthcare institutions are prime targets for threat actors; these risks therefore have detrimental implications for patients’ healthcare services and consumer trust, and they cause legal and financial damage to healthcare providers.

It’s essential for healthcare institutions to recognize these risks and implement robust measures to protect both patient data and their reputations. As the cyber landscape continues to evolve, so too must the defenses that guard against potential threats. Healthcare organizations should make it a part of their business strategy to regularly scan their exposure to the internet and institute robust protections against potential risks.

*Note: Where possible, Rapid7 used its connections with national CERTs to inform them of our findings. All data that was discovered has been securely removed from the researcher’s system.

Little Crumbs Can Lead To Giants

Post Syndicated from Christiaan Beek original https://blog.rapid7.com/2023/10/05/little-crumbs-can-lead-to-giants/


This week is the Virus Bulletin Conference in London. Part of the conference is the Cyber Threat Alliance summit, where CTA members like Rapid7 showcase their research into all kinds of cyber threats and techniques.

Traditionally, when we investigate a campaign, the focus is mostly on the code of the file, the inner workings of the malware, and communication with threat actor-controlled infrastructure. Having a background in forensics, and in particular data forensics, I’m always interested in new ways of looking at and investigating data. New techniques can help proactively track, detect, and hunt for artifacts.

In this blog, which highlights my presentation at the conference, I will dive into the world of Shell Link (LNK) files and Virtual Hard Disk (VHD) files. As part of this research, Rapid7 is releasing a new Velociraptor feature that can parse LNK files, available as of the publication of this blog.

VHD files

VHD and its successor VHDX are formats representing a virtual hard disk. They can contain contents usually found on a physical hard drive, such as disk partitions and files. They are typically used as the hard disk of a virtual machine, are built into modern versions of Windows, and are the native file format for Microsoft’s hypervisor, Hyper-V. The format was created by Connectix for their Virtual PC product, known as Microsoft Virtual PC since Microsoft acquired Connectix in 2003. As we will see later, the word “conectix” is still part of the footer of a VHD file.

Why would threat actors use VHD files in their campaigns? Microsoft has a security technology called Mark of the Web (MOTW). When files are downloaded from the internet using Windows, they are marked with a hidden Zone.Identifier NTFS Alternate Data Stream (ADS) carrying a particular value known as the MOTW. MOTW-tagged files are restricted and unable to carry out specific operations. Windows Defender SmartScreen, which compares files with an allowlist of well-known executables, will process executables marked with the MOTW. SmartScreen will stop the execution of the file if it is unknown or untrusted and will alert the user not to run it. Since VHD files are virtual hard disks, they can contain files and folders. When files are inside a VHD container, they do not receive the MOTW and so bypass these security restrictions.
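You can inspect the MOTW yourself: on Windows, NTFS exposes the Zone.Identifier stream through a “filename:stream” path. A minimal sketch (the file path is illustrative):

```python
# Windows-only sketch: read a file's Mark of the Web, if present.
def read_motw(path):
    try:
        with open(path + ":Zone.Identifier", "r") as f:
            return f.read()   # typically "[ZoneTransfer]\nZoneId=3" (3 = Internet zone)
    except (FileNotFoundError, OSError):
        return None           # no MOTW, e.g. for a file opened from inside a mounted VHD

print(read_motw("C:\\Users\\demo\\Downloads\\sample.pdf"))  # path is illustrative
```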

Depending on the underlying operating system, the file system inside the VHD can be FAT or NTFS. The great thing about that is that traditional file system forensics can be applied: think of Master File Table (MFT) analysis, header/footer analysis, and data carving, to name a few.

Example case:

In the past, we investigated a case where a threat actor was using a VHD file as part of their campaign. The flow of the campaign demonstrates how this attack worked:

A diagram of the campaign flow, from spear-phishing email to C2 connection.

After sending a spear-phishing email with a VHD file attached, the actor relied on the victim opening the VHD file, which auto-mounts in Windows. The MOTW is thereby bypassed, and a PDF file with a backdoor is opened that downloads either the Sednit or Zebrocy malware. The backdoor then establishes a connection with the command-and-control (C2) server controlled by the threat actor.

After retrieving the VHD file, we first mount it as read-only so that we cannot change anything about the digital evidence. Secondly, the Master File Table (MFT) is retrieved and analyzed:

[Screenshot: MFT entries recovered from the mounted VHD]

Besides valuable information like creation and last-modification times (always take into consideration that these can be altered on purpose), the MFT showed that two of the files were copied from a system into the VHD file. Another interesting discovery was that the VHD disk contained a RECYCLE.BIN folder holding deleted files. That’s great news: depending on the file size of the VHD (the bigger it is, the better the chance that files have not been overwritten), it is possible to retrieve these deleted files by using a technique called “data carving.”

Using PhotoRec as the data carving tool, the VHD file is again mounted read-only and the tool is pointed at the mounted volume to attempt to recover the deleted files.

[Screenshot: PhotoRec recovering deleted files from the mounted VHD]

After running for a short while, the deleted files could be retrieved and used as part of the investigation. Since their contents are not relevant for this blog, we continue with the footer analysis.

Footer analysis of a VHD file

The footer, which is often referred to as the trailer, is an addition to the original header that is appended to the end of a file. It is a data structure that resembles a header.

Because a footer by definition comes after the data, which is typically of variable length, it is never located at a fixed offset from the beginning of a file (unless the data is always the same size); instead, it sits at a fixed distance from the end of the file. Similar to headers, footers often have a defined size, and a parsing application can use a footer’s identification field or magic number, like a header’s, to distinguish it from other data structures in the file.

When we look at the footer of the VHD file, certain interesting fields can be observed:

[Screenshot: fields of the VHD file footer]

These fields are a few examples of the data structures that are specified for the footer of a VHD file; there are also other values, like the type of disk, that can be valuable when comparing multiple campaigns by an actor.

From the screenshot, we can see that “conectix” is the magic value of a VHD footer; you can think of it as a small fingerprint. From the other values, we can determine that the actor used a Windows operating system, and from the hex value of the timestamp field we can derive the creation time of the VHD file.
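To illustrate, here is a minimal sketch of a footer parser; the field offsets follow Microsoft’s published VHD image format specification:

    import struct
    from datetime import datetime, timedelta, timezone

    def parse_vhd_footer(path):
        with open(path, "rb") as f:
            f.seek(-512, 2)                 # the footer occupies the last 512 bytes
            footer = f.read(512)

        if footer[0:8] != b"conectix":      # the magic cookie discussed above
            raise ValueError("not a VHD footer")

        # The timestamp is big-endian seconds since 2000-01-01 00:00:00 UTC
        seconds, = struct.unpack(">I", footer[24:28])
        created = datetime(2000, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=seconds)

        return {
            "created": created,
            "creator_host_os": footer[36:40],   # b"Wi2k" means created on Windows
            "disk_type": struct.unpack(">I", footer[60:64])[0],  # 2=fixed, 3=dynamic, 4=differencing
            "unique_id": footer[68:84].hex(),
        }

The creator host OS, disk type, and unique ID are exactly the kind of stable, actor-specific crumbs that hunting logic can key on.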

From a threat hunting or tracking perspective, these values can be very useful. In the below example, a Yara rule was written that identifies the file as a VHD and matches the serial number of the hard drive used by the actor:

[Screenshot: Yara rule matching the VHD magic value and the actor’s disk serial number]
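The actual rule is shown in the screenshot above; as a hedged reconstruction of the idea (with placeholder bytes standing in for the actor’s disk identifier), the shape is roughly the following, here compiled with the yara-python bindings:

    import yara  # pip install yara-python

    rule_source = r"""
    rule VHD_actor_disk_id
    {
        strings:
            $magic = "conectix"                    // VHD footer cookie
            $disk  = { 11 22 33 44 55 66 77 88 }   // placeholder disk identifier bytes
        condition:
            $magic and $disk
    }
    """

    rules = yara.compile(source=rule_source)
    print(rules.match(filepath="sample.vhd"))      # hypothetical sample to scan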

Shell link files (LNK), aka Shortcut files

A Shell link, also known as a shortcut, is a data object that houses information used to reach another data object. Windows files with the “LNK” extension are in a format known as the Shell Link Binary File Format. Shell links can also be used by programs that require the capacity to store a reference to a destination file, and they are frequently used to facilitate application launching and linking scenarios, such as Object Linking and Embedding (OLE).

LNK files are massively abused in cybercrime campaigns to download next-stage payloads or to carry code hidden in certain data fields. The data structure specification of LNK files mentions that they store various information, including “optional data” in the “extra data” sections. That is an interesting area to focus on.

Below is a summarized overview of the Extra Data structure:

[Diagram: summarized overview of the LNK Extra Data structure]

The LinkInfo header contains interesting data on the type of drive used, but more importantly it contains the serial number of the hard drive that the actor used when creating the LNK file:

[Screenshot: LinkInfo structure exposing the drive SerialNumber]
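Extracting that serial number yourself is straightforward. Here is a minimal sketch that follows the public MS-SHLLINK layout; error handling and the many optional structures are omitted:

    import struct

    def lnk_drive_serial(path):
        data = open(path, "rb").read()
        assert data[0:4] == b"\x4c\x00\x00\x00"        # ShellLinkHeader size marker
        flags, = struct.unpack("<I", data[20:24])      # LinkFlags
        pos = 76                                       # fixed header size
        if flags & 0x01:                               # HasLinkTargetIDList
            idlist_size, = struct.unpack("<H", data[pos:pos + 2])
            pos += 2 + idlist_size
        if not flags & 0x02:                           # HasLinkInfo
            return None
        li_flags, = struct.unpack("<I", data[pos + 8:pos + 12])
        if li_flags & 0x01:                            # VolumeIDAndLocalBasePath
            vid, = struct.unpack("<I", data[pos + 12:pos + 16])   # VolumeIDOffset
            serial, = struct.unpack("<I", data[pos + vid + 8:pos + vid + 12])
            return f"{serial:08X}"                     # DriveSerialNumber
        return None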

Other interesting information can be found as well; for example, the value describing the icon used by this file contains an interesting string:

[Screenshot: the icon-related value containing an interesting string]

Combining again that information, a simple Yara rule can be written for this particular LNK file which might have been used in multiple campaigns:

[Screenshot: Yara rule written for this particular LNK file]

One last example is to look for the ‘Droids’ values in the Extra Data sections. Droids stands for Digital Record Object Identification. There are two values present in the example file:

[Screenshot: the two Droid values in the Extra Data sections]

The value in these fields translates to the MAC address of the attacker’s system… yes, you read this correctly and may close your open mouth now…

[Screenshot: Droid value resolving to the MAC address of the attacker’s system]
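The reason this works: the droid values are version-1 UUIDs, and the node portion of a v1 UUID is the MAC address of the machine that generated it. A quick sketch, with a placeholder UUID, of recovering the address:

    import uuid

    droid = uuid.UUID("b0f2777e-63f1-11ee-8c99-0242ac120002")  # placeholder droid value
    if droid.version == 1:
        mac = droid.node   # the node field of a v1 UUID is the generating host's MAC
        print(":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8)))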

This too can be used to build upon the previous LNK Yara rule: replace the “.\\3.jpg” part with the MAC address value to hunt for LNK files that were created on the particular device with that MAC address.

In a recent campaign called “Raspberry Robin”, LNK files were used to distribute the malware. Analyzing the LNK files and using the above investigation technique, the following Yara rule was created:

[Screenshot: Yara rule created for the Raspberry Robin LNK files]

Velociraptor LNK parser

Based on our research into LNK files, an updated LNK parser was developed by Matt Green from Rapid7 for Velociraptor, our advanced open-source endpoint monitoring, digital forensics, and cyber response platform.

With the parser, multiple LNK files can be processed and information can be extracted to use as input for Yara rules, which can then be pushed back into the platform for hunting.

[Screenshot: the Velociraptor LNK parser in action]

Windows.Forensics.Lnk parses LNK shortcut files using Velociraptor’s built-in binary parser. The artifact outputs fields aligned to Microsoft’s MS-SHLLINK specification, plus some analysis hints to assist review or detection use cases. Users have the option to search for specific indicators in key fields with regex, or to control the definitions of suspicious items to bubble up during parsing.

Some of the default targeted suspicious attributes include:

  • Large size
  • Startup path location for auto execution
  • Environment variable script — an environment variable with a common script configured to execute
  • No target, with an environment-variable-only execution
  • Suspicious argument size — large arguments, over 250 characters by default
  • Arguments have ticks — ticks are common in malicious LNK files
  • Arguments have environment variables — environment variables are common in malicious LNKs
  • Arguments have rare characters — look for specific rare characters that may indicate obfuscation
  • Arguments have leading spaces — malicious LNK files may have many leading spaces to evade some tools
  • Arguments have http strings — LNKs are regularly used as a download cradle
  • Suspicious arguments — some common malicious arguments observed in the field
  • Suspicious trackerdata hostname
  • Hostname mismatch with trackerdata hostname

Due to the use of Velociraptor’s binary parser, the artifact is significantly faster than other analysis tools. It can be deployed as part of analysis or at scale as a hunting function using the IOCRegex and/or SuspiciousOnly flag.

Summary

It is worth investigating the characteristics of file types that we tend to skip when analyzing threat actor campaigns. In this blog I provided a few examples of how artifacts can be retrieved from VHD and LNK files and then used to create hunting logic. As a result of this research, Rapid7 is happy to release a new LNK parser feature in Velociraptor, and we welcome any feedback.

Birthday Week recap: everything we announced — plus an AI-powered opportunity for startups

Post Syndicated from Dina Kozlov original http://blog.cloudflare.com/birthday-week-2023-wrap-up/


This year, Cloudflare officially became a teenager, turning 13 years old. We celebrated this milestone with a series of announcements that benefit both our customers and the Internet community.

From developing applications in the age of AI to securing against the most advanced attacks that are yet to come, Cloudflare is proud to provide the tools that help our customers stay one step ahead.

We hope you’ve had a great time following along. For anyone looking for a recap of everything we launched this week, here it is:

Monday

  • Switching to Cloudflare can cut emissions by up to 96%
    Switching enterprise network services from on-prem to Cloudflare can cut related carbon emissions by up to 96%.
  • Cloudflare Trace
    Use Cloudflare Trace to see which rules and settings are invoked when an HTTP request for your site goes through our network.
  • Cloudflare Fonts
    Enhance privacy and performance for websites using Google Fonts by loading fonts directly from the Cloudflare network.
  • How Cloudflare intelligently routes traffic
    Technical deep dive that explains how Cloudflare uses machine learning to intelligently route traffic through our vast network.
  • Low Latency Live Streaming
    Cloudflare Stream’s LL-HLS support is now in open beta. You can deliver video to your audience faster, reducing the latency a viewer may experience on their player to as little as 3 seconds.
  • Account permissions for all
    Cloudflare account permissions are now available to all customers, not just Enterprise. In addition, we’ll show you how you can use them and best practices.
  • Incident Alerts
    Customers can subscribe to Cloudflare Incident Alerts and choose when to get notified based on affected products and level of impact.

Tuesday

  • Welcome to the connectivity cloud
    Cloudflare is the world’s first connectivity cloud — the modern way to connect and protect your cloud, networks, applications and users.
  • Amazon’s $2bn IPv4 tax — and how you can avoid paying it
    Amazon will begin taxing their customers $43 for IPv4 addresses, so Cloudflare will give those $43 back in the form of credits to bypass that tax.
  • Sippy
    Minimize egress fees by using Sippy to incrementally migrate your data from AWS to R2.
  • Cloudflare Images
    All Image Resizing features will be available under Cloudflare Images and we’re simplifying pricing to make it more predictable and reliable.
  • Traffic anomalies and notifications with Cloudflare Radar
    Cloudflare Radar will be publishing anomalous traffic events for countries and Autonomous Systems (ASes).
  • Detecting Internet outages
    Deep dive into how Cloudflare detects Internet outages, the challenges that come with it, and our approach to overcome these problems.

Wednesday

  • The best place on Region: Earth for inference
    Now available: Workers AI, a serverless GPU cloud for AI; Vectorize, so you can build your own vector databases; and AI Gateway to help manage costs and observability of your AI applications. Cloudflare delivers the best infrastructure for next-gen AI applications, supported by partnerships with NVIDIA, Microsoft, Hugging Face, Databricks, and Meta.
  • Workers AI
    Launching Workers AI — AI inference as a service platform, empowering developers to run AI models with just a few lines of code, all powered by our global network of GPUs.
  • Partnering with Hugging Face
    Cloudflare is partnering with Hugging Face to make AI models more accessible and affordable to users.
  • Vectorize
    Cloudflare’s vector database, designed to allow engineers to build full-stack, AI-powered applications entirely on Cloudflare’s global network — available in Beta.
  • AI Gateway
    AI Gateway helps developers have greater control and visibility in their AI apps, so that you can focus on building without worrying about observability, reliability, and scaling. AI Gateway handles the things that nearly all AI applications need, saving you engineering time so you can focus on what you’re building.
  • You can now use WebGPU in Cloudflare Workers
    Developers can now use WebGPU in Cloudflare Workers. Learn more about why WebGPUs are important, why we’re offering them to customers, and what’s next.
  • What AI companies are building with Cloudflare
    Many AI companies are using Cloudflare to build next generation applications. Learn more about what they’re building and how Cloudflare is helping them on their journey.
  • Writing poems using LLama 2 on Workers AI
    Want to write a poem using AI? Learn how to run your own AI chatbot in 14 lines of code, running on Cloudflare’s global network.

Thursday

  • Hyperdrive
    Cloudflare launches a new product, Hyperdrive, that makes existing regional databases much faster by dramatically speeding up queries that are made from Cloudflare Workers.
  • D1 Open Beta
    D1 is now in open beta, and the theme is “scale”: with higher per-database storage limits and the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1.
  • Pages Build Caching
    Build cache is a feature designed to reduce your build times by caching and reusing previously computed project components — now available in Beta.
  • Running serverless Puppeteer with Workers and Durable Objects
    Introducing the Browser Rendering API, which enables developers to utilize the Puppeteer browser automation library within Workers, eliminating the need for serverless browser automation system setup and maintenance.
  • Cloudflare partners with Microsoft to power their Edge Secure Network
    We partnered with Microsoft Edge to provide a fast and secure VPN, right in the browser. Users don’t have to install anything new or understand complex concepts to get the latest in network-level privacy: Edge Secure Network VPN is available on the latest consumer version of Microsoft Edge in most markets, and automatically comes with 5GB of data.
  • Re-introducing the Cloudflare Workers playground
    We are revamping the playground that demonstrates the power of Workers, along with new development tooling, and the ability to share your playground code and deploy instantly to Cloudflare’s global network.
  • Cloudflare integrations marketplace expands
    Introducing the newest additions to Cloudflare’s Integration Marketplace. Now available: Sentry, Momento and Turso.
  • A Socket API that works across JavaScript runtimes — announcing a WinterCG spec and polyfill for connect()
    Engineers from Cloudflare and Vercel have published a draft specification of the connect() sockets API for review by the community, along with a Node.js compatible polyfill for the connect() API that developers can start using.
  • New Workers pricing
    Announcing new pricing for Cloudflare Workers, where you are billed based on CPU time, and never for the idle time that your Worker spends waiting on network requests and other I/O.

Friday

  • Post-Quantum Cryptography goes GA
    Cloudflare is rolling out post-quantum cryptography support to customers, services, and internal systems to proactively protect against advanced attacks.
  • Encrypted Client Hello
    Announcing a contribution that helps improve privacy for everyone on the Internet. Encrypted Client Hello, a new standard that prevents networks from snooping on which websites a user is visiting, is now available on all Cloudflare plans.
  • Email Retro Scan
    Cloudflare customers can now scan messages within their Office 365 inboxes for threats. The Retro Scan will let you look back seven days to see what threats your current email security tool has missed.
  • Turnstile is Generally Available
    Turnstile, Cloudflare’s CAPTCHA replacement, is now generally available, free to everyone, and includes unlimited use.
  • AI crawler bots
    Any Cloudflare user, on any plan, can choose specific categories of bots that they want to allow or block, including AI crawlers. We are also recommending a new standard to robots.txt that will make it easier for websites to clearly direct how AI bots can and can’t crawl.
  • Detecting zero-days before zero-day
    Deep dive into Cloudflare’s approach and ongoing research into detecting novel web attack vectors in our WAF before they are seen by a security researcher.
  • Privacy Preserving Metrics
    Deep dive into the fundamental concepts behind the Distributed Aggregation Protocol (DAP), with examples of how we’ve implemented it into Daphne, our open source aggregator server.
  • Post-quantum cryptography to origin
    We are rolling out post-quantum cryptography support for outbound connections to origins and Cloudflare Workers fetch() calls. Learn more about what we enabled, how we rolled it out in a safe manner, and how you can add support to your origin server today.
  • Network performance update
    Cloudflare’s updated benchmark results regarding network performance plus a dive into the tools and processes that we use to monitor and improve our network performance.

One More Thing


When Cloudflare turned 12 last year, we announced the Workers Launchpad Funding Program – you can think of it as a startup accelerator program for companies building on Cloudflare’s Developer Platform, with no restrictions on your size, stage, or geography.

A refresher on how the Launchpad works: Each quarter, we admit a group of startups who then get access to a wide range of technical advice, mentorship, and fundraising opportunities. That includes our Founders Bootcamp, Open Office Hours with our Solution Architects, and Demo Day. Those who are ready to fundraise will also be connected to our community of 40+ leading global Venture Capital firms.

In exchange, we just ask for your honest feedback. We want to know what works, what doesn’t, and what you need us to build for you. We don’t ask for a stake in your company, and we don’t ask you to pay to be a part of the program.


Over the past year, we’ve received applications from nearly 60 different countries. We’ve had a chance to work closely with 50 amazing early and growth-stage startups admitted into the first two cohorts, and have grown our VC partner community to 40+ firms and more than $2 billion in potential investments in startups building on Cloudflare.

Next up: Cohort #3! Between recently wrapping up Cohort #2 (check out their Demo Day!), celebrating the Launchpad’s 1st birthday, and the heaps of announcements we made last week, we thought that everyone could use a little extra time to catch up on all the news – which is why we are extending the deadline for Cohort #3 a few weeks to October 13, 2023. AND we’re reserving 5 spots in the class for those who are already using any of last Wednesday’s AI announcements. Just be sure to mention what you’re using in your application.

So once you’ve had a chance to check out the announcements and pour yourself a cup of coffee, check out the Workers Launchpad. Applying is a breeze — you’ll be done long before your coffee gets cold.

Until next time

That’s all for Birthday Week 2023. We hope you enjoyed the ride, and we’ll see you at our next innovation week!


Post-quantum cryptography goes GA

Post Syndicated from Wesley Evans original http://blog.cloudflare.com/post-quantum-cryptography-ga/


Over the last twelve months, we have been talking about the new baseline of encryption on the Internet: post-quantum cryptography. During Birthday Week last year we announced that our beta of Kyber was available for testing, and that Cloudflare Tunnel could be enabled with post-quantum cryptography. Earlier this year, we made our stance clear that this foundational technology should be available to everyone for free, forever.

Today, we have hit a milestone six years and 31 blog posts in the making: we’re starting to roll out General Availability[1] of post-quantum cryptography support to our customers, services, and internal systems, as described more fully below. This includes products like Pingora for origin connectivity, 1.1.1.1, R2, Argo Smart Routing, Snippets, and many more.

This is a milestone for the Internet. We don't yet know when quantum computers will have enough scale to break today's cryptography, but the benefits of upgrading to post-quantum cryptography now are clear. Fast connections and future-proofed security are possible today because of the advances made by Cloudflare, Google, Mozilla, the National Institute of Standards and Technology in the United States, the Internet Engineering Task Force, and numerous academic institutions.

[Diagram: the three connections involved in serving a request — browser to Cloudflare (1), within Cloudflare’s network (2), and Cloudflare to the origin (3)]

What does General Availability mean? In October 2022 we enabled X25519+Kyber as a beta for all websites and APIs served through Cloudflare. However, it takes two to tango: the connection is only secured if the browser also supports post-quantum cryptography. Starting August 2023, Chrome is slowly enabling X25519+Kyber by default.

That covers the first leg, the connection (1) between the browser and Cloudflare. From there, the user’s request is routed through Cloudflare’s network (2). We have upgraded many of these internal connections to use post-quantum cryptography, and expect to be done upgrading all of our internal connections by the end of 2024. That leaves the connection (3) between us and the origin server as the final link.

We are happy to announce that we are rolling out support for X25519+Kyber as Generally Available for most inbound and outbound connections, including connections to origin servers and Cloudflare Workers fetch() calls.

Support for post-quantum outbound connections, by plan:

  • Free — Roll-out started; aiming for 100% by the end of October.
  • Pro and Business — Aiming for 100% by the end of the year.
  • Enterprise — Roll-out begins February 2024, reaching 100% by March 2024.

For our Enterprise customers, we will be sending out additional information regularly over the course of the next six months to help prepare you for the roll-out. Pro, Business, and Enterprise customers can skip the roll-out and opt in for your zone today, or opt out ahead of time using an API described in our companion blog post. Before rolling out for Enterprise in February 2024, we will add a toggle on the dashboard to opt out.

If you're excited to get started now, check out our blog with the technical details and flip on post-quantum cryptography support via the API!

What’s included and what is next?

With an upgrade of this magnitude, we wanted to focus on our most used products first and then expand outward to cover our edge cases. This process has led us to include the following products and systems in this roll out:

1.1.1.1
AMP
API Gateway
Argo Smart Routing
Auto Minify
Automatic Platform Optimization
Automatic Signed Exchange
Cloudflare Egress
Cloudflare Images
Cloudflare Rulesets
Cloudflare Snippets
Cloudflare Tunnel
Custom Error Pages
Flow Based Monitoring
Health checks
Hermes
Host Head Checker
Magic Firewall
Magic Network Monitoring
Network Error Logging
Project Flame
Quicksilver
R2 Storage
Request Tracer
Rocket Loader
Speed on Cloudflare Dash
SSL/TLS
Traffic Manager
WAF, Managed Rules
Waiting Room
Web Analytics

If a product or service you use is not listed here, we have not started rolling out post-quantum cryptography to it yet. We are actively working on rolling out post-quantum cryptography to all products and services including our Zero Trust products. Until we have achieved post-quantum cryptography support in all of our systems, we will publish an update blog in every Innovation Week that covers which products we have rolled out post-quantum cryptography to, the products that will be getting it next, and what is still on the horizon.

Products we are working on bringing post-quantum cryptography support to soon:

Cloudflare Gateway
Cloudflare DNS
Cloudflare Load Balancer
Cloudflare Access
Always Online
Zaraz
Logging
D1
Cloudflare Workers
Cloudflare WARP
Bot Management

Why now?

As we announced earlier this year, post-quantum cryptography will be included for free in all Cloudflare products and services that can support it. The best encryption technology should be accessible to everyone – free of charge – to help support privacy and human rights globally.

As we mentioned in March:

“What was once an experimental frontier has turned into the underlying fabric of modern society. It runs in our most critical infrastructure like power systems, hospitals, airports, and banks. We trust it with our most precious memories. We trust it with our secrets. That’s why the Internet needs to be private by default. It needs to be secure by default.”

Our work on post-quantum cryptography is driven by the thesis that quantum computers able to break conventional cryptography create a problem similar to the Year 2000 bug. We know there is going to be a problem in the future that could have catastrophic consequences for users, businesses, and even nation states. The difference this time is that we don’t know the date and time when this break in the computational paradigm will occur. Worse, any traffic captured today could be decrypted in the future. We need to prepare today to be ready for this threat.

We are excited for everyone to adopt post-quantum cryptography into their systems. To follow the latest developments of our deployment of post-quantum cryptography and third-party client/server support, check out pq.cloudflareresearch.com and keep an eye on this blog.

***

[1] We are using a preliminary version of Kyber, NIST’s pick for post-quantum key agreement. Kyber has not been finalized. We expect a final standard to be published in 2024 under the name ML-KEM, which we will then adopt promptly while deprecating support for X25519Kyber768Draft00.

Fake Update Utilizes New IDAT Loader To Execute StealC and Lumma Infostealers

Post Syndicated from Natalie Zargarov original https://blog.rapid7.com/2023/08/31/fake-update-utilizes-new-idat-loader-to-execute-stealc-and-lumma-infostealers/


Technical Analysis by: Thomas Elkins, Natalie Zargarov
Contributions: Evan McCann, Tyler McGraw

Recently, Rapid7 observed the fake browser update lure tricking users into executing malicious binaries. While analyzing the dropped binaries, Rapid7 determined that a new loader is being used to execute infostealers, including StealC and Lumma, on compromised systems.

The IDAT loader is a new, sophisticated loader that Rapid7 first spotted in July 2023. In earlier versions, the loader was disguised as a 7-zip installer that delivered the SecTop RAT. Rapid7 has now observed the loader being used to deliver infostealers like StealC, Lumma, and Amadey. It implements several evasion techniques, including Process Doppelgänging, DLL Search Order Hijacking, and Heaven’s Gate. The IDAT loader got its name because the threat actor stores the malicious payload in the IDAT chunk of a PNG file.

Prior to this technique, Rapid7 observed the threat actors behind the lure utilizing malicious JavaScript files to either reach out to command-and-control (C2) servers or drop the NetSupport Remote Access Trojan (RAT).

The following analysis covers the entire attack flow, which starts from the SocGholish malware and ends with the stolen information in threat actors’ hands.

Technical Analysis

Threat actors (TAs) often stage their attacks in ways that security tools will not detect and that security researchers will have a hard time investigating.

Figure 1 – Attack Flow

Stage 1 – SocGholish

First observed in the wild as early as 2018, SocGholish was attributed to TA569. Mainly recognized for its initial infection method characterized as “drive-by” downloads, this attack technique involves the injection of malicious JavaScript into compromised yet otherwise legitimate websites. When an unsuspecting individual receives an email with a link to a compromised website and clicks on it, the injected JavaScript will activate as soon as the browser loads the page.

The injected JavaScript investigated by Rapid7 loads an additional JavaScript that will access the final URL when all the following browser conditions are met:

  • The access originated from the Windows OS
  • The access originated from an external source
  • Cookie checks are passed
Figure 2 – Obfuscated JavaScript Embedded in the Compromised Domain

The user is then served a prompt that falsely presents itself as a browser update, with an added layer of credibility coming from the fact that it appears to originate from the intended domain.

Figure 3 – Pop-up Prompting the User to Update their Browser

Once the user interacts with the “Update Chrome” button, the browser is redirected to another URL where a binary automatically downloads to the user’s default download folder. After the user double clicks the fake update binary, it will proceed to download the next stage payload. In this investigation, Rapid7 identified a binary called ChromeSetup.exe, the file name widely used in previous SocGholish attacks.

Stage 2 – MSI Downloader

ChromeSetup.exe downloads and executes the Microsoft Software Installer (MSI) package from: hxxps://ocmtancmi2c5t[.]xyz/82z2fn2afo/b3/update[.]msi.

In similar investigations, Rapid7 observed that the initial dropper executable appearance and file name may vary depending on the user’s browser when visiting the compromised web page. In all instances, the executables contained invalid signatures and attempted to download and install an MSI package.

Rapid7 determined that the MSI package executed with several switches intended to avoid detection:

  • /qn to avoid an installation UI
  • /quiet to prevent user interaction
  • /norestart to prevent the system from restarting during the infection process

When executed, the MSI dropper will write a legitimate VMWareHostOpen.exe executable, multiple legitimate dependencies, and the malicious Dynamic-Link Library (DLL) file vmtools.dll. It will also drop an encrypted vmo.log file, which has a PNG file structure and is later decrypted by the malicious DLL.

Rapid7 spotted an additional version of the attack where the MSI dropped a legitimate pythonw.exe, legitimate dependencies, and the malicious DLL file python311.dll. In that case, the encrypted file was named pz.log, though the execution flow remains the same.

Figure 4 – Content of vmo.log

Stage 3 – Decryptor

When executed, the legitimate VMWareHostOpen.exe loads the malicious vmtools.dll from the same directory from which VMWareHostOpen.exe was executed. This technique is known as DLL Search Order Hijacking.

During the execution of vmtools.dll, Rapid7 observed that the DLL resolves API functions from kernel32.dll and ntdll.dll using API hashing and maps them to memory. After the API functions are mapped to memory, the DLL reads the hex string 83 59 EB ED 50 60 E8 and decrypts it using a bitwise XOR operation with the key F5 34 84 C3 3C 0F 8F, revealing the string vmo.log. The name mimics the Vmo\log directory where VMware stores its logs.
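The decryption is a plain repeating-key XOR and is easy to reproduce; the two hex strings below are the ones quoted above:

    data = bytes.fromhex("8359EBED5060E8")   # encrypted string from the DLL
    key  = bytes.fromhex("F53484C33C0F8F")   # XOR key from the DLL
    print(bytes(b ^ key[i % len(key)] for i, b in enumerate(data)))  # b'vmo.log'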

The DLL then reads the contents of vmo.log into memory and searches for the string …IDAT. It takes the 4 bytes following …IDAT and compares them to the hex values C6 A5 79 EA. If they match, the DLL proceeds to copy all the contents following …IDAT into memory.

Figure 5 – Function Searching for Hex Values C6 A5 79 EA

Once all the data is copied into memory, the DLL attempts to decrypt the copied data using the bitwise XOR operation with key F4 B4 07 9A. Upon additional analysis of other samples, Rapid7 determined that the XOR keys were always stored as 4 bytes following the hex string C6 A5 79 EA.

Figure 6 – XOR Keys found within PNG Files pz.log and vmo.log
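Putting the marker, tag, and key together, the extraction logic can be sketched as follows; the exact offsets of the key and ciphertext relative to the marker reflect our reading of the samples:

    def extract_idat_payload(png_bytes: bytes) -> bytes:
        idx = png_bytes.find(b"IDAT")
        if idx == -1:
            raise ValueError("no IDAT chunk found")
        if png_bytes[idx + 4:idx + 8] != bytes.fromhex("C6A579EA"):
            raise ValueError("marker bytes not present")
        key = png_bytes[idx + 8:idx + 12]        # per-sample 4-byte XOR key
        encrypted = png_bytes[idx + 12:]
        return bytes(b ^ key[i % 4] for i, b in enumerate(encrypted))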

Once the DLL decrypts the data in memory, it is decompressed using the RtlDecompressBuffer function. The parameters passed to the function include:

  • Compression format
  • Size of compressed data
  • Size of compressed buffer
  • Size of uncompressed data
  • Size of uncompressed buffer
Figure 7 – Parameters passed to RTLDecompressBuffer function

The vmtools.dll DLL utilizes the compression algorithm LZNT1 in order to decompress the decrypted data from the vmo.log file.
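For reference, the same API can be driven from Python via ctypes when reproducing the decompression in a lab (Windows only; the output buffer size here is an arbitrary guess, not a value taken from the malware):

    import ctypes

    COMPRESSION_FORMAT_LZNT1 = 0x0002

    def lznt1_decompress(blob: bytes, out_size: int = 0x100000) -> bytes:
        out = ctypes.create_string_buffer(out_size)
        final_size = ctypes.c_ulong(0)
        status = ctypes.windll.ntdll.RtlDecompressBuffer(
            ctypes.c_ushort(COMPRESSION_FORMAT_LZNT1),
            out, ctypes.c_ulong(out_size),
            blob, ctypes.c_ulong(len(blob)),
            ctypes.byref(final_size))
        if status != 0:                  # any non-zero NTSTATUS is a failure here
            raise OSError(f"RtlDecompressBuffer failed with NTSTATUS {status:#x}")
        return out.raw[:final_size.value]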

After the data is decompressed, the DLL loads mshtml.dll into memory and overwrites its .text section with the decompressed code. After the overwrite, vmtools.dll calls the decompressed code.

Stage 4 – IDAT Injector

Similarly to vmtools.dll, IDAT loader uses dynamic imports. The IDAT injector then expands the %APPDATA% environment variable by using the ExpandEnvironmentStringsW API call. It creates a new folder under %APPDATA%, naming it based on the QueryPerformanceCounter API call output and randomizing its value.

All the files dropped by the MSI are copied to the newly created folder. The IDAT injector then creates a new instance of VMWareHostOpen.exe from %APPDATA% by using CreateProcessW, and exits.

The second instance of VMWareHostOpen.exe behaves the same up until the stage where the IDAT injector code is called from the mshtml.dll memory space. The injector immediately starts implementing the Heaven’s Gate evasion technique, which it uses for most API calls until the loading of the infostealer is completed.

Heaven’s Gate is widely used by threat actors to evade security tools. It refers to a method for executing 64-bit code within a 32-bit process, or vice versa. This is accomplished by initiating a call or jump through a reserved segment selector. The key indicators of this technique in our case are the switch of the process mode from 32-bit to 64-bit, which requires specifying the selector 0x0033, followed by the execution of a far call or far jump, as shown in Figure 8.


Figure 8 – Heaven’s Gate technique implementation

The IDAT injector then expands the %TEMP% environment variable by using the ExpandEnvironmentStringsW API call. It creates a string based on the QueryPerformanceCounter API call output and randomizes its value.

Next, the IDAT loader gets the computer name by calling the GetComputerNameW API and randomizes the output by using the rand and srand API calls. It uses that randomized value as the name of a new environment variable, set via SetEnvironmentVariableW to the combination of the %TEMP% path and the previously created random string.

Figure 9 – New Environment variable – TCBEDOPKVDTUFUSOCPTRQFD set to %TEMP%\89680228

Next, a new cmd.exe process is executed by the loader. The loader then creates and writes to the %TEMP%\89680228 file.

Next, the IDAT injector injects code into the cmd.exe process by using the NtCreateSection + NtMapViewOfSection code injection technique. Using this technique, the malware:

  • Creates a new memory section inside the remote process by using the NtCreateSection API call
  • Maps a view of the newly created section to the local malicious process with RW protection by using NtMapViewOfSection API call
  • Maps a view of the previously created section to a remote target process with RX protection by using NtMapViewOfSection API call
  • Fills the view mapped in the local process with shellcode by using NtWriteVirtualMemory API call
  • In our case, suspends the main thread of the cmd.exe process by using the NtSuspendThread API call and then resumes the thread by using the NtResumeThread API call

After completing the injection, the second instance of VMWareHostOpen.exe exits.

Stage 5 – IDAT Loader:

The injected loader code implements the Heaven’s Gate evasion technique in exactly the same way as the IDAT injector did. It retrieves the TCBEDOPKVDTUFUSOCPTRQFD environment variable and reads the %TEMP%\89680228 file data into memory. The data is then XORed, byte by byte, with the 3D ED C0 D3 key.

The decrypted data appears to contain configuration data, including the process into which the infostealer should be loaded, which API calls should be dynamically resolved, additional code, and more. The loader then deletes the initial malicious DLL (vmtools.dll) by using DeleteFileW. Finally, the loader injects the infostealer code into the explorer.exe process by using the Process Doppelgänging injection technique.

The Process Doppelgänging method utilizes the Transactional NTFS feature within the Windows operating system. This feature is designed to ensure data integrity in the event of unexpected errors. For instance, when an application needs to write or modify a file, there’s a risk of data corruption if an error occurs during the write process. To prevent such issues, an application can open the file in a transactional mode to perform the modification and then commit the modification, thereby preventing any potential corruption. The modification either succeeds entirely or does not commence.

Process Doppelgänging exploits this feature to replace a legitimate file with a malicious one, leading to a process injection. The malicious file is created within a transaction, then committed to the legitimate file, and subsequently executed. The Process Doppelgänging in our sample was performed by:

  • Initiating a transaction by using NtCreateTransaction API call
  • Creating a new file by using NtCreateFile API call
  • Writing to the new file by using NtWriteFile API call
  • Writing malicious code into a section of the local process using NtCreateSection API call
  • Discarding the transaction by using NtRollbackTransaction API call
  • Running a new instance of explorer.exe process by using NtCreateProcessEx API call
  • Running the malicious code inside explorer.exe process by using NtCreateThreadEx API call

If the file created within a transaction is rolled back (instead of committed), but the file section was already mapped into the process memory, the process injection will still be performed.

The final payload injected into the explorer.exe process was identified by Rapid7 as Lumma Stealer.

Figure 10 – Process Tree

Throughout the whole attack flow, the malware delays execution by using NtDelayExecution, a technique that is usually used to escape sandboxes.

As previously mentioned, Rapid7 has investigated several IDAT loader samples. The main differences were:

  1. The legitimate software that loads the malicious DLL.
  2. The name of the staging directory created within %APPDATA%.
  3. The process the IDAT injector injects the loader code into.
  4. The process into which the infostealer/RAT is loaded.

Rapid7 has observed the IDAT loader being used to load the StealC, Lumma, and Amadey infostealers, as well as the SecTop RAT.
Figure 11 – Part of an HTTP POST request to a StealC C2 domain
Figure 12 – An HTTP POST request to a Lumma Stealer C2 domain

Conclusion

The IDAT loader is a new, sophisticated loader that utilizes multiple evasion techniques in order to execute various commodity malware, including infostealers and RATs. The threat actors behind the fake update campaign have been packaging the IDAT loader into DLLs that are loaded by legitimate programs such as VMWareHost, Python, and Windows Defender.

Rapid7 Customers

For Rapid7 MDR and InsightIDR customers, the following Attacker Behavior Analytics (ABA) rules are currently deployed and alerting on the activity described in this blog:

  • Attacker Technique – MSIExec loading object via HTTP
  • Suspicious Process – FSUtil Zeroing Out a File
  • Suspicious Process – Users Script Spawns Cmd And Redirects Output To Temp File
  • Suspicious Process – Possible Dropper Script Executed From Users Downloads Directory
  • Suspicious Process – WScript Runs JavaScript File from Temp Or Download Directory

MITRE ATT&CK Techniques:

  • Initial Access — Drive-by Compromise (T1189): SocGholish uses the drive-by compromise technique to target users’ web browsers.
  • Defense Evasion — System Binary Proxy Execution: Msiexec (T1218.007): The ChromeSetup.exe downloader (C9094685AE4851FD5A5B886B73C7B07EFD9B47EA0BDAE3F823D035CF1B3B9E48) downloads and executes a .msi file.
  • Execution — User Execution: Malicious File (T1204.002): Update.msi (53C3982F452E570DB6599E004D196A8A3B8399C9D484F78CDB481C2703138D47) drops and executes VMWareHostOpen.exe.
  • Defense Evasion — Hijack Execution Flow: DLL Search Order Hijacking (T1574.001): VMWareHostOpen.exe loads a malicious vmtools.dll (931D78C733C6287CEC991659ED16513862BFC6F5E42B74A8A82E4FA6C8A3FE06).
  • Defense Evasion — Deobfuscate/Decode Files or Information (T1140): vmtools.dll (931D78C733C6287CEC991659ED16513862BFC6F5E42B74A8A82E4FA6C8A3FE06) decrypts the vmo.log file (51CEE2DE0EBE01E75AFDEFFE29D48CB4D413D471766420C8B8F9AB08C59977D7).
  • Defense Evasion — Masquerading (T1036): The vmo.log file (51CEE2DE0EBE01E75AFDEFFE29D48CB4D413D471766420C8B8F9AB08C59977D7) is masqueraded as a .png file.
  • Execution — Native API (T1106): The IDAT injector and IDAT loader use the Heaven’s Gate technique to evade detection.
  • Defense Evasion — Process Injection (T1055): The IDAT injector implements the NtCreateSection + NtMapViewOfSection code injection technique to inject into the cmd.exe process.
  • Defense Evasion — Process Injection: Process Doppelgänging (T1055.013): The IDAT loader implements the Process Doppelgänging technique to load the infostealer.
  • Defense Evasion — Virtualization/Sandbox Evasion: Time Based Evasion (T1497.003): Execution delays are performed at several stages throughout the attack flow.

IOCs

IOC SHA-256 Notes
InstaIIer.exe A0319E612DE3B7E6FBB4B71AA7398266791E50DA0AE373C5870C3DCAA51ABCCF MSI downloader
ChromeSetup.exe C9094685AE4851FD5A5B886B73C7B07EFD9B47EA0BDAE3F823D035CF1B3B9E48 MSI downloader
MlcrоsоftЕdgеSеtuр.exe 3BF4B365D61C1E9807D20E71375627450B8FEA1635CB6DDB85F2956E8F6B3EC3 MSI downloader
update.msi 53C3982F452E570DB6599E004D196A8A3B8399C9D484F78CDB481C2703138D47 MSI dropper, dropped pythonw.exe, python311.dll and pz.log files
update.msi D19C166D0846DDAF1A6D5DBD62C93ACB91956627E47E4E3CBD79F3DFB3E0F002 MSI dropper, dropped VMWareHostOpen.exe, vmtools.dll and vmo.log files
DirectX12AdvancedSupport.msi B287C0BC239B434B90EEF01BCBD00FF48192B7CBEB540E568B8CDCDC26F90959 MSI dropper, dropped MpCopyAccelerator.exe, MpClient.dll, and virginium.flac file
python311.dll BE8EB5359185BAA8E456A554A091EC54C8828BB2499FE332E9ECD65639C9A75B Malicious dll loaded by pythonw.exe
vmtools.dll 931D78C733C6287CEC991659ED16513862BFC6F5E42B74A8A82E4FA6C8A3FE06 Malicious dll loaded by VMWareHostOpen.exe
MpClient.dll 5F57537D18ADCC1142294D7C469F565F359D5FF148E93A15CCBCEB5CA3390DBD Malicious dll loaded by MpCopyAccelerator.exe
vmo.log 51CEE2DE0EBE01E75AFDEFFE29D48CB4D413D471766420C8B8F9AB08C59977D7 Encrypted payload decrypted by vmtools.dll
pz.log 8CE0901A5CF2D3014AAA89D5B5B68666DA0D42D2294A2F2B7E3A275025B35B79 Encrypted payload decrypted by python311.dll
virginium.flac B3D8BC93A96C992099D768BEB42202B48A7FE4C9A1E3B391EFBEEB1549EF5039 Encrypted payload decrypted by MpClient.dll
ocmtancmi2c5t[.]xyz Host of the MSI package
lazagrc3cnk[.]xyz Host of the MSI package
omdowqind[.]site Domain that facilitated download of the MSI downloader
weomfewnfnu[.]site Domain that facilitated download of the MSI downloader
winextrabonus[.]life Domain that facilitated download of the MSI downloader
bgobgogimrihehmxerreg[.]site Domain that facilitated download of the MSI downloader
pshkjg[.]db[.]files[.]1drv[.]com Domain that facilitated download of the MSI downloader
ooinonqnbdqnjdnqwqkdn[.]space Domain that facilitated download of the MSI downloader
hello-world-broken-dust-1f1c[.]brewasigfi1978[.]workers[.]dev Domain that facilitated download of the MSI downloader
doorblu[.]xyz C&C server
costexcise[.]xyz C&C server
buyerbrand[.]xyz C&C server
94.228.169[.]55 C&C server
gapi-node[.]io C&C server
gstatic-node[.]io C&C server


Poorly Purged Medical Devices Present Security Concerns After Sale on Secondary Market

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2023/08/02/security-implications-improper-deacquisition-medical-infusion-pumps/


In a post-pandemic landscape, the interconnectedness of cybersecurity is front and center. Few could say that they were not at least aware of, if not directly affected by, the downstream effects of major breaches whose impacts are felt across economies; look at the disruptions in the global supply chain as a case in point.

So the concept of security that goes from the cradle to the grave is more than just an industry buzz phrase; it is a critical component of securing networks, applications, and devices.

Sadly, in too many cases, cradle-to-grave security was either not considered at conception or outright ignored. And as a new report released today by Rapid7 principal researcher Deral Heiland points out, even when organizations are able to take steps to mitigate concerns at the grave portion of the life cycle, they don’t.

In Security Implications from Improper De-acquisition of Medical Infusion Pumps, Heiland performs a physical and technical teardown of more than a dozen medical infusion pumps — devices used to deliver and control fluids directly into a patient’s body. Each of these devices was available for purchase on the secondary market, and each one had issues that could compromise its previous organization’s network.

The reason these devices pose such a risk is a lack of (or lax) process for de-acquisitioning them before they are sold on sites like eBay. In at least eight of the 13 devices used in the study, WiFi PSK access credentials were discovered, offering attackers potential access to health organization networks.

In the report, Heiland calls for systemic changes to policies and procedures for both the acquisition and de-acquisition of these devices. The policies must define ownership and governance of these devices from the moment they enter the building to the moment they are sold on the secondary market. The processes should detail how data should be purged from these devices (and by extension, many others). In the cases of medical devices that are leased, contractual agreements on the purging process and expectations should be made before acquisition.

The ultimate finding is that properly disposing of sensitive information on these devices should be a priority. Purging them of data should not (and in many cases is not) terribly difficult. The issue lies with process and responsibility for the protection of information stored in those devices. And that is a major component of the cradle to grave security concept.

If you would like to read the report it is available here.

Old Blackmoon Trojan, NEW Monetization Approach

Post Syndicated from Natalie Zargarov original https://blog.rapid7.com/2023/07/13/old-blackmoon-trojan-new-monetization-approach/


Rapid7 is tracking a new, more sophisticated and staged campaign using the Blackmoon trojan, which appears to have originated in November 2022. The campaign is actively targeting various businesses, primarily in the USA and Canada. However, it is not used to steal credentials; instead, it implements different evasion and persistence techniques to drop several unwanted programs and stay in victims’ environments for as long as possible.

Blackmoon, also known as KRBanker, is a banking trojan first spotted in late September 2015 targeting banks in the Republic of Korea. Back then, it was using a “pharming” technique to steal credentials from targeted victims. This technique involves redirecting traffic to a forged website when a user attempts to access one of the banking sites targeted by the cyber criminals. The fake server masquerades as the original site and urges visitors to submit their information and credentials.


Stage 1 – Blackmoon

The Blackmoon trojan was named after a debug string, “blackmoon,” that is present in its code:

Blackmoon string found inside malware’s code

Blackmoon drops a DLL named RunDllExe.dll into the C:\Windows\Logs folder and implements the Port Monitors persistence technique. Port Monitors is related to the Windows Print Spooler service, spoolsv.exe. When adding a printer port monitor, a user (or, in our case, the attacker) has the ability to register an arbitrary DLL that acts as the monitor. There are two ways to add a port monitor: via the registry, for persistence, or via the AddMonitor API call, for immediate DLL execution.

Our sample implements both: it calls the AddMonitor API to immediately execute RunDllExe.dll:

AddMonitorA API call

It also sets the Driver value in the HKLM\SYSTEM\CurrentControlSet\Control\Print\Monitors\RunDllExe registry key to the malicious DLL path.

Driver value set under monitors registry key
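Defenders can sweep for this persistence mechanism by listing the registered port monitors. A minimal sketch (Windows only) that prints each monitor’s Driver value for review:

    import winreg

    MONITORS = r"SYSTEM\CurrentControlSet\Control\Print\Monitors"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, MONITORS) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):      # number of subkeys
            name = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, name) as monitor:
                try:
                    driver, _ = winreg.QueryValueEx(monitor, "Driver")
                except FileNotFoundError:
                    continue
                # Legitimate monitors normally reference a bare DLL name loaded
                # from System32; a full path such as C:\Windows\Logs\RunDllExe.dll
                # is worth investigating.
                print(f"{name}: {driver}")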

Next, the malware adds a shutdown system privilege to the Spooler service by adding SeShutdownPrivilege to the RequiredPrivileges value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Spooler registry key.

RequiredPrivileges data before and after the update

The malware disables Windows Defender by setting HKLM\SOFTWARE\Policies\Microsoft\Windows Defender\DisableAntiSpyware value to “1”.

It also stops and disables the “Lanman” service (the service that allows a computer to share files and printers with other devices on the network).

To block all incoming RPC and SMB communication, the malware executes the following set of commands:

netsh ipsec static add policy name=Block
netsh ipsec static add filterlist name=Filter1
netsh ipsec static add filter filterlist=Filter1 srcaddr=any dstaddr=Me dstport=135 protocol=TCP
netsh ipsec static add filter filterlist=Filter1 srcaddr=any dstaddr=Me dstport=135 protocol=UDP
netsh ipsec static add filter filterlist=Filter1 srcaddr=any dstaddr=Me dstport=139 protocol=TCP
netsh ipsec static add filter filterlist=Filter1 srcaddr=any dstaddr=Me dstport=139 protocol=UDP
netsh ipsec static add filter filterlist=Filter1 srcaddr=any dstaddr=Me dstport=445 protocol=TCP
netsh ipsec static add filter filterlist=Filter1 srcaddr=any dstaddr=Me dstport=445 protocol=UDP
netsh ipsec static add filteraction name=FilteraAtion1 action=block
netsh ipsec static add rule name=Rule1 policy=Block filterlist=Filter1 filteraction=FilteraAtion1
netsh ipsec static set policy name=Block assign=y

The malware sets two additional values under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_staters: Work and Mining, both set to “1”.

Next, the malware checks if one of the following services exists on the victim computer:

  • clr_optimization_v3.0.50727_32
  • clr_optimization_v3.0.50727_64
  • WinHelpsvcs
  • Services
  • Help Service
  • KuGouMusic
  • WinDefender
  • Msubridge
  • ChromeUpdater
  • MicrosoftMysql
  • MicrosoftMssql
  • Conhost
  • MicrosotMaims
  • MicrosotMais

If one of these services is found, it will be disabled (by setting the “Start” value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\servicename to “4”) or deleted by using the DeleteService API call.

The malware enumerates running processes by using a combination of the CreateToolhelp32Snapshot, Process32First, and Process32Next API calls in order to terminate the service’s process if one is running.

Finally, a PowerShell command is executed to delete the running process’s file, and the malware exits.

Stage 2 – RunDllExe.dll – injector

RunDllExe.dll is executed by the Spooler service and is responsible for injecting the next-stage payload into a newly executed svchost.exe process. The malware implements the Process Hollowing injection technique. The injected code is a C++ file downloader.

Stage 3 – File Downloader

The downloader first checks whether the ‘Work’ and ‘Mining’ values exist and are set under the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_staters registry key; if the values do not exist, it will create them and set both to “1”.

Next, the downloader checks whether the files it needs are already present on the PC (by using the PathFileExistsA API call). If not, the malware sleeps for two minutes before every download and then uses the URLDownloadToFileA API call to download the following files:

  • C:\WINDOWS\Temp\MpMgSvc.dll
  • C:\WINDOWS\Temp\Hooks.exe
  • C:\WINDOWS\Temp\MpMgSvc.exe
  • C:\Windows\Microsoft.NET\Framework\v3.0\WmiPrvSER.exe

After the download, all but MpMgSvc.dll are executed by the downloader:

Execution tree

Stage 4 – Hook.exe – dropper

Hook.exe drops an additional DLL to the user’s roaming folder, C:\Users\Username\AppData\Roaming\GraphicsPerfSvcs.dll, and creates a new service named GraphicsPerfSvcs, which will be automatically executed at system startup. The service name is almost identical to that of the legitimate GraphicsPerfSvc service, which belongs to the Graphics performance monitor. Naming services and files similarly to those belonging to the OS is an evasion technique widely used by threat actors.

Malicious Service under the legitimate one

The dropper starts the created service. It then creates and executes a .vbs script responsible for deleting Hook.exe and the .vbs file itself:

Created .vbs

Stage 4.1 – MpMgSvc.exe – spreader

MpMgSvc.exe first creates a new \BaseNamedObjects\Brute_2022 mutex. Being responsible for spreading the malware, it drops Doublepulsar-1.3.1.exe, Eternalblue-2.2.0.exe, Eternalromance-1.4.0.exe, and all the libraries these files require into the C:\Windows\Temp folder.

It then scans the network for PCs with ports 3306, 445, or 1433 open. If any open ports are found, the spreader will attempt to install a backdoor by using EternalBlue and send shellcode to inject a DLL with DoublePulsar, as implemented in the Eternal-Pulsar GitHub project.

Eternal-Pulsar commands in spreader memory
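As a quick aid for defenders, the sweep can be approximated with a few lines of Python to check whether a host exposes the same ports the spreader targets (the address below is from the documentation range):

    import socket

    def open_ports(host, ports=(445, 1433, 3306), timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:    # 0 means the TCP connect succeeded
                    found.append(port)
        return found

    print(open_ports("192.0.2.10"))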

Two DLLs are dropped, one for the x64 architecture and one for x86. When injected by DoublePulsar, the DLL downloads the first-stage Blackmoon malware and follows the same execution stages described in this analysis.

Stage 4.2 – WmiPrvSER.exe – XMRig miner

WmiPrvSER.exe is a classic XMRig Monero miner. Our sample is XMRig version 6.18, and it creates a BaseNamedObjects\Win__Host mutex on the victim’s host. You can find a full report on XMRig here.

Stage 5 – GraphicsPerfSvcs service – dropper

As mentioned in the previous stage, the GraphicsPerfSvcs service is started automatically at system startup. Every time it runs, it checks whether the following two files exist:

  • C:\Windows\TEMP\ctfmoon.exe
  • C:\Windows\Microsoft.NET\traffmonetizer\Traffmonetizer.exe

If they are not found, it will drop both files and all the DLLs needed for their execution.

The dropper also creates two new firewall rules that allow all outbound connections from the dropped files by executing the following commands:

  • netsh advfirewall firewall add rule name=ctfmoon dir=out program=C:\Windows\Microsoft.NET\ctfmoon.exe action=allow
  • netsh advfirewall firewall add rule name=traffmonetizer dir=out program=C:\Windows\Microsoft.NET\traffmonetizer\traffmonetizer.exe action=allow
Ctfmoon.exe firewall rule creation

The service stays up and constantly attempts to read from the URL hxxp://down.ftp21[.]cc/Update.txt. At the time of the analysis this URL was down, so we were not able to observe its content. However, judging from the service code, it reads the URL content and checks whether it contains one of the following commands:

[Delete File], [Kill Proccess], or [Delete Service], which will delete a file, kill a process, or delete a service accordingly.

Stage 6 – Ctfmoon.exe and Traffmonetizer.exe – Traffic Stealers

The GraphicsPerfSvcs service executes two dropped files, Ctfmoon.exe and Traffmonetizer.exe, both of which appear to be potentially unwanted programs (PUPs) in the form of traffic stealers. Both use the “network bandwidth sharing” monetization scheme to make “passive income”.

Ctfmoon.exe is a CLI version of the Iproyal Pawns application. It takes the user’s email address and password as execution parameters so that the activity, and the money it earns, is credited to the given account. GraphicsPerfSvcs executes the following command line to start Iproyal Pawns: ctfmoon.exe [email protected] -password=123456Aa. -device-name=Win32 -accept-tos

We can see that the user mentioned in our execution parameters has already made $169:

Iproyal Pawns earnings from our sample

Traffmonetizer.exe is similar to Ctfmoon.exe and was created by Traffmonetizer. It reads the user account data from a settings.json file dropped in the user’s roaming directory. Our .json file contains the following content:

{"Token":"1gUgURMzQiuGFgttIdjeZBS0G6fqFlVvhCKlqzfHd3o=","StartWithWindows":false,"Accepting":true}.

Conclusion

The analysis in this blog reveals the effort threat actors put into the attack flow: they combine several evasion and persistence techniques and take multiple approaches to monetize the victim’s resources and increase their income.

MITRE ATT&CK Techniques:

  • Persistence – Boot or Logon Autostart Execution: Port Monitors (T1547.010): The Blackmoon trojan (a95737adb2cd7b1af2291d143200a82d8d32a868c64fb4acc542608f56a0aeda) uses the Port Monitors technique to establish persistence on the target host.
  • Persistence – Create or Modify System Process: Windows Service (T1543.003): The Hook.exe dropper (1A7A4B5E7C645316A6AD59E26054A95654615219CC03657D6834C9DA7219E99F) creates a new service to establish persistence on the target host.
  • Defense Evasion – Process Injection: Process Hollowing (T1055.012): The dll dropped by Blackmoon (F5D508C816E485E05DF5F58450D623DC6BFA35A2A0682C238286D82B4B476FBB) uses the process hollowing technique to evade endpoint security detection.
  • Defense Evasion – Impair Defenses: Disable or Modify Tools (T1562.001): The Blackmoon trojan (a95737adb2cd7b1af2291d143200a82d8d32a868c64fb4acc542608f56a0aeda) disables Windows Defender to evade endpoint security detection.
  • Lateral Movement – Exploitation of Remote Services (T1210): The MpMgSvc.exe spreader (72B0DA797EA4FC76BA4DB6AD131056257965DF9B2BCF26CE2189AF3DBEC5B1FC) uses EternalBlue and DoublePulsar to spread in the organization’s environment.
  • Discovery – Network Share Discovery (T1135): The MpMgSvc.exe spreader (72B0DA797EA4FC76BA4DB6AD131056257965DF9B2BCF26CE2189AF3DBEC5B1FC) scans the network to discover open SMB ports.
  • Impact – Resource Hijacking (T1496): The XMRig miner (ECC5A64D97D4ADB41ED9332E4C0F5DC7DC02A64A77817438D27FC31C69F7C1D3), the Iproyal Pawns traffic stealer (FDD762192D351CEA051C0170840F1D8D171F334F06313A17EBA97CACB5F1E6E1), and the Traffmonetizer traffic stealer (2923EACD0C99A2D385F7C989882B7CCA83BFF133ECF176FDB411F8D17E7EF265) are executed to use the victim’s resources.
  • Impact – Service Stop (T1489): The Blackmoon trojan (a95737adb2cd7b1af2291d143200a82d8d32a868c64fb4acc542608f56a0aeda) stops update and security product services.
  • Command and Control – Application Layer Protocol: Web Protocols (T1071.001): The downloader (E9A83C8811E7D7A6BF7EA7A656041BCD689687F8B23FA7655B28A8053F67BE99) downloads next-stage payloads over HTTP. The GraphicsPerfSvcs service (5AF88DBDC7F53BA359DDC47C3BCAF3F5FE9BDE83211A6FF98556AF7E38CDA72B) uses HTTP to get commands from the C&C server.

IOC’s

File name – SHA-256 – Description

  • 445.exe – a95737adb2cd7b1af2291d143200a82d8d32a868c64fb4acc542608f56a0aeda – Blackmoon Trojan
  • RunDllExe.dll – F5D508C816E485E05DF5F58450D623DC6BFA35A2A0682C238286D82B4B476FBB – Injector
  • Injected code – E9A83C8811E7D7A6BF7EA7A656041BCD689687F8B23FA7655B28A8053F67BE99 – Downloader
  • MpMgSvc.dll – E9BD4A9C6EA27033BCB696E65D7441DC2D42CD7F9F02084B5C704316F0A4FDDF
  • Hooks.exe – 1A7A4B5E7C645316A6AD59E26054A95654615219CC03657D6834C9DA7219E99F – Dropper
  • MpMgSvc.exe – 72B0DA797EA4FC76BA4DB6AD131056257965DF9B2BCF26CE2189AF3DBEC5B1FC – Spreader
  • WmiPrvSER.exe – ECC5A64D97D4ADB41ED9332E4C0F5DC7DC02A64A77817438D27FC31C69F7C1D3 – XMRig
  • GraphicsPerfSvcs.dll – 5AF88DBDC7F53BA359DDC47C3BCAF3F5FE9BDE83211A6FF98556AF7E38CDA72B – Dropper
  • Doublepulsar-1.3.1.exe – 15FFBB8D382CD2FF7B0BD4C87A7C0BFFD1541C2FE86865AF445123BC0B770D13 – Shellcode installer
  • Eternalblue-2.2.0.exe – 85B936960FBE5100C170B777E1647CE9F0F01E3AB9742DFC23F37CB0825B30B5 – Exploit
  • Eternalromance-1.4.0.exe – B99C3CC1ACBB085C9A895A8C3510F6DAAF31F0D2D9CCB8477C7FB7119376F57B – Exploit
  • X64.dll – 275A9A7B99F3474CBF8A61964A6022E3CF7BAF76E0EE2FBA31A708D8F1E25BD0 – Shellcode
  • X86.dll – F247A48D3ECDBDF91FCD7A2D8728ADAAF06149586ADDE62DE7212C6DE645AD58 – Shellcode
  • Ctfmoon.exe – FDD762192D351CEA051C0170840F1D8D171F334F06313A17EBA97CACB5F1E6E1 – Iproyal Pawns
  • Traffmonetizer.exe – 2923EACD0C99A2D385F7C989882B7CCA83BFF133ECF176FDB411F8D17E7EF265 – Traffmonetizer
  • [email protected] – Iproyal Pawns account
  • 1gUgURMzQiuGFgttIdjeZBS0G6fqFlVvhCKlqzfHd3o= – Traffmonetizer token
  • hxxp://down.ftp21[.]cc – C&C server

Apply for a free UK teacher’s place at the WiPSCE conference

Post Syndicated from Bonnie Sheppard original https://www.raspberrypi.org/blog/free-uk-teacher-places-wipsce-conference-2023/

From 27 to 29 September 2023, we and the University of Cambridge are hosting the WiPSCE International Workshop on Primary and Secondary Computing Education Research for educators and researchers. This year, the annual conference will take place at Robinson College in Cambridge. We’re inviting all UK-based teachers of computing subjects to apply for one of five ‘all expenses paid’ places at this well-regarded event.

Educators and researchers mingle at a conference.

You could attend WiPSCE with all expenses paid

WiPSCE is where teachers and researchers discuss research that’s relevant to teaching and learning in primary and secondary computing education, to teacher training, and to related topics. You can find more information about the conference, including the preliminary programme, at wipsce.org.

As a teacher at the conference, you will:

  • Engage with high-quality international research in the field where you teach
  • Learn ways to use that research to develop your own classroom practice
  • Find out how to become an advocate in your professional community for research-informed approaches to the teaching of computing.

We are delighted that, thanks to generous funding, we can offer five free places to UK computing teachers, covering:

  • The registration fee
  • Two nights’ accommodation at Robinson College
  • Up to £500 supply costs paid to your school to cover your teaching
  • Up to £100 travel costs

The application deadline is Wednesday 19 July.

The application details

To be eligible to apply:

  1. You need to be a currently practising, UK-based teacher of Computing (England), Computing Science (Scotland), ICT or Digital Technologies (N. Ireland), or Computer Science (Wales)
  2. Your headteacher needs to be able to provide written confirmation that they are happy for you to attend WiPSCE
  3. You need to be available to attend the whole conference from Wednesday lunchtime to Friday afternoon
  4. You need to be willing to share what you learn from the conference with your colleagues at school and with your broader teaching community, including through writing an article about your experience and its relevance to your teaching for this blog or Hello World magazine

The application form will ask you for:

  • Your name and contact details
  • Demographic and school information
  • Your teaching experience
  • A statement of up to 500 words on why you’re applying and how you think your teaching practice, your school and your colleagues will benefit from your attendance at WiPSCE (500 words is the maximum; feel free to be concise)

After the 19 July deadline, we’re aiming to inform you of the outcome of your application on Friday 21 July. 

Your application will be reviewed by the 2023 WiPSCE Chairs, Sue and Mareen, who will:

  • Use the information you share in your form, particularly in your statement
  • Select applicants from a mix of primary and secondary schools, with a mix of years of computing teaching experience, and from a mix of geographic areas

Join us in strengthening research-informed computing classroom practice

We’d be delighted to receive your application. Being able to facilitate teachers’ attendance at the conference is very much aligned with our approach to research. Both at the Foundation and the Raspberry Pi Computing Education Research Centre, we’re committed to conducting research that’s directly relevant to schools and teachers, and to working in close collaboration with teachers.

We hope you are interested in attending WiPSCE and becoming an advocate for research-informed computing education practice. If your application is unsuccessful, we hope you consider coming along anyway. We’re looking forward to meeting you there. In the meantime, you can keep up with WiPSCE news on Twitter.

The post Apply for a free UK teacher’s place at the WiPSCE conference appeared first on Raspberry Pi Foundation.

Running a workshop with teachers to create culturally relevant Computing lessons

Post Syndicated from Katharine Childs original https://www.raspberrypi.org/blog/research-teacher-workshop-culturally-relevant-computing-lessons/

Who chooses to study Computing? In England, data from GCSE and A level Computer Science entries in 2019 shows that the answer is complex. Black Caribbean students were one of the most underrepresented groups in the subject, while pupils from other ethnic backgrounds, such as White British, Chinese, and Asian Indian, were well-represented. This picture is reflected in the STEM workforce in England, where Black people are also underrepresented.

Two young girls, one of them with a hijab, do a Scratch coding activity together at a desktop computer.

That’s why one of our areas of academic research aims to support Computing teachers to use culturally relevant pedagogy to design and deliver equitable learning experiences that enable all learners to enjoy and succeed in Computing and Computer Science at school. Our previous research projects within this area have involved developing guidelines for culturally relevant and responsive teaching, and exploring how a small group of primary and secondary Computing teachers used these guidelines in their teaching.

A tree symbolising culturally relevant pedagogy, with the roots labeled 'curriculum', the trunk labeled 'teaching approaches', and the crown labeled 'learning materials'.
Learning materials, teaching approaches, and the curriculum as a whole are three areas where cultural relevance is important.

In our latest research study, funded by Cognizant, we worked with 13 primary school teachers in England on adapting computing lessons to incorporate culturally relevant and responsive principles and practices. Here’s an insight into the workshop we ran with them, and what the teachers and we have taken away from it.

Adapting lesson materials based on culturally relevant pedagogy

In the group of 13 England-based primary school Computing teachers we worked with for this study:

  • One third were specialist primary Computing teachers, and the other two thirds were class teachers who taught a range of subjects
  • Some acted as Computing subject lead or coordinator at their school
  • Most had taught Computing for between three and five years 
  • The majority worked in urban areas of England, at schools with culturally diverse catchment areas 

In November 2022, we held a one-day workshop with the teachers to introduce culturally relevant pedagogy and explore how to adapt two six-week units of computing resources.

An example of a collaborative activity from a teacher-focused workshop around culturally relevant pedagogy.
An example of a collaborative activity from the workshop

The first part of the workshop was a collaborative, discussion-based professional development session exploring what culturally relevant pedagogy is. This type of pedagogy uses equitable teaching practices to:

  • Draw on the breadth of learners’ experiences and cultural knowledge
  • Facilitate projects that have personal meaning for learners
  • Develop learners’ critical consciousness

The rest of the workshop day was spent putting this learning into practice while planning how to adapt two units of computing lessons to make them culturally relevant for the teachers’ particular settings. We used a design-based approach for this part of the workshop, meaning researchers and teachers worked collaboratively as equal stakeholders to decide on plans for how to alter the units.

We worked in four groups, each with three or four teachers and one or two researchers, focusing on one of two units of work from The Computing Curriculum for teaching digital skills: a unit on photo editing for Year 4 (ages 8–9), and a unit about vector graphics for Year 5 (ages 9–10).

Descriptions of a classroom unit of teaching materials about photo editing for Year 4 (ages 8–9), and a unit about vector graphics for Year 5 (ages 9–10).
We based the workshop around two Computing Curriculum units that cover digital literacy skills.

In order to plan how the resources in these units of work could be made culturally relevant for the participating teachers’ contexts, the groups used a checklist of ten areas of opportunity. This checklist is a result of one of our previous research projects on culturally relevant pedagogy. Each group used the list to identify a variety of ways in which the units’ learning objectives, activities, learning materials, and slides could be adapted. Teachers noted down their ideas and then discussed them with their group to jointly agree a plan for adapting the unit.

By the end of the day, the groups had designed four really creative plans for:

  • A Year 4 unit on photo editing that included creating an animal to represent cultural identity
  • A Year 4 unit on photo editing that included creating a collage all about yourself 
  • A Year 5 unit on vector graphics that guided learners to create their own metaverse and then add it to the class multiverse
  • A Year 5 unit on vector graphics that contextualised the digital skills by using them in online activities and in video games

Outcomes from the workshop

Before and after the workshop, we asked the teachers to fill in a survey about themselves, their experiences of creating computing resources, and their views about culturally relevant resources. We then compared the two sets of data to see whether anything had changed over the course of the workshop.

A teacher attending a training workshop laughs as she works through an activity.
The workshop was a positive experience for the teachers.

After teachers had attended the workshop, they reported a statistically significant increase in their confidence levels to adapt resources to be culturally relevant for both themselves and others. 

Teachers explained that the workshop had increased their understanding of culturally relevant pedagogy and of how it could impact on learners. For example, one teacher said:

“The workshop has developed my understanding of how culturally adapted resources can support pupil progress and engagement. It has also highlighted how contextual appropriateness of resources can help children to access resources.” – Participating teacher

Some teachers also highlighted how important it had been to talk to teachers from other schools during the workshop, and how they could put their new knowledge into practice in the classroom:

“The dedicated time and value added from peer discourse helped make this authentic and not just token activities to check a box.” – Participating teacher

“I can’t wait to take some of the work back and apply it to other areas and subjects I teach.” – Participating teacher

What you can expect to see next from this project

After our research team made the adaptations to the units set out in the four plans made during the workshop, the adapted units were delivered by the teachers to more than 500 Year 4 and 5 pupils. We visited some of the teachers’ schools to see the units being taught, and we have interviewed all the teachers about their experience of delivering the adapted materials. This observational and interview data, together with additional survey responses, will be analysed by us, and we’ll share the results over the coming months.

A computing classroom filled with learners
As part of the project, we observed teachers delivering the adapted units to their learners.

In our next blog post about this work, we will delve into the fascinating realm of parental attitudes to culturally relevant computing, and we’ll explore how embracing diversity in the digital landscape is shaping the future for both children and their families. 

We’ve also written about this professional development activity in more detail in a paper to be published at the UKICER conference in September, and we’ll share the paper once it’s available.

Finally, we are grateful to Cognizant for funding this academic research, and to our cohort of primary computing teachers for their enthusiasm, energy, and creativity, and their commitment to this project.

The post Running a workshop with teachers to create culturally relevant Computing lessons appeared first on Raspberry Pi Foundation.

Survey reveals AI’s impact on the developer experience

Post Syndicated from Inbal Shani original https://github.blog/2023-06-13-survey-reveals-ais-impact-on-the-developer-experience/


Developers today do more than just write and ship code—they’re expected to navigate a number of tools, environments, and technologies, including the new frontier of generative artificial intelligence (AI) coding tools. But the most important thing for developers isn’t story points or the speed of deployments. It’s the developer experience, which determines how efficiently and productively developers can exceed standards, enter a flow state, and drive impact.

I say this not only as GitHub’s chief product officer, but as a long-time developer who has worked across every part of the stack. Decades ago, when I earned my master’s in mechanical engineering, I became one of the first technologists to apply AI in the lab. Back then, it would take our models five days to process our larger datasets—which is striking considering the speed of today’s AI models. I yearned for tools that would make me more efficient and shorten my time to production. This is why I’m passionate about developer experience (DevEx) and have made it my focus as GitHub’s chief product officer.

Amid the rapid advancements in generative AI, we wanted to get a better understanding from developers about how new tools—and current workflows—are impacting the overall developer experience. As a starting point, we focused on some of the biggest components of the developer experience: developer productivity, team collaboration, AI, and how developers think they can best drive impact in enterprise environments.

To do so, we partnered with Wakefield Research to survey 500 U.S.-based developers at enterprise companies. In the following report, we’ll show how organizations can remove barriers to help enterprise engineering teams drive innovation and impact in this new age of software development. Ultimately, the way to innovate at scale is to empower developers by improving their productivity, increasing their satisfaction, and enabling them to do their best work—every day. After all, there can be no progress without developers who are empowered to drive impact.

Inbal Shani
Chief Product Officer // GitHub

Learn how generative AI is changing the developer experience

Discover how generative AI is changing software development in a pre-recorded session from GitHub.

Watch the video >

Why developer experience matters

At GitHub, we’re aware there’s often a significant gap between the day-to-day reality for most developers and “conversations about ‘what developers want.’”

With this survey, we wanted to better understand the typical experience for developers—and identify key ways companies can empower their developers and achieve greater success.

One big takeaway: It starts with investing in a great developer experience. And collaboration, as we learned from our research, is at the core of how developers want to work and what makes them most productive, satisfied, and impactful.

A diagram of a formula behind the developer experience that accounts for productivity, impact, satisfaction, and collaboration.
C = Collaboration, the multiplier across the entire developer experience.

DevEx is a formula that takes into account:

  • How simple and fast it is for a developer to implement a change on a codebase—or be productive.
  • How frictionless it is to move from idea through production to impact.
  • How positively or negatively the work environment, workflows, and tools affect developer satisfaction.

For leaders, developer experience is about creating a collaborative environment where developers can be their most productive, impactful, and satisfied at work. For developers, collaboration is one of the most important parts of the equation.

Current performance metrics fall short of developer expectations

Developers say performance metrics don’t meet expectations

The way developers are currently evaluated doesn’t align with how they think their performance should be measured.

  • For instance, the developers we surveyed say they’re currently measured by the number of incidents they resolve. But developers believe that how they handle those bugs and issues is more important to performance. This aligns with the belief that code quality, rather than code quantity, should remain a top performance metric.
  • Developers also believe collaboration and communication should be just as important as code quality in terms of performance measures. Their ability to collaborate and communicate with others is essential to their job, but only 33% of developers report that their companies use it as a performance metric.
Key survey findings showing what developer say their managers use to measure their performance and what developers think will matter more when they start using AI coding tools.
Metrics currently used to measure performance, compared with metrics developers think should be used to measure their performance.
More than output quantity and efficiency, code quality and collaboration are the most important performance metrics, according to the developers we surveyed.
A chart showing what developers say their teams spend the most time doing at work.
The top ranked responses that developers say their teams are working the most on including writing code and finding and fixing security vulnerabilities.

Developers want more opportunities to upskill and drive impact

When developers are asked about what makes a positive impact on their workday, they rank learning new skills (43%), getting feedback from end users (39%), automated tests (38%), and designing solutions to novel problems (36%) as top contenders.

A ranked list of the tasks 500 U.S.-based developers say have the most positive impact on their workdays.
The top tasks developers say positively impact their workdays.

But developers say they’re spending most of their time writing code and tests, then waiting for that code to be reviewed or builds and tests to be executed.

On a typical day, the enterprise developers we surveyed report their teams are busy with a variety of tasks, including writing code, fixing security vulnerabilities, and getting feedback from end users, among other things. Developers also report that they spend a similar amount of time across these tasks, indicating that they’re stretched thin throughout the day.

A ranked list of the top tasks developers and software engineers say they spend the most time working on each day.
The tasks developers say they spend the most time working on each day.

Notably, developers say they spend the same amount of time waiting for builds and tests as they do writing new code.

  • This suggests that wait times for builds and tests are still a persistent problem despite investments in DevOps tools over the past decade.
  • Developers also continue to face obstacles, such as waiting on code review, builds, and test runs, which can hinder their ability to learn new skills and design solutions to novel problems, and our research suggests that these factors can have the biggest impact on their overall satisfaction.

Developers want feedback from end users, but face challenges

Developers say getting feedback from end users (39%) is the second-most important thing that positively impacts their workdays—but it’s often challenging for development teams to get that feedback directly.

  • Product managers and marketing teams often act as intermediaries, making it difficult for developers to directly receive end-user feedback.
  • Developers would ideally receive feedback from automated and validation tests to improve their work, but sometimes these tests are sent to other teams before being handed off to engineering teams.

The top two daily tasks for development teams are writing code (32%) and finding and fixing security vulnerabilities (31%).

  • This shows the increased importance developers have placed on security and underscores how companies are prioritizing security.
  • It also demonstrates the critical role that enterprise development teams play in meeting policy and board edicts around security.

The bottom line
Developers want to upskill, design solutions, get feedback from end users, and be evaluated on their communication skills. However, wait times on builds and tests, as well as the current performance metrics they’re evaluated on, are getting in the way.

Collaboration is the cornerstone of the developer experience

Developers thrive in collaborative environments

In our survey of enterprise engineers, developers say they work with an average of 21 other developers on a typical project—and 52% report working with other teams daily or weekly. Notably, they rank regular touchpoints as the most important factor for effective collaboration.

A survey finding that developers at enterprise companies often work with an average of 21 developers on other projects and often work on a daily or weekly basis with colleagues.
Developers in enterprise settings often work with an average of 21 other developers on a daily or weekly cadence.

But developers also have a holistic view of collaboration—it’s defined not only by talking and meeting with others, but also by uninterrupted work time, access to fully configured developer environments, and formal mentor-mentee relationships.

  • Specified blocks with no team communication give developers the time and space to write code and work towards team goals.
  • Access to fully configured developer environments promotes consistency throughout the development process. It also helps developers collaborate faster and avoid hearing the infamous line, “But it worked on my machine.”
  • Mentorships can help developers upskill and build interpersonal skills that are essential in a collaborative work environment.

It’s important to note these factors can also negatively impact a developer’s work day—which suggests that ineffective meetings can serve to distract rather than help developers (something we’ve found in previous research).

The key factors developers in a survey say contribute most highly to effective team collaboration including meetings, dedicated time for individual work, and access to fully configured dev environments.

Our survey indicates the factors most important to effective collaboration are so critical that when they’re not done effectively, they have a noticeable, negative impact on a developer’s work.

A ranked list of the top tasks developers in a survey reported as having a negative impact on their overall workday experience.
The tasks developers say most often have a negative impact on their workday experience.
Developers work with an average of 21 people on any given project. They need the time and tools for success—including regular touchpoints, heads-down time, access to fully-configured dev environments, and formal mentor-mentee relationships.

We wanted to learn more about how developers collaborate

So, we sourced some answers from our followers on Twitter. We asked developers what tips they have for effective collaboration. Here’s what one developer had to say:

Twitter user Colby Ray had multiple points in response to our prompt.

We also asked what makes for a productive and valuable meeting:

Twitter user kettenaito had several points in response to our prompt.

Twitter user Mateus Feira had several points in response to our prompt.

Effective collaboration improves code quality

As developer experience continues to be defined, so, too, will successful developer collaboration. Too many pings and messages can affect flow, but there’s still a need to stay in touch. In our survey, developers say effective collaboration results in improved test coverage and faster, cleaner, more secure code writing—which are best practices for any development team. This shows that when developers work effectively with others, they believe they build better and more secure software.

Developers in a survey report that collaboration positively impacts how they write code, how fast they can ship it, and more.
Developers widely view effective collaboration as helping to improve what they ship and how often they ship it.

Developers we surveyed believe collaboration and communication—along with code quality—should be the top priority for evaluation.

  • From DevOps to agile methodologies, developers and the greater business world have been talking about the importance of collaboration for a long time.
  • But developers are still not being measured on it.
Developers in a survey respond to a question about what metrics they believe their companies should use to measure their performance and productivity.
The metrics that developers think their managers should use to evaluate their performance and productivity.

We asked developers to share their ideas for measuring how well they collaborate. Here’s what one developer had to say:

Twitter user Andrew DiMola had several points in response to our prompt.

  • The takeaway: Companies and engineering managers should encourage regular team communication, and set time to check in, especially in remote environments, but respect developers’ need to work and focus.
Developers think regular touchpoints with their teams including meetings, asynchronous communication, and innersource practices help organizations collaborate at scale.
Developers believe that effective and regular touchpoints with their colleagues are critical for effective team collaboration.

4 tips for engineering managers to improve collaboration

At GitHub, our researchers, developers, product teams, and analysts are dedicated to studying and improving developer productivity and satisfaction. Here are their tips for engineering leaders who want to improve collaboration among developers:

  1. Make collaboration a goal in performance objectives. This builds the space and expectation that people will collaborate. This could be in the form of lunch and learns, joint projects, etc.
  2. Define and scope what collaboration looks like in your organization. Let people know when they’re being informed about something vs. being consulted about something. A matrix outlining roles and responsibilities helps define each person’s role and is something GitHub teams have implemented.
  3. Give developers time to converse and get to know one another. In particular, remote or hybrid organizations need to dedicate a portion of a developer’s time and virtual space to building relationships. Check out the GitHub guides to remote work.
  4. Identify principal and distinguished engineers. Academic research supports the positive impact of change agents in organizations—and how they should be the people who are exceptionally great at collaboration. It’s a matter of identifying your distinguished engineers and elevating them to a place where they can model desired behaviors.

The bottom line
Effective developer collaboration improves code quality and should be a performance measure. Regular touchpoints, heads-down time, access to fully configured dev environments, and formal mentor-mentee relationships result in improved test coverage and faster, cleaner, more secure code writing.

AI improves individual performance and team collaboration

Developers are already using AI coding tools at work

A staggering 92% of U.S.-based developers working in large companies report using an AI coding tool either at work or in their personal time—and 70% say they see significant benefits to using these tools.

  • AI is here to stay—and it’s already transforming how developers approach their day-to-day work. That makes it critical for businesses and engineering leaders to adopt enterprise-grade AI tools to avoid their developers using non-approved applications. Companies should also establish governance standards for using AI tools to ensure that they are used ethically and effectively.
92% of developers in a survey say they're already using AI coding tools at work.
Almost all developers are already using AI coding tools at and outside of work.

70% of developers see a benefit to using AI coding tools at work.

Almost all (92%) developers use AI coding tools at work—and a majority (67%) have used these tools in both a work setting and during their personal time. Curiously, only 6% of developers in our survey say they solely use these tools outside of work.

Developers believe AI coding tools will enhance their performance

With most developers experimenting with AI tools in the workplace, our survey results suggest it’s not just idle interest leading developers to use AI. Rather, it’s a recognition that AI coding tools will help them meet performance standards.

  • In our survey, developers say AI coding tools can help them meet existing performance standards with improved code quality, faster outputs, and fewer production-level incidents. They also believe that these metrics should be used to measure their performance beyond code quantity.
The metrics developers say their managers use to measure their productivity vs. the metrics developers think their managers should use to measure their productivity if they use AI coding tools.
Developers widely think that AI coding tools will layer into their existing workflows and bring greater efficiencies—but they do not think AI will change how software is made.

Around one-third of developers report that their managers currently assess their performance based on the volume of code they produce—and an equal number anticipate that this will persist when they start using AI-based coding tools.

  • Notably, the quantity of code a developer produces may not necessarily correspond to its business value.
  • Stay smart. With the increase of AI tooling being used in software development—which often contributes to code volume—engineering leaders will need to ask whether measuring code volume is still the best way to measure productivity and output.

Developers think AI coding tools will lead to greater team collaboration

Beyond improving individual performance, more than 4 in 5 developers surveyed (81%) say AI coding tools will help increase collaboration within their teams and organizations.

  • In fact, security reviews, planning, and pair programming are the most significant points of collaboration and the tasks that development teams are expected to, and should, work on with the help of AI coding tools. This also indicates that code and security reviews will remain important as developers increase their use of AI coding tools in the workplace.
Developers believe that AI coding tools will make engineering teams more collaborative as the quality of code produced becomes ever more important.
Developers think their teams will need to become more collaborative as they start using AI coding tools.
Sometimes, developers can do the same thing with one line or multiple lines of code. Even still, one-third of developers in our survey say their managers measure their performance based on how much code they produce.

Notably, developers believe AI coding tools will give them more time to focus on solution design. This has direct organizational benefits and means developers believe they’ll spend more time designing new features and products with AI instead of writing boilerplate code.

  • Developers are already using generative AI coding tools to automate parts of their workflow, which frees up time for more collaborative projects like security reviews, planning, and pair programming.
Developers think AI coding tools will help them upskill, become more productive, and focus on higher-value problem solving.
Developers believe that AI coding tools will help them focus on higher-value problem solving.

Developers think AI increases productivity and prevents burnout

Not only can AI coding tools help improve overall productivity, but they can also provide upskilling opportunities to help create a smarter workforce according to the developers we surveyed.

  • 57% of developers believe AI coding tools help them improve their coding language skills—which is the top benefit they see. Beyond acting as an upskilling aid, developers also say AI coding tools can help reduce cognitive effort, and since mental capacity and time are both finite resources, 41% of developers believe that AI coding tools can help prevent burnout.
  • In previous research we conducted, 87% of developers reported that the AI coding tool GitHub Copilot helped them preserve mental effort while completing more repetitive tasks. This shows that AI coding tools allow developers to preserve cognitive effort and focus on more challenging and innovative aspects of software development or research and development.
  • AI coding tools help developers upskill while they work. Across our survey, developers consistently rank learning new skills as the number one contributor to a positive workday. But 30% also say learning and development can have a negative impact on their overall workday, which suggests some developers view learning and development as adding more work to their workdays. Notably, developers say the top benefit of AI coding tools is learning new skills—and these tools can help developers learn while they work, instead of making learning and development an additional task.

AI is improving the developer experience across the board

Developers in our survey suggest they can better meet standards around code quality, completion time, and the number of incidents when using AI coding tools—all of which are measures developers believe are key areas for evaluating their performance.

AI coding tools can also help reduce the likelihood of coding errors and improve the accuracy of code—which ultimately leads to more reliable software, increased application performance, and better performance numbers for developers. As AI technology continues to advance, it is likely that these coding tools will have an even greater impact on developer performance and upskilling.

AI coding tools are layering into existing developer workflows and creating greater efficiencies

Developers believe that AI coding tools will increase their productivity—but our survey suggests that developers don’t think these tools are fundamentally altering the software development lifecycle. Instead, developers suggest they’re bringing greater efficiencies to it.

  • The use of automation and AI has been a part of the developer workflow for a considerable amount of time, with developers already utilizing a range of automated and AI-powered tools, such as machine learning-based security checks and CI/CD pipelines.
  • Rather than completely overhauling operations, these tools create greater efficiencies within existing workflows, and that frees up more time for developers to concentrate on developing solutions.

The bottom line
Almost all developers (92%) are using AI coding tools at work—and they say these tools not only improve day-to-day tasks but enable upskilling opportunities, too. Developers see material benefits to using AI tools including improved performance and coding skills, as well as increased team collaboration.

The path forward

Developer satisfaction, productivity, and organizational impact are all positioned to get a boost from AI coding tools—and that will have a material impact on the overall developer experience.

With 92% of developers already saying they use AI coding tools at work and in their personal time, it’s clear AI is here to stay. 70% of the developers we surveyed say they already see significant benefits when using AI coding tools, and 81% of the developers we surveyed expect AI coding tools to make their teams more collaborative—which is a net benefit for companies looking to improve both developer velocity and the developer experience.

Notably, 57% of developers believe that AI coding tools could help them upskill and build learning and development into their daily workflow. With all of this in mind, technical leaders should start exploring AI as a solution to improve satisfaction, productivity, and the overall developer experience.

In addition to exploring AI tools, here are three takeaways engineering and business leaders should consider to improve the developer experience:

  1. Help your developers enter a flow state with tools, processes, and practices that help them be productive, drive impact, and do creative and meaningful work.
  2. Empower collaboration by breaking down organizational silos and providing developers with the opportunity to communicate efficiently.
  3. Make room for upskilling within developer workflows through key investments in AI to help your organization experiment and innovate for the future.

Methodology

This report draws on a survey conducted online by Wakefield Research on behalf of GitHub from March 14, 2023 through March 29, 2023 among 500 non-student, U.S.-based developers who are not managers and work at companies with 1,000-plus employees. For a complete survey methodology, please contact [email protected].

Introducing data science concepts and skills to primary school learners

Post Syndicated from Katharine Childs original https://www.raspberrypi.org/blog/data-science-data-literacy-primary-school-scotland/

Every day, most of us both consume and create data. For example, we interpret data from weather forecasts to predict our chances of good weather for a special occasion, and we create data as our carbon footprint leaves a trail of energy consumption information behind us. Data is important in our lives, and countries around the world are expanding their school curricula to teach the knowledge and skills required to work with data, including at primary (K–5) level.

In our most recent research seminar, attendees heard about a research-based initiative called Data Education in Schools. The speakers, Kate Farrell and Professor Judy Robertson from the University of Edinburgh, Scotland, shared how this project aims to empower learners to develop data literacy skills and succeed in a data-driven world.

“Data literacy is the ability to ask questions, collect, analyse, interpret and communicate stories about data.”

– Kate Farrell & Prof. Judy Robertson

Being a data citizen

Scotland’s national curriculum does not explicitly mention data literacy, but the topic is embedded in many subjects such as Maths, English, Technologies, and Social Studies. Teachers in Scotland, particularly in primary schools, have the flexibility to deliver learning in an interdisciplinary way through project-based learning. Therefore, the team behind Data Education in Schools developed a set of cross-curricular data literacy projects. Educators and education policy makers in other countries who are looking to integrate computing topics with other subjects may also be interested in this approach.

Becoming a data citizen involves finding meaning in data, controlling your personal data trail, being a critical consumer of data, and taking action based on data.
Data citizens have skills they need to thrive in a world shaped by digital technology.

The Data Education in Schools projects are aimed not just at giving learners skills they may need for future jobs, but also at equipping them as data citizens in today’s world. A data citizen can think critically, interpret data, and share insights with others to effect change.

Kate and Judy shared an example of data citizenship from a project they had worked on with a primary school. The learners gathered data about how much plastic waste was being generated in their canteen. They created a data visualisation in the form of a giant graph of types of rubbish on the canteen floor and presented this to their local council.

A child arranges objects to visualise data.
Sorting food waste from lunch by type of material

As a result, the council made changes that reduced the amount of plastic used in the canteen. This shows how data citizens are able to communicate insights from data to influence decisions.

A cycle for data literacy projects

Across its projects, the Data Education in Schools initiative uses a problem-solving cycle called the PPDAC cycle. This cycle is a useful tool for creating educational resources and for teaching, as you can use it both to structure resources and to concentrate on the specific learner skills you want to develop.

The PPDAC project cycle.
The PPDAC data problem-solving cycle

The five stages of the cycle are: 

  1. Problem: Identifying the problem or question to be answered
  2. Plan: Deciding what data to collect or use to answer the question
  3. Data: Collecting the data and storing it securely
  4. Analysis: Preparing, modelling, and visualising the data, e.g. in a graph or pictogram
  5. Conclusion: Reviewing what has been learned about the problem and communicating this with others 

Smaller data literacy projects may focus on one or two stages within the cycle so learners can develop specific skills or build on previous learning. A large project usually includes all five stages, and sometimes involves moving backwards — for example, to refine the problem — as well as forwards.
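For teachers who want to see the whole cycle in a text-based tool, here is a minimal sketch in Python of our own devising (the waste-audit data is invented for illustration): the problem is “which material do we throw away most at lunch?”, the plan is to record one entry per item, and the analysis is a simple text pictogram.

from collections import Counter

# Data: one record per item collected in a (fictional) lunchtime waste audit.
waste_items = ["plastic", "plastic", "paper", "food", "plastic",
               "food", "paper", "plastic", "metal", "food"]

# Analysis: tally the items and draw a pictogram, one '#' per item.
counts = Counter(waste_items)
for material, count in counts.most_common():
    print(f"{material:8} {'#' * count}")

# Conclusion: the most common material suggests where to act first.
top_material = counts.most_common(1)[0][0]
print(f"\nWe throw away {top_material} most often.")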

Data literacy for primary school learners

At primary school, the aim of data literacy projects is to give learners an intuitive grasp of what data looks like and how to make sense of graphs and tables. Our speakers gave some great examples of playful approaches to data. This can be helpful because younger learners may benefit from working with tangible objects, e.g. LEGO bricks, which can be sorted by their characteristics. Kate and Judy told us about one learner who collected data about their clothes and drew the results in the form of clothes on a washing line — a great example of how tangible objects also inspire young people’s creativity.

In a computing classroom, a girl laughs at what she sees on the screen.

As learners get older, they can begin to work with digital data, including data they collect themselves using physical computing devices such as BBC micro:bit microcontrollers or Raspberry Pi computers.

Free resources for primary (and secondary) schools

For many attendees, one of the highlights of the seminar was seeing the range of high-quality teaching resources for learners aged 3–18 that are part of the Data Education in Schools project. These include: 

  • Data 101 videos: A set of 11 videos to help primary and secondary teachers understand data literacy better.
  • Data literacy live lessons: Data-related activities presented through live video.
  • Lesson resources: Lots of projects to develop learners’ data literacy skills. These are mapped to the Scottish primary and secondary curriculum, but can be adapted for use in other countries too.

More resources are due to be published later in 2023, including a set of prompt cards to guide learners through the PPDAC cycle, a handbook for teachers to support the teaching of data literacy, and a set of virtual data-themed escape rooms.  

You may also be interested in the units of work on data literacy skills that are part of The Computing Curriculum, our complete set of classroom resources to teach computing to 5- to 16-year-olds.

Join our next seminar on primary computing education

At our next seminar we welcome Aim Unahalekhaka from Tufts University, USA, who will share research about a rubric to evaluate young learners’ ScratchJr projects. If you have a tablet with ScratchJr installed, make sure to have it available to try out some activities. The seminar will take place online on Tuesday 6 June at 17.00 UK time, sign up now to not miss out.

To find out more about connecting research to practice for primary computing education, you can see a list of our upcoming monthly seminars on primary (K–5) teaching and learning and watch the recordings of previous seminars in this series.

The post Introducing data science concepts and skills to primary school learners appeared first on Raspberry Pi Foundation.

Integrating primary computing and literacy through multimodal storytelling

Post Syndicated from Veronica Cucuiat original https://www.raspberrypi.org/blog/primary-computing-programming-literacy-storytelling/

Broadening participation and finding new entry points for young people to engage with computing is part of how we pursue our mission here at the Raspberry Pi Foundation. It was also the focus of our March online seminar, led by our own Dr Bobby Whyte. In this third seminar of our series on computing education for primary-aged children, Bobby presented his work on ‘designing multimodal composition activities for integrated K-5 programming and storytelling’. In this research he explored the integration of computing and literacy education, and the implications and limitations for classroom practice.

Young learners at computers in a classroom.

Motivated by challenges Bobby experienced first-hand as a primary school teacher, his two studies on the topic contribute to the body of research aiming to make computing less narrow and difficult. In this work, Bobby integrated programming and storytelling as a way of making the computing curriculum more applicable, relevant, and contextualised.

Critically for computing educators and researchers in the area, Bobby explored how theories related to ‘programming as writing’ translate into practice, and what the implications of designing and delivering integrated lessons in classrooms are. While the two studies described here took place in the context of UK schooling, we can learn universal lessons from this work.

What is multimodal composition?

In the seminar Bobby made a distinction between applying computing to literacy (or vice versa) and true integration of programming and storytelling. To achieve true integration in the two studies he conducted, Bobby used the idea of ‘multimodal composition’ (MMC). A multimodal composition is defined as “a composition that employs a variety of modes, including sound, writing, image, and gesture/movement [… with] a communicative function”.

Storytelling comes together with programming in a multimodal composition as learners create a program to tell a story where they:

  • Decide on content and representation (the characters, the setting, the backdrop)
  • Structure text they’ve written
  • Use technical aspects (e.g. motion blocks, tension) to achieve effects for narrative purposes
A screenshot showing a Scratch project.
Defining multimodal composition (MMC) for a visual programming context

Multimodality for programming and storytelling in the classroom

To investigate the use of MMC in the classroom, Bobby started by designing a curriculum unit of lessons. He mapped the unit’s MMC activities to specific storytelling and programming learning objectives. The MMC activities were designed using design-based research, an approach in which something is designed and tested iteratively in real-world contexts. In practice that means Bobby collaborated with teachers and students to analyse, evaluate, and adapt the unit’s activities.

A list of learning objectives that could be covered by a multimodal composition activity.
Mapping of the MMC activities to storytelling and programming learning objectives

The first of two studies to explore the design and implementation of MMC activities was conducted with 10 K-5 students (age 9 to 11) and showed promising results. All students approached the composition task multimodally, using multiple representations for specific purposes. In other words, they conveyed different parts of their stories using either text, sound, or images.

Bobby found that broadcast messages and loops were the least used blocks among the group. As a consequence, he modified the curriculum unit to include additional scaffolding and instructional support on how and why the students might embed these elements.

A list of modifications to the MMC curriculum unit based on testing in a classroom.
Bobby modified the classroom unit based on findings from his first study

In the second study, the MMC activities were evaluated in a classroom of 28 K-5 students led by one teacher over two weeks. Findings indicated that students appreciated the longer multi-session project. The teacher reported being satisfied with the project work the learners completed and the skills they practised. The teacher also further integrated and adapted the unit into their classroom practice after the research project had been completed.

How might you use these research findings?

Factors that impacted the integration of storytelling and programming included the teacher’s confidence to teach programming as well as the teacher’s ability to differentiate between students and what kind of support they needed depending on their previous programming experience.

In addition, there are considerations regarding the curriculum. The school where the second study took place considered the activities in the unit to be literacy-light, as the English literacy curriculum is ‘text-heavy’ and the addition of multimodal elements ‘wastes’ opportunities to produce stories that are more text-based.

Woman teacher and female student at a laptop.

Bobby’s research indicates that MMC provides useful opportunities for learners to simultaneously pursue storytelling and programming goals, and the curriculum unit designed in the research proved adaptable for the teacher to integrate into their classroom practice. However, Bobby cautioned that there’s a need to carefully consider both the benefits and trade-offs when designing cross-curricular integration projects in order to ensure a fair representation of both subjects.

Can you see an opportunity for integrating programming and storytelling in your classroom? Let us know your thoughts or questions in the comments below.

You can watch Bobby’s full presentation:

And you can read his research paper Designing for Integrated K-5 Computing and Literacy through Story-making Activities (open access version).

You may also be interested in our pilot study on using storytelling to teach computing in primary school, which we conducted as part of our Gender Balance in Computing programme.

Join our next seminar on primary computing education

At our next seminar, we welcome Kate Farrell and Professor Judy Robertson (University of Edinburgh). This session will introduce you to how data literacy can be taught in primary and early-years education across different curricular areas. It will take place online on Tuesday 9 May at 17.00 UK time, don’t miss out and sign up now.

To find out more about connecting research to practice for primary computing education, you can see a list of our upcoming monthly seminars on primary (K–5) teaching and learning and watch the recordings of previous seminars in this series.

The post Integrating primary computing and literacy through multimodal storytelling appeared first on Raspberry Pi Foundation.

3 Key Challenges to Clarity in Threat Intelligence: 2023 Forrester Consulting Total Economic Impact™ Study

Post Syndicated from Stacy Moran original https://blog.rapid7.com/2023/04/20/3-key-challenges-to-clarity-in-threat-intelligence/

Inundated with data

It would have been really cool to combine those two words to make “inundata,” but it would have been disastrous for SEO purposes. It’s all meant to kick off a conversation about the state of security organizations with regard to threat intelligence. There are several key challenges to overcome on the road to clarity in threat intelligence operations and to making data actionable.

This is the second entry in a blog series based on The Total Economic Impact™ of Rapid7 Threat Command For Digital Risk Protection and Threat Intelligence. Let’s dive into three challenges organizations are facing when it comes to threat intelligence.

Lack of visibility and actionable data

For the commissioned study, Forrester conducted interviews with four Rapid7 customers and collated their responses into a single representative organization, describing its experiences after implementing Rapid7’s threat intelligence solution, Threat Command. Interviewees noted that prior to utilizing Threat Command, lack of visibility and unactionable data across legacy systems were hampering efforts to innovate in threat detection. The study stated:

“Interviewees noted that there was an immense amount of data to examine with their previous solutions and systems. This resulted in limited visibility into the potential security threats to both interviewees’ organizations and their customers. The data the legacy solutions provided was confusing to navigate. There was no singular accounting of assets or solution to provide curated customizable information.”

A key part of that finding is that limited visibility can turn into liabilities for an organization’s customers – as in the SonicWall attack a couple of years ago. Incidents like these can cause immediate pandemonium within downstream customers’ organizations.

In this same scenario, lack of visibility can also be disastrous for the supply chain. Instead of affecting the end users of one product, a whole network of vendors and their end users could be adversely affected by a lack of visibility into threat intelligence originating from just one organization. With greater data visibility through a single pane of glass, and with information consolidated into a centralized asset list, security teams can begin to mitigate these concerns.

Time-consuming processes for investigation and analysis

Rapid7 customers interviewed for the study also felt that their legacy threat intelligence solutions forced teams to “spend hours manually searching through different platforms, such as a web-based Git repository or the dark web, to investigate all potential threat alerts, many of which were irrelevant.”

Because of these inefficiencies, additional and unforeseen work was created on the backend, along with what we can assume were many overstretched analysts. How can organizations, then, gain back – and create new – efficiencies? First, alert context is a must. With Threat Command, security organizations can:

  • Receive actionable alerts categorized by severity, type (phishing, data leakage), and source (social media, black markets).
  • Implement alert automation rules based on your specific criteria so you can precisely customize and fine-tune alert management (see the hypothetical sketch after this list).
  • Accelerate alert triage and shorten investigation time by leveraging Threat Command Extend™ (browser extension) as well as 24x7x365 availability of Rapid7 expert analysts.  
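
To make the idea of alert automation rules concrete, here is a hypothetical sketch of severity-, type-, and source-based triage rules in Python. It illustrates only the general rule-matching pattern; it is not Threat Command’s actual rule syntax or API, and all names in it are invented for illustration.

    # Hypothetical sketch of rule-based alert automation; NOT Threat Command's
    # actual syntax or API. Each rule pairs a predicate with an action, and
    # the first rule that matches an alert decides how it is handled.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        severity: str  # e.g. "high", "medium", "low"
        type: str      # e.g. "phishing", "data leakage"
        source: str    # e.g. "social media", "black markets"

    RULES = [
        (lambda a: a.severity == "high" and a.type == "phishing", "page the on-call analyst"),
        (lambda a: a.source == "black markets", "open an investigation"),
        (lambda a: a.severity == "low", "auto-archive"),
    ]

    def triage(alert: Alert) -> str:
        for predicate, action in RULES:
            if predicate(alert):
                return action
        return "manual review"  # default when no rule matches

    print(triage(Alert("high", "phishing", "social media")))    # page the on-call analyst
    print(triage(Alert("low", "data leakage", "paste sites")))  # auto-archive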

By leveraging these features, the study’s composite organization was able to surface far more actionable alerts and see faster remediation processes. It saved $302,000 over three years by avoiding the cost of hiring an additional security analyst.

Pivoting away from a constant reactive approach to cyber incidents

When it comes to security, no one ever aims for an after-the-fact approach. But sometimes a SOC may realize that’s the position it’s been in for quite some time. Nothing good will come from that for the business or anyone on the security team. Indeed, interviewees in the study supported this perspective:

“Legacy systems and internal processes led to a reactive approach for their threat intelligence investigations and security responses. Security team members would get alerts from systems or other teams with limited context, which led to inefficient triage or larger issues. As a result, the teams sacrificed quality for speed.”

The study notes how interviewees’ organizations were then motivated to look for a solution with improved search capabilities across locations such as social media and the dark web. After implementing Threat Command, those organizations were able to receive early warning of potential attacks and automated intelligence about vulnerabilities targeting their networks and customers.

By creating processes that are centered around early-warning methodologies and a more proactive approach to security, the composite organization was able to reduce the likelihood of a breach by up to 70% over the course of three years.

Security is about the solutions

Challenges in a SOC don’t have to mean stopping everything and preparing for a years-long audit of all processes and solutions. It is possible to overcome challenges relatively quickly with a solution like Threat Command that can show immediate value after an accelerated onboarding process. And it is possible to vastly improve security posture in the face of an increasing volume of global threats.  

For a deeper dive into The Total Economic Impact™ of Rapid7 Threat Command For Digital Risk Protection and Threat Intelligence, download the study now. You can also read the previous blog entry in this series here.