This video demos a real-life Pokedex, complete with visual recognition, that I created using a Raspberry Pi, Python, and Deep Learning. You can find the entire blog post, including code, using this link: https://www.pyimagesearch.com/2018/04/30/a-fun-hands-on-deep-learning-project-for-beginners-students-and-hobbyists/ Music credit to YouTube user “No Copyright” for providing royalty free music: https://www.youtube.com/watch?v=PXpjqURczn8
The history of Pokémon in 30 seconds
The Pokémon franchise was created by video game designer Satoshi Tajiri in 1995. In the fictional world of Pokémon, Pokémon Trainers explore the vast landscape, catching and training small creatures called Pokémon. To date, there are 802 different types of Pokémon. They range from the ever recognisable Pikachu, a bright yellow electric Pokémon, to the highly sought-after Shiny Charizard, a metallic, playing-card-shaped Pokémon that your mate Alex claims she has in mint condition, but refuses to show you.
In the world of Pokémon, children as young as ten-year-old protagonist and all-round annoyance Ash Ketchum are allowed to leave home and wander the wilderness. There, they hunt vicious, deadly creatures in the hope of becoming a Pokémon Master.
Adrian’s deep learning Pokédex
Adrian is a bit of a deep learning pro, as demonstrated by his Santa/Not Santa detector, which we wrote about last year. For that project, he also provided a great explanation of what deep learning actually is. In a nutshell:
…a subfield of machine learning, which is, in turn, a subfield of artificial intelligence (AI). While AI embodies a large, diverse set of techniques and algorithms related to automatic reasoning (inference, planning, heuristics, etc.), the machine learning subfields are specifically interested in pattern recognition and learning from data.
As with his earlier Raspberry Pi project, Adrian uses the Keras deep learning library with a TensorFlow backend, plus a few other packages such as Adrian’s own imutils functions and OpenCV.
Adrian trained a Convolutional Neural Network using Keras on a dataset of 1191 Pokémon images, obtaining 96.84% accuracy. As Adrian explains, this model is able to identify Pokémon in both still images and video. It’s perfect for creating a Pokédex – an interactive Pokémon catalogue that should, according to the franchise, be able to identify and read out information on any known Pokémon when captured by camera. More information on model training can be found on Adrian’s blog.
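If you’re curious what this kind of classifier looks like in code, here is a minimal, illustrative Keras sketch of a small CNN. The image size, layer sizes, and class count are assumptions made for the example; Adrian’s actual architecture and training pipeline are documented in full on his blog.

```python
# Minimal sketch of a small Keras CNN image classifier, for illustration only.
# Image size, layer sizes, and class count are assumptions, not Adrian's settings.
from tensorflow.keras import layers, models

NUM_CLASSES = 5           # e.g. a handful of Pokémon to recognise
IMAGE_SHAPE = (96, 96, 3) # input resolution is an assumption

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=IMAGE_SHAPE),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                           # guards against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```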
For the physical build, a Raspberry Pi 3 with camera module is paired with the Raspberry Pi 7″ touch display to create a portable Pokédex. And while Adrian comments that the same result can be achieved using your home computer and a webcam, that’s not how Adrian rolls as a Raspberry Pi fan.
Plus, the smaller size of the Pi is perfect for one of you to incorporate this deep learning model into a 3D-printed Pokédex for ultimate Pokémon glory, pretty please, thank you.
Adrian has gone into impressive detail about how the project works and how you can create your own on his blog, pyimagesearch. So if you’re interested in learning more about deep learning, and making your own Pokédex, be sure to visit.
This blog post was co-authored by Ujjwal Ratan, a senior AI/ML solutions architect on the global life sciences team.
Healthcare data is generated at an ever-increasing rate and is predicted to reach 35 zettabytes by 2020. Being able to cost-effectively and securely manage this data, whether for patient care, research, or legal reasons, is increasingly important for healthcare providers.
Healthcare providers must have the ability to ingest, store and protect large volumes of data including clinical, genomic, device, financial, supply chain, and claims. AWS is well-suited to this data deluge with a wide variety of ingestion, storage and security services (e.g. AWS Direct Connect, Amazon Kinesis Streams, Amazon S3, Amazon Macie) for customers to handle their healthcare data. In a recent Healthcare IT News article, healthcare thought-leader, John Halamka, noted, “I predict that five years from now none of us will have datacenters. We’re going to go out to the cloud to find EHRs, clinical decision support, analytics.”
I realize simply storing this data is challenging enough. Magnifying the problem is the fact that healthcare data is increasingly attractive to cyber attackers, making security a top priority. According to Mariya Yao in her Forbes column, it is estimated that individual medical records can be worth hundreds or even thousands of dollars on the black market.
In this first post of a two-part series, I will address the value that AWS can bring to customers for ingesting, storing, and protecting providers’ healthcare data. I will describe key components of any cloud-based healthcare workload and the services AWS provides to meet these requirements. In part 2, we will dive deep into the AWS services used for advanced analytics, artificial intelligence, and machine learning.
The data tsunami is upon us
So where is this data coming from? In addition to the ubiquitous electronic health record (EHR), the sources of this data include:
devices such as MRIs, x-rays and ultrasounds
sensors and wearables for patients
medical equipment telemetry
Additional sources of data come from non-clinical, operational systems such as:
claims and billing
Data from these sources can be structured (e.g., claims data) as well as unstructured (e.g., clinician notes). Some data comes across in streams such as that taken from patient monitors, while some comes in batch form. Still other data comes in near-real time such as HL7 messages. All of this data has retention policies dictating how long it must be stored. Much of this data is stored in perpetuity as many systems in use today have no purge mechanism. AWS has services to manage all these data types as well as their retention, security and access policies.
Imaging is a significant contributor to this data tsunami. Increasing demand for early-stage diagnoses along with aging populations drive increasing demand for images from CT, PET, MRI, ultrasound, digital pathology, X-ray and fluoroscopy. For example, a thin-slice CT image can be hundreds of megabytes. Increasing demand and strict retention policies make storage costly.
Due to the plummeting cost of gene sequencing, molecular diagnostics (including liquid biopsy) is a large contributor to this data deluge. Many predict that as the value of molecular testing becomes more identifiable, the reimbursement models will change and it will increasingly become the standard of care. According to the Washington Post article “Sequencing the Genome Creates so Much Data We Don’t Know What to do with It,”
“Some researchers predict that up to one billion people will have their genome sequenced by 2025 generating up to 40 exabytes of data per year.”
Although genomics is primarily used for oncology diagnostics today, it’s also used for other purposes, such as pharmacogenomics, which seeks to understand how an individual will metabolize a medication.
It is increasingly challenging for the typical hospital, clinic or physician practice to securely store, process and manage this data without cloud adoption.
AWS has a variety of ingestion techniques depending on the nature of the data, including its size, frequency, and structure. AWS Snowball and AWS Snowmobile are appropriate for extremely large, secure data transfers, whether one-time or episodic. AWS Glue is a fully managed ETL service for securely moving data from on-premises systems to AWS, and Amazon Kinesis can be used for ingesting streaming data.
Amazon S3, Amazon S3 IA, and Amazon Glacier are economical, data-storage services with a pay-as-you-go pricing model that expand (or shrink) with the customer’s requirements.
This reference architecture has four distinct components – ingestion, storage, security, and analytics. In this post I will dive deeper into the first three components, namely ingestion, storage, and security. In part 2, I will look at how to use AWS’ analytics services to draw value from, and optimize, your healthcare data.
A typical provider data center will consist of many systems with varied datasets. AWS provides multiple tools and services to effectively and securely connect to these data sources and ingest data in various formats. Customers can choose from a range of services and use them in accordance with their use case.
For use cases involving one-time (or periodic), very large data migrations into AWS, customers can take advantage of AWS Snowball devices. These devices come in two sizes, 50 TB and 80 TB, and can be combined to create a petabyte-scale data transfer solution.
The devices are easy to connect and load, and they are shipped to AWS, avoiding the network bottlenecks associated with such large-scale data migrations. The devices are extremely secure, supporting 256-bit encryption, and come in a tamper-resistant enclosure. AWS Snowball imports data into Amazon S3, which can then interface with other AWS compute services to process that data in a scalable manner.
For use cases involving a need to keep a portion of datasets on premises for active use while offloading the rest to AWS, the AWS Storage Gateway service can be used. The service allows you to seamlessly integrate on-premises applications via standard storage protocols like iSCSI or NFS, mounted on a gateway appliance. It supports a file interface, a volume interface, and a tape interface, which can be used for a range of use cases like disaster recovery, backup and archiving, cloud bursting, storage tiering, and migration.
By using the AWS proposed reference architecture for disaster recovery, healthcare providers can ensure their data assets are securely stored on the cloud and are easily accessible in the event of a disaster. The “AWS Disaster Recovery” whitepaper includes details on options available to customers based on their desired recovery time objective (RTO) and recovery point objective (RPO).
AWS is an ideal destination for offloading large volumes of less-frequently-accessed data. These datasets are rarely used in active compute operations but are exceedingly important to retain for reasons like compliance. By storing these datasets on AWS, customers can take advantage of the highly durable platform to securely store their data and also retrieve it easily when needed. For more details on how AWS enables customers to run backup and archival use cases on AWS, please refer to the following set of whitepapers.
A healthcare provider may have a variety of databases spread throughout the hospital system supporting critical applications such as EHR, PACS, finance, and many more. These datasets often need to be aggregated to derive information and calculate metrics to optimize business processes. AWS Glue is a fully managed Extract, Transform, and Load (ETL) service that can read data from a JDBC-enabled, on-premises database and transfer the datasets into AWS services like Amazon S3, Amazon Redshift, and Amazon RDS. This allows customers to create transformation workflows that integrate smaller datasets from multiple sources and aggregate them on AWS.
Healthcare providers deal with a variety of streaming datasets which often have to be analyzed in near real time. These datasets come from a variety of sources such as sensors, messaging buses, and social media, and often do not adhere to an industry standard. The Amazon Kinesis suite of services, which includes Amazon Kinesis Streams, Amazon Kinesis Firehose, and Amazon Kinesis Analytics, is well suited to deriving value from streaming data.
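As a rough illustration of the streaming-ingestion side, here is a minimal boto3 sketch that pushes a single patient-monitor reading into a Kinesis stream. The stream name, region, and record structure are assumptions made for the example.

```python
# Sketch: push one patient-monitor reading into an Amazon Kinesis stream.
# The stream name, region, and record structure are illustrative assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

reading = {
    "device_id": "monitor-042",            # hypothetical device identifier
    "metric": "heart_rate",
    "value": 72,
    "timestamp": "2018-04-30T12:00:00Z",
}

kinesis.put_record(
    StreamName="patient-telemetry",        # assumed stream name
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["device_id"],     # partition key spreads records across shards
)
```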
Example: Using AWS Glue to de-identify and ingest healthcare data into S3
Let’s consider a scenario in which a provider maintains patient records in a database that they want to ingest into S3. The provider also wants to de-identify the data by stripping personally identifiable attributes and storing the non-identifiable information in an S3 bucket separate from the one that contains identifiable information. Keeping the two apart allows the healthcare provider to place tighter restrictions on the sensitive information via S3 bucket policies.
To ingest records into S3, we create a Glue job that reads from the source database using a Glue connection. The connection is also used by a Glue crawler to populate the Glue Data Catalog with the schema of the source database. We will use a Glue development endpoint and a Zeppelin notebook server on EC2 to develop and execute the job.
Step 1: Import the necessary libraries and set up a GlueContext, which is a wrapper around the Spark context:
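The original post shows this step as a notebook screenshot; a representative version of the setup cell, using the standard AWS Glue libraries, might look like this:

```python
# Step 1 (sketch): import the Glue/Spark libraries and create a GlueContext.
from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)        # GlueContext wraps the Spark context
spark = glueContext.spark_session
```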
Step 2: Create a dataframe from the source data. I call the dataframe “readmissionsdata”. Here is what the schema would look like:
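The schema screenshot isn’t reproduced here, but a sketch of this step, assuming the crawler has already catalogued the source table under a hypothetical database and table name, would be:

```python
# Step 2 (sketch): load the source table from the Glue Data Catalog and
# convert it to a Spark dataframe. Database/table names are assumptions.
readmissions_dyf = glueContext.create_dynamic_frame.from_catalog(
    database="patient_db",           # hypothetical catalog database
    table_name="readmissions")       # hypothetical catalog table

readmissionsdata = readmissions_dyf.toDF()
readmissionsdata.printSchema()       # inspect the columns in the source data
```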
Step 3: Now select the columns that contain identifiable information and store them in a new dataframe. Call the new dataframe “phi”.
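Continuing from the previous step, a sketch with hypothetical column names (use whichever attributes your schema marks as PHI):

```python
# Step 3 (sketch): keep only the identifiable columns in a "phi" dataframe.
# The column names are hypothetical placeholders.
phi_columns = ["patient_id", "name", "date_of_birth", "address"]
phi = readmissionsdata.select(phi_columns)
phi.show(5)
```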
Step 4: Non-PHI columns are stored in a separate dataframe. Call this dataframe “nonphi”.
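The complement of the PHI selection can be derived from the same dataframe, for example:

```python
# Step 4 (sketch): everything that is not PHI goes into a "nonphi" dataframe.
nonphi_columns = [c for c in readmissionsdata.columns if c not in phi_columns]
nonphi = readmissionsdata.select(nonphi_columns)
nonphi.show(5)
```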
Step 5: Write the two dataframes into two separate S3 buckets
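A sketch of the final step; the bucket names and output format are assumptions (the “phi” bucket matches the security discussion below):

```python
# Step 5 (sketch): write each dataframe to its own S3 bucket.
# Bucket names and the CSV output format are illustrative assumptions.
phi.write.mode("overwrite").csv("s3://phi/readmissions/")
nonphi.write.mode("overwrite").csv("s3://nonphi/readmissions/")
```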
Once successfully executed, the PHI and non-PHI attributes are stored in two separate files in two separate buckets that can be individually maintained.
In 2016, 327 healthcare providers reported a protected health information (PHI) breach, affecting 16.4 million patient records. In 2017, 342 data breaches were reported, involving 3.2 million patient records.
To date, AWS has released 51 HIPAA-eligible services to help customers address security challenges and is in the process of making many more services HIPAA-eligible. These HIPAA-eligible services (along with all other AWS services) help customers build solutions that comply with HIPAA security and auditing requirements. A catalogue of HIPAA-eligible services can be found at AWS HIPAA-eligible services. It is important to note that AWS manages physical and logical access controls for the AWS boundary. However, the overall security of your workloads is a shared responsibility, where you are responsible for controlling user access to content on your AWS accounts.
AWS storage services allow you to store data efficiently while maintaining high durability and scalability. By using Amazon S3 as the central storage layer, you can take advantage of the Amazon S3 storage management features to get operational metrics on your data sets and transition them between various storage classes to save costs. By tagging objects on Amazon S3, you can build a governance layer on Amazon S3 to grant role-based access to objects using AWS IAM and Amazon S3 bucket policies.
To learn more about the Amazon S3 storage management features, see the following link.
In the example above, we are storing the PHI information in a bucket named “phi.” Now, we want to protect this information to make sure it’s encrypted, protected from unauthorized access, and that all access requests to the data are logged.
Encryption: S3 provides settings to enable default encryption on a bucket. This ensures any object in the bucket is encrypted by default.
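For example, default encryption can be enabled with a single API call. A minimal boto3 sketch, assuming the bucket is named “phi” and SSE-S3 is sufficient:

```python
# Sketch: enable default encryption (SSE-S3) on the "phi" bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="phi",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```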
Logging: S3 provides object-level logging that can be used to capture all API calls to the objects. The API calls are logged in AWS CloudTrail for easy access and consolidation. S3 also supports event notifications to proactively alert customers of read and write operations.
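A hedged sketch of turning on S3 data-event logging for the phi bucket on an existing CloudTrail trail; the trail name is an assumption:

```python
# Sketch: log S3 object-level (data event) API calls for the "phi" bucket
# to an existing CloudTrail trail. The trail name is an assumption.
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.put_event_selectors(
    TrailName="phi-audit-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::phi/"]}
            ],
        }
    ],
)
```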
Access control: Customers can use S3 bucket policies and IAM policies to restrict access to the phi bucket. They can also enforce multi-factor authentication on the bucket. For example, the following policy enforces multi-factor authentication on the phi bucket:
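The policy screenshot from the original post isn’t reproduced here, but a representative policy of this kind denies any request that arrives without MFA. Applied with boto3, it might look like the following (bucket name and statement details are assumptions):

```python
# Sketch of a bucket policy that denies any request to the "phi" bucket
# made without multi-factor authentication. Details are illustrative assumptions.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRequestsWithoutMFA",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::phi", "arn:aws:s3:::phi/*"],
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="phi", Policy=json.dumps(policy))
```

The explicit Deny with the BoolIfExists condition blocks non-MFA requests even from principals whose IAM policies would otherwise permit access.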
In Part 1 of this blog, we detailed the ingestion, storage, security and management of healthcare data on AWS. Stay tuned for part two where we are going to dive deep into optimizing the data for analytics and machine learning.
Today, I’m very pleased to announce that AWS services comply with the General Data Protection Regulation (GDPR). This means that, in addition to benefiting from all of the measures that AWS already takes to maintain services security, customers can deploy AWS services as a key part of their GDPR compliance plans.
This announcement confirms we have completed the entirety of our GDPR service readiness audit, validating that all generally available services and features adhere to the high privacy bar and data protection standards required of data processors by the GDPR. We completed this work two months ahead of the May 25, 2018 enforcement deadline in order to give customers and APN partners an environment in which they can confidently build their own GDPR-compliant products, services, and solutions.
AWS’s GDPR service readiness is only part of the story; we are continuing to work alongside our customers and the AWS Partner Network (APN) to help on their journey toward GDPR compliance. Along with this announcement, I’d like to highlight the following examples of ways AWS can help you accelerate your own GDPR compliance efforts.
Security of Personal Data
During our GDPR service readiness audit, our security and compliance experts confirmed that AWS has in place effective technical and organizational measures for data processors to secure personal data in accordance with the GDPR. Security remains our highest priority, and we continue to innovate and invest in a high bar for security and compliance across all global operations. Our industry-leading functionality provides the foundation for our long list of internationally-recognized certifications and accreditations, demonstrating compliance with rigorous international standards, such as ISO 27001 for technical measures, ISO 27017 for cloud security, ISO 27018 for cloud privacy, SOC 1, SOC 2 and SOC 3, PCI DSS Level 1, and EU-specific certifications such as BSI’s Common Cloud Computing Controls Catalogue (C5). AWS continues to pursue the certifications that assist our customers.
Compliance-enabling Services
Many requirements under the GDPR focus on ensuring effective control and protection of personal data. AWS services give you the capability to implement your own security measures in the ways you need in order to enable your compliance with the GDPR, including specific measures such as:
Encryption of personal data
Ability to ensure the ongoing confidentiality, integrity, availability, and resilience of processing systems and services
Ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident
Processes for regularly testing, assessing, and evaluating the effectiveness of technical and organizational measures for ensuring the security of processing
AWS offers an advanced set of security and compliance services designed to help you handle requirements such as those of the GDPR. Numerous AWS services have particular significance for customers focusing on GDPR compliance, including:
Amazon GuardDuty – a security service featuring intelligent threat detection and continuous monitoring
Amazon Macie – a machine learning tool that helps discover and secure personal data stored in Amazon S3
Amazon Inspector – an automated security assessment service that helps keep applications in line with security best practices
AWS Config Rules – a monitoring service that dynamically checks cloud resources for compliance with security rules
Additionally, we have published a whitepaper, “Navigating GDPR Compliance on AWS,” dedicated to this topic. This paper details how to tie GDPR concepts to specific AWS services, including those relating to monitoring, data access, and key management. Furthermore, our GDPR Center will give you access to the up-to-date resources you need to tackle requirements that directly support your GDPR efforts.
Compliant DPA
We offer a GDPR-compliant Data Processing Addendum (DPA), enabling you to comply with GDPR contractual obligations.
Conformity with a Code of Conduct
GDPR introduces adherence to a “code of conduct” as a mechanism for demonstrating sufficient guarantees of requirements that the GDPR places on data processors. In this context, we previously announced compliance with the CISPE Code of Conduct. The CISPE Code of Conduct provides customers with additional assurances regarding their ability to fully control their data in a safe, secure, and compliant environment when they use services from providers like AWS. More detail about the CISPE Code of Conduct can be found at: https://aws.amazon.com/compliance/cispe/
Training and Summits
We can provide you with training on navigating GDPR compliance using AWS services via our Professional Services team. This team has a GDPR workshop offering, which is a two-day facilitated session customized to your specific needs and challenges. We are also providing GDPR presentations during our AWS Summits in European countries, as well as San Francisco and Tokyo.
Additional Resources
Finally, we have teams of compliance, data protection, and security experts, as well as the APN, helping customers across Europe prepare for running regulated workloads in the cloud as the GDPR becomes enforceable. For additional information on this, please contact your AWS Account Manager.
As we move towards May 25 and beyond, we’ll be posting a series of blogs to dive deeper into GDPR-related concepts along with how AWS can help. Please visit our GDPR Center for more information. We’re excited about being your partner in fully addressing this important regulation.
Vice President, AWS Security Assurance
Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.
Are you tired of friends borrowing your books and never returning them? Maybe you’re sure you own 1984 but can’t seem to locate it? Do you find a strange satisfaction in using the supermarket self-checkout simply because of the barcode beep? With the ShelfChecker smart shelf from maker Annelynn described on Instructables, you can be your own librarian and never misplace your books again! Beep!
Harry Potter and the Aesthetically Pleasing Smart Shelf
The ShelfChecker smart shelf
Annelynn built her smart shelf utilising a barcode scanner, LDR light sensors, a Raspberry Pi, plus a few other peripherals and some Python scripts. She has created a fully integrated library checkout system with accompanying NeoPixel location notification for your favourite books.
This build allows you to issue your book-borrowing friends their own IDs and catalogue their usage of your treasured library. On top of that, you’ll be able to use LED NeoPixels to highlight your favourite books, registering their removal and return via light sensor tracking.
Using light sensors for book cataloguing
Once Annelynn had built the shelf, she drilled holes to fit the eight LDRs that would guard her favourite books, and separated them with corner brackets to prevent confusion.
Corner brackets keep the books in place without confusion between their respective light sensors
Due to the limitations of the MCP3008 analogue-to-digital converter (it has eight input channels), the smart shelf can only keep track of eight of your favourite books. But this limitation won’t stop you from cataloguing your entire home library; it simply means you get to pick your ultimate favourites that will occupy the prime real estate on your wall.
Obviously, the light sensors sense light. So when you remove or insert a book, light floods or is blocked from that book’s sensor. The sensor sends this information to the Raspberry Pi. In response, an Arduino controls the NeoPixel strip along the ‘favourites’ shelf to indicate the book’s status.
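As a rough sketch of the Pi side of that loop, the following polls the eight LDRs through the MCP3008 and reports removals and returns. The threshold and wiring are assumptions, not Annelynn’s actual code, which lives in her GitHub scripts.

```python
# Sketch: poll eight LDRs via an MCP3008 ADC and report book removal/return.
# The threshold and channel mapping are assumptions about the wiring.
from time import sleep
from gpiozero import MCP3008

LIGHT_THRESHOLD = 0.5            # above this, light reaches the sensor: book removed
sensors = [MCP3008(channel=ch) for ch in range(8)]
book_present = [True] * 8

while True:
    for slot, sensor in enumerate(sensors):
        removed = sensor.value > LIGHT_THRESHOLD
        if removed and book_present[slot]:
            print("Book removed from slot", slot)
            book_present[slot] = False
        elif not removed and not book_present[slot]:
            print("Book returned to slot", slot)
            book_present[slot] = True
    sleep(0.2)
```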
The book you are looking for is temporarily unavailable
Code your own library
While keeping a close eye on your favourite books, the build also lets you create a complete library catalogue with the help of a MySQL database. Users of the library can log into the system with a barcode scanner, and take out or return books recorded in the database, guided by an LCD screen attached to the Pi.
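Purely as an illustration of the idea (not Annelynn’s actual schema or code), a checkout recorded against a MySQL database might look something like this:

```python
# Sketch: record a checkout after a barcode scan, using a hypothetical schema
# (a "loans" table). Connection details and column names are assumptions.
import mysql.connector

db = mysql.connector.connect(
    host="localhost", user="librarian", password="secret", database="library")
cursor = db.cursor()

def check_out(book_barcode, borrower_id):
    """Mark a scanned book as borrowed by the scanned user."""
    cursor.execute(
        "INSERT INTO loans (book_barcode, borrower_id, taken_out) "
        "VALUES (%s, %s, NOW())",
        (book_barcode, borrower_id))
    db.commit()

check_out("9780451524935", "friend-01")   # e.g. that copy of 1984
```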
I won’t go into an extensive how-to on creating MySQL databases here on the blog, because my glamorous assistant Janina has pulled up these MySQL tutorials to help you get started. Annelynn’s GitHub scripts are also packed with useful comments to keep you on track.
Raspberry Pi and books
We love books and libraries. And considering the growing number of Code Clubs and makerspaces popping up in libraries across the world, and the host of book-based Pi builds we’ve come across, the love seems to be mutual.
Did I say we love books? In fact we love them so much that members of our team have even written a few.*
If you’ve set up any sort of digital making event in a library, have in some way incorporated Raspberry Pi into your own personal book collection, or even managed to recreate the events of your favourite story using digital making, make sure to let us know in the comments below.
* Shameless plug**
Fancy adding some Pi to your home library? Check out these publications from the Raspberry Pi staff:
Just over a year ago, the European Commission approved and adopted the new General Data Protection Regulation (GDPR). The GDPR is the biggest change in data protection laws in Europe since the 1995 introduction of the European Union (EU) Data Protection Directive, also known as Directive 95/46/EC. The GDPR aims to strengthen the security and protection of personal data in the EU and will replace the Directive and all local laws relating to it.
AWS welcomes the arrival of the GDPR. The new, robust requirements raise the bar for data protection, security, and compliance, and will push the industry to follow the most stringent controls, helping to make everyone more secure. I am happy to announce today that all AWS services will comply with the GDPR when it becomes enforceable on May 25, 2018.
In this blog post, I explain the work AWS is doing to help customers with the GDPR as part of our continued commitment to help ensure they can comply with EU Data Protection requirements.
What has AWS been doing?
AWS continually maintains a high bar for security and compliance across all of our regions around the world. This has always been our highest priority—truly “job zero.” The AWS Cloud infrastructure has been architected to offer customers the most powerful, flexible, and secure cloud-computing environment available today. AWS also gives you a number of services and tools to enable you to build GDPR-compliant infrastructure on top of AWS.
One tool we give you is a Data Processing Agreement (DPA). I’m happy to announce today that we have a DPA that will meet the requirements of the GDPR. This GDPR DPA is available now to all AWS customers to help you prepare for May 25, 2018, when the GDPR becomes enforceable. For additional information about the new GDPR DPA or to obtain a copy, contact your AWS account manager.
In addition to account managers, we have teams of compliance experts, data protection specialists, and security experts working with customers across Europe to answer their questions and help them prepare for running workloads in the AWS Cloud after the GDPR comes into force. To further answer customers’ questions, we have updated our EU Data Protection website. This website includes information about what the GDPR is, the changes it brings to organizations operating in the EU, the services AWS offers to help you comply with the GDPR, and advice about how you can prepare.
As well as giving customers a number of tools and services to build GDPR-compliant environments, AWS has achieved a number of internationally recognized certifications and accreditations. In the process, AWS has demonstrated compliance with third-party assurance frameworks such as ISO 27017 for cloud security, ISO 27018 for cloud privacy, PCI DSS Level 1, and SOC 1, SOC 2, and SOC 3. AWS also helps customers meet local security standards such as BSI’s Common Cloud Computing Controls Catalogue (C5) that is important in Germany. We will continue to pursue certifications and accreditations that are important to AWS customers.
What can you do?
Although the GDPR will not be enforceable until May 25, 2018, we are encouraging our customers and partners to start preparing now. If you have already implemented a high bar for compliance, security, and data privacy, the move to GDPR should be simple. However, if you have yet to start your journey to GDPR compliance, we urge you to start reviewing your security, compliance, and data protection processes now to ensure a smooth transition in May 2018.
You should consider the following key points in preparation for GDPR compliance:
Territorial reach – Determining whether the GDPR applies to your organization’s activities is essential to ensuring your organization’s ability to satisfy its compliance obligations.
Data subject rights – The GDPR enhances the rights of data subjects in a number of ways. You will need to make sure you can accommodate the rights of data subjects if you are processing their personal data.
Data breach notifications – If you are a data controller, you must report data breaches to the data protection authorities without undue delay and in any event within 72 hours of you becoming aware of a data breach.
Data protection officer (DPO) – You may need to appoint a DPO who will manage data security and other issues related to the processing of personal data.
Data protection impact assessment (DPIA) – You may need to conduct and, in some circumstances, you might be required to file with the supervisory authority a DPIA for your processing activities.
Data processing agreement (DPA) – You may need a DPA that will meet the requirements of the GDPR, particularly if personal data is transferred outside the European Economic Area.
AWS offers a wide range of services and features to help customers meet requirements of the GDPR, including services for access controls, monitoring, logging, and encryption. For more information about these services and features, see EU Data Protection.
At AWS, security, data protection, and compliance are our top priorities, and we will continue to work vigilantly to ensure that our customers are able to enjoy the benefits of AWS securely, compliantly, and without disruption in Europe and around the world. As we head toward May 2018, we will share more news and resources with you to help you comply with the GDPR.
At Raspberry Pi, we’re determined in our ambition to put the power of digital making into the hands of people all over the world: one way we pursue this is by developing high-quality learning resources to support a growing community of educators. We spend a lot of time thinking hard about what you can learn by tinkering and making with a Raspberry Pi, and other devices and platforms, in order to become skilled in computer programming, electronics, and physical computing.
Now, we’ve taken an exciting step in this journey by defining our own digital making curriculum that will help people everywhere learn new skills.
We have a large and diverse community of people who are interested in digital making. Some might use the curriculum to help guide and inform their own learning, or perhaps their children’s learning. People who run digital making clubs at schools, community centres, and Raspberry Jams may draw on it for extra guidance on activities that will engage their learners. Some teachers may wish to use the curriculum as inspiration for what to teach their students.
Raspberry Pi produces an extensive and varied range of online learning resources and delivers a huge teacher training program. In creating this curriculum, we have produced our own guide that we can use to help plan our resources and make sure we cover the broad spectrum of learners’ needs.
Learning anything involves progression. You start with certain skills and knowledge and then, with guidance, practice, and understanding, you gradually progress towards broader and deeper knowledge and competence. Our digital making curriculum is structured around this progression, and in representing it, we wanted to avoid the age-related and stage-related labels that are often associated with a learner’s progress and the preconceptions these labels bring. We came up with our own labels, using characters to represent different levels of competence, starting with Creator and moving on to Builder and Developer before becoming a Maker.
Progress through our curriculum and become a digital maker
We want to help people to make things so that they can become the inventors, creators, and makers of tomorrow. Digital making, STEAM, project-based learning, and tinkering are at the core of our teaching philosophy which can be summed up simply as ‘we learn best by doing’.
We’ve created five strands which we think encapsulate key concepts and skills in digital making: Design, Programming, Physical Computing, Manufacture, and Community and Sharing.
One of the Raspberry Pi Foundation’s aims is to help people to learn about computer science and how to make things with computers. We believe that learning how to create with digital technology will help people shape an increasingly digital world, and prepare them for the work of the future.
Computational thinking is at the heart of the learning that we advocate. It’s the thought process that underpins computing and digital making: formulating a problem and expressing its solution in such a way that a computer can effectively carry it out. Computational thinking covers a broad range of knowledge and skills.
By progressing through our curriculum, learners will develop computational thinking skills and put them into practice.
What’s not on our curriculum?
If there’s one thing we learned from our extensive work in formulating this curriculum, it’s that no two educators or experts can agree on the best approach to progression and learning in the field of digital making. Our curriculum is intended to represent the skills and thought processes essential to making things with technology. We’ve tried to keep the headline outcomes as broad as possible, and then provide further examples as a guide to what could be included.
Our digital making curriculum is not intended to be a replacement for computer science-related curricula around the world, such as the ‘Computing Programme of Study’ in England or the ‘Digital Technologies’ curriculum in Australia. We hope that following our learning pathways will support the study of formal curricula and exam specifications in a fun and tangible way. As we continue to expand our catalogue of free learning resources, we expect our curriculum will grow and improve, and your input into that process will be vital.
We’re proud to be part of a movement that aims to empower people to shape their world through digital technologies. We value the support of our community of makers, educators, volunteers, and enthusiasts. With this in mind, we’re interested to hear your thoughts on our digital making curriculum. Add your feedback to this form, or talk to us at one of the events that Raspberry Pi will attend in 2017.
AWS made many launch announcements at AWS re:Invent 2016, including the announcement of a new compliance service, AWS Artifact. After so much recent activity, I want to highlight some EU-related news that you might have missed.
AWS has completed its assessment against the Cloud Computing Compliance Controls Catalogue (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI)—Germany’s national cybersecurity authority—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in Germany can leverage the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud. Although this is a newer program, BSI’s C5 standard is a key assurance framework that will be an authoritative program not only for German customers moving to the cloud, but also an influential one for all EU member states. C5 has comprehensive cloud-security criteria and is audited using a proven global assessment and reporting standard. AWS is the first cloud provider to achieve this certification, and it shows our commitment to Germany and the EU region.
This completed C5 assessment follows the August announcement of our transition from Safe Harbor to the EU-US Privacy Shield Framework. Though the EU-US Privacy Shield Framework does not affect the way you use or work with AWS, it ensures that you can continue to transfer data between the US and EU in an internationally recognized, compliant way. You can contact our team at [email protected], or read the FAQ.
We did a bit of a count recently and it turns out that The MagPi magazine has produced more than 3,200 pages of Raspberry Pi-related reading. That’s a lot of quality content (even if I do say so myself).
While we’re rather proud of this achievement, we’re also very aware of the fact that these lovingly crafted collections of words and pictures can very easily get lost in the mists of time (or in the recycling bin).
The first four MagPi Essentials books taught us how to use the command line, make games, experiment with the Sense HAT, and even code music with Sonic Pi
So, in 2015, we set out to make sure all the essential reading from the magazine wasn’t consigned to a dusty and dog-eared pile under the coffee table. Enter the MagPi Essentials range! They’re bite-sized books that build on the best articles in the magazine and mould them into a cohesive, easily digested form.
We’ve recently been hard at work putting the finishing touches on the latest batch, and I’m excited to report that the fifth to eighth books are out in hard copy now! We’ll spare you the minute details on each title in the series here, but I’ve hijacked the ‘You might also like’ doohickey on the right so you can read up on each book individually.
Shiny new books! Well, the cover’s actually a matt laminate… Learn to code with Scratch, hack and make with Minecraft, do electronics using GPIO Zero, and program with C in our latest range.
Want them? Point your mouse fingers towards The Pi Hut or Amazon. You can even grab them directly from The MagPi’s own little lemonade stand if you want. Like everything else Raspberry Pi, they’re also super-affordable: £2.99 on our Apple and Android apps, or £3.99 in print. Not sure you can afford them all? You can also download each book as a free PDF too: just click on the appropriate link in our catalogue.
All of these books are available now. Have a read while we crack on with making the next ones…
Mike Smith wanted to be able to locate specific records in his collection with ease, so he turned to a Raspberry Pi for assistance.
A web server running on the Pi catalogues his vast vinyl collection. Upon selecting a specific record, the appropriate shelf lights up, followed by a single NeoPixel highlighting the record’s location.
recordShelf helps organize and visualize data about your record collection. This is my second video demonstrating its latest form.
The lights are controlled with Adafruit’s FadeCandy, a dithering USB NeoPixel controller with its own server software that makes it easier to drive the NeoPixels. It also puts on a pretty nifty light show.
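For a flavour of how a selected record’s slot might be lit, here is a minimal sketch that talks to the FadeCandy server over its Open Pixel Control port. The pixel count, host, and record-to-pixel mapping are assumptions about Mike’s setup, not his actual code.

```python
# Sketch: light a single NeoPixel (the selected record's slot) via the
# FadeCandy server's Open Pixel Control port. Pixel count, host, and the
# record-to-pixel mapping are illustrative assumptions.
import socket

NUM_PIXELS = 64
FCSERVER = ("localhost", 7890)   # default fcserver OPC port

def highlight(pixel_index, colour=(255, 0, 0)):
    """Send one OPC 'set pixel colours' frame with a single pixel lit."""
    pixels = bytearray(3 * NUM_PIXELS)
    pixels[3 * pixel_index:3 * pixel_index + 3] = bytes(colour)
    header = bytes([0, 0, len(pixels) >> 8, len(pixels) & 0xFF])  # channel 0, command 0
    with socket.create_connection(FCSERVER) as sock:
        sock.sendall(header + pixels)

highlight(12)   # e.g. the shelf slot where record number 12 lives
```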
Records can be selected via artist, title, record label, a unique index number, or even vinyl colour. This also allows Mike to select all the records in a specific category and highlight them at once: every record by a specific artist or label, for example.
Further down the line, Mike is also planning to add RFID support, allowing him to scan a record and have the appropriate shelf light up to indicate where it should be stored. Keep up to date with the build via the project’s Hackaday.io page.