You see that door? You secretly want that to be a MIDI controller? Here’s how to do it, and how to play a cover version of “Break On Through” by The Doors on a door 😉 Link to source code and the DIY kit below.
If you don’t live in a home with squeaky doors — living room door, I’m looking at you — you probably never think about the musical potential of mundane household objects.
If the sound of a slammed oven door isn’t involved in your ditty of choice, you may instead want to add some electronics to that sweet, sweet harmony maker, just like Floyd.
Trusting in the melodic possibilities of incorporating a Raspberry Pi 3B+ and various sensory components into a humble door, Floyd created The Doors Door, a musical door that plays… well, I’m sure you can guess.
If you want to build your own, you can practice some sophisticated ‘copy and paste’ programming after downloading the code. And for links to all the kit you need, check out the description of the video over on YouTube. While you’re there, be sure to give the video a like, and subscribe to Floyd’s channel.
And now, to get you pumped for the weekend, here’s Jim:
Blinky lights and music created using a Raspberry Pi? Count us in! When Aaron Chambers shared his latest project, Py-Lights, on Reddit, we were quick to ask for more information. And here it is:
Controlling lights with MIDI commands
Tentatively titled Py-Lights, Aaron’s project allows users to assign light patterns to MIDI actions, creating a rather lovely blinky light display.
For his example, Aaron connected a MIDI keyboard to a strip of RGB LEDs via a Raspberry Pi that ran his custom Python code.
The program I made lets me bind “actions” (strobe white, flash blue, disable all colors, etc.) to any input and any input type (hold, knob, trigger, etc.). And each action type has a set of parameters that I bind to the input. For example, I have a knob that changes a strobe’s intensity, and another knob that changes its speed.
The program updates each action, pulls its resulting color, and adds them together, then sends that to the LEDs. I’m using rtmidi for reading the midi device and pigpio for handling the LED output.
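To make that concrete, here is a minimal sketch of the input-to-colour loop, assuming a simple analogue RGB strip driven from three PWM pins (the pin numbers and the note-to-colour map are hypothetical, and Aaron's real action system is far richer):

```python
import time

import pigpio   # drives the LED output via PWM
import rtmidi   # python-rtmidi, reads the MIDI device

R_PIN, G_PIN, B_PIN = 17, 22, 24                 # hypothetical wiring
ACTIONS = {60: (255, 0, 0), 62: (0, 0, 255)}     # MIDI note -> colour

pi = pigpio.pi()            # requires the pigpiod daemon to be running
midi_in = rtmidi.MidiIn()
midi_in.open_port(0)

while True:
    event = midi_in.get_message()                # (message, delta_time) or None
    if event:
        (status, note, velocity), _ = event
        if status & 0xF0 == 0x90 and velocity > 0 and note in ACTIONS:
            r, g, b = ACTIONS[note]              # note-on: show the bound colour
        else:
            r, g, b = 0, 0, 0                    # note-off or unbound note: go dark
        for pin, value in zip((R_PIN, G_PIN, B_PIN), (r, g, b)):
            pi.set_PWM_dutycycle(pin, value)
    time.sleep(0.001)
```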
Aaron has updated the Py-Lights GitHub repo for the project to include a handy readme file and a more stable build.
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easier to prepare and load your data for analytics. You can create and run an ETL job with a few clicks on the AWS Management Console. Just point AWS Glue to your data store. AWS Glue discovers your data and stores the associated metadata (for example, a table definition and schema) in the AWS Glue Data Catalog.
AWS Glue has native connectors to data sources using JDBC drivers, either on AWS or elsewhere, as long as there is IP connectivity. In this post, we demonstrate how to connect to data sources that are not natively supported in AWS Glue today. We walk through connecting to and running ETL jobs against two such data sources, IBM DB2 and SAP Sybase. However, you can use the same process with any other JDBC-accessible database.
AWS Glue data sources
AWS Glue natively supports the following data stores by using the JDBC protocol:
One of the fastest growing architectures deployed on AWS is the data lake. The ETL processes that are used to ingest, clean, transform, and structure data are critically important for this architecture. Having the flexibility to interoperate with a broader range of database engines allows for a quicker adoption of the data lake architecture.
For data sources that AWS Glue doesn’t natively support, such as IBM DB2, Pivotal Greenplum, SAP Sybase, or any other relational database management system (RDBMS), you can import custom database connectors from Amazon S3 into AWS Glue jobs. In this case, the connection to the data source must be made from the AWS Glue script to extract the data, rather than using AWS Glue connections. To learn more, see Providing Your Own Custom Scripts in the AWS Glue Developer Guide.
Setting up an ETL job for an IBM DB2 data source
The first example demonstrates how to connect the AWS Glue ETL job to an IBM DB2 instance, transform the data from the source, and store it in Apache Parquet format in Amazon S3. To successfully create the ETL job using an external JDBC driver, you must define the following:
The S3 location of the job script
The S3 location of the temporary directory
The S3 location of the JDBC driver
The S3 location of the Parquet data (output)
The IAM role for the job
By default, AWS Glue suggests bucket names for the scripts and the temporary directory using the following format:
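The console screenshot isn't reproduced here, but the suggested names follow a convention along these lines (account ID and Region substituted in; treat the exact prefixes as an assumption):

```
aws-glue-scripts-<account_id>-<region>
aws-glue-temporary-<account_id>-<region>
```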
Keep in mind that having the AWS Glue job and S3 buckets in the same AWS Region helps save on cross-Region data transfer fees. For this post, we will work in the US East (Ohio) Region (us-east-2).
Creating the IAM role
The next step is to set up the IAM role that the ETL job will use:
Sign in to the AWS Management Console, and search for IAM:
On the IAM console, choose Roles in the left navigation pane.
Choose Create role. The role type of trusted entity must be an AWS service, specifically AWS Glue.
Choose Next: Permissions.
Search for the AWSGlueServiceRole policy, and select it.
Search again, now for the SecretsManagerReadWrite policy. This policy allows the AWS Glue job to access database credentials that are stored in AWS Secrets Manager.
CAUTION: This policy is open and is being used for testing purposes only. You should create a custom policy to narrow the access just to the secrets that you want to use in the ETL job.
Select this policy, and choose Next: Review.
Give your role a name, for example, GluePermissions, and confirm that both policies were selected.
Choose Create role.
Now that you have created the IAM role, it’s time to upload the JDBC driver to the defined location in Amazon S3. For this example, we will use the DB2 driver, which is available on the IBM Support site.
Storing database credentials
It is a best practice to store database credentials in a safe store. In this case, we use AWS Secrets Manager to securely store credentials. Follow these steps to create those credentials:
Open the console, and search for Secrets Manager.
In the AWS Secrets Manager console, choose Store a new secret.
Under Select a secret type, choose Other type of secrets.
In the Secret key/value section, set one row for each of the following parameters:
db_username
db_password
db_url (for example, jdbc:db2://10.10.12.12:50000/SAMPLE)
db_table
driver_name (for example, com.ibm.db2.jcc.DB2Driver)
output_bucket (for example, aws-glue-data-output-1234567890-us-east-2/User)
Choose Next.
For Secret name, use DB2_Database_Connection_Info.
Choose Next.
Keep the Disable automatic rotation check box selected.
Choose Next.
Choose Store.
Adding a job in AWS Glue
The next step is to author the AWS Glue job, following these steps:
In the AWS Management Console, search for AWS Glue.
In the navigation pane on the left, choose Jobs under the ETL section.
Choose Add job.
Fill in the basic Job properties:
Give the job a name (for example, db2-job).
Choose the IAM role that you created previously (GluePermissions).
For This job runs, choose A new script to be authored by you.
For ETL language, choose Python.
In the Script libraries and job parameters section, choose the location of your JDBC driver for Dependent jars path.
Choose Next.
On the Connections page, choose Next.
On the summary page, choose Save job and edit script. This creates the job and opens the script editor.
In the editor, replace the existing code with the following script. Important: Line 47 of the script corresponds to the mapping of the fields in the source table to the destination, dropping of the null fields to save space in the Parquet destination, and finally writing to Amazon S3 in Parquet format.
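The full script isn't reproduced here, but a minimal sketch of its shape might look like the following: fetch the connection details from Secrets Manager, read the source table over JDBC with the driver supplied as a dependent JAR, then drop nulls and write Parquet to S3. The secret name matches the one created earlier; the exact field mapping is an assumption.

```python
import json
import sys

import boto3
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args['JOB_NAME'], args)

# Pull the connection parameters stored earlier in AWS Secrets Manager
secrets = boto3.client('secretsmanager', region_name='us-east-2')
conn = json.loads(secrets.get_secret_value(
    SecretId='DB2_Database_Connection_Info')['SecretString'])

# Read the source table through the external JDBC driver
df = (spark.read.format('jdbc')
      .option('url', conn['db_url'])
      .option('dbtable', conn['db_table'])
      .option('user', conn['db_username'])
      .option('password', conn['db_password'])
      .option('driver', conn['driver_name'])
      .load())

df.printSchema()
print(df.count())

# Drop null fields to save space, then write to S3 in Parquet format
df.na.drop().write.mode('overwrite').parquet('s3://' + conn['output_bucket'])

job.commit()
```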
Choose the black X on the right side of the screen to close the editor.
Running the ETL job
Now that you have created the job, the next step is to execute it as follows:
On the Jobs page, select your new job. On the Action menu, choose Run job, and confirm that you want to run the job. Wait a few moments as it finishes the execution.
After the job shows as Succeeded, choose Logs to read the output of the job.
In the output of the job, you will find the schema printed by df.printSchema() and the row count reported by df.count().
Also, if you go to your output bucket in S3, you will find the Parquet result of the ETL job.
Using AWS Glue, you have created an ETL job that connects to an existing database using an external JDBC driver. It enables you to execute any transformation that you need.
Setting up an ETL job for an SAP Sybase data source
In this section, we describe how to create an AWS Glue ETL job against an SAP Sybase data source. The process mentioned in the previous section works for a Sybase data source with a few changes required in the job:
While creating the job, choose the correct jar for the JDBC dependency.
In the script, change the reference to the secret to be used from AWS Secrets Manager:
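Assuming the Sybase credentials were stored under a second secret (the name below is illustrative), the change is a one-liner:

```python
# Point the job at the Sybase secret instead of the DB2 one
conn = json.loads(secrets.get_secret_value(
    SecretId='Sybase_Database_Connection_Info')['SecretString'])
```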
After you successfully execute the new ETL job, the output contains the same type of information that was generated with the DB2 data source.
Note that each of these JDBC drivers has its own nuances and different licensing terms that you should be aware of before using them.
Maximizing JDBC read parallelism
Something to keep in mind while working with big data sources is the memory consumption. In some cases, “Out of Memory” errors are generated when all the data is read into a single executor. One approach to optimize this is to rely on the parallelism on read that you can implement with Apache Spark and AWS Glue. To learn more, see the Apache Spark SQL module.
You can use the following options:
partitionColumn: The name of an integer column that is used for partitioning.
lowerBound: The minimum value of partitionColumn that is used to decide partition stride.
upperBound: The maximum value of partitionColumn that is used to decide partition stride.
numPartitions: The number of partitions. This, along with lowerBound (inclusive) and upperBound (exclusive), forms partition strides for generated WHERE clause expressions used to split the partitionColumn. When unset, this defaults to SparkContext.defaultParallelism.
Those options specify the parallelism of the table read. lowerBound and upperBound decide the partition stride, but they don’t filter the rows in the table. Therefore, Spark partitions and returns all rows in the table. For example:
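As a sketch, a partitioned read of a hypothetical employees table keyed by an integer employee_id column might look like this:

```python
# Ten executors read disjoint employee_id ranges in parallel
df = (spark.read.format('jdbc')
      .option('url', conn['db_url'])
      .option('user', conn['db_username'])
      .option('password', conn['db_password'])
      .option('driver', conn['driver_name'])
      .option('dbtable', 'employees')          # hypothetical table
      .option('partitionColumn', 'employee_id')
      .option('lowerBound', '1')
      .option('upperBound', '100000')
      .option('numPartitions', '10')
      .load())
```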
It’s important to be careful with the number of partitions because too many partitions could also result in Spark crashing your external database systems.
Conclusion
Using the process described in this post, you can connect to and run AWS Glue ETL jobs against any data source that can be reached using a JDBC driver. This includes new generations of common analytical databases like Greenplum and others.
You can improve the query efficiency of these datasets by using partitioning and pushdown predicates. For more information, see Managing Partitions for ETL Output in AWS Glue. This technique opens the door to moving data and feeding data lakes in hybrid environments.
Kapil Shardha is a Technical Account Manager and supports enterprise customers with their AWS adoption. He has background in infrastructure automation and DevOps.
William Torrealba is an AWS Solutions Architect supporting customers with their AWS adoption. He has background in Application Development, Highly Available Distributed Systems, Automation, and DevOps.
The Internet of Things (IoT) has precipitated an influx of connected devices and data that can be mined to gain useful business insights. If you own an IoT device, you might want the data to be uploaded seamlessly from your connected devices to the cloud so that you can make use of cloud storage and the processing power to perform sophisticated analysis of data. To upload the data to the AWS Cloud, devices must pass authentication and authorization checks performed by the respective AWS services. The standard way of authenticating AWS requests is the Signature Version 4 algorithm that requires the caller to have an access key ID and secret access key. Consequently, you need to hardcode the access key ID and the secret access key on your devices. Alternatively, you can use the built-in X.509 certificate as the unique device identity to authenticate AWS requests.
AWS IoT has introduced the credentials provider feature that allows a caller to authenticate AWS requests by having an X.509 certificate. The credentials provider authenticates a caller using an X.509 certificate, and vends a temporary, limited-privilege security token. The token can be used to sign and authenticate any AWS request. Thus, the credentials provider relieves you from having to manage and periodically refresh the access key ID and secret access key remotely on your devices.
In the process of retrieving a security token, you use AWS IoT to create a thing (a representation of a specific device or logical entity), register a certificate, and create AWS IoT policies. You also configure an AWS Identity and Access Management (IAM) role and attach appropriate IAM policies to the role so that the credentials provider can assume the role on your behalf. You then make an HTTP-over-Transport Layer Security (TLS) mutual authentication request to the credentials provider that uses your preconfigured thing, certificate, policies, and IAM role to authenticate and authorize the request, and obtain a security token on your behalf. You can then use the token to sign any AWS request using Signature Version 4.
In this blog post, I explain the AWS IoT credentials provider design and then demonstrate the end-to-end process of retrieving a security token from AWS IoT and using the token to write a temperature and humidity record to a specific Amazon DynamoDB table.
Note: This post assumes you are familiar with AWS IoT and IAM to perform steps using the AWS CLI and OpenSSL. Make sure you are running the latest version of the AWS CLI.
Overview of the credentials provider workflow
The following numbered diagram illustrates the credentials provider workflow. The diagram is followed by explanations of the steps.
To explain the steps of the workflow as illustrated in the preceding diagram:
The AWS IoT device uses the AWS SDK or custom client to make an HTTPS request to the credentials provider for a security token. The request includes the device X.509 certificate for authentication.
The credentials provider forwards the request to the AWS IoT authentication and authorization module to verify the certificate and the permission to request the security token.
If the certificate is valid and has permission to request a security token, the AWS IoT authentication and authorization module returns success. Otherwise, it returns failure, which goes back to the device with the appropriate exception.
The credentials provider then invokes AWS STS to assume the preconfigured IAM role. If assuming the role succeeds, AWS STS returns a temporary, limited-privilege security token to the credentials provider.
The credentials provider returns the security token to the device.
The AWS SDK on the device uses the security token to sign an AWS request with AWS Signature Version 4.
The requested service invokes IAM to validate the signature and authorize the request against access policies attached to the preconfigured IAM role.
If IAM validates the signature successfully and authorizes the request, the request goes through.
In another solution, you could configure an AWS IoT rule that invokes an AWS Lambda function to ingest your device data and send it to another AWS service. However, in applications that require the uploading of large files such as videos or aggregated telemetry to the AWS Cloud, you may want your devices to be able to authenticate and send data directly to the AWS service of your choice. The credentials provider enables you to do that.
Outline of the steps to retrieve and use security token
Perform the following steps as part of this solution:
Create an AWS IoT thing: Start by creating a thing that corresponds to your home thermostat in the AWS IoT thing registry database. This allows you to authenticate the request as a thing and use thing attributes as policy variables in AWS IoT and IAM policies.
Register a certificate: Create and register a certificate with AWS IoT, and attach it to the thing for successful device authentication.
Create and configure an IAM role: Create an IAM role to be assumed by the service on behalf of your device. I illustrate how to configure a trust policy and an access policy so that AWS IoT has permission to assume the role, and the token has necessary permission to make requests to DynamoDB.
Create a role alias: Create a role alias in AWS IoT. A role alias is an alternate data model pointing to an IAM role. The credentials provider request must include a role alias name to indicate which IAM role to assume for obtaining a security token from AWS STS. You may update the role alias on the server to point to a different IAM role and thus make your device obtain a security token with different permissions.
Attach a policy: Create an authorization policy with AWS IoT and attach it to the certificate to control which device can assume which role aliases.
Request a security token: Make an HTTPS request to the credentials provider to retrieve a security token.
Use the security token to sign a request: Use the retrieved token to sign a request to DynamoDB and successfully write a temperature and humidity record from your home thermostat in a specific table. Thus, starting with an X.509 certificate on your home thermostat, you can successfully upload your thermostat record to DynamoDB and use it for further analysis. Before the availability of the credentials provider, you could not do this.
Deploy the solution
1. Create an AWS IoT thing
Register your home thermostat in the AWS IoT thing registry database by creating a thing type and a thing. You can use the AWS CLI with the following command to create a thing type. The thing type allows you to store description and configuration information that is common to a set of things.
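The commands aren't reproduced here, but they take roughly this shape (the thing, type, and attribute names are chosen to match the rest of this walkthrough):

```bash
aws iot create-thing-type --thing-type-name thermostat \
    --thing-type-properties "thingTypeDescription=home thermostat"

aws iot create-thing --thing-name MyHomeThermostat \
    --thing-type-name thermostat \
    --attribute-payload '{"attributes": {"Owner": "Alice"}}'
```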
2. Register a certificate

Now, you need to have a Certificate Authority (CA) certificate, sign a device certificate using the CA certificate, and register both certificates with AWS IoT before your device can authenticate to AWS IoT. If you do not already have a CA certificate, you can use OpenSSL to create a CA certificate, as described in Use Your Own Certificate. To register your CA certificate with AWS IoT, follow the steps on Registering Your CA Certificate.
You then have to create a device certificate signed by the CA certificate and register it with AWS IoT, which you can do by following the steps on Creating a Device Certificate Using Your CA Certificate. Save the certificate and the corresponding key pair; you will use them when you request a security token later. Also, remember the password you provide when you create the certificate.
Run the following command in the AWS CLI to attach the device certificate to your thing so that you can use thing attributes in policy variables.
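A sketch of the command, with the certificate ARN from the registration step substituted in:

```bash
aws iot attach-thing-principal --thing-name MyHomeThermostat \
    --principal <certificate-arn>
```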
If the attach-thing-principal command succeeds, the output is empty.
3. Configure an IAM role
Next, configure an IAM role in your AWS account that will be assumed by the credentials provider on behalf of your device. You are required to associate two policies with the role: a trust policy that controls who can assume the role, and an access policy that controls which actions can be performed on which resources by assuming the role.
The following trust policy grants the credentials provider permission to assume the role. Put it in a text document and save the document with the name, trustpolicyforiot.json.
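The document itself isn't reproduced here, but a trust policy granting the credentials provider this permission looks like the following (credentials.iot.amazonaws.com is the service principal used by the credentials provider):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "credentials.iot.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```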
The following access policy allows DynamoDB operations on the table that has the same name as the thing name that you created in Step 1, MyHomeThermostat, by using credentials-iot:ThingName as a policy variable. I explain after Step 5 about using thing attributes as policy variables. Put the following policy in a text document and save the document with the name, accesspolicyfordynamodb.json.
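A sketch of that access policy, using credentials-iot:ThingName so the token can only touch the table named after the requesting thing (the action list is trimmed to what this walkthrough needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "dynamodb:PutItem" ],
      "Resource": "arn:aws:dynamodb:us-east-2:<your_aws_account_id>:table/${credentials-iot:ThingName}"
    }
  ]
}
```

You would create the role and the customer managed policy first, for example with aws iam create-role --role-name dynamodb-access-role --assume-role-policy-document file://trustpolicyforiot.json and aws iam create-policy --policy-name accesspolicyfordynamodb --policy-document file://accesspolicyfordynamodb.json.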
Finally, run the following command in the AWS CLI to attach the access policy to your role.
aws iam attach-role-policy --role-name dynamodb-access-role --policy-arn arn:aws:iam::<your_aws_account_id>:policy/accesspolicyfordynamodb
If the attach-role-policy command succeeds, the output is empty.
Configure the PassRole permissions
The IAM role that you have created must be passed to AWS IoT to create a role alias, as described in Step 4. The user who performs the operation requires iam:PassRole permission to authorize this action. You also should add permission for the iam:GetRole action to allow the user to retrieve information about the specified role. Create the following policy to grant iam:PassRole and iam:GetRole permissions. Name this policy, passrolepermission.json.
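A minimal sketch of that policy, scoped to the role created earlier:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "iam:GetRole", "iam:PassRole" ],
      "Resource": "arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role"
    }
  ]
}
```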
Now, run the following command to attach the policy to the user.
aws iam attach-user-policy --policy-arn arn:aws:iam::<your_aws_account_id>:policy/passrolepermission --user-name <user_name>
If the attach-user-policy command succeeds, the output is empty.
4. Create a role alias
Now that you have configured the IAM role, you will create a role alias with AWS IoT. You must provide the following pieces of information when creating a role alias:
RoleAlias: This is the primary key of the role alias data model and hence a mandatory attribute. It is a string; the minimum length is 1 character, and the maximum length is 128 characters.
RoleArn: This is the Amazon Resource Name (ARN) of the IAM role you have created. This is also a mandatory attribute.
CredentialDurationSeconds: This is an optional attribute specifying the validity (in seconds) of the security token. The minimum value is 900 seconds (15 minutes), and the maximum value is 3,600 seconds (60 minutes); the default value is 3,600 seconds, if not specified.
Run the following command in the AWS CLI to create a role alias. Use the credentials of the user to whom you have given the iam:PassRole permission.
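The command takes the alias name, the role ARN, and optionally the credential duration, along these lines:

```bash
aws iot create-role-alias \
    --role-alias Thermostat-dynamodb-access-role-alias \
    --role-arn arn:aws:iam::<your_aws_account_id>:role/dynamodb-access-role \
    --credential-duration-seconds 3600
```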
5. Attach a policy

You created and registered a certificate with AWS IoT earlier for successful authentication of your device. Now, you need to create and attach a policy to the certificate to authorize the request for the security token.
Let’s say you want to allow a thing to get credentials for the role alias, Thermostat-dynamodb-access-role-alias, with thing owner Alice, thing type thermostat, and the thing attached to a principal. The following policy, with thing attributes as policy variables, achieves these requirements. After this step, I explain more about using thing attributes as policy variables. Put the policy in a text document, and save it with the name, alicethermostatpolicy.json.
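A sketch of alicethermostatpolicy.json using the iot:Connection.Thing policy variables (the resource ARN and condition keys follow AWS IoT's documented forms, but treat the details as illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:AssumeRoleWithCertificate",
      "Resource": "arn:aws:iot:us-east-2:<your_aws_account_id>:rolealias/Thermostat-dynamodb-access-role-alias",
      "Condition": {
        "StringEquals": {
          "iot:Connection.Thing.Attributes[Owner]": "Alice",
          "iot:Connection.Thing.ThingTypeName": "thermostat"
        },
        "Bool": { "iot:Connection.Thing.IsAttached": "true" }
      }
    }
  ]
}
```

Register the policy with aws iot create-policy and attach it to the certificate with aws iot attach-policy --policy-name AliceThermostatPolicy --target <certificate-arn>.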
If the attach-policy command succeeds, the output is empty.
You have completed all the necessary steps to request an AWS security token from the credentials provider!
Using thing attributes as policy variables
Before I show how to request a security token, I want to explain more about how to use thing attributes as policy variables and the advantage of using them. As a prerequisite, a device must provide a thing name in the credentials provider request.
Thing substitution variables in AWS IoT policies
AWS IoT Simplified Permission Management allows you to associate a connection with a specific thing, and allow the thing name, thing type, and other thing attributes to be available as substitution variables in AWS IoT policies. You can write a generic AWS IoT policy as in alicethermostatpolicy.json in Step 5, attach it to multiple certificates, and authorize the connection as a thing. For example, you could attach alicethermostatpolicy.json to certificates corresponding to each of the thermostats you have that you want to assume the role alias, Thermostat-dynamodb-access-role-alias, and allow operations only on the table with the name that matches the thing name. For more information, see the full list of thing policy variables.
Thing substitution variables in IAM policies
You also can use the following three substitution variables in the IAM role’s access policy (I used credentials-iot:ThingName in accesspolicyfordynamodb.json in Step 3):
credentials-iot:ThingName
credentials-iot:ThingTypeName
credentials-iot:AwsCertificateId
When the device provides the thing name in the request, the credentials provider fetches these three variables from the database and adds them as context variables to the security token. When the device uses the token to access DynamoDB, the variables in the role’s access policy are replaced with the corresponding values in the security token. Note that you also can use credentials-iot:AwsCertificateId as a policy variable; AWS IoT returns certificateId during registration.
6. Request a security token
Make an HTTPS request to the credentials provider to fetch a security token. You have to supply the following information:
Certificate and key pair: Because this is an HTTP request over TLS mutual authentication, you have to provide the certificate and the corresponding key pair to your client while making the request. Use the same certificate and key pair that you used during certificate registration with AWS IoT.
RoleAlias: Provide the role alias (in this example, Thermostat-dynamodb-access-role-alias) to be assumed in the request.
ThingName: Provide the thing name that you created earlier in the AWS IoT thing registry database. This is passed as a header with the name, x-amzn-iot-thingname. Note that the thing name is mandatory only if you have thing attributes as policy variables in AWS IoT or IAM policies.
Run the following command in the AWS CLI to obtain your AWS account-specific endpoint for the credentials provider. See the DescribeEndpoint API documentation for further details.
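The endpoint type to ask for is iot:CredentialProvider:

```bash
aws iot describe-endpoint --endpoint-type iot:CredentialProvider
```

The returned endpoint has the form <random-id>.credentials.iot.<region>.amazonaws.com.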
Note that if you are on Mac OS X, you need to export your certificate to a .pfx or .p12 file before you can pass it in the HTTPS request. Use OpenSSL with the following command to convert the device certificate from .pem to .pfx format. Remember the password because you will need it subsequently in a curl command.
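A typical conversion looks like this (the file names are whatever you used when creating the device certificate):

```bash
openssl pkcs12 -export -in device_cert.pem -inkey device_key.pem \
    -out device_cert.pfx
```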
Now, make an HTTPS request to the credentials provider to fetch a security token. You may use your preferred HTTP client for the request. I use curl in the following examples.
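A sketch of the request, using the thing name, role alias, and endpoint from the previous steps (on Mac OS X, pass the .pfx file instead, as --cert device_cert.pfx:<password>):

```bash
curl --cert device_cert.pem --key device_key.pem \
    -H "x-amzn-iot-thingname: MyHomeThermostat" \
    https://<endpoint-id>.credentials.iot.us-east-2.amazonaws.com/role-aliases/Thermostat-dynamodb-access-role-alias/credentials
```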
This command returns a security token object that has an accessKeyId, a secretAccessKey, a sessionToken, and an expiration. The following is sample output of the curl command.
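The values below are placeholders, but the response has this shape:

```json
{
  "credentials": {
    "accessKeyId": "AKIA...",
    "secretAccessKey": "...",
    "sessionToken": "...",
    "expiration": "2018-02-07T21:53:16Z"
  }
}
```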
7. Use the security token to sign a request

Create a DynamoDB table called MyHomeThermostat in your AWS account. You will have to choose the hash (partition key) and the range (sort key) while creating the table to uniquely identify a record. Make the hash the serial_number of the thermostat and the range the timestamp of the record. Create a text file with the following JSON to put a temperature and humidity record in the table. Name the file, item.json.
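A minimal item.json in DynamoDB's attribute-value format might look like this (the readings are sample values):

```json
{
  "serial_number": { "S": "thermostat-0001" },
  "timestamp": { "S": "2018-02-07T12:00:00Z" },
  "temperature": { "N": "21.5" },
  "humidity": { "N": "45.0" }
}
```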
You can use the accessKeyId, secretAccessKey, and sessionToken retrieved from the output of the curl command to sign a request that writes the temperature and humidity record to the DynamoDB table. Use the following commands to accomplish this.
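One convenient approach is to export the token as environment variables and let the AWS CLI perform the Signature Version 4 signing for you:

```bash
export AWS_ACCESS_KEY_ID=<accessKeyId>
export AWS_SECRET_ACCESS_KEY=<secretAccessKey>
export AWS_SESSION_TOKEN=<sessionToken>

aws dynamodb put-item --table-name MyHomeThermostat \
    --item file://item.json --region us-east-2
```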
In this blog post, I demonstrated how to retrieve a security token by using an X.509 certificate and then writing an item to a DynamoDB table by using the security token. Similarly, you could run applications on surveillance cameras or sensor devices that exchange the X.509 certificate for an AWS security token and use the token to upload video streams to Amazon Kinesis or telemetry data to Amazon CloudWatch.
If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the AWS IoT forum.
Security updates have been issued by Arch Linux (mbedtls), CentOS (gcab and java-1.7.0-openjdk), Debian (drupal7, lucene-solr, wavpack, and xmltooling), Fedora (dnsmasq, gcab, gimp, golang, knot-resolver, ldns, libsamplerate, mingw-OpenEXR, mingw-poppler, python-crypto, qt5-qtwebengine, sblim-sfcb, systemd, unbound, and wavpack), Mageia (ioquake3, TiMidity++, tomcat, tomcat-native, and wireshark), openSUSE (systemd and zziplib), Red Hat (erlang and openstack-nova and python-novaclient), and SUSE (kernel).
Piano keys are so limiting! Why not swap them out for LEDs and the wealth of instruments in Pygame to build air keys, as demonstrated by Instructables maker 2fishy?
Raspberry Pi LED Light Schroeder Piano – Twinkle Little Star
Keys? Where we’re going you don’t need keys!
This project, created by either Yolanda or Ken Fisher (or both!), uses an array of LEDs and photoresistors to form a MIDI sequencer. Twelve LEDs replace piano keys, and another three change octaves and access the menu.
Each LED is paired with a photoresistor, which detects the emitted light to form a closed circuit. Interrupting the light beam — in this case with a finger — breaks the circuit, telling the Python program to perform an action.
We’re all hoping this is just the scaled-down prototype of a full-sized LED grand piano
Using Pygame, the 2fishy team can access 75 different instruments and 128 notes per instrument, making their wooden piano more than just a one-hit wonder.
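If you fancy experimenting with the same idea, here's a minimal sketch using Pygame's MIDI output (the instrument number is arbitrary; General MIDI defines 128 of them):

```python
import time

import pygame.midi

pygame.midi.init()
player = pygame.midi.Output(pygame.midi.get_default_output_id())

player.set_instrument(19)   # 19 is a church organ in General MIDI
player.note_on(60, 127)     # middle C at full velocity
time.sleep(0.5)
player.note_off(60, 127)

player.close()
pygame.midi.quit()
```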
Piano building
The duo made the piano’s body out of plywood, hardboard, and dowels, and equipped it with a Raspberry Pi 2, a speaker, and the aforementioned LEDs and photoresistors.
A Raspberry Pi 2 and speaker sit within the wooden body, with LEDs and photoresistors in place of the keys.
A complete how-to for the build, including some rather fancy and informative schematics, is available at Instructables, where 2fishy received a bronze medal for their project. Congratulations!
Learn more
If you’d like to learn more about using Pygame, check out The MagPi’s Make Games with Python Essentials Guide, available both in print and as a free PDF download.
And for more music-based projects using a variety of tech, be sure to browse our free resources.
Lastly, if you’d like to see more piano-themed Raspberry Pi projects, take a look at our Big Minecraft Piano, these brilliant piano stairs, this laser-guided piano teacher, and our video below about the splendid Street Fighter duelling pianos we witnessed at Maker Faire.
Two pianos wired up as Playstation 2 controllers allow users to battle…musically! We caught up with makers Eric Redon and Cyril Chapellier of foobarflies a…
Right now, 400km above the Earth aboard the International Space Station, are two very special Raspberry Pi computers. They were launched into space on 6 December 2015 and are, most assuredly, the farthest-travelled Raspberry Pi computers in existence. Each year they run experiments that school students create in the European Astro Pi Challenge.
Left: Astro Pi Vis (Ed); right: Astro Pi IR (Izzy). Image credit: ESA.
The European Columbus module
Today marks the tenth anniversary of the launch of the European Columbus module. The Columbus module is the European Space Agency’s largest single contribution to the ISS, and it supports research in many scientific disciplines, from astrobiology and solar science to metallurgy and psychology. More than 225 experiments have been carried out inside it during the past decade. It’s also home to our Astro Pi computers.
Here’s a video from 7 February 2008, when Space Shuttle Atlantis went skywards carrying the Columbus module in its cargo bay.
Today, coincidentally, is also the deadline for the European Astro Pi Challenge: Mission Space Lab. Participating teams have until midnight tonight to submit their experiments.
Anniversary celebrations
At 16:30 GMT today there will be a live event on NASA TV for the Columbus module anniversary with NASA flight engineers Joe Acaba and Mark Vande Hei.
Our Astro Pi computers will be joining in the celebrations by displaying a digital birthday candle that the crew can blow out. It works by detecting an increase in humidity when someone blows on it. The video below demonstrates the concept.
The exact Astro Pi code that will run on the ISS today is available for you to download and run on your own Raspberry Pi and Sense HAT. You’ll notice that the program includes code to make it stop automatically when the date changes to 8 February. This is just to save time for the ground control team.
If you have a Raspberry Pi and a Sense HAT, you can use the terminal commands below to download and run the code yourself:
When you see a blank blue screen with the brightness increasing, the Sense HAT is measuring the baseline humidity. It does this every 15 minutes so it can recalibrate to take account of natural changes in background humidity. A humidity increase of 2% is needed to blow out the candle, so if the background humidity changes by more than 2% in 15 minutes, it’s possible to get a false positive. Press Ctrl + C to quit.
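The logic is simple enough to sketch. This illustrates the idea rather than reproducing the exact Astro Pi code:

```python
from sense_hat import SenseHat

sense = SenseHat()
baseline = sense.get_humidity()   # the real code re-baselines every 15 minutes

while True:
    # A rise of 2% relative humidity over the baseline counts as a blow
    if sense.get_humidity() - baseline >= 2:
        sense.clear()             # candle out!
        break
```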
Please tweet pictures of your candles to @astro_pi – we might share yours! And if we’re lucky, we might catch a glimpse of the candle on the ISS during the NASA TV event at 16:30 GMT today.
Three years after the last stable release, version 3.0 of the MusE MIDI/Audio sequencer is now available. As you might expect, there are many changes since the last release, including a switch to Qt5, a new Plugin Path editor in Global Settings, a mixer makeover with lots of fixes, a system-wide move to double precision across all audio paths, and much more.
Security updates have been issued by Arch Linux (kernel), CentOS (kernel, libvirt, microcode_ctl, and qemu-kvm), Debian (kernel and xen), Fedora (kernel), Mageia (backintime, erlang, and wildmidi), openSUSE (kernel and ucode-intel), Oracle (kernel, libvirt, microcode_ctl, and qemu-kvm), Red Hat (kernel, kernel-rt, libvirt, microcode_ctl, qemu-kvm, and qemu-kvm-rhev), Scientific Linux (libvirt and qemu-kvm), SUSE (kvm and qemu), and Ubuntu (ruby1.9.1, ruby2.0, ruby2.3).
Hey folks, Rob from The MagPi here! We know many people might be getting their very first Raspberry Pi this Christmas, and excitedly wondering “what do I do with it?” While we can’t tell you exactly what to do with your Pi, we can show you how to immerse yourself in the world of Raspberry Pi and be inspired by our incredible community, and that’s the topic of The MagPi 65, out today (we’re a day early because we’re simply TOO excited about the special announcement below!).
The one, the only…issue 65!
Raspberry Pi for Newbies
Raspberry Pi for Newbies covers some of the very basics you should know about the world of Raspberry Pi. After a quick set-up tutorial, we introduce you to the Raspberry Pi’s free online resources, including Scratch and Python projects from Code Club, before guiding you through the wider Raspberry Pi and maker community.
Pages and pages of useful advice and starter projects
The online community is an amazing place to learn about all the incredible things you can do with the Raspberry Pi. We’ve included some information on good places to look for tutorials, advice and ideas.
And that’s not all
Want to do more after learning about the world of Pi? The rest of the issue has our usual selection of expert guides to help you build some amazing projects: you can make a Christmas memory game, build a tower of bells to ring in the New Year, and even take your first steps towards making a game using C++.
Midimutant, the synthesizer “that boinks endless strange sounds”
All this along with inspiring projects, definitive reviews, and tales from around the community.
Raspberry Pi Annual
Issue 65 isn’t the only new release to look out for. We’re excited to bring you the first ever Raspberry Pi Annual, and it’s free for MagPi subscribers – in fact, subscribers should be receiving it the same day as their issue 65 delivery!
If you’re not yet a subscriber of The MagPi, don’t panic: you can still bag yourself a copy of the Raspberry Pi Annual by signing up to a 12-month subscription of The MagPi before 24 January. You’ll also receive the usual subscriber gift of a free Raspberry Pi Zero W (with case and cable). Click here to subscribe to The MagPi – The Official Raspberry Pi magazine.
Ooooooo…aaaaaahhhhh…
The Raspberry Pi Annual is aimed at young folk wanting to learn to code, with a variety of awesome step-by-step Scratch tutorials, games, puzzles, and comics, including a robotic Babbage.
Get your copy
You can get The MagPi 65 and the Raspberry Pi Annual 2018 from our online store, and the magazine can be found in the wild at WHSmith, Tesco, Sainsbury’s, and Asda. You’ll be able to get it in the US at Barnes & Noble and Micro Center in a few days’ time. The MagPi 65 is also available digitally on our Android and iOS apps. Finally, you can also download a free PDF of The MagPi 65 and The Raspberry Pi Annual 2018.
We hope you have a merry Christmas! We’re off until the New Year. Bye!
Twinkly lights are to Christmas what pumpkins are to Halloween. And when you add a Raspberry Pi to your light show, the result instantly goes from “Meh, yeah.” to “OMG, wow!”
Here are some cool light-based Christmas projects to inspire you this weekend.
In his Christmas lights project, Caleb Johnson uses an app as a control panel to switch between predefined displays. The full code is available on his GitHub, and it connects a Raspberry Pi A+ to a strip of programmable LEDs that change their pattern at the touch of a phone screen.
What’s great about this project, aside from the simplicity of its design, is the scope for extending it. Why not share the app with friends and family, allowing them to control your lights remotely? Or link the lights to social media so they are triggered by a specific hashtag, like in Alex Ellis’ #cheerlights project below.
Here we have a smart holiday light which will only run when it detects your presence in the room through a passive infrared (PIR) sensor. I’ve used hot glue for the fixings and an 8-LED NeoPixel strip connected to port 18.
Cheerlights, an online service created by Hans Scharler, allows makers to incorporate hashtag-controlled lighting into their projects. By tweeting the hashtag #cheerlights, followed by a colour, you can control a network of lights so that they all display the same colour.
For his holiday light hack using Cheerlights, Alex incorporated the Pimoroni Blinkt! and a collection of cheap Christmas decorations to create cute light-up ornaments for the festive season.
To make your own, check out Alex’s blog post, and head to your local £1/$1 store for hackable decor. You could even link your Christmas tree and the trees of your family, syncing them all in one glorious, Santa-pleasing spectacular.
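If you want a head start, here's a minimal CheerLights-style poller for the Blinkt! It reads the public CheerLights feed on ThingSpeak (the colour table is abbreviated):

```python
import time
import urllib.request

import blinkt

FEED = "http://api.thingspeak.com/channels/1417/field/1/last.txt"
COLOURS = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "white": (255, 255, 255), "warmwhite": (255, 223, 181),
}

while True:
    name = urllib.request.urlopen(FEED).read().decode().strip()
    blinkt.set_all(*COLOURS.get(name, (0, 0, 0)))   # unknown colour: off
    blinkt.show()
    time.sleep(15)   # be gentle with the API
```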
With just a few bucks of extra material, I walk you through converting your regular Christmas lights into a whole-house light show. The goal here is to go from scratch. Although this guide is intended for people who don’t know how to use linux at all and those who do alike, the focus is for people for whom linux and the raspberry pi are a complete mystery.
Looking to outdo your neighbours with your Christmas light show this year? YouTuber Makin’Things has created a beginners guide to setting up a Raspberry Pi–based musical light show for your facade, complete with information on soldering, wiring, and coding.
Once you’ve wrapped your house in metres and metres of lights and boosted your speakers so they can be heard for miles around, why not incorporate #cheerlights to make your outdoor decor interactive?
Still not enough? How about controlling your lights using a drum kit? Christian Kratky’s MIDI-Based Christmas Lights Animation system (or as I like to call it, House Rock) does exactly that.
Project documentation and source code: https://www.hackster.io/cyborg-titanium-14/light-pi-1c88b0 The song is taken from: https://www.youtube.com/watch?v=G6r1dAire0Y
Any more?
We know these projects are just the tip of the iceberg when it comes to the Raspberry Pi–powered Christmas projects out there, and as always, we’d love you to share yours with us. So post a link in the comments below, or tag us on social media when posting your build photos, videos, and/or blog links. ‘Tis the season for sharing after all.
Scale takes on a whole new meaning when it comes to IoT. Last year I was lucky enough to tour a gigantic factory that had, on average, one environment sensor per square meter. The sensors measured temperature, humidity, and air purity several times per second, and served as an early warning system for contaminants. I’ve heard customers express interest in deploying IoT-enabled consumer devices in the millions or tens of millions.
With powerful, long-lived devices deployed in a geographically distributed fashion, managing security challenges is crucial. However, the limited amount of local compute power and memory can sometimes limit the ability to use encryption and other forms of data protection.
To address these challenges and to allow our customers to confidently deploy IoT devices at scale, we are working on IoT Device Defender. While the details might change before release, AWS IoT Device Defender is designed to offer these benefits:
Continuous Auditing – AWS IoT Device Defender monitors the policies related to your devices to ensure that the desired security settings are in place. It looks for drifts away from best practices and supports custom audit rules so that you can check for conditions that are specific to your deployment. For example, you could check to see if a compromised device has subscribed to sensor data from another device. You can run audits on a schedule or on an as-needed basis.
Real-Time Detection and Alerting – AWS IoT Device Defender looks for and quickly alerts you to unusual behavior that could be coming from a compromised device. It does this by monitoring the behavior of similar devices over time, looking for unauthorized access attempts, changes in connection patterns, and changes in traffic patterns (either inbound or outbound).
Fast Investigation and Mitigation – In the event that you get an alert that something unusual is happening, AWS IoT Device Defender gives you the tools, including contextual information, to help you to investigate and mitigate the problem. Device information, device statistics, diagnostic logs, and previous alerts are all at your fingertips. You have the option to reboot the device, revoke its permissions, reset it to factory defaults, or push a security fix.
Stay Tuned I’ll have more info (and a hands-on post) as soon as possible, so stay tuned!
Since we launched the Oracle Weather Station project, we’ve collected more than six million records from our network of stations at schools and colleges around the world. Each one of these records contains data from ten separate sensors — that’s over 60 million individual weather measurements!
Weather station measurements in Oracle database
Weather data collection
Having lots of data covering a long period of time is great for spotting trends, but to do so, you need some way of visualising your measurements. We’ve always had great resources like Graphing the weather to help anyone analyse their weather data.
And from now on, it’s going to be even easier for our Oracle Weather Station owners to display and share their measurements. I’m pleased to announce a new partnership with our friends at Initial State: they are generously providing a white-label platform to which all Oracle Weather Station recipients can stream their data.
Using Initial State
Initial State makes it easy to create vibrant dashboards that show off local climate data. The service is perfect for having your Oracle Weather Station data on permanent display, for example in the school reception area or on the school’s website.
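Streaming from Python is pleasantly terse with Initial State's ISStreamer library. A sketch, with placeholder keys:

```python
from ISStreamer.Streamer import Streamer

streamer = Streamer(bucket_name="Oracle Weather Station",
                    bucket_key="<bucket_key>",
                    access_key="<access_key>")

streamer.log("temperature_c", 22.3)   # one time-stamped point per call
streamer.log("humidity_pct", 61.0)
streamer.flush()
```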
But that’s not all: the Initial State toolkit includes a whole range of easy-to-use analysis tools for extracting trends from your data. Distribution plots and statistics are just a few clicks away!
Looks like Auntie Beryl is right — it has been a damp old year! (Humidity value distribution May–Nov 2017)
The wind direction data from my Weather Station supports my excuse as to why I’ve not managed a high-altitude balloon launch this year: to use my launch site, I need winds coming from the east, and those have been in short supply.
Chart showing wind direction over time
Initial State credentials
Every Raspberry Pi Oracle Weather Station school will shortly be receiving the credentials needed to start streaming their data to Initial State. If you’re super keen though, please email [email protected] with a photo of your Oracle Weather Station, and I’ll let you jump the queue!
The Initial State folks are big fans of Raspberry Pi and have a ton of Pi-related projects on their website. They even included shout-outs to us in the music video they made to celebrate the publication of their 50th tutorial. Can you spot their weather station?
Your home-brew weather station
If you’ve built your own Raspberry Pi–powered weather station and would like to dabble with the Initial State dashboards, you’re in luck! The team at Initial State is offering 14-day trials for everyone. For more information on Initial State, and to sign up for the trial, check out their website.
When James Puderer moved to Lima, Peru, his roadside runs left a rather nasty taste in his mouth. Hit by the pollution from old diesel cars in the area, he decided to monitor the air quality in his new city using Raspberry Pis and the abundant taxis as his tech carriers.
How to assemble the enclosure for my Taxi Datalogger project: https://www.hackster.io/james-puderer/distributed-air-quality-monitoring-using-taxis-69647e
Sensing air quality in Lima
Luckily for James, almost all taxis in Lima are equipped with the standard hollow vinyl roof sign seen in the video above, which makes them ideal for hacking.
With the onboard tech, the device collects data on longitude, latitude, humidity, temperature, pressure, and airborne particle count, feeding it back to an Android Things datalogger. This data is then pushed to Google IoT Core, where it can be remotely accessed.
Next, the data is processed by Google Dataflow and turned into a BigQuery table. Users can then visualize the collected measurements. And while James uses Google Maps to analyse his data, there are many tools online that will allow you to organise and study your figures depending on what final result you’re hoping to achieve.
James hopped in a taxi and took his monitor on the road, collecting results throughout the journey
James has provided the complete build process, including all tech ingredients and code, on his Hackster.io project page, and urges makers to create their own air quality monitor for their local area. He also plans on building upon the existing design by adding a 12V power hookup for connecting to the taxi, functioning lights within the sign, and companion apps for drivers.
Sensing the world around you
We’ve seen a wide variety of Raspberry Pi projects using sensors to track the world around us, such as Kasia Molga’s Human Sensor costume series, which reacts to air pollution by lighting up, and Clodagh O’Mahony’s Social Interaction Dress, which she created to judge how conversation and physical human interaction can be scored and studied.
Kasia Molga’s Human Sensor — a collection of hi-tech costumes that react to air pollution within the wearer’s environment.
Many people also build their own Pi-powered weather stations, or use the Raspberry Pi Oracle Weather Station, to measure and record conditions in their towns and cities from the roofs of schools, offices, and homes.
Have you incorporated sensors into your Raspberry Pi projects? Share your builds in the comments below or via social media by tagging us.
Hi folks, Rob from The MagPi here! Issue 63 is now available, and it’s a huge one: we finally show you how to create the ultimate Raspberry Pi arcade cabinet in our latest detailed tutorial, so get some quarters and your saw ready.
Totally awesome video game builds!
The 16-page-long arcade machine instructions cover everything from the tools you need and how to do the woodwork, to setting up the electronics. In my spare time, I pretend to be Street Fighter baddie M. Bison, so I’m no stranger to arcade machines. However, I had never actually built one — luckily, the excellent Bob Clagett of I Like To Make Stuff was generous enough to help out with this project. I hope you enjoy reading the article, and making your own cabinet, as much as I enjoyed writing and building them.
Projects for kids
Retro gaming isn’t the only thing you’ll find in this issue of The MagPi though. We have a big feature called Junior Pi Projects, which we hope will inspire young people to make something really cool using Scratch or Python.
As usual, the new issue also includes a collection of other tutorials for you to follow, for example for building a hydroponic garden, or making a special MIDI box. There are also fantastic maker projects to read up on, and reviews to tempt your wallet.
The kids are alright
Get The MagPi 63
You can grab The MagPi 63 right now from WH Smith, Tesco, Sainsbury’s, and Asda. If you live in the US, check out your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android or iOS apps. And don’t forget, there’s always the free PDF as well.
Subscribe for free goodies
Want to support the Raspberry Pi Foundation, the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.
That’s it for this month! We’re off to play some games.
Did you realise the Sense HAT has been available for over two years now? Used by astronauts on the International Space Station, the exact same hardware is available to you on Earth. With a new Astro Pi challenge just launched, it’s time for a retrospective/roundup/inspiration post about this marvellous bit of kit.
The Sense HAT on a Pi in full glory
The Sense HAT explained
We developed our scientific add-on board to be part of the Astro Pi computers we sent to the International Space Station with ESA astronaut Tim Peake. For a play-by-play of Astro Pi’s history, head to the blog archive.
Just to remind you, here’s all the cool stuff our engineers have managed to fit onto the HAT: an 8×8 RGB LED matrix, a five-button joystick, and sensors for temperature, humidity, barometric pressure, and orientation (a gyroscope, an accelerometer, and a magnetometer).
Use the LED matrix and joystick to recreate games such as Pong or Flappy Bird. Of course, you could also add sensor input to your game: code an egg drop game or a Magic 8 Ball that reacts to how the device moves.
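As a flavour of how little code that takes, here's a sketch that moves a one-pixel 'paddle' up and down with the joystick:

```python
import time

from sense_hat import SenseHat

sense = SenseHat()
y = 4   # paddle row, 0-7 on the 8x8 matrix

while True:
    for event in sense.stick.get_events():
        if event.action == "pressed":
            if event.direction == "up":
                y = max(0, y - 1)
            elif event.direction == "down":
                y = min(7, y + 1)
    sense.clear()
    sense.set_pixel(0, y, 255, 255, 255)   # draw the paddle in column 0
    time.sleep(0.05)
```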
If you like the great outdoors, you could also use your Sense HAT to recreate this Hiking Companion by Marcus Johnson. Take it with you on your next hike!
It’s also possible to incorporate Sense HAT data into your digital art! The Python Turtle module and the Processing language are both useful tools for creating beautiful animations based on real-world information.
A Sense HAT project that also uses this principle is Giorgio Sancristoforo’s Tableau, a ‘generative music album’. This device creates music according to the sensor data:
“There is no doubt that, as music is removed by the phonograph record from the realm of live production and from the imperative of artistic activity and becomes petrified, it absorbs into itself, in this process of petrification, the very life that would otherwise vanish.”
Our online resource shows you how to record the information your HAT picks up. Next you can analyse and graph your data using Mathematica, which is included for free on Raspbian. This resource walks you through how this software works.
If you’re seeking inspiration for experiments you can do on our Astro Pis Izzy and Ed on the ISS, check out the winning entries of previous rounds of the Astro Pi challenge.
Thomas Pesquet with Ed and Izzy
But you can also stick to terrestrial scientific investigations. For example, why not build a weather station and share its data on your own web server or via Weather Underground?
Your code in space!
If you’re a student or an educator in one of the 22 ESA member states, you can get a team together to enter our 2017-18 Astro Pi challenge. There are two missions to choose from, including Mission Zero: follow a few guidelines, and your code is guaranteed to run in space!
At the Raspberry Pi Foundation, we love a good music project. So of course we’re excited to welcome Andy Grove‘s ultrasonic piano to the collection! It is a thing of beauty… and noise. Don’t let the name fool you – this build can do so much more than sound like a piano.
The Ultrasonic Pi Piano uses HC-SR04 ultrasonic sensors for input and generates MIDI instructions that are played by fluidsynth. For more information: http://theotherandygrove.com/projects/ultrasonic-pi-piano/
What’s an ultrasonic piano?
What we have here, people of all genders, is really a theremin on steroids. The build’s eight ultrasonic distance sensors detect hand movements and, with the help of an octasonic breakout board, a Raspberry Pi 3 translates their signals into notes. But that’s not all: this digital instrument is almost endlessly customisable – you can set each sensor to a different octave, or to a different instrument.
The breakout board designed by Andy
Andy has implemented gesture controls to allow you to switch between modes you have preset. In his video, you can see that holding your hands over the two sensors most distant from each other changes the instrument. Say you’re bored of the piano – try a xylophone! Not your jam? How about a harpsichord? Or a clarinet? In fact, there are 128 MIDI instruments and sound effects to choose from. Go nuts and compose a piece using tuba, ocarina, and the noise of a guitar fret!
How to build the ultrasonic piano
If you head over to Instructables, you’ll find the thorough write-up Andy has provided. He has also made all his scripts, written in Rust, available on GitHub. Finally, he’s even added a video on how to make a housing, so your ultrasonic piano can look more like a proper instrument, and less like a pile of electronics.
If you follow us on Twitter, you may have seen photos and footage of the Raspberry Pi staff attending a Pi Towers Picademy. Like Andy*, quite a few of us are massive Whovians. Consequently, one of our final builds on the course was an ultrasonic theremin that gave off a sound rather like a dying Dalek. Take a look at our masterwork here! We loved our make so much that we’ve since turned the instructions for building it into a free resource. Go ahead and build your own! And be sure to share your compositions with us in the comments.
Sonic is feeling the groove as well
* He has a full-sized Dalek at home. I know, right?
At the Audio MC at the Linux Plumbers Conference one thing became very clear: it is very difficult for programmers to figure out which audio API to use for which purpose and which API not to use when doing audio programming on Linux. So here’s my attempt to guide you through this jungle:
What do you want to do?
I want to write a media-player-like application!
Use GStreamer! (Unless your focus is only KDE, in which case Phonon might be an alternative.)
I want to add event sounds to my application!
Use libcanberra, install your sound files according to the XDG Sound Theming/Naming Specifications! (Unless your focus is only KDE in which case KNotify might be an alternative although it has a different focus.)
I want to do professional audio programming, hard-disk recording, music synthesizing, MIDI interfacing!
Use JACK and/or the full ALSA interface.
I want to do basic PCM audio playback/capturing!
Use the safe ALSA subset.
I want to add sound to my game!
Use the audio API of SDL for full-screen games, libcanberra for simple games with standard UIs such as Gtk+.
I want to write a mixer application!
Use the layer you want to support directly: if you want to support enhanced desktop software mixers, use the PulseAudio volume control APIs. If you want to support hardware mixers, use the ALSA mixer APIs.
I want to write audio software for the plumbing layer!
Use the full ALSA stack.
I want to write audio software for embedded applications!
For technical appliances, the safe ALSA subset is usually a good choice; this, however, depends highly on your use case.
You want to know more about the different sound APIs?
GStreamer
GStreamer is the de-facto standard media streaming system for Linux desktops. It supports decoding and encoding of audio and video streams. You can use it for a wide range of purposes from simple audio file playback to elaborate network streaming setups. GStreamer supports a wide range of CODECs and audio backends. GStreamer is not particularly suited for basic PCM playback or low-latency/realtime applications. GStreamer is portable and not limited in its use to Linux. Among the supported backends are ALSA, OSS, PulseAudio. [Programming Manuals and References]
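For a sense of just how high-level GStreamer is, here is a minimal playback sketch in C, assuming a GStreamer 1.x install; the file URI is a placeholder and error handling is trimmed to the bare minimum.

    /* Minimal GStreamer playback sketch: playbin picks decoders and an
     * audio backend (ALSA, PulseAudio, ...) automatically. */
    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        GstElement *pipeline =
            gst_parse_launch("playbin uri=file:///tmp/example.ogg", NULL);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until an error occurs or the stream ends */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }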
libcanberra
libcanberra is an abstract event sound API. It implements the XDG Sound Theme and Naming Specifications. libcanberra is a blessed GNOME dependency, but itself has no dependency on GNOME/Gtk/GLib and can be used with other desktop environments as well. In addition to an easy interface for playing sound files, libcanberra provides caching (which is very useful for networked thin clients) and allows passing of various metadata to the underlying audio system, which can then be used to enhance user experience (such as positional event sounds) and to improve accessibility. libcanberra supports multiple backends and is portable beyond Linux. Among the supported backends are ALSA, OSS, PulseAudio, GStreamer. [API Reference]
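To show the shape of the API, here is a minimal event-sound sketch in C; the event ID is one of the names defined by the XDG Sound Naming Specification, and error handling is omitted for brevity.

    /* Minimal libcanberra sketch: play one themed event sound. */
    #include <canberra.h>
    #include <unistd.h>

    int main(void) {
        ca_context *c = NULL;
        ca_context_create(&c);

        /* Event IDs come from the XDG Sound Naming Specification */
        ca_context_play(c, 0,
                        CA_PROP_EVENT_ID, "bell-window-system",
                        CA_PROP_EVENT_DESCRIPTION, "Example event sound",
                        NULL);

        sleep(1); /* playback is asynchronous; give it a moment */
        ca_context_destroy(c);
        return 0;
    }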
JACK
JACK is a sound system for interconnecting professional audio production applications with each other and with audio hardware. Its focus is low latency and application interconnection. It is not useful for normal desktop or embedded use, and it is not an API that is particularly useful if all you want to do is simple PCM playback. JACK supports multiple backends, although ALSA is best supported. JACK is portable beyond Linux. Among the supported backends are ALSA, OSS. [API Reference]
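The programming model is callback-driven: JACK calls your process function from its realtime thread once per period. A bare-bones client sketch (it merely outputs silence) might look like this:

    /* Minimal JACK client sketch: register one output port and fill it
     * with silence in the realtime process callback. */
    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *out_port;

    static int process(jack_nframes_t nframes, void *arg) {
        jack_default_audio_sample_t *out =
            jack_port_get_buffer(out_port, nframes);
        memset(out, 0, nframes * sizeof(*out)); /* silence; synthesize here */
        return 0;
    }

    int main(void) {
        jack_client_t *client = jack_client_open("sketch", JackNullOption, NULL);
        if (!client)
            return 1;
        jack_set_process_callback(client, process, NULL);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_activate(client);
        sleep(10); /* let the callback run for a while */
        jack_client_close(client);
        return 0;
    }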
Full ALSA
ALSA is the Linux API for doing PCM playback and recording. ALSA is very focused on hardware devices, although other backends are supported as well (to a limited degree; see below). ALSA as a name is used both for the Linux audio kernel drivers and for a user-space library that wraps these. ALSA, the library, is comprehensive, and portable to a limited degree. The full ALSA API can appear very complex and is large; however, it supports almost everything modern sound hardware can provide. Some of the functionality of the ALSA API is limited in its use to actual hardware devices supported by the Linux kernel, in contrast to software sound servers and sound drivers implemented in user space, such as those for Bluetooth and FireWire audio, among others. [API Reference]
Safe ALSA
Only a subset of the full ALSA API works on all backends ALSA supports. It is highly recommended to stick to this safe subset if you do ALSA programming to keep programs portable, future-proof and compatible with sound servers, Bluetooth audio and FireWire audio. See below for more details about which functions of ALSA are considered safe. The safe ALSA API is a suitable abstraction for basic, portable PCM playback and recording — not just for ALSA kernel driver supported devices. Among the supported backends are ALSA kernel driver devices, OSS, PulseAudio, JACK.
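As a sketch of what safe-subset playback looks like in practice, following the DOs and DON’Ts listed later in this post: open default, let snd_pcm_set_params() negotiate the parameters, and recover from under-runs with snd_pcm_recover().

    /* Safe-ALSA-subset playback sketch: 100 periods of silence. */
    #include <alsa/asoundlib.h>
    #include <stdint.h>

    int main(void) {
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        /* S16_LE stereo at 44.1 kHz, resampling allowed, 500 ms latency */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 44100, 1, 500000) < 0)
            return 1;

        int16_t buf[2 * 1024] = { 0 };            /* 1024 frames of silence */
        for (int i = 0; i < 100; i++) {
            snd_pcm_sframes_t n = snd_pcm_writei(pcm, buf, 1024);
            if (n < 0)
                n = snd_pcm_recover(pcm, n, 0);   /* e.g. after an under-run */
            if (n < 0)
                break;
        }
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }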
Phonon and KNotify
Phonon is a high-level abstraction for media streaming systems such as GStreamer, but it goes a bit further than that. It supports multiple backends. KNotify is a system for “notifications”, which goes beyond mere event sounds. However, it does not support the XDG Sound Theming/Naming Specifications at this point, and it also doesn’t support caching or passing of event metadata to an underlying sound system. KNotify supports multiple backends for audio playback via Phonon. Both APIs are KDE/Qt-specific and should not be used outside of KDE/Qt applications. [Phonon API Reference] [KNotify API Reference]
SDL
SDL is a portable API primarily used for full-screen game development. Among other things, it includes a portable audio interface. SDL supports, among others, OSS, PulseAudio, and ALSA as backends. [API Reference]
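SDL’s audio interface is pull-based: you open the device with a callback that SDL invokes whenever it needs more samples. A silence-emitting sketch, using the classic SDL 1.2-style API:

    /* Minimal SDL audio sketch: SDL pulls samples from the callback. */
    #include <SDL.h>
    #include <string.h>

    static void fill(void *userdata, Uint8 *stream, int len) {
        memset(stream, 0, len); /* silence; a game would mix its audio here */
    }

    int main(void) {
        SDL_Init(SDL_INIT_AUDIO);

        SDL_AudioSpec want;
        memset(&want, 0, sizeof(want));
        want.freq = 44100;
        want.format = AUDIO_S16SYS;
        want.channels = 2;
        want.samples = 1024;
        want.callback = fill;

        SDL_OpenAudio(&want, NULL); /* NULL: SDL converts to our spec */
        SDL_PauseAudio(0);          /* start the callback */
        SDL_Delay(2000);            /* two seconds of "playback" */
        SDL_CloseAudio();
        SDL_Quit();
        return 0;
    }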
PulseAudio
PulseAudio is a sound system for Linux desktops and embedded environments that runs in user space and (usually) on top of ALSA. PulseAudio supports network transparency, per-application volumes, spatial event sounds, on-the-fly switching of sound streams between devices, policy decisions, and many other high-level operations. PulseAudio adds a glitch-free audio playback model to the Linux audio stack. PulseAudio is not useful in professional audio production environments. PulseAudio is portable beyond Linux. PulseAudio has a native API and also supports the safe subset of ALSA, in addition to limited, LD_PRELOAD-based OSS compatibility. Among others, PulseAudio supports OSS and ALSA as backends and provides connectivity to JACK. [API Reference]
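For illustration, the simplest native path is the blocking pa_simple API; as recommended below, most applications are better served by the safe ALSA subset, so treat this purely as a sketch of the API’s shape.

    /* pa_simple sketch: write one second of silence, blocking. */
    #include <pulse/simple.h>
    #include <stdint.h>

    int main(void) {
        pa_sample_spec ss;
        ss.format   = PA_SAMPLE_S16LE;
        ss.rate     = 44100;
        ss.channels = 2;

        int error;
        pa_simple *s = pa_simple_new(NULL, "sketch", PA_STREAM_PLAYBACK,
                                     NULL, "playback", &ss, NULL, NULL,
                                     &error);
        if (!s)
            return 1;

        static int16_t buf[44100 * 2];        /* zero-initialized: silence */
        pa_simple_write(s, buf, sizeof(buf), &error);
        pa_simple_drain(s, &error);
        pa_simple_free(s);
        return 0;
    }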
OSS
The Open Sound System is a low-level PCM API supported by a variety of Unixes, including Linux. It started out as the standard Linux audio system and is supported on current Linux kernels in API version 3, as OSS3. OSS3 is considered obsolete and has been fully replaced by ALSA. A successor to OSS3 called OSS4 is available, but it plays virtually no role on Linux and is not supported in standard kernels or by any of the relevant distributions. The OSS API is very low-level, based around direct kernel interfacing using ioctl()s. It is hence awkward to use and can practically not be virtualized for use on non-kernel audio systems like sound servers (such as PulseAudio) or user-space sound drivers (such as Bluetooth or FireWire audio). OSS3’s timing model cannot properly be mapped to software sound servers at all, and it is also problematic on non-PCI hardware such as USB audio. Also, OSS does not do sample type conversion, remapping or resampling if necessary. This means that clients that want to support OSS properly need to include a complete set of converters/remappers/resamplers for the case when the hardware does not natively support the requested sampling parameters. With modern sound cards it is very common to support only S32LE samples at 48 kHz and nothing else; if an OSS client assumes it can always play back S16LE samples at 44.1 kHz, it will thus fail. OSS3 is portable to other Unix-like systems, though various differences apply. OSS also doesn’t properly support surround sound and other functionality of modern sound systems. OSS should be considered obsolete and should not be used in new applications. ALSA and PulseAudio have limited LD_PRELOAD-based compatibility with OSS. [Programming Guide]
All sound systems and APIs listed above are supported in all relevant current distributions. For libcanberra support the newest development release of your distribution might be necessary.
All sound systems and APIs listed above are suitable for development for commercial (read: closed source) applications, since they are licensed under LGPL or more liberal licenses or no client library is involved.
You want to know why and when you should use a specific sound API?
GStreamer
GStreamer is best used for very high-level needs: i.e. you want to play an audio file or video stream and do not care about all the tiny details down to the PCM or codec level.
libcanberra
libcanberra is best used when adding sound feedback to user input in UIs. It can also be used to play simple sound files for notification purposes.
JACK
JACK is best used in professional audio production and where interconnecting applications is required.
Full ALSA
The full ALSA interface is best used for software on the “plumbing layer”, or when you want to make use of very specific hardware features, which might be needed for audio production purposes.
Safe ALSA
The safe ALSA interface is best used for software that wants to output/record basic PCM data from hardware devices or software sound systems.
Phonon and KNotify
Phonon and KNotify should only be used in KDE/Qt applications, and only for high-level media playback and simple audio notifications, respectively.
SDL
SDL is best used in full-screen games.
PulseAudio
For now, the PulseAudio API should be used only for applications that want to expose sound-server-specific functionality (such as mixers), or when a PCM output abstraction layer is already available in your application and it thus makes sense to add an additional PulseAudio backend to it, to keep the stack of audio layers minimal.
OSS
OSS should not be used for new programs.
You want to know more about the safe ALSA subset?
Here’s a list of DOs and DON’Ts in the ALSA API if you care that your application stays future-proof and works fine with non-hardware backends or backends for user-space sound drivers such as Bluetooth and FireWire audio. Some of these recommendations apply to people using the full ALSA API as well, since some functionality should be considered obsolete in all cases.
If your application’s code does not follow these rules, you must have a very good reason for that. Otherwise your code should simply be considered broken!
DON’Ts:
Do not use “async handlers”, e.g. via snd_async_add_pcm_handler() and friends. Asynchronous handlers are implemented using POSIX signals, which is a very questionable use of them, especially from libraries and plugins. Even when you don’t want to limit yourself to the safe ALSA subset, it is highly recommended not to use this functionality. Read this for a longer explanation of why signals for audio I/O are evil.
Do not parse the ALSA configuration file yourself or with any of the ALSA functions such as snd_config_xxx(). If you need to enumerate audio devices use snd_device_name_hint() (and related functions). That is the only API that also supports enumerating non-hardware audio devices and audio devices with drivers implemented in userspace.
Do not parse any of the files from /proc/asound/. Those files only include information about kernel sound drivers; user-space plugins are not listed there. Also, the set of kernel devices might differ from the way they are presented in user space (i.e. sub-devices are mapped in different ways to actual user-space devices such as surround51 and suchlike).
Do not rely on stable device indexes from ALSA. Nowadays they depend on the initialization order of the drivers during boot-up time and are thus not stable.
Do not use the snd_card_xxx() APIs. For enumeration, use snd_device_name_hint() (and related functions); snd_card_xxx() is obsolete and will only list kernel hardware devices. User-space devices such as sound servers or Bluetooth audio are not included. snd_card_load() is completely obsolete these days.
Do not hard-code device strings, especially not hw:0 or plughw:0 or even dmix — these devices define no channel mapping and are mapped to raw kernel devices. It is highly recommended to use exclusively default as device string. If specific channel mappings are required the correct device strings should be front for stereo, surround40 for Surround 4.0, surround41, surround51, and so on. Unfortunately at this point ALSA does not define standard device names with channel mappings for non-kernel devices. This means default may only be used safely for mono and stereo streams. You should probably prefix your device string with plug: to make sure ALSA transparently reformats/remaps/resamples your PCM stream for you if the hardware/backend does not support your sampling parameters natively.
Do not assume that any particular sample type is supported except the following ones: U8, S16_LE, S16_BE, S32_LE, S32_BE, FLOAT_LE, FLOAT_BE, MU_LAW, A_LAW.
Do not use snd_pcm_avail_update() for synchronization purposes. It should be used exclusively to query the amount of bytes that may be written/read right now. Do not use snd_pcm_delay() to query the fill level of your playback buffer. It should be used exclusively for synchronization purposes. Make sure you fully understand the difference, and note that the two functions return values that are not necessarily directly connected! (See the sketch after this list.)
Do not assume that the mixer controls always know dB information.
Do not assume that all devices support MMAP style buffer access.
Do not assume that the hardware pointer inside the (possibly mmaped) playback buffer is the actual position of the sample in the DAC. There might be an extra latency involved.
Do not try to recover with your own code from ALSA error conditions such as buffer under-runs. Use snd_pcm_recover() instead.
Do not touch buffering/period metrics unless you have specific latency needs. Develop defensively, handling correctly the case when the backend cannot fulfill your buffering metrics requests. Be aware that the buffering metrics of the playback buffer only indirectly influence the overall latency in many cases. i.e. setting the buffer size to a fixed value might actually result in practical latencies that are much higher.
Do not assume that snd_pcm_rewind() is available, that it works, or to what degree it works.
Do not assume that the time when a PCM stream can receive new data is strictly dependent on the sampling and buffering parameters and the resulting average throughput. Always make sure to supply new audio data to the device when it asks for it by signalling “writability” on the fd. (And similarly for capturing.)
Do not use the “simple” interface snd_spcm_xxx().
Do not use any of the functions marked as “obsolete”.
Do not use the timer, midi, rawmidi, hwdep subsystems.
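To make the snd_pcm_avail_update()/snd_pcm_delay() distinction from the list above concrete, here is a small sketch querying both on a freshly prepared stream:

    /* avail_update() = how many frames you may write right now;
     * delay() = how long until a newly written frame becomes audible. */
    #include <alsa/asoundlib.h>
    #include <stdio.h>

    int main(void) {
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 44100, 1, 500000) < 0)
            return 1;

        snd_pcm_sframes_t avail = snd_pcm_avail_update(pcm);
        snd_pcm_sframes_t delay = 0;
        snd_pcm_delay(pcm, &delay);

        printf("writable now: %ld frames, output latency: %ld frames\n",
               (long)avail, (long)delay);
        snd_pcm_close(pcm);
        return 0;
    }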
DOs:
Use snd_device_name_hint() for enumerating audio devices (see the sketch after this list).
Use the simple mixer API, snd_mixer_selem_xxx(), instead of the raw snd_ctl_xxx() interface.
For synchronization purposes use snd_pcm_delay().
For checking the buffer playback/capture fill level, use snd_pcm_avail_update().
Use snd_pcm_recover() to recover from errors returned by any of the ALSA functions.
If possible use the largest buffer sizes the device supports to maximize power saving and drop-out safety. Use snd_pcm_rewind() if you need to react to user input quickly.
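And the enumeration DO, as a sketch: snd_device_name_hint() lists kernel and user-space PCM devices alike, which is exactly what snd_card_xxx() cannot do.

    /* Enumerate all PCM devices, including user-space ones. */
    #include <alsa/asoundlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        void **hints;
        if (snd_device_name_hint(-1, "pcm", &hints) < 0) /* -1: all cards */
            return 1;

        for (void **h = hints; *h != NULL; h++) {
            char *name = snd_device_name_get_hint(*h, "NAME");
            char *desc = snd_device_name_get_hint(*h, "DESC");
            if (name)
                printf("%s\n    %s\n", name, desc ? desc : "");
            free(name);
            free(desc);
        }
        snd_device_name_free_hint(hints);
        return 0;
    }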
FAQ
What about ESD and NAS?
ESD and NAS are obsolete, both as APIs and as sound daemons. Do not develop for them any further.
ALSA isn’t portable!
That’s not true! Actually the user-space library is relatively portable, it even includes a backend for OSS sound devices. There is no real reason that would disallow using the ALSA libraries on other Unixes as well.
Portability is key to me! What can I do?
Unfortunately no truly portable (i.e. to Win32) PCM API is available right now that I could truly recommend. The systems shown above are more or less portable at least to Unix-like operating systems. That does not mean however that there are suitable backends for all of them available. If you care about portability to Win32 and MacOS you probably have to find a solution outside of the recommendations above, or contribute the necessary backends/portability fixes. None of the systems (with the exception of OSS) is truly bound to Linux or Unix-like kernels.
What about PortAudio?
I don’t think that PortAudio is a very good API for Unix-like operating systems. I cannot recommend it, but it’s your choice.
Oh, why do you hate OSS4 so much?
I don’t hate anything or anyone. I just don’t think OSS4 is a serious option, especially not on Linux. On Linux, it is also completely redundant due to ALSA.
You idiot, you have no clue!
You are right, I totally don’t. But that doesn’t hinder me from recommending things. Ha!
Hey I wrote/know this tiny new project which is an awesome abstraction layer for audio/media!
Sorry, that’s not sufficient. I only list software here that is known to be sufficiently relevant and sufficiently well maintained.
Final Words
Of course, these recommendations are quite basic and are only intended to point you in the right direction. Different necessities apply to each use case, and hence options that I did not consider here might become viable. It’s up to you to decide how much of what I wrote here actually applies to your application.
This summary only includes software systems that are considered stable and universally available at the time of writing. In the future I hope to introduce a more suitable and portable replacement for the safe ALSA subset of functions. I plan to update this text from time to time to keep things up-to-date.
If you feel that I forgot a use case or an important API, then please contact me or leave a comment. However, I think the summary above is sufficiently comprehensive and if an entry is missing I most likely deliberately left it out.
(Also note that I am upstream for both PulseAudio and libcanberra and did some minor contributions to ALSA, GStreamer and some other of the systems listed above. Yes, I am biased.)
Oh, and please syndicate this, digg it. I’d like this guide to be well known all around the Linux community. Thank you!