Tag Archives: biology

Raspberry Pi High Quality Camera powers up homemade microscope

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/raspberry-pi-high-quality-camera-powers-up-homemade-microscope/

Wow, DIY_Maxwell, wow. This reddit user got their hands on one of our new Raspberry Pi High Quality Cameras and decided to upgrade their homemade microscope with it. The brains of the thing are also provided by a Raspberry Pi.

Key features

  • Raspberry Pi OS
  • 8 MegaPixel CMOS camera (Full HD 30 fps video)
  • Imaging of features from several centimetres down to several micrometres without changing the lens
  • 6 stepper motors (X, Y, tilt, rotation, magnification, focus)
  • Variable speed control using a joystick controller or keyboard
  • Uniform illumination for imaging reflective surfaces
  • Modular design: stages and modules can be arranged in any configuration depending on the application

Here’s what a penny looks like under this powerful microscope:

Check out this video from the original reddit post to see the microscope in action.

Bill of materials


The user has put together very detailed, image-led build instructions walking you through how to create the linear actuators, camera setup, rotary stage, illumination, tilt mechanism, and electronics.

The project uses a program written in Python 3 (MicroscoPy.py) to control the microscope, modify camera settings, and take photos and videos controlled by keyboard input.
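
For a flavour of how this kind of control script hangs together, here’s a minimal keyboard-driven capture sketch using the picamera library. It is not the actual MicroscoPy.py (you’ll find that in the build instructions linked above); the key bindings, file names, and settings below are purely illustrative.

import sys
import termios
import tty
import time
from picamera import PiCamera

def read_key():
    # Read a single keypress from the terminal without waiting for Enter
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

camera = PiCamera(resolution=(1920, 1080), framerate=30)  # Full HD, 30 fps
camera.start_preview()
time.sleep(2)  # let exposure settle before the first capture

print("p: photo  v: start/stop video  +/-: exposure compensation  q: quit")
photo, clip, recording = 0, 0, False
while True:
    key = read_key()
    if key == "p":
        photo += 1
        camera.capture("image_%03d.jpg" % photo)
    elif key == "v" and not recording:
        clip += 1
        camera.start_recording("video_%03d.h264" % clip)
        recording = True
    elif key == "v" and recording:
        camera.stop_recording()
        recording = False
    elif key == "+":
        camera.exposure_compensation = min(camera.exposure_compensation + 1, 25)
    elif key == "-":
        camera.exposure_compensation = max(camera.exposure_compensation - 1, -25)
    elif key == "q":
        break

if recording:
    camera.stop_recording()
camera.stop_preview()
camera.close()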


Here is a quick visual to show you the exact ports you need for this project on whatever Raspberry Pi you have:


In the comments of the original reddit post, DIY_Maxwell explains that the $10 objective lens used in the project limited the Raspberry Pi High Quality Camera’s performance. They predict that you can expect even better images with a heavier investment in the lens.

The project is the work of a team at IBM Research–Europe in Zurich that develops microfluidic technologies for medical applications and needed to provide high-quality photos and videos of its microfluidic chips.

In a blog for IEEE Spectrum, IBM team member Yuksel Temiz explains: “Taking a photo of a microfluidic chip is not easy. The chips are typically too big to fit into the field of view of a standard microscope, but they have fine features that cannot be resolved using a regular camera. Uniform illumination is also critical because the chips are often made of highly reflective or transparent materials. Looking at publications from other research groups, it’s obvious that this is a common challenge. With this motivation, I devoted some of my free time to designing a multipurpose and compact lab instrument that can take macro photos from almost any angle.”

Here’s the full story about how the Raspberry Pi-powered creation came to be.

And for some extra-credit homework, you can check out this document comparing the performance of the microscope using our Raspberry Pi Camera Module v2 and the High Quality Camera. The key takeaway for those wishing to upgrade their old projects with the newer camera is to remember that it’s heavier and 50% bigger, so you’ll need to tweak your housing to fit it in.

The post Raspberry Pi High Quality Camera powers up homemade microscope appeared first on Raspberry Pi.

Help medical research with Folding@home

Post Syndicated from Ben Hardwidge original https://www.raspberrypi.org/blog/help-medical-research-with-foldinghome/

Did you know that the first machine to break the exaflop barrier (one quintillion floating-point operations per second) wasn’t a huge dedicated IBM supercomputer, but a bunch of interconnected PCs with ordinary CPUs and gaming GPUs?

With that in mind, welcome to the Folding@home project, which is targeting its enormous power at COVID-19 research. It’s effectively the world’s fastest supercomputer, and your PC can be a part of it.

COVID-19

The Folding@home project is now targeting COVID-19 research

Folding@home with Custom PC

Put simply, Folding@home runs hugely complicated simulations of protein molecules for medical research. These would usually take hundreds of years for a typical computer to process. However, by breaking them up into smaller work units and farming them out to thousands of independent machines on the Internet, it’s possible to run simulations that would be impossible to run experimentally.

Back in 2004, Custom PC magazine started its own Folding@home team. The team is currently sitting at number 12 on the world leaderboard, and we’re still going strong. If you have a PC, you can join us (or indeed any Folding@home team) and put your spare clock cycles towards COVID-19 research.

Get folding

Getting your machine folding is simple. First, download the client. Your username can be whatever you like, and you’ll need to put in team number 35947 to fold for the Custom PC & bit-tech team. If you want your PC to work on COVID-19 research, select ‘COVID-19’ in the ‘I support research fighting’ pulldown menu.

Set your username and team number

Enter team number 35947 to fold for the Custom PC & bit-tech team

You’ll get the most points per watt from GPU folding, but your CPU can also perform valuable research that can’t be done on your GPU. ‘There are actually some things we can do on CPUs that we can’t do on GPUs,’ said Professor Greg Bowman, Director of Folding@home, speaking to Custom PC in the latest issue.

‘With the current pandemic in mind, one of the things we’re doing is what are called “free energy calculations”. We’re simulating proteins with small molecules that we think might be useful starting points for developing therapeutics, for example.’

Select COVID-19 from the pulldown menu

If you want your PC to work on COVID-19 research, select ‘COVID-19’ in the ‘I support research fighting’ pulldown menu

Bear in mind that enabling folding on your machine will increase power consumption. For reference, we set up folding on a Ryzen 7 2700X rig with a GeForce GTX 1070 Ti. The machine consumes around 70W when idle. That figure increases to 214W when folding on the CPU and around 320W when folding on the GPU as well. If you fold a lot, you’ll see an increase in your electricity bill, so keep an eye on it.
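
As a rough guide to what that means for your bill, here’s a back-of-the-envelope calculation using the power figures measured above; the electricity price is an assumption, so substitute your own tariff.

# Back-of-the-envelope running cost, using the power figures measured above.
# The electricity price is an assumption; substitute your own tariff.
IDLE_W = 70           # measured idle draw
FOLDING_W = 320       # measured draw when folding on CPU and GPU
PRICE_PER_KWH = 0.15  # assumed price per kWh in your local currency

extra_kw = (FOLDING_W - IDLE_W) / 1000  # extra power drawn while folding
daily_kwh = extra_kw * 24               # folding around the clock
print(f"Extra energy per day: {daily_kwh:.1f} kWh")
print(f"Approximate extra cost per month: {daily_kwh * 30 * PRICE_PER_KWH:.2f}")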

Folding on Arm?

Could we also see Folding@home running on Arm machines, such as Raspberry Pi? ‘Oh I would love to have Folding@home running on Arm,’ says Bowman. ‘I mean they’re used in Raspberry Pis and lots of phones, so I think this would be a great future direction. We’re actually in contact with some folks to explore getting Folding@home running on Arm in the near future.’

In the meantime, you can still recruit your Raspberry Pi for the cause by participating in Rosetta@home, a similar project also working to help the fight against COVID-19. For more information, visit the Rosetta@home website.

You’ll also find a full feature about Folding@home and its COVID-19 research in Issue 202 of Custom PC, available from the Raspberry Pi Press online store.

The post Help medical research with Folding@home appeared first on Raspberry Pi.

FluSense takes on COVID-19 with Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/flusense-takes-on-covid-19-with-raspberry-pi/

Raspberry Pi devices are often used by scientists, especially in biology, to capture and analyse data, and a particularly striking – and sobering – project has made the news this week. Researchers at UMass Amherst have created FluSense, a dictionary-sized piece of equipment comprising a cheap microphone array, a thermal sensor, an Intel Movidius 2 neural computing engine, and a Raspberry Pi. FluSense monitors crowd sounds to forecast outbreaks of viral respiratory disease like seasonal flu; naturally, the headlines about their work have focused on its potential relevance to the COVID-19 pandemic.

A photo of Forsad Al Hossain and Tauhidur Rahman with the FluSense device alongside a logo from the University of Massachusetts Amherst

Forsad Al Hossain and Tauhidur Rahman with the FluSense device. Image courtesy of the University of Massachusetts Amherst

The device can distinguish coughing from other sounds. When cough data is combined with information about the size of the crowd in a location, it can provide an index predicting how many people are likely to be experiencing flu symptoms.
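
As a toy illustration only (this is not the FluSense team’s actual model), combining the two signals might boil down to a coughs-per-person index over each monitoring window:

# Toy illustration only; not the FluSense team's actual model. It just shows
# the idea of combining cough counts with crowd-size estimates into a simple
# per-person index over a monitoring window.
from dataclasses import dataclass

@dataclass
class WindowObservation:
    cough_count: int   # coughs detected by the audio classifier in this window
    crowd_size: int    # people counted by the thermal sensor in this window

def cough_index(observations):
    # Average coughs per person; a higher value suggests more symptomatic people
    total_coughs = sum(o.cough_count for o in observations)
    total_people = sum(o.crowd_size for o in observations)
    return total_coughs / total_people if total_people else 0.0

daily_log = [WindowObservation(4, 18), WindowObservation(9, 25), WindowObservation(2, 12)]
print(f"Cough index for the day: {cough_index(daily_log):.3f}")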

It was successfully tested in four health clinic waiting rooms, and now PhD student Forsad Al Hossain and his adviser, assistant professor Tauhidur Rahman, plan to roll FluSense out in other large spaces to capture data on a larger scale and strengthen the device’s capabilities. Privacy concerns are mitigated by heavy encryption, and Al Hossain and Rahman explain that the emphasis is on aggregating data, not identifying sickness in any single patient.

The researchers believe the secret to FluSense’s success lies in how much of the processing work is done locally, via the neural computing engine and Raspberry Pi: “Symptom information is sent wirelessly to the lab for collation, of course, but the heavy lifting is accomplished at the edge.”

A bird's-eye view of the components inside the FluSense device

Image courtesy of the University of Massachusetts Amherst

FluSense offers a different set of advantages compared with other tools, such as the extremely popular self-reporting app developed by researchers at King’s College Hospital in London, UK, together with startup Zoe. Approaches like this rely on the public to sign up, and that’s likely to skew the data they gather, because people in some demographic groups are more likely than others to be motivated and able to participate. FluSense can be installed to capture data passively from groups across the entire population. This could be particularly helpful to underprivileged groups who are less likely to have access to healthcare.

Makers, engineers, and scientists across the world are rising to the challenge of tackling COVID-19. One notable initiative is the Montreal General Hospital Foundation’s challenge to quickly design a low-cost, easy-to-use ventilator that can be built locally to serve patients, with a prize of CAD $200,000 on offer. The winning designs will be made available to download for free.

There is, of course, loads of chatter on the Raspberry Pi forum about the role computing has in beating the virus. We particularly liked this PSA letting you know how to free up some of your unused processing power for those researching treatments.

screenshot of the hand washer being built from a video on instagram

Screenshot via @deeplocal on Instagram

And to end on a cheering note, we *heart* this project from @deeplocal on Instagram. They’ve created a Raspberry Pi-powered soap dispenser which will play 20 seconds of your favourite song to keep you at the sink and make sure you’re washing your hands for long enough to properly protect yourself.

The post FluSense takes on COVID-19 with Raspberry Pi appeared first on Raspberry Pi.

Raspberry Pi vs antibiotic resistance: microbiology imaging with open source hardware

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/raspberry-pi-vs-antibiotic-resistance-microbiology-imaging-with-open-source-hardware/

The Edwards Lab at the University of Reading has developed a flexible, low-cost, open source lab robot for capturing images of microbiology samples with a Raspberry Pi camera module. It’s called POLIR, for Raspberry Pi camera Open-source Laboratory Imaging Robot. Here’s a timelapse video of them assembling it.

Measuring antibiotic resistance with colour-changing dye

The robot is useful for all kinds of microbiology imaging, but at the moment the lab is using it to measure antimicrobial resistance in bacteria. They’re doing this by detecting the colour change in a dye called resazurin, which changes from blue to pink in the presence of metabolically active cells: if bacteria incubated with antibiotics grow, their metabolic activity causes the dye to turn pink. However, if the antibiotics stop or impede the growth of the bacteria, their lower levels of metabolic activity will cause less colour change, or none at all. In the photo below, the colourful microtitre plate holds bacterial samples with and without resistance to the antibiotics against which they’re being tested.

POLIR, an open source 3D printer-based Raspberry Pi lab imaging robot
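
To give a rough idea of how a blue-to-pink shift like this can be quantified from an image (this is not the Edwards Lab’s own analysis pipeline), here’s a minimal sketch using the Pillow library that compares the mean red and blue channels of one cropped well; the file name, crop box, and threshold are made up for illustration.

# Minimal sketch, not the Edwards Lab pipeline: quantify the blue-to-pink
# resazurin shift in one well by comparing mean red and blue channel values.
# The file name, crop box, and threshold below are placeholders.
from PIL import Image

def well_colour_ratio(image_path, box):
    # Return mean(red)/mean(blue) for the cropped well; >1 suggests pink (growth)
    well = Image.open(image_path).convert("RGB").crop(box)
    pixels = list(well.getdata())
    mean_r = sum(p[0] for p in pixels) / len(pixels)
    mean_b = sum(p[2] for p in pixels) / len(pixels)
    return mean_r / mean_b if mean_b else float("inf")

ratio = well_colour_ratio("plate.jpg", box=(100, 100, 140, 140))
print("Growth (dye turned pink)" if ratio > 1.1 else "Growth inhibited (dye stayed blue)")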

An imaging system based on 3D-printer designs

The researchers adapted existing open source 3D printer designs and used v-slot aluminium extrusion (this stuff) with custom 3D-printed joints to make a frame. Instead of a printer extrusion head, a Raspberry Pi and camera module are mounted on the frame. An Arduino running open-source Repetier software controls x-y-z stepper motors to adjust the position of the computer and camera.

Front and top views of POLIR

Open-source OctoPrint software controls the camera position by supplying scripts from the Raspberry Pi to the Arduino. OctoPrint also allows remote access and control, which gives researchers flexibility in when they run experiments and check progress. Images are acquired using a Python script configured with the appropriate settings (e.g. image exposure), and are stored on the Raspberry Pi’s SD card. From there, they can be accessed via FTP.
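
To give a flavour of the acquisition side (this is not the POLIR team’s exact script), a fixed-settings capture loop with the picamera library might look like the sketch below; locking exposure and white balance keeps timelapse frames comparable, and the interval, resolution, and file paths are assumptions.

# Sketch of a fixed-settings timelapse capture loop; interval, resolution
# and paths are assumptions, not the POLIR team's actual values.
import time
from picamera import PiCamera

camera = PiCamera(resolution=(3280, 2464))
camera.iso = 100
time.sleep(2)                                 # let gain and exposure settle
camera.shutter_speed = camera.exposure_speed  # lock the current exposure
camera.exposure_mode = "off"
camera.awb_mode = "off"
camera.awb_gains = (1.5, 1.5)                 # fixed white balance

for frame in range(96):                       # e.g. one frame every 15 minutes for 24 hours
    camera.capture("/home/pi/images/sample_%04d.jpg" % frame)
    time.sleep(15 * 60)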

More flexibility, lower cost

Off-the-shelf lab automation systems are extremely expensive and remain out of the reach of most research groups. POLIR cost just £600.

The system has a number of advantages over higher-cost off-the-shelf imaging systems. One is its flexibility: the robot can image a range of sample formats, including agar plates like those in the video above, microtitre plates like the one in the first photograph, and microfluidic “lab-on-a-comb” devices. A comb looks much like a small, narrow rectangle of clear plastic with striations running down its length; each striation is a microcapillary with capacity for a 1μl sample, and each comb has ten microcapillaries. These microfluidic devices let scientists run experiments on a large number of samples at once, while using a minimum of space on a lab bench, in an incubator, or in an imaging robot like POLIR.

POLIR accommodates 2160 individual capillaries and a 96-well plate, with room to spare

High spatial and temporal resolution

For lab-on-a-comb images, POLIR gives the Reading team four times the spatial resolution they get with a static camera. The moveable Raspberry Pi camera with a short focus yields images with 6 pixels per capillary, compared to 1.5 pixels per capillary using a $700 static Canon camera with a macro lens.

Because POLIR is automated, it brings higher temporal resolution within reach, too. A non-automated system, by contrast, can only be used for timelapse imaging if a researcher repeatedly intervenes at fixed time intervals. Capturing kinetic data with timelapse imaging is valuable because different samples can reach the same endpoint at different rates, which may be significant, and because some dyes can give a transient signal that would be missed by an endpoint measurement alone.

Dr Alexander Edwards of the University of Reading comments:

We built the robot with a simple purpose, to make antimicrobial resistance testing more robust without resorting to expensive and highly specialised lab equipment […] The beauty of the POLIR kit is that it’s based on open source designs and we have likewise published our own designs and modifications, allowing everyone and anyone to benefit from the original design and the modifications in other contexts. We believe that open source hardware is a game changer that will revolutionise microbiological and other life science lab work by increasing data production whilst reducing hands-on labour time in the lab.

You can find POLIR on GitLab here. You can also read more, and browse more figures, in the team’s open-access paper, Exploiting open source 3D printer architecture for laboratory robotics to automate high-throughput time-lapse imaging for analytical microbiology.

The post Raspberry Pi vs antibiotic resistance: microbiology imaging with open source hardware appeared first on Raspberry Pi.

Growth Monitor pi: an open monitoring system for plant science

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/growth-monitor-pi-an-open-monitoring-system-for-plant-science/

Plant scientists and agronomists use growth chambers to provide consistent growing conditions for the plants they study. This reduces confounding variables – inconsistent temperature or light levels, for example – that could render the results of their experiments less meaningful. To make sure that conditions really are consistent both within and between growth chambers, which minimises experimental bias and ensures that experiments are reproducible, it’s helpful to monitor and record environmental variables in the chambers.

A neat grid of small leafy plants on a black plastic tray. Metal housing and tubing is visible to the sides.

Arabidopsis thaliana in a growth chamber on the International Space Station. Many experimental plants are less well monitored than these ones.
(“Arabidopsis thaliana plants […]” by Rawpixel Ltd (original by NASA) / CC BY 2.0)

In a recent paper in Applications in Plant Sciences, Brandin Grindstaff and colleagues at the universities of Missouri and Arizona describe how they developed Growth Monitor pi, or GMpi: an affordable growth chamber monitor that provides wider functionality than other devices. As well as sensing growth conditions, it sends the gathered data to cloud storage, captures images, and generates alerts to inform scientists when conditions drift outside of an acceptable range.

The authors emphasise – and we heartily agree – that you don’t need expertise with software and computing to build, use, and adapt a system like this. They’ve written a detailed protocol and made available all the necessary software for any researcher to build GMpi, and they note that commercial solutions with similar functionality range in price from $10,000 to $1,000,000 – something of an incentive to give the DIY approach a go.

GMpi uses a Raspberry Pi Model 3B+, to which are connected temperature-humidity and light sensors from our friends at Adafruit, as well as a Raspberry Pi Camera Module.

The team used open-source app Rclone to upload sensor data to a cloud service, choosing Google Drive since it’s available for free. To alert users when growing conditions fall outside of a set range, they use the incoming webhooks app to generate notifications in a Slack channel. Sensor operation, data gathering, and remote monitoring are supported by a combination of software that’s available for free from the open-source community and software the authors developed themselves. Their package GMPi_Pack is available on GitHub.
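
To illustrate the alerting idea (this isn’t GMPi_Pack itself), here’s a minimal sketch that posts a message to a Slack incoming webhook when a temperature reading drifts out of range; the webhook URL and thresholds are placeholders, and it assumes the requests library is installed.

# Minimal alerting sketch, not GMPi_Pack itself: post to a Slack incoming
# webhook when a reading drifts out of range. URL and thresholds are placeholders.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
TEMP_RANGE_C = (20.0, 24.0)  # acceptable chamber temperature range (assumed)

def check_and_alert(temperature_c):
    low, high = TEMP_RANGE_C
    if not low <= temperature_c <= high:
        message = f"Growth chamber alert: temperature {temperature_c:.1f} C is outside {low}-{high} C"
        requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)

check_and_alert(26.3)  # would post an alert once the placeholder URL is replaced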

With a bill of materials amounting to something in the region of $200, GMpi is another excellent example of affordable, accessible, customisable open labware that’s available to researchers and students. If you want to find out how to build GMpi for your lab, or just for your greenhouse, Affordable remote monitoring of plant growth in facilities using Raspberry Pi computers by Grindstaff et al. is available on PubMed Central, and it includes appendices with clear and detailed set-up instructions for the whole system.

The post Growth Monitor pi: an open monitoring system for plant science appeared first on Raspberry Pi.

A low-cost, open-source, computer-assisted microscope

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/a-low-cost-open-source-computer-assisted-microscope/

Low-cost open labware is a good thing in the world, and I was particularly pleased when micropalaeontologist Martin Tetard got in touch about the Raspberry Pi-based microscope he is developing. The project is called microscoPI (what else?), and it can capture, process, and store images and image analysis results. Martin is engaged in climate research: he uses microscopy to study tiny fossil remains, from which he gleans information about the environmental conditions that prevailed in the far-distant past.

microscoPI: a microcomputer-assisted microscope

microscoPI is a project that aims to design a multipurpose, open-source, and inexpensive microcomputer-assisted microscope (Raspberry Pi 3). The microscope can automatically take images, process them, and save them together with the results of image analyses on a flash drive. It is multipurpose, as it can be used on various kinds of images.

Martin repurposed an old microscope with a Z-axis adjustable stage for accurate focusing, and sourced an inexpensive X/Y movable stage to allow more accurate horizontal positioning of samples under the camera. He emptied the head of the scope to install a Raspberry Pi Camera Module, and he uses an M12 lens adapter to attach lenses suitable for single-specimen close-ups or for imaging several specimens at once. A Raspberry Pi 3B sits above the head of the microscope, and a 3.5-inch TFT touchscreen mounted on top of the Raspberry Pi allows the user to check images as they are captured and processed.

The Raspberry Pi runs our free operating system, Raspbian, and free image-processing software ImageJ. Martin and his colleagues use a number of plugins, some developed themselves and some by others, to support the specific requirements of their research. With this software, microscoPI can capture and analyse microfossil images automatically: it can count particles, including tiny specimens that are touching, analyse their shape and size, and save images and results before prompting the user for the name of the next sample.
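
For a rough sense of what automated particle counting involves (the lab itself uses ImageJ and its plugins, and properly separating touching specimens needs extra steps such as watershed segmentation), here’s a simple threshold-and-count sketch in Python with OpenCV; the file name and minimum area are placeholders.

# Simple threshold-and-count sketch with OpenCV; not the microscoPI plugins.
# File name and minimum particle area are placeholders.
import cv2

img = cv2.imread("microfossils.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
MIN_AREA = 50  # ignore specks of noise; value is an assumption
particles = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] >= MIN_AREA]

print("Counted %d particles" % len(particles))
for s in particles[:5]:
    print("area:", s[cv2.CC_STAT_AREA], "bounding box:", s[:4])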

microscoPI is compact – less than 30cm in height – and it’s powered by a battery bank secured under the base of the microscope, so it’s easily portable. The entire build comes in at under 160 Euros. You can find out more, and get in touch with Martin, on the microscoPI website.

The post A low-cost, open-source, computer-assisted microscope appeared first on Raspberry Pi.

Saving biologists’ time with Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/saving-biologists-time-with-raspberry-pi/

In an effort to save themselves and fellow biologists hours of time each week, Team IoHeat are currently prototyping a device that allows solutions to be heated while they are still in cold storage.

The IoHeat team didn’t provide any photos with their project writeup, so here’s a picture of a bored biologist that I found online

Saving time in the lab

As they explain in their prototype write-up:

As scientists working with living organisms (from single cells to tissue samples), we are often required to return to work outside of normal hours to maintain our specimens. In many cases, the compounds and solutions we are using in our line of work are stored at 4°C and need to reach 37°C before they can be used. So far, in order to do this we need to return to our workplace early, incubate our solutions at 37°C for 1–2h, depending on the required volume, and then use them in processes that often take a few minutes. It is clear that there is a lot of room here to improve our efficiency.

Controlling temperatures with Raspberry Pi

These hours wasted on waiting for solutions to heat up could be better spent elsewhere, so the team is building a Raspberry Pi–powered device that will allow them to control the heating process remotely.

We are aiming to build a small incubator that we can store in a cold room/fridge, and that can be activated remotely to warm up to a defined temperature. This incubator will enable us to safely store our reagents at low temperature and warm them up remotely before we need to use them, saving an estimated 12h per week per user.

This is a great project idea, and they’ve already prototyped it using a Raspberry Pi, heating element, and fan. Temperature and humidity sensors connected to the Raspberry Pi monitor conditions inside the incubator, and the prototype can be controlled via Telegram.
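
Their code is on Hackster, but in the same spirit, a minimal thermostat loop on a Raspberry Pi might look like the sketch below; the sensor library, GPIO pins, and hysteresis value are all assumptions rather than IoHeat’s actual design.

# Minimal thermostat sketch in the spirit of the IoHeat prototype (not their
# code): read a DHT22 sensor and switch a relay driving the heating element.
# Pin numbers, libraries and thresholds are assumptions.
import time
import Adafruit_DHT
import RPi.GPIO as GPIO

SENSOR = Adafruit_DHT.DHT22
SENSOR_PIN = 4    # GPIO pin for the DHT22 data line (assumed)
HEATER_PIN = 17   # GPIO pin driving the heater relay (assumed)
TARGET_C = 37.0   # solutions need to reach 37 degrees C before use
HYSTERESIS = 0.5  # avoid rapid relay switching around the setpoint

GPIO.setmode(GPIO.BCM)
GPIO.setup(HEATER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        humidity, temperature = Adafruit_DHT.read_retry(SENSOR, SENSOR_PIN)
        if temperature is not None:
            if temperature < TARGET_C - HYSTERESIS:
                GPIO.output(HEATER_PIN, GPIO.HIGH)  # heater on
            elif temperature > TARGET_C + HYSTERESIS:
                GPIO.output(HEATER_PIN, GPIO.LOW)   # heater off
        time.sleep(10)
finally:
    GPIO.output(HEATER_PIN, GPIO.LOW)
    GPIO.cleanup()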

Find out more about the project on Hackster.

We’ve got more than one biologist on the Raspberry Pi staff, so we have a personal appreciation for the effort behind this project, and we look forward to seeing how IoHeat progresses in the future.

The post Saving biologists’ time with Raspberry Pi appeared first on Raspberry Pi.

Friday Squid Blogging: Do Cephalopods Contain Alien DNA?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/friday_squid_bl_627.html

Maybe not DNA, but biological somethings.

“Cause of Cambrian explosion — Terrestrial or Cosmic?”:

Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion — life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.

Two commentaries.

This is almost certainly not true.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Recording lost seconds with the Augenblick blink camera

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/augenblick-camera/

Warning: a GIF used in today’s blog contains flashing images.

Students at the University of Bremen, Germany, have built a wearable camera that records the seconds of vision lost when you blink. Augenblick uses a Raspberry Pi Zero and Camera Module alongside muscle sensors to record footage whenever you close your eyes, producing a rather disjointed film of the sights you miss out on.

Augenblick blink camera recording using a Raspberry Pi Zero

Blink and you’ll miss it

The average person blinks up to five times a minute, with each blink lasting 0.5 to 0.8 seconds. These half-seconds add up to about 30 minutes a day. What sights are we losing during these minutes? That is the question asked by students Manasse Pinsuwan and René Henrich when they set out to design Augenblick.

Blinking is a highly invasive mechanism for our eyesight. Every day we close our eyes thousands of times without noticing it. Our mind manages to never let us wonder what exactly happens in the moments that we miss.

Capturing lost moments

For Augenblick, the wearer sticks MyoWare Muscle Sensor pads to their face, and these detect the electrical impulses that trigger blinking.

Augenblick blink camera recording using a Raspberry Pi Zero

Two pads are applied over the orbicularis oculi muscle that forms a ring around the eye socket, while the third pad is attached to the cheek as a neutral point.

Biology fact: there are two muscles responsible for blinking. The orbicularis oculi muscle closes the eye, while the levator palpebrae superioris muscle opens it — and yes, they both sound like the names of Harry Potter spells.

The sensor is read 25 times a second. Whenever it detects that the orbicularis oculi is active, the Camera Module records video footage.
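
The students’ own code isn’t reproduced here, but the poll-and-record pattern they describe might look something like the sketch below. Since the Pi Zero has no analogue inputs, it assumes the MyoWare sensor is read through an MCP3008 ADC via the gpiozero library, and the activation threshold is a placeholder.

# Sketch of the 25 Hz poll-and-record pattern described above; not the
# students' code. Assumes an MCP3008 ADC on channel 0 and a made-up threshold.
import time
from gpiozero import MCP3008
from picamera import PiCamera

muscle = MCP3008(channel=0)  # MyoWare output wired to ADC channel 0 (assumed)
camera = PiCamera(resolution=(1280, 720), framerate=30)

THRESHOLD = 0.6              # normalised sensor level that counts as "eye closed"
recording = False
clip = 0

while True:
    active = muscle.value > THRESHOLD
    if active and not recording:
        clip += 1
        camera.start_recording("blink_%04d.h264" % clip)
        recording = True
    elif not active and recording:
        camera.stop_recording()
        recording = False
    time.sleep(1 / 25)       # the sensor is read 25 times a second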

Augenblick blink recording using a Raspberry Pi Zero

Pressing a button on the side of the Augenblick glasses sets the code running. An LED lights up whenever the camera is recording and also serves to confirm the correct placement of the sensor pads.

Augenblick blink camera recording using a Raspberry Pi Zero

The Pi Zero saves the footage so that it can be stitched together later to form a continuous, if disjointed, film.

Learn more about the Augenblick blink camera

You can find more information on the conception, design, and build process of Augenblick here in German, with a shorter explanation including lots of photos here in English.

And if you’re keen to recreate this project, our free project resource for a wearable Pi Zero time-lapse camera will come in handy as a starting point.

The post Recording lost seconds with the Augenblick blink camera appeared first on Raspberry Pi.

Amazon Transcribe Now Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-transcribe-now-generally-available/


At AWS re:Invent 2017, we launched Amazon Transcribe in private preview. Today we're excited to make Amazon Transcribe generally available for all developers. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capabilities to their applications. We've iterated on customer feedback during the preview to make a number of enhancements to Amazon Transcribe.

New Amazon Transcribe Features in GA

To start off, we've made the SampleRate parameter optional, which means you only need to know the file type of your media and the input language. We've added two new features: the ability to differentiate multiple speakers in the audio to provide more intelligible transcripts ("who spoke when"), and a custom vocabulary to improve the accuracy of speech recognition for product names, industry-specific terminology, or names of individuals. To refresh our memories on how Amazon Transcribe works, let's look at a quick example. I'll convert this audio in my S3 bucket.

import boto3

transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="TranscribeDemo",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "https://s3.amazonaws.com/randhunt-transcribe-demo-us-east-1/out.mp3"},
    # ShowSpeakerLabels turns on the speaker differentiation shown in the output below
    Settings={"ShowSpeakerLabels": True, "MaxSpeakerLabels": 2}
)

This will output JSON similar to this (I've stripped out most of the response) with individual speakers identified:

{
  "jobName": "reinvent",
  "accountId": "1234",
  "results": {
    "transcripts": [
      {
        "transcript": "Hi, everybody, i'm randall ..."
      }
    ],
    "speaker_labels": {
      "speakers": 2,
      "segments": [
        {
          "start_time": "0.000000",
          "speaker_label": "spk_0",
          "end_time": "0.010",
          "items": []
        },
        {
          "start_time": "0.010000",
          "speaker_label": "spk_1",
          "end_time": "4.990",
          "items": [
            {
              "start_time": "1.000",
              "speaker_label": "spk_1",
              "end_time": "1.190"
            },
            {
              "start_time": "1.190",
              "speaker_label": "spk_1",
              "end_time": "1.700"
            }
          ]
        }
      ]
    },
    "items": [
      {
        "start_time": "1.000",
        "end_time": "1.190",
        "alternatives": [
          {
            "confidence": "0.9971",
            "content": "Hi"
          }
        ],
        "type": "pronunciation"
      },
      {
        "alternatives": [
          {
            "content": ","
          }
        ],
        "type": "punctuation"
      },
      {
        "start_time": "1.190",
        "end_time": "1.700",
        "alternatives": [
          {
            "confidence": "1.0000",
            "content": "everybody"
          }
        ],
        "type": "pronunciation"
      }
    ]
  },
  "status": "COMPLETED"
}

Custom Vocabulary

Now, if I needed to have a more complex technical discussion with a colleague, I could create a custom vocabulary. A custom vocabulary is specified as an array of strings passed to the CreateVocabulary API, and you can include your custom vocabulary in a transcription job by passing in its name as part of the Settings in a StartTranscriptionJob API call. An individual vocabulary can be as large as 50KB, and each phrase must be less than 256 characters. If I wanted to transcribe the recordings of my high school AP Biology class, I could create a custom vocabulary in Python like this:

import boto3

transcribe = boto3.client("transcribe")
transcribe.create_vocabulary(
    LanguageCode="en-US",
    VocabularyName="APBiology",
    Phrases=[
        "endoplasmic-reticulum",
        "organelle",
        "cisternae",
        "eukaryotic",
        "ribosomes",
        "hepatocytes",
        "cell-membrane"
    ]
)

I can refer to this vocabulary later on by the name APBiology and update it programmatically based on any errors I may find in the transcriptions.
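
Including the vocabulary in a transcription job is then a matter of passing its name in the Settings of a StartTranscriptionJob call, as described above; the job name and S3 URI below are just placeholders:

import boto3

transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="APBiologyLecture01",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "https://s3.amazonaws.com/example-bucket/ap-biology-lecture-01.mp3"},
    # VocabularyName points the job at the custom vocabulary created above
    Settings={"VocabularyName": "APBiology"}
)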

Available Now

Amazon Transcribe is available now in US East (N. Virginia), US West (Oregon), US East (Ohio), and EU (Ireland). Transcribe's free tier gives you 60 minutes of transcription per month for the first 12 months; after that, a pay-as-you-go model charges $0.0004 per second of transcribed audio, with a minimum charge of 15 seconds.
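
As a quick worked example of that pricing (the durations are arbitrary):

# Quick cost check using the pay-as-you-go pricing above: $0.0004 per second,
# with a minimum charge of 15 seconds. The example durations are arbitrary.
PRICE_PER_SECOND = 0.0004
MINIMUM_SECONDS = 15

def transcription_cost(duration_seconds):
    return max(duration_seconds, MINIMUM_SECONDS) * PRICE_PER_SECOND

for seconds in (10, 60, 45 * 60):
    print(f"{seconds:>5} s of audio -> ${transcription_cost(seconds):.4f}")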

When combined with other tools and services, I think Transcribe opens up entirely new opportunities for application development. I'm excited to see what technologies developers build with this new service.

Randall

[$] DIY biology

Post Syndicated from jake original https://lwn.net/Articles/747187/rss

A scientist with a rather unusual name, Meow-Ludo Meow-Meow, gave a talk at linux.conf.au 2018 about the current trends in “do it yourself” (DIY) biology or “biohacking”. He is perhaps most famous for being prosecuted for implanting an Opal card RFID chip into his hand; the Opal card is used for public transportation fares in Sydney. He gave more details about his implant as well as describing some other biohacking projects in an engaging presentation.

Astro Pi celebrates anniversary of ISS Columbus module

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-celebrates-anniversary/

Right now, 400km above the Earth aboard the International Space Station, are two very special Raspberry Pi computers. They were launched into space on 6 December 2015 and are, most assuredly, the farthest-travelled Raspberry Pi computers in existence. Each year they run experiments that school students create in the European Astro Pi Challenge.

Raspberry Astro Pi units on the International Space Station

Left: Astro Pi Vis (Ed); right: Astro Pi IR (Izzy). Image credit: ESA.

The European Columbus module

Today marks the tenth anniversary of the launch of the European Columbus module. The Columbus module is the European Space Agency’s largest single contribution to the ISS, and it supports research in many scientific disciplines, from astrobiology and solar science to metallurgy and psychology. More than 225 experiments have been carried out inside it during the past decade. It’s also home to our Astro Pi computers.

Here’s a video from 7 February 2008, when Space Shuttle Atlantis went skywards carrying the Columbus module in its cargo bay.

STS-122 Launch NASA TV Coverage

NASA TV coverage of STS-122, the 121st Space Shuttle launch, from 7 February 2008; liftoff was at 2:45:30 p.m. ET.

Today, coincidentally, is also the deadline for the European Astro Pi Challenge: Mission Space Lab. Participating teams have until midnight tonight to submit their experiments.

Anniversary celebrations

At 16:30 GMT today there will be a live event on NASA TV for the Columbus module anniversary with NASA flight engineers Joe Acaba and Mark Vande Hei.

Our Astro Pi computers will be joining in the celebrations by displaying a digital birthday candle that the crew can blow out. It works by detecting an increase in humidity when someone blows on it. The video below demonstrates the concept.

AstroPi candle

Uploaded by Effi Edmonton on 2018-01-17.

Do try this at home

The exact Astro Pi code that will run on the ISS today is available for you to download and run on your own Raspberry Pi and Sense HAT. You’ll notice that the program includes code to make it stop automatically when the date changes to 8 February. This is just to save time for the ground control team.

If you have a Raspberry Pi and a Sense HAT, you can use the terminal commands below to download and run the code yourself:

wget http://rpf.io/colbday -O birthday.py
chmod +x birthday.py
./birthday.py

When you see a blank blue screen with the brightness increasing, the Sense HAT is measuring the baseline humidity. It does this every 15 minutes so it can recalibrate to take account of natural changes in background humidity. A humidity increase of 2% is needed to blow out the candle, so if the background humidity changes by more than 2% in 15 minutes, it’s possible to get a false positive. Press Ctrl + C to quit.
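
The full program is at the download link above, but the detection logic it describes boils down to something like this sketch: take a baseline humidity reading, recalibrate it every 15 minutes, and treat a 2% rise as someone blowing on the candle.

# Minimal sketch of the candle's detection logic as described above; the real
# Astro Pi program is at the download link.
import time
from sense_hat import SenseHat

sense = SenseHat()
BLOW_THRESHOLD = 2.0         # a 2% humidity increase counts as a blow
BASELINE_INTERVAL = 15 * 60  # re-measure background humidity every 15 minutes

baseline = sense.get_humidity()
baseline_time = time.time()
sense.clear(0, 0, 255)       # a plain blue "candle" stands in for the real animation

while True:
    if time.time() - baseline_time > BASELINE_INTERVAL:
        baseline = sense.get_humidity()  # recalibrate for natural background changes
        baseline_time = time.time()
    if sense.get_humidity() - baseline > BLOW_THRESHOLD:
        sense.clear()                    # candle blown out
        break
    time.sleep(0.5)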

Please tweet pictures of your candles to @astro_pi – we might share yours! And if we’re lucky, we might catch a glimpse of the candle on the ISS during the NASA TV event at 16:30 GMT today.

The post Astro Pi celebrates anniversary of ISS Columbus module appeared first on Raspberry Pi.

Virtual Forest

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/virtual-forest/

The RICOH THETA S is a fairly affordable consumer 360° camera, which allows users to capture interesting locations and events for viewing through VR headsets and mobile-equipped Google Cardboard. When set up alongside a Raspberry Pi acting as a controller, plus a protective bubble, various cables, and good ol’ Mother Nature, the camera becomes a gateway to a serene escape from city life.

Virtual Forest

Ecologist Koen Hufkens, from the Richardson Lab in the Department of Organismic and Evolutionary Biology at Harvard University, decided to do exactly that, creating the Virtual Forest with the aim of “showing people how the forest changes throughout the seasons…and the beauty of the forest”.

The camera takes a still photograph every 15 minutes, uploading it for our viewing pleasure. The setup currently only supports daylight viewing, as the camera is not equipped for night vision, so check your watch first.

one autumn day

360 view of a day in a North-Eastern Hardwood forest during autumn

The build cost somewhere in the region of $500; Hufkens provides a complete ingredients list here, with supporting code on GitHub. He also aims to improve the setup by using the new Nikon KeyMission, which can record video at 4K ultra-HD resolution.

The Virtual Forest has been placed deep within the heart of Harvard Forest, a university-owned plot of land used both by researchers and by the general public. If you live nearby, you could go look at it and possibly even appear in a photo. Please resist the urge to photobomb, though, because that would totally defeat the peopleless zen tranquility that we’re feeling here in Pi Towers.

The post Virtual Forest appeared first on Raspberry Pi.

Detecting landmines – with spinach

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/detecting-landmines-with-spinach/

Forget sniffer dogs…we need to talk about spinach.

The team at MIT (Massachusetts Institute of Technology) have been working to transform spinach plants into a means of detection in the fight against buried munitions such as landmines.

Plant-to-human communication

MIT engineers have transformed spinach plants into sensors that can detect explosives and wirelessly relay that information to a handheld device similar to a smartphone. (Learn more: http://news.mit.edu/2016/nanobionic-spinach-plants-detect-explosives-1031)

Nanoparticles, plus tiny tubes called carbon nanotubes, are embedded into the spinach leaves where they pick up nitro-aromatics, chemicals found in the hidden munitions.

It takes the spinach approximately ten minutes to absorb water from the ground, including the nitro-aromatics, which then bind to the polymer material wrapped around the nanotube.

But where does the Pi come into this?

The MIT team shine a laser onto the leaves, detecting the altered fluorescence of the light emitted by the newly bonded tubes. This light is then read by a Raspberry Pi fitted with an infrared camera, resulting in a precise map of where hidden landmines are located. This signal can currently be picked up within a one-mile radius, with plans to increase the reach in future.

detecting landmines with spinach

You can also physically hack a smartphone to replace the Raspberry Pi… but why would you want to do that?

The team at MIT have already used the tech to detect hydrogen peroxide, TNT, and sarin, while co-author Prof. Michael Strano advises that the same setup can be used to detect “virtually anything”.

“The plants could be used for defence applications, but also to monitor public spaces for terrorism-related activities, since we show both water and airborne detection.”

More information on the paper can be found at the MIT website.

The post Detecting landmines – with spinach appeared first on Raspberry Pi.