Tag Archives: Guest Blog

Spiegelbilder Studio’s giant CRT video walls

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crt-video-walls/

After Matvey Fridman of Germany-based production company Spiegelbilder Studio got in touch to share his latest build, we invited him to write a guest blog post about the CRT video walls the studio created for the band STRANDKØNZERT.

STRANDKØNZERT – TAGTRAUMER – OFFICIAL VIDEO

GERMAN DJENT RAP / EST. 2017. COMPLETE DIY-PROJECT.

CRT video wall

About a year ago, we had the idea of building a huge video wall out of old TVs to use in a music video. It took some time, but half a year later we found ourselves in a studio actually building this thing using 30 connected computers, 24 of which were Raspberry Pis.

How we did it

After weeks and months of preproduction and testing, we set aside two consecutive days to build the wall, create the underlying IP network, run a few tests, and then film the artists’ performance in front of it. We actually had 32 Pis (a mixed bag of first-, second-, and third-generation models) and even more TVs ready to go, since we didn’t know what the final build would actually look like. We ended up using 29 separate screens of various sizes hooked up to 24 separate Pis — the remaining five TVs got a daisy-chained video signal out of other monitors for a cool effect. Each Pi had to run a free piece of software called PiWall.

Since the TVs only had analogue video inputs, we had to get special composite breakout cables and then adapt the RCA connectors to either SCART, S-Video, or BNC.

As soon as we had all of that running, we connected every Pi to a 48-port network switch that we’d hooked up to a Windows PC acting as a DHCP server to automatically assign IP addresses and handle the multicast addressing. To make remote control of the Raspberry Pis easier, a separate master Linux PC and two MacBook laptops, each with SSH enabled and a Samba server running, joined the network as well.

The MacBook laptops were used to drop two settings files onto each Pi. The .pitile file was unique to every Pi and contained that Pi’s ID. The .piwall file was identical on all Pis: it listed the measurements and positions of every single screen, which the software uses to work out which section of the incoming video each screen should display. Once every Pi had been given the command to start the PiWall software, which specifies the UDP multicast address and the settings used to receive the video stream, the master Linux PC was tasked with streaming the video file to that multicast address. Now every TV was showing its section of the video, and we could begin filming.
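
To give a feel for what those two files hold, here is a rough, much-simplified sketch of the kind of layout PiWall works with. Every section name, key, and value below is an illustrative assumption rather than our actual configuration; the real .piwall listed all 29 screens with their measured sizes and positions.

# .piwall -- identical on every Pi: the overall wall plus one section per screen
[wall]
width=4000
height=2250
x=0
y=0

[tile1]
wall=wall
width=500
height=375
x=0
y=0

# .pitile -- unique to each Pi: names the tile section this Pi should render
[tile]
id=tile1

Given the wall geometry and its own tile ID, each Pi can crop the incoming multicast stream down to exactly the rectangle its screen covers.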

The whole process and the contents of the files and commands are summarised in the infographic below. A lot of trial and error was involved in the making of this project, but it all worked out well in the end. We hope you enjoy the craft behind the music video even though the music is not for everybody 😉

PiWall_Infographic

You can follow Spiegelbilder Studio on Facebook, Twitter, and Instagram. And if you enjoyed the music video, be sure to follow STRANDKØNZERT too.

The post Spiegelbilder Studio’s giant CRT video walls appeared first on Raspberry Pi.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hamr-hard-drives/

HAMR drive illustration

During Q4, Backblaze deployed 100 petabytes’ worth of Seagate hard drives to our data centers. The newly deployed Seagate 10 and 12 TB drives are doing well and will help us meet our near-term storage needs, but we know we’re going to need more drives — with higher capacities. That’s why the success of new hard drive technologies like Heat-Assisted Magnetic Recording (HAMR) from Seagate is very relevant to us here at Backblaze and to the storage industry in general. In today’s guest post we are pleased to have Mark Re, CTO at Seagate, give us an insider’s look behind the hard drive curtain and tell us how Seagate engineers are developing HAMR technology and making it market-ready starting in late 2018.

What is HAMR and How Does It Enable the High-Capacity Needs of the Future?

Guest Blog Post by Mark Re, Seagate Senior Vice President and Chief Technology Officer

Earlier this year Seagate announced plans to make the first hard drives using Heat-Assisted Magnetic Recording, or HAMR, available by the end of 2018 in pilot volumes. Even as today’s market has embraced 10TB+ drives, the need for 20TB+ drives remains imperative in the relatively near term. HAMR is the Seagate research team’s next major advance in hard drive technology.

HAMR is a technology that over time will enable a big increase in the amount of data that can be stored on a disk. A small laser is attached to a recording head, designed to heat a tiny spot on the disk where the data will be written. This allows a smaller bit cell to be written as either a 0 or a 1. The smaller bit cell size enables more bits to be crammed into a given surface area — increasing the areal density of data, and increasing drive capacity.

It sounds almost simple, but perfecting this technology has required enormous science and engineering expertise, along with extensive research, experimentation, lab development, and product development. Below is an overview of the HAMR technology; you can dig into the details in our technical brief, which provides a point-by-point rundown of several key advances enabling the HAMR design.

For all the time and resources that have been committed to developing HAMR, the need for its increased data density is indisputable. Demand for data storage keeps increasing. Businesses’ ability to manage and leverage more capacity is a competitive necessity, and IT spending on capacity continues to increase.

History of Increasing Storage Capacity

For the last 50 years, areal density in the hard disk drive has been growing faster than Moore’s law, which is a very good thing. After all, customers from data centers and cloud service providers to creative professionals and game enthusiasts rarely go shopping looking for a hard drive just like the one they bought two years ago. The demands that ever-growing data places on storage capacity inevitably increase, so the technology must constantly evolve.

According to the Advanced Storage Technology Consortium, HAMR will be the next significant storage technology innovation to increase the amount of data that can be stored in a given area of disk, also called the disk’s “areal density.” We believe this boost in areal density will help fuel hard drive product development and growth through the next decade.

Why do we Need to Develop Higher-Capacity Hard Drives? Can’t Current Technologies do the Job?

Why is HAMR’s increased data density so important?

Data has become critical to all aspects of human life, changing how we’re educated and entertained. It affects and informs the ways we experience each other and interact with businesses and the wider world. IDC research shows the datasphere — all the data generated by the world’s businesses and billions of consumer endpoints — will continue to double in size every two years. IDC forecasts that by 2025 the global datasphere will grow to 163 zettabytes (that is a trillion gigabytes). That’s ten times the 16.1 ZB of data generated in 2016. IDC cites five key trends intensifying the role of data in changing our world: embedded systems and the Internet of Things (IoT), instantly available mobile and real-time data, cognitive artificial intelligence (AI) systems, increased security data requirements, and, critically, the evolution of data from playing a background role in business to playing a life-critical role.

Consumers use the cloud to manage everything from family photos and videos to data about their health and exercise routines. Real-time data created by connected devices — everything from Fitbits, Alexa, and smartphones to home security systems, solar power systems, and autonomous cars — is fueling the emerging Data Age. On top of the obvious business and consumer data growth, our critical infrastructure, such as power grids, water systems, hospitals, roads, and public transportation, demands and adds to the growth of real-time data. Data is now a vital element in the smooth operation of all aspects of daily life.

Behind the scenes, all of this comes with a significant infrastructure cost, driven by an insatiable global appetite for data storage. While a variety of storage technologies will continue to advance in data density (Seagate announced the first 60TB 3.5-inch SSD, for example), high-capacity hard drives remain the foundational core of our interconnected, cloud- and IoT-based dependence on data.

HAMR Hard Drive Technology

Seagate has been working on heat-assisted magnetic recording (HAMR) in one form or another since the late 1990s. During this time we’ve made many breakthroughs: building reliable near-field transducers, developing special high-capacity HAMR media, and figuring out a way to put a laser no larger than a grain of salt on each and every head.

The development of HAMR has required Seagate to consider and overcome a myriad of scientific and technical challenges including new kinds of magnetic media, nano-plasmonic device design and fabrication, laser integration, high-temperature head-disk interactions, and thermal regulation.

A typical hard drive inside any computer or server contains one or more rigid disks coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code.

Increasing the amount of data you can store on a disk requires cramming magnetic regions closer together, which means the grains need to be smaller so they won’t interfere with each other.

Heat Assisted Magnetic Recording (HAMR) is the next step to enable us to increase the density of grains — or bit density. Current projections are that HAMR can achieve 5 Tbpsi (Terabits per square inch) on conventional HAMR media, and in the future will be able to achieve 10 Tbpsi or higher with bit patterned media (in which discrete dots are predefined on the media in regular, efficient, very dense patterns). These technologies will enable hard drives with capacities higher than 100 TB before 2030.
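
As a rough sanity check on those figures (the platter geometry below is an illustrative assumption, not an official specification: call the usable recording band on one surface of a 3.5-inch platter roughly 8 square inches, and consider a nine-platter drive recorded on both sides):

\[ 5~\mathrm{Tb/in^2} \times 8~\mathrm{in^2} \approx 40~\mathrm{Tb} \approx 5~\mathrm{TB\ per\ surface} \]

\[ 5~\mathrm{TB} \times 2~\mathrm{surfaces} \times 9~\mathrm{platters} \approx 90~\mathrm{TB\ per\ drive} \]

At 10 Tbpsi the same geometry lands around 180 TB, so capacities beyond 100 TB before 2030 are plausible even after allowing generous overhead for formatting and spare capacity.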

The major problem with packing bits so closely together is that if you do that on conventional magnetic media, the bits (and the data they represent) become thermally unstable, and may flip. So, to make the grains maintain their stability — their ability to store bits over a long period of time — we need to develop a recording media that has higher coercivity. That means it’s magnetically more stable during storage, but it is more difficult to change the magnetic characteristics of the media when writing (harder to flip a grain from a 0 to a 1 or vice versa).

That’s why HAMR’s first key hardware advance required developing a new recording media that keeps bits stable — using high anisotropy (or “hard”) magnetic materials such as iron-platinum alloy (FePt), which resist magnetic change at normal temperatures. Over years of HAMR development, Seagate researchers have tested and proven out a variety of FePt granular media films, with varying alloy composition and chemical ordering.

In fact, the new media is so “hard” that conventional recording heads can’t flip the bits, or write new data, at normal temperatures. But if you add heat to the tiny spot on which you want to write data, you can make the media’s coercive field lower than the magnetic field provided by the recording head — in other words, you enable the write head to flip that bit.

So, a challenge with HAMR has been to replace conventional perpendicular magnetic recording (PMR), in which the write head operates at room temperature, with a write technology that heats the thin film recording medium on the disk platter to temperatures above 400 °C. The basic principle is to heat a tiny region of several magnetic grains for a very short time (~1 nanosecond) to a temperature high enough to make the media’s coercive field lower than the write head’s magnetic field. Immediately after the heat pulse, the region quickly cools down and the bit’s magnetic orientation is frozen in place.

Applying this dynamic nano-heating is where HAMR’s famous “laser” comes in. A plasmonic near-field transducer (NFT) has been integrated into the recording head to heat the media and enable magnetic change at a specific point. Plasmonic NFTs are used to focus and confine light energy to regions smaller than the wavelength of light. This enables us to heat an extremely small region, measured in nanometers, on the disk media to reduce its magnetic coercivity.

Moving HAMR Forward

HAMR write head

As always in advanced engineering, the devil — or many devils — is in the details. As noted earlier, our technical brief provides a point-by-point short illustrated summary of HAMR’s key changes.

Although hard work remains, we believe this technology is nearly ready for commercialization. Seagate has the best engineers in the world working towards a goal of a 20 Terabyte drive by 2019. We hope we’ve given you a glimpse into the amount of engineering that goes into a hard drive. Keeping up with the world’s insatiable appetite to create, capture, store, secure, manage, analyze, rapidly access and share data is a challenge we work on every day.

With thousands of HAMR drives already being made in our manufacturing facilities, our internal and external supply chain is solidly in place, and volume manufacturing tools are online. This year we began shipping initial units for customer tests, and production units will ship to key customers by the end of 2018. Prepare for breakthrough capacities.

The post What is HAMR and How Does It Enable the High-Capacity Needs of the Future? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

UI Testing at Scale with AWS Lambda

Post Syndicated from Stas Neyman original https://aws.amazon.com/blogs/devops/ui-testing-at-scale-with-aws-lambda/

This is a guest blog post by Wes Couch and Kurt Waechter from the Blackboard Internal Product Development team about their experience using AWS Lambda.

One year ago, one of our UI test suites took hours to run. Last month, it took 16 minutes. Today, it takes 39 seconds. Here’s how we did it.

The backstory:

Blackboard is a global leader in delivering robust and innovative education software and services to clients in higher education, government, K12, and corporate training. We have a large product development team working across the globe in at least 10 different time zones, with an internal tools team providing support for quality and workflows. We have been using Selenium Webdriver to perform automated cross-browser UI testing since 2007. Because we are now practicing continuous delivery, the automated UI testing challenge has grown due to the faster release schedule. On top of that, every commit made to each branch triggers an execution of our automated UI test suite. If you have ever implemented an automated UI testing infrastructure, you know that it can be very challenging to scale and maintain. Although there are services that are useful for testing different browser/OS combinations, they don’t meet our scale needs.

It used to take three hours to run our functional UI suite synchronously, which revealed the obvious need for parallel execution. Previously, we used Mesos to orchestrate a Selenium Grid Docker container for each test run. This way, we were able to run eight concurrent threads for test execution, which took an average of 16 minutes. Although this setup is fine for a single workflow, the cracks started to show when we reached the scale required for Blackboard’s mature product lines. Going beyond eight concurrent sessions on a single container introduced performance problems that impacted the reliability of tests (for example, issues in WebDriver or the browser popping up frequently). We tried Mesos and considered Kubernetes for Selenium Grid orchestration, but the answer to scaling a Selenium Grid was to think smaller, not larger. This led to our breakthrough with AWS Lambda.

The solution:

We started using AWS Lambda for UI testing because it doesn’t require costly infrastructure or countless man hours to maintain. The steps we outline in this blog post took one work day, from inception to implementation. By simply packaging the UI test suite into a Lambda function, we can execute these tests in parallel on a massive scale. We use a custom JUnit test runner that invokes the Lambda function with a request to run each test from the suite. The runner then aggregates the results returned from each Lambda test execution.

Selenium is the industry standard for testing UI at scale. Although there are other options to achieve the same thing in Lambda, we chose this mature suite of tools. Selenium is backed by the companies behind Chrome, Firefox, and other browsers, which helps the industry drive browsers with code. This makes Lambda and Selenium a compelling stack for achieving UI testing at scale.

Making Chrome Run in Lambda

Currently, Chrome for Linux will not run in Lambda due to an absent mount point. By rebuilding Chrome with a slight modification, as Marco Lüthy originally demonstrated, you can run it inside Lambda anyway! It took about two hours to build the current master branch of Chromium on a c4.4xlarge instance. Unfortunately, the current version of ChromeDriver, 2.33, does not support any version of Chrome above 62, so we’ll be using Marco’s modified build of Chrome 60 for the near future.

Required System Libraries

The Lambda runtime environment comes with a subset of common shared libraries. This means we need to include some extra libraries to get Chrome and ChromeDriver to work. Anything that exists in the java resources folder during compile time is included in the base directory of the compiled jar file. When this jar file is deployed to Lambda, it is placed in the /var/task/ directory. This allows us to simply place the libraries in the java resources folder under a folder named lib/ so they are right where they need to be when the Lambda function is invoked.

To get these libraries, create an EC2 instance and choose the Amazon Linux AMI.

Next, use ssh to connect to the server. After you connect to the new instance, search for the libraries to find their locations.

sudo find / -name libgconf-2.so.4
sudo find / -name libORBit-2.so.0

Now that you have the locations of the libraries, copy these files from the EC2 instance and place them in the java resources folder under lib/.

Packaging the Tests

To deploy the test suite to Lambda, we used a simple Gradle tool called ShadowJar, which is similar to the Maven Shade Plugin. It packages the libraries and dependencies inside the jar that is built. Usually test dependencies and sources aren’t included in a jar, but for this instance we want to include them. To include the test dependencies, add this section to the build.gradle file.

shadowJar {
   from sourceSets.test.output
   configurations = [project.configurations.testRuntime]
}

Deploying the Test Suite

Now that our tests are packaged with the dependencies in a jar, we need to get them into a running Lambda function. We use simple SAM templates to upload the packaged jar into S3, and then deploy it to Lambda with our settings.

{
   "AWSTemplateFormatVersion": "2010-09-09",
   "Transform": "AWS::Serverless-2016-10-31",
   "Resources": {
       "LambdaTestHandler": {
           "Type": "AWS::Serverless::Function",
           "Properties": {
               "CodeUri": "./build/libs/your-test-jar-all.jar",
               "Runtime": "java8",
               "Handler": "com.example.LambdaTestHandler::handleRequest",
               "Role": "<YourLambdaRoleArn>",
               "Timeout": 300,
               "MemorySize": 1536
           }
       }
   }
}

We use the maximum timeout available to ensure our tests have plenty of time to run. We also use the maximum memory size because this ensures our Lambda function can support Chrome and other resources required to run a UI test.

Specifying the handler is important because this class executes the desired test. The test handler should be able to receive a test class and method. With this information it will then execute the test and respond with the results.

public LambdaTestResult handleRequest(TestRequest testRequest, Context context) {
   LoggerContainer.LOGGER = new Logger(context.getLogger());
  
   BlockJUnit4ClassRunner runner = getRunnerForSingleTest(testRequest);
  
   Result result = new JUnitCore().run(runner);

   return new LambdaTestResult(result);
}

Creating a Lambda-Compatible ChromeDriver

We provide developers with an easily accessible ChromeDriver for local test writing and debugging. When tests run on AWS, we configure ChromeDriver to run them in Lambda instead.

To configure ChromeDriver, we first need to tell ChromeDriver where to find the Chrome binary. Because we know that ChromeDriver is going to be unzipped into the root task directory, we should point the ChromeDriver configuration at that location.

The settings for getting ChromeDriver running are mostly related to Chrome, which must have its working directories pointed at the /tmp/ folder.

Start with the default DesiredCapabilities for ChromeDriver, and then add the following settings to enable your ChromeDriver to start in Lambda.

public ChromeDriver createLambdaChromeDriver() {
   ChromeOptions options = new ChromeOptions();

   // Set the location of the chrome binary from the resources folder
   options.setBinary("/var/task/chrome");

   // Include these settings to allow Chrome to run in Lambda
   options.addArguments("--disable-gpu");
   options.addArguments("--headless");
   options.addArguments("--window-size=1366,768");
   options.addArguments("--single-process");
   options.addArguments("--no-sandbox");
   options.addArguments("--user-data-dir=/tmp/user-data");
   options.addArguments("--data-path=/tmp/data-path");
   options.addArguments("--homedir=/tmp");
   options.addArguments("--disk-cache-dir=/tmp/cache-dir");
  
   DesiredCapabilities desiredCapabilities = DesiredCapabilities.chrome();
   desiredCapabilities.setCapability(ChromeOptions.CAPABILITY, options);
  
   return new ChromeDriver(desiredCapabilities);
}

Executing Tests in Parallel

You can approach parallel test execution in Lambda in many different ways. Your approach depends on the structure and design of your test suite. For our solution, we implemented a custom test runner that uses reflection and JUnit libraries to create a list of test cases we want run. When we have the list, we create a TestRequest object to pass into the Lambda function that we have deployed. In this TestRequest, we place the class name, test method, and the test run identifier. When the Lambda function receives this TestRequest, our LambdaTestHandler generates and runs the JUnit test. After the test is complete, the test result is sent to the test runner. The test runner compiles a result after all of the tests are complete. By executing the same Lambda function multiple times with different test requests, we can effectively run the entire test suite in parallel.
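
As an illustration of that flow, here is a stripped-down sketch of how a runner can fan test requests out to Lambda in parallel using the AWS SDK for Java. It is not our actual runner: the function name is a placeholder, the TestRequest payload is hand-built JSON, and the method simply returns the raw responses for aggregation.

// Hypothetical sketch: invoke the deployed Lambda test handler once per test, in parallel
private static List<String> runSuiteOnLambda(List<String> testIds, String runId) throws Exception {
   AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();
   ExecutorService pool = Executors.newFixedThreadPool(50);
   List<Future<String>> pending = new ArrayList<>();

   for (String testId : testIds) {   // e.g. "com.example.LoginTest#testValidLogin"
       String[] parts = testId.split("#");
       String payload = String.format(
           "{\"testClass\":\"%s\",\"testMethod\":\"%s\",\"runId\":\"%s\"}",
           parts[0], parts[1], runId);

       pending.add(pool.submit(() -> {
           InvokeRequest request = new InvokeRequest()
               .withFunctionName("LambdaTestHandler")   // placeholder: the deployed function's name
               .withPayload(payload);
           InvokeResult result = lambda.invoke(request);
           return StandardCharsets.UTF_8.decode(result.getPayload()).toString();
       }));
   }

   List<String> results = new ArrayList<>();
   for (Future<String> future : pending) {   // aggregate each Lambda execution's response
       results.add(future.get());
   }
   pool.shutdown();
   return results;
}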

To get screenshots and other test data, we pipe those files during test execution to an S3 bucket under the test run identifier prefix. When the tests are complete, we link the files to each test execution in the report generated from the test run. This lets us easily investigate test executions.
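
For example, capturing a screenshot inside the Lambda function and dropping it under the run identifier prefix can look something like this. The bucket name and key layout are illustrative, not the ones we actually use.

// Hypothetical helper: store a screenshot in S3 under the test run's prefix
private static void uploadScreenshot(ChromeDriver driver, String runId, String testName) {
   AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
   File screenshot = driver.getScreenshotAs(OutputType.FILE);   // Selenium writes a temporary PNG
   s3.putObject("ui-test-artifacts", runId + "/" + testName + ".png", screenshot);
}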

Pro Tip: Dynamically Loading Binaries

AWS Lambda has a limit of 250 MB of uncompressed space for packaged Lambda functions. Because we have libraries and other dependencies in our test suite, we hit this limit when we tried to upload a function that contained Chrome and ChromeDriver (~140 MB). This test suite was not originally intended to be used with Lambda; otherwise, we would have scrutinized some of the included libraries. To get around this limit, we used the Lambda function’s temporary directory, which allows up to 500 MB of space at runtime. Downloading these binaries at runtime moves some of that space requirement into the temporary directory, leaving more room for libraries and dependencies. You can do this by grabbing Chrome and ChromeDriver from an S3 bucket and marking them as executable using built-in Java libraries. If you take this route, be sure to point to the new location of these executables when creating the ChromeDriver.

private static void downloadS3ObjectToExecutableFile(String key) throws IOException {
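   // Assumes a shared AmazonS3 client ("s3client") and Apache Commons IO's FileUtils; "s3-bucket-name" is a placeholder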
   File file = new File("/tmp/" + key);

   GetObjectRequest request = new GetObjectRequest("s3-bucket-name", key);

   FileUtils.copyInputStreamToFile(s3client.getObject(request).getObjectContent(), file);
   file.setExecutable(true);
}

Lambda-Selenium Project Source

We have compiled an open-source example that you can find in the Blackboard GitHub repository. Grab the code and try it out!

https://blackboard.github.io/lambda-selenium/

Conclusion

One year ago, one of our UI test suites took hours to run. Last month, it took 16 minutes. Today, it takes 39 seconds. Thanks to AWS Lambda, we can reduce our build times and perform automated UI testing at scale!

AWS Marketplace Adds Healthcare & Life Sciences Category

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-marketplace-adds-healthcare-life-sciences-category/

Wilson To and Luis Daniel Soto are our guest bloggers today, telling you about a new industry vertical category that is being added to the AWS Marketplace. Check it out!

-Ana


AWS Marketplace is a managed and curated software catalog that helps customers innovate faster and reduce costs by making it easy to discover, evaluate, procure, immediately deploy, and manage third-party software solutions. To continue supporting our customers, we’re now adding a new industry vertical category: Healthcare & Life Sciences.

This new category brings together best-of-breed software tools and solutions from our growing vendor ecosystem that have been adapted to, or built from the ground up, to serve the healthcare and life sciences industry.

Healthcare
Within the AWS Marketplace HCLS category, you can find solutions for clinical information systems, population health and analytics, and health administration and compliance services. Some offerings include:

  1. Allgress GetCompliant HIPAA Edition – Reduce the cost of compliance management and adherence by providing compliance professionals improved efficiency by automating the management of their compliance processes around HIPAA.
  2. ZH Healthcare BlueEHS – Deploy a customizable, ONC-certified EHR that empowers doctors to define their clinical workflows and treatment plans to enhance patient outcomes.
  3. Dicom Systems DCMSYS CloudVNA – DCMSYS Vendor Neutral Archive offers a cost-effective means of consolidating disparate imaging systems into a single repository, while providing enterprise-wide access and archiving of all medical images and other medical records.

Life Sciences

  1. National Instruments LabVIEW – Graphical system design software that provides scientists and engineers with the tools needed to create and deploy measurement and control systems through simple yet powerful networks.
  2. NCBI Blast – Analysis tools and datasets that allow users to perform flexible sequence similarity searches.
  3. Acellera AceCloud – Innovative tools and technologies for the study of biophysical phenomena. Acellera leverages the power of AWS Cloud to enable molecular dynamics simulations.

Healthcare and life sciences companies deal with huge amounts of data, and many of their data sets are some of the most complex in the world. From physicians and nurses to researchers and analysts, these users are typically hampered by their current systems. Their legacy software cannot let them efficiently store or effectively make use of the immense amounts of data they work with. And protracted and complex software purchasing cycles keep them from innovating at speed to stay ahead of market and industry trends. Data analytics and business intelligence solutions in AWS Marketplace offer specialized support for these industries, including:

  • Tableau Server – Enable teams to visualize across costs, needs, and outcomes at once to make the most of resources. The solution helps hospitals identify the impact of evidence-based medicine, wellness programs, and patient engagement.
  • TIBCO Spotfire and JasperSoft – TIBCO provides technical teams with powerful data visualization, data analytics, and predictive analytics for Amazon Redshift, Amazon RDS, and popular database sources via AWS Marketplace.
  • Qlik Sense Enterprise – Qlik enables healthcare organizations to explore clinical, financial, and operational data through visual analytics to discover insights that lead to improved care, reduced costs, and higher value delivered to patients.

With more than 5,000 listings across more than 35 categories, AWS Marketplace simplifies software licensing and procurement by enabling customers to accept user agreements, choose pricing options, and automate the deployment of software and associated AWS resources with just a few clicks. AWS Marketplace also simplifies billing for customers by delivering a single invoice detailing business software and AWS resource usage on a monthly basis.

With AWS Marketplace, we can help drive operational efficiencies and reduce costs in these ways:

  • Easily bring in new solutions to solve increasingly complex issues and gain quick insight into the huge amounts of data users handle.
  • Healthcare data will be more actionable. We offer pay-as-you-go solutions that make it considerably easier and more cost-effective to ingest, store, analyze, and disseminate data.
  • Evaluate and deploy healthcare and life sciences software in minutes with 1-Click ease. Users can now speed up their historically slow cycles in software procurement and implementation.
  • Pay only for what’s consumed — and manage software costs on your AWS bill.
  • In addition to the already secure AWS Cloud, AWS Marketplace offers industry-leading solutions to help you secure operating systems, platforms, applications and data that can integrate with existing controls in your AWS Cloud and hybrid environment.

Click here to see the current list of vendors in our new Healthcare & Life Sciences category.

Come on In
If you are a healthcare ISV and would like to list and sell your products on AWS, visit our Sell in AWS Marketplace page.

– Wilson To and Luis Daniel Soto

Get ‘Back to my Pi’ from anywhere with VNC Connect

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/get-back-to-my-pi-from-anywhere-with-vnc-connect/

In today’s guest blog, Andy Clark, Engineering Manager at RealVNC, introduces VNC Connect: a brand-new, and free, version of VNC that makes it simple to connect securely to your Raspberry Pi from anywhere in the world.

Since September 2016, every version of Raspbian has come with the built-in ability to remotely access and control your Raspberry Pi’s screen from another computer, using a technology called VNC. As the original inventors of this technology, RealVNC were happy to partner with Raspberry Pi to provide the community with the latest and most secure version of VNC for free.

We’re always looking to improve things, and one criticism of VNC technology over the years has been its steep learning curve. In particular, you need a bit of networking knowledge in order to connect to a Pi on the same network, and a heck of a lot to get a connection working across the internet!

This is why we developed VNC Connect, a brand-new version of VNC that allows you not only to make direct connections within your own networks, but also to make secure cloud-brokered connections back to your computer from anywhere in the world, with no specialist networking knowledge needed.

I’m delighted to announce that VNC Connect is available for Raspberry Pi, and from today is included in the Raspbian repositories. What’s more, we’ve added some extra features and functionality tailored to the Raspberry Pi community, and it’s all still free for non-commercial and educational use.

‘Back to my Pi’ and direct connections

The main change in VNC Connect is the ability to connect back to your Raspberry Pi from anywhere in the world, from a wide range of devices, without any complex port forwarding or IP addressing configuration. Our cloud service brokers a secure, end-to-end encrypted connection back to your Pi, letting you take control simply and securely from wherever you happen to be.

While this convenience is great for a lot of our standard home users, it’s not enough for the demands of the Raspberry Pi community! The Raspberry Pi is a great educational platform, and gets used in inventive and non-standard ways all the time. So on the Raspberry Pi, you can still make direct TCP connections the way you’ve always done with VNC. This way, you can have complete control over your project and learn all about IP networking if you want, or you can choose the simplicity of a cloud-brokered connection if that’s what you need.

Simpler connection management

Choosing the computer to connect to using VNC has historically been a fiddly process, requiring you to remember IP addresses or hostnames, or use a separate application to keep track of things. With VNC Connect we’ve introduced a new VNC Viewer with a built-in address book and enhanced UI, making it much simpler and quicker to manage your devices and connections. You now have the option of securely saving passwords for frequently used connections, and you can synchronise your entries with other VNC Viewers, making it easier to access your Raspberry Pi from other computers, tablets, or mobile devices.

Direct capture performance improvements

We’ve been working hard to make improvements to the experimental ‘direct capture’ feature of VNC Connect that’s unique to the Raspberry Pi. This feature allows you to see and control applications that render directly to the screen, like Minecraft, omxplayer, or even the terminal. You should find that performance of VNC in direct capture mode has improved, and is much more usable for interactive tasks.

Getting VNC Connect

VNC Connect is available in the Raspbian repositories from today, so running the following commands at a terminal will install it:

sudo apt-get update

sudo apt-get install realvnc-vnc-server realvnc-vnc-viewer

If you’re already running VNC Server or VNC Viewer, the same commands will install the update; then you’ll need to restart it to use the latest version.

There’s more information about getting set up on the RealVNC Raspberry Pi page. If you want to take advantage of the cloud connectivity, you’ll need to sign up for a RealVNC account, and you can do that here too.

Come and see us!

We’ve loved working with the Raspberry Pi Foundation and the community over the past few years, and making VNC Connect available for free on the Raspberry Pi is just the next phase of our ongoing relationship.

We’d love to get your feedback on Twitter, in the forums, or in the comments below. We’ll be at the Raspberry Pi Big Birthday Weekend again this year on 4-5 March in Cambridge, so please come and say hi and let us know how you use VNC Connect!

The post Get ‘Back to my Pi’ from anywhere with VNC Connect appeared first on Raspberry Pi.

Inclusive learning at South London Raspberry Jam

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/inclusive-learning-south-london-raspberry-jam/

Raspberry Pi Certified Educator Grace Owolade-Coombes runs the fantastically inclusive South London Raspberry Jam with her son Femi. In this guest post, she gives us the low-down on how the Jam got started. Enjoy!

Grace and Femi Owolade-Coombes

Our Jam has been running for over a year now; we’ve had three really big events and one smaller family hack day. Let me begin by telling you about how the idea of running a Jam arose in the first place.

Around three years ago, I read about how coding was going to be part of the curriculum in primary and secondary schools and, as a teacher in the FE sector, I was intrigued. As I also had a young and inquisitive son, who was at primary school at the time, I felt that we should investigate further.

Grace visited the National STEM Learning Centre in York for a course which introduced her to coding.

I later attended a short course at the National STEM Learning Centre in York, during which one of the organisers told me about the Raspberry Pi Foundation; he suggested I come to a coding event back at the Centre a few weeks later with my family. We did, and Femi loved the Minecraft hack.

Note from Alex: not the actual Minecraft hack but I’ll be having words with our resource gurus because this would be brilliant!

The first Raspberry Jam we attended was in Southend with Andy Melder and the crew: it showed us just how welcoming the Jam community can be. Then I was lucky enough to attend Picademy, which truly was a transformative experience. Ben Nuttall showed me how to tweet photographs with the Pi, which was the beginning of me using Twitter. I particularly loved Clive Beale’s physical computing workshop which I took back and delivered to Femi.

Grace Owolade-Coombes with Carrie Anne Philbin

Picademy gave Grace the confidence to deliver Raspberry Pi training herself.

After Picademy, I tweeted that I was now a Raspberry Pi Certified Educator and immediately got a request from Dragon Hall, Covent Garden to run a workshop – I didn’t realise they meant in three days’ time! Femi and I bit the bullet and ran our first physical computing workshop together. We haven’t looked back since.

Femi went on to join the Festival of Code, which he loved.

Around this time, Femi was attending a Tourettes Action support group, where young people with Tourette’s syndrome, like him, met up. Femi wanted to share his love of coding with them, but he felt that they might be put off as it can be difficult to spend extended amounts of time in public places when you have tics. He asked if we could set up a Jam that was inclusive: it would be both autism- and Tourette’s syndrome-friendly. There was such a wealth of support, advice, and volunteers who would help us set up that it really wasn’t a hard decision to make.

Grace and Femi set up an Indiegogo campaign to help fund their Jam.

We were fortunate to have met Marc Grossman during the Festival of Code: with his amazing skills and experience with Code Club, we set up the Jam together. For our first Jam, we had young coding pioneers from the community, such as Yasmin Bey and Isreal Genius, join us. We were also blessed with David Whale‘s company, and Kano even did a workshop with us. There are too many amazing people to mention.

Grace and Femi held the first South London Raspberry Jam, an autism- and Tourette’s syndrome-friendly event for five- to 15-year-olds, at Deptford Library in October 2016, with 75 participants.

We held a six-session Code Club in Catford Library followed by a second Jam in a local community centre, focusing on robotics with the CamJam EduKit 3, as well as the usual Minecraft hacks.

Our third Jam was in conjunction with Kano, at their HQ, and included a SEN TeachMeet with Computing at School (CAS). Joseph Birks, the inventor of the Crumble, delivered a great robot workshop, and Paul Haynes delivered a Unity workshop too.

Grace and Femi’s latest event was a family hack day in conjunction with the London Connected Learning Centre.

Femi often runs workshops at our Jams. We try to encourage young coders to follow in Femi’s footsteps and deliver sessions too: it works best when young people learn from each other, and we hope the confidence they develop will enable them to help their friends and classmates to enjoy coding too.

Inclusivity, diversity, and accessibility are at the heart of our Jams, and we are proud to have Tourettes Action and Ambitious about Autism as partners.

Tourettes Action on Twitter

All welcome to this event in London SAT, 12 DEC 2015 AT 13:00 2nd South London Raspberry Jam 2015 Bellingham… https://t.co/TPYC9Ontot

Now we are taking stock of our amazing journey to learn about coding, and preparing to introduce it to more people. Presently we are looking to collaborate with the South London Makerspace and the Digital Maker Collective, who have invited Femi to deliver robot workshops at Tate Modern. We are also looking to progress to more project-based activities which fit with the Raspberry Pi Foundation’s Pioneers challenges.

South London Raspberry Jam has participated in both Pi Wars and Astro Pi.

Femi writes about all the events we attend or run: see hackerfemo.com or check out our website and sign up to our mailing list to keep informed. We are just about to gather a team for the Pioneers project, so watch out for updates.

The post Inclusive learning at South London Raspberry Jam appeared first on Raspberry Pi.

Squarespace OCSP Stapling Implementation

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2016/10/24/squarespace-ocsp-impl.html

We’re excited that Squarespace has decided to protect the millions of sites they host with HTTPS! While talking with their team we learned they were deploying OCSP Stapling from the get-go, and we were impressed. We asked them to share their experience with our readers in our first guest blog post (hopefully more to come).

– Josh Aas, Executive Director, ISRG / Let’s Encrypt

OCSP stapling is an alternative approach to the Online Certificate Status Protocol (OCSP) for checking the revocation status of certificates. It allows the presenter of a certificate to bear the resource cost involved in providing OCSP responses by appending (“stapling”) a time-stamped OCSP response signed by the CA to the initial TLS handshake, eliminating the need for clients to contact the CA. The certificate holder queries the OCSP responder at regular intervals and caches the responses.
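
A quick way to check whether a server is stapling OCSP responses is to ask for one with OpenSSL’s s_client and look for the “OCSP response” section in the output (example.com is a placeholder hostname):

openssl s_client -connect example.com:443 -servername example.com -status < /dev/null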

Traditional OCSP requires the CA to provide responses to each client that requests certificate revocation information. When a certificate is issued for a popular website, a large number of queries start hitting the CA’s OCSP responder server. This poses a privacy risk, because information must pass through a third party, and that third party can determine who browsed which site at what time. It can also create performance problems, since most browsers will contact the OCSP responder before loading anything on the web page. OCSP stapling is efficient because the user doesn’t have to make a separate connection to the CA, and it’s safe because the OCSP response is digitally signed, so it cannot be modified without detection.

OCSP Stapling @ Squarespace

As we were planning our rollout of SSL for all custom domains on the Squarespace platform, we decided that we wanted to support OCSP stapling at the time of launch. A reverse proxy built by our Edge Infrastructure team is responsible for terminating all SSL traffic; it’s written in Java and powered by Netty. Unfortunately, Java JDK 8 has only preliminary, client-only OCSP stapling support. JDK 9 introduces OCSP stapling with JEP 249, but it is not available yet.

Our reverse proxy does not use the JDK’s SSL implementation. Instead, we use OpenSSL via netty-tcnative. At this time, neither the original tcnative nor Netty’s fork has OCSP stapling support. However, the tcnative library exposes the inner workings of OpenSSL, including the address pointers for the SSL context and engine. We were able to use JNI to extend the netty-tcnative library and add OCSP stapling support using the tlsext_status OpenSSL C functions. Our extension is a standalone library, but we could equally well fold it into the netty-tcnative library itself. If there is interest, we can contribute it upstream as part of Netty’s next API-breaking development cycle.

One of the goals of our initial OCSP stapling implementation was to take the biggest load off the OCSP responder’s operator, in this case Let’s Encrypt. Due to the nature of the website traffic on our platform, we have a very long tail. At least to start, we don’t pre-fetch and cache all OCSP responses. We decided to fetch OCSP responses asynchronously, and we try to do it only if more than one client is going to use the response in the foreseeable future. Bloom filters are utilized to identify “one-hit wonders” that are not worthy of being cached.
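
As a rough illustration of that “one-hit wonder” filter, here is a sketch using Guava’s BloomFilter rather than our actual code; scheduleOcspFetch stands in for whatever kicks off the asynchronous fetch and caching.

// Hypothetical sketch: only fetch and cache an OCSP staple for certificates seen more than once
private final BloomFilter<String> seenBefore = BloomFilter.create(
       Funnels.stringFunnel(StandardCharsets.UTF_8), 10_000_000, 0.01);

void onHandshake(String certificateFingerprint) {
   if (seenBefore.mightContain(certificateFingerprint)) {
       // Likely a repeat visitor: worth fetching and caching a staple asynchronously
       scheduleOcspFetch(certificateFingerprint);
   } else {
       // Probably a one-hit wonder: just remember it for next time
       seenBefore.put(certificateFingerprint);
   }
}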

Squarespace invests in the security of our customers’ websites and their visitors. We will continue to make refinements to our OCSP stapling implementation to eventually have OCSP staples on all requests. For a more in-depth discussion of the security challenges of traditional OCSP, we recommend this blog post: https://www.imperialviolet.org/2014/04/19/revchecking.html.