Tag Archives: spotify

IoT Inspector Tool from Princeton

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/iot_inspector_t.html

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They’ve already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers’ servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook – even though we did not sign in or create accounts with any of them.
  • Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest’s website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.
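IoT Inspector does this monitoring for you, but the core idea – watch the DNS lookups a device makes and note which domains come up – is simple to sketch. The snippet below is a minimal illustration using Scapy, not part of the Princeton tool, and it assumes the device's traffic passes through (or is mirrored to) the machine running it:

from scapy.all import sniff, DNSQR

seen = set()

def log_dns(pkt):
    # Print each distinct domain the device asks its resolver about
    if pkt.haslayer(DNSQR):
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        if name not in seen:
            seen.add(name)
            print(name)

# Capture DNS queries on the interface carrying the device's traffic
sniff(iface="eth0", filter="udp port 53", prn=log_dns, store=False)

Even this crude version makes the third-party problem visible: run it while a smart TV boots and the list of contacted domains grows quickly.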

Their first two findings are that “Many IoT devices lack basic encryption and authentication” and that “User behavior can be inferred from encrypted IoT device traffic.” No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

Danish Traffic to Pirate Sites Increases 67% in Just a Year

Post Syndicated from Andy original https://torrentfreak.com/danish-traffic-to-pirate-sites-increases-67-in-just-a-year-180501/

For close to 20 years, rightsholders have tried to stem the tide of mainstream Internet piracy. Yet despite increasingly powerful enforcement tools, infringement continues on a grand scale.

While the problem is global, rightsholder groups often zoom in on their home turf, to see how the fight is progressing locally. Covering Denmark, the Rights Alliance Data Report 2017 paints a fairly pessimistic picture.

Published this week, the industry study – which uses SimilarWeb and MarkMonitor data – finds that Danes visited 2,000 leading pirate sites 596 million times in 2017. That represents a 67% increase over the 356 million visits to unlicensed platforms made by citizens during 2016.

The report notes that, at least in part, this explosive growth can be attributed to mobile-compatible sites and services, which make it easier than ever to consume illicit content on the move, as well as at home.

In a sea of unauthorized streaming sites, Rights Alliance highlights one platform above all the others as a particularly bad influence in 2017 – 123movies (also known as GoMovies and GoStream, among others).

“The popularity of this service rose sharply in 2017 from 40 million visits in 2016 to 175 million visits in 2017 – an increase of 337 percent, of which most of the traffic originates from mobile devices,” the report notes.

123movies recently announced its closure but before that the platform was subjected to web-blocking in several jurisdictions.

Rights Alliance says that Denmark has one of the most effective blocking systems in the world but that still doesn’t stop huge numbers of people from consuming pirate content from sites that aren’t yet blocked.

“Traffic to infringing sites is overwhelming, and therefore blocking a few sites merely takes the top of the illegal activities,” Rights Alliance chief Maria Fredenslund informs TorrentFreak.

“Blocking is effective by stopping 75% of traffic to blocked sites but certainly, an upscaled effort is necessary.”

Rights Alliance also views the promotion of legal services as crucial to its anti-piracy strategy so when people visit a blocked site, they’re also directed towards legitimate platforms.

“That is why we are working at the moment with Denmark’s Ministry of Culture and ISPs on a campaign ‘Share With Care 2’ which promotes legal services e.g. by offering a search function for legal services which will be placed in combination with the signs that are put on blocked websites,” the anti-piracy group notes.

But even with such measures in place, the thirst for unlicensed content is great. In 2017 alone, 500 of the most popular films and TV shows were downloaded from P2P networks like BitTorrent more than 15 million times from Danish IP addresses, up from 11.9 million in 2016.

Given the dramatic rise in visits to pirate sites overall, the suggestion is that plenty of consumers are still getting through. Rights Alliance says the blocks are further undermined by people who don’t use their ISP’s DNS service, which is the method used to block sites in Denmark.

Additionally, interest in VPNs and similar anonymization and bypass-capable technologies is on the increase. Between 3.5% and 5% of Danish Internet users currently use a VPN, a number that’s expected to go up. Furthermore, Rights Alliance reports greater interest in “closed” pirate communities.

“The data is based on closed [BitTorrent] networks. We also address the challenges with private communities on Facebook and other [social media] platforms,” Fredenslund explains.

“Due to the closed doors of these platforms it is not possible for us to say anything precisely about the amount of infringing activities there. However, we receive an increasing number of notices from our members who discover that their products are distributed illegally and also we do an increased monitoring of these platforms.”

But while more established technologies such as torrents and regular web-streaming continue in considerable volumes, newer IPTV-style services accessible via apps and dedicated platforms are also gaining traction.

“The volume of visitors to these services’ websites has been sharply rising in 2017 – an increase of 84 percent from January to December,” Rights Alliance notes.

“Even though the number of visitors does not say anything about actual consumption, as users usually only visit pages one time to download the program, the number gives an indication that the interest in IPTV is increasing.”

To combat this growth market, Rights Alliance says it wants to establish web-blockades against sites hosting the software applications.

Also on the up are visits to platforms offering live sports illegally. In 2017, Danish IP addresses made 2.96 million visits to these services, corresponding to almost 250,000 visits per month and representing an annual increase of 28%.

Rights Alliance informs TF that in future a ‘live’ blocking mechanism similar to the one used by the Premier League in the UK could be deployed in Denmark.

“We already have a dynamic blocking system, and we see an increasing demand for illegal TV products, so this could be a natural next step,” Fredenslund explains.

Another small but perhaps significant detail is how users are accessing pirate sites. According to the report, large numbers of people are now visiting platforms directly, with more than 50% arriving that way rather than via referrals from search engines such as Google.

In terms of deterrence, the Rights Alliance report sticks to the tried-and-tested approaches seen so often in the anti-piracy arena.

Firstly, the group notes that it’s increasingly encountering people who pay for legal services such as Netflix and Spotify and believe that this entitles them to grab something extra from a pirate site. However, in common with similar organizations globally, the group counters that pirate sites can serve malware or have other nefarious business interests behind the scenes, so people should stay away.

Whether significant volumes will heed this advice will remain to be seen but if a 67% increase last year is any predictor of the future, piracy is here to stay – and then some. Rights Alliance says it is ready for the challenge but will need some assistance to achieve its goals.

“As it is evident from the traffic data, criminal activities are not something that we, private companies (right holders in cooperation with ISPs), can handle alone,” Fredenslund says.

“Therefore, we are very pleased that DK Government recently announced that the IP taskforce which was set down as a trial period has now been made permanent. In that regard it is important and necessary that the police will also obtain the authority to handle blocking of massively infringing websites. Police do not have the authority to carry out blocking as it is today.”

The full report is available here (Danish, PDF).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

qrocodile: the kid-friendly Sonos system

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/qrocodile-kid-friendly-sonos-system/

Chris Campbell’s qrocodile uses a Raspberry Pi, a camera, and QR codes to allow Chris’s children to take full control of the Sonos home sound system. And we love it!

qrocodile

Introducing qrocodile, a kid-friendly system for controlling your Sonos with QR codes. Source code is available at: https://github.com/chrispcampbell/qrocodile Learn more at: http://labonnesoupe.org https://twitter.com/chrscmpbll

Sonos

SONOS is SONOS backwards. It’s also SONOS upside down, and SONOS upside down and backwards. I just learnt that this means SONOS is an ambigram. Hurray for learning!

Sonos (the product, not the ambigram) is a multi-room speaker system controlled by an app. Speakers in different rooms can play different tracks or join forces to play one track for a smooth musical atmosphere throughout your home.

sonos raspberry pi

If you have a Sonos system in your home, I would highly recommend setting up access to it from outside your home and having it play the Imperial March as you walk through the front door. Why wouldn’t you?

qrocodile

One day, Chris’s young children wanted to play an album while eating dinner. That one request inspired him to create qrocodile, a musical jukebox that lets his children control which songs Sonos plays, and where it plays them, via QR codes.

It all started one night at the dinner table over winter break. The kids wanted to put an album on the turntable (hooked up to the line-in on a Sonos PLAY:5 in the dining room). They’re perfectly capable of putting vinyl on the turntable all by themselves, but using the Sonos app to switch over to play from the line-in is a different story.

The QR codes represent commands (such as Play in the living room, Use the turntable, or Build a song list) and artists (such as my current musical crush Courtney Barnett or the Ramones).

qrocodile raspberry Pi

A camera attached to a Raspberry Pi 3 feeds the Pi the QR code that’s presented, and the Pi runs a script that recognises the code and sends instructions to Sonos accordingly.
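Chris has put his own scripts on GitHub (see the end of the post); as a rough illustration of what that recognition step looks like, a few lines of Python with the pyzbar library can pull the payload out of a captured frame and map it to an action. The command strings below are made up for the example and aren't Chris's actual scheme:

from PIL import Image
from pyzbar.pyzbar import decode

# Hypothetical command payloads
COMMANDS = {
    "cmd:turntable": "switch the dining-room PLAY:5 to line-in",
    "cmd:livingroom": "send playback to the living room",
}

def read_code(path):
    # decode() returns a list of detected codes; take the first, if any
    results = decode(Image.open(path))
    return results[0].data.decode() if results else None

payload = read_code("/tmp/frame.jpg")  # a frame grabbed from the Pi camera
if payload in COMMANDS:
    print("Would tell Sonos to:", COMMANDS[payload])
elif payload and payload.startswith("spotify:"):
    print("Would queue Spotify URI:", payload)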


Chris used a custom version of the Sonos HTTP API created by Jimmy Shimizu to gain access to Sonos from his Raspberry Pi. To build the QR codes, he wrote a script that utilises the Spotify API via the Spotipy library.
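To give a flavour of how that generation step can work (this is a sketch under assumed credentials, not Chris's script), Spotipy can look up album URIs and the qrcode library can turn each one into a printable code:

import qrcode
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Reads SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET from the environment
sp = spotipy.Spotify(client_credentials_manager=SpotifyClientCredentials())

results = sp.search(q="artist:Courtney Barnett", type="album", limit=5)
for album in results["albums"]["items"]:
    # Encode the Spotify URI (e.g. spotify:album:...) as a QR code image
    qrcode.make(album["uri"]).save(album["id"] + ".png")
    print("Wrote code for", album["name"])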

His children are now able to present recognisable album art to the camera in order to play their desired track.

It’s been interesting seeing the kids putting the thing through its paces during their frequent “dance parties”, queuing up their favorite songs and uncovering new ones. I really like that they can use tangible objects to discover music in much the same way I did when I was their age, looking through my parents’ records, seeing which ones had interesting artwork or reading the song titles on the back, listening and exploring.

Chris has provided all the scripts for the project, along with a tutorial on how to set it up, on his GitHub — have a look if you want to recreate it or learn more about his code. Also check out Chris’ website for more on qrocodile and to see some of his other creations.

The post qrocodile: the kid-friendly Sonos system appeared first on Raspberry Pi.

MagPi 66: Raspberry Pi media projects for your home

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-66-media-pi/

Hey folks, Rob from The MagPi here! Issue 66 of The MagPi is out right now, with the ultimate guide to powering your home media with Raspberry Pi. We think the Pi is the perfect replacement or upgrade for many media devices, so in this issue we show you how to build a range of Raspberry Pi media projects.

MagPi 66

Yes, it does say Pac-Man robotics on the cover. They’re very cool.

The article covers file servers for sharing media across your network, music streaming boxes that connect to Spotify, a home theatre PC to make your TV-watching more relaxing, a futuristic Pi-powered moving photoframe, and even an Alexa voice assistant to control all these devices!

More to see

That’s not all though — The MagPi 66 also shows you how to build a Raspberry Pi cluster computer, how to control LEGO robots using the GPIO, and why your Raspberry Pi isn’t affected by Spectre and Meltdown.




In addition, you’ll also find our usual selection of product reviews and excellent project showcases.

Get The MagPi 66

Issue 66 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Want to support the Raspberry Pi Foundation and the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

I hope you enjoy this issue! See you next month.

The post MagPi 66: Raspberry Pi media projects for your home appeared first on Raspberry Pi.

N O D E’s Handheld Linux Terminal

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/n-o-d-es-handheld-linux-terminal/

Fit an entire Raspberry Pi-based laptop into your pocket with N O D E’s latest Handheld Linux Terminal build.

The Handheld Linux Terminal Version 3 (Portable Pi 3)

Hey everyone. Today I want to show you the new version 3 of the Handheld Linux Terminal. It’s taken a long time, but I’m finally finished. This one takes all the things I’ve learned so far, and improves on many of the features from the previous iterations.

N O D E

With interests in modding tech, exploring the boundaries of the digital world, and open source, YouTuber N O D E has become one to watch within the digital maker world. He maintains a channel focused on “the transformative power of technology.”

“Understanding that electronics isn’t voodoo is really powerful”, he explains in his Patreon video. “And learning how to build your own stuff opens up so many possibilities.”

NODE Youtube channel logo - Handheld Linux Terminal v3

The topics of his videos range from stripped-down devices, upgraded tech, and security upgrades, to the philosophy behind technology. He also provides weekly roundups of, and discussions about, new releases.

Essentially, if you like technology, you’ll like N O D E.

Handheld Linux Terminal v3

Subscribers to N O D E’s YouTube channel, of whom there are currently over 44,000, will have seen him documenting variations of this handheld build throughout the last year. By stripping down a Raspberry Pi 3, and incorporating a Zero W, he’s been able to create interesting projects while always putting functionality first.

Handheld Linux Terminal v3

With the third version of his terminal, N O D E has taken experiences gained from previous builds to create something of which he’s obviously extremely proud. And so he should be. The v3 handheld is impressively small considering he managed to incorporate a fully functional keyboard with mouse, a 3.5″ screen, and a fan within the 3D-printed body.

Handheld Linux Terminal v3

“The software side of things is where it really shines though, and the Pi 3 is more than capable of performing most non-intensive tasks,” N O D E goes on to explain. He demonstrates various applications running on Raspbian, plus other operating systems he has pre-loaded onto additional SD cards:

“I have also installed Exagear Desktop, which allows it to run x86 apps too, and this works great. I have x86 apps such as Sublime Text and Spotify running without any problems, and it’s technically possible to use Wine to also run Windows apps on the device.”

We think this is an incredibly neat build, and we can’t wait to see where N O D E takes it next!

The post N O D E’s Handheld Linux Terminal appeared first on Raspberry Pi.

Deploying Java Microservices on Amazon EC2 Container Service

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/deploying-java-microservices-on-amazon-ec2-container-service/

This post and accompanying code graciously contributed by:

  • Huy Huynh, Sr. Solutions Architect
  • Magnus Bjorkman, Solutions Architect

Java is a popular language used by many enterprises today. To simplify and accelerate Java application development, many companies are moving from a monolithic to microservices architecture. For some, it has become a strategic imperative. Containerization technology, such as Docker, lets enterprises build scalable, robust microservice architectures without major code rewrites.

In this post, I cover how to containerize a monolithic Java application to run on Docker. Then, I show how to deploy it on AWS using Amazon EC2 Container Service (Amazon ECS), a high-performance container management service. Finally, I show how to break the monolith into multiple services, all running in containers on Amazon ECS.

Application Architecture

For this example, I use the Spring Pet Clinic, a monolithic Java application for managing a veterinary practice. It is a simple REST API, which allows the client to manage and view Owners, Pets, Vets, and Visits.

It is a simple three-tier architecture:

  • Client
    You simulate this by using curl commands.
  • Web/app server
    This is the Java and Spring-based application that you run using the embedded Tomcat. As part of this post, you run this within Docker containers.
  • Database server
    This is the relational database for your application that stores information about owners, pets, vets, and visits. For this post, use MySQL RDS.

I decided to not put the database inside a container as containers were designed for applications and are transient in nature. The choice was made even easier because you have a fully managed database service available with Amazon RDS.

RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity that you request to installing the database software. After your database is up and running, RDS automates common administrative tasks, such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover.

Walkthrough

You can find the code for the example covered in this post at amazon-ecs-java-microservices on GitHub.

Prerequisites

You need the following to walk through this solution:

  • An AWS account
  • An access key and secret key for a user in the account
  • The AWS CLI installed

Also, install the latest versions of the following:

  • Java
  • Maven
  • Python
  • Docker

Step 1: Move the existing Java Spring application to a container deployed using Amazon ECS

First, move the existing monolith application to a container and deploy it using Amazon ECS. This is a great first step, because you get some benefits even before breaking the monolith apart:

  • An improved pipeline. The container also allows an engineering organization to create a standard pipeline for the application lifecycle.
  • No mutations to machines.

You can find the monolith example at 1_ECS_Java_Spring_PetClinic.

Container deployment overview

The following diagram is an overview of what the setup looks like for Amazon ECS and related services:

This setup consists of the following resources:

  • The client application that makes a request to the load balancer.
  • The load balancer that distributes requests across all available ports and instances registered in the application’s target group using round-robin.
  • The target group that is updated by Amazon ECS to always have an up-to-date list of all the service containers in the cluster. This includes the port on which they are accessible.
  • One Amazon ECS cluster that hosts the container for the application.
  • A VPC network to host the Amazon ECS cluster and associated security groups.

Each container has a single application process that is bound to port 8080 within its namespace. In reality, all the containers are exposed on a different, randomly assigned port on the host.

The architecture is containerized but still monolithic, because each container has the same features as all the others.

The following is also part of the solution but not depicted in the above diagram:

  • One Amazon EC2 Container Registry (Amazon ECR) repository for the application.
  • A service/task definition that spins up containers on the instances of the Amazon ECS cluster.
  • A MySQL RDS instance that hosts the application’s schema. The information about the MySQL RDS instance is sent in through environment variables to the containers, so that the application can connect to the MySQL RDS instance.

I have automated setup with the 1_ECS_Java_Spring_PetClinic/ecs-cluster.cf AWS CloudFormation template and a Python script.

The Python script calls the CloudFormation template for the initial setup of the VPC, Amazon ECS cluster, and RDS instance. It then extracts the outputs from the template and uses those for API calls to create Amazon ECR repositories, tasks, services, Application Load Balancer, and target groups.
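As a rough sketch of what that glue code looks like (this is not the repository's setup.py, and the stack and repository names here are made up), the pattern with boto3 is: read the stack outputs, then create the resources the containers need.

import boto3

region = "us-east-1"  # assumed region
cfn = boto3.client("cloudformation", region_name=region)
ecr = boto3.client("ecr", region_name=region)

# Pull the outputs (VPC, cluster, RDS endpoint, ...) from the CloudFormation stack
stack = cfn.describe_stacks(StackName="petclinic-ecs")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print("Stack outputs:", outputs)

# Create an ECR repository for the application image
repo = ecr.create_repository(repositoryName="spring-petclinic-rest")
print("Push images to:", repo["repository"]["repositoryUri"])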

Environment variables and Spring properties binding

As part of the Python script, you pass in a number of environment variables to the container as part of the task/container definition:

'environment': [
    {
        'name': 'SPRING_PROFILES_ACTIVE',
        'value': 'mysql'
    },
    {
        'name': 'SPRING_DATASOURCE_URL',
        'value': my_sql_options['dns_name']
    },
    {
        'name': 'SPRING_DATASOURCE_USERNAME',
        'value': my_sql_options['username']
    },
    {
        'name': 'SPRING_DATASOURCE_PASSWORD',
        'value': my_sql_options['password']
    }
],

The preceding environment variables work in concert with the Spring property system. The value in the variable SPRING_PROFILES_ACTIVE makes Spring use the MySQL version of the application property file. The other environment variables override the following properties in that file:

  • spring.datasource.url
  • spring.datasource.username
  • spring.datasource.password

Optionally, you can also encrypt sensitive values by using Amazon EC2 Systems Manager Parameter Store. Instead of handing in the password, you pass in a reference to the parameter and fetch the value as part of the container startup. For more information, see Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks.
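A minimal sketch of that pattern, assuming a SecureString parameter has already been created (the parameter name below is hypothetical): the container is given the parameter's name rather than the secret itself, and fetches the decrypted value at startup.

import boto3

ssm = boto3.client("ssm")

# Fetch and decrypt the password at container startup
db_password = ssm.get_parameter(
    Name="/petclinic/spring.datasource.password",
    WithDecryption=True,
)["Parameter"]["Value"]
# Hand db_password to the application instead of passing the secret
# through the task definition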

Spotify Docker Maven plugin

Use the Spotify Docker Maven plugin to create the image and push it directly to Amazon ECR. This lets you build and push the image as part of the regular Maven build, and integrates image generation into the overall build process. Use an explicit Dockerfile as input to the plugin.

# Slim Alpine base image with Oracle JDK 8
FROM frolvlad/alpine-oraclejdk8:slim
# Temp volume used by Spring Boot's embedded Tomcat
VOLUME /tmp
# Copy the fat JAR produced by the Maven build into the image
ADD spring-petclinic-rest-1.7.jar app.jar
# Give the JAR a real modification time (files ADDed to an image default to the epoch)
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
# Use urandom as the entropy source so Tomcat startup doesn't block on /dev/random
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

The Python script discussed earlier uses the AWS CLI to authenticate you with AWS. The script places the token in the appropriate location so that the plugin can work directly against the Amazon ECR repository.

Test setup

You can test the setup by running the Python script:
python setup.py -m setup -r <your region>

After the script has successfully run, you can test by querying an endpoint:
curl <your endpoint from output above>/owner

You can clean this up before going to the next section:
python setup.py -m cleanup -r <your region>

Step 2: Converting the monolith into microservices running on Amazon ECS

The second step is to convert the monolith into microservices. For a real application, you would likely not do this as one step, but re-architect an application piece by piece. You would continue to run your monolith but it would keep getting smaller for each piece that you are breaking apart.

By migrating to microservices, you would get four benefits associated with microservices:

  • Isolation of crashes
    If one microservice in your application is crashing, then only that part of your application goes down. The rest of your application continues to work properly.
  • Isolation of security
    When microservice best practices are followed, the result is that if an attacker compromises one service, they only gain access to the resources of that service. They can’t horizontally access other resources from other services without breaking into those services as well.
  • Independent scaling
    When features are broken out into microservices, then the amount of infrastructure and number of instances of each microservice class can be scaled up and down independently.
  • Development velocity
    In a monolith, adding a new feature can potentially impact every other feature that the monolith contains. On the other hand, a proper microservice architecture has new code for a new feature going into a new service. You can be confident that any code you write won’t impact the existing code at all, unless you explicitly write a connection between two microservices.

Find the microservices example at 2_ECS_Java_Spring_PetClinic_Microservices.
You break apart the Spring Pet Clinic application by creating a microservice for each REST API operation, as well as creating one for the system services.

Java code changes

Comparing the project structure between the monolith and the microservices version, you can see that each service is now its own separate build.
First, the monolith version:

You can clearly see how each API operation is its own subpackage under the org.springframework.samples.petclinic package, all part of the same monolithic application.
This changes as you break it apart in the microservices version:

Now, each API operation is its own separate build, which you can build independently and deploy. You have also duplicated some code across the different microservices, such as the classes under the model subpackage. This is intentional as you don’t want to introduce artificial dependencies among the microservices and allow these to evolve differently for each microservice.

Also, make the dependencies among the API operations more loosely coupled. In the monolithic version, the components are tightly coupled and use object-based invocation.

Here is an example of this from the OwnerController operation, where the class is directly calling PetRepository to get information about pets. PetRepository is the Repository class (Spring data access layer) to the Pet table in the RDS instance for the Pet API:

@RestController
class OwnerController {

    @Inject
    private PetRepository pets;
    @Inject
    private OwnerRepository owners;
    private static final Logger logger = LoggerFactory.getLogger(OwnerController.class);

    @RequestMapping(value = "/owner/{ownerId}/getVisits", method = RequestMethod.GET)
    public ResponseEntity<List<Visit>> getOwnerVisits(@PathVariable int ownerId){
        List<Pet> petList = this.owners.findById(ownerId).getPets();
        List<Visit> visitList = new ArrayList<Visit>();
        petList.forEach(pet -> visitList.addAll(pet.getVisits()));
        return new ResponseEntity<List<Visit>>(visitList, HttpStatus.OK);
    }
}

In the microservice version, call the Pet API operation and not PetRepository directly. Decouple the components by using interprocess communication; in this case, the Rest API. This provides for fault tolerance and disposability.

@RestController
class OwnerController {

    @Value("#{environment['SERVICE_ENDPOINT'] ?: 'localhost:8080'}")
    private String serviceEndpoint;

    @Inject
    private OwnerRepository owners;
    private static final Logger logger = LoggerFactory.getLogger(OwnerController.class);

    @RequestMapping(value = "/owner/{ownerId}/getVisits", method = RequestMethod.GET)
    public ResponseEntity<List<Visit>> getOwnerVisits(@PathVariable int ownerId){
        List<Pet> petList = this.owners.findById(ownerId).getPets();
        List<Visit> visitList = new ArrayList<Visit>();
        petList.forEach(pet -> {
            logger.info(getPetVisits(pet.getId()).toString());
            visitList.addAll(getPetVisits(pet.getId()));
        });
        return new ResponseEntity<List<Visit>>(visitList, HttpStatus.OK);
    }

    private List<Visit> getPetVisits(int petId){
        List<Visit> visitList = new ArrayList<Visit>();
        RestTemplate restTemplate = new RestTemplate();
        Pet pet = restTemplate.getForObject("http://"+serviceEndpoint+"/pet/"+petId, Pet.class);
        logger.info(pet.getVisits().toString());
        return pet.getVisits();
    }
}

You now have an additional method that calls the API. You are also handing in the service endpoint that should be called, so that you can easily inject dynamic endpoints based on the current deployment.

Container deployment overview

Here is an overview of what the setup looks like for Amazon ECS and the related services:

This setup consists of the following resources:

  • The client application that makes a request to the load balancer.
  • The Application Load Balancer that inspects the client request. Based on routing rules, it directs the request to an instance and port from the target group that matches the rule.
  • The Application Load Balancer that has a target group for each microservice. The target groups are used by the corresponding services to register available container instances. Each target group has a path, so when you call the path for a particular microservice, it is mapped to the correct target group. This allows you to use one Application Load Balancer to serve all the different microservices, accessed by the path. For example, https:///owner/* would be mapped and directed to the Owner microservice.
  • One Amazon ECS cluster that hosts the containers for each microservice of the application.
  • A VPC network to host the Amazon ECS cluster and associated security groups.

Because you are running multiple containers on the same instances, use dynamic port mapping to avoid port clashing. By using dynamic port mapping, the container is allocated an anonymous port on the host to which the container port (8080) is mapped. The anonymous port is registered with the Application Load Balancer and target group so that traffic is routed correctly.
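For reference, dynamic port mapping is requested in the container definition itself. A fragment in the same style as the environment block shown earlier might look like this (a sketch, not copied from the repository's script); a hostPort of 0 tells Amazon ECS to pick a free port on the instance:

'portMappings': [
    {
        'containerPort': 8080,   # the port the Spring app binds inside the container
        'hostPort': 0,           # 0 = let ECS assign a random host port (dynamic mapping)
        'protocol': 'tcp'
    }
],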

The following is also part of the solution but not depicted in the above diagram:

  • One Amazon ECR repository for each microservice.
  • A service/task definition per microservice that spins up containers on the instances of the Amazon ECS cluster.
  • A MySQL RDS instance that hosts the application’s schema. The information about the MySQL RDS instance is sent in through environment variables to the containers. That way, the application can connect to the MySQL RDS instance.

I have again automated setup with the 2_ECS_Java_Spring_PetClinic_Microservices/ecs-cluster.cf CloudFormation template and a Python script.

The CloudFormation template remains the same as in the previous section. In the Python script, you are now building five different Java applications, one for each microservice (also includes a system application). There is a separate Maven POM file for each one. The resulting Docker image gets pushed to its own Amazon ECR repository, and is deployed separately using its own service/task definition. This is critical to get the benefits described earlier for microservices.

Here is an example of the POM file for the Owner microservice:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.springframework.samples</groupId>
    <artifactId>spring-petclinic-rest</artifactId>
    <version>1.7</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.2.RELEASE</version>
    </parent>
    <properties>
        <!-- Generic properties -->
        <java.version>1.8</java.version>
        <docker.registry.host>${env.docker_registry_host}</docker.registry.host>
    </properties>
    <dependencies>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
        <!-- Spring and Spring Boot dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-cache</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Databases - Uses HSQL by default -->
        <dependency>
            <groupId>org.hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- caching -->
        <dependency>
            <groupId>javax.cache</groupId>
            <artifactId>cache-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.ehcache</groupId>
            <artifactId>ehcache</artifactId>
        </dependency>
        <!-- end of webjars -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.4.13</version>
                <configuration>
                    <imageName>${env.docker_registry_host}/${project.artifactId}</imageName>
                    <dockerDirectory>src/main/docker</dockerDirectory>
                    <useConfigFile>true</useConfigFile>
                    <registryUrl>${env.docker_registry_host}</registryUrl>
                    <!--dockerHost>https://${docker.registry.host}</dockerHost-->
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                    <forceTags>false</forceTags>
                    <imageTags>
                        <imageTag>${project.version}</imageTag>
                    </imageTags>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Test setup

You can test this by running the Python script:

python setup.py -m setup -r <your region>

After the script has successfully run, you can test by querying an endpoint:

curl <your endpoint from output above>/owner

Conclusion

Migrating a monolithic application to a containerized set of microservices can seem like a daunting task. Following the steps outlined in this post, you can begin to containerize monolithic Java apps, take advantage of the container runtime environment, and start the process of re-architecting into microservices. On the whole, containerized microservices are faster to develop, easier to iterate on, and more cost effective to maintain and secure.

This post focused on the first steps of microservice migration. You can learn more about optimizing and scaling your microservices with components such as service discovery, blue/green deployment, circuit breakers, and configuration servers at http://aws.amazon.com/containers.

If you have questions or suggestions, please comment below.

Tinkernut’s do-it-yourself Pi Zero audio HAT

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-diy-pi-zero-audio/

Why buy a Raspberry Pi Zero audio HAT when Tinkernut can show you how to make your own?

Adding Audio Output To The Raspberry Pi Zero – Tinkernut Workbench

The Raspberry Pi Zero W is an amazing miniature computer piece of technology. I want to turn it into an epic portable Spotify radio that displays visuals such as Album Art. So in this new series called “Tinkernut Workbench”, I show you step by step what it takes to build a product from the ground up.

Raspberry Pi Zero audio

Unlike their grown-up siblings, the Pi Zero and Zero W lack an onboard audio jack, but that doesn’t stop you from using them to run an audio output. Various audio HATs exist on the market, from Adafruit, Pimoroni and Pi Supply to name a few, providing easy audio output for the Zero. But where would the fun be in a Tinkernut video that shows you how to attach a HAT?

Tinkernut Pi Zero Audio

“Take this audio HAT, press it onto the header pins and, errr, done? So … how was your day?”

DIY Audio: Tinkernut style

For the first video in his Hipster Spotify Radio using a Raspberry Pi Tinkernut Workbench series, Tinkernut – real name Daniel Davis – goes through the steps of researching, prototyping and finishing his own audio HAT for his newly acquired Raspberry Pi Zero W.

The build utilises the GPIO pins on the Zero W, specifically pins #18 and #13. FYI, this hidden gem of information comes from the Adafruit Pi Zero PWM Audio guide. Before he can use #18 and #13, header pins need to be soldered. If the thought of soldering pins to the Pi is somewhat daunting, check out the Pimoroni Hammer Header.

Pimoroni Hammer Header for Raspberry Pi

You’re welcome.

Once complete, with Raspbian installed on the micro SD, and SSH enabled for remote access, he’s ready to start prototyping.

Ingredients

Tinkernut uses two 270 ohm resistors, two 150 ohm resistors, two 10 μF electrolytic capacitors, two 0.01 μF polyester film capacitors, an audio jack and some wire. You’ll also need a breadboard for prototyping. For the final build, you’ll need a single row female pin header and some prototyping board, if you want to join in at home.

Tinkernut audio board Raspberry Pi Zero W

It should look like this…hopefully.

Once the prototype is up and running, with audio playing through a cheap speaker (thanks to an edit of the config.txt file), the final board can be finished.
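For anyone following along, the heart of that config.txt edit is a device tree overlay line that routes the Pi's two PWM channels out to GPIO 18 and 13. The commonly used line from the Adafruit PWM audio guide looks like the one below; double-check the guide for your particular board before relying on it:

dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4

After a reboot, audio sent to the default output comes out of those two pins, which is what the resistor/capacitor filter network then smooths into a line-level signal.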

What’s next?

The audio board is just one step in the build.

Spotify is such an awesome music service. Raspberry Pi Zero is such an awesome ultra-mini computing device. Obviously, combining the two is something I must do!!! The idea here is to make something that’s stylish, portable, can play Spotify, and hopefully also display visuals such as album art.

Subscribe to Tinkernut’s YouTube channel to keep up to date with the build, and check out some of his other Raspberry Pi builds, such as his cheap 360 video camera, security camera and digital vintage camera.

Have you made your own Raspberry Pi HAT? Show it off in the comments below!

The post Tinkernut’s do-it-yourself Pi Zero audio HAT appeared first on Raspberry Pi.

Community Profile: Matt Reed

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-matt-reed/

This column is from The MagPi issue 51. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

Matt Reed‘s background is in web design/development, extending to graphic design in which he acquired his BFA at the University of Tennessee, Knoxville. In his youth, his passion focused on car stereo systems, designing elaborate builds that his wallet couldn’t afford. However, this enriched his maker skill set by introducing woodwork, electronics, and fabrication exploration into his creations.

Matt Reed Raspberry Pi redpepper MagPi Magazine

Matt hosts the redpepper ‘Touch of Tech’ online series, highlighting the latest in interesting and unusual tech releases

Having joined the integrated marketing agency redpepper eight years ago, Matt originally worked in the design and production of microsites. However, as his interests continued to grow, demand began to evolve, and products such as the Arduino and Raspberry Pi came into the mix. Matt soon found himself moving away from the screen toward physical builds.

“I’m interested in anything that uses tech in a clever way, whether it be AR, VR, front-end, back-end, app dev, servers, hardware, UI, UX, motion graphics, art, science, or human behaviour. I really enjoy coming up with ideas people can relate to.”

Matt’s passion is to make tech seem cool, creative, empowering, and approachable, and his projects reflect this. Away from the Raspberry Pi, Matt has built some amazing creations such as the Home Alone Holidaython, an app that lets you recreate the famous curtain shadow party in Kevin McCallister’s living room. Pick the shadow you want to appear, and projectors illuminate the design against a sheet across the redpepper office window. Christmas on Tweet Street LIVE! captures hilariously negative Christmas-themed tweets from Twitter, displaying them across a traditional festive painting, while DOOR8ELL allows office visitors the opportunity to Slack-message their required staff member via an arcade interface, complete with 8-bit graphics. There’s also been a capacitive piano built with jelly keys, a phone app to simulate the destruction of cars as you sit in traffic, and a working QR code made entirely from Oreos.

Matt Reed Raspberry Pi redpepper MagPi Magazine

The BoomIlluminator, an interactive art installation for the Red Bull Creation Qualifier, used LEDs within empty Red Bull cans that reacted to the bass of any music played. A light show across the cans was then relayed to people’s phones, extending the experience.

Playing the ‘technology advocate’ role at redpepper, Matt continues to bridge the gap between the company’s day-to-day business and the fun, intuitive uses of tech. Not only do they offer technological marketing solutions via their rpLab, they have continued to grow, incorporating Google’s Sprint methodology into idea-building and brainstorming within days of receiving a request, “so having tools that are powerful, flexible, and cost-effective like the Pi is invaluable.”

Matt Reed Raspberry Pi redpepper MagPi Magazine

Walk into a room with Doorjam enabled, and suddenly your favourite tune is playing via boombox speakers. Simply select your favourite song from Spotify, walk within range of a Bluetooth iBeacon, and you’re ready to make your entrance in style.

“I just love the intersection of art and science,” Matt explains when discussing his passion for tech. “Having worked with Linux servers for most of my career, the Pi was the natural extension for my interest in hardware. Running Node.js on the Pi has become my go-to toolset.”

Matt Reed Raspberry Pi redpepper MagPi Magazine

Slackbot Bot: Users of the multi-channel messenger service Slack will appreciate this one. Beacons throughout the office allow users to locate Slackbot Bot, which features a tornado siren mounted on a Roomba, and send it to predetermined locations to deliver messages. “It was absolutely hilarious to test in the office.”

We’ve seen Matt’s Raspberry Pi-based portfolio grow over the last couple of years. A few of his builds have been featured in The MagPi, and his Raspberry Preserve was placed 13th in the Top 50 Raspberry Pi Builds in issue 50.

Matt Reed Raspberry Pi redpepper MagPi Magazine

Matt Reed’s ‘Raspberry Preserve’ build allows users to store their precious photos in a unique memory jar

There’s no denying that Matt will continue to be ‘one to watch’ in the world of quirky, original tech builds. You can follow his work at his website or via his Twitter account.

The post Community Profile: Matt Reed appeared first on Raspberry Pi.

AWS Hot Startups – November 2016 – AwareLabs, Doctor On Demand, Starling Bank, and VigLink

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-november-2016-awarelabs-doctor-on-demand-starling-bank-and-viglink/

Tina is back with another impressive set of startups!

Jeff;


This month we are featuring four hot AWS-powered startups:

  • AwareLabs – Helping small businesses build smart websites.
  • Doctor On Demand – Delivering fast, easy, and cost-effective access to top healthcare providers.
  • Starling Bank – Mobile banking for the next generation.
  • VigLink – Powering content-driven commerce.

Make sure to also check out October’s Hot Startups if you missed it!

AwareLabs (Phoenix/Charlotte)
AwareLabs is a small, three-person startup focused on helping business owners engage their customers through dozens of integrated applications. The startup was born in November 2011 and began as a website building guide that helped hundreds of entrepreneurs within its first few weeks. Early on, founder Paul Kenjora recognized that small businesses were being slowed down by existing business solutions, and in 2013 he took on the task of creating a business-centric website builder. After attending an AWS seminar, Paul realized that small teams could design and deploy massive infrastructure just as well as heavily funded, high-tech companies. Previously, only big companies or heavy investment allowed for that type of scale. With the help of AwareLabs, small businesses with limited time and budgets can build the smart websites they need.

The AwareLabs team relies on AWS to achieve what was previously impossible with a team of their size. They’ve been able to raise less capital, move faster, and deliver a solution customers love. AwareLabs leverages Amazon EC2 extensively for everything from running client websites, to maintaining their own secure code repository. Amazon S3 has also been a game changer in offloading the burden of data storage and reliability. This was the single biggest factor in letting the AwareLabs development team focus on client-facing features instead of infrastructure issues. Amazon SES and Amazon SNS freed their developers to deliver integrated one-click newsletters with intelligent bounce reduction, which was very well received by clients. Finally, AWS has helped AwareLabs be profitable, which is huge for any startup!

Be sure to check out AwareLabs for your professional website needs!

Doctor On Demand (San Francisco)
Doctor On Demand was built to address the growing problem that many of those in the U.S. face – lack of access to healthcare providers. The average wait time to see a physician is three weeks, and in rural areas, it can be even longer. It takes an average of 25 days to see a psychiatrist or psychologist and nearly half of all patients with mental health issues go without treatment. With Doctor On Demand, patients can see a board-certified physician or psychologist in a matter of minutes directly from their smartphone, tablet, or computer. They can also have video visits with providers at any time of day – no matter where they are. Patients simply download the Doctor On Demand app (iOS and Android) or visit www.doctorondemand.com, provide a summary of the reason for their visit, and are connected to a licensed provider in their state. Services are delivered through hundreds of employers and work with dozens of major health plans.

From the very beginning, AWS has allowed Doctor on Demand to operate securely in the healthcare space. They utilize Amazon EC2, Amazon S3, Amazon CloudFront, Amazon CloudWatch, and AWS Trusted Advisor. With these services they are able to build compliant security and privacy controls, ‘simple’ fault tolerance, and easily setup a disaster recovery site (utilizing multiple AWS Regions). The company says the best part about working with AWS is that they are able to get everything they need on a startup budget.

Check out the Doctor On Demand blog to keep up with the latest news!

Starling Bank (UK)
Starling Bank is on a mission to shake up financial services.  In the way that TV was radically changed by Netflix, music by the likes of Spotify, and social media by Snapchat – this is what Starling aims to do for banking. Founded in 2014 by Anne Boden, Starling uses the latest technology to make the traditional current account obsolete. Having assembled a team of engineers, artists, and economists, the build of the bank is nearing completion. They will be launching their app in early 2017.

Many next generation banks continue to stick to the traditional bank model that was built on technology from the 1960s and 70s. Instead of providing a range of products that are sold and cross-sold to unwilling customers, Starling will empower their users through seamless access to a mobile marketplace of financial services and products that best meet their needs at any given time. Customers can enjoy the security and protection of a licensed and regulated bank while also getting access to insights, data, and services that empower them to make decisions about their money.

Starling Bank uses AWS to provision and scale a secure infrastructure automatically and on demand. They primarily use AWS CloudFormation and Amazon EC2, but also make use of Amazon S3, Amazon RDS, and AWS Lambda.

Sign up here to be one of Starling’s first customers!

VigLink (San Francisco)
Oliver Roup, founder and CEO of VigLink, was first introduced to affiliate marketing as a student at Harvard Business School. His interest in the complex ecosystem prompted him to write a crawler to identify existing product links to Amazon. Roup found that less than half of those links were enrolled in the associates program. It was at this moment that he determined there was a real business opportunity at hand, and VigLink was born.

Over the last seven years, the company has grown into not only a content monetization platform, but a platform that provides publishers and merchants with insights into their ecommerce business. At its core, VigLink identifies commercial product mentions within a publisher’s content and automatically transforms them into revenue generating hyperlinks whose destinations can be determined in real-time, advertiser-bid auctions. Since its founding in 2009, VigLink has been backed by top investors including Google Ventures, Emergence Capital Partners, and RRE. Check out a recent interview with Roup and a tour of VigLink’s offices here!

Since the company’s start, VigLink has utilized AWS extensively. The flexibility to be able to respond to demand elastically without capital costs or hardware maintenance has been game-changing. They use numerous services including Amazon EC2, Amazon S3, Amazon SQS, Amazon RDS, and Amazon Redshift. While continuing to scale, VigLink has recently been able to cut costs by 15% using tools such as AWS Cost Explorer.

Take a behind-the-scenes look at VigLink in this short video.

Tina Barr

Five Mistakes Everyone Makes With Cloud Backup

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/5-common-cloud-backup-mistakes/

cloud backup error

Cloud-based storage and file sync services are ubiquitous: Everywhere we turn new services pop up (and often shut down), promising free or low-cost storage of everything and anything on our computers and mobile devices.

When you depend on the cloud it’s very easy to get lulled into a false sense of security. Don’t. Here are five common mistakes all of us make with cloud backup and sync services. I’ve added suggestions for how to avoid these pitfalls.

Assuming the Cloud Is Backing Things Up

“I have iCloud or Google Drive, so everything’s backed up.”

Some cloud backup and file sync services make it really easy to put files online, but they may not be all the files you need. Don’t just assume the cloud services you use are doing a complete backup of your device – check to see what is actually being backed up. The services you use may only back up specific folders or directories on your computer’s hard drive.

Read this for more info on how Backblaze backs up.

There’s a big difference between file backup services and sync services, by the way. Which brings me to my next point:

Confusing Sync for Backup

“I don’t need backup. I’ve got my files synced.”

A sync service enables you to keep contents consistent between multiple devices – think Dropbox or iCloud Drive, for example. Make one change to the contents of that shared info, and the same thing happens across all devices, including file changes and deletions. Depending on how you have syncing and sharing set up, you can delete a file on one device and have it disappear on all the other shared devices.

I’ve also found it handy to have a backup service that enables you to restore multiple versions. In point of fact, Dropbox lets you restore previous versions. Apple’s Time Machine, built into the Mac, does this too. So does Backblaze (we keep track of multiple versions up to 30 days). Not to say you shouldn’t use Dropbox – we do! We wrote about how we are complementary services.

Thinking One Backup Is Enough

“Hey, I’m backing up to the cloud. That’s better than nothing, right?”

It’s better than nothing but it’s not enough. You want a local backup too. That’s why I recommend a 3-2-1 Backup strategy. In addition to the “live” copy of the data on your hard drive, make sure you have a local backup, and use the cloud for offsite storage. Likewise, if you’re only storing data on a local backup, you’re putting all your eggs in that basket. Add offsite backup to complete your backup strategy. Conversely, if you only store your data in the cloud, you’re susceptible to those services being down as well. So having a local copy can keep you productive even if your favorite service is temporarily down.

Leaving Things Insecure

“I’m not backing up anything important enough for hackers to bother with.”

With identity theft on the rise, the security of all of your data online should be paramount. Strong encryption is important, so make sure it’s supported by the services you depend on.

Even if a bad actor doesn’t want your data, they still may want your computer for nefarious purposes, like driving a botnet used to launch a DDoS (Distributed Denial of Service) attack. That’s exactly what recently happened to Dyn, a company that provides core Internet services for other popular Internet services like Twitter and Spotify.

Make sure to protect your computer with strong passwords, practice safe surfing and keep your computer updated with the latest software. Also check periodically for malware and get rid of it when you find it.

Thinking That it’s Taken Care Of

“I have a backup strategy in place, so I don’t have to think about it anymore.”

I think it’s wise to observe an old aphorism: “Trust but verify.”

There’s absolutely nothing wrong with developing an automated backup strategy. But it’s vitally important to periodically test your backups to make sure they’re doing what they’re supposed to.

You should test your most important, mission-critical data first. Tax returns? Important legal documents? Irreplaceable baby pictures? Make sure the files that are important to you are retrievable and intact by actually trying to recover them. Find out more about how to test your backup.

That goes for Backblaze too. Test all your backups – we even recommend it in our Best Practices.

Got more cloud backup myths to bust? Share them with me in the comments!

 

The post Five Mistakes Everyone Makes With Cloud Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Dyn DNS DDoS That Killed Half The Internet

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/wJNQgzKbLTI/

Last week the Dyn DNS DDoS took out most of the East coast US websites including monsters like Spotify, Twitter, Netflix, Github, Heroku and many more. Hopefully it wasn’t because I shared the Mirai source code and some script kiddies got hold of it and decided to take half of the US websites out. A […]

The post The Dyn DNS DDoS That Killed…

Read the full post at darknet.org.uk

Doorjam – play your own theme music

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/doorjam-create-your-own-theme-music/

Have you ever dreamed about having your own theme music? That perfect song that reflects your mood as you enter a room, drawing the attention of others towards you?

I know I have. Though that might be due to my desire to live in a Disney movie, or maybe just because I spent three years studying drama and live in a constant state of theatrical bliss.

Whatever the reason, it’s fair to say that Doorjam is an awesome build.

Doorjam

Walk into your theme song. Powered by Spotify. http://doorjam.in

Using a WiFi dongle repurposed as an iBeacon to detect when you’re in range, the Doorjam mobile phone app lets you select your theme song from Spotify and plays it via a boombox as you approach.

Stick-figure diagram showing the way Doorjam lets you choose your theme music and plays it when you're within range

The team at redpepper have made the build code available publicly, taking makers through a step-by-step tutorial on their website.
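If you just want a feel for the moving parts before working through their tutorial, here is a rough Python sketch of the idea. It is not redpepper's code (their build runs its own stack), and the beacon address, track URI, and Spotify credentials are placeholders: scan for a known beacon, and when it appears nearby, start that person's chosen track.

import asyncio
import spotipy
from bleak import BleakScanner
from spotipy.oauth2 import SpotifyOAuth

# Map beacon hardware addresses to theme songs (placeholder values)
THEME_SONGS = {"AA:BB:CC:DD:EE:FF": "spotify:track:<your-track-id>"}

# Needs a Spotify app and SPOTIPY_* environment variables configured
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-modify-playback-state"))

async def watch_for_beacons():
    while True:
        for device in await BleakScanner.discover(timeout=5.0):
            uri = THEME_SONGS.get(device.address)
            if uri:
                # Plays on whatever Spotify device is currently active
                sp.start_playback(uris=[uri])
        await asyncio.sleep(10)

asyncio.run(watch_for_beacons())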

So while we work on our own Doorjam build, why don’t you tell us what your ultimate theme music would be?

And for inspiration, I’ll hand over to Joseph…

(500) Days of Summer – “You Make My Dreams Come True” by Hall & Oates [HD VIDEO CLIP]

I know this feeling very well.

 

The post Doorjam – play your own theme music appeared first on Raspberry Pi.

Hi Fi Raspberry Pi – digitising and streaming vinyl

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/hi-fi-raspberry-pi/

Over at Mozilla HQ (where Firefox, a browser that many of you are using to read this, is made), some retro hardware hacking has been going on.

vinyl record

The Mozillans have worked their way through several office music services, but nothing, so far, has stuck. Then this home-made project, which started as a bit of a joke, landed on a countertop – and it’s stayed.

Matt Claypotch found a vinyl record player online, and had it delivered to the office, intending to tinker with it at home. It never made it that far. He and his colleagues spent their lunch hour at a local thrift store buying up random vintage vinyl…and the record player stayed in the office so everybody could use it.

Potch’s officemates embarked on a vinyl spending spree.


What could be better? The warm crackle of vintage vinyl, “random, crappy albums” you definitely can’t find on Spotify (and stuff like the Van Halen album above that you can find on Spotify but possibly would prefer not to)…the problem was, once the machine had been set up in a break room, only the people in that room could listen to the cheese.

Enter the Raspberry Pi, with a custom-made streaming setup. One Mozillan didn’t want to have to sit in the common area to get his daily dose of bangin’ choons, so he set up a Pi to stream music from the analogue vinyl over USB (it’s 2016, record players apparently have USB ports now) via an Icecast stream to headphones anywhere in the office. Analogue > digital > analogue, if you like.
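The post doesn't spell out the exact tooling, but the analogue-to-Icecast leg can be sketched in a few lines. This assumes the USB turntable shows up as ALSA capture device hw:1 and that an Icecast server is already running on the Pi; the mount point and password below are placeholders:

import subprocess

# Capture from the USB turntable, encode to MP3, and push to a local Icecast mount
subprocess.run([
    "ffmpeg",
    "-f", "alsa", "-i", "hw:1",
    "-acodec", "libmp3lame", "-b:a", "128k",
    "-content_type", "audio/mpeg",
    "-f", "mp3",
    "icecast://source:hackme@localhost:8000/vinyl",
])

Anyone on the office network can then point a player (or a browser) at the Icecast mount and listen along with headphones.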

The setup is surprisingly successful; they’ve organised other audio systems which weren’t very popular, but this one, which happened organically, is being used by the whole office.

You can listen to a podcast from Envoy Office Hacks about the setup, and the office’s reaction to it.

Mozilla, keep on bopping to disco Star Wars. (I’m off to see if I can find a copy of that record. It’s probably a lot better in my imagination than it is in real life, but BOY, is it good in my imagination*.)

*I found it on YouTube. It’s a lot better in my imagination.

The post Hi Fi Raspberry Pi – digitising and streaming vinyl appeared first on Raspberry Pi.