Tag Archives: project

Solus 3 released

Post Syndicated from corbet original https://lwn.net/Articles/731121/rss

The Solus distribution project has announced
the availability of Solus 3. “This is the third iteration of
Solus since our move to become a rolling release operating system. Unlike
the previous iterations, however, this is a release and not a
snapshot. We’ve now moved away from the ‘regular snapshot’ model to
accommodate the best hybrid approach possible – feature rich releases with
explicit goals and technology enabling, along with the benefits of a
curated rolling release operating system.” Headline features
include support for the Snap packaging format, a lot of desktop changes,
and numerous software updates. (LWN looked at Solus in 2016).

Community Profile: David Pride

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-david-pride/

This column is from The MagPi issue 55. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

David Pride’s experiences in computer education came slightly later in life. He admits to not being a grade-A student: he left school with few qualifications, unable to pursue further education at university. There was, however, a teacher who instilled in him a passion for computers and coding which would stick with him indefinitely.

David Pride The MagPi Raspberry Pi Community Profile

David joined us at the St James’s Palace community celebration, mingling with the likes of the Duke of York, plus organisers of Jams and clubs, such as Grace and Femi

Welcome to the Community

Twenty years later, back in 2012, David heard of the Raspberry Pi – a soon-to-be-released “new little marvel” that he instantly fell for, head first. Despite a lack of knowledge in Linux and Python, he experimented and had fun. He found a Raspberry Jam and, with it, Pi enthusiasts like Mike Horne and Peter Onion. The projects on display at the Jam were enough to push David further into the Raspberry Pi rabbit hole and, after working his way through several Python books, he began to take steps into the world of formal higher education.

David Pride The MagPi Raspberry Pi Community Profile

David’s determination to access and complete further education in computing has earned him a three-year PhD studentship. Not bad for a “lousy student”

Back to School

With a MOOC qualification from Rice University under his belt, he continued to build on his self-taught knowledge, and was fortunate enough to be accepted to study for a master’s degree in Computer Science at the University of Hertfordshire. David completed the course with a distinction for his final dissertation and an overall distinction for his MSc, and was recently awarded a fully funded PhD studentship with The Open University’s Knowledge Media Institute.

David Pride The MagPi Raspberry Pi Community Profile

Self-playing xylophones, Wiimote air drums, Lego sorters, Pi Wars robots, and more. David is continually hacking toys, giving them new Pi-powered life

Maker of things

The portfolio of projects that helped him to achieve his many educational successes has provided regular retweet material for the Raspberry Pi Twitter account, and we’ve highlighted his fun, imaginative work on this blog before. His builds have travelled to a range of Jams and made their way to the Raspberry Pi and Code Club stands at the Bett Show, as well as to our birthday celebrations.

David Pride The MagPi Raspberry Pi Community Profile

“Pi & Chips – with a little extra source”

His website, the pun-tastic Pi and Chips, is home to the majority of his work; David also links to YouTube videos and walk-throughs of his projects, and relates his experiences at various events. If you’ve followed any of the action across the Raspberry Pi social media channels – or indeed read any previous issues of The MagPi magazine – you’ll no doubt have seen a couple of David’s projects.

David Pride The MagPi Raspberry Pi Community Profile 4-Bot

Many readers will have come across the wonderful 4-Bot before, and it has even made an appearance alongside David in a recent Bloomberg interview. Considering the trillions of possible game positions, David made a compromise and, if you’re lucky, you may just be able to beat it

The 4-Bot, a robotic second player for the family game Connect Four, allows people to go head to head with a Pi-powered robotic arm. Using a Python imaging library, the 4-Bot splits the game grid into 42 squares, and recognises them as being red, yellow, or empty by reading the RGB value of the space. Using the minimax algorithm, 4-Bot is able to play each move within 25 seconds. Believe us when we say that it’s not as easy to beat as you’d hope. Then there’s his more recent air drum kit, which uses an old toy found at a car boot sale together with a Wiimote to make a functional air drum that showcases David’s toy-hacking abilities… and his complete lack of rhythm. He does fare much better on his homemade laser harp, though!
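For a flavour of how the board-reading step might work, here’s a minimal Python sketch in the spirit of David’s description. The colour thresholds, centre-pixel sampling, and file name are illustrative assumptions, not his actual code.

# Illustrative sketch: read a Connect Four board from a photo with Pillow.
from PIL import Image

COLS, ROWS = 7, 6  # 42 squares in total

def classify(rgb):
    # Very rough colour test; thresholds are made up for illustration.
    r, g, b = rgb
    if r > 150 and g < 100 and b < 100:
        return 'red'
    if r > 150 and g > 150 and b < 100:
        return 'yellow'
    return 'empty'

def read_board(path):
    img = Image.open(path).convert('RGB')
    w, h = img.size
    cw, ch = w // COLS, h // ROWS
    # Sample the centre pixel of each of the 42 cells.
    return [[classify(img.getpixel((c * cw + cw // 2, r * ch + ch // 2)))
             for c in range(COLS)]
            for r in range(ROWS)]

From a grid like this, a standard minimax search (with a depth cut-off to stay inside the 25-second budget) can pick the next move.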


GNOME turns 20

Post Syndicated from ris original https://lwn.net/Articles/731036/rss

The GNOME project was founded by Miguel de Icaza and Federico Mena Quintero
on August 15, 1997, so today the project celebrates
its 20th birthday. “There have been 33 stable releases since the initial release of GNOME 1.0 in 1999. The latest stable release, GNOME 3.24 “Portland,” was well-received. “Portland” included exciting new features like the GNOME Recipes application and Night Light, which helps users avoid eyestrain. The upcoming version of GNOME, 3.26 “Manchester,” is scheduled for release in September of this year. With over 6,000 contributors, and 8 million lines of code, the GNOME Project continues to thrive in its twentieth year.”

OK Google, be aesthetically pleasing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/aesthetically-pleasing-ok-google/

Maker Andrew Jones took a Raspberry Pi and the Google Assistant SDK and created a gorgeous-looking, and highly functional, alternative to store-bought smart speakers.

Raspberry Pi Google AI Assistant

In this video I get an “Ok Google” voice activated AI assistant running on a raspberry pi. I also hand make a nice wooden box for it to live in.

OK Google, what are you?

Google Assistant is software of the same ilk as Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana. It’s a virtual assistant that allows you to request information, play audio, and control smart home devices via voice commands.

Infinite Looping Siri, Alexa and Google Home

One can barely see the iPhone’s screen. That’s because I have a privacy protection screen. Sorry, did not check the camera angle. Learn how to create your own loop, why we put Cortana out of the loop, and how to train Siri to an artificial voice: https://www.danrl.com/2016/12/01/looping-ais-siri-alexa-google-home.html

You probably have a digital assistant on your mobile phone, and if you go to the home of someone even mildly tech-savvy, you may see a device awaiting commands via a wake word such as the device’s name or, for the Google Assistant, the phrase “OK, Google”.

Homebrew versions

Understanding the maker need to ‘put tech into stuff’ and upgrade everyday objects into everyday objects 2.0, the creators of these virtual assistants have allowed access for developers to run their software on devices such as the Raspberry Pi. This means that your common-or-garden homemade robot can now be controlled via voice, and your shed-built home automation system can have easy-to-use internet connectivity via a reliable, multi-device platform.

Andrew’s Google Assistant build

Andrew gives a peerless explanation of how the Google Assistant works:

There’s Google’s Cloud. You log into Google’s Cloud and you do a bunch of cloud configuration cloud stuff. And then on the Raspberry Pi you install some Python software and you do a bunch of configuration. And then the cloud and the Pi talk the clouds kitten rainbow protocol and then you get a Google AI assistant.

It all makes perfect sense. Though for more detail, you could always head directly to Google.

Andrew Jones Raspberry Pi OK Google Assistant

I couldn’t have explained it better myself
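For a rough idea of what the Pi-side Python looks like once the cloud configuration is done, here’s a hedged sketch based on the Google Assistant SDK’s Python library as it stood around the time of writing. The credentials path is a placeholder, and the SDK has evolved since, so treat this as illustrative rather than gospel.

import json
import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

# OAuth credentials generated during the cloud setup (placeholder path).
CREDS = '/home/pi/.config/google-oauthlib-tool/credentials.json'

with open(CREDS) as f:
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

with Assistant(credentials) as assistant:
    # The library handles "OK Google" hotword detection on-device;
    # we just react to the events it yields.
    for event in assistant.start():
        if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
            print('Listening...')
        elif event.type == EventType.ON_CONVERSATION_TURN_FINISHED:
            print('Done.')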

Andrew decided to take his Google Assistant-enabled Raspberry Pi and create a new body for it. One that was more aesthetically pleasing than the standard Pi-inna-box. After wiring his build and cannibalising some speakers and a microphone, he created a sleek, wooden body that would sit quite comfortably in any Bang & Olufsen shop window.

Find the entire build tutorial on Instructables.

Make your own

It’s more straightforward than Andrew’s explanation suggests, we promise! And with an array of useful resources online, you should be able to incorporate your choice of virtual assistants into your build.

There’s The Raspberry Pi Guy’s tutorial on setting up Amazon Alexa on the Raspberry Pi. If you’re looking to use Siri on your Pi, YouTube has a plethora of tutorials waiting for you. And lastly, check out Microsoft’s site for using Cortana on the Pi!

If you’re looking for more information on Google Assistant, check out issue 57 of The MagPi Magazine, free to download as a PDF. The print edition of this issue came with a free AIY Projects Voice Kit, and you can sign up for The MagPi newsletter to be the first to know about the kit’s availability for purchase.


BREIN is Taking Infamous ‘Piracy’ Hosting Provider Ecatel to Court

Post Syndicated from Andy original https://torrentfreak.com/brein-is-taking-infamous-piracy-hosting-provider-ecatel-to-court-170815/

A regular website can be easily hosted in most countries of the world, but when the nature of the project begins to step on toes, the options quickly narrow. Openly hosting The Pirate Bay, for example, is something few providers want to get involved with.

There are, however, providers out there who specialize in hosting services that others won’t touch. They develop a reputation for turning a blind eye to their customers’ activities, only reacting when a crisis looms on the horizon. Despite the problems, a few are surprisingly resilient.

One such host is Netherlands-based Ecatel, which has hit the headlines many times over the years for allegedly having customers involved in warez, torrents, and streaming, not to mention spam and malware. For hosting the former group of customers, it’s now in the crosshairs of Dutch anti-piracy group BREIN.

According to an application for a witness hearing filed with the Court of The Hague by BREIN, Ecatel has repeatedly hosted websites dealing in infringing content over recent years. While this is nothing particularly out of the ordinary, BREIN claims that complaints filed against the sites were dealt with slowly by Ecatel, or not at all.

Ecatel Ltd is a company incorporated in the UK with servers in the Netherlands, but since 2015 another hosting company, called Novogara, has operated in tandem with it. Court documents suggest that Novogara is associated with Ecatel, something that was confirmed in early 2016 in an email sent out by Ecatel itself.

“We’d like to inform you that all services of Ecatel Ltd are taken over by a new brand called Novogara Ltd with immediate effect. The take-over includes Ecatel and all her subsidiaries,” the email read.

Muddying the waters a little more, in 2015 Ecatel’s IP addresses were apparently taken over by Quasi Networks Ltd, a Seychelles-based company whose business is described locally as being conducted entirely overseas.

“Stichting BREIN has found several websites in the network of Quasi Networks with obviously infringing content. Quasi Networks, however, does not respond structurally to requests for closing those websites. This involves unlawful acts against the parties associated with the BREIN Foundation,” a ruling from the Court reads.

As a result, BREIN wants a witness hearing with three defendants connected to the Ecatel/Novogara/Quasi group of companies in order to establish the relationship between the businesses, where their servers are, and who is behind Quasi Networks.

“Stichting BREIN is interested in this information in order to be able to judge who it can appeal to and whether it is useful to start a legal procedure,” the Court adds.

Two of the defendants failed to lodge a defense against BREIN’s application but one objected to the request for a hearing. He said that since Quasi Networks, Ecatel and Novogara are all incorporated outside the Netherlands, a trial must also be conducted abroad and therefore a Dutch judge would not have jurisdiction.

He also argued that BREIN would use the witness hearing as a “fishing expedition” in order to gather information it currently does not have, in order to formulate some kind of case against the defendants, in one way or another.

In a decision published this week, the Court of The Hague rejected that argument, noting that the basis for the claim is copyright infringement through Netherlands-hosted websites. Furthermore, the majority of the witnesses are resident in the district of The Hague. It also underlined the importance of a hearing.

“The request for holding a preliminary witness hearing opens an independent petition procedure, which does not address the eligibility of any claim that may be lodged. An investigation must be made by the judge who has to deal with and decide the main case – if it comes.

“The court points out that a preliminary witness hearing is now (partly) necessary to clarify whether and to what extent a claim has any chance of success,” the decision reads.

According to documents published by Companies House in the UK, Ecatel Ltd ceased to exist this morning, having been dissolved at the request of its directors.

The hearing of the witnesses is set to take place on Tuesday, September 26, 2017 at 9.30 in the Palace of Justice at Prince Claus 60 in The Hague.


Launch – AWS Glue Now Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/launch-aws-glue-now-generally-available/

Today we’re excited to announce the general availability of AWS Glue. Glue is a fully managed, serverless, and cloud-optimized extract, transform and load (ETL) service. Glue is different from other ETL services and platforms in a few very important ways.

First, Glue is “serverless” – you don’t need to provision or manage any resources and you only pay for resources when Glue is actively running. Second, Glue provides crawlers that can automatically detect and infer schemas from many data sources, data types, and across various types of partitions. It stores these generated schemas in a centralized Data Catalog for editing, versioning, querying, and analysis. Third, Glue can automatically generate ETL scripts (in Python!) to translate your data from your source formats to your target formats. Finally, Glue allows you to create development endpoints that allow your developers to use their favorite toolchains to construct their ETL scripts. Ok, let’s dive deep with an example.

In my job as a Developer Evangelist I spend a lot of time traveling, and I thought it would be cool to play with some flight data. The Bureau of Transportation Statistics is kind enough to share all of this data for anyone to use here. We can easily download this data and put it in an Amazon Simple Storage Service (S3) bucket. This data will be the basis of our work today.

Crawlers

First, we need to create a Crawler for our flights data from S3. We’ll select Crawlers in the Glue console and follow the on screen prompts from there. I’ll specify s3://crawler-public-us-east-1/flight/2016/csv/ as my first datasource (we can add more later if needed). Next, we’ll create a database called flights and give our tables a prefix of flights as well.
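If you’d rather script these console steps, here’s a hedged boto3 sketch of the same setup; the IAM role name is a placeholder you’d need to create separately.

import boto3

glue = boto3.client('glue', region_name='us-east-1')

# Create the database the crawler will populate.
glue.create_database(DatabaseInput={'Name': 'flights'})

# Point a crawler at the flight data and prefix its tables.
glue.create_crawler(
    Name='flights-crawler',
    Role='AWSGlueServiceRole-flights',  # placeholder IAM role
    DatabaseName='flights',
    TablePrefix='flights',
    Targets={'S3Targets': [{'Path': 's3://crawler-public-us-east-1/flight/2016/csv/'}]},
)

glue.start_crawler(Name='flights-crawler')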

The Crawler will go over our dataset, detect partitions through the various folders – in this case months of the year – detect the schema, and build a table. We could add additional data sources and jobs into our crawler, or create separate crawlers that push data into the same database, but for now let’s look at the autogenerated schema.

I’m going to make a quick schema change to year, moving it from BIGINT to INT. Then I can compare the two versions of the schema if needed.

Now that we know how to correctly parse this data let’s go ahead and do some transforms.

ETL Jobs

Now we’ll navigate to the Jobs subconsole and click Add Job. We’ll follow the prompts from there, giving our job a name, selecting a datasource, and choosing an S3 location for temporary files. Next we add our target by selecting “Create tables in your data target”, and we’ll specify an S3 location in Parquet format as our target.

After clicking Next, we’re at a screen showing the various mappings proposed by Glue. Now we can make manual column adjustments as needed – in this case we’re just going to use the X button to remove a few columns that we don’t need.

This brings us to my favorite part. This is what I absolutely love about Glue.

Glue generated a PySpark script to transform our data based on the information we’ve given it so far. On the left hand side we can see a diagram documenting the flow of the ETL job. On the top right we see a series of buttons that we can use to add annotated data sources and targets, transforms, spigots, and other features. This is the interface I get if I click on transform.

If we add any of these transforms or additional data sources, Glue will update the diagram on the left giving us a useful visualization of the flow of our data. We can also just write our own code into the console and have it run. We can add triggers to this job that fire on completion of another job, a schedule, or on demand. That way if we add more flight data we can reload this same data back into S3 in the format we need.
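To give a flavour of what Glue generates, here’s a trimmed-down sketch in the shape of a generated PySpark script. The catalog table name, column mappings, and output bucket here are assumptions; the real generated script lists every column.

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the crawled table from the Data Catalog (table name is a placeholder).
flights = glueContext.create_dynamic_frame.from_catalog(
    database='flights', table_name='flightscsv')

# Keep, and retype, only the columns we care about.
mapped = ApplyMapping.apply(frame=flights, mappings=[
    ('year', 'int', 'year', 'int'),
    ('origin', 'string', 'origin', 'string'),
    ('dest', 'string', 'dest', 'string'),
])

# Write the result back to S3 as Parquet (bucket is a placeholder).
glueContext.write_dynamic_frame.from_options(
    frame=mapped, connection_type='s3',
    connection_options={'path': 's3://my-flights-output/parquet/'},
    format='parquet')

job.commit()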

I could spend all day writing about the power and versatility of the jobs console but Glue still has more features I want to cover. So, while I might love the script editing console, I know many people prefer their own development environments, tools, and IDEs. Let’s figure out how we can use those with Glue.

Development Endpoints and Notebooks

A Development Endpoint is an environment used to develop and test our Glue scripts. If we navigate to “Dev endpoints” in the Glue console we can click “Add endpoint” in the top right to get started. Next we’ll select a VPC, a security group that references itself and then we wait for it to provision.


Once it’s provisioned we can create an Apache Zeppelin notebook server by going to actions and clicking create notebook server. We give our instance an IAM role and make sure it has permissions to talk to our data sources. Then we can either SSH into the server or connect to the notebook to interactively develop our script.

Pricing and Documentation

You can see detailed pricing information here. Glue crawlers, ETL jobs, and development endpoints are all billed in Data Processing Unit hours (DPU-Hours), metered by the minute. Each DPU-Hour costs $0.44 in us-east-1. A single DPU provides 4 vCPUs and 16 GB of memory.
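To make that concrete with the us-east-1 rate above: a job running with 10 DPUs for 10 minutes consumes 10 × (10/60) ≈ 1.67 DPU-Hours, or roughly $0.73.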

We’ve only covered about half of the features that Glue has, so I want to encourage everyone who made it this far into the post to go read the documentation and service FAQs. Glue also has a rich and powerful API that allows you to do anything the console can do and more.

We’re also releasing two new projects today. The aws-glue-libs provide a set of utilities for connecting to and talking with Glue. The aws-glue-samples repo contains a set of example jobs.

I hope you find that using Glue reduces the time it takes to start doing things with your data. Look for another post from me on AWS Glue soon because I can’t stop playing with this new service.
Randall

MPAA Revenue Stabilizes, Chris Dodd Earns $3.5 Million

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-revenue-stabilizes-chris-dodd-earns-3-5-million170813/

Protecting the interests of Hollywood, the MPAA has been heavily involved in numerous anti-piracy efforts around the world in recent years.

Through its involvement in the shutdowns of Popcorn Time, YIFY, isoHunt, Hotfile, Megaupload and several other platforms, the MPAA has worked hard to target piracy around the globe.

Perhaps just as importantly, the group lobbies lawmakers globally while managing anti-piracy campaigns both in and outside the US, including the Creative Content UK program.

All this work doesn’t come for free, obviously, so the MPAA relies on six major movie studios for financial support. After its revenues plummeted a few years ago, they have steadily recovered and according to its latest tax filing, the MPAA’s total income is now over $72 million.

The IRS filing, covering the fiscal year 2015, reveals that the movie studios contributed $65 million, the same as a year earlier. Overall revenue has stabilized as well, after a few years of modest growth.

Going over the numbers, we see that salaries make up a large chunk of the expenses. Former Senator Chris Dodd, the MPAA’s Chairman and CEO, is the highest paid employee with a total income of more than $3.5 million, including a $250,000 bonus.

It was recently announced that Dodd will leave the MPAA next month. He will be replaced by Charles Rivkin, another political heavyweight. Rivkin previously served as Assistant Secretary of State for Economic and Business Affairs in the Obama administration.

In addition to Dodd, there are two other employees who made over a million in 2015, Global General Counsel Steve Fabrizio and Diane Strahan, the MPAA’s Chief Operating Officer.

Looking at some of the other expenses we see that the MPAA’s lobbying budget remained stable at $4.2 million. Another $4.4 million went to various grants, while legal costs totaled $7.2 million that year.

More than two million dollars worth of legal expenses were paid to the US law firm Jenner & Block, which represented the movie studios in various court cases. In addition, the MPAA paid more than $800,000 to the UK law firm Wiggin, which assisted the group in local site-blocking efforts.

Finally, it’s worth looking at the various gifts and grants the MPAA hands out. As reported last year, the group handsomely contributes to various research projects. This includes a recurring million dollar grant for Carnegie Mellon’s ‘Initiative for Digital Entertainment Analytics’ (IDEA), which researches various piracy related topics.

IDEA co-director Rahul Telang previously informed us that the gift is used to hire researchers and pay for research materials. It is not tied to a particular project.

We also see $70,000+ in donations for both the Democratic and Republican Attorneys General associations. The purpose of the grants is listed as “general support.” Interestingly, just recently over a dozen Attorneys General released a public service announcement warning the public to stay away from pirate sites.

These types of donations and grants are nothing new and are a regular part of business across many industries. Still, they are worth keeping in mind.

It will be interesting to see which direction the MPAA takes in the years to come. Under Chris Dodd it has booked a few notable successes, but there is still a long way to go before the piracy situation is somewhat under control.



MPAA’s full form 990 was published in Guidestar recently and a copy is available here (pdf).


DMCA Used to Remove Ad Server URL From Easylist Ad Blocklist

Post Syndicated from Andy original https://torrentfreak.com/dmca-used-to-remove-ad-server-url-from-easylist-ad-blocklist-170811/

The default business model on the Internet is “free” for consumers. Users largely expect websites to load without paying a dime but of course, there’s no such thing as a free lunch. To this end, millions of websites are funded by advertising revenue.

Sensible sites ensure that any advertising displayed is unobtrusive to the visitor but lots seem to think that bombarding users with endless ads, popups, and other hindrances is the best way to do business. As a result, ad blockers are now deployed by millions of people online.

In order to function, ad-blocking tools – such as uBlock Origin or Adblock – utilize lists of advertising domains compiled by third parties. One of the most popular is Easylist, which is distributed by authors fanboy, MonztA, Famlam, and Khrin, under dual Creative Commons Attribution-ShareAlike and GNU General Public Licenses.

With the freedom afforded by those licenses, copyright tends not to figure high on the agenda for Easylist. However, a legal problem that has just raised its head is causing serious concern among those in the ad-blocking community.

Two days ago a somewhat unusual commit appeared in the Easylist repo on Github. As shown in the image below, a domain URL previously added to Easylist had been removed following a DMCA takedown notice filed with Github.

Domain text taken down by DMCA?

The DMCA notice in question has not yet been published but it’s clear that it targets the domain ‘functionalclam.com’. A user called ‘ameshkov’ helpfully points out a post by a new Github user called ‘DMCAHelper’ which coincided with the start of the takedown process more than three weeks ago.

A domain in a list circumvents copyright controls?

Aside from the curious claims of a URL “circumventing copyright access controls” (domains themselves cannot be copyrighted), the big questions are (i) who filed the complaint and (ii) who operates Functionalclam.com? The domain WHOIS is hidden but according to a helpful sleuth on Github, it’s operated by anti ad-blocking company Admiral.

Ad-blocking means money down the drain….

If that is indeed the case, we have the intriguing prospect of a startup attempting to protect its business model by using a novel interpretation of copyright law to have a domain name removed from a list. How this will pan out is unclear but a notice recently published on Functionalclam.com suggests the route the company wishes to take.

“This domain is used by digital publishers to control access to copyrighted content in accordance with the Digital Millenium Copyright Act and understand how visitors are accessing their copyrighted content,” the notice begins.

Combined with the comments by DMCAHelper on Github, this statement suggests that the complainants believe that interference with the ad display process (ads themselves could be the “copyrighted content” in question) represents a breach of section 1201 of the DMCA.

If it does, that could have huge consequences for online advertising but we will need to see the original DMCA notice to have a clearer idea of what this is all about. Thus far, Github hasn’t published it but already interest is growing. A representative from the EFF has already contacted the Easylist team, so this battle could heat up pretty quickly.


Security updates for Friday

Post Syndicated from ris original https://lwn.net/Articles/730629/rss

Security updates have been issued by Arch Linux (firefox, flashplugin, lib32-flashplugin, libsoup, and varnish), Debian (freeradius, git, libsoup2.4, pjproject, postgresql-9.1, postgresql-9.4, postgresql-9.6, subversion, and xchat), Fedora (gsoap, irssi, knot-resolver, php-horde-horde, php-horde-Horde-Core, php-horde-Horde-Form, php-horde-Horde-Url, php-horde-kronolith, php-horde-nag, and php-horde-turba), Mageia (perl-XML-LibXML), Oracle (libsoup), Red Hat (firefox and libsoup), SUSE (kernel and libsoup), and Ubuntu (git, kernel, libsoup2.4, linux, linux-aws, linux-gke, linux-raspi2, linux-snapdragon, linux, linux-raspi2, linux-hwe, linux-lts-trusty, linux-lts-xenial, php5, php7.0, and subversion).

Ms. Haughs’ tote-ally awesome Raspberry Pi bag

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-tote-bag/

While planning her trips to upcoming educational events, Raspberry Pi Certified Educator Amanda Haughs decided to incorporate the Pi Zero W into a rather nifty accessory.

Final Pi Tote bag

Uploaded by Amanda Haughs on 2017-07-08.

The idea

Commenting on the convenient size of the Raspberry Pi Zero W, Amanda explains on her blog: “I decided that I wanted to make something that would fully take advantage of the compact size of the Pi Zero, that was somewhat useful, and that I could take with me and share with my maker friends during my summer tech travels.”

Amanda Haughs Raspberry Pi Tote Bag

Awesome grandmothers and wearable tech are an instant recipe for success!

With access to her grandmother’s “high-tech embroidery machine”, Amanda was able to incorporate various maker skills into her project.

The Tech

Amanda used five clear white LEDs and the Raspberry Pi Zero for the project. Taking inspiration from the LED-adorned Babbage Bear her team created at Picademy, she decided to connect the LEDs using female-to-female jumper wires.

Amanda Haughs Pi Tote Bag

Poor Babbage really does suffer at Picademy events

It’s worth noting that she could also have used conductive thread, though we wonder how this slightly less flexible thread would work in a sewing machine, so don’t try this at home. Or do, but don’t blame me if it goes wonky.

Having set the LEDs in place, Amanda worked on the code. Unsure about how she wanted the LEDs to blink, she finally settled on a random pulsing of the lights, and used the GPIO Zero library to achieve the effect.

Raspberry Pi Tote Bag

Check out the GPIO Zero library for some great LED effects

The GPIO Zero pulse effect allows users to easily fade an LED in and out without the need for long strings of code. Very handy.
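If you want to try the effect yourself, here’s a minimal sketch of random pulsing with GPIO Zero; the pin numbers are assumptions rather than Amanda’s actual wiring.

import random
from gpiozero import PWMLED

# Five LEDs on assumed GPIO pins; swap in your own wiring.
leds = [PWMLED(pin) for pin in (17, 18, 22, 23, 24)]

while True:
    led = random.choice(leds)
    # pulse() fades the LED in and out; background=False blocks
    # until the pulse finishes, so one LED pulses at a time.
    led.pulse(fade_in_time=0.5, fade_out_time=0.5, n=1, background=False)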

The Bag

Inspiration for the bag’s final design came thanks to a YouTube video, and Amanda and her grandmother were able to recreate the make using their fabric of choice.

DIY Tote Bag – Beginner’s Sewing Tutorial

Learn how to make this cute tote bag. A great project for beginning seamstresses!

A small pocket was added on the outside of the bag to allow for the Raspberry Pi Zero to be snugly secured, and the pattern was stitched into the front, allowing spaces for the LEDs to pop through.

Raspberry Pi Tote Bag

Amanda shows off her bag to Philip at ISTE 2017

You can find more information on the project, including Amanda’s initial experimentation with the Sense HAT, on her blog. If you’re a maker, an educator, or (and here’s a word I’m pretty sure I’ve made up) an edumaker, be sure to keep her blog bookmarked!

Make your own wearable tech

Whether you use jumper leads, conductive thread, or paint, we’d love to see your wearable tech projects.

Getting started with wearables

To help you get started, we’ve created this Getting started with wearables free resource that allows you to get making with the Adafruit FLORA and NeoPixels. Check it out!


Automating Blue/Green Deployments of Infrastructure and Application Code using AMIs, AWS Developer Tools, & Amazon EC2 Systems Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/devops/bluegreen-infrastructure-application-deployment-blog/

Previous DevOps blog posts have covered the following use cases for infrastructure and application deployment automation:

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is security best practice to customize and harden your base AMI with required operating system updates and, if you are using AWS native services for continuous security monitoring and operations, you are strongly encouraged to bake into the base AMI agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs. A customized and hardened AMI is often referred to as a “golden AMI.” The use of golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

In this post, using the DevOps automation capabilities of Systems Manager and the AWS developer tools (CodePipeline, CodeDeploy, CodeCommit, and CodeBuild), I will show you how to use AWS CodePipeline to orchestrate end-to-end blue/green deployments of a golden AMI and application code. Systems Manager Automation is a powerful security feature for enterprises that want to mature their DevSecOps practices.

Here are the high-level phases and primary services covered in this use case:

 

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/Bluegreen-AMI-Application-Deployment-blog.

This sample will create a pipeline in AWS CodePipeline with the building blocks to support the blue/green deployments of infrastructure and application. The sample includes a custom Lambda step in the pipeline to execute Systems Manager Automation to build a golden AMI and update the Auto Scaling group with the golden AMI ID for every rollout of new application code. This guarantees that every new application deployment is on a fully patched and customized AMI in a continuous integration and deployment model. This enables the automation of hardened AMI deployment with every new version of application deployment.

 

 

We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases.

Part 1 of the AWS CloudFormation stack creates these resources:

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To access the initial git commands to clone the empty repository to your local machine, click Connect to go to the AWS CodeCommit console. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

git add .
git commit -a -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push

After all the files are pushed successfully, the repository should look like this:

 

Part 3: Setting up CodePipeline to enable blue/green deployments     

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration with the new AMI details is applied to the Auto Scaling group of the application deployment group. You can watch the progress of the automation in the EC2 console from the Systems Manager –> Automations menu. (A minimal sketch of what such a Lambda step can look like appears after this list.)

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the elastic load balancer to the new instances and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any consecutive commits to the application code in the CodeCommit repository trigger the pipelines and deploy the infrastructure and code with an updated AMI and code.
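As promised above, here’s a rough sketch of what a custom BuildGoldenAMI-style Lambda step can look like. The document name and parameters are placeholders for illustration, not the sample’s actual code; the step starts the Automation execution and acknowledges the CodePipeline job.

import boto3

ssm = boto3.client('ssm')
codepipeline = boto3.client('codepipeline')

def handler(event, context):
    # CodePipeline invokes the function with a job ID it expects us to ack.
    job_id = event['CodePipeline.job']['id']
    try:
        execution = ssm.start_automation_execution(
            DocumentName='GoldenAMIAutomationDoc',         # placeholder
            Parameters={'sourceAMIid': ['ami-12345678']},  # placeholder
        )
        print('Started automation:', execution['AutomationExecutionId'])
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(exc)},
        )

A production version would also poll the automation execution until the new AMI ID is available before signalling success.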

 

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.


About the author

 

Ramesh Adabala is a Solutions Architect on the Southeast Enterprise Solution Architecture team at Amazon Web Services.

Video playback on freely-arranged screens with info-beamer

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/info-beamer/

When the creator of the digital signage software info-beamer, Florian Wesch, shared this project on Reddit, I don’t think he was prepared for the excited reaction of the community. Florian’s post, which by now has thousands of upvotes, showcased the power of info-beamer. Not only can the software display a video via multiple Raspberry Pis, it also automatically rejigs the output to match the size and angle of the Pis’ monitors.

info-beamer raspberry pi

Wait…what?

I know, right? We’ve seen many video-based Raspberry Pi projects, but this is definitely one of the most impressive ones. While those of us with a creative streak were imagining cool visual arts installations using monitors and old televisions of various sizes, the more technically-minded puzzled over how Florian pulled this off.

It’s obvious that info-beamer has manifold potential uses. But we had absolutely zero understanding of how it works!

How does info-beamer do this?

Lucky for us, Florian returned to Reddit a few days later with a how-to video, explaining in layman’s terms how you too can get a video to play on a multi-screen, multi-Pi setup.

Automatic video wall configuration with info-beamer hosted

This is an exciting new feature I’ve made available for the info-beamer hosted digital signage system: You can create a video wall consisting of freely arranged screens in seconds. The screens don’t even have to be planar. Just rotate and place them as you like.

First you’ll need to set up info-beamer, which will allow you to introduce multiple Raspberry Pis, and their attached monitors, into a joint network. To make the software work, there’s some Python code you have to write yourself, but hands-on tutorials and example code exist to make this fairly easy, even if you have little experience in Python.

info-beamer raspberry pi

As you can see in Florian’s video, info-beamer assigns each monitor its own, unique section of video. Taking a photo of the monitors and uploading it to a site provides enough information for the software to play a movie trailer split across multiple screens.

info-beamer raspberry pi

A step that’s missing in the video, but that Florian described on Reddit, is how to configure the screens via a drag-and-drop interface so that the software recognizes them. Once this is done, your video display is good to go.

For more information about info-beamer check out the website, and follow the official Twitter account for updates.

Using Raspberry Pi in video-based projects

Since it has an HDMI port, connecting your Raspberry Pi to any compatible monitor, including your television, is an easy task. And with a little tweaking and soldering you can even connect your Pi to that ageing SCART TV/Video combo you might have in the loft.

As I said earlier, there’s an abundance of Pi-powered video-based projects. Many digital art installations, and even commercial media devices, rely on the Raspberry Pi because of its low cost, small size, and high-quality multimedia capabilities.

Have you used a Raspberry Pi in a video-playback project? Share it with us below – we’d love to see it!


All Systems Go! 2017 Speakers

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/all-systems-go-2017-speakers.html

The All Systems Go! 2017 Headline Speakers Announced!

Don’t forget to send in your submissions to the All Systems Go! 2017 CfP! Proposals are accepted until September 3rd!

A couple of headline speakers have been announced now:

  • Alban Crequy (Kinvolk)
  • Brian “Redbeard” Harrington (CoreOS)
  • Gianluca Borello (Sysdig)
  • Jon Boulle (NStack/CoreOS)
  • Martin Pitt (Debian)
  • Thomas Graf (covalent.io/Cilium)
  • Vincent Batts (Red Hat/OCI)
  • (and yours truly)

These folks will also review your submissions as part of the papers committee!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

OSGeo-Live 11.0 Released

Post Syndicated from ris original https://lwn.net/Articles/730340/rss

OSGeo-Live is a live DVD/USB/VM distribution that includes a variety of
open-source geospatial software. Version 11.0 is “a major
reboot, with a refocus on leading applications and emphasis on quality over
quantity. Less mature parts of the projects have been dropped with a
targeted focus placed on upgrading and improving documentation.

Growing up alongside tech

Post Syndicated from Eevee original https://eev.ee/blog/2017/08/09/growing-up-alongside-tech/

IndustrialRobot asks… or, uh, asked last month:

industrialrobot: How has your views on tech changed as you’ve got older?

This is so open-ended that it’s actually stumped me for a solid month. I’ve had a surprisingly hard time figuring out where to even start.


It’s not that my views of tech have changed too much — it’s that they’ve changed very gradually. Teasing out and explaining any one particular change is tricky when it happened invisibly over the course of 10+ years.

I think a better framework for this is to consider how my relationship to tech has changed. It’s gone through three pretty distinct phases, each of which has strongly colored how I feel and talk about technology.

Act I

In which I start from nothing.

Nothing is an interesting starting point. You only really get to start there once.

Learning something on my own as a kid was something of a magical experience, in a way that I don’t think I could replicate as an adult. I liked computers; I liked toying with computers; so I did that.

I don’t know how universal this is, but when I was a kid, I couldn’t even conceive of how incredible things were made. Buildings? Cars? Paintings? Operating systems? Where does any of that come from? Obviously someone made them, but it’s not the sort of philosophical point I lingered on when I was 10, so in the back of my head they basically just appeared fully-formed from the æther.

That meant that when I started trying out programming, I had no aspirations. I couldn’t imagine how far I would go, because all the examples of how far I would go were completely disconnected from any idea of human achievement. I started out with BASIC on a toy computer; how could I possibly envision a connection between that and something like a mainstream video game? Every new thing felt like a new form of magic, so I couldn’t conceive that I was even in the same ballpark as whatever process produced real software. (Even seeing the source code for GORILLAS.BAS, it didn’t quite click. I didn’t think to try reading any of it until years after I’d first encountered the game.)

This isn’t to say I didn’t have goals. I invented goals constantly, as I’ve always done; as soon as I learned about a new thing, I’d imagine some ways to use it, then try to build them. I produced a lot of little weird goofy toys, some of which entertained my tiny friend group for a couple days, some of which never saw the light of day. But none of it felt like steps along the way to some mountain peak of mastery, because I didn’t realize the mountain peak was even a place that could be gone to. It was pure, unadulterated (!) playing.

I contrast this to my art career, which started only a couple years ago. I was already in my late 20s, so I’d already spent decades seeing a very broad spectrum of art: everything from quick sketches up to painted masterpieces. And I’d seen the people who create that art, sometimes seen them create it in real-time. I’m even in a relationship with one of them! And of course I’d already had the experience of advancing through tech stuff and discovering first-hand that even the most amazing software is still just code someone wrote.

So from the very beginning, from the moment I touched pencil to paper, I knew the possibilities. I knew that the goddamn Sistine Chapel was something I could learn to do, if I were willing to put enough time in — and I knew that I’m not, so I’d have to settle somewhere a ways before that. I knew that I’d have to put an awful lot of work in before I’d be producing anything very impressive.

I did it anyway (though perhaps waited longer than necessary to start), but those aren’t things I can un-know, and so I can never truly explore art from a place of pure ignorance. On the other hand, I’ve probably learned to draw much more quickly and efficiently than if I’d done it as a kid, precisely because I know those things. Now I can decide I want to do something far beyond my current abilities, then go figure out how to do it. When I was just playing, that kind of ambition was impossible.


So, I played.

How did this affect my views on tech? Well, I didn’t… have any. Learning by playing tends to teach you things in an outward sprawl without many abrupt jumps to new areas, so you don’t tend to run up against conflicting information. The whole point of opinions is that they’re your own resolution to a conflict; without conflict, I can’t meaningfully say I had any opinions. I just accepted whatever I encountered at face value, because I didn’t even know enough to suspect there could be alternatives yet.

Act II

That started to seriously change around, I suppose, the end of high school and beginning of college. I was becoming aware of this whole “open source” concept. I took classes that used languages I wouldn’t otherwise have given a second thought. (One of them was Python!) I started to contribute to other people’s projects. Eventually I even got a job, where I had to work with other people. It probably also helped that I’d had to maintain my own old code a few times.

Now I was faced with conflicting subjective ideas, and I had to form opinions about them! And so I did. With gusto. Over time, I developed an idea of what was Right based on experience I’d accrued. And then I set out to always do things Right.

That’s served me decently well with some individual problems, but it also led me to inflict a lot of unnecessary pain on myself. Several endeavors languished for no other reason than my dissatisfaction with the architecture, long before the basic functionality was done. I started a number of “pure” projects around this time, generic tools like imaging libraries that I had no direct need for. I built them for the sake of them, I guess because I felt like I was improving some niche… but of course I never finished any. It was always in areas I didn’t know that well in the first place, which is a fine way to learn if you have a specific concrete goal in mind — but it turns out that building a generic library for editing images means you have to know everything about images. Perhaps that ambition went a little haywire.

I’ve said before that this sort of (self-inflicted!) work was unfulfilling, in part because the best outcome would be that a few distant programmers’ lives are slightly easier. I do still think that, but I think there’s a deeper point here too.

In forgetting how to play, I’d stopped putting any of myself in most of the work I was doing. Yes, building an imaging library is kind of a slog that someone has to do, but… I assume the people who work on software like PIL and ImageMagick are actually interested in it. The few domains I tried to enter and revolutionize weren’t passions of mine; I just happened to walk through the neighborhood one day and decided I could obviously do it better.

Not coincidentally, this was the same era of my life that led me to write stuff like that PHP post, which you may notice I am conspicuously not even linking to. I don’t think I would write anything like it nowadays. I could see myself approaching the same subject, but purely from the point of view of language design, with more contrasts and tradeoffs and less going for volume. I certainly wouldn’t lead off with inflammatory puffery like “PHP is a community of amateurs”.

Act III

I think I’ve mellowed out a good bit in the last few years.

It turns out that being Right is much less important than being Not Wrong — i.e., rather than trying to make something perfect that can be adapted to any future case, just avoid as many pitfalls as possible. Code that does something useful has much more practical value than unfinished code with some pristine architecture.

Nowhere is this more apparent than in game development, where all code is doomed to be crap and the best you can hope for is to stem the tide. But there’s also a fixed goal that’s completely unrelated to how the code looks: does the game work, and is it fun to play? Yes? Ship the damn thing and forget about it.

Games are also nice because it’s very easy to pour my own feelings into them and evoke feelings in the people who play them. They’re mine, something with my fingerprints on them — even the games I’ve built with glip have plenty of my own hallmarks, little touches I added on a whim or attention to specific details that I care about.

Maybe a better example is the Doom map parser I started writing. It sounds like a “pure” problem again, except that I actually know an awful lot about the subject already! I also cleverly (accidentally) released some useful results of the work I’ve done thusfar — like statistics about Doom II maps and a few screenshots of flipped stock maps — even though I don’t think the parser itself is far enough along to release yet. The tool has served a purpose, one with my fingerprints on it, even without being released publicly. That keeps it fresh in my mind as something interesting I’d like to keep working on, eventually. (When I run into an architecture question, I step back for a while, or I do other work in the hopes that the solution will reveal itself.)

I also made two simple Pokémon ROM hacks this year, despite knowing nothing about Game Boy internals or assembly when I started. I just decided I wanted to do an open-ended thing beyond my reach, and I went to do it, not worrying about cleanliness and willing to accept a bumpy ride to get there. I played, but in a more experienced way, invoking the stuff I know (and the people I’ve met!) to help me get a running start in completely unfamiliar territory.


This feels like a really fine distinction that I’m not sure I’m doing justice. I don’t know if I could’ve appreciated it three or four years ago. But I missed making toys, and I’m glad I’m doing it again.

In short, I forgot how to have fun with programming for a little while, and I’ve finally started to figure it out again. And that’s far more important than whether you use PHP or not.

The CNC Wood Burner turning heads (and wood, obviously)

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/cnc-wood-burner/

Why stick to conventional laser cutters or CNC machines for creating images on wood, when you can build a device to do the job that is a beautiful piece of art in itself? Mechanical and Computer Science student and Imgur user Tucker Shannon has created a wonderful-looking CNC Wood Burner using a Raspberry Pi and stepper motors. His project has a great vinyl-turntable-like design.

Raspberry Pi CNC Wood Burner

Tucker’s somewhat hypnotic build burns images into wood using a Raspberry Pi and stepper motors
GIF c/o Tucker Shannon

A CNC Wood Burner?

Sure! Why not? Tucker had already put the knowledge he acquired while studying at Oregon State University to good use by catching a bike thief in action with the help of a Raspberry Pi, so it’s obvious he has the skills needed to incorporate our little computer into a project. Moreover, his Skittles portrait of Bill Nye is evidence of his artistic flair, so it’s not surprising that he wanted to make something a little different, and pretty, using code.

Tucker Shannon

“Bill Nye, the Skittles Guy”
Image c/o Tucker Shannon

With an idea in mind and sketches drawn, Tucker first considered using an old record player as the base of his build. Having a rotating deck and arm already in place would have made building his project easier. However, he reports on Imgur:

I thought about that! I couldn’t find any at local thrift shops though. Apparently, they’ve become pretty popular…

We can’t disagree with him. Since his search was unsuccessful, Tucker ended up creating the CNC Wood Burner from scratch.

Raspberry Pi CNC Wood Burner

Concept designs
Image c/o Tucker Shannon

Taking into consideration the lumps and bumps of the wood he would be using as a ‘canvas’, Tucker decided to incorporate a pivot to allow the arm to move smoothly over the rough surface.

The code for the make is currently in ‘spaghetti form’, though Tucker is set to release it, as well as full instructions for the build, in the near future.
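Until that code lands, here’s an illustrative sketch – assumed pins and a generic step/direction driver, not Tucker’s code – of how a Raspberry Pi typically drives a stepper with RPi.GPIO:

import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21  # assumed wiring to a step/direction driver

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def move(steps, clockwise=True, delay=0.001):
    # One pulse on the step pin advances the motor by one step.
    GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
    for _ in range(steps):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(delay)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(delay)

move(200)  # one revolution on a typical 200-step motor
GPIO.cleanup()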

The build

Tucker laser-cut the pieces for the wood burner’s box and gear out of birch and pine wood. As the motors require 12V power, the standard Raspberry Pi supply wasn’t going to be enough, so Tucker scavenged old computer parts and ended up rescuing a PSU (power supply unit). He then fitted the PSU and the Raspberry Pi within the box.

Raspberry Pi CNC Wood Burner

The cannibalised PSU, stepper motor controller, and Raspberry Pi fit nicely into Tucker’s handmade pine box.
Image c/o Tucker Shannon

Next, he got to work building runners for the stepper motor controlling the position of the ‘pen thing’ that would scorch the image into the wood.

Raspberry Pi CNC Wood Burner

Initial tests on paper help to align the pen
Image c/o Tucker Shannon

After a few test runs using paper, the CNC Wood Burner was good to go!

The results

Tucker has used his CNC Wood Burner to create some wonderful pieces of art. The few examples he’s shared on Imgur have impressed us with their precision. We’re looking forward to seeing what else he is going to make with it!

Raspberry Pi CNC Wood Burner

The build burns wonderfully clean-lined images into wood
Image c/o Tucker Shannon

Your turn

Image replication using Raspberry Pis and stepper motors isn’t a new thing – though doing it using a wood-burning device may be! We’ve seen some great builds in which makers set up motors and a marker pen to create massive works of art. Are you one of those makers? Or have you been planning a build similar to Tucker’s project, possibly with a new twist?

Share your project with us below, whether it is complete or still merely sketches in a notebook. We’d love to see what you’re getting up to!


[$] The coming WebKitGTK+ 2.4 apocalypse

Post Syndicated from corbet original https://lwn.net/Articles/730185/rss

It is well understood that old and unmaintained software tends to be a
breeding ground for security problems. These problems are never welcome, but they
are particularly worrying when the software in question is a net-facing
tool like a web browser. Standalone browsers are (hopefully) reasonably
well maintained, but those are not the only web browsers out there; they
can also be embedded into applications. The effort to do away with one
unmaintained embedded browser is finally approaching its conclusion, but
the change appears to have caught some projects unaware.

Vetter: Why Github can’t host the Linux Kernel Community

Post Syndicated from corbet original https://lwn.net/Articles/730184/rss

Daniel Vetter describes
how the kernel community scales
and why he feels that the GitHub model tends not to
work for the largest projects. “Unfortunately github doesn’t support
this workflow, at least not natively in the github UI. It can of course be
done with just plain git tooling, but then you’re back to patches on
mailing lists and pull requests over email, applied manually. In my opinion
that’s the single one reason why the kernel community cannot benefit from
moving to github. There’s also the minor issue of a few top maintainers
being extremely outspoken against github in general, but that’s not
really a technical issue. And it’s not just the linux kernel, it’s all huge
projects on github in general which struggle with scaling, because github
doesn’t really give them the option to scale to multiple repositories,
while sticking with a monotree.”