Tag Archives: bees

The Pi Towers Secret Santa Babbage

Post Syndicated from Mark Calleja original https://www.raspberrypi.org/blog/secret-santa-babbage/

Tired of pulling names out of a hat for office Secret Santa? Upgrade your festive tradition with a Raspberry Pi, thermal printer, and everybody’s favourite microcomputer mascot, Babbage Bear.

Raspberry Pi Babbage Bear Secret Santa

The name’s Santa. Secret Santa.

It’s that time of year again, when the cosiness gets turned up to 11 and everyone starts thinking about jolly fat men, reindeer, toys, and benevolent home invasion. At Raspberry Pi, we’re running a Secret Santa pool: everyone buys a gift for someone else in the office. Obviously, the person you buy for has to be picked in secret and at random, or the whole thing wouldn’t work. With that in mind, I created Secret Santa Babbage to do the somewhat mundane task of choosing gift recipients. This could’ve just been done with some names in a hat, but we’re Raspberry Pi! If we don’t make a Python-based Babbage robot wearing a jaunty hat and programmed to spread Christmas cheer, who will?

Secret Santa Babbage

Ho ho ho!

Mecha-Babbage Xmas shenanigans

The script the robot runs is pretty basic: a list of names entered as comma-separated strings is shuffled at the press of a GPIO button, then a name is popped off the end and stored as a variable. The name is matched to a photo of the person stored on the Raspberry Pi, and a thermal printer pinched from Alex’s super awesome PastyCam (blog post forthcoming, maybe) prints out the picture and name of the person you will need to shower with gifts at the Christmas party. (Well, OK — with one gift. No more than five quid’s worth. Nothing untoward.) There’s also a redo function, just in case you pick yourself: press another button and the last picked name — still stored as a variable — is appended to the list again, which is shuffled once more, and a new name is popped off the end.
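For anyone curious, here is a minimal sketch of that logic, assuming GPIO Zero for the buttons; the pin numbers and names are made up, and the real script lives in the project's GitHub repo:

import random
from signal import pause
from gpiozero import Button

names = ["Alice", "Bob", "Carol", "Dave"]  # entered as comma-separated strings
last_pick = None

def print_recipient(name):
    # Stand-in for matching the name to a photo and sending it to the thermal printer
    print("You are buying for:", name)

def pick_name():
    global last_pick
    random.shuffle(names)       # shuffle at the press of a GPIO button
    last_pick = names.pop()     # pop a name off the end and store it
    print_recipient(last_pick)

def redo():
    # If you pick yourself, put the last name back and draw again
    global last_pick
    if last_pick is not None:
        names.append(last_pick)
        pick_name()

pick_button = Button(17)   # hypothetical pin numbers
redo_button = Button(27)
pick_button.when_pressed = pick_name
redo_button.when_pressed = redo
pause()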

Secret Santa Babbage prototyping

Prototyping!

As the build was a bit of a rush job undertaken at the request of our ‘Director of Vibe’ Emily, there are a few things I’d like to improve about this functionality that I didn’t get around to — more on that later. To add some extra holiday spirit to the project at the last minute, I used Pygame to play a WAV file of Santa’s jolly laugh while Babbage chooses a name for you. The file is included in the GitHub repo along with everything else, because ‘tis the season, etc., etc.
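The laugh itself only needs a few lines of Pygame; something like this sketch, with the filename an assumption on my part:

import pygame

pygame.mixer.init()
laugh = pygame.mixer.Sound("santa_laugh.wav")  # hypothetical filename
laugh.play()  # plays in the background while Babbage picks a name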

Secret Santa Babbage prototyping

Editor’s note: Considering these desk adornments, Mark’s Secret Santa gift-giver has a lot to go on.

Writing the code for Xmas Mecha-Babbage was fairly straightforward, though it uses some tricky bits for managing the thermal printer. You’ll need to install the drivers to make it go, as well as the CUPS package for managing the print hosting. You can find instructions for these things here, thanks to the wonderful Adafruit crew. Also, for reasons I couldn’t fathom, this will all only work on a Pi 2 and not a Pi 3, as there are some compatibility issues with the thermal printer otherwise. (I also tested the script on a Pi Zero W…no dice.)
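Once CUPS knows about the printer, getting a picture out of Python can be as simple as shelling out to lp. A hedged sketch, in which the queue name and image path are assumptions:

import subprocess

def print_photo(path):
    # Send the recipient's photo to the thermal printer queue via CUPS
    subprocess.run(["lp", "-d", "thermal", path], check=True)

print_photo("/home/pi/photos/recipient.png")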

Building a Christmassy throne

The hardest (well, fiddliest) parts of making the whole build were constructing the throne and wiring the bear. Using MakerCase, Inkscape, a bit of ingenuity, and a laser cutter, I was able to rig up a Christmassy plywood throne which has a hole through the seat so I could run the wires down from Babbage and to the Pi inside. I finished the throne by rubbing a couple of fingers of beeswax into it; as well as making the wood shine just a little bit and protecting it against getting wet, this had the added bonus of making it smell awesome.

Secret Santa Babbage inside

Next year’s iteration will be mulled wine–scented.

I next soldered two LEDs to some lengths of wire, and then ran the wires through holes at the top of the throne and down the back along a small channel I had carved with a narrow chisel to connect them to the Pi’s GPIO pins. The green LED will remain on as long as Babbage is running his program, and the red one will light up while he is processing your request. Once the red LED goes off again, the next person can have a go. I also laser-cut a final piece of wood to overlay the back of Babbage’s Xmas throne and cover the wiring a bit.
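The LED behaviour is only a few lines of GPIO Zero; here's a sketch with made-up pin numbers:

from gpiozero import LED

green = LED(5)   # hypothetical pins
red = LED(6)

green.on()  # stays lit as long as the program is running

def handle_request():
    red.on()     # processing: please wait
    pick_name()  # choose and print a recipient, as in the sketch above
    red.off()    # done: the next person can have a go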

Creating a Xmas cyborg bear

Taking two 6 mm tactile buttons, I clipped the spiky metal legs off one side of each (the buttons were going into a stuffed Christmas toy, after all) and soldered a length of wire to each of the remaining legs. Next, I made a small incision into Babbage with my trusty Swiss army knife (in a place that actually made me cringe a little) and fed the buttons up into his paws. At some point in this process I was standing in the office wrestling with the bear and muttering to myself, which elicited some very strange looks from my colleagues.

Secret Santa Babbage throne

Poor Babbage…

One thing to note here: make sure the wires stay attached at the solder points while you push them up into Babbage’s paws. The first time I tried it, I snapped one of my connections and had to start again. It helped to tunnel out some of the stuffing and replace it afterwards, and to support the solder joints with a fingertip as I poked each wire in. Finally, a couple of squirts of hot glue kept Babbage’s furry cheeks firmly on the seat, and done!

Secret Santa Babbage

Next year: Game of Thrones–inspired candy cane throne

The Secret Santa Babbage masterpiece

The whole build process was the perfect holiday mix of cheerful and macabre, and while getting the thermal printer to work was a little time-consuming, the finished product definitely raised some smiles around the office and added a bit of interesting digital flavour to a staid office tradition. And it also helped people who are new to the office or from other branches of the Foundation to know for whom they will be buying a gift.

Secret Santa Babbage

Ready to dispense Christmas cheer!

There are a few ways in which I’ll polish this project before next year, such as having the script write the names to external text files to create a record that will persist in case of a reboot, and maybe having Secret Santa Babbage play you a random Christmas carol when you squeeze his paw instead of just laughing merrily every time. (I also thought about adding electric shocks for those people who are on the naughty list, but HR said no. Bah, humbug!)
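That persistence tweak needn't be complicated. Something along these lines would do, with the file path an assumption:

NAMES_FILE = "/home/pi/santa/remaining_names.txt"  # hypothetical path

def load_names():
    # Restore the names that haven't been drawn yet after a reboot
    with open(NAMES_FILE) as f:
        return [line.strip() for line in f if line.strip()]

def save_names(names):
    # Rewrite the file after every pick so the record survives a power cut
    with open(NAMES_FILE, "w") as f:
        f.write("\n".join(names))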

Make your own

The code and laser cut plans for the whole build are available here. If you plan to make your own, let us know which stuffed toy you will be turning into a Secret Santa cyborg! And if you’ve been working on any other Christmas-themed Raspberry Pi projects, we’d like to see those too, so tag us on social media to share the festive maker cheer.

The post The Pi Towers Secret Santa Babbage appeared first on Raspberry Pi.

Derek Woodroffe’s steampunk tentacle hat

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/steampunk-tentacle-hat/

Halloween: that glorious time of year when you’re officially allowed to make your friends jump out of their skin with your pranks. For those among us who enjoy dressing up, Halloween is also the occasion to go all out with costumes. And so, dear reader, we present to you: a steampunk tentacle hat, created by Derek Woodroffe.

Finished tentacle hat

Extreme Electronics

Derek is an engineer who loves all things electronics. He’s part of Extreme Kits, and he runs the website Extreme Electronics. Raspberry Pi Zero-controlled Tesla coils are Derek’s speciality — he’s even been on one of the Royal Institution’s Christmas Lectures with them! Skip ahead to 15:06 in this video to see Derek in action:

Let There Be Light! // 2016 CHRISTMAS LECTURES with Saiful Islam – Lecture 1

The first Lecture from Professor Saiful Islam’s 2016 series of CHRISTMAS LECTURES, ‘Supercharged: Fuelling the future’. Watch all three Lectures here: http://richannel.org/christmas-lectures. 2016 marked the 80th anniversary of the BBC’s first TV broadcast of the Christmas Lectures. To celebrate, chemist Professor Saiful Islam explores a subject that the Lectures’ founder, Michael Faraday, addressed in the very first Christmas Lectures: energy.

Wearables

Wearables are electronically augmented items you can wear. They might take the form of spy eyeglasses, clothes with integrated sensors, or, in this case, headgear adorned with mechanised tentacles.

Why did Derek make this? We’re not entirely sure, but we suspect he’s a fan of the Cthulhu mythos. In any case, we were a little astounded by his project. This is how we reacted when Derek tweeted us about it:

Raspberry Pi on Twitter

@ExtElec @extkits This is beyond incredible and completely unexpected.

In fact, we had to recover from a fit of laughter before we actually managed to type this answer.

Making a steampunk tentacle hat

Derek made the ‘skeleton’ of each tentacle out of a net curtain spring, acrylic rings, and four lengths of fishing line. Each of two servomotors connects to two of the fishing lines and pulls on them to move the tentacle.

net curtain spring and acrylic rings forming a mechanic tentacle skeleton - steampunk tentacle hat by Derek Woodroffe
Two servos connecting to lengths of fishing line - steampunk tentacle hat by Derek Woodroffe

Then he covered the tentacles with nylon stockings and liquid latex, glued suckers cut out of MDF onto them, and mounted them on an acrylic base. The eight motors connect to a Raspberry Pi via an I2C 8-port PWM controller board.

artificial tentacles - steampunk tentacle hat by Derek Woodroffe
8 servomotors connected to a controller board and a raspberry pi- steampunk tentacle hat by Derek Woodroffe

The Pi makes the servos pull the tentacles so that they move in sine waves in both the x and y directions, seemingly of their own accord. Derek cut open the top of a hat to insert the mounted tentacles, and he used more liquid latex to give the whole thing a slimy-looking finish.
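Derek's exact code isn't reproduced here, but the motion he describes boils down to out-of-phase sine waves. A hedged sketch, assuming a PCA9685-style controller driven with Adafruit's ServoKit library:

import math
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # 8 channels used: one x and one y servo per tentacle

t = 0.0
while True:
    for tentacle in range(4):
        phase = tentacle * math.pi / 2  # offsets stop the tentacles moving in lockstep
        x = 90 + 45 * math.sin(t + phase)                # x-axis angle, degrees
        y = 90 + 45 * math.sin(t + phase + math.pi / 2)  # y-axis, 90 degrees out of phase
        kit.servo[tentacle * 2].angle = x
        kit.servo[tentacle * 2 + 1].angle = y
    t += 0.05
    time.sleep(0.02)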

steampunk tentacle hat by Derek Woodroffe

Iä! Iä! Cthulhu fhtagn!

You can read more about Derek’s steampunk tentacle hat here. He will be at the Beeston Raspberry Jam in November to show off his build, so if you’re in the Nottingham area, why not drop by?

Wearables for Halloween

This build is already pretty creepy, but just imagine it with a sensor- or camera-powered upgrade that makes the tentacles reach for people nearby. You’d have nightmare fodder for weeks.

With the help of the Raspberry Pi, any Halloween costume can be taken to the next level. How could Pi technology help you to win that coveted ‘Scariest costume’ prize this year? Tell us your ideas in the comments, and be sure to share pictures of you in your get-up with us on Twitter, Facebook, or Instagram.

The post Derek Woodroffe’s steampunk tentacle hat appeared first on Raspberry Pi.

New Network Load Balancer – Effortless Scaling to Millions of Requests per Second

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/

Elastic Load Balancing (ELB) has been an important part of AWS since 2009, when it was launched as part of a three-pack that also included Auto Scaling and Amazon CloudWatch. Since that time we have added many features, and also introduced the Application Load Balancer. Designed to support application-level, content-based routing to applications that run in containers, Application Load Balancers pair well with microservices, streaming, and real-time workloads.

Over the years, our customers have used ELB to support websites and applications that run at almost any scale — from simple sites running on a T2 instance or two, all the way up to complex applications that run on large fleets of higher-end instances and handle massive amounts of traffic. Behind the scenes, ELB monitors traffic and automatically scales to meet demand. This process, which includes a generous buffer of headroom, has become quicker and more responsive over the years and works well even for our customers who use ELB to support live broadcasts, “flash” sales, and holidays. However, in some situations, such as instantaneous failover between regions or extremely spiky workloads, we have worked with our customers to pre-provision ELBs in anticipation of a traffic surge.

New Network Load Balancer
Today we are introducing the new Network Load Balancer (NLB). It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

Static IP Addresses – Each Network Load Balancer provides a single IP address for each VPC subnet in its purview. If you have targets in a subnet in us-west-2a and other targets in a subnet in us-west-2c, NLB will create and manage two IP addresses (one per subnet); connections to each IP address will spread traffic across the instances in its subnet. You can also specify an existing Elastic IP for each subnet for even greater control. With full control over your IP addresses, Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.

Zonality – The IP-per-subnet feature reduces latency, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single subnet while still allowing automatic failover.

Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.

Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.

Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.

Creating a Network Load Balancer
I can create a Network Load Balancer by opening up the EC2 Console, selecting Load Balancers, and clicking on Create Load Balancer:

I choose Network Load Balancer and click on Create, then enter the details. I can choose an Elastic IP address for each subnet in the target VPC and I can tag the Network Load Balancer:

Then I click on Configure Routing and create a new target group. I enter a name, and then choose the protocol and port. I can also set up health checks that go to the traffic port or to the alternate of my choice:

Then I click on Register Targets, choose the EC2 instances that will receive traffic, and click on Add to registered:

I make sure that everything looks good and then click on Create:

The state of my new Load Balancer is provisioning, switching to active within a minute or so:
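The same setup can also be scripted. Here's a hedged boto3 sketch of the steps above; the names, subnet IDs, VPC ID, and instance ID are placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Create the Network Load Balancer across two subnets
lb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Subnets=["subnet-0abc1234", "subnet-0def5678"],  # placeholder subnet IDs
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# Create a TCP target group and register an instance with it
tg = elbv2.create_target_group(
    Name="my-targets", Protocol="TCP", Port=80, VpcId="vpc-0123abcd",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0123456789abcdef0"}])

# Forward incoming TCP connections on port 80 to the target group
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)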

For testing purposes, I simply grab the DNS name of the Load Balancer from the console (in practice I would use Amazon Route 53 and a more friendly name):

Then I sent it a ton of traffic (I intended to let it run for just a second or two but got distracted and it created a huge number of processes, so this was a happy accident):

$ while true;
> do
>   wget http://nlb-1-6386cc6bf24701af.elb.us-west-2.amazonaws.com/phpinfo2.php &
> done

A more disciplined test would use a tool like Bees with Machine Guns, of course!

I took a quick break to let some traffic flow and then checked the CloudWatch metrics for my Load Balancer, finding that it was able to handle the sudden onslaught of traffic with ease:

I also looked at my EC2 instances to see how they were faring under the load (really well, it turns out):

It turns out that my colleagues did run a more disciplined test than I did. They set up a Network Load Balancer and backed it with an Auto Scaled fleet of EC2 instances. They set up a second fleet composed of hundreds of EC2 instances, each running Bees with Machine Guns and configured to generate traffic with highly variable request and response sizes. Beginning at 1.5 million requests per second, they quickly turned the dial all the way up, reaching over 3 million requests per second and 30 Gbps of aggregate bandwidth before maxing out their test resources.

Choosing a Load Balancer
As always, you should consider the needs of your application when you choose a load balancer. Here are some guidelines:

Network Load Balancer (NLB) – Ideal for load balancing of TCP traffic, NLB is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.

Application Load Balancer (ALB) – Ideal for advanced load balancing of HTTP and HTTPS traffic, ALB provides advanced request routing that supports modern application architectures, including microservices and container-based applications.

Classic Load Balancer (CLB) – Ideal for applications that were built within the EC2-Classic network.

For a side-by-side feature comparison, see the Elastic Load Balancer Details table.

If you are currently using a Classic Load Balancer and would like to migrate to a Network Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create a Network Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Pricing & Availability
As with the Application Load Balancer, pricing is based on Load Balancer Capacity Units, or LCUs. Billing is $0.006 per LCU-hour, based on the highest value seen across the following dimensions:

  • Bandwidth – 1 GB per LCU.
  • New Connections – 800 per LCU.
  • Active Connections – 100,000 per LCU.

Most applications are bandwidth-bound and should see a cost reduction (for load balancing) of about 25% when compared to Application or Classic Load Balancers.
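To make the LCU math concrete, here is a worked example (hedged: it treats the dimensions above as hourly figures, and you should check current AWS pricing):

# An hour that moves 100 GB, opens 40,000 new connections,
# and peaks at 200,000 active connections:
bandwidth_lcus = 100 / 1.0            # 100 GB at 1 GB per LCU -> 100
new_conn_lcus = 40000 / 800.0         # -> 50
active_conn_lcus = 200000 / 100000.0  # -> 2

lcus = max(bandwidth_lcus, new_conn_lcus, active_conn_lcus)  # billed on the highest dimension
print(lcus * 0.006)  # 100 LCUs -> $0.60 for the hour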

Network Load Balancers are available today in all AWS commercial regions except China (Beijing), supported by AWS CloudFormation, Auto Scaling, and Amazon ECS.

Jeff;


CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask.

Coolest Projects 2017 LED ping-pong ball display
Coolest Projects 2017 Benjamin and Oly

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017 Aimee's cook book
Coolest Projects 2017 Aimee's setup

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Coolest Projects 2017 face detection bear
Coolest Projects 2017 face detection interface
Coolest Projects 2017 face detection database

Helen’s and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, shown at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017 coolest robot no.1
Coolest Projects 2017 coolest robot no.2
Coolest Projects 2017 coolest robot no.3

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017 Mogamad, Daniel, Basheerah, Oly
Coolest Projects 2017 Alexa text-based game

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017 Lego home alone house
Coolest Projects 2017 Lego home alone innards
Coolest Projects 2017 Lego home alone innards closeup

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017 NI Jam DOTS activity 1
Coolest Projects 2017 NI Jam DOTS activity 2
Coolest Projects 2017 NI Jam DOTS activity 3
Coolest Projects 2017 NI Jam DOTS activity 4
Coolest Projects 2017 NI Jam DOTS activity 5
Coolest Projects 2017 NI Jam DOTS activity 6

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017 winning team 1
Coolest Projects 2017 winning team 2
Coolest Projects 2017 winning team 3

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

Continuous Deployment to Amazon ECS using AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS CloudFormation

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/continuous-deployment-to-amazon-ecs-using-aws-codepipeline-aws-codebuild-amazon-ecr-and-aws-cloudformation/

Thanks to my colleague John Pignata for a great blog on how to create a continuous deployment pipeline to Amazon ECS.

Delivering new iterations of software at a high velocity is a competitive advantage in today’s business environment. The speed at which organizations can deliver innovations to customers and adapt to changing markets is increasingly a pivotal attribute that can make the difference between success and failure.

AWS provides a set of flexible services designed to enable organizations to embrace the combination of cultural philosophies, practices, and tools called DevOps that increases an organization’s ability to deliver applications and services at high velocity.

In this post, I explore the DevOps practice called continuous deployment and outline a reference architecture to implement an automated deployment pipeline for applications delivered as Docker containers onto Amazon ECS using AWS CodePipeline, AWS CodeBuild, and AWS CloudFormation.

What is continuous deployment?

Agility is often cited as a key advantage of cloud computing over the traditional delivery of IT resources. Instead of waiting weeks or months for other departments to provision a new server, developers can create new instances with a click or API call and start using them within minutes. This newfound speed and autonomy frees developers to experiment and deliver new products and features to their customers as quickly as possible.

On top of the cloud, teams are embracing DevOps practices in order to achieve a faster time-to-market, better code quality, and more reliable releases of their products and services. Continuous deployment is a DevOps practice in which new software revisions are automatically built, tested, packaged, and released to production.

Continuous deployment enables developers to ship features and fixes through an entirely automated software release process. Instead of batching up large releases over a period of weeks or months and conducting deployments manually, developers can use automation to deliver versions of their applications many times a day as new software revisions are ready for users. In the same way cloud computing abbreviates the delivery time of resources, continuous deployment reduces the release cycle of new software to your users from weeks or months to minutes.

Embracing this speed and agility has many benefits including:

  • New features and bug fixes are released to users quickly; code sitting in a source code repository does not deliver business value or benefit your customers. By releasing new software revisions as close to immediately as possible, customers start benefiting from your work more quickly and teams can get more focused feedback.
  • Change sets are smaller; large change sets create challenges in pinpointing root causes of issues, bugs, and other regressions. By releasing smaller change sets more frequently, teams can more easily attribute and correct introduced issues.
  • Automated deployment encourages best practices; as any change committed to your source code repository can be deployed immediately via automation, teams have to ensure that changes are well-tested and that their production environments are closely monitored.

How does continuous deployment work?

Continuous deployment is conducted by an automated pipeline that coordinates the activities related to software release and provides visibility into the process. During the process, a releasable artifact is built, tested, packaged, and deployed into a production environment. The releasable artifact might be an executable file, a package of script files, a container, or some other component that ultimately must be delivered to production.

AWS CodePipeline is a continuous delivery and deployment service that coordinates the building, testing, and deployment of your code each time there is a new software revision. CodePipeline provides visible, central orchestration for taking a code change and moving it through a workflow and ultimately into the hands of your users. The pipeline defines stages to retrieve code from a source code repository, build the source code into a releasable artifact, test that artifact, and deliver it to production while ensuring that these stages happen in order and are halted if a failure occurs.

While CodePipeline powers the delivery pipeline and orchestrates the process, it does not have facilities for building or testing the software itself. For these stages, CodePipeline integrates with several other tools, including AWS CodeBuild, which is a fully managed build service. CodeBuild compiles source code, runs tests, and produces software packages that are ready to deploy. That makes it ideal for the build and test stages of a continuous deployment pipeline. Out of the box, CodeBuild has native support for many different kinds of build environments, including building Docker containers.

Containers are a powerful mechanism for software delivery, as they allow for a predictable and reproducible environment and provide a high level of confidence that changes tested in one environment can be successfully deployed. AWS provides several services to run and manage Docker container images. Amazon ECS is a highly scalable and high performance container management service that allows you to run applications on a cluster of Amazon EC2 instances. Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

Finally, CodePipeline integrates with several services to facilitate deployment, including AWS Elastic Beanstalk, AWS CodeDeploy, AWS OpsWorks, and your own custom deployment code or process using AWS Lambda or AWS CloudFormation. These deployment actions can be used to power the final step in your pipeline to push the newly built changes live onto your production environment.

Continuous deployment to Amazon ECS

Here’s a reference architecture that puts these components together to deliver a continuous deployment pipeline of Docker applications onto ECS:

This architecture demonstrates how to deploy containers onto ECS and ECR using CodePipeline to build a fully automated continuous deployment pipeline on top of AWS. This approach to continuous deployment is entirely serverless and uses managed services for the orchestration, build, and deployment of your software.

The pipeline created in the reference architecture looks like the following:

In this post, I discuss each stage in this reference architecture. What happens when a developer changes some copy on a landing page and pushes that change into the source code repository?

First, in the Source stage, the pipeline is configured with details for accessing a source code repository system. In the reference architecture, you have a sample application hosted in a GitHub repository. CodePipeline polls this repository and initiates a new pipeline execution for each new commit. In addition to GitHub, CodePipeline also supports source locations such as a Git repository in AWS CodeCommit or a versioned object stored in Amazon S3. Each new build is retrieved from the source code repository, packaged as a zip file, stored on S3, and sent to the next stage of the pipeline.

The Source stage also defines a template artifact stored on Amazon S3. This is the template that defines the deployment environment used by the deployment stage after a successful build of the application.

The Build stage uses CodeBuild to create a new Docker container image based upon the latest source code and pushes it to an ECR repository. CodePipeline also integrates with a number of third-party build systems, such as Jenkins, CloudBees, Solano CI, and TeamCity.

Finally, the Deploy stage uses CloudFormation to create a new task definition revision that points to the newly built Docker container image and updates the ECS service to use the new task definition revision. After this is done, ECS initiates a deployment by fetching the new Docker container from ECR and restarting the service.
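Under the hood, that CloudFormation update amounts to two ECS API calls. Expressed directly in boto3 as a sketch (the family, cluster, service, and image names are placeholders):

import boto3

ecs = boto3.client("ecs")

# Register a new task definition revision that points at the freshly built image
task_def = ecs.register_task_definition(
    family="web",
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:abc123",
        "memory": 256,
        "portMappings": [{"containerPort": 80}],
    }],
)
revision_arn = task_def["taskDefinition"]["taskDefinitionArn"]

# Point the service at the new revision; ECS then rolls out the deployment
ecs.update_service(cluster="production", service="web", taskDefinition=revision_arn)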

After all of the pipeline’s stages are green, you can reload the application in a web browser and see the developer’s copy changes live in production. This happened automatically, without any human intervention.

This pipeline is now in production, listening for new code in the source code repository, and ready to ship any future changes that your team pushes into production. It’s also extensible, meaning that new stages can be added to include additional steps. For example, you could include a test stage to execute unit and acceptance tests to ensure the new code revision is safe to deploy to production. After it’s deployed, a notification step could be added to alert your team via email or a Slack channel that a new version is live, along with the details about the change set deployed to production.

Conclusion

We’re excited to see what kinds of applications you can deliver to your users using this approach and how it affects your product development processes. The cloud unlocks massive advantages in agility, and the ability to implement techniques like continuous deployment unlocks a significant competitive advantage.

You’ll find an AWS CloudFormation template with everything necessary to spin up your own continuous deployment pipeline at the AWS Labs EC2 Container Service – Reference Architecture: Continuous Deployment repo on GitHub. If you have any questions, feedback, or suggestions, please let us know!

Aussie KickassTorrents Blocking Battle Continues, Despite Takedown

Post Syndicated from Andy original https://torrentfreak.com/aussie-kickasstorrents-blocking-battle-continues-despite-takedown-161025/

Back in April, members of the Australian Recording Industry Association (ARIA) and Australasian collecting society APRA AMCOS teamed up to file the music industry’s first ‘pirate’ site blocking application Down Under.

Filed at the Federal Court under section 115A of the Copyright Act 1968, member labels Universal Music, Warner Music, Sony Music and J Albert & Son demanded that then leading torrent site KickassTorrents (KAT) should be blocked by the country’s ISPs.

Arguing that KickassTorrents showed a “complete disrespect for music creators and the value of music”, the industry groups asked leading ISPs Telstra, Optus, TPG, and Foxtel to stop their subscribers from accessing the site. However, during the summer that job was effectively carried out for them by the US Department of Justice, which shut down KickassTorrents and had its owner arrested.

But despite the disappearance of the site, the Aussie case has continued. The music industry is now focusing on the many clones, copies, and wannabees that are using the KickassTorrents name to get traffic, despite having nothing to do with the original site.

This week the parties were back in the Federal Court. The ISPs aren’t fighting the blocking demand per se, but as usual there’s a dispute over who will foot the bill for legal proceedings and shoulder the costs of implementing the blockades.

None of the ISPs are objecting to paying for the blocking systems to be put in place. However, they want rightsholders to pay for the initial implementation and ongoing maintenance of a block, which according to ComputerWorld will be put in place for three years.

ISP Optus estimated a cost of AUS$12,500 (US$9,533) to put blocks in place. TPG informed the court that following initial setup, each domain name would cost $50 to block.

Simplifying the rolling injunction demands made by the movie and TV industry in the blocking case against The Pirate Bay and others, the music industry is seeking straightforward DNS-based blocking, backed up by a system which would block subsequently appearing clones, mirrors, and proxy sites.

The record labels and their allies foresee an application to the court containing details of any site they wish the ISPs to block, with the ISPs given 10 days to object to the blocking demand. The court would then decide whether the parties would need to appear before another hearing in advance of the domain being added to the blocklist.

Of course, KickassTorrents no longer exists so the continuing of a case to have it blocked is somewhat unusual, to say the least.

Illustrating just how far removed the case has shifted from its original aims is an image posted by CNET, which shows the original domains the industry wanted blocked, and how that has completely changed following the demise of KAT.

kat-block

– Kat.al is an incomplete snapshot of KAT before it went offline.
– Kattor.zyx has nothing to do with KAT; it redirects to another site.
– Kickass.cd is a clone of The Pirate Bay.
– Kickasstorrents.immunicity.date is another ‘snapshot’ site.
– Kickass.pe is completely inactive.
– Kickass.Ukbypass.download (see Kickass.cd)
– Kickass.Unblocked.tv (see Kickass.cd)

The case (which now has only tenuous links to KickassTorrents) continues alongside the movie industry’s blocking case against The Pirate Bay. That too is experiencing argument over who will pay for what and has also been affected by takedowns.

The Pirate Bay, isoHunt, Torrentz, TorrentHound and Solarmovie are featured in that action, but only the first two domains are intact after the last three permanently closed (1,2) in recent months.

Only time will tell whether the expense and inevitable game of whac-a-mole will be worth it, but all the signs point to this being a complex battle with no definite end.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

On a technicality

Post Syndicated from Eevee original https://eev.ee/blog/2016/07/22/on-a-technicality/

Apropos of nothing, I’d like to tell you a story. I’ve touched on this before, but this is the full version. It’s the story of a hypothetical small-to-medium Internet community.

Stop me if you’ve heard this one

You create a little community for a thing you like. You give it a phpBB forum or something.

You want people to be nice, so you make a couple rules. No swearing. No spamming. Don’t use all caps.

You invite your friends, and they invite their friends, and all is well and good. There are a few squabbles now and then, but they get resolved without too much trouble, and everyone more or less gets along.

One day, a new person shows up, and starts linking to their website in almost every thread. Their website mostly consists of very mean-spirited articles written about several well-known and well-liked people in the group. When people ask them to stop, they lash out with harsh insults.

So you ban them.

There is immediate protest from a number of people, most of whom you strangely don’t recognize. The person didn’t break any of the rules — how dare you ban them? They never swore. They never used all caps. They never even spammed, because technically spam is unwanted and automated, and this was a real person linking their website which is related to the thing the community is about.

You can’t think of a good counter-argument for this, so you unban them. You also add a new rule, prohibiting linking to websites.

Now the majority of the community is affected, because they can’t link their own work any more. This won’t work. You repeal the previous rule, and instead make one that limits the number of website links to one per day.

The original jerk responds by linking their website once a day, and then making other posts that link to that first post they made. They continue to be abrasive towards everyone else, but they never swear, and you’re just not sure what to do about that.

A few other people start posting, seemingly just to make fun of the rest of you, but likewise never break any of your rules.

A preposterous arms race follows, with the rules becoming increasingly nitpicky as you try to distinguish overt antagonism from mundane and innocent behavior.

After a while, you notice that many of your friends no longer come around. And there seem to be a lot more jerks than there were before. You don’t understand why. Your rules are reasonable, and you enforced them fairly, right?

But it’s not really a swear word

I’ve noticed that people really like to write rules that sound objective. Seems like a good enough idea, right? Lets everyone know exactly what the line is.

The trick is that human behavior, and especially human language, are very… squishy. We gauge each other based on a lot of unspoken context: our prior relationship, how both of us seem to be feeling, whether or not we skipped lunch today. When the same comment or action can mean radically different things in different circumstances, it’s hard to draw a fine distinction between what’s acceptable behavior and what’s not.

And rules are written in human language, which makes them just as squishy. Who decides what “swearing” is? If all caps aren’t allowed, how about 90%? Who decides what’s a slur? What, precisely, constitutes harassment? These things sound straightforward and concrete, but they can still be nitpicked to death.

We give people the benefit of the doubt and assume they’ll try to respect what we clearly mean, but there’s nothing guaranteeing that.

Have you ever tried to politely decline a request or invitation, and been asked why not? Then the other party starts trying to weasel around your reason, and now you’re somehow part of a debate about what you want? I’ve seen it happen with mundane social interactions, with freelance workers, and of course, with small online communities.

This isn’t to say that hunting for technicalities is a sign of aggressive malice; it’s human nature. We want to do a thing, we’re told we can’t because of X, and so we see X as an obstacle to overcome. Language is subjective, so it’s the easiest avenue of attack.

Fixing this in rules is a hard problem. The obvious approach is to add increasingly specific details, though then you risk catching innocent behaviors, and you can end up stuck in an almost comical game of cat-and-mouse where you keep trying to find ways to edit your own rules so you’re allowed to punish someone you’ve already passed judgment on.

I think we forget that even real laws are somewhat subjective, often hinging on intent. There are entire separate crimes for homicide, depending on whether it was intentional or accidental or due to clear neglect. These things get decided by a judge or a jury and become case law, the somewhat murky extra rules that aren’t part of formal law but are binding nonetheless.

(In an awkward twist, a lot of communities — especially very large platforms! — don’t explain their reasoning for punishing any particular behavior. That somewhat protects them from being “but technically“-ed, but it also means there’s no case law, and no one else can quite be sure what’s expected behavior.)

That’s why I mostly now make quasirules like “don’t be a dick” or “keep your vitriol to your own blog“. The general expectation is still clear, and it’s obvious that I reserve the right to judge individual cases — which, in the case of a small community, is going to happen anyway. Let’s face it: small communities are monarchies, not democracies.

I do have another reason for this, which is based on another observation I’ve made of small communities. I’ve joined a few where I didn’t bother reading the rules, made some conversation, never bothered anyone, and then later discovered that I’d pretty clearly violated a rule. But no one ever pointed it out, and perhaps no one even noticed, because I wasn’t being a dick.

So I concluded that, for a smaller community, the people who need the rules are likely to be people who you don’t want around in the first place. And “don’t be a dick” covers that just as well.

Evaporative cooling

There are some nice people in the world. I mean nice people, the sort I couldn’t describe myself as. People who are friends with everyone, who are somehow never involved in any argument, who seem content to spend their time drawing pictures of bumblebees on flowers that make everyone happy.

Those people are great to have around. You want to hold onto them as much as you can.

But people only have so much tolerance for jerkiness, and really nice people often have less tolerance than the rest of us.

The trouble with not ejecting a jerk — whether their shenanigans are deliberate or incidental — is that you allow the average jerkiness of the community to rise slightly. The higher it goes, the more likely it is that those really nice people will come around less often, or stop coming around at all. That, in turn, makes the average jerkiness rise even more, which teaches the original jerk that their behavior is acceptable and makes your community more appealing to other jerks. Meanwhile, more people at the nice end of the scale are drifting away.

And this goes for a community of any size, though it may take more jerks to significantly affect a very large platform.

It’s still hard to give someone the boot, though, because it just feels like a really harsh thing to do to someone, especially for an abstract reason like “preserving the feel of the community”. And a jerk is more likely to make a fuss about being made to leave, which makes it feel like a huge issue — whereas nice people generally leave very quietly, and you may not even notice until several of them have been gone for a while.

There’s a human tendency to measure peace as though it were the inverse of volume: the louder people get, the less peaceful it is. We then try to optimize for the least arguing. I’m sure you’ve seen this happen before: someone in a group points out that the group is doing something destructive, that causes an argument, and then onlookers blame the person who pointed out the problem for causing the argument to happen. You can probably think of some pretty high-profile examples in some current events.

(You may relatedly enjoy the tale of the missing stair.)

Have you ever watched one of those TV shows where a dude comes in to berate restaurant owners for all the ridiculous things they’ve been doing? One of the most common defenses is: “well, no one complained“.

In the age of the Internet, where it seems like everyone is always complaining about something, it’s easy to forget that by and large people don’t complain. Sure, they might complain on their Twitter or to their friends or whatever, but chances are, they won’t complain to you. Consider: either you’re aware of the problem and have failed to solve it, or you’re clueless for not noticing. Either way, complaining won’t help anything; it’ll just cause conflict, making them that person who “caused” an argument by pointing out the obvious.

Gamification

Some people are aware of the technicality game on some level, and decide to play it — deliberately. Maybe to get their way; maybe just for fun.

These are people who think “it’d be a shame if something happened to it” is just the way people talk. Layered thick with multiple levels of irony, cloaked in jokes and misdirection, up to its eyeballs in plausible deniability, but crystal clear to the right audience.

It’s a game that offers them a massive advantage, because even if you both know you’re playing it, they have much more experience. Oh, and chances are they don’t even truly care about whether they’re banned or not, so they have nothing to lose — whereas you’re stuck with an existential crisis, questioning everything you believe about free speech and community management, while your nicest peers sneak out the back door.

I remember a time when someone in a community I helped run decided they didn’t like me. They started making subtle jabs, and eventually built up to saying the most biting and personal things they could think to say. Those things weren’t true, but they didn’t know that, and they phrased everything in such a way that their friends could rationalize them as not really trying to be cruel. And they had quite a lot of friends in the community, which put me in a pretty awkward position. How do I justify banning them, if a significant number of people are sure they’re innocent? Am I fucking crazy for seeing this glaring pattern when no one else does?

I did eventually ban them, but it contributed to a complete schism where most of the more grating people left to form their own clubhouse. Win/win?

Or let’s say, hypothetically, that some miscreant constructs a fake tweet screenshot. It’s shared by a high-profile person and spreads like wildfire.

Should either of them be punished? Which one, and why? The faker probably regarded it as a harmless joke; if not for the sharer, it would’ve remained one. Yet the sharer’s only crime was being popular. Did the sharer know it was fake? Was the sharer trying to inflict harm, draw attention to troubling behavior, or share something that made them laugh? Are the faker and the sharer the same person? If you can’t be sure either way, does it matter?

What if, instead of the thing you may be thinking about, the forgery depicted Donald Trump plagiarizing Barack Obama’s tweet congratulating Michelle Obama for her speech? Does that change any of the answers?

This is really difficult in extremely large groups, where you most want to avoid doling out arbitrary punishment, yet where people who play this game can inflict the most damage. The people who make and enforce the rules may not even be part of the group any more, and certainly can’t form an impression of every individual person in the group, so how can anything be enforced consistently? How do you account for intention, sarcasm, irony, self-deprecating humor? How do you explain this clearly without subjecting yourself to an endless deluge of technicalities? You could refuse to explain yourself at all, of course, but then you leave yourself open for people to offer their own explanations: you’re a tyrant who bans anyone who contradicts you, or you hated them for demographic reasons, or you’re just plain irrational and do zany cruel things to people around you on a whim.

I don’t have any good answers

I’m not sure there are any. Corralling people is a tricky problem. We still barely know how to do it in meatspace groups of half a dozen, let alone digital groups numbering in the hundreds of millions.

Our current approaches kinda suck, though.

Plan Bee

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/plan-bee/

Bees are important. I find myself saying this a lot and, slowly but surely, the media seems to be coming to this realisation too. The plight of the bee is finally being brought to our attention with increasing urgency.

A colony of bees make honey

Welcome to the house of buzz.

In the UK, bee colonies are suffering mass losses. Due to the use of bee-killing fertilisers and pesticides within the farming industry, the decline of pollen-rich plants, the destruction of hives by mites, and Colony Collapse Disorder (CCD), bees are in decline at a worrying pace.

Bee Collision

When you find the perfect GIF…

One hint of a silver lining is that increasing awareness of the crisis has led to a rise in the number of beekeeping hobbyists. As getting your hands on some bees is now as simple as ordering a box from the internet, keeping bees in your garden is a much less daunting venture than it once was. 

Taking this one step further, beekeepers are now using tech to monitor the conditions of their bees, improving conditions for their buzzy workforce while also recording data which can then feed into studies attempting to lessen the decline of the bee.

WDLabs recently donated a PiDrive to the Honey Bee Gardens Project in order to help beekeeper David Ammons and computer programmer Graham Total create The Hive Project, an electric beehive colony that monitors real-time bee data.

Electric Bee Hive

The setup records colony size, honey production, and bee health to help combat CCD.

Colony Collapse Disorder (CCD) is decidedly mysterious. Colonies hit by the disease seem to simply disappear. The hive itself often remains completely intact, full of honey at the perfect temperature, but… no bees. Dead or alive, the bees are nowhere to be found.

To try to combat this phenomenon, the electric hive offers 24/7 video coverage of the inner hive, while tracking the conditions of the hive population.

Bee bringing pollen into the hive

This is from the first live day of our instrumented beehive. This was the only bee we spotted all day that brought any pollen into the hive.

Ultimately, the team aim for the data to be crowdsourced, enabling researchers and keepers to gain the valuable information needed to fight CCD via a network of electric hives. While many people blame the aforementioned pollen decline and chemical influence for the rise of CCD, without the empirical information gathered from builds such as The Hive Project, the source of the problem, and therefore the solution, can’t be found.

Bee making honey

It has been brought to our attention that the picture here previously was of a wasp doing bee things. We have swapped it out for a bee.


Ammons and Total researched existing projects around the use of digital tech within beekeeping, and they soon understood that a broad analysis of bee conditions didn’t exist. While many were tracking hive weight, temperature, or honey production, there was no system for integrating all of this data collection in one place. This realisation spurred them on further.

“We couldn’t find any one project that took a broad overview of the whole area. Even if we don’t end up being the people who implement it, we intend to create a plan for a networked system of low-cost monitors that will assist both research and commercial beekeeping.”

With their mission statement firmly in place, the duo looked toward the Raspberry Pi as the brain of their colony. The device is small enough to fit within the hive without disruption, and powerful enough to let them monitor multiple factors at once while also using the Pi Camera Module to record all video to the 314GB storage of the Western Digital PiDrive.
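The Hive Project's own code may well differ, but the core loop could look something like this sketch, which assumes the picamera library, a DS18B20-style 1-Wire temperature sensor, and a PiDrive mounted at /mnt/pidrive:

import time
from picamera import PiCamera

SENSOR = "/sys/bus/w1/devices/28-000005e2fdc3/w1_slave"  # hypothetical sensor ID

def read_temp_c():
    # DS18B20 readings arrive as millidegrees after "t=" in the 1-Wire file
    with open(SENSOR) as f:
        return int(f.read().split("t=")[-1]) / 1000.0

camera = PiCamera()
camera.start_recording("/mnt/pidrive/hive.h264")  # round-the-clock footage to the PiDrive
try:
    while True:
        with open("/mnt/pidrive/hive_log.csv", "a") as log:
            log.write("{},{}\n".format(time.time(), read_temp_c()))
        time.sleep(60)  # one temperature sample per minute
finally:
    camera.stop_recording()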

Data recorded by The Hive Project is vital to the survival of the bee, the growth of colony population, and an understanding of the conditions of the hive in changing climates. These are issues which affect us all. The honey bee is responsible for approximately 80% of pollination in the UK, and is essential to biodiversity. Here, I should hand over to a ‘real’ bee to explain more about the importance of bee-ing…

Bee Movie – Devastating Consequences – HD

Barry doesn’t understand why all the bees aren’t happy. Then, Vanessa shows Barry the devastating consequences of the bees being triumphant in their lawsuit against the human race.


The post Plan Bee appeared first on Raspberry Pi.