Fleet Management Made Easy with Auto Scaling

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/fleet-management-made-easy-with-auto-scaling/

If your application runs on Amazon EC2 instances, then you have what’s referred to as a ‘fleet’. This is true even if your fleet is just a single instance. Automating how your fleet is managed can have big pay-offs, both for operational efficiency and for maintaining the availability of the application that it serves. You can automate the management of your fleet with Auto Scaling, and the best part is how easy it is to set up!

There are three main functions that Auto Scaling performs to automate fleet management for EC2 instances:

  • Monitoring the health of running instances
  • Automatically replacing impaired instances
  • Balancing capacity across Availability Zones

In this post, we describe how Auto Scaling performs each of these functions, provide an example of how easy it is to get started, and outline how to learn more about Auto Scaling.

Monitoring the health of running instances

Auto Scaling monitors the health of all instances that are placed within an Auto Scaling group. Auto Scaling performs EC2 health checks at regular intervals, and if the instance is connected to an Elastic Load Balancing load balancer, it can also perform ELB health checks. Auto Scaling ensures that your application is able to receive traffic and that the instances themselves are working properly. When Auto Scaling detects a failed health check, it can replace the instance automatically.

Automatically replacing impaired instances

When an impaired instance fails a health check, Auto Scaling automatically terminates it and replaces it with a new one. If you’re using an Elastic Load Balancing load balancer, Auto Scaling gracefully detaches the impaired instance from the load balancer before provisioning a new one and attaches it back to the load balancer. This is all done automatically, so you don’t need to respond manually when an instance needs replacing.
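The monitor-and-replace behavior described above can be sketched as a simple reconciliation loop. This is an illustrative toy model, not the actual Auto Scaling implementation; the instance records and the `launch_instance` helper are invented for the example.

```python
import itertools

_ids = itertools.count(1)

def launch_instance():
    """Stand-in for launching a fresh EC2 instance."""
    return {"id": f"i-{next(_ids):04d}", "healthy": True}

def reconcile(fleet):
    """Terminate instances that failed their health check and launch
    replacements, keeping the fleet at the same size."""
    replaced = [i for i in fleet if not i["healthy"]]
    survivors = [i for i in fleet if i["healthy"]]
    survivors.extend(launch_instance() for _ in replaced)
    return survivors, [i["id"] for i in replaced]

fleet = [launch_instance() for _ in range(3)]
fleet[1]["healthy"] = False            # simulate a failed health check
fleet, terminated = reconcile(fleet)
print(len(fleet), terminated)          # fleet size preserved; impaired instance gone
```

The point is that the fleet size is invariant: every termination is paired with a launch, which is what lets you stop responding to instance failures by hand.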

Balancing capacity across Availability Zones

Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet. What’s more, Auto Scaling only launches into Availability Zones in which there is available capacity for the requested instance type.
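The balancing rule can be illustrated with a small sketch: each new launch goes to the Availability Zone that currently has the fewest instances. This mirrors the behavior described above but is not the service's actual code, and the zone names are just examples.

```python
from collections import Counter

def pick_zone(counts):
    """Return the zone with the fewest running instances
    (ties broken alphabetically for determinism)."""
    return min(counts, key=lambda z: (counts[z], z))

counts = Counter({"us-east-1a": 2, "us-east-1b": 1, "us-east-1c": 2})
for _ in range(3):                     # launch three more instances
    counts[pick_zone(counts)] += 1

print(dict(counts))                    # zone counts now differ by at most one
```

However many instances you add, a fewest-first rule keeps the spread as even as possible across the configured zones.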

Getting started is easy!

The easiest way to get started with Auto Scaling is to build a fleet from existing instances. The AWS Management Console provides a simple workflow to do this: right-click on a running instance and choose Instance Settings, Attach to Auto Scaling Group.

You can then opt to attach the instance to a new Auto Scaling group. Your instance is now being automatically monitored for health and will be replaced if it becomes impaired. If you configure additional zones and add more instances, they will be spread evenly across Availability Zones to make your fleet more resilient to unexpected failures.

Diving deeper

While this example is a good starting point, you may want to dive deeper into how Auto Scaling can automate the management of your EC2 instances.

The first thing to explore is how to automate software deployments. AWS Elastic Beanstalk is a popular and easy-to-use solution that works well for web applications. AWS CodeDeploy is a good solution for fine-grained control over the deployment process. If your application is based on containers, then Amazon EC2 Container Service (Amazon ECS) is something to consider. You may also want to look into AWS Partner solutions such as Ansible and Puppet. One common strategy for deploying software across a production fleet without incurring downtime is blue/green deployments, to which Auto Scaling is particularly well-suited.

These solutions are all enhanced by the core fleet management capabilities in Auto Scaling. You can also use the API or CLI to roll your own automation solution based on Auto Scaling. The following learning path will help you to explore the service in more detail.

  • Launch configurations
  • Lifecycle hooks
  • Fleet size
  • Automatic process control
  • Scheduled scaling

Launch configurations

Launch configurations are the key to how Auto Scaling launches instances. Whenever an Auto Scaling group launches a new instance, it uses the currently associated launch configuration as a template for the launch. In the example above, Auto Scaling automatically created a launch configuration by deriving it from the attached instance. In many cases, however, you create your own launch configuration. For example, if your software environment is baked into an Amazon Machine Image (AMI), then your launch configuration points to the version that you want Auto Scaling to deploy onto new instances.

Lifecycle hooks

Lifecycle hooks let you take action before an instance goes into service or before it gets terminated. This can be especially useful if you are not baking your software environment into an AMI. For example, launch hooks can perform software configuration on an instance to ensure that it’s fully prepared to handle traffic before Auto Scaling proceeds to connect it to your load balancer. One way to do this is by connecting the launch hook to an AWS Lambda function that invokes RunCommand on the instance.

Terminate hooks can be useful for collecting important data from an instance before it goes away. For example, you could use a terminate hook to preserve your fleet’s log files by copying them to an Amazon S3 bucket when instances go out of service.
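Either kind of hook ends the same way: your automation tells Auto Scaling to continue or abandon the lifecycle action. As a hedged sketch, here is a helper a Lambda-based hook might use to build that call. The parameter names follow the `CompleteLifecycleAction` API, but the shape of the incoming notification message is an assumption; check a real notification in your own environment before relying on it.

```python
def completion_params(message, success=True):
    """Build the arguments for autoscaling.complete_lifecycle_action()
    from a lifecycle hook notification message (field names assumed)."""
    return {
        "LifecycleHookName": message["LifecycleHookName"],
        "AutoScalingGroupName": message["AutoScalingGroupName"],
        "LifecycleActionToken": message["LifecycleActionToken"],
        "LifecycleActionResult": "CONTINUE" if success else "ABANDON",
    }

msg = {
    "LifecycleHookName": "launch-hook",        # hypothetical names
    "AutoScalingGroupName": "web-asg",
    "LifecycleActionToken": "token-123",
}
print(completion_params(msg)["LifecycleActionResult"])   # CONTINUE
```

Returning `ABANDON` instead tells Auto Scaling the instance never became ready, so it is terminated and replaced.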

Fleet size

You control the size of your fleet using the minimum, desired, and maximum capacity attributes of an Auto Scaling group. Auto Scaling automatically launches or terminates instances to keep the group at the desired capacity. As mentioned before, Auto Scaling uses the launch configuration as a template for launching new instances in order to meet the desired capacity, doing so such that they are balanced across configured Availability Zones.
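The interaction of the three size attributes is easy to show in a minimal sketch: desired capacity is clamped to the [minimum, maximum] range, and the group launches or terminates to close the gap between running instances and that clamped value.

```python
def clamp_desired(minimum, desired, maximum):
    """Desired capacity can never fall outside [minimum, maximum]."""
    return max(minimum, min(desired, maximum))

def capacity_delta(running, minimum, desired, maximum):
    """Positive: instances to launch; negative: instances to terminate."""
    return clamp_desired(minimum, desired, maximum) - running

# Asking for 7 with a maximum of 6 clamps to 6; with 4 running, launch 2.
print(capacity_delta(running=4, minimum=2, desired=7, maximum=6))   # 2
```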

Automatic process control

You can control the behavior of Auto Scaling’s automatic processes such as health checks, launches, and terminations. You may find the AZRebalance process of particular interest. By default, Auto Scaling automatically terminates instances from one zone and re-launches them into another if the instances in the fleet are not spread out in a balanced manner.
You may want to disable this behavior under certain conditions. For example, if you’re attaching existing instances to an Auto Scaling group, you may not want them terminated and re-launched right away if that is required to re-balance your zones. Note that Auto Scaling always replaces impaired instances with launches that are balanced across zones, regardless of this setting. You can also control how Auto Scaling performs health checks, launches, terminations, and more.

Scheduled scaling

Scheduled scaling is a simple tool for adjusting the size of your fleet on a schedule. For example, you can add more or fewer instances to your fleet at different times of the day to handle changing customer traffic patterns. A more advanced tool is dynamic scaling, which adjusts the size of your fleet based on Amazon CloudWatch metrics.
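The idea behind scheduled scaling can be shown with a toy lookup table: the most recent schedule entry at or before the current hour wins. Real scheduled actions are configured on the Auto Scaling group itself; this sketch only illustrates the concept, and the schedule values are invented.

```python
SCHEDULE = [(0, 2), (8, 10), (20, 4)]   # midnight: 2, 8am: 10, 8pm: 4

def desired_at(hour):
    """Return the desired capacity in effect at the given hour."""
    capacity = SCHEDULE[0][1]
    for start, size in SCHEDULE:
        if hour >= start:
            capacity = size
    return capacity

print(desired_at(9), desired_at(23))    # 10 4
```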


Auto Scaling can bestow important benefits to cloud applications by automating the management of fleets of EC2 instances. Auto Scaling makes it easy to monitor instance health, automatically replace impaired instances, and spread capacity across multiple Availability Zones.

If you already have a fleet of EC2 instances, then it’s easy to get started with Auto Scaling in just a few clicks. After your first Auto Scaling group is working to safeguard your existing fleet, you can follow the suggested learning path in this post. Over time, you can explore more features of Auto Scaling and further automate your software deployments and application scaling.

If you have questions or suggestions, please comment below.

KSI Buys His Own Movie 62 Times to Atone For Piracy Sins

Post Syndicated from Andy original https://torrentfreak.com/ksi-buys-his-own-movie-62-times-to-atone-for-piracy-sins-161020/

KSI, the UK-based YouTubing, videogaming rapper-comedian, has 14.8 million channel subscribers. As a result he’s one of the most popular stars on the Internet.

The London-based jack-of-all-trades recently released a movie called Laid in America. It came out on Blu-ray earlier this month but rather than all of his fans buying it, many pirated it on torrent sites.

That led to KSI going crazy in a video published last week, in which he called his pirating fans some terrible, awful things.

Quickly, however, people with long memories recalled that KSI himself isn’t so innocent when it comes to getting stuff without paying for it. Indeed, KSI had previously downloaded a pirate copy of Sony Vegas and even asked for help on Twitter to get it working.


As a result, the cries of ‘hypocrite’ on his channel became deafening but of course, someone of KSI’s YouTubing abilities was hardly going to let this opportunity go to waste. We were waiting for a response and yesterday morning it came, albeit briefly.

KSI uploaded a new video titled “I’M A HYPOCRITE” but immediately made it private before we could get a sneak preview. Was this his long-awaited apology?

In the past few hours it became clear that, yes, yes it was. In a new, well-produced five-minute video, KSI gets on his knees and asks for forgiveness.

Ok, he doesn’t at all. He dresses up as a pirate, goes for a swashbuckling adventure around London, and attempts to atone for his sins by buying stuff.


However, instead of buying Sony software to make up for his earlier transgression, KSI does the next best thing.

He visits HMV, buys every single copy of his OWN movie that the store has in stock, takes them out onto the streets, and gives them all away to adoring fans. No doubt impressed, Sony are probably recalling their lawyers right now.

While some might find his style unpalatable, there’s little doubt that KSI is a master of YouTube and utterly brilliant at getting clicks. Of course, everyone who reports on his antics only adds to his popularity, but when people like him are prepared to deal with a difficult topic like piracy in a refreshing way, that’s worthy of a second look.

Last week we toyed with the tantalizing possibility that KSI might be engaged in a guerilla anti-piracy campaign. While that may or may not be the case, if you’re trying to reach the YouTube generation with that kind of message, there doesn’t seem to be a better way to go about it.

Young people aren’t best known for following the advice of men in suits. Young men dressed as pirates, on the other hand…. GetitRight campaign take note?

Finally, while the video is no doubt entertaining, even the cleverest anti-piracy campaigns can backfire in unexpected ways, as the YouTube comment below illustrates.


Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Developer Tool Recap – Recent Enhancements to CodeCommit, CodePipeline, and CodeDeploy

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-developer-tool-recap-recent-enhancements-to-codecommit-codepipeline-and-codedeploy/

The AWS Developer Tools help you to put modern DevOps practices to work! Here’s a quick overview (read New AWS Tools for Code Management and Deployment for an in-depth look):

AWS CodeCommit is a fully-managed source code control service. You can use it to host secure and highly scalable private Git repositories while continuing to use your existing Git tools and workflows (watch the Introduction to AWS CodeCommit video to learn more).

AWS CodeDeploy automates code deployment to Amazon Elastic Compute Cloud (EC2) instances and on-premises servers. You can update your application at a rapid clip, while avoiding downtime during deployment (watch the Introduction to AWS CodeDeploy video to learn more).

AWS CodePipeline is a continuous delivery service that you can use to streamline and automate your release process. Checkins to your repo (CodeCommit or Git) will initiate build, test, and deployment actions (watch Introducing AWS CodePipeline for an introduction). The build can be deployed to your EC2 instances or on-premises servers via CodeDeploy, AWS Elastic Beanstalk, or AWS OpsWorks.

You can combine these services with your existing build and testing tools to create an end-to-end software release pipeline, all orchestrated by CodePipeline.

We have made a lot of enhancements to the Code* products this year and today seems like a good time to recap all of them for you! Many of these enhancements allow you to connect the developer tools to other parts of AWS so that you can continue to fine-tune your development process.

CodeCommit Enhancements
Here’s what’s new with CodeCommit:

  • Repository Triggers
  • Code Browsing
  • Commit History
  • Commit Visualization
  • Elastic Beanstalk Integration

Repository Triggers – You can create Repository Triggers that Send Notification or Run Code whenever a change occurs in a CodeCommit repository (these are sometimes called webhooks — user-defined HTTP callbacks). These hooks will allow you to customize and automate your development workflow. Notifications can be delivered to an Amazon Simple Notification Service (SNS) topic or can invoke a Lambda function.
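As a hedged sketch, a Lambda function invoked by a repository trigger might look like this. The event shape (`Records` → `codecommit` → `references`) follows the CodeCommit-to-Lambda event format, but treat the exact field names as an assumption and confirm them against a logged event.

```python
def handler(event, context=None):
    """Collect the branch refs and commit ids that changed."""
    changes = []
    for record in event.get("Records", []):
        for ref in record.get("codecommit", {}).get("references", []):
            changes.append((ref.get("ref"), ref.get("commit")))
    return changes

# Hypothetical sample event for illustration:
sample = {"Records": [{"codecommit": {"references": [
    {"ref": "refs/heads/master", "commit": "abc123"}]}}]}
print(handler(sample))   # [('refs/heads/master', 'abc123')]
```

From there the function could kick off a build, post to a chat channel, or anything else Lambda can do.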

Code Browsing – You can Browse Your Code in the Console. This includes navigation through the source code tree and the code:

Commit History – You can View the Commit History for your repositories (mine is kind of quiet, hence the 2015-era dates):

Commit Visualization – You can View a Graphical Representation of the Commit History for your repositories:

Elastic Beanstalk Integration – You can Use CodeCommit Repositories with Elastic Beanstalk to store your project code for deployment to an Elastic Beanstalk environment.

CodeDeploy Enhancements
Here’s what’s new with CodeDeploy:

  • CloudWatch Events Integration
  • CloudWatch Alarms and Automatic Deployment Rollback
  • Push Notifications
  • New Partner Integrations

CloudWatch Events Integration – You can Monitor and React to Deployment Changes with Amazon CloudWatch Events by configuring CloudWatch Events to stream changes in the state of your instances or deployments to an AWS Lambda function, an Amazon Kinesis stream, an Amazon Simple Queue Service (SQS) queue, or an SNS topic. You can build workflows and processes that are triggered by your changes. You could automatically terminate EC2 instances when a deployment fails or you could invoke a Lambda function that posts a message to a Slack channel.
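The Slack-on-failure idea above might be sketched as a Lambda handler that filters for failed deployments. The detail-type string and `detail.state` field follow the CodeDeploy CloudWatch Events format, but verify them against a real event before depending on this; the notification action itself is left as a stub.

```python
DEPLOYMENT_EVENT = "CodeDeploy Deployment State-change Notification"

def handler(event, context=None):
    """React only to failed CodeDeploy deployments."""
    if event.get("detail-type") != DEPLOYMENT_EVENT:
        return None
    detail = event.get("detail", {})
    if detail.get("state") == "FAILURE":
        # Here you might post to Slack, page someone, etc.
        return f"deployment {detail.get('deploymentId')} failed"
    return None

evt = {"detail-type": DEPLOYMENT_EVENT,
       "detail": {"state": "FAILURE", "deploymentId": "d-XYZ"}}
print(handler(evt))   # deployment d-XYZ failed
```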

CloudWatch Alarms and Automatic Deployment Rollback – CloudWatch Alarms give you another type of Monitoring for your Deployments. You can monitor metrics for the instances or Auto Scaling groups managed by CodeDeploy and, if they cross a threshold for a defined period of time, take action: stop a deployment, or change the state of an instance by rebooting, terminating, or recovering it. You can also automatically roll back a deployment in response to a deployment failure or a CloudWatch Alarm.

Push Notifications – You can Receive Push Notifications via Amazon SNS for events related to your deployments and use them to track the state and progress of your deployment.

New Partner Integrations – Our CodeDeploy Partners have been hard at work, connecting their products to ours. Here are some of the most recent offerings:

CodePipeline Enhancements
And here’s what’s new with CodePipeline:

  • AWS OpsWorks Integration
  • Triggering of Lambda Functions
  • Manual Approval Actions
  • Information about Committed Changes
  • New Partner Integrations

AWS OpsWorks Integration – You can Choose AWS OpsWorks as a Deployment Provider in the software release pipelines that you model in CodePipeline:

You can also configure CodePipeline to use OpsWorks to deploy your code using recipes contained in custom Chef cookbooks.

Triggering of Lambda Functions – You can now Trigger a Lambda Function as one of the actions in a stage of your software release pipeline. Because Lambda allows you to write functions to perform almost any task, you can customize the way your pipeline works:

Manual Approval Actions – You can now add Manual Approval Actions to your software release pipeline. Execution pauses until the code change is approved or rejected by someone with the required IAM permission:

Information about Committed Changes – You can now View Information About Committed Changes to the code flowing through your software release pipeline:


New Partner Integrations – Our CodePipeline Partners have been hard at work, connecting their products to ours. Here are some of the most recent offerings:

New Online Content
In order to help you and your colleagues to understand the newest development methodologies, we have created some new introductory material:

Thanks for Reading!
I hope that you have enjoyed this quick look at some of the most recent additions to our development tools.

In order to help you to get some hands-on experience with continuous delivery, my colleagues have created a new Pipeline Starter Kit. The kit includes an AWS CloudFormation template that will create a VPC with two EC2 instances inside, a pair of applications (one for each EC2 instance, both deployed via CodeDeploy), and a pipeline that builds and then deploys the sample application, along with all of the necessary IAM service and instance roles.


Security advisories for Thursday

Post Syndicated from jake original http://lwn.net/Articles/704119/rss

CentOS has updated java-1.8.0-openjdk (C7; C6: multiple vulnerabilities).

Debian has updated kernel (multiple vulnerabilities, one from 2015).

Debian-LTS has updated kernel (multiple vulnerabilities, one from 2015) and libxvmc (code execution).

Fedora has updated glibc-arm-linux-gnu (F23: denial of service) and perl-DBD-MySQL (F23: denial of service).

Oracle has updated java-1.8.0-openjdk (OL7; OL6: multiple vulnerabilities).

Red Hat has updated java-1.6.0-sun (multiple vulnerabilities), java-1.7.0-oracle (multiple vulnerabilities), and java-1.8.0-oracle (RHEL7&6: multiple vulnerabilities).

Scientific Linux has updated java-1.8.0-openjdk (SL7&6: multiple vulnerabilities).

SUSE has updated quagga (SLE11: code execution).

Ubuntu has updated kernel (12.04; 14.04; 16.04; 16.10: privilege escalation), linux-lts-trusty (12.04: privilege escalation), linux-lts-xenial (14.04: privilege escalation), linux-raspi2 (16.04: privilege escalation), linux-snapdragon (16.04: privilege escalation), and linux-ti-omap4 (12.04: privilege escalation).

An important set of stable kernel updates

Post Syndicated from corbet original http://lwn.net/Articles/704078/rss

A new set of stable kernel updates, including 4.4.26, has been released. There’s nothing in the announcements to indicate this, but they all contain a fix for CVE-2016-5195, a bug that can allow local attackers to overwrite files they should not have write access to. So the “all users must upgrade” message seems more than usually applicable this time around.

Police Confirm Arrests of BlackCats-Games Operators

Post Syndicated from Andy original https://torrentfreak.com/police-confirm-arrests-blackcats-games-operators-161020/

After being down for several hours, yesterday the domain of private tracker BlackCats-Games was seized by the UK’s Police Intellectual Property Crime Unit.

The domain used to point to an IP address in Canada, but was later switched to a server known to be under the control of PIPCU, the UK’s leading anti-piracy force.

Following several hours of rumors, last evening sources close to the site began to confirm that the situation was serious. Reddit user Farow went public with specific details, noting that the owner of BlackCats-Games had been arrested and the site would be closing down.

Former site staff member SteWieH added that there had in fact been two arrests and it was the site’s sysops that had been taken into custody.

While both are credible sources, there was no formal confirmation from PIPCU. That came a few moments ago and it’s pretty bad news for fans of the site and its operators.

“Officers from the City of London Police Intellectual Property Crime Unit (PIPCU) have arrested two men in connection with an ongoing investigation into the illegal distribution of copyright protected video games,” the unit said in a statement.

Police say that the raids took place on Tuesday, with officers arresting two men aged 47 and 44 years at their homes in Birmingham, West Midlands and Blyth, Northumberland. Both were arrested on suspicion of copyright infringement and money laundering offenses.

Detectives say they also seized digital media and computer hardware.

PIPCU report that the investigation into the site was launched in cooperation with UK Interactive Entertainment (UKIE) and the Entertainment Software Association (ESA). Former staff member SteWieH says that a PayPal account operated by the site in 2013 appears to have played an important role in the arrests.

Detective Sergeant Gary Brown from the City of London Police Intellectual Property Crime Unit said that their goal was to disrupt the work of “content thieves.”

“With the ever-growing consumer appetite for gaming driving the threat of piracy to the industry, our action today is essential in disrupting criminal activity and the money which drives it,” Brown said.

“Those who steal copyrighted content exploit the highly skilled work and jobs supported by the gaming industry. We are working hard to tackle digital intellectual property crime and we will continue to target our enforcement activity towards those identified as content thieves whatever scale they are operating at.”

UK Interactive Entertainment welcomed the arrests.

“UKIE applauds the action taken by PIPCU against the operators of the site. Sites like this are harmful to the hard work of game creators around the world. PIPCU’s actions confirm that these sites will not be tolerated, and are subject to criminal enforcement,” a spokesman said.

Stanley Pierre-Louis, general counsel for the Entertainment Software Association, thanked PIPCU for its work.

“ESA commends PIPCU for its commitment to taking action against sites that facilitate the illegal copying and distribution of incredibly advanced works of digital art. We are grateful for PIPCU’s leadership in this area and their support of creative industries.”

Both men have been released on police bail.


Elections 2016 – Polling Stations Abroad

Post Syndicated from Боян Юруков original http://yurukov.net/blog/2016/sekcii-2016/

On October 15, the Central Election Commission (CEC) announced the polling stations for the presidential elections and the referendum on November 6. Yesterday it amended its decision, adding more stations in Germany, Spain, and Austria. As of today, we know there will be 307 polling stations in 71 countries.

The new map

I expect further changes to the stations abroad. At the very least, not all of their addresses are clear. Many of them will be in embassies and consulates, while for others we collected the addresses from local coordinators. That’s why I published the new map of polling stations abroad on Glasuvam.org. I will update the data there as it becomes available.

This time I rewrote the map to work better on mobile devices. Unlike the previous version, searching for stations and places happens entirely on the device. The data is not sent to my server and is not stored anywhere. When you search for a town near which there should be a station, a lookup is made via Google’s search service. On opening the site, everyone can decide whether to share their location; for that, the phone’s GPS needs to be on.

Map of the elections abroad from Glasuvam.org

This version also lets you suggest corrections to a station’s location. Addresses are still updated centrally by me, but the marker itself can be moved on the map. To do that, open the station’s information, click the suggestion link, move the marker to what you believe is the correct place, and submit it. I will then review all suggestions and update the data.

I’ve also added a menu with information about the map, the license, and privacy, along with a simple embed code. The map’s data and code are released under the free CC0 license, which means you can use it however you see fit.

The application campaign

The CEC’s decisions were preceded by a substantial campaign to collect applications. There was no shortage of scandals. There was considerable uncertainty about the per-location limits, and there were problems with the CEC’s website and with filling in the data. In the United Kingdom the limit of 35 stations was quickly exceeded. The Foreign Ministry announced at the last moment that Germany had refused to allow stations outside consulates and honorary consulates. Written applications were introduced only in the final days of the campaign, which heavily skewed the data, and applications ended up scattered across different places.

All this led to serious difficulties in organizing stations locally. Many applications lost their purpose for one reason or another. The effect was that in Germany not enough applications were collected to open more stations in the places where, according to the Foreign Ministry, Germany had allowed them. After the serious discontent on the 15th, when the CEC announced only one station each in places with heavy turnout in previous elections, the Foreign Ministry sent an opinion to the CEC requesting that more be opened. It is unclear why such an opinion was not sent earlier, when all these problems were already evident; I had warned about them repeatedly in recent weeks. The several letters from our communities abroad also helped get more stations opened where they were needed.

Stations in previous years

I put together a quick table comparing the stations by country over the last three years. At first glance, the number of stations is larger than at the last referendum and the 2013 parliamentary elections, but considerably smaller than at the 2014 parliamentary elections. The main reason is the stations in Turkey: while in 2014 there were 136, last year and this year there are 7 and 35 respectively. For the referendum there was simply no interest, and for these elections there is a cap.

If we exclude Turkey from the total, there seems to be little difference in the number of stations: it ranges between 270 and 290. The breakdown by country, however, shows otherwise. Stations in Western Europe, the US, and Canada make up 66% of all stations (excluding Turkey); in 2014 and 2015 they were 72% and 75% respectively. This means that quite a few stations were moved away from countries with large Bulgarian communities to places where there are almost no Bulgarians. That is easily explained by the CEC’s decision to automatically open stations in embassies even without any applications or any indication that anyone wants to vote there.

What’s next?

Now the information campaign begins. The CEC’s spokesperson announced, perhaps for the first time, that the commission has no intention of being involved in informing Bulgarians abroad. With the exception of 2014, the Foreign Ministry again seems to have no plans either.

Local volunteers will inform our communities. On Monday I will send out a newsletter to Glasuvam.org subscribers, which will include the station nearest to each subscriber. If you would like to receive it, you can register on the site.

The deadline for submitting proposals for polling station commissions abroad expired on Monday. Today the CEC is holding consultations with the registered parties and initiative committees on staffing them. We expect those lists soon, as well as the final list of station addresses.

The final, and perhaps most important, touch of these elections is to talk about what we are actually voting on. There is practically no debate about the referendum, for example. I keep my personal opinion on it, and on the presidential elections, separate from the information campaign that I and many others are running: you won’t see anything on that question on the Glasuvam.org map, in the newsletter, or in my articles about the organization of the elections. Still, in the coming weeks I intend to write in detail about how I plan to vote and why I think it matters.

Inspiring educators with a special MagPi!

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/inspiring-educators-special-magpi/

If there’s one thing we’re passionate about here at the Raspberry Pi Foundation, it’s sharing our community’s passion for making with technology. Back in January, the Education team exhibited at the Bett Show with a special Educator’s Edition of our fabulous magazine, The MagPi. The goal was to share our projects and programmes with educators who could join our increasing community of digital makers. Like all our publications, a downloadable PDF was made available on our website; this was good thinking, as the magazine proved to be very popular and we ran out of copies soon after the show.

Exhibiting at the Bett Show 2016 with the special Educator’s Edition of The MagPi

This year, we’ve been working hard to improve the support we provide to our Raspberry Pi Certified Educators when they take their first steps post-Picademy, and begin to share their new skills with their students or faculty on their own. In the past, we’ve provided printable versions of our resources or handed out copies of The MagPi. Instead of providing these separately, we thought it would be fun to bundle them together for all to access.

Educators getting hands-on with their builds at Picademy

Thanks to the support of our colleagues in the MagPi team, we’ve been able to bring you a new and improved special edition of The MagPi: it’s aimed at educators and is packed full of new content, including tutorials and guides, for use in schools and clubs. You can download a free PDF of the second issue of the special Educator’s Edition right now. If you want a printed copy, then you’ll need to seek us out at events or attend a Picademy in the UK and US whilst we have them in stock!

Warning: contains inspiration!

Contents include:

  • The digital making revolution in education: how the maker movement has been taking the classroom by storm!
  • A case study: creative computing at Eastwood Academy
  • How to start a Code Club in your school
  • Physical computing tutorials with Python and Scratch
  • Teaching computing with Minecraft
  • Blinky lights, cameras, micro:bits, and motor tutorials
  • Sonic Pi live coding
  • What’s next for Astro Pi?
  • News about Raspberry Pi in education

Blinky lights tutorial page from MagPi

Case study page from MagPi about Eastwood Academy

The MagPi Educator’s Edition is freely licensed under Creative Commons (BY-SA-NC 3.0).

The post Inspiring educators with a special MagPi! appeared first on Raspberry Pi.

President Obama Talks About AI Risk, Cybersecurity, and More

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/president_obama_1.html

Interesting interview:

Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.

What I spend a lot of time worrying about are things like pandemics. You can’t build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, click triggers that tell us when we see something emerging, and make sure we’ve got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.

SHA-256 and SHA3-256 Are Safe For the Foreseeable Future

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/ZmbJUVGw8OU/

Hashing has always been a contentious issue – it used to be MD5, then SHA-1, then bcrypt, and now it looks like SHA-256 or SHA3-256 might be the future, with quantum science boffins predicting it’s not feasible to crack. You can read more about the algorithm and its design (using sponge construction) on Wikipedia here: SHA-3 While it’s reasonable […]
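Both algorithms ship in Python’s standard hashlib module (SHA-3 support arrived in Python 3.6), so it’s easy to compare them side by side; a quick sketch:

```python
import hashlib

msg = b"hello world"

# SHA-256: part of the SHA-2 family (Merkle-Damgard construction)
sha2 = hashlib.sha256(msg).hexdigest()

# SHA3-256: the Keccak sponge construction standardised as SHA-3
sha3 = hashlib.sha3_256(msg).hexdigest()

print("SHA-256:  ", sha2)
print("SHA3-256: ", sha3)

# Same 256-bit output length, completely different internal designs,
# so a structural break in one is unlikely to carry over to the other.
assert len(sha2) == len(sha3) == 64
assert sha2 != sha3
```

The identical output size but unrelated internals is the point: SHA3-256 is a hedge against SHA-2 cryptanalysis, not a replacement forced by any known weakness.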


Read the full post at darknet.org.uk

Megaupload User Fears Complete Data Loss, Asks Court For Help

Post Syndicated from Ernesto original https://torrentfreak.com/megaupload-user-fears-complete-data-loss-asks-court-for-help-161020/

In the wake of Megaupload’s shutdown nearly five years ago, many of the site’s users complained that their personal files had been lost as collateral damage.

One of these users is Kyle Goodwin, who operates a sports video company in Ohio. He used Megaupload as part of his business, storing large videos he created himself.

After Megaupload’s servers were raided Mr. Goodwin could no longer access the files. In an effort to remedy this, he asked the court to help him and others to retrieve their personal property.

Helped by the Electronic Frontier Foundation (EFF) and Stanford’s Hoover Institution, Mr. Goodwin filed over half a dozen requests asking the court to find a workable solution for the return of his data. Thus far, however, this has been without success.

This week, his legal team once again raised the issue before the Virginia District Court, pointing out that the government’s actions are still being felt today.

“As a result of the government’s actions, Mr. Goodwin and many other former Megaupload users lost access to their valuable data, and that data remains inaccessible today,” Goodwin’s legal team writes (pdf).

The files were originally stored by Carpathia Hosting, which was later taken over by QTS Realty Trust. Although the backups are still in place, QTS informed the court last year that this may not last for long.

At the time QTS noted that “there is a high likelihood that the disk drives, on which the data presumably reside, will experience high failure rates.”

Meanwhile, the parties involved including the Government, Megaupload and copyright holders, have yet to find a mutually agreeable solution for the data retrieval. Similarly, the court has yet to rule on Mr. Goodwin’s motion asking for the return of his property, which he filed in 2012.

“Mr. Goodwin’s motion remains pending. Further delay may mean the complete loss of Mr. Goodwin’s valuable property and that of other former Megaupload users,” his lawyers write.

Hoping to finally make a breakthrough while the data still exists, Mr. Goodwin is now asking the court to rule on his pending motion for return of his property as soon as possible.

Megaupload counsel Ira Rothken hopes that the court will hold the Government responsible for their actions and that it will help to reunite former users with their data.

“Megaupload looks forward to having the court determine whether or not the U.S. acted appropriately by turning off all consumer access to their data stored in the cloud,” Rothken tells TF in a comment.

“The Department of Justice should avoid elevating Hollywood interests over consumer interests and do the right thing for consumers like Kyle Goodwin who wants access to youth soccer videos he stored in the Megaupload cloud,” he adds.

The sports videographer is not the only one waiting to be reunited with his files. Many others are in the same position. Just a few weeks ago a former Megaupload user contacted TorrentFreak in desperation, hoping to recover a personal photo that is very dear to him.

Whether the court can help to make this happen has yet to be seen. The lack of progress over the past several years doesn’t encourage optimism.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Steal This Show S02E04: Decentralize All The Things

Post Syndicated from J.J. King original https://torrentfreak.com/steal-show-s02e04-decentralize-things/

Yours is one of a number of startups setting out to address the problems of centralized content monopolies. This micropayments-based content sharing network lets users tip their favourite creators using Bitcoin, but also benefit by sharing in the earnings of successful creators.

It’s hotly anticipated in both the Bitcoin and decentralized content spaces, following founder Ryan X. Charles‘ viral post, ‘Fix Reddit With Bitcoin‘.

A year after Ryan’s idea of a decentralized, bitcoin-powered Reddit caught the attention of certain parts of the internet, they’re getting near to launch. In this show, Jamie meets Ryan Charles and Steven McKie of Yours to place the project in the history of P2P efforts to help us regain control of our content online. We discuss:

  • Digg and Reddit’s early days, and how the legacy of Digg still haunts Reddit
  • Reddit’s nascent plans to become decentralized and how they got shelved
  • How Jamie nearly made a billion dollars on Bitcoin, or says he did
  • Corruption on the Bitcoin subreddit, and how decentralization could address moderators gone bad
  • Bitcoin-powered torrenting with Joystream, and Jamie’s idea for a ratio-less private tracker
  • The Yours.network concept: a content platform in which creators and reposters share in content payments.

Steal This Show aims to release bi-weekly episodes featuring insiders discussing copyright and file-sharing news. It complements our regular reporting by adding more room for opinion, commentary and analysis.

The guests for our news discussions will vary and we’ll aim to introduce voices from different backgrounds and persuasions. In addition to news, STS will also produce features interviewing some of the great innovators and minds.

Host: Jamie King

Guest: Ryan X. Charles and Steven McKie

Produced by Jamie King
Edited & Mixed by Riley Byrne
Original Music by David Triana
Web Production by Siraje Amarniss


Run Windows Server 2016 on Amazon EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/run-windows-server-2016-on-amazon-ec2/

You can now run Windows Server 2016 on Amazon Elastic Compute Cloud (EC2). This version of Windows Server is packed with new features, including support for Docker and Windows containers. We are making it available in all AWS regions today, in four distinct forms:

  • Windows Server 2016 Datacenter with Desktop Experience – The mainstream version of Windows Server, designed with security and scalability in mind, with support for both traditional and cloud-native applications. To learn a lot more about Windows Server 2016, download The Ultimate Guide to Windows Server 2016 (registration required).
  • Windows Server 2016 Nano Server – A cloud-native, minimal install that takes up a modest amount of disk space and boots more swiftly than the Datacenter version, while leaving more system resources (memory, storage, and CPU) available to run apps and services. You can read Moving to Nano Server to learn how to migrate your code and your applications. Nano Server does not include a desktop UI so you’ll need to administer it remotely using PowerShell or WMI. To learn how to do this, read Connecting to a Windows Server 2016 Nano Server Instance.
  • Windows Server 2016 with Containers – Windows Server 2016 with Windows containers and Docker already installed.
  • Windows Server 2016 with SQL Server 2016 – Windows Server 2016 with SQL Server 2016 already installed.

Here are a couple of things to keep in mind with respect to Windows Server 2016 on EC2:

  • Memory – Microsoft recommends a minimum of 2 GiB of memory for Windows Server. Review the EC2 Instance Types to find the type that is the best fit for your application.
  • Pricing – The standard Windows EC2 Pricing applies; you can launch On-Demand and Spot Instances, and you can purchase Reserved Instances.
  • Licensing – You can (subject to your licensing terms with Microsoft) bring your own license to AWS.
  • SSM Agent – An upgraded version of our SSM Agent is now used in place of EC2Config. Read the User Guide to learn more.

Containers in Action
I launched the Windows Server 2016 with Containers AMI and logged in to it in the usual way:

Then I opened up PowerShell and ran the command docker run microsoft/sample-dotnet. Docker downloaded the image and launched it. Here’s what I saw:

We plan to add Windows container support to Amazon ECS by the end of 2016. You can register here to learn more.

Get Started Today
You can get started with Windows Server 2016 on EC2 today. Try it out and let me know what you think!


Reserved Seating Now Open for AWS re:Invent 2016 Sessions

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/reserved-seating-now-open-for-reinvent-2016-sessions/

re:Invent 2016 logo

Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! If you have not badged in by the time the session begins, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

You can watch a 23-minute video that explains reserved seating and how to start reserving your seats today.

Or you can log in and start reserving seats now. That login page is also available from the AWS re:Invent 2016 home page.

– Craig

Cliché: Security through obscurity (again)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/cliche-security-through-obscurity-again.html

This post keeps popping up in my timeline. It’s wrong. The phrase “security through/by obscurity” has become such a cliché that it’s lost all meaning. When somebody says it, they are almost certainly saying a dumb thing, regardless of whether they support it or are trying to debunk it.

Let’s go back to first principles, namely Kerckhoff’s Principle from the 1800s that states cryptography should be secure even if everything is known about it except the key. In other words, there exists no double-secret military-grade encryption with secret algorithms. Today’s military crypto is public crypto.
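That principle is exactly how modern primitives are built. HMAC, for example, is a fully public algorithm; all of its security rests in the key. A minimal Python sketch using the standard library (the key and message here are made up for illustration):

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is public knowledge; only the key is secret.
key = secrets.token_bytes(32)
message = b"an illustrative payload"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# An attacker who knows the algorithm but not the key cannot forge the tag.
forged = hmac.new(b"wrong key", message, hashlib.sha256).hexdigest()
print(tag != forged)
```

Nothing about the construction is hidden, and per Kerckhoff that is a feature: the design has survived public scrutiny, and the only thing a defender must protect is the key.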

Let’s apply this to port knocking. This is not a layer of obscurity, as proposed by the above post, but a layer of security. Applying Kerckhoff’s Principle, it should work even if everything is known about the port knocking algorithm except the sequence of ports being knocked.

Kerckhoff’s Principle is based on a few simple observations. Two relevant ones today are:

* things are not nearly as obscure as you think
* obscurity often impacts your friends more than your enemies

I (as an attacker) know that many sites use port knocking. Therefore, if I get no response from an IP address (which I have reason to know exists), then I’ll assume port knocking is hiding it. I know which port knocking techniques are popular. Or, sniffing at the local Starbucks, I might observe outgoing port knocking behavior, and know which sensitive systems I can attack later using the technique. Thus, though port knocking makes it look like a system doesn’t exist, this doesn’t fully hide a system from me. The security of the system should not rest on this obscurity.

Instead of an obscurity layer, port knocking is a security layer. The security it provides is that it drives up the amount of effort an attacker needs to hack the system. Some use the opposite approach, whereby the firewall in front of a subnet responds with a SYN-ACK to every SYN. This likewise increases the costs of those doing port scans (like myself, who masscans the entire Internet), by making it look as though all IP addresses and ports exist, not by hiding systems behind a layer of obscurity.

One plausible way of defeating a port knocking implementation is to simply scan all 64k ports many times. If you are looking for a sequence of TCP ports 1000, 5000, 2000, 4000, then you’ll see this sequence. You’ll see all sequences.

If the code for your implementation is open, then it’s easy for others to see this plausible flaw and point it out to you. You could fix this flaw by forcing the sequence to reset every time the first port is seen, or by also listening for bad ports (ones not part of the sequence) that likewise reset the sequence.
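As a sketch of that fix, here is a toy state machine (the port sequence is hypothetical, and this is not any real implementation) showing why the two reset rules defeat a linear scan of all 64k ports:

```python
SEQUENCE = [1000, 5000, 2000, 4000]  # hypothetical knock sequence

class KnockTracker:
    """Toy port-knocking state machine with the two resets described above."""

    def __init__(self, sequence):
        self.sequence = sequence
        self.progress = 0  # how many sequence ports we've seen, in order

    def observe(self, port):
        """Feed one observed SYN; return True when the full knock completes."""
        if port == self.sequence[0]:
            # Seeing the first port always restarts the sequence.
            self.progress = 1
        elif self.progress and port == self.sequence[self.progress]:
            self.progress += 1
        else:
            # Any out-of-sequence ("bad") port resets the state entirely.
            self.progress = 0
        if self.progress == len(self.sequence):
            self.progress = 0
            return True
        return False

tracker = KnockTracker(SEQUENCE)

# The legitimate knock opens the door:
assert [tracker.observe(p) for p in SEQUENCE] == [False, False, False, True]

# A naive ascending scan of every port never completes the knock, because
# the bad ports between the sequence members keep resetting the state:
assert not any(tracker.observe(p) for p in range(1, 65536))
```

The open-code point stands: this reset logic is obvious once the flaw is pointed out, which is precisely what public review buys you.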

If your code is closed, then your friends can’t see this problem. But your enemies are still highly motivated. They might find your code, find the compiled implementation, or may just guess ways around your possible implementation. The chance that you, some random defender, are better at this than the combined effort of all your attackers is very small. Opening things up to your friends gives you a greater edge to combat your enemies.

Thus, applying Kerckhoff’s Principle to this problem means you shouldn’t rely upon the secrecy of your port knocking algorithm, or on the fact that you are using port knocking in the first place.

The above post also discusses ssh on alternate ports. It points out that if an 0day is found in ssh, those who run the service on the default port of 22 will get hacked first, while those who run at odd ports, like 7837, will have time to patch their services before getting owned.

But this is just repeating the fallacy. It’s focusing only on the increase in difficulty to attackers, but ignoring the increase in difficulties for friends. Let’s say some new ssh 0day is announced. Everybody is going to rush to patch their servers. They are going to run tools like my masscan to quickly find everything listening on port 22, or a vuln scanner like Nessus. Everything on port 22 will quickly get patched. SSH servers running on port 7837, however, will not get patched. On the other hand, Internet-wide scans like Shodan or the 2012 Internet Census may have already found that you are running ssh on port 7837. That means attackers can quickly attack it with the latest 0day even while you, the defender, are slow to patch it.

Running ssh on alternate ports is certainly useful because, as the article points out, it dramatically cuts down on the noise that defenders have to deal with. If somebody is brute forcing passwords on port 7837, then that’s a threat worth paying more attention to than somebody doing the same on port 22. But this benefit is a separate discussion from obscurity. Hiding an ssh server on an obscure port may thus be a good idea, but not because there is value in obscurity itself.

Thus, both port knocking and putting ssh on alternate ports are valid security strategies. However, once you invoke the cliché “security by/through obscurity”, you add nothing useful to the mix.

Blackcat Games Domain Seized by UK Anti-Piracy Police

Post Syndicated from Andy original https://torrentfreak.com/blackcat-games-domain-seized-by-uk-anti-piracy-police-161019/

For the past several years, the UK’s Police Intellectual Property Crime Unit (PIPCU) has been contacting torrent, streaming, and file-hosting sites in an effort to close them down.

In the main, PIPCU has relied on its position as a government agency to add weight to its threats that one way or another, sites will either be shut down or have their operations hampered.

Many sites located overseas didn’t take the threats particularly seriously but on several occasions, PIPCU has shown that it doesn’t need to leave the UK to make an impact. That appears to be the case today with private tracker Blackcats-Games.

With around 30K members, the long-established private tracker has been a major player in the gaming torrents scene for many years but earlier today TorrentFreak received a tip that the site may have attracted the attention of the authorities.

With the site down no further news became available, but in the past few hours, fresh signs suggest that the site is indeed in some kind of legal trouble.

Results currently vary depending on ISP and region, but most visitors to the site’s Blackcats-Games.net domain are now greeted with the familiar banner that PIPCU places on sites when they’re under investigation.


TorrentFreak has confirmed that the police images appearing on the site’s main page are not stored on the front-facing server BlackCats-Games operated in Canada (OVH), but are actually being served from an IP address known to be under the control of the Police Intellectual Property Crime Unit.

The same server also provides the images for previously-seized domains including filecrop.com, mp3juices.com, immunicity.org, nutjob.eu, deejayportal.co.uk and oldskoolscouse.co.uk.


Of course, being greeted by these PIPCU images leads many users to the conclusion that the site may have been raided and/or its operators arrested. While that is yet to be confirmed by the authorities or sources close to the site, there is also a less dramatic option.

PIPCU is known to approach registrars with requests for them to suspend domains. The police argue that since they have determined that a particular site is acting illegally, registrars should comply with their requests.

While some like Canada-based EasyDNS have not caved in to the demands, others have. This has resulted in domains quickly being taken out of the control of site operators without any due process. It’s certainly possible that this could’ve happened to Blackcats-Games.net.

Furthermore, a separate micro-site (nefarious-gamer.com) on BlackCats’ server in Canada is still serving a short message, an indication that the server hasn’t been completely seized. However, there are probably other servers elsewhere, so only time will tell how they have been affected.

Until official word is received from one side or the other, the site’s users will continue to presume the worst. In 2015, PIPCU deprioritized domain suspensions, so more could be at play here.

Update: A source close to the site has informed TF that there has been an arrest but was unable to confirm who was detained.

Update2: A Reddit moderator says that the owner of Blackcats-Games has been raided and arrested, with equipment seized.


Security advisories for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/703974/rss

Debian has updated quagga (stack overrun) and tor (denial of service).

Debian-LTS has updated dwarfutils (multiple vulnerabilities), guile-2.0 (two vulnerabilities), libass (two vulnerabilities), libgd2 (two vulnerabilities), libxv (insufficient validation), and tor (denial of service).

Fedora has updated epiphany (F24: unspecified), ghostscript (F24; F23: multiple vulnerabilities), glibc-arm-linux-gnu (F24: denial of service), guile (F24: two vulnerabilities), libgit2 (F24: two vulnerabilities), openssh (F23: null pointer dereference), qemu (F24: multiple vulnerabilities), and webkitgtk4 (F24: unspecified).

Mageia has updated asterisk (denial of service), flash-player-plugin (multiple vulnerabilities), kernel (multiple vulnerabilities), and mailman (password disclosure).

Red Hat has updated java-1.8.0-openjdk (RHEL6, 7: multiple vulnerabilities), kernel (RHEL6.7: use-after-free), and mariadb-galera (RHOSP8: SQL injection/privilege escalation).

‘Bowl of Skittles’ Photographer Sues Trump for Copyright Infringement

Post Syndicated from Ernesto original https://torrentfreak.com/bowl-of-skittles-photographer-sues-trump-for-copyright-infringement-161019/

With the U.S. presidential elections just weeks away there’s plenty of mud-slinging going on from every imaginable angle.

It’s safe to say that a few lines have been crossed here and there, and according to a complaint that was filed at an Illinois District Court this week, Donald Trump’s a pirate.

The case in question was filed by UK-based photographer David Kittos, who shares a lot of his work publicly on Flickr. This includes a photo of a bowl of Skittles, which he took to experiment with a light tent and off-camera flash.

The photo was uploaded with an “all rights reserved” notice and didn’t really get any attention, until it became part of Trump’s presidential campaign in the form of the following “advertisement” tweeted by Donald Trump Jr.

“If I had a bowl of skittles and I told you just three would kill you. Would you take a handful? That’s our Syrian refugee problem.”


While the message itself has been widely debated already, few people knew that the image was used without permission. Making things even worse, the photographer in question turns out to be a refugee himself, as stated in the complaint.

“The unauthorized use of the Photograph is reprehensibly offensive to Plaintiff as he is a refugee of the Republic of Cyprus who was forced to flee his home at the age of six years old,” Kittos’ lawyer writes (pdf).

“Plaintiff never authorized Defendant Trump for President, Inc. or the other Defendants to use the Photograph as part of the Advertisement or for any other purpose,” the complaint adds.

In addition to Donald Trump Sr, the complaint also lists running mate Mike Pence and Trump Jr. as defendants. All are accused of both direct and secondary copyright infringement, by sharing the image online.

After about a week Trump’s tweet was removed, following a complaint from Kittos’ lawyer, but others continued to share it on social media and elsewhere.

In the lawsuit the photographer asks for an injunction, hoping to prevent further copyright infringements. In addition, he wants damages for copyright infringement, as well as compensation from any profits that were made in the process.

