Tag Archives: displays

Sean Hodgins’ video-playing Christmas ornament

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sean-hodgins-ornament/

Standard Christmas tree ornaments are just so boring, always hanging there doing nothing. Yawn! Lucky for us, Sean Hodgins has created an ornament that plays classic nineties Christmas adverts, because of nostalgia.

YouTube Christmas Ornament! – Raspberry Pi Project

This Christmas ornament will really take you back…

Ingredients

Sean first 3D printed a small CRT-shaped ornament resembling the family television set in The Simpsons. He then got to work on the rest of the components.

Pi Zero and electronic components — Sean Hodgins Raspberry Pi Christmas ornament

All images featured in this blog post are c/o Sean Hodgins. Thanks, Sean!

The ornament uses a Raspberry Pi Zero W, 2.2″ TFT LCD screen, Mono Amp, LiPo battery, and speaker, plus the usual peripherals. Sean purposely assembled it with jumper wires and tape, so that he can reuse the components for another project after the festive season.

Clip of PowerBoost 1000 LiPo charger — Sean Hodgins Raspberry Pi Christmas ornament

By adding header pins to a PowerBoost 1000 LiPo charger, Sean was able to connect a switch to control the Pi’s power usage. This method is handy if you want to seal your Pi in a casing that blocks access to the power leads. From there, jumper wires connect the audio amplifier, LCD screen, and PowerBoost to the Zero W.

Code

Then, with Raspbian installed to an SD card and SSH enabled on the Zero W, Sean got the screen working. The type of screen he used is driven over SPI via the FBTFT framebuffer driver, so both need to be enabled on the Pi. His next step was to set up the audio functionality with the help of an Adafruit tutorial.
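As a rough illustration (not Sean’s exact setup), enabling SPI and an FBTFT overlay for a 2.2″ TFT on Raspbian can come down to a couple of lines in /boot/config.txt. The overlay name and parameters below are assumptions that depend on the exact panel:

# /boot/config.txt additions (a sketch; overlay name and parameters depend on the panel)
dtparam=spi=on
dtoverlay=pitft22,rotate=270,speed=64000000,fps=30

After a reboot, the display appears as a second framebuffer, typically /dev/fb1.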

Clip demoing Sean Hodgins Raspberry Pi Christmas ornament

For video playback, Sean installed mplayer before writing a program to extract video content from YouTube*. Once extracted, the video files are saved to the Raspberry Pi, allowing for seamless playback on the screen.
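Sean’s own playback program isn’t shown here, but a minimal sketch of the idea, assuming the clips are already saved locally and the TFT is registered as /dev/fb1, looks something like this:

# Play a locally saved clip on the TFT framebuffer, scaled to fit the 2.2" screen
# (the filename is a hypothetical placeholder)
mplayer -vo fbdev2:/dev/fb1 -vf scale=320:240 christmas-advert-1995.mp4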

Construct

When fully assembled, the entire build fit snugly within the 3D-printed television set. And as a final touch, Sean added the cut-out lens of a rectangular magnifying glass to give the display the look of a curved CRT screen.

Clip of completed Sean Hodgins Raspberry Pi Christmas ornament in a tree

Finally, the ornament hangs perfectly on the Christmas tree, up and running and spreading nostalgic warmth.

For more information on the build, check out the Instructables tutorial. And to see all of Sean’s builds, subscribe to his YouTube channel.

Make

If you’re looking for similar projects, have a look at this tutorial by Cabe Atwell for building a Pi-powered ornament that receives and displays text messages.

Have you created Raspberry Pi tree ornaments? Maybe you’ve 3D printed some of our own designs? We’d love to see what you’re doing with a Raspberry Pi this festive season, so make sure to share your projects with us, either in the comments below or via our social media channels.

 

*At this point, I should note that we don’t support the extraction of video content from YouTube for your own use if you do not have the right permissions. However, since Sean’s device can play back any video, we think it would look great on your tree showing your own family videos from previous years. So, y’know, be good, be legal, and be festive.

The post Sean Hodgins’ video-playing Christmas ornament appeared first on Raspberry Pi.

The Raspberry Pi Christmas shopping list 2017

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/christmas-shopping-list-2017/

Looking for the perfect Christmas gift for a beloved maker in your life? Maybe you’d like to give a relative or friend a taste of the world of coding and Raspberry Pi? Whatever you’re looking for, the Raspberry Pi Christmas shopping list will point you in the right direction.

An ice-skating Raspberry Pi - The Raspberry Pi Christmas Shopping List 2017

For those getting started

Thinking about introducing someone special to the wonders of Raspberry Pi during the holidays? Although you can set up your Pi with peripherals from around your home, such as a mobile phone charger, your PC’s keyboard, and the old mouse dwelling in an office drawer, a starter kit is a nice all-in-one package for the budding coder.



Check out the starter kits from Raspberry Pi Approved Resellers such as Pimoroni, The Pi Hut, ModMyPi, Adafruit, CanaKit…the list is pretty long. Our products page will direct you to your closest reseller, or you can head to element14 to pick up the official Raspberry Pi Starter Kit.



You can also buy the Raspberry Pi Press’s brand-new Raspberry Pi Beginner’s Book, which includes a Raspberry Pi Zero W, a case, a ready-made SD card, and adapter cables.

Once you’ve presented a lucky person with their first Raspberry Pi, it’s time for them to spread their maker wings and learn some new skills.

MagPi Essentials books - The Raspberry Pi Christmas Shopping List 2017

To help them along, you could pick your favourite from among the Official Projects Book volume 3, The MagPi Essentials guides, and the brand-new third edition of Carrie Anne Philbin’s Adventures in Raspberry Pi. (She is super excited about this new edition!)

And you can always add a link to our free resources on the gift tag.

For the maker in your life

If you’re looking for something for a confident digital maker, you can’t go wrong with adding to their arsenal of electric and electronic bits and bobs that are no doubt cluttering drawers and boxes throughout their house.



Components such as servomotors, displays, and sensors are staples of the maker world. And when it comes to jumper wires, buttons, and LEDs, one can never have enough.



You could also consider getting your person a soldering iron, some helping hands, or small tools such as a Dremel or screwdriver set.

And to make their life a little less messy, pop it all inside a Really Useful Box…because they’re really useful.



For kit makers

While some people like to dive into making head-first and to build whatever comes to mind, others enjoy working with kits.



The Naturebytes kit allows you to record the animal visitors of your garden with the help of a camera and a motion sensor. Footage of your local badgers, birds, deer, and more will be saved to an SD card, or tweeted or emailed to you if the kit is within range of WiFi.

Coretec Tiny 4WD - The Raspberry Pi Christmas Shopping List 2017

Coretec’s Tiny 4WD is a kit for assembling a Pi Zero–powered remote-controlled robot at home. Not only is the robot adorable, building it is also a great introduction to motors and wireless control.



Bare Conductive’s Touch Board Pro Kit offers everything you need to create interactive electronics projects using conductive paint.

Pi Hut Arcade Kit - The Raspberry Pi Christmas Shopping List 2017

Finally, why not help your favourite maker create their own gaming arcade using the Arcade Building Kit from The Pi Hut?

For the reader

For those who like to curl up with a good read, or spend too much of their day on public transport, a book or magazine subscription is the perfect treat.

For makers, hackers, and those interested in new technologies, our brand-new HackSpace magazine and the ever popular community magazine The MagPi are ideal. Both are available via a physical or digital subscription, and new subscribers to The MagPi also receive a free Raspberry Pi Zero W plus case.

Cover of CoderDojo Nano Make your own game

Marc Scott Beginner's Guide to Coding Book

You can also check out other publications from the Raspberry Pi family, including CoderDojo’s new CoderDojo Nano: Make Your Own Game, Eben Upton and Gareth Halfacree’s Raspberry Pi User Guide, and Marc Scott’s A Beginner’s Guide to Coding. And have I mentioned Carrie Anne’s Adventures in Raspberry Pi yet?

Stocking fillers for everyone

Looking for something small to keep your loved ones occupied on Christmas morning? Or do you have to buy a Secret Santa gift for the office tech? Here are some wonderful stocking fillers to fill your boots with this season.

Pi Hut 3D Christmas Tree - The Raspberry Pi Christmas Shopping List 2017

The Pi Hut 3D Xmas Tree: available as both a pre-soldered and a DIY version, this gadget will work with any 40-pin Raspberry Pi and allows you to create your own mini light show.



Google AIY Voice kit: build your own home assistant using a Raspberry Pi, the MagPi Essentials guide, and this brand-new kit. “Google, play Mariah Carey again…”



Pimoroni’s Raspberry Pi Zero W Project Kits offer everything you need, including the Pi, to make your own time-lapse cameras, music players, and more.



The official Raspberry Pi Sense HAT, Camera Module, and cases for the Pi 3 and Pi Zero will complete the collection of any Raspberry Pi owner, while also opening up exciting project opportunities.

STEAM gifts that everyone will love

Awesome Astronauts | Building LEGO’s Women of NASA!

LEGO Ideas brought out this amazing ‘Women of NASA’ set, and I thought it would be fun to build, play and learn from these inspiring women! First up, let’s discover a little more about Sally Ride and Mae Jemison, two AWESOME ASTRONAUTS!

Treat the kids, and big kids, in your life to the newest LEGO Ideas set, the Women of NASA — starring Nancy Grace Roman, Margaret Hamilton, Sally Ride, and Mae Jemison!



Explore the world of wearables with Pimoroni’s sewable, hackable, wearable, adorable Bearables kits.



Add lights and motors to paper creations with the Activating Origami Kit, available from The Pi Hut.




We all loved Hidden Figures, and the STEAM enthusiast you know will too. The film’s available on DVD, and you can also buy the original book, along with other fascinating non-fiction such as Rebecca Skloot’s The Immortal Life of Henrietta Lacks, Rachel Ignotofsky’s Women in Science, and Sydney Padua’s (mostly true) The Thrilling Adventures of Lovelace and Babbage.

Have we missed anything?

With so many amazing kits, HATs, and books available from members of the Raspberry Pi community, it’s hard to pick only a few. Have you found something splendid for the maker in your life? Maybe you’ve created your own kit that uses the Raspberry Pi? Share your favourites with us in the comments below or via our social media accounts.

The post The Raspberry Pi Christmas shopping list 2017 appeared first on Raspberry Pi.

AWS Cloud9 – Cloud Developer Environments

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-cloud9-cloud-developer-environments/

One of the first things you learn when you start programming is that, just like any craftsperson, your tools matter. Notepad.exe isn’t going to cut it. A powerful editor and testing pipeline supercharge your productivity. I still remember learning to use Vim for the first time and being able to zip around systems and complex programs. Do you remember how hard it was to set up all your compilers and dependencies on a new machine? How many cycles have you wasted matching versions, tinkering with configs, and then writing documentation to onboard a new developer to a project?

Today we’re launching AWS Cloud9, an Integrated Development Environment (IDE) for writing, running, and debugging code, all from your web browser. Cloud9 comes prepackaged with essential tools for many popular programming languages (JavaScript, Python, PHP, etc.) so you don’t have to tinker with installing various compilers and toolchains. Cloud9 also provides a seamless experience for working with serverless applications, allowing you to quickly switch between local and remote testing or debugging. Based on the popular open source Ace Editor and c9.io IDE (which we acquired last year), AWS Cloud9 is designed to make collaborative cloud development easy, with extremely powerful pair programming features. There are more features than I could ever cover in this post, so to give a quick breakdown I’ll split the IDE into three components: the editor, the AWS integrations, and the collaboration tools.

Editing


The Ace Editor at the core of Cloud9 is what lets you write code quickly, easily, and beautifully. It follows a UNIX philosophy of doing one thing and doing it well: writing code.

It has all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections, and it also has a few unique features I want to highlight. First of all, it’s fast, even for large (100,000+ line) files. There’s no lag or other issues while typing. It has over two dozen themes built-in (solarized!) and you can bring all of your favorite themes from Sublime Text or TextMate as well. It has built-in support for 40+ language modes and customizable run configurations for your projects. Most importantly though, it has Vim mode (or emacs if your fingers work that way). It also has a keybinding editor that allows you to bend the editor to your will.

The editor supports powerful keyboard navigation and commands (similar to Sublime Text or vim plugins like ctrlp). On a Mac, with ⌘+P you can open any file in your environment with fuzzy search. With ⌘+. you can open up the command pane, which allows you to invoke any of the editor commands by typing the name. It also helpfully displays the keybindings for a command in the pane; for instance, to open a terminal you can press ⌥+T. Oh, did I mention there’s a terminal? It ships with the AWS CLI preconfigured for access to your resources.

The environment also comes with pre-installed debugging tools for many popular languages – but you’re not limited to what’s already installed. It’s easy to add in new programs and define new run configurations.

The editor is just one, admittedly important, component in an IDE though. I want to show you some other compelling features.

AWS Integrations

The AWS Cloud9 IDE is the first IDE I’ve used that is truly “cloud native”. The service is provided at no additional charge; you are only charged for the underlying compute and storage resources. When you create an environment, you’re prompted either for an instance type and an auto-hibernate time, or for SSH access to a machine of your choice.

If you’re running in AWS, the auto-hibernate feature will stop your instance shortly after you stop using your IDE. This can be a huge cost savings over running a more permanent developer desktop. You can also launch it within a VPC to give it secure access to your development resources. If you want to run Cloud9 outside of AWS, or on an existing instance, you can provide SSH access to the service which it will use to create an environment on the external machine. Your environment is provisioned with automatic and secure access to your AWS account so you don’t have to worry about copying credentials around. Let me say that again: you can run this anywhere.
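For instance, you can create an EC2-backed environment from the AWS CLI. This is a sketch, with the environment name, instance type, and hibernate timeout chosen purely for illustration:

# Create a Cloud9 environment on a t2.micro that auto-hibernates after 30 idle minutes
aws cloud9 create-environment-ec2 \
    --name my-demo-environment \
    --instance-type t2.micro \
    --automatic-stop-time-minutes 30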

Serverless Development with AWS Cloud9

I spend a lot of time on Twitch developing serverless applications. I have hundreds of lambda functions and APIs deployed. Cloud9 makes working with every single one of these functions delightful. Let me show you how it works.


If you look in the top right side of the editor you’ll see an AWS Resources tab. Opening this you can see all of the lambda functions in your region (you can see functions in other regions by adjusting your region preferences in the AWS preference pane).

You can import these remote functions to your local workspace just by double-clicking them. This allows you to edit, test, and debug your serverless applications all locally. You can create new applications and functions easily as well. If you click the Lambda icon in the top right of the pane you’ll be prompted to create a new lambda function and Cloud9 will automatically create a Serverless Application Model template for you as well. The IDE ships with support for the popular SAM local tool pre-installed. This is what I use in most of my local testing and serverless development. Since you have a terminal, it’s easy to install additional tools and use other serverless frameworks.
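To give a flavour of that workflow, here is a sketch assuming a function named HelloFunction in the generated SAM template and a sample event saved as event.json:

# Invoke the function locally against a sample event using SAM Local
sam local invoke "HelloFunction" -e event.json

# Or stand up a local API Gateway endpoint for the whole template
sam local start-api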

 

Launching an Environment from AWS CodeStar

With AWS CodeStar you can easily provision an end-to-end continuous delivery toolchain for development on AWS. CodeStar provides a unified experience for building, testing, deploying, and managing applications using the AWS CodeCommit, CodeBuild, CodePipeline, and CodeDeploy suite of services. Now, with a few simple clicks, you can provision a Cloud9 environment to develop your application. Your environment will be pre-configured with the code for your CodeStar application already checked out and Git credentials already configured.

You can easily share this environment with your coworkers which leads me to another extremely useful set of features.

Collaboration

One of the many things that sets AWS Cloud9 apart from other editors are the rich collaboration tools. You can invite an IAM user to your environment with a few clicks.

You can see what files they’re working on, where their cursors are, and even share a terminal. The chat feature is useful as well.

Things to Know

  • There are no additional charges for this service beyond the underlying compute and storage.
  • c9.io continues to run for existing users. You can continue to use all the features of c9.io and add new team members if you have a team account. In the future, we will provide tools for easy migration of your c9.io workspaces to AWS Cloud9.
  • AWS Cloud9 is available in the US West (Oregon), US East (Ohio), US East (N.Virginia), EU (Ireland), and Asia Pacific (Singapore) regions.

I can’t wait to see what you build with AWS Cloud9!

Randall

AWS Systems Manager – A Unified Interface for Managing Your Cloud and Hybrid Resources

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-systems-manager/

AWS Systems Manager is a new way to manage your cloud and hybrid IT environments. AWS Systems Manager provides a unified user interface that simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. This service is absolutely packed full of features. It defines a new experience around grouping, visualizing, and reacting to problems using features from products like Amazon EC2 Systems Manager (SSM) to enable rich operations across your resources.

As I said above, there are a lot of powerful features in this service and we won’t be able to dive deep on all of them, but it’s easy to go to the console and get started with any of the tools.

Resource Groupings

Resource Groups allow you to create logical groupings of most resources that support tagging, such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Simple Storage Service (S3) buckets, Elastic Load Balancing load balancers, Amazon Relational Database Service (RDS) instances, Amazon Virtual Private Cloud, Amazon Kinesis streams, Amazon Route 53 zones, and more. Previously, you could use the AWS Console to define resource groupings, but AWS Systems Manager provides this new resource group experience via a new console and API. These groupings are a fundamental building block of Systems Manager in that they are frequently the target of various operations you may want to perform, such as compliance management, software inventories, patching, and other automations.

You start by defining a group based on tag filters. From there you can view all of the resources in a centralized console. You would typically use these groupings to differentiate between applications, application layers, and environments like production or dev – but you can make your own rules about how to use them as well. If you imagine a typical three-tier web app, you might have a few EC2 instances, an ELB, a few S3 buckets, and an RDS instance. You can define a grouping for that application and work with all of those different resources simultaneously.
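Under the hood a grouping is just a tag query. As a sketch, creating a similar group with the new Resource Groups API might look like this (the group name and tag values are illustrative):

# Group every supported resource tagged Application=WebApp into one logical unit
aws resource-groups create-group \
    --name WebApp-Prod \
    --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"Application\",\"Values\":[\"WebApp\"]}]}"}'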

Insights

AWS Systems Manager automatically aggregates and displays operational data for each resource group through a dashboard. You no longer need to navigate through multiple AWS consoles to view all of your operational data. You can easily integrate your existing Amazon CloudWatch dashboards, AWS Config rules, AWS CloudTrail trails, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts. You can also easily view your software inventories across your fleet. AWS Systems Manager also provides a compliance dashboard allowing you to see the state of various security controls and patching operations across your fleets.

Acting on Insights

Building on the success of EC2 Systems Manager (SSM), AWS Systems Manager takes all of the features of SSM and provides a central place to access them. These are all the same experiences you would have through SSM, with a more accessible console and centralized interface. You can use the resource groups you’ve defined in Systems Manager to visualize and act on groups of resources.

Automation


Automations allow you to define common IT tasks as a JSON document that specifies a list of steps. You can also use community-published documents. These documents can be executed through the Console, CLIs, SDKs, scheduled maintenance windows, or triggered based on changes in your infrastructure through CloudWatch Events. You can track and log the execution of each step in the documents and prompt for additional approvals. Automation also allows you to incrementally roll out changes and automatically halt when errors occur. You can start executing an automation directly on a resource group, and it will apply itself to the resources that it understands within the group.
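As an illustration, kicking off one of the AWS-published automation documents from the CLI might look like this (the instance ID is a placeholder):

# Restart an EC2 instance using the AWS-RestartEC2Instance automation document
aws ssm start-automation-execution \
    --document-name "AWS-RestartEC2Instance" \
    --parameters "InstanceId=i-0123456789abcdef0"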

Run Command

Run Command is a superior alternative to enabling SSH on your instances. It provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for SSH bastions or remote PowerShell. It has granular IAM permissions that allow you to restrict which roles or users can run certain commands.
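For example, here is a sketch (the tag key and command are chosen for illustration) that runs a shell command across every instance carrying a given tag:

# Run a shell command on all instances tagged Environment=Dev
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Environment,Values=Dev" \
    --parameters 'commands=["uptime"]'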

Patch Manager, Maintenance Windows, and State Manager

I’ve written about Patch Manager before and if you manage fleets of Windows and Linux instances it’s a great way to maintain a common baseline of security across your fleet.

Maintenance windows allow you to schedule instance maintenance and other disruptive tasks for a specific time window.

State Manager allows you to control various server configuration details like anti-virus definitions, firewall settings, and more. You can define policies in the console or run existing scripts, PowerShell modules, or even Ansible playbooks directly from S3 or GitHub. You can query State Manager at any time to view the status of your instance configurations.

Things To Know

There’s some interesting terminology here. We haven’t done the best job of naming things in the past so let’s take a moment to clarify. EC2 Systems Manager (sometimes called SSM) is what you used before today. You can still invoke aws ssm commands. However, AWS Systems Manager builds on and enhances many of the tools provided by EC2 Systems Manager and allows those same tools to be applied to more than just EC2. When you see the phrase “Systems Manager” in the future you should think of AWS Systems Manager and not EC2 Systems Manager.

AWS Systems Manager with all of this useful functionality is provided at no additional charge. It is immediately available in all public AWS regions.

The best part about these services is that even with their tight integrations each one is designed to be used in isolation as well. If you only need one component of these services it’s simple to get started with only that component.

There’s a lot more than I could ever document in this post so I encourage you all to jump into the console and documentation to figure out where you can start using AWS Systems Manager.

Randall

The Pirate Bay Has Trouble Keeping Afloat

Post Syndicated from Ernesto original https://torrentfreak.com/the-pirate-bay-suffers-downtime-tor-domain-is-up-171128/

The crew of The Pirate Bay has had a hard time keeping the ship afloat over the past few days, something which has caused considerable downtime.

For a lot of people, the site still displays a Cloudflare error message across the entire site, and many proxies are affected by the downtime as well.

Not everyone is affected equally though. In some regions, the site loads just fine. That said, there are reports that, even then, uploads are broken and searches turn up blank.

TorrentFreak reached out to the TPB team but we have yet to hear more about the issue. Judging from past experience, however, it’s likely down to a small technical issue with part of the infrastructure that needs fixing.

Pirate Bay down

The Pirate Bay has had quite a few stints of downtime in recent months. The popular torrent site usually returns after several hours, although it can take longer on occasion.

But there’s some good news for those who desperately need to access the notorious torrent site.

TPB is still accessible through some proxies and its .onion address on the Tor network, via the popular Tor Browser, for example. The Tor traffic goes through a separate connection and works just fine.

New uploads are coming through as well, although these appear to be mostly from upload bots.

As always, the site’s admins and moderators are asking people to refrain from panicking while waiting patiently for the storm to subside, but seasoned TPB users will probably know the drill by now…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Easier Certificate Validation Using DNS with AWS Certificate Manager

Post Syndicated from Todd Cignetti original https://aws.amazon.com/blogs/security/easier-certificate-validation-using-dns-with-aws-certificate-manager/

Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates are used to secure network communications and establish the identity of websites over the internet. Before issuing a certificate for your website, Amazon must validate that you control the domain name for your site. You can now use AWS Certificate Manager (ACM) Domain Name System (DNS) validation to establish that you control a domain name when requesting SSL/TLS certificates with ACM. Previously ACM supported only email validation, which required the domain owner to receive an email for each certificate request and validate the information in the request before approving it.

With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. After you have configured the CNAME record, ACM can automatically renew DNS-validated certificates before they expire, as long as the DNS record has not changed. To make it even easier to validate your domain, ACM can update your DNS configuration for you if you manage your DNS records with Amazon Route 53. In this blog post, I demonstrate how to request a certificate for a website by using DNS validation. To perform the equivalent steps using the AWS CLI or AWS APIs and SDKs, see AWS Certificate Manager in the AWS CLI Reference and the ACM API Reference.
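As a quick taste of the CLI equivalent (the domain name is illustrative), requesting a DNS-validated certificate looks like this:

# Request a certificate using DNS validation, then look up the CNAME to create
aws acm request-certificate --domain-name www.example.com --validation-method DNS
aws acm describe-certificate --certificate-arn <arn-returned-by-previous-command>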

Requesting an SSL/TLS certificate by using DNS validation

In this section, I walk you through the four steps required to obtain an SSL/TLS certificate through ACM to identify your site over the internet. SSL/TLS provides encryption for sensitive data in transit and authentication by using certificates to establish the identity of your site and secure connections between browsers and applications and your site. DNS validation and SSL/TLS certificates provisioned through ACM are free.

Step 1: Request a certificate

To get started, sign in to the AWS Management Console and navigate to the ACM console. Choose Get started to request a certificate.

Screenshot of getting started in the ACM console

If you previously managed certificates in ACM, you will instead see a table with your certificates and a button to request a new certificate. Choose Request a certificate to request a new certificate.

Screenshot of choosing "Request a certificate"

Type the name of your domain in the Domain name box and choose Next. In this example, I type www.example.com. You must use a domain name that you control. Requesting certificates for domains that you don’t control violates the AWS Service Terms.

Screenshot of entering a domain name

Step 2: Select a validation method

With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. Choose DNS validation, and then choose Review.

Screenshot of selecting validation method

Step 3: Review your request

Review your request and choose Confirm and request to request the certificate.

Screenshot of reviewing request and confirming it

Step 4: Submit your request

After a brief delay while ACM populates your domain validation information, choose the down arrow (highlighted in the following screenshot) to display all the validation information for your domain.

Screenshot of validation information

ACM displays the CNAME record you must add to your DNS configuration to validate that you control the domain name in your certificate request. If you use a DNS provider other than Route 53 or if you use a different AWS account to manage DNS records in Route 53, copy the DNS CNAME information from the validation information, or export it to a file (choose Export DNS configuration to a file) and write it to your DNS configuration. For information about how to add or modify DNS records, check with your DNS provider. For more information about using DNS with Route 53 DNS, see the Route 53 documentation.
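The validation record itself is a single CNAME. Its shape is roughly as follows (the token values here are made up for illustration):

_3639ac514e785e898d2646601fa951d5.www.example.com.  CNAME  _98d2646601fa951d5ae3639ac514e785.acm-validations.aws.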

If you manage DNS records for your domain with Route 53 in the same AWS account, choose Create record in Route 53 to have ACM update your DNS configuration for you.

After updating your DNS configuration, choose Continue to return to the ACM table view.

ACM then displays a table that includes all your certificates. The certificate you requested is displayed so that you can see the status of your request. After you write the DNS record or have ACM write the record for you, it typically takes DNS 30 minutes to propagate the record, and it might take several hours for Amazon to validate it and issue the certificate. During this time, ACM shows the Validation status as Pending validation. After ACM validates the domain name, ACM updates the Validation status to Success. After the certificate is issued, the certificate status is updated to Issued. If ACM cannot validate your DNS record and issue the certificate after 72 hours, the request times out, and ACM displays a Timed out validation status. To recover, you must make a new request. Refer to the Troubleshooting Section of the ACM User Guide for instructions about troubleshooting validation or issuance failures.

Screenshot of a certificate issued and validation successful

You now have an ACM certificate that you can use to secure your application or website. For information about how to deploy certificates with other AWS services, see the documentation for Amazon CloudFront, Amazon API Gateway, Application Load Balancers, and Classic Load Balancers. Note that your certificate must be in the US East (N. Virginia) Region to use the certificate with CloudFront.

ACM automatically renews certificates that are deployed and in use with other AWS services as long as the CNAME record remains in your DNS configuration. To learn more about ACM DNS validation, see the ACM FAQs and the ACM documentation.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the ACM forum or contact AWS Support.

– Todd

Using AWS CodeCommit Pull Requests to request code reviews and discuss code

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/devops/using-aws-codecommit-pull-requests-to-request-code-reviews-and-discuss-code/

Thank you to Michael Edge, Senior Cloud Architect, for a great blog on CodeCommit pull requests.

~~~~~~~

AWS CodeCommit is a fully managed service for securely hosting private Git repositories. CodeCommit now supports pull requests, which allows repository users to review, comment upon, and interactively iterate on code changes. Used as a collaboration tool between team members, pull requests help you to review potential changes to a CodeCommit repository before merging those changes into the repository. Each pull request goes through a simple lifecycle, as follows:

  • The new features to be merged are added as one or more commits to a feature branch. The commits are not merged into the destination branch.
  • The pull request is created, usually from the difference between two branches.
  • Team members review and comment on the pull request. The pull request might be updated with additional commits that contain changes made in response to comments, or include changes made to the destination branch.
  • Once team members are happy with the pull request, it is merged into the destination branch. The commits are applied to the destination branch in the same order they were added to the pull request.

Commenting is an integral part of the pull request process, and is used to collaborate between the developers and the reviewer. Reviewers add comments and questions to a pull request during the review process, and developers respond to these with explanations. Pull request comments can be added to the overall pull request, a file within the pull request, or a line within a file.

To make the comments more useful, sign in to the AWS Management Console as an AWS Identity and Access Management (IAM) user. The username will then be associated with the comment, indicating the owner of the comment. Pull request comments are a great quality improvement tool as they allow the entire development team visibility into what reviewers are looking for in the code. They also serve as a record of the discussion between team members at a point in time, and shouldn’t be deleted.

AWS CodeCommit is also introducing the ability to add comments to a commit, another useful collaboration feature that allows team members to discuss code changed as part of a commit. This helps you discuss changes made in a repository, including why the changes were made, whether further changes are necessary, or whether changes should be merged. As is the case with pull request comments, you can comment on an overall commit, on a file within a commit, or on a specific line or change within a file, and other repository users can respond to your comments. Comments are not restricted to commits; they can also be used to comment on the differences between two branches, or between two tags. Commit comments are separate from pull request comments, i.e. you will not see commit comments when reviewing a pull request – you will only see pull request comments.

A pull request example

Let’s get started by running through an example. We’ll take a typical pull request scenario and look at how we’d use CodeCommit and the AWS Management Console for each of the steps.

To try out this scenario, you’ll need:

  • An AWS CodeCommit repository with some sample code in the master branch. We’ve provided sample code below.
  • Two AWS Identity and Access Management (IAM) users, both with the AWSCodeCommitPowerUser managed policy applied to them.
  • Git installed on your local computer, and access configured for AWS CodeCommit.
  • A clone of the AWS CodeCommit repository on your local computer.

In the course of this example, you’ll sign in to the AWS CodeCommit console as one IAM user to create the pull request, and as the other IAM user to review the pull request. To learn more about how to set up your IAM users and how to connect to AWS CodeCommit with Git, see the following topics:

  • Information on creating an IAM user with AWS Management Console access.
  • Instructions on how to access CodeCommit using Git.
  • If you’d like to use the same ‘hello world’ application as used in this article, here is the source code:
package com.amazon.helloworld;

public class Main {
	public static void main(String[] args) {

		System.out.println("Hello, world");
	}
}

The scenario below uses the us-east-2 region.

Creating the branches

Before we jump in and create a pull request, we’ll need at least two branches. In this example, we’ll follow a branching strategy similar to the one described in GitFlow. We’ll create a new branch for our feature from the main development branch (the default branch). We’ll develop the feature in the feature branch. Once we’ve written and tested the code for the new feature in that branch, we’ll create a pull request that contains the differences between the feature branch and the main development branch. Our team lead (the second IAM user) will review the changes in the pull request. Once the changes have been reviewed, the feature branch will be merged into the development branch.

Figure 1: Pull request link

Sign in to the AWS CodeCommit console with the IAM user you want to use as the developer. You can use an existing repository or you can go ahead and create a new one. We won’t be merging any changes to the master branch of your repository, so it’s safe to use an existing repository for this example. You’ll find the Pull requests link has been added just above the Commits link (see Figure 1), and below Commits you’ll find the Branches link. Click Branches and create a new branch called ‘develop’, branched from the ‘master’ branch. Then create a new branch called ‘feature1’, branched from the ‘develop’ branch. You’ll end up with three branches, as you can see in Figure 2. (Your repository might contain other branches in addition to the three shown in the figure).

Figure 2: Create a feature branch

If you haven’t cloned your repo yet, go to the Code link in the CodeCommit console and click the Connect button. Follow the instructions to clone your repo (detailed instructions are here). Open a terminal or command line and paste the git clone command supplied in the Connect instructions for your repository. The example below shows cloning a repository named codecommit-demo:

git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo

If you’ve previously cloned the repo you’ll need to update your local repo with the branches you created. Open a terminal or command line and make sure you’re in the root directory of your repo, then run the following command:

git remote update origin

You’ll see your new branches pulled down to your local repository.

$ git remote update origin
Fetching origin
From https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo
 * [new branch]      develop    -> origin/develop
 * [new branch]      feature1   -> origin/feature1

You can also see your new branches by typing:

git branch --all

$ git branch --all
* master
  remotes/origin/develop
  remotes/origin/feature1
  remotes/origin/master

Now we’ll make a change to the ‘feature1’ branch. Open a terminal or command line and check out the feature1 branch by running the following command:

git checkout feature1

$ git checkout feature1
Branch feature1 set up to track remote branch feature1 from origin.
Switched to a new branch 'feature1'

Make code changes

Edit a file in the repo using your favorite editor and save the changes. Commit your changes to the local repository, and push your changes to CodeCommit. For example:

git commit -am 'added new feature'
git push origin feature1

$ git commit -am 'added new feature'
[feature1 8f6cb28] added new feature
1 file changed, 1 insertion(+), 1 deletion(-)

$ git push origin feature1
Counting objects: 9, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (9/9), 617 bytes | 617.00 KiB/s, done.
Total 9 (delta 2), reused 0 (delta 0)
To https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo
   2774a53..8f6cb28  feature1 -> feature1

Creating the pull request

Now we have a ‘feature1’ branch that differs from the ‘develop’ branch. At this point we want to merge our changes into the ‘develop’ branch. We’ll create a pull request to notify our team members to review our changes and check whether they are ready for a merge.

In the AWS CodeCommit console, click Pull requests. Click Create pull request. On the next page select ‘develop’ as the destination branch and ‘feature1’ as the source branch. Click Compare. CodeCommit will check for merge conflicts and highlight whether the branches can be automatically merged using the fast-forward option, or whether a manual merge is necessary. A pull request can be created in both situations.

Figure 3: Create a pull request

After comparing the two branches, the CodeCommit console displays the information you’ll need in order to create the pull request. In the ‘Details’ section, the ‘Title’ for the pull request is mandatory, and you may optionally provide comments to your reviewers to explain the code change you have made and what you’d like them to review. In the ‘Notifications’ section, there is an option to set up notifications to notify subscribers of changes to your pull request. Notifications will be sent on creation of the pull request as well as for any pull request updates or comments. And finally, you can review the changes that make up this pull request. This includes both the individual commits (a pull request can contain one or more commits, available in the Commits tab) as well as the changes made to each file, i.e. the diff between the two branches referenced by the pull request, available in the Changes tab. After you have reviewed this information and added a title for your pull request, click the Create button. You will see a confirmation screen, as shown in Figure 4, indicating that your pull request has been successfully created, and can be merged without conflicts into the ‘develop’ branch.

Figure 4: Pull request confirmation page
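If you prefer the command line, a pull request can also be created with the CodeCommit CLI. Here is a sketch using this example’s repository and branch names:

# Create a pull request from feature1 into develop
aws codecommit create-pull-request \
    --title "New feature" \
    --description "Please review the new feature" \
    --targets repositoryName=codecommit-demo,sourceReference=feature1,destinationReference=develop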

Reviewing the pull request

Now let’s view the pull request from the perspective of the team lead. If you set up notifications for this CodeCommit repository, creating the pull request would have sent an email notification to the team lead, and he/she can use the links in the email to navigate directly to the pull request. In this example, sign in to the AWS CodeCommit console as the IAM user you’re using as the team lead, and click Pull requests. You will see the same information you did during creation of the pull request, plus a record of activity related to the pull request, as you can see in Figure 5.

Figure 5: Team lead reviewing the pull request

Commenting on the pull request

You now perform a thorough review of the changes and make a number of comments using the new pull request comment feature. To gain an overall perspective on the pull request, you might first go to the Commits tab and review how many commits are included in this pull request. Next, you might visit the Changes tab to review the changes, which displays the differences between the feature branch code and the develop branch code. At this point, you can add comments to the pull request as you work through each of the changes. Let’s go ahead and review the pull request. During the review, you can add review comments at three levels:

  • The overall pull request
  • A file within the pull request
  • An individual line within a file

The overall pull request
In the Changes tab near the bottom of the page you’ll see a ‘Comments on changes’ box. We’ll add comments here related to the overall pull request. Add your comments as shown in Figure 6 and click the Save button.

Figure 6: Pull request comment

A specific file in the pull request
Hovering your mouse over a filename in the Changes tab will cause a blue ‘comments’ icon to appear to the left of the filename. Clicking the icon will allow you to enter comments specific to this file, as in the example in Figure 7. Go ahead and add comments for one of the files changed by the developer. Click the Save button to save your comment.

Figure 7: File comment

A specific line in a file in the pull request
A blue ‘comments’ icon will appear as you hover over individual lines within each file in the pull request, allowing you to create comments against lines that have been added, removed or are unchanged. In Figure 8, you add comments against a line that has been added to the source code, encouraging the developer to review the naming standards. Go ahead and add line comments for one of the files changed by the developer. Click the Save button to save your comment.

Figure 8: Line comment

A pull request that has been commented on at all three levels will look similar to Figure 9. The pull request comment is shown expanded in the ‘Comments on changes’ section, while the comments at file and line level are shown collapsed. A ‘comment’ icon indicates that comments exist at file and line level. Clicking the icon will expand and show the comment. Since you are expecting the developer to make further changes based on your comments, you won’t merge the pull request at this stage, but will leave it open awaiting feedback. Each comment you made results in a notification being sent to the developer, who can respond to the comments. This is great for remote working, where developers and team lead may be in different time zones.

Figure 9: Fully commented pull request

Adding a little complexity

A typical development team is going to be creating pull requests on a regular basis. It’s highly likely that the team lead will merge other pull requests into the ‘develop’ branch while pull requests on feature branches are in the review stage. This may result in a change to the ‘Mergable’ status of a pull request. Let’s add this scenario into the mix and check out how a developer will handle this.

To test this scenario, we could create a new pull request and ask the team lead to merge this to the ‘develop’ branch. But for the sake of simplicity we’ll take a shortcut. Clone your CodeCommit repo to a new folder, switch to the ‘develop’ branch, and make a change to one of the same files that were changed in your pull request. Make sure you change a line of code that was also changed in the pull request. Commit and push this back to CodeCommit. Since you’ve just changed a line of code in the ‘develop’ branch that has also been changed in the ‘feature1’ branch, the ‘feature1’ branch cannot be cleanly merged into the ‘develop’ branch. Your developer will need to resolve this merge conflict.

A developer reviewing the pull request would see the pull request now looks similar to Figure 10, with a ‘Resolve conflicts’ status rather than the ‘Mergable’ status it had previously (see Figure 5).

Figure 10: Pull request with merge conflicts

Reviewing the review comments

Once the team lead has completed his review, the developer will review the comments and make the suggested changes. As a developer, you’ll see the list of review comments made by the team lead in the pull request Activity tab, as shown in Figure 11. The Activity tab shows the history of the pull request, including commits and comments. You can reply to the review comments directly from the Activity tab, by clicking the Reply button, or you can do this from the Changes tab. The Changes tab shows the comments for the latest commit, as comments on previous commits may be associated with lines that have changed or been removed in the current commit. Comments for previous commits are available to view and reply to in the Activity tab.

In the Activity tab, use the shortcut link (which looks like this </>) to move quickly to the source code associated with the comment. In this example, you will make further changes to the source code to address the pull request review comments, so let’s go ahead and do this now. But first, you will need to resolve the ‘Resolve conflicts’ status.

Figure 11: Pull request activity

Resolving the ‘Resolve conflicts’ status

The ‘Resolve conflicts’ status indicates there is a merge conflict between the ‘develop’ branch and the ‘feature1’ branch. This will require manual intervention to restore the pull request back to the ‘Mergable’ state. We will resolve this conflict next.

Open a terminal or command line and check out the develop branch by running the following command:

git checkout develop

$ git checkout develop
Switched to branch 'develop'
Your branch is up-to-date with 'origin/develop'.

To incorporate the changes the team lead made to the ‘develop’ branch, merge the remote ‘develop’ branch with your local copy:

git pull

$ git pull
remote: Counting objects: 9, done.
Unpacking objects: 100% (9/9), done.
From https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo
   af13c82..7b36f52  develop    -> origin/develop
Updating af13c82..7b36f52
Fast-forward
 src/main/java/com/amazon/helloworld/Main.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Then checkout the ‘feature1’ branch:

git checkout feature1

$ git checkout feature1
Switched to branch 'feature1'
Your branch is up-to-date with 'origin/feature1'.

Now merge the changes from the ‘develop’ branch into your ‘feature1’ branch:

git merge develop

$ git merge develop
Auto-merging src/main/java/com/amazon/helloworld/Main.java
CONFLICT (content): Merge conflict in src/main/java/com/amazon/helloworld/Main.java
Automatic merge failed; fix conflicts and then commit the result.

Yes, this fails. The file Main.java has been changed in both branches, resulting in a merge conflict that can’t be resolved automatically. However, Main.java will now contain markers that indicate where the conflicting code is, and you can use these to resolve the issues manually. Edit Main.java using your favorite IDE, and you’ll see it looks something like this:

package com.amazon.helloworld;

import java.util.*;

/**
 * This class prints a hello world message
 */

public class Main {
   public static void main(String[] args) {

<<<<<<< HEAD
        Date todaysdate = Calendar.getInstance().getTime();

        System.out.println("Hello, earthling. Today's date is: " + todaysdate);
=======
      System.out.println("Hello, earth");
>>>>>>> develop
   }
}

The code between HEAD and ‘===’ is the code the developer added in the ‘feature1’ branch (HEAD represents ‘feature1’ because this is the current checked out branch). The code between ‘===’ and ‘>>> develop’ is the code added to the ‘develop’ branch by the team lead. We’ll resolve the conflict by manually merging both changes, resulting in an updated Main.java:

package com.amazon.helloworld;

import java.util.*;

/**
 * This class prints a hello world message
 */

public class Main {
   public static void main(String[] args) {

        Date todaysdate = Calendar.getInstance().getTime();

        System.out.println("Hello, earth. Today's date is: " + todaysdate);
   }
}

After saving the change you can add and commit it to your local repo:

git add src/
git commit -m 'fixed merge conflict by merging changes'

Fixing issues raised by the reviewer

Now you are ready to address the comments made by the team lead. If you are no longer pointing to the ‘feature1’ branch, check out the ‘feature1’ branch by running the following command:

git checkout feature1

$ git checkout feature1
Branch feature1 set up to track remote branch feature1 from origin.
Switched to a new branch 'feature1'

Edit the source code in your favorite IDE and make the changes to address the comments. In this example, the developer has updated the source code as follows:

package com.amazon.helloworld;

import java.util.*;

/**
 *  This class prints a hello world message
 *
 * @author Michael Edge
 * @see HelloEarth
 * @version 1.0
 */

public class Main {
   public static void main(String[] args) {

        Date todaysDate = Calendar.getInstance().getTime();

        System.out.println("Hello, earth. Today's date is: " + todaysDate);
   }
}

After saving the changes, commit and push to the CodeCommit ‘feature1’ branch as you did previously:

git commit -am 'updated based on review comments'
git push origin feature1

Responding to the reviewer

Now that you’ve fixed the code issues you will want to respond to the review comments. In the AWS CodeCommit console, check that your latest commit appears in the pull request Commits tab. You now have a pull request consisting of more than one commit. The pull request in Figure 12 has four commits, which originated from the following activities:

  • 8th Nov: the original commit used to initiate this pull request
  • 10th Nov, 3 hours ago: the commit by the team lead to the ‘develop’ branch, merged into our ‘feature1’ branch
  • 10th Nov, 24 minutes ago: the commit by the developer that resolved the merge conflict
  • 10th Nov, 4 minutes ago: the final commit by the developer addressing the review comments

Figure 12: Pull request with multiple commits

Let’s reply to the review comments provided by the team lead. In the Activity tab, reply to the pull request comment and save it, as shown in Figure 13.

Figure 13: Replying to a pull request comment

At this stage, your code has been committed and you’ve updated your pull request comments, so you are ready for a final review by the team lead.

Final review

The team lead reviews the code changes and comments made by the developer. As team lead, you own the ‘develop’ branch and it’s your decision on whether to merge the changes in the pull request into the ‘develop’ branch. You can close the pull request with or without merging using the Merge and Close buttons at the bottom of the pull request page (see Figure 13). Clicking Close will allow you to add comments on why you are closing the pull request without merging. Merging will perform a fast-forward merge, incorporating the commits referenced by the pull request. Let’s go ahead and click the Merge button to merge the pull request into the ‘develop’ branch.

Figure 14: Merging the pull request

After merging a pull request, development of that feature is complete and the feature branch is no longer needed. It’s common practice to delete the feature branch after merging. CodeCommit provides a check box during merge to automatically delete the associated feature branch, as seen in Figure 14. Clicking the Merge button will merge the pull request into the ‘develop’ branch, as shown in Figure 15. This will update the status of the pull request to ‘Merged’, and will close the pull request.
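For completeness, the fast-forward merge can also be performed from the CLI. This is a sketch; the pull request ID is a placeholder:

# Fast-forward merge the pull request into develop and close it
aws codecommit merge-pull-request-by-fast-forward \
    --pull-request-id 47 \
    --repository-name codecommit-demo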

Conclusion

This blog has demonstrated how pull requests can be used to request a code review, and enable reviewers to get a comprehensive summary of what is changing, provide feedback to the author, and merge the code into production. For more information on pull requests, see the documentation.

Use the New Visual Editor to Create and Modify Your AWS IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/use-the-new-visual-editor-to-create-and-modify-your-aws-iam-policies/

Today, AWS Identity and Access Management (IAM) made it easier for you to create and modify your IAM policies by using a point-and-click visual editor in the IAM console. The new visual editor guides you through granting permissions for IAM policies without requiring you to write policies in JSON (although you can still author and edit policies in JSON, if you prefer). This update to the IAM console makes it easier to grant least privilege for the AWS service actions you select by listing all the supported resource types and request conditions you can specify. Policy summaries identify unrecognized services and actions, as well as permissions errors, when you import existing policies, and now you can use the visual editor to correct them. In this blog post, I give a brief overview of policy concepts and show you how to create a new policy by using the visual editor.

IAM policy concepts

You use IAM policies to define permissions for your IAM entities (groups, users, and roles). Policies are composed of one or more statements that include the following elements:

  • Effect: Determines if a policy statement allows or explicitly denies access.
  • Action: Defines AWS service actions in a policy (these typically map to individual AWS APIs.)
  • Resource: Defines the AWS resources to which actions can apply. The defined resources must be supported by the actions defined in the Action element for permissions to be granted.
  • Condition: Defines when a permission is allowed or denied. The conditions defined in a policy must be supported by the actions defined in the Action element for the permission to be granted.

To grant permissions, you attach policies to groups, users, or roles. Now that I have reviewed the elements of a policy, I will demonstrate how to create an IAM policy with the visual editor.

How to create an IAM policy with the visual editor

Let’s say my human resources (HR) recruiter, Casey, needs to review files located in an Amazon S3 bucket for all the product manager (PM) candidates our HR team has interviewed in 2017. To grant this access, I will create and attach a policy to Casey that grants list and limited read access to all folders that begin with PM_Candidate in the pmrecruiting2017 S3 bucket. To create this new policy, I navigate to the Policies page in the IAM console and choose Create policy. Note that I could also use the visual editor to modify existing policies by choosing Import existing policy; however, for Casey, I will create a new policy.

Image of the "Create policy" button

On the Visual editor tab, I see a section that includes Service, Actions, Resources, and Request Conditions.

Image of the "Visual editor" tab

Select a service

To grant S3 permissions, I choose Select a service, type S3 in the search box, and choose S3 from the list.

Image of choosing "S3"

Select actions

After selecting S3, I can define actions for Casey by using one of four options:

  1. Filter actions in the service by using the search box.
  2. Type actions by choosing Add action next to Manual actions. For example, I can type List* to grant all S3 actions that begin with List.
  3. Choose access levels from List, Read, Write, Permissions management, and Tagging.
  4. Select individual actions by expanding each access level.

In the following screenshot, I choose options 3 and 4, and choose List and s3:GetObject from the Read access level.

Screenshot of options in the "Select actions" section

We introduced access levels when we launched policy summaries earlier in 2017. Access levels give you a way to categorize actions and help you understand the permissions in a policy. The following table gives you a quick overview of access levels.

Access level | Description | Example actions
List | Actions that allow you to see a list of resources | s3:ListBucket, s3:ListAllMyBuckets
Read | Actions that allow you to read the content in resources | s3:GetObject, s3:GetBucketTagging
Write | Actions that allow you to create, delete, or modify resources | s3:PutObject, s3:DeleteBucket
Permissions management | Actions that allow you to grant or modify permissions to resources | s3:PutBucketPolicy
Tagging | Actions that allow you to create, delete, or modify tags (note: some services support authorization based on tags) | s3:PutBucketTagging, s3:DeleteObjectVersionTagging

Note: By default, all actions you choose will be allowed. To deny actions, choose Switch to deny permissions in the upper right corner of the Actions section.

As shown in the preceding screenshot, if I choose the question mark icon next to GetObject, I can see the description and supported resources and conditions for this action, which can help me scope permissions.

Screenshot of GetObject

The visual editor makes it easy to decide which actions I should select by providing an integrated documentation panel with the action description, supported resources or conditions, and any required actions for every AWS service action. Some AWS service actions have required actions, which are other AWS service actions that need to be granted in a policy for an action to run. For example, the AWS Directory Service action, ds:CreateDirectory, requires seven Amazon EC2 actions to be able to create a Directory Service directory.

Choose resources

In the Resources section, I can choose the resources on which actions can be taken. I choose Resources and see two ways that I can define or select resources:

  1. Define specific resources
  2. Select all resources

Specific is the default option, and only the applicable resources are presented based on the service and actions I chose previously. Because I want to grant Casey access to some objects in a specific bucket, I choose Specific and choose Add ARN under bucket.

Screenshot of Resources section

In the pop-up, I type the bucket name, pmrecruiting2017, and choose Add to specify the S3 bucket resource.

Screenshot of specifying the S3 bucket resource

To specify the objects, I choose Add ARN under object and grant Casey access to all objects starting with PM_Candidate in the pmrecruiting2017 bucket. The visual editor helps you build your Amazon Resource Name (ARN) and validates that it is structured correctly. For AWS services that are AWS Region specific, the visual editor prompts for AWS Region and account number.

The visual editor displays all applicable resources in the Resources section based on the actions I choose. For Casey, I defined an S3 bucket and object in the Resources section. In this example, when the visual editor creates the policy, it creates three statements. The first statement includes all actions that require a wildcard (*) for the Resource element, because those actions do not support resource-level permissions. The second statement includes all S3 actions that support an S3 bucket resource. The third statement includes all actions that support an S3 object resource. The visual editor generates policy syntax for you based on supported permissions in AWS services.

Specify request conditions

For additional security, I specify a condition to restrict access to the S3 bucket from inside our internal network. To do this, I choose Specify request conditions in the Request Conditions section, and choose the Source IP check box. A condition is composed of a condition key, an operator, and a value. I choose aws:SourceIp for my Key so that I can control from where the S3 files can be accessed. By default, IpAddress is the Operator, and I set the Value to my internal network.

Screenshot of "Request conditions" section

To add other conditions, choose Add condition and choose Save changes after choosing the key, operator, and value.

After specifying my request condition, I am now able to review all the elements of these S3 permissions.

Screenshot of S3 permissions

Next, I can choose to grant permissions for another service by choosing Add new permissions (bottom left of preceding screenshot), or I can review and create this new policy. Because I have granted all the permissions Casey needs, I choose Review policy. I type a name and a description, and I review the policy summary before choosing Create policy. 

Now that I have created the policy, I attach it to Casey by choosing the Attached entities tab of the policy I just created. I choose Attach and choose Casey. I then choose Attach policy. Casey should now be able to access the interview files she needs to review.

Summary

The visual editor makes it easier to create and modify your IAM policies by guiding you through each element of the policy. The visual editor helps you define resources and request conditions so that you can grant least privilege and generate policies. To start using the visual editor, sign in to the IAM console, navigate to the Policies page, and choose Create policy.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

How to Recover From Ransomware

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/complete-guide-ransomware/

Here’s the scenario. You’re working on your computer and you notice that it seems slower. Or perhaps you can’t access document or media files that were previously available.

You might be getting error messages from Windows telling you that a file is of an “Unknown file type” or “Windows can’t open this file.”

Windows error message

If you’re on a Mac, you might see the message “No associated application,” or “There is no application set to open the document.”

MacOS error message

Another possibility is that you’re completely locked out of your system. If you’re in an office, you might be looking around and seeing that other people are experiencing the same problem. Some are already locked out, and others are just now wondering what’s going on, just as you are.

Then you see a message confirming your fears.

wana decrypt0r ransomware message

You’ve been infected with ransomware.

You’ll have lots of company this year. The number of ransomware attacks on businesses tripled in the past year, jumping from one attack every two minutes in Q1 to one every 40 seconds by Q3. There were over four times more new ransomware variants in the first quarter of 2017 than in the first quarter of 2016, and damages from ransomware are expected to exceed $5 billion this year.

Growth in Ransomware Variants Since December 2015

Source: Proofpoint Q1 2017 Quarterly Threat Report

This past summer, our local PBS and NPR station in San Francisco, KQED, was debilitated for weeks by a ransomware attack that forced them to go back to working the way they used to prior to computers. Five months have passed since the attack and they’re still recovering and trying to figure out how to prevent it from happening again.

How Does Ransomware Work?

Ransomware typically spreads via spam or phishing emails, but also through websites or drive-by downloads, to infect an endpoint and penetrate the network. Once in place, the ransomware then locks all files it can access using strong encryption. Finally, the malware demands a ransom (typically payable in bitcoins) to decrypt the files and restore full operations to the affected IT systems.

Encrypting ransomware or “cryptoware” is by far the most common recent variety of ransomware. Other types that might be encountered are:

  • Non-encrypting ransomware or lock screens (restricts access to files and data, but does not encrypt them)
  • Ransomware that encrypts the Master Boot Record (MBR) of a drive or Microsoft’s NTFS, which prevents victims’ computers from being booted up in a live OS environment
  • Leakware or extortionware (exfiltrates data that the attackers threaten to release if ransom is not paid)
  • Mobile Device Ransomware (infects cell-phones through “drive-by downloads” or fake apps)

The typical steps in a ransomware attack are:

  1. Infection: After it has been delivered to the system via email attachment, phishing email, infected application or other method, the ransomware installs itself on the endpoint and any network devices it can access.
  2. Secure Key Exchange: The ransomware contacts the command and control server operated by the cybercriminals behind the attack to generate the cryptographic keys to be used on the local system.
  3. Encryption: The ransomware starts encrypting any files it can find on local machines and the network.
  4. Extortion: With the encryption work done, the ransomware displays instructions for extortion and ransom payment, threatening destruction of data if payment is not made.
  5. Unlocking: Organizations can either pay the ransom and hope for the cybercriminals to actually decrypt the affected files (which in many cases does not happen), or they can attempt recovery by removing infected files and systems from the network and restoring data from clean backups.

Who Gets Attacked?

Ransomware attacks target firms of all sizes — 5% or more of businesses in the top 10 industry sectors have been attacked — and no size of business, from SMBs to enterprises, is immune. Attacks are on the rise in every sector and in every size of business.

Recent attacks, such as WannaCry earlier this year, mainly affected systems outside of the United States. Hundreds of thousands of computers were infected from Taiwan to the United Kingdom, where it crippled the National Health Service.

The US has not been so lucky in other attacks, though. The US ranks the highest in the number of ransomware attacks, followed by Germany and then France. Windows computers are the main targets, but ransomware strains exist for Macintosh and Linux, as well.

The unfortunate truth is that ransomware has become so widespread that for most companies it is a certainty that they will be exposed to some degree to a ransomware or malware attack. The best they can do is to be prepared and understand the best ways to minimize the impact of ransomware.

“Ransomware is more about manipulating vulnerabilities in human psychology than the adversary’s technological sophistication.” — James Scott, expert in Artificial Intelligence

Phishing emails, malicious email attachments, and visiting compromised websites have been common vehicles of infection (we wrote about protecting against phishing recently), but other methods have become more common in past months. Weaknesses in Microsoft’s Server Message Block (SMB) and Remote Desktop Protocol (RDP) have allowed cryptoworms to spread. Desktop applications — in one case an accounting package — and even Microsoft Office (Microsoft’s Dynamic Data Exchange — DDE) have been the agents of infection.

Recent ransomware strains such as Petya, CryptoLocker, and WannaCry have incorporated worms to spread themselves across networks, earning the nickname, “cryptoworms.”

How to Defeat Ransomware

  1. Isolate the Infection: Prevent the infection from spreading by separating all infected computers from each other, shared storage, and the network.
  2. Identify the Infection: From messages, evidence on the computer, and identification tools, determine which malware strain you are dealing with.
  3. Report: Report to the authorities to support and coordinate measures to counter attacks.
  4. Determine Your Options: You have a number of ways to deal with the infection. Determine which approach is best for you.
  5. Restore and Refresh: Use safe backups and program and software sources to restore your computer or outfit a new platform.
  6. Plan to Prevent Recurrence: Make an assessment of how the infection occurred and what you can do to put measures into place that will prevent it from happening again.

1 — Isolate the Infection

The rate and speed of ransomware detection is critical in combating fast-moving attacks before they succeed in spreading across networks and encrypting vital data.

The first thing to do when a computer is suspected of being infected is to isolate it from other computers and storage devices. Disconnect it from the network (both wired and Wi-Fi) and from any external storage devices. Cryptoworms actively seek out connections and other computers, so you want to prevent that from happening. You also don’t want the ransomware communicating across the network with its command and control center.
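
On a Linux machine, for example, cutting connectivity can be as quick as the following. This is a hedged sketch: it assumes NetworkManager is in use, and the mount point is a placeholder for whatever external storage is attached:

$ sudo nmcli networking off       # disable all wired and Wi-Fi connections
$ sudo umount /mnt/backup-drive   # detach any external or network storage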

Be aware that there may be more than one patient zero, meaning that the ransomware may have entered your organization or home through multiple computers, or may be dormant and not yet have shown itself on some systems. Treat all connected and networked computers with suspicion and apply measures to verify that they are not infected.

This Week in Tech (TWiT.tv) did a videocast showing what happens when WannaCry is released on an isolated system: it encrypts files and tries to spread itself to other computers. It’s a great lesson on how these types of cryptoworms operate.

2 — Identify the Infection

Most often the ransomware will identify itself when it asks for ransom. There are numerous sites that help you identify the ransomware, including ID Ransomware. The No More Ransom! Project provides the Crypto Sheriff to help identify ransomware.

Identifying the ransomware will help you understand what type of ransomware you have, how it propagates, what types of files it encrypts, and maybe what your options are for removal and disinfection. It also will enable you to report the attack to the authorities, which is recommended.

wanna decryptor 2.0 ransomware message

WannaCry Ransomware Extortion Dialog

3 — Report to the Authorities

You’ll be doing everyone a favor by reporting all ransomware attacks to the authorities. The FBI urges ransomware victims to report ransomware incidents regardless of the outcome. Victim reporting provides law enforcement with a greater understanding of the threat, provides justification for ransomware investigations, and contributes relevant information to ongoing ransomware cases. Knowing more about victims and their experiences with ransomware will help the FBI to determine who is behind the attacks and how they are identifying or targeting victims.

You can file a report with the FBI at the Internet Crime Complaint Center.

There are other ways to report ransomware, as well.

4 — Determine Your Options

Your options when infected with ransomware are:

  1. Pay the ransom
  2. Try to remove the malware
  3. Wipe the system(s) and reinstall from scratch

It’s generally considered a bad idea to pay the ransom. Paying the ransom encourages more ransomware, and in most cases the unlocking of the encrypted files is not successful.

In a recent survey, more than three-quarters of respondents (77%) said their organization is not at all likely to pay the ransom in order to recover their data. Only a small minority said they were willing to pay some ransom (3% of companies have already set up a Bitcoin account in preparation).

Even if you decide to pay, it’s very possible you won’t get back your data.

5 — Restore or Start Fresh

You have the choice of trying to remove the malware from your systems or wiping your systems and reinstalling from safe backups and clean OS and application sources.

Get Rid of the Infection

There are internet sites and software packages that claim to be able to remove ransomware from systems. The No More Ransom! Project is one. Other options can be found, as well.

Whether you can successfully and completely remove an infection is up for debate. A working decryptor doesn’t exist for every known ransomware, and unfortunately the newer the ransomware, the more sophisticated it’s likely to be and the less likely it is that a decryptor has yet been created.

It’s Best to Wipe All Systems Completely

The surest way of being certain that malware or ransomware has been removed from a system is to do a complete wipe of all storage devices and reinstall everything from scratch. If you’ve been following a sound backup strategy, you should have copies of all your documents, media, and important files right up to the time of the infection.

Be sure to determine as well as you can, from file dates and other information, what the date of infection was. Consider that an infection might have been dormant in your system for a while before it activated and made significant changes to your system. Identifying and learning about the particular malware that attacked your systems will enable you to understand how that malware operates and what your best strategy should be for restoring your systems.
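
On a Linux system, a rough way to narrow down the infection window is to look at file modification times with GNU find; the path and dates below are placeholders:

$ find /home -type f -newermt "2017-11-01" ! -newermt "2017-11-08" \
    -printf "%T+ %p\n" | sort | head   # files modified during the suspect window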

Backblaze Backup enables you to go back in time and specify the date prior to which you wish to restore files. That date should precede the date your system was infected.

Choose files to restore from earlier date in Backblaze Backup

If you’ve been following a good backup policy with both local and off-site backups, you should be able to use backup copies that you are sure were not connected to your network after the time of attack and hence protected from infection. Backup drives that were completely disconnected should be safe, as are files stored in the cloud, as with Backblaze Backup.

System Restores Are Not the Best Strategy for Dealing with Ransomware and Malware

You might be tempted to use a System Restore point to get your system back up and running. System Restore is not a good solution for removing viruses or other malware. Since malicious software is typically buried within all kinds of places on a system, you can’t rely on System Restore being able to root out all parts of the malware. Instead, you should rely on a quality virus scanner that you keep up to date. Also, System Restore does not save old copies of your personal files as part of its snapshot. It also will not delete or replace any of your personal files when you perform a restoration, so don’t count on System Restore as working like a backup. You should always have a good backup procedure in place for all your personal files.

Local backups can be encrypted by ransomware. If your backup solution is local and connected to a computer that gets hit with ransomware, the chances are good your backups will be encrypted along with the rest of your data.

With a good backup solution that is isolated from your local computers, such as Backblaze Backup, you can easily obtain the files you need to get your system working again. You have the flexibility to determine which files to restore, from which date you want to restore, and how to obtain the files you need to restore your system.

Choose how to obtain your backup files

You’ll need to reinstall your OS and software applications from the source media or the internet. If you’ve been managing your account and software credentials in a sound manner, you should be able to reactivate accounts for applications that require it.

If you use a password manager, such as 1Password or LastPass, to store your account numbers, usernames, passwords, and other essential information, you can access that information through their web interface or mobile applications. You just need to be sure that you still know your master username and password to obtain access to these programs.

6 — How to Prevent a Ransomware Attack

“Ransomware is at an unprecedented level and requires international investigation.” — European police agency Europol

A ransomware attack can be devastating for a home or a business. Valuable and irreplaceable files can be lost and tens or even hundreds of hours of effort can be required to get rid of the infection and get systems working again.

Security experts suggest several precautionary measures for preventing a ransomware attack.

  1. Use anti-virus and anti-malware software or other security policies to block known payloads from launching.
  2. Make frequent, comprehensive backups of all important files and isolate them from local and open networks. In a recent survey, 74% of cybersecurity professionals viewed data backup and recovery as by far the most effective solution for responding to a successful ransomware attack.
  3. Keep offline backups of data stored in locations inaccessible from any potentially infected computer, such as external storage drives or the cloud, which prevents them from being accessed by the ransomware.
  4. Install the latest security updates issued by software vendors of your OS and applications. Remember to Patch Early and Patch Often to close known vulnerabilities in operating systems, browsers, and web plugins.
  5. Consider deploying security software to protect endpoints, email servers, and network systems from infection.
  6. Exercise cyber hygiene, such as using caution when opening email attachments and links.
  7. Segment your networks to keep critical computers isolated and to prevent the spread of malware in case of attack. Turn off unneeded network shares.
  8. Turn off admin rights for users who don’t require them. Give users the lowest system permissions they need to do their work.
  9. Restrict write permissions on file servers as much as possible.
  10. Educate yourself, your employees, and your family in best practices to keep malware out of your systems. Update everyone on the latest email phishing scams and social engineering aimed at turning victims into abettors.

It’s clear that the best way to respond to a ransomware attack is to avoid having one in the first place. Other than that, making sure your valuable data is backed up and unreachable by ransomware infection will ensure that your downtime and data loss will be minimal or avoided completely.

Have you endured a ransomware attack or have a strategy to avoid becoming a victim? Please let us know in the comments.

The post How to Recover From Ransomware appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

I Still Prefer Eclipse Over IntelliJ IDEA

Post Syndicated from Bozho original https://techblog.bozho.net/still-prefer-eclipse-intellij-idea/

Over the years I’ve observed an inevitable shift from Eclipse to IntelliJ IDEA. Last year they were almost equal in usage, and I have the feeling things are swaying even more towards IDEA.

IDEA is like the iPhone of IDEs – its users tell you that “you will feel how much better it is once you get used to it”, “are you STILL using Eclipse??”, “IDEA is so much better, I thought everyone has switched”, etc.

I’ve been using mostly Eclipse for the past 12 years, but in some cases I did use IDEA – when I was writing Scala, when I was writing Android, and most recently – when Eclipse wasn’t ready for the Java 9 release. After half a day of trying to get it working, I just switched to IDEA until Eclipse finally gets a working Java 9 version (with Maven and the rest of the stuff).

But I will get back to Eclipse again, soon. And I still prefer it. Not just because of all the key combinations I’ve internalized (you can reuse those in IDEA), but because there are still things I find worse in IDEA. Of course, IDEA has so much more cool features like code improvement suggestions and actually working plugins for everything. But at least some of the problems I see have to do with the more basic development workflow and experience. And you can’t compensate for those with sugarcoating. So here they are:

  • Projects are not automatically built (by default), so you can end up with compilation errors that you don’t see until you open a non-compiling file or run a build. And turning the autobuild on makes my machine crawl. I know I need an upgrade, but that’s not the point – not having “build on change” was a huge surprise to me the first time I tried IDEA. I recently complained about that on twitter and it turns out “it’s a feature”. The rationale seems to be that if you use refactoring, that shouldn’t happen. Well, there are dozens of cases when it does happen: refactoring by adding a method parameter, by changing the type of a parameter, by removing a parameter (where the IDE can’t infer which parameter is removed based on the types), by changing return types. Also, a change in maven/gradle dependencies may introduce compilation issues that you don’t get to see. This is not a reasonable default at all, and I think the performance issues are the only reason it’s still the default. I think this makes the experience much worse.
  • You can have only one project per screen. Maybe there are those small companies with greenfield projects where you only need one. But I’ve never been in a situation, where you don’t at least occasionally need a separate project. Be it an “experiments” one, a “tools” one, or whatever. And no, multi-module maven projects (which IDEA handles well) are not sufficient. So each time you need to step out of your main project, you launch another screen. Apart from the bad usability, it’s double the memory, double the fun.
  • Speaking of memory, it seems to be taking more memory than Eclipse. I don’t have representative benchmarks of that, and I know that my 8 GB RAM home machine is way too small for development nowadays, but still.
  • It feels less responsive and clunkier. There is some minor delay that I can’t define well, but “I feel it”. I read somewhere that they were excessively repainting the screen elements, so that might be the explanation. Eclipse feels smoother (I know that’s not a proper argument, but I can’t be more precise).
  • Due to some extra cleverness, I have “unused methods” and “never assigned fields” all around the project. It uses spring, so these methods and fields are controller methods and autowired fields. Maybe some spring plugin would take care of that, but spring is not the only framework that uses reflection. Even getters and setters on POJOs get the unused warnings. What’s the problem with those warnings? That warnings are devalued. They don’t mean anything now. There isn’t a “yellow” indicator on the class either, so you don’t actually see the amount of warnings you have. Eclipse displays warnings better, and produces far fewer false positives.
  • The call hierarchy is slightly worse. But since that’s the most important IDE feature for me (alongside refactoring), it matters. It doesn’t give you the call hierarchy of default constructors that are not explicitly defined. Also, from what I’ve seen, IDEA users don’t often use the call hierarchy feature. “Find usage” I think predates the call hierarchy, and is also much more visible through the UI, so some IDEA users don’t even know what a call hierarchy is, and repeatedly do “find usage” instead. That’s only partly the IDE’s fault.
  • No search in the output console. Come on, why do I have an IDE where I have to copy the output and paste it into a text editor in order to search? Now, to clarify, the console does have search. But when I run my (spring-boot) application, it outputs stuff in a panel at the bottom that is not the console and doesn’t have search.
  • CTRL+arrows by default jumps over whole words, and not camel-cased words. This is configurable, but is yet another odd default. You almost always want to be able to traverse your variables word by word (in camel case), rather than skipping over the whole variable (method/class) name.
  • A few years ago when I used it for Scala, the project never actually compiled. But I guess that’s more Scala’s fault than the IDE’s.

Apart from the first two, the rest are not major issues, I agree. But they add up. Ultimately, it’s a matter of personal choice whether you can turn a blind eye to these issues. But I’m getting back to Eclipse again. At some point I will propose improvements in the IntelliJ IDEA backlog and will check it again in a few years, I guess.

The post I Still Prefer Eclipse Over IntelliJ IDEA appeared first on Bozho's tech blog.

Multi-National Police Operation Shuts Down Pirate Forums

Post Syndicated from Andy original https://torrentfreak.com/multi-national-police-operation-shuts-down-pirate-forums-171110/

Once upon a time, large-scale raids on pirate operations were a regular occurrence, with news of such events making the headlines every few months. These days things have calmed down somewhat, but reports coming out of Germany suggest that the war isn’t over yet.

According to a statement from German authorities, the Attorney General in Dresden and various cybercrime agencies teamed up this week to take down sites dedicated to sharing copyright protected material via the Usenet (newsgroups) system.

Huge amounts of infringing items were said to have been made available on a pair of indexing sites – 400,000 on Town.ag and 1,200,000 on Usenet-Town.com.

“Www.town.ag and www.usenet-town.com were two of the largest online portals that provided access to films, series, music, software, e-books, audiobooks, books, newspapers and magazines through systematic and unlawful copyright infringement,” the statement reads.

Visitors to these URLs are no longer greeted by the usual warez-fest, but by a seizure banner placed there by German authorities.

Seizure banner on Town.ag and Usenet-Town.com (translated)

Following an investigation carried out after complaints from rightsholders, 182 officers from various agencies on Wednesday raided homes and businesses connected to a reported 26 suspects. In addition to searches of data centers located in Germany, servers in Spain, the Netherlands, San Marino, Switzerland, and Canada were also targeted.

According to police the sites generated income from ‘sponsors’, netting their operators millions of euros in revenue. One of those appears to be Usenet reseller SSL-News, which displays the same seizure banner. Rightsholders claim that the Usenet portals have cost them many millions of euros in lost sales.

Arrest warrants were issued in Spain and Saxony against two German nationals, 39 and 31-years-old respectively. The man arrested in Spain is believed to be a ringleader and authorities there have been asked to extradite him to Germany.

At least 1,000 gigabytes of data were seized, with police scooping up numerous computers and other hardware for evidence. The true scale of material indexed is likely to be much larger, however.

Online chatter suggests that several other Usenet-related sites have also disappeared during the past day but whether that’s a direct result of the raids or down to precautionary measures taken by their operators isn’t yet clear.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pirate Bay Suffers Downtime, Tor and Proxies are Up

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-down-for-24-hours-tor-and-proxies-are-up-171109/

The Pirate Bay has been unreachable for roughly a day now.

The site currently displays a CloudFlare error message across the entire site, with the CDN provider referring to an “unknown error.”

No further details are available to us and there is no known ETA for the site’s return. However, judging from past experience, it’s likely a small technical issue that needs fixing.

Pirate Bay downtime

The Pirate Bay has had quite a few stints of downtime in recent months. The popular torrent site usually returns after several hours, but an outage of more than 24 hours has happened before as well.

TorrentFreak reached out to the TPB team but we have yet to hear more about the issue.

Amid the downtime, there’s still some good news for those who desperately need to access the notorious torrent site. TPB is still available via its .onion address on the Tor network, accessible using the popular Tor Browser, for example. The Tor traffic goes through a separate server and works just fine.

The same is true for The Pirate Bay’s proxy sites, most of which are still working just fine.

The main .org domain will probably be back in action soon enough, but seasoned TPB users know the drill by now…

The Pirate Bay is not the only torrent site facing problems at the moment. 1337x.to is also suffering downtime. A week ago the site’s operator said that the site was under attack, which may still be ongoing. Meanwhile, 1337x’s official proxy is still online.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Now Available – Compute-Intensive C5 Instances for Amazon EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-compute-intensive-c5-instances-for-amazon-ec2/

I’m thrilled to announce that the new compute-intensive C5 instances are available today in six sizes for launch in three AWS regions!

These instances are designed for compute-heavy applications like batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. The new instances offer a 25% price/performance improvement over the C4 instances, rising to over 50% for some workloads. They also have additional memory per vCPU and, for code that can make use of the new AVX-512 instructions, twice the performance for vector and floating point workloads.

Over the years we have been working non-stop to provide our customers with the best possible networking, storage, and compute performance, with a long-term focus on offloading many types of work to dedicated hardware designed and built by AWS. The C5 instance type incorporates the latest generation of our hardware offloads, and also takes another big step forward with the addition of a new hypervisor that runs hand-in-glove with our hardware. The new hypervisor allows us to give you access to all of the processing power provided by the host hardware, while also making performance even more consistent and further raising the bar on security. We’ll be sharing many technical details about it at AWS re:Invent.

The New Instances
The C5 instances are available in six sizes:

Instance Name | vCPUs | RAM | EBS Bandwidth | Network Bandwidth
c5.large | 2 | 4 GiB | Up to 2.25 Gbps | Up to 10 Gbps
c5.xlarge | 4 | 8 GiB | Up to 2.25 Gbps | Up to 10 Gbps
c5.2xlarge | 8 | 16 GiB | Up to 2.25 Gbps | Up to 10 Gbps
c5.4xlarge | 16 | 32 GiB | 2.25 Gbps | Up to 10 Gbps
c5.9xlarge | 36 | 72 GiB | 4.5 Gbps | 10 Gbps
c5.18xlarge | 72 | 144 GiB | 9 Gbps | 25 Gbps

Each vCPU is a hardware hyperthread on a 3.0 GHz Intel Xeon Platinum 8000-series processor. This custom processor, optimized for EC2, gives you full control over the C-states on the two largest sizes, allowing you to run a single core at up to 3.5 GHz using Intel Turbo Boost Technology.

As you can see from the table, the four smallest instance sizes offer substantially more EBS and network bandwidth than the previous generation of compute-intensive instances.

Because all networking and storage functionality is implemented in hardware, C5 instances require HVM AMIs that include drivers for the Elastic Network Adapter (ENA) and NVMe. The latest Amazon Linux, Microsoft Windows, Ubuntu, RHEL, CentOS, SLES, Debian, and FreeBSD AMIs all support C5 instances. If you are doing machine learning inferencing, or other compute-intensive work, be sure to check out the most recent version of the Intel Math Kernel Library. It has been optimized for the Intel® Xeon® Platinum processor and has the potential to greatly accelerate your work.
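
As a quick sketch, launching one of the new instances from the CLI looks like any other launch; the AMI ID and key pair below are placeholders for an ENA- and NVMe-capable HVM AMI and your own key:

$ aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type c5.large --count 1 --key-name my-key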

In order to remain compatible with instances that use the Xen hypervisor, the device names for EBS volumes will continue to use the existing /dev/sd and /dev/xvd prefixes. The device name that you provide when you attach a volume to an instance is not used because the NVMe driver assigns its own device name (read Amazon EBS and NVMe to learn more).

The nvme command displays additional information about each volume (install it using sudo yum -y install nvme-cli if necessary).

The SN field in the output can be mapped to an EBS volume ID by inserting a “-” after the “vol” prefix (sadly, the NVMe SN field is not long enough to store the entire ID). Here’s a simple script that uses this information to create an EBS snapshot of each attached volume:

# List the NVMe devices, rewrite each serial number ("vol...") into an
# EBS volume ID ("vol-..."), and snapshot every attached volume:
$ sudo nvme list | \
  awk '/dev/ {print(gensub("vol", "vol-", 1, $2))}' | \
  xargs -n 1 aws ec2 create-snapshot --volume-id

With a little more work (and a lot of testing), you could create a script that expands EBS volumes that are getting full.
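
Here is one possible sketch of such a script. It assumes GNU coreutils, the nvme CLI, Elastic Volumes support, and credentials allowed to call describe-volumes and modify-volume; treat it as a starting point rather than a tested tool:

#!/bin/bash
# Grow any NVMe-backed EBS volume that is over 90% full by 20%.
for dev in /dev/nvme?n1; do
  # Map the NVMe serial number back to an EBS volume ID ("vol..." -> "vol-...")
  sn=$(sudo nvme id-ctrl "$dev" | awk '/^sn/ {print $3}')
  vol="${sn/vol/vol-}"
  # Percent used on the mounted filesystem (skips unmounted devices)
  pct=$(df --output=pcent "$dev" 2>/dev/null | tail -n 1 | tr -dc '0-9')
  [ -z "$pct" ] && continue
  if [ "$pct" -gt 90 ]; then
    size=$(aws ec2 describe-volumes --volume-ids "$vol" \
             --query 'Volumes[0].Size' --output text)
    aws ec2 modify-volume --volume-id "$vol" --size $(( size * 12 / 10 ))
  fi
done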

Getting to C5
As I mentioned earlier, our effort to offload work to hardware accelerators has been underway for quite some time. Here’s a recap:

CC1 – Launched in 2010, the CC1 was designed to support scale-out HPC applications. It was the first EC2 instance to support 10 Gbps networking and one of the first to support HVM virtualization. The network fabric that we designed for the CC1 (based on our own switch hardware) has become the standard for all AWS data centers.

C3 – Launched in 2013, the C3 introduced Enhanced Networking and uses dedicated hardware accelerators to support the software defined network inside of each Virtual Private Cloud (VPC). Hardware virtualization removes the I/O stack from the hypervisor in favor of direct access by the guest OS, resulting in higher performance and reduced variability.

C4 – Launched in 2015, the C4 instances are EBS Optimized by default via a dedicated network connection, and also offload EBS processing (including CPU-intensive crypto operations for encrypted EBS volumes) to a hardware accelerator.

C5 – Launched today, the C5 instances are powered by a hypervisor that allows practically all of the resources of the host CPU to be devoted to customer instances. The ENA networking and the NVMe interface to EBS are both powered by hardware accelerators. The instances do not require (or support) the Xen paravirtual networking or block device drivers, both of which have been removed in order to increase efficiency.

Going forward, we’ll use this hypervisor to power other instance types and plan to share additional technical details in a set of AWS re:Invent sessions.

Launch a C5 Today
You can launch C5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.

One quick note before I go: The current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of C5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.

Jeff;

BitBarista: a fully autonomous corporation

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/bitbarista/

To some people, the idea of a fully autonomous corporation might seem like the beginning of the end. However, while the BitBarista coffee machine prototype can indeed run itself without any human interference, it also teaches a lesson about ethical responsibility and the value of quality.

BitBarista

Bitcoin coffee machine that engages coffee drinkers in the value chain

Autonomous corporations

If you’ve played Paperclips, you get it. And in case you haven’t played Paperclips, I will only say this: give a robot one job and full control to complete the task, and things may turn in a very unexpected direction. Or, in the case of Rick and Morty, they end in emotional breakdown.

BitBarista

While the fully autonomous BitBarista resides primarily on the drawing board, the team at the University of Edinburgh’s Center for Design Informatics have built a proof-of-concept using a Raspberry Pi and a Delonghi coffee maker.

BitBarista fully autonomous coffee machine using Raspberry Pi

Recently described by the BBC as ‘a coffee machine with a life of its own, dispensing coffee to punters with an ethical preference’, BitBarista works in conjunction with customers to source coffee and complete maintenance tasks in exchange for Bitcoin payments. Customers pay for their coffee in Bitcoin, and when BitBarista needs maintenance such as cleaning, water replenishment, or restocking, it can pay the same customers for completing those tasks.

BitBarista fully autonomous coffee machine using Raspberry Pi

Moreover, customers choose which coffee beans the machine purchases based on quality, price, environmental impact, and social responsibility. BitBarista also collects and displays data on the most common bean choices.

BitBarista fully autonomous coffee machine using Raspberry Pi

So not only is BitBarista a study into the concept of full autonomy, it’s also a means of data collection about the societal preference of cost compared to social and environmental responsibility.

For more information on BitBarista, visit the Design Informatics and PETRAS websites.

Home-made autonomy

Many people already have store-bought autonomous technology within their homes, such as the Roomba vacuum cleaner or the Nest Smart Thermostat. And within the maker community, many more still have created such devices using sensors, mobile apps, and microprocessors such as the Raspberry Pi. We see examples using the Raspberry Pi on a daily basis, from simple motion-controlled lights and security cameras to advanced devices using temperature sensors and WiFi technology to detect the presence of specific people.

How to Make a Smart Security Camera with a Raspberry Pi Zero

In this video, we use a Raspberry Pi Zero W and a Raspberry Pi camera to make a smart security camera! The camera uses object detection (with OpenCV) to send you an email whenever it sees an intruder. It also runs a webcam so you can view live video from the camera when you are away.

To get started building your own autonomous technology, you could have a look at our resources Laser tripwire and Getting started with picamera. These will help you build a visitor register of everyone who crosses the threshold of a specific room.

Or build your own Raspberry Pi Zero W Butter Robot for the lolz.

The post BitBarista: a fully autonomous corporation appeared first on Raspberry Pi.

Now You Can Monitor DDoS Attack Trends with AWS Shield Advanced

Post Syndicated from Ritwik Manan original https://aws.amazon.com/blogs/security/now-you-can-monitor-ddos-attack-trends-with-aws-shield-advanced/

AWS Shield Advanced has always notified you about DDoS attacks on your applications via the AWS Management Console and API as well as Amazon CloudWatch metrics. Today, we added the global threat environment dashboard to AWS Shield Advanced to allow you to view trends and metrics about DDoS attacks across Amazon CloudFront, Elastic Load Balancing, and Amazon Route 53. This information can help you understand the DDoS target profile of the AWS services you use and, in turn, can help you create a more resilient and distributed architecture for your application.

The global threat environment dashboard shows comprehensive and easy-to-understand data about DDoS attacks. The dashboard displays a summary of the global threat environment, including the largest attacks, top vectors, and the relative number of significant attacks. You also can view the dashboard for different time durations to give you a history of DDoS attacks.

To get started with the global threat environment dashboard:

  1. Sign in to the AWS Management Console and navigate to the AWS WAF and AWS Shield console.
  2. To activate AWS Shield Advanced, choose Protected resources in the navigation pane, choose Activate AWS Shield Advanced, and then accept the terms by typing I accept.
  3. Navigate to the global threat environment dashboard through the navigation pane.
  4. Choose your desired time period from the time period drop-down menu in the top right part of the page.

You can use the information on the global threat environment dashboard to understand the threat landscape as well as to inform decisions you make that will help to better protect your AWS resources.

To learn more, see Global Threat Environment Dashboard: View DDoS Attack Trends Across AWS.

– Ritwik

How to Prepare for AWS’s Move to Its Own Certificate Authority

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/

AWS Certificate Manager image

Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) is essential for encrypting information that is exchanged on the internet. For example, Amazon.com uses TLS for all traffic on its website, and AWS uses it to secure calls to AWS services.

An electronic document called a certificate verifies the identity of the server when creating such an encrypted connection. The certificate helps establish proof that your web browser is communicating securely with the website that you typed in your browser’s address field. Certificate Authorities, also known as CAs, issue certificates to specific domains. When a domain presents a certificate that is issued by a trusted CA, your browser or application knows it’s safe to make the connection.

In January 2016, AWS launched AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates are available for no additional charge through Amazon’s own CA: Amazon Trust Services. For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the browser’s trust store, which is a list of trusted CAs. If the issuing CA is not in the trust store, the browser will display an error message (see an example) and applications will show an application-specific error. To ensure the ubiquity of the Amazon Trust Services CA, AWS purchased the Starfield Services CA, a root certificate found in most browsers that has been valid since 2005. This means you shouldn’t have to take any action to use the certificates issued by Amazon Trust Services.

AWS has been offering free certificates to AWS customers from the Amazon Trust Services CA. Now, AWS is in the process of moving certificates for services such as Amazon EC2 and Amazon DynamoDB to use certificates from Amazon Trust Services as well. Most software doesn’t need to be changed to handle this transition, but there are exceptions. In this blog post, I show you how to verify that you are prepared to use the Amazon Trust Services CA.

How to tell if the Amazon Trust Services CAs are in your trust store

The following table lists the Amazon Trust Services certificates. To verify that these certificates are in your browser’s trust store, click each Test URL in the following table to verify that it works for you. When a Test URL does not work, it displays an error similar to this example.

Distinguished name | SHA-256 hash of subject public key information | Test URL
CN=Amazon Root CA 1,O=Amazon,C=US | fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2 | Test URL
CN=Amazon Root CA 2,O=Amazon,C=US | 7f4296fc5b6a4e3b35d3c369623e364ab1af381d8fa7121533c9d6c633ea2461 | Test URL
CN=Amazon Root CA 3,O=Amazon,C=US | 36abc32656acfc645c61b71613c4bf21c787f5cabbee48348d58597803d7abc9 | Test URL
CN=Amazon Root CA 4,O=Amazon,C=US | f7ecded5c66047d28ed6466b543c40e0743abe81d109254dcf845d4c2c7853c5 | Test URL
CN=Starfield Services Root Certificate Authority – G2,O=Starfield Technologies\, Inc.,L=Scottsdale,ST=Arizona,C=US | 2b071c59a0a0ae76b0eadb2bad23bad4580b69c3601b630c2eaf0613afa83f92 | Test URL
Starfield Class 2 Certification Authority | 2ce1cb0bf9d2f9e102993fbe215152c3b2dd0cabde1c68e5319b839154dbb7f5 | Test URL
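
If you prefer to test from the command line rather than a browser, openssl can run the same check against a trust store; the hostname below is a placeholder for one of the Test URL endpoints in the table:

$ openssl s_client -connect good.example.amazontrust.com:443 < /dev/null 2>/dev/null \
    | grep "Verify return code"
# "Verify return code: 0 (ok)" means the CA is trusted; depending on your
# OpenSSL build, you may need -CAfile or -CApath to point at your trust store.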

What to do if the Amazon Trust Services CAs are not in your trust store

If your tests of any of the Test URLs failed, you must update your trust store. The easiest way to update your trust store is to upgrade the operating system or browser that you are using.

You will find the Amazon Trust Services CAs in the following operating systems (release dates are in parentheses):

  • Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
  • Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
  • Red Hat Enterprise Linux 5 (March 2007), 6, and 7, and CentOS 5, 6, and 7
  • Ubuntu 8.10
  • Debian 5.0
  • Amazon Linux (all versions)
  • Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8

All modern browsers trust Amazon’s CAs. You can update the certificate bundle in your browser simply by updating your browser; you can find instructions for updating popular browsers on their respective websites.

If your application is using a custom trust store, you must add the Amazon root CAs to your application’s trust store. The instructions for doing this vary based on the application or platform. Please refer to the documentation for the application or platform you are using.

AWS SDKs and CLIs

Most AWS SDKs and CLIs are not impacted by the transition to the Amazon Trust Services CA. If you are using a version of the Python AWS SDK or CLI released before February 5, 2015, you must upgrade. The .NET, Java, PHP, Go, JavaScript, and C++ SDKs and CLIs do not bundle any certificates, so their certificates come from the underlying operating system. The Ruby SDK has included at least one of the required CAs since June 10, 2015. Before that date, the Ruby V2 SDK did not bundle certificates.
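
For the Python tooling specifically, checking and upgrading is straightforward; this sketch assumes pip-managed installs:

$ pip show awscli botocore | grep -i "^version"   # check the versions you have
$ pip install --upgrade awscli boto3              # upgrade to current releases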

Certificate pinning

If you are using a technique called certificate pinning to lock down the CAs you trust on a domain-by-domain basis, you must adjust your pinning to include the Amazon Trust Services CAs. Certificate pinning helps defend you from an attacker using misissued certificates to fool an application into creating a connection to a spoofed host (an illegitimate host masquerading as a legitimate host). The restriction to a specific, pinned certificate is made by checking that the certificate issued is the expected certificate. This is done by checking that the hash of the certificate public key received from the server matches the expected hash stored in the application. If the hashes do not match, the code stops the connection.

AWS recommends against using certificate pinning because it introduces a potential availability risk. If the certificate to which you pin is replaced, your application will fail to connect. If your use case requires pinning, we recommend that you pin to a CA rather than to an individual certificate. If you are pinning to an Amazon Trust Services CA, you should pin to all CAs shown in the table earlier in this post.
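
If your use case does require pinning, note that the hashes in the table earlier in this post are SHA-256 digests of each CA's subject public key information (SPKI). You can compute the same digest for any certificate with openssl; ca.pem below is a placeholder for a downloaded CA certificate:

$ openssl x509 -in ca.pem -pubkey -noout \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256   # hex SPKI hash, comparable to the table above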

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this post, start a new thread on the ACM forum.

– Jonathan

Ben’s Raspberry Pi Twilight Zone pinball hack

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/twilight-zone-pinball-display/

When Ben North’s nine-year-old son wanted him to watch his pinball games at the same time as Ben was supposed to be doing the housework, Ben came up with a brilliant hack. He decided to investigate the inner workings of his twenty-year-old Twilight Zone pinball machine and convert its score display data into a video stream he could keep an eye on while working.

Ben North Raspberry Pi Twilight Zone Pinball

Ben ended up with this. Read on to find out how…

Dad? Dad! DAD!!

Kids love sharing their achievements. That’s a given. And so, after Ben introduced his son Zach to his beloved pinball machine, Zach wanted his dad to witness his progress. However, at some point Ben had to get back to the dull reality of adulting.

My son Zach, now 9, has been steadily getting better at [playing pinball], and is keen for me to watch his games. So he and I wanted a way for me to keep an eye on how his game is going, while I do other jobs elsewhere.

The two of them thought that, with the right tools and some fiddling, they could hijack the machine’s score information on its way to the dot matrix display and divert it to a computer. “One way to do this would be to set up a webcam.” Ben explains on his blog, “But where’s the fun in that?”

Twilight Zone pinball wizardry

After researching how the dot matrix display receives and displays the score data, Ben and Zach figured out how to fetch its output using a 16-channel USB logic analyser. Then they dove into learning how to convert the data the logic analyser outputs back into images.

Ben North Raspberry Pi Twilight Zone Pinball

“Exploring in more detail confirmed that the data looked reasonable. We could see well-distinguished frames and rows, and within each row, the pixel data had a mixture of high (lit pixel) and low (dark pixel).”

After Ben managed to convert the signals of one frame into a human-readable pixel image, it was time to think about the hardware that could do this conversion in real time. Though he and Zach were convinced they would have to build custom hardware to complete their project, they decided to first give the Raspberry Pi a go. And it turned out that the Pi was up to the challenge!

Ben North Raspberry Pi Twilight Zone Pinball - example output

“By an amazing coincidence, the [first] frame I decoded was one showing that I am the current Lost In The Zone champion.”

To decode the first frame, Ben had written a Python script. However, he wrote the program that produces the live score stream in C++, since that language is better at handling high-speed input and output. To make sure Zach would learn from the experience, Ben explained the how and why of the program to him.

I talked through with Zach what the program needed to do — detect clock edges, sample pixel data, collect rows, etc. — but then he left me to do ‘all the boring typing’.

Ben used various pieces of open-source software while working on this project, including the sigrok suite for signal analysis and the multimedia framework gstreamer for handling the live video stream from the Raspberry Pi.
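
For anyone wanting to try something similar, a sigrok-cli capture might look roughly like the following; the driver, sample rate, channel names, and duration are guesses you would match to your own logic analyser and the display's clock speed:

$ sigrok-cli --driver fx2lafw --config samplerate=4m \
    --channels D0-D7 --time 2s --output-file capture.sr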

Find more information about the Twilight Zone pinball build, including a lot of technical details and the code itself, on Ben’s blog.

Worthy self-promotion from Ben

“I also did an FPGA project to replicate some of the Colossus code-breaking machine used in Bletchley Park during World War II,” explained Ben in our recent emails, “with a Raspberry Pi as the host.”

Colossus computer Twilight Zone Pinball

The original Colossus, not Ben’s.
Image c/o Wikipedia

As a bit of a history nerd myself, I think this is beyond cool. And if, like me, you’d like to learn more, check out the link here.

The post Ben’s Raspberry Pi Twilight Zone pinball hack appeared first on Raspberry Pi.