Tag Archives: displays

CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask. [Click on any of the images to enlarge them.]

Coolest Projects 2017 LED ping-pong ball display
Coolest Projects 2017 Benjamin and Oly

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017 Aimee's cook book
Coolest Projects 2017 Aimee's setup

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Coolest Projects 2017 face detection bear
Coolest Projects 2017 face detection interface
Coolest Projects 2017 face detection database

Helen and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project, built with Python and GPIO Zero and shown at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017 coolest robot no.1
Coolest Projects 2017 coolest robot no.2
Coolest Projects 2017 coolest robot no.3

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017 Mogamad, Daniel, Basheerah, Oly
Coolest Projects 2017 Alexa text-based game

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017 Lego home alone house
Coolest Projects 2017 Lego home alone innards
Coolest Projects 2017 Lego home alone innards closeup

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017 NI Jam DOTS activity 1
Coolest Projects 2017 NI Jam DOTS activity 2
Coolest Projects 2017 NI Jam DOTS activity 3
Coolest Projects 2017 NI Jam DOTS activity 4
Coolest Projects 2017 NI Jam DOTS activity 5
Coolest Projects 2017 NI Jam DOTS activity 6

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017 winning team 1
Coolest Projects 2017 winning team 2
Coolest Projects 2017 winning team 3

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

Manage Instances at Scale without SSH Access Using EC2 Run Command

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/manage-instances-at-scale-without-ssh-access-using-ec2-run-command/

The guest post below, written by Ananth Vaidyanathan (Senior Product Manager for EC2 Systems Manager) and Rich Urmston (Senior Director of Cloud Architecture at Pegasystems) shows you how to use EC2 Run Command to manage a large collection of EC2 instances without having to resort to SSH.

Jeff;


Enterprises often have several managed environments and thousands of Amazon EC2 instances. It’s important to manage systems securely, without the headaches of Secure Shell (SSH). Run Command, part of Amazon EC2 Systems Manager, allows you to run remote commands on instances (or groups of instances using tags) in a controlled and auditable manner. It’s been a nice added productivity boost for Pega Cloud operations, which rely daily on Run Command services.

You can control Run Command access through standard IAM roles and policies, define documents that take input parameters, and control the S3 bucket used to return command output. You can also share your documents with other AWS accounts, or with the public. All in all, Run Command provides a nice set of remote management features.

Better than SSH
Here’s why Run Command is a better option than SSH and why Pegasystems has adopted it as their primary remote management tool:

Run Command Takes Less Time – Securely connecting to an instance requires a few steps: finding a jumpbox to connect through, whitelisting IP addresses, and so on. With Run Command, cloud ops engineers can invoke commands directly from their laptops and never have to find keys or even instance IDs. Instead, system security relies on AWS authentication, IAM roles, and policies.

Run Command Operations are Fully Audited – With SSH, there is no real control over what users can do, nor is there an audit trail. With Run Command, every invoked operation is audited in CloudTrail, including information on the invoking user, the instances on which the command was run, the parameters, and the operation status. You have full control and the ability to restrict which functions engineers can perform on a system.

Run Command Has No SSH Keys to Manage – Run Command leverages standard AWS credentials, API keys, and IAM policies. Through integration with a corporate authentication system, engineers can interact with systems based on their corporate credentials and identity.

Run Command can Manage Multiple Systems at the Same Time – Simple tasks such as checking the status of a Linux service or retrieving a log file across a fleet of managed instances are cumbersome using SSH. Run Command allows you to specify a list of instances by IDs or tags, and invokes your command, in parallel, across the specified fleet. This provides great leverage when troubleshooting or managing anything beyond the smallest Pega clusters.

Run Command Makes Automating Complex Tasks Easier – Standardizing operational tasks requires detailed procedure documents or scripts describing the exact commands. Managing or deploying these scripts across the fleet is cumbersome. Run Command documents provide an easy way to encapsulate complex functions, and handle document management and access controls. When combined with AWS Lambda, documents provide a powerful automation platform to handle any complex task.

Example – Restarting a Docker Container
Here is an example of a simple document used to restart a Docker container. It takes one parameter: the name of the Docker container to restart. It uses the aws:runShellScript plugin to run the command. The output is collected automatically by the service and returned to the caller. For an example of the latest document schema, see Creating Systems Manager Documents.

{
  "schemaVersion":"1.2",
  "description":"Restart the specified docker container.",
  "parameters":{
    "param":{
      "type":"String",
      "description":"(Required) name of the container to restart.",
      "maxChars":1024
    }
  },
  "runtimeConfig":{
    "aws:runShellScript":{
      "properties":[
        {
          "id":"0.aws:runShellScript",
          "runCommand":[
            "docker restart {{param}}"
          ]
        }
      ]
    }
  }
}
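
For reference, a document like this can also be invoked directly through the Systems Manager API. Here is a rough boto3 sketch; the document name, instance ID, and output bucket are placeholders, not Pega's actual values:

# Sketch only: invoke the restart document above on one instance with boto3.
import time
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],        # hypothetical instance ID
    DocumentName="RestartDockerContainer",      # hypothetical name the document was saved under
    Parameters={"param": ["pega-web"]},         # maps to the "param" parameter above
    OutputS3BucketName="my-runcommand-output",  # optional: bucket for command output
)
command_id = response["Command"]["CommandId"]

# Give the command a moment to start, then fetch the result for that instance.
time.sleep(2)
result = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(result["Status"], result["StandardOutputContent"])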

Putting Run Command into practice at Pegasystems
The Pegasystems provisioning system sits on AWS CloudFormation, which is used to deploy and update Pega Cloud resources. Layered on top of it is the Pega Provisioning Engine, a serverless, Lambda-based service that manages a library of CloudFormation templates and Ansible playbooks.

A Configuration Management Database (CMDB) tracks all the configuration details and history of every deployment and update, and lays out its data using a hierarchical directory naming convention. The following diagram shows how the various systems are integrated:

For cloud system management, Pega operations uses a command line version called cuttysh and a graphical version based on the Pega 7 platform, called the Pega Operations Portal. Both tools allow you to browse the CMDB of deployed environments, view configuration settings, and interact with deployed EC2 instances through Run Command.

CLI Walkthrough
Here is a CLI walkthrough for looking into a customer deployment and interacting with instances using Run Command.

Launching the cuttysh tool brings you to the root of the CMDB and a list of the provisioned customers:

% cuttysh
d CUSTA
d CUSTB
d CUSTC
d CUSTD

You interact with the CMDB using standard Linux shell commands, such as cd, ls, cat, and grep. Items prefixed with s are services that have viewable properties. Items prefixed with d are navigable subdirectories in the CMDB hierarchy.

In this example, change directories into customer CUSTB’s portion of the CMDB hierarchy, and then further into a provisioned Pega environment called env1, under the Dev network. The tool displays the artifacts that are provisioned for that environment. These entries map to provisioned CloudFormation templates.

> cd CUSTB
/ROOT/CUSTB/us-east-1 > cd DEV/env1

The ls -l command shows the version of the provisioned resources. These version numbers map back to source control–managed artifacts for the CloudFormation, Ansible, and other components that compose a version of the Pega Cloud.

/ROOT/CUSTB/us-east-1/DEV/env1 > ls -l
s 1.2.5 RDSDatabase 
s 1.2.5 PegaAppTier 
s 7.2.1 Pega7 

Now, use Run Command to interact with the deployed environments. To do this, use the attach command and specify the service with which to interact. In the following example, you attach to the Pega Web Tier. Using the information in the CMDB and instance tags, the CLI finds the corresponding EC2 instances and displays some basic information about them. This deployment has three instances.

/ROOT/CUSTB/us-east-1/DEV/env1 > attach PegaWebTier
 # ID         State  Public Ip    Private Ip  Launch Time
 0 i-0cf0e84 running 52.63.216.42 10.96.15.70 2017-01-16 
 1 i-0043c1d running 53.47.191.22 10.96.15.43 2017-01-16 
 2 i-09b879e running 55.93.118.27 10.96.15.19 2017-01-16 

From here, you can use the run command to invoke Run Command documents. In the following example, you run the docker-ps document against instance 0 (the first one on the list). EC2 executes the command and returns the output to the CLI, which in turn shows it.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-ps
. . 
CONTAINER ID IMAGE             CREATED      STATUS        NAMES
2f187cc38c1  pega-7.2         10 weeks ago  Up 8 weeks    pega-web

Using the same command and some of the other documents that have been defined, you can restart a Docker container or even pull back the contents of a file to your local system. When you get a file, Run Command also leaves a copy in an S3 bucket in case you want to pass the link along to a colleague.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-restart pega-web
..
pega-web

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 get-file /var/log/cfn-init-cmd.log
. . . . . 
get-file

Data has been copied locally to: /tmp/get-file/i-0563c9e/data
Data is also available in S3 at: s3://my-bucket/CUSTB/cuttysh/get-file/data

Now, leverage the Run Command ability to do more than one thing at a time. In the following example, you attach to a deployment with three running instances and want to see the uptime for each instance. Using the par (parallel) option for run, the CLI tells Run Command to execute the uptime document on all instances in parallel.

/ROOT/CUSTB/us-east-1/DEV/env1 > run par uptime
 …
Output for: i-006bdc991385c33
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.42, 0.32, 0.30

Output for: i-09390dbff062618
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.08, 0.19, 0.22

Output for: i-08367d0114c94f1
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.36, 0.40, 0.40

Commands are complete.
/ROOT/PEGACLOUD/CUSTB/us-east-1/PROD/prod1 > 
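
The cuttysh wrapper hides the underlying API calls; expressed directly against the API with boto3, the same kind of parallel fan-out might look roughly like this (the tag key and value are assumptions, not Pega's actual tagging scheme):

# Sketch: target a fleet by tag instead of instance IDs, then collect each invocation.
import boto3

ssm = boto3.client("ssm")

cmd = ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["CUSTB-DEV-env1"]}],  # hypothetical tag
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime"]},
)

# Invocations appear once the command starts on each instance.
invocations = ssm.list_command_invocations(
    CommandId=cmd["Command"]["CommandId"],
    Details=True,
)
for inv in invocations["CommandInvocations"]:
    print(inv["InstanceId"], inv["Status"])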

Summary
Run Command improves productivity by giving you faster access to systems and the ability to run operations across a group of instances. Pega Cloud operations has integrated Run Command with other operational tools to provide a clean and secure method for managing systems. This greatly improves operational efficiency, and gives greater control over who can do what in managed deployments. The Pega continual improvement process regularly assesses why operators need access, and turns those operations into new Run Command documents to be added to the library. In fact, their long-term goal is to stop deploying cloud systems with SSH enabled.

If you have any questions or suggestions, please leave a comment for us!

— Ananth and Rich

EtherApe – Graphical Network Monitor

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/DxSK15EgI5k/

EtherApe is a graphical network monitor for Unix modelled after etherman. Featuring link layer, IP and TCP modes, it displays network activity graphically. Hosts and links change in size with traffic, and protocols are colour-coded. It supports Ethernet, FDDI, Token Ring, ISDN, PPP, SLIP and WLAN devices, plus several encapsulation formats. It can…

Read the full post at darknet.org.uk

Tinkernut’s do-it-yourself Pi Zero audio HAT

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-diy-pi-zero-audio/

Why buy a Raspberry Pi Zero audio HAT when Tinkernut can show you how to make your own?

Adding Audio Output To The Raspberry Pi Zero – Tinkernut Workbench

The Raspberry Pi Zero W is an amazing miniature computer piece of technology. I want to turn it into an epic portable Spotify radio that displays visuals such as Album Art. So in this new series called “Tinkernut Workbench”, I show you step by step what it takes to build a product from the ground up.

Raspberry Pi Zero audio

Unlike their grown-up siblings, the Pi Zero and Zero W lack an onboard audio jack, but that doesn’t stop you from getting audio out of them. Various audio HATs exist on the market, from Adafruit, Pimoroni and Pi Supply to name a few, providing easy audio output for the Zero. But where would the fun be in a Tinkernut video that shows you how to attach a HAT?

Tinkernut Pi Zero Audio

“Take this audio HAT, press it onto the header pins and, errr, done? So … how was your day?”

DIY Audio: Tinkernut style

For the first video in his ‘Hipster Spotify Radio using a Raspberry Pi’ Tinkernut Workbench series, Tinkernut – real name Daniel Davis – goes through the steps of researching, prototyping and finishing his own audio HAT for his newly acquired Raspberry Pi Zero W.

The build utilises the GPIO pins on the Zero W, specifically pins #18 and #13. FYI, this hidden gem of information comes from the Adafruit Pi Zero PWM Audio guide. Before he can use #18 and #13, header pins need to be soldered. If the thought of soldering pins to the Pi is somewhat daunting, check out the Pimoroni Hammer Header.

Pimoroni Hammer Header for Raspberry Pi

You’re welcome.

Once complete, with Raspbian installed on the micro SD, and SSH enabled for remote access, he’s ready to start prototyping.

Ingredients

Tinkernut uses two 270 ohm resistors, two 150 ohm resistors, two 10 μF electrolytic capacitors, two 0.01 μF polyester film capacitors, an audio jack and some wire. You’ll also need a breadboard for prototyping. For the final build, you’ll need a single row female pin header and some prototyping board, if you want to join in at home.

Tinkernut audio board Raspberry Pi Zero W

It should look like this…hopefully.

Once the prototype is up and running, playing audio through a cheap speaker (thanks to an edit of the config.txt file), the final board can be finished.
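
For reference, the Adafruit guide mentioned above routes PWM audio to GPIO 18 and GPIO 13 with a device tree overlay; the config.txt edit is along these lines (check the guide for the exact line for your board and firmware):

# /boot/config.txt – route PWM audio out on GPIO 18 and GPIO 13
dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4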

What’s next?

The audio board is just one step in the build.

Spotify is such an awesome music service. Raspberry Pi Zero is such an awesome ultra-mini computing device. Obviously, combining the two is something I must do!!! The idea here is to make something that’s stylish, portable, can play Spotify, and hopefully also display visuals such as album art.

Subscribe to Tinkernut’s YouTube channel to keep up to date with the build, and check out some of his other Raspberry Pi builds, such as his cheap 360 video camera, security camera and digital vintage camera.

Have you made your own Raspberry Pi HAT? Show it off in the comments below!

The post Tinkernut’s do-it-yourself Pi Zero audio HAT appeared first on Raspberry Pi.

An affordable ocular fundus camera

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/an-affordable-ocular-fundus-camera/

The ocular fundus is the interior surface of the eye, and an ophthalmologist can learn a lot about a patient’s health by examining it. However, there’s a problem: an ocular fundus camera can’t capture a useful image unless the eye is brightly lit, but this makes the pupil constrict, obstructing the camera’s view. Ophthalmologists use pupil-dilating eye drops to block the eye’s response to light, but these are uncomfortable and can cause blurred vision for several hours. Now, researchers at the University of Illinois at Chicago have built a Raspberry Pi-based fundus camera that can take photos of the retina without the need for eye drops.

Dr Bailey Shen and co-author Dr Shizuo Mukai made their camera with a Raspberry Pi 2 and a Pi NoIR Camera Module, a version of the Camera Module that does not have an infrared filter. They used a small LCD touchscreen display and a lithium battery, holding the ensemble together with tape and rubber bands. They also connected a button and a dual infrared/white light LED to the Raspberry Pi’s GPIO pins.

When the Raspberry Pi boots, a Python script turns on the infrared illumination from the LED and displays the camera view on the screen. The iris does not respond to infrared light, so in a darkened room the operator is able to position the camera and a separate condensing lens to produce a clear image of the patient’s fundus. When they’re satisfied with the image, the operator presses the button. This turns off the infrared light, produces a flash of white light, and captures a colour image of the fundus before the iris can respond and constrict the pupil.
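
The researchers provide their own code and wiring in the paper; purely to illustrate the flow described above, a simplified sketch using the gpiozero and picamera libraries might look like this (the pin numbers are placeholders, and the dual LED is modelled here as two separate outputs):

# Illustrative only – not the researchers' script. Pins and file paths are guesses.
from gpiozero import Button, LED
from picamera import PiCamera
from signal import pause

ir_led = LED(17)        # infrared side of the dual LED (placeholder pin)
white_led = LED(27)     # white side of the dual LED (placeholder pin)
button = Button(22)     # capture button (placeholder pin)

camera = PiCamera()
camera.start_preview()  # live view for positioning the camera and condensing lens
ir_led.on()             # infrared illumination: the iris does not react to it

def capture():
    # Switch to white light and grab the frame before the pupil constricts.
    ir_led.off()
    white_led.on()
    camera.capture('/home/pi/fundus.jpg')
    white_led.off()
    ir_led.on()

button.when_pressed = capture
pause()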

This isn’t the first ocular fundus camera to use infrared/white light to focus and obtain images without eye drops, but it is less bulky and distinctly cheaper than existing equipment, which can cost thousands of dollars. The total cost of all the parts is $185, and all but one are available as off-the-shelf components. The exception is the dual infrared/white light LED, a prototype which the researchers explain is a critical part of the equipment. Using an infrared LED and a white LED side by side doesn’t yield consistent results. We’d be glad to see the LED available on the market, both to support this particular application, and because we bet there are plenty of other builds that could use one!

Read more in Science Daily, in an article exploring the background to the project. The article is based on the researchers’ recent paper, presenting the Raspberry Pi ocular fundus camera in the Journal of Ophthalmology. The journal is an open access publication, so if you think this build is as interesting as I do, I encourage you to read the researchers’ presentation of their work, its possibilities and its limitations. They also provide step-by-step instructions and a parts list to help others to replicate and build on their work.

It’s beyond brilliant to see researchers and engineers using the Raspberry Pi to make medical and scientific work cheaper and more accessible. Please tell us about your favourite applications, or the applications you’d develop in your fantasy lab or clinic, in the comments.

The post An affordable ocular fundus camera appeared first on Raspberry Pi.

The New AWS Organizations User Interface Makes Managing Your AWS Accounts Easier

Post Syndicated from Anders Samuelsson original https://aws.amazon.com/blogs/security/the-new-aws-organizations-user-interface-makes-managing-your-aws-accounts-easier/

With AWS Organizations—launched on February 27, 2017—you can easily organize accounts centrally and set organizational policies across a set of accounts. Starting today, the Organizations console includes a tree view that allows you to manage accounts and organizational units (OUs) easily. The new view also makes it simple to attach service control policies (SCPs) to individual accounts or a group of accounts in an OU. In this post, I demonstrate some of the benefits of the new user interface.

The new tree view

The following screenshot shows an example of how an organization is displayed in the tree view on the Organize accounts tab. I have chosen the Frontend OU, and it shows that two OUs—Application 1 and Application 2—are child OUs of the Frontend OU. In the tree view, I can choose any OU and immediately view and take action on the contents of that OU. This new view makes it easier to quickly view OUs and navigate the relationships between OUs in your organization.

Screenshot of the new tree view

If you would prefer not to use the tree view, you can hide it by choosing the Tree view toggle in the upper left corner of the main pane. The following screenshot shows the console with the tree view turned off.

Screenshot of the console with the tree view hidden

You can toggle between the old view and the new tree view at any time. For the rest of this post, though, I will show the tree view.

Additional Organizations console improvements

In addition, we made a few other console improvements. First, we added more detail to the right pane when you choose an account or an OU. In the following screenshot, I have chosen the Application 1 OU in the main pane of the console and then the new Accounts heading in the right pane. As a result, I now can view the accounts that are in the OU without having to navigate into the OU. I can also remove an account from the OU by choosing Remove next to the account I want to remove.

Screenshot showing the accounts in the OU

Second, we have made it easier for you to attach SCPs to entities such as individual accounts and OUs. For example, to attach to the Application 1 OU an SCP that blocks access to Amazon Redshift, I choose Service Control Policies in the right pane. I now see a list of SCPs from which I can choose, as shown in the following screenshot.

Screenshot showing SCPs that are attached and available

The Blacklist Redshift policy is an SCP I created previously, and by choosing Attach, I attach it to the Application 1 OU.
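
The same attachment can also be scripted. Here is a minimal boto3 sketch; the policy and OU IDs are placeholders standing in for the Blacklist Redshift SCP and the Application 1 OU:

# Sketch: attach an existing SCP to an OU via the Organizations API.
import boto3

org = boto3.client("organizations")

org.attach_policy(
    PolicyId="p-examplepolicyid",   # hypothetical SCP ID
    TargetId="ou-exampleouid",      # hypothetical OU ID
)

# List the SCPs now attached to that OU.
attached = org.list_policies_for_target(
    TargetId="ou-exampleouid",
    Filter="SERVICE_CONTROL_POLICY",
)
for policy in attached["Policies"]:
    print(policy["Name"])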

The third console enhancement is in the Accounts tab. The right pane displays additional information when you choose an account. In the following screenshot, I choose the Accounts tab and then the DB backend account. In the right pane, I now see a new option: Organizational units.

Screenshot showing the new "Organizational units" choice in the right pane

When I choose Organizational units in the right pane, I see the OUs of which the chosen account is a member—in this case, Application 1. If the account should not be in that OU, I can remove it by choosing Remove next to the OU name, as shown in the following screenshot.

Screenshot showing the OUs of which the account is a member

We have also made it possible to attach SCPs to accounts in this view. When I choose Service Control Policies in the right pane, I see a list of all SCPs in my organization. The list is organized such that all the policies that are directly attached to the account are at the top of the list. You can detach any of these policies by choosing Detach next to the policy.

At the bottom of the list, I see the SCPs that I can attach to accounts. To do this, I choose Attach next to a policy. In the following screenshot, the Blacklist Redshift SCP can be attached directly to the account. However, when I look at the policies that are indirectly attached to the account via OUs, I see that the Blacklist Redshift SCP is already attached via the Application 1 OU. This means it is not necessary for me to attach this SCP directly to the DB backend account.

Screenshot showing that the Blacklist Redshift SCP is already attached via the Application 1 OU

Summary

The new Organizations user interface makes it easier for you to manage your accounts and OUs as well as attach SCPs to accounts. To get started, sign in to the Organizations console.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Organizations forum.

– Anders

Amazon CloudWatch launches Alarms on Dashboards

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/amazon-cloudwatch-launches-alarms-on-dashboards/

Amazon CloudWatch is a service that gives customers the ability to monitor their applications, systems, and solutions running on Amazon Web Services by providing and collecting metrics, logs, and events about AWS resources in real time. CloudWatch automatically provides key resource measurements such as latency, error rates, and CPU usage, while also enabling monitoring of custom metrics via customer-supplied logs and system data.

Last November, Amazon CloudWatch added new Dashboard Widgets to provide additional data visualization options for all available metrics. In order to provide customers with even more insight into their solutions and resources running on AWS, CloudWatch has launched Alarms on Dashboards. With this alarms enhancement, customers can view alarms and metrics in the same dashboard widget enabling them to perform data-driven troubleshooting and analysis.

CloudWatch dashboards are designed with a goal of providing better visibility when monitoring AWS resources across regions in a consolidated view. Since CloudWatch dashboards are highly customizable, users can create their own custom dashboards to graphically represent data for varying metrics such as utilization, performance, estimated billing, and now alarm conditions. An alarm tracks a single metric over time based on the value of the metric in relation to a specified threshold. When the alarm state changes, an action is executed, such as applying an Auto Scaling policy or sending a notification to Amazon SNS, among other options.

With the ability to add alarms to dashboards, CloudWatch users have another mechanism to proactively monitor and receive alerts about their AWS resources and applications across multiple regions. In addition, the metric data associated with an alarm, which has been added to a dashboard, can be charted and reviewed. Alarms have three possible states:

  • OK: The value of the alarm metric does not meet the threshold
  • INSUFFICIENT DATA: The alarm has just been created, or the alarm metric does not have enough data to determine whether it’s in the OK state or the ALARM state
  • ALARM: The value of the alarm metric meets the threshold

When added to a dashboard, alarms are displayed in red when in the ALARM state, in gray when in the INSUFFICIENT DATA state, and with no color fill when in the OK state. Alarms added to a dashboard are supported with the following widgets: Line, Number, and Stacked Graph widgets.

  • Number widget: provides a quick and efficient view of the latest value of any desired metric. When used with an alarm, the widget’s background color reflects the state of the alarm for the latest metric data.
  • Line widget: allows the visualization of the actual values of any collection of chosen metrics. When used with an alarm, it displays the alarm threshold and condition as a horizontal line, which acts as a good indicator of how close the metric is to the alarm condition.
  • Stacked graph widget: allows customers to visualize the net total effect of any collection of chosen metrics. The stacked graph widget loads one metric over another in order to illustrate the distribution and contribution of each metric, and has the option to display the contributions as percentages. When used with an alarm, it also displays the alarm threshold and condition as a horizontal line.

Support for adding multiple metrics to the same widget as an alarm is currently in the works, and this feature is evolving based on customer feedback.

Adding Alarms on Dashboards

Let’s take a quick look at using alarms on a CloudWatch dashboard. In the AWS Management Console, I go to the CloudWatch service and select Dashboards. I click the Create dashboard button and create the CloudWatchBlog dashboard.

 

Upon creation of my CloudWatchBlog dashboard, a dialog box will open to allow me to add widgets to the dashboard. I will forego adding widgets for now since I want to focus on adding alarms on my dashboard. Therefore, I will hit the Cancel button here and go to the Alarms section of the CloudWatch console.

Once in the Alarms section of the CloudWatch console, you will see all of your alarms and the state of each of the alarms for the current region displayed.

As mentioned earlier, there are three possible alarm states, and as you can see in my console above, alarms in all of the different states are displayed. If desired, you can adjust the filter on the console to display alarms filtered by alarm state type.

As an example, I am only interested in viewing the alarms with an alarm state of ALARM. Therefore, I will adjust the filter to show only the alarms in the current region with an alarm state as ALARM.

Now only the two alarms that have a current alarm state of ALARM are displayed. One of these alarms is for monitoring the provisioned write capacity units of an Amazon DynamoDB table, and the other is to monitor the CPU utilization of my active Amazon Elasticsearch instance.

Let’s examine the scenario in which I leverage my CloudWatchBlog dashboard as my troubleshooting mechanism for identifying and diagnosing issues with my Elasticsearch solution and its instances. I will first add the Amazon Elasticsearch CPU utilization alarm, ES Alarm, to my CloudWatchBlog dashboard. To add the alarm, I simply select the checkbox by the desired alarm, which in this case is ES Alarm. Then with the alarm selected, I click the Add to Dashboard button.

The Add to dashboard dialog box will open, allowing me to select my CloudWatchBlog dashboard. Additionally, I can select the widget type I would like to use for the display of my alarm. For the ES Alarm, I will choose the Line widget and complete the process of adding this alarm to my dashboard by clicking the Add to dashboard button.

Upon successfully adding ES Alarm to the CloudWatchBlog dashboard, you will see a confirmation notice displayed in the CloudWatch console.

If I then go to the Dashboard section of the console and select my CloudWatchBlog dashboard, I will see the line widget for my alarm, ES Alarm, on the dashboard. To ensure that my ES Alarm widget is a permanent part of the dashboard, I will click the Save dashboard button to preserve the addition of this widget on the dashboard.

As we discussed, one of the benefits of utilizing a CloudWatch dashboard is the ability to add several alarms from various regions onto a dashboard. Since my scenario is leveraging my dashboard as a troubleshooting mechanism for my Elasticsearch solution, I would like to have several alarms and metrics related to my solution displayed on the CloudWatchBlog dashboard. Given this, I will create another alarm for my Elasticsearch instance and add it to my dashboard.

I will first return to the Alarms section of the console and click the Create Alarm button.

The Create Alarm dialog box is displayed showing all of the current metrics available in this region. From the summary, I can quickly see that there are 21 metrics being tracked for Elasticsearch. I will click on the ES Metrics link to view the individual metrics that can be used to create my alarm.

I can review the individual metrics shown for my Elasticsearch instance, and choose which metric I want to base my new alarm on. In this case, I choose the WriteLatency metric by selecting the checkbox for this metric and then click the Next button.

 

The next screen is where I fill in all the details about my alarm: name, description, alarm threshold, time period, and alarm action. I will name my new alarm, ES Latency Alarm, and complete the rest of the aforementioned data fields. To complete the creation of my new alarm, I click the Create Alarm button.
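
If you prefer to create this kind of alarm programmatically, a boto3 sketch along these lines would do it (the account ID and threshold are placeholders):

# Sketch only: create a WriteLatency alarm for an Elasticsearch domain with boto3.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ES Latency Alarm",
    AlarmDescription="Write latency on the Elasticsearch domain is too high",
    Namespace="AWS/ES",
    MetricName="WriteLatency",
    Dimensions=[
        {"Name": "DomainName", "Value": "taratestdomain"},
        {"Name": "ClientId", "Value": "111122223333"},   # account ID placeholder
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100.0,                      # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
)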

I will see a confirmation message box at the top of the Alarms console upon successful completion of adding the alarm, and the status of the newly created alarm will be displayed in the alarms list.

Now I will add my ES Latency Alarm to my CloudWatchBlog dashboard. Again, I click on the checkbox by the alarm and then click the Add to Dashboard button.

This time when the Add to Dashboard dialog comes up, I will choose the Stacked area widget to display the ES Latency Alarm on my CloudWatchBlog dashboard. Clicking the Add to Dashboard button will complete the addition of my ES Latency Alarm widget to the dashboard.

Once back in the console, again I will see the confirmation noting the successful addition of the widget. I go to the Dashboards and click on the CloudWatchBlog dashboard and I can now view the two widgets in my dashboard. To include this widget in the dashboard permanently, I click the Save dashboard button.

The final thing to note about the new CloudWatch feature, Alarms on Dashboards, is that alarms and metrics from other regions can be added to the dashboard for a complete view for troubleshooting. Let’s add a metric to the dashboard with the alarms widget.

Within the console, I will move from my current region, US East (Ohio), to the US East (N. Virginia) region.

Now I will go to the Metric section of the CloudWatch console. This section displays the metrics from services used in the US East (N. Virginia) region.

My Elasticsearch solution triggers Lambda functions to capture all of the EmployeeInfo DynamoDB database CRUD (Create, Read, Update, Delete) changes via DynamoDB streams and write those changes into my Elasticsearch domain, taratestdomain. Therefore, I will add metrics to my CloudWatchBlog dashboard to track table metrics from DynamoDB.

Specifically, I am going to add the EmployeeInfo table’s ProvisionedWriteCapacityUnits metric to my CloudWatchBlog dashboard.

Back again in the Add to Dashboard dialog, I will select my CloudWatchBlog dashboard and choose to display this metric using the Number widget.

Now the ProvisionedWriteCapacityUnits metric from US East (N. Virginia) is displayed in the CloudWatchBlog dashboard with the Number widget, alongside the alarms from US East (Ohio). To make this update permanent in the dashboard, I will (you guessed it!) click the Save dashboard button.

Summary

Getting started with alarms on dashboards is easy. You can use alarms on dashboards across regions as another means of proactive monitoring, build troubleshooting playbooks, and view your desired metrics. You can also choose the metric first in the Metric UI and then change the type of widget according to the visualization that fits the metric.

Alarms on Dashboards are supported on Line, Stacked Area, and Number widgets. In addition, you can use Text widgets next to alarms on a dashboard to add steps or observations on how to handle changes in the alarm state. To learn more about Amazon CloudWatch widgets and about the additional dashboard widgets, visit the Amazon CloudWatch documentation and the CloudWatch Getting Started guide.

 

Tara

Build the wall: a huge wall (of LEDs)

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/giant-led-wall/

London-based Solid State Group decided to combine Slack with a giant LED wall, and created a thing of beauty for their office.

Solid State LED wall Raspberry Pi

“BUILD THE WALL we cried aloud. Well, in truth, we shouted this over Slack, until the pixel wall became a thing.” Niall Quinn, Solid State Group.

Flexing the brain

The project, named RIO (Rendered-Input-Output), took its inspiration from Google Creative Lab’s anypixel.js project. An open source software and hardware library, anypixel.js boasts the ability to ‘create big, unusual, interactive displays out of all kinds of things’ such as arcade buttons and balloons.

Every tech company has side projects and Solid State is no different. It keeps devs motivated and flexes the bits of the brain sometimes not quite reached by day-to-day coding. Sometimes these side projects become products, sometimes we crack open a beer and ask “what the hell were we thinking”, but always we learn something – about the process, and perhaps ourselves.

Niall Quinn, Solid State Group

To ‘flex the bits of the brain’, the team created their own in-house resource. Utilising Slack as their interface, they were able to direct images, GIFs and video over the internet to the Pi-powered LED wall.

Solid State LED wall Raspberry Pi

Bricks and mortar

They developed an API for ‘drawing’ each pixel of the content sent to the wall, converting it to match the pixels of the display.

After experimenting with the code on a small 6×5 pixel replica, the final LED wall was built using WS2812B RGB strips. With 2040 LEDs in total to control, higher RAM and power requirements called for the team to replace their microcontroller. Enter the Raspberry Pi.

Solid State LED wall Raspberry Pi

“With more pixels come more problems. LEDs gobble up RAM and draw a lot of power, so we switched from an Arduino to a Raspberry Pi, and got ourselves a pretty hefty power supply.”
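
Solid State’s own code lives on their GitHub page; purely as an illustration of driving a WS2812B strip of this size from Python on a Raspberry Pi, a minimal sketch using the rpi_ws281x library might look something like this (the GPIO pin and colour are assumptions, not their setup):

# Minimal WS2812B test pattern – not Solid State's RIO code. Run as root.
from rpi_ws281x import PixelStrip, Color

LED_COUNT = 2040      # total LEDs on the wall
LED_PIN = 18          # PWM-capable GPIO pin (assumption)

strip = PixelStrip(LED_COUNT, LED_PIN)
strip.begin()

# Fill the whole wall with a dim blue to check wiring and power.
for i in range(strip.numPixels()):
    strip.setPixelColor(i, Color(0, 0, 32))
strip.show()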

Alongside the Slack-to-wall image sharing, Solid State also developed their own mobile app. This app used the HTML5 canvas element to draw data for the wall. The app enabled gaming via a SNES-style controller, a live drawing application, a messaging function and live preview capabilities.

Build your own LED wall

If you’re planning on building your own LED wall, whether for an event, a classroom, an office or a living room, the Solid State team have shared the entire project via their GitHub page. To read a full breakdown of the build, make sure you visit their blog. And if you do build your own, or have done already, make sure to share it in the comments below.

The post Build the wall: a huge wall (of LEDs) appeared first on Raspberry Pi.

Move Over JSON – Policy Summaries Make Understanding IAM Policies Easier

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/move-over-json-policy-summaries-make-understanding-iam-policies-easier/

Today, we added policy summaries to the IAM console, making it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. Instead of reading JSON policy documents, you can scan a table that summarizes services, actions, resources, and conditions for each policy. You can find this summary on the policy detail page or the Permissions tab on an individual IAM user’s page.

In this blog post, I introduce policy summaries and review the details of a policy summary.

How to read a policy summary

The following screenshot shows an example policy summary. The table provides you with an at-a-glance view of each service’s granted access level, resources, and conditions.

The columns in a policy summary are defined this way:

  • Service – The Amazon services defined in the policy. Click each service name to see the specific actions granted for the service.
  • Access level – Actions defined for each service in the policy (I provide more details below).
  • Resource – The resources defined for each service in the policy. This column displays one of the following values:
    • All resources – Access is granted or denied to all resources in the service.
    • Multiple – Some but not all of the resources are granted or denied in the service.
    • Amazon Resource Name (ARN) – The policy defines one resource in the service. You will see the actual ARN displayed for one resource.
  • Request condition – The conditions defined for each service. Conditions can be global conditions or conditions specific to the service. This column displays one of the following values:
    • None – No conditions are defined for the service.
    • Multiple – Multiple conditions are defined for the service.
    • Condition – One condition is defined for the service and applies to all actions defined in the policy for the service. You will see the condition defined in the policy in the table. For example, the preceding screenshot shows a condition for Amazon Elastic Beanstalk.

If you prefer reading and managing policies in JSON, choose View and edit JSON above the policy summary to see the policy in JSON.

Before I go over an example of a policy summary, I will explain access levels in more detail, a new concept we introduced with policy summaries.

Access levels in policy summaries

To help you understand the permissions defined in a policy, each AWS service’s actions are categorized into four access levels: List, Read, Write, and Permissions management. The following list defines the access levels and provides examples using Amazon S3 actions. Full and Limited further qualify the access levels for each service. Full refers to all the actions within an access level, and Limited refers to at least one but not all actions in an access level. Note: You can see the complete list of actions and access levels for all services in the AWS IAM Policy Actions Grouped by Access Level documentation.

  • List – Actions that allow you to see a list of resources. Examples: s3:ListBucket, s3:ListAllMyBuckets
  • Read – Actions that allow you to read the content in resources. Examples: s3:GetObject, s3:GetBucketTagging
  • Write – Actions that allow you to create, delete, or modify resources. Examples: s3:PutObject, s3:DeleteBucket
  • Permissions management – Actions that allow you to grant or modify permissions to resources. Example: s3:PutBucketPolicy

Note: Not all AWS services have actions in all access levels.

In the following screenshot, the access level for S3 is Full access, which means the policy permits all actions of the S3 List, Read, Write, and Permissions management access levels. The access level for EC2 is Full: List, Read and Limited: Write, meaning that the policy grants all actions of the List and Read access levels, but only a portion of the actions of the Write access level. You can view the specific actions defined in the policy by choosing the service in the policy summary.

Reviewing a policy summary in detail

Let’s look at a policy summary in the IAM console. Imagine that Alice is a developer on my team who analyzes data and generates quarterly reports for our finance team. To grant her the permissions she needs, I have added her to the Data_Analytics IAM group.

To see the policies attached to user Alice, I navigate to her user page by choosing her user name on the Users page of the IAM console. The following screenshot shows that Alice has 3 policies attached to her.

I will review the permissions defined in the Data_Analytics policy, but first, let’s look at the JSON syntax for the policy so that you can compare the different views.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "autoscaling:*",
            "ec2:CancelSpotInstanceRequests",
            "ec2:CancelSpotFleetRequests",
            "ec2:CreateTags",
            "ec2:DeleteTags",
            "ec2:Describe*",
            "ec2:ModifyImageAttribute",
            "ec2:ModifyInstanceAttribute",
            "ec2:ModifySpotFleetRequest",
            "ec2:RequestSpotInstances",
            "ec2:RequestSpotFleet",
            "elasticmapreduce:*",
            "es:Describe*",
            "es:List*",
            "es:Update*",
            "es:Add*",
            "es:Create*",
            "es:Delete*",
            "es:Remove*",
            "lambda:Create*",
            "lambda:Delete*",
            "lambda:Get*",
            "lambda:InvokeFunction",
            "lambda:Update*",
            "lambda:List*"
        ],
        "Resource": "*"
    },

    {
        "Effect": "Allow",
        "Action": [
            "s3:ListBucket",
            "s3:GetObject",
            "s3:PutObject",
            "s3:PutBucketPolicy"
        ],
        "Resource": [
            "arn:aws:s3:::DataTeam"
        ],
        "Condition": {
            "StringLike": {
                "s3:prefix": [
                    "Sales/*"
                ]
            }
        }
    }, 
	{
        "Effect": "Allow",
        "Action": [
            "elasticfilesystem:*"
        ],
        "Resource": [
            "arn:aws:elasticfilesystem:*:111122223333:file-system/2017sales"
        ]
    }, 
	{
        "Effect": "Allow",
        "Action": [
            "rds:*"
        ],
         "Resource": [
            "arn:aws:rds:*:111122223333:db/mySQL_Instance"
        ]
    }, 
	{
        "Effect": "Allow",
        "Action": [
            "dynamodb:*"
         ],
        "Resource": [
            "arn:aws:dynamodb:*:111122223333:table/Sales_2017Annual"
        ]
    }, 
	{
        "Effect": "Allow",
        "Action": [
            "iam:GetInstanceProfile",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:GetPolicyVersion",
            "iam:ListRoles"
        ],
        "Condition": {
            "IpAddress": {
                "aws:SourceIp": "192.0.0.8"
            }
        },
        "Resource": [
            "*"
        ]
    }
  ]
}
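
As an aside, if you would rather pull this JSON through the API than read it in the console, a short boto3 sketch looks like this (the policy ARN is a placeholder for the Data_Analytics policy):

# Sketch: fetch the default version of a managed policy with boto3.
import boto3

iam = boto3.client("iam")

policy_arn = "arn:aws:iam::111122223333:policy/Data_Analytics"  # placeholder ARN
policy = iam.get_policy(PolicyArn=policy_arn)
version_id = policy["Policy"]["DefaultVersionId"]

version = iam.get_policy_version(PolicyArn=policy_arn, VersionId=version_id)
print(version["PolicyVersion"]["Document"])   # the JSON shown above, as a dict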

To view the policy summary, I can either choose the policy name, which takes me to the policy’s page, or I can choose the arrow next to the policy name, which expands the policy summary on Alice‘s user page. The following screenshot shows the policy summary of the Data_Analytics policy that is attached to Alice.

Looking at this policy summary, I can see that Alice has access to multiple services with different access levels. She has Full access to Amazon EMR, but only Limited List and Limited Read access to IAM. I can also see the high-level summary of resources and conditions granted for each service. In this policy, Alice can access only the 2017sales file system in Amazon EFS and a single Amazon RDS instance. She has access to Multiple Amazon S3 buckets and Amazon DynamoDB tables. Looking at the Request condition column, I see that Alice can access IAM only from a specific IP range. To learn more about the details for resources and request conditions, see the IAM documentation on Understanding Policy Summaries in the AWS Management Console.

In the policy summary, to see the specific actions granted for a service, I choose a service name. For example, when I choose Elasticsearch, I see all the actions organized by access level, as shown in the following screenshot. In this case, Alice has access to all Amazon ES resources and has no request conditions.

Some exceptions

For policies that are complex or contain unrecognized actions, the policy summary may not be able to generate a simple, human-readable table. For these edge cases, we will continue to show the JSON policy without the policy summary.

For policies that include Deny statements, you will see a separate table that shows the permissions that the policy explicitly denies. You can see an example of a policy summary that includes both an Allow statement and a Deny statement in our documentation.

Conclusion

To see policy summaries in your AWS account, sign in to the IAM console and navigate to any managed policy on the Policies page of the IAM console or the Permissions tab on a user’s page. Policy summaries make it easy to scan for certain permissions, such as quickly identifying who has Full access or Permissions management privileges. You can also compare policies to determine which policies define conditions or specify resources for better security posture.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, please start a new thread on the IAM forum.

– Joy

S3 Storage Management Update – Analytics, Object Tagging, Inventory, and Metrics

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/s3-storage-management-update-analytics-object-tagging-inventory-and-metrics/

Today I would like to tell you about four S3 features that will give you detailed insights into your storage and your access patterns. You can see what and how much you are storing, how it is being used, and you can make informed decisions about the use of S3 storage classes as a result. These features will be of value to everyone who uses S3, whether they have tens, thousands, millions, or billions of objects in their buckets. Here’s an overview:

S3 Analytics – You can analyze the storage and retrieval patterns for your objects and use the results to choose the most appropriate storage class. You can inspect the results of the analysis from within the S3 Console, or you can load them into your favorite BI tool and dive deep. Either way, you now have the means to gain a deep understanding of your storage patterns and to see how they relate to usage and growth.

S3 Object Tagging – You can associate multiple key-value pairs (tags) with each of your S3 objects, with the ability to change them at any time. The tags can be used to manage and control access, set up S3 Lifecycle policies, customize the S3 Analytics, and filter the CloudWatch metrics. You can think of the bucket as a data lake, and use tags to create a taxonomy of the objects within the lake. This is more flexible than using the bucket and a prefix, and allows you to make semantic-style changes without renaming, moving, or copying objects.

S3 Inventory – You can speed up your business workflows and your big data jobs using S3 Inventory. This feature provides you with a CSV-formatted flat-file representation of the contents of all or part (as identified by a prefix) of a bucket, on a daily or weekly basis.

S3 CloudWatch Metrics – You can improve the performance of your S3-powered applications by monitoring and alarming on 13 new Amazon CloudWatch metrics.

Let’s dive in…

S3 Analytics
As an S3 user, you have your choice of three storage classes (Standard, Standard – Infrequent Access, and Glacier) and the ability to use S3’s Object Lifecycle Management feature to indicate when objects should be either expired or transitioned to Standard – Infrequent Access or Glacier.

The S3 Analytics feature gives you the information you need to choose between Standard and Standard – Infrequent Access for your objects. Because many customers store several different types or categories of information in a single bucket for ease of processing, you have the ability to run the analytics on subsets (defined by a common prefix or tag value) of the objects in a bucket. Each subset is defined by a filter; each bucket can have up to 1000 filters. Here are the filters on my jbarr-public bucket:

Here’s how I define a simple filter that analyzes objects that are prefixed with www:

I could choose to filter by tag name and value instead. Here’s how I would analyze objects that have a tag named type with the value page:

Each filter can have at most one prefix, along with any desired number of tags. I can also choose to have the analytics exported to a file each day:

Once one or more filters are in place, the analytics are run on a daily basis and I can view them by simply clicking on the filter. For example, here’s what I see when I click on my Archival filter:

There’s a lot of good information in this analysis! I can see that:

  • Looking back 127 days, most of my objects that are older than 30 days are accessed infrequently. I can save money by creating a Lifecycle rule that will transition these objects to Standard – Infrequent Access storage.
  • At this point, the majority (6.39 PB) of my storage is in Standard storage; just a tiny fraction (3.24 GB) is in Standard – Infrequent Access storage (I don’t actually have that much data in my bucket! The S3 team was kind enough to load up my account with some test metrics that were, shall we say, very generous).
  • Over the 127 day observation period, between 16% and 30% of the Standard storage was retrieved on a per-day basis.
  • Objects between 30-45 days old, 45-60 days old, 60-90 days old, 90-180 days old, and over 180 days old are all infrequently accessed and good candidates for Standard – Infrequent Access storage.

You can set up storage class analysis from the Console, the CLI, or through the S3 APIs.
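
For example, here is a rough boto3 sketch that sets up the www filter shown above, with a daily export (the export destination bucket is a placeholder):

# Sketch: define a storage class analysis filter on the "www" prefix with boto3.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_analytics_configuration(
    Bucket="jbarr-public",
    Id="www",
    AnalyticsConfiguration={
        "Id": "www",
        "Filter": {"Prefix": "www"},
        "StorageClassAnalysis": {
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::jbarr-s3-analytics",  # placeholder bucket
                        "Prefix": "analytics/",
                    }
                },
            }
        },
    },
)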

To learn more, read S3 Storage Class Analysis.

S3 Object Tagging
Tags supplement the existing location-based S3 object management model (buckets and prefixes) with the ability to manage storage based on what the object represents. For example, you can tag objects with the name of a department and then construct IAM policies that grant access based on the tag. This gives you a lot of flexibility, including the ability to effect changes by simply altering tags.

You can create, update, or delete them during any part of the object’s life cycle. Tags can be referenced in IAM policies, S3 Lifecycle policies, and in the Storage Analytics filters that I just showed you. Each object can have up to ten tags and each version of an object has a distinct set of tags. You can use tags to manage the expiration of objects via a lifecycle policy and you can set up CloudWatch metrics that reflect the activity around a particular tag.

The Properties page for each object displays the current set of tags and allows me to edit them:

You can also set up and access tags from the CLI or through the S3 APIs. When you use either of these two methods, you must always work in terms of the full tag set. For example, if an object has four tags and you want to add a fifth one, you must read the current set, add the new one, and then update the entire set.
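Here's a small boto3 sketch of that read-modify-write cycle (the object key and tag values are just examples):

import boto3

s3 = boto3.client("s3")
bucket, key = "jbarr-public", "www/index.html"   # example object

# Read the current tag set...
tags = s3.get_object_tagging(Bucket=bucket, Key=key)["TagSet"]

# ...add the new tag locally...
tags.append({"Key": "type", "Value": "page"})

# ...and write the entire set back, since the API always replaces the full set.
s3.put_object_tagging(Bucket=bucket, Key=key, Tagging={"TagSet": tags})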

Tags can be replicated across regions via Cross-Region Replication, but your IAM policy on the source side must enable the s3:GetObjectVersionTagging and s3:ReplicateTags actions.

Tags cost $0.01 per 10,000 tags per month. Requests that add or update tags (PUT) and requests that retrieve them (GET) are charged at the usual rates.

To learn more, read about S3 Object Tagging.

S3 Inventory
S3’s LIST operation is synchronous, and returns up to 1000 objects at a time, along with a key that can be used to initiate a second LIST that picks up where the first one leaves off. While this works well for small to medium-sized buckets and single-threaded programs, it can be challenging to use in conjunction with huge buckets or multiple threads.

The S3 team told me that many of our customers store tens or hundreds of millions of objects in a single bucket, and that buckets with a billion or more objects are not unusual. These buckets are often processed daily or weekly as part of a larger workflow and can benefit from a more direct way to gain access to the list of objects in a bucket.

With S3 Inventory, you can now arrange to receive daily or weekly inventory reports for any of your buckets. You can use a prefix to filter the report and you can choose to include optional fields such as size, storage class, and replication status. Reports can be sent to an S3 bucket in your account or (with proper permission settings) in another account.
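The same report can also be configured programmatically; here's a boto3 sketch that mirrors the WebStuff report I create in the console below (destination bucket, prefix, and optional fields as described in this post):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_inventory_configuration(
    Bucket="jbarr-public",
    Id="WebStuff",
    InventoryConfiguration={
        "Id": "WebStuff",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Filter": {"Prefix": "www"},                 # limit the report to www objects
        "Schedule": {"Frequency": "Daily"},
        "OptionalFields": ["Size", "StorageClass", "ReplicationStatus"],
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::jbarr-s3-inventory",
                "Format": "CSV",
            }
        },
    },
)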

Here’s how I request a daily inventory report called WebStuff, for all objects that start with www:

I also need to give S3 permission to write to the destination bucket (jbarr-s3-inventory). Here’s the policy that I used (see Granting Permission for Amazon S3 Inventory and Amazon S3 Analytics to learn more):

The inventory mechanism creates three files: a manifest, a checksum for the manifest, and a data file. The manifest contains the location of the data file along with a checksum for it:

{
   "sourceBucket":"jbarr-public",
   "destinationBucket":"arn:aws:s3:::jbarr-s3-inventory",
   "version":"2016-11-30",
   "fileFormat":"CSV",
   "fileSchema":"Bucket, Key, Size, LastModifiedDate, ETag, StorageClass",
   "files":[
      {
         "key":"jbarr-public/DailyInventory/data/cf1da322-f52b-4c61-802a-b5c14f4f32e2.csv.gz",
         "size":2270,
         "MD5checksum":"be6d0eb3f9c4f497fe40658baa5a2e7c"
      }
   ]
}

Here’s what the data file looks like after it has been unzipped:

If you are using the inventory reports to power your daily or weekly processing workflow, don’t forget about S3 Notifications! The checksum file is written after the other two, so you can safely use it to get things moving. Also, don’t forget to use a lifecycle policy to manage your collection of inventory files since they will accumulate over time.
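For example, a notification-triggered job might read the manifest and walk the gzipped CSV rows along these lines (a sketch; the manifest key is hypothetical and would normally come from the S3 event):

import boto3, csv, gzip, json

s3 = boto3.client("s3")
inventory_bucket = "jbarr-s3-inventory"
manifest_key = "jbarr-public/WebStuff/2017-01-01T00-00Z/manifest.json"  # hypothetical key

# The manifest points at the gzipped CSV data file(s) and their checksums.
manifest = json.loads(s3.get_object(Bucket=inventory_bucket, Key=manifest_key)["Body"].read())

for entry in manifest["files"]:
    data = s3.get_object(Bucket=inventory_bucket, Key=entry["key"])["Body"].read()
    for row in csv.reader(gzip.decompress(data).decode("utf-8").splitlines()):
        bucket_name, key_name = row[0], row[1]   # columns follow the fileSchema above
        # ...hand each object record to the rest of the workflow here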

As an added bonus, using daily or weekly reports can save you up to 50% when compared to making multiple LIST requests. To learn more about this feature, read about S3 Storage Inventory.

S3 CloudWatch Metrics
S3 can now publish storage, request, and data transfer metrics to CloudWatch. The storage metrics are reported daily and are available at no extra cost. The request and data transfer metrics are available at one-minute intervals (after some processing latency) and are billed at the standard CloudWatch rate. As with S3 Analytics, the CloudWatch metrics can be collected and reported for an entire bucket or for a subset selected via prefix or tags.
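Here's a boto3 sketch that turns on filtered request metrics for the www prefix (the configuration Id is a placeholder):

import boto3

s3 = boto3.client("s3")

# Request metrics for objects under the www prefix; the daily storage metrics
# for the whole bucket are published automatically.
s3.put_bucket_metrics_configuration(
    Bucket="jbarr-public",
    Id="www-metrics",                      # placeholder Id
    MetricsConfiguration={
        "Id": "www-metrics",
        "Filter": {"Prefix": "www"},
    },
)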

Here is the full set of metrics:

Storage
  • Bucket Size Bytes
  • Number Of Objects

Requests
  • GET
  • PUT
  • POST
  • DELETE
  • HEAD
  • LIST
  • ALL
  • 4xx Errors
  • 5xx Errors
  • Total Request Latency
  • First Byte Latency

Data Transfer
  • Bytes Uploaded
  • Bytes Downloaded

The metrics are available within the S3 and CloudWatch consoles. Here’s what the Total Request Latency looks like in the S3 Console for my bucket:

I can also click on View in CloudWatch and set CloudWatch alarms for any desired metric. Perhaps I want to be notified if my bucket grows beyond 100 MB:
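The same alarm can also be created through the CloudWatch API; here's a boto3 sketch (the SNS topic ARN is a placeholder):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="jbarr-public-over-100MB",
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "jbarr-public"},
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    Statistic="Average",
    Period=86400,                              # the storage metrics arrive daily
    EvaluationPeriods=1,
    Threshold=100 * 1024 * 1024,               # 100 MB
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:my-alerts"],  # placeholder topic
)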

To learn more, read How Do I Configure Request Metrics for an S3 Bucket?

Available Now
You have had access to these features since late last year. They are accessible through the updated S3 Console, which also includes many other new features (watch Introduction to the New Amazon S3 Console to learn more).

Jeff;

 

Amazon WorkDocs Update – Commenting & Reviewing Enhancements and a New Activity Feed

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-workdocs-update-commenting-reviewing-enhancements-and-a-new-activity-feed/

As I have told you in the past, we like to drink our own Champagne at Amazon. Practically speaking, this means that we make use of our own services, tools, and applications as part of our jobs, and that we supply the development teams with feedback if we have an idea for an improvement or if we find something that does not work as expected.

I first talked about Amazon WorkDocs (which was originally called Zocalo) back in the middle of 2014, and have been using it ever since (at busy times I often have drafts of 7 or 8 posts circulating).

I upload drafts of every new blog post (usually as PDFs) to WorkDocs and then share them with the Product Manager, Product Marketing Manager, and other designated reviewers. The reviewers leave feedback for me, I update the draft, and I wait for more feedback. After a couple of iterations the draft settles down and I wait for the go-ahead to publish the post. The circle of reviews often grows to include developers, senior management, and so forth. I simply share the document with them and look forward to even more feedback. My job is to read and to process all of the feedback (lots of suggestions, and the occasional question) as quickly as possible and to make sure that I did not miss anything.

Today I would like to tell you about some recent enhancements that make WorkDocs even more useful. We have added some more commenting and reviewing features, along with an activity feed.

Enhanced Commenting
Over the course of a couple of revisions, some comments will spur a discussion. There might be a question about the applicability of a particular feature or the value of a particular image. In order to make it easier to start and to continue conversations, WorkDocs now supports threaded replies. I simply click on Reply and respond to a comment:

It is displayed like this:

If I click on Private, the comment is accessible only to the person who wrote the original.

In order to strengthen my message, I can also use simple formatting (bold, italic, and strikethrough) in my comments. Here’s how I specify each one:

And here’s the result:

Clicking on the ? displays a handy guide to formatting:

When the time for comments has passed, I can now disable feedback with a single click:

To learn more about these features, read Giving Feedback in the WorkDocs User Guide.

Enhanced Reviewing
As the comments accumulate, I sometimes need to draw a reviewer’s attention to a particular comment. I can do this by entering an @ in the comment and then choosing their name from the popup menu:

The user will be notified by email in order to let them know that their feedback is needed.

From time to time, a potential reviewer will come into possession of a URL to a WorkDocs document but will not have access to the document. They can now request access to the document like this:

The request will be routed to the owner of the document via email for approval.

Similarly, someone who has been granted Viewer-level access can now request Contributor-level access:

Again, the request will be routed to the owner of the document via email for approval:

 

Activity Feed
With multiple blog posts out for review at any given time, keeping track of what’s coming and going can be challenging. In order to give me a big-picture view, WorkDocs now includes an Activity Feed. The feed shows me what is going on with my own documents and with those that have been shared with me. I can watch as files and folders are created, changed, removed, and commented on. I can also see who is making the changes and track the times when they were made:

I can enter a search term to control what I see in the feed:

And I can further filter the updates by activity type or by date:

Available Now
These features are available now and you can start using them today.

Jeff;

 

The Pirate Bay Suffers Downtime, Tor Domain is Up

Post Syndicated from Ernesto original https://torrentfreak.com/the-pirate-bay-suffers-downtime-tor-domain-is-up-170123/

The Pirate Bay has been unreachable for more than half a day now.

The site currently displays a CloudFlare error message across all of its pages, and many proxies are affected by the downtime as well.

TorrentFreak reached out to the TPB team who are aware of the issues. They have informed the responsible people and hope to have the site back online soonish.

No further details are available to us and there is no ETA for the site’s return either. However, judging from past experience, it’s likely a small technical issue that needs fixing.


There’s also some good news for those who desperately need to access the notorious torrent site.

TPB is still accessible on the Tor network via its .onion address, through the popular Tor Browser for example. The Tor traffic goes through a separate server and works just fine.

Others will have to be patient, but seasoned TPB users will probably know the drill by now…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

WhatsApp Security Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/whatsapp_securi.html

Back in March, Rolf Weber wrote about a potential vulnerability in the WhatsApp protocol that would allow Facebook to defeat perfect forward secrecy by forcibly changing users’ keys, allowing it — or more likely, the government — to eavesdrop on encrypted messages.

It seems that this vulnerability is real:

WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users’ messages.

The security loophole was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys.”

The vulnerability is not inherent to the Signal protocol. Open Whisper Systems’ messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp’s implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.

Note that it’s an attack against current and future messages, and not something that would allow the government to reach into the past. In that way, it is no more troubling than the government hacking your mobile phone and reading your WhatsApp conversations that way.

An unnamed “WhatsApp spokesperson” said that they implemented the encryption this way for usability:

In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.

He’s technically correct. This is not a backdoor. This really isn’t even a flaw. It’s a design decision that put usability ahead of security in this particular instance. Moxie Marlinspike, creator of Signal and the code base underlying WhatsApp’s encryption, said as much:

Under normal circumstances, when communicating with a contact who has recently changed devices or reinstalled WhatsApp, it might be possible to send a message before the sending client discovers that the receiving client has new keys. The recipient’s device immediately responds, and asks the sender to reencrypt the message with the recipient’s new identity key pair. The sender displays the “safety number has changed” notification, reencrypts the message, and delivers it.

The WhatsApp clients have been carefully designed so that they will not re-encrypt messages that have already been delivered. Once the sending client displays a “double check mark,” it can no longer be asked to re-send that message. This prevents anyone who compromises the server from being able to selectively target previously delivered messages for re-encryption.

The fact that WhatsApp handles key changes is not a “backdoor,” it is how cryptography works. Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system.

The only question it might be reasonable to ask is whether these safety number change notifications should be “blocking” or “non-blocking.” In other words, when a contact’s key changes, should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.

Given the size and scope of WhatsApp’s user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user’s communication, along with a simple user experience. The choice to make these notifications “blocking” would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn’t, effectively telling the server who it could MITM transparently and who it couldn’t; something that WhatsApp considered very carefully.

How serious this is depends on your threat model. If you are worried about the US government — or any other government that can pressure Facebook — snooping on your messages, then this is a small vulnerability. If not, then it’s nothing to worry about.

Slashdot thread. Hacker News thread. BoingBoing post. More here.

EDITED TO ADD (1/24): Zeynep Tufekci takes the Guardian to task for their reporting on this vulnerability. (Note: I signed on to her letter.)

Compute Module 3 Launch!

Post Syndicated from James Adams original https://www.raspberrypi.org/blog/compute-module-3-launch/

Way back in April of 2014 we launched the original Compute Module (CM1) which was based around the BCM2835 processor of the original Raspberry Pi. CM1 was a great success and we’ve seen a lot of uptake from various markets, particularly in IoT and home and factory automation. Not to be outdone by its bigger Raspberry Pi brother, the Compute Module is also destined for space!

Compute Module 3

Since releasing the original Compute Module we’ve launched 2 further generations of much faster Raspberry Pi boards, so today we bring you the shiny new Compute Module 3 (CM3) which is based on the Raspberry Pi 3 hardware, providing twice the RAM and roughly 10x the CPU performance of the original module. We’ve been talking about the Compute Module 3 since the launch of the Raspberry Pi 3, and we’re already excited to see NEC displays, an early adopter, launching their CM3-enabled display solution.

Compute Module 3

The idea of the Compute Module was to provide an easy and cost effective route to producing customised products based on the Pi hardware and software platform. The thought was to provide the ‘team in a garage’ with easy access to the same technology as the big guys. The module takes care of the complexity of routing out the processor pins, the high speed RAM interface and core power supply and allows a simple carrier board to provide just what is needed in terms of external interfaces and form factor. The module uses a standard DDR2 SODIMM form factor, sockets for which are made by several manufacturers and are easily available and inexpensive.

In fact today we are launching two versions of Compute Module 3. The first is the ‘standard’ CM3 which has a BCM2837 processor at up to 1.2GHz with 1GByte RAM (the same as Pi3) and 4Gbytes of on-module eMMC flash. The second version is what we are calling ‘Compute Module 3 Lite’ (CM3L) which still has the same BCM2837 and 1Gbyte of RAM but brings the SD card interface to the module pins so a user can wire this up to an eMMC or SD card of their choice.

Back side of CM3 (left) and CM3L (right).

We are also releasing an updated version of our get-you-started breakout board, the Compute Module IO Board V3 (CMIO3). This board provides the necessary power to the module and gives you the ability to program the module’s Flash memory (for the non-Lite versions) or use an SD card (Lite versions), access the processor interfaces in a slightly more friendly fashion (pin headers and flexi connectors, much like the Pi) and provides the necessary HDMI and USB connectors so that you have an entire system that can boot Raspbian (or the OS of your choice). This board provides both a starting template for those who want to design with the Compute Module, and a quick way to start experimenting with the hardware and building and testing a system before going to the expense of fabricating a custom board. The CMIO3 can accept an original Compute Module, CM3 or CM3L.

Comprehensive information on the Compute Modules is available in the relevant hardware documentation section of our website and includes a datasheet and schematics.

With the launch of CM3 and CM3 Lite we are not obsoleting the original Compute Module, as we still see this as a valid product in its own right, being a lower-cost and lower-power option where the performance of a CM3 would be overkill.

CM3 and CM3L are priced at $30 and $25 respectively (excluding tax and shipping) and this price applies to any size order. The original Compute Module is also reduced to $25. Our partners RS and Premier Farnell are also providing full development kits which include all you need to get started designing with the Compute Module 3.

The CM3 is largely backwards compatible with CM1 designs which have followed our design guidelines. The caveats are that the module is 1mm taller than the original module, and that the processor core supply (VBAT) can draw significantly more current; consequently, the processor itself will run much hotter under heavy CPU load, so designers need to consider thermals based on expected use cases.

CM3 (left) is 1mm taller than CM1 (right)

We’re very glad to finally be launching the Compute Module 3, and we’re excited to see what people do with it. Head on over to our partners element14 and RS Components to buy yours today!

The post Compute Module 3 Launch! appeared first on Raspberry Pi.

CES 2017: Trends For the Tech Savvy To Watch

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/ces-2017-trends-tech-savvy-watch/

This year’s Consumer Electronics Show (CES) just wrapped up in Las Vegas. The usual parade of cool tech toys created a lot of headlines this year, but there were some genuine trends to keep an eye on too. If you’re like us, you’re probably one of the first people around to adopt promising new technologies when they emerge. As early adopters we can sometimes lose the forest through the trees when it comes to understanding what this means for everyone else, so we’re going to look at it through that prism.

Alexa everywhere

2017 promises to be a big year for voice-activated “smart home” devices. The final landscape for this is still to be determined – all the expected players have their foot in it right now. Amazon, Apple, Google, Microsoft, even some smaller players.

Amazon deserves props after a holiday season that saw its Echo and Echo Dot devices in high demand. The company has published an API, and Alexa is picking up plenty of support from third-party manufacturers. Alexa is headed far beyond the Echo, it seems.

Electronics giant LG is building Alexa into a line of robots designed for domestic duties and a refrigerator that also sports interior fridge cams, for example. Ford is integrating Alexa support into its Sync 3 automotive interface. Televisions, lighting devices, and home security products are among the many devices to feature Alexa integration.

Alexa is the new hotness, but the real trend here is in voice-assisted connectivity around the home. Even if Alexa runs out of steam, this tech is here to stay. The Internet of Things and voice activated interfaces are converging quickly, though that day isn’t today. It’s tantalizingly close. It’s still a niche, though, where it will stay for as long as consumers have to piece different things together to get it to work. That means there’s still room for disruption.

There’s especially ripe opportunity in underserved verticals. Take the home health market, for example: Natural language interfaces have huge implications for elderly and disabled care and assistance. Finding and developing solutions for those sorts of vertical markets is an awesome opportunity for the right players.

Of course, with great power comes great responsibility. A six-year-old’s family recently got stuck with a $160 bill after she told Alexa to order her cookies and a dollhouse. The family ended up donating the accidental order to charity. For what it’s worth, that problem can be avoided by activating a confirmation code feature in the Alexa software.

The Electric Vehicle (EV) Market Heats Up

One of the trickiest things at CES is separating hype from substance. Nowhere was that more apparent last week than at the unveiling of Faraday Future’s FF91, a new Electric Vehicle (EV) positioned to go toe-to-toe with Tesla’s EV fleet.

The FF91 EV can purportedly go 378 miles on a single charge and also possesses autonomous driving capabilities (although its vaunted self-parking abilities didn’t demo as well as planned). When or if it’ll make it into production is still a head-scratcher, however. Faraday Future says it’ll be out next year, assuming that the company is beyond the production and manufacturing woes that have plagued it up until now.

While new vehicles and vehicle concepts are still largely the domain of auto shows, some auto manufacturers used CES to float new concepts ahead of the Detroit Auto Show, which happens this week. Toyota, for example, showed off its Concept-i, a car with artificial intelligence and natural language processing (like Siri or Alexa) designed to learn from you and adapt.

As we mentioned, Alexa is integrated into Ford’s Sync 3 platform, too. Already you can buy new cars with CarPlay and Android Auto, which makes it a lot easier to just talk with your mobile device to stay connected, get directions and entertain yourself on the morning commute simply by talking to your car instead of touching buttons. That’s a smart user interface change, but it’s still a potentially dangerous distraction for the driver. For this technology to succeed, it’s imperative that natural language interface designers make the experience as frictionless as possible.

Chrysler is making a play for future millennial families. We’re not making this up – they used “millennial” to describe the target market for this several times. The Portal concept is an electric minivan of sorts that’s chock-full of buzzwords: facial recognition, Wi-Fi, media sharing, ten charging ports, semi-autonomous driving abilities, and more.

2017 marks a pivot for car makers in this respect. For years the conventional wisdom was that millennials were a lost cause for auto makers – Uber and Zipcar were all they needed. It turns out that was totally wrong. Economic pressures and diverse lifestyles may have delayed millennials’ trek toward auto ownership, but they’re turning out now in big numbers to buy wheels. Millennial families will need transportation just like the generations before them, all the way back to the station wagon era, which is why Chrysler says this “fifth-generation” family car will go into production sometime after 2018.

Volkswagen showed off its new I.D. concept car, a Golf-looking EV that also has all the requisite buzzwords. Speaking of buzzwords, what really excited us was the I.D. Buzz. This new EV resurrects the styling of the Hippy-era Microbus, with mood lighting, autonomous driving capabilities and a retractable steering wheel.

Rumors have persisted for years that VW was on the cusp of introducing a refreshed Microbus, but those rumors have never come to pass. And unfortunately, VW has no concrete plans to actually produce this – it seems to be a marketing effort to draw on nostalgic Boomer appeal, more than anything.

Both Buzz and Chrysler’s Portal do give us some insight about where auto makers are going when it comes to future generations of minivans: Electric, autonomous, customizable and more social than ever. If we are headed towards a future where vehicles drive themselves, family transportation will look very different than it is today.

Laptops At Both Extremes

CES saw the rollout of several new PC laptop models and concepts that will be hitting store shelves over the next several months.

Gamers looking for more real estate – a lot more real estate – were interested in Razer’s latest concept, Project Valerie. The laptop sports not one but three 4K displays which fold out on hinges. That’s 12K pixels of horizontal image space, mated to an Nvidia GeForce GTX 1080 graphics processor. A unibody aluminum chassis keeps it relatively thin (1.5 inches) when closed, but the entire rig weighs more than 12 pounds. Razer doesn’t have any immediate production plans, which may explain why their prototype was stolen before the end of the show.

Unlike Razer, Acer has production plans – immediate plans – for its gargantuan 21-inch Predator 21X laptop, priced at $8,999 and headed to store shelves next month. It was announced last year, but Acer finally offered launch details last week. A 17-inch model is also coming soon.

Big gaming laptops make for pretty pictures and certainly have their place in the PC ecosystem, but they’re niche devices. After a ramp up on 2-in-1s and low-powered laptops, Intel’s Kaby Lake processors are finally ready for the premium and mid-range laptop market. Kaby Lake efficiency improvements are helping PC makers build thinner and lighter laptops with better battery life, 4K video processing, faster solid state storage and more.

HP, Asus, MSI, Dell (and its gaming arm Alienware) were among the many companies with sleek new Kaby Lake-equipped models.

Gaming in the cloud with Nvidia

Nvidia, makers of premium graphics processors, offers GeForce Now cloud gaming to users of its Shield, an Android-based gaming handheld. That service is expanding to Windows and Mac in March.

Gaming as a Service, if you will, isn’t a new idea. OnLive pioneered the concept more than a decade ago. Gaikai followed, then was acquired by Sony in 2012. Nvidia’s had limited success with GeForce Now, but it’s been a single-platform offering up until now.

Nvidia has robust data centers to handle the processing and traffic, so best of luck to them as they scale up to meet demand. Gaming is very sensitive to network disruption – no gamer appreciates lag – so it’ll be interesting to see how GeForce Now scales to accommodate the new devices.

Mesh Networking

Mesh networking delivers more consistent, stronger network reception and performance than a conventional Wi-Fi router. Some of us have set up routers and extenders to fix dead spots – mesh networking works differently through smart traffic and better radio management between multiple network bases.

Eero, Ubiquiti, and even Google (with Google Wifi) are already offering mesh networking products, and this market segment looks to expand big in 2017. Netgear, Linksys, Asus, TP-Link and others are among those with new mesh networking setups. Mesh networking gear is still hampered by a higher price than plain old routers. That means the value isn’t there for some of us who have networking gear that gets the job done, even with shortcomings like dead zones or slow zones. But prices are coming down fast as more companies get into the market. If you have an 802.11ac router you’re happy with, stick with it for now, and move to a mesh networking setup for your next Wi-Fi upgrade.

Getting Your Feet Into VR

Our award for wackiest CES product has to go to Cerevo Taclim. Tactile feedback shoes and wireless hand controllers that help you “feel” the surface you’re walking on. Crunching snow underfoot, splashing through water. At an expected $1,000-$1,500 a pop, these probably won’t be next year’s Hatchimals, but it’s fun to imagine what game devs can do with the technology. Strap these to your feet then break out your best Hadouken in Street Fighter VR!

CES isn’t the real world. Only a fraction of what’s shown off ever sees the light of day, but it’s always interesting to see the trend-focused consumer electronics market shift and change from year to year. At the end of the year we hope to look back and see how much of this stuff ended up resonating with the actual consumer the show is named for.

The post CES 2017: Trends For the Tech Savvy To Watch appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Introducing Git Credentials: A Simple Way to Connect to AWS CodeCommit Repositories Using a Static User Name and Password

Post Syndicated from Ankur Agarwal original https://aws.amazon.com/blogs/devops/introducing-git-credentials-a-simple-way-to-connect-to-aws-codecommit-repositories-using-a-static-user-name-and-password/

Today, AWS is introducing a simplified way to authenticate to your AWS CodeCommit repositories over HTTPS.

With Git credentials, you can generate a static user name and password in the Identity and Access Management (IAM) console that you can use to access AWS CodeCommit repositories from the command line, Git CLI, or any Git tool that supports HTTPS authentication.

Because these are static credentials, they can be cached using the password management tools included in your local operating system or stored in a credential management utility. This allows you to get started with AWS CodeCommit within minutes. You don’t need to download the AWS CLI or configure your Git client to connect to your AWS CodeCommit repository on HTTPS. You can also use the user name and password to connect to the AWS CodeCommit repository from third-party tools that support user name and password authentication, including popular Git GUI clients (such as TowerUI) and IDEs (such as Eclipse, IntelliJ, and Visual Studio).

So, why did we add this feature? Until today, users who wanted to use HTTPS connections were required to configure the AWS credential helper to authenticate their AWS CodeCommit operations. Customers told us our credential helper sometimes interfered with password management tools such as Keychain Access and Windows Vault, which caused authentication failures. Also, many Git GUI tools and IDEs require a static user name and password to connect with remote Git repositories and do not support the credential helper.

In this blog post, I’ll walk you through the steps for creating an AWS CodeCommit repository, generating Git credentials, and setting up CLI access to AWS CodeCommit repositories.


Git Credentials Walkthrough
Let’s say Dave wants to create a repository on AWS CodeCommit and set up local access from his computer.

Prerequisite: If Dave had previously configured his local computer to use the credential helper for AWS CodeCommit, he must edit his .gitconfig file to remove the credential helper information from the file. Additionally, if his local computer is running macOS, he might need to clear any cached credentials from Keychain Access.

With Git credentials, Dave can now create a repository and start using AWS CodeCommit in four simple steps.

Step 1: Make sure the IAM user has the required permissions
Dave must have the following managed policies attached to his IAM user (or their equivalent permissions) before he can set up access to AWS CodeCommit using Git credentials.

  • AWSCodeCommitPowerUser (or an appropriate CodeCommit managed policy)
  • IAMSelfManageServiceSpecificCredentials
  • IAMReadOnlyAccess
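If an administrator prefers to attach these managed policies programmatically, a minimal boto3 sketch might look like this (Dave’s user name is assumed, and the caller needs IAM permissions to attach policies):

import boto3

iam = boto3.client("iam")

# Attach the three managed policies listed above to Dave's IAM user.
for policy_arn in (
    "arn:aws:iam::aws:policy/AWSCodeCommitPowerUser",
    "arn:aws:iam::aws:policy/IAMSelfManageServiceSpecificCredentials",
    "arn:aws:iam::aws:policy/IAMReadOnlyAccess",
):
    iam.attach_user_policy(UserName="Dave", PolicyArn=policy_arn)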

Step 2: Create an AWS CodeCommit repository
Next, Dave signs in to the AWS CodeCommit console and creates a repository, if he doesn’t have one already. He can choose any repository in his AWS account to which he has access. The instructions to create Git credentials are shown in the help panel. (Choose the Connect button if the instructions are not displayed.) When Dave clicks the IAM user link, the IAM console will open and he can generate the credentials.


Step 3: Create HTTPS Git credentials in the IAM console
On the IAM user page, Dave selects the Security Credentials tab and clicks Generate under the HTTPS Git credentials for AWS CodeCommit section. This creates and displays the user name and password. Dave can then download the credentials.


Note: This is the only time the password is available to view or download.
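The same Git credentials can also be generated through the IAM API; here’s a boto3 sketch (the user name is assumed):

import boto3

iam = boto3.client("iam")

resp = iam.create_service_specific_credential(
    UserName="Dave",
    ServiceName="codecommit.amazonaws.com",
)

cred = resp["ServiceSpecificCredential"]
print(cred["ServiceUserName"])    # the Git user name
print(cred["ServicePassword"])    # shown only at creation time, so store it securely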

 

Step 4: Clone the repository on the local machine
On the AWS CodeCommit console page for the repository, Dave chooses Clone URL, and then copies the HTTPS link for cloning the repository.

Then, at the command line or terminal, Dave uses the link he just copied to clone the repository:

$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/TestRepo_Dave

When prompted for user name and password, Dave provides the Git credentials (user name and password) he generated in step 3.

Dave is now ready to start pushing his code to the new repository.

Git credentials can be made active or inactive based on your requirements. You can also reset the password if you would like to use the existing user name with a new password.
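Here’s a boto3 sketch of both operations (the credential ID is a placeholder):

import boto3

iam = boto3.client("iam")

# Deactivate (or reactivate, with Status="Active") an existing Git credential.
iam.update_service_specific_credential(
    UserName="Dave",
    ServiceSpecificCredentialId="ACCAEXAMPLE123456",   # placeholder ID
    Status="Inactive",
)

# Reset the credential to get a new password for the same Git user name.
resp = iam.reset_service_specific_credential(
    UserName="Dave",
    ServiceSpecificCredentialId="ACCAEXAMPLE123456",
)
new_password = resp["ServiceSpecificCredential"]["ServicePassword"]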

Next Steps

  1. You can optionally cache your credentials using the Git credentials caching command here.
  2. Want to invite a collaborator to work on your AWS CodeCommit repository? Simply create a new IAM user in your AWS account, create Git credentials for that user, and securely share the repository URL and Git credentials with the person you want to collaborate with on the repository.
  3. Connect to any third-party client that supports connecting to remote Git repositories using Git credentials (a stored user name and password). Virtually all tools and IDEs allow you to connect with static credentials. We’ve tested these:
    • Visual Studio (using the default Git plugin)
    • Eclipse IDE (using the default Git plugin)
    • Git Tower UI

For more information, see the AWS CodeCommit documentation.

We are excited to provide this new way of connecting to AWS CodeCommit. We look forward to hearing from you about the many different tools and IDEs you will be able to use with your AWS CodeCommit repositories.