Tag Archives: Bean

Newly Updated: Example AWS IAM Policies for You to Use and Customize

Post Syndicated from Deren Smith original https://aws.amazon.com/blogs/security/newly-updated-example-policies-for-you-to-use-and-customize/

To help you grant access to specific resources and conditions, the Example Policies page in the AWS Identity and Access Management (IAM) documentation now includes more than thirty policies for you to use or customize to meet your permissions requirements. The AWS Support team developed these policies from their experiences working with AWS customers over the years. The example policies cover common permissions use cases you might encounter across services such as Amazon DynamoDB, Amazon EC2, AWS Elastic Beanstalk, Amazon RDS, Amazon S3, and IAM.

In this blog post, I introduce the updated Example Policies page and explain how to use and customize these policies for your needs.

The new Example Policies page

The Example Policies page in the IAM User Guide now provides an overview of the example policies and includes a link to view each policy on a separate page. Note that each of these policies has been reviewed and approved by AWS Support. If you would like to submit a policy that you have found to be particularly useful, post it on the IAM forum.

To give you an idea of the policies we have included on this page, the following are a few of the EC2 policies on the page:

To see the full list of available policies, see the Example Policies page.

In the following section, I demonstrate how to use a policy from the Example Policies page and customize it for your needs.

How to customize an example policy for your needs

Suppose you want to allow an IAM user, Bob, to start and stop EC2 instances with a specific resource tag. After looking through the Example Policies page, you see the policy, Allows Starting or Stopping EC2 Instances a User Has Tagged, Programmatically and in the Console.

To apply this policy to your specific use case:

  1. Navigate to the Policies section of the IAM console.
  2. Choose Create policy.
    Screenshot of choosing "Create policy"
  3. Choose the Select button next to Create Your Own Policy. You will see an empty policy document with boxes for Policy Name, Description, and Policy Document, as shown in the following screenshot.
  4. Type a name for the policy, copy the policy from the Example Policies page, and paste the policy in the Policy Document box. In this example, I use “start-stop-instances-for-owner-tag” as the policy name and “Allows users to start or stop instances if the instance tag Owner has the value of their user name” as the description.
  5. Update the placeholder text in the policy (see the full policy that follows this step). For example, replace <REGION> with a region from AWS Regions and Endpoints and <ACCOUNTNUMBER> with your 12-digit account number. The IAM policy variable, ${aws:username}, is a dynamic property in the policy that automatically applies to the user to which the policy is attached. For example, when the policy is attached to Bob, the policy replaces ${aws:username} with Bob. If you do not want to use the key-value pair of Owner and ${aws:username}, you can edit the policy to include your desired key-value pair. For example, if you want to use the key-value pair CostCenter:1234, you can change “ec2:ResourceTag/Owner”: “${aws:username}” to “ec2:ResourceTag/CostCenter”: “1234” (a fragment showing this variant follows these steps).
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:StartInstances",
                    "ec2:StopInstances"
                ],
                "Resource": "arn:aws:ec2:<REGION>:<ACCOUNTNUMBER>:instance/*",
                "Condition": {
                    "StringEquals": {
                        "ec2:ResourceTag/Owner": "${aws:username}"
                    }
                }
            },
            {
                "Effect": "Allow",
                "Action": "ec2:DescribeInstances",
                "Resource": "*"
            }
        ]
    }

  6. After you have edited the policy, choose Create policy.
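For reference, if you use the hypothetical CostCenter:1234 tag from step 5 instead of the Owner tag, only the Condition element of the policy changes:

    "Condition": {
        "StringEquals": {
            "ec2:ResourceTag/CostCenter": "1234"
        }
    }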

You have created a policy that allows an IAM user to start and stop EC2 instances in your account, as long as those instances have the matching resource tag and the policy is attached to the user. You can also attach this policy to an IAM group to apply it to all users in that group.
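For example, here is a minimal AWS CLI sketch for attaching the policy to a group; the group name is hypothetical, and the account number is a placeholder:

aws iam attach-group-policy \
    --group-name EC2-Operators \
    --policy-arn arn:aws:iam::123456789012:policy/start-stop-instances-for-owner-tag

Any user added to the EC2-Operators group then picks up the policy, and ${aws:username} resolves to each user's own name when the policy is evaluated.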

Summary

We updated the Example Policies page in the IAM User Guide so that you have a central location where you can find examples of the most commonly requested and used IAM policies. In addition to these example policies, we recommend that you review the list of AWS managed policies, including the AWS managed policies for job functions. You can choose these predefined policies from the IAM console and associate them with your IAM users, groups, and roles.

We will add more IAM policies to the Example Policies page over time. If you have a useful policy you would like to share with others, post it on the IAM forum. If you have comments about this post, submit them in the “Comments” section below.

– Deren

Launch – .NET Core Support In AWS CodeStar and AWS Codebuild

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-net-core-support-in-aws-codestar-and-aws-codebuild/

A few months ago, I introduced the AWS CodeStar service, which allows you to quickly develop, build, and deploy applications on AWS. AWS CodeStar helps development teams to increase the pace of releasing applications and solutions while reducing some of the challenges of building great software.

When the CodeStar service launched in April, it was released with several project templates for Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. Each template provisions the underlying AWS Code Services and configures an end-to-end continuous delivery pipeline for the targeted application using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

As I have participated in AWS Summits around the world discussing AWS CodeStar, many of you have shown interest in the availability of .NET templates in CodeStar and in using CodeStar to deploy .NET applications. Therefore, it is with great pleasure and excitement that I announce that you can now develop, build, and deploy cross-platform .NET Core applications with the AWS CodeStar and AWS CodeBuild services.

AWS CodeBuild has added the ability to build and deploy .NET Core application code to both Amazon EC2 and AWS Lambda. This new CodeBuild capability has enabled the addition of two new project templates in AWS CodeStar for .NET Core applications. These new project templates enable you to deploy .NET Core applications to Amazon EC2 Linux instances, and they provide everything you need to get started quickly, including .NET Core sample code and a full software development toolchain.

Of course, I can’t wait to try out the new addition to the project templates within CodeStar and the updated .NET application build options with CodeBuild. For my test scenario, I will use CodeStar to create, build, and deploy my .NET Core ASP.NET web application on EC2. Then, I will extend my ASP.NET application by creating a .NET Lambda function to be compiled and deployed with CodeBuild as a part of my application’s pipeline. This Lambda function can then be called and used within my ASP.NET application to extend the functionality of my web application.

So, let’s get started!

First, I’ll log into the CodeStar console and start a new CodeStar project. I am presented with the option to select a project template.


Right now, I would like to focus on building .NET Core projects, so I’ll filter the project templates by selecting C# in the Programming Languages section. Now, CodeStar only shows me the new .NET Core project templates that I can use to build web applications and services with ASP.NET Core.

I think I’ll use the ASP.NET Core web application project template for my first CodeStar .NET Core application. As you can see by the project template information display, my web application will be deployed on Amazon EC2, which signifies to me that my .NET Core code will be compiled and packaged using AWS CodeBuild and deployed to EC2 using the AWS CodeDeploy service.


My hunch about the services is confirmed on the next screen when CodeStar shows the AWS CodePipeline and the AWS services that will be configured for my new project. I’ll name this web application project, ASPNetCore4Tara, and leave the default Project ID that CodeStar generates from the project name. Yes, I know that this is one of the goofiest names I could ever come up with, but, hey, it will do for this test project so I’ll go ahead and click the Next button. I should mention that you have the option to edit your Amazon EC2 configuration for your project on this screen before CodeStar starts configuring and provisioning the services needed to run your application.

Since my ASP.NET Core web application will be deployed to an Amazon EC2 instance, I will need to choose an Amazon EC2 key pair for the SSH login to this instance. For my ASPNetCore4Tara project, I will use an existing Amazon EC2 key pair I have previously used for launching my other EC2 instances. However, if I were creating this project and did not have an EC2 key pair, or did not have access to the .pem file (private key file) for an existing EC2 key pair, I would have to first visit the EC2 console and create a new EC2 key pair to use for my project. This is important because, without the EC2 key pair and the associated .pem file, I would not be able to log in to my EC2 instance.
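If you do need to create a key pair first, the AWS CLI can do it in one step. This is a sketch with a hypothetical key name; the .pem file it writes is the private key you must keep safe:

aws ec2 create-key-pair \
    --key-name codestar-demo \
    --query 'KeyMaterial' \
    --output text > codestar-demo.pem
chmod 400 codestar-demo.pem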

With my EC2 key pair selected and the confirmation that I have the related private key file checked, I am ready to click the Create Project button.


After CodeStar completes the creation of the project and the provisioning of the project-related AWS services, I am ready to view the CodeStar sample application from the application endpoint displayed in the CodeStar dashboard. This sample application should be familiar to you if you have been working with the CodeStar service or if you had an opportunity to read the blog post about the AWS CodeStar service launch. I’ll click the link underneath Application Endpoints to view the sample ASP.NET Core web application.

Now I’ll go ahead and clone the generated project and connect my Visual Studio IDE to the project repository. I am going to make some changes to the application, and since AWS CodeBuild now supports .NET Core builds and deployments to both Amazon EC2 and AWS Lambda, I will update my build specification file to cover the changes to my web application, including the use of the Lambda function. Don’t worry if you are not familiar with how to clone the project and connect it to the Visual Studio IDE; CodeStar provides in-console step-by-step instructions to assist you.

First things first: I will open up the Visual Studio IDE and connect to the AWS CodeCommit repository provisioned for my ASPNetCore4Tara project. It is important to note that the Visual Studio 2017 IDE is required for .NET Core projects in AWS CodeStar, and the AWS Toolkit for Visual Studio 2017 will need to be installed prior to connecting your project repository to the IDE.

In order to connect to my repo within Visual Studio, I will open up Team Explorer and select the Connect link under the AWS CodeCommit option in the Hosted Service Providers section. I will click OK to keep my default AWS profile toolkit credentials.

I’ll then click Clone under the Manage Connections and AWS CodeCommit hosted provider section.

Once I select my aspnetcore4tara repository in the Clone AWS CodeCommit Repository dialog, I only have to enter my IAM user’s HTTPS Git credentials in the Git Credentials for AWS CodeCommit dialog and my process is complete. If you’re following along and receive a dialog for Git Credential Manager login, don’t worry; just enter the same Git credentials.
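If you prefer the command line to Team Explorer, an equivalent HTTPS clone would look something like the following; the region here is an assumption, so substitute the one hosting your project:

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/aspnetcore4tara

Git will prompt for the same Git credentials the first time it contacts the repository.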


My project is now connected to the aspnetcore4tara CodeCommit repository and my web application is loaded for editing. As you will notice in the screenshot below, the sample project is structured as a standard ASP.NET Core MVC web application.

With the project created, I can make changes and updates. Since I want to update this project with a .NET Lambda function, I’ll quickly start a new project in Visual Studio to author a very simple C# Lambda function to be compiled with the CodeStar project. This AWS Lambda function will be included in the CodeStar ASP.NET Core web application project.

The Lambda function I’ve created makes a call to the REST API of NASA’s popular Astronomy Picture of the Day website. The API sends back the latest planetary image and related information in JSON format. You can see the Lambda function code below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

using System.Net.Http;
using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace NASAPicOfTheDay
{
    public class SpacePic
    {
        HttpClient httpClient = new HttpClient();
        string nasaRestApi = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY";

        /// <summary>
        /// A simple function that retrieves NASA Planetary Info and 
        /// Picture of the Day
        /// </summary>
        /// <param name="context"></param>
        /// <returns>nasaResponse-JSON String</returns>
        public async Task<string> GetNASAPicInfo(ILambdaContext context)
        {
            string nasaResponse;
            
            //Call NASA Picture of the Day API
            nasaResponse = await httpClient.GetStringAsync(nasaRestApi);
            Console.WriteLine("NASA API Response");
            Console.WriteLine(nasaResponse);
            
            //Return NASA response - JSON format
            return nasaResponse; 
        }
    }
}

I’ll now publish this C# Lambda function and test it by using the Publish to AWS Lambda option provided by the AWS Toolkit for Visual Studio with the NASAPicOfTheDay project. After publishing the function, I can test it and verify that it is working correctly within Visual Studio and/or the AWS Lambda console. You can learn more about building AWS Lambda functions with C# and .NET at: http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html
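Once the function is published, the same Amazon.Lambda.Tools CLI used later in the build can invoke it for a quick smoke test. This sketch assumes the function was published under the name NASAPicAPI (the name used in the buildspec below) in us-east-1:

dotnet lambda invoke-function NASAPicAPI --region us-east-1

The response payload should be the JSON string returned by the NASA API.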

 

Now that I have my Lambda function completed and tested, all that is left is to update the CodeBuild buildspec.yml file within my aspnetcore4tara CodeStar project to include publishing and deploying of the Lambda function.

To accomplish this, I will create a new folder named functions and copy the folder that contains my Lambda function .NET project to my aspnetcore4tara web application project directory.
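From a shell in the web application’s project directory, that step might look like this sketch (the source path is an assumption based on where you created the Lambda project):

mkdir functions
cp -r ~/source/NASAPicOfTheDay functions/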

 

 

To build and publish my AWS Lambda function, I will use commands in the buildspec.yml file from the aws-lambda-dotnet tools library, which helps .NET Core developers develop AWS Lambda functions. I add a file, funcprof, to the NASAPicOfTheDay folder; it contains customized profile information for use with the aws-lambda-dotnet tools. All that is left is to update the buildspec.yml file used by CodeBuild for the ASPNetCore4Tara project build to include the packaging and the deployment of the NASAPicOfTheDay AWS Lambda function. The updated buildspec.yml is as follows:

version: 0.2
env:
  variables:
    basePath: 'hold'
phases:
  install:
    commands:
      - echo set basePath for project
      - basePath=$(pwd)
      - echo $basePath
      - echo Build restore and package Lambda function using AWS .NET Tools...
      - dotnet restore functions/*/NASAPicOfTheDay.csproj
      - cd functions/NASAPicOfTheDay
      - dotnet lambda package -c Release -f netcoreapp1.0 -o ../lambda_build/nasa-lambda-function.zip
  pre_build:
    commands:
      - echo Deploy Lambda function used in ASPNET application using AWS .NET Tools. Must be in path of Lambda function build 
      - cd $basePath
      - cd functions/NASAPicOfTheDay
      - dotnet lambda deploy-function NASAPicAPI -c Release -pac ../lambda_build/nasa-lambda-function.zip --profile-location funcprof -fd 'NASA API for Picture of the Day' -fn NASAPicAPI -fh NASAPicOfTheDay::NASAPicOfTheDay.SpacePic::GetNASAPicInfo -frun dotnetcore1.0 -frole arn:aws:iam::xxxxxxxxxxxx:role/lambda_exec_role -framework netcoreapp1.0 -fms 256 -ft 30  
      - echo Lambda function is now deployed - Now change directory back to Base path
      - cd $basePath
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
artifacts:
  files:
    - AspNetCoreWebApplication/build_output/**/*
    - scripts/**/*
    - appspec.yml
    

That’s it! All that is left is for me to add and commit all my file additions and updates to the AWS CodeCommit git repository provisioned for my ASPNetCore4Tara project. This kicks off the AWS CodePipeline for the project, which will now use AWS CodeBuild’s new support for .NET Core to build and deploy both the ASP.NET Core web application and the .NET AWS Lambda function.

 

Summary

The support for .NET Core in AWS CodeStar and AWS CodeBuild opens the door for .NET developers to take advantage of the benefits of continuous integration and delivery when building .NET based solutions on AWS. Read more about .NET Core support in AWS CodeStar and AWS CodeBuild here, or review the product pages for AWS CodeStar and/or AWS CodeBuild for more information on using the services.

Enjoy building .NET projects more efficiently with Amazon Web Services using .NET Core with AWS CodeStar and AWS CodeBuild.

Tara

 

Pi-powered hands-on statistical model at the Royal Society

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/royal-society-galton-board/

Physics! Particles! Statistical modelling! Quantum theory! How can non-scientists understand any of it? Well, students from Durham University are here to help you wrap your head around it all – and to our delight, they’re using the power of the Raspberry Pi to do it!

At the Royal Society’s Summer Science Exhibition, taking place in London from 4-9 July, the students are presenting a Pi-based experiment demonstrating the importance of statistics in their field of research.

Modelling the invisible – Summer Science Exhibition 2017

The Royal Society Summer Science Exhibition 2017 features 22 exhibits of cutting-edge, hands-on UK science, along with special events and talks. You can meet the scientists behind the research. Find out more about the exhibition at our website: https://royalsociety.org/science-events-and-lectures/2017/summer-science-exhibition/

Ramona, Matthew, and their colleagues are particle physicists keen to bring their science to those of us whose heads start to hurt as soon as we hear the word ‘subatomic’. In their work, they create computer models of subatomic particles to make predictions about real-world particles. Their models help scientists to design better experiments and to improve sensor calibrations. If this doesn’t sound straightforward to you, never fear – this group of scientists has set out to show exactly how statistical models are useful.

The Galton board model

They’ve built a Pi-powered Galton board, also called a bean machine (much less intimidating, I think). This is an upright board, shaped like an upside-down funnel, with nails hammered into it. Drop a ball in at the top, and it will randomly bounce off the nails on its way down. How the nails are spread out determines where a ball is most likely to land at the bottom of the board.

If you’re having trouble picturing this, you can try out an online Galton board. Go ahead, I’ll wait.

You’re back? All clear? Great!

Now, if you drop 100 balls down the board and collect them at the bottom, the result might look something like this:

Galton board

By Antoine Taveneaux CC BY-SA 3.0

The distribution of the balls is determined by the locations of the nails in the board. This means that, if you don’t know where the nails are, you can look at the distribution of balls to figure out where they are most likely to be located. And you’ll be able to do all this using … statistics!!!

Statistical models

Similarly, how particles behave is determined by the laws of physics – think of the particles as balls, and laws of physics as nails. Physicists can observe the behaviour of particles to learn about laws of physics, and create statistical models simulating the laws of physics to predict the behaviour of particles.
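If you want to play with the idea yourself, here is a minimal bash sketch of a 12-row Galton board: each ball takes 12 random left-or-right bounces, and the tally of final positions approximates the bell-shaped spread in the photo above.

#!/bin/bash
# Drop 1000 balls through 12 rows of nails.
# At each nail, a ball bounces right (add 1) or left (add 0) with equal chance.
for ball in $(seq 1 1000); do
    pos=0
    for row in $(seq 1 12); do
        pos=$((pos + RANDOM % 2))
    done
    echo "$pos"
done | sort -n | uniq -c    # count how many balls landed in each bin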

I can hear you say, “Alright, thanks for the info, but how does the Raspberry Pi come into this?” Don’t worry – I’m getting to that.

Modelling the invisible – the interactive exhibit

As I said, Ramona and the other physicists have not created a regular old Galton board. Instead, this one records where the balls land using a Raspberry Pi, and other portable Pis around the exhibition space can access the records of the experimental results. These Pis in turn run Galton board simulators, and visitors can use them to recreate a virtual Galton board that produces the same results as the physical one. Then, they can check whether their model board does, in fact, look like the one the physicists built. In this way, people directly experience the relationship between statistical models and experimental results.

Hurrah for science!

The other exhibit the Durham students will be showing is a demo dark matter detector! So if you decide to visit the Summer Science Exhibition, you will also have the chance to learn about the very boundaries of human understanding of the cosmos.

The Pi in museums

At the Raspberry Pi Foundation, education is our mission, and of course we love museums. It is always a pleasure to see our computers incorporated into exhibits: the Pi-powered visual theremin teaches visitors about music; the Museum in a Box uses Pis to engage people in hands-on encounters with exhibits; and this Pi is itself a museum piece! If you want to learn more about Raspberry Pis and museums, you can listen to this interview with Pi Towers’ social media maestro Alex Bate.

It’s amazing that our tech is used to educate people in areas beyond computer science. If you’ve created a Pi-powered educational project, please share it with us in the comments.

The post Pi-powered hands-on statistical model at the Royal Society appeared first on Raspberry Pi.

Top 10 Most Pirated Movies of The Week on BitTorrent – 06/19/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-061917/

This week we have three newcomers in our chart.

Wonder Woman is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Most downloaded movies via torrents
Rank | Last week | Movie name | IMDb rating / trailer
1  | (2) | Wonder Woman (TC) | 8.2 / trailer
2  | (…) | Power Rangers | 6.5 / trailer
3  | (1) | The Fate of the Furious | 6.7 / trailer
4  | (…) | Chips | 5.8 / trailer
5  | (5) | The Boss Baby | 6.5 / trailer
6  | (4) | John Wick: Chapter 2 | 8.0 / trailer
7  | (3) | Life | 6.8 / trailer
8  | (…) | The Mummy 2017 (HDTS) | 5.8 / trailer
9  | (7) | Logan | 8.6 / trailer
10 | (6) | Pirates of the Caribbean: Dead Men Tell No Tales (TS) | 7.1 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Top 10 Most Pirated Movies of The Week on BitTorrent – 06/12/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-061217/

This week we have two newcomers in our chart.

The Fate of the Furious, which came out as Web-DL this week, is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Most downloaded movies via torrents
Rank | Last week | Movie name | IMDb rating / trailer
1  | (5) | The Fate of the Furious | 6.7 / trailer
2  | (…) | Wonder Woman (TC) | 8.2 / trailer
3  | (7) | Life | 6.8 / trailer
4  | (1) | John Wick: Chapter 2 | 8.0 / trailer
5  | (2) | The Boss Baby | 6.5 / trailer
6  | (3) | Pirates of the Caribbean: Dead Men Tell No Tales (TS) | 7.1 / trailer
7  | (4) | Logan | 8.6 / trailer
8  | (…) | The Belko Experiment | 6.2 / trailer
9  | (8) | Ghost in The Shell (Subbed HDRip) | 6.9 / trailer
10 | (9) | Kong: Skull Island (Subbed HDRip) | 7.0 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Top 10 Most Pirated Movies of The Week on BitTorrent – 06/05/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-060517/

This week we have two newcomers in our chart.

John Wick: Chapter 2 is the most downloaded movie for the second week in a row.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Most downloaded movies via torrents
Rank | Last week | Movie name | IMDb rating / trailer
1  | (1) | John Wick: Chapter 2 | 8.0 / trailer
2  | (3) | The Boss Baby | 6.5 / trailer
3  | (…) | Pirates of the Caribbean: Dead Men Tell No Tales (TS) | 7.1 / trailer
4  | (2) | Logan | 8.6 / trailer
5  | (4) | The Fate of the Furious (subbed HDRip) | 6.7 / trailer
6  | (5) | A Cure For Wellness | 6.5 / trailer
7  | (…) | Life | 6.8 / trailer
8  | (7) | Ghost in The Shell (Subbed HDRip) | 6.9 / trailer
9  | (8) | Kong: Skull Island (Subbed HDRip) | 7.0 / trailer
10 | (6) | T2 Trainspotting | 7.7 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Even Fake Leaks Can Help in Hollywood’s Anti-Piracy Wars

Post Syndicated from Andy original https://torrentfreak.com/even-fake-leaks-can-help-in-hollywoods-anti-piracy-wars-170527/

On Monday 15 May, during a town hall meeting in New York, Disney CEO Bob Iger informed a group of ABC employees that hackers had stolen one of the company’s movies.

The hackers allegedly informed the company that if a ransom was paid, then the copy would never see the light of day. Predictably, Disney refused to pay, the most sensible decision under the circumstances.

Although Disney didn’t name the ‘hacked’ film, it was named by Deadline as ‘Pirates of the Caribbean: Dead Men Tell No Tales’. A week later, a video was published by the LA Times claiming that the movie was indeed the latest movie in the successful ‘Pirates’ franchise.

From the beginning, however, something seemed off. Having made an announcement about the ‘hack’ to ABC employees, Disney suddenly didn’t want to talk anymore, declining all requests for comment. That didn’t make much sense – why make something this huge public if you don’t want to talk about it?

With this and other anomalies nagging, TF conducted its own investigation and this Wednesday – a week and a half after Disney’s announcement and a full three weeks after the company was contacted with a demand for cash – we published our findings.

Our conclusion was that the ‘hack’ almost certainly never happened and, from the beginning, no one had ever spoken about the new Pirates film being the ‘hostage’. Everything pointed to a ransom being demanded for a non-existent copy of The Last Jedi and that the whole thing was a grand hoax.

Multiple publications tried to get a comment from Disney before Wednesday, yet none managed to do so. Without compromising our sources, TF also sent an outline of our investigation to the company to get to the bottom of this saga. We were ignored.

Then, out of the blue, one day after we published our findings, Disney chief Bob Iger suddenly got all talkative again. Speaking with Yahoo Finance, Iger confirmed what we suspected all along – it was a hoax.

“To our knowledge we were not hacked,” Iger said. “We had a threat of a hack of a movie being stolen. We decided to take it seriously but not react in the manner in which the person who was threatening us had required.”

Let’s be clear here, if there were to be a victim in all of this, that would quite clearly be Disney. The company didn’t ask to be hacked, extorted, or lied to. But why would a company quietly sit on a dubious threat for two weeks, then confidently make it public as fact but refuse to talk, only to later declare it a hoax under pressure?

That may never be known, but Disney and its colleagues sure managed to get some publicity and sympathy in the meantime.

Publications such as the LA Times placed the threat alongside the ‘North Korea’ Sony hack, the more recent Orange is the New Black leak, and the WannaCry ransomware attacks that plagued the web earlier this month.

“Hackers are seizing the content and instead of just uploading it, they’re contacting the studios and asking for a ransom. That is a pretty recent phenomenon,” said MPAA content protection chief Dean Marks in the same piece.

“It’s scary,” an anonymous studio executive added. “It could happen to any one of us.”

While that is indeed the case and there is a definite need to take things seriously, this particular case was never credible. Not a single person interviewed by TF believed that a movie was available. Furthermore, there were many signs that the person claiming to have the movie was definitely not another TheDarkOverlord.

In fact, when TF was investigating the leak, we had a young member of a release group more or less laugh at us for wasting our time trying to find out if it was real or not. Considering Disney’s massive resources (and the claim that the FBI had been involved), it’s difficult to conclude that the company hadn’t determined the same at a much earlier stage.

All that being said, trying to hoax Disney over a fake leak of The Last Jedi is an extremely dangerous game in its own right. Not only is extortion a serious crime, but dancing around pre-release leaks of Star Wars movies is just about as risky as it gets.

In June 2005, after releasing a workprint copy of Star Wars: Episode 3, the FBI took down private tracker EliteTorrents in a blaze of publicity. People connected to the leak received lengthy jail sentences. The same would happen again today, no doubt.

It might seem like fun and games now, but people screwing with Disney – for real, for money, or both – rarely come out on top. If a workprint of The Last Jedi does eventually become available (and of course that’s always a possibility), potential leakers should consider their options very carefully.

A genuine workprint leak could prompt the company to go to war, but in the meantime, fake-based extortion attempts only add fuel to the anti-piracy fire – in Hollywood’s favor.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Was The Disney Movie ‘Hacking Ransom’ a Giant Hoax?

Post Syndicated from Andy original https://torrentfreak.com/was-the-disney-movie-hacking-ransom-a-giant-hoax-170524/

Last Monday, during a town hall meeting in New York, Disney CEO Bob Iger informed a group of ABC employees that hackers had stolen one of the company’s movies.

The hackers allegedly said they’d keep the leak private if Disney paid them a ransom. In response, Disney indicated that it had no intention of paying. Setting dangerous precedents in this area is unwise, the company no doubt figured.

After Hollywood Reporter broke the news, Deadline followed up with a report which further named the movie as ‘Pirates of the Caribbean: Dead Men Tell No Tales’, a fitting movie to parallel an emerging real-life swashbuckling plot, no doubt.

What the Deadline article didn’t do was offer any proof that Pirates 5 was the movie in question. Out of the blue, however, it did mention that a purported earlier leak of The Last Jedi had been revealed by “online chatter” to be a fake. Disney refused to comment.

Armed with this information, TF decided to have a dig around. Was Pirates 5 being discussed within release groups as being available, perhaps? Initially, our inquiries drew a complete blank but then out of the blue we found ourselves in conversation with the person claiming to be the Disney ‘hacker’.

“I can provide the original emails sent to Disney as well as some other unknown details,” he told us via encrypted mail.

We immediately asked several questions. Was the movie ‘Pirates 5’? How did he obtain the movie? How much did he try to extort from Disney? ‘EMH,’ as we’ll call him, quickly replied.

“It’s The Last Jedi. Bob Iger never made public the title of the film, Deadline was just going off and naming the next film on their release slate,” we were told. “We demanded 2BTC per month until September.”

TF was then given copies of correspondence that EMH had been having with numerous parties about the alleged leak. They included discussions with various release groups, a cyber-security expert, and Disney.

As seen in the screenshot, the email was purportedly sent to Disney on May 1. The Hollywood Reporter article, published two weeks later, noted the following:

“The Disney chief said the hackers demanded that a huge sum be paid in Bitcoin. They said they would release five minutes of the film at first, and then in 20-minute chunks until their financial demands are met,” HWR wrote.

While the email to Disney looked real enough, the proof of any leaked pudding is in the eating. We asked EMH how he had demonstrated to Disney that he actually has the movie in his possession. Had screenshots or clips been sent to the company? We were initially told they had not (plot twists were revealed instead) so this immediately raised suspicions.

Nevertheless, EMH then went on to suggest that release groups had shown interest in the copy and he proved that by forwarding his emails with them to TF.

“Make sure they know there is still work to be done on the CGI characters. There are little dots on their faces that are visible. And the colour grading on some scenes looks a little off,” EMH told one group, who said they understood.

“They all understand its not a completed workprint.. that is why they are sought after by buyers.. exclusive stuff nobody else has or can get,” they wrote back.

“That why they pay big $$$ for it.. a completed WP could b worth $25,000,” the group’s unedited response reads.

But despite all the emails and discussion, we were still struggling to see how EMH had shown to anyone that he really had The Last Jedi. We then learned, however, that screenshots had been sent to blogger Sam Braidley, a Cyber Security MSc and Computer Science BSc Graduate.

Since the information sent to us by EMH confirmed discussion had taken place with Braidley concerning the workprint, we contacted him directly to find out what he knew about the supposed Pirates 5 and/or The Last Jedi leak. He was very forthcoming.

“A user going by the username of ‘Darkness’ commented on my blog about having a leaked copy of The Last Jedi from a contact he knew from within Lucas Films. Of course, this garnered a lot of interest, although most were cynical of its authenticity,” Braidley explained.

The claim that ‘Darkness’ had obtained the copy from a contact within Lucas was certainly of interest, since up to now the press narrative had been that Disney or one of its affiliates had been ‘hacked.’

After confirming that ‘Darkness’ used the same email as our “EMH,” we asked EMH again. Where had the movie been obtained from?

“Wasn’t hacked. Was given to me by a friend who works at a post production company owned by [Lucasfilm],” EMH said. After further prompting he reiterated: “As I told you, we obtained it from an employee.”

If they weren’t ringing loudly enough already, alarm bells were now well and truly clanging. Who would reveal where they’d obtained a super-hot leaked movie from when the ‘friend’ is only one step removed from the person attempting the extortion? Who would take such a massive risk?

Braidley wasn’t buying it either.

“I had my doubts following the recent [Orange is the New Black] leak from ‘The Dark Overlord,’ it seemed like someone trying to live off the back of its press success,” he said.

Braidley told TF that Darkness/EMH seemed keen for him to validate the release, as a member of a well-known release group didn’t believe that it was real, something TF confirmed with the member. A screenshot was duly sent over to Braidley for his seal of approval.

“The quality was very low and the scene couldn’t really show that it was in fact Star Wars, let alone The Last Jedi,” Braidley recalls, noting that other screenshots were considered not to be from the movie in question either.

Nevertheless, Darkness/EMH later told Braidley that another big release group had only declined to release the movie due to the possibility of security watermarks being present in the workprint.

Since no groups had heard of a credible Pirates 5 leak, the claims that release groups were in discussion over the leaking of The Last Jedi intrigued us. So, through trusted sources and direct discussion with members, we tried to learn more.

While all groups admitted being involved or at least being aware of discussions taking place, none appeared to believe that a movie had been obtained from Disney, was being held for ransom, or would ever be leaked.

“Bullshit!” one told us. “Fake news,” said another.

With not even well-known release groups believing that leaks of The Last Jedi or Pirates 5 are anywhere on the horizon, that brought us full circle to the original statement by Disney chief Bob Iger claiming that a movie had been stolen.

What we do know for sure is that everything reported initially by Hollywood Reporter about a ransom demand matches up with statements made by Darkness/EMH to TorrentFreak, Braidley, and several release groups. We also know from copy emails obtained by TF that the discussions with the release groups took place well before HWR broke the story.

With Disney not commenting on the record to either HWR or Deadline (publications known to be Hollywood-friendly) it seemed unlikely that TF would succeed where they had failed.

So, without compromising any of our sources, we gave a basic outline of our findings to a previously receptive Disney contact, in an effort to tie Darkness/EMH to the email address that he told us Disney already knew. Predictably, perhaps, we received no response.

At this point one has to wonder. If no credible evidence of a leak has been made available and the threats to leak the movie haven’t been followed through on, what was the point of the whole affair?

Money appears to have been the motive, but it seems likely that none will be changing hands. But would someone really bluff the leaking of a movie to a company like Disney in order to get a ‘ransom’ payment or scam a release group out of a few dollars? Perhaps.

Braidley informs TF that Darkness/EMH recently claimed that he’d had the copy of The Last Jedi since March but never had any intention of leaking it. He did, however, need money for a personal matter involving a family relative.

With this in mind, we asked Darkness/EMH why he’d failed to carry through with his threats to leak the movie, bit by bit, as his email to Disney claimed. He said there was never any intention of leaking the movie “until we are sure it wont be traced back” but “if the right group comes forward and meets our strict standards then the leak could come as soon as 2-3 weeks.”

With that now seeming increasingly unlikely (but hey, you never know), this might be the final chapter in what turns out to be the famous hacking of Disney that never was. Or, just maybe, undisclosed aces remain up sleeves.

“Just got another comment on my blog from [Darkness],” Braidley told TF this week. “He now claims that the Emoji movie has been leaked and is being held to ransom.”

Simultaneously, he was telling TF the same thing. ‘Hacking’ announcement from Sony coming soon? Stay tuned…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How to Control TLS Ciphers in Your AWS Elastic Beanstalk Application by Using AWS CloudFormation

Post Syndicated from Paco Hope original https://aws.amazon.com/blogs/security/how-to-control-tls-ciphers-in-your-aws-elastic-beanstalk-application-by-using-aws-cloudformation/

Securing data in transit is critical to the integrity of transactions on the Internet. Whether you log in to an account with your user name and password or give your credit card details to a retailer, you want your data protected as it travels across the Internet from place to place. One of the protocols in widespread use to protect data in transit is Transport Layer Security (TLS). Every time you access a URL that begins with “https” instead of just “http”, you are using a TLS-secured connection to a website.

To demonstrate that your application has a strong TLS configuration, you can use services like the one provided by SSL Labs. There are also open source, command-line-oriented TLS testing programs such as testssl.sh (which I do not cover in this post) and sslscan (which I cover later in this post). The goal of testing your TLS configuration is to provide evidence that weak cryptographic ciphers are disabled in your TLS configuration and only strong ciphers are enabled. In this blog post, I show you how to control the TLS security options for your secure load balancer in AWS CloudFormation, pass the TLS certificate and host name for your secure AWS Elastic Beanstalk application to the CloudFormation script as parameters, and then confirm that only strong TLS ciphers are enabled on the launched application by testing it with SSLLabs.

Background

In some situations, it’s not enough to simply turn on TLS with its default settings and call it done. Over the years, a number of vulnerabilities have been discovered in the TLS protocol itself with codenames such as CRIME, POODLE, and Logjam. Though some vulnerabilities were in specific implementations, such as OpenSSL, others were vulnerabilities in the Secure Sockets Layer (SSL) or TLS protocol itself.

The only way to avoid some TLS vulnerabilities is to ensure your web server uses only the latest version of TLS. Some organizations want to limit their TLS configuration to the highest possible security levels to satisfy company policies, regulatory requirements, or other information security requirements. In practice, such limitations usually mean using TLS version 1.2 (at the time of this writing, TLS 1.3 is in the works) and using only strong cryptographic ciphers. Note that forcing a high-security TLS connection in this manner limits which types of devices can connect to your web server. I address this point at the end of this post.

The default TLS configuration in most web servers is compatible with the broadest set of clients (such as web browsers, mobile devices, and point-of-sale systems). As a result, older ciphers and protocol versions are usually enabled. This is true for the Elastic Load Balancing load balancer that is created in your Elastic Beanstalk application as well as for web server software such as Apache and nginx.  For example, TLS versions 1.0 and 1.1 are enabled in addition to 1.2. The RC4 cipher is permitted, even though that cipher is too weak for the most demanding security requirements. If your application needs to prioritize the security of connections over compatibility with legacy devices, you must adjust the TLS encryption settings on your application. The solution in this post helps you make those adjustments.

Prerequisites for the solution

Before you implement this solution, you must have a few prerequisites in place:

  1. You must have a hosted zone in Amazon Route 53 where the name of the secure application will be created. I use example.com as my domain name in this post and assume that I host example.com publicly in Route 53. To learn more about creating and hosting a zone publicly in Route 53, see Working with Public Hosted Zones.
  2. You must choose a name to be associated with the secure app. In this case, I use secure.example.com as the DNS name to be associated with the secure app. This means that I’m trying to create an Elastic Beanstalk application whose URL will be https://secure.example.com/.
  3. You must have a TLS certificate hosted in AWS Certificate Manager (ACM). This certificate must be issued with the name you chose in Step 2. If you are new to ACM, see Getting Started. If you are already familiar with ACM, request a certificate and get its Amazon Resource Name (ARN). Look up the ARN for the certificate that you created by opening the ACM console. The ARN looks something like: arn:aws:acm:eu-west-1:111122223333:certificate/12345678-abcd-1234-abcd-1234abcd1234.
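If you have several certificates and need to find the right ARN, a quick AWS CLI query (run in the region where you requested the certificate) looks like this:

aws acm list-certificates \
    --query 'CertificateSummaryList[*].[DomainName,CertificateArn]' \
    --output table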

Implementing the solution

You can use two approaches to control the TLS ciphers used by your load balancer: one is to use a predefined protocol policy from AWS, and the other is to write your own protocol policy that lists exactly which ciphers should be enabled. There are many ciphers and options that can be set, so the appropriate AWS predefined policy is often the simplest policy to use. If you have to comply with an information security policy that requires enabling or disabling specific ciphers, you will probably find it easiest to write a custom policy listing only the ciphers that are acceptable to your requirements.

AWS released two predefined TLS policies on March 10, 2017: ELBSecurityPolicy-TLS-1-1-2017-01 and ELBSecurityPolicy-TLS-1-2-2017-01. These policies restrict TLS negotiations to TLS 1.1 and 1.2, respectively. You can find a good comparison of the ciphers that these policies enable and disable in the HTTPS listener documentation for Elastic Load Balancing. If your requirements are simply “support TLS 1.1 and later” or “support TLS 1.2 and later,” those AWS predefined cipher policies are the best place to start. If you need to control your cipher choice with a custom policy, I show you in this post which lines of the CloudFormation template to change.

Download the predefined policy CloudFormation template

Many AWS customers rely on CloudFormation to launch their AWS resources, including their Elastic Beanstalk applications. To change the ciphers and protocol versions supported on your load balancer, you must put those options in a CloudFormation template. You can store your site’s TLS certificate in ACM and create the corresponding DNS alias record in the correct zone in Route 53.

To start, download the CloudFormation template that I have provided for this blog post, or deploy the template directly in your environment. This template creates a CloudFormation stack in your default VPC that contains two resources: an Elastic Beanstalk application that deploys a standard sample PHP application, and a Route 53 record in a hosted zone. This CloudFormation template selects the AWS predefined policy called ELBSecurityPolicy-TLS-1-2-2017-01 and deploys it.

Launching the sample application from the CloudFormation console

In the CloudFormation console, choose Create Stack. You can either upload the template through your browser, or load the template into an Amazon S3 bucket and type the S3 URL in the Specify an Amazon S3 template URL box.

After you click Next, you will see that there are three parameters defined: CertificateARN, ELBHostName, and HostedDomainName. Set the CertificateARN parameter to the ARN of the certificate you want to use for your application. Set the ELBHostName parameter to the hostname part of the URL. For example, if your URL were https://secure.example.com/, the HostedDomainName parameter would be example.com and the ELBHostName parameter would be secure.

For the sample application, choose Next and then choose Create, and the CloudFormation stack will be created. For your own applications, you might need to set other options such as a database, VPC options, or Amazon SNS notifications. For more details, see AWS Elastic Beanstalk Environment Configuration. To deploy an application other than our sample PHP application, create your own application source bundle.

Launching the sample application from the command line

In addition to launching the sample application from the console, you can specify the parameters from the command line. Because the template uses parameters, you can launch multiple copies of the application, specifying different parameters for each copy. To launch the application from a Linux command line with the AWS CLI, insert the correct values for your application, as shown in the following command.

aws cloudformation create-stack --stack-name "SecureSampleApplication" \
--template-url https://<URL of your CloudFormation template in S3> \
--parameters ParameterKey=CertificateARN,ParameterValue=<Your ARN> \
ParameterKey=ELBHostName,ParameterValue=<Your Host Name> \
ParameterKey=HostedDomainName,ParameterValue=<Your Domain Name>

When that command exits, it prints the StackID of the stack it created. Save that StackID for later so that you can fetch the stack’s outputs from the command line.

Using a custom cipher specification

If you want to specify your own cipher choices, you can use the same CloudFormation template and change two lines. Let’s assume your information security policies require you to disable any ciphers that use Cipher Block Chaining (CBC) mode encryption. These ciphers are enabled in the ELBSecurityPolicy-TLS-1-2-2017-01 managed policy, so to satisfy that security requirement, you have to modify the CloudFormation template to use your own protocol policy.

In the template, locate the three lines that define the TLSHighPolicy.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLReferencePolicy
  Value:      ELBSecurityPolicy-TLS-1-2-2017-01

Change the OptionName and Value for the TLSHighPolicy. Instead of referring to the AWS predefined policy by name, explicitly list all the ciphers you want to use. Change those three lines so they look like the following.

- Namespace:  aws:elb:policies:TLSHighPolicy
  OptionName: SSLProtocols
  Value:      Protocol-TLSv1.2,Server-Defined-Cipher-Order,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-RSA-AES128-GCM-SHA256

This protocol policy stipulates that the load balancer should:

  • Negotiate connections using only TLS 1.2.
  • Ignore any attempts by the client (for example, the web browser or mobile device) to negotiate a weaker cipher.
  • Accept four specific, strong combinations of cipher and key exchange—and nothing else.

The protocol policy enables only TLS 1.2, strong ciphers that do not use CBC mode encryption, and strong key exchange.

Connect to the secure application

When your CloudFormation stack is in the CREATE_COMPLETED state, you will find three outputs:

  1. The public DNS name of the load balancer
  2. The secure URL that was created
  3. TestOnSSLLabs output that contains a direct link for testing your configuration

You can either enter the secure URL in a web browser (for example, https://secure.example.com/), or click the link in the Outputs to open your sample application and see the demo page. Note that you must use HTTPS—this template has disabled HTTP on port 80 and only listens with HTTPS on port 443.
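You can also confirm the protocol behavior from a shell with openssl s_client. With this template’s policy, the first handshake should succeed and the second should be refused (substitute your own host name):

openssl s_client -connect secure.example.com:443 -tls1_2 </dev/null
openssl s_client -connect secure.example.com:443 -tls1 </dev/null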

If you launched your application through the command line, you can view the CloudFormation outputs using the command line as well. You need to know the StackId of the stack you launched and insert it in the following stack-name parameter.

aws cloudformation describe-stacks --stack-name "<ARN of Your Stack>" \
--query 'Stacks[0].Outputs'

Test your application over the Internet with SSLLabs

The easiest way to confirm that the load balancer is using the secure ciphers that we chose is to enter the URL of the load balancer in the form on SSL Labs’ SSL Server Test page. If you do not want the name of your load balancer to be shared publicly on SSLLabs.com, select the Do not show the results on the boards check box. After a minute or two of testing, SSLLabs gives you a detailed report of every cipher it tried and how your load balancer responded. This test simulates many devices that might connect to your website, including mobile phones, desktop web browsers, and software libraries such as Java and OpenSSL. The report tells you whether these clients would be able to connect to your application successfully.

Assuming all went well, you should receive an A grade for the sample application. The biggest contributors to the A grade are:

  • Supporting only TLS 1.2, and not TLS 1.1, TLS 1.0, or SSL 3.0
  • Supporting only strong ciphers such as AES, and not weaker ciphers such as RC4
  • Having an X.509 public key certificate issued correctly by ACM

How to test your application privately with sslscan

You might not be able to reach your Elastic Beanstalk application from the Internet because it might be in a private subnet that is only accessible internally. If you want to test the security of your load balancer’s configuration privately, you can use one of the open source command-line tools such as sslscan. You can install and run the sslscan command on any Amazon EC2 Linux instance or even from your own laptop. Be sure that the Elastic Beanstalk application you want to test will accept an HTTPS connection from your Amazon Linux EC2 instance or from your laptop.

The easiest way to get sslscan on an Amazon Linux EC2 instance is to:

  1. Enable the Extra Packages for Enterprise Linux (EPEL) repository.
  2. Run sudo yum install sslscan.
  3. After the command runs successfully, run sslscan secure.example.com to scan your application for supported ciphers.
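On an Amazon Linux 2 instance, those three steps might look like the following sketch (the EPEL step differs on other Amazon Linux versions):

sudo amazon-linux-extras install epel -y
sudo yum install -y sslscan
sslscan secure.example.com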

The results are similar to Qualys’ results at SSLLabs.com, but the sslscan tool does not summarize and evaluate the results to assign a grade. It just reports whether your application accepted a connection using the cipher that it tried. You must decide for yourself whether that set of accepted connections represents the right level of security for your application. If you have been asked to build a secure load balancer that meets specific security requirements, the output from sslscan helps to show how the security of your application is configured.

The following sample output shows a small subset of the total output of the sslscan tool.

Accepted TLS12 256 bits AES256-GCM-SHA384
Accepted TLS12 256 bits AES256-SHA256
Accepted TLS12 256 bits AES256-SHA
Rejected TLS12 256 bits CAMELLIA256-SHA
Failed TLS12 256 bits PSK-AES256-CBC-SHA
Rejected TLS12 128 bits ECDHE-RSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-ECDSA-AES128-GCM-SHA256
Rejected TLS12 128 bits ECDHE-RSA-AES128-SHA256

An Accepted connection is one that was successful: the load balancer and the client were both able to use the indicated cipher. Failed and Rejected connections are connections where the load balancer would not accept the level of security that the client was requesting. As a result, the load balancer closed the connection instead of communicating insecurely. The difference between Failed and Rejected is based on whether the TLS connection was closed cleanly.

Comparing the two policies

The main difference between our custom policy and the AWS predefined policy is whether or not CBC ciphers are accepted. The test results with both policies are identical except for the results shown in the following table. The only change in the policy, and therefore the only change in the results, is that the cipher suites using CBC ciphers have been disabled.

Cipher Suite Name           | Encryption Algorithm | Key Size (bits) | ELBSecurityPolicy-TLS-1-2-2017-01 | Custom Policy
ECDHE-RSA-AES256-GCM-SHA384 | AESGCM | 256 | Enabled | Enabled
ECDHE-RSA-AES256-SHA384     | AES    | 256 | Enabled | Disabled
AES256-GCM-SHA384           | AESGCM | 256 | Enabled | Disabled
AES256-SHA256               | AES    | 256 | Enabled | Disabled
ECDHE-RSA-AES128-GCM-SHA256 | AESGCM | 128 | Enabled | Enabled
ECDHE-RSA-AES128-SHA256     | AES    | 128 | Enabled | Disabled
AES128-GCM-SHA256           | AESGCM | 128 | Enabled | Disabled
AES128-SHA256               | AES    | 128 | Enabled | Disabled

Strong ciphers and compatibility

The custom policy described in the previous section prevents legacy devices and older versions of software and web browsers from connecting. The output at SSLLabs provides a list of devices and applications (such as Internet Explorer 10 on Windows 7) that cannot connect to an application that uses the TLS policy. By design, the load balancer will refuse to connect to a device that is unable to negotiate a connection at the required levels of security. Users who use legacy software and devices will see different errors, depending on which device or software they use (for example, Internet Explorer on Windows, Chrome on Android, or a legacy mobile application). The error messages will be some variation of “connection failed” because the Elastic Load Balancer closes the connection without responding to the user’s request. This behavior can be problematic for websites that must be accessible to older desktop operating systems or older mobile devices.

If you need to support legacy devices, adjust the TLSHighPolicy in the CloudFormation template. For example, if you need to support web browsers on Windows 7 systems (and you cannot enable TLS 1.2 support on those systems), you can change the policy to enable TLS 1.1. To do this, change the value of SSLReferencePolicy to ELBSecurityPolicy-TLS-1-1-2017-01.
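
Because the post’s template is deployed through CloudFormation, the same change can be scripted. Here is a hedged boto3 sketch of updating an existing stack’s SSLReferencePolicy parameter; the stack name is hypothetical, and any other parameters your stack uses would need matching UsePreviousValue entries.

import boto3

cloudformation = boto3.client("cloudformation")

# Re-deploy the existing stack, switching the TLS policy parameter to the
# predefined policy that also accepts TLS 1.1. The stack name below is
# hypothetical; keep previous values for the template's other parameters.
cloudformation.update_stack(
    StackName="secure-beanstalk-app",
    UsePreviousTemplate=True,
    Parameters=[
        {
            "ParameterKey": "SSLReferencePolicy",
            "ParameterValue": "ELBSecurityPolicy-TLS-1-1-2017-01",
        },
        # Add {"ParameterKey": ..., "UsePreviousValue": True} entries here
        # for each of the template's other parameters.
    ],
    Capabilities=["CAPABILITY_IAM"],
)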

Enabling legacy protocol versions such as TLS version 1.1 will allow older devices to connect, but then the application may not be compliant with the information security policies or business requirements that require strong ciphers.

Conclusion

Using Elastic Beanstalk, Route 53, and ACM can help you launch secure applications that are designed to not only protect data but also meet regulatory compliance requirements and your information security policies. The TLS policy, either custom or predefined, allows you to control exactly which cryptographic ciphers are enabled on your Elastic Load Balancer. The TLS test results provide you with clear evidence you can use to demonstrate compliance with security policies or requirements. The parameters in this post’s CloudFormation template also make it adaptable and reusable for multiple applications. You can use the same template to launch different applications on different secure URLs by simply changing the parameters that you pass to the template.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the CloudFormation forum.

– Paco

Fake ‘Pirates Of The Caribbean’ Leaks Troll Pirates and Reporters

Post Syndicated from Ernesto original https://torrentfreak.com/fake-pirates-of-the-caribbean-leaks-troll-pirates-and-reporters-170520/

Earlier this week, news broke that Disney was being extorted by hackers who were threatening to release an upcoming film, reportedly ‘Pirates of the Caribbean: Dead Men Tell No Tales.’

This prompted pirates and reporters to watch torrent sites for copies of the film, and after a few hours the first torrents did indeed appear.

The initial torrent spotted by TF was just over 200MB, which is pretty small. As it turned out, the file was fake and linked to some kind of survey scam.

Fake torrents are quite common and even more so with highly anticipated releases like a “Pirates Of The Caribbean” leak.

Soon after the first fake, another one followed, this one carrying the name of movie distribution group ETRG. Once the first people had downloaded a copy, it quickly became clear that this was spam as well, and the torrent was swiftly removed from The Pirate Bay.

Unfortunately, however, some reporters confused the fake releases with the real deal. Without verifying the actual content of the files, news reports claimed that Pirates Of The Caribbean had indeed leaked.

“Hackers Dump Pirates of the Caribbean On Torrent Sites Ahead of Premiere,” Softpedia reported, followed by the award-winning security blog Graham Cluley who wrote that the “New Pirates of the Caribbean movie leaked online.”

Leaks? (via Softpedia)

The latter was also quick to point to a likely source of the leak. Hacker group The Dark Overlord was cited as the prime candidate, even though there were no signs linking it to the leak in question. This is odd for a group that regularly takes full public credit for its achievements.

News site Fossbytes also appeared confident that The Dark Overlord was behind the reported (but fake) leaks, pretty much stating it as fact.

“The much-awaited Disney movie Pirates Of The Caribbean 5 Dead Men Tell No Tales was compromised by a hacker group called TheDarkOverlord,” the site reported.

Things got more confusing when the torrent files in question disappeared from The Pirate Bay. In reality, moderators simply removed the spam, as they usually do, but the reporters weren’t convinced and speculated that the ‘hackers’ could have reuploaded the files elsewhere.

A few hours later another ‘leak’ appeared on The Pirate Bay, confirming these alleged suspicions. This time it was a 54GB file which actually had “DARK-OVERL” in the title.

DARK-OVERL!!!

Soon after the torrent appeared online, someone added a spam comment suggesting that it was of decent quality. One of the reporters picked this up and wrote that “comments indicate the quality is quite high.”

Again, at this point, none of the reporters had verified that the leaks were real. Still, the news spread further and further.

TorrentFreak also kept an eye on the developments and reached out to a source who said he’d obtained a copy of the 54GB release. This pirate was curious, but didn’t get what he was hoping for.

The file in question did indeed contain video material, he informed us. However, instead of an unreleased copy of Pirates Of The Caribbean 5, he says he got several copies of an animated movie – Trolls…

“Turns out, the iso contains a couple of .rar files that house a bunch of Trolls DVDs. I hope everyone learned their lesson, if it’s too good to be true it probably is.”

Indeed it is.

In the spirit of this article we have to stress that we didn’t verify the contents of the (now deleted) “Trolls” torrent ourselves. However, it’s clear that the fake leaks trolled several writers and pirates.

We reached out to Softpedia reporter Gabriela Vatu and Graham Cluley, who were both very receptive to our concerns and updated the initial articles to state that the leaks were not verified.

Let’s hope that this will stop the rumors from spreading any further.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hackers Demand Ransom Over Stolen Copy of ‘Pirates of the Caribbean 5’

Post Syndicated from Ernesto original https://torrentfreak.com/hackers-demand-ransom-over-stolen-copy-of-pirates-of-the-caribbean-5-170516/

During a town hall meeting in New York on Monday, Disney CEO Bob Iger informed a group of ABC employees that hackers have stolen one of the company’s movies.

The hackers offered to keep it away from public eyes in exchange for ransom paid in Bitcoin but Disney says it has no intention to pay.

Although Iger did not mention the movie by name during the meeting, Deadline reports that it’s a copy of ‘Pirates of the Caribbean: Dead Men Tell No Tales.’

The fifth movie in the ‘Pirates‘ franchise, starring Johnny Depp, is officially scheduled to appear in theaters next week. Needless to say, a high-quality leak at this point will be seen as a disaster for Disney.

The “ransom” demand from the hacker is reminiscent of another prominent entertainment industry leak, where the requested amount of Bitcoin was not paid.

Just a few weeks ago a group calling itself TheDarkOverlord (TDO) published the premiere episode of the fifth season of Netflix’s Orange is The New Black, followed by nine more episodes a few hours later.

Despite Netflix’s anti-piracy efforts, the ten leaked episodes of Orange is The New Black remain popular on many torrent indexes and pirate streaming sites.

There is no indication that the previous and threatened leaks are related in any way. TorrentFreak has seen a list of movies and TV-shows TDO said they have in their possession, but the upcoming ‘Pirates’ movie isn’t among them.

The Disney hackers have threatened to release the movie in increments, but the movie studio is hoping that they won’t follow through on their threats.

Thus far there haven’t been any reports of leaked parts of the fifth Pirates of the Caribbean film. Disney, meanwhile, is working with the FBI to track down the people responsible for the hack.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Spotify’s Beta Used ‘Pirate’ MP3 Files, Some From Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/spotifys-beta-used-pirate-mp3-files-some-from-pirate-bay-170509/

While some pirates will probably never be tempted away from the digital high seas, over the past decade millions have ditched or tapered down their habit with the help of Spotify.

It’s no coincidence that from the very beginning more than a decade ago, the streaming service had more than a few things in common with the piracy scene.

Spotify CEO Daniel Ek originally worked with uTorrent creator Ludvig ‘Ludde’ Strigeus before the pair sold uTorrent to BitTorrent Inc. and began work on Spotify. Later, the company told TF that pirates were their target.

“Spotify is a new way of enjoying music. We believe Spotify provides a viable alternative to music piracy,” the company said.

“We think the way forward is to create a service better than piracy, thereby converting users into a legal, sustainable alternative which also enriches the total music experience.”

The technology deployed by Spotify was also familiar. Like the majority of ‘pirate’ platforms at the time, Spotify operated a peer-to-peer (P2P) system which grew to become one of the largest on the Internet. It was shut down in 2011.

But in the clearest nod to pirates, Spotify was available for free, supported by ads if the user desired. This was the platform’s greatest asset as it sought to win over a generation that had grown accustomed to gorging on free MP3s. Interestingly, however, an early Pirate Bay figure has now revealed that Spotify also had a use for the free content floating around the Internet.

As one of the early members of Sweden’s infamous Piratbyrån (piracy bureau), Rasmus Fleischer was also one of the key figures at The Pirate Bay. Over the years he’s been a writer, researcher, debater and musician, and in 2012 he finished his PhD thesis on “music’s political economy.”

As part of a five-person team, Fleischer is now writing a book about Spotify. Titled ‘Spotify Teardown – Inside the Black Box of Streaming Music’, the book aims to shine light on the history of the famous music service and also spills the beans on a few secrets.

In an interview with Sweden’s DI.se, Fleischer reveals that when Spotify was in early beta, the company used unlicensed music to kick-start the platform.

“Spotify’s beta version was originally a pirate service. It was distributing MP3 files that the employees happened to have on their hard drives,” he reveals.

Rumors that early versions of Spotify used ‘pirate’ MP3s have been floating around the Internet for years. People who had access to the service in the beginning later reported downloading tracks that contained ‘Scene’ labeling, tags, and formats, which are the tell-tale signs that content hadn’t been obtained officially.

Solid proof has been more difficult to come by but Fleischer says he knows for certain that Spotify was using music obtained not only from pirate sites, but the most famous pirate site of all.

According to the writer, a few years ago he was involved with a band that decided to distribute their music on The Pirate Bay instead of the usual outlets. Soon after, the album appeared on Spotify’s beta service.

“I thought that was funny. So I emailed Spotify and asked how they obtained it. They said that ‘now, during the test period, we will use music that we find’,” Fleischer recalls.

For a company with attracting pirates built into its DNA, it’s perhaps fitting that it tempted them with the same bait found on pirate sites. Certainly, the company’s historically pragmatic attitude towards piracy means that few will be shouting ‘hypocrites’ at the streaming platform now.

Indeed, according to Fleischer the successes and growth of Spotify are directly linked to the temporary downfall of The Pirate Bay following the raid on the site in 2006, and the lawsuits that followed.

“The entire Spotify beta period and its early launch history is in perfect sync with the Pirate Bay process,” Fleischer explains.

“They would not have had as much attention if they had not been able to surf that wave. The company’s early history coincides with the Pirate Party becoming a hot topic, and the trial of the Pirate Bay in the Stockholm District Court.”

In 2013, Fleischer told TF that The Pirate Bay had “helped catalyze so-called ‘new business models’,” and it now appears that Spotify is reaping the benefits and looks set to keep doing so into the future.

An in-depth interview with Rasmus Fleischer will be published here soon, including an interesting revelation detailing how TorrentFreak readers positively affected the launch of Spotify in the United States.

Spotify Teardown – Inside the Black Box of Streaming Music will be published in early 2018.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Introducing an Easier Way to Delegate Permissions to AWS Services: Service-Linked Roles

Post Syndicated from Abhishek Pandey original https://aws.amazon.com/blogs/security/introducing-an-easier-way-to-delegate-permissions-to-aws-services-service-linked-roles/

Some AWS services create and manage AWS resources on your behalf. To do this, these services require you to delegate permissions to them by using AWS Identity and Access Management (IAM) roles. Today, AWS IAM introduces service-linked roles, which give you an easier and more secure way to delegate permissions to AWS services. To start, you can use service-linked roles with Amazon Lex, a service that enables you to build conversational interfaces in any application by using voice and text. Over time, more AWS services will use service-linked roles as a way for you to delegate permissions to them to create and manage AWS resources on your behalf. In this blog post, I walk through the details of service-linked roles and show how to use them.

Creation and management of service-linked roles

Each service-linked role links to an AWS service, which is called the linked service. Service-linked roles provide a secure way to delegate permissions to AWS services because only the linked service can assume a service-linked role. Additionally, AWS automatically defines and sets the permissions of service-linked roles, depending on the actions that the linked service performs on your behalf. This makes it easier for you to manage the permissions you delegate to AWS services. AWS allows only those changes to service-linked roles that do not remove the permissions required by the linked service to manage your resources, preventing you from making any changes that would leave your AWS resources in an inconsistent state. Service-linked roles also help you meet your monitoring and auditing requirements because all actions performed on your behalf by an AWS service using a service-linked role appear in your AWS CloudTrail logs.

When you work with an AWS service that uses service-linked roles, the service automatically creates a service-linked role for you. After that, whenever the service must act on your behalf to manage your resources, it assumes the service-linked role. You can view the details of the service-linked roles in your account by using the IAM console, IAM APIs, or the AWS CLI.

Service-linked roles follow a specific naming convention that includes a mandatory prefix that is defined by AWS and an optional suffix defined by you. The examples in the following table show how the role names of service-linked roles may appear.

Service-linked role name | Prefix | Optional suffix
AWSServiceRoleForLexBots | AWSServiceRoleForLexBots | Not set
AWSServiceRoleForElasticBeanstalk_myRole | AWSServiceRoleForElasticBeanstalk | myRole

If you are the administrator of your account and you do not want to grant permissions to other users to create roles or delegate permissions to AWS services, you can create service-linked roles for users in your account by using the IAM console, IAM APIs, or the AWS CLI. For more information about how to create service-linked roles through IAM, see the IAM documentation about creating a role to delegate permissions to an AWS service.

To create a service-linked role or to enable an AWS service to create one on your behalf, you must have permission for the iam:CreateServiceLinkedRole action. The following IAM policy grants the permission to create service-linked roles for Amazon Lex.

{ 
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreationOfServiceLinkedRoleForLex",
            "Effect": "Allow",
            "Action": ["iam:CreateServiceLinkedRole"],
            "Resource": ["arn:aws:iam::*:role/aws-service-role/lex.amazonaws.com/AWSServiceRoleForLex*"],
	    "Condition": {
	         "StringLike":{
			"iam:AWSServiceName": "lex.amazonaws.com"
		  }
 	    }
        }
    ]
}

The preceding policy allows the iam:CreateServiceLinkedRole action when the linked service is Amazon Lex, and the name of the service-linked role starts with AWSServiceRoleForLex. For more information, see Working with Policies.
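
With that permission in place, the role can also be created programmatically. The following minimal boto3 sketch creates the service-linked role for Amazon Lex; AWS attaches the permission and trust policies automatically, so nothing else needs to be specified.

import boto3

iam = boto3.client("iam")

# Create the service-linked role for Amazon Lex. AWS defines the role's
# permission and trust policies, and only lex.amazonaws.com can assume it.
response = iam.create_service_linked_role(
    AWSServiceName="lex.amazonaws.com",
    Description="Service-linked role for Amazon Lex",
)

role = response["Role"]
print(role["RoleName"], role["Arn"])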

If you no longer wish to use a specific AWS service, you can revoke permissions for that service by deleting the service-linked role. You can do this from the linked service, and the service might require you to delete the resources that depend on the service-linked role. This helps ensure that you do not inadvertently delete a role that is required for your AWS resources to function properly. To learn more about how to delete a service-linked role, see the linked service’s documentation.

Permissions of service-linked roles

Just like existing IAM roles, the permissions of service-linked roles come from two policies: a permission policy and a trust policy. The permission policy determines what the role can and cannot do, and the trust policy defines who can assume the role. AWS automatically sets the permission and trust policies of service-linked roles.

For the permission policy, service-linked roles use an AWS managed policy. This means that when a service adds a new feature, AWS automatically updates the managed policy to enable the new functionality without requiring you to change the policy. In most cases, you do not have to update the permission policy of a service-linked role. However, some services may require you to add specific permissions to the role such as access to a specific Amazon S3 bucket. To learn more about how to add permissions if a service requires specific permissions, see the linked service’s documentation.

Only the linked AWS service can assume a service-linked role, which is why you cannot modify the trust policy of a service-linked role. You can allow your users to create service-linked roles for AWS services while not permitting them to escalate their own privileges. For example, imagine Alice is a developer on your team and she wants to delegate permissions to Amazon Lex. When Alice creates a service-linked role for Amazon Lex, AWS automatically attaches the permission and trust policies. The permission policy includes only the permissions that Amazon Lex needs to manage your resources (this best practice is known as least privilege), and the trust policy defines Amazon Lex as the trusted entity. As a result, Alice is able to create a service-linked role to delegate permissions to Amazon Lex. However, she is unable to edit the trust policy to include additional trusted entities. This prevents her from granting unapproved access to other users or escalating her own privileges, while still having the necessary permissions to create service-linked roles for Amazon Lex.

How to create a service-linked role by using the IAM console

The following steps lead you through creating a service-linked role by using the IAM console. However, before you create a service-linked role, make sure you have the right permissions to do so.

To create a service-linked role by using the IAM console:

  1. Navigate to the IAM console and choose Roles in the navigation pane.
    Screenshot of the IAM console
  2. Choose Create new role.
    Screenshot showing the "Create new role" button
  3. On the Select role type page, in the AWS service-linked role section, choose the AWS service for which you want to create the role. For this example, I choose Amazon Lex – Bots.
    Screenshot of choosing "Amazon Lex - Bots"
  4. Notice that the role name prefix is automatically populated. Type the role name suffix for the service-linked role. Some AWS services, such as Amazon Lex, do not support custom suffixes, in which case you should leave the role name suffix box blank.
    Screenshot of the role name suffix box
  5. Include a description of the new role. Notice that IAM automatically suggests a description for this role, which you can edit. For this example, I keep the suggested description.
    Screenshot of the "Role description" box
  6. Choose Create role. After the role is created, you can view it in the IAM console. Service-linked roles are marked with a cube-shaped icon in the console to help you distinguish these roles from other roles in your account.

Conclusion

With service-linked roles, delegating permissions to AWS services is easier because when you work with an AWS service that uses these roles, the service creates the role for you. You do not have to create IAM policies for delegating permissions to AWS services. AWS blocks any changes to these roles that might interfere with your AWS resources. Delegation of permissions is secure because only the linked service is able to use these roles.

To see which AWS services support service-linked roles, see AWS services that work with IAM. If you have any comments about service-linked roles, submit a comment in the “Comments” section below. If you have questions about working with service-linked roles, please start a new thread on the IAM forum.

– Abhishek

New- Introducing AWS CodeStar – Quickly Develop, Build, and Deploy Applications on AWS

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/new-aws-codestar/

It wasn’t too long ago that I was on a development team working toward completing a software project by a release deadline and facing the challenges most software teams face today in developing applications. Challenges such as new project environment setup, team member collaboration, and the day-to-day task of keeping track of the moving pieces of code, configuration, and libraries for each development build. Today, with companies’ need to innovate and get to market faster, it has become essential to make it easier and more efficient for development teams to create, build, and deploy software.

Unfortunately, many organizations face some key challenges in their quest for a more agile, dynamic software development process. The first challenge most new software projects face is the lengthy setup process that developers have to complete before they can start coding. This process may include setting up of IDEs, getting access to the appropriate code repositories, and/or identifying infrastructure needed for builds, tests, and production.

Collaboration is another challenge that most development teams may face. In order to provide a secure environment for all members of the project, teams have to frequently set up separate projects and tools for various team roles and needs. In addition, providing information to all stakeholders about updates on assignments, the progression of development, and reporting software issues can be time-consuming.

Finally, most companies desire to increase the speed of their software development and reduce the time to market by adopting best practices around continuous integration and continuous delivery. Implementing these agile development strategies may require companies to spend time in educating teams on methodologies and setting up resources for these new processes.

Now Presenting: AWS CodeStar

To help development teams ease the challenges of building software while helping to increase the pace of releasing applications and solutions, I am excited to introduce AWS CodeStar.

AWS CodeStar is a cloud service designed to make it easier to develop, build, and deploy applications on AWS by simplifying the setup of your entire development project. AWS CodeStar includes project templates for common development platforms to enable provisioning of projects and resources for coding, building, testing, deploying, and running your software project.

The key benefits of the AWS CodeStar service are:

  • Easily create new projects using templates for Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda, with five different programming languages: JavaScript, Java, Python, Ruby, and PHP. By selecting a template, the service will provision the underlying AWS services needed for your project and application.
  • Unified experience for access and security policies management for your entire software team. Projects are automatically configured with appropriate IAM access policies to ensure a secure application environment.
  • Pre-configured project management dashboard for tracking various activities, such as code commits, build results, deployment activity and more.
  • Working sample code to help you get up and running quickly, enabling you to use your favorite IDEs, like Visual Studio, Eclipse, or any code editor that supports Git.
  • Automated configuration of a continuous delivery pipeline for each project using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.
  • Integration with Atlassian JIRA Software for issue management and tracking directly from the AWS CodeStar console

With AWS CodeStar, development teams can build an agile software development workflow that not only increases the speed at which teams build and deploy software and bug fixes, but also enables developers to build software that is more in line with customers’ requests and needs.

An example of a responsive development workflow using AWS CodeStar is shown below:

Journey Into AWS CodeStar

Now that you know a little more about the AWS CodeStar service, let’s jump into using the service to set up a web application project. First, I’ll go into the AWS CodeStar console and click the Start a project button.

If you have not set up the appropriate IAM permissions, AWS CodeStar will show a dialog box requesting permission to administer AWS resources on your behalf. I will click the Yes, grant permissions button to grant AWS CodeStar the appropriate permissions to other AWS resources.

However, I received a warning that I do not have administrative permissions to AWS CodeStar as I have not applied the correct policies to my IAM user. If you want to create projects in AWS CodeStar, you must apply the AWSCodeStarFullAccess managed policy to your IAM user or have an IAM administrative user with full permissions for all AWS services.

Now that I have added the aforementioned permissions in IAM, I can use the service to create a project. To start, I simply click on the Create a new project button and I am taken to the hub of the AWS CodeStar service.

At this point, I am presented with over twenty different AWS CodeStar project templates to choose from in order to provision various environments for my software development needs. Each project template specifies the AWS Service used to deploy the project, the supported programming language, and a description of the type of development solution implemented. AWS CodeStar currently supports the following AWS Services: Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. Using preconfigured AWS CloudFormation templates, these project templates can create software development projects like microservices, Alexa skills, web applications, and more with a simple click of a button.

For my first AWS CodeStar project, I am going to build a serverless web application using Node.js and AWS Lambda using the Node.js/AWS Lambda project template.

You will notice that for this template AWS CodeStar sets up all of the tools and services you need for a development project, including an AWS CodePipeline connected with the services AWS CodeBuild, AWS CloudFormation, and Amazon CloudWatch. I’ll name my new AWS CodeStar project, TaraWebProject, and click Create Project.

Since this is my first time creating an AWS CodeStar project, I will see a dialog that asks about the setup of my AWS CodeStar user settings. I’ll type Tara in the textbox for the Display Name and add my email address in the Email textbox. This information is how I’ll appear to others in the project.

The next step is to select how I want to edit my project code. I have decided to edit my TaraWebProject project code using the Visual Studio IDE. With Visual Studio, it will be essential for me to configure it to use the AWS Toolkit for Visual Studio 2015 to access AWS resources while editing my project code. On this screen, I am also presented with the link to the AWS CodeCommit Git repository that AWS CodeStar configured for my project.

The provisioning and tool setup for my software development project is now complete. I’m presented with the AWS CodeStar dashboard for my software project, TaraWebProject, which allows me to manage the resources for the project. This includes the management of resources, such as code commits, team membership and wiki, continuous delivery pipeline, Jira issue tracking, project status and other applicable project resources.

What is really cool about AWS CodeStar for me is that it provides a working sample project from which I can start the development of my serverless web application. To view the sample of my new web application, I will go to the Application endpoints section of the dashboard and click the link provided.

A new browser window will open and will display the sample web application AWS CodeStar generated to help jumpstart my development. A cool feature of the sample application is that the background of the sample app changes colors based on the time of day.

Let’s now take a look at the code used to build the sample website. In order to view the code, I will go back to my TaraWebProject dashboard in the AWS CodeStar console and select the Code option from the sidebar menu.

This takes me to the tarawebproject Git repository in the AWS CodeCommit console. From here, I can manually view the code for my web application, the commits made in the repo, and the comparison of commits or branches, as well as create triggers in response to my repo events.

This provides a great starting point for developing my AWS hosted web application. Since I opted to integrate AWS CodeStar with Visual Studio, I can update my web application by using the IDE to make code changes that will be automatically included in the TaraWebProject every time I commit to the provisioned code repository.

You will notice that on the AWS CodeStar TaraWebProject dashboard, there is a message about connecting the tools to my project repository in order to work on the code. Even though I have already selected Visual Studio as my IDE of choice, let’s click on the Connect Tools button to review the steps to connecting to this IDE.

Again, I will see a screen that allows me to choose which tool I wish to use to edit my project code: Visual Studio, Eclipse, or the command line. It is important to note that I have the option to change my IDE choice at any time while working on my development project. Additionally, I can connect to my Git AWS CodeCommit repo via HTTPS and SSH. To retrieve the appropriate repository URL for each protocol, I only need to select the Code repository URL dropdown, select HTTPS or SSH, and copy the resulting URL from the text field.

After selecting Visual Studio, CodeStar takes me to the steps needed in order to integrate with Visual Studio. This includes downloading the AWS Toolkit for Visual Studio, connecting the Team Explorer to AWS CodeStar via AWS CodeCommit, as well as how to push changes to the repo.

After successfully connecting Visual Studio to my AWS CodeStar project, I return to the AWS CodeStar TaraWebProject dashboard to start managing the team members working on the web application with me. First, I will select the Setup your team tile so that I can go to the Project Team page.

On my TaraWebProject Project Team page, I’ll add a team member, Jeff, by selecting the Add team member button and clicking on the Select user dropdown. Team members must be IAM users in my account, so I’ll click on the Create new IAM user link to create an IAM account for Jeff.

When the Create IAM user dialog box comes up, I will enter an IAM user name, Display name, and Email Address for the team member, in this case, Jeff Barr. There are three types of project roles that Jeff can be granted: Owner, Contributor, or Viewer. For the TaraWebProject application, I will grant him the Contributor project role and allow him to have remote access by selecting the Remote access checkbox. Now I will create Jeff’s IAM user account by clicking the Create button.

This brings me to the IAM console to confirm the creation of the new IAM user. After reviewing the IAM user information and the permissions granted, I will click the Create user button to complete the creation of Jeff’s IAM user account for TaraWebProject.

After successfully creating Jeff’s account, it is important that I either send Jeff’s login credentials to him in email or download the credentials .csv file, as I will not be able to retrieve these credentials again. I would need to generate new credentials for Jeff if I leave this page without obtaining his current login credentials. Clicking the Close button returns me to the AWS CodeStar console.

Now I can complete adding Jeff as a team member in the TaraWebProject by selecting the JeffBarr-WebDev IAM role and clicking the Add button.

I’ve successfully added Jeff as a team member to my AWS CodeStar project, TaraWebProject, enabling team collaboration in building the web application.

Another thing that I really enjoy about using the AWS CodeStar service is that I can monitor all of my project activity right from my TaraWebProject dashboard. I can see the application activity and any recent code commits, and track the status of any project actions, such as the results of my build, any code changes, and the deployments, all in one comprehensive dashboard. AWS CodeStar ties the dashboard into Amazon CloudWatch with the Application activity section, provides data about the build and deployment status in the Continuous Deployment section with AWS CodePipeline, and shows the latest Git code commit with AWS CodeCommit in the Commit history section.

Summary

In my journey of the AWS CodeStar service, I created a serverless web application that provisioned my entire development toolchain for coding, building, testing, and deployment for my TaraWebProject software project using AWS services. Amazingly, I have yet to scratch the surface of the benefits of using AWS CodeStar to manage day-to-day software development activities involved in releasing applications.

AWS CodeStar makes it easy for you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. AWS CodeStar allows you to choose from various templates to set up projects using AWS Lambda, Amazon EC2, or AWS Elastic Beanstalk. It comes pre-configured with a project management dashboard, an automated continuous delivery pipeline, and a Git code repository using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy, allowing developers to implement modern agile software development best practices. Each AWS CodeStar project gives developers a head start in development by providing working code samples that can be used with popular IDEs that support Git. Additionally, AWS CodeStar provides out-of-the-box integration with Atlassian JIRA Software, providing a project management and issue tracking system for your software team directly from the AWS CodeStar console.

You can get started using the AWS CodeStar service for developing new software projects on AWS today. Learn more by reviewing the AWS CodeStar product page and the AWS CodeStar user guide documentation.

Tara

Pirate Bay Founder Launches Anonymous Domain Registration Service

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-founder-launches-anonymous-domain-registration-service-170419/

In recent years, copyright holders have taken aim at the domain name industry, calling on players to take a more active approach against piracy.

One of the often heard complaints is that website owners use Whois masking services to ensure their privacy.

There are several companies dedicated to offering privacy to domain registrants and today, rightsholders will see a well-known adversary entering the market.

Former Pirate Bay spokesperson and co-founder Peter Sunde has just announced his latest venture. Keeping up his fight for privacy on the Internet, he’s launching a new company called Njalla, which helps site operators to shield their identities from prying eyes.

The name Njalla refers to the traditional hut that Sámi people use to keep predators at bay. It’s built on a tall stump of a tree or pole and is used to store food or other goods.

On the Internet, Njalla helps to keep people’s domain names private. While anonymizer services aren’t anything new, Sunde’s company takes a different approach compared to most of the competition.

Njalla

With Njalla, customers don’t buy the domain names themselves, they let the company do it for them. This adds an extra layer of protection but also requires some trust.

A separate agreement grants the customer full usage rights to the domain. This also means that people are free to transfer it elsewhere if they want to.

“Think of us as your friendly drunk (but responsibly so) straw person that takes the blame for your expressions,” Njalla notes.

TorrentFreak spoke to Peter Sunde who says that the service is needed to ensure that people can register domain names without having to worry about being exposed.

“Njalla is needed because we’re going the wrong way in society regarding people’s right to be anonymous. With social media pressuring us to be less anonymous and services being centralized, we need alternatives,” Sunde says.

The current domain privacy services aren’t really providing anonymity, Sunde believes, which is why he decided to fill this gap.

“All key parts of the Internet need to have options for anonymity, and the domain name area is something which was never really protected. At best you can buy a domain name using ‘privacy by proxy’ services, which are aimed more at limiting spam than actually protecting your privacy.”

Given Sunde’s history as co-founder of The Pirate Bay, Njalla might also attract some pirate sites as customers. Since Njalla owns the domain names, this could lead to some pressure from rightsholders, but Sunde isn’t really worried about this.

“The domain name itself is not really what they’re after. They’re after the content that the domain name points to. So we’re never helping with anything that might infringe on anything anyhow, so it’s a non-question for us,” Sunde says.

For those who are interested, Njalla just opened its website for business. The company is registered with the fitting name 1337 LLC and is based in Nevis, a small island in the Caribbean Sea.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Move Over JSON – Policy Summaries Make Understanding IAM Policies Easier

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/move-over-json-policy-summaries-make-understanding-iam-policies-easier/

Today, we added policy summaries to the IAM console, making it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. Instead of reading JSON policy documents, you can scan a table that summarizes services, actions, resources, and conditions for each policy. You can find this summary on the policy detail page or the Permissions tab on an individual IAM user’s page.

In this blog post, I introduce policy summaries and review the details of a policy summary.

How to read a policy summary

The following screenshot shows an example policy summary. The table provides you with an at-a-glance view of each service’s granted access level, resources, and conditions.

The columns in a policy summary are defined this way:

  • Service – The AWS services defined in the policy. Click each service name to see the specific actions granted for the service.
  • Access level – Actions defined for each service in the policy (I provide more details below).
  • Resource – The resources defined for each service in the policy. This column displays one of the following values:
    • All resources – Access is granted or denied to all resources in the service.
    • Multiple – Some but not all of the resources are granted or denied in the service.
    • Amazon Resource Name (ARN) – The policy defines one resource in the service. You will see the actual ARN displayed for one resource.
  • Request condition – The conditions defined for each service. Conditions can be global conditions or conditions specific to the service. This column displays one of the following values:
    • None – No conditions are defined for the service.
    • Multiple – Multiple conditions are defined for the service.
    • Condition – One condition is defined for the service and applies to all actions defined in the policy for the service. You will see the condition defined in the policy in the table. For example, the preceding screenshot shows a condition for Amazon Elastic Beanstalk.

If you prefer reading and managing policies in JSON, choose View and edit JSON above the policy summary to see the policy in JSON.
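
If you manage policies programmatically, you can fetch the same JSON document with boto3. In this minimal sketch, the account ID is a placeholder and Data_Analytics is the example policy used later in this post.

import boto3

iam = boto3.client("iam")

# Placeholder ARN; substitute your account ID and policy name.
policy_arn = "arn:aws:iam::111122223333:policy/Data_Analytics"

# A managed policy's JSON lives in its default (active) version.
policy = iam.get_policy(PolicyArn=policy_arn)["Policy"]
version = iam.get_policy_version(
    PolicyArn=policy_arn,
    VersionId=policy["DefaultVersionId"],
)
print(version["PolicyVersion"]["Document"])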

Before I go over an example of a policy summary, I will explain access levels in more detail, a new concept we introduced with policy summaries.

Access levels in policy summaries

To help you understand the permissions defined in a policy, each AWS service’s actions are categorized into four access levels: List, Read, Write, and Permissions management. The following table defines the access levels and provides examples using Amazon S3 actions. Full and Limited further qualify the access levels for each service. Full refers to all the actions within an access level, and Limited refers to at least one but not all actions in an access level. Note: You can see the complete list of actions and access levels for all services in the AWS IAM Policy Actions Grouped by Access Level documentation.

Access level | Description | Example
List | Actions that allow you to see a list of resources | s3:ListBucket, s3:ListAllMyBuckets
Read | Actions that allow you to read the content in resources | s3:GetObject, s3:GetBucketTagging
Write | Actions that allow you to create, delete, or modify resources | s3:PutObject, s3:DeleteBucket
Permissions management | Actions that allow you to grant or modify permissions to resources | s3:PutBucketPolicy

Note: Not all AWS services have actions in all access levels.

In the following screenshot, the access level for S3 is Full access, which means the policy permits all actions of the S3 List, Read, Write, and Permissions management access levels. The access level for EC2 is Full: List, Read and Limited: Write, meaning that the policy grants all actions of the List and Read access levels, but only a portion of the actions of the Write access level. You can view the specific actions defined in the policy by choosing the service in the policy summary.

Reviewing a policy summary in detail

Let’s look at a policy summary in the IAM console. Imagine that Alice is a developer on my team who analyzes data and generates quarterly reports for our finance team. To grant her the permissions she needs, I have added her to the Data_Analytics IAM group.

To see the policies attached to user Alice, I navigate to her user page by choosing her user name on the Users page of the IAM console. The following screenshot shows that Alice has 3 policies attached to her.

I will review the permissions defined in the Data_Analytics policy, but first, let’s look at the JSON syntax for the policy so that you can compare the different views.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "autoscaling:*",
            "ec2:CancelSpotInstanceRequests",
            "ec2:CancelSpotFleetRequests",
            "ec2:CreateTags",
            "ec2:DeleteTags",
            "ec2:Describe*",
            "ec2:ModifyImageAttribute",
            "ec2:ModifyInstanceAttribute",
            "ec2:ModifySpotFleetRequest",
            "ec2:RequestSpotInstances",
            "ec2:RequestSpotFleet",
            "elasticmapreduce:*",
            "es:Describe*",
            "es:List*",
            "es:Update*",
            "es:Add*",
            "es:Create*",
            "es:Delete*",
            "es:Remove*",
            "lambda:Create*",
            "lambda:Delete*",
            "lambda:Get*",
            "lambda:InvokeFunction",
            "lambda:Update*",
            "lambda:List*"
        ],
        "Resource": "*"
    },

    {
        "Effect": "Allow",
        "Action": [
            "s3:ListBucket",
            "s3:GetObject",
            "s3:PutObject",
            "s3:PutBucketPolicy"
        ],
        "Resource": [
            "arn:aws:s3:::DataTeam"
        ],
        "Condition": {
            "StringLike": {
                "s3:prefix": [
                    "Sales/*"
                ]
            }
        }
    },
    {
        "Effect": "Allow",
        "Action": [
            "elasticfilesystem:*"
        ],
        "Resource": [
            "arn:aws:elasticfilesystem:*:111122223333:file-system/2017sales"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "rds:*"
        ],
         "Resource": [
            "arn:aws:rds:*:111122223333:db/mySQL_Instance"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "dynamodb:*"
         ],
        "Resource": [
            "arn:aws:dynamodb:*:111122223333:table/Sales_2017Annual"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "iam:GetInstanceProfile",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:GetPolicyVersion",
            "iam:ListRoles"
        ],
        "Condition": {
            "IpAddress": {
                "aws:SourceIp": "192.0.0.8"
            }
        },
        "Resource": [
            "*"
        ]
    }
  ]
}

To view the policy summary, I can either choose the policy name, which takes me to the policy’s page, or I can choose the arrow next to the policy name, which expands the policy summary on Alice’s user page. The following screenshot shows the policy summary of the Data_Analytics policy that is attached to Alice.

Looking at this policy summary, I can see that Alice has access to multiple services with different access levels. She has Full access to Amazon EMR, but only Limited List and Limited Read access to IAM. I can also see the high-level summary of resources and conditions granted for each service. In this policy, Alice can access only the 2017sales file system in Amazon EFS and a single Amazon RDS instance. She has access to Multiple Amazon S3 buckets and Amazon DynamoDB tables. Looking at the Request condition column, I see that Alice can access IAM only from a specific IP range. To learn more about the details for resources and request conditions, see the IAM documentation on Understanding Policy Summaries in the AWS Management Console.

In the policy summary, to see the specific actions granted for a service, I choose a service name. For example, when I choose Elasticsearch, I see all the actions organized by access level, as shown in the following screenshot. In this case, Alice has access to all Amazon ES resources and has no request conditions.

Some exceptions

For policies that are complex or contain unrecognized actions, the policy summary may not be able to generate a simple, human-readable table. For these edge cases, we will continue to show the JSON policy without the policy summary.

For policies that include Deny statements, you will see a separate table that shows the permissions that the policy explicitly denies. You can see an example of a policy summary that includes both an Allow statement and a Deny statement in our documentation.

Conclusion

To see policy summaries in your AWS account, sign in to the IAM console and navigate to any managed policy on the Policies page of the IAM console or the Permissions tab on a user’s page. Policy summaries make it easy to scan for certain permissions, such as quickly identifying who has Full access or Permissions management privileges. You can also compare policies to determine which policies define conditions or specify resources for better security posture.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, please start a new thread on the IAM forum.

– Joy

JavaWatch automated coffee replenishment system

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/javawatch-automated-coffee-replenishment-system/

With the JavaWatch system from Terren Peterson, there’s (Raspberry Pi) ZERO reason for you ever to run out of coffee beans again!

By utilising many of the Amazon Web Services (AWS) available to budding developers, Terren was able to create a Pi Zero-powered image detection unit. Using the Raspberry Pi Camera Module to keep tabs on your coffee bean storage, it automatically orders a fresh batch of java when supplies are running low.

JavaWatch Sales Pitch

Introducing JavaWatch, the amazing device that monitors your coffee bean supply and refills from Amazon.com.

Coffee: quite possibly powering Pi Towers’ success

Here at Pi Towers, it’s safe to say that the vast majority of staff members run on high levels of caffeine. In fact, despite hitting ten million Pi boards sold last October, sending two Astro Pi units to space, documenting over 5,000 Code Clubs in the UK, and multiple other impressive achievements, the greatest accomplishment of the Pi Towers team is probably the acquisition of a new all-singing, all-dancing coffee machine for the kitchen. For, if nothing else, it has increased the constant flow of caffeine into the engineers…and that’s always a positive thing, right?

Here are some glamour shots of the beautiful beast:

Pi Towers coffee machine glamour shots

Anyway, back to JavaWatch

Terren uses the same technology that can be found in an Amazon Dash button, replacing the ‘button-press’ stimulus with image recognition to trigger a purchase request.

JavaWatch flow diagram

Going with the JavaWatch flow… 
Image from Terren’s hackster.io project page.

“The service was straightforward to get working,” Terren explains on his freeCodeCamp blog post. “The Raspberry Pi Camera Module captures and uploads photos at preset intervals to S3, the object-based storage service by AWS.”

The data is used to calculate the amount of coffee beans in stock. For example, the jar in the following image is registered at 73% full:

A jar which is almost full of coffee beans

It could also be 27% empty, depending on your general outlook on life.

A second photo, where the beans take up a mere 15% or so of the jar, registers no beans. As a result, JavaWatch orders more via a small website created specifically for the task, just like pressing a Dash button.
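
Terren’s detection code isn’t reproduced here, but as a rough sketch of the idea, a Python script along the following lines could estimate the fill level by counting dark ‘bean’ pixels in each photo. The thresholds are illustrative assumptions, not values from the project, and the Pillow imaging library is assumed to be installed.

from PIL import Image  # Pillow

BEAN_DARKNESS = 90    # grayscale values below this count as beans (assumed)
REORDER_LEVEL = 0.20  # reorder when the jar is less than 20% full (assumed)

def fill_level(photo_path):
    """Estimate how full the jar is by the fraction of dark pixels."""
    img = Image.open(photo_path).convert("L")  # convert to grayscale
    dark = sum(1 for p in img.getdata() if p < BEAN_DARKNESS)
    return dark / (img.width * img.height)

level = fill_level("jar.jpg")
print(f"Jar is about {level:.0%} full")
if level < REORDER_LEVEL:
    print("Below threshold - time to place a DRS order")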

JavaWatch DRS Demo

Demonstration of DRS Capabilities with a project called JavaWatch. This orders coffee beans when the container runs empty.

Terren won second place in hackster.io’s Amazon DRS Developer Challenge for JavaWatch. If you are in need of regular and reliable caffeine infusions, you can find more information on the build, including Terren’s code, on his project page.

The post JavaWatch automated coffee replenishment system appeared first on Raspberry Pi.

Maximising site performance: 5 key considerations

Post Syndicated from Davy Jones original https://www.anchor.com.au/blog/2017/03/maximising-site-performance-key-considerations/

The ongoing performance of your website or application is an area where ‘not my problem’ can be a recurring sentiment from all stakeholders. It’s not just a case of getting your shiny new website or application onto the biggest, specced-up dedicated server or cloud instance that money can buy. There are many factors that can influence the performance of your website, and you, yes you, need to make friends with them.

The relationship between site performance and business outcomes

Websites have evolved into web applications, growing from simple text in HTML format to complex, ‘rich’ multimedia content requiring buckets of storage and computing power. Your server needs to run complex scripts and processes, and serve up content to global visitors because, let’s face it, you probably have customers everywhere (or at least have plans to achieve a global customer base). It is a truth universally acknowledged, that the performance of your website is directly related to customer experience, so poor site performance will negatively affect your brand reputation, sales revenue and business outcomes, jeopardising your business’ success.

Site performance stakeholders

There is an increasing range of literature around the growing importance of optimising site performance for maximum customer experience but who is responsible for owning the customer site experience? Is it the marketing team, development team, digital agency or your hosting provider? The short answer is that all of the stakeholders can either directly or indirectly impact your site performance.

Let’s explore this shared responsibility in more detail by breaking it down into five areas that affect a website’s performance.

5 key site performance considerations

In order to truly appreciate the performance of your website or application, you must take into consideration 5 key areas that affect your website’s ability to run at maximum performance:

  1. Site Speed
  2. Reliability and availability
  3. Code Efficiency
  4. Scalability
  5. Development Methodology

1. Site Speed

Site speed is the most critical metric. We all know and have experienced the frustration of “this site is slow, it takes too long to load!”. It’s the main (and sometimes, only) metric that most people would think about when it comes to the performance of a web application.

But what does it mean for a site to be slow? Well, it usually comes down to these factors:

a. The time it takes for the server to respond to a visitor requesting a page.
b. The time it takes to download all necessary content to display the website.
c.  The time it takes for your browser to load and display all the content.

Usually, the hosting provider looks after (a), and the developers look after (b) and (c), as those points are directly related to the web application.

2. Reliability and availability

Reliability and availability go hand-in-hand.

There’s no point in having a fast website if it’s not *reliably* fast. What do we mean by that?

Well, would you be happy if your website was only fast sometimes? If your Magento retail store is lightning fast when you are the only one using it, but becomes unresponsive during a sale, then the service isn’t performing up to scratch. The hosting provider has to provide you with a service that stays up, and can withstand the traffic going to it.

Outages are also inevitable, as 100% uptime is a myth. But with some clever infrastructure design, we can get downtime as close to zero as possible! Here at Anchor, our services are built with availability in mind. If your service is inaccessible, then it’s not reliable.

The multitude of hosting options we offer, such as VPS, dedicated and cloud, are designed specifically for your needs. Proactive and reactive support and hands-on management mean your server stays reliable and available.

We know some businesses are concerned about the recent, very public AWS outage in the US; however, AWS has taken action across all regions to prevent it from occurring again. AWS’s detailed response can be found at S3 Service Disruption in the Northern Virginia (US-EAST-1) Region.

As an advanced consulting partner with Amazon Web Services (AWS), we can guide customers through the many AWS configurations that will deliver the reliability required. Considerations include utilising multiple Availability Zones, read replicas, automated backups, and disaster recovery options such as a warm standby.
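As a hedged sketch of what one of those configurations looks like in code, the following Python snippet uses the boto3 SDK to provision an RDS database with Multi-AZ failover and automated backups enabled. The region, identifier, instance class and credentials are all placeholder values; in practice you would pull credentials from a secrets store rather than hard-coding them.

import boto3

rds = boto3.client("rds", region_name="ap-southeast-2")  # placeholder region

# MultiAZ keeps a synchronous standby in a second Availability Zone and fails
# over automatically; BackupRetentionPeriod > 0 enables automated daily backups.
rds.create_db_instance(
    DBInstanceIdentifier="example-db",       # placeholder identifier
    DBInstanceClass="db.m4.large",           # placeholder instance class
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",          # placeholder: use a secrets store
    AllocatedStorage=20,                     # storage in GiB
    MultiAZ=True,                            # standby replica in another AZ
    BackupRetentionPeriod=7,                 # keep 7 days of automated backups
)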

3. Code Efficiency

Let’s talk about the efficiency of a codebase; that is, the innards of the application.

The code of an application determines how hard the CPU (the brain of your computer) has to work to do everything the application asks of it. The more work your application performs, the harder the CPU has to work to keep up.

In short, you want code to be efficient, and not have to do extra, unnecessary work. Here is a quick example:

# Example 1:    2 + 2 = 4

# Example 2:    ( ( (1 + 5) / 3 ) * 1 ) + 2 = 4

The end result is the same, but the first example gets straight to the point: it is much easier to understand and faster to process. Efficient code means the server can do more with the same amount of resources, and most of the time it will be faster too!
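The same idea scales up to real code. In the small illustrative Python sketch below (the function and variable names are hypothetical, not from any particular codebase), both functions return the same total, but the second hoists the unchanging tax calculation out of the loop so it is computed once rather than once per item.

def total_price_slow(items, tax_rate):
    # Inefficient: recomputes the same multiplier on every iteration.
    total = 0.0
    for price in items:
        total += price * (1 + tax_rate / 100)
    return total

def total_price_fast(items, tax_rate):
    # Efficient: computes the multiplier once and reuses it.
    multiplier = 1 + tax_rate / 100
    return sum(price * multiplier for price in items)

prices = [19.99, 5.50, 42.00]
assert total_price_slow(prices, 10) == total_price_fast(prices, 10)

On three items the difference is negligible, but across millions of requests this kind of wasted work is exactly what drives up CPU usage and slows responses.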

We work with many code-efficient partners who create awesome sites that drive conversions. Get in touch if you’re looking for a code-efficient developer; we’d be happy to suggest one of our tried-and-tested partners.

4. Scalability

Accurately predicting spikes in traffic to your website or application is tricky business. Over- or under-provisioning infrastructure can be costly, so ensuring your build can scale helps your website or application perform optimally at all times. Scaling up means adding more resources to the existing systems; scaling out means adding more nodes. Both have their advantages and disadvantages. If you want to know more, feel free to talk to any member of our sales team to get started.

If you are using public cloud infrastructure like Amazon Web Services (AWS), there are several ways scalability can be built into your infrastructure from the start. Clusters are at the heart of scalability, and a number of tools can optimise your cluster’s efficiency: Amazon CloudWatch can trigger scaling activities, while Elastic Load Balancing directs traffic across the instances in your Auto Scaling group (a sketch of that wiring follows below). For developers wanting complete control over their AWS resources, Elastic Beanstalk may be more appropriate.
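As a minimal, hedged sketch of that wiring in Python with the boto3 SDK, assuming an Auto Scaling group named example-asg already exists, the snippet below creates a scaling policy that adds one instance, then a CloudWatch alarm that triggers the policy when the group’s average CPU stays above 70% for ten minutes. The names, region and thresholds are placeholders to adapt to your own environment.

import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-southeast-2")
cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-2")

# A simple scaling policy: add one instance each time the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-asg",      # placeholder: your existing group
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,                            # wait 5 minutes between scale-outs
)

# Fire the policy when average CPU across the group exceeds 70% for 10 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="example-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "example-asg"}],
    Statistic="Average",
    Period=300,                              # evaluate in 5-minute windows
    EvaluationPeriods=2,                     # two consecutive breaches = 10 minutes
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)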

5. Development Methodology

Development methodologies describe the process by which changes are introduced to software. A commonly used methodology nowadays is ‘DevOps’.

What is DevOps?

It’s the union of Developers and IT Operations teams working together to achieve a common goal.

How can it improve your site’s performance?

Well, DevOps is a way of working: a culture of close collaboration between Developers and IT Operations in a single workflow. By integrating these teams, the process of creating, testing and deploying software applications can be streamlined. Instead of each team working in a silo, cross-functional teams work together to solve problems efficiently and reach a stable release faster. Faster releases mean your website or application gets updates more frequently, and updating more frequently means you are faster to fix bugs and introduce new features. Check out the article ‘5 steps to prevent your website getting hacked‘ for more details.

The point is that the faster you can update your applications, the faster you can respond to any change in your situation. So if DevOps has the potential to speed up delivery and improve your site or application’s performance, why isn’t everyone doing it?

Simply put, any change can be hard, and for a DevOps approach to be effective, each team involved needs to find new ways of working harmoniously with the others toward a common goal. It’s not just the process that needs to change; toolsets, communication and company culture also need to be addressed.

The Anchor team love putting new tools through their paces. We love to experiment and iterate on our processes to find the ones that work for our customers. We are experienced in working with a variety of teams, and love to challenge ourselves. If you are looking for an operations team to work with your development team, get in touch.

***
If your site is running slowly or you are experiencing downtime, we can run a free hosting check-up and highlight the ‘quick wins’ that will boost your site’s performance.

The post Maximising site performance: 5 key considerations appeared first on AWS Managed Services by Anchor.

AWS Week in Review – February 20, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-february-20-2017/

By popular demand, I am producing this “micro” version of the AWS Week in Review. I have included all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for. Going forward I hope to bring back the other sections, as soon as I get my tooling and automation into better shape.

Jeff;