Tag Archives: policies

Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts

Post Syndicated from Caitlyn Shim original https://aws.amazon.com/blogs/security/announcing-aws-organizations-centrally-manage-multiple-aws-accounts/

Today, AWS launched AWS Organizations: a new way for you to centrally manage all the AWS accounts your organization owns. Now you can arrange your AWS accounts into groups called organizational units (OUs) and apply policies to OUs or directly to accounts. For example, you can organize your accounts by application, environment, team, or any other grouping that makes sense for your business.

Organizations removes the need to manage security policies separately in each AWS account. Before Organizations, if you had a set of AWS accounts, you had to ensure that users in those AWS accounts had the right level of access to AWS services. You had to either configure security settings on each account individually or write a custom script to iterate through each account. However, any user with administrative permissions in those AWS accounts could have bypassed the defined permissions. Organizations includes the launch of service control policies (SCPs), which give you the ability to configure one policy and have it apply to your entire organization, an OU, or an individual account. In this blog post, I walk through an example of how to use Organizations.

Create an organization

Let’s say you build software for the healthcare or financial industry, and you want to deploy and run that software on AWS. In order to process, store, or transmit protected health information on AWS, compliance standards such as ISO and SOC limit your software to using only designated AWS services. With Organizations, you can limit your software to accessing only the AWS services allowed by such standards. SCPs can limit access to AWS services across all accounts in your organization, and the enforced policy cannot be overridden by any user in the AWS accounts on which the SCP is applied.

When creating a new organization, start in the account that will become the master account. The master account owns the organization and is the source of all administrative actions applied to the other AWS accounts in the organization, called member accounts. The master account sets up the SCPs, account groupings, and payment method for the member accounts. As with Consolidated Billing, the master account also pays the bills for the AWS activity of the member accounts, so choose it wisely.

For all my examples in this post, I use the AWS CLI. Organizations is also available through the AWS Management Console and AWS SDK.

In the following case, I use the create-organization API to create a new organization in Full_Control mode:

server:~$ aws organizations create-organization --mode FULL_CONTROL

returns:

{
    "organization": {
        "availablePolicyTypes": [
            {
                "status": "ENABLED",
                "type": "service_control_policy"
            }
        ],
        "masterAccountId": "000000000001",
        "masterAccountArn": "arn:aws:organizations::000000000001:account/o-1234567890/000000000001",
        "mode": "FULL_CONTROL",
        "masterAccountEmail": "[email protected]",
        "id": "o-1234567890",
        "arn": "arn:aws:organizations::000000000001:organization/o-1234567890"
    }
}

In this example, I want to set security policies on the organization, so I create it in Full_Control mode. You can create organizations in two different modes: Full_Control or Billing. Full_Control has all of the benefits of Billing mode and adds the ability to apply service control policies.

Note: If you currently use Consolidated Billing, your Consolidated Billing family is migrated automatically to AWS Organizations in Billing mode, which is equivalent to Consolidated Billing. All account activity charges accrued by member accounts are billed to the master account.

Add members to the organization

To populate the organization you’ve just created, you can either create new AWS accounts in the organization or invite existing accounts.

Create new AWS accounts

Organizations gives you APIs to programmatically create new AWS accounts in your organization. All you have to specify is the email address and account name. The rest of the account information (address, etc.) is inherited from the master account.

To create a new account by using the AWS CLI, call create-account with the email address to assign to the new account and the account name you want to use:

server:~$ aws organizations create-account --account-name DataIngestion --email [email protected]

Creating an AWS account is an asynchronous process, so create-account returns a createAccountStatus response:

{
    "createAccountStatus": {
        "requestedTimestamp": 1479783743.401,
        "state": "IN_PROGRESS",
        "id": "car-12345555555555555555555555556789",
        "accountName": "DataIngestion"
    }
}

You can use describe-create-account-status with the createAccountStatus id to see when the AWS account is ready to use. Accounts are generally ready within minutes.
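As a quick sketch of that check with the AWS CLI, you can pass the request id returned above (substitute your own):

server:~$ aws organizations describe-create-account-status --create-account-request-id car-12345555555555555555555555556789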

Invite existing AWS accounts

You can also invite existing AWS accounts to your organization. To invite an AWS account, you need to know either its AWS account ID or the email address associated with the account.

The command through the AWS CLI is invite-account-to-organization:

server:~$ aws organizations invite-account-to-organization --target [email protected],type=EMAIL

That command returns a handshake object, which contains all information about the request sent. In this instance, this was a request sent to the account at [email protected] to join the organization with the id 1234567890:

{
    "handshake": {
        "id": "h-77888888888888888888888888888777",
        "state": "OPEN",
        "resources": [
            {
               "type": "ORGANIZATION",
               "resources": [
                  {
                     "type": "MASTER_EMAIL",
                     "value": "[email protected]"
                  },
                  {
                     "type": "MASTER_NAME",
                     "value": "Example Corp"
                  },
                  {
                     "type": "ORGANIZATION_MODE",
                     "value": "FULL"
                  }
             ],
             "value": "o-1234567890"
          },
          {
             "type": "EMAIL",
             "value": "[email protected]"
          }
        ],
        "parties": [
            {
              "type": "EMAIL",
              "id": "[email protected]"
            },
            {
              "type": "ORGANIZATION",
              "id": "1234567890"
            }
            ],
            "action": "invite",
            "requestedTimestamp": 1479784006.008,
            "expirationTimestamp": 1481080006.008,
            "arn": "arn:aws:organizations::000000000001:handshake/o-1234567890/invite/h-77888888888888888888888888888777"
   }
}

The preceding command sends an email to the account owner to notify them that you invited their account to join your organization. If the owner of the invited AWS account agrees to the terms of joining the organization, they can accept the request through a link in the email or by calling accept-handshake.
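For example, the owner of the invited account could accept from their own account's credentials with a call along these lines, using the handshake id returned above (a sketch; the handshake must still be open and unexpired):

server:~$ aws organizations accept-handshake --handshake-id h-77888888888888888888888888888777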

Viewing your organization

At this point, the organization is populated with two member accounts: DataIngestion and DataProcessing. The master account is [email protected]. By default, accounts are placed directly under the root of your organization. The root is the base of your organizational structure—all OUs or accounts branch from it, as shown in the following illustration.

You can view all the accounts in the organization with the CLI command, list-accounts:

server:~$ aws organizations list-accounts

The list-accounts command returns a list of accounts and the details of how they joined the organization. Note that the master account is listed along with the member accounts:

{
    "accounts": [
        {
           "status": "ACTIVE",
           "name": "DataIngestion",
           "joinedMethod": "CREATED",
           "joinedTimestamp": 1479784102.074,
           "id": "200000000001",
           "arn": "arn:aws:organizations::000000000001:account/o-1234567890/200000000001"
        },
        {
           "status": "ACTIVE",
           "name": " DataProcessing ",
           "joinedMethod": "INVITED",
           "joinedTimestamp": 1479784102.074,
           "id": "200000000002",
           "arn": "arn:aws:organizations::000000000001:account/o-1234567890/200000000002"
        },
        {
           "status": "ACTIVE",
           "name": "Example Corp",
           "joinedMethod": "INVITED",
           "joinedTimestamp": 1479783034.579,
           "id": "000000000001",
           "arn": "arn:aws:organizations::000000000001:account/o-1234567890/000000000001"
        }
    ]
}

Service control policies

Service control policies (SCPs) let you apply controls on which AWS services and APIs are accessible to an AWS account. The SCPs apply to all entities within an account: the root user, AWS Identity and Access Management (IAM) users, and IAM roles. For this example, we’re going to set up an SCP to only allow access to seven AWS services.

Note that SCPs work in conjunction with IAM policies. Our SCP allows access to a set of services, but it is possible to set up an IAM policy to further restrict access for a user. AWS will enforce the most restrictive combination of the policies.

SCPs affect the master account differently than other accounts. To ensure you do not lock yourself out of your organization, SCPs applied to your organization do not apply to the master account. Only SCPs applied directly to the master account can modify the master account’s access.

Creating a policy

In this example, I want to make sure that only the given set of services is accessible in our AWS account, so I will create an Allow policy to grant access only to listed services. The following policy allows access to Amazon DynamoDB, Amazon RDS, Amazon EC2, Amazon S3, Amazon EMR, Amazon Glacier, and Elastic Load Balancing.

The policy document, saved as Regulation-Policy.json, looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                 "dynamodb:*","rds:*","ec2:*","s3:*","elasticmapreduce:*",
                 "glacier:*","elasticloadbalancing:*"
             ],
             "Effect": "Allow",
             "Resource": "*"
        }
    ]
}

SCPs follow the same structure as IAM policies. This example is an Allow policy, but SCPs also offer the ability to write Deny policies, restricting access to specific services and APIs.

To create the SCP through the CLI, call create-policy and point it at the Regulation-Policy.json file:

server:~$ aws organizations create-policy --name example-prod-policy --type service_control_policy --description "All services allowed for regulation policy." --content file://./Regulation-Policy.json

The create-policy command returns the policy just created. Save the policy id; you will need it later to attach the policy to an element of your organization:

{
    "policy": {
        "content": "{\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n {\n \"Action\": [\n \"dynamodb:*\",\"rds:*\",\"ec2:*\",\"s3:*\",\"elasticmapreduce:*\",\"glacier:*\",\"elasticloadbalancing:*\"\n],\n \"Effect\": \"Allow\",\n\"Resource\": \"*\"\n}\n ]\n}\n",
        "policySummary": {
            "description": "All services allowed for regulation policy.",
            "type": "service_control_policy",
            "id": "p-yyy12346",
            "arn": "arn:aws:organizations::000000000001:policy/o-1234567890/service_control_policy/p-yyy12346",
            "name": "example-prod-policy"
        }
    }
}
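If you lose track of the policy id, you can list the SCPs in your organization and find it again. A minimal sketch with the CLI, using the same lowercase policy-type filter as the rest of this post:

server:~$ aws organizations list-policies --filter service_control_policy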

Attaching a policy

After you create your policy, you can attach it to entities in your organization: the root, an OU, or an individual account. Accounts inherit all policies attached above them in the organizational tree. To keep this example simple, attach the Example Corp. Prod SCP (defined in example-prod-policy) to your entire organization by attaching it to the root. Because the root is the top element of an organization’s tree, a policy attached to the root applies to every member account in the organization.

Using the CLI, first call list-roots to get the root id. Once you have the root id, call attach-policy, passing in the example-prod-policy id (from the create-policy response) and the root id.

server:~$ aws organizations list-roots
{
    "roots": [
        {
            "policyTypes": [
                {
                    "status": "ENABLED",
                    "type": "service_control_policy"
                }
            ],
            "id": "r-4444",
            "arn": "arn:aws:organizations::000000000001:root/o-1234567890/r-4444",
             "name": "Root"
        }
    ]
}
server:~$ aws organizations attach-policy --target-id r-4444 --policy-id p-yyy12346

Let’s now verify the policies on the root. Calling list-policies-for-target on the root will display all policies attached to the root of the organization.

server:~$ aws organizations list-policies-for-target --target-id r-4444 --filter service_control_policy
 
{
    "policies": [
        {
            "awsManaged": false,
            "description": "All services allowed by regulation policy.",
            "type": "service_control_policy",
            "id": "p-yyy12346",
            "arn": "arn:aws:organizations::000000000001:policy/o-1234567890/service_control_policy/p-yyy12346",
            "name": "example-prod-policy"
        },
        {
            "awsManaged": true,
            "description": "Allows access to every operation",
            "type": "service_control_policy",
            "id": "p-FullAWSAccess",
            "arn": "arn:aws:organizations::aws:policy/service_control_policy/p-FullAWSAccess",
            "name": "FullAWSAccess"
        }
    ]
}

Two policies are returned from list-policies-for-target on the root: the example-prod-policy, and the FullAWSAccess policy. FullAWSAccess is an AWS managed policy that is applied to every element in the organization by default. When you create your organization, every root, OU, and account has the FullAWSAccess policy, which allows your organization to have access to everything in AWS (*:*).

If you want to limit your organization’s access to only the services in the Example Corp. Prod SCP, you must remove the FullAWSAccess policy. As long as both policies are attached to the root, Organizations grants the union of what they allow, which is every AWS service, because FullAWSAccess allows everything. To enforce the restrictions intended by the Example Corp. Prod SCP, detach the FullAWSAccess policy. To do this with the CLI:

aws organizations detach-policy --policy-id p-FullAWSAccess --target-id r-4444

And one final list to verify only the example-prod-policy is attached to the root:

aws organizations list-policies-for-target --target-id r-4444 --filter service_control_policy
 
{
    "policies": [
        {
            "awsManaged": false,
            "description": "All services allowed by regulation policy.",
            "type": "service_control_policy",
            "id": "p-yyy12346",
            "arn": "arn:aws:organizations::000000000001:policy/o-1234567890/service_control_policy/p-yyy12346",
            "name": "example-prod-policy"
        }
    ]
}

Now, you have an extra layer of protection, limiting all accounts in your organization to only those AWS services that are compliant with Example Corp’s regulations.


More advanced structure

If you want to break down your organization further, you can add OUs and apply policies. For example, some of your AWS accounts might not be used for protected health information and therefore have no need for such a restrictive SCP.

For example, you may have a development team that needs to use and experiment with different AWS services to do their job. In this case, you want to set the controls on these accounts to be lighter than the controls on the accounts that have access to regulated customer information. With Organizations, you can, for example, create an OU to represent the medical software you run in production, and a different OU to hold the accounts developers use. You can then apply different security restrictions to each OU.

The following illustration shows three developer accounts that are part of a Developer Research OU. The original two accounts that require regulation protections are in a Production OU. We also detached our Example Corp. Prod Policy from the root and instead attached it to the Production OU. Finally, any restrictions not related to regulations are kept at the root level by placing them in a new Company policy.

In the preceding illustration, the accounts in the Developer Research OU have the Company policy applied to them because their OU is still under the root with that policy. The accounts in the Production OU have both SCPs (the Company policy and the Example Corp. Prod Policy) applied to them. As with IAM policies, the more restrictive intersection of the two SCPs applies to the account. If the Company policy SCP allowed Glacier, EMR, S3, EC2, RDS, DynamoDB, Amazon Kinesis, AWS Lambda, and Amazon Redshift, the policy application for accounts in the Production OU would be the intersection, as shown in the following chart.

The accounts in the Production OU have both the Company policy SCP and the Example Corp. Prod Policy SCP applied to them. The intersection of the two policies applies, so these accounts have access to DynamoDB, Glacier, EMR, RDS, EC2, and S3. Because Redshift, Lambda, and Kinesis are not in the Example Corp. Prod Policy SCP, the accounts in the Production OU are not allowed to access them.

To summarize policy application: all elements (the root, OUs, and accounts) have FullAWSAccess (*:*) by default. If you have multiple SCPs attached to the same element, Organizations takes the union. When policies are inherited across multiple elements in the organizational tree, Organizations takes the intersection of the SCPs. And finally, the master account has unique protections: only SCPs attached directly to it will change its access.

Conclusion

AWS Organizations makes it easy to manage multiple AWS accounts from a single master account. You can use Organizations to group accounts into organizational units and manage your accounts by application, environment, team, or any other grouping that makes sense for your business. SCPs give you the ability to apply policies to multiple accounts at once.

To learn more, see the AWS Organizations home page and sign up for the AWS Organizations Preview today. If you are at AWS re:Invent 2016, you can also attend session SAC323 – NEW SERVICE: Centrally Manage Multiple AWS Accounts with AWS Organizations on Wednesday, November 30, at 11:00 A.M.

– Caitlyn

Automated Cleanup of Unused Images in Amazon ECR

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/automated-cleanup-of-unused-images-in-amazon-ecr/

Thanks to my colleague Anuj Sharma for a great blog on cleaning up old images in Amazon ECR using AWS Lambda.
—

When you use Amazon ECR as part of a container build and deployment pipeline, a new image is created and pushed to ECR on every code change. As a result, repositories tend to quickly fill up with new revisions. Each Docker push command creates a version that may or may not be in use by Amazon ECS tasks, depending on which version of the image is actually used to run the task.

It’s important to have an automated way to discover the images that are in use by running tasks and to delete the stale images. You may also want to keep a predetermined number of images in your repositories so you can easily fall back to a previous version. In this way, repositories stay clean and manageable, and you save costs by removing undesired images.

In this post, I show you how to query running ECS tasks, identify the images that are in use, and delete the remaining images in ECR. To keep a predetermined number of images irrespective of the images in use by tasks, the solution should keep the latest n images. Because ECR repositories may be hosted in multiple regions, the solution should query either all regions where ECR is available or only a specific region. You should be able to identify and print the images to be deleted, so that extra caution can be exercised. Finally, the solution should run at a scheduled time on a recurring basis, if required.

Cleaning solution

The solution contains two components:

Python script

The Python script is available from the AWS Labs repository. Its logic is as follows:

  • Get a list of all the ECS clusters using the ListClusters API operation.
  • For each cluster, get a list of all the running tasks using ListTasks.
  • Get the task definition for each running task by calling DescribeTasks.
  • Get the container image for each running task using DescribeTaskDefinition.
  • Keep only those container images that contain “.dkr.ecr.” and “:”. This ensures that you get a list of all the running container images that are hosted in ECR and have an associated tag.
  • Get a list of all the repositories using DescribeRepositories.
  • For each repository, get the imagePushedAt value, tags, and SHA for every image using DescribeImages.
  • Ignore images that have a “latest” tag or that are in use by running tasks (as discovered in the earlier steps).
  • Delete the remaining images, identified by the tags discovered earlier, using BatchDeleteImage.
    • The -dryrun flag prints the images marked for deletion without deleting them. It defaults to True, which only prints the images to be deleted.
    • The -region flag limits deletion to the specified region. If it is not specified, all regions where ECR is available are queried and images are deleted. It defaults to None, which queries all regions.
    • The -imagestokeep flag keeps the latest n images and does not delete them. It defaults to 100. For example, if -imagestokeep 20 is specified, the last 20 images in each repository are not deleted, and the remaining images that satisfy the logic above are deleted.

Lambda function

The Lambda function runs the Python script, which is executed using CloudWatch scheduled events. Here’s a diagram of the workflow:

(1) The build system builds and pushes the container image. The build system also tags the image being pushed, either with the source control commit SHA hash value or with an incremental package version. This gives you full control over running a specific, versioned container image in the task definition. These versioned images are then used to run the tasks, either with your own scheduler or with the ECS scheduling capabilities.
(2) The Lambda function gets invoked by CloudWatch Events at the specified time and queries all available clusters.
(3) Based on the coded Python logic, the container image tags being used by running tasks are discovered.
(4) The Lambda function also queries all images available in ECR and, based on the coded logic, decides on the image tags to be cleaned.
(5) The Lambda function deletes the images as discovered by the coded logic.

Important: It’s a good practice to run the task definition with an image that has a version in its image tag, so that you have better visibility into, and control over, the version of the running container. The build system can tag the image with the source code version or package version before pushing, so that every push produces a versioned image that can then be used to run the task. The script assumes that the task definitions specify versioned container images. I recommend running the script first with the -dryrun flag to identify the images to be deleted.

Usage examples

Prints the images that are not used by running tasks and which are older than the last 100 versions, in all regions: python main.py

Deletes the images that are not used by running tasks and which are older than the last 100 versions, in all regions: python main.py -dryrun False

Deletes the images that are not used by running tasks and which are older than the last 20 versions (in each repository), in all regions: python main.py -dryrun False -imagestokeep 20

Deletes the images that are not used by running tasks and which are older than the last 20 versions (in each repository), in Oregon only: python main.py -dryrun False -imagestokeep 20 -region us-west-2

Assumptions

I’ve made the following assumptions for this solution:

  • Your ECR repository is in the same account as the running ECS clusters.
  • Each container image that has been pushed is tagged with a unique tag.

Docker tags and container image names are mutable. Multiple names can point to the same underlying image and the same image name can point to different underlying images at different points in time.

To trace the version of running code in a container to the source code, it’s a good practice to push the container image with a unique tag, which could be the source control commit SHA hash value or a package’s incremental version number; never reuse the same tag. In this way, you have better control over identifying the version of code to run in the container image, with no ambiguity.
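As a sketch of that practice, a build step might tag each image with the short commit SHA before pushing it; the registry URI, repository name, and region below are placeholders:

# Tag the freshly built image with the current short commit SHA and push it to ECR
TAG=$(git rev-parse --short HEAD)
docker tag my-app:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:$TAG
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:$TAG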

Lambda function deployment

Now it’s time to create and deploy the Lambda function. You can do this through the console or the AWS CLI.

Using the AWS Management Console

Use the console to create a policy, role, and function.

Create a policy

Create a policy with the following policy text. This defines the access for the necessary APIs.

Sign in to the IAM console and choose Policies, Create Policy, Create Your Own Policy. Copy and paste the following policy document, type a name, and choose Create.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1476919244000",
            "Effect": "Allow",
            "Action": [
                "ecr:BatchDeleteImage",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:DescribeImages"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt1476919353000",
            "Effect": "Allow",
            "Action": [
                "ecs:DescribeClusters",
                "ecs:DescribeTaskDefinition",
                "ecs:DescribeTasks",
                "ecs:ListClusters",
                "ecs:ListTaskDefinitions",
                "ecs:ListTasks"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Create a Lambda role

Create a role for the Lambda function, using the policy that you just created.

In the IAM console, choose Roles and enter a name, such as LAMBDA_ECR_CLEANUP. Choose AWS Lambda, select your custom policy, and choose Create Role.

Create a Lambda function

Create a Lambda function, with the role that you just created and the Python code available from the Lambda_ECR_Cleanup repository. For more information, see the Lambda function handler page in the Lambda documentation.

Schedule the Lambda function

Add the trigger for your Lambda function.

In the CloudWatch console, choose Events, Create rule, and then under Event selector, choose Schedule. For example, you can use the cron expression 0 22 * * ? * to run the function every day at 10 PM UTC (CloudWatch Events cron expressions use six fields, and either the day-of-month or the day-of-week field must be ?). For more information, see Scenario 2: Take Scheduled EBS Snapshots.

Using the AWS CLI

Use the following code examples to create the Lambda function using the AWS CLI. Replace values as needed.

Create a policy

Create a file with the following contents, called LAMBDA_ECR_DELETE.json. This is the trust policy that allows Lambda to assume the role; you reference this file when you create the IAM role. The ECR and ECS permissions shown in the console section are attached to the role separately, as shown after the create-role command below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create a Lambda role

aws iam create-role --role-name LAMBDA_ECR_DELETE --assume-role-policy-document file://LAMBDA_ECR_DELETE.json
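The role also needs the ECR and ECS permissions shown in the console section earlier. One way to attach them (a sketch; the policy name and file name are just examples) is to save that permissions document locally and add it to the role as an inline policy:

aws iam put-role-policy --role-name LAMBDA_ECR_DELETE --policy-name ECR_CLEANUP_PERMISSIONS --policy-document file://ECR_CLEANUP_PERMISSIONS.json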

Create a Lambda function

aws lambda create-function --function-name {NAME_OF_FUNCTION} --runtime python2.7 \
    --role {ARN_NUMBER} --handler main.handler --timeout 15 \
    --zip-file fileb://{ZIP_FILE_PATH}

Schedule a Lambda function

For more information, see Scenario 6: Run an AWS Lambda Function on a Schedule Using the AWS CLI.
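If you prefer to stay in the CLI, the schedule can be wired up roughly as follows: create a scheduled rule, allow CloudWatch Events to invoke the function, and register the function as a target. The rule name, function name, region, account id, and ARNs below are placeholders:

aws events put-rule --name ECR_CLEANUP_SCHEDULE --schedule-expression "cron(0 22 * * ? *)"
aws lambda add-permission --function-name {NAME_OF_FUNCTION} --statement-id ecr-cleanup-schedule \
    --action lambda:InvokeFunction --principal events.amazonaws.com \
    --source-arn arn:aws:events:{REGION}:{ACCOUNT_ID}:rule/ECR_CLEANUP_SCHEDULE
aws events put-targets --rule ECR_CLEANUP_SCHEDULE --targets "Id"="1","Arn"="{LAMBDA_FUNCTION_ARN}"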

Conclusion

The SDK for Python provides methods and access to ECS and ECR APIs, which can be leveraged to write logic to identify stale container images. Lambda can be used to run the code and avoid provisioning any servers. CloudWatch Events can be used to trigger the Lambda code on a recurring schedule, to keep your repositories clean and manageable and control costs.

If you have questions or suggestions, please comment below.

The false-false-balance problem

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/11/the-false-false-balance-problem.html

Until recently, journalism in America prided itself on objectivity — to report the truth, without taking sides. That’s because big debates are always complex and nuanced, and both sides are equally reasonable. Therefore, when writing an article, reporters attempt to achieve balance by quoting people/experts/proponents on both sides of an issue.

But what about those times when one side is clearly unreasonable? You’d never try to achieve balance by citing those who believe in aliens and Bigfoot, for example. Thus, journalists have come up with the theory of false-balance to justify being partisan and one-sided on certain issues.
Typical examples where journalists cite false-balance are reporting on anti-vaxxers, climate-change denialists, and Creationists. More recently, false-balance has become an issue in the 2016 Trump election.
But this concept of false-balance is wrong. It’s not that anti-vaxxers, denialists, Creationists, and white supremacists are reasonable. Instead, the issue is that the left-wing has reframed the debate. They’ve simplified it into something black-and-white, removing nuance, in a way that shows their opponents as being unreasonable. The media then adopts the reframed debate.
Let’s talk anti-vaxxers. One of the policy debates is whether the government has the power to force vaccinations on people (or on people’s children). Reasonable people say the government doesn’t have this power. Many (if not most) people hold this opinion while agreeing that vaccines are both safe and effective (that they don’t cause autism).
Consider this February 2015 interview with Chris Christie. He’s one of the few politicians who have taken the position that government can override personal choice, such as in the case of an outbreak. Yet, when he said “parents need to have some measure of choice in things as well, so that’s the balance that the government has to decide,” he was broadly reviled as an anti-vaxxer throughout the media. The press reviled other Republican candidates the same way, even while ignoring almost identical statements made at the same time by the Obama administration. They also ignored clearly anti-vax comments from both Hillary and Obama during the 2008 election.
Yes, we can all agree that anti-vaxxers are a bunch of crazy nutjobs. In calling for objectivity, we aren’t saying that you should take them seriously. Instead, we are pointing out the obvious bias in the way the media attacked Republican candidates as being anti-vaxxers, and then hiding behind “false-balance”.
Now let’s talk evolution. The issue is this: Darwinism has been set up as some sort of competing religion against belief in God(s). High schools teach children to believe in Darwinism, but not to understand Darwinism. Few kids graduate understanding Darwinism, which is why it’s invariably misrepresented in mass media (X-Men, Planet of the Apes, Waterworld, Godzilla, Jurassic Park, etc.). The only movie I can recall getting evolution correct is Idiocracy.
Also, evolution has holes in it. This isn’t a bad thing in science, every scientific theory has holes. Science isn’t a religion. We don’t care about the holes. That some things remain unexplained by a theory doesn’t bother us. Science has no problem with gaps in knowledge, where we admit “I don’t know”. It’s religion that has “God of the gaps”, where ignorance isn’t tolerated, and everything unexplained is explained by a deity.
The hole in evolution is how the cell evolved. The fossil record teaches us a lot about multi-cellular organisms over the last 400 million years, but not much about how the cell evolved in the 4 billion years on planet Earth before that. I can point to radioisotope dating and fossil finds to prove dinosaurs existed 250 million to 60 million years ago, thus disproving your crazy theory of a 10,000-year-old Earth. But I can’t point to anything that disagrees with your view that a deity created the original cellular organisms. I don’t agree with that theory, but I can’t disprove it, either.
The point is that Christians have a good point: Darwinism is taught as a competing religion. You see this in the way books deny holes in knowledge, insisting that Darwinism explains even how cells evolved, and that doubting Darwin is blasphemy.
The Creationist solution is wrong, we can’t teach religion in schools. But they have a reasonable concern about religious Darwinism. The solution there is to do a better job teaching it as a science. If kids want to believe that one of the deities created the first cells, then that’s okay, as long as they understand the fossil record and radioisotope dating.
Now let’s talk Climate Change. This is a tough one, because you people have lost your collective minds. The debate is over “How much change? How much danger? How much cost?” The debate is not over “Is it true?” We all agree it’s true, even most Republicans. By keeping the debate on the black-and-white “Is global warming true?”, the left-wing can avoid the debate over “How much warming?”
Consider this exchange from one of the primary debates:
Moderator: …about climate change…
RUBIO: Because we’re not going to destroy our economy …
Moderator: Governor Christie, … what do you make of skeptics of climate change such as Senator Rubio?
CHRISTIE: I don’t think Senator Rubio is a skeptic of climate change.
RUBIO: I’m not a denier/skeptic of climate change.
The media (in this case CNN) is so convinced that Republicans deny climate change that they can’t hear any other statement. Rubio clearly didn’t deny climate change, but the moderator was convinced that he did. Every statement is seen as outright denial, or as code words for denial. Thus, convinced of the falseness of false-balance, the media never sees the fact that most Republicans are reasonable.
Similar proof of Republican non-denial is this page full of denialism quotes. If you actually look at the quotes, you’ll see that, taken in context, virtually none of the statements deny climate change. For example, when Senator Dan Sullivan says there is “no concrete scientific consensus on the extent to which humans contribute to climate change,” he is absolutely right. There is 97% consensus that mankind contributes to climate change, but there is widespread disagreement on how much.
That “97% consensus” is incredibly misleading. Whenever it’s quoted, the speaker immediately moves the bar, claiming that scientists also agree with whatever crazy thing the speaker wants, like hurricanes getting worse (they haven’t — at least, not yet).
There’s no inherent reason why Republicans would disagree with addressing Climate Change. For example, Washington State recently voted on a bill to impose a revenue-neutral carbon tax. The important part is “revenue neutral”: Republicans hate expanding government, but they don’t oppose policies that keep government the same size. Democrats opposed this bill precisely because it didn’t expand the size of government. That proves that Democrats are less concerned with a bipartisan approach to addressing climate change, and instead simply use it as a wedge issue to promote their agenda of increased regulation and increased spending.
If you are serious about addressing Climate Change, then agree that Republicans aren’t deniers, and then look for bipartisan solutions.
Conclusion

The point here is not to try to convince you of any political opinion. The point here is to describe how the press has lost objectivity by adopting the left-wing’s reframing of the debate. Instead of seeing a balanced debate between two reasonable sides, they see a warped debate between a reasonable (left-wing) side and an unreasonable (right-wing) side. That the opposing side is unreasonable is so incredibly seductive that they can never give it up.
That Christie had to correct the moderator in the debate should teach you that something is rotten in journalism. Christie understood Rubio’s remarks, but the debate moderator could not. Journalists cannot even see the climate debate because they are wedded to the left-wing’s corrupt view of the debate.
The concept of false-balance is wrong. In debates that evenly divide the population, the issues are complex and nuanced, and both sides are reasonable. That’s the law. It doesn’t matter what the debate is. If you see the debate simplified to the point where one side is obviously unreasonable, then it’s you who has a problem.

Dinner with Rajneeshees

One evening I answered the doorbell to find a burgundy-clad couple on the doorstep. They were followers of the Bhagwan Shree Rajneesh, whose cult had recently purchased a large ranch in the eastern part of the state. No, they weren’t there to convert us. They had come for dinner. My father had invited them.
My father was a journalist who had been covering the controversies with the cult’s neighbors. Yes, they were a crazy cult, which would later break up after committing acts of domestic terrorism. But this couple was a pair of young professionals (lawyers) who, except for their clothing, looked and behaved like normal people. They would go on to live normal lives after the cult.
Growing up, I lived in two worlds. One was the normal world, which encourages you to demonize those who disagree with you. On the political issues that concern you most, you divide the world into the righteous and the villains. It’s not enough to believe the other side is wrong; you must also believe them to be evil.
The other world was that of my father, teaching me to see the other side of the argument. I guess I grew up with my own Atticus Finch (from To Kill a Mockingbird), who set an ideal. In much the same way that Atticus told his children that they couldn’t hate even Hitler, I was told I couldn’t hate even the crazy Rajneeshees.

CloudTrail Update – Capture and Process Amazon S3 Object-Level API Activity

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloudtrail-update-capture-and-process-amazon-s3-object-level-api-activity/

I would like to show you how several different AWS services can be used together to address a challenge faced by many of our customers. Along the way I will introduce you to a new AWS CloudTrail feature that launches today and show you how you can use it in conjunction with CloudWatch Events.

The Challenge
Our customers store many different types of mission-critical data in Amazon Simple Storage Service (S3) and want to be able to track object-level activity on their data. While some of this activity is captured and stored in the S3 access logs, the level of detail is limited and log delivery can take several hours. Customers, particularly in financial services and other regulated industries, are asking for additional detail, delivered on a more timely basis. For example, they would like to be able to know when a particular IAM user accesses sensitive information stored in a specific part of an S3 bucket.

In order to meet the needs of these customers, we are now giving CloudTrail the power to capture object-level API activity on S3 objects, which we call Data events (the original CloudTrail events are now called Management events). Data events include “read” operations such as GET, HEAD, and Get Object ACL as well as “write” operations such as PUT and POST. The level of detail captured for these operations is intended to provide support for many types of security, auditing, governance, and compliance use cases. For example, it can be used to scan newly uploaded data for Personally Identifiable Information (PII), audit attempts to access data in a protected bucket, or to verify that the desired access policies are in effect.

Processing Object-Level API Activity
Putting this all together, we can easily set up a Lambda function that will take a custom action whenever an S3 operation takes place on any object within a selected bucket or a selected folder within a bucket.

Before starting on this post, I created a new CloudTrail trail called jbarr-s3-trail:

I want to use this trail to log object-level activity on one of my S3 buckets (jbarr-s3-trail-demo). In order to do this I need to add an event selector to the trail. The selector is specific to S3, and allows me to focus on logging the events that are of interest to me. Event selectors are a new CloudTrail feature and are being introduced as part of today’s launch, in case you were wondering.

I indicate that I want to log both read and write events, and specify the bucket of interest. I can limit the events to part of the bucket by specifying a prefix, and I can also specify multiple buckets. I can also control the logging of Management events:

CloudTrail supports up to 5 event selectors per trail. Each event selector can specify up to 50 S3 buckets and optional bucket prefixes.
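Outside the console, the same configuration can be expressed with the CLI. Here is a rough sketch using put-event-selectors with the trail and bucket from this walkthrough (adjust ReadWriteType and the object prefix as needed):

aws cloudtrail put-event-selectors --trail-name jbarr-s3-trail --event-selectors \
  '[{"ReadWriteType": "All", "IncludeManagementEvents": true, "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::jbarr-s3-trail-demo/"]}]}]'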

I set this up, opened my bucket in the S3 Console, uploaded a file, and took a look at one of the entries in the trail. Here’s what it looked like:

{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "Root",
    "principalId": "99999999999",
    "arn": "arn:aws:iam::99999999999:root",
    "accountId": "99999999999",
    "username": "jbarr",
    "sessionContext": {
      "attributes": {
        "creationDate": "2016-11-15T17:55:17Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2016-11-15T23:02:12Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "PutObject",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "72.21.196.67",
  "userAgent": "[S3Console/0.4]",
  "requestParameters": {
    "X-Amz-Date": "20161115T230211Z",
    "bucketName": "jbarr-s3-trail-demo",
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "storageClass": "STANDARD",
    "cannedAcl": "private",
    "X-Amz-SignedHeaders": "Content-Type;Host;x-amz-acl;x-amz-storage-class",
    "X-Amz-Expires": "300",
    "key": "ie_sb_device_4.png"
  }
}

Then I create a simple Lambda function:

Next, I create a CloudWatch Events rule that matches the API call of interest (PutObject) and invokes my Lambda function (S3Watcher):
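Under the hood, a rule like that matches CloudTrail-delivered S3 events with an event pattern along these lines (a sketch; you could narrow it further by bucket or requester):

{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"]
  }
}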

Now I upload some files to my bucket and check to see that my Lambda function has been invoked as expected:

I can also find the CloudWatch entry that contains the output from my Lambda function:

Pricing and Availability
Data events are recorded only for the S3 buckets that you specify, and are charged at the rate of $0.10 per 100,000 events. This feature is available in all commercial AWS Regions.

Jeff;

Introducing Simplified Serverless Application Deployment and Management

Post Syndicated from Orr Weinstein original https://aws.amazon.com/blogs/compute/introducing-simplified-serverless-application-deplyoment-and-management/

Orr Weinstein
Orr Weinstein, Sr. Product Manager, AWS Lambda

Today, AWS Lambda launched the AWS Serverless Application Model (AWS SAM); a new specification that makes it easier than ever to manage and deploy serverless applications on AWS. With AWS SAM, customers can now express Lambda functions, Amazon API Gateway APIs, and Amazon DynamoDB tables using simplified syntax that is natively supported by AWS CloudFormation. After declaring their serverless app, customers can use new CloudFormation commands to easily create a CloudFormation stack and deploy the specified AWS resources.

AWS SAM

AWS SAM is a simplification of the CloudFormation template, which allows you to easily define AWS resources that are common in serverless applications.

You inform CloudFormation that your template defines a serverless app by adding a line under the template format version, like the following:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Serverless resource types

Now, you can start declaring serverless resources….

AWS Lambda function (AWS::Serverless::Function)

Use this resource type to declare a Lambda function. When doing so, you need to specify the function’s handler, runtime, and a URI pointing to an Amazon S3 bucket that contains your Lambda deployment package.

If you want, you could add managed policies to the function’s execution role, or add environment variables to your function. The really cool thing about AWS SAM is that you can also use it to create an event source to trigger your Lambda function with just a few lines of text. Take a look at the example below:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  MySimpleFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      CodeUri: s3://<bucket>/MyCode.zip
      Events:
        MyUploadEvent:
          Type: S3
          Properties:
            Id: !Ref Bucket
            Events: Create
  Bucket:
    Type: AWS::S3::Bucket

In this example, you declare a Lambda function and provide the required properties (Handler, Runtime and CodeUri). Then, declare the event source—an S3 bucket to trigger your Lambda function on ‘Object Created’ events. Note that you are using the CloudFormation intrinsic ref function to set the bucket you created in the template as the event source.

Amazon API Gateway API (AWS::Serverless::Api)

You can use this resource type to declare a collection of Amazon API Gateway resources and methods that can be invoked through HTTPS endpoints. With AWS SAM, there are two ways to declare an API:

1) Implicitly

An API is created implicitly from the union of API events defined on AWS::Serverless::Function. An example of how this is done is shown later in this post.

2) Explicitly

If you require the ability to configure the underlying API Gateway resources, you can declare an API by providing a Swagger file and the stage name:

MyAPI:
   Type: AWS::Serverless::Api
   Properties:
      StageName: prod
      DefinitionUri: swaggerFile.yml

Amazon DynamoDB table (AWS::Serverless::SimpleTable)

This resource creates a DynamoDB table with a single attribute primary key. You can specify the name and type of your primary key, and your provisioned throughput:

MyTable:
   Type: AWS::Serverless::SimpleTable
   Properties:
      PrimaryKey:
         Name: id
         Type: String
      ProvisionedThroughput:
         ReadCapacityUnits: 5
         WriteCapacityUnits: 5

In the event that you require more advanced functionality, you should declare the AWS::DynamoDB::Table resource instead.

App example

Now, let’s walk through an example that demonstrates how easy it is to build an app using AWS SAM, and then deploy it using a new CloudFormation command.

Building a serverless app

In this example, you build a simple CRUD web service. The “front door” to the web service is an API that exposes the “GET”, “PUT”, and “DELETE” methods. Each method triggers a corresponding Lambda function that performs an action on a DynamoDB table.

This is how your AWS SAM template should look:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Simple CRUD web service. State is stored in a DynamoDB table.
Resources:
  GetFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.get
      Runtime: nodejs4.3
      Policies: AmazonDynamoDBReadOnlyAccess
      Environment:
        Variables:
          TABLE_NAME: !Ref Table
      Events:
        GetResource:
          Type: Api
          Properties:
            Path: /resource/{resourceId}
            Method: get
  PutFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.put
      Runtime: nodejs4.3
      Policies: AmazonDynamoDBFullAccess
      Environment:
        Variables:
          TABLE_NAME: !Ref Table
      Events:
        PutResource:
          Type: Api
          Properties:
            Path: /resource/{resourceId}
            Method: put
  DeleteFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.delete
      Runtime: nodejs4.3
      Policies: AmazonDynamoDBFullAccess
      Environment:
        Variables:
          TABLE_NAME: !Ref Table
      Events:
        DeleteResource:
          Type: Api
          Properties:
            Path: /resource/{resourceId}
            Method: delete
  Table:
    Type: AWS::Serverless::SimpleTable

There are a few things to note here:

  • You start the template by specifying Transform: AWS::Serverless-2016-10-31. This informs CloudFormation that this template contains AWS SAM resources that need to be ‘transformed’ to full-blown CloudFormation resources when the stack is created.
  • You declare three different Lambda functions (GetFunction, PutFunction, and DeleteFunction), and a simple DynamoDB table. In each of the functions, you declare an environment variable (TABLE_NAME) that leverages the CloudFormation intrinsic ref function to set TABLE_NAME to the name of the DynamoDB table that you declare in your template.
  • You do not use the CodeUri attribute to specify the location of your Lambda deployment package for any of your functions (more on this later).
  • By declaring an API event (and not declaring the same API as a separate AWS::Serverless::Api resource), you are telling AWS SAM to generate that API for you. The API that is going to be generated from the three API events above looks like the following:
/resource/{resourceId}
      GET
      PUT
      DELETE

Next, take a look at the code:

'use strict';

console.log('Loading function');
let doc = require('dynamodb-doc');
let dynamo = new doc.DynamoDB();

const tableName = process.env.TABLE_NAME;
const createResponse = (statusCode, body) => {
    return {
        "statusCode": statusCode,
        "body": body || ""
    }
};

exports.get = (event, context, callback) => {
    var params = {
        "TableName": tableName,
        "Key": {
            id: event.pathParameters.resourceId
        }
    };
    
    dynamo.getItem(params, (err, data) => {
        var response;
        if (err)
            response = createResponse(500, err);
        else
            response = createResponse(200, data.Item ? data.Item.doc : null);
        callback(null, response);
    });
};

exports.put = (event, context, callback) => {
    var item = {
        "id": event.pathParameters.resourceId,
        "doc": event.body
    };

    var params = {
        "TableName": tableName,
        "Item": item
    };

    dynamo.putItem(params, (err, data) => {
        var response;
        if (err)
            response = createResponse(500, err);
        else
            response = createResponse(200, null);
        callback(null, response);
    });
};

exports.delete = (event, context, callback) => {

    var params = {
        "TableName": tableName,
        "Key": {
            "id": event.pathParameters.resourceId
        }
    };

    dynamo.deleteItem(params, (err, data) => {
        var response;
        if (err)
            response = createResponse(500, err);
        else
            response = createResponse(200, null);
        callback(null, response);
    });
};

Notice the following:

  • There are three separate functions, which have handlers that correspond to the handlers you defined in the AWS SAM file (‘get’, ‘put’, and ‘delete’).
  • You are using process.env.TABLE_NAME to pull the value of the environment variable that you declared in the AWS SAM file.

Deploying a serverless app

OK, now assume you’d like to deploy your Lambda functions, API, and DynamoDB table, and have your app up and ready to go. In addition, assume that your AWS SAM file and code files are in a local folder:

  • MyProject
    • app_spec.yml
    • index.js

To create a CloudFormation stack to deploy your AWS resources, you need to:

  1. Zip the index.js file.
  2. Upload it to an S3 bucket.
  3. Add a CodeUri property, specifying the location of the zip file in the bucket for each function in app_spec.yml.
  4. Call the CloudFormation CreateChangeSet operation with app_spec.yml.
  5. Call the CloudFormation ExecuteChangeSet operation with the name of the changeset you created in step 4.

This seems like a long process, especially if you have multiple Lambda functions (you would have to go through steps 1 to 3 for each function). Luckily, the new ‘Package’ and ‘Deploy’ commands from CloudFormation take care of all five steps for you!

First, call package to perform steps 1 to 3. You need to provide the command with the path to your AWS SAM file, an S3 bucket name, and a name for the new template that will be created (which will contain an updated CodeUri property):

aws cloudformation package --template-file app_spec.yml --output-template-file new_app_spec.yml --s3-bucket <your-bucket-name>

Next, call deploy with the name of the newly generated SAM file and the name of the CloudFormation stack. In addition, you need to give CloudFormation the capability to create IAM roles:

aws cloudformation deploy --template-file new_app_spec.yml --stack-name <your-stack-name> --capabilities CAPABILITY_IAM

And voila! Your CloudFormation stack has been created, and your Lambda functions, API Gateway API, and DynamoDB table have all been deployed.

Conclusion

Creating, deploying and managing a serverless app has never been easier. To get started, visit our docs page or check out the serverless-application-model repo on GitHub.

If you have questions or suggestions, please comment below.

New – Auto Scaling for EMR Clusters

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-auto-scaling-for-emr-clusters/

The Amazon EMR team is cranking out new features at an impressive pace (guess they have lots of worker nodes)! They have already added a long list of features this quarter.

Today we are adding automatic scaling for EMR clusters to that list. You can now use scale out and scale in policies to adjust the number of core and task nodes in your clusters in response to changing workloads and to optimize your resource usage:

Scale out Policies add additional capacity and allow you to tackle bigger problems. Applications like Apache Spark and Apache Hive will automatically take advantage of the increased processing power as it comes online.

Scale in Policies remove capacity, either at the end of an instance billing hour or as tasks complete. If a node is removed while it is running a YARN container, YARN will rerun that container on another node (read Configure Cluster Scale-Down Behavior for more info).

Using Auto Scaling
In order to make use of Auto Scaling, an IAM role that gives Auto Scaling permission to launch and terminate EC2 instances must be associated with your cluster. If you create a cluster from the EMR Console, it will create the EMR_AutoScaling_DefaultRole for you. You can use it as-is or customize it as needed. If you create a cluster programmatically or via the command line, you will need to create the role yourself. You can also create the default roles from the command line like this:

$ aws emr create-default-roles

From the console, you can edit the Auto Scaling policies by clicking on the Advanced Options when you create your cluster:

Simply click on the pencil icon to begin editing your policy. Here’s my Scale out policy:

Because this policy is driven by YARNMemoryAvailablePercentage, it will be activated under low-memory conditions when I am running a YARN-based framework such as Spark, Tez, or Hadoop MapReduce. I can choose many other metrics as well; here are some of my options:

And here’s my Scale in policy:

I can choose from the same set of metrics, and I can set a Cooldown period for each policy. This value sets the minimum amount of time between scaling activities, and allows the metrics to stabilize as the changes take effect.

Default policies (driven by YARNMemoryAvailablePercentage and ContainerPendingRatio) are also available in the console.
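If you prefer the AWS CLI to the console, you can attach a policy to an existing instance group with put-auto-scaling-policy. This is a minimal sketch; the cluster ID, instance group ID, and file name are placeholders, and the policy file would contain the constraints and rules you would otherwise configure in the console:

aws emr put-auto-scaling-policy --cluster-id j-EXAMPLE123456 --instance-group-id ig-EXAMPLE123456 --auto-scaling-policy file://scaling-policy.json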

Available Now
To learn more about Auto Scaling, read about Scaling Cluster Resources in the EMR Management Guide.

This feature is available now and you can start using it today. Simply select emr-5.1.0 from the Release menu to get started!

Jeff;

 

 

Dynamically Scale Applications on Amazon EMR with Auto Scaling

Post Syndicated from Jonathan Fritz original https://aws.amazon.com/blogs/big-data/dynamically-scale-applications-on-amazon-emr-with-auto-scaling/

Jonathan Fritz is a Senior Product Manager for Amazon EMR

Customers running Apache Spark, Presto, and the Apache Hadoop ecosystem take advantage of Amazon EMR’s elasticity to save costs by terminating clusters after workflows are complete and resizing clusters with low-cost Amazon EC2 Spot Instances. For instance, customers can create clusters for daily ETL or machine learning jobs and shut them down when they complete, or scale out a Presto cluster serving BI analysts during business hours for ad hoc, low-latency SQL on Amazon S3.

With new support for Auto Scaling in Amazon EMR releases 4.x and 5.x, customers can now add (scale out) and remove (scale in) nodes on a cluster more easily. Scaling actions are triggered automatically by Amazon CloudWatch metrics provided by EMR at 5 minute intervals, including several YARN metrics related to memory utilization, applications pending, and HDFS utilization.

In EMR release 5.1.0, we introduced two new metrics, YARNMemoryAvailablePercentage and ContainerPendingRatio, which serve as useful cluster utilization metrics for scalable, YARN-based frameworks like Apache Spark, Apache Tez, and Apache Hadoop MapReduce. Additionally, customers can use custom CloudWatch metrics in their Auto Scaling policies.

The following is an example Auto Scaling policy on an instance group that scales 1 instance at a time, to a maximum of 40 instances or a minimum of 10 instances. The instance group scales out when the memory available in YARN is less than 15%, and scales in when this metric is greater than 75%. The instance group also scales out when the ratio of pending YARN containers to allocated YARN containers reaches 0.75.

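The original post showed this policy as a console screenshot. A rough sketch of the same policy expressed as a JSON document and attached with the AWS CLI might look like the following; the cluster ID, instance group ID, file name, and cooldown values are placeholders, so confirm the exact policy schema in the EMR documentation before using it:

cat > scaling-policy.json <<'EOF'
{
  "Constraints": { "MinCapacity": 10, "MaxCapacity": 40 },
  "Rules": [
    {
      "Name": "ScaleOutOnLowMemory",
      "Action": { "SimpleScalingPolicyConfiguration": { "AdjustmentType": "CHANGE_IN_CAPACITY", "ScalingAdjustment": 1, "CoolDown": 300 } },
      "Trigger": { "CloudWatchAlarmDefinition": { "ComparisonOperator": "LESS_THAN", "EvaluationPeriods": 1, "MetricName": "YARNMemoryAvailablePercentage", "Period": 300, "Statistic": "AVERAGE", "Threshold": 15, "Unit": "PERCENT" } }
    },
    {
      "Name": "ScaleOutOnPendingContainers",
      "Action": { "SimpleScalingPolicyConfiguration": { "AdjustmentType": "CHANGE_IN_CAPACITY", "ScalingAdjustment": 1, "CoolDown": 300 } },
      "Trigger": { "CloudWatchAlarmDefinition": { "ComparisonOperator": "GREATER_THAN_OR_EQUAL", "EvaluationPeriods": 1, "MetricName": "ContainerPendingRatio", "Period": 300, "Statistic": "AVERAGE", "Threshold": 0.75 } }
    },
    {
      "Name": "ScaleInOnHighAvailableMemory",
      "Action": { "SimpleScalingPolicyConfiguration": { "AdjustmentType": "CHANGE_IN_CAPACITY", "ScalingAdjustment": -1, "CoolDown": 300 } },
      "Trigger": { "CloudWatchAlarmDefinition": { "ComparisonOperator": "GREATER_THAN", "EvaluationPeriods": 1, "MetricName": "YARNMemoryAvailablePercentage", "Period": 300, "Statistic": "AVERAGE", "Threshold": 75, "Unit": "PERCENT" } }
    }
  ]
}
EOF

aws emr put-auto-scaling-policy --cluster-id j-EXAMPLE123456 --instance-group-id ig-EXAMPLE123456 --auto-scaling-policy file://scaling-policy.json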

Additionally, customers can now configure the scale-down behavior when nodes are terminated from their cluster on EMR 5.1.0. By default, EMR now terminates nodes only during a scale-in event at the instance hour boundary, regardless of when the request was submitted. Because EC2 charges per full hour regardless of when the instance is terminated, this behavior enables applications running on your cluster to use instances in a dynamically scaling environment more cost effectively.

Conversely, customers can set the previous default for EMR releases earlier than 5.1.0, which blacklists and drains tasks from nodes before terminating them, regardless of proximity to the instance hour boundary. With either behavior, EMR removes the least active nodes first and blocks termination if it could lead to HDFS corruption.

You can create or modify Auto Scaling policies using the EMR console, AWS CLI, or the AWS SDKs with the EMR API. To enable Auto Scaling, EMR also requires an additional IAM role to grant permission for Auto Scaling to add and terminate capacity. For more information, see Auto Scaling with Amazon EMR. If you have any questions or would like to share an interesting use case for Auto Scaling on EMR, please leave a comment below.




Tech Companies Urge Trump To Protect DMCA Safe Harbors

Post Syndicated from Andy original https://torrentfreak.com/tech-companies-urge-trump-to-protect-dmca-safe-harbors-161118/

The Internet Association is a US-based organization comprised of the country’s leading Internet-based businesses. Google, Facebook, Twitter, Amazon, Reddit and Yahoo are all members.

In an open letter published this week, Internet Association president and CEO Michael Beckerman urges President-elect Donald Trump to consider the importance of the connected industries and their contribution to the economy.

“Congratulations on your recent election. The Internet Association and its 40 member companies are writing to you today because innovation emerging from America’s internet industry drives significant economic growth throughout our economy,” Beckerman writes.

“Businesses of all sizes are able to connect with new customers at the touch of a button and compete on a global scale in ways impossible just a decade ago.”

Noting that the internet sector accounted for nearly $1 trillion of GDP in 2014, Beckerman says that the Internet was built on an open architecture that lowers barriers to entry, fosters innovation, and empowers choice. He urges Trump to consider the importance of the connected industries alongside a “roadmap of key policy areas,” compiled by the Internet Association (IA), that will allow the Internet to thrive.

Considering that Association members including Google, Facebook, Twitter, and LinkedIn all generate revenue from content posted by users, it’s no surprise that they want Trump to uphold intermediary liability protections should any of that content prove infringing. This freedom to operate, they explain, is central to growth and development.

“It is not by accident or chance that the U.S. is the global leader in online innovation. Twenty years ago, internet platforms began to flourish because new laws protected platforms from being held liable for information posted and shared by users,” Beckerman says.

“Intermediary liability laws and policies protect free speech and creativity on the internet. This, in turn, generates substantial value for our economy and society through increased scale, greater diversity, and lowered barriers to entry for creators and entrepreneurs.

“Threats of excessive liability can transform internet service providers and companies, which often lack the knowledge necessary to make legal decisions about the nature of content, into enforcement agents that block user content and make the web less free, innovative, and collaborative.”

The Internet Association says that weakening liability laws would chill innovation and free expression, warning that if its members were held liable for user-uploaded content, 80% of investors would be “less likely” to fund startups. A similar percentage would be wary of investing in companies where legal action might be more likely.

Specifically, the Internet Association asks Trump to uphold Section 230 of the Communications Decency Act (CDA), but of course there is another piece of key legislation that affords Internet Association members safe harbors for user-uploaded content – the DMCA.

“The safe harbors included in Section 512 of the Digital Millennium Copyright Act (DMCA) provide flexible yet robust parameters from which responsible internet companies can operate in good faith in order to grow and adapt across time and technical evolution,” Beckerman writes.

“Under Section 512, internet platforms receive liability protection only if they meet certain requirements, including the expeditious removal of specific identified content flagged by rights-holders. Safe harbors provide flexibility that creates a future-proof framework in which private parties are incentivized to efficiently comply with the law and work collaboratively on private sector efforts tailored to the unique needs of platforms and creators.”

The Internet Association chief goes on to describe the legal certainty provided by the DMCA’s safe harbors as the “gold standard worldwide” for fostering innovation, adding that its takedown mechanisms assist both copyright holders and service providers to provide platforms for lawful consumption.

“Under the shared responsibilities of the notice and takedown system, both rights holders and digital platforms have flourished as consumers increasingly rely on the internet for access to legal content. Efforts to weaken the safe harbors would create legal uncertainty, force internet companies to police the web, chill innovation and free expression online, and undermine the collaborative framework of the law,” he explains.

“Copyright policies must prioritize the public interest by protecting innovation and freedom of expression online, encouraging new forms of follow-on creative works, and ensuring users have access to legal content. Threats to the flexible framework, such as weakening limitations or exceptions to safe harbors, would create barriers to entry for internet startups and creators, which would deny users the ability to access content online.”

The open letter, which covers a broad range of advice for President-elect Trump, can be found here (pdf), but only time will tell if it will have any effect on current consultations over the future of the DMCA.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Now Create and Manage Users More Easily with the AWS IAM Console

Post Syndicated from Rob Moncur original https://aws.amazon.com/blogs/security/now-create-and-manage-users-more-easily-with-the-aws-iam-console/

Today, we updated the AWS Identity and Access Management (IAM) console to make it easier for you to create and manage your IAM users. These improvements include an updated user creation workflow and new ways to assign and manage permissions. The new user workflow guides you through the process of setting user details, including enabling programmatic access (via access key) and console access (via password). In addition, you can assign permissions by adding users to a group, copying permissions from an existing user, and attaching policies directly to users. We have also updated the tools to view details and manage permissions for existing users. Finally, we’ve added 10 new AWS managed policies for job functions that you can use when assigning permissions.

In this post, I show how to use the updated user creation workflow and introduce changes to the user details pages. If you want to learn more about the new AWS managed policies for job functions, see How to Assign Permissions Using New AWS Managed Policies for Job Functions.

The new user creation workflow

Let’s imagine a new database administrator, Arthur, has just joined your team and will need access to your AWS account. To give Arthur access to your account, you must create a new IAM user for Arthur and assign relevant permissions so that Arthur can do his job.

To create a new IAM user:

  1. Navigate to the IAM console.
  2. To create the new user for Arthur, choose Users in the left pane, and then choose Add user.
    Screenshot of creating new user

Set user details
The first step in creating the user arthur is to enter a user name for Arthur and assign his access type:

  1. Type arthur in the User name box. (Note that this workflow allows you to create multiple users at a time. If you create more than one user in a single workflow, all users will have the same access type and permissions.)
    Screenshot of establishing the user name
  2. In addition to using the AWS Management Console, Arthur needs to use the AWS CLI and make API calls using the AWS SDK; therefore, you have to configure both programmatic and AWS Management Console access for him (see the following screenshot). If you select AWS Management Console access, you also have the option to use either an Autogenerated password, which will generate a random password retrievable after the user has been created, or a Custom password, which you can define yourself. In this case, choose Autogenerated password.

By enabling the option User must change password at next sign-in, arthur will be required to change his password the first time he signs in. If you do not have the account-wide setting enabled that allows users to change their own passwords, the workflow will automatically add the IAMUserChangePassword policy to arthur, giving him the ability to change his own password (and no one else’s).
Screenshot of configuring both programmatic and AWS Management Console access for Arthur

You can see the contents of the policy by clicking IAMUserChangePassword. This policy grants access to the IAM action iam:ChangePassword and leverages the IAM policy variable ${aws:username}, which resolves to the username of the authenticating user, so any user the policy is attached to can change only their own password. It also grants access to the IAM action iam:GetAccountPasswordPolicy, which lets a user view the account password policy details so they can set a password that conforms to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ChangePassword"
            ],
            "Resource": [
                "arn:aws:iam::*:user/${aws:username}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetAccountPasswordPolicy"
            ],
            "Resource": "*"
        }
    ]
}
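The console workflow above can also be approximated from the AWS CLI. This is a rough sketch rather than an exact equivalent of the new workflow; the temporary password is a placeholder you would replace according to your own password policy:

# Create the user and a console password that must be changed at first sign-in
aws iam create-user --user-name arthur
aws iam create-login-profile --user-name arthur --password '<temporary-password>' --password-reset-required

# Enable programmatic access by creating an access key pair
aws iam create-access-key --user-name arthur

# Let arthur change his own password (mirrors the policy the console workflow adds automatically)
aws iam attach-user-policy --user-name arthur --policy-arn arn:aws:iam::aws:policy/IAMUserChangePassword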

Assign permissions

Arthur needs the necessary permissions to do his job as a database administrator. Because you do not have an IAM group set up for database administrators yet, you will create a new group and assign the proper permissions to it:

  1. Knowing that the database administrator team may grow, using a group to assign permissions will make it easy to grant the same permissions to additional team members in the future. Choose Add user to group.
  2. Choose Create group. This opens a new window where you can create a new group.
    Screenshot of creating the group
  3. Call the new group DatabaseAdmins and attach the DatabaseAdministrator policy from the new AWS managed policies for job functions, as shown in the following screenshot. This policy enables Arthur to use AWS database services and other supporting services to do his job as a database administrator.

Note: This policy enables you to use additional features in other AWS services (such as Amazon CloudWatch and AWS Data Pipeline). In order to do so, you must create one or more IAM service roles. To understand the different features and the service roles required, see our documentation.
Screenshot of creating DatabaseAdmins group
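If you manage IAM from the command line instead, a rough CLI sketch of the same group setup might look like this (the job-function policy ARN is the AWS managed policy referenced above):

# Create the group, attach the managed job function policy, and add arthur to it
aws iam create-group --group-name DatabaseAdmins
aws iam attach-group-policy --group-name DatabaseAdmins --policy-arn arn:aws:iam::aws:policy/job-function/DatabaseAdministrator
aws iam add-user-to-group --group-name DatabaseAdmins --user-name arthur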

Review the permissions you assigned to arthur

After creating the group, move to the review step to verify the details and permissions you assigned to arthur (see the following screenshot). If you decide you need to adjust the permissions, you can choose Previous to add or remove assigned permissions. After confirming that arthur has the correct access type and permissions, you can proceed to the next step by choosing Create user.

Screenshot of reviewing the permissions

Retrieve security credentials and email sign-in instructions

The user arthur has now been created with the appropriate permissions.

Screenshot of the "Success" message

You can now retrieve the security credentials (access key ID, secret access key, and console password). Choosing Send email opens an email in your default email application with instructions about how to sign in to the AWS Management Console.

Screenshot of the "Send email" link

This email provides a convenient way to send instructions about how to sign in to the AWS Management Console. The email does not include access keys or passwords, so to enable users to get started, you also will need to securely transmit those credentials according to your organization’s security policies.

Screenshot of email with sign-in instructions

Updates to the user details pages

We have also refreshed the user details pages. On the Permissions tab, you will see that the previous Attach policy button is now called Add permissions. This will launch you into the same permissions assignment workflow used in the user creation process. We’ve also changed the way that policies attached to a user are displayed and have added the count of groups attached to the user in the label of the Groups tab.

Screenshot of the changed user details page

On the Security credentials tab, we’ve updated a few items as well. We’ve updated the Sign-in credentials section and added Console password, which shows if AWS Management Console access is enabled or disabled. We’ve also added the Console login link to make it easier to find. We have also updated the Console password, Create access key, and Upload SSH public key workflows so that they are easier to use.

Screenshot of updates made to the "Security credentials" tab

Conclusion

We made these improvements to make it easier for you to create and manage permissions for your IAM users. As you are using these new tools, make sure to review and follow IAM best practices, which can help you improve your security posture and make your account easier to manage.

If you have any feedback or questions about these new features, submit a comment below or start a new thread on the IAM forum.

– Rob

Election Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/election_securi.html

It’s over. The voting went smoothly. As of the time of writing, there are no serious fraud allegations, nor credible evidence that anyone tampered with voting rolls or voting machines. And most important, the results are not in doubt.

While we may breathe a collective sigh of relief about that, we can’t ignore the issue until the next election. The risks remain.

As computer security experts have been saying for years, our newly computerized voting systems are vulnerable to attack by both individual hackers and government-sponsored cyberwarriors. It is only a matter of time before such an attack happens.

Electronic voting machines can be hacked, and those machines that do not include a paper ballot that can verify each voter’s choice can be hacked undetectably. Voting rolls are also vulnerable; they are all computerized databases whose entries can be deleted or changed to sow chaos on Election Day.

The largely ad hoc system in states for collecting and tabulating individual voting results is vulnerable as well. While the difference between theoretical if demonstrable vulnerabilities and an actual attack on Election Day is considerable, we got lucky this year. Not just presidential elections are at risk, but state and local elections, too.

To be very clear, this is not about voter fraud. The risks of ineligible people voting, or people voting twice, have been repeatedly shown to be virtually nonexistent, and “solutions” to this problem are largely voter-suppression measures. Election fraud, however, is both far more feasible and much more worrisome.

Here’s my worry. On the day after an election, someone claims that a result was hacked. Maybe one of the candidates points to a wide discrepancy between the most recent polls and the actual results. Maybe an anonymous person announces that he hacked a particular brand of voting machine, describing in detail how. Or maybe it’s a system failure during Election Day: voting machines recording significantly fewer votes than there were voters, or zero votes for one candidate or another. (These are not theoretical occurrences; they have both happened in the United States before, though because of error, not malice.)

We have no procedures for how to proceed if any of these things happen. There’s no manual, no national panel of experts, no regulatory body to steer us through this crisis. How do we figure out if someone hacked the vote? Can we recover the true votes, or are they lost? What do we do then?

First, we need to do more to secure our elections system. We should declare our voting systems to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards. Voting-security experts can deal with the technical details, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters, counted by computer but recountable by hand. And we need a system of pre-election and postelection security audits to increase confidence in the system.

Second, election tampering, either by a foreign power or by a domestic actor, is inevitable, so we need detailed procedures to follow–both technical procedures to figure out what happened, and legal procedures to figure out what to do–that will efficiently get us to a fair and equitable election resolution. There should be a board of independent computer-security experts to unravel what happened, and a board of independent election officials, either at the Federal Election Commission or elsewhere, empowered to determine and put in place an appropriate response.

In the absence of such impartial measures, people rush to defend their candidate and their party. Florida in 2000 was a perfect example. What could have been a purely technical issue of determining the intent of every voter became a battle for who would win the presidency. The debates about hanging chads and spoiled ballots and how broad the recount should be were contested by people angling for a particular outcome. In the same way, after a hacked election, partisan politics will place tremendous pressure on officials to make decisions that override fairness and accuracy.

That is why we need to agree on policies to deal with future election fraud. We need procedures to evaluate claims of voting-machine hacking. We need a fair and robust vote-auditing process. And we need all of this in place before an election is hacked and battle lines are drawn.

In response to Florida, the Help America Vote Act of 2002 required each state to publish its own guidelines on what constitutes a vote. Some states — Indiana, in particular — set up a “war room” of public and private cybersecurity experts ready to help if anything did occur. While the Department of Homeland Security is assisting some states with election security, and the F.B.I. and the Justice Department made some preparations this year, the approach is too piecemeal.

Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser–and all the supporters–that he or she lost. To achieve the first purpose, the voting system must be fair and accurate. To achieve the second one, it must be shown to be fair and accurate.

We need to have these conversations before something happens, when everyone can be calm and rational about the issues. The integrity of our elections is at stake, which means our democracy is at stake.

This essay previously appeared in the New York Times.

Building a Backup System for Scaled Instances using AWS Lambda and Amazon EC2 Run Command

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/building-a-backup-system-for-scaled-instances-using-aws-lambda-and-amazon-ec2-run-command/

Diego Natali, AWS Cloud Support Engineer

When an Auto Scaling group needs to scale in, replace an unhealthy instance, or rebalance Availability Zones, the instance is terminated, data on the instance is lost, and any ongoing tasks are interrupted. This is normal behavior, but sometimes you might need to run some commands, wait for a task to complete, or execute some operations (for example, backing up logs) before the instance is terminated. For this reason, Auto Scaling introduced lifecycle hooks, which give you more control over timing after an instance is marked for termination.

In this post, I explore how you can leverage Auto Scaling lifecycle hooks, AWS Lambda, and Amazon EC2 Run Command to back up your data automatically before the instance is terminated. The solution illustrated allows you to back up your data to an S3 bucket; however, with minimal changes, it is possible to adapt this design to carry out any task that you prefer before the instance gets terminated, for example, waiting for a worker to complete a task before terminating the instance.

 

Using Auto Scaling lifecycle hooks, Lambda, and EC2 Run Command

You can configure your Auto Scaling group to add a lifecycle hook when an instance is selected for termination. The lifecycle hook enables you to perform custom actions as Auto Scaling launches or terminates instances. In order to perform these actions automatically, you can leverage Lambda and EC2 Run Command to allow you to avoid the use of additional software and to rely completely on AWS resources.

For example, when an instance is marked for termination, Amazon CloudWatch Events can execute an action based on that. This action can be a Lambda function to execute a remote command on the machine and upload your logs to your S3 bucket.

EC2 Run Command enables you to run remote scripts through the agent running within the instance. You use this feature to back up the instance logs and to complete the lifecycle hook so that the instance is terminated.

The example provided in this post works precisely this way. Lambda gathers the instance ID from CloudWatch Events and then triggers a remote command to back up the instance logs.

Architecture Graph

 

Set up the environment

Make sure that you have the latest version of the AWS CLI installed locally. For more information, see Getting Set Up with the AWS Command Line Interface.

Step 1 – Create an SNS topic to receive the result of the backup

In this step, you create an Amazon SNS topic in the region in which to run your Auto Scaling group. This topic allows EC2 Run Command to send you the outcome of the backup. The output of the aws sns create-topic command includes the ARN. Save the ARN, as you need it for future steps.

aws sns create-topic --name backupoutcome

Now subscribe your email address as the endpoint for SNS to receive messages.

aws sns subscribe --topic-arn <enter-your-sns-arn-here> --protocol email --notification-endpoint <your_email>

Step 2 – Create an IAM role for your instances and your Lambda function

In this step, you use the AWS console to create the AWS Identity and Access Management (IAM) role for your instances and Lambda to enable them to run the SSM agent, upload your files to your S3 bucket, and complete the lifecycle hook.

First, you need to create a custom policy to allow your instances and Lambda function to complete lifecycle hooks and publish to the SNS topic set up in Step 1.

  1. Log into the IAM console.
  2. Choose Policies, Create Policy
  3. For Create Your Own Policy, choose Select.
  4. For Policy Name, type “ASGBackupPolicy”.
  5. For Policy Document, paste the following policy, which allows completing a lifecycle hook and publishing to the SNS topic:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:CompleteLifecycleAction",
        "sns:Publish"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Create the role for EC2.

  1. In the left navigation pane, choose Roles, Create New Role.
  2. For Role Name, type “instance-role” and choose Next Step.
  3. Choose Amazon EC2 and choose Next Step.
  4. Add the policies AmazonEC2RoleforSSM and ASGBackupPolicy.
  5. Choose Next Step, Create Role.

Create the role for the Lambda function.

  1. In the left navigation pane, choose Roles, Create New Role.
  2. For Role Name, type “lambda-role” and choose Next Step.
  3. Choose AWS Lambda and choose Next Step.
  4. Add the policies AmazonSSMFullAccess, ASGBackupPolicy, and AWSLambdaBasicExecutionRole.
  5. Choose Next Step, Create Role.
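If you would rather create the policy and the two roles from the AWS CLI, the following sketch shows one way to do it. The file names and account ID are placeholders; asg-backup-policy.json is assumed to contain the ASGBackupPolicy document shown above:

# Custom policy that allows completing lifecycle hooks and publishing to SNS
aws iam create-policy --policy-name ASGBackupPolicy --policy-document file://asg-backup-policy.json

# Role and instance profile for EC2
cat > ec2-trust.json <<'EOF'
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
EOF
aws iam create-role --role-name instance-role --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name instance-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
aws iam attach-role-policy --role-name instance-role --policy-arn arn:aws:iam::<your_account_id>:policy/ASGBackupPolicy
aws iam create-instance-profile --instance-profile-name instance-role
aws iam add-role-to-instance-profile --instance-profile-name instance-role --role-name instance-role

# Role for the Lambda function
cat > lambda-trust.json <<'EOF'
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
EOF
aws iam create-role --role-name lambda-role --assume-role-policy-document file://lambda-trust.json
aws iam attach-role-policy --role-name lambda-role --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess
aws iam attach-role-policy --role-name lambda-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam attach-role-policy --role-name lambda-role --policy-arn arn:aws:iam::<your_account_id>:policy/ASGBackupPolicy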

Step 3 – Create an Auto Scaling group and configure the lifecycle hook

In this step, you create the Auto Scaling group and configure the lifecycle hook.

  1. Log into the EC2 console.
  2. Choose Launch Configurations, Create launch configuration.
  3. Select the latest Amazon Linux AMI and whatever instance type you prefer, and choose Next: Configuration details.
  4. For Name, type “ASGBackupLaunchConfiguration”.
  5. For IAM role, choose “instance-role” and expand Advanced Details.
  6. For User data, add the following lines to install and launch the SSM agent at instance boot:
    #!/bin/bash
    sudo yum install amazon-ssm-agent -y
    sudo /sbin/start amazon-ssm-agent
  7. Choose Skip to review, Create launch configuration, select your key pair, and then choose Create launch configuration.
  8. Choose Create an Auto Scaling group using this launch configuration.
  9. For Group name, type “ASGBackup”.
  10. Select your VPC and at least one subnet and then choose Next: Configuration scaling policies, Review, and Create Auto Scaling group.

Your Auto Scaling group is now created and you need to add the lifecycle hook named “ASGBackup” by using the AWS CLI:

aws autoscaling put-lifecycle-hook --lifecycle-hook-name ASGBackup --auto-scaling-group-name ASGBackup --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING --heartbeat-timeout 3600
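If you want to script the launch configuration and Auto Scaling group as well, a rough CLI sketch might look like the following. The AMI ID, key pair, and subnet are placeholders, and install-ssm-agent.sh is assumed to contain the user data script shown above:

# Launch configuration with the instance profile and the SSM agent user data
aws autoscaling create-launch-configuration --launch-configuration-name ASGBackupLaunchConfiguration --image-id <ami-id> --instance-type t2.micro --key-name <your-key-pair> --iam-instance-profile instance-role --user-data file://install-ssm-agent.sh

# Auto Scaling group in one of your subnets
aws autoscaling create-auto-scaling-group --auto-scaling-group-name ASGBackup --launch-configuration-name ASGBackupLaunchConfiguration --min-size 1 --max-size 1 --desired-capacity 1 --vpc-zone-identifier <subnet-id>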

Step 4 – Create an S3 bucket for files

Create an S3 bucket where your data will be saved, or use an existing one. To create a new one, you can use this AWS CLI command:

aws s3api create-bucket --bucket <your_bucket_name>

Step 5 – Create the SSM document

The following JSON document archives the files in “BACKUPDIRECTORY” and then copies them to your S3 bucket “S3BUCKET”. Every time this command completes, an SNS message is sent to the SNS topic specified by the “SNSTARGET” variable, and the lifecycle hook is completed.

In the document, update the following variables to match your environment:

  • ASGNAME: the name of your Auto Scaling group (“ASGBackup” in this walkthrough)
  • LIFECYCLEHOOKNAME: the name of the lifecycle hook (“ASGBackup”)
  • BACKUPDIRECTORY: the directory to back up (“/var/log”)
  • S3BUCKET: the name of the S3 bucket you created in Step 4
  • SNSTARGET: the ARN of the SNS topic you created in Step 1, built from the region and your account ID

Here is the document:

{
  "schemaVersion": "1.2",
  "description": "Backup logs to S3",
  "parameters": {},
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": [
            "",
            "ASGNAME='ASGBackup'",
            "LIFECYCLEHOOKNAME='ASGBackup'",
            "BACKUPDIRECTORY='/var/log'",
            "S3BUCKET='<your_bucket_name>'",
            "",
            "# Look up the instance ID and region from instance metadata",
            "INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)",
            "REGION=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)",
            "REGION=${REGION::-1}",
            "",
            "# Build the SNS topic ARN once the region is known",
            "SNSTARGET='arn:aws:sns:'${REGION}':<your_account_id>:<your_sns_backupoutcome_topic>'",
            "HOOKRESULT='CONTINUE'",
            "MESSAGE=''",
            "",
            "# Archive the backup directory; if that succeeds, copy the archive to S3",
            "tar -cf /tmp/${INSTANCEID}.tar $BACKUPDIRECTORY &> /tmp/backup",
            "if [ $? -ne 0 ]",
            "then",
            "   MESSAGE=$(cat /tmp/backup)",
            "else",
            "   aws s3 cp /tmp/${INSTANCEID}.tar s3://${S3BUCKET}/${INSTANCEID}/ &> /tmp/backup",
            "   MESSAGE=$(cat /tmp/backup)",
            "fi",
            "",
            "# Send the outcome to SNS and complete the lifecycle hook so termination can proceed",
            "aws sns publish --subject 'ASG Backup' --message \"$MESSAGE\" --target-arn ${SNSTARGET} --region ${REGION}",
            "aws autoscaling complete-lifecycle-action --lifecycle-hook-name ${LIFECYCLEHOOKNAME} --auto-scaling-group-name ${ASGNAME} --lifecycle-action-result ${HOOKRESULT} --instance-id ${INSTANCEID} --region ${REGION}"
          ]
        }
      ]
    }
  }
}
  1. Log into the EC2 console.
  2. Choose Command History, Documents, Create document.
  3. For Document name, enter “ASGLogBackup”.
  4. For Content, add the above JSON, modified for your environment.
  5. Choose Create document.
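Alternatively, you can register the document from the AWS CLI, assuming you saved the modified JSON above as asg_log_backup.json (a hypothetical file name):

aws ssm create-document --name "ASGLogBackup" --content file://asg_log_backup.json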

Step 6 – Create the Lambda function

The Lambda function uses modules included in the Python 2.7 Standard Library and the AWS SDK for Python module (boto3), which is preinstalled as part of Lambda. The function code performs the following:

  • Checks to see whether the SSM document exists. This document is the script that your instance runs.
  • Sends the command to the instance that is being terminated. It checks for the status of EC2 Run Command and if it fails, the Lambda function completes the lifecycle hook.
  1. Log in to the Lambda console.
  2. Choose Create Lambda function.
  3. For Select blueprint, choose Skip, Next.
  4. For Name, type “lambda_backup” and for Runtime, choose Python 2.7.
  5. For Lambda function code, paste the Lambda function from the GitHub repository.
  6. Choose Choose an existing role.
  7. For Role, choose lambda-role (previously created).
  8. In Advanced settings, configure Timeout for 5 minutes.
  9. Choose Next, Create function.

Your Lambda function is now created.
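You can also create the function from the AWS CLI. This sketch assumes you downloaded the function code from the GitHub repository, saved it as lambda_function.py, and that lambda-role already exists; the account ID is a placeholder:

# Package the function code
zip lambda_backup.zip lambda_function.py

# Create the function with a 5-minute timeout
aws lambda create-function --function-name lambda_backup --runtime python2.7 --handler lambda_function.lambda_handler --role arn:aws:iam::<your_account_id>:role/lambda-role --zip-file fileb://lambda_backup.zip --timeout 300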

Step 7 – Configure CloudWatch Events to trigger the Lambda function

Create an event rule to trigger the Lambda function.

  1. Log in to the CloudWatch console.
  2. Choose Events, Create rule.
  3. For Select event source, choose Auto Scaling.
  4. For Specific instance event(s), choose EC2 Instance-terminate Lifecycle Action and for Specific group name(s), choose ASGBackup.
  5. For Targets, choose Lambda function and for Function, select the Lambda function that you previously created, “lambda_backup”.
  6. Choose Configure details.
  7. In Rule definition, type a name and choose Create rule.

Your event rule is now created; whenever your Auto Scaling group “ASGBackup” starts terminating an instance, your Lambda function will be triggered.
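The equivalent setup from the AWS CLI might look like the following sketch; the rule and statement names are arbitrary, and you must substitute your own region and account ID in the ARNs:

# Rule that matches terminate lifecycle actions for the ASGBackup group
aws events put-rule --name ASGBackupRule --event-pattern '{"source":["aws.autoscaling"],"detail-type":["EC2 Instance-terminate Lifecycle Action"],"detail":{"AutoScalingGroupName":["ASGBackup"]}}'

# Allow CloudWatch Events to invoke the function, then add it as the rule target
aws lambda add-permission --function-name lambda_backup --statement-id ASGBackupRule --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:<region>:<your_account_id>:rule/ASGBackupRule
aws events put-targets --rule ASGBackupRule --targets Id=1,Arn=arn:aws:lambda:<region>:<your_account_id>:function:lambda_backup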

Step 8 – Test the environment

From the Auto Scaling console, you can change the desired capacity and the minimum size of your Auto Scaling group to 0 so that the running instance starts to terminate. After the termination starts, you can see on the Instances tab that the instance lifecycle status has changed to Termination:Wait. While the instance is in this state, the Lambda function and the command are executed.
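You can trigger the same scale-in from the AWS CLI:

aws autoscaling update-auto-scaling-group --auto-scaling-group-name ASGBackup --min-size 0 --desired-capacity 0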

You can review your CloudWatch logs to see the Lambda output. In the CloudWatch console, choose Logs and /aws/lambda/lambda_backup to see the execution output.

You can go to your S3 bucket and check that the files were uploaded. You can also check Command History in the EC2 console to see if the command was executed correctly.

Conclusion

Now that you’ve seen an example of how you can combine various AWS services to automate the backup of your files by relying only on AWS services, I hope you are inspired to create your own solutions.

Auto Scaling lifecycle hooks, Lambda, and EC2 Run Command are powerful tools because they allow you to respond to Auto Scaling events automatically, such as when an instance is terminated. However, you can also use the same idea for other solutions like exiting processes gracefully before an instance is terminated, deregistering your instance from service managers, and scaling stateful services by moving state to other running instances. The possible use cases are only limited by your imagination.

Learn more about:

I’ve open-sourced the code in this example in the awslabs GitHub repo; I can’t wait to see your feedback and your ideas about how to improve the solution.

AWS Week in Review – November 7, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-november-7-2016/

Let’s take a quick look at what happened in AWS-land last week. Thanks are due to the 16 internal and external contributors who submitted pull requests!

Monday

November 7

Tuesday

November 8

Wednesday

November 9

Thursday

November 10

Friday

November 11

Saturday

November 12

Sunday

November 13

New & Notable Open Source

  • Sippy Cup is a Python nanoframework for AWS Lambda and API Gateway.
  • Yesterdaytabase is a Python tool for constantly refreshing data in your staging and test environments with Lambda and CloudFormation.
  • ebs-snapshot-lambda is a Lambda function to snapshot EBS volumes and purge old snapshots.
  • examples is a collection of boilerplates and examples of serverless architectures built with the Serverless Framework and Lambda.
  • ecs-deploy-cli is a simple and easy way to deploy tasks and update services in AWS ECS.
  • Comments-Showcase is a serverless comment webapp that uses API Gateway, Lambda, DynamoDB, and IoT.
  • serverless-offline emulates Lambda and API Gateway locally for development of Serverless projects.
  • aws-sign-web is a JavaScript implementation of AWS Signature v4 for use within web browsers.
  • Zappa implements serverless Django on Lambda and API Gateway.
  • awsping is a console tool to check latency to AWS regions.

New SlideShare Presentations

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

On Trump

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2016/11/on-trump.html

I dislike commenting on politics. I think it’s difficult to contribute any novel thought – and in today’s hyper-polarized world, stating an unpopular or half-baked opinion is a recipe for losing friends or worse. Still, with many of my colleagues expressing horror and disbelief over what happened on Tuesday night, I reluctantly decided to jot down my thoughts.

I think that in trying to explain away the meteoric rise of Mr. Trump, many of the mainstream commentators have focused on two phenomena. Firstly, they singled out the emergence of “filter bubbles” – a mechanism that allows people to reinforce their own biases and shields them from opposing views. Secondly, they implicated the dark undercurrents of racism, misogynism, or xenophobia that still permeate some corners of our society. From that ugly place, the connection to Mr. Trump’s foul-mouthed populism was not hard to make; his despicable bragging about women aside, to his foes, even an accidental hand gesture or an inane 4chan frog meme was proof enough. Once we crossed this line, the election was no longer about economic policy, the environment, or the like; it was an existential battle for equality and inclusiveness against the forces of evil that lurk in our midst. Not a day went by without a comparison between Mr. Trump and Adolf Hitler in the press. As for the moderate voters, the pundits had an explanation, too: the right-wing filter bubble must have clouded their judgment and created a false sense of equivalency between a horrid, conspiracy-peddling madman and our cozy, liberal status quo.

Now, before I offer my take, let me be clear that I do not wish to dismiss the legitimate concerns about the overtones of Mr. Trump’s campaign. Nor do I desire to downplay the scale of discrimination and hatred that the societies around the world are still grappling with, or the potential that the new administration could make it worse. But I found the aforementioned explanation of Mr. Trump’s unexpected victory to be unsatisfying in many ways. Ultimately, we all live in bubbles and we all have biases; in that regard, not much sets CNN apart from Fox News, Vox from National Review, or The Huffington Post from Breitbart. The reason why most of us would trust one and despise the other is that we instinctively recognize our own biases as more benign. After all, in the progressive world, we are fighting for an inclusive society that gives all people a fair chance to succeed. As for the other side? They seem like a bizarre, cartoonishly evil coalition of dimwits, racists, homophobes, and the ultra-rich. We even have serious scientific studies to back that up; their authors breathlessly proclaim that the conservative brain is inferior to the progressive brain. Unlike the conservatives, we believe in science, so we hit the “like” button and retweet the news.

But here’s the thing: I know quite a few conservatives, many of whom have probably voted for Mr. Trump – and they are about as smart, as informed, and as compassionate as my progressive friends. I think that the disconnect between the worldviews stems from something else: if you are a well-off person in a coastal city, you know people who are immigrants or who belong to other minorities, making you acutely attuned to their plight; but you may lack the same, deeply personal connection to – say – the situation of the lower middle class in the Midwest. You might have seen surprising charts or read a touching story in Mother Jones few years back, but it’s hard to think of them as individuals; they are more of a socioeconomic obstacle, a problem to be solved. The same goes for our understanding of immigration or globalization: these phenomena make our high-tech hubs more prosperous and more open; the externalities of our policies, if any, are just an abstract price that somebody else ought to bear for doing what’s morally right. And so, when Mr. Trump promises to temporarily ban travel from Muslim countries linked to terrorism or anti-American sentiments, we (rightly) gasp in disbelief; but when Mr. Obama paints an insulting caricature of rural voters as simpletons who “cling to guns or religion or antipathy to people who aren’t like them”, we smile and praise him for his wit, not understanding how the other side could be so offended by the truth. Similarly, when Mrs. Clinton chuckles while saying “we are going to put a lot of coal miners out of business” to a cheering crowd, the scene does not strike us as a thoughtless, offensive, or in poor taste. Maybe we will read a story about the miners in Mother Jones some day?

Of course, liberals take pride in caring for the common folk, but I suspect that their leaders’ attempts to reach out to the underprivileged workers in the “flyover states” often come across as ham-fisted and insincere. The establishment schools the voters about the inevitability of globalization, as if it were some cosmic imperative; they are told that to reject the premise would not just be wrong – but that it’d be a product of a diseased, nativist mind. They hear that the factories simply had to go to China or Mexico, and the goods just have to come back duty-free – all so that our complex, interconnected world can be a happier place. The workers are promised entitlements, but it stands to reason that they want dignity and hope for their children, not a lifetime on food stamps. The idle, academic debates about automation, post-scarcity societies, and Universal Basic Income probably come across as far-fetched and self-congratulatory, too.

The discourse is poisoned by cognitive biases in many other ways. The liberal media keeps writing about the unaccountable right-wing oligarchs who bankroll the conservative movement and supposedly poison people’s minds – but they offer nothing but praise when progressive causes are being bankrolled by Mr. Soros or Mr. Bloomberg. They claim that the conservatives represent “post-truth” politics – but their fact-checkers shoot down conservative claims over fairly inconsequential mistakes, while giving their favored politicians a pass on half-true platitudes about immigration, gun control, crime, or the sources of inequality. Mr. Obama sneers at the conservative bias of Fox News, but has no concern with the striking tilt to the left in the academia or in the mainstream press. The Economist finds it appropriate to refer to Trump supporters as “trumpkins” in print – but it would be unthinkable for them to refer to the fans of Mrs. Clinton using any sort of a mocking term. The pundits ponder the bold artistic statement made by the nude statues of the Republican nominee – but they would be disgusted if a conservative sculptor portrayed the Democratic counterpart in a similarly unflattering light. The commentators on MSNBC read into every violent incident at Trump rallies – but when a random group of BLM protesters starts chanting about killing police officers, we all agree it would not be fair to cast the entire movement in a negative light.

Most progressives are either oblivious to these biases, or dismiss them as a harmless casualty of fighting the good fight. Perhaps so – and it is not my intent to imply equivalency between the causes of the left and of the right. But in the end, I suspect that the liberal echo chamber contributed to the election of Mr. Trump far more than anything that ever transpired on the right. It marginalized and excluded legitimate but alien socioeconomic concerns from the mainstream political discourse, binning them with truly bigoted and unintelligent speech – and leaving the “flyover underclass” no option other than to revolt. And it wasn’t just a revolt of the awful fringes. On the right, we had Mr. Trump – a clumsy outsider who eschews many of the core tenets of the conservative platform, and who convincingly represents neither the neoconservative establishment of the Bush era, nor the Bible-thumping religious right of the Tea Party. On the left, we had Mr. Sanders – an unaccomplished Senator who offered simplistic but moving slogans, who painted the accumulation of wealth as the source of our ills, and who promised to mold the United States into an idyllic version of the social democracies of Europe – supposedly governed by the workers, and not by the exploitative elites.

I think that people rallied behind Mr. Sanders and Mr. Trump not because they particularly loved the candidates or took all their promises seriously – but because they had no other credible herald for their cause. When the mainstream media derided their rebellion and the left simply laughed it off, it only served as a battle cry. When tens of millions of Trump supporters were labeled as xenophobic and sexist deplorables who deserved no place in politics, it only pushed more moderates toward the fringe. Suddenly, rational people could see themselves voting for a politically inexperienced and brash billionaire – a guy who talks about cutting taxes for the rich, who wants to cozy up to Russia, and whose VP pick previously wasn’t so sure about LGBT rights. I think it all happened not because of Mr. Trump’s character traits or thoughtful political positions, and not because half of the country hates women and minorities. He won because he was the only one to promise to “drain the swamp” – and to promise hope, not handouts, to the lower middle class.

There is a risk that this election will prove to be a step back for civil rights, or that Mr. Trump’s bold but completely untested economic policies will leave the world worse off; while not certain, it pains me to even contemplate this possibility. When we see injustice, we should fight tooth and nail. But for now, I am not swayed by the preemptively apocalyptic narrative on the left. Perhaps naively, I have faith in the benevolence of our compatriots and the strength of the institutions of – as cheesy as it sounds – one of the great nations of the world.

How to Assign Permissions Using New AWS Managed Policies for Job Functions

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/how-to-assign-permissions-using-new-aws-managed-policies-for-job-functions/

Today, AWS Identity and Access Management (IAM) made 10 AWS managed policies available that align with common job functions in organizations. AWS managed policies enable you to set permissions using policies that AWS creates and manages, and with a single AWS managed policy for job functions, you can grant the permissions necessary for network or database administrators, for example.

You can attach multiple AWS managed policies to your users, roles, or groups if they span multiple job functions. As with all AWS managed policies, AWS will keep these policies up to date as we introduce new services or actions. You can use any AWS managed policy as a template or starting point for your own custom policy if the policy does not fully meet your needs (however, AWS will not automatically update any of your custom policies). In this blog post, I introduce the new AWS managed policies for job functions and show you how to use them to assign permissions.

The following list describes the new AWS managed policies for job functions.

  • Administrator: This policy grants full access to all AWS services.
  • Billing: This policy grants permissions for billing and cost management. These permissions include viewing and modifying budgets and payment methods. An additional step is required to access the AWS Billing and Cost Management pages after assigning this policy.
  • Data Scientist: This policy grants permissions for data analytics and analysis. Access to the following AWS services is a part of this policy: Amazon Elastic Map Reduce, Amazon Redshift, Amazon Kinesis, Amazon Machine Learning, Amazon Data Pipeline, Amazon S3, and Amazon Elastic File System. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
  • Database Administrator: This policy grants permissions to all AWS database services. Access to the following AWS services is a part of this policy: Amazon DynamoDB, Amazon ElastiCache, Amazon RDS, and Amazon Redshift. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
  • Developer Power User: This policy grants full access to all AWS services except for IAM.
  • Network Administrator: This policy grants permissions required for setting up and configuring network resources. Access to the following AWS services is included in this policy: Amazon Route 53, Route 53 Domains, Amazon VPC, and AWS Direct Connect. This policy grants access to actions that require delegating permissions to CloudWatch Logs. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for this service.
  • Security Auditor: This policy grants permissions required for configuring security settings and for monitoring events and logs in the account.
  • Support User: This policy grants permissions to troubleshoot and resolve issues in an AWS account. This policy also enables the user to contact AWS support to create and manage cases.
  • System Administrator: This policy grants permissions needed to support system and development operations. Access to the following AWS services is included in this policy: AWS CloudTrail, Amazon CloudWatch, CodeCommit, CodeDeploy, AWS Config, AWS Directory Service, EC2, IAM, AWS KMS, Lambda, RDS, Route 53, S3, SES, SQS, AWS Trusted Advisor, and Amazon VPC. This policy grants access to actions that require delegating permissions to EC2, CloudWatch, Lambda, and RDS. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
  • View Only User: This policy grants permissions to view existing resources across all AWS services within an account.

Some of the policies in the preceding list enable you to take advantage of additional features that are optional. These policies grant access to iam:PassRole, which passes a role to delegate permissions to an AWS service to carry out actions on your behalf. For example, the Network Administrator policy passes a role to CloudWatch’s flow-logs-vpc so that a network administrator can log and capture IP traffic for all the Amazon VPCs they create. You must create IAM service roles to take advantage of the optional features. To follow security best practices, the policies already include permissions to pass the optional service roles with a naming convention. This avoids escalating or granting unnecessary permissions if there are other service roles in the AWS account. If your users require the optional service roles, you must create a role that follows the naming conventions specified in the policy and then grant permissions to the role.

For example, your system administrator may want to run an application on an EC2 instance, which requires passing a role to Amazon EC2. The system administrator policy already has permissions to pass a role named ec2-sysadmin-*. When you create a role called ec2-sysadmin-example-application, for example, and assign the necessary permissions to the role, the role is passed automatically to the service and the system administrator can start using the features. The documentation summarizes each of the use cases for each job function that requires delegating permissions to another AWS service.

How to assign permissions by using an AWS managed policy for job functions

Let’s say that your company is new to AWS, and you have an employee, Alice, who is a database administrator. You want Alice to be able to use and manage all the database services while not giving her full administrative permissions. In this scenario, you can use the Database Administrator AWS managed policy. This policy grants view, read, write, and admin permissions for RDS, DynamoDB, Amazon Redshift, ElastiCache, and other AWS services a database administrator might need.

The Database Administrator policy includes permission to pass several service roles for various use cases. The following policy excerpt shows the different service roles that are applicable to a database administrator.

{
    "Effect": "Allow",
    "Action": [
        "iam:GetRole",
        "iam:PassRole"
    ],
    "Resource": [
        "arn:aws:iam::*:role/rds-monitoring-role",
        "arn:aws:iam::*:role/rdbms-lambda-access",
        "arn:aws:iam::*:role/lambda_exec_role",
        "arn:aws:iam::*:role/lambda-dynamodb-*",
        "arn:aws:iam::*:role/lambda-vpc-execution-role",
        "arn:aws:iam::*:role/DataPipelineDefaultRole",
        "arn:aws:iam::*:role/DataPipelineDefaultResourceRole"
    ]
}

However, in order for user Alice to be able to leverage any of the features that require another service, you must create a service role and grant it permissions. In this scenario, Alice wants only to monitor RDS databases. To enable Alice to monitor RDS databases, you must create a role called rds-monitoring-role and assign the necessary permissions to the role.

Steps to assign permissions to the Database Administrator policy in the IAM console

  1. Sign in to the IAM console.
  2. In the left pane, choose Policies and type database in the Filter box.
    Screenshot of typing "database" in filter box
  3. Choose the Database Administrator policy, and then choose Attach in the Policy Actions drop-down menu.
    Screenshot of choosing "Attach" from drop-down list
  4. Choose the user (in this case, I choose Alice), and then choose Attach Policy.
    Screenshot of selecting the user
  5. User Alice now has Database Administrator permissions. However, for Alice to monitor RDS databases, you must create a role called rds-monitoring-role. To do this, in the left pane, choose Roles, and then choose Create New Role.
  6. For the Role Name, type rds-monitoring-role to match the name that is specified in the Database Administrator policy. Choose Next Step.
    Screenshot of creating the role called rds-monitoring-role
  7. In the AWS Service Roles section, scroll down and choose Amazon RDS Role for Enhanced Monitoring.
    Screenshot of selecting "Amazon RDS Role for Enhanced Monitoring"
  8. Choose the AmazonRDSEnhancedMonitoringRole policy and then choose Select.
    Screenshot of choosing AmazonRDSEnhancedMonitoringRole
  9. After reviewing the role details, choose Create Role.

User Alice now has Database Administrator permissions and can now monitor RDS databases. To use other roles for the Database Administrator AWS managed policy, see the documentation.
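For reference, a rough CLI equivalent of the console steps above might look like the following sketch. The trust policy file name is hypothetical, and the managed policy ARNs are the ones the console attaches in this walkthrough:

# Attach the Database Administrator job function policy to Alice
aws iam attach-user-policy --user-name Alice --policy-arn arn:aws:iam::aws:policy/job-function/DatabaseAdministrator

# Create the rds-monitoring-role that the job function policy is allowed to pass
cat > rds-monitoring-trust.json <<'EOF'
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "monitoring.rds.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }
EOF
aws iam create-role --role-name rds-monitoring-role --assume-role-policy-document file://rds-monitoring-trust.json
aws iam attach-role-policy --role-name rds-monitoring-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole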

To learn more about AWS managed policies for job functions, see the IAM documentation. If you have comments about this post, submit them in the “Comments” section below. If you have questions about these new policies, please start a new thread on the IAM forum.

– Joy

Build Serverless Applications in AWS Mobile Hub with New Cloud Logic and User Sign-in Features

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/build-serverless-applications-in-aws-mobile-hub/

Last month, we showed you how to power a mobile back end using a serverless stack, with your business logic in AWS Lambda and the resulting cloud APIs exposed to your app through Amazon API Gateway. This pattern enables you to create and test mobile cloud APIs backed by business logic functions you develop, all without managing servers or paying for unused capacity. Further, you can share your business logic across your iOS and Android apps.

Today, AWS Mobile Hub is announcing a new Cloud Logic feature that makes it much easier for mobile app developers to implement this pattern, integrate their mobile apps with the resulting cloud APIs, and connect the business logic functions to a range of AWS services or on-premises enterprise resources. The feature automatically applies access control to the cloud APIs in API Gateway, making it easy to limit access to app users who have authenticated with any of the user sign-in options in Mobile Hub, including two new options that are also launching today:

  • Fully managed email- and password-based app sign-in
  • SAML-based app sign-in

In this post, we show how you can build a secure mobile back end in just a few minutes using a serverless stack.

 

Get started with AWS Mobile Hub

We launched Mobile Hub last year to simplify the process of building, testing, and monitoring mobile applications that use one or more AWS services. Use the integrated Mobile Hub console to choose the features you want to include in your app.

With Mobile Hub, you don’t have to be an AWS expert to begin using its powerful back-end features in your app. Mobile Hub then provisions and configures the necessary AWS services on your behalf and creates a working quickstart app for you. This includes IAM access control policies created to save you the effort of provisioning security policies for resources such as Amazon DynamoDB tables and associating those resources with Amazon Cognito.

Get started with Mobile Hub by navigating to it in the AWS console and choosing your features.

serverless-apps-mobile-hub-console

 

New user sign-in options

We are happy to announce that we now support two new user sign-in options that help you authenticate your app users and provide secure access control to AWS resources.

The Email and Password option lets you easily provision a fully managed user directory for your app in Amazon Cognito, with sign-in parameters that you configure. The SAML Federation option enables you to authenticate app users using existing credentials in your SAML-enabled identity provider, such as Active Directory Federation Services (ADFS). Mobile Hub also provides ready-to-use app flows for sign-up, sign-in, and password recovery that you can add to your own app.

Navigate to the User Sign-in tile in Mobile Hub to get started and choose your sign-in providers.

serverless-apps-mobile-hub-sign-in

Read more about the user sign-in feature in this blog and in the Mobile Hub documentation.

 

Enhanced Cloud Logic

We have enhanced the Cloud Logic feature (the right-hand tile in the top row of the above Mobile Hub screenshot), and you can now easily spin up a serverless stack. This enables you to create and test mobile cloud APIs connected to business logic functions that you develop. Previously, you could use Mobile Hub to integrate existing Lambda functions with your mobile app. With the enhanced Cloud Logic feature, you can now easily create Lambda functions, as well as API Gateway endpoints that you invoke from your mobile apps.

The feature automatically applies access control to the resulting REST APIs in API Gateway, making it easy to limit access to users who have authenticated with any of the user sign-in capabilities in Mobile Hub. Mobile Hub also allows you to test your APIs within your project and set up the permissions that your Lambda function needs for connecting to software resources behind a VPC (e.g., business applications or databases), within AWS or on-premises. Finally, you can integrate your mobile app with your cloud APIs using either the quickstart app (as an example) or the mobile app SDK; both are custom-generated to match your APIs. Here’s how it comes together:

serverless-apps-mobile-hub-enhanced-cloud-logic

Create an API

After you have chosen a sign-in provider, choose Configure more features. Navigate to Cloud Logic in your project and choose Create a new API. You can choose to limit access to your Cloud Logic API to only signed-in app users:

serverless-apps-mobile-hub-cloud-logic-api

Under the covers, this creates an IAM role for the API that limits access to authenticated, or signed-in, users.

serverless-apps-mobile-hub-cloud-logic-auth
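For illustration only, the trust policy on such a role typically restricts it to authenticated Amazon Cognito identities along the lines of the following sketch; the role name and identity pool ID are placeholders, and Mobile Hub generates the real policy for you.

# Hypothetical sketch of a trust policy that only authenticated Cognito identities can assume.
cat > authenticated-trust.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": { "Federated": "cognito-identity.amazonaws.com" },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                "cognito-identity.amazonaws.com:aud": "us-east-1:00000000-0000-0000-0000-000000000000"
            },
            "ForAnyValue:StringLike": {
                "cognito-identity.amazonaws.com:amr": "authenticated"
            }
        }
    }]
}
EOF

# Create a role with that trust policy (name is a placeholder).
aws iam create-role \
    --role-name MyCloudLogicApiAuthRole \
    --assume-role-policy-document file://authenticated-trust.json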

Quickstart app

The resulting quickstart app generated by Mobile Hub allows you to test your APIs and learn how to develop a mobile UX that invokes your APIs:

serverless-apps-mobile-hub-quickstart-app

Multi-stage rollouts

To make it easy to deploy and test your Lambda function quickly, Mobile Hub provisions both your API and the Lambda function in a Development stage, for instance, https://<yoururl>/Development. This is mapped to a Lambda alias of the same name, Development. Lambda functions are versioned, and this alias always points to the latest version of the Lambda function. This way, changes you make to your Lambda function are immediately reflected when you invoke the corresponding API in API Gateway.

When you are ready to deploy to production, you can create more stages in API Gateway, such as Production. This gives you an endpoint such as https://<yoururl>/Production. Then, create an alias of the same name in Lambda but point this alias to a specific version of your Lambda function (instead of $LATEST). This way, your Production endpoint always points to a known version of your Lambda function.
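As a rough CLI sketch of that promotion flow (the function name, version number, and REST API ID are placeholders):

# Freeze the current code as an immutable Lambda version.
aws lambda publish-version --function-name MyCloudLogicFunction

# Point a Production alias at a specific, known-good version instead of $LATEST.
aws lambda create-alias \
    --function-name MyCloudLogicFunction \
    --name Production \
    --function-version 3

# Deploy the API to a Production stage in API Gateway.
aws apigateway create-deployment \
    --rest-api-id a1b2c3d4e5 \
    --stage-name Production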

 

Summary

In this post, we demonstrated how to use Mobile Hub to create a secure serverless back end for your mobile app in minutes using three new features: enhanced Cloud Logic, email and password-based app sign-in, and SAML-based app sign-in. While it was just a few steps for the developer, Mobile Hub performed several underlying steps automatically (provisioning back-end resources, generating a sample app, and configuring IAM roles and sign-in providers) so you can focus your time on the unique value in your app. Get started today with AWS Mobile Hub.

Running Swift Web Applications with Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/running-swift-web-applications-with-amazon-ecs/

This is a guest post from Asif Khan about how to run Swift applications on Amazon ECS.

—–

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. A goal for Swift is to be the best language for uses ranging from systems programming, to mobile and desktop applications, scaling up to cloud services. As a developer, I am thrilled with the possibility of a homogeneous application stack and being able to leverage the benefits of Swift both on the client and server side. My code becomes more concise, and is more tightly integrated to the iOS environment.

In this post, I provide a walkthrough on building a web application using Swift and deploying it to Amazon ECS with an Ubuntu Linux image and Amazon ECR.

Overview of container deployment

Swift provides an Ubuntu version of the compiler that you can use. You still need a web server, a container strategy, and automated cluster management with automatic scaling for traffic peaks.

There are some decisions to make in your approach to deploy services to the cloud:

  • HTTP server
    Choose an HTTP server that supports Swift. I found Vapor to be the easiest. Vapor is a type-safe web framework for Swift 3.0 that works on iOS, macOS, and Ubuntu, and it makes deploying a Swift application very simple. Vapor comes with a CLI that helps you create new Vapor applications, generate Xcode projects and build them, and deploy your applications to Heroku or Docker. Another Swift web server is Perfect. In this post, I use Vapor because I found it easier to get started with.

Tip: Join the Vapor slack group; it is super helpful. I got answers on a long weekend which was super cool.

  • Container model
    Docker is an open-source technology that allows you to build, run, test, and deploy distributed applications inside software containers. It allows you to package a piece of software in a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc. Docker enables you to quickly, reliably, and consistently deploy applications regardless of environment.
    In this post, you’ll use Docker, but if you prefer Heroku, Vapor is compatible with Heroku too.
  • Image repository
    After you choose Docker as the container deployment unit, you need to store your Docker image in a repository to automate the deployment at scale. Amazon ECR is a fully-managed Docker registry and you can employ AWS IAM policies to secure your repositories.
  • Cluster management solution
    Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.

With ECS, it is very easy to adopt containers as a building block for your applications (distributed or otherwise) by skipping the need for you to install, operate, and scale your own cluster infrastructure. Using Docker containers within ECS provides the flexibility to schedule long-running applications, services, and batch processes. ECS maintains application availability and allows you to scale containers.

To put it all together, you have your Swift web application running in an HTTP server (Vapor), deployed in containers (Docker) whose images are stored in a secure repository (ECR), with automated cluster management (ECS) to scale horizontally.

Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions.
  2. Use the region selector in the navigation bar to choose the AWS Region where you want to deploy Swift web applications on AWS.
  3. Create a key pair in your preferred region.

Walkthrough

The following steps are required to set up your first web application written in Swift and deploy it to ECS:

  1. Download and launch an instance of the AWS CloudFormation template. The CloudFormation template installs Swift, Vapor, Docker, and the AWS CLI.
  2. SSH into the instance.
  3. Download the Vapor example code.
  4. Test the Vapor web application locally.
  5. Enhance the Vapor example code to include a new API.
  6. Push your code to a code repository.
  7. Create a Docker image of your code.
  8. Push your image to Amazon ECR.
  9. Deploy your Swift web application to Amazon ECS.

Detailed steps

  1. Download the CloudFormation template and spin up an EC2 instance. The CloudFormation template has Swift, Vapor, Docker, and git installed and configured. To launch an instance, launch the CloudFormation template from here.
  2. SSH into your instance:
    ssh -i <your-key.pem> ec2-user@<instance-public-dns>
  3. Download the Vapor example code – this code helps deploy the example you are using for your web application:
    git clone https://github.com/awslabs/ecs-swift-sample-app.git
  4. Test the Vapor application locally:
    1. Build a Vapor project:
      cd ~/ecs-swift-sample-app/example
      vapor build
    2. Run the Vapor project:
      vapor run serve --port=8080
    3. Validate that server is running (in a new terminal window):
      ssh -i <your-key.pem> ec2-user@<instance-public-dns> curl localhost:8080
  5. Enhance the Vapor code:
    1. Follow the guide to add a new route to the sample application: https://Vapor.readme.io/docs/hello-world
    2. Test your web application locally:
      vapor run serve --port=8080
      curl http://localhost:8080/hello
  6. Commit your changes and push this change to your GitHub repository:
    git add --all
    git commit -m "<your commit message>"
    git push
  7. Build a new Docker image with your code:
    docker build -t swift-on-ecs \
    --build-arg SWIFT_VERSION=DEVELOPMENT-SNAPSHOT-2016-06-06-a \
    --build-arg REPO_CLONE_URL= \
    ~/ecs-swift-sample-app/example
  8. Upload to ECR: Create an ECR repository and push the image by following the steps in Getting Started with Amazon ECR (a CLI sketch of this push appears after this list).
  9. Create an ECS cluster and run tasks following the steps in Getting Started with Amazon ECS:
    1. Be sure to use the full registry/repository:tag naming for your ECR images when creating your task. For example, aws_account_id.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest.
    2. Ensure that port forwarding for port 8080 is set up.
  10. You can now go to the container, get the public IP address, and access it to see the result:
    1. Open your running task and get the URL.
    2. Open the public URL in a browser.
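For step 8, the push sequence looks roughly like the following; the account ID, Region, and repository name are placeholders, and older versions of the AWS CLI use aws ecr get-login instead of get-login-password.

# Create the repository (one time).
aws ecr create-repository --repository-name swift-on-ecs --region us-east-1

# Authenticate Docker against your registry.
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image built in step 7.
docker tag swift-on-ecs:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest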

Your first Swift web application is now running.

At this point, you can use ECS with Auto Scaling to scale your services and also monitor them using CloudWatch metrics and events.

Conclusion

If you want to leverage the benefits of Swift, you can use Vapor as the web container with Amazon ECS and Amazon ECR to deploy Swift web applications at scale and delegate the cluster management to Amazon ECS.

There are many interesting things you could do with Swift beyond this post. To learn more about Swift, see the additional Swift libraries and read the Swift documentation.

If you have questions or suggestions, please comment below.

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS CloudTrail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

From ELK Stack to EKK: Aggregating and Analyzing Apache Logs with Amazon Elasticsearch Service, Amazon Kinesis, and Kibana

Post Syndicated from Pubali Sen original https://aws.amazon.com/blogs/devops/from-elk-stack-to-ekk-aggregating-and-analyzing-apache-logs-with-amazon-elasticsearch-service-amazon-kinesis-and-kibana/

By Pubali Sen, Shankar Ramachandran

Log aggregation is critical to your operational infrastructure. A reliable, secure, and scalable log aggregation solution makes all the difference during a crunch-time debugging session.

In this post, we explore an alternative to the popular log aggregation solution, the ELK stack (Elasticsearch, Logstash, and Kibana): the EKK stack (Amazon Elasticsearch Service, Amazon Kinesis, and Kibana). The EKK solution eliminates the undifferentiated heavy lifting of deploying, managing, and scaling your log aggregation solution. With the EKK stack, you can focus on analyzing logs and debugging your application, instead of managing and scaling the system that aggregates the logs.

In this blog post, we describe how to use an EKK stack to monitor Apache logs. Let’s look at the components of the EKK solution.

Amazon Elasticsearch Service is a popular search and analytics engine that provides real-time application monitoring and log and clickstream analytics. For this post, you will store and index Apache logs in Amazon ES. As a managed service, Amazon ES is easy to deploy, operate, and scale in the AWS Cloud. Using a managed service also eliminates administrative overhead, like patch management, failure detection, node replacement, backing up, and monitoring. Because Amazon ES includes built-in integration with Kibana, it eliminates installing and configuring that platform. This simplifies your process further. For more information about Amazon ES, see the Amazon Elasticsearch Service detail page.

Amazon Kinesis Agent is an easy-to-install standalone Java software application that collects and sends data. The agent continuously monitors the Apache log file and ships new data to the delivery stream. This agent is also responsible for file rotation, checkpointing, retrying upon failures, and delivering the log data reliably and in a timely manner. For more information, see Writing to Amazon Kinesis Firehose Using Amazon Kinesis Agent or Amazon Kinesis Agent in GitHub.

Amazon Kinesis Firehose provides the easiest way to load streaming data into AWS. In this post, Firehose helps you capture and automatically load the streaming log data to Amazon ES and back it up in Amazon Simple Storage Service (Amazon S3). For more information, see the Amazon Kinesis Firehose detail page.

You’ll provision an EKK stack by using an AWS CloudFormation template. The template provisions an Apache web server and sends the Apache access logs to an Amazon ES cluster using Amazon Kinesis Agent and Firehose. You’ll back up the logs to an S3 bucket. To see the logs, you’ll leverage the Amazon ES Kibana endpoint.

By using the template, you can quickly complete the following tasks:

  • Provision an Amazon ES cluster.
  • Provision an Amazon Elastic Compute Cloud (Amazon EC2) instance.
  • Install Apache HTTP Server version 2.4.23.
  • Install the Amazon Kinesis Agent on the web server.
  • Provision an Elastic Load Balancing load balancer.
  • Create the Amazon ES index and the associated log mappings.
  • Create an Amazon Kinesis Firehose delivery stream.
  • Create all AWS Identity and Access Management (IAM) roles and policies. For example, the Firehose delivery stream backs up the Apache logs to an S3 bucket. This requires that the Firehose delivery stream be associated with a role that gives it permission to upload the logs to the correct S3 bucket.
  • Configure Amazon CloudWatch Logs log streams and log groups for the Firehose delivery stream. This helps you to troubleshoot when the log events don’t reach their destination.

EKK Stack Architecture
The following architecture diagram shows how an EKK stack works.

Diagram of the EKK stack architecture

Prerequisites
To build the EKK stack, you must have the following:

  • An Amazon EC2 key pair in the US West (Oregon) Region. If you don’t have one, create one.
  • An S3 bucket in the US West (Oregon) Region. If you don’t have one, create one.
  • A default VPC in the US West (Oregon) Region. If you have deleted the default VPC, request one.
  • Administrator-level permissions in IAM to enable Amazon ES and Amazon S3 to receive the log data from the EC2 instance through Firehose.

Getting Started
Begin by launching the AWS CloudFormation template to create the stack.

1.     In the AWS CloudFormation console, choose Launch Stack to launch the AWS CloudFormation template. Make sure that you are in the US West (Oregon) Region.

Note: If you want to download the template to your computer and then upload it to AWS CloudFormation, you can do so from this Amazon S3 bucket. Save the template to a location on your computer that’s easy to remember.

2.     Choose Next.

3.     On the Specify Details page, provide the following:


a)    Stack Name: A name for your stack.

b)    InstanceType: Select the instance family for the EC2 instance hosting the web server.

c)     KeyName: Select the Amazon EC2 key pair in the US West (Oregon) Region.

d)    SSHLocation: The IP address range that can be used to connect to the EC2 instance by using SSH. Accept the default, 0.0.0.0/0.

e)    WebserverPort: The TCP/IP port of the web server. Accept the default, 80.

4.     Choose Next.

5.     On the Options page, optionally specify tags for your AWS CloudFormation template, and then choose Next.


6.     On the Review page, review your template details. Select the Acknowledgement checkbox, and then choose Create to create the stack.

It takes about 10-15 minutes to create the entire stack.

Configure the Amazon Kinesis Agent
After AWS CloudFormation has created the stack, configure the Amazon Kinesis Agent.

1.     In the AWS CloudFormation console, choose the Resources tab to find the Firehose delivery stream name. You need this to configure the agent. Record this value because you will need it in step 3.


2.     On the Outputs tab, find and record the public IP address of the web server. You need it to connect to the web server using SSH to configure the agent. For instructions on how to connect to an EC2 instance using SSH, see Connecting to Your Linux Instance Using SSH.


3. On the web server’s command line, run the following command:

sudo vi /etc/aws-kinesis/agent.json

This command opens the configuration file, agent.json, as follows.

{ "cloudwatch.emitMetrics": true, "firehose.endpoint": "firehose.us-west-2.amazonaws.com", "awsAccessKeyId": "", "awsSecretAccessKey": "", "flows": [ { "filePattern": "/var/log/httpd/access_log", "deliveryStream": "", "dataProcessingOptions": [ { "optionName": "LOGTOJSON", "logFormat": "COMMONAPACHELOG" } ] } ] } 

4.     For the deliveryStream key, type the value of the KinesisFirehoseDeliveryName that you retrieved from the stack’s Resources tab. After you type the value, save and close the agent.json file.

5.     Run the following command on the CLI to restart the agent (a quick way to verify the agent afterward appears after this list):

sudo service aws-kinesis-agent restart

6.     In the AWS CloudFormation console, choose the Resources tab and note the name of the Amazon ES cluster that corresponds to the logical ID ESDomain.

7.     Go to the AWS Management Console, and choose Amazon Elasticsearch Service. Under My Domains, you can see the Amazon ES domain that the AWS CloudFormation template created.
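To verify that the agent picked up the new configuration and is shipping records, tail its log file; the path below is the agent's default on Amazon Linux and may differ in your setup.

tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log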


Configure Kibana and View Your Apache Logs
Amazon ES provides a default installation of Kibana with every Amazon ES domain. You can find the Kibana endpoint on your domain dashboard in the Amazon ES console.

1.     In the Amazon ES console, choose the Kibana endpoint.

2.     In Kibana, for Index name or pattern, type logmonitor. logmonitor is the name of the Amazon ES index that you created for the web server access logs. The health checks from Amazon Elastic Load Balancing generate access logs on the web server, which flow through the EKK pipeline to Kibana for discovery and visualization.

3.     In Time-field name, select datetime.


4.     On the Kibana console, choose the Discover tab to see the Apache logs.


Use Kibana to visualize the log data by creating bar charts, line and scatter plots, histograms, pie charts, etc.


Pie chart of IP addresses accessing the web server in the last 30 days


Bar chart of IP addresses accessing the web server in the last 5 minutes

You can graph information about http response, bytes, or IP address to provide meaningful insights on the Apache logs. Kibana also facilitates making dashboards by combining graphs.

Monitor Your Log Aggregator

To monitor the Firehose delivery stream, navigate to the Firehose console. Choose the stream, and then choose the Monitoring tab to see the Amazon CloudWatch metrics for the stream.
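You can pull the same delivery metrics from the command line as well. The following is a sketch only: the delivery stream name and time window are placeholders, and the exact metric names for Amazon ES delivery are listed in the Firehose monitoring documentation.

aws cloudwatch get-metric-statistics \
    --namespace AWS/Firehose \
    --metric-name DeliveryToElasticsearch.Success \
    --dimensions Name=DeliveryStreamName,Value=<your-delivery-stream> \
    --start-time 2016-11-01T00:00:00Z \
    --end-time 2016-11-01T01:00:00Z \
    --period 300 \
    --statistics Sum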


 

When log delivery fails, the Amazon S3 and Amazon ES logs help you troubleshoot. For example, the following screenshot shows logs when delivery to an Amazon ES destination fails because the date mapping on the index was not in line with the ingest log.

Screenshot of Amazon ES delivery failure logs

Conclusion
In this post, we showed how to ship Apache logs to Kibana by using Amazon Kinesis Agent, Amazon ES, and Firehose. It’s worth pointing out that Firehose automatically scales up or down based on the rate at which your application generates logs. To learn more about scaling Amazon ES clusters, see the Amazon Elasticsearch Service Developer Guide.

Managed services like Amazon ES and Amazon Kinesis Firehose simplify provisioning and managing a log aggregation system. The ability to run SQL queries against your streaming log data using Amazon Kinesis Analytics further strengthens the case for using an EKK stack. The AWS CloudFormation template used in this post is available to extend and build your own EKK stack.

 

YouTube Signs Landmark Deal to End Music Video Blocking in Germany

Post Syndicated from Ernesto original https://torrentfreak.com/youtube-signs-landmark-deal-to-end-music-video-blocking-in-germany-160101/

Over the years, many Germans have gotten used to YouTube’s blocking notice, which prevented them from playing music videos that are widely available in the rest of the world.

At the root of this issue is a dispute between the video hosting platform and GEMA, a local music group that claims to represent 70,000 artists.

YouTube and GEMA have been fighting several court cases for more than half a decade, but today they announced a breakthrough.

The two parties have signed a licensing agreement where the video hosting platform agrees to pay a fee for making the music videos of GEMA members available in Germany. As a result, the blocking notifications on many videos will disappear.

“This is a win for music artists around the world, enabling them to reach new and existing fans in Germany, while also earning money from the advertising on their videos,” says YouTube’s Christophe Muller.

“And for YouTube users in Germany, who will no longer see a blocking message on music content that contains GEMA repertoire, for the first time in seven years.”

YouTube created a special GIF to celebrate the occasion


GEMA is happy with the outcome as well, in particular because YouTube agreed to pay retroactive compensation for videos that have been published since the start of the dispute in 2009.

“After seven years of tough negotiations, signing a deal with YouTube marks a milestone for GEMA and its members,” GEMA’s Harald Heker comments.

“What is crucial is that the license agreement covers publications from both the future and the past. With this agreement, we can provide our members their royalties,” he adds.

Increasingly, music groups are criticizing YouTube for “profiting” from the hard work of artists without paying proper compensation, so it’s not unlikely that similar deals will follow in other countries.

On the other hand, music insiders have also complained about GEMA’s restrictive policies. Sony Music’s Edgar Berger previously said that millions were lost because of the YouTube ban. At the same time, some musicians complained that they were not able to share their music freely.

With this in mind, the current agreement is certainly a big step forward for both musicians and the public at large.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.