Tag Archives: policies

Now Create and Manage Users More Easily with the AWS IAM Console

Post Syndicated from Rob Moncur original https://aws.amazon.com/blogs/security/now-create-and-manage-users-more-easily-with-the-aws-iam-console/

Today, we updated the AWS Identity and Access Management (IAM) console to make it easier for you to create and manage your IAM users. These improvements include an updated user creation workflow and new ways to assign and manage permissions. The new user workflow guides you through the process of setting user details, including enabling programmatic access (via access key) and console access (via password). In addition, you can assign permissions by adding users to a group, copying permissions from an existing user, and attaching policies directly to users. We have also updated the tools to view details and manage permissions for existing users. Finally, we’ve added 10 new AWS managed policies for job functions that you can use when assigning permissions.

In this post, I show how to use the updated user creation workflow and introduce changes to the user details pages. If you want to learn more about the new AWS managed policies for job functions, see How to Assign Permissions Using New AWS Managed Policies for Job Functions.

The new user creation workflow

Let’s imagine a new database administrator, Arthur, has just joined your team and will need access to your AWS account. To give Arthur access to your account, you must create a new IAM user for Arthur and assign relevant permissions so that Arthur can do his job.

To create a new IAM user:

  1. Navigate to the IAM console.
  2. To create the new user for Arthur, choose Users in the left pane, and then choose Add user.
    Screenshot of creating new user

Set user details
The first step in creating the user arthur is to enter a user name for Arthur and assign his access type:

  1. Type arthur in the User name box. (Note that this workflow allows you to create multiple users at a time. If you create more than one user in a single workflow, all users will have the same access type and permissions.)
    Screenshot of establishing the user name
  2. In addition to using the AWS Management Console, Arthur needs to use the AWS CLI and make API calls using the AWS SDK; therefore, you have to configure both programmatic and AWS Management Console access for him (see the following screenshot). If you select AWS Management Console access, you also have the option to use either an Autogenerated password, which generates a random password retrievable after the user has been created, or a Custom password, which you can define yourself. In this case, choose Autogenerated password.

By enabling the option User must change password at next sign-in, arthur will be required to change his password the first time he signs in. If you do not have the account-wide setting enabled that allows users to change their own passwords, the workflow automatically adds the IAMUserChangePassword policy to arthur, giving him the ability to change his own password (and no one else's).
Screenshot of configuring both programmatic and AWS Management Console access for Arthur

You can see the contents of the policy by choosing IAMUserChangePassword. This policy grants access to the IAM action iam:ChangePassword and uses the IAM policy variable ${aws:username}, which resolves to the user name of the authenticating user; any user the policy is attached to can therefore change only their own password. The policy also grants access to the IAM action iam:GetAccountPasswordPolicy, which lets users view the account password policy so they can set a new password that conforms to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ChangePassword"
            ],
            "Resource": [
                "arn:aws:iam::*:user/${aws:username}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetAccountPasswordPolicy"
            ],
            "Resource": "*"
        }
    ]
}
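
If you prefer to script user creation, the console workflow can be approximated with the AWS CLI. The following commands are a sketch; the temporary password is a placeholder you must replace, and you should distribute the generated credentials according to your organization's security policies:

aws iam create-user --user-name arthur
aws iam create-login-profile --user-name arthur --password <temporary-password> --password-reset-required
aws iam create-access-key --user-name arthur

The create-login-profile command enables console access and forces a password change at first sign-in, and create-access-key returns the access key ID and secret access key for programmatic access.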

Assign permissions

Arthur needs the necessary permissions to do his job as a database administrator. Because you do not have an IAM group set up for database administrators yet, you will create a new group and assign the proper permissions to it:

  1. Because the database administrator team may grow over time, assigning permissions through a group makes it easy to grant the same permissions to additional team members in the future. Choose Add user to group.
  2. Choose Create group. This opens a new window where you can create a new group.
    Screenshot of creating the group
  3. Call the new group DatabaseAdmins and attach the DatabaseAdministrator policy from the new AWS managed policies for job functions, as shown in the following screenshot. This policy enables Arthur to use AWS database services and other supporting services to do his job as a database administrator.

Note: This policy enables you to use additional features in other AWS services (such as Amazon CloudWatch and AWS Data Pipeline). In order to do so, you must create one or more IAM service roles. To understand the different features and the service roles required, see our documentation.
Screenshot of creating DatabaseAdmins group
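
The same group setup can be scripted with the AWS CLI. This is a sketch; the policy ARN shown is the expected ARN for the DatabaseAdministrator job-function managed policy, so verify it in your account before relying on it:

aws iam create-group --group-name DatabaseAdmins
aws iam attach-group-policy --group-name DatabaseAdmins --policy-arn arn:aws:iam::aws:policy/job-function/DatabaseAdministrator
aws iam add-user-to-group --group-name DatabaseAdmins --user-name arthur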

Review the permissions you assigned to arthur

After creating the group, move to the review step to verify the details and permissions you assigned to arthur (see the following screenshot). If you decide you need to adjust the permissions, you can choose Previous to add or remove assigned permissions. After confirming that arthur has the correct access type and permissions, you can proceed to the next step by choosing Create user.

Screenshot of reviewing the permissions

Retrieve security credentials and email sign-in instructions

The user arthur has now been created with the appropriate permissions.

Screenshot of the "Success" message

You can now retrieve security credentials (access key ID, secret access key, and console password). If you choose Send email, your default email application generates an email with instructions for signing in to the AWS Management Console.

Screenshot of the "Send email" link

This email provides a convenient way to send instructions about how to sign in to the AWS Management Console. The email does not include access keys or passwords, so to enable users to get started, you also will need to securely transmit those credentials according to your organization’s security policies.

Screenshot of email with sign-in instructions

Updates to the user details pages

We have also refreshed the user details pages. On the Permissions tab, you will see that the previous Attach policy button is now called Add permissions. Choosing it launches the same permissions-assignment workflow used during user creation. We've also changed the way policies attached to a user are displayed and added a count of the groups the user belongs to in the label of the Groups tab.

Screenshot of the changed user details page

On the Security credentials tab, we’ve updated a few items as well. We’ve updated the Sign-in credentials section and added Console password, which shows whether AWS Management Console access is enabled or disabled. We’ve also added the Console login link to make it easier to find. We have also updated the Console password, Create access key, and Upload SSH public key workflows so that they are easier to use.

Screenshot of updates made to the "Security credentials" tab

Conclusion

We made these improvements to make it easier for you to create and manage permissions for your IAM users. As you are using these new tools, make sure to review and follow IAM best practices, which can help you improve your security posture and make your account easier to manage.

If you have any feedback or questions about these new features, submit a comment below or start a new thread on the IAM forum.

– Rob

Election Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/election_securi.html

It’s over. The voting went smoothly. As of the time of writing, there are no serious fraud allegations, nor credible evidence that anyone tampered with voting rolls or voting machines. And most important, the results are not in doubt.

While we may breathe a collective sigh of relief about that, we can’t ignore the issue until the next election. The risks remain.

As computer security experts have been saying for years, our newly computerized voting systems are vulnerable to attack by both individual hackers and government-sponsored cyberwarriors. It is only a matter of time before such an attack happens.

Electronic voting machines can be hacked, and those machines that do not include a paper ballot that can verify each voter’s choice can be hacked undetectably. Voting rolls are also vulnerable; they are all computerized databases whose entries can be deleted or changed to sow chaos on Election Day.

The largely ad hoc system in states for collecting and tabulating individual voting results is vulnerable as well. While the difference between theoretical if demonstrable vulnerabilities and an actual attack on Election Day is considerable, we got lucky this year. Not just presidential elections are at risk, but state and local elections, too.

To be very clear, this is not about voter fraud. The risks of ineligible people voting, or people voting twice, have been repeatedly shown to be virtually nonexistent, and “solutions” to this problem are largely voter-suppression measures. Election fraud, however, is both far more feasible and much more worrisome.

Here’s my worry. On the day after an election, someone claims that a result was hacked. Maybe one of the candidates points to a wide discrepancy between the most recent polls and the actual results. Maybe an anonymous person announces that he hacked a particular brand of voting machine, describing in detail how. Or maybe it’s a system failure during Election Day: voting machines recording significantly fewer votes than there were voters, or zero votes for one candidate or another. (These are not theoretical occurrences; they have both happened in the United States before, though because of error, not malice.)

We have no procedures for how to proceed if any of these things happen. There’s no manual, no national panel of experts, no regulatory body to steer us through this crisis. How do we figure out if someone hacked the vote? Can we recover the true votes, or are they lost? What do we do then?

First, we need to do more to secure our elections system. We should declare our voting systems to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards. Voting-security experts can deal with the technical details, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters, counted by computer but recountable by hand. And we need a system of pre-election and postelection security audits to increase confidence in the system.

Second, election tampering, either by a foreign power or by a domestic actor, is inevitable, so we need detailed procedures to follow–both technical procedures to figure out what happened, and legal procedures to figure out what to do–that will efficiently get us to a fair and equitable election resolution. There should be a board of independent computer-security experts to unravel what happened, and a board of independent election officials, either at the Federal Election Commission or elsewhere, empowered to determine and put in place an appropriate response.

In the absence of such impartial measures, people rush to defend their candidate and their party. Florida in 2000 was a perfect example. What could have been a purely technical issue of determining the intent of every voter became a battle for who would win the presidency. The debates about hanging chads and spoiled ballots and how broad the recount should be were contested by people angling for a particular outcome. In the same way, after a hacked election, partisan politics will place tremendous pressure on officials to make decisions that override fairness and accuracy.

That is why we need to agree on policies to deal with future election fraud. We need procedures to evaluate claims of voting-machine hacking. We need a fair and robust vote-auditing process. And we need all of this in place before an election is hacked and battle lines are drawn.

In response to Florida, the Help America Vote Act of 2002 required each state to publish its own guidelines on what constitutes a vote. Some states — Indiana, in particular — set up a “war room” of public and private cybersecurity experts ready to help if anything did occur. While the Department of Homeland Security is assisting some states with election security, and the F.B.I. and the Justice Department made some preparations this year, the approach is too piecemeal.

Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser–and all the supporters–that he or she lost. To achieve the first purpose, the voting system must be fair and accurate. To achieve the second one, it must be shown to be fair and accurate.

We need to have these conversations before something happens, when everyone can be calm and rational about the issues. The integrity of our elections is at stake, which means our democracy is at stake.

This essay previously appeared in the New York Times.

Building a Backup System for Scaled Instances using AWS Lambda and Amazon EC2 Run Command

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/building-a-backup-system-for-scaled-instances-using-aws-lambda-and-amazon-ec2-run-command/

Diego Natali, AWS Cloud Support Engineer

When an Auto Scaling group needs to scale in, replace an unhealthy instance, or rebalance Availability Zones, it terminates an instance: data on the instance is lost and any in-progress tasks are interrupted. This is normal behavior, but there are use cases in which you might need to run commands, wait for a task to complete, or execute other operations (for example, backing up logs) before the instance is terminated. For these cases, Auto Scaling introduced lifecycle hooks, which give you more control over timing after an instance is marked for termination.

In this post, I explore how you can leverage Auto Scaling lifecycle hooks, AWS Lambda, and Amazon EC2 Run Command to back up your data automatically before the instance is terminated. The solution illustrated allows you to back up your data to an S3 bucket; however, with minimal changes, it is possible to adapt this design to carry out any task that you prefer before the instance gets terminated, for example, waiting for a worker to complete a task before terminating the instance.

 

Using Auto Scaling lifecycle hooks, Lambda, and EC2 Run Command

You can configure your Auto Scaling group to add a lifecycle hook when an instance is selected for termination. The lifecycle hook enables you to perform custom actions as Auto Scaling launches or terminates instances. To perform these actions automatically, you can use Lambda and EC2 Run Command, which lets you avoid installing additional software and rely entirely on AWS resources.

For example, when an instance is marked for termination, Amazon CloudWatch Events can trigger an action in response. That action can be a Lambda function that executes a remote command on the instance and uploads your logs to your S3 bucket.

EC2 Run Command enables you to run remote scripts through the agent running within the instance. You use this feature to back up the instance logs and to complete the lifecycle hook so that the instance is terminated.

The example provided in this post works precisely this way. Lambda gathers the instance ID from CloudWatch Events and then triggers a remote command to back up the instance logs.

Architecture Graph

 

Set up the environment

Make sure that you have the latest version of the AWS CLI installed locally. For more information, see Getting Set Up with the AWS Command Line Interface.

Step 1 – Create an SNS topic to receive the result of the backup

In this step, you create an Amazon SNS topic in the region in which you run your Auto Scaling group. This topic allows EC2 Run Command to send you the outcome of the backup. The output of the aws sns create-topic command includes the topic ARN. Save the ARN, as you need it for future steps.

aws sns create-topic --name backupoutcome

Now subscribe your email address as the endpoint for SNS to receive messages.

aws sns subscribe --topic-arn <enter-your-sns-arn-here> --protocol email --notification-endpoint <your_email>

Step 2 – Create an IAM role for your instances and your Lambda function

In this step, you use the AWS console to create the AWS Identity and Access Management (IAM) role for your instances and Lambda to enable them to run the SSM agent, upload your files to your S3 bucket, and complete the lifecycle hook.

First, you need to create a custom policy to allow your instances and Lambda function to complete lifecycle hooks and publish to the SNS topic set up in Step 1.

  1. Log into the IAM console.
  2. Choose Policies, Create Policy
  3. For Create Your Own Policy, choose Select.
  4. For Policy Name, type “ASGBackupPolicy”.
  5. For Policy Document, paste the following policy, which allows completing a lifecycle hook and publishing to the SNS topic:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:CompleteLifecycleAction",
        "sns:Publish"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
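
If you prefer the AWS CLI, you could create the same customer managed policy with the following command (a sketch that assumes the JSON above is saved locally as asg-backup-policy.json):

aws iam create-policy --policy-name ASGBackupPolicy --policy-document file://asg-backup-policy.json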

Create the role for EC2.

  1. In the left navigation pane, choose Roles, Create New Role.
  2. For Role Name, type “instance-role” and choose Next Step.
  3. Choose Amazon EC2 and choose Next Step.
  4. Add the policies AmazonEC2RoleforSSM and ASGBackupPolicy.
  5. Choose Next Step, Create Role.

Create the role for the Lambda function.

  1. In the left navigation pane, choose Roles, Create New Role.
  2. For Role Name, type “lambda-role” and choose Next Step.
  3. Choose AWS Lambda and choose Next Step.
  4. Add the policies AmazonSSMFullAccess, ASGBackupPolicy, and AWSLambdaBasicExecutionRole.
  5. Choose Next Step, Create Role.

Step 3 – Create an Auto Scaling group and configure the lifecycle hook

In this step, you create the Auto Scaling group and configure the lifecycle hook.

  1. Log into the EC2 console.
  2. Choose Launch Configurations, Create launch configuration.
  3. Select the latest Amazon Linux AMI and whatever instance type you prefer, and choose Next: Configuration details.
  4. For Name, type “ASGBackupLaunchConfiguration”.
  5. For IAM role, choose “instance-role” and expand Advanced Details.
  6. For User data, add the following lines to install and launch the SSM agent at instance boot:
    #!/bin/bash
    sudo yum install amazon-ssm-agent -y
    sudo /sbin/start amazon-ssm-agent
  7. Choose Skip to review, Create launch configuration, select your key pair, and then choose Create launch configuration.
  8. Choose Create an Auto Scaling group using this launch configuration.
  9. For Group name, type “ASGBackup”.
  10. Select your VPC and at least one subnet and then choose Next: Configuration scaling policies, Review, and Create Auto Scaling group.

Your Auto Scaling group is now created and you need to add the lifecycle hook named “ASGBackup” by using the AWS CLI:

aws autoscaling put-lifecycle-hook --lifecycle-hook-name ASGBackup --auto-scaling-group-name ASGBackup --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING --heartbeat-timeout 3600

Step 4 – Create an S3 bucket for files

Create an S3 bucket where your data will be saved, or use an existing one. To create a new one, you can use this AWS CLI command:

aws s3api create-bucket --bucket <your_bucket_name>

Step 5 – Create the SSM document

The following JSON document archives the files in “BACKUPDIRECTORY” and then copies them to your S3 bucket “S3BUCKET”. Every time the command finishes, it sends an SNS message to the topic specified by the “SNSTARGET” variable and completes the lifecycle hook.

In your JSON document, you need to make a few changes according to your environment:

Auto Scaling group name: "ASGNAME='ASGBackup'"
Lifecycle hook name: "LIFECYCLEHOOKNAME='ASGBackup'"
Directory to back up: "BACKUPDIRECTORY='/var/log'"
S3 bucket: "S3BUCKET='<your_bucket_name>'"
SNS target: "SNSTARGET='arn:aws:sns:'${REGION}':<your_account_id>:<your_sns_backupoutcome_topic>'"

Here is the document:

{
  "schemaVersion": "1.2",
  "description": "Backup logs to S3",
  "parameters": {},
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": [
            "",
            "ASGNAME='ASGBackup'",
            "LIFECYCLEHOOKNAME='ASGBackup'",
            "BACKUPDIRECTORY='/var/log'",
            "S3BUCKET='<your_bucket_name>'",
            "SNSTARGET='arn:aws:sns:'${REGION}':<your_account_id>:<your_sns_ backupoutcome_topic>'",           
            "INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)",
            "REGION=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)",
            "REGION=${REGION::-1}",
            "HOOKRESULT='CONTINUE'",
            "MESSAGE=''",
            "",
            "tar -cf /tmp/${INSTANCEID}.tar $BACKUPDIRECTORY &> /tmp/backup",
            "if [ $? -ne 0 ]",
            "then",
            "   MESSAGE=$(cat /tmp/backup)",
            "else",
            "   aws s3 cp /tmp/${INSTANCEID}.tar s3://${S3BUCKET}/${INSTANCEID}/ &> /tmp/backup",
            "       MESSAGE=$(cat /tmp/backup)",
            "fi",
            "",
            "aws sns publish --subject 'ASG Backup' --message \"$MESSAGE\"  --target-arn ${SNSTARGET} --region ${REGION}",
            "aws autoscaling complete-lifecycle-action --lifecycle-hook-name ${LIFECYCLEHOOKNAME} --auto-scaling-group-name ${ASGNAME} --lifecycle-action-result ${HOOKRESULT} --instance-id ${INSTANCEID}  --region ${REGION}"
          ]
        }
      ]
    }
  }
}
  1. Log into the EC2 console.
  2. Choose Command History, Documents, Create document.
  3. For Document name, enter “ASGLogBackup”.
  4. For Content, add the above JSON, modified for your environment.
  5. Choose Create document.
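
Alternatively, you could register the document from the AWS CLI instead of the console (a sketch that assumes the JSON above is saved locally as asg-log-backup.json):

aws ssm create-document --name "ASGLogBackup" --content file://asg-log-backup.json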

Step 6 – Create the Lambda function

The Lambda function uses modules included in the Python 2.7 standard library and the AWS SDK for Python (boto3), which is preinstalled in the Lambda execution environment. The function code performs the following (a minimal sketch of such a handler appears after this list):

  • Checks to see whether the SSM document exists. This document is the script that your instance runs.
  • Sends the command to the instance that is being terminated, checks the status of the EC2 Run Command invocation, and, if the invocation fails, completes the lifecycle hook so that termination can proceed.
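
The following is a minimal sketch of such a handler, not the full function from the repository; it omits the status polling the full function performs, and it assumes the SSM document from Step 5 is named ASGLogBackup and that the event fields follow the Auto Scaling lifecycle event format:

import boto3

ssm = boto3.client('ssm')
autoscaling = boto3.client('autoscaling')

DOCUMENT_NAME = 'ASGLogBackup'  # the SSM document created in Step 5

def lambda_handler(event, context):
    # CloudWatch Events delivers the lifecycle details in event['detail']
    detail = event['detail']
    instance_id = detail['EC2InstanceId']
    try:
        # Raises an exception if the document does not exist
        ssm.get_document(Name=DOCUMENT_NAME)
        # Run the backup script on the instance being terminated
        ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName=DOCUMENT_NAME,
            TimeoutSeconds=120
        )
    except Exception as e:
        # If the command cannot be sent, complete the hook so the
        # instance is not left waiting until the heartbeat times out
        print('Run Command failed: %s' % e)
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail['LifecycleHookName'],
            AutoScalingGroupName=detail['AutoScalingGroupName'],
            LifecycleActionResult='ABANDON',
            InstanceId=instance_id
        )

On success, the script running on the instance (the SSM document above) is what completes the lifecycle hook.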
  1. Log in to the Lambda console.
  2. Choose Create Lambda function.
  3. For Select blueprint, choose Skip, Next.
  4. For Name, type “lambda_backup” and for Runtime, choose Python 2.7.
  5. For Lambda function code, paste the Lambda function from the GitHub repository.
  6. Choose Choose an existing role.
  7. For Role, choose lambda-role (previously created).
  8. In Advanced settings, configure Timeout for 5 minutes.
  9. Choose Next, Create function.

Your Lambda function is now created.

Step 7 – Configure CloudWatch Events to trigger the Lambda function

Create an event rule to trigger the Lambda function.

  1. Log in to the CloudWatch console.
  2. Choose Events, Create rule.
  3. For Select event source, choose Auto Scaling.
  4. For Specific instance event(s), choose EC2 Instance-terminate Lifecycle Action and for Specific group name(s), choose ASGBackup.
  5. For Targets, choose Lambda function and for Function, select the Lambda function that you previously created, “lambda_backup”.
  6. Choose Configure details.
  7. In Rule definition, type a name and choose Create rule.

Your event rule is now created; whenever your Auto Scaling group “ASGBackup” starts terminating an instance, your Lambda function will be triggered.
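
Under the covers, the console builds an event pattern for the rule. It should be roughly equivalent to the following (shown for reference; the exact pattern the console generates may differ slightly):

{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance-terminate Lifecycle Action"],
  "detail": {
    "AutoScalingGroupName": ["ASGBackup"]
  }
}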

Step 8 – Test the environment

From the Auto Scaling console, you can set the desired capacity and the minimum size of your Auto Scaling group to 0 so that the running instance starts to be terminated. After the termination starts, you can see on the Instances tab that the instance lifecycle status changes to Termination:Wait. While the instance is in this state, the Lambda function and the command are executed.

You can review your CloudWatch logs to see the Lambda output. In the CloudWatch console, choose Logs and /aws/lambda/lambda_backup to see the execution output.

You can go to your S3 bucket and check that the files were uploaded. You can also check Command History in the EC2 console to see if the command was executed correctly.

Conclusion

Now that you’ve seen an example of how you can combine various AWS services to automate the backup of your files by relying only on AWS services, I hope you are inspired to create your own solutions.

Auto Scaling lifecycle hooks, Lambda, and EC2 Run Command are powerful tools because they allow you to respond to Auto Scaling events automatically, such as when an instance is terminated. However, you can also use the same idea for other solutions like exiting processes gracefully before an instance is terminated, deregistering your instance from service managers, and scaling stateful services by moving state to other running instances. The possible use cases are only limited by your imagination.

I’ve open-sourced the code in this example in the awslabs GitHub repo; I can’t wait to see your feedback and your ideas about how to improve the solution.

AWS Week in Review – November 7, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-november-7-2016/

Let’s take a quick look at what happened in AWS-land last week. Thanks are due to the 16 internal and external contributors who submitted pull requests!

Monday

November 7

Tuesday

November 8

Wednesday

November 9

Thursday

November 10

Friday

November 11

Saturday

November 12

Sunday

November 13

New & Notable Open Source

  • Sippy Cup is a Python nanoframework for AWS Lambda and API Gateway.
  • Yesterdaytabase is a Python tool for constantly refreshing data in your staging and test environments with Lambda and CloudFormation.
  • ebs-snapshot-lambda is a Lambda function to snapshot EBS volumes and purge old snapshots.
  • examples is a collection of boilerplates and examples of serverless architectures built with the Serverless Framework and Lambda.
  • ecs-deploy-cli is a simple and easy way to deploy tasks and update services in AWS ECS.
  • Comments-Showcase is a serverless comment webapp that uses API Gateway, Lambda, DynamoDB, and IoT.
  • serverless-offline emulates Lambda and API Gateway locally for development of Serverless projects.
  • aws-sign-web is a JavaScript implementation of AWS Signature v4 for use within web browsers.
  • Zappa implements serverless Django on Lambda and API Gateway.
  • awsping is a console tool to check latency to AWS regions.

New SlideShare Presentations

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

On Trump

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2016/11/on-trump.html

I dislike commenting on politics. I think it’s difficult to contribute any novel thought – and in today’s hyper-polarized world, stating an unpopular or half-baked opinion is a recipe for losing friends or worse. Still, with many of my colleagues expressing horror and disbelief over what happened on Tuesday night, I reluctantly decided to jot down my thoughts.

I think that in trying to explain away the meteoric rise of Mr. Trump, many of the mainstream commentators have focused on two phenomena. Firstly, they singled out the emergence of “filter bubbles” – a mechanism that allows people to reinforce their own biases and shields them from opposing views. Secondly, they implicated the dark undercurrents of racism, misogynism, or xenophobia that still permeate some corners of our society. From that ugly place, the connection to Mr. Trump’s foul-mouthed populism was not hard to make; his despicable bragging about women aside, to his foes, even an accidental hand gesture or an inane 4chan frog meme was proof enough. Once we crossed this line, the election was no longer about economic policy, the environment, or the like; it was an existential battle for equality and inclusiveness against the forces of evil that lurk in our midst. Not a day went by without a comparison between Mr. Trump and Adolf Hitler in the press. As for the moderate voters, the pundits had an explanation, too: the right-wing filter bubble must have clouded their judgment and created a false sense of equivalency between a horrid, conspiracy-peddling madman and our cozy, liberal status quo.

Now, before I offer my take, let me be clear that I do not wish to dismiss the legitimate concerns about the overtones of Mr. Trump’s campaign. Nor do I desire to downplay the scale of discrimination and hatred that the societies around the world are still grappling with, or the potential that the new administration could make it worse. But I found the aforementioned explanation of Mr. Trump’s unexpected victory to be unsatisfying in many ways. Ultimately, we all live in bubbles and we all have biases; in that regard, not much sets CNN apart from Fox News, Vox from National Review, or The Huffington Post from Breitbart. The reason why most of us would trust one and despise the other is that we instinctively recognize our own biases as more benign. After all, in the progressive world, we are fighting for an inclusive society that gives all people a fair chance to succeed. As for the other side? They seem like a bizarre, cartoonishly evil coalition of dimwits, racists, homophobes, and the ultra-rich. We even have serious scientific studies to back that up; their authors breathlessly proclaim that the conservative brain is inferior to the progressive brain. Unlike the conservatives, we believe in science, so we hit the “like” button and retweet the news.

But here’s the thing: I know quite a few conservatives, many of whom have probably voted for Mr. Trump – and they are about as smart, as informed, and as compassionate as my progressive friends. I think that the disconnect between the worldviews stems from something else: if you are a well-off person in a coastal city, you know people who are immigrants or who belong to other minorities, making you acutely attuned to their plight; but you may lack the same, deeply personal connection to – say – the situation of the lower middle class in the Midwest. You might have seen surprising charts or read a touching story in Mother Jones a few years back, but it’s hard to think of them as individuals; they are more of a socioeconomic obstacle, a problem to be solved. The same goes for our understanding of immigration or globalization: these phenomena make our high-tech hubs more prosperous and more open; the externalities of our policies, if any, are just an abstract price that somebody else ought to bear for doing what’s morally right. And so, when Mr. Trump promises to temporarily ban travel from Muslim countries linked to terrorism or anti-American sentiments, we (rightly) gasp in disbelief; but when Mr. Obama paints an insulting caricature of rural voters as simpletons who “cling to guns or religion or antipathy to people who aren’t like them”, we smile and praise him for his wit, not understanding how the other side could be so offended by the truth. Similarly, when Mrs. Clinton chuckles while saying “we are going to put a lot of coal miners out of business” to a cheering crowd, the scene does not strike us as thoughtless, offensive, or in poor taste. Maybe we will read a story about the miners in Mother Jones some day?

Of course, liberals take pride in caring for the common folk, but I suspect that their leaders’ attempts to reach out to the underprivileged workers in the “flyover states” often come across as ham-fisted and insincere. The establishment schools the voters about the inevitability of globalization, as if it were some cosmic imperative; they are told that to reject the premise would not just be wrong – but that it’d be a product of a diseased, nativist mind. They hear that the factories simply had to go to China or Mexico, and the goods just have to come back duty-free – all so that our complex, interconnected world can be a happier place. The workers are promised entitlements, but it stands to reason that they want dignity and hope for their children, not a lifetime on food stamps. The idle, academic debates about automation, post-scarcity societies, and Universal Basic Income probably come across as far-fetched and self-congratulatory, too.

The discourse is poisoned by cognitive biases in many other ways. The liberal media keeps writing about the unaccountable right-wing oligarchs who bankroll the conservative movement and supposedly poison people’s minds – but they offer nothing but praise when progressive causes are being bankrolled by Mr. Soros or Mr. Bloomberg. They claim that the conservatives represent “post-truth” politics – but their fact-checkers shoot down conservative claims over fairly inconsequential mistakes, while giving their favored politicians a pass on half-true platitudes about immigration, gun control, crime, or the sources of inequality. Mr. Obama sneers at the conservative bias of Fox News, but has no concern with the striking tilt to the left in the academia or in the mainstream press. The Economist finds it appropriate to refer to Trump supporters as “trumpkins” in print – but it would be unthinkable for them to refer to the fans of Mrs. Clinton using any sort of a mocking term. The pundits ponder the bold artistic statement made by the nude statues of the Republican nominee – but they would be disgusted if a conservative sculptor portrayed the Democratic counterpart in a similarly unflattering light. The commentators on MSNBC read into every violent incident at Trump rallies – but when a random group of BLM protesters starts chanting about killing police officers, we all agree it would not be fair to cast the entire movement in a negative light.

Most progressives are either oblivious to these biases, or dismiss them as a harmless casualty of fighting the good fight. Perhaps so – and it is not my intent to imply equivalency between the causes of the left and of the right. But in the end, I suspect that the liberal echo chamber contributed to the election of Mr. Trump far more than anything that ever transpired on the right. It marginalized and excluded legitimate but alien socioeconomic concerns from the mainstream political discourse, binning them with truly bigoted and unintelligent speech – and leaving the “flyover underclass” no option other than to revolt. And it wasn’t just a revolt of the awful fringes. On the right, we had Mr. Trump – a clumsy outsider who eschews many of the core tenets of the conservative platform, and who convincingly represents neither the neoconservative establishment of the Bush era nor the Bible-thumping religious right of the Tea Party. On the left, we had Mr. Sanders – an unaccomplished Senator who offered simplistic but moving slogans, who painted the accumulation of wealth as the source of our ills, and who promised to mold the United States into an idyllic version of the social democracies of Europe – supposedly governed by the workers, and not by the exploitative elites.

I think that people rallied behind Mr. Sanders and Mr. Trump not because they particularly loved the candidates or took all their promises seriously – but because they had no other credible herald for their cause. When the mainstream media derided their rebellion and the left simply laughed it off, it only served as a battle cry. When tens of millions of Trump supporters were labeled as xenophobic and sexist deplorables who deserved no place in politics, it only pushed more moderates toward the fringe. Suddenly, rational people could see themselves voting for a politically inexperienced and brash billionaire – a guy who talks about cutting taxes for the rich, who wants to cozy up to Russia, and whose VP pick previously wasn’t so sure about LGBT rights. I think it all happened not because of Mr. Trump’s character traits or thoughtful political positions, and not because half of the country hates women and minorities. He won because he was the only one to promise to “drain the swamp” – and to promise hope, not handouts, to the lower middle class.

There is a risk that this election will prove to be a step back for civil rights, or that Mr. Trump’s bold but completely untested economic policies will leave the world worse off; while not certain, it pains me to even contemplate this possibility. When we see injustice, we should fight tooth and nail. But for now, I am not swayed by the preemptively apocalyptic narrative on the left. Perhaps naively, I have faith in the benevolence of our compatriots and the strength of the institutions of – as cheesy as it sounds – one of the great nations of the world.

How to Assign Permissions Using New AWS Managed Policies for Job Functions

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/how-to-assign-permissions-using-new-aws-managed-policies-for-job-functions/

Today, AWS Identity and Access Management (IAM) made 10 AWS managed policies available that align with common job functions in organizations. AWS managed policies enable you to set permissions using policies that AWS creates and manages, and with a single AWS managed policy for job functions, you can grant the permissions necessary for network or database administrators, for example.

You can attach multiple AWS managed policies to your users, roles, or groups if they span multiple job functions. As with all AWS managed policies, AWS will keep these policies up to date as we introduce new services or actions. You can use any AWS managed policy as a template or starting point for your own custom policy if the policy does not fully meet your needs (however, AWS will not automatically update any of your custom policies). In this blog post, I introduce the new AWS managed policies for job functions and show you how to use them to assign permissions.

The following list describes the new AWS managed policies for job functions.

Administrator: This policy grants full access to all AWS services.
Billing: This policy grants permissions for billing and cost management. These permissions include viewing and modifying budgets and payment methods. An additional step is required to access the AWS Billing and Cost Management pages after assigning this policy.
Data Scientist: This policy grants permissions for data analytics and analysis. Access to the following AWS services is a part of this policy: Amazon Elastic MapReduce, Amazon Redshift, Amazon Kinesis, Amazon Machine Learning, AWS Data Pipeline, Amazon S3, and Amazon Elastic File System. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
Database Administrator: This policy grants permissions to all AWS database services. Access to the following AWS services is a part of this policy: Amazon DynamoDB, Amazon ElastiCache, Amazon RDS, and Amazon Redshift. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
Developer Power User: This policy grants full access to all AWS services except for IAM.
Network Administrator: This policy grants permissions required for setting up and configuring network resources. Access to the following AWS services is included in this policy: Amazon Route 53, Route 53 Domains, Amazon VPC, and AWS Direct Connect. This policy grants access to actions that require delegating permissions to CloudWatch Logs. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for this service.
Security Auditor: This policy grants permissions required for configuring security settings and for monitoring events and logs in the account.
Support User: This policy grants permissions to troubleshoot and resolve issues in an AWS account. This policy also enables the user to contact AWS Support to create and manage cases.
System Administrator: This policy grants permissions needed to support system and development operations. Access to the following AWS services is included in this policy: AWS CloudTrail, Amazon CloudWatch, CodeCommit, CodeDeploy, AWS Config, AWS Directory Service, EC2, IAM, AWS KMS, Lambda, RDS, Route 53, S3, SES, SQS, AWS Trusted Advisor, and Amazon VPC. This policy grants access to actions that require delegating permissions to EC2, CloudWatch, Lambda, and RDS. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
View Only User: This policy grants permissions to view existing resources across all AWS services within an account.

Some of the policies in the preceding list enable you to take advantage of additional features that are optional. These policies grant access to iam:PassRole, which lets a user pass a role that delegates permissions to an AWS service so the service can carry out actions on your behalf. For example, the Network Administrator policy passes a role to CloudWatch's flow-logs-vpc so that a network administrator can log and capture IP traffic for all the Amazon VPCs they create. You must create IAM service roles to take advantage of the optional features. To follow security best practices, the policies already include permissions to pass only the optional service roles that follow a naming convention. This avoids escalating or granting unnecessary permissions if there are other service roles in the AWS account. If your users require the optional service roles, you must create a role that follows the naming conventions specified in the policy and then grant permissions to the role.

For example, your system administrator may want to run an application on an EC2 instance, which requires passing a role to Amazon EC2. The system administrator policy already has permissions to pass a role named ec2-sysadmin-*. When you create a role called ec2-sysadmin-example-application, for example, and assign the necessary permissions to the role, the role is passed automatically to the service and the system administrator can start using the features. The documentation summarizes each of the use cases for each job function that requires delegating permissions to another AWS service.
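
As a sketch of what that setup might look like from the CLI (the file name and role name here are illustrative, and EC2 additionally requires an instance profile that wraps the role):

aws iam create-role --role-name ec2-sysadmin-example-application --assume-role-policy-document file://ec2-trust-policy.json
aws iam create-instance-profile --instance-profile-name ec2-sysadmin-example-application
aws iam add-role-to-instance-profile --instance-profile-name ec2-sysadmin-example-application --role-name ec2-sysadmin-example-application

where ec2-trust-policy.json allows the EC2 service to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}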

How to assign permissions by using an AWS managed policy for job functions

Let’s say that your company is new to AWS, and you have an employee, Alice, who is a database administrator. You want Alice to be able to use and manage all the database services while not giving her full administrative permissions. In this scenario, you can use the Database Administrator AWS managed policy. This policy grants view, read, write, and admin permissions for RDS, DynamoDB, Amazon Redshift, ElastiCache, and other AWS services a database administrator might need.

The Database Administrator policy allows passing several service roles for various use cases. The following excerpt from the policy shows the service roles that are applicable to a database administrator.

{
    "Effect": "Allow",
    "Action": [
        "iam:GetRole",
        "iam:PassRole"
    ],
    "Resource": [
        "arn:aws:iam::*:role/rds-monitoring-role",
        "arn:aws:iam::*:role/rdbms-lambda-access",
        "arn:aws:iam::*:role/lambda_exec_role",
        "arn:aws:iam::*:role/lambda-dynamodb-*",
        "arn:aws:iam::*:role/lambda-vpc-execution-role",
        "arn:aws:iam::*:role/DataPipelineDefaultRole",
        "arn:aws:iam::*:role/DataPipelineDefaultResourceRole"
    ]
}

However, in order for user Alice to be able to leverage any of the features that require another service, you must create a service role and grant it permissions. In this scenario, Alice wants only to monitor RDS databases. To enable Alice to monitor RDS databases, you must create a role called rds-monitoring-role and assign the necessary permissions to the role.

Steps to assign permissions to the Database Administrator policy in the IAM console

  1. Sign in to the IAM console.
  2. In the left pane, choose Policies and type database in the Filter box.
    Screenshot of typing "database" in filter box
  3. Choose the Database Administrator policy, and then choose Attach in the Policy Actions drop-down menu.
    Screenshot of choosing "Attach" from drop-down list
  4. Choose the user (in this case, I choose Alice), and then choose Attach Policy.
    Screenshot of selecting the user
  5. User Alice now has Database Administrator permissions. However, for Alice to monitor RDS databases, you must create a role called rds-monitoring-role. To do this, in the left pane, choose Roles, and then choose Create New Role.
  6. For Role Name, type rds-monitoring-role to match the name specified in the Database Administrator policy. Choose Next Step.
    Screenshot of creating the role called rds-monitoring-role
  7. In the AWS Service Roles section, scroll down and choose Amazon RDS Role for Enhanced Monitoring.
    Screenshot of selecting "Amazon RDS Role for Enhanced Monitoring"
  8. Choose the AmazonRDSEnhancedMonitoringRole policy and then choose Select.
    Screenshot of choosing AmazonRDSEnhancedMonitoringRole
  9. After reviewing the role details, choose Create Role.

User Alice now has Database Administrator permissions and can monitor RDS databases. To use other roles with the Database Administrator AWS managed policy, see the documentation.
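
If you manage IAM from the command line, the equivalent setup could look roughly like the following sketch. The policy ARNs shown are the expected ARNs for the job-function and service-role managed policies, and the trust principal is the one used by RDS Enhanced Monitoring; verify both against the current documentation:

aws iam attach-user-policy --user-name Alice --policy-arn arn:aws:iam::aws:policy/job-function/DatabaseAdministrator
aws iam create-role --role-name rds-monitoring-role --assume-role-policy-document file://rds-monitoring-trust.json
aws iam attach-role-policy --role-name rds-monitoring-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole

where rds-monitoring-trust.json contains:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "monitoring.rds.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}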

To learn more about AWS managed policies for job functions, see the IAM documentation. If you have comments about this post, submit them in the “Comments” section below. If you have questions about these new policies, please start a new thread on the IAM forum.

– Joy

Build Serverless Applications in AWS Mobile Hub with New Cloud Logic and User Sign-in Features

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/build-serverless-applications-in-aws-mobile-hub/

Last month, we showed you how to power a mobile back end using a serverless stack, with your business logic in AWS Lambda and the resulting cloud APIs exposed to your app through Amazon API Gateway. This pattern enables you to create and test mobile cloud APIs backed by business logic functions you develop, all without managing servers or paying for unused capacity. Further, you can share your business logic across your iOS and Android apps.

Today, AWS Mobile Hub is announcing a new Cloud Logic feature that makes it much easier for mobile app developers to implement this pattern, integrate their mobile apps with the resulting cloud APIs, and connect the business logic functions to a range of AWS services or on-premises enterprise resources. The feature automatically applies access control to the cloud APIs in API Gateway, making it easy to limit access to app users who have authenticated with any of the user sign-in options in Mobile Hub, including two new options that are also launching today:

  • Fully managed email- and password-based app sign-in
  • SAML-based app sign-in

In this post, we show how you can build a secure mobile back end in just a few minutes using a serverless stack.

 

Get started with AWS Mobile Hub

We launched Mobile Hub last year to simplify the process of building, testing, and monitoring mobile applications that use one or more AWS services. Use the integrated Mobile Hub console to choose the features you want to include in your app.

With Mobile Hub, you don’t have to be an AWS expert to begin using its powerful back-end features in your app. Mobile Hub then provisions and configures the necessary AWS services on your behalf and creates a working quickstart app for you. This includes IAM access control policies created to save you the effort of provisioning security policies for resources such as Amazon DynamoDB tables and associating those resources with Amazon Cognito.

Get started with Mobile Hub by navigating to it in the AWS console and choosing your features.

serverless-apps-mobile-hub-console

 

New user sign-in options

We are happy to announce that we now support two new user sign-in options that help you authenticate your app users and provide secure access control to AWS resources.

The Email and Password option lets you easily provision a fully managed user directory for your app in Amazon Cognito, with sign-in parameters that you configure. The SAML Federation option enables you to authenticate app users using existing credentials in your SAML-enabled identity provider, such as Active Directory Federation Services (ADFS). Mobile Hub also provides ready-to-use app flows for sign-up, sign-in, and password recovery that you can add to your own app.

Navigate to the User Sign-in tile in Mobile Hub to get started and choose your sign-in providers.

serverless-apps-mobile-hub-sign-in

Read more about the user sign-in feature in this blog and in the Mobile Hub documentation.

 

Enhanced Cloud Logic

We have enhanced the Cloud Logic feature (the right-hand tile in the top row of the above Mobile Hub screenshot), and you can now easily spin up a serverless stack. This enables you to create and test mobile cloud APIs connected to business logic functions that you develop. Previously, you could use Mobile Hub to integrate existing Lambda functions with your mobile app. With the enhanced Cloud Logic feature, you can now easily create Lambda functions, as well as API Gateway endpoints that you invoke from your mobile apps.

The feature automatically applies access control to the resulting REST APIs in API Gateway, making it easy to limit access to users who have authenticated with any of the user sign-in capabilities in Mobile Hub. Mobile Hub also allows you to test your APIs within your project and set up the permissions that your Lambda function needs for connecting to software resources behind a VPC (e.g., business applications or databases), within AWS or on-premises. Finally, you can integrate your mobile app with your cloud APIs using either the quickstart app (as an example) or the mobile app SDK; both are custom-generated to match your APIs. Here’s how it comes together:

serverless-apps-mobile-hub-enhanced-cloud-logic

Create an API

After you have chosen a sign-in provider, choose Configure more features. Navigate to Cloud Logic in your project and choose Create a new API. You can choose to limit access to your Cloud Logic API to only signed-in app users:

serverless-apps-mobile-hub-cloud-logic-api

Under the covers, this creates an IAM role for the API that limits access to authenticated, or signed-in, users.

serverless-apps-mobile-hub-cloud-logic-auth
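
For reference, an IAM role restricted to signed-in users typically trusts Amazon Cognito with a condition on the authenticated amr value. The following trust policy is a sketch of that pattern, not the exact document Mobile Hub generates, and <identity-pool-id> is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "cognito-identity.amazonaws.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": { "cognito-identity.amazonaws.com:aud": "<identity-pool-id>" },
        "ForAnyValue:StringLike": { "cognito-identity.amazonaws.com:amr": "authenticated" }
      }
    }
  ]
}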

Quickstart app

The resulting quickstart app generated by Mobile Hub allows you to test your APIs and learn how to develop a mobile UX that invokes your APIs:

serverless-apps-mobile-hub-quickstart-app

Multi-stage rollouts

To make it easy to deploy and test your Lambda function quickly, Mobile Hub provisions both your API and the Lambda function in a Development stage, for instance, https://<yoururl>/Development. This is mapped to a Lambda alias of the same name, Development. Lambda functions are versioned, and this alias always points to the latest version of the Lambda function. This way, changes you make to your Lambda function are immediately reflected when you invoke the corresponding API in API Gateway.

When you are ready to deploy to production, you can create more stages in API Gateway, such as Production. This gives you an endpoint such as https://<yoururl>/Production. Then, create an alias of the same name in Lambda but point this alias to a specific version of your Lambda function (instead of $LATEST). This way, your Production endpoint always points to a known version of your Lambda function.
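
If you script this step, the version-and-alias handling could look roughly like the following AWS CLI sketch, where <your-function> and the version numbers are placeholders:

aws lambda publish-version --function-name <your-function>
aws lambda create-alias --function-name <your-function> --name Production --function-version <published-version>
aws lambda update-alias --function-name <your-function> --name Production --function-version <new-version>

Publishing creates an immutable version, and the Production alias is then moved between versions explicitly, while the Development alias continues to track the latest code.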

 

Summary

In this post, we demonstrated how to use Mobile Hub to create a secure serverless back end for your mobile app in minutes using three new features: enhanced Cloud Logic, email- and password-based app sign-in, and SAML-based app sign-in. While it was just a few steps for the developer, Mobile Hub performed several underlying steps automatically (provisioning back-end resources, generating a sample app, and configuring IAM roles and sign-in providers) so you can focus your time on the unique value in your app. Get started today with AWS Mobile Hub.

Running Swift Web Applications with Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/running-swift-web-applications-with-amazon-ecs/

This is a guest post from Asif Khan about how to run Swift applications on Amazon ECS.

—–

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. A goal for Swift is to be the best language for uses ranging from systems programming, to mobile and desktop applications, scaling up to cloud services. As a developer, I am thrilled with the possibility of a homogeneous application stack and being able to leverage the benefits of Swift both on the client and server side. My code becomes more concise, and is more tightly integrated to the iOS environment.

In this post, I provide a walkthrough on building a web application using Swift and deploying it to Amazon ECS with an Ubuntu Linux image and Amazon ECR.

Overview of container deployment

Swift provides an Ubuntu version of the compiler that you can use. You still need a web server, a container strategy, and an automated cluster management with automatic scaling for traffic peaks.

There are some decisions to make in your approach to deploy services to the cloud:

  • HTTP server
    Choose an HTTP server that supports Swift. I found Vapor to be the easiest. Vapor is a type-safe web framework for Swift 3.0 that works on iOS, macOS, and Ubuntu, and it makes deploying a Swift application simple. Vapor comes with a CLI that helps you create new Vapor applications, generate Xcode projects and build them, and deploy your applications to Heroku or Docker. Another Swift web server is Perfect. In this post, I use Vapor because I found it easier to get started with.

Tip: Join the Vapor slack group; it is super helpful. I got answers on a long weekend which was super cool.

  • Container model
    Docker is an open-source technology that allows you to build, run, test, and deploy distributed applications inside software containers. It allows you to package a piece of software in a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc. Docker enables you to quickly, reliably, and consistently deploy applications regardless of environment.
    In this post, you’ll use Docker, but if you prefer Heroku, Vapor is compatible with Heroku too.
  • Image repository
    After you choose Docker as the container deployment unit, you need to store your Docker image in a repository to automate the deployment at scale. Amazon ECR is a fully-managed Docker registry and you can employ AWS IAM policies to secure your repositories.
  • Cluster management solution
    Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.

With ECS, it is very easy to adopt containers as a building block for your applications (distributed or otherwise) by skipping the need for you to install, operate, and scale your own cluster infrastructure. Using Docker containers within ECS provides flexibility to schedule long-running applications, services, and batch processes. ECS maintains application availability and allows you to scale containers.

To put it all together, you have your Swift web application running in an HTTP server (Vapor), deployed in containers (Docker), with images stored in a secure repository (ECR), and with automated cluster management (ECS) to scale horizontally.

Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions.
  2. Use the region selector in the navigation bar to choose the AWS Region where you want to deploy Swift web applications on AWS.
  3. Create a key pair in your preferred region.

Walkthrough

The following steps are required to set up your first web application written in Swift and deploy it to ECS:

  1. Download and launch an instance of the AWS CloudFormation template. The CloudFormation template installs Swift, Vapor, Docker, and the AWS CLI.
  2. SSH into the instance.
  3. Download the Vapor example code.
  4. Test the Vapor web application locally.
  5. Enhance the Vapor example code to include a new API.
  6. Push your code to a code repository.
  7. Create a Docker image of your code.
  8. Push your image to Amazon ECR.
  9. Deploy your Swift web application to Amazon ECS.

Detailed steps

  1. Download the CloudFormation template and spin up an EC2 instance. The CloudFormation template has Swift, Vapor, Docker, and Git installed and configured. To launch an instance, launch the CloudFormation template from here.
  2. SSH into your instance:
    ssh -i <your key pair file> <user name>@<instance public IP address>
  3. Download the Vapor example code – this code helps deploy the example you are using for your web application:
    git clone https://github.com/awslabs/ecs-swift-sample-app.git
  4. Test the Vapor application locally:
    1. Build a Vapor project:
      cd ~/ecs-swift-sample-app/example && \
      vapor build
    2. Run the Vapor project:
      vapor run serve --port=8080
    3. Validate that server is running (in a new terminal window):
      ssh -i <your key pair file> <user name>@<instance public IP address> curl localhost:8080
  5. Enhance the Vapor code:
    1. Follow the guide to add a new route to the sample application: https://Vapor.readme.io/docs/hello-world
    2. Test your web application locally:
      vapor run serve --port=8080
      curl http://localhost:8080/hello
  6. Commit your changes and push this change to your GitHub repository:
    git add --all
    git commit -m "<your commit message>"
    git push
  7. Build a new Docker image with your code:
    docker build -t swift-on-ecs \
    --build-arg SWIFT_VERSION=DEVELOPMENT-SNAPSHOT-2016-06-06-a \
    --build-arg REPO_CLONE_URL=<your repository clone URL> \
    ~/ecs-swift-sample-app/example
  8. Upload to ECR: Create an ECR repository and push the image following the steps in Getting Started with Amazon ECR. (An example AWS CLI command sequence for this step and the next is sketched after this list.)
  9. Create an ECS cluster and run tasks following the steps in Getting Started with Amazon ECS:
    1. Be sure to use the full registry/repository:tag naming for your ECR images when creating your task. For example, aws_account_id.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest.
    2. Ensure that you have port forwarding 8080 set up.
  10. You can now go to the container, get the public IP address, and try to access it to see the result.
    1. Open your running task and get the URL:
    2. Open the public URL in a browser:

Your first Swift web application is now running.
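
For reference, steps 8 and 9 can also be scripted with the AWS CLI. The following is a minimal sketch, assuming a repository and task definition named swift-on-ecs, a cluster named swift-cluster, a task definition file named task-definition.json that you write for your container, and the placeholder account ID 123456789012 in us-east-1; adjust all of these to your environment.

# Create a repository and authenticate the Docker client with ECR
aws ecr create-repository --repository-name swift-on-ecs
$(aws ecr get-login --region us-east-1)

# Tag and push the image built in step 7
docker tag swift-on-ecs:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/swift-on-ecs:latest

# Create a cluster, register the task definition, and run one task
aws ecs create-cluster --cluster-name swift-cluster
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs run-task --cluster swift-cluster --task-definition swift-on-ecs --count 1

The task definition itself (image URI, memory, and the port 8080 mapping from step 9) is yours to define; the console walkthroughs linked above remain the authoritative path.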

At this point, you can use ECS with Auto Scaling to scale your services and also monitor them using CloudWatch metrics and events.

Conclusion

If you want to leverage the benefits of Swift, you can use Vapor as the web container with Amazon ECS and Amazon ECR to deploy Swift web applications at scale and delegate the cluster management to Amazon ECS.

There are many interesting things you could do with Swift beyond this post. To learn more about Swift, see the additional Swift libraries and read the Swift documentation.

If you have questions or suggestions, please comment below.

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS CloudTrail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

From ELK Stack to EKK: Aggregating and Analyzing Apache Logs with Amazon Elasticsearch Service, Amazon Kinesis, and Kibana

Post Syndicated from Pubali Sen original https://aws.amazon.com/blogs/devops/from-elk-stack-to-ekk-aggregating-and-analyzing-apache-logs-with-amazon-elasticsearch-service-amazon-kinesis-and-kibana/

By Pubali Sen, Shankar Ramachandran

Log aggregation is critical to your operational infrastructure. A reliable, secure, and scalable log aggregation solution makes all the difference during a crunch-time debugging session.

In this post, we explore an alternative to the popular log aggregation solution, the ELK stack (Elasticsearch, Logstash, and Kibana): the EKK stack (Amazon Elasticsearch Service, Amazon Kinesis, and Kibana). The EKK solution eliminates the undifferentiated heavy lifting of deploying, managing, and scaling your log aggregation solution. With the EKK stack, you can focus on analyzing logs and debugging your application, instead of managing and scaling the system that aggregates the logs.

In this blog post, we describe how to use an EKK stack to monitor Apache logs. Let’s look at the components of the EKK solution.

Amazon Elasticsearch Service is a popular search and analytics engine that provides real-time application monitoring and log and clickstream analytics. For this post, you will store and index Apache logs in Amazon ES. As a managed service, Amazon ES is easy to deploy, operate, and scale in the AWS Cloud. Using a managed service also eliminates administrative overhead, like patch management, failure detection, node replacement, backing up, and monitoring. Because Amazon ES includes built-in integration with Kibana, it eliminates installing and configuring that platform. This simplifies your process further. For more information about Amazon ES, see the Amazon Elasticsearch Service detail page.

Amazon Kinesis Agent is an easy-to-install standalone Java software application that collects and sends data. The agent continuously monitors the Apache log file and ships new data to the delivery stream. This agent is also responsible for file rotation, checkpointing, retrying upon failures, and delivering the log data reliably and in a timely manner. For more information, see Writing to Amazon Kinesis Firehose Using Amazon Kinesis Agent or Amazon Kinesis Agent in GitHub.

Amazon Kinesis Firehose provides the easiest way to load streaming data into AWS. In this post, Firehose helps you capture and automatically load the streaming log data to Amazon ES and back it up in Amazon Simple Storage Service (Amazon S3). For more information, see the Amazon Kinesis Firehose detail page.

You’ll provision an EKK stack by using an AWS CloudFormation template. The template provisions an Apache web server and sends the Apache access logs to an Amazon ES cluster using Amazon Kinesis Agent and Firehose. You’ll back up the logs to an S3 bucket. To see the logs, you’ll leverage the Amazon ES Kibana endpoint.

By using the template, you can quickly complete the following tasks:

·      Provision an Amazon ES cluster.

·      Provision an Amazon Elastic Compute Cloud (Amazon EC2) instance.

·      Install Apache HTTP Server version 2.4.23.

·      Install the Amazon Kinesis Agent on the web server.

·      Provision an Elastic Load Balancing load balancer.

·      Create the Amazon ES index and the associated log mappings.

·      Create an Amazon Kinesis Firehose delivery stream.

·      Create all AWS Identity and Access Management (IAM) roles and policies. For example, the Firehose delivery stream backs up the Apache logs to an S3 bucket. This requires that the Firehose delivery stream be associated with a role that gives it permission to upload the logs to the correct S3 bucket. (A sketch of creating such a role by hand appears after this list.)

·      Configure Amazon CloudWatch Logs log streams and log groups for the Firehose delivery stream. This helps you to troubleshoot when the log events don’t reach their destination.
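
As an illustration of the IAM work the template does for you, here is a minimal sketch of creating a role that Firehose can assume to back up logs to an S3 bucket. The role name, bucket name, and trimmed-down permission set are assumptions for illustration only; the template creates the real roles and policies automatically.

# Trust policy: allow Amazon Kinesis Firehose to assume the role
cat > firehose-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "firehose.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name ekk-firehose-s3-role \
  --assume-role-policy-document file://firehose-trust.json

# Permissions policy: allow the delivery stream to write backups to the bucket
cat > firehose-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::my-ekk-backup-bucket", "arn:aws:s3:::my-ekk-backup-bucket/*"]
    }
  ]
}
EOF

aws iam put-role-policy --role-name ekk-firehose-s3-role \
  --policy-name ekk-firehose-s3-access \
  --policy-document file://firehose-s3-policy.json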

EKK Stack Architecture
The following architecture diagram shows how an EKK stack works.

Diagram of the EKK stack architecture

Prerequisites
To build the EKK stack, you must have the following:

·      An Amazon EC2 key pair in the US West (Oregon) Region. If you don’t have one, create one.

·      An S3 bucket in the US West (Oregon) Region. If you don’t have one, create one.

·      A default VPC in the US West (Oregon) Region. If you have deleted the default VPC, request one.

·      Administrator-level permissions in IAM to enable Amazon ES and Amazon S3 to receive the log data from the EC2 instance through Firehose.

Getting Started
Begin by launching the AWS CloudFormation template to create the stack.

1.     In the AWS CloudFormation console, choose Launch Stack to launch the AWS CloudFormation template. Make sure that you are in the US West (Oregon) Region.

Note: If you want to download the template to your computer and then upload it to AWS CloudFormation, you can do so from this Amazon S3 bucket. Save the template to a location on your computer that’s easy to remember.

2.     Choose Next.

3.     On the Specify Details page, provide the following:

Screenshot of the Specify Details page

a)    Stack Name: A name for your stack.

b)    InstanceType: Select the instance family for the EC2 instance hosting the web server.

c)     KeyName: Select the Amazon EC2 key pair in the US West (Oregon) Region.

d)    SSHLocation: The IP address range that can be used to connect to the EC2 instance by using SSH. Accept the default, 0.0.0.0/0.

e)    WebserverPort: The TCP/IP port of the web server. Accept the default, 80.

4.     Choose Next.

5.     On the Options page, optionally specify tags for your AWS CloudFormation template, and then choose Next.


6.     On the Review page, review your template details. Select the Acknowledgement checkbox, and then choose Create to create the stack.

It takes about 10-15 minutes to create the entire stack.

Configure the Amazon Kinesis Agent
After AWS CloudFormation has created the stack, configure the Amazon Kinesis Agent.

1.     In the AWS CloudFormation console, choose the Resources tab to find the Firehose delivery stream name. You need this to configure the agent. Record this value because you will need it in step 4. (A CLI alternative for retrieving the stack resources is sketched after these steps.)


2.     On the Outputs tab, find and record the public IP address of the web server. You need it to connect to the web server using SSH to configure the agent. For instructions on how to connect to an EC2 instance using SSH, see Connecting to Your Linux Instance Using SSH.


3. On the web server’s command line, run the following command:

sudo vi /etc/aws-kinesis/agent.json

This command opens the configuration file, agent.json, as follows.

{
  "cloudwatch.emitMetrics": true,
  "firehose.endpoint": "firehose.us-west-2.amazonaws.com",
  "awsAccessKeyId": "",
  "awsSecretAccessKey": "",
  "flows": [
    {
      "filePattern": "/var/log/httpd/access_log",
      "deliveryStream": "",
      "dataProcessingOptions": [
        {
          "optionName": "LOGTOJSON",
          "logFormat": "COMMONAPACHELOG"
        }
      ]
    }
  ]
}

4.     For the deliveryStream key, type the value of the KinesisFirehoseDeliveryName that you retrieved from the stack’s Resources tab. After you type the value, save and close the agent.json file.

5.     Run the following command on the CLI:

sudo service aws-kinesis-agent restart

6.     On the AWS CloudFormation console, choose the Resources tab and note the name of the Amazon ES cluster corresponding to the logical ID ESDomain.

7.     Go to AWS Management Console, and choose Amazon Elasticsearch Service. Under My Domains, you can see the Amazon ES domain that the AWS CloudFormation template created.
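
If you prefer the command line, the values referenced in steps 1 and 6 can also be read from the stack with the AWS CLI. This is a sketch that assumes you named the stack EKK-Stack; the web server IP address from step 2 is available the same way from the stack Outputs (aws cloudformation describe-stacks).

# Firehose delivery stream created by the stack (step 1)
aws cloudformation describe-stack-resources --stack-name EKK-Stack \
  --query "StackResources[?ResourceType=='AWS::KinesisFirehose::DeliveryStream'].PhysicalResourceId" \
  --output text

# Amazon ES domain with the logical ID ESDomain (step 6)
aws cloudformation describe-stack-resources --stack-name EKK-Stack \
  --logical-resource-id ESDomain \
  --query "StackResources[0].PhysicalResourceId" \
  --output text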


Configure Kibana and View Your Apache Logs
Amazon ES provides a default installation of Kibana with every Amazon ES domain. You can find the Kibana endpoint on your domain dashboard in the Amazon ES console.

1.     In the Amazon ES console, choose the Kibana endpoint.

2.     In Kibana, for Index name or pattern, type logmonitor. logmonitor is the name of the Amazon ES index that you created for the web server access logs. The health checks from Amazon Elastic Load Balancing generate access logs on the web server, which flow through the EKK pipeline to Kibana for discovery and visualization.

3.     In Time-field name, select datetime.


4.     On the Kibana console, choose the Discover tab to see the Apache logs.


Use Kibana to visualize the log data by creating bar charts, line and scatter plots, histograms, pie charts, etc.


Pie chart of IP addresses accessing the web server in the last 30 days


Bar chart of IP addresses accessing the web server in the last 5 minutes

You can graph information about HTTP responses, bytes, or IP addresses to provide meaningful insights into the Apache logs. Kibana also facilitates making dashboards by combining graphs.

Monitor Your Log Aggregator

To monitor the Firehose delivery stream, navigate to the Firehose console. Choose the stream, and then choose the Monitoring tab to see the Amazon CloudWatch metrics for the stream.
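
The same health information is available from the command line. The following is a minimal sketch; the CloudWatch metric name is an assumption based on the metrics shown on the Monitoring tab, so confirm it there before scripting alerts.

# Confirm the delivery stream is ACTIVE
aws firehose describe-delivery-stream --delivery-stream-name <your delivery stream name> \
  --query "DeliveryStreamDescription.DeliveryStreamStatus"

# Pull a delivery success metric for a one-hour window (metric name assumed; adjust the time range)
aws cloudwatch get-metric-statistics \
  --namespace AWS/Firehose \
  --metric-name DeliveryToElasticsearch.Success \
  --dimensions Name=DeliveryStreamName,Value=<your delivery stream name> \
  --start-time 2016-11-01T00:00:00Z --end-time 2016-11-01T01:00:00Z \
  --period 300 --statistics Average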

When log delivery fails, the Amazon S3 and Amazon ES logs help you troubleshoot. For example, the following screenshot shows logs when delivery to an Amazon ES destination fails because the date mapping on the index was not in line with the ingest log.

Screenshot of Amazon ES logs for a failed delivery

Conclusion
In this post, we showed how to ship Apache logs to Kibana by using Amazon Kinesis Agent, Amazon ES, and Firehose. It’s worth pointing out that Firehose automatically scales up or down based on the rate at which your application generates logs. To learn more about scaling Amazon ES clusters, see the Amazon Elasticsearch Service Developer Guide.

Managed services like Amazon ES and Amazon Kinesis Firehose simplify provisioning and managing a log aggregation system. The ability to run SQL queries against your streaming log data using Amazon Kinesis Analytics further strengthens the case for using an EKK stack. The AWS CloudFormation template used in this post is available to extend and build your own EKK stack.

 

Using pgpool and Amazon ElastiCache for Query Caching with Amazon Redshift

Post Syndicated from Felipe Garcia original https://aws.amazon.com/blogs/big-data/using-pgpool-and-amazon-elasticache-for-query-caching-with-amazon-redshift/

Felipe Garcia and Hugo Rozestraten are Solutions Architects for Amazon Web Services

In this blog post, we’ll use a real customer scenario to show you how to create a caching layer in front of Amazon Redshift using pgpool and Amazon ElastiCache.

Almost every application, no matter how simple, uses some kind of database. With SQL queries pervasive, a lack of coordination between end users or applications can sometimes result in redundant executions. This redundancy wastes resources that could be allocated to other tasks.

For example, BI tools and applications consuming data from Amazon Redshift are likely to issue common queries. You can cache some of them to improve the end-user experience and reduce contention in the database. In fact, when you use good data modeling and classification policies, you can even save some money by reducing your cluster size.

What is caching?

In computing, a cache is a hardware or software component that stores data so future requests for that data can be served faster. The data stored in a cache might be the result of an earlier computation or the duplicate of data stored elsewhere. A cache hit occurs when the requested data is found in a cache; a cache miss occurs when it is not. Cache hits are served by reading data from the cache, which is faster than recomputing a result or reading from a slower data store. The more requests served from the cache, the faster the system performs.

Customer case: laboratory analysis

In a clinical analysis laboratory, a small team of 6 to 10 scientists (geneticists, doctors, and biologists) query around 2 million lines of genetic code looking for specific genetic modifications. The genes next to a modified gene are also of interest because they can confirm a disease or disorder.

The scientists simultaneously analyze one DNA sample and then hold a meeting to discuss their findings and reach a conclusion.

A Node.js web application contains the logic; it issues the queries against Amazon Redshift. Using the web application connected to Amazon Redshift, the team of scientists experienced latencies of around 10 seconds. When the architecture was modified to use pgpool, the scientists were able to run the same queries in less than 1 second (in other words, 10 times faster).


Introducing pgpool

Pgpool is software that sits between your database clients and your database server(s). It acts as a reverse proxy, receiving connections from clients and forwarding them to the database servers. Originally written for PostgreSQL, pgpool has other interesting features besides caching: connection pooling, replication, load balancing, and queuing of excess connections. Although we didn’t explore these features, we suspect they can be used with Amazon Redshift due to the compatibility between PostgreSQL and Amazon Redshift.

Pgpool can run in an Amazon EC2 instance or in your on-premises environment. For example, you might have a single EC2 instance for dev and test and a fleet of EC2 instances with Elastic Load Balancing and Auto Scaling in production.

The clinical analysis laboratory in our use case used the psql command line client and a Node.js application to issue queries against Amazon Redshift, and it worked as expected. However, we strongly recommend that you test pgpool with your PostgreSQL client before making any changes to your architecture.

Taking a look at the pgpool caching feature

The pgpool caching feature is disabled by default. It can be configured in two ways:

  • On-memory (shmem)
    • This is the default method if you set up the cache and make no changes. It’s slightly faster than Memcached and is easier to configure and maintain. On the other hand, in high-availability scenarios, you tend to waste memory and some database processing, because you cache the query per server and process the query for caching at least once for each server. For example, in a pgpool cluster with four servers, if you expect to have a 20 GB cache, you must provision 4 x m3.xlarge instances and pay for the cache four times. Each query must be processed by the database at least four times to be cached on each server.
  • Memcached (memcached)
    • In this method, the cache is maintained externally from the server. The advantage is that the caching storage layer (Memcached) is decoupled from the cache processing layer (pgpool). This means you won’t waste server memory and database processing time, because the queries are processed only once and cached externally in Memcached.
    • You can run Memcached anywhere, but we suggest you use Amazon ElastiCache with Memcached. Amazon ElastiCache detects and replaces failed nodes automatically, thereby reducing the overhead associated with self-managed infrastructures. It provides a resilient system that mitigates the risk of overloaded databases, which slows website and application load times.

Caching queries with pgpool

The following flow chart shows how query caching works with pgpool:

Flow chart of query caching with pgpool

The following diagram shows the minimum architecture required to install and configure pgpool for a dev/test environment:

Diagram of the minimum dev/test architecture

The following diagram shows the recommended minimum architecture for a production environment:

Diagram of the recommended minimum production architecture

Prerequisites

For the steps in this post, we will use the AWS Command Line Interface (AWS CLI). If you want to use your Mac, Linux, or Microsoft Windows machine to follow along, make sure you have the AWS CLI installed. To learn how, see Installing the AWS Command Line Interface.

Steps for installing and configuring pgpool

1. Setting up the variables:

IMAGEID=ami-c481fad3
KEYNAME=<set your key name here>

The IMAGEID variable is set to use an Amazon Linux AMI from the US East (N. Virginia) region.

Set the KEYNAME variable to the name of the EC2 key pair you will use. This key pair must have been created in the US East (N. Virginia) region.

If you will use a region other than US East (N. Virginia), update IMAGEID and KEYNAME accordingly.

2. Creating the EC2 instance:

aws ec2 create-security-group --group-name PgPoolSecurityGroup --description "Security group to allow access to pgpool"

MYIP=$(curl eth0.me -s | awk '{print $1"/32"}')

aws ec2 authorize-security-group-ingress --group-name PgPoolSecurityGroup --protocol tcp --port 5432 --cidr $MYIP

aws ec2 authorize-security-group-ingress --group-name PgPoolSecurityGroup --protocol tcp --port 22 --cidr $MYIP

INSTANCEID=$(aws ec2 run-instances \
	--image-id $IMAGEID \
	--security-groups PgPoolSecurityGroup \
	--key-name $KEYNAME \
	--instance-type m3.medium \
	--query 'Instances[0].InstanceId' \
	| sed "s/\"//g")

aws ec2 wait instance-status-ok --instance-ids $INSTANCEID

INSTANCEIP=$(aws ec2 describe-instances \
	--filters "Name=instance-id,Values=$INSTANCEID" \
	--query "Reservations[0].Instances[0].PublicIpAddress" \
	| sed "s/\"//g")

3. Creating the Amazon ElastiCache cluster:

aws ec2 create-security-group --group-name MemcachedSecurityGroup --description "Security group to allow access to Memcached"

aws ec2 authorize-security-group-ingress --group-name MemcachedSecurityGroup --protocol tcp --port 11211 --source-group PgPoolSecurityGroup

MEMCACHEDSECURITYGROUPID=$(aws ec2 describe-security-groups \
	--group-names MemcachedSecurityGroup \
	--query 'SecurityGroups[0].GroupId' | \
	sed "s/\"//g")

aws elasticache create-cache-cluster \
	--cache-cluster-id PgPoolCache \
	--cache-node-type cache.m3.medium \
	--num-cache-nodes 1 \
	--engine memcached \
	--engine-version 1.4.5 \
	--security-group-ids $MEMCACHEDSECURITYGROUPID

aws elasticache wait cache-cluster-available --cache-cluster-id PgPoolCache
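
The Memcached endpoint address that you will need when editing pgpool.conf in step 8 can be retrieved with a query in the same style as the commands above (a sketch; the node index assumes the single-node cluster created here):

MEMCACHEDENDPOINT=$(aws elasticache describe-cache-clusters \
	--cache-cluster-id PgPoolCache \
	--show-cache-node-info \
	--query 'CacheClusters[0].CacheNodes[0].Endpoint.Address' \
	| sed "s/\"//g")

echo $MEMCACHEDENDPOINT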

4. Accessing the EC2 instance through SSH, and then updating and installing packages:

ssh -i <your pem file goes here> ec2-user@$INSTANCEIP

sudo yum update -y

sudo yum group install "Development Tools" -y

sudo yum install postgresql-devel libmemcached libmemcached-devel -y

5. Downloading the pgpool sourcecode tarball:

curl -L -o pgpool-II-3.5.3.tar.gz http://www.pgpool.net/download.php?f=pgpool-II-3.5.3.tar.gz

6. Extracting and compiling the source:

tar xvzf pgpool-II-3.5.3.tar.gz

cd pgpool-II-3.5.3

./configure --with-memcached=/usr/include/libmemcached-1.0

make

sudo make install

7. Making a copy of the sample conf that comes with pgpool to create our own pgpool.conf:

sudo cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf

8. Editing pgpool.conf:
Using your editor of choice, open /usr/local/etc/pgpool.conf, and then find and set the following parameters:

  • Set listen_addresses to *.
  • Set port to 5432.
  • Set backend_hostname0 to the endpoint address of your Amazon Redshift cluster.
  • Set backend_port0 to 5439.
  • Set memory_cache_enabled to on.
  • Set memqcache_method to memcached.
  • Set memqcache_memcached_host to your ElastiCache endpoint address.
  • Set memqcache_memcached_port to your ElastiCache endpoint port.
  • Set log_connections to on.
  • Set log_per_node_statement to on.
  • Set pool_passwd to ''.

The modified parameters in your config file should look like this:

listen_addresses = '*'
port = 5432
backend_hostname0 = '<your redshift endpoint goes here>'
backend_port0 = 5439
memory_cache_enabled = on
memqcache_method = 'memcached'
memqcache_memcached_host = '<your memcached endpoint goes here>'
memqcache_memcached_port = 11211
log_connections = on
log_per_node_statement = on
pool_passwd = ''

9. Setting up permissions:

sudo mkdir /var/run/pgpool

sudo chmod u+rw,g+rw,o+rw /var/run/pgpool

sudo mkdir /var/log/pgpool

sudo chmod u+rw,g+rw,o+rw /var/log/pgpool

10. Starting pgpool:

pgpool -n

pgpool is now listening on port 5432:

2016-06-21 16:04:15: pid 18689: LOG: Setting up socket for 0.0.0.0:5432
2016-06-21 16:04:15: pid 18689: LOG: Setting up socket for :::5432
2016-06-21 16:04:15: pid 18689: LOG: pgpool-II successfully started. version 3.5.3 (ekieboshi)

11. Testing the setup:
Now that pgpool is running, we will configure our Amazon Redshift client to point to the pgpool endpoint instead of the Amazon Redshift cluster endpoint. To get the endpoint address, you can use the console or the CLI to retrieve the public IP address of the EC2 instance  or you can just print the value we stored in the $INSTANCEIP variable.

psql -h <pgpool endpoint address> -p 5432 -U <redshift username>

The first time we run the query, we see the following information in the pgpool log:

2016-06-21 17:36:33: pid 18689: LOG: DB node id: 0 backend pid: 25936 statement: select
      s_acctbal,
      s_name,
      p_partkey,
      p_mfgr,
      s_address,
      s_phone,
      s_comment
  from
      part,
      supplier,
      partsupp,
      nation,
      region
  where
      p_partkey = ps_partkey
      and s_suppkey = ps_suppkey
      and p_size = 5
      and p_type like '%TIN'
      and s_nationkey = n_nationkey
      and n_regionkey = r_regionkey
      and r_name = 'AFRICA'
      and ps_supplycost = (
          select 
              min(ps_supplycost)
          from
              partsupp,
              supplier,
              nation,
              region
          where
              p_partkey = ps_partkey
              and s_suppkey = ps_suppkey
              and s_nationkey = n_nationkey
              and n_regionkey = r_regionkey
              and r_name = 'AFRICA'
      )
  order by
      s_acctbal desc,
      n_name,
      s_name,
      p_partkey
  limit 100;

The first line in the log shows that the query is running directly on the Amazon Redshift cluster, so this is a cache miss. Executed directly against the database, the query took 6814.595 ms to return the results.

If we run this query again, with the same predicates, we see a different result in the logs:

2016-06-21 17:40:19: pid 18689: LOG: fetch from memory cache
2016-06-21 17:40:19: pid 18689: DETAIL: query result fetched from cache. statement: 
select
      s_acctbal,
      s_name,
      p_partkey,
      p_mfgr,
      s_address,
      s_phone,
      s_comment
  from
      part,
      supplier,
      partsupp,
      nation,
      region
  where
      p_partkey = ps_partkey
      and s_suppkey = ps_suppkey
      and p_size = 5
      and p_type like '%TIN'
      and s_nationkey = n_nationkey
      and n_regionkey = r_regionkey
      and r_name = 'AFRICA'
      and ps_supplycost = (
          select 
              min(ps_supplycost)
          from
              partsupp,
              supplier,
              nation,
              region
          where
              p_partkey = ps_partkey
              and s_suppkey = ps_suppkey
              and s_nationkey = n_nationkey
              and n_regionkey = r_regionkey
              and r_name = 'AFRICA'
      )
  order by
      s_acctbal desc,
      n_name,
      s_name,
      p_partkey
  limit 100;

As the first two lines of the log show, now we are retrieving the results from the cache with the desired result, so this is a cache hit. The difference is huge: The query took only 247.719 ms. In other words, it’s running 30 times faster than in the previous scenario.

Understanding pgpool caching behavior

Pgpool uses your SELECT query as the key for the fetched results.

Caching behavior and invalidation can be configured in a couple ways:

  • Auto invalidation
    • By default, memqcache_auto_cache_invalidation is set to on. When you update a table in Amazon Redshift, the cache in pgpool is invalidated.
  • Expiration
    • memqcache_expire defines, in seconds, how long a result should stay in the cache. The default value is 0, which means infinite.
  • Black list and white list
    • white_memqcache_table_list
      • Comma-separated list of tables that should be cached. Regular expressions are accepted.
    • black_memqcache_table_list
      • Comma-separated list of tables that should not be cached. Regular expressions are accepted.
  • Bypassing Cache
    • /* NO QUERY CACHE */
      • If you specify the comment /* NO QUERY CACHE */ in your query, the query ignores pgpool cache and fetches the result from the database.

If pgpool doesn’t reach the cache due to name resolution or routing issues, for example, it falls back to the database endpoint and doesn’t use any cache.
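
A quick way to see both behaviors from psql is to run the same query twice and then once more with the bypass comment. This is a sketch only; the table name is a placeholder for one of your Amazon Redshift tables.

# First run is a cache miss; the second identical run should log "fetch from memory cache"
psql -h $INSTANCEIP -p 5432 -U <redshift username> -c "SELECT count(*) FROM <your table>;"
psql -h $INSTANCEIP -p 5432 -U <redshift username> -c "SELECT count(*) FROM <your table>;"

# The bypass comment makes pgpool skip its cache and go straight to Amazon Redshift
psql -h $INSTANCEIP -p 5432 -U <redshift username> -c "/* NO QUERY CACHE */ SELECT count(*) FROM <your table>;"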

Conclusion

It is easy to implement a caching solution using pgpool with Amazon Redshift and Amazon ElastiCache. This solution significantly improves the end-user experience and reduces the load on your cluster by orders of magnitude.

This post shows just one example of how pgpool and this caching architecture can help you. To learn more about the pgpool caching feature, see the pgpool documentation here and here.

Happy querying (and caching, of course). If you have questions or suggestions, please leave a comment below.



Why Professional Open Source Management is Critical for your Business

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/152340372151

By Gil Yehuda, Sr. Director of Open Source and Technology Strategy

This byline was originally written for and appears in CIO Review

In his Open Source Landscape keynote at LinuxCon Japan earlier this year, Jim Zemlin, Executive Director of the Linux Foundation said that the trend toward corporate-sponsored open source projects is one of the most important developments in the open source ecosystem. The jobs report released by the Linux Foundation earlier this year found that open source professionals are in high demand. The report was followed by the announcement that TODOGroup, a collaboration project for open source professionals who run corporate open source program offices, was joining the Linux Foundation. Open source is no longer exclusively a pursuit of the weekend hobbyist. Professional open source management is a growing field, and it’s critical to the success of your technology strategy.

Open Source Potential to Reality Gap

Open source has indeed proven itself to be a transformative and disruptive part of many companies’ technology strategies. But we know it’s hardly perfect and far from hassle-free. Many developers trust open source projects without carefully reviewing the code or understanding the license terms, thus inviting risk. Developers say they like to contribute to open source, but are not writing as much of it as they wish. By legal default, their code is not open source unless they make it so. Although open source is touted as an engineering recruitment tool, developers don’t flock to companies that toss the words “open source” all over their tech blogs. They first check for serious corporate commitment to open source.

Open source offers potential to lower costs, keep up with standards, and make your developers more productive. Turning potential into practice requires open source professionals on your team to unlock the open source opportunity. They will steer you out of the legal and administrative challenges that open source brings, and help you create a voice in the open source communities that matter most to you. Real work goes into managing the strategy, policies, and processes that enable you to benefit from the open source promise. Hence the emerging trend of hiring professionals to run open source program offices at tech companies across the industry.

Getting the Program off the Ground

Program office sounds big. Actually, many companies staff these with as few as one or two people. Often the rest is a virtual team that includes someone from legal, PR, developer services, an architect, and a few others depending on your company. As a virtual team, each member helps address the areas they know best. Their shared mission is to provide an authoritative and supportive decision about all-things open source at your company. Ideally they are technical, respected, and lead with pragmatism – but what’s most important is that they all collaborate toward the same goal.

The primary goal of the open source program office is to steer the technology strategy toward success using the right open source projects and processes. But the day-to-day program role is to provide services to engineers. Engineers need to know when they can use other people’s code within the company’s codebase (to ‘inbound’ code), and when they can publish company code to other projects externally (to ‘outbound’ code). Practical answers require an understanding of engineering strategy, attention to legal issues surrounding licenses (copyright, patent, etc.), and familiarity with managing GitHub at scale.

New open source projects and foundations will attract (or distract) your attention. Engineers will ask about the projects they contribute to on their own time, but in areas your company does business. They seek to contribute to projects and publish new projects. Are there risks? Is it worth it? The questions and issues you deal with on a regular basis will help give you a greater appreciation for where open source truly works for you, and where process-neglect can get in the way of achieving your technology mission.

Will it Work?

I’ve been running the open source program office at Yahoo for over six years. We’ve been publishing and supporting industry-leading open source projects for AI, Big Data, Cloud, Datacenter, Edge, Front end, Mobile, all the way to Zookeeper. We’ve created foundational open source projects like Apache Hadoop and many of its related technologies. When we find promise in other projects, we support and help accelerate them too, like we did with OpenStack, Apache Storm and Spark. Our engineers support hundreds of our own projects, we contribute to thousands of outside projects, and developers around the world download and use our open source code millions of times per week! We are able to operate at scale and take advantage of the open source promise by providing our engineers with a lightweight process that enables them to succeed in open source.

You can do the same at your company. Open source professionals who run program offices at tech companies share openly – it comes with the territory. I publish answers about open source on Quora and I’m a member of TODOGroup, the collaboration project managed by the Linux Foundation for open source program directors. There, I share and learn from my peers who manage the open source programs at various tech companies.

Bottom line: If you want to take advantage of the value that open source offers, you’ll need someone on your team who understands open source pragmatics, who’s plugged into engineering initiatives, and who’s empowered to make decisions. The good news is you are not alone and there’s help out there in the open source community.

AWS Config: Checking for Compliance with New Managed Rule Options

Post Syndicated from Shawn O'Connor original https://aws.amazon.com/blogs/devops/aws-config-checking-for-compliance-with-new-managed-rule-options/

Change is the inherent nature of the Cloud. New ideas can immediately take shape through the provisioning of resources, projects will iterate and evolve, and sometimes we must go back to the drawing board. Amazon Web Services accelerates this process by providing the ability to programmatically provision and de-provision resources. However, this freedom requires some responsibility on the part of the organization in implementing consistent business and security policies.

AWS Config enables AWS resource inventory and change management as well as Config Rules to confirm that resources are configured in compliance with policies that you define. New options are now available for AWS Config managed rules, providing additional flexibility with regards to the type of rules and policies that can be created.

  • AWS Config rules can now check that running instances are using approved Amazon Machine Images, or AMIs. You can specify a list of approved AMIs by ID or provide a tag that identifies the list of approved AMIs. (A CLI sketch of such a rule follows this list.)
  • The Required-Tag managed rule can now require up to 6 tags with optional values in a single rule. Previously, each rule accepted only a single tag/value combo.
  • Additionally, the Required-Tag managed rule now accepts a comma-separated list of values for each checked tag. This allows a resource to be compliant if any one of the supplied values is present for the checked tag.
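
As a concrete illustration of the first option, the following sketch defines a rule that checks EC2 instances against a comma-separated list of approved AMI IDs using the AWS CLI. The rule name and the single AMI ID (the approved image used later in this post) are examples, and the input parameter key is an assumption to verify against the managed rule's documentation.

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "approved_ami_by_id",
  "Description": "Checks that EC2 instances were launched from approved AMIs",
  "Scope": { "ComplianceResourceTypes": ["AWS::EC2::Instance"] },
  "Source": { "Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID" },
  "InputParameters": "{\"amiIds\": \"ami-1480c503\"}"
}'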

A Sample Policy

Let’s explore how these new rule options can be incorporated into a broader compliance policy. A typical company’s policy might state that:

  • All data must be encrypted at rest.
  • The AWS IAM password policy must meet the corporate standard.
  • Resources must be billed to the correct cost center.
  • We want to know who owns a resource in case there is an issue or question about the resource.
  • We want to identify whether a resource is a part of Dev, QA, Production, or staging so that we can apply the correct SLAs and make sure the appropriate teams have access.
  • We need to identify any systems that handled privileged information such as Personally Identifiable Information (PII) or credit card data (PCI) as a part of our compliance requirements.
  • Our Ops team regularly provides hardened images with the latest patches and required software (for example, security and/or configuration management agents). We want to ensure that all Linux instances are built with these images. We do regular reviews to ensure our images are up to date.

We see organizations implementing a variety of strategies like the one we have defined. Tagging is typically used to categorize and identify resources. We will be using tags and AWS Config managed rules to satisfy each of our policy requirements. In addition to the AWS Config managed rules, custom compliance rules can be developed using AWS Lambda functions.

Implementation

In the AWS Management Console, we have built five rules.

  • strong_password_policy – Checks that we have a Strong Password policy for AWS IAM Users
  • only_encrypted_volumes – Checks that all attached EBS volumes are encrypted
  • approved_ami_by_id – Checks that approved AMI IDs have been used
  • approved_ami_by_tag – Check that AMIs tagged as sec_approved have been used
  • ec2_required_tags – Check that all required tags have been applied to EC2 instances:

    Tag Name (Key) | Tag Value(s) | Description
    Environment | Prod / Dev / QA / Staging | The environment in which the resource is deployed
    Owner | Joe, Steve, Anne | Who is responsible?
    CostCenter | General / Ops / Partner / Marketing | Who is paying for the resource?
    LastReviewed | Date (e.g., 09272016) | Last time this instance was reviewed for compliance
    AuditLevel | Normal / PII / PCI | The required audit level for the instance
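
The ec2_required_tags rule maps to the REQUIRED_TAGS managed rule, which takes its tags as numbered tagNKey/tagNValue parameters. The following sketch expresses the table above from the AWS CLI; value lists are omitted where any value is acceptable, and the exact parameter keys should be checked against the managed rule's documentation.

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "ec2_required_tags",
  "Scope": { "ComplianceResourceTypes": ["AWS::EC2::Instance"] },
  "Source": { "Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS" },
  "InputParameters": "{\"tag1Key\":\"Environment\",\"tag1Value\":\"Prod,Dev,QA,Staging\",\"tag2Key\":\"Owner\",\"tag3Key\":\"CostCenter\",\"tag3Value\":\"General,Ops,Partner,Marketing\",\"tag4Key\":\"LastReviewed\",\"tag5Key\":\"AuditLevel\",\"tag5Value\":\"Normal,PII,PCI\"}"
}'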

Strong Password Policy

AWS Config has pre-built rules to check against the configured IAM password policy. We have implemented this rule using the default configurations.

We can see that our IAM password policy is Compliant with our managed rule.


Approved AMIs

We have our hardened AMI from the Ops team. The image also includes encrypted EBS volumes. We have added a tag called sec_approved that we will use to test our image.


We will create two additional rules to check that EC2 instances are launched with our approved AMIs. One will check that our sec_approved tag is present on the AMI. A second rule checks that the AMI ID (ami-1480c503) is included in a comma-separated list of approved images. Now we will launch two instances: one with a standard Amazon Linux AMI (instance i-019b0e790a67dd7f1) and one with our approved AMI (instance i-0caf40fcb4f48517c).


If we look at the details of the approved_amis_by_tag and approved_amis_by_id rules, we see that the instance that was launched with the unapproved image (i-019b0e790a67dd7f1) has been marked Noncompliant. Our hardened instance (i-0caf40fcb4f48517c) is Compliant.


Encrypted EBS Volumes

In the EC2 Console we see the volumes attached to our instances. We can see that the EBS volumes created by our hardened AMI have been encrypted as configured. They are attached to instance i-0caf40fcb4f48517c.


If we look at the details of the only_encrypted_volumes rule, we see that the instance that was launched with the unapproved image (i-019b0e790a67dd7f1) has been marked Noncompliant. Our hardened instance is Compliant.


Required Tags

Finally, let’s take a look at the ec2_required_tags rule. We have defined our required tags in our AWS Config Rule. Specific tag values are optional, but will be required if specified. Our hardened instance was configured with all required tags.


We can see that our tagged instance is compliant with our Required Tags Rule as well.
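
Rule compliance can also be checked from the command line; a minimal sketch using the rule names created above:

# Summarize compliance for the five rules
aws configservice describe-compliance-by-config-rule \
  --config-rule-names strong_password_policy only_encrypted_volumes approved_ami_by_id approved_ami_by_tag ec2_required_tags

# List the noncompliant resources for a single rule
aws configservice get-compliance-details-by-config-rule \
  --config-rule-name ec2_required_tags --compliance-types NON_COMPLIANT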


How to Enable MFA Protection on Your AWS API Calls

Post Syndicated from Zaher Dannawi original https://aws.amazon.com/blogs/security/how-to-enable-mfa-protection-on-your-aws-api-calls/

Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

Let’s say Alice, an AWS account administrator, wants to add another layer of protection over her users’ access to EC2. Alice wants to allow IAM users to perform RunInstances, DescribeInstances, and StopInstances actions. However, Alice also wants to restrict actions such as TerminateInstances to ensure that users can only perform such API calls if they authenticate with MFA. To accomplish this, Alice must complete the two-part process described below.

Part 1: Assign an approved authentication device to IAM users

  1. Get an MFA token. Alice can purchase a hardware MFA key fob from Gemalto, a third-party provider. Alternatively, she can install a virtual MFA app for no additional cost, which enables IAM users to use any OATH TOTP–compatible application on their smartphone, tablet, or computer.
  2. Activate the MFA token. Alice must then sign in to the IAM console and follow these steps to activate an MFA device for a user (note that Alice can delegate MFA device activation to her users by granting them access to provision and manage MFA devices; a CLI sketch of the same activation appears after this list):
    1. Click Users in the left pane of the IAM console.
    2. Select a user.
    3. On the Security Credentials tab, choose Manage MFA Device, as shown in the following screenshot.
      Screenshot of Manage MFA Device button
For Virtual MFA devices:
      1. Select A virtual MFA device and choose Next Step.
        Screenshot showing the selection of "A virtual MFA device"
      2. Open the virtual MFA app and choose the option to create a new account.
      3. Use the app to scan the generated QR code. Alternatively, you can select Show secret key for manual configuration (as shown the following screenshot), and then type the secret configuration key in the MFA application.
        Screenshot showing selection of "Show secret key for manual configuration"
      4. Finally, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password in the Authentication Code 2 box, and choose Activate Virtual MFA.
For Hardware MFA devices:
      1. Select A hardware MFA device and choose Next Step.
        Screenshot of choosing "A hardware MFA device"
      2. Type the device serial number. The serial number is usually on the back of the device.
        Screenshot of the "Serial Number" box
      3. In the Authentication Code 1 box, type the six-digit number displayed by the MFA device. Wait up to 30 seconds for the device to generate a new one-time password. Then type the second one-time password in the Authentication Code 2 box, and choose Next Step.
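
If Alice prefers the AWS CLI to the console, the same activation can be done from the command line. This is a minimal sketch for a virtual device; the user name, device name, account ID, and authentication codes are placeholders, and the two codes must be consecutive codes from the MFA application.

# Create the virtual MFA device and save its seed as a QR code image
aws iam create-virtual-mfa-device --virtual-mfa-device-name bob-mfa \
  --outfile /tmp/bob-mfa-qr.png --bootstrap-method QRCodePNG

# Associate the device with the IAM user by supplying two consecutive codes
aws iam enable-mfa-device --user-name bob \
  --serial-number arn:aws:iam::123456789012:mfa/bob-mfa \
  --authentication-code1 123456 --authentication-code2 789012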

Part 2: Author an IAM policy to grant “Allow” access for MFA-authenticated users

Now, Alice has to author an IAM policy that requires MFA authentication to access EC2’s TerminateInstances API action and attach it to her users. To do that, Alice has to insert a condition key in the IAM policy statement. Alice can choose one of the following condition element keys:

  1. aws:MultiFactorAuthPresent

Alice can select this condition key to verify that the user did authenticate with MFA. This key is present only when the user authenticates with short-term credentials. The following example policy demonstrates how to use this condition key to require MFA authentication for users attempting to call the EC2 TerminateInstances action.

{
  "Version": "2012-10-17",
  "Statement":[
   {
          "Sid": "AllowActionsForEC2",
          "Effect": "Allow",
          "Action": ["ec2:RunInstances",
                     "ec2:DescribeInstances",
                     "ec2:StopInstances"],
          "Resource": "*"
   },
   {
      "Sid": "AllowActionsForEC2WhenMFAIsPresent",
      "Effect":"Allow",
      "Action":"ec2:TerminateInstances",
      "Resource":"*",
      "Condition":{
         "bool":{"aws:MultiFactorAuthPresent":"true"}
      }
    }
  ]
}

  2. aws:MultiFactorAuthAge

Alice can select this condition key if she wants to grant access to users only within a specific time period after they authenticate with MFA. For instance, Alice can restrict access to EC2’s TerminateInstances API to users who authenticated with an MFA device within the past 300 seconds (5 minutes). Users with short-term credentials older than 300 seconds must reauthenticate to gain access to this API. The following example policy demonstrates how to use this condition key. (After the policy, a short sketch shows how a user obtains MFA-authenticated short-term credentials from the AWS CLI.)

{
  "Version": "2012-10-17",
   "Statement":[
    {
            "Sid": "AllowActionsForEC2",
            "Effect": "Allow",
            "Action": ["ec2:RunInstances",
               "ec2:DescribeInstances",
               "ec2:StopInstances"],
            "Resource": "*"
    },
    {
           "Sid": "AllowActionsForEC2WhenMFAIsPresent",
           "Effect":"Allow",
           "Action":"ec2:TerminateInstances",
           "Resource":"*",
           "Condition":{
             "NumericLessThan":{"aws:MultiFactorAuthAge":"300"}
    }
   }
  ]
}
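
Both policies take effect only when the caller presents MFA-authenticated short-term credentials. The following sketch shows how a user would obtain them from the AWS CLI and then call the protected action; the MFA serial number, token code, and instance ID are placeholders.

# Exchange long-term keys plus an MFA code for short-term credentials (valid from 15 minutes up to 36 hours)
aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/bob-mfa \
  --token-code 123456 \
  --duration-seconds 900

# Export the returned credentials, then make the MFA-protected call
export AWS_ACCESS_KEY_ID=<AccessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the response>
export AWS_SESSION_TOKEN=<SessionToken from the response>

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

With the MultiFactorAuthAge policy, the TerminateInstances call must also happen within 300 seconds of the MFA authentication.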

Summary

In this blog post, I showed how you can use MFA to protect access to AWS API actions and resources. I first showed how to assign a virtual or hardware MFA device to an IAM user and then how to author IAM policies with a condition element key that requires MFA authentication before granting access to AWS API actions. I covered two condition element keys: the MultiFactorAuthPresent condition key, which matches when a user authenticates with an MFA device, and the MultiFactorAuthAge condition key, which matches when a user authenticates with an MFA device within a given time interval.

For more information about MFA-protected API access, see Configuring MFA-Protected API Access. If you have a comment about this post, submit it in the “Comments” section below. If you have implementation questions, please start a new thread on the IAM forum.

– Zaher

Trump on cybersecurity: vacuous and populist

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/trump-on-cybersecurity-vacuous-and.html

Trump has published his policy on cybersecurity. It demonstrates that he and his people do not understand the first thing about cybersecurity.
Specifically, he wants "the best defense technologies" and "cyber awareness training for all government employees". These are well-known bad policies in the cybersecurity industry. They are the sort of thing the intern with a degree from Trump University would come up with.
Awareness training is the knee-jerk response to any problem. Employees already spend a lot of their time doing mandatory training for everything from environmentally friendly behavior, to sexual harassment, to Sarbanes-Oxley financial compliance, to cyber-security. None of it has proven effective, but organizations continue to force it, either because they are required to, or they are covering their asses. No amount of training employees to not click on email attachments helps. Instead, the network must be secure enough that reckless clicking on attachments poses no danger.
Belief in a technological Magic Pill that will stop hackers is common among those who know nothing about cybersecurity. Such pills don't exist. The least secure networks already have "the best defense technologies". Things like anti-virus, firewalls, and intrusion prevention systems do not stop hackers by themselves, but are instead tools that knowledgeable teams use in order to make their jobs easier. It's like how a chisel doesn't make a sculpture by itself, but is instead just a tool used by the artist. The government already has all the technology it needs. Its problems instead derive from the fact that they try to solve their problems the way Trump does: by assigning the task to some Trump University intern.
Lastly, Trump suggests that on the offensive side, we need to improve our abilities in order to create a cyber deterrence. We already do that. The United States is by far the #1 nation in offensive capabilities. In 2015, Obama forced China to the table, to sign an agreement promising they’d stop hacking us. Since then, China has kept the agreement, and has dropped out of the news as a source of cyber attacks. Privately, many people in government tell me it’s because we did some major cyber attack in China that successfully deterred them.

Trump promises to be a strong leader who hires effective people. He demonstrates this nowhere. In my area of expertise, he and his people demonstrate a shocking ignorance of the issues. It’s typical populist rhetoric: when China and Russia rape our computers, he’ll blame it on some sort of rigged system, not his own incompetence.
Disclaimer: I don’t care about Trump’s locker room comments, or any of the other things that get the mass media upset. I oppose Trump because he’s a vacuous populist, as I demonstrate here.

Real-time Clickstream Anomaly Detection with Amazon Kinesis Analytics

Post Syndicated from Chris Marshall original https://aws.amazon.com/blogs/big-data/real-time-clickstream-anomaly-detection-with-amazon-kinesis-analytics/

Chris Marshall is a Solutions Architect for Amazon Web Services

Analyzing web log traffic to gain insights that drive business decisions has historically been performed using batch processing.  While effective, this approach results in delayed responses to emerging trends and user activities.  There are solutions for processing data in real time using streaming and micro-batching technologies, but they can be complex to set up and maintain.  Amazon Kinesis Analytics is a managed service that makes it very easy to identify and respond to changes in behavior in real time.

One use case where it’s valuable to have immediate insights is analyzing clickstream data.   In the world of digital advertising, an impression is when an ad is displayed in a browser and a clickthrough represents a user clicking on that ad.  A clickthrough rate (CTR) is one way to monitor the ad’s effectiveness.  CTR is calculated in the form of: CTR = Clicks / Impressions * 100.  Digital marketers are interested in monitoring CTR to know when particular ads perform better than normal, giving them a chance to optimize placements within the ad campaign.  They may also be interested in anomalous low-end CTR that could be a result of a bad image or bidding model.
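As a quick aside on the arithmetic, here is a minimal Python sketch of the CTR formula (my own illustration, not part of the pipeline built later in this post):

def click_through_rate(clicks, impressions):
    # Return the click-through rate (CTR) as a percentage.
    if impressions == 0:
        return 0.0
    return clicks / float(impressions) * 100.0

print(click_through_rate(10, 100))  # 10.0 -- the baseline rate used later in this post
print(click_through_rate(50, 100))  # 50.0 -- the artificially high anomaly rate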

In this post, I show an analytics pipeline which detects anomalies in real time for a web traffic stream, using the RANDOM_CUT_FOREST function available in Amazon Kinesis Analytics.

RANDOM_CUT_FOREST 

Amazon Kinesis Analytics includes a powerful set of analytics functions to analyze streams of data.  One such function is RANDOM_CUT_FOREST.  This function detects anomalies by scoring data flowing through a dynamic data stream. This novel approach identifies a normal pattern in streaming data, then compares new data points in reference to it. For more information, see Robust Random Cut Forest Based Anomaly Detection On Streams.

Analytics pipeline components

To demonstrate how the RANDOM_CUT_FOREST function can be used to detect anomalies in real-time click through rates, I will walk you through how to build an analytics pipeline and generate web traffic using a simple Python script.  When your injected anomaly is detected, you get an email or SMS message to alert you to the event.

This post first walks through the components of the analytics pipeline, then discusses how to build and test it in your own AWS account.  The accompanying AWS CloudFormation script builds the Amazon API Gateway API, Amazon Kinesis streams, AWS Lambda function, Amazon SNS components, and IAM roles required for this walkthrough. Then you manually create the Amazon Kinesis Analytics application that uses the RANDOM_CUT_FOREST function in SQL.

Web Client

Because Python is a popular scripting language available on many clients, we’ll use it to generate web impressions and click data by making HTTP GET requests with the requests library.  This script mimics beacon traffic generated by web browsers.  The GET request contains a query string value of click or impression to indicate the type of beacon being captured.  The script has a while loop that iterates 2500 times.  For each pass through the loop, an Impression beacon is sent.  Then, the random function is used to potentially send an additional click beacon.  For most of the passes through the loop, a click request is generated roughly 10% of the time.  However, for some web requests, the CTR jumps to a rate of 50%, representing an anomaly.  These are artificially high CTR rates used for the purpose of illustration in this post, but the functionality would be the same using small fractional values.

import requests
import random
import sys
import argparse

def getClicked(rate):
	if random.random() <= rate:
		return True
	else:
		return False

def httpGetImpression():
	url = args.target + '?browseraction=Impression' 
	r = requests.get(url)

def httpGetClick():
	url = args.target + '?browseraction=Click' 
	r = requests.get(url)
	sys.stdout.write('+')

parser = argparse.ArgumentParser()
parser.add_argument("target",help=" the http(s) location to send the GET request")
args = parser.parse_args()
i = 0
while (i < 2500):
	httpGetImpression()
	# Clicks follow at a ~10% rate, except for iterations 1950-1999,
	# where the rate jumps to ~50% to inject the anomaly.
	if(i<1950 or i>=2000):
		clicked = getClicked(.1)
		sys.stdout.write('_')
	else:
		clicked = getClicked(.5)
		sys.stdout.write('-')
	if(clicked):
		httpGetClick()
	i = i + 1
	sys.stdout.flush()

Web “front door”

Client web requests are handled by Amazon API Gateway, a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. API Gateway gets data into your analytics pipeline by acting as a service proxy to an Amazon Kinesis stream.  It takes click and impression web traffic, converts the HTTP header and query string data into a JSON message, and places the message into your stream.  The CloudFormation script creates an API endpoint with a body mapping template similar to the following:

#set($inputRoot = $input.path('$'))
{
  "Data": "$util.base64Encode("{ ""browseraction"" : ""$input.params('browseraction')"", ""site"" : ""$input.params('Host')"" }")",
  "PartitionKey" : "$input.params('Host')",
  "StreamName" : "CSEKinesisBeaconInputStream"
}

(API Gateway service proxy body mapping template)

This tells API Gateway to take the host header and query string inputs and convert them into a payload to be put into a stream using a service proxy for Amazon Kinesis Streams.
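For example, assuming the API has been deployed at a hypothetical invoke URL (substitute the endpoint that your CloudFormation stack outputs), a single impression beacon can be sent like this, which is essentially what ClickImpressionGenerator.py does in a loop:

import requests

# Hypothetical invoke URL; substitute the endpoint from your CloudFormation stack outputs.
API_ENDPOINT = 'https://abc123.execute-api.us-west-2.amazonaws.com/prod/beacon'

# API Gateway reads the browseraction query string and the Host header, wraps them in
# the JSON payload shown above, and puts the record into the input Kinesis stream.
response = requests.get(API_ENDPOINT, params={'browseraction': 'Impression'})
print(response.status_code)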

Input stream

Amazon Kinesis Streams can capture terabytes of data per hour from thousands of sources.  In this case, you use it to deliver your JSON messages to Amazon Kinesis Analytics.
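If you want to verify that the JSON messages are arriving, a short boto3 sketch like the following reads a few records directly from the stream; the stream name is a placeholder for the <stack name>-CSEKinesisBeaconInputStream-<random string> stream that the CloudFormation template creates.

import boto3

kinesis = boto3.client('kinesis', region_name='us-west-2')

# Placeholder; substitute your stack's input stream name.
stream_name = 'mystack-CSEKinesisBeaconInputStream-ABC123'

shard_id = kinesis.describe_stream(StreamName=stream_name)['StreamDescription']['Shards'][0]['ShardId']
iterator = kinesis.get_shard_iterator(
    StreamName=stream_name,
    ShardId=shard_id,
    ShardIteratorType='TRIM_HORIZON'
)['ShardIterator']

for record in kinesis.get_records(ShardIterator=iterator, Limit=10)['Records']:
    print(record['Data'])  # the JSON message produced by the API Gateway mapping template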

Analytics application

Amazon Kinesis Analytics processes the incoming messages and allows you to perform analytics on the dynamic stream of data.  You use SQL to calculate the CTR and pass that into your RANDOM_CUT_FOREST function to calculate an anomaly score.

First, calculate the counts of impressions and clicks using SQL with a tumbling window in Amazon Kinesis Analytics.  Analytics uses streams and pumps in SQL to process data.  See our earlier post to get an overview of streaming data with Kinesis Analytics. Inside the Analytics SQL, a stream is analogous to a table and a pump is the flow of data into those tables.  Here’s the description of streams for the impression and click data.

CREATE OR REPLACE STREAM "CLICKSTREAM" ( 
   "CLICKCOUNT" DOUBLE
);


CREATE OR REPLACE STREAM "IMPRESSIONSTREAM" ( 
   "IMPRESSIONCOUNT" DOUBLE
);

Later, you divide the clicks by the impressions; the counts use a DOUBLE data type because if you left them as integers, the division below would yield only a 0 or 1.  Next, you define the pumps that populate the streams using a tumbling window, a window of time during which all records received are considered as part of the statement.

CREATE OR REPLACE PUMP "CLICKPUMP" STARTED AS 
INSERT INTO "CLICKSTREAM" ("CLICKCOUNT") 
SELECT STREAM COUNT(*) 
FROM "SOURCE_SQL_STREAM_001"
WHERE "browseraction" = 'Click'
GROUP BY FLOOR(
  ("SOURCE_SQL_STREAM_001".ROWTIME - TIMESTAMP '1970-01-01 00:00:00')
    SECOND / 10 TO SECOND
);
CREATE OR REPLACE PUMP "IMPRESSIONPUMP" STARTED AS 
INSERT INTO "IMPRESSIONSTREAM" ("IMPRESSIONCOUNT") 
SELECT STREAM COUNT(*) 
FROM "SOURCE_SQL_STREAM_001"
WHERE "browseraction" = 'Impression'
GROUP BY FLOOR(
  ("SOURCE_SQL_STREAM_001".ROWTIME - TIMESTAMP '1970-01-01 00:00:00')
    SECOND / 10 TO SECOND
);

The GROUP BY statement uses FLOOR and ROWTIME to create a tumbling window of 10 seconds.  This tumbling window will output one record every ten seconds assuming we are receiving data from the corresponding pump.  In this case, you output the total number of records that were clicks or impressions.
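To make the windowing concrete, here is a rough Python sketch of the same idea (an analogy only, not how Kinesis Analytics is implemented): each record’s timestamp is floored to a 10-second bucket, and one count per bucket is emitted.

from collections import Counter

def ten_second_bucket(epoch_seconds):
    # Floor a timestamp to the start of its 10-second tumbling window.
    return int(epoch_seconds // 10) * 10

# Example: count click events per 10-second window.
click_timestamps = [100.2, 103.7, 109.9, 110.1, 118.4]
print(Counter(ten_second_bucket(t) for t in click_timestamps))  # Counter({100: 3, 110: 2})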

Next, use the output of these pumps to calculate the CTR:

CREATE OR REPLACE STREAM "CTRSTREAM" (
  "CTR" DOUBLE
);

CREATE OR REPLACE PUMP "CTRPUMP" STARTED AS 
INSERT INTO "CTRSTREAM" ("CTR")
SELECT STREAM "CLICKCOUNT" / "IMPRESSIONCOUNT" * 100 as "CTR"
FROM "IMPRESSIONSTREAM",
  "CLICKSTREAM"
WHERE "IMPRESSIONSTREAM".ROWTIME = "CLICKSTREAM".ROWTIME;

Finally, these CTR values are used in RANDOM_CUT_FOREST to detect anomalous values.  The DESTINATION_SQL_STREAM is the output stream of data from the Amazon Kinesis Analytics application.

CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    "CTRPERCENT" DOUBLE,
    "ANOMALY_SCORE" DOUBLE
);

CREATE OR REPLACE PUMP "OUTPUT_PUMP" STARTED AS 
INSERT INTO "DESTINATION_SQL_STREAM" 
SELECT STREAM * FROM
TABLE (RANDOM_CUT_FOREST( 
             CURSOR(SELECT STREAM "CTR" FROM "CTRSTREAM"), --inputStream
             100, --numberOfTrees (default)
             12, --subSampleSize 
             100000, --timeDecay (default)
             1) --shingleSize (default)
)
WHERE ANOMALY_SCORE > 2; 

The RANDOM_CUT_FOREST function greatly simplifies the programming required for anomaly detection.  However, understanding your data domain is paramount when performing data analytics.  The RANDOM_CUT_FOREST function is a tool for data scientists, not a replacement for them.  Knowing whether your data is logarithmic, circadian rhythmic, linear, etc. will provide the insights necessary to select the right parameters for RANDOM_CUT_FOREST.  For more information about parameters, see the RANDOM_CUT_FOREST Function.

Fortunately, the default values work in a wide variety of cases. In this case, use the default values for all but the subSampleSize parameter.  Typically, you would use a larger sample size to increase the pool of random samples used to calculate the anomaly score; for this post, use 12 samples so as to start evaluating the anomaly scores sooner.

Your SQL query outputs one record every ten seconds from the tumbling window so you’ll have enough evaluation values after two minutes to start calculating the anomaly score.  You are also using a cutoff value where records are only output to “DESTINATION_SQL_STREAM” if the anomaly score from the function is greater than 2 using the WHERE clause. To help visualize the cutoff point, here are the data points from a few runs through the pipeline using the sample Python script:

As you can see, the majority of the CTR data points are grouped around a 10% rate.  As the values move away from the 10% rate, their anomaly scores go up because they are more infrequent.  In the graph, an anomaly score above the cutoff is shaded orange. A cutoff value of 2 works well for this data set.

Output stream

The Amazon Kinesis Analytics application outputs the values from “DESTINATION_SQL_STREAM” to an Amazon Kinesis stream when the record has an anomaly score greater than 2.

Anomaly execution function

The output stream is an input event for an AWS Lambda function with a batch size of 1, so the function gets called one time for each message put in the output stream.  This function represents a place where you could react to anomalous events.  In a real world scenario, you may want to instead update a real-time bidding system to react to current events.  In this example, you send a message to SNS to see a tangible response to the changing CTR.  The CloudFormation template creates a Lambda function similar to the following:

var AWS = require('aws-sdk');
var sns = new AWS.SNS( { region: "<region>" });

exports.handler = function(event, context) {
	//our batch size is 1 record so loop expected to process only once
	event.Records.forEach(function(record) {
		// Kinesis data is base64 encoded so decode here
		var payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
		// The Analytics application outputs CSV records: CTRPERCENT,ANOMALY_SCORE
		var rec = payload.split(',');
		var ctr = rec[0];
		var anomaly_score = rec[1];
		var detail = 'Anomaly detected with a click through rate of ' + ctr + '% and an anomaly score of ' + anomaly_score;
		var subject = 'Anomaly Detected';
		var params = {
			Message: detail,
			Subject: subject,
			// MessageStructure is omitted; it is only needed for protocol-specific JSON messages.
			// Substitute your account number (and region, if different) in the topic ARN.
			TopicArn: 'arn:aws:sns:us-east-1::ClickStreamEvent'
		};
		sns.publish(params, function(err, data) {
			if (err) context.fail(err.stack);
			else     context.succeed('Published Notification');
		});
	});
}; 

Notification system

Amazon SNS is a fully managed, highly scalable messaging platform.  For this walkthrough, you send messages to SNS with subscribers that send email and text messages to a mobile phone.
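If you are wiring this up by hand rather than through the CloudFormation template (which creates the subscriptions from the email address and phone number parameters you supply), subscribing endpoints with boto3 looks roughly like the following; the topic ARN, email address, and phone number are placeholders.

import boto3

sns = boto3.client('sns', region_name='us-west-2')

# Placeholder ARN; use the topic created for your pipeline.
topic_arn = 'arn:aws:sns:us-west-2:123456789012:ClickStreamEvent'

sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='you@example.com')
sns.subscribe(TopicArn=topic_arn, Protocol='sms', Endpoint='+15555550100')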

Analytics pipeline walkthrough

Sometimes it’s best to build out a solution so you can see all the parts working and get a good sense of how it works.   Here are the steps to build out the entire pipeline as described above in your own account and perform real-time clickstream analysis yourself.  Following this walkthrough creates resources in your account that you can use to test the example pipeline.

Prerequisites

To complete the walkthrough, you’ll need an AWS account and a place to execute Python scripts.

Configure the CloudFormation template

Download the CloudFormation template named CSE-cfn.json from the AWS Big Data blog Github repository. In the CloudFormation console, set your region to N. Virginia, Oregon, or Ireland and then browse to that file.

When your script executes and an anomaly is detected, SNS sends a notification.  To see these in an email or text message, enter your email address and mobile phone number in the Parameters section and choose Next.

This CloudFormation script creates the policies and roles necessary to process the incoming messages.  Acknowledge this by selecting the acknowledgement check box, or you will get an error about the CAPABILITY_IAM capability.

If you entered a valid email or SMS number, you will receive a validation message when the SNS subscriptions are created by CloudFormation.  If you don’t wish to receive email or texts from your pipeline, you don’t need to complete the verification process.

Run the Python script

Copy ClickImpressionGenerator.py to a local folder.

This script uses the requests library to make HTTP requests.  If you don’t have that Python package globally installed, you can install it locally by running the following command in the same folder as the Python script:

pip install requests -t .

When the CloudFormation stack is complete, choose Outputs, ExecutionCommand, and run the command from the folder with the Python script.

This will start generating web traffic and send it to the API Gateway in the front of your analytics pipeline.  It is important to have data flowing into your Amazon Kinesis Analytics application for the next section so that a schema can be generated based on the incoming data.  If the script completes before you finish with the next section, simply restart the script with the same command.

Create the Analytics pipeline application

Open the Amazon Kinesis console and choose Go to Analytics.

Choose Create new application to create the Kinesis Analytics application, enter an application name and description, and then choose Save and continue.

Choose Connect to a source to see a list of streams.  Select the stream that starts with the name of your CloudFormation stack and contains CSEKinesisBeaconInputStream, in the form <stack name>-CSEKinesisBeaconInputStream-<random string>.  This is the stream that your click and impression requests will be sent to from API Gateway.

Scroll down and choose Choose an IAM role.  Select the role with the name in the form of <stack name>-CSEKinesisAnalyticsRole-<random string>.  If your ClickImpressionGenerator.py script is still running and generating data, you should see a stream sample.  Scroll to the bottom and choose Save and continue.

Next, add the SQL for your real-time analytics.  Choose Go to SQL editor, choose Yes, start application, and paste the SQL contents from CSE-SQL.sql into the editor. Choose Save and run SQL.  After a minute or so, you should see data showing up every ten seconds in the CLICKSTREAM.  Choose Exit when you are done editing.

Choose Connect to a destination and select the <stack name>-CSEKinesisBeaconOutputStream-<random string> from the list of streams.  Change the output format to CSV.

Choose Choose an IAM role and select <stack name>-CSEKinesisAnalyticsRole-<random string> again.  Choose Save and continue.  Now your analytics pipeline is complete.  When your script gets to the injected anomaly section, you should get a text message or email to notify you of the anomaly.  To clean up the resources created here, delete the stack in CloudFormation, and stop and then delete the Kinesis Analytics application.

Summary

A pipeline like this can be used for many use cases where anomaly detection is valuable. What solutions have you enabled with this architecture? If you have questions or suggestions, please comment below.

 



32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/32-security-and-compliance-sessions-now-live-in-the-reinvent-2016-session-catalog/

re:Invent 2016 logo

AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

Security & Compliance Track sessions

As in past years, the sessions in the Security & Compliance track will take place in The Venetian | Palazzo in Las Vegas. Here’s what you have to look forward to!

SAC201 – Lessons from a Chief Security Officer: Achieving Continuous Compliance in Elastic Environments

Does meeting stringent compliance requirements keep you up at night? Do you worry about having the right audit trails in place as proof?
Cengage Learning’s Chief Security Officer, Robert Hotaling, shares his organization’s journey to AWS, and how they enabled continuous compliance for their dynamic environment with automation. When Cengage shifted from publishing to digital education and online learning, they needed a secure elastic infrastructure for their data-intensive and cyclical business, and workload layer security tools that would help them meet compliance requirements (e.g., PCI).
In this session, you will learn why building security in from the beginning saves you time (and painful retrofits) later, how to gather and retain audit evidence for instances that are only up for minutes or hours, and how Cengage used Trend Micro Deep Security to meet many compliance requirements and ensured instances were instantly protected as they came online in a hybrid cloud architecture. Session sponsored by Trend Micro, Inc.

 

SAC302 – Automating Security Event Response, from Idea to Code to Execution

With security-relevant services such as AWS Config, VPC Flow Logs, Amazon CloudWatch Events, and AWS Lambda, you now have the ability to programmatically wrangle security events that may occur within your AWS environment, including prevention, detection, response, and remediation. This session covers the process of automating security event response with various AWS building blocks, taking several ideas from drawing board to code, and gaining confidence in your coverage by proactively testing security monitoring and response effectiveness before anyone else does.

 

SAC303 – Become an AWS IAM Policy Ninja in 60 Minutes or Less

Are you interested in learning how to control access to your AWS resources? Have you ever wondered how to best scope down permissions to achieve least privilege permissions access control? If your answer to these questions is “yes,” this session is for you. We take an in-depth look at the AWS Identity and Access Management (IAM) policy language. We start with the basics of the policy language and how to create and attach policies to IAM users, groups, and roles. As we dive deeper, we explore policy variables, conditions, and other tools to help you author least privilege policies. Throughout the session, we cover some common use cases, such as granting a user secure access to an Amazon S3 bucket or to launch an Amazon EC2 instance of a specific type.

 

SAC304 – Predictive Security: Using Big Data to Fortify Your Defenses

In a rapidly changing IT environment, detecting and responding to new threats is more important than ever. This session shows you how to build a predictive analytics stack on AWS, which harnesses the power of Amazon Machine Learning in conjunction with Amazon Elasticsearch Service, AWS CloudTrail, and VPC Flow Logs to perform tasks such as anomaly detection and log analysis. We also demonstrate how you can use AWS Lambda to act on this information in an automated fashion, such as performing updates to AWS WAF and security groups, leading to an improved security posture and alleviating operational burden on your security teams.

 

SAC305 – Auditing a Cloud Environment in 2016: What Tools Can Internal and External Auditors Leverage to Maintain Compliance?

With the rapid increase of complexity in managing security for distributed IT and cloud computing, security and compliance managers can innovate to ensure a high level of security when managing AWS resources. In this session, Chad Woolf, director of compliance for AWS, discusses which AWS service features to leverage to achieve a high level of security assurance over AWS resources, giving you more control of the security of your data and preparing you for a wide range of audits. You can now implement point-in-time audits and continuous monitoring in system architecture. Internal and external auditors can learn about emerging tools for monitoring environments in real time. Follow use case examples and demonstrations of services like Amazon Inspector, Amazon CloudWatch Logs, AWS CloudTrail, and AWS Config. Learn firsthand what some AWS customers have accomplished by leveraging AWS features to meet specific industry compliance requirements.

 

SAC306 – Encryption: It Was the Best of Controls, It Was the Worst of Controls

Encryption is a favorite of security and compliance professionals everywhere. Many compliance frameworks actually mandate encryption. Though encryption is important, it is also treacherous. Cryptographic protocols are subtle, and researchers are constantly finding new and creative flaws in them. Using encryption correctly, especially over time, also is expensive because you have to stay up to date.
AWS wants to encrypt data. And our customers, including Amazon, want to encrypt data. In this talk, we look at some of the challenges with using encryption, how AWS thinks internally about encryption, and how that thinking has informed the services we have built, the features we have vended, and our own usage of AWS.

 

SAC307 – The Psychology of Security Automation

Historically, relationships between developers and security teams have been challenging. Security teams sometimes see developers as careless and ignorant of risk, while developers might see security teams as dogmatic barriers to productivity. Can technologies and approaches such as the cloud, APIs, and automation lead to happier developers and more secure systems? Netflix has had success pursuing this approach, by leaning into the fundamental cloud concept of self-service, the Netflix cultural value of transparency in decision making, and the engineering efficiency principle of facilitating a “paved road.” This session explores how security teams can use thoughtful tools and automation to improve relationships with development teams while creating a more secure and manageable environment. Topics include Netflix’s approach to IAM entity management, Elastic Load Balancing and certificate management, and general security configuration monitoring.

 

SAC308 – Hackproof Your Cloud: Responding to 2016 Threats

In this session, CloudCheckr CTO Aaron Newman highlights effective strategies and tools that AWS users can employ to improve their security posture. Specific emphasis is placed upon leveraging native AWS services. He covers concrete steps that users can begin employing immediately.  Session sponsored by CloudCheckr.

 

SAC309 – You Can’t Protect What You Can’t See: AWS Security Monitoring & Compliance Validation from Adobe

Ensuring security and compliance across a globally distributed, large-scale AWS deployment requires a scalable process and a comprehensive set of technologies. In this session, Adobe will deep-dive into the AWS native monitoring and security services and some Splunk technologies leveraged globally to perform security monitoring across a large number of AWS accounts. You will learn about Adobe’s collection plumbing, including components of S3, Kinesis, CloudWatch, SNS, DynamoDB, and Lambda, as well as the tooling and processes used at Adobe to deliver scalable monitoring without managing an unwieldy number of API keys and input stanzas.  Session sponsored by Splunk.

 

SAC310 – Securing Serverless Architectures, and API Filtering at Layer 7

AWS serverless architecture components such as Amazon S3, Amazon SQS, Amazon SNS, CloudWatch Logs, DynamoDB, Amazon Kinesis, and Lambda can be tightly constrained in their operation. However, it may still be possible to use some of them to propagate payloads that could be used to exploit vulnerabilities in some consuming endpoints or user-generated code. This session explores techniques for enhancing the security of these services, from assessing and tightening permissions in IAM to integrating tools and mechanisms for inline and out-of-band payload analysis that are more typically applied to traditional server-based architectures.

 

SAC311 – Evolving an Enterprise-level Compliance Framework with Amazon CloudWatch Events and AWS Lambda

Johnson & Johnson is in the process of doing a proof of concept to rewrite the compliance framework that they presented at re:Invent 2014. This framework leverages the newest AWS services and abandons the need for continual describes and master rules servers. Instead, Johnson & Johnson plans to use a distributed, event-based architecture that not only reduces costs but also assigns costs to the appropriate projects rather than central IT.

 

SAC312 – Architecting for End-to-End Security in the Enterprise

This session tells how our most mature, security-minded Fortune 500 customers adopt AWS while improving end-to-end protection of their sensitive data. Learn about the enterprise security architecture decisions made during actual sensitive workload deployments as told by the AWS professional services and the solution architecture team members who lived them. In this very prescriptive, technical walkthrough, we share lessons learned from the development of enterprise security strategy, security use-case development, security configuration decisions, and the creation of AWS security operations playbooks to support customer architectures.

 

SAC313 – Enterprise Patterns for Payment Card Industry Data Security Standard (PCI DSS)

Professional services has completed five deep PCI engagements with enterprise customers over the last year. Common patterns were identified and codified in various artifacts. This session introduces the patterns that help customers address PCI requirements in a standard manner that also meets AWS best practices. Hear customers speak about their side of the journey and the solutions that they used to deploy a PCI compliance workload.

 

SAC314 – GxP Compliance in the Cloud

GxP is an acronym that refers to the regulations and guidelines applicable to life sciences organizations that make food and medical products such as drugs, medical devices, and medical software applications. The overall intent of GxP requirements is to ensure that food and medical products are safe for consumers and to ensure the integrity of data used to make product-related safety decisions.

 

The term GxP encompasses a broad range of compliance-related activities such as Good Laboratory Practices (GLP), Good Clinical Practices (GCP), Good Manufacturing Practices (GMP), and others, each of which has product-specific requirements that life sciences organizations must implement based on the 1) type of products they make and 2) country in which their products are sold. When life sciences organizations use computerized systems to perform certain GxP activities, they must ensure that the computerized GxP system is developed, validated, and operated appropriately for the intended use of the system.

 

For this session, co-presented with Merck, services such as Amazon EC2, Amazon CloudWatch Logs, AWS CloudTrail, AWS CodeCommit, Amazon Simple Storage Service (S3), and AWS CodePipeline will be discussed with an emphasis on implementing GxP-compliant systems in the AWS Cloud.

 

SAC315 – Scaling Security Operations: Using AWS Services to Automate Governance of Security Controls and Remediate Violations

This session enables security operators to use data provided by AWS services such as AWS CloudTrail, AWS Config, Amazon CloudWatch Events, and VPC Flow Logs to reduce vulnerabilities, and when required, execute timely security actions that fix the violation or gather more information about the vulnerability and attacker. We look at security practices for compliance with PCI, CIS Security Controls, and HIPAA. We dive deep into an example from an AWS customer, Siemens AG, which has automated governance and implemented automated remediation using CloudTrail, AWS Config Rules, and AWS Lambda. A prerequisite for this session is knowledge of software development with Java, Python, or Node.

 

SAC316 – Security Automation: Spend Less Time Securing Your Applications

As attackers become more sophisticated, web application developers need to constantly update their security configurations. Static firewall rules are no longer good enough. Developers need a way to deploy automated security that can learn from the application behavior and identify bad traffic patterns to detect bad bots or bad actors on the Internet. This session showcases some of the real-world customer use cases that use machine learning and AWS WAF (a web application firewall) to automatically identify bad actors affecting multiplayer gaming applications. We also present tutorials and code samples that show how customers can analyze traffic patterns and deploy new AWS WAF rules on the fly.

 

SAC317 – IAM Best Practices to Live By

This session covers AWS Identity and Access Management (IAM) best practices that can help improve your security posture. We cover how to manage users and their security credentials. We also explain why you should delete your root access keys—or at the very least, rotate them regularly. Using common use cases, we demonstrate when to choose between using IAM users and IAM roles. Finally, we explore how to set permissions to grant least privilege access control in one or more of your AWS accounts.

 

SAC318 – Life Without SSH: Immutable Infrastructure in Production

This session covers what a real-world production deployment of a fully automated deployment pipeline looks like with instances that are deployed without SSH keys. By leveraging AWS CodeDeploy and Docker, we will show how we achieved semi-immutable and fully immutable infrastructures, and what the challenges and remediations were.

 

SAC401 – 5 Security Automation Improvements You Can Make by Using Amazon CloudWatch Events and AWS Config Rules

This session demonstrates 5 different security and compliance validation actions that you can perform using Amazon CloudWatch Events and AWS Config rules. This session focuses on the actual code for the various controls, actions, and remediation features, and how to use various AWS services and features to build them. The demos in this session include CIS Amazon Web Services Foundations validation; host-based AWS Config rules validation using AWS Lambda, SSH, and VPC-E; automatic creation and assigning of MFA tokens when new users are created; and automatic instance isolation based on SSH logons or VPC Flow Logs deny logs. This session focuses on code and live demos.

 

re:Source Mini Con for Security Services sessions

The re:Source Mini Con for Security Services offers you an opportunity to dive even deeper into security and compliance topics. Think of it as a one-day, fully immersive mini-conference. The Mini Con will take place in The Mirage in Las Vegas.

SEC301 – Audit Your AWS Account Against Industry Best Practices: The CIS AWS Benchmarks

Audit teams can consistently evaluate the security of an AWS account. Best practices greatly reduce complexity when managing risk and auditing the use of AWS for critical, audited, and regulated systems. You can integrate these security checks into your security and audit ecosystem. Center for Internet Security (CIS) benchmarks are incorporated into products developed by 20 security vendors, are referenced by PCI 3.1 and FedRAMP, and are included in the National Vulnerability Database (NVD) National Checklist Program (NCP). This session shows you how to implement foundational security measures in your AWS account. The prescribed best practices help make implementation of core AWS security measures more straightforward for security teams and AWS account owners.

 

SEC302 – WORKSHOP: Working with AWS Identity and Access Management (IAM) Policies and Configuring Network Security Using VPCs and Security Groups

In this 2.5-hour workshop, we will show you how to manage permissions by drafting AWS IAM policies that adhere to the principle of least privilege–granting the least permissions required to achieve a task. You will learn all the ins and outs of drafting and applying IAM policies appropriately to help secure your AWS resources. In addition, we will show you how to configure network security using VPCs and security groups.

 

SEC303 – Get the Most from AWS KMS: Architecting Applications for High Security

AWS Key Management Service provides an easy and cost-effective way to secure your data in AWS. In this session, you learn about leveraging the latest features of the service to minimize risk for your data. We also review the recently released Import Key feature that gives you more control over the encryption process by letting you bring your own keys to AWS.

 

SEC304 – Reduce Your Blast Radius by Using Multiple AWS Accounts Per Region and Service

This session shows you how to reduce your blast radius by using multiple AWS accounts per region and service, which helps limit the impact of a critical event such as a security breach. Using multiple accounts helps you define boundaries and provides blast-radius isolation.

 

SEC305 – Scaling Security Resources for Your First 10 Million Customers

Cloud computing offers many advantages, such as the ability to scale your web applications or website on demand. But how do you scale your security and compliance infrastructure along with the business? Join this session to understand best practices for scaling your security resources as you grow from zero to millions of users. Specifically, you learn the following:
  • How to scale your security and compliance infrastructure to keep up with a rapidly expanding threat base.
  • The security implications of scaling for numbers of users and numbers of applications, and how to satisfy both needs.
  • How agile development with integrated security testing and validation leads to a secure environment.
  • Best practices and design patterns of a continuous delivery pipeline and the appropriate security-focused testing for each.
  • The necessity of treating your security as code, just as you would do with infrastructure.
The services covered in this session include AWS IAM, Auto Scaling, Amazon Inspector, AWS WAF, and Amazon Cognito.

 

SEC306 – WORKSHOP: How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0

AWS supports identity federation using SAML (Security Assertion Markup Language) 2.0. Using SAML, you can configure your AWS accounts to integrate with your identity provider (IdP). Once configured, your federated users are authenticated and authorized by your organization’s IdP, and then can use single sign-on (SSO) to sign in to the AWS Management Console. This not only obviates the need for your users to remember yet another user name and password, but it also streamlines identity management for your administrators. This is great if your federated users want to access the AWS Management Console, but what if they want to use the AWS CLI or programmatically call AWS APIs?
In this 2.5-hour workshop, we will show you how you can implement federated API and CLI access for your users. The examples provided use the AWS Python SDK and some additional client-side integration code. If you have federated users that require this type of access, implementing this solution should earn you more than one high five on your next trip to the water cooler.

 

SEC307 – Microservices, Macro Security Needs: How Nike Uses a Multi-Layer, End-to-End Security Approach to Protect Microservice-Based Solutions at Scale

Microservice architectures provide numerous benefits but also have significant security challenges. This session presents how Nike uses layers of security to protect consumers and business. We show how network topology, network security primitives, identity and access management, traffic routing, secure network traffic, secrets management, and host-level security (antivirus, intrusion prevention system, intrusion detection system, file integrity monitoring) all combine to create a multilayer, end-to-end security solution for our microservice-based premium consumer experiences. Technologies to be covered include Amazon Virtual Private Cloud, access control lists, security groups, IAM roles and profiles, AWS KMS, NAT gateways, ELB load balancers, and Cerberus (our cloud-native secrets management solution).

 

SEC308 – Securing Enterprise Big Data Workloads on AWS

Security of big data workloads in a hybrid IT environment often comes as an afterthought. This session discusses how enterprises can architect securing big data workloads on AWS. We cover the application of authentication, authorization, encryption, and additional security principles and mechanisms to workloads leveraging Amazon Elastic MapReduce and Amazon Redshift.

 

SEC309 – Proactive Security Testing in AWS: From Early Implementation to Deployment Security Testing

Attend this session to learn about security testing your applications in AWS. Effective security testing is challenging, but multiple features and services within AWS make security testing easier. This session covers common approaches to testing, including how we think about testing within AWS, how to apply AWS services to your test setup, remediating findings, and automation.

 

SEC310 – Mitigating DDoS Attacks on AWS: Five Vectors and Four Use Cases

Distributed denial of service (DDoS) attack mitigation has traditionally been a challenge for those hosting on fixed infrastructure. In the cloud, users can build applications on elastic infrastructure that is capable of mitigating and absorbing DDoS attacks. What once required overprovisioning, additional infrastructure, or third-party services is now an inherent capability of many cloud-based applications. This session explains common DDoS attack vectors and how AWS customers with different use cases are addressing these challenges. As part of the session, we show you how to build applications that are resilient to DDoS and demonstrate how they work in practice.

 

SEC311 – How to Automate Policy Validation

Managing permissions across a growing number of identities and resources can be time consuming and complex. Testing, validating, and understanding permissions before and after policy changes are deployed is critical to ensuring that your users and systems have the appropriate level of access. This session walks through the tools that are available to test, validate, and understand the permissions in your account. We demonstrate how to use these tools and how to automate them to continually validate the permissions in your accounts. The tools demonstrated in this session help you answer common questions such as:
  • How does a policy change affect the overall permissions for a user, group, or role?
  • Who has access to perform powerful actions?
  • Which services can this role access?
  • Can a user access a specific Amazon S3 bucket?

 

SEC312 – State of the Union for re:Source Mini Con for Security Services

AWS CISO Steve Schmidt presents the state of the union for re:Source Mini Con for Security Services. He addresses the state of the security and compliance ecosystem; large enterprise customer additions in key industries; the vertical view: maturing spaces for AWS security assurance (GxP, IoT, CIS foundations); and the international view: data privacy protections and data sovereignty. The state of the union also addresses a number of new identity, directory, and access services, and closes by looking at what’s on the horizon.

 

SEC401 – Automated Formal Reasoning About AWS Systems

Automatic and semiautomatic mechanical theorem provers are now being used within AWS to find proofs in mathematical logic that establish desired properties of key AWS components. In this session, we outline these efforts and discuss how mechanical theorem provers are used to replay found proofs of desired properties when software artifacts or networks are modified, thus helping provide security throughout the lifetime of the AWS system. We consider these use cases:
  • Using constraint solving to show that VPCs have desired safety properties, and maintaining this continuously at each change to the VPC.
  • Using automatic mechanical theorem provers to prove that s2n’s HMAC is correct and maintaining this continuously at each change to the s2n source code.
  • Using semiautomatic mechanical theorem provers to prove desired safety properties of Sassy protocol.

– Craig

IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region

Post Syndicated from Zaher Dannawi original https://aws.amazon.com/blogs/security/iam-service-last-accessed-data-now-available-for-the-asia-pacific-mumbai-region-2/

In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

The IAM console now shows service last accessed data in 11 regions: US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), and South America (Sao Paulo).

Note: IAM began collecting service last accessed data in most regions on October 1, 2015. Information about AWS services accessed before this date is not included in service last accessed data. If you need historical access information about your IAM entities before this date, see the AWS CloudTrail documentation. Also, see Tracking Period Regional Differences to learn the start date of service last accessed data for supported regions.

For more information about IAM and service last accessed data, see Service Last Accessed Data. If you have a comment about service last accessed data, submit it below. If you have a question, please start a new thread on the IAM forum.

– Zaher

How to Use Amazon CloudWatch Events to Monitor Application Health

Post Syndicated from Saurabh Bangad original https://aws.amazon.com/blogs/security/how-to-use-amazon-cloudwatch-events-to-monitor-application-health/

Amazon CloudWatch Events enables you to react selectively to events in the cloud as well as in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns. CloudWatch Events lets you process both AWS-provided events and custom events (those that you create and inject yourself). The AWS-provided events that CloudWatch Events supports include:

  • Amazon EC2 instance state-change events.
  • Auto Scaling lifecycle events, and instance launch and terminate notifications.
  • Scheduled events.
  • AWS API call and console sign-in events reported by AWS CloudTrail.

See the full list of supported events.

In this post, I will show how to inject your own events into CloudWatch Events, and define event patterns and their corresponding responses.

Scenario—application availability

In our scenario, an organization deploys an application to a large number of computers. The application deploys a health-checking component along with the application to each computer, which in turn sends application health reports to a central location. The organization wants to monitor all instances of the deployed application and its availability in near real time. As a result, the health-checking component reports one of three color-coded health statuses about the application:

  • Green: The application is healthy, so no action is required.
  • Yellow: The application has failed health checks. Process the report with AWS Lambda.
  • Red: The application might have failed on at least one computer because it passed the threshold of failed health checks. Notify the operations team.

Using this color-coding, you can monitor an application’s availability and pursue follow-up actions, if required.

Set up the solution

This solution requires you to have a central location that will collect all the health-check data, analyze it, and then take action based on the color-coding. To do this, you will use CloudWatch Events and have the health-checking component send reports as events.

You will use the put-events AWS CLI command, which expects an event entry to be defined by at least two fields: Source and DetailType (DetailType describes the kind of detail carried in the event’s payload). I use com.mycompany.myapp as my Source. I use myCustomHealthStatus as the DetailType. For the optional payload of an event entry, I use a JSON object with one key, HealthStatus, with possible values of Green, Yellow, and Red.
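Although this post drives put-events through the AWS CLI, the health-checking component could just as easily use an SDK. The following minimal boto3 sketch (my own illustration, not part of the original solution) shows how such a component might emit one of these events; the field values match the Source, DetailType, and HealthStatus payload described above.

import json
import boto3

events = boto3.client('events', region_name='us-west-2')

response = events.put_events(
    Entries=[
        {
            'Source': 'com.mycompany.myapp',
            'DetailType': 'myCustomHealthStatus',
            # Detail must be a JSON string; HealthStatus is Green, Yellow, or Red.
            'Detail': json.dumps({'HealthStatus': 'Yellow'})
        }
    ]
)
print(response['FailedEntryCount'])  # 0 means the event was accepted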

The following diagram depicts the HealthStatus application’s reporting architecture.

Diagram of the HealthStatus application's reporting architecture

A CloudWatch Events rule is the conditional statement that maps an incoming event and routes the event to its targets. A target is a resource, such as a Lambda function or an Amazon SNS topic that is invoked when a rule is triggered. As shown in the preceding diagram, I set up the rules such that:

  1. If HealthStatus is Yellow, invoke a Lambda function named SampleAppDebugger.
  2. If HealthStatus is Red:
    1. Publish to an SNS topic named RedHealthNotifier, which notifies the operations team and indicates that the situation might require human intervention.
    2. Publish to an Amazon SQS queue named ReportInspectionQueue for the operations team to inspect the individual health-check status by using custom scripts.

Now, let’s deploy the solution. Be sure to deploy it in the US West (Oregon) Region, or choose a region where all the services used in this post are supported.

Create three targets

To create the rules noted above, you first need to create three targets. The first target is invoked when HealthStatus is Yellow. The second and the third targets are invoked when HealthStatus is Red. You can learn more using the getting started links provided in this section.

To create the three targets:

  1. Create a Lambda function called SampleAppDebugger with the run-time version Python 2.7 by using the following code. Creation of the Lambda function will in turn create CloudWatch Logs groups for its logging.
#This Lambda function can process the Yellow HealthStatus report sent by an AppInstance.
#For the purposes of this blog post, the function only prints the received report.

from __future__ import print_function
def lambda_handler(event, context):
      print("nProcessing the following Yellow reportnn")
      print(event)
      print("nnProcessing completed!n")
  2. Create an SNS topic named RedHealthNotifier. To receive messages published to the topic, you have to subscribe an endpoint to that topic. You must complete the subscription by opening the confirmation message from AWS Notifications, and then clicking the link to confirm your subscription.
  3. Create an SQS queue named ReportInspectionQueue.

Create two rules

You will now create two rules. The first rule matches events with a HealthStatus of Yellow. The second rule matches events with a HealthStatus of Red.

To create the first rule with the AWS CLI:

  1. Save the following event pattern as a file named YellowPattern.json.
{
 "detail-type": [
   "myCustomHealthStatus"
 ],
 "detail": {
   "HealthStatus": [
     "Yellow"
   ]
 }
}
  2. Create a rule named myCustomHealthStatusYellow by running the following command.
$ aws events put-rule \
--name myCustomHealthStatusYellow \
--description myCustomHealthStatusYellow \
--event-pattern file://YellowPattern.json
  3. Make the Lambda function, SampleAppDebugger, that you created earlier the target of the rule by running the following command.
$ aws events put-targets \
--rule myCustomHealthStatusYellow \
--target Id=1,Arn=arn:aws:lambda:<Region>:<AccountNumber>:function:SampleAppDebugger
  4. Use the following command to add permission to the Lambda function so that CloudWatch Events can invoke the function when the rule is triggered.
$ aws lambda add-permission \
--function-name SampleAppDebugger \
--statement-id AllowCloudWatchEventsToInvoke \
--action 'lambda:InvokeFunction' \
--principal events.amazonaws.com \
--source-arn arn:aws:events:<Region>:<AccountNumber>:rule/myCustomHealthStatusYellow

If you prefer to use the AWS Management Console, you can create the first rule with the following steps:

  1. Go to the Amazon CloudWatch console.
  2. Click Events in the left pane.
  3. Click Create Rule, and then click Show advanced options.
  4. Click Edit the JSON version of the pattern.
  5. Paste the contents of the YellowPattern.json file in the text box.
  6. Click Add target under the Targets section. Select the Lambda function SampleAppDebugger from the list.
  7. Click Configure details.
  8. Name the rule myCustomHealthStatusYellow.

Now, create the second rule, myCustomHealthStatusRed. Follow the same preceding steps, but replace Yellow with Red in the event pattern, and save the file as RedPattern.json. Add the SNS topic, RedHealthNotifier, and the SQS queue, ReportInspectionQueue, which you created earlier, as the two targets for your rule. Make sure to use a different ID for the second target if you are using the AWS CLI, or click Add Target one more time in the AWS Management Console. If you use the AWS CLI, also add resource-based policies that allow CloudWatch Events to invoke each of the targets.
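For reference, here is a boto3 sketch (an illustration, not code from the CloudFormation template) of creating the second rule and attaching its two targets; the topic and queue ARNs are placeholders for the RedHealthNotifier topic and ReportInspectionQueue queue you created earlier, and, as noted above, those targets also need resource-based policies that allow events.amazonaws.com to publish and send messages.

import json
import boto3

events = boto3.client('events', region_name='us-west-2')

red_pattern = {
    'detail-type': ['myCustomHealthStatus'],
    'detail': {'HealthStatus': ['Red']}
}

events.put_rule(
    Name='myCustomHealthStatusRed',
    Description='myCustomHealthStatusRed',
    EventPattern=json.dumps(red_pattern)
)

# Placeholder ARNs; substitute the resources created in the earlier steps.
events.put_targets(
    Rule='myCustomHealthStatusRed',
    Targets=[
        {'Id': '1', 'Arn': 'arn:aws:sns:us-west-2:123456789012:RedHealthNotifier'},
        {'Id': '2', 'Arn': 'arn:aws:sqs:us-west-2:123456789012:ReportInspectionQueue'}
    ]
)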

Test the setup

To test the setup, you inject your custom events and then monitor the rest of the system for the desired results. You will again use the CloudWatch Events put-events CLI command to inject your custom events, simulating the health-checking component’s input from the CLI.

Start with a test event by saving the following to a file named TestYellowEvent.json.

[
  {
    "Source": "com.mycompany.myapp",
    "DetailType": "myCustomHealthStatus",
    "Detail": "{\"HealthStatus\": \"Yellow\", \"OtherField\": \"OtherValue\"}"
  }
]

Create two more files named TestRedEvent.json and TestGreenEvent.json, and replace the value of the HealthStatus field with Red and Green, respectively.

Run the following command to inject a custom event with Yellow status.

$ aws events put-events --entries file://TestYellowEvent.json

Run the following command to inject a custom event with Green status.

$ aws events put-events --entries file://TestGreenEvent.json

Run the following command to inject a custom event with Red status.

$ aws events put-events --entries file://TestRedEvent.json

You can verify the results by using the CloudWatch console:

  1. Click Metrics in the left pane.
  2. In the CloudWatch Metrics by Category pane, under Event Metrics, select By Rule Name, and select the MatchedEvents metric for the respective rule.

When you inject the Green event, nothing happens because there is no rule in CloudWatch Events that matches this event. However, when the Yellow event is injected, CloudWatch Events matches it with a rule and invokes the Lambda function. The function in turn generates an entry in the corresponding CloudWatch Logs group, and the CloudWatch metrics can be verified from the Lambda console. Similarly, the Red event results in a message being sent to the SQS queue, which you can receive from the Amazon SQS console, and a message being published to the SNS topic, which in turn sends an email to the address you subscribed during the target-creation step.

If you were to inject an event with a HealthStatus of, say, Orange or with a detail-type of, say, myProcessStatus, the event would still be injected. However, the event would not match any rules and therefore would be treated similarly to the Green event.

Summary

In this post, I showed you how to create two CloudWatch Events rules that monitor for color-coded health status reports from multiple instances of an application. I simulated the input from the health-checking component by using the put-events CLI command. Your applications can also use the AWS SDK to inject the events.

If you have comments about this blog post, please submit them below in the “Comments” section. If you have troubleshooting questions related to the solution in this post, please start a new thread on the CloudWatch forum.

– Saurabh