Tag Archives: AWS Trusted Advisor

Now Open AWS EU (Paris) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region lets AWS customers better serve end users in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (formerly SAS 70), SOC 2, and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security-sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of its beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate its innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought-after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome Days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;


Introducing the New GDPR Center and “Navigating GDPR Compliance on AWS” Whitepaper

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/introducing-the-new-gdpr-center-and-navigating-gdpr-compliance-on-aws-whitepaper/

European Union flag

At AWS re:Invent 2017, the AWS Compliance team had excellent engagements with AWS customers about the General Data Protection Regulation (GDPR), and those discussions generated helpful input. Today, I am announcing the resulting enhancements to our recently launched GDPR Center and the release of a new whitepaper, Navigating GDPR Compliance on AWS. The resources available on the GDPR Center are designed to give you the GDPR basics and provide some ideas as you work out the details of the regulation and find a path to compliance.

In this post, I focus on two GDPR requirements, as expressed in specific articles of the regulation, and explain some of the AWS services and other resources that can help you meet them.

Background about the GDPR

The GDPR is a European privacy law that will become enforceable on May 25, 2018, and is intended to harmonize data protection laws throughout the European Union (EU) by applying a single data protection law that is binding throughout each EU member state. The GDPR not only applies to organizations located within the EU, but also to organizations located outside the EU if they offer goods or services to, or monitor the behavior of, EU data subjects. All AWS services will comply with the GDPR in advance of the May 25, 2018, enforcement date.

We are already seeing customers move personal data to AWS to help solve challenges in complying with the EU’s GDPR because of AWS’s advanced toolset for identifying, securing, and managing all types of data, including personal data. Steve Schmidt, the AWS CISO, has already written about the internal and external work we have been undertaking to help you use AWS services to meet your own GDPR compliance goals.

Article 25 – Data Protection by Design and by Default (Privacy by Design)

Privacy by Design is the integration of data privacy and compliance into the systems development process, enabling applications, systems, and accounts, among other things, to be secure by default. To secure your AWS account, we offer a script to evaluate your AWS account against the full Center for Internet Security (CIS) Amazon Web Services Foundations Benchmark 1.1. You can access this public benchmark on GitHub. Additionally, AWS Trusted Advisor is an online resource to help you improve security by optimizing your AWS environment. Among other things, Trusted Advisor lists a number of security-related controls you should be monitoring. AWS also offers AWS CloudTrail, a logging tool to track usage and API activity. Another example of tooling that enables data protection is Amazon Inspector, which includes a knowledge base of hundreds of rules (regularly updated by AWS security researchers) mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled or vulnerable software versions installed. These and other tools enable you to design an environment that protects customer data by design.
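As a minimal sketch of getting CloudTrail going (the trail and bucket names here are placeholders, and the S3 bucket must already exist with a bucket policy that lets CloudTrail write to it), you can create and start a trail from the AWS CLI:

aws cloudtrail create-trail --name gdpr-audit-trail --s3-bucket-name my-audit-log-bucket
aws cloudtrail start-logging --name gdpr-audit-trail

Once the trail is logging, API activity in the account is recorded to the bucket for later review.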

An accurate inventory of all the GDPR-impacting data is important but sometimes difficult to assess. AWS has some advanced tooling, such as Amazon Macie, to help you determine where customer data is present in your AWS resources. Macie uses advanced machine learning to automatically discover and classify data so that you can protect data, per Article 25.

Article 32 – Security of Processing

You can use many AWS services and features to secure the processing of data regulated by the GDPR. Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. With Amazon VPC, you can make the Amazon Cloud a seamless extension of your existing on-premises resources.

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses hardware security modules (HSMs) to help protect your keys. Managing keys with AWS KMS allows you to choose to encrypt data either on the server side or the client side. AWS KMS is integrated with several other AWS services to help you protect the data you store with these services. AWS KMS is also integrated with CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. You can also use the AWS Encryption SDK to correctly generate and use encryption keys, as well as protect keys after they have been used.
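As a hedged example (the key description, file name, and bucket name are illustrative), you could create a customer master key with AWS KMS and then reference it for server-side encryption when uploading an object to Amazon S3:

aws kms create-key --description "Key for personal data at rest"
aws s3 cp report.csv s3://my-bucket/report.csv --sse aws:kms --sse-kms-key-id <key-id-returned-by-create-key>

The KeyId returned by the first command is what you pass to the --sse-kms-key-id option of the second.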

We also recently announced new encryption and security features for Amazon S3, including default encryption and a detailed inventory report. Services of this type as well as additional GDPR enablers will be published regularly on our GDPR Center.
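For example (the bucket name is a placeholder), default encryption can be turned on for a bucket with a single CLI call, after which every new object is encrypted with SSE-KMS even if the uploader does not request it:

aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'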

Other resources

As you prepare for GDPR, you may want to visit our AWS Customer Compliance Center or Tools for Amazon Web Services to learn about options for building anything from small scripts that delete data to a full orchestration framework that uses AWS Code services.

-Chad

Now Open – AWS China (Ningxia) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-china-ningxia-region/

Today we launched our 17th Region globally, and the second in China. The AWS China (Ningxia) Region, operated by Ningxia Western Cloud Data Technology Co. Ltd. (NWCD), is generally available now and provides customers another option to run applications and store data on AWS in China.

The Details
At launch, the new China (Ningxia) Region, operated by NWCD, supports Auto Scaling, AWS Config, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Direct Connect, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon EC2 Systems Manager, AWS Elastic Beanstalk, Amazon ElastiCache, Amazon Elasticsearch Service, Elastic Load Balancing, Amazon EMR, Amazon Glacier, AWS Identity and Access Management (IAM), Amazon Kinesis Streams, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Support API, AWS Trusted Advisor, Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, and VM Import. Visit the AWS China Products page for additional information on these services.

The Region supports all sizes of C4, D2, M4, T2, R4, I3, and X1 instances.

Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

Operating Partner
To comply with China’s legal and regulatory requirements, AWS has formed a strategic technology collaboration with NWCD to operate and provide services from the AWS China (Ningxia) Region. Founded in 2015, NWCD is a licensed datacenter and cloud services provider based in Ningxia, China. NWCD joins Sinnet, the operator of the AWS China (Beijing) Region, as an AWS operating partner in China. Through these relationships, AWS provides its industry-leading technology, guidance, and expertise to NWCD and Sinnet, while NWCD and Sinnet operate and provide AWS cloud services to local customers. While the cloud services offered in both AWS China Regions are the same as those available in other AWS Regions, the AWS China Regions are different in that they are isolated from all other AWS Regions and operated by AWS’s Chinese partners separately from all other AWS Regions. Customers using the AWS China Regions enter into customer agreements with Sinnet and NWCD, rather than with AWS.

Use it Today
The AWS China (Ningxia) Region, operated by NWCD, is open for business, and you can start using it now! Starting today, Chinese developers, startups, and enterprises, as well as government, education, and non-profit organizations, can leverage AWS to run their applications and store their data in the new AWS China (Ningxia) Region, operated by NWCD. Customers already using the AWS China (Beijing) Region, operated by Sinnet, can select the AWS China (Ningxia) Region directly from the AWS Management Console, while new customers can request an account at www.amazonaws.cn to begin using both AWS China Regions.

Jeff;


AWS Systems Manager – A Unified Interface for Managing Your Cloud and Hybrid Resources

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-systems-manager/

AWS Systems Manager is a new way to manage your cloud and hybrid IT environments. AWS Systems Manager provides a unified user interface that simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. This service is absolutely packed full of features. It defines a new experience around grouping, visualizing, and reacting to problems using features from products like Amazon EC2 Systems Manager (SSM) to enable rich operations across your resources.

As I said above, there are a lot of powerful features in this service. We won’t be able to dive deep on all of them here, but it’s easy to go to the console and get started with any of the tools.

Resource Groupings

Resource Groups allow you to create logical groupings of most resources that support tagging: Amazon Elastic Compute Cloud (EC2) instances, Amazon Simple Storage Service (S3) buckets, Elastic Load Balancing load balancers, Amazon Relational Database Service (RDS) instances, Amazon Virtual Private Cloud, Amazon Kinesis streams, Amazon Route 53 zones, and more. Previously, you could use the AWS Console to define resource groupings, but AWS Systems Manager provides this new resource group experience via a new console and API. These groupings are a fundamental building block of Systems Manager in that they are frequently the target of various operations you may want to perform, such as compliance management, software inventories, patching, and other automations.

You start by defining a group based on tag filters. From there you can view all of the resources in a centralized console. You would typically use these groupings to differentiate between applications, application layers, and environments like production or dev – but you can make your own rules about how to use them as well. If you imagine a typical 3-tier web app, you might have a few EC2 instances, an ELB, a few S3 buckets, and an RDS instance. You can define a grouping for that application and work with all of those different resources simultaneously.
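As a sketch (the group name, tag key, and tag value are illustrative assumptions), you can create a tag-based resource group from the AWS CLI; note that the Query field is itself a JSON string, so its quotes are escaped:

aws resource-groups create-group --name my-web-app --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"App\",\"Values\":[\"my-web-app\"]}]}"}'

This groups every supported resource carrying the tag App=my-web-app.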

Insights

AWS Systems Manager automatically aggregates and displays operational data for each resource group through a dashboard. You no longer need to navigate through multiple AWS consoles to view all of your operational data. You can easily integrate your existing Amazon CloudWatch dashboards, AWS Config rules, AWS CloudTrail trails, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts. You can also easily view your software inventories across your fleet. AWS Systems Manager also provides a compliance dashboard allowing you to see the state of various security controls and patching operations across your fleets.

Acting on Insights

Building on the success of EC2 Systems Manager (SSM), AWS Systems Manager takes all of the features of SSM and provides a central place to access them. These are all the same experiences you would have through SSM, with a more accessible console and centralized interface. You can use the resource groups you’ve defined in Systems Manager to visualize and act on groups of resources.

Automation


Automations allow you to define common IT tasks as a JSON document that specifies a list of steps. You can also use community-published documents. These documents can be executed through the Console, CLIs, SDKs, scheduled maintenance windows, or triggered based on changes in your infrastructure through CloudWatch Events. You can track and log the execution of each step in the documents and prompt for additional approvals. Automation also allows you to incrementally roll out changes and automatically halt when errors occur. You can start executing an automation directly on a resource group, and it will apply itself to the resources that it understands within the group.
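For example, you can restart an instance through the AWS-owned AWS-RestartEC2Instance document (the instance ID below is a placeholder):

aws ssm start-automation-execution --document-name "AWS-RestartEC2Instance" --parameters "InstanceId=i-0123456789abcdef0"

The command returns an execution ID that you can pass to get-automation-execution to track the progress of each step.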

Run Command

Run Command is a superior alternative to enabling SSH on your instances. It provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for SSH bastions or remote PowerShell. It has granular IAM permissions that allow you to restrict which roles or users can run certain commands.
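As a quick sketch (the tag key and value are placeholders), running a one-off shell command across all instances tagged Environment=production looks like this:

aws ssm send-command --document-name "AWS-RunShellScript" --targets "Key=tag:Environment,Values=production" --parameters 'commands=["uptime"]'

The target instances need the SSM agent installed and an instance profile that permits Systems Manager access.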

Patch Manager, Maintenance Windows, and State Manager

I’ve written about Patch Manager before and if you manage fleets of Windows and Linux instances it’s a great way to maintain a common baseline of security across your fleet.

Maintenance windows allow you to schedule instance maintenance and other disruptive tasks for a specific time window.

State Manager allows you to control various server configuration details like anti-virus definitions, firewall settings, and more. You can define policies in the console or run existing scripts, PowerShell modules, or even Ansible playbooks directly from S3 or GitHub. You can query State Manager at any time to view the status of your instance configurations.
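As a hedged example (the tag values and schedule are assumptions; AWS-UpdateSSMAgent is an AWS-owned document), you could use a State Manager association to keep the SSM agent current across a tagged fleet:

aws ssm create-association --name "AWS-UpdateSSMAgent" --targets "Key=tag:Environment,Values=production" --schedule-expression "rate(7 days)"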

Things To Know

There’s some interesting terminology here. We haven’t done the best job of naming things in the past so let’s take a moment to clarify. EC2 Systems Manager (sometimes called SSM) is what you used before today. You can still invoke aws ssm commands. However, AWS Systems Manager builds on and enhances many of the tools provided by EC2 Systems Manager and allows those same tools to be applied to more than just EC2. When you see the phrase “Systems Manager” in the future you should think of AWS Systems Manager and not EC2 Systems Manager.

AWS Systems Manager with all of this useful functionality is provided at no additional charge. It is immediately available in all public AWS regions.

The best part about these services is that, even with their tight integrations, each one is designed to be used in isolation as well. If you only need one component of these services it’s simple to get started with only that component.

There’s a lot more than I could ever document in this post so I encourage you all to jump into the console and documentation to figure out where you can start using AWS Systems Manager.

Randall

The 10 Most Viewed Security-Related AWS Knowledge Center Articles and Videos for November 2017

Post Syndicated from Maggie Burke original https://aws.amazon.com/blogs/security/the-10-most-viewed-security-related-aws-knowledge-center-articles-and-videos-for-november-2017/

AWS Knowledge Center image

The AWS Knowledge Center helps answer the questions most frequently asked by AWS Support customers. The following 10 Knowledge Center security articles and videos have been the most viewed this month. It’s likely you’ve wondered about a few of these topics yourself, so here’s a chance to learn the answers!

  1. How do I create an AWS Identity and Access Management (IAM) policy to restrict access for an IAM user, group, or role to a particular Amazon Virtual Private Cloud (VPC)?
    Learn how to apply a custom IAM policy to restrict IAM user, group, or role permissions for creating and managing Amazon EC2 instances in a specified VPC.
  2. How do I use an MFA token to authenticate access to my AWS resources through the AWS CLI?
    One IAM best practice is to protect your account and its resources by using a multi-factor authentication (MFA) device. If you plan to use the AWS Command Line Interface (CLI) while using an MFA device, you must create a temporary session token (see the sketch after this list).
  3. Can I restrict an IAM user’s EC2 access to specific resources?
    This article demonstrates how to link multiple AWS accounts through AWS Organizations and isolate IAM user groups in their own accounts.
  4. I didn’t receive a validation email for the SSL certificate I requested through AWS Certificate Manager (ACM)—where is it?
    Can’t find your ACM validation emails? Be sure to check the email address to which you requested that ACM send validation emails.
  5. How do I create an IAM policy that has a source IP restriction but still allows users to switch roles in the AWS Management Console?
    Learn how to write an IAM policy that not only includes a source IP restriction but also lets your users switch roles in the console.
  6. How do I allow users from another account to access resources in my account through IAM?
    If you have the 12-digit account number and permissions to create and edit IAM roles and users for both accounts, you can permit specific IAM users to access resources in your account.
  7. What are the differences between a service control policy (SCP) and an IAM policy?
    Learn how to distinguish an SCP from an IAM policy.
  8. How do I share my customer master keys (CMKs) across multiple AWS accounts?
    To grant another account access to your CMKs, create an IAM policy on the secondary account that grants access to use your CMKs.
  9. How do I set up AWS Trusted Advisor notifications?
    Learn how to receive free weekly email notifications from Trusted Advisor.
  10. How do I use AWS Key Management Service (AWS KMS) encryption context to protect the integrity of encrypted data?
    Encryption context name-value pairs used with AWS KMS encryption and decryption operations provide a method for checking ciphertext authenticity. Learn how to use encryption context to help protect your encrypted data.
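As promised in item 2 above, here is a minimal sketch of the MFA flow (the account ID, user name, and token code are placeholders):

aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/jdoe --token-code 123456

The call returns a temporary access key ID, secret access key, and session token, which you can export as environment variables (or store in a named profile) for subsequent CLI calls.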

The AWS Security Blog will publish an updated version of this list regularly going forward. You also can subscribe to the AWS Knowledge Center Videos playlist on YouTube.

– Maggie

How to Use Service Control Policies in AWS Organizations to Enforce Healthcare Compliance in Your AWS Account

Post Syndicated from Aaron Lima original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-in-aws-organizations-to-enforce-healthcare-compliance-in-your-aws-account/

AWS customers with healthcare compliance requirements such as the U.S. Health Insurance Portability and Accountability Act (HIPAA) and Good Laboratory, Clinical, and Manufacturing Practices (GxP) might want to control access to the AWS services their developers use to build and operate their GxP and HIPAA systems. For example, customers with GxP requirements might approve AWS as a supplier on the basis of AWS’s SOC certification and therefore want to ensure that only the services in scope for SOC are available to developers of GxP systems. Likewise, customers with HIPAA requirements might want to ensure that only AWS HIPAA Eligible Services are available to store and process protected health information (PHI). Now with AWS Organizations—policy-based management for multiple AWS accounts—you can programmatically control access to the services within your AWS accounts.

In this blog post, I show how to restrict an AWS account to HIPAA Eligible Services as well as explain why you should include additional supporting AWS services with service control policies (SCPs) in AWS Organizations. Although this example is HIPAA related, you can repurpose it for GxP, database of Genotypes and Phenotypes (dbGaP) solutions, or other healthcare compliance requirements for which you want to control developers’ access to a specific scope of services.

Managing an account hierarchy with AWS Organizations

Let’s say I manage four AWS accounts: a Payer account, a Development account, a Corporate IT account, and a fourth account that contains PHI. In accordance with AWS’s Business Associate Agreement (BAA), I want to be sure that only AWS HIPAA Eligible Services are allowed in the fourth account along with supporting AWS services that help encrypt and control access to the account. The following diagram shows a logical view of the associated account structure.

Diagram showing the logical view of the account structure

As illustrated in the preceding diagram, Organizations allows me to create this account hierarchy between the four AWS accounts I manage. Before I proceed to show how to create and apply an SCP to the HIPAA account in this hierarchy, I’ll define some Organizations terminology that I use in this post:

  • Organization – A consolidated set of AWS accounts that you manage. For the preceding example, I have already created my organization and invited my accounts. For more information about creating an organization and inviting accounts, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.
  • Master account – The management hub for Organizations. This is where I invite existing accounts, create new accounts, and manage my SCPs. I run all commands demonstrated in this post from this master account. This is also my payer account in the preceding account structure diagram.
  • Service control policy (SCP) – A set of controls that the organization’s master account can apply to the organization, selected OUs, and selected accounts. SCPs allow me to whitelist or blacklist services and actions that I can delegate to the users and roles in the account to which the SCPs are applied. The resultant security permissions for a user or role are the intersection of the permissions in an SCP and the permissions in an IAM policy. I refer to SCPs as a policy type in some of this post’s command-line arguments.
  • Organizational unit (OU) – A container for a set of AWS accounts. OUs can be arranged into a hierarchy that can be as many as five levels deep. The top of the hierarchy of OUs is also known as the administrative root. In the walkthrough, I create a HIPAA OU and apply my policy to that OU. I then move the account into the OU to have the policy applied. To manage the organization depicted above, I might create OUs for my Corporate IT account and my Development account.

To restrict services in the fourth account to HIPAA Eligible Services and required supporting services, I will show how to create and apply an SCP to the account with the following steps:

  1. Create a JSON document that lists HIPAA Eligible Services and supporting AWS services.
  2. Create an SCP with a JSON document.
  3. Create an OU for the HIPAA account, and move the account into the OU.
  4. Attach the SCP to the HIPAA OU.
  5. Verify which SCPs are attached to the HIPAA OU.
  6. Detach the default FullAWSAccess SCP from the OU.
  7. Verify SCP enforcement.

How to create and apply an SCP to an account

Let’s walk through the steps to create an SCP and apply it to an account. I can manage my organization by using the Organizations console, AWS CLI, or AWS API from my master account. For the purposes of this post, I will demonstrate the creation and application of an SCP to my account by using the AWS CLI.

1.  Create a JSON document that lists HIPAA Eligible Services and supporting AWS services

Creating an SCP will be familiar if you have experience writing an IAM policy because the grammar in crafting the policy is similar. I will create a JSON document that lists only the services I want to allow in my account, and I will use this JSON document to create my SCP via the command line. The SCP I create from this document allows all actions for all resources of the listed services, effectively turning on only these services in my account. I name the document HIPAAExample.json and save it to the directory from which I will demonstrate the CLI commands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:*", "rds:*", "ec2:*", "s3:*", "elasticmapreduce:*",
                "glacier:*", "elasticloadbalancing:*", "cloudwatch:*",
                "importexport:*", "cloudformation:*", "redshift:*",
                "iam:*", "health:*", "config:*", "snowball:*",
                "trustedadvisor:*", "kms:*", "apigateway:*",
                "autoscaling:*", "directconnect:*",
                "execute-api:*", "sts:*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Note that the SCP includes more than just the HIPAA Eligible Services.

Why include additional supporting services in a HIPAA SCP?

You can use any service in your account, but you can use only HIPAA Eligible Services to store and process PHI. Some services, such as IAM and AWS Key Management Service (KMS), can be used because these services do not directly store or process PHI, but they might still be needed for administrative and security purposes.

To those ends, I include the following supporting services in the SCP to help me with account administration and security:

  • Access controls – I include IAM to ensure that I can manage access to resources in the account. Though Organizations can limit whether a service is available, I still need the granularity of access control that IAM provides.
  • Encryption – I need a way to encrypt the data. The integration of AWS KMS with Amazon Redshift, Amazon RDS, and Amazon Elastic Block Store (Amazon EBS) helps with this security requirement.
  • Auditing – I also need to be able to demonstrate controls in practice, track changes, and discover any malicious activity in my account. You will note that AWS CloudTrail is not included in the SCP, which prohibits any mutating actions against CloudTrail from users within the account. However, when setting up the account, CloudTrail was set up to send logs to a logging account as recommended in AWS Multiple Account Security Strategy. The logs do not reside in the account, and no one has privileges to change the trail including root or administrators, which helps ensure the protection of the API logging of the account. This highlights how SCPs can be used to secure services in an account.
  • Automation – Automation can help me with my security controls as shown in How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series; therefore, I consider including AWS CloudFormation as a way to ensure that applications deployed in the account adhere to my security and compliance policies. Auto Scaling also is an important service to include to help me scale to meet demand and control cost.
  • Monitoring and support – The remaining services in the SCP such as Amazon CloudWatch are needed to make sure that I can monitor the environment and have visibility into the health of the workloads and applications in my AWS account, helping me maintain operational control. AWS Trusted Advisor is a service that helps to make sure that my cloud environment is well architected.

Now that I have created my JSON document with the services that I will include and explained in detail why I include them, I can create my SCP.

2.  Create an SCP with a JSON document

I will now create the SCP via the CLI with the aws organizations create-policy command. Using the required name and type parameters, I name the SCP and specify that I am creating a service control policy. I then provide a brief description of the SCP and specify the location of the JSON document I created in Step 1.

aws organizations create-policy --name hipaa-example-policy --type SERVICE_CONTROL_POLICY --description "All HIPAA eligible services plus supporting AWS Services." --content file://./HIPAAExample.json

Output

{
    "policy": {
        "policySummary": {
            "type": "SERVICE_CONTROL_POLICY",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "name": "hipaa-example-policy",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services."
        }
    }
}

I take note of the policy-id because I need it to attach the SCP to my OU in Step 4. Note: Throughout this post, fictitious placeholder values are shown for the purposes of demonstrating this post’s solution.

3.  Create an OU for the HIPAA account, and move the account into the OU

Grouping accounts by function will make it easier to manage the organization and apply policies across multiple accounts. In this step, I create an OU for the HIPAA account and move the target account into the OU. To create an OU, I need to know the ID for the parent object under which I will be placing the OU. In this case, I will place it under the root and need the ID for the root. To get the root ID, I run the list-roots command.

aws organizations list-roots

Output

{
    "Roots": [
        {
            "PolicyTypes": [
                {
                    "Status": "ENABLED", 
                    "Type": "SERVICE_CONTROL_POLICY"
                }
            ], 
            "Id": "r-rth4", 
            "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
            "Name": "Root"
        }
    ]
}

With the root ID, I can proceed to create the OU under the root.

aws organizations create-organizational-unit --parent-id r-rth4 --name HIPAA-Accounts

Output

{
    "OrganizationalUnit": {
       "Id": "ou-rth4-ezo5wonz", 
        "Arn": "arn:aws:organizations::012345678900:ou/o-p9bx61i0h1/ou-rth4-ezo5wonz", 
        "Name": "HIPAA-Accounts"
    }
}

I take note of the OU ID in the output because I need it in the next command to move my target account. I will also need the root ID in the command because I am moving the target account from the root into the OU.

aws organizations move-account --account-id 098765432110 --source-parent-id r-rth4 --destination-parent-id ou-rth4-ezo5wonz

No Output


4.  Attach the SCP to the HIPAA OU

Even though you may have enabled All Features in your organization, you still need to enable SCPs at the root level of the organization to attach SCPs to objects. To do this in my case, I will run the enable-policy-type command and provide the root ID.

aws organizations enable-policy-type --root-id r-rth4 --policy-type SERVICE_CONTROL_POLICY

Output

{
    "Root": {
        "PolicyTypes": [], 
        "Id": "r-rth4", 
        "Arn": "arn:aws:organizations::012345678900:root/o-p9bx61i0h1/r-rth4", 
        "Name": "Root"
    }
}

Now, I will attach the SCP to the OU by using the aws organizations attach-policy command. I must include the target-id, which is the OU ID noted in the previous step and the policy-id from the output of the command in Step 2.

aws organizations attach-policy --target-id ou-rth4-ezo5wonz --policy-id p-6ldl8bll

No Output


5.  Verify which SCPs are attached to the HIPAA OU

I will now verify which SCPs are attached to the OU by using the aws organizations list-policies-for-target command. I must provide the OU ID with the target-id parameter and then filter for the SERVICE_CONTROL_POLICY type.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output

{
    "policies": [
        {
            "awsManaged": false,
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "id": "p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "name": "hipaa-example-policy",
            "type": "SERVICE_CONTROL_POLICY"
        },
        {
            "awsManaged": true,
            "arn": "arn:aws:organizations::aws:policy/SERVICE_CONTROL_POLICY/p-FullAWSAccess",
            "id": "p-FullAWSAccess",
            "description": "Allows access to every operation",
            "name": "FullAWSAccess",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

As the output shows, two SCPs are attached to this OU. I want to detach the FullAWSAccess SCP so that the HIPAA SCP is properly in effect. The FullAWSAccess SCP is an Allow SCP that allows all AWS services. If I were to leave the default FullAWSAccess SCP in place, it would grant access to services I do not want to allow in my account. Detaching the FullAWSAccess SCP means that only the services I allow in the hipaa-example-policy are allowed in my account. Note that if I were to create a Deny SCP, the SCP would take precedence over an Allow SCP.

6.  Detach the default FullAWSAccess SCP from the OU

Before detaching the default FullAWSAccess SCP from the OU, I run the aws workspaces describe-workspaces call from the Amazon WorkSpaces API. I am currently not running any WorkSpaces, so the output shows an empty list. However, I will test this again after I detach the FullAWSAccess SCP and am left with only the HIPAA SCP attached to the account.

aws workspaces describe-workspaces

Output

{
    "Workspaces": []
}

In order to detach the FullAWSAccess SCP, I must run the aws organizations detach-policy command, providing it the policy-id and target-id of the OU.

aws organizations detach-policy --policy-id p-FullAWSAccess --target-id ou-rth4-ezo5wonz

No Output


If I run the list-policies-for-target command again, I see that only one SCP remains attached to the OU – the one that allows HIPAA Eligible Services plus the supporting services – as shown in the following output.

aws organizations list-policies-for-target --target-id ou-rth4-ezo5wonz --filter SERVICE_CONTROL_POLICY

Output


{
    "policies": [
        {
            "name": "hipaa-example-policy",
            "arn": "arn:aws:organizations::012345678900:policy/o-kzceys2q4j/SERVICE_CONTROL_POLICY/p-6ldl8bll",
            "description": "All HIPAA eligible services plus supporting AWS Services.",
            "awsManaged": false,
            "id": "p-6ldl8bll",
            "type": "SERVICE_CONTROL_POLICY"
        }
    ]
}

Now I can test and verify the enforcement of this SCP.

7.  Verify SCP enforcement

Previously, the administrator of the account had full access to all AWS services, including Amazon WorkSpaces; his IAM policy allows all actions for Amazon WorkSpaces. However, after I apply the HIPAA SCP to the account, the effect of that IAM policy becomes a deny of all Amazon WorkSpaces actions, because WorkSpaces is not an allowed service.

The following screenshot of the IAM policy simulator shows which permissions are in effect for the administrator after I apply the HIPAA SCP. Note that the policy simulator shows the permission being denied by Organizations. Because the policy simulator is aware of the SCPs attached to an account, it is a good tool to use when troubleshooting or validating an SCP.
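If you prefer the CLI to the console-based simulator, a roughly equivalent check (using the admin user from this walkthrough) is:

aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::098765432110:user/admin --action-names workspaces:DescribeWorkspaces

The evaluation results should report the action as denied, with the Organizations decision detail indicating that the SCP does not allow it.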

If I run the aws workspaces describe-workspaces call again as I did in Step 6, this time I receive an AccessDeniedException error, which validates that the HIPAA SCP is working because Amazon WorkSpaces is not an allowed service in the SCP.

aws workspaces describe-workspaces

Output

An error occurred (AccessDeniedException) when calling the DescribeWorkspaces operation: 
User: arn:aws:iam::098765432110:user/admin is not authorized to perform: workspaces:DescribeWorkspaces 
on resource: arn:aws:workspaces:us-east-1:098765432110:workspace/*

This completes the process of creating and applying an SCP to my account.

Summary

In this blog post, I have shown how to create an SCP and attach it to an OU to restrict an account to HIPAA Eligible Services and additional supporting services. I also showed how to create an OU, move an account into the OU, and then validate the SCP attached to the OU. For more information, see AWS Cloud Computing in Healthcare.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues with implementing this solution, please start a new thread on the IAM forum.

– Aaron

AWS Week in Review – March 6, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-6-2017/

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for!

Monday, March 6

Tuesday, March 7

Wednesday, March 8

Thursday, March 9

Friday, March 10

Saturday, March 11

Sunday, March 12

Jeff;


Now Open – AWS London Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-london-region/

Last week we launched our 15th AWS Region and today we are launching our 16th. We have expanded the AWS footprint into the United Kingdom with a new Region in London, our third in Europe. AWS customers can use the new London Region to better serve end-users in the United Kingdom and can also use it to store data in the UK.

The Details
The new London Region provides a broad suite of AWS services including Amazon CloudWatch, Amazon DynamoDB, Amazon ECS, Amazon ElastiCache, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon EMR, Amazon Glacier, Amazon Kinesis Streams, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Elastic Beanstalk, AWS Snowball, AWS Snowmobile, AWS Key Management Service (KMS), AWS Marketplace, AWS OpsWorks, AWS Personal Health Dashboard, AWS Shield Standard, AWS Storage Gateway, AWS Support API, Elastic Load Balancing, VM Import/Export, Amazon CloudFront, Amazon Route 53, AWS WAF, AWS Trusted Advisor, and AWS Direct Connect (follow the links for pricing and other information).

The London Region supports all sizes of C4, D2, M4, T2, and X1 instances.

Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

From Our Customers
Many AWS customers are getting ready to use this new Region. Here’s a very small sample:

Trainline is Europe’s number one independent rail ticket retailer. Every day more than 100,000 people travel using tickets bought from Trainline. Here’s what Mark Holt (CTO of Trainline) shared with us:

We recently completed the migration of 100 percent of our eCommerce infrastructure to AWS and have seen awesome results: improved security, 60 percent less downtime, significant cost savings and incredible improvements in agility. From extensive testing, we know that 0.3s of latency is worth more than 8 million pounds and so, while AWS connectivity is already blazingly fast, we expect that serving our UK customers from UK datacenters should lead to significant top-line benefits.

Kainos Evolve Electronic Medical Records (EMR) automates the creation, capture, and handling of medical case notes and operational documents and records for several leading NHS Foundation Trusts and market-leading healthcare technology companies, allowing healthcare providers to deliver better patient safety and quality of care.

Travis Perkins, the largest supplier of building materials in the UK, is implementing the biggest systems and business change in its history including the migration of its datacenters to AWS.

Just Eat is the world’s leading marketplace for online food delivery. Using AWS, Just Eat has been able to experiment faster and reduce the time to roll out new feature updates.

OakNorth, a new bank focused on lending between £1m-£20m to entrepreneurs and growth businesses, became the UK’s first cloud-based bank in May after several months of working with AWS to drive the development forward with the regulator.

Partners
I’m happy to report that we are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in the United Kingdom. Here’s a partial list:

  • AWS Premier Consulting Partners – Accenture, Claranet, Cloudreach, CSC, Datapipe, KCOM, Rackspace, and Slalom.
  • AWS Consulting Partners – Attenda, Contino, Deloitte, KPMG, LayerV, Lemongrass, Perfect Image, and Version 1.
  • AWS Technology Partners – Splunk, Sage, Sophos, Trend Micro, and Zerolight.
  • AWS Managed Service Partners – Claranet, Cloudreach, KCOM, and Rackspace.
  • AWS Direct Connect Partners – AT&T, BT, Hutchison Global Communications, Level 3, Redcentric, and Vodafone.

Here are a few examples of what our partners are working on:

KCOM is a professional services provider offering consultancy, architecture, project delivery and managed service capabilities to large UK-based enterprise businesses. The scalability and flexibility of AWS gives them a significant competitive advantage with their enterprise and public sector customers. The new Region will allow KCOM to build innovative solutions for their public sector clients while meeting local regulatory requirements.

Splunk is a member of the AWS Partner Network and a market leader in analyzing machine data to deliver operational intelligence for security, IT, and the business. They use cloud computing and big data analytics to help their customers to embrace digital transformation and continuous innovation. The new Region will provide even more companies with real-time visibility into the operation of their systems and infrastructure.

Redcentric is an NHS Digital-approved N3 Commercial Aggregator. Their work allows health and care providers such as NHS acute, emergency, and mental health trusts, clinical commissioning groups (CCGs), and the ISV community to connect securely to AWS. The London Region will allow health and care providers to deliver new digital services and to improve outcomes for citizens and patients.

Visit the AWS Partner Network page to read some case studies and to learn how to join.

Compliance & Connectivity
Every AWS Region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, SOC 3, PCI DSS Level 1, and many more. Our Cloud Compliance page includes information about these standards, along with those that are specific to the UK, including Cyber Essentials Plus.

The UK Government recognizes that local datacenters from hyperscale public cloud providers can deliver secure solutions for OFFICIAL workloads. In order to meet the special security needs of public sector organizations in the UK with respect to OFFICIAL workloads, we have worked with our Direct Connect Partners to make sure that obligations for connectivity to the Public Services Network (PSN) and N3 can be met.

Use it Today
The London Region is open for business now and you can start using it today! If you need additional information about this Region, please feel free to contact our UK team at [email protected].

Jeff;

AWS Managed Services – Infrastructure Operations Management for the Enterprise

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-managed-services-infrastructure-operations-management-for-the-enterprise/

Large-scale, enterprise data centers are generally run “by the book.” Policies, best practices, and operational procedures are developed, refined, captured, and codified, as part of responsible IT management, often with an eye toward the ITIL model. Ideally, all infrastructure improvements, configuration changes, and provisioning requests are handled in a process-oriented fashion that serves to impose some discipline on the operation of the data center without becoming overly complex or bureaucratic. With IT staff responsible for provisioning hardware, installing software, applying patches, monitoring operations, taking and restoring backups, and dealing with unpredictable operational and security incidents, there’s plenty of work to go around.

These organizations have been looking at the AWS Cloud and want to take advantage of the scale and innovation that it offers, while also looking to become more agile and to save money in the process. As they plan their migration to the cloud, they want to build on their existing systems and practices, while also getting all of the benefits that the cloud has to offer. They want to add additional automation, make use of standard components that can be used more than once, and to relieve their staff of as many routine operational duties as possible.

Introducing AWS Managed Services
Today we are launching AWS Managed Services. Designed for the Fortune 1000 and the Global 2000, this service accelerates cloud adoption. It simplifies deployment, migration, and management using automation and machine learning, backed by a dedicated team of Amazon employees. AWS MS builds on AWS and provides a set of integration points (APIs and a set of CLI tools) for connection to your existing service management system. We’ve been working with a representative set of AWS enterprise customers and partners for the last couple of years in order to make sure that this service meets a very wide range of enterprise requirements.

AWS MS is built around the concept of a Virtual Data Center (VDC) that is linked to one or more AWS accounts. The VDC consists of a Virtual Private Cloud (VPC) that contains multiple Deployment Groups, which in turn consist of Multi-AZ subnets for a DMZ, for shared services, and for customer applications. Each application or application component is packaged up into a Managed Stack.

Here’s a brief overview of the feature set:

Incident Monitoring & Resolution – AWS MS manages incidents that are detected by our monitoring systems or reported by our customers. It correlates multiple Amazon CloudWatch alarms and looks for failed updates and security events that could impact the health of running applications. Incidents are created within AWS MS for investigation and are then resolved either automatically or manually by AWS engineers. False positives are used to improve our systems and processes, allowing AWS MS to improve over time by drawing on data collected at scale.

Change Control – AWS MS coordinates all actions on resources. Changes must originate with a change request (an RFC, or Request for Change), and can be manual or scripted. AWS MS makes sure that changes are applied to individual stacks on an orderly, non-overlapping basis. It also holds all incoming manual requests until they have been approved.

Provisioning – AWS MS includes a set of predefined stacks (application templates), each built to conform to long-established AWS best practices. The stacks contain sensible defaults, any of which can be overridden when the stack is provisioned.

Patch Management – AWS MS takes care of the above-the-hypervisor patching. This includes operating system (Linux and Windows) and infrastructure application (SSH, RDP, IIS, Apache, and so forth) security updates and patches. AWS MS employs multiple strategies, patching and building new AMIs for cloud-aware applications that can be easily restarted, and resorting to in-place patches for the rest.

Security & Access Management – AWS MS uses third-party applications from AWS Marketplace, starting with Trend Micro Deep Security to look for viruses and malware and to detect intrusions on managed instances. It makes extensive use of EC2 Security Groups and manages controlled, time-limited access to production systems.

Backup & Restore – Each stack is backed up at a specified frequency. A percentage of the backup snapshots are tested for integrity and a run book is used to bring failed infrastructure back to life.

Reporting – AWS MS provides a set of financial and capacity management reports, delivered by a dedicated Cloud Service Advisor using AWS Trusted Advisor and other tools. The underlying AWS CloudTrail and Amazon CloudWatch logs are also accessible.
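As an aside (this uses the standard AWS Support API rather than anything specific to AWS MS, and it requires an appropriate support plan), you can pull the underlying Trusted Advisor check catalog yourself:

aws support describe-trusted-advisor-checks --language en

Each check ID returned can then be passed to describe-trusted-advisor-check-result to retrieve the findings for that check.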

Accessing AWS Managed Services
You can connect AWS Managed Services to your existing service management tools using the AWS MS API and command-line tools. You can also access it through the AWS Management Console, but we expect API and CLI usage to be far more popular. However you choose to access AWS MS, the basic objects and operations are the same. You can create, view, approve, and manage RFCs, service requests, and incident reports. Here’s what this looks like from the Console:

Here’s how a Request for Change (RFC) is created:

And here’s how technical users can customize the RFC:

After a change request has been entered, approved, and scheduled, AWS MS supervises the actual change. Automated changes take place with no further human interaction. Manual changes are performed within a scheduled change window using temporary credentials specific to the change. AWS engineers use the same mechanisms and follow the same discipline. Either way, the entire process is tracked and logged.

Partners & Customers
AWS Managed Services was designed with partners in mind. We have set up a pair of new training programs (AWS MS Business Essentials and AWS MS Technical Essentials) that will provide partners with the background information needed to start building a practice around AWS MS. I expect partners to help their customers connect their existing IT Service Management (ITSM) systems, processes, and tools to AWS MS, assist with the on-boarding process, and manage the migration of applications. There are also opportunities for partners to use AWS MS to provide even better levels of support and service to customers.

As I mentioned earlier, we’ve been working with enterprise customers and partners to make sure that AWS MS meets their needs. Here are a few observations that they shared with us.

Tom Ray of Cloudreach (“Intelligent Cloud Adoption”), an AWS Premier Partner:

We see AWS Managed Services as a key solution in the AWS portfolio, designed to meet the need for a cost effective, highly controlled AWS environment, where the heavy lifting of management and control can be outsourced to AWS. This will extend our relationship even further, as Cloudreach will help customers design, migrate to AWS Managed Services, plus provide application level support alongside AWS.

Paul Hannan of SGN (a regulated oil & gas utility):

SGN’s migration to cloud is based upon improving the security and durability of its IT, while becoming more responsive to its business and customer service needs – all at a lower cost. We decided the best way for us to manage the migration into AWS, at the lowest risk to ourselves, was to partner with AWS. Its managed service team has the expertise to optimise the AWS platform, allowing us to accelerate our understanding of how to best manage the infrastructure within AWS. It’s been a real benefit working with a partner which recognises our desire to always put our customer first and which will pull out all the stops to achieve what’s needed.

Available Now
AWS Managed Services is available today. It is able to manage AWS resources in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Sydney) Regions, with others coming online as soon as possible.

Pricing is based on your AWS usage. To learn more about AWS MS or to initiate the on-boarding process, contact your AWS sales representative.

Jeff;

Now Open AWS Canada (Central) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-canada-central-region/

We are growing the AWS footprint once again. Our new Canada (Central) Region is now available and you can start using it today. AWS customers in Canada and the northern parts of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Canada (Central) Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, AWS CloudTrail, Amazon CloudWatch, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon ECS, EC2 Container Registry, AWS Elastic Beanstalk, Amazon EMR, Amazon ElastiCache, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Marketplace, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, AWS Shield Standard, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Workflow Service (SWF), AWS Storage Gateway, AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, M4, T2, and X1 instances.

As part of our on-going focus on making cloud computing available to you in an environmentally friendly fashion, AWS data centers in Canada draw power from a grid that generates 99% of its electricity using hydropower (read about AWS Sustainability to learn more).

Well Connected
After receiving a lot of positive feedback on the network latency metrics that I shared when we launched the AWS Region in Ohio, I am happy to have a new set to share as part of today’s launch (these times represent a lower bound on latency and may change over time).

The first set of metrics are to other Canadian cities:

  • 9 ms to Toronto.
  • 14 ms to Ottawa.
  • 47 ms to Calgary.
  • 49 ms to Edmonton.
  • 60 ms to Vancouver.

The second set are to locations in the US:

  • 9 ms to New York.
  • 19 ms to Chicago.
  • 16 ms to US East (Northern Virginia).
  • 27 ms to US East (Ohio).
  • 75 ms to US West (Oregon).

Canada is also home to CloudFront edge locations in Toronto, Ontario, and Montreal, Quebec.

And Canada Makes 15
Today’s launch brings our global footprint to 15 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online through the next year. As a reminder, each Region is a physical location where we have two or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

For more information about current and future AWS Regions, take a look at the AWS Global Infrastructure page.

Jeff;



IT Governance in a Dynamic DevOps Environment

Post Syndicated from Shashi Prabhakar original https://aws.amazon.com/blogs/devops/it-governance-in-a-dynamic-devops-environment/

IT Governance in a Dynamic DevOps Environment
Governance involves the alignment of security and operations with productivity to ensure a company achieves its business goals. Customers who are migrating to the cloud might be in various stages of implementing governance. Each stage poses its own challenges. In this blog post, the first in a series, I will discuss a four-step approach to automating governance with AWS services.

Governance and the DevOps Environment
Developers with a DevOps and agile mindset are responsible for building and operating services. They often rely on a central security team to develop and apply policies, seek security reviews and approvals, or implement best practices.

These policies and rules are not strictly enforced by the security team. They are treated as guidelines that developers can follow to get the much-desired flexibility from using AWS. However, due to time constraints or lack of awareness, developers may not always follow best practices and standards. If these best practices and rules were strictly enforced, the security team could become a bottleneck.

For customers migrating to AWS, the automated governance mechanisms described in this post will preserve flexibility for developers while providing controls for the security team.

These are some common challenges in a dynamic development environment:

  • Shortcuts taken to accomplish tasks quickly, such as hardcoding credentials in code.
  • Cost management (for example, controlling the type of instance launched).
  • Knowledge transfer.
  • Manual processes.

Steps to Governance
Here is a four-step approach to automating governance:

At initial setup, you want to implement some (1) controls for high-risk actions. After they are in place, you need to (2) monitor your environment to make sure you have configured resources correctly. Monitoring will help you discover issues you want to (3) fix as soon as possible. You’ll also want to regularly produce an (4) audit report that shows everything is compliant.

The example in this post helps illustrate the four-step approach: A central IT team allows its Big Data team to run a test environment of Amazon EMR clusters. The team runs the EMR job with 100 t2.medium instances, but when a team member spins up 100 r3.8xlarge instances to complete the job more quickly, the business incurs an unexpected expense.

The central IT team cares about governance and implements a few measures to prevent this from happening again:

  • Control elements: The team uses CloudFormation to restrict the number and type of instances and AWS Identity and Access Management to allow only a certain group to modify the EMR cluster.
  • Monitor elements: The team uses tagging, AWS Config, and AWS Trusted Advisor to monitor the instance limit and determine if anyone exceeded the number of allowed instances.
  • Fix: The team creates a custom Config rule to terminate instances that are not of the type specified.
  • Audit: The team reviews the lifecycle of the EMR instance in AWS Config.

Control

You can prevent mistakes by standardizing configurations (through AWS CloudFormation), restricting configuration options (through AWS Service Catalog), and controlling permissions (through IAM).

AWS CloudFormation helps you control the workflow environment in a single package. In this example, we use a CloudFormation template to restrict the number and type of instances, and we use tagging to control the environment.

For example, the team can prevent the choice of r3.8xlarge instances by using CloudFormation with a fixed instance type and a fixed number of instances (100).

CloudFormation Template Sample

EMR cluster with tag:

{
  "Type" : "AWS::EMR::Cluster",
  "Properties" : {
    "AdditionalInfo" : JSON object,
    "Applications" : [ Applications, … ],
    "BootstrapActions" : [ Bootstrap Actions, … ],
    "Configurations" : [ Configurations, … ],
    "Instances" : JobFlowInstancesConfig,
    "JobFlowRole" : String,
    "LogUri" : String,
    "Name" : String,
    "ReleaseLabel" : String,
    "ServiceRole" : String,
    "Tags" : [ Resource Tag, … ],
    "VisibleToAllUsers" : Boolean
  }
}

EMR cluster JobFlowInstancesConfig InstanceGroupConfig with fixed instance type and number:

{
  "BidPrice" : String,
  "Configurations" : [ Configuration, … ],
  "EbsConfiguration" : EBSConfiguration,
  "InstanceCount" : Integer,
  "InstanceType" : String,
  "Market" : String,
  "Name" : String
}
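
To make this concrete, here is a minimal sketch in Python (boto3) that pins the instance type and count in a launchable template. The cluster name, release label, t2.medium sizing, and the EMR_DefaultRole/EMR_EC2_DefaultRole role names are assumptions for illustration, not values mandated by AWS.

import json
import boto3

# Sketch: an EMR cluster template with the instance type and count pinned
# so that nobody can swap in r3.8xlarge instances. Names and roles below
# are assumptions; adjust them for your account.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "BigDataTestCluster": {
            "Type": "AWS::EMR::Cluster",
            "Properties": {
                "Name": "bigdata-test",                # assumed cluster name
                "ReleaseLabel": "emr-5.0.0",           # assumed release
                "JobFlowRole": "EMR_EC2_DefaultRole",  # default EMR roles
                "ServiceRole": "EMR_DefaultRole",
                "Tags": [{"Key": "team", "Value": "bigdata"}],
                "Instances": {
                    "MasterInstanceGroup": {
                        "InstanceCount": 1,
                        "InstanceType": "t2.medium",   # fixed type
                        "Market": "ON_DEMAND",
                        "Name": "Master",
                    },
                    "CoreInstanceGroup": {
                        "InstanceCount": 100,          # fixed count
                        "InstanceType": "t2.medium",   # fixed type
                        "Market": "ON_DEMAND",
                        "Name": "Core",
                    },
                },
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="emr-test-env", TemplateBody=json.dumps(template))

Because developers launch the stack rather than individual instances, the type and count travel with the template and can be reviewed in one place.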
AWS Service Catalog can be used to distribute approved products (servers, databases, websites) in AWS. This gives IT administrators more flexibility in terms of which user can access which products. It also gives them the ability to enforce compliance based on business standards.

AWS IAM is used to control which users can access which AWS services and resources. By using IAM roles, you can avoid the use of root credentials in your code to access AWS resources.

In this example, we give the team lead full EMR access, including console and API access (not covered here), and give developers read-only access with no console access. If a developer wants to run the job, the developer just needs PEM files.

IAM Policy
This policy is for the team lead with full EMR access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*",
        "cloudformation:CreateStack",
        "cloudformation:DescribeStackEvents",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:CancelSpotInstanceRequests",
        "ec2:CreateRoute",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:DeleteRoute",
        "ec2:DeleteTags",
        "ec2:DeleteSecurityGroup",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeInstances",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVpcs",
        "ec2:DescribeNetworkAcls",
        "ec2:CreateVpcEndpoint",
        "ec2:ModifyImageAttribute",
        "ec2:ModifyInstanceAttribute",
        "ec2:RequestSpotInstances",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "elasticmapreduce:*",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "iam:ListRoles",
        "iam:PassRole",
        "kms:List*",
        "s3:*",
        "sdb:*",
        "support:CreateCase",
        "support:DescribeServices",
        "support:DescribeSeverityLevels"
      ],
      "Resource": "*"
    }
  ]
}
This policy is for developers with read-only access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticmapreduce:Describe*",
        "elasticmapreduce:List*",
        "s3:GetObject",
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "sdb:Select",
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
These are IAM managed policies. If you want to change the permissions, you can create your own IAM custom policy.

 

Monitor

Use logs available from AWS CloudTrail, Amazon CloudWatch, Amazon VPC, Amazon S3, and Elastic Load Balancing as much as possible. You can use AWS Config, Trusted Advisor, and CloudWatch Events and alarms to monitor these logs.

AWS CloudTrail can be used to log API calls in AWS. It helps you fix problems, secure your environment, and produce audit reports. For example, you could use CloudTrail logs to identify who launched those r3.8xlarge instances.
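
As an illustration, here is a hedged sketch in Python (boto3) that searches CloudTrail for recent RunInstances calls and flags the ones that requested r3.8xlarge; the one-day window is an assumption.

import json
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look back over the last 24 hours (assumed window) for instance launches.
resp = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)

for event in resp["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    # The requested type is recorded in the call's request parameters.
    requested = detail.get("requestParameters", {}).get("instanceType")
    if requested == "r3.8xlarge":
        print(event.get("Username", "?"), event["EventTime"], requested)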


AWS Config can be used to keep track of and act on rules. Config rules check the configuration of your AWS resources for compliance. You’ll also get, at a glance, the compliance status of your environment based on the rules you configured.

Amazon CloudWatch can be used to monitor and alarm on incorrectly configured resources. CloudWatch entities (metrics, alarms, logs, and events) help you monitor your AWS resources. Using metrics (including custom metrics), you can monitor resources and get a dashboard with customizable widgets. CloudWatch Logs can be used to stream data from AWS-provided logs in addition to your system logs, which is helpful for fixing and auditing.

CloudWatch Events help you take actions on changes. VPC flow, S3, and ELB logs provide you with data to make smarter decisions when fixing problems or optimizing your environment.

AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in four categories: cost, performance, security, and fault tolerance. This online resource optimization tool also includes AWS limit warnings.

We will use Trusted Advisor to make sure that a service limit is not going to become a bottleneck when launching 100 instances.
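
The same check can be scripted. Here is a sketch in Python (boto3); note that the Support API behind Trusted Advisor requires a Business or Enterprise support plan and is only served from us-east-1, so treat this as illustrative.

import boto3

# The AWS Support API (which fronts Trusted Advisor) lives in us-east-1.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
limits_check = next(c for c in checks if c["name"] == "Service Limits")

result = support.describe_trusted_advisor_check_result(
    checkId=limits_check["id"]
)
for flagged in result["result"]["flaggedResources"]:
    # Each flagged resource describes a limit that is at or near capacity.
    print(flagged["metadata"])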


Fix
Depending on the violation and your ability to monitor and view the resource configuration, you might want to take action when you find an incorrectly configured resource that will lead to a security violation. It's important that the fix doesn't result in unwanted consequences and that you maintain an auditable record of the actions you performed.


You can use AWS Lambda to automate everything. When you use Lambda with Amazon CloudWatch Events to fix issues, you can take action on an instance termination event or the addition of a new instance to an Auto Scaling group. You can take action on any AWS API call by selecting it as the event source. You can also use AWS Config managed rules and custom rules with remediation. While AWS Config rules keep you informed about the state of your environment, you can use AWS Lambda to take action on top of these rules. This helps automate the fixes.
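
For example, here is a sketch in Python (boto3) that wires a CloudWatch Events rule to a remediation Lambda function; the rule name, function ARN, and event pattern are assumptions for illustration.

import json
import boto3

events = boto3.client("events")

# Fire whenever an EC2 instance enters the running state.
events.put_rule(
    Name="watch-ec2-launches",  # assumed rule name
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["running"]},
    }),
    State="ENABLED",
)

# Route matching events to a remediation function (assumed ARN). You must
# also grant CloudWatch Events permission to invoke the function (via
# Lambda's add-permission) before the target will fire.
events.put_targets(
    Rule="watch-ec2-launches",
    Targets=[{
        "Id": "remediation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:enforce-instance-type",
    }],
)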

AWS Config to Find Running Instance Type

To fix the problem in our use case, you can implement a custom Config rule and trigger an action (for example, shutting down the instances if the instance type is larger than xlarge, or tearing down the EMR cluster).
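
Here is a hedged sketch of what that custom rule's Lambda handler could look like in Python (boto3). The t2.medium allow-list and the terminate-on-violation behavior are assumptions; a cautious deployment would report NON_COMPLIANT first and terminate only after review.

import json
import boto3

ALLOWED_TYPES = {"t2.medium"}  # assumed allow-list for the test environment

def lambda_handler(event, context):
    # For configuration-change triggers, AWS Config passes the changed
    # resource in the invoking event.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    if item["resourceType"] != "AWS::EC2::Instance":
        return

    instance_id = item["resourceId"]
    instance_type = item["configuration"]["instanceType"]
    compliant = instance_type in ALLOWED_TYPES

    if not compliant:
        # Aggressive remediation: terminate the offending instance.
        boto3.client("ec2").terminate_instances(InstanceIds=[instance_id])

    boto3.client("config").put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": instance_id,
            "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )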

 

Audit

You’ll want to have a report ready for the auditor at the end of the year or quarter. You can automate your reporting system using AWS Config resources.

You can view AWS resource configurations and history so you can see when the r3.8xlarge instance cluster was launched or which security group was attached. You can even search for deleted or terminated instances.
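
Pulling that history programmatically is a one-call sketch in Python (boto3); the instance ID below is a placeholder.

import boto3

config = boto3.client("config")

# Walk the recorded configuration timeline of one instance (placeholder ID).
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0123456789abcdef0",
)

for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"],
          item["configurationItemStatus"])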


More Control, Monitor, and Fix Examples
Armando Leite from AWS Professional Services has created a sample governance framework that leverages CloudWatch Events and AWS Lambda to enforce a set of controls (flows between layers, no OS root access, no remote logins). When a deviation is noted (monitoring), automated action is taken to respond to the event and, if necessary, recover to a known good state (fix).

  • Remediate (for example, shut down the instance) through custom Config rules or a CloudWatch event to trigger the workflow.
  • Monitor a user’s OS activity and escalation to root access. As events unfold, new Lambda functions dynamically enable more logs and subscribe to log data for further live analysis.
  • If the telemetry indicates it’s appropriate, restore the system to a known good state.

AWS Hot Startups – November 2016 – AwareLabs, Doctor On Demand, Starling Bank, and VigLink

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-november-2016-awarelabs-doctor-on-demand-starling-bank-and-viglink/

Tina is back with another impressive set of startups!

Jeff;


This month we are featuring four hot AWS-powered startups:

  • AwareLabs – Helping small businesses build smart websites.
  • Doctor On Demand – Delivering fast, easy, and cost-effective access to top healthcare providers.
  • Starling Bank – Mobile banking for the next generation.
  • VigLink – Powering content-driven commerce.

Make sure to also check out October’s Hot Startups if you missed it!

AwareLabs (Phoenix/Charlotte)
AwareLabs is a small, three-person startup focused on helping business owners engage their customers through dozens of integrated applications. The startup was born in November 2011 and began as a website building guide that helped hundreds of entrepreneurs within its first few weeks. Early on, founder Paul Kenjora recognized that small businesses were being slowed down by existing business solutions, and in 2013 he took on the task of creating a business-centric website builder. After attending an AWS seminar, Paul realized that small teams could design and deploy massive infrastructure just as well as heavily funded, high-tech companies. Previously, only big companies or heavy investment allowed for that type of scale. With the help of AwareLabs, small businesses with limited time and budgets can build the smart websites they need.

The AwareLabs team relies on AWS to achieve what was previously impossible with a team of their size. They’ve been able to raise less capital, move faster, and deliver a solution customers love. AwareLabs leverages Amazon EC2 extensively for everything from running client websites, to maintaining their own secure code repository. Amazon S3 has also been a game changer in offloading the burden of data storage and reliability. This was the single biggest factor in letting the AwareLabs development team focus on client-facing features instead of infrastructure issues. Amazon SES and Amazon SNS freed their developers to deliver integrated one-click newsletters with intelligent bounce reduction, which was very well received by clients. Finally, AWS has helped AwareLabs be profitable, which is huge for any startup!

Be sure to check out AwareLabs for your professional website needs!

Doctor On Demand (San Francisco)
Doctor On Demand was built to address the growing problem that many of those in the U.S. face – lack of access to healthcare providers. The average wait time to see a physician is three weeks, and in rural areas, it can be even longer. It takes an average of 25 days to see a psychiatrist or psychologist and nearly half of all patients with mental health issues go without treatment. With Doctor On Demand, patients can see a board-certified physician or psychologist in a matter of minutes directly from their smartphone, tablet, or computer. They can also have video visits with providers at any time of day – no matter where they are. Patients simply download the Doctor On Demand app (iOS and Android) or visit www.doctorondemand.com, provide a summary of the reason for their visit, and are connected to a licensed provider in their state. Services are delivered through hundreds of employers and work with dozens of major health plans.

From the very beginning, AWS has allowed Doctor On Demand to operate securely in the healthcare space. They utilize Amazon EC2, Amazon S3, Amazon CloudFront, Amazon CloudWatch, and AWS Trusted Advisor. With these services they are able to build compliant security and privacy controls and ‘simple’ fault tolerance, and easily set up a disaster recovery site (utilizing multiple AWS Regions). The company says the best part about working with AWS is that they are able to get everything they need on a startup budget.

Check out the Doctor On Demand blog to keep up with the latest news!

Starling Bank (UK)
Starling Bank is on a mission to shake up financial services. In the way that TV was radically changed by Netflix, music by the likes of Spotify, and social media by Snapchat, Starling aims to do the same for banking. Founded in 2014 by Anne Boden, Starling uses the latest technology to make the traditional current account obsolete. Having assembled a team of engineers, artists, and economists, Starling is nearing completion of the build of the bank. They will be launching their app in early 2017.

Many next generation banks continue to stick to the traditional bank model that was built on technology from the 1960s and 70s. Instead of providing a range of products that are sold and cross-sold to unwilling customers, Starling will empower their users through seamless access to a mobile marketplace of financial services and products that best meet their needs at any given time. Customers can enjoy the security and protection of a licensed and regulated bank while also getting access to insights, data, and services that empower them to make decisions about their money.

Starling Bank uses AWS to provision and scale a secure infrastructure automatically and on demand. They primarily use AWS CloudFormation and Amazon EC2, but also make use of Amazon S3, Amazon RDS, and AWS Lambda.

Sign up here to be one of Starling’s first customers!

VigLink (San Francisco)
Oliver Roup, founder and CEO of VigLink, was first introduced to affiliate marketing as a student at Harvard Business School. His interest in the complex ecosystem prompted him to write a crawler to identify existing product links to Amazon. Roup found that less than half of those links were enrolled in the associates program. It was at this moment that he determined there was a real business opportunity at hand, and VigLink was born.

Over the last seven years, the company has grown into not only a content monetization platform, but a platform that provides publishers and merchants with insights into their ecommerce business. At its core, VigLink identifies commercial product mentions within a publisher’s content and automatically transforms them into revenue generating hyperlinks whose destinations can be determined in real-time, advertiser-bid auctions. Since its founding in 2009, VigLink has been backed by top investors including Google Ventures, Emergence Capital Partners, and RRE. Check out a recent interview with Roup and a tour of VigLink’s offices here!

Since the company’s start, VigLink has utilized AWS extensively. The flexibility to be able to respond to demand elastically without capital costs or hardware maintenance has been game-changing. They use numerous services including Amazon EC2, Amazon S3, Amazon SQS, Amazon RDS, and Amazon Redshift. While continuing to scale, VigLink has recently been able to cut costs by 15% using tools such as AWS Cost Explorer.

Take a behind-the-scenes look at VigLink in this short video.

Tina Barr

How to Assign Permissions Using New AWS Managed Policies for Job Functions

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/how-to-assign-permissions-using-new-aws-managed-policies-for-job-functions/

Today, AWS Identity and Access Management (IAM) made 10 AWS managed policies available that align with common job functions in organizations. AWS managed policies enable you to set permissions using policies that AWS creates and manages, and with a single AWS managed policy for job functions, you can grant the permissions necessary for network or database administrators, for example.

You can attach multiple AWS managed policies to your users, roles, or groups if they span multiple job functions. As with all AWS managed policies, AWS will keep these policies up to date as we introduce new services or actions. You can use any AWS managed policy as a template or starting point for your own custom policy if the policy does not fully meet your needs (however, AWS will not automatically update any of your custom policies). In this blog post, I introduce the new AWS managed policies for job functions and show you how to use them to assign permissions.

The following table lists the new AWS managed policies for job functions and their descriptions.

Administrator – This policy grants full access to all AWS services.

Billing – This policy grants permissions for billing and cost management. These permissions include viewing and modifying budgets and payment methods. An additional step is required to access the AWS Billing and Cost Management pages after assigning this policy.

Data Scientist – This policy grants permissions for data analytics and analysis. Access to the following AWS services is a part of this policy: Amazon Elastic MapReduce, Amazon Redshift, Amazon Kinesis, Amazon Machine Learning, AWS Data Pipeline, Amazon S3, and Amazon Elastic File System. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.

Database Administrator – This policy grants permissions to all AWS database services. Access to the following AWS services is a part of this policy: Amazon DynamoDB, Amazon ElastiCache, Amazon RDS, and Amazon Redshift. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.

Developer Power User – This policy grants full access to all AWS services except for IAM.

Network Administrator – This policy grants permissions required for setting up and configuring network resources. Access to the following AWS services is included in this policy: Amazon Route 53, Route 53 Domains, Amazon VPC, and AWS Direct Connect. This policy grants access to actions that require delegating permissions to CloudWatch Logs. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for this service.

Security Auditor – This policy grants permissions required for configuring security settings and for monitoring events and logs in the account.

Support User – This policy grants permissions to troubleshoot and resolve issues in an AWS account. This policy also enables the user to contact AWS support to create and manage cases.

System Administrator – This policy grants permissions needed to support system and development operations. Access to the following AWS services is included in this policy: AWS CloudTrail, Amazon CloudWatch, CodeCommit, CodeDeploy, AWS Config, AWS Directory Service, EC2, IAM, AWS KMS, Lambda, RDS, Route 53, S3, SES, SQS, AWS Trusted Advisor, and Amazon VPC. This policy grants access to actions that require delegating permissions to EC2, CloudWatch, Lambda, and RDS. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.

View Only User – This policy grants permissions to view existing resources across all AWS services within an account.

Some of the policies in the preceding table enable you to take advantage of additional features that are optional. These policies grant access to iam:PassRole, which passes a role to delegate permissions to an AWS service to carry out actions on your behalf. For example, the Network Administrator policy passes a role to CloudWatch’s flow-logs-vpc so that a network administrator can log and capture IP traffic for all the Amazon VPCs they create. You must create IAM service roles to take advantage of the optional features. To follow security best practices, the policies already include permissions to pass the optional service roles with a naming convention. This avoids escalating or granting unnecessary permissions if there are other service roles in the AWS account. If your users require the optional service roles, you must create a role that follows the naming conventions specified in the policy and then grant permissions to the role.

For example, your system administrator may want to run an application on an EC2 instance, which requires passing a role to Amazon EC2. The system administrator policy already has permissions to pass a role named ec2-sysadmin-*. When you create a role called ec2-sysadmin-example-application, for example, and assign the necessary permissions to the role, the role is passed automatically to the service and the system administrator can start using the features. The documentation summarizes each of the use cases for each job function that requires delegating permissions to another AWS service.
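
As a sketch of that convention in Python (boto3): the role below follows the ec2-sysadmin-* naming the policy expects, while the attached permissions and the application name are assumptions.

import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "ec2-sysadmin-example-application"  # matches ec2-sysadmin-*

# Trust policy that lets EC2 assume the role on the instance's behalf.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName=ROLE_NAME,
                AssumeRolePolicyDocument=json.dumps(trust))

# Grant whatever the application needs (placeholder: S3 read-only).
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# EC2 consumes roles through an instance profile.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME,
                                 RoleName=ROLE_NAME)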

How to assign permissions by using an AWS managed policy for job functions

Let’s say that your company is new to AWS, and you have an employee, Alice, who is a database administrator. You want Alice to be able to use and manage all the database services while not giving her full administrative permissions. In this scenario, you can use the Database Administrator AWS managed policy. This policy grants view, read, write, and admin permissions for RDS, DynamoDB, Amazon Redshift, ElastiCache, and other AWS services a database administrator might need.

The Database Administrator policy passes several roles for various use cases. The following policy shows the different service roles that are applicable to a database administrator.

{
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/rds-monitoring-role",
                "arn:aws:iam::*:role/rdbms-lambda-access",
                "arn:aws:iam::*:role/lambda_exec_role",
                "arn:aws:iam::*:role/lambda-dynamodb-*",
                "arn:aws:iam::*:role/lambda-vpc-execution-role",
                "arn:aws:iam::*:role/DataPipelineDefaultRole",
                "arn:aws:iam::*:role/DataPipelineDefaultResourceRole"
            ]
        }

However, in order for user Alice to be able to leverage any of the features that require another service, you must create a service role and grant it permissions. In this scenario, Alice wants only to monitor RDS databases. To enable Alice to monitor RDS databases, you must create a role called rds-monitoring-role and assign the necessary permissions to the role.

Steps to assign permissions to the Database Administrator policy in the IAM console

  1. Sign in to the IAM console.
  2. In the left pane, choose Policies and type database in the Filter box.
  3. Choose the Database Administrator policy, and then choose Attach in the Policy Actions drop-down menu.
  4. Choose the user (in this case, I choose Alice), and then choose Attach Policy.
  5. User Alice now has Database Administrator permissions. However, for Alice to monitor RDS databases, you must create a role called rds-monitoring-role. To do this, in the left pane, choose Roles, and then choose Create New Role.
  6. For the Role Name, type rds-monitoring-role to match the name that is specified in the Database Administrator policy. Choose Next Step.
  7. In the AWS Service Roles section, scroll down and choose Amazon RDS Role for Enhanced Monitoring.
  8. Choose the AmazonRDSEnhancedMonitoringRole policy and then choose Select.
  9. After reviewing the role details, choose Create Role.

User Alice now has Database Administrator permissions and can now monitor RDS databases. To use other roles for the Database Administrator AWS managed policy, see the documentation.
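
For completeness, here is a rough boto3 equivalent of those console steps. The job-function and service-role policy ARNs are the documented ones; the user name is an assumption.

import json
import boto3

iam = boto3.client("iam")

# Steps 1-4: attach the job-function managed policy to the user.
iam.attach_user_policy(
    UserName="Alice",  # assumed user name
    PolicyArn="arn:aws:iam::aws:policy/job-function/DatabaseAdministrator",
)

# Steps 5-9: create the service role the policy expects, named exactly
# rds-monitoring-role and trusted by RDS Enhanced Monitoring.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "monitoring.rds.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="rds-monitoring-role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName="rds-monitoring-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole",
)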

To learn more about AWS managed policies for job functions, see the IAM documentation. If you have comments about this post, submit them in the “Comments” section below. If you have questions about these new policies, please start a new thread on the IAM forum.

– Joy

Now Open – AWS US East (Ohio) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-us-east-ohio-region/

As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 20 ms to JFK (home to an Internet exchange point).
  • 29 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and another exchange point).
  • 91 ms to SFO (home of the US West (Northern California) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the differences between these two terms. Each Region is a physical location where we have one or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!


Jeff;

 

AWS Enterprise Support Update – Training Credits, Operations Review, Well-Architected

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-enterprise-support-update-training-credits-operations-review-well-architected/

I often speak to new and potential AWS customers in our EBC (Executive Briefing Center) in Seattle. The vast majority of them have already bought in to the promise of the cloud and are already making plans that involve a combination of  “lifting and shifting” existing applications and building new, cloud-native ones. Because their move to AWS is often part of a larger organizational transformation and modernization, the senior leaders that I talk to want to make sure that their technical team is properly equipped to skillfully design, build, and operate cloud-powered systems.

Many of these customers are taking advantage of the AWS Enterprise Support plan as they move their mission-critical applications to the cloud. Like traditional support plans, this one provides access to technical support people and resources in the event that an issue surfaces. However, unlike traditional plans, it also focuses on helping them to build applications that are robust, cost-effective, easily maintained, and scalable. Our customers tell me that they enjoy the unique combination of hands-on, concierge-quality support and the automated, data-driven recommendations provided to them by AWS Trusted Advisor.

New Enterprise Support Benefits
Today we are making the AWS Enterprise Support Plan even better, adding three new benefits that are available to new and existing plan subscribers at no additional charge:

Training Credits – In conjunction with our training partner qwikLabs, each Enterprise Support customer is entitled to receive 500 qwikLabs training credits annually, along with a 30% discount on additional credits. The qwikLabs courses address a wide range of AWS topics; introductory courses are free and the remainder cost between 1 and 15 credits each (read the course catalog to learn more).

If you have an Enterprise Support plan and would like to gain access to your credits and discounts, please contact your AWS Technical Account Manager (TAM).

Cloud Operations Review – Enterprise Support customers are eligible for a Cloud Operations Review designed to help them to identify gaps in their approach to operating in the cloud. Originating from a set of operational best practices distilled from our experience with a large set of representative customers, this program provides you with a review of your cloud operations and the associated management practices. The program uses a four-pillared approach with a focus on preparing, monitoring, operating, and optimizing cloud-based systems in pursuit of operational excellence.

You can work with your TAM to set up a Cloud Operations Review.

Well-Architected Review – Enterprise Support customers are also eligible for a Well-Architected Review of their mission-critical workloads. While the Cloud Operations Review focuses on people and processes, this review allows our customers to measure their architecture against AWS best practices. Our goal is to help our customers construct architectures that are secure, reliable, performant, and cost-effective. For more information about our Well-Architected program, read Are You Well-Architected?


Jeff;

 

Now Open – AWS Asia Pacific (Mumbai) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-asia-pacific-mumbai-region/

We are expanding the AWS footprint again, this time with a new region in Mumbai, India. AWS customers in the area can use the new Asia Pacific (Mumbai) Region to better serve end users in India.

New Region
The new Mumbai region has two Availability Zones, raising the global total to 35. It supports Amazon Elastic Compute Cloud (EC2) (C4, M4, T2, D2, I2, and R3 instances are available) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, and Elastic Load Balancing.

It also supports the following services:

There are now three edge locations (Mumbai, Chennai, and New Delhi) in India. The locations support Amazon Route 53, Amazon CloudFront, and S3 Transfer Acceleration. AWS Direct Connect support is available via our Direct Connect Partners (listed below).

This is our thirteenth region (see the AWS Global Infrastructure map for more information). As usual, you can see the list of regions in the region menu of the Console:

Customers
There are over 75,000 active AWS customers in India, representing a diverse base of industries. In the time leading up to today’s launch, we have provided some of these customers with access to the new region in preview form. Two of them (Ola Cabs and NDTV) were kind enough to share some of their experience and observations with us:

Ola Cabs’ mobile app leverages AWS to redefine point-to-point transportation in more than 100 cities across India. AWS allows OLA to constantly innovate faster with new features and services for their customers, without compromising on availability or the customer experience of their service. Ankit Bhati (CTO and Co-Founder) told us:

We are using technology to create mobility for a billion Indians, by giving them convenience and access to transportation of their choice. Technology is a key enabler, where we use AWS to drive supreme customer experience, and innovate faster on new features & services for our customers. This has helped us reach 100+ cities & 550K driver partners across India. We do petabyte scale analytics using various AWS big data services and deep learning techniques, allowing us to bring our driver-partners close to our customers when they need them. AWS allows us to make 30+ changes a day to our highly scalable micro-services based platform consisting of 100s of low latency APIs, serving millions of requests a day. We have tried the AWS India region. It is great and should help us further enhance the experience for our customers.


NDTV, India’s leading media house, is watched by millions of people across the world. NDTV has been using AWS since 2009 to run their video platform and all their web properties. During the Indian general elections in May 2014, NDTV fielded an unprecedented amount of web traffic that scaled 26X, from 500 million hits per day to 13 billion hits on Election Day (regularly peaking at 400K hits per second), all running on AWS. According to Kawaljit Singh Bedi (CTO of NDTV Convergence):

NDTV is pleased to report very promising results in terms of reliability and stability of AWS’ infrastructure in India in our preview tests. Based on tests that our technical teams have run in India, we have determined that the network latency from the AWS India infrastructure Region are far superior compared to other alternatives. Our web and mobile traffic has jumped by over 30% in the last year and as we expand to new territories like eCommerce and platform-integration we are very excited on the new AWS India region launch. With the portfolio of services AWS will offer at launch, low latency, great reliability, and the ability to meet regulatory requirements within India, NDTV has decided to move these critical applications and IT infrastructure all-in to the AWS India region from our current set-up.

 


Here are some of our other customers in the region:

Tata Motors Limited, a leading Indian multinational automotive manufacturing company, runs its telematics systems on AWS. Fleet owners use this solution to monitor all vehicles in their fleet on a real-time basis. AWS has helped Tata Motors become more agile and has increased their speed of experimentation and innovation.

redBus is India’s leading bus ticketing platform that sells its tickets via web, mobile, and bus agents. It now covers over 67K routes in India with over 1,800 bus operators. redBus has scaled to sell more than 40 million bus tickets annually, up from just 2 million in 2010. At peak season, there are over 100 bus ticketing transactions every minute. The company also recently developed a new SaaS app on AWS that gives bus operators the option of handling their own ticketing and managing seat inventories. redBus has gone global, expanding to new geographic locations such as Singapore and Peru using AWS.

Hotstar is India’s largest premium streaming platform with more than 85K hours of drama and movies and coverage of every major global sporting event. Launched in February 2015, Hotstar quickly became one of the fastest adopted new apps anywhere in the world. It has now been downloaded by more than 68M users and has attracted followers on the back of a highly evolved video streaming technology and high attention to quality of experience across devices and platforms.

Macmillan India has provided publishing services to the education market in India for more than 120 years. Macmillan India moved its core enterprise applications, including Business Intelligence (BI), Sales and Distribution, Materials Management, Financial Accounting and Controlling, Human Resources, and a customer relationship management (CRM) system, from an existing data center in Chennai to AWS. By moving to AWS, Macmillan India has boosted SAP system availability to almost 100 percent and reduced the time it takes to provision infrastructure from 6 weeks to 30 minutes.

Partners
We are pleased to be working with a broad selection of partners in India. Here’s a sampling:

  • AWS Premier Consulting Partners – BlazeClan Technologies Pvt. Limited, Minjar Cloud Solutions Pvt Ltd, and Wipro.
  • AWS Consulting Partners – Accenture, BluePi, Cloudcover, Frontier, HCL, Powerupcloud, TCS, and Wipro.
  • AWS Technology Partners – Freshdesk, Druva, Indusface, Leadsquared, Manthan, Mithi, Nucleus Software, Newgen, Ramco Systems, Sanovi, and Vinculum.
  • AWS Managed Service Providers – Progressive Infotech and Spruha Technologies.
  • AWS Direct Connect Partners – AirTel, Colt Technology Services,  Global Cloud Xchange, GPX, Hutchison Global Communications, Sify, and Tata Communications.

Amazon Offices in India
We have opened six offices in India since 2011 – Delhi, Mumbai, Hyderabad, Bengaluru, Pune, and Chennai. These offices support our diverse customer base in India including enterprises, government agencies, academic institutions, small-to-mid-size companies, startups, and developers.

Support
The full range of AWS Support options (Basic, Developer, Business, and Enterprise) is also available for the Mumbai Region. All AWS support plans include an unlimited number of account and billing support cases, with no long-term contracts.

Compliance
Every AWS region is designed and built to meet rigorous compliance standards including ISO 27001, ISO 9001, ISO 27017, ISO 27018, SOC 1, SOC 2, and PCI DSS Level 1 (to name a few). AWS implements an Information Security Management System (ISMS) that is independently assessed by qualified third parties. These assessments address a wide variety of requirements, which are communicated to customers by making certifications and audit reports available, either on our public-facing website or upon request.

To learn more, take a look at the AWS Cloud Compliance page and our Data Privacy FAQ.

Use it Now
This new region is now open for business and you can start using it today! You can find additional information about the new region, documentation on how to migrate, customer use cases, information on training and other events, and a list of AWS Partners in India on the AWS site.

We have set up a seller of record in India (known as AISPL); please see the AISPL customer agreement for details.


Jeff;

 

AWS Week in Review – March 28, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-28-2016/

Let’s take a quick look at what happened in AWS-land last week:

Monday

March 28

Tuesday

March 29

Wednesday

March  30

Thursday

March 31

Friday

April 1

Sunday

April 3

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

Free qwikLABS Online Labs Through the End of March

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1X7XGKCJDDGGV/Free-qwikLABS-Online-Labs-Through-the-End-of-March

To celebrate 10 years of AWS, qwikLABS is offering 95 free online labs through the end of March 2016. Here are some of the labs related to security and compliance that you can take for free while the offer is live:

Introduction to AWS Identity and Access Management (IAM)

Introduction to AWS Key Management Service

Performing a Basic Audit of Your AWS Environment

Auditing Changes to Amazon EC2 Security Groups

Auditing Your Security with AWS Trusted Advisor

Microsoft ADFS and AWS IAM

These self-paced labs let you learn at your own pace from this AWS training partner. Start today!

– Craig
