Tag Archives: Amazon Elastic File System

EU Compliance Update: AWS’s 2017 C5 Assessment

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/eu-compliance-update-awss-2017-c5-assessment/

AWS has completed its 2017 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI), Germany's national cybersecurity authority, established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in Germany can use the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud.

Continuing our commitment to Germany and the AWS European Regions, AWS has added 16 services to this year's scope.

The English version of the C5 report is available through AWS Artifact. The German version of the report will be available through AWS Artifact in the coming weeks.

– Oliver

AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-post-update-october-2017-sixteen-additional-services/

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifiable Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.
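
To make that concrete, here is a minimal boto3 sketch of launching an encrypted Aurora PostgreSQL cluster; the identifiers, password, and key alias are placeholders rather than values from this post.

    import boto3

    rds = boto3.client("rds")

    # Create an Aurora PostgreSQL cluster with encryption at rest enabled.
    # The underlying storage, automated backups, read replicas, and
    # snapshots will all be encrypted with the specified KMS key.
    rds.create_db_cluster(
        DBClusterIdentifier="example-aurora-pg",   # placeholder name
        Engine="aurora-postgresql",
        MasterUsername="dbadmin",
        MasterUserPassword="choose-a-strong-password",
        StorageEncrypted=True,
        KmsKeyId="alias/example-key",              # a key you manage in KMS
    )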

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.
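
For example, here is a short boto3 sketch that watches an existing log group for a specific phrase and turns each match into a CloudWatch metric data point; the log group, filter, and metric names are hypothetical.

    import boto3

    logs = boto3.client("logs")

    # Count occurrences of "ERROR" in an application log group by
    # publishing a custom metric value for each matching log event.
    logs.put_metric_filter(
        logGroupName="/example-app/production",    # placeholder log group
        filterName="error-count",
        filterPattern="ERROR",
        metricTransformations=[{
            "metricName": "AppErrors",
            "metricNamespace": "ExampleApp",
            "metricValue": "1",
        }],
    )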

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.
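
For example, a short boto3 snippet along these lines reads one of those published metrics back; the cluster identifier is a placeholder.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Fetch the average CPU utilization of a Redis cluster over the
    # last hour, in five-minute periods.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ElastiCache",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "CacheClusterId", "Value": "example-redis"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    print(stats["Datapoints"])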

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.
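
Producing into a stream takes just a few lines; here is a hedged boto3 sketch with a placeholder stream name and record shape.

    import boto3
    import json

    kinesis = boto3.client("kinesis")

    # Put a single clickstream-style record onto a stream. Records with
    # the same partition key land on the same shard, preserving order.
    kinesis.put_record(
        StreamName="example-clickstream",
        Data=json.dumps({"user": "u123", "page": "/checkout"}).encode("utf-8"),
        PartitionKey="u123",
    )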

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.
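
As a sketch of that simplified security model, the following boto3 call launches a MariaDB instance with encryption at rest turned on; the identifier, instance class, and password are placeholders.

    import boto3

    rds = boto3.client("rds")

    # Launch a managed MariaDB instance with encrypted storage.
    rds.create_db_instance(
        DBInstanceIdentifier="example-mariadb",
        Engine="mariadb",
        DBInstanceClass="db.t2.medium",
        AllocatedStorage=100,              # GiB
        MasterUsername="dbadmin",
        MasterUserPassword="choose-a-strong-password",
        StorageEncrypted=True,
    )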

Amazon RDS SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. To learn more, read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE).

Amazon Route 53 – This is a highly available Domain Name System (DNS) web service. It translates names like www.example.com into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.
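
Submitting work is a single API call once a job queue and job definition exist; the names in this boto3 sketch are placeholders for resources you would create first.

    import boto3

    batch = boto3.client("batch")

    # Queue a job against an existing job queue and job definition.
    batch.submit_job(
        jobName="example-render-job",
        jobQueue="example-queue",
        jobDefinition="example-job-definition",
    )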

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.
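
Here is a minimal boto3 sketch of that workflow: create a key, then encrypt a small payload with it (the description and plaintext are placeholders). Each of these calls is recorded in CloudTrail.

    import boto3

    kms = boto3.client("kms")

    # Create a new customer master key and encrypt some data with it.
    key = kms.create_key(Description="example key for application data")
    result = kms.encrypt(
        KeyId=key["KeyMetadata"]["KeyId"],
        Plaintext=b"sensitive application data",
    )
    ciphertext = result["CiphertextBlob"]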

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.
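
The code itself can be as small as a single function; here is a minimal Python handler (the event shape is hypothetical).

    # AWS Lambda invokes this handler with the triggering event; there
    # are no servers for you to provision or manage.
    def lambda_handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": "Hello, " + name + "!"}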

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.



AWS Online Tech Talks – June 2017

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2017/

As the sixth month of the year, June is significant in that it is not only my birth month (very special), but it contains the summer solstice in the Northern Hemisphere, the day with the most daylight hours, and the winter solstice in the Southern Hemisphere, the day with the fewest daylight hours. In the United States, June is also the month in which we celebrate our dads with Father’s Day and have month-long celebrations of music, heritage, and the great outdoors.

Therefore, the month of June can be filled with lots of excitement. So why not add even more delight to the month by enhancing your cloud computing skills? This month's AWS Online Tech Talks features sessions on Artificial Intelligence (AI), Storage, Big Data, and Compute, among other great topics.

June 2017 – Schedule

Noted below are the upcoming live online technical sessions being held during the month of June. Make sure to register ahead of time so you won't miss out on these free talks conducted by AWS subject matter experts. All session times are shown in the Pacific Time (PDT) time zone.

Webinars featured this month are:

Thursday, June 1


9:00 AM – 10:00 AM: Deep Dive on Amazon Elastic File System

Big Data

10:30 AM – 11:30 AM: Migrating Big Data Workloads to Amazon EMR


12:00 Noon – 1:00 PM: Building AWS Lambda Applications with the AWS Serverless Application Model (AWS SAM)


Monday, June 5

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Lex


Tuesday, June 6

Management Tools

9:00 AM – 9:40 AM: Automated Compliance and Governance with AWS Config and AWS CloudTrail


Wednesday, June 7


9:00 AM – 9:40 AM: Backing up Amazon EC2 with Amazon EBS Snapshots

Big Data

10:30 AM – 11:10 AM: Intro to Amazon Redshift Spectrum: Quickly Query Exabytes of Data in S3


12:00 Noon – 12:40 PM: Introduction to AWS CodeStar: Quickly Develop, Build, and Deploy Applications on AWS


Thursday, June 8

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Polly

10:30 AM – 11:10 AM: Exploring the Business Use Cases for Amazon Rekognition


Monday, June 12

Artificial Intelligence

9:00 AM – 9:40 AM: Exploring the Business Use Cases for Amazon Machine Learning


Tuesday, June 13


9:00 AM – 9:40 AM: DevOps with Visual Studio, .NET and AWS


10:30 AM – 11:10 AM: Create, with Intel, an IoT Gateway and Establish a Data Pipeline to AWS IoT

Big Data

12:00 Noon – 12:40 PM: Real-Time Log Analytics using Amazon Kinesis and Amazon Elasticsearch Service


Wednesday, June 14


9:00 AM – 9:40 AM: Batch Processing with Containers on AWS

Security & Identity

12:00 Noon – 12:40 PM: Using Microsoft Active Directory across On-premises and Cloud Workloads


Thursday, June 15

Big Data

12:00 Noon – 1:00 PM: Building Big Data Applications with Serverless Architectures


Monday, June 19

Artificial Intelligence

9:00 AM – 9:40 AM: Deep Learning for Data Scientists: Using Apache MxNet and R on AWS


Tuesday, June 20


9:00 AM – 9:40 AM: Cloud Backup & Recovery Options with AWS Partner Solutions

Artificial Intelligence

10:30 AM – 11:10 AM: An Overview of AI on the AWS Platform


The AWS Online Tech Talks series covers a broad range of topics at varying technical levels. These sessions feature live demonstrations & customer examples led by AWS engineers and Solution Architects. Check out the AWS YouTube channel for more on-demand webinars on AWS technologies.


New – AWS Resource Tagging API

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/

AWS customers frequently use tags to organize their Amazon EC2 instances, Amazon EBS volumes, Amazon S3 buckets, and other resources. Over the past couple of years we have been working to make tagging more useful and more powerful. For example, we have added support for tagging during Auto Scaling, the ability to use up to 50 tags per resource, console-based support for the creation of resources that share a common tag (also known as resource groups), and the option to use Config Rules to enforce the use of tags.

As customers grow to the point where they are managing thousands of resources, each with up to 50 tags, they have been looking to us for additional tooling and options to simplify their work. Today I am happy to announce that our new Resource Tagging API is now available. You can use these APIs from the AWS SDKs or via the AWS Command Line Interface (CLI). You now have programmatic access to the same resource group operations that had been accessible only from the AWS Management Console.

Recap: Console-Based Resource Group Operations
Before I get into the specifics of the new API functions, I thought you would appreciate a fresh look at the console-based grouping and tagging model. I already have the ability to find and then tag AWS resources using a search that spans one or more regions. For example, I can select a long list of regions and then search them for my EC2 instances like this:

After I locate and select all of the desired resources, I can add a new tag key by clicking Create a new tag key and entering the desired tag key:

Then I enter a value for each instance (the new ProjectCode column):

Then I can create a resource group that contains all of the resources that are tagged with P100:

After I have created the resource group, I can locate all of the resources by clicking on the Resource Groups menu:

To learn more about this feature, read Resource Groups and Tagging for AWS.

New API for Resource Tagging
The API that we are announcing today gives you the power to tag, untag, and locate resources using tags, all from your own code. With these new API functions, you can operate on multiple resource types with a single set of functions.

Here are the new functions:

TagResources – Add tags to up to 20 resources at a time.

UntagResources – Remove tags from up to 20 resources at a time.

GetResources – Get a list of resources, with optional filtering by tags and/or resource types.

GetTagKeys – Get a list of all of the unique tag keys used in your account.

GetTagValues – Get all tag values for a specified tag key.
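
Here is a hedged boto3 sketch of two of the new functions in action, reusing the ProjectCode / P100 tag from the console walkthrough above; the ARNs and account id are placeholders.

    import boto3

    tagging = boto3.client("resourcegroupstaggingapi")

    # Tag two resources of different types in one call.
    tagging.tag_resources(
        ResourceARNList=[
            "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234example",
            "arn:aws:s3:::example-bucket",
        ],
        Tags={"ProjectCode": "P100"},
    )

    # Find every resource carrying that tag, regardless of type.
    response = tagging.get_resources(
        TagFilters=[{"Key": "ProjectCode", "Values": ["P100"]}]
    )
    for mapping in response["ResourceTagMappingList"]:
        print(mapping["ResourceARN"])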

These functions support the following AWS services and resource types:

AWS Service – Resource Types
Amazon CloudFront – Distribution
Amazon EC2 – AMI, Customer Gateway, DHCP Option, EBS Volume, Instance, Internet Gateway, Network ACL, Network Interface, Reserved Instance, Reserved Instance Listing, Route Table, Security Group (EC2-Classic), Security Group (VPC), Snapshot, Spot Batch, Spot Instance Request, Spot Instance, Subnet, Virtual Private Gateway, VPC, VPN Connection
Amazon ElastiCache – Cluster, Snapshot
Amazon Elastic File System – File System
Amazon Elasticsearch Service – Domain
Amazon EMR – Cluster
Amazon Glacier – Vault
Amazon Inspector – Assessment
Amazon Kinesis – Stream
Amazon Machine Learning – Batch Prediction, Data Source, Evaluation, ML Model
Amazon Redshift – Cluster
Amazon Relational Database Service – DB Instance, DB Option Group, DB Parameter Group, DB Security Group, DB Snapshot, DB Subnet Group, Event Subscription, Read Replica, Reserved DB Instance
Amazon Route 53 – Domain, Health Check, Hosted Zone
Amazon S3 – Bucket
Amazon WorkSpaces – WorkSpace
AWS Certificate Manager – Certificate
AWS Directory Service – Directory
AWS Storage Gateway – Gateway, Virtual Tape, Volume
Elastic Load Balancing – Load Balancer, Target Group

Things to Know
Here are a couple of things to keep in mind when you build code or write scripts that use the new API functions or the CLI equivalents:

Compatibility – The older, service-specific functions remain available and you can continue to use them.

Write Permission – The new tagging API adds another layer of permission on top of the existing policies that are specific to a single AWS service. For example, you will need access to both tag:TagResources and ec2:CreateTags in order to add a tag to an EC2 instance (see the sketch after this list).

Read Permission – You will need to have access to tag:GetResources, tag:GetTagKeys, and tag:GetTagValues in order to call functions that access tags and tag values.

Pricing – There is no charge for the use of these functions or for tags.
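
To illustrate the two permission layers, here is a boto3 sketch that grants a user both the tagging API actions and the EC2-specific action in one inline policy; the user and policy names are placeholders.

    import boto3
    import json

    iam = boto3.client("iam")

    # Both layers are needed to tag an EC2 instance via the new API:
    # the tag:* actions and the underlying ec2:CreateTags action.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "tag:TagResources",
                "tag:GetResources",
                "ec2:CreateTags",
            ],
            "Resource": "*",
        }],
    }
    iam.put_user_policy(
        UserName="example-user",
        PolicyName="AllowResourceTagging",
        PolicyDocument=json.dumps(policy),
    )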

Available Now
The new functions are supported by the latest versions of the AWS SDKs. You can use them to tag and access resources in all commercial AWS regions.



NICE EnginFrame – User-Friendly HPC on AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/nice-enginframe-user-friendly-hpc-on-aws/

Last year I announced that AWS had signed an agreement to acquire NICE, and that we planned to work together to create even better tools and services for high performance and scientific computing.

Today I am happy to be able to tell you about the launch of NICE EnginFrame 2017. This product is designed to simplify the process of setting up and running technical and scientific applications that take advantage of the power, scale, and flexibility of the AWS Cloud. You can set up a fully functional HPC cluster in less than an hour and then access it through a simple web-based user interface. If you are already familiar with and using EnginFrame, you can keep running it on-premises or make the move to the cloud.

AWS Inside
Your clusters (you can launch more than one if you’d like) reside within a Virtual Private Cloud (VPC) and are built using multiple AWS services and features including Amazon Elastic Compute Cloud (EC2) instances running the Amazon Linux AMI, Amazon Elastic File System for shared, NFS-style file storage, AWS Directory Service for user authentication, and Application Load Balancers for traffic management. These managed services allow you to focus on your workloads and your work. You don’t have to worry about system software upgrades, patches, scaling of processing or storage, or any of the other responsibilities that you’d have if you built and ran your own clusters.

EnginFrame is launched from an AWS CloudFormation template. The template is parameterized and self-contained, and helps to ensure that every cluster you launch will be configured in the same way. The template creates two separate CloudFormation stacks (collections of AWS resources) when you run it:

Main Stack – This stack hosts the shared, EFS-based storage for your cluster and an Application Load Balancer that routes incoming requests to the Default Cluster Stack. The stack is also host to a set of AWS Lambda functions that take care of setting up and managing IAM Roles and SSL certificates.

Default Cluster Stack – This stack is managed by the Main Stack and is where the heavy lifting takes place. The cluster is powered by CfnCluster and scales up and down as needed, terminating compute nodes when they are no longer needed. It also runs the EnginFrame portal.
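
If you prefer to launch the template programmatically rather than through the console, a boto3 call along these lines works; the template URL and parameter names here are placeholders, not the actual Quick Start values.

    import boto3

    cloudformation = boto3.client("cloudformation")

    # Launch the parameterized template; CAPABILITY_IAM acknowledges
    # that the stack creates IAM resources on your behalf.
    cloudformation.create_stack(
        StackName="enginframe-main",
        TemplateURL="https://s3.amazonaws.com/example-bucket/enginframe.template",
        Parameters=[
            {"ParameterKey": "KeyName", "ParameterValue": "my-ec2-key-pair"},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )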

EnginFrame Portal
After you launch your cluster, you will interact with it using the web-based EnginFrame portal. The portal will give you access to your applications (both batch and interactive), your data, and your jobs. You (or your cluster administrator) can create templates for batch applications and associate actions for specific file types.

EnginFrame includes an interactive file manager and a spooler view that lets you track the output from your jobs. In this release, NICE added a new file uploader that allows you to upload several files at the same time. The file uploader can also reduce upload time by caching commonly used files.

Running EnginFrame
In order to learn more about EnginFrame and to see how it works, I started at the EnginFrame Quick Start on AWS page, selected the US East (Northern Virginia) Region, and clicked on Agree and Continue:

After logging in to my AWS account, I am in the CloudFormation Console. The URL to the CloudFormation template is already filled in, so I click on Next to proceed:

Now I configure my stack. I give it a name, set up the network configuration, and enter a pair of passwords:

I finish by choosing an EC2 key pair (if I was a new EC2 user I would have to create and download it first), and setting up the configuration for my cluster. Then I click on Next:

I enter a tag (a key and a value) for tracking purposes, but leave the IAM Role and the Advanced options as-is, and click on Next once more:

On the next page, I review my settings (not shown), and acknowledge that CloudFormation will create some IAM resources on my behalf. Then I click on Create to get things started:


CloudFormation proceeds to create, configure, and connect all of the necessary AWS resources (this is a good time to walk your dog or say hello to your family; the process takes about half an hour):

When the status of the EnginFrame cluster becomes CREATE_COMPLETE, I can click on it, and then open up the Outputs section in order to locate the EnginFrameURL:
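
You can also read that output programmatically; here is a small boto3 sketch, assuming the stack name chosen earlier and the EnginFrameURL output key shown in the console.

    import boto3

    cloudformation = boto3.client("cloudformation")

    # Print the EnginFrameURL output once the stack is CREATE_COMPLETE.
    stack = cloudformation.describe_stacks(StackName="enginframe-main")["Stacks"][0]
    for output in stack.get("Outputs", []):
        if output["OutputKey"] == "EnginFrameURL":
            print(output["OutputValue"])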

Because the URL references an Application Load Balancer with a self-signed SSL certificate, I need to confirm my intent to visit the site:

EnginFrame is now running on the CloudFormation stack that I just launched. I log in with user name efadmin and the password that I set when I created the stack:

From here I can create a service. I'll start simple, with a service that compresses an uploaded file. I click on Admin's Portal in the blue title bar until I get here:

Then I click on Manage, Services, and New to define my service:

I click on Submit, choose the Job Script tab, add one line to the end of the default script, and Close the action window:

Then I Save the new service and click on Test Run in order to verify that it works as desired. I upload a file from my desktop and click on Submit to launch the job:

The job is then queued for execution on my cluster:

This just scratches the surface of what EnginFrame can do, but it is all that I have time for today.

Availability and Pricing
EnginFrame 2017 is available now and you can start using it today. You pay for the AWS resources that you use (EC2 instances, EFS storage, and so forth) and can use EnginFrame at no charge during the initial 90 day evaluation period. After that, EnginFrame is available under a license that is based on the number of concurrent users.



AWS Week in Review – March 6, 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-6-2017/

This edition includes all of our announcements, content from all of our blogs, and as much community-generated AWS content as I had time for!



Amazon EFS Update – On-Premises Access via Direct Connect

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-efs-update-on-premises-access-via-direct-connect-vpc/

I introduced you to Amazon Elastic File System last year (Amazon Elastic File System – Shared File Storage for Amazon EC2) and announced production readiness earlier this year (Amazon Elastic File System – Production-Ready in Three Regions). Since then, thousands of AWS customers have used it to set up, scale, and operate shared file storage in the cloud.

Today we are making EFS even more useful with the introduction of simple and reliable on-premises access via AWS Direct Connect. This has been a much-requested feature and I know that it will be useful for migration, cloudbursting, and backup. To use this feature for migration, you simply attach an EFS file system to your on-premises servers, copy your data to it, and then process it in the cloud as desired, leaving your data in AWS for the long term.  For cloudbursting, you would copy on-premises data to an EFS file system, analyze it at high speed using a fleet of Amazon Elastic Compute Cloud (EC2) instances, and then copy the results back on-premises or visualize them in Amazon QuickSight.

You’ll get the same file system access semantics including strong consistency and file locking, whether you access your EFS file systems from your on-premises servers or from your EC2 instances (of course, you can do both concurrently). You will also be able to enjoy the same multi-AZ availability and durability that is part-and-parcel of EFS.

In order to take advantage of this new feature, you will need to use Direct Connect to set up a dedicated network connection between your on-premises data center and an Amazon Virtual Private Cloud. Then you need to make sure that your file systems have mount targets in subnets that are reachable via the Direct Connect connection:

You also need to add a rule to the mount target's security group in order to allow inbound TCP traffic on port 2049 (NFS) from your on-premises servers:
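
A minimal boto3 sketch of that rule, assuming a placeholder security group id and an on-premises address range of 10.0.0.0/16:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow NFS traffic from the on-premises network to reach the
    # EFS mount target.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",    # mount target's security group
        IpProtocol="tcp",
        FromPort=2049,                     # NFS
        ToPort=2049,
        CidrIp="10.0.0.0/16",              # on-premises range via Direct Connect
    )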

After you create the file system, you can reference the mount targets by their IP addresses, NFS-mount them on-premises, and start copying files. The IP addresses are available from within the AWS Management Console:

The Management Console also provides you with access to step-by-step directions! Simply click on the On-premises mount instructions:

And follow along:

This feature is available today at no extra charge in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and US East (Ohio) Regions.



How to Assign Permissions Using New AWS Managed Policies for Job Functions

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/how-to-assign-permissions-using-new-aws-managed-policies-for-job-functions/

Today, AWS Identity and Access Management (IAM) made 10 AWS managed policies available that align with common job functions in organizations. AWS managed policies enable you to set permissions using policies that AWS creates and manages, and with a single AWS managed policy for job functions, you can grant the permissions necessary for network or database administrators, for example.

You can attach multiple AWS managed policies to your users, roles, or groups if they span multiple job functions. As with all AWS managed policies, AWS will keep these policies up to date as we introduce new services or actions. You can use any AWS managed policy as a template or starting point for your own custom policy if the policy does not fully meet your needs (however, AWS will not automatically update any of your custom policies). In this blog post, I introduce the new AWS managed policies for job functions and show you how to use them to assign permissions.

The following table lists the new AWS managed policies for job functions and their descriptions.

Job Function – Description
Administrator – This policy grants full access to all AWS services.
Billing – This policy grants permissions for billing and cost management. These permissions include viewing and modifying budgets and payment methods. An additional step is required to access the AWS Billing and Cost Management pages after assigning this policy.
Data Scientist – This policy grants permissions for data analytics and analysis. Access to the following AWS services is a part of this policy: Amazon Elastic MapReduce, Amazon Redshift, Amazon Kinesis, Amazon Machine Learning, AWS Data Pipeline, Amazon S3, and Amazon Elastic File System. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
Database Administrator – This policy grants permissions to all AWS database services. Access to the following AWS services is a part of this policy: Amazon DynamoDB, Amazon ElastiCache, Amazon RDS, and Amazon Redshift. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
Developer Power User – This policy grants full access to all AWS services except for IAM.
Network Administrator – This policy grants permissions required for setting up and configuring network resources. Access to the following AWS services is included in this policy: Amazon Route 53, Route 53 Domains, Amazon VPC, and AWS Direct Connect. This policy grants access to actions that require delegating permissions to CloudWatch Logs. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for this service.
Security Auditor – This policy grants permissions required for configuring security settings and for monitoring events and logs in the account.
Support User – This policy grants permissions to troubleshoot and resolve issues in an AWS account. This policy also enables the user to contact AWS Support to create and manage cases.
System Administrator – This policy grants permissions needed to support system and development operations. Access to the following AWS services is included in this policy: AWS CloudTrail, Amazon CloudWatch, CodeCommit, CodeDeploy, AWS Config, AWS Directory Service, EC2, IAM, AWS KMS, Lambda, RDS, Route 53, S3, SES, SQS, AWS Trusted Advisor, and Amazon VPC. This policy grants access to actions that require delegating permissions to EC2, CloudWatch, Lambda, and RDS. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
View Only User – This policy grants permissions to view existing resources across all AWS services within an account.

Some of the policies in the preceding table enable you to take advantage of additional features that are optional. These policies grant access to iam:PassRole, which passes a role to delegate permissions to an AWS service to carry out actions on your behalf. For example, the Network Administrator policy passes a role to CloudWatch's flow-logs-vpc so that a network administrator can log and capture IP traffic for all the Amazon VPCs they create. You must create IAM service roles to take advantage of the optional features. To follow security best practices, the policies already include permissions to pass only those optional service roles that follow a naming convention. This avoids escalating or granting unnecessary permissions if there are other service roles in the AWS account. If your users require the optional service roles, you must create a role that follows the naming conventions specified in the policy and then grant permissions to the role.

For example, your system administrator may want to run an application on an EC2 instance, which requires passing a role to Amazon EC2. The system administrator policy already has permissions to pass a role named ec2-sysadmin-*. When you create a role called ec2-sysadmin-example-application, for example, and assign the necessary permissions to the role, the role is passed automatically to the service and the system administrator can start using the features. The documentation summarizes each of the use cases for each job function that requires delegating permissions to another AWS service.

How to assign permissions by using an AWS managed policy for job functions

Let’s say that your company is new to AWS, and you have an employee, Alice, who is a database administrator. You want Alice to be able to use and manage all the database services while not giving her full administrative permissions. In this scenario, you can use the Database Administrator AWS managed policy. This policy grants view, read, write, and admin permissions for RDS, DynamoDB, Amazon Redshift, ElastiCache, and other AWS services a database administrator might need.

The Database Administrator policy passes several roles for various use cases. The relevant statement takes the following form, excerpted here to show just the rds-monitoring-role used later in this post (the full policy lists several more service roles):

            "Effect": "Allow",
            "Action": ["iam:PassRole"],
            "Resource": ["arn:aws:iam::*:role/rds-monitoring-role"]

However, in order for user Alice to be able to leverage any of the features that require another service, you must create a service role and grant it permissions. In this scenario, Alice wants only to monitor RDS databases. To enable Alice to monitor RDS databases, you must create a role called rds-monitoring-role and assign the necessary permissions to the role.
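
If you prefer to script it, here is a minimal boto3 sketch of the same setup that the console steps below walk through: attach the managed policy to Alice, then create the service role named in the policy. The job-function and service-role policy ARNs are the standard AWS managed ones.

    import boto3
    import json

    iam = boto3.client("iam")

    # Attach the Database Administrator job function policy to Alice.
    iam.attach_user_policy(
        UserName="Alice",
        PolicyArn="arn:aws:iam::aws:policy/job-function/DatabaseAdministrator",
    )

    # Create the rds-monitoring-role that the policy allows passing,
    # trusted by the RDS Enhanced Monitoring service.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "monitoring.rds.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }
    iam.create_role(
        RoleName="rds-monitoring-role",
        AssumeRolePolicyDocument=json.dumps(trust),
    )
    iam.attach_role_policy(
        RoleName="rds-monitoring-role",
        PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole",
    )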

Steps to assign permissions to the Database Administrator policy in the IAM console

  1. Sign in to the IAM console.
  2. In the left pane, choose Policies and type database in the Filter box.
  3. Choose the Database Administrator policy, and then choose Attach in the Policy Actions drop-down menu.
  4. Choose the user (in this case, I choose Alice), and then choose Attach Policy.
  5. User Alice now has Database Administrator permissions. However, for Alice to monitor RDS databases, you must create a role called rds-monitoring-role. To do this, in the left pane, choose Roles, and then choose Create New Role.
  6. For the Role Name, type rds-monitoring-role to match the name specified in the Database Administrator policy. Choose Next Step.
  7. In the AWS Service Roles section, scroll down and choose Amazon RDS Role for Enhanced Monitoring.
  8. Choose the AmazonRDSEnhancedMonitoringRole policy and then choose Select.
  9. After reviewing the role details, choose Create Role.

User Alice now has Database Administrator permissions and can monitor RDS databases. To use other roles with the Database Administrator AWS managed policy, see the documentation.

To learn more about AWS managed policies for job functions, see the IAM documentation. If you have comments about this post, submit them in the “Comments” section below. If you have questions about these new policies, please start a new thread on the IAM forum.

– Joy

Now Open – AWS US East (Ohio) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-us-east-ohio-region/

As part of our ongoing plan to expand the AWS footprint, I am happy to announce that our new US East (Ohio) Region is now available. In conjunction with the existing US East (Northern Virginia) Region, AWS customers in the Eastern part of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Ohio Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports (deep breath) Amazon API Gateway, Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, Amazon CloudWatch (including CloudWatch Events and CloudWatch Logs), AWS CloudTrail, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Container Registry, Amazon ECS, Amazon Elastic File System, Amazon ElastiCache, AWS Elastic Beanstalk, Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Import/Export Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Lambda, AWS Marketplace, Mobile Hub, AWS OpsWorks, Amazon Relational Database Service (RDS), Amazon Redshift, Amazon Route 53, Amazon Simple Storage Service (S3), AWS Service Catalog, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Storage Gateway, Amazon Simple Workflow Service (SWF), AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, I2, M4, R3, T2, and X1 instances. As is the case with all of our newer Regions, instances must be launched within a Virtual Private Cloud (read Virtual Private Clouds for Everyone to learn more).
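
Using the new Region from code is just a matter of pointing a client at it; us-east-2 is the Region code for US East (Ohio).

    import boto3

    # Create an EC2 client pinned to the new Region and list its
    # Availability Zones.
    ec2 = boto3.client("ec2", region_name="us-east-2")
    zones = ec2.describe_availability_zones()["AvailabilityZones"]
    print([zone["ZoneName"] for zone in zones])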

Well Connected
Here are some round-trip network metrics that you may find interesting (all names are airport codes, as is apparently customary in the networking world; all times are +/- 2 ms):

  • 12 ms to IAD (home of the US East (Northern Virginia) Region).
  • 20 ms to JFK (home to an Internet exchange point).
  • 29 ms to ORD (home to a pair of Direct Connect locations hosted by QTS and Equinix and another exchange point).
  • 91 ms to SFO (home of the US West (Northern California) Region).

With just 12 ms of round-trip latency between US East (Ohio) and US East (Northern Virginia), you can make good use of unique AWS features such as S3 Cross-Region Replication, Cross-Region Read Replicas for Amazon Aurora, Cross-Region Read Replicas for MySQL, and Cross-Region Read Replicas for PostgreSQL. Data transfer between the two Regions is priced at the Inter-AZ price ($0.01 per GB), making your cross-region use cases even more economical.

Also on the networking front, we have agreed to work together with Ohio State University to provide AWS Direct Connect access to OARnet. This 100-gigabit network connects colleges, schools, medical research hospitals, and state government across Ohio. This connection provides local teachers, students, and researchers with a dedicated, high-speed network connection to AWS.

14 Regions, 38 Availability Zones, and Counting
Today’s launch of this 3-AZ Region expands our global footprint to a grand total of 14 Regions and 38 Availability Zones. We are also getting ready to open up a second AWS Region in China, along with other new AWS Regions in Canada, France, and the UK.

Since there’s been some industry-wide confusion about the difference between Regions and Availability Zones of late, I think it is important to understand the differences between these two terms. Each Region is a physical location where we have one or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

Around the office, we sometimes play with analogies that can serve to explain the difference between the two terms. My favorites are “Hotels vs. hotel rooms” and “Apple trees vs. apples.” So, pick your analogy, but be sure that you know what it means!



New AWS Quick Starts for Atlassian JIRA Software and Bitbucket Data Center

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-aws-quick-starts-for-atlassian-jira-software-and-bitbucket-data-center/

The AWS Quick Starts help you to rapidly deploy reference implementations of software solutions on the AWS Cloud. You can use the Quick Starts to easily test drive and consume software while taking advantage of best practices promoted by AWS and the software partner.

Today I would like to tell you about a pair of Quick Start guides that were developed in collaboration with APN Advanced Technology Partner (and DevOps competency holder) Atlassian to help you to deploy their JIRA Software Data Center and Bitbucket Data Center on AWS.

Atlassian’s Data Center offerings are designed for customers that have large development teams and a need for scalable, highly available development and project management tools. Because these tools are invariably mission-critical, robustness and resilience are baseline requirements, and production deployments are always run in a multi-node or cluster configuration.

New Quick Starts
JIRA Software Data Center is a project and issue management solution for agile teams and Bitbucket Data Center is a Git repository solution, both of which provide large teams working on multiple projects with high availability and performance at scale. With these two newly introduced Atlassian Quick Starts, you have access to a thoroughly tested, fully supported reference architecture that greatly simplifies and accelerates the deployment of these products on AWS.

The Quick Starts include AWS CloudFormation templates that allow you to deploy Bitbucket and/or JIRA Software into a new or existing Virtual Private Cloud (VPC). If you want to use a new VPC, the template will create it, along with public and private subnets and a NAT Gateway to allow EC2 instances in the private subnet to connect to the Internet (in regions where the NAT Gateway is not available, the template will create a NAT instance instead). If you are already using AWS and have a suitable VPC, you can deploy JIRA Software Data Center and Bitbucket Data Center there instead.

You will need to sign up for evaluation licenses for the Atlassian products that you intend to launch.

Bitbucket Data Center
The Bitbucket Data Center Quick Start deploys the following components as part of the deployment:

Amazon RDS PostgreSQL – Bitbucket Data Center requires a supported external database. Amazon RDS for PostgreSQL in a Multi-AZ configuration allows failover in the event the master node fails.

NFS Server –  Bitbucket Data Center uses a shared file system to store the repositories in a common location that is accessible to multiple Bitbucket nodes. The Quick Start architecture implements the shared file system in an EC2 instance with an attached Amazon Elastic Block Store (EBS) volume.

Bitbucket Auto Scaling Group – The Bitbucket Data Center product is installed on Amazon Elastic Compute Cloud (EC2) instances in an Auto Scaling group. The deployment will scale out and in, based on utilization.

Amazon Elasticsearch Service – Bitbucket Data Center uses Elasticsearch for indexing and searching.  The Quick Start architecture uses Amazon Elasticsearch Service, a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud.

JIRA Software Data Center
The JIRA Software Data Center Quick Start deploys the following components as part of the deployment:

Amazon RDS PostgreSQL – JIRA Data Center requires a supported external database. Amazon RDS for PostgreSQL in a Multi-AZ configuration allows failover in the event the master node fails.

Amazon Elastic File System – JIRA Software Data Center uses a shared file system to store artifacts in a common location that is accessible to multiple JIRA nodes. The Quick Start architecture implements a highly available shared file system using Amazon Elastic File System.

JIRA Auto Scaling Group – The JIRA Data Center product is installed on Amazon Elastic Compute Cloud (EC2) instances in an Auto Scaling group. The deployment will scale out and in, based on utilization.

We will continue to work with Atlassian to update and refine these two new Quick Starts.  We’re also working on two additional Quick Starts for Atlassian Confluence and Atlassian JIRA Service Desk and hope to have them ready before AWS re:Invent.

To get started, please visit the Bitbucket Data Center Quick Start or the JIRA Software Data Center Quick Start. You can also head over to Atlassian’s Quick Start page. The templates are available today; give them a whirl and let us know what you think!