AWS Identity and Access Management (IAM) now makes it easier for you to control access to your AWS resources by using the AWS organization of IAM principals (users and roles). For some services, you grant permissions using resource-based policies to specify the accounts and principals that can access the resource and what actions they can perform on it. Now, you can use a new condition key, aws:PrincipalOrgID, in these policies to require all principals accessing the resource to be from an account in the organization. For example, let’s say you have an Amazon S3 bucket policy and you want to restrict access to only principals from AWS accounts inside of your organization. To accomplish this, you can define the aws:PrincipalOrgID condition and set the value to your organization ID in the bucket policy. Your organization ID is what sets the access control on the S3 bucket. Additionally, when you use this condition, policy permissions apply when you add new accounts to this organization without requiring an update to the policy.
In this post, I walk through the details of the new condition and show you how to restrict access to only principals in your organization using S3.
Condition concepts
Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.
Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.
The aws:PrincipalOrgID condition key
You can use this condition key to apply a filter to the Principal element of a resource-based policy. You can use any string operator, such as StringLike, with this condition, and specify the AWS organization ID as its value.
Condition key: aws:PrincipalOrgID
Description: Validates if the principal accessing the resource belongs to an account in your organization.
Operator(s): All string operators (for example, StringEquals or StringLike)
Value: Your AWS organization ID
Example: Restrict access to only principals from my organization
Let's consider an example where I want to give specific IAM principals in my organization direct access to my S3 bucket, 2018-Financial-Data, which contains sensitive financial information. My AWS organization contains multiple accounts, and only some IAM users from those accounts need access to this financial report.
To grant this access, I author a resource-based policy for my S3 bucket as shown below. In this policy, I list the individuals who I want to grant access. For the sake of this example, let’s say that while doing so, I accidentally specify an incorrect account ID. This means a user named Steve, who is not in an account in my organization, can now access my financial report. To require the principal account to be in my organization, I add a condition to my policy using the global condition key aws:PrincipalOrgID. This condition requires that only principals from accounts in my organization can access the S3 bucket. This means that although Steve is one of the principals in the policy, he can’t access the financial report because the account that he is a member of doesn’t belong to my organization.
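A sketch of what that bucket policy might look like follows; the user ARNs, account IDs, and organization ID are placeholders rather than the original values:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetObjectForOrgPrincipalsOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:user/Alice",
          "arn:aws:iam::444455556666:user/Steve"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::2018-Financial-Data/*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid"
        }
      }
    }
  ]
}

With the Condition block in place, even a principal listed in the Principal element is denied access if its account is not part of the organization identified by o-exampleorgid.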
In the policy above, I specify the principals that I grant access to by using the Principal element of the statement. Next, I add s3:GetObject as the action and 2018-Financial-Data/* as the resource to grant read access to my S3 bucket. Finally, I add the new condition key aws:PrincipalOrgID and specify my organization ID in the Condition element of the statement to make sure only principals from accounts in my organization can access this bucket.
Summary
You can now use the aws:PrincipalOrgID condition key in your resource-based policies to more easily restrict access to IAM principals from accounts in your AWS organization. For more information about this global condition key and policy examples using aws:PrincipalOrgID, read the IAM documentation.
If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
This blog post was co-authored by Ujjwal Ratan, a senior AI/ML solutions architect on the global life sciences team.
Healthcare data is generated at an ever-increasing rate and is predicted to reach 35 zettabytes by 2020. Being able to cost-effectively and securely manage this data, whether for patient care, research, or legal reasons, is increasingly important for healthcare providers.
Healthcare providers must have the ability to ingest, store, and protect large volumes of data, including clinical, genomic, device, financial, supply chain, and claims data. AWS is well-suited to this data deluge, with a wide variety of ingestion, storage, and security services (e.g., AWS Direct Connect, Amazon Kinesis Streams, Amazon S3, Amazon Macie) for customers to handle their healthcare data. In a recent Healthcare IT News article, healthcare thought leader John Halamka noted, “I predict that five years from now none of us will have datacenters. We’re going to go out to the cloud to find EHRs, clinical decision support, analytics.”
I realize simply storing this data is challenging enough. Magnifying the problem is the fact that healthcare data is increasingly attractive to cyber attackers, making security a top priority. According to Mariya Yao in her Forbes column, it is estimated that individual medical records can be worth hundreds or even thousands of dollars on the black market.
In this first post of a two-part series, I address the value that AWS can bring to customers for ingesting, storing, and protecting providers' healthcare data. I describe the key components of any cloud-based healthcare workload and the services AWS provides to meet these requirements. In part 2 of this series, we will dive deep into the AWS services used for advanced analytics, artificial intelligence, and machine learning.
The data tsunami is upon us
So where is this data coming from? In addition to the ubiquitous electronic health record (EHR), the sources of this data include:
genomic sequencers
devices such as MRIs, x-rays and ultrasounds
sensors and wearables for patients
medical equipment telemetry
mobile applications
Additional sources of data come from non-clinical, operational systems such as:
human resources
finance
supply chain
claims and billing
Data from these sources can be structured (e.g., claims data) as well as unstructured (e.g., clinician notes). Some data comes across in streams such as that taken from patient monitors, while some comes in batch form. Still other data comes in near-real time such as HL7 messages. All of this data has retention policies dictating how long it must be stored. Much of this data is stored in perpetuity as many systems in use today have no purge mechanism. AWS has services to manage all these data types as well as their retention, security and access policies.
Imaging is a significant contributor to this data tsunami. Increasing demand for early-stage diagnoses along with aging populations drive increasing demand for images from CT, PET, MRI, ultrasound, digital pathology, X-ray and fluoroscopy. For example, a thin-slice CT image can be hundreds of megabytes. Increasing demand and strict retention policies make storage costly.
Due to the plummeting cost of gene sequencing, molecular diagnostics (including liquid biopsy) is a large contributor to this data deluge. Many predict that as the value of molecular testing becomes more evident, reimbursement models will change and it will increasingly become the standard of care. According to the Washington Post article “Sequencing the Genome Creates so Much Data We Don’t Know What to do with It,”
“Some researchers predict that up to one billion people will have their genome sequenced by 2025 generating up to 40 exabytes of data per year.”
Although genomics is primarily used for oncology diagnostics today, it's also used for other purposes, such as pharmacogenomics, which helps predict how an individual will metabolize a medication.
Reference Architecture
It is increasingly challenging for the typical hospital, clinic or physician practice to securely store, process and manage this data without cloud adoption.
AWS offers a variety of ingestion techniques depending on the nature of the data, including its size, frequency, and structure. AWS Snowball and AWS Snowmobile are appropriate for extremely large, secure data transfers, whether one-time or episodic. AWS Glue is a fully managed ETL service for securely moving data from on premises to AWS, and Amazon Kinesis can be used for ingesting streaming data.
Amazon S3, Amazon S3 IA, and Amazon Glacier are economical data-storage services with a pay-as-you-go pricing model that expands (or shrinks) with the customer's requirements.
The above architecture has four distinct components: ingestion, storage, security, and analytics. In this post I will dive deeper into the first three components: ingestion, storage, and security. In part 2, I will look at how to use AWS analytics services to derive value from, and optimize, your healthcare data.
Ingestion
A typical provider data center consists of many systems with varied datasets. AWS provides multiple tools and services to effectively and securely connect to these data sources and ingest data in various formats. Customers can choose from a range of services and use them in accordance with their use case.
For use cases involving one-time (or periodic), very large data migrations into AWS, customers can take advantage of AWS Snowball devices. These devices come in two sizes, 50 TB and 80 TB, and can be combined to create a petabyte-scale data transfer solution.
The devices are easy to connect and load, and they are shipped to AWS, avoiding the network bottlenecks associated with such large-scale data migrations. The devices are extremely secure, supporting 256-bit encryption, and come in a tamper-resistant enclosure. AWS Snowball imports data into Amazon S3, which can then interface with other AWS compute services to process that data in a scalable manner.
For use cases that involve storing a portion of datasets on premises for active use and offloading the rest to AWS, the AWS Storage Gateway service can be used. The service allows you to seamlessly integrate on-premises applications via standard storage protocols, such as iSCSI or NFS, mounted on a gateway appliance. It supports a file interface, a volume interface, and a tape interface, which can be used for a range of use cases such as disaster recovery, backup and archiving, cloud bursting, storage tiering, and migration.
The AWS Storage Gateway appliance can use the AWS Direct Connect service to establish a dedicated network connection from the on-premises data center to AWS.
Specific Industry Use Cases
By using the AWS proposed reference architecture for disaster recovery, healthcare providers can ensure their data assets are securely stored on the cloud and are easily accessible in the event of a disaster. The “AWS Disaster Recovery” whitepaper includes details on options available to customers based on their desired recovery time objective (RTO) and recovery point objective (RPO).
AWS is an ideal destination for offloading large volumes of less-frequently-accessed data. These datasets are rarely used in active compute operations but are exceedingly important to retain for reasons such as compliance. By storing these datasets on AWS, customers can take advantage of the highly durable platform to securely store their data and also retrieve it easily when needed. For more details on how AWS enables customers to run backup and archival use cases on AWS, please refer to the following set of whitepapers.
A healthcare provider may have a variety of databases spread throughout the hospital system supporting critical applications such as EHR, PACS, finance, and many more. These datasets often need to be aggregated to derive information and calculate metrics to optimize business processes. AWS Glue is a fully managed Extract, Transform, and Load (ETL) service that can read data from a JDBC-enabled, on-premises database and transfer the datasets into AWS services such as Amazon S3, Amazon Redshift, and Amazon RDS. This allows customers to create transformation workflows that integrate smaller datasets from multiple sources and aggregate them on AWS.
Healthcare providers deal with a variety of streaming datasets, which often have to be analyzed in near real time. These datasets come from a variety of sources, such as sensors, messaging buses, and social media, and often do not adhere to an industry standard. The Amazon Kinesis suite of services, which includes Amazon Kinesis Streams, Amazon Kinesis Firehose, and Amazon Kinesis Analytics, is an ideal set of services for deriving value from streaming data.
Example: Using AWS Glue to de-identify and ingest healthcare data into S3
Let's consider a scenario in which a provider maintains patient records in a database that they want to ingest into S3. The provider also wants to de-identify the data by stripping personally identifiable attributes and storing the non-identifiable information in an S3 bucket. This bucket is different from the one that contains identifiable information. Doing this allows the healthcare provider to separate the sensitive information and place more restrictions on it via S3 bucket policies.
To ingest records into S3, we create a Glue job that reads from the source database using a Glue connection. The connection is also used by a Glue crawler to populate the Glue Data Catalog with the schema of the source database. We use a Glue development endpoint and a Zeppelin notebook server on EC2 to develop and execute the job.
Step 1: Import the necessary libraries and set a GlueContext, which is a wrapper around the SparkContext:
Step 2: Create a dataframe from the source data. I call the dataframe “readmissionsdata”. Here is what the schema would look like:
Step 3: Now select the columns that contain identifiable information and store them in a new dataframe. Call the new dataframe “phi”.
Step 4: Non-PHI columns are stored in a separate dataframe. Call this dataframe “nonphi”.
Step 5: Write the two dataframes into two separate S3 buckets
Once successfully executed, the PHI and non-PHI attributes are stored in two separate files in two separate buckets that can be individually maintained.
Storage
In 2016, 327 healthcare providers reported a protected health information (PHI) breach, affecting 16.4 million patient records [1]. In 2017, 342 data breaches were reported, involving 3.2 million patient records [2].
To date, AWS has released 51 HIPAA-eligible services to help customers address security challenges and is in the process of making many more services HIPAA-eligible. These HIPAA-eligible services (along with all other AWS services) help customers build solutions that comply with HIPAA security and auditing requirements. A catalogue of HIPAA-eligible services can be found at AWS HIPAA-eligible services. It is important to note that AWS manages physical and logical access controls for the AWS boundary. However, the overall security of your workloads is a shared responsibility, where you are responsible for controlling user access to content in your AWS accounts.
AWS storage services allow you to store data efficiently while maintaining high durability and scalability. By using Amazon S3 as the central storage layer, you can take advantage of the Amazon S3 storage management features to get operational metrics on your datasets and transition them between various storage classes to save costs. By tagging objects on Amazon S3, you can build a governance layer on Amazon S3 to grant role-based access to objects using AWS IAM and Amazon S3 bucket policies.
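As an illustration of tag-based access control, the following is a minimal sketch of a bucket policy statement that lets a hypothetical auditor role read only objects tagged as de-identified. The role name, bucket name, and tag key/value are placeholder assumptions, not values from this post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOfDeidentifiedObjectsOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/AnalyticsAuditor"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-healthcare-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/deidentified": "true"
        }
      }
    }
  ]
}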
To learn more about the Amazon S3 storage management features, see the following link.
Security
In the example above, we are storing the PHI information in a bucket named “phi.” Now, we want to protect this information to make sure it's encrypted, is not accessible to unauthorized users, and has all access requests to the data logged.
Encryption: S3 provides settings to enable default encryption on a bucket. This ensures any object in the bucket is encrypted by default.
Logging: S3 provides object-level logging that can be used to capture all API calls to the object. The API calls are logged in AWS CloudTrail for easy access and consolidation. S3 also supports events to proactively alert customers of read and write operations.
Access control: Customers can use S3 bucket policies and IAM policies to restrict access to the phi bucket. They can also add a restriction to enforce multi-factor authentication on the bucket. For example, the following policy enforces multi-factor authentication on the phi bucket:
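A minimal sketch of such a policy is shown below; the bucket name “phi” comes from the example, while the statement ID is illustrative:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::phi",
        "arn:aws:s3:::phi/*"
      ],
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}

Requests made without MFA are denied, whether the aws:MultiFactorAuthPresent key is absent or explicitly false, regardless of any allows granted elsewhere.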
In Part 1 of this blog, we detailed the ingestion, storage, security and management of healthcare data on AWS. Stay tuned for part two where we are going to dive deep into optimizing the data for analytics and machine learning.
We launched AWS Support a full decade ago, with Gold and Silver plans focused on Amazon EC2, Amazon S3, and Amazon SQS. Starting from that initial offering, backed by a small team in Seattle, AWS Support now encompasses thousands of people working from more than 60 locations.
A Quick Look Back
Over the years, that offering has matured and evolved in order to meet the needs of an increasingly diverse base of AWS customers. We aim to support you at every step of your cloud adoption journey, from your initial experiments to the time you deploy mission-critical workloads and applications.
We have worked hard to make our support model helpful and proactive. We do our best to provide you with the tools, alerts, and knowledge that will help you to build systems that are secure, robust, and dependable. Here are some of our most recent efforts toward that goal:
Trusted Advisor S3 Bucket Policy Check – AWS Trusted Advisor provides you with five categories of checks and makes recommendations that are designed to improve security and performance. Earlier this year we announced that the S3 Bucket Permissions Check is now free, and available to all AWS users. If you are signed up for the Business or Professional level of AWS Support, you can also monitor this check (and many others) using Amazon CloudWatch Events. You can use this to monitor and secure your buckets without human intervention.
Personal Health Dashboard – This tool provides you with alerts and guidance when AWS is experiencing events that may affect you. You get a personalized view into the performance and availability of the AWS services that underlie your AWS resources. It also generates Amazon CloudWatch Events so that you can initiate automated failover and remediation if necessary.
Well Architected / Cloud Ops Review – We’ve learned a lot about how to architect AWS-powered systems over the years and we want to share everything we know with you! The AWS Well-Architected Framework provides proven, detailed guidance in critical areas including operational excellence, security, reliability, performance efficiency, and cost optimization. You can read the materials online and you can also sign up for the online training course. If you are signed up for Enterprise support, you can also benefit from our Cloud Ops review.
Infrastructure Event Management – If you are launching a new app, kicking off a big migration, or hosting a large-scale event similar to Prime Day, we are ready with guidance and real-time support. Our Infrastructure Event Management team will help you to assess the readiness of your environment and work with you to identify and mitigate risks ahead of time.
To learn more about how AWS customers have used AWS Support to realize all of the benefits that I noted above, watch these videos (and find more on the Customer Testimonials page):
The Amazon retail site makes heavy use of AWS. You can read my post, Prime Day 2017 – Powered by AWS, to learn more about the process of preparing to sustain a record-setting amount of traffic and to accept a like number of orders.
Come and Join Us
The AWS Support Team is in continuous hiring mode and we have openings all over the world! Here are a couple of highlights:
There’s often tension between distributed and centralized control, especially in larger organizations. While a distributed control model allows teams to move fast and to respond to specialized local needs, a central model can provide the right level of oversight for global initiatives and challenges that span all teams.
We’ve seen this challenge arise first-hand when AWS customers grow to the point where their application footprint encompasses a plethora of AWS regions, AWS accounts, development teams, and applications. They love the fact that AWS increases their agility and responsiveness, while letting them deploy resources in the most appropriate location. This diversity and scale brings new challenges when it comes to security and compliance. The freedom to innovate must be balanced by the need to protect important data and to respond quickly when threats emerge.
Over the last couple of years we have provided our customers with an increasingly broad set of options for protection including AWS WAF and AWS Shield. Our customers are making great use of all of these options, and have asked for the ability to manage them from a single, central location.
Meet AWS Firewall Manager
AWS Firewall Manager is designed to help these customers! It gives them the freedom to use multiple AWS accounts and to host applications in any desired region while maintaining centralized control over their organization’s security settings and profile. Developers can develop and innovators can innovate, while the security team gains the ability to respond quickly, uniformly, and globally to potential threats and actual attacks.
With automated policy enforcement across accounts and applications, your security team can be confident that new and existing applications comply with organization-wide security policies when they use Firewall Manager. They can find applications and AWS resources that don’t measure up, and bring them into compliance in minutes.
Firewall Manager is built around named policies that contain WAF rule sets and optional AWS Shield advanced protection. Each policy applies to a specific set of AWS resources, specified by account, resource type, resource identifier, or tag. Policies can be applied automatically to all matching resources, or to a subset that you select. Policies can include WAF rules drawn from within the organization, and also those created by AWS Partners such as Imperva, F5, Trend Micro, and other AWS Marketplace vendors. This gives your security team the power to duplicate their existing on-premises security posture in the cloud.
Take the Tour
Firewall Manager has three prerequisites:
AWS Organizations – Your organization must be running in full-feature (all features) mode, so that Firewall Manager can apply policy across all of its accounts.
Firewall Administrator – You must designate one of the AWS accounts in your organization as the administrator for Firewall Manager. This gives the account permission to deploy AWS WAF rules across the organization.
AWS Config – You must enable AWS Config for all of the accounts in the Organization so that Firewall Manager can detect newly created resources (you can use the Enable AWS Config template on the StackSets Sample Templates page to take care of this). To learn more, read Getting Started with AWS Config.
Since I don’t own an enterprise, my colleagues were kind enough to create some test accounts for me! When I open the Firewall Manager Console in the master account, I can see where I stand with respect to the first two prerequisites:
The Learn more about… button reveals the Account ID of the administrator:
I switch to that account (in a real-world situation it is unlikely that I would have access to both the master account and this one), open the console, and see that I now meet the prerequisites. I click Create policy to move ahead:
The console outlines the process for me. I need to create rules and a rule group, define a policy with the rule group, define the scope of the policy, and then actually create the policy.
At the bottom of the page I choose to create a new policy and rule group, for resources in the US East (N. Virginia) Region, and click Next:
Then I specify the conditions for my rule, choosing from the following options:
Cross-site scripting
Geographic origin
SQL injection
IP address or range
Size constraint
String or regular expression
For example, I can create a condition that blocks malicious IP addresses (this AWS Solution shows you how to use a third-party reputation list with WAF, and may be helpful):
I’ll keep this one simple, but a rule can include multiple conditions. After I have added all of them, I click Next to proceed. Now I am ready to create my rule, and I click Create rule (I can add more conditions to it later if I want):
I give my rule a name (BlockExcludedIPs), enter a CloudWatch metric name, and add my condition (ExcludeIPs), then click Create:
I can create more rules, and include them in the same rule group. Again, I’ll keep this one simple, and click Next to move ahead:
I enter a name for my group, choose the rules that will make up the group, and click Create:
I now have two rule groups (testRuleGroup was already present in the account). I name my policy and click Next to proceed:
Now I define the scope of my policy. I choose the type of resource to be protected, and indicate when the policy should be applied:
I can also use tags to include or exclude resources:
Once I have defined the scope of my policy I click Next and review it, then click Create policy:
Now that the policy is in force, the ALBs within its scope are initially noncompliant:
Within minutes, Firewall Manager applies the policy and provides me with a status report:
Start Using AWS Firewall Manager Today
You can start using AWS Firewall Manager today!
If you are using AWS Shield Advanced, you have access to AWS Firewall Manager and AWS WAF at no extra charge. If not, you are charged a monthly fee for each policy in each region, along with the usual charges for WAF WebACLs, WAF Rules, and AWS Config Rules.
As I have discussed in the past, sophisticated AWS customers invariably control multiple AWS accounts. Some of these are the result of acquisitions or a holdover from bottom-up, departmental adoption of cloud computing. Others create multiple accounts in order to isolate developers, projects, or departments from each other. We strongly endorse this as a best practice, and back it up with cross-account features in many AWS services, as well as AWS Organizations for policy-based management that spans accounts. Many of these customers also make great use of AWS Config, and use Config Rules (both their own and those supplied by Config) to check their AWS resources for compliance.
Aggregate Across Accounts and Regions
Today we are making Config Rules even more useful by adding the ability to aggregate the compliance data produced by these rules across multiple AWS accounts and/or Regions. The aggregated data can then be viewed in a single dashboard, making this a great way to improve governance and compliance. Even better, the aggregation and dashboard are available at no charge to all AWS Config users!
I’ll show you how to set this up in a moment. First, let’s define a couple of terms:
Aggregator – This is a new Config resource. It identifies the sources (accounts and regions) of the compliance data to be aggregated. Multiple aggregators can be used simultaneously, giving you the ability to fine-tune your governance and compliance model.
Aggregator account – This is an AWS account that owns one or more aggregators.
Source account – This is an AWS account that has compliance data to be aggregated.
Aggregated view – A dashboard that shows compliant and non-compliant rules for an aggregator.
Here’s how it all fits together:
Setting up Aggregation
Let’s set up aggregation for some AWS Config data! The first steps take place in the aggregator account. I open the Config Console, find the Aggregated View section and click Aggregators:
I review the list of Aggregators, and click Add aggregator to make a new one:
I grant AWS Config permission to replicate data from the source accounts and enter a name for my aggregator (MyAgg):
Next, I select the source accounts. I have three options here: I can manually add the account IDs, upload a file that contains a comma-separated list, or add all of the accounts in my AWS Organization:
I click on Add source accounts to manually add one account, enter the ID, and click Add source accounts:
Next, I choose the regions of interest, with the option to select current regions as well as future ones, then click Save to move ahead:
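If you prefer infrastructure as code to the console, the following is a minimal CloudFormation sketch of an equivalent aggregator; the aggregator name, source account ID, and the choice to include all regions are placeholder assumptions:

{
  "Resources": {
    "MyAgg": {
      "Type": "AWS::Config::ConfigurationAggregator",
      "Properties": {
        "ConfigurationAggregatorName": "MyAgg",
        "AccountAggregationSources": [
          {
            "AccountIds": ["111122223333"],
            "AllAwsRegions": true
          }
        ]
      }
    }
  }
}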
The next step takes place in the source account, within the Config Console. An Authorization request appears:
And I confirm it:
You can use CloudFormation StackSets to enable authorization programmatically across all source accounts. Also note that the authorization step is not needed if you choose to aggregate all accounts in your AWS Organization.
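For example, a StackSet deployed to each source account could contain a resource along these lines (a sketch; the aggregator account ID and region shown are placeholders):

{
  "Resources": {
    "AggregationAuthorization": {
      "Type": "AWS::Config::AggregationAuthorization",
      "Properties": {
        "AuthorizedAccountId": "111122223333",
        "AuthorizedAwsRegion": "us-east-1"
      }
    }
  }
}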
Compliance data from the source account begins to flow to the aggregator account and becomes visible in the Console, generally within 2-5 minutes:
As you can see, I have a multitude of filtering options! I can focus the view on a particular region or account, and I can see which rules or accounts have the most issues to address. For example, I can see all of the buckets that do not have server-side encryption enabled:
I can also look at the overall compliance situation for an account, seeing both compliant and non-compliant resources:
Things to Know
This new feature is available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Singapore) regions at no charge. You pay for the use of Config and Config Rules as usual.
The multi-account, multi-region data aggregation capability in AWS Config allows you to view the compliance status of your accounts from a central account. It assumes that you have already enabled Config and Config Rules across your accounts (you can use CloudFormation StackSets to distribute and deploy your Config Rules across multiple accounts).
You can always view and manage your Amazon GuardDuty findings on the Findings page in the GuardDuty console or by using GuardDuty APIs with the AWS CLI or SDK. But there's a quicker and easier way: you can use Amazon Alexa as a conversational interface to review your GuardDuty findings. With Alexa, you can build natural voice experiences and create a more intuitive way of interacting with GuardDuty.
In this post, I show you how to deploy a sample custom Alexa skill and use an Alexa-enabled device, such as Amazon Echo, to get information about GuardDuty findings across your AWS accounts and regions. The information provided by this sample skill gives you a broad overview of GuardDuty finding statistics, severities, and descriptions. When you hear something interesting, you can log in to the GuardDuty console or another analysis tool to investigate the findings data.
Note: Although not covered here, you can also deploy this sample skill using Alexa for Business, which you can use to make skills available to your shared devices and enrolled users without having to publish them to the Alexa skills store.
Prerequisites
To complete the steps in this post, make sure you have:
A basic understanding of Alexa Custom Skills, which is helpful for deploying the sample skill described here. If you’re not already familiar with Alexa custom skill concepts and terminology, you might want to review the following documentation resources.
An AWS account with GuardDuty enabled in one or more AWS regions.
The sample solution consists of three high-level steps:
Deploy the Lambda function by using the CloudFormation template.
Create the custom skill in the Alexa developer console.
Test the skill using an Alexa-enabled device.
Deploy the Lambda function with the CloudFormation Template
For this next step, make sure you deploy the template within the AWS account you want to monitor.
To deploy the Lambda function in the N. Virginia region (see the note below), you can launch the supplied CloudFormation template. In the CloudFormation console, on the Select Template page, select Next.
Note: The following AWS regions support hosting custom Alexa skills: US East (N. Virginia), Asia Pacific (Tokyo), EU (Ireland), and US West (Oregon). If you want to deploy in a region other than N. Virginia, you will first need to upload the custom skill’s Lambda deployment package (zip file with code) to an S3 bucket in the selected region.
After you load the template, provide the following input parameters:
FLASHREGIONS – Comma-separated list of region IDs, with no spaces, to include in flash briefing stats. At least one region is required. Make sure GuardDuty is enabled in the regions declared.
MAXRESP – Maximum number of findings to return in a response.
ArtifactsBucket – S3 bucket where the Lambda deployment package resides. Leave the default for N. Virginia.
ArtifactsPrefix – Path in the S3 bucket where the Lambda deployment package resides. Leave the default for N. Virginia.
On the Specify Details page, enter the input parameters (see above), and then select Next.
On the Options page, accept the default values, and then select Next.
On the Review page, confirm the details, and then select Create. The stack will be created in approximately 2 minutes.
Create the custom skill in the Alexa developer console
In the second part of this solution implementation, you will create the skill in the Amazon Developer Console.
Sign in to the Alexa area of the Amazon Developer Console, select Your Alexa Consoles in the top right, and then select Skills.
Select Create Skill.
For the name, enter Ask Amazon GuardDuty, and then select Next.
In the Choose a model to add to your skill page, select Custom, and then select Create skill.
Select the JSON Editor and paste the contents of the alexa_ask_guardduty_skill.json file into the code editor, and overwrite the existing content. This file contains the intent schema which defines the set of intents the service can accept and process.
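For orientation, an Alexa custom skill interaction model is a JSON document along these lines. This is only an illustrative sketch, not the actual contents of alexa_ask_guardduty_skill.json; the invocation name, intent names, slot type, and sample utterances are assumptions:

{
  "interactionModel": {
    "languageModel": {
      "invocationName": "guard duty",
      "intents": [
        {
          "name": "GetFlashBriefingIntent",
          "slots": [],
          "samples": ["get flash briefing"]
        },
        {
          "name": "GetFindingsIntent",
          "slots": [
            { "name": "Region", "type": "REGION_NAMES" }
          ],
          "samples": ["get findings for {Region}"]
        }
      ],
      "types": [
        {
          "name": "REGION_NAMES",
          "values": [
            { "name": { "value": "Virginia" } },
            { "name": { "value": "Oregon" } },
            { "name": { "value": "Ireland" } }
          ]
        }
      ]
    }
  }
}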
Select Save Model, select Build Model, and then wait for the build to complete.
When the model build is complete, on the left side, select Endpoint.
In the Endpoint page, in the Service Endpoint Type section, select AWS Lambda ARN (Amazon Resource Name).
In the Default Region field, copy and paste the value from the CloudFormation Stack Outputs key named AlexaAskGDSkillArn. Leave the default values for other options, and then select Save Endpoints.
Because you’re not publishing this skill, you don’t need to complete the Launch section of the configuration. The skill will remain in the “Development” status and will only be available for Alexa devices linked to the Amazon developer account used to create the skill. Anyone with physical access to the linked Alexa-enabled device can use the custom skill. As a best practice, I recommend that you delete the Lambda trigger created by the CloudFormation template and add a new one with Skill ID verification enabled.
Test the skill using an Alexa-enabled device
Now that you’ve deployed the sample solution, the next step is to test the skill. Make sure you’re using an Alexa-enabled device linked to the Amazon developer account used to create the skill. Before testing, if there are no current GuardDuty findings available, you can generate sample findings in the console. When you generate sample findings, GuardDuty populates your current findings list with one sample finding for each supported finding type.
You can test using the following voice commands:
“Alexa, Open GuardDuty” — Opens the skill and provides a welcome response. You can also use “Alexa, Ask GuardDuty”.
“Get flash briefing” — Provides global and regional counts for low, medium, and high severity findings. The regions declared in the FLASHREGIONS parameter are included. You can also use “Ask GuardDuty to get flash briefing” to bypass the welcome message. You can learn more about GuardDuty severity levels in the documentation.
For the next set of commands, you can specify the region by using region names such as <Virginia>, <Oregon>, <Ireland>, and so on:
“Get statistics for region” — Provides regional counts for low, medium, and high severity findings.
“Get findings for region” — Returns finding information for the requested region. The number of findings returned is configured in the MAXRESP parameter.
“Get <high/medium/low> severity findings for region” – Returns finding information with the minimum severity requested as high, medium, or low. The number of findings returned is configured in the MAXRESP parameter.
“Help” — Provides information about the skill and supported utterances. Also provides current configuration for FLASHREGIONS and MAXRESP.
You can use this sample solution to get GuardDuty statistics and findings through the Alexa conversational interface. You’ll be able to identify findings that require further investigation quickly. This solution’s code is available on GitHub.
With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes. For example, you can apply service control policies (SCPs) across multiple AWS accounts that are members of an organization. SCPs allow you to define which AWS service APIs can and cannot be executed by AWS Identity and Access Management (IAM) entities (such as IAM users and roles) in your organization’s member AWS accounts. SCPs are created and applied from the master account, which is the AWS account that you used when you created your organization.
In a previous post, How to Use Service Control Policies in AWS Organizations to Enforce Healthcare Compliance in Your AWS Account, we reviewed how to create and manage SCPs and Organizational Units (OU) within an organization. In this post, I show how to use SCPs for access control in Organizations, with a specific focus on evaluating SCPs when an IAM entity calls an API in a member AWS account. I first cover some key Organizations concepts, and then I show how an SCP attached to an organization impacts which AWS service APIs are available to member accounts. Finally, I demonstrate these concepts with an example.
Organizational structure in Organizations
OUs give you a way to logically group and structure member AWS accounts in your organization. The screenshot shows the tree view of an example organizational structure in my organization with several OUs. Currently, I have selected OrgUnit01, and this is the view I see in my main window. You can see here that within the OrgUnit01 OU, I have nested two additional OUs (OrgUnit01ChildA and OrgUnit01ChildB), and OrgUnit01 also contains an AWS account named “Developer Sandbox Account”.
The parts of the example organizational structure in the screenshot are:
Tree view — The hierarchy of your organization’s root and any OUs you have created
Tree view toggle — Enable and disable tree view
Organizational Units — Any child OUs of the selected root or OU in tree view
Accounts — Any AWS accounts (members or master) in the current OU
In the next section, I explain why at least one SCP must be attached to your root and OUs and introduce SCP evaluation.
How Service Control Policy evaluation logic works
To allow an AWS service API at the member account level, you must allow the API at every level between the member account and the root of your organization. This means you must attach an SCP at every level between your organization’s root and the member account that allows the given AWS service API (such as ec2:RunInstances). For more information, see About Service Control Policies.
Let’s say you want to allow the ec2:RunInstances API in the Developer Sandbox Account in the example structure in the preceding screenshot. To allow this AWS service API, you must allow the API in at least one SCP attached at each of these levels:
The organization’s root
The OU named OrgUnit01
If you don’t allow the AWS service in an SCP attached at each of these two levels, neither IAM entities nor the root user in the Developer Sandbox Account will be able to call ec2:RunInstances, even if an administrator has given them permission to do so (for IAM entities). In terms of policy evaluation, SCPs follow exactly the same policy evaluation logic as IAM does: by default, all requests are denied, an explicit allow overrides this default, and an explicit deny overrides any explicit allows.
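For illustration, an SCP that allows this API can be as small as the following sketch (remember that an SCP only makes the API available; IAM policies in the member account must still grant the permission):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRunInstances",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "*"
    }
  ]
}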
What does this look like in practice? In the next section, I share a practical example to demonstrate how this works in Organizations.
An example structure with nested OUs and SCPs
In the previous section, I introduced design aspects of AWS Organizations that help prevent administrators from breaking structures in their Organizations. But because AWS Organizations is flexible enough to address multiple use cases, administrators can make changes that have unintended consequences, such as breaking organizational structures when moving an AWS account from one OU to another. In this section, I show an example with broken OU and SCP structures and explain how you can fix them.
I’ll take a blacklisting approach. That is, I’ll use the FullAWSAccess SCP, which doesn’t filter out any AWS service APIs. Then, I will filter out specific APIs by blacklisting them in subsequent SCPs attached to OUs at various points in my organization’s structure. For further reading on blacklisting and whitelisting with AWS Organizations, review AWS Organizations Terminology and Concepts.
Let’s say I have developed the OU and SCP structure shown in the diagram below. Before taking a close look at that diagram, I’ll briefly outline the goals I’m trying to achieve. Broadly speaking, there’s a small subset of APIs that I want to filter out using SCPs. This means that IAM entities in some AWS accounts in my organization will not have access to particular AWS service APIs, such as those related to Amazon EC2, while other accounts will not have access to APIs associated with Amazon CloudWatch, Amazon S3, and so on. Apart from these special cases, I do want the accounts in my organization to have access to all other APIs. More specifically, my goals are as follows:
Any AWS accounts in the Root should not have any API filtered out.
Any AWS accounts in OU 001 should have APIs for CloudWatch filtered out, but all other APIs will be accessible.
Any AWS accounts in OU 002 should have APIs for both CloudWatch and EC2 filtered out, but all other APIs will be accessible.
Any AWS accounts in OU 003 should have APIs for S3 filtered out, but all other APIs will be accessible.
To that end, let’s now look at my initial SCP and OU configuration in the image below that shows the example OU and SCP structure. The arrow shows the direction of inheritance: the root and the OUs below it (children) inherit SCPs from the OUs above them (parents). This example structure contains the following SCPs:
FullAWSAccess — Allows all AWS service APIs
Deny_CW — Denies all CloudWatch APIs
Deny_EC2 — Denies all Amazon EC2 APIs
Deny_S3 — Denies all Amazon S3 APIs
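For reference, a deny-style SCP such as Deny_CW generally takes a form like the following sketch, which filters out all CloudWatch APIs while leaving everything else to be decided by the other attached SCPs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudWatch",
      "Effect": "Deny",
      "Action": "cloudwatch:*",
      "Resource": "*"
    }
  ]
}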
Now that I’ve outlined my intent, and shown you the OU / SCP structure that I’ve created to meet that set of goals, you can probably already see that the structure provided in the image above will not work correctly for my stated goals. In fact, AWS accounts in the Root container and OU 001 will have the intended access, as per my goals (1) and (2). I will not, however, meet my goals (3) and (4) with the above structure: entities in member accounts directly under OU 002 cannot perform any actions, even if they’re granted permissions by IAM access policies. This is because the FullAWSAccess SCP isn’t attached directly to this OU (it’s only inherited).
Why is this important? For an AWS service API to be available to IAM entities in a member account, the API must be specified in an SCP attached at every level all the way down the hierarchy to the relevant member account. Similarly, even though OU 003 does have the FullAWSAccess SCP attached directly to it, the fact that it’s not attached to the parent OU (OU 002) means that IAM entities in member accounts under OU 003 also aren’t able to access any service APIs. This doesn’t happen by default—I have deliberately taken this action to organize my structure in this way, to show both the flexibility and the kind of problems you can encounter when working with OUs and SCPs.
So I now need to fix the problems that I’ve inadvertently created. To start with, I’m going to make one change to OU 002 by attaching the FullAWSAccess SCP directly to that OU. After I do that, OU 002 has the attached and inherited policies that are shown in the following image.
With the FullAWSAccess policy attached to OU 002, member accounts in both OU 002 and OU 003 can access the other non-restricted AWS service APIs (keeping in mind that the FullAWSAccess policy was already applied to OU 003).
I have one final issue to address in this example: OU 003 has an SCP attached that blocks access to the Amazon S3 APIs. However, in this OU, the intent is to allow IAM entities in member accounts to access the EC2 APIs. EC2 API access is blocked because in the parent OU (OU 002), an SCP is attached that denies access to that API (the Deny_EC2 SCP), which means that any actions listed in Deny_EC2 have already been filtered out. An explicit deny always trumps an allow, so to meet goal (4) and have an OU in which EC2 APIs are allowed but access to CloudWatch APIs and S3 APIs is filtered out, I will move OU 003 up one level, placing it directly under OU 001. This change gives me a working OU and policy structure, as shown in the following image.
I recommend that at each level of your organization’s hierarchy, you directly apply the relevant SCPs. By doing this, you’re less likely to forget to apply an SCP to a particular OU, which can break your permission structure. By directly applying SCPs, you also make your policy structure easier to read.
If you have a group of accounts in your organization that are for testing purposes, I recommend that you experiment with OUs and SCPs. Applying SCPs to OUs and then moving an AWS account around within that structure can show you how SCPs affect IAM entities. For example, if you have an IAM user with the AdministratorAccess policy attached, you should see how SCPs can filter out certain AWS service APIs from specified member accounts.
Conclusion
I showed you how you can effectively apply SCPs to OUs in your organization and avoid some of the common issues that you might experience. I demonstrated an approach to designing a working organizational structure that I hope will help smooth your deployment of your organization and enables you to better centrally secure and manage your AWS accounts.
If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Organizations forum.
This post courtesy of Michael Edge, Sr. Cloud Architect – AWS Professional Services
Applications built using a microservices architecture typically result in a number of independent, loosely coupled microservices communicating with each other, synchronously via their APIs and asynchronously via events. These microservices are often owned by different product teams, and these teams may segregate their resources into different AWS accounts for reasons that include security, billing, and resource isolation. This can sometimes result in the following challenges:
Cross-account deployment: A single pipeline must deploy a microservice into multiple accounts; for example, a microservice must be deployed to environments such as DEV, QA, and PROD, all in separate accounts.
Cross-account lookup: During deployment, a resource deployed into one AWS account may need to refer to a resource deployed in another AWS account.
Cross-account communication: Microservices executing in one AWS account may need to communicate with microservices executing in another AWS account.
In this post, I look at ways to address these challenges using a sample application composed of a web application supported by two serverless microservices. The microservices are owned by different product teams and deployed into different accounts using AWS CodePipeline, AWS CloudFormation, and the Serverless Application Model (SAM). At runtime, the microservices communicate using an event-driven architecture that requires asynchronous, cross-account communication via an Amazon Simple Notification Service (Amazon SNS) topic.
Sample application
First, look at the sample application I use to demonstrate these concepts. In the following overview diagram, you can see the following:
The entire application consists of three main services:
A Booking microservice, owned by the Booking account.
An Airmiles microservice, owned by the Airmiles account.
A web application that uses the services exposed by both microservices, owned by the Web Channel account.
The Booking microservice creates flight bookings and publishes booking events to an SNS topic.
The Airmiles microservice consumes booking events from the SNS topic and uses the booking event to calculate the airmiles associated with the flight booking. It also supports querying airmiles for a specific flight booking.
The web application allows an end user to make flight bookings, view flight bookings, and view the airmiles associated with a flight booking.
In the sample application, the Booking and Airmiles microservices are implemented using AWS Lambda. Together with Amazon API Gateway, Amazon DynamoDB, and SNS, the sample application is completely serverless.
Sample Serverless microservices application
The typical booking flow would be triggered by an end user making a flight booking using the web application, which invokes the Booking microservice via its REST API. The Booking microservice persists the flight booking and publishes the booking event to an SNS topic to enable sharing of the booking with other interested consumers. In this sample application, the Airmiles microservice subscribes to the SNS topic and consumes the booking event, using the booking information to calculate the airmiles. In line with microservices best practices, both the Booking and Airmiles microservices store their information in their own DynamoDB tables, and expose an API (via API Gateway) that is used by the web application.
Setup
Before you delve into the details of the sample application, get the source code and deploy it.
Cross-account deployment of Lambda functions using CodePipeline has been previously discussed by my colleague Anuj Sharma in his post, Building a Secure Cross-Account Continuous Delivery Pipeline. This sample application builds upon the solution proposed by Anuj, using some of the same scripts and a similar account structure. To make it feasible for you to deploy the sample application, I’ve reduced the number of accounts needed down to three accounts by consolidating some of the services. In the following diagram, you can see the services used by the sample application require three accounts:
Tools: A central location for the continuous delivery/deployment services such as CodePipeline and AWS CodeBuild. To reduce the number of accounts required by the sample application, also deploy the AWS CodeCommit repositories here, though typically they may belong in a separate Dev account.
Booking: Account for the Booking microservice.
Airmiles: Account for the Airmiles microservice.
Without consolidation, the sample application may require up to 10 accounts: one for Tools, and three accounts each for Booking, Airmiles and Web Application (to support the DEV, QA, and PROD environments).
Account structure for sample application
To follow the rest of this post, clone the repository in step 1 below. To deploy the application on AWS, follow steps 2 and 3:
Follow the instructions in the repository README to build the CodePipeline pipelines and deploy the microservices and web application.
Challenge 1: Cross-account deployment using CodePipeline
Though the Booking pipeline executes in the Tools account, it deploys the Booking Lambda functions into the Booking account.
In the sample application code repository, open the ToolsAcct/code-pipeline.yaml CloudFormation template.
Scroll down to the Pipeline resource and look for the DeployToTest pipeline stage (shown below). There are two AWS Identity and Access Management (IAM) service roles used in this stage that allow cross-account activity. Both of these roles exist in the Booking account:
Under Actions.RoleArn, find the service role assumed by CodePipeline to execute this pipeline stage in the Booking account. The role referred to by the parameter NonProdCodePipelineActionServiceRole allows access to the CodePipeline artifacts in the S3 bucket in the Tools account, and also access to the AWS KMS key needed to encrypt/decrypt the artifacts.
Under Actions.Configuration.RoleArn, find the service role assumed by CloudFormation when it carries out the CHANGE_SET_REPLACE action in the Booking account.
These roles are created in the CloudFormation template NonProdAccount/toolsacct-codepipeline-cloudformation-deployer.yaml.
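For context, each of these cross-account roles carries a trust policy that lets principals in the Tools account assume it. A minimal sketch of such a trust policy, using a placeholder Tools account ID, looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}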
Challenge 2: Cross-account stack lookup using custom resources
Asynchronous communication is a fairly common pattern in microservices architectures. A publishing microservice publishes an event that consumers may be interested in, without any concern for who those consumers may be.
In the case of the sample application, the publisher is the Booking microservice, publishing a flight booking event onto an SNS topic that exists in the Booking account. The consumer is the Airmiles microservice in the Airmiles account. To enable the two microservices to communicate, the Airmiles microservice must look up the ARN of the Booking SNS topic at deployment time in order to set up a subscription to it.
To enable CloudFormation templates to be reused, you are not hardcoding resource names in the templates. Because you allow CloudFormation to generate a resource name for the Booking SNS topic, the Airmiles microservice CloudFormation template must look up the SNS topic name at stack creation time. It’s not possible to use cross-stack references, as these can’t be used across different accounts.
However, you can use a CloudFormation custom resource to achieve the same outcome. This is discussed in the next section. Using a custom resource to look up stack exports in another stack does not create a dependency between the two stacks, unlike cross-stack references. A stack dependency would prevent one stack being deleted if another stack depended upon it. For more information, see Fn::ImportValue.
Using a Lambda function as a custom resource to look up the booking SNS topic
The sample application uses a Lambda function as a custom resource. This approach is discussed in AWS Lambda-backed Custom Resources. Walk through the sample application and see how the custom resource is used to list the stack export variables in another account and return these values to the calling AWS CloudFormation stack. Examine each of the following aspects of the custom resource:
Deploying the custom resource.
Custom resource IAM role.
A calling CloudFormation stack uses the custom resource.
The custom resource assumes a role in another account.
The custom resource obtains the stack exports from all stacks in the other account.
The custom resource returns the stack exports to the calling CloudFormation stack.
The custom resource handles all the event types sent by the calling CloudFormation stack.
Step 1: Deploying the custom resource
The custom Lambda function is deployed using SAM, as are the Lambda functions for the Booking and Airmiles microservices. See the CustomLookupExports resource in Custom/custom-lookup-exports.yml.
Step 2: Custom resource IAM role
The CustomLookupExports resource in Custom/custom-lookup-exports.yml executes using the CustomLookupLambdaRole IAM role. This role allows the custom Lambda function to assume a cross-account role that is created along with the other cross-account roles in NonProdAccount/toolsacct-codepipeline-cloudformation-deployer.yml. See the resource CustomCrossAccountServiceRole.
Step 3: The CloudFormation stack uses the custom resource
The Airmiles microservice is created by the Airmiles CloudFormation template Airmiles/sam-airmile.yml, a SAM template that uses the custom Lambda resource to look up the ARN of the Booking SNS topic. The custom resource is specified by the CUSTOMLOOKUP resource, and the PostAirmileFunction resource uses the custom resource to look up the ARN of the SNS topic and create a subscription to it.
Because the Airmiles Lambda function is going to subscribe to an SNS topic in another account, it must grant the SNS topic in the Booking account permissions to invoke the Airmiles Lambda function in the Airmiles account whenever a new event is published. Permissions are granted by the LambdaResourcePolicy resource.
Step 4: The custom resource assumes a role in another account
When the Lambda custom resource is invoked by the Airmiles CloudFormation template, it must assume a role in the Booking account (see Step 2) in order to query the stack exports in that account.
This can be seen in Custom/custom-lookup-exports.py, where the AWS Security Token Service (AWS STS) is used to obtain temporary credentials that allow access to resources in the account referred to by the environment variable CUSTOM_CROSS_ACCOUNT_ROLE_ARN. This environment variable is defined in the Custom/custom-lookup-exports.yml CloudFormation template, and refers to the role created in Step 2.
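As a rough sketch of what that assume-role step can look like (the function name, session name, and the use of boto3 here are illustrative, not the exact contents of custom-lookup-exports.py):

import os
import boto3

def assume_cross_account_role():
    # Assume the cross-account role whose ARN is supplied through the
    # environment variable defined in the CloudFormation template, and
    # return a session scoped to the Booking account.
    sts = boto3.client('sts')
    credentials = sts.assume_role(
        RoleArn=os.environ['CUSTOM_CROSS_ACCOUNT_ROLE_ARN'],
        RoleSessionName='custom-lookup-exports'  # arbitrary session label
    )['Credentials']
    return boto3.Session(
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
    )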
Step 5: The custom resource obtains the stack exports from all stacks in the other account
The function get_exports() in Custom/custom-lookup-exports.py uses the AWS SDK to list all the stack exports in the Booking account.
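A minimal sketch of such a lookup, assuming boto3 and the cross-account session from the previous step (the sample's actual get_exports() may differ in detail):

def get_exports(session):
    # List every stack export visible in the Booking account, paging
    # through the results, and return them as a name -> value dictionary.
    cloudformation = session.client('cloudformation')
    exports = {}
    for page in cloudformation.get_paginator('list_exports').paginate():
        for export in page['Exports']:
            exports[export['Name']] = export['Value']
    return exports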
Step 6: The custom resource returns the stack exports to the calling CloudFormation stack
The calling CloudFormation template, Airmiles/sam-airmile.yml, uses the custom resource to look up a stack export with the name of booking-lambda-BookingTopicArn. This is exported by the CloudFormation template Booking/sam-booking.yml.
Step 7: The custom resource handles all the event types sent by the calling CloudFormation stack
Custom resources used in a stack are called whenever the stack is created, updated, or deleted. They must therefore handle CREATE, UPDATE, and DELETE stack events. If your custom resource fails to send a SUCCESS or FAILED notification for each of these stack events, your stack may become stuck in a state such as CREATE_IN_PROGRESS or UPDATE_ROLLBACK_IN_PROGRESS. To handle this cleanly, use the crhelper custom resource helper, which encourages you to handle the CREATE, UPDATE, and DELETE CloudFormation stack events explicitly.
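A minimal handler skeleton using the crhelper package's CfnResource interface might look like the following. The function names reuse the illustrative sketches from Steps 4 and 5 above and are not the exact contents of Custom/custom-lookup-exports.py.

from crhelper import CfnResource

helper = CfnResource()  # crhelper sends the SUCCESS/FAILED responses for you

@helper.create
@helper.update
def lookup(event, context):
    # Return the stack exports from the other account to the calling stack.
    # assume_cross_account_role() and get_exports() are the sketches above.
    helper.Data.update(get_exports(assume_cross_account_role()))

@helper.delete
def delete(event, context):
    # Nothing to clean up; the lookup does not create any resources.
    pass

def handler(event, context):
    helper(event, context)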
Challenge 3: Cross-account SNS subscription
The SNS topic is specified as a resource in the Booking/sam-booking.yml CloudFormation template. To allow an event published to this topic to trigger the Airmiles Lambda function, permissions must be granted on both the Booking SNS topic and the Airmiles Lambda function. This is discussed in the next section.
Permissions on booking SNS topic
SNS topics support resource-based policies, which allow a policy to be attached directly to a resource, specifying who can access the resource. This policy can specify which accounts can access a resource, and what actions those accounts are allowed to perform. This is the same approach used by several other AWS services, such as Amazon S3, AWS KMS, Amazon SQS, and Lambda. In Booking/sam-booking.yml, the SNS topic policy allows resources in the Airmiles account (referenced by the parameter NonProdAccount in the following snippet) to subscribe to the Booking SNS topic:
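The template statement itself is not reproduced here. As an illustrative equivalent expressed with boto3 rather than the SAM template (the topic ARN, account ID, and statement label below are placeholders), the cross-account subscribe permission could be granted like this:

import boto3

sns = boto3.client('sns')
sns.add_permission(
    TopicArn='arn:aws:sns:ap-southeast-2:111111111111:BookingTopic',  # placeholder topic ARN
    Label='AllowAirmilesAccountSubscribe',                            # placeholder statement label
    AWSAccountId=['222222222222'],   # placeholder: the Airmiles (NonProdAccount) account ID
    ActionName=['Subscribe'],
)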
The Airmiles microservice is created by the Airmiles/sam-airmile.yml CloudFormation template. The template uses SAM to specify the Lambda functions together with their associated API Gateway configurations. SAM hides much of the complexity of deploying Lambda functions from you.
For instance, by adding an ‘Events’ resource to the PostAirmileFunction in the CloudFormation template, the Airmiles Lambda function is triggered by an event published to the Booking SNS topic. SAM creates the SNS subscription, as well as the permissions necessary for the SNS subscription to trigger the Lambda function.
However, the Lambda permissions generated automatically by SAM are not sufficient for cross-account SNS subscription, which means you must specify an additional Lambda permissions resource in the CloudFormation template. LambdaResourcePolicy, in the following snippet, specifies that the Booking SNS topic is allowed to invoke the Lambda function PostAirmileFunction in the Airmiles account.
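The template snippet is again not shown here, but the effect of LambdaResourcePolicy is equivalent to adding a Lambda permission such as the following boto3 sketch (the statement ID and topic ARN are placeholders; the function name comes from the sample application):

import boto3

awslambda = boto3.client('lambda')
awslambda.add_permission(
    FunctionName='PostAirmileFunction',
    StatementId='AllowBookingTopicInvoke',                             # placeholder statement ID
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn='arn:aws:sns:ap-southeast-2:111111111111:BookingTopic',  # placeholder topic ARN
)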
In this post, I showed you how product teams can use CodePipeline to deploy microservices into different AWS accounts and different environments such as DEV, QA, and PROD. I also showed how, at runtime, microservices in different accounts can communicate securely with each other using an asynchronous, event-driven architecture. This allows product teams to maintain their own AWS accounts for billing and security purposes, while still supporting communication with microservices in other accounts owned by other product teams.
Acknowledgments
Thanks are due to the following people for their help in preparing this post:
Kevin Yung for the great web application used in the sample application
Jay McConnell for crhelper, the custom resource helper used with the CloudFormation custom resources
Now, your applications and federated users can complete longer running workloads in a single session by increasing the maximum session duration up to 12 hours for an IAM role. Users and applications still retrieve temporary credentials by assuming roles using AWS Security Token Service (AWS STS), but these credentials can now be valid for up to 12 hours when using the AWS SDK or CLI. This change allows your users and applications to perform longer running workloads, such as a batch upload to S3 or a long-running CloudFormation stack deployment, using a single session. You can extend the maximum session duration using the IAM console or CLI. Once you increase the maximum session duration, users and applications assuming the IAM role can request temporary credentials that are valid for up to that duration.
In this post, I show you how to configure the maximum session duration for an existing IAM role to 4 hours (maximum allowed duration is 12 hours) using the IAM console. I’ll use 4 hours because AWS recommends configuring the session duration for a role to the shortest duration that your federated users would require to access your AWS resources. I’ll then show how existing federated users can use the AWS SDK or CLI to request temporary security credentials that are valid until the role session expires.
Configure the maximum session duration for an existing IAM role to 4 hours
Let’s assume you have an existing IAM role called ADFS-Production that allows your federated users to upload objects to an S3 bucket in your AWS account. You want to extend the maximum session duration for this role to 4 hours. By default, IAM roles in your AWS accounts have a maximum session duration of one hour. To extend a role’s maximum session duration to 4 hours, follow the steps below:
In the left navigation pane, select Roles and then select the role for which you want to increase the maximum session duration. For this example, I select ADFS-Production and verify the maximum session duration for this role. This value is set to 1 hour (3,600 seconds) by default.
Select Edit, and then define the maximum session duration.
Select one of the predefined durations or provide a custom duration. For this example, I set the maximum session duration to be 4 hours.
Select Save changes.
Alternatively, you can use the latest AWS CLI and call update-role to set the maximum session duration for the role ADFS-Production. Here’s an example that sets the maximum session duration to 14,400 seconds (4 hours).
$ aws iam update-role --role-name ADFS-Production --max-session-duration 14400
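If you prefer to script this with the AWS SDK for Python (boto3) instead of the CLI, an equivalent call might look like this:

import boto3

iam = boto3.client('iam')
# Set the maximum session duration for the role to 4 hours (14,400 seconds).
iam.update_role(RoleName='ADFS-Production', MaxSessionDuration=14400)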
Now that you’ve successfully extended the maximum session for your IAM role, ADFS-Production, your federated users can use AWS STS to retrieve temporary credentials that are valid for 4 hours to access your S3 buckets.
Access AWS resources with temporary security credentials using AWS CLI/SDK
To enable federated SDK and CLI access for your users who use temporary security credentials, you might have implemented the solution described in the blog post How to Implement Federated API and CLI Access Using SAML 2.0 and AD FS. That post demonstrates how to use the AWS Python SDK and some additional client-side integration code to implement federated SDK and CLI access for your users. To enable your users to request longer-lived temporary security credentials, make the following changes to the solution provided in that post.
When calling AssumeRoleWithSAML API to request AWS temporary security credentials, you need to include the DurationSeconds parameter. The value of this parameter is the duration the user requests and, therefore, the duration their temporary security credentials are valid. In this example, I am using boto to request the maximum length of 14,400 seconds (4 hours) using code from the How to Implement Federated API and CLI Access Using SAML 2.0 and AD FS post that I have updated:
# Use the assertion to get an AWS STS token using Assume Role with SAML
conn = boto.sts.connect_to_region(region)
token = conn.assume_role_with_saml(role_arn, principal_arn, assertion, duration_seconds=14400)
By adding a value for the DurationSeconds parameter in the AssumeRoleWithSAML call, your federated user can retrieve temporary security credentials that are valid for up to 14,400 seconds (4 hours). If you don’t provide this value, the default session duration is 1 hour. If you request a duration of 5 hours, AWS STS will throw an error, because this is longer than the 4-hour maximum session duration configured for the role.
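If your integration code uses boto3 rather than the legacy boto library, the equivalent call might look like the following sketch; the role ARN, principal ARN, and assertion values are placeholders supplied by your federation workflow:

import boto3

role_arn = 'arn:aws:iam::111111111111:role/ADFS-Production'        # placeholder
principal_arn = 'arn:aws:iam::111111111111:saml-provider/idp1'     # placeholder
assertion = '...base64-encoded SAML assertion...'                  # placeholder

sts = boto3.client('sts')
response = sts.assume_role_with_saml(
    RoleArn=role_arn,
    PrincipalArn=principal_arn,
    SAMLAssertion=assertion,
    DurationSeconds=14400,  # request a 4-hour session
)
credentials = response['Credentials']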
Conclusion
I demonstrated how you can configure the maximum session duration for a role from 1 hour (the default) up to 12 hours. Then, I showed you how your federated users can use the AWS CLI or SDK to retrieve temporary security credentials that are valid for these longer durations, up to 12 hours, to access AWS resources.
Similarly, you can also increase the maximum role session duration for your applications and users who use Web Identity or OpenID Connect Federation or Cross-Account Access with Assume Role. If you have comments about this blog, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.
Recent amendments to the Australian Privacy Act 1988 (Privacy Act) established the Notifiable Data Breaches (NDB) scheme in Australia, which went into effect February 22, 2018. The NDB scheme aims to give affected individuals the opportunity to take steps to protect their personal information following a data breach that is likely to result in serious harm. It also reinforces entities’ accountability for the personal information they hold.
We’re happy to announce AWS offers an Australian Notifiable Data Breaches (ANDB) Addendum to customers who are subject to the Privacy Act and are using AWS to store and process personal information covered by the NDB scheme. The ANDB Addendum addresses customers’ need for notification if a security event affects their data. We have made the ANDB Addendum available online as a click-through agreement in AWS Artifact, where customers can review and activate the ANDB Addendum for AWS accounts they use to store and process personal information covered by the NDB scheme.
We welcome the arrival of the NDB scheme, and hope it encourages Australian entities to raise the bar on their security capabilities. At AWS, we continually maintain a high bar for security across all of our AWS Regions around the world.
We’re relentlessly innovating on your behalf at AWS, especially when it comes to security. Last November, we launched Amazon GuardDuty, a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Many large customers, including General Electric, Autodesk, and MapBox, discovered these benefits and have quickly adopted the service for its ease of use and improved threat detection. In this post, I want to show you how easy it is for everyone to get started—large and small—and discuss our rapid iteration on the service.
After more than seven years at AWS, I still find myself staying up at night obsessing about unnecessary complexity. Sounds fun, right? Well, I don’t have to tell you that there’s a lot of unnecessary complexity and undifferentiated heavy lifting in security. Most security tooling requires significant care and feeding by humans. It’s often difficult to configure and manage, it’s hard to know if it’s working properly, and it’s costly to procure and run. As a result, it’s not accessible to all customers, and for those that do get their hands on it, they spend a lot of highly-skilled resources trying to keep it operating at its potential.
Even for the most skilled security teams, it can be a struggle to ensure that all resources are covered, especially in the age of virtualization, where new accounts, new resources, and new users can come and go across your organization at a rapid pace. Furthermore, attackers have come up with ingenious ways of giving you the impression your security solution is working when, in fact, it has been completely disabled.
I’ve spent a lot of time obsessing about these problems. How can we use the Cloud to not just innovate in security, but also make it easier, more affordable, and more accessible to all? Our ultimate goal is to help you better protect your AWS resources, while also freeing you up to focus on the next big project.
With GuardDuty, we really turned the screws on unnecessary complexity, distilling continuous security monitoring and threat detection down to a binary decision—it’s either on or off. That’s it. There’s no software, virtual appliances, or agents to deploy, no data sources to enable, and no complex permissions to create. You don’t have to write custom rules or become an expert at machine learning. All we ask of you is to simply turn the service on with a single-click or API call.
GuardDuty operates completely on our infrastructure, so there’s no risk of disrupting your workloads. By providing a hard hypervisor boundary between the code running in your AWS accounts and the code running in GuardDuty, we can help ensure full coverage while making it harder for a misconfiguration or an ingenious attacker to change that. When we detect something interesting, we generate a security finding and deliver it to you through the GuardDuty console and Amazon CloudWatch Events. This makes it possible to simply view findings in GuardDuty or push them to an existing SIEM or workflow system. We’ve already seen customers take it a step further using AWS Lambda to automate actions such as changing security groups, isolating instances, or rotating credentials.
Now… are you ready to get started? It’s this simple:
So, you’ve got it enabled, now what can GuardDuty detect?
As soon as you enable the service, it immediately starts consuming multiple metadata streams at scale, including AWS CloudTrail, VPC Flow Logs, and DNS logs. It compares what it finds to fully managed threat intelligence feeds containing the latest malicious IPs and domains. In parallel, GuardDuty profiles all activity in your account, which allows it to learn the behavior of your resources so it can identify highly suspicious activity that suggests a threat.
The threat-intelligence-based detections can identify activity such as an EC2 instance being probed or brute-forced by an attacker. If an instance is compromised, it can detect attempts at lateral movement, communication with a known malware or command-and-control server, crypto-currency mining, or an attempt to exfiltrate data through DNS.
Where it gets more interesting is the ability to detect AWS account-focused threats. For example, if an attacker gets hold of your AWS account credentials (say, one of your developers exposes credentials on GitHub), GuardDuty will identify unusual account behavior, such as an unusual instance type being deployed in a Region that has never been used, suspicious attempts to inventory your resources by calling unusual patterns of list or describe APIs, or an effort to obscure user activity by disabling CloudTrail logging.
Our obsession with removing complexity meant making these detections fully-managed. We take on all the heavy lifting of building, maintaining, measuring, and improving the detections so that you can focus on what to do when an event does occur.
What’s new?
When we launched at the end of November, we had thirty-four distinct detections in GuardDuty, but we weren’t stopping there. Many of these detections are already on their second or third continuous improvement iteration. In less than three months, we’ve also added twelve more, including nine CloudTrail-based anomaly detections that identify highly suspicious activity in your accounts. These new detections intelligently catch changes to, or reconnaissance of, network, resource, and user permissions, as well as anomalous activity in EC2, CloudTrail, and AWS console log-ins. These are detections we’ve built based on what we’ve learned from observed attack patterns across the scale of AWS.
The intelligence in these detections is built around the identification of highly sensitive AWS API calls that are invoked under one or more highly suspicious circumstances. The combination of “highly sensitive” and “highly suspicious” is important. Highly sensitive APIs are those that change the security posture of an account by adding or elevating users, user policies, roles, or access key IDs (AKIDs). Highly suspicious circumstances are determined from underlying models profiled at the API level by GuardDuty. The result is the ability to catch real threats, while decreasing false positives, limiting false negatives, and reducing alert noise.
It’s still day one
As we like to say in Amazon, it’s still day one. I’m excited about what we’ve built with GuardDuty, but we’re not going to stop improving, even if you’re already happy with what we’ve built. Check out the list of new detections below and all of the GuardDuty detections in our online documentation. Keep the feedback coming as it’s what powers us at AWS.
Now, I have to stop writing because my wife tells me I have some unnecessary complexity to remove from our closet.
New GuardDuty CloudTrail-based anomaly detections
Recon:IAMUser/NetworkPermissions Situation: An IAM user invoked an API commonly used to discover the network access permissions of existing security groups, ACLs, and routes in your AWS account. Description: This finding is triggered when network configuration settings in your AWS environment are probed under suspicious circumstances. For example, if an IAM user in your AWS environment invoked the DescribeSecurityGroups API with no prior history of doing so. An attacker might use stolen credentials to perform this reconnaissance of network configuration settings before executing the next stage of their attack, which might include changing network permissions or making use of existing openings in the network configuration.
Recon:IAMUser/ResourcePermissions Situation: An IAM user invoked an API commonly used to discover the permissions associated with various resources in your AWS account. Description: This finding is triggered when resource access permissions in your AWS account are probed under suspicious circumstances. For example, if an IAM user with no prior history of doing so, invoked the DescribeInstances API. An attacker might use stolen credentials to perform this reconnaissance of your AWS resources in order to find valuable information or determine the capabilities of the credentials they already have.
Recon:IAMUser/UserPermissions Situation: An IAM user invoked an API commonly used to discover the users, groups, policies, and permissions in your AWS account. Description: This finding is triggered when user permissions in your AWS environment are probed under suspicious circumstances. For example, if an IAM user invoked the ListInstanceProfilesForRole API with no prior history of doing so. An attacker might use stolen credentials to perform this reconnaissance of your IAM users and roles to determine the capabilities of the credentials they already have or to find more permissive credentials that are vulnerable to lateral movement.
Persistence:IAMUser/NetworkPermissions Situation: An IAM user invoked an API commonly used to change the network access permissions for security groups, routes, and ACLs in your AWS account. Description: This finding is triggered when network configuration settings are changed under suspicious circumstances. For example, if an IAM user in your AWS environment invoked the CreateSecurityGroup API with no prior history of doing so. Attackers often attempt to change security groups, allowing certain inbound traffic on various ports to improve their ability to access the bot they might have planted on your EC2 instance.
Persistence:IAMUser/ResourcePermissions Situation: An IAM user invoked an API commonly used to change the security access policies of various resources in your AWS account. Description: This finding is triggered when a change is detected to policies or permissions attached to AWS resources. For example, if an IAM user in your AWS environment invoked the PutBucketPolicy API with no prior history of doing so. Some services, such as Amazon S3, support resource-attached permissions that grant one or more IAM principals access to the resource. With stolen credentials, attackers can change the policies attached to a resource, granting themselves future access to that resource.
Persistence:IAMUser/UserPermissions Situation: An IAM user invoked an API commonly used to add, modify, or delete IAM users, groups, or policies in your AWS account. Description: This finding is triggered by suspicious changes to the user-related permissions in your AWS environment. For example, if an IAM user in your AWS environment invoked the AttachUserPolicy API with no prior history of doing so. In an effort to maximize their ability to access the account even after they’ve been discovered, attackers can use stolen credentials to create new users, add access policies to existing users, create access keys, and so on. The owner of the account might notice that a particular IAM user or password was stolen and delete it from the account, but might not delete other users that were created by the fraudulently created admin IAM user, leaving their AWS account still accessible to the attacker.
ResourceConsumption:IAMUser/ComputeResources Situation: An IAM user invoked an API commonly used to launch compute resources like EC2 Instances. Description: This finding is triggered when EC2 instances in your AWS environment are launched under suspicious circumstances. For example, if an IAM user invoked the RunInstances API with no prior history of doing so. This might be an indication of an attacker using stolen credentials to access compute time (possibly for cryptocurrency mining or password cracking). It can also be an indication of an attacker using an EC2 instance in your AWS environment and its credentials to maintain access to your account.
Stealth:IAMUser/LoggingConfigurationModified Situation: An IAM user invoked an API commonly used to stop CloudTrail logging, delete existing logs, and otherwise eliminate traces of activity in your AWS account. Description: This finding is triggered when the logging configuration in your AWS account is modified under suspicious circumstances. For example, if an IAM user invoked the StopLogging API with no prior history of doing so. This can be an indication of an attacker trying to cover their tracks by eliminating any trace of their activity.
UnauthorizedAccess:IAMUser/ConsoleLogin Situation: An unusual console login by an IAM user in your AWS account was observed. Description: This finding is triggered when a console login is detected under suspicious circumstances. For example, if an IAM user invoked the ConsoleLogin API from a never-before-used client or an unusual location. This could be an indication of stolen credentials being used to gain access to your AWS account, or a valid user accessing the account in an invalid or less secure manner (for example, not over an approved VPN).
New GuardDuty threat intelligence based detections
Trojan:EC2/PhishingDomainRequest!DNS This detection occurs when an EC2 instance queries domains involved in phishing attacks.
Trojan:EC2/BlackholeTraffic!DNS This detection occurs when an EC2 instance connects to a black hole domain. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
Trojan:EC2/DGADomainRequest.C!DNS This detection occurs when an EC2 instance queries algorithmically generated domains. Such domains are commonly used by malware and could be an indication of a compromised EC2 instance.
If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.
Amazon S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives you flexibility in the way you manage data for cost optimization, access control, and compliance. However, because the service is flexible, a user could accidentally configure buckets in a manner that is not secure. For example, let’s say you uploaded files to an Amazon S3 bucket with public read permissions, even though you intended only to share this file with a colleague or a partner. Although this might have accomplished your task to share the file internally, the file is now available to anyone on the internet, even without authentication.
In this blog post, we show you how to prevent your Amazon S3 buckets and objects from allowing public access. We discuss how to secure data in Amazon S3 with a defense-in-depth approach, where multiple security controls are put in place to help prevent data leakage. This approach helps prevent you from allowing public access to confidential information, such as personally identifiable information (PII) or protected health information (PHI).
Preventing your Amazon S3 buckets and objects from allowing public access
Every request to Amazon S3 is ultimately a REST API call. When your request is made as a REST call, the permissions you specify are converted into parameters included in the HTTP headers or as URL parameters. The Amazon S3 bucket policy allows or denies access to the Amazon S3 bucket or Amazon S3 objects based on policy statements, and then evaluates conditions based on those parameters. To learn more, see Using Bucket Policies and User Policies.
With this in mind, let’s say multiple AWS Identity and Access Management (IAM) users at Example Corp. have access to an Amazon S3 bucket and the objects in the bucket. Example Corp. wants to share the objects among its IAM users, while at the same time preventing the objects from being made available publicly.
To demonstrate how to do this, we start by creating an Amazon S3 bucket named examplebucket. After creating this bucket, we must apply the following bucket policy. This policy denies any uploaded object (PutObject) with the attribute x-amz-acl having the values public-read, public-read-write, or authenticated-read. This means authenticated users cannot upload objects to the bucket if the objects have public permissions.
“Deny any Amazon S3 request to PutObject or PutObjectAcl in the bucket examplebucket when the request includes one of the following access control lists (ACLs): public-read, public-read-write, or authenticated-read.”
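The policy JSON from the original post is not reproduced here. As an illustrative sketch of the statement just described, applied here with boto3 (the statement ID is arbitrary, and the post's full policy also carries the grant and PutBucketAcl statements discussed below), it might look like this:

import json
import boto3

s3 = boto3.client('s3')
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPublicObjectAcls",   # arbitrary statement ID
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringEquals": {
                "s3:x-amz-acl": ["public-read", "public-read-write", "authenticated-read"]
            }
        }
    }]
}
s3.put_bucket_policy(Bucket='examplebucket', Policy=json.dumps(policy))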
Remember that IAM policies are not evaluated in a first-match-and-exit model. Instead, IAM first evaluates whether there is an explicit Deny. If there is not, IAM continues to evaluate whether there is an explicit Allow; if neither applies, the result is an implicit Deny.
The above policy creates an explicit Deny: whenever an authenticated user tries to upload (PutObject) an object with a public ACL such as public-read, public-read-write, or authenticated-read, the action is denied. To understand how S3 access permissions work, you need to understand what access control lists (ACLs) and grants are; see the Amazon S3 access control documentation for details.
Now let’s continue our bucket policy explanation by examining the next statement.
This statement is very similar to the first statement, except that instead of checking the ACLs, we are checking specific user groups’ grants that represent the following groups:
AuthenticatedUsers group. Represented by http://acs.amazonaws.com/groups/global/AuthenticatedUsers, this group represents all AWS accounts. Access permissions to this group allow any AWS account to access the resource. However, all requests must be signed (authenticated).
AllUsers group. Represented by http://acs.amazonaws.com/groups/global/AllUsers, access permissions to this group allow anyone on the internet access to the resource. The requests can be signed (authenticated) or unsigned (anonymous). Unsigned requests omit the Authentication header in the request.
Now that you know how to deny object uploads with permissions that would make the object public, you need two more policy statements that prevent users from changing the bucket permissions (one denying s3:PutBucketAcl based on ACLs and one denying s3:PutBucketAcl based on grants).
Below is how we’re preventing users from changing the bucket permissions.
As you can see above, the statement is very similar to the object statements, except that it uses s3:PutBucketAcl instead of s3:PutObjectAcl, and the Resource is just the bucket ARN rather than the object ARN that ends with “/*”.
In this section, we showed how to prevent IAM users from accidently uploading Amazon S3 objects with public permissions to buckets. In the next section, we show you how to enforce multiple layers of security controls, such as encryption of data at rest and in transit while serving traffic from Amazon S3.
Securing data on Amazon S3 with defense-in-depth
Let’s say that Example Corp. wants to serve files securely from Amazon S3 to its users with the following requirements:
The data must be encrypted at rest and during transit.
The data must be accessible only by a limited set of public IP addresses.
All requests for data should be handled only by Amazon CloudFront (which is a content delivery network) instead of being directly available from an Amazon S3 URL. If you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you can grant public permission to read the objects in your bucket. This allows anyone to access your objects either through CloudFront or the Amazon S3 URL. CloudFront doesn’t expose Amazon S3 URLs, but your users still might have access to those URLs if your application serves any objects directly from Amazon S3, or if anyone gives out direct links to specific objects in Amazon S3.
A domain name is required to consume the content. Custom SSL certificate support lets you deliver content over HTTPS by using your own domain name and your own SSL certificate. This gives visitors to your website the security benefits of CloudFront over an SSL connection that uses your own domain name, in addition to lower latency and higher reliability.
To represent defense-in-depth visually, the following diagram contains several Amazon S3 objects (A) in a single Amazon S3 bucket (B). You can encrypt these objects on the server side or the client side. You also can configure the bucket policy such that objects are accessible only through CloudFront, which you can accomplish through an origin access identity (C). You then can configure CloudFront to deliver content only over HTTPS in addition to using your own domain name (D).
Defense-in-depth requirement 1: Data must be encrypted at rest and during transit
Let’s start with the objects themselves. Amazon S3 objects—files in this case—can range from zero bytes to multiple terabytes in size (see service limits for the latest information). Each Amazon S3 bucket includes a collection of objects, and the objects can be uploaded via the Amazon S3 console, AWS CLI, or AWS API.
If you choose to use server-side encryption, Amazon S3 encrypts your objects before saving them on disks in AWS data centers. To encrypt an object at the time of upload, you need to add the x-amz-server-side-encryption header to the request to tell Amazon S3 to encrypt the object using Amazon S3 managed keys (SSE-S3), AWS KMS managed keys (SSE-KMS), or customer-provided keys (SSE-C). There are two possible values for the x-amz-server-side-encryption header: AES256, which tells Amazon S3 to use Amazon S3 managed keys, and aws:kms, which tells Amazon S3 to use AWS KMS managed keys.
The following code example shows a Put request using SSE-S3.
PUT /example-object HTTP/1.1
Host: myBucket.s3.amazonaws.com
Date: Wed, 8 Jun 2016 17:50:00 GMT
Authorization: authorization string
Content-Type: text/plain
Content-Length: 11434
x-amz-meta-author: Janet
Expect: 100-continue
x-amz-server-side-encryption: AES256
[11434 bytes of object data]
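The same upload can be made with the AWS SDK; for example, a boto3 sketch that requests SSE-S3 (the bucket name, key, metadata, and body are placeholders consistent with the request above):

import boto3

s3 = boto3.client('s3')
s3.put_object(
    Bucket='myBucket',               # placeholder bucket name
    Key='example-object',            # placeholder object key
    Body=b'example object data',     # placeholder content
    ServerSideEncryption='AES256',   # use 'aws:kms' for SSE-KMS instead
    Metadata={'author': 'Janet'},
)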
If you choose to use client-side encryption, you can encrypt data on the client side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools. You encrypt data on the client side by using AWS KMS managed keys or a customer-supplied, client-side master key.
Defense-in-depth requirement 2: Data must be accessible only by a limited set of public IP addresses
At the Amazon S3 bucket level, you can configure permissions through a bucket policy. For example, you can limit access to the objects in a bucket by IP address range or specific IP addresses. Alternatively, you can make the objects accessible only through HTTPS.
The following bucket policy allows access to Amazon S3 objects only through HTTPS (the policy was generated with the AWS Policy Generator). Here the bucket policy explicitly denies ("Effect": "Deny") all read access ("Action": "s3:GetObject") to any principal ("Principal": "*") for Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS ("aws:SecureTransport": "false").
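As an illustrative sketch of that statement (the bucket name is the examplebucket used earlier; the statement ID is arbitrary), the policy could be applied with boto3 as follows:

import json
import boto3

s3 = boto3.client('s3')
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",   # arbitrary statement ID
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}}
    }]
}
s3.put_bucket_policy(Bucket='examplebucket', Policy=json.dumps(policy))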
Defense-in-depth requirement 3: Data must not be publicly accessible directly from an Amazon S3 URL
Next, configure Amazon CloudFront to serve traffic from within the bucket. The use of CloudFront serves several purposes:
CloudFront is a content delivery network that acts as a cache to serve static files quickly to clients.
Depending on the number of requests, the cost of delivery is less than if objects were served directly via Amazon S3.
Objects served through CloudFront can be limited to specific countries.
Access to these Amazon S3 objects is available only through CloudFront. We do this by creating an origin access identity (OAI) for CloudFront and granting access to objects in the respective Amazon S3 bucket only to that OAI. As a result, access to Amazon S3 objects from the internet is possible only through CloudFront; all other means of accessing the objects, such as through an Amazon S3 URL, are denied. CloudFront acts not only as a content distribution network, but also as a host that denies access based on geographic restrictions. You apply these restrictions by updating your CloudFront web distribution and adding a whitelist that contains only a specific country’s name (let’s say Liechtenstein). Alternatively, you could add a blacklist that contains every country except that country. To learn how to use CloudFront geographic restriction to whitelist or blacklist a country, restricting or allowing users in specific locations to access web content, see the AWS Support Knowledge Center.
Defense-in-depth requirement 4: A domain name is required to consume the content
To serve content from CloudFront, you must use a domain name in the URLs for objects on your webpages or in your web application. The domain name can be either of the following:
The domain name that CloudFront automatically assigns when you create a distribution, such as d111111abcdef8.cloudfront.net
Your own domain name, such as example.com
For example, you might use a URL like the following to return the file image.jpg (shown here with the CloudFront-assigned domain name from above as an illustration): http://d111111abcdef8.cloudfront.net/images/image.jpg
You use the same URL format whether you store the content in Amazon S3 buckets or at a custom origin, like one of your own web servers.
Instead of using the default domain name that CloudFront assigns for you when you create a distribution, you can add an alternate domain name that’s easier to work with, like example.com. By setting up your own domain name with CloudFront, you can use a URL like this for objects in your distribution: http://example.com/images/image.jpg.
Let’s say that you already have a domain name hosted on Amazon Route 53. You would like to serve traffic from the domain name, request an SSL certificate, and add this to your CloudFront web distribution. The SSL offloading occurs in CloudFront by serving traffic securely from each CloudFront location. You also can configure CloudFront to deliver your content over HTTPS by using your custom domain name and your own SSL certificate. Serving web content through CloudFront reduces response times, because requests are redirected to the nearest edge location. This results in faster download times than if the visitor had requested the content from a data center that is located farther away.
Summary
In this post, we demonstrated how you can apply policies to Amazon S3 buckets so that only users with appropriate permissions are allowed to access the buckets. Anonymous users (with public-read/public-read-write permissions) and authenticated users without the appropriate permissions are prevented from accessing the buckets.
We also examined how to secure access to objects in Amazon S3 buckets. The objects in Amazon S3 buckets can be encrypted at rest and during transit. Doing so helps provide end-to-end security from the source (in this case, Amazon S3) to your users.
If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon S3 forum or contact AWS Support.
Today we’d like to walk you through AWS Identity and Access Management (IAM), federated sign-in through Active Directory (AD) and Active Directory Federation Services (ADFS). With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which resources users can access. Customers have the option of creating users and group objects within IAM or they can utilize a third-party federation service to assign external directory users access to AWS resources. To streamline the administration of user access in AWS, organizations can utilize a federated solution with an external directory, allowing them to minimize administrative overhead. Benefits of this approach include leveraging existing passwords and password policies, roles and groups. This guide provides a walk-through on how to automate the federation setup across multiple accounts/roles with an Active Directory backing identity store. This will establish the minimum baseline for the authentication architecture, including the initial IdP deployment and elements for federation.
ADFS Federated Authentication Process
The following describes the process a user will follow to authenticate to AWS using Active Directory and ADFS as the identity provider and identity brokers:
Corporate user accesses the corporate Active Directory Federation Services portal sign-in page and provides Active Directory authentication credentials.
AD FS authenticates the user against Active Directory.
Active Directory returns the user’s information, including AD group membership information.
AD FS dynamically builds ARNs by using Active Directory group memberships for the IAM roles and user attributes for the AWS account IDs, and sends a signed assertion to the user’s browser with a redirect to post the assertion to AWS STS.
Temporary credentials are returned using STS AssumeRoleWithSAML.
The user is authenticated and provided access to the AWS management console.
Configuration Steps
Configuration requires setup in the identity provider store (e.g. Active Directory), the identity broker (e.g. Active Directory Federation Services), and AWS. It is possible to configure AWS to federate authentication using a variety of third-party SAML 2.0-compliant identity providers; more information can be found in the IAM documentation.
AWS Configuration
The configuration steps outlined in this document can be completed to enable federated access to multiple AWS accounts, facilitating a single sign on process across a multi-account AWS environment. Access can also be provided to multiple roles in each AWS account. The roles available to a user are based on their group memberships in the identity provider (IdP). In a multi-role and/or multi-account scenario, role assumption requires the user to select the account and role they wish to assume during the authentication process.
Identity Provider
A SAML 2.0 identity provider is an IAM resource that describes an identity provider (IdP) service that supports the SAML 2.0 (Security Assertion Markup Language 2.0) standard. AWS SAML identity provider configurations can be used to establish trust between AWS and SAML-compatible identity providers, such as Shibboleth or Microsoft Active Directory Federation Services. These enable users in an organization to access AWS resources using existing credentials from the identity provider.
A SAML identity provider can be configured using the AWS console by completing the following steps.
2. Select SAML for the provider type. Select a provider name of your choosing (this will become the logical name used in the identity provider ARN). Lastly, download the FederationMetadata.xml file from your ADFS server (https://yourADFSserverFQDN/FederationMetadata/2007-06/FederationMetadata.xml) to your client system. Click “Choose File” and upload it to AWS.
3. Click “Next Step” and then verify the information you have entered. Click “Create” to complete the AWS identity provider configuration process.
IAM Role Naming Convention for User Access
Once the AWS identity provider configuration is complete, it is necessary to create the roles in AWS that federated users can assume via SAML 2.0. An IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. In a federated authentication scenario, users (as defined in the IdP) assume an AWS role during the sign-in process. A role should be defined for each access delineation that you wish to define. For example, create a role for each line of business (LOB), or each function within a LOB. Each role will then be assigned a set of policies that define what privileges the users who will be assuming that role will have.
The following steps detail how to create a single role. These steps should be completed multiple times to enable assumption of different roles within AWS, as required.
2. Select “SAML” as the trusted entity type. Click Next Step.
3. Select your previously created identity provider. Click Next: Permissions.
4. The next step requires selection of policies that represent the desired permissions the user should obtain in AWS, once they have authenticated and successfully assumed the role. This can be either a custom policy or preferably an AWS managed policy. AWS recommends leveraging existing AWS access policies for job functions for common levels of access. For example, the “Billing” AWS Managed policy should be utilized to provide financial analyst access to AWS billing and cost information.
5. Provide a name for your role. All roles should be created with the prefix ADFS-<rolename> to simplify the identification of roles in AWS that are accessed through the federated authentication process. Next, click “Create Role”.
Active Directory Configuration
Determining how you will create and delineate your AD groups and IAM roles in AWS is crucial to how you secure access to your account and manage resources. SAML assertions to the AWS environment and the respective IAM role access will be managed through regular expression (regex) matching between your on-premises AD group name to an AWS IAM role.
One approach for creating the AD groups that uniquely identify the AWS IAM role mapping is by selecting a common group naming convention. For example, your AD groups would start with an identifier, for example AWS-, as this will distinguish your AWS groups from others within the organization. Next, include the 12-digit AWS account number. Finally, add the matching role name within the AWS account. Here is an example (using a placeholder account ID): AWS-123456789012-ADFS-Production.
You should do this for each role and corresponding AWS account you wish to support with federated access. Users in Active Directory can subsequently be added to the groups, providing the ability to assume access to the corresponding roles in AWS. If a user is associated with multiple Active Directory groups and AWS accounts, they will see a list of roles by AWS account and will have the option to choose which role to assume. A user will not be able to assume more than one role at a time, but has the ability to switch between them as needed.
Note: Microsoft imposes a limit on the number of groups a user can be a member of (approximately 1,015 groups) due to the size limit for the access token that is created for each security principal. This limitation, however, is not affected by how the groups may or may not be nested.
Active Directory Federation Services Configuration
ADFS federation occurs with the participation of two parties: the identity or claims provider (in this case the owner of the identity repository, Active Directory) and the relying party, which is another application that wishes to outsource authentication to the identity provider; in this case the AWS Security Token Service (STS). The relying party is a federation partner that is represented by a relying party trust in the federation service.
Relying Party
You can configure a new relying party in Active Directory Federation Services by doing the following.
1. From the ADFS Management Console, right-click ADFS and select Add Relying Party Trust.
2. In the Add Relying Party Trust Wizard, click Start.
3. Check Import data about the relying party published online or on a local network, enter https://signin.aws.amazon.com/static/saml-metadata.xml, and then click Next. The metadata XML file is a standard SAML metadata document that describes AWS as a relying party.
Note: SAML federations use metadata documents to maintain information about the public keys and certificates that each party utilizes. At run time, each member of the federation can then use this information to validate that the cryptographic elements of the distributed transactions come from the expected actors and haven’t been tampered with. Since these metadata documents do not contain any sensitive cryptographic material, AWS publishes federation metadata at https://signin.aws.amazon.com/static/saml-metadata.xml
4. Set the display name for the relying party and then click Next.
5. We will not enable or configure the MFA settings at this time.
6. Select “Permit all users to access this relying party” and click Next.
7. Review your settings and then click Next.
8. Choose Close on the Finish page to complete the Add Relying Party Trust Wizard. AWS is now configured as a relying party.
Custom Claim Rules
Microsoft Active Directory Federation Services (AD FS) uses Claims Rule Language to issue and transform claims between claims providers and relying parties. A claim is information about a user from a trusted source. The trusted source is asserting that the information is true, and that source has authenticated the user in some manner. The claims provider is the source of the claim. This can be information pulled from an attribute store such as Active Directory (AD). The relying party is the destination for the claims, in this case AWS.
AD FS provides administrators with the option to define custom rules that they can use to determine the behavior of identity claims with the claim rule language. The Active Directory Federation Services (AD FS) claim rule language acts as the administrative building block to help manage the behavior of incoming and outgoing claims. There are four claim rules that need to be created to effectively enable Active Directory users to assume roles in AWS based on group membership in Active Directory.
Right-click on the relying party (in this case Amazon Web Services) and then click Edit Claim Rules
Here are the steps used to create the claim rules for NameId, RoleSessionName, Get AD Groups and Roles.
1. NameId
a) In the Edit Claim Rules for <relying party> dialog box, click Add Rule.
b) Select Transform an Incoming Claim and then click Next.
c) Use the following settings:
i) Claim rule name: NameId
ii) Incoming claim type: Windows Account Name
iii) Outgoing claim type: Name ID
iv) Outgoing name ID format: Persistent Identifier
v) Pass through all claim values: checked
3. Get AD Groups
a) Click Add Rule.
b) In the Claim rule template list, select Send Claims Using a Custom Rule and then click Next.
c) For Claim Rule Name, enter Get AD Groups, and then in Custom rule, enter the following:
This custom rule uses a script in the claim rule language that retrieves all the groups the authenticated user is a member of and places them into a temporary claim named http://temp/variable. Think of this as a variable you can access later.
Note: Ensure there’s no trailing whitespace to avoid unexpected results.
4. Role Attributes
a) Unlike the two previous claims, here we use custom rules to send role attributes. This is done by retrieving all the authenticated user’s AD groups and then matching the groups that start with AWS- to IAM roles of a similar name. The names of these groups (that is, those that start with AWS-) are used to create the Amazon Resource Names (ARNs) of the IAM roles in my AWS account. Sending role attributes requires two custom rules. The first rule retrieves all the authenticated user’s AD group memberships and the second rule performs the transformation to the roles claim.
i) Click Add Rule.
ii) In the Claim rule template list, select Send Claims Using a Custom Rule and then click Next.
iii) For Claim Rule Name, enter Roles, and then in Custom rule, enter the following:
Rule language: c:[Type == "http://temp/variable", Value =~ "(?i)^AWS-([\d]{12})"] => issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = RegExReplace(c.Value, "AWS-([\d]{12})-", "arn:aws:iam::$1:saml-provider/idp1,arn:aws:iam::$1:role/"));
This custom rule uses regular expressions to transform each of the group memberships of the form AWS-<Account Number>-<Role Name> into the comma-separated pair of IAM federation provider ARN and IAM role ARN that AWS expects.
Note: In the example rule language above idp1 represents the logical name given to the SAML identity provider in the AWS identity provider setup. Please change this based on the logical name you chose in the IAM console for your identity provider.
Adjusting Session Duration
By default, the temporary credentials that are issued by AWS IAM for SAML federation are valid for an hour. Depending on your organization’s security requirements, you may wish to adjust this. You can allow your federated users to work in the AWS Management Console for up to 12 hours. This can be accomplished by adding another claim rule in your ADFS configuration. To add the rule, do the following:
1. Access the ADFS Management Tool on your ADFS server.
2. Choose Relying Party Trusts, then select your AWS relying party configuration.
3. Choose Edit Claim Rules.
4. Choose Add Rule to configure a new rule, and then choose Send claims using a custom rule. Finally, choose Next.
5. Name your rule “Session Duration” and add the following rule syntax.
6. Adjust the value of 28800 seconds (8 hours) as appropriate.
Rule language: => issue(Type = "https://aws.amazon.com/SAML/Attributes/SessionDuration", Value = "28800");
Note: AD FS 2012 R2 and AD FS 2016 tokens have a sixty-minute validity period by default. This value is configurable on a per relying party trust basis. In addition to adding the “Session Duration” claim rule, you will also need to update the lifetime of the security token created by AD FS. To update this value, run the following command:
The -TokenLifetime parameter determines the lifetime in minutes. In this example, we set the lifetime to 480 minutes (eight hours).
These are the main settings related to session lifetimes and user authentication. Once updated, any new console session your federated users initiate will be valid for the duration specified in the SessionDuration claim.
API/CLI Access
Access to the AWS API and command-line tools using federated access can be accomplished using the techniques in the following blog article:
This will enable your users to access your AWS environment using their domain credentials through the AWS CLI or one of the AWS SDKs.
Conclusion
In this post, I’ve shown you how to provide identity federation, and thus SSO, to the AWS Management Console for multiple accounts using SAML assertions. With this approach, the AWS Security Token Service (STS) provides temporary credentials (via SAML) for the user to assume a role (that they have access to use, as determined by AD group membership) that has specific permissions attached, rather than providing long-term access credentials to the AWS resources. By adopting this model, you will have a secure and robust IAM approach for accessing AWS resources that aligns with AWS security best practices.
Today, AWS made it easier to use the AWS Command Line Interface (CLI) to manage services in your AWS accounts. Now you can sign into the AWS Single Sign-On (AWS SSO) user portal using your existing corporate credentials, choose an AWS account and a specific permission set, and get temporary credentials to manage your AWS services through the AWS CLI.
AWS SSO is a service that enables you to centrally manage single sign-on access to multiple AWS accounts and business applications. AWS temporary security credentials are an easy way to get short-term credentials to manage your AWS services through the AWS CLI or a programmatic client.
Previously, when you issued commands from the CLI to access resources in each of several AWS accounts, you had to remember the password for each account, sign in to each AWS account individually, and fetch the credentials for each account one at a time. Now, AWS SSO eliminates the need to sign in to each AWS account individually to get temporary credentials. Instead, you can sign in to the AWS SSO user portal once using your existing corporate credentials and then fetch temporary credentials for any of your authorized AWS accounts to use with the AWS CLI to access the resources in that account, limited by the permissions granted to you.
In this blog post, I’ll show how to fetch temporary credentials from the AWS SSO user portal to use with the AWS CLI to access resources in your AWS accounts. First, I’ll show you how to obtain short-term credentials for any account for a permission set for which you are authorized. Next, I’ll show you three ways to use these credentials.
For this scenario, let’s say I am an administrator at “AnyCompany” and I want to list instances in two AWS accounts by using the AWS CLI command, aws ec2 describe-instances. “AnyCompany” has enabled access to AWS accounts through AWS SSO.
Prerequisites
You need to install the AWS CLI to use this feature. You also need to configure AWS SSO, connect a corporate directory, and grant access to users or groups to access AWS accounts with permission sets. To learn more, see “Introducing AWS Single Sign-On”.
How to access resources in your AWS accounts by using AWS SSO and the AWS CLI
1. Sign in to the AWS SSO user portal using your corporate credentials. If you don’t know the URL of your AWS SSO user portal, ask your IT administrator. This URL can be found in AWS SSO Console in the Dashboard menu, under “User portal URL” section. In the user portal, you will see the AWS accounts to which you have been granted access.
2. Choose “AWS Account” to expand the list of AWS accounts.
3. Choose the AWS account that you want to access using the AWS CLI. This expands the list of permission sets in the account that you can use to access the account. For this example, I choose the “Administrator” permission set, which has the necessary permissions to describe instances in the account. I then choose “Command line or programmatic access” associated with the “Administrator” permission set.
4. AWS SSO shows the credentials you requested in the appropriate format for your operating system. If you need credentials for a different operating system, you can switch between the MacOS and Linux tab and the Windows tab. AWS SSO offers three options for using the temporary security credentials, which are valid for up to 60 minutes:
a. To run commands from the AWS CLI against the selected AWS account, copy the commands in the “Setup AWS CLI environment variables” section and paste the commands in the terminal window to set the necessary environment variables. These environment variables will be effective in the current terminal window.
b. To run commands from multiple terminal windows against the same AWS account, copy the profile in the “Setup AWS CLI profile” section to set up a new named profile in your AWS credentials file. To learn more, see “Configuration and Credential Files.” You can then use the --profile option with your AWS CLI commands to use these credentials. This will be effective in all terminal windows that use the same credentials file.
c. To access AWS resources from an AWS service client, use the credentials under the “Copy individual values” section to initialize your client (see the sketch after these steps). For more information, see the “Use the temporary credentials to access AWS resources” section in “Getting Temporary Credentials with AWS STS.”
5. Move your mouse over the option from which you want to copy credentials. I chose the first option.
6. I have copied, pasted, and run the AWS CLI environment variables commands in my terminal window:
7. Optionally, you can verify that the credentials are set up correctly by running the “aws configure list” command. Verify that the access_key and secret_key have values assigned.
8. Now you can run any applicable AWS CLI commands (based on the permission set granted to you by your administrator). In the following example, I list instances in my AWS account.
9. To run the same (or different) AWS CLI command against a different AWS account, repeat this process, starting with Step 3. By keeping the AWS SSO user portal open in a browser window, you can easily switch to another AWS account without needing to sign in again. Every time you want to switch between accounts/permission sets or do additional work in an account after the temporary credentials expire, just copy fresh credentials for that account/permission set from the user portal.
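If you chose option c and want to use the copied values from an AWS SDK instead of the CLI, a minimal Boto3 sketch might look like the following. The region and credential values are placeholders; paste the values copied from the “Copy individual values” section of the user portal.
import boto3

ec2 = boto3.client(
    'ec2',
    region_name='us-west-2',                          # assumed region for this example
    aws_access_key_id='ASIAEXAMPLEACCESSKEY',         # paste the copied access key ID
    aws_secret_access_key='EXAMPLESECRETACCESSKEY',   # paste the copied secret access key
    aws_session_token='EXAMPLESESSIONTOKEN',          # paste the copied session token
)

# Same operation as the CLI example: list instances in the selected account.
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance['State']['Name'])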
Conclusion
In this post, I’ve shown you how to use your existing corporate username and password to get temporary credentials from AWS SSO so that you can manage services with the AWS CLI. If you have questions, please start a new thread in the AWS SSO Forum.
This is a customer post by Ajay Rathod, a Staff Data Engineer at Realtor.com.
Realtor.com, in their own words: Realtor.com®, operated by Move, Inc., is a trusted resource for home buyers, sellers, and dreamers. It offers the most comprehensive database of for-sale properties, among competing national sites, and the information, tools, and professional expertise to help people move confidently through every step of their home journey.
Move, Inc. processes hundreds of terabytes of data partitioned by day and hour. Various teams run hundreds of queries on this data. Using AWS services, Move, Inc. has built an infrastructure for gathering and analyzing data:
To increase the effectiveness of the storage and subsequent querying, the data is converted into a Parquet format, and stored again in S3.
Amazon Athena is used as the SQL (Structured Query Language) engine to query the data in S3. Athena is easy to use and is often quickly adopted by various teams.
Teams visualize query results in Amazon QuickSight. Amazon QuickSight is a business analytics service that allows you to quickly and easily visualize data and collaborate with other users in your account.
This architecture is known as the data platform and is shared by the data science, data engineering, and the data operations teams within the organization. Move, Inc. also enables other cross-functional teams to use Athena. When many users use Athena, it helps to monitor its usage to ensure cost-effectiveness. This leads to a strong need for Athena metrics that can give details about the following:
Users
Amount of data scanned (to monitor the cost of AWS service usage)
The databases used for queries
Actual queries that teams run
Currently, the Move, Inc. team does not have an easy way of obtaining all these metrics from a single tool. Having a way to do this would greatly simplify monitoring efforts. For example, the data operations team wants to collect several metrics every day obtained from queries run on Athena for their data. They require the following metrics:
Amount of data scanned by each user
Number of queries by each user
Databases accessed by each user
In this post, I discuss how to build a solution for monitoring Athena usage. To build this solution, you rely on AWS CloudTrail. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an S3 bucket.
Solution
Here is the high-level overview:
Use the CloudTrail API to audit the user queries, and then use Athena to create a table from the CloudTrail logs.
Query the Athena API with the AWS CLI to gather metrics about the data scanned by the user queries and put this information into another table in Athena.
Combine the information from these two sources by joining the two tables.
Use the resulting data to analyze, build insights, and create a dashboard that shows the usage of Athena by users within different teams in the organization.
The architecture of this solution is shown in the following diagram.
Take a look at this solution step by step.
IAM and permissions setup
This solution uses CloudTrail, Athena, and S3. Make sure that the users who run the following scripts and steps have the appropriate IAM roles and policies. For more information, see Tutorial: Delegate Access Across AWS Accounts Using IAM Roles.
Step 1: Create a table in Athena for data in CloudTrail
CloudTrail records all Athena queries run by different teams within the organization. These logs are saved in S3. The fields of most interest are:
User identity
Start time of the API call
Source IP address
Request parameters
Response elements returned by the service
When end users make queries in Athena, these queries are recorded by CloudTrail as responses from Athena web service calls. In these responses, each query is represented as a JSON (JavaScript Object Notation) string.
You can create the cloudtrail_logs table in Athena with a CREATE TABLE statement over the CloudTrail log schema. For the full statement and details, see Querying CloudTrail Logs in the Athena documentation.
Step 2: Create a table in Amazon Athena for data from API output
Athena provides an API that you can query to obtain information about a specific query ID (GetQueryExecution). It also provides a batch API (BatchGetQueryExecution) that returns information for up to 50 query IDs at a time.
You can use this API call to obtain information about the Athena queries that you are interested in and store this information in an S3 location. Create an Athena table to represent this data in S3. For the purpose of this post, the response fields that are of interest are as follows:
QueryExecutionId
Database
EngineExecutionTimeInMillis
DataScannedInBytes
Status
SubmissionDateTime
CompletionDateTime
The CREATE TABLE statement for athena_api_output is as follows:
CREATE EXTERNAL TABLE IF NOT EXISTS athena_api_output(
queryid string,
querydatabase string,
executiontime bigint,
datascanned bigint,
status string,
submissiondatetime string,
completiondatetime string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'field.delim' = ','
) LOCATION 's3://<s3 location of the output from the API calls>'
TBLPROPERTIES ('has_encrypted_data'='false')
You can inspect the query IDs and user information for the last day. The query is as follows:
with data AS (
SELECT
json_extract(responseelements,
'$.queryExecutionId') AS query_id,
(useridentity.arn) AS uid,
(useridentity.sessioncontext.sessionIssuer.userName) AS role,
from_iso8601_timestamp(eventtime) AS dt
FROM cloudtrail_logs
WHERE eventsource='athena.amazonaws.com'
AND eventname='StartQueryExecution'
AND json_extract(responseelements, '$.queryExecutionId') is NOT null)
SELECT *
FROM data
WHERE dt > date_add('day',-1,now() )
Step 3: Obtain query statistics from the Athena API
You can write a simple Python script to loop through the queries in batches of 50 and query the Athena API for query statistics. You can use the Boto3 library for these lookups. Boto3 is the AWS SDK for Python and provides an easy way to interact with and automate AWS services. The response from the Boto3 calls can be parsed to extract the fields described in Step 2.
An example Python script is available in the AthenaMetrics GitHub repo.
Format these fields, for each query ID, as CSV strings and store them for the entire batch response in an S3 bucket. This S3 bucket is represented by the table created in Step 2, athena_api_output.
In your Python code, create a variable named sql_query, and assign it a string representing the SQL query defined in Step 2. The s3_query_folder is the location in S3 that is used by Athena for storing results of the query. The code is as follows:
import time
import uuid

import boto3

sql_query = """
with data AS (
  SELECT
    json_extract(responseelements,
      '$.queryExecutionId') AS query_id,
    (useridentity.arn) AS uid,
    (useridentity.sessioncontext.sessionIssuer.userName) AS role,
    from_iso8601_timestamp(eventtime) AS dt
  FROM cloudtrail_logs
  WHERE eventsource='athena.amazonaws.com'
    AND eventname='StartQueryExecution'
    AND json_extract(responseelements, '$.queryExecutionId') is NOT null)
SELECT *
FROM data
WHERE dt > date_add('day',-1,now() )
"""

# Location in S3 that Athena uses for storing the results of this query.
s3_query_folder = 's3://<s3 location for Athena query results>'

athena_client = boto3.client('athena')

query_execution = athena_client.start_query_execution(
    QueryString=sql_query,
    ClientRequestToken=str(uuid.uuid4()),
    ResultConfiguration={
        'OutputLocation': s3_query_folder,
    }
)
query_execution_id = query_execution['QueryExecutionId']

# Allow the query to complete: poll the status until it leaves the QUEUED/RUNNING states.
while True:
    response = athena_client.get_query_execution(QueryExecutionId=query_execution_id)
    state = response['QueryExecution']['Status']['State']
    if state not in ('QUEUED', 'RUNNING'):
        break
    time.sleep(1)

if state == 'SUCCEEDED':
    results = athena_client.get_query_results(QueryExecutionId=query_execution_id)
You can iterate through the results in the response object and consolidate them in batches of 50 results. For each batch, you can invoke the Athena API, batch-get-query-execution.
Store the output in the S3 location pointed to by the CREATE TABLE definition for the athena_api_output table from Step 2. The SQL statement above returns only queries run in the last 24 hours; you may want to extend that window to capture usage over a longer period of time.
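A sketch of this API call might look like the following, assuming batchqueryids already holds a batch of up to 50 query IDs from the SELECT query, and that output_bucket and output_key are placeholder names for the S3 destination backing athena_api_output.
import csv
import io

import boto3

athena_client = boto3.client('athena')
s3_client = boto3.client('s3')

output_bucket = 'my-athena-metrics-bucket'           # placeholder bucket name
output_key = 'athena-api-output/batch-001.csv'       # placeholder object key

# Look up execution details for the whole batch of query IDs in a single call.
response = athena_client.batch_get_query_execution(QueryExecutionIds=batchqueryids)

# Format the fields of interest as CSV rows, one row per query execution.
buffer = io.StringIO()
writer = csv.writer(buffer)
for execution in response['QueryExecutions']:
    statistics = execution.get('Statistics', {})
    status = execution.get('Status', {})
    writer.writerow([
        execution['QueryExecutionId'],
        execution.get('QueryExecutionContext', {}).get('Database', ''),
        statistics.get('EngineExecutionTimeInMillis', 0),
        statistics.get('DataScannedInBytes', 0),
        status.get('State', ''),
        status.get('SubmissionDateTime', ''),
        status.get('CompletionDateTime', ''),
    ])

# Store the CSV for this batch in the S3 location backing the athena_api_output table.
s3_client.put_object(Bucket=output_bucket, Key=output_key, Body=buffer.getvalue())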
The batchqueryids value is an array of 50 query IDs extracted from the result set of the SELECT query. This script creates the data needed by your second table, athena_api_output, and you are now ready to join both tables in Athena.
Step 4: Join the CloudTrail and Athena API data
Now that the two tables are available with the data that you need, you can run the following Athena query to look at the usage by user. You can limit the output of this query to the most recent five days.
SELECT
c.useridentity.arn,
json_extract(c.responseelements, '$.queryExecutionId') qid,
a.datascanned,
a.querydatabase,
a.executiontime,
a.submissiondatetime,
a.completiondatetime,
a.status
FROM cloudtrail_logs c
JOIN athena_api_output a
ON cast(json_extract(c.responseelements, '$.queryExecutionId') as varchar) = a.queryid
WHERE eventsource = 'athena.amazonaws.com'
AND eventname = 'StartQueryExecution'
AND from_iso8601_timestamp(eventtime) > date_add('day',-5 ,now() )
Step 5: Analyze and visualize the results
In this step, you can use QuickSight to create a dashboard that shows metrics such as the following:
Average amount of data scanned (MB) by a user and database
Using the solution described in this post, you can continuously monitor the usage of Athena by various teams. Taking this a step further, you can automate and set user limits for how much data the Athena users in your team can query within a given period of time. You may also choose to add notifications when the usage by a particular user crosses a specified threshold. This helps you manage costs incurred by different teams in your organization.
Realtor.com would like to acknowledge the tremendous support and guidance provided by Hemant Borole, Senior Consultant, Big Data & Analytics with AWS Professional Services in helping to author this post.
Ajay Rathod is a Staff Data Engineer at Realtor.com. With a deep background in the AWS Cloud platform and data infrastructure, Ajay leads the data engineering and automation aspects of data operations at Realtor.com. He has designed and deployed many ETL pipelines and workflows for the Realtor Data Analytics Platform using AWS services and tools such as Data Pipeline, Athena, AWS Batch, Glue, and Boto3. He has created various operational metrics to monitor ETL pipelines and resource usage.
Amazon EMR enables data analysts and scientists to deploy a cluster of any size, running popular frameworks such as Spark, HBase, Presto, and Flink, in minutes. When you launch a cluster, Amazon EMR automatically configures the underlying Amazon EC2 instances with the frameworks and applications that you choose for your cluster. This can include popular web interfaces such as the Hue workbench, Zeppelin notebooks, and Ganglia monitoring dashboards and tools.
These web interfaces are hosted on the EMR master node and must be accessed using the public DNS name of the master node (the master public DNS value). The master public DNS value is dynamically created, is not very user friendly, and is hard to remember; it looks something like ip-###-###-###-###.us-west-2.compute.internal. Not having a friendly URL for connecting to the popular workbench or notebook interfaces can disrupt your workflow and reduce the agility you have gained.
Some customers have addressed this challenge through custom bootstrap actions, steps, or external scripts that periodically check for new clusters and register a friendlier name in DNS. These approaches either put additional burden on the data practitioners or require additional resources to execute the scripts. In addition, there is typically some lag time associated with such scripts. They often don’t do a great job cleaning up the DNS records after the cluster has terminated, potentially resulting in a security risk.
The solution in this post provides an automated, serverless approach to registering a friendly master node name for easy access to the web interfaces.
Before I dive deeper, I review these key services and how they are part of this solution.
CloudWatch Events
CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules, you can match events and route them to one or more target functions or streams. An event can be generated in one of four ways:
From an AWS service when resources change state
From API calls that are delivered via AWS CloudTrail
From your own code that can generate application-level events
On a schedule (for example, using a cron or rate expression)
In this solution, I cover the first type of event, which is automatically emitted by EMR when the cluster state changes. Based on the state in the event, the solution either creates or updates the DNS record in Route 53 when the cluster state changes to STARTING, or deletes the DNS record when the cluster is no longer needed and the state changes to TERMINATED. For more information about all EMR event details, see Monitor CloudWatch Events.
Route 53 private hosted zones
A private hosted zone is a container that holds information about how to route traffic for a domain and its subdomains within one or more VPCs. Private hosted zones enable you to use custom DNS names for your internal resources without exposing the names or IP addresses to the internet.
Route 53 supports resource record sets with a wide range of record types. In this solution, you use a CNAME record, which specifies one domain name as an alias for another (the ‘canonical’ domain). You use the friendly name of the cluster as the CNAME for the EMR master public DNS value.
You are using private hosted zones because an EMR cluster is typically deployed within a private subnet and is accessed either from within the VPC or from on-premises resources over VPN or AWS Direct Connect. To resolve domain names in private hosted zones from your on-premises network, configure a DNS forwarder, as described in How can I resolve Route 53 private hosted zones from an on-premises network via an Ubuntu instance?.
Lambda
Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda executes your code only when needed and scales automatically to thousands of requests per second. Lambda takes care of high availability, and server and OS maintenance and patching. You pay only for the consumed compute time. There is no charge when your code is not running.
Lambda provides the ability to invoke your code in response to events, such as when an object is put to an Amazon S3 bucket or as in this case, when a CloudWatch event is emitted. As part of this solution, you deploy a Lambda function as a target that is invoked by CloudWatch Events when the event matches your rule. You also configure the necessary permissions based on the Lambda permissions model, including a Lambda function policy and Lambda execution role.
Putting it all together
Now that you have all of the pieces, you can put together a complete solution. The following diagram illustrates how the solution works:
Start with a user activity such as launching or terminating an EMR cluster.
EMR automatically sends events to the CloudWatch Events stream.
A CloudWatch Events rule matches the specified event and routes it to a target, which in this case is a Lambda function. In this case, you are using the EMR Cluster State Change event.
The Lambda function performs the following key steps (a minimal sketch of such a handler follows this list):
Get the clusterId value from the event detail and use it to call the EMR DescribeCluster API to retrieve the following data point:
MasterPublicDnsName – public DNS name of the master node
Locate the tag containing the friendly name to use as the CNAME for the cluster. The key name of this tag should be cluster_name. The value should be specified as host.domain.com, where domain is the private hosted zone in which to update the DNS record.
Update DNS based on the state in the event detail.
If the state is STARTING, the function calls the Route 53 API to create or update a resource record set in the private hosted zone specified by the domain tag. This is a CNAME record mapped to MasterPublicDnsName.
Conversely, if the state is TERMINATED, the function calls the Route 53 API to delete the associated resource record set from the private hosted zone.
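The original function code is not reproduced in this post, but a minimal sketch of a handler that follows these steps might look like the following. The HOSTED_ZONE_ID environment variable and the 300-second TTL are assumptions made for brevity; the solution described above instead derives the hosted zone from the domain portion of the cluster_name tag value.
import os

import boto3

emr = boto3.client('emr')
route53 = boto3.client('route53')

# Assumption: the private hosted zone ID is passed to the function as an environment variable.
HOSTED_ZONE_ID = os.environ['HOSTED_ZONE_ID']

def handler(event, context):
    state = event['detail']['state']
    cluster_id = event['detail']['clusterId']

    # Retrieve the master public DNS name and the cluster_name tag from EMR.
    cluster = emr.describe_cluster(ClusterId=cluster_id)['Cluster']
    master_dns = cluster.get('MasterPublicDnsName')
    tags = {tag['Key']: tag['Value'] for tag in cluster.get('Tags', [])}
    friendly_name = tags.get('cluster_name')   # expected format: host.domain.com
    if not friendly_name or not master_dns:
        return

    # UPSERT the CNAME record when the cluster is STARTING; DELETE it when TERMINATED.
    if state == 'STARTING':
        action = 'UPSERT'
    elif state == 'TERMINATED':
        action = 'DELETE'
    else:
        return

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={'Changes': [{
            'Action': action,
            'ResourceRecordSet': {
                'Name': friendly_name,
                'Type': 'CNAME',
                'TTL': 300,
                'ResourceRecords': [{'Value': master_dns}],
            },
        }]},
    )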
Deploying the solution
Because all of the components of this solution are serverless, use the AWS Serverless Application Model (AWS SAM) template to deploy the solution. AWS SAM is natively supported by AWS CloudFormation and provides a simplified syntax for expressing serverless resources, resulting in fewer lines of code.
Overview of the SAM template
For this solution, the SAM template has 76 lines of text as compared to 142 lines without SAM resources (and writing the template in YAML would be even slightly smaller). The solution can be deployed using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SAM Local.
CloudFormation transforms help simplify template authoring by condensing a multiple-line resource declaration into a single line in your template. To inform CloudFormation that your template defines a serverless application, add a Transform declaration (Transform: 'AWS::Serverless-2016-10-31') under the template format version.
Before SAM, you would use the AWS::Lambda::Function resource type to define your Lambda function. You would then need a resource to define the permissions for the function (AWS::Lambda::Permission), another resource to define a Lambda execution role (AWS::IAM::Role), and finally a CloudWatch Events resource (Events::Rule) that triggers this function.
With SAM, you need to define just a single resource for your function, AWS::Serverless::Function. Using this single resource type, you can define everything that you need, including function properties such as function handler, runtime, and code URI, as well as the required IAM policies and the CloudWatch event.
A few additional things to note in the code example:
CodeUri – Before you can deploy a SAM template, first upload your Lambda function code zip to S3. You can do this manually or use the aws cloudformation package CLI command to automate the task of uploading local artifacts to an S3 bucket, as shown later.
Lambda execution role and permissions – You are not specifying a Lambda execution role in the template. Rather, you are providing the required permissions as IAM policy documents. When the template is submitted, CloudFormation expands the AWS::Serverless::Function resource, declaring a Lambda function and an execution role. The created role has two attached policies: a default AWSLambdaBasicExecutionRole and the inline policy specified in the template.
CloudWatch Events rule – Instead of specifying a CloudWatch Events resource type, you are defining an event source object as a property of the function itself. When the template is submitted, CloudFormation expands this into a CloudWatch Events rule resource and automatically creates the Lambda resource-based permissions to allow the CloudWatch Events rule to trigger the function.
NOTE: If you are trying this solution outside of us-east-1, then you should download the necessary files, upload them to the buckets in your region, edit the script as appropriate and then run it or use the CLI deployment method below.
3.) Choose Next.
4.) On the Specify Details page, keep or modify the stack name and choose Next.
5.) On the Options page, choose Next.
6.) On the Review page, take the following steps:
Acknowledge the two Transform access capabilities. This allows the CloudFormation transform to create the required IAM resources with custom names.
Under Transforms, choose Create Change Set.
Wait a few seconds for the change set to be created before proceeding. The change set should look as follows:
7.) Choose Execute to deploy the template.
After the template is deployed, you should see four resources created:
After the package is successfully uploaded, the output should look as follows:
Uploading to 0f6d12c7872b50b37dbfd5a60385b854 1872 / 1872.0 (100.00%)
Successfully packaged artifacts and wrote output template to file serverless-output.template.
The CodeUri property in serverless-output.template is now referencing the packaged artifacts in the S3 bucket that you specified:
s3://<bucket>/0f6d12c7872b50b37dbfd5a60385b854
Use the aws cloudformation deploy CLI command to deploy the stack:
You should see the following output after the stack has been successfully created:
Waiting for changeset to be created...
Waiting for stack create/update to complete
Successfully created/updated stack – EmrDnsSetterCli
Validating results
To test the solution, launch an EMR cluster. The Lambda function looks for the cluster_name tag associated with the EMR cluster. Make sure to specify the friendly name of your cluster as host.domain.com where the domain is the private hosted zone in which to create the CNAME record.
Here is a sample CLI command to launch a cluster within a specific subnet in a VPC with the required tag cluster_name.
After the cluster is launched, log in to the Route 53 console. In the left navigation pane, choose Hosted Zones to view the list of private and public zones currently configured in Route 53. Select the hosted zone that you specified in the ZONE tag when you launched the cluster. Verify that the resource records were created.
You can also monitor the CloudWatch Events metrics that are published to CloudWatch every minute, such as the number of TriggeredRules and Invocations.
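For example, a quick way to pull the rule’s invocation count for the past hour with Boto3 might look like the following; the rule name EmrDnsSetter is a placeholder for whatever name your deployed rule received.
import datetime

import boto3

cloudwatch = boto3.client('cloudwatch')

# Sum of Invocations for the CloudWatch Events rule over the last hour, in 5-minute periods.
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Events',
    MetricName='Invocations',
    Dimensions=[{'Name': 'RuleName', 'Value': 'EmrDnsSetter'}],  # placeholder rule name
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=['Sum'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Sum'])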
Now that you’ve verified that the Lambda function successfully updated the Route 53 resource records in the zone file, terminate the EMR cluster and verify that the records are removed by the same function.
Conclusion
This solution provides a serverless approach to automatically assigning a friendly name for your EMR cluster for easy access to popular notebooks and other web interfaces. CloudWatch Events also supports cross-account event delivery, so if you are running EMR clusters in multiple AWS accounts, all cluster state events across accounts can be consolidated into a single account.
I hope that this solution provides a small glimpse into the power of CloudWatch Events and Lambda and how they can be leveraged with EMR and other AWS big data services. For example, by using the EMR step state change event, you can chain various pieces of your analytics pipeline. You may have a transient cluster perform data ingest and, when the task successfully completes, spin up an ETL cluster for transformation and upload to Amazon Redshift. The possibilities are truly endless.
This post contributed by AWS Senior Cloud Infrastructure Architect Anabell St Vincent.
Some systems or applications require Transport Layer Security (TLS) traffic from the client all the way through to the Docker container, without offloading or terminating certificates at a load balancer. Some highly time-sensitive services may require communication over TLS without any decryption and re-encryption in the communication path.
There are multiple options for this type of implementation on AWS. One option is to use a service discovery tool to implement the requirements, but that creates overhead from an implementation and management perspective. In this post, I examine the option of using Amazon Elastic Container Service (Amazon ECS) with a Network Load Balancer.
Amazon ECS is a highly scalable, high-performance service for running Docker containers on AWS. It is integrated with each of the Elastic Load Balancing load balancers offered by AWS:
The Classic Load Balancer supports traffic at the application or network level and can pass TLS traffic through when configured with TCP listeners. This approach limits the number of containers that Amazon ECS can deploy to one per EC2 host, and it is only available for the EC2 launch type, not Fargate. This is because the Classic Load Balancer doesn’t support target groups, so dynamic port mapping can’t be used for this type of implementation.
The Application Load Balancer functions at the application layer (layer 7 of the Open Systems Interconnection (OSI) model) and supports the HTTP and HTTPS protocols. By operating at layer 7, the Application Load Balancer can route traffic based on the request path in the URL. It can also provide SSL offloading, or termination of SSL certificates, for applications by hosting the SSL certificate. The Application Load Balancer integrates well with ECS by providing target groups for the container instances, which allow for port mapping and targeting container groups. Port mapping allows containers to run on different ports while still being represented by the one port configured on the ALB as part of the target group of the task.
The Network Load Balancer is a high-performance, ultra-low latency load balancer that operates at layer 4. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. Operating at layer 4 means that traffic is passed to targets based on IP addresses and ports, as opposed to session information or cookies, as is the case with the Application Load Balancer. The Network Load Balancer also supports long-lived TCP connections, which are ideal for WebSocket-style applications. Some applications communicate over TCP protocols rather than solely over HTTP and HTTPS, and the Network Load Balancer can load balance services that require protocols other than HTTP and HTTPS.
The Network Load Balancer is the best option for managing secure traffic as it provides support for TCP traffic pass through, without decrypting and then re-encrypting the traffic. Additionally, the Network Load Balancer supports target groups, which means port mapping can be used, as well as configuring the load balancer as part of the task definition for the containers.
Architecture
The diagram below shows a high-level architecture with TLS traffic coming from the internet that passes through the VPC to the Network Load Balancer, then to a container without offloading or terminating the certificate in the path.
In this diagram, there are three containers running in an ECS cluster. The ECS cluster can be either the EC2 or Fargate launch type.
The following diagram shows an architecture that contains two different services deployed in two different ECS clusters.
In this architecture, the containers in each of the clusters can reference the other containers securely via the Network Load Balancer without terminating or offloading the certificates until it reaches the destination container.
This approach addresses the requirements where containers need to communicate securely whether they are deployed in one VPC, across VPCs, or in separate AWS accounts.
The following diagram shows both TLS connections from the internet, as well as connections from within the same cluster.
This approach uses the same Network Load Balancer for the cluster to serve two different services. Implement it by creating two target groups (one for each service) and associating them with the ECS task definitions. The containers use the DNS name associated with the Network Load Balancer to reach the containers in the other service, as sketched in the example that follows.
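As an illustration only (the names, ARNs, and ports below are placeholders, not values from the architecture above), the second target group, its TCP listener, and the ECS service association could be wired up with Boto3 roughly as follows.
import boto3

elbv2 = boto3.client('elbv2')
ecs = boto3.client('ecs')

# Placeholder ARN of the existing Network Load Balancer shared by both services.
nlb_arn = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/shared-nlb/abc123'

# TCP target group and listener: TLS is passed through to the containers, not terminated here.
target_group = elbv2.create_target_group(
    Name='service-b-tls',
    Protocol='TCP',
    Port=443,
    VpcId='vpc-0123456789abcdef0',   # placeholder VPC ID
    TargetType='instance',
)['TargetGroups'][0]

elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol='TCP',
    Port=8443,                       # a second listener port for the second service
    DefaultActions=[{'Type': 'forward', 'TargetGroupArn': target_group['TargetGroupArn']}],
)

# Associate the target group with the ECS service so dynamic port mapping can be used.
ecs.create_service(
    cluster='tls-cluster',
    serviceName='service-b',
    taskDefinition='service-b-task',
    desiredCount=2,
    loadBalancers=[{
        'targetGroupArn': target_group['TargetGroupArn'],
        'containerName': 'service-b',
        'containerPort': 443,
    }],
)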
Summary
The architectures in this post show how the Network Load Balancer integrates seamlessly with ECS and other AWS services, providing end-to-end TLS communication across services without offloading or terminating the certificates. This gives you the ability to use dynamic ports in ECS containers. It can also handle tens of millions of requests per second while maintaining high throughput at ultra-low latency for applications that require the TCP protocol.
If you have questions or suggestions, please comment below.
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2017. I have included a brief description with each link to explain what each page covers. Use this list to see what other AWS customers have been viewing and perhaps to pique your own interest in a topic you’ve been meaning to learn about.
What Is IAM? Learn more about IAM, a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and how they can use resources (authorization).
Creating an IAM User in Your AWS Account You can create one or more IAM users in your AWS account. You might create an IAM user when someone joins your organization, or when you have a new application that needs to make API calls to AWS.
Managing Access Keys for IAM Users Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users.
IAM JSON Policy Elements Reference Learn more about the elements that you can use when you create a JSON policy. View additional JSON policy examples and learn about conditions, supported data types, and how they are used in various services.
IAM Best Practices To help secure your AWS resources, follow these best practices for IAM.
Using Multi-Factor Authentication (MFA) in AWS For an additional layer of security when signing in to your AWS account, AWS recommends that you configure MFA to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device when they access AWS websites or services.
The IAM Console and the Sign-in Page Learn about the IAM-enabled AWS Management Console sign-in page and how to sign in as an AWS account root user or as an IAM user. To help your users sign in easily, create a unique sign-in URL for your account.
How Users Sign In to Your Account After you create IAM users and passwords for each, your users can sign in to the AWS Management Console using your account ID or alias, or from a special URL that includes your account ID.
Working with Server Certificates Some AWS services can use server certificates that you manage with IAM or AWS Certificate Manager (ACM). ACM is the preferred tool to provision, manage, and deploy your server certificates. Use IAM as a certificate manager only when you must support HTTPS connections in a region that is not supported by ACM.
IAM Roles A role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS using temporary security credentials that are created dynamically and provided to the user. A role is intended to be assumable by anyone who needs it using these temporary security credentials.
IAM Policies Read an overview of policies, which are entities in AWS that, when attached to an identity or resource, define their permissions. Policies are stored in AWS as JSON documents attached to principals as identity-based policies or to resources as resource-based policies.
Example Policies This collection of policies can help you define permissions for your IAM identities, such as granting access to a specific Amazon DynamoDB table or launching Amazon EC2 instances in a specific subnet.
Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you do not have to distribute long-term credentials to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources.
Creating Your First IAM Admin User and Group As a best practice, do not use the AWS account root user for any task where it’s not required. Instead, learn how to create an IAM administrator user and group for yourself.
Temporary Security Credentials You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
The AWS Account Root User When you first create an AWS account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. To manage your root user, follow the steps on this page.
In the “Comments” section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make them more useful to you.
Today, AWS Organizations made it easier for you to remove AWS accounts from an organization. You can remove accounts from an organization without requiring assistance from AWS Support, and the accounts you remove can operate as standalone accounts or be invited to join another organization. For example, you could remove graduating students’ AWS accounts from your organization to become standalone accounts or move accounts into another organization after an acquisition.
In this blog post, I explain the information and permissions required to remove accounts from an organization, and then I demonstrate how to remove an account from an organization.
Information and permissions required to remove accounts from an organization
In order to remove an AWS account from an organization, you must have permission to do so. To verify that your AWS Identity and Access Management (IAM) user or role has the required permissions to remove an account from an organization, navigate to the IAM console. Though you also can remove an account from your organization after signing in as the root user, AWS recommends you follow IAM best practices and use an IAM user or role instead.
To remove an AWS account from an organization, your IAM user or role must have the following permissions:
organizations:DescribeOrganization (console only)
organizations:LeaveOrganization
In order for an AWS account in an organization to be removed and become a standalone account, the account must have the following:
Contact information
An accepted AWS Customer Agreement
A valid payment method, such as a credit card
A verified phone number
A selected support plan option
In certain situations, you may need to update some of this required information for the AWS account you want to remove, and the Organizations console walks you through the steps to update it. If you need to update the payment method for the AWS account, the IAM user or role performing the action must have the following permissions:
aws-portal:ModifyBilling
aws-portal:ModifyPaymentMethods
For a complete list of required permissions, steps to remove an account from an organization, and important information applicable to programmatically created accounts in Organizations, see Removing a Member Account from Your Organization.
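If you prefer to script the removal instead of using the console, a minimal Boto3 sketch might look like the following. The account ID is a placeholder, and the same standalone-account prerequisites listed above still apply.
import boto3

organizations = boto3.client('organizations')

# Run from the organization's master account to remove a member account.
organizations.remove_account_from_organization(AccountId='111122223333')  # placeholder account ID

# Alternatively, run from within the member account itself
# (requires the organizations:LeaveOrganization permission):
# organizations.leave_organization()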
How to remove an AWS account from an organization
Let’s say that I need to remove some students’ AWS accounts from one of my university’s organizations because the students are graduating and want to continue using their AWS accounts independently of the organization.
To remove these student accounts from the organization:
I navigate to the Organizations console, choose the accounts that I want to remove, and choose Remove accounts.
I confirm that I want to remove these accounts by choosing Remove.
The accounts for Mary Major and Richard Roe already had all of the information required to become standalone accounts and are removed from the organization. The accounts that require additional information to become standalone accounts show a status of Remove failed. In this example, I must complete the information for the Jane Doe and Carlos Salazar accounts before I can remove them.
I will work on the Jane Doe and Carlos Salazar accounts one at a time. To see the sign-in options for the Jane Doe account, I choose Sign-in options.
I want the students to provide their additional required information so that their accounts can become standalone accounts. To allow Jane, the student who owns the Jane Doe account, to sign in to her account and provide the missing information, I choose Copy link and email the link to Jane.
To complete this process for the Carlos Salazar account, I return to the account list on the previous page and choose Sign-in options for the Carlos Salazar account. I email Carlos, the owner of the Carlos Salazar account, a self-service account removal link. Both account owners follow the same process to remove their accounts from the organization.
After receiving the email from me with the sign-in link, Jane clicks the link to sign in to her Jane Doe account. After signing in, Jane provides and submits the information that is required to make her account a standalone account. Jane then is taken to the Organizations console where she chooses Leave organization to remove her account from the organization.
After Jane chooses Leave organization, she is redirected to the Organizations console, and a notification confirms that the Jane Doe account has been removed from the organization.
As the administrator in the master account of the organization, I can verify the successful removal by checking that the Jane Doe account no longer shows up in the list of accounts. Alternatively, if CloudWatch Events has been set up for Organizations, notifications can be provided through additional channels. For information about how to set up CloudWatch Events for Organizations, see Tutorial: Monitor Important Changes to Your Organization with CloudWatch Events.
Summary
Today, AWS Organizations made it easier for you to remove AWS accounts from your organization so that they can operate independently of your organization. To remove an AWS account from your organization, sign in to the Organizations console.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Organizations forum.
Whether you want to review a Security, Compliance, & Identity track session you attended at AWS re:Invent 2017, or you want to experience a session for the first time, videos and slide decks from the Security, Compliance, & Identity track are now available.
Introductory
SID201: IAM for Enterprises: How Vanguard Strikes the Balance Between Agility, Governance, and Security