Amazon QuickSight is a fully managed cloud business intelligence service that gives you fast, easy-to-use business analytics for big data. QuickSight makes business analytics available to organizations of all shapes and sizes, with the ability to access data that is stored in your Amazon Redshift data warehouse, your Amazon Relational Database Service (RDS) relational databases, flat files in S3, and (via connectors) data stored in on-premises MySQL, PostgreSQL, and SQL Server databases. QuickSight scales to accommodate tens, hundreds, or thousands of users per organization.
Today we are launching a new, session-based pricing option for QuickSight, along with additional region support and other important new features. Let’s take a look at each one:
Pay-per-Session Pricing
Our customers are making great use of QuickSight and take full advantage of the power it gives them to connect to data sources, create reports, and explore visualizations.
However, not everyone in an organization needs or wants such powerful authoring capabilities. For many users, having access to curated data in dashboards and being able to interact with the data by drilling down, filtering, or slicing-and-dicing is more than adequate. Subscribing them to a monthly or annual plan can be seen as an unwarranted expense, so many of these casual users end up without access to interactive data or BI.
In order to allow customers to provide all of their users with interactive dashboards and reports, the Enterprise Edition of Amazon QuickSight now allows Reader access to dashboards on a Pay-per-Session basis. QuickSight users are now classified as Admins, Authors, or Readers, with distinct capabilities and prices:
Authors have access to the full power of QuickSight; they can establish database connections, upload new data, create ad hoc visualizations, and publish dashboards, all for $9 per month (Standard Edition) or $18 per month (Enterprise Edition).
Readers can view dashboards, slice and dice data using drill downs, filters and on-screen controls, and download data in CSV format, all within the secure QuickSight environment. Readers pay $0.30 for 30 minutes of access, with a monthly maximum of $5 per reader.
Admins have all authoring capabilities, and can manage users and purchase SPICE capacity in the account. The QuickSight admin now has the ability to set the desired option (Author or Reader) when they invite members of their organization to use QuickSight. They can extend Reader invites to their entire user base without incurring any up-front or monthly costs, paying only for the actual usage.
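To make the Reader pricing concrete, here is a quick back-of-the-envelope sketch in Python. Treating any partial 30-minute block as a full billable session is my assumption for illustration, not a statement of the actual billing rules:

import math

def monthly_reader_cost(session_minutes, rate=0.30, session_length=30, cap=5.00):
    # Assumption: each partial 30-minute block bills as a full session.
    sessions = math.ceil(session_minutes / session_length)
    return min(sessions * rate, cap)

print(monthly_reader_cost(90))   # 3 sessions -> $0.90
print(monthly_reader_cost(600))  # 20 sessions -> $6.00, capped at $5.00

Either way, even the heaviest reader costs no more than $5 for the month.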
A New Region
QuickSight is now available in the Asia Pacific (Tokyo) Region:
The UI is in English, with a localized version in the works.
Hourly Data Refresh
Enterprise Edition SPICE data sets can now be set to refresh as frequently as every hour. In the past, each data set could be refreshed up to 5 times a day. To learn more, read Refreshing Imported Data.
Access to Data in Private VPCs
This feature was launched in preview form late last year, and is now available in production form to users of the Enterprise Edition. As I noted at the time, you can use it to implement secure, private communication with data sources that do not have public connectivity, including on-premises data in Teradata or SQL Server, accessed over an AWS Direct Connect link. To learn more, read Working with AWS VPC.
Parameters with On-Screen Controls
QuickSight dashboards can now include parameters that are set using on-screen dropdown, text box, numeric slider or date picker controls. The default value for each parameter can be set based on the user name (QuickSight calls this a dynamic default). You could, for example, set an appropriate default based on each user’s office location, department, or sales territory. Here’s an example:
URL Actions for Linked Dashboards
You can now connect your QuickSight dashboards to external applications by defining URL actions on visuals. The actions can include parameters, and become available in the Details menu for the visual. URL actions are defined like this:
You can use this feature to link QuickSight dashboards to third party applications (e.g. Salesforce) or to your own internal applications. Read Custom URL Actions to learn how to use this feature.
Dashboard Sharing
You can now share QuickSight dashboards with all of the users in an account.
Larger SPICE Tables
The per-data set limit for SPICE tables has been raised from 10 GB to 25 GB.
Upgrade to Enterprise Edition
The QuickSight administrator can now upgrade an account from Standard Edition to Enterprise Edition with a click. This enables provisioning of Readers with pay-per-session pricing, private VPC access, row-level security for dashboards and data sets, and hourly refresh of data sets. Enterprise Edition pricing applies after the upgrade.
Available Now
Everything I listed above is available now and you can start using it today!
AWS has achieved Spain’s Esquema Nacional de Seguridad (ENS) High certification across 29 services. To successfully achieve the ENS High Standard, BDO España conducted an independent audit and attested that AWS meets confidentiality, integrity, and availability standards. This provides the assurance needed by Spanish Public Sector organizations wanting to build secure applications and services on AWS.
The National Security Framework, regulated under Royal Decree 3/2010, was developed through close collaboration between ENAC (Entidad Nacional de Acreditación), the Ministry of Finance and Public Administration, the CCN (National Cryptologic Centre), and other administrative bodies.
The following AWS Services are ENS High accredited across our Dublin and Frankfurt Regions:
Applying technology to healthcare data has the potential to produce many exciting and important outcomes. The analysis produced from healthcare data can empower clinicians to improve the health of individuals and populations by enabling them to make better decisions that enhance the care they provide.
The Observational Health Data Sciences and Informatics (OHDSI, pronounced “Odyssey”) program and community is working toward this goal by producing data standards and open-source solutions to store and analyze observational health data. Using the OHDSI tools, you can visualize the health of your entire population. You can build cohorts of patients, analyze incidence rates for various conditions, and estimate the effect of treatments on patients with certain conditions. You can also model health outcome predictions using machine learning algorithms.
One of the challenges often faced when working with big data tools is the expense of the infrastructure required to run them. Another challenge is the learning curve to implement and begin using these tools. Amazon Web Services has enabled us to address many of the classic IT challenges by making enterprise class infrastructure and technology available in an affordable, elastic, and automated way. This blog post demonstrates how to combine some of the OHDSI projects (Atlas, Achilles, WebAPI, and the OMOP Common Data Model) with AWS technologies. By doing so, you can quickly and inexpensively implement a health data science and informatics environment.
Shown following is just one example of the population health analysis that is possible with the OHDSI tools. This visualization shows the prevalence of various drugs within the given population of people. This information helps researchers and clinicians discover trends and make better informed decisions about patient health.
OHDSI application architecture on AWS
Before deploying an application on AWS that transmits, processes, or stores protected health information (PHI) or personally identifiable information (PII), address your organization’s compliance concerns. Make sure that you have worked with your internal compliance and legal team to ensure compliance with the laws and regulations that govern your organization. To understand how you can use AWS services as a part of your overall compliance program, see the AWS HIPAA Compliance whitepaper. With that said, we paid careful attention to the HIPAA control set during the design of this solution.
This blog post presents a complete OHDSI application environment, including a data warehouse with sample data. It has the following features:
Following, you can see a block diagram of how the OHDSI tools map to the services provided by AWS.
Atlas is the web application that researchers interact with to perform analysis. Atlas interacts with the underlying databases through a web services application named WebAPI. In this example, both Atlas and WebAPI are deployed and managed by AWS Elastic Beanstalk. Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications. Simply upload the Atlas and WebAPI code and Elastic Beanstalk automatically handles the deployment. It covers everything from capacity provisioning, load balancing, autoscaling, and high availability, to application health monitoring. Using a feature of Elastic Beanstalk called ebextensions, the Atlas and WebAPI servers are customized to use an encrypted storage volume for the middleware application logs.
Atlas stores the state of the various patient cohorts that are analyzed in a dedicated database separate from your observational health data. This database is provided by Amazon Aurora with PostgreSQL compatibility.
Amazon Aurora is a relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It is configured for high availability and uses encryption at rest for the database and backups, and encryption in flight for the JDBC connections.
All of your observational health data is stored inside the OHDSI Observational Medical Outcomes Partnership Common Data Model (OMOP CDM). This model also stores useful vocabulary tables that help to translate values from various data sources (like EHR systems and claims data).
The OMOP CDM schema is deployed onto Amazon Redshift. Amazon Redshift is a fast, fully managed data warehouse that allows you to run complex analytic queries against petabytes of structured data. It uses sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. You can also resize an Amazon Redshift cluster as your requirements for it change.
The solution in this blog post automatically loads de-identified sample data of 1,000 people from the CMS 2008–2010 Data Entrepreneurs’ Synthetic Public Use File (DE-SynPUF). The data has helpful formatting from LTS Computing LLC. Vocabulary data from the OHDSI Athena project is also loaded into the OMOP CDM, and a results set is computed by OHDSI Achilles.
Following is a detailed technical diagram showing the configuration of the architecture to be deployed.
Deploying OHDSI on AWS
Everything just described is automatically deployed by using an AWS CloudFormation template. Using this template, you can quickly get started with the OHDSI project. The CloudFormation templates for this deployment as well as all of the supporting scripts and source code can be found in the AWS Labs GitHub repo.
From your AWS account, open the CloudFormation Management Console and choose Create Stack. From there, copy and paste the following URL in the Specify an Amazon S3 template URL box, and choose Next.
On the next screen, you provide a Stack Name (this can be anything you like) and a few other parameters for your OHDSI environment.
You use the DatabasePassword parameter to set the password for the master user account of the Amazon Redshift and Aurora databases.
You use the EBEndpoint name to generate a unique URL for Atlas to access the OHDSI environment. It is http://EBEndpoint.AWS-Region.elasticbeanstalk.com, where EBEndpoint.AWS-Region indicates the Elastic Beanstalk endpoint and AWS Region. You can configure this URL through Elastic Beanstalk if you want to change it in the future.
You use the KPair option to choose one of your existing Amazon EC2 key pairs to use with the instances that Elastic Beanstalk deploys. By doing this, you can gain administrative access to these instances in the future if you need to. If you don’t already have an Amazon EC2 key pair, you can generate one for free. You do this by going to the Key Pairs section of the EC2 console and choosing Create Key Pair.
Finally, you use the UserIPRange parameter to specify a CIDR IP address range from which to access your OHDSI environment. By default, your OHDSI environment is accessible over the public Internet. Use UserIPRange to limit access over the Internet to a single IP address or a range of IP addresses that represent users you want to have access. Through additional configuration, you can also make your OHDSI environment completely private and accessible only through a VPN or AWS Direct Connect private circuit.
When you’ve provided all Parameters, choose Next.
On the next screen, you can provide some other optional information like tags at your discretion, or just choose Next.
On the next screen, you can review what will be deployed. At the bottom of the screen, there is a check box for you to acknowledge that AWS CloudFormation might create IAM resources with custom names. This is correct; the template being deployed creates four custom roles that give permission for the AWS services involved to communicate with each other. Details of these permissions are inside the CloudFormation template referenced in the URL given in the first step. Check the box acknowledging this and choose Next.
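If you would rather script the deployment than click through the console, here is a minimal boto3 sketch of the equivalent create-stack call. The parameter keys come from the walkthrough above; the stack name, template URL, and parameter values are placeholders that you would replace with your own:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="ohdsi-demo",
    TemplateURL="https://s3.amazonaws.com/example-bucket/ohdsi.template",  # placeholder URL
    Parameters=[
        {"ParameterKey": "DatabasePassword", "ParameterValue": "ChangeMe123!"},
        {"ParameterKey": "EBEndpoint", "ParameterValue": "my-ohdsi-env"},
        {"ParameterKey": "KPair", "ParameterValue": "my-ec2-keypair"},
        {"ParameterKey": "UserIPRange", "ParameterValue": "203.0.113.0/24"},
    ],
    # The same acknowledgment as the console check box for named IAM resources.
    Capabilities=["CAPABILITY_NAMED_IAM"],
)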
You can watch as CloudFormation builds out your OHDSI architecture. A CloudFormation deployment is called a stack. The parent stack creates two child stacks, one containing the VPC and IAM roles and another created by Elastic Beanstalk with the Atlas and WebAPI servers. When all three stacks have reached the green CREATE_COMPLETE status, as shown in the screenshot following, then the OHDSI architecture has been deployed.
There is still some work going on behind the scenes, though. To watch the progress, browse to the Amazon Redshift section of your AWS Management Console and choose the Amazon Redshift cluster that was created for your OHDSI architecture. After you do so, you can observe the Loads and Queries tabs.
First, on the Loads tab, you can see the CMS De-SynPUF sample data and Athena vocabulary data being loaded into the OMOP Common Data Model. After you see the VOCABULARY table reach the COMPLETED status (as shown following), all of the sample and vocabulary data has been loaded.
After the data loads, the Achilles computation starts. On the Queries tab, you can watch Achilles running queries against your database to build out the Results schema. Achilles runs a large number of queries, and the entire process can take quite some time (about 20 minutes for the sample data we’ve loaded). Eventually, no new queries show up in the Queries tab, which shows that the Achilles computation is completed. The entire process from the time you executed the CloudFormation template until the Achilles computation is completed usually takes about an hour and 15 minutes.
At this point, you can browse to the Elastic Beanstalk section of the AWS Management Console. There, you can choose the OHDSI Application and Environment (green box) that was deployed by the CloudFormation template. At the top of the dashboard, as shown following, you see a link to a URL. This URL matches the name you provided in the EBEndpoint parameter of the CloudFormation template. Choose this URL, and you can start using Atlas to explore the CMS DE-SynPUF sample data!
Cost of deploying this environment
It used to be common to see healthcare data analytics environments deployed in an on-premises data center with expensive data warehouse appliances and virtualized environments. The cloud era has democratized the availability of the infrastructure required to do this type of data analysis, so that now it is within reach of even small organizations. This environment can expand to analyze petabyte-scale health data, and you only pay for what you need. You can see an estimated breakdown of the monthly cost components for this environment, as deployed, on the AWS Solution Calculator.
It’s also worth noting that this environment does not have to be run all of the time. If you are only performing analyses periodically, you can terminate the environment when you are finished and restore it from the database backups when you want to continue working. This would reduce the cost of operation even further.
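For the Amazon Redshift portion of the environment, that snapshot-and-restore cycle might look something like the following boto3 sketch (the cluster and snapshot identifiers are hypothetical):

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Snapshot and tear down the warehouse when analysis is finished...
redshift.delete_cluster(
    ClusterIdentifier="ohdsi-cluster",
    SkipFinalClusterSnapshot=False,
    FinalClusterSnapshotIdentifier="ohdsi-final-snapshot",
)

# ...and restore it later when you are ready to continue working.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="ohdsi-cluster",
    SnapshotIdentifier="ohdsi-final-snapshot",
)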
Summary
Now that you have a fully functional OHDSI environment with sample data, you can use this to explore and learn the toolset and its capabilities. After learning with the sample data, you can begin gaining insights by analyzing your own organization’s health data. You can do this using an extract, transform, load (ETL) process from one or more of your health data sources.
James Wiggins is a senior healthcare solutions architect at AWS. He is passionate about using technology to help organizations positively impact world health. He also loves spending time with his wife and three children.
The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!
I recently heard my manager (Ariel Kelman, VP of Marketing for AWS) talk about the important role that education plays in our work. In fact, he assigned it a significantly higher priority than traditional marketing activities that focus on leads or conversions. I’ve also heard our other leaders talk about their work to create highly scalable education programs that will allow developers, architects, and other IT professionals to improve their skills and to earn AWS Certifications.
AWS Developer Professional Series
Today I would like to tell you about the new AWS Developer Professional Series. The AWS Training and Certification team has teamed up with edX to create this new three-part series. Founded by MIT and Harvard, edX is the leading non-profit online learning destination, with a global community of over 14 million learners, backed by 130 global partners including universities, non-profits, and institutions. This collaboration expands our offerings, and gives you another training option!
The new series is designed to help you and your colleagues to build development and DevOps skills on AWS. The courses are self-paced and build on each other in order to help you to create Python applications that run on AWS by way of the AWS SDK for Python (also known as Boto). Here are the courses:
AWS Developer: Building on AWS – This course will give you an introduction to AWS services and to the AWS SDKs. You’ll create and manage an AWS account, learn about Regions, AZs, and VPCs, and install SDKs. Then you will learn how to launch Amazon Elastic Compute Cloud (EC2) instances, set up AWS Lambda functions, and use managed services such as Amazon Relational Database Service (RDS). You’ll also learn how to use our AI services for image analysis and text-to-speech, and wrap up by focusing on availability and durability.
AWS Developer: Deploying on AWS – This course will teach you about the concepts and practices that allow you to practice DevOps on AWS. You will learn how to use developer tools like AWS CodeBuild and AWS CodeDeploy, while monitoring your development and production environments using Amazon CloudWatch.
AWS Developer: Optimizing on AWS – This course focuses on performance optimization and tuning of the application that you built in the predecessor courses. You will learn how to use caching and content distribution to increase performance and to improve the end-user experience for your app. You’ll also learn how to use AWS Key Management Service (KMS) to encrypt data at rest and in transit.
The courses are built with the expectation that you already have one to three years of software development experience, including some Python skills. Each course runs for six weeks and requires three to four hours of work per week on your part. Courses start in February (Building), April (Deploying), and May (Optimizing), and you can enroll now at no charge. You can also pursue a Verified Certificate for a fee of $149 per course.
Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region lets AWS customers better serve end users in and around France.
The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.
All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.
AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.
From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:
Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.
SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.
Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.
Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic reintegration into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.
AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate their innovation process.
AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:
AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.
As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.
Use it Today
The EU (Paris) Region is open for business now and you can start using it today!
Today we launched our 17th Region globally, and the second in China. The AWS China (Ningxia) Region, operated by Ningxia Western Cloud Data Technology Co. Ltd. (NWCD), is generally available now and provides customers another option to run applications and store data on AWS in China.
Operating Partner
To comply with China’s legal and regulatory requirements, AWS has formed a strategic technology collaboration with NWCD to operate and provide services from the AWS China (Ningxia) Region. Founded in 2015, NWCD is a licensed datacenter and cloud services provider, based in Ningxia, China. NWCD joins Sinnet, the operator of the AWS China (Beijing) Region, as an AWS operating partner in China. Through these relationships, AWS provides its industry-leading technology, guidance, and expertise to NWCD and Sinnet, while NWCD and Sinnet operate and provide AWS cloud services to local customers. While the cloud services offered in both AWS China Regions are the same as those available in other AWS Regions, the AWS China Regions are different in that they are isolated from all other AWS Regions and operated by AWS’s Chinese partners separately. Customers using the AWS China Regions enter into customer agreements with Sinnet and NWCD, rather than with AWS.
Use it Today
The AWS China (Ningxia) Region, operated by NWCD, is open for business, and you can start using it now! Starting today, Chinese developers, startups, and enterprises, as well as government, education, and non-profit organizations, can leverage AWS to run their applications and store their data in the new AWS China (Ningxia) Region, operated by NWCD. Customers already using the AWS China (Beijing) Region, operated by Sinnet, can select the AWS China (Ningxia) Region directly from the AWS Management Console, while new customers can request an account at www.amazonaws.cn to begin using both AWS China Regions.
As all AWS Quick Starts do, this Quick Start helps you automate the building of a recommended architecture that, when deployed as a package, provides a baseline AWS configuration. The Quick Start uses sets of nested AWS CloudFormation templates and user data scripts to create an example environment with a two-VPC, multi-tiered web service.
The recommended architecture built by the Quick Start supports a wide variety of AWS best practices (all of which are detailed in the Quick Start), including the use of multiple Availability Zones, isolation using public and private subnets, load balancing, and Auto Scaling.
The Quick Start package also includes a deployment guide with detailed instructions and a security controls matrix that describes how the deployment addresses CJIS Security Policy 5.6 controls. You should have your IT security assessors and risk decision makers review the security controls matrix so that they can understand the extent of the implementation of the controls within the architecture. The matrix also identifies the specific resources in the CloudFormation templates that affect each control, and contains cross-references to the CJIS Security Policy 5.6 security controls.
Today, AWS introduced AWS Single Sign-On (AWS SSO), a service that makes it easy for you to centrally manage SSO access to multiple AWS accounts and business applications. AWS SSO provides a user portal so that your users can find and access all of their assigned accounts and applications from one place, using their existing corporate credentials. AWS SSO is integrated with AWS Organizations to enable you to manage access to AWS accounts in your organization. In addition, AWS SSO supports Security Assertion Markup Language (SAML) 2.0, which means you can extend SSO access to your SAML-enabled applications by using the AWS SSO application configuration wizard. AWS SSO also includes built-in SSO integrations with many business applications, such as Salesforce, Box, and Office 365.
In this blog post, I help you get started with AWS SSO by answering three main questions:
What benefits does AWS SSO provide?
What are the key features of AWS SSO?
How do I get started?
1. What benefits does AWS SSO provide?
You can connect your corporate Microsoft Active Directory to AWS SSO so that your users can sign in to the user portal with their user names and passwords to access the AWS accounts and applications to which you have granted them access. The following screenshot shows an example of the AWS SSO user portal.
You can use AWS SSO to centrally assign, manage, and audit your users’ access to multiple AWS accounts and SAML-enabled business applications. You can add new users to the appropriate Active Directory group, which automatically gives them access to the AWS accounts and applications assigned for members of that group. AWS SSO also provides better visibility into which users accessed which accounts and applications from the user portal by recording all user portal sign-in activities in AWS CloudTrail. AWS SSO records details such as the IP address, user name, date, and time of the sign-in request. Any changes made by administrators in the AWS SSO console also are recorded in CloudTrail, and you can use security information and event management (SIEM) solutions such as Splunk to analyze the associated CloudTrail logs.
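As a sketch of that kind of analysis, the following boto3 snippet pages through recent CloudTrail events and prints the sign-in details. Treat the sso.amazonaws.com event source as an assumption and verify the exact value in your own trails:

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "sso.amazonaws.com"}
    ]
)
for page in pages:
    for event in page["Events"]:
        # Username may be absent on some events, so use .get()
        print(event["EventTime"], event["EventName"], event.get("Username"))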
2. What are the key features of AWS SSO?
AWS SSO includes the following key features.
AWS SSO user portal: In the user portal, your users can easily find and access all applications and AWS accounts to which you have granted them access. Users can access the user portal with their corporate Active Directory credentials and access these applications without needing to enter their user name and password again.
Integration with AWS Organizations: AWS SSO is integrated with Organizations to enable you to manage access to all AWS accounts in your organization. When you enable AWS SSO in your organization’s master account, AWS SSO lists all the accounts managed in your organization for which you can enable SSO access to AWS consoles.
Integration with on-premises Active Directory: AWS SSO integrates with your on-premises Active Directory by using AWS Directory Service. Users can access AWS accounts and business applications by using their Active Directory credentials. You can manage which users or groups in your corporate directory can access which AWS accounts.
Centralized permissions management: With AWS SSO, you can centrally manage the permissions granted to users when they access AWS accounts via the AWS Management Console. You define users’ permissions as permission sets, which are collections of permissions based on AWS managed policies or AWS managed policies for job functions. AWS managed policies are designed to provide permissions for many common use cases, and AWS managed policies for job functions are designed to closely align with common job functions in the IT industry.
With AWS SSO, you can configure all the necessary user permissions to your AWS resources in your AWS accounts by applying permission sets. For example, you can grant database administrators broad permissions to Amazon Relational Database Service in your development accounts, but limit their permissions in your production accounts. As you change these permission sets, AWS SSO helps you keep them updated in all relevant AWS accounts, allowing you to manage permissions centrally.
Application configuration wizard: You can configure SSO access to any SAML-enabled business application by using the AWS SSO application configuration wizard.
Built-in SSO integrations: AWS SSO provides built-in SSO integrations and step-by-step configuration instructions for many commonly used business applications such as Office 365, Salesforce, and Box.
Centralized auditing: AWS SSO logs all sign-in and administrative activities in CloudTrail. You can send these logs to SIEM solutions such as Splunk to analyze them.
Highly available multi-tenant SSO infrastructure: AWS SSO is built on a highly available, AWS managed SSO infrastructure. The AWS SSO multi-tenant architecture enables you to start using the service quickly without needing to procure hardware or install software.
3. How do I get started?
To get started, connect your corporate Active Directory to AWS SSO by using AWS Directory Service. You have two choices to connect your corporate directory: use AD Connector, or configure an Active Directory trust with your on-premises Active Directory. After connecting your corporate directory, you can set up accounts and applications for SSO access. You also can use AWS Managed Microsoft AD in the cloud to manage your users and groups in the cloud, if you don’t have an on-premises Active Directory or don’t want to connect to on-premises Active Directory.
The preceding diagram shows how AWS SSO helps connect your users to the AWS accounts and business applications to which they need access. The numbers in the diagram correspond to the following use cases.
Use case 1: Manage SSO access to AWS accounts
With AWS SSO, you can grant your users access to AWS accounts in your organization. You can do this by adding your users to groups in your corporate Active Directory. In AWS SSO, specify which Active Directory groups can access which AWS accounts, and then pick a permission set to specify the level of SSO access you are granting these Active Directory groups. AWS SSO then sets up AWS account access for the users in the groups. Going forward, you can add new users to your Active Directory groups, and AWS SSO automatically provides the users access to the configured accounts. You also can grant Active Directory users direct access to AWS accounts (without needing to add users to Active Directory groups).
To configure AWS account access for your users:
Navigate to the AWS SSO console, and choose AWS accounts from the navigation pane. Choose which accounts you want users to access from the list of accounts. For this example, I am choosing three accounts from my MarketingBU organizational unit. I then choose Assign users.
Choose Users, start typing to search for users, and then choose Search connected directory. This search will return a list of users from your connected directory. You can also search for groups.
To select permission sets, you first have to create one. Choose Create new permission set.
You can use an existing job function policy to create a permission set. This type of policy allows you to apply to a permission set predefined AWS managed policies that are based on common job functions in the IT industry. Alternatively, you can create a custom permission set based on custom policies.
For this example, I choose the SecurityAudit job function policy and then choose Create. As a result, this permission set will be available for me to pick on the next screen.
Choose a permission set to indicate what level of access you want to grant your users. For this example, I assign the SecurityAudit permission set I created in the previous step to the users I chose. I then choose Finish.
Your users can sign in to the user portal and access the accounts to which you gave them access. AWS SSO automatically sets up the necessary trust between accounts to enable SSO. AWS SSO also sets up the necessary permissions in each account. This helps you scale your administrative tasks across multiple AWS accounts.
The users can choose an account and a permission set to sign in to that account without needing to provide a password again. For example, if you grant a user two permission sets—one that is more restrictive and one that is less restrictive—the user can choose which permission set to use for a specific session. In the following screenshot, John has signed in to the AWS SSO user portal. He can see all the accounts to which he has access. For example, he can sign in to the Production Account with SecurityAudit permissions.
Use case 2: Manage SSO access to business applications
AWS SSO has built-in support for SSO access to commonly used business applications such as Salesforce, Office 365, and Box. You can find these applications in the AWS SSO console and easily configure SSO access by using the application configuration wizard. After you configure an application for SSO access, you can grant users access by searching for users and groups in your corporate directory. For a complete list of supported applications, navigate to the AWS SSO console.
To configure SSO access to business applications:
Navigate to the AWS SSO console and choose Applications from the navigation pane.
Choose Add a new application and choose one or more of the applications in the list. For this example, I have chosen Dropbox.
Depending on which application you choose, you will be asked to complete step-by-step instructions to configure the application for SSO access. The instructions guide you to use the details provided in the AWS SSO metadata section to configure your application, and then to provide your application details in the Application metadata section. Choose Save changes when you are done.
Optionally, you can provide additional SAML attribute mappings by choosing the Attribute mappings tab. You need to do this only if you want to pass user attributes from your corporate directory to the application.
To give your users access to this application, choose the Assigned users tab. Choose Assign users to search your connected directory, and choose a user or group that can access this application.
Use case 3: Manage SSO access to custom SAML-enabled applications
You also can enable SSO access to your custom-built or partner-built SAML applications by using the AWS SSO application configuration wizard.
To configure SSO access to SAML-enabled applications:
Navigate to the AWS SSO console and choose Applications from the navigation pane.
Choose Add a new application, choose Custom SAML 2.0 application, and choose Add.
On the Custom SAML 2.0 application page, copy or download the AWS SSO metadata from the AWS SSO metadata section to configure your custom SAML-enabled application to recognize AWS SSO as an identity provider.
On the same page, complete the application configuration details in the Application metadata section, and choose Save changes.
You can provide additional SAML attribute mappings to be passed to your application in the SAML assertion by choosing the Attribute mappings tab. See the documentation for a list of all available attributes.
To give your users access to this application, choose the Assigned users tab. Choose Assign users to search your connected directory, and choose a user or group that can access this application.
Summary
In this blog post, I introduced AWS SSO and explained its key features, benefits, and use cases. With AWS SSO, you can centrally manage and audit SSO access to all your AWS accounts, cloud applications, and custom applications. To start using AWS SSO, navigate to the AWS SSO console.
If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum.
AWS Systems Manager is a new way to manage your cloud and hybrid IT environments. AWS Systems Manager provides a unified user interface that simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easy to operate and manage your infrastructure securely at scale. This service is absolutely packed full of features. It defines a new experience around grouping, visualizing, and reacting to problems using features from products like Amazon EC2 Systems Manager (SSM) to enable rich operations across your resources.
As I said above, there are a lot of powerful features in this service and we won’t be able to dive deep on all of them, but it’s easy to go to the console and get started with any of the tools.
Resource Groupings
Resource Groups allow you to create logical groupings of most resources that support tagging, such as: Amazon Elastic Compute Cloud (EC2) instances, Amazon Simple Storage Service (S3) buckets, Elastic Load Balancing load balancers, Amazon Relational Database Service (RDS) instances, Amazon Virtual Private Cloud (VPC) resources, Amazon Kinesis streams, Amazon Route 53 zones, and more. Previously, you could use the AWS Console to define resource groupings, but AWS Systems Manager provides this new resource group experience via a new console and API. These groupings are a fundamental building block of Systems Manager in that they are frequently the target of various operations you may want to perform, such as compliance management, software inventories, patching, and other automations.
You start by defining a group based on tag filters. From there you can view all of the resources in a centralized console. You would typically use these groupings to differentiate between applications, application layers, and environments like production or dev – but you can make your own rules about how to use them as well. If you imagine a typical 3 tier web-app you might have a few EC2 instances, an ELB, a few S3 buckets, and an RDS instance. You can define a grouping for that application and work with all of those different resources simultaneously.
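Resource groups can also be created through the new API. Here is a minimal boto3 sketch that groups every taggable resource carrying a hypothetical Application=my-web-app tag:

import json

import boto3

rg = boto3.client("resource-groups", region_name="us-west-2")

rg.create_group(
    Name="my-web-app",
    ResourceQuery={
        "Type": "TAG_FILTERS_1_0",
        # The query itself is a JSON string embedded in the request.
        "Query": json.dumps({
            "ResourceTypeFilters": ["AWS::AllSupported"],
            "TagFilters": [{"Key": "Application", "Values": ["my-web-app"]}],
        }),
    },
)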
Insights
AWS Systems Manager automatically aggregates and displays operational data for each resource group through a dashboard. You no longer need to navigate through multiple AWS consoles to view all of your operational data. You can easily integrate your existing Amazon CloudWatch dashboards, AWS Config rules, AWS CloudTrail trails, AWS Trusted Advisor notifications, and AWS Personal Health Dashboard performance and availability alerts. You can also easily view your software inventories across your fleet. AWS Systems Manager also provides a compliance dashboard allowing you to see the state of various security controls and patching operations across your fleets.
Acting on Insights
Building on the success of EC2 Systems Manager (SSM), AWS Systems Manager takes all of the features of SSM and provides a central place to access them. These are all the same experiences you would have through SSM, with a more accessible console and centralized interface. You can use the resource groups you’ve defined in Systems Manager to visualize and act on groups of resources.
Automation
Automations allow you to define common IT tasks as a JSON document that specifies a list of steps. You can also use community-published documents. These documents can be executed through the Console, CLIs, SDKs, scheduled maintenance windows, or triggered based on changes in your infrastructure through CloudWatch events. You can track and log the execution of each step in the documents and prompt for additional approvals. It also allows you to incrementally roll out changes and automatically halt when errors occur. You can start executing an automation directly on a resource group and it will be able to apply itself to the resources that it understands within the group.
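As a small example, here is how you might start one of the AWS-managed automation documents with boto3 (the instance ID is a placeholder):

import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

execution = ssm.start_automation_execution(
    DocumentName="AWS-RestartEC2Instance",  # an AWS-published automation document
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},
)
print(execution["AutomationExecutionId"])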
Run Command
Run Command is a superior alternative to enabling SSH on your instances. It provides safe, secure remote management of your instances at scale without logging into your servers, replacing the need for SSH bastions or remote PowerShell. It has granular IAM permissions that allow you to restrict which roles or users can run certain commands.
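For example, a minimal boto3 sketch that runs a shell command across instances carrying a hypothetical Stage=Prod tag might look like this:

import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

response = ssm.send_command(
    Targets=[{"Key": "tag:Stage", "Values": ["Prod"]}],
    DocumentName="AWS-RunShellScript",  # AWS-published command document
    Parameters={"commands": ["yum -y update"]},
)
print(response["Command"]["CommandId"])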
Patch Manager, Maintenance Windows, and State Manager
I’ve written about Patch Manager before and if you manage fleets of Windows and Linux instances it’s a great way to maintain a common baseline of security across your fleet.
Maintenance windows allow you to schedule instance maintenance and other disruptive tasks for a specific time window.
State Manager allows you to control various server configuration details like anti-virus definitions, firewall settings, and more. You can define policies in the console or run existing scripts, PowerShell modules, or even Ansible playbooks directly from S3 or GitHub. You can query State Manager at any time to view the status of your instance configurations.
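State Manager associations can be created programmatically as well. This sketch keeps the SSM agent current on a hypothetical set of tagged instances, re-checking once a day:

import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

ssm.create_association(
    Name="AWS-UpdateSSMAgent",  # AWS-published document
    Targets=[{"Key": "tag:Stage", "Values": ["Prod"]}],
    ScheduleExpression="rate(1 day)",
)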
Things To Know
There’s some interesting terminology here. We haven’t done the best job of naming things in the past so let’s take a moment to clarify. EC2 Systems Manager (sometimes called SSM) is what you used before today. You can still invoke aws ssm commands. However, AWS Systems Manager builds on and enhances many of the tools provided by EC2 Systems Manager and allows those same tools to be applied to more than just EC2. When you see the phrase “Systems Manager” in the future you should think of AWS Systems Manager and not EC2 Systems Manager.
AWS Systems Manager with all of this useful functionality is provided at no additional charge. It is immediately available in all public AWS regions.
The best part about these services is that even with their tight integrations each one is designed to be used in isolation as well. If you only need one component of these services it’s simple to get started with only that component.
There’s a lot more than I could ever document in this post so I encourage you all to jump into the console and documentation to figure out where you can start using AWS Systems Manager.
CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. As part of the build process, developers often require access to resources that should be isolated from the public Internet. Now CodeBuild builds can be optionally configured to have VPC connectivity and access these resources directly.
Accessing Resources in a VPC
You can configure builds to have access to a VPC when you create a CodeBuild project or you can update an existing CodeBuild project with VPC configuration attributes. Here’s how it looks in the console:
To configure VPC connectivity: select a VPC, one or more subnets within that VPC, and one or more VPC security groups that CodeBuild should apply when attaching to your VPC. Once configured, commands running as part of your build will be able to access resources in your VPC without transiting across the public Internet.
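The same configuration can be applied programmatically. Here is a boto3 sketch that adds VPC connectivity to an existing project; the project name and all of the IDs are placeholders:

import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

codebuild.update_project(
    name="my-build-project",
    vpcConfig={
        "vpcId": "vpc-0123456789abcdef0",
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)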
Use Cases
The availability of VPC connectivity from CodeBuild builds unlocks many potential uses. For example, you can:
Run integration tests from your build against data in an Amazon RDS instance that’s isolated on a private subnet.
Query data in an ElastiCache cluster directly from tests.
Interact with internal web services hosted on Amazon EC2, Amazon ECS, or services that use internal Elastic Load Balancing.
Retrieve dependencies from self-hosted, internal artifact repositories such as PyPI for Python, Maven for Java, npm for Node.js, and so on.
Access objects in an Amazon S3 bucket configured to allow access only through a VPC endpoint.
Query external web services that require fixed IP addresses through the Elastic IP address of the NAT gateway associated with your subnet(s).
… and more! Your builds can now access any resource that’s hosted in your VPC without any compromise on network isolation.
Internet Connectivity
CodeBuild requires access to resources on the public Internet to successfully execute builds. At a minimum, it must be able to reach your source repository system (such as AWS CodeCommit, GitHub, Bitbucket), Amazon Simple Storage Service (Amazon S3) to deliver build artifacts, and Amazon CloudWatch Logs to stream logs from the build process. The interface attached to your VPC will not be assigned a public IP address so to enable Internet access from your builds, you will need to set up a managed NAT Gateway or NAT instance for the subnets you configure. You must also ensure your security groups allow outbound access to these services.
IP Address Space
Each running build will be assigned an IP address from one of the subnets in your VPC that you designate for CodeBuild to use. As CodeBuild scales to meet your build volume, ensure that you select subnets with enough address space to accommodate your expected number of concurrent builds.
Service Role Permissions
CodeBuild requires new permissions in order to manage network interfaces on your VPCs. If you create a service role for your new projects, these permissions will be included in that role’s policy automatically. For existing service roles, you can edit the policy document to include the additional actions. For the full policy document to apply to your service role, see Advanced Setup in the CodeBuild documentation.
For more information, see VPC Support in the CodeBuild documentation. We hope you find the ability to access internal resources on a VPC useful in your build processes! If you have any questions or feedback, feel free to reach out to us through the AWS CodeBuild forum or leave a comment!
We don’t often recognize or celebrate anniversaries at AWS. With nearly 100 services on our list, we’d be eating cake and drinking champagne several times a week. While that might sound like fun, we’d rather spend our working hours listening to customers and innovating. With that said, Amazon QuickSight has now been generally available for a little over a year and I would like to give you a quick update!
QuickSight in Action
Today, tens of thousands of customers (from startups to enterprises, in industries as varied as transportation, legal, mining, and healthcare) are using QuickSight to analyze and report on their business data.
Here are a couple of examples:
Gemini provides legal evidence procurement for California attorneys who represent injured workers. They have gone from creating custom reports and running one-off queries to creating and sharing dynamic QuickSight dashboards with drill-downs and filtering. QuickSight is used to track sales pipeline, measure order throughput, and to locate bottlenecks in the order processing pipeline.
Jivochat provides a real-time messaging platform to connect visitors to website owners. QuickSight lets them create and share interactive dashboards while also providing access to the underlying datasets. This has allowed them to move beyond the sharing of static spreadsheets, ensuring that everyone is looking at the same information and is empowered to make timely decisions based on current data.
Transfix is a tech-powered freight marketplace that matches loads and increases visibility into logistics for Fortune 500 shippers in retail, food and beverage, manufacturing, and other industries. QuickSight has made analytics accessible to both BI engineers and non-technical business users. They scrutinize key business and operational metrics including shipping routes, carrier efficiency, and process automation.
Looking Back / Looking Ahead
The feedback on QuickSight has been incredibly helpful. Customers tell us that their employees are using QuickSight to connect to their data, perform analytics, and make high-velocity, data-driven decisions, all without setting up or running their own BI infrastructure. We love all of the feedback that we get, and use it to drive our roadmap, leading to the introduction of over 40 new features in just a year. Here’s a summary:
Looking forward, we are watching an interesting trend develop within our customer base. As these customers take a close look at how they analyze and report on data, they are realizing that a serverless approach offers some tangible benefits. They use Amazon Simple Storage Service (S3) as a data lake and query it using a combination of QuickSight and Amazon Athena, giving them agility and flexibility without static infrastructure. They also make great use of QuickSight’s dashboards feature, monitoring business results and operational metrics, then sharing their insights with hundreds of users. You can read Building a Serverless Analytics Solution for Cleaner Cities and review Serverless Big Data Analytics using Amazon Athena and Amazon QuickSight if you are interested in this approach.
New Features and Enhancements
We’re still doing our best to listen and to learn, and to make sure that QuickSight continues to meet your needs. I’m happy to announce that we are making seven big additions today:
Geospatial Visualization – You can now create geospatial visuals on geographical data sets.
Private VPC Access – You can now sign up to access a preview of a new feature that allows you to securely connect to data within VPCs or on-premises, without the need for public endpoints.
Flat Table Support – In addition to pivot tables, you can now use flat tables for tabular reporting. To learn more, read about Using Tabular Reports.
Calculated SPICE Fields – You can now perform run-time calculations on SPICE data as part of your analysis. Read Adding a Calculated Field to an Analysis for more information.
Wide Table Support – You can now use tables with up to 1000 columns.
HIPAA Compliance – You can now run HIPAA-compliant workloads on QuickSight.
Geospatial Visualization
Everyone seems to want this feature! You can now take data that contains a geographic identifier (country, city, state, or zip code) and create beautiful visualizations with just a few clicks. QuickSight will geocode the identifier that you supply, and can also accept lat/long map coordinates. You can use this feature to visualize sales by state, map stores to shipping destinations, and so forth. Here’s a sample visualization:
Private VPC Access Preview
If you have data in AWS (perhaps in Amazon Redshift, Amazon Relational Database Service (RDS), or on EC2) or on-premises in Teradata or SQL Server on servers without public connectivity, this feature is for you. Private VPC Access for QuickSight uses an Elastic Network Interface (ENI) for secure, private communication with data sources in a VPC. It also allows you to use AWS Direct Connect to create a secure, private link with your on-premises resources. Here’s what it looks like:
If you are ready to join the preview, you can sign up today.
When you use Amazon Relational Database Service (Amazon RDS), depending on the logging levels on the RDS instances and the volume of transactions, you could generate a lot of log data. To ensure that everything is running smoothly, many customers search for log error patterns using different log aggregation and visualization systems, such as Amazon Elasticsearch Service, Splunk, or another tool of their choice. A module needs to periodically retrieve the RDS logs using the SDK and send them to Amazon S3. From there, you can stream them to your log aggregation tool.
One option is writing an AWS Lambda function to retrieve the log files. However, because of the time that this function needs to execute, depending on the volume of log files retrieved and transferred, it is possible that Lambda could time out in many cases. Another approach is launching an Amazon EC2 instance that runs this job periodically. However, this would require you to run an EC2 instance continuously, which is not an optimal use of time or money.
Using the new Amazon CloudWatch integration with Amazon EC2 Container Service, you can trigger this job to run in a container on an existing Amazon ECS cluster. Additionally, this would allow you to improve costs by running containers on a fleet of Spot Instances.
In this post, I will show you how to use the new scheduled tasks (cron) feature in Amazon ECS and launch tasks using CloudWatch events, while leveraging Spot Fleet to maximize availability and cost optimization for containerized workloads.
Architecture
The following diagram shows how the various components described schedule a task that retrieves log files from Amazon RDS database instances, and deposits the logs into an S3 bucket.
Amazon ECS cluster container instances are using Spot Fleet, which is a perfect match for a workload that can run whenever capacity is available. This reduces cluster costs.
The task definition defines which Docker image to retrieve from the Amazon EC2 Container Registry (Amazon ECR) repository and run on the Amazon ECS cluster.
The container image has Python code functions to make AWS API calls using boto3. It iterates over the RDS database instances, retrieves the logs, and deposits them in the S3 bucket. Many customers then deliver these logs to their centralized log store. CloudWatch Events defines the schedule on which the container task is launched.
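As a rough sketch of what the container’s core loop might look like, the following boto3 snippet iterates over the RDS instances, pages through each log file, and writes it to S3 (the bucket name is a placeholder for the value the CloudFormation template passes in as an environment variable):

import boto3

rds = boto3.client("rds")
s3 = boto3.client("s3")
BUCKET = "my-rds-log-bucket"  # placeholder

for db in rds.describe_db_instances()["DBInstances"]:
    instance_id = db["DBInstanceIdentifier"]
    logs = rds.describe_db_log_files(DBInstanceIdentifier=instance_id)
    for log in logs["DescribeDBLogFiles"]:
        name = log["LogFileName"]
        marker, chunks, pending = "0", [], True
        while pending:  # download_db_log_file_portion returns the file in pieces
            part = rds.download_db_log_file_portion(
                DBInstanceIdentifier=instance_id, LogFileName=name, Marker=marker
            )
            chunks.append(part.get("LogFileData") or "")
            marker = part["Marker"]
            pending = part["AdditionalDataPending"]
        s3.put_object(Bucket=BUCKET, Key=f"{instance_id}/{name}",
                      Body="".join(chunks))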
Walkthrough
To provide the basic framework, we have built an AWS CloudFormation template that creates the following resources:
Amazon ECR repository for storing the Docker image to be used in the task definition
S3 bucket that holds the transferred logs
Task definition, with image name and S3 bucket as environment variables provided via input parameter
CloudWatch Events rule
Amazon ECS cluster
Amazon ECS container instances using Spot Fleet
IAM roles required for the container instance profiles
Before you begin
Ensure that Git, Docker, and the AWS CLI are installed on your computer.
Run the CloudFormation stack and get the names for the Amazon ECR repo and S3 bucket. In the stack, choose Outputs.
Open the ECS console and choose Repositories. The rdslogs repo has been created. Choose View Push Commands and follow the instructions to connect to the repository and push the image for the code that you built in Step 2. The screenshot shows the final result:
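The push commands shown in the console typically look something like the following; the account ID and region here are placeholders for your own values:

$(aws ecr get-login --no-include-email --region us-west-2)   # log Docker in to Amazon ECR
docker build -t rdslogs .
docker tag rdslogs:latest 123456789012.dkr.ecr.us-west-2.amazonaws.com/rdslogs:latest
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/rdslogs:latest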
Associate the CloudWatch scheduled task with the created Amazon ECS Task Definition, using a new CloudWatch event rule that is scheduled to run at intervals. The following rule is scheduled to run every 15 minutes: aws --profile default --region us-west-2 events put-rule --name demo-ecs-task-rule --schedule-expression "rate(15 minutes)"
CloudWatch requires IAM permissions to place a task on the Amazon ECS cluster when the CloudWatch event rule is executed, in addition to an IAM role that can be assumed by CloudWatch Events. This is done in three steps:
Create the IAM role to be assumed by CloudWatch. aws --profile default --region us-west-2 iam create-role --role-name Test-Role --assume-role-policy-document file://event-role.json
Create the IAM policy defining the ECS cluster and task definition. You need to get these values from the CloudFormation outputs and resources. aws --profile default --region us-west-2 iam create-policy --policy-name test-policy --policy-document file://event-policy.json
Attach the IAM policy to the role. aws --profile default --region us-west-2 iam attach-role-policy --role-name Test-Role --policy-arn arn:aws:iam::1234567890:policy/test-policy
Associate the CloudWatch rule created earlier to place the task on the ECS cluster. The following command shows an example. Replace the AWS account ID and region with your settings. aws events put-targets --rule demo-ecs-task-rule --targets "Id"="1","Arn"="arn:aws:ecs:us-west-2:12345678901:cluster/test-cwe-blog-ecsCluster-15HJFWCH4SP67","EcsParameters"={"TaskDefinitionArn"="arn:aws:ecs:us-west-2:12345678901:task-definition/test-cwe-blog-taskdef:8"},"RoleArn"="arn:aws:iam::12345678901:role/Test-Role"
{
"FailedEntries": [],
"FailedEntryCount": 0
}
That’s it. The task now runs on the defined schedule and delivers the logs to S3.
To test this, open the Amazon ECS console, select the Amazon ECS cluster that you created, and then choose Tasks, Run New Task. Select the task definition created by the CloudFormation template, and the cluster should be selected automatically. As this runs, the S3 bucket should be populated with the RDS logs for the instance.
Conclusion
In this post, you’ve seen that the options for workloads that need to run at a scheduled time include Lambda with CloudWatch Events or EC2 with cron. However, sometimes a job runs longer than the Lambda execution time limit, or is not cost-effective enough to justify a dedicated EC2 instance.
In such cases, you can schedule the tasks on an ECS cluster using CloudWatch rules. In addition, you can use a Spot Fleet cluster with Amazon ECS for cost-conscious workloads that do not have hard requirements on execution time or instance availability in the Spot Fleet. For more information, see Powering your Amazon ECS Cluster with Amazon EC2 Spot Instances and Scheduled Events.
If you have questions or suggestions, please comment below.
Today, we updated the AWS Identity and Access Management (IAM) console to make it easier for you to create, manage, and understand IAM roles. We made improvements that include an updated role-creation workflow that better guides you through the process of creating trust relationships (which define who can assume a role) and attaching permissions to roles. Additionally, you can now view and understand the permissions attached to roles more easily by using policy summaries for each role in your account.
In this post, I provide background information about roles, show how to create roles correctly by using the updated IAM console, and review other changes to the role list and details pages that help you better manage your roles.
Background information about IAM roles
What are IAM roles?
IAM roles are a secure way to grant permissions to entities you trust, such as an AWS service (like EC2 or Lambda) that needs to act on your behalf, users in another AWS account, or federated users from an external identity provider.
As illustrated in the following diagram, roles have two types of policies attached to them:
Trust policy – This policy defines the entities that can assume a role. When you create a role by using the IAM console, the trust policy is created for you. You can customize the trust policy after it has been created. A role can have only one trust policy.
Permissions policies – These policies define which AWS resources a role can access and the actions it can perform on those resources. These policies can be AWS managed policies, customer managed policies, or inline policies attached directly to the role. You can attach multiple permissions policies to an IAM role.
How do I use an IAM role?
Roles can be used to access AWS both programmatically and via the AWS Management Console.
If you are using AWS services such as EC2 or AWS Lambda, you can assign a role to your EC2 instance or Lambda function. Your code running on either the instance or function will then be able to access AWS using the short-term credentials from that role. These services automatically assume the role and use the returned credentials to enable you to access resources or perform actions. Learn more about IAM roles for EC2 and IAM roles for Lambda.
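For example, on an EC2 instance with a role attached, the AWS CLI discovers and uses the role’s short-term credentials automatically, so no access keys need to be stored on the instance. A quick way to verify this from the instance (the role name below is just an example):

# Shows the assumed-role identity your calls are made with
aws sts get-caller-identity
# The short-term credentials themselves are served by the instance metadata service
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/MyExampleRole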
Using a role to access the AWS Management Console
You can also assume a role to access an account in the AWS Management Console by using the Switch Role functionality. You can access this functionality by using the drop-down menu in the upper right corner of the console or through a custom sign-in URL. To switch roles, you must sign in as either an IAM or federated user; Switch Role is not supported for root users.
Introducing the updated role-creation workflow
To make it easier to create IAM roles, we’ve updated the role-creation workflow in the IAM console. To introduce the new experience, let’s look at a specific example.
Let’s suppose I am developing a new application running on an EC2 instance that reads and writes image data to S3. To enable that application to access the S3 bucket where the image data is stored, I need to create a new role that the EC2 instance running my application code can assume. This role trusts EC2 (defined in the trust policy) and has permissions to read and write to S3 (defined in the permissions policy).
I first navigate to the Roles page of the IAM console, and choose Create role.
I then select the type of trusted entity this role will allow. I have four options in this step:
AWS service – To grant access to an AWS service to manage resources in my account.
Another AWS account – To grant access to my account from another account.
Web identity – To grant access to identities from a web identity provider (such as Amazon Cognito or another OpenID Connect-compatible provider).
SAML – To grant access to identities from a SAML-enabled identity provider.
For this example, I choose AWS service. I then see a list of AWS services the new role could trust. Because I need a role that can be assumed by an EC2 instance, I choose EC2. This opens a section below it with specific use cases for this service. I choose the EC2 (Allows EC2 instances to call AWS services on your behalf) use case. This defines the role’s trust policy to include EC2 as an entity that can assume that role.
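Behind the scenes, the console generates the standard EC2 service trust policy for the role. If you were scripting this instead, you might write that policy to a file for later use (the file name is just an example, reused in the CLI sketch below):

cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF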
Then, I set the permissions that the role needs to access S3. I already created a permissions policy that grants access to my S3 bucket and named it MyImageApp-S3Access. I choose that policy and choose Next to move to the final step.
In the final step, I name the role MyImageAppRole and describe it this way: “Allows application code running on EC2 instances to access data in S3.” I choose Create role and I am done! I can now attach this role to EC2 instances on which my application is running to give them permissions to access data in S3.
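If you prefer to script the same flow, a roughly equivalent AWS CLI sequence looks like this (the account ID is a placeholder, and unlike the console, the CLI requires you to create the instance profile yourself):

aws iam create-role --role-name MyImageAppRole \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name MyImageAppRole \
    --policy-arn arn:aws:iam::111122223333:policy/MyImageApp-S3Access
# EC2 attaches roles through instance profiles; the console creates one automatically
aws iam create-instance-profile --instance-profile-name MyImageAppRole
aws iam add-role-to-instance-profile \
    --instance-profile-name MyImageAppRole --role-name MyImageAppRole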
Other updates to help you manage your roles
With this update, you will see a new column called Trusted entities on the Roles list page. This column allows you to review at a glance which entities can assume a role. This makes it easier to identify roles that trust a specific account or AWS service. It also makes it easier to audit the trust relationships across all your roles. You also can add other columns to this table, including Creation time and Role ARN, by clicking the gear icon. Your column selection preference will be remembered when you return to the Roles page.
To help you understand the permissions attached to your role, we’ve also updated the Permissions tab to include IAM policy summaries. Policy summaries make it easier for you to understand the permissions for IAM permissions policies attached to roles without having to view a policy’s JSON.
Conclusion
Now it is easier for you to create and manage your IAM roles using the IAM console. As you manage your roles, be sure to review and follow IAM best practices, which can help to improve the security of your AWS resources and make your account easier to manage.
If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, start a new thread on the IAM forum or contact AWS Support.
Our customers use Cost Explorer to better understand and manage their AWS spending, making heavy use of the reporting, analytics, and visualization tools that it provides. We launched Cost Explorer in 2014 with a focus on simplicity – single click signup, preconfigured default views, and a clean user interface (take a look back at The New AWS Cost Explorer to see where we started). The Cost Explorer has been very popular and we’ve received a lot of great feedback from our customers.
Last week we launched a major upgrade to Cost Explorer. We’ve redesigned the user interface to optimize many common workflows including filtering, report management, selection of date ranges, and grouping of data. We have also included some default reports to make it easier for you to explore the costs related to your use of Reserved Instances.
Looking at Cost Explorer Since pictures are reportedly worth 1000 words, let’s take a closer look! Cost Explorer is part of the Billing Dashboard so I can start there:
Here’s the Billing Dashboard. I click on Cost Explorer to move ahead:
I can open up Cost Explorer or access one of three preconfigured views. I’ll go for the first option:
The default report shows my EC2 costs and usage (running hours) for the past 3 months:
I can use the Group By menu to break the costs down by EC2 instance type:
I have many other grouping options:
The filtering options are now easier to access and to edit. Here’s the full set:
I can explore my EC2 costs in any set of desired regions:
I can filter and then group by instance type to see how my spending breaks down:
I can click on Download CSV and then process the data locally:
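If, for example, I wanted a quick total of the downloaded data, a one-liner like the following would do it, assuming the export has a header row and the cost amount in its last column (your export’s layout may differ):

awk -F',' 'NR > 1 { total += $NF } END { printf "Total cost: %.2f\n", total }' costs.csv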
I can also exclude certain instance types from the report. Here’s how I exclude my m4.xlarge, t2.micro, and t2.nano usage:
Report Management Cost Explorer allows me to customize my existing reports and to create new reports from scratch. I can click on Save As to save my customized report with a new name:
I can see and manage all of my reports on the Saved Reports page (The padlock denotes a default report that cannot be edited and then overwritten):
When I click on New report I can start from a template:
All of my reports are accessible from the Reports menu so I can check on my costs with a click:
We also simplified the process of selecting a range of dates for a report, including options to select common date ranges:
Reserved Instance Reports Cost Explorer also includes a pair of reports that will help you to understand and optimize your usage of Reserved Instances. I don’t own any RIs, so I used screenshots supplied by the team.
The RI Utilization report allows you to see how much of your purchased RI capacity is being put to use (the dashed red line represents a utilization target that you can specify):
The RI Coverage report tells you how much of your EC2 usage is being handled by Reserved Instances (this time, the dashed red line represents the desired amount of coverage):
I hope you have enjoyed this tour of the updated Cost Explorer. It is available now and you can start using it today!
Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).
Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).
Major CloudHSM Update Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:
Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.
Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.
Open & Compatible – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.
More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.
AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.
Diving In You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).
All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.
At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.
Setting Up a Cluster Let’s set up a cluster using the CloudHSM Console:
I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):
Then I review my settings and click on Create:
After a few minutes, my cluster exists, but is uninitialized:
Initialization simply means retrieving a certificate signing request (the Cluster CSR):
And then creating a private key and using it to sign the request. The commands below are adapted from the Initialize Cluster docs; I have omitted the output, and <cluster ID> identifies the cluster:
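# Retrieve the cluster CSR (replace <cluster ID> with your own)
aws cloudhsmv2 describe-clusters --filters clusterIds=<cluster ID> \
    --output text --query 'Clusters[].Certificates.ClusterCsr' > ClusterCsr.csr
# Create a private key and a self-signed issuing certificate (file names are examples)
openssl genrsa -aes256 -out customerCA.key 2048
openssl req -new -x509 -days 3652 -key customerCA.key -out customerCA.crt
# Sign the cluster CSR with that key
openssl x509 -req -days 3652 -in ClusterCsr.csr -CA customerCA.crt \
    -CAkey customerCA.key -CAcreateserial -out CustomerHsmCertificate.crt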
The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).
Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.
Available Today The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.
The idea for CloudRanger started where most great ideas do – at a bar in Las Vegas. During a late-night conversation with his friends at re:Invent 2014, Dave Gildea (Founder and CEO) used cocktail napkins and drink coasters to visually illustrate servers and backups, and the light on his phone to represent scheduling. By the end of the night, the idea for automated visual server management was born. With CloudRanger, companies can easily create backup and retention policies, visual scheduling, and simple restoration of snapshots and AMIs. The team behind CloudRanger believes that when servers and cloud resources are represented visually, they are easier to manage and understand. Users are able to see their servers, which turns them into a tangible and important piece of business inventory.
CloudRanger is an excellent platform for MSPs who manage many different AWS accounts, and need a quick method to display many servers and audit certain attributes. The company’s goal is to give anyone the ability to create backup policies in multiple regions, apply them using a tag-based methodology, and manage backups. Servers can be scheduled from one simple dashboard, and restoration is easy and step-by-step. With CloudRanger’s visual representation of resources, customers are encouraged to fully understand their backup policies, schedules, and servers.
As an AWS Partner, CloudRanger has built a globally redundant system after going all-in with AWS. They use over 25 AWS services for everything from enterprise-level security to automation and 24/7 runtimes, with an emphasis on Machine Learning for efficiency in the sales process. CloudRanger continues to rely more on AWS as new services and features are released, and is replacing current services with AWS CodePipeline and AWS CodeBuild. CloudRanger was also named Startup Company of the Year at a recent Irish tech event!
In 2010, brothers Alexander Peiniger and Frederik Peiniger started a journey to help companies track their social media profiles and improve their strategies against competitors. The startup began under the name “Social.Media.Tracking” and then “AllFacebook Stats” before officially becoming quintly in 2013. With quintly, brands and agencies can analyze, benchmark, and optimize their social media activities on a global scale. The innovative dashboarding system gives clients an overview across all social media profiles on the most important networks (Facebook, Twitter, YouTube, Google+, LinkedIn, Instagram, etc.) and then derives an optimal social media strategy from those profiles. Today, quintly has users in over 180 countries and paying clients in over 65 countries including major agency networks and Fortune 500 companies.
Getting an overview of a brand’s social media activities can be time-consuming, and turning insights into actions is a challenge that not all brands master. Quintly offers a variety of features designed to help clients improve their social media reach. With their web-based SaaS product, brands and agencies can compare their social media performance against competitors and their best practices. Not only can clients learn from their own historic performance, but they can leverage data from any other brand around the world.
Since the company’s founding, quintly has built and operated its SaaS offering on top of AWS services, leveraging Amazon EC2, Amazon ECS, Elastic Load Balancing, and Amazon Route 53 to host their Docker-based environment. Large amounts of data are stored in Amazon DynamoDB and Amazon RDS, and they use Amazon CloudWatch to monitor their systems and scale seamlessly to current needs. In addition, quintly uses Amazon Machine Learning to add additional attributes to the data and to drive better decisions for their clients. With the help of AWS, quintly has been able to focus on its core business while having a scalable and well-performing solution for its technical needs.
Based in the heart of West Seattle, Tango Card is revolutionizing rewards programs for companies around the world. Too often customers redeem points in a loyalty or rebate program only to wait weeks for their prize to arrive. Companies generously give their employees appreciation gifts, but the gifts can be generic and impersonal. With Tango Card, companies can choose from a variety of rewards that fit the needs of their specific program, event, or business incentive. The extensive Rewards Catalog includes options for e-gift cards that are sure to excite any recipient. There are plenty of options for everyone from traditional e-gift cards to nonprofit donations to cash equivalent rewards.
Tango Card uses a combination of desired rewards, modern technology, and expert service to change the rewards and incentive experience. The Reward Delivery Platform offers solutions including Blast Rewards, Reward Link, and Rewards as a Service API (RaaS). Blast Rewards enables companies to purchase and send e-gift cards in bulk in just one business day. Reward Link lets recipients choose from an assortment of e-gift cards, prepaid cards, digital checks, and donations and is delivered instantly. Finally, Rewards as a Service is a robust digital gift card API that is built to support apps and platforms. With RaaS, Tango Card can send out e-gift cards on company-branded email templates or deliver them directly within a user interface.
Most organizations create multiple AWS accounts because they provide the highest level of resource and security isolation. In this blog post, I will discuss how to use cross account AWS Identity and Access Management (IAM) access to orchestrate continuous integration and continuous deployment.
Do I need multiple accounts?
If you answer “yes” to any of the following questions you should consider creating more AWS accounts:
Does your business require administrative isolation between workloads? Administrative isolation by account is the most straightforward way to grant independent administrative groups different levels of administrative control over AWS resources based on workload, development lifecycle, business unit (BU), or data sensitivity.
Does your business require limited visibility and discoverability of workloads? Accounts provide a natural boundary for visibility and discoverability. Workloads cannot be accessed or viewed unless an administrator of the account enables access to users managed in another account.
Does your business require isolation to minimize blast radius? Separate accounts help define boundaries and provide natural blast-radius isolation to limit the impact of a critical event such as a security breach, an unavailable AWS Region or Availability Zone, account suspensions, and so on.
Does your business require a particular workload to operate within AWS service limits without impacting the limits of another workload? You can use AWS account service limits to impose restrictions on a business unit, development team, or project. For example, if you create an AWS account for a project group, you can limit the number of Amazon Elastic Compute Cloud (Amazon EC2) or high performance computing (HPC) instances that can be launched by the account.
Does your business require strong isolation of recovery or auditing data? If regulatory requirements require you to control access and visibility to auditing data, you can isolate the data in an account separate from the one where you run your workloads (for example, by writing AWS CloudTrail logs to a different account).
Do your workloads depend on specific instance reservations to support high availability (HA) or disaster recovery (DR) capacity requirements? Reserved Instances (RIs) ensure reserved capacity for services such as Amazon EC2 and Amazon Relational Database Service (Amazon RDS) at the individual account level.
Use case
The identities in this use case are set up as follows:
DevAccount
Developers check their code into an AWS CodeCommit repository, which stores all the repositories as a single source of truth for application code. Developers have full control over this account, which is usually used as a sandbox for developers.
ToolsAccount
A central location for all the tools related to the organization, including continuous delivery/deployment services such as AWS CodePipeline and AWS CodeBuild. Developers have limited/read-only access in this account. The Operations team has more control.
TestAccount
Applications using the CI/CD orchestration for test purposes are deployed from this account. Developers and the Operations team have limited/read-only access in this account.
ProdAccount
Applications using the CI/CD orchestration tested in the ToolsAccount are deployed to production from this account. Developers and the Operations team have limited/read-only access in this account.
Solution
In this solution, we will check in sample code for an AWS Lambda function in the Dev account. This will trigger the pipeline (created in AWS CodePipeline) and run the build using AWS CodeBuild in the Tools account. The pipeline will then deploy the Lambda function to the Test and Prod accounts.
Follow the instructions in the repository README to push the sample AWS Lambda application to an AWS CodeCommit repository in the Dev account.
Install the AWS Command Line Interface as described here. To prepare your access keys or assume-role to make calls to AWS, configure the AWS CLI as described here.
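Because this walkthrough spans four accounts, it helps to configure a named AWS CLI profile for each one; the profile names here are arbitrary examples:

aws configure --profile dev     # credentials for the Dev account
aws configure --profile tools   # credentials for the Tools account
aws configure --profile test    # credentials for the Test account
aws configure --profile prod    # credentials for the Prod account

Each CloudFormation deployment below can then target the right account by passing the matching --profile flag.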
Walkthrough
Note: Follow the steps in the order they’re written. Otherwise, the resources might not be created correctly.
In the Tools account, deploy this CloudFormation template. It will create the customer master keys (CMK) in AWS Key Management Service (AWS KMS), grant access to Dev, Test, and Prod accounts to use these keys, and create an Amazon S3 bucket to hold artifacts from AWS CodePipeline.
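For example, a deployment into the Tools account might look like the following sketch; the template and stack names are illustrative, so substitute the ones from the repository:

aws cloudformation deploy --profile tools --region us-east-1 \
    --stack-name pre-reqs \
    --template-file pre-reqs.yaml \
    --capabilities CAPABILITY_NAMED_IAM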
In the output section of the CloudFormation console, make a note of the Amazon Resource Name (ARN) of the CMK and the S3 bucket name. You will need them in the next step.
In the Dev account, which hosts the AWS CodeCommit repository, deploy this CloudFormation template. This template will create the IAM roles, which will later be assumed by the pipeline running in the Tools account. Enter the AWS account number for the Tools account and the CMK ARN.
In the Test and Prod accounts where you will deploy the Lambda code, execute this CloudFormation template. This template creates IAM roles, which will later be assumed by the pipeline to create, deploy, and update the sample AWS Lambda function through CloudFormation.
In the Tools account, which hosts AWS CodePipeline, execute this CloudFormation template. This creates a pipeline, but does not add permissions for the cross accounts (Dev, Test, and Prod).
In the Tools account, execute this CloudFormation template, which gives access to the role created in step 4. This role will be assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket. This is the same template that was used in step 1, but with different parameters.
In the Tools account, execute this CloudFormation template, which will do the following:
Add the IAM role created in step 2. This role is used by AWS CodePipeline in the Tools account for checking out code from the AWS CodeCommit repository in the Dev account.
Add the IAM role created in step 3. This role is used by AWS CodePipeline in the Tools account for deploying the code package to the Test and Prod accounts.
The pipeline created in step 4 and updated in step 6 checks out code from the AWS CodeCommit repository using the IAM role created in step 2; the IAM role created in step 4 has permissions to assume it. That role is also assumed by AWS CodeBuild to decrypt artifacts in the S3 bucket, as described in step 5.
The IAM role created in step 2 has permission to check out code. See here.
The IAM role created in step 2 also has permission to upload the checked-out code to the S3 bucket created in step 1. It uses the KMS keys created in step 1 for server-side encryption.
Upon successfully checking out the code, AWS CodePipeline triggers AWS CodeBuild. The AWS CodeBuild project created in step 4 is configured to use the CMK created in step 1 for cryptography operations. See here. The AWS CodeBuild role is created later in step 4. In step 5, access is granted to the AWS CodeBuild role to allow the use of the CMK for cryptography.
AWS CodeBuild uses pip to install any libraries for the sample Lambda function. It also executes the aws cloudformation package command to create a Lambda function deployment package, uploads the package to the specified S3 bucket, and adds a reference to the uploaded package to the CloudFormation template. See here.
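A typical invocation of that command looks like this (the file and bucket names are examples):

aws cloudformation package \
    --template-file template.yaml \
    --s3-bucket example-artifact-bucket \
    --output-template-file transformed-template.yaml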
Using the role created in step 3, AWS CodePipeline executes the transformed CloudFormation template (received as an output from AWS CodeBuild) in the Test account. The AWS CodePipeline role created in step 4 has permissions to assume the IAM role created in step 3, as described in step 5.
The IAM role assumed by AWS CodePipeline passes the role to an IAM role that can be assumed by CloudFormation. AWS CloudFormation creates and updates the Lambda function using the code that was built and uploaded by AWS CodeBuild.
This is what the pipeline looks like using the sample code:
Conclusion
Creating multiple AWS accounts provides the highest degree of isolation and is appropriate for a number of use cases. However, keeping a centralized account to orchestrate continuous delivery and deployment using AWS CodePipeline and AWS CodeBuild eliminates the need to duplicate the delivery pipeline. You can secure the pipeline through the use of cross account IAM roles and the encryption of artifacts using AWS KMS. For more information, see Providing Access to an IAM User in Another AWS Account That You Own in the IAM User Guide.
My colleague Jed Sundwall runs the AWS Public Datasets program. He wrote the guest post below to tell you about an important new dataset that is available as an Amazon RDS Snapshot. In the post, Jed introduces the dataset and shows you how to create an Amazon RDS DB Instance from the snapshot.
I am very excited to announce that, starting today, the entire public USAspending.gov database is available for anyone to copy via Amazon Relational Database Service (RDS). USAspending.gov includes data on all spending by the federal government, including contracts, grants, loans, employee salaries, and more. The data is available via a PostgreSQL snapshot, which provides bulk access to the entire USAspending.gov database and is updated nightly. At this time, the database includes all USAspending.gov data for the second quarter of fiscal year 2017; data going back to the year 2000 will be added over the summer. You can learn more about the database and how to access it on its AWS Public Dataset landing page.
Through the AWS Public Datasets program, we work with AWS customers to experiment with ways that the cloud can make data more accessible to more people. Most of our AWS Public Datasets are made available through Amazon S3 because of its tremendous flexibility and ability to scale to serve any volume of any kind of data files. What’s exciting about the USAspending.gov database is that it provides a great example of how Amazon RDS can be used to share an entire relational database quickly and easily. Typically, sharing a relational database requires extract, transform, and load (ETL) processes that require redundant storage capacity, time for data transfer, and often scripts to migrate your database schema from one database engine to another. ETL processes can be so intimidating and cumbersome that they’re effectively impossible for many people to carry out.
By making their data available as a public Amazon RDS snapshot, the team at USAspending.gov has made it easy for anyone to get a copy of their entire production database for their own use within minutes. This will be useful for researchers and businesses who want to work with real data about all US Government spending and quickly combine it with their own data or other data resources.
Deploying the USAspending.gov Database Using the AWS Management Console Let’s go through the steps involved in deploying the database in your AWS account using the AWS Management Console.
Sign in to the AWS Management Console and select the US East (N. Virginia) region in the menu bar.
Open the Amazon RDS Console and choose Snapshots in the navigation pane.
In the filter for the search bar, select All Public Snapshots and search for 515495268755:
Select the snapshot named arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db.
Select Snapshot Actions -> Restore Snapshot. Select an instance size, and enter the other details, then click on Restore DB Instance.
You will see that a DB Instance is being created from the snapshot, within your AWS account.
After a few minutes, the status of the instance will change to Available.
You can see the endpoint for your database on the main page along with other useful info:
Deploying the USAspending.gov Database Using the AWS CLI You can also install the AWS Command Line Interface (CLI) and use it to create a DB Instance from the snapshot. Here’s a sample pair of commands; the instance identifier and instance class are example choices:
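# Restore a new DB Instance from the public snapshot
aws rds restore-db-instance-from-db-snapshot \
    --region us-east-1 \
    --db-instance-identifier usaspending-db \
    --db-snapshot-identifier arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db \
    --db-instance-class db.m4.large

# Retrieve the instance endpoint once the instance is available
aws rds describe-db-instances \
    --region us-east-1 \
    --db-instance-identifier usaspending-db \
    --query 'DBInstances[0].Endpoint.Address' --output text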
Once the instance is available, the second command displays the Endpoint.Address that you use to connect to the database.
Connecting to the DB Instance After following the AWS Management Console or AWS CLI instructions above, you will have access to the full USAspending.gov database within this Amazon RDS DB instance, and you can connect to it using any PostgreSQL client using the following credentials:
Username: root
Password: password
Database: data_store_api
If you use psql, you can access the database using a command like this:
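# Replace <Endpoint.Address> with your DB Instance's endpoint
psql --host <Endpoint.Address> --port 5432 --username root --password --dbname data_store_api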
You should change the database password after you log in:
ALTER USER "root" WITH ENCRYPTED PASSWORD '{new password}';
If you can’t connect to your instance but think you should be able to, you may need to check your VPC Security Groups and make sure inbound and outbound traffic on the port (usually 5432) is allowed from your IP address.
Exploring the Data The USAspending.gov data is very rich, so it will be hard to do it justice in this blog post, but hopefully these queries will give you an idea of what’s possible. To learn about the contents of the database, please review the USAspending.gov Data Dictionary.
The following query will return the total amount of money the government is obligated to pay for contracts awarded by NASA that include “Mars” or “Martian” in the description of the award:
select sum(total_obligation) from awards, subtier_agency
where (awards.description like '% MARTIAN %' OR awards.description like '% MARS %')
AND subtier_agency.name = 'National Aeronautics and Space Administration';
As I write this, the result I get for this query is $55,411,025.42. Note that the database is updated nightly and will include more historical data in the coming months, so you may get a different result if you run this query.
Now, here’s the same query, but looking for awards with “Jupiter” or “Jovian” in the description:
select sum(total_obligation) from awards, subtier_agency
where (awards.description like '%JUPITER%' OR awards.description like '%JOVIAN%')
AND subtier_agency.name = 'National Aeronautics and Space Administration';
The result I get is $14,766,392.96.
Questions & Comments I’m looking forward to seeing what people can do with this data. If you have any questions about the data, please create an issue on the USAspending.gov API’s issue tracker on GitHub.
As customers grow to the point where they are managing thousands of resources, each with up to 50 tags, they have been looking to us for additional tooling and options to simplify their work. Today I am happy to announce that our new Resource Tagging API is now available. You can use these APIs from the AWS SDKs or via the AWS Command Line Interface (CLI). You now have programmatic access to the same resource group operations that had been accessible only from the AWS Management Console.
Recap: Console-Based Resource Group Operations Before I get into the specifics of the new API functions, I thought you would appreciate a fresh look at the console-based grouping and tagging model. I already have the ability to find and then tag AWS resources using a search that spans one or more regions. For example, I can select a long list of regions and then search them for my EC2 instances like this:
After I locate and select all of the desired resources, I can add a new tag key by clicking Create a new tag key and entering the desired tag key:
Then I enter a value for each instance (the new ProjectCode column):
Then I can create a resource group that contains all of the resources that are tagged with P100:
After I have created the resource group, I can locate all of the resources by clicking on the Resource Groups menu:
New API for Resource Tagging The API that we are announcing today gives you the power to tag, untag, and locate resources using tags, all from your own code. With these new API functions, you can now operate on multiple resource types with a single set of functions.
Here are the new functions:
TagResources – Add tags to up to 20 resources at a time.
UntagResources – Remove tags from up to 20 resources at a time.
GetResources – Get a list of resources, with optional filtering by tags and/or resource types.
GetTagKeys – Get a list of all of the unique tag keys used in your account.
GetTagValues – Get all tag values for a specified tag key.
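The same operations are available from the AWS CLI under the resourcegroupstaggingapi namespace. Here are a few illustrative invocations; the bucket ARN and tag values are examples:

# Tag an S3 bucket
aws resourcegroupstaggingapi tag-resources \
    --resource-arn-list arn:aws:s3:::example-bucket \
    --tags ProjectCode=P100

# Find every resource carrying that tag
aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=ProjectCode,Values=P100

# Enumerate tag keys, and the values used with one of them
aws resourcegroupstaggingapi get-tag-keys
aws resourcegroupstaggingapi get-tag-values --key ProjectCode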
These functions support the following AWS services and resource types:
Amazon Machine Learning
Batch Prediction, Data Source, Evaluation, ML Model.
Amazon Redshift
Cluster.
Amazon Relational Database Service
DB Instance, DB Option Group, DB Parameter Group, DB Security Group, DB Snapshot, DB Subnet Group, Event Subscription, Read Replica, Reserved DB Instance.
Amazon Route 53
Domain, Health Check, Hosted Zone.
Amazon S3
Bucket.
Amazon WorkSpaces
WorkSpace.
AWS Certificate Manager
Certificate.
AWS CloudHSM
HSM.
AWS Directory Service
Directory.
AWS Storage Gateway
Gateway, Virtual Tape, Volume.
Elastic Load Balancing
Load Balancer, Target Group.
Things to Know Here are a few things to keep in mind when you build code or write scripts that use the new API functions or the CLI equivalents:
Compatibility – The older, service-specific functions remain available and you can continue to use them.
Write Permission – The new tagging API adds another layer of permission on top of existing policies that are specific to a single AWS service. For example, you will need to have access to tag:TagResources and ec2:CreateTags in order to add a tag to an EC2 instance.
Read Permission – You will need to have access to tag:GetResources, tag:GetTagKeys, and tag:GetTagValues in order to call functions that access tags and tag values.
Pricing – There is no charge for the use of these functions or for tags.
Available Now The new functions are supported by the latest versions of the AWS SDKs. You can use them to tag and access resources in all commercial AWS regions.