I’m excited to share that the Defense Information Systems Agency (DISA) has authorized 19 additional AWS services at Impact Level (IL) 5 and four services at IL 4 in the AWS GovCloud (US) Regions.
The authorization at DoD IL 4 and IL 5 allows DoD Mission Owners to process controlled unclassified information (CUI), including mission-critical workloads for National Security Systems, in the AWS GovCloud (US) Regions. This authorization supplements the full range of U.S. Government data classifications supported on AWS. AWS remains the only cloud service provider accredited to address the full range, including Unclassified, Secret, and Top Secret.
The recently authorized AWS services and features at DoD Impact Level 5 include the following:
The addition of the 19 new services will allow DoD Mission Owners and their developers from the Defense Industrial Base to use the newly authorized AWS services and features to solve critical mission challenges as shown below:
DevOps:
Leverage a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy using AWS CodeBuild.
Use a fully managed source control service to collaborate on code in a secure and highly scalable system with AWS CodeCommit.
Artificial Intelligence / Machine Learning and Big Data:
Uncover the insights and relationships in unstructured data and text using Amazon Comprehend.
Accurately convert speech to text with Amazon Transcribe and translate large volumes of text with Amazon Translate.
Easily move large amounts of data online between on-premises storage and storage services (such as Amazon S3 and Amazon Elastic File System) using AWS DataSync, and reliably load streaming data into data lakes, data stores, and analytics services using Amazon Kinesis Data Firehose.
Administration and Security:
Create and manage licenses and catalogs of IT services that are approved for use on AWS, using AWS License Manager and AWS Service Catalog.
Centrally govern and manage multiple AWS accounts at scale with AWS Organizations.
Get real-time guidance for provisioning workloads according to AWS best practices with AWS Trusted Advisor.
Rotate, manage, and retrieve credentials with AWS Secrets Manager.
Protect web applications using AWS WAF (Web Application Firewall).
IAM, IoT, Networking, Serverless, Tactical Edge:
Organize hierarchies of data along multiple dimensions using Amazon Cloud Directory.
Store, share, and deploy applications through a serverless architecture using AWS Serverless Application Repository.
Build out and connect Internet of Things (IoT) environments with AWS IoT Greengrass.
Efficiently route traffic to Internet applications with Amazon Route 53.
Enable file-based video transcoding with AWS Elemental MediaConvert.
Centrally manage and securely deliver desktop applications to any computer with Amazon AppStream 2.0.
Figure 1 below highlights the new services now available to DoD Mission Owners.
Figure 1: The new AWS services now available, broken out into categories.
To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this post, let us know in the Comments section below.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
AWS continues to expand the number of services that customers can use to run sensitive and highly regulated workloads in the federal government space. Today, I’m pleased to announce another expansion of our FedRAMP program, marking a 36.2% increase in our number of FedRAMP authorizations. We’ve achieved authorizations for 26 additional services, 7 of which have been authorized for both the AWS US East/West and AWS GovCloud (US) Regions.
We’ve achieved FedRAMP authorizations for the following 22 services in our AWS US East/West Regions:
In total, we now offer 70 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 54 services authorized in the AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope page.
Our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.
We care deeply about our customers’ needs, and compliance is my team’s priority. We want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.
I’m pleased to announce that the Defense Information Systems Agency (DISA) has extended the Provisional Authorization to Operate (P-ATO) of AWS GovCloud (US) Regions for Department of Defense (DoD) workloads at DoD Impact Levels (IL) 4 and 5 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). Our authorizations at DoD IL 4 and IL 5 allow DoD Mission Owners to process unclassified, mission critical workloads for National Security Systems in the AWS GovCloud (US) Regions.
AWS successfully completed an independent, third-party evaluation that confirmed we effectively implement over 400 security controls using applicable criteria from NIST SP 800-53 Rev 4, the US General Services Administration’s FedRAMP High baseline, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the High Confidentiality, High Integrity, and High Availability impact levels.
In addition to a P-ATO extension through November 2020 for the existing 24 services, DISA authorized an additional 15 AWS services at DoD Impact Levels 4 and 5 as listed below.
Newly authorized AWS services at DoD Impact Levels 4 and 5
With these additional 15 services, we can now offer AWS customers a total of 39 services authorized to store, process, and analyze DoD mission-critical data at Impact Levels 4 and 5.
The AWS GovCloud (US) Regions are isolated and designed to host sensitive data and regulated workloads in the cloud, assisting customers who have United States federal, state, or local government compliance requirements.
AWS Artifact provides on-demand downloads of AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI) reports, the AWS Federal Risk and Authorization Management Program (FedRAMP) Partner Package, and Service Organization Control (SOC) reports. You can submit the security and compliance documents (also known as audit artifacts) to your auditors or regulators to demonstrate the security and compliance of the AWS infrastructure and services that you use. You can also use these documents as guidelines to evaluate your own cloud architecture and assess the effectiveness of your company’s internal controls.
AWS Artifact can also be used to review AWS GovCloud (US) terms and conditions, accept agreements with AWS, designate AWS accounts that process restricted information (such as protected health information), and track the status of multiple AWS agreements. To learn how to use Artifact to accept agreements for multiple accounts, see Managing Your Agreements in AWS Artifact.
Learn more about AWS Artifact here, and consult the Artifact FAQ here.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations.
Today, we’ve achieved FedRAMP authorizations for 6 services in our AWS US East/West Regions:
In total, we now offer 48 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 43 services authorized in our AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope page.
Our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.
We care deeply about our customers’ needs, and compliance is my team’s priority. As we expand in the federal space, we want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.
AWS Organizations is now available in the AWS GovCloud (US) Regions, enabling you to centrally govern and manage your AWS GovCloud (US) accounts. AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can:
Define organization-wide permission guardrails to establish controls that all IAM principals (users and roles) adhere to
Group your accounts into categories using organizational units (OUs)
Programmatically create new accounts in your organization
One fundamental concept is how AWS GovCloud (US) accounts work. Each AWS GovCloud (US) account has a mapped commercial account associated with it in a 1:1 relationship (displayed in the diagrams later on in this blog post by blue dotted lines). The commercial account is used by the AWS GovCloud (US) account for billing and support-related use cases, as well as associating various account information for the AWS GovCloud (US) account (for example, an email address). This association between the AWS GovCloud (US) account and its mapped commercial account can’t be modified.
The AWS GovCloud (US) Regions have special requirements, so you’ll need to have access to the AWS GovCloud (US) Regions to use AWS Organizations in AWS GovCloud (US).
Set up your AWS GovCloud (US) organization
AWS GovCloud (US) organizations are completely separate from commercial organizations and are managed independently of one another. The two most common models used to structure your AWS GovCloud (US) organization in relation to an existing commercial organization are a single company model or a reseller/partner model.
For single companies, you’ll want to use the AWS GovCloud (US) account mapped to your commercial organization master account to create your AWS GovCloud (US) organization. This maintains the relationship between the two organizations’ master accounts for easier management. This is visualized in Figure 1.
Figure 1: A common configuration for a single company’s organizations
The AWS GovCloud (US) account mapped to the master account of the commercial organization is used to create an AWS GovCloud (US) organization.
The other AWS GovCloud (US) accounts mapped to the member accounts of the commercial organization are invited into the new AWS GovCloud (US) organization.
For resellers or partners who might be servicing multiple customers in AWS GovCloud (US) from a single commercial organization, you can create an AWS GovCloud (US) organization for each customer. In Figure 2, I assume that the commercial organization is being managed by the reseller who is servicing two customers, A and B.
Figure 2: A common configuration for a partner or reseller who manages multiple organizations in AWS GovCloud (US)
Customer A chooses one of their AWS GovCloud (US) accounts to become the master account of their AWS GovCloud (US) organization and uses it to create the organization.
Customer A invites their other AWS GovCloud (US) accounts into their new AWS GovCloud (US) organization.
Customer B does the same as Customer A.
The reseller manages all of the mapped commercial accounts for billing and support purposes in the reseller’s commercial organization.
Creating your organization in AWS GovCloud (US) follows the same process as in other Regions, either by using the API/CLI or by signing in to the AWS GovCloud (US) Organizations console and choosing Create organization. For more information on creating an organization, please see Creating an Organization.
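For example, here’s a minimal sketch of the API path using the AWS SDK for Python (boto3), run with credentials for the AWS GovCloud (US) account that will become the master account:

import boto3

# An AWS GovCloud (US) endpoint; us-gov-west-1 is used here as an example.
org = boto3.client('organizations', region_name='us-gov-west-1')

# 'ALL' enables all organization features, including service control policies.
org.create_organization(FeatureSet='ALL')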
Frequently asked questions about how AWS Organizations works in the AWS GovCloud (US) Regions
Can I use my AWS GovCloud (US) organization to manage accounts in commercial Regions?
No, Organizations in AWS GovCloud (US) do not enable you to manage accounts in commercial regions. You can use your organization in the AWS GovCloud (US) Regions to help you manage your AWS GovCloud (US) accounts. Your organization in AWS GovCloud (US) has no relation to your existing organization in commercial Regions and is independently managed.
Can I view the relationship between an AWS GovCloud (US) account and its mapped commercial account?
No. Because of the isolated nature of the AWS GovCloud (US) Regions, you will not be able to view any information (such as the account ID or email address) about the mapped commercial accounts associated with your AWS GovCloud (US) accounts managed in your AWS GovCloud (US) organization. You can view and manage information about your AWS GovCloud (US) accounts using your AWS GovCloud (US) organization, and your commercial accounts using your commercial organization.
How does consolidated billing work for AWS GovCloud (US) organizations?
AWS GovCloud (US) accounts are billed to the mapped commercial account associated with them and paid for in the commercial regions. Therefore, AWS GovCloud (US) organizations (and the master account of AWS GovCloud (US) organizations) are not responsible for the billing of account activity incurred by AWS GovCloud (US) accounts in the organization. If you manage all of your mapped commercial accounts associated with your AWS GovCloud (US) accounts in a single commercial organization—as you would if you follow the common configuration of a single company described earlier—you will receive one bill for all of your commercial and AWS GovCloud (US) account usage.
How do I programmatically create a new AWS GovCloud (US) account using Organizations?
New AWS GovCloud (US) accounts can be programmatically created using a new Organizations API, CreateGovCloudAccount, which can be called by the master account of a commercial organization, provided that it already has an associated AWS GovCloud (US) account. If your master account doesn’t have an associated AWS GovCloud (US) account, you’ll need to create one before using this API.
The CreateGovCloudAccount API creates both a standalone AWS GovCloud (US) account as well as its corresponding mapped commercial account, which will automatically be added to your commercial organization. The new AWS GovCloud (US) account is independent and will need to be invited into an AWS GovCloud (US) organization.
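Here’s a rough boto3 sketch of that flow, called from the master account of the commercial organization (the email address and account name are placeholders):

import boto3

org = boto3.client('organizations')

response = org.create_gov_cloud_account(
    Email='new-account@example.com',         # placeholder email
    AccountName='Example GovCloud Account',  # placeholder name
    RoleName='OrganizationAccountAccessRole',
    IamUserAccessToBilling='DENY',
)

# Account creation is asynchronous; poll the request status until it completes.
status = org.describe_create_account_status(
    CreateAccountRequestId=response['CreateAccountStatus']['Id']
)
print(status['CreateAccountStatus']['State'])  # IN_PROGRESS, SUCCEEDED, or FAILED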
How do I access a new AWS GovCloud (US) account programmatically created using the CreateGovCloudAccount API?
Use the commercial organization’s master account to call the CreateGovCloudAccount API, which creates a new account in the commercial organization. A role is created in this new commercial account that allows your commercial organization master account to assume it, the exact same way account creation works in commercial organizations today.
An AWS GovCloud (US) account is then automatically created and mapped to the commercial account that was just created. A role is created in the new AWS GovCloud (US) account that can be assumed by the AWS GovCloud (US) account mapped to the master account of the commercial organization.
Sign in to the AWS GovCloud (US) account mapped to your commercial organization’s master account and assume the role into the newly created AWS GovCloud (US) account.
Here’s a diagram to help you understand the process.
Figure 3: How to access a new programmatically created AWS GovCloud (US) account
You call the CreateGovCloudAccount API from the master account of your commercial organization, which creates a new account in your commercial organization and a mapped standalone AWS GovCloud (US) account.
Your AWS GovCloud (US) account mapped to your master account of your commercial organization has permissions to assume the OrganizationAccountAccessRole IAM role in the newly created AWS GovCloud (US) account.
Once you have access to the new AWS GovCloud (US) account, you can set up your own IAM users/roles and invite the account into your AWS GovCloud (US) organization.
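Here’s a boto3 sketch of those last two steps; the account ID is a placeholder, and OrganizationAccountAccessRole is the default role name created for new accounts:

import boto3

# Run with credentials for the AWS GovCloud (US) account mapped to the
# master account of your commercial organization.
sts = boto3.client('sts', region_name='us-gov-west-1')

creds = sts.assume_role(
    RoleArn='arn:aws-us-gov:iam::222222222222:role/OrganizationAccountAccessRole',  # placeholder ID
    RoleSessionName='bootstrap-new-govcloud-account',
)['Credentials']

# Use the temporary credentials to set up IAM users/roles in the new account.
iam = boto3.client(
    'iam',
    region_name='us-gov-west-1',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# Then, from the master account of your AWS GovCloud (US) organization,
# invite the new account into the organization.
org = boto3.client('organizations', region_name='us-gov-west-1')
org.invite_account_to_organization(Target={'Id': '222222222222', 'Type': 'ACCOUNT'})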
Can I manage both commercial accounts and AWS GovCloud (US) accounts using the same organization?
No. You can use your existing commercial organization to manage your commercial accounts and create a new AWS GovCloud (US) organization to manage your AWS GovCloud (US) accounts.
Summary
AWS Organizations now extends its governance and management capabilities to customers in the AWS GovCloud (US) Regions. Customers are now able to add their AWS GovCloud (US) accounts to an AWS GovCloud (US) organization for central governance of access, compliance, and security. To get started, sign in to the AWS Organizations console from an AWS GovCloud (US) Region.
If you have comments about this post, submit them in the “Comments” section below. If you have additional questions, please open a new thread on the AWS Organizations forum.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!
In the past I’ve talked about several agents, daemons, and scripts that you could use to collect system metrics and log files for your Windows and Linux instances and on-premises servers and publish them to Amazon CloudWatch. The data collected by this somewhat disparate collection of tools gave you visibility into the status and behavior of your compute resources, along with the power to take action when a value goes out of range and indicates a potential issue. You can graph any desired metrics on CloudWatch Dashboards, initiate actions via CloudWatch Alarms, and search CloudWatch Logs to find error messages, while taking advantage of our support for custom high-resolution metrics.
New Unified Agent

Today we are taking a nice step forward and launching a new, unified CloudWatch Agent. It runs in the cloud and on-premises, on Linux and Windows instances and servers, and handles metrics and log files. You can deploy it using AWS Systems Manager (SSM) Run Command, SSM State Manager, or the CLI. Here are some of the most important features:
Single Agent – A single agent now collects both metrics and logs. This simplifies the setup process and reduces complexity.
Cross-Platform / Cross-Environment – The new agent runs in the cloud and on-premises, on 64-bit Linux and 64-bit Windows, and includes HTTP proxy server support.
Configurable – The new agent captures the most useful system metrics automatically. It can be configured to collect hundreds of others, including fine-grained metrics on sub-resources such as CPU threads, mounted filesystems, and network interfaces.
CloudWatch-Friendly – The new agent supports standard 1-minute metrics and the newer 1-second high-resolution metrics. It automatically includes EC2 dimensions such as Instance Id, Image Id, and Auto Scaling Group Name, and also supports the use of custom dimensions. All of the dimensions can be used for custom aggregation across Auto Scaling Groups, applications, and so forth.
Migration – You can easily migrate existing AWS SSM and EC2Config configurations for use with the new agent.
Installing the Agent

The CloudWatch Agent uses an IAM role when running on an EC2 instance, and an IAM user when running on an on-premises server. The role or the user must include the AmazonSSMFullAccess and AmazonEC2ReadOnlyAccess policies. Here’s my role:
I can easily add it to a running instance (this is a relatively new and very handy EC2 feature):
Next, I install the CloudWatch Agent using the AWS Systems Manager:
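If you prefer the SDK to the console, the install can be kicked off with Run Command; here’s a boto3 sketch using the AWS-ConfigureAWSPackage document (the instance ID is a placeholder):

import boto3

ssm = boto3.client('ssm')

# Install the CloudWatch Agent package on the target instance via Run Command.
ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],  # placeholder instance ID
    DocumentName='AWS-ConfigureAWSPackage',
    Parameters={'action': ['Install'], 'name': ['AmazonCloudWatchAgent']},
)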
This takes just a few seconds. Now I can use a simple wizard to set up the configuration file for the agent:
The wizard also lets me set up the log files to be monitored:
The wizard generates a JSON-format config file and stores it on the instance. It also offers me the option to upload the file to my Parameter Store so that I can deploy it to my other instances (I can also do fine-grained customization of the metrics and log collection configuration by editing the file):
Now I can start the CloudWatch Agent using Run Command, supplying the name of my configuration in the Parameter Store:
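Here’s a boto3 sketch of the equivalent Run Command call, assuming the AmazonCloudWatch-ManageAgent document and a configuration stored in the Parameter Store under a placeholder name:

import boto3

ssm = boto3.client('ssm')

ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],  # placeholder instance ID
    DocumentName='AmazonCloudWatch-ManageAgent',
    Parameters={
        'action': ['configure'],
        'mode': ['ec2'],
        'optionalConfigurationSource': ['ssm'],
        'optionalConfigurationLocation': ['AmazonCloudWatch-linux'],  # placeholder parameter name
        'optionalRestart': ['yes'],
    },
)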
This runs in a few seconds and the agent begins to publish metrics right away. As I mentioned earlier, the agent can publish fine-grained metrics on the resources inside of or attached to an instance. For example, here are the metrics for each filesystem:
There’s a separate log stream for each monitored log file on each instance:
I can view and search it, just like I can do for any other log stream:
Now Available

The new CloudWatch Agent is available now and you can start using it today in all public AWS Regions, with AWS GovCloud (US) and the Regions in China to follow.
There’s no charge for the agent; you pay the usual CloudWatch prices for logs and custom metrics.
One of the main tenets of the Family Educational Rights and Privacy Act (FERPA) is the protection of student education records, including personally identifiable information (PII) and directory information. We recently updated our FERPA Compliance on AWS whitepaper to include AWS service-specific guidance for 24 AWS services. The whitepaper describes how these services can be used to help secure protected data. In conjunction with more detailed service-specific documentation, this updated information helps make it easier for you to plan, deploy, and operate secure environments to meet your compliance requirements in the AWS Cloud.
The updated whitepaper is especially useful for educational institutions and their vendors who need to understand:
How AWS services can be used to help deploy educational and PII workloads securely in the AWS Cloud.
Key security disciplines in a security program to help you run a FERPA-compliant program (such as auditing, data destruction, and backup and disaster recovery).
In a related effort to help you secure PII, we also added to the whitepaper a mapping of NIST SP 800-122, which provides guidance for protecting PII, as well as a link to our NIST SP 800-53 Quick Start, a CloudFormation template that automatically configures AWS resources and deploys a multi-tier, Linux-based web application. To learn how this Quick Start works, see the Automate NIST Compliance in AWS GovCloud (US) with AWS Quick Start Tools video. The template helps you streamline and automate secure baselines in AWS—from initial design to operational security readiness—by incorporating the expertise of AWS security and compliance subject matter experts.
For more information about AWS Compliance and FERPA or to request support for your organization, contact your AWS account manager.
– Chris Gile, Senior Manager, AWS Security Assurance
The original AWS Price List API, as described in New – AWS Price List API, gave you access to prices in JSON and CSV form by way of structured URLs. While this worked well for some types of cost management tools, the size and complexity of the files made them difficult to download and tedious to parse. Today we are updating the API by adding new functions that allow you to perform fine-grained price queries that return only the prices that you need. This will allow you to make use of the prices in mobile and browser-based applications.
New Functions

Here are the new functions:
DescribeServices – Returns sets of attribute keys that are used to define products within a service. For example, the keys returned for EC2 will include physicalProcessor, memory, operatingSystem, location, and tenancy.
GetAttributeValues – Returns all of the allowable values for a given attribute key. For example, values for the operatingSystem key include Windows, RHEL, Linux, and SUSE; values for the location key include US East (N. Virginia) and Asia Pacific (Mumbai).
GetProducts – Returns all of the products, along with their public prices, that match a filter expression based on service name and attribute value.
You can access these functions from the AWS SDKs. In order to try them out I used Python and the AWS SDK for Python. I start by importing the SDK and creating a client:
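Here’s that setup (the Pricing API is served from the US East (Northern Virginia) and Asia Pacific (Mumbai) endpoints; I use us-east-1 here):

import boto3

pricing = boto3.client('pricing', region_name='us-east-1')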
Here’s how I get all of the values for all of EC2’s pricing attributes:
print("Selected EC2 Attributes & Values")
print("================================")
response = pricing.describe_services(ServiceCode='AmazonEC2')
attrs = response['Services'][0]['AttributeNames']
for attr in attrs:
response = pricing.get_attribute_values(ServiceCode='AmazonEC2', AttributeName=attr)
values = []
for attr_value in response['AttributeValues']:
values.append(attr_value['Value'])
print(" " + attr + ": " + ", ".join(values))
The output starts like this:
Selected EC2 Attributes & Values
================================
volumeType: Throughput Optimized HDD, Provisioned IOPS, Magnetic, General Purpose, Cold HDD
maxIopsvolume: 500 - based on 1 MiB I/O size, 40 - 200, 250 - based on 1 MiB I/O size, 20000, 10000
instanceCapacity10xlarge: 1
locationType: AWS Region
instanceFamily: Storage optimized, Micro instances, Memory optimized, GPU instance, General purpose, Compute optimized
operatingSystem: Windows, SUSE, RHEL, NA, Linux
...
And here’s how I use the service name and attribute values to obtain price listings for EC2 instances with 64 vCPUs, 256 GiB of memory, pre-installed SQL Server Enterprise, in the Asia Pacific (Mumbai) Region. Each price is a JSON string:
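Here’s a sketch of what such a query looks like in boto3; the attribute values are my assumptions about the catalog’s naming (for example, 'SQL Ent' for pre-installed SQL Server Enterprise):

import json

import boto3

pricing = boto3.client('pricing', region_name='us-east-1')

response = pricing.get_products(
    ServiceCode='AmazonEC2',
    Filters=[
        {'Type': 'TERM_MATCH', 'Field': 'vcpu', 'Value': '64'},
        {'Type': 'TERM_MATCH', 'Field': 'memory', 'Value': '256 GiB'},
        {'Type': 'TERM_MATCH', 'Field': 'preInstalledSw', 'Value': 'SQL Ent'},
        {'Type': 'TERM_MATCH', 'Field': 'location', 'Value': 'Asia Pacific (Mumbai)'},
    ],
)

for price in response['PriceList']:
    product = json.loads(price)  # each entry in PriceList is a JSON string
    print(product['product']['attributes']['instanceType'])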
The next part of the response contains an array of terms, each describing a particular way to purchase the instance (On-Demand or various types of Reserved Instance):
Now Available

The new functions are available now and you can start using them today in the US East (Northern Virginia) and Asia Pacific (Mumbai) Regions to access metadata and price listings for all public AWS Regions and AWS GovCloud (US), at no charge.
We have supported sensitive Defense community workloads in the cloud for more than four years, and this latest IL5 authorization is complementary to our FedRAMP High Provisional Authorization that covers 18 services in the AWS GovCloud (US) Region. Our customers now have the flexibility to deploy any range of IL 2, 4, or 5 workloads by leveraging AWS’s services, attestations, and certifications. For example, when the US Air Force needed compute scale to support the Next Generation GPS Operational Control System Program, they turned to AWS.
In partnership with a certified Third Party Assessment Organization (3PAO), an independent validation was conducted to assess both our technical and nontechnical security controls to confirm that they meet the DoD’s stringent CC SRG standards for IL5 workloads. Effective immediately, customers can begin leveraging the IL5 authorization for the following six services in the AWS GovCloud (US) Region:
AWS has been a long-standing industry partner with DoD, federal-agency customers, and private-sector customers to enhance cloud security and policy. We continue to collaborate on the DoD CC SRG, Defense Acquisition Regulation Supplement (DFARS) and other government requirements to ensure that policy makers enact policies to support next-generation security capabilities.
In an effort to reduce the authorization burden on our DoD customers, we’ve worked with DISA to port our assessment results into a format easily ingestible by the Enterprise Mission Assurance Support Service (eMASS) system. Additionally, we undertook a separate effort to empower our industry partners and customers to efficiently solve their compliance, governance, and audit challenges by launching the AWS Customer Compliance Center, a portal providing a breadth of AWS-specific compliance and regulatory information.
We look forward to providing sustained cloud security and compliance support at scale for our DoD customers and adding additional services within the IL5 authorization boundary. See AWS Services in Scope by Compliance Program for updates. To request access to AWS’s DoD security and authorization documentation, contact AWS Sales and Business Development. For a list of frequently asked questions related to AWS DoD SRG compliance, see the AWS DoD SRG page.
Originally, metrics were stored at five-minute intervals; this was reduced to one minute (also known as Detailed Monitoring) in response to customer requests way back in 2010. This was a welcome change, but now it is time to do better. Our customers are streaming video, running flash sales, deploying code tens or hundreds of times per day, and running applications that scale in and out very quickly as conditions change. In all of these situations, a minute is simply too coarse of an interval. Important, transient spikes can be missed; disparate (yet related) events are difficult to correlate across time, and the MTTR (mean time to repair) when something breaks is too high.
New High-Resolution Metrics

Today we are adding support for high-resolution custom metrics, with plans to add support for AWS services over time. Your applications can now publish metrics to CloudWatch with 1-second resolution. You can watch the metrics scroll across your screen seconds after they are published and you can set up high-resolution CloudWatch Alarms that evaluate as frequently as every 10 seconds.
Imagine alarming when available memory gets low. This is often a transient condition that can be hard to catch with infrequent samples. With high-resolution metrics, you can see, detect (via an alarm), and act on it within seconds. With standard one-minute metrics, a transient spike can fall between samples; the alarm would not fire, and you would not know about the issue.
Publishing High-Resolution Metrics

You can publish high-resolution metrics in two different ways:
API – The PutMetricData function now accepts an optional StorageResolution parameter. Set this parameter to 1 to publish high-resolution metrics; omit it (or set it to 60) to publish at standard 1-minute resolution.
collectd plugin – The CloudWatch plugin for collectd has been updated to support collection and publication of high-resolution metrics. You will need to set the enable_high_definition_metrics parameter in the config file for the plugin.
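Here’s a minimal boto3 sketch of the API option (the namespace and metric name are made-up examples):

import boto3

cloudwatch = boto3.client('cloudwatch')

cloudwatch.put_metric_data(
    Namespace='MyApp',  # placeholder namespace
    MetricData=[{
        'MetricName': 'RequestLatency',  # placeholder metric name
        'Value': 42.0,
        'Unit': 'Milliseconds',
        'StorageResolution': 1,  # 1 = high-resolution; omit or use 60 for standard resolution
    }],
)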
CloudWatch metrics are rolled up over time; resolution effectively decreases as the metrics age. Here’s the schedule:
1 second metrics are available for 3 hours.
60 second metrics are available for 15 days.
5 minute metrics are available for 63 days.
1 hour metrics are available for 455 days (15 months).
When you call GetMetricStatistics you can specify a period of 1, 5, 10, 30 or any multiple of 60 seconds for high-resolution metrics. You can specify any multiple of 60 seconds for standard metrics.
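For example, here’s how you might retrieve 10-second averages with boto3 (again, with placeholder names):

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='MyApp',            # placeholder namespace
    MetricName='RequestLatency',  # placeholder metric name
    StartTime=datetime.utcnow() - timedelta(minutes=5),
    EndTime=datetime.utcnow(),
    Period=10,                    # 10-second buckets; valid only for high-resolution metrics
    Statistics=['Average'],
)

for point in sorted(response['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])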
A Quick Demo

I grabbed my nearest EC2 instance, installed the latest version of collectd and the Python plugin:
$ sudo yum install collectd collectd-python
Then I downloaded the setup script for the plugin, made it executable, and ran it:
I had already created a suitable IAM Role and added it to my instance; it was automatically detected during setup. I was asked to enable the high resolution metrics:
collectd started running and publishing metrics within seconds. I opened up the CloudWatch Console to take a look:
Then I zoomed in to see the metrics in detail:
I also created an alarm that will check the memory.percent.used metric at 10 second intervals. This will make it easier for me to detect situations where a lot of memory is being used for a short period of time:
Now Available

High-resolution custom metrics and alarms are available now in all Public AWS Regions, with support for AWS GovCloud (US) coming soon.
As was already the case, you can store 10 metrics at no charge every month; see the CloudWatch Pricing page for more information. Pricing for high-resolution metrics is identical to that for standard resolution metrics, with volume tiers that allow you to realize savings (on a per-metric basis) when you use more metrics. High-resolution alarms are priced at $0.30 per alarm per month.
I first wrote about the benefits of GPU-powered computing in 2013 when we launched the G2 instance type. Since that launch, AWS customers have used the G2 instances to deliver high performance graphics to mobile devices, TV sets, and desktops.
Today we are taking a step forward and launching the G3 instance type. Powered by NVIDIA Tesla M60 GPUs, these instances are available in three sizes (all VPC-only and EBS-only):
Model         GPUs   GPU Memory   vCPUs   Main Memory   EBS Bandwidth
g3.4xlarge    1      8 GiB        16      122 GiB       3.5 Gbps
g3.8xlarge    2      16 GiB       32      244 GiB       7 Gbps
g3.16xlarge   4      32 GiB       64      488 GiB       14 Gbps
Each GPU has 8 GiB of memory and 2,048 parallel processing cores, along with a hardware encoder capable of supporting up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams, making these instances a great fit for 3D rendering & visualization, virtual reality, video encoding, remote graphics workstation (NVIDIA GRID), and other server-side graphics workloads that need a massive amount of parallel processing power. The GPUs support OpenGL 4.5, DirectX 12.0, CUDA 8.0, and OpenCL 1.2. When you launch a G3 instance you have access to an NVIDIA GRID Virtual Workstation License and can make use of the NVIDIA GRID driver without purchasing a license on your own.
The instances use Intel Xeon E5-2686 v4 (Broadwell) processors running at 2.7 GHz. On the networking side, Enhanced Networking (via the Elastic Network Adapter) provides up to 20 Gbps of aggregate network bandwidth within a Placement Group, along with up to 14 Gbps of EBS bandwidth.
Our customers have told us that they are looking forward to visualizing large 3D seismic models, configuring cars in 3D, and providing students with the ability to run high-end 2D and 3D applications. For example, Calgary Scientific can take applications that are powered by the Unreal Engine and make them accessible on mobile devices and from within web pages, with collaborative viewing support. Visit their Demo Gallery to see PureWeb Reality in action:
You can launch these instances today in the US East (Ohio), US East (Northern Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US), and EU (Ireland) Regions as On-Demand, Reserved Instances, Spot Instances, and Dedicated Hosts, with more Regions coming soon.
I’ve already told you about Amazon Rekognition and described how it uses deep neural network models to analyze images by detecting objects, scenes, and faces.
Motorola Solutions for Public Safety

While I have your attention, I would love to tell you how Motorola Solutions is exploring how Rekognition can enhance real-time intelligence for public safety personnel in the field and at the command center.
Motorola Solutions provides over 100,000 public safety and commercial customers in more than 100 countries with software, services, and tools for mobile intelligence and digital evidence management, many powered by images captured using body, dashboard, and stationary cameras. Due to the exceptionally sensitive nature of these images, they must be stored in an environment that meets stringent CJIS (Criminal Justice Information Systems) security standards defined by the FBI.
For several years, researchers at Motorola Solutions have been exploring the use of artificial intelligence. For example, they have built prototype applications that use Rekognition, Lex, and Polly in conjunction with their own software to scan images from a body-worn camera for missing persons and to raise alerts without requiring continuous human attention or interaction. With approximately 100,000 missing people in the US alone, law enforcement agencies need to bring powerful tools to bear. At re:Invent 2016, Dan Law (Chief Data Scientist for Motorola Solutions) described how they use AWS to aid in this effort. Here’s the video (Dan’s section is titled AI for Public Safety):
AWS and CJIS

The applications that Dan described can run in AWS GovCloud (US). This is an isolated cloud built to protect and preserve sensitive IT data while meeting the FBI’s CJIS requirements (and many others). AWS GovCloud (US) resides on US soil and is managed exclusively by US citizens. AWS routinely signs CJIS security agreements with our customers and can either perform or allow background checks on our employees, as needed.
Here are some resources that you can use to learn more about AWS and CJIS:
AWS GovCloud (US) gives AWS customers a place to host sensitive data and regulated workloads in the AWS Cloud. The first AWS GovCloud (US) Region was launched in 2011 and is located on the west coast of the US.
I’m happy to announce that we are working on a second Region that we expect to open in 2018. The upcoming AWS GovCloud (US-East) Region will provide customers with added redundancy, data durability, and resiliency, and will also provide additional options for disaster recovery.
Like the existing region, which we now call AWS GovCloud (US-West), the new region will be isolated and meet top US government compliance requirements including International Traffic in Arms Regulations (ITAR), NIST standards, Federal Risk and Authorization Management Program (FedRAMP) Moderate and High, Department of Defense Impact Levels 2-4, DFARs, IRS1075, and Criminal Justice Information Services (CJIS) requirements. Visit the GovCloud (US) page to learn more about the compliance regimes that we support.
Government agencies and the IT contractors that serve them were early adopters of AWS GovCloud (US), as were companies in regulated industries. These organizations are able to enjoy the flexibility and cost-effectiveness of public cloud while benefiting from the isolation and data protection offered by a region designed and built to meet their regulatory needs and to help them to meet their compliance requirements. Here’s a small sample from our customer base:
Hardware multi-factor authentication (MFA) is now available in the AWS GovCloud (US) Region to help strengthen data security while giving you control over the token keys that have access to your data. MFA is a best practice that adds an extra layer of protection on top of user names and passwords.
Today, I am pleased to announce that the AWS GovCloud (US) Region has received Defense Information Systems Agency Impact Level 4 (IL4) Provisional Authorization (PA) for more than one dozen new services. The IL4 PA enables Department of Defense (DoD) customers to operate their mission-critical and regulated workloads in the AWS GovCloud (US) Region, with data up to the DoD Cloud Computing Security Requirements Guide IL4.
The new AWS services added to the authorization include advanced database, low-cost storage, data warehouse, security, and configuration automation solutions that will help organizations with IL4 workloads increase the productivity and security of their data in the AWS Cloud. For example, with AWS CloudFormation you can deploy AWS resources by automating configuration processes. AWS Key Management Service enables you to create and control the encryption keys that you use to encrypt your data. With Amazon Redshift, you can analyze all your data by using your existing business intelligence tools and automate common administrative tasks to manage, monitor, and scale your data warehouse.
Today, we’re pleased to announce an array of AWS services that are available in the AWS GovCloud (US) Region and have achieved Federal Risk and Authorization Management Program (FedRAMP) High authorizations. The FedRAMP Joint Authorization Board (JAB) has issued Provisional Authority to Operate (P-ATO) approvals, which are effective immediately. If you are a federal or commercial customer, you can use these services to process and store your critical workloads in the AWS GovCloud (US) Region’s authorization boundary with data up to the high impact level.
The services newly available in the AWS GovCloud (US) Region include database, storage, data warehouse, security, and configuration automation solutions that will help you increase your ability to manage data in the cloud. For example, with AWS CloudFormation, you can deploy AWS resources by automating configuration processes. AWS Key Management Service (KMS) enables you to create and control the encryption keys used to secure your data. Amazon Redshift enables you to analyze all your data cost effectively by using existing business intelligence tools to automate common administrative tasks for managing, monitoring, and scaling your data warehouse.
Our federal and commercial customers can now leverage our FedRAMP P-ATO to access the following services:
CloudFormation – CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use sample templates in CloudFormation, or create your own templates to describe the AWS resources and any associated dependencies or run-time parameters required to run your application.
Amazon DynamoDB – Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.
Amazon EMR – Amazon EMR provides a managed Hadoop framework that makes it efficient and cost effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR, and interact with data in other AWS data stores such as Amazon S3 and DynamoDB.
Amazon Glacier – Amazon Glacier is a secure, durable, and low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions.
KMS – KMS is a managed service that makes it easier for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys. KMS is integrated with other AWS services to help you protect the data you store with these services. For example, KMS is integrated with CloudTrail to provide you with logs of all key usage and help you meet your regulatory and compliance needs.
Redshift – Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost effective to analyze all your data by using your existing business intelligence tools.
Amazon Simple Notification Service (SNS) – Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or “fan out” messages to large numbers of recipients. SNS makes it simple and cost effective to send push notifications to mobile device users and email recipients or even send messages to other distributed services.
Amazon Simple Queue Service (SQS) – Amazon SQS is a fully managed message queuing service for reliably communicating among distributed software components and microservices—at any scale. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.
Amazon Simple Workflow Service (SWF) – Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. SWF is a fully managed state tracker and task coordinator in the cloud.
AWS works closely with the FedRAMP Program Management Office (PMO), National Institute of Standards and Technology (NIST), and other federal regulatory and compliance bodies to ensure that we provide you with the cutting-edge technology you need in a secure and compliant fashion. We are working with our authorizing officials to continue to expand the scope of our authorized services, and we are fully committed to ensuring that AWS GovCloud (US) continues to offer government customers the most comprehensive mix of functionality and security.
Way back in 2010, we launched Resource Tagging for EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50, and we have made tags more useful with the introduction of resource groups and a tag editor. Our customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and to control access to resources via IAM policies.
The AWS tagging model provides separate functions for resource creation and resource tagging. While this is flexible and has worked well for many of our users, it does result in a small time window where the resources exist in an untagged state. Using two separate functions means that resource creation could succeed only for tagging to fail, again leaving resources in an untagged state.
Today we are making tagging more flexible and more useful, with four new features:
Tag on Creation – You can now specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources.
Enforced Tag Usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances or EBS volumes.
Resource-Level Permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.
Enforced Volume Encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.
Tag on Creation

You now have the ability to specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources (if the call creates both instances and volumes, you can specify distinct tags for the instance and for each volume). The resource creation and the tagging are performed atomically; both must succeed in order for the operation (RunInstances, CreateVolume, and other functions that create resources) to succeed. You no longer need to build tagging scripts that run after instances or volumes have been created.
Here’s how you specify tags when you launch an EC2 instance (the CostCenter and SaveSnapshotFlag tags are also set on any EBS volumes created when the instance is launched):
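Here’s a boto3 sketch of such a launch (the AMI ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')

tags = [
    {'Key': 'CostCenter', 'Value': '115'},
    {'Key': 'SaveSnapshotFlag', 'Value': 'true'},
]

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
    InstanceType='m4.large',
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {'ResourceType': 'instance', 'Tags': tags},
        {'ResourceType': 'volume', 'Tags': tags},  # tags any EBS volumes created at launch
    ],
)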
Resource-Level Permissions

CreateTags and DeleteTags now support IAM’s resource-level permissions, as requested by many customers. This gives you additional control over the tag keys and values on existing resources.
Also, RunInstances and CreateVolume now support additional resource-level permissions. This allows you to exercise control over the users and groups that can tag resources on creation.
Enforced Tag Usage

You can now write IAM policies that enforce the use of specific tags. For example, you could write a policy that blocks the deletion of tags named Owner or Account. Or, you could write a “Deny” policy that disallows the creation of new tags for specific existing resources. You could also use an IAM policy to enforce the use of Department and CostCenter tags to help you achieve more accurate cost allocation reporting. In order to implement stronger compliance and security policies, you could also restrict access to DeleteTags if the resource is not tagged with the user’s name. The ability to enforce tag usage gives you precise control over access to resources, ownership, and cost allocation.
Here’s a statement that requires the use of costcenter and stack tags (with values of “115” and “prod,” respectively) for all newly created volumes:
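(A reconstructed sketch; the Region and account number are placeholders, and the second statement grants the CreateTags permission that tag-on-create requires.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCreateTaggedVolumes",
      "Effect": "Allow",
      "Action": "ec2:CreateVolume",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/costcenter": "115",
          "aws:RequestTag/stack": "prod"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:CreateTags",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*"
    }
  ]
}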
Enforced Volume Encryption

Using the additional IAM resource-level permissions now supported by RunInstances and CreateVolume, you can now write IAM policies that mandate the use of encryption for any EBS boot or data volumes created. You can use this to comply with regulatory requirements, enforce enterprise security policies, and protect your data in compliance with applicable auditing requirements.
Here’s a sample statement that you can incorporate into an IAM policy for RunInstances and CreateVolume to enforce EBS volume encryption:
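(Again a sketch: a Deny that blocks the creation of unencrypted volumes, whether through RunInstances or CreateVolume, using the ec2:Encrypted condition key.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedVolumeCreation",
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "ec2:CreateVolume"
      ],
      "Resource": "arn:aws:ec2:*:*:volume/*",
      "Condition": {
        "Bool": {
          "ec2:Encrypted": "false"
        }
      }
    }
  ]
}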
Available Now

As you can see, the combination of tagging and the new resource-level permissions on the resource creation and tag manipulation functions gives you the ability to track and control access to your EC2 resources.
In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.
March
March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.
March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.
March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).
March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.
March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.
March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.
February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.
February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.
February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”
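The heart of such a custom rule is a Lambda function that evaluates each VPC and reports the result back to AWS Config. Here is a minimal sketch of what that handler could look like, assuming the rule is configured to evaluate AWS::EC2::VPC resources; it is illustrative rather than the post's exact code.

```python
import json
import boto3

ec2 = boto3.client('ec2')
config = boto3.client('config')

def lambda_handler(event, context):
    # AWS Config passes the changed resource in the invoking event.
    invoking_event = json.loads(event['invokingEvent'])
    item = invoking_event['configurationItem']
    vpc_id = item['resourceId']

    # The VPC is compliant if at least one flow log is attached to it.
    flow_logs = ec2.describe_flow_logs(
        Filters=[{'Name': 'resource-id', 'Values': [vpc_id]}]
    )['FlowLogs']
    compliance = 'COMPLIANT' if flow_logs else 'NON_COMPLIANT'

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': 'AWS::EC2::VPC',
            'ComplianceResourceId': vpc_id,
            'ComplianceType': compliance,
            'OrderingTimestamp': item['configurationItemCaptureTime'],
        }],
        ResultToken=event['resultToken'],
    )
```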
February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users who sign in to AWS services such as Amazon WorkSpaces and Amazon QuickSight with their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which is provided by your virtual or hardware MFA solution. Together, these factors provide additional security by preventing access to AWS services unless users supply a valid MFA code.
February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.
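The post's samples are in Java, but as a rough boto3 equivalent, the multi-hierarchy idea boils down to attaching the same object under parents in different hierarchies. The schema ARN, path selectors, and names below are hypothetical placeholders, and the sketch assumes the employee and parent objects already exist:

```python
import boto3

clouddirectory = boto3.client('clouddirectory')

# Create a directory from an already-published schema (placeholder ARN).
directory = clouddirectory.create_directory(
    Name='org-chart-example',
    SchemaArn='arn:aws:clouddirectory:us-east-1:111122223333:schema/published/org/1.0',
)
directory_arn = directory['DirectoryArn']

# Attach the same employee object under two separate hierarchies:
# once under a manager (reporting structure) ...
clouddirectory.attach_object(
    DirectoryArn=directory_arn,
    ParentReference={'Selector': '/managers/jane'},
    ChildReference={'Selector': '/employees/john'},
    LinkName='john',
)
# ... and once under a location node (location hierarchy).
clouddirectory.attach_object(
    DirectoryArn=directory_arn,
    ParentReference={'Selector': '/locations/seattle'},
    ChildReference={'Selector': '/employees/john'},
    LinkName='john',
)
```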
February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name; no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free of network naming details. This capability works across AD interforest trusts and is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.
February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.
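The post walks through the aws ec2 associate-iam-instance-profile CLI command; the same call through boto3 is a one-liner. The instance ID and instance profile name here are placeholders (the instance profile is the container that wraps the IAM role):

```python
import boto3

ec2 = boto3.client('ec2')

# Attach an instance profile (wrapping the IAM role) to a running instance.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={'Name': 'WebServerRole'},
    InstanceId='i-0123456789abcdef0',
)
```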
January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE).
Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.
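The post describes one specific method; as a generic illustration of the underlying idea (dm-crypt/LUKS driven through cryptsetup), a setup script might look roughly like the following. The device, mapper name, mount point, and inline key are placeholders, and in practice the key material should come from a secure source such as AWS KMS rather than being embedded in code.

```python
import subprocess

DEVICE = '/dev/xvdb'            # hypothetical instance store device
MAPPER = 'encrypted-store'
MOUNT_POINT = '/mnt/scratch'
KEY = b'example-key-material'   # placeholder; fetch securely in real use

# Initialize LUKS on the raw device, reading the key from stdin ('-').
subprocess.run(['cryptsetup', 'luksFormat', '--batch-mode', DEVICE, '-'],
               input=KEY, check=True)

# Open the encrypted device, then create and mount a filesystem on it.
subprocess.run(['cryptsetup', 'luksOpen', DEVICE, MAPPER, '--key-file', '-'],
               input=KEY, check=True)
subprocess.run(['mkfs.ext4', f'/dev/mapper/{MAPPER}'], check=True)
subprocess.run(['mount', f'/dev/mapper/{MAPPER}', MOUNT_POINT], check=True)
```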
January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 access control lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call PutObject with the optional ACL parameter set to public-read, thereby uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.
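The core of that solution is a Lambda function triggered by a CloudWatch Events rule that matches the CloudTrail records for those API calls. A minimal sketch of the handler, assuming that CloudTrail-via-CloudWatch-Events shape, might look like this:

```python
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # The CloudTrail record carries the bucket and key of the affected object.
    detail = event['detail']
    bucket = detail['requestParameters']['bucketName']
    key = detail['requestParameters']['key']

    # Inspect the object's current ACL for grants to the AllUsers group.
    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    is_public = any(
        grant['Grantee'].get('URI', '').endswith('/global/AllUsers')
        for grant in acl['Grants']
    )
    if is_public:
        # Reset the object to private, removing the public grants.
        s3.put_object_acl(Bucket=bucket, Key=key, ACL='private')
```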
January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.
January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing the tools you need to scale your security and compliance capabilities. The following breakdown of one of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.
January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and degrade the experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.
January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.
January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) Region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). The JAB issued the authorization at the High baseline, which enables US government agencies and their service providers to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.
January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.
January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.
January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP port 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.
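As one illustration of the CloudWatch Events approach, a Lambda function triggered on AuthorizeSecurityGroupIngress calls could revoke any rule that opens a port outside the approved set. This is a sketch, not the post's code; the approved port list and event shape are assumptions:

```python
import boto3

ec2 = boto3.client('ec2')

# Hypothetical policy: only HTTP and HTTPS may be opened.
ALLOWED_PORTS = {80, 443}

def lambda_handler(event, context):
    # The CloudTrail record identifies which security group was modified.
    group_id = event['detail']['requestParameters']['groupId']

    # Re-read the group and revoke any ingress rule outside the approved set
    # (rules with no FromPort, such as all-protocol rules, are also revoked).
    group = ec2.describe_security_groups(GroupIds=[group_id])['SecurityGroups'][0]
    for permission in group['IpPermissions']:
        if permission.get('FromPort') not in ALLOWED_PORTS:
            ec2.revoke_security_group_ingress(
                GroupId=group_id, IpPermissions=[permission]
            )
```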
If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.
– Craig