Tag Archives: announcements

Welcoming the AWS Customer Incident Response Team

Post Syndicated from Kyle Dickinson original https://aws.amazon.com/blogs/security/welcoming-the-aws-customer-incident-response-team/

Well, hello there! Thanks for reading our inaugural blog post.

Who are we, you ask? We are the AWS Customer Incident Response Team (CIRT). The AWS CIRT is a specialized 24/7 global Amazon Web Services (AWS) team that provides support to customers during active security events on the customer side of the AWS Shared Responsibility Model. The team is made up of AWS Global Services Consultants and Solutions Architects with experience in incident response.

When the AWS CIRT supports you, you will receive assistance with triage and recovery for an active security event on AWS. We will assist in root cause analysis through the use of AWS service logs and provide you with recommendations for recovery. We will also provide security tips and best practices to help you avoid security events in the future.

AWS CIRT is thrilled to talk with you in this new medium! This is AWS after all; getting customer feedback and taking steps to make your experience better is our number one goal. The AWS CIRT has heard from customers that they are challenged with 24/7 security event prevention, detection, analysis, and response. You’ve told us that you are seeking the right AWS skill sets, knowledge, and best practices to address your security response needs in the case of an active security event. AWS CIRT wants to share our knowledge with you so that you can excel at preventing and detecting security events in the cloud.

Figure 1 shows the two different sides of the shared responsibility model, in which AWS is responsible for security OF the cloud, while customers are responsible for security IN the cloud.

Figure 1: The Customer/AWS Shared Responsibility Model

In addition to this blog post, we’ve been working overtime on our favorite social media platform, AWS Twitch. In December 2021, we developed five episodes to share with you some of the most common causes of security events. We received so much positive customer feedback that we decided to create a bi-weekly series! You can find all episodes of The Safe Room on AWS Twitch.

We mentioned earlier that our number one goal is to help you. We’ve heard from you, and understand that many of you do not have the tools or playbooks necessary to operate your AWS workloads securely. We are pleased to announce that we have developed open-source tools to support your security needs:

Stay tuned to this blog. More tools are coming soon!

We’ve told you who we are and what we do—now, how do you contact us? All AWS customers can engage the AWS CIRT through an AWS support case. Yes, that is correct: we support all AWS customers! For customers who have an account team, you can start an escalation to the AWS CIRT through that team. However, we will always require that you open an AWS support case.
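
If you want to open that support case programmatically, here is a minimal sketch using the AWS Support API through boto3. Two assumptions to note: the Support API requires a Business or Enterprise Support plan, and the serviceCode and categoryCode values below are placeholders that you should replace with real codes returned by describe_services.

import boto3

# The AWS Support API requires a Business or Enterprise Support plan.
support = boto3.client("support", region_name="us-east-1")

# Look up the valid service and category codes for your account first.
services = support.describe_services(language="en")["services"]
print([s["code"] for s in services])

# Open a high-severity case; the codes here are placeholders.
case = support.create_case(
    subject="Active security event - requesting AWS CIRT assistance",
    serviceCode="example-service-code",    # placeholder
    categoryCode="example-category-code",  # placeholder
    severityCode="urgent",
    communicationBody="Describe the active security event and affected resources here.",
)
print("Case ID:", case["caseId"])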

Thanks again for stopping by and giving us your eyeballs for a few minutes. Please stay tuned to this blog, because this is where we will comment on security trends and interesting stuff we find in the security world, as well as make new open source tools public.

Cloud safe, everyone!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Kyle Dickinson

Kyle is a TDIR Senior Security Consultant with the AWS Customer Incident Response Team, a team that supports AWS customers during active security events. Kyle also hosts The Safe Room on the AWS Twitch channel to share information on staying secure on AWS.

New for Amazon GuardDuty – Malware Detection for Amazon EBS Volumes

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-guardduty-malware-detection-for-amazon-ebs-volumes/

With Amazon GuardDuty, you can monitor your AWS accounts and workloads to detect malicious activity. Today, we are adding to GuardDuty the capability to detect malware. Malware is malicious software that is used to compromise workloads, repurpose resources, or gain unauthorized access to data. When you have GuardDuty Malware Protection enabled, a malware scan is initiated when GuardDuty detects that one of your EC2 instances or container workloads running on EC2 is doing something suspicious. For example, a malware scan is triggered when an EC2 instance is communicating with a command-and-control server that is known to be malicious or is performing denial of service (DoS) or brute-force attacks against other EC2 instances.

GuardDuty supports many file system types and scans file formats known to be used to spread or contain malware, including Windows and Linux executables, PDF files, archives, binaries, scripts, installers, email databases, and plain emails.

When potential malware is identified, actionable security findings are generated with information such as the threat and file name, the file path, the EC2 instance ID, resource tags and, in the case of containers, the container ID and the container image used. GuardDuty supports container workloads running on EC2, including customer-managed Kubernetes clusters or individual Docker containers. If the container is managed by Amazon Elastic Kubernetes Service (EKS) or Amazon Elastic Container Service (Amazon ECS), the findings also include the cluster name and the task or pod ID so application and security teams can quickly find the affected container resources.

As with all other GuardDuty findings, malware detections are sent to the GuardDuty console, pushed through Amazon EventBridge, routed to AWS Security Hub, and made available in Amazon Detective for incident investigation.
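
As a sketch of the EventBridge integration, the snippet below creates a rule that matches GuardDuty findings and forwards them to an SNS topic. The topic ARN is hypothetical, and the prefix filter on the malware finding type is an assumption you would adjust to your needs.

import json
import boto3

events = boto3.client("events")

# Match GuardDuty Malware Protection findings (prefix filter is an example).
events.put_rule(
    Name="guardduty-malware-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"type": [{"prefix": "Execution:EC2/MaliciousFile"}]},
    }),
    State="ENABLED",
)

# Route matching findings to a (hypothetical) SNS topic for alerting.
events.put_targets(
    Rule="guardduty-malware-findings",
    Targets=[{"Id": "alerts", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)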

How GuardDuty Malware Protection Works
When you enable malware protection, you set up an AWS Identity and Access Management (IAM) service-linked role that grants GuardDuty permissions to perform malware scans. When a malware scan is initiated for an EC2 instance, GuardDuty Malware Protection uses those permissions to take a snapshot of the attached Amazon Elastic Block Store (EBS) volumes that are less than 1 TB in size and then restore the EBS volumes in an AWS service account in the same AWS Region to scan them for malware. You can use tagging to include or exclude EC2 instances from those permissions and from scanning. In this way, you don’t need to deploy security software or agents to monitor for malware, and scanning the volumes doesn’t impact running workloads. The EBS volumes in the service account and the snapshots in your account are deleted after the scan. Optionally, you can preserve the snapshots when malware is detected.

The service-linked role grants GuardDuty access to AWS Key Management Service (AWS KMS) keys used to encrypt EBS volumes. If the EBS volumes attached to a potentially compromised EC2 instance are encrypted with a customer-managed key, GuardDuty Malware Protection uses the same key to encrypt the replica EBS volumes as well. If the volumes are not encrypted, GuardDuty uses its own key to encrypt the replica EBS volumes and ensure privacy. Volumes encrypted with EBS-managed keys are not supported.

Security in the cloud is a shared responsibility between you and AWS. As a guardrail, the service-linked role used by GuardDuty Malware Protection cannot perform any operation on your resources (such as EBS snapshots and volumes, EC2 instances, and KMS keys) if they have the GuardDutyExcluded tag. Once you mark your snapshots with GuardDutyExcluded set to true, the GuardDuty service won’t be able to access these snapshots. The GuardDutyExcluded tag supersedes any inclusion tag. Permissions also restrict how GuardDuty can modify your snapshots so that they cannot be made public while shared with the GuardDuty service account.
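
For example, here is a minimal boto3 sketch that opts a snapshot out by applying the GuardDutyExcluded tag (the snapshot ID is hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Prevent GuardDuty Malware Protection from accessing this snapshot.
ec2.create_tags(
    Resources=["snap-0123456789abcdef0"],  # hypothetical snapshot ID
    Tags=[{"Key": "GuardDutyExcluded", "Value": "true"}],
)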

The EBS volumes created by GuardDuty are always encrypted. GuardDuty can use KMS keys only on EBS snapshots that have a GuardDuty scan ID tag. The scan ID tag is added by GuardDuty when snapshots are created after an EC2 finding. The KMS keys that are shared with the GuardDuty service account can be used only in the context of the Amazon EBS service. Once the scan completes successfully, the KMS key grant is revoked and the volume replica in the GuardDuty service account is deleted, making sure the GuardDuty service cannot access your data after the scan operation is complete.

Enabling Malware Protection for an AWS Account
If you’re not using GuardDuty yet, Malware Protection is enabled by default when you activate GuardDuty for your account. Because I am already using GuardDuty, I need to enable Malware Protection from the console. If you’re using AWS Organizations, your delegated administrator accounts can enable this for existing member accounts and configure whether new AWS accounts in the organization should be automatically enrolled.

In the GuardDuty console, I choose Malware Protection under Settings in the navigation pane. There, I choose Enable and then Enable Malware Protection.
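
If you manage GuardDuty through code instead, here is a sketch of the equivalent API call with boto3. It assumes a single detector in the Region and uses the data source shape available at launch:

import boto3

gd = boto3.client("guardduty")

# Assumes one detector per Region, which is the common case.
detector_id = gd.list_detectors()["DetectorIds"][0]

# Enable malware scanning of EBS volumes attached to EC2 instances with findings.
gd.update_detector(
    DetectorId=detector_id,
    DataSources={
        "MalwareProtection": {
            "ScanEc2InstanceWithFindings": {"EbsVolumes": True}
        }
    },
)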


Snapshots are automatically deleted after they are scanned. In General settings, I have the option to retain in my AWS account the snapshots where malware is detected and have them available for further analysis.


In Scan options, I can configure a list of inclusion tags, so that only EC2 instances with those tags are scanned, or exclusion tags, so that EC2 instances with tags in the list are skipped.


Testing Malware Protection GuardDuty Findings
To generate several Amazon GuardDuty findings, including the new Malware Protection findings, I clone the Amazon GuardDuty Tester repo:

$ git clone https://github.com/awslabs/amazon-guardduty-tester

First, I create an AWS CloudFormation stack using the guardduty-tester.template file. When the stack is ready, I follow the instructions to configure my SSH client to log in to the tester instance through the bastion host. Then, I connect to the tester instance:

$ ssh tester

From the tester instance, I start the guardduty_tester.sh script to generate the findings:

$ ./guardduty_tester.sh 

***********************************************************************
* Test #1 - Internal port scanning                                    *
* This simulates internal reconnaissance by an internal actor or an  *
* external actor after an initial compromise. This is considered a    *
* low priority finding for GuardDuty because it's not a clear indicator*
* of malicious intent on its own.                                     *
***********************************************************************


Starting Nmap 6.40 ( http://nmap.org ) at 2022-05-19 09:36 UTC
Nmap scan report for ip-172-16-0-20.us-west-2.compute.internal (172.16.0.20)
Host is up (0.00032s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   closed http
5050/tcp closed mmcc
MAC Address: 06:25:CB:F4:E0:51 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 4.96 seconds

-----------------------------------------------------------------------

***********************************************************************
* Test #2 - SSH Brute Force with Compromised Keys                     *
* This simulates an SSH brute force attack on an SSH port that we    *
* can access from this instance. It uses (phony) compromised keys in  *
* many subsequent attempts to see if one works. This is a common      *
* technique where the bad actors will harvest keys from the web in    *
* places like source code repositories where people accidentally leave*
* keys and credentials (This attempt will not actually succeed in     *
* obtaining access to the target linux instance in this subnet)       *
***********************************************************************

2022-05-19 09:36:29 START
2022-05-19 09:36:29 Crowbar v0.4.3-dev
2022-05-19 09:36:29 Trying 172.16.0.20:22
2022-05-19 09:36:33 STOP
2022-05-19 09:36:33 No results found...
2022-05-19 09:36:33 START
2022-05-19 09:36:33 Crowbar v0.4.3-dev
2022-05-19 09:36:33 Trying 172.16.0.20:22
2022-05-19 09:36:37 STOP
2022-05-19 09:36:37 No results found...
2022-05-19 09:36:37 START
2022-05-19 09:36:37 Crowbar v0.4.3-dev
2022-05-19 09:36:37 Trying 172.16.0.20:22
2022-05-19 09:36:41 STOP
2022-05-19 09:36:41 No results found...
2022-05-19 09:36:41 START
2022-05-19 09:36:41 Crowbar v0.4.3-dev
2022-05-19 09:36:41 Trying 172.16.0.20:22
2022-05-19 09:36:45 STOP
2022-05-19 09:36:45 No results found...
2022-05-19 09:36:45 START
2022-05-19 09:36:45 Crowbar v0.4.3-dev
2022-05-19 09:36:45 Trying 172.16.0.20:22
2022-05-19 09:36:48 STOP
2022-05-19 09:36:48 No results found...
2022-05-19 09:36:49 START
2022-05-19 09:36:49 Crowbar v0.4.3-dev
2022-05-19 09:36:49 Trying 172.16.0.20:22
2022-05-19 09:36:52 STOP
2022-05-19 09:36:52 No results found...
2022-05-19 09:36:52 START
2022-05-19 09:36:52 Crowbar v0.4.3-dev
2022-05-19 09:36:52 Trying 172.16.0.20:22
2022-05-19 09:36:56 STOP
2022-05-19 09:36:56 No results found...
2022-05-19 09:36:56 START
2022-05-19 09:36:56 Crowbar v0.4.3-dev
2022-05-19 09:36:56 Trying 172.16.0.20:22
2022-05-19 09:37:00 STOP
2022-05-19 09:37:00 No results found...
2022-05-19 09:37:00 START
2022-05-19 09:37:00 Crowbar v0.4.3-dev
2022-05-19 09:37:00 Trying 172.16.0.20:22
2022-05-19 09:37:04 STOP
2022-05-19 09:37:04 No results found...
2022-05-19 09:37:04 START
2022-05-19 09:37:04 Crowbar v0.4.3-dev
2022-05-19 09:37:04 Trying 172.16.0.20:22
2022-05-19 09:37:08 STOP
2022-05-19 09:37:08 No results found...
2022-05-19 09:37:08 START
2022-05-19 09:37:08 Crowbar v0.4.3-dev
2022-05-19 09:37:08 Trying 172.16.0.20:22
2022-05-19 09:37:12 STOP
2022-05-19 09:37:12 No results found...
2022-05-19 09:37:12 START
2022-05-19 09:37:12 Crowbar v0.4.3-dev
2022-05-19 09:37:12 Trying 172.16.0.20:22
2022-05-19 09:37:16 STOP
2022-05-19 09:37:16 No results found...
2022-05-19 09:37:16 START
2022-05-19 09:37:16 Crowbar v0.4.3-dev
2022-05-19 09:37:16 Trying 172.16.0.20:22
2022-05-19 09:37:20 STOP
2022-05-19 09:37:20 No results found...
2022-05-19 09:37:20 START
2022-05-19 09:37:20 Crowbar v0.4.3-dev
2022-05-19 09:37:20 Trying 172.16.0.20:22
2022-05-19 09:37:23 STOP
2022-05-19 09:37:23 No results found...
2022-05-19 09:37:23 START
2022-05-19 09:37:23 Crowbar v0.4.3-dev
2022-05-19 09:37:23 Trying 172.16.0.20:22
2022-05-19 09:37:27 STOP
2022-05-19 09:37:27 No results found...
2022-05-19 09:37:27 START
2022-05-19 09:37:27 Crowbar v0.4.3-dev
2022-05-19 09:37:27 Trying 172.16.0.20:22
2022-05-19 09:37:31 STOP
2022-05-19 09:37:31 No results found...
2022-05-19 09:37:31 START
2022-05-19 09:37:31 Crowbar v0.4.3-dev
2022-05-19 09:37:31 Trying 172.16.0.20:22
2022-05-19 09:37:34 STOP
2022-05-19 09:37:34 No results found...
2022-05-19 09:37:35 START
2022-05-19 09:37:35 Crowbar v0.4.3-dev
2022-05-19 09:37:35 Trying 172.16.0.20:22
2022-05-19 09:37:38 STOP
2022-05-19 09:37:38 No results found...
2022-05-19 09:37:38 START
2022-05-19 09:37:38 Crowbar v0.4.3-dev
2022-05-19 09:37:38 Trying 172.16.0.20:22
2022-05-19 09:37:42 STOP
2022-05-19 09:37:42 No results found...
2022-05-19 09:37:42 START
2022-05-19 09:37:42 Crowbar v0.4.3-dev
2022-05-19 09:37:42 Trying 172.16.0.20:22
2022-05-19 09:37:46 STOP
2022-05-19 09:37:46 No results found...

-----------------------------------------------------------------------

***********************************************************************
* Test #3 - RDP Brute Force with Password List                        *
* This simulates an RDP brute force attack on the internal RDP port  *
* of the windows server that we installed in the environment.  It uses*
* a list of common passwords that can be found on the web. This test  *
* will trigger a detection, but will fail to get into the target      *
* windows instance.                                                   *
***********************************************************************

Sending 250 password attempts at the windows server...
Hydra v9.4-dev (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).

Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2022-05-19 09:37:46
[WARNING] rdp servers often don't like many connections, use -t 1 or -t 4 to reduce the number of parallel connections and -W 1 or -W 3 to wait between connection to allow the server to recover
[INFO] Reduced number of tasks to 4 (rdp does not like many parallel connections)
[WARNING] the rdp module is experimental. Please test, report - and if possible, fix.
[DATA] max 4 tasks per 1 server, overall 4 tasks, 1792 login tries (l:7/p:256), ~448 tries per task
[DATA] attacking rdp://172.16.0.24:3389/
[STATUS] 1099.00 tries/min, 1099 tries in 00:01h, 693 to do in 00:01h, 4 active
1 of 1 target completed, 0 valid password found
Hydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2022-05-19 09:39:23

-----------------------------------------------------------------------

***********************************************************************
* Test #4 - CryptoCurrency Mining Activity                            *
* This simulates interaction with a cryptocurrency mining pool which *
* can be an indication of an instance compromise. In this case, we are*
* only interacting with the URL of the pool, but not downloading      *
* any files. This will trigger a threat intel based detection.        *
***********************************************************************

Calling bitcoin wallets to download mining toolkits

-----------------------------------------------------------------------

***********************************************************************
* Test #5 - DNS Exfiltration                                          *
* A common exfiltration technique is to tunnel data out over DNS      *
* to a fake domain.  Its an effective technique because most hosts    *
* have outbound DNS ports open.  This test wont exfiltrate any data,  *
* but it will generate enough unusual DNS activity to trigger the     *
* detection.                                                          *
***********************************************************************

Calling large numbers of large domains to simulate tunneling via DNS

***********************************************************************
* Test #6 - Fake domain to prove that GuardDuty is working            *
* This is a permanent fake domain that customers can use to prove that*
* GuardDuty is working.  Calling this domain will always generate the *
* Backdoor:EC2/C&CActivity.B!DNS finding type                         *
***********************************************************************

Calling a well known fake domain that is used to generate a known finding

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> GuardDutyC2ActivityB.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11495
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;GuardDutyC2ActivityB.com.	IN	ANY

;; ANSWER SECTION:
GuardDutyC2ActivityB.com. 6943	IN	SOA	ns1.markmonitor.com. hostmaster.markmonitor.com. 2018091906 86400 3600 2592000 172800
GuardDutyC2ActivityB.com. 6943	IN	NS	ns3.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns5.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns7.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns2.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns4.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns6.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns1.markmonitor.com.

;; Query time: 27 msec
;; SERVER: 172.16.0.2#53(172.16.0.2)
;; WHEN: Thu May 19 09:39:23 UTC 2022
;; MSG SIZE  rcvd: 238


*****************************************************************************************************
Expected GuardDuty Findings

Test 1: Internal Port Scanning
Expected Finding: EC2 Instance  i-011e73af27562827b  is performing outbound port scans against remote host. 172.16.0.20
Finding Type: Recon:EC2/Portscan

Test 2: SSH Brute Force with Compromised Keys
Expecting two findings - one for the outbound and one for the inbound detection
Outbound:  i-011e73af27562827b  is performing SSH brute force attacks against  172.16.0.20
Inbound:  172.16.0.25  is performing SSH brute force attacks against  i-0bada13e0aa12d383
Finding Type: UnauthorizedAccess:EC2/SSHBruteForce

Test 3: RDP Brute Force with Password List
Expecting two findings - one for the outbound and one for the inbound detection
Outbound:  i-011e73af27562827b  is performing RDP brute force attacks against  172.16.0.24
Inbound:  172.16.0.25  is performing RDP brute force attacks against  i-0191573dec3b66924
Finding Type : UnauthorizedAccess:EC2/RDPBruteForce

Test 4: Cryptocurrency Activity
Expected Finding: EC2 Instance  i-011e73af27562827b  is querying a domain name that is associated with bitcoin activity
Finding Type : CryptoCurrency:EC2/BitcoinTool.B!DNS

Test 5: DNS Exfiltration
Expected Finding: EC2 instance  i-011e73af27562827b  is attempting to query domain names that resemble exfiltrated data
Finding Type : Trojan:EC2/DNSDataExfiltration

Test 6: C&C Activity
Expected Finding: EC2 instance  i-011e73af27562827b  is querying a domain name associated with a known Command & Control server. 
Finding Type : Backdoor:EC2/C&CActivity.B!DNS

After a few minutes, the findings appear in the GuardDuty console. At the top, I see the malicious files found by the new Malware Protection capability. One of the findings is related to an EC2 instance, the other to an ECS cluster.


First, I select the finding related to the EC2 instance. In the panel, I see the information on the instance and the malicious file, such as the file name and path. In the Malware scan details section, the Trigger finding ID points to the original GuardDuty finding that triggered the malware scan. In my case, the original finding was that this EC2 instance was performing RDP brute force attacks against another EC2 instance.
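
To pull the same findings programmatically, a sketch like the following filters on the EC2 malware finding type; adjust the criterion for your own use:

import boto3

gd = boto3.client("guardduty")
detector_id = gd.list_detectors()["DetectorIds"][0]

# List only Malware Protection findings for EC2 instances.
finding_ids = gd.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"type": {"Eq": ["Execution:EC2/MaliciousFile"]}}},
)["FindingIds"]

if finding_ids:
    findings = gd.get_findings(DetectorId=detector_id, FindingIds=finding_ids)["Findings"]
    for finding in findings:
        print(finding["Type"], finding["Resource"]["ResourceType"])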


Here, I choose Investigate with Detective and, directly from the GuardDuty console, I go to the Detective console to visualize AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow data for the EC2 instance, the AWS account, and the IP address affected by the finding. Using Detective, I can analyze, investigate, and identify the root cause of suspicious activities found by GuardDuty.


When I select the finding related to the ECS cluster, I have more information on the resource affected, such as the details of the ECS cluster, the task, the containers, and the container images.


Using the GuardDuty tester scripts makes it easier to test the overall integration of GuardDuty with other security frameworks you use so that you can be ready when a real threat is detected.

Comparing GuardDuty Malware Protection with Amazon Inspector
At this point, you might ask yourself how GuardDuty Malware Protection relates to Amazon Inspector, a service that scans AWS workloads for software vulnerabilities and unintended network exposure. The two services complement each other and offer different layers of protection:

  • Amazon Inspector offers proactive protection by identifying and remediating known software and application vulnerabilities that serve as an entry point for attackers to compromise resources and install malware.
  • GuardDuty Malware Protection detects malware that is found to be present on actively running workloads. At that point, the system has already been compromised, but GuardDuty can limit the time of an infection and take action before a system compromise results in a business-impacting event.

Availability and Pricing
Amazon GuardDuty Malware Protection is available today in all AWS Regions where GuardDuty is available, excluding the AWS China (Beijing), AWS China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West) Regions.

At launch, GuardDuty Malware Protection is integrated with these partner offerings:

With GuardDuty, you don’t need to deploy security software or agents to monitor for malware. You only pay for the amount of GB scanned in the file systems (not for the size of the EBS volumes) and for the EBS snapshots during the time they are kept in your account. All EBS snapshots created by GuardDuty are automatically deleted after they are scanned unless you enable snapshot retention when malware is found. For more information, see GuardDuty pricing and EBS pricing. Note that GuardDuty only scans EBS volumes less than 1 TB in size. To help you control costs and avoid repeating alarms, the same volume is not scanned more often than once every 24 hours.

Detect malicious activity and protect your applications from malware with Amazon GuardDuty.

Danilo

Amazon Detective Supports Kubernetes Workloads on Amazon EKS for Security Investigations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-detective-supports-kubernetes-workloads-on-amazon-eks-for-security-investigations/

In March 2020, we introduced Amazon Detective, a fully managed service that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.

Amazon Detective continuously extracts temporal events such as login attempts, API calls, and network traffic from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (Amazon VPC) Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment. We have added new features such as AWS IAM Role session analysis, enhanced IP address analytics, Splunk integration, Amazon S3 and DNS finding types, and the support of AWS Organizations.

Customers are rapidly moving to containers to deploy Kubernetes workloads with Amazon Elastic Kubernetes Service (Amazon EKS). Its highly programmatic nature allows thousands of individual container deployments and millions of configuration changes to occur in seconds. To effectively secure EKS workloads, it is important to monitor container deployments and configurations that are captured in the form of EKS audit logs and to correlate activities to user activity and network traffic happening across AWS accounts.

Today we announce new capabilities in Amazon Detective to expand security investigation coverage for Kubernetes workloads running on Amazon EKS. When you enable this new feature, Amazon Detective automatically starts ingesting EKS audit logs to capture chronological API activity from users, applications, and the control plane in Amazon EKS for clusters, pods, container images, and Kubernetes subjects (Kubernetes users and service accounts).

Detective automatically correlates user activity using CloudTrail and network activity using Amazon VPC Flow Logs, without the need for you to enable, store, or retain logs manually. The service gleans key security information from these logs and retains it in a security behavior graph database that enables fast, cross-referenced access to twelve months of activity. Detective provides a purpose-built data analysis and visualization layer, backed by this behavior graph, that helps you answer common security questions and quickly investigate potential malicious behavior associated with your EKS workloads.

You can rapidly respond to security issues rather than focusing on log management, operational systems, or ongoing security tooling maintenance. Detective’s EKS capabilities come with a free 30-day trial for all customers that allows you to ensure that the capabilities meet your needs and to fully understand the cost for the service on an ongoing basis.

Getting Started with Security Investigations for EKS Audit Logs
To get started, enable Amazon Detective with just a few clicks in the AWS Management Console. GuardDuty is a prerequisite for Amazon Detective: when you try to enable Detective, Detective checks whether GuardDuty has been enabled for your account. If you have just enabled GuardDuty, you must wait 48 hours before enabling Detective; this allows GuardDuty to assess the data volume that your account produces.

You can enable Detective for your account by attaching the required AWS IAM policy, or delegate enablement to an administrator of your organization. To learn more, refer to Setting up Detective in the AWS documentation.

To enable EKS support in Detective as an existing customer, navigate to the Settings menu in the left panel and select General. Under Optional source packages, enable EKS audit logs.

If you are a new customer of Detective, the EKS protection feature will be enabled by default. If you do not want to trial EKS audit logs right away, you can disable this feature within the first week of enabling Detective and preserve the full 30-day free trial period to use in the future.

Once enabled, Detective will begin monitoring the Kubernetes audit logs that are generated by Amazon EKS, extracting and correlating information for security usage. You do not need to enable any log sources or make any configuration changes to your existing EKS clusters or future deployments.
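
If you prefer to enable this from code, the sketch below uses the Detective API; it assumes a single behavior graph in the Region and the EKS_AUDIT data source package name introduced with this launch:

import boto3

detective = boto3.client("detective")

# Assumes a single behavior graph in this Region.
graph_arn = detective.list_graphs()["GraphList"][0]["Arn"]

# Start ingesting EKS audit logs into the behavior graph.
detective.update_datasource_packages(
    GraphArn=graph_arn,
    DatasourcePackages=["EKS_AUDIT"],
)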

You can see recent monitoring results of your EKS clusters on the Summary page.

When you choose one of the EKS clusters, you will see the details of containers running in the cluster, Kubernetes API activities, and network activities that occurred on this resource around the scope time.

In the Overview tab, you also see details about all containers running in the cluster, including their pod, image and security context.

In the Kubernetes API activity tab, you can get an overview of the full API activities involving the EKS cluster. You can choose a time range to drill down based on specific API methods within the EKS cluster. When you select a specific time, you can see API subjects, IP addresses, and the number of API calls by the success, failure, unauthorized, or forbidden state.

You can also see details of Kubernetes API calls newly observed inside this cluster and of subjects with increased call volume inside the cluster.

Enabling GuardDuty EKS Protection
In January 2022, Amazon GuardDuty expanded coverage to EKS cluster activity to identify malicious or suspicious behavior that represents potential threats to container workloads.

When the optional GuardDuty EKS Protection is enabled, GuardDuty will continuously monitor your EKS deployments and alert you to threats detected in your workloads. You can view and investigate these security findings in Detective.
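
EKS Protection maps to the Kubernetes audit logs data source on your GuardDuty detector. A sketch of enabling it with boto3, assuming a single detector in the Region:

import boto3

gd = boto3.client("guardduty")
detector_id = gd.list_detectors()["DetectorIds"][0]

# Turn on GuardDuty EKS Protection (Kubernetes audit log monitoring).
gd.update_detector(
    DetectorId=detector_id,
    DataSources={"Kubernetes": {"AuditLogs": {"Enable": True}}},
)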

With Detective for EKS enabled, you can quickly access information about the resources involved in the finding, such as their CloudTrail and Kubernetes API activity, and netflow information. This can aid in investigation and help you determine root cause, impact, and other related resources that may also be compromised.

To learn more, see How to use new Amazon GuardDuty EKS Protection findings in the AWS Security Blog.

Now Available
You can now use Amazon Detective for EKS protection in all Regions where Amazon Detective is available. This feature is priced based on the volume of audit logs processed and analyzed by Detective.

Detective provides a free 30-day trial to all customers that enable EKS coverage, allowing customers to ensure that Detective’s capabilities meet security needs and to get an estimate of the service’s monthly cost before committing to paid usage. To learn more, see the Detective pricing page.

For technical documentation, visit the Amazon Detective User Guide. Please send feedback to AWS re:Post for Amazon Detective or through your usual AWS support contacts.

Learn all the details about Amazon Detective for EKS protection and get started today.

Channy

AWS Week In Review – July 25, 2022

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-25-2022/

A few weeks ago, we hosted the first EMEA AWS Heroes Summit in Milan, Italy. This past week, I had the privilege to join the Americas AWS Heroes Summit in Seattle, Washington, USA. Meeting with our community experts is always inspiring and a great opportunity to learn from each other. During the Summit, AWS Heroes from North America and Latin America shared their thoughts with AWS developer advocates and product teams on topics such as serverless, containers, machine learning, data, and DevTools. You can learn more about the AWS Heroes program here.

AWS Heroes Summit Americas 2022

Last Week’s Launches
Here are some launches that got my attention during the previous week:

Cloudscape Design System – Cloudscape is an open source design system for creating web applications. It was built for and is used by AWS products and services. We created it in 2016 to improve the user experience across web applications owned by AWS services and also to help teams implement those applications faster. If you’ve ever used the AWS Management Console, you’ve seen Cloudscape in action. If you are building a product that extends the AWS Management Console, designing a user interface for a hybrid cloud management system, or setting up an on-premises solution that uses AWS, have a look at Cloudscape Design System.


AWS re:Post introduces community-generated articles – AWS re:Post gives you access to a vibrant community that helps you become even more successful on AWS. Expert community members can now share technical guidance and knowledge beyond answering questions through the Articles feature. Using this feature, community members can share best practices and troubleshooting processes and address customer needs around AWS technology in greater depth. The Articles feature is unlocked for community members who have achieved Rising Star status on re:Post or subject matter experts who built their reputation in the community based on their contributions and certifications. If you have a Rising Star status on re:Post, start writing articles now! All other members can unlock Rising Star status through community contributions or simply browse available articles today on re:Post.


AWS Lambda announces support for attribute-based access control (ABAC) and new IAM condition key – You can now use attribute-based access control (ABAC) with AWS Lambda to control access to functions within AWS Identity and Access Management (IAM) using tags. ABAC is an authorization strategy that defines access permissions based on attributes. In AWS, these attributes are called tags. With ABAC, you can scale an access control strategy by setting granular permissions with tags without requiring permissions updates for every new user or resource as your organization scales. Read this blog post by Julian Wood and Chris McPeek to learn more.

AWS Lambda also announced support for lambda:SourceFunctionArn, a new IAM condition key that can be used for IAM policy conditions that specify the Amazon Resource Name (ARN) of the function from which a request is made. You can use the Condition element in your IAM policy to compare the lambda:SourceFunctionArn condition key in the request context with values that you specify in your policy. This allows you to implement advanced security controls for the AWS API calls taken by your Lambda function code. For more details, have a look at the Lambda Developer Guide.
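
To make that concrete, here is a sketch of attaching an inline policy that allows an S3 read only when the call comes from one specific function. The bucket, role, and function names are hypothetical:

import json
import boto3

iam = boto3.client("iam")

# Allow s3:GetObject only when the request originates from one Lambda function.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical bucket
        "Condition": {
            "ArnEquals": {
                "lambda:SourceFunctionArn":
                    "arn:aws:lambda:us-east-1:123456789012:function:example-fn"
            }
        },
    }],
}

iam.put_role_policy(
    RoleName="example-fn-role",   # hypothetical execution role
    PolicyName="scoped-s3-read",
    PolicyDocument=json.dumps(policy),
)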

Amazon Fraud Detector launches Account Takeover Insights (ATI) – Amazon Fraud Detector now supports an Account Takeover Insights (ATI) model, a low-latency fraud detection machine learning model specifically designed to detect accounts that have been compromised through stolen credentials, phishing, social engineering, or other forms of account takeover. The ATI model is designed to detect up to four times more ATI fraud than traditional rules-based account takeover solutions while minimizing the level of friction for legitimate users. To learn more, have a look at the Amazon Fraud Detector documentation.

Amazon EMR on EC2 clusters (EMR Clusters) introduces more fine-grained access controls – Previously, all jobs running on an EMR cluster used the IAM role associated with the EMR cluster’s EC2 instances to access resources. This role is called the EMR EC2 instance profile. With the new runtime roles for Amazon EMR Steps, you can now specify a different IAM role for your Apache Spark and Hive jobs, scoping down access at a job level. This simplifies access controls on a single EMR cluster that is shared between multiple tenants, wherein each tenant is isolated using IAM roles. You can now also enforce table and column permissions based on your Amazon EMR runtime role to manage your access to data lakes with AWS Lake Formation. For more details, read the What’s New post.
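
As a sketch of the new runtime roles, the snippet below submits a Spark step with its own IAM role via the ExecutionRoleArn parameter; the cluster ID, role ARN, and script location are placeholders:

import boto3

emr = boto3.client("emr")

# Submit a Spark step that runs under its own scoped-down IAM role.
emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE1234567",                                     # placeholder cluster ID
    ExecutionRoleArn="arn:aws:iam::123456789012:role/emr-step-role",  # placeholder role
    Steps=[{
        "Name": "spark-job",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/job.py"],
        },
    }],
)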

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional news and customer stories you may find interesting:

AWS open-source news and updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #121 here.

AI Use Case Explorer – If you are interested in AI use cases, have a look at the new AI Use Case Explorer. You can search over 100 use cases and 400 customer success stories by industry, business function, and the business outcome you want to achieve.

Bayer centralizes and standardizes data from the carbon program using AWS – To help Brazilian farmers adopt climate-smart agricultural practices and reduce carbon emissions in their activities, Bayer created the Carbon Program, which aims to build carbon-neutral agriculture practices. Learn how Bayer uses AWS to centralize and standardize the data received from the different partners involved in the project in this Bayer case study.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS re:Inforce 2022 – The event will be held this week in person on July 26 and 27 in Boston, Massachusetts, USA. You can watch the keynote and leadership sessions online for free. AWS On Air will also stream live from re:Inforce.

AWS Global Summits – AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Registrations are open for the following AWS Summits in August:

IMAGINE 2022 – The IMAGINE 2022 conference will take place on August 3 at the Seattle Convention Center, Washington, USA. It’s a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud. You can register here.

I’ll be speaking at Data Con LA on August 13–14 in Los Angeles, California, USA. Feel free to say “Hi!” if you’re around. And if you happen to be at Ray Summit on August 23–24 in San Francisco, California, USA, stop by the AWS booth. I’ll be there to discuss all things Ray on AWS.

That’s all for this week. Check back next Monday for another Week in Review!

Antje

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Amazon Prime Day 2022 – AWS for the Win!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-for-the-win/

As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, and 2021 posts for a look back).

My purchases this year included a first aid kit, some wood brown filament for my 3D printer, and a non-stick frying pan! According to our official news release, Prime members worldwide purchased more than 100,000 items per minute during Prime Day, with best-selling categories including Amazon Devices, Consumer Electronics, and Home.

Powered by AWS
As always, AWS played a critical role in making Prime Day a success. A multitude of two-pizza teams worked together to make sure that every part of our infrastructure was scaled, tested, and ready to serve our customers. Here are a few examples:

Amazon Aurora – On Prime Day, 5,326 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 288 billion transactions, stored 1,849 terabytes of data, and transferred 749 terabytes of data.

Amazon EC2 – For Prime Day 2022, Amazon increased the total number of normalized instances (an internal measure of compute power) on Amazon Elastic Compute Cloud (Amazon EC2) by 12%. This resulted in an overall server equivalent footprint that was only 7% larger than that of Cyber Monday 2021 due to the increased adoption of AWS Graviton2 processors.

Amazon EBS – For Prime Day, the Amazon team added 152 petabytes of EBS storage. The resulting fleet handled 11.4 trillion requests per day and transferred 532 petabytes of data per day. Interestingly enough, due to increased efficiency of some of the internal Amazon services used to run Prime Day, Amazon actually used about 4% less EBS storage and transferred 13% less data than it did during Prime Day last year. Here’s a graph that shows the increase in data transfer during Prime Day:

Amazon SES – In order to keep Prime Day shoppers aware of the deals and to deliver order confirmations, Amazon Simple Email Service (SES) peaked at 33,000 Prime Day email messages per second.

Amazon SQS – During Prime Day, Amazon Simple Queue Service (SQS) set a new traffic record by processing 70.5 million messages per second at peak:

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 105.2 million requests per second.

Amazon SageMaker – The Amazon Robotics Pick Time Estimator, which uses Amazon SageMaker to train a machine learning model to predict the amount of time future pick operations will take, processed more than 100 million transactions during Prime Day 2022.

Package Planning – In North America, and on the highest traffic Prime 2022 day, package-planning systems performed 60 million AWS Lambda invocations, processed 17 terabytes of compressed data in Amazon Simple Storage Service (Amazon S3), stored 64 million items across Amazon DynamoDB and Amazon ElastiCache, served 200 million events over Amazon Kinesis, and handled 50 million Amazon Simple Queue Service events.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!

Jeff;

AWS re:Inforce 2022: Network & Infrastructure Security track preview

Post Syndicated from Satinder Khasriya original https://aws.amazon.com/blogs/security/aws-reinforce-2022-network-infrastructure-security-track-preview/

Register now with discount code SALvWQHU2Km to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we’re going to highlight just some of the network and infrastructure security focused sessions planned for AWS re:Inforce. AWS re:Inforce 2022 will take place in-person in Boston, MA July 26-27. AWS re:Inforce is a learning conference focused on security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, demos of the latest technology, an AWS Partner expo hall, a keynote speech from AWS Security leaders, and more. re:Inforce 2022 organizes content across multiple themed tracks: identity and access management; threat detection and incident response; governance, risk, and compliance; networking and infrastructure security; and data protection and privacy. This post describes some of the Breakout sessions, Chalk Talk sessions, Builders’ sessions, and Workshops that are planned for the Network & Infrastructure Security track. For information on the other re:Inforce tracks, see our previous re:Inforce blog posts.

Breakout sessions

These are lecture-style presentations that cover topics at all levels, delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

NIS201: An overview of AWS firewall services and where to use them

In this session, review the firewall services that can be used on AWS, including OS firewalls (Windows and Linux), security groups, network ACLs, AWS Network Firewall, and AWS WAF. This session covers a quick description of each service and where to use it, and then offers strategies to help you get the most out of these services.

NIS306: Automating patch management and compliance using AWS

In this session, learn how you can use AWS to automate one of the most common operational challenges that emerge on the journey to the cloud: patch management and compliance. AWS gives you visibility into, and control of, your infrastructure using AWS Systems Manager. See firsthand how to set up and configure an automated, multi-account, multi-Region patching operation using Amazon CloudWatch Events, AWS Lambda, and AWS Systems Manager.

NIS307: AWS Internet access at scale: Designing a cloud-native internet edge

Today’s on-premises infrastructure typically has a single internet gateway that is sized to handle all corporate traffic. With AWS, infrastructure as code allows you to deploy different internet access patterns, including distributed DMZs. Automated queries mean you can identify your infrastructure with an API call, and ubiquitous instrumentation allows precise anomaly detection. In this session, learn about AWS native security tools like Amazon API Gateway, AWS WAF, Elastic Load Balancing (including Application Load Balancer), and AWS Network Firewall. These options can help you simplify internet service delivery and improve your agility.

NIS308: Deploying AWS Network Firewall at scale: athenahealth’s journey

When the Log4j vulnerability became known in December 2021, athenahealth made the decision to increase their cloud security posture by adding AWS Network Firewall to over 100 accounts and 230 VPCs. Join this session to learn about their initial deployment of a distributed architecture and how they were able to reduce their costs by approximately two-thirds by moving to a centralized model. The session also covers firewall policy creation, optimization, and management at scale. The session is aimed at architects and leaders focused on network and perimeter security that are interested in deploying AWS Network Firewall.

Builders’ sessions

These are small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

NIS251: Building security defenses for edge computing devices

Once devices run applications at the edge and are interacting with various AWS services, establishing a compliant and secure computing environment is necessary. It’s also necessary to monitor for unexpected behaviors, such as a device running malicious code or mining cryptocurrency. This builders’ session walks you through how to build security mechanisms to detect unexpected behaviors and take automated corrective actions for edge devices at scale using AWS IoT Device Defender and AWS IoT Greengrass.

NIS252: Analyze your network using Amazon VPC Network Access Analyzer

In this builders’ session, review how the new Amazon VPC Network Access Analyzer helps you identify network configurations that can lead to unintended network access. Learn ways that you can improve your security posture while still allowing you and your organization to be agile and flexible.

Chalk Talk sessions

These are highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

NIS332: Implementing traffic inspection capabilities at scale on AWS

Join this chalk talk to learn about a broad range of security offerings to integrate firewall services into your network, including AWS WAF, AWS Network Firewall, and third-party security products. Learn how to choose network architectures for these firewalling options to help protect inbound traffic to your internet-facing applications. Also learn best practices for applying security controls to various traffic flows, such as internet egress, east-west, and internet ingress.

NIS334: Building Zero Trust from the inside out

What is a protect surface and how can it simplify achieving Zero Trust outcomes on AWS? In this chalk talk, discover how to layer security controls on foundational services, such as Amazon EC2, Amazon EKS, and Amazon S3, to achieve Zero Trust. Starting with these foundational services, learn about AWS services and partner offerings to add security layer by layer. Learn how you can satisfy common Zero Trust use cases on AWS, including user, device, and system authentication and authorization.

Workshops

These are interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

NIS372: Build a DDoS-resilient perimeter and enable automatic protection at scale

In this workshop, learn how to build a DDoS-resilient perimeter and how to use services like AWS Shield, AWS WAF, AWS Firewall Manager, and Amazon CloudFront to architect for DDoS resiliency and maintain robust operational capabilities that allow rapid detection and engagement during high-severity events. Learn how to detect and filter out malicious web requests, reduce attack surface, and protect web-facing workloads at scale with maximum automation and visibility.

NIS373: Open-source security appliances with ELB Gateway Load Balancer

ELB Gateway Load Balancer (GWLB) can help you deploy and scale security appliances on AWS. This workshop focuses on integrating GWLB with Suricata, an open-source threat detection engine. Learn about the mechanics of GWLB, build rules for GeoIP blocking, and write scripts for enhanced malware detection. The architecture relies on AWS Transit Gateway for centralized inspection; you automate it using a GitOps CI/CD approach.

NIS375: Segmentation and security inspection for global networks with AWS Cloud WAN

In this workshop, learn how to build and design connectivity for global networks using native AWS services. The workshop includes a discussion of security concepts such as segmentation, centralized network security controls, and creating a balance between self-service and governance at scale. Understand new services like AWS Cloud WAN and AWS Direct Connect SiteLink, as well as how they interact with existing services like AWS Transit Gateway, AWS Network Firewall, and SD-WAN. Use cases covered include federated networking models for large enterprises, using AWS as a WAN, SD-WAN at scale, and building extranets for partner connectivity.

NIS374: Strengthen your web application defenses with AWS WAF

In this workshop, use AWS WAF to build an effective set of controls around your web application and perform monitoring and analysis of traffic that is analyzed by your web ACL. Learn to use AWS WAF to mitigate common attack vectors against web applications such as SQL injection and cross-site scripting. Additionally, learn how to use AWS WAF for advanced protections such as bot mitigation and JSON inspection. Also find out how to use AWS WAF logging, query logs with Amazon Athena, and near-real-time dashboards to analyze requests inspected by AWS WAF.

If any of the above sessions look interesting, consider joining us by registering for AWS re:Inforce 2022. We look forward to seeing you in Boston!

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satinder Khasriya

Satinder leads the product marketing strategy and implementation for AWS Network and Application Protection services. Prior to AWS, Satinder spent the last decade leading product marketing for various network security solutions across several technologies, including network firewall, intrusion prevention, and threat intelligence. Satinder lives in Austin, Texas, and enjoys spending time with his family and traveling.

New – Amazon EC2 R6a Instances Powered by 3rd Gen AMD EPYC Processors for Memory-Intensive Workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r6a-instances-powered-by-3rd-gen-amd-epyc-processors-for-memory-intensive-workloads/

We launched the general-purpose Amazon EC2 M6a instances at AWS re:Invent 2021 and compute-intensive C6a instances in February of this year. These instances are powered by the 3rd Gen AMD EPYC processors running at frequencies up to 3.6 GHz to offer up to 35 percent better price performance versus the previous generation instances.

Today, we are expanding our portfolio to include memory-optimized Amazon EC2 R6a instances, featuring AMD EPYC (Milan) processors and priced up to 10 percent lower than comparable x86-based instances.

R6a instances, powered by 3rd Gen AMD EPYC processors, are well suited for memory-intensive applications such as high-performance databases (relational and NoSQL), distributed web-scale in-memory caches (such as Memcached and Redis), in-memory databases, real-time big data analytics (such as Hadoop and Spark clusters), and other enterprise applications.

R6a instances are built on the AWS Nitro System and support Elastic Fabric Adapter (EFA) for workloads that benefit from lower network latency and highly scalable inter-node communication, such as high-performance computing and video processing.

Here’s a quick recap of the advantages of the new R6a instances compared to R5a instances:

  • Up to 35 percent higher price performance per vCPU versus comparable R5a instances
  • Up to 10 percent less expensive than comparable x86 instances
  • Up to 1536 GiB of memory, 2 times more than the previous generation, giving you the benefit of scaling up databases and running larger in-memory workloads.
  • Up to 192 vCPUs, 50 Gbps enhanced networking, and 40 Gbps EBS bandwidth, enabling you to process data faster, consolidate workloads, and lower the cost of ownership.
  • SAP certification for memory-intensive applications such as high-performance enterprise databases like SAP Business Suite.
  • Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME), and support for new AVX2 instructions that accelerate encryption and decryption algorithms.

Here are the specs of R6a instances in detail:

Name          vCPUs  Memory (GiB)  Network Bandwidth (Gbps)  EBS Throughput (Gbps)
r6a.large         2            16  Up to 12.5                Up to 6.6
r6a.xlarge        4            32  Up to 12.5                Up to 6.6
r6a.2xlarge       8            64  Up to 12.5                Up to 6.6
r6a.4xlarge      16           128  Up to 12.5                Up to 6.6
r6a.8xlarge      32           256  12.5                      6.6
r6a.12xlarge     48           384  18.75                     10
r6a.16xlarge     64           512  25                        13.3
r6a.24xlarge     96           768  37.5                      20
r6a.32xlarge    128          1024  50                        26.6
r6a.48xlarge    192          1536  50                        40
r6a.metal       192          1536  50                        40
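
Launching one of these instances works like any other instance type; only the InstanceType value changes. Here is a quick boto3 sketch with a placeholder AMI ID:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single memory-optimized R6a instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="r6a.xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])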

Now Available
You can launch R6a instances today in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), and Europe (Frankfurt, Ireland) Regions as On-Demand, Spot, or Reserved Instances, or as part of a Savings Plan.

To learn more, visit the R6a instances page. Please send feedback to [email protected], AWS re:Post for EC2, or through your usual AWS Support contacts.

— Channy

Introducing Amazon CodeWhisperer in the AWS Lambda console (In preview)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-amazon-codewhisperer-in-the-aws-lambda-console-in-preview/

This blog post is written by Mark Richman, Senior Solutions Architect.

Today, AWS is launching a new capability to integrate the Amazon CodeWhisperer experience with the AWS Lambda console code editor.

Amazon CodeWhisperer is a machine learning (ML)–powered service that helps improve developer productivity. It generates code recommendations based on your existing code and on code comments written in natural language.

CodeWhisperer is available as part of the AWS toolkit extensions for major IDEs, including JetBrains, Visual Studio Code, and AWS Cloud9, currently supporting Python, Java, and JavaScript. In the Lambda console, CodeWhisperer is available as a native code suggestion feature, which is the focus of this blog post.

CodeWhisperer is currently available in preview with a waitlist. This blog post explains how to request access to and activate CodeWhisperer for the Lambda console. Once activated, CodeWhisperer can make code recommendations on-demand in the Lambda code editor as you develop your function. During the preview period, developers can use CodeWhisperer at no cost.

Amazon CodeWhisperer

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications and only pay for what you use.

With Lambda, you can build your functions directly in the AWS Management Console and take advantage of CodeWhisperer integration. CodeWhisperer in the Lambda console currently supports functions using the Python and Node.js runtimes.

When writing AWS Lambda functions in the console, CodeWhisperer analyzes the code and comments, determines which cloud services and public libraries are best suited for the specified task, and recommends a code snippet directly in the source code editor. The code recommendations provided by CodeWhisperer are based on ML models trained on a variety of data sources, including Amazon and open source code. Developers can accept the recommendation or simply continue to write their own code.

Requesting CodeWhisperer access

CodeWhisperer integration with Lambda is currently available as a preview only in the N. Virginia (us-east-1) Region. To use CodeWhisperer in the Lambda console, you must first sign up to access the service in preview here or request access directly from within the Lambda console.

In the AWS Lambda console, under the Code tab, in the Code source editor, select the Tools menu, and Request Amazon CodeWhisperer Access.

Request CodeWhisperer access in Lambda console

You may also request access from the Preferences pane.

Request CodeWhisperer access in Lambda console preference pane

Selecting either of these options opens the sign-up form.

CodeWhisperer sign up form

Enter your contact information, including your AWS account ID. This is required to enable the AWS Lambda console integration. You will receive a welcome email from the CodeWhisperer team once they approve your request.

Activating Amazon CodeWhisperer in the Lambda console

Once AWS enables your preview access, you must turn on the CodeWhisperer integration in the Lambda console, and configure the required permissions.

From the Tools menu, enable Amazon CodeWhisperer Code Suggestions.

Enable CodeWhisperer code suggestions

You can also enable code suggestions from the Preferences pane:

Enable CodeWhisperer code suggestions from Preferences pane

The first time you activate CodeWhisperer, you see a pop-up containing terms and conditions for using the service.

CodeWhisperer Preview Terms

Read the terms and conditions and choose Accept to continue.

AWS Identity and Access Management (IAM) permissions

For CodeWhisperer to provide recommendations in the Lambda console, you must enable the proper AWS Identity and Access Management (IAM) permissions for either your IAM user or role. In addition to Lambda console editor permissions, you must add the codewhisperer:GenerateRecommendations permission.

Here is a sample IAM policy that grants a user permission to the Lambda console as well as CodeWhisperer:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "LambdaConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateEventSourceMapping",
        "lambda:CreateFunction",
        "lambda:DeleteEventSourceMapping",
        "lambda:GetAccountSettings",
        "lambda:GetEventSourceMapping",
        "lambda:GetFunction",
        "lambda:GetFunctionCodeSigningConfig",
        "lambda:GetFunctionConcurrency",
        "lambda:GetFunctionConfiguration",
        "lambda:InvokeFunction",
        "lambda:ListEventSourceMappings",
        "lambda:ListFunctions",
        "lambda:ListTags",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateEventSourceMapping",
        "iam:AttachRolePolicy",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:SimulatePrincipalPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeWhispererPermissions",
      "Effect": "Allow",
      "Action": ["codewhisperer:GenerateRecommendations"],
      "Resource": "*"
    }
  ]
}

This example is for illustration only. It is best practice to use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

Demo

To activate and work with code suggestions, use the following keyboard shortcuts:

  • Manually fetch a code suggestion: Option+C (macOS), Alt+C (Windows)
  • Accept a suggestion: Tab
  • Reject a suggestion: ESC, Backspace, scroll in any direction, or keep typing and the recommendation automatically disappears.

Currently, the IDE extensions provide automatic suggestions and can show multiple suggestions. The Lambda console integration requires a manual fetch and shows a single suggestion.

Here are some common ways to use CodeWhisperer while authoring Lambda functions.

Single-line code completion

When typing single lines of code, CodeWhisperer suggests how to complete the line.

CodeWhisperer single-line completion

Full function generation

CodeWhisperer can generate an entire function based on your function signature or code comments. In the following example, a developer has written a function signature for reading a file from Amazon S3. CodeWhisperer then suggests a full implementation of the read_from_s3 method.

CodeWhisperer full function generation

CodeWhisperer may include import statements as part of its suggestions, as in the previous example. As a best practice to improve performance, manually move these import statements outside the function handler, as in the sketch that follows.
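For illustration only, here is a hypothetical sketch of the kind of implementation CodeWhisperer might suggest for the read_from_s3 signature, with the import already moved to module level as recommended:

import boto3

s3 = boto3.client("s3")

def read_from_s3(bucket, key):
    # Retrieve the object and return its body as a UTF-8 string
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")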

Generate code from comments

CodeWhisperer can also generate code from comments. The following example shows how CodeWhisperer generates code to use AWS APIs to upload files to Amazon S3. Write a comment describing the intended functionality and, on the following line, activate the CodeWhisperer suggestions. Given the context from the comment, CodeWhisperer first suggests the function signature code in its recommendation.

CodeWhisperer generate function signature code from comments

After you accept the function signature, CodeWhisperer suggests the rest of the function code.

CodeWhisperer generate function code from comments

When you accept the suggestion, CodeWhisperer completes the entire code block.

CodeWhisperer generates code to write to S3.
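As a rough illustration (the function name and body are hypothetical, not taken verbatim from the screenshots), a completed block generated from a comment like "Function to upload a file to an S3 bucket" might resemble:

import boto3

s3 = boto3.client("s3")

# Function to upload a file to an S3 bucket
def upload_to_s3(bucket, key, filename):
    s3.upload_file(Filename=filename, Bucket=bucket, Key=key)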

CodeWhisperer can help write code that accesses many other AWS services. In the following example, a code comment indicates that a function is sending a notification using Amazon Simple Notification Service (SNS). Based on this comment, CodeWhisperer suggests a function signature.

CodeWhisperer function signature for SNS

If you accept the suggested function signature, CodeWhisperer suggests a complete implementation of the send_notification function.

CodeWhisperer function send notification for SNS
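A suggested implementation of send_notification might resemble the following sketch (the topic ARN and message are supplied by the caller; the code is illustrative, not CodeWhisperer's literal output):

import boto3

sns = boto3.client("sns")

# Function to send a notification to an SNS topic
def send_notification(topic_arn, message):
    response = sns.publish(TopicArn=topic_arn, Message=message)
    return response["MessageId"]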

The same procedure works with Amazon DynamoDB. When writing a code comment indicating that the function is to get an item from a DynamoDB table, CodeWhisperer suggests a function signature.

CodeWhisperer DynamoDB function signature

When accepting the suggestion, CodeWhisperer then suggests a full code snippet to complete the implementation.

CodeWhisperer DynamoDB code snippet

After reviewing the suggestion, a common refactoring step in this example is to manually move the references to the DynamoDB resource and table outside the get_item function, as in the following sketch.
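A sketch of the refactored result might look like this; the table name and key attribute are placeholders:

import boto3

# Per the refactoring note above, the resource and table references
# live at module level rather than inside get_item
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")

# Function to get an item from a DynamoDB table
def get_item(key):
    response = table.get_item(Key={"id": key})
    return response.get("Item")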

CodeWhisperer can also recommend complex algorithm implementations, such as insertion sort.

CodeWhisperer insertion sort.
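For reference, a plain-Python insertion sort similar to what CodeWhisperer might suggest looks like this:

def insertion_sort(items):
    # Sort the list in place in ascending order
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements one position to the right
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items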

As a best practice, always test the code recommendation for completeness and correctness.

CodeWhisperer not only provides suggested code snippets when integrating with AWS APIs, but can help you implement common programming idioms, including proper error handling.

Conclusion

CodeWhisperer is a general purpose, machine learning-powered code generator that provides you with code recommendations in real time. When activated in the Lambda console, CodeWhisperer generates suggestions based on your existing code and comments, helping to accelerate your application development on AWS.

To get started, visit https://aws.amazon.com/codewhisperer/. Share your feedback with us at [email protected].

For more serverless learning resources, visit Serverless Land.

AWS achieves TISAX certification (Information with Very High Protection Needs (AL3))

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-tisax-certification-information-with-very-high-protection-needs-al3/

We’re excited to announce the completion of the Trusted Information Security Assessment Exchange (TISAX) certification on June 30, 2022 for 19 AWS Regions. These Regions achieved the Information with Very High Protection Needs (AL3) label for the control domains Information Handling and Data Protection. This alignment with TISAX requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS automotive customers can run their applications in the certified AWS Regions with confidence.

The following 19 Regions are currently TISAX certified:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Oregon)
  • Africa (Cape Town)
  • Asia Pacific (Hong Kong)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Osaka)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Milan)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

TISAX is a European automotive industry-standard information security assessment (ISA) catalog based on key aspects of information security, such as data protection and connection to third parties.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the European Network Exchange (ENX) Portal (the scope ID and assessment ID are SM22TH and AYA2D4-1, respectively) and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose TISAX.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about TISAX compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

AWS achieves HDS certification in three additional Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-hds-certification-to-three-additional-regions/

We’re excited to announce that three additional AWS Regions—Asia Pacific (Korea), Europe (London), and Europe (Stockholm)—have been granted the Health Data Hosting (Hébergeur de Données de Santé, HDS) certification. This alignment with the HDS requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS customers who handle personal health data can host their workloads in the certified AWS Regions with confidence.

The following 16 Regions are now in scope of this certification:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

Introduced by the French governmental agency for health, Agence Française de la Santé Numérique (ASIP Santé), HDS certification aims to strengthen the security and protection of personal health data. Achieving this certification demonstrates that AWS provides a framework for technical and governance measures to secure and protect personal health data, governed by French law.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the Agence du Numérique en Santé (ANS) website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose HDS.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about HDS compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

New — Detect and Resolve Issues Quickly with Log Anomaly Detection and Recommendations from Amazon DevOps Guru

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-detect-and-resolve-issues-quickly-with-log-anomaly-detection-and-recommendations-from-amazon-devops-guru/

Today, we are announcing a new feature, Log Anomaly Detection and Recommendations for Amazon DevOps Guru. With this feature, you can find anomalies throughout relevant logs within your application and get targeted recommendations to resolve issues.

AWS launched DevOps Guru, a fully managed AIOps platform service, in December 2020 to make it easier for developers and operators to improve applications’ reliability and availability. DevOps Guru minimizes the time needed for issue remediation by using machine learning models based on more than 20 years of operational expertise in building, scaling, and maintaining applications for Amazon.com.

You can use DevOps Guru to identify anomalies such as increased latency, error rates, and resource constraints and then send alerts with a description and actionable recommendations for remediation. You don’t need any prior knowledge in machine learning to use DevOps Guru, and only need to activate it in the DevOps Guru dashboard.

New Feature – Log Anomaly Detection and Recommendations

Observability and monitoring are integral parts of DevOps and modern applications. Applications can generate several types of telemetry, one of which is metrics, to reveal the performance of applications and to help identify issues.

While the metrics analyzed by DevOps Guru today are critical to surfacing issues occurring in applications, it is still challenging to find the root cause of these issues. As applications become more distributed and complex, developers and IT operators need more automation to reduce the time and effort spent detecting, debugging, and resolving operational issues. By sourcing relevant logs in conjunction with metrics, developers can now more effectively monitor and troubleshoot their applications.

With this new Log Anomaly Detection and Recommendations feature, you can get insights along with precise recommendations from application logs without manual effort. This feature delivers contextualized log data of anomaly occurrences and provides actionable insights from recommendations integrated inside the DevOps Guru dashboard.

The Log Anomaly Detection and Recommendations feature is able to detect exception keywords, numerical anomalies, HTTP status codes, data format anomalies, and more. When DevOps Guru identifies anomalies from logs, you will find relevant log samples and deep links to CloudWatch Logs on the DevOps Guru dashboard. These contextualized logs are an important component of the targeted recommendations that DevOps Guru provides to speed up troubleshooting and issue remediation.

Let’s Get Started!

This new feature consists of two things, “Log Anomaly Detection” and “Recommendations.” Let’s explore further into how we can use this feature to find the root cause of an issue and get recommendations. As an example, we’ll look at my serverless API built using Amazon API Gateway, with AWS Lambda integrated with Amazon DynamoDB. The architecture is shown in the following image:

If it’s your first time using DevOps Guru, you’ll need to enable it by visiting the DevOps Guru dashboard. You can learn more by visiting the Getting Started page.

Since I’ve already enabled DevOps Guru, I can go to the Insights page, navigate to the Log groups section, and select Enable log anomaly detection.

Log Anomaly Detection

After a few hours, I can visit the DevOps Guru dashboard to check for insights. Here, I get some findings from DevOps Guru, as seen in the following screenshots:

With Log Anomaly Detection, DevOps Guru will show the findings of my serverless API in the Log groups section, as seen in the following screenshot:

I can hover over the anomaly and get a high-level summary of the contextualized enrichment data found in this log group. It also provides me with additional information, including the number of log records analyzed and the log scan time range. From this information, I know these anomalies are new event types that have not been detected in the past with the keyword ERROR.

To investigate further, I can select the log group link and go to the Detail page. The graph shows relevant events that might have occurred around these log showcases, which is a helpful context for troubleshooting the root cause. This Detail page includes different showcases, each representing a cluster of similar log events, like exception keywords and numerical anomalies, found in the logs at the time of the anomaly.

Looking at the first log showcase, I noticed a ConditionalCheckFailedException error within the AWS Lambda function. This occurs when a DynamoDB request made by the function fails its conditional check. From here, I learned that there was an error in the conditional check section, and I reviewed the logic in the AWS Lambda function. I can also investigate related CloudWatch Logs groups by selecting the View details in CloudWatch links.
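To make the error concrete, here is a minimal sketch (table, key, and attribute names are placeholders, not from my actual function) of a conditional write that raises this exception when the condition is not met:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("orders")

def create_order(order_id):
    try:
        # Fails if an item with this order_id already exists
        table.put_item(
            Item={"order_id": order_id},
            ConditionExpression="attribute_not_exists(order_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # This is the error surfaced in the DevOps Guru log showcase
            raise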

One thing I want to emphasize here is that DevOps Guru identifies significant events related to application performance and helps me to see the important things I need to focus on by separating the signal from the noise.

Targeted Recommendations

In addition to anomaly detection of logs, this new feature also provides precise recommendations based on the findings in the logs. You can find these recommendations on the Insights page, by scrolling down to find the Recommendations section.

Here, I get some recommendations from DevOps Guru, which make it easier for me to take immediate steps to remediate the issue. One recommendation shown in the following image is Check DynamoDB ConditionalExpression, which relates to an anomaly found in the logs derived from AWS Lambda.

Availability

You can use DevOps Guru Log Anomaly Detection and Recommendations today at no additional charge in all Regions where DevOps Guru is available: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

To learn more, please visit the Amazon DevOps Guru website and technical documentation, and get started today.

Happy building
— Donnie

Amazon Redshift Serverless – Now Generally Available with New Capabilities

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-redshift-serverless-now-generally-available-with-new-capabilities/

Last year at re:Invent, we introduced the preview of Amazon Redshift Serverless, a serverless option of Amazon Redshift that lets you analyze data at any scale without having to manage data warehouse infrastructure. You just need to load and query your data, and you pay only for what you use. This allows more companies to build a modern data strategy, especially for use cases where analytics workloads are not running 24/7 and the data warehouse is not active all the time. It is also applicable to companies where the use of data expands within the organization and users in new departments want to run analytics without having to take ownership of data warehouse infrastructure.

Today, I am happy to share that Amazon Redshift Serverless is generally available and that we added many new capabilities. We are also reducing Amazon Redshift Serverless compute costs compared to the preview.

You can now create multiple serverless endpoints per AWS account and Region using namespaces and workgroups:

  • A namespace is a collection of database objects and users, such as database name and password, permissions, and encryption configuration. This is where your data is managed and where you can see how much storage is used.
  • A workgroup is a collection of compute resources, including network and security settings. Each workgroup has a serverless endpoint to which you can connect your applications. When configuring a workgroup, you can set up private or publicly accessible endpoints.

Each namespace can have only one workgroup associated with it. Conversely, each workgroup can be associated with only one namespace. You can have a namespace without any workgroup associated with it, for example, to use it only for sharing data with other namespaces in the same or another AWS account or Region.
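If you prefer to automate this setup, a minimal sketch using the AWS SDK for Python (Boto3) might look like the following; the names and base capacity are placeholders, and the parameter names assume the redshift-serverless client:

import boto3

client = boto3.client("redshift-serverless")

# Create a namespace to hold database objects and users
client.create_namespace(namespaceName="analytics")

# Create the workgroup that provides the serverless endpoint
client.create_workgroup(
    workgroupName="analytics-wg",
    namespaceName="analytics",
    baseCapacity=32,          # base capacity in RPUs
    publiclyAccessible=False,
)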

In your workgroup configuration, you can now use query monitoring rules to help keep your costs under control. Also, the way Amazon Redshift Serverless automatically scales data warehouse capacity is more intelligent to deliver fast performance for demanding and unpredictable workloads.

Let’s see how this works with a quick demo. Then, I’ll show you what you can do with namespaces and workgroups.

Using Amazon Redshift Serverless
In the Amazon Redshift console, I select Redshift serverless in the navigation pane. To get started, I choose Use default settings to configure a namespace and a workgroup with the most common options. For example, I’ll be able to connect using my default VPC and default security group.


With the default settings, the only option left to configure is Permissions. Here, I can specify how Amazon Redshift can interact with other services such as S3, Amazon CloudWatch Logs, Amazon SageMaker, and AWS Glue. To load data later, I give Amazon Redshift access to an S3 bucket. I choose Manage IAM roles and then Create IAM role.


When creating the IAM role, I select the option to give access to specific S3 buckets and pick an S3 bucket in the same AWS Region. Then, I choose Create IAM role as default to complete the creation of the role and to automatically use it as the default role for the namespace.


I choose Save configuration, and after a few minutes the database is ready for use. In the Serverless dashboard, I choose Query data to open the Redshift query editor v2. There, I follow the instructions in the Amazon Redshift Database Developer Guide to load a sample database. If you want to do a quick test, a few sample databases (including the one I am using here) are already available in the sample_data_dev database. Note also that loading data into Amazon Redshift is not required for running queries. I can use data from an S3 data lake in my queries by creating an external schema and an external table.

The sample database consists of seven tables and tracks sales activity for a fictional “TICKIT” website, where users buy and sell tickets for sporting events, shows, and concerts.

Sample database tables relations

To configure the database schema, I run a few SQL commands to create the users, venue, category, date, event, listing, and sales tables.


Then, I download the tickitdb.zip file that contains the sample data for the database tables. I unzip and load the files to a tickit folder in the same S3 bucket I used when configuring the IAM role.

Now, I can use the COPY command to load the data from the S3 bucket into my database. For example, to load data into the users table:

copy users from 's3://MYBUCKET/tickit/allusers_pipe.txt' iam_role default;

The file containing the data for the sales table uses tab-separated values:

copy sales from 's3://MYBUCKET/tickit/sales_tab.txt' iam_role default delimiter '\t' timeformat 'MM/DD/YYYY HH:MI:SS';

After I load data in all tables, I start running some queries. For example, the following query joins five tables to find the top five sellers for events based in California (note that the sample data is for the year 2008):

select sellerid, username, (firstname ||' '|| lastname) as sellername, venuestate, sum(qtysold)
from sales, date, users, event, venue
where sales.sellerid = users.userid
and sales.dateid = date.dateid
and sales.eventid = event.eventid
and event.venueid = venue.venueid
and year = 2008
and venuestate = 'CA'
group by sellerid, username, sellername, venuestate
order by 5 desc
limit 5;


Now that my database is ready, let’s see what I can do by configuring Amazon Redshift Serverless namespaces and workgroups.

Using and Configuring Namespaces
Namespaces are collections of database data and their security configurations. In the navigation pane of the Amazon Redshift console, I choose Namespace configuration. In the list, I choose the default namespace that I just created.

In the Data backup tab, I can create or restore a snapshot or restore data from one of the recovery points that are automatically created every 30 minutes and kept for 24 hours. That can be useful to recover data in case of accidental writes or deletes.


In the Security and encryption tab, I can update permissions and encryption settings, including the AWS Key Management Service (AWS KMS) key used to encrypt and decrypt my resources. In this tab, I can also enable audit logging and export the user, connection, and user activity logs.


In the Datashares tab, I can create a datashare to share data with other namespaces and AWS accounts in the same or different Regions. In this tab, I can also create a database from a share I receive from other namespaces or AWS accounts, and I can see the subscriptions for datashares managed by AWS Data Exchange.


When I create a datashare, I can select which objects to include. For example, here I want to share only the date and event tables because they don’t contain sensitive data.


Using and Configuring Workgroups
Workgroups are collections of compute resources and their network and security settings. They provide the serverless endpoint for the namespace they are configured for. In the navigation pane of the Amazon Redshift console, I choose Workgroup configuration. In the list, I choose the default workgroup that I just created.

In the Data access tab, I can update the network and security settings (for example, change the VPC, the subnets, or the security group) or make the endpoint publicly accessible. In this tab, I can also enable Enhanced VPC routing to route network traffic between my serverless database and the data repositories I use (for example, the S3 buckets used to load or unload data) through a VPC instead of the internet. To access serverless endpoints that are in another VPC or subnet, I can create a VPC endpoint managed by Amazon Redshift.


In the Limits tab, I can configure the base capacity (expressed in Redshift processing units, or RPUs) used to process my queries. Amazon Redshift Serverless scales the capacity to deal with a higher number of users. Here I also have the option to increase the base capacity to speed up my queries or decrease it to reduce costs.

In this tab, I can also set Usage limits to configure daily, weekly, and monthly thresholds to keep my costs predictable. For example, I configured a daily limit of 200 RPU-hours, and a monthly limit of 2,000 RPU-hours for my compute resources. To control the data-transfer costs for cross-Region datashares, I configured a daily limit of 3 TB and a weekly limit of 10 TB. Finally, to limit the resources used by each query, I use Query limits to time out queries running for more than 60 seconds.


Availability and Pricing
Amazon Redshift Serverless is generally available today in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS Regions.

You can connect to a workgroup endpoint using your favorite client tools via JDBC/ODBC or with the Amazon Redshift query editor v2, a web-based SQL client application available on the Amazon Redshift console. When using web services-based applications (such as AWS Lambda functions or Amazon SageMaker notebooks), you can access your database and perform queries using the built-in Amazon Redshift Data API.
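For example, here is a minimal sketch of running a query through the Data API with Boto3 (the workgroup and database names are placeholders):

import boto3

data = boto3.client("redshift-data")

response = data.execute_statement(
    WorkgroupName="analytics-wg",
    Database="dev",
    Sql="select eventname, count(*) from event group by 1 order by 2 desc limit 5;",
)
print(response["Id"])  # statement ID you can use to poll for results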

With Amazon Redshift Serverless, you pay only for the compute capacity your database consumes when active. The compute capacity scales up or down automatically based on your workload and shuts down during periods of inactivity to save time and costs. Your data is stored in managed storage, and you pay a GB-month rate.

To give you improved price performance and the flexibility to use Amazon Redshift Serverless for an even broader set of use cases, we are lowering the price from $0.5 to $0.375 per RPU-hour for the US East (N. Virginia) Region. Similarly, we are lowering the price in other Regions by an average of 25 percent from the preview price. For more information, see the Amazon Redshift pricing page.

To help you get practice with your own use cases, we are also providing $300 in AWS credits for 90 days to try Amazon Redshift Serverless. These credits are used to cover your costs for compute, storage, and snapshot usage of Amazon Redshift Serverless only.

Get insights from your data in seconds with Amazon Redshift Serverless.

Danilo

New – Cloud WAN : A Managed WAN Service

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-cloud-wan-a-managed-wan-service/

I am pleased to announce the availability of AWS Cloud WAN, a new network service that makes it easy to build and operate wide area networks (WAN) that connect your data centers and branch offices, as well as multiple VPCs in multiple AWS Regions.

Typically, large enterprises have resources running in different on-premises data centers, branch offices, and in the cloud. To connect these resources, network teams build and manage their own global networks using multiple networking, security, and internet services from multiple providers. They most probably use several technologies and providers to manage cloud-based networks, to connect their data centers to the AWS cloud, and for the connectivity between on-premises data centers and branch offices. All of these networks take different approaches to connectivity, security, and monitoring, resulting in an intricate patchwork of individual networks that are complicated to configure, secure, and manage.

For example, to prevent unauthorized access to resources running across locations that are connected with different network technologies, network operation teams must piece together different firewall solutions from different vendors and then manually configure and manage the policies between them. Every new location, network appliance, and security requirement exponentially increases complexity.

With Cloud WAN, networking teams connect to AWS through their choice of local network providers, then use a central dashboard and network policies to create a unified network that connects their locations and network types. This eliminates the need to configure and manage different networks individually, even when they are based on different technologies. Cloud WAN generates a complete view of your on-premises and AWS networks to help you visualize the health, security, and performance of your entire network.

Cloud WAN provides advanced security and network isolation, and I am excited by the possibilities offered by this network segmentation. You can use policies in Cloud WAN to easily segment your network traffic regardless of how many AWS Regions or on-premises locations you add to your network. For example, you can easily isolate network traffic from retail payment processing from other traffic on your corporate network while still giving both segments access to shared corporate resources. Another example would be the isolation of your development and production environments by creating logical network segments for each environment. This makes it easier to ensure consistent security policies when connecting large numbers of locations with your VPCs, especially when your policies need to apply to large groups with unique security and routing requirements. Cloud WAN maintains a consistent configuration across Regions on your behalf. In a traditional network, a segment is like a globally consistent virtual routing and forwarding (VRF) table or a layer 3 IP VPN over an MPLS network. Segments are optional; smaller organizations may use Cloud WAN with a single network segment encompassing all their traffic.

In addition to network segmentation and the simplicity it brings to your network management tasks, I see four principal benefits of using Cloud WAN:

Centralized management and network monitoring dashboard – Network Manager provides a central dashboard for connecting and managing your branch offices, data centers, VPN connections, and Software-Defined WAN (SD-WAN), as well as your Amazon VPC and AWS Transit Gateway. This dashboard helps you monitor and view the health of your network in one place, simplifying day-to-day operations.

Centralized policy management – You define access controls and traffic routing rules in a central network policy document, expressed in JSON. When you update a policy, Cloud WAN uses a two-step process to ensure accidental errors do not affect your global network. First, you review and validate that your changes will work as expected in production. Once you approve the changes, Cloud WAN handles the configuration details for the entire network. You can change your policy document using the AWS Management Console or Cloud WAN APIs.
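For a sense of the format, here is an abridged, illustrative policy document; treat the field values as placeholders, and consult the Cloud WAN documentation for the authoritative schema:

{
  "version": "2021.12",
  "core-network-configuration": {
    "asn-ranges": ["64512-65534"],
    "edge-locations": [
      { "location": "us-east-1" },
      { "location": "eu-west-1" }
    ]
  },
  "segments": [
    { "name": "production", "require-attachment-acceptance": true },
    { "name": "development", "require-attachment-acceptance": false }
  ],
  "attachment-policies": [
    {
      "rule-number": 100,
      "conditions": [
        { "type": "tag-value", "key": "segment", "operator": "equals", "value": "development" }
      ],
      "action": { "association-method": "constant", "segment": "development" }
    }
  ]
}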

Multi-Region VPC connectivity – Cloud WAN connects your VPCs across AWS Regions. Using a simple network policy document, you can create global networks that connect all of your EC2 resources, or you can choose to segment them across Regions.

Built-in automation – Cloud WAN can automatically attach new VPCs and network connections to your network, so you do not need to approve each change manually. This reduces the operational overhead involved in managing a growing network. You do this by tagging attachments and defining network policies that automatically map attachments with a certain tag to a specific network segment. With this tagging structure in place, you can choose which attachments can join a segment automatically, which segments require manual approval, and whether attachments on the same segment can talk to each other, all based on the tags you choose.

Let’s get started
To get started with Cloud WAN, I open the AWS Management Console. In the VPC section, there is a new entry for AWS Cloud WAN on the menu on the left. Creating and configuring a global network is a four-step process.

First, I start by creating a global network and a core network.

Cloud WAN create global network

After entering the Name and an optional Description, I select Next.

Cloud WAN create core network

After giving the core network a Name and a Description, I enter my ASN range and the list of Edge locations, and I enter a Segment name and Segment description for my default segment. The default segment is automatically enabled in all selected edge locations.

Second, I define and attach my core networking policy. The core policy defines the rules that control network access across segments and AWS Regions. Third, I configure segments and segment actions. I can see all routes and filter by network Segment and Edge location.

Cloud WAN - Routes

And finally, I register the existing Transit Gateway to the new global network.

Cloud WAN - register transit gateways

Once configured, you have a single monitoring dashboard for your global network. You have access to the network inventory.

Cloud WAN - Monitoring inventory

Or you can have more granular and detailed views with Topology graph and Topology tree.

Cloud WAN - Monitoring topology

Other considerations
During the preview period we ran for Cloud WAN, we often received the question: “When should I build networks with Cloud WAN versus Transit Gateway?” This is a valid question because both Transit Gateway and Cloud WAN allow centralized connectivity between Amazon VPC and on-premises locations. Transit Gateway is a Regional network connectivity hub and is optimal when you operate in a few AWS Regions or when you want to manage your own peering and routing configuration or prefer to use your own automation.

On the other side, Cloud WAN is a managed wide area network (WAN) that unifies your data center, branches, and AWS networks. While you can create your own global network by interconnecting multiple Transit Gateways across Regions, Cloud WAN provides built-in automation, segmentation, and configuration management features designed specifically for building and operating global networks. Cloud WAN has added features such as automated VPC attachments, integrated performance monitoring, and centralized configuration.

But the world is better together: you can peer your Transit Gateways with Cloud WAN’s Core Network Edges (CNEs) and benefit from the central management and monitoring capabilities I described earlier. The peering between Cloud WAN and Transit Gateway keeps your options open – you can migrate from one to another, or use Cloud WAN to centrally connect all your existing Transit Gateways.

But then, AWS released SiteLink in December last year. When should you use SiteLink, and when should you use AWS Cloud WAN? Depending on your use case, you might choose one, the other, or both. Cloud WAN can create and manage networks of VPCs across multiple Regions. SiteLink, on the other hand, connects Direct Connect locations together, bypassing AWS Regions to improve performance. Direct Connect is one of the several connectivity options that you will be able to natively use with Cloud WAN in the future. As of today, you interconnect Direct Connect with Cloud WAN via Transit Gateway peering connections.

Availability and Pricing
Cloud WAN is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), and Middle East (Bahrain) AWS Regions.

As usual, there are no setup or upfront fees, and billing is on-demand based on your actual usage. There are four factors that determine what you pay for using AWS Cloud WAN. First, the number of Core Network Edges (CNEs) deployed. Second, the number of attachments to each CNE. An attachment might be an Amazon VPC, a VPN, or an SD-WAN. Third, the number of Transit Gateways peered with your CNEs. And fourth, there is a data processing charge for traffic sent through each CNE.

On top of these factors that are specific to Cloud WAN, sending data between Regions triggers an EC2 inter-Region data transfer out charge. While EC2 inter-Region data transfer out is billed separately from Cloud WAN, it’s a factor in the total cost of the Cloud WAN service. The pricing page has the details.

Go build your global network!

— seb

A sneak peek at the governance, risk, and compliance sessions for AWS re:Inforce 2022

Post Syndicated from Greg Eppel original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-governance-risk-and-compliance-sessions-for-aws-reinforce-2022/

Register now with discount code SALUZwmdkJJ to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we want to tell you about some of the exciting governance, risk, and compliance sessions planned for AWS re:Inforce 2022. AWS re:Inforce is a conference where you can learn more about security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote speech from AWS Security leaders, and more. AWS re:Inforce 2022 will take place in person in Boston, MA on July 26 and 27. AWS re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

This post highlights some of the governance, risk, and compliance offerings that you can sign up for, including breakout sessions, chalk talks, builders’ sessions, and workshops. For the full catalog of all tracks, see the AWS re:Inforce session preview.

Breakout sessions

These are lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

GRC201: Learn best practices for auditing AWS with Cloud Audit Academy
Do you want to know how to audit in the cloud? Today, control framework language is catered toward on-premises environments, and security IT auditing techniques have not been reshaped for the cloud. The AWS Cloud–specific Cloud Audit Academy provides auditors with the education and tools to audit for security on AWS using a risk-based approach. In this session, experience a condensed sample domain from a four-day Cloud Audit Academy workshop.

GRC203: Panel discussion: Continuous compliance and auditing on AWS
In this session, an AWS leader speaks with senior executives from enterprise customer and AWS Partner organizations as they share their paths to success with compliance and auditing on AWS. Join this session to hear how they have used AWS Cloud Operations to help make compliance and auditing more efficient and improve business outcomes. Also hear how AWS Partners are supporting customer organizations as they automate compliance and move to the cloud.

GRC205: Crawl, walk, run: Accelerating security maturity
Where are you on your cloud security journey? Where do you want to end up? What are your next steps? In this step-by-step roadmap, we provide a comprehensive overview of the AWS security journey based on lessons learned with other organizations. Learn where you are, how to take the next step, and how to improve your cloud security program. In this session, we will leverage cloud-native tools like AWS Control Tower, AWS Config, and AWS Security Hub to demonstrate how knowing your current state of security can drive more effective and efficient storytelling of your posture.

GRC302: Using AWS security services to build our cloud operations foundation
Organizations new to the cloud need to quickly understand what foundational security capabilities should be considered as a baseline. In this session, learn how AWS security services can help you improve your cloud security posture. Learn how to incorporate security into your AWS architecture based on the AWS Cloud Operations model, which will help you implement governance, manage risk, and achieve compliance while proactively discovering opportunities for improvement.

GRC331: Automating security and compliance with OSCAL
Documentation exports can be very time-consuming. In this session, learn how the National Institute of Standards and Technology (NIST) is developing the Open Security Controls Assessment Language (OSCAL) to provide common translation between XML, JSON, and YAML formats. OSCAL also provides a common means to identify and version shared resources, and standardize the expression of assessment artifacts. Learn how AWS is working to implement OSCAL for our security documentation exports so that you can save time when creating and maintaining ATO packages.

Builders’ sessions

These are small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

GRC351: Implementing compliance as code on AWS
To manage compliance at the speed and scale the cloud requires, organizations need to implement automation and have an effective mechanism to manage it. In this builders’ session, learn how to implement compliance as code (CaC). CaC shares many of the same benefits as infrastructure as code: speed, automation, peer review, and auditability. Learn about defining controls with AWS Config rules, customizing those controls, using remediation actions, packaging and deploying with AWS Config conformance packs, and validating using a CI/CD pipeline.

GRC352: Deploying repeatable, secure, and compliant Amazon EKS clusters
In this builders’ session, learn how to deploy, manage, and scale containerized applications that run Kubernetes on AWS with AWS Service Catalog. Walk through how to deploy the Kubernetes control plane into a virtual private cloud (VPC), connect worker nodes to the cluster, and configure a bastion host for cluster administrative operations. Using AWS CloudFormation registry resource types, learn how to declare Kubernetes manifests or Helm charts to deploy and manage your Kubernetes applications. With AWS Service Catalog, you can empower your teams to deploy securely configured Amazon Elastic Kubernetes Service (Amazon EKS) clusters in multiple accounts and Regions.

GRC354: Building remediation workflows to simplify compliance
Automation and simplification are key to managing compliance at scale. Remediation is one of the key elements of simplifying and managing risk. In this builders’ session, walk through how to build a remediation workflow using AWS Config and AWS Systems Manager Automation. Then, explore how the workflow can be deployed at scale and monitored with AWS Security Hub to oversee your entire organization.

GRC355: Build a Security Posture Leaderboard using AWS Security Hub
This builders’ session introduces you to the possibilities of creating a robust and comprehensive leaderboard using AWS Security Hub findings to improve security and compliance visibility in your organization. Learn how to design and support various use cases, such as combining security and compliance data into a single, centralized dashboard that allows you to make more informed decisions; correlating Security Hub findings with operational data for deeper insights; building a security and compliance scorecard across various dimensions to share across different stakeholders; and supporting a decentralized organization structure with centralized or shared security function.

Chalk talks

These are highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

GRC233: Critical infrastructure: Supply chain and compliance impacts
In this chalk talk, learn how you can benefit from cloud-based solutions that build in security from the beginning. Review technical details around cybersecurity best practices for OT systems in adherence with government partnership with public and private industries. Dive deep into use cases and best practices for using AWS security services to help improve cybersecurity specifically for water utilities. Hear about opportunities to receive AWS cybersecurity training designed to teach you the skills necessary to support cloud adoption.

GRC304: Scaling the possible: Digitizing the audit experience
Do you want to increase the speed and scale of your audits? As companies expand to new industries and environments, so too does the scale of regulatory compliance. AWS undergoes over 500 audits in a year. In this session, hear from AWS experts as they digitize and automate the regulator/auditor experience. Walk through pre-audit educational training, self-service of control evidence and walkthrough information, live chatting with an audit control owner, and virtual data center tours. This session discusses how innovation and digitization allow companies to build trust with regulators and auditors while reducing the level of effort for internal audit teams and compliance executives.

GRC334: Shared responsibility deep dive at the service level
Auditors and regulators often need assistance understanding which configuration settings and security responsibilities are in the company’s control. Depending on the service, the AWS shared responsibility model can vary, which can affect the process for meeting compliance goals. Join AWS subject-matter experts in this chalk talk for an in-depth discussion on the next wave of compliance activation for AWS customers. Explore the configurable security decisions that users have for each service and how you can map to AWS best practices and security controls.

GRC431: Building purpose-driven and data-rich GRC solutions
Are you getting everything you need out of your data? Or do you not have enough information to make data-driven security decisions? Many organizations trying to modernize and innovate using data often struggle with finding the right data security solutions to build data-driven applications. In this chalk talk, explore how you can use Amazon Virtual Private Cloud (Amazon VPC), AWS Identity and Access Management (IAM), AWS Key Management Service (AWS KMS), AWS Systems Manager, AWS Single Sign-On (AWS SSO), and AWS Config to drive valuable insights to make more informed decisions. Hear about best practices and lessons learned to help you on your journey to garner purpose-filled information.

Workshops

These are interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

GRC272: Executive Security Simulation
The Executive Security Simulation takes senior security management and IT/business executive teams through an experiential exercise that illuminates key decision points for a successful and secure cloud journey. During this team-based, game-like competitive simulation, participants leverage an industry case study to make strategic security, risk, and compliance time-based decisions and investments. Participants experience the impact of these investments and decisions on the critical aspects of their secure cloud adoption. Join this workshop to understand the major success factors to lead security, risk, and compliance in the cloud, and learn applicable decision and investment approaches to specific secure cloud adoption journeys.

GRC371: Automate your compliance and evidence collection with AWS
Automation and simplification are key to managing compliance at scale. Remediation is one of the key elements of simplifying and managing risk. In this workshop, we will walk through building a remediation workflow using AWS Config and AWS Systems Manager and show how it can be deployed at scale and then monitored with Security Hub across the entire organization. In this workshop, you will learn how you can set up a continuous collection process that not only establishes controls to help meet the requirements of compliance but also automates the process of collecting evidence to avoid the time-consuming manual effort to prepare for audits.

GRC372: How to implement governance on AWS with ServiceNow
Many AWS customers use IT service management (ITSM) solutions such as ServiceNow to implement governance and compliance and manage security incidents. In this workshop, learn how to use AWS services such as AWS Service Catalog, AWS Config, AWS Systems Manager, and AWS Security Hub on the ServiceNow service portal. Learn how AWS services align to service management standards by integrating AWS capabilities through ITSM process integration with ServiceNow. Design and implement a curated provisioning strategy, along with incident management and resource transparency/compliance, by using the AWS Service Management Connector for ServiceNow.

GRC471: Building guardrails to meet your custom control requirements
In this session, you will experience the process of identifying, designing, and implementing security configurations, as well as detective and preventive guardrails, to meet custom control requirements. You will use a pre-built environment, read a customer scenario to identify specific control needs, and then learn how to design and implement the custom controls.

If any of these sessions look interesting to you, consider joining us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Greg Eppel

Greg is the Tech Leader for Cloud Operations and is responsible for the global direction of an internal community of hundreds of AWS experts who are focused on the operational capabilities of AWS. Prior to joining AWS in 2016, he was the CTO of a company that provided SaaS solutions to the sports, media, and entertainment industry. He is a Canadian originally from Vancouver, and he currently resides in the DC metro area with his family.

Author

Alexis Robinson

Alexis is the Head of the US Government Security and Compliance Program for AWS. For over 10 years, she has served federal government clients by advising them on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment, including cloud services applicable to AWS US East/West and AWS GovCloud (US) Regions.

Eligible customers can now order a free MFA security key

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/eligible-customers-can-now-order-a-free-mfa-security-key/

One of the best ways for individuals and businesses to protect themselves online is through multi-factor authentication (MFA). MFA offers an additional layer of protection to help prevent unauthorized individuals from gaining access to systems or data.

In fall 2021, Amazon Web Services (AWS) Security began offering a free MFA security key to AWS account owners in the United States. I’m happy to announce that eligible customers can now order the free security key through the ordering portal in the AWS Management Console. In response to customer demand, we’ve streamlined the ordering process, especially for linked accounts. At this time, only U.S.-based AWS account root users who have spent more than $100 each month over the past 3 months are eligible to place an order.

To order your free security key

  1. Confirm your eligibility at the ordering portal. You will be prompted to sign in if you haven’t already.
  2. Choose your free security key from the available options.
  3. Provide your email address for order confirmation and your shipping address.
  4. Place your order.

You can connect the security key to AWS, as well as to other security key–enabled applications, such as Dropbox, GitHub, and Gmail. If your organization is still early in adopting MFA, the free security key is another way to help protect your AWS account credentials and to jump-start your MFA journey by showing how convenient modern security keys are to use. As you expand your AWS usage, all of your users should obtain and enable MFA. You can enable MFA at the AWS Identity and Access Management (IAM) user level in the AWS identity system, or upstream in your federated identity provider, because using federated identities is a best practice.

We encourage everyone to use MFA to help protect themselves online. Although some applications do not yet support security keys, nearly all provide an MFA option, such as time-based one-time passwords or mobile push notifications. So, whether you’re signing in to your AWS account, your favorite social networks, or your bank account, MFA can help level up your security posture.

If you’re not eligible for a free security key but would still like one, check out our MFA recommendations; the recommended security keys are available for purchase from many sellers, including Amazon. For more information about the MFA program, see our Free MFA Security Key page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

CJ Moses

CJ is the Chief Information Security Officer (CISO) at AWS, where he leads product design and security engineering. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Previously, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. He also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

New – Amazon EC2 M1 Mac Instances

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ec2-m1-mac-instances/

Last year, during the re:Invent 2021 conference, I wrote a blog post to announce the preview of EC2 M1 Mac instances. I know many of you requested access to the preview, and we did our best but could not satisfy everybody. However, the wait is over. I have the pleasure of announcing the general availability of EC2 M1 Mac instances.

EC2 Mac instances are dedicated Mac mini computers attached through Thunderbolt to the AWS Nitro System, which lets the Mac mini appear and behave like another EC2 instance. It connects to your Amazon Virtual Private Cloud (Amazon VPC), boots from Amazon Elastic Block Store (EBS) volumes, and works with EBS snapshots, Amazon Machine Images (AMIs), security groups, and other AWS services such as Amazon CloudWatch and AWS Systems Manager.

The availability of EC2 M1 Mac instances lets you access machines built around the Apple-designed M1 System on Chip (SoC). If you are a Mac developer re-architecting your apps to natively support Macs with Apple silicon, you can now build and test your apps on AWS and take advantage of all its benefits. Developers building for iPhone, iPad, Apple Watch, and Apple TV will also benefit from faster builds. EC2 M1 Mac instances deliver up to 60 percent better price performance than x86-based EC2 Mac instances for iPhone and Mac app build workloads.

For example, I tested the time it takes to clean, build, archive, and run the unit tests on a sample project I wrote. The new EC2 M1 Mac instances complete this set of tasks in 49 seconds on average. This is 47.8 percent faster than the same set of tasks running on the previous generation of EC2 Mac instances.

To see how to launch an EC2 M1 Mac instance from the AWS Management Console or the AWS Command Line Interface (CLI), I invite you to read my last blog post on the subject.

During the six months of the preview, we collected your feedback and fine-tuned the service to your needs.

We’ve added a new FAQ section to our documentation to help you get started with EC2 M1 Mac instances. Agents for management and observability, such as Systems Manager and CloudWatch, are pre-installed on all our macOS AMIs, along with tools such as the AWS Command Line Interface (CLI) and the AWS SDKs. EC2 M1 Mac instances integrate with other AWS services, such as Amazon Elastic File System (Amazon EFS) for file storage, AWS Auto Scaling, and AWS Secrets Manager.

For example, I am using Secrets Manager to securely store my build secrets, such as the signing keys and certificates used to sign my binaries before distributing them on the App Store. From my laptop, I first export the certificate from the macOS keychain. I then upload the certificate to Secrets Manager with this command:

aws secretsmanager create-secret            \
       --name apple-signing-dev-certificate \
       --secret-binary fileb://./secrets/apple_dev_seb.p12 

On the EC2 M1 Mac instance, to prepare my instance before the build phase, I download the certificate, decode it (it is base64-encoded), and store it in the EC2 M1 Mac instance keychain, where the codesign tool will find it during the build.

# download the certificate from Secrets Manager
SIGNING_DEV_KEY=$(aws secretsmanager get-secret-value   \
      --secret-id apple-signing-dev-certificate         \
      --query SecretBinary --output text)

# save the certificate as a file
echo $SIGNING_DEV_KEY | base64 -d > seb_dev_certificate.p12

# import the certificate in the keychain 
security import seb_dev_certificate.p12 \
                -P "my_cert_password"   \
                -k my.dev.keychain      \
                -T /usr/bin/security -T /usr/bin/codesign -T /usr/bin/xcodebuild

# delete the certificate from disk
rm seb_dev_certificate.p12

There are a few more configuration steps to get code signing to work from the macOS command line. You can check out this presentation I made or my code repository for the details.

We are preparing a couple of events to help you learn more about EC2 M1 Mac instance use cases and configuration. First, we recently held an online webinar on how to take advantage of EC2 Mac instances for iOS development; the content is available to watch on demand after a free registration step. Second, we are preparing a one-day, in-person developer conference for later this year. The conference agenda will be packed with technical content and workshops. Stay tuned on social media to learn more about it.

Last but not least, and not related to EC2 Mac instances: the Apple WWDC 2022 conference took place last month, from June 6–10, 2022, and the content is available online. This is a great occasion to learn more about development for Apple systems in general.

And now, go build 😉

— seb

OSPAR 2022 report now available with 142 services in scope

Post Syndicated from Joseph Goh original https://aws.amazon.com/blogs/security/ospar-2022-report-now-available-with-142-services-in-scope/

We’re excited to announce the completion of our annual Outsourced Service Provider’s Audit Report (OSPAR) audit cycle on July 1, 2022. The 2022 OSPAR certification cycle includes the addition of 15 new services in scope, bringing the total number of services in scope to 142 in the AWS Asia Pacific (Singapore) Region.

Newly added services in scope include the following:

Successful completion of the OSPAR assessment demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines reflects our commitment to meeting the security expectations for cloud service providers set by the financial services industry in Singapore. Customers can use the OSPAR assessment to conduct due diligence and to help reduce the effort and costs required for compliance. An independent third-party auditor, selected from the ABS list of approved auditors, performs the OSPAR assessment.

You can download the latest OSPAR report from AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The list of services in scope for OSPAR is available in the report, and is also available on the AWS Services in Scope by Compliance Program webpage.

As always, we’re committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. If you have questions about the OSPAR report, contact your AWS account team.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Joseph Goh

Joseph is the APJ ASEAN Lead at AWS, based in Singapore. He leads security audits, certifications, and compliance programs across the Asia Pacific region. Joseph is passionate about delivering programs that build trust with customers and provide them with assurance on cloud security.

2022 H1 IRAP report is now available on AWS Artifact

Post Syndicated from Matt Brunker original https://aws.amazon.com/blogs/security/2022-h1-irap-report-is-now-available-on-aws-artifact/

We’re excited to announce that a new Information Security Registered Assessors Program (IRAP) report is now available on AWS Artifact. Amazon Web Services (AWS) successfully completed an IRAP assessment in May 2022, performed by an independent assessor certified by the Australian Signals Directorate (ASD). The new IRAP report includes an additional nine AWS services that are now assessed at the PROTECTED classification under IRAP, bringing the total number of services assessed at PROTECTED to 132.

For a full list of these services, see the IRAP tab on the AWS Services in Scope page. The nine newly assessed services are the following:

The IRAP documentation pack is developed in accordance with the Australian Cyber Security Centre (ACSC) Cloud Security Guidance and their Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Australian Government Information Security Manual (ISM), the Attorney-General’s Protective Security Policy Framework (PSPF), and the Digital Transformation Agency (DTA) Secure Cloud Strategy.

The IRAP package on AWS Artifact also includes the AWS Consumer Guide and the whitepaper Reference Architectures for ISM PROTECTED Workloads in the AWS Cloud.

The IRAP documentation pack is designed to assist Australian government agencies and their partners in planning, architecting, and assessing risk for their workloads when they use AWS Cloud services. Reach out to your AWS representatives to let us know which additional services you would like to see in scope for upcoming IRAP assessments. We strive to bring more services into scope at the PROTECTED level to support your requirements.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Matt Brunker

Matt is the security program manager for the Australia and New Zealand region, leading multiple security certification programs. Matt is a passionate cybersecurity professional with a strong background in assisting organisations in the design, implementation, and monitoring of security controls.

How to tune TLS for hybrid post-quantum cryptography with Kyber

Post Syndicated from Brian Jarvis original https://aws.amazon.com/blogs/security/how-to-tune-tls-for-hybrid-post-quantum-cryptography-with-kyber/

We are excited to offer hybrid post-quantum TLS with Kyber for AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM). In this blog post, we share the performance characteristics of our hybrid post-quantum Kyber implementation, show you how to configure a Maven project to use it, and discuss how to prepare your connection settings for Kyber post-quantum cryptography (PQC).

After five years of intensive research and cryptanalysis among partners from academia, the cryptographic community, and the National Institute of Standards and Technology (NIST), NIST has selected Kyber for post-quantum key encapsulation mechanism (KEM) standardization. This marks the beginning of the next generation of public key encryption. In time, the classical key establishment algorithms we use today, like RSA and elliptic curve cryptography (ECC), will be replaced by quantum-secure alternatives. At AWS Cryptography, we’ve been researching and analyzing the candidate KEMs through each round of the NIST selection process. We began supporting Kyber in round 2 and continue that support today.

A cryptographically relevant quantum computer that is capable of breaking RSA and ECC does not yet exist. However, we are offering hybrid post-quantum TLS with Kyber today so that customers can see how the performance differences of PQC affect their workloads. We also believe that the use of PQC raises the already-high security bar for connecting to AWS KMS and ACM, making this feature attractive for customers with long-term confidentiality needs.

Performance of hybrid post-quantum TLS with Kyber

Hybrid post-quantum TLS incurs a latency and bandwidth overhead compared to classical cryptography alone. To quantify this overhead, we measured how long s2n-tls takes to negotiate hybrid post-quantum (ECDHE + Kyber) key establishment compared to classical ECDHE alone. We performed the tests with the Linux perf subsystem on an Amazon Elastic Compute Cloud (Amazon EC2) c6i.4xlarge instance in the US East (Northern Virginia) AWS Region, and we initiated 2,000 TLS connections to a test server running in the US West (Oregon) Region to include typical internet latencies.

Figure 1 shows the latencies of a TLS handshake that uses classical ECDHE and hybrid post-quantum (ECDHE + Kyber) key establishment. The columns are separated to illustrate the CPU time spent by the client and server compared to the time spent sending data over the network.

Figure 1: Latency of classical compared to hybrid post-quantum TLS handshake

Figure 2 shows the bytes sent and received during the TLS handshake, as measured by the client, for both classical ECDHE and hybrid post-quantum (ECDHE + Kyber) key establishment.

Figure 2: Bandwidth of classical compared to hybrid post-quantum TLS handshake

This data shows that the overhead for using hybrid post-quantum key establishment is 0.25 ms on the client, 0.23 ms on the server, and an additional 2,356 bytes on the wire. Intra-Region tests would result in lower network latency. Your latencies also might vary depending on network conditions, CPU performance, server load, and other variables.

The results show that the performance of Kyber is strong; it was among the top performers of the NIST PQC candidates that we analyzed in a previous blog post. In fact, the performance of these ciphers improved in our latest tests, because x86-64 assembly-optimized versions of these ciphers are now available for use.

Configure a Maven project for hybrid post-quantum TLS

In this section, we provide a Maven configuration and code example that will show you how to get started using our assembly-optimized, hybrid post-quantum TLS configuration with Kyber.

To configure a Maven project for hybrid post-quantum TLS

  1. Get the preview release of the AWS Common Runtime HTTP client for the AWS SDK for Java 2.x. Your Maven dependency configuration should specify version 2.17.69-PREVIEW or newer, as shown in the following code sample.
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>[2.17.69-PREVIEW,]</version>
    </dependency>

  2. Configure the desired cipher suite in your code’s initialization. The following code sample configures an AWS KMS client to use the latest hybrid post-quantum cipher suite.
    import software.amazon.awssdk.http.async.SdkAsyncHttpClient;
    import software.amazon.awssdk.http.crt.AwsCrtAsyncHttpClient;
    import software.amazon.awssdk.services.kms.KmsAsyncClient;
    import static software.amazon.awssdk.crt.io.TlsCipherPreference.TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05;
    
    // Check that the current platform supports the hybrid post-quantum cipher suite
    if (!TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05.isSupported()) {
        throw new RuntimeException("Hybrid post-quantum cipher suites are not supported.");
    }
    
    // Configure the HTTP client to use the hybrid post-quantum cipher suite
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_PREF_PQ_TLSv1_0_2021_05)
              .build();
    
    // Create the AWS KMS async client that uses the PQ-enabled HTTP client
    KmsAsyncClient kmsAsync = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();

With that, all calls made with your AWS KMS client will use hybrid post-quantum TLS. You can use the latest hybrid post-quantum cipher suite with ACM by following the preceding example but using an AcmAsyncClient instead.
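As a minimal sketch of that variation (assuming the awsCrtHttpClient built in the previous step is still in scope; the variable name acmAsync is ours):

import software.amazon.awssdk.services.acm.AcmAsyncClient;

// Build the ACM async client on the same PQ-configured HTTP client
AcmAsyncClient acmAsync = AcmAsyncClient.builder()
        .httpClient(awsCrtHttpClient)
        .build();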

Tune connection settings for hybrid post-quantum TLS

Although hybrid post-quantum TLS has some latency and bandwidth overhead on the initial handshake, that cost is amortized over the duration of the TLS session, and you can fine-tune your connection settings to help further reduce the cost. In this section, you learn three ways to reduce the impact of hybrid PQC on your TLS connections: connection pooling, connection timeouts, and TLS session resumption.

Connection pooling

Connection pools manage the number of active connections to a server. They allow a connection to be reused without closing and reopening it, which amortizes the cost of connection establishment over time. Part of a connection’s setup time is the TLS handshake, so you can use connection pools to help reduce the impact of an increase in handshake latency.

To illustrate this, we wrote a test application that generates approximately 200 transactions per second to a test server. We varied the maximum concurrency setting of the HTTP client and measured the latency of the test request. In the AWS CRT HTTP client, this is the maxConcurrency setting. If the connection pool doesn’t have an idle connection available, the request latency includes establishing a new connection. Using Wireshark, we captured the network traffic to observe the number of TLS handshakes that took place over the duration of the application. Figure 3 shows the request latency and number of TLS handshakes as the maxConcurrency setting is increased.

Figure 3: Median request latency and number of TLS handshakes as concurrency pool size increases

The biggest latency benefit came from increasing maxConcurrency above 1; beyond that, further increases yielded diminishing returns. For maxConcurrency values of 10 and below, additional TLS handshakes still took place within the connections, but they didn’t have much impact on median latency. These inflection points will depend on your application’s request volume. The takeaway is that connection pooling allows connections to be reused, spreading the cost of any increased TLS negotiation time over many requests.
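To make the setting concrete, here is a minimal sketch of a pooled AWS CRT HTTP client; the pool size of 10 and the variable name pooledClient are illustrative, not our exact test harness:

import software.amazon.awssdk.http.async.SdkAsyncHttpClient;
import software.amazon.awssdk.http.crt.AwsCrtAsyncHttpClient;

// Keep up to 10 connections in the pool so that requests reuse
// established connections instead of paying for new TLS handshakes
SdkAsyncHttpClient pooledClient = AwsCrtAsyncHttpClient.builder()
        .maxConcurrency(10)
        .build();

Any service client built on pooledClient, such as the KmsAsyncClient shown earlier, then draws its connections from this pool.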

More detail about using the maxConcurrency option can be found in the AWS SDK for Java API Reference.

Connection timeouts

Connection timeouts work in conjunction with connection pooling. Even if you use a connection pool, there is a limit to how long idle connections stay open before the pool closes them. You can adjust this time limit to save on connection establishment overhead.

A nice way to visualize this setting is to imagine bursty traffic patterns. Despite tuning the connection pool concurrency, your connections keep closing because the time between bursts is longer than the idle time limit. By increasing the maximum idle time, you can reuse connections across bursts.

To simulate the impact of connection timeouts, we wrote a test application that starts 10 threads, each of which activates at the same time on a periodic schedule every 5 seconds for one minute. We set maxConcurrency to 10 to allow each thread to have its own connection, and we set the connectionMaxIdleTime of the AWS CRT HTTP client to 1 second for the first test and to 10 seconds for the second test.

When the maximum idle time was 1 second, the connections for all 10 threads closed during the time between each burst. As a result, 100 total connections were formed over the life of the test, causing a median request latency of 20.3 ms. When we changed the maximum idle time to 10 seconds, the 10 initial connections were reused by each subsequent burst, reducing the median request latency to 5.9 ms.

By setting the connectionMaxIdleTime appropriately for your application, you can reduce connection establishment overhead, including TLS negotiation time, to help achieve time savings throughout the life of your application.
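As a minimal sketch under the same assumptions (the 10-second value matches our second test; the variable name burstFriendlyClient is ours):

import java.time.Duration;
import software.amazon.awssdk.http.async.SdkAsyncHttpClient;
import software.amazon.awssdk.http.crt.AwsCrtAsyncHttpClient;

// Keep idle connections open for 10 seconds so that bursts arriving
// 5 seconds apart reuse them instead of opening new connections
SdkAsyncHttpClient burstFriendlyClient = AwsCrtAsyncHttpClient.builder()
        .maxConcurrency(10)
        .connectionMaxIdleTime(Duration.ofSeconds(10))
        .build();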

More detail about using the connectionMaxIdleTime option can be found in the AWS SDK for Java API Reference.

TLS session resumption

TLS session resumption allows a client and server to bypass the key agreement that is normally performed to arrive at a new shared secret. Instead, communication quickly resumes by using a shared secret that was previously negotiated, or one that was derived from a previous secret (the implementation details depend on the version of TLS in use). This feature requires that both the client and server support it, but if available, TLS session resumption allows the TLS handshake time and bandwidth increases associated with hybrid PQ to be amortized over the life of multiple connections.

Conclusion

As you learned in this post, hybrid post-quantum TLS with Kyber is available for AWS KMS and ACM. This new cipher suite raises the security bar and allows you to prepare your workloads for post-quantum cryptography. Hybrid key agreement has some additional overhead compared to classical ECDHE, but you can mitigate these increases by tuning your connection settings, including connection pooling, connection timeouts, and TLS session resumption. Begin using hybrid key agreement today with AWS KMS and ACM.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Brian Jarvis

Brian is a Senior Software Engineer at AWS Cryptography. His interests are in post-quantum cryptography and cryptographic hardware. Previously, Brian worked in AWS Security, developing internal services used throughout the company. Brian holds a Bachelor’s degree from Vanderbilt University and a Master’s degree in Computer Engineering from George Mason University. He plans to finish his PhD “some day”.

AWS Week in Review – July 4, 2022

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-04-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Summer has arrived in Finland, and these last few days have been hotter than in the Canary Islands! Today in the US it is Independence Day. I hope that if you are celebrating, you’re having a great time. This week I’m very excited about some developer experience and artificial intelligence launches.

Last Week’s Launches
Here are some launches that got my attention during the previous week:

AWS SAM Accelerate is now generally available – SAM Accelerate is a new capability of the AWS Serverless Application Model CLI that makes it easier for serverless developers to test code changes against the cloud. When you make a change in your local development environment, you can hot-swap the code directly in the cloud, which lets you develop applications faster. Learn more about this launch in the What’s New post.

Amplify UI for React is generally available – Amplify UI is an open-source UI library that helps developers build cloud-native applications. Amplify UI for React comes with over 35 components, an authentication component that connects to your backend with no extra configuration, and theming for your components. You can also build your UI using Figma. Check the Amplify UI for React site to learn more about all the capabilities offered.

Amazon Connect has new announcements – First, Amazon Connect added support to personalize the flows of the customer experience using Amazon Lex sentiment analysis. It also added support to branch out the flows depending on Amazon Lex confidence scores. Lastly, it added confidence scores to Amazon Connect Customer Profiles to help companies merge duplicate customer records.

Amazon QuickSight – QuickSight authors can now learn and experience Q before signing up. Authors can choose from six different sample topics and explore different visualizations. In addition, QuickSight now supports Level Aware Calculations (LAC) and rolling date functionality. These two new features give customers more flexibility and make it simpler to build advanced calculations and dashboards.

Amazon SageMaker – RStudio on SageMaker now allows you to bring your own development environment in a custom image. RStudio on SageMaker is a fully managed RStudio Workbench in the cloud. In addition, SageMaker added four new tabular data modeling algorithms (LightGBM, CatBoost, AutoGluon-Tabular, and TabTransformer) to its existing set of built-in algorithms, pre-trained models, and pre-built solution templates.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

AWS Support announced an improved experience when creating a case – There is a new interface for creating support cases in the AWS Support Center console. Now you can create a case with a simplified three-step process that guides you through the flow. Learn more about this new process in the What’s new post.

New AWS Step Functions workflows collection on Serverless Land – The Step Functions workflows collection is a new experience that makes it easier to discover, deploy, and share AWS Step Functions workflows. In this collection, you can find opinionated templates that implement the best practices to build using Step Functions. Learn more about this new collection in Ben’s blog post.

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish, and it shares a new episode every other week. The podcast is meant for builders, and it shares stories about how customers implement and learn AWS, how to architect applications, and how to use new services. You can listen to all the episodes directly from your favorite podcast app or from the AWS Podcasts en español website.

AWS open-source news and updates – A newsletter curated by my colleague Ricardo brings you the latest open-source projects, posts, events, and more.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Summit New York – Join us on July 12 for the in-person AWS Summit. You can register on the AWS Summit page for free.

AWS re:Inforce – This is an in-person learning conference with a focus on security, compliance, identity, and privacy. You can register now to access hundreds of technical sessions and other content. It takes place July 26 and 27 in Boston, MA.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia