Tag Archives: announcements

Spring 2022 PCI DSS report available with seven services added to compliance scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/spring-2022-pci-dss-report-available-with-seven-services-added-to-compliance-scope/

We’re continuing to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that seven new services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. This provides our customers with more options to process and store their payment card data and architect their cardholder data environment (CDE) securely in AWS.

You can see the full list of services on our Services in Scope by Compliance program page. The seven new services are:

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). Customers can access the Attestation of Compliance (AOC) report demonstrating AWS’ PCI compliance status through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

Spring 2022 PCI 3DS report now available

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/spring-2022-pci-3ds-report-now-available/

We are excited to announce that Amazon Web Services (AWS) has released the latest 2022 Payment Card Industry 3-D Secure (PCI 3DS) attestation to support our customers in the financial services sector. Although AWS doesn’t perform 3DS functions directly, the AWS PCI 3DS attestation of compliance can help customers to attain their own PCI 3DS compliance for their services running on AWS.

All AWS Regions in scope for PCI DSS were included in the 3DS attestation. AWS was assessed by Coalfire, an independent Qualified Security Assessor (QSA).

AWS compliance reports, including this latest PCI 3DS attestation, are available on demand through AWS Artifact. The 3DS package available in AWS Artifact includes the 3DS Attestation of Compliance (AOC) and Shared Responsibility Guide. To learn more about our PCI program and other compliance and security programs, visit the AWS Compliance Programs page.

We value your feedback and questions. If you have feedback about this post, or want to reach out to our team, submit comments through the Contact Us page.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

Graviton Fast Start – A New Program to Help Move Your Workloads to AWS Graviton

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/graviton-fast-start-a-new-program-to-help-move-your-workloads-to-aws-graviton/

With the Graviton Challenge last year, we helped customers migrate to Graviton-based EC2 instances and get up to 40 percent better price performance in as little as 4 days. Tens of thousands of customers, including 48 of the top 50 Amazon Elastic Compute Cloud (Amazon EC2) customers, use AWS Graviton processors for their workloads. In addition to EC2, many AWS managed services can run their workloads on Graviton. For most customers, adoption is easy, requiring minimal code changes. However, the effort and time required to move workloads to Graviton depends on a few factors, including your software development environment and the technology stack on which your application is built.

This year, we want to take it a step further and make it even easier for customers to adopt Graviton not only through EC2, but also through managed services. Today, we are launching AWS Graviton Fast Start, a new program that makes it even easier to move your workloads to AWS Graviton by providing step-by-step directions for EC2 and other managed services that support the Graviton platform:

  • Amazon Elastic Compute Cloud (Amazon EC2) – EC2 provides the most flexible environment for a migration and can support many kinds of workloads, such as web apps, custom databases, or analytics. You have full control over the interpreted or compiled code running in the EC2 instance. You can also use many open-source and commercial software products that support the Arm64 architecture.
  • AWS Lambda – Migrating your serverless functions can be really easy, especially if you use an interpreted runtime such as Node.js or Python. Most of the time, you only have to check the compatibility of your software dependencies. I have shown a few examples in this blog post; a brief CLI sketch also follows this list.
  • AWS Fargate – Fargate works best if your applications are already running in containers or if you are planning to containerize them. By using multi-architecture container images or images that have Arm64 in their image manifest, you get the serverless benefits of Fargate and the price-performance advantages of Graviton.
  • Amazon Aurora – Relational databases are at the core of many applications. If you need a database compatible with PostgreSQL or MySQL, you can use Amazon Aurora to have a highly performant and globally available database powered by Graviton.
  • Amazon Relational Database Service (RDS) – Similarly to Aurora, Amazon RDS engines such as PostgreSQL, MySQL, and MariaDB can provide a fully managed relational database service using Graviton-based instances.
  • Amazon ElastiCache – When your workload requires ultra-low latency and high throughput, you can speed up your applications with ElastiCache and have a fully managed in-memory cache running on Graviton and compatible with Redis or Memcached.
  • Amazon EMR – With Amazon EMR, you can run large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications on Graviton using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto.
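
If you want a first quick check before migrating, here is a minimal CLI sketch. The function name and deployment package are placeholders, and the Lambda example assumes your dependencies already build and run on Arm64:

# List current-generation instance types that support the Arm64 (Graviton) architecture
$ aws ec2 describe-instance-types \
      --filters "Name=processor-info.supported-architecture,Values=arm64" \
      --query "InstanceTypes[].InstanceType" --output text

# Redeploy an existing Lambda function's code on the Arm64 architecture
$ aws lambda update-function-code \
      --function-name my-function \
      --architectures arm64 \
      --zip-file fileb://function.zip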

Here’s some feedback we got from customers running their workloads on Graviton:

  • Formula 1 racing told us that Graviton2-based C6gn instances provided the best price performance benefits for some of their computational fluid dynamics (CFD) workloads. More recently, they found that Graviton3 C7g instances are 40 percent faster for the same simulations and expect Graviton3-based instances to become the optimal choice to run all of their CFD workloads.
  • Honeycomb has 100 percent of their production workloads running on Graviton using EC2 and Lambda. They have tested the high-throughput telemetry ingestion workload they use for their observability platform against early preview instances of Graviton3 and have seen a 35 percent performance increase for their workload over Graviton2. They were able to run 30 percent fewer instances of C7g than C6g serving the same workload and with 30 percent reduced latency. With these instances in production, they expect over 50 percent price performance improvement over x86 instances.
  • Twitter is working on a multi-year project to leverage Graviton-based EC2 instances to deliver Twitter timelines. As part of their ongoing effort to drive further efficiencies, they tested the new Graviton3-based C7g instances. Across a number of benchmarks representative of their workloads, they found Graviton3-based C7g instances deliver 20-80 percent higher performance compared to Graviton2-based C6g instances, while also reducing tail latencies by as much as 35 percent. They are excited to utilize Graviton3-based instances in the future to realize significant price performance benefits.

With all these options, getting the benefits of running all or part of your workload on AWS Graviton can be easier than you expect. To help you get started, there’s also a free trial on the Graviton-based T4g instances for up to 750 hours per month through December 31st, 2022.

Visit AWS Graviton Fast Start to get step-by-step directions on how to move your workloads to AWS Graviton.

Danilo

New – Run Visual Studio Software on Amazon EC2 with User-Based License Model

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-run-visual-studio-software-on-amazon-ec2-with-user-based-license-model/

We are announcing the general availability of license-included Visual Studio software on Amazon Elastic Compute Cloud (Amazon EC2) instances. You can now purchase fully compliant AWS-provided licenses of Visual Studio with a per-user subscription fee. Amazon EC2 provides preconfigured Amazon Machine Images (AMIs) of Visual Studio Enterprise 2022 and Visual Studio Professional 2022. You can launch on-demand Windows instances including Visual Studio and Windows Server licenses without long-term licensing commitments.

Amazon EC2 provides a broad choice of instances, and customers not only have the flexibility of paying for what their end users use but can also provide the capacity and right hardware to their end users. You can simply launch EC2 instances using license-included AMIs, and multiple authorized users can connect to these EC2 instances by using Remote Desktop software. Your administrator can authorize users centrally using AWS License Manager and AWS Managed Microsoft Active Directory (AD).

Configure Visual Studio License with AWS License Manager
As a prerequisite, your administrator needs to create an instance of AWS Managed Microsoft AD and grant AWS License Manager permission to onboard to it. To set up authorized users, see the AWS Managed Microsoft AD documentation.
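
If you prefer the CLI, here is a sketch of creating that directory. The domain name, password, VPC, and subnet IDs are placeholders for your own values:

# Create an AWS Managed Microsoft AD directory in an existing VPC
$ aws ds create-microsoft-ad \
      --name corp.example.com \
      --password '<StrongPassword>' \
      --edition Standard \
      --vpc-settings VpcId=vpc-0123456789abcdef0,SubnetIds=subnet-0123456789abcdef0,subnet-0fedcba9876543210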

AWS License Manager makes it easier to manage your software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments. To display a list of available Visual Studio software licenses, select User-based subscriptions in the AWS License Manager console.

You can see the products that support user-based subscriptions. Each product has a descriptive name, a count of users subscribed to the product, and whether the subscription has been activated for use with a directory. You are also required to purchase Remote Desktop Services SAL licenses in the same way as Visual Studio, by authorizing users for those licenses.

When you select Visual Studio Professional, you can see product details and subscribed users. By selecting Subscribe users, you can add authorized users to the license of Visual Studio Professional software.

You can perform the administrative tasks using the AWS Command Line Interface (CLI) tools via AWS License Manager APIs. For example, you can subscribe a user to the product in your Active Directory.

$ aws license-manager-user-subscriptions start-product-subscription \
         --username vscode2 \
         --product VISUAL_STUDIO_PROFESSIONAL \
         --identity-provider "ActiveDirectoryIdentityProvider={DirectoryId=d-9067b110b5}" \
         --endpoint-url https://license-manager-user-subscriptions.us-east-1.amazonaws.com

To launch a Windows instance with preconfigured Visual Studio software, go to the EC2 console and select Launch instances. In the Application and OS Images (Amazon Machine Image), search for “Visual Studio on EC2,” and you can find AMIs under the Quickstart AMIs and AWS Marketplace AMIs tabs.
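
If you prefer to script this, you can look up the license-included AMIs and launch an instance from the CLI. This is only a sketch: the AMI name filter is my assumption, and the image, instance type, and network parameters are placeholders to adapt to your environment:

# Find Windows AMIs whose name mentions Visual Studio (name filter is an assumption)
$ aws ec2 describe-images \
      --owners amazon \
      --filters "Name=name,Values=*Visual Studio*" "Name=platform,Values=windows" \
      --query "Images[].{Id:ImageId,Name:Name}" --output table

# Launch an instance from the chosen AMI
$ aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type m5.large \
      --subnet-id subnet-0123456789abcdef0 \
      --security-group-ids sg-0123456789abcdef0 \
      --key-name my-key-pair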

After launching your Windows instance, your administrator associates a user with the product in the Instances screen of the License Manager console. The listed instances were launched from an AMI that provides the specified product, and users can then be associated with them.

These steps will be performed by the administrators who are responsible for managing users, instances, and costs across the organization. To learn more about administrative tasks, see User-based subscriptions in AWS License Manager.

Run Visual Studio Software on EC2 Instances
Once administrators authorize end users and launch the instances, you can remotely connect to Visual Studio instances using your AD account information shared by your administrator via Remote Desktop software. That’s all!

The instances deployed for user-based subscriptions must remain as managed nodes with AWS Systems Manager. For more information, see Troubleshooting managed node availability and Troubleshooting SSM Agent in the AWS Systems Manager User Guide.

Now Available
License-included Visual Studio on Amazon EC2 is now available in all AWS commercial and public Regions. You are billed per user for licenses of Visual Studio through a monthly subscription and per vCPU for license-included Windows Server instances on EC2. You can use On-Demand Instances, Reserved Instances, and Savings Plans pricing models as you do today for EC2 instances.

To learn more, visit our License Manager User Based Subscriptions documentation, and please send any feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy

New – AWS Skill Builder Subscriptions

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-aws-skill-builder-subscriptions/

Today, I am excited to announce AWS Skill Builder Individual and Team subscriptions. This is a new way for you to learn about cloud technologies and get practical experience with hands-on training.

Between 2013 and 2016, I spent three years delivering AWS Training classes to customers in Europe, North America, and Asia. At the time, the only classes we offered were in-person, instructor-led classes. Now, you have the choice between a variety of digital courses or in-person classes, lecture-style or hands-on. The foundations are available online for free, and the new subscriptions we are announcing today give you access to a range of exclusive content to advance your cloud skills and prepare for AWS Certification exams with self-paced, digital training. The subscriptions allow you to learn AWS services with hands-on activities.

At Amazon, we often say that it is still Day 1. The cloud market is still nascent. Gartner predicts global public cloud spending will grow from $396 billion to $482 billion this year, an increase of about 22 percent, yet this is still just 10 percent of total global IT spending in 2022. I talk with customers every day. When I ask them the main obstacles to adopting the cloud, they all mention the lack of trained IT professionals. In fact, 76 percent of IT decision-makers report an IT skills gap, up from 31 percent in 2016, according to the Global Knowledge IT Skills and Salary Report, one of the largest studies of industry salaries, certifications, skills, and more.

To close the skills gap, we want to give learners hands-on experience with cloud technologies.

What Content Is Available When I Subscribe?
Starting today, AWS Skill Builder subscriptions give registered individuals and organizations access to exclusive learning materials built by builders for builders. In addition to our 500+ free courses, there are four new learning experiences available.

AWS Builder Labs are hands-on guided exercises to develop practical skills for common cloud scenarios. You receive a sandbox AWS account for the duration of the lab. There is no need for you to use your own AWS account and risk accruing unwanted charges. Next, we provide you with step-by-step instructions to go through a typical cloud scenario. It goes from simple tasks, such as configuring Amazon Simple Storage Service (Amazon S3) to host a static website, to more advanced scenarios, such as developing a serverless web application using Amazon DynamoDB. These are just two examples, and we have 100+ labs available for you to learn by doing it yourself.

AWS Jam gives you clues to guide you in solving real-world, open-ended problems. There are no step-by-step instructions, just hints. There are two types of AWS Jam: AWS Jam Journey and AWS Jam events. Jam events are exclusive to the Team subscription. Once started, the Jam Journey is available for several months to give you time to complete all the challenges at your own pace and schedule. With Jam events, team administrators can create events where teams come together at a certain date and time to solve challenges and compete with each other. AWS Jam events provide 140+ challenges across different domains.

Let’s take a practical example. When you select the security Jam, you are tasked with resolving a series of security-related challenges curated by AWS experts. Tasks might be to perform a security posture evaluation, restore a previous version of a static website, or encrypt an existing Amazon Relational Database Service (Amazon RDS) database with a customer-managed AWS Key Management Service (AWS KMS) key.

Here is the dashboard for the security AWS Jam Journey.

AWS Jam - Security

AWS Cloud Quest is a role-based game where your mission is to help citizens of a virtual city by learning and building cloud solutions for their challenges. You move around in the city, and you’re assigned tasks to complete. Each time you complete a task, you get rewards, which you can use to transform the city. For each task, the Solution Center guides you through four steps: learn the cloud concept to complete the task, practice the execution of the task with instructions, practice by yourself, and evaluate the result. Once again, the practice is done inside an AWS sandbox environment where you can safely test your new skill. To evaluate the result, the Solution Center asks you to enter validation data, such as the name of an S3 bucket or a URL. The system automatically verifies your setup and grants you points when the test succeeds. As of today, there are four roles available: Cloud Practitioner, Solutions Architect, Serverless Developer, and Machine Learning Specialist. We have plans to add more roles to this list over time. AWS Cloud Quest is a fun way to learn cloud skills!

We’ll see Cloud Quest in action in a minute.

AWS Certification Official Practice Exams are, as the name implies, full-length practice exams to help you to evaluate your exam readiness. But wait! Aren’t there free Official Practice Question Sets already? Yes! But in addition to those free 20-question practice question sets, subscribed individuals or teams can now prepare for AWS Certification with new exam preparation courses that include practice materials and the full-length AWS Certification Official Practice Exams. We have designed the exam preparation courses to help you assess your exam preparedness. Each exam preparation course includes a review of technical content, practice questions, lab exercises, and access to the AWS Certification Official Practice Exams. And this is not just a pass/fail exercise. Official practice exams come with thorough feedback for each question and scaled scores simulating actual exam scores. The questions presented have the same style, depth, rigor, and scoring as our AWS Certification exams. Full-length practice exams and exam preparation courses are currently available for the AWS Certified Cloud Practitioner, AWS Certified Solutions Architect – Associate, and AWS Certified SysOps Administrator – Associate certifications, with more to come. Much of the other content available through the subscription, such as AWS Builder Labs and AWS Cloud Quest, can complement your exam preparation.

Here is a typical screen for an Official Practice Exam. I blurred the answers obviously.

Skill Builder Practice Exam

Type of Subscriptions
Both Individual and Team subscriptions include these four new learning experiences. Team subscriptions are available to organizations that want to purchase seats for 50 or more people. Besides a tiered pricing model based on the number of seats, a Team subscription gives you administrator functionality and a single sign-on experience for employees. Team administrators may assign training to individuals to drive targeted skills in their team and track progress. Built-in reports show course enrollment, course progress, completion rates, and more.

This table compares the free digital training, the Individual subscription, and the Team subscription.

SkillBuilder Subscription Comparison

Let’s See It in Action
Regular readers of this blog know we like to show you what we are talking about. Let’s see what AWS Cloud Quest looks like. First, I open AWS Skill Builder and subscribe as an individual.

AWS Skill Builder Subscription Plans

Then, I search for Cloud Quest and launch the experience.

AWS Cloud Quest

I select the role playing game I want to start. I have the choice between Cloud Practitioner, Solutions Architect, Serverless Developer, and Machine Learning Specialist.

Select a quest

Just like in every role-playing game, I may personalize my avatar before starting the game. Any resemblance to the actual me is pure coincidence 🤔.

Quest : personalize my avatar

And finally, I am ready to walk the city, help citizens, and complete my challenges.

quest : start my mission

How Much Does It Cost?
Inclusion is a core value at Amazon. We believe everybody must have a chance to learn and grow their professional career. We made the Individual subscription available in over 200 countries and territories and in 12 languages: Chinese (Simplified), Chinese (Traditional), English, French (France), German, Indonesian, Italian, Japanese, Korean, Portuguese (Brazil), Spanish (Latin America), and Spanish (Spain). AWS Cloud Quest is in English.

The Individual subscription is offered monthly at the price of $29 per month or annually at the price of $299 per year (this is a 14 percent discount compared to the monthly price). The subscription fee is added to your monthly AWS bill, and there is no need to have a separate credit card or billing agreement. As usual with AWS, you can stop the subscription at any time.

The Team subscription is available for purchase in 17 countries (Australia, Brazil, Canada, Colombia, France, Germany, Ireland, India, Israel, Japan, Netherlands, New Zealand, Singapore, South Korea, Spain, United Kingdom, and the United States) and in the same languages as the Individual subscription. It is available for teams of 50 or more people. We offer an annual plan for $449 per year and per seat, with tiered pricing based on volume. Our pricing page has all the details.

I am excited to see a new generation of IT professionals acquiring AWS Cloud skills. I can’t wait to discover the new use cases, applications, or innovations you will bring to the world when armed with these new skills.

And now, get your AWS Skill Builder subscription and go learn.

— seb

AWS Week in Review – August 1, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-august-1-2022/

AWS re:Inforce returned to Boston last week, kicking off with a keynote from Amazon Chief Security Officer Steve Schmidt and AWS Chief Information Security Officer C.J. Moses:

Be sure to take some time to watch this video and the other leadership sessions, and to use what you learn to take some proactive steps to improve your security posture.

Last Week’s Launches
Here are some launches that caught my eye last week:

AWS Wickr uses 256-bit end-to-end encryption to deliver secure messaging, voice, and video calling, including file sharing and screen sharing, across desktop and mobile devices. Each call, message, and file is encrypted with a new random key and can be decrypted only by the intended recipient. AWS Wickr supports logging to a secure, customer-controlled data store for compliance and auditing, and offers full administrative control over data: permissions, ephemeral messaging options, and security groups. You can now sign up for the preview.

AWS Marketplace Vendor Insights helps AWS Marketplace sellers to make security and compliance data available through AWS Marketplace in the form of a unified, web-based dashboard. Designed to support governance, risk, and compliance teams, the dashboard also provides evidence that is backed by AWS Config and AWS Audit Manager assessments, external audit reports, and self-assessments from software vendors. To learn more, read the What’s New post.

GuardDuty Malware Protection protects Amazon Elastic Block Store (EBS) volumes from malware. As Danilo describes in his blog post, a malware scan is initiated when Amazon GuardDuty detects that a workload running on an EC2 instance or in a container appears to be doing something suspicious. The new malware protection feature creates snapshots of the attached EBS volumes, restores them within a service account, and performs an in-depth scan for malware. The scanner supports many types of file systems and file formats and generates actionable security findings when malware is detected.

Amazon Neptune Global Database lets you build graph applications that run across multiple AWS Regions using a single graph database. You can deploy a primary Neptune cluster in one region and replicate its data to up to five secondary read-only database clusters, with up to 16 read replicas each. Clusters can recover in minutes in the event of an (unlikely) regional outage, with a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. To learn a lot more and see this new feature in action, read Introducing Amazon Neptune Global Database.

Amazon Detective now Supports Kubernetes Workloads, with the ability to scale to thousands of container deployments and millions of configuration changes per second. It ingests EKS audit logs to capture API activity from users, applications, and the EKS control plane, and correlates user activity with information gleaned from Amazon VPC flow logs. As Channy notes in his blog post, you can enable Amazon Detective and take advantage of a free 30-day trial of the EKS capabilities.

AWS SSO is Now AWS IAM Identity Center in order to better represent the full set of workforce and account management capabilities that are part of IAM. You can create user identities directly in IAM Identity Center, or you can connect your existing Active Directory or standards-based identity provider. To learn more, read this post from the AWS Security Blog.

AWS Config Conformance Packs now provide you with percentage-based scores that will help you track resource compliance within the scope of the resources addressed by the pack. Scores are computed based on the product of the number of resources and the number of rules, and are reported to Amazon CloudWatch so that you can track compliance trends over time. To learn more about how scores are computed, read the What’s New post.
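
To make the computation concrete: assuming the score is the share of compliant rule-resource combinations, a pack with 10 rules evaluated against 20 resources has 200 combinations, and 150 compliant combinations would give a score of 75 percent. The sketch below shows how I would pull the scores from the CLI; treat the exact command and field names as assumptions and check the AWS Config CLI reference:

# List compliance scores for all conformance packs in the account
$ aws configservice list-conformance-pack-compliance-scores \
      --query "ConformancePackComplianceScores[].[ConformancePackName,Score]" --output table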

Amazon Macie now lets you perform one-click temporary retrieval of sensitive data that Macie has discovered in an S3 bucket. You can retrieve up to ten examples at a time, and use these findings to accelerate your security investigations. All of the data that is retrieved and displayed in the Macie console is encrypted using customer-managed AWS Key Management Service (AWS KMS) keys. To learn more, read the What’s New post.

AWS Control Tower was updated multiple times last week. CloudTrail Organization Logging creates an org-wide trail in your management account to automatically log the actions of all member accounts in your organization. Control Tower now reduces redundant AWS Config items by limiting recording of global resources to home regions. To take advantage of this change you need to update to the latest landing zone version and then re-register each Organizational Unit, as detailed in the What’s New post. Lastly, Control Tower’s region deny guardrail now includes AWS API endpoints for AWS Chatbot, Amazon S3 Storage Lens, and Amazon S3 Multi Region Access Points. This allows you to limit access to AWS services and operations for accounts enrolled in your AWS Control Tower environment.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some other news items and customer stories that you may find interesting:

AWS Open Source News and Updates – My colleague Ricardo Sueiras writes a weekly open source newsletter and highlights new open source projects, tools, and demos from the AWS community. Read installment #122 here.

Growy Case Study – This Netherlands-based company is building fully-automated robot-based vertical farms that grow plants to order. Read the case study to learn how they use AWS IoT and other services to monitor and control light, temperature, CO2, and humidity to maximize yield and quality.

Journey of a Snap on Snapchat – This video shows you how a snap flows end-to-end from your camera, to AWS, to your friends. With over 300 million daily active users, Snap takes advantage of Amazon Elastic Kubernetes Service (EKS), Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon CloudFront, and many other AWS services, storing over 400 terabytes of data in DynamoDB and managing over 900 EKS clusters.

Cutting Cardboard Waste – Bin packing is almost certainly a part of every computer science curriculum! In the linked article from the Amazon Science site, you can learn how an Amazon Principal Research Scientist developed PackOpt to figure out the optimal set of boxes to use for shipments from Amazon’s global network of fulfillment centers. This is an NP-hard problem and the article describes how they built a parallelized solution that explores a multitude of alternative solutions, all running on AWS.

Upcoming Events
Check your calendar and sign up for these online and in-person AWS events:

AWS Global Summits – AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Registrations are open for the following AWS Summits in August:

IMAGINE 2022 – The IMAGINE 2022 conference will take place on August 3 at the Seattle Convention Center, Washington, USA. It’s a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud. You can register here.

That’s all for this week. Check back next Monday for another Week in Review!

Jeff;

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

New for AWS Global Accelerator – Internet Protocol Version 6 (IPv6) Support

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-global-accelerator-internet-protocol-version-6-ipv6-support/

IPv6 adoption has consistently increased over the last few years, especially among mobile networks. The main reasons to move to IPv6 are:

  • The limited availability of IPv4 addresses can limit the ability to scale up public-facing web and application servers.
  • IPv6 users from mobile networks experience better performance when their network traffic doesn’t need to go through IPv6-to-IPv4 translation.
  • You might need to comply with regulatory rules (such as the Federal Acquisition Regulation in the US) to run specific internet traffic over IPv6.

Based on this, we found that we could help improve the network path that your customers use to reach your applications by adding IPv6 support to AWS Global Accelerator. Global Accelerator uses the AWS global network to route network traffic and keep packet loss, jitter, and latency consistently low. Customers like Atlassian, New Relic, and SkyScanner already use Global Accelerator to improve the global availability and performance of their applications.

Global Accelerator provides two global static public IPs that act as a fixed entry point to your application. You can update your application endpoints without making user-facing changes to the IP address. If you configure more than one application endpoint, Global Accelerator automatically reroutes your traffic to your nearest healthy available endpoint to mitigate endpoint failure.

Starting today, you can provide better network performance by routing IPv6 traffic through Global Accelerator to your application endpoints running in AWS Regions. Global Accelerator now supports two types of accelerators: dual-stack and IPv4-only. With a dual-stack accelerator, you are provided with a pair of IPv4 and IPv6 global static IP addresses that can serve both IPv4 and IPv6 traffic.

For existing IPv4-only accelerators, you can update your accelerators to dual-stack to serve both IPv4 and IPv6 traffic. This update enables your accelerator to serve IPv6 traffic and doesn’t impact existing IPv4 traffic served by the accelerator.

Dual-stack accelerators supporting both IPv6 and IPv4 traffic require dual-stack endpoints in the back end. For example, Application Load Balancers (ALBs) can have their IP address type configured as either IPv4-only or dual stack, allowing them to accept both IPv4 and IPv6 client connections. Today, dual-stack ALBs are supported as endpoints for dual-stack accelerators.
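
Here is a rough CLI sketch of those two operations; the ARNs are placeholders, and I'm assuming the dual-stack values shown for the --ip-address-type parameters. Note that the Global Accelerator API is called through the us-west-2 Region:

# Update an existing IPv4-only accelerator to dual-stack
$ aws globalaccelerator update-accelerator \
      --accelerator-arn arn:aws:globalaccelerator::123456789012:accelerator/abcd1234-ab12-cd34-ef56-abcdef123456 \
      --ip-address-type DUAL_STACK \
      --region us-west-2

# Switch an Application Load Balancer to dual stack so it can accept IPv6 clients
$ aws elbv2 set-ip-address-type \
      --load-balancer-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-alb/0123456789abcdef \
      --ip-address-type dualstack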

Deploying a Dual-Stack Application
To test this new feature, I need a dual-stack application with an ALB entry point. The application must be deployed in Amazon Virtual Private Cloud (Amazon VPC) and support IPv6 traffic. I don’t happen to have IPv6-ready VPCs in my account. I can follow these instructions to migrate an existing VPC that supports IPv4 only to IPv6, or I can create a VPC that supports IPv6 addressing. For this post, I choose to create a VPC.

In the AWS Management Console, I navigate to the Amazon VPC Dashboard. I choose Launch VPC Wizard. In the wizard, I enter a value for the Name tag. This value will be used to auto-generate Name tags for all resources in the VPC. Then, I select the option to associate an Amazon-provided IPv6 CIDR block. I leave all other options to their default values and choose Create VPC.

Console screenshot.

After less than a minute, the VPC is ready. I edit the settings of both public subnets to enable the Auto-assign IP settings to automatically request both a public IPv4 address and an IPv6 address for new network interfaces in this subnet.

Console screenshot.
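
If you prefer to script this instead of using the VPC wizard, here is a minimal sketch. The CIDR block and subnet ID are placeholders, and it assumes the subnet already has an IPv6 CIDR block associated with it:

# Create a VPC with an Amazon-provided IPv6 CIDR block
$ aws ec2 create-vpc \
      --cidr-block 10.0.0.0/16 \
      --amazon-provided-ipv6-cidr-block

# Request both a public IPv4 address and an IPv6 address for new network interfaces
# (modify-subnet-attribute accepts one attribute per call)
$ aws ec2 modify-subnet-attribute --subnet-id subnet-0123456789abcdef0 --map-public-ip-on-launch
$ aws ec2 modify-subnet-attribute --subnet-id subnet-0123456789abcdef0 --assign-ipv6-address-on-creation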

Now, I want to deploy an application in this VPC. The application will be the endpoint for my accelerator. I view and download the WordPress scalable and durable AWS CloudFormation template from the Sample solutions section of the CloudFormation documentation. This template deploys a full WordPress website behind an ALB. The web tier is scalable and implemented as an EC2 Auto Scaling group. The MySQL database is managed by Amazon Relational Database Service (RDS).

Before deploying the stack, I edit the template to make a few changes. First, I add a DBSubnetGroup resource:

"DBSubnetGroup" : {
  "Type": "AWS::RDS::DBSubnetGroup",
  "Properties": {
    "DBSubnetGroupDescription" : "DB subnet group",
    "SubnetIds" : { "Ref" : "Subnets"}
  }
},

Then, I add the DBSubnetGroupName property to the DBInstance resource. In this way, the database created by the template will be deployed in the same subnets (and VPC) as the web servers.

"DBSubnetGroupName" : { "Ref" : "DBSubnetGroup" },

The last change adds the IpAddressType property to the ApplicationLoadBalancer resource:

"IpAddressType": "dualstack",

Because IpAddressType is set to dualstack, the ALB created by the stack will also have IPv6 addresses and will be ready to be used with the new dual-stack option of Global Accelerator.

In the CloudFormation console, I create a stack and upload the template I just edited. In the template parameters, I enter a database user and password to use. For the VpcId parameter, I select the IPv6-ready VPC I just created. For the Subnets parameter, I select the two public subnets of the same VPC. After that, I go to the next steps and create the stack.

After a few minutes, the stack creation is complete. To access the website, I need to open network access to the load balancer. In the EC2 console, I create a security group that allows public access using the HTTP and HTTPS protocols (ports 80 and 443).

Console screenshot.
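
The equivalent CLI calls look like this (the VPC and security group IDs are placeholders); to serve IPv6 clients, the rules include ::/0 in addition to 0.0.0.0/0:

# Create the security group in the IPv6-ready VPC
$ aws ec2 create-security-group \
      --group-name wordpress-web \
      --description "Allow HTTP and HTTPS from anywhere" \
      --vpc-id vpc-0123456789abcdef0

# Allow inbound HTTP and HTTPS over both IPv4 and IPv6
$ aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,IpRanges=[{CidrIp=0.0.0.0/0}],Ipv6Ranges=[{CidrIpv6=::/0}]' \
                       'IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=0.0.0.0/0}],Ipv6Ranges=[{CidrIpv6=::/0}]'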

I choose Load balancers from the navigation pane and select the ALB used by my application. In the Security section, I choose Edit security groups and add the security group I just created to allow web access.

Console screenshot.

Now, I look for the dual-stack (A or AAAA Record) DNS name of the load balancer. I open a browser and connect using the DNS name to complete the configuration of WordPress.

Website.
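
To double-check that the load balancer is really serving both address families before moving on, I can look up both record types for its DNS name (the hostname below is a placeholder):

$ dig +short A dualstack.my-alb-1234567890.eu-west-1.elb.amazonaws.com
$ dig +short AAAA dualstack.my-alb-1234567890.eu-west-1.elb.amazonaws.com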

When connecting again to the endpoint, I see my new (and empty) WordPress website.

Website.

Using Dual-Stack Accelerators with Support for Both IPv6 and IPv4 traffic
Now that my application is ready, I add a dual-stack accelerator in front of the dual-stack ALB. In the Global Accelerator console, I choose Create accelerator. I enter a name for the accelerator and choose the Standard accelerator type.

Console screenshot.

To route both IPv4 and IPv6 through this accelerator, I select the Dual-stack option for the IP address type.

Console screenshot.

Then I add a listener for port 80 using the TCP protocol.

Console screenshot.

For that listener, I configure an endpoint group in the AWS Region where I have my application deployed.

Console screenshot.

I choose Application Load Balancer for the Endpoint type and select the ALB in the CloudFormation stack.

Console screenshot.

Then, I choose Create accelerator. After a few minutes, the accelerator is deployed, and I have a dual-stack DNS name to reach the ALB using IPv4 or IPv6 depending on the network used by the client.

Console screenshot.
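
For completeness, here is roughly what the same setup looks like from the CLI. The ARNs and endpoint group Region are placeholders, I'm assuming the DUAL_STACK value for --ip-address-type, and the Global Accelerator API is called through the us-west-2 Region:

# Create the dual-stack accelerator
$ aws globalaccelerator create-accelerator \
      --name my-dual-stack-accelerator \
      --ip-address-type DUAL_STACK \
      --region us-west-2

# Add a TCP listener on port 80
$ aws globalaccelerator create-listener \
      --accelerator-arn <accelerator-arn> \
      --protocol TCP \
      --port-ranges FromPort=80,ToPort=80 \
      --region us-west-2

# Point an endpoint group at the dual-stack ALB, with client IP preservation enabled
$ aws globalaccelerator create-endpoint-group \
      --listener-arn <listener-arn> \
      --endpoint-group-region eu-west-1 \
      --endpoint-configurations EndpointId=<alb-arn>,ClientIPPreservationEnabled=true \
      --region us-west-2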

Now, my customers can use the IPv4 and IPv6 addresses or, even better, the dual-stack DNS name of the accelerator to connect to the WordPress website. If there is a front-end or mobile application my customers use to connect to the WordPress REST APIs, I can use the dual-stack DNS name so that clients will connect using their preferred IPv4 or IPv6 route.

To understand if the communication between Global Accelerator and the ALB is working, I can monitor the new FlowsDrop Amazon CloudWatch metric. This metric tells me if Global Accelerator is unable to route IPv6 traffic through the endpoint. For example, that can happen if, after the creation of the accelerator, the configuration of the ALB is updated to use IPv4 only.
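
As a quick way to watch that metric from the CLI, I can pull its recent data points. I'm assuming the AWS/GlobalAccelerator namespace and the Accelerator dimension name here; the accelerator value and time range are placeholders:

$ aws cloudwatch get-metric-statistics \
      --namespace AWS/GlobalAccelerator \
      --metric-name FlowsDrop \
      --dimensions Name=Accelerator,Value=<accelerator-id> \
      --start-time 2022-08-01T00:00:00Z --end-time 2022-08-01T01:00:00Z \
      --period 300 --statistics Sum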

Availability and Pricing
You can configure dual-stack accelerators using the AWS Management Console, the AWS Command Line Interface (CLI), and AWS SDKs. You can use dual-stack accelerators to optimize access to your applications deployed in any commercial AWS Region.

Protocol translation is not supported in either direction, neither IPv4 to IPv6 nor IPv6 to IPv4. For example, Global Accelerator will not allow me to configure a dual-stack accelerator with an IPv4-only ALB endpoint. Also, for IPv6 ALB endpoints, client IP preservation must be enabled.

There are no additional costs for using dual-stack accelerators. You pay for the hours and the amount of data transfer in the dominant direction used by traffic to or from the accelerator. Data transfer costs depend on the location of your clients and the AWS Regions where you are running your applications. For more information, see the Global Accelerator pricing page.

Optimize the IPv6 and IPv4 network paths used by your customers to reach your applications with AWS Global Accelerator.

Danilo

Fortinet FortiCNP – Now Available in AWS Marketplace

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/fortinet-forticnp-now-available-in-aws-marketplace/

When I first started to talk about AWS in front of IT professionals, they would always listen intently and ask great questions. Invariably, a seasoned pro would raise their hand and ask “This all sounds great, but have you thought about security?” Of course we had, and for a while I would describe our principal security features ahead of time instead of waiting for the question.

Today, the field of cloud security is well-developed, as is the practice of SecOps (Security Operations). There are plenty of tools, plenty of best practices, and a heightened level of awareness regarding the importance of both. However, as on-premises workloads continue to migrate to the cloud, SecOps practitioners report concerns about alert fatigue and about choosing tools that ensure the desired level of workload coverage. According to a recent survey conducted by Fortinet, 78% of the respondents were looking for a single cloud security platform that offers sufficient workload coverage to address all of their needs.

Fortinet FortiCNP
In response to this clear need for a single tool that addresses cloud workloads and cloud storage, Fortinet has launched FortiCNP (Cloud Native Protection). As the name implies, this security product is designed to offer simple & effective protection of cloud resources. It monitors and tracks multiple sources of security issues, including configurations, user activity, and VPC Flow Logs. FortiCNP scans cloud storage for content that is sensitive or malicious, and also inspects containers for vulnerabilities and misconfigurations. The findings and alerts generated by all of this monitoring, tracking, and scanning are mapped into actionable insights and compliance reports, all available through a single dashboard.

Now in AWS Marketplace
I am happy to report that FortiCNP is now available in AWS Marketplace and that you can start your subscription today! It connects to multiple AWS security tools including Amazon Inspector, AWS Security Hub, and Amazon GuardDuty, with plans to add support for Amazon Macie, and other Fortinet products such as FortiEDR (Endpoint Detection and Response) and FortiGate-VM (next-generation firewall) later this year.

FortiCNP provides you with features that are designed to address your top risk management, threat management, compliance, and SecOps challenges. Drawing on all of the data sources and tools that I mentioned earlier, it runs hundreds of configuration assessments to identify risks, and then presents the findings in a scored, prioritized fashion.

Getting Started with FortiCNP
After subscribing to FortiCNP in AWS Marketplace, I set up my accounts and enable some services. In the screenshots that follow I will show you the highlights of each step, and link you to the docs for more information:

Enable Security Hub and EventBridge – Following the instructions in AWS Security Hub and EventBridge Configuration, I choose an AWS region to hold my aggregated findings, enable Amazon GuardDuty and Amazon Inspector, and route the findings to AWS Security Hub.
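
If you would rather enable these services from the CLI, the calls look roughly like this. Run them in the Region you chose for aggregated findings; the Amazon Inspector resource types shown are my assumption:

# Enable Security Hub, GuardDuty, and Amazon Inspector in the current Region
$ aws securityhub enable-security-hub
$ aws guardduty create-detector --enable
$ aws inspector2 enable --resource-types EC2 ECR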

Add VPC Flow Logs – Again following the instructions (AWS Traffic Configuration), I enable VPC Flow Logs. This allows FortiCNP to access cloud traffic data and present it in the Traffic view.
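
Here is a sketch of enabling VPC Flow Logs for a VPC from the CLI, delivering the logs to an S3 bucket; the VPC ID and bucket name are placeholders:

# Publish flow logs for all traffic in the VPC to an S3 bucket
$ aws ec2 create-flow-logs \
      --resource-type VPC \
      --resource-ids vpc-0123456789abcdef0 \
      --traffic-type ALL \
      --log-destination-type s3 \
      --log-destination arn:aws:s3:::my-flow-logs-bucket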

Add AWS Accounts – FortiCNP can protect a single AWS account or all of the accounts in an entire Organization, or anywhere in-between. Accounts and Organizations can be added manually, or by using a CloudFormation template that sets up an IAM Role, enables CloudTrail, and takes care of other housekeeping. To learn more, read Amazon Web Services Account OnBoarding. Using the ADMIN page of FortiCNP, I choose to add a single account using a template:

Following the prompts, I run a CloudFormation template and review the resources that it creates:

After a few more clicks, FortiCNP verifies my license and then I am ready to go.

Enable Storage Guardian – I can enable data protection for individual S3 buckets, and initiate a scan (more info at Activate Data Protection on Bucket / Container).

With all of the setup steps complete, I can review and act on the findings. I start by reviewing the dashboard:

Because I just started using the product, the overall risk trend section at the top has just a few days worth of history. The Resource Overview shows that my resources are at low risk, with only informational messages. I have no exposed storage with sensitive data, and none with malware (always good to know).

I can click on a resource type to learn more about the findings. Each resource has an associated risk score:

From here I can click on a resource to see which of the findings contribute to the risk score:

I can switch to the Changes tab to see all relevant configuration changes for the resource:

I can also add notes to the resource, and I can send notifications to several messaging and ticketing systems:

Compliance reports are generated automatically on a monthly, quarterly, and yearly basis. I can also generate a one-time compliance report to cover any desired time frame:

Reports are available immediately, and can be downloaded for review:

The policies that are used to generate findings are open and accessible, and can be enabled, disabled, and fine-tuned. For example, here is the Alert on activity from suspicious locations policy (sorry, all of you who are connecting from Antarctica):

There’s a lot more, but I am just about out of space. Check out the online documentation to learn more.

Available Today
You can subscribe to FortiCNP now and start enjoying the benefits today!

Jeff;

Welcoming the AWS Customer Incident Response Team

Post Syndicated from Kyle Dickinson original https://aws.amazon.com/blogs/security/welcoming-the-aws-customer-incident-response-team/

Well, hello there! Thanks for reading our inaugural blog post.

Who are we, you ask? We are the AWS Customer Incident Response Team (CIRT). The AWS CIRT is a specialized 24/7 global Amazon Web Services (AWS) team that provides support to customers during active security events on the customer side of the AWS Shared Responsibility Model. The team is made up of AWS Global Services Consultants and Solutions Architects with experience in incident response.

When the AWS CIRT supports you, you will receive assistance with triage and recovery for an active security event on AWS. We will assist in root cause analysis through the use of AWS service logs and provide you with recommendations for recovery. We will also provide security tips and best practices to help you avoid security events in the future.

AWS CIRT is thrilled to talk with you in this new medium! This is AWS after all; getting customer feedback and taking steps to make your experience better is our number one goal. The AWS CIRT has heard from customers that they are challenged with 24/7 prevention, detection, analysis, and response to security events. You’ve told us that you are seeking the right AWS skill sets, knowledge, and best practices to address your security response needs in the case of an active security event. AWS CIRT wants to share our knowledge with you so that you can excel in preventing and detecting security events in the cloud.

Figure 1 shows the two different sides of the shared responsibility model, in which AWS is responsible for security OF the cloud, while customers are responsible for security IN the cloud.

Figure 1: The Customer/AWS Shared Responsibility Model

In addition to this blog post, we’ve been working overtime on our favorite social media platform, AWS Twitch. In December 2021, we developed five episodes to share with you some of the most common causes of security events. We received so much positive customer feedback that we decided to create a bi-weekly series! You can find all episodes of The Safe Room on AWS Twitch.

We mentioned earlier that our number one goal is to help you. We’ve heard from you, and understand that many of you do not have the tools or playbooks necessary to operate your AWS workloads securely. We are pleased to announce that we have developed open-source tools to support your security needs:

Stay tuned to this blog. More tools are coming soon!

We’ve told you who we are and what we do—now, how do you contact us? All AWS Customers can engage the AWS CIRT through an AWS support case. Yes, that is correct. We support all AWS customers! For those customers that do have an account team, you can start an escalation to the AWS CIRT with the account team. However, we will always require that you open an AWS support case.

Thanks again for stopping by and giving us your eyeballs for a few minutes. Please stay tuned to this blog, because this is where we will comment on security trends and interesting stuff we find in the security world, as well as make new open source tools public.

Cloud safe, everyone!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Kyle Dickinson

Kyle is a TDIR Senior Security Consultant with the AWS Customer Incident Response Team, a team who supports AWS Customers during active security events. Kyle also hosts The Safe Room on the AWS Twitch Channel to share information on being more secure on AWS.

New for Amazon GuardDuty – Malware Detection for Amazon EBS Volumes

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-guardduty-malware-detection-for-amazon-ebs-volumes/

With Amazon GuardDuty, you can monitor your AWS accounts and workloads to detect malicious activity. Today, we are adding to GuardDuty the capability to detect malware. Malware is malicious software that is used to compromise workloads, repurpose resources, or gain unauthorized access to data. When you have GuardDuty Malware Protection enabled, a malware scan is initiated when GuardDuty detects that one of your EC2 instances or container workloads running on EC2 is doing something suspicious. For example, a malware scan is triggered when an EC2 instance is communicating with a command-and-control server that is known to be malicious or is performing denial of service (DoS) or brute-force attacks against other EC2 instances.

GuardDuty supports many file system types and scans file formats known to be used to spread or contain malware, including Windows and Linux executables, PDF files, archives, binaries, scripts, installers, email databases, and plain emails.

When potential malware is identified, actionable security findings are generated with information such as the threat and file name, the file path, the EC2 instance ID, resource tags and, in the case of containers, the container ID and the container image used. GuardDuty supports container workloads running on EC2, including customer-managed Kubernetes clusters or individual Docker containers. If the container is managed by Amazon Elastic Kubernetes Service (EKS) or Amazon Elastic Container Service (Amazon ECS), the findings also include the cluster name and the task or pod ID so application and security teams can quickly find the affected container resources.

As with all other GuardDuty findings, malware detections are sent to the GuardDuty console, pushed through Amazon EventBridge, routed to AWS Security Hub, and made available in Amazon Detective for incident investigation.
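
Because findings are pushed through EventBridge, you can also route them to your own tooling. Here is a minimal sketch of a rule that matches GuardDuty findings and forwards them to an SNS topic; the rule name and topic ARN are placeholders, and the topic policy must allow EventBridge to publish:

# Match all GuardDuty findings
$ aws events put-rule \
      --name guardduty-findings \
      --event-pattern '{"source":["aws.guardduty"],"detail-type":["GuardDuty Finding"]}'

# Send matching events to an SNS topic
$ aws events put-targets \
      --rule guardduty-findings \
      --targets 'Id=sns,Arn=arn:aws:sns:us-east-1:123456789012:security-alerts'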

How GuardDuty Malware Protection Works
When you enable malware protection, you set up an AWS Identity and Access Management (IAM) service-linked role that grants GuardDuty permissions to perform malware scans. When a malware scan is initiated for an EC2 instance, GuardDuty Malware Protection uses those permissions to take a snapshot of the attached Amazon Elastic Block Store (EBS) volumes that are less than 1 TB in size and then restore the EBS volumes in an AWS service account in the same AWS Region to scan them for malware. You can use tagging to include or exclude EC2 instances from those permissions and from scanning. In this way, you don’t need to deploy security software or agents to monitor for malware, and scanning the volumes doesn’t impact running workloads. The EBS volumes in the service account and the snapshots in your account are deleted after the scan. Optionally, you can preserve the snapshots when malware is detected.

The service-linked role grants GuardDuty access to AWS Key Management Service (AWS KMS) keys used to encrypt EBS volumes. If the EBS volumes attached to a potentially compromised EC2 instance are encrypted with a customer-managed key, GuardDuty Malware Protection uses the same key to encrypt the replica EBS volumes as well. If the volumes are not encrypted, GuardDuty uses its own key to encrypt the replica EBS volumes and ensure privacy. Volumes encrypted with EBS-managed keys are not supported.

Security in the cloud is a shared responsibility between you and AWS. As a guardrail, the service-linked role used by GuardDuty Malware Protection cannot perform any operation on your resources (such as EBS snapshots and volumes, EC2 instances, and KMS keys) if they have the GuardDutyExcluded tag. Once you mark your snapshots with GuardDutyExcluded set to true, the GuardDuty service won’t be able to access these snapshots. The GuardDutyExcluded tag supersedes any inclusion tag. Permissions also restrict how GuardDuty can modify your snapshots so that they cannot be made public while shared with the GuardDuty service account.
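
For example, this is how you could mark a snapshot so that GuardDuty Malware Protection cannot access it (the snapshot ID is a placeholder):

$ aws ec2 create-tags \
      --resources snap-0123456789abcdef0 \
      --tags Key=GuardDutyExcluded,Value=true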

The EBS volumes created by GuardDuty are always encrypted. GuardDuty can use KMS keys only on EBS snapshots that have a GuardDuty scan ID tag. The scan ID tag is added by GuardDuty when snapshots are created after an EC2 finding. The KMS keys that are shared with the GuardDuty service account cannot be used in any context other than the Amazon EBS service. Once the scan completes successfully, the KMS key grant is revoked and the volume replica in the GuardDuty service account is deleted, making sure the GuardDuty service cannot access your data after completing the scan operation.

Enabling Malware Protection for an AWS Account
If you’re not using GuardDuty yet, Malware Protection is enabled by default when you activate GuardDuty for your account. Because I am already using GuardDuty, I need to enable Malware Protection from the console. If you’re using AWS Organizations, your delegated administrator accounts can enable this for existing member accounts and configure if new AWS accounts in the organization should be automatically enrolled.

In the GuardDuty console, I choose Malware Protection under Settings in the navigation pane. There, I choose Enable and then Enable Malware Protection.

Console screenshot.

Snapshots are automatically deleted after they are scanned. In General settings, I have the option to retain in my AWS account the snapshots where malware is detected and have them available for further analysis.

Console screenshot.

In Scan options, I can configure a list of inclusion tags, so that only EC2 instances with those tags are scanned, or exclusion tags, so that EC2 instances with tags in the list are skipped.

Console screenshot.
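
The same settings can be scripted. The sketch below assumes the update-malware-scan-settings command and the parameter shapes shown, so double-check them against the GuardDuty CLI reference; the detector ID and tag are placeholders:

# Keep snapshots when malware is found, and scan only EC2 instances with a given tag
$ aws guardduty update-malware-scan-settings \
      --detector-id <detector-id> \
      --ebs-snapshot-preservation RETENTION_WITH_FINDING \
      --scan-resource-criteria '{"Include":{"EC2_INSTANCE_TAG":{"MapEquals":[{"Key":"Scan","Value":"true"}]}}}'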

Testing Malware Protection GuardDuty Findings
To generate several Amazon GuardDuty findings, including the new Malware Protection findings, I clone the Amazon GuardDuty Tester repo:

$ git clone https://github.com/awslabs/amazon-guardduty-tester

First, I create an AWS CloudFormation stack using the guardduty-tester.template file. When the stack is ready, I follow the instructions to configure my SSH client to log in to the tester instance through the bastion host. Then, I connect to the tester instance:

$ ssh tester

From the tester instance, I start the guardduty_tester.sh script to generate the findings:

$ ./guardduty_tester.sh 

***********************************************************************
* Test #1 - Internal port scanning                                    *
* This simulates internal reconaissance by an internal actor or an   *
* external actor after an initial compromise. This is considered a    *
* low priority finding for GuardDuty because its not a clear indicator*
* of malicious intent on its own.                                     *
***********************************************************************


Starting Nmap 6.40 ( http://nmap.org ) at 2022-05-19 09:36 UTC
Nmap scan report for ip-172-16-0-20.us-west-2.compute.internal (172.16.0.20)
Host is up (0.00032s latency).
Not shown: 997 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
80/tcp   closed http
5050/tcp closed mmcc
MAC Address: 06:25:CB:F4:E0:51 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 4.96 seconds

-----------------------------------------------------------------------

***********************************************************************
* Test #2 - SSH Brute Force with Compromised Keys                     *
* This simulates an SSH brute force attack on an SSH port that we    *
* can access from this instance. It uses (phony) compromised keys in  *
* many subsequent attempts to see if one works. This is a common      *
* techique where the bad actors will harvest keys from the web in     *
* places like source code repositories where people accidentally leave*
* keys and credentials (This attempt will not actually succeed in     *
* obtaining access to the target linux instance in this subnet)       *
***********************************************************************

2022-05-19 09:36:29 START
2022-05-19 09:36:29 Crowbar v0.4.3-dev
2022-05-19 09:36:29 Trying 172.16.0.20:22
2022-05-19 09:36:33 STOP
2022-05-19 09:36:33 No results found...
2022-05-19 09:36:33 START
2022-05-19 09:36:33 Crowbar v0.4.3-dev
2022-05-19 09:36:33 Trying 172.16.0.20:22
2022-05-19 09:36:37 STOP
2022-05-19 09:36:37 No results found...
2022-05-19 09:36:37 START
2022-05-19 09:36:37 Crowbar v0.4.3-dev
2022-05-19 09:36:37 Trying 172.16.0.20:22
2022-05-19 09:36:41 STOP
2022-05-19 09:36:41 No results found...
2022-05-19 09:36:41 START
2022-05-19 09:36:41 Crowbar v0.4.3-dev
2022-05-19 09:36:41 Trying 172.16.0.20:22
2022-05-19 09:36:45 STOP
2022-05-19 09:36:45 No results found...
2022-05-19 09:36:45 START
2022-05-19 09:36:45 Crowbar v0.4.3-dev
2022-05-19 09:36:45 Trying 172.16.0.20:22
2022-05-19 09:36:48 STOP
2022-05-19 09:36:48 No results found...
2022-05-19 09:36:49 START
2022-05-19 09:36:49 Crowbar v0.4.3-dev
2022-05-19 09:36:49 Trying 172.16.0.20:22
2022-05-19 09:36:52 STOP
2022-05-19 09:36:52 No results found...
2022-05-19 09:36:52 START
2022-05-19 09:36:52 Crowbar v0.4.3-dev
2022-05-19 09:36:52 Trying 172.16.0.20:22
2022-05-19 09:36:56 STOP
2022-05-19 09:36:56 No results found...
2022-05-19 09:36:56 START
2022-05-19 09:36:56 Crowbar v0.4.3-dev
2022-05-19 09:36:56 Trying 172.16.0.20:22
2022-05-19 09:37:00 STOP
2022-05-19 09:37:00 No results found...
2022-05-19 09:37:00 START
2022-05-19 09:37:00 Crowbar v0.4.3-dev
2022-05-19 09:37:00 Trying 172.16.0.20:22
2022-05-19 09:37:04 STOP
2022-05-19 09:37:04 No results found...
2022-05-19 09:37:04 START
2022-05-19 09:37:04 Crowbar v0.4.3-dev
2022-05-19 09:37:04 Trying 172.16.0.20:22
2022-05-19 09:37:08 STOP
2022-05-19 09:37:08 No results found...
2022-05-19 09:37:08 START
2022-05-19 09:37:08 Crowbar v0.4.3-dev
2022-05-19 09:37:08 Trying 172.16.0.20:22
2022-05-19 09:37:12 STOP
2022-05-19 09:37:12 No results found...
2022-05-19 09:37:12 START
2022-05-19 09:37:12 Crowbar v0.4.3-dev
2022-05-19 09:37:12 Trying 172.16.0.20:22
2022-05-19 09:37:16 STOP
2022-05-19 09:37:16 No results found...
2022-05-19 09:37:16 START
2022-05-19 09:37:16 Crowbar v0.4.3-dev
2022-05-19 09:37:16 Trying 172.16.0.20:22
2022-05-19 09:37:20 STOP
2022-05-19 09:37:20 No results found...
2022-05-19 09:37:20 START
2022-05-19 09:37:20 Crowbar v0.4.3-dev
2022-05-19 09:37:20 Trying 172.16.0.20:22
2022-05-19 09:37:23 STOP
2022-05-19 09:37:23 No results found...
2022-05-19 09:37:23 START
2022-05-19 09:37:23 Crowbar v0.4.3-dev
2022-05-19 09:37:23 Trying 172.16.0.20:22
2022-05-19 09:37:27 STOP
2022-05-19 09:37:27 No results found...
2022-05-19 09:37:27 START
2022-05-19 09:37:27 Crowbar v0.4.3-dev
2022-05-19 09:37:27 Trying 172.16.0.20:22
2022-05-19 09:37:31 STOP
2022-05-19 09:37:31 No results found...
2022-05-19 09:37:31 START
2022-05-19 09:37:31 Crowbar v0.4.3-dev
2022-05-19 09:37:31 Trying 172.16.0.20:22
2022-05-19 09:37:34 STOP
2022-05-19 09:37:34 No results found...
2022-05-19 09:37:35 START
2022-05-19 09:37:35 Crowbar v0.4.3-dev
2022-05-19 09:37:35 Trying 172.16.0.20:22
2022-05-19 09:37:38 STOP
2022-05-19 09:37:38 No results found...
2022-05-19 09:37:38 START
2022-05-19 09:37:38 Crowbar v0.4.3-dev
2022-05-19 09:37:38 Trying 172.16.0.20:22
2022-05-19 09:37:42 STOP
2022-05-19 09:37:42 No results found...
2022-05-19 09:37:42 START
2022-05-19 09:37:42 Crowbar v0.4.3-dev
2022-05-19 09:37:42 Trying 172.16.0.20:22
2022-05-19 09:37:46 STOP
2022-05-19 09:37:46 No results found...

-----------------------------------------------------------------------

***********************************************************************
* Test #3 - RDP Brute Force with Password List                        *
* This simulates an RDP brute force attack on the internal RDP port  *
* of the windows server that we installed in the environment.  It uses*
* a list of common passwords that can be found on the web. This test  *
* will trigger a detection, but will fail to get into the target      *
* windows instance.                                                   *
***********************************************************************

Sending 250 password attempts at the windows server...
Hydra v9.4-dev (c) 2022 by van Hauser/THC & David Maciejak - Please do not use in military or secret service organizations, or for illegal purposes (this is non-binding, these *** ignore laws and ethics anyway).

Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2022-05-19 09:37:46
[WARNING] rdp servers often don't like many connections, use -t 1 or -t 4 to reduce the number of parallel connections and -W 1 or -W 3 to wait between connection to allow the server to recover
[INFO] Reduced number of tasks to 4 (rdp does not like many parallel connections)
[WARNING] the rdp module is experimental. Please test, report - and if possible, fix.
[DATA] max 4 tasks per 1 server, overall 4 tasks, 1792 login tries (l:7/p:256), ~448 tries per task
[DATA] attacking rdp://172.16.0.24:3389/
[STATUS] 1099.00 tries/min, 1099 tries in 00:01h, 693 to do in 00:01h, 4 active
1 of 1 target completed, 0 valid password found
Hydra (https://github.com/vanhauser-thc/thc-hydra) finished at 2022-05-19 09:39:23

-----------------------------------------------------------------------

***********************************************************************
* Test #4 - CryptoCurrency Mining Activity                            *
* This simulates interaction with a cryptocurrency mining pool which *
* can be an indication of an instance compromise. In this case, we are*
* only interacting with the URL of the pool, but not downloading      *
* any files. This will trigger a threat intel based detection.        *
***********************************************************************

Calling bitcoin wallets to download mining toolkits

-----------------------------------------------------------------------

***********************************************************************
* Test #5 - DNS Exfiltration                                          *
* A common exfiltration technique is to tunnel data out over DNS      *
* to a fake domain.  Its an effective technique because most hosts    *
* have outbound DNS ports open.  This test wont exfiltrate any data,  *
* but it will generate enough unusual DNS activity to trigger the     *
* detection.                                                          *
***********************************************************************

Calling large numbers of large domains to simulate tunneling via DNS

***********************************************************************
* Test #6 - Fake domain to prove that GuardDuty is working            *
* This is a permanent fake domain that customers can use to prove that*
* GuardDuty is working.  Calling this domain will always generate the *
* Backdoor:EC2/C&CActivity.B!DNS finding type                         *
***********************************************************************

Calling a well known fake domain that is used to generate a known finding

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> GuardDutyC2ActivityB.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11495
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;GuardDutyC2ActivityB.com.	IN	ANY

;; ANSWER SECTION:
GuardDutyC2ActivityB.com. 6943	IN	SOA	ns1.markmonitor.com. hostmaster.markmonitor.com. 2018091906 86400 3600 2592000 172800
GuardDutyC2ActivityB.com. 6943	IN	NS	ns3.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns5.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns7.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns2.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns4.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns6.markmonitor.com.
GuardDutyC2ActivityB.com. 6943	IN	NS	ns1.markmonitor.com.

;; Query time: 27 msec
;; SERVER: 172.16.0.2#53(172.16.0.2)
;; WHEN: Thu May 19 09:39:23 UTC 2022
;; MSG SIZE  rcvd: 238


*****************************************************************************************************
Expected GuardDuty Findings

Test 1: Internal Port Scanning
Expected Finding: EC2 Instance  i-011e73af27562827b  is performing outbound port scans against remote host. 172.16.0.20
Finding Type: Recon:EC2/Portscan

Test 2: SSH Brute Force with Compromised Keys
Expecting two findings - one for the outbound and one for the inbound detection
Outbound:  i-011e73af27562827b  is performing SSH brute force attacks against  172.16.0.20
Inbound:  172.16.0.25  is performing SSH brute force attacks against  i-0bada13e0aa12d383
Finding Type: UnauthorizedAccess:EC2/SSHBruteForce

Test 3: RDP Brute Force with Password List
Expecting two findings - one for the outbound and one for the inbound detection
Outbound:  i-011e73af27562827b  is performing RDP brute force attacks against  172.16.0.24
Inbound:  172.16.0.25  is performing RDP brute force attacks against  i-0191573dec3b66924
Finding Type : UnauthorizedAccess:EC2/RDPBruteForce

Test 4: Cryptocurrency Activity
Expected Finding: EC2 Instance  i-011e73af27562827b  is querying a domain name that is associated with bitcoin activity
Finding Type : CryptoCurrency:EC2/BitcoinTool.B!DNS

Test 5: DNS Exfiltration
Expected Finding: EC2 instance  i-011e73af27562827b  is attempting to query domain names that resemble exfiltrated data
Finding Type : Trojan:EC2/DNSDataExfiltration

Test 6: C&C Activity
Expected Finding: EC2 instance  i-011e73af27562827b  is querying a domain name associated with a known Command & Control server. 
Finding Type : Backdoor:EC2/C&CActivity.B!DNS

After a few minutes, the findings appear in the GuardDuty console. At the top, I see the malicious files found by the new Malware Protection capability. One of the findings is related to an EC2 instance, the other to an ECS cluster.

Console screenshot.

First, I select the finding related to the EC2 instance. In the panel, I see the information on the instance and the malicious file, such as the file name and path. In the Malware scan details section, the Trigger finding ID points to the original GuardDuty finding that triggered the malware scan. In my case, the original finding was that this EC2 instance was performing RDP brute force attacks against another EC2 instance.

Console screenshot.

Here, I choose Investigate with Detective and, directly from the GuardDuty console, I go to the Detective console to visualize AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow data for the EC2 instance, the AWS account, and the IP address affected by the finding. Using Detective, I can analyze, investigate, and identify the root cause of suspicious activities found by GuardDuty.

Console screenshot.

When I select the finding related to the ECS cluster, I have more information on the resource affected, such as the details of the ECS cluster, the task, the containers, and the container images.

Console screenshot.

Using the GuardDuty tester scripts makes it easier to test the overall integration of GuardDuty with other security frameworks you use so that you can be ready when a real threat is detected.
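
If such an integration needs to retrieve Malware Protection findings programmatically, here is a minimal sketch with boto3. The finding type names in the filter are assumptions based on the Malware Protection finding types documented at launch:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Filter on the Malware Protection finding types (names assumed from the docs).
criteria = {
    "Criterion": {
        "type": {
            "Equals": ["Execution:EC2/MaliciousFile", "Execution:ECS/MaliciousFile"]
        }
    }
}

finding_ids = guardduty.list_findings(
    DetectorId=detector_id, FindingCriteria=criteria
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        print(finding["Type"], finding["Resource"]["ResourceType"])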

Comparing GuardDuty Malware Protection with Amazon Inspector
At this point, you might ask yourself how GuardDuty Malware Protection relates to Amazon Inspector, a service that scans AWS workloads for software vulnerabilities and unintended network exposure. The two services complement each other and offer different layers of protection:

  • Amazon Inspector offers proactive protection by identifying and remediating known software and application vulnerabilities that serve as an entry point for attackers to compromise resources and install malware.
  • GuardDuty Malware Protection detects malware that is found to be present on actively running workloads. At that point, the system has already been compromised, but GuardDuty can limit the time of an infection and take action before a system compromise results in a business-impacting event.

Availability and Pricing
Amazon GuardDuty Malware Protection is available today in all AWS Regions where GuardDuty is available, excluding the AWS China (Beijing), AWS China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West) Regions.

At launch, GuardDuty Malware Protection is integrated with these partner offerings:

With GuardDuty, you don’t need to deploy security software or agents to monitor for malware. You pay only for the gigabytes scanned in the file systems (not for the size of the EBS volumes) and for the EBS snapshots during the time they are kept in your account. All EBS snapshots created by GuardDuty are automatically deleted after they are scanned unless you enable snapshot retention when malware is found. For more information, see GuardDuty pricing and EBS pricing. Note that GuardDuty scans only EBS volumes less than 1 TB in size. To help you control costs and avoid repeating alarms, the same volume is not scanned more often than once every 24 hours.

Detect malicious activity and protect your applications from malware with Amazon GuardDuty.

Danilo

Amazon Detective Supports Kubernetes Workloads on Amazon EKS for Security Investigations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-detective-supports-kubernetes-workloads-on-amazon-eks-for-security-investigations/

In March 2020, we introduced Amazon Detective, a fully managed service that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.

Amazon Detective continuously extracts temporal events such as login attempts, API calls, and network traffic from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud (Amazon VPC) Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment. We have added new features such as AWS IAM Role session analysis, enhanced IP address analytics, Splunk integration, Amazon S3 and DNS finding types, and the support of AWS Organizations.

Customers are rapidly moving to containers to deploy Kubernetes workloads with Amazon Elastic Kubernetes Service (Amazon EKS). Its highly programmatic nature allows thousands of individual container deployments and millions of configuration changes to occur in seconds. To effectively secure EKS workloads, it is important to monitor container deployments and configurations that are captured in the form of EKS audit logs and to correlate activities to user activity and network traffic happening across AWS accounts.

Today we announce new capabilities in Amazon Detective to expand security investigation coverage for Kubernetes workloads running on Amazon EKS. When you enable this new feature, Amazon Detective automatically starts ingesting EKS audit logs to capture chronological API activity from users, applications, and the control plane in Amazon EKS for clusters, pods, container images, and Kubernetes subjects (Kubernetes users and service accounts).

Detective automatically correlates user activity using CloudTrail, and network activity using Amazon VPC Flow logs, without the need for you to enable, store, or retain logs manually. The service gleans key security information from these logs and retains them in a security behavioral graph database that enables fast cross-referenced access to twelve months of activity. Detective provides a data analysis and visualization layer purpose-built to answer common security questions backed by a behavioral graph database that allows you to quickly investigate potential malicious behavior associated with your EKS workloads.

You can rapidly respond to security issues rather than focusing on log management, operational systems, or ongoing security tooling maintenance. Detective’s EKS capabilities come with a free 30-day trial for all customers that allows you to ensure that the capabilities meet your needs and to fully understand the cost for the service on an ongoing basis.

Getting Started with Security Investigations for EKS Audit Logs
To get started, enable Amazon Detective with just a few clicks in the AWS Management Console. GuardDuty is a prerequisite for Amazon Detective. When you try to enable Detective, it checks whether GuardDuty has been enabled for your account for at least 48 hours. If it hasn’t, you must enable GuardDuty and wait, because those 48 hours allow GuardDuty to assess the data volume that your account produces.

You can enable your account by attaching the AWS IAM policy or delegate it to an administrator of your organization. To learn more, refer to Setting up Detective in the AWS documentation.

To enable EKS support in Detective as an existing customer, navigate to the Settings menu in the left panel and select General. Under Optional source packages, enable EKS audit logs.
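
If you prefer to automate this step, here is a minimal sketch with boto3. It assumes the Detective data source package API and the EKS_AUDIT package name; treat both as assumptions and confirm against the current Detective API reference:

import boto3

detective = boto3.client("detective", region_name="us-east-1")

# Assumes one behavior graph in this account and Region.
graph_arn = detective.list_graphs()["GraphList"][0]["Arn"]

# Enable the EKS audit logs optional source package (package name assumed).
detective.update_datasource_packages(
    GraphArn=graph_arn,
    DatasourcePackages=["EKS_AUDIT"],
)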

If you are a new customer of Detective, the EKS protection feature will be enabled by default. If you do not want to trial EKS audit logs right away, you can disable this feature within the first week of enabling Detective and preserve the full 30-day free trial period to use in the future.

Once enabled, Detective will begin monitoring the Kubernetes audit logs that are generated by Amazon EKS, extracting and correlating information for security usage. You do not need to enable any log sources or make any configuration changes to your existing EKS clusters or future deployments.

You can see recent monitoring results of your EKS clusters on the Summary page.

When you choose one of the EKS clusters, you will see the details of containers running in the cluster, Kubernetes API activities, and network activities that occurred on this resource around the scope time.

In the Overview tab, you also see details about all containers running in the cluster, including their pod, image and security context.

In the Kubernetes API activity tab, you can get an overview of the full API activities involving the EKS cluster. You can choose a time range to drill down based on specific API methods within the EKS cluster. When you select a specific time, you can see API subjects, IP addresses, and the number of API calls by the success, failure, unauthorized, or forbidden state.

You can also see details of Kubernetes API calls observed inside this cluster for the first time, as well as subjects with increased API call volume inside the cluster.

Enabling GuardDuty EKS Protection
In January 2022, Amazon GuardDuty expanded coverage to EKS cluster activity to identify malicious or suspicious behavior that represents potential threats to container workloads.

When the optional GuardDuty EKS Protection is enabled, GuardDuty will continuously monitor your EKS deployments and alert you to threats detected in your workloads. You can view and investigate these security findings in Detective.
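
If you haven’t enabled it yet, you can turn on the Kubernetes audit logs data source from the GuardDuty console or script it. A minimal sketch with boto3, assuming a single detector in the Region:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Enable EKS Protection (Kubernetes audit log monitoring) on the detector.
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"Kubernetes": {"AuditLogs": {"Enable": True}}},
)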

With Detective for EKS enabled, you can quickly access information about the resources involved in the finding, such as their CloudTrail and Kubernetes API activity, and netflow information. This can aid in investigation and help you determine root cause, impact, and other related resources that may also be compromised.

To learn more, see How to use new Amazon GuardDuty EKS Protection findings in the AWS Security Blog.

Now Available
You can now use Amazon Detective for EKS protection in all Regions where Amazon Detective is available. This feature is priced based on the volume of audit logs processed and analyzed by Detective.

Detective provides a free 30-day trial to all customers that enable EKS coverage, allowing customers to ensure that Detective’s capabilities meet security needs and to get an estimate of the service’s monthly cost before committing to paid usage. To learn more, see the Detective pricing page.

For technical documentation, visit the Amazon Detective User Guide. Please send feedback to AWS re:Post for Amazon Detective or through your usual AWS support contacts.

Learn all the details about Amazon Detective for EKS protection and get started today.

Channy

AWS Week In Review – July 25, 2022

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-25-2022/

A few weeks ago, we hosted the first EMEA AWS Heroes Summit in Milan, Italy. This past week, I had the privilege to join the Americas AWS Heroes Summit in Seattle, Washington, USA. Meeting with our community experts is always inspiring and a great opportunity to learn from each other. During the Summit, AWS Heroes from North America and Latin America shared their thoughts with AWS developer advocates and product teams on topics such as serverless, containers, machine learning, data, and DevTools. You can learn more about the AWS Heroes program here.

AWS Heroes Summit Americas 2022

Last Week’s Launches
Here are some launches that got my attention during the previous week:

Cloudscape Design System – Cloudscape is an open source design system for creating web applications. It was built for and is used by AWS products and services. We created it in 2016 to improve the user experience across web applications owned by AWS services and also to help teams implement those applications faster. If you’ve ever used the AWS Management Console, you’ve seen Cloudscape in action. If you are building a product that extends the AWS Management Console, designing a user interface for a hybrid cloud management system, or setting up an on-premises solution that uses AWS, have a look at Cloudscape Design System.

Cloudscape Design System

AWS re:Post introduces community-generated articles – AWS re:Post gives you access to a vibrant community that helps you become even more successful on AWS. Expert community members can now share technical guidance and knowledge beyond answering questions through the Articles feature. Using this feature, community members can share best practices and troubleshooting processes and address customer needs around AWS technology in greater depth. The Articles feature is unlocked for community members who have achieved Rising Star status on re:Post or subject matter experts who built their reputation in the community based on their contributions and certifications. If you have a Rising Star status on re:Post, start writing articles now! All other members can unlock Rising Star status through community contributions or simply browse available articles today on re:Post.

AWS re:Post

AWS Lambda announces support for attribute-based access control (ABAC) and new IAM condition key – You can now use attribute-based access control (ABAC) with AWS Lambda to control access to functions within AWS Identity and Access Management (IAM) using tags. ABAC is an authorization strategy that defines access permissions based on attributes. In AWS, these attributes are called tags. With ABAC, you can scale an access control strategy by setting granular permissions with tags without requiring permissions updates for every new user or resource as your organization scales. Read this blog post by Julian Wood and Chris McPeek to learn more.

AWS Lambda also announced support for lambda:SourceFunctionArn, a new IAM condition key that can be used for IAM policy conditions that specify the Amazon Resource Name (ARN) of the function from which a request is made. You can use the Condition element in your IAM policy to compare the lambda:SourceFunctionArn condition key in the request context with values that you specify in your policy. This allows you to implement advanced security controls for the AWS API calls taken by your Lambda function code. For more details, have a look at the Lambda Developer Guide.
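
As an illustration of the new condition key, here is a hedged sketch that creates an IAM policy allowing an action only when the call originates from a specific Lambda function. The action, resource, policy name, and function ARN are all placeholders:

import json

import boto3

iam = boto3.client("iam")

# Placeholder policy: allow reading a secret only when the request is made
# from the named Lambda function (evaluated via lambda:SourceFunctionArn).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "lambda:SourceFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function"
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="AllowSecretsOnlyFromMyFunction",
    PolicyDocument=json.dumps(policy_document),
)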

Amazon Fraud Detector launches Account Takeover Insights (ATI) – Amazon Fraud Detector now supports an Account Takeover Insights (ATI) model, a low-latency fraud detection machine learning model specifically designed to detect accounts that have been compromised through stolen credentials, phishing, social engineering, or other forms of account takeover. The ATI model is designed to detect up to four times more ATI fraud than traditional rules-based account takeover solutions while minimizing the level of friction for legitimate users. To learn more, have a look at the Amazon Fraud Detector documentation.

Amazon EMR on EC2 clusters (EMR Clusters) introduces more fine-grained access controls – Previously, all jobs running on an EMR cluster used the IAM role associated with the EMR cluster’s EC2 instances to access resources. This role is called the EMR EC2 instance profile. With the new runtime roles for Amazon EMR Steps, you can now specify a different IAM role for your Apache Spark and Hive jobs, scoping down access at a job level. This simplifies access controls on a single EMR cluster that is shared between multiple tenants, wherein each tenant is isolated using IAM roles. You can now also enforce table and column permissions based on your Amazon EMR runtime role to manage your access to data lakes with AWS Lake Formation. For more details, read the What’s New post.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional news and customer stories you may find interesting:

AWS open-source news and updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #121 here.

AI Use Case Explorer – If you are interested in AI use cases, have a look at the new AI Use Case Explorer. You can search over 100 use cases and 400 customer success stories by industry, business function, and the business outcome you want to achieve.

Bayer centralizes and standardizes data from the carbon program using AWS – To help Brazilian farmers adopt climate-smart agricultural practices and reduce carbon emissions in their activities, Bayer created the Carbon Program, which aims to build carbon-neutral agriculture practices. Learn how Bayer uses AWS to centralize and standardize the data received from the different partners involved in the project in this Bayer case study.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS re:Inforce 2022 – The event will be held this week in person on July 26 and 27 in Boston, Massachusetts, USA. You can watch the keynote and leadership sessions online for free. AWS On Air will also stream live from re:Inforce.

AWS Global Summits – AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Registrations are open for the following AWS Summits in August:

IMAGINE 2022 – The IMAGINE 2022 conference will take place on August 3 at the Seattle Convention Center, Washington, USA. It’s a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud. You can register here.

I’ll be speaking at Data Con LA on August 13–14 in Los Angeles, California, USA. Feel free to say “Hi!” if you’re around. And if you happen to be at Ray Summit on August 23–24 in San Francisco, California, USA, stop by the AWS booth. I’ll be there to discuss all things Ray on AWS.

That’s all for this week. Check back next Monday for another Week in Review!

Antje

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Amazon Prime Day 2022 – AWS for the Win!

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-for-the-win/

As part of my annual tradition to tell you about how AWS makes Prime Day possible, I am happy to be able to share some chart-topping metrics (check out my 2016, 2017, 2019, 2020, and 2021 posts for a look back).

My purchases this year included a first aid kit, some wood brown filament for my 3D printer, and a non-stick frying pan! According to our official news release, Prime members worldwide purchased more than 100,000 items per minute during Prime Day, with best-selling categories including Amazon Devices, Consumer Electronics, and Home.

Powered by AWS
As always, AWS played a critical role in making Prime Day a success. A multitude of two-pizza teams worked together to make sure that every part of our infrastructure was scaled, tested, and ready to serve our customers. Here are a few examples:

Amazon Aurora – On Prime Day, 5,326 database instances running the PostgreSQL-compatible and MySQL-compatible editions of Amazon Aurora processed 288 billion transactions, stored 1,849 terabytes of data, and transferred 749 terabytes of data.

Amazon EC2 – For Prime Day 2022, Amazon increased the total number of normalized instances (an internal measure of compute power) on Amazon Elastic Compute Cloud (Amazon EC2) by 12%. This resulted in an overall server equivalent footprint that was only 7% larger than that of Cyber Monday 2021 due to the increased adoption of AWS Graviton2 processors.

Amazon EBS – For Prime Day, the Amazon team added 152 petabytes of EBS storage. The resulting fleet handled 11.4 trillion requests per day and transferred 532 petabytes of data per day. Interestingly enough, due to increased efficiency of some of the internal Amazon services used to run Prime Day, Amazon actually used about 4% less EBS storage and transferred 13% less data than it did during Prime Day last year. Here’s a graph that shows the increase in data transfer during Prime Day:

Amazon SES – In order to keep Prime Day shoppers aware of the deals and to deliver order confirmations, Amazon Simple Email Service (SES) peaked at 33,000 Prime Day email messages per second.

Amazon SQS – During Prime Day, Amazon Simple Queue Service (SQS) set a new traffic record by processing 70.5 million messages per second at peak:

Amazon DynamoDB – DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 105.2 million requests per second.

Amazon SageMaker – The Amazon Robotics Pick Time Estimator, which uses Amazon SageMaker to train a machine learning model to predict the amount of time future pick operations will take, processed more than 100 million transactions during Prime Day 2022.

Package Planning – In North America, and on the highest traffic Prime 2022 day, package-planning systems performed 60 million AWS Lambda invocations, processed 17 terabytes of compressed data in Amazon Simple Storage Service (Amazon S3), stored 64 million items across Amazon DynamoDB and Amazon ElastiCache, served 200 million events over Amazon Kinesis, and handled 50 million Amazon Simple Queue Service events.

Prepare to Scale
Every year I reiterate the same message: rigorous preparation is key to the success of Prime Day and our other large-scale events. If you are preparing for a similar chart-topping event of your own, I strongly recommend that you take advantage of AWS Infrastructure Event Management (IEM). As part of an IEM engagement, my colleagues will provide you with architectural and operational guidance that will help you to execute your event with confidence!

Jeff;

AWS re:Inforce 2022: Network & Infrastructure Security track preview

Post Syndicated from Satinder Khasriya original https://aws.amazon.com/blogs/security/aws-reinforce-2022-network-infrastructure-security-track-preview/

Register now with discount code SALvWQHU2Km to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we’re going to highlight just some of the network and infrastructure security focused sessions planned for AWS re:Inforce. AWS re:Inforce 2022 will take place in-person in Boston, MA July 26-27. AWS re:Inforce is a learning conference focused on security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, demos of the latest technology, an AWS Partner expo hall, a keynote speech from AWS Security leaders, and more. re:Inforce 2022 organizes content across multiple themed tracks: identity and access management; threat detection and incident response; governance, risk, and compliance; networking and infrastructure security; and data protection and privacy. This post describes some of the Breakout sessions, Chalk Talk sessions, Builders’ sessions, and Workshops that are planned for the Network & Infrastructure Security track. For information on the other re:Inforce tracks, see our previous re:Inforce blog posts.

Breakout sessions

These are lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

NIS201: An overview of AWS firewall services and where to use them

In this session, review the firewall services that can be used on AWS, including OS firewalls (Windows and Linux), security groups, network ACLs, AWS Network Firewall, and AWS WAF. The session covers a quick description of each service and where to use it, and then offers strategies to help you get the most out of these services.

NIS306: Automating patch management and compliance using AWS

In this session, learn how you can use AWS to automate one of the most common operational challenges that often emerge on the journey to the cloud: patch management and compliance. AWS gives you visibility and control of your infrastructure using AWS Systems Manager. See firsthand how to set up and configure an automated, multi-account and multi-Region patching operation using Amazon CloudWatch Events, AWS Lambda, and AWS Systems Manager.

NIS307: AWS Internet access at scale: Designing a cloud-native internet edge

Today’s on-premises infrastructure typically has a single internet gateway that is sized to handle all corporate traffic. With AWS, infrastructure as code allows you to deploy in different internet access patterns, including distributed DMZs. Automated queries mean you can identify your infrastructure with an API query and ubiquitous instrumentation, allowing precise anomaly detection. In this session, learn about AWS native security tools like Amazon API Gateway, AWS WAF, ELB, Application Load Balancer, and AWS Network Firewall. These options can help you simplify internet service delivery and improve your agility.

NIS308: Deploying AWS Network Firewall at scale: athenahealth’s journey

When the Log4j vulnerability became known in December 2021, athenahealth made the decision to increase their cloud security posture by adding AWS Network Firewall to over 100 accounts and 230 VPCs. Join this session to learn about their initial deployment of a distributed architecture and how they were able to reduce their costs by approximately two-thirds by moving to a centralized model. The session also covers firewall policy creation, optimization, and management at scale. The session is aimed at architects and leaders focused on network and perimeter security that are interested in deploying AWS Network Firewall.

Builders’ sessions

These are small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

NIS251: Building security defenses for edge computing devices

Once devices run applications at the edge and are interacting with various AWS services, establishing a compliant and secure computing environment is necessary. It’s also necessary to monitor for unexpected behaviors, such as a device running malicious code or mining cryptocurrency. This builders’ session walks you through how to build security mechanisms to detect unexpected behaviors and take automated corrective actions for edge devices at scale using AWS IoT Device Defender and AWS IoT Greengrass.

NIS252: Analyze your network using Amazon VPC Network Access Analyzer

In this builders’ session, review how the new Amazon VPC Network Access Analyzer helps you identify network configurations that can lead to unintended network access. Learn ways that you can improve your security posture while still allowing you and your organization to be agile and flexible.

Chalk Talk sessions

These are highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

NIS332: Implementing traffic inspection capabilities at scale on AWS

Join this chalk talk to learn about a broad range of security offerings to integrate firewall services into your network, including AWS WAF, AWS Network Firewall, and third-party security products. Learn how to choose network architectures for these firewalling options to help protect inbound traffic to your internet-facing applications. Also learn best practices for applying security controls to various traffic flows, such as internet egress, east-west, and internet ingress.

NIS334: Building Zero Trust from the inside out

What is a protect surface and how can it simplify achieving Zero Trust outcomes on AWS? In this chalk talk, discover how to layer security controls on foundational services, such as Amazon EC2, Amazon EKS, and Amazon S3, to achieve Zero Trust. Starting with these foundational services, learn about AWS services and partner offerings to add security layer by layer. Learn how you can satisfy common Zero Trust use cases on AWS, including user, device, and system authentication and authorization.

Workshops

These are interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

NIS372: Build a DDoS-resilient perimeter and enable automatic protection at scale

In this workshop, learn how to build a DDoS-resilient perimeter and how to use services like AWS Shield, AWS WAF, AWS Firewall Manager, and Amazon CloudFront to architect for DDoS resiliency and maintain robust operational capabilities that allow rapid detection and engagement during high-severity events. Learn how to detect and filter out malicious web requests, reduce attack surface, and protect web-facing workloads at scale with maximum automation and visibility.

NIS373: Open-source security appliances with ELB Gateway Load Balancer

ELB Gateway Load Balancer (GWLB) can help you deploy and scale security appliances on AWS. This workshop focuses on integrating GWLB with Suricata, an open-source threat detection engine. Learn about the mechanics of GWLB, build rules for GeoIP blocking, and write scripts for enhanced malware detection. The architecture relies on AWS Transit Gateway for centralized inspection; automate it using a GitOps CI/CD approach.

NIS375: Segmentation and security inspection for global networks with AWS Cloud WAN

In this workshop, learn how to build and design connectivity for global networks using native AWS services. The workshop includes a discussion of security concepts such as segmentation, centralized network security controls, and creating a balance between self-service and governance at scale. Understand new services like AWS Cloud WAN and AWS Direct Connect SiteLink, as well as how they interact with existing services like AWS Transit Gateway, AWS Network Firewall, and SD-WAN. Use cases covered include federated networking models for large enterprises, using AWS as a WAN, SD-WAN at scale, and building extranets for partner connectivity.

NIS374: Strengthen your web application defenses with AWS WAF

In this workshop, use AWS WAF to build an effective set of controls around your web application and perform monitoring and analysis of traffic that is analyzed by your web ACL. Learn to use AWS WAF to mitigate common attack vectors against web applications such as SQL injection and cross-site scripting. Additionally, learn how to use AWS WAF for advanced protections such as bot mitigation and JSON inspection. Also find out how to use AWS WAF logging, query logs with Amazon Athena, and near-real-time dashboards to analyze requests inspected by AWS WAF.

If any of the above sessions look interesting, consider joining us by registering for AWS re:Inforce 2022. We look forward to seeing you in Boston!

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Satinder Khasriya

Satinder leads the product marketing strategy and implementation for AWS Network and Application protection services. Prior to AWS, Satinder spent the last decade leading product marketing for various network security solutions across several technologies, including network firewall, intrusion prevention, and threat intelligence. Satinder lives in Austin, Texas, and enjoys spending time with his family and traveling.

New – Amazon EC2 R6a Instances Powered by 3rd Gen AMD EPYC Processors for Memory-Intensive Workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r6a-instances-powered-by-3rd-gen-amd-epyc-processors-for-memory-intensive-workloads/

We launched the general-purpose Amazon EC2 M6a instances at AWS re:Invent 2021 and compute-intensive C6a instances in February of this year. These instances are powered by the 3rd Gen AMD EPYC processors running at frequencies up to 3.6 GHz to offer up to 35 percent better price performance versus the previous generation instances.

Today, we are expanding our portfolio to include memory-optimized Amazon EC2 R6a instances featuring 3rd Gen AMD EPYC (Milan) processors, priced up to 10 percent lower than comparable x86-based instances.

R6a instances, powered by 3rd Gen AMD EPYC processors, are well suited for memory-intensive applications such as high-performance databases (relational and NoSQL databases), distributed web-scale in-memory caches (such as Memcached and Redis), in-memory databases, real-time big data analytics (such as Hadoop and Spark clusters), and other enterprise applications.

R6a instances are built on the AWS Nitro System and support Elastic Fabric Adapter (EFA) for workloads that benefit from lower network latency and highly scalable inter-node communication, such as high-performance computing and video processing.

Here’s a quick recap of the advantages of the new R6a instances compared to R5a instances:

  • Up to 35 percent higher price performance per vCPU versus comparable R5a instances
  • Up to 10 percent less expensive than comparable x86 instances
  • Up to 1536 GiB of memory, 2 times more than the previous generation, giving you the benefit of scaling up databases and running larger in-memory workloads.
  • Up to 192 vCPUs, 50 Gbps enhanced networking, and 40 Gbps EBS bandwidth, enabling you to process data faster, consolidate workloads, and lower the cost of ownership.
  • SAP-certified for memory-intensive applications such as high-performance enterprise databases like SAP Business Suite.
  • Support for always-on memory encryption with AMD transparent single key memory encryption (TSME), and support for new AVX2 instructions for accelerating encryption and decryption algorithms.

Here are the specs of R6a instances in detail:

Name            vCPUs   Memory (GiB)   Network Bandwidth (Gbps)   EBS Throughput (Gbps)
r6a.large           2             16   Up to 12.5                 Up to 6.6
r6a.xlarge          4             32   Up to 12.5                 Up to 6.6
r6a.2xlarge         8             64   Up to 12.5                 Up to 6.6
r6a.4xlarge        16            128   Up to 12.5                 Up to 6.6
r6a.8xlarge        32            256   12.5                       6.6
r6a.12xlarge       48            384   18.75                      10
r6a.16xlarge       64            512   25                         13.3
r6a.24xlarge       96            768   37.5                       20
r6a.32xlarge      128           1024   50                         26.6
r6a.48xlarge      192           1536   50                         40
r6a.metal         192           1536   50                         40

Now Available
You can launch R6a instances today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai), and Europe (Frankfurt, Ireland) Regions as On-Demand, Spot, and Reserved Instances or as part of a Savings Plan.
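
You launch the new instances like any other instance type. A minimal sketch with boto3; the AMI ID is a placeholder, and the Region must be one where R6a is available:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder AMI ID; use a current Amazon Linux 2 AMI for your Region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="r6a.large",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])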

To learn more, visit the R6a instances page. Please send feedback to [email protected], AWS re:Post for EC2, or through your usual AWS Support contacts.

— Channy

Introducing Amazon CodeWhisperer in the AWS Lambda console (In preview)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-amazon-codewhisperer-in-the-aws-lambda-console-in-preview/

This blog post is written by Mark Richman, Senior Solutions Architect.

Today, AWS is launching a new capability to integrate the Amazon CodeWhisperer experience with the AWS Lambda console code editor.

Amazon CodeWhisperer is a machine learning (ML)–powered service that helps improve developer productivity. It generates code recommendations based on developers’ existing code and code comments written in natural language.

CodeWhisperer is available as part of the AWS toolkit extensions for major IDEs, including JetBrains, Visual Studio Code, and AWS Cloud9, currently supporting Python, Java, and JavaScript. In the Lambda console, CodeWhisperer is available as a native code suggestion feature, which is the focus of this blog post.

CodeWhisperer is currently available in preview with a waitlist. This blog post explains how to request access to and activate CodeWhisperer for the Lambda console. Once activated, CodeWhisperer can make code recommendations on-demand in the Lambda code editor as you develop your function. During the preview period, developers can use CodeWhisperer at no cost.

Amazon CodeWhisperer

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications and only pay for what you use.

With Lambda, you can build your functions directly in the AWS Management Console and take advantage of CodeWhisperer integration. CodeWhisperer in the Lambda console currently supports functions using the Python and Node.js runtimes.

When writing AWS Lambda functions in the console, CodeWhisperer analyzes the code and comments, determines which cloud services and public libraries are best suited for the specified task, and recommends a code snippet directly in the source code editor. The code recommendations provided by CodeWhisperer are based on ML models trained on a variety of data sources, including Amazon and open source code. Developers can accept the recommendation or simply continue to write their own code.

Requesting CodeWhisperer access

CodeWhisperer integration with Lambda is currently available as a preview only in the N. Virginia (us-east-1) Region. To use CodeWhisperer in the Lambda console, you must first sign up to access the service in preview here or request access directly from within the Lambda console.

In the AWS Lambda console, under the Code tab, in the Code source editor, select the Tools menu, and Request Amazon CodeWhisperer Access.

Request CodeWhisperer access in Lambda console

You may also request access from the Preferences pane.

Request CodeWhisperer access in Lambda console preference pane

Selecting either of these options opens the sign-up form.

CodeWhisperer sign up form

Enter your contact information, including your AWS account ID. This is required to enable the AWS Lambda console integration. You will receive a welcome email from the CodeWhisperer team once they approve your request.

Activating Amazon CodeWhisperer in the Lambda console

Once AWS enables your preview access, you must turn on the CodeWhisperer integration in the Lambda console, and configure the required permissions.

From the Tools menu, enable Amazon CodeWhisperer Code Suggestions.

Enable CodeWhisperer code suggestions

You can also enable code suggestions from the Preferences pane:

Enable CodeWhisperer code suggestions from Preferences pane

The first time you activate CodeWhisperer, you see a pop-up containing terms and conditions for using the service.

CodeWhisperer Preview Terms

Read the terms and conditions and choose Accept to continue.

AWS Identity and Access Management (IAM) permissions

For CodeWhisperer to provide recommendations in the Lambda console, you must enable the proper AWS Identity and Access Management (IAM) permissions for either your IAM user or role. In addition to Lambda console editor permissions, you must add the codewhisperer:GenerateRecommendations permission.

Here is a sample IAM policy that grants a user permission to the Lambda console as well as CodeWhisperer:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "LambdaConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateEventSourceMapping",
        "lambda:CreateFunction",
        "lambda:DeleteEventSourceMapping",
        "lambda:GetAccountSettings",
        "lambda:GetEventSourceMapping",
        "lambda:GetFunction",
        "lambda:GetFunctionCodeSigningConfig",
        "lambda:GetFunctionConcurrency",
        "lambda:GetFunctionConfiguration",
        "lambda:InvokeFunction",
        "lambda:ListEventSourceMappings",
        "lambda:ListFunctions",
        "lambda:ListTags",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateEventSourceMapping",
        "iam:AttachRolePolicy",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:SimulatePrincipalPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeWhispererPermissions",
      "Effect": "Allow",
      "Action": ["codewhisperer:GenerateRecommendations"],
      "Resource": "*"
    }
  ]
}

This example is for illustration only. It is best practice to use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

Demo

To activate and work with code suggestions, use the following keyboard shortcuts:

  • Manually fetch a code suggestion: Option+C (macOS), Alt+C (Windows)
  • Accept a suggestion: Tab
  • Reject a suggestion: ESC, Backspace, scroll in any direction, or keep typing and the recommendation automatically disappears.

Currently, the IDE extensions provide automatic suggestions and can show multiple suggestions. The Lambda console integration requires a manual fetch and shows a single suggestion.

Here are some common ways to use CodeWhisperer while authoring Lambda functions.

Single-line code completion

When typing single lines of code, CodeWhisperer suggests how to complete the line.

CodeWhisperer single-line completion

Full function generation

CodeWhisperer can generate an entire function based on your function signature or code comments. In the following example, a developer has written a function signature for reading a file from Amazon S3. CodeWhisperer then suggests a full implementation of the read_from_s3 method.

CodeWhisperer full function generation

CodeWhisperer may include import statements as part of its suggestions, as in the previous example. As a best practice to improve performance, manually move these import statements to outside the function handler.
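
The accepted suggestion itself appears only as a screenshot above, so the following sketch is illustrative rather than actual CodeWhisperer output. It shows the resulting pattern with the import and client creation moved to module scope, per the best practice just mentioned; the bucket and key names are placeholders:

import boto3

# Client created once at module scope so it is reused across invocations,
# rather than being created inside read_from_s3.
s3 = boto3.client("s3")


def read_from_s3(bucket, key):
    # Returns the object body as text; illustrative implementation.
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")


def lambda_handler(event, context):
    # Placeholder event fields.
    return read_from_s3(event["bucket"], event["key"])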

Generate code from comments

CodeWhisperer can also generate code from comments. The following example shows how CodeWhisperer generates code to use AWS APIs to upload files to Amazon S3. Write a comment describing the intended functionality and, on the following line, activate the CodeWhisperer suggestions. Given the context from the comment, CodeWhisperer first suggests the function signature code in its recommendation.

CodeWhisperer generate function signature code from comments

After you accept the function signature, CodeWhisperer suggests the rest of the function code.

CodeWhisperer generate function code from comments

When you accept the suggestion, CodeWhisperer completes the entire code block.

CodeWhisperer generates code to write to S3.

CodeWhisperer can help write code that accesses many other AWS services. In the following example, a code comment indicates that a function is sending a notification using Amazon Simple Notification Service (SNS). Based on this comment, CodeWhisperer suggests a function signature.

CodeWhisperer function signature for SNS

If you accept the suggested function signature, CodeWhisperer suggests a complete implementation of the send_notification function.

CodeWhisperer function send notification for SNS

The same procedure works with Amazon DynamoDB. When writing a code comment indicating that the function is to get an item from a DynamoDB table, CodeWhisperer suggests a function signature.

CodeWhisperer DynamoDB function signature

When you accept the suggestion, CodeWhisperer suggests a full code snippet to complete the implementation.

CodeWhisperer DynamoDB code snippet

After reviewing the suggestion, a common refactoring step in this example would be to manually move the references to the DynamoDB resource and table outside the get_item function.
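
The refactored result would look roughly like the following sketch (again illustrative, not actual CodeWhisperer output), with the DynamoDB resource and table created once at module scope. The table name and key attribute are placeholders:

import boto3

# Resource and table created once at module scope so they are reused
# across invocations instead of being created inside get_item.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("my-table")


def get_item(item_id):
    # Illustrative lookup by the table's partition key.
    response = table.get_item(Key={"id": item_id})
    return response.get("Item")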

CodeWhisperer can also recommend complex algorithm implementations, such as Insertion sort.

CodeWhisperer insertion sort.

As a best practice, always test the code recommendation for completeness and correctness.

CodeWhisperer not only provides suggested code snippets when integrating with AWS APIs, but can help you implement common programming idioms, including proper error handling.

Conclusion

CodeWhisperer is a general purpose, machine learning-powered code generator that provides you with code recommendations in real time. When activated in the Lambda console, CodeWhisperer generates suggestions based on your existing code and comments, helping to accelerate your application development on AWS.

To get started, visit https://aws.amazon.com/codewhisperer/. Share your feedback with us at [email protected].

For more serverless learning resources, visit Serverless Land.

AWS achieves TISAX certification (Information with Very High Protection Needs (AL3))

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-tisax-certification-information-with-very-high-protection-needs-al3/

We’re excited to announce the completion of the Trusted Information Security Assessment Exchange (TISAX) certification on June 30, 2022 for 19 AWS Regions. These Regions achieved the Information with Very High Protection Needs (AL3) label for the control domains Information Handling and Data Protection. This alignment with TISAX requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS automotive customers can run their applications in the certified AWS Regions with confidence.

The following 19 Regions are currently TISAX certified:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Oregon)
  • Africa (Cape Town)
  • Asia Pacific (Hong Kong)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Osaka)
  • Asia Pacific (Seoul)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Milan)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

TISAX is a European automotive industry-standard information security assessment (ISA) catalog based on key aspects of information security, such as data protection and connection to third parties.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the European Network Exchange (ENX) Portal (the scope ID and assessment ID are SM22TH and AYA2D4-1, respectively) and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose TISAX.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about TISAX compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

AWS achieves HDS certification to three additional Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-achieves-hds-certification-to-three-additional-regions/

We’re excited to announce that three additional AWS Regions—Asia Pacific (Korea), Europe (London), and Europe (Stockholm)—have been granted the Health Data Hosting (Hébergeur de Données de Santé, HDS) certification. This alignment with the HDS requirements demonstrates our continued commitment to adhere to the heightened expectations for cloud service providers. AWS customers who handle personal health data can host their workloads in the certified AWS Regions with confidence.

The following 16 Regions are now in scope of this certification:

  • US East (Ohio)
  • US East (Northern Virginia)
  • US West (Northern California)
  • US West (Oregon)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Korea)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)
  • Asia Pacific (Tokyo)
  • Canada (Central)
  • Europe (Frankfurt)
  • Europe (Ireland)
  • Europe (London)
  • Europe (Paris)
  • Europe (Stockholm)
  • South America (Sao Paulo)

Introduced by the French governmental agency for health, Agence Française de la Santé Numérique (ASIP Santé), HDS certification aims to strengthen the security and protection of personal health data. Achieving this certification demonstrates that AWS provides a framework for technical and governance measures to secure and protect personal health data, governed by French law.

AWS was evaluated and certified by independent third-party auditors on June 30, 2022. The Certificate of Compliance demonstrating the AWS compliance status is available on the Agence du Numérique en Santé (ANS) website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

For up-to-date information, including when additional Regions are added, see the AWS Compliance Program, and choose HDS.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about HDS compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

New — Detect and Resolve Issues Quickly with Log Anomaly Detection and Recommendations from Amazon DevOps Guru

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-detect-and-resolve-issues-quickly-with-log-anomaly-detection-and-recommendations-from-amazon-devops-guru/

Today, we are announcing a new feature, Log Anomaly Detection and Recommendations for Amazon DevOps Guru. With this feature, you can find anomalies throughout relevant logs within your app, and get targeted recommendations to resolve issues. Here’s a quick look at this feature:

AWS launched DevOps Guru, a fully managed AIOps platform service, in December 2020 to make it easier for developers and operators to improve applications’ reliability and availability. DevOps Guru minimizes the time needed for issue remediation by using machine learning models based on more than 20 years of operational expertise in building, scaling, and maintaining applications for Amazon.com.

You can use DevOps Guru to identify anomalies such as increased latency, error rates, and resource constraints and then send alerts with a description and actionable recommendations for remediation. You don’t need any prior knowledge in machine learning to use DevOps Guru, and only need to activate it in the DevOps Guru dashboard.

New Feature – Log Anomaly Detection and Recommendations

Observability and monitoring are integral parts of DevOps and modern applications. Applications can generate several types of telemetry, one of which is metrics, to reveal the performance of applications and to help identify issues.

While the metrics analyzed by DevOps Guru today are critical to surfacing issues occurring in applications, it is still challenging to find the root cause of these issues. As applications become more distributed and complex, developers and IT operators need more automation to reduce the time and effort spent detecting, debugging, and resolving operational issues. By sourcing relevant logs in conjunction with metrics, developers can now more effectively monitor and troubleshoot their applications.

With this new Log Anomaly Detection and Recommendations feature, you can get insights along with precise recommendations from application logs without manual effort. This feature delivers contextualized log data of anomaly occurrences and provides actionable insights from recommendations integrated inside the DevOps Guru dashboard.

The Log Anomaly Detection and Recommendations feature can detect exception keywords, numerical anomalies, HTTP status codes, data format anomalies, and more. When DevOps Guru identifies anomalies from logs, you will find relevant log samples and deep links to CloudWatch Logs on the DevOps Guru dashboard. These contextualized logs also underpin the feature's targeted recommendations, which help speed up troubleshooting and issue remediation.

Let’s Get Started!

This new feature consists of two parts: "Log Anomaly Detection" and "Recommendations." Let's explore how to use this feature to find the root cause of an issue and get recommendations. As an example, we'll look at my serverless API built using Amazon API Gateway, with AWS Lambda integrated with Amazon DynamoDB. The architecture is shown in the following image:
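The architecture diagram isn't reproduced in this archive. To make the example concrete, a minimal handler for an API like this might look like the following sketch; the table name, environment variable, and item shape are assumptions, not the actual code behind the example.

import json
import os
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "items"))  # assumed configuration

# API Gateway invokes this Lambda function, which writes the request body to DynamoDB
def lambda_handler(event, context):
    item = json.loads(event["body"])
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item.get("id")})}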

If it’s your first time using DevOps Guru, you’ll need to enable it by visiting the DevOps Guru dashboard. You can learn more by visiting the Getting Started page.

Since I’ve already enabled DevOps Guru, I can go to the Insights page, navigate to the Log groups section, and select Enable log anomaly detection.
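The same opt-in can likely be done programmatically. The following is a sketch only, assuming the DevOps Guru UpdateServiceIntegration API accepts a LogsAnomalyDetection setting introduced with this feature; check the current API reference before relying on it.

import boto3

devops_guru = boto3.client("devops-guru")

# Opt in to log anomaly detection (assumed setting; verify against the API reference)
devops_guru.update_service_integration(
    ServiceIntegration={
        "LogsAnomalyDetection": {"OptInStatus": "ENABLED"},
    }
)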

Log Anomaly Detection

After a few hours, I can visit the DevOps Guru dashboard to check for insights. Here, I get some findings from DevOps Guru, as seen in the following screenshots:

With Log Anomaly Detection, DevOps Guru will show the findings of my serverless API in the Log groups section, as seen in the following screenshot:

I can hover over the anomaly and get a high-level summary of the contextualized enrichment data found in this log group. It also provides me with additional information, including the number of log records analyzed and the log scan time range. From this information, I know these anomalies are new event types that have not been detected in the past with the keyword ERROR.

To investigate further, I can select the log group link and go to the Detail page. The graph shows relevant events that might have occurred around these log showcases, which provides helpful context for troubleshooting the root cause. This Detail page includes different showcases, each representing a cluster of similar log events, such as exception keywords and numerical anomalies, found in the logs at the time of the anomaly.

Looking at the first log showcase, I noticed a ConditionalCheckFailedException error within the AWS Lambda function. This exception occurs when a conditional write from the Lambda function to DynamoDB fails because its condition expression evaluates to false. From here, I learned that there was an issue in the conditional check logic, and I reviewed it in the AWS Lambda function. I can also investigate related CloudWatch Logs groups by selecting the View details in CloudWatch links.
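To illustrate how this exception arises, here is a minimal sketch of a conditional write that raises ConditionalCheckFailedException when an item with the same id already exists; the table name and key attribute are assumptions, not the code from the example application.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("items")  # hypothetical table name

# Insert an item only if no item with the same id exists yet
def create_item(item):
    try:
        table.put_item(
            Item=item,
            ConditionExpression="attribute_not_exists(id)",
        )
    except ClientError as error:
        if error.response["Error"]["Code"] == "ConditionalCheckFailedException":
            raise ValueError(f"Item {item['id']} already exists") from error
        raise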

One thing I want to emphasize here is that DevOps Guru identifies significant events related to application performance and helps me to see the important things I need to focus on by separating the signal from the noise.

Targeted Recommendations

In addition to anomaly detection of logs, this new feature also provides precise recommendations based on the findings in the logs. You can find these recommendations on the Insights page, by scrolling down to find the Recommendations section.

Here, I get some recommendations from DevOps Guru, which make it easier for me to take immediate steps to remediate the issue. One recommendation shown in the following image is Check DynamoDB ConditionalExpression, which relates to an anomaly found in the logs derived from AWS Lambda.

Availability

You can use DevOps Guru Log Anomaly Detection and Recommendations today at no additional charge in all Regions where DevOps Guru is available: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

To learn more, please visit the Amazon DevOps Guru website and technical documentation, and get started today.

Happy building
— Donnie