Tag Archives: announcements

AWS re:Inforce 2022: Threat detection and incident response track preview

Post Syndicated from Celeste Bishop original https://aws.amazon.com/blogs/security/aws-reinforce-2022-threat-detection-and-incident-response-track-preview/

Register now with discount code SALXTDVaB7y to get $150 off your full conference pass to AWS re:Inforce, for a limited time and while supplies last.

Today we’re going to highlight just some of the sessions focused on threat detection and incident response that are planned for AWS re:Inforce 2022. AWS re:Inforce is a learning conference focused on security, compliance, identity, and privacy. The event features access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote featuring AWS Security leadership, and more. AWS re:Inforce 2022 will take place in-person in Boston, MA on July 26-27.

AWS re:Inforce organizes content across multiple themed tracks: identity and access management; threat detection and incident response; governance, risk, and compliance; networking and infrastructure security; and data protection and privacy. This post highlights some of the breakout sessions, chalk talks, builders’ sessions, and workshops planned for the threat detection and incident response track. For additional sessions and descriptions, see the re:Inforce 2022 catalog preview. For other highlights, see our sneak peek at the identity and access management sessions and sneak peek at the data protection and privacy sessions.

Breakout sessions

These are lecture-style presentations that cover topics at all levels, delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

TDR201: Running effective security incident response simulations
Security incidents provide learning opportunities for improving your security posture and incident response processes. Ideally, you want to learn these lessons before having a security incident. In this session, walk through the process of running and moderating effective incident response simulations with your organization's playbooks. Learn how to create realistic real-world scenarios, methods for collecting valuable learnings and feeding them back into implementation, and how to document correction-of-error proceedings to improve processes. This session provides knowledge that can help you begin assessing your organization's incident response processes, procedures, communication paths, and documentation.

TDR202: What’s new with AWS threat detection services
AWS threat detection teams continue to innovate and improve the foundational security services for proactive and early detection of security events and posture management. Keeping up with the latest capabilities can improve your security posture, raise your security operations efficiency, and reduce your mean time to remediation (MTTR). In this session, learn about recent launches that can be used independently or integrated together for different use cases. Services covered in this session include Amazon GuardDuty, Amazon Detective, Amazon Inspector, Amazon Macie, and centralized cloud security posture assessment with AWS Security Hub.

TDR301: A proactive approach to zero-days: Lessons learned from Log4j
In the run-up to the 2021 holiday season, many companies were hit by security vulnerabilities in the widespread Java logging framework, Apache Log4j. Organizations were in a reactive position, trying to answer questions like: How do we figure out if this is in our environment? How do we remediate across our environment? How do we protect our environment? In this session, learn about proactive measures that you should implement now to better prepare for future zero-day vulnerabilities.

TDR303: Zoom’s journey to hyperscale threat detection and incident response
Zoom, a leader in modern enterprise video communications, experienced hyperscale growth during the pandemic. Their customer base expanded by 30x and their daily security logs went from being measured in gigabytes to terabytes. In this session, Zoom shares how their security team supported this breakneck growth by evolving to a centralized infrastructure, updating their governance process, and consolidating to a single pane of glass for a more rapid response to security concerns. Solutions used to accomplish their goals include Splunk, AWS Security Hub, Amazon GuardDuty, Amazon CloudWatch, Amazon S3, and others.

Builders’ sessions

These are small-group sessions led by an AWS expert who guides you as you build a solution on your own laptop.

TDR351: Using Kubernetes audit logs for incident response automation
In this hands-on builders’ session, learn how to use Amazon CloudWatch and Amazon GuardDuty to effectively monitor Kubernetes audit logs—part of the Amazon EKS control plane logs—to alert on suspicious events, such as an increase in 403 Forbidden or 401 Unauthorized responses. Also learn how to automate example incident responses to streamline workflows and remediation.
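The detection logic this session describes can be sketched in a few lines. The following is a minimal, hypothetical example that counts denied requests in a batch of parsed Kubernetes audit events; the entry shape follows the Kubernetes audit event format (`responseStatus.code`), and the threshold is an illustrative assumption, not an AWS-recommended value.

```python
def count_denied_requests(audit_events):
    """Count audit events whose response was 401 Unauthorized or 403 Forbidden."""
    denied = 0
    for event in audit_events:
        # Kubernetes audit events carry the HTTP result in responseStatus.code.
        code = event.get("responseStatus", {}).get("code")
        if code in (401, 403):
            denied += 1
    return denied

def is_suspicious(audit_events, threshold=10):
    """Flag a window of events when denied requests exceed the threshold."""
    return count_denied_requests(audit_events) > threshold
```

In practice this kind of check would run as a CloudWatch Logs metric filter or alarm rather than application code; the sketch only illustrates the condition being alerted on.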

TDR352: How to mitigate the risk of ransomware in your AWS environment
Join this hands-on builders’ session to learn how to mitigate the risk from ransomware in your AWS environment using the NIST Cybersecurity Framework (CSF). Choose your own path to learn how to protect, detect, respond, and recover from a ransomware event using key AWS security and management services. Use Amazon Inspector to detect vulnerabilities, Amazon GuardDuty to detect anomalous activity, and AWS Backup to automate recovery. This session is beneficial for security engineers, security architects, and anyone responsible for implementing security controls in their AWS environment.

Chalk talks

These are highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

TDR231: Automated vulnerability management and remediation for Amazon EC2
In this chalk talk, learn about vulnerability management strategies for Amazon EC2 instances on AWS at scale. Discover the role of services like Amazon Inspector, AWS Systems Manager, and AWS Security Hub in vulnerability management and mechanisms to perform proactive and reactive remediations of findings that Amazon Inspector generates. Also learn considerations for managing vulnerabilities across multiple AWS accounts and Regions in an AWS Organizations environment.

TDR332: Response preparation with ransomware tabletop exercises
Many organizations do not validate their critical processes before an event such as a ransomware attack. Through a security tabletop exercise, customers can use simulations to give their organizations realistic training, test their security resilience, and mitigate risk. In this chalk talk, learn about AWS Managed Services (AMS) best practices through a live, interactive tabletop exercise that demonstrates how to run a simulation of a ransomware scenario. Attendees will leave with a deeper understanding of incident response preparation and how to use AWS security tools to better respond to ransomware events.

Workshops

These are interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

TDR271: Detecting and remediating security threats with Amazon GuardDuty
This workshop walks through scenarios covering threat detection and remediation using Amazon GuardDuty, a managed threat detection service. The scenarios simulate an incident that spans multiple threat vectors, representing a sample of threats related to Amazon EC2, AWS IAM, Amazon S3, and Amazon EKS that GuardDuty can detect. Learn how to view and analyze GuardDuty findings, send alerts based on the findings, and remediate findings.
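As a sketch of the alerting step mentioned above, a simple triage filter over GuardDuty findings might look like the following. The findings are simplified to dicts keyed the way the GetFindings API names them, and the 7.0 default cutoff mirrors GuardDuty's "High" severity band; both are assumptions for illustration.

```python
def findings_to_alert(findings, min_severity=7.0):
    """Return findings whose numeric severity meets the alerting cutoff.

    GuardDuty severities are numeric (roughly 0.1-8.9); 7.0 and above
    corresponds to the High band, which is a common alerting threshold.
    """
    return [f for f in findings if f.get("Severity", 0) >= min_severity]
```

A real pipeline would fetch findings with the GuardDuty API and route the filtered list to a notification target; this sketch only shows the selection logic.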

TDR371: Building an AWS incident response runbook using Jupyter notebooks
This workshop guides you through building an incident response runbook for your AWS environment using Jupyter notebooks. Walk through an easy-to-follow sample incident using a ready-to-use runbook. Then add new programmatic steps and documentation to the Jupyter notebook, helping you discover and respond to incidents.

TDR372: Detecting and managing vulnerabilities with Amazon Inspector
Join this workshop to get hands-on experience using Amazon Inspector to scan Amazon EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR) for software vulnerabilities. Learn how to manage findings by creating prioritization and suppression rules, and learn how to understand the details found in example findings.

TDR373: Industrial IoT hands-on threat detection
Modern organizations understand that enterprise and industrial IoT (IIoT) yields significant business benefits. However, unaddressed security concerns can expose vulnerabilities and slow down companies looking to accelerate digital transformation by connecting production systems to the cloud. In this workshop, use a case study to detect and remediate a compromised device in a factory using security monitoring and incident response techniques. Use an AWS multilayered security approach and top ten IIoT security golden rules to improve the security posture in the factory.

TDR374: You’ve received an Amazon GuardDuty EC2 finding: What’s next?
You’ve received an Amazon GuardDuty finding drawing your attention to a possibly compromised Amazon EC2 instance. How do you respond? In part one of this workshop, perform an Amazon EC2 incident response using proven processes and techniques for effective investigation, analysis, and lessons learned. Use the AWS CLI to walk step-by-step through a prescriptive methodology for responding to a compromised Amazon EC2 instance that helps effectively preserve all available data and artifacts for investigations. In part two, implement a solution that automates the response and forensics process within an AWS account, so that you can use the lessons learned in your own AWS environments.

If any of these sessions look interesting, consider joining us by registering for re:Inforce 2022. Use code SALXTDVaB7y to save $150 on registration, for a limited time and while supplies last. Additional sessions will be added to the catalog soon. We look forward to seeing you in Boston!

Celeste Bishop


Celeste is a Product Marketing Manager in AWS Security, focusing on threat detection and incident response solutions. Her background is in experience marketing and includes event strategy at Fortune 100 companies. A passionate soccer fan, she can be found on any given weekend cheering on Liverpool FC and her hometown club, Austin FC.

Charles Goldberg


Charles leads the Security Services product marketing team at AWS. He is based in Silicon Valley and has worked with networking, data protection, and cloud companies. His mission is to help customers understand solution best practices that can reduce the time and resources required for improving their company’s security and compliance outcomes.

New AWS whitepaper: AWS User Guide to Financial Services Regulations and Guidelines in New Zealand

Post Syndicated from Julian Busic original https://aws.amazon.com/blogs/security/new-aws-whitepaper-aws-user-guide-to-financial-services-regulations-and-guidelines-in-new-zealand/

Amazon Web Services (AWS) has released a new whitepaper to help financial services customers in New Zealand accelerate their use of the AWS Cloud.

The new AWS User Guide to Financial Services Regulations and Guidelines in New Zealand—along with the existing AWS Workbook for the RBNZ’s Guidance on Cyber Resilience—continues our efforts to help AWS customers navigate the regulatory expectations of the Reserve Bank of New Zealand (RBNZ) in a shared responsibility environment.

This whitepaper is intended for RBNZ-regulated institutions that are looking to run material workloads in the AWS Cloud, and is particularly useful for leadership, security, risk, and compliance teams that need to understand RBNZ requirements and guidance.

The whitepaper summarizes RBNZ requirements and guidance related to outsourcing, cyber resilience, and the cloud. It also gives RBNZ-regulated institutions information they can use to commence their due diligence and assess how to implement the appropriate programs for their use of AWS cloud services.

This document joins existing guides for other jurisdictions in the Asia Pacific region, such as Australia, India, Singapore, and Hong Kong. As the regulatory environment continues to evolve, we’ll provide further updates on the AWS Security Blog and the AWS Compliance page. You can find more information on cloud-related regulatory compliance at the AWS Compliance Center. You can also reach out to your AWS account manager for help finding the resources you need.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Julian Busic

Julian is a Security Solutions Architect with a focus on regulatory engagement. He works with our customers, their regulators, and AWS teams to help customers raise the bar on secure cloud adoption and usage. Julian has over 15 years of experience working in risk and technology across the financial services industry in Australia and New Zealand.

AWS IoT ExpressLink Now Generally Available – Quickly Develop Devices That Connect Securely to AWS Cloud

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-iot-expresslink-now-generally-available-quickly-develop-devices-that-connect-securely-to-aws-cloud/

At AWS re:Invent 2021, we introduced AWS IoT ExpressLink, software for partner-manufactured connectivity modules that makes it easier and faster for original equipment manufacturers to connect any type of product to the cloud, such as industrial sensors, small and large home appliances, irrigation systems, and medical devices.

Today we announce the general availability of AWS IoT ExpressLink and the related connectivity modules offered by AWS Partners, such as Espressif, Infineon, and u-blox. The modules contain built-in cloud-connectivity software implementing AWS-mandated security requirements. Integrating these wireless modules into the hardware design of your device makes it faster and easier to securely connect Internet of Things (IoT) devices to the AWS Cloud and integrate with a range of AWS services.

Connecting devices to the AWS Cloud requires developers to add tens of thousands of lines of new code to their devices' processors, which demands specialized skills. Merging this new code with their application code also requires a deep understanding of networking and cryptography to ensure the device is both functional and implements AWS-mandated security requirements.

Some devices are too resource-constrained to support cloud connectivity, meaning their processors are too small or slow to handle the additional code. For example, a small piece of equipment, like a pool pump, may contain a tiny processor that is optimized to drive a particular type of motor but does not have the memory space or the performance necessary to handle both the motor and a cloud connection.

Modules with AWS IoT ExpressLink include the code required to connect the device to the cloud, thereby reducing the development cycle and accelerating time to market. Returning to the pool pump example, you can keep the tiny processor in the equipment and delegate the heavy lifting of connecting to the cloud to AWS IoT ExpressLink, allowing the manufacturer to keep the application software simple and avoid a costly redesign.

Modules with AWS IoT ExpressLink feature best practices for device-to-cloud connectivity and security as manufacturing partners incorporate AWS-mandated security requirements designed to help protect devices from remote attacks and to help achieve a secure connection to the AWS Cloud. These include the following provisioning and security procedures:

  • Cryptographically signed certificate with unique device ID.
  • Cryptographically secured boot based in a hardware root of trust.
  • Transport Layer Security (TLS v1.2 or higher) encryption of wireless network connections.
  • Encryption of all sensitive data stored on the module, both in transit and at rest.
  • Hardware root of trust for secrets storage and application code segregation.
  • Compliance with a security regression test suite.
  • Verification of communication interfaces (Command Line Interface, Wi-Fi, BLE, or Cellular) against memory corruption attacks.
  • Support for cryptographically secured AWS IoT over-the-air (OTA) firmware updates to keep the devices up to date with new features and security patches.

AWS IoT ExpressLink natively integrates with AWS IoT services, such as AWS IoT Device Management, to help customers easily monitor and update their device fleets at scale.

How AWS IoT ExpressLink Works
I’ll explain how modules with AWS IoT ExpressLink communicate with the AWS Cloud and allow you to connect to the cloud simply.

For example, Infineon’s IFW56810 is a single-band Wi-Fi 4 connectivity module that provides a simple, secure solution for connecting products to AWS IoT cloud services. The IFW56810 module is preprogrammed with a tested, secured firmware implementation of AWS IoT ExpressLink and supports an easy-to-use AT command interface for configuration.

To get started, connect the IFW956810 evaluation kit to your PC using either the Type-C connector or a Type-A male to Type-C female cable. Run a serial terminal to communicate with the kit over USB, choosing the higher of the two enumerated COM ports on Windows. Once you open the serial terminal and configure settings such as the baud rate, type AT in the terminal. You should see the response OK.
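The AT exchange above can be factored into transport-independent helpers. This is a sketch assuming the ExpressLink convention of newline-terminated commands and success responses beginning with OK; actually sending and receiving bytes would go through whatever serial library you use.

```python
def frame_command(command):
    """Terminate an AT command line for transmission over the serial link."""
    return command.strip() + "\n"

def is_ok(response):
    """True if the module's response line indicates success (starts with OK)."""
    return response.strip().startswith("OK")
```

For example, writing `frame_command("AT")` to the port and checking the reply with `is_ok` reproduces the manual terminal check described above.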

You can also send AWS IoT ExpressLink commands as simple as CONNECT, SEND, and SUBSCRIBE to start communicating with the cloud. The device will translate these commands, make an MQTT connection, and send messages to AWS IoT Core.

Whether you are using a Wi-Fi or a cellular LTE-M module, the most basic telemetry application can be expressed in roughly 10 lines of pseudocode:

int main(void)
{
    printf("AT+CONNECT\n");                      /* connect to the cloud */
    while (1) {
        /* publish one sensor reading as a JSON payload */
        printf("AT+SEND data {\"A\":%d}\n", getSensorA());
        sleep(1);                                /* wait one second */
    }
}

To learn more, visit the AWS IoT ExpressLink programmer’s guide.

Customer Stories
Many of our customers use AWS IoT ExpressLink to offload the complex but undifferentiated work required to securely connect devices to the AWS Cloud, which improves the developer experience by reducing design effort and helps them deliver products faster.

Cardinal Peak is a Colorado-based product engineering services company that reduces the risk of outsourcing an engineering project. Cardinal Peak specializes in developing connected products in multiple markets, including audio, video, security, health care and others. With design skills in hardware, electronics, embedded, cloud and end-user software, Cardinal Peak provides end-to-end design services for its clients.

Keegan Landreth, Embedded Software Engineer at Cardinal Peak said:

“AWS IoT ExpressLink allowed me to put together a WiFi-connected product demo sending sensor data to the cloud in a single afternoon! Secure networking for embedded systems has never been this easy. It’s an almost completely transparent interface between my application and AWS, as simple as printing data to a serial port. Being able to do OTA firmware updates through it is a huge value add-on. The best part is that I can reuse the same code to make a cellular version, which is unheard of!”

ēdn makes SmallGarden, a cloud-powered indoor smart gardening product that lets you easily grow plants at home by providing light, water, nutrients, and heat as needed.

Ryan Woltz, CEO of ēdn, said:

“We were looking for a quick and easy way to enable robust cloud capabilities for our indoor gardening product lines. However, from past experience, we knew that doing so adds significant risk in terms of time, money, and overall go-to-market execution. IoT device connectivity is complex, forcing our team to either outsource the development to a costly third party or allocate internal engineering resources, significantly delaying innovative features that differentiate our offerings in the market. Even a small misstep in the implementation of provisioning, security, or over-the-air functionality can set a product back months.

Now, thanks to u-blox’s hardware module with AWS IoT ExpressLink, we can enable secure and reliable cloud connectivity for our devices within days. This not only allows us to accelerate product development, but it ensures our engineering team remains focused on shipping leading-edge technologies that make nature accessible indoors.”

u-blox is an AWS Partner with a broad portfolio of chips, modules, and services. Harald Kroell, Product Manager at u-blox, said:

“At u-blox, with AWS IoT ExpressLink, we strengthen our Wi-Fi and LTE-M portfolio and bring silicon-to-cloud connectivity to the next level. By bridging our hardware and services with the AWS cloud, we progress on our mission to make businesses wirelessly connected and build solutions to last an IoT lifetime.

With the SARA-R5 and NORA-W2 modules with AWS IoT ExpressLink, customers can connect products with two different wireless technologies to AWS with a single homogeneous interface, which significantly reduces development effort. It also enables new business opportunities by lowering the barrier of connecting devices, which previously would have been too expensive to connect.”

To get started, order SARA-R5 Starter Kit and USB-NORA-W256AWS with its development kit user guide, including modules powered by AWS IoT ExpressLink.

AWS IoT ExpressLink Partners
As in the case of u-blox, two other AWS Partners, Infineon Technologies AG and Espressif Systems, have developed wireless modules that support a range of connectivity options, including Wi-Fi and cellular, and are powered by AWS IoT ExpressLink. All qualified devices in the AWS Partner Device Catalog are available for purchase from AWS Partners.

Infineon Technologies AG specializes in semiconductor solutions the goal of which is to make life easier, safer, and greener. Sivaram Trikutam, Vice President, Wi-Fi Product Line at Infineon Technologies, said:

“We’re excited to be working with AWS on the AIROC™ IFW56810 Cloud Connectivity Manager (CCM) solution supporting AWS IoT ExpressLink. With this plug-and-play solution, developers and engineers no longer need to create complex code or possess a wide range of technical competencies in Wi-Fi, embedded systems, antenna design, and cloud configuration.

Now, they can easily, quickly, and securely connect devices at scale to AWS, so they can focus on creating new revenue streams and getting to market faster. We are excited to work with our partner AWS on new business opportunities that help our customers meet their needs.”

Espressif Systems is a multinational, fabless semiconductor company with a strong focus on providing connectivity solutions to internet-connected devices. Amey Inamdar, Director of Technical Marketing, Espressif Systems, said:

“At Espressif, we continuously strive to provide secure, green, versatile, and cost-effective AIoT solutions with a focus on ease of use for our customers. The AWS IoT ExpressLink program fits well into that philosophy, providing a convenient AWS IoT connectivity.

It enables customers to seamlessly transform their offline product into a cloud-connected product by offloading the complexity to the module with AWS IoT ExpressLink, with reduced development costs and a faster time to market and hence lowering the barrier to entry to build secure connected devices. Espressif is proud to participate in this program with Espressif’s module with AWS IoT ExpressLink to provide secure and affordable AWS IoT connectivity.”

Order and Get Started Now
You can discover a range of Partner-provided modules with AWS IoT ExpressLink in the AWS Partner Device Catalog. Order your evaluation kits with AWS IoT ExpressLink today. The kit will include an application processor or will connect to compatible development platforms such as Arduino.

You can then immediately start sending telemetry data to the cloud through the simple AWS IoT ExpressLink serial interface. Sample code is available for integrating an AWS IoT ExpressLink module into an application; the examples demonstrate how to perform common operations for an IoT device.

To learn more, visit the product page. Please send feedback to AWS re:Post for AWS IoT ExpressLink or through your usual AWS support contacts.

Channy

New – High Volume Outbound Communication with Amazon Connect Outbound Campaigns

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-high-volume-outbound-communication-with-amazon-connect-outbound-campaigns/

The new high-volume outbound communication capability in Amazon Connect, announced at Enterprise Connect last year, is now generally available to all. It is named Amazon Connect outbound campaigns.

If you haven’t heard about Amazon Connect, it is an easy-to-use cloud contact center service that helps companies of any size deliver superior customer service at lower cost. You can read the original blog post Jeff wrote at launch in 2017, with amazing Lego art 🙂

Contact centers not only receive calls and communications, but they also send outbound communications to customers. There are a variety of reasons to send outbound communication: appointment reminders, telemarketing, subscription renewals, and billing reminders. The vast majority of these communications are phone calls, and in many contact centers, agents make the calls manually using customer contact lists in external systems. Since customers only answer about ten percent of calls, these agents can spend nearly half of their time dialing and waiting. This can result in millions of dollars in lost productivity each year for a contact center with as few as 200 agents.

To help you address this challenge, today we are launching Amazon Connect outbound campaigns, a set of high-volume outbound communication capabilities that allow you to proactively reach more of your customers across voice, SMS, and email. With this capability, you have a scalable way to proactively reach hundreds to millions of your customers, increase your agents’ productivity, and lower your operational costs.

Amazon Connect outbound campaigns delivers a predictive phone dialer. The dialer includes an answering machine detection system powered by machine learning. It automatically detects answering machines for voice calls and passes calls to agents only when a call is answered by a human. The dialer also adjusts the call rate depending on factors such as the percentage of calls answered by humans, call duration, and agent availability. No integration is required to benefit from existing Amazon Connect features, such as automated workflows, routing, and machine learning capabilities like Contact Lens. You now have a single system for inbound and outbound communications.

To further refine the customer experience or use multiple channels in your campaigns, for example, to send an SMS or email message to your customers when they do not answer calls, you have the option to use Amazon Pinpoint. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service. It allows you to define customer segments, define the customer journey, define the contact strategy, and more. Amazon Pinpoint is the system handling high-volume SMS and email campaigns.

To better understand how Amazon Connect, Amazon Pinpoint, and other AWS services work together, you can refer to this very detailed blog post.

Let’s show you how it works
Imagine I am a contact center manager, and I want to create an outbound call campaign to target a selected list of customers.

I first import my customer contact list from a spreadsheet on Amazon S3. I may also import it from popular customer relationship management (CRM) and marketing automation applications, such as Marketo, Salesforce, Twilio’s Segment, ServiceNow, Shopify, Zendesk, and Amazon Pinpoint itself.

Amazon Connect outbound campaigns - import contact 2

Then I create a campaign and define some journey parameters: the communication channel, the start time, and the corresponding content, such as a call script, email template, or SMS message. At the scheduled start time, the journey is executed using Amazon Connect for calls or Amazon Pinpoint for SMS or emails, as specified.

Amazon Connect outbound campaigns - create campaign

When I configure the campaign to run in Predictive dial mode, as I mentioned before, the dialer automatically adjusts the dial rate based on the duration of calls and the real-time availability of agents. Once a call is answered, Amazon Connect distinguishes whether it is a live voice or a recorded message and routes the live customer to an available agent in the Amazon Connect agent application, where the agent can see the call script that I specified during setup, along with relevant customer information.

As explained earlier, I may use Amazon Pinpoint to define the customer journey. By doing so, I can combine voice, email, and SMS channels in the same outbound communication campaign to improve the efficiency of my agents and my customer’s experience. For example, a financial institution can use Amazon Connect to send an SMS notification to remind a customer of a missed payment and include a link to request a call back from an agent. When a call is requested, Amazon Connect automatically queues the call, dials the customer’s number, detects their voice, and connects an available agent to the customer.

Amazon Connect outbound campaigns - journey workflow

Amazon Pinpoint allows you to define the details of the customer journey.

Amazon Connect outbound campaigns - setup quiet times

As usual with AWS services, I can analyze contact events sent via Amazon EventBridge. EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated software-as-a-service (SaaS) applications, and AWS services. When filtering or analyzing events posted to EventBridge, I can create metrics such as time to connect to an agent, duration of the contact, and call abandonment rate.

These metrics help me understand the status of my campaign and ensure compliance with applicable regulations, such as maximum call abandonment rates. I also can use historical reports of these metrics to understand the effectiveness of all my communications campaigns over time.
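As an illustration, metrics like these can be derived from event timestamps. The event names and shapes below are assumptions for the sketch, not the exact payload Amazon Connect publishes to EventBridge; timestamps are epoch seconds.

```python
def time_to_connect(contact):
    """Seconds between call initiation and agent connection, or None."""
    initiated = contact.get("INITIATED")
    connected = contact.get("CONNECTED_TO_AGENT")
    if initiated is None or connected is None:
        return None
    return connected - initiated

def abandonment_rate(contacts):
    """Fraction of answered contacts that never reached an agent."""
    answered = [c for c in contacts if "ANSWERED" in c]
    if not answered:
        return 0.0
    abandoned = sum(1 for c in answered if "CONNECTED_TO_AGENT" not in c)
    return abandoned / len(answered)
```

A production implementation would compute these from events filtered through an EventBridge rule into a store or dashboard; the functions only show the arithmetic behind the metrics.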

Amazon Connect outbound campaigns - journey metrics

Speaking of compliance, we do not want anyone to abuse the system, intentionally or not, or to break any local compliance rules.

Access and Compliance
Using automated services to drive outbound communication campaigns is strictly regulated in several countries and territories. For example, the US adopted the Telephone Consumer Protection Act (TCPA) in 1991, and the United Kingdom’s Office of Communications has similar rules.

Amazon Connect outbound campaigns gives you the tools to stay compliant with these regulations and many others. However, just like with traditional IT security, it is a shared responsibility. It is your responsibility to use the service in a compliant manner. We are happy to assist you in addressing specific use cases.

Let’s share two examples to illustrate how Amazon Connect outbound campaigns can help you meet your compliance status: respect quiet time and monitor call abandonment rate.

The use of quiet times allows contact center managers to configure a schedule for channel communications based on the day of the week and the hours of the day. More precise delivery times mean your customers are more likely to engage with the communication, increasing metrics such as open rates for SMS and email, as well as pick-up rates for voice calls. It also allows contact center managers to follow country- and state-level voice dialing legislation. The following screenshot shows how you can configure quiet times using Amazon Pinpoint.

Amazon Connect outbound campaigns - quiet times
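A quiet-time window can also be expressed in the journey definition itself. The fragment below sketches the rough shape of such a configuration as a Python dict, loosely following Pinpoint’s WriteJourneyRequest structure; the journey name, hours, and time zone are hypothetical, and field names should be verified against the current Amazon Pinpoint API reference.

```python
# Hypothetical journey fragment: suppress outbound contact overnight.
# Shape loosely follows Pinpoint's WriteJourneyRequest; verify field names
# against the current API reference before use.
journey_request = {
    "Name": "missed-payment-reminders",
    "QuietTime": {
        "Start": "21:00",  # stop contacting customers after 9 PM local time
        "End": "09:00",    # resume at 9 AM
    },
    "Schedule": {"Timezone": "America/New_York"},
}
print(journey_request["QuietTime"])
```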

According to TCPA, the call abandonment rate is the percentage of calls picked up by a live customer but not connected to a live agent within two seconds after the customer greeting. I found it interesting that in the UK, the time is measured from the start of the customer greeting, while in the US, it is measured from the end of the greeting. Amazon Connect outbound campaigns provides you with metrics, such as customerGreetingStart, customerGreetingStop, and connectedToAgent, for each outbound communication. Contact center managers can use these to compute the abandonment rate and dial the outgoing communication channel up or down accordingly.
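As a sketch of how those metrics could be combined, the following Python snippet computes an abandonment rate under the US-style rule (measured from the end of the greeting). The per-contact dicts with relative timestamps in seconds are a simplification assumed for illustration; in practice you would derive them from the customerGreetingStop and connectedToAgent events.

```python
# Sketch only: per-contact dicts with relative timestamps (in seconds) stand in
# for the customerGreetingStop and connectedToAgent events described above.
ABANDONMENT_WINDOW_SECONDS = 2.0  # US (TCPA) rule: measured from the END of the greeting

def is_abandoned(contact: dict) -> bool:
    """An answered call is abandoned if no agent connected within two
    seconds of the end of the customer greeting."""
    greeting_end = contact.get("customerGreetingStop")
    connected = contact.get("connectedToAgent")
    if greeting_end is None:
        return False  # never answered by a live customer; not counted
    if connected is None:
        return True   # answered, but no agent ever connected
    return (connected - greeting_end) > ABANDONMENT_WINDOW_SECONDS

def abandonment_rate(contacts: list) -> float:
    """Abandoned calls as a fraction of calls answered by a live customer."""
    answered = [c for c in contacts if c.get("customerGreetingStop") is not None]
    if not answered:
        return 0.0
    return sum(is_abandoned(c) for c in answered) / len(answered)

calls = [
    {"customerGreetingStop": 10.0, "connectedToAgent": 11.0},   # connected in time
    {"customerGreetingStop": 10.0, "connectedToAgent": 14.5},   # too slow: abandoned
    {"customerGreetingStop": 10.0, "connectedToAgent": None},   # abandoned
]
print(round(abandonment_rate(calls), 3))  # 0.667
```

A contact center manager could run this over a rolling window and pause the campaign when the rate approaches the regulatory ceiling.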

Other metrics, configuration parameters, and AWS Lambda API integrations allow contact center managers to consult a Do-Not-Call (DNC) registry, perform list scrubbing, and verify a customer’s local time zone or bank holiday calendar, to name a few possibilities.

Pricing and Availability
Amazon Connect outbound campaigns is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London) AWS Regions. This allows you to start your outbound campaigns for customers in the USA, UK, Australia, and New Zealand.

As usual, pricing is based on your usage; you only pay for what you use, with no upfront fees or minimum engagement. The key pricing metric is the number of minutes of outbound calls. The pricing page has all the details.

And now, go build your contact centers.

— seb

AWS Week in Review – June 20, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-20-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Last Week’s Launches
It’s been a quiet week on the AWS News Blog; however, a glance at the What’s New page shows that the various service teams have been busy as usual. Here’s a round-up of announcements that caught my attention this past week.

Support for 15 new resource types in AWS Config – AWS Config is a service for assessment, audit, and evaluation of the configuration of resources in your account. You can monitor and review changes in resource configuration using automation against a desired configuration. The newly expanded set of types includes resources from Amazon SageMaker, Elastic Load Balancing, AWS Batch, AWS Step Functions, AWS Identity and Access Management (IAM), and more.

New console experience for AWS Budgets – A new split-view panel allows for viewing details of a budget without needing to leave the overview page. The new panel will save you time (and clicks!) when you’re analyzing performance across a set of budgets. By the way, you can also now select multiple budgets at the same time.

VPC endpoint support is now available in Amazon SageMaker Canvas – SageMaker Canvas is a visual, point-and-click service enabling business analysts to generate accurate machine-learning (ML) models without requiring ML experience or needing to write code. The new VPC endpoint support, available in all Regions where SageMaker Canvas is supported, eliminates the need for an internet gateway, NAT instance, or a VPN connection when connecting from your SageMaker Canvas environment to services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and more.

Additional data sources for Amazon AppFlow – Facebook Ads, Google Ads, and Mixpanel are now supported as data sources, providing the ability to ingest marketing and product analytics for downstream analysis in AppFlow-connected software-as-a-service (SaaS) applications such as Marketo and Salesforce Marketing Cloud.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates you may have missed from the past week:

Amazon Elastic Compute Cloud (Amazon EC2) expanded the Regional availability of AWS Nitro System-based C6 instance types. C6gn instance types, powered by Arm-based AWS Graviton2 processors, are now available in the Asia Pacific (Seoul), Europe (Milan), Europe (Paris), and Middle East (Bahrain) Regions, while C6i instance types, powered by 3rd generation Intel Xeon Scalable processors, are now available in the Europe (Frankfurt) Region.

As a .NET and PowerShell Developer Advocate here at AWS, there are some news items and updates related to .NET that I want to highlight:

Upcoming AWS Events
The AWS New York Summit is approaching quickly, on July 12. Registration is also now open for the AWS Summit Canberra, an in-person event scheduled for August 31.

Microsoft SQL Server users may be interested in registering for the SQL Server Database Modernization webinar on June 21. The webinar will show you how to modernize and cost-optimize SQL Server on AWS.

Amazon re:MARS is taking place this week in Las Vegas. I’ll be there as a host of the AWS on Air show, along with special guests highlighting their latest news from the conference. I also have some On Air sessions on using our AI services from .NET lined up! As usual, we’ll be streaming live from the expo hall, so if you’re at the conference, give us a wave. You can watch the show live on Twitch.tv/aws, Twitter.com/AWSOnAir, and LinkedIn Live.

A reminder that if you’re a podcast listener, check out the official AWS Podcast Update Show. There is also the latest installment of the AWS Open Source News and Updates newsletter to help keep you up to date.

No doubt there’ll be a whole new batch of releases and announcements from re:MARS, so be sure to check back next Monday for a summary of the announcements that caught our attention!

— Steve

AWS HITRUST CSF certification is available for customer inheritance

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/aws-hitrust-csf-certification-is-available-for-customer-inheritance/

As an Amazon Web Services (AWS) customer, you don’t have to assess the controls that you inherit from the AWS HITRUST Validated Assessment Questionnaire, because AWS already completed its HITRUST assessment using version 9.4 in 2021. You can deploy your environments onto AWS and inherit our HITRUST CSF certification, provided that you use only in-scope services and apply the controls detailed on the HITRUST website.

HITRUST certification allows you to tailor your security control baselines to a variety of factors—including, but not limited to, regulatory requirements and organization type. HITRUST CSF has been widely adopted by leading organizations in a variety of industries as part of their approach to security and privacy. Visit the HITRUST website for more information.

Have you submitted HITRUST Inheritance Program requests to AWS, but haven’t received a response yet? Understand why …

The HITRUST MyCSF manual provides step-by-step instructions for completing the HITRUST Inheritance process. It’s a simple four-step process, as follows:

  1. You create the Inheritance request in the HITRUST MyCSF tool.
  2. You submit the request to AWS.
  3. AWS will either approve or reject the Inheritance request based on the AWS HITRUST Shared Responsibility Matrix.
  4. Finally, you can apply all approved Inheritance requests to your HITRUST Compliance Assessment.

Unless a request is submitted to AWS, we will not be able to approve it. If a prolonged period of time has gone by and you haven’t received a response from AWS, most likely you created the request but didn’t submit it to AWS.

We are committed to helping you achieve and maintain the highest standard of security and compliance. As always, we value your feedback and questions. Feel free to contact the team through AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya

Sonali leads multiple AWS global compliance programs, including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications, such as CISSP, C-GDPR|P, CCSK, CEH, CISA, PCIP, and Lead Auditor for ISO 27001 and ISO 22301.

AWS and the UK rules on operational resilience and outsourcing

Post Syndicated from Arvind Kannan original https://aws.amazon.com/blogs/security/aws-and-the-uk-rules-on-operational-resilience-and-outsourcing/

Financial institutions across the globe use Amazon Web Services (AWS) to transform the way they do business. Regulations continue to evolve in this space, and we’re working hard to help customers proactively respond to new rules and guidelines. In many cases, the AWS Cloud makes it simpler than ever before to assist customers with their compliance efforts with different regulations and frameworks around the world.

In the United Kingdom, the Financial Conduct Authority (FCA), the Bank of England, and the Prudential Regulation Authority (PRA) issued policy statements and rules on operational resilience in March 2021. The PRA also issued a supervisory statement on outsourcing and third-party risk management. Broadly, these Statements apply to certain firms that are regulated by the UK Financial Regulators: this includes banks, building societies, credit unions, insurers, financial markets infrastructure providers, payment and e-money institutions, major investment firms, mixed activity holding companies, and UK branches of certain overseas firms. For other FCA-authorized financial services firms, the FCA has previously issued FG 16/5, Guidance for firms outsourcing to the ‘cloud’ and other third-party IT services.

These Statements are relevant to the use of cloud services. AWS strives to support our customers with their compliance obligations and help them meet their regulators’ expectations. We offer our customers a wide range of services that can simplify and directly assist in complying with these Statements, which apply from March 2022.

What do these Statements from the UK Financial Regulators mean for AWS customers?

The Statements aim to ensure greater operational resilience for UK financial institutions and, in the case of the PRA’s papers on outsourcing, facilitate greater adoption of the cloud and other new technologies while also implementing the Guidelines on outsourcing arrangements from the European Banking Authority (EBA) and the relevant sections of the EBA Guidelines on ICT and security risk management. (See the AWS approach to these EBA guidelines in this blog post).

For AWS and our customers, the key takeaway is that these Statements provide a regulatory framework for cloud usage in a resilient manner. The PRA’s outsourcing paper, in particular, sets out conditions that can help give PRA-regulated firms assurance that they can deploy to the cloud in a safe and resilient manner, including for material, regulated workloads. When they consider or use third-party services (such as AWS), many UK financial institutions already follow due diligence, risk management, and regulatory notification processes that are similar to the processes identified in these Statements, the EBA Outsourcing Guidelines, and FG 16/5. UK financial institutions can use a variety of AWS security and compliance services to help them meet requirements on security, resilience, and assurance.

Risk-based approach

The Statements reference the principle of proportionality throughout. In the case of the outsourcing requirements, this includes a focus on material outsourcing arrangements and incorporating a risk-based approach that expects regulated entities to identify, assess, and mitigate the risks associated with outsourcing arrangements. The recognition of a shared responsibility model, referenced by the PRA, and the recognition in FCA Guidance FG 16/5 that firms need to be clear about where responsibility lies between themselves and their service providers, are consistent with the long-standing AWS shared responsibility model. The proportionality and risk-based approach applies throughout the Statements, including areas such as risk assessment, contractual and audit requirements, data location and transfer, operational resilience, and security implementation:

  • Risk assessment – The Statements emphasize the need for UK financial institutions to assess the potential impact of outsourcing arrangements on their operational risk. The AWS shared responsibility model helps customers formulate their risk assessment approach, because it illustrates how their security and management responsibilities change depending on the services from AWS they use. For example, AWS operates some controls on behalf of customers, such as data center security, while customers operate other controls, such as event logging. In practice, AWS helps customers assess and improve their risk profile relative to traditional, on-premises environments.
     
  • Contractual and audit requirements – The PRA supervisory statement on outsourcing and third-party risk management, the EBA Outsourcing Guidelines, and the FCA guidance FG 16/5 lay out requirements for the written agreement between a UK financial institution and its service provider, including access and audit rights. UK financial institutions running regulated workloads on AWS should contact their AWS account team to address these contractual requirements. We also help institutions that require contractual audit rights to comply with these requirements through the AWS Security & Audit Series, which facilitates customer audits. To align with regulatory requirements and expectations, our audit program incorporates feedback that we’ve received from EU and UK financial supervisory authorities. UK financial services customers interested in learning more about the audit engagements offered by AWS can reach out to their AWS account teams.
     
  • Data location and transfer – The UK Financial Regulators do not place restrictions on where a UK financial institution can store and process its data, but rather state that UK financial institutions should adopt a risk-based approach to data location. AWS continually monitors the evolving regulatory and legislative landscape around data privacy to identify changes and determine what tools our customers might need to help meet their compliance needs. Refer to our Data Protection page for our commitments, including commitments on data access and data storage.
     
  • Operational resilience – Resiliency is a shared responsibility between AWS and the customer. It is important that customers understand how disaster recovery and availability, as part of resiliency, operate under this shared model. AWS is responsible for resiliency of the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure comprises the hardware, software, networking, and facilities that run AWS Cloud services. AWS uses commercially reasonable efforts to make these AWS Cloud services available, ensuring that service availability meets or exceeds the AWS Service Level Agreements (SLAs).

    The customer’s responsibility will be determined by the AWS Cloud services that they select. This determines the amount of configuration work they must perform as part of their resiliency responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) requires the customer to perform all of the necessary resiliency configuration and management tasks. Customers that deploy Amazon EC2 instances are responsible for deploying EC2 instances across multiple locations (such as AWS Availability Zones), implementing self-healing by using services like AWS Auto Scaling, as well as using resilient workload architecture best practices for applications that are installed on the instances.

    For managed services, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, whereas customers access the endpoints to store and retrieve data. Customers are responsible for managing resiliency of their data, including backup, versioning, and replication strategies. For more details about our approach to operational resilience in financial services, refer to this whitepaper.

  • Security implementation – The Statements set expectations on data security, including data classification and data security, and require UK financial institutions to consider, implement, and monitor various security measures. Using AWS can help customers meet these requirements in a scalable and cost-effective way, while helping improve their security posture. Customers can use AWS Config or AWS Security Hub to simplify auditing, security analysis, change management, and operational troubleshooting.

    As part of their cybersecurity measures, customers can activate Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring, to generate detailed and actionable security alerts. Amazon Macie uses machine learning and pattern matching to help customers classify their sensitive and business-critical data in AWS. Amazon Inspector automatically assesses a customer’s AWS resources for vulnerabilities or deviations from best practices and then produces a detailed list of security findings prioritized by level of severity.

    Customers can also enhance their security by using AWS Key Management Service (AWS KMS) (creation and control of encryption keys), AWS Shield (DDoS protection), and AWS WAF (helps protect web applications or APIs against common web exploits). These are just a few of the many services and features we offer that are designed to provide strong availability and security for our customers.
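As one illustration of how these security services are driven through the API, here is a hedged Python sketch that enables Amazon GuardDuty and retrieves high-severity findings for triage. The client is injected so the logic can be exercised without AWS credentials; in practice you would pass `boto3.client("guardduty")`. Treat this as a sketch under those assumptions, not a hardened implementation.

```python
def enable_threat_detection(guardduty) -> str:
    """Enable GuardDuty in the current account and Region; returns the detector ID.
    `guardduty` is a GuardDuty API client, e.g. boto3.client("guardduty")."""
    resp = guardduty.create_detector(Enable=True)
    return resp["DetectorId"]

def high_severity_finding_ids(guardduty, detector_id: str) -> list:
    """List IDs of findings at severity 7 or above (high) for triage."""
    resp = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )
    return resp["FindingIds"]
```

The injected-client style also makes this kind of control easy to unit test with a stub before pointing it at a real account.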

As reflected in these Statements, it’s important to take a balanced approach when evaluating responsibilities in cloud implementation. AWS is responsible for the security of the AWS infrastructure, and for all of our data centers, we assess and manage environmental risks, employ extensive physical and personnel security controls, and guard against outages through our resiliency and testing procedures. In addition, independent third-party auditors evaluate the AWS infrastructure against more than 2,600 standards and requirements throughout the year.

Conclusion

We encourage customers to learn about how these Statements apply to their organization. Our teams of security, compliance, and legal experts continue to work with our UK financial services customers, both large and small, to support their journey to the AWS Cloud. AWS is closely following how the UK regulatory authorities apply the Statements and will provide further updates as needed. If you have any questions about compliance with these Statements and their application to your use of AWS, reach out to your account representative or request to be contacted.

 
Want more AWS Security news? Follow us on Twitter.

Arvind Kannan

Arvind Kannan

Arvind is a Principal Compliance Specialist at Amazon Web Services based in London, United Kingdom. He spends his days working with financial services customers in the UK and across EMEA, helping them address questions around governance, risk and compliance. He has a strong focus on compliance and helping customers navigate the regulatory requirements and understand supervisory expectations.

A sneak peek at the identity and access management sessions for AWS re:Inforce 2022

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-identity-and-access-management-sessions-for-aws-reinforce-2022/

Register now with discount code SALFNj7FaRe to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

AWS re:Inforce 2022 will take place in-person in Boston, MA, on July 26 and 27 and will include some exciting identity and access management sessions. AWS re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

The identity and access management track will showcase how quickly you can get started to securely manage access to your applications and resources as you scale on AWS. You will hear from customers about how they integrate their identity sources and establish a consistent identity and access strategy across their on-premises environments and AWS. Identity experts will discuss best practices for establishing an organization-wide data perimeter and simplifying access management with the right permissions, to the right resources, under the right conditions. You will also hear from AWS leaders about how we’re working to make identity, access control, and resource management simpler every day. This post highlights some of the identity and access management sessions that you can add to your agenda. To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.

Breakout sessions

Lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically conclude with 10–15 minutes of Q&A.

IAM201: Security best practices with AWS IAM
AWS IAM is an essential service that helps you securely control access to your AWS resources. In this session, learn about IAM best practices like working with temporary credentials, applying least-privilege permissions, moving away from IAM users, analyzing access to your resources, validating policies, and more. Leave this session with ideas for how to secure your AWS resources in line with AWS best practices.

IAM301: AWS Identity and Access Management (IAM) the practical way
Building secure applications and workloads on AWS means knowing your way around AWS Identity and Access Management (AWS IAM). This session is geared toward the curious builder who wants to learn practical IAM skills for defending workloads and data, with a technical, first-principles approach. Gain knowledge about what IAM is and a deeper understanding of how it works and why.

IAM302: Strategies for successful identity management at scale with AWS SSO
Enterprise organizations often come to AWS with existing identity foundations. Whether new to AWS or maturing, organizations want to better understand how to centrally manage access across AWS accounts. In this session, learn the patterns many customers use to succeed in deploying and operating AWS Single Sign-On at scale. Get an overview of different deployment strategies, features to integrate with identity providers, application system tags, how permissions are deployed within AWS SSO, and how to scale these functionalities using features like attribute-based access control.

IAM304: Establishing a data perimeter on AWS, featuring Vanguard
Organizations are storing an unprecedented and increasing amount of data on AWS for a range of use cases including data lakes, analytics, machine learning, and enterprise applications. They want to make sure that sensitive non-public data is only accessible to authorized users from known locations. In this session, dive deep into the controls that you can use to create a data perimeter that allows access to your data only from expected networks and by trusted identities. Hear from Vanguard about how they use data perimeter controls in their AWS environment to meet their security control objectives.

IAM305: How Guardian Life validates IAM policies at scale with AWS
Attend this session to learn how Guardian Life shifts IAM security controls left to empower builders to experiment and innovate quickly, while minimizing the security risk exposed by granting over-permissive permissions. Explore how Guardian validates IAM policies in Terraform templates against AWS best practices and Guardian’s security policies using AWS IAM Access Analyzer and custom policy checks. Discover how Guardian integrates this control into CI/CD pipelines and codifies their exception approval process.

IAM306: Managing B2B identity at scale: Lessons from AWS and Trend Micro
Managing identity for B2B multi-tenant solutions requires tenant context to be clearly defined and propagated with each identity. It also requires proper onboarding and automation mechanisms to do this at scale. Join this session to learn about different approaches to managing identities for B2B solutions with Amazon Cognito and learn how Trend Micro is doing this effectively and at scale.

IAM307: Automating short-term credentials on AWS, with Discover Financial Services
As a financial services company, Discover Financial Services considers security paramount. In this session, learn how Discover uses AWS Identity and Access Management (IAM) to help achieve their security and regulatory obligations. Learn how Discover manages their identities and credentials within a multi-account environment and how Discover fully automates key rotation with zero human interaction using a solution built on AWS with IAM, AWS Lambda, Amazon DynamoDB, and Amazon S3.

Builders’ sessions

Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

IAM351: Using AWS SSO and identity services to achieve strong identity management
Organizations often manage human access using IAM users or through federation with external identity providers. In this builders’ session, explore how AWS SSO centralizes identity federation across multiple AWS accounts, replaces IAM users and cross-account roles to improve identity security, and helps administrators more effectively scope least privilege. Additionally, learn how to use AWS SSO to activate time-based access and attribute-based access control.

IAM352: Anomaly detection and security insights with AWS Managed Microsoft AD
This builders’ session demonstrates how to integrate AWS Managed Microsoft AD with native AWS services like Amazon CloudWatch Logs and Amazon CloudWatch metrics and alarms, combined with anomaly detection, to identify potential security issues and provide actionable insights for operational security teams.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

IAM231: Prevent unintended access: AWS IAM Access Analyzer policy validation
In this chalk talk, walk through ways to use AWS IAM Access Analyzer policy validation to review IAM policies that do not follow AWS best practices. Learn about the Access Analyzer APIs that help validate IAM policies and how to use these APIs to prevent IAM policies from reaching your AWS environment through mechanisms like AWS CloudFormation hooks and CI/CD pipeline controls.
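As a sketch of the kind of pipeline control this chalk talk discusses, the function below calls the Access Analyzer ValidatePolicy API and fails the build on ERROR or SECURITY_WARNING findings. The client is injected for testability; in practice it would be `boto3.client("accessanalyzer")`. Parameter and finding-type names follow the Access Analyzer API, but verify them against the current reference.

```python
import json

def policy_findings(analyzer, policy: dict) -> list:
    """Run IAM Access Analyzer policy validation on an identity policy.
    `analyzer` is an Access Analyzer client, e.g. boto3.client("accessanalyzer")."""
    resp = analyzer.validate_policy(
        policyDocument=json.dumps(policy),
        policyType="IDENTITY_POLICY",
    )
    return resp["findings"]

def passes_pipeline_gate(findings: list) -> bool:
    """Fail the CI/CD stage on any ERROR or SECURITY_WARNING finding."""
    return not any(
        f["findingType"] in ("ERROR", "SECURITY_WARNING") for f in findings
    )
```

Wired into a pipeline stage, a False result from `passes_pipeline_gate` would block the deployment before a non-conformant policy reaches your environment.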

IAM232: Navigating the consumer identity first mile using Amazon Cognito
Amazon Cognito allows you to configure sign-in and sign-up experiences for consumers while extending user management capabilities to your customer-facing application. Join this chalk talk to learn about the first steps for integrating your application and getting started with Amazon Cognito. Learn best practices to manage users and how to configure a customized branding UI experience, while creating a fully managed OpenID Connect provider with Amazon Cognito.

IAM331: Best practices for delegating access on AWS
This chalk talk demonstrates how to use built-in capabilities of AWS Identity and Access Management (IAM) to safely allow developers to grant entitlements to their AWS workloads (PassRole/AssumeRole). Additionally, learn how developers can be granted the ability to take self-service IAM actions (CRUD IAM roles and policies) with permissions boundaries.

IAM332: Developing preventive controls with AWS identity services
Learn about how you can develop and apply preventive controls at scale across your organization using service control policies (SCPs). This chalk talk is an extension of the preventive controls within the AWS identity services guide, and it covers how you can meet the security guidelines of your organization by applying and developing SCPs. In addition, it presents strategies for how to effectively apply these controls in your organization, from day-to-day operations to incident response.
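To make the idea concrete, here is one common preventive control expressed as an SCP, shown as a Python dict for readability: deny actions outside a set of approved Regions while exempting global services. The Region list and exempted services are placeholders for your organization’s own choices, not a recommendation.

```python
# Illustrative SCP: deny requests outside approved Regions, while exempting
# global services that are not Region-scoped. Region list is a placeholder.
scp_region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*", "route53:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-2"]
                }
            },
        }
    ],
}
print(scp_region_guardrail["Statement"][0]["Sid"])
```

Attached to an organizational unit, a policy of this shape prevents member accounts from acting in unapproved Regions regardless of their own IAM permissions.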

IAM333: IAM policy evaluation deep dive
In this chalk talk, learn how policy evaluation works in detail and walk through some advanced IAM policy evaluation scenarios. Learn how a request context is evaluated, the pros and cons of different strategies for cross-account access, how to use condition keys for actions that touch multiple resources, when to use principal and aws:PrincipalArn, when it does and doesn’t make sense to use a wildcard principal, and more.

Workshops

Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

IAM271: Applying attribute-based access control using AWS IAM
This workshop provides hands-on experience applying attribute-based access control (ABAC) to achieve a secure and scalable authorization model on AWS. Learn how and when to apply ABAC, which is native to AWS Identity and Access Management (IAM). Also learn how to find resources that could be impacted by different ABAC policies and session tagging techniques to scale your authorization model across Regions and accounts within AWS.

IAM371: Building a data perimeter to allow access to authorized users
In this workshop, learn how to create a data perimeter by building controls that allow access to data only from expected network locations and by trusted identities. The workshop consists of five modules, each designed to illustrate a different AWS Identity and Access Management (IAM) and network control. Learn where and how to implement the appropriate controls based on different risk scenarios. Discover how to implement these controls as service control policies, identity- and resource-based policies, and virtual private cloud endpoint policies.

IAM372: How and when to use different IAM policy types
In this workshop, learn how to identify when to use various policy types for your applications. Work through hands-on labs that take you through a typical customer journey to configure permissions for a sample application. Configure policies for your identities, resources, and CI/CD pipelines using permission delegation to balance security and agility. Also learn how to configure enterprise guardrails using service control policies.

If these sessions look interesting to you, join us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Author

Ilya Epshteyn

Ilya is a Senior Manager of Identity Solutions in AWS Identity. He helps customers to innovate on AWS by building highly secure, available, and scalable architectures. He enjoys spending time outdoors and building Lego creations with his kids.

Marc von Mandel

Marc von Mandel

Marc leads the product marketing strategy and execution for AWS Identity Services. Prior to AWS, Marc led product marketing at IBM Security Services across several categories, including Identity and Access Management (IAM) Services, Network and Infrastructure Security Services, and Cloud Security Services. Marc currently lives in Atlanta, Georgia, and has worked in cybersecurity and the public cloud for more than twelve years.

Introducing a new AWS whitepaper: Does data localization cause more problems than it solves?

Post Syndicated from Jana Kay original https://aws.amazon.com/blogs/security/introducing-a-new-aws-whitepaper-does-data-localization-cause-more-problems-than-it-solves/

Amazon Web Services (AWS) recently released a new whitepaper, Does data localization cause more problems than it solves?, as part of the AWS Innovating Securely briefing series. The whitepaper draws on research from Emily Wu’s paper Sovereignty and Data Localization, published by Harvard University’s Belfer Center, and describes how countries can realize similar data localization objectives through AWS services without incurring the unintended effects highlighted by Wu.

Wu’s research analyzes the intent of data localization policies, and compares that to the reality of the policies’ effects, concluding that data localization policies are often counterproductive to their intended goals of data security, economic competitiveness, and protecting national values.

The new whitepaper explains how you can use the security capabilities of AWS to take advantage of up-to-date technology and help meet your data localization requirements while maintaining full control over the physical location of where your data is stored.

AWS offers robust privacy and security services and features that let you implement your own controls. AWS uses lessons learned around the globe and applies them at the local level for improved cybersecurity against security events. As an AWS customer, after you pick a geographic location to store your data, the cloud infrastructure provides you with greater resiliency and availability than you can achieve by using on-premises infrastructure. When you choose an AWS Region, you maintain full control to determine the physical location of where your data is stored. AWS also provides you with resources through the AWS compliance program to help you understand the robust controls in place at AWS to maintain security and compliance in the cloud.

An important finding of Wu’s research is that localization constraints can deter innovation and hurt local economies because they limit which services are available, or increase costs because there are a smaller number of service providers to choose from. Wu concludes that data localization can “raise the barriers [to entrepreneurs] for market entry, which suppresses entrepreneurial activity and reduces the ability for an economy to compete globally.” Data localization policies are especially challenging for companies that trade across national borders. International trade used to be the remit of only big corporations. Current data-driven efficiencies in shipping and logistics mean that international trade is open to companies of all sizes. There has been particular growth for small and medium enterprises involved in services trade (of which cross-border data flows are a key element). In a 2016 worldwide survey conducted by McKinsey, 86 percent of tech-based startups had at least one cross-border activity. The same report showed that cross-border data flows added some US$2.8 trillion to world GDP in 2014.

However, the availability of cloud services supports secure and efficient cross-border data flows, which in turn can contribute to national economic competitiveness. Deloitte Consulting’s report, The cloud imperative: Asia Pacific’s unmissable opportunity, estimates that by 2024, the cloud will contribute $260 billion to GDP across eight regional markets, with more benefit possible in the future. The World Trade Organization’s World Trade Report 2018 estimates that digital technologies, which includes advanced cloud services, will account for a 34 percent increase in global trade by 2030.

Wu also cites a link between national data governance policies and governments’ concerns that movement of data outside national borders can diminish their control. However, the technology, storage capacity, and compute power provided by hyperscale cloud service providers like AWS can empower local entrepreneurs.

AWS continually updates practices to meet the evolving needs and expectations of both customers and regulators. This allows AWS customers to use effective tools for processing data, which can help them meet stringent local standards to protect national values and citizens’ rights.

Wu’s research concludes that “data localization is proving ineffective” for meeting intended national goals, and offers practical alternatives for policymakers to consider. Wu has several recommendations, such as continuing to invest in cybersecurity, supporting industry-led initiatives to develop shared standards and protocols, and promoting international cooperation around privacy and innovation. Despite the continued existence of data localization policies, countries can currently realize similar objectives through cloud services. AWS implements rigorous contractual, technical, and organizational measures to protect the confidentiality, integrity, and availability of customer data, regardless of which AWS Region you select to store your data. As an AWS customer, this means you can take advantage of the economic benefits and the support for innovation provided by cloud computing, while improving your ability to meet your core security and compliance requirements.

For more information, see the whitepaper Does data localization cause more problems than it solves?, or contact AWS.

If you have feedback about this post, submit comments in the Comments section below.

Author

Jana Kay

Since 2018, Jana Kay has been a cloud security strategist with the AWS Security Growth Strategies team. She develops innovative ways to help AWS customers achieve their objectives, such as security table top exercises and other strategic initiatives. Previously, she was a cyber, counter-terrorism, and Middle East expert for 16 years in the Pentagon’s Office of the Secretary of Defense.

Arturo Cabanas

Arturo joined Amazon in 2017 and is AWS Security Assurance Principal for the Public Sector in Latin America, Canada, and the Caribbean. In this role, Arturo creates programs that help governments move their workloads and regulated data to the cloud by meeting their specific security, data privacy regulation, and compliance requirements.

New – Amazon EC2 R6id Instances with NVMe Local Instance Storage of up to 7.6 TB

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/new-amazon-ec2-r6id-instances/

In November 2021, we launched the memory-optimized Amazon EC2 R6i instances, our sixth-generation x86-based offering powered by 3rd Generation Intel Xeon Scalable processors (code-named Ice Lake).

Today I am excited to announce a disk variant of the R6i instance: the Amazon EC2 R6id instances with non-volatile memory express (NVMe) SSD local instance storage. The R6id instances are designed to power applications that require low storage latency or require temporary swap space.

Customers with workloads that require access to high-speed, low-latency storage, including those that need temporary storage for scratch space, temporary files, and caches, have the option to choose the R6id instances with NVMe local instance storage of up to 7.6 TB. The new instances are also available as bare-metal instances to support workloads that benefit from direct access to physical resources.

Here’s some background on what led to the development of the sixth-generation instances. Our customers who are currently using fifth-generation instances are looking for the following:

  • Higher Compute Performance – Higher CPU performance to improve latency and processing time for their workloads
  • Improved Price Performance – Customers are very sensitive to price performance to optimize costs
  • Larger Sizes – Customers require larger sizes to scale their enterprise databases
  • Higher Amazon EBS Performance – Customers have requested higher Amazon EBS throughput (“at least double”) to improve response times for their analytics applications
  • Local Storage – Large customers have expressed a need for more local storage per vCPU

Sixth-generation instances address these requirements by offering generational improvements across the board, including a 15 percent increase in price performance, 33 percent more vCPUs, up to 1 TB of memory, 2x networking performance, 2x EBS performance, and global availability.

Compared to R5d instances, the R6id instances offer:

  • Larger instance size (.32xlarge) with 128 vCPUs and 1024 GiB of memory, enabling customers to consolidate their workloads and scale up applications.
  • Up to 15 percent improvement in compute price performance and 20 percent higher memory bandwidth.
  • Up to 58 percent higher storage per vCPU and 34 percent lower cost per TB.
  • Up to 50 Gbps network bandwidth and up to 40 Gbps EBS bandwidth; EBS burst bandwidth support for sizes up to .4xlarge.
  • Always-on memory encryption.
  • Support for new Intel Advanced Vector Extensions (AVX 512) instructions such as VAES, VCLMUL, VPCLMULQDQ, and GFNI for faster execution of cryptographic algorithms such as those used in IPSec and TLS implementations.

The detailed specifications of the R6id instances are as follows:

Instance Name | vCPUs | RAM (GiB) | Local NVMe SSD Storage (GB) | EBS Throughput (Gbps) | Network Bandwidth (Gbps)
r6id.large | 2 | 16 | 1 x 118 | Up to 10 | Up to 12.5
r6id.xlarge | 4 | 32 | 1 x 237 | Up to 10 | Up to 12.5
r6id.2xlarge | 8 | 64 | 1 x 474 | Up to 10 | Up to 12.5
r6id.4xlarge | 16 | 128 | 1 x 950 | Up to 10 | Up to 12.5
r6id.8xlarge | 32 | 256 | 1 x 1900 | 10 | 12.5
r6id.12xlarge | 48 | 384 | 2 x 1425 | 15 | 18.75
r6id.16xlarge | 64 | 512 | 2 x 1900 | 20 | 25
r6id.24xlarge | 96 | 768 | 4 x 1425 | 30 | 37.5
r6id.32xlarge | 128 | 1024 | 4 x 1900 | 40 | 50
r6id.metal | 128 | 1024 | 4 x 1900 | 40 | 50

Now available

The R6id instances are available today in the AWS US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland) Regions as On-Demand, Spot, and Reserved Instances or as part of a Savings Plan. As usual with EC2, you pay for what you use. For more information, see the Amazon EC2 pricing page.

To learn more, visit our Amazon EC2 R6i instances page, and please send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Veliswa x

Modernize Your Mainframe Applications & Deploy Them In The Cloud

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/modernize-your-mainframe-applications-deploy-them-in-the-cloud/

Today, we are launching the AWS Mainframe Modernization service to help you modernize your mainframe applications and deploy them to fully managed runtime environments on AWS. The new service also provides tools and resources to help you plan and implement your migration and modernization.

Since the introduction of System/360 on April 7, 1964, mainframe computers have enabled many industries to transform themselves. The mainframe has revolutionized the way people buy things, how people book and purchase travel, and how governments manage taxes or deliver social services. Two-thirds of the Fortune 100 companies have their core businesses located on a mainframe. And according to a 2018 estimate, $3 trillion ($3 x 10^12) in daily commerce flows through mainframes.

Mainframes use their own set of technologies: programming languages such as COBOL, PL/1, and Natural, to name a few, and databases and data files such as VSAM, DB2, IMS DB, or Adabas. They also run “application servers” (or transaction managers, as we call them) such as CICS or IMS TM. Recent IBM mainframes also run applications developed in the Java programming language, deployed on WebSphere Application Server.

Many of our customers running mainframes told us they want to modernize their mainframe-based applications to take advantage of the AWS cloud. They want to increase their agility and their capacity to innovate, gain access to a growing pool of talents with experience running workloads on AWS, and benefit from the continual AWS trend of improving cost/performance ratio.

Application modernization is a journey composed of four phases:

  • First, you assess the situation. Are you ready to migrate? You define the business case and educate the migration team.
  • Second, you mobilize. You kick off the project, identify applications for a proof of concept, and refine your migration plan and business cases.
  • Third, you migrate and modernize. For each application, you run in-depth discovery, decide on the right application architecture and migration journey, replatform or refactor the code base, and test and deploy to production.
  • Last, you operate and optimize. You monitor deployed applications, manage resources, and ensure that security and compliance are up to date.

AWS Mainframe Modernization helps you during each phase of your journey.

Assess and Mobilize
During the assessment and mobilization phase, you have access to analysis and development tools to discover the scope of your application portfolio and to transform source code as needed. Typically, the service helps you discover the assets of your mainframe applications and identify all the data and other dependencies. We provide you with integrated development environments where you can adapt or refactor your source code, depending on whether you are replatforming or refactoring your applications.

Application Automated Refactoring
You may choose to use the automated refactoring pattern, where mainframe application assets are automatically converted into a modern language and ecosystem. With automated refactoring, AWS Mainframe Modernization uses Blu Age tools to convert your COBOL, PL/1, or JCL code to Java services and scripts. It generates modern code, data access, and data format by implementing patterns and rules to transform screens, indexed files, and batch applications to a modern application stack.

AWS Mainframe Modernization Refactoring

Application Replatforming
You may also choose to replatform your applications, meaning move them to AWS with minimal changes to the source code. When replatforming, the fully-managed runtime comes preinstalled with the Micro Focus mainframe-compatible components, such as transaction managers, data mapping tools, screen and maps readers, and batch execution environments, allowing you to run your application with minimum changes.

AWS Mainframe Modernization Replatforming

This blog post can help you learn more about the nuances between replatforming and refactoring.

DevOps For Your Mainframe Applications
The AWS Mainframe Modernization service provides you with AWS CloudFormation templates to easily create continuous integration and continuous deployment (CI/CD) pipelines. It also deploys and configures monitoring services for the managed runtime. This allows you to maintain and continue to evolve your applications once migrated, using best practices from Agile and DevOps methodologies.

Managed Services
AWS Mainframe Modernization takes care of the undifferentiated heavy lifting and provides you with fully managed runtime environments based on 15 years of cloud architecture best practices in terms of security, high availability, scalability, system management, and using infrastructure as code. These are all important for the business-critical applications running on mainframes.

The analysis tools, development tools, and the replatforming or refactoring runtimes come preinstalled and ready to use. But there is much more than preinstalled environments. The service deploys and manages the whole infrastructure for you. It deploys the required network and load balancer, and configures log collection with Amazon CloudWatch, among other resources. It manages application versioning, deployments, and high availability dependencies. This saves you days of designing, testing, automating, and deploying your own infrastructure.

The fully managed runtime includes extensive automation and managed infrastructure resources that you can operate via the AWS console, the AWS Command Line Interface (CLI), and application programming interfaces (APIs). This removes the burden and undifferentiated heavy lifting of managing a complex infrastructure. It allows you to spend time and focus on innovating and building new capabilities.

Let’s Deploy an App
As usual, I like to show you how it works. I am using a demo banking application. The application has been replatformed and is available as two .zip files: the first one contains the application binaries, and the second one contains the data files. I uploaded the contents of these .zip files to an Amazon Simple Storage Service (Amazon S3) bucket. As part of the prerequisites, I also created an Amazon Aurora PostgreSQL database, stored its username and password in AWS Secrets Manager, and created an encryption key in AWS Key Management Service (AWS KMS).

Sample Banking Application files

Create an Environment
Let’s deploy and run the BankDemo sample application in an AWS Mainframe Modernization managed runtime environment with the Micro Focus runtime engine. For brevity, I highlight only the main steps. The full tutorial is available as part of the service documentation.

I open the AWS Management Console and navigate to AWS Mainframe Modernization. I navigate to Environments and select Create environment.

I give the environment a name and select the Micro Focus runtime, since we are deploying a replatformed application. Then I select Next.

In the Specify Configurations section, I leave all the default values: a Standalone runtime environment, the M2.m5.large EC2 instance type, and the default VPC and subnets. Then I select Next.

In the Attach Storage section, I mount an EFS endpoint as /m2/mount/demo. Then I select Next.

In the Review and create section, I review my configuration and select Create environment. After a while, the environment status switches to Available.

Create an Application
Now that I have an environment, let’s deploy the sample banking application on it. I select the Applications section and select Create application.

I give my application a name, and under Engine type, I select Micro Focus.

In the Specify resources and configurations section, I enter a JSON definition of my application. The JSON tells the runtime environment where my application’s various files are located and how to access Secrets Manager. You can find a sample JSON file in the tutorial section of the documentation.

In the last section, Review and create, I review the configuration and select Create application. After a moment, the application becomes available.

Once the application is available, I deploy it to the environment. I select the AWSNewsBlog-SampleBanking app, then I select the Actions dropdown menu, and I select Deploy application.

After a while, the application status changes to Ready.

Import Data sets
The last step before starting the application is to import its data sets. In the navigation pane, I select Applications, then choose AWSNewsBlog-SampleBank. I then select the Data sets tab and select Import. I may either specify the data set configuration values individually using the console or provide the location of an S3 bucket that contains a data set configuration JSON file.

I use the JSON file provided in the tutorial section of the documentation. Before uploading the JSON file to S3, I replace the $S3_DATASET_PREFIX variable with the actual value of my S3 bucket and prefix. For this example, I use awsnewsblog-samplebank/catalog.
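The substitution itself is a plain string replacement. A quick sketch (the one-line template below is a hypothetical, trimmed excerpt; use the actual JSON file from the tutorial):

```python
# Replace the $S3_DATASET_PREFIX placeholder in the tutorial's data set
# configuration before uploading it to S3. The template string below is a
# hypothetical excerpt of that file, trimmed for illustration.
template = '{"location": "s3://$S3_DATASET_PREFIX/BANK.DATA"}'

s3_dataset_prefix = "awsnewsblog-samplebank/catalog"
resolved = template.replace("$S3_DATASET_PREFIX", s3_dataset_prefix)

print(resolved)
# The resolved location now points at the real bucket and prefix.
```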

After a while, the data set status changes to Completed.

My application and its data set are now deployed into the cloud.

Start the Application
The last step is to start the application. I navigate to the Applications section. I then select AWSNewsBlog-SampleBank. In the Actions dropdown menu, I select Start application. After a moment, the application status changes to Running.

Access the Application
To access the application, I need a 3270 terminal emulator. Depending on your platform, a couple of options are available. I choose to use a web-based TN3270 client provided by Micro Focus and available on AWS Marketplace. I configure the terminal emulator to point to the AWS Mainframe Modernization environment endpoint, using port 6000.

TN3270 Configuration

Once the session starts, I receive the CICS welcome prompt. I type BANK and press ENTER to start the app. I authenticate with user BA0001 and password A. The main application menu is displayed. I select the first option of the menu and press ENTER.

TN3270 SampleBank demo

Congrats, your replatformed application has been deployed in the cloud and is available through a standard IBM 3270 terminal emulator.

Pricing and Availability
The AWS Mainframe Modernization service is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Ireland), and South America (São Paulo).

You only pay for what you use. There are no upfront costs. Third-party license costs are included in the hourly price. Runtime environments for refactored applications, based on Blu Age, start at $2.50/hour. Runtime environments for replatformed applications, based on Micro Focus, start at $5.55/hour. This includes the software licenses (Blu Age or Micro Focus). As usual, AWS Support plans are available. They also cover Blu Age and Micro Focus software.

Committed plans are available for pricing discounts. The pricing details are available on the service pricing page.

And now, go build 😉

— seb

AWS HITRUST Shared Responsibility Matrix version 1.2 now available

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/aws-hitrust-shared-responsibility-matrix-version-1-2-now-available/

The latest version of the AWS HITRUST Shared Responsibility Matrix is now available to download. Version 1.2 is based on HITRUST MyCSF version 9.4[r2] and was released by HITRUST on April 20, 2022.

AWS worked with HITRUST to update the Shared Responsibility Matrix and to add new controls based on MyCSF v9.4[r2]. You don’t have to assess these additional controls yourself, because AWS already completed its HITRUST assessment using version 9.4 in 2021. You can deploy your environments on AWS and inherit our HITRUST Common Security Framework (CSF) certification, provided that you use only in-scope services and apply the controls detailed on the HITRUST website.

What this means for our customers

The new AWS HITRUST Shared Responsibility Matrix has been tailored to reflect both the Cross Version ID (CVID) and Baseline Unique ID (BUID) in HITRUST so that you can select the correct control for inheritance even if you’re still using an older version of HITRUST MyCSF for your own assessment.

With the new version, you can also inherit some additional controls based on MyCSF v9.4[r2].

At AWS, we’re committed to helping you achieve and maintain the highest standards of security and compliance. We value your feedback and questions. You can contact the AWS HITRUST team at AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security ‘how-to’ content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya

Sonali leads multiple AWS global compliance programs, including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications such as CISSP, C-GDPR|P, CCSK, CEH, CISA, PCIP, ISO 27001, and ISO 22301 Lead Auditor.

AWS achieves ISO 22301:2019 certification

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/aws-achieves-iso-223012019-certification/

We’re excited to announce that Amazon Web Services (AWS) has successfully achieved ISO 22301:2019 certification without audit findings. The certification is the result of a rigorous, independent third-party assessment against ISO 22301:2019, the international standard for Business Continuity Management (BCM). Published by the International Organization for Standardization (ISO), ISO 22301:2019 is designed to help organizations prevent, prepare for, respond to, and recover from unexpected and disruptive events.

EY CertifyPoint, an independent third-party auditor, issued the certificate on June 2, 2022. The covered AWS Regions are included on the ISO 22301:2019 certificate, and the full list of AWS services in scope for ISO 22301:2019 is available on our ISO and CSA STAR Certified webpage. You can view and download the AWS ISO 22301:2019 certificate on demand online and in the AWS Management Console through AWS Artifact.

As always, we value your feedback and questions and are committed to helping you achieve and maintain the highest standard of security and compliance. Feel free to contact our team through AWS Compliance Contact Us. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sonali Vaidya

Sonali leads multiple AWS global compliance programs, including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications, such as CISSP, C-GDPR|P, CCSK, CEH, CISA, PCIP, and Lead Auditor for ISO 27001 and ISO 22301.

AWS Week In Review – June 6, 2022

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-6-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

I’ve just come back from a long (extended) holiday weekend here in the US and I’m still catching up on all the AWS launches that happened this past week. I’m particularly excited about some of the data, machine learning, and quantum computing news. Let’s have a look!

Last Week’s Launches
The launches that caught my attention last week are the following:

Amazon EMR Serverless is now generally available – Amazon EMR Serverless allows you to run big data applications using open-source frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters. The new serverless deployment option for Amazon EMR automatically scales resources up and down to provide just the right amount of capacity for your application, and you only pay for what you use. To learn more, check out Channy’s blog post and listen to The Official AWS Podcast episode on EMR Serverless.

AWS PrivateLink is now supported by additional AWS services – AWS PrivateLink provides private connectivity between your virtual private cloud (VPC), AWS services, and your on-premises networks without exposing your traffic to the public internet. The following AWS services just added support for PrivateLink:

  • Amazon S3 on Outposts has added support for PrivateLink to perform management operations on your S3 storage by using private IP addresses in your VPC. This eliminates the need to use public IPs or proxy servers. Read the June 1 What’s New post for more information.
  • AWS Panorama now supports PrivateLink, allowing you to access AWS Panorama from your VPC without using public endpoints. AWS Panorama is a machine learning appliance and software development kit (SDK) that allows you to add computer vision (CV) to your on-premises cameras. Read the June 2 What’s New post for more information.
  • AWS Backup has added PrivateLink support for VMware workloads, providing direct access to AWS Backup from your VMware environment via a private endpoint within your VPC. Read the June 3 What’s New post for more information.

Amazon SageMaker JumpStart now supports incremental model training and automatic tuning – Besides ready-to-deploy solution templates for common machine learning (ML) use cases, SageMaker JumpStart also provides access to more than 300 pre-trained, open-source ML models. You can now incrementally train all the JumpStart models with new data without training from scratch. Through this fine-tuning process, you can shorten the training time to reach a better model. SageMaker JumpStart now also supports model tuning with SageMaker Automatic Model Tuning from its pre-trained model, solution templates, and example notebooks. Automatic tuning allows you to automatically search for the best hyperparameter configuration for your model.

Amazon Transcribe now supports automatic language identification for multilingual audio – Amazon Transcribe converts audio input into text using automatic speech recognition (ASR) technology. If your audio recording contains more than one language, you can now enable multi-language identification, which identifies all languages spoken in the audio file and creates a transcript using each identified language. Automatic language identification for multilingual audio is supported for all 37 languages that are currently supported for batch transcriptions. Read the What’s New post from Amazon Transcribe to learn more.
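For batch jobs, multi-language identification is switched on with a single request flag. A hedged sketch of the request parameters (the job name, media URI, and candidate languages are placeholders; the boto3 call is commented out because it requires AWS credentials):

```python
# Request parameters for a batch transcription job with multi-language
# identification enabled. The job name, S3 media URI, and the candidate
# languages in LanguageOptions are illustrative placeholders.
params = {
    "TranscriptionJobName": "example-multilingual-job",
    "Media": {"MediaFileUri": "s3://example-bucket/meeting.mp3"},
    "IdentifyMultipleLanguages": True,
    # Optionally narrow the set of candidate languages to speed up and
    # improve identification:
    "LanguageOptions": ["en-US", "es-US"],
}

# With credentials configured, the job would be started like this:
# import boto3
# boto3.client("transcribe").start_transcription_job(**params)
```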

Amazon Braket adds support for Borealis, the first publicly accessible quantum computer that is claimed to offer quantum advantage – If you are interested in quantum computing, you’ve likely heard the term “quantum advantage.” It refers to the technical milestone when a quantum computer outperforms the world’s fastest supercomputers on a well-defined task. Until now, none of the devices claimed to demonstrate quantum advantage have been accessible to the public. The Borealis device, a new photonic quantum processing unit (QPU) from Xanadu, is the first publicly available quantum computer that is claimed to have achieved quantum advantage. Amazon Braket, the quantum computing service from AWS, has just added support for Borealis. To learn more about how you can test a quantum advantage claim for yourself now on Amazon Braket, check out the What’s New post covering the addition of Borealis support.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

New AWS Heroes – A warm welcome to our newest AWS Heroes! The AWS Heroes program is a worldwide initiative that acknowledges individuals who have truly gone above and beyond to share knowledge in technical communities. Get to know them in the June 2022 introduction blog post!

AWS open-source news and updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #115 here.

Upcoming AWS Events
Join me in Las Vegas for Amazon re:MARS 2022. The conference takes place June 21–24 and is all about the latest innovations in machine learning, automation, robotics, and space. I will deliver a talk on how machine learning can help to improve disaster response. Say “Hi!” if you happen to be around and see me.

We also have more AWS Summits coming up over the next couple of months, both in-person and virtual, across Europe, North America, and South America.

Find an AWS Summit near you, and get notified when registration opens in your area.

You can now register for IMAGINE 2022 (August 3, Seattle). The IMAGINE 2022 conference is a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud.

Sign up for the SQL Server Database Modernization webinar on June 21 to learn how to modernize and cost-optimize Microsoft SQL Server on AWS.

That’s all for this week. Check back next Monday for another Week in Review!

— Antje

A sneak peek at the data protection and privacy sessions for AWS re:Inforce 2022

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-and-privacy-sessions-for-reinforce-2022/

Register now with discount code SALUZwmdkJJ to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we want to tell you about some of the engaging data protection and privacy sessions planned for AWS re:Inforce. AWS re:Inforce is a learning conference focused on security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote from AWS Security leaders, and more. AWS re:Inforce 2022 will take place in person in Boston, MA on July 26 and 27. re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

This post highlights some of the data protection and privacy offerings that you can sign up for, including breakout sessions, chalk talks, builders’ sessions, and workshops. For the full catalog of all tracks, see the AWS re:Inforce session preview.

Breakout sessions

Lecture-style presentations that cover topics at all levels, delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

DPP101: Building privacy compliance on AWS
In this session, learn where technology meets governance with an emphasis on building. With the privacy regulation landscape continuously changing, organizations need innovative technical solutions to help solve privacy compliance challenges. This session covers three unique customer use cases and explores privacy management, technology maturity, and how AWS services can address specific concerns. The studies presented help identify where you are in the privacy journey, provide actions you can take, and illustrate ways you can work towards privacy compliance optimization on AWS.

DPP201: Meta’s secure-by-design approach to supporting AWS applications
Meta manages a globally distributed data center infrastructure with a growing number of AWS Cloud applications. With all applications, Meta starts by understanding data security and privacy requirements alongside application use cases. This session covers the secure-by-design approach for AWS applications that helps Meta put automated safeguards before deploying applications. Learn how Meta handles account lifecycle management through provisioning, maintaining, and closing accounts. The session also details Meta’s global monitoring and alerting systems that use AWS technologies such as Amazon GuardDuty, AWS Config, and Amazon Macie to provide monitoring, access-anomaly detection, and vulnerable-configuration detection.

DPP202: Uplifting AWS service API data protection to TLS 1.2+
AWS is constantly raising the bar to ensure customers use the most modern Transport Layer Security (TLS) encryption protocols, which meet regulatory and security standards. In this session, learn how AWS can help you easily identify if you have any applications using older TLS versions. Hear tips and best practices for using AWS CloudTrail Lake to detect the use of outdated TLS protocols, and learn how to update your applications to use only modern versions. Get guidance, including a demo, on building metrics and alarms to help monitor TLS use.
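CloudTrail management events record the TLS version used for each API call in a `tlsDetails.tlsVersion` field, which is what makes this kind of detection possible. As a minimal, hedged sketch of the idea (the field name follows the CloudTrail event schema, but the sample records below are fabricated for illustration), you could scan exported events for callers still using pre-1.2 TLS:

```python
# Flag CloudTrail events made over a TLS version older than 1.2.
# The tlsDetails.tlsVersion field appears in CloudTrail management events;
# the sample records below are fabricated for illustration only.
OUTDATED = {"TLSv1", "TLSv1.1"}

def find_outdated_tls(events):
    """Return (eventSource, tlsVersion) pairs for calls made over old TLS."""
    hits = []
    for e in events:
        version = e.get("tlsDetails", {}).get("tlsVersion")
        if version in OUTDATED:
            hits.append((e.get("eventSource"), version))
    return hits

sample = [
    {"eventSource": "s3.amazonaws.com", "tlsDetails": {"tlsVersion": "TLSv1.1"}},
    {"eventSource": "kms.amazonaws.com", "tlsDetails": {"tlsVersion": "TLSv1.2"}},
    {"eventSource": "sts.amazonaws.com"},  # no tlsDetails recorded
]
print(find_outdated_tls(sample))  # [('s3.amazonaws.com', 'TLSv1.1')]
```

In practice you would run an equivalent query in AWS CloudTrail Lake rather than post-processing exported events, as the session describes.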

DPP203: Secure code and data in use with AWS confidential compute capabilities
At AWS, confidential computing is defined as the use of specialized hardware and associated firmware to protect in-use customer code and data from unauthorized access. In this session, dive into the hardware- and software-based solutions AWS delivers to provide a secure environment for customer organizations. With confidential compute capabilities such as the AWS Nitro System, AWS Nitro Enclaves, and NitroTPM, AWS offers protection for customer code and sensitive data such as personally identifiable information, intellectual property, and financial and healthcare data. Securing data allows for use cases such as multi-party computation, blockchain, machine learning, cryptocurrency, secure wallet applications, and banking transactions.

Builders’ sessions

Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

DPP251: Disaster recovery and resiliency for AWS data protection services
Mitigating unknown risks means planning for any situation. To help achieve this, you must architect for resiliency. Disaster recovery (DR) is an important part of your resiliency strategy and concerns how your workload responds when a disaster strikes. To this end, many organizations are adopting architectures that function across multiple AWS Regions as a DR strategy. In this builders’ session, learn how to implement resiliency with AWS data protection services. Attend this session to gain hands-on experience with the implementation of multi-Region architectures for critical AWS security services.

DPP351: Implement advanced access control mechanisms using AWS KMS
Join this builders’ session to learn how to implement access control mechanisms in AWS Key Management Service (AWS KMS) and enforce fine-grained permissions on sensitive data and resources at scale. Define AWS KMS key policies, use attribute-based access control (ABAC), and discover advanced techniques such as grants and encryption context to solve challenges in real-world use cases. This builders’ session is aimed at security engineers, security architects, and anyone responsible for implementing security controls such as segregating duties between encryption key owners, users, and AWS services or delegating access to different principals using different policies.
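As a minimal sketch of the ABAC pattern this session covers (the `team` tag key, its value, and the account ID are hypothetical placeholders, not values from the session), a KMS key policy statement can restrict key use to principals carrying a matching tag:

```python
import json

# Hypothetical ABAC statement for an AWS KMS key policy: allow encrypt/decrypt
# only for principals in the account that carry the tag team=data-platform.
# The tag key, tag value, and account ID are placeholders.
abac_statement = {
    "Sid": "AllowUseByTaggedPrincipals",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {"aws:PrincipalTag/team": "data-platform"}
    },
}

policy = {"Version": "2012-10-17", "Statement": [abac_statement]}
print(json.dumps(policy, indent=2))
```

Condition keys such as `aws:PrincipalTag` let the key policy express the segregation-of-duties controls the session describes without enumerating individual principals.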

DPP352: TLS offload and containerized applications with AWS CloudHSM
With AWS CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. This builders’ session covers two common scenarios for CloudHSM: TLS offload using NGINX and the OpenSSL Dynamic Engine, and a containerized application that uses PKCS#11 to perform cryptographic operations. Learn about scaling containerized applications, discover how metrics and logging can help you improve the observability of your CloudHSM-based applications, and review audit records that you can use to assess compliance requirements.

DPP353: How to implement hybrid public key infrastructure (PKI) on AWS
As organizations migrate workloads to AWS, they may be running a combination of on-premises and cloud infrastructure. When certificates are issued to this infrastructure, having a common root of trust for the certificate hierarchy allows for consistency and interoperability of the public key infrastructure (PKI) solution. In this builders’ session, learn how to deploy a PKI that allows such capabilities in a hybrid environment. This solution uses Windows Certificate Authority (CA) and ACM Private CA to distribute and manage X.509 certificates for Active Directory users, domain controllers, network components, mobile, and AWS services, including Amazon API Gateway, Amazon CloudFront, and Elastic Load Balancing.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

DPP231: Protecting healthcare data on AWS
Achieving strong privacy protection through technology is key to protecting patients. Privacy protection is fundamental for healthcare compliance and is an ongoing process that demands that legal, regulatory, and professional standards are continually met. In this chalk talk, learn about data protection, privacy, and how AWS maintains a standards-based risk management program so that the HIPAA-eligible services can specifically support HIPAA administrative, technical, and physical safeguards. Also consider how organizations can use these services to protect healthcare data on AWS in accordance with the shared responsibility model.

DPP232: Protecting business-critical data with AWS migration and storage services
Business-critical applications that were once considered too sensitive to move off premises are now moving to the cloud with an extension of the security perimeter. Join this chalk talk to learn about securely shifting these mature applications to cloud services with the AWS Transfer Family and helping to secure data in Amazon Elastic File System (Amazon EFS), Amazon FSx, and Amazon Elastic Block Store (Amazon EBS). Also learn about tools for ongoing protection as part of the shared responsibility model.

DPP331: Best practices for cutting AWS KMS costs using Amazon S3 bucket keys
Learn how AWS customers are using Amazon S3 bucket keys to cut their AWS Key Management Service (AWS KMS) request costs by up to 99 percent. In this chalk talk, hear about the best practices for exploring your AWS KMS costs, identifying suitable buckets to enable bucket keys, and providing mechanisms to apply bucket key benefits to existing objects.
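Bucket Keys are enabled through a bucket's server-side encryption configuration. As a hedged sketch (the KMS key ARN below is a placeholder), this is the shape of the configuration you would pass to `aws s3api put-bucket-encryption` or `boto3`'s `put_bucket_encryption`:

```python
import json

# Server-side encryption configuration that enables S3 Bucket Keys for
# SSE-KMS, which reduces the number of requests Amazon S3 makes to AWS KMS.
# The KMS key ARN is a placeholder.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
            },
            "BucketKeyEnabled": True,
        }
    ]
}

# With boto3 this would be supplied as:
#   s3.put_bucket_encryption(Bucket="my-bucket",
#                            ServerSideEncryptionConfiguration=encryption_config)
print(json.dumps(encryption_config, indent=2))
```

Note that enabling a Bucket Key affects newly written objects; applying the benefit to existing objects (for example, by re-copying them) is part of what the chalk talk covers.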

DPP332: How to securely enable third-party access
In this chalk talk, learn about ways you can securely enable third-party access to your AWS account. Learn why you should consider using services such as Amazon GuardDuty, AWS Security Hub, AWS Config, and others to improve auditing, alerting, and access control mechanisms. Hardening an account before permitting external access can help reduce security risk and improve the governance of your resources.

Workshops

Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

DPP271: Isolating and processing sensitive data with AWS Nitro Enclaves
Join this hands-on workshop to learn how to isolate highly sensitive data from your own users, applications, and third-party libraries on your Amazon EC2 instances using AWS Nitro Enclaves. Explore Nitro Enclaves, discuss common use cases, and build and run an enclave. This workshop covers enclave isolation, cryptographic attestation, enclave image files, building a local vsock communication channel, debugging common scenarios, and the enclave lifecycle.

DPP272: Data discovery and classification with Amazon Macie
This workshop familiarizes you with Amazon Macie and how to scan and classify data in your Amazon S3 buckets. Work with Macie (data classification) and AWS Security Hub (centralized security view) to view and understand how data in your environment is stored and to understand any changes in Amazon S3 bucket policies that may negatively affect your security posture. Learn how to create a custom data identifier, plus how to create and scope data discovery and classification jobs in Macie.

DPP273: Architecting for privacy on AWS
In this workshop, follow a regulatory-agnostic approach to build and configure privacy-preserving architectural patterns on AWS including user consent management, data minimization, and cross-border data flows. Explore various services and tools for preserving privacy and protecting data.

DPP371: Building and operating a certificate authority on AWS
In this workshop, learn how to securely set up a complete CA hierarchy using AWS Certificate Manager Private Certificate Authority and create certificates for various use cases. These use cases include internal applications that terminate TLS, code signing, document signing, IoT device authentication, and email authenticity verification. The workshop covers job functions such as CA administrators, application developers, and security administrators and shows you how these personas can follow the principle of least privilege to perform various functions associated with certificate management. Also learn how to monitor your public key infrastructure using AWS Security Hub.

If any of these sessions look interesting to you, consider joining us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Author

Marta Taggart

Marta is a Seattle native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

Katie Collins

Katie is a Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

AWS CSA Consensus Assessment Initiative Questionnaire version 4 now available

Post Syndicated from Sonali Vaidya original https://aws.amazon.com/blogs/security/aws-csa-consensus-assessment-initiative-questionnaire-version-4-now-available/

Amazon Web Services (AWS) has published an updated version of the AWS Cloud Security Alliance (CSA) Consensus Assessment Initiative Questionnaire (CAIQ). The questionnaire has been completed using the current CSA CAIQ standard, v4.0.2 (06.07.2021 update), and is now available for download.

The CSA is a not-for-profit organization dedicated to “defining and raising awareness of best practices to help ensure a secure cloud computing environment.” For more information, see the Cloud Security Alliance website. A wide range of industry security practitioners, corporations, and associations participate in CSA.

What is CSA CAIQ and how can you use it?

The CSA Consensus Assessments Initiative Questionnaire provides a set of questions that CSA anticipates a cloud consumer or a cloud auditor would ask of a cloud provider. The AWS CSA CAIQ provides the AWS control implementation descriptions for a series of cloud-specific security questions based on the Cloud Controls Matrix (CCM). The AWS CSA CAIQ also reflects the AWS customer responsibilities according to the shared responsibility model, which can help customers comply with the CSA CCM.

At AWS, we’re committed to helping you achieve and maintain the highest standards of security and compliance. We value your feedback and questions. You can contact the AWS HITRUST team at AWS Compliance Contact Us.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Sonali Vaidya

Sonali leads multiple AWS global compliance programs, including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications such as CISSP, C-GDPR|P, CCSK, CEH, CISA, PCIP, ISO 27001, and ISO 22301 Lead Auditor.

Introducing the newest AWS Heroes – June 2022

Post Syndicated from Ross Barich original https://aws.amazon.com/blogs/aws/introducing-the-newest-aws-heroes-june-2022/

AWS Heroes are some of the world’s most active and vocal leaders in AWS communities, recognized for their unwavering focus on sharing insights and technical knowledge with others. Heroes contribute to community learning in a variety of ways: they host events, meetups, and workshops, author blogs, contribute to open source projects, speak at conferences, and more. You can view some of their prominent content in the AWS Heroes Content Library.

Today we are thrilled to introduce to the world the latest cohort of AWS Heroes:

Adam Bien – Munich, Germany

DevTools Hero Adam Bien is an independent Architect, Consultant, Developer, Trainer, conference speaker, and podcaster. Adam has worked with Java since JDK 1.0 and still enjoys writing serverless Java, often in Amazon Corretto. He also codes live on YouTube. Adam uses CDK in greenfield serverless Java applications, as well as to help his clients migrate their on-premises Java applications to the AWS Cloud. He likes to apply Java’s pragmatic patterns and best practices to serverless runtimes, especially AWS Lambda and AWS Fargate. High productivity, reduction of complexity, and cost effectiveness are his main focuses.

Adam Elmore – Nixa, USA

DevTools Hero Adam Elmore is an independent cloud consultant who helps startups build products on AWS. He’s also the host of AWS FM, a podcast with guests from around the AWS community, and the creator of the AWS Community on Twitter. Adam is passionate about open source and has made a handful of contributions to the AWS CDK over the years. In 2020 he created Ness, an open source CLI tool for deploying web sites and apps to AWS. Previously, Adam co-founded StatMuse—a Disney-backed startup building technology that answers sports questions—and served as CTO for five years.

Brooke Jamieson – Brisbane, Australia

Machine Learning Hero Brooke Jamieson is the Head of Enablement – AI/ML and Data at Blackbook.ai, and is an international conference speaker. Brooke specializes in researching and developing technically robust solutions that help “non-data people” harness the power of Artificial Intelligence and Machine Learning for their industry. Outside of their ‘day job’, Brooke is a dedicated member of the AWS Community, a regular speaker at local user groups and global events, and a guest lecturer at multiple Australian universities. They also make entry-level cloud career and technical content on TikTok, to reach broad audiences and diverse groups wanting to transition to careers in AI/ML and Cloud. Brooke is an Advisory Board member of Women in Digital, and strives to promote STEM pathways to young people in regional Australia and members of the LGBTIQA+ community.

Chao Cai – Beijing, China

Community Hero Chao Cai has 15 years of world-class experience in software development, including more than 10 years as a software architect. He is currently the VP and Chief Architect at Mobvista Inc. Chao is passionate about sharing his knowledge and experience with the community. His WeChat public account has more than 4,000 followers, and over 34,000 engineers have taken his online courses. Chao is a respected leader in the China tech community and is invited to speak each year at global tech conferences such as QCon and ArchSummit. As an active advocate for AWS, Chao is also a regular speaker at AWS tech events.

Cyril Bandolo – Douala, Cameroon

Machine Learning Hero Cyril Bandolo is a data scientist working as a Senior Manager, Data Analytics at Yoomee Mobile. Cyril has a natural talent and passion for teaching and transferring knowledge in his machine learning blog, where he focuses on building and deploying end-to-end machine learning projects on AWS. On his YouTube channel, he recently launched a weekly live hands-on series called “SageMaker Saturdays,” in which he walks viewers through end-to-end machine learning projects with SageMaker Studio Lab and SageMaker Studio. Cyril is always looking to apply new machine learning solutions to make lives better and help move the bottom line.

Kristi Perreault – Denver, USA

Serverless Hero Kristi Perreault is a Principal Software Engineer at Liberty Mutual Insurance, where her focus is serverless-first development and enterprise enablement. She holds an M.S. in Electrical & Computer Engineering specializing in cloud computing and IoT, and is very passionate about promoting women in technology. She organizes the Serverless Denver user group as part of ServerlessDays, co-organizes CDK Day, and writes extensively about serverless and diversity on her dev.to and Medium blog sites. You’ll find her speaking about embracing and scaling serverless-first initiatives on dozens of podcasts, webinars, conferences, and meetups, both virtually and on stage.

Sanchit Jain – Mumbai, India

Community Hero Sanchit Jain is the AWS Analytics Practice Lead and a certified expert specializing in AWS Cloud at Quantiphi Inc. He is also the AWS User Group Mumbai Lead and actively contributes to the AWS community by delivering sessions at AWS User Groups, AWS Community Days, and various educational institutes. He also shares his knowledge by publishing blogs about AWS services, architectures, and best practices. Recently, Sanchit hosted an AWS Solution Architect Certification Bootcamp, which spanned over two months, with 7500+ viewers. He also delivered a session recently at AWS Summit India 2022 on Building a data lake with AWS Lake Formation.

Shigeru Oda – Saitama, Japan

Community Hero Shigeru Oda is an expert system engineer at NSD CO., LTD. Since 2020 he has run 25 events with the JAWS-UG Beginners Chapter (about 4,200 registered members). In September 2020 he promoted the 24-hour online event JAWS SONIC 2020 & MIDNIGHT JAWS 2020, which was attended by about 1,500 people. In March 2021, he promoted JAWS DAYS 2021 as a steering member, which was attended by about 4,000 people. And in November 2021, he promoted JAWS PANKRATION 2021, a second 24-hour online event, providing 900 AWS users in Japan and around the world with an opportunity to learn across the language barrier between English and Japanese through simultaneous interpretation. He received the AWS Samurai 2021 Award for these activities.

Yasunori Kirimoto – Sapporo, Japan

DevTools Hero Yasunori Kirimoto is currently the Co-Founder and CTO of MIERUNE Inc. and owner of dayjournal. He specializes in the field of GIS (Geographic Information System) and FOSS4G (Free Open Source Software for GeoSpatial). Yasunori’s work includes contributions to the Amazon Location Service samples on GitHub and open source projects such as AWS Amplify and AWS CDK. He has also published numerous blog posts on AWS CloudFormation and other AWS architectures. When he’s not acting as a bridge between AWS and the location-based information field, he engages with the open-source community and enjoys participating in venture projects to gain a broader understanding of technology.

If you’d like to learn more about the new Heroes, or connect with a Hero near you, please visit the AWS Heroes website or browse the AWS Heroes Content Library.

Ross

Join me in Boston this July for AWS re:Inforce 2022

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/join-me-in-boston-this-july-for-aws-reinforce-2022/

I’d like to personally invite you to attend the Amazon Web Services (AWS) security conference, AWS re:Inforce 2022, in Boston, MA on July 26–27. This event offers interactive educational content to address your security, compliance, privacy, and identity management needs. Join security experts, customers, leaders, and partners from around the world who are committed to the highest security standards, and learn how to improve your security posture.

As the new Chief Information Security Officer of AWS, my primary job is to help our customers navigate their security journey while keeping the AWS environment safe. AWS re:Inforce offers an opportunity for you to understand how to keep pace with innovation in your business while you stay secure. With recent headlines around security and data privacy, this is your chance to learn the tactical and strategic lessons that will help keep your systems and tools secure, while you build a culture of security in your organization.

AWS re:Inforce 2022 will kick off with my keynote on Tuesday, July 26. I’ll be joined by Steve Schmidt, now the Chief Security Officer (CSO) of Amazon, and Kurt Kufeld, VP of AWS Platform. You’ll hear us talk about the latest innovations in cloud security from AWS and learn what you can do to foster a culture of security in your business. Take a look at the most recent re:Invent presentation, Continuous security improvement: Strategies and tactics, and the latest re:Inforce keynote for examples of the type of content to expect.

For those who are just getting started on AWS, as well as our more tenured customers, AWS re:Inforce offers an opportunity to learn how to prioritize your security investments. By using the Security pillar of the AWS Well-Architected Framework, sessions address how you can build practical and prescriptive measures to protect your data, systems, and assets.

Sessions are offered at all levels and for all backgrounds, from business to technical, and there are learning opportunities in over 300 sessions across five tracks: Data Protection & Privacy; Governance, Risk & Compliance; Identity & Access Management; Network & Infrastructure Security; and Threat Detection & Incident Response. In these sessions, connect with and learn from AWS experts, customers, and partners who will share actionable insights that you can apply in your everyday work. At AWS re:Inforce, the majority of our sessions are interactive, such as workshops, chalk talks, boot camps, and gamified learning, which provide opportunities to hear about and act upon best practices. Sessions will be available from the intermediate (200) through expert (400) levels, so you can grow your skills no matter where you are in your career. Finally, there will be a leadership session for each track, where AWS leaders will share best practices and trends in each of these areas.

At re:Inforce, hear directly from AWS developers and experts, who will cover the latest advancements in AWS security, compliance, privacy, and identity solutions—including actionable insights your business can use right now. Plus, you’ll learn from AWS customers and partners who are using AWS services in innovative ways to protect their data, achieve security at scale, and stay ahead of bad actors in this rapidly evolving security landscape.

A full conference pass is $1,099. However, if you register today with the code ALUMkpxagvkV you’ll receive a $300 discount (while supplies last).

We’re excited to get back to re:Inforce in person; it is emblematic of our commitment to giving customers direct access to the latest security research and trends. We’ll continue to release additional details about the event on our website, and you can get real-time updates by following @AWSSecurityInfo. I look forward to seeing you in Boston, sharing a bit more about my new role as CISO and providing insight into how we prioritize security at AWS.


CJ Moses

CJ Moses is the Chief Information Security Officer (CISO) at AWS. In his role, CJ leads product design and security engineering for AWS. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Prior to joining Amazon in 2007, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. CJ also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

Amazon EMR Serverless Now Generally Available – Run Big Data Applications without Managing Servers

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-emr-serverless-now-generally-available-run-big-data-applications-without-managing-servers/

At AWS re:Invent 2021, we introduced three new serverless options for our data analytics services – Amazon EMR Serverless, Amazon Redshift Serverless, and Amazon MSK Serverless – that make it easier to analyze data at any scale without having to configure, scale, or manage the underlying infrastructure.

Today we announce the general availability of Amazon EMR Serverless, a serverless deployment option for customers to run big data analytics applications using open-source frameworks like Apache Spark and Hive without configuring, managing, and scaling clusters or servers.

With EMR Serverless, you can run analytics workloads at any scale with automatic scaling that resizes resources in seconds to meet changing data volumes and processing requirements. EMR Serverless automatically scales resources up and down to provide just the right amount of capacity for your application, and you only pay for what you use.

During the preview, we heard from customers that EMR Serverless is cost-effective because they do not incur cost from having to overprovision resources to deal with demand spikes. They do not have to worry about right-sizing instances or applying OS updates, and can focus on getting products to market faster.

Amazon EMR provides various deployment options to fit varied needs: EMR clusters on Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS) clusters, AWS Outposts, or EMR Serverless.

  • EMR on Amazon EC2 clusters is suitable for customers that need maximum control and flexibility over how they run their applications. With EMR clusters, customers can choose the EC2 instance type to enhance the performance of certain applications, customize the Amazon Machine Image (AMI), choose the EC2 instance configuration, customize and extend open-source frameworks, and install additional custom software on cluster instances.
  • EMR on Amazon EKS is suitable for customers that want to standardize on EKS to manage clusters across applications or use different versions of an open-source framework on the same cluster.
  • EMR on AWS Outposts is for customers who want to run EMR closer to their data center within an Outpost.
  • EMR Serverless is suitable for customers that want to avoid managing and operating clusters, and simply want to run applications using open-source frameworks.

Also, when you build an application using an EMR release (for example, a Spark job using EMR release 6.4), you can choose to run it on an EMR cluster, EMR on EKS, or EMR Serverless without having to rewrite the application. This allows you to build applications for a given framework version and retain the flexibility to change the deployment model based on future operational needs.

Getting Started with Amazon EMR Serverless
To get started with EMR Serverless, you can use Amazon EMR Studio, a free EMR feature that provides an end-to-end development and debugging experience. With EMR Studio, you can create EMR Serverless applications (Spark or Hive), choose the version of open-source software for your application, submit jobs, check the status of running jobs, and invoke the Spark UI or Tez UI for job diagnostics.

When you select the Get started button in the EMR Serverless Console, you can create and set up EMR Studio with preconfigured EMR Serverless applications.

In EMR Studio, when you choose Applications in the Serverless menu, you can create one or more EMR Serverless applications and choose the open source framework and version for your use case. If you want separate logical environments for test and production or for different line-of-business use cases, you can create separate applications for each logical environment.

An EMR Serverless application is a combination of (a) the EMR release version for the open-source framework version you want to use and (b) the specific runtime that you want your application to use, such as Apache Spark or Apache Hive.

When you choose Create application, you can set your application Name, choose a Type of either Spark or Hive, and select a supported Release version. You can also select the option of default or custom settings for pre-initialized capacity, application limits, and Amazon Virtual Private Cloud (Amazon VPC) connectivity options. Each EMR Serverless application is isolated from other applications and runs within a secure VPC.

Use the default option if you want jobs to start immediately, but note that charges apply for each pre-initialized worker while the application is started. To learn more about pre-initialized capacity, see Configuring and managing pre-initialized capacity.

When you select Start application, your application is set up to start with a pre-initialized capacity of 1 Spark driver and 1 Spark executor. By default, your application is configured to start when jobs are submitted and to stop when it has been idle for more than 15 minutes.

You can customize these settings and set up different application limits by selecting Choose custom settings.
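The same custom settings can also be applied from the AWS CLI when creating an application. The following is a hypothetical sketch (the application name, release label, and capacity values are illustrative; check the EMR Serverless CLI reference for the exact parameter shapes):

```shell
# Hypothetical sketch: create a Spark application with custom
# pre-initialized capacity and a 15-minute auto-stop timeout.
# Name, release label, and worker sizes here are illustrative.
aws emr-serverless create-application \
    --name my-spark-app \
    --type SPARK \
    --release-label emr-6.6.0 \
    --initial-capacity '{
        "DRIVER": {
            "workerCount": 1,
            "workerConfiguration": {"cpu": "2vCPU", "memory": "4GB"}
        },
        "EXECUTOR": {
            "workerCount": 2,
            "workerConfiguration": {"cpu": "2vCPU", "memory": "4GB"}
        }
    }' \
    --auto-stop-configuration '{"enabled": true, "idleTimeoutMinutes": 15}'
```

The command returns the application ID that you then pass to job submission calls.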

In the Job runs menu, you can see a list of job runs for your application.

Choose Submit job and set up the job details, such as the name, the AWS Identity and Access Management (IAM) role used by the job, and the location and arguments of the JAR or Python script in the Amazon Simple Storage Service (Amazon S3) bucket that you want to run.

If you want logs for your Spark or Hive jobs to be delivered to your S3 bucket, you will need to set up the S3 bucket in the same Region where you run EMR Serverless jobs.

Optionally, you can set additional configuration properties for each job, such as Spark properties, job configurations that override the default application configurations (for example, using the AWS Glue Data Catalog as the metastore), storing logs to Amazon S3, and retaining logs for 30 days.

The following is an example of running a Python script using the StartJobRun API.

$ aws emr-serverless start-job-run \
    --application-id <application_id> \
    --execution-role-arn <iam_role_arn> \
    --job-driver '{
        "sparkSubmit": {
            "entryPoint": "s3://spark-scripts/scripts/spark-etl.py",
            "entryPointArguments": ["s3://spark-scripts/output"],
            "sparkSubmitParameters": "--conf spark.executor.cores=1 --conf spark.executor.memory=4g --conf spark.driver.cores=1 --conf spark.driver.memory=4g --conf spark.executor.instances=1"
        }
    }' \
    --configuration-overrides '{
        "monitoringConfiguration": {
           "s3MonitoringConfiguration": {
             "logUri": "s3://spark-scripts/logs/"
           }
        }
    }'

You can check the job results in your S3 bucket. For diagnostics, you can use the Spark UI for Spark applications and the Hive/Tez UI for Hive applications in the Job runs menu to understand how the job ran or to debug it if it failed.

For further debugging, EMR Serverless pushes event logs to the sparklogs folder in your S3 log destination for Spark applications. For Hive applications, EMR Serverless continuously uploads the Hive driver and Tez task logs to the HIVE_DRIVER and TEZ_TASK folders of your S3 log destination. To learn more, see Logging in the AWS documentation.
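Once logs land in S3, you can browse and download them with ordinary S3 commands. The following is a hypothetical sketch using the log destination configured earlier; the application and job run IDs are placeholders, and the exact key layout under the log prefix may differ (see the Logging documentation for the authoritative structure):

```shell
# Hypothetical sketch: list everything under the S3 log destination
# configured in the job's monitoringConfiguration.
aws s3 ls s3://spark-scripts/logs/ --recursive

# Download the Spark driver stdout for a specific job run locally.
# The key layout shown here is an assumption; verify it against the
# EMR Serverless Logging documentation.
aws s3 cp \
    s3://spark-scripts/logs/applications/<application_id>/jobs/<job_run_id>/SPARK_DRIVER/stdout.gz \
    ./stdout.gz
```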

Things to Know
With EMR Serverless, you get all the benefits of running Amazon EMR. Here are some things to know about EMR Serverless, quoted from the AWS Big Data Blog post that announced the preview:

  • Automatic and fine-grained scaling – EMR Serverless automatically scales up workers at each stage of processing your job and scales them down when they’re not required. You’re charged for aggregate vCPU, memory, and storage resources used from the time a worker starts running until it stops, rounded up to the nearest second with a 1-minute minimum. For example, your job may require 10 workers for the first 10 minutes of processing the job and 50 workers for the next 5 minutes. With fine-grained automatic scaling, you only incur cost for 10 workers for 10 minutes and 50 workers for 5 minutes. As a result, you don’t have to pay for underutilized resources.
  • Resilience to Availability Zone failures – EMR Serverless is a Regional service. When you submit jobs to an EMR Serverless application, it can run in any Availability Zone in the Region. If an Availability Zone is impaired, a job submitted to your EMR Serverless application is automatically run in a different (healthy) Availability Zone. When using resources in a private VPC, we recommend specifying the private VPC configuration for multiple Availability Zones so that EMR Serverless can automatically select a healthy one.
  • Enable shared applications – When you submit jobs to an EMR Serverless application, you can specify the IAM role that must be used by the job to access AWS resources such as S3 objects. As a result, different IAM principals can run jobs on a single EMR Serverless application, and each job can only access the AWS resources that its IAM principal is allowed to access. This enables you to set up scenarios where a single application with a pre-initialized pool of workers is made available to multiple tenants, each of which can submit jobs using a different IAM role while sharing the common pool of pre-initialized workers to process requests immediately.
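The fine-grained billing example in the first bullet above can be sketched as simple shell arithmetic. This only counts worker-minutes; actual charges depend on the per-vCPU, per-GB memory, and per-GB storage rates for your Region:

```shell
# Fine-grained scaling example from the first bullet:
# 10 workers for the first 10 minutes, then 50 workers for 5 minutes.
phase1=$((10 * 10))    # 100 worker-minutes
phase2=$((50 * 5))     # 250 worker-minutes
total=$((phase1 + phase2))
echo "Billed usage: ${total} worker-minutes"
```

That is 350 worker-minutes, rather than the 750 you would pay for if 50 workers had been provisioned for the full 15 minutes up front.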

Now Available
Amazon EMR Serverless is available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. With EMR Serverless, there are no upfront costs, and you pay only for the resources you use. You pay for the amount of vCPU, memory, and storage resources consumed by your applications. For pricing details, see the EMR Serverless pricing page.

To learn more, visit the Amazon EMR Serverless User Guide. Please send feedback to AWS re:Post for Amazon EMR Serverless or through your usual AWS support contacts.

Learn all the details about Amazon EMR Serverless and get started today.

Channy

AWS Week In Review – May 30, 2022

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-30-2022/

Today, the US observes Memorial Day. South Korea also has a national Memorial Day, celebrated next week on June 6. In both countries, the day is set aside to remember those who sacrificed in service to their country. This time provides an opportunity to recognize and show our appreciation for the armed services and the important role they play in protecting and preserving national security.

AWS has also supported our veterans, active-duty military personnel, and military spouses with training and hiring programs in the US. We've developed a number of programs focused on engaging the military community, helping them develop valuable AWS technical skills, and supporting their transition into cloud careers. To learn more, see AWS's military commitment.

Last Week’s Launches
The launches that caught my attention last week are the following:

Three New AWS Wavelength Zones in the US and South Korea – We announced the availability of three new AWS Wavelength Zones: on Verizon's 5G Ultra Wideband network in Nashville, Tennessee, and Tampa, Florida, in the US, and on SK Telecom's 5G network in Seoul, South Korea.

AWS Wavelength Zones embed AWS compute and storage services at the edge of communications service providers' 5G networks while providing seamless access to cloud services running in an AWS Region. We now have a total of 28 Wavelength Zones globally, across Canada, Germany, Japan, South Korea, the UK, and the US. Learn more about AWS Wavelength and get started today.

New Amazon EC2 C7g, M6id, C6id, and P4de Instance Types – Last week, we announced four new EC2 instance types. C7g instances are the first instances powered by the latest AWS Graviton3 processors and deliver up to 25 percent better performance than Graviton2-based C6g instances for a broad spectrum of applications, including high-performance computing (HPC) and CPU-based machine learning (ML) inference.

M6id and C6id instances are powered by Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz, are equipped with up to 7.6 TB of local NVMe-based SSD block-level storage, and deliver up to 15 percent better price performance than previous-generation instances.

P4de instances, available in preview, are our latest GPU-based instances, providing the highest performance for ML training and HPC applications. They are powered by 8 NVIDIA A100 GPUs with 80 GB of high-performance HBM2e GPU memory each, twice that of the GPUs in our current P4d instances. The new P4de instances provide a total of 640 GB of GPU memory, delivering up to 60 percent better ML training performance along with 20 percent lower cost to train compared to P4d instances.

Amazon EC2 Stop Protection Feature to Protect Instances From Unintentional Stop Actions – Now you don't have to worry about your instances being stopped or terminated by accidental actions. With Stop Protection, you can safeguard the data in instance store volumes from unintentional stop actions. Previously, you could already protect your instances from unintentional termination by enabling Termination Protection.

When enabled, Stop Protection or Termination Protection blocks attempts to stop or terminate the instance via the EC2 console, API calls, or CLI commands. This provides an extra measure of protection for stateful workloads, since instances can be stopped or terminated only after the protection feature is deactivated.
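As a sketch of how this looks from the CLI, the following hypothetical commands toggle Stop Protection on an existing instance (the instance ID is a placeholder; verify the flag names against the EC2 CLI reference):

```shell
# Hypothetical sketch: enable Stop Protection on an existing instance.
# The instance ID below is a placeholder.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --disable-api-stop

# To stop the instance later, deactivate Stop Protection first.
aws ec2 modify-instance-attribute \
    --instance-id i-1234567890abcdef0 \
    --no-disable-api-stop
```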

AWS DataSync Supports Google Cloud Storage and Azure Files Storage Locations – We announced the general availability of two additional storage locations for AWS DataSync, an online data movement service that makes it easy to sync your data both into and out of the AWS Cloud. With this release, DataSync now supports Google Cloud Storage and Azure Files storage locations in addition to Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), Amazon FSx for Windows File Server, Amazon FSx for Lustre, and Amazon FSx for OpenZFS.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Last week, there were lots of public sector announcements at the AWS Summit Washington, DC.

To learn more, watch the keynote of Max Peterson, Vice President of AWS Worldwide Public Sector.

Upcoming AWS Events
If you have a developer background or similar and are looking to develop ML skills you can use to solve real-world problems, Let's Ship It – with AWS! ML Edition is the perfect place to start. Across eight Twitch training episodes scheduled from June 2 to July 21, you can learn hands-on how to build ML models for use cases such as predicting demand and personalizing your offerings.

The AWS Summit season is mostly over in Asia Pacific and Europe, but there are some upcoming virtual and in-person Summits that might be close to you in June:

More to come in August and September.

Please join Amazon re:MARS 2022 (June 21 – 24) to hear from recognized thought leaders and technical experts who are building the future of machine learning, automation, robotics, and space. You can also preview Robotics at Amazon, published by Amazon Science, which discusses recent real-world challenges of building robotic systems.

You can now register for AWS re:Inforce 2022 (July 26 – 27). Join us in Boston to learn how AWS is innovating in the world of cloud security, and hone your technical skills in expert-led interactive sessions.

You can now register for AWS re:Invent 2022 (November 28 – December 2). Join us in Las Vegas to experience our most vibrant event that brings together the global cloud community. You can virtually attend live keynotes and leadership sessions and access our on-demand breakout sessions even after re:Invent closes.

That’s all for this week. Check back next Monday for another Week in Review!

Channy

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!