AWS achieves an AAA Pinakes rating for Spanish financial entities

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-an-aaa-pinakes-rating-for-spanish-financial-entities/

Amazon Web Services (AWS) is pleased to announce that we have achieved an AAA rating from Pinakes. The scope of this qualification covers 166 services in 25 global AWS Regions.

The Spanish banking association Centro de Cooperación Interbancaria (CCI) developed Pinakes, a rating framework intended to manage and monitor the cybersecurity controls of service providers that Spanish financial entities depend on. The requirements arise from the European Banking Authority guidelines (EBA/GL/2019/02).

Pinakes evaluates the cybersecurity levels of service providers through 1,315 requirements across 4 categories (confidentiality, integrity, availability of information, and general) and 14 domains:

  • Information security management program
  • Facility security
  • Third-party management
  • Normative compliance
  • Network controls
  • Access control
  • Incident management
  • Encryption
  • Secure development
  • Monitoring
  • Malware protection
  • Resilience
  • Systems operation
  • Staff safety

Each requirement is associated with a rating level (A+, A, B, C, D), ranging from the highest, A+ (the provider has implemented the most diligent measures and controls for cybersecurity management), to the lowest, D (minimum security requirements are met).

An independent third-party auditor has verified the implementation status of each section. As a result, AWS has been qualified with A ratings for Confidentiality, Integrity, and Availability, earning an overall rating of AAA.

Our Spanish financial customers can refer to the AWS Pinakes rating to confirm that the AWS control environment is appropriately designed and implemented. By achieving an AAA rating, AWS demonstrates our commitment to meeting the heightened security expectations for cloud service providers set by the CCI. The full evaluation report will be published on AWS Artifact upon request. Pinakes participants who are AWS customers can contact their AWS account manager to request access to it.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. To learn more about our other compliance and security programs, see AWS Compliance Programs.

 
If you have feedback about this post, please submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Daniel Fuertes

Daniel is a Security Audit Program Manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has nine years of experience in security assurance, including previous experience as an auditor for the PCI DSS security framework.

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. Previously, he worked at companies such as Microsoft and BBVA in different roles and sectors. Borja is a seasoned security assurance practitioner with years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

AWS Week in Review – April 24, 2023: Amazon CodeCatalyst, Amazon S3 on Snowball Edge, and More…

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-april-24-2023-amazon-codecatalyst-amazon-s3-on-snowball-edge-and-more/

As always, there’s plenty to share this week: Amazon CodeCatalyst is now generally available, Amazon S3 is now available on Snowball Edge devices, version 1.0.0 of AWS Amplify Flutter is here, and a lot more. Let’s dive in!

Last Week’s Launches
Here are some of the launches that caught my eye this past week:

Amazon CodeCatalyst – First announced at re:Invent in preview form (Announcing Amazon CodeCatalyst, a Unified Software Development Service), this unified software development and delivery service is now generally available. As Steve notes in the post that he wrote for the preview, “Amazon CodeCatalyst enables software development teams to quickly and easily plan, develop, collaborate on, build, and deliver applications on AWS, reducing friction throughout the development lifecycle.” During the preview we added the ability to use AWS Graviton2 for CI/CD workflows and deployment environments, along with other new features, as detailed in the What’s New.

Amazon S3 on Snowball Edge – You have had the power to create S3 buckets on AWS Snow Family devices for a couple of years, and to PUT and GET objects. With this new launch you can, as Channy says, “…use an expanded set of Amazon S3 APIs to easily build applications on AWS and deploy them on Snowball Edge Compute Optimized devices.” This launch allows you to manage the storage using AWS OpsHub and to address multiple Denied, Disrupted, Intermittent, and Limited Impact (DDIL) use cases. To learn more, read Amazon S3 Compatible Storage on AWS Snowball Edge Compute Optimized Devices Now Generally Available.

Amazon Redshift Updates – We announced multiple updates to Amazon Redshift including the MERGE SQL command so that you can combine a series of DML statements into a single statement, dynamic data masking to simplify the process of protecting sensitive data in your Amazon Redshift data warehouse, and centralized access control for data sharing with AWS Lake Formation.

AWS Amplify – You can now build cross-platform Flutter apps that target iOS, Android, Web, and desktop using a single codebase and with a consistent user experience. To learn more and to see how to get started, read Amplify Flutter announces general availability for web and desktop support. In addition to the GA, we also announced that AWS Amplify supports Push Notifications for Android, Swift, React Native, and Flutter apps.

X in Y – We made existing services available in additional regions and locations:

For a full list of AWS announcements, take a look at the What’s New at AWS page and consider subscribing to the page’s RSS feed. If you want even more detail, you can Subscribe to AWS Daily Feature Updates via Amazon SNS.

Interesting Blog Posts

Other AWS Blogs – Here are some fresh posts from a few of the other AWS Blogs:

AWS Open Source – My colleague Ricardo writes a weekly newsletter to highlight new open source projects, tools, and demos from the AWS Community. Read edition 154 to learn more.

AWS Graviton Weekly – Marcos Ortiz writes a weekly newsletter to highlight the latest developments in AWS custom silicon. Read AWS Graviton weekly #33 to see what’s up.

Upcoming Events
Here are some upcoming live and online events that may be of interest to you:

AWS Community Day Turkey will take place in Istanbul on May 6, and I will be there to deliver the keynote. Get your tickets and I will see you there!

AWS Summits are coming to Berlin (May 4), Washington, DC (June 7 and 8), London (June 7), and Toronto (June 14). These events are free but I highly recommend that you register ahead of time.

.NET Enterprise Developer Day EMEA is a free one-day virtual conference on April 25; register now.

AWS Developer Innovation Day is also virtual, and takes place on April 26 (read Discover Building without Limits at AWS Developer Innovation Day for more info). I’ll be watching all day and sharing a live recap at the end; learn more and see you there.

And that’s all for today!

Jeff;

Choose Korean in AWS Support as Your Preferred Language

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/choose-korean-in-aws-support-as-your-preferred-language/

Today, we are announcing the general availability of AWS Support in Korean as your preferred language, in addition to English, Japanese, and Chinese.

As the number of customers speaking Korean grows, AWS Support is invested in providing the best support experience possible. You can now communicate with AWS Support engineers and agents in Korean when you create a support case at the AWS Support Center.

All customers can now receive account and billing support in Korean by email, phone, and live chat at no additional cost during the supported hours. Customers subscribed to the Enterprise, Enterprise On-Ramp, or Business Support plans can receive personalized technical support in Korean 24 hours a day, 7 days a week. Customers subscribed to the Developer Support plan can receive technical support during business hours, generally defined as 9:00 AM to 6:00 PM in the customer's country as set in the My Account console, excluding holidays and weekends. These times may vary in countries with multiple time zones.

We have also localized the AWS Support Center user interface in Korean, in addition to Japanese and Chinese. The AWS Support Center is displayed in the language you select from the dropdown of available languages in the Unified Settings of your AWS account.

Here is a new AWS Support Center page in Korean:

You can also access customer service, AWS documentation, technical papers, and support forums in Korean.

Getting Started with Your Supported Language in AWS Support
To get started with AWS Support in your supported language, create a Support case in AWS Support Center. In the final step in creating a Support case, you can choose a supported language, such as English, Chinese (中文), Korean (한국어), or Japanese (日本語) as your Preferred contact language.

When you choose Korean, the contact options shown are customized to your Support plan.

For example, Basic Support plan customers can choose Web to get support via email, or Phone or Live Chat when available. AWS customers with account and billing inquiries can receive support from customer service representatives with proficiency in Korean at no additional cost during business hours, defined as 09:00 AM to 06:00 PM Korean Standard Time (GMT+9), excluding holidays and weekends.

For technical support, you may choose Web, Phone, or Live Chat, depending on your Support plan, to get in touch with support staff with proficiency in Korean, in addition to English, Japanese, and Chinese.

Here is a screen in Korean to get technical support in the Enterprise Support plan:

When you create a support case in your preferred language, the case will be routed to support staff with proficiency in the language indicated in your preferred language selection. To learn more, see Getting started with AWS Support in the AWS documentation.
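
If you open cases programmatically, the AWS Support API accepts the same preference through its language parameter. Here is a minimal sketch using boto3; the subject, codes, and case body are illustrative values, and the Support API itself requires a Business, Enterprise On-Ramp, or Enterprise Support plan:

import boto3

# The AWS Support API is served from the us-east-1 endpoint and requires
# a Business-level or higher Support plan.
client = boto3.client("support", region_name="us-east-1")

# Field values below are illustrative.
case = client.create_case(
    subject="EC2 instance connectivity issue",
    serviceCode="amazon-elastic-compute-cloud-linux",
    severityCode="low",
    categoryCode="general-guidance",
    communicationBody="Instance i-0123456789abcdef0 is unreachable.",
    issueType="technical",
    language="ko",  # preferred contact language: Korean
)
print(case["caseId"])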

Now Available
AWS Support in Korean is available today, in addition to English, Japanese, and Chinese. Give it a try, learn more about AWS Support, and send feedback to your usual AWS Support contacts.

Channy

This article was translated into Korean (한국어) in the AWS Korea Blog.

Amazon S3 Compatible Storage on AWS Snowball Edge Compute Optimized Devices Now Generally Available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-s3-compatible-storage-on-aws-snowball-edge-compute-optimized-devices-now-generally-available/

We have added a collection of purpose-built devices to the AWS Snow Family, such as Snowball Edge in 2016 and Snowcone in 2020. These devices run compute-intensive workloads and store data in edge locations with denied, disrupted, intermittent, or limited network connectivity, and transfer large amounts of data from on-premises, rugged, or mobile environments.

Each device is optimized for space- or weight-constrained environments, portability, and flexible networking options. For example, Snowball Edge devices have three options for device configurations. AWS Snowball Edge Compute Optimized provides a suitcase-sized, secure, and rugged device that customers can deploy in rugged and tactical edge locations to run their compute applications. Customers modernizing their edge applications in the cloud use AWS compute and storage services such as Amazon Simple Storage Service (Amazon S3), and then deploy these applications on Snow devices at the edge.

We heard from customers that they also needed access to a local object store to run applications at the edge, such as 5G mobile core and real-time data analytics, to process end-user transactions, but they had limited storage infrastructure availability in these environments. Although the Amazon S3 Adapter for Snowball enables the basic storage and retrieval of objects on a Snow device, customers wanted access to a broader set of Amazon S3 APIs, including flexibility at scale, local bucket management, object tagging, and S3 event notifications.

Today, we’re announcing the general availability of Amazon S3 compatible storage on Snow for our Snowball Edge Compute Optimized devices. This makes it easy for you to store data and run applications with local S3 buckets that require low latency processing at the edge.

With Amazon S3 compatible storage on Snow, you can use an expanded set of Amazon S3 APIs to easily build applications on AWS and deploy them on Snowball Edge Compute Optimized devices. This eliminates the need to re-architect applications for each deployment. You can manage applications requiring Amazon S3 compatible storage across the cloud, on-premises, and at the edge in connected and disconnected environments with a consistent experience.

Moreover, you can use AWS OpsHub, a graphical user interface, to manage your Snow Family services and Amazon S3 compatible storage on the devices at the edge or remotely from a central location. You can also use Amazon S3 SDK or AWS Command Line Interface (AWS CLI) to create and manage S3 buckets, get S3 event notifications using MQTT, and local service notifications using SMTP, just as you do in AWS Regions.

With Amazon S3 compatible storage on Snow, we are now able to address various use cases in limited network environments, giving customers secure, durable local object storage. For example, customers in the intelligence community and in industrial IoT deploy applications such as video analytics in rugged and mobile edge locations.

Getting Started with S3 Compatible Storage on Snowball Edge Compute Optimized
To order new Amazon S3 enabled Snowball Edge devices, create a job in the AWS Snow Family console. You can replace an existing Snow device or cluster with new replacement devices that support S3 compatible storage.

In Step 1 – Job type, input your job name and choose Local compute and storage only. In Step 2 – Compute and storage, choose your preferred Snowball Edge Compute Optimized device.

Select Amazon S3 compatible storage, the new storage option. The current S3 Adapter solution is on a deprecation path, and we recommend migrating workloads to Amazon S3 compatible storage on Snow.

When you select Amazon S3 compatible storage, you can configure Amazon S3 compatible storage capacity for a single device or for a cluster. The Amazon S3 storage capacity depends on the quantity and type of Snowball Edge device.

  • For single-device deployment, you can provision granular Amazon S3 capacity up to a maximum of 31 TB on a Snowball Edge Compute Optimized device.
  • For a cluster setup, all storage capacity on a device is allocated to Amazon S3 compatible storage on Snow. You can provision a maximum of 500 TB on a 16 node cluster of Snowball Edge Compute Optimized devices.

When you provide all necessary job details and create your job, you can see the status of the delivery of your device in the job status section.

Manage S3 Compatible Storage on Snow with OpsHub
Once your device arrives at your site, power it on and connect it to your network. To manage your device, download, install, and launch the OpsHub application on your laptop. After installation, you can unlock the device and start managing it and using supported AWS services locally.

OpsHub provides a dashboard that summarizes key metrics, such as storage capacity and active instances on your device. It also provides a selection of AWS services that are supported on the Snow Family devices.

Log in to OpsHub, then choose Manage Storage. This takes you to the Amazon S3 compatible storage on Snow landing page.

For Start service setup type, choose Simple if your network uses dynamic host configuration protocol (DHCP). With this option, the virtual network interface cards (VNICs) are created automatically on each device when you start the service. When your network uses static IP addresses, you need to create VNICs for each device manually, so choose the Advanced option.

Once the service starts, you’ll see its status is active with a list of endpoints. The following example shows the service activated in a single device:

Choose Create bucket if you want to create a new S3 bucket on your device. Otherwise, you can upload files to your selected bucket. Newly uploaded objects have destination URLs such as s3-snow://test123/test_file, where the bucket name is unique within the device or cluster.

You can also use the bucket lifecycle rule to define when to trigger object deletion based on age or date. Choose Create lifecycle rule in the Management tab to add a new lifecycle rule.

You can select either Delete objects or Delete incomplete multipart uploads as a rule action. Configure the rule trigger that schedules deletion based on a specific date or the object's age. In this example, I set objects to be deleted two days after being uploaded.

You can also use the Amazon S3 SDK/CLI for all API operations supported by S3 for Snowball Edge. To learn more, see API Operations Supported on Amazon S3 for Snowball Edge in the AWS documentation.
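
As a rough illustration of the SDK path, the following boto3 sketch points the standard S3 client at a device endpoint, uploads an object, and applies a lifecycle rule like the one configured above. The endpoint URL and credentials are placeholders; substitute the values reported for your own device or cluster (for example, by OpsHub), and note that TLS certificate handling is omitted here:

import boto3

# Placeholder endpoint and credentials; use the S3 API endpoint and
# access keys for your Snowball Edge device or cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.10:443",
    aws_access_key_id="DEVICE_ACCESS_KEY",
    aws_secret_access_key="DEVICE_SECRET_KEY",
    region_name="snow",
)

s3.create_bucket(Bucket="test123")
s3.put_object(Bucket="test123", Key="test_file", Body=b"hello from the edge")

# Mirror the console lifecycle rule above: delete objects two days after upload
s3.put_bucket_lifecycle_configuration(
    Bucket="test123",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-two-days",
                "Status": "Enabled",
                "Filter": {},
                "Expiration": {"Days": 2},
            }
        ]
    },
)

for obj in s3.list_objects_v2(Bucket="test123").get("Contents", []):
    print(obj["Key"], obj["Size"])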

Things to know
Keep these things in mind regarding additional features and considerations when you use Amazon S3 compatible storage on Snow:

  • Capacity: If you fully utilize Amazon S3 capacity on your device or cluster, your write (PUT) requests return an insufficient capacity error. Read (GET) operations continue to function normally. To monitor the available Amazon S3 capacity, you can use the OpsHub S3 on the Snow page or use the describe-service CLI command. Upon detecting insufficient capacity on the Snow device or cluster, you must free up space by deleting data or transferring data to an S3 bucket in the Region or another on-premises device.
  • Resiliency: Amazon S3 compatible storage on Snow stores data redundantly across multiple disks on each Snow device and multiple devices in your cluster, with built-in protection against correlated hardware failures. In the event of a disk or device failure within the quorum range, Amazon S3 compatible storage on Snow continues to operate until hardware is replaced. Additionally, Amazon S3 compatible storage on Snow continuously scrubs data on the device to make sure of data integrity and recover any corrupted data. For workloads that require local storage, the best practice is to back up your data to further protect your data stored on Snow devices.
  • Notifications: Amazon S3 compatible storage on Snow continuously monitors the health status of the device or cluster. Background processes respond to data inconsistencies and temporary failures to heal and recover data to make sure of resiliency. In the case of nonrecoverable hardware failures, Amazon S3 compatible storage on Snow can continue operations and provides proactive notifications through emails, prompting you to work with AWS to replace failed devices. For connected devices, you have the option to enable the “Remote Monitoring” feature, which will allow AWS to monitor service health online and proactively notify you of any service issues.
  • Security: Amazon S3 compatible storage on Snow supports encryption using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) or customer-provided keys (SSE-C) and authentication and authorization using Snow IAM actions namespace (s3:*) to provide you with distinct controls for data stored on your Snow devices. Amazon S3 compatible storage on Snow doesn’t support object-level access control list and bucket policies. Amazon S3 compatible storage on Snow defaults to Bucket Owner is Object Owner, making sure that the bucket owner has control over objects in the bucket.

Now Available
Amazon S3 compatible storage on Snow is now generally available for AWS Snowball Edge Compute Optimized devices in all AWS Commercial and GovCloud Regions where AWS Snow is available.

To learn more, see the AWS Snowball Edge Developer Guide and send feedback to AWS re:Post for AWS Snowball or through your usual AWS support contacts.

Channy

A sneak peek at the data protection sessions for re:Inforce 2023

Post Syndicated from Katie Collins original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-sessions-for-reinforce-2023/

A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.


AWS re:Inforce is fast approaching, and this post can help you plan your agenda. AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As a re:Inforce attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, CA, on June 13 and 14. re:Inforce 2023 features content in the following six areas:

  • Data Protection
  • Governance, Risk, and Compliance
  • Identity and Access Management
  • Network and Infrastructure Security
  • Threat Detection and Incident Response
  • Application Security

The data protection track will showcase services and tools that you can use to help achieve your data protection goals in an efficient, cost-effective, and repeatable manner. You will hear from AWS customers and partners about how they protect data in transit, at rest, and in use. Learn how experts approach data management, key management, cryptography, data security, data privacy, and encryption. This post highlights some of the data protection offerings that you can add to your agenda. To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.
 

“re:Inforce is a great opportunity for us to hear directly from our customers, understand their unique needs, and use customer input to define solutions that protect sensitive data. We also use this opportunity to deliver content focused on the latest security research and trends, and I am looking forward to seeing you all there. Security is everyone’s job, and at AWS, it is job zero.”
Ritesh Desai, General Manager, AWS Secrets Manager

 

Breakout sessions, chalk talks, and lightning talks

DAP301: Moody’s database secrets management at scale with AWS Secrets Manager
Many organizations must rotate database passwords across fleets of on-premises and cloud databases to meet various regulatory standards and enforce security best practices. One-time solutions such as scripts and runbooks for password rotation can be cumbersome. Moody’s sought a custom solution that satisfies the goal of managing database passwords using well-established DevSecOps processes. In this session, Moody’s discusses how they successfully used AWS Secrets Manager and AWS Lambda, along with open-source CI/CD system Jenkins, to implement database password lifecycle management across their fleet of databases spanning nonproduction and production environments.

DAP401: Security design of the AWS Nitro System
The AWS Nitro System is the underlying platform for all modern Amazon EC2 instances. In this session, learn about the inner workings of the Nitro System and discover how it is used to help secure your most sensitive workloads. Explore the unique design of the Nitro System’s purpose-built hardware and software components and how they operate together. Dive into specific elements of the Nitro System design, including eliminating the possibility of operator access and providing a hardware root of trust and cryptographic system integrity protections. Learn important aspects of the Amazon EC2 tenant isolation model that provide strong mitigation against potential side-channel issues.

DAP322: Integrating AWS Private CA with SPIRE and Ottr at Coinbase
Coinbase is a secure online platform for buying, selling, transferring, and storing cryptocurrency. This lightning talk provides an overview of how Coinbase uses AWS services, including AWS Private CA, AWS Secrets Manager, and Amazon RDS, to build out a Zero Trust architecture with SPIRE for service-to-service authentication. Learn how short-lived certificates are issued safely at scale for X.509 client authentication (for example, Amazon MSK) with Ottr.

DAP331: AWS Private CA: Building better resilience and revocation techniques
In this chalk talk, explore the concept of PKI resiliency and certificate revocation for AWS Private CA, and discover the reasons behind multi-Region resilient private PKI. Dive deep into different revocation methods like certificate revocation list (CRL) and Online Certificate Status Protocol (OCSP) and compare their advantages and limitations. Leave this talk with the ability to better design resiliency and revocations.

DAP231: Securing your application data with AWS storage services
Critical applications that enterprises have relied on for years were designed for the database block storage and unstructured file storage prevalent on premises. Now, organizations are growing with cloud services and want to bring their security best practices along. This chalk talk explores the features for securing application data using Amazon FSx, Amazon Elastic File System (Amazon EFS), and Amazon Elastic Block Store (Amazon EBS). Learn about the fundamentals of securing your data, including encryption, access control, monitoring, and backup and recovery. Dive into use cases for different types of workloads, such as databases, analytics, and content management systems.

Hands-on sessions (builders’ sessions and workshops)

DAP353: Privacy-enhancing data collaboration with AWS Clean Rooms
Organizations increasingly want to protect sensitive information and reduce or eliminate raw data sharing. To help companies meet these requirements, AWS has built AWS Clean Rooms. This service allows organizations to query their collective data without needing to expose the underlying datasets. In this builders’ session, get hands-on with AWS Clean Rooms preventative and detective privacy-enhancing controls to mitigate the risk of exposing sensitive data.

DAP371: Post-quantum crypto with AWS KMS TLS endpoints, SDKs, and libraries
This hands-on workshop demonstrates post-quantum cryptographic algorithms and compares their performance and size to classical ones. Learn how to use AWS Key Management Service (AWS KMS) with the AWS SDK for Java to establish a quantum-safe tunnel to transfer the most critical digital secrets and protect them from a theoretical computer targeting these communications in the future. Find out how the tunnels use classical and quantum-resistant key exchanges to offer the best of both worlds, and discover the performance implications.

DAP271: Data protection risk assessment for AWS workloads
Join this workshop to learn how to simplify the process of selecting the right tools to mitigate your data protection risks while reducing costs. Follow the data protection lifecycle by conducting a risk assessment, selecting the effective controls to mitigate those risks, deploying and configuring AWS services to implement those controls, and performing continuous monitoring for audits. Leave knowing how to apply the right controls to mitigate your business risks using AWS advanced services for encryption, permissions, and multi-party processing.

If these sessions look interesting to you, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Katie Collins

Katie is a Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

A sneak peek at the application security sessions for re:Inforce 2023

Post Syndicated from Paul Hawkins original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-application-security-sessions-for-reinforce-2023/

A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.


AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As a re:Inforce attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, CA, on June 13 and 14.

In line with recent updates to the Security Pillar of the AWS Well-Architected Framework, we added a new track on application security to the conference. The new track will help you discover how AWS, customers, and AWS Partners move fast while understanding the security of the software they build.

In these sessions, you’ll hear directly from AWS leaders and get hands-on with the tools that help you ship securely. You’ll hear how organization and culture help security accelerate your business, and you’ll dive deep into the technology that helps you more swiftly get features to customers. You might even find some new tools that make it simpler to empower your builders to move quickly and ship securely.

To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.

Breakout sessions, chalk talks, and lightning talks

APS221: From code to insight: Amazon Inspector & AWS Lambda in action
In this lightning talk, see a demo of Amazon Inspector support for AWS Lambda. Inspect a Java application running on Lambda for security vulnerabilities using Amazon Inspector for an automated and continual vulnerability assessment. In addition, explore how Amazon Inspector can help you identify package and code vulnerabilities in your serverless applications.

APS302: From IDE to code review, increasing quality and security
Attend this session to discover how to improve the quality and security of your code early in the development cycle. Explore how you can integrate Amazon CodeWhisperer, Amazon CodeGuru Reviewer, and Amazon Inspector into your development workflow, which can help you identify potential issues and automate your code review process.

APS201: How AWS and MongoDB raise the security bar with distributed ownership
In this session, explore how AWS and MongoDB have approached creating their Security Guardians and Security Champions programs. Learn practical advice on scoping, piloting, measuring, scaling, and managing a program with the goal of accelerating development with a high security bar, and discover tips on how to bridge the gap between your security and development teams. Learn how a guardians or champions program can improve security outside of your dev teams for your company as a whole when applied broadly across your organization.

APS202: AppSec tooling & culture tips from AWS & Toyota Motor North America
In this session, AWS and Toyota Motor North America (TMNA) share how they scale AppSec tooling and culture across enterprise organizations. Discover practical advice and lessons learned from the AWS approach to threat modeling across hundreds of service teams. In addition, gain insight on how TMNA has embedded security engineers into business units working on everything from mainframes to serverless. Learn ways to support teams at varying levels of maturity, the importance of differentiating between risks and vulnerabilities, and how to balance the use of cloud-native, third-party, and open-source tools at scale.

APS331: Shift security left with the AWS integrated builder experience
As organizations start building their applications on AWS, they use a wide range of highly capable builder services that address specific parts of the development and management of these applications. The AWS integrated builder experience (IBEX) brings those separate pieces together to create innovative solutions for both customers and AWS Partners building on AWS. In this chalk talk, discover how you can use IBEX to shift security left by designing applications with security best practices built in and by unlocking agile software to help prevent security issues and bottlenecks.

Hands-on sessions (builders’ sessions and workshops)

APS271: Threat modeling for builders
Attend this facilitated workshop to get hands-on experience creating a threat model for a workload. Learn some of the background and reasoning behind threat modeling, and explore tools and techniques for modeling systems, identifying threats, and selecting mitigations. In addition, explore the process for creating a system model and corresponding threat model for a serverless workload. Learn how AWS performs threat modeling internally, and discover the principles to effectively perform threat modeling on your own workloads.

APS371: Integrating open-source security tools with the AWS code services
AWS, open-source, and partner tooling works together to accelerate your software development lifecycle. In this workshop, learn how to use Automated Security Helper (ASH), an open-source application security tool, to quickly integrate various security testing tools into your software build and deployment flows. AWS experts guide you through the process of security testing locally on your machines and within the AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline services. In addition, discover how to identify potential security issues in your applications through static analysis, software composition analysis, and infrastructure-as-code testing.

APS352: Practical shift left for builders
Providing early feedback in the development lifecycle maximizes developer productivity and enables engineering teams to deliver quality products. In this builders’ session, learn how to use AWS Developer Tools to empower developers to make good security decisions early in the development cycle. Tools such as Amazon CodeGuru, Amazon CodeWhisperer, and Amazon DevOps Guru can provide continuous real-time feedback upon each code commit into a source repository and supplement this with ML capabilities integrated into the code review stage.

APS351: Secure software factory on AWS through the DoD DevSecOps lens
Modern information systems within regulated environments are driven by the need to develop software with security at the forefront. Increasingly, organizations are adopting DevSecOps and secure software factory patterns to improve the security of their software delivery lifecycle. In this builders' session, we will explore the options available on AWS to create a secure software factory aligned with the DoD Enterprise DevSecOps initiative. We will focus on the security of the software supply chain as we deploy code from source to an Amazon Elastic Kubernetes Service (Amazon EKS) environment.

If these sessions look interesting to you, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Paul Hawkins

Paul helps customers of all sizes understand how to think about cloud security so they can build the technology and culture where security is a business enabler. He takes an optimistic approach to security and believes that getting the foundations right is the key to improving your security posture.

Discover Building without Limits at AWS Developer Innovation Day

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/discover-building-without-limits-at-aws-developer-innovation-day/

We hope you can join us on Wednesday, April 26, for a free-to-attend online event, AWS Developer Innovation Day. AWS will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch.

Developer Innovation Day is a brand-new event designed specifically for developers and development teams. Sessions throughout the day will show how you can improve productivity and collaboration, provide a first look at new product updates, and go deep on AWS tools for development and delivery. Topics include how you can speed up web and mobile application development and how you can build and deliver faster by taking advantage of modern infrastructure, DevOps, and generative AI-enabled tools.

We’ll be starting the day with a keynote from Adam Seligman, Vice President of Developer Experience at AWS. And, to wrap up the event, there are closing sessions from Dr. Werner Vogels, CTO at Amazon, Harry Mower, Director, AWS Code Suite, and Doug Seven, Director of Software Development, Amazon CodeWhisperer. A panel of experts will join Werner to discuss the latest trends in generative AI and what it all means for developers. Harry and Doug will discuss trends in developer and team productivity. Of course, no event would be complete without our own Jeff Barr, VP of AWS Evangelism, who’ll be sharing a “Best practices and key takeaways” session with a recap of key stories, announcements, and moments from the day!

AWS staff will present the technical breakout sessions during the day, and AWS Heroes and Community Builders will also be involved, presenting community spotlights, which are an excellent chance to get involved with the global AWS community. During and after the event, you can also sign up for an AWS Amplify hackathon that will be held in May, as well as an Amazon CodeCatalyst immersion day. Other fun events are also in the works; watch the event details page for more information.

There’s no up-front registration required to join AWS Developer Innovation Day, which, I’ll remind you, is also free to attend. Simply head on over to the event page and select Add to calendar. I’m excited to be joined by colleagues from the AWS on Air team to host the event, and we all hope you’ll choose to join in the fun and learning opportunities at this brand-new event!

AWS Week in Review: New Service for Generative AI and Amazon EC2 Trn1n, Inf2, and CodeWhisperer now GA – April 17, 2023

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-new-service-for-generative-ai-and-amazon-ec2-trn1n-inf2-and-codewhisperer-now-ga-april-17-2023/

I could almost title this blog post the “AWS AI/ML Week in Review.” This past week, we announced several new innovations and tools for building with generative AI on AWS. Let’s dive right into it.

Last Week’s Launches
Here are some launches that got my attention during the previous week:

Announcing Amazon Bedrock and Amazon Titan models – Amazon Bedrock is a new service to accelerate your development of generative AI applications using foundation models through an API, without managing infrastructure. You can choose from a wide range of foundation models built by leading AI startups and Amazon. The new Amazon Titan foundation models are pre-trained on large datasets, making them powerful, general-purpose models. You can use them as-is or privately customize them with your own data for a particular task without annotating large volumes of data. Amazon Bedrock is currently in limited preview. Sign up here to learn more.

Building with Generative AI on AWS

Amazon EC2 Trn1n and Inf2 instances are now generally available – Trn1n instances, powered by AWS Trainium accelerators, double the network bandwidth (compared to Trn1 instances) to 1,600 Gbps of Elastic Fabric Adapter (EFAv2). The increased bandwidth delivers even higher performance for training network-intensive generative AI models such as large language models (LLMs) and mixture of experts (MoE). Inf2 instances, powered by AWS Inferentia2 accelerators, deliver high performance at the lowest cost in Amazon EC2 for generative AI models, including LLMs and vision transformers. They are the first inference-optimized instances in Amazon EC2 to support scale-out distributed inference with ultra-high-speed connectivity between accelerators. Compared to Inf1 instances, Inf2 instances deliver up to 4x higher throughput and up to 10x lower latency. Check out my blog posts on Trn1 instances and Inf2 instances for more details.

Amazon CodeWhisperer, free for individual use, is now generally available – Amazon CodeWhisperer is an AI coding companion that generates real-time single-line or full-function code suggestions in your IDE to help you build applications faster. With GA, we introduce two tiers: CodeWhisperer Individual and CodeWhisperer Professional. CodeWhisperer Individual is free to use for generating code. You can sign up with an AWS Builder ID based on your email address. The Individual Tier provides code recommendations, reference tracking, and security scans. CodeWhisperer Professional—priced at $19 per user, per month—offers additional enterprise administration capabilities. Steve’s blog post has all the details.

Amazon GameLift adds support for Unreal Engine 5 – Amazon GameLift is a fully managed solution that allows you to manage and scale dedicated game servers for session-based multiplayer games. The latest version of the Amazon GameLift Server SDK 5.0 lets you integrate your Unreal 5-based game servers with the Amazon GameLift service. In addition, the latest Amazon GameLift Server SDK with Unreal 5 plugin is built to work with Amazon GameLift Anywhere so that you can test and iterate Unreal game builds faster and manage game sessions across any server hosting infrastructure. Check out the release notes to learn more.

Amazon Rekognition launches Face Liveness to deter fraud in facial verification – Face Liveness verifies that only real users, not bad actors using spoofs, can access your services. Amazon Rekognition Face Liveness analyzes a short selfie video to detect spoofs presented to the camera, such as printed photos, digital photos, digital videos, or 3D masks, as well as spoofs that bypass the camera, such as pre-recorded or deepfake videos. This AWS Machine Learning Blog post walks you through the details and shows how you can add Face Liveness to your web and mobile applications.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some additional news items and blog posts that you may find interesting:

Updates to the AWS Well-Architected Framework – The most recent content updates and improvements focus on providing expanded guidance across the AWS service portfolio to help you make more informed decisions when developing implementation plans. Services that were added or expanded in coverage include AWS Elastic Disaster Recovery, AWS Trusted Advisor, AWS Resilience Hub, AWS Config, AWS Security Hub, Amazon GuardDuty, AWS Organizations, AWS Control Tower, AWS Compute Optimizer, AWS Budgets, Amazon CodeWhisperer, and Amazon CodeGuru. This AWS Architecture Blog post has all the details.

Amazon releases largest dataset for training “pick and place” robots – In an effort to improve the performance of robots that pick, sort, and pack products in warehouses, Amazon has publicly released the largest dataset of images captured in an industrial product-sorting setting. Where the largest previous dataset of industrial images featured on the order of 100 objects, the Amazon dataset, called ARMBench, features more than 190,000 objects. Check out this Amazon Science Blog post to learn more.

AWS open-source news and updates – My colleague Ricardo writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #153 here.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

#BuildOn Generative AI – Join our weekly live Build On Generative AI Twitch show. Every Monday morning at 9:00 AM US PT, my colleagues Emily and Darko take a look at aspects of generative AI. They host developers, scientists, startup founders, and AI leaders, and discuss how to build generative AI applications on AWS.

In today’s episode, Emily walks us through the latest AWS generative AI announcements. You can watch the video here.

.NET Enterprise Developer Day EMEA 2023 (April 25) is a free, one-day virtual event providing enterprise developers with the most relevant information to swiftly and efficiently migrate and modernize their .NET applications and workloads on AWS.

AWS Developer Innovation Day (April 26) is a new, free, one-day virtual event designed to help developers and teams be productive and collaborate, from discovery to delivery to running software and building applications. Get a first look at exciting product updates, technical deep dives, and keynotes.

AWS Global Summits – Check your calendars and sign up for the AWS Summit close to where you live or work: Tokyo (April 20–21), Singapore (May 4), Stockholm (May 11), Hong Kong (May 23), Tel Aviv (May 31), Amsterdam (June 1), London (June 7), Washington, DC (June 7–8), Toronto (June 14), Madrid (June 15), and Milano (June 22).

You can browse all upcoming AWS-led in-person and virtual events and developer-focused events such as Community Days.

That’s all for this week. Check back next Monday for another Week in Review!

— Antje

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Amazon EC2 Inf2 Instances for Low-Cost, High-Performance Generative AI Inference are Now Generally Available

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/amazon-ec2-inf2-instances-for-low-cost-high-performance-generative-ai-inference-are-now-generally-available/

Innovations in deep learning (DL), especially the rapid growth of large language models (LLMs), have taken the industry by storm. DL models have grown from millions to billions of parameters and are demonstrating exciting new capabilities. They are fueling new applications such as generative AI or advanced research in healthcare and life sciences. AWS has been innovating across chips, servers, data center connectivity, and software to accelerate such DL workloads at scale.

At AWS re:Invent 2022, we announced the preview of Amazon EC2 Inf2 instances powered by AWS Inferentia2, the latest AWS-designed ML chip. Inf2 instances are designed to run high-performance DL inference applications at scale globally. They are the most cost-effective and energy-efficient option on Amazon EC2 for deploying the latest innovations in generative AI, such as GPT-J or Open Pre-trained Transformer (OPT) language models.

Today, I’m excited to announce that Amazon EC2 Inf2 instances are now generally available!

Inf2 instances are the first inference-optimized instances in Amazon EC2 to support scale-out distributed inference with ultra-high-speed connectivity between accelerators. You can now efficiently deploy models with hundreds of billions of parameters across multiple accelerators on Inf2 instances. Compared to Amazon EC2 Inf1 instances, Inf2 instances deliver up to 4x higher throughput and up to 10x lower latency. Here’s an infographic that highlights the key performance improvements that we have made available with the new Inf2 instances:

Performance improvements with Amazon EC2 Inf2

New Inf2 Instance Highlights
Inf2 instances are available today in four sizes and are powered by up to 12 AWS Inferentia2 chips with 192 vCPUs. They offer a combined compute power of 2.3 petaFLOPS at BF16 or FP16 data types and feature an ultra-high-speed NeuronLink interconnect between chips. NeuronLink scales large models across multiple Inferentia2 chips, avoids communication bottlenecks, and enables higher-performance inference.

Inf2 instances offer up to 384 GB of shared accelerator memory, with 32 GB of high-bandwidth memory (HBM) in every Inferentia2 chip and 9.8 TB/s of total memory bandwidth. This type of bandwidth is particularly important for inference on large language models, which are memory bound.

Since the underlying AWS Inferentia2 chips are purpose-built for DL workloads, Inf2 instances offer up to 50 percent better performance per watt than other comparable Amazon EC2 instances. I’ll cover the AWS Inferentia2 silicon innovations in more detail later in this blog post.

The following table lists the sizes and specs of Inf2 instances in detail.

Instance Name | vCPUs | AWS Inferentia2 Chips | Accelerator Memory | NeuronLink | Instance Memory | Instance Networking
inf2.xlarge | 4 | 1 | 32 GB | N/A | 16 GB | Up to 15 Gbps
inf2.8xlarge | 32 | 1 | 32 GB | N/A | 128 GB | Up to 25 Gbps
inf2.24xlarge | 96 | 6 | 192 GB | Yes | 384 GB | 50 Gbps
inf2.48xlarge | 192 | 12 | 384 GB | Yes | 768 GB | 100 Gbps

AWS Inferentia2 Innovation
Similar to AWS Trainium chips, each AWS Inferentia2 chip has two improved NeuronCore-v2 engines, HBM stacks, and dedicated collective compute engines to parallelize computation and communication operations when performing multi-accelerator inference.

Each NeuronCore-v2 has dedicated scalar, vector, and tensor engines that are purpose-built for DL algorithms. The tensor engine is optimized for matrix operations. The scalar engine is optimized for element-wise operations like ReLU (rectified linear unit) functions. The vector engine is optimized for non-element-wise vector operations, including batch normalization or pooling.

Here is a short summary of additional AWS Inferentia2 chip and server hardware innovations:

  • Data Types – AWS Inferentia2 supports a wide range of data types, including FP32, TF32, BF16, FP16, and UINT8, so you can choose the most suitable data type for your workloads. It also supports the new configurable FP8 (cFP8) data type, which is especially relevant for large models because it reduces the memory footprint and I/O requirements of the model. The following image compares the supported data types.
  • Dynamic Execution, Dynamic Input Shapes – AWS Inferentia2 has embedded general-purpose digital signal processors (DSPs) that enable dynamic execution, so control flow operators don’t need to be unrolled or executed on the host. AWS Inferentia2 also supports dynamic input shapes that are key for models with unknown input tensor sizes, such as models processing text.
  • Custom Operators – AWS Inferentia2 supports custom operators written in C++. Neuron Custom C++ Operators enable you to write C++ custom operators that natively run on NeuronCores. You can use standard PyTorch custom operator programming interfaces to migrate CPU custom operators to Neuron and implement new experimental operators, all without any intimate knowledge of the NeuronCore hardware.
  • NeuronLink v2 – Inf2 instances are the first inference-optimized instance on Amazon EC2 to support distributed inference with direct ultra-high-speed connectivity—NeuronLink v2—between chips. NeuronLink v2 uses collective communications (CC) operators such as all-reduce to run high-performance inference pipelines across all chips.

The following Inf2 distributed inference benchmarks show throughput and cost improvements for OPT-30B and OPT-66B models over comparable inference-optimized Amazon EC2 instances.

Amazon EC2 Inf2 Benchmarks

Now, let me show you how to get started with Amazon EC2 Inf2 instances.

Get Started with Inf2 Instances
The AWS Neuron SDK integrates AWS Inferentia2 into popular machine learning (ML) frameworks like PyTorch. The Neuron SDK includes a compiler, runtime, and profiling tools and is constantly being updated with new features and performance optimizations.

In this example, I will compile and deploy a pre-trained BERT model from Hugging Face on an EC2 Inf2 instance using the available PyTorch Neuron packages. PyTorch Neuron is based on the PyTorch XLA software package and enables the conversion of PyTorch operations to AWS Inferentia2 instructions.

SSH into your Inf2 instance and activate a Python virtual environment that includes the PyTorch Neuron packages. If you’re using a Neuron-provided AMI, you can activate the preinstalled environment by running the following command:

source aws_neuron_venv_pytorch_p37/bin/activate

Now, with only a few changes to your code, you can compile your PyTorch model into an AWS Neuron-optimized TorchScript. Let’s start with importing torch, the PyTorch Neuron package torch_neuronx, and the Hugging Face transformers library.

import torch
import torch_neuronx
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import transformers
...

Next, let’s build the tokenizer and model.

name = "bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)

We can test the model with example inputs. The model expects two sentences as input, and its output is whether or not those sentences are a paraphrase of each other.

def encode(tokenizer, *inputs, max_length=128, batch_size=1):
    tokens = tokenizer.encode_plus(
        *inputs,
        max_length=max_length,
        padding='max_length',
        truncation=True,
        return_tensors="pt"
    )
    return (
        torch.repeat_interleave(tokens['input_ids'], batch_size, 0),
        torch.repeat_interleave(tokens['attention_mask'], batch_size, 0),
        torch.repeat_interleave(tokens['token_type_ids'], batch_size, 0),
    )

# Example inputs
sequence_0 = "The company Hugging Face is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "Hugging Face's headquarters are situated in Manhattan"

paraphrase = encode(tokenizer, sequence_0, sequence_2)
not_paraphrase = encode(tokenizer, sequence_0, sequence_1)

# Run the original PyTorch model on examples
paraphrase_reference_logits = model(*paraphrase)[0]
not_paraphrase_reference_logits = model(*not_paraphrase)[0]

print('Paraphrase Reference Logits: ', paraphrase_reference_logits.detach().numpy())
print('Not-Paraphrase Reference Logits:', not_paraphrase_reference_logits.detach().numpy())

The output should look similar to this:

Paraphrase Reference Logits:     [[-0.34945598  1.9003887 ]]
Not-Paraphrase Reference Logits: [[ 0.5386365 -2.2197142]]
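
If you want a class decision rather than raw logits, a softmax over the output makes the interpretation explicit. A quick sketch, assuming index 1 corresponds to the "is a paraphrase" label in this MRPC fine-tune:

import torch.nn.functional as F

# Convert logits to probabilities; index 1 is assumed to be the "paraphrase" class
probs = F.softmax(paraphrase_reference_logits, dim=1)
print('Paraphrase probability:', probs[0][1].item())  # roughly 0.90 for the logits above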

Now, the torch_neuronx.trace() method sends operations to the Neuron Compiler (neuron-cc) for compilation and embeds the compiled artifacts in a TorchScript graph. The method expects the model and a tuple of example inputs as arguments.

neuron_model = torch_neuronx.trace(model, paraphrase)

Let’s test the Neuron-compiled model with our example inputs:

paraphrase_neuron_logits = neuron_model(*paraphrase)[0]
not_paraphrase_neuron_logits = neuron_model(*not_paraphrase)[0]

print('Paraphrase Neuron Logits: ', paraphrase_neuron_logits.detach().numpy())
print('Not-Paraphrase Neuron Logits: ', not_paraphrase_neuron_logits.detach().numpy())

The output should look similar to this:

Paraphrase Neuron Logits: [[-0.34915772 1.8981738 ]]
Not-Paraphrase Neuron Logits: [[ 0.5374032 -2.2180378]]

That’s it. With just a few lines of code changes, we compiled and ran a PyTorch model on an Amazon EC2 Inf2 instance. To learn more about which DL model architectures are a good fit for AWS Inferentia2 and the current model support matrix, visit the AWS Neuron Documentation.
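
One practical note: because the traced model is a regular TorchScript artifact, you can save it once and reload it later without recompiling. A minimal sketch (the file name is arbitrary):

# Save the compiled artifact for reuse across process restarts
torch.jit.save(neuron_model, 'bert_neuron.pt')

# Later, for example at service startup, load it without recompiling
neuron_model = torch.jit.load('bert_neuron.pt')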

Available Now
You can launch Inf2 instances today in the AWS US East (Ohio) and US East (N. Virginia) Regions as On-Demand, Reserved, and Spot Instances or as part of a Savings Plan. As usual with Amazon EC2, you pay only for what you use. For more information, see Amazon EC2 pricing.

Inf2 instances can be deployed using AWS Deep Learning AMIs, and container images are available via managed services such as Amazon SageMaker, Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), and AWS ParallelCluster.

To learn more, visit our Amazon EC2 Inf2 instances page, and please send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

— Antje

New Global AWS Data Processing Addendum

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/new-global-aws-data-processing-addendum/

Navigating data protection laws around the world is no simple task. Today, I’m pleased to announce that AWS is expanding the scope of the AWS Data Processing Addendum (Global AWS DPA) so that it applies globally whenever customers use AWS services to process personal data, regardless of which data protection laws apply to that processing. AWS is proud to be the first major cloud provider to adopt this global approach to help you meet your compliance needs for data protection.

The Global AWS DPA is designed to help you satisfy requirements under data protection laws worldwide, without having to create separate country-specific data processing agreements for every location where you use AWS services. By introducing this global, one-stop addendum, we are simplifying contracting procedures and helping to reduce the time that you spend assessing contractual data privacy requirements on a country-by-country basis.

If you have signed a copy of the previous AWS General Data Protection Regulation (GDPR) DPA, then you do not need to take any action and can continue to rely on that addendum to satisfy data processing requirements. AWS is always innovating to help you meet your compliance obligations wherever you operate. We’re confident that this expanded Global AWS DPA will help you on your journey. If you have questions or need more information, see Data Protection & Privacy at AWS and GDPR Center.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS cloud, and he leads the AWS trade and product compliance team.

Introducing AWS Lambda response streaming

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/

Today, AWS Lambda is announcing support for response payload streaming. Response streaming is a new invocation pattern that lets functions progressively stream response payloads back to clients.

You can use Lambda response payload streaming to send response data to callers as it becomes available. This can improve performance for web and mobile applications. Response streaming also allows you to build functions that return larger payloads and perform long-running operations while reporting incremental progress.

In traditional request-response models, the response needs to be fully generated and buffered before it is returned to the client. This can delay the time to first byte (TTFB) performance while the client waits for the response to be generated. Web applications are especially sensitive to TTFB and page load performance. Response streaming lets you send partial responses back to the client as they become ready, improving TTFB latency to within milliseconds. For web applications, this can improve visitor experience and search engine rankings.
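
To make the TTFB difference concrete, here is a minimal sketch that measures time to first byte with Python’s requests library. The URL is a placeholder for a streaming-enabled endpoint, such as the function URLs shown later in this post.

import time

import requests

url = "https://<url>.lambda-url.<Region>.on.aws/"  # placeholder endpoint

start = time.monotonic()
with requests.get(url, stream=True) as response:
    chunks = response.iter_content(chunk_size=None)
    next(chunks)  # blocks until the first bytes arrive
    ttfb = time.monotonic() - start
    for _ in chunks:  # drain the remainder of the stream
        pass
    total = time.monotonic() - start

print(f"TTFB: {ttfb:.3f}s, total: {total:.3f}s")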

Other applications may have large payloads, like images, videos, large documents, or database results. Response streaming lets you transfer these payloads back to the client without having to buffer the entire payload in memory. You can use response streaming to send responses larger than Lambda’s 6 MB response payload limit up to a soft limit of 20 MB.

Response streaming currently supports Node.js 14.x and subsequent managed runtimes. You can also implement response streaming using custom runtimes. You can progressively stream response payloads through Lambda function URLs (including as an Amazon CloudFront origin), with the AWS SDK, or through Lambda’s invoke API. You can also use Amazon API Gateway and Application Load Balancer to stream larger payloads.

Writing response streaming enabled functions

Writing the handler for response streaming functions differs from typical Node handler patterns. To indicate to the runtime that Lambda should stream your function’s responses, you must wrap your function handler with the streamifyResponse() decorator. This tells the runtime to use the correct stream logic path, allowing the function to stream responses.

This is an example handler with response streaming enabled:

exports.handler = awslambda.streamifyResponse(
    async (event, responseStream, context) => {
        responseStream.setContentType("text/plain");
        responseStream.write("Hello, world!");
        responseStream.end();
    }
);

Handlers wrapped with streamifyResponse() receive an additional parameter, responseStream, alongside the default Node handler parameters, event and context.

The new responseStream object provides a stream object that your function can write data to. Data written to this stream is sent immediately to the client. You can optionally set the Content-Type header of the response to pass additional metadata to your client about the contents of the stream.

Writing to the response stream

The responseStream object implements Node’s Writable Stream API. This offers a write() method to write information to the stream. However, we recommend that you use pipeline() wherever possible to write to the stream. This can improve performance, ensuring that a faster readable stream does not overwhelm the writable stream.

An example function using pipeline() showing how you can stream compressed data:

const pipeline = require("util").promisify(require("stream").pipeline);
const zlib = require('zlib');
const { Readable } = require('stream');

exports.gzip = awslambda.streamifyResponse(async (event, responseStream, _context) => {
    // As an example, convert event to a readable stream.
    const requestStream = Readable.from(Buffer.from(JSON.stringify(event)));
    
    await pipeline(requestStream, zlib.createGzip(), responseStream);
});

Ending the response stream

When using the write() method, you must end the stream before the handler returns. Use responseStream.end() to signal that you are not writing any more data to the stream. This is not required if you write to the stream with pipeline().

Reading streamed responses

Response streaming introduces a new InvokeWithResponseStream API. You can read a streamed response from your function via a Lambda function URL or use the AWS SDK to call the new API directly.

Neither API Gateway nor Lambda’s target integration with Application Load Balancer supports chunked transfer encoding, so neither provides faster TTFB for streamed responses. You can, however, use response streaming with API Gateway to return larger payload responses, up to API Gateway’s 10 MB limit. To implement this, you must configure an HTTP_PROXY integration between your API Gateway and a Lambda function URL, instead of using the LAMBDA_PROXY integration.

You can also configure CloudFront with a function URL as origin. When streaming responses through a function URL and CloudFront, you can have faster TTFB performance and return larger payload sizes.

Using Lambda response streaming with function URLs

You can configure a function URL to invoke your function and stream the raw bytes back to your HTTP client via chunked transfer encoding. To use the new InvokeWithResponseStream API, change the invoke mode of your function URL from the default BUFFERED to RESPONSE_STREAM.

RESPONSE_STREAM enables your function to stream payload results as they become available if you wrap the function with the streamifyResponse() decorator. Lambda invokes your function using the InvokeWithResponseStream API. If InvokeWithResponseStream invokes a function that is not wrapped with streamifyResponse(), Lambda does not stream the response and instead returns a buffered response, which is subject to the 6 MB size limit.

Using AWS Serverless Application Model (AWS SAM) or AWS CloudFormation, set the InvokeMode property:

  MyFunctionUrl:
    Type: AWS::Lambda::Url
    Properties:
      TargetFunctionArn: !Ref StreamingFunction
      AuthType: AWS_IAM
      InvokeMode: RESPONSE_STREAM
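
If the function URL already exists, you can make the same change programmatically. This is a minimal sketch with boto3, assuming an SDK version recent enough to include the InvokeMode parameter; the function name is a placeholder.

import boto3

lambda_client = boto3.client("lambda")

# Switch an existing function URL from buffered to streamed responses.
lambda_client.update_function_url_config(
    FunctionName="my-streaming-function",  # placeholder
    InvokeMode="RESPONSE_STREAM",
)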

Using generic HTTP client libraries with function URLs

Each language or framework may use different methods to form an HTTP request and parse a streamed response. Some HTTP client libraries only return the response body after the server closes the connection. These clients do not work with functions that return a response stream. To get the benefit of response streams, use an HTTP client that returns response data incrementally. Many HTTP client libraries already support streamed responses, including the Apache HttpClient for Java, Node’s built-in http client, and Python’s requests and urllib3 packages. Consult the documentation for the HTTP library that you are using.
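
As an illustration of incremental consumption, here is a minimal sketch using Python’s urllib3, one of the libraries mentioned above. The endpoint is a placeholder, and an IAM-authenticated function URL would additionally require SigV4-signed requests.

import urllib3

http = urllib3.PoolManager()
response = http.request(
    "GET",
    "https://<url>.lambda-url.<Region>.on.aws/",  # placeholder endpoint
    preload_content=False,  # do not buffer the entire body
)
for chunk in response.stream(1024):
    # Print each chunk as it arrives instead of waiting for the full body.
    print(chunk.decode("utf-8"), end="", flush=True)
response.release_conn()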

Example applications

There are a number of example Lambda streaming applications in the Serverless Patterns Collection. They use AWS SAM to build and deploy the resources in your AWS account.

Clone the repository and explore the examples. The README file in each pattern folder contains additional information.

git clone https://github.com/aws-samples/serverless-patterns/ 
cd serverless-patterns

Time to first byte using write()

  1. To show how streaming improves time to first byte, deploy the lambda-streaming-ttfb-write-sam pattern.
  2. cd lambda-streaming-ttfb-write-sam
  3. Use AWS SAM to deploy the resources to your AWS account. Run a guided deployment to set the default parameters for the first deployment.
  4. sam deploy -g --stack-name lambda-streaming-ttfb-write-sam

    For subsequent deployments, you can use sam deploy.

  5. Enter a Stack Name and accept the initial defaults.
  6. AWS SAM deploys a Lambda function with streaming support and a function URL.

    AWS SAM deploy -g

    Once the deployment completes, AWS SAM provides details of the resources.

    AWS SAM resources

    The AWS SAM output returns a Lambda function URL.

  7. Use curl with your AWS credentials to view the streaming response as the URL uses AWS Identity and Access Management (IAM) for authorization. Replace the URL and Region parameters for your deployment.
curl --request GET https://<url>.lambda-url.<Region>.on.aws/ --user AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-sigv4 'aws:amz:<Region>:lambda'

You can see the gradual display of the streamed response.

Using curl to stream response from write() function

Time to first byte using pipeline()

  1. To try an example using pipeline(), deploy the lambda-streaming-ttfb-pipeline-sam pattern.
  2. cd ..
    cd lambda-streaming-ttfb-pipeline-sam
  3. Use AWS SAM to deploy the resources to your AWS account. Run a guided deployment to set the default parameters for the first deploy.
  4. sam deploy -g --stack-name lambda-streaming-ttfb-pipeline-sam
  5. Enter a Stack Name and accept the initial defaults.
  6. Use curl with your AWS credentials to view the streaming response. Replace the URL and Region parameters for your deployment.
curl --request GET https://<url>.lambda-url.<Region>.on.aws/ --user AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-sigv4 'aws:amz:<Region>:lambda'

You can see the pipelined response stream returned.

Using curl to stream response from function

Large payloads

  1. To show how streaming enables you to return larger payloads, deploy the lambda-streaming-large-sam application. AWS SAM deploys a Lambda function that returns a 7 MB PDF file, which is larger than Lambda’s non-stream 6 MB response payload limit.
  2. cd ..
    cd lambda-streaming-large-sam
    sam deploy -g --stack-name lambda-streaming-large-sam
  3. The AWS SAM output returns a Lambda function URL. Use curl with your AWS credentials to view the streaming response.
curl --request GET https://<url>.lambda-url.<Region>.on.aws/ --user AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-sigv4 'aws:amz:<Region>:lambda' -o SVS401-ri22.pdf -w '%{content_type}'

This downloads the PDF file SVS401-ri22.pdf to your current directory and displays the content type as application/pdf.

You can also use API Gateway to stream a large payload with an HTTP_PROXY integration with a Lambda function URL.

Invoking a function with response streaming using the AWS SDK

You can use the AWS SDK to stream responses directly from the new Lambda InvokeWithResponseStream API. This provides additional functionality such as handling midstream errors. This can be helpful when building, for example, internal microservices. Response streaming is supported with the AWS SDK for Java 2.x, AWS SDK for JavaScript v3, and AWS SDKs for Go version 1 and version 2.

The SDK response returns an event stream that you can read from. The event stream contains two event types. PayloadChunk contains a raw binary buffer with partial response data received by the client. InvokeComplete signals that the function has completed sending data. It also contains additional metadata, such as whether the function encountered an error in the middle of the stream. Errors can include unhandled exceptions thrown by your function code and function timeouts.
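
For example, here is a minimal sketch that reads the event stream with boto3, assuming a boto3 release recent enough to include the InvokeWithResponseStream API; the function name is a placeholder.

import boto3

client = boto3.client("lambda")
response = client.invoke_with_response_stream(
    FunctionName="my-streaming-function"  # placeholder
)

for event in response["EventStream"]:
    if "PayloadChunk" in event:
        # Partial response data, delivered as the function writes it.
        print(event["PayloadChunk"]["Payload"].decode("utf-8"), end="")
    elif "InvokeComplete" in event:
        # End of stream; may carry details of a midstream error or timeout.
        error_code = event["InvokeComplete"].get("ErrorCode")
        if error_code:
            print(f"\nStream ended with error: {error_code}")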

Using the AWS SDK for JavaScript v3

  1. To see how to use the AWS SDK to stream responses from a function, deploy the lambda-streaming-sdk-sam pattern.
  2. cd ..
    cd lambda-streaming-sdk-sam
    sam deploy -g --stack-name lambda-streaming-sdk-sam
  3. Enter a Stack Name and accept the initial defaults.
  4. AWS SAM deploys three Lambda functions with streaming support:

  • HappyPathFunction: Returns a full stream.
  • MidstreamErrorFunction: Simulates an error midstream.
  • TimeoutFunction: Times out before the stream completes.

  5. Run the SDK example application, which invokes each Lambda function and outputs the result.

  6. npm install @aws-sdk/client-lambda
    node index.mjs

    You can see each function and how the midstream and timeout errors are returned to the SDK client.

    Streaming midstream error

    Streaming timeout error

    Quotas and pricing

    Streaming responses incur an additional cost for network transfer of the response payload. You are billed based on the number of bytes generated and streamed out of your Lambda function beyond the first 6 MB. For more information, see Lambda pricing.

    There is an initial maximum response size of 20 MB, which is a soft limit you can increase. There is a maximum bandwidth throughput limit of 16 Mbps (2 MB/s) for streaming functions.

    Conclusion

    Today, AWS Lambda is announcing support for response payload streaming to send partial responses to callers as the responses become available. This can improve performance for web and mobile applications. You can also use response streaming to build functions that return larger payloads and perform long-running operations while reporting incremental progress. Stream partial responses through Lambda function URLs, or using the AWS SDK. Response streaming currently supports the Node.js 14.x and subsequent runtimes, as well as custom runtimes.

    There are a number of example Lambda streaming applications in the Serverless Patterns Collection to explore the functionality.

    Lambda response streaming support is also available through many AWS Lambda Partners such as Datadog, Dynatrace, New Relic, Pulumi and Lumigo.

    For more serverless learning resources, visit Serverless Land.

    New – Self-Service Provisioning of Terraform Open-Source Configurations with AWS Service Catalog

    Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-self-service-provisioning-of-terraform-open-source-configurations-with-aws-service-catalog/

    With AWS Service Catalog, you can create, govern, and manage a catalog of infrastructure as code (IaC) templates that are approved for use on AWS. These IaC templates can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. You can control which IaC templates and versions are available, what is configured by each version, and who can access each template based on individual, group, department, or cost center. End users such as engineers, database administrators, and data scientists can then quickly discover and self-service provision approved AWS resources that they need to use to perform their daily job functions.

    When using Service Catalog, the first step is to create products based on your IaC templates. You can then collect products, together with configuration information, in a portfolio.

    Starting today, you can define Service Catalog products and their resources using either AWS CloudFormation or HashiCorp Terraform and choose the tool that better aligns with your processes and expertise. You can now integrate your existing Terraform configurations into Service Catalog to make them part of a centrally approved portfolio of products and share it with the AWS accounts used by your end users. In this way, you can prevent inconsistencies and mitigate the risk of noncompliance.

    When resources are deployed by Service Catalog, you can maintain least privilege access during provisioning and govern tagging on the deployed resources. End users of Service Catalog pick and choose what they need from the list of products and versions they have access to. Then, they can provision products in a single action regardless of the technology (CloudFormation or Terraform) used for the deployment.

    The Service Catalog hub-and-spoke model that enables organizations to govern at scale can now be extended to include Terraform configurations. With the hub-and-spoke model, you can centrally manage deployments using a management/user account relationship:

    • One management account – Used to create Service Catalog products, organize them into portfolios, and share portfolios with user accounts
    • Multiple user accounts (up to thousands) – A user account is any AWS account in which the end users of Service Catalog are provisioning resources.

    Let’s see how this works in practice.

    Creating an AWS Service Catalog Product Using Terraform
    To get started, I install the Terraform Reference Engine (provided by AWS on GitHub) that configures the code and infrastructure required for the Terraform open-source engine to work with AWS Service Catalog. I only need to do this once, in the management account for Service Catalog, and the setup takes just minutes. I use the automated installation script:

    ./deploy-tre.sh -r us-east-1

    To keep things simple for this post, I create a product deploying a single EC2 instance using AWS Graviton processors and the Amazon Linux 2023 operating system. Here’s the content of my main.tf file:

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 4.16"
        }
      }
    
      required_version = ">= 1.2.0"
    }
    
    provider "aws" {
      region  = "us-east-1"
    }
    
    resource "aws_instance" "app_server" {
      ami           = "ami-00c39f71452c08778"
      instance_type = "t4g.large"
    
      tags = {
        Name = "GravitonServerWithAmazonLinux2023"
      }
    }

    I sign in to the AWS Management Console in the management account for Service Catalog. In the Service Catalog console, I choose Product list in the Administration section of the navigation pane. There, I choose Create product.

    In Product details, I select Terraform open source as Product type. I enter a product name and description and the name of the owner.

    Console screenshot.

    In the Version details, I choose to Upload a template file (using a tar.gz archive). Optionally, I can specify the template using an S3 URL or an external code repository (on GitHub, GitHub Enterprise Server, or Bitbucket) using an AWS CodeStar provider.

    Console screenshot.
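
    For reference, the tar.gz archive can be produced in a couple of lines. Here is a sketch using Python’s standard tarfile module; the file names are placeholders.

    import tarfile

    # Package the Terraform configuration for upload to Service Catalog.
    with tarfile.open("graviton-server.tar.gz", "w:gz") as archive:
        archive.add("main.tf")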

    I enter support details and custom tags. Note that tags can be used to categorize your resources and also to check permissions to create a resource. Then, I complete the creation of the product.

    Adding an AWS Service Catalog Product Using Terraform to a Portfolio
    Now that the Terraform product is ready, I add it to my portfolio. A portfolio can include both Terraform and CloudFormation products. I choose Portfolios from the Administrator section of the navigation pane. There, I search for my portfolio by name and open it. I choose Add product to portfolio. I search for the Terraform product by name and select it.

    Console screenshot.

    Terraform products require a launch constraint. The launch constraint specifies the name of an AWS Identity and Access Management (IAM) role that is used to deploy the product. I need to separately ensure that this role is created in every account with which the product is shared.

    The launch role is assumed by the Terraform open-source engine in the management account when an end user launches, updates, or terminates a product. The launch role also contains permissions to describe, create, and update a resource group for the provisioned product and tag the product resources. In this way, Service Catalog keeps the resource group up-to-date and tags the resources associated with the product.

    The launch role enables least privilege access for end users. With this feature, end users don’t need permission to directly provision the product’s underlying resources because your Terraform open-source engine assumes the launch role to provision those resources, such as an approved configuration of an Amazon Elastic Compute Cloud (Amazon EC2) instance.

    In the Launch constraint section, I choose Enter role name to use a role I created before for this product:

    • The trust relationship of the role defines the entities that can assume the role. For this role, the trust relationship includes Service Catalog and the management account that contains the Terraform Reference Engine.
    • For permissions, the role allows provisioning, updating, and terminating the resources required by my product, as well as managing resource groups and tags on those resources.

    Console screenshot.

    I complete the addition of the product to my portfolio. Now the product is available to the end users who have access to this portfolio.

    Launching an AWS Service Catalog Product Using Terraform
    End users see the list of products and versions they have access to and can deploy them in a single action. If you already use Service Catalog, the experience is the same as with CloudFormation products.

    I sign in to the AWS Console in the user account for Service Catalog. The portfolio I used before has been shared by the management account with this user account. In the Service Catalog console, I choose Products from the Provisioning group in the navigation pane. I search for the product by name and choose Launch product.

    Console screenshot.

    I let Service Catalog generate a unique name for the provisioned product and select the product version to deploy. Then, I launch the product.

    Console screenshot.

    After a few minutes, the product has been deployed and is available. The deployment has been managed by the Terraform Reference Engine.

    Console screenshot.

    In the Associated tags tab, I see that Service Catalog automatically added information on the portfolio and the product.

    Console screenshot.

    In the Resources tab, I see the resources created by the provisioned product. As expected, it’s an EC2 instance, and I can follow the link to open the Amazon EC2 console and get more information.

    Console screenshot.

    End users such as engineers, database administrators, and data scientists can continue to use Service Catalog and launch the products they need without having to consider if they are provisioned using Terraform or CloudFormation.

    Availability and Pricing
    AWS Service Catalog support for Terraform open-source configurations is available today in all AWS Regions where AWS Service Catalog is offered. There is no change in pricing when using Terraform. With Service Catalog, you pay for the API calls you make to the service, and you can start for free with the free tier. You also pay for the resources used and created by the Terraform Reference Engine. For more information, see Service Catalog Pricing.

    Enable self-service provisioning at scale for your Terraform open-source configurations.

    Danilo

    AWS Supply Chain Now Generally Available – Mitigate Risks and Lower Costs with Increased Visibility and Actionable Insights

    Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-supply-chain-now-generally-available-mitigate-risks-and-lower-costs-with-increased-visibility-and-actionable-insights/

    Like many of you, I experienced the disruptive effects introduced by external forces such as weather, geopolitical instability, and the COVID-19 pandemic. To improve supply chain resilience, organizations need visibility across their supply chain so that they can quickly find and respond to risks. This is increasingly complex as their customers’ preferences are rapidly changing, and historical demand assumptions are no longer valid.

    To add to that, supply chain data is often spread out across disconnected systems, and existing tools lack the elastic processing power and specialized machine learning (ML) models needed to create meaningful insights. Without real-time insights, organizations cannot detect variations in demand patterns, unexpected trends, or supply disruptions. And failing to react quickly can impact their customers and operational costs.

    Today, I am happy to share that AWS Supply Chain is generally available. AWS Supply Chain is a cloud application that mitigates risk and lowers costs with unified data, ML-powered actionable insights, and built-in contextual collaboration. Let’s see how it can help your organization before taking a look at how you can use it.

    How AWS Supply Chain Works
    AWS Supply Chain connects to your existing enterprise resource planning (ERP) and supply chain management systems. When those connections are in place, you can benefit from the following capabilities:

    • A data lake is set up using ML models that have been pre-trained for supply chains to understand, extract, and transform data from different sources into a unified data model. The data lake can ingest data from a variety of data sources, including your existing ERP systems (such as SAP S/4HANA) and supply chain management systems.
    • Your data is represented in a real-time visual map using a set of interactive visual end-user interfaces built on a micro front-end architecture. This map highlights current inventory selection, quantity, and health at each location (for example, inventory that is at risk for stock out). Inventory managers can drill down into specific facilities and view the current inventory on hand, in transit, and potentially at risk in each location.
    • Actionable insights are automatically generated for potential supply chain risks (for example, overstock or stock outs) using the comprehensive supply chain data in the data lake and are shown in the real-time visual map. ML models, built on technology similar to what Amazon uses, generate more accurate vendor lead time predictions. Supply planners can use these predicted vendor lead times to update static assumptions built into planning models to reduce stock out or excess inventory risks.
    • Rebalancing options are automatically evaluated, ranked, and shared to provide inventory managers and planners with recommended actions to take if a risk is detected. Recommendation options are scored by the percentage of risk resolved, the distance between facilities, and the sustainability impact. Supply chain managers can also drill down to review the impact each option will have on other distribution centers across the network. Recommendations continuously improve by learning from the decisions you make.
    • To help you work with remote colleagues and implement rebalancing actions, contextual built-in collaboration capabilities are provided. When teams chat and message each other, the information about the risk and recommended options is shared, reducing errors and delays caused by poor communication so you can resolve issues faster.
    • To help remove the manual effort and guesswork around demand planning, ML is used to analyze historical sales data and real-time data (for example, open orders), create forecasts, and continually adjust models to improve accuracy. Demand planning also continuously learns from changing demand patterns and user inputs to offer near real-time forecast updates, allowing organizations to proactively adjust supply chain operations.

    Now, let’s see how this works in practice.

    Using AWS Supply Chain To Reduce Inventory Risks
    The AWS Supply Chain team was kind enough to share an environment connected to an ERP system. When I log in, I choose Inventory and the Network Map from the navigation pane. Here, I have a general overview of the inventory status of the distribution centers (DCs). Using the timeline slider, I am able to fast forward in time and see how the inventory risks change over time. This allows me to predict future risks, not just the current ones.

    Console screenshot.

    I choose the Seattle DC to have more information on that location.

    Console screenshot.

    Instead of looking at each distribution center, I create an insight watchlist that is analyzed by AWS Supply Chain. I choose Insights from the navigation pane and then Inventory Risk to track stock out and inventory excess risks. I enter a name (Shortages) for the insight watchlist and select all locations and products.

    Console screenshot.

    In the Tracking parameters, I choose to only track Stock Out Risk. I want to be warned if the inventory level is 10 percent below the minimum inventory target and set my time horizon to two weeks. I save to complete the creation of the insight watchlist.

    Console screenshot.

    I choose New Insight Watchlist to create another one. This time, I select the Lead time Deviation insight type. I enter a name (Lead time) for the insight watchlist and, again, select all locations and products. I choose to be notified when there is a deviation in the lead time that is 20 percent or more above the planned lead times, considering one year of historical data.

    Console screenshot.

    After a few minutes, I see that new insights are available. In the Insights page, I select Shortages from the dropdown. On the left, I have a series of stacks of insights grouped by week. I expand the first stack and drag one of the insights to put it In Review.

    Console screenshot.

    I choose View Details to see the status and the recommendations for this out-of-stock risk for a specific product and location.

    Console screenshot.

    Just after the Overview, a list of Resolution Recommendations is sorted by a Score. Score weights are used to rank recommendations by setting the relative importance of distance, emissions (CO2), and percentage of the risk resolved. In the settings, I can also configure a max distance to be considered when proposing recommendations. The first recommendation is the best based on how I configure the score.

    Console screenshot.

    The recommendation shows the effect of the rebalance. If I move eight units of this product from the Detroit DC to the Seattle DC, the projected inventory is now balanced (color green) for the next two days in the After Rebalance section instead of being out of stock (red) as in the Before Rebalance section. This also solves the excess stock risk (purple) in the Detroit DC. At the top of the recommendation, I see the probability that this rebalance resolves the inventory risk and the impact on emissions (CO2).

    I choose Select to proceed with this recommendation. In the dialog, I enter a comment and choose to message the team to start using the collaboration capabilities of AWS Supply Chain. In this way, all the communication from those involved in solving this inventory issue is stored and linked to the specific issue instead of happening in a separate channel such as emails. I choose Confirm.

    Console screenshot.

    Straight from the Stock Out Risk, I can message those who can help me implement the recommendation.

    Console screenshot.

    I get the reply here, but I prefer to see it in all its context. I choose Collaboration from the navigation pane. There, I find all the conversations started from insights (one for now) and the Stock Out Risk and Resolution recommendations as proposed before. All those collaborating on solving the issue have a clear view of the problem and the possible resolutions. For future reference, this conversation will be available with its risk and resolution context.

    Console screenshot.

    When the risk is resolved, I move the Stock Out Risk card to Resolved.

    Console screenshot.

    Now, I look at the Lead time insights. Similar to before, I choose an insight and put it In Review. I choose View Details to have more information. I see that, based on historical purchase orders, the recommended lead time for this specific product and location should be seven days and not one day as found in the connected ERP system. This can have a negative impact on the expectations of my customers.

    Console screenshot.

    Without the need of re-platforming or reimplementing the current systems, I was able to connect AWS Supply Chain and get insights on the inventory of the distribution centers and recommendations based on my personal settings. These recommendations help resolve inventory risks such as items being out of stock or having excess stock in a distribution center. By better understanding the lead time, I can set better expectations for end customers.

    Availability and Pricing
    AWS Supply Chain is available today in the following AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Frankfurt).

    AWS Supply Chain allows your organization to quickly gain visibility across your supply chain, and it helps you make more informed supply chain decisions. You can use AWS Supply Chain to mitigate overstock and stock-out risks. In this way, you can improve your customer experience, and at the same time, AWS Supply Chain can help you lower excess inventory costs. Using contextual chat and messaging, you can improve the way you collaborate with other teams and resolve issues quickly.

    With AWS Supply Chain, you only pay for what you use. There are no required upfront licensing fees or long-term contracts. For more information, see AWS Supply Chain pricing.

    Mitigate risks and lower costs with increased visibility and ML-powered actionable insights for your supply chain.

    Danilo

    New – Ready-to-use Models and Support for Custom Text and Image Classification Models in Amazon SageMaker Canvas

    Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/new-ready-to-use-models-and-support-for-custom-text-and-image-classification-models-in-amazon-sagemaker-canvas/

    Today AWS announces new features in Amazon SageMaker Canvas that help business analysts generate insights from thousands of documents, images, and lines of text in minutes with machine learning (ML). Starting today, you can access ready-to-use models and create custom text and image classification models alongside previously supported custom models for tabular data, all without requiring ML experience or writing a line of code.

    Business analysts across different industries want to apply AI/ML solutions to generate insights from a variety of data and respond to ad-hoc analysis requests coming from business stakeholders. By applying AI/ML in their workflows, analysts can automate manual, time-consuming, and error-prone processes, such as inspection, classification, and extraction of insights from raw data, images, or documents. However, applying AI/ML to business problems requires technical expertise, and building custom models can take several weeks or even months.

    Launched in 2021, Amazon SageMaker Canvas is a visual, point-and-click service that allows business analysts to use a variety of ready-to-use models or create custom models to generate accurate ML predictions on their own.

    Ready-to-use Models
    Customers can use SageMaker Canvas to access ready-to-use models that can be used to extract information and generate predictions from thousands of documents, images, and lines of text in minutes. These ready-to-use models include sentiment analysis, language detection, entity extraction, personal information detection, object and text detection in images, expense analysis for invoices and receipts, identity document analysis, and more generalized document and form analysis.

    For example, you can select the sentiment analysis ready-to-use model and upload product reviews from social media and customer support tickets to quickly understand how your customers feel about your products. Using the personal information detection ready-to-use model, you can detect and redact personally identifiable information (PII) from emails, support tickets, and documents. Using the expense analysis ready-to-use model, you can easily detect and extract data from your scanned invoices and receipts and generate insights about that data.

    These ready-to-use models are powered by AWS AI services, including Amazon Rekognition, Amazon Comprehend, and Amazon Textract.

    Ready-to-use models available
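
    For a sense of what happens behind the scenes, here is a hedged sketch that calls Amazon Comprehend’s sentiment API directly with boto3; SageMaker Canvas performs the equivalent step for you without any code.

    import boto3

    comprehend = boto3.client("comprehend")

    # Detect sentiment for a single piece of text (the example text is made up).
    result = comprehend.detect_sentiment(
        Text="The new standing desk is sturdy and easy to assemble.",
        LanguageCode="en",
    )
    print(result["Sentiment"], result["SentimentScore"])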

    Custom Text and Image Classification Models
    Customers that need custom models trained for their business-specific use case can use SageMaker Canvas to create text and image classification models.

    You can use SageMaker Canvas to create custom text classification models to classify data according to your needs. For example, imagine that you work as a business analyst at a company that provides customer support. When a customer support agent engages with a customer, they create a ticket, and they need to record the ticket type, for example, “incident”, “service request”, or “problem”. This field is often left blank, which makes the data hard to analyze when reports are generated. Now, using SageMaker Canvas, you can create a custom text classification model, train it with existing customer support ticket information and ticket type, and use it to predict the type of tickets in the future when working on a report with missing data.

    You can also use SageMaker Canvas to create custom image classification models using your own image datasets. For instance, imagine you work as a business analyst at a company that manufactures smartphones. As part of your role, you need to prepare reports and respond to questions from business stakeholders related to quality assessment and its trends. Every time a phone is assembled, a picture is automatically taken, and at the end of the week, you receive all those images. Now with SageMaker Canvas, you can create a new custom image classification model that is trained to identify common manufacturing defects. Then, every week, you can use the model to analyze the images and predict the quality of the phones produced.

    SageMaker Canvas in Action
    Let’s imagine that you are a business analyst for an e-commerce company. You have been tasked with understanding the customer sentiment towards all the new products for this season. Your stakeholders require a report that aggregates the results by item category to decide what inventory they should purchase in the following months. For example, they want to know if the new furniture products have received positive sentiment. You have been provided with a spreadsheet containing reviews for the new products, as well as an outdated file that categorizes all the products on your e-commerce platform. However, this file does not yet include the new products.

    To solve this problem, you can use SageMaker Canvas. First, you will need to use the sentiment analysis ready-to-use model to understand the sentiment for each review, classifying them as positive, negative, or neutral. Then, you will need to create a custom text classification model that predicts the categories for the new products based on the existing ones.

    Ready-to-use Model – Sentiment Analysis
    To quickly learn the sentiment of each review, you can do a bulk update of the product reviews and generate a file with all the sentiment predictions.

    To get started, locate Sentiment analysis on the Ready-to-use models page, and under Batch prediction, select Import new dataset.

    Using ready-to-use sentiment analysis with a batch dataset

    When you create a new dataset, you can upload the dataset from your local machine or use Amazon Simple Storage Service (Amazon S3). For this demo, you will upload the file locally. You can find all the product reviews used in this example in the Amazon Customer Reviews dataset.

    After you complete uploading the file and creating the dataset, you can Generate predictions.

    Select dataset and generate predictions

    The prediction generation takes less than a minute, depending on the size of the dataset, and then you can view or download the results.

    View or download predictions

    The results from this prediction can be downloaded as a .csv file or viewed from the SageMaker Canvas interface. You can see the sentiment for each of the product reviews.

    Preview results from ready-to-use model

    Now you have the first part of your task ready—you have a .csv file with the sentiment of each review. The next step is to classify those products into categories.

    Custom Text Classification Model
    To classify the new products into categories based on the product title, you need to train a new text classification model in SageMaker Canvas.

    In SageMaker Canvas, create a New model of the type Text analysis.

    The first step when creating the model is to select a dataset with which to train the model. You will train this model with a dataset from last season, which contains all the products except for the new collection.

    Once the dataset has finished importing, you will need to select the column that contains the data you want to predict, which in this case is the product_category column, and the column that will be used as the input for the model to make predictions, which is the product_title column.

    After you finish configuring that, you can start to build the model. There are two modes of building:

    • Quick build that returns a model in 15–30 minutes.
    • Standard build takes 2–5 hours to complete.

    To learn more about the differences between the modes of building, you can check the documentation. For this demo, pick quick build, as our dataset is smaller than 50,000 rows.

    Prepare and build your model

    When the model is built, you can analyze how the model performs. SageMaker Canvas uses the 80-20 approach; it trains the model with 80 percent of the data from the dataset and uses 20 percent of the data to validate the model.

    Model score

    When the model finishes building, you can check the model score. The scoring section gives you a visual sense of how accurate the predictions were for each category. You can learn more about how to evaluate your model’s performance in the documentation.

    After you make sure that your model has a high prediction rate, you can move on to generate predictions. This step is similar to the ready-to-use models for sentiment analysis. You can make a prediction on a single product or on a set of products. For a batch prediction, you need to select a dataset and let the model generate the predictions. For this example, you will select the same dataset that you selected in the ready-to-use model, the one with the reviews. This can take a few minutes, depending on the number of products in the dataset.

    When the predictions are ready, you can download the results as a .csv file or view how each product was classified. In the prediction results, each product is assigned only one category based on the categories provided during the model-building process.

    Predict categories

    Now you have all the necessary resources to conduct an analysis and evaluate the performance of each product category with the new collection based on customer reviews. Using SageMaker Canvas, you were able to access a ready-to-use model and create a custom text classification model without having to write a single line of code.
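
    As an illustration of that final analysis step outside of Canvas, here is a sketch with pandas; the file and column names are hypothetical stand-ins for the two prediction exports.

    import pandas as pd

    # Hypothetical export files and column names from the two prediction runs.
    sentiments = pd.read_csv("review_sentiment_predictions.csv")   # product_title, Sentiment
    categories = pd.read_csv("product_category_predictions.csv")   # product_title, product_category

    # Count reviews per category and sentiment to support the inventory report.
    report = (
        sentiments.merge(categories, on="product_title")
        .groupby(["product_category", "Sentiment"])
        .size()
        .unstack(fill_value=0)
    )
    print(report)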

    Available Now
    Ready-to-use models and support for custom text and image classification models in SageMaker Canvas are available in all AWS Regions where SageMaker Canvas is available. You can learn more about the new features and how they are priced by visiting the SageMaker Canvas product detail page.

    — Marcia

    Simplify Service-to-Service Connectivity, Security, and Monitoring with Amazon VPC Lattice – Now Generally Available

    Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/

    At AWS re:Invent 2022, we introduced in preview Amazon VPC Lattice, a new capability of Amazon Virtual Private Cloud (Amazon VPC) that gives you a consistent way to connect, secure, and monitor communication between your services. With VPC Lattice, you can define policies for network access, traffic management, and monitoring to connect compute services across instances, containers, and serverless applications.

    Today, I am happy to share that VPC Lattice is now generally available. Compared to the preview, you have access to new capabilities:

    • Services can use a custom domain name in addition to the domain name automatically generated by VPC Lattice. When using HTTPS, you can configure an SSL/TLS certificate that matches the custom domain name.
    • You can deploy the open-source AWS Gateway API Controller to use VPC Lattice with a Kubernetes-native experience. It uses the Kubernetes Gateway API to let you connect services across multiple Kubernetes clusters and services running on EC2 instances, containers, and serverless functions.
    • You can use an Application Load Balancer (ALB) or a Network Load Balancer (NLB) as a target for a service.
    • The IP address target type now supports IPv6 connectivity.

    Let’s see some of these new features in practice.

    Using Amazon VPC Lattice for Service-to-Service Connectivity
    In my previous post introducing VPC Lattice, I show how to create a service network, associate multiple VPCs and services, and configure target groups for EC2 instances and Lambda functions. There, I also show how to route traffic based on request characteristics and how to use weighted routing. Weighted routing is really handy for blue/green and canary-style deployments or for migrating from one compute platform to another.

    Now, let’s see how to use VPC Lattice to allow the services of an e-commerce application to communicate with each other. For simplicity, I only consider four services:

    • The Order service, running as a Lambda function.
    • The Inventory service, deployed as an Amazon Elastic Container Service (Amazon ECS) service in a dual-stack VPC supporting IPv6.
    • The Delivery service, deployed as an ECS service using an ALB to distribute traffic to the service tasks.
    • The Payment service, running on an EC2 instance.

    First, I create a service network. The Order service needs to call the Inventory service (to check if an item is available for purchase), the Delivery service (to organize the delivery of the item), and the Payment service (to transfer the funds). The following diagram shows the service-to-service communication from the perspective of the service network.

    Diagram describing the service network view of the e-commerce services.

    These services run in different AWS accounts and multiple VPCs. VPC Lattice handles the complexity of setting up connectivity across VPC boundaries and permissions across accounts so that service-to-service communication is as simple as an HTTP/HTTPS call.

    The following diagram shows how the communication flows from an implementation point of view.

    Diagram describing the implementation view of the e-commerce services.

    The Order service runs in a Lambda function connected to a VPC. Because all the VPCs in the diagram are associated with the service network, the Order service is able to call the other services (Inventory, Delivery, and Payment) even if they are deployed in different AWS accounts and in VPCs with overlapping IP addresses.

    Using a Network Load Balancer (NLB) as Target
    The Inventory service runs in a dual-stack VPC. It’s deployed as an ECS service with an NLB to distribute traffic to the tasks in the service. To get the IPv6 addresses of the NLB, I look for the network interfaces used by the NLB in the EC2 console.

    Console screenshot.

    When creating the target group for the Inventory service, under Basic configuration, I choose IP addresses as the target type. Then, I select IPv6 for the IP Address type.

    Console screenshot.

    In the next step, I enter the IPv6 addresses of the NLB as targets. After the target group is created, the health checks test the targets to see if they are responding as expected.

    Console screenshot.

    Using an Application Load Balancer (ALB) as Target
    Using an ALB as a target is even easier. When creating a target group for the Delivery service, under Basic configuration, I choose the new Application Load Balancer target type.

    Console screenshot.

    I select the VPC in which to look for the ALB and choose the Protocol version.

    Console screenshot.

    In the next step, I choose Register now and select the ALB from the dropdown. I use the default port used by the target group. VPC Lattice does not provide additional health checks for ALBs. However, load balancers already have their own health checks configured.

    Console screenshot.

    Using Custom Domain Names for Services
    To call these services, I use custom domain names. For example, when I create the Payment service in the VPC console, I choose to Specify a custom domain configuration, enter a Custom domain name, and select an SSL/TLS certificate for the HTTPS listener. The Custom SSL/TLS certificate dropdown shows available certificates from AWS Certificate Manager (ACM).

    Console screenshot.

    Securing Service-to-Service Communications
    Now that the target groups have been created, let’s see how I can secure the way services communicate with each other. To implement zero-trust authentication and authorization, I use AWS Identity and Access Management (IAM). When creating a service, I select AWS IAM as the Auth type.

    I select the Allow only authenticated access policy template so that requests to services need to be signed using Signature Version 4, the same signing protocol used by AWS APIs. In this way, requests between services are authenticated by their IAM credentials, and I don’t have to manage secrets to secure their communications.

    Console screenshot.
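
    As an illustration, here is a minimal sketch of a Signature Version 4-signed call, as the Order service’s Lambda function might make it, using botocore’s signer. The service URL is a placeholder, and the signing name is an assumption that mirrors the vpc-lattice-svcs action prefix used in the auth policy below.

    import boto3
    import requests
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    url = "https://inventory-0123456789.<Region>.on.aws/stock"  # placeholder
    region = "us-east-1"

    # Sign the request with the caller's IAM credentials (Signature Version 4).
    credentials = boto3.Session().get_credentials()
    request = AWSRequest(method="GET", url=url)
    SigV4Auth(credentials, "vpc-lattice-svcs", region).add_auth(request)
    prepped = request.prepare()

    response = requests.get(prepped.url, headers=dict(prepped.headers))
    print(response.status_code, response.text)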

    Optionally, I can be more precise and use an auth policy that only gives access to some services or specific URL paths of a service. For example, I can apply the following auth policy to give the Order service’s Lambda function these permissions:

    • Read-only access (GET method) to the Inventory service /stock URL path.
    • Full access (any HTTP method) to the Delivery service /delivery URL path.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "<Order Service Lambda Function IAM Role ARN>"
                },
                "Action": "vpc-lattice-svcs:Invoke",
                "Resource": "<Inventory Service ARN>/stock",
                "Condition": {
                    "StringEquals": {
                        "vpc-lattice-svcs:RequestMethod": "GET"
                    }
                }
            },
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "<Order Service Lambda Function IAM Role ARN>"
                },
                "Action": "vpc-lattice-svcs:Invoke",
                "Resource": "<Delivery Service ARN>/delivery"
            }
        ]
    }

    Using VPC Lattice, I quickly configured the communication between the services of my e-commerce application, including security and monitoring. Now, I can focus on the business logic instead of managing how services communicate with each other.

    Availability and Pricing
    Amazon VPC Lattice is available today in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Europe (Ireland).

    With VPC Lattice, you pay for the time a service is provisioned, the amount of data transferred through each service, and the number of requests. There is no charge for the first 300,000 requests every hour, and you only pay for requests above this threshold. For more information, see VPC Lattice pricing.

    We designed VPC Lattice to allow incremental opt-in over time. Each team in your organization can choose if and when to use VPC Lattice. Other applications can connect to VPC Lattice services using standard protocols such as HTTP and HTTPS. By using VPC Lattice, you can focus on your application logic and improve productivity and deployment flexibility with consistent support for instances, containers, and serverless computing.

    Simplify the way you connect, secure, and monitor your services with VPC Lattice.

    Danilo

    Gain insights and knowledge at AWS re:Inforce 2023

    Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/gain-insights-and-knowledge-at-aws-reinforce-2023/

    I’d like to personally invite you to attend the Amazon Web Services (AWS) security conference, AWS re:Inforce 2023, in Anaheim, CA on June 13–14, 2023. You’ll have access to interactive educational content to address your security, compliance, privacy, and identity management needs. Join security experts, peers, leaders, and partners from around the world who are committed to the highest security standards, and learn how your business can stay ahead in the rapidly evolving security landscape.

    As Chief Information Security Officer of AWS, my primary job is to help you navigate your security journey while keeping the AWS environment secure. AWS re:Inforce offers an opportunity for you to dive deep into how to use security to drive adaptability and speed for your business. With headlines currently focused on the macroeconomy and broader technology topics such as the intersection between AI and security, this is your chance to learn the tactical and strategic lessons that will help you develop a security culture that facilitates business innovation.

    Here are a few reasons I’m especially looking forward to this year’s program:

    Sharing my keynote, including the latest innovations in cloud security and what AWS Security is focused on

    AWS re:Inforce 2023 will kick off with my keynote on Tuesday, June 13, 2023 at 9 AM PST. I’ll be joined by Steve Schmidt, Chief Security Officer (CSO) of Amazon, and other industry-leading guest speakers. You’ll hear all about the latest innovations in cloud security from AWS and learn how you can improve the security posture of your business, from the silicon to the top of the stack. Take a look at my most recent re:Invent presentation, What we can learn from customers: Accelerating innovation at AWS Security and the latest re:Inforce keynote for examples of the type of content to expect.

    Engaging sessions with real-world examples of how security is embedded into the way businesses operate

    AWS re:Inforce offers an opportunity to learn how to prioritize and optimize your security investments, be more efficient, and respond faster to an evolving landscape. Using the Security pillar of the AWS Well-Architected Framework, these sessions will demonstrate how you can build practical and prescriptive measures to protect your data, systems, and assets.

    Sessions are offered at all levels and for all backgrounds: AWS re:Inforce is designed to meet you where you are on your cloud security journey, whatever your interests and educational needs. There are learning opportunities in several hundred sessions across six tracks: Data Protection; Governance, Risk & Compliance; Identity & Access Management; Network & Infrastructure Security; Threat Detection & Incident Response; and, new this year, Application Security. In this new track, discover how AWS experts, customers, and partners move fast while maintaining the security of the software they are building. You’ll hear from AWS leaders and get hands-on experience with the tools that can help you ship quickly and securely.

    Shifting security into the “department of yes”

    Rather than being seen as the proverbial “department of no,” IT teams have the opportunity to make security a business differentiator, especially when they have the confidence and tools to do so. AWS re:Inforce provides unique opportunities to connect with and learn from AWS experts, customers, and partners who share insider insights that can be applied immediately in your everyday work. The conference sessions, led by AWS leaders who share best practices and trends, will include interactive workshops, chalk talks, builders’ sessions, labs, and gamified learning. This means you’ll be able to work with experts and put best practices to use right away.

    Our Expo offers opportunities to connect face-to-face with AWS security solution builders who are the tip of the spear for security. You can ask questions and build solutions together. AWS Partners that participate in the Expo have achieved security competencies and are there to help you find ways to innovate and scale your business.

    A full conference pass is $1,099. Register today with the code ALUMwrhtqhv to receive a limited-time $300 discount, while supplies last.

    I’m excited to see everyone at re:Inforce this year. Please join us for this unique event that showcases our commitment to giving you direct access to the latest security research and trends. Our teams at AWS will continue to release additional details about the event on our website, and you can get real-time updates by following @awscloud and @AWSSecurityInfo.

    I look forward to seeing you in Anaheim and providing insight into how we prioritize security at AWS to help you navigate your cloud security investments.

     
    If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

    Want more AWS Security news? Follow us on Twitter.

    CJ Moses

    CJ Moses

    CJ is the Chief Information Security Officer (CISO) at AWS, where he leads product design and security engineering. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Previously, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. He also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

    The National Intelligence Center of Spain and AWS collaborate to promote public sector cybersecurity

    Post Syndicated from Borja Larrumbide original https://aws.amazon.com/blogs/security/the-national-intelligence-center-of-spain-and-aws-collaborate-to-promote-public-sector-cybersecurity/


    The National Intelligence Center and National Cryptological Center (CNI-CCN)—attached to the Spanish Ministry of Defense—and Amazon Web Services (AWS) have signed a strategic collaboration agreement to jointly promote cybersecurity and innovation in the public sector through AWS Cloud technology.

    Under the umbrella of this alliance, the CNI-CCN will benefit from AWS support in defining the security roadmap for public administrations at different maturity levels, and AWS will help the CNI-CCN comply with its security requirements and the National Security Scheme.

    In addition, CNI-CCN and AWS will collaborate in various areas of cloud security. First, they will work together on cybersecurity training and awareness-raising focused on government institutions, through education, training, and certification programs for professionals who are experts in technology and security. Second, both organizations will collaborate in creating security guidelines for the use of cloud technology, incorporating the shared security model in the AWS cloud and contributing their experience and knowledge to help organizations with the challenges they face. Third, CNI-CCN and AWS will take part in joint events that demonstrate best practices for the deployment and secure use of the cloud, both for public and private sector organizations. Finally, AWS will support cybersecurity operations centers that are responsible for surveillance, early warning, and response to security incidents in public administrations.

    Today, AWS holds certification in the National Security Scheme (ENS) High category and was the first cloud provider to accredit several security services in the CNI-CCN’s STIC Products and Services catalog (CPSTIC), meaning that its infrastructure meets the highest levels of security and compliance for state agencies and public organizations in Spain. All of this gives Spanish organizations, including startups, large companies, and the public sector, access to AWS infrastructure that allows them to use advanced technologies such as data analytics, artificial intelligence, databases, Internet of Things (IoT), machine learning, and mobile or serverless services to drive innovation.

    In addition, the new AWS cloud infrastructure in Spain, the AWS Europe (Spain) Region, allows customers with data residency requirements to store their content in Spain, with the assurance that they maintain full control over the location of their data. The launch of the AWS Europe (Spain) Region also gives customers building applications that must comply with the General Data Protection Regulation (GDPR) access to another secure AWS Region in the European Union (EU) that helps them meet the highest levels of security, compliance, and data protection.
     



     
    If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

    Want more AWS Security news? Follow us on Twitter.

    Borja Larrumbide

    Borja Larrumbide

    Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked in companies such as Microsoft and BBVA, where he has performed in different roles and sectors. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

    AWS Application Migration Service Major Updates: Import and Export Feature, Source Server Migration Metrics Dashboard, and Additional Post-Launch Actions

    Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-application-migration-service-major-updates-import-and-export-feature-source-server-migration-metrics-dashboard-and-additional-post-launch-actions/

    AWS Application Migration Service (AWS MGN) can simplify and expedite your migration to AWS by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. In the post, How to Use the New AWS Application Migration Service for Lift-and-Shift Migrations, Channy introduced us to Application Migration Service and how to get started.

    By using Application Migration Service for migration, you can minimize time-intensive, error-prone manual processes by automating replication and conversion of your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. Last year, we introduced major improvements such as new migration servers grouping, an account-level launch template, and a post-launch actions template.

    Today, I’m pleased to announce three major updates of Application Migration Service. Here’s the quick summary for each feature release:

    • Import and export – You can now use Application Migration Service to import your source environment inventory list to the service from a CSV file. You can also export your source server inventory for reporting purposes, offline reviews and updates, integration with other tools and AWS services, and performing bulk configuration changes by reimporting the inventory list.
    • Server migration metrics dashboard – This new dashboard can help simplify migration project management by providing an aggregated view of the migration lifecycle status of your source servers.
    • Additional post-launch modernization actions – In this update, Application Migration Service added eight additional predefined post-launch actions. These actions are applied to your migrated applications when you launch them on AWS.

    Let me share how you can use these features for your migration.

    Import and Export
    Before we go further into the import and export features, let’s discuss two concepts in Application Migration Service: applications and waves. An application represents a group of servers that you define and identify as a unit. Within an application, you can perform various activities with Application Migration Service, such as monitoring, specifying tags, and performing bulk operations, for example, launching test instances. You can then group applications into waves, which represent sets of servers that are migrated together as part of your migration plan.

    With the import feature, you can now import your inventory list into Application Migration Service from a CSV file. This makes it easy to manage large-scale migrations and to ingest your inventory of source servers, applications, and waves, including their attributes.

    To start using the import feature, I need to identify my server and application inventory. I can do this manually or by using discovery tools. The next thing I need to do is download the import template, which I can access from the console.

    After I download the import template, I can start mapping my inventory list into it. While mapping my inventory, I can group related servers into applications and waves. I can also perform configurations, such as defining Amazon Elastic Compute Cloud (Amazon EC2) launch template settings and specifying tags for each wave.

    The following screenshot is an example of the results of my import template:

    The next step is to upload my CSV file to an Amazon Simple Storage Service (Amazon S3) bucket. Then, I can start the import process from the Application Migration Service console by referencing the CSV file containing my inventory list that I’ve uploaded to the S3 bucket.
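
    If you prefer to script this step, the following is a hedged sketch using the AWS SDK for Python (Boto3) and the Application Migration Service StartImport API; the bucket, key, and file names are placeholders, and the response fields shown are assumptions.

    import boto3

    bucket = "my-migration-bucket"         # placeholder bucket name
    key = "inventory/import-template.csv"  # placeholder object key

    # Upload the completed import template to S3.
    boto3.client("s3").upload_file("import-template.csv", bucket, key)

    # Start the import, referencing the CSV in the S3 bucket.
    mgn = boto3.client("mgn")
    response = mgn.start_import(s3BucketSource={"s3Bucket": bucket, "s3Key": key})
    print(response["importTask"]["importID"], response["importTask"]["status"])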

    When the import process is complete, I can see the details of the import results.

    I can import inventory for servers that don’t have an agent installed, or haven’t yet been discovered by agentless replication. However, to replicate data, I need to use agentless replication, or install the AWS Replication Agent on my source servers.

    Now I can view all my inventory on the Source servers, Applications, and Waves pages in the Application Migration Service console. The following screenshot shows recently imported waves.

    In addition, with the export feature, I can export my source servers, applications, and waves along with all configurations that I’ve defined into a CSV file.

    This is helpful for reporting, offline reviews, or bulk editing before reimporting the CSV file into Application Migration Service.
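
    Scripted, the export is a single call. This sketch assumes the StartExport API takes the destination bucket and key directly; the parameter and response names here are assumptions.

    import boto3

    mgn = boto3.client("mgn")
    response = mgn.start_export(
        s3Bucket="my-migration-bucket",  # placeholder destination bucket
        s3Key="inventory/export.csv",    # placeholder destination key
    )
    print(response["exportTask"]["exportID"], response["exportTask"]["status"])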

    Server Migration Metrics Dashboard
    We previously supported a migration metrics dashboard for applications and waves. In this release, we have specifically added a migration metrics dashboard for servers. Now you can view aggregated overviews of your server’s migration process on the Application Migration Service dashboard. Three topics are available in the migration metrics dashboard:

    • Alerts – Shows the alerts associated with the respective servers.
    • Data replication status – Shows an overview of the data replication status of your source servers, so you can see at a glance where each server is in the replication lifecycle.
    • Migration lifecycle – Shows an overview of the migration lifecycle status of your source servers.

    Additional Predefined Post-launch Modernization Actions
    Post-launch actions allow you to control and automate actions performed after your servers have been launched in AWS. You can use predefined or custom post-launch actions.

    Application Migration Service now has eight additional predefined post-launch actions to run on your EC2 instances, on top of the existing four predefined post-launch actions. These additional post-launch actions give you more flexibility to get the most out of your migration.

    The new additional predefined post-launch actions are as follows:

    • Convert MS-SQL license – You can easily convert Windows MS-SQL BYOL to an AWS license using the Windows MS-SQL license conversion action. The launch process includes checking the SQL edition (Enterprise, Standard, or Web) and using the right AMI with the right billing code.
    • Create AMI from instance – You can create a new Amazon Machine Image (AMI) from your Application Migration Service launched instance.
    • Upgrade Windows version – This feature allows you to easily upgrade your migrated server to Windows Server 2012 R2, 2016, 2019, or 2022. You can see the full list of available OS versions on the AWSEC2-CloneInstanceAndUpgradeWindows page.
    • Conduct EC2 connectivity checks – You can conduct network connectivity checks to a predefined list of ports and hosts using the EC2 connectivity check feature.
    • Validate volume integrity – You can use this feature to ensure that Amazon Elastic Block Store (Amazon EBS) volumes on the launched instance are the same size as the source, properly mounted on the EC2 instance, and accessible.
    • Verify process status – You can validate the process status to ensure that processes are in a running state after instance launch. You provide a list of processes to verify and specify how long the service should wait before testing begins. This feature runs the needed validations automatically, saving you from doing them manually.
    • CloudWatch agent installation – Use the Amazon CloudWatch agent installation feature to install and set up the CloudWatch agent and Application Insights features.
    • Join Directory Service domain – You can simplify the AWS domain join process by using this feature. If you choose to activate this action, your instance is joined to and managed by your AWS Directory Service domain (instead of an on-premises domain).

    Things to Know
    Keep in mind the following:

    • Updated UI/UX – We have updated the user interface with card layout and table layout views for the action list on the Application Migration Service console. This update helps you determine which post-launch actions are suitable for your use case. We have also added filter options to make it easy to find relevant actions by operating system, category, and more.
    • Support for additional OS versions – Application Migration Service now supports CentOS 5.5 and later and Red Hat Enterprise Linux (RHEL) 5.5 and later operating systems.
    • Availability – These features are available now, and you can start using them today in all Regions where Application Migration Service is supported.

    Get Started Today

    Visit the Application Migration Service User Guide page to learn more about these features and understand the pricing. You can also visit Getting started with AWS Application Migration Service to learn how to get started migrating your workloads.

    Happy migrating!

    Donnie

    AWS Week in Review – March 27, 2023

    Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-27-2023/

    This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

    In Finland, where I live, spring has arrived. The snow has melted, and the trees have grown their first buds. But I don’t get my hopes up, as usually around Easter we have what is called takatalvi. Takatalvi is a Finnish word meaning that winter returns unexpectedly in the spring.

    Last Week’s Launches
    Here are some launches that got my attention during the previous week.

    AWS SAM CLI – Now the sam sync command will compare your local Serverless Application Model (AWS SAM) template with your deployed AWS CloudFormation template and skip the deployment if there are no changes. For more information, check the latest version of the AWS SAM CLI.

    IAM – AWS Identity and Access Management (IAM) has launched two new global condition context keys. With these new condition keys, you can write service control policies (SCPs) or IAM policies that restrict the VPCs and private IP addresses from which your Amazon Elastic Compute Cloud (Amazon EC2) instance credentials can be used, without hard-coding VPC IDs or IP addresses in the policy. To learn more about this launch and how to get started, see How to use policies to restrict where EC2 instance credentials can be used from.
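
    As a hedged illustration of the idea (the VPC ID and policy name are placeholders, and the exact recommended condition pattern is described in the linked post), a deny statement built on the new keys might look like this:

    import json

    import boto3

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    # Apply only to requests made with EC2 instance credentials...
                    "Null": {"ec2:SourceInstanceARN": "false"},
                    # ...used from outside the expected VPC (placeholder VPC ID).
                    "StringNotEquals": {"aws:ec2InstanceSourceVPC": "vpc-0123456789abcdef0"},
                },
            }
        ],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="DenyEC2CredentialUseOutsideVPC",  # placeholder policy name
        PolicyDocument=json.dumps(policy_document),
    )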

    Amazon SNS – Amazon Simple Notification Service (Amazon SNS) now supports setting content-type request headers for HTTP/S notifications, such as application/json, application/xml, or text/plain. With this new feature, applications can receive their notifications in a more predictable format.

    AWS Batch – AWS Batch now allows you to configure up to 200 GiB of ephemeral storage on jobs that run on AWS Fargate. With this launch, you no longer need to limit the size of your data sets or Docker images to run machine learning inference.

    Application Load Balancer – Application Load Balancer (ALB) now supports Transport Layer Security (TLS) protocol version 1.3, enabling you to optimize the performance of your application while keeping it secure. TLS 1.3 on ALB works by offloading encryption and decryption of TLS traffic from your application server to the load balancer.
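
    Enabling TLS 1.3 is a one-line listener update. For example, here is a small sketch with the AWS SDK for Python (Boto3); the listener ARN is a placeholder, and the security policy named is one of the published TLS 1.3 policies for ALB.

    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/0123456789abcdef/0123456789abcdef",  # placeholder
        SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",  # TLS 1.3 (plus 1.2) policy
    )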

    Amazon IVS – Amazon Interactive Video Service (IVS) now supports combining videos from multiple hosts into the source of a live stream. For a demo, refer to Add multiple hosts to live streams with Amazon IVS.

    For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

    Other AWS News
    Some other updates and news that you may have missed:

    I read the post Implementing an event-driven serverless story generation application with ChatGPT and DALL-E a few days ago, and since then I have been reading my child a lot of AI-generated stories. In this post, David Boyne explains step by step how you can create an event-driven serverless story generation application. The application produces a brand-new illustrated story every day at bedtime, which can also be played in audio format.

    Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish, and every other week there is a new episode. The podcast is meant for builders, and it shares stories about how customers have implemented and learned AWS services, how to architect applications, and how to use new services. You can listen to all the episodes directly from your favorite podcast app or at AWS Podcasts en español.

    AWS open-source news and updates – The open source newsletter is curated by my colleague Ricardo Sueiras to bring you the latest open-source projects, posts, events, and more.

    Upcoming AWS Events
    Check your calendars and sign up for the AWS Summit closest to your city. AWS Summits are free events that bring the local community together, where you can learn about different AWS services.


    That’s all for this week. Check back next Monday for another Week in Review!

    — Marcia

    AWS Clean Rooms Now Generally Available — Collaborate with Your Partners without Sharing Raw Data

    Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-clean-rooms-now-generally-available/

    Companies across multiple industries, such as advertising and marketing, retail, consumer packaged goods (CPG), travel and hospitality, media and entertainment, and financial services, increasingly look to supplement their data with data from business partners, to build a complete view of their business.

    Let’s take a marketing use case as an example. Brands, publishers, and their partners need to collaborate using datasets that are stored across many channels and applications to improve the relevance of their campaigns and better engage with consumers. At the same time, they also want to protect sensitive consumer information and eliminate the sharing of raw data. Data clean rooms can help solve this challenge by allowing multiple companies to analyze their collective data in a private environment.

    However, it’s difficult to build data clean rooms. Doing so requires complex privacy controls, specialized tools to protect each collaborator’s data, and months of development time customizing analytics tools. The effort and complexity grow when a new collaborator is added or a different type of analysis is needed, as companies have to spend even more development time. Finally, companies prefer to limit data movement as much as possible, which usually leads to less collaboration and missed opportunities to generate new business insights.

    Introducing AWS Clean Rooms
    Today, I’m excited to announce the general availability of AWS Clean Rooms, which we first announced at AWS re:Invent 2022 and released in preview in January 2023. AWS Clean Rooms is an analytics service of AWS Applications that helps companies and their partners more easily and securely analyze and collaborate on their collective datasets without sharing or copying each other’s data. AWS Clean Rooms enables customers to generate unique insights about advertising campaigns, investment decisions, clinical research, and more, while helping them protect data.

    Now, with AWS Clean Rooms, companies are able to easily create a secure data clean room on the AWS Cloud in minutes and collaborate with their partners. They can use a broad set of built-in, privacy-enhancing controls for clean rooms. These controls allow companies to customize restrictions on the queries run by each clean room participant, including query controls, query output restrictions, and query logging. AWS Clean Rooms also includes advanced cryptographic computing tools that keep data encrypted—even as queries are processed—to help comply with stringent data handling policies.

    Key Features of AWS Clean Rooms
    Let me share with you the key features and how easy it is to collaborate with AWS Clean Rooms.

    Create Your Own Clean Rooms
    AWS Clean Rooms helps you start a collaboration in minutes and then select the other companies you want to collaborate with; any of your partners that agree to participate can join your clean room collaboration. You can create a collaboration by following several steps.

    After creating a collaboration in AWS Clean Rooms, you can select additional collaboration members who can contribute. Currently, AWS Clean Rooms supports up to five collaboration members, including you as the collaboration creator.

    The next step is to define which collaboration members can run queries, using the member abilities setting.

    Then, collaboration members get notifications in their accounts, can see detailed information about the collaboration, and decide whether to join by selecting Create membership in their AWS Clean Rooms dashboard.
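
    The same flow can be scripted. Here is a hedged sketch with the AWS SDK for Python (Boto3); the collaboration name, display names, and the partner account ID are placeholders.

    import boto3

    cleanrooms = boto3.client("cleanrooms")

    # Create the collaboration and define each member's abilities.
    collaboration = cleanrooms.create_collaboration(
        name="campaign-overlap-analysis",  # placeholder
        description="Analyze audience overlap without sharing raw data",
        creatorDisplayName="Brand",        # placeholder
        creatorMemberAbilities=[],         # creator contributes data only in this sketch
        members=[
            {
                "accountId": "111122223333",  # placeholder partner account
                "displayName": "Publisher",   # placeholder
                "memberAbilities": ["CAN_QUERY", "CAN_RECEIVE_RESULTS"],
            }
        ],
        queryLogStatus="ENABLED",
    )
    collaboration_id = collaboration["collaboration"]["id"]

    # Each invited member then joins from their own account by creating a membership.
    membership = cleanrooms.create_membership(
        collaborationIdentifier=collaboration_id,
        queryLogStatus="ENABLED",
    )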

    Collaborate without Moving Data Outside AWS
    AWS Clean Rooms works by analyzing Amazon S3 data in place. This eliminates the need to copy and load data into destinations outside the collaboration members’ respective AWS environments, or to use third-party services.

    Each collaboration member can create configured tables, an AWS Clean Rooms resource that references a table in the AWS Glue Data Catalog and defines how the underlying data can be used. A configured table can be used across many collaborations.
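
    For illustration, here is a hedged sketch of registering a table from the AWS Glue Data Catalog as a configured table and associating it with a collaboration membership; all names, identifiers, and the role ARN are placeholders.

    import boto3

    cleanrooms = boto3.client("cleanrooms")

    # Register the Glue-cataloged table and restrict which columns are usable.
    table = cleanrooms.create_configured_table(
        name="customer_purchases",  # placeholder
        tableReference={
            "glue": {"databaseName": "sales", "tableName": "purchases"}  # placeholders
        },
        allowedColumns=["hashed_email", "product_category", "purchase_amount"],
        analysisMethod="DIRECT_QUERY",
    )

    # Make the configured table available inside a specific collaboration.
    cleanrooms.create_configured_table_association(
        name="customer_purchases",
        membershipIdentifier="membership-id-placeholder",
        configuredTableIdentifier=table["configuredTable"]["id"],
        roleArn="arn:aws:iam::111122223333:role/CleanRoomsGlueAccess",  # placeholder
    )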

    Protecting Data
    AWS Clean Rooms provides you with a broad set of privacy-enhancing controls to protect your customers’ and partners’ data. Each collaboration member has the flexibility to determine what columns can be accessed in a collaboration.

    In addition to column-level privacy controls, as in the example above, AWS Clean Rooms also provides fine-grained query controls called analysis rules. With built-in and flexible analysis rules, customers can tailor queries to specific business needs. AWS Clean Rooms provides two types of analysis rules for customers to use:

    • Aggregation analysis rules allow aggregation queries, using COUNT, SUM, and AVG functions along optional dimensions, without revealing user-level information.
    • List analysis rules allow queries that output user-level attribute analysis of the overlap between the customer’s table and the tables of the member who can query.

    Both analysis rule types allow data owners to require a join between their datasets and the datasets of the collaborator running the query, limiting the results to the intersection of the collaborators’ datasets.

    After defining the analysis rules, the member who can query and receive results can start writing queries according to the restrictions defined by each participating collaboration member. The following is an example query in the collaboration.
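
    Here is a hedged sketch of what submitting such a query through the API could look like; the membership identifier, table and column names, and the results bucket are placeholders, not values from the original example.

    import boto3

    cleanrooms = boto3.client("cleanrooms")

    query = cleanrooms.start_protected_query(
        type="SQL",
        membershipIdentifier="membership-id-placeholder",
        sqlParameters={
            "queryString": """
                SELECT p.product_category,
                       COUNT(DISTINCT c.hashed_email) AS matched_users
                FROM customer_purchases p
                JOIN audience_segments c
                  ON p.hashed_email = c.hashed_email
                GROUP BY p.product_category
            """
        },
        resultConfiguration={
            "outputConfiguration": {
                "s3": {
                    "bucket": "my-query-results",  # placeholder results bucket
                    "keyPrefix": "clean-rooms/",
                    "resultFormat": "CSV",
                }
            }
        },
    )
    print(query["protectedQuery"]["id"], query["protectedQuery"]["status"])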

    Analysis rules allow collaboration members to restrict the types of queries that can be run against their datasets and the usable output of the query results. The following screenshot shows an example of a query that fails because it does not satisfy the analysis rule: the hashed_email column cannot be used in SELECT queries.

    Full Programmatic Access
    Any functionality offered by AWS Clean Rooms can also be accessed through the API using the AWS SDKs or the AWS CLI. This makes it easier for you to integrate AWS Clean Rooms into your products or workflows. This programmatic access also unlocks the opportunity for you to host clean rooms for your customers under your own branding.

    Query Logging
    This feature allows collaboration members to review and audit the queries that use their datasets, to make sure the data is being used as intended. Both the member who has query control and the other members whose data is part of a query can receive logs if they enable query logging.

    If this feature is enabled, query logs are written to Amazon CloudWatch Logs in each collaboration member’s account. You can access a summary of the queries logged over the last 7 days from the collaboration dashboard.

    Cryptographic Computing
    With this feature, you have the option to perform client-side encryption of sensitive data with cryptographic computing. You can encrypt your dataset to add a layer of protection, and a cryptographic computing protocol called private-set intersection keeps the data encrypted even as the query runs.

    To use the cryptographic computing feature, you need to download and use the Cryptographic Computing for Clean Rooms (C3R) encryption client to encrypt and decrypt your data. C3R keeps your data cryptographically protected while in use in AWS Clean Rooms. C3R supports a subset of SQL queries, including JOIN, SELECT, GROUP BY, COUNT, and other supported statements on cryptographically protected data.

    The following image shows how you can enable cryptographic computing when creating a collaboration:

    Customer Voices
    During the preview period, we heard lots of feedback from our customers about AWS Clean Rooms. Here’s what our customers say:

    Comscore is a measurement and analytics company that brings trust and transparency to media. Brian Pugh, Chief Information Officer at Comscore, said, “As advertisers and marketers adapt to deliver relevant campaigns leveraging their combined datasets while protecting consumer data, Comscore’s Media Metrix suite, powered by Unified Digital Measurement 2.0 and Campaign Ratings services, will continue to support critical measurement and planning needs with services like AWS Clean Rooms. AWS Clean Rooms will enable new methods of collaboration among media owners, brands, or agency customers through customized data access controls managed and set by each data owner without needing to share underlying data.”

    DISH Media is a leading TV provider that offers over-the-top IPTV service. “At DISH Media, we empower brands and agencies to run their own analyses of prior campaigns to allow for flexibility, visibility, and ease in optimizing future campaigns to reach DISH Media’s 31 million consumers. With AWS Clean Rooms, we believe advertisers will benefit from the ease of use of these services with their analysis, including data access and security controls,” said Kemal Bokhari, Head of Data, Measurement, and Analytics at DISH Media.

    Fox Corporation is a leading producer and distributor of ad-supported content through its sports, news, and entertainment brands. Lindsay Silver, Senior Vice President of Data and Commercial Technology at Fox Corporation, said, “It can be challenging for our advertising clients to figure out how to best leverage more data sources to optimize their media spend across their combined portfolio of entertainment, sports, and news brands which reach 200 million monthly viewers. We are excited to use AWS Clean Rooms to enable data collaborations easily and securely in the AWS Cloud that will help our advertising clients unlock new insights across every Fox brand and screen while protecting consumer data.”

    Amazon Marketing Cloud (AMC) is a secure, privacy-safe clean room application from Amazon Ads that supports thousands of marketers with custom analytics and cross-channel analysis.

    “Providing marketers with greater control over their own signals while being able to analyze them in conjunction with signals from Amazon Ads is crucial in today’s marketing landscape. By migrating AMC’s compute infrastructure to AWS Clean Rooms under the hood, marketers can use their own signals in AMC without storing or maintaining data outside of their AWS environment. This simplifies how marketers can manage their signals and enables AMC teams to focus on building new capabilities for brands,” said Paula Despins, Vice President of Ads Measurement at Amazon Ads.

    Watch this video to learn more about AWS Clean Rooms:

    Availability
    AWS Clean Rooms is generally available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm).

    Pricing & Free Tier
    AWS Clean Rooms measures compute capacity in Clean Rooms Processing Units (CRPUs). You pay only for the compute capacity of the queries that you run, billed in CRPU-hours on a per-second basis (with a 60-second minimum charge). AWS Clean Rooms automatically scales up or down to meet your query workload demands and shuts down during periods of inactivity, saving you administration time and costs. The AWS Clean Rooms free tier provides 9 CRPU-hours per month for the first 12 months for each new customer.

    AWS Clean Rooms helps companies and their partners more easily and securely analyze and collaborate on their collective datasets without sharing or copying each other’s data. Learn more about benefits, use cases, how to get started, and pricing details on the AWS Clean Rooms page.

    Happy collaborating!

    Donnie