Tag Archives: Thought Leadership

Five troubleshooting examples with Amazon Q

Post Syndicated from Brendan Jenkins original https://aws.amazon.com/blogs/devops/five-troubleshooting-examples-with-amazon-q/

Operators, administrators, developers, and many other personas who use AWS run into common issues when troubleshooting in the AWS Management Console. To help alleviate this burden, AWS released Amazon Q, AWS’s generative AI-powered assistant that helps make your organizational data more accessible, write code, answer questions, generate content, solve problems, manage AWS resources, and take action. A component of Amazon Q is Amazon Q Developer, which reimagines your experience across the entire development lifecycle, including the ability to help you understand errors and remediate them in the AWS Management Console. Additionally, Amazon Q can open new AWS Support cases for you if further troubleshooting help is needed.

In this blog post, we will highlight five troubleshooting examples with Amazon Q. Specific use cases covered include EC2 SSH connection issues, VPC network troubleshooting, IAM permission troubleshooting, AWS Lambda troubleshooting, and troubleshooting S3 errors.

Prerequisites

To follow along with these examples, the following prerequisites are required:

  • An AWS account with access to the AWS Management Console
  • Access to the US West (Oregon) Region, where this troubleshooting capability is available during preview
  • An Amazon EC2 instance to use for the SSH and network troubleshooting examples

Five troubleshooting examples with Amazon Q

In this section, we cover the previously mentioned examples in the AWS Management Console.

Note: This feature is only available in US West (Oregon) AWS Region during preview for errors that arise while using the following services in the AWS Management Console: Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS), Amazon Simple Storage Service (Amazon S3), and AWS Lambda.

EC2 SSH connection issues

In this section, we will show an example of troubleshooting an EC2 SSH connection issue. If you haven’t already, please be sure to create an Amazon EC2 instance for the purpose of this walkthrough.

First, sign in to the AWS Management Console, switch to the us-west-2 (Oregon) Region, and then click the Amazon Q icon in the right sidebar, as shown below in figure 1.

Figure 1 – Opening Amazon Q chat in the console

With the Amazon Q chat open, we enter the following prompt:

Prompt:

"Why cant I SSH into my EC2 instance <insert Instance ID here>?"

Note: You can obtain the instance ID from the Amazon EC2 console.

We now get a response stating: “It looks like you need help with network connectivity issues. Amazon Q works with VPC Reachability Analyzer to provide an interactive generative AI experience for troubleshooting network connectivity issues. You can try the preview experience here (available in US East N. Virginia Region).”

Click the preview experience here URL in Amazon Q’s response.

Figure 2 – Prompting Q chat in the console.

Now, Amazon Q will run an analysis for connectivity between the internet and your EC2 instance. Find a sample response from Amazon Q below:

Figure 3 – Response from Amazon Q network troubleshooting

Toward the end of the explanation, Amazon Q states that it checked whether the security groups allow inbound traffic on port 22 and found that access was blocked.

Figure 4 – Response from Amazon Q network troubleshooting cont.

As a best practice, follow AWS prescriptive guidance on adding rules for inbound SSH traffic to resolve an issue like this.
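
For example, if the analysis points to a missing inbound rule, you could add one with a short boto3 script like the minimal sketch below. The security group ID and CIDR range are placeholders for your environment; scope the source CIDR as narrowly as possible rather than opening port 22 to the internet.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Placeholder values - replace with your security group ID and a narrowly scoped CIDR
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
ALLOWED_CIDR = "203.0.113.0/24"  # example range; avoid 0.0.0.0/0 for SSH

# Add an inbound rule allowing SSH (TCP port 22) from the allowed CIDR
ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": ALLOWED_CIDR, "Description": "SSH access"}],
        }
    ],
)
print("Inbound SSH rule added to", SECURITY_GROUP_ID)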

VPC Network troubleshooting

In this section, we will show how to troubleshoot a VPC network connection issue.

In this example, I have two EC2 instances, Server-1-demo and Server-2-demo, in two separate VPCs, shown below in figure 5. I want to use Amazon Q troubleshooting to understand why these two instances cannot communicate with each other.

Figure 5 – two EC2 instances

First, we navigate to the AWS Management Console and click the Amazon Q icon in the right sidebar, as shown below in figure 6.

Figure 6 – Opening Amazon Q chat in the console

Now, with the Amazon Q chat open, I enter the following prompt to help understand the connectivity issue between the servers:

Prompt:

"Why cant my Server-1-demo communicate with Server-2-demo?"

Figure 7 – prompt for Amazon Q connectivity troubleshooting

Now, click the preview experience here hyperlink to be redirected to Amazon Q network troubleshooting – preview. Amazon Q will now generate a response, as shown below in figure 8.

Figure 8 – connectivity troubleshooting response generated by Amazon Q

In the response, Amazon Q states, “It sounds like you are troubleshooting connectivity between Server-1-demo and Server-2-demo. Based on the previous context, these instances are in different VPCs which could explain why TCP testing previously did not resolve the issue, if a peering connection is not established between the VPCs.“

So, because the instances are in different VPCs, we need to establish a VPC peering connection between the two VPCs so the servers can communicate.
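
As an illustration, a peering connection can be created and accepted with boto3 as in the sketch below. The VPC IDs, CIDR blocks, and route table IDs are placeholders; you also need routes in both VPCs (and matching security group rules) for traffic to flow.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Placeholder identifiers for the two VPCs hosting Server-1-demo and Server-2-demo
VPC_A, VPC_B = "vpc-aaaa1111", "vpc-bbbb2222"
VPC_A_CIDR, VPC_B_CIDR = "10.0.0.0/16", "10.1.0.0/16"
ROUTE_TABLE_A, ROUTE_TABLE_B = "rtb-aaaa1111", "rtb-bbbb2222"

# Request and accept the peering connection (same account, same Region)
peering = ec2.create_vpc_peering_connection(VpcId=VPC_A, PeerVpcId=VPC_B)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's traffic destined for the other VPC through the peering connection
ec2.create_route(RouteTableId=ROUTE_TABLE_A, DestinationCidrBlock=VPC_B_CIDR,
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId=ROUTE_TABLE_B, DestinationCidrBlock=VPC_A_CIDR,
                 VpcPeeringConnectionId=pcx_id)
print("Peering connection established:", pcx_id)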

IAM Permission troubleshooting

Now, let’s take a look at how Amazon Q can help resolve IAM Permission issues.

In this example, I’m creating a cluster with Amazon Elastic Container Service (Amazon ECS). I chose to deploy my containers on Amazon EC2 instances, which prompted some configuration options, including whether I wanted an SSH key pair. I chose Create a new key pair.

Figure 9 – Configuring ECS key pair

That opens up a new tab in the EC2 console.

Figure 10 – Creating ECS key pair

But when I tried to create the SSH key pair, I got the error below:

Figure 11 – ECS console error

So, I clicked the “Troubleshoot with Amazon Q” link, which revealed an explanation of why my user was not able to create the SSH key pair and the specific permissions that were missing.

Figure 12 – Amazon Q troubleshooting analysis

So, I clicked the “Help me resolve” link and got the following steps.

Figure 13 – Amazon Q troubleshooting resolution

Even though my user had permissions to use Amazon ECS, the user also needs certain permissions in Amazon EC2 as well, specifically ec2:CreateKeyPair. By enabling only the specific action required for this IAM user, your organization can follow the best practice of least privilege.
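
To illustrate, the missing permission could be granted with a tightly scoped inline policy, sketched below with boto3. The user name is a placeholder; depending on your governance model, you may prefer to attach an equivalent customer managed policy to a group or role instead.

import json
import boto3

iam = boto3.client("iam")

# Placeholder IAM user name for this walkthrough
USER_NAME = "ecs-console-user"

# Least-privilege inline policy allowing only key pair creation
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:CreateKeyPair",
            "Resource": "*",
        }
    ],
}

iam.put_user_policy(
    UserName=USER_NAME,
    PolicyName="AllowCreateKeyPair",
    PolicyDocument=json.dumps(policy_document),
)
print("Inline policy attached to", USER_NAME)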

Lambda troubleshooting

Another area where Amazon Q can help is with AWS Lambda errors during development work in the AWS Management Console. Users may run into issues such as missing configurations, missing environment variables, and code typos. Amazon Q can help you troubleshoot and fix these issues with step-by-step guidance.

In this example, in the us-west-2 Region, we have created a new Lambda function called demo_function_blog in the console with the Python 3.12 runtime. The following code is included, but the function is missing the Lambda layer that provides pandas (AWS SDK for pandas).

Lambda Code:

import json
import pandas as pd

def lambda_handler(event, context):
    data = {'Name': ['John', 'Jane', 'Jim'],'Age': [25, 30, 35]}
    df = pd.DataFrame(data)
    print(df.head()) # print first five rows

    return {
        'statusCode': 200,
        'body': json.dumps("execution successful!")
    }

Now, we configure a test event named test-event in the Lambda console to test the code, as shown below in figure 14.

Figure 14 – configuring test event

Now that the test event is created, we can move over to the Test tab in the Lambda console and click the Test button. We will then see an (intentional) error, and we will click the Troubleshoot with Amazon Q button as shown below in figure 15.

Figure 15 – Lambda Error

Now we can see Amazon Q’s analysis of the issue. It states, “It appears that the Lambda function is missing a dependency. The error message indicates that the function code requires the ‘pandas’ module, ….” Click Help me resolve to get step-by-step instructions on the fix, as shown below in figure 16.

Figure 16 – Amazon Q Analysis

Amazon Q will then generate a step-by-step resolution for how to fix the error, as shown below in figure 17.

Figure 17 – Amazon Q Resolution

Following Amazon Q’s recommendations, we need to add a new Lambda layer for the pandas dependency, as shown below in figure 18:

Figure 18 – Updating lambda layer
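
The layer can also be attached programmatically, as in the hedged boto3 sketch below. The layer ARN shown is an assumption based on the public AWS SDK for pandas managed layer for Python 3.12 in us-west-2; confirm the current Region-specific ARN and version in the AWS SDK for pandas documentation before using it.

import boto3

lambda_client = boto3.client("lambda", region_name="us-west-2")

# Assumed ARN for the AWS SDK for pandas managed layer (verify the current
# Region-specific ARN and version before use)
PANDAS_LAYER_ARN = "arn:aws:lambda:us-west-2:336392948345:layer:AWSSDKPandas-Python312:8"

# Attach the layer so 'import pandas' resolves at runtime.
# Note: this call replaces the function's existing layer list.
lambda_client.update_function_configuration(
    FunctionName="demo_function_blog",
    Layers=[PANDAS_LAYER_ARN],
)
print("Layer attached to demo_function_blog")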

Once updated, go to the Test tab once again and click Test. The function code should now run successfully as shown below in figure 19:

Figure 19 – Lambda function successfully run

Check out the Amazon Q immersion day for more examples of Lambda troubleshooting.

Troubleshooting S3 Errors

While working with Amazon S3, users might encounter errors that can disrupt the smooth functioning of their operations. Identifying and resolving these issues promptly is crucial for ensuring uninterrupted access to S3 resources. Amazon Q, a powerful tool, offers a seamless way to troubleshoot errors across various AWS services, including Amazon S3.

In this example, we use Amazon Q to troubleshoot an S3 replication rule configuration error. Imagine you’re attempting to configure a replication rule for an Amazon S3 bucket, and the configuration fails. You can turn to Amazon Q for assistance. If you receive an error that Amazon Q can help with, a Troubleshoot with Amazon Q button appears in the error message. Navigate to the Amazon S3 console to follow along with this example if it applies to your use case.

Figure 20 – S3 console error

To troubleshoot with Amazon Q, choose Troubleshoot with Amazon Q to proceed. A window titled “Analysis” appears, where Amazon Q provides information about the error.

Amazon Q diagnosed that the error occurred because versioning is not enabled for the source bucket specified. Versioning must be enabled on the source bucket in order to replicate objects from that bucket.

Amazon Q also provides an overview on how to resolve this error. To see detailed steps for how to resolve the error, choose Help me resolve.

Figure 21 – Amazon Q analysis

It can take several seconds for Amazon Q to generate instructions. After they appear, follow the instructions to resolve the error.

Figure 22 – Amazon Q Resolution

Here, Amazon Q recommends the following steps to resolve the error:

  1. Navigate to the S3 console
  2. Select the S3 bucket
  3. Go to the Properties tab
  4. Under Versioning, click Edit
  5. Enable versioning on the bucket
  6. Return to replication rule creation page
  7. Retry creating replication rule
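
If you prefer to script the versioning change (steps 3–5) before retrying the replication rule, a minimal boto3 sketch follows; the bucket name is a placeholder for your replication source bucket.

import boto3

s3 = boto3.client("s3")

# Placeholder name for the replication source bucket
SOURCE_BUCKET = "my-replication-source-bucket"

# Enable versioning, which is required before a replication rule can be created
s3.put_bucket_versioning(
    Bucket=SOURCE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Confirm the new versioning status
status = s3.get_bucket_versioning(Bucket=SOURCE_BUCKET).get("Status")
print(f"Versioning status for {SOURCE_BUCKET}: {status}")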

Conclusion

Amazon Q is a powerful AI-powered assistant that can greatly simplify troubleshooting of common issues across various AWS services, especially for developers. Amazon Q provides detailed analysis and step-by-step guidance to resolve errors efficiently. By leveraging Amazon Q, AWS users can save significant time and effort in diagnosing and fixing problems, allowing them to focus more on building and innovating with AWS. Amazon Q is a valuable addition to the AWS ecosystem, empowering users with enhanced support and streamlined troubleshooting capabilities.

About the authors

Brendan Jenkins

Brendan Jenkins is a Solutions Architect at Amazon Web Services (AWS) working with enterprise AWS customers, providing them with technical guidance and helping them achieve their business goals. He specializes in DevOps and machine learning technology.

Jehu Gray

Jehu Gray is an Enterprise Solutions Architect at Amazon Web Services, where he helps customers design solutions that fit their needs. He enjoys exploring what’s possible with IaC.

Robert Stolz

Robert Stolz is a Solutions Architect at Amazon Web Services (AWS) working with Enterprise AWS customers in the financial services industry, helping them achieve their business goals. He has a specialization in AI Strategy and adoption tactics.

AWS named a Leader in IDC MarketScape: Worldwide Analytic Stream Processing Software 2024 Vendor Assessment

Post Syndicated from Anna Montalat original https://aws.amazon.com/blogs/big-data/aws-named-a-leader-in-idc-marketscape-worldwide-analytic-stream-processing-software-2024-vendor-assessment/

We’re thrilled to announce that AWS has been named a Leader in the IDC MarketScape: Worldwide Analytic Stream Processing Software 2024 Vendor Assessment (doc #US51053123, March 2024).

We believe this recognition validates the power and performance of Apache Flink for real-time data processing, and how AWS is leading the way to help customers build and run fully managed Apache Flink applications. You can read the full report from IDC.

Unleashing real-time insights for your organization

Apache Flink’s robust architecture enables real-time data processing at scale, making it a favored choice among organizations for its efficiency and speed. With its advanced features for event time processing and state management, Apache Flink empowers users to build complex stream processing applications, making it indispensable for modern data-driven organizations. Managed Service for Apache Flink takes the complexity out of Apache Flink deployment and management, letting you focus on building game-changing applications. With Managed Service for Apache Flink, you can transform and analyze streaming data in real time using Apache Flink and integrate applications with other AWS services. There are no servers and clusters to manage, and there is no compute and storage infrastructure to set up. You pay only for the resources you use.

But what does this mean for your organization and IT teams? The following are some use cases and benefits:

  • Faster insights, quicker action – Analyze data streams as they arrive, allowing you to react promptly to changing conditions and make informed decisions based on the latest information, achieving agility and competitiveness in dynamic markets.
  • Real-time fraud detection – Identify suspicious activity the moment it occurs, enabling proactive measures to protect your customers and revenue from potential financial losses, bolstering trust and security in your business operations.
  • Personalized customer interactions – Gain insights from user behavior in real time, enabling personalized experiences and the ability to proactively address potential issues before they impact customer satisfaction, fostering loyalty and enhancing brand reputation.
  • Data-driven optimization – Utilize real-time insights from sensor data and machine logs to streamline processes, identify inefficiencies, and optimize resource allocation, driving operational excellence and cost savings while maintaining peak performance.
  • Advanced AI – Continuously feed real-time data to your machine learning (ML) and generative artificial intelligence (AI) models, allowing them to adapt and personalize outputs for more relevant and impactful results.

Beyond the buzzword: Apache Flink in action

Apache Flink’s versatility extends beyond single use cases. The following are just a few examples of how our customers are taking advantage of its capabilities:

  • The National Hockey League is the second oldest of the four major professional team sports leagues in North America. Predicting events such as face-off winning probabilities during a live game is a complex task that requires processing a significant amount of quality historical data and data streams in real time. The NHL constructed the Face-off Probability model using Apache Flink. Managed Service for Apache Flink provides the underlying infrastructure for the Apache Flink applications, removing the need to self-manage an Apache Flink cluster and reducing maintenance complexity and costs.
  • Arity is a technology company focused on making transportation smarter, safer, and more useful. They transform massive amounts of data into actionable insights to help partners better predict risk and make smarter decisions in real time. Arity uses the managed ability of Managed Service for Apache Flink to transform and analyze streaming data in near real time using Apache Flink. On Managed Service for Apache Flink, Arity generates driving behavior insights based on collated driving data.
  • SOCAR is the leading Korean mobility company with strong competitiveness in car sharing. SOCAR solves mobility-related social problems, such as parking difficulties and traffic congestion, and changes the car ownership-oriented mobility habits in Korea.

Join the leaders in stream processing

By choosing Managed Service for Apache Flink, you’re joining a growing community of organizations who are unlocking the power of real-time data analysis. Get started today and see how Apache Flink can transform your data strategy, including powering the next generation of generative AI applications.

Ready to learn more?

Contact us today and discover how Apache Flink can empower your business.


About the author

Anna Montalat is the Product Marketing lead for AWS analytics and streaming data services, including Amazon Managed Streaming for Apache Kafka (MSK), Kinesis Data Streams, Kinesis Video Streams, Amazon Data Firehose, and Amazon Managed Service for Apache Flink, among others. She is passionate about bringing new and emerging technologies to market, working closely with service teams and enterprise customers. Outside of work, Anna skis through winter time and sails through summer.

The art of possible: Three themes from RSA Conference 2024

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/the-art-of-possible-three-themes-from-rsa-conference-2024/


RSA Conference 2024 drew 650 speakers, 600 exhibitors, and thousands of security practitioners from across the globe to the Moscone Center in San Francisco, California from May 6 through 9.

The keynote lineup was diverse, with 33 presentations featuring speakers ranging from WarGames actor Matthew Broderick, to public and private-sector luminaries such as Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly, U.S. Secretary of State Antony Blinken, security technologist Bruce Schneier, and cryptography experts Tal Rabin, Whitfield Diffie, and Adi Shamir.

Topics aligned with this year’s conference theme, “The art of possible,” and focused on actions we can take to revolutionize technology through innovation, while fortifying our defenses against an evolving threat landscape.

This post highlights three themes that caught our attention: artificial intelligence (AI) security, the Secure by Design approach to building products and services, and Chief Information Security Officer (CISO) collaboration.

AI security

Organizations in all industries have started building generative AI applications using large language models (LLMs) and other foundation models (FMs) to enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. So it’s not surprising that AI dominated conversations. Over 100 sessions touched on the topic, and the desire of attendees to understand AI technology and learn how to balance its risks and opportunities was clear.

“Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system.” — CISA

FMs and the applications built around them are often used with highly sensitive business data such as personal data, compliance data, operational data, and financial information to optimize the model’s output. As we explore the advantages of generative AI, protecting highly sensitive data and investments is a top priority. However, many organizations aren’t paying enough attention to security.

A joint generative AI security report released by Amazon Web Services (AWS) and the IBM Institute for Business Value during the conference found that 82% of business leaders view secure and trustworthy AI as essential for their operations, but only 24% are actively securing generative AI models and embedding security processes in AI development. In fact, nearly 70% say innovation takes precedence over security, despite concerns over threats and vulnerabilities (detailed in Figure 1).

Figure 1: Generative AI adoption concerns, Source: IBM Security

Because data and model weights—the numerical values models learn and adjust as they train—are incredibly valuable, organizations need them to stay protected, secure, and private, whether that means restricting access from an organization’s own administrators, customers, or cloud service provider, or protecting data from vulnerabilities in software running in the organization’s own environment.

There is no silver AI-security bullet, but as the report points out, there are proactive steps you can take to start protecting your organization and leveraging AI technology to improve your security posture:

  1. Establish a governance, risk, and compliance (GRC) foundation. Trust in gen AI starts with new security governance models (Figure 2) that integrate and embed GRC capabilities into your AI initiatives, and include policies, processes, and controls that are aligned with your business objectives.

    Figure 2: Updating governance, risk, and compliance models, Source: IBM Security

    In the RSA Conference session AI: Law, Policy, and Common Sense Suggestions to Stay Out of Trouble, digital commerce and gaming attorney Behnam Dayanim highlighted ethical, policy, and legal considerations—including AI-specific regulations—as well as governance structures such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) that can help maximize a successful implementation and minimize potential risk.

  2. Strengthen your security culture. When we think of securing AI, it’s natural to focus on technical measures that can help protect the business. But organizations are made up of people—not technology. Educating employees at all levels of the organization can help avoid preventable harms such as prompt-based risks and unapproved tool use, and foster a resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

    “You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS
  3. Engage with partners. Developing and securing AI solutions requires resources and skills that many organizations lack. Partners can provide you with comprehensive security support—whether that’s informing and advising you about generative AI, or augmenting your delivery and support capabilities. This can help make your engineers and your security controls more effective.

    While many organizations purchase security products or solutions with embedded generative AI capabilities, nearly two-thirds, as detailed in Figure 3, report that their generative AI security capabilities come through some type of partner.

    Figure 3: Most security gen AI capabilities are coming from third-party products or partners, Source: IBM Security

    Tens of thousands of customers are using AWS, for example, to experiment and move transformative generative AI applications into production. AWS provides AI-powered tools and services, a Generative AI Innovation Center program, and an extensive network of AWS partners that have demonstrated expertise delivering machine learning (ML) and generative AI solutions. These resources can support your teams with hands-on help developing solutions mapped to your requirements, and a broader collection of knowledge they can use to help you make the nuanced decisions required for effective security.

View the joint report and AWS generative AI security resources for additional guidance.

Secure by Design

Building secure software was a popular and related focus at the conference. Insecure design is ranked as the number four critical web application security concern on the Open Web Application Security Project (OWASP) Top 10.

The concept known as Secure by Design is gaining importance in the effort to mitigate vulnerabilities early, minimize risks, and recognize security as a core business requirement. Secure by Design builds off of security models such as Zero Trust, and aims to reduce the burden of cybersecurity and break the cycle of constantly creating and applying updates by developing products that are foundationally secure.

More than 60 technology companies—including AWS—signed CISA’s Secure by Design Pledge during RSA Conference as part of a collaborative push to put security first when designing products and services.

The pledge demonstrates a commitment to making measurable progress towards seven goals within a year:

  • Broaden the use of multi-factor authentication (MFA)
  • Reduce default passwords
  • Enable a significant reduction in the prevalence of one or more vulnerability classes
  • Increase the installation of security patches by customers
  • Publish a vulnerability disclosure policy (VDP)
  • Demonstrate transparency in vulnerability reporting
  • Strengthen the ability of customers to gather evidence of cybersecurity intrusions affecting products

“From day one, we have pioneered secure by design and secure by default practices in the cloud, so AWS is designed to be the most secure place for customers to run their workloads. We are committed to continuing to help organizations around the world elevate their security posture, and we look forward to collaborating with CISA and other stakeholders to further grow and promote security by design and default practices.” — Chris Betz, CISO at AWS

The need for security by design applies to AI like any other software system. To protect users and data, we need to build security into ML and AI with a Secure by Design approach that considers these technologies to be part of a larger software system, and weaves security into the AI pipeline.

Since models tend to have very high privileges and access to data, integrating an AI bill of materials (AI/ML BOM) and Cryptography Bill of Materials (CBOM) into BOM processes can help you catalog security-relevant information, and gain visibility into model components and data sources. Additionally, frameworks and standards such as the AI RMF 1.0, the HITRUST AI Assurance Program, and ISO/IEC 42001 can facilitate the incorporation of trustworthiness considerations into the design, development, and use of AI systems.

CISO collaboration

In the RSA Conference keynote session CISO Confidential: What Separates The Best From The Rest, Trellix CEO Bryan Palma and CISO Harold Rivas noted that there are approximately 32,000 global CISOs today—4 times more than 10 years ago. The challenges they face include staffing shortages, liability concerns, and a rapidly evolving threat landscape. According to research conducted by the Information Systems Security Association (ISSA), nearly half of organizations (46%) report that their cybersecurity team is understaffed, and more than 80% of CISOs recently surveyed by Trellix have experienced an increase in cybersecurity threats over the past six months. When asked what would most improve their organizations’ abilities to defend against these threats, their top answer was industry peers sharing insights and best practices.

Building trusted relationships with peers and technology partners can help you gain the knowledge you need to effectively communicate the story of risk to your board of directors, keep up with technology, and build success as a CISO.

AWS CISO Circles provide a forum for cybersecurity executives from organizations of all sizes and industries to share their challenges, insights, and best practices. CISOs come together in locations around the world to discuss the biggest security topics of the moment. With NDAs in place and the Chatham House Rule in effect, security leaders can feel free to speak their minds, ask questions, and get feedback from peers through candid conversations facilitated by AWS Security leaders.

“When it comes to security, community unlocks possibilities. CISO Circles give us an opportunity to deeply lean into CISOs’ concerns, and the topics that resonate with them. Chatham House Rule gives security leaders the confidence they need to speak openly and honestly with each other, and build a global community of knowledge-sharing and support.” — Clarke Rodgers, Director of Enterprise Strategy at AWS

At RSA Conference, CISO Circle attendees discussed the challenges of adopting generative AI. When asked whether CISOs or the business own generative AI risk for the organization, the consensus was that security can help with policies and recommendations, but the business should own the risk and decisions about how and when to use the technology. Some attendees noted that they took initial responsibility for generative AI risk, before transitioning ownership to an advisory board or committee comprised of leaders from their HR, legal, IT, finance, privacy, and compliance and ethics teams over time. Several CISOs expressed the belief that quickly taking ownership of generative AI risk before shepherding it to the right owner gave them a valuable opportunity to earn trust with their boards and executive peers, and to demonstrate business leadership during a time of uncertainty.

Embrace the art of possible

There are many more RSA Conference highlights on a wide range of additional topics, including post-quantum cryptography developments, identity and access management, data perimeters, threat modeling, cybersecurity budgets, and cyber insurance trends. If there’s one key takeaway, it’s that we should never underestimate what is possible from threat actors or defenders. By harnessing AI’s potential while addressing its risks, building foundationally secure products and services, and developing meaningful collaboration, we can collectively strengthen security and establish cyber resilience.

Join us to learn more about cloud security in the age of generative AI at AWS re:Inforce 2024 June 10–12 in Pennsylvania. Register today with the code SECBLOfnakb to receive a limited time $150 USD discount, while supplies last.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Danielle Ruderman

Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.

AWS plans to invest €7.8B into the AWS European Sovereign Cloud, set to launch by the end of 2025

Post Syndicated from Max Peterson original https://aws.amazon.com/blogs/security/aws-plans-to-invest-e7-8b-into-the-aws-european-sovereign-cloud-set-to-launch-by-the-end-of-2025/

English | German

Amazon Web Services (AWS) continues to believe it’s essential that our customers have control over their data and choices for how they secure and manage that data in the cloud. AWS gives customers the flexibility to choose how and where they want to run their workloads, including a proven track record of innovation to support specialized workloads around the world. While many customers are able to meet their stringent security, sovereignty, and privacy requirements using our existing sovereign-by-design AWS Regions, we know there’s not a one-size-fits-all solution. AWS continues to innovate based on the criteria we know are most important to our customers to give them more choice and more control. Last year we announced the AWS European Sovereign Cloud, a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty needs. Today, we’re excited to share more details about the AWS European Sovereign Cloud roadmap so that customers and partners can start planning. The AWS European Sovereign Cloud is planning to launch its first AWS Region in the State of Brandenburg, Germany by the end of 2025. Available to all AWS customers, this effort is backed by a €7.8B investment in infrastructure, jobs creation, and skills development.

The AWS European Sovereign Cloud will utilize the full power of AWS with the same familiar architecture, expansive service portfolio, and APIs that customers use today. This means that customers using the AWS European Sovereign Cloud will get the benefits of AWS infrastructure including industry-leading security, availability, performance, and resilience. We offer a broad set of services, including a full suite of databases, compute, storage, analytics, machine learning and AI, networking, mobile, developer tools, IoT, security, and enterprise applications. Today, customers can start building applications in any existing Region and simply move them to the AWS European Sovereign Cloud when the first Region launches in 2025. Partners in the AWS Partner Network, which features more than 130,000 partners, already provide a range of offerings in our existing AWS Regions to help customers meet requirements and will now be able to seamlessly deploy applications on the AWS European Sovereign Cloud.

More control, more choice

Like our existing Regions, the AWS European Sovereign Cloud will be powered by the AWS Nitro System. The Nitro System is an unparalleled computing backbone for AWS, with security and performance at its core. Its specialized hardware and associated firmware are designed to enforce restrictions so that nobody, including anyone in AWS, can access customer workloads or data running on Amazon Elastic Compute Cloud (Amazon EC2) Nitro based instances. The design of the Nitro System has been validated by the NCC Group, an independent cybersecurity firm. The controls that help prevent operator access are so fundamental to the Nitro System that we’ve added them in our AWS Service Terms to provide an additional contractual assurance to all of our customers.

To date, we have launched 33 Regions around the globe with our secure and sovereign-by-design approach. Customers come to AWS because they want to migrate to and build on a secure cloud foundation. Customers who need to comply with European data residency requirements have the choice to deploy their data to any of our eight existing Regions in Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan, Zurich, and Spain) to keep their data securely in Europe.

For customers who need to meet additional stringent operational autonomy and data residency requirements within the European Union (EU), the AWS European Sovereign Cloud will be available as another option, with infrastructure wholly located within the EU and operated independently from existing Regions. The AWS European Sovereign Cloud will allow customers to keep all customer data and the metadata they create (such as the roles, permissions, resource labels, and configurations they use to run AWS) in the EU. Customers who need options to address stringent isolation and in-country data residency needs will be able to use AWS Dedicated Local Zones or AWS Outposts to deploy AWS European Sovereign Cloud infrastructure in locations they select. We continue to work with our customers and partners to shape the AWS European Sovereign Cloud, applying learnings from our engagements with European regulators and national cybersecurity authorities.

Continued investment in Europe

Over the last 25 years, we’ve driven economic development through our investment in infrastructure, jobs, and skills in communities and countries across Europe. Since 2010, Amazon has invested more than €150 billion in the EU, and we’re proud to employ more than 150,000 people in permanent roles across the European Single Market.

AWS now plans to invest €7.8 billion in the AWS European Sovereign Cloud by 2040, building on our long-term commitment to Europe and ongoing support of the region’s sovereignty needs. This long-term investment is expected to lead to a ripple effect in the local cloud community through accelerating productivity gains, empowering the digital transformation of businesses, empowering the AWS Partner Network (APN), upskilling the cloud and digital workforce, developing renewable energy projects, and creating a positive impact in the communities where AWS operates. In total, the AWS planned investment is estimated to contribute €17.2 billion to Germany’s total Gross Domestic Product (GDP) through 2040, and support an average 2,800 full-time equivalent jobs in local German businesses each year. These positions, including construction, facility maintenance, engineering, telecommunications, and other jobs within the broader local economy, are part of the AWS data center supply chain.

In addition, AWS is also creating new highly skilled permanent roles to build and operate the AWS European Sovereign Cloud. These jobs will include software engineers, systems developers, and solutions architects. This is part of our commitment that all day-to-day operations of the AWS European Sovereign Cloud will be controlled exclusively by personnel located in the EU, including access to data centers, technical support, and customer service.

In Germany, we also collaborate with local communities on long-term, innovative programs that will have a lasting impact in the areas where our infrastructure is located. This includes developing cloud workforce and education initiatives for learners of all ages, helping to solve for the skills gap and prepare for the tech jobs of the future. For example, last year AWS partnered with Siemens AG to design the first apprenticeship program for AWS data centers in Germany, launched the first national cloud computing certification with the German Chamber of Commerce (DIHK), and established the AWS Skills to Jobs Tech Alliance in Germany. We will work closely with local partners to roll out these skills programs and make sure they are tailored to regional needs.

“High performing, reliable, and secure infrastructure is the most important prerequisite for an increasingly digitalized economy and society. Brandenburg is making progress here. In recent years, we have set on a course to invest in modern and sustainable data center infrastructure in our state, strengthening Brandenburg as a business location. State-of-the-art data centers for secure cloud computing are the basis for a strong digital economy. I am pleased Amazon Web Services (AWS) has chosen Brandenburg for a long-term investment in its cloud computing infrastructure for the AWS European Sovereign Cloud.”

Brandenburg’s Minister of Economic Affairs, Prof. Dr. Jörg Steinbach

Build confidently with AWS

For customers that are early in their cloud adoption journey and are considering the AWS European Sovereign Cloud, we provide a wide range of resources to help adopt the cloud effectively. From lifting and shifting workloads to migrating entire data centers, customers get the organizational, operational, and technical capabilities needed for a successful migration to AWS. For example, we offer the AWS Cloud Adoption Framework (AWS CAF) to provide best practices for organizations to develop an efficient and effective plan for cloud adoption, and AWS Migration Hub to help assess migration needs, define migration and modernization strategy, and leverage automation. We frequently host AWS events, webinars, and workshops focused on cloud adoption and migration strategies, where customers can learn from AWS experts and connect with other customers and partners.

We’re committed to giving customers more control and more choice to help meet their unique digital sovereignty needs, without compromising on the full power of AWS. The AWS European Sovereign Cloud is a testament to this. To help customers and partners continue to plan and build, we will share additional updates as we drive towards launch. You can discover more about the AWS European Sovereign Cloud on our European Digital Sovereignty website.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on X.
 


German version

AWS European Sovereign Cloud bis Ende 2025: AWS plant Investitionen in Höhe von 7,8 Milliarden Euro

Amazon Web Services (AWS) ist davon überzeugt, dass es für Kunden von essentieller Bedeutung ist, die Kontrolle über ihre Daten und Auswahlmöglichkeiten zu haben, wie sie diese Daten in der Cloud sichern und verwalten. Daher können Kunden flexibel wählen, wie und wo sie ihre Workloads ausführen. Dazu gehört auch eine langjährige Erfolgsbilanz von Innovationen zur Unterstützung spezialisierter Workloads auf der ganzen Welt. Viele Kunden können bereits ihre strengen Sicherheits-, Souveränitäts- und Datenschutzanforderungen mit unseren AWS-Regionen unter dem „sovereign-by-design“-Ansatz erfüllen. Aber wir wissen ebenso: Es gibt keine Einheitslösung für alle. Daher arbeitet AWS kontinuierlich an Innovationen, die auf jenen Kriterien basieren, die für unsere Kunden am wichtigsten sind und ihnen mehr Auswahl sowie Kontrolle bieten. Vor diesem Hintergrund haben wir letztes Jahr die AWS European Sovereign Cloud angekündigt. Mit ihr entsteht eine neue, unabhängige Cloud für Europa. Sie soll Organisationen des öffentlichen Sektors und Kunden in stark regulierten Branchen dabei helfen, die sich wandelnden Anforderungen an die digitale Souveränität zu erfüllen.

Heute freuen wir uns, dass wir weitere Details über die Roadmap der AWS European Sovereign Cloud bekanntgeben können. So können unsere Kunden und Partner mit ihren weiteren Planungen beginnen. Der Start der ersten Region der AWS European Sovereign Cloud ist in Brandenburg bis zum Jahresende 2025 geplant. Dieses Angebot steht allen AWS-Kunden zur Verfügung und wird von einer Investition in Höhe von 7,8 Milliarden Euro in die Infrastruktur, Arbeitsplatzschaffung und Kompetenzentwicklung unterstützt.

Die AWS European Cloud in Brandenburg bietet die volle Leistungsfähigkeit, mit der bekannten Architektur, dem umfangreichen Angebot an Services und denselben APIs, die Millionen von Kunden bereits kennen. Das bedeutet: Kunden der AWS European Sovereign Cloud profitieren somit bei voller Unabhängigkeit von den bekannten Vorteilen der AWS-Infrastruktur, einschließlich der branchenführenden Sicherheit, Verfügbarkeit, Leistung und Resilienz.

AWS-Kunden haben Zugriff auf ein breites Spektrum an Services – darunter ein umfangreiches Angebot bestehend aus Datenbanken, Datenverarbeitung, Datenspeicherung, Analytics, maschinellem Lernen (ML) und künstlicher Intelligenz (KI), Netzwerken, mobilen Applikationen, Entwickler-Tools, Internet of Things (IoT), Sicherheit und Unternehmensanwendungen. Bereits heute können Kunden Anwendungen in jeder bestehenden Region entwickeln und diese einfach in die AWS European Sovereign Cloud auslagern, sobald die erste AWS-Region 2025 startet. Die Partner im AWS-Partnernetzwerks (APN), das mehr als 130.000 Partner umfasst, bietet bereits eine Reihe von Angeboten in den bestehenden AWS-Regionen an. Dadurch unterstützen sie Kunden dabei, ihre Anforderungen zu erfüllen und Anwendungen einfach in der AWS European Sovereign Cloud bereitzustellen.

Mehr Kontrolle, größere Auswahl

Die AWS European Sovereign Cloud nutzt wie auch unsere bestehenden Regionen das AWS Nitro System. Dabei handelt es sich um einen Computing-Backbone für AWS, bei dem Sicherheit und Leistung im Mittelpunkt stehen. Die spezialisierte Hardware und zugehörige Firmware sind so konzipiert, dass strikte Beschränkungen gelten und niemand, auch nicht AWS selbst, auf die Workloads oder Daten von Kunden zugreifen kann, die auf Amazon Elastic Compute Cloud (Amazon EC2) Nitro-basierten Instanzen laufen. Dieses Design wurde von der NCC Group validiert, einem unabhängigen Unternehmen für Cybersicherheit. Die Kontrollen, die den Zugriff durch Betreiber verhindern, sind grundlegend für das Nitro System. Daher haben wir sie in unsere AWS Service Terms aufgenommen, um allen unseren Kunden diese zusätzliche vertragliche Zusicherung zu geben.

Bis heute haben wir 33 Regionen rund um den Globus mit unserem sicheren und „sovereign-by-design“-Ansatz gestartet. Unsere Kunden nutzen AWS, weil sie auf einer sicheren Cloud-Umgebung migrieren und aufbauen möchten. Für Kunden, die europäische Anforderungen an den Ort der Datenverarbeitung erfüllen müssen, bietet AWS die Möglichkeit, ihre Daten in einer unserer acht bestehenden Regionen in Europa zu verarbeiten: Irland, Frankfurt, London, Paris, Stockholm, Mailand, Zürich und Spanien. So können sie ihre Daten sicher innerhalb Europas halten.

Müssen Kunden zusätzliche Anforderungen an die betriebliche Autonomie und den Ort der Datenverarbeitung innerhalb der Europäischen Union erfüllen, steht die AWS European Sovereign Cloud als weitere Option zur Verfügung. Die Infrastruktur hierfür ist vollständig in der EU angesiedelt und wird unabhängig von den bestehenden Regionen betrieben. Sie ermöglicht es AWS-Kunden, ihre Kundeninhalte und von ihnen erstellten Metadaten in der EU zu behalten – etwa Rollen, Berechtigungen, Ressourcenbezeichnungen und Konfigurationen für den Betrieb von AWS.

Sollten Kunden weitere Optionen benötigen, um eine Isolierung zu ermöglichen und strenge Anforderungen an den Ort der Datenverarbeitung in einem bestimmten Land zu erfüllen, können sie auf AWS Dedicated Local Zones oder AWS Outposts zurückgreifen. Auf diese Weise können sie die Infrastruktur der AWS European Sovereign Cloud am Ort ihrer Wahl einsetzen. Wir arbeiten mit unseren Kunden und Partnern kontinuierlich daran, die AWS European Sovereign Cloud so zu gestalten, dass sie den benötigten Anforderungen entspricht. Dabei nutzen wir auch Feedback aus unseren Gesprächen mit europäischen Regulierungsbehörden und nationalen Cybersicherheitsbehörden.

„Eine funktionierende, verlässliche und sichere Infrastruktur ist die wichtigste Vorrausetzung für eine zunehmend digitalisierte Wirtschaft und Gesellschaft. Brandenburg schreitet hier voran. Wir haben in den vergangenen Jahren entscheidende Weichen gestellt, um Investitionen in eine moderne und nachhaltige Rechenzentruminfrastruktur in unserem Land auszubauen und so den Wirtschaftsstandort Brandenburg zu stärken. Hochmoderne Rechenzentren für sicheres Cloud-Computing sind die Basis für eine digitale Wirtschaft. Für unsere digitale Souveränität ist es wichtig, dass Rechenleistungen vor Ort in Deutschland erbracht werden. Ich freue mich, dass Amazon Web Services Brandenburg für ein langfristiges Investment in ihre Cloud-Computing-Infrastruktur für die AWS European Sovereign Cloud ausgewählt hat.“

sagt Brandenburgs Wirtschaftsminister Prof. Dr.-Ing. Jörg Steinbach

Kontinuierliche Investitionen in Europa

Im Laufe der vergangenen 25 Jahre haben wir die wirtschaftliche Entwicklung in europäischen Ländern und Gemeinden vorangetrieben und in Infrastruktur, Arbeitsplätze sowie den Ausbau von Kompetenzen investiert. Seit 2010 hat Amazon über 150 Milliarden Euro in der Europäischen Union investiert und wir sind stolz darauf, im gesamten europäischen Binnenmarkt mehr als 150.000 Menschen in Festanstellung zu beschäftigen.

AWS plant bis zum Jahr 2040 7,8 Milliarden Euro in die AWS European Sovereign Cloud zu investieren. Diese Investition ist Teil der langfristigen Bestrebungen von AWS, das europäische Bedürfnis nach digitaler Souveränität zu unterstützen. Mit dieser langfristigen Investition löst AWS einen Multiplikatoreffekt für Cloud-Computing in Europa aus. Sie wird die digitale Transformation der Verwaltung und von Unternehmen vorantreiben, das AWS Partner Network (APN) stärken, die Zahl der Cloud- und Digitalfachkräfte erhöhen, erneuerbare Energieprojekte vorantreiben und eine positive Wirkung in den Gemeinden erzielen, in denen AWS präsent ist. Insgesamt wird die geplante AWS-Investition bis 2040 voraussichtlich 17,2 Milliarden Euro zum deutschen Bruttoinlandsprodukt und zur Schaffung von 2.800 Vollzeitstellen bei regionalen Unternehmen beitragen. Diese Arbeitsplätze in den Bereichen Bau, Instandhaltung, Ingenieurwesen, Telekommunikation und der breiteren regionalen Wirtschaft sind Teil der Lieferkette für AWS-Rechenzentren.

Darüber hinaus wird AWS neue Stellen für hochqualifizierte festangestellte Fachkräfte wie Softwareentwickler, Systemingenieure und Lösungsarchitekten schaffen, um die AWS European Sovereign Cloud aufzubauen und zu betreiben. Die Investition in zusätzliches Personal unterstreicht unser Commitment, dass der gesamte Betrieb dieser souveränen Cloud-Umgebung – angefangen bei der Zugangskontrolle zu den Rechenzentren über den technischen Support bis hin zum Kundendienst – ausnahmslos durch Fachkräfte innerhalb der Europäischen Union kontrolliert und gesteuert wird.

In Deutschland arbeitet AWS mit den Beteiligten vor Ort auch an langfristigen und innovativen Programmen zusammen. Diese sollen einen nachhaltigen positiven Einfluss auf die Gemeinden haben, in denen sich die Infrastruktur des Unternehmens befindet. AWS konzentriert sich auf die Entwicklung von Cloud-Fachkräften und Schulungsinitiativen für Lernende aller Altersgruppen. Diese Maßnahmen tragen dazu bei, den Fachkräftemangel zu beheben und sich auf die technischen Berufe der Zukunft vorzubereiten. Im vergangenen Jahr hat AWS beispielsweise gemeinsam mit der Siemens AG das erste Ausbildungsprogramm für AWS-Rechenzentren in Deutschland entwickelt. Ebenso hat das Unternehmen in Kooperation mit dem Deutschen Industrie und Handelstag (DIHK) den bundeseinheitlichen Zertifikatslehrgang zum „Cloud Business Expert“ entwickelt sowie die AWS Skills to Jobs Tech Alliance in Deutschland ins Leben gerufen. AWS wird gemeinsam mit lokalen Partnern daran arbeiten, Ausbildungsprogramme und Fortbildungen anzubieten, die auf die Bedürfnisse vor Ort zugeschnitten sind.

Vertrauensvoll bauen mit AWS

Für Kunden, die sich noch am Anfang ihrer Cloud-Reise befinden und die AWS European Sovereign Cloud in Betracht ziehen, bieten wir eine Vielzahl von Ressourcen an, um den Wechsel in die Cloud effektiv zu gestalten. Egal ob einzelne Workloads verlagert oder ganze Rechenzentren migriert werden sollen – Kunden erhalten von uns die nötigen organisatorischen, operativen und technischen Fähigkeiten für eine erfolgreiche Migration zu AWS. Beispielsweise bieten wir das AWS Cloud Adoption Framework (AWS CAF) an, das Unternehmen bei der Entwicklung eines effizienten und effektiven Cloud-Adoptionsplans mit Best Practices unterstützt. Auch der AWS Migration Hub hilft bei der Bewertung des Migrationsbedarfs, der Definition der Migrations- und Modernisierungsstrategie und der Nutzung von Automatisierung. Darüber hinaus veranstalten wir regelmäßig AWS-Events, Webinare und Workshops rund um die Themen Cloud-Adoption und Migrationsstrategie. Dabei können Kunden von AWS-Experten lernen und sich mit anderen Kunden und Partnern vernetzen.

Wir sind bestrebt, unseren Kunden mehr Kontrolle und weitere Optionen anzubieten, damit diese ihre ganz individuellen Anforderungen an die digitale Souveränität erfüllen können, ohne dabei auf die volle Leistungsfähigkeit von AWS verzichten zu müssen.

Um Kunden und Partnern bei der weiteren Planung und Entwicklung zu unterstützen, werden wir laufend zusätzliche Updates bereitstellen, während wir auf den Start der AWS European Sovereign Cloud hinarbeiten. Mehr über die AWS European Sovereign Cloud erfahren Sie auf unserer Website zur European Digital Sovereignty.

 

Max Peterson

Max is the Vice President of AWS Sovereign Cloud. He leads efforts to ensure that all AWS customers around the world have the most advanced set of sovereignty controls, privacy safeguards, and security features available in the cloud. Before his current role, Max served as the VP of AWS Worldwide Public Sector (WWPS) and created and led the WWPS International Sales division, with a focus on empowering government, education, healthcare, aerospace and satellite, and nonprofit organizations to drive rapid innovation while meeting evolving compliance, security, and policy requirements. Max has over 30 years of public sector experience and served in other technology leadership roles before joining Amazon. Max has earned both a Bachelor of Arts in Finance and Master of Business Administration in Management Information Systems from the University of Maryland.

Achieve peak performance and boost scalability using multiple Amazon Redshift serverless workgroups and Network Load Balancer

Post Syndicated from Ricardo Serafim original https://aws.amazon.com/blogs/big-data/achieve-peak-performance-and-boost-scalability-using-multiple-amazon-redshift-serverless-workgroups-and-network-load-balancer/

As data analytics use cases grow, factors of scalability and concurrency become crucial for businesses. Your analytic solution architecture should be able to handle large data volumes at high concurrency and without compromising speed, thereby delivering a scalable high-performance analytics environment.

Amazon Redshift Serverless provides a fully managed, petabyte-scale, auto scaling cloud data warehouse to support high-concurrency analytics. It offers data analysts, developers, and scientists a fast, flexible analytic environment to gain insights from their data with optimal price-performance. Redshift Serverless auto scales during usage spikes, enabling enterprises to cost-effectively help meet changing business demands. You can benefit from this simplicity without changing your existing analytics and business intelligence (BI) applications.

To help meet demanding performance needs like high concurrency, usage spikes, and fast query response times while optimizing costs, this post proposes using Redshift Serverless. The proposed solution aims to address three key performance requirements:

  • Support thousands of concurrent connections with high availability by using multiple Redshift Serverless endpoints behind a Network Load Balancer
  • Accommodate hundreds of concurrent queries with low-latency service level agreements through scalable and distributed workgroups
  • Enable subsecond response times for short queries against large datasets using the fast query processing of Amazon Redshift

The suggested architecture uses multiple Redshift Serverless endpoints accessed through a single Network Load Balancer client endpoint. The Network Load Balancer evenly distributes incoming requests across workgroups. This improves performance and reduces latency by scaling out resources to meet high throughput and low latency demands.

Solution overview

The following diagram outlines a Redshift Serverless architecture with multiple Amazon Redshift managed VPC endpoints behind a Network Load Balancer.

The following are the main components of this architecture:

  • Amazon Redshift data sharing – This allows you to securely share live data across Redshift clusters, workgroups, AWS accounts, and AWS Regions without manually moving or copying the data. Users can see up-to-date and consistent information in Amazon Redshift as soon as it’s updated. With Amazon Redshift data sharing, the ingestion can be done at the producer or consumer endpoint, allowing the other consumer endpoints to read and write the same data and thereby enabling horizontal scaling.
  • Network Load Balancer – This serves as the single point of contact for clients. The load balancer distributes incoming traffic across multiple targets, such as Redshift Serverless managed VPC endpoints. This increases the availability, scalability, and performance of your application. You can add one or more listeners to your load balancer. A listener checks for connection requests from clients, using the protocol and port that you configure, and forwards requests to a target group. A target group routes requests to one or more registered targets, such as Redshift Serverless managed VPC endpoints, using the protocol and the port number that you specify.
  • VPC – Redshift Serverless is provisioned in a VPC. By creating a Redshift managed VPC endpoint, you enable private access to Redshift Serverless from applications in another VPC. This design allows you to scale by having multiple VPCs as needed. The VPC endpoint provides a dedicated private IP for each Redshift Serverless workgroup, which is used as a target in the Network Load Balancer target group.
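To make the data sharing building block more concrete, the following is a minimal producer-side sketch, driven through the Amazon Redshift Data API with boto3. The workgroup, database, datashare, and namespace values are placeholders rather than names from the tested environment.

```python
import boto3

# Placeholder values -- replace with your producer workgroup, database,
# and the namespace ID of the consumer workgroup.
PRODUCER_WORKGROUP = "producer-wg"
CONSUMER_NAMESPACE_ID = "11111111-2222-3333-4444-555555555555"
DATABASE = "dev"

redshift_data = boto3.client("redshift-data")

statements = [
    # Create a datashare on the producer and expose a schema and its tables.
    "CREATE DATASHARE reports_share",
    "ALTER DATASHARE reports_share ADD SCHEMA public",
    "ALTER DATASHARE reports_share ADD ALL TABLES IN SCHEMA public",
    # Grant the consumer workgroup's namespace access to the datashare.
    f"GRANT USAGE ON DATASHARE reports_share TO NAMESPACE '{CONSUMER_NAMESPACE_ID}'",
]

for sql in statements:
    response = redshift_data.execute_statement(
        WorkgroupName=PRODUCER_WORKGROUP,
        Database=DATABASE,
        Sql=sql,
    )
    print(response["Id"], sql)
```

On each consumer workgroup, you would then create a database from the datashare (for example, CREATE DATABASE reports FROM DATASHARE reports_share OF NAMESPACE '<producer-namespace-id>') so that queries against any workgroup see the same data.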

Create an Amazon Redshift managed VPC endpoint

Complete the following steps to create the Amazon Redshift managed VPC endpoint:

  1. On the Redshift Serverless console, choose Workgroup configuration in the navigation pane.
  2. Choose a workgroup from the list.
  3. On the Data access tab, in the Redshift managed VPC endpoints section, choose Create endpoint.
  4. Enter the endpoint name. Create a name that is meaningful for your organization.
  5. The AWS account ID will be populated. This is your 12-digit account ID.
  6. Choose a VPC where the endpoint will be created.
  7. Choose a subnet ID. In the most common use case, this is a subnet where you have a client that you want to connect to your Redshift Serverless instance.
  8. Choose which VPC security groups to add. Each security group acts as a virtual firewall that controls inbound and outbound traffic to the resources it protects, in this case the Redshift Serverless managed VPC endpoint.

The following screenshot shows an example of this workgroup. Note down the IP address to use during the creation of the target group.

Repeat these steps to create all your Redshift Serverless workgroups.
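If you prefer to script this step instead of using the console, the following sketch shows how the same managed VPC endpoints could be created with the Redshift Serverless API through boto3. The workgroup names, subnet ID, and security group ID are hypothetical; the private IP addresses that you register later come from each endpoint's network interfaces once the endpoint is active.

```python
import boto3

# Hypothetical workgroup names, subnet, and security group IDs.
workgroups = ["analytics-wg-1", "analytics-wg-2", "analytics-wg-3"]
subnet_ids = ["subnet-0123456789abcdef0"]
security_group_ids = ["sg-0123456789abcdef0"]

redshift_serverless = boto3.client("redshift-serverless")

for wg in workgroups:
    # Create one managed VPC endpoint per workgroup.
    created = redshift_serverless.create_endpoint_access(
        endpointName=f"{wg}-nlb-endpoint",
        workgroupName=wg,
        subnetIds=subnet_ids,
        vpcSecurityGroupIds=security_group_ids,
    )
    print(created["endpoint"]["endpointName"], created["endpoint"]["endpointStatus"])

# After an endpoint becomes ACTIVE, its network interfaces expose the private
# IP addresses to register as Network Load Balancer targets.
for wg in workgroups:
    endpoint = redshift_serverless.get_endpoint_access(
        endpointName=f"{wg}-nlb-endpoint"
    )["endpoint"]
    for eni in endpoint.get("vpcEndpoint", {}).get("networkInterfaces", []):
        print(wg, eni.get("privateIpAddress"))
```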

Add VPC endpoints for the target group for the Network Load Balancer

To add these VPC endpoints to the target group for the Network Load Balancer using Amazon Elastic Compute Cloud (Amazon EC2), complete the following steps:

  1. On the Amazon EC2 console, choose Target groups under Load Balancing in the navigation pane.
  2. Choose Create target group.
  3. For Choose a target type, select IP addresses, because Redshift Serverless managed VPC endpoints are registered as targets by IP address.
  4. For Target group name, enter a name for the target group.
  5. For Protocol, choose TCP or TCP_UDP.
  6. For Port, use 5439 (Amazon Redshift port).
  7. For IP address type, choose IPv4 or IPv6. This option is available only if the target type is Instances or IP addresses and the protocol is TCP or TLS. You must associate an IPv6 target group with a dual-stack load balancer, all targets in the target group must have the same IP address type, and you can't change the IP address type of a target group after you create it.
  8. For VPC, choose the VPC with the targets to register.
  9. Leave the default selections for the Health checks, Attributes, and Tags sections.
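The same target group setup can be scripted. The following sketch, with hypothetical VPC and IP values, creates an IP-based TCP target group on port 5439 and registers the private IP addresses noted from the managed VPC endpoints:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical values -- use the VPC of your targets and the private IPs noted
# from each Redshift Serverless managed VPC endpoint.
vpc_id = "vpc-0123456789abcdef0"
endpoint_ips = ["10.0.1.15", "10.0.2.27", "10.0.3.41"]

target_group = elbv2.create_target_group(
    Name="redshift-serverless-tg",
    Protocol="TCP",
    Port=5439,        # Amazon Redshift port
    VpcId=vpc_id,
    TargetType="ip",  # managed VPC endpoints are registered by IP address
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": ip, "Port": 5439} for ip in endpoint_ips],
)
```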

Create a load balancer

After you create the target group, you can create your load balancer. We recommend using port 5439 (the Amazon Redshift default port) for the listener.

The Network Load Balancer serves as a single-access endpoint and will be used on connections to reach Amazon Redshift. This allows you to add more Redshift Serverless workgroups and increase the concurrency transparently.
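As a rough illustration, the following sketch creates an internal Network Load Balancer and a TCP listener on port 5439 that forwards to the target group from the previous step. The target group ARN and subnet IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder values -- reuse the target group ARN from the previous step and
# choose the subnets where the load balancer should place its nodes.
tg_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/redshift-serverless-tg/0123456789abcdef"
subnet_ids = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]

nlb = elbv2.create_load_balancer(
    Name="redshift-serverless-nlb",
    Type="network",
    Scheme="internal",  # keep the single-access endpoint private to your VPCs
    Subnets=subnet_ids,
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# Listen on the Redshift port and forward connections to the workgroup targets.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=5439,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```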

Testing the solution

We tested this architecture to run three BI reports with the TPC-DS dataset (cloud benchmark dataset) as our data. Amazon Redshift includes this dataset for free when you choose to load sample data (sample_data_dev database). The installation also provides the queries to test the setup.

Among all the queries in the TPC-DS benchmark, we chose the following three as our report queries. We changed the first two report queries to use CREATE TABLE AS SELECT (CTAS) queries on temporary tables instead of the WITH clause to emulate the options you typically see in a BI tool. For our testing, we also disabled the result cache so that Amazon Redshift would run the queries every time.

The set of queries contains the creation of temporary tables, a join between those tables, and the cleanup. The cleanup step drops the tables; this isn't strictly needed because temporary tables are deleted at the end of the session, but it simulates everything a typical BI tool does.
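The following sketch illustrates the shape of such a report, not the actual TPC-DS report queries we ran: it turns off the result cache, builds temporary tables with CTAS, joins them, and then drops them, submitted here through the Redshift Data API. Note that the Data API targets a workgroup directly; the load test itself used JDBC connections through the Network Load Balancer.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Illustrative statements only -- the tables exist in the sample_data_dev
# database, but these are not the report queries used in the test.
report_statements = [
    "SET enable_result_cache_for_session TO off",
    "CREATE TEMP TABLE tmp_store AS "
    "SELECT ss_store_sk, SUM(ss_net_paid) AS total FROM store_sales GROUP BY ss_store_sk",
    "CREATE TEMP TABLE tmp_web AS "
    "SELECT ws_web_site_sk, SUM(ws_net_paid) AS total FROM web_sales GROUP BY ws_web_site_sk",
    "SELECT COUNT(*) FROM tmp_store s JOIN tmp_web w ON s.total > w.total",
    "DROP TABLE tmp_store",
    "DROP TABLE tmp_web",
]

response = redshift_data.batch_execute_statement(
    WorkgroupName="analytics-wg-1",  # hypothetical workgroup name
    Database="sample_data_dev",
    Sqls=report_statements,
)
print(response["Id"])
```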

We used Apache JMeter to simulate clients invoking the requests. To learn more about how to use and configure Apache JMeter with Amazon Redshift, refer to Building high-quality benchmark tests for Amazon Redshift using Apache JMeter.

For the tests, we used the following configurations:

  • Test 1 – A single 96 RPU Redshift Serverless workgroup vs. three workgroups at 32 RPU each
  • Test 2 – A single 48 RPU Redshift Serverless workgroup vs. three workgroups at 16 RPU each

We tested three reports by spawning 100 sessions per report (300 total). There were 14 statements across the three reports (4,200 total). All sessions were triggered simultaneously.

The following table summarizes the tables used in the test.

Table Name Row Count
Catalog_page 93,744
Catalog_sales 23,064,768
Customer_address 50,000
Customer 100,000
Date_dim 73,049
Item 144,000
Promotion 2,400
Store_returns 4,600,224
Store_sales 46,086,464
Store 96
Web_returns 1,148,208
Web_sales 11,510,144
Web_site 240

Some tables were modified by ingesting more data than the TPC-DS schema provides on Amazon Redshift; data was reinserted into these tables to increase their size.

Test results

The following table summarizes our test results.

TEST 1 Time Consumed Number of Queries Cost Max Scaled RPU Performance
Single: 96 RPUs 0:02:06 2,100 $6 279 Base
Parallel: 3x 32 RPUs 0:01:06 2,100 $1.20 96 48.03%
Parallel 1 (32 RPU) 0:01:03 688 $0.40 32 50.10%
Parallel 2 (32 RPU) 0:01:03 703 $0.40 32 50.13%
Parallel 3 (32 RPU) 0:01:06 709 $0.40 32 48.03%
TEST 2 Time Consumed Number of Queries Cost Max Scaled RPU Performance
Single: 48 RPUs 0:01:55 2,100 $3.30 168 Base
Parallel: 3x 16 RPUs 0:01:47 2,100 $1.90 96 6.77%
Parallel 1 (16 RPU) 0:01:47 712 $0.70 36 6.77%
Parallel 2 (16 RPU) 0:01:44 696 $0.50 25 9.13%
Parallel 3 (16 RPU) 0:01:46 692 $0.70 35 7.79%

The preceding table shows that the parallel setup was faster than the single at a lower cost. Also, in our tests, even though Test 1 had double the capacity of Test 2 for the parallel setup, the cost was still 36% lower and the speed was 39% faster. Based on these results, we can conclude that for workloads that have high throughput (I/O), low latency, and high concurrency requirements, this architecture is cost-efficient and performant. Refer to the AWS Pricing Cost Calculator for Network Load Balancer and VPC endpoints pricing.

Redshift Serverless automatically scales the capacity to deliver optimal performance during periods of peak workloads including spikes in concurrency of the workload. This is evident from the maximum scaled RPU results in the preceding table.

Recently released features of Redshift Serverless such as MaxRPU and AI-driven scaling were not used for this test. These new features can increase the price-performance of the workload even further.

We recommend enabling cross-zone load balancing on the Network Load Balancer so that it distributes requests from clients to registered targets across all enabled Availability Zones. Enabling cross-zone load balancing helps balance the requests among the Redshift Serverless managed VPC endpoints irrespective of the Availability Zone they are configured in. Also, if the Network Load Balancer receives traffic from only one server (the same source IP), you should use an odd number of Redshift Serverless managed VPC endpoints behind the Network Load Balancer.
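Cross-zone load balancing is a load balancer attribute that you can turn on after creation. A minimal sketch, assuming the Network Load Balancer created earlier:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN -- replace with the ARN of your Network Load Balancer.
nlb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
    "loadbalancer/net/redshift-serverless-nlb/0123456789abcdef"
)

# Spread requests across managed VPC endpoints in all enabled Availability Zones.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=nlb_arn,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```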

Conclusion

In this post, we discussed a scalable architecture that increases the throughput of Redshift Serverless in low latency, high concurrency scenarios. Having multiple Redshift Serverless workgroups behind a Network Load Balancer can deliver a horizontally scalable solution at the best price-performance.

Additionally, Redshift Serverless uses AI techniques (currently in preview) to scale automatically with workload changes across all key dimensions—such as data volume changes, concurrent users, and query complexity—to meet and maintain your price-performance targets.

We hope this post provides you with valuable guidance. We welcome any thoughts or questions in the comments section.


About the Authors

Ricardo Serafim is a Senior Analytics Specialist Solutions Architect at AWS.

Harshida Patel is an Analytics Specialist Principal Solutions Architect with AWS.

Urvish Shah is a Senior Database Engineer at Amazon Redshift. He has more than a decade of experience working on databases, data warehousing, and analytics. Outside of work, he enjoys cooking, travelling, and spending time with his daughter.

Amol Gaikaiwari is a Sr. Redshift Specialist focused on helping customers realize their business outcomes with optimal Redshift price-performance. He loves to simplify data pipelines and enhance capabilities through the adoption of the latest Redshift features.

Creating an organizational multi-Region failover strategy

Post Syndicated from Michael Haken original https://aws.amazon.com/blogs/architecture/creating-an-organizational-multi-region-failover-strategy/

AWS Regions provide fault isolation boundaries that prevent correlated failure and contain the impact from AWS service impairments to a single Region when they occur. You can use these fault boundaries to build multi-Region applications that consist of independent, fault-isolated replicas in each Region that limit shared fate scenarios. This allows you to build multi-Region applications and leverage a spectrum of approaches from backup and restore to pilot light to active/active to implement your multi-Region architecture. However, applications typically don’t operate in isolation; consider both the components you will use and their dependencies as part of your failover strategy. Generally, multiple applications make up what we refer to as a user story, a specific capability offered to an end user, like “posting a picture and caption on a social media app” or “checking out on an e-commerce site”. Because of this, you should develop an organizational multi-Region failover strategy that provides the necessary coordination and consistency to make your approach successful.

Overview

There are four high-level strategies that organizations can pick from to guide a multi-Region approach:

  • Component-level failover
  • Individual application failover
  • Dependency graph failover
  • Entire application portfolio failover

These strategies move from the most granular to the coarsest approach. Each strategy has tradeoffs and addresses different challenges, including flexibility of failover decision making, testability of the failover combinations, presence of modal behavior, and organizational investment in planning and implementation. By the end of this post, you will be able to identify the pros and cons of each strategy so you can make intentional choices about which you select for your multi-Region failover solution.

Component-level failover

Applications are made up of multiple components, including their infrastructure, code and config, data stores, and dependencies. The component-level failover strategy helps you recover from individual component impairments. This means that when a single component is impaired, the application will fail over to a component hosted in a different Region. Consider the application in Figure 1. When the Amazon Simple Storage Service (Amazon S3) resources used by the application experience elevated error rates or higher latency, the application fails over to use data from an S3 bucket in its secondary Region.

Figure 1. When the application experiences an impairment using S3 resources in the primary Region, it fails over to use an S3 bucket in the secondary Region.

This strategy gives the most autonomy and flexibility to individual applications, but has four main tradeoffs:

  • It adds latency by using resources in a second Region because they are physically farther away. This gives the application multiple modes of behavior: lower latency when all components are in one Region, and higher latency when the components are split between Regions. Modal behavior can produce unexpected and undesirable results.
  • It introduces the possibility for inconsistent data if asynchronous replication is used in the data store.
  • It typically requires a runtime update of the application’s configuration to switch a component to a different Region, which can be unreliable during a failure scenario.
  • There are 2^N - 1 possible configurations of the application (where N is the number of components in the application), which can make testing every possible combination difficult.

Individual application failover

The next strategy allows individual applications to make an autonomous decision to fail over all of its components together, shown in Figure 2. This removes the latency tradeoff from the previous strategy by keeping all of the application components in the same Region. It also significantly reduces the complexity by only having two possible configurations per application. Additionally, applications can be failed over to another Region without updating their configuration by using approaches like Amazon Route 53 DNS failover, removing the unreliability of runtime configuration updates.

Figure 2. Application 3 experiences an impairment and fails over to the secondary Region
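To illustrate the DNS-based approach mentioned above, the following sketch creates a primary/secondary failover record pair in Amazon Route 53. The hosted zone, record names, regional endpoints, and health check ID are placeholders; your application would typically point these records at regional load balancers or API endpoints.

```python
import boto3

route53 = boto3.client("route53")

# Placeholder values for illustration only.
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"
RECORD_NAME = "app.example.com"
PRIMARY_HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"


def failover_change(set_id, role, target, health_check_id=None):
    # Build one UPSERT change for a failover routing record.
    record = {
        "Name": RECORD_NAME,
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}


route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            failover_change("primary", "PRIMARY",
                            "app.us-east-1.example.com", PRIMARY_HEALTH_CHECK_ID),
            failover_change("secondary", "SECONDARY",
                            "app.us-west-2.example.com"),
        ]
    },
)
```

When the primary health check fails, Route 53 answers queries for the application record with the secondary Region's endpoint, so the application fails over without a runtime configuration change.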

However, allowing individual applications to make their own failover decision can introduce the same modal behavior we saw with component-level failover, just in a different dimension. In the worst case, 50% of the applications in a user story could fail over while 50% don’t, meaning every application interaction could be a cross-Region request, shown in Figure 3.

Figure 3. The worst-case scenario of allowing applications to make failover decisions independently

Additionally, while this approach removes the complexity of the component failover approach, it still exhibits a similar, albeit smaller, form of complexity: there are 2^N - 1 combinations of application locations across Regions (where N is now the number of applications), which also makes this approach difficult to test and coordinate.

Dependency graph failover

To solve the complexity of the previous strategy, you might decide to coordinate failover of all applications that support a user story as a single unit. We call this a dependency graph and it ensures that all applications that interact with each other will always be in the same Region, as shown in Figure 4.

Figure 4. A dependency graph of applications that all support user story “A”

While this solves the previous latency, modal behavior, and complexity tradeoffs, it comes with its own challenges. In a portfolio with multiple user stories and applications, this graph can be very large and discovering each dependency, especially infrequently used ones, can be difficult. In fact, seemingly unrelated dependency graphs can be connected by a single vertex that is shared between them, as shown in Figure 5.

Figure 5. Two unrelated user stories share a dependency on Application 4, requiring both dependency graphs to fail over if either experiences an impairment

For example, if every user story you provide depends on a single authentication and authorization system, when one graph of applications needs to failover, then so does the entire authorization system. In turn, every other user story that depends on that authorization system needs to fail over as well. To mitigate this, you might implement independent replicas of these types of applications in each Region, if possible, to remove edges from the dependency graph.

Entire portfolio failover

The final strategy is failing over an entire application portfolio, whether or not applications are impacted or have any interaction with those that are, as shown in Figure 6. This strategy helps remove the operational burden of creating and maintaining dependency graphs for every user story your business supports.

Figure 6. Every user story fails over together regardless of observed impact from a failure

The major tradeoff is the organizational investment to create multi-Region capabilities for every application – you might not have made that broad investment in the other strategies. You can make this strategy slightly more granular by implementing it for specific application tiers, for example, failing over all tier-1 applications together, as long as you know there aren’t dependencies across applications of different criticality.

You can also combine this approach with the second strategy. Let individual applications make failover decisions until you see broad enough impact, or impact from the modal behavior, that you decide to make all applications fail over to your secondary Region to mitigate the effects.

Conclusion

This blog post has looked at four different high-level approaches for creating an organizational multi-Region failover strategy.

Each strategy optimizes for different outcomes. Component-level failover gives you the highest degree of flexibility without organizational capabilities or coordination, but introduces the most complexity and bimodal behavior. Individual application failover optimizes for less complexity in failover combinations than component-level while still maintaining decentralized flexibility in failover decision making. Dependency graph failover optimizes for only needing to failover the minimum set of applications to support a capability, which removes the presence of modal behavior while requiring more organizational investment to do so. Finally, portfolio failover optimizes for not needing to maintain dependency graphs, but requires significant additional investment to build a multi-Region capability for every application.

Creating the strategy can be an iterative journey. You might start with allowing individual applications to make failover decisions while you build toward a future state of managing failover of independent dependency graphs. For more information on creating multi-Region architectures, see AWS Multi-Region Fundamentals and Disaster Recovery of Workloads on AWS.

Let’s Architect! Discovering Generative AI on AWS

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-generative-ai/

Generative artificial intelligence (generative AI) is a type of AI used to generate content, including conversations, images, videos, and music. Generative AI can be used directly to build customer-facing features (a chatbot or an image generator), or it can serve as an underlying component in a more complex system. For example, it can generate embeddings (or compressed representations) or any other artifact necessary to improve downstream machine learning (ML) models or back-end services.

With the advent of generative AI, it’s fundamental to understand what it is, how it works under the hood, and which options are available for putting it into production. In some cases, it can also be helpful to move closer to the underlying model in order to fine tune or drive domain-specific improvements. With this edition of Let’s Architect!, we’ll cover these topics and share an initial set of methodologies to put generative AI into production. We’ll start with a broad introduction to the domain and then share a mix of videos, blogs, and hands-on workshops.

Navigating the future of AI 

Many teams are turning to open source tools running on Kubernetes to help accelerate their ML and generative AI journeys. In this video session, experts discuss why Kubernetes is ideal for ML, then tackle challenges like dependency management and security. You will learn how tools like Ray, JupyterHub, Argo Workflows, and Karpenter can accelerate your path to building and deploying generative AI applications on Amazon Elastic Kubernetes Service (Amazon EKS). A real-world example showcases how Adobe leveraged Amazon EKS to achieve faster time-to-market and reduced costs. You will also be introduced to Data on EKS, a new AWS project offering best practices for deploying various data workloads on Amazon EKS.

Take me to this video!

Figure 1. Containers are a powerful tool for creating reproducible research and production environments for ML.

Generative AI: Architectures and applications in depth

This video session aims to provide an in-depth exploration of the emerging concepts in generative AI. By delving into practical applications and detailing best practices for implementation, the session offers a concrete understanding that empowers businesses to harness the full potential of these technologies. You can gain valuable insights into navigating the complexities of generative AI, equipping you with the knowledge and strategies necessary to stay ahead of the curve and capitalize on the transformative power of these new methods. If you want to dive even deeper, check this generative AI best practices post.

Take me to this video!

Figure 2. Models are growing exponentially: improved capabilities come with higher costs for productionizing them.

SaaS meets AI/ML & generative AI: Multi-tenant patterns & strategies

Working with AI/ML workloads and generative AI in a production environment requires appropriate system design and careful considerations for tenant separation in the context of SaaS. You’ll need to think about how the different tenants are mapped to models, how inferencing is scaled, how solutions are integrated with other upstream/downstream services, and how large language models (LLMs) can be fine-tuned to meet tenant-specific needs.

This video drills down into the concept of multi-tenancy for AI/ML workloads, including the common design, performance, isolation, and experience challenges that you can find during your journey. You will also become familiar with concepts like RAG (used to enrich the LLMs with contextual information) and fine tuning through practical examples.

Take me to this video!

Figure 3. Supporting different tenants might need fetching different context information with RAGs or offering different options for fine-tuning.

Achieve DevOps maturity with BMC AMI zAdviser Enterprise and Amazon Bedrock

DevOps Research and Assessment (DORA) metrics, which measure critical DevOps performance indicators like lead time, are essential to engineering practices, as shown in the Accelerate book‘s research. By leveraging generative AI technology, the zAdviser Enterprise platform can now offer in-depth insights and actionable recommendations to help organizations optimize their DevOps practices and drive continuous improvement. This blog demonstrates how generative AI can go beyond language or image generation, applying to a wide spectrum of domains.

Take me to this blog post!

Figure 4. Generative AI is used to provide summarization, analysis, and recommendations for improvement based on the DORA metrics.

Hands-on Generative AI: AWS workshops

Getting hands on is often the best way to understand how everything works in practice and create the mental model to connect theoretical foundations with some real-world applications.

Generative AI on Amazon SageMaker shows how you can build, train, and deploy generative AI models. You can learn about options to fine-tune, use out-of-the-box existing models, or even customize the existing open source models based on your needs.

Building with Amazon Bedrock and LangChain demonstrates how an existing fully managed service provided by AWS can be used when you work with foundation models, covering a wide variety of use cases. Also, if you want a quick guide for prompt engineering, you can check out the PartyRock lab in the workshop.

Figure 5. An image replacement example that you can find in the workshop.

See you next time!

Thanks for reading! We hope you got some insight into the applications of generative AI and discovered new strategies for using it. In the next blog, we will dive deeper into machine learning.

To revisit any of our previous posts or explore the entire series, visit the Let’s Architect! page.

How the unique culture of security at AWS makes a difference

Post Syndicated from Chris Betz original https://aws.amazon.com/blogs/security/how-the-unique-culture-of-security-at-aws-makes-a-difference/

Our customers depend on Amazon Web Services (AWS) for their mission-critical applications and most sensitive data. Every day, the world’s fastest-growing startups, largest enterprises, and most trusted governmental organizations are choosing AWS as the place to run their technology infrastructure. They choose us because security has been our top priority from day one. We designed AWS from its foundation to be the most secure way for our customers to run their workloads, and we’ve built our internal culture around security as a business imperative.

While technical security measures are important, organizations are made up of people. A recent report from the Cyber Safety Review Board (CSRB) makes it clear that a deficient security culture can be a root cause for avoidable errors that allow intrusions to succeed and remain undetected.

Security is our top priority

Our security culture starts at the top, and it extends through every part of our organization. Over eight years ago, we made the decision for our security team to report directly to our CEO. This structural design redefined how we build security into the culture of AWS and informs everyone at the company that security is our top priority by providing direct visibility to senior leadership. We empower our service teams to fully own the security of their services and scale security best practices and programs so our customers have the confidence to innovate on AWS.

We believe that there are four key principles to building a strong culture of security:

  1. Security is built into our organizational structure

    At AWS, we view security as a core function of our business, deeply connected to our mission objectives. This goes beyond good intentions—it’s embedded directly into our organizational structure. At Amazon, we make an intentional choice for all our security teams to report directly to the CEO while also being deeply embedded in our respective business units. The goal is to build security into the structural fabric of how we make decisions. Every week, the AWS leadership team, led by our CEO, meets with my team to discuss security and ensure we’re making the right choices on tactical and strategic security issues and course-correcting when needed. We report internally on operational metrics that tie our security culture to the impact that it has on our customers, connecting data to business outcomes and providing an opportunity for leadership to engage and ask questions. This support for security from the top levels of executive leadership helps us reinforce the idea that security is accelerating our business outcomes and improving our customers’ experiences rather than acting as a roadblock.

  2. Security is everyone’s job

    AWS operates with a strong ownership model built around our culture of security. Ownership is one of our key Leadership Principles at Amazon. Employees in every role receive regular training and reinforcement of the message that security is everyone’s job. Every service and product team is fully responsible for the security of the service or capability that they deliver. Security is built into every product roadmap, engineering plan, and weekly stand-up meeting, just as much as capabilities, performance, cost, and other core responsibilities of the builder team. The best security is not something that can be “bolted on” at the end of a process or on the outside of a system; rather, security is integral and foundational.

    AWS business leaders prioritize building products and services that are designed to be secure. At the same time, they strive to create an environment that encourages employees to identify and escalate potential security concerns even when uncertain about whether there is an actual issue. Escalation is a normal part of how we work in AWS, and our practice of escalation provides a “security reporting safe space” to everyone. Our teams and individuals are encouraged to report and escalate any possible security issues or concerns with a high-priority ticket to the security team. We would much rather hear about a possible security concern and investigate it, regardless of whether it is unlikely or not. Our employees know that we welcome reports even for things that turn out to be nonissues.

  3. Distributing security expertise and ownership across AWS

    Our central AWS Security team provides a number of critical capabilities and services that support and enable our engineering and service teams to fulfill their security responsibilities effectively. Our central team provides training, consultation, threat-modeling tools, automated code-scanning frameworks and tools, design reviews, penetration testing, automated API test frameworks, and—in the end—a final security review of each new service or new feature. The security reviewer is empowered to make a go or no-go decision with respect to each and every release. If a service or feature does not pass the security review process in the first review, we dive deep to understand why so we can improve processes and catch issues earlier in development. But, releasing something that’s not ready would be an even bigger failure, so we err on the side of maintaining our high security bar and always trying to deliver to the high standards that our customers expect and rely on.

    One important mechanism to distribute security ownership that we’ve developed over the years is the Security Guardians program. The Security Guardians program trains, develops, and empowers service team developers in each two-pizza team to be security ambassadors, or Guardians, within the product teams. At a high level, Guardians are the “security conscience” of each team. They make sure that security considerations for a product are made earlier and more often, helping their peers build and ship their product faster, while working closely with the central security team to help ensure the security bar remains high at AWS. Security Guardians feel empowered by being part of a cross-organizational community while also playing a critical role for the team and for AWS as a whole.

  4. Scaling security through innovation

    Another way we scale security across our culture at AWS is through innovation. We innovate to build tools and processes to help all of our people be as effective as possible and maintain focus. We use artificial intelligence (AI) to accelerate our secure software development process, as well as new generative AI–powered features in Amazon Inspector, Amazon Detective, AWS Config, and Amazon CodeWhisperer that complement the human skillset by helping people make better security decisions, using a broader collection of knowledge. This pattern of combining sophisticated tooling with skilled engineers is highly effective because it positions people to make the nuanced decisions required for effective security.

    For large organizations, it can take years to assess every scenario and prove systems are secure. Even then, their systems are constantly changing. Our automated reasoning tools use mathematical logic to answer critical questions about infrastructure to detect misconfigurations that could potentially expose data. This provable security provides higher assurance in the security of the cloud and in the cloud. We apply automated reasoning in key service areas such as storage, networking, virtualization, identity, and cryptography. Amazon scientists and engineers also use automated reasoning to prove the correctness of critical internal systems. We process over a billion mathematical queries per day that power AWS Identity and Access Management Access Analyzer, Amazon Simple Storage Service (Amazon S3) Block Public Access, and other security offerings. AWS is the first and only cloud provider to use automated reasoning at this scale.

Advancing the future of cloud security

At AWS, we care deeply about our culture of security. We’re consistently working backwards from our customers and investing in raising the bar on our security tools and capabilities. For example, AWS enables encryption of everything. AWS Key Management Service (AWS KMS) is the first and only highly scalable, cloud-native key management system that is also FIPS 140-2 Level 3 certified. No one can retrieve customer plaintext keys, not even the most privileged admins within AWS. With the AWS Nitro System, which is the foundation of the AWS compute service Amazon Elastic Compute Cloud (Amazon EC2), we designed and delivered first-of-a-kind and still unique in the industry innovation to maximize the security of customers’ workloads. The Nitro System provides industry-leading privacy and isolation for all their compute needs, including GPU-based computing for the latest generative AI systems. No one, not even the most privileged admins within AWS, can access a customer’s workloads or data in Nitro-based EC2 instances.

We continue to innovate on behalf of our customers so they can move quickly, securely, and with confidence to enable their businesses, and our track record in the area of cloud security is second to none. That said, cybersecurity challenges continue to evolve, and while we’re proud of our achievements to date, we’re committed to constant improvement as we innovate and advance our technologies and our culture of security.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Chris Betz

Chris is CISO at AWS. He oversees security teams and leads the development and implementation of security policies with the aim of managing risk and aligning the company’s security posture with business objectives. Chris joined Amazon in August 2023 after holding CISO and security leadership roles at leading companies. He lives in Northern Virginia with his family.

Power analytics as a service capabilities using Amazon Redshift

Post Syndicated from Sandipan Bhaumik original https://aws.amazon.com/blogs/big-data/power-analytics-as-a-service-capabilities-using-amazon-redshift/

Analytics as a service (AaaS) is a business model that uses the cloud to deliver analytic capabilities on a subscription basis. This model provides organizations with a cost-effective, scalable, and flexible solution for building analytics. The AaaS model accelerates data-driven decision-making through advanced analytics, enabling organizations to swiftly adapt to changing market trends and make informed strategic choices.

Amazon Redshift is a cloud data warehouse service that offers real-time insights and predictive analytics capabilities for analyzing data from terabytes to petabytes. It offers features like data sharing, Amazon Redshift ML, Amazon Redshift Spectrum, and Amazon Redshift Serverless, which simplify application building and make it effortless for AaaS companies to embed rich data analytics capabilities. Amazon Redshift delivers up to 4.9 times lower cost per user and up to 7.9 times better price-performance than other cloud data warehouses.

The Powered by Amazon Redshift program helps AWS Partners operating an AaaS model quickly build analytics applications using Amazon Redshift and successfully scale their business. For example, you can build visualizations on top of Amazon Redshift and embed them within applications to provide outstanding analytics experiences for end-users. In this post, we explore how AaaS providers scale their processes with Amazon Redshift to deliver insights to their customers.

AaaS delivery models

While serving analytics at scale, AaaS providers and customers can choose where to store the data and where to process the data.

AaaS providers could choose to ingest and process all the customer data into their own account and deliver insights to the customer account. Alternatively, they could choose to directly process data in-place within the customer’s account.

The choice of these delivery models depends on many factors, and each has their own benefits. Because AaaS providers service multiple customers, they could mix these models in a hybrid fashion, meeting each customer’s preference. The following diagram illustrates the two delivery models.

We explore the technical details of each model in the next sections.

Build AaaS on Amazon Redshift

Amazon Redshift has features that allow AaaS providers the flexibility to deploy three unique delivery models:

  • Managed model – Processing data within the Redshift data warehouse the AaaS provider manages
  • Bring-your-own-Redshift (BYOR) model – Processing data directly within the customer’s Redshift data warehouse
  • Hybrid model – Using a mix of both models depending on customer needs

These delivery models give AaaS providers the flexibility to deliver insights to their customers no matter where the data warehouse is located.

Let’s look at how each of these delivery models work in practice.

Managed model

In this model, the AaaS provider ingests customer data into their own account and uses their own Redshift data warehouse for processing. They then use one or more methods to deliver the generated insights to their customers. Amazon Redshift enables companies to securely build multi-tenant applications, ensuring data isolation, integrity, and confidentiality. It provides features like row-level security (RLS) and column-level security (CLS) for fine-grained access control, role-based access control (RBAC), and the ability to assign permissions at the database and schema level.
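As one illustration of tenant isolation in the managed model, the following sketch applies a row-level security policy so that each tenant's database user sees only its own rows. The table, role, workgroup, and database names are hypothetical, and the right isolation design (RLS, CLS, separate schemas, or separate databases) depends on your application.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Hypothetical object names used only for illustration.
statements = [
    # Rows are visible only when tenant_id matches the connected database user.
    "CREATE RLS POLICY tenant_isolation "
    "WITH (tenant_id VARCHAR(32)) USING (tenant_id = current_user)",
    "ATTACH RLS POLICY tenant_isolation ON sales_insights TO ROLE tenant_role",
    "ALTER TABLE sales_insights ROW LEVEL SECURITY ON",
]

for sql in statements:
    redshift_data.execute_statement(
        WorkgroupName="aaas-provider-wg",
        Database="analytics",
        Sql=sql,
    )
```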

The following diagram illustrates the managed delivery model and the various methods AaaS providers can use to deliver insights to their customers.

The workflow includes the following steps:

  1. The AaaS provider pulls data from customer data sources like operational databases, files, and APIs, and ingests them into the Redshift data warehouse hosted in their account.
  2. Data processing jobs enrich the data in Amazon Redshift. This could be an application the AaaS provider has built to process data, or they could use a data processing service like Amazon EMR or AWS Glue to run Spark applications.
  3. Now the AaaS provider has multiple methods to deliver insights to their customers:
    1. Option 1 – The enriched data with insights is shared directly with the customer’s Redshift instance using the Amazon Redshift data sharing feature. End-users consume data using business intelligence (BI) tools and analytics applications.
    2. Option 2 – If AaaS providers are publishing generic insights to AWS Data Exchange to reach millions of AWS customers and monetize those insights, their customers can use AWS Data Exchange for Amazon Redshift. With this feature, customers get instant insights in their Redshift data warehouse without having to write extract, transform, and load (ETL) pipelines to ingest the data. AWS Data Exchange provides their customers a secure and compliant way to subscribe to the data with consolidated billing and subscription management.
    3. Option 3 – The AaaS provider exposes insights through a web application that uses the Amazon Redshift Data API. Customers access the web application directly from the internet. This gives the AaaS provider the flexibility to expose insights outside an AWS account (a minimal sketch follows this list).
    4. Option 4 – Customers connect to the AaaS provider’s Redshift instance using Amazon QuickSight or other third-party BI tools through a JDBC connection.
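The following sketch illustrates option 3: a web backend in the AaaS provider's account uses the Redshift Data API to fetch precomputed insights for a customer. The workgroup, database, table, and column names are placeholders, and the simple polling loop is only for illustration.

```python
import time
import boto3

redshift_data = boto3.client("redshift-data")

# Hypothetical identifiers for the provider-hosted data warehouse.
WORKGROUP = "aaas-provider-wg"
DATABASE = "analytics"


def fetch_insights(customer_id: str):
    # Run a parameterized query against the insights table.
    statement = redshift_data.execute_statement(
        WorkgroupName=WORKGROUP,
        Database=DATABASE,
        Sql="SELECT metric_name, metric_value FROM insights WHERE customer_id = :customer_id",
        Parameters=[{"name": "customer_id", "value": customer_id}],
    )
    statement_id = statement["Id"]

    # The Data API is asynchronous; poll until the statement finishes.
    while True:
        status = redshift_data.describe_statement(Id=statement_id)["Status"]
        if status in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(0.5)

    if status != "FINISHED":
        raise RuntimeError(f"Query ended with status {status}")

    result = redshift_data.get_statement_result(Id=statement_id)
    return [
        {"metric": row[0].get("stringValue"), "value": list(row[1].values())[0]}
        for row in result["Records"]
    ]
```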

In this model, the customer shifts the responsibility for data management and governance to the AaaS provider and only needs lightweight services to consume the insights. This leads to improved decision-making because customers can focus on core activities and save the time otherwise spent on tedious data management tasks. Because AaaS providers move data out of the customer accounts, there can be associated data transfer costs depending on how they move the data. However, because they deliver this service at scale to multiple customers, they can offer cost-efficient services using economies of scale.

BYOR model

In cases where the customer hosts a Redshift data warehouse and wants to run analytics in their own data platform without moving data out, you use the BYOR model.

The following diagram illustrates the BYOR model, where AaaS providers process data to add insights directly in their customer’s data warehouse so the data never leaves the customer account.

The solution includes the following steps:

  1. The customer ingests all the data from various data sources into their Redshift data warehouse.
  2. The data undergoes processing:
    1. The AaaS provider uses a secure channel, AWS PrivateLink for the Redshift Data API, to push data processing logic directly in the customer’s Redshift data warehouse.
    2. They use the same channel to process data at scale with multiple customers. The diagram illustrates a second customer, but this can scale to hundreds or thousands of customers. AaaS providers can tailor data processing logic per customer by isolating scripts for each customer and deploying them according to the customer’s identity, providing a customized and efficient service.
  3. The customer’s end-users consume data from their own account using BI tools and analytics applications.
  4. The customer has control over how to expose insights to their end-users.

This delivery model allows customers to manage their own data, reducing dependency on AaaS providers and cutting data transfer costs. By keeping data in their own environment, customers can reduce the risk of data breach while benefiting from insights for better decision-making.

Hybrid model

Customers have diverse needs influenced by factors like data security, compliance, and technical expertise. To cover a broader range of customers, AaaS providers can choose a hybrid approach that delivers both the managed model and the BYOR model depending on the customer, offering flexibility and the ability to serve multiple customers.

The following diagram illustrates the AaaS provider delivering insights through the BYOR model for Customers 1 and 4, the managed model for Customers 2 and 3, and so on.

Conclusion

In this post, we talked about the rising demand of analytics as a service and how providers can use the capabilities of Amazon Redshift to deliver insights to their customers. We examined two primary delivery models: the managed model, where AaaS providers process data on their own accounts, and the BYOR model, where AaaS providers process and enrich data directly in their customer’s account. Each method offers unique benefits, such as cost-efficiency, enhanced control, and personalized insights. The flexibility of the AWS Cloud facilitates a hybrid model, accommodating diverse customer needs and allowing AaaS providers to scale. We also introduced the Powered by Amazon Redshift program, which supports AaaS businesses in building effective analytics applications, fostering improved user engagement and business growth.

We take this opportunity to invite our ISV partners to reach out to us and learn more about the Powered by Amazon Redshift program.


About the Authors

Sandipan Bhaumik is a Senior Analytics Specialist Solutions Architect based in London, UK. He helps customers modernize their traditional data platforms using the modern data architecture in the cloud to perform analytics at scale.

Sain Das is a Senior Product Manager on the Amazon Redshift team and leads Amazon Redshift GTM for partner programs, including the Powered by Amazon Redshift and Redshift Ready programs.

How AWS can help you navigate the complexity of digital sovereignty

Post Syndicated from Max Peterson original https://aws.amazon.com/blogs/security/how-aws-can-help-you-navigate-the-complexity-of-digital-sovereignty/

Customers from around the world often tell me that digital sovereignty is a top priority as they look to meet new compliance and industry regulations. In fact, 82% of global organizations are either currently using, planning to use, or considering sovereign cloud solutions in the next two years, according to the International Data Corporation (IDC). However, many leaders face complexity as policies and requirements continue to rapidly evolve, and have concerns about acquiring the right knowledge and skills, at an affordable cost, to meet their digital sovereignty goals.

At Amazon Web Services (AWS), we understand that protecting your data in a world with changing regulations, technology, and risks takes teamwork. We’re committed to making sure that the AWS Cloud remains sovereign-by-design, as it has been from day one, and providing customers with more choice to help meet their unique sovereignty requirements across our offerings in AWS Regions around the world, dedicated sovereign cloud infrastructure solutions, and the recently announced independent European Sovereign Cloud. In this blog post, I’ll share how the cloud is helping organizations meet their digital sovereignty needs, and ways that we can help you navigate the ever-evolving landscape.

Digital sovereignty needs of customers vary based on multiple factors

Digital sovereignty means different things to different people, and every country or region has their own requirements. Adding to the complexity is the fact that no uniform guidance exists for the types of workloads, industries, and sectors that must adhere to these requirements.

Although digital sovereignty needs vary based on multiple factors, key themes that we’ve identified by listening to customers, partners, and regulators include data residency, operator access restriction, resiliency, and transparency. AWS works closely with customers to understand the digital sovereignty outcomes that they’re focused on to determine the right AWS solutions that can help to meet them.

Meet requirements without compromising the benefits of the cloud

We introduced the AWS Digital Sovereignty Pledge in 2022 as part of our commitment to offer all AWS customers the most advanced set of sovereignty controls and security features available in the cloud. We continue to deeply engage with regulators to help make sure that AWS meets various standards and achieves certifications that our customers directly inherit, allowing them to meet requirements while driving continuous innovation. AWS was recently named a leader in Sovereign Cloud Infrastructure Services (EU) by Information Services Group (ISG), a global technology research and IT advisory firm.

Customers who use our global infrastructure with sovereign-by-design features can optimize for increased scale, agility, speed, and reduced costs while getting the highest levels of security and protection. Our AWS Regions are powered by the AWS Nitro System, which helps ensure the confidentiality and integrity of customer data. Building on our commitment to provide greater transparency and assurances on how AWS services are designed and operated, the security design of our Nitro System was validated in an independent public report by the global cybersecurity consulting firm NCC Group.

Customers have full control of their data on AWS and determine where their data is stored, how it’s stored, and who has access to it. We provide tools to help you automate and monitor your storage location and encrypt your data, including data residency guardrails in AWS Control Tower. We recently announced more than 65 new digital sovereignty controls that you can choose from to help prevent actions, enforce configurations, and detect undesirable changes.

All AWS services support encryption, and most services also support encryption with customer managed keys that AWS can’t access such as AWS Key Management Service (KMS), AWS CloudHSM, and AWS KMS External Key Store (XKS). Both the hardware used in AWS KMS and the firmware used in AWS CloudHSM are FIPS 140-2 Level 3 compliant as certified by a NIST-accredited laboratory.

Infrastructure choice to support your unique needs and local regulations

AWS provides hybrid cloud storage and edge computing capabilities so that you can use the same infrastructure, services, APIs, and tools across your environments. We think of our AWS infrastructure and services as a continuum that helps meet your requirements wherever you need it. Having a consistent experience across environments helps to accelerate innovation, increase operational efficiencies and reduce costs by using the same skills and toolsets, and meet specific security standards by adopting cloud security wherever applications and data reside.

We work closely with customers to support infrastructure decisions that meet unique workload needs and local regulations, and continue to invent based on what we hear from customers. To help organizations comply with stringent regulatory requirements, we launched AWS Dedicated Local Zones. This is a type of infrastructure that is fully managed by AWS, built for exclusive use by a customer or community, and placed in a customer-specified location or data center to run sensitive or other regulated industry workloads. At AWS re:Invent 2023, I sat down with Cheow Hoe Chan, Government Chief Digital Technology Officer of Singapore, to discuss how we collaborated with Singapore’s Smart Nation and Digital Government Group to define and build this dedicated infrastructure.

We also recently announced our plans to launch the AWS European Sovereign Cloud to provide customers in highly regulated industries with more choice to help meet varying data residency, operational autonomy, and resiliency requirements. This is a new, independent cloud located and operated within the European Union (EU) that will have the same security, availability, and performance that our customers get from existing AWS Regions today, with important features specific to evolving EU regulations.

Build confidently with AWS and our AWS Partners

In addition to our AWS offerings, you can access our global network of more than 100,000 AWS Partners specialized in various competencies and industry verticals to get local guidance and services.

There is a lot of complexity involved with navigating the evolving digital sovereignty landscape—but you don’t have to do it alone. Using the cloud and working with AWS and our partners can help you move faster and more efficiently while keeping costs low. We’re committed to helping you meet necessary requirements while accelerating innovation, and can’t wait to see the kinds of advancements that you’ll continue to drive.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Max Peterson

Max is the Vice President of AWS Sovereign Cloud. He leads efforts to ensure that all AWS customers around the world have the most advanced set of sovereignty controls, privacy safeguards, and security features available in the cloud. Before his current role, Max served as the VP of AWS Worldwide Public Sector (WWPS) and created and led the WWPS International Sales division, with a focus on empowering government, education, healthcare, aerospace and satellite, and nonprofit organizations to drive rapid innovation while meeting evolving compliance, security, and policy requirements. Max has over 30 years of public sector experience and served in other technology leadership roles before joining Amazon. Max has earned both a Bachelor of Arts in Finance and Master of Business Administration in Management Information Systems from the University of Maryland.

AWS named as a Leader in 2023 Gartner Magic Quadrant for Strategic Cloud Platform Services for thirteenth year in a row

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/read-the-2023-gartner-magic-quadrant-for-strategic-cloud-platform-services/

On December 4, 2023, AWS was named as a Leader in the 2023 Magic Quadrant for Strategic Cloud Platform Services (SCPS). AWS is the longest-running Magic Quadrant Leader, with Gartner naming AWS a Leader for the thirteenth consecutive year. AWS is placed highest on the Ability to Execute axis.

SCPS, previously known as Magic Quadrant for Cloud Infrastructure and Platform Services (CIPS), is defined as “standardized, automated, public cloud offerings integrating infrastructure services (for example, computing, network, and storage), platform services (for example, managed application and data services) and transformation services (programs/resources that help customers adopt cloud-oriented IT delivery models).”

I have the chance to talk with our customers every single week. When I ask the main reasons why they choose AWS, I consistently hear the following responses:

Breadth and depth. AWS offers more cloud services and features than other providers, including compute, storage, databases, machine learning (ML), data analytics, and Internet of Things (IoT). This makes it faster, easier, and cheaper to migrate existing apps to the cloud and to build new ones. AWS has the deepest functionality within services, such as a wide variety of purpose-built databases optimized for cost and performance.

A rapid pace of innovation. AWS enables faster experimentation and innovation through the latest technologies. We continually accelerate our pace of innovation to invent new technologies for business transformation. For example, in 2014, we launched the serverless computing service AWS Lambda, eliminating server provisioning and management for developers. In 2017, we launched the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that enables better performance, increased security, and cost savings for Amazon EC2 instances. At re:Invent 2018, we announced AWS Graviton, a family of processors designed to deliver the best price performance for your cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2). And today, we continue to innovate with generative artificial intelligence (AI) services such as Amazon Q or Amazon CodeWhisperer, a coding productivity tool available in developers' integrated development environments (IDE) and on the command line (CLI).

A large community of customers and partners. AWS has a large, active community with millions of customers and tens of thousands of partners globally. Customers in most industries and of varied sizes use AWS for diverse applications. The AWS Partner Network includes thousands of systems integrators specializing in AWS and tens of thousands of independent software vendors (ISV) adapting their technologies for AWS.

You also benefit from the global AWS infrastructure, including the 33 Regions where you can deploy your workload and store your data. We pre-announced four future Regions in Malaysia, New Zealand, Thailand, and the AWS European Sovereign Cloud.

An AWS Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. Unlike with other cloud providers, who often define a region as a single data center, having multiple Availability Zones allows you to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center.

AWS has more than 17 years of experience building its global infrastructure. And, as Werner Vogels, Amazon CTO, keeps repeating, “There’s no compression algorithm for experience,” especially when it comes to scale, security, and performance.

Here is the graphical representation of the 2023 Magic Quadrant for Strategic Cloud Platform Services.

Gartner | 2023 Magic Quadrant for Strategic Cloud Platform Services

The full Gartner report has details about the features and factors they reviewed. It explains the methodology used and the recognitions. This report can serve as a guide when choosing a cloud provider that helps you innovate on behalf of your customers.

— seb

Gartner, 2023 Magic Quadrant for Strategic Cloud Platform Services, 4 December 2023, David Wright, Dennis Smith, et al.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from AWS.

AWS re:Invent 2023: Security, identity, and compliance recap

Post Syndicated from Nisha Amthul original https://aws.amazon.com/blogs/security/aws-reinvent-2023-security-identity-and-compliance-recap/

In this post, we share the key announcements related to security, identity, and compliance at AWS re:Invent 2023, and offer details on how you can learn more through on-demand video of sessions and relevant blog posts. AWS re:Invent returned to Las Vegas in November 2023. The conference featured over 2,250 sessions and hands-on labs, with more than 52,000 attendees across five days. If you couldn’t join us in person or want to revisit the security, identity, and compliance announcements and on-demand sessions, this post is for you.

The AWS security service announcements at re:Invent 2023 reflect key themes that underscore the security challenges we help customers address through knowledge sharing and continuous development of our native security services. The key themes include helping you architect for zero trust, manage identity and access at scale, integrate security early in the development cycle, enhance container security, and use generative artificial intelligence (AI) to improve security services and mean time to remediation.

Key announcements

To help you more efficiently manage identity and access at scale, we introduced several new features:

  • A week before re:Invent, we announced two new features of Amazon Verified Permissions:
    • Batch authorization — Batch authorization is a new way for you to process authorization decisions within your application. Using this new API, you can process up to 30 authorization decisions for a single principal or resource in a single API call. This can help you reduce round trips and improve the user experience (UX) when your application needs to evaluate multiple permissions at once.
    • Visual schema editor — This new visual schema editor offers an alternative to editing policies directly in the JSON editor. View relationships between entity types, manage principals and resources visually, and review the actions that apply to principal and resource types for your application schema.
  • We launched two new features for AWS Identity and Access Management (IAM) Access Analyzer:
    • Unused access — The new analyzer continuously monitors IAM roles and users in your organization in AWS Organizations or within AWS accounts, identifying unused permissions, access keys, and passwords. Using this new capability, you can benefit from a dashboard to help prioritize which accounts need attention based on the volume of excessive permissions and unused access findings. You can set up automated notification workflows by integrating IAM Access Analyzer with Amazon EventBridge. In addition, you can aggregate these new findings about unused access with your existing AWS Security Hub findings.
    • Custom policy checks — This feature helps you validate that IAM policies adhere to your security standards ahead of deployments. Custom policy checks use the power of automated reasoning—security assurance backed by mathematical proof—to empower security teams to detect non-conformant updates to policies proactively. You can move AWS applications from development to production more quickly by automating policy reviews within your continuous integration and continuous delivery (CI/CD) pipelines. Security teams automate policy reviews before deployments by collaborating with developers to configure custom policy checks within AWS CodePipeline pipelines, AWS CloudFormation hooks, GitHub Actions, and Jenkins jobs (a minimal sketch of such a check follows this list).
  • We announced AWS IAM Identity Center trusted identity propagation to manage and audit access to AWS Analytics services, including Amazon QuickSight, Amazon Redshift, Amazon EMR, AWS Lake Formation, and Amazon Simple Storage Service (Amazon S3) through S3 Access Grants. This feature of IAM Identity Center simplifies data access management for users, enhances auditing granularity, and improves the sign-in experience for analytics users across multiple AWS analytics applications.
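
To make the custom policy checks capability concrete, here is a minimal sketch (using the AWS SDK for Python, Boto3) of how a CI/CD step might gate a deployment on IAM Access Analyzer. The policy documents, pipeline wiring, and failure handling are illustrative assumptions, not the exact mechanism used by the integrations listed above.

```python
import json
import boto3

# Hypothetical pre-deployment gate: fail the build if the proposed policy
# grants access that the currently deployed policy does not.
access_analyzer = boto3.client("accessanalyzer")

existing_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}],
}
proposed_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"], "Resource": "*"}],
}

response = access_analyzer.check_no_new_access(
    existingPolicyDocument=json.dumps(existing_policy),
    newPolicyDocument=json.dumps(proposed_policy),
    policyType="IDENTITY_POLICY",
)

if response["result"] != "PASS":
    # Surface the finding so the developer can tighten the policy before release.
    raise SystemExit(f"Custom policy check failed: {response.get('message', 'new access detected')}")
```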

To help you improve your security outcomes with generative AI and automated reasoning, we introduced the following new features:

AWS Control Tower launched a set of 65 purpose-built controls designed to help you meet your digital sovereignty needs. In November 2022, we launched AWS Digital Sovereignty Pledge, our commitment to offering all AWS customers the most advanced set of sovereignty controls and features available in the cloud. Introducing AWS Control Tower controls that support digital sovereignty is an additional step in our roadmap of capabilities for data residency, granular access restriction, encryption, and resilience. AWS Control Tower offers you a consolidated view of the controls enabled, your compliance status, and controls evidence across multiple accounts.

We announced two new feature expansions for Amazon GuardDuty to provide the broadest threat detection coverage:

We launched two new capabilities for Amazon Inspector in addition to Amazon Inspector code remediation for Lambda functions to help you detect software vulnerabilities at scale:

We introduced four new capabilities in AWS Security Hub to help you address security gaps across your organization and enhance the user experience for security teams, providing increased visibility:

  • Central configuration — Streamline and simplify how you set up and administer Security Hub in your multi-account, multi-Region organizations. With central configuration, you can use the delegated administrator account as a single pane of glass for your security findings—and also for your organization’s configurations in Security Hub.
  • Customize security controls — You can now refine the best practices monitored by Security Hub controls to meet more specific security requirements. There is support for customer-specific inputs in Security Hub controls, so you can customize your security posture monitoring on AWS.
  • Metadata enrichment for findings — This enrichment adds resource tags, a new AWS application tag, and account name information to every finding ingested into Security Hub. This includes findings from AWS security services such as GuardDuty, Amazon Inspector, and IAM Access Analyzer, in addition to a large and growing list of AWS Partner Network (APN) solutions. Using this enhancement, you can better contextualize, prioritize, and act on your security findings.
  • Dashboard enhancements — You can now filter and customize your dashboard views, and access a new set of widgets that we carefully chose to help reflect the modern cloud security threat landscape and relate to potential threats and vulnerabilities in your AWS cloud environment. This improvement makes it simpler for you to focus on risks that require your attention, providing a more comprehensive view of your cloud security.

We added three new capabilities for Amazon Detective in addition to Amazon Detective finding group summaries to simplify the security investigation process:

We introduced AWS Secrets Manager batch retrieval of secrets to identify and retrieve a group of secrets for your application at once with a single API call. The new API, BatchGetSecretValue, provides greater simplicity for common developer workflows, especially when you need to incorporate multiple secrets into your application.
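
As a rough sketch of how an application might call the new API with the AWS SDK for Python (Boto3), the snippet below retrieves several secrets in one request. The secret names are placeholders, and handling of per-secret errors (returned in the response rather than raised for the whole call) is kept minimal.

```python
import boto3

secrets_manager = boto3.client("secretsmanager")

# Placeholder secret names; replace with the identifiers your application uses.
response = secrets_manager.batch_get_secret_value(
    SecretIdList=["app/db-credentials", "app/api-key", "app/signing-cert"]
)

# Successful retrievals arrive as a list of secret value entries.
secrets = {entry["Name"]: entry["SecretString"] for entry in response["SecretValues"]}

# Secrets that could not be retrieved are reported individually.
for error in response.get("Errors", []):
    print(f"Could not retrieve {error['SecretId']}: {error['ErrorCode']}")
```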

We worked closely with AWS Partners to create offerings that make it simpler for you to protect your cloud workloads:

  • AWS Built-in Competency — AWS Built-in Competency Partner solutions help minimize the time it takes for you to figure out the best AWS services to adopt, regardless of use case or category.
  • AWS Cyber Insurance Competency — AWS has worked with leading cyber insurance partners to help simplify the process of obtaining cyber insurance. This makes it simpler for you to find affordable insurance policies from AWS Partners that integrate their security posture assessment through a user-friendly customer experience with Security Hub.

Experience content on demand

If you weren’t able to join in person or you want to watch a session again, you can see the many sessions that are available on demand.

Keynotes, innovation talks, and leadership sessions

Catch the AWS re:Invent 2023 keynote where AWS chief executive officer Adam Selipsky shares his perspective on cloud transformation and provides an exclusive first look at AWS innovations in generative AI, machine learning, data, and infrastructure advancements. You can also replay the other AWS re:Invent 2023 keynotes.

The security landscape is evolving as organizations adapt and embrace new technologies. In this talk, discover the AWS vision for security that drives business agility. Stream the innovation talk from Amazon chief security officer, Steve Schmidt, and AWS chief information security officer, Chris Betz, to learn their insights on key topics such as Zero Trust, builder security experience, and generative AI.

At AWS, we work closely with customers to understand their requirements for their critical workloads. Our work with the Singapore Government’s Smart Nation and Digital Government Group (SNDGG) to build a Smart Nation for their citizens and businesses illustrates this approach. Watch the leadership session with Max Peterson, vice president of Sovereign Cloud at AWS, and Chan Cheow Hoe, government chief digital technology officer of Singapore, as they share how AWS is helping Singapore advance on its cloud journey to build a Smart Nation.

Breakout sessions and new launch talks

Stream breakout sessions and new launch talks on demand to learn about the following topics:

  • Discover how AWS, customers, and partners work together to raise their security posture with AWS infrastructure and services.
  • Learn about trends in identity and access management, detection and response, network and infrastructure security, data protection and privacy, and governance, risk, and compliance.
  • Dive into our launches! Learn about the latest announcements from security experts, and uncover how new services and solutions can help you meet core security and compliance requirements.

Consider joining us for more in-person security learning opportunities by saving the date for AWS re:Inforce 2024, which will occur June 10-12 in Philadelphia, Pennsylvania. We look forward to seeing you there!

If you’d like to discuss how these new announcements can help your organization improve its security posture, AWS is here to help. Contact your AWS account team today.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Nisha Amthul

Nisha is a Senior Product Marketing Manager at AWS Security, specializing in detection and response solutions. She has a strong foundation in product management and product marketing within the domains of information security and data protection. When not at work, you’ll find her cake decorating, strength training, and chasing after her two energetic kiddos, embracing the joys of motherhood.

Himanshu Verma

Himanshu is a Worldwide Specialist for AWS Security Services. He leads the go-to-market creation and execution for AWS security services, field enablement, and strategic customer advisement. Previously, he held leadership roles in product management, engineering, and development, working on various identity, information security, and data protection technologies. He loves brainstorming disruptive ideas, venturing outdoors, photography, and trying new restaurants.

Marshall Jones

Marshall is a Worldwide Security Specialist Solutions Architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Building a security-first mindset: three key themes from AWS re:Invent 2023

Post Syndicated from Clarke Rodgers original https://aws.amazon.com/blogs/security/building-a-security-first-mindset-three-key-themes-from-aws-reinvent-2023/

Amazon CSO Stephen Schmidt

AWS re:Invent drew 52,000 attendees from across the globe to Las Vegas, Nevada, November 27 to December 1, 2023.

Now in its 12th year, the conference featured 5 keynotes, 17 innovation talks, and over 2,250 sessions and hands-on labs offering immersive learning and networking opportunities.

With dozens of service and feature announcements—and innumerable best practices shared by AWS executives, customers, and partners—the air of excitement was palpable. We were on site to experience all of the innovations and insights, but summarizing highlights isn’t easy. This post details three key security themes that caught our attention.

Security culture

When we think about cybersecurity, it’s natural to focus on technical security measures that help protect the business. But organizations are made up of people—not technology. The best way to protect ourselves is to foster a proactive, resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

In Sustainable security culture: Empower builders for success, AWS Global Services Security Vice President Hart Rossman and AWS Global Services Security Organizational Excellence Leader Sarah Currey presented practical strategies for building a sustainable security culture.

Rossman noted that many customers who meet with AWS about security challenges are attempting to manage security as a project, a program, or a side workstream. To strengthen your security posture, he said, you have to embed security into your business.

“You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS

Three best practices can help:

  1. Be consistently persistent. Routinely and emphatically thank employees for raising security issues. It might feel repetitive, but treating security events and escalations as learning opportunities helps create a positive culture—and it’s a practice that can spread to other teams. An empathetic leadership approach encourages your employees to see security as everyone’s responsibility, share their experiences, and feel like collaborators.
  2. Brief the board. Engage executive leadership in regular, business-focused meetings. By providing operational metrics that tie your security culture to the impact that it has on customers, crisply connecting data to business outcomes, and providing an opportunity to ask questions, you can help build the support of executive leadership, and advance your efforts to establish a sustainable proactive security posture.
  3. Have a mental model for creating a good security culture. Rossman presented a diagram (Figure 1) that highlights three elements of security culture he has observed at AWS: a student, a steward, and a builder. If you want to be a good steward of security culture, you should be a student who is constantly learning, experimenting, and passing along best practices. As your stewardship grows, you can become a builder, and progress the culture in new directions.
Figure 1: Sample mental model for building security culture

Thoughtful investment in the principles of inclusivity, empathy, and psychological safety can help your team members to confidently speak up, take risks, and express ideas or concerns. This supports an escalation-friendly culture that can reduce employee burnout, and empower your teams to champion security at scale.

In Shipping securely: How strong security can be your strategic advantage, AWS Enterprise Strategy Director Clarke Rodgers reiterated the importance of security culture to building a security-first mindset.

Rodgers highlighted three pillars of progression (Figure 2)—aware, bolted-on, and embedded—that are based on meetings with more than 800 customers. As organizations mature from a reactive security posture to a proactive, security-first approach, he noted, security culture becomes a true business enabler.

“When organizations have a strong security culture and everyone sees security as their responsibility, they can move faster and achieve quicker and more secure product and service releases.” — Clarke Rodgers, Director of Enterprise Strategy at AWS
Figure 2: Shipping with a security-first mindset

Human-centric AI

CISOs and security stakeholders are increasingly pivoting to a human-centric focus to establish effective cybersecurity, and ease the burden on employees.

According to Gartner, by 2027, 50% of large enterprise CISOs will have adopted human-centric security design practices to minimize cybersecurity-induced friction and maximize control adoption.

As Amazon CSO Stephen Schmidt noted in Move fast, stay secure: Strategies for the future of security, focusing on technology first is fundamentally wrong. Security is a people challenge for threat actors, and for defenders. To keep up with evolving changes and securely support the businesses we serve, we need to focus on dynamic problems that software can’t solve.

Maintaining that focus means providing security and development teams with the tools they need to automate and scale some of their work.

“People are our most constrained and most valuable resource. They have an impact on every layer of security. It’s important that we provide the tools and the processes to help our people be as effective as possible.” — Stephen Schmidt, CSO at Amazon

Organizations can use artificial intelligence (AI) to impact all layers of security—but AI doesn’t replace skilled engineers. When used in coordination with other tools, and with appropriate human review, it can help make your security controls more effective.

Schmidt highlighted the internal use of AI at Amazon to accelerate our software development process, as well as new generative AI-powered Amazon Inspector, Amazon Detective, AWS Config, and Amazon CodeWhisperer features that complement the human skillset by helping people make better security decisions, using a broader collection of knowledge. This pattern of combining sophisticated tooling with skilled engineers is highly effective, because it positions people to make the nuanced decisions required for effective security that AI can’t make on its own.

In How security teams can strengthen security using generative AI, AWS Senior Security Specialist Solutions Architects Anna McAbee and Marshall Jones, and Principal Consultant Fritz Kunstler featured a virtual security assistant (chatbot) that can address common security questions and use cases based on your internal knowledge bases, and trusted public sources.

Figure 3: Generative AI-powered chatbot architecture

The generative AI-powered solution depicted in Figure 3—which includes Retrieval Augmented Generation (RAG) with Amazon Kendra, Amazon Security Lake, and Amazon Bedrock—can help you automate mundane tasks, expedite security decisions, and increase your focus on novel security problems.

It’s available on GitHub with ready-to-use code, so you can start experimenting with a variety of large and multimodal language models, settings, and prompts in your own AWS account.
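
As a simplified, hypothetical illustration of the RAG pattern that solution implements (not the repository's actual code), the sketch below retrieves passages from an Amazon Kendra index and grounds an Amazon Bedrock model response in them. The index ID, model ID, and request body format are assumptions; adjust them to the knowledge base and model you choose.

```python
import json
import boto3

kendra = boto3.client("kendra")
bedrock_runtime = boto3.client("bedrock-runtime")

def answer_security_question(question: str, index_id: str) -> str:
    # Retrieve the most relevant passages from the internal knowledge base.
    retrieved = kendra.retrieve(IndexId=index_id, QueryText=question)
    context = "\n\n".join(item["Content"] for item in retrieved["ResultItems"][:5])

    # Ground the model's answer in the retrieved passages (Claude 3 message format assumed).
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": f"Answer using only this context.\n\nContext:\n{context}\n\nQuestion: {question}",
        }],
    }
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```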

Secure collaboration

Collaboration is key to cybersecurity success, but evolving threats, flexible work models, and a growing patchwork of data protection and privacy regulations have made maintaining secure and compliant messaging a challenge.

An estimated 3.09 billion mobile phone users access messaging apps to communicate, and this figure is projected to grow to 3.51 billion users in 2025.

The use of consumer messaging apps for business-related communications makes it more difficult for organizations to verify that data is being adequately protected and retained. This can lead to increased risk, particularly in industries with unique recordkeeping requirements.

In How the U.S. Army uses AWS Wickr to deliver lifesaving telemedicine, Matt Quinn, Senior Director at The U.S. Army Telemedicine & Advanced Technology Research Center (TATRC), Laura Baker, Senior Manager at Deloitte, and Arvind Muthukrishnan, AWS Wickr Head of Product, highlighted how the TATRC National Emergency Tele-Critical Care Network (NETCCN) was integrated with AWS Wickr—a HIPAA-eligible secure messaging and collaboration service—and AWS Private 5G, a managed service for deploying and scaling private cellular networks.

During the session, Quinn, Baker, and Muthukrishnan described how TATRC achieved a low-resource, cloud-enabled, virtual health solution that facilitates secure collaboration between onsite and remote medical teams for real-time patient care in austere environments. Using Wickr, medics on the ground were able to treat injuries that exceeded their previous training (Figure 4) with the help of end-to-end encrypted video calls, messaging, and file sharing with medical professionals, and securely retain communications in accordance with organizational requirements.

“Incorporating Wickr into Military Emergency Tele-Critical Care Platform (METTC-P) not only provides the security and privacy of end-to-end encrypted communications, it gives combat medics and other frontline caregivers the ability to gain instant insight from medical experts around the world—capabilities that will be needed to address the simultaneous challenges of prolonged care, and the care of large numbers of casualties on the multi-domain operations (MDO) battlefield.” — Matt Quinn, Senior Director at TATRC
Figure 4: Telemedicine workflows using AWS Wickr

In a separate Chalk Talk titled Bolstering Incident Response with AWS Wickr and Amazon EventBridge, Senior AWS Wickr Solutions Architects Wes Wood and Charles Chowdhury-Hanscombe demonstrated how to integrate Wickr with Amazon EventBridge and Amazon GuardDuty to strengthen incident response capabilities with an integrated workflow (Figure 5) that connects your AWS resources to Wickr bots. Using this approach, you can quickly alert appropriate stakeholders to critical findings through a secure communication channel, even on a potentially compromised network.

Figure 5: AWS Wickr integration for incident response communications
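
As a rough sketch of the event plumbing behind such a workflow, the snippet below creates an EventBridge rule that matches GuardDuty findings and targets a Lambda function, which in this architecture would relay the finding to a Wickr bot. The rule name and function ARN are placeholders, and the Wickr bot configuration itself is not shown.

```python
import json
import boto3

events = boto3.client("events")

# Match Amazon GuardDuty findings published to the default event bus.
events.put_rule(
    Name="guardduty-findings-to-wickr",  # placeholder rule name
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Route matched findings to a Lambda function that forwards them to a Wickr bot.
events.put_targets(
    Rule="guardduty-findings-to-wickr",
    Targets=[{
        "Id": "wickr-relay",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:wickr-relay",  # placeholder ARN
    }],
)
```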

Security is our top priority

AWS re:Invent featured many more highlights on a variety of topics, including adaptive access control with Zero Trust, AWS cyber insurance partners, Amazon CTO Dr. Werner Vogels’ popular keynote, and the security partnerships showcased on the Expo floor. It was a whirlwind experience, but one thing is clear: AWS is working hard to help you build a security-first mindset, so that you can meaningfully improve both technical and business outcomes.

To watch on-demand conference sessions, visit the AWS re:Invent Security, Identity, and Compliance playlist on YouTube.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Clarke Rodgers

Clarke is a Director of Enterprise Security at AWS. Clarke has more than 25 years of experience in the security industry, and works with enterprise security, risk, and compliance-focused executives to strengthen their security posture, and understand the security capabilities of the cloud. Prior to AWS, Clarke was a CISO for the North American operations of a multinational insurance company.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than 13 years of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Building a generative AI Marketing Portal on AWS

Post Syndicated from Tristan Nguyen original https://aws.amazon.com/blogs/messaging-and-targeting/building-a-generative-ai-marketing-portal-on-aws/

Introduction

In the preceding entries of this series, we examined the transformative impact of Generative AI on marketing strategies in “Building Generative AI into Marketing Strategies: A Primer” and delved into the intricacies of Prompt Engineering to enhance the creation of marketing content with services such as Amazon Bedrock in “From Prompt Engineering to Auto Prompt Optimisation”. We also explored the potential of Large Language Models (LLMs) to refine prompts for more effective customer engagement.

Continuing this exploration, we will articulate how Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint can be leveraged to construct a marketer portal that not only facilitates AI-driven content generation but also personalizes and distributes this content effectively. The aim is to provide a clear blueprint for deploying a system that crafts, personalizes, and distributes marketing content efficiently. This blog will guide you through the deployment process, underlining the real-world utility of these services in optimizing marketing workflows. Through use cases and a code demonstration, we’ll see these technologies in action, offering a hands-on perspective on enhancing your marketing pipeline with AI-driven solutions.

The Challenge with Content Generation in Marketing

Many companies struggle to streamline their marketing operations effectively, facing hurdles at various stages of the marketing operations pipeline. Below, we list the challenges at three main stages of the pipeline: content generation, content personalization, and content distribution.

Content Generation

Creating high-quality, engaging content is often easier said than done. Companies need to invest in skilled copywriters or content creators who understand not just the product but also the target audience. Even with the right talent, the process can be time-consuming and costly. Moreover, generating content at scale while maintaining quality and compliance with industry regulations is the key blocker for many companies considering adopting generative AI technologies in production environments.

Content Personalization

Once the content is created, the next hurdle is personalization. In today’s digital age, generic content rarely captures attention. Customers expect content tailored to their needs, preferences, and behaviors. However, personalizing content is not straightforward. It requires a deep understanding of customer data, which often resides in siloed databases, making it difficult to create a 360-degree view of the customer.

Content Distribution

Finally, even the most captivating, personalized content is ineffective if it doesn’t reach the right audience at the right time. Companies often grapple with choosing the appropriate channels for content distribution, be it email, social media, or mobile notifications. Additionally, ensuring that the content complies with various regulations and doesn’t end up in spam folders adds another layer of complexity to the distribution phase. Sending at scale requires paying attention to deliverability, security, and reliability, which often poses significant challenges for marketers.

By addressing these challenges, companies can significantly improve their marketing operations and empower their marketers to be more effective. But how can this be achieved efficiently and at scale? The answer lies in leveraging the power of Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint, as we will explore in the following solution.

The Solution In Action

Before we dive into the details of the implementation, let’s take a look at the end result through the linked demo video.

Use Case 1: Banking/Financial Services Industry

You are a relationship manager working in the Consumer Banking department of a fictitious company called AnyCompany Bank. You are assigned a group of customers and would like to send out personalized and targeted communications to every member of this group, on each customer's channel of choice.

Behind the scenes, the marketer is utilizing Amazon Pinpoint to create the segment of customers they would like to target. The customers’ information and the marketer’s prompt are then fed into Amazon Bedrock to generate the marketing content, which is then sent to the customer via SMS and email using Amazon Pinpoint.
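
As a simplified, hypothetical sketch of that flow (not the portal's actual implementation), the snippet below generates an SMS body with Amazon Bedrock and sends it through Amazon Pinpoint using Boto3. The model ID, Pinpoint application ID, customer attributes, and prompt are placeholders.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")
pinpoint = boto3.client("pinpoint")

def send_personalized_sms(customer: dict, application_id: str) -> None:
    # Generate marketing copy from the customer's attributes (Claude 3 message format assumed).
    prompt = (f"Write a short, friendly SMS promoting a savings account to "
              f"{customer['name']}, a {customer['segment']} customer of AnyCompany Bank.")
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 200,
        "messages": [{"role": "user", "content": prompt}],
    }
    result = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps(body),
    )
    message = json.loads(result["body"].read())["content"][0]["text"]

    # Deliver the generated content over the customer's preferred channel (SMS shown here).
    pinpoint.send_messages(
        ApplicationId=application_id,  # placeholder Amazon Pinpoint project ID
        MessageRequest={
            "Addresses": {customer["phone"]: {"ChannelType": "SMS"}},
            "MessageConfiguration": {"SMSMessage": {"Body": message, "MessageType": "PROMOTIONAL"}},
        },
    )
```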

  • In the Prompt Iterator page, you can employ a process called “prompt engineering” to further optimize your prompt and maximize the effectiveness of your marketing campaigns. Please refer to this blog on the process behind engineering the prompt as well as how to apply an additional LLM for auto-prompting. To get started, simply copy the sample banking prompt on this page, which has already gone through the prompt engineering process.
  • Next, you can either upload your customer group by uploading a .csv file (through “Importing a Segment”) or specify a customer group using pre-defined filter criteria based on your current customer database using Amazon Pinpoint.

Figure: Sample filtered segment for use case 1 in Amazon Pinpoint

For example, the screenshot shows a sample filtered segment named ManagementOrRetired that includes only customers who are in management or retired.

  • Once done, you can log into the marketer portal and choose the relevant segment that you’ve just created within the Amazon Pinpoint console.

Figure: Choosing the Amazon Pinpoint segment in the marketer portal

  • You can then preview the customers and their information stored in your Amazon Pinpoint’s customer database. Once satisfied, we’re ready to start generating content for those customers!
  • Click on the 1:1 Content Generator tab, and content is automatically generated for your first customer. Here, you can cycle through your customers one by one, and depending on the customer’s preferred language and channel, an email or SMS in the preferred language is automatically generated for them.
    • Generated SMS in English

Figure: Generated SMS in English (positive example)

    • A negative example showing proper prompt engineering at work to moderate content. This happens if we try to insert data that does not make sense for the marketing content generator to output. In this case, the marketing generator justifiably refuses to output an advertisement for a secured instalment loan aimed at a 6-year-old.

Figure: Moderated SMS output (negative example)

  • Finally, we choose to send the generated content via Amazon Pinpoint by clicking on “Send with Amazon Pinpoint”. In the back end, Amazon Pinpoint will orchestrate the sending of the email/SMS through the appropriate channels.
    • Alternatively, if the auto-generated content still does not meet your needs and you want to generate another draft, you can select Disagree and try again.

Use Case 2: Travel & Hospitality

You are a marketing executive working for an online air ticketing agency. You’ve been tasked with promoting a specific flight from Singapore to Hong Kong for AnyCompany airline. You’d first like to identify which customers would be prime candidates for this flight leg and then send out hyper-personalized messages to them.

Behind the scenes, instead of using Amazon Pinpoint to manually define the segment, the marketer in this case is leveraging the AI/ML capabilities of Amazon Personalize to define the best group of customers to recommend the specific flight leg to. Similar to the above use case, the customers’ information and LLM prompt are fed into Amazon Bedrock, which generates the marketing content that is eventually sent out via Amazon Pinpoint.

  • Similar to the above use case, you’d need to go through a prompt engineering process to ensure that the content the LLM generates will be relevant and safe for use. To get started quickly, go to the Prompt Iterator page, where you can use the sample airlines prompt and iterate from there.
  • Your company offers many different flight legs, aggregated from many different carriers. You first filter down to the flight leg that you want to promote using the Filters on the left. In this case, we are filtering for flights originating from Singapore (SRCCity) and going to Hong Kong (DSTCity), operated by AnyCompany Airlines.

Figure: Filtering flight legs for the Amazon Personalize batch segmentation job

  • Now, let’s choose the number of customers that you’d like Amazon Personalize to return. Once satisfied, you choose to start the batch segmentation job.
  • In the background, Amazon Personalize generates a group of customers that are most likely to be interested in this flight leg based on past interactions with similar flight itineraries (a minimal API sketch follows this list).
  • Once the segmentation job is finished as shown, you can fetch the recommended group of customers and start generating content for them immediately, similar to the first use case.
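
As a rough sketch of the kind of Amazon Personalize call that powers this step (the portal's own implementation may differ), the snippet below starts a batch segment job from a trained solution version. All ARNs, S3 paths, and the result count are placeholders.

```python
import boto3

personalize = boto3.client("personalize")

# The S3 input file lists the item (flight leg) to build a user segment for;
# the output location receives the recommended customers.
personalize.create_batch_segment_job(
    jobName="sin-hkg-promo-segment",  # placeholder job name
    solutionVersionArn="arn:aws:personalize:us-east-1:111122223333:solution/flight-affinity/abc123",
    numResults=250,  # number of customers to return per item
    jobInput={"s3DataSource": {"path": "s3://amzn-s3-demo-bucket/segment-input/flight-legs.json"}},
    jobOutput={"s3DataDestination": {"path": "s3://amzn-s3-demo-bucket/segment-output/"}},
    roleArn="arn:aws:iam::111122223333:role/PersonalizeBatchRole",
)
```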

Setup instructions

The setup instructions and deployment details can be found in the GitHub link.

Conclusion

In this blog, we’ve explored the transformative potential of integrating Amazon Bedrock, Amazon Personalize, and Amazon Pinpoint to address the common challenges in marketing operations. By automating the content generation with Amazon Bedrock, personalizing at scale with Amazon Personalize, and ensuring precise content distribution with Amazon Pinpoint, companies can not only streamline their marketing processes but also elevate the customer experience.

The benefits are clear: time-saving through automation, increased operational efficiency, and enhanced customer satisfaction through personalized engagement. This integrated solution empowers marketers to focus on strategy and creativity, leaving the heavy lifting to AWS’s robust AI and ML services.

For those ready to take the next step, we’ve provided a comprehensive guide and resources to implement this solution. By following the setup instructions and leveraging the provided prompts as a starting point, you can deploy this solution and begin customizing the marketer portal to your business needs.

Call to Action

Don’t let the challenges of content generation, personalization, and distribution hold back your marketing potential. Deploy the Generative AI Marketer Portal today, adapt it to your specific needs, and watch as your marketing operations transform. For a hands-on start and to see this solution in action, visit the GitHub repository for detailed setup instructions.

Have a question? Share your experiences or leave your questions in the comment section.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. At work, he specializes in technical implementation of communications services in enterprise systems and architecture/solutions design. In his spare time, he enjoys chess, rock climbing, hiking and triathlon.

Philipp Kaindl

Philipp Kaindl is a Senior Artificial Intelligence and Machine Learning Solutions Architect at AWS. With a background in data science and mechanical engineering, his focus is on empowering customers to create lasting business impact with the help of AI. Outside of work, Philipp enjoys tinkering with 3D printers, sailing and hiking.

Bruno Giorgini

Bruno Giorgini is a Senior Solutions Architect specializing in Pinpoint and SES. With over two decades of experience in the IT industry, Bruno has been dedicated to assisting customers of all sizes in achieving their objectives. When he is not crafting innovative solutions for clients, Bruno enjoys spending quality time with his wife and son, exploring the scenic hiking trails around the SF Bay Area.

Architectural patterns for real-time analytics using Amazon Kinesis Data Streams, part 1

Post Syndicated from Raghavarao Sodabathina original https://aws.amazon.com/blogs/big-data/architectural-patterns-for-real-time-analytics-using-amazon-kinesis-data-streams-part-1/

We’re living in the age of real-time data and insights, driven by low-latency data streaming applications. Today, everyone expects a personalized experience in any application, and organizations are constantly innovating to increase their speed of business operation and decision making. The volume of time-sensitive data produced is increasing rapidly, with different formats of data being introduced across new businesses and customer use cases. Therefore, it is critical for organizations to embrace a low-latency, scalable, and reliable data streaming infrastructure to deliver real-time business applications and better customer experiences.

This is the first post in a blog series that offers common architectural patterns for building real-time data streaming infrastructures using Kinesis Data Streams for a wide range of use cases. It aims to provide a framework to create low-latency streaming applications on the AWS Cloud using Amazon Kinesis Data Streams and AWS purpose-built data analytics services.

In this post, we will review the common architectural patterns of two use cases: Time Series Data Analysis and Event Driven Microservices. In the subsequent post in our series, we will explore the architectural patterns in building streaming pipelines for real-time BI dashboards, contact center agent, ledger data, personalized real-time recommendation, log analytics, IoT data, Change Data Capture, and real-time marketing data. All these architecture patterns are integrated with Amazon Kinesis Data Streams.

Real-time streaming with Kinesis Data Streams

Amazon Kinesis Data Streams is a cloud-native, serverless streaming data service that makes it easy to capture, process, and store real-time data at any scale. With Kinesis Data Streams, you can collect and process hundreds of gigabytes of data per second from hundreds of thousands of sources, allowing you to easily write applications that process information in real time. The collected data is available in milliseconds to allow real-time analytics use cases, such as real-time dashboards, real-time anomaly detection, and dynamic pricing. By default, the data within a Kinesis data stream is stored for 24 hours, with an option to increase the data retention to 365 days. If customers want to process the same data in real time with multiple applications, they can use the enhanced fan-out (EFO) feature. Prior to this feature, every application consuming data from the stream shared the 2MB/second/shard output. By configuring stream consumers to use enhanced fan-out, each data consumer receives a dedicated 2MB/second pipe of read throughput per shard to further reduce the latency in data retrieval.
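
For example, registering a dedicated-throughput consumer is a single API call, as in the Boto3 sketch below with placeholder names. The actual enhanced fan-out reads use the SubscribeToShard API, typically through the Kinesis Client Library, and are not shown here.

```python
import boto3

kinesis = boto3.client("kinesis")

# Each registered consumer receives its own 2 MB/second of read throughput per shard.
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/clickstream",  # placeholder ARN
    ConsumerName="dashboard-app",
)["Consumer"]

print(consumer["ConsumerARN"], consumer["ConsumerStatus"])  # CREATING until the consumer is ACTIVE
```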

Kinesis Data Streams achieves high availability and durability by synchronously replicating the streamed data across three Availability Zones in an AWS Region, and gives you the option to retain data for up to 365 days. For security, Kinesis Data Streams provides server-side encryption so you can meet strict data management requirements by encrypting your data at rest, and Amazon Virtual Private Cloud (VPC) interface endpoints to keep traffic between your Amazon VPC and Kinesis Data Streams private.

Kinesis Data Streams has native integrations with other AWS services such as AWS Glue and Amazon EventBridge to build real-time streaming applications on AWS. Refer to Amazon Kinesis Data Streams integrations for additional details.

Modern data streaming architecture with Kinesis Data Streams

A modern streaming data architecture with Kinesis Data Streams can be designed as a stack of five logical layers; each layer is composed of multiple purpose-built components that address specific requirements, as illustrated in the following diagram:

The architecture consists of the following key components:

  • Streaming sources – Your source of streaming data includes data sources like clickstream data, sensors, social media, Internet of Things (IoT) devices, log files generated by using your web and mobile applications, and mobile devices that generate semi-structured and unstructured data as continuous streams at high velocity.
  • Stream ingestion – The stream ingestion layer is responsible for ingesting data into the stream storage layer. It provides the ability to collect data from tens of thousands of data sources and ingest it in real time. You can use the Kinesis SDK for ingesting streaming data through APIs, the Kinesis Producer Library for building high-performance and long-running streaming producers, or a Kinesis agent for collecting a set of files and ingesting them into Kinesis Data Streams (see the producer sketch after this list). In addition, you can use many pre-built integrations such as AWS Database Migration Service (AWS DMS), Amazon DynamoDB, and AWS IoT Core to ingest data in a no-code fashion. You can also ingest data from third-party platforms such as Apache Spark and Apache Kafka Connect.
  • Stream storage – Kinesis Data Streams offers two capacity modes to support data throughput: On-Demand and Provisioned. On-Demand mode, now the default choice, can elastically scale to absorb variable throughput, so customers do not need to worry about capacity management and pay by data throughput. The On-Demand mode automatically scales up to 2x the stream capacity over its historic maximum data ingestion to provide sufficient capacity for unexpected spikes in data ingestion. Alternatively, customers who want granular control over stream resources can use the Provisioned mode and proactively scale the number of shards up and down to meet their throughput requirements. Additionally, Kinesis Data Streams stores streaming data for up to 24 hours by default, but can extend retention to 7 days or 365 days depending upon the use case. Multiple applications can consume the same stream.
  • Stream processing – The stream processing layer is responsible for transforming data into a consumable state through data validation, cleanup, normalization, transformation, and enrichment. The streaming records are read in the order they are produced, allowing for real-time analytics, building event-driven applications, or streaming ETL (extract, transform, and load). You can use Amazon Managed Service for Apache Flink for complex stream data processing, AWS Lambda for stateless stream data processing, and AWS Glue and Amazon EMR for near-real-time compute. You can also build customized consumer applications with the Kinesis Client Library (KCL), which takes care of many complex tasks associated with distributed computing.
  • Destination – The destination layer delivers data to a purpose-built destination depending on your use case. You can stream data directly to Amazon Redshift for data warehousing and Amazon EventBridge for building event-driven applications. You can also use Amazon Kinesis Data Firehose for streaming integration, where you can perform light stream processing with AWS Lambda and then deliver the processed stream into destinations like an Amazon S3 data lake, OpenSearch Service for operational analytics, a Redshift data warehouse, NoSQL databases like Amazon DynamoDB, and relational databases like Amazon RDS to consume real-time streams into business applications. The destination can be an event-driven application for real-time dashboards, automatic decisions based on processed streaming data, real-time alerting, and more.
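
To make the ingestion layer concrete, here is a minimal producer sketch using the AWS SDK for Python (Boto3). The stream name and record shape are placeholders; production producers would typically batch records with PutRecords or use the Kinesis Producer Library.

```python
import json
import time
import boto3

kinesis = boto3.client("kinesis")

def publish_click_event(user_id: str, page: str) -> None:
    # The partition key determines which shard receives the record,
    # so events for the same user stay ordered within a shard.
    kinesis.put_record(
        StreamName="clickstream",  # placeholder stream name
        Data=json.dumps({"user_id": user_id, "page": page, "ts": time.time()}).encode("utf-8"),
        PartitionKey=user_id,
    )
```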

Real-time analytics architecture for time series

Time series data is a sequence of data points recorded over a time interval for measuring events that change over time. Examples are stock prices over time, webpage clickstreams, and device logs over time. Customers can use time series data to monitor changes over time, so that they can detect anomalies, identify patterns, and analyze how certain variables are influenced over time. Time series data is typically generated from multiple sources in high volumes, and it needs to be cost-effectively collected in near real time.

Typically, there are three primary goals that customers want to achieve in processing time-series data:

  • Gain real-time insights into system performance and detect anomalies
  • Understand end-user behavior to track trends and query/build visualizations from these insights
  • Have a durable storage solution to ingest and store both archival and frequently accessed data.

With Kinesis Data Streams, customers can continuously capture terabytes of time series data from thousands of sources for cleaning, enrichment, storage, analysis, and visualization.

The following architecture pattern illustrates how real time analytics can be achieved for Time Series data with Kinesis Data Streams:

Build a serverless streaming data pipeline for time series data

The workflow steps are as follows:

  1. Data Ingestion & Storage – Kinesis Data Streams can continuously capture and store terabytes of data from thousands of sources.
  2. Stream Processing – An application created with Amazon Managed Service for Apache Flink can read the records from the data stream to detect and clean any errors in the time series data and enrich the data with specific metadata to optimize operational analytics. Using a data stream in the middle provides the advantage of using the time series data in other processes and solutions at the same time. A Lambda function is then invoked with these events, and can perform time series calculations in memory.
  3. Destinations – After cleaning and enrichment, the processed time series data can be streamed to an Amazon Timestream database for real-time dashboarding and analysis, or stored in databases such as DynamoDB for end-user query. The raw data can be streamed to Amazon S3 for archiving (a minimal handler sketch follows this list).
  4. Visualization and insights – Customers can query, visualize, and create alerts using Amazon Managed Service for Grafana. Grafana supports data sources that are storage backends for time series data. To access your data from Timestream, you need to install the Timestream plugin for Grafana. End-users can query data from the DynamoDB table with Amazon API Gateway acting as a proxy.
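
As a minimal, hypothetical sketch of steps 2 and 3, the Lambda handler below decodes Kinesis records and writes them to Amazon Timestream. The database name, table name, and record shape are assumptions for illustration.

```python
import base64
import json
import time
import boto3

timestream = boto3.client("timestream-write")

def handler(event, context):
    records = []
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded inside the Lambda event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        records.append({
            "Dimensions": [{"Name": "device_id", "Value": payload["device_id"]}],
            "MeasureName": "temperature",
            "MeasureValue": str(payload["temperature"]),
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),
            "TimeUnit": "MILLISECONDS",
        })

    # Placeholder database and table names.
    timestream.write_records(
        DatabaseName="iot_telemetry", TableName="device_metrics", Records=records
    )
```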

Refer to Near Real-Time Processing with Amazon Kinesis, Amazon Timestream, and Grafana, which showcases a serverless streaming pipeline to process and store device telemetry IoT data in a time series optimized data store such as Amazon Timestream.

Enriching & replaying data in real time for event-sourcing microservices

Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. When building event-driven microservices, customers want to achieve (1) high scalability to handle the volume of incoming events and (2) reliable event processing that maintains system functionality in the face of failures.

Customers utilize microservice architecture patterns to accelerate innovation and time-to-market for new features, because they make applications easier to scale and faster to develop. However, it is challenging to enrich and replay data across network calls to other microservices, because doing so can impact the reliability of the application and make it difficult to debug and trace errors. To solve this problem, event sourcing is an effective design pattern that centralizes historic records of all state changes for enrichment and replay, and decouples read workloads from write workloads. Customers can use Kinesis Data Streams as the centralized event store for event-sourcing microservices, because KDS can (1) handle gigabytes of data throughput per second per stream and stream the data in milliseconds, meeting the requirements for high scalability and near real-time latency, (2) integrate with Flink and S3 for data enrichment and archiving while remaining completely decoupled from the microservices, and (3) allow retries and asynchronous reads at a later time, because KDS retains the data record for a default of 24 hours, and optionally up to 365 days.

The following architectural pattern is a generic illustration of how Kinesis Data Streams can be used for Event-Sourcing Microservices:

The steps in the workflow are as follows:

  1. Data Ingestion and Storage – You can aggregate the input from your microservices to your Kinesis Data Streams for storage.
  2. Stream processing – Apache Flink Stateful Functions simplifies building distributed stateful event-driven applications. It can receive the events from an input Kinesis data stream and route the resulting stream to an output data stream. You can create a stateful functions cluster with Apache Flink based on your application business logic.
  3. State snapshot in Amazon S3 – You can store the state snapshot in Amazon S3 for tracking.
  4. Output streams – The output streams can be consumed by Lambda remote functions over HTTP/gRPC through API Gateway.
  5. Lambda remote functions – Lambda functions can act as microservices for various application and business logic to serve business applications and mobile apps.

To learn how other customers built their event-based microservices with Kinesis Data Streams, refer to the following:

Key considerations and best practices

The following are considerations and best practices to keep in mind:

  • Data discovery should be your first step in building modern data streaming applications. You must define the business value and then identify your streaming data sources and user personas to achieve the desired business outcomes.
  • Choose your streaming data ingestion tool based on your streaming data source. For example, you can use the Kinesis SDK for ingesting streaming data through APIs, the Kinesis Producer Library for building high-performance and long-running streaming producers, a Kinesis agent for collecting a set of files and ingesting them into Kinesis Data Streams, AWS DMS for CDC streaming use cases, and AWS IoT Core for ingesting IoT device data into Kinesis Data Streams. You can ingest streaming data directly into Amazon Redshift to build low-latency streaming applications. You can also use third-party libraries like Apache Spark and Apache Kafka to ingest streaming data into Kinesis Data Streams.
  • You need to choose your streaming data processing services based on your specific use case and business requirements. For example, you can use Amazon Managed Service for Apache Flink for advanced streaming use cases with multiple streaming destinations and complex stateful stream processing, or if you want to monitor business metrics in real time (such as every hour). Lambda is good for event-based and stateless processing. You can use Amazon EMR for streaming data processing to use your favorite open source big data frameworks. AWS Glue is good for near-real-time streaming data processing for use cases such as streaming ETL.
  • Kinesis Data Streams on-demand mode charges by usage and automatically scales up resource capacity, so it’s good for spiky streaming workloads and hands-free maintenance. Provisioned mode charges by capacity and requires proactive capacity management, so it’s good for predictable streaming workloads.
  • You can use the Kinesis Shard Calculator to calculate the number of shards needed for provisioned mode. You don’t need to be concerned about shards with on-demand mode.
  • When granting permissions, you decide who is getting what permissions to which Kinesis Data Streams resources. You enable specific actions that you want to allow on those resources. Therefore, you should grant only the permissions that are required to perform a task. You can also encrypt the data at rest by using a KMS customer managed key (CMK).
  • You can update the retention period via the Kinesis Data Streams console or by using the IncreaseStreamRetentionPeriod and the DecreaseStreamRetentionPeriod operations based on your specific use cases.
  • Kinesis Data Streams supports resharding. The recommended API for this function is UpdateShardCount, which allows you to modify the number of shards in your stream to adapt to changes in the rate of data flow through the stream. The resharding APIs (Split and Merge) are typically used to handle hot shards (a minimal sketch of the retention and resharding calls follows this list).
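
A minimal Boto3 sketch of the retention and resharding calls mentioned above follows. The stream name, retention period, and target shard count are placeholders, and UpdateShardCount applies to provisioned-mode streams.

```python
import boto3

kinesis = boto3.client("kinesis")

# Extend how long records remain available for replay (placeholder value: 7 days).
kinesis.increase_stream_retention_period(
    StreamName="clickstream",
    RetentionPeriodHours=168,
)

# For provisioned mode, scale to a new shard count with uniform scaling.
kinesis.update_shard_count(
    StreamName="clickstream",
    TargetShardCount=8,
    ScalingType="UNIFORM_SCALING",
)
```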

Conclusion

This post demonstrated various architectural patterns for building low-latency streaming applications with Kinesis Data Streams. You can build your own low-latency streaming applications with Kinesis Data Streams using the information in this post.

For detailed architectural patterns, refer to the following resources:

If you want to build a data vision and strategy, check out the AWS Data-Driven Everything (D2E) program.


About the Authors

Raghavarao Sodabathina is a Principal Solutions Architect at AWS, focusing on Data Analytics, AI/ML, and cloud security. He engages with customers to create innovative solutions that address customer business problems and to accelerate the adoption of AWS services. In his spare time, Raghavarao enjoys spending time with his family, reading books, and watching movies.

Hang Zuo is a Senior Product Manager on the Amazon Kinesis Data Streams team at Amazon Web Services. He is passionate about developing intuitive product experiences that solve complex customer problems and enable customers to achieve their business goals.

Shwetha Radhakrishnan is a Solutions Architect for AWS with a focus in Data Analytics. She has been building solutions that drive cloud adoption and help organizations make data-driven decisions within the public sector. Outside of work, she loves dancing, spending time with friends and family, and traveling.

Brittany Ly is a Solutions Architect at AWS. She is focused on helping enterprise customers with their cloud adoption and modernization journey and has an interest in the security and analytics field. Outside of work, she loves to spend time with her dog and play pickleball.

Best practices for scaling AWS CDK adoption within your organization

Post Syndicated from David Hessler original https://aws.amazon.com/blogs/devops/best-practices-for-scaling-aws-cdk-adoption-within-your-organization/

Enterprises are constantly seeking ways to accelerate their journey to the cloud. Infrastructure as code (IaC) is crucial for automating and managing cloud resources efficiently. The AWS Cloud Development Kit (AWS CDK) lets you define your cloud infrastructure as code in your favorite programming language and deploy it using AWS CloudFormation. In this post, we will discuss strategies and best practices for accelerating CDK adoption within your organization. Our discussion begins after your organization has successfully completed a pilot. In this post, you will learn how to scale the lessons learned from the pilot project across your organization through platform engineering. You will learn how to reduce complexity through building reusable components, deploy with speed and safety via builder tooling, and accelerate project startup with an internal developer portal (IDP). We will conclude by discussing ways to participate in and benefit from the broader CDK community.

Before we dive in, let’s briefly discuss a new trend in technology: Platform Engineering. DevOps practices have helped IT organizations deliver software to customers more frequently and with higher quality. A recent evolution in DevOps is the introduction of platform engineering teams to build services, toolchains, and documentation to support workload teams. An important responsibility of the platform engineering team is governance of the software delivery process.

At Amazon, we have a long and storied history of leveraging platform engineering to accelerate deployments. This is why we are able to maintain 143 different compliance certifications and attestations while deploying 150 million times per year. Platform engineering increases productivity, reduces friction between ideas and implementation, and improves agility by accelerating the delivery of workloads via a secure, scalable, and reusable set of resources and components through self-service portals and developer tools. Platform engineering comprises seven capabilities: Platform Architecture, Data Architecture, Platform Product Engineering, Data Engineering, Provisioning & Orchestration, Modern App Development, and CI/CD. For more information on platform engineering, visit the AWS Cloud Adoption Framework.

Establishing these capabilities takes several platform and workload teams working together. From an operating model standpoint, a workload team interacts with Platform Engineering in one of the three following ways (for more information, see Building a Cloud Operating Model):

The image describes three cloud operating models. The first is a transitional model, in which Application Engineering and Application Operations teams are both supported by Cloud Platform Engineering. The second is a strategic model, in which Application Engineering and Cloud Platform Engineering share responsibility equally. The third is also strategic, with Application Engineering and Cloud Platform Engineering jointly owning responsibility but Application Engineering owning most of it.

Reduce Builder Complexity and Cognitive load with Reusable Components

So, how can the platform team incorporate CDK to accomplish their goals? One of the common objectives of the Platform Engineering team is to publish and curate reusable patterns called Constructs. Constructs provide a mechanism to create reusable, extensible, and common components that can be shared across multiple teams and projects.

Many customers write their own implementations for constructs to enforce security best practices such as encryption and specific AWS Identity and Access Management policies. For example, you might create a MyCompanyBucket that implements your organization's security requirements in place of the default Amazon S3 Bucket construct. This bucket configuration can be implemented and extended by multiple teams to ensure they are using components that are validated by your security and compliance teams.
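
As a sketch of what such a construct might look like, the following TypeScript extends the standard S3 Bucket construct with a set of assumed organizational defaults (KMS-managed encryption, TLS enforcement, blocked public access); the construct name and the specific settings are illustrative, not a prescribed baseline.

```typescript
// Minimal sketch of an organization-specific bucket construct.
import { Construct } from "constructs";
import { RemovalPolicy } from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";

export interface MyCompanyBucketProps {
  readonly bucketName?: string;
}

export class MyCompanyBucket extends s3.Bucket {
  constructor(scope: Construct, id: string, props: MyCompanyBucketProps = {}) {
    super(scope, id, {
      bucketName: props.bucketName,
      // Security defaults an organization might mandate (illustrative):
      encryption: s3.BucketEncryption.KMS_MANAGED, // encrypt at rest with KMS
      enforceSSL: true,                            // require TLS for all requests
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      versioned: true,
      removalPolicy: RemovalPolicy.RETAIN,
    });
  }
}
```

Workload teams would then consume it like any other construct, for example `new MyCompanyBucket(this, 'OrderData')`, and inherit the approved defaults without re-implementing them.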

For customers focused on data governance, CDK constructs can automatically add in best practices for recovery time objectives and recovery point objectives by ensuring backups and architecture meet an organization's resilience policies. For advanced customers looking to enforce data lifecycle policies, create uniform access controls, or emit required KPIs, CDK constructs can provide avenues to create safe and secure configuration by default. Applying CDK constructs to DataOps, customers can benefit from templated ETL pipelines that ensure data lineage metadata is maintained and data cleansing occurs.

Customers also build constructs for non-AWS resources. Teams can build constructs for third-party builder tooling, observability systems, testing apparatuses, and more. In this way, workload teams can codify AWS and non-AWS resources in one code base. When writing your own constructs, there is a balance between enforcing standardization and preserving the freedom and flexibility to take advantage of the growing ecosystem of CDK packages. AWS Solutions Constructs are an example of this balance, as they are typically built upon standard constructs. If you don't extend standard constructs, the constructs you build will be harder for consumers to integrate with the larger CDK ecosystem, which relies on standardized interfaces.

Construct Hub is a central destination for discovering and sharing cloud application design patterns and reference architectures defined for CDK that are built and published by the AWS community. While AWS provides a public Construct Hub, enterprises can maintain a private Construct Hub inside their own AWS accounts (see the construct-hub GitHub repository or the CDK Workshop for more details). The primary objective in either case remains consistent: to provide shared libraries that can be readily utilized by different workload teams. This approach ensures enhanced consistency and reusability, and ultimately leads to cost reduction and faster development timelines.
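
As a minimal sketch, assuming you use the open-source construct-hub package with its defaults, a private hub can be deployed into your own account with a few lines of CDK; in a real deployment, package sources, domains, and access controls would be configured through the construct's props.

```typescript
// Minimal sketch: deploy a private Construct Hub instance with defaults.
import { App, Stack } from "aws-cdk-lib";
import { ConstructHub } from "construct-hub";

const app = new App();
const stack = new Stack(app, "InternalConstructHub");

// With no configuration, ConstructHub provisions ingestion, indexing, and
// website components with defaults; customize package sources via props.
new ConstructHub(stack, "ConstructHub");

app.synth();
```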

One pitfall customers often encounter with this approach is that Platform Engineering cannot keep up with building reusable components for the latest technology enhancements. This is where the lessons learned from a pilot really help. A pilot team works with platform engineering to research and implement security best practices. Some customers have the platform engineering team act as approvers of new constructs in addition to authors of new constructs. In this model, a pilot team builds the construct(s) for a new technology, and the platform engineers approve them. Platform engineers ensure the pilot team meets required standards such as enforcing encryption at rest, encryption in transit, and least privilege. After approval, the pilot team can publish the new construct(s) to Construct Hub. In this way, platform engineering can enable experimentation and innovation rather than become a gatekeeper. Additionally, platform engineering teams can encourage and curate an inner-sourcing model for construct creation rather than being the sole creator of constructs.

Deploy Applications Using DevSecOps Best Practices

Application builders are most productive when their expertise is channeled towards writing code that directly addresses business challenges. While creating applications is a skill well within the grasp of many software developers, the complex task of deploying and operating these applications in line with organizational standards can be overwhelming, especially for those new to a team. This complexity often acts as a bottleneck, slowing down the experimentation process and delaying the realization of value from new application initiatives.

A solution to this challenge lies in automating the deployment pipeline and operational model. By employing thoroughly tested CDK (Cloud Development Kit) components that are shared across teams and validated through a robust CI/CD (Continuous Integration/Continuous Deployment) process, the burden on developers is significantly reduced. They no longer need to delve into the complexities of the organization’s deployment strategies, allowing them to concentrate on writing unique, innovative code. This approach not only streamlines the development process but also bridges the gap between development and operations, leading to more cohesive teams and faster, more efficient releases.

One key to high-quality software delivery is to have a proper Continuous Integration and Continuous Delivery (CI/CD) process in place. You can see CDK Pipelines: Continuous delivery for AWS CDK applications for practical examples. This high-level construct, powered by AWS CodePipeline, comes in handy when you need to go beyond test deployments with the cdk deploy command and build automated pipelines for production deployments to multiple environments in different regions and/or accounts.

Whenever you commit your AWS CDK app's source code into an AWS CodeCommit, GitHub, GitLab, BitBucket, or Amazon CodeCatalyst source repository, AWS CDK Pipelines automatically builds, tests, and deploys a new version of the application. The pipeline automatically reconfigures itself to deploy as the resources in your stacks or the environments being deployed to change. For GitHub Actions users, see CDK Pipelines for GitHub Workflows.
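
The following sketch shows the shape of such a self-mutating pipeline using the aws-cdk-lib/pipelines module; the repository, branch, connection ARN, and stage contents are placeholders for illustration.

```typescript
// Minimal sketch of a self-mutating CDK pipeline.
import { Stack, StackProps, Stage, StageProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { CodePipeline, CodePipelineSource, ShellStep } from "aws-cdk-lib/pipelines";

class MyAppStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    // Instantiate your application stacks here, e.g. new MyServiceStack(this, "Service");
  }
}

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, "Pipeline", {
      synth: new ShellStep("Synth", {
        // Hypothetical GitHub repository wired up through a CodeStar connection.
        input: CodePipelineSource.connection("my-org/my-app", "main", {
          connectionArn:
            "arn:aws:codestar-connections:us-west-2:111111111111:connection/example",
        }),
        commands: ["npm ci", "npm run build", "npx cdk synth"],
      }),
    });

    // Each addStage call deploys the application to another environment.
    pipeline.addStage(new MyAppStage(this, "Staging"));
  }
}
```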

A number of teams are extending these pipelines and adding their own stages to ensure deployed code meets the organization's quality, security, risk, compliance, and cloud financial management criteria. For best practices of what automation to put inside the pipeline, see the AWS Deployment Pipeline Reference Architecture. By creating fully functional pipelines, platform engineering teams can reduce the cognitive load placed on development teams and improve the developer experience. This strategy has two implementations: QuickStart pipelines and golden pipelines.

QuickStart pipelines are created as constructs in your Construct Hub and treated like the reusable components discussed above. While these pipelines offer simplified interfaces and a reduction in cognitive load, workload teams remain in control of the pipeline and are free to modify it. As a result, workload teams can disable quality gates such as security or compliance tooling, and the controls inside the pipeline aren't provable. This is suboptimal for organizations looking to reduce the costs of compliance and audit. As the number of versions of the construct grows, teams can also have difficulty governing which versions are in use and ensuring workload teams consume the approved ones.

In golden pipelines, the pipelines are created as constructs, but deployed via a centralized team. Workload teams cannot control or modify these pipelines, so quality gates such as security and compliance tooling cannot be disabled. These controls become provable to stakeholders in security, risk and compliance such as auditors. Removing permissions from workload teams comes with costs. With golden pipelines, platform engineering teams often spend a majority of their time troubleshooting workload teams’ deployments. With so much time spent on troubleshooting, teams have little time to introduce new tooling to raise the security and quality standard, improve environment setup and organizational consistency, or improve audit evidence and enforcement.

Two mechanisms can augment these strategies. Traditional change control boards (CCB) can provide provability in situations where gathering evidence and enforcement are difficult. CCBs can benefit from CDK constructs that integrate IT Service Management (ITSM) approvals and fleet management processes into the pipeline and account creation processes. Alternatively, there is an emerging story with Supply-chain Levels for Software Artifacts (SLSA). These artifacts can be used as digital proof. In the Kubernetes space, we see this pattern with tools like Tekton Chains, where attestations are associated with OCI images and Kyverno is used to enforce the presence of those attestations (see Protect the pipe! Secure CI/CD pipelines with a policy-based approach using Tekton and Kyverno for details).

Multi-account and cross-region deployment with CDK

DevOps best practices suggest multiple stages of deployment and testing before deploying to production. On top of that, AWS recommends a dedicated account for each stage to simplify resource isolation and access control. This multi-account strategy helps organizations make the best use of AWS resources and provides fine-grained controls (see Recommended OUs and accounts).

Often, you will have a designated AWS account, where all CI/CD pipelines reside. A deployment is executed by these pipelines to publish to other AWS accounts, which may correspond to development, staging, or production stages. For more information about a cross-account strategy in reference to CI/CD pipelines on AWS, see Building a Secure Cross-Account Continuous Delivery Pipeline.
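
As a sketch of that cross-account pattern, the stages added to a pipeline can declare their target account and Region through env; the account IDs, Regions, and connection ARN below are placeholders, and each target account must be CDK-bootstrapped to trust the pipeline account.

```typescript
// Minimal sketch of a pipeline in a tooling account deploying to other accounts.
import { App, Stack, Stage, StageProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import { CodePipeline, CodePipelineSource, ShellStep } from "aws-cdk-lib/pipelines";

// Placeholder application stage; real stacks would be instantiated inside.
class MyAppStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
  }
}

class CrossAccountPipeline extends Stack {
  constructor(scope: Construct, id: string) {
    // The pipeline itself lives in a dedicated tooling account.
    super(scope, id, { env: { account: "000000000000", region: "us-west-2" } });

    const pipeline = new CodePipeline(this, "Pipeline", {
      crossAccountKeys: true, // assumed: KMS keys so target accounts can read artifacts
      synth: new ShellStep("Synth", {
        input: CodePipelineSource.connection("my-org/my-app", "main", {
          connectionArn:
            "arn:aws:codestar-connections:us-west-2:000000000000:connection/example",
        }),
        commands: ["npm ci", "npx cdk synth"],
      }),
    });

    // Each stage targets its own account and Region.
    pipeline.addStage(
      new MyAppStage(this, "Staging", {
        env: { account: "111111111111", region: "us-west-2" },
      })
    );
    pipeline.addStage(
      new MyAppStage(this, "Prod", {
        env: { account: "222222222222", region: "us-east-1" },
      })
    );
  }
}

const app = new App();
new CrossAccountPipeline(app, "DeliveryPipeline");
app.synth();
```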

Automated Governance

Many enterprise customers leverage CDK to enforce security controls and policies, and they can prevent security issues before deployment with tooling that analyzes code as part of the deployment pipeline. Using the industry-standard cdk-nag tooling, many teams check applications for best practices using a combination of the available rule packs. We are also seeing enterprises build their own Aspects to enforce additional requirements, such as tagging, to manage and organize their deployed resources.
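
As a minimal sketch of applying cdk-nag across an app at synth time, the following attaches the AwsSolutions rule pack as an Aspect; the stack and the suppression shown are illustrative assumptions rather than recommendations.

```typescript
// Minimal sketch: run the cdk-nag AwsSolutions rule pack against a CDK app.
import { App, Aspects, Stack } from "aws-cdk-lib";
import { AwsSolutionsChecks, NagSuppressions } from "cdk-nag";

const app = new App();
const stack = new Stack(app, "MyServiceStack");
// ...define resources in the stack...

// Check every construct in the app against the AwsSolutions rule pack at synth time.
Aspects.of(app).add(new AwsSolutionsChecks({ verbose: true }));

// Findings that are accepted risks can be suppressed with an auditable reason.
NagSuppressions.addStackSuppressions(stack, [
  { id: "AwsSolutions-IAM4", reason: "AWS managed policy accepted for this internal tool." },
]);

app.synth();
```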

Customers can take the CloudFormation templates that CDK synthesizes and add additional checkpoints with CloudFormation Guard, which verifies the output using policy-as-code domain-specific language (DSL) rules. Platform Engineering teams can build the rules, and workload teams can consume them and run CloudFormation Guard inside the pipeline. There is an official construct that makes it easy to add CloudFormation Guard checks to your application.

With AWS CDK, infrastructure is code. So, the standard tooling you already use to ensure quality and improve the builder experience should be used with CDK. If your organization has a code quality program, treat CDK applications no differently than web applications or microservices. Similarly, with Amazon CodeGuru Security and Amazon CodeWhisperer, builders can get actionable recommendations on how to improve both the security and quality on their CDK code as they would with any other type of application.

With Aspects, cdk-nag, and code quality tools, organizations can prevent security issues before they are deployed. However, it is also important to create controls that work after a deployment occurs. AWS CloudFormation Hooks allow customers to inspect resources prior to creating, updating, or deleting CloudFormation stacks or CDK applications. With CloudFormation Hooks, Platform Engineering teams can provide warnings or prevent provisioning of non-compliant resources. These hooks can be created via CDK (see Build and Deploy CloudFormation Hooks using A CI/CD Pipeline for details).

Finally, you can deploy AWS Config's conformance packs via CDK. These collections of rules help your organization enforce security standards at scale. If your organization wishes to build custom rules, teams can build reactive controls using higher-level constructs for AWS Config Rules. While many of these patterns existed prior to CDK, CDK helps accelerate building and deploying cloud applications and controls by leveraging reusable components that are shared within the enterprise or by the community at large.
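
As a sketch of a reactive control deployed with CDK, the following defines an AWS Config managed rule that flags resources missing required tags; the tag keys are illustrative assumptions, not a mandated standard.

```typescript
// Minimal sketch: an AWS Config managed rule for required tags, defined in CDK.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as config from "aws-cdk-lib/aws-config";

export class GovernanceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Flags resources that are missing the CostCenter or Owner tags (hypothetical keys).
    new config.ManagedRule(this, "RequiredTags", {
      identifier: config.ManagedRuleIdentifiers.REQUIRED_TAGS,
      inputParameters: {
        tag1Key: "CostCenter",
        tag2Key: "Owner",
      },
    });
  }
}
```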

Operate the Application using Observability

The open-source community provides high-level construct libraries that expand basic monitoring capabilities for CDK applications. The cdk-monitoring-constructs project makes it easy to monitor CDK apps. Similarly, cdk-wakeful takes that a step further, adding many additional services and providing easily configurable interfaces so that you can automatically be notified by AWS Systems Manager Incident Manager, AWS Chatbot, or Amazon Simple Notification Service. By leveraging prebuilt solutions from the open-source community, you can focus on creating custom metrics and thresholds around your business logic. Platform Engineering teams can modify and extend open-source projects to help workload teams simplify their operations and emit health and status to centralized systems.

Accelerate New Project Startup with an Internal Developer Platform

An Internal Developer Platform (IDP) is built by platform engineering teams to provide golden paths and enable developer self-service. These golden paths are expressed as a series of templates that define the structure of a source control repository and the files stored inside it. When the IDP uses these templates to create source code repositories, the resulting repository contains the following:

  • A getting-started tutorial (usually in a README.md)
  • Reference documentation
  • Skeleton source code
  • Dependency Management
  • CI/CD pipeline template
  • IaC template
  • Observability configuration

With CDK, the CI/CD pipeline, IaC template, and observability configuration can all be a part of a single CDK application.

Platform engineering teams build golden paths and expose them using tools like Backstage, Humanitec, or Port. When building golden paths, there are two common approaches to the underlying project structure. Some organizations choose the approach where their IaC code repository is separate from the application code. Others choose to include everything in one repository. There is a healthy tension between how much to place inside a golden path vs a reusable component. In both strategies, platform engineering teams can avoid code duplication by leveraging CDK. The approach your organization chooses will dictate how you organize your reusable components. Below, we will walk through both options and the implications on reusable constructs.

Option 1: Everything in one repository

In this approach, all the code is contained in one repository: infrastructure, application, configuration, and deployment. This approach enables builders to collaborate, build features, and innovate together quickly, which is why it is the recommended approach. For more details, refer to the Best practices documentation. For examples, see AWS Deployment Reference Architecture for Applications.

This approach works best in teams that are “value-stream aligned.” Value-stream aligned teams have development and operations capabilities within the same team. These teams are organized around solving problems for customers rather than technical capabilities. Within the project, teams can organize around logical units such as application tier (API, database, etc.) or business capabilities (order management, product catalog, delivery services, etc.). In organizations that are value-stream aligned, larger, highly conventionalized reusable components are better. An extreme example of this type of construct is a single construct that contains all the code for an entire microservice. In these teams, the cognitive load focuses on the customer problem, so reducing the complexity of developing applications is critical to success.

Option 2: Separated application code pipeline

In this alternative approach, you can decouple your application code from your infrastructure by storing them in separate repositories and having separate pipelines. Separating the pipelines often leads to siloes and less collaboration between workload builders, who shift focus to developing features, and infrastructure engineers, who limit their efforts to building the infrastructure on which those applications run.

This approach works best in teams that are “matrixed.” A matrix organization is structured around technical capabilities (development, operations, security, business, etc.). In these cases, more modular constructs work better than constructs that are highly conventionalized. Experts from each organization can use CDK constructs as mechanisms to share their expertise across the entire organization. Examples of these types of constructs are monitoring, alerting, or security constructs prebuilt with hooks to plug in to centralized monitoring.

Building a Community of Practice with Platform Engineering

Scaling any new technology within a large organization requires the creation and enablement of a community that fosters collaboration, establishes best practices, and stays up to date with the changes in the ecosystem. In order to enable the creation of these communities of practice within your organization, AWS supports multiple public communities centered around the creation of content to educate and enable CDK users. Members of your organization’s community of practice can connect with other CDK development teams around the world through these public AWS supported communities.

Communities of Practice

A Community of Practice (CoP) is a group of people with a shared interest who come together to learn, collaborate, and develop expertise in a specific domain through informal interactions and knowledge sharing. Within your organization, establishing communities of practice around CDK has proven to enable mentorship, problem solving, and reusable assets. To get started, your platform engineering team – the creators of reusable constructs and builder tooling with CDK – become early content creators for the community of practice. This establishes a feedback loop where CDK creators publicize their achievements via the CoP and consumers can ask questions and provide direct guidance to creators. Once the CoP has sustainably expanded beyond the initial group that established it, it can start to add hack-a-thons or game days within your organization, which can bring innovation and solve organization-wide challenges. Fully mature communities of practice own curated wikis or databases of knowledge. They use mechanisms such as townhalls, office hours, newsletters, and chat channels to keep the community up to date. In this way, CDK expertise is diffused across the organization. At AWS, this diffusion of expertise has led to teams other than platform engineering becoming creators of reusable constructs. By expanding who can create reusable constructs, we are able to accelerate our own innovation.

Communities

There is a growing community that supports CDK, with many different platforms available providing content, code, examples, and meetups. CDK is currently maintained by AWS with support from the community on the AWS CDK GitHub page, where you can contribute to the platform, raise issues, see the backlog, and join discussions with active community members.

CDK.dev is the community driven hub around the CDK ecosystem. This site brings together all the latest blogs, videos, and educational content. It also provides links to join the community Slack platform.

CDK Patterns houses an open source collection of AWS Serverless architecture patterns built with CDK for developers to use. These patterns are sourced via AWS Community Builders and AWS Heroes.

Finally, AWS re:Post provides a question-and-answer portal where the community can ask and resolve questions.

The AWS Community Builders program offers technical resources, education, and networking opportunities to AWS technical enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community.

Communities of practice can leverage AWS public communities like cdk.dev to fill gaps in knowledge. Townhalls can benefit from speakers from AWS Heroes or community builders, frequent contributors to GitHub or re:Post, or speakers from CDK Day. Newsletters can aggregate and summarize the latest news from across all AWS channels. Once your community of practice establishes CDK competencies, this collaboration can also be bidirectional. For example, experts in your organization’s community can become AWS Heroes. Success stories can be shared via CDK Day, guest blog posts, and you might even speak at one of our major events such as AWS Summits, AWS re:Invent, AWS re:Inforce, or AWS re:Mars.

Final Thoughts

As we’ve said throughout this blog, with CDK, Infrastructure is code. This has enabled a paradigm shift in the infrastructure management space. Today, we see many customers such as Liberty Mutual, Scenario, Checkmarx, and Registers of Scotland establishing mature ecosystems using CDK. With an active open-source community, an AWS dev team for long term support, and multiple platforms for knowledge sharing, your builders can quickly learn, build, and innovate. Due to successful pilots, many organizations adopt CDK, become more agile, and innovate faster. This is exactly what happened at Amazon, where CDK is the first choice for building new services.

Organizations often scale and reduce complexity through platform engineering. These teams build higher-level constructs by applying best practices and provide CI/CD pipelines to accelerate deployments. Your deployments are safer when you unit test your infrastructure as code and apply robust security controls that guide builders at every stage, from authoring to operating.

Finally, establishing a community enables your organization to build its own mature ecosystem. Through both internal and open-source communities your builders can connect, discover, and grow.

Photo of David Hessler

David Hessler

Prior to joining AWS, David spent a decade serving as a principal technologist and establishing Platform Engineering and SRE teams for the United States government. Since joining AWS in 2020, David has spent his time helping customers accelerate deployment speed and safety for some of AWS’s largest commercial and public sector customers. Today, as a part of the DevSecOps team within Global Services Security, he is building the next generation of DevSecOps tooling for AWS customers.

Amritha Shetty

Amritha Shetty

Amritha is a Solutions Architect at AWS. She works with public sector customers to help them migrate to and modernize in the cloud. She loves helping citizens get more from public sector institutions through rapid innovation in the cloud. She brings over twelve years of software design and development experience and is passionate about helping customers implement the next-generation development experience.

Photo of Chris Scudder

Chris Scudder

Chris is a Senior Solutions Architect with the UK Public Sector team. His primary focus is helping Public Sector customers adopt cloud technologies for their workloads, helping them streamline their development and operational processes. He has a background in application development and has created multiple Industry Solutions for UK Local Government. He has an interest in Machine Learning and delivers AWS DeepRacer events alongside his day-to-day role.

Photo of Kumar Karra

Kumar Karra

Kumar Karra is a Senior Field Solutions Architect for AWS Small and Medium Business Customers. He has a strong background in designing and developing applications, from small consumer-facing applications to large mission-critical applications for enterprises. He specializes in NextGen Developer Experience tools and enjoys helping customers shorten their time to value by guiding them on strategies to implement fast, repeatable, testable, and scalable tools and architectures.

AWS Security Profile: Arynn Crow, Sr. Manager for AWS User AuthN

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-arynn-crow-sr-manager-for-aws-user-authn/

In the AWS Security Profile series, I interview some of the humans who work in AWS Security and help keep our customers safe and secure. In this profile, I interviewed Arynn Crow, senior manager for AWS User AuthN in AWS Identity.


How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over 10 years now, and AWS for three of those years. I lead a team of product managers in AWS Identity who define the strategy for our user authentication related services. This includes internal and external services that handle AWS sign-in, account creation, threat mitigation, and underlying authentication components that support other AWS services. It’s safe to say that I’m thinking about something different nearly every day, which keeps it fun.

How do you explain your job to non-technical friends and family?

I tell people that my job is about figuring out how to make sure that people are who they say they are online. If they want to know a bit more, sometimes I will relate this to examples they’re increasingly likely to encounter in their everyday lives—getting text or email messages for additional security when they try to sign in to their favorite website, or using their fingerprint or facial scan to sign in instead of entering a password. There’s a lot more to identity and authentication, of course, but this usually gets the point across!

You haven’t always been in security. Tell me a little bit about your journey and how you got started in this space?

More than 10 years ago now, I started in one of our call centers as a temporary customer service agent. I was handling Kindle support calls (this was back when our Kindles still had physical keyboards on them, and “Alexa” wasn’t even part of our lexicon yet). After New Year’s 2013, I was converted to a full-time employee and resumed my college education—I earned both of my degrees (a BA in International Affairs, and an MA in political science) while working at Amazon. Over the next few years, I moved into different positions including our Back Office team, a Kindle taskforce role supporting the launch of new services, and Executive Customer Relations. Throughout these roles, I continued to manage projects related to anti-abuse and security. I got a lot of fulfillment out of these projects—protecting our customers, employees, and business against fraud and data loss is very gratifying work. When a position opened up in our Customer Service Security team, I got the role thanks in part to my prior experience working with that team to deliver security solutions within our operations centers.

After that, things moved fast—I started first with a project on account recovery and access control for our internal workforce, and continuously expanded my portfolio into increasingly broad and more technical projects that all related to what I now know is the field of Identity and Access Management. Eventually, I started leading our identity strategy for customer service as a whole, including our internal authentication and access management as well as external customer authentication to our call centers. I also began learning about and engaging more with the security and identity community that existed outside of Amazon by attending conferences and getting involved with organizations working on standards development like the FIDO Alliance. Moving to AWS Identity a few years later was an obvious next step to gain exposure to broader applications of identity.

What advice do you have for people who want to get into security but don’t have the traditional background?

First, it can be hard. This journey wasn’t easy for me, and I’m still working to learn more every day. I want to say that because if someone is having trouble landing their first security job, or feeling like they still don’t “fit” at first when they do get the job, they should know it doesn’t mean they’re failing. There are a lot of inspiring stories out there about people who seemingly naturally segued into this field from other projects and work, but there are just as many people working very hard to find their footing. Everyone doubts themselves sometimes. Don’t let it hold you back.

Next for the practical advice, whatever you’re doing now, there are probably opportunities to begin looking at your space with a security lens, and start helping wherever you find problems to address or processes to improve by bringing them to your security teams. This will help your organization while also helping you build relationships. Be insatiably curious! Cybersecurity is community-oriented, and I find that people in this field are very passionate about what we do. Many people I met were excited that I was interested in learning about what they do and how they do it. Sometimes, they’d agree to take a couple hours with me each month for me to ask questions about how things worked, and narrow down what resources were the best use of my time.

Finally, there are a lot of resources for learning. We have highly competent, successful security professionals that learned on the job and don’t hold a roster of certifications, so I don’t think these are essential for success. But, I do think these programs can be beneficial to familiarize you with basic concepts and give you access to a common language. Various certification and training courses exist, from basic, free computer science courses online to security-specific ones like CISSP, SANS, CompTIA Security+, and CIDPro, to name just a few. AWS offers AWS-specific cloud security training, too, like our Ramp-Up Guide. You don’t have to learn to code beautifully to succeed in security, but I think developing a working understanding of systems and principles will help build credibility and extract deeper learning out of experiences you have.

In your opinion, why is it important to have people with different backgrounds working in security?

Our backgrounds color the way we think about and approach problems, and considering all of these different approaches helps make us well-rounded. And particularly in the current context, in which women and marginalized communities are underrepresented in STEM, expanding our thinking about what skills make a good security practitioner makes room for more people at the table while giving us a more comprehensive toolkit to tackle our toughest problems. As for myself, I apply my training in political science. Security sometimes looks like a series of technical challenges and solutions, but it’s interwoven with a complex array of regulatory and social considerations, too—this makes the systems-based and abstract thinking I honed in my education useful. I know other folks who came to identity from social science, mathematics, and biology backgrounds who feel the same about skills learned from their respective fields.

Pivoting a bit, what’s something that you’re working on right now that you’re excited about?

It’s a very interesting time to be working on authentication. Many people who aren’t working in enterprises or regulated industries are still hesitant to adopt controls like multi-factor authentication. And beyond MFA, organizations like NIST and CISA are emphasizing the importance of phishing-resistant MFA. So, at the same time we’re continuously working to innovate in our MFA and other authentication offerings to customers, we’re collaborating with the rest of the industry to advance technologies for strong authentication and their adoption across sectors. I represent Amazon to the FIDO Alliance, which is an industry association that supports the development of a set of protocols collectively known as FIDO2 for strong, phishing-resistant authentication. With FIDO and its various member companies, we’re working to increase the usability, awareness, and adoption of FIDO2 security keys and passkeys, which are a newer implementation of FIDO2 that improves ease of use by enabling customers to use phishing-resistant keys across devices and platforms.

In your opinion, what is the coolest thing happening in identity right now?

What I think is the most important thing happening in identity is the convergence of digital and “traditional” identities. The industry is working through challenging questions with emerging technology right now to bring forth innovation balanced with concern for equity, privacy, and sustainability. Ease of use and improved security for users, as well as abuse prevention for businesses, are driving the conversion of real-life identities and credentials (such as people’s driver’s licenses) to a digital format, such as digital driver’s licenses, wallets, and emerging verifiable credentials.

What are you most proud of in your career?

I’m most grateful for the opportunities I’ve had to help define the next chapter of the AWS account protection strategy. Some of our work also translates to features we get to ship to customers, like when we extended support for multiple MFA devices for AWS Identity and Access Management (IAM) late last year, and this year we announced that in 2024 we will require MFA when customers sign in to the AWS Management Console. Seeing how excited people were for a security feature was really awesome. Account protection has always been important, but this is especially true in the years following the COVID-19 outbreak when we saw a rapid acceleration of resources going digital. This kind of work definitely isn’t a one-person show, and as fulfilling as it is to see the impact I have here, what I’m really proud of is that I get to work with and learn from so many really smart, competent, and kind team members that are just as passionate about this space as I am.

If you were to do anything other than security, what would you want to do?

Before I discovered my interest for security, I was trying to decide if I would continue on from my master’s program in political science to do a PhD in either political science or public health. Towards the end of my degree program, I became really interested in how research-driven public policy could drive improvements in maternal and infant health outcomes in areas with acute opioid-related health crises, which is an ongoing struggle for my home place. I’m still very invested in that topic and try to keep on top of the latest research—I could easily see myself moving back towards that if I ever decide it’s time to close this chapter.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.

Arynn Crow

Arynn Crow

Arynn Crow is a Manager of Product Management for AWS Identity. Arynn started at Amazon in 2012, trying out many different roles over the years before finding her happy place in security and identity in 2017. Arynn now leads the product team responsible for developing user authentication services at AWS.

AWS Security Profile: Chris Betz, CISO of AWS

Post Syndicated from Chris Betz original https://aws.amazon.com/blogs/security/aws-security-profile-chris-betz-ciso-of-aws/

In the AWS Security Profile series, we feature the people who work in Amazon Web Services (AWS) Security and help keep our customers safe and secure. This interview is with Chris Betz, Chief Information Security Officer (CISO), who began his role as CISO of AWS in August of 2023.


How did you get started in security? What prompted you to pursue this field?

I’ve always had a passion for technology, and for keeping people out of harm’s way. When I found computer science and security in the Air Force, this world opened up to me that let me help others, be a part of building amazing solutions, and engage my competitive spirit. Security has the challenges of the ultimate chess game, though with real and impactful consequences. I want to build reliable, massively scaled systems that protect people from malicious actors. This is really hard to do and a huge challenge I undertake every day. It’s an amazing team effort that brings together the smartest folks that I know, competing with threat actors.

What are you most excited about in your new role?

One of the most exciting things about my role is that I get to work with some of the smartest people in the field of security, people who inspire, challenge, and teach me something new every day. It’s exhilarating to work together to make a significant difference in the lives of people all around the world, who trust us at AWS to keep their information secure. Security is constantly changing, so we get to learn, adapt, and get better every single day. I get to spend my time helping to build a team and culture that customers can depend on, and I’m constantly impressed and amazed at the caliber of the folks I get to work with here.

How does being a former customer influence your role as AWS CISO?

I was previously the CISO at Capital One and was an AWS customer. As a former customer, I know exactly what it’s like to be a customer who relies on a partner for significant parts of their security. There needs to be a lot of trust, a lot of partnership across the shared responsibility model, and consistent focus on what’s being done to keep sensitive data secure. Every moment that I’m here at AWS, I’m reminded about things from the customer perspective and how I can minimize complexity, and help customers leverage the “super powers” that the cloud provides for CISOs who need to defend the breadth of their digital estate. I know how important it is to earn and keep customer trust, just like the trust I needed when I was in their shoes. This mindset influences me to learn as much as I can, never be satisfied with ”good enough,” and grab every opportunity I can to meet and talk with customers about their security.

What’s been the most dramatic change you’ve seen in the security industry recently?

This is pretty easy to answer: artificial intelligence (AI). This is a really exciting time. AI is dominating the news and is on the mind of every security professional, everywhere. We’re witnessing something very big happening, much like when the internet came into existence and we saw how the world dramatically changed because of it. Every single sector was impacted, and AI has the same potential. Many customers use AWS machine learning (ML) and AI services to help improve signal-to-noise ratio, take over common tasks to free up valuable time to dig deeper into complex cases, and analyze massive amounts of threat intelligence to determine the right action in less time. The combination of Data + Compute power + AI is a huge advantage for cloud companies.

AI and ML have been a focus for Amazon for more than 25 years, and we get to build on an amazing foundation. And it’s exciting to take advantage of and adapt to the recent big changes and the impact this is having on the world. At AWS, we’re focused on choice and broadening access to generative AI and foundation models at every layer of the ML stack, including infrastructure (chips), developer tools, and AI services. What a great time to be in security!

What’s the most challenging part of being a CISO?

Maintaining a culture of security involves each person, each team, and each leader. That’s easy to say, but the challenge is making it tangible—making sure that each person sees that, even though their title doesn’t have “security” in it, they are still an integral part of security. We often say, “If you have access, you have responsibility.” We work hard to limit that access. And CISOs must constantly work to build and maintain a culture of security and help every single person who has access to data understand that security is an important part of their job.

What’s your short- and long-term vision for AWS Security?

Customers trust AWS to protect their data so they can innovate and grow quickly, so in that sense, our vision is for security to be a growth lever for our customers, not added friction. Cybersecurity is key to unlocking innovation, so managing risk and aligning the security posture of AWS with our business objectives will continue for the immediate future and long term. For our customers, my vision is to continue helping them understand that investing in security helps them move faster and take the right risks—the kind of risks they need to remain competitive and innovative. When customers view security as a business accelerator, they achieve new technical capabilities and operational excellence. Strong security is the ultimate business enabler.

If you could give one piece of advice to all CISOs, what would it be?

Nail Zero Trust. Zero Trust is the path to the strongest, most effective security, and getting back to the core concepts is important. While Zero Trust is a different journey for every organization, it’s a natural evolution of cybersecurity and defense in depth in particular. No matter what’s driving organizations toward Zero Trust—policy considerations or the growing patchwork of data protection and privacy regulations—Zero Trust meaningfully improves security outcomes through an iterative process. When companies get this right, they can quickly identify and investigate threats and take action to contain or disrupt unwanted activity.

What are you most proud of in your career?

I’m proud to have worked—and still be working with—such talented, capable, and intelligent security professionals who care deeply about security and are passionate about making the world a safer place. Being among the world’s top security experts really makes me grateful and humble for all the amazing opportunities I’ve had to work alongside them, working together to solve problems and being part of creating a legacy to make security better.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Chris Betz

Chris Betz

Chris is CISO at AWS. He oversees security teams and leads the development and implementation of security policies, with the aim of managing risk and aligning the company’s security posture with business objectives. Chris joined Amazon in August 2023, after holding CISO and security leadership roles at leading companies. He lives in Northern Virginia with his family.

Lisa Maher

Lisa Maher

Lisa Maher joined AWS in February 2022 and leads AWS Security Thought Leadership PR. Before joining AWS, she led crisis communications for clients experiencing data security incidents at two global PR firms. Lisa, a former journalist, is a graduate of Texas A&M School of Law, where she specialized in Cybersecurity Law & Policy, Risk Management & Compliance.

Happy anniversary, Amazon CloudFront: 15 years of evolution and internet advancements

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/happy-anniversary-amazon-cloudfront-15-years-of-evolution-and-internet-advancements/

I can’t believe it’s been 15 years since Amazon CloudFront was launched! When Amazon S3 became available in 2006, developers loved the flexibility and started to build a new kind of globally distributed applications where storage was not a bottleneck. These applications needed to be performant, reliable, and cost-efficient for every user on the planet. So in 2008 a small team (a “two-pizza team“) launched CloudFront in just 200 days. Jeff Barr hinted at the new and yet unnamed service in September and introduced CloudFront two months later.

Since the beginning, CloudFront has provided an easy way to distribute content to end users with low latency, high data transfer speeds, and no long-term commitments. What started as a simple cache for Amazon S3 quickly evolved into a fully featured content delivery network. Now CloudFront delivers applications at blazing speeds across the globe, supporting live sporting events such as NFL, Cricket World Cup, and FIFA World Cup.

At the same time, we also want to provide you with the best tools to secure applications. In 2015, we announced AWS WAF integration with CloudFront to provide fast and secure access control at the edge. Then, we focused on developing robust threat intelligence by combining signals across services. This threat intelligence integrates with CloudFront, adding AWS Shield to protect applications from common exploits and distributed denial of service (DDoS) attacks. For example, we recently detected an unusual spike in HTTP/2 requests to Amazon CloudFront. We quickly realized that CloudFront had automatically mitigated a new type of HTTP request flood DDoS event.

A lot also happens at lower levels than HTTP. For example, when you serve your application with CloudFront, all of the packets received by the application are inspected by a fully inline DDoS mitigation system which doesn’t introduce any observable latency. In this way, L3/L4 DDoS attacks against CloudFront distributions are mitigated in real time.

We also made under-the-hood improvements like s2n-tls (short for “signal to noise”), an open-source implementation of the TLS protocol that has been designed to be small and fast with simplicity as a priority. Another similar improvement is s2n-quic, an open-source QUIC protocol implementation written in Rust.

With CloudFront, you can also control access to content through a number of capabilities. You can restrict access to only authenticated viewers or, through geo-restriction capability, configure the specific geographic locations that can access content.

Security is always important, but not every organization has dedicated security experts on staff. To make robust security more accessible, CloudFront now includes built-in protections such as one-click web application firewall setup, security recommendations, and an intuitive security dashboard. With these integrated security features, teams can put critical safeguards in place without deep security expertise. Our goal is to empower all customers to easily implement security best practices.

Web applications delivery
During the past 15 years, web applications have become much more advanced and essential to end users. When CloudFront launched, our focus was helping deliver content stored in S3 buckets. Dynamic content was introduced to optimize web applications where portions of a website change for each user. Dynamic content also improves access to APIs that need to be delivered globally.

As applications became more distributed, we looked at ways to help developers make efficient use of CloudFront's global footprint and resources at the edge. To allow customization and personalization of content close to end users and minimize latency, Lambda@Edge was introduced.

When fewer compute resources are needed, CloudFront Functions can run lightweight JavaScript functions across edge locations for low-latency HTTP manipulations and personalized content delivery. Recently, CloudFront Functions expanded to further customize responses, including modifying HTTP status codes and response bodies.

Today, CloudFront handles over 3 trillion HTTP requests daily and uses a global network of more than 600 points of presence and 13 Regional edge caches in more than 100 cities across 50 countries. This scale helps power the most demanding online events. For example, during the 2023 Amazon Prime Day, CloudFront handled peak loads of over 500 million HTTP requests per minute, totaling over 1 trillion HTTP requests.

Amazon CloudFront has more than 600,000 active developers building and delivering applications to end users. To help teams work at their full speed, CloudFront introduced continuous deployment so developers can test and validate configuration changes on a portion of traffic before full deployment.

Media and entertainment
It’s now common to stream music, movies, and TV series to our homes, but 15 years ago, renting DVDs was still the norm. Running streaming servers was technically complex, requiring long-term contracts to access the global infrastructure needed for high performance.

First, we added support for audio and video streaming capabilities using custom protocols since technical standards were still evolving. To handle large audiences and simplify cost-effective delivery of live events, CloudFront launched live HTTP streaming and, shortly after, improved support for both Flash-based (popular at the time) and Apple iOS devices.

As the media industry continued moving to internet-based delivery, AWS acquired Elemental, a pioneer in software-defined video solutions. Integrating Elemental offerings helped provide services, software, and appliances that efficiently and economically scale video infrastructures for use cases such as broadcast and content production.

The evolution of technologies and infrastructure allows for new ways of communication to become possible, such as when NASA did the first-ever live 4K stream from space using CloudFront.

Today, the world’s largest events and leading video platforms rely on CloudFront to deliver massive video catalogs and live stream content to millions. For example, CloudFront delivered streams for the FIFA World Cup 2022 on behalf of more than 19 major broadcasters globally. More recently, CloudFront handled over 120 Tbps of peak data transfer during one of the Thursday Night Football games of the NFL season on Prime Video and helped deliver the Cricket World Cup to millions of viewers across the globe.

What’s next?
Many things have changed during these 15 years but the focus on security, performance, and scalability stays the same. At AWS, it’s always Day 1, and the CloudFront team is constantly looking for ways to improve based on your feedback.

The rise of botnets is driving an ever-evolving, highly dynamic, and shifting threat landscape. Layer 7 DDoS attacks are becoming increasingly prevalent. The pervasiveness of bot traffic is increasing exponentially. As this occurs, we are evolving how we mitigate threats at the network border, at the edge, and in the Region, making it simpler for customers to configure the right security options.

Web applications are becoming more complex and interactive, and viewer expectations on latency and resiliency are even more stringent. This will drive new innovation. As new applications use generative artificial intelligence (AI), needs will evolve. These trends will continue to grow, so our investments will be focused on improving security and edge compute capabilities to support these new use cases.

With the current macroeconomic environment, many customers, especially small and medium-sized businesses and startups, look at how they can reduce their costs. Providing optimal price-performance has always been a priority for CloudFront. Cacheable data transferred to CloudFront edge locations from AWS resources does not incur additional fees. Also, 1 TB of data transfer from CloudFront to the internet per month is included in the free tier. CloudFront operates on a pay-as-you-go model with no upfront costs or minimum usage requirements. For more info, see CloudFront pricing.

As we approach AWS re:Invent, take note of these sessions that can help you learn about the latest innovations and connect with experts:

To learn more on how to speed up your websites and APIs and keep them protected, see the Application Security and Performance section of the AWS Developer Center.

Reduce latency and improve the security for your applications with Amazon CloudFront.

Danilo