Tag Archives: Events

AWS Week in Review – June 20, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-20-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Last Week’s Launches
It’s been a quiet week on the AWS News Blog; however, a glance at the What’s New page shows that the various service teams have been as busy as usual. Here’s a roundup of announcements that caught my attention this past week.

Support for 15 new resource types in AWS Config – AWS Config is a service for assessing, auditing, and evaluating the configuration of resources in your account. You can monitor and review changes in resource configurations and automatically evaluate them against a desired configuration. The newly expanded set of types includes resources from Amazon SageMaker, Elastic Load Balancing, AWS Batch, AWS Step Functions, AWS Identity and Access Management (IAM), and more.
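
If you want to confirm which resources of a newly supported type AWS Config has discovered, the resource inventory API is a quick way to check. Here's a minimal sketch in Python with boto3; the resource type string is an illustrative assumption, so check the AWS Config documentation for the exact identifiers of the new types.

```python
import boto3

config = boto3.client("config")

# The resource type below is an illustrative guess at one of the new types;
# see the AWS Config documentation for the exact identifiers.
response = config.list_discovered_resources(
    resourceType="AWS::StepFunctions::StateMachine",
    limit=20,
)

for resource in response["resourceIdentifiers"]:
    print(resource["resourceType"], resource.get("resourceName", resource["resourceId"]))
```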

New console experience for AWS Budgets – A new split-view panel allows for viewing details of a budget without needing to leave the overview page. The new panel will save you time (and clicks!) when you’re analyzing performance across a set of budgets. By the way, you can also now select multiple budgets at the same time.

VPC endpoint support is now available in Amazon SageMaker Canvas – SageMaker Canvas is a visual, point-and-click service that enables business analysts to generate accurate machine learning (ML) models without requiring ML experience or writing code. The new VPC endpoint support, available in all Regions where SageMaker Canvas is supported, eliminates the need for an internet gateway, NAT instance, or VPN connection when connecting from your SageMaker Canvas environment to services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and more.
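
For readers who want to try this, the sketch below (Python/boto3) shows the general shape of creating an interface VPC endpoint. The service name and all IDs are placeholder assumptions; the exact endpoints a SageMaker Canvas environment needs depend on which services you connect to, so check the SageMaker documentation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All IDs below are placeholders; the service name shown is the SageMaker API
# endpoint, one of several your Canvas setup may need.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sagemaker.api",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)

print(response["VpcEndpoint"]["VpcEndpointId"])
```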

Additional data sources for Amazon AppFlow – Facebook Ads, Google Ads, and Mixpanel are now supported as data sources, providing the ability to ingest marketing and product analytics for downstream analysis in AppFlow-connected software-as-a-service (SaaS) applications such as Marketo and Salesforce Marketing Cloud.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates you may have missed from the past week:

Amazon Elastic Compute Cloud (Amazon EC2) expanded the Regional availability of AWS Nitro System-based C6 instance types. C6gn instance types, powered by Arm-based AWS Graviton2 processors, are now available in the Asia Pacific (Seoul), Europe (Milan), Europe (Paris), and Middle East (Bahrain) Regions, while C6i instance types, powered by 3rd generation Intel Xeon Scalable processors, are now available in the Europe (Frankfurt) Region.

As a .NET and PowerShell Developer Advocate here at AWS, there are a few .NET-related news items and updates I want to highlight:

Upcoming AWS Events
The AWS New York Summit is approaching quickly, on July 12. Registration is also now open for the AWS Summit Canberra, an in-person event scheduled for August 31.

Microsoft SQL Server users may be interested in registering for the SQL Server Database Modernization webinar on June 21. The webinar will show you how to modernize and cost-optimize SQL Server on AWS.

Amazon re:MARS is taking place this week in Las Vegas. I’ll be there as a host of the AWS on Air show, along with special guests highlighting their latest news from the conference. I also have some On Air sessions on using our AI services from .NET lined up! As usual, we’ll be streaming live from the expo hall, so if you’re at the conference, give us a wave. You can watch the show live on Twitch.tv/aws, Twitter.com/AWSOnAir, and LinkedIn Live.

A reminder that if you’re a podcast listener, check out the official AWS Podcast Update Show. There is also the latest installment of the AWS Open Source News and Updates newsletter to help keep you up to date.

No doubt there’ll be a whole new batch of releases and announcements from re:MARS, so be sure to check back next Monday for a summary of the announcements that caught our attention!

— Steve

Security Is Shifting in a Cloud-Native World: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/16/security-is-shifting-in-a-cloud-native-world-insights-from-rsac-2022/

The cloud has become the default for IT infrastructure and resource delivery, allowing an unprecedented level of speed and flexibility for development and production pipelines. This helps organizations compete and innovate in a fast-paced business environment. But as the cloud becomes more ingrained, the ephemeral nature of cloud infrastructure is presenting new challenges for security teams.

Several talks by our Rapid7 presenters at this year’s RSA Conference touched on this theme. Here’s a closer look at what our RSAC 2022 presenters had to say about adapting security processes to a cloud-native world.

A complex picture

As Lee Weiner, SVP Cloud Security and Chief Innovation Officer, pointed out in his RSA briefing, “Context Is King: The Future of Cloud Security,” cloud adoption is not only increasing — it’s growing more complex. Many organizations are bringing on multiple cloud vendors to meet a variety of different needs. One report estimates that a whopping 89% of companies that have adopted the cloud have chosen a multicloud approach.

This model is so popular because of the flexibility it offers organizations to utilize the right technology, in the right cloud environment, at the right cost — a key advantage in today’s marketplace.

“Over the last decade or so, many organizations have been going through a transformation to put themselves in a position to use the scale and speed of the cloud as a strategic business advantage,” Jane Man, Director of Product Management for VRM, said in her RSA Lounge presentation, “Adapting Your Vulnerability Management Program for Cloud-Native Environments.”

While DevOps teams can move more quickly than ever before with this model, security pros face a more complex set of questions than with traditional infrastructure, Lee noted. How many of our instances are exposed to known vulnerabilities? Do they have proper identity and access management (IAM) controls established? What levels of access do those permissions actually grant users in our key applications?

New infrastructure, new demands

The core components of vulnerability management remain the same in cloud environments, Jane said in her talk. Security teams must:

  • Get visibility into all assets, resources, and services
  • Assess, prioritize, and remediate risks
  • Communicate the organization’s security and compliance posture to management

But because of the ephemeral nature of the cloud, the way teams go about completing these requirements is shifting.

“Running a scheduled scan, waiting for it to complete and then handing a report to IT doesn’t work when instances may be spinning up and down on a daily or hourly basis,” she said.

In his presentation, Lee expressed optimism that the cloud itself may help provide the new methods we need for cloud-native security.

“Because of the way cloud infrastructure is built and deployed, there’s a real opportunity to answer these questions far faster, far more efficiently, far more effectively than we could with traditional infrastructure,” he said.

Calling for context

For Lee, the goal is to enable secure adoption of cloud technologies so companies can accelerate and innovate at scale. But there’s a key element needed to achieve this vision: context.

What often prevents teams from fully understanding the context around their security data is that the data is siloed; without integration between disparate systems, putting the pieces together requires a high level of manual effort. To really get a clear picture of risk, security teams need to be able to bring their data together with context from each layer of the environment.

But what does context actually look like in practice, and how do you achieve it? Jane laid out a few key strategies for understanding the context around security data in your cloud environment.

  • Broaden your scope: Set up your VM processes so that you can detect more than just vulnerabilities in the cloud — you want to be able to see misconfigurations and issues with IAM permissions, too.
  • Understand the environment: When you identify a vulnerable instance, identify if it is publicly accessible and what its business application is — this will help you determine the scope of the vulnerability.
  • Catch early: Aim to find and fix vulnerabilities in production or pre-production by shifting security left, earlier in the development cycle.

4 best practices for context-driven cloud security

Once you’re able to better understand the context around security data in your environment, how do you fit those insights into a holistic cloud security strategy? For Lee, this comes down to four key components that make up the framework for cloud-native security.

1. Visibility and findings

You can’t secure what you can’t see — so the first step in this process is to take a full inventory of your attack surface. With different kinds of cloud resources in place and providers releasing new services frequently, understanding the security posture of these pieces of your infrastructure is critical. This includes understanding not just vulnerabilities and misconfigurations but also access, permissions, and identities.

“Understanding the layer from the infrastructure to the workload to the identity can provide a lot of confidence,” Lee said.

2. Contextual prioritization

Not everything you discover in this inventory will be of equal importance, and treating it all the same way just isn’t practical or feasible. The vast amount of data that companies collect today can easily overwhelm security analysts — and this is where context really comes in.

With integrated visibility across your cloud infrastructure, you can make smarter decisions about what risks to prioritize. Then, you can assign ownership to resource owners and help them understand how those priorities were identified, improving transparency and promoting trust.

3. Prevent and automate

The cloud is built with automation in mind through Infrastructure as Code — and this plays a key role in security. Automation can help boost efficiency by minimizing the time it takes to detect, remediate, or contain threats. A shift-left strategy can also help with prevention by building security into deployment pipelines, so production teams can identify vulnerabilities earlier.

Jane echoed this sentiment in her talk, recommending that companies “automate to enable — but not force — remediation” and use tagging to drive remediation of vulnerabilities found running in production.

4. Runtime monitoring

The next step is to continually monitor the environment for vulnerabilities and threat activity — and as you might have guessed, monitoring looks a little different in the cloud. For Lee, it’s about leveraging the increased number of signals to understand if there’s any drift away from the way the service was originally configured.

He also recommended using behavioral analysis to detect threat activity and setting up purpose-built detections that are specific to cloud infrastructure. This will help ensure the security operations center (SOC) has the most relevant information possible, so they can perform more effective investigations.

Lee stressed that in order to carry out the core components of cloud security and achieve the outcomes companies are looking for, having an integrated ecosystem is absolutely essential. This will help prevent data from becoming siloed, enable security pros to obtain that ever-important context around their data, and let teams collaborate with less friction.

Looking for more insights on how to adapt your security program to a cloud-native world? Check out Lee’s presentation on demand, or watch our replays of Rapid7 speakers’ sessions from RSAC 2022.

We’ll see you at CSTA 2022 Annual Conference

Post Syndicated from James Robinson original https://www.raspberrypi.org/blog/csta-2022/

Connecting face to face with educators around the world is a key part of our mission at the Raspberry Pi Foundation, and it’s something that we’ve sorely missed doing over the last two years. We’re therefore thrilled to be joining over 1000 computing educators in the USA at the Computer Science Teachers Association (CSTA) Annual Conference in Chicago in July.

You will find us at booth 521 in the expo hall throughout the conference, as well as running four sessions. Gemma, Kevin, James, Sue, and Jane are team members representing Hello World magazine, the Raspberry Pi Computing Education Research Centre, and our other free programmes and education initiatives. We thank the team at CSTA for involving us in what we know will be an amazing conference.

Talk to us about computer science pedagogy

Developing and sharing effective computing pedagogy is our theme for CSTA 2022. We’ll be talking to you about our 12 pedagogy principles, laid out in The Big Book of Computing Pedagogy, available to download for free.

Cover of The Big Book of Computing Pedagogy.

An exciting piece of news is that everyone attending CSTA 2022 will find a free print copy of the Big Book in their conference goodie bag!

We’re really looking forward to sharing and discussing the book and all our work with US educators, and to seeing some familiar faces. We’re also hoping to interview lots of old and new friends about your approaches to teaching computing and computer science for future Hello World podcast episodes.

Your sessions with us

Our team will also be running a number of sessions where you can join us to learn, discuss, and prepare lesson plans.

Semantic Waves and Wavy Lessons: Connecting Theory to Practical Activities and Back Again

Thursday 14 July, 9am–12pm: Pre-conference workshop (booking required) with James Robinson and Jane Waite

If you enjoy explaining concepts using unplugged activities, analogy, or storytelling, then this practical pre-conference session is for you. In the session, we’ll introduce the idea of semantic waves, a learning theory that supports learners in building knowledge of new concepts through careful consideration of vocabulary and contexts. Across the world, this approach has been successfully used to teach topics ranging from ballet to chemistry — and now computing.

Three computer science educators discuss something at a screen.

You’ll learn how this theory can be applied to deliver powerful explanations that connect abstract ideas and concrete experiences. By taking part in the session, you’ll gain a solid understanding of semantic wave theory, see it in practice in some freely available lesson plans, and apply it to your own planning.

Write for a Global Computing Community with Hello World Magazine

Friday 15 July, 1–2pm: Workshop with Gemma Coleman

Do you enjoy sharing your teaching ideas, successes, and challenges with others? Do you want to connect with a global community of over 30,000 computing educators? Have you always wanted to be a published author? Then come along to this workshop session.

Issues of Hello World magazine arranged to form a number five.
Hello World has been going strong for five years — find out how you can become one of its authors.

Every single computing or CS teacher out there has at least one lesson to share, idea to voice, or story to tell. In the session, you’ll discuss what makes a good article with Gemma Coleman, Hello World’s Editor, and you’ll learn top tips for how to communicate your ideas in writing. Gemma will also guide you through writing a plan for your very own article. Even if you’re not sure whether you want to write an article, doing this is a powerful way to reflect on your teaching practice.

Developing a Toolkit for Teaching Computer Science in School

Saturday 16 July, 4–5pm: Keynote talk by Sue Sentance

To teach any subject requires good teaching skills, knowledge about the subject being taught, and specific knowledge that a teacher gains about how to teach a particular topic, to their particular students, in a particular context. Teaching computer science is no different, and it’s a challenge for teachers to develop a go-to set of pedagogical strategies for such a new subject, especially for elements of the subject matter that they are just getting to grips with themselves.

12 principles of computing pedagogy: lead with concepts; structure lessons; make concrete; unplug, unpack, repack; work together; read and explore code first; foster program comprehension; model everything; challenge misconceptions; create projects; get hands-on; add variety.

In this keynote talk, our Chief Learning Officer Sue Sentance will focus on some of the 12 pedagogy principles that we developed to support the teaching of computer science. We created this set of principles together with other teachers and researchers to help us and everyone in computing and computer science education reflect on how we teach our learners. Sue will share how we arrived at the principles, and she’ll use classroom examples to illustrate how you can apply them in practice.

Exploring the Hello World Big Book of Computing Pedagogy

Sunday 17 July, 9–10am: Workshop with Sue Sentance

The set of 12 pedagogy principles we’ve developed for teaching computing is presented in our Hello World Big Book of Computing Pedagogy. The book includes summaries, teachers’ perspectives, and lesson plans for each of the 12 principles.

A tweet praising The Big Book of Computing Pedagogy.

All CSTA attendees will get their own print copy of the Big Book, and in this practical session, we will use the book to explore together how you can use the 12 principles in the planning and delivery of your lessons. The session will be very hands-on, so bring along something you know you want or need to teach.

See you at CSTA in July

CSTA is now just a month away, and we can’t wait to meet old friends, make new connections, and learn from each other! Come find us at booth 521 or at our sessions to meet the team, discover Hello World magazine and the Hello World podcast, and find out more about our educational work. We hope to see you soon.

A sneak peek at the identity and access management sessions for AWS re:Inforce 2022

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-identity-and-access-management-sessions-for-aws-reinforce-2022/

Register now with discount code SALFNj7FaRe to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

AWS re:Inforce 2022 will take place in-person in Boston, MA, on July 26 and 27 and will include some exciting identity and access management sessions. AWS re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

The identity and access management track will showcase how quickly you can get started to securely manage access to your applications and resources as you scale on AWS. You will hear from customers about how they integrate their identity sources and establish a consistent identity and access strategy across their on-premises environments and AWS. Identity experts will discuss best practices for establishing an organization-wide data perimeter and simplifying access management with the right permissions, to the right resources, under the right conditions. You will also hear from AWS leaders about how we’re working to make identity, access control, and resource management simpler every day. This post highlights some of the identity and access management sessions that you can add to your agenda. To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.

Breakout sessions

Lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically conclude with 10–15 minutes of Q&A.

IAM201: Security best practices with AWS IAM
AWS IAM is an essential service that helps you securely control access to your AWS resources. In this session, learn about IAM best practices like working with temporary credentials, applying least-privilege permissions, moving away from users, analyzing access to your resources, validating policies, and more. Leave this session with ideas for how to secure your AWS resources in line with AWS best practices.

IAM301: AWS Identity and Access Management (IAM) the practical way
Building secure applications and workloads on AWS means knowing your way around AWS Identity and Access Management (AWS IAM). This session is geared toward the curious builder who wants to learn practical IAM skills for defending workloads and data, with a technical, first-principles approach. Gain knowledge about what IAM is and a deeper understanding of how it works and why.

IAM302: Strategies for successful identity management at scale with AWS SSO
Enterprise organizations often come to AWS with existing identity foundations. Whether new to AWS or maturing, organizations want to better understand how to centrally manage access across AWS accounts. In this session, learn the patterns many customers use to succeed in deploying and operating AWS Single Sign-On at scale. Get an overview of different deployment strategies, features to integrate with identity providers, application system tags, how permissions are deployed within AWS SSO, and how to scale these functionalities using features like attribute-based access control.

IAM304: Establishing a data perimeter on AWS, featuring Vanguard
Organizations are storing an unprecedented and increasing amount of data on AWS for a range of use cases including data lakes, analytics, machine learning, and enterprise applications. They want to make sure that sensitive non-public data is only accessible to authorized users from known locations. In this session, dive deep into the controls that you can use to create a data perimeter that allows access to your data only from expected networks and by trusted identities. Hear from Vanguard about how they use data perimeter controls in their AWS environment to meet their security control objectives.

IAM305: How Guardian Life validates IAM policies at scale with AWS
Attend this session to learn how Guardian Life shifts IAM security controls left to empower builders to experiment and innovate quickly, while minimizing the security risk exposed by granting over-permissive permissions. Explore how Guardian validates IAM policies in Terraform templates against AWS best practices and Guardian’s security policies using AWS IAM Access Analyzer and custom policy checks. Discover how Guardian integrates this control into CI/CD pipelines and codifies their exception approval process.

IAM306: Managing B2B identity at scale: Lessons from AWS and Trend Micro
Managing identity for B2B multi-tenant solutions requires tenant context to be clearly defined and propagated with each identity. It also requires proper onboarding and automation mechanisms to do this at scale. Join this session to learn about different approaches to managing identities for B2B solutions with Amazon Cognito and learn how Trend Micro is doing this effectively and at scale.

IAM307: Automating short-term credentials on AWS, with Discover Financial Services
As a financial services company, Discover Financial Services considers security paramount. In this session, learn how Discover uses AWS Identity and Access Management (IAM) to help achieve their security and regulatory obligations. Learn how Discover manages their identities and credentials within a multi-account environment and how Discover fully automates key rotation with zero human interaction using a solution built on AWS with IAM, AWS Lambda, Amazon DynamoDB, and Amazon S3.

Builders’ sessions

Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

IAM351: Using AWS SSO and identity services to achieve strong identity management
Organizations often manage human access using IAM users or through federation with external identity providers. In this builders’ session, explore how AWS SSO centralizes identity federation across multiple AWS accounts, replaces IAM users and cross-account roles to improve identity security, and helps administrators more effectively scope least privilege. Additionally, learn how to use AWS SSO to activate time-based access and attribute-based access control.

IAM352: Anomaly detection and security insights with AWS Managed Microsoft AD
This builders’ session demonstrates how to integrate AWS Managed Microsoft AD with native AWS services like Amazon CloudWatch Logs and Amazon CloudWatch metrics and alarms, combined with anomaly detection, to identify potential security issues and provide actionable insights for operational security teams.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

IAM231: Prevent unintended access: AWS IAM Access Analyzer policy validation
In this chalk talk, walk through ways to use AWS IAM Access Analyzer policy validation to review IAM policies that do not follow AWS best practices. Learn about the Access Analyzer APIs that help validate IAM policies and how to use these APIs to prevent IAM policies from reaching your AWS environment through mechanisms like AWS CloudFormation hooks and CI/CD pipeline controls.
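
As a rough illustration of the pattern this chalk talk covers, the following sketch (Python/boto3) calls the Access Analyzer ValidatePolicy API on a deliberately risky policy; in a CI/CD pipeline, you might fail the build when findings of type SECURITY_WARNING or ERROR come back. The policy shown is a contrived example, not one from the session.

```python
import json
import boto3

analyzer = boto3.client("accessanalyzer")

# A deliberately risky policy: iam:PassRole with a wildcard resource is a
# documented Access Analyzer security warning.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "iam:PassRole", "Resource": "*"}
    ],
}

response = analyzer.validate_policy(
    policyDocument=json.dumps(policy),
    policyType="IDENTITY_POLICY",
)

for finding in response["findings"]:
    print(finding["findingType"], "-", finding["findingDetails"])
```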

IAM232: Navigating the consumer identity first mile using Amazon Cognito
Amazon Cognito allows you to configure sign-in and sign-up experiences for consumers while extending user management capabilities to your customer-facing application. Join this chalk talk to learn about the first steps for integrating your application and getting started with Amazon Cognito. Learn best practices to manage users and how to configure a customized branding UI experience, while creating a fully managed OpenID Connect provider with Amazon Cognito.

IAM331: Best practices for delegating access on AWS
This chalk talk demonstrates how to use built-in capabilities of AWS Identity and Access Management (IAM) to safely allow developers to grant entitlements to their AWS workloads (PassRole/AssumeRole). Additionally, learn how developers can be granted the ability to take self-service IAM actions (CRUD IAM roles and policies) with permissions boundaries.
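
To make the delegation idea concrete, here is a hedged sketch of a common permissions-boundary pattern, not necessarily the exact one presented in the session: developers may create and modify roles, but only if the affected role carries a specific permissions boundary. The account ID, role path, and policy names are placeholders.

```python
import json
import boto3

# Placeholder ARNs; substitute your own account and boundary policy.
BOUNDARY_ARN = "arn:aws:iam::111122223333:policy/DeveloperBoundary"

delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iam:CreateRole", "iam:AttachRolePolicy", "iam:PutRolePolicy"],
            "Resource": "arn:aws:iam::111122223333:role/app-*",
            "Condition": {
                # Allowed only when the affected role carries the boundary.
                "StringEquals": {"iam:PermissionsBoundary": BOUNDARY_ARN}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DelegatedRoleAdministration",
    PolicyDocument=json.dumps(delegation_policy),
)
```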

IAM332: Developing preventive controls with AWS identity services
Learn about how you can develop and apply preventive controls at scale across your organization using service control policies (SCPs). This chalk talk is an extension of the preventive controls within the AWS identity services guide, and it covers how you can meet the security guidelines of your organization by applying and developing SCPs. In addition, it presents strategies for how to effectively apply these controls in your organization, from day-to-day operations to incident response.

IAM333: IAM policy evaluation deep dive
In this chalk talk, learn how policy evaluation works in detail and walk through some advanced IAM policy evaluation scenarios. Learn how a request context is evaluated, the pros and cons of different strategies for cross-account access, how to use condition keys for actions that touch multiple resources, when to use principal and aws:PrincipalArn, when it does and doesn’t make sense to use a wildcard principal, and more.

Workshops

Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

IAM271: Applying attribute-based access control using AWS IAM
This workshop provides hands-on experience applying attribute-based access control (ABAC) to achieve a secure and scalable authorization model on AWS. Learn how and when to apply ABAC, which is native to AWS Identity and Access Management (IAM). Also learn how to find resources that could be impacted by different ABAC policies and session tagging techniques to scale your authorization model across Regions and accounts within AWS.
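
As a flavor of what ABAC looks like in policy form, here is a minimal sketch in the style of the AWS ABAC tutorial; the service, action, and tag key are illustrative assumptions rather than the workshop's actual lab content.

```python
import json

# Access is granted when the caller's "team" principal tag matches the
# secret's "team" resource tag, instead of listing resources one by one.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "secretsmanager:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}

print(json.dumps(abac_policy, indent=2))
```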

IAM371: Building a data perimeter to allow access to authorized users
In this workshop, learn how to create a data perimeter by building controls that allow access to data only from expected network locations and by trusted identities. The workshop consists of five modules, each designed to illustrate a different AWS Identity and Access Management (IAM) and network control. Learn where and how to implement the appropriate controls based on different risk scenarios. Discover how to implement these controls as service control policies, identity- and resource-based policies, and virtual private cloud endpoint policies.

IAM372: How and when to use different IAM policy types
In this workshop, learn how to identify when to use various policy types for your applications. Work through hands-on labs that take you through a typical customer journey to configure permissions for a sample application. Configure policies for your identities, resources, and CI/CD pipelines using permission delegation to balance security and agility. Also learn how to configure enterprise guardrails using service control policies.

If these sessions look interesting to you, join us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Author

Ilya Epshteyn

Ilya is a Senior Manager of Identity Solutions in AWS Identity. He helps customers to innovate on AWS by building highly secure, available, and scalable architectures. He enjoys spending time outdoors and building Lego creations with his kids.

Marc von Mandel

Marc leads the product marketing strategy and execution for AWS Identity Services. Prior to AWS, Marc led product marketing at IBM Security Services across several categories, including Identity and Access Management Services (IAM), Network and Infrastructure Security Services, and Cloud Security Services. Marc currently lives in Atlanta, Georgia, and has worked in cybersecurity and public cloud for more than twelve years.

Defending Against Tomorrow’s Threats: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/13/defending-against-tomorrows-threats-insights-from-rsac-2022/

The rapidly changing pace of the cyberthreat landscape is on every security pro’s mind. Not only do organizations need to secure complex cloud environments, but they’re also more aware than ever that their software supply chains and the open-source elements of their application codebase might not be as ironclad as they thought.

It should come as no surprise, then, that defending against a new slate of emerging threats was a major theme at RSAC 2022. Here’s a closer look at what some Rapid7 experts who presented at this year’s RSA conference in San Francisco had to say about staying ahead of attackers in the months to come.

Surveying the threat landscape

Security practitioners often turn to Twitter for the latest news and insights from peers. As Raj Samani, SVP and Chief Data Scientist, and Lead Security Researcher Spencer McIntyre pointed out in their RSA talk, “Into the Wild: Exploring Today’s Top Threats,” the trend holds true when it comes to emerging threats.

“For many people, identifying threats is actually done through somebody that I follow on Twitter posting details about a particular vulnerability,” said Raj.

As Spencer noted, security teams need to be able to filter all these inputs and identify the actual priorities that require immediate patching and remediation. And that’s where the difficulty comes in.

“How do you manage a patching strategy when there are critical vulnerabilities coming out … it seems weekly?” Raj asked. “Criminals are exploiting these vulnerabilities literally in days, if that,” he continued.

Indeed, the average time to exploit — i.e., the interval between a vulnerability being discovered by researchers and clear evidence of attackers using it in the wild — plummeted from 42 days in 2020 to 17 days in 2021, as noted in Rapid7’s latest Vulnerability Intelligence Report. With so many threats emerging at a rapid clip and so little time to react, defenders need the tools and expertise to understand which vulnerabilities to prioritize and how attackers are exploiting them.

“Unless we get a degree of context and an understanding of what’s happening, we’re going to end up ignoring many of these vulnerabilities because we’ve just got other things to worry about,” said Raj.

The evolving threat of ransomware

One of the things that worry security analysts, of course, is ransomware — and as the threat has grown in size and scope, the ransomware market itself has changed. Cybercriminals are leveraging this attack vector in new ways, and defenders need to adapt their strategies accordingly.

That was the theme that Erick Galinkin, Principal AI Researcher, covered in his RSA talk, “How to Pivot Fast and Defend Against Ransomware.” Erick identified four emerging ransomware trends that defenders need to be aware of:

  • Double extortion: In this type of attack, threat actors not only demand a ransom for the data they’ve stolen and encrypted but also extort organizations for a second time — pay an additional fee, or they’ll leak the data. This means that even if you have backups of your data, you’re still at risk from this secondary ransomware tactic.
  • Ransomware as a service (RaaS): Not all threat actors know how to write highly effective ransomware. With RaaS, they can simply purchase malicious software from a provider, who takes a cut of the payout. The result is a broader and more decentralized network of ransomware attackers.
  • Access brokers: A kind of mirror image to RaaS, access brokers give a leg up to bad actors who want to run ransomware on an organization’s systems but need an initial point of entry. Now, that access is for sale in the form of phished credentials, cracked passwords, or leaked data.
  • Lateral movement: Once a ransomware attacker has infiltrated an organization’s network, they can use lateral movement techniques to gain a higher level of access and ransom the most sensitive, high-value data they can find.

With the ransomware threat growing by the day and attackers’ techniques growing more sophisticated, security pros need to adapt to the new landscape. Here are a few of the strategies Erick recommended for defending against these new ransomware tactics.

  • Continue to back up all your data, and protect the most sensitive data with strong admin controls.
  • Don’t get complacent about credential theft — the spoils of a would-be phishing attack could be sold by an access broker as an entry point for ransomware.
  • Implement the principle of least privilege, so only administrator accounts can perform administrator functions — this will help make lateral movement easier to detect.

Shaping a new kind of SOC

With so much changing in the threat landscape, how should the security operations center (SOC) respond?

This was the focus of “Future Proofing the SOC: A CISO’s Perspective,” the RSA talk from Jeffrey Gardner, Practice Advisor for Detection and Response (D&R). In addition to the sprawling attack surface, security analysts are also experiencing a high degree of burnout, understandably overwhelmed by the sheer volume of alerts and threats. To alleviate some of the pressure, SOC teams need a few key things:

For Jeffrey, these needs are best met through a hybrid SOC model — one that combines internally owned SOC resources and staff with external capabilities offered through a provider, for a best-of-both-worlds approach. The framework for this approach is already in place, but the version that Jeffrey and others at Rapid7 envision involves some shifting of paradigms. These include:

  • Collapsing the distinction between product and service and moving toward “everything as a service,” with a unified platform that allows resources — which include everything from in-product features to provider expertise and guidance — to be delivered on a sliding scale
  • Ensuring full transparency, so the organization understands not only what’s going on in their own SOC but also in their provider’s, through the use of shared solutions
  • More customization, with workflows, escalations, and deliverables tailored to the customer’s needs

Meeting the moment

It’s critical to stay up to date with the most current vulnerabilities we’re seeing and the ways attackers are exploiting them — but to be truly valuable, those insights must translate into action. Defenders need strategies tailored to the realities of today’s threat landscape.

For our RSA 2022 presenters, that might mean going back to basics with consistent data backups and strong admin controls. Or it might mean going bold by fully reimagining the modern SOC. The techniques don’t have to be new or fancy to be effective — they simply have to meet the moment. (Although if the right tactics turn out to be big and game-changing, we’ll be as excited as the next security pro.)

Looking for more insights on how defenders can protect their organizations amid today’s highly dynamic threat landscape? You can watch these presentations — and even more from our Rapid7 speakers — at our library of replays from RSAC 2022.

[VIDEO] An Inside Look at the RSA 2022 Experience From the Rapid7 Team

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/10/video-an-inside-look-at-the-rsa-2022-experience-from-the-rapid7-team/

The two years since the last RSA Conference have been pretty uneventful. Sure, COVID-19 sent us all to work from home for a little while, but it’s not as though we’ve seen any supply-chain-shattering breaches, headline-grabbing ransomware attacks, internet-inferno vulnerabilities, or anything like that. We’ve mostly just been baking sourdough bread and doing woodworking in between Zoom meetings.

OK, just kidding on basically all of that (although I, for one, have continued to hone my sourdough game).

The reality has been quite the opposite. Whether it’s because an unprecedented number of crazy things have happened since March 2020 or because pandemic-era uncertainty has made all of our experiences feel a little more heightened, the past 24 months have been a lot. And now that restrictions on gatherings are largely lifted in most places, many of us are feeling like we need a chance to get together and debrief on what we’ve all been through.

Given that context, what better timing could there have been for RSAC 2022? This past week, a crew of Rapid7 team members gathered in San Francisco to sync up with the greater cybersecurity community and take stock of how we can all stay ahead of attackers and ready for the future in the months to come. We asked four of them — Jeffrey Gardner, Practice Advisor – Detection & Response; Tod Beardsley, Director of Research; Kelly Allen, Social Media Manager; and Erick Galinkin, Principal Artificial Intelligence Researcher — to tell us a little bit about their RSAC 2022 experience. Here’s a look at what they had to say — and a glimpse into the excitement and energy of this year’s RSA Conference.

What’s it been like returning to full-scale in-person events after 2 years?

What was your favorite session or speaker of the week? What made them stand out?

What was your biggest takeaway from the conference? How will it shape the way you think about and practice cybersecurity in the months to come?

Want to relive the RSA experience for yourself? Check out our replays of Rapid7 speakers’ sessions from the week.

AWS Week In Review – June 6, 2022

Post Syndicated from Antje Barth original https://aws.amazon.com/blogs/aws/aws-week-in-review-june-6-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

I’ve just come back from a long (extended) holiday weekend here in the US and I’m still catching up on all the AWS launches that happened this past week. I’m particularly excited about some of the data, machine learning, and quantum computing news. Let’s have a look!

Last Week’s Launches
The launches that caught my attention last week are the following:

Amazon EMR Serverless is now generally available – Amazon EMR Serverless allows you to run big data applications using open-source frameworks such as Apache Spark and Apache Hive without configuring, managing, and scaling clusters. The new serverless deployment option for Amazon EMR automatically scales resources up and down to provide just the right amount of capacity for your application, and you only pay for what you use. To learn more, check out Channy’s blog post and listen to The Official AWS Podcast episode on EMR Serverless.
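
As a quick taste of the new service, the sketch below (Python/boto3) creates an EMR Serverless application and submits a Spark job. The release label, role ARN, and S3 paths are placeholder assumptions; see the EMR Serverless documentation for current values.

```python
import boto3

emr = boto3.client("emr-serverless", region_name="us-east-1")

# Create the serverless application; it starts automatically when a job
# arrives (auto-start is enabled by default).
app = emr.create_application(
    name="demo-spark-app",
    releaseLabel="emr-6.6.0",
    type="SPARK",
)

job = emr.start_job_run(
    applicationId=app["applicationId"],
    executionRoleArn="arn:aws:iam::111122223333:role/EMRServerlessJobRole",  # placeholder
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://amzn-s3-demo-bucket/scripts/wordcount.py",  # placeholder
        }
    },
)

print("Job run started:", job["jobRunId"])
```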

AWS PrivateLink is now supported by additional AWS services – AWS PrivateLink provides private connectivity between your virtual private cloud (VPC), AWS services, and your on-premises networks without exposing your traffic to the public internet. The following AWS services just added support for PrivateLink:

  • Amazon S3 on Outposts has added support for PrivateLink to perform management operations on your S3 storage by using private IP addresses in your VPC. This eliminates the need to use public IPs or proxy servers. Read the June 1 What’s New post for more information.
  • AWS Panorama now supports PrivateLink, allowing you to access AWS Panorama from your VPC without using public endpoints. AWS Panorama is a machine learning appliance and software development kit (SDK) that allows you to add computer vision (CV) to your on-premises cameras. Read the June 2 What’s New post for more information.
  • AWS Backup has added PrivateLink support for VMware workloads, providing direct access to AWS Backup from your VMware environment via a private endpoint within your VPC. Read the June 3 What’s New post for more information.

Amazon SageMaker JumpStart now supports incremental model training and automatic tuning – Besides ready-to-deploy solution templates for common machine learning (ML) use cases, SageMaker JumpStart also provides access to more than 300 pre-trained, open-source ML models. You can now incrementally train all the JumpStart models with new data without training from scratch. Through this fine-tuning process, you can shorten the training time needed to reach a better model. SageMaker JumpStart now also supports model tuning with SageMaker Automatic Model Tuning from its pre-trained models, solution templates, and example notebooks. Automatic tuning allows you to automatically search for the best hyperparameter configuration for your model.

Amazon Transcribe now supports automatic language identification for multilingual audio – Amazon Transcribe converts audio input into text using automatic speech recognition (ASR) technology. If your audio recording contains more than one language, you can now enable multi-language identification, which identifies all languages spoken in the audio file and creates a transcript using each identified language. Automatic language identification for multilingual audio is supported for all 37 languages that are currently supported for batch transcriptions. Read the What’s New post from Amazon Transcribe to learn more.
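
Enabling the new capability is a single flag on the transcription job. Here's a hedged sketch with Python and boto3; the job name and S3 URIs are placeholders, and you should confirm the parameter details in the Amazon Transcribe documentation.

```python
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="multi-language-demo",                      # placeholder
    Media={"MediaFileUri": "s3://amzn-s3-demo-bucket/meeting.mp3"},  # placeholder
    IdentifyMultipleLanguages=True,  # transcribe every language spoken in the file
    OutputBucketName="amzn-s3-demo-bucket-transcripts",              # placeholder
)
```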

Amazon Braket adds support for Borealis, the first publicly accessible quantum computer that is claimed to offer quantum advantage – If you are interested in quantum computing, you’ve likely heard the term “quantum advantage.” It refers to the technical milestone when a quantum computer outperforms the world’s fastest supercomputers on a well-defined task. Until now, none of the devices claimed to demonstrate quantum advantage have been accessible to the public. The Borealis device, a new photonic quantum processing unit (QPU) from Xanadu, is the first publicly available quantum computer that is claimed to have achieved quantum advantage. Amazon Braket, the quantum computing service from AWS, has just added support for Borealis. To learn more about how you can test a quantum advantage claim for yourself now on Amazon Braket, check out the What’s New post covering the addition of Borealis support.
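
If you want to poke at the device from code, a first step might look like the following Braket SDK (Python) sketch. The device ARN is an assumption based on Braket's naming scheme; verify it in the Amazon Braket console before use.

```python
from braket.aws import AwsDevice

# The device ARN is an assumption based on Braket's naming scheme;
# verify it in the Amazon Braket console.
borealis = AwsDevice("arn:aws:braket:us-east-1::device/qpu/xanadu/Borealis")
print(borealis.name, borealis.status)
```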

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

New AWS Heroes – A warm welcome to our newest AWS Heroes! The AWS Heroes program is a worldwide initiative that acknowledges individuals who have truly gone above and beyond to share knowledge in technical communities. Get to know them in the June 2022 introduction blog post!

AWS open-source news and updates – My colleague Ricardo Sueiras writes this weekly open-source newsletter in which he highlights new open-source projects, tools, and demos from the AWS Community. Read edition #115 here.

Upcoming AWS Events
Join me in Las Vegas for Amazon re:MARS 2022. The conference takes place June 21–24 and is all about the latest innovations in machine learning, automation, robotics, and space. I will deliver a talk on how machine learning can help to improve disaster response. Say “Hi!” if you happen to be around and see me.

We also have more AWS Summits coming up over the next couple of months, both in-person and virtual.

In Europe:

In North America:

In South America:

Find an AWS Summit near you, and get notified when registration opens in your area.

Imagine Conference 2022 – You can now register for IMAGINE 2022 (August 3, Seattle). The IMAGINE 2022 conference is a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud.

Sign up for the SQL Server Database Modernization webinar on June 21 to learn how to modernize and cost-optimize Microsoft SQL Server on AWS.

That’s all for this week. Check back next Monday for another Week in Review!

— Antje

A sneak peek at the data protection and privacy sessions for AWS re:Inforce 2022

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-and-privacy-sessions-for-reinforce-2022/

Register now with discount code SALUZwmdkJJ to get $150 off your full conference pass to AWS re:Inforce. For a limited time only and while supplies last.

Today we want to tell you about some of the engaging data protection and privacy sessions planned for AWS re:Inforce. AWS re:Inforce is a learning conference where you can learn more about security, compliance, identity, and privacy. When you attend the event, you have access to hundreds of technical and business sessions, an AWS Partner expo hall, a keynote speech from AWS Security leaders, and more. AWS re:Inforce 2022 will take place in-person in Boston, MA, on July 26 and 27. re:Inforce 2022 features content in the following five areas:

  • Data protection and privacy
  • Governance, risk, and compliance
  • Identity and access management
  • Network and infrastructure security
  • Threat detection and incident response

This post highlights some of the data protection and privacy offerings that you can sign up for, including breakout sessions, chalk talks, builders’ sessions, and workshops. For the full catalog of all tracks, see the AWS re:Inforce session preview.

Breakout sessions

Lecture-style presentations that cover topics at all levels and are delivered by AWS experts, builders, customers, and partners. Breakout sessions typically include 10–15 minutes of Q&A at the end.

DPP101: Building privacy compliance on AWS
In this session, learn where technology meets governance with an emphasis on building. With the privacy regulation landscape continuously changing, organizations need innovative technical solutions to help solve privacy compliance challenges. This session covers three unique customer use cases and explores privacy management, technology maturity, and how AWS services can address specific concerns. The studies presented help identify where you are in the privacy journey, provide actions you can take, and illustrate ways you can work towards privacy compliance optimization on AWS.

DPP201: Meta’s secure-by-design approach to supporting AWS applications
Meta manages a globally distributed data center infrastructure with a growing number of AWS Cloud applications. With all applications, Meta starts by understanding data security and privacy requirements alongside application use cases. This session covers the secure-by-design approach for AWS applications that helps Meta put automated safeguards in place before deploying applications. Learn how Meta handles account lifecycle management through provisioning, maintaining, and closing accounts. The session also details Meta’s global monitoring and alerting systems that use AWS technologies such as Amazon GuardDuty, AWS Config, and Amazon Macie to provide monitoring, access-anomaly detection, and vulnerable-configuration detection.

DPP202: Uplifting AWS service API data protection to TLS 1.2+
AWS is constantly raising the bar to ensure customers use the most modern Transport Layer Security (TLS) encryption protocols, which meet regulatory and security standards. In this session, learn how AWS can help you easily identify if you have any applications using older TLS versions. Hear tips and best practices for using AWS CloudTrail Lake to detect the use of outdated TLS protocols, and learn how to update your applications to use only modern versions. Get guidance, including a demo, on building metrics and alarms to help monitor TLS use.
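
As a sketch of the kind of check the session describes, the following Python/boto3 snippet starts a CloudTrail Lake query that counts API calls still arriving over older TLS versions. The event data store ID is a placeholder, and the tlsVersion strings are assumptions to verify against your own events.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Replace the UUID with your event data store ID; the tlsVersion strings
# are assumptions to check against your own events.
query = """
SELECT eventSource, tlsDetails.tlsVersion, COUNT(*) AS calls
FROM 00000000-0000-0000-0000-000000000000
WHERE tlsDetails.tlsVersion IN ('TLSv1', 'TLSv1.1')
GROUP BY eventSource, tlsDetails.tlsVersion
"""

response = cloudtrail.start_query(QueryStatement=query)
print("Query started:", response["QueryId"])
```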

DPP203: Secure code and data in use with AWS confidential compute capabilities
At AWS, confidential computing is defined as the use of specialized hardware and associated firmware to protect in-use customer code and data from unauthorized access. In this session, dive into the hardware- and software-based solutions AWS delivers to provide a secure environment for customer organizations. With confidential compute capabilities such as the AWS Nitro System, AWS Nitro Enclaves, and NitroTPM, AWS offers protection for customer code and sensitive data such as personally identifiable information, intellectual property, and financial and healthcare data. Securing data allows for use cases such as multi-party computation, blockchain, machine learning, cryptocurrency, secure wallet applications, and banking transactions.

Builders’ sessions

Small-group sessions led by an AWS expert who guides you as you build the service or product on your own laptop. Use your laptop to experiment and build along with the AWS expert.

DPP251: Disaster recovery and resiliency for AWS data protection services
Mitigating unknown risks means planning for any situation. To help achieve this, you must architect for resiliency. Disaster recovery (DR) is an important part of your resiliency strategy and concerns how your workload responds when a disaster strikes. To this end, many organizations are adopting architectures that function across multiple AWS Regions as a DR strategy. In this builders’ session, learn how to implement resiliency with AWS data protection services. Attend this session to gain hands-on experience with the implementation of multi-Region architectures for critical AWS security services.

DPP351: Implement advanced access control mechanisms using AWS KMS
Join this builders’ session to learn how to implement access control mechanisms in AWS Key Management Service (AWS KMS) and enforce fine-grained permissions on sensitive data and resources at scale. Define AWS KMS key policies, use attribute-based access control (ABAC), and discover advanced techniques such as grants and encryption context to solve challenges in real-world use cases. This builders’ session is aimed at security engineers, security architects, and anyone responsible for implementing security controls such as segregating duties between encryption key owners, users, and AWS services or delegating access to different principals using different policies.
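
One of the techniques mentioned, encryption context, is easy to see in a small example. This is a minimal sketch (Python/boto3), not the session's lab material: decryption succeeds only when the caller supplies the same context values, and those values can also be required in key policies via kms:EncryptionContext condition keys. The key alias is a placeholder.

```python
import boto3

kms = boto3.client("kms")
KEY_ID = "alias/demo-key"  # placeholder

encrypted = kms.encrypt(
    KeyId=KEY_ID,
    Plaintext=b"account number 1234",
    EncryptionContext={"tenant": "acme", "purpose": "billing"},
)

# Decryption must present the identical context; the same values can be
# required in key policies via kms:EncryptionContext condition keys.
decrypted = kms.decrypt(
    CiphertextBlob=encrypted["CiphertextBlob"],
    EncryptionContext={"tenant": "acme", "purpose": "billing"},
)
print(decrypted["Plaintext"])
```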

DPP352: TLS offload and containerized applications with AWS CloudHSM
With AWS CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. This builders’ session covers two common scenarios for CloudHSM: TLS offload using NGINX and OpenSSL Dynamic Engine, and a containerized application that uses PKCS#11 to perform crypto operations. Learn about scaling containerized applications, discover how metrics and logging can help you improve the observability of your CloudHSM-based applications, and review audit records that you can use to assess compliance requirements.

DPP353: How to implement hybrid public key infrastructure (PKI) on AWS
As organizations migrate workloads to AWS, they may be running a combination of on-premises and cloud infrastructure. When certificates are issued to this infrastructure, having a common root of trust to the certificate hierarchy allows for consistency and interoperability of the public key infrastructure (PKI) solution. In this builders’ session, learn how to deploy a PKI that allows such capabilities in a hybrid environment. This solution uses Windows Certificate Authority (CA) and ACM Private CA to distribute and manage x.509 certificates for Active Directory users, domain controllers, network components, mobile, and AWS services, including Amazon API Gateway, Amazon CloudFront, and Elastic Load Balancing.

Chalk talks

Highly interactive sessions with a small audience. Experts lead you through problems and solutions on a digital whiteboard as the discussion unfolds.

DPP231: Protecting healthcare data on AWS
Achieving strong privacy protection through technology is key to protecting patient data. Privacy protection is fundamental for healthcare compliance and is an ongoing process that demands that legal, regulatory, and professional standards are continually met. In this chalk talk, learn about data protection, privacy, and how AWS maintains a standards-based risk management program so that the HIPAA-eligible services can specifically support HIPAA administrative, technical, and physical safeguards. Also consider how organizations can use these services to protect healthcare data on AWS in accordance with the shared responsibility model.

DPP232: Protecting business-critical data with AWS migration and storage services
Business-critical applications that were once considered too sensitive to move off premises are now moving to the cloud with an extension of the security perimeter. Join this chalk talk to learn about securely shifting these mature applications to cloud services with the AWS Transfer Family and helping to secure data in Amazon Elastic File System (Amazon EFS), Amazon FSx, and Amazon Elastic Block Store (Amazon EBS). Also learn about tools for ongoing protection as part of the shared responsibility model.

DPP331: Best practices for cutting AWS KMS costs using Amazon S3 bucket keys
Learn how AWS customers are using Amazon S3 bucket keys to cut their AWS Key Management Service (AWS KMS) request costs by up to 99 percent. In this chalk talk, hear about the best practices for exploring your AWS KMS costs, identifying suitable buckets to enable bucket keys, and providing mechanisms to apply bucket key benefits to existing objects.
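
For reference, enabling a bucket key is a one-call change on the bucket's default encryption configuration. The sketch below (Python/boto3) uses placeholder names. Note that the change applies to newly written objects; existing objects pick up the benefit only after they are re-encrypted, for example with an in-place copy, which is why the talk also covers mechanisms for applying bucket key benefits to existing objects.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="amzn-s3-demo-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/demo-key",  # placeholder
                },
                "BucketKeyEnabled": True,  # reuse a bucket-level key for SSE-KMS
            }
        ]
    },
)
```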

DPP332: How to securely enable third-party access
In this chalk talk, learn about ways you can securely enable third-party access to your AWS account. Learn why you should consider using services such as Amazon GuardDuty, AWS Security Hub, AWS Config, and others to improve auditing, alerting, and access control mechanisms. Hardening an account before permitting external access can help reduce security risk and improve the governance of your resources.

Workshops

Interactive learning sessions where you work in small teams to solve problems using AWS Cloud security services. Come prepared with your laptop and a willingness to learn!

DPP271: Isolating and processing sensitive data with AWS Nitro Enclaves
Join this hands-on workshop to learn how to isolate highly sensitive data from your own users, applications, and third-party libraries on your Amazon EC2 instances using AWS Nitro Enclaves. Explore Nitro Enclaves, discuss common use cases, and build and run an enclave. This workshop covers enclave isolation, cryptographic attestation, enclave image files, building a local vsock communication channel, debugging common scenarios, and the enclave lifecycle.

DPP272: Data discovery and classification with Amazon Macie
This workshop familiarizes you with Amazon Macie and how to scan and classify data in your Amazon S3 buckets. Work with Macie (data classification) and AWS Security Hub (centralized security view) to view and understand how data in your environment is stored and to understand any changes in Amazon S3 bucket policies that may negatively affect your security posture. Learn how to create a custom data identifier, plus how to create and scope data discovery and classification jobs in Macie.

DPP273: Architecting for privacy on AWS
In this workshop, follow a regulatory-agnostic approach to build and configure privacy-preserving architectural patterns on AWS, including user consent management, data minimization, and cross-border data flows. Explore various services and tools for preserving privacy and protecting data.

DPP371: Building and operating a certificate authority on AWS
In this workshop, learn how to securely set up a complete CA hierarchy using AWS Certificate Manager Private Certificate Authority and create certificates for various use cases. These use cases include internal applications that terminate TLS, code signing, document signing, IoT device authentication, and email authenticity verification. The workshop covers job functions such as CA administrators, application developers, and security administrators and shows you how these personas can follow the principle of least privilege to perform various functions associated with certificate management. Also learn how to monitor your public key infrastructure using AWS Security Hub.

If any of these sessions look interesting to you, consider joining us in Boston by registering for re:Inforce 2022. We look forward to seeing you there!

Author

Marta Taggart

Marta is a Seattle native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

Katie Collins

Katie is a Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

AWS Week In Review – May 30, 2022

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-30-2022/

Today, the US observes Memorial Day. South Korea also has a national Memorial Day, celebrated next week on June 6. In both countries, the day is set aside to remember those who sacrificed in service to their country. This time provides an opportunity to recognize and show our appreciation for the armed services and the important role they play in protecting and preserving national security.

AWS has also supported our veterans, active-duty military personnel, and military spouses with our training and hiring programs in the US. We’ve developed a number of programs focused on engaging the military community, helping them develop valuable AWS technical skills, and supporting their transition into cloud careers. To learn more, see AWS’s military commitment.

Last Week’s Launches
The launches that caught my attention last week are the following:

Three New AWS Wavelength Zones in the US and South Korea – We announced the availability of three new AWS Wavelength Zones: in Nashville, Tennessee, and Tampa, Florida in the US on Verizon’s 5G Ultra Wideband network, and in Seoul, South Korea on SK Telecom’s 5G network.

AWS Wavelength Zones embed AWS compute and storage services at the edge of communications service providers’ 5G networks while providing seamless access to cloud services running in an AWS Region. We now have a total of 28 Wavelength Zones globally, across Canada, Germany, Japan, South Korea, the UK, and the US. Learn more about AWS Wavelength and get started today.

New Amazon EC2 C7g, M6id, C6id, and P4de Instance Types – Last week, we announced four new EC2 instance types. C7g instances are the first instances powered by the latest AWS Graviton3 processors and deliver up to 25 percent better performance than Graviton2-based C6g instances for a broad spectrum of applications, including high-performance computing (HPC) and CPU-based machine learning (ML) inference.

M6id and C6id instances are powered by Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz, equipped with up to 7.6 TB of local NVMe-based SSD block-level storage, and deliver up to 15 percent better price performance compared to previous-generation instances.

P4de instances are a preview of our latest GPU-based instances that provide the highest performance for ML training and HPC applications. They are powered by eight NVIDIA A100 GPUs, each with 80 GB of high-performance HBM2e GPU memory, 2x the memory of the GPUs in our current P4d instances. The new P4de instances provide a total of 640 GB of GPU memory, delivering up to 60 percent better ML training performance along with 20 percent lower cost to train when compared to P4d instances.

Amazon EC2 Stop Protection Feature to Protect Instances From Unintentional Stop Actions – Now you don’t have to worry about accidentally stopping or terminating your instances. With Stop Protection, you can safeguard data in instance store volume(s) from unintentional stop actions. Previously, you could already protect your instances from unintentional termination by enabling Termination Protection; Stop Protection extends the same safeguard to stop actions.

When enabled, the Stop or Termination Protection feature blocks attempts to stop or terminate the instance via the EC2 console, API call, or CLI command. This feature provides an extra measure of protection for stateful workloads since instances can be stopped or terminated only by deactivating the Stop Protection feature.

AWS DataSync Supports Google Cloud Storage and Azure Files Storage Locations – We announced the general availability of two additional storage locations for AWS DataSync, an online data movement service that makes it easy to sync your data both into and out of the AWS Cloud. With this release, DataSync now supports Google Cloud Storage and Azure Files storage locations in addition to Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), Amazon FSx for Windows File Server, Amazon FSx for Lustre, and Amazon FSx for OpenZFS.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Last week, there were lots of public sector announcements at the AWS Summit Washington, DC.

To learn more, watch the keynote of Max Peterson, Vice President of AWS Worldwide Public Sector.

Upcoming AWS Events
If you have a developer background or similar and are looking to develop ML skills you can use to solve real-world problems, Let’s Ship It – with AWS! ML Edition is the perfect place to start. Over eight episodes of Twitch training scheduled from June 2 to July 21, you can learn hands-on how to build ML models for use cases such as predicting demand and personalizing your offerings, and more.

The AWS Summit season is mostly over in Asia Pacific and Europe, but there are some upcoming virtual and in-person Summits that might be close to you in June:

More to come in August and September.

Please join Amazon re:MARS 2022 (June 21 – 24) to hear from recognized thought leaders and technical experts who are building the future of machine learning, automation, robotics, and space. You can also preview Robotics at Amazon, published by Amazon Science, which discusses recent real-world challenges of building robotic systems.

You can now register for AWS re:Inforce 2022 (July 26 – 27). Join us in Boston to learn how AWS is innovating in the world of cloud security, and hone your technical skills in expert-led interactive sessions.

You can now register for AWS re:Invent 2022 (November 28 – December 2). Join us in Las Vegas to experience our most vibrant event that brings together the global cloud community. You can virtually attend live keynotes and leadership sessions and access our on-demand breakout sessions even after re:Invent closes.

That’s all for this week. Check back next Monday for another Week in Review!

Channy


AWS Week in Review – May 16, 2022

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-16-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

I had been on the road for the last five weeks and attended many of the AWS Summits in Europe. It was great to talk to so many of you in person. The Serverless Developer Advocates are going around many of the AWS Summits with the Serverlesspresso booth. If you attend an event that has the booth, say “Hi 👋” to my colleagues, and have a coffee while asking all your serverless questions. You can find all the upcoming AWS Summits in the events section at the end of this post.

Last week’s launches
Here are some launches that got my attention during the previous week.

AWS Step Functions announced a new console experience to debug your state machine executions – Now you can opt in to the new console experience of Step Functions, which makes it easier to analyze, debug, and optimize Standard Workflows. The new page allows you to inspect executions using three different views: graph, table, and event view, and adds many new features to enhance the navigation and analysis of the executions. To learn about all the features and how to use them, read Ben’s blog post.

Example of how the Graph View looks

AWS Lambda now supports Node.js 16.x runtime – Now you can start using the Node.js 16 runtime when you create a new function or update your existing functions to use it. You can also use the new container image base that supports this runtime. To learn more about this launch, check Dan’s blog post.

AWS Amplify announces its Android library designed for Kotlin – The Amplify Android library has been rewritten for Kotlin, and it is now available in preview. This new library provides better debugging capabilities and visibility into underlying state management. It also uses the new AWS SDK for Kotlin that was released last year in preview. Read the What’s New post for more information.

Three new APIs for batch data retrieval in AWS IoT SiteWise – With this new launch, AWS IoT SiteWise now supports batch data retrieval from multiple asset properties. The new APIs allow you to retrieve current values, historical values, and aggregated values. Read the What’s New post to learn how you can start using the new APIs.

AWS Secrets Manager now publishes secret usage metrics to Amazon CloudWatch – This launch makes it easy to see the number of secrets in your account and to set alarms for any unexpected increase or decrease in that number. Read the documentation on Monitoring Secrets Manager with Amazon CloudWatch for more information.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other launches and news that you may have missed:

IBM signed a deal with AWS to offer its software portfolio as a service on AWS. This allows customers using AWS to access IBM software for automation, data and artificial intelligence, and security that is built on Red Hat OpenShift Service on AWS.

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish. This week’s episode introduces you to Amazon DynamoDB and shares stories on how different customers use this database service. You can listen to all the episodes directly from your favorite podcast app or the podcast web page.

AWS Open Source News and Updates – Ricardo Sueiras, my colleague from the AWS Developer Relations team, runs this newsletter. It brings you all the latest open-source projects, posts, and more. Read edition #112 here.

Upcoming AWS Events
It’s AWS Summits season and here are some virtual and in-person events that might be close to you:

You can register for re:MARS to get fresh ideas on topics such as machine learning, automation, robotics, and space. The conference will be in person in Las Vegas, June 21–24.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia

Come join us at Cloudflare Connect New York this Thursday!

Post Syndicated from Jen Taylor original https://blog.cloudflare.com/cloudflare-connect-nyc-2022/


We take a break from Platform Week to share big news – we’re going to New York this week for our Cloudflare Connect customer event.

We’re packing our bags, getting on planes, and heading to New York to do our first live customer event since 2019, and we could not be more excited. You – the people building, delivering, and securing the apps and networks we know and trust – are the inspiration for the innovation we deliver. We can’t wait to spend time with you.

Our co-founder and CEO Matthew Prince will kick off the day with his view from the top. We’ll then be breaking out into focused conversations to dig in on our latest product news and roadmaps.

Excited about what we’re talking about for Platform Week? Come chat with the Workers team in person and hear more about the roadmap.

Intrigued by the latest DDoS stats we posted and want to learn more? Meet with the team analyzing the attacks and learn about where we go from here.

Not sure where to start your Zero Trust journey? We’ll talk you through what we’re seeing and introduce you to other customers who are in the process of rolling out Zero Trust solutions for their teams so you can learn from each other.

Don’t miss it! Register now – use the code BetterInternet to join us in-person for free. Not in New York? No worries – we’re coming to London, Sydney and San Francisco later this year.

Benefits of migrating to event-driven architecture

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/benefits-of-migrating-to-event-driven-architecture/

Two common options when building applications are request-response and event-driven architecture. In request-response architecture, an application’s components communicate via API calls. The client sends a request and expects a response before performing the next task. In event-driven architecture, the client generates an event and can immediately move on to its next task. Different parts of the application then respond to the event as needed.


In this post, you learn about reasons to consider moving from request-response architecture to an event-driven architecture.

Challenges with request-response architecture

When starting to build a new application, many developers default to a request-response architecture, which may tightly integrate components that communicate via synchronous calls. While a request-response approach is often easier to get started with, it can become challenging as your application grows in complexity.

In this post, I review an example request-response ecommerce application and demonstrate the challenges of tightly coupled integrations. Then I show you how building the same application with an event-driven architecture can give you increased scalability, fault tolerance, and developer velocity.

Close coordination between microservices

In a typical ecommerce application that uses a synchronous API, the client makes a request to place an order and the order service sends the request downstream to an invoice service. If successful, the order service responds with a success message or confirmation number.

In this initial stage, this is a straightforward connection between the two services. The challenge comes when you add more services that integrate with the order service.


If you add a fulfillment service and a forecasting service, the order service has more responsibilities and more complexity. The order service must know how to call each service’s API, from the API call structure to the API’s retry semantics. If there are any backward-incompatible changes to the APIs, the order service team must update them. The system forwards heavy traffic spikes to the order service’s dependencies, which may not have the same scaling capabilities. Also, dependent services may transmit errors back up the stack to the client.

Error handling and retries

Now, you add new downstream services for fulfillment and shipping orders to the ecommerce application.


In the happy path, everything works as expected: the order service triggers the invoicing and payment systems and updates forecasting. Once payment clears, this triggers the fulfillment and packing of the order, and then informs the shipping service to request tracking information.

However, if the fulfillment center cannot find the product because it is out of stock, then fulfillment might have to alert the invoice service and reverse the payment or issue a refund. If fulfillment fails, then the system that triggers shipping might fail as well. Forecasting must also be updated to reflect the change. This remediation workflow exists just to address one of the many potential “unhappy paths” that can occur in this API-driven ecommerce application.

Close coordination between development teams

In a synchronously integrated application, teams must coordinate any new services that are added to the application. This can slow down each development team’s ability to release new features. Imagine your team works on the payment service, but you weren’t told that another team added a new rewards service. What happens now when the fulfillment service fails?

Fulfillment may orchestrate all the other services. Your payments team gets a message and you undo the payment, but you may not know who handles retries and error logic. If the rewards team changes vendors, introduces a new API, and does not tell your team, you may not be aware of the new service.

Ultimately, it can be hard to coordinate these orchestrations and workflows as systems become more complex and more services are added. This is one reason that it can be beneficial to migrate to event-driven architecture.

Benefits of event-driven architecture

Event-driven architecture can help solve the problems of the close coordination of microservices, error handling and retries, and coordination between development teams.

Close coordination between microservices

In event-driven architecture, the publisher emits an event, which is acknowledged by the event bus. The event bus routes events to subscribers, which process events with self-contained business logic. There is no direct communication between publishers and subscribers.

Decoupled applications enable teams to act more independently, which can increase their velocity. For example, with an API-based integration, if your team wants to know about a change that happened in another team’s microservice, you might have to ask that team to make an API call to your service. Consequently, you may have to account for authentication and coordinate with the other team on the structure of the API call. This causes back and forth between teams, which slows down development time. With an event-driven application, you can subscribe to events sent from your microservice, and the event bus (for example, Amazon EventBridge) takes care of routing the event and handling authentication.

Error handling and retries

Another reason to migrate to event-driven architecture is to handle unpredictable traffic. Ecommerce websites like Amazon.com have variable amounts of traffic depending on the day. Once you place an order, several things happen.

First, Amazon checks your credit card to make sure that funds are available. Then, Amazon has to pack the merchandise and load it onto trucks. That all happens in an Amazon fulfillment center. There is no synchronous API call for the Amazon backend to package and ship products. After the system confirms your payment, the front end packages the relevant details – your account number, payment information, and what you bought – into an event and places it onto a queue. Later, another piece of software removes the event from the queue and starts the packaging and shipping.

The key point about this process is that these processes can all run at different rates. Normally, the rate at which customers place orders and the rate at which the warehouses can get the boxes packed are roughly equivalent. However, on busy days like Prime Day, customers place orders much more quickly than the warehouses can operate.

Ecommerce applications, like Amazon.com, must be able to scale up to handle unpredictable traffic. When a customer places an order, an event bus like Amazon EventBridge receives the event and all of the downstream microservices are able to select the order event for processing. Because each of the microservices can fail independently, there are no single points of failure.

Loose coordination between development teams

Event-driven architectures promote development team independence due to loose coupling between publishers and subscribers. Applications can subscribe to events with routing requirements and business logic that are separate from the publisher and other subscribers. This allows publishers and subscribers to change independently of each other, providing more flexibility to the overall architecture.

Decoupled applications also allow you to build new features faster. Adding new features or extending existing ones can be simpler with event-driven architectures because you either add new events, or modify existing ones. This process removes complexity in your application.

Conclusion

In this post, you learn about the challenges of developing applications with request-response architecture. In request-response architecture, the client must send a request and wait for a response before moving on to its next task. As applications grow in complexity, this tightly coupled architecture can cause issues. Event-driven architectures can increase scalability, fault tolerance, and developer velocity by decoupling components of your application.

For more serverless content, go to serverlessland.com.

AWS Week in Review – May 9, 2022

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-9-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Another week starts, and here’s a collection of the most significant AWS news from the previous seven days. This week is also the one-year anniversary of CloudFront Functions. It’s exciting to see what customers have built during this first year.

Last Week’s Launches
Here are some launches that caught my attention last week:

Amazon RDS supports PostgreSQL 14 with three levels of cascaded read replicas – That’s 5 replicas per instance, supporting a maximum of 155 read replicas per source instance with up to 30X more read capacity. You can now build a more robust disaster recovery architecture with the capability to create Single-AZ or Multi-AZ cascaded read replica DB instances in the same Region or across Regions.

Amazon RDS on AWS Outposts storage auto scaling – AWS Outposts extends AWS infrastructure, services, APIs, and tools to virtually any data center. With Amazon RDS on AWS Outposts, you can deploy managed DB instances in your on-premises environments. Now, you can turn on storage auto scaling when you create or modify DB instances by selecting a checkbox and specifying the maximum database storage size.

Amazon CodeGuru Reviewer suppression of files and folders in code reviews – With CodeGuru Reviewer, you can use automated reasoning and machine learning to detect potential code defects that are difficult to find and get suggestions for improvements. Now, you can prevent CodeGuru Reviewer from generating unwanted findings on certain files like test files, autogenerated files, or files that have not been recently updated.

Amazon EKS console now supports all standard Kubernetes resources to simplify cluster management – To make it easy to visualize and troubleshoot your applications, you can now use the console to see all standard Kubernetes API resource types (such as service resources, configuration and storage resources, authorization resources, policy resources, and more) running on your Amazon EKS cluster. More info in the blog post Introducing Kubernetes Resource View in Amazon EKS console.

AWS AppConfig feature flag Lambda Extension support for Arm/Graviton2 processors – Using AWS AppConfig, you can create feature flags or other dynamic configuration and safely deploy updates. The AWS AppConfig Lambda Extension allows you to access this feature flag and dynamic configuration data in your Lambda functions. You can now use the AWS AppConfig Lambda Extension from Lambda functions using the Arm/Graviton2 architecture.

AWS Serverless Application Model (SAM) CLI now supports enabling AWS X-Ray tracing – With the AWS SAM CLI you can initialize, build, package, test locally and in the cloud, and deploy serverless applications. With AWS X-Ray, you have an end-to-end view of requests as they travel through your application, making them easier to monitor and troubleshoot. Now, you can enable tracing by simply adding a flag to the sam init command.

Amazon Kinesis Video Streams image extraction – With Amazon Kinesis Video Streams you can capture, process, and store media streams. Now, you can also request images via API calls or configure automatic image generation based on metadata tags in ingested video. For example, you can use this to generate thumbnails for playback applications or to have more data for your machine learning pipelines.

AWS GameKit supports Android, iOS, and macOS games developed with Unreal Engine – With AWS GameKit, you can build AWS-powered game features directly from the Unreal Editor with just a few clicks. Now, the AWS GameKit plugin for Unreal Engine supports building games for the Win64, macOS, Android, and iOS platforms.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates you might have missed:

🎂 One-year anniversary of CloudFront Functions – I can’t believe it’s been one year since we launched CloudFront Functions. Now, we have tens of thousands of developers actively using CloudFront Functions, with trillions of invocations per month. You can use CloudFront Functions for HTTP header manipulation, URL rewrites and redirects, cache key manipulations/normalization, access authorization, and more. See some examples in this repo. Let’s see what customers built with CloudFront Functions:

  • CloudFront Functions enables Formula 1 to authenticate users with more than 500K requests per second. The solution uses CloudFront Functions to evaluate whether users have access to view the race livestream by validating a token in the request.
  • Cloudinary is a media management company that helps its customers deliver content such as videos and images to users worldwide. For them, Lambda@Edge remains an excellent solution for applications that require heavy compute operations, but lightweight operations that require high scalability can now be run using CloudFront Functions. With CloudFront Functions, Cloudinary and its customers are seeing significantly increased performance. For example, one of Cloudinary’s customers began using CloudFront Functions, and in about two weeks it was seeing 20–30 percent better response times. The customer also estimates that they will see 75 percent cost savings.
  • Based in Japan, DigitalCube is a web hosting provider for WordPress websites. Previously, DigitalCube spent several hours completing each of its update deployments. Now, they can deploy updates across thousands of distributions quickly. Using CloudFront Functions, they’ve reduced update deployment times from 4 hours to 2 minutes. In addition, faster updates and less maintenance work result in better quality throughout DigitalCube’s offerings. It’s now easier for them to test on AWS because they can run tests that affect thousands of distributions without having to scale internally or introduce downtime.
  • Amazon.com is using CloudFront Functions to change the way it delivers static assets to customers globally. CloudFront Functions allows them to experiment with hyper-personalization at scale and optimal latency performance. They have been working closely with the CloudFront team during product development, and they like how it is easy to create, test, and deploy custom code and implement business logic at the edge.

AWS open-source news and updates – A newsletter curated by my colleague Ricardo to bring you the latest open-source projects, posts, events, and more. Read the latest edition here.

Reduce log-storage costs by automating retention settings in Amazon CloudWatch – By default, CloudWatch Logs stores your log data indefinitely. This blog post shows how you can reduce log-storage costs by establishing a log-retention policy and applying it across all of your log groups.

Observability for AWS App Runner VPC networking – With X-Ray support in App Runner, you can quickly deploy web applications and APIs at any scale and add tracing without having to manage sidecars or agents. Here’s an example of how you can instrument your applications with the AWS Distro for OpenTelemetry (ADOT).

Upcoming AWS Events
It’s AWS Summits season and here are some virtual and in-person events that might be close to you:

You can now register for re:MARS to get fresh ideas on topics such as machine learning, automation, robotics, and space. The conference will be in person in Las Vegas, June 21–24.

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

AWS Week in Review – May 2, 2022

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-2-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Wow, May already! Here in the Pacific Northwest, spring is in full bloom and nature has emerged completely from her winter slumbers. It feels that way here at AWS, too, with a burst of new releases and updates and our in-person summits and other events now in full flow. Two weeks ago, we had the San Francisco summit; last week, we held the London summit and also our .NET Enterprise Developer Day virtual event in EMEA. This week we have the Madrid summit, with more summits and events to come in the weeks ahead. Be sure to check the events section at the end of this post for a summary and registration links.

Last week’s launches
Here are some of the launches and updates last week that caught my eye:

If you’re looking to reduce or eliminate the operational overhead of managing your Apache Kafka clusters, then the general availability of Amazon Managed Streaming for Apache Kafka (MSK) Serverless will be of interest. Starting with the original release of Amazon MSK in 2019, the work needed to set up, scale, and manage Apache Kafka has been reduced, requiring just minutes to create a cluster. With Amazon MSK Serverless, the provisioning, scaling, and management of the required resources is automated, eliminating the undifferentiated heavy lifting. As my colleague Marcia notes in her blog post, Amazon MSK Serverless is a perfect solution when getting started with a new Apache Kafka workload where you don’t know how much capacity you will need, or when your applications produce unpredictable or highly variable throughput and you don’t want to pay for idle capacity.

Another week, another set of Amazon Elastic Compute Cloud (Amazon EC2) instances! This time around, it’s new storage-optimized I4i instances based on the latest generation Intel Xeon Scalable (Ice Lake) Processors. These new instances are ideal for workloads that need minimal latency, and fast access to data held on local storage. Examples of these workloads include transactional databases such as MySQL, Oracle DB, and Microsoft SQL Server, as well as NoSQL databases including MongoDB, Couchbase, Aerospike, and Redis. Additionally, workloads that benefit from very high compute performance per TB of storage (for example, data analytics and search engines) are also an ideal target for these instance types, which offer up to 30 TB of AWS Nitro SSD storage.

Deploying AWS compute and storage services within telecommunications providers’ data centers, at the edge of the 5G networks, opens up interesting new possibilities for applications requiring end-to-end low latency (for example, delivery of high-resolution and high-fidelity live video streaming, and improved augmented/virtual reality (AR/VR) experiences). The first AWS Wavelength deployments started in the US in 2020, and have expanded to additional countries since. This week we announced the opening of the first Canadian AWS Wavelength zone, in Toronto.

Other AWS News
Some other launches and news items you may have missed:

Amazon Relational Database Service (RDS) had a busy week. I don’t have room to list them all, so below is just a subset of updates!

  • The addition of IPv6 support enables customers to simplify their networking stack. The increase in address space offered by IPv6 removes the need to manage overlapping address spaces in your Amazon Virtual Private Clouds (VPCs). IPv6 addressing can be enabled on both new and existing RDS instances.
  • Customers in the Asia Pacific (Sydney) and Asia Pacific (Singapore) Regions now have the option to use Multi-AZ deployments to provide enhanced availability and durability for Amazon RDS DB instances, offering one primary and two readable standby database instances spanning three Availability Zones (AZs). These deployments benefit from up to 2x faster transaction commit latency, and automated failovers typically complete in under 35 seconds.
  • Amazon RDS PostgreSQL users can now choose from General-Purpose M6i and Memory-Optimized R6i instance types. Both of these sixth-generation instance types are AWS Nitro System-based, delivering practically all of the compute and memory resources of the host hardware to your instances.
  • Applications using the RDS Data API can now elect to receive SQL results as a simplified JSON string, making it easier to deserialize results to an object. Previously, the API returned a JSON string as an array of data type and value pairs, which required developers to write custom code to parse the response and extract the values in order to translate the JSON string into an object. Applications that use the API to receive the previous JSON format are still supported and will continue to work unchanged. (A short sketch of the new format in use follows this list.)

Applications using Amazon Interactive Video Service (IVS), offering low-latency interactive video experiences, can now add a livestream chat feature, complete with built-in moderation, to help foster community participation in livestreams using Q&A discussions. The new chat support provides chat room resource management and a messaging API for sending, receiving, and moderating chat messages.

Amazon Polly now offers a new Neural Text-to-Speech (TTS) voice, Vitória, for Brazilian Portuguese. The original Vitória voice, dating back to 2016, used standard technology. The new voice offers a more natural-sounding rhythm, intonation, and sound articulation. In addition to Vitória, Polly also offers a second Brazilian Portuguese neural voice, Camila.

Finally, if you’re a .NET developer who’s modernizing .NET Framework applications to run in the cloud, then the announcement that the open-source CoreWCF project has reached its 1.0 release milestone may be of interest. AWS is a major contributor to the project, a port of Windows Communication Foundation (WCF), to run on modern cross-platform .NET versions (.NET Core 3.1, or .NET 5 or higher). This project benefits all .NET developers working on WCF applications, not just those on AWS. You can read more about the project in my blog post from last year, where I spoke with one of the contributing AWS developers. Congratulations to all concerned on reaching the 1.0 milestone!

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Upcoming AWS Events
As I mentioned earlier, the AWS Summits are in full flow, with some virtual and in-person events in the very near future you may want to check out:

I’m also happy to share that I’ll be joining the AWS on Air crew at AWS Summit Washington, DC. This in-person event is coming up May 23–25. Be sure to tune in to the livestream for all the latest news from the event, and if you’re there in person feel free to come say hi!

Registration is also now open for re:MARS, our conference for topics related to machine learning, automation, robotics, and space. The conference will be in-person in Las Vegas, June 21–24.

That’s all the news I have room for this week — check back next Monday for another week in review!

— Steve

Working with events and the Amazon EventBridge schema registry

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/working-with-events-and-amazon-eventbridge-schema-registry/

Event-driven architecture, at its core, is driven by producers creating events and subscribers being made aware of those events and acting upon them. An event is a data representation of something that happened elsewhere in the application or from an outside producer. When building event-driven applications, it is critical to determine what events exist in the application, who produces them, and who subscribes and reacts to them.

The first step in identifying these events is to work through the process of event discovery. In this process, you decide the events that the event source produces, and what parts of the application must know about those events. Event schemas describe the structure of an event and the fields it includes. If the event’s contents match the event target’s requirements, the service sends the event to the target. If you have an existing application and want event schemas discovered automatically, you can enable EventBridge schema discovery. If you are building a new application, you can conduct an event discovery exercise.

An event bus connects the event to the subscriber. The event bus at AWS is Amazon EventBridge. Use EventBridge to choreograph interactions between event sources and event targets.

In this post, you learn how to perform an event discovery exercise with your team. It shows how to create a schema registry with EventBridge, and how to represent the event as an object in your code to use in your application.

Event discovery

In the event discovery phase, all business stakeholders of an application come together and write down all the possible events that can happen. For example, possible events for an ecommerce application include: Account Created, Item added to cart, Order Placed. Write events in Noun + Past Tense Verb format.

Events
Account Created
Item Added
Order Placed

Notice that events are not technical or focused on implementation; rather they are real-world things that happened in your system. This is because everyone, from developers to product owners, must understand them. In the event discovery phase, events act as your business requirements. Later, developers then translate those requirements into code.

Once you have the events laid out, you decide who is interested in each event. For example, consider the events listed for the ecommerce application: Account Created, Item Added to cart, and Order Placed.

Events            Subscribers
Account Created   Marketing team, Security team
Item Added        Inventory team
Order Placed      Fulfillment team

Whenever a user creates an account, the marketing team subscribes to that event to send the account holder promotions. The security team encrypts the account holder’s user name and password and stores the credentials in a database. Whenever a user places an order, the system sends an email notification with the order details to stakeholders. The system also sends a message to the fulfillment team to start packing and shipping the order. This approach decouples the interactions between all of these services. This decoupling is beneficial because it increases developer independence by reducing the dependencies on other teams to write integrations.

Once the initial event schema planning is complete, you want to ensure that developers can continue to build new features without needing to coordinate closely with other teams on event schemas. For this reason, EventBridge’s schema registry and automated schema discovery can help developers quickly build new features based on their application’s events.

You use EventBridge schema registries to search for, find, and track different schemas generated by your application. You can also automatically find schemas with the automated schema registry.

EventBridge schema registry and discovery

Applications can have many different types of events: events generated from AWS services, third-party SaaS applications, and your custom applications.

With so many event sources, it can be challenging to know what to expect when consuming events. A schema represents the structure of an event. It describes what happened in the event, where the event came from, and the timestamp. The event schema is important for developers as it shows what data is contained in the event and allows them to write code based on that data.

For example, an Order Placed event might always contain a list of items in the order as an array, and a user ID as an integer. EventBridge helps automate the manual process of finding and documenting schemas. There are two capabilities to highlight: Schema registry and schema discovery.

A schema registry is a repository that stores a collection of schemas. You can use a schema registry to search for, find, and track different schemas used and generated by your application. AWS automatically stores schemas for all AWS sources for EventBridge in your schema registry. SaaS partner and custom schemas can be generated and added to the registry using the schema discovery feature. A schema registry enables you to use events as objects in your code more easily.

Adding an event to the EventBridge schema registry

In this tutorial, you create an Account Created event, which includes a user’s name and email address.

  1. Navigate to the Amazon EventBridge console and choose Schemas from the left panel. There are three types of schemas represented in the tabs: AWS event schema registry, discovered schema registry, and custom schema registry. When you choose AWS event schema registry, you can search for any AWS service or event that is supported by EventBridge. There, you can view the schema for that event.

  2. To create a custom schema registry for your application, navigate to the custom schema registry tab and choose Create registry.
  3. Enter a name for the registry and then choose Create.
  4. There are currently no schemas in the registry. Choose Create custom schema to create one.
  5. Choose your registry as the destination and call the schema “user”. You can choose to load the schema template using an OpenAPI format from the Load Template option. You can then manually enter data for each of the fields.
  6. Alternatively, you can have the service discover the schema from JSON. Remember that events are written in JSON. Choose the Discover from JSON tab and enter the following code:{"id": 1, "name": "Talia Nassi", "emailAddress": "[email protected]"}
  7. Choose Discover schema.
  8. EventBridge extrapolates the schema from this information. The schema shows that the ID is a number, the name is a string, and the email address is a string. Choose Create to create the schema.
  9. When you choose your schema from the schema registry, you can see the structure of the event you just created. (If you prefer to script these steps, see the sketch after this list.)

Representing events as objects in your code with code bindings

Once a schema is added to the registry, you can download a code binding, which allows you to represent the event as an object in your code. You can take advantage of IDE features such as validation and auto-complete. Code bindings are available for Java, Python, or TypeScript programming languages. You can download bindings from the Amazon EventBridge Console, or directly from your IDE with the AWS Toolkit plugin for IntelliJ and Visual Studio Code.

Choose the programming language you prefer, then choose Download. This downloads the code binding to your local machine.

You can also choose to download code bindings directly to your IDE with the AWS Toolkit. This tutorial uses VS Code but you can also use IntelliJ.

  1. Ensure you have VS Code installed.
  2. Navigate to the VS Code marketplace and search for AWS and install the AWS Toolkit. You may have to restart VS Code.
  3. Choose a profile to connect to AWS. Set the Region to the same Region that you created the schema in. The AWS Explorer icon appears on the left panel once the toolkit is installed.
  4. Choose Schemas from the left panel, then choose your registry, myRegistry. Open user by right-clicking it and choosing View Schema.
  5. You can now use this event object in your code.

Conclusion

In this post, you learn about event discovery, schema registry, and schema discovery. Event discovery is essential when creating event-driven applications because it allows the team to see which events are created by your application, and who needs to subscribe to those events.

Events have specific structures, called schemas. Your schema registry includes all of the schemas for your events. You can use the schema registry to search for events produced by other teams, which can make development faster. You learn how to create a custom schema registry, and how to download code bindings to use events in your code.

For more information, visit Serverless Land.

Building an event-driven application with Amazon EventBridge

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/building-an-event-driven-application-with-amazon-eventbridge/

In event-driven architecture, services interact with each other through events. An event is something that happened in your application (for example, an item was put into a cart, a new order was placed). Events are JSON objects that tell you information about something that happened in your application. In event-driven architecture, each component of the application raises an event whenever anything changes. Other components listen and decide what to do with it and how they would like to react.

When you build applications with event-driven architecture, you decouple your event sources and event targets. This can enable teams to act more independently, because your services are loosely coupled. When you add new features to your applications, you raise new events and then decide on the event source and event target. The event source is what emits the event, and the event target is what subscribes to or receives the event. Decoupling event sources and event targets can greatly speed up development time, and it can simplify making changes to your application.

Decoupling your application can allow for more seamless cross-team collaboration. For example, let’s say you are a developer at an ecommerce company and you are building a serverless ecommerce application. Your team is in charge of the account creation and authentication process. You build the login workflow, and raise an event when a new user creates an account.

When the event is raised, other teams can be alerted. The marketing team can listen for the Account Created event and act on it (for example, send promotional emails). In this decoupled architecture, event producers and consumers don’t have to know about each other. They only have to listen to events and act accordingly when they are interested in an event. This can speed up development by reducing the complexity caused by building new features.

In AWS, events are choreographed through Amazon EventBridge rules. A rule matches incoming events from an event source and sends them to event targets for processing.


EventBridge accepts events from many different event sources, including over 200 AWS services, custom events from your Lambda functions or applications, and third-party SaaS applications. You specify an action to take when EventBridge receives an event that matches the event pattern in the rule. When an event matches, Amazon EventBridge sends the event to the specified target and triggers the action defined in the rule.

To route events from these sources to the correct target, the events must be placed on a corresponding event bus. There are three types of event buses. The first type is the default bus, which is always available in every account, and it’s where AWS events are routed to. The second type is a custom event bus. You can create custom event buses for your own applications to meet your business needs. Lastly, you can also create SaaS event buses, which are created when you configure SaaS applications as an event source.

There are many potential event targets. Event targets are what the event bus routes to when a corresponding event happens. Targets include AWS Lambda, Amazon Kinesis, AWS Step Functions, Amazon API Gateway, and even event buses in other accounts. This flexible design allows you to create a wide variety of integration patterns based on your specific needs.

Configuring events with Amazon EventBridge

This tutorial sends an event from Amazon S3 (the event source) to AWS Lambda (the event target) using an event rule.

In this tutorial, you learn how to configure events with Amazon EventBridge by deploying an AWS Serverless Application Model template. The AWS Serverless Application Model (AWS SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. AWS SAM is an extension of AWS CloudFormation, which is the AWS infrastructure as code tool. You define resources using CloudFormation in your AWS SAM template and use the full suite of resources, intrinsic functions, and other template features that are available in AWS CloudFormation.

First, you upload images to an S3 bucket. This raises an event, which invokes a Lambda function, which resizes the image and places it in a different S3 bucket.

Prerequisites:

  1. AWS SAM CLI (If you use AWS Cloud9, this is installed for you)

To configure events with Amazon EventBridge:

  1. Navigate to the Serverlessland Patterns Collection and choose the pattern Amazon S3 to Amazon EventBridge to AWS Lambda. This AWS SAM template deploys an S3 bucket, a Lambda function, an EventBridge rule, and the IAM resources required to run the application.
  2. Copy and paste the cloning instructions in your terminal.
  3. Run sam deploy --guided to deploy the pattern.
  4. You see a success message.
  5. Navigate to the EventBridge console and choose Rules from the left panel. Then choose the rule that was created by AWS SAM (starting with sam-app).
    The event source is S3 and the rule is invoked when an image is put into the source bucket in S3. Next, notice that the event target is the Lambda function that you created from the AWS SAM template.
  6. Navigate to the S3 console and choose Buckets on the left panel. Then choose the bucket that was created for you (starting with sam-app). Choose the Properties tab, and note that the integration with EventBridge is on.
  7. From the Objects tab, choose Upload, and upload an image.

  8. Navigate to the Lambda console and choose your Lambda function (starting with sam-app). Select the Monitor tab, and choose View Logs in CloudWatch.
  9. You can see the event that triggered the Lambda function in the logs. (A minimal sketch of such a handler follows this list.)

Adding more event rules to your application

In the previous example, you added an EventBridge rule that routes events from S3 (the event source) to Lambda (the event target) using an event bus. Now, add another rule:

  1. From the EventBridge console, choose Rules, and then choose Create rule.
  2. Enter a name and description for the rule.
  3. Define the event pattern that is used to invoke the event targets. For Service provider, choose AWS, and for Service name choose S3. For Event type, choose Amazon S3 event notification, and in the event dropdown choose Object created. You are configuring the event source to be an object created in your S3 bucket.
  4. Select either the default AWS event bus or a custom event bus.
  5. Select the event target. In this example, configure an Amazon CloudWatch log group. Enter any name for the log group, which is created automatically.
  6. Choose Create.
  7. Upload an image to the S3 bucket, as shown in step 7 above.
  8. Navigate to the Amazon CloudWatch console and choose Log groups from the left panel. Choose the log group, and then choose a log stream.
  9. The event is logged to CloudWatch Logs.

Adding a second event rule did not change the event source’s behavior or affect other event targets.

Conclusion

This post is a brief introduction of event-driven architecture, and walks through a tutorial where you create an event-driven application with the Serverlessland Patterns Collection. You also add two different event rules to your event bus.

For more serverless learning resources, visit Serverless Land.

Getting Started with Event-Driven Architecture

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/getting-started-with-event-driven-architecture/

In modern application development, event-driven architecture is becoming more prominent because it can make building applications in the cloud easier. Event-driven architecture can allow you to decouple your services, which increases developer velocity, and can make it easier for you to debug applications. It also can help remove the bottleneck that occurs when features expand across different teams, which allows teams to progress more independently.

One way to think about how an application works is as a system that reacts to events from other places, like from within your application. In this approach, you focus on the system’s interaction with its surroundings as a transmission of events. The application receives and creates events. Inputs to the application and outputs from the application act as events. At its core, this is event-driven architecture.

API-driven architecture vs. event-driven architecture

Commands/APIs            Events
Synchronous              Asynchronous
Has an intent            It’s a fact
Directed to a target     Happened in the past
“CreateAccount”          “AccountCreated”
“AddProduct”             “ProductAdded”

A common way of making components of an application work together is through an API-driven, request-response architecture where you have requests and responses. For example, you query a list of orders from an Orders API, and the Orders API responds with a list of orders. This is an example of synchronous architecture. The system asking for the orders waits for the response. You cannot move on until the response comes back. In this approach, you send commands that are directed to a target (for example, “place this order” or “add this record to the database”).

[Figure: synchronous vs. asynchronous communication between services]

In a synchronous model, the client makes a request to Service A. Service A calls Service B, but then Service A waits for Service B to respond before it continues on and eventually responds to the client.

In an asynchronous, event-driven architecture, there is no response path. The service emits the event and then immediately moves on. The trade-off is that there is no direct channel for Service B to pass information back to Service A, beyond acknowledging that it received the event. But in many cases, you don't need that explicit coupling between the request and response channels.
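
To make the contrast concrete, here is a minimal sketch of the asynchronous side using Amazon EventBridge and boto3; the event source, detail-type, and payload below are hypothetical names, not a published schema:

import json

import boto3

events = boto3.client("events")

# Service A publishes the event and moves on; it never waits on Service B.
response = events.put_events(
    Entries=[{
        "Source": "com.example.orders",
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"orderId": "1234", "itemCount": 3}),
        "EventBusName": "default",
    }]
)

# The only acknowledgment is that EventBridge accepted the event.
assert response["FailedEntryCount"] == 0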

An event is something that happened. For example, a new account is created, or an item is dropped into an Amazon S3 bucket. Events are immutable, which means you cannot change them. Once an event happens, you cannot undo it. For example, if there is an event raised when an order is placed, there can be another event for an order being cancelled. Events can come from various places such as messaging systems or databases.

Events are JSON objects that tell you information about something that happened in your application. In event-driven architecture, events represent facts. Each component of the application raises an event whenever anything changes. Other components listen and decide what to do with it and how they would like to react.

[Figure: an example Amazon S3 "Object Created" event]

In the event above, S3 raises the event when you put the image into an Amazon S3 bucket. The event source is an S3 bucket named sam-app-sourcebucket. The object that is put into the bucket is called “brad.jpeg”.
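
As a sketch of how a consumer reacts to such an event, here is a minimal Lambda handler in Python. It assumes the EventBridge event shape that S3 publishes for Object Created notifications; the field paths below follow that schema:

def lambda_handler(event, context):
    # EventBridge delivers the S3 details under the "detail" key.
    bucket = event["detail"]["bucket"]["name"]  # e.g. "sam-app-sourcebucket"
    key = event["detail"]["object"]["key"]      # e.g. "brad.jpeg"
    print(f"New object uploaded: s3://{bucket}/{key}")
    return {"bucket": bucket, "key": key}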

Request-driven applications typically use directed commands to coordinate downstream functions to complete an activity, and they are often tightly coupled. This tight coupling makes it harder to determine where errors occur in your application. Event-driven applications create events that are observable by other services and systems. However, the event producer is unaware of which consumers, if any, are listening. Typically, these applications are loosely coupled.

Events are observable. Any service that is authorized can watch an event. Consider a coffee shop example where there is a barista, who makes coffee, and a pastry chef, who makes pastries. When a customer enters the coffee shop and orders a cup of coffee, the barista starts to make the coffee, and the pastry chef takes no action.

However, if a customer comes in to the coffee shop and orders a chocolate croissant, then the pastry chef starts making the chocolate croissant, and the barista takes no action. The pastry chef is only interested in orders relating to pastries and the barista is only interested in events relating to coffee.

In an ecommerce application, like Amazon.com, there are different departments that respond to different events. You can place orders through Whole Foods, Amazon Fresh, and Amazon.com. When you place an order with Amazon Fresh, the subscribers to that event take action and fulfill your order.


Event-driven architecture and command-driven architecture also differ in the ways that they store state. In a typical command-driven architecture, you have only one component store a particular piece of data, and other components ask that component for the data when needed.

In event-driven architecture, every component stores all the data it needs and listens to update events for that data. In command-driven architecture, the component that stores the data is responsible for updating it. In event-driven architecture, the component making a change only has to ensure that a new event is raised for the update.
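
As an illustration of that pattern, here is a sketch of a consumer that keeps its own copy of product data current by listening for update events. The table name, event source, and field names are hypothetical:

import boto3

# This component's own copy of the data it needs.
table = boto3.resource("dynamodb").Table("ProductsReplica")

def lambda_handler(event, context):
    # A hypothetical "ProductUpdated" event carries the new record in "detail".
    product = event["detail"]  # e.g. {"productId": "p-1", "stock": 12}
    # Upsert the local copy; no request back to the owning service is needed.
    table.put_item(Item=product)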

Benefits of using event-driven architecture

Decoupling event sources and event targets

Many applications are built as monoliths, where the components are tightly coupled and highly dependent on each other. This proves problematic when there are bugs and you are trying to pinpoint exactly which part of the application is failing. Decoupled architectures are composed of components or services that are loosely coupled. In an event-driven, decoupled architecture, you broadcast events without caring who responds to them. This saves time because events can be queued and forwarded whenever the receiver is ready to process them, and it allows you to build scalable, highly modifiable systems.

Decoupled applications enable teams to act more independently, which increases their velocity. For example, with an API-based integration, if my team wants to know about some change that happened in another team’s microservice, I have to ask that team to make an API call to my service. That means I have to deal with authentication, coordination with the other team over the structure of the API call, etc. This causes back and forth between teams, which slows down development time. With an event-driven application, you can subscribe to events sent from a microservice and the event router (for example, Amazon EventBridge) takes care of routing the event and handling authentication.

Decoupled applications also allow you to build new features faster. Adding new features or extending existing ones is simpler with event-driven architectures. This is because you only have to choose the event you need to trigger your new feature, and subscribe to it. There’s no need to modify any of your existing services to add new functionality.

Write less code

When you build applications using event-driven architecture, you often write less code because you only need to consider new events and which services subscribe to them. For example, if you are building new features for your application, all you have to do is consider the existing events and then add senders and receivers as necessary. In this way, you speed up development time because each functional unit is smaller and there is often less code.

Better extensibility

In the example above, you built a highly extensible application. Other teams can extend features and add functionality without impacting other microservices. By publishing events using EventBridge, this application integrates with existing systems, but also enables any future application to integrate as an event consumer. Producers of events have no knowledge of event consumers, which can help simplify the microservice logic.

Enhancing team collaboration

A common process to build applications is to work with your product managers and business stakeholders to gather requirements. Developers then translate those requirements into code. However, there may be a disconnect between the product requirements and the code. When you use events, everyone in the business understands the logic. You define the events in an application (for example, a customer adds an item to their shopping cart or a customer account is created) and that becomes your product requirements. Whenever that action happens, it produces an event, and whoever is interested can take action on that event.

For example, a marketing manager could be interested whenever a customer creates a new account. One way to choreograph this in event-driven architecture is to have a Marketing event bus that listens for the New Account event. Other teams may also be interested, such as the Analytics team, who subscribe to the same event. Each team or service can subscribe to the events that are relevant to it. Event-driven architecture is a great way for businesses to describe their business processes and represent them directly in the architecture.
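
One way to sketch that choreography with EventBridge and boto3 is a rule on a hypothetical Marketing event bus that matches New Account events; the bus name, source, and detail-type below are illustrative:

import json

import boto3

boto3.client("events").put_rule(
    Name="notify-marketing-on-new-account",
    EventBusName="Marketing",
    State="ENABLED",
    EventPattern=json.dumps({
        "source": ["com.example.accounts"],
        "detail-type": ["AccountCreated"],
    }),
)

The Analytics team could subscribe with its own rule and target for the same event, without any change to the service that creates accounts.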

Conclusion

This post introduces events, and then compares event-driven architecture to command-driven, request-response architecture. It also explains the benefits of event-driven architecture, including decoupling event sources and targets, writing less code, having better extensibility, and enhancing team collaboration.

For more serverless learning resources, visit Serverless Land.

Welcome to AWS Pi Day 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/welcome-to-aws-pi-day-2022/

We launched Amazon Simple Storage Service (Amazon S3) sixteen years ago today!

As I often told my audiences in the early days, I wanted them to think big thoughts and dream big dreams! Looking back, I think it is safe to say that the launch of S3 empowered them to do just that, and initiated a wave of innovation that continues to this day.

Bigger, Busier, and More Cost-Effective
Our customers count on Amazon S3 to provide them with reliable and highly durable object storage that scales to meet their needs, while growing more and more cost-effective over time. We’ve met those needs and many others; here are some new metrics that prove my point:

Object Storage – Amazon S3 now holds more than 200 trillion (2 × 10¹⁴) objects. That’s almost 29,000 objects for each resident of planet Earth. Counting at one object per second, it would take 6.342 million years to reach this number! According to Ethan Siegel, there are about 2 trillion galaxies in the visible Universe, so that’s 100 objects per galaxy! Shortly after the 2006 launch of S3, I was happy to announce the then-impressive metric of 800 million stored objects, so the object count has grown by a factor of 250,000 in less than 16 years.

Request Rate – Amazon S3 now averages over 100 million requests per second.

Cost Effective – Over time we have added multiple storage classes to S3 in order to optimize cost and performance for many different workloads. For example, AWS customers are making great use of Amazon S3 Intelligent Tiering (the only cloud storage class that delivers automatic storage cost savings when data access patterns change), and have saved more than $250 million in storage costs as compared to Amazon S3 Standard. When I first wrote about this storage class in 2018, I said:

In order to make it easier for you to take advantage of S3 without having to develop a deep understanding of your access patterns, we are launching a new storage class, S3 Intelligent-Tiering.

With the improved cost optimizations for small and short-lived objects and the archiving capabilities that we launched late last year, you can now use S3 Intelligent-Tiering as the default storage class for just about every workload, especially data lakes, analytics use cases, and new applications.
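
If you want new objects to land directly in S3 Intelligent-Tiering, one option is to set the storage class at upload time. Here is a minimal sketch with boto3; the bucket and key are placeholders:

import boto3

boto3.client("s3").put_object(
    Bucket="my-data-lake-bucket",
    Key="raw/2022/03/14/events.json",
    Body=b'{"example": true}',
    StorageClass="INTELLIGENT_TIERING",
)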

Customer Innovation
As you can see from the metrics above, our customers use S3 to store and protect vast amounts of data in support of an equally vast number of use cases and applications. Here are just a few of the ways that our customers are innovating:

NASCAR
After spending 15 years collecting video, image, and audio assets representing over 70 years of motor sports history, NASCAR built a media library that encompassed over 8,600 LTO 6 tapes and a few thousand LTO 4 tapes, with a growth rate of between 1.5 PB and 2 PB per year. Over the course of 18 months they migrated all of this content (a total of 15 PB) to AWS, making use of the Amazon S3 Standard, Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive storage classes. To learn more about how they migrated this massive and invaluable archive, read Modernizing NASCAR’s multi-PB media archive at speed with AWS Storage.

Electronic Arts
This game maker’s core telemetry systems handle tens of petabytes of data, tens of thousands of tables, and over 2 billion objects. As their games became more popular and the volume of data grew, they were facing challenges around data growth, cost management, retention, and data usage. In a series of updates, they moved archival data to Amazon S3 Glacier Deep Archive, implemented tag-driven retention management, and implemented Amazon S3 Intelligent-Tiering. They have reduced their costs and made their data assets more accessible; read Electronic Arts optimizes storage costs and operations using Amazon S3 Intelligent-Tiering and S3 Glacier to learn more.

NRGene / CRISPR-IL
This team came together to build a best-in-class gene-editing prediction platform. CRISPR (A Crack In Creation is a great introduction) is a very new and very precise way to edit genes and effect changes to an organism’s genetic makeup. The CRISPR-IL consortium is built around an iterative learning process that allows researchers to send results to a predictive engine that helps to shape the next round of experiments. As described in A gene-editing prediction engine with iterative learning cycles built on AWS, the team identified five key challenges and then used AWS to build GoGenome, a web service that performs predictions and delivers the results to users. GoGenome stores over 20 terabytes of raw sequencing data, and hundreds of millions of feature vectors, making use of Amazon S3 and other AWS storage services as the foundation of their data lake.

Some other cool recent S3 success stories include Liberty Mutual (How Liberty Mutual built a highly scalable and cost-effective document management solution), Discovery (Discovery Accelerates Innovation, Cuts Linear Playout Infrastructure Costs by 61% on AWS), and Pinterest (How Pinterest worked with AWS to create a new way to manage data access).

Join Us Online Today
In celebration of AWS Pi Day 2022, we have put together an entire day of educational sessions, live demos, and even a launch or two. We will also take a look at some of the newest S3 launches, including Amazon S3 Glacier Instant Retrieval, Amazon S3 Batch Replication, and AWS Backup Support for Amazon S3.

Designed for system administrators, engineers, developers, and architects, our sessions will bring you the latest and greatest information on security, backup, archiving, certification, and more. Join us at 9:30 AM PT on Twitch for Kevin Miller’s kickoff keynote, and stick around for the entire day to learn a lot more about how you can put Amazon S3 to use in your applications. See you there!

Jeff;

3 Reasons to Join Rapid7’s Cloud Security Summit

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/03/09/3-reasons-to-join-rapid7s-cloud-security-summit/

3 Reasons to Join Rapid7’s Cloud Security Summit

The world of the cloud never stops moving — so neither can cloud security. In the face of rapidly evolving technology and a constantly changing threat landscape, keeping up with all the latest developments, trends, and best practices in this emerging practice is more vital than ever.

Enter Rapid7’s third annual Cloud Security Summit, which we’ll be hosting this year on Tuesday, March 29. This one-day virtual event is dedicated to cloud security best practices and will feature industry experts from Rapid7, as well as Amazon Web Services (AWS), Snyk, and more.

While the event is fully virtual and free, we know that the time commitment can be the most challenging part of attending a multi-hour event during the workday. With that in mind, we’ve compiled a short list of the top reasons you’ll definitely want to register, clear your calendar, and attend this event.

Reason 1: Get a sneak peek at some original cloud security research

During the opening session of this year’s summit, two members of Rapid7’s award-winning security research team will be presenting some never-before-published research on the current state of cloud security operations, the most common misconfigurations in 2021, Log4j, and more.

Along with being genuinely interesting in its own right, this research will give you insights and benchmarks to help you evaluate your own cloud security program and prioritize the most commonly exploited risks in your organization’s environment.

Reason 2: Learn from industry experts, and get CPE credits

Along with a handful of team members from Rapid7’s own cloud security practice, this year’s summit includes a host of subject matter experts from across the industry. You can look forward to hearing from Merritt Baer, Principal in the Office of the CISO at Amazon Web Services; Anthony Seto, Field Director for Cloud Native Application Security at Snyk; Keith Hoodlet, Code Security Architect at GitHub; and more. And that doesn’t even include the InsightCloudSec customers who will be joining to share their expert perspectives as well.

While learning and knowledge gain are clearly the most important aspects here, it’s always great to have something extra to show for the time you devoted to an event like this. To help make the case to your management that this event is more than worth the time you’ll put in, we’ve arranged for all attendees to earn 3.5 continuing professional education (CPE) credits to go toward maintaining or upgrading security certifications, such as CISSP, CISM, and more.

Reason 3: Be the first to hear exciting Rapid7 announcements

Last but not least, while the event is primarily focused on cloud security research, strategies, and thought leadership, we are also planning to pepper in some exciting news related to InsightCloudSec, Rapid7’s cloud-native security platform.

We’ll end the day with a demonstration of the product, so you can see some of our newest capabilities in action. Whether you’re already an InsightCloudSec customer, or considering a new solution for uncovering misconfigurations, automating cloud security workflows, shifting left, and more, this is the best way to get a live look at one of the top solutions available in the market today.  

So what are you waiting for? Come join us, and let’s dive into the latest and greatest in cloud security together.


Celebrating 10 years of Raspberry Pi with a new museum exhibition

Post Syndicated from original https://www.raspberrypi.org/blog/10-years-raspberry-pi-national-museum-of-computing-exhibition/

Ten years ago, Raspberry Pi started shipping its first computers in order to inspire young people to reimagine the role of technology in their lives. What started with a low-cost, high-performance computer has grown into a movement of millions of people of all ages and backgrounds.

[Image: A group of children and an adult have fun using Raspberry Pi hardware.]

Today, Raspberry Pi is the UK’s best-selling computer, and the Raspberry Pi Foundation is one of the world’s leading educational non-profits. Raspberry Pi computers make technology accessible to people and businesses all over the world. They are used everywhere from homes and schools to factories, offices, and shops.

[Image: Several models of the Raspberry Pi computer.]

Visit the history of Raspberry Pi

To help celebrate this 10-year milestone, we’ve partnered with The National Museum of Computing, located at the historic Bletchley Park, to open a new temporary exhibit dedicated to telling the story of the Raspberry Pi computer, the Raspberry Pi Foundation, and the global community of innovators, learners, and educators we’re a part of.

[Image: A young person programs a robot buggy built with LEGO bricks and the Raspberry Pi Build HAT.]

In the exhibit, you’ll be able to get hands-on with Raspberry Pi computers, hear the story of how Raspberry Pi came to be, and see a few of the many ways that Raspberry Pi has made an impact on the world.

Join us for the exhibition opening

We know that not everyone will be able to experience the exhibit in person, and so we’ll live-stream the grand opening this Saturday 5 March 2022 at 11:15am GMT. Keep an eye on our social media channels for the link to watch the video feed. If you’re able to make it to the National Museum of Computing on Saturday, tickets are available to purchase.

We’re delighted to celebrate 10 years with all of you, and we’re excited about the next 10 years of Raspberry Pi.

The post Celebrating 10 years of Raspberry Pi with a new museum exhibition appeared first on Raspberry Pi.