All posts by Luca Mezzalira

Let’s Architect! Optimizing the cost of your architecture

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-optimizing-the-cost-of-your-architecture/

Written in collaboration with Ben Moses, AWS Senior Solutions Architect, and Michael Holtby, AWS Senior Manager Solutions Architecture


Designing an architecture is not a simple task. There are many dimensions and characteristics of a solution to consider, such as availability, performance, and resilience.

In this Let’s Architect!, we explore cost optimization and ideas on how to rethink your AWS workloads, providing suggestions that span from compute to data transfer.

Migrating AWS Lambda functions to Arm-based AWS Graviton2 processors

AWS Graviton processors are custom silicon from Amazon’s Annapurna Labs. Based on the Arm processor architecture, they are optimized for performance and cost, which allows customers to get up to 34% better price performance.

This AWS Compute Blog post discusses some of the differences between the x86 and Arm architectures, as well as methods for developing Lambda functions on Graviton2, including performance benchmarking.

Many serverless workloads can benefit from Graviton2, especially when they are not using a library that requires an x86 architecture to run.
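As a rough sketch of the migration step itself, switching an existing function to Graviton2 is a single configuration change. The function name below is a placeholder; benchmark first and confirm that any native dependencies ship arm64 builds:

```python
import boto3

lambda_client = boto3.client("lambda")

# Switch an existing function to the arm64 (Graviton2) architecture.
# "my-function" is a placeholder name.
response = lambda_client.update_function_configuration(
    FunctionName="my-function",
    Architectures=["arm64"],
)
print(response["Architectures"])  # ['arm64']
```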

Take me to this Compute post!

Choosing Graviton2 for AWS Lambda function in the AWS console

Key considerations in moving to Graviton2 for Amazon RDS and Amazon Aurora databases

Amazon Relational Database Service (Amazon RDS) and Amazon Aurora support a multitude of instance types to scale database workloads based on needs. Both services now support Arm-based AWS Graviton2 instances, which provide up to 52% price/performance improvement for Amazon RDS open-source databases, depending on database engine, version, and workload. They also provide up to 35% price/performance improvement for Amazon Aurora, depending on database size.

This AWS Database Blog post showcases strategies for updating RDS DB instances to make use of Graviton2 with minimal changes.
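As a minimal sketch of such an update (the instance identifier is a placeholder; check engine and version compatibility first), the change is one ModifyDBInstance call:

```python
import boto3

rds = boto3.client("rds")

# Move a DB instance to a Graviton2-based class (the "g" in r6g).
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",  # placeholder identifier
    DBInstanceClass="db.r6g.large",      # Graviton2-based instance class
    ApplyImmediately=False,              # wait for the next maintenance window
)
```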

Take me to this Database post!

Choose your instance class that leverages Graviton2, such as db.r6g.large (the “g” stands for Graviton2)

Overview of Data Transfer Costs for Common Architectures

Data transfer charges are often overlooked while architecting an AWS solution. Considering data transfer charges while making architectural decisions can save costs. This AWS Architecture Blog post describes the different flows of traffic within a typical cloud architecture, showing where costs do and do not apply. For areas where cost applies, it shows best-practice strategies to minimize these expenses while retaining a healthy security posture.
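One practical way to make these charges visible is to query AWS Cost Explorer grouped by usage type. The sketch below assumes that data transfer usage types contain the substring "DataTransfer", which holds for common cases, but verify the usage type names in your own account:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Group one month of unblended cost by usage type, then surface the
# data-transfer line items.
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-11-01", "End": "2022-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for group in result["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    if "DataTransfer" in usage_type:  # assumption: see lead-in above
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{usage_type}: ${amount:.2f}")
```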

Take me to this Architecture post!

Accessing AWS services in different Regions

Improve cost visibility and re-architect for cost optimization

This Architecture Blog post is a collection of best practices for cost management in AWS, including the relevant tools; plus, it is part of a series on cost optimization using an e-commerce example.

AWS Cost Explorer is used to first identify opportunities for optimizations, including data transfer, storage in Amazon Simple Storage Service and Amazon Elastic Block Store, idle resources, and the use of Graviton2 (Amazon’s Arm-based custom silicon). The post discusses establishing a FinOps culture and making use of Service Control Policies (SCPs) to control ongoing costs and guide deployment decisions, such as instance-type selection.
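As an illustration of the SCP idea (the policy name and the instance-type allow-list are invented for this sketch, not taken from the post), a policy like this denies EC2 launches outside an approved set of instance types:

```python
import json

import boto3

org = boto3.client("organizations")

# A minimal cost-control SCP: deny launching EC2 instances whose type is
# not in an approved list. Types and names here are illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotEquals": {
                "ec2:InstanceType": ["t4g.micro", "t4g.small", "m6g.large"]
            }
        },
    }],
}

org.create_policy(
    Content=json.dumps(scp),
    Description="Restrict EC2 launches to approved instance types",
    Name="approved-instance-types",
    Type="SERVICE_CONTROL_POLICY",
)
```

After creating the policy, you would attach it to an organizational unit with attach_policy; denying at the OU level is what makes the guardrail apply to every account underneath.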

Take me to this Architecture post!

Applying SCPs on different environments for cost control

See you next time!

Thanks for joining us to discuss optimizing costs while architecting! This is the last Let’s Architect! post of 2022. We will see you again in 2023, when we explore even more architecture topics together.

Wishing you a happy holiday season and joyous new year!

Can’t get enough of Let’s Architect!?

Visit the Let’s Architect! page of the AWS Architecture Blog for access to the whole series.

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting with Amazon DynamoDB

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-with-amazon-dynamodb/

NoSQL databases are an essential part of today’s technology industry. Why are we talking about them? NoSQL databases often give developers control over the structure of their data, fit big data scenarios well, and offer fast performance.

In this issue of Let’s Architect!, we explore Amazon DynamoDB capabilities and potential solutions to apply in your architectures. A key strength of DynamoDB is the capability of operating at scale globally; for instance, multiple products built by Amazon are powered by DynamoDB. During Prime Day 2022, the service also maintained high availability while delivering single-digit millisecond responses, peaking at 105.2 million requests per second. Let’s start!

Data modeling with DynamoDB

Working with a new database technology means understanding exactly how it works and the best design practices for taking full advantage of its features.

This video discusses the key principles for modeling DynamoDB tables, explores practical patterns to use while defining your data models, and explains how data modeling for NoSQL databases (like DynamoDB) differs from modeling for traditional relational databases.

With this video, you can learn about the main components of DynamoDB, the design considerations that led to its creation, and best practices for efficiently using primary keys, secondary keys, and indexes. To learn more, peruse the original paper, Dynamo: Amazon’s Highly Available Key-value Store.

Amazon DynamoDB uses partitioning to provide horizontal scalability

Single-table vs. multi-table in Amazon DynamoDB

When considering single-table versus multi-table design in DynamoDB, it all comes down to your application’s needs: avoid naïvely lifting and shifting your relational data model into DynamoDB tables. In this post, you will discover use cases for single-table compared with multi-table designs, plus learn certain data-modeling principles for DynamoDB.
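To make the single-table idea concrete, here is a minimal sketch (table and key names are hypothetical) in which a customer profile and its orders share a partition key, so one query returns the “materialized join”:

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ecommerce")  # hypothetical table keyed on PK and SK

# Store a customer profile and their orders under the same partition key.
table.put_item(Item={"PK": "CUSTOMER#123", "SK": "PROFILE", "name": "Ann"})
table.put_item(Item={"PK": "CUSTOMER#123", "SK": "ORDER#2022-12-01", "total": 42})
table.put_item(Item={"PK": "CUSTOMER#123", "SK": "ORDER#2022-12-15", "total": 17})

# One query retrieves the profile plus all orders, with no join needed.
response = table.query(KeyConditionExpression=Key("PK").eq("CUSTOMER#123"))
for item in response["Items"]:
    print(item["SK"])
```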

Use a single-table design to provide materialized joins in Amazon DynamoDB

Optimizing costs on DynamoDB tables

Infrastructure cost is an important dimension for every customer. Regardless of your role inside an organization, you should monitor opportunities for optimizing costs when possible. For this reason, we have created a guide on DynamoDB table cost optimization that provides several suggestions for reducing your bill at the end of the month.

Build resilient applications with Amazon DynamoDB global tables: Part 1

When you operate global systems that are spread across multiple AWS Regions, dealing with data replication and writes across Regions can be a challenge. DynamoDB global tables help by providing the performance of DynamoDB across multiple Regions with data synchronization and a multi-active setup, where each replica can be used for both writing and reading data.

Another use case for global tables is resilient applications with the lowest possible recovery time objective (RTO) and recovery point objective (RPO). In this blog series, we show you how to approach such a scenario.
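As a sketch of how a replica is added to an existing table (the table name and Regions are placeholders; this uses the current, 2019.11.21 version of global tables):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica of an existing table in another Region. Writes to either
# Region are replicated to the other.
dynamodb.update_table(
    TableName="orders",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```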

Amazon DynamoDB active-active architecture

See you next time!

Thanks for joining our discussion on DynamoDB. See you in a few weeks, when we explore cost optimization!

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting in health tech

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-in-health-tech/

Healthcare technology, commonly referred to as “health tech,” is the use of technologies developed for the purpose of improving any and all aspects of the healthcare system; for example, IT tools or software designed to boost hospital and administrative productivity, give insights into new and existing treatments, or improve the overall quality of care.

Also known as “digital health”, health tech uses databases, applications, mobile devices, and wearables to facilitate the delivery, payment, and/or consumption of healthcare. The increased accessibility to these technologies can further increase the development and launch of additional healthcare products.

In this post, we explore how to build and manage health tech architectures using Amazon Web Services (AWS).

HIPAA Reference Architecture on the AWS Cloud

This Quick Start provides guidance for deploying a U.S. Health Insurance Portability and Accountability Act (HIPAA) architecture on the AWS Cloud. Specifically, it aims to help those in the healthcare industry build and implement HIPAA-ready environments that fit within an organization’s larger HIPAA compliance program. It includes customizable AWS CloudFormation templates that automatically deploy the environment and configure AWS resources.

Using AppStream 2.0 to Deliver PACS and Image Analysis in Clinical Trials

Amazon AppStream 2.0 is a fully managed, non-persistent desktop and application service for remotely accessing your work. This means that clinical staff can now access the medical applications and data they need from anywhere. Benefits of using AppStream 2.0 include reduced overhead cost. This Architecture Blog post examines how to construct the AWS architecture for an image analysis application used in clinical trials, while keeping cost down. Furthermore, it demonstrates how something seemingly complex can be built with ease using both AWS services and image analysis applications already in place.

How fEMR Delivers Cryptographically Secure and Verifiable Medical Data with Amazon QLDB

Data veracity is fundamental. Patient data is confidential, and when a system deals with sensitive data, there needs to be a clear chain of ownership.

This blog post depicts an architecture based on the use of Amazon Quantum Ledger Database (Amazon QLDB), which addresses the need for data integrity and verifiability in healthcare. By using Amazon QLDB, the team can take advantage of an append-only journal to create a verifiable electronic medical record.

Also explored are the challenges architects face while working on these types of systems, as well as considerations about security, operational efficiency, processes for repeatable deployments using infrastructure as code, and data replication across multiple databases. The design choices architects make when developing a system depend on the context; read more about the mental models adopted in this use case.

Service Workbench on AWS

Service Workbench on AWS is a cloud solution that enables IT teams to provide secure, repeatable, and federated control of access to data, tooling, and compute power. Service Workbench can help redirect researchers’ focus from technical duties back to the research itself by allowing them to automate the creation of baseline research setups and by providing data access. It gives researchers the ability to build research environments in minutes without having to know about cloud infrastructure or wait for research IT to respond. It is fully HIPAA-compliant and allows for secure peer-to-peer collaboration, including with individuals from other institutions.

See you next time!

Thanks for joining our discussion on health tech architectures! See you in two weeks for more architecture best practices and guidance.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting with custom chips and accelerators

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-custom-chips-and-accelerators/

It’s hard to imagine a world without computer chips. They are at the heart of the devices that we use to work and play every day. Currently, Amazon Web Services (AWS) is offering customers the next generation of computer chips, with lower cost, higher performance, and a reduced carbon footprint.

This edition of Let’s Architect! focuses on custom computer chips, accelerators, and technologies developed by AWS, such as the AWS Nitro System; custom-designed, Arm-based AWS Graviton processors that support data-intensive workloads; and the AWS Trainium and AWS Inferentia chips, optimized for machine learning training and inference.

In this post, we discuss these new AWS technologies, their main characteristics, and how to take advantage of them in your architecture.

Deliver high performance ML inference with AWS Inferentia

As deep learning models become increasingly large and complex, both the cost of training them and the inference time for serving them increase.

With AWS Inferentia, machine learning practitioners can deploy complex neural-network models that are built and trained on popular frameworks, such as TensorFlow, PyTorch, and MXNet, on AWS Inferentia-based Amazon EC2 Inf1 instances.

This video introduces you to the main concepts of AWS Inferentia, a chip designed to reduce both cost and latency for inference. To speed up inference, AWS Inferentia shares a model across multiple chips, places the pieces inside the on-chip cache, then streams the data through a pipeline for low-latency predictions.

Presenters walk through the structure of the chip and software considerations, and share anecdotes from the Amazon Alexa team, which uses AWS Inferentia to serve predictions. If you want to learn more about high throughput coupled with low latency, explore Achieve 12x higher throughput and lowest latency for PyTorch Natural Language Processing applications out-of-the-box on AWS Inferentia on the AWS Machine Learning Blog.
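For a flavor of the developer workflow, here is a sketch of compiling a PyTorch model for Inferentia with the AWS Neuron SDK. The model choice and input shape are illustrative, and the exact API can vary across Neuron SDK releases:

```python
import torch
import torch_neuron  # AWS Neuron SDK; registers the torch.neuron namespace
from torchvision import models

# Compile a trained model ahead of time for Inferentia.
model = models.resnet50(pretrained=True)
model.eval()

example = torch.zeros([1, 3, 224, 224])  # must match the model's input shape
neuron_model = torch.neuron.trace(model, example_inputs=[example])
neuron_model.save("resnet50_neuron.pt")

# On an Inf1 instance, load it like any TorchScript module and predict.
loaded = torch.jit.load("resnet50_neuron.pt")
prediction = loaded(example)
```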

AWS Inferentia shares a model across different chips to speed up inference

AWS Lambda Functions Powered by AWS Graviton2 Processor – Run Your Functions on Arm and Get Up to 34% Better Price Performance

AWS Lambda is a serverless, event-driven compute service that enables code to run from virtually any type of application or backend service, without provisioning or managing servers. Lambda uses a high-availability compute infrastructure and performs all of the administration of the compute resources, including server- and operating-system maintenance, capacity-provisioning, and automatic scaling and logging.

AWS Graviton processors are designed to deliver the best price performance for cloud workloads. AWS Graviton3 processors are the latest in the AWS Graviton family and, compared with AWS Graviton2 processors, provide up to 25% higher compute performance, two times higher floating-point performance, and two times faster cryptographic workload performance. This means you can migrate AWS Lambda functions to Graviton in minutes and get as much as 19% better performance at approximately 20% lower cost compared with x86.

Comparison between x86 and Arm/Graviton2 results for the AWS Lambda function computing prime numbers

Powering next-gen Amazon EC2: Deep dive on the Nitro System

The AWS Nitro System is a collection of building-block technologies that includes AWS-built hardware offload and security components. It is powering the next generation of Amazon EC2 instances, with a broadening selection of compute, storage, memory, and networking options.

In this session, dive deep into the Nitro System, reviewing its design and architecture, exploring new innovations to the Nitro platform, and understanding how it allows for faster innovation and increased security while reducing costs.

Traditionally, hypervisors protect the physical hardware and BIOS; virtualize the CPU, storage, and networking; and provide a rich set of management capabilities. With the AWS Nitro System, AWS breaks apart those functions and offloads them to dedicated hardware and software.

AWS Nitro System separates functions and offloads them to dedicated hardware and software, in place of a traditional hypervisor

How Amazon migrated a large ecommerce platform to AWS Graviton

In this re:Invent 2021 session, we learn about the benefits Amazon’s ecommerce Datapath platform has realized with AWS Graviton.

With a range of 25%-40% performance gains across 53,000 Amazon EC2 instances worldwide for Prime Day 2021, the Datapath team is lowering their internal costs with AWS Graviton’s improved price performance. Explore the software updates that were required to achieve this and the testing approach used to optimize and validate the deployments. Finally, learn about the Datapath team’s migration approach that was used for their production deployment.

AWS Graviton2: core components

See you next time!

Thanks for exploring custom computer chips, accelerators, and technologies developed by AWS. Join us in a couple of weeks when we talk more about architectures and the daily challenges faced while working with distributed systems.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Modern data architectures

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-modern-data-architectures/

With the rapid growth in data coming from data platforms and applications, and the continuous improvements in state-of-the-art machine learning algorithms, data are becoming key assets for companies.

Modern data architectures include data mesh—a recent style that represents a paradigm shift, in which data is treated as a product and data architectures are designed around business domains. This type of approach supports the idea of distributed data, where each business domain focuses on the quality of the data it produces and exposes to the consumers.

In this edition of Let’s Architect!, we focus on data mesh and how it is designed on AWS, plus other approaches to adopt modern architectural patterns.

Design a data mesh architecture using AWS Lake Formation and AWS Glue

Domain Driven Design (DDD) is a software design approach where a solution is divided into domains aligned with business capabilities, software, and organizational boundaries. Unlike software architectures, most data architectures are often designed around technologies rather than business domains.

In this blog, you can learn about data mesh, an architectural pattern that applies the principles of DDD to data architectures. Data are organized into domains and considered the product that each team owns and offers for consumption.

A data mesh design organizes around data domains. Each domain owns multiple data products with their own data and technology stacks

Building Data Mesh Architectures on AWS

In this video, discover how to use the data mesh approach on AWS; specifically, how to implement certain design patterns for building a data mesh architecture with AWS services in the cloud.

This is a pragmatic presentation to get a quick understanding of data mesh fundamentals, the benefits/challenges, and the AWS services that you can use to build it. This video provides additional context to the aforementioned blog post and includes several examples of the benefits of modern data architectures.

This diagram demonstrates the pattern for sharing data catalogs between producer domains and consumer domains

Build a modern data architecture on AWS with Amazon AppFlow, AWS Lake Formation, and Amazon Redshift

In this blog, you can learn how to build a modern data strategy using AWS managed services to ingest data from sources like Salesforce. Also discussed is how to automatically create metadata catalogs and share data seamlessly between the data lake and data warehouse, plus how to create alerts in the event of an orchestrated data workflow failure.

The second part of the post explains how a data warehouse can be built by using an agile data modeling pattern, as well as how ELT jobs were quickly developed, orchestrated, and configured to perform automated data quality testing.
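To illustrate the alerting idea from the first part, the sketch below wires an Amazon EventBridge rule to an SNS topic for failed flow runs. The event source, detail-type, and status values are assumptions; confirm the exact pattern AppFlow emits in the EventBridge documentation, and note the topic ARN is a placeholder:

```python
import json

import boto3

events = boto3.client("events")

# Match failed AppFlow flow runs (pattern values are assumptions; verify
# against the events AppFlow actually emits in your Region).
events.put_rule(
    Name="appflow-flow-failures",
    EventPattern=json.dumps({
        "source": ["aws.appflow"],
        "detail-type": ["AppFlow End Flow Run Report"],
        "detail": {"status": ["Execution Failed"]},
    }),
)

# Route matched events to an existing (placeholder) SNS topic for alerting.
events.put_targets(
    Rule="appflow-flow-failures",
    Targets=[{
        "Id": "alert-topic",
        "Arn": "arn:aws:sns:us-east-1:111122223333:data-alerts",
    }],
)
```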

A data platform architecture and the subcomponents used to build it

AWS Lake Formation Workshop

With a modern data architecture on AWS, architects and engineers can rapidly build scalable data lakes; use a broad and deep collection of purpose-built data services; and ensure compliance via unified data access, security, and governance. As data mesh is a modern architectural pattern, you can build it using a service like AWS Lake Formation.

Familiarize yourself with new technologies and services not only by learning how they work, but also by building prototypes and projects to gain hands-on experience. This workshop allows builders to become familiar with the features of AWS Lake Formation and its integrations with other AWS services.

A data catalog is a key component in a data mesh architecture. AWS Glue crawlers interact with data stores and other elements to populate the data catalog

See you next time!

Thanks for joining our discussion on data mesh! See you in a couple of weeks when we talk more about architectures and the challenges that we face every day while working with distributed systems.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting for the edge

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-the-edge/

Edge computing comprises elements of geography and networking and brings computing closer to the end users of the application.

For example, using a content delivery network (CDN) such as Amazon CloudFront can help video streaming providers reduce latency for distributing their material by taking advantage of caching at the edge. Another example is an Internet of Things (IoT) solution that helps a company run business logic in remote areas or with low latency.

IoT is a challenging field because, as architects, we have multiple aspects to consider, like hardware, protocols, networking, and software. All of these aspects must be designed to interact together and be fault tolerant.

In this edition of Let’s Architect!, we share resources that are helpful for teams that are approaching or expanding their workloads for edge computing. We cover macro topics such as security, best practices for IoT, patterns for machine learning (ML), and scenarios with strict latency requirements.

Build Machine Learning at the edge applications

In Let’s Architect! Architecting for Machine Learning, we touched on some of the most relevant aspects to consider while putting ML into production. However, in many scenarios, you may also have specific constraints like latency or a lack of connectivity that require you to design a deployment at the edge.

This blog post considers a solution based on ML applied to agriculture, where a reliable connection to the Internet is not always available. You can learn from this scenario, which includes information from model training to deployment, to design your ML workflows for the edge. The solution uses Amazon SageMaker in the cloud to explore, train, package, and deploy the model to AWS IoT Greengrass, which is used for inference at the edge.
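One piece of that workflow, preparing a model for an edge target, can be sketched with a SageMaker Neo compilation job. The job name, S3 paths, role, and target device below are all placeholders, not values from the post:

```python
import boto3

sm = boto3.client("sagemaker")

# Compile a trained model for an edge device before packaging it for
# deployment through AWS IoT Greengrass.
sm.create_compilation_job(
    CompilationJobName="crop-model-edge",  # placeholder
    RoleArn="arn:aws:iam::111122223333:role/SageMakerNeoRole",  # placeholder
    InputConfig={
        "S3Uri": "s3://my-ml-bucket/model/model.tar.gz",
        "DataInputConfig": '{"input0": [1, 3, 224, 224]}',
        "Framework": "PYTORCH",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-ml-bucket/compiled/",
        "TargetDevice": "jetson_nano",  # pick your edge hardware
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```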

High-level architecture of the components that reside on the farm and how they interact with the cloud environment

Security at the edge

Security is one of the fundamental pillars described in the AWS Well-Architected Framework. In all organizations, security is a major concern both for the business and the technical stakeholders. It impacts the products they are building and the perception that customers have.

We covered security in Let’s Architect! Architecting for Security, but we didn’t focus specifically on edge technologies. This whitepaper shows approaches for implementing a security strategy at the edge, with a focus on describing how AWS services can be used. You can learn how to secure workloads designed for content delivery, as well as how to implement network protection to defend against DDoS attacks and protect your IoT solutions.

The AWS Well-Architected Tool is designed to help you review the state of your applications and workloads. It provides a central place for architectural best practices and guidance

AWS Outposts High Availability Design and Architecture Considerations

AWS Outposts allows companies to run some AWS services on-premises, which may be crucial to comply with strict data residency or low latency requirements. With Outposts, you can deploy servers and racks from AWS directly into your data center.

This whitepaper introduces architectural patterns, anti-patterns, and recommended practices for building highly available systems based on Outposts. You will learn how to manage your Outposts capacity and use networking and data center facility services to set up highly available solutions. Moreover, you can learn from mental models that AWS engineers adopted to consider the different failure modes and the corresponding mitigations, and apply the same models to your architectural challenges.

An Outpost deployed in a customer data center and connected back to its anchor Availability Zone and parent Region

AWS IoT Lens

The AWS Well-Architected Lenses are designed for specific industry or technology scenarios. When approaching the IoT domain, the AWS IoT Lens is a key resource to learn the best practices to adopt for IoT. This whitepaper breaks down the IoT workloads into the different subdomains (for example, communication, ingestion) and maps the AWS services for IoT with each specific challenge in the corresponding subdomain.

As architects and developers, we tend to automate and reduce the risk of human errors, so the IoT Lens Checklist is a great resource to review your workloads by following a structured approach.

Workload context checklist from the IoT Lens Checklist

See you next time!

Thanks for joining our discussion on architecting for the edge! See you in two weeks when we talk about database architectures on AWS.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting for big data workloads

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-big-data-workloads/

Big data is often defined by the three Vs: greater variety, volume, and velocity. Because of the three Vs, big data poses data management challenges that cannot be solved with traditional databases. Not only that, but trying to overcome these issues can lead to scaling problems, bottlenecks, and spiraling costs.

To help with this, you need to look at the whole data management pipeline. Don’t worry: AWS offers many tools and best practices to help you architect better for these challenges. In this post, we share insights into how to build and manage big data pipelines in your architecture.

Everything You Need to Know About Big Data: From Architectural Principles to Best Practices

There are so many tools, frameworks, and services for big data. It can be overwhelming to know where to start and which best practices to apply.

This video distills good practices and architectural principles for big data systems into accessible topics and guidance.

Manos Samatas presenting the mental models for big data architectures

AWS workshops for big data

This hands-on practice will show you what’s possible for big data services on AWS.

If you are a builder, this AWS workshop catalog includes several courses on big data and analytics. These resources provide new ideas and show you how to realize them in practice.

AWS workshops can help you learn the cloud services and recommended architectural patterns

Securely share your data across AWS accounts using AWS Lake Formation

It’s very common to share data stored across organizations or business units, but sharing data often comes with security risks.

This blog post explains how to share data across accounts via AWS Lake Formation in a secure and controlled manner, so your data is never exposed to the wrong people.
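At its core, the sharing step is a Lake Formation permission grant to another account. A minimal sketch (the account ID, database, and table names are placeholders) looks like this; the consumer account then creates a resource link to query the shared table:

```python
import boto3

lf = boto3.client("lakeformation")

# Grant a consumer account SELECT on one table in the producer's catalog.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "222233334444"},  # consumer account
    Resource={"Table": {"DatabaseName": "sales", "Name": "orders"}},
    Permissions=["SELECT"],
    PermissionsWithGrantOption=[],
)
```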

This diagram illustrates the architecture for cross-account access

How Amazon leverages AWS to deliver analytics at enterprise scale

Amazon.com is a customer of AWS like any other customer. But, as you can imagine, Amazon.com has very large and very complex datasets with tens of thousands of transactions at any one time.

This video goes through how Amazon.com uses AWS technologies to run its business successfully, and how you can apply the same architectures and principles to yours.

Data warehouse architectures can be used to run queries on large amounts of data from different data sources

See you next time!

Thanks for joining our discussion on big data architecture! See you in two weeks for more architecture best practices and guidance.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Designing Well-Architected systems

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-designing-well-architected-systems/

Amazon’s CTO Werner Vogels says, “Everything fails, all the time”. This means we should design with failure in mind and assume that something unpredictable could happen.

The AWS Well-Architected Framework is designed to help you prepare your workload for failure. It describes key concepts, design principles, and architectural best practices for designing and running workloads in the cloud. Using this tool regularly will help you gain awareness of the status of your workloads and put improvements in place for any workload deployed inside your AWS accounts.

In this edition of Let’s Architect!, we’ve collected solutions and articles that will help you understand the value behind the Well-Architected Framework and how to implement it in your software development lifecycle.

AWS Well-Architected Framework

AWS Well-Architected (AWS WA) helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads. Built around six pillars—operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability—AWS WA provides a consistent approach for customers and partners to evaluate architectures and implement scalable designs.

The AWS WA Framework includes domain-specific lenses, hands-on labs, and the AWS Well-Architected Tool. The AWS Well-Architected Tool (AWS WA Tool), available at no cost in the AWS Management Console, provides a mechanism for regularly evaluating workloads, identifying high-risk issues, and recording improvements.

The six pillars that compose the AWS Well-Architected Framework

Use templated answers to perform Well-Architected reviews at scale

For larger customers, performing AWS WA reviews often involves a combination of different teams. Coordinating participants from each team in order to perform a review increases the time taken and is expensive. In a large organization, there are often hundreds of AWS accounts where teams can store review documents, which means there is no way to quickly identify risks or spot common issues or trends that could influence improvements.

To address this, this blog post offers a solution to help you perform reviews more easily and quickly. It allows workload owners to automatically populate their reviews with templated answers to questions in the AWS WA Tool. These answers may be a shared responsibility between an application team and a centralized team such as platform, security, or finance. This way, application teams have fewer questions to answer and centralized team members have fewer reviews to attend, because answers that are common to all workloads are pre-populated in workload reviews. The solution also provides centralized reporting, giving a single view of AWS WA reviews conducted across the organization.
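The AWS WA Tool also has an API, which is what makes this kind of templating automatable. As a sketch (the workload ID, question ID, and choice IDs below are placeholders; list the real ones first), a central team could pre-populate an answer like this:

```python
import boto3

wa = boto3.client("wellarchitected")

# Discover the question and choice IDs for a pillar before templating.
answers = wa.list_answers(
    WorkloadId="abc123example",   # placeholder workload ID
    LensAlias="wellarchitected",
    PillarId="security",
)
for summary in answers["AnswerSummaries"]:
    print(summary["QuestionId"], summary["QuestionTitle"])

# Pre-populate one answer with centrally owned choices.
wa.update_answer(
    WorkloadId="abc123example",
    LensAlias="wellarchitected",
    QuestionId="securely-operate",                     # placeholder
    SelectedChoices=["sec_securely_operate_example"],  # placeholder
    Notes="Pre-populated from the platform team's template.",
)
```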

The components of the solution and the steps in the workflow

Machine Learning Lens

Machine learning (ML) is used to solve specific business problems and influence revenue. However, moving from experimentation (where scientists design ML models and explore applications) to a production scenario (where ML is used to generate value for the business) can present some challenges. For example, how do you create repeatable experiments? How do you increase automation in the deployment process? How do you deploy your model and monitor its performance?

This blog post and its companion whitepaper provide best practices based on AWS WA for each phase of putting ML into production, including formulating the problem and approaches for monitoring a model’s performance.

ML lifecycle phases with expanded components

Establishing Feedback Loops Based on the AWS Well-Architected Framework Review

When you perform an AWS WA review using the AWS WA Tool, you’ll answer a set of questions. The tool then gives recommendations to improve your workloads.

To apply these recommendations effectively, you must 1) define how you’ll apply them, 2) create systems that define what is monitored and which kinds of metrics or logs are required, 3) establish automatic or manual processes for reporting, and 4) improve them through iteration. This process is called a feedback loop.

This blog post shows you how to iteratively improve your overall architecture with feedback loops based on the results of the AWS WA review.

Feedback loop based on the AWS WA review

See you next time!

Thanks for reading! See you in a couple of weeks when we discuss strategies for running serverless applications on AWS.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting for DevOps

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-devops/

Under a DevOps model, the development and operations teams work together and share their skills and knowledge. Sometimes, these teams are merged into a single team where the engineers work across the entire application lifecycle, from development to deployment.

The objective of DevOps is to deliver applications and services quickly and efficiently. This faster pace allows companies to better adapt to their customers’ needs and changes in the market.

In this edition of Let’s Architect!, we’ll talk about DevOps culture and share content to provide helpful mental models and strategies for your work as an architect or engineer.

Automating cross-account CI/CD pipelines

Companies often use the cloud to run their microservices. This means they’re working with different AWS accounts and hosting each microservice in a dedicated account.

This method can be helpful to isolate different environments for software deployment pipelines. A well-designed pipeline is fundamental to releasing software quickly because it allows DevOps engineers to automate the software deployment process.

This video shows the mindset to adopt while designing pipelines for deploying resources across different environments. You’ll learn how to design a pipeline, how to build it using AWS CDK, and see how everything looks in the AWS Console.
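As a minimal sketch of such a pipeline in CDK v2 (the repository, connection ARN, and account/Region values are placeholders for your own setup):

```python
from aws_cdk import Environment, Stack, Stage, pipelines
from constructs import Construct

class AppStage(Stage):
    """One deployable copy of the application, e.g. test or prod."""
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Instantiate your application stacks here.

class PipelineStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        pipeline = pipelines.CodePipeline(
            self, "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                input=pipelines.CodePipelineSource.connection(
                    "my-org/my-repo", "main",  # placeholder repository
                    connection_arn="arn:aws:codestar-connections:eu-west-1:111111111111:connection/example",
                ),
                commands=["pip install -r requirements.txt", "npx cdk synth"],
            ),
        )
        # Promote the same application through test and production accounts.
        pipeline.add_stage(AppStage(self, "Test",
            env=Environment(account="111111111111", region="eu-west-1")))
        pipeline.add_stage(AppStage(self, "Prod",
            env=Environment(account="222222222222", region="eu-west-1")))
```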

AWS X-Ray helps developers analyze distributed applications, such as those built using a microservices architecture

Automating safe, hands-off deployments

Amazon adopted continuous delivery across the company as a way to automate and standardize how software is deployed and to reduce the time it takes for changes to reach production. In this system, improvements to the release process build up over time. Once deployment risks are identified, teams iterate on the release process and add extra safety in the automated pipeline.

A typical continuous delivery pipeline has four major phases—source, build, test, and production (prod). This article describes the mental models and approaches that engineers use at Amazon to help you understand the design considerations for each step of the pipeline and learn some recommended practices.

Each pipeline has these four major steps; however, more granularity is often added in the testing stage to take advantage of multiple pre-production environments

Covert ops on DevOps: Leveraging security to shift left

Architects often deal with complexity and ambiguity while designing architectures and interacting with stakeholders. Consequently, their architectures evolve and grow in complexity.

When your workload becomes more complex, security is an important area to consider and requires attention during the entire Software Development Life Cycle (SDLC). This video shows some methods to add security in a DevOps culture. You’ll learn about shifting your security left to create collaborations between developers and the security team. It will also show you how to uncover vulnerabilities in the SDLC as well as the strategies to implement and automate security in the process through a security-as-code mindset.

At a high level, people build applications with source code, version control, CI/CD, registries and deployments, and during each step we should design to prevent specific vulnerabilities

Instrumenting distributed systems for operational visibility

Every member of a development team works like an owner and operator of the service, whether that member is a developer, manager, or another role. Software developers and architects usually work with logs to see the status of their systems. Logs act as the mechanism to share what’s happening in the software that is running. This information is used for troubleshooting and performance improvement.

This article describes some approaches to feed data into operational dashboards to measure real-time metrics, invoke alarms, and engage with operators to diagnose problems. You’ll learn some mental models and best practices to design a logging system through a set of stories, considerations, and common examples with code samples.
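A common building block behind those dashboards is the structured, one-line-per-unit-of-work log entry. Here is a minimal sketch of the idea in plain Python (the service and field names are invented for illustration):

```python
import json
import logging
import time

logger = logging.getLogger("order-service")  # hypothetical service name
logging.basicConfig(level=logging.INFO, format="%(message)s")

def handle_request(order_id: str) -> None:
    start = time.monotonic()
    status = "ok"
    try:
        pass  # ... business logic would go here ...
    except Exception:
        status = "error"
        raise
    finally:
        # One structured record per request: easy for dashboards and
        # alarms to aggregate latency and failure rates.
        logger.info(json.dumps({
            "operation": "handle_request",
            "order_id": order_id,
            "status": status,
            "duration_ms": round((time.monotonic() - start) * 1000, 2),
        }))

handle_request("1234")
```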

AWS X-Ray helps developers analyze distributed applications, such as those built using a microservices architecture

Related information

If you want to learn more about DevOps, check What is DevOps?, a public resource with plenty of examples and introductory articles.

See you next time!

Thanks for reading! See you in a couple of weeks when we discuss strategies for applying the AWS Well-Architected framework to your workloads.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Understanding the build versus buy dilemma

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-understanding-the-build-versus-buy-dilemma/

Vendor lock-in happens when you commit to a specific technology and then don’t have the freedom to maintain full control of your applications. Even if you want to switch to another vendor, it’s not easy because of the financial investment, effort, and time needed to do so.

In cloud computing, technology changes quickly, and vendor lock-in can impact your business objectives. In this edition of Let’s Architect!, we show you how to avoid the risks of vendor lock-in and examine when you should build or buy new software.

Buy vs. Build Revisited: 3 Traps to Avoid

In this blog post, Gregor Hohpe shares some tips on how to avoid the risks of vendor lock-in. He advises you to “build the software that differentiates your business and buy all else” and shows how opportunity cost, an economic concept, plays a major role in deciding whether to build or buy software.

Time to Rethink Build vs Buy

Which is the right option for your business: build or buy? Customers often ask this question. The answer is: it depends.

Moving to the cloud does not necessarily mean you are locked in to a cloud provider. Most cloud platforms offer you a pay-as-you-go model with the flexibility to choose from a wide range of services and solutions such as serverless, DevOps, etc. However, having advanced and scalable technology products powering your business can help differentiate your core product. And, it can help you innovate faster and increase speed and agility. This blog post will help you choose the right path for you based on your business objectives.

Switching Costs and Lock-In

In this blog post, Mark Schwartz shares his personal story. He talks about his role as the CIO of US Citizenship and Immigration Services and how he decided to migrate their workloads to the cloud during his time there. He discusses some of his considerations in moving and some of the obstacles he encountered along the way.

See you next time!

Thanks for reading! See you in a couple of weeks when we discuss DevOps.

Other posts in this series

Let’s Architect! Architecting for front end

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-front-end/

Many workloads in the cloud need a front-end interface for interacting with APIs, either for populating content or for consuming it. This edition of Let’s Architect! shows you how to scale your front-end applications and serve data across multiple devices.

Micro-frontend Architectures on AWS

Micro-frontends are the technical representation of a business subdomain; they allow independent implementations with the same or different technology.

They help minimize the code shared with other subdomains and are owned by a single team. This blog post shows you how to apply client-side rendering micro-frontends in AWS.

Microservices backend with the micro-frontends

Building serverless micro frontends at the edge

Microservices architectures use techniques like canary releases or blue-green deployments to reduce the blast radius of issues deployed in production. In this video, you’ll learn how Ryanair scaled their front-end practice across their website and how to implement these techniques using Lambda@Edge and Amazon CloudFront.

A serverless architecture designed using AWS Step Functions for SEO integration of micro-frontends

Introduction to GraphQL

Many companies build APIs with GraphQL because it gives front-end developers the ability to query multiple databases, microservices, and APIs with a single GraphQL endpoint.

This video introduces asynchronous APIs, GraphQL, and the most common architectural patterns to work with. It also provides a starting point to understand the differences between REST and GraphQL as well as mental models to identify the right tool for each job.
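To make the single-endpoint point concrete, here is a sketch of one GraphQL request fetching a customer together with their orders. The endpoint, schema, and API key are hypothetical (the header shown mimics AppSync-style API-key auth):

```python
import requests  # third-party: pip install requests

query = """
query GetCustomer($id: ID!) {
  customer(id: $id) {
    name
    orders { id total }
  }
}
"""

# One POST returns the customer and their orders together, where a REST
# client might need several round trips.
response = requests.post(
    "https://example.com/graphql",          # placeholder endpoint
    json={"query": query, "variables": {"id": "123"}},
    headers={"x-api-key": "YOUR_API_KEY"},  # placeholder credential
    timeout=10,
)
print(response.json()["data"]["customer"])
```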

Some recommended practices to consider while getting a GraphQL API into production

Mocking and Testing Serverless APIs with AWS Amplify

This video covers how to write successful tests against an API backend using AWS Amplify. Amplify speeds up the development of your front-end and serverless backend applications.

Thanks to its low-code approach, you can focus on writing the business logic of your applications without the need to create the plumbing between services. If you need to add more configurations using Amplify, review its custom resources.

The Amplify Command Line Interface (CLI) is a unified toolchain to create, integrate, and manage cloud services for your application

See you next time!

Thanks for reading! See you in a couple of weeks when we discuss technological lock-in.

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Architecting for governance and management

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-governance-and-management/

As you develop next-generation cloud-native applications and modernize existing workloads by migrating to the cloud, you need cloud teams that can govern centrally with policies for security, compliance, operations, and spend management.

In this edition of Let’s Architect!, we gather content to help software architects and tech leaders explore new ideas, case studies, and technical approaches to help you support production implementations for large-scale migrations.

Seamless Transition from an AWS Landing Zone to AWS Control Tower

A multi-account AWS environment helps businesses migrate, modernize, and innovate faster. With the large number of design choices, setting up a multi-account strategy can take a significant amount of time, because it involves configuring multiple accounts and services and requires a deep understanding of AWS.

This blog post shows you how AWS Control Tower helps customers achieve their desired business outcomes by setting up a scalable, secure, and governed multi-account environment. This post describes a strategic migration of 140 AWS accounts from a customer-built landing zone to an AWS Control Tower-based solution.

Multi-account landing zone architecture that uses AWS Control Tower

Build a strong identity foundation that uses your existing on-premises Active Directory

How do you use your existing Microsoft Active Directory (AD) to reliably authenticate access for AWS accounts, infrastructure running on AWS, and third-party applications?

The architecture shown in this blog post is designed to be highly available and extends access to your existing AD to AWS, which enables your users to use their existing credentials to access authorized AWS resources and applications. This post highlights the importance of implementing a cloud authentication and authorization architecture that addresses the variety of requirements for an organization’s AWS Cloud environment.

Multi-account Complete AD architecture with trusts and AWS SSO using AD as the identity source

Migrate Resources Between AWS Accounts

AWS customers often start their cloud journey with one AWS account, and over time they deploy many resources within that account. Eventually though, they’ll need to use more accounts and migrate resources across AWS Regions and accounts to reduce latency or increase resiliency.

This blog post shows four approaches to migrate resources based on type, configuration, and workload needs across AWS accounts.

Migration infrastructure approach

Transform your organization’s culture with a Cloud Center of Excellence

As enterprises seek digital transformation, their efforts to use cloud technology within their organizations can be a bit disjointed. This video introduces you to the Cloud Center of Excellence (CCoE) and shows you how it can help transform your business via cloud adoption, migration, and operations. By using the CCoE, you’ll establish a cross-functional team of people for developing and managing your cloud strategy, governance, and best practices that your organization can use to transform the business using the cloud.

Benefits of CCoE

See you next time!

Thanks for reading! If you want to dive into this topic even more, don’t miss the Management and Governance on AWS product page.

See you in a couple of weeks with novel ways to architect for front-end web and mobile!

Other posts in this series

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Let’s Architect! Creating resilient architecture

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-creating-resilient-architecture/

The AWS Well-Architected Framework defines resilience as “the capability to recover when stressed by load (more requests for service), attacks (either accidental through a bug, or deliberate through intention), and failure of any component in the workload’s components.”

The need for resilient workloads transcends all customer industries, but it can often be misunderstood, which can lead to workloads that do not incorporate resilient architecture at all or that are over-engineered.

Resilience is a technical problem, but it’s also about people and culture. It’s a continuous process that requires us to learn by iterating. Customers need to understand, from a business perspective, what their SLA requirements are and, from a technical perspective, how to achieve them with their architecture. In this post, we share resources to help you build resilience into your AWS architecture.

Amazon’s approach to building resilient services

Building a resilient architecture is not only about the technical implementation of the system, but also about the solutions for observability, operations, and people.

This video shows the Amazon approach for designing resilient systems, where individual teams build and own a service. This way, everyone has operational responsibility. You’ll learn how to deploy often, move fast, and design solutions for automatic rollback, which allows teams to revert their workload to a previous iteration if needed.

The pillars adopted by the engineering teams building services at Amazon

Five design patterns to build more resilient applications

Resilience is an important consideration for developers. For instance, if a downstream service is not available, how can the software handle the situation? Which mechanisms should you use to implement retries? How can you prevent overloading the downstream service?

This video focuses on five strategies and design patterns that developers can use to build resilient applications. You’ll learn how to add timeouts, retries, exponential backoff with randomness, and circuit breakers into your code. These patterns are powerful because they can be abstracted and implemented in different scenarios.
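As one concrete instance of these patterns, here is a minimal sketch of retries with exponential backoff and full jitter (the exception type is a placeholder for whatever your client library raises for transient errors):

```python
import random
import time

class TransientError(Exception):
    """Placeholder for errors worth retrying (throttles, timeouts, 5xx)."""

def call_with_retries(operation, max_attempts=5, base=0.1, cap=5.0):
    """Retry `operation` with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # retry budget exhausted; surface the failure
            # Full jitter: sleep a random duration up to the exponential
            # bound, so concurrent clients do not retry in lockstep waves.
            time.sleep(random.uniform(0.0, min(cap, base * 2 ** attempt)))
```

Full jitter is what prevents a thundering herd: without the randomness, every client that failed at the same moment would retry at the same moment too.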

Software developers can implement different strategies in their application code to design for resiliency

Building Resilient Well-Architected Workloads Using AWS Resilience Hub

This blog post shows you how AWS Resilience Hub can help you evaluate the resilience of your architecture. It gives you a central place to monitor, track, and evaluate your application’s resiliency based on your business goals. For example, after you define your RPO and RTO SLAs, Resilience Hub will evaluate your current architecture against them and show you whether you’ve met your goals. If you haven’t met your goals, it recommends changes to help you meet them.

Multi-AZ architecture incorporating data backup features

Incorporating continuous resilience in your development ecosystem

Resilience encompasses a broad range of considerations, including infrastructure, application patterns, data management, and application building and monitoring. And after you incorporate resilience, it is essential to continuously maintain it.

This video provides useful principles for building continuous resilience in your applications. It also explores various considerations for implementing processes designed to provide continuous improvement through a DevOps methodology and shows you services you can use to incorporate resilience in the development process in a nearly continuous manner.

Software architects can implement several patterns to prevent failures or being fault-tolerant

See you next time!

Thanks for joining our discussion on resilient architecture! See you in a couple of weeks with our content about governance in the cloud!

Looking for more architecture content?

AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other posts in this series


Let’s Architect! Serverless architecture on AWS

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-serverless-architecture-on-aws/

Serverless architecture and computing allow you and your teams to focus on delivering business value instead of investing time in tweaking infrastructure. AWS not only provides serverless computing as a service; as Andy Jassy noted in his 2020 re:Invent keynote, half of the new applications built at Amazon use AWS Lambda.

In this post, we share insights into reimagining a serverless environment.

I Build Applications – Event-driven Architecture

Event-driven architecture is common in modern applications built with microservices, and it is the cornerstone for designing serverless workloads. It uses events to trigger and communicate between decoupled services.

In this video, you’ll learn how to start with a prototype and then scale to mass adoption using decoupled systems that run only in response to events, without needing to redesign your architecture. Danilo Poccia, Chief Evangelist at AWS, begins the session with APIs, then gives an example of how to build an event-driven architecture using Amazon EventBridge. The session closes by showing how to understand what is happening in this exchange of events.
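As a rough illustration of publishing an event to an event bus, here is a minimal boto3 sketch; the bus name, event source, and payload fields are hypothetical, not taken from the session.

```python
import json

import boto3

events = boto3.client("events")

def publish_order_created(order_id: str, total: float) -> None:
    """Publish a domain event that decoupled consumers can subscribe to."""
    events.put_events(
        Entries=[
            {
                "EventBusName": "orders-bus",    # hypothetical custom bus
                "Source": "com.example.orders",  # hypothetical event source
                "DetailType": "OrderCreated",
                "Detail": json.dumps({"orderId": order_id, "total": total}),
            }
        ]
    )
```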

Event-driven communication with asynchronous invocation

Building modern cloud applications? Think integration

This re:Invent 2021 session explains how modern cloud applications are built on serverless or microservices and how the connections between components define important characteristics, like scalability, availability, and coupling.

How your systems are interconnected determines your system’s essential properties, such as resiliency and changeability. Gregor Hohpe, AWS Enterprise Strategist, shares tips on what to consider when integrating different services, such as lifecycle, the level of control you have over the systems you are integrating, and how integration becomes an integral part of your software delivery cycle. The goal is to evolve your integrations at the same speed as your software deployments.

Integration approaches with Gregor Hohpe

Serverless architectural patterns and best practices

Serverless architectures require a mindset shift: existing patterns need to be revisited, and new patterns need to be created for the new architectural style. For each pattern, we provide operational, security, and reliability best practices and discuss potential challenges. We also demonstrate some patterns in reference architecture diagrams.

This session helps you identify services and applications for creating serverless architectures and understand areas of potential savings, increased agility, and reliability in your organization. Heitor Lessa, Principal Solutions Architect at AWS, starts the session by identifying the benefits of Lambda Power Tuning and detailing how to set memory allocation when you have hundreds of functions, then follows with best practices for each pattern presented.

Best practices for serverless architecture

Best practices of advanced serverless developers

This session is an overview of architectural best practices, optimizations, and handy code snippets that you can use to build secure, scalable, and high-performance serverless applications.

Julian Wood, Senior Developer Advocate at AWS, provides recommended practices for implementing serverless applications in your company, such as using Lambda to transform, not transport; avoiding monolithic services and functions; orchestrating workflows with AWS Step Functions; and choreographing events. Julian also covers the different ways you can invoke Lambda functions and what you should be aware of with each invocation model.
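As a rough sketch of two of the invocation models, a function can be invoked synchronously or asynchronously with boto3 (the third, poll-based model is configured through event source mappings rather than direct calls); the function name and payload below are hypothetical.

```python
import json

import boto3

lambda_client = boto3.client("lambda")
payload = json.dumps({"orderId": "1234"})  # hypothetical payload

# Synchronous invocation: the caller blocks and receives the function's result.
response = lambda_client.invoke(
    FunctionName="process-order",  # hypothetical function name
    InvocationType="RequestResponse",
    Payload=payload,
)
print(json.load(response["Payload"]))

# Asynchronous invocation: Lambda queues the event and returns immediately;
# retries are handled by the service rather than the caller.
lambda_client.invoke(
    FunctionName="process-order",
    InvocationType="Event",
    Payload=payload,
)
```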

Three types of AWS Lambda invocation models

Building next-gen applications with event-driven architectures

Maintaining data consistency across multiple services can be challenging. It can also be difficult to work with large amounts of data in different data stores and locations. Teams building microservices architectures often find that integration with other applications and external services can make their workloads more monolithic and tightly coupled.

In this session, you can learn how to use event-based architectures to decouple and decentralize application components. Coupling is not one-dimensional, and it’s a trade-off to balance and optimize over time. This video demonstrates patterns based on message queues and events: for each pattern you can learn the advantages, the disadvantages, and the options for building it on AWS.

Sam Dengler, Principal Solutions Architect at AWS, explains the mental models to apply while designing choreography and orchestration in a scenario with microservices. The strategy adopted by Taco Bell for identifying their bounded contexts is also detailed, as well as the architecture built on Lambda for running the business logic and on AWS Step Functions for orchestration.

Choreography and orchestration are two modes of interaction in a microservices architecture

See you next time!

Thanks for joining our discussion on serverless architecting! If you want to deep dive into the topic, read all about Serverless on AWS!

See you in a couple of weeks when we discuss architecting for resilience!

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other posts in this series

Let’s Architect! Using open-source technologies on AWS

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-using-open-source-technologies-on-aws/

With open-source technology, authors make software available to the public, who can view, use, or change it and add new features or support new capabilities. Open-source technology promotes collaboration across different teams, organizations, and people because the process often includes different perspectives and ideas, which typically results in a stronger solution.

It can be difficult to create a multi-use solution when building to solve a specific challenge. With an open-source project or initiative, multiple teams work together, which prevents coupling and makes the solution easier to generalize.

In this edition of Let’s Architect!, we show you some open-source technologies built with AWS and options for running well-known, open-source projects on AWS.

Firecracker: Secure and Fast microVMs for Serverless Computing

Firecracker was developed at AWS to improve the customer experience of services like AWS Lambda and AWS Fargate. This technology is used to deploy workloads in lightweight virtual machines (VMs), called microVMs. For example, when a new Lambda function is triggered in response to an event, AWS Lambda provisions a microVM (if none already exists) to handle the request. Behind the scenes, this is powered by Firecracker.

This video introduces Firecracker and the concept of a virtual machine monitor (VMM) as the technology used to create and manage microVMs. The talk explains Firecracker’s foundation, the minimal device model, and how it interacts with various containers. You’ll learn about the performance, security, and utilization improvements enabled by Firecracker and how Firecracker is used for Lambda and Fargate.

An example host running Firecracker microVMs

Deep dive into AWS Cloud Development Kit

AWS Cloud Development Kit (CDK) is an open-source software development framework that allows you to define your cloud application resources using familiar programming languages. It uses object-oriented design to create resources and build an end-to-end process for application development from infrastructure and software-development perspectives.

This video introduces AWS CDK core concepts and demonstrates how to create custom resources and deploy them to the cloud. With AWS CDK, you can make deployments repeatable, automate operations through infrastructure as code, and use software design patterns while coding your architecture.
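As a small illustration of defining infrastructure in a programming language, here is a minimal CDK sketch in Python; it assumes CDK v2, and the stack and bucket names are hypothetical.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StorageStack(Stack):
    """Declares cloud resources with the same language used for application code."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A versioned S3 bucket, defined as an object instead of raw JSON/YAML
        s3.Bucket(self, "ArtifactBucket", versioned=True)

app = App()
StorageStack(app, "StorageStack")
app.synth()  # emits a CloudFormation template that can be deployed repeatably
```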

AWS CDK is an open-source software development framework for defining cloud infrastructure as code

Using Apollo Server on AWS Lambda with Amazon EventBridge for real-time, event-driven streaming

Apollo Server is an open-source, spec-compliant GraphQL server that’s compatible with any GraphQL client. This blog post covers how you can architect Apollo Server on AWS Lambda in an event-driven architecture. It shows you how to run Apollo Server on AWS Lambda, integrate it with REST and WebSocket APIs, and communicate asynchronously via an event bus.

Sample application: a chat app that receives a text message from the client and responds with French and German translations of the message

Observability the open-source way

Removing the undifferentiated heavy lifting of implementing open-source software allows you to plug and play your favorite solutions with existing AWS services. This video addresses best practices and real-world use cases for Amazon Managed Service for Prometheus, Amazon Managed Grafana, and AWS Distro for OpenTelemetry to gain observability. Observability is fundamental for collecting and analyzing data coming from your architecture, understanding the status of your system, and taking action to improve application performance.
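As a hedged sketch of what instrumenting code for observability can look like, here is a minimal OpenTelemetry example in Python; the service and span names are hypothetical, and a production setup would typically export to a collector rather than the console.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider that exports spans to stdout; swap the
# ConsoleSpanExporter for an OTLP exporter to send data to a collector.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("process-payment"):
    print("payment processed")  # business logic runs inside the traced span
```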

Setting up Amazon Managed Service for Prometheus

See you next time!

See you in a couple of weeks when we discuss strategies for running serverless applications on AWS!

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other posts in this series

Let’s Architect! Architecting microservices with containers

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-microservices-with-containers/

Microservices structure an application as a set of independently deployable services. They speed up software development and allow architects to quickly update systems to adhere to changing business requirements.

According to best practices, the different services should be loosely coupled, organized around business capabilities, independently deployable, and owned by a single team. If applied correctly, there are multiple advantages to using microservices. However, working with microservices can also bring challenges. In this edition of Let’s Architect!, we explore the advantages, mental models, and challenges deriving from microservices with containers.

Application integration patterns for microservices

As Tim Bray said in his time with AWS, “If your application is cloud native, large scale, or distributed, and doesn’t include a messaging component, that’s probably a bug.”

This video evaluates several design patterns based on messaging and shows you how to implement them in your workloads to achieve the full capabilities of microservices. You’ll learn some fundamental application integration patterns and some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.
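For a taste of asynchronous messaging between microservices, here is a minimal boto3 sketch using Amazon SQS; the queue URL and message body are hypothetical, and the consumer loop is simplified.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

def process(body: str) -> None:
    print("processing", body)  # placeholder for real business logic

# Producer: publish work for another microservice to pick up asynchronously.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "1234"}')

# Consumer: long-poll for messages, process them, and delete on success so
# that unprocessed messages become visible again and are retried.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    process(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```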

The scatter-gather pattern scales parallel processing across nodes and aggregates the results in a queue

Distributed monitoring

Customers often cite monitoring as one of the main challenges while working with containers. Monitoring collects operational data in the form of logs, metrics, events, and traces to identify and respond to issues quickly and minimize disruptions.

This whitepaper covers cross-service challenges in microservices, including service discovery, distributed monitoring, and auditing. You’ll learn about the role of DNS and service meshes in interservice communication and discovery and the tools available for monitoring your clusters that run containers and for logging.

This view from AWS X-Ray shows how a request can be tracked across different services. This is implemented by taking advantage of correlation IDs

Create a pipeline with canary deployments for Amazon ECS using AWS App Mesh

When architects deploy a new version of an application, they want to test it on a set of users before routing all the traffic to the new version. This is known as a “canary deployment.” A canary deployment can automatically switch traffic back to the old version if some inconsistencies are detected. This decreases the impact of the bug(s) introduced in the new release. For microservices, this is helpful when testing a complex distributed system because you can send a percentage of traffic to newer versions in a controlled manner.

A service mesh provides application-level networking so your services can communicate with each other across multiple types of compute infrastructure. This blog post shows how to use AWS App Mesh to implement a canary deployment strategy, using AWS Step Functions to orchestrate the different steps during testing and AWS CodePipeline for continuous delivery of each microservice.
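The control flow of a canary deployment can be summarized in a few lines. The sketch below is a hypothetical simplification of what the Step Functions workflow automates; `set_weights` and `healthy` stand in for an App Mesh route update and a health check against your alarms.

```python
import time

def canary_shift(set_weights, healthy, step=10, interval=60):
    """Gradually shift traffic to a new version, rolling back on failure."""
    for new_pct in range(step, 101, step):
        set_weights(100 - new_pct, new_pct)  # e.g., update App Mesh route weights
        time.sleep(interval)                 # let real traffic hit the new version
        if not healthy():                    # e.g., check CloudWatch alarms
            set_weights(100, 0)              # inconsistency detected: roll back
            return False
    return True
```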

An overview of the architecture used to create the pipeline and perform the canary deployments

Running microservices in Amazon EKS with AWS App Mesh and Kong

Distributed architectures bring up several questions. How do we expose our APIs towards client-side applications? How do our microservices communicate?

This blog post answers these questions with a solution that uses Amazon Elastic Kubernetes Service (Amazon EKS) in conjunction with AWS App Mesh and Kong. This solution helps you manage the security and discoverability of your microservices; Kong protects your service mesh and runs side by side with your application services.

The Kong for Kubernetes architecture can be implemented using Amazon EKS and AWS App Mesh

See you next time!

See you in a couple of weeks when we discuss open source technologies on AWS!

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other posts in this series

Let’s Architect! Architecting for Blockchain

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-blockchain/

You’ve likely read about or heard someone talk about blockchain. This distributed and decentralized ledger collects immutable blocks of information and helps secure your data without going through a third party. It is commonly used to maintain secure and decentralized records for registries, consensus, cryptocurrencies, and the latest trend: non-fungible tokens (NFTs).

This collection of content will help you learn the basics of blockchain and drill down into the mindset to apply while architecting for blockchain. We focus on the architectural aspects to explain what blockchain is from a technological perspective, how it works, when you need it, and its characteristics applied to different scenarios.

Amazon Managed Blockchain: When to use blockchain

There is a lot of buzz about blockchain, but when should you use it? What are its benefits and limitations? This video introduces you to Amazon Managed Blockchain and will help you identify if blockchain is a good solution for you and what type of blockchain is best suited for your use case.

John Liu covers the characteristics and benefits of private and public blockchain

Deep Dive on Amazon Managed Blockchain

In this video, Johnathan Fritz, a Principal Product Manager for Amazon Managed Blockchain, shares some challenges his team faced while building a distributed and immutable network and how they overcame them. The talk provides a good example of mental models you can use to understand and solve challenges while architecting.

Blockchain is based on a consensus mechanism in a distributed system

Mint and deploy NFTs to the Ethereum blockchain using Amazon Managed Blockchain

Buying NFTs is a hot topic right now, but how do you create your own? This blog post provides a step-by-step guide that shows you how to create an NFT and establish a workflow to deploy ERC-721 contracts to the Ethereum Rinkeby public testnet.

The architecture uses Managed Blockchain to take advantage of maintained Ethereum nodes and allow developers to focus on smart contracts

How Specright uses Amazon QLDB to create a traceable supply chain network

Blockchain and distributed ledger technologies focus on decentralizing applications involving multiple parties where no single entity owns the application. When your application is decentralized and involves multiple, unknown parties, blockchains can be appropriate. On the other hand, if your application only requires a complete and verifiable history of data changes, you can consider a ledger database.

This post shows how Specright uses Amazon Quantum Ledger Database (Amazon QLDB) to maintain a complete, verifiable history of data changes in an append-only, immutable journal. Their architecture makes sure that all members of the network have access to the same, latest version of a specification and can instantly track its change history to investigate quality issues.
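For a sense of what working with a ledger database looks like, here is a minimal sketch using the Amazon QLDB driver for Python; the ledger name, table, and query are hypothetical, not taken from Specright’s implementation.

```python
from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="supply-chain")  # hypothetical ledger name

def read_latest_spec(txn, spec_id):
    cursor = txn.execute_statement(
        "SELECT * FROM specifications WHERE specId = ?", spec_id
    )
    return list(cursor)

# execute_lambda runs the function as a transaction against the journal, so
# every read and write is backed by a complete, verifiable history.
rows = driver.execute_lambda(lambda txn: read_latest_spec(txn, "SPEC-001"))
```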

This architecture allows all members of the supply chain network to access the same and latest versions of specifications

See you next time!

Thanks for reading! If you’re looking for more tools to architect your workload, check out the AWS Architecture Center.

See you in a couple of weeks when we discuss strategies for running microservices with containers!

Other posts in this series

Let’s Architect! Tools for Cloud Architects

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-tools-for-cloud-architects/

This International Women’s Day, we’re featuring more than a week’s worth of posts that highlight female builders and leaders. We’re showcasing women in the industry who are building, creating, and, above all, inspiring, empowering, and encouraging everyone—especially women and girls—in tech.


A great way for cloud architects to learn is to experiment with the tools that our teams are using or could consider for the future. This allows us to learn new technologies, become familiar with the latest trends, and understand the entire cycle of our solutions.

Amazon Web Services (AWS) provides several tools for architects, including resources that can analyze your environment to create a visual diagram and a community of builders who can answer your technical questions.

Today we’re excited to share tools and methodologies that you should be aware of. In honor of International Women’s Day, half of these tools have been developed with and by women.

AWS Perspective

One of the main challenges for every architect is making sure their documentation is up to date. Recently, we’ve seen the rise of “architecture as code” tools for deriving architecture diagrams directly from the code in production.

In that vein, AWS developed AWS Perspective, a diagramming solution that helps you visualize your live workloads.

AWS Perspective analyzes your environment and creates a diagram with all your cloud components

Chaos Testing with AWS Fault Injection Simulator and AWS CodePipeline

Chaos engineering is the process of testing a distributed computing system to ensure that it can withstand unexpected disruptions.

This blog post shows an architecture pattern for automating chaos testing as part of your continuous integration/continuous delivery (CI/CD) process. By automating the implementation of chaos experiments inside CI/CD pipelines, complex risks and modeled failure scenarios can be tested against application environments with every deployment.
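As a hedged sketch of the kind of pipeline step involved, here is how a chaos experiment might be started from code with boto3; the experiment template ID is hypothetical and would normally come from your pipeline configuration.

```python
import boto3

fis = boto3.client("fis")

# Start a chaos experiment from a pre-defined AWS FIS experiment template.
experiment = fis.start_experiment(experimentTemplateId="EXT123abc")  # hypothetical ID
print(experiment["experiment"]["state"]["status"])
```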

This high-level architecture shows how to automate chaos engineering in your environment

AWS re:Post – A Reimagined Q&A Experience for the AWS Community

Often when architecting we run into different design choices, issues, and roadblocks. What service should you use? What is the best way to implement this? Who do you ask?

AWS re:Post is a new question-and-answer service (think Stack Overflow specifically for AWS). It is monitored by the community who answers your questions, and then employees and official partners review these answers to ensure accuracy.

AWS re:Post is public. There is a wide community of AWS experts ready to answer your questions

Establishing Feedback Loops Based on the AWS Well-Architected Framework Review

In 2018, AWS released the Well-Architected Framework, a mechanism for reviewing and improving your workloads that provides recommendations based on best practices in areas such as security, cost optimization, and reliability. This article shows you how to iteratively improve your systems in the cloud using the Well-Architected Framework.

Creating a healthy feedback loop will enhance your architecture over time

See you next time!

Thanks for reading! If you’re looking for more tools to architect your workload, check out the AWS Architecture Center.

See you in a couple of weeks when we discuss blockchain!

Other posts in this series

Let’s Architect! Architecting for Security

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/lets-architect-architecting-for-security/

At AWS, security is “job zero” for every employee—it’s even more important than any number one priority. In this Let’s Architect! post, we’ve collected security content to help you protect data, manage access, protect networks and applications, detect and monitor threats, and ensure privacy and compliance.

Managing temporary elevated access to your AWS environment

One challenge many organizations face is maintaining solid security governance across AWS accounts.

This Security Blog post provides a practical approach to temporarily elevating access for specific users. For example, imagine a developer wants to access a resource in the production environment. With temporary elevated access, you won’t have to provide them with a standing account that has access to production; you simply elevate their access for a short period of time. The following diagram shows the few steps needed to temporarily elevate access for a user.

This diagram shows the few steps needed to temporarily elevate access for a user
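As an illustration of the underlying mechanics, short-lived credentials can be obtained with AWS STS; this is a minimal sketch rather than the post’s solution, and the role ARN and session name are hypothetical.

```python
import boto3

sts = boto3.client("sts")

# Request short-lived credentials for a production role.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/prod-elevated-access",  # hypothetical
    RoleSessionName="temporary-elevated-access",
    DurationSeconds=900,  # credentials expire after 15 minutes
)["Credentials"]

# Use the temporary credentials for a scoped session that expires on its own.
prod_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```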

Security should start left: The problem with shift left

You already know security is job zero at AWS. But it’s not just a technology challenge. The gaps between security, operations, and development cycles are widening. To close these gaps, teams must have real-time visibility and control over their tools, processes, and practices to prevent security breaches.

This re:Invent session shows how establishing relationships, empathy, and understanding between development and operations teams early in the development process helps you maintain the visibility and control you need to keep your applications secure.

Empowering developers means shifting security left and presenting security issues as early as possible in your process

AWS Security Reference Architecture: Visualize your security

Securing a workload in the cloud can be tough; almost every workload is unique and has different requirements. This re:Invent video shows you how AWS can simplify the security of your workloads, no matter their complexity.

You’ll learn how various services work together and how you can deploy them to meet your security needs. You’ll also see how the AWS Security Reference Architecture can automate common security tasks and expand your security practices for the future. The following diagram shows how the AWS Security Reference Architecture provides guidelines for securing your workloads in multiple AWS Regions and accounts.

The AWS Security Reference Architecture provides guidelines for securing your workloads in multiple AWS Regions and accounts

Network security for serverless workloads

Serverless technologies can improve your security posture. You can build layers of control and security with AWS managed and abstracted services, meaning that you don’t have to do as much security work and can focus on building your system.

This video from re:Invent provides serverless strategies to consider to gain greater control of networking security. You will learn patterns to implement security at the edge, as well as options for controlling an AWS Lambda function’s network traffic. These strategies are designed to securely access resources (for example, databases) placed in a virtual private cloud (VPC), as well as resources outside of a VPC. The following screenshot shows how Lambda functions can run in a VPC and connect to services like Amazon DynamoDB using VPC gateway endpoints.

Lambda functions can run in a VPC and connect to services like Amazon DynamoDB using VPC gateway endpoints
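In infrastructure-as-code terms, a gateway endpoint like the one shown above can be declared in a few lines. This is a minimal AWS CDK sketch in Python, assuming CDK v2; the stack and construct names are hypothetical.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class NetworkStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        vpc = ec2.Vpc(self, "AppVpc")
        # A gateway endpoint keeps DynamoDB traffic on the AWS network, so
        # Lambda functions in the VPC never traverse the public internet.
        vpc.add_gateway_endpoint(
            "DynamoDbEndpoint",
            service=ec2.GatewayVpcEndpointAwsService.DYNAMODB,
        )

app = App()
NetworkStack(app, "NetworkStack")
app.synth()
```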

See you next time!

Thanks for reading! If you’re looking for more ways to architect your workload for security, check out Best Practices for Security, Identity, & Compliance in the AWS Architecture Center.

See you in a couple of weeks when we discuss the best tools offered by AWS for software architects!

Other posts in this series

Let’s Architect! Architecting for Machine Learning

Post Syndicated from Luca Mezzalira original https://aws.amazon.com/blogs/architecture/architecting-for-machine-learning/

Though it seems like something out of a sci-fi movie, machine learning (ML) is part of our day-to-day lives. So often, in fact, that we may not always notice it. For example, social networks and mobile applications use ML to assess user patterns and interactions to deliver a more personalized experience.

AWS services provide many options for integrating ML. In this post, we show you some use cases that can enhance your platforms and integrate ML into your production systems.

Dynamic A/B testing for machine learning models with Amazon SageMaker MLOps projects

Performing A/B testing on production traffic to compare a new ML model with the old model is a recommended step after offline evaluation.

This blog post explains how A/B testing works and how it can be combined with multi-armed bandit testing to gradually send traffic to the more effective variants during the experiment. It also teaches you how to build the solution with AWS Cloud Development Kit (AWS CDK), architect your system for MLOps, and automate the deployment of the A/B testing solution.
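To show the bandit idea in isolation, here is a minimal epsilon-greedy sketch in plain Python; the variant names, statistics, and exploration rate are illustrative and not tied to the SageMaker APIs used in the post.

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy choice between model variants.

    stats maps a variant name to a (successes, trials) tuple.
    """
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore a random variant
    # Exploit: pick the variant with the best observed success rate.
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

stats = {"model-a": (90, 1000), "model-b": (12, 100)}
variant = pick_variant(stats)
successes, trials = stats[variant]
reward = 1  # hypothetical: 1 if this prediction led to a good outcome, else 0
stats[variant] = (successes + reward, trials + 1)
```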

This diagram shows the iterative process to analyze the performance of ML models in online and offline scenarios

Enhance your machine learning development by using a modular architecture with Amazon SageMaker projects

Modularity is a key characteristic for modern applications. You can modularize code, infrastructure, and even architecture.

A modular architecture provides a framework that allows each development role to work on their own part of the system while hiding the complexity of integration, security, and environment configuration. This blog post provides an approach to building a modular ML workload that is easy to evolve and maintain across multiple teams.

A modular architecture allows you to easily assemble different parts of the system and replace them when needed

Automate model retraining with Amazon SageMaker Pipelines when drift is detected

The accuracy of ML models can deteriorate over time because of model drift or concept drift. This is a common challenge when deploying your models to production. Have you ever experienced it? How would you architect a solution to address this challenge?

Without metrics and automated actions, maintaining ML models in production can be overwhelming. This blog post shows you how to design an MLOps pipeline for model monitoring that detects concept drift. You can then expand the solution to automatically launch a new training job after drift is detected to learn from the new samples, update the model, and account for the changes in the data distribution.
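As a simplified, hypothetical illustration of drift detection (not the mechanism SageMaker Model Monitor uses internally), a two-sample statistical test can compare newly collected data against the training baseline:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(baseline, recent, alpha=0.01):
    """Flag drift when the recent feature distribution differs from baseline.

    Uses a two-sample Kolmogorov-Smirnov test; alpha is an illustrative choice.
    """
    _statistic, p_value = ks_2samp(baseline, recent)
    return p_value < alpha

baseline = np.random.normal(0.0, 1.0, 5000)  # distribution seen at training time
recent = np.random.normal(0.5, 1.0, 5000)    # newly collected, shifted data
if drift_detected(baseline, recent):
    print("Drift detected: trigger a retraining pipeline run")
```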

Concept drift happens when there is a shift in the distribution. In this case, the distribution of the newly collected data (in blue) starts differing from the baseline distribution (in green)

Architect and build the full machine learning lifecycle with AWS: An end-to-end Amazon SageMaker demo

Moving from experimentation to production forces teams to move fast and automate their operations. Adopting scalable solutions for MLOps is a fundamental step to successfully create production-oriented ML processes.

This blog post provides an extended walkthrough of the ML lifecycle and explains how to optimize the process using Amazon SageMaker. Starting from data ingestion and exploration, you will see how to train your models and deploy them for inference. Then, you’ll make your operations consistent and scalable by architecting automated pipelines. This post offers a fraud detection use case so you can see how all of this can be used to put ML in production.

The ML lifecycle involves three macro steps: data preparation, training and tuning, and deployment with continuous monitoring

See you next time!

Thanks for reading! We’ll see you in a couple of weeks when we discuss how to secure your workloads in AWS.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Other posts in this series