Tag Archives: AWS Transit Gateway

Microservices discovery using Amazon EC2 and HashiCorp Consul

Post Syndicated from Marine Haddad original https://aws.amazon.com/blogs/architecture/microservices-discovery-using-amazon-ec2-and-hashicorp-consul/

These days, large organizations typically have microservices environments that span across cloud platforms, on-premises data centers, and colocation facilities. The reasons for this vary but frequently include latency, local support structures, and historic architectural decisions. However, due to the complex nature of these environments, efficient mechanisms for service discovery and configuration management must be implemented to support operations at scale. This is an issue also faced by Nomura.

Nomura is a global financial services group with an integrated network spanning over 30 countries and regions. By connecting markets East & West, Nomura services the needs of individuals, institutions, corporates, and governments through its three business divisions: Retail, Investment Management, and Wholesale (Global Markets and Investment Banking). E-Trading Strategy Foreign Exchange sits within Global Markets, and focuses on all quantitative analysis and technical aspects of electronic FX flows. The team builds out a number of innovative solutions for clients, all of which are needed to operate in an ultra-low latency environment to be competitive. The focus is to build high-quality engineered platforms that can handle all aspects of Nomura’s growing 24 hours a day, 5-and-a-half days a week FX business.

In this blog post, we share the solution we developed for Nomura and how you can build a service discovery mechanism that uses a hierarchical rule-based algorithm. We use the flexibility of Amazon Elastic Compute Cloud (Amazon EC2) and third-party software, such as SpringBoot and Consul. The algorithm supports features such as service discovery by service name, Domain Name System (DNS) latency, and custom-provided tags. This enables automated deployments, since services can auto-discover and connect with other services. Based on the provided tags, customers can implement environment boundaries so that a service doesn’t connect to an unintended service. Finally, we built a failover mechanism, so that if a service becomes unavailable, an alternative service is provided (based on given criteria).

After reading this post, you can use the provided assets in the open source repository to deploy the solution in your own sandbox environment. The Terraform and Java code that accompanies this post can be amended as needed to suit individual requirements.

Overview of solution

The solution is composed of a microservices platform that is spread over two different data centers, and a Consul cluster per data center. We use two Amazon Virtual Private Clouds (VPCs) to model geographically distributed Consul “data centers”. These VPCs are then connected via an AWS Transit Gateway. By permitting communication across the different data centers, the Consul clusters can form a wide-area network (WAN) and have visibility of service instances deployed to either. The SpringBoot microservices use the Spring Cloud Consul plugin to connect to the Consul cluster. We have built a custom configuration provider that uses Amazon EC2 instance metadata service to retrieve the configuration. The configuration provider mechanism is highly extensible, so anyone can build their own configuration provider.
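As a quick illustration of the WAN federation step (this is not taken from the solution’s Terraform, and the server address is a placeholder), a Consul server in one data center can be joined to a server in the other over the WAN:

# On a Consul server in DC1, federate with a server in DC2 over the WAN
consul join -wan 10.1.0.10

# Verify that servers from both data centers are federated
consul members -wan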

The major components of this solution are:

    • Sample microservices built using Java and SpringBoot, and deployed in Amazon EC2 with one microservice instance per EC2 instance
    • A Consul cluster per Region with one Consul agent per EC2 instance
    • A custom service discovery algorithm

Figure 1. Multi-VPC infrastructure architecture

A typical flow for a microservice would be to 1/ boot up, 2/ retrieve relevant information from the EC2 Instance Metadata Service (such as tags), and 3/ use it to register itself with Consul. Once a service is registered with Consul, it can discover services to integrate with, and it can be discovered by other services.
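As a minimal sketch of step 2 (assuming instance tags are enabled in the instance metadata options; the Environment tag key here is hypothetical), a service could query the metadata service with IMDSv2:

# Request an IMDSv2 session token (valid for six hours)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Retrieve the instance ID and a tag value to use during Consul registration
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/tags/instance/Environment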

An important component of this service discovery mechanism is a custom algorithm that performs service discovery based on the tags created when registering the service with Consul.


Figure 2. Service discovery flow

The service flow shown in Figure 2 is as follows:

  1. The Consul agent deployed on the instance registers to the local Consul cluster, and the service registers to its Consul agent.
  2. The Trading service looks up available Pricer services via API calls (a sample query is sketched after this list).
  3. The Consul agent returns the list of available Pricer services, so that the Trading service can query a Pricer service.
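For illustration, the lookup in step 2 can be reproduced against the local Consul agent’s standard HTTP API (this shows plain Consul discovery, not the custom precedence algorithm):

# List healthy instances of the Pricer service, including their tags and addresses
curl -s http://localhost:8500/v1/health/service/pricer?passing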

Walkthrough

Following are the steps required to deploy this solution:

  • Provision the infrastructure using Terraform. The application .jar file and the Consul configuration are deployed as part of it.
  • Test the solution.
  • Clean up AWS resources.

The steps are detailed in the next section, and the code can be found in this GitHub repository.

Prerequisites

Deployment steps

Note: The default AWS Region used in this deployment is ap-southeast-1. If you’re working in a different AWS Region, make sure to update it.

Clone the repository

First, clone the repository that contains all the deployment assets:

git clone https://github.com/aws-samples/geographical-hierarchical-service-lookup-with-consul-on-aws

Build Amazon Machine Images (AMIs)

1. Build the Consul Server AMI in AWS

Go to the ~/deployment/scripts/amis/consul-server/ directory and build the AMI by running:

packer build .

The output should look like this:

==>  Builds finished. The artifacts of successful builds are:

-->  amazon-ebs.ubuntu20-ami: AMIs were created:

ap-southeast-1: ami-12345678910

Make a note of the AMI ID. This will be used as part of the Terraform deployment.

2. Build the Consul Client AMI in AWS

Go to the ~/deployment/scripts/amis/consul-client/ directory and build the AMI by running:

packer build .

The output should look like this:

==> Builds finished. The artifacts of successful builds are:

--> amazon-ebs.ubuntu20-ami: AMIs were created:

ap-southeast-1: ami-12345678910

Make a note of the AMI ID. This will be used as part of the Terraform deployment.

Prepare the deployment

There are a few steps that must be accomplished before applying the Terraform configuration.

1. Update deployment variables

    • Go to the ~/deployment/ directory
    • In a text editor, edit the variable file template.var.tfvars.json by adding the variable values, including the AMI IDs of the Consul Server and Client images built previously

Note: The key pair name should be entered without the “.pem” extension.

2. Place the application .jar file in the root folder ~/deployment/

Deploy the solution

To deploy the solution, run the following commands from the terminal:

export VAR_FILE=template.var.tfvars.json

terraform init && terraform plan --var-file=$VAR_FILE -out plan.out

terraform apply plan.out

Validate the deployment

All the EC2 instances have been deployed with AWS Systems Manager access, so you can connect privately to the terminal using the AWS Systems Manager Session Manager feature.

To connect to an instance:

1. Select an instance

2. Click Connect

3. Go to the Session Manager tab

Using Session Manager, connect to one of the Consul servers and run the following commands:

consul members

This command shows you the list of all Consul servers and clients connected to this cluster.

consul members -wan

This command shows you the list of all Consul servers connected to this WAN environment.

To see the Consul User Interface:

1. Open your terminal and run:

aws ssm start-session --target <instanceID> --document-name AWS-StartPortForwardingSession --parameters '{"portNumber":["8500"],"localPortNumber":["8500"]}' --region <region>

Where <instanceID> is the instance ID of one of the Consul servers, and <region> is the AWS Region.

Using Systems Manager port forwarding allows you to connect privately to the instance via a browser.

2. Open a browser and go to http://localhost:8500/ui

3. Find the Management Token ID in AWS Secrets Manager in the AWS Management Console

4. Log in to the Consul UI using the Management Token ID

Test the solution

Connect to the trading instance and query the different services:

curl http://localhost:9090/v1/discover/service/pricer

curl http://localhost:9090/v1/discover/service/static-data

This deployment assumes that the Trading service queries the Pricer and Static-Data services, and that services are returned based on an order of precedence (see Table 1 following):

Service       Precedence   Customer   Cluster   Location   Environment
TRADING       1            ACME       ALPHA     DC1        DEV
PRICER        1            ACME       ALPHA     DC1        DEV
PRICER        2            ACME       ALPHA     DC2        DEV
PRICER        3            ACME       BETA      DC1        DEV
PRICER        4            ACME       BETA      DC2        DEV
PRICER        5            SHARED     ALPHA     DC1        DEV
PRICER        6            SHARED     ALPHA     DC2        DEV
STATIC-DATA   1            SHARED     SHARED    DC1        DEV
STATIC-DATA   2            SHARED     SHARED    DC2        DEV
STATIC-DATA   2            SHARED     BETA      DC2        DEV
STATIC-DATA   2            SHARED     GAMMA     DC2        DEV
STATIC-DATA   -1           STARK      ALPHA     DC1        DEV
STATIC-DATA   -1           ACME       BETA      DC2        PROD

Table 1. Service order of precedence

To test the solution, switch services on and off in the AWS Management Console and repeat the Trading queries to see where the traffic is redirected.

Cleaning up

To avoid incurring future charges, delete the solution by running the following from ~/deployment/ in the terminal:

terraform destroy --var-file=$VAR_FILE

Conclusion

In this post, we outlined the prevalent challenge of complex, globally distributed microservice architectures. We demonstrated how customers can build a hierarchical service discovery mechanism to support such an environment using a combination of Amazon EC2 and third-party software such as SpringBoot and Consul. Use the provided assets to test this solution in your sandbox environment and see whether it addresses your current challenges.


Simplify private network access for solutions using Amazon OpenSearch Service managed VPC endpoints

Post Syndicated from Aish Gunasekar original https://aws.amazon.com/blogs/big-data/simplify-private-network-access-for-solutions-using-amazon-opensearch-service-managed-vpc-endpoints/

Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite. Amazon OpenSearch Service offers the latest versions of OpenSearch, support for 19 versions of Elasticsearch (versions 1.5 to 7.10), as well as visualization capabilities powered by OpenSearch Dashboards and Kibana (versions 1.5 to 7.10). Amazon OpenSearch Service currently has tens of thousands of active customers with hundreds of thousands of clusters under management, processing trillions of requests per month.

To meet the needs of customers who want simplicity in their network setup with Amazon OpenSearch Service, you can now use Amazon OpenSearch Service-managed virtual private cloud (VPC) endpoints (powered by AWS PrivateLink) to connect your applications to Amazon OpenSearch Service domains launched in Amazon Virtual Private Cloud (Amazon VPC). With Amazon OpenSearch Service-managed VPC endpoints, you can privately access your Amazon OpenSearch Service domain from multiple VPCs in your account or other AWS accounts, based on your application needs, without configuring features such as VPC peering, AWS Transit Gateway, or other more complex network routing strategies that place operational burden on your support and engineering teams.

The feature is built using AWS PrivateLink. AWS PrivateLink provides private connectivity between VPCs, supported AWS services, and your on-premises networks without exposing your traffic to the public internet. It provides you with the means to connect multiple application deployments effortlessly to your Amazon OpenSearch Service domains.

This post introduces Amazon OpenSearch Service-managed VPC endpoints that build on top of AWS PrivateLink and shows how you can access a private Amazon OpenSearch Service from one or more VPCs hosted in the same account, or even VPCs hosted in other AWS accounts using AWS PrivateLink managed by Amazon OpenSearch Service.

Amazon OpenSearch Service managed VPC endpoints

Before the launch of Amazon OpenSearch Service managed VPC endpoints, if you needed to gain access to your domain outside of your VPC, you had three options:

  • Use VPC peering to connect your VPC with other VPCs
  • Use AWS Transit Gateway to connect your VPC with other VPCs
  • Create your own implementation of an AWS PrivateLink setup

The first two options require you to set up your VPCs so that the Classless Inter-Domain Routing (CIDR) block ranges don’t overlap. If they overlap, your options become more complicated. The third option, creating your own implementation of AWS PrivateLink, involves configuring a Network Load Balancer (NLB) and associating a target group with the NLB as one of the steps in the setup. The architecture discussed in this post demonstrates these additional layers of complexity.

With Amazon OpenSearch Service managed VPC endpoints (powered by AWS PrivateLink), these complex setups and processes are no longer needed!

You can access your Amazon OpenSearch Service private domain as if it were deployed in all the VPCs that you want to connect to your domain. If you need private connectivity from your on-premises hybrid deployments, AWS PrivateLink helps you extend access to your Amazon OpenSearch Service domain to your data centers with minimal effort.

By using AWS PrivateLink with Amazon OpenSearch Service, you can realize the following benefits:

  • You simplify your network architecture between hybrid, multi-VPC, and multi-account solutions
  • You address a multitude of compliance concerns by better controlling the traffic that moves between your solutions and Amazon OpenSearch Service domains

Shared search cluster for multiple development teams

Imagine that your company hosts a software as a service (SaaS) application that provides a search application programming interface (API) for the healthcare industry. Each team works on a different function of the API. The development teams API team 1 and API team 2 are in two different AWS accounts, and each has their own VPCs within these accounts. Another team (the data refinement team) works on the ingestion and data refinement to populate the Amazon OpenSearch Service domain, which is hosted in the same account as API team 2 but in a different VPC. Each team shares the domain during the development cycles to save costs and foster collaboration on the data modeling.

Solution overview

Self-managed AWS PrivateLink architecture to connect different VPCs

In this scenario, prior to Amazon OpenSearch Service managed VPC endpoints (powered by AWS PrivateLink), you would have to create the following items:

  1. Deploy an NLB in your VPC
  2. Create a target group that points to the IP addresses of the elastic network interfaces (ENIs) that Amazon OpenSearch Service creates in your VPC for the domain
  3. Create an AWS PrivateLink deployment and reference your newly created NLB

When you implement the NLB, a target group can only reference IP addresses, Amazon EC2 instances, or an Application Load Balancer (ALB). If you referenced the IP addresses as targets, then you had to build a process that detected changes in the IP addresses whenever the domain changed due to service-initiated or self-initiated blue/green deployments. You had to maintain yet another complex process to ensure that you always had active ENIs at which to point your target groups, or you lost connectivity.

Typically, customers use an AWS Lambda function with scheduled events in Amazon CloudWatch. The function detects the current state by finding the ENIs marked as active whose description matches the ENIs your domain creates. You schedule the function to wake up within the time to live (TTL) of the Domain Name System (DNS) settings (typically 60 seconds) and compare the existing IP addresses in the target group with any new ones found when you query all ENIs with a description referencing your domain in the VPC. You then build a new target group with the deltas, swap the target groups, and drop the old one. It’s tricky, it’s complex, and you have to maintain the solution!

With the new simplified networking architecture, your teams can instead follow the much simpler approach described next.

OpenSearch Service managed VPC endpoints architecture (powered by AWS PrivateLink)

Since the Amazon OpenSearch Service takes care of the infrastructure described previously — but not necessarily on the same implementation — all you really need to concern yourself with is creating the connections using the instructions in our service documentation.

Once you complete the steps in the instructions and remove your own implementation, your architecture is then simplified as seen in the following diagram.


At this point, the development teams (API team 1 and API team 2) can access the Amazon OpenSearch Service cluster via an Amazon OpenSearch Service managed VPC endpoint. This option is highly scalable, with a simplified network architecture in which you don’t have to worry about managing an NLB or setting up target groups and the additional resources. If the number of development teams and VPCs grows in the future, you simply associate the domain with additional interface VPC endpoints. You can access domains in VPCs in the same or different accounts, even if there are overlapping CIDR block IP ranges.
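As a rough sketch of that setup (the domain name, ARN, account, subnet, and security group IDs below are all placeholders), the domain owner authorizes a consuming account, and the consumer then creates the managed endpoint:

# In the domain owner's account: authorize another account to create endpoints
aws opensearch authorize-vpc-endpoint-access \
  --domain-name search-api \
  --account 111122223333

# In the consuming account: create the managed VPC endpoint for the domain
aws opensearch create-vpc-endpoint \
  --domain-arn arn:aws:es:us-east-1:444455556666:domain/search-api \
  --vpc-options "SubnetIds=subnet-0abc1234def567890,SecurityGroupIds=sg-0abc1234def567890"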

Conclusion

In this post, we walked through the architectural design of accessing an Amazon OpenSearch Service cluster from different VPCs across different accounts using an OpenSearch Service-managed VPC endpoint (AWS PrivateLink). Using Transit Gateway, self-managed AWS PrivateLink, or VPC peering required complex networking strategies that increased operational burden. With the introduction of managed VPC endpoints for Amazon OpenSearch Service, the complexity of your solutions is greatly reduced and, what’s even better, it’s managed for you!


About the authors

Aish Gunasekar is a Specialist Solutions architect with a focus on Amazon OpenSearch Service. Her passion at AWS is to help customers design highly scalable architectures and help them in their cloud adoption journey. Outside of work, she enjoys hiking and baking.

Kevin Fallis (@AWSCodeWarrior) is an AWS specialist search solutions architect.  His passion at AWS is to help customers leverage the correct mix of AWS services to achieve success for their business goals. His after-work activities include family, DIY projects, carpentry, playing drums, and all things music.

AWS and VMware Announce VMware Cloud on AWS integration with Amazon FSx for NetApp ONTAP

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/aws-and-vmware-announce-vmware-cloud-on-aws-integration-with-amazon-fsx-for-netapp-ontap/

Our customers are looking for cost-effective ways to continue to migrate their applications to the cloud. VMware Cloud on AWS is a fully managed, jointly engineered service that brings VMware’s enterprise-class, software-defined data center architecture to the cloud. VMware Cloud on AWS offers our customers the ability to run applications across operationally consistent VMware vSphere-based public, private, and hybrid cloud environments by bringing VMware’s Software-Defined Data Center (SDDC) to AWS.

In 2021, we announced the fully managed shared storage service Amazon FSx for NetApp ONTAP. This service provides our customers with access to the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, security, and resiliency of AWS, making it easier to migrate on-premises applications that rely on network-attached storage (NAS) appliances to AWS.

Today I’m excited to announce the general availability of VMware Cloud on AWS integration with Amazon FSx for NetApp ONTAP. Prior to this announcement, customers could only use VMware vSAN, where datastore capacity scales with compute. Now, they can scale storage independently, and SDDCs can be scaled with the additional storage capacity that is made possible by FSx for NetApp ONTAP.

Customers can already add storage to their SDDCs by purchasing additional hosts or by adding AWS native storage services such as Amazon S3, Amazon EFS, and Amazon FSx for providing storage to virtual machines (VMs) on existing hosts. You may be thinking that nothing about this announcement is new.

Well, with this amazing integration, our customers now have the flexibility to add an external datastore option to support their growing workload needs. If you are running into storage constraints or are continually met with unplanned storage demands, this integration provides a cost-effective way to incrementally add capacity without the need to purchase more hosts. By taking advantage of external datastores through FSx for NetApp ONTAP, you have the flexibility to add more storage capacity when your workloads require it.

An Overview of VMware Cloud on AWS Integration with Amazon FSx for NetApp ONTAP
There are two account connectivity options for enabling storage provisioned by FSx for NetApp ONTAP to be made available for mounting as a datastore to a VMware Cloud on AWS SDDC. Both options use a dedicated Amazon Virtual Private Cloud (Amazon VPC) for the FSx file system to prevent routing conflicts.

The first option is to create a new Amazon VPC under the same connected AWS account and have it connected with the VMware-owned Shadow VPC using VMware Transit Connect. The diagram below shows the architecture of this option:


The first option is to enable storage under the same AWS connected account

The second option is to create a new AWS account, which by default comes with an Amazon VPC for the Region. Similar to the first option, VMware Transit Connect is used to attach this new VPC with the VMware-owned Shadow VPC. Here is a diagram showing the architecture of this option:


The second option is to enable storage by creating a new AWS account

Getting Started with VMware Cloud on AWS Integration with Amazon FSx for NetApp ONTAP
The first step is to create an FSx for NetApp ONTAP file system in your AWS account. The steps that you will follow to do this are the same, whether you’re using the first or second path to provision and mount your NFS datastore.

  1. Open the Amazon FSx service page.
  2. On the dashboard, choose Create file system to start the file system creation wizard.
  3. On the Select file system type page, select Amazon FSx for NetApp ONTAP, and then click Next, which takes you to the Create ONTAP file system page. Here, select the Standard create method.

The following video shows a complete guide on how to create an FSx for NetApp ONTAP file system:

The same process can be found in this FSx for ONTAP User Guide.
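If you prefer the AWS CLI to the console wizard, a minimal sketch of an equivalent Standard create looks like the following (the capacity, throughput, and subnet values are illustrative only; adjust them to your needs):

aws fsx create-file-system \
  --file-system-type ONTAP \
  --storage-capacity 1024 \
  --subnet-ids subnet-0aaa1111bbb22222 subnet-0ccc3333ddd44444 \
  --ontap-configuration "DeploymentType=MULTI_AZ_1,ThroughputCapacity=512,PreferredSubnetId=subnet-0aaa1111bbb22222"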

After the file system is created, locate the NFS IP address under the Storage virtual machines tab. The NFS IP address is the floating IP that is used to manage access between file system nodes, and it is required for configuring VMware Transit Connect.

Location of the NFS IP address under the Storage virtual machines tab – AWS console

You are done with creating the FSx for NetApp ONTAP file system, and now you need to create an SDDC group and configure VMware Transit Connect. In order to do this, you need to navigate between the VMware Cloud Console and the AWS console.

Sign in to the VMware Cloud Console, then go to the SDDC page. Here locate the Actions button and select Create SDDC Group. Once you’ve done this, provide the required data for Name (in the following example I used “FSx SDDC Group” for the name) and Description. For Membership, only include the SDDC in question.

After the SDDC Group is created, it shows up in your list of SDDC Groups. Select the SDDC Group, and then go to the External VPC tab.


External VPC tab Add Account – VMC Console

Once you are in the External VPC tab, click the ADD ACCOUNT button, then provide the AWS account that was used to provision the FSx file system, and then click Add.

Now it’s time for you to go back to the AWS console and sign in to the same AWS account where you created your Amazon FSx file system. Here navigate to the Resource Access Manager service page and click the Accept resource share button.


Resource Access Manager service page to access the Accept resource share button – AWS console

Return to the VMC Console. By now, the External VPC is in an ASSOCIATED state. This can take several minutes to update.


External VPC tab – VMC Console

Next, you need to attach a Transit Gateway to the VPC. For this, navigate back to the AWS console. A step-by-step guide can be found in the AWS Transit Gateway documentation.

The following is an example that represents a typical architecture of a VPC attached to a Transit Gateway:


A typical architecture of a VPC attached to a Transit Gateway
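As an illustrative sketch (all IDs below are placeholders), the attachment itself is a single CLI call once you know the Transit Gateway, VPC, and subnet IDs:

aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0aaa1111bbb22222 subnet-0ccc3333ddd44444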

You are almost at the end of the process. You now need to accept the transit gateway attachment, and for this you will navigate back to the VMware Cloud Console.

Accept the Transit Gateway attachment as follows:

  1. Navigate back to the SDDC Group’s External VPC tab, select the AWS account ID used for creating your FSx for NetApp ONTAP file system, and click Accept. This process may take a few minutes.
  2. Next, you need to add the routes so that the SDDC can see the FSx file system. This is done on the same External VPC tab, where you will find a table with the VPC. In that table, there is a button called Add Routes. In the Add Route section, add two routes:
    1. The CIDR of the VPC where the FSx file system was deployed.
    2. The floating IP address of the file system.
  3. Click Done to complete the route task.

In the AWS console, create the route back to the SDDC by locating your VPC on the VPC service page and navigating to its route table, as seen below.


VPC service page Route Table navigation – AWS console
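A minimal CLI sketch of that return route, assuming placeholder IDs and an example SDDC Group CIDR of 10.10.0.0/16:

aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.10.0.0/16 \
  --transit-gateway-id tgw-0123456789abcdef0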

Ensure that you have the correct inbound rules for the SDDC Group CIDR: locate Security Groups under VPC, find the security group in use (it should be the default one), and allow inbound traffic from the SDDC Group CIDR.


Security Groups under VPC that are being used to allow the inbound rules for SDDC Group CIDR

Lastly, mount the NFS Datastore in the VMware Cloud Console as follows:

  1. Locate your SDDC.
  2. After selecting the SDDC, navigate to the Storage tab.
  3. Click Attach Datastore to mount the NFS volume(s).
  4. The next step is to select which hosts in the SDDC to mount the datastore to and click Mount to complete the task.

Attach a new datastore

Available Today
Amazon FSx for NetApp ONTAP is available today for VMware Cloud on AWS customers in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), South America (São Paulo), AWS GovCloud (US-East), and AWS GovCloud (US-West).

Veliswa x

AWS Week In Review – July 18, 2022

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-week-in-review-july-18-2022/

Last week, AWS Summit New York was held in person at the Javits Center with thousands of attendees and over 100 sponsors and partners. During the keynote, Martin Beeby, AWS Principal Developer Advocate, talked about how innovations in cloud infrastructure enable customers to adapt to challenges and seize new opportunities. It included Liz Fong-Jones‘s great story of migrating to AWS Graviton at Honeycomb and Elliott Cordo‘s story of improving pharmacy experiences at Capsule using AWS analytics and machine learning services.

Watch the full keynote video!

A Recap of AWS Summit NY Announcements
During the keynote, we announced the general availability of some new services:

Amazon Redshift Serverless – This serverless option lets you analyze data at any scale without having to manage data warehouse infrastructure. You can now create multiple serverless endpoints per AWS account and Region using namespaces and workgroups, and benefit from reduced serverless compute costs compared to the preview. To learn more, check out Danilo’s blog post, this demo video, and the latest episode of The Official AWS Podcast. We also introduced new features: row-level security (RLS), which implements fine-grained access to the rows in tables, and automated materialized views to lower query latency for repeatable workloads.

AWS Cloud WAN – This new network service makes it easy to build and operate wide area networks (WANs) that connect your data centers and branch offices, as well as multiple VPCs in multiple AWS Regions. To learn more, read Seb’s blog post.

Amazon DevOps Guru’s Log Anomaly Detection and Recommendations – This new feature identifies anomalies such as increased latency, error rates, and resource constraints within your app and then sends alerts with a description and actionable recommendations for remediation. To learn more, see the blog post by Donnie, a new News Blog writer.

Last Week’s Launches
Here are some other launches that caught my attention last week:

AWS AppConfig, a feature of AWS Systems Manager, makes it easy for customers to quickly and safely configure, validate, and deploy feature flags and application configuration. Now, we have announced AWS AppConfig Extensions, a new capability that allows customers to enhance and extend the capabilities of feature flags and dynamic runtime configuration data.

Available extensions at launch include AppConfig notification extensions that push messages about configuration updates to Amazon EventBridge, Amazon SNS, or Amazon SQS, and a Jira extension that tracks feature flag changes in AppConfig as Atlassian Jira issues. To get started, read Announcing AWS AppConfig Extensions and AppConfig Extensions.

Amazon VPC Flow Logs for Transit Gateway is a new capability that allows customers to gain deeper visibility and insights into network traffic on AWS Transit Gateway. With this feature, Transit Gateway can export detailed information, such as source/destination IPs, ports, protocols, traffic counters, timestamps, and various metadata for all of the network flows traversing the Transit Gateway. To learn more, read Introducing VPC Flow Logs for AWS Transit Gateway and Logging network traffic using Transit Gateway Flow Logs.

AWS Lambda Powertools for TypeScript is an open-source developer library that can help you incorporate Well-Architected Serverless best practices focusing on three observability features: distributed tracing (Tracer), structured logging (Logger), and asynchronous business and application metrics (Metrics). Powertools is also available in the Python and Java programming languages. To learn more, see the blog post Simplifying serverless best practices with AWS Lambda Powertools for TypeScript. You can submit feedback, ideas, and issues directly on our GitHub project.

AWS re:Post is a vibrant Q&A community that helps you become even more successful on AWS. You can now add a profile picture or avatar to your account and add inline images such as diagrams or screenshots to support your questions or answers. Add your profile picture and start using inline images today!

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some news, blog posts, and video series for you to know:

In July 2021, we notified users about the end of support for Internet Explorer 11, which is now approaching on July 31, 2022. The browser will no longer be supported in the AWS Management Console, web-based services such as Amazon QuickSight, Amazon Chime, Amazon Honeycode, and some other AWS websites. After that date, we can no longer guarantee that the features and webpages will function properly on IE 11. For more information, please visit AWS Supported Browsers.

In fall 2021, we began offering a free multi-factor authentication (MFA) security key to AWS account owners in the United States. Now eligible customers can order the free MFA security key through the ordering portal in the AWS Management Console. At this time, only U.S.-based AWS account root users who have spent more than $100 each month over the past 3 months are eligible to place an order. For more information, see our Free MFA Security Key page.

Amazon’s Machine Learning University expands with MLU Explains, a public website containing visual essays that incorporate fun animations and scrollytelling to explain machine learning concepts in an accessible manner. The following animation teaches the concepts of data splitting in machine learning using an example model that attempts to determine whether animals are cats or dogs. To learn more, read the Amazon Science blog post.

This is My Architecture is a video series that showcases innovative architectural solutions on the AWS Cloud by customers and partners. In June and July, over 15 episodes were updated, including GoDaddy, Riot Games, and Hudl. Each episode examines the most interesting and technically creative elements of each cloud architecture.

Upcoming AWS Events in August
Check your calendars and sign up for these AWS events:

AWS Summit – Registration is open for upcoming in-person AWS Summits that might be close to you in August: Sao Paulo (August 3–4), Anaheim (August 18), Taiwan (August 10–11), Chicago (August 28), and Canberra (August 31).

AWS Innovate – Data Edition – On August 23, learn how a modern data strategy can support your present and future use cases, including steps to build an end-to-end data solution to store and access, analyze and visualize, and even predict.

AWS Innovate – For Every Application Edition – On August 25, learn about a wide selection of AWS solutions across compute, storage, networking, hybrid, and edge infrastructure to help you scale application resources seamlessly and optimally.

Although these two Innovate events will be held in Asia Pacific and Japan time zones, you can view on-demand videos for two months following your registration.

If you’re interested in learning modern development practices live in New York City, I recommend joining AWS Solutions Day on August 10. I love that the advanced topics focus on building new web apps with Java, JavaScript, TypeScript, and GraphQL.

If you’re interested in learning AWS fundamentals and preparing for AWS Certifications, there are several virtual events in August, such as AWS Cloud Practitioner Essentials Day, AWS Technical Essentials Day, and Exam Readiness for AWS Certificates.

That’s all for this week. Check back next Monday for another Week in Review!

— Channy

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Creating a Multi-Region Application with AWS Services – Part 1, Compute and Security

Post Syndicated from Joe Chapman original https://aws.amazon.com/blogs/architecture/creating-a-multi-region-application-with-aws-services-part-1-compute-and-security/

Building a multi-Region application requires lots of preparation and work. Many AWS services have features to help you build and manage a multi-Region architecture, but identifying those capabilities across 200+ services can be overwhelming.

In this 3-part blog series, we’ll explore AWS services with features to assist you in building multi-Region applications. In Part 1, we’ll build a foundation with AWS security, networking, and compute services. In Part 2, we’ll add in data and replication strategies. Finally, in Part 3, we’ll look at the application and management layers.

Considerations before getting started

AWS Regions are built with multiple isolated and physically separate Availability Zones (AZs). This approach allows you to create highly available Well-Architected workloads that span AZs to achieve greater fault tolerance. There are three general reasons that you may need to expand beyond a single Region:

  • Expansion to a global audience: as an application grows and its user base becomes more geographically dispersed, there can be a need to reduce latencies for different parts of the world.
  • Reducing Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO) as part of a disaster recovery (DR) plan.
  • Local laws and regulations may have strict data residency and privacy requirements that must be followed.

Ensuring security, identity, and compliance

Creating a security foundation starts with proper authentication, authorization, and accounting to implement the principle of least privilege. AWS Identity and Access Management (IAM) operates in a global context by default. With IAM, you specify who can access which AWS resources and under what conditions. For workloads that use directory services, the AWS Directory Service for Microsoft Active Directory Enterprise Edition can be set up to automatically replicate directory data across Regions. This allows applications to reduce lookup latencies by using the closest directory and creates durability by spanning multiple Regions.

Applications that need to securely store, rotate, and audit secrets, such as database passwords, should use AWS Secrets Manager. It encrypts secrets with AWS Key Management Service (AWS KMS) keys and can replicate secrets to secondary Regions to ensure applications are able to obtain a secret in the closest Region.
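For example, replicating an existing secret to a secondary Region is a single call (the secret name and Region below are illustrative):

aws secretsmanager replicate-secret-to-regions \
  --secret-id prod/app/db-password \
  --add-replica-regions Region=us-west-2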

Encrypt everything all the time

AWS KMS can be used to encrypt data at rest, and is used extensively for encryption across AWS services. By default, keys are confined to a single Region. AWS KMS multi-Region keys can be created to replicate keys to a second Region, which eliminates the need to decrypt and re-encrypt data with a different key in each Region.
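As a sketch, a multi-Region primary key is created with the --multi-region flag and then replicated into a secondary Region (the key ID below is a placeholder; multi-Region key IDs begin with mrk-):

# Create a multi-Region primary key
aws kms create-key --multi-region --description "Application data key"

# Replicate the primary key into a secondary Region
aws kms replicate-key \
  --key-id mrk-1234abcd12ab34cd56ef1234567890ab \
  --replica-region us-west-2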

AWS CloudTrail logs user activity and API usage. Logs are created in each Region, but they can be centralized from multiple Regions and multiple accounts into a single Amazon Simple Storage Service (Amazon S3) bucket. As a best practice, these logs should be aggregated to an account that is only accessible to required security personnel to prevent misuse.

As your application expands to new Regions, AWS Security Hub can aggregate and link findings to a single Region to create a centralized view across accounts and Regions. These findings are continuously synced between Regions to keep you updated on global findings.

We put these features together in Figure 1.


Figure 1. Multi-Region security, identity, and compliance services

Building a global network

For resources launched into virtual networks in different Regions, Amazon Virtual Private Cloud (Amazon VPC) allows private routing between Regions and accounts with VPC peering. These resources can communicate using private IP addresses and do not require an internet gateway, VPN, or separate network appliances. This works well for smaller networks that only require a few peering connections. However, as the number of peered connections increases, the mesh of peered connections can become difficult to manage and troubleshoot.

AWS Transit Gateway can help reduce these difficulties by creating a central transitive hub to act as a cloud router. A Transit Gateway’s routing capabilities can expand to additional Regions with Transit Gateway inter-Region peering to create a globally distributed private network.
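A sketch of requesting and accepting an inter-Region peering attachment with the AWS CLI (all IDs, the account number, and the Regions are placeholders; the accept call runs in the peer Region):

aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --peer-transit-gateway-id tgw-0fedcba9876543210 \
  --peer-account-id 111122223333 \
  --peer-region us-west-2

# In the peer account and Region, accept the peering request
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
  --region us-west-2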

Building a reliable, cost-effective way to route users to distributed Internet applications requires highly available and scalable Domain Name System (DNS) records. Amazon Route 53 does exactly that.

Route 53 routing policies can route traffic to a record with the lowest latency, or automatically fail over a record. If a larger failure occurs, the Route 53 Application Recovery Controller can simplify the monitoring and failover process for application failures across Regions, AZs, and on-premises.

Amazon CloudFront’s content delivery network is truly global, built across 300+ points of presence (PoPs) spread throughout the world. Applications that have multiple possible origins, such as across Regions, can use CloudFront origin failover to automatically fail over the origin. CloudFront’s capabilities expand beyond serving content, with the ability to run compute at the edge. CloudFront Functions make it easy to run lightweight JavaScript functions, and AWS Lambda@Edge makes it easy to run Node.js and Python functions across these 300+ PoPs.

AWS Global Accelerator uses the AWS global network infrastructure to provide two static anycast IPs for your application. It automatically routes traffic to the closest Region deployment, and if a failure is detected it will automatically redirect traffic to a healthy endpoint within seconds.

Figure 2 brings these features together to create a global network across two Regions.


Figure 2. AWS VPC connectivity and content delivery

Building the compute layer

An Amazon Elastic Compute Cloud (Amazon EC2) instance is based on an Amazon Machine Image (AMI). An AMI specifies instance configurations such as the instance’s storage, launch permissions, and device mappings. When a new standard image needs to be created, EC2 Image Builder can be used to streamline copying AMIs to selected Regions.

Although EC2 instances and their associated Amazon Elastic Block Store (Amazon EBS) volumes live in a single AZ, Amazon Data Lifecycle Manager can automate the process of taking and copying EBS snapshots across Regions. This can enhance DR strategies by providing a relatively easy cold backup-and-restore option for EBS volumes.
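Amazon Data Lifecycle Manager automates this on a schedule; the underlying one-off operation it performs looks roughly like the following (the snapshot ID and Regions are placeholders):

aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --region us-west-2 \
  --description "Cross-Region EBS snapshot copy for DR"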

As an architecture expands into multiple Regions, it can become difficult to track where instances are provisioned. Amazon EC2 Global View helps solve this by providing a centralized dashboard to see Amazon EC2 resources such as instances, VPCs, subnets, security groups, and volumes in all active Regions.

Microservice-based applications that use containers benefit from quicker start-up times. Amazon Elastic Container Registry (Amazon ECR) can help ensure this happens consistently across Regions with private image replication at the registry level. An ECR private registry can be configured for either cross-Region or cross-account replication to ensure your images are ready in secondary Regions when needed.
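A sketch of turning on registry-level cross-Region replication (the account ID and destination Region are placeholders):

aws ecr put-replication-configuration \
  --replication-configuration '{"rules":[{"destinations":[{"region":"us-west-2","registryId":"111122223333"}]}]}'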

We bring these compute layer features together in Figure 3.


Figure 3. AMI and EBS snapshot copy across Regions

Summary

It’s important to create a solid foundation when architecting a multi-Region application. These foundations pave the way for you to move fast in a secure, reliable, and elastic way as you build out your application. In this post, we covered options across AWS security, networking, and compute services that have built-in functionality to take away some of the undifferentiated heavy lifting. We’ll cover data, application, and management services in future posts.

Ready to get started? We’ve chosen some AWS Solutions and AWS Blogs to help you!

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Use a City Planning Analogy to Visualize and Create your Cloud Architecture

Post Syndicated from Marwan Al Shawi original https://aws.amazon.com/blogs/architecture/use-a-city-planning-analogy-to-visualize-and-create-your-cloud-architecture/

If you are new to creating cloud architectures, you might find it a daunting undertaking. However, there is an approach that can help you define a cloud architecture pattern by using a similar construct. In this blog post, I will show you how to envision your cloud architecture using this structured and simplified approach.

Such an approach helps you to envision the architecture as a whole. You can then create reusable architecture patterns that can be used for scenarios with similar requirements. It also will help you define the more detailed technological requirements and interdependencies of the different architecture components.

First, I will briefly define what is meant by an architecture pattern and an architecture component.

Architecture pattern and components

An architecture pattern can be defined as a mechanism used to structure multiple functional components of a software or a technology solution to address predefined requirements. It can be characterized by use case and requirements, and should be tested and reusable whenever possible.

Architecture patterns can be composed of three main elements: the architecture components, the specific functions or capabilities of each component, and the connectivity among those components.

A component in the context of a technology solution architecture is a building block. Modular architecture is composed of a collection of these building blocks.

To think modularly, you must look at the overall technology solution. What is its intended function as a complete system? Then, break it down into smaller parts or components. Think about how each component communicates with others. Identify and define each block or component and its specific roles and function. Consider the technical operational responsibilities each is expected to deliver.

Cloud architecture patterns and the city planning analogy

Let’s assume a content marketing company wants to provide marketing analytics to its partners. It proposes a SaaS solution, offering an analytics dashboard on Amazon Web Services (AWS). This company may offer the same solution in other locations in the future.

How would you create a reusable architecture pattern for such a solution? To simplify the concept of a component and the architecture pattern, let’s use city planning as a frame of reference.

Subarchitectures or components

A city can be imagined as consisting of three organizing contexts or components:

  1. Overall City Architecture (the big picture)
  2. District Architecture
  3. Building Architecture

Let’s define each of these components or subarchitectures, and see how they correlate to an enterprise cloud architecture.

I. City Architecture consists of the city structures and the integrations of services required by the population, see Figure 1.


Figure 1. Oversimplified city layout

The overall anticipated capacity within a certain period must be calculated for roads, sewage, water, electricity grids, and the overall city layout. Typically, this structure should be built from the intended purpose or vision of the city. This can be the type of services it will offer, and the function of each district.

Think of City Architecture as the overall cloud architecture for your enterprise. Include the anticipated capacity, the layout (single Region, multi-Region), and the type and number of Amazon Virtual Private Clouds (VPCs). Decide how you will connect and integrate all these different architecture components.

The initial workflow that can be used to define the high-level architecture pattern layout of the SaaS solution example is analogous to the overall city architecture. We can define its three primary elements: architecture components, specific functions of each component, and the connectivity among those components.

  1. Production environment. The frontend and backend of your application. It provides the marketing data analytics dashboard.
  2. Testing and development environment. A replica of, but isolated from, the production app. Users’ traffic doesn’t pass through the security inspection layer.
  3. Security layer. Provides perimeter security inspection. Users’ traffic passes through the security inspection layer.

Translating this workflow into an AWS architecture, Figure 2 shows the analogous structure.

  • Single AWS Region (to be offered in a specific geographical area)
  • Amazon VPC to host the production application
  • Amazon VPC to host the test/dev application
  • Separate VPC (or a layer within a VPC) to provide security services for perimeter security inspection
  • Customer’s connectivity (for example, over public internet, or VPN)
  • AWS Transit Gateway (TGW) to connect and isolate the different components (VPCs and VPN)

Figure 2. Architecture pattern (high-level layout)

Domain-driven design

At this stage, you may also consider a domain-driven design (DDD). This is an approach to software development that centers on a domain model. With your DDD, you can break the solution into different bounded contexts. You can translate the business functions/capabilities into logical domains, and then define how they communicate.

Let’s use the same SaaS example and further analyze the requirements of the solution with the DDD approach in mind. The SaaS solution is offered to two types of industries: regulated, with specific security compliance requirements, and non-regulated. By translating this into logical domains, we can optimize the design to offer a more modular architecture. This will minimize the blast radius of the solution, as illustrated in Figure 3. Watch How AWS Minimizes the Blast Radius of Failures.


Figure 3. DDD-based architecture pattern (high-level layout)

Now let’s think of governmental boundaries within a city and among its districts. This can be analogous to AWS accounts structures and the trust boundaries among them. By applying this to the example preceding, the VPC with the security compliance requirements can be placed in a separate AWS account. Read Design principles for organizing your AWS accounts.

II. District Architecture consists of the structures and integrations required within a district to manage its buildings, see Figure 4.


Figure 4. City structure with districts

It illustrates how to connect/integrate back to the city-wide architecture. It should consider the overall anticipated capacity within each district.

For instance, a district can be designed based on the type of function/service it provides, such as residential district, leisure district, or business district.

Mapping this to cloud architecture, you can envision it as the more specific functions/services you are expecting from a certain block, component, or domain. Your architecture can be within one or multiple VPCs, as shown in Figure 5. The structure of a domain or block can vary by number of Availability Zones and VPCs, type of external access, compliance requirements, and the hosted application requirements. Each of these blocks serves a different function and requires different specifications. However, they all need to integrate back to the overall cloud and network architecture to provide a cohesive design.

The architect must define and specify clearly the communication model among the architecture components. You may further break the application architecture at the module level into microservices using the DDD approach. An example is the use of Micro-frontend Architectures on AWS.


Figure 5. Architecture module structure

III. Building Architecture refers to the buildings’ structures and standards required to deliver the specific properties/services within a district. It also must integrate back with the district architecture.

To apply this to your architecture, envision the specialized functions/capabilities you are expecting from your application within a module (subcomponents). What are the requirements needed for the application tiers? In this example, let’s assume that the VPC without security compliance requirements will use a frontend web tier on Amazon EC2. Its backend database will be Amazon Relational Database Service (RDS).

Each of these subcomponents must integrate with other components and modules, as well as with the public internet. For example, an AWS Application Load Balancer could handle connection requests from external users, with AWS Web Application Firewall (AWS WAF) used as the perimeter security layer. AWS Transit Gateway could connect to other modules (VPCs). NAT gateways could provide connectivity to the internet for the internal systems in a VPC (shown in Figure 6).


Figure 6. Architecture module and its subcomponents structure

Conclusion

The vision and goal of a city architecture can set the basis for districts’ architectures. In turn, the district architecture sets the basis of the building architecture within a district. Similarly, the targeted enterprise cloud architecture goal should set the key requirements of the building blocks (or functional components) of the architecture.

Each architecture block sets the requirements of the subcomponents. They collectively construct a system or module of a system, as illustrated in Figure 7.


Figure 7. Structure of cloud architecture requirements and interdependencies

As a next step, assess your architecture from both a scale and reliability perspective. Designing for scale alone is not enough. Reliable scalability should be always the targeted architectural attribute. Read Architecting for Reliable Scalability.

Augmenting VMware Cloud on AWS Workloads with Native AWS services

Post Syndicated from Talha Kalim original https://aws.amazon.com/blogs/architecture/augmenting-vmware-cloud-on-aws-workloads-with-native-aws-services/

VMware Cloud on AWS allows you to quickly migrate VMware workloads to a VMware-managed Software-Defined Data Center (SDDC) running in the AWS Cloud and extend your on-premises data centers without replatforming or refactoring applications.

You can use native AWS services with Virtual Machines (VMs) in the SDDC, to reduce operational overhead and lower your Total Cost of Ownership (TCO) while increasing your workload’s agility and scalability.

This post covers patterns for connectivity between native AWS services and VMware workloads. We also explore common integrations, including using AWS Cloud storage from an SDDC, securing VM workloads using AWS networking services, and using AWS databases and analytics services with workloads running in the SDDC.

Networking between SDDC and native AWS services

Establishing robust network connectivity with VMware Cloud SDDC VMs is critical to successfully integrating AWS services. This section shows you different options to connect the VMware SDDC with your native AWS account.

The simplest way to get started is to use AWS services in the connected Amazon Virtual Private Cloud (VPC) that is selected during the SDDC deployment process. Figure 1 shows this connectivity, which is automatically configured and available once the SDDC is deployed.


Figure 1. SDDC to Customer Account VPC connectivity configured at deployment

The SDDC Elastic Network Interface (ENI) allows you to connect to native AWS services within the connected VPC, but it doesn’t provide transitive routing beyond the connected VPC. For example, it will not connect the SDDC to other VPCs and the internet.

If you’re looking to connect to native AWS services in multiple accounts and VPCs in the same AWS Region, you have two connectivity options. These are explained in the following sections.

Attaching VPCs to VMware Transit Connect

When you need high-throughput connectivity in a multi-VPC environment, use VMware Transit Connect (VTGW), as shown in Figure 2.

Figure 2. Multi-account VPC connectivity through VMware Transit Connect VPC attachments

VTGW uses a VMware-managed AWS Transit Gateway to interconnect SDDCs within an SDDC group. It also allows you to attach VPCs in the same Region to the VTGW, providing them with connectivity to any SDDC within the SDDC group.

Connecting through an AWS Transit Gateway

To connect to your VPCs through an existing Transit Gateway in your account, use IPsec virtual private network (VPN) connections from the SDDC with Border Gateway Protocol (BGP)-based routing, as shown in Figure 3. Multiple IPsec tunnels to the Transit Gateway use equal-cost multi-path routing, which increases bandwidth by load-balancing traffic.
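A minimal Terraform sketch of such a connection follows. It assumes an existing Transit Gateway, and the SDDC VPN endpoint address and ASN are illustrative placeholders.

variable "transit_gateway_id" { type = string }

# The SDDC side of the IPsec tunnel (example address and ASN)
resource "aws_customer_gateway" "sddc" {
  bgp_asn    = 65000
  ip_address = "203.0.113.10"
  type       = "ipsec.1"
}

# BGP-routed VPN attachment to the existing Transit Gateway;
# dynamic routing is the default (static_routes_only = false)
resource "aws_vpn_connection" "sddc_to_tgw" {
  customer_gateway_id = aws_customer_gateway.sddc.id
  transit_gateway_id  = var.transit_gateway_id
  type                = "ipsec.1"
}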

Figure 3. Multi-account VPC connectivity through an AWS Transit Gateway

For scalable, high-throughput connectivity to an existing Transit Gateway, connect to the SDDC via a Transit VPC that is attached to the VTGW, as shown in Figure 3. You must manually configure the routes between the VPCs and the SDDC for this architecture.

In the following sections, we’ll show you how to use some of these connectivity options for common native AWS services integrations with VMware SDDC workloads.

Reducing TCO with Amazon EFS, Amazon FSx, and Amazon S3

As you are sizing your VMware Cloud on AWS SDDC, consider using AWS Cloud storage for VMs that provide files services or require object storage. Migrating these workloads to cloud storage like Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), or Amazon FSx can reduce your overall TCO through optimized SDDC sizing.

Additionally, you can reduce the undifferentiated heavy lifting involved with deploying and managing complex architectures for file services in VM disks. Figure 4 shows how these services integrate with VMs in the SDDC.

Figure 4. Connectivity examples for AWS Cloud storage services

We recommend connecting to your S3 buckets via the VPC gateway endpoint in the connected VPC. This is a more cost-effective approach because it avoids the data processing costs associated with a VTGW and AWS PrivateLink for Amazon S3.
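A gateway endpoint of this kind could be declared as in the following sketch; the Region in the service name and the variable names are assumptions for illustration.

variable "connected_vpc_id" { type = string }
variable "route_table_ids"  { type = list(string) }

# Gateway endpoint for S3 in the connected VPC; avoids per-GB data processing charges
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = var.connected_vpc_id
  service_name      = "com.amazonaws.us-east-1.s3"  # adjust to your Region
  vpc_endpoint_type = "Gateway"
  route_table_ids   = var.route_table_ids
}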

Similarly, the recommended approach for Amazon EFS and Amazon FSx is to deploy the services in the connected VPC for VM access through the SDDC elastic network interface. You can also connect to existing Amazon EFS and Amazon FSx file shares in other accounts and VPCs using a VTGW, but consider the data transfer costs first.
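As a sketch of that recommended pattern, the following defines an EFS file system with a mount target in the connected VPC, reachable from SDDC VMs over the SDDC elastic network interface. The subnet and security group IDs are assumed inputs.

variable "connected_subnet_id" { type = string }
variable "efs_sg_id"           { type = string }

resource "aws_efs_file_system" "shared" {
  encrypted = true
  tags = {
    Name = "sddc-shared-files"  # illustrative name
  }
}

# Mount target in the connected VPC; SDDC VMs mount it over NFS
resource "aws_efs_mount_target" "connected_vpc" {
  file_system_id  = aws_efs_file_system.shared.id
  subnet_id       = var.connected_subnet_id
  security_groups = [var.efs_sg_id]
}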

Integrating AWS networking and content delivery services

Using various AWS networking and content delivery services with VMware Cloud on AWS workloads will provide robust traffic management, security, and fast content delivery. Figure 5 shows how AWS networking and content delivery services integrate with workloads running on VMs.

Figure 5. Connectivity examples for AWS networking and content delivery services

Deploy Elastic Load Balancing (ELB) services in a VPC subnet that has network reachability to the SDDC VMs. This includes the connected VPC over the SDDC elastic network interface, a VPC attached via VTGW, and VPCs attached to a Transit Gateway connected to the SDDC.

Use VTGW connectivity when your design requires existing networking services in other VPCs, for example, a dedicated internet ingress/egress VPC. An internal ELB can also be used to load-balance traffic between services running in SDDC VMs and services running within AWS VPCs.
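One way to place SDDC VMs behind an internal load balancer is with IP targets, sketched below under assumed inputs; the VM address and resource names are illustrative.

variable "connected_vpc_id"     { type = string }
variable "connected_subnet_ids" { type = list(string) }

resource "aws_lb" "internal" {
  name               = "sddc-services"
  internal           = true
  load_balancer_type = "application"
  subnets            = var.connected_subnet_ids
}

# IP target group so SDDC VM addresses can be registered directly
resource "aws_lb_target_group" "sddc_vms" {
  name        = "sddc-vm-targets"
  port        = 443
  protocol    = "HTTPS"
  vpc_id      = var.connected_vpc_id
  target_type = "ip"
}

resource "aws_lb_target_group_attachment" "vm1" {
  target_group_arn  = aws_lb_target_group.sddc_vms.arn
  target_id         = "192.168.10.11"  # example SDDC VM address
  availability_zone = "all"            # required for IP targets outside the VPC CIDR
}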

Use Amazon CloudFront, a global content delivery service, to integrate with load balancers, S3 buckets for static content, or directly with publicly accessible SDDC VMs. Additionally, use Amazon Route 53 to provide public and private DNS services for VMware Cloud on AWS. Deploy services such as AWS WAF and AWS Shield to provide comprehensive network security for VMware workloads in the SDDC.

Integrating with AWS database and analytics services

Data is one of the most valuable assets in an organization, and databases are often the most demanding and critical workloads running in on-premises VMware environments.

A common customer pattern to reduce TCO for storage-heavy or memory-intensive databases is to use purpose-built Databases on AWS like Amazon Relational Database Service (RDS). Amazon RDS lets you migrate on-premises relational databases to the cloud and integrate them with SDDC VMs. Using AWS databases also reduces the operational overhead of tasks associated with managing availability, scalability, and disaster recovery (DR).
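A Multi-AZ RDS instance in the connected VPC, reachable from SDDC VMs, could be sketched like this; the identifier, sizing, and credentials handling are illustrative assumptions, not a production configuration.

variable "db_subnet_group" { type = string }
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_db_instance" "app" {
  identifier           = "sddc-app-db"   # illustrative name
  engine               = "mysql"
  instance_class       = "db.r6g.large"
  allocated_storage    = 100
  multi_az             = true            # standby in a second AZ, managed by RDS
  db_subnet_group_name = var.db_subnet_group
  username             = "admin"
  password             = var.db_password
  skip_final_snapshot  = true
}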

With AWS Analytics services integrations, you can take advantage of the close proximity of data within VMware Cloud on AWS data stores to gain meaningful insights from your business data. For example, you can use Amazon Redshift to create a data warehouse to run analytics at scale on relational data from transactional systems, operational databases, and line-of-business applications running within the SDDC.

Figure 6 shows integration options for AWS databases and analytics services with VMware Cloud on AWS VMs.

Figure 6. Connectivity examples for AWS Database and Analytics services

We recommend deploying and consuming database services in the connected VPC. If you have existing databases in other accounts or VPCs that require integration with VMware VMs, connect them using the VTGW.

Analytics services can involve ingesting large amounts of data from various sources, including from VMs within the SDDC, creating a significant amount of data traffic. In such scenarios, we recommend using the SDDC connected VPC to deploy any required interface endpoints for analytics services to achieve a cost-effective architecture.
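For example, an interface endpoint for Amazon Kinesis Data Streams in the connected VPC might look like the following sketch; the Region in the service name and the variable names are assumptions.

variable "connected_vpc_id"     { type = string }
variable "connected_subnet_ids" { type = list(string) }
variable "endpoint_sg_id"       { type = string }

# Interface endpoint keeps ingestion traffic from SDDC VMs on private connectivity
resource "aws_vpc_endpoint" "kinesis" {
  vpc_id              = var.connected_vpc_id
  service_name        = "com.amazonaws.us-east-1.kinesis-streams"  # adjust Region
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.connected_subnet_ids
  security_group_ids  = [var.endpoint_sg_id]
  private_dns_enabled = true
}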

Summary

VMware Cloud on AWS is one of the fastest ways to migrate on-premises VMware workloads to the cloud. In this blog post, we provided different architecture options for connecting the SDDC to native AWS services. This lets you evaluate your requirements to select the most cost-effective option for your workload.

The example integrations covered in this post are common AWS service integrations, including storage, network, and databases. They are a great starting point, but the possibilities are endless. Integrating services like Amazon Machine Learning (Amazon ML) and Serverless on AWS allows you to deliver innovative services to your users, often without having to refactor existing application backends running on VMware Cloud on AWS.

Additional Resources

If you need to integrate VMware Cloud on AWS with an AWS service, explore the following resources and reach out to us at AWS.

Overview of Data Transfer Costs for Common Architectures

Post Syndicated from Birender Pal original https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/

Data transfer charges are often overlooked while architecting a solution in AWS. Considering data transfer charges while making architectural decisions can help save costs. This blog post will help identify potential data transfer charges you may encounter while operating your workload on AWS. Service charges are out of scope for this blog, but should be carefully considered when designing any architecture.

Data transfer between AWS and internet

There is no charge for inbound data transfer across all services in all Regions. Data transfer from AWS to the internet is charged per service, with rates specific to the originating Region. Refer to the pricing pages for each service—for example, the pricing page for Amazon Elastic Compute Cloud (Amazon EC2)—for more details.

Data transfer within AWS

Data transfer within AWS could be from your workload to other AWS services, or it could be between different components of your workload.

Data transfer between your workload and other AWS services

When your workload accesses AWS services, you may incur data transfer charges.

Accessing services within the same AWS Region

If the internet gateway is used to access the public endpoint of the AWS services in the same Region (Figure 1 – Pattern 1), there are no data transfer charges. If a NAT gateway is used to access the same services (Figure 1 – Pattern 2), there is a data processing charge per gigabyte (GB) for data that passes through the gateway.

Figure 1. Accessing AWS services in same Region

Accessing services across AWS Regions

If your workload accesses services in different Regions (Figure 2), there is a charge for data transfer across Regions. The charge depends on the source and destination Region (as described on the Amazon EC2 Data Transfer pricing page).

Figure 2. Accessing AWS services in different Region

Data transfer within different components of your workload

Charges may apply if there is data transfer between different components of your workload. These charges vary depending on where the components are deployed.

Workload components in same AWS Region

Data transfer within the same Availability Zone is free. One way to achieve high availability for a workload is to deploy in multiple Availability Zones.

Consider a workload with two application servers running on Amazon EC2 and a database running on Amazon Relational Database Service (Amazon RDS) for MySQL (Figure 3). For high availability, each application server is deployed into a separate Availability Zone. Here, data transfer charges apply for cross-Availability Zone communication between the EC2 instances. Data transfer charges also apply between Amazon EC2 and Amazon RDS. Consult the Amazon RDS for MySQL pricing guide for more information.

Figure 3. Workload components across Availability Zones

To minimize the impact of a database instance failure, enable a multi-Availability Zone configuration within Amazon RDS to deploy a standby instance in a different Availability Zone. Replication between the primary and standby instances does not incur additional data transfer charges. However, data transfer charges will apply from any consumers outside the current primary instance Availability Zone. Refer to the Amazon RDS pricing page for more detail.

A common pattern is to deploy workloads across multiple VPCs in your AWS network. Two approaches to enabling VPC-to-VPC communication are VPC peering connections and AWS Transit Gateway. Data transfer over a VPC peering connection that stays within an Availability Zone is free. Data transfer over a VPC peering connection that crosses Availability Zones will incur a data transfer charge for ingress/egress traffic (Figure 4).

Figure 4. VPC peering connection

Transit Gateway can interconnect hundreds or thousands of VPCs (Figure 5). Cost elements for Transit Gateway include an hourly charge for each attached VPC, AWS Direct Connect, or AWS Site-to-Site VPN. Data processing charges apply for each GB sent from a VPC, Direct Connect, or VPN to Transit Gateway.
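The cost elements map directly onto the resources involved, as in this minimal Terraform sketch; the variable names are illustrative.

variable "spoke_a_vpc_id"     { type = string }
variable "spoke_a_subnet_ids" { type = list(string) }

resource "aws_ec2_transit_gateway" "hub" {
  description = "Regional interconnect hub"
}

# Each attachment accrues an hourly charge; each GB sent from the VPC
# to the Transit Gateway accrues a data processing charge
resource "aws_ec2_transit_gateway_vpc_attachment" "spoke_a" {
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = var.spoke_a_vpc_id
  subnet_ids         = var.spoke_a_subnet_ids
}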

Figure 5. VPC peering using Transit Gateway in same Region

Workload components in different AWS Regions

If workload components communicate across multiple Regions using VPC peering connections or Transit Gateway, additional data transfer charges apply. If the VPCs are peered across Regions, standard inter-Region data transfer charges will apply (Figure 6).

Figure 6. VPC peering across Regions

For peered Transit Gateways, you will incur data transfer charges on only one side of the peer. Data transfer charges do not apply for data sent from a peering attachment to a Transit Gateway. The data transfer for this cross-Region peering connection is in addition to the data transfer charges for the other attachments (Figure 7).

Figure 7. Transit Gateway peering across Regions

Data transfer between AWS and on-premises data centers

Data transfer will occur when your workload needs to access resources in your on-premises data center. There are two common options to help achieve this connectivity: Site-to-Site VPN and Direct Connect.

Data transfer over AWS Site-to-Site VPN

One option to connect workloads to an on-premises network is to use one or more Site-to-Site VPN connections (Figure 8 – Pattern 1). These charges include an hourly charge for the connection and a charge for data transferred from AWS. Refer to Site-to-Site VPN pricing for more details. Another option to connect multiple VPCs to an on-premises network is to use a Site-to-Site VPN connection to a Transit Gateway (Figure 8 – Pattern 2). The Site-to-Site VPN will be considered another attachment on the Transit Gateway. Standard Transit Gateway pricing applies.

Figure 8. Site-to-Site VPN patterns

Data transfer over AWS Direct Connect

Direct Connect can be used to connect workloads in AWS to on-premises networks. Direct Connect incurs a fee for each hour the connection port is used and data transfer charges for data flowing out of AWS. Data transfer into AWS is $0.00 per GB in all locations. The data transfer charges depend on the source Region and the Direct Connect provider location. Direct Connect can also connect to the Transit Gateway if multiple VPCs need to be connected (Figure 9). Direct Connect is considered another attachment on the Transit Gateway and standard Transit Gateway pricing applies. Refer to the Direct Connect pricing page for more details.

Figure 9. Direct Connect patterns

A Direct Connect gateway can be used to share a Direct Connect across multiple Regions. When using a Direct Connect gateway, there will be outbound data charges based on the source Region and Direct Connect location (Figure 10).

Figure 10. Direct Connect gateway

General tips

Data transfer charges apply based on the source, destination, and amount of traffic. Here are some general tips for when you start planning your architecture:

  • Avoid routing traffic over the internet when connecting to AWS services from within AWS by using VPC endpoints:
    • VPC gateway endpoints allow communication to Amazon S3 and Amazon DynamoDB without incurring data transfer charges.
    • VPC interface endpoints are available for some AWS services. This type of endpoint incurs hourly service charges and data transfer charges.
  • Use Direct Connect instead of the internet for sending data to on-premises networks.
  • Traffic that crosses an Availability Zone boundary typically incurs a data transfer charge. Use resources from the local Availability Zone whenever possible.
  • Traffic that crosses a Regional boundary will typically incur a data transfer charge. Avoid cross-Region data transfer unless your business case requires it.
  • Use the AWS Free Tier. Under certain circumstances, you may be able to test your workload free of charge.
  • Use the AWS Pricing Calculator to help estimate the data transfer costs for your solution.
  • Use a dashboard to better visualize data transfer charges – this workshop will show how.

Conclusion

AWS provides the ability to deploy across multiple Availability Zones and Regions. With a few clicks, you can create a distributed workload. As you increase your footprint across AWS, it helps to understand various data transfer charges that may apply. This blog post provided information to help you make an informed decision and explore different architectural patterns to save on data transfer costs.

Integrate AWS Network Firewall with your ISV Firewall Rulesets

Post Syndicated from Mony Kiem original https://aws.amazon.com/blogs/architecture/integrate-aws-network-firewall-with-your-isv-firewall-rulesets/

You may have requirements to leverage on-premises firewall technology in AWS by using your existing firewall implementation. As you move these workloads to AWS or launch new ones, you may replicate your existing on-premises firewall architecture. In this case, you can run partner firewall appliances, such as Palo Alto and Fortinet, on Amazon EC2 instances.

Ensure that the firewall and intrusion prevention system (IPS) rules that protect your on-premises data center will also protect your Amazon Virtual Private Cloud (VPC). These rules must be frequently updated to ensure protection against the latest security threats. Many enterprises do not want to manage multiple rulesets across their entire hybrid architecture.

AWS Network Firewall takes on this undifferentiated heavy lifting by providing a managed service that runs a fleet of firewall appliances and handles everything from patching to security updates. It uses Suricata, the free and open-source intrusion prevention system (IPS), for stateful inspection. Suricata is a network threat detection engine capable of real-time intrusion detection (IDS). It also provides inline intrusion prevention (IPS), network security monitoring (NSM), and offline packet capture processing. Customers can now import their existing IPS rules from firewall provider software that adheres to the open source Suricata standard. This enables a network security model for your hybrid architecture that minimizes operational overhead while achieving consistent protection.
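To make the import concrete, here is a minimal Terraform sketch of a stateful rule group carrying a single illustrative Suricata-compatible rule; an imported partner ruleset would be substituted for the rules_string contents, and the name and capacity are assumptions.

resource "aws_networkfirewall_rule_group" "imported_ips" {
  capacity = 100
  name     = "imported-suricata-rules"  # illustrative name
  type     = "STATEFUL"

  rule_group {
    rules_source {
      # One example rule; a vendor ruleset following the Suricata
      # standard would replace this string
      rules_string = <<-EOT
        drop tcp $EXTERNAL_NET any -> $HOME_NET 23 (msg:"Block inbound telnet"; sid:1000001; rev:1;)
      EOT
    }
  }
}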

Overview of AWS services used

The following are AWS services that are used in our solution. These are the fundamental building blocks of a hybrid architecture on AWS.

  • AWS Network Firewall (ANFW): a stateful, managed network firewall and intrusion detection and prevention service. You can filter network traffic at your VPC using AWS Network Firewall. AWS Network Firewall pricing is based on the number of firewalls deployed and the amount of traffic inspected. There are no upfront commitments, and you pay only for what you use.
  • AWS Transit Gateway (TGW): a network transit hub that you use to interconnect your virtual private clouds (VPCs) and on-premises networks. Transit Gateway enables customers to connect thousands of VPCs. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single Transit Gateway. This enables you to consolidate and control your organization’s entire AWS routing configuration in one place.
  • AWS Direct Connect, AWS Site-to-Site VPN, and Amazon VPC are other core components of this hybrid architecture.

Hybrid architecture with centralized network inspection

The example architecture in Figure 1 depicts the deployment model of a centralized network security architecture. It shows all inbound and outbound traffic flowing through a single VPC for inspection. The centralized inspection architecture incorporates the use of AWS Network Firewall deployed in an inspection VPC. All traffic is routed from other VPCs through AWS Transit Gateway (TGW). The threat intelligence rulesets are managed by a partner integration solution and can be automatically imported into AWS Network Firewall. This will allow you to use the same ruleset that is deployed on-premises. It will reduce inconsistent and manual processes to maintain and update the rules.

Figure 1. Centralized inspection architecture with AWS Network Firewall and imported rules

The partner integration with AWS Network Firewall (ANFW) will work for both a centralized and a distributed inspection architecture. The AWS Network Firewall service houses the rulesets, and you only need to deploy a firewall endpoint in the Availability Zone of your VPC. In the centralized architecture deployment, all traffic originating from the attached VPCs is routed to the TGW. On the TGW route table, all traffic is routed to the inspection VPC attachment ID. The route table associated with the subnet where the TGW ENI is created in the inspection VPC has a default route via the ANFW endpoint, and return traffic from the ANFW endpoint is routed back to the TGW. If your VPC firewall endpoint is deployed across multiple Availability Zones (AZ), use the TGW appliance mode to allow traffic flow symmetry. This ensures that return traffic is processed by the same AZ. For further details on how to set up your network routing, reference the Deployment models for AWS Network Firewall blog post.
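The route-table wiring described above could be expressed in Terraform roughly as follows; all IDs are assumed inputs, with the firewall endpoint ID taken from the deployed AWS Network Firewall.

variable "tgw_subnet_route_table_id"      { type = string }
variable "firewall_subnet_route_table_id" { type = string }
variable "firewall_endpoint_id"           { type = string }
variable "transit_gateway_id"             { type = string }

# Traffic arriving from the TGW is sent to the firewall endpoint for inspection
resource "aws_route" "to_firewall" {
  route_table_id         = var.tgw_subnet_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = var.firewall_endpoint_id
}

# Inspected traffic returns from the firewall subnet to the Transit Gateway
resource "aws_route" "back_to_tgw" {
  route_table_id         = var.firewall_subnet_route_table_id
  destination_cidr_block = "0.0.0.0/0"
  transit_gateway_id     = var.transit_gateway_id
}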

AWS Network Firewall partner integrations

Figure 1 depicts two partner integrations, Trend Micro and Fortinet. View this complete and latest list of partner integrations with AWS Network Firewall.

If you already use Trend Micro for your threat intelligence, you can leverage this deployment model to standardize your hybrid cloud security. Trend Micro enables you to deploy your AWS managed network infrastructure and pair it with partner-supported threat intelligence focused on detecting and disrupting malware in your environments. You just need to enable the Sharing capability on Trend Micro Cloud One. For further information, see these detailed instructions.

If you already use Fortinet's managed IPS rulesets, you can automatically deploy updated rulesets to AWS Network Firewall. This ensures consistent protection across your application landscape. For more details on this integration, visit the partner page.

Getting started with AWS Network Firewall

You can get started with this pattern through the following high-level steps, with links to detailed instructions along the way.

  1. Determine your current networking architecture and cross-reference it with the different deployment models supported by AWS Network Firewall. You can learn more about your different options in the blog Deployment models for AWS Network Firewall. The deployment model will determine how you set up your route tables and where you will deploy your AWS Network Firewall endpoint.
  2. Visit the AWS Network Firewall Partners page to confirm your provider’s integration with ANFW and follow the integration instructions from the partner’s documentation.
  3. Get started with AWS Network Firewall by visiting the Amazon VPC Console to create or import your firewall rules. You can group them into policies and apply them to the VPCs you want to protect per the developer guide.
  4. To start inspecting traffic, deploy your Network Firewall endpoint in your inspection VPC (a minimal sketch of steps 3 and 4 follows this list).
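The following Terraform sketch illustrates steps 3 and 4, assuming the stateful rule group from the earlier sketch and an inspection VPC whose IDs are provided as inputs; all names are illustrative.

variable "inspection_vpc_id"  { type = string }
variable "firewall_subnet_id" { type = string }

resource "aws_networkfirewall_firewall_policy" "base" {
  name = "base-policy"  # illustrative name

  firewall_policy {
    stateless_default_actions          = ["aws:forward_to_sfe"]
    stateless_fragment_default_actions = ["aws:forward_to_sfe"]

    # Reference the imported Suricata rule group sketched earlier
    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.imported_ips.arn
    }
  }
}

# Deploying the firewall creates the endpoint in the inspection VPC subnet
resource "aws_networkfirewall_firewall" "inspection" {
  name                = "inspection-vpc-firewall"
  firewall_policy_arn = aws_networkfirewall_firewall_policy.base.arn
  vpc_id              = var.inspection_vpc_id

  subnet_mapping {
    subnet_id = var.firewall_subnet_id
  }
}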

Conclusion

You may need to operate a hybrid architecture using the same firewall and IPS rules for both your on-premises and cloud networks. To implement these rules in the cloud, you can run partner firewall appliances on EC2 instances, but this model of operation requires some heavy lifting.

Instead, you can set up AWS Network Firewall quickly, and not worry about deploying and managing any infrastructure. AWS Network Firewall automatically scales with your organization's network traffic. AWS provides a flexible rules engine that enables you to define firewall rules to control this traffic. To simplify how organizations determine what rules to define, Fortinet and Trend Micro have made managed rulesets available through AWS Marketplace offerings. These can be deployed to your environment with a few clicks. These partners remove complexity for security teams so they can easily create and maintain rules to take full advantage of AWS Network Firewall.

New Whitepaper: Selecting & Designing Your Hybrid Connectivity Model

Post Syndicated from Santiago Freitas original https://aws.amazon.com/blogs/architecture/new-whitepaper-selecting-designing-your-hybrid-connectivity-model/

Introduction

Many organizations need to connect their on-premises data centers, remote sites, and the cloud. A hybrid network connects these different environments.

A modern organization uses an extensive array of IT resources. In the past, it was common to host these resources in an on-premises data center or a colocation facility. With the increased adoption of cloud computing, IT resources are delivered and consumed from cloud service providers over a network connection. In some cases, organizations have opted to migrate all existing IT resources to the cloud. In other cases, organizations maintain IT resources both on premises and in the cloud. In both cases, a common network is required to connect on-premises and cloud resources. Coexistence of on-premises and cloud resources is called "hybrid cloud" and the common network connecting them is referred to as a "hybrid network." Even if your organization keeps all of its IT resources in the cloud, it may still require hybrid connectivity to remote sites.

There are several connectivity models to choose from. Although having options adds flexibility, selecting the best option requires analysis of the business and technical requirements and the elimination of options that are not suitable. Requirements can be grouped across considerations such as security, time to deploy, performance, reliability, communication model, and scalability. Once requirements are carefully collected, analyzed, and considered, network and cloud architects identify applicable AWS hybrid network building blocks and solutions. To identify and select the optimal model(s), architects must understand the advantages and disadvantages of each model. There are also technical limitations that might cause an otherwise good model to be excluded.

Figure 1. Considerations covered in the whitepaper

A new whitepaper on Hybrid Connectivity describes AWS building blocks and the key things to consider when deciding which hybrid connectivity model is right for you. To help you determine the best solution for your business and technical requirements, we provide decision trees to guide you through the logical selection process as well as a customer use case to show how to apply the considerations and decision trees in practice.

Figure 2: Example Corp. Automotive connection type decision tree

Contributors

Contributors to this new whitepaper on Hybrid Connectivity are: Marwan Al Shawi, AWS Solutions Architect; Santiago Freitas, AWS Head of Technology; Evgeny Vaganov, AWS Specialist Solutions Architect – Networking; and Tom Adamski, AWS Specialist Solutions Architect – Networking. Special thanks to Stephen Bird, AWS Senior Program Manager – Content.