
Overview of Data Transfer Costs for Common Architectures

Post Syndicated from Birender Pal original https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/

Data transfer charges are often overlooked when architecting a solution on AWS, but considering them while making architectural decisions can help save costs. This blog post will help you identify potential data transfer charges you may encounter while operating your workload on AWS. Service charges are out of scope for this post, but should be carefully considered when designing any architecture.

Data transfer between AWS and internet

There is no charge for inbound data transfer across all services in all Regions. Data transfer from AWS to the internet is charged per service, with rates specific to the originating Region. Refer to the pricing pages for each service—for example, the pricing page for Amazon Elastic Compute Cloud (Amazon EC2)—for more details.

Data transfer within AWS

Data transfer within AWS could be from your workload to other AWS services, or it could be between different components of your workload.

Data transfer between your workload and other AWS services

When your workload accesses AWS services, you may incur data transfer charges.

Accessing services within the same AWS Region

If the internet gateway is used to access the public endpoint of the AWS services in the same Region (Figure 1 – Pattern 1), there are no data transfer charges. If a NAT gateway is used to access the same services (Figure 1 – Pattern 2), there is a data processing charge (per gigabyte (GB)) for data that passes through the gateway.

Figure 1. Accessing AWS services in same Region
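
To make the difference between the two patterns concrete, here is a quick back-of-the-envelope estimate in Python. The rates are illustrative placeholders only; check the NAT gateway pricing for your Region:

```python
# Rough monthly NAT gateway cost estimate. Rates below are illustrative
# placeholders -- look up the actual rates for your Region.
NAT_HOURLY_RATE = 0.045        # USD per NAT gateway hour (example)
NAT_PER_GB_PROCESSED = 0.045   # USD per GB of data processed (example)

def monthly_nat_cost(gb_processed: float, hours: float = 730.0) -> float:
    """Hourly charge plus per-GB data processing charge for one NAT gateway."""
    return hours * NAT_HOURLY_RATE + gb_processed * NAT_PER_GB_PROCESSED

# Pattern 2 with 10 TB/month through the NAT gateway, vs. Pattern 1 at $0:
print(f"${monthly_nat_cost(10_000):,.2f}")  # -> $482.85
```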

Accessing services across AWS Regions

If your workload accesses services in different Regions (Figure 2), there is a charge for data transfer across Regions. The charge depends on the source and destination Region (as described on the Amazon EC2 Data Transfer pricing page).

Figure 2. Accessing AWS services in different Region

Data transfer within different components of your workload

Charges may apply if there is data transfer between different components of your workload. These charges vary depending on where the components are deployed.

Workload components in same AWS Region

Data transfer within the same Availability Zone is free. One way to achieve high availability for a workload is to deploy in multiple Availability Zones.

Consider a workload with two application servers running on Amazon EC2 and a database running on Amazon Relational Database Service (Amazon RDS) for MySQL (Figure 3). For high availability, each application server is deployed into a separate Availability Zone. Here, data transfer charges apply for cross-Availability Zone communication between the EC2 instances. Data transfer charges also apply between Amazon EC2 and Amazon RDS. Consult the Amazon RDS for MySQL pricing guide for more information.

Figure 3. Workload components across Availability Zones

To minimize the impact of a database instance failure, enable a multi-Availability Zone configuration within Amazon RDS to deploy a standby instance in a different Availability Zone. Replication between the primary and standby instances does not incur additional data transfer charges. However, data transfer charges will apply for consumers outside the Availability Zone of the current primary instance. Refer to the Amazon RDS pricing page for more detail.

A common pattern is to deploy workloads across multiple VPCs in your AWS network. Two approaches to enabling VPC-to-VPC communication are VPC peering connections and AWS Transit Gateway. Data transfer over a VPC peering connection that stays within an Availability Zone is free. Data transfer over a VPC peering connection that crosses Availability Zones will incur a data transfer charge for ingress/egress traffic (Figure 4).

Figure 4. VPC peering connection

Transit Gateway can interconnect hundreds or thousands of VPCs (Figure 5). Cost elements for Transit Gateway include an hourly charge for each attached VPC, AWS Direct Connect, or AWS Site-to-Site VPN. Data processing charges apply for each GB sent from a VPC, Direct Connect, or VPN to Transit Gateway.

Figure 5. VPC peering using Transit Gateway in same Region
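
For reference, here is a minimal boto3 sketch of creating the Transit Gateway VPC attachment described above; all resource IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach a VPC to an existing Transit Gateway. Specify one subnet per
# Availability Zone the attachment should serve.
response = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
    TagSpecifications=[{
        "ResourceType": "transit-gateway-attachment",
        "Tags": [{"Key": "Name", "Value": "app-vpc-attachment"}],
    }],
)
print(response["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])
```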

Workload components in different AWS Regions

If workload components communicate across multiple Regions using VPC peering connections or Transit Gateway, additional data transfer charges apply. If the VPCs are peered across Regions, standard inter-Region data transfer charges will apply (Figure 6).

Figure 6. VPC peering across Regions

For peered Transit Gateways, you will incur data transfer charges on only one side of the peer. Data transfer charges do not apply for data sent from a peering attachment to a Transit Gateway. The data transfer for this cross-Region peering connection is in addition to the data transfer charges for the other attachments (Figure 7).

Figure 7. Transit Gateway peering across Regions

Data transfer between AWS and on-premises data centers

Data transfer will occur when your workload needs to access resources in your on-premises data center. There are two common options to help achieve this connectivity: Site-to-Site VPN and Direct Connect.

Data transfer over AWS Site-to-Site VPN

One option to connect workloads to an on-premises network is to use one or more Site-to-Site VPN connections (Figure 8 – Pattern 1). Site-to-Site VPN charges include an hourly charge for each connection and a charge for data transferred out of AWS. Refer to Site-to-Site VPN pricing for more details. Another option to connect multiple VPCs to an on-premises network is to use a Site-to-Site VPN connection to a Transit Gateway (Figure 8 – Pattern 2). The Site-to-Site VPN is considered another attachment on the Transit Gateway, and standard Transit Gateway pricing applies.

Figure 8. Site-to-Site VPN patterns

Data transfer over AWS Direct Connect

Direct Connect can be used to connect workloads in AWS to on-premises networks. Direct Connect incurs a fee for each hour the connection port is used and data transfer charges for data flowing out of AWS. Data transfer into AWS is $0.00 per GB in all locations. The data transfer charges depend on the source Region and the Direct Connect provider location. Direct Connect can also connect to the Transit Gateway if multiple VPCs need to be connected (Figure 9). Direct Connect is considered another attachment on the Transit Gateway and standard Transit Gateway pricing applies. Refer to the Direct Connect pricing page for more details.

Figure 9. Direct Connect patterns

A Direct Connect gateway can be used to share a Direct Connect across multiple Regions. When using a Direct Connect gateway, there will be outbound data charges based on the source Region and Direct Connect location (Figure 10).

Figure 10. Direct Connect gateway

General tips

Data transfer charges apply based on the source, destination, and amount of traffic. Here are some general tips for when you start planning your architecture:

  • Avoid routing traffic over the internet when connecting to AWS services from within AWS by using VPC endpoints (see the sketch after this list):
    • VPC gateway endpoints allow communication to Amazon S3 and Amazon DynamoDB without incurring data transfer charges.
    • VPC interface endpoints are available for some AWS services. This type of endpoint incurs hourly service charges and data transfer charges.
  • Use Direct Connect instead of the internet for sending data to on-premises networks.
  • Traffic that crosses an Availability Zone boundary typically incurs a data transfer charge. Use resources from the local Availability Zone whenever possible.
  • Traffic that crosses a Regional boundary will typically incur a data transfer charge. Avoid cross-Region data transfer unless your business case requires it.
  • Use the AWS Free Tier. Under certain circumstances, you may be able to test your workload free of charge.
  • Use the AWS Pricing Calculator to help estimate the data transfer costs for your solution.
  • Use a dashboard to better visualize data transfer charges – this workshop will show how.
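
As an illustration of the first tip, here is a minimal boto3 sketch that creates a gateway endpoint for Amazon S3; the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: traffic to S3 stays on the AWS network and avoids
# NAT gateway data processing charges.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],  # S3 prefix-list routes land here
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```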

Conclusion

AWS provides the ability to deploy across multiple Availability Zones and Regions. With a few clicks, you can create a distributed workload. As you increase your footprint across AWS, it helps to understand various data transfer charges that may apply. This blog post provided information to help you make an informed decision and explore different architectural patterns to save on data transfer costs.

Integrate AWS Network Firewall with your ISV Firewall Rulesets

Post Syndicated from Mony Kiem original https://aws.amazon.com/blogs/architecture/integrate-aws-network-firewall-with-your-isv-firewall-rulesets/

You may have requirements to leverage on-premises firewall technology in AWS by using your existing firewall implementation. As you move these workloads to AWS or launch new ones, you may replicate your existing on-premises firewall architecture. In this case, you can run partner appliances such as Palo Alto Networks and Fortinet firewalls on Amazon EC2 instances.

Ensure that the firewall and intrusion prevention system (IPS) rules that protect your on-premises data center will also protect your Amazon Virtual Private Cloud (VPC). These rules must be frequently updated to ensure protection against the latest security threats. Many enterprises do not want to manage multiple rulesets across their entire hybrid architecture.

AWS Network Firewall takes on this undifferentiated heavy lifting by providing a managed service that runs a fleet of firewall appliances and handles everything from patching to security updates. For stateful inspection, it uses Suricata, a free and open-source intrusion prevention system (IPS). Suricata is a network threat detection engine capable of real-time intrusion detection (IDS); it also provides inline intrusion prevention (IPS), network security monitoring (NSM), and offline packet capture processing. Customers can now import their existing IPS rules from firewall provider software that adheres to the open source Suricata standard. This enables a network security model for your hybrid architecture that minimizes operational overhead while achieving consistent protection.
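
As a rough sketch of what such an import can look like with boto3, the following creates a stateful rule group from a Suricata-compatible rules string. The rule, rule group name, and capacity are illustrative placeholders, not values from any particular vendor export:

```python
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# A Suricata-compatible rule string, e.g. exported from your existing IPS.
# This single rule is illustrative only.
suricata_rules = (
    'drop tcp $EXTERNAL_NET any -> $HOME_NET 23 '
    '(msg:"Block inbound telnet"; sid:1000001; rev:1;)'
)

response = nfw.create_rule_group(
    RuleGroupName="imported-ips-rules",  # placeholder name
    Type="STATEFUL",
    Capacity=100,  # size for the number of rules you import
    Rules=suricata_rules,
)
print(response["RuleGroupResponse"]["RuleGroupArn"])
```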

Overview of AWS services used

The following are AWS services that are used in our solution. These are the fundamental building blocks of a hybrid architecture on AWS.

  • AWS Network Firewall (ANFW): a stateful, managed network firewall and intrusion detection and prevention service. You can filter network traffic at your VPC using AWS Network Firewall. AWS Network Firewall pricing is based on the number of firewalls deployed and the amount of traffic inspected. There are no upfront commitments, and you pay only for what you use.
  • AWS Transit Gateway (TGW): a network transit hub that you use to interconnect your virtual private clouds (VPCs) and on-premises networks. Transit Gateway enables customers to connect thousands of VPCs. You can attach all your hybrid connectivity (VPN and Direct Connect connections) to a single Transit Gateway. This enables you to consolidate and control your organization’s entire AWS routing configuration in one place.
  • AWS Direct Connect, AWS Site-to-Site VPN, and Amazon VPC are other core components of this hybrid architecture.

Hybrid architecture with centralized network inspection

The example architecture in Figure 1 depicts the deployment model of a centralized network security architecture. It shows all inbound and outbound traffic flowing through a single VPC for inspection. The centralized inspection architecture incorporates the use of AWS Network Firewall deployed in an inspection VPC. All traffic is routed from other VPCs through AWS Transit Gateway (TGW). The threat intelligence rulesets are managed by a partner integration solution and can be automatically imported into AWS Network Firewall. This will allow you to use the same ruleset that is deployed on-premises. It will reduce inconsistent and manual processes to maintain and update the rules.

Figure 1. Centralized inspection architecture with AWS Network Firewall and imported rules

The partner integration with AWS Network Firewall (ANFW) works for both centralized and distributed inspection architectures. The AWS Network Firewall service houses the rulesets, and you only need to deploy a firewall endpoint in the Availability Zone of your VPC. In the centralized deployment, all traffic originating from the attached VPCs is routed to the TGW. On the TGW route table, all traffic is routed to the inspection VPC attachment ID. The route table associated with the subnet where the TGW ENI is created in the inspection VPC has a default route via the ANFW endpoint, and return traffic from the ANFW endpoint is routed back to the TGW. If your firewall endpoints are deployed across multiple Availability Zones (AZs), use TGW appliance mode to allow traffic flow symmetry; this ensures that return traffic is processed by the same AZ. For further details on how to set up your network routing, reference the Deployment models for AWS Network Firewall blog post.
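
A minimal boto3 sketch of the routing pieces described above, assuming placeholder IDs for the route table and the inspection VPC attachment:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# On a spoke VPC's TGW route table, send all traffic to the inspection
# VPC attachment.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId="tgw-rtb-0spoke0123456789a",
    TransitGatewayAttachmentId="tgw-attach-0inspection012",
)

# With firewall endpoints in multiple AZs, enable appliance mode on the
# inspection VPC attachment so each flow stays in one AZ in both directions.
ec2.modify_transit_gateway_vpc_attachment(
    TransitGatewayAttachmentId="tgw-attach-0inspection012",
    Options={"ApplianceModeSupport": "enable"},
)
```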

AWS Network Firewall partner integrations

Figure 1 depicts two partner integrations: Trend Micro and Fortinet. View the complete and latest list of partner integrations with AWS Network Firewall.

If you are already a user of Trend Micro for your threat intelligence, you can leverage this deployment model to standardize your hybrid cloud security. Trend Micro enables you to deploy your AWS managed network infrastructure and pair it with a partner-supported threat intelligence. This focuses on detecting and disrupting malware in your environments. You just need to enable the Sharing capability on Trend Micro Cloud One. For further information, see these detailed instructions.

For existing users of Fortinet that are using their managed IPS rulesets, you can automatically deploy updated IPS rule sets to AWS Network Firewall. This will ensure consistent protection across your applications landscape. For more details on this integration, visit the partner page.

Getting started with AWS Network Firewall

You can get started with this pattern through the following high-level steps, with links to detailed instructions along the way.

  1. Determine your current networking architecture and cross-reference it with the different deployment models supported by AWS Network Firewall. You can learn more about your options in the blog post Deployment models for AWS Network Firewall. The deployment model determines how you set up your route tables and where you deploy your AWS Network Firewall endpoint.
  2. Visit the AWS Network Firewall Partners page to confirm your provider’s integration with ANFW and follow the integration instructions from the partner’s documentation.
  3. Get started with AWS Network Firewall by visiting the Amazon VPC Console to create or import your firewall rules. You can group them into policies and apply them to the VPCs you want to protect per the developer guide.
  4. To start inspecting traffic, deploy your Network Firewall endpoint in your inspection VPC.

Conclusion

You may need to operate a hybrid architecture using the same firewall and IPS rules for both your on-premises and cloud networks. For implementing these rules in the cloud, you can run partner firewall appliances on EC2 instances. This model of operation requires some heavy lifting.

Instead, you can set up AWS Network Firewall quickly, and not worry about deploying and managing any infrastructure. AWS Network Firewall automatically scales with your organization's network traffic. AWS provides a flexible rules engine that enables you to define firewall rules to control this traffic. To simplify how organizations determine what rules to define, Fortinet and Trend Micro have made managed rulesets available through AWS Marketplace offerings. These can be deployed to your environment with a few clicks. These partners remove complexity for security teams so they can easily create and maintain rules to take full advantage of AWS Network Firewall.

New Whitepaper: Selecting & Designing Your Hybrid Connectivity Model

Post Syndicated from Santiago Freitas original https://aws.amazon.com/blogs/architecture/new-whitepaper-selecting-designing-your-hybrid-connectivity-model/

Introduction

Many organizations need to connect their on-premises data centers, remote sites, and the cloud. A hybrid network connects these different environments.

A modern organization uses an extensive array of IT resources. In the past, it was common to host these resources in an on-premises data center or a colocation facility. With the increased adoption of cloud computing, IT resources are delivered and consumed from cloud service providers over a network connection. In some cases, organizations have opted to migrate all existing IT resources to the cloud. In other cases, organizations maintain IT resources both on premises and in the cloud. In both cases, a common network is required to connect on-premises and cloud resources. Coexistence of on-premises and cloud resources is called "hybrid cloud" and the common network connecting them is referred to as a "hybrid network." Even if your organization keeps all of its IT resources in the cloud, it may still require hybrid connectivity to remote sites.

There are several connectivity models to choose from. Although having options adds flexibility, selecting the best option requires analysis of the business and technical requirements and the elimination of options that are not suitable. Requirements can be grouped into considerations such as security, time to deploy, performance, reliability, communication model, and scalability. Once requirements are carefully collected, analyzed, and considered, network and cloud architects identify applicable AWS hybrid network building blocks and solutions. To identify and select the optimal model(s), architects must understand the advantages and disadvantages of each model. There are also technical limitations that might cause an otherwise good model to be excluded.

Figure 1 – Considerations covered in the whitepaper

A new whitepaper on Hybrid Connectivity describes AWS building blocks and the key things to consider when deciding which hybrid connectivity model is right for you. To help you determine the best solution for your business and technical requirements, we provide decision trees to guide you through the logical selection process as well as a customer use case to show how to apply the considerations and decision trees in practice.

Figure 2: Example Corp. Automotive connection type decision tree

Contributors

Contributors to this new whitepaper on Hybrid Connectivity are: Marwan Al Shawi, AWS Solutions Architect; Santiago Freitas, AWS Head of Technology; Evgeny Vaganov, AWS Specialist Solutions Architect – Networking; and Tom Adamski, AWS Specialist Solutions Architect – Networking. Special thanks to Stephen Bird, AWS Senior Program Manager – Content.

Field Notes: Working with Route Tables in AWS Transit Gateway

Post Syndicated from Prabhakaran Thirumeni original https://aws.amazon.com/blogs/architecture/field-notes-working-with-route-tables-in-aws-transit-gateway/

An AWS Transit Gateway enables you to attach Amazon VPCs, AWS Site-to-Site (S2S) VPN, and AWS Direct Connect connections in the same Region, and route traffic between them. Transit Gateways are designed to be highly scalable and resilient. You can attach up to 5,000 VPCs to each gateway, and each attachment can handle up to 50 Gbits/second of bursty traffic.

In this post, I explain the packet flow when the source and destination networks are associated with the same or with different AWS Transit Gateway route tables. An AWS Transit Gateway route table includes dynamic routes, static routes, and blackhole routes. This routing operates at layer 3, where IP packets are sent to a specific next-hop attachment based on the destination IP addresses. You can create multiple route tables to separate network access. AWS Transit Gateway controls how traffic is routed among all the connected networks using route tables.

Architecture overview

To illustrate how AWS Transit Gateway route tables work, consider the following architecture with resources in multiple AWS accounts (Prod, Pre-Prod, Staging, Dev, and Network Service), all under the same AWS Organization.

Figure 1 – How different AWS accounts are connected via AWS Transit Gateway

Overview

I have created a transit gateway in the Network Service account and shared it with the other AWS accounts (Prod, Pre-Prod, Staging, Dev) in the AWS Organization using AWS Resource Access Manager (RAM).

AWS RAM enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. I created three Transit Gateway route tables: a Prod route table, a Network Service route table, and a Staging route table.

To attach VPCs, VPN, and Direct Connect connections in the same Region to the Transit Gateway, you need to create Transit Gateway attachments. The process is as follows:

  • When you attach a VPC to the Transit Gateway, you must specify one subnet in each Availability Zone to be used by the Transit Gateway to route traffic.
  • Create a connectivity subnet in each VPC and define the connectivity subnets for the Transit Gateway attachment.
  • The Transit Gateway places a network interface in the connectivity subnet using one IP address from the subnet. Specifying one subnet for an Availability Zone enables traffic to reach resources in the other subnets in that Availability Zone.
  • If an Availability Zone is not associated when you create the Transit Gateway attachment for a VPC, resources in that Availability Zone cannot reach the Transit Gateway.

Note: You can have up to 20 route tables per Transit Gateway and 10,000 routes per Transit Gateway.

Resources in AWS accounts and on-premises

We have an AWS S2S VPN connection between the on-premises network and the TGW in the Network Service account; it is attached to the TGW as TGW Attachment-4. The on-premises network is 172.16.0.0/16, with a workstation at 172.16.10.10.

The following table shows the VPC CIDR blocks in the different AWS accounts and the respective Transit Gateway attachments. Each VPC also has one test instance.

Account          | VPC CIDR     | TGW attachment   | Test instance
Prod             | 10.1.0.0/16  | TGW Attachment-1 | Instance-1 (10.1.10.10)
Pre-Prod         | 10.2.0.0/16  | TGW Attachment-2 | Instance-2 (10.2.10.10)
Network Service  | 10.3.0.0/16  | TGW Attachment-3 | Instance-3
Staging          | 10.4.0.0/16  | TGW Attachment-5 | Instance-4 (10.4.10.10)
Dev              | 10.5.0.0/16  | TGW Attachment-6 | Instance-5 (10.5.10.10)

VPC and VPN/Direct Connect attachments can dynamically propagate routes to a Transit Gateway route table. You can enable or disable route propagation for each Transit Gateway attachment. For a VPC attachment, the CIDR blocks of the VPC are propagated to the Transit Gateway route table. For a VPN/Direct Connect attachment, routes in the Transit Gateway route table propagate to your on-premises router/firewall using Border Gateway Protocol (BGP), and the prefixes advertised over the BGP session from the on-premises router/firewall are propagated to the Transit Gateway route table.

Transit Gateway Route Table Association

Transit Gateway attachments are associated with a Transit Gateway route table. An attachment can be associated with only one route table. However, an attachment can propagate its routes to one or more Transit Gateway route tables.

The following table shows each route table and its associated Transit Gateway attachments.

Route table                      | Associated attachments
Prod TGW route table             | TGW Attachment-1 (Prod VPC), TGW Attachment-2 (Pre-Prod VPC)
Staging TGW route table          | TGW Attachment-5 (Staging VPC), TGW Attachment-6 (Dev VPC)
Network Service TGW route table  | TGW Attachment-3 (Network Service VPC), TGW Attachment-4 (S2S VPN)

To show how packets flow within the same Transit Gateway route table and between different route tables, I walk through three scenarios in this post.

Important: For all scenarios, I added a static summary route for 10.0.0.0/8 to each VPC route table, so we don't need to change the subnet route tables every time we want to access resources in a different AWS account. Traffic is still controlled from the TGW route tables. This reduces operational overhead and lets you centrally manage all the connectivity.
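
A minimal boto3 sketch of adding that summary route, with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Add a 10.0.0.0/8 summary route to a subnet route table, with the
# Transit Gateway as the target.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/8",
    TransitGatewayId="tgw-0123456789abcdef0",
)
```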

Scenario 1: Packet flow between Prod account and Pre-Prod Account

You are trying to SSH to Instance-2 (Pre-Prod) from Instance-1 (Prod). The following image shows the packet flow for the first two steps of the TCP three-way handshake between the instances.

Figure 2 – Packet flow between Prod and Pre-Prod account

AWS Security Groups and NACLs are configured to allow communication for both instances.

The source IP address of instance-1 is 10.1.10.10, and the destination IP address of instance-2 is 10.2.10.10.

  1. The SYN packet is validated against the VPC route table associated with the subnet that has instance-1; the route table has a route for the network 10.0.0.0/8 that points to the Transit Gateway attachment. The packet is forwarded to the TGW attachment.
  2. The Transit Gateway receives the traffic on the Prod TGW route table, since the Transit Gateway attachment (TGW Attachment-1) is associated with the Prod TGW route table. The Prod TGW route table has a route for the network 10.2.0.0/16.
  3. The SYN packet is evaluated against the Prod Transit Gateway route table and forwarded to the Pre-Prod VPC. Then it is evaluated against the VPC route table of the TGW attachment subnet and the NACL of the connectivity subnet.
  4. The packet is forwarded from the Transit Gateway attachment ENI to instance-2 (10.2.10.10) after the NACL and security groups are evaluated. For the return traffic (SYN/ACK), the source IP is 10.2.10.10 and the destination IP is 10.1.10.10.
  5. The SYN/ACK packet is evaluated against the VPC route table associated with the subnet that has instance-2; the route table has a route for the network 10.0.0.0/8 that points to the TGW attachment. The packet is forwarded to the TGW attachment.
  6. The TGW receives the traffic on the Prod TGW route table, since the Transit Gateway attachment (TGW Attachment-2) is associated with the Prod TGW route table. The Prod TGW route table has a route for the network 10.1.0.0/16.
  7. The SYN/ACK packet is evaluated against the Prod TGW route table and forwarded to the Prod VPC. Then it is evaluated against the VPC route table of the Transit Gateway attachment subnet and its NACL.
  8. The SYN/ACK packet is forwarded from the TGW attachment ENI to instance-1 after the NACL and security group inbound states are evaluated.

Result: Instance-1 receives the SYN/ACK packet from instance-2 successfully.

This is the packet flow when both the source and destination networks are associated with the same Transit Gateway route table.

Scenario 2: Packet flow between Staging Account and Pre-Prod account

Pre-Prod account’s Transit Gateway attachment (TGW Attachment-2) is associated to Prod TGW Route table and Staging account’s TGW attachment (TGW Attachment-5) is associated to Staging Transit Gateway route table in Network Service account.

On a Transit Gateway route table, you can either add a static route or create a propagation from a Transit Gateway attachment so that its routes are propagated into the route table. In this post, I chose to create propagations.

On the Prod Transit Gateway route table, create a propagation and choose the Transit Gateway attachment (TGW Attachment-5); the VPC CIDR block 10.4.0.0/16 is propagated into the Prod TGW route table. A corresponding propagation of TGW Attachment-2 into the Staging Transit Gateway route table would propagate the VPC CIDR block 10.2.0.0/16 there, but it has not yet been created.

If you try to ping Instance-4 (Staging) from Instance-2 (Pre-Prod), the ICMP echo reply doesn't reach Instance-2 from Instance-4. Below is the packet flow for the ICMP echo request and echo reply between the instances.

Figure 3. Packet flow between Pre-Prod and Staging account

AWS Security Groups and NACLs are configured to allow communication for both instances.

The source IP address of Instance-2 is 10.2.10.10, and the destination IP address of Instance-4 is 10.4.10.10.

  1. The Echo Request packet is validated against the VPC route table associated with the subnet that has instance-2; the route table has a route for the network 10.0.0.0/8 that points to the Transit Gateway attachment. The packet is forwarded to the Transit Gateway attachment.
  2. The TGW receives the traffic on the Prod TGW route table, since the Transit Gateway attachment (TGW Attachment-2) is associated with the Prod TGW route table. The Prod TGW route table has a route for the network 10.4.0.0/16.
  3. The Echo Request packet is evaluated against the Prod TGW route table and forwarded to the Staging VPC. Then it is evaluated against the VPC route table of the Transit Gateway attachment subnet and the NACL of the connectivity subnet.
  4. The packet is forwarded from the TGW attachment ENI to instance-4 (10.4.10.10) after the NACL and security groups are evaluated. For the return traffic (Echo Reply), the source IP is 10.4.10.10 and the destination IP is 10.2.10.10.
  5. The Echo Reply packet is evaluated against the VPC route table associated with the subnet that has instance-4; the route table has a route for the network 10.0.0.0/8 that points to the Transit Gateway attachment. The packet is forwarded to the Transit Gateway attachment.
  6. The Transit Gateway receives the traffic on the Staging Transit Gateway route table, since the Transit Gateway attachment (TGW Attachment-5) is associated with the Staging Transit Gateway route table. The Staging Transit Gateway route table does not have a route for the network 10.2.0.0/16.
  7. The Echo Reply packet is evaluated against the Staging Transit Gateway route table, which does not have a route for the destination network 10.2.0.0/16. The packet is not forwarded to the Pre-Prod VPC.

Instance-2 is not able to receive the echo reply from Instance-4, since the Staging TGW route table does not have a route for Instance-2's network.

To have successful communication between the two Transit Gateway route tables, we must propagate routes into both the Prod and Staging Transit Gateway route tables.
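
A boto3 sketch of enabling the missing propagations; the route table and attachment IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Propagate the Pre-Prod attachment (TGW Attachment-2) into the Staging
# route table so return traffic has a route, and the Staging attachment
# (TGW Attachment-5) into the Prod route table.
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId="tgw-rtb-0staging012345678",
    TransitGatewayAttachmentId="tgw-attach-0preprod012345",
)
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId="tgw-rtb-0prod01234567890a",
    TransitGatewayAttachmentId="tgw-attach-0staging012345",
)
```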

Scenario 3: On-premises server needs to access resources in Dev AWS account.

The S2S VPN connection is configured as an attachment on the TGW (TGW Attachment-4) and is associated with the Network Service route table. The VPC in the Network Service account is also associated (TGW Attachment-3) with the Network Service route table, and it propagated its VPC CIDR block (10.3.0.0/16) into the Network Service TGW route table.

The VPN tunnel is up and the Border Gateway Protocol (BGP) session is established. BGP neighbors exchange routes via update messages. The TGW advertised the network 10.3.0.0/16 to on-premises and received the network 172.16.0.0/16 via BGP update messages. The Network Service route table has two propagated routes: 10.3.0.0/16 (VPC attachment) and 172.16.0.0/16 (S2S VPN attachment).

Note: The TGW advertised only 10.3.0.0/16 to on-premises, since that was the only route propagated in the Network Service Transit Gateway route table.

Suppose you SSH to instance-5 (Dev) from the on-premises server 172.16.10.10. Below is the packet flow for the first two steps of the TCP three-way handshake.

Figure 4. Packet flow between on-premises and Dev account

On the Network Service TGW route table, create a propagation and choose the Transit Gateway attachment (TGW Attachment-6); the route 10.5.0.0/16 is propagated into the Network Service Transit Gateway route table. The TGW then advertises the network 10.5.0.0/16 to on-premises via BGP update messages.

On the Staging TGW route table, create a propagation and choose the TGW attachment (TGW Attachment-4); the route 172.16.0.0/16 is propagated into the Staging TGW route table.

You must also update the VPC route table in the Dev account: VPC Console → Route Tables → select the route table belonging to the VPC and add the network 172.16.0.0/16 with the TGW as the target.

Below is the packet flow from the on-premises workstation to instance-5 in the Dev account. The security group and NACL are configured to allow communication with instance-5 in the Dev account, and the necessary TCP port is allowed on the on-premises router/firewall. The source IP address of the server is 172.16.10.10, and the destination IP of Instance-5 is 10.5.10.10.

  1. The SYN packet is validated against the on-premises router/firewall route table, which has received the route for 10.5.0.0/16 via BGP. The packet is forwarded to the TGW attachment over the S2S VPN tunnel.
  2. The TGW receives the traffic on the Network Service TGW route table, since the TGW attachment (TGW Attachment-4) is associated with the Network Service TGW route table. The Network Service TGW route table has a route for 10.5.0.0/16.
  3. The SYN packet is evaluated against the Network Service TGW route table and forwarded to the Dev VPC. Then it is evaluated against the VPC route table of the TGW attachment subnet and the NACL of the connectivity subnet.
  4. The packet is forwarded from the TGW attachment ENI to instance-5 (10.5.10.10) after the NACL and security groups are evaluated. For the return traffic, the source IP is 10.5.10.10 and the destination IP is 172.16.10.10.
  5. The SYN/ACK packet is evaluated against the VPC route table associated with the subnet that has instance-5; the route table has a route for the network 172.16.0.0/16 that points to the Transit Gateway attachment. The packet is forwarded to the Transit Gateway attachment.
  6. The Transit Gateway receives the traffic on the Staging TGW route table, since the TGW attachment (TGW Attachment-6) is associated with the Staging TGW route table. The Staging Transit Gateway route table has a route for the network 172.16.0.0/16.
  7. The SYN/ACK packet is forwarded to the on-premises router/firewall over the S2S VPN tunnel. The router/firewall forwards the packet to the workstation.

The SYN/ACK packet is returned successfully.

AWS Transit Gateway also has an option to add a blackhole static route to the TGW route table, which prevents the source attachment from reaching a specific route. If a static route matches the CIDR of a propagated route, the static route is preferred over the propagated route.
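
A boto3 sketch of adding a blackhole route; the route table ID and CIDR are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Drop all traffic destined for a CIDR by adding a blackhole static route.
ec2.create_transit_gateway_route(
    DestinationCidrBlock="10.99.0.0/16",
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    Blackhole=True,
)
```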

Conclusion

In this post, you learned how packets flow when the source and destination networks are associated with the same or with different TGW route tables. AWS Transit Gateway simplifies your network architecture, reduces operational overhead, and centralizes management of external connectivity. If you have a substantial number of VPCs, it makes access easier for the network team to manage in a growing environment.

For further reading, review the AWS Transit Gateway User guide.

If you have feedback about this post, submit comments in the Comments section below.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Improve VPN Network Performance of AWS Hybrid Cloud with Global Accelerator

Post Syndicated from Anandprasanna Gaitonde original https://aws.amazon.com/blogs/architecture/improve-vpn-network-performance-of-aws-hybrid-cloud-with-global-accelerator/

Introduction

Connecting on-premises data centers to AWS using AWS Site-to-Site VPN to support distributed applications is a common practice. With business expansion and acquisitions, your company's on-premises IT footprint may grow into various geographies, with multiple sites comprising on-premises data centers and colocation facilities. AWS Site-to-Site VPN supports throughput of up to 1.25 Gbps, although actual throughput can be lower for VPN connections in a different geolocation from the AWS Region, because the internet path between them has to traverse multiple networks. For globally distributed applications that interact with other applications and components located on-premises, these VPN connections can impact performance and user experience.

This blog post provides an architectural approach to improving the performance of such globally distributed applications. We’ll explain an architecture that utilizes AWS Global Accelerator to create highly performant connectivity in terms of latency and bandwidth for VPN connections that originate from distant geographies around the world. Using this architecture, you can optimize your inter-application traffic between remote sites and your AWS environment, which can lead to better application performance and customer experience.

Distributed application architecture in a hybrid cloud using VPN

The above figure shows a pictorial representation of a customer’s existing IT footprint spread across several locations in the U.S., Europe, and the Asia Pacific (APAC), while the AWS environment is set up in us-east-1 region. In this use case, a business application hosted in AWS has the following dependencies on remote data centers and is also accessed by remote corporate users:

  1. Communication with an application hosted in a data center in the EU
  2. Communication with a data center in the US, where corporate users access the AWS application over VPN
  3. Integration with a local API-based service in the APAC region

Site-to-Site VPN from a remote site to an AWS environment provides secure connectivity for this inter-application traffic, as well as traffic from users to the application. Sites closer to the us-east-1 region may see reasonably good network performance and latency. However, sites that are geographically remote may experience higher latencies and not-so-reliable network performance due to the number of network hops spanning multiple networks and possible congestion. In addition, varying network paths through the Internet backbone can also lead to increased latencies. This impacts the overall application performance, which can lead to an unsatisfactory customer experience.

Optimizing application performance with Accelerated VPN connections

The above diagram shows the business application hosted in a multi-VPC architecture on AWS, comprising a production VPC and a sandbox VPC, which is typical of customer environments. These VPCs are interconnected using AWS Transit Gateway, and the VPN connections from the three remote sites terminate at the AWS Transit Gateway as VPN attachments.

To improve the user experience for the application, the VPN attachments to the AWS Transit Gateway are enabled with a feature called Accelerated Site-to-Site VPN. With this feature enabled, AWS Global Accelerator routes traffic from an on-premises network to the AWS Edge location closest to your customer's gateway. It uses the AWS global network to route traffic through the AWS global backbone from the closest Edge location, thereby ensuring the traffic remains on the optimum network path. This translates into faster response times, increased throughput, and a better user experience, as described in this blog post about better performance for internet traffic with AWS Global Accelerator.

The Accelerated Site-to-Site VPN feature is enabled by creating accelerators that allow you to associate two Anycast static IPs from the Edge network. (Anycast is a network addressing and routing method that attributes a single IP address to multiple endpoints in a network.) These static IP addresses act as a fixed entry point to the VPN tunnel endpoints. This improves the availability and performance of your applications that need to interface with remote sites for their functionality. The above diagram shows three Edge locations, each one corresponding to the accelerators for each of the VPN connections. Since AWS Transit Gateway allows connectivity to multiple VPCs in your AWS environment, the benefit of improved network performance is extended to applications and workloads in VPCs connected to the transit gateway. This architecture scales as business demands and workloads continue to grow on AWS.

Configuring your VPN connections for the Acceleration

To make changes to your existing VPN, consider the following for enabling the acceleration:

  • If your existing VPN connections terminate on a VPN gateway, you will need to create an AWS Transit Gateway and create VPC attachments from the application VPC to the Transit Gateway.
  • Existing VPN connections on Transit Gateway can’t be modified to take advantage of the acceleration, so you will need to tear down existing connections and set up new ones in the AWS console as shown below. Then, configure your customer gateway device to use the new Site-to-Site VPN connection and delete the old Site-to-Site VPN connection.

Create VPN connection

For more information and steps, see Creating a transit gateway VPN attachment.
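
If you prefer to script the replacement connection rather than use the console, here is a minimal boto3 sketch; the customer gateway and Transit Gateway IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the replacement Site-to-Site VPN attached to the Transit Gateway
# with acceleration enabled (acceleration cannot be toggled on an existing
# connection).
response = ec2.create_vpn_connection(
    CustomerGatewayId="cgw-0123456789abcdef0",
    Type="ipsec.1",
    TransitGatewayId="tgw-0123456789abcdef0",
    Options={"EnableAcceleration": True},
)
print(response["VpnConnection"]["VpnConnectionId"])
```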

Accelerated VPN connections use two VPN tunnels per connection like a regular Site-to-Site VPN connection. For accelerated VPN connections, each tunnel uses a separate accelerator and a separate pool of IP addresses for the tunnel endpoint IP addresses. The IP addresses for the two VPN tunnels are selected from two separate network zones. This ensures high availability for your VPN connections and can handle any network disruptions within a particular zone. If an Edge location fails, the customer gateway can reinitiate the VPN tunnel to the same IP address and get connected to the nearest available Edge location, making it resilient. These are the outside IP addresses to which the customer gateway will connect, as shown below:

Outside IP addresses to which customer gateway will connect

Considerations

Accelerated VPN functionality provides benefits to architectures involved in communicating with remote data centers and on-premises locations, but there are some considerations to keep in mind:

  • Additional charges are involved due to the use of Global Accelerator when acceleration is enabled. Performance testing should be done to evaluate the benefit it provides to your application.
  • Don’t enable accelerated VPN when the customer gateway for your VPN connection is also in an AWS environment since that traffic already traverses through the AWS backbone.
  • Applications that require a consistent network performance and a dedicated private connection should consider moving to AWS Direct Connect.

From the AWS Region where your application resides, you can use the Global Accelerator Speed Comparison tool from those remote data centers to see Global Accelerator download speeds compared to direct internet downloads. Note that while the tool uses TCP, the VPN uses UDP protocol, meaning it’s not a performance test of a VPN connection. However, it will give you a reasonable indication of the performance improvement for your VPN.

Summary

As you start adopting the cloud and migrating workloads to the AWS platform, you’ll realize the inherent benefits of scalability, high availability, and security to create fault-tolerant and production-grade applications. During this transition, you will have hybrid cloud environments utilizing VPN connectivity. Accelerated Site-to-Site VPN connections can provide you with performance improvements for your application traffic. This is a good alternative until your traffic demands and architecture considerations mandate the use of a dedicated network path using AWS Direct Connect from your remote locations to AWS.

Leveraging AWS Global Backbone for Data Center Migration and Global Expansion

Post Syndicated from Santiago Freitas original https://aws.amazon.com/blogs/architecture/leveraging-aws-global-backbone-for-data-center-migration-and-global-expansion/

Many companies run their applications in data centers, server rooms, or space rented from colocation providers in multiple countries. These companies usually have a small number of large central data centers, where their core systems are hosted, and several smaller regional data centers. Offices in these countries require access to applications running in the local data centers, usually in the same country, as well as to applications running in the remote data centers. Companies have taken the approach of establishing a self-managed international wide area network (WAN), or contracting one as a service from a telecommunications provider, to enable connectivity between the different sites. As customers migrate workloads to AWS Regions, they need to maintain connectivity between their offices, AWS Regions, and existing on-premises data centers.

This blog post discusses architectures applicable for international data center migrations as well as to customers expanding their business to new countries. The proposed architectures enable access to both AWS and on-premises hosted applications. These architectures leverage the AWS global backbone for connectivity between customer sites in different countries and even continents.

Let’s look into a use case where a customer has their central data center that hosts its core systems located in London, United Kingdom. The customer has rented space from a colocation provider in Mumbai to run applications required to be hosted in India. They have an office in India where users need access to the applications running in their Mumbai data center as well as the core systems running in their London data center. Those different sites are interconnected by a global WAN as illustrated on the diagram below.

Figure 1: Initial architecture with a global WAN interconnecting customer’s sites

The customer then migrates their applications from their Mumbai data center to the AWS Mumbai region. Users from the customer’s offices in India require access to applications running in the AWS Mumbai Region as well as the core systems running in their London data center. To enable access to the applications hosted in the AWS Mumbai Region, the customer established a connection from their India offices to the AWS Mumbai region. These connections can leverage AWS Direct Connect (DX) or an AWS Site-to-Site VPN. We will also use AWS Transit Gateway (TGW) which allows for customer traffic to transit through AWS infrastructure. For the customer sites using AWS Direct Connect, we attach an AWS Transit Gateway to a Direct Connect gateway (DXGW) to enable customers to manage a single connection for multiple VPCs or VPNs that are in the same region. To optimize their WAN cost, the customer leverages AWS Transit Gateway inter-region peering capability to connect their AWS Transit Gateway in the AWS Mumbai region to their AWS Transit Gateway in the AWS London region. Traffic using inter-region Transit Gateway peering is always encrypted, stays on the AWS global network, and never traverses the public Internet. Transit Gateway peering enables international, in this case intercontinental, communication. Once the traffic arrives at the London region’s Transit Gateway, the customer routes the traffic over an AWS Direct Connect (or VPN) to the central data center, where core systems are hosted.
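
A boto3 sketch of setting up the inter-region peering described above (the account ID, TGW IDs, route table ID, and CIDR are placeholders). Note that peering attachments do not propagate routes, so static routes are required on each side:

```python
import boto3

# Peer the Mumbai TGW with the London TGW, then accept on the London side.
mumbai = boto3.client("ec2", region_name="ap-south-1")
london = boto3.client("ec2", region_name="eu-west-2")

peering = mumbai.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0aaaa1111bbbb2222c",
    PeerTransitGatewayId="tgw-0dddd3333eeee4444f",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-2",
)
attachment_id = peering["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# The peer Region must accept the attachment before traffic can flow.
london.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment_id
)

# Peering attachments don't propagate routes; add a static route toward the
# London-side networks (and the mirror route on the London TGW route table).
mumbai.create_transit_gateway_route(
    DestinationCidrBlock="10.100.0.0/16",  # example London-side CIDR
    TransitGatewayRouteTableId="tgw-rtb-0mumbai012345678",
    TransitGatewayAttachmentId=attachment_id,
)
```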

As applications are migrated from the central data center in London to the AWS London Region, users from the India offices are able to seamlessly access applications hosted in the AWS London Region and on-premises. The architecture below shows the traffic between the customer sites, as well as from a customer site to a local and a remote AWS Region.

Figure 2: Access from customer sites to applications in AWS regions and on-premises via AWS Global Network

As the customer expands internationally, the architecture evolves to allow access from new international offices, such as Sydney and Singapore, to the other customer sites as well as to AWS Regions via the AWS global network. Depending on the bandwidth requirements, a customer can use AWS DX to the nearest AWS Region and then leverage AWS Transit Gateway inter-region peering, as demonstrated in the diagram below for the Singapore site. For sites where a VPN-based connection meets the bandwidth and user-experience requirements, the customer can leverage accelerated Site-to-Site VPN using AWS Global Accelerator, as illustrated for the Sydney office. This architecture allows thousands of sites to be interconnected and to use the AWS global network to access applications running on-premises or in AWS.

Figure 3: Global connectivity facilitated by AWS Global Network

Considerations

The following are some of the characteristics customers should consider when adopting the architectures described in this blog post.

  • There is a fixed hourly cost for TGW attachments and for VPN and DX connections.
  • There is also a variable usage-based component that depends on the amount of traffic that flows through the TGW, out of AWS, and between Regions.
  • In comparison, a fixed price model is often offered by telecommunications providers for the entire network.

For customers with a high number of sites in the same geographical area, consider setting up a regional WAN using SD-WAN technologies or private WAN connections. The regional WAN interconnects the different sites, with the nearest AWS Region also connected to it. Such a design uses the AWS global network for international connectivity and the regional WAN for regional connectivity between customer sites.

Conclusion

As customers migrate their applications to AWS, they can leverage the AWS global network to optimize their WAN architecture and associated costs. Leveraging TGW inter-region peering enables customers to build architectures that facilitate data center migration as well as international business expansion, while allowing access to workloads running either on-premises or in AWS Regions. For a list of AWS Regions where TGW inter-region peering is supported, please refer to the AWS Transit Gateway FAQ.

What is a cyber range and how do you build one on AWS?

Post Syndicated from John Formento, Jr. original https://aws.amazon.com/blogs/security/what-is-cyber-range-how-do-you-build-one-aws/

In this post, we provide advice on how you can build a cyber range using AWS services.

Conducting security incident simulations is a valuable exercise for organizations. As described in the AWS Security Incident Response Guide, security incident response simulations (SIRS) are useful tools to improve how an organization handles security events. These simulations can be tabletop sessions, individualized labs, or full team exercises conducted using a cyber range.

A cyber range is an isolated virtual environment used by security engineers, researchers, and enthusiasts to practice their craft and experiment with new techniques. Traditionally, these ranges were developed on premises, but on-prem ranges can be expensive to build and maintain (and do not reflect the new realities of cloud architectures).

In this post, we share concepts for building a cyber range in AWS. First, we cover the networking components of a cyber range, then how to control access to the cyber range. We also explain how to make the exercise realistic and how to integrate your own tools into a cyber range on AWS. As we go through each component, we relate them back to AWS services.

Using AWS to build a cyber range provides flexibility, since you only pay for services while they are in use. Environments can also be templated to a certain degree, making creation and teardown easier. This allows you to iterate on your design and build a cyber range that is as close to your production environment as possible.

Networking

Designing the network architecture is a critical component of building a cyber range. The cyber range must be an isolated environment so you have full control over it and keep the live environment safe. The purpose of the cyber range is to be able to play with various types of malware and malicious code, so keeping it separate from live environments is necessary. That being said, the range should simulate closely or even replicate real-world environments that include applications on the public internet, as well as internal systems and defenses.

There are several services you can use to create an isolated cyber range in AWS:

  • Amazon Virtual Private Cloud (Amazon VPC), which lets you provision a logically isolated section of AWS. This is where you can launch AWS resources in a virtual network that you define.
    • Traffic mirroring is an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of Amazon Elastic Compute Cloud (Amazon EC2) instances.
  • AWS Transit Gateway, which is a service that enables you to connect your Amazon VPCs and your on-premises networks to a single gateway.
  • Interface VPC endpoints, which enable you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink. You’re able to do this without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Amazon VPC provides the fundamental building blocks to create the isolated software-defined network for your cyber range. With a VPC, you have fine-grained control over IP CIDR ranges, subnets, and routing. When replicating real-world environments, you want the ability to establish communications between multiple VPCs. By using AWS Transit Gateway, you can add or subtract an environment from your cyber range and route traffic between VPCs. Since the cyber range is isolated from the public internet, you need a way to connect to the various AWS services without leaving the cyber range's isolated network. VPC endpoints can be created for these AWS services so they can function without an internet connection.

A network TAP (test access point) is an external monitoring device that mirrors traffic passing between network nodes. A network TAP can be hardware or virtual; it sends traffic to security and monitoring tools. Though it's unconventional in a typical architecture, routing all traffic through an EC2 Nitro instance enables you to use traffic mirroring to provide the network TAP functionality.
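
To illustrate, here is a boto3 sketch of wiring up traffic mirroring from a monitored ENI to the TAP instance's ENI; all IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mirror traffic from a monitored instance's ENI to the TAP instance's ENI.
# Mirror sources must be Nitro-based instances.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0tap0123456789abcd",
)

tm_filter = ec2.create_traffic_mirror_filter()
filter_id = tm_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"]

# Accept everything in both directions for full packet capture.
for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filter_id,
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0source0123456789a",  # ENI being monitored
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
```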

Accessing the system

Due to the isolated nature of a cyber range, administrators and participants cannot rely on typical tools to connect to resources, such as the secure shell (SSH) protocol and Microsoft remote desktop protocol (RDP). However, there are several AWS services that help achieve this goal, depending on the type of access role.

There are typically two types of access roles for a cyber range: an administrator and a participant. The administrator – sometimes called the Black Team – is responsible for designing, building, and maintaining the environment. The participant – often called the Red Team (attacking), Blue Team (defending), or Purple Team (both) – then plays within the simulated environment.

For the administrator role, the following AWS services would be useful:

  • Amazon EC2, which provides scalable computing capacity on AWS.
  • AWS Systems Manager, which gives you visibility into and control of your infrastructure on AWS.

The administrator can use Systems Manager to establish SSH or RDP sessions, or to run commands manually or on a schedule. Files can be transferred in and out of the environment using a combination of Amazon S3 and VPC endpoints. Amazon EC2 can host all the web applications and security tools used by the participants, which are managed by the administrators with Systems Manager.
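
For example, a minimal boto3 sketch of running a command on a range instance through Systems Manager; the instance ID and commands are placeholders:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Run a command on an isolated range instance without SSH. The instance only
# needs the SSM agent plus VPC endpoints for the ssm, ssmmessages, and
# ec2messages services.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["systemctl restart suricata"]},
)
print(response["Command"]["CommandId"])
```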

For the participant role, the most useful AWS service is Amazon WorkSpaces, which is a managed, secure cloud desktop service.

The participants are provided with virtual desktops that are contained in the isolated cyber range. Here, they either initiate an attack on resources in the environment or defend the attack. With Amazon WorkSpaces, participants can use the same operating system environment they are accustomed to in the real-world, while still being fully controlled and isolated within the cyber range.

Realism

A cyber range must be realistic enough to provide a satisfactory experience for its participants. This realism must factor in tactics, techniques, procedures, communications, toolsets, and more. Constructing a cyber range on AWS gives the builder full control of the environment. It also means the environment is repeatable and auditable.

It is important to replicate the toolsets that are used in real life. By creating reusable "golden" Amazon Machine Images (AMIs) that contain the tools and configurations typically installed on machines, a builder can easily slot the appropriate systems into an environment. You can also use AWS PrivateLink – a feature of Amazon VPC – to establish connections from your isolated environment to customer-defined outside tools.
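
A minimal sketch of that golden AMI workflow with boto3, assuming a placeholder builder instance and range subnet:

  import boto3

  ec2 = boto3.client("ec2")

  # Capture a configured builder instance as a reusable "golden" AMI.
  image = ec2.create_image(
      InstanceId="i-0goldenbuilder",  # placeholder builder instance
      Name="cyber-range-toolset-v1",
      Description="Participant toolset baseline",
  )

  # Wait until the image is available, then launch copies into the range.
  ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])
  ec2.run_instances(
      ImageId=image["ImageId"],
      InstanceType="t3.medium",
      MinCount=1,
      MaxCount=5,
      SubnetId="subnet-0rangeexample",  # placeholder range subnet
  )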

To emulate the tactics of an adversary, you can replicate a scaled-down, functional copy of the internet. Using private hosted zones in Amazon Route 53, you can use a "." record to control name resolution for the entire "internet". Alternatively, you can use a Route 53 Resolver rule to forward all requests for a specific domain to your own name servers. With these strategies, you can create adversaries that act from known malicious domains to test your defense capabilities. You can also use Amazon VPC route tables to send all traffic to a centralized web server under your control. This web server can respond as a variety of websites to emulate the internet.
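
As a minimal sketch of the domain-based variant, the boto3 calls below create a private hosted zone for a simulated malicious domain and point it at an adversary-controlled server in the range; the domain, VPC ID, Region, and IP address are all placeholders:

  import boto3
  import uuid

  route53 = boto3.client("route53")

  # Private hosted zone visible only inside the range VPC.
  zone = route53.create_hosted_zone(
      Name="evil-domain.example",
      CallerReference=str(uuid.uuid4()),
      VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0rangeexample"},
      HostedZoneConfig={"Comment": "Simulated malicious domain",
                        "PrivateZone": True},
  )

  # Resolve the "malicious" domain to the adversary web server.
  route53.change_resource_record_sets(
      HostedZoneId=zone["HostedZone"]["Id"],
      ChangeBatch={
          "Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "evil-domain.example.",
                  "Type": "A",
                  "TTL": 60,
                  "ResourceRecords": [{"Value": "10.0.10.50"}],
              },
          }]
      },
  )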

Logging and monitoring

It is important for responders to exercise using the same tools and techniques that they would use in the real world. Amazon VPC traffic mirroring is an effective way to utilize many intrusion detection system (IDS) products. Our documentation provides guidance for using popular open source tools such as Zeek and Suricata. Additionally, responders can leverage any network monitoring tools that support VXLAN.

You can interact with the tools outside of the VPC by using AWS PrivateLink. PrivateLink provides secure connectivity to resources outside of a network, without the need for firewall rules or route tables. PrivateLink also enables integration with many AWS Partner offerings.

You can use Amazon CloudWatch Logs to aggregate operating system logs, application logs, and logs from AWS resources. You can also easily share CloudWatch Logs with a dedicated security logging account.
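
For instance, here is a minimal sketch of sharing a log group with a security logging account through a subscription filter; the log group name and destination ARN are placeholders, and the destination (with its access policy) must already exist in the receiving account:

  import boto3

  logs = boto3.client("logs")

  # Forward every event in a range log group to a destination owned
  # by the security logging account. Names and ARN are placeholders.
  logs.put_subscription_filter(
      logGroupName="/cyber-range/app",
      filterName="to-security-account",
      filterPattern="",  # an empty pattern matches all events
      destinationArn="arn:aws:logs:us-east-1:222222222222:destination:securityLogs",
  )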

In addition, if there are third-party tools that you currently use, you can leverage AWS Marketplace to easily procure those tools and install them within your AWS account.

Summary

In this post, I covered what a cyber range is, the value of using one, and how to think about creating one using AWS services. AWS provides a great platform to build a cost-effective cyber range that can be used to bolster your security practices.



Author

John Formento, Jr.

John is a Solutions Architect at Amazon Web Services. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud. John holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and DevOps Engineer – Professional.

Author

Adam Cerini

Adam is a Senior Solutions Architect with Amazon Web Services. He focuses on helping Public Sector customers architect scalable, secure, and cost-effective systems. Adam holds 5 AWS certifications including AWS Certified Solutions Architect – Professional and AWS Certified Security – Specialty.

BBVA: Helping Global Remote Working with Amazon AppStream 2.0

Post Syndicated from Joe Luis Prieto original https://aws.amazon.com/blogs/architecture/bbva-helping-global-remote-working-with-amazon-appstream-2-0/

This post was co-written with Javier Jose Pecete, Cloud Security Architect at BBVA, and Javier Sanz Enjuto, Head of Platform Protection – Security Architecture at BBVA.

Introduction

Speed and elasticity are key when you are faced with unexpected scenarios, such as a massive workforce suddenly working from home or running more workloads on the public cloud when data centers face staffing reductions. AWS customers can instantly benefit from implementing a fully managed, turnkey solution to help cope with these scenarios.

Companies not only need to use technology as the foundation to maintain business continuity and adjust their business model for the future, but they also must work to help their employees adapt to new situations.

About BBVA

BBVA is a customer-centric, global financial services group present in more than 30 countries, with more than 126,000 employees serving more than 78 million customers.

Founded in 1857, BBVA is a leader in the Spanish market as well as the largest financial institution in Mexico. It has leading franchises in South America and the Sun Belt region of the United States and is the leading shareholder in Turkey’s Garanti BBVA.

The challenge

BBVA implemented a global remote working plan that protects customers and employees alike, including a significant reduction of the number of employees working in its branch offices. It also ensured continued and uninterrupted operations for both consumer and business customers by strengthening digital access to its full suite of services.

Following the company's policies and adhering to new rules announced by national authorities in recent weeks, more than 86,000 employees from across BBVA's international network of offices and its central service functions now work remotely.

BBVA, subject to a set of highly regulated requirements, was looking for a global architecture to accommodate remote work. The solution needed to be fast to implement, able to scale out gradually across the countries in which the bank operates, and able to meet its operational, security, and regulatory requirements.

The architecture

BBVA selected Amazon AppStream 2.0 for particular use cases of applications that, due to their sensitivity, are not exposed to the internet (such as financial, employee, and security applications). Having had previous experience with the service, BBVA chose AppStream 2.0 to accommodate the remote work experience.

AppStream 2.0 is a fully managed application streaming service that provides users with instant access to their desktop applications from anywhere, regardless of what device they are using.

AppStream 2.0 integrates with existing IT environments, can be managed through the AWS SDK or console, automatically scales globally on demand, and is fully managed on AWS. This means there is no hardware or software to deploy, patch, or upgrade.

Figure 1. AppStream 2.0 can be managed through the AWS SDK

  1. The streamed video and user inputs are sent over HTTPS and are SSL-encrypted between the AppStream 2.0 instance executing your applications and your end users.
  2. Security groups are used to control network access to the customer VPC.
  3. AppStream 2.0 streaming instance access to the internet is through the customer VPC.

AppStream 2.0 fleets are created per use case so that security restrictions can be applied according to data sensitivity. By setting the clipboard, file transfer, and print-to-local-device options, BBVA controls the movement of data to and from employees' AppStream 2.0 streaming sessions.
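
In the AppStream 2.0 API, these session controls are expressed as user settings on the stack associated with a fleet. A minimal sketch with placeholder stack and fleet names:

  import boto3

  appstream = boto3.client("appstream")

  # Stack whose user settings block data movement out of the session.
  # Stack and fleet names are placeholders.
  appstream.create_stack(
      Name="sensitive-financial-apps",
      UserSettings=[
          {"Action": "CLIPBOARD_COPY_TO_LOCAL_DEVICE", "Permission": "DISABLED"},
          {"Action": "FILE_DOWNLOAD", "Permission": "DISABLED"},
          {"Action": "PRINTING_TO_LOCAL_DEVICE", "Permission": "DISABLED"},
      ],
  )

  # Associate the restricted stack with the fleet for this use case.
  appstream.associate_fleet(
      FleetName="sensitive-financial-fleet",
      StackName="sensitive-financial-apps",
  )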

BBVA relies on a proprietary service called Heimdal to authenticate employees through the corporate identity provider. Heimdal calls the AppStream 2.0 CreateStreamingURL API operation to create a temporary URL that starts a streaming session for the specified user, abstracting the underlying service from the user with the following parameters (a minimal sketch follows the list):

  • FleetName to connect to the most appropriate fleet based on the user's location (BBVA has fleets deployed in Europe and America to improve the user's experience.)
  • ApplicationId to launch the proper application without having to use an intermediate portal
  • SessionContext in situations where, for instance, the authentication service generates a token and needs to be forwarded to a browser application and injected as a session cookie
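
Here is what that call could look like in boto3; every name and value below is a placeholder for illustration, not BBVA's actual configuration:

  import boto3

  appstream = boto3.client("appstream")

  # All names and values below are placeholders.
  response = appstream.create_streaming_url(
      StackName="sensitive-financial-apps",
      FleetName="eu-fleet",            # chosen based on the user's location
      UserId="jdoe",                   # identity asserted by Heimdal
      ApplicationId="financial-app",   # launch directly, no intermediate portal
      SessionContext="token=abc123",   # forwarded to the streamed application
      Validity=60,                     # the URL expires after 60 seconds
  )
  streaming_url = response["StreamingURL"]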

Figure 2. BBVA uses AWS Transit Gateway to build a hub-and-spoke network topology

To simplify its overall network architecture, BBVA uses AWS Transit Gateway to build a hub-and-spoke network topology with full control over network routing and security.
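
At its simplest, a hub-and-spoke setup of this kind can be sketched with boto3 as follows, with placeholder VPC and subnet IDs:

  import boto3

  ec2 = boto3.client("ec2")

  # Hub: one transit gateway for the Region.
  tgw = ec2.create_transit_gateway(Description="Hub for workload VPCs")
  tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

  # Spokes: attach each workload VPC. In practice, wait until the
  # transit gateway is available before attaching. IDs are placeholders.
  for vpc_id, subnet_id in [
      ("vpc-0appstream", "subnet-0appstream"),
      ("vpc-0proxy", "subnet-0proxy"),
  ]:
      ec2.create_transit_gateway_vpc_attachment(
          TransitGatewayId=tgw_id,
          VpcId=vpc_id,
          SubnetIds=[subnet_id],
      )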

There are situations where the application streamed in AppStream 2.0 needs to connect to other resources:

  1. On-premises, using AWS Direct Connect plus VPN providing an IPsec-encrypted private connection
  2. To the internet through an outbound VPC proxy with domain whitelisting and content filtering to control the information employees can access and to protect them from threats while they browse

AppStream 2.0 activity is logged to a centralized repository backed by Amazon S3, both to detect unusual behavior patterns and to meet regulatory requirements.

Conclusion

BBVA built a global solution that reduced implementation time by 90% compared to on-premises projects while meeting its operational and security requirements. The solution is also helping with the company's top concern: protecting the health and safety of its employees.

Binge-Watch Live This is My Architecture Videos from AWS re:Invent

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/binge-watch-live-videos-from-aws-reinvent-2019/

AWS re:Invent 2019 was a whirlwind of activity, especially in the Expo Hall, where the AWS team spent four days filming 12 live This is My Architecture videos for Twitch. Watch one a day for the next two weeks…or binge them all in one sitting. Whichever you choose, you're guaranteed to learn something new.

Accolade

Discover security and operational excellence in healthcare with Accolade.

AWS Solution Builders

Get multi-Region availability with Amazon DynamoDB, Amazon S3, and Amazon Cognito.

EcoFit

EcoFit offers responsive, AWS Lambda-based microservices at scale.

Splunk

Splunk explains data at scale by decoupling compute and storage.

Crownpeak

Crownpeak uses AWS Lambda for its decoupled content deployment architecture.

Formula One Group

Learn how Formula One Group is using Amazon SageMaker to deliver real-time insights to fans.

Adobe

Adobe is simplifying networking across thousands of AWS accounts with AWS Transit Gateway.

The Trade Desk

The Trade Desk offers real-time ad bidding in the cloud with AWS Global Accelerator.

Mueller Water Products

Learn all about scalable ingestion of sensor data for municipal water conservation with Mueller Water Products.

NextRoll

NextRoll is driving OpEx efficiency for ad bidding engines.

Pason Systems

Explore petabyte-scale drilling datamart on AWS with Pason Systems.

UltraServe

Explore UltraServe's application vending machine with runtime event control.


Be sure to visit the AWS channel on Twitch for more in-depth videos and interviews.