All posts by Sid Singh

Exclude cipher suites at the API gateway using a Network Load Balancer security policy

Post Syndicated from Sid Singh original https://aws.amazon.com/blogs/security/exclude-cipher-suites-at-the-api-gateway-using-a-network-load-balancer-security-policy/

In this blog post, we will show you how to use Amazon Elastic Load Balancing (ELB)—specifically a Network Load Balancer—to apply more granular control over the cipher suites used between clients and servers when establishing an SSL/TLS connection with Amazon API Gateway. The solution uses virtual private cloud (VPC) endpoints (powered by AWS PrivateLink) and ELB policies. With this solution, customers in highly regulated industries, such as financial services, healthcare, and life sciences, can exercise more control over cipher suite selection for TLS negotiation.

Configure the minimum TLS version on API Gateway

The TLS protocol is a mechanism to encrypt data in transit — data that is moving from one location to another, such as across the internet or through a network. TLS requires that the client and server agree on the family of encryption algorithms — otherwise known as the cipher suite — to use to protect the communication between them. The two parties agree on the cipher suite during a phase known as the TLS handshake, in which the client first provides a list of preferred cipher suites, and the server then selects the one that it deems most appropriate.
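
To see this negotiation in practice, you can open a connection and inspect which suite the server selected. Here is a minimal sketch using Python's standard ssl module; the host name is a placeholder, and the restricted cipher list is purely for illustration:

import socket
import ssl

# Hypothetical endpoint for illustration; use a host you are allowed to test against.
HOST = "api.example.com"

context = ssl.create_default_context()
# Offer only this subset of suites (OpenSSL names); applies to TLS 1.2 and below.
context.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384")

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # cipher() reports the suite the server selected from the client's offer.
        print(tls.cipher())  # for example: ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)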

API Gateway supports a wide range of protocols and ciphers and allows you to choose a minimum TLS version to be enforced by selecting a specific security policy. A security policy is a predefined combination of minimum TLS version and cipher suite offered by API Gateway. Currently, you can choose either a TLS version 1.0 or a TLS version 1.2 security policy. Although TLS 1.0 and TLS 1.2 cover a wide range of network security use cases, neither addresses the situation where you need to exclude specific ciphers that don’t meet your security requirements.

Options for granular control on TLS cipher suites

If you want to exclude specific ciphers, you can use the following solutions to offload and control the TLS connection termination with a customized cipher suite:

  • Amazon CloudFront distribution — Amazon CloudFront provides the TLS version and cipher suite in the CloudFront-Viewer-TLS header, and you can configure a CloudFront function on the viewer request to inspect it and forward the appropriate traffic to an API gateway. CloudFront is a global service that transfers customer data as an essential function of the service, so you should carefully consider its usage according to your specific use case.
  • Self-managed reverse proxy — Using a containerized reverse proxy (for example, an NGINX Docker image) that manages the TLS sessions and forwards traffic to an API Gateway is another approach for more granular control on the cipher suites. You can deploy and manage this solution with Amazon Elastic Container Service (Amazon ECS). You can also run Amazon ECS on AWS Fargate so that you don’t have to manage servers or clusters of Amazon Elastic Compute Cloud (Amazon EC2) instances. The self-managed reverse proxy approach entails an operational overhead associated with the configuration and management of the reverse proxy application.
  • Network Load Balancer — By placing a Network Load Balancer in front of an API gateway, you can use the load balancer to terminate the TLS session on the client side and initiate a new TLS session with the backend API gateway. This approach, in conjunction with ELB policies, gives you much more granular control over the cipher suite used for the communication. Network Load Balancer is a fully managed service, meaning that it handles scalability and elasticity automatically. This is its main advantage over a self-managed reverse proxy solution, which adds operational overhead because you need to manage both the reverse proxy application and the ECS cluster.

Network Load Balancer is the solution with the most suitable set of trade-offs: it minimizes operational overhead while providing the necessary flexibility to control and secure the connection between client and server. Therefore, we focus on using Network Load Balancer in this post.

Prerequisites

To show how a Network Load Balancer can front an API gateway in practice, we will walk you through a real-world example. To follow along, make sure that you have the following prerequisites in place:

Figure 1: Sample architecture of API Gateway with Lambda backend

Use Network Load Balancer for cipher suite selection

We start with a scenario where a client interacts with the API gateway domain (for example, api.example.com) over a set of TLS/cipher combinations that are not acceptable for security reasons. In the subsequent steps, we will introduce a Network Load Balancer layer to front the API gateway domain without impacting the end user’s interaction with it. In this section, we will walk you through how to make the application accessible through a Network Load Balancer and how to use ELB policies to exclude the TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 cipher suite. In doing so, we will limit the operational overhead as much as possible while keeping the application scalable, elastic, and highly available.

Figure 2 shows the solution that you will build.

Figure 2: Target architecture, with a load balancer for cipher suite selection

The preceding diagram shows a workflow of the user interaction with the API gateway domain abstracted by the Network Load Balancer layer. For the first interaction, the user retrieves the API gateway domain from the Route 53 hosted zone. This API gateway domain aliases to the Network Load Balancer endpoint. In the next interaction, the user makes an HTTPS request to the domain endpoint with a TLS/cipher combination from the client side. The TLS connection is accepted or denied based on the security policy configured at the Network Load Balancer. In the rest of this post, we will walk you through how to set up this architecture.

Step 1: Create a VPC endpoint

The first step is to create a private VPC endpoint for API Gateway.

To create a VPC endpoint

  1. Open the Amazon VPC console.
  2. In the left navigation pane, choose Endpoints, and select Create endpoint.
  3. For Name tag, enter a name for your endpoint. For this walkthrough, we will enter MyEndPoint as the name for the endpoint.
  4. For Services, search for execute-api and select the service name, which will look similar to the following: com.amazonaws.<region-name>.execute-api.
  5. For VPC, select the VPC where you want to deploy the endpoint. For this walkthrough, we will use MyVPC as the VPC.
  6. For Subnets, select the private subnets where you want the private endpoint to be accessible. To help ensure high availability and resiliency, make sure that you select at least two subnets.
  7. (Optional) Specify the VPC endpoint policy to allow access to the VPC endpoint only for the desired users or services. Make sure that you apply the principle of least privilege.
  8. For Security Groups, select (or create) a security group for the API Gateway VPC endpoint. This security group will allow or deny traffic to the VPC endpoint. You can choose the ports and protocols along with the source and destination IP address range to allow for inbound and outbound traffic. In this example, you want the VPC endpoint to be accessed only from the Network Load Balancer, so make sure that you allow incoming traffic from the VPC’s Classless Inter-Domain Routing (CIDR) on port 443.
  9. Leave the other configuration options as they are, and then choose Create Endpoint. Wait until the VPC endpoint is deployed.
  10. When the VPC endpoint completes provisioning, take note of the endpoint ID and the IP addresses associated with it because you will need this information in the following steps. You will find one address for each subnet where you chose to deploy the VPC endpoint. After you select the newly created endpoint, you can find the assigned IP addresses in the Subnets tab.
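
If you prefer to script this step, the following boto3 sketch creates an equivalent interface endpoint. The VPC, subnet, and security group IDs are placeholders for the resources described above:

import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs; substitute the VPC, private subnets, and security group
# described in the console steps above.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.<region-name>.execute-api",  # match your Region
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # at least two for resiliency
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "vpc-endpoint",
        "Tags": [{"Key": "Name", "Value": "MyEndPoint"}],
    }],
)
print(response["VpcEndpoint"]["VpcEndpointId"])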

Step 2: Associate API Gateway with the VPC endpoint and custom domain

The next step is to instruct the API Gateway to only accept invocations coming from the VPC endpoint, and then map your APIs with the custom domain name.

To associate API Gateway with the VPC endpoint and custom domain

  1. Open the Amazon API Gateway console and take note of the ID of your API.
  2. Choose your existing API in the console. For this walkthrough, we will use an API called MyAPI.
  3. In the left navigation pane, under API: <MyAPI>, choose Resource Policy.
  4. Paste the following policy, and replace <region-id>, <account-id>, <api-id>, and <endpoint-id> with your own information:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "arn:aws:execute-api:<region-id>:<account-id>:<api-id>/*/*/*",
                "Condition": {
                    "StringEquals": {
                        "aws:SourceVpce": "<endpoint-id>"
                    }
                }
            }
        ]
    }

  5. In the left navigation pane, under API: <MyAPI>, choose Settings.
  6. In the Endpoint Configuration section, for VPC Endpoint IDs, enter your VPC endpoint ID.
  7. Leave the other configuration options as they are and choose Save Changes.
  8. In the left navigation pane, under API: <MyAPI>, choose Resources.
  9. Choose Actions and select Deploy API.
  10. Select an existing stage, or if you haven’t created one yet, select [New Stage] and enter a name for the stage (for example, prod). Then choose Deploy.
  11. Navigate back to the Amazon API Gateway console, and in the left navigation pane, choose Custom domain names.
  12. Choose Create.
  13. For Domain name, enter the full domain name that you plan to associate with your API Gateway (for example, api.example.com).
  14. For ACM certificate, select the certificate for the domain that you own (for example, *.example.com).
  15. Leave the rest as it is and choose Create domain name.
  16. Select the domain name that you just associated with API Gateway and select API mappings.
  17. Choose Configure API Mapping.
  18. For API, select your API, and for Stage, select your preferred stage.
  19. Leave the other configuration options as they are, and choose Save.
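
The custom domain and API mapping can also be scripted. The following boto3 sketch assumes a Regional custom domain; the domain, certificate ARN, API ID, and stage are placeholders:

import boto3

apigw = boto3.client("apigateway")

# Placeholders; use your own domain, ACM certificate ARN, API ID, and stage.
apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:<region-id>:<account-id>:certificate/<cert-id>",
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Map the API and stage under the custom domain (an empty base path serves the root).
apigw.create_base_path_mapping(
    domainName="api.example.com",
    restApiId="<api-id>",
    stage="prod",
)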

Step 3: Create a new target group for Network Load Balancer

Before creating a Network Load Balancer, you need to create the target group that it will forward requests to. You will configure the target group to send requests to the VPC endpoint.

To create a new target group for Network Load Balancer

  1. Open the Amazon EC2 console.
  2. In the left navigation pane, choose Target groups, and then choose Create target group.
  3. For Choose a target type, select IP addresses.
  4. For Target group name, enter your desired target group name. For this walkthrough, we will enter MyGroup as the target group name.
  5. For Protocol, select TLS.
  6. For Port, enter 443.
  7. Select MyVPC.
  8. Under Health check protocol, select HTTPS, and under Health check path, enter /ping.
  9. Leave the rest as it is and choose Next.
  10. For Network, select MyVPC.
  11. Choose Add IPv4 address and add the IP addresses associated with the VPC endpoint one by one (these are the IP addresses noted in step 10 of the section Step 1: Create a VPC endpoint).
  12. For Ports, enter 443, and then choose Include as pending below.
  13. Choose Create target group, and then wait for the target group to complete creation.
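
For a scripted equivalent, the following boto3 sketch creates the same target group and registers placeholder VPC endpoint IP addresses:

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder VPC ID; the health check settings mirror the console steps above.
group = elbv2.create_target_group(
    Name="MyGroup",
    Protocol="TLS",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
    HealthCheckProtocol="HTTPS",
    HealthCheckPath="/ping",
)
tg_arn = group["TargetGroups"][0]["TargetGroupArn"]

# Register each VPC endpoint IP address (noted in Step 1) as a target on port 443.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "10.0.1.10", "Port": 443}, {"Id": "10.0.2.10", "Port": 443}],
)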

Step 4: Create a Network Load Balancer

Now you can create the Network Load Balancer. You will configure it to forward traffic to the target group that you defined in Step 3.

To create the Network Load Balancer

  1. Open the Amazon EC2 console.
  2. In the left navigation pane, choose Load Balancers, and then choose Create load balancer.
  3. In the Network Load Balancer section, choose Create.
  4. For Load balancer name, enter a name for your load balancer. For this walkthrough, we will use the name MyNLB.
  5. For Scheme, select Internal.
  6. For VPC, select MyVPC.
  7. For Mappings, select the same subnets that you selected when you created the VPC endpoint in Step 1: Create a VPC endpoint.
  8. In the Listeners and routing section, for Port, enter 443.
  9. Forward the traffic to MyGroup.
  10. Select a security policy that excludes the cipher suites that you don’t want to allow. To learn more about the available policies, see Security policies. In this example, we will select ELBSecurityPolicy-TLS13-1-2-Res-2021-06, which excludes the TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 cipher.
  11. For Default SSL cert, choose Select a certificate and then select your certificate (for example, *.example.com).
  12. Leave the rest as it is and choose Create load balancer. Wait for the load balancer to complete deployment.
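
As a scripted alternative, the following boto3 sketch creates the load balancer and a TLS listener with the same security policy; the subnet IDs and ARNs are placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Reuse the subnets from Step 1; the ARNs are placeholders.
nlb = elbv2.create_load_balancer(
    Name="MyNLB",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# The listener's security policy is what excludes the unwanted cipher suites.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TLS",
    Port=443,
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-Res-2021-06",
    Certificates=[{"CertificateArn": "arn:aws:acm:<region-id>:<account-id>:certificate/<cert-id>"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": "<target-group-arn>"}],
)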

Step 5: Set up DNS forwarding

The final step is to configure the Domain Name System (DNS) to associate the custom domain name with our APIs.

To set up DNS forwarding

  1. Open the Route 53 console.
  2. In the left navigation pane, choose Hosted zones.
  3. Select the private hosted zone that manages your domain.
  4. Choose Create record.
  5. For Record name, enter the domain name that you plan to associate with your API (for example, api.example.com — the same name as in Step 2: Associate API Gateway with the VPC endpoint and custom domain).
  6. For Record type, leave the default A – Routes traffic to an IPv4 address and some AWS resources.
  7. Turn on Alias.
  8. For Route traffic to, select Alias to Network Load Balancer. Select the AWS Region where you deployed your resources and then select your load balancer.
  9. Choose Create records.
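
If you want to script the record creation, the following boto3 sketch creates an equivalent alias record; the hosted zone ID is a placeholder:

import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Look up the load balancer's DNS name and canonical hosted zone ID.
lb = elbv2.describe_load_balancers(Names=["MyNLB"])["LoadBalancers"][0]

# Placeholder hosted zone ID; use the private hosted zone that manages your domain.
route53.change_resource_record_sets(
    HostedZoneId="<hosted-zone-id>",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": lb["CanonicalHostedZoneId"],
                "DNSName": lb["DNSName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)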

Step 6: Validate your solution

At this point, you have deployed the resources that you need to implement the solution. You now need to validate that it works as expected.

Your resources are deployed in private subnets, so you need to test them by sending requests from within the private subnet itself. For example, you can do that by connecting to a Linux instance that you have running inside the private subnet.

After you have logged in to your private EC2 instance, you can validate your solution by sending requests to your endpoint.

From your terminal of choice, run the following commands. Replace <endpoint> with your chosen domain name—for example, api.example.com/<your-path>.

curl https://<endpoint> --ciphers ECDHE-RSA-AES128-GCM-SHA256

This command sends a GET request to API Gateway by selecting a cipher suite that’s allowed by the ELB policy. As a result, the Network Load Balancer allows the connection and returns success.

curl https://<endpoint> --ciphers ECDHE-RSA-AES128-SHA256

This command sends a GET request to the API Gateway by selecting a cipher suite that is excluded by the ELB policy. As a result, the Network Load Balancer denies the connection and returns an error response.

Figure 3 shows the expected behavior.

Figure 3: Target behavior: accept only connections with selected cipher suites

Conclusion

In this blog post, you learned how to use a Network Load Balancer as a reverse proxy for your private APIs managed by Amazon API Gateway. With this solution, the Network Load Balancer allows you to exclude specific cipher suites by selecting the ELB policy that’s most appropriate for your use case.

 
Sid Singh

Sid is a Solutions Architect with Amazon Web Services. He works with global financial services customers and has more than 10 years of industry experience covering a wide range of technologies. Outside of work, he loves traveling and is an avid foodie and Bavarian beer enthusiast.

Francesco Vergona

Francesco is a Solutions Architect for AWS Financial Services. He has been with Amazon since November 2019, first in the retail space and then in the cloud business. He assists financial services customers throughout their cloud journey, helping them craft scalable, flexible and resilient architectures. Francesco has an interest in all things serverless and enjoys helping customers understand how serverless technologies can change the way they think about building and running applications at scale with minimal operational overhead.

How ERGO built an on-call support solution in a week

Post Syndicated from Sid Singh original https://aws.amazon.com/blogs/architecture/how-ergo-built-an-on-call-support-solution-in-a-week/

ERGO’s Technology & Services S.A. (ET&S) Cloud Solutions Department is a specialist team of cloud engineers who provide technical support for business owners, project managers, and engineering leads. The support team deals with complex issues such as failed deployments, security vulnerabilities, and environment availability.

When an issue arises, it’s categorized as Priority 1 (P1) or Priority 2 (P2). For urgent P1 incidents, users contact the support team directly via phone. For P2 incidents, the workflow sends an issue description to the support team via SMS.

Originally, the SMS and voice forwarding systems were manually updated every Monday. For SMS, an operator manually updated the phone numbers in the system for the assigned support team members. For voice forwarding, support team members used physical phones, which were handed off from engineer to engineer per the support team roster.

These manual processes were time-consuming and occasionally error-prone. Additionally, with COVID-19 physical distancing measures in place, handing off physical devices was complicated. To keep up with the increasing number of support cases and the growth of their Cloud Solutions Department, ERGO worked with AWS to modernize and automate their manual workflow. We’ll show you how ERGO implemented a production-ready, on-call support solution with SMS and voice features in just one week using Amazon Connect and Amazon Pinpoint.

Automating the SMS on-call system

Let’s look at how we automated the SMS on-call support system, as shown in Figure 1 and summarized as follows:

  1. We use an open-source orchestration tool, Red Hat Ansible Automation Platform (Ansible), as a frontend to run the template “Assign to On-call SMS”.
  2. The template sets the parameter to a subset of support team members who are assigned to support P1/P2 cases. The assignment is based on the on-call shift schedule.
  3. Next, support team members are subscribed to the Amazon Simple Notification Service (Amazon SNS) topic subscriber’s list using an Ansible playbook.

Now the support team will receive SMS alerts.

Figure 1. Assign to on-call SMS workflow
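
Under the hood, the playbook only has to manage Amazon SNS subscriptions. The following boto3 sketch shows the same rotation idea; the topic ARN and phone numbers are placeholders, and the production solution drives this from the shift schedule through Ansible:

import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:<region>:<account-id>:oncall-sms"  # placeholder topic
on_call_numbers = ["+491700000001", "+491700000002"]  # from the shift schedule

# Remove last week's SMS subscriptions from the topic.
for sub in sns.list_subscriptions_by_topic(TopicArn=TOPIC_ARN)["Subscriptions"]:
    if sub["Protocol"] == "sms" and sub["SubscriptionArn"].startswith("arn:"):
        sns.unsubscribe(SubscriptionArn=sub["SubscriptionArn"])

# Subscribe this week's on-call engineers so they receive the SMS alerts.
for number in on_call_numbers:
    sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sms", Endpoint=number)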

Next, we integrated the SMS workflow with our ZIS IT monitoring tool to capture critical events and forward them via SMS to the support team, as shown in Figure 2:

  1. The Amazon Pinpoint phone number is set as the SMS destination in our monitoring tool.
  2. The monitoring tool then sends the SMS to Amazon Pinpoint, where:
    • We extract the messageBody from the payload that Amazon Pinpoint prepares by sending the message to the Amazon SNS topic “Before Processing Message”, to which our AWS Lambda function “Extract messageBody” is subscribed.
    • The extracted message is then sent to the Amazon SNS topic “After Processing Message”, and the Amazon Pinpoint “Two-way SMS” feature delivers the SMS to the support team members subscribed to the topic.

Figure 2. On-call SMS workflow integration with Amazon Pinpoint

Also shown in Figure 2, we track our monthly SMS spending using Amazon CloudWatch. The SMSMonthToDateSpentUSD metric shows the amount spent sending SMS messages during the current month.

Why extract the messageBody before sending the SMS to the support team?

Amazon Pinpoint captures SMS from the monitoring tool in JSON format, which includes additional information, such as the origin and destination numbers, the message ID and related data, as shown in the following example:

{
    "originationNumber": "+14255550182",
    "destinationNumber": "+12125550101",
    "messageKeyword": "JOIN",
    "messageBody": "EXAMPLE",
    "inboundMessageId": "cae173d2-66b9-564c-8309-21f858e9fb84",
    "previousPublishedMessageId": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}

The support team only needs the messageBody, and the JSON format makes it difficult to read on a mobile phone. Therefore, we use a Lambda function for the “messageBody” extraction.
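
For illustration, here is a minimal sketch of what such a handler could look like. It assumes the function is subscribed to the “Before Processing Message” topic and republishes the extracted text to the “After Processing Message” topic; the environment variable name is an assumption, not ERGO’s actual code:

import json
import os

import boto3

sns = boto3.client("sns")

# Placeholder; the target topic ARN would come from function configuration.
AFTER_PROCESSING_TOPIC_ARN = os.environ["AFTER_PROCESSING_TOPIC_ARN"]

def handler(event, context):
    for record in event["Records"]:
        # The SNS message body is the JSON payload shown above.
        payload = json.loads(record["Sns"]["Message"])
        # Forward only the human-readable text to the support team's topic.
        sns.publish(TopicArn=AFTER_PROCESSING_TOPIC_ARN, Message=payload["messageBody"])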

Automating the voice forwarding system

The other half of our on-call solution is voice forwarding. As mentioned in the introduction, we had a physical phone and updated the call forwarding every Monday. This allowed us to forward calls to a single number, but this system had two main problems: it wasn’t scalable and it was prone to human errors.

In our automated system, shown in Figure 3, all calls to the physical phone are forwarded to Amazon Connect, so we do not need to change the number of the phone.

This is how it’s set up:

  • The assigned phone numbers in Amazon Connect are attached to the Contact Flow “ERGO On-call Forwarding Voice”, which starts at the “Entry point” rectangle on the left side of the diagram.
  • In the next step, “Set logging behavior” captures the calling number, so that we can see the number and return any missed calls.
  • Finally, the set working queue contains the routing profiles (in this case, a main line and a secondary line). The main line has support team members who are assigned to address P1 cases. The secondary line is for managers, who will take the call if the support team members are not available.

When a customer is in a queue, the Amazon Connect contact flow tries to route the call to a support team member. If there’s no answer, the service re-routes the call to the next available support team member. After 30 seconds, if there is no answer on the first line (and no other support team members have become available), the service tries the secondary line.

To set this up:

  • Every support team member requires an Amazon Connect account. You can import their data via CSV to automate provisioning.
  • If a support team member is shown as online but does not answer a call, Amazon Connect changes their status to offline. This way, an Amazon Connect admin can see the time and number of the missed call in the Amazon Connect Real-time metrics reports and can return the call when another team member or supervisor is available.
  • Figure 3 shows how Amazon Connect and CloudWatch monitor contact center health metrics like “MissedCalls” and generate alerts via Amazon SNS email notifications to ensure calls are returned promptly. For more details on this integration pattern, refer to the Monitor and trigger alerts using Amazon CloudWatch for Amazon Connect blog post.

Figure 3. On-call voice forwarding workflow with Amazon Connect

Lessons learned

After creating an Amazon Connect instance, we claimed a phone number to place or receive calls. Requesting phone numbers from Amazon Connect to serve different customers in different countries was the most time-intensive part of the setup. Be aware that some countries have regulatory requirements, and this can increase the time and effort required. For example, requesting a German number and a Polish number will require different documents. To save time, we used international toll-free numbers. This allows us to provide support to people in all other countries without the caller incurring additional charges.

To help you with your implementation, you can find the list of ID requirements by country or AWS Region here, and AWS Support can provide more information.

Conclusion

Using managed services like Amazon Connect and Amazon Pinpoint allowed us to implement a scalable and pay-as-you-go on-call solution for technical support. The new automated setup is a huge improvement over the previous manual and error-prone workflow and enables us to easily onboard customers from new countries.

Looking ahead, we plan to explore using the Amazon Connect APIs to automate the management of an agent’s online/offline status, as well as building a skills-based routing workflow to accommodate a multi-lingual support team. You can read more about AWS Customer Engagement services here.

How Munich Re Automation Solutions Ltd built a digital insurance platform on AWS

Post Syndicated from Sid Singh original https://aws.amazon.com/blogs/architecture/how-munich-re-automation-solutions-ltd-built-a-digital-insurance-platform-on-aws/

Underwriting for life insurance can be quite manual and often time-intensive, with lots of re-keying by advisers before underwriting decisions can be made and policies finally issued. In the digital age, people purchasing life insurance want self-service interactions with their prospective insurer. People want speed of transaction, with time to cover reduced from days to minutes. While this has been achieved in the general insurance space with online car and home insurance journeys, it is not always the case in the life insurance space. This is where Munich Re Automation Solutions Ltd (MRAS) offers its customers a competitive edge to shrink the quote-to-fulfilment process using their ALLFINANZ solution.

ALLFINANZ is a cloud-based life insurance and analytics solution to underwrite new life insurance business. It is designed to transform the end consumer’s journey, delivering everything they need to become a policyholder. The core digital services offered to all ALLFINANZ customers include Rulebook Hub, Risk Assessment Interview delivery, Decision Engine, deep analytics (including predictive modeling capabilities), and technical integration services—for example, API integration and SSO integration.

Current state architecture

The ALLFINANZ application began as a traditional three-tier architecture deployed within a datacenter. As MRAS migrated their workload to the AWS cloud, they looked at their regulatory requirements and the technology stack, and decided on the silo model of the multi-tenant SaaS system. Each tenant is provided a dedicated Amazon Virtual Private Cloud (VPC) that holds network and application components, fully isolated from other primary insurers.

As an entry point into the ALLFINANZ environment, MRAS uses Amazon Route 53 to route incoming traffic to the appropriate Amazon VPC. The routing relies on a model where each tenant is assigned a subdomain; for example, allfinanz.tenant1.munichre.cloud is the subdomain for tenant 1. The diagram below shows the ALLFINANZ architecture. Note: not all links between components are shown here, for simplicity.

Figure 1. Current high-level solution architecture for the ALLFINANZ solution

  1. The solution uses Route 53 as the DNS service, which provides two entry points to the SaaS solution for MRAS customers:
    • The URL allfinanz.<tenant-id>.munichre.cloud allows user access to the ALLFINANZ Interview Screen (AIS). The AIS can exist as a standalone application, or it can be integrated with a customer’s wider digital point-of-sale process.
    • The URL api.allfinanz.<tenant-id>.munichre.cloud is used for accessing the application’s Web services and REST APIs.
  2. Traffic from both entry points flows through the load balancers. While HTTP/S traffic from the application user access entry point flows through an Application Load Balancer (ALB), TCP traffic from the REST API clients flows through a Network Load Balancer (NLB). Transport Layer Security (TLS) termination for user traffic happens at the ALB using certificates provided by the AWS Certificate Manager.  Secure communication over the public network is enforced through TLS validation of the server’s identity.
  3. Unlike application user access traffic, REST API clients use mutual TLS authentication to authenticate a customer’s server. Since NLB doesn’t support mutual TLS, MRAS opted to pass this traffic to a backend NGINX server for TLS termination. Mutual TLS is enforced by using client and server certificates issued by a private certificate authority that both the client and the server trust (see the client-side sketch after this list).
  4. Authenticated traffic from ALB and NGINX servers is routed to EC2 instances hosting the application logic. These EC2 instances are hosted in an auto-scaling group spanning two Availability Zones (AZs) to provide high availability and elasticity, therefore, allowing the application to scale to meet fluctuating demand.
  5. Application transactions are persisted in the backend Amazon Relational Database Service (Amazon RDS) for MySQL instances. This database layer is configured across multiple AZs, providing high availability and automatic failover.
  6. The application requires the capability to integrate evidence from data sources external to the ALLFINANZ service. This message sharing is enabled through Amazon MQ, a managed message broker service for Apache ActiveMQ.
  7. Amazon CloudWatch is used for end-to-end platform monitoring through logs collection and application and infrastructure metrics and alerts to support ongoing visibility of the health of the application.
  8. Software deployment and associated infrastructure provisioning are automated through infrastructure as code, using a combination of Git, AWS CodeCommit, Ansible, and Terraform.
  9. Amazon GuardDuty continuously monitors the application for malicious activity and delivers detailed security findings for visibility and remediation. GuardDuty also allows MRAS to provide evidence of the application’s strong security posture to meet audit and regulatory requirements.
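
For illustration, the following sketch shows roughly what a REST API client performing mutual TLS against this entry point could look like, using Python's requests library; the URL and file paths are placeholders, not MRAS's actual client code:

import requests

# Placeholder URL and file paths for illustration only.
API_URL = "https://api.allfinanz.tenant1.munichre.cloud/status"

response = requests.get(
    API_URL,
    # Client certificate and private key presented during the mutual TLS handshake.
    cert=("client.crt", "client.key"),
    # CA bundle used to validate the server certificate.
    verify="private-ca.pem",
)
response.raise_for_status()
print(response.status_code)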

High availability, resiliency, and security

MRAS deploys their solution across multiple AWS AZs to meet high-availability requirements and ensure operational resiliency. If one AZ has an ongoing event, the solution will remain operational, as there are instances receiving production traffic in another AZ. As described above, this is achieved using ALBs and NLBs to distribute requests to the application subnets across AZs.

The ALLFINANZ solution uses private subnets to segregate core application components and the database storage platform. Security groups provide networking security measures at the elastic network interface level. MRAS restrict access from incoming connection requests to ranges of IP addresses by attaching security groups to the ALBs. Amazon Inspector monitors workloads for software vulnerabilities and unintended network exposure. AWS WAF is integrated with the ALB to protect from SQL injection or cross-site scripting attacks on the application.

Optimizing the existing workload

One of the key benefits of this architecture is that now MRAS can standardize the infrastructure configuration and ensure consistent versioning of the workload across tenants. This makes onboarding new tenants as simple as provisioning another VPC with the same infrastructure footprint.

MRAS are continuing to optimize their architecture iteratively, examining components to modernize to cloud-native components and evolving towards the pool model of multi-tenant SaaS architecture wherever possible. For example, MRAS centralized their per-tenant NAT gateway deployment to a centralized outbound Internet routing design using AWS Transit Gateway, saving approximately 30% on their overall NAT gateway spend.

Conclusion

The AWS global infrastructure has allowed MRAS to serve more than 40 customers in five AWS Regions around the world. This solution improves customers’ experience and workload maintainability by standardizing and automating the infrastructure and workload configuration within a SaaS model, compared with maintaining multiple versions for on-premises deployments. SaaS customers are also freed from the undifferentiated heavy lifting of infrastructure operations, allowing them to focus on their business of underwriting for life insurance.

MRAS used the AWS Well-Architected Framework to assess their architecture and list key recommendations. AWS also offers Well-Architected SaaS Lens and AWS SaaS Factory Program, with a collection of resources to empower and enable insurers at any stage of their SaaS on AWS journey.