All posts by Bryant Bost

Build real-time feature toggles with Amazon DynamoDB Streams and Amazon API Gateway WebSocket APIs

Post Syndicated from Bryant Bost original https://aws.amazon.com/blogs/devops/build-real-time-feature-toggles-with-amazon-dynamodb-streams-and-amazon-api-gateway-websocket-apis/

Feature toggles (or feature flags) are a software development technique allowing developers to programmatically enable or disable features of an application. In practice, feature toggles control a system’s behavior by controlling conditional statements in the application code.

Feature toggles have a number of use cases:

  • Selectively enable or disable features – You can use feature toggles to selectively enable or disable features based on arbitrary conditions. For example, you can use a feature toggle to enable a feature only at certain times of the day or only for certain groups of users.
  • Minimize deployment risk – Deploying a new, experimental, or uncertain feature behind a feature toggle allows an organization to expose this feature to its users with minimal impact should the feature need to be disabled. In this scenario, a feature toggle allows you to quickly restore normal functionality without a code rollback and redeployment.
  • Reduce cross-team dependencies – If an application needs to change its behavior in coordination with another application, or even another group of developers on the same team, a feature toggle can be used to maintain the old functionality and new functionality in parallel.

In the following example code, the Boolean variable example_feature_toggle functions as a feature toggle and controls the system behavior by forcing the system to do_something() or do_something_else():

if example_feature_toggle:
    do_something()
else:
    do_something_else()

Feature toggle implementations can take many forms and display varying levels of complexity. In a basic implementation like the preceding example, feature toggles are static variables that are read from a configuration file at application startup or on certain application events. In a more complex implementation, feature toggles can be read from an API or pushed to the application via a service. Building a complex, custom feature toggle solution from scratch or deploying and managing a third-party feature toggle solution can be tedious and distract from the development of more critical application features.
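
As a concrete illustration of the basic approach, the following minimal sketch (not taken from the solution in this post) reads feature toggles from a hypothetical JSON configuration file at application startup; do_something() and do_something_else() are assumed to be defined as in the preceding example:

import json

def load_feature_toggles(path="feature_toggles.json"):
    # Hypothetical config file containing, for example, {"example_feature_toggle": true}
    with open(path) as config_file:
        return json.load(config_file)

feature_toggles = load_feature_toggles()

if feature_toggles.get("example_feature_toggle", False):
    do_something()
else:
    do_something_else()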

In this post, we explore a serverless solution that uses managed service offerings from AWS to add simple, customizable, and real-time feature toggles to your application. This solution uses Amazon DynamoDB and Amazon DynamoDB Streams to capture feature toggle changes and trigger an AWS Lambda function that pushes notifications to connected clients through Amazon API Gateway WebSocket APIs.

The code for this solution is stored in a GitHub repository.

Solution Overview

The following diagram shows the high-level architecture that we implement to deliver the feature toggle solution. In this solution, application clients connect to an API Gateway WebSocket API and receive the status of all feature toggles in the application. If a feature toggle changes while the client is still connected to the WebSocket API, the feature toggle service automatically sends a message to the client containing the new status of the feature toggle. This solution sends all feature toggle updates to all connected clients, but you could easily extend it to incorporate application business logic to selectively return the status of feature toggles based on arbitrary conditions.

Feature toggle solution architecture

This architecture represents a complete, customizable service that offers the following benefits:

  • Centralized feature toggle management – To effectively manage the state of each feature toggle in an application, we use a single DynamoDB table to store all the feature toggles and the current status of each feature toggle. Storing the feature toggles in a centralized location simplifies the management of these feature toggles, and removes the possibility that different application clients could receive conflicting feature toggle statuses.
  • Real-time communication – We use WebSocket APIs to immediately notify application clients of changes in feature toggle values. Besides letting the feature toggle solution push notifications directly to clients, a WebSocket-based approach is less resource intensive than HTTP polling, because feature toggles change far less often than clients would poll and an open WebSocket connection carries little overhead.
  • Automatic change detection – When a feature toggle is added, removed, or updated from the DynamoDB table, we need to automatically notify all clients that are reliant on these feature toggles. For example, if our application client is a single-page application, all users currently using the application need to be notified of the updated feature toggle. To capture the change events from DynamoDB, we configure a DynamoDB stream to trigger a Lambda function on each change to the table and push notifications to all connected clients.

Initiating a connection

The feature toggle service is exposed via a single API Gateway WebSocket API, and this API consists of only two routes (as shown in the following screenshot): $connect and $disconnect.

Connecting to feature toggle API

In a WebSocket API, incoming messages are directed to backend services based on routes that are specified as a message header. Because our feature toggle solution is only intended to notify users of feature toggle changes and doesn’t need to accept incoming messages, we don’t need to configure additional routes. For more information about connection initiation and routing incoming messages, see Working with routes for WebSocket APIs.

When an application client initiates a connection to the WebSocket API, API Gateway invokes the $connect route. We configure this route to trigger our connection manager function, which stores the connectionId and callback URL in the active connections table. The connection manager function also handles the $disconnect route, which is invoked when the WebSocket connection is closed; in that case, the function simply deletes the client’s connection information from the active connections table. The information in the active connections table is used to push notifications from the service backend (the Lambda functions) to clients with active WebSocket connections to API Gateway. For more information about how to exchange messages with API Gateway WebSocket APIs, see Sending data from backend services to connected clients.
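
The following is a minimal sketch of what the connection manager function might look like in Python; the table and environment variable names are placeholders and may differ from the code in the GitHub repository:

import os
import boto3

# Placeholder environment variable naming the active connections table.
connections_table = boto3.resource("dynamodb").Table(os.environ["ACTIVE_CONNECTIONS_TABLE"])

def handler(event, context):
    route_key = event["requestContext"]["routeKey"]
    connection_id = event["requestContext"]["connectionId"]

    if route_key == "$connect":
        # Store the connection ID and the callback URL used to push messages to this client.
        domain = event["requestContext"]["domainName"]
        stage = event["requestContext"]["stage"]
        connections_table.put_item(Item={
            "connectionId": connection_id,
            "callbackUrl": f"https://{domain}/{stage}",
        })
    elif route_key == "$disconnect":
        connections_table.delete_item(Key={"connectionId": connection_id})

    return {"statusCode": 200}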

The following screenshot shows our active connections table.

Feature toggle active connections

Receiving the initial state

When a client first connects to the feature toggle service, we need to notify the client of the current state of all feature toggles in our application. To do this, we configure a DynamoDB stream to capture changes to the active connections table, and configure the new connection Lambda function to be triggered on writes to the stream. The DynamoDB stream (see the following screenshot) is configured to reflect the NEW_IMAGE of any changed data, meaning the stream contains the entire changed item as it appears after it’s modified.

DynamoDB stream configuration

For more information about working with Lambda functions to process records in a DynamoDB stream, see Using AWS Lambda with Amazon DynamoDB.

From the DynamoDB stream, the new connection function is triggered with any new, updated, or deleted items from the active connections table. This function examines the records present in the stream, and if a record represents a new connection, the function reads the current status of all feature toggles from the feature toggles table and sends this data as a message to the newly connected client. The new connection function ignores records in the stream that don’t represent newly connected clients.
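
A simplified sketch of the new connection function follows; the table names are placeholders, and error handling is omitted for brevity:

import json
import os
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    for record in event["Records"]:
        # Only INSERT events represent newly connected clients; ignore everything else.
        if record["eventName"] != "INSERT":
            continue

        new_image = record["dynamodb"]["NewImage"]
        connection_id = new_image["connectionId"]["S"]
        callback_url = new_image["callbackUrl"]["S"]

        # Read the current state of every feature toggle.
        toggles = dynamodb.scan(TableName=os.environ["FEATURE_TOGGLES_TABLE"])["Items"]

        # Push the initial state to the newly connected client.
        apigw = boto3.client("apigatewaymanagementapi", endpoint_url=callback_url)
        apigw.post_to_connection(
            ConnectionId=connection_id,
            Data=json.dumps(toggles).encode("utf-8"),
        )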

Responding to updates

The current state of all feature toggles is stored in the feature toggles table. The format of this table is arbitrary, and can be expanded to fit additional data fields required for an application. For this post, I added several example feature toggles to the table (see the following screenshot), with isActive representing whether the feature toggle is currently on or off.

Feature toggle tracking table

When a feature toggle is added, edited, or deleted, we need to automatically push a notification containing the new state of the feature toggle to connected clients. Just as we capture new connections to the active connections table with a DynamoDB stream, we use another stream to capture changes to the feature toggles table and configure these changes to invoke the feature toggle message function. This DynamoDB stream (see the following screenshot) has an identical configuration to the stream attached to the active connections table.

DynamoDB stream configuration

Finally, the feature toggle message function is responsible for pushing any changes from the DynamoDB stream attached to the feature toggles table to currently connected clients. To do this, the function reads all active connections from the active connections table and uses the callback URL to send a message to the client containing the new state of the feature toggle.
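
A simplified sketch of the feature toggle message function is shown below. The table names are again placeholders; production code would also clean up stale connections, for example by deleting a connection record when post_to_connection raises a GoneException:

import json
import os
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    # Read every currently connected client.
    connections = dynamodb.scan(TableName=os.environ["ACTIVE_CONNECTIONS_TABLE"])["Items"]

    for record in event["Records"]:
        if record["eventName"] == "REMOVE":
            # A deleted toggle is reported with an isRemoved flag and the item's key.
            message = {"isRemoved": "true", "featureId": record["dynamodb"]["Keys"]}
        else:
            # Inserts and updates carry the toggle's new state in NewImage.
            message = record["dynamodb"]["NewImage"]

        data = json.dumps(message).encode("utf-8")
        for connection in connections:
            apigw = boto3.client(
                "apigatewaymanagementapi",
                endpoint_url=connection["callbackUrl"]["S"],
            )
            apigw.post_to_connection(
                ConnectionId=connection["connectionId"]["S"],
                Data=data,
            )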

Deploying the infrastructure

To create this stack, you can deploy the AWS Serverless Application Model (AWS SAM) template in the GitHub repository for this post. Be sure to configure your AWS CLI with an AWS Identity and Access Management (IAM) user that has permissions to create the resources described in the template. For more details on creating custom IAM users and policies, see Manage IAM permissions. Additionally, you need to have the AWS SAM CLI installed to use the sam command to deploy the stack.
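
Assuming the template in the repository follows the AWS SAM CLI's default naming (template.yaml), deployment typically looks like the following; the guided flow prompts you for a stack name, Region, and other parameters:

sam build
sam deploy --guided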

Results

To test our feature toggle service, we use wscat, an open-source command line tool that allows us to send and receive messages over WebSocket connections. To connect to our service, we use the endpoint exposed by API Gateway. This endpoint, as shown in the following screenshot, is produced as an output of the deployed AWS SAM template, and is also available on the API Gateway console.

Feature toggle stack output

When we initiate a connection to our service, we should receive a notification of the current state of all feature toggles.

wscat -c wss://<your-api-id>.execute-api.us-east-1.amazonaws.com/Sandbox
Connected (press CTRL+C to quit)
< [{"featureId": {"S": "2"}, "isActive": {"BOOL": false}, "featureName": {"S": "Feature toggle 2"}}, {"featureId": {"S": "1"}, "isActive": {"BOOL": false}, "featureName": {"S": "Feature toggle 1"}}, {"featureId": {"S": "4"}, "isActive": {"BOOL": true}, "featureName": {"S": "Feature toggle 4"}}, {"featureId": {"S": "3"}, "isActive": {"BOOL": true}, "featureName": {"S": "Feature toggle 3"}}]

When any of the feature toggles are changed in DynamoDB, the service automatically pushes these changes to the connected client.

For example, the following code illustrates a notification indicating the status of an existing feature toggle has changed:

< {"featureName": {"S": "Feature toggle 1"}, "isActive": {"BOOL": true}, "featureId": {"S": "1"}}

The following code illustrates a notification of a new feature toggle:

< {"featureName": {"S": "New feature toggle!"}, "isActive": {"BOOL": true}, "featureId": {"S": "6"}}

The following code illustrates a notification of a feature toggle being deleted:

< {"isRemoved": "true", "featureId": {"featureId": {"S": "2"}}}

Conclusion

In this post, you saw how to use managed services from AWS to form the core of a solution to deliver real-time feature toggles to your application. With DynamoDB Streams and WebSocket APIs, we built this service with a relatively small amount of code and simple service integration configurations.

The patterns demonstrated in this solution aren’t limited to feature toggles; you can use them in any scenario where an application needs to perform additional processing of events from a DynamoDB stream and send these events to other systems.

Micro-frontend Architectures on AWS

Post Syndicated from Bryant Bost original https://aws.amazon.com/blogs/architecture/micro-frontend-architectures-on-aws/

A microservice architecture is characterized by independent services that are focused on a specific business function and maintained by small, self-contained teams. Microservice architectures are used frequently for web applications developed on AWS, and for good reason. They offer many well-known benefits such as development agility, technological freedom, targeted deployments, and more. Despite the popularity of microservices, many frontend applications are still built in a monolithic style: one large code base that interacts with all backend microservices and is maintained by a large team of developers.

Figure 1. Microservice backend with monolith frontend

What is a Micro-frontend?

The micro-frontend architecture introduces microservice development principles to frontend applications. In a micro-frontend architecture, development teams independently build and deploy “child” frontend applications. These applications are combined by a “parent” frontend application that acts as a container to retrieve, display, and integrate the various child applications. In this parent/child model, the user interacts with what appears to be a single application. In reality, they are interacting with several independent applications, published by different teams.

Figure 2. Microservice backend with micro-frontend

Micro-frontend Benefits

Compared to a monolith frontend, a micro-frontend offers the following benefits:

  • Independent artifacts: A core tenet of microservice development is that artifacts can be deployed independently, and this remains true for micro-frontends. In a micro-frontend architecture, teams should be able to independently deploy their frontend applications with minimal impact to other services. Those changes will be reflected by the parent application.
  • Autonomous teams: Each team is the expert in its own domain. For example, the billing service team members have specialized knowledge. This includes the data models, the business requirements, the API calls, and user interactions associated with the billing service. This knowledge allows the team to develop the billing frontend faster than a larger, less specialized team.
  • Flexible technology choices: Autonomy allows each team to make technology choices that are independent from other teams. For instance, the billing service team could develop their micro-frontend using Vue.js and the profile service team could develop their frontend using Angular.
  • Scalable development: Micro-frontend development teams are smaller and are able to operate without disrupting other teams. This allows us to quickly scale development by spinning up new teams to deliver additional frontend functionality via child applications.
  • Easier maintenance: Keeping frontend repositories small and specialized allows them to be more easily understood, and this simplifies long-term maintenance and testing. For instance, if you want to change an interaction on a monolith frontend, you must isolate the location and dependencies of the feature within the context of a large codebase. This type of operation is greatly simplified when dealing with the smaller codebases associated with micro-frontends.

Micro-frontend Challenges

Conversely, a micro-frontend presents the following challenges:

  • Parent/child integration: A micro-frontend introduces the task of ensuring the parent application displays the child application with the same consistency and performance expected from a monolith application. This point is discussed further in the next section.
  • Operational overhead: Instead of managing a single frontend application, a micro-frontend application involves creating and managing separate infrastructure for all teams.
  • Consistent user experience: In order to maintain a consistent user experience, the child applications must use the same UI components, CSS libraries, interactions, error handling, and more. Maintaining consistency in the user experience can be difficult for child applications that are at different stages in the development lifecycle.

Building Micro-frontends

The most difficult challenge with the micro-frontend architecture pattern is integrating child applications with the parent application. Prioritizing the user experience is critical for any frontend application. In the context of micro-frontends, this means ensuring a user can seamlessly navigate from one child application to another inside the parent application. We want to avoid disruptive behavior such as page refreshes or multiple logins. At its most basic definition, parent/child integration involves the parent application dynamically retrieving and rendering child applications when the parent app is loaded. Rendering the child application depends on how the child application was built, and this can be done in a number of ways. Two of the most popular methods of parent/child integration are:

  1. Building each child application as a web component.
  2. Importing each child application as an independent module. Each module either declares a function to render itself or is dynamically imported by the parent application (such as with module federation).

Registering child apps as web components:

<html>
    <head>
        <script src="https://shipping.example.com/shipping-service.js"></script>
        <script src="https://profile.example.com/profile-service.js"></script>
        <script src="https://billing.example.com/billing-service.js"></script>
        <title>Parent Application</title>
    </head>
    <body>
        <shipping-service></shipping-service>
        <profile-service></profile-service>
        <billing-service></billing-service>
    </body>
</html>

Registering child apps as modules:

<html>
    <head>
        <script src="https://shipping.example.com/shipping-service.js"></script>
        <script src="https://profile.example.com/profile-service.js"></script>
        <script src="https://billing.example.com/billing-service.js"></script>
        <title>Parent Application</title>
    </head>
    <body>
    </body>
    <script>
        // Load and render the child applications from their JS bundles.
    </script>
</html>

The following diagram shows an example micro-frontend architecture built on AWS.

Figure 3. Micro-frontend architecture on AWS

In this example, each service team is running a separate, identical stack to build their application. They use the AWS Developer Tools and deploy the application to Amazon Simple Storage Service (Amazon S3), served through Amazon CloudFront. The CI/CD pipelines use shared components such as CSS libraries, API wrappers, or custom modules stored in AWS CodeArtifact. This helps drive consistency across parent and child applications.

When you retrieve the parent application, it should prompt you to log in to an identity provider and retrieve JWTs. In this example, the identity provider is an Amazon Cognito user pool. After a successful login, the parent application retrieves the child applications from CloudFront and renders them inside the parent application. Alternatively, the parent application can elect to render the child applications on demand, when you navigate to a specific route. The child applications should not require you to log in again to the Amazon Cognito user pool; they should be configured to use the JWT obtained by the parent app or silently retrieve a new JWT from Amazon Cognito.

Conclusion

Micro-frontend architectures introduce many of the familiar benefits of microservice development to frontend applications. A micro-frontend architecture also simplifies the process of building complex frontend applications by allowing you to manage small, independent components.