Tag Archives: Vue.js

Building Serverless Land: Part 2 – An auto-building static site

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-serverless-land-part-2-an-auto-building-static-site/

In this two-part blog series, I show how serverlessland.com is built. This is a static website that brings together all the latest blogs, videos, and training for AWS serverless. It automatically aggregates content from a number of sources. The content exists in a static JSON file, which generates a new static site each time it is updated. The result is a low-maintenance, low-latency serverless website, with almost limitless scalability.

A companion blog post explains how to build an automated content aggregation workflow to create and update the site’s content. In this post, you learn how to build a static website with an automated deployment pipeline that re-builds on each GitHub commit. The site content is stored in JSON files in the same repository as the code base. The example code can be found in this GitHub repository.

The growing adoption of serverless technologies generates increasing amounts of helpful and insightful content from the developer community. This content can be difficult to discover. Serverless Land helps channel this into a single searchable location. By collating this into a static website, users can enjoy a browsing experience with fast page load speeds.

The serverless nature of the site means that developers don’t need to manage infrastructure or scalability. The use of AWS Amplify Console to automatically deploy directly from GitHub enables a regular release cadence with a fast transition from prototype to production.

Static websites

A static site is served to the user’s web browser exactly as stored. This contrasts with dynamic webpages, which are generated by a web application. Static websites often provide improved performance for end users and have fewer or no dependent systems, such as databases or application servers. They may also be more cost-effective and secure than dynamic websites by using cloud storage instead of a hosted environment.

A static site generator is a tool that generates a static website from a website’s configuration and content. Content can come from a headless content management system, through a REST API, or from data referenced within the website’s file system. The output of a static site generator is a set of static files that form the website.

Serverless Land uses a static site generator for Vue.js called Nuxt.js. Each time content is updated, Nuxt.js regenerates the static site, building the HTML for each page route and storing it in a file.
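For reference, a minimal sketch of the relevant nuxt.config.js settings might look like the following. This is illustrative only; the full project configuration contains more options than shown here.

export default {
  // Tell Nuxt.js to pre-render every route to static HTML
  target: 'static',
  generate: {
    // Output directory served by a static hosting service
    dir: 'dist'
  }
}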

The architecture

Serverless Land static website architecture

When the content.json file is committed to GitHub, a new build process is triggered in AWS Amplify Console.

Deploying AWS Amplify

AWS Amplify helps developers to build secure and scalable full stack cloud applications. AWS Amplify Console is a tool within Amplify that provides a user interface with a git-based workflow for hosting static sites. Deploy applications by connecting to an existing repository (GitHub, Bitbucket Cloud, GitLab, or AWS CodeCommit) to set up a fully managed, continuous deployment pipeline.

This means that any changes committed to the repository trigger the pipeline to build, test, and deploy the changes to the target environment. It also provides instant content delivery network (CDN) cache invalidation, atomic deploys, password protection, and redirects without the need to manage any servers.

Building the static website

  1. To get started, use the Nuxt.js scaffolding tool to deploy a boilerplate application. Make sure you have npx installed (npx is shipped by default with npm version 5.2.0 and above).
    $ npx create-nuxt-app content-aggregator

    The scaffolding tool asks some questions. Answer as follows:
    Nuxt.js scaffolding tool inputs

  2. Navigate to the project directory and launch it with:
    $ cd content-aggregator
    $ npm run dev

    The application is now running on http://localhost:3000. The pages directory contains your application views and routes. Nuxt.js reads the .vue files inside this directory and automatically creates the router configuration.

  3. Create a new file in the /pages directory named blogs.vue:
    $ touch pages/blogs.vue
  4. Copy the contents of this file into pages/blogs.vue.
  5. Create a new file in the /components directory named Post.vue:
    $ touch components/Post.vue
  6. Copy the contents of this file into components/Post.vue.
  7. Create a new file in the /assets directory named content.json and copy the contents of this file into it:
    $ touch assets/content.json

The blogs Vue component

The blogs page is a Vue component with some special attributes and functions added to make development of your application easier. The following code imports the content.json file into the variable blogPosts. This file stores the static website’s array of aggregated blog post content.

import blogPosts from '../assets/content.json'

An array named blogPosts is initialized:

data(){
    return{
      blogPosts: []
    }
  },

The array is then loaded with the contents of content.json.

 mounted(){
    this.blogPosts = blogPosts
  },

In the component template, the v-for directive renders a list of post items based on the blogPosts array. It requires a special syntax in the form of blog in blogPosts, where blogPosts is the source data array and blog is an alias for the array element being iterated on. The Post component is rendered for each iteration. Since components have isolated scopes of their own, a :post prop is used to pass the iterated data into the Post component:

<ul>
  <li v-for="blog in blogPosts" :key="blog">
     <Post :post="blog" />
  </li>
</ul>

The post data is then displayed by the following template in components/Post.vue.

<template>
    <div class="hello">
      <h3>{{ post.title }} </h3>
      <div class="img-holder">
          <img :src="post.image" />
      </div>
      <p>{{ post.intro }} </p>
      <p>Published on {{ post.date }}, by {{ post.author }}</p>
      <a :href="post.link"> Read article</a>
    </div>
</template>

This forms the framework for the static website. The /blogs page displays content from /assets/content.json via the Post component. To view this, go to http://localhost:3000/blogs in your browser:

The /blogs page

Add a new item to the content.json file and rebuild the static website to display new posts on the blogs page. The previous content was generated using the aggregation workflow explained in this companion blog post.
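For example, a new entry might look like the following sketch. The field names match those used by the Post component template above; the values here are placeholders.

{
  "title": "Example blog post title",
  "image": "https://example.com/header-image.png",
  "intro": "A short excerpt from the aggregated blog post.",
  "date": "2020-10-01",
  "author": "Example Author",
  "link": "https://example.com/example-blog-post/"
}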

Connect to Amplify Console

Clone the web application to a GitHub repository and connect it to Amplify Console to automate the rebuild and deployment process:

  1. Upload the code to a new GitHub repository named ‘content-aggregator’.
  2. In the AWS Management Console, go to the Amplify Console and choose Connect app.
  3. Choose GitHub then Continue.
  4. Authorize to your GitHub account, then in the Recently updated repositories drop-down select the ‘content-aggregator’ repository.
  5. In the Branch field, leave the default as master and choose Next.
  6. In the Build and test settings, choose Edit.
  7. Replace - npm run build with - npm run generate.
  8. Replace baseDirectory: / with baseDirectory: dist.

    This runs the nuxt generate command each time an application build process is triggered. The nuxt.config.js file has a target property set to static, so the web application is generated as static files. Nuxt.js creates a dist directory with everything inside it ready to be deployed on a static hosting service. A sketch of the resulting build specification follows this procedure.
  9. Choose Save then Next.
  10. Review that the Repository details and App settings are correct. Choose Save and deploy.

    Amplify Console deployment
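The build settings from steps 7 and 8 correspond to a build specification roughly like the following sketch. The preBuild and cache sections shown here are typical defaults and may differ from the settings Amplify Console generates for your app:

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        # Generates the static site into the dist directory
        - npm run generate
  artifacts:
    baseDirectory: dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*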

Once the deployment process has completed and is verified, choose the URL generated by Amplify Console. Append /blogs to the URL to see the static website’s blogs page.

Any edits pushed to the repository’s content.json file trigger a new deployment in Amplify Console that regenerates the static website. This companion blog post explains how to set up an automated content aggregator to add new items to the content.json file from an RSS feed.

Conclusion

This blog post shows how to create a static website with Vue.js using the Nuxt.js static site generator. The site’s content is generated from a single JSON file, stored in the site’s assets directory. It is automatically deployed and regenerated by Amplify Console each time a new commit is pushed to the GitHub repository. By automating updates to the content.json file, you can create low-maintenance, low-latency static websites with almost limitless scalability.

This application framework is used together with this automated content aggregator to pull together articles for http://serverlessland.com. Serverless Land brings together all the latest blogs, videos, and training for AWS Serverless. Download the code from this GitHub repository to start building your own automated content aggregation platform.

Building Serverless Land: Part 1 – Automating content aggregation

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-serverless-land-part-1-automating-content-aggregation/

In this two part blog series, I show how serverlessland.com is built. This is a static website that brings together all the latest blogs, videos, and training for AWS Serverless. It automatically aggregates content from a number of sources. The content exists in static JSON files, which generate a new site build each time they are updated. The result is a low-maintenance, low-latency serverless website, with almost limitless scalability.

This blog post explains how to automate the aggregation of content from multiple RSS feeds into a JSON file stored in GitHub. This workflow uses AWS Lambda and AWS Step Functions, triggered by Amazon EventBridge. The application can be downloaded and deployed from this GitHub repository.

The growing adoption of serverless technologies generates increasing amounts of helpful and insightful content from the developer community. This content can be difficult to discover. Serverless Land helps channel this into a single searchable location. By automating the collection of this content with scheduled serverless workflows, the process scales robustly as the number of content sources grows. The Step Functions MAP state allows for dynamic parallel processing of multiple content sources, without the need to alter code. Onboarding a new content source is as fast and simple as running a single CLI command.

The architecture

Automating content aggregation with AWS Step Functions

The application consists of six Lambda functions orchestrated by a Step Functions workflow:

  1. The workflow is triggered every 2 hours by a scheduled EventBridge rule. The scheduled event passes an RSS feed URL to the workflow.
  2. The first task invokes a Lambda function that runs an HTTP GET request to the RSS feed. It returns an array of recent blog URLs. The array of blog URLs is provided as the input to a MAP state. The MAP state type makes it possible to run a set of steps for each element of an input array in parallel. The number of items in the array can be different for each execution. This is referred to as dynamic parallelism.
  3. The next task invokes a Lambda function that uses the GitHub REST API to retrieve the static website’s JSON content file.
  4. The first Lambda function in the MAP state runs an HTTP GET request to the blog post URL provided in the payload. The URL is scraped for content and an object containing detailed metadata about the blog post is returned in the response.
  5. The blog post metadata is compared against the website’s JSON content file in GitHub.
  6. A CHOICE state determines if the blog post metadata has already been committed to the repository.
  7. If the blog post is new, it is added to an array of “content to commit”.
  8. As the workflow exits the MAP state, the results are passed to the final Lambda function. This uses a single git commit to add each blog post object to the website’s JSON content file in GitHub. This triggers an event that rebuilds the static site.

Using Secrets in AWS Lambda

Two of the Lambda functions require a GitHub personal access token to commit files to a repository. Sensitive credentials or secrets such as this should be stored separately from the function code. Use AWS Systems Manager Parameter Store to store the personal access token as an encrypted string. The AWS Serverless Application Model (AWS SAM) template grants each Lambda function permission to access and decrypt the string in order to use it.
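As a sketch of how this permission might be expressed, the AWS SAM SSMParameterReadPolicy policy template grants read and decrypt access to a named parameter. The handler and runtime values here are illustrative assumptions, not the repo’s exact template:

WriteToGitHub:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler          # illustrative value
    Runtime: nodejs12.x           # illustrative value
    Policies:
      # Grants ssm:GetParameter and kms:Decrypt for this parameter
      - SSMParameterReadPolicy:
          ParameterName: GitHubAPIKey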

  1. Follow these steps to create a personal access token that grants permission to update files in repositories in your GitHub account.
  2. Use the AWS Command Line Interface (AWS CLI) to create a new parameter named GitHubAPIKey:
aws ssm put-parameter \
--name /GitHubAPIKey \
--value ReplaceThisWithYourGitHubAPIKey \
--type SecureString

The command returns the parameter version and tier:

{
    "Version": 1,
    "Tier": "Standard"
}

Deploying the application

  1. Fork this GitHub repository to your GitHub Account.
  2. Clone the forked repository to your local machine and deploy the application using AWS SAM.
  3. In a terminal, enter:
    git clone https://github.com/aws-samples/content-aggregator-example
    sam deploy -g
  4. Enter the required parameters when prompted.

This deploys the application defined in the AWS SAM template file (template.yaml).

The business logic

Each Lambda function is written in Node.js and is stored inside a directory that contains its package dependencies in a `node_modules` folder. These are defined for each function in its own package.json file. The function dependencies are bundled and deployed using the sam build && sam deploy -g commands.

The GetRepoContents and WriteToGitHub Lambda functions use the octokit/rest.js library to communicate with GitHub. The library authenticates to GitHub by using the GitHub API key held in Parameter Store. The AWS SDK for Node.js is used to obtain the API key from Parameter Store. With a single synchronous call, it retrieves and decrypts the parameter value. This is then used to authenticate to GitHub.

const AWS = require('aws-sdk');
const { Octokit } = require('@octokit/rest');
const SSM = new AWS.SSM();


//get GitHub API key and authenticate
    const singleParam = { Name: '/GitHubAPIKey', WithDecryption: true };
    const GITHUB_ACCESS_TOKEN = await SSM.getParameter(singleParam).promise();
    const octokit = new Octokit({
      auth: GITHUB_ACCESS_TOKEN.Parameter.Value,
    })

Lambda environment variables are used to store non-sensitive key-value data, such as the repository name and JSON file location. These can be entered when deploying with the AWS SAM guided deploy command.

Environment:
        Variables:
          GitHubRepo: !Ref GitHubRepo
          JSONFile: !Ref JSONFile

The GetRepoContents function makes a synchronous HTTP request to the GitHub repository to retrieve the contents of the website’s JSON file. The response SHA and file contents are returned from the Lambda function and act as the input to the next task in the Step Functions workflow. This SHA is used in the final step of the workflow to save all new blog posts in a single commit.
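A minimal sketch of that retrieval might look like the following, assuming the octokit client created earlier and an environment variable holding the repository owner (older versions of the library name this call repos.getContents):

// Retrieve the JSON content file and its SHA from GitHub (sketch)
const { data } = await octokit.repos.getContent({
  owner: process.env.GitHubUser,      // assumed environment variable
  repo: process.env.GitHubRepo,
  path: process.env.JSONFile
});

// The file body is base64 encoded; the SHA is needed for the final commit
const blogs = JSON.parse(Buffer.from(data.content, 'base64').toString('utf8'));
return { sha: data.sha, blogs };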

Map state iterations

The MAP state runs concurrently for each element in the input array (each blog post URL).

Each iteration must compare a blog post URL to the existing JSON content file and decide whether to ignore the post. To do this, the MAP state requires both the input array of blog post URLs and the existing JSON file contents. The ItemsPath, ResultPath, and Parameters are used to achieve this:

  • The ItemsPath sets the input array path to $.RSSBlogs.body.
  • The ResultPath states that the output of the branches is placed in $.mapResults.
  • The Parameters block replaces the input to the iterations with a JSON node. This contains both the current item data from the context object ($$.Map.Item.Value) and the contents of the GitHub JSON file ($.RepoBlogs).
"Type":"Map",
    "InputPath": "$",
    "ItemsPath": "$.RSSBlogs.body",
    "ResultPath": "$.mapResults",
    "Parameters": {
        "BlogUrl.$": "$$.Map.Item.Value",
        "RepoBlogs.$": "$.RepoBlogs"
     },
    "MaxConcurrency": 0,
    "Iterator": {
       "StartAt": "getMeta",

The Step Functions resource

The AWS SAM template uses the following Step Functions resource definition to create a Step Functions state machine:

  MyStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: statemachine/my_state_machine.asl.json
      DefinitionSubstitutions:
        GetBlogPostArn: !GetAtt GetBlogPost.Arn
        GetUrlsArn: !GetAtt GetUrls.Arn
        WriteToGitHubArn: !GetAtt WriteToGitHub.Arn
        CompareAgainstRepoArn: !GetAtt CompareAgainstRepo.Arn
        GetRepoContentsArn: !GetAtt GetRepoContents.Arn
        AddToListArn: !GetAtt AddToList.Arn
      Role: !GetAtt StateMachineRole.Arn

The workflow definition itself is held in a separate file (statemachine/my_state_machine.asl.json). The DefinitionSubstitutions property specifies mappings for placeholder variables. This enables the template to inject Lambda function ARNs obtained by the GetAtt intrinsic function during template translation:

Step Functions mappings with placeholder variables
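For illustration, a task state in the ASL file that uses one of these placeholders might look like the following sketch; the state names are assumptions based on the function names above, not the repo’s exact definition:

"GetUrls": {
  "Type": "Task",
  "Resource": "${GetUrlsArn}",
  "Next": "GetRepoContents"
}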

A state machine execution role is defined within the AWS SAM template. It grants the lambda:InvokeFunction action, tightly scoped to the six Lambda functions used in the workflow. This is the minimum set of permissions required for the Step Functions workflow to carry out its task. Additional permissions can be granted only as necessary, following the zero-trust security model.

Action: lambda:InvokeFunction
Resource:
- !GetAtt GetBlogPost.Arn
- !GetAtt GetUrls.Arn
- !GetAtt CompareAgainstRepo.Arn
- !GetAtt WriteToGitHub.Arn
- !GetAtt AddToList.Arn
- !GetAtt GetRepoContents.Arn

The Step Functions workflow definition is authored using the AWS Toolkit for Visual Studio Code. The Step Functions support allows developers to quickly generate workflow definitions from selectable examples. The render tool and automatic linting can help you debug and understand the workflow during development. Read more about the toolkit in this launch post.

Scheduling events and adding new feeds

The AWS SAM template creates a new EventBridge rule on the default event bus. This rule is scheduled to invoke the Step Functions workflow every 2 hours. A valid JSON string containing an RSS feed URL is sent as the input payload. The feed URL is obtained from a template parameter and can be set on deployment. The AWS Compute Blog is set as the default feed URL. To aggregate additional blog feeds, create a new rule to invoke the Step Functions workflow. Provide the RSS feed URL as a valid JSON input string in the following format:

{"feedUrl":"replace-this-with-your-rss-url"}

ScheduledEventRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "Scheduled event to trigger Step Functions state machine"
      ScheduleExpression: rate(2 hours)
      State: "ENABLED"
      Targets:
        -
          Arn: !Ref MyStateMachine
          Id: !GetAtt MyStateMachine.Name
          RoleArn: !GetAtt ScheduledEventIAMRole.Arn
          Input: !Sub
            - >
              {
                "feedUrl" : "${RssFeedUrl}"
              }
            - RssFeedUrl: !Ref RSSFeed
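As a sketch, an additional feed could be added with two AWS CLI commands similar to the following, substituting your own state machine ARN, an IAM role that EventBridge can assume to start executions, and the new feed URL:

aws events put-rule \
  --name additional-feed-rule \
  --schedule-expression "rate(2 hours)"

aws events put-targets \
  --rule additional-feed-rule \
  --targets '[{"Id":"1","Arn":"<state-machine-arn>","RoleArn":"<events-role-arn>","Input":"{\"feedUrl\":\"https://example.com/feed/\"}"}]'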

A completed workflow with step output

Conclusion

This blog post shows how to automate the aggregation of content from multiple RSS feeds into a single JSON file using serverless workflows.

The Step Functions MAP state allows for dynamic parallel processing of each item. The recent increase in the state payload size limit means that the contents of the static JSON file can be held within the workflow context. The application decision logic is separated from the business logic and events.

Lambda functions are scoped to finite business logic with Step Functions states managing decision logic and iterations. EventBridge is used to manage the inbound business events. The zero-trust security model is followed with minimum permissions granted to each service and Parameter Store used to hold encrypted secrets.

This application is used to pull together articles for http://serverlessland.com. Serverless Land brings together all the latest blogs, videos, and training for AWS Serverless. Download the code from this GitHub repository to start building your own automated content aggregation platform.

Using serverless backends to iterate quickly on web apps – part 3

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-serverless-backends-to-iterate-quickly-on-web-apps-part-3/

This series is about building flexible backends for web applications. The example is Happy Path, a web app that allows park visitors to upload and share maps and photos, to help replace printed materials.

In part 1, I show how the application works and walk through the backend architecture. In part 2, you deploy a basic image-processing workflow. To do this, you use an AWS Serverless Application Model (AWS SAM) template to create an AWS Step Functions state machine.

In this post, I show how to deploy progressively more complex workflow functionality while using minimal code. The solution in this web app is designed for 100,000 monthly active users. Using this approach, you can introduce more sophisticated workflows without affecting the existing backend application.

The code and instructions for this application are available in the GitHub repo.

Introducing image moderation

In the first version, users could upload any images to parks on the map. One of the most important feature requests is to enable moderation, to prevent inappropriate images from appearing on the app. For a large number of uploaded images, human moderation would be slow and labor-intensive.

In this section, you use Amazon ML services to automate analyzing the images for unsafe content. Amazon Rekognition provides an API to detect if an image contains moderation labels. These labels are categorized into different types of unsafe content that would not be appropriate for this app.

Version 2 of the workflow uses this API to automate the process of checking images. To install version 2:

  1. From a terminal window, delete the v1 workflow stack:
    aws cloudformation delete-stack --stack-name happy-path-workflow-v1
  2. Change directory to the version 2 AWS SAM template in the repo:
    cd .\workflows\templates\v2
  3. Build and deploy the solution:
    sam build
    sam deploy --guided
  4. The deploy process prompts you for several parameters. Enter happy-path-workflow-v2 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.

From VS Code, open the v2 state machine in the repo from workflows/statemachines/v2.asl.json. Choose the Render graph option in the CodeLens to see the workflow visualization.

Serverless workflow visualization

This new workflow introduces a Moderator step. This invokes a Moderator Lambda function that uses the Amazon Rekognition API. If this API identifies any unsafe content labels, it returns these as part of the function output.
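A minimal sketch of such a moderation check, using the DetectModerationLabels API from the AWS SDK for JavaScript, might look like this; the event shape and confidence threshold are illustrative assumptions rather than the repo’s exact code:

const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();

exports.handler = async (event) => {
  // Check the uploaded S3 object for unsafe content
  const result = await rekognition.detectModerationLabels({
    Image: {
      S3Object: { Bucket: event.bucket, Name: event.key }   // assumed event shape
    },
    MinConfidence: 60
  }).promise();

  // Any returned labels indicate potentially unsafe content
  return { ...event, moderationLabels: result.ModerationLabels };
};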

The next step in the workflow is a Moderation result choice state. This evaluates the output of the previous function – if the image passes moderation, the process continues to the Resizer function. If it fails, execution moves to the RecordFailState step.

Step Functions integrates directly with some AWS services so that you can call and pass parameters into the APIs of those services. The RecordFailState uses an Amazon DynamoDB service integration to write the workflow failure to the application table, using the arn:aws:states:::dynamodb:updateItem resource.
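A sketch of such a service integration in ASL might look like the following; the table key schema and attribute names here are assumptions, not the repo’s exact definition:

"RecordFailState": {
  "Type": "Task",
  "Resource": "arn:aws:states:::dynamodb:updateItem",
  "Parameters": {
    "TableName": "hp-application",
    "Key": {
      "PK": { "S.$": "$.placeId" }
    },
    "UpdateExpression": "SET uploadState = :failed",
    "ExpressionAttributeValues": {
      ":failed": { "S": "FAILED" }
    }
  },
  "End": true
}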

Testing the workflow

To test moderation, I use an unsafe image with suggestive content. This is an image that is not considered appropriate for this application. To test the deployed v2 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a park location, choose Show Details, and then choose Upload images.
  3. Select an unsafe image to upload.
  4. Navigate to the Step Functions console. This shows the v2StateMachine with one successful execution:
    State machine result
  5. Select the state machine, and choose the execution to display more information. Scroll down to the Visual workflow panel.
    Visual workflow panel

This shows that the moderation failed and the path continued to log the failed state in the database. If you open the Output resource, this displays more details about why the image is considered unsafe.

Checking the image size and file type

The upload component in the frontend application limits file selection to JPG images but there is no check to reject images that are too small. It’s prudent to check and enforce image types and sizes on the backend API in addition to the frontend. This is because it’s possible to upload images via the API without using the frontend.

The next version of the workflow enforces image sizes and file types. To install version 3:

  1. From a terminal window, delete the v2 workflow stack:
    aws cloudformation delete-stack --stack-name happy-path-workflow-v2
  2. Change directory to the version 3 AWS SAM template in the repo:
    cd .\workflows\templates\v3
  3. Build and deploy the solution:
    sam build
    sam deploy --guided
  4. The deploy process prompts you for several parameters. Enter happy-path-workflow-v3 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.

From VS Code, open the v3 state machine in the repo from workflows/statemachines/v3.asl.json. Choose the Render graph option in the CodeLens to see the workflow visualization.

v3 state machine

This workflow changes the starting point of the execution, introducing a Check Dimensions step. This invokes a Lambda function that checks the size and type of the Amazon S3 object using the image-size npm package. This function uses environment variables provided by the AWS SAM template to compare against a minimum size and an allowed type array.
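A minimal sketch of this check, assuming environment variables named minPixels and allowedTypes and a simplified event shape, might look like the following:

const AWS = require('aws-sdk');
const sizeOf = require('image-size');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Load the uploaded object from S3 (bucket/key location assumed in the event)
  const { Body } = await s3.getObject({
    Bucket: event.bucket,
    Key: event.key
  }).promise();

  // image-size returns the width, height, and file type of the buffer
  const { width, height, type } = sizeOf(Body);
  const allowedTypes = process.env.allowedTypes.split(',');

  return {
    ...event,
    dimensions: { width, height, type },
    sizeOk: width >= parseInt(process.env.minPixels, 10) && allowedTypes.includes(type)
  };
};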

The output is evaluated by the Dimension Result choice state. If the image is larger than the minimum size allowed, execution continues to the Moderator function as before. If not, it passes to the RecordFailState step to log the result in the database.

Testing the workflow

To test, I use an image that’s narrower than the minPixels value. To test the deployed v3 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a park location, choose Show Details, and then choose Upload images.
  3. Select an image with a width smaller than 800 pixels. After a few seconds, a rejection message appears:
    "Image is too small" message
  4. Navigate to the Step Functions console. This shows the v3StateMachine with one successful execution. Choose the execution to show more detail.
    Execution output

The execution shows that the Check Dimensions step added the image dimensions to the event object. Next, the Dimensions Result choice state rejected the image, and logged the result at the RecordFailState step. The application’s DynamoDB table now contains details about the failed upload:

DynamoDB item details

Pivoting the application to a new concept

Until this point, the Happy Path web application is designed to help park visitors share maps and photos. This is the development team’s original idea behind the app. During the product-market fit stage of development, it’s common for applications to pivot substantially from the original idea. For startups, it can be critical to have the agility to modify solutions quickly to meet the needs of customers.

In this scenario, the original idea has been less successful than predicted, and park visitors are not adopting the app as expected. However, the business development team has identified a new opportunity. Restaurants would like an app that allows customers to upload menus and food photos. How can the development team create a new proof-of-concept app for restaurant customers to test this idea?

In this version, you modify the application to work for restaurants. While features continue to be added to the parks workflow, it now supports business logic specifically for the restaurant app.

To create the v4 workflow and update the frontend:

  1. From a terminal window, delete the v3 workflow stack:
    aws cloudformation delete-stack --stack-name happy-path-workflow-v3
  2. Change directory to the version 4 AWS SAM template in the repo:
    cd .\workflows\templates\v4
  3. Build and deploy the solution:
    sam build
    sam deploy --guided
  4. The deploy process prompts you for several parameters. Enter happy-path-workflow-v4 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.
  5. Open frontend/src/main.js and update the businessType variable on line 63. Set this value to ‘restaurant’.
    Change config to restaurants
  6. Start the local development server:
    npm run serve
  7. Open the application at http://localhost:8080. This now shows restaurants in the local area instead of parks.

In the Step Functions console, select the v4StateMachine to see the latest workflow, then open the Definition tab to see the visualization:

Workflow definition

This workflow starts with steps that apply to both parks and restaurants – checking the image dimensions. Next, it determines the place type from the placeId record in DynamoDB. Depending on place type, it now follows a different execution path:

  • Parks continue to run the automated moderator process, then the resizer, and publish the result.
  • Restaurants now use Amazon Rekognition to determine the labels in the image. Any photos containing people are rejected. Next, the workflow continues to the resizer and publish process.
  • Other business types go to the RecordFailState step since they are not supported.

Testing the workflow

To test the deployed v4 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a restaurant, choose Show Details, and then choose Upload images.
  3. Select an image from the test photos dataset. After a few seconds, you see a message confirming the photo has been added.
  4. Next, select an image that contains one or more people. The new restaurant workflow rejects this type of photo:
    "Image rejected" message
  5. In the Step Functions console, select the last execution for the v4StateMachine to see how the Check for people step rejected the image:
    v4 workflow

If other business types are added later to the application, you can extend the Step Functions workflow accordingly. The cost of Step Functions is based on the number of state transitions in an execution, not the total number of steps in a workflow. This means you can branch by business type in the Happy Path application without affecting the overall cost of running an execution, provided the total number of transitions per execution stays the same.

Conclusion

Previously in this series, you deploy a simple workflow for processing image uploads in the Happy Path web application. In this post, you add progressively more complex functionality by deploying new versions of workflows.

The first iteration introduces image moderation using Amazon Rekognition, providing the ability to automate the evaluation of unsafe content. Next, the workflow is modified to check image size and file type. This allows you to reject any images that are too small or do not meet the type requirements. Finally, I show how to expand the logic further to accept other business types with their own custom workflows.

To learn more about building serverless web applications, see the Ask Around Me series.

Using serverless backends to iterate quickly on web apps – part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-serverless-backends-to-iterate-quickly-on-web-apps-part-2/

This series is about building flexible solutions that can adapt as user requirements change. One of the challenges of building modern web applications is that requirements can change quickly. This is especially true for new applications that are finding their product-market fit. Many development teams start building a product with one set of requirements, and quickly find they must build a product with different features.

For both start-ups and enterprises, it’s often important to find a development methodology and architecture that allows flexibility. This is the surest way to keep up with feature requests in evolving products and innovate to delight your end-users. In this post, I show how to build sophisticated workflows using minimal custom code.

Part 1 introduces the Happy Path application that allows park visitors to share maps and photos with other users. In that post, I explain the functionality, how to deploy the application, and walk through the backend architecture.

The Happy Path application accepts photo uploads from users’ smartphones. The application architecture must support 100,000 monthly active users. These binary uploads are typically 3–9 MB in size and must be resized and optimized for efficient distribution.

Using a serverless approach, you can develop a robust low-code solution that can scale to handle millions of images. Additionally, the solution shown here is designed to handle complex changes that are introduced in subsequent versions of the software. The code and instructions for this application are available in the GitHub repo.

Architecture overview

After installing the backend in the previous post, the architecture looks like this:

In this design, the API, storage, and notification layers exist as one application, and the business logic layer is a separate application. These two applications are deployed using AWS Serverless Application Model (AWS SAM) templates. This architecture uses Amazon EventBridge to pass events between the two applications.

In the business logic layer:

  1. The workflow starts when events are received from EventBridge. Each time a new object is uploaded by an end-user, the PUT event in the Amazon S3 Upload bucket triggers this process.
  2. After the workflow is completed successfully, processed images are stored in the Distribution bucket. Related metadata for the object is also stored in the application’s Amazon DynamoDB table.

By separating the architecture into two independent applications, you can replace the business logic layer as needed. Provided that the workflow accepts incoming events and then stores processed images in the S3 bucket and DynamoDB table, the workflow logic is interchangeable. Using this pattern, the workflow can be upgraded to handle new functionality.

Introducing AWS Step Functions for workflow management

One of the challenges in building distributed applications is coordinating components. These systems are composed of separate services, which makes orchestrating workflows more difficult than working with a single monolithic application. As business logic grows more complex, attempting to manage it in custom code can quickly become convoluted. This is especially true if it handles retries and error handling logic, and it can be hard to test and maintain.

AWS Step Functions is designed to coordinate and manage these workflows in distributed serverless applications. To do this, you create state machine diagrams using Amazon States Language (ASL). Step Functions renders a visualization of your state machine, which makes it simpler to see the flow of data from one service to another.

Each state machine consists of a series of steps. Each step takes an input and produces an output. Using ASL, you define how this data progresses through the state machine. The flow from step to step is called a transition. All state machines transition from a Start state towards an End state.

The Step Functions service manages the state of individual executions. The service also supports versioning, which makes it easier to modify state machines in production systems. Executions continue to use the version of a state machine that was current when they were started, so it’s possible to have active executions running on multiple versions.

For developers using VS Code, the AWS Toolkit extension provides support for writing state machines using ASL. It also renders visualizations of those workflows. Combined with AWS Serverless Application Model (AWS SAM) templates, this provides a powerful way to deploy and maintain applications based on Step Functions. I refer to this IDE and AWS SAM in this walkthrough.

Version 1: Image resizing

The Happy Path application uses Step Functions to manage the image-processing part of the backend. The first version of this workflow resizes the uploaded image.

To see this workflow:

  1. In VS Code, open the workflows/statemachines folder in the Explorer panel.
  2. Choose the v1.asl.json file.
    v1 state machine
  3. Choose the Render graph option in the CodeLens. This opens the workflow visualization.
    CodeLens - Render graph

In this basic workflow, the state machine starts at the Resizer step, then progresses to the Publish step before ending:

  • In the top-level attributes of the definition, StartAt sets the Resizer step as the first action (a minimal sketch of this definition follows this list).
  • The Resizer step is defined as a task with an ARN of a Lambda function. The Next attribute determines that the Publish step is next.
  • In the Publish step, this task defines a Lambda function using an ARN reference. It sets the input payload as the entire JSON payload. This step is set as the End of the workflow.
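A minimal sketch of this definition might look like the following; ${ResizerFunctionArn} matches the substitution token described later in this post, while the Publish placeholder name is an assumption:

{
  "Comment": "Sketch of the v1 workflow described above",
  "StartAt": "Resizer",
  "States": {
    "Resizer": {
      "Type": "Task",
      "Resource": "${ResizerFunctionArn}",
      "Next": "Publish"
    },
    "Publish": {
      "Type": "Task",
      "Resource": "${PublishFunctionArn}",
      "InputPath": "$",
      "End": true
    }
  }
}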

Deploying the Step Functions workflow

To deploy the state machine:

  1. In the terminal window, change directory to the workflows/templates/v1 folder in the repo.
  2. Execute these commands to build and deploy the AWS SAM template:
    sam build
    sam deploy --guided
  3. The deploy process prompts you for several parameters. Enter happy-path-workflow-v1 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.
    SAM deployment output

Testing and inspecting the deployed workflow

Now that the workflow is deployed, you can perform an integration test directly from the frontend application.

To test the deployed v1 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a park location, choose Show Details, and then choose Upload images.
  3. Select an image from the sample photo dataset.
  4. After a few seconds, you see a pop-up message confirming that the image has been added:
    Upload confirmation message
  5. Select the same park location again, and the information window now shows the uploaded image:
    Happy Path - park with image data

To see how the workflow processed this image:

  1. Navigate to the Step Functions console.
  2. Here you see the v1StateMachine with one execution in the Succeeded column.
    Successful execution view
  3. Choose the state machine to display more information about the start and end time.
    State machine detail
  4. Select the execution ID in the Executions panel to open details of this single instance of the workflow.

This view shows important information that’s useful for understanding and debugging an execution. Under Input, you see the event passed into Step Functions by EventBridge:

Event detail from EventBridge

This contains detail about the S3 object event, such as the bucket name and key, together with the placeId, which identifies the location on the map. Under Output, you see the final result from the state machine, which shows a successful StatusCode (200) and other metadata:

Event output from the state machine

Using AWS SAM to define and deploy Step Functions state machines

The AWS SAM template defines the state machine, the trigger for executions, and the permissions needed for Step Functions to execute. The AWS SAM resource for a Step Functions definition is AWS::Serverless::StateMachine.

Definition permissions in state machines

In this example:

  • DefinitionUri refers to an external ASL definition, instead of embedding the JSON in the AWS SAM template directly.
  • DefinitionSubstitutions allow you to use tokens in the ASL definition that refer to resources created in the AWS SAM template. For example, the token ${ResizerFunctionArn} refers to the ARN of the resizer Lambda function.
  • Events define how the state machine is invoked. Here it defines an EventBridge rule. If an event matches this source and detail-type, it triggers an execution.
  • Policies: the Step Functions service must have permission to invoke the services that perform tasks in the state machine. AWS SAM policy templates provide a convenient shorthand for common execution policies, such as invoking a Lambda function.

This workflow application is separate from the main backend template. As more functionality is added to the workflow, you deploy the subsequent AWS SAM templates in the same way.
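Putting these elements together, a sketch of such a resource might look like the following. The resource name, event pattern values, and function names are illustrative assumptions rather than the repo’s exact template:

ImageProcessingStateMachine:
  Type: AWS::Serverless::StateMachine
  Properties:
    DefinitionUri: statemachine/v1.asl.json
    DefinitionSubstitutions:
      ResizerFunctionArn: !GetAtt ResizerFunction.Arn
    Events:
      UploadCompleteEvent:
        Type: EventBridgeRule
        Properties:
          Pattern:
            source:
              - custom.happyPath        # assumed event source name
            detail-type:
              - uploadComplete          # assumed detail-type
    Policies:
      - LambdaInvokePolicy:
          FunctionName: !Ref ResizerFunction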

Conclusion

Using AWS SAM, you can specify serverless resources, configure permissions, and define substitutions for the ASL template. You can deploy a standalone Step Functions-based application using the AWS SAM CLI, separately from other parts of your application. This makes it easier to decouple and maintain larger applications. You can visualize these workflows directly in the VS Code IDE in addition to the AWS Management Console.

In part 3, I show how to build progressively more complex workflows and how to deploy these in-place without affecting the other parts of the application.

To learn more about building serverless web applications, see the Ask Around Me series.

Using serverless backends to iterate quickly on web apps – part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-serverless-backends-to-iterate-quickly-on-web-apps-part-1/

For many organizations, building applications is an iterative process where requirements change quickly. Traditional software architectures can be challenging to adapt to these changes. Often, early architectural decisions may limit the developers’ ability to deliver new features. Serverless architectural patterns are often much more adaptable, and can help developers keep pace with an evolving list of end-user requirements.

This blog series explores how to structure and build a serverless web app backend to enable the most flexibility for changing product requirements. It covers how to use serverless services in your architecture, and how to separate parts of the backend to make maintenance easier. I also show how you can use AWS Step Functions to encapsulate complex workflows and minimize the amount of custom code in your applications.

In this series:

  • Part 1: Deploy the application, test the upload process, and review the architecture.
  • Part 2: Understand how to use Step Functions, and deploy a custom workflow.
  • Part 3: Advanced workflows with custom branching and image moderation.

The code uses the AWS Serverless Application Model (AWS SAM), enabling you to deploy the application easily in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for usage beyond development and testing.

To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Introducing the “Happy Path” web application

In this scenario, a startup creates a web application called Happy Path. This app is designed to help state parks and nonprofit organizations replace printed materials, such as flyers and maps, with user-generated content. It allows visitors to capture images of park notices and photos of hiking trails. They can share these with other users to reduce printed waste.

The frontend displays and captures images of different locations, and the backend processes this data according to a set of business rules. This web application is designed for smartphones so it’s used while visitors are at the locations. Here is the typical user flow:

Happy Path user interface

  1. When park visitors first navigate to the site’s URL, it shows their current location with parks highlighted in the vicinity.
  2. The visitor selects a park. It shows thumbnails of any maps, photos, and images already uploaded by other users.
  3. If the visitor is logged in, they can upload their own images directly from their smartphone.

The first production version of this application provides a simple way for users to upload photos. It does little more than provide an uploading and sharing process.

However, the developer team quickly realizes that they must make some improvements. The developers need a way to implement complex, changing workflows on the backend without refactoring the code that is running in production. The architecture must also scale for an expected 100,000 monthly active users.

First, they want to optimize the large uploaded images to improve the speed of downloads. Next, they must also determine the suitability of images to ensure that the app only shows appropriate photos. There is also a rapidly growing list of feature requirements from organizations using the app.

In this series, I show how the development team can design the app to provide this level of flexibility. This way, they can implement new features and even pivot the core application if needed.

Deploying the application

In the GitHub repo, there are detailed deployment instructions in the README. The repo contains separate directories for the frontend, backend, and workflows. You must deploy the backend first. Once you have completed the deployment, you can run the frontend code on your local machine.

To launch the frontend application:

  1. Change to the frontend directory.
  2. Run npm run serve to start the development server. After building the modules in the project, the terminal shows the local URL where the application is running:
    Vue build completed
  3. Open a web browser and navigate to http://localhost:8080 to see the application.
  4. Open the developer console in your browser (for Google Chrome, Mozilla Firefox and Microsoft Edge, press F12 on the keyboard). This displays the application in a responsive layout and shows console logging. This can help you understand the flow of execution in the application.

Happy Path browser developer console

Testing the application

Now that you have deployed the backend to your AWS account and are running the frontend locally, you can test the application.

To upload an image for a location:

  1. Choose Log In and sign into the application, creating a new account if necessary.
  2. Select a location on the map to open the information window.
    Select a location on the map
  3. Choose Show Details, then choose Upload Images.
    Uploading images in Happy Path
  4. In the file picker dialog, select any one of the images from the sample photos dataset.

At this stage, the image is now uploaded to the S3 Uploads bucket on the backend. To verify this:

  1. Navigate to the Amazon S3 console.
  2. Choose the application’s upload bucket, then choose the folder name to open its contents. This shows the uploaded image.
    S3 bucket contents
  3. Navigate to the Amazon DynamoDB console.
  4. Select the hp-application table, then select the Items tab.
    DynamoDB table contents

There are two records shown:

  • The place listing: this item contains details about the selected park, such as the name and address.
  • The file metadata: this stores information about who uploaded the file, the timestamp, and the state of the upload.

At this stage, you have successfully tested that the frontend can upload images to the backend.

Architecture overview

After deploying the application using the repo’s README instructions, the backend architecture looks like this:

Happy Path backend architecture

There are five distinct functional areas for the backend application:

  1. API layer: when users interact with one of the API endpoints, this is processed by the API layer. Each API route invokes a Lambda function to complete its task, storing and fetching data from the storage layer.
  2. Storage layer: information about user uploads is persisted durably here. The application uses Amazon S3 buckets to store the binary objects, and a DynamoDB table for associated metadata.
  3. Notification layer: when images are uploaded, the PUT event triggers a Lambda function. This publishes the event to the Amazon EventBridge default event bus.
  4. Business logic layer: the customized business logic is encapsulated in AWS Step Functions workflows.
  5. Content distribution: the processed images are served via an Amazon CloudFront distribution to reduce latency and optimize delivery cost.

For future requirements, you can implement increasingly complex customized logic entirely within the business logic layer. All new workflow features are implemented here, without needing to modify other parts of the application.
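For example, a minimal sketch of the notification layer’s publishing function described in step 3 above might look like the following; the event source and detail-type names are assumptions and must match whatever rule starts the workflow:

const AWS = require('aws-sdk');
const eventBridge = new AWS.EventBridge();

exports.handler = async (event) => {
  // Triggered by the S3 PUT event when a user uploads an image
  const s3Record = event.Records[0].s3;

  await eventBridge.putEvents({
    Entries: [{
      EventBusName: 'default',
      Source: 'custom.happyPath',         // assumed source name
      DetailType: 'uploadComplete',       // assumed detail-type
      Detail: JSON.stringify({
        bucket: s3Record.bucket.name,
        key: s3Record.object.key
      })
    }]
  }).promise();
};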

Conclusion

This series is about using serverless backends to allow you to iterate quickly on web application functionality.

In this post, I introduce the Happy Path example web application. I show the main features of the application, enabling end-users to upload maps and photos to the backend application. I walk through the deployment of the backend and frontend applications. Finally, you test with a sample image upload.

In part 2, you will deploy the image processing and workflow part of the application. This series explores progressively more complicated workflows, and how to manage their deployment. I will discuss some architectural choices that help to build in flexibility and scalability when designing backend applications.

To learn more about building serverless web applications, see the Ask Around Me series.

Building a location-based, scalable, serverless web app – part 3

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-3/

In part 2, I cover the API configuration, geohashing algorithm, and real-time messaging architecture used in the Ask Around Me web application. These are needed for receiving and processing questions and answers, and sending results back to users in real time.

In this post, I explain the backend processing architecture, how data is aggregated, and how to deploy the final application to production. The code and instructions for this application are available in the GitHub repo.

Processing questions

The frontend sends new user questions to the backend via the POST questions API. While the predicted volume of questions is only 1,000 per hour, it’s possible for usage to spike unexpectedly. To help handle this load, the PostQuestions Lambda function puts incoming questions onto an Amazon SQS queue. The ProcessQuestions function takes messages from the Questions queue in batches of 10, and loads these into the Questions table in Amazon DynamoDB.

Questions processing architecture

This asynchronous process smooths out traffic spikes, ensuring that the application is not throttled by DynamoDB. It also provides consistent response times to the front-end POST request, since the API call returns as soon as the message is durably persisted to the queue.

Currently, the ProcessQuestions function does not parse or validate user questions. It would be easy to add message filtering at this stage, using Amazon Comprehend to detect sentiment or inappropriate language. These changes would increase the processing time per question, but by handling this asynchronously, the initial POST API latency is not adversely affected.

The ProcessQuestions function uses the Geo Library for Amazon DynamoDB that converts the question’s latitude and longitude into a geohash. This geohash attribute is one of the indexes in the underlying DynamoDB table. The GetQuestions function uses the same library to efficiently query questions based on proximity to the user.
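A minimal sketch of saving a question with this library might look like the following; the table name, hash key length, and attribute names are assumptions for illustration:

const AWS = require('aws-sdk');
const ddbGeo = require('dynamodb-geo');

// Configure the geo data manager against the questions table (name assumed)
const config = new ddbGeo.GeoDataManagerConfiguration(new AWS.DynamoDB(), 'Questions');
config.hashKeyLength = 5;
const myGeoTableManager = new ddbGeo.GeoDataManager(config);

// Store a question with its location; the geohash index attribute is added automatically
await myGeoTableManager.putPoint({
  RangeKeyValue: { S: question.ID },
  GeoPoint: {
    latitude: question.lat,
    longitude: question.lng
  },
  PutItemInput: {
    Item: {
      question: { S: question.text }
    }
  }
}).promise();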

There are a couple of different mechanisms used to pass information between the frontend and backend applications. When the frontend first initializes, it retrieves the current location of the user from the browser. It then calls the questions API to get a list of active questions within 5 miles of the current location. This retrieves the state up to this point in time. To receive notifications of new messages posted in the user’s area, the frontend also subscribes to the geohash topic in AWS IoT Core.

Processing answers

Answers processing architecture

The application allows two types of question that have different answer types. First, the rating questions accept an answer with a 0–5 score range. Second, the geography questions accept a geo-point, which is a latitude and longitude representing a location.

Similar to the way questions are handled, answers are also queued before processing. However, the PostAnswers Lambda function sends answers to different queues, depending on question type. Ratings messages are sent to the StarAnswers queue, while geography messages are routed to the GeoAnswers queue. Star ratings are saved as raw data in the Answers table by the ProcessAnswerStar function. Geography answers are first converted to a geohash before they are stored.

It’s possible for users to submit updates to their answers. For a star rating, the processing function simply saves the new score. For geography answers, if the updated answer contains a latitude and longitude close enough to the original answer, it results in the same geohash. This is due to the different aggregation processes used for these types of answers.

Aggregating data

In this application, the users asking questions are seeking aggregated answers instead of raw data. For example, “How do you rate the park?” shows an average score from users instead of thousands of individual ratings. To maintain performance, this aggregation occurs when new answers are saved to the database, not when the application fetches the question list.

The Answers table emits updates to a DynamoDB stream whenever new items are inserted or updated. The StreamSpecification parameter in the table definition is set to NEW_AND_OLD_IMAGES, meaning the stream record contains both the new and old item record.

New answers to questions are new items in the table, so the stream record only contains the new image. If users update their answers, this creates an updated item in the table, and the stream record contains both the new and old images of the item.

For star ratings, when receiving an updated rating, the Aggregation function uses both images to calculate the delta in the score. For example, if the old rating was 2 and the user changes this to 5, then the delta is 3. The summary score related to the answer is updated in the Questions table, using a DynamoDB update expression:

    const result = await myGeoTableManager.updatePoint({
      RangeKeyValue: { S: update }, 
      GeoPoint: {
        latitude: item.lat,
        longitude: item.lng
      },
      UpdateItemInput: {
        UpdateExpression: 'ADD answers :deltaAnswers, totalScore :deltaTotalScore',
        ExpressionAttributeValues: {
          ':deltaAnswers': { N: item.deltaAnswers.toString()},
          ':deltaTotalScore': { N: item.deltaValue.toString()}
        }
      }
    }).promise()

For geo-point ratings, the same approach is used but if the geohash changes, then the delta is -1 for the geohash in the old image, and +1 for the geohash in the new image. The update expression automatically creates a new geohash attribute on the DynamoDB item if it is not already present:

    const result = await myGeoTableManager.updatePoint({
      RangeKeyValue: { S: item.ID }, 
      GeoPoint: {
        latitude: item.lat,
        longitude: item.lng
      },
      UpdateItemInput: {
        UpdateExpression: `ADD ${item.geohash} :deltaAnswers, answers :deltaAnswers`,
        ExpressionAttributeValues: {
          ':deltaAnswers': { N: item.deltaAnswers.toString() }          
        }
      }
    }).promise()

By using a Lambda function as a DynamoDB stream processor, you can aggregate large amounts of data in near real time. The Questions and Answers tables have a one-to-many relationship – many answers belong to one question. As answers are saved, the aggregation process updates the summaries in the Questions table.

The Questions table also publishes updates to another DynamoDB stream. These are consumed by a Lambda function that sends the aggregated update to topics in AWS IoT Core. This is how updated scores are sent back to the frontend client application.
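A minimal sketch of that stream consumer might look like the following; the IoT data endpoint environment variable, topic structure, and item attribute names are assumptions:

const AWS = require('aws-sdk');
const iotData = new AWS.IotData({ endpoint: process.env.IOT_DATA_ENDPOINT });

exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === 'REMOVE') continue;

    // The stream record contains the updated question summary
    const item = record.dynamodb.NewImage;

    // Publish the aggregated totals to the question's geohash topic
    await iotData.publish({
      topic: `questions/${item.geohash.S}`,     // assumed topic structure
      qos: 0,
      payload: JSON.stringify({
        questionId: item.ID.S,
        answers: item.answers.N
      })
    }).promise();
  }
};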

Publishing to production with Amplify Console

At this point, you can run the application on your local development machine and view it via the localhost Vue.js server. Once you are ready to launch the application to users, you must deploy to production.

Single-page applications are easy to deploy publicly. The build process creates static HTML, JS, and CSS files. These can be served via Amazon S3 and Amazon CloudFront, together with any image and media assets used. The process of running the build process and managing the deployment can be automated using AWS Amplify Console.

In this walkthrough, I use GitHub as the repo provider. You can also use AWS CodeCommit, Bitbucket, GitLab, or upload the build directory from your machine.

To deploy the front end via Amplify Console:

  1. From the AWS Management Console, select the Services dropdown and choose AWS Amplify. From the initial splash screen, choose Get Started under Deploy.
  2. Select GitHub as the repository provider, then choose Continue.
  3. Follow the prompts to enable GitHub access, then select the repository dropdown and choose the repo. In the Branch dropdown, choose master. Choose Next.
  4. In the App build and test settings page, choose Next.
  5. In the Review page, choose Save and deploy.
  6. The final screen shows the deployment pipeline for the connected repo, starting at the Provision phase.

After a few minutes, the Build, Deploy, and Verify steps show green checkmarks. Open the URL in a browser, and you see that the application is now served by the public URL:

Ask Around Me - Deployed application

Finally, before logging in, you must add the URL to the list of allowed URLs in the Auth0 settings:

  1. Log into Auth0 and navigate to the dashboard.
  2. Choose Applications in the menu, then select Ask Around Me from the list of applications.
  3. On the Settings tab, add the application’s URL to Allowed Callback URLs, Allowed Logout URLs, and Allowed Web Origins. Separate each new value from the existing ones with a comma.
  4. Choose Save changes. This allows the newly published domain name to interact with Auth0 to authenticate your application’s users.

Anytime you push changes to the code repository, Amplify Console detects the commit and redeploys the application. If errors are detected, the existing version is presented to users. If there are no errors, the new version is served to visitors.

Conclusion

In the last part of this series, I show how the application queues posted questions and answers. I explain how this asynchronous approach smooths traffic spikes and helps maintain responsive APIs.

I cover how answers are collected from thousands of users and are aggregated using DynamoDB streams. These totals are saved as summaries in the Questions table, and live updates are pushed via AWS IoT Core back to the frontend.

Finally, I show how you can automate deployment using Amplify Console. By connecting the service directly with your code repository, it publishes and serves your application with no need to manually copy files.

To learn more about this application, see the accompanying GitHub repo.

Building a location-based, scalable, serverless web app – part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-2/

Part 1 introduces the Ask Around Me web application that allows users to send questions to other local users in real time. I explain the app’s functionality and how using a single-page application (SPA) framework complements a serverless backend. I configure Auth0 for authentication and show how to deploy the frontend and backend. I also introduce how SPA frontends can send and receive data using both a traditional API and real-time messaging via a WebSocket.

In this post, I review the backend architecture, Amazon API Gateway’s HTTP APIs, and the geohashing implementation. The code and instructions for this application are available in the GitHub repo.

Architecture overview

After deploying the application using the repo’s README.md instructions, the backend architecture looks like this:

Ask Around Me backend architecture

The Vue.js frontend primarily interacts with the backend via HTTP APIs using Amazon API Gateway. When users submit questions or answers, the data is sent via the POST API endpoints. When the frontend requests lists of questions or answers, this occurs via the GET API endpoints.

Incoming questions and answers are posted to separate Amazon SQS queues. These queues invoke AWS Lambda functions that process and store the data in the application’s Amazon DynamoDB tables. In the Questions table, the application saves geo-location data and aggregated statistics for each question. The Answers table maintains a record of user IDs and answers to ensure that each user can only post one answer per question.

When new answers are stored in the Answers tables, a DynamoDB stream triggers a Lambda aggregation function with the update. This calculates average scores for questions and aggregates data for the heat map, then stores the result in the main Questions table. When the Questions table is updated, this DynamoDB stream invokes the Publish Lambda function. This publishes updates to the relevant topic in AWS IoT Core, which the front-end application subscribes to.

Using HTTP APIs

API Gateway is a common integration service used between the frontend and backend of serverless web applications. You can choose between standard REST APIs and the newer HTTP APIs. The choice depends upon which features you need and the cost considerations for your workload.

This application uses JWT authentication via Auth0 and Lambda proxy integration, and both are supported by HTTP APIs. Many advanced features like API key management, Amazon Cognito integration, and usage plans are not required in this application. It’s also important to compare the cost of each service:

API type              Hourly      Daily          Annually
PUT questions         1,000       24,000         8,760,000
GET questions         50,000      1,200,000      438,000,000
PUT answers           10,000      240,000        87,600,000
Total API requests                               534,360,000

Estimated annual cost with REST APIs:   $1,870.26
Estimated annual cost with HTTP APIs:   $534.36

Using the predicted API usage covered in part 1, you can compare the overall cost of REST APIs and HTTP APIs. At an estimated $534 annually, the HTTP APIs option costs approximately 30% as much as the REST APIs option.

The AWS Serverless Application Model (SAM) template in the repo defines the HTTP API resource and CORS configuration. It also includes the Auth0 authorizer used to validate each API request:

  MyApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      Auth:
        Authorizers:
          MyAuthorizer:
            JwtConfiguration:
              issuer: !Ref Auth0issuer
              audience:
                - https://auth0-jwt-authorizer
            IdentitySource: "$request.header.Authorization"
        DefaultAuthorizer: MyAuthorizer

      CorsConfiguration:
        AllowMethods:
          - GET
          - POST
          - DELETE
          - OPTIONS
        AllowHeaders:
          - "*"   
        AllowOrigins: 
          - "*"   

With the HTTP API resource defined, each Lambda function has an event configuration referencing it. Because MyAuthorizer is set as the DefaultAuthorizer, all functions referencing the HTTP API automatically use the Auth0 authorizer:

  GetAnswersFunction: 
    Type: AWS::Serverless::Function
    Properties:
      Description: Get all answers for a question
      ... 
      Events:
        Get:
          Type: HttpApi
          Properties:
            Path: /answers/{Key}
            Method: get
            ApiId: !Ref MyApi    
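Inside each function, HTTP APIs make the validated token claims available on the incoming event. A minimal handler sketch (the claim and path parameter usage shown here is illustrative, not the repo’s exact code):

exports.handler = async (event) => {
  // With the JWT authorizer, the decoded claims are available on the request context –
  // for example, the Auth0 user ID in the 'sub' claim
  const userId = event.requestContext.authorizer.jwt.claims.sub

  // The {Key} path parameter from the /answers/{Key} route
  const key = event.pathParameters.Key

  // ... load the answers for this question, scoped to the calling user where needed

  return {
    statusCode: 200,
    body: JSON.stringify({ userId, key })
  }
}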

Using geohashing in web applications

A key part of the functionality in Ask Around Me is the ability to find and answer questions near the user. Given the expected volume of questions in this system, this requires an efficient way to query based upon location that maintains performance as traffic grows.

In a naïve implementation, you might compare the current geographical position of the user with the geo-location of each question and answer in the database. But with an expected 1,000 questions per hour, this would soon become a slow operation with O(n) performance.

A more efficient solution is geohashing. This divides the geographical area of the planet into a series of grid cells that are identified by an alphanumeric hash. The first character of the hash identifies one of 32 cells in the grid, roughly 5000 km x 5000 km on the planet. The second character identifies one of 32 squares in that first cell, so combining the first two characters provides a resolution of approximately 1250 km x 1250 km. By the 12th character in the hash, you can identify an area as small as a couple of square inches on Earth. For a more detailed explanation, see this geohashing site.

When using this algorithm, it’s important to choose the correct level of resolution. For Ask Around Me, the frontend searches for questions within 5 miles of the user. You can identify these areas with a 5-character hash. This means you can compare the geohash of the user’s current location to the geohash stored in the Questions table. This comparison allows you to immediately discard most questions from the search and quickly find the relevant items.

This solution uses the Geo Library for Amazon DynamoDB npm library. Both the GET and POST questions APIs use this library to calculate the geohash when storing and fetching questions. The library requires a dedicated DynamoDB table, which is why user answers are stored in a separate table.
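For example, the POST questions path might store a new question with the library’s putPoint method. The following is a sketch under assumed attribute names, not the repo’s exact code:

const AWS = require('aws-sdk')
AWS.config.update({region: process.env.AWS_REGION})

const ddb = new AWS.DynamoDB()
const ddbGeo = require('dynamodb-geo')
const config = new ddbGeo.GeoDataManagerConfiguration(ddb, process.env.TableName)
config.hashKeyLength = 5   // 5-character geohash resolution

const myGeoTableManager = new ddbGeo.GeoDataManager(config)

exports.handler = async (event) => {
  const question = JSON.parse(event.body)

  // The library calculates and stores the geohash alongside the item attributes
  await myGeoTableManager.putPoint({
    RangeKeyValue: { S: question.ID },
    GeoPoint: {
      latitude: question.lat,
      longitude: question.lng
    },
    PutItemInput: {
      Item: {
        question: { S: question.text },   // illustrative attributes
        answers: { N: '0' },
        totalScore: { N: '0' }
      }
    }
  }).promise()

  return { statusCode: 200 }
}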

The GET questions API uses the latitude and longitude from the query parameters to query the underlying DynamoDB using this library:

const AWS = require('aws-sdk')
AWS.config.update({region: process.env.AWS_REGION})

const ddb = new AWS.DynamoDB() 
const ddbGeo = require('dynamodb-geo')
const config = new ddbGeo.GeoDataManagerConfiguration(ddb, process.env.TableName)
config.hashKeyLength = 5

const myGeoTableManager = new ddbGeo.GeoDataManager(config)
const SEARCH_RADIUS_METERS = 4000

exports.handler = async (event) => {

  const latitude = parseFloat(event.queryStringParameters.lat)
  const longitude = parseFloat(event.queryStringParameters.lng)

  // Get questions within geo range
  const result = await myGeoTableManager.queryRadius({
    RadiusInMeter: SEARCH_RADIUS_METERS,
    CenterPoint: {
      latitude,
      longitude
    }
  })

  return {
    statusCode: 200,
    body: JSON.stringify(result)
  }
}

The publish/subscribe pattern for real time in web apps

Modern web applications frequently use real-time notifications to keep users informed of state changes. You could achieve this by frequently polling the APIs to fetch new information. However, this approach is usually wasteful, in both cost and compute terms, because most API calls do not return new information. Additionally, if updates are evenly distributed and you poll every n seconds, there is an average delay of n/2 seconds between data becoming available and your application receiving it.

Instead of polling, a better option for many web applications is a WebSocket. Data availability is closer to real time, and the messaging is less frequent. This can be important for web applications used on mobile devices where unnecessary messaging can impact battery life.

This approach uses the publish-subscribe pattern. The frontend makes subscriptions to a backend service, indicating topics of interest. The backend service receives messages from publishers, which are upstream processes in the application. It filters the messages and routes to the appropriate subscribers.

Although powerful, this can be complex to implement due to connectivity issues over networks. For a web application, users may turn off their devices, disconnect Wi-Fi, or become unreachable due to limited coverage. This pattern is generally forward-only, meaning you only receive messages after the point of subscription.

AWS IoT Core simplifies this process, and the JavaScript SDK handles the common reconnection issues. The backend application sends messages to topics in AWS IoT Core, and the frontend application subscribes to topics of interest. The service maintains the list of active publishers and subscribers, and routes messages between the two. It also automatically manages fan-out, which occurs when there are many subscribers to a single topic.

From a pricing perspective, this is also a cost-efficient approach. At the time of writing, AWS IoT Core costs $0.08 per million minutes of connection, and $1.00 per million messages. There are also no servers to manage, and the service scales automatically to handle your application’s load.

In the example application, the real-time connection is configured and managed in a single component, IoT.vue. This initiates a connection to an IoT endpoint when the application first starts, and listens for messages on subscribed topics. It passes data back to the global Vuex store so other components automatically receive updates with no dependency on the IoT component.

Choosing publish-subscribe topics for web apps

In a typical synchronous API call, the client application makes a specific request and receives a response from a backend service. With a topic-based subscription, the topic itself is the equivalent of the request, but you usually don’t receive immediate information.

In this web application, there are a number of topics that are potentially important to users. Some topics are shared across multiple users, while others are private to a single user:

  • Account-level topic: messages relating only to a single user ID, such as billing and notifications. These are intended for any devices where that user is logged in.
  • Per-question topic: when a user asks a question, they need alerts when new answers arrive. Each question ID maps to an individual topic. Anyone who asks or watches a question subscribes to this topic.
  • Geo-fenced alert topic: a user receives alerts when new questions are asked in their local area. In this case, the geohash of their location is the topic identifier. New questions are published to their geohash topics, and users within the same geohash area receive those messages.
  • A system-wide topic: this is a single topic that all users subscribe to. This is reserved for important messages for all application users.

In web applications, you subscribe to some topics when the application initializes, such as account-level or system-wide topics. Other subscriptions are dynamic. For example, you subscribe to a question ID topic only after posting a question, or subscribe to different geo-fence hashes when the user’s location changes.
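The following is an illustrative sketch of that dynamic subscription handling. It assumes an already-connected MQTT client (for example, from the mqtt.js library used with AWS IoT Core) and a geohash() helper – both names are assumptions, not the repo’s exact code:

// Switch the geo-fenced subscription when the user's location changes
function updateGeoSubscription (mqttClient, previousHash, latitude, longitude) {
  const newHash = geohash(latitude, longitude, 5)      // 5-character hash covers the local area

  if (previousHash && previousHash !== newHash) {
    mqttClient.unsubscribe(previousHash)               // leave the old geo-fenced topic
  }
  if (previousHash !== newHash) {
    mqttClient.subscribe(newHash)                      // receive questions asked nearby
  }
  return newHash
}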

Conclusion

This post explores the backend architecture of the Ask Around Me application. I compare the cost and features in deciding between REST APIs and HTTP APIs in API Gateway. I introduce geohashing and the npm library used to handle geo-location queries in DynamoDB. And I show how you can build real-time messaging into your web applications using the publish-subscribe pattern with AWS IoT Core.

To learn more, visit the application’s code repo on GitHub.

Building a location-based, scalable, serverless web app – part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-location-based-scalable-serverless-web-app-part-1/

Web applications represent a major category of serverless usage. When used with single-page application (SPA) frameworks for front-end development, you can create highly responsive apps. With a serverless backend, these apps can scale to hundreds of thousands of users without you managing a single server.

In this 3-part series, I demonstrate how to build an example serverless web application. The application includes authentication, real-time updates, and location-specific features. I explore the functionality, architecture, and design choices involved. I provide a complete code repository for both the front-end and backend. By the end of these posts, you can use these patterns and examples in your own web applications.

In this series:

  • Part 1: Deploy the frontend and backend applications, and learn about how SPA web applications interact with serverless backends.
  • Part 2: Review the backend architecture, Amazon API Gateway HTTP APIs, and the geohashing implementation.
  • Part 3: Understand the backend data processing and aggregation with Amazon DynamoDB, and the final deployment of the application to production.

The code uses the AWS Serverless Application Model (SAM), enabling you to deploy the application easily in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for usage beyond development and testing.

To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Introducing “Ask Around Me” – The app for finding answers from local users

Ask Around Me is a web application that allows you to ask questions to a community of local users. It’s designed to be used on a smartphone browser.

 

Ask Around Me front end application

The front-end uses Auth0 for authentication. For simplicity, it supports social logins with other identity providers. Once a user is logged in, the app displays their local area:

No questions in your area

Users can then post questions to the neighborhood. Questions can be ratings-based (“How relaxing is the park?”) or geography-based (“Where is the best coffee?”).

Ask a new question

Posted questions are published to users within a 5-mile radius. Any user in this area sees new questions appear in the list automatically:

New questions in Ask Around Me

Other users answer questions by providing a star-rating or dropping a pin on a map. As the question owner, you see real-time average scores or a heat map, depending on the question type:

Ask Around Me Heatmap

The app is designed to be fun and easy to use. It uses authentication to ensure that votes are only counted once per user ID. It uses geohashing to ensure that users only see and answer questions within their local area. It also keeps the question list and answers up to date in real time to create a sense of immediacy.

In terms of traffic, the app is expected to receive 1,000 questions and 10,000 answers posted per hour. The query that retrieves local questions is likely to receive 50,000 requests per hour. In the course of these posts, I explore the architecture and services chosen to handle this volume. All of this is built serverlessly with cost effectiveness in mind. The cost scales in line with usage, and I discuss how to make the best use of the app budget in this scenario.

SPA frameworks and serverless backends

While you can pair a serverless backend with almost any type of web or mobile framework, SPA frameworks can make development much easier. SPA frameworks like React.js, Vue.js, and Angular have grown in popularity for serverless development, and they have become the standard way to build complex, rich front-ends.

These frameworks offer benefits to both front-end developers and users. For developers, you can create the application within an IDE and test locally with hot reloading, which renders new content in the same context in the browser. For users, it creates a web experience that’s similar to a traditional application, with reactive content and faster interactive capabilities.

When you build a SPA-based application, the build process creates HTML, JavaScript, and CSS files. You serve these static assets from an Amazon CloudFront distribution with an Amazon S3 bucket set as the origin. CloudFront serves these files from 216 global points of presence, resulting in low latency downloads regardless of where the user is located.

CloudFront/S3 app distribution

You can also use AWS Amplify Console, which can automate the build and deployment process. This is triggered by build events in your code repo, so once you commit code changes, these are automatically deployed to production.

A traditional web server often serves the application’s static assets together with dynamic content. In this model, you offload the serving of all the application assets to a global CDN. The dynamic application is a serverless backend powered by Amazon API Gateway and AWS Lambda. By using a SPA framework with a serverless backend, you can create performant, highly scalable web applications that are also easy to develop.

Configuring Auth0

This application integrates Auth0 for user authentication. The front-end calls out to this service when users are not logged in, and Auth0 provides an open standard JWT token after the user is authenticated. Before you can install and use the application, you must sign up for an Auth0 account and configure the application:

  1. Navigate to https://auth0.com/ and choose Sign Up. Complete the account creation process.
  2. From the dashboard, choose Create Application. Enter AskAroundMe as the name and select Single Page Web Applications for the Application Type. Choose Create.
  3. In the next page, choose the Settings tab. Copy the Client ID and Domain values to a text editor – you need these for setting up the Vue.js application later.
  4. Further down on this same tab, enter the value http://localhost:8080 into the Allowed Logout URLs, Allowed Callback URLs and Allowed Web Origins fields. Choose Save Changes.
  5. On the Connections tab, in the Social section, add google-oauth2 and twitter and ensure that the toggles are selected. This enables social sign-in for your application.

This configuration allows the application to interact with the Auth0 service from your local machine. In production, you must enter the domain name of the application in these fields. For more information, see Auth0’s documentation for Application Settings.

Deploying the application

In the code repo, there are separate directories for the front-end and backend applications. You must install the backend first. To complete this step, follow the detailed instructions in the repo’s README.md.

There are several important environment variables to note from the backend installation process:

  • IoT endpoint address and Cognito Pool ID: these are used for real-time messaging between the backend and frontend applications.
  • API endpoint: the base URL path for the backend’s APIs.
  • Region: the AWS Region where you have deployed the application.

Next, you deploy the Vue.js application from the frontend directory:

  1. The application uses the Google Maps API – sign up for a developer account and make a note of your API key.
  2. Open the main.js file in the src directory. Lines 45 through 62 contain the configuration section where you must add the environment variables above.

Ensure you complete the Auth0 configuration and remaining steps in the README.md file, then you are ready to test.

To launch the frontend application, run npm run serve to start the development server. The terminal shows the local URL where the application is now running:

Running the Vue.js app

Open a web browser and navigate to http://localhost:8080 to see the application.

How Vue.js applications work with a serverless backend

Unlike a traditional web application, SPA applications are loaded in the user’s browser and start executing JavaScript on the client-side. The app loads assets and initializes itself before communicating with the serverless backend. This lifecycle and behavior is comparable to a conventional desktop or mobile application.

Vue.js is a component-based framework. Each component optionally contains a user interface with related code and styling. Overall application state may be managed by a store – this example uses Vuex. You can use many of the patterns employed in this application in your own apps.
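As a rough illustration, a minimal Vuex store for this app might look like the following. The state shape is an assumption, though the mutation names match those committed elsewhere in this post:

import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)

export default new Vuex.Store({
  state: {
    questions: []                                   // the list displayed by the Home view
  },
  mutations: {
    setAllQuestions (state, questions) {
      state.questions = questions                   // replace the list after an API fetch
    },
    saveQuestion (state, question) {
      state.questions.push(question)                // append a new local question from IoT Core
    },
    updateQuestion (state, question) {
      const index = state.questions.findIndex(q => q.ID === question.ID)
      if (index !== -1) Vue.set(state.questions, index, question)   // refresh an existing question
    }
  }
})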

Auth0 provides a Vue.js component that automates storing and parsing the JWT token in the local browser. Each time the app starts, this component verifies the token and makes it available to your code. This app uses Vuex to manage the timing between the token becoming available and the app needing to request data.

The application completes several initialization steps before querying the backend for a list of questions to display:

Initialization process for the app

Several components can request data from the serverless backend via API Gateway endpoints. In src/views/HomeView.vue, the component loads a list of questions when it determines the location of the user:

const token = await this.$auth.getTokenSilently()
const url = `${this.$APIurl}/questions?lat=${this.currentLat}&lng=${this.currentLng}`
console.log('URL: ', url)
// Make API request with JWT authorization
const { data } = await axios.get(url, {
  headers: {
    // send access token through the 'Authorization' header
    Authorization: `Bearer ${token}`   
  }
})

// Commit question list to global store
this.$store.commit('setAllQuestions', data)

This process uses the Axios library to manage the HTTP request and pass the authentication token in the Authorization header. The resulting dataset is saved in the Vuex store. Since SPA applications react to changes in data, any frontend component displaying data is automatically refreshed when it changes.

The src/components/IoT.vue component uses MQTT messaging via AWS IoT Core. This manages real-time updates published to the frontend. When a question receives a new answer, this component receives an update. The component updates the question status in the global store, and all other components watching this data automatically receive those updates:

mqttClient.on('message', function (topic, payload) {
  const msg = JSON.parse(payload.toString())

  if (topic === 'new-answer') {
    // An existing question received a new answer – refresh its aggregate data
    _store.commit('updateQuestion', msg)
  } else {
    // A new question was asked nearby – add it to the list
    _store.commit('saveQuestion', msg)
  }
})

The application uses both API Gateway synchronous queries and MQTT WebSocket updates to communicate with the backend application. As a result, you have considerable flexibility for tracking overall application state and providing your users with a responsive application experience.

Conclusion

In this post, I introduce the Ask Around Me example web application. I discuss the benefits of using single-page application (SPA) frameworks for both developers and users. I cover how they can create highly scalable and performant web applications when powered with a serverless backend.

In this section, you configure Auth0 and deploy the frontend and backend from the application’s GitHub repo. I review the backend SAM template and the architecture it deploys.

In part 2, I will explain the backend architecture, the Amazon API Gateway configuration, and the geohashing implementation.

Visualize user behavior with Auth0 and Amazon EventBridge

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/visualize-user-behavior-with-auth0-and-amazon-eventbridge/

In this post, I show how to capture user events and monitor user behavior by using the Amazon EventBridge partner integration with Auth0. This enables you to gain insights to help deliver a more customized application experience for your users.

Auth0 is a flexible, drop-in solution that adds authentication and authorization services to your applications. The EventBridge integration automatically and continuously pushes Auth0 log events to your AWS account via a custom SaaS event bus.

The examples used in this post are implemented in a custom-built serverless application called FreshTracks. This is a demo application built in Vue.js, which I will use to demonstrate multiple SaaS integrations into AWS with EventBridge in this and future blog posts.

FreshTracks – a demo serverless web application with multiple SaaS Integrations.

The components for this EventBridge integration with Auth0 have been extracted into a separate example application in this GitHub repo.

How the application works

Routing Auth0 Events with Amazon EventBridge.

  1. Events are emitted from Auth0 when a user interacts with the login service on the front-end application.
  2. These events are streamed into a custom SaaS event bus in EventBridge.
  3. Event rules match events and send them downstream to a Lambda function target.
  4. The receiving Lambda function performs some data transformation before writing an object to S3.
  5. These objects are made available by a QuickSight data source manifest file and used as datapoints for QuickSight visuals.

Configuring the Auth0 EventBridge integration

To capture Auth0 emitted events in EventBridge, you must first configure Auth0 for use as the Event Source on your Auth0 Dashboard.

  1. Log in to the Auth0 Dashboard.
  2. Choose Logs > Streams.
  3. Choose + Create Stream.
  4. Choose Amazon EventBridge and enter a unique name for the new Amazon EventBridge Event Stream.
  5. Create the Event Source by providing your AWS Account ID and AWS Region. The Region you select must match the Region of the Amazon EventBridge bus.
  6. Choose Save.
Event Source Configuration on Auth0 dashboard

Auth0 provides you with an Event Source Name. Make sure to save your Event Source Name value since you need this at a later point to complete the integration.

Creating a custom event bus

  1. Go to the EventBridge partners tab in your AWS Management Console. Ensure the AWS Region matches where the Event Source was created.
  2. Paste the Event Source Name in the Partner event sources search box to find and choose the new Auth0 event source. Note: the Event Source remains in a pending state until it is associated with an event bus.

    Partner event source

  3. Choose the event source, then choose Associate with Event Bus.
  4. Choose Associate.

Deploying the application

Once you have associated the Event Source with a new partner event bus, you are ready to deploy backend services to receive and respond to these events.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file.

When deploying the application stack, make sure to provide the custom event bus name with --parameter-overrides.

sam deploy --parameter-overrides Auth0EventBusName=aws.partner/auth0.com/auth0username-0123344567-e5d2-4514-84f2-97dd4ff8aad0/auth0.logs

You can find the name of the new Auth0 custom event bus in the custom event bus section of the EventBridge console:

Custom event bus name

Routing events with rules

The AWS Serverless Application Model (SAM) template in the example application creates four event rules:

  1. Successful sign-in
  2. Successful signup
  3. Successful log-out
  4. Unsuccessful signup

These are defined with the `AWS::Events::Rule` resource type. Each of these rules is routed to a single target Lambda function. For a successful sign-in, the rule’s event pattern matches the value s in detail.data.type. This is the Auth0 event type code for a successful sign-in. Every Auth0 event type code is listed in the Auth0 documentation.

SuccessfullSignIn: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "Auth0 User Successfully signed in"
      EventBusName: 
         Ref: Auth0EventBusName
      EventPattern: 
        account:
        - !Sub '${AWS::AccountId}'
        detail:
          data:
            type:
            - s
      Targets: 
        - 
          Arn:
            Fn::GetAtt:
              - "SaveAuth0EventToS3" 
              - "Arn"
          Id: "SignInSuccessV1"

To respond to additional events, copy this event rule pattern and change the event code string for the event you want to match.

Writing events to S3 with Lambda

The application routes events to a Lambda function, which performs some data transformation before writing an object to S3. The function code uses an environment variable named AuthLogBucket to store the S3 bucket name. The permissions to write to S3 are granted by a policy defined within the SAM template:

  SaveAuth0EventToS3:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: src/
      Handler: saveAuth0EventToS3.handler
      Runtime: nodejs12.x
      MemorySize: 128
      Environment:
        Variables:
          AuthLogBucket: !Ref AuthZeroToEventBridgeUserActivitylogs
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref AuthZeroToEventBridgeUserActivitylogs

The S3 object is a CSV file with context about each event. Each of the Auth0 event schemas is different. To maintain a consistent CSV file structure across different event types, this Lambda function explicitly defines each of the header and row values. An output string is constructed from the Auth0 event:

Lambda function output string

This string is placed into a new buffer and written to S3 with the AWS SDK for JavaScript, as referenced in the GitHub repo.
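A simplified sketch of that step (the field names and object key format are illustrative assumptions, not the repo’s exact code):

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

exports.handler = async (event) => {
  const data = event.detail.data

  // Build one CSV row per event – real Auth0 log events carry many more attributes
  const row = [data.date, data.type, data.client_name, data.user_name].join(',') + '\n'

  await s3.putObject({
    Bucket: process.env.AuthLogBucket,     // set from the SAM template above
    Key: `auth0/${event.id}.csv`,          // the object key format is an assumption
    Body: Buffer.from(row)
  }).promise()
}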

Sending events to the application

There is a test event in the /event directory of the example application. This contains an example of a successful sign-in event emitted from Auth0.

Sending a test Auth0 event to Lambda

Send a test event to the Lambda function using the AWS Command Line Interface.

Run the following command in the root directory of the example application, replacing {function-name} with the full name of your Lambda function.

aws lambda invoke --function-name {function-name} --invocation-type Event --payload file://events/event.json  events/response.json --log-type Tail

Response:

{  
 "StatusCode": 202
}

The response output appears in the output terminal window. To confirm that an object is stored in S3, navigate to the S3 console. Choose the AuthZeroToEventBridgeUserActivitylogs bucket. You see a new auth0 directory and can open the CSV file that holds context about the event.

Object written to S3

Sending real Auth0 events from a front-end application

Follow the instructions in the Fresh Tracks repo on GitHub to deploy the front-end application. This application includes Auth0’s authentication flow. You can connect to your Auth0 application by entering your credentials in the `auth0_config.json` file:

{
  "domain": "<YOUR AUTH0 DOMAIN>",
  "clientId": "<YOUR AUTH0 CLIENT ID>"
}

The example backend application starts receiving Auth0 emitted events immediately.

To see the full Fresh Tracks application, continue to the backend deployment instructions. This is not required for the examples in this blog post.

Building a QuickSight dashboard

You can visualize these Auth0 user events with an Amazon QuickSight dashboard. This provides a snapshot analysis that you can share with other QuickSight users for reporting purposes.

To use Auth0 events as metrics, create a separate calculated field for each event type (for example, successful signup and successful login). An analysis could then include multiple visuals, custom fields, conditional formatting, and events. This gives a snapshot of user interaction with the front-end application at any given time.

FreshTracks final dashboard example

The example application in the GitHub repo provides instructions on how to create a dashboard.

Conclusion

This post explains how to set up EventBridge’s third-party integration with Auth0 to capture events. The example backend application demonstrates how to filter these events, perform computations on them, save as S3 objects, and send to a downstream service.

The ability to build QuickSight storyboards from these events and share visuals with key business stakeholders can provide a narrative about the analysis data. This is implemented with minimal code to provide near real-time streaming of events and without adding latency to your application.

The possibilities are vast. I am excited to see how builders use this serverless pattern to create their own visuals to build a better, more customized application experience for their users.

Start here to learn about other SaaS integrations with Amazon EventBridge.