
Getting Started with Push Notifications using AWS Amplify

Post Syndicated from Pauline Kelly original https://aws.amazon.com/blogs/messaging-and-targeting/getting-started-with-push-notifications-using-aws-amplify/

This article was written by Pauline Kelly and Rob Costello, Public Sector Solutions Architects, AWS.

The code in this blog will run on OS X 10.11.6 (El Capitan) and higher, and Xcode 8.1 and higher.

Push notifications are a key capability provided by mobile apps to engage with users, providing real-time updates or new information. However, they require integration of several components provided by different vendors. This can be difficult for new mobile developers, or developers who are new to Amazon Web Services (AWS).

As a mobile app developer, you want to focus on the front-end components that make a high-quality app people want to use. Still, the backend services also need attention, to help ensure the app provides the scalability, reliability, and security expected of modern mobile applications. In this post, we provide an example of how to implement push notifications in your mobile apps using AWS Amplify and Amazon Pinpoint, with React Native code to help get you started.

Usage Patterns

There are several use cases where developers benefit from adding Push Notifications into their mobile app, including:

  • Asynchronous actions, such as when an order is placed through an app, and confirmation or status notifications are sent to the user.
  • Scheduled reminders, to provide time-bound notifications to users to prompt engagement, such as “It’s been 7 days since you last checked in, would you like to check in now?”.
  • Instant messaging, where your app provides two-way communication (i.e., chat). This ensures your users can engage with each other in near real-time.

Component Definitions

The components necessary to add push notifications to your app are:

Platform – These are the services provided by the mobile platform vendors (e.g., Apple, Google, Amazon) that transport messages to the end devices. The Platforms offer device registration, app token creation and management, and the delivery channel for notifications. There are several different services that you can use to send push notifications to the users of your applications; the platform you use largely depends on which app store your customers use to obtain your app. The most common platforms are Apple Push Notification service (APNs), Firebase Cloud Messaging (FCM), Baidu Cloud Push, and Amazon Device Messaging (ADM).

Provider – The Provider allows app registration and messaging to be coordinated by your mobile app's backend system, enabling event-based or scheduled interactions between your backend and your mobile app. In AWS, both Amazon Simple Notification Service (SNS) and Amazon Pinpoint can fill the role of the Provider.

Client – The Client is your mobile application, running on a physical device. On initial launch of the application, two persistent and secure channels are established. The first, between the Platform and the Client, results in the creation of a globally unique, app-specific device token. The second, between the Provider and the Platform, uses that device token to identify the destination for notifications.

[Diagram: client, platform, and provider components with no connections]

AWS Options

AWS provides two services that can deliver push notifications for your mobile apps.

Amazon Pinpoint – Pinpoint enables marketers and developers to deliver customer-centric engagement experiences. It provides a collection of capabilities that enable collection of data from audiences, real-time and historic analytics, and execution of campaigns for direct customer engagement over multiple channels, such as email, SMS, or push notifications. Pinpoint makes it easy to manage device registration, integration with notification channel systems, and integration with other AWS services to create a fully-featured mobile application backend. AWS Amplify includes support for deploying and configuring Pinpoint projects to use in your mobile application.

Amazon Simple Notification Service (SNS) – SNS is a publish-subscribe service that accepts incoming messages and delivers them to a variety of destinations, including push notifications to mobile devices. Your mobile app backend (the Publisher) publishes messages to a Topic, which delivers them to its Subscriptions. Subscriptions are linked to one or more Platform Applications, each representing the Platform you want to connect to (such as APNs or FCM). Each Platform Application has individual Endpoints, which are created by registering the unique device token obtained via the SNS SDK on the device. To explore fanout patterns with SNS, refer to this blog post.

While both SNS and Pinpoint have Push Notifications capability, Amazon Pinpoint provides a simpler development experience when leveraging AWS Amplify, and a more robust management and operational capability for app owners and developers. The rest of this post will focus on the use of Pinpoint and Amplify.

Registration and Data-flows

In this post we will use the APNs (Apple Push Notification Service) platform for push notifications, but a similar pattern is used for other platforms. The following events take place when a push notification is triggered:

  1. A Device establishes a TLS connection with APNs based on a pre-existing device-specific certificate, and requests an app-specific token to use for push notifications.
  2. The Device passes the token to the AWS provider to register for notifications.
  3. Based on some form of event or trigger, such as a marketer launching a new campaign, a notification can be generated programmatically and sent to the AWS provider (Pinpoint or SNS).
  4. The AWS provider delivers the notification to APNs, along with the associated device token(s) to deliver it.
  5. APNs pushes the notification to the device(s).

[Diagram: client, platform, and provider components with connections]

For details of how the APNs platform works, please consult the relevant service's documentation at https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html.

Setting up APNs prerequisites

Note: You will need an account with the Apple Developer Program, as an individual or as part of an organization, and you must have agent or admin privileges in that account. Also, push notifications in the Apple ecosystem require a physical device; the iOS Simulator cannot receive push notifications.

To set up the required APNs components, follow the Amplify guide to create:

  • An app ID.
  • An SSL certificate, which authorizes you to send push notifications to your app through APNs.
  • A registration for your test device, such as an iPhone, with your Apple Developer account.
  • An iOS distribution certificate, which enables you to install your app on your test device.
  • A provisioning profile, which allows your app to run on your test device.

Setting up your development workstation

Note: As this post focuses on using APNs for an iOS device, you must perform the following steps on an Apple device.

On your development workstation, install all React Native CLI prerequisites identified at https://reactnative.dev/docs/environment-setup, including:

  • Node.js: JavaScript runtime, used to run your application
  • Watchman: watches files and records when they change, used to update your React Native app automatically rather than manually triggering a rebuild while developing
  • Xcode: integrated development environment for creating iOS applications and more, installed on macOS
  • CocoaPods: a dependency manager for Swift and Objective-C Cocoa projects

You will need to set up the Amplify CLI, which is used to configure the application with Amplify. Be sure to configure the Amplify CLI with credentials and other settings following the documentation here before proceeding.
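
If you have not yet installed and configured the CLI, the typical setup looks like this (amplify configure walks you through creating an IAM user and storing its access keys in a local AWS profile):

npm install -g @aws-amplify/cli
amplify configure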

Set up a new React Native App

Create a new React Native App to begin:

npx react-native init MyApp --template react-native-template-typescript
cd MyApp

Next, use the Amplify CLI to initialize the new app:

amplify init
Scanning for plugins...
Plugin scan successful
Note: It is recommended to run this command from the root of your app directory
? Enter a name for the project MyApp
? Enter a name for the environment dev
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react-native
? Source Directory Path:  src
? Distribution Directory Path: /
? Build Command:  npm run-script build
? Start Command: npm run-script start
Using default provider  awscloudformation

For more information on AWS Profiles, see:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html

? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use default

Then install the required React and CocoaPod dependencies:

npm install aws-amplify-react-native aws-amplify @aws-amplify/pushnotification @react-native-community/push-notification-ios @react-native-community/netinfo
npx pod-install

We are now going to add authentication for the user to sign into the app, and send/receive push notifications. Adding authentication with the default configuration creates a Cognito User Pool in the cloud, and allows Amplify applications to use the identity of the authenticated user.

Note that authentication with Cognito User Pools is not strictly required for Amazon Pinpoint push notifications to function; you can use Cognito Identity Pools for unauthenticated integration with Pinpoint APIs in your mobile app.

amplify add auth
Do you want to use the default authentication and security configuration? Default configuration
Warning: you will not be able to edit these selections. 
How do you want users to be able to sign in? Username
Do you want to configure advanced settings? No, I am done.

To finish the creation of the Amazon Cognito User Pool, push the changes to AWS:

amplify push
? Are you sure you want to continue? (Y/n)

The Amplify Analytics module is required before Push Notifications can be configured. The Analytics module creates and configures the Amazon Pinpoint endpoint required for the PushNotification library to be able to register for and be a target for notifications.

amplify add analytics
? Select an Analytics provider Amazon Pinpoint
? Provide your Pinpoint resource name: MyAppAnalytics
? Apps need authorization to send analytics events. Do you want to allow guests and unauthenticated users to send analytics events? (we recommend you allow this when getting started) (Y/n) Yes

To finish the creation of the Amazon Pinpoint endpoint, push the changes to AWS. Here is a list of the cloud resources that are created when you push the stack to the cloud:

  • Amazon Cognito User Pool
  • Amazon Cognito Federated Identity Pool
  • Amazon Pinpoint Project (linked to User Pool)
  • Associated AWS IAM Roles

For specific details, you can look in the “amplify” directory in your project for the AWS CloudFormation templates created by the CLI.

amplify push
? Are you sure you want to continue? (Y/n)

Next, use the Amplify CLI to add notifications to the project. During the setup for notifications, Amplify will ask you for the path to your *.p12 certificate, which you export (typically from Keychain Access on macOS) after creating an APNs certificate with your Apple Developer account. Please refer to the instructions here:

Amplify Notifications: https://docs.amplify.aws/lib/push-notifications/getting-started/q/platform/js#setup-for-ios
Pinpoint Push Notification Setup: https://docs.aws.amazon.com/pinpoint/latest/developerguide/apns-setup.html

amplify add notifications
? Choose the push notification channel to enable. APNS
? Provide your Pinpoint resource name: <Choose default or provide a name>
? Choose authentication method used for APNs Certificate
? The certificate file path (.p12): <path to APNs p12 certificate>
? The certificate password (if any):
MAC verified OK
✔ The APNS channel has been successfully enabled.

Note that when the Notifications category is added, Amplify CLI also adds the Auth category, which creates and configures an Amazon Cognito identity pool to allow access to the Amazon Pinpoint endpoint.

Next, you will need to open the iOS workspace of the application, found at ios/MyApp.xcworkspace, in Xcode. To enable notifications to function on iOS, change the following project settings in Xcode so that the @react-native-community/push-notification-ios module can function:

  1. Add the Background Mode – Remote Notifications capability – https://github.com/react-native-push-notification-ios/push-notification-ios#add-capabilities--background-mode---remote-notifications
  2. Add support for the notification and register events that will be used – https://github.com/react-native-push-notification-ios/push-notification-ios#augment-appdelegate

Finally, update the app's workspace settings in Xcode by setting the Bundle Identifier you created when configuring your Apple Developer account. Once your Team is specified in Xcode, the associated Signing Certificate field is populated automatically.

[Screenshot: Xcode project view, Signing tab]

Next you can configure your React Native app to send and receive Push Notifications.

Register and Receive Notifications

To register a device with Amazon Pinpoint to receive Push Notifications, the required libraries should be imported:

import Amplify from 'aws-amplify';
import awsconfig from './aws-exports';
import {withAuthenticator} from 'aws-amplify-react-native';
import Analytics from '@aws-amplify/analytics';
import PushNotification from '@aws-amplify/pushnotification';
import Auth from '@aws-amplify/auth';

Note that Pinpoint requires the Analytics category of Amplify to be configured and imported into your app.

With the required libraries loaded, the Amplify components should be configured with the AWS details created by the amplify push command that are stored in the aws-exports.js file.

Amplify.configure(awsconfig);
Auth.configure(awsconfig);
Analytics.configure(awsconfig);
PushNotification.configure(awsconfig);

Next, register functions to be called when the device registers with APNs, and when a notification is received.

The onRegister event will be triggered when iOS registers a new token with the APNs service, allowing the app to retrieve the token and register it with Amazon Pinpoint via the Analytics.updateEndpoint function. Take care to update the userId key in the Analytics configuration.

let _token: any; // holds the latest device token

PushNotification.onRegister((token: any) => {
  _token = token;
  Analytics.updateEndpoint({
    address: token,
    optOut: 'NONE',
    userId: '<userId>',
  })
    .then((data) => {
      console.log('endpoint updated', JSON.stringify(data));
    })
    .catch((error) => {
      console.log('error updating endpoint', error);
    });
});

The onNotification event will be triggered when the app is open and receives a notification from Amazon Pinpoint via APNs. Here we can execute actions based on the notification if required:

PushNotification.onNotification((notification: any) => {
  // display notification in debug log in XCode
  console.log('in app notification received', notification);
});

If the app is not open when the notification is received, and the user taps the notification prompt, the onNotificationOpened event will be triggered, allowing actions to be executed.

PushNotification.onNotificationOpened((notification: any) => {
  // display notification in debug log in XCode
  console.log('the notification is opened from iOS', notification);
});

Finally, we must prompt the user to allow Push Notifications from the app:

PushNotification.requestIOSPermissions();
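
The withAuthenticator higher-order component imported earlier can be used to require sign-in before the app registers for notifications. Here is a minimal sketch; the view content is a placeholder:

import React from 'react';
import {Text, View} from 'react-native';

const App = () => (
  <View>
    <Text>MyApp is ready to receive push notifications.</Text>
  </View>
);

// Wraps the app in the Amplify sign-in/sign-up UI, backed by the
// Cognito User Pool created earlier with `amplify add auth`.
export default withAuthenticator(App);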

Send Push Notifications

To send Push Notifications, we can use the Amazon Pinpoint SDK. Push notifications are ordinarily sent from a back-end process or application; however, you can send notifications from anywhere the Amazon Pinpoint SDK is used.

Code examples in JavaScript and Python for using the SendMessages API are available in the Amazon Pinpoint documentation. These examples can be executed from a backend server process or via AWS Lambda.

The address token value used in these examples must match that of the target device token assigned by APNs. This will be visible in the console output of the mobile app when the onRegister event is triggered.
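
As a minimal sketch of such a backend send (assuming the AWS SDK for JavaScript v2; the Region, project ID, and device token below are placeholders you must replace):

const AWS = require('aws-sdk');

const pinpoint = new AWS.Pinpoint({region: 'us-east-1'});

const params = {
  ApplicationId: '<projectId>', // your Pinpoint project ID
  MessageRequest: {
    Addresses: {
      // The device token logged by the onRegister handler.
      '<deviceToken>': {ChannelType: 'APNS'},
    },
    MessageConfiguration: {
      APNSMessage: {
        Action: 'OPEN_APP',
        Title: 'Hello',
        Body: 'A test notification from Amazon Pinpoint',
      },
    },
  },
};

pinpoint.sendMessages(params, (err, data) => {
  if (err) console.error('send failed', err);
  else console.log(JSON.stringify(data.MessageResponse.Result));
});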

The IAM Role used by your backend server process or AWS Lambda must have the following permission policy applied:

{
    "Effect": "Allow",
    "Action": "mobiletargeting:SendMessages", 
    "Resource": "arn:aws:mobiletargeting:<region>:<accountID>:apps/<projectID>/*"
}

Testing Push Notifications

Now run the project locally:

npx react-native run-ios

This will start Metro for React Native in a new Terminal window. Metro is the JavaScript bundler for React Native; it compiles and serves the application's JavaScript bundle to client devices during development.

[Screenshot: command line running the Metro bundler]

You should now be able to connect your test device to your development workstation, change the target device, and run the application from Xcode (push notifications are not available in the iOS Simulator).

You can now test sending and receiving notifications.

Adding Pinpoint Features

Now that you have a functioning mobile app able to receive push notifications, Amazon Pinpoint can be used by app owners to engage users through customized Campaigns or Journeys. A campaign sends tailored messages on a schedule that you define. With Journeys, you can send messages to your customers based on their attributes, behaviors, and activities.

Thanks to AWS Amplify, the application has already deployed an Amazon Pinpoint Project for you, so your next steps for engaging users will be:

  1. Creating a Segment of your users that you would like to target.
  2. Create a Campaign or Journey to start communicating with your Segment using Push Notifications.

Amazon Pinpoint can be combined with other AWS services for more advanced scenarios, such as Predictive User Engagement. An example solution that integrates Amazon Personalize, created by the AWS Solutions team, can be found at https://aws.amazon.com/solutions/implementations/predictive-user-engagement/

Cleanup

To delete the resources created by the Amplify CLI, use:

amplify delete

Conclusion

This post has explored how Amazon Pinpoint and the Amplify Framework can be used to more easily add push notifications to your mobile apps. Using the example provided, you can quickly get started with integrating the required components and configuring your AWS account, so you can run campaigns to engage with your users.

Feedback

We hope you like the Push Notification features in Amazon Pinpoint and the Amplify Framework! Let us know how we are doing, and submit any feedback in the Amplify JavaScript GitHub repository. You can read more about this feature on the Amplify Framework website. Also, check out our community site to find the events, posts, and contributors to the Amplify community.

 

Amazon Location Service Is Now Generally Available with New Routing and Satellite Imagery Capabilities

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/amazon-location-service-is-now-generally-available-with-new-routing-and-satellite-imagery-capabilities/

In December of 2020, we made Amazon Location Service available in preview form for you to start building web and mobile applications with location-based features. Today I’m pleased to announce that we are making Amazon Location generally available along with two new features: routing and satellite imagery.

I have been a full-stack developer for over 15 years. On multiple occasions, I was tasked with creating location-based applications. The biggest challenges I faced when I worked with location providers were integrating the applications into the existing application backend and frontend and keeping the data shared with the location provider secure. When Amazon Location was made available in preview last year, I was so excited. This service makes it possible to build location-based applications with a native integration with AWS services. It uses trusted location providers like Esri and HERE and customers remain in control of their data.

Amazon Location includes the following features:

  • Maps to visualize location information.
  • Places to enable your application to offer point-of-interest search functionality, convert addresses into geographic coordinates in latitude and longitude (geocoding), and convert a coordinate into a street address (reverse geocoding).
  • Routes to use driving distance, directions, and estimated arrival time in your application.
  • Trackers to allow you to retrieve the current and historical location of the devices running your tracking-enabled application.
  • Geofences to give your application the ability to detect and act when a tracked device enters or exits a geographical boundary you define as a geofence. When a breach of the geofence is detected, Amazon Location will send an event to Amazon EventBridge, which can trigger a downstream set of actions, like invoking an AWS Lambda function or sending a notification using Amazon Simple Notification Service (SNS). This level of integration with AWS services is one of the most powerful features of Amazon Location. It will help shorten your application’s time to production.

In the preview announcement blog post, Jeff introduced the service functionality in a lot of detail. In this blog post, I want to focus on the two new features: satellite imagery and routing.

Satellite Imagery

You can use satellite imagery to pack your maps with information and provide more context to the map users. It helps the map users answer questions like “Is there a swamp in that area?” or “What does that building look like?”

To get started with satellite imagery maps, go to the Amazon Location console. On Create a new map, choose Esri Imagery. 

[Screenshot: creating a new map with satellite imagery]

Routing
With Amazon Location Routes, your application can request the travel time, distance, and all directions between two locations. This makes it possible for your application users to obtain accurate travel-time estimates based on live road and traffic information.

If you provide these extra attributes when you use the route feature, you can get very tailored information including:

  • Waypoints: You can provide a list of ordered intermediate positions to be reached on the route. You can have up to 25 stopover points including the departure and destination.
  • Departure time: When you specify the departure time for this route, you will receive a result optimized for the traffic conditions at that time.
  • Travel mode: The mode of travel you specify affects the speed and the road compatibility. Not all vehicles can travel on all roads. The available travel modes are car, truck and walking. Depending on which travel mode you select, there are parameters that you can tune. For example, for car and truck, you can specify if you want a route without ferries or tolls. But the most interesting results are when you choose the truck travel mode. You can define the truck dimensions and weight and then get a route that is optimized for these parameters. No more trucks stuck under bridges!

Amazon Location Service and its features can be used for interesting use cases with low effort. For example, delivery companies using Amazon Location can optimize the order of the deliveries, monitor the position of the delivery vehicles, and inform the customers when the vehicle is arriving. Amazon Location can be also used to route medical vehicles to optimize the routing of patients or medical supplies. Logistic companies can use the service to optimize their supply chain by monitoring all the delivery vehicles.

To use the route feature, start by creating a route calculator. In the Amazon Location console, choose Route calculators. For the provider of the route information, choose Esri or HERE.

[Screenshot: creating a new route calculator]

You can use the route calculator from the AWS SDKs, AWS Command Line Interface (CLI) or the Amazon Location HTTP API.

For example, to calculate a simple route between departure and destination positions using the CLI, you can write something like this:

aws location \
    calculate-route \
        --calculator-name MyExampleCalculator \
        --departure-position -123.1376951951309 49.234371474778385 \
        --destination-position -122.83301379875074 49.235860182576886

The departure-position and destination-position are specified as longitude, latitude.
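
You can make the same request from code. Here is a minimal sketch using the AWS SDK for JavaScript (v2); the calculator name and Region are placeholders for your own values:

const AWS = require('aws-sdk');

const location = new AWS.Location({region: 'us-east-1'});

location.calculateRoute(
  {
    CalculatorName: 'MyExampleCalculator',
    DeparturePosition: [-123.1376951951309, 49.234371474778385], // [longitude, latitude]
    DestinationPosition: [-122.83301379875074, 49.235860182576886],
    TravelMode: 'Walking', // omit to use the default, Car
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log(data.Summary.Distance, data.Summary.DistanceUnit);
  }
);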

This calculation returns a lot of information. Because you didn’t define the travel mode, the service assumes that you are using a car. You can see the total distance of the route (in this case, 29 kilometers). You can change the distance unit when you do the calculation. The service also returns the duration of the trip (in this case, 29 minutes). Because you didn’t define when to depart, Amazon Location will assume that you want to travel when there is the least amount of traffic.

{
    "Legs": [{
        "Distance": 26.549,
        "DurationSeconds": 1711,
        "StartPosition":[-123.1377012, 49.2342994],
        "EndPosition": [-122.833014,49.23592],
        "Steps": [{
            "Distance":0.7,
            "DurationSeconds":52,
            "EndPosition":[-123.1281,49.23395],
            "GeometryOffset":0,
            "StartPosition":[-123.137701,49.234299]},
            ...
        ]
    }],
    "Summary": {
        "DataSource": "Esri",
        "Distance": 29.915115551209176,
        "DistanceUnit": "Kilometers",
        "DurationSeconds": 2275.5813682980006,
        "RouteBBox": [
            -123.13769762299995,
            49.23068000000006,
            -122.83301399999999,
            49.258440000000064
        ]
    }
}

It will return an array of steps, which form the directions to get from departure to destination. The steps are represented by a starting position and end position. In this example, there are 11 steps and the travel mode is a car.

[Screenshot: route drawn on the map]

The result changes depending on the travel mode you selected. For example, if you do the calculation for the same departure and destination positions but choose a travel mode of walking, you will get a series of steps that draw the map as shown below. The travel time and distance are different: 24.1 kilometers and 6 hours and 43 minutes.

[Screenshot: map of the route when walking]

Available Now
Amazon Location Service is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Learn about the pricing models of Amazon Location Service. For more about the service, see Amazon Location Service.

Marcia

Amplify Flutter is Now Generally Available: Build Beautiful Cross-Platform Apps

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/amplify-flutter-is-now-generally-available-build-beautiful-cross-platform-apps/

AWS Amplify is a set of tools and services for building secure, scalable mobile and web applications. Currently, Amplify supports iOS, Android, and JavaScript (web and React Native) and is the quickest and easiest way to build applications powered by Amazon Web Services (AWS).

Flutter is Google’s UI toolkit for building natively compiled mobile, web, and desktop applications from a single code base and is one of the fastest-growing mobile frameworks.

Amplify Flutter brings together AWS Amplify and Flutter, and we designed it for customers who have invested in the Flutter ecosystem and now want to take advantage of the power of AWS.

In August 2020, we launched the developer preview of Amplify Flutter and asked for feedback. We were delighted with the response. After months of refining the service, today we are happy to announce the general availability of Amplify Flutter.

New Amplify Flutter Features in GA
The GA release makes it easier to build powerful Flutter apps with the addition of three new capabilities:

First, we recently added a GraphQL API backed by AWS AppSync as well as REST APIs and handlers using Amazon API Gateway and AWS Lambda.

Second, Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data.

Finally, we have Hosted UI, which is a great way to implement authentication and works with Amazon Cognito and social identity providers such as Facebook, Google, and Amazon. Hosted UI is a customizable OAuth 2.0 flow that allows you to launch a login screen without embedding the SDK for Cognito or a social provider in your application.

Digging Deeper Into Amplify DataStore
I have been building an app over the past two weeks using Amplify Flutter, and my favorite feature is Amplify DataStore, primarily because it has saved me so much time.

Working with the REST and GraphQL APIs is great in Amplify. However, when I create a mobile app, I’m often thinking about what happens when the mobile device has intermittent connectivity and can’t connect to the API endpoints. Storing data locally and syncing back to the cloud can become quite complicated. Amplify DataStore solves that problem by providing a persistent on-device data store that handles the offline or online scenario.

When I started developing my app, I used DataStore as a stand-alone local database. However, its power became apparent to me when I connected it to a cloud backend. DataStore uses my AWS AppSync API to sync data when network connectivity is available. If the app is offline, DataStore stores the data locally, ready to sync when a connection becomes available.

Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for Dart based on the GraphQL schema that I provide.
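
For example, a schema along these lines (a hypothetical blog model that matches the Post type used below) would generate a Dart Post class and a PostStatus enum:

type Post @model {
  id: ID!
  title: String!
  rating: Int!
  status: PostStatus!
}

enum PostStatus {
  DRAFT
  PUBLISHED
}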

Writing to Amplify DataStore
Writing to the DataStore is straightforward. The documentation site shows an example, using a schema from a blog site, that you can try yourself.

Post newPost = Post(
    title: 'New Post being saved', rating: 15, status: PostStatus.DRAFT);
await Amplify.DataStore.save(newPost);

Reading from Amplify DataStore
To read from the DataStore, you can query for all records of a given model type.

try {
  List<Post> posts = await Amplify.DataStore.query(Post.classType);
} catch (e) {
  // Use string interpolation; concatenating a String with the exception is a type error in Dart.
  print("Query failed: $e");
}

Synchronization with Amplify DataStore
If you enable data synchronization, there can be different versions of an object across clients, and multiple clients may have updated their copies of an object. DataStore will converge different object versions by applying conflict detection and resolution strategies. The default resolution is called Auto Merge, but other strategies include optimistic concurrency control and custom Lambda functions.

Additional Amplify Flutter Features
Amplify Flutter allows you to work with AWS in three additional ways:

  • Authentication. Amplify Flutter provides an interface for authenticating a user and enables use cases like Sign-Up, Sign-In, and Multi-Factor Authentication. Behind the scenes, it provides the necessary authorization to the other Amplify categories. It comes with built-in support for Cognito user pools and identity pools.
  • Storage. Amplify Flutter provides an interface for managing user content for your app in public, protected, or private storage buckets. It enables use cases like upload, download, and deleting objects and provides built-in support for Amazon Simple Storage Service (S3) by default.
  • Analytics. Amplify Flutter enables you to collect tracking data for authenticated or unauthenticated users in Amazon Pinpoint. You can easily record events and extend the default functionality for custom metrics or attributes as needed.

Available Now
Amplify Flutter is now available in GA in all regions that support AWS Amplify. There is no additional cost for using Amplify Flutter; you only pay for the backend services your applications use above the free tier; check out the pricing page for more details.

Visit the Amplify Flutter documentation to get started and learn more. Happy coding.

— Martin

Amazon Location – Add Maps and Location Awareness to Your Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-location-add-maps-and-location-awareness-to-your-applications/

We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications. Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.

Introducing Amazon Location Service
Today we are making Amazon Location available in preview form, and you can start using it right away. Priced at a fraction of common alternatives, Amazon Location Service gives you access to maps and location-based services from multiple providers on an economical, pay-as-you-go basis.

You can use Amazon Location Service to build applications that know where they are and respond accordingly. You can display maps, validate addresses, perform geocoding (turn an address into a location), track the movement of packages and devices, and much more. You can easily set up geofences and receive notifications when tracked items enter or leave a geofenced area. You can even overlay your own data on the map while retaining full control.

You can access Amazon Location Service from the AWS Management Console, AWS Command Line Interface (CLI), or via a set of APIs. You can also use existing map libraries such as Mapbox GL and Tangram.

All About Amazon Location
Let’s take a look at the types of resources that Amazon Location Service makes available to you, and then talk about how you can use them in your applications.

Maps – Amazon Location Service lets you create maps that make use of data from our partners. You can choose between maps and map styles provided by Esri and by HERE Technologies, with the potential for more maps & more styles from these and other partners in the future. After you create a map, you can retrieve a tile (at one of up to 16 zoom levels) using the GetMapTile function. You won’t do this directly, but will use Mapbox GL, Tangram, or another library instead.

Place Indexes – You can choose between indexes provided by Esri and HERE. The indexes support the SearchPlaceIndexForPosition function which returns places, such as residential addresses or points of interest (often known as POI) that are closest to the position that you supply, while also performing reverse geocoding to turn the position (a pair of coordinates) into a legible address. Indexes also support the SearchPlaceIndexForText function, which searches for addresses, businesses, and points of interest using free-form text such as an address, a name, a city, or a region.

Trackers – Trackers receive location updates from one or more devices via the BatchUpdateDevicePosition function, and can be queried for the current position (GetDevicePosition) or location history (GetDevicePositionHistory) of a device. Trackers can also be linked to Geofence Collections to implement monitoring of devices as they move in and out of geofences.

Geofence Collections – Each collection contains a list of geofences that define geographic boundaries. Here’s a geofence (created with geojson.io) that outlines a park near me.

Amazon Location in Action
I can use the AWS Management Console to get started with Amazon Location and then move on to the AWS Command Line Interface (CLI) or the APIs if necessary. I open the Amazon Location Service Console, and I can either click Try it! to create a set of starter resources, or I can open up the navigation on the left and create them one-by-one. I’ll go for one-by-one, and click Maps:

Then I click Create map to proceed:

I enter a Name and a Description:

Then I choose the desired map and click Create map:

The map is created and ready to be added to my application right away:

Now I am ready to embed the map in my application, and I have several options including the Amplify JavaScript SDK, the Amplify Android SDK, the Amplify iOS SDK, Tangram, and Mapbox GL (read the Developer Guide to learn more about each option).

Next, I want to track the position of devices so that I can be notified when they enter or exit a given region. I use a GeoJSON editing tool such as geojson.io to create a geofence that is built from polygons, and save (download) the resulting file:

I click Create geofence collection in the left-side navigation, and in Step 1, I add my GeoJSON file, enter a Name and Description, and click Next:

Now I enter a Name and a Description for my tracker, and click Next. It will be linked to the geofence collection that I just created:

The next step is to arrange for the tracker to send events to Amazon EventBridge so that I can monitor them in CloudWatch Logs. I leave the settings as-is, and click Next to proceed:

I review all of my choices, and click Finalize to move ahead:

The resources are created, set up, and ready to go:

I can then write code or use the CLI to update the positions of my devices:

$ aws location batch-update-device-position \
   --tracker-name MyTracker1 \
   --updates "DeviceId=Jeff1,Position=-122.33805,47.62748,SampleTime=2020-11-05T02:59:07+0000"

After I do this a time or two, I can retrieve the position history for the device:

$ aws location get-device-position-history \
  --tracker-name MyTracker1 --device-id Jeff1
------------------------------------------------
|           GetDevicePositionHistory           |
+----------------------------------------------+
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T02:59:17.246Z  ||
||  SampleTime   |  2020-11-05T02:59:07Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.33805                              |||
|||  47.62748                                |||
||+------------------------------------------+||
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T03:02:08.002Z  ||
||  SampleTime   |  2020-11-05T03:01:29Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.43805                              |||
|||  47.52748                                |||
||+------------------------------------------+||

I can write Amazon EventBridge rules that watch for the events, and use them to perform any desired processing. Events are published when a device enters or leaves a geofenced area, and look like this:

{
  "version": "0",
  "id": "7cb6afa8-cbf0-e1d9-e585-fd5169025ee0",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-05T02:59:17.246Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:geo:us-east-1:123456789012:geofence-collection/MyGeoFences1",
    "arn:aws:geo:us-east-1:123456789012:tracker/MyTracker1"
  ],
  "detail": {
        "EventType": "ENTER",
        "GeofenceId": "LakeUnionPark",
        "DeviceId": "Jeff1",
        "SampleTime": "2020-11-05T02:59:07Z",
        "Position": [-122.33805, 47.52748]
  }
}
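
For example, a rule that matches only geofence entry events could use an event pattern like this (a sketch based on the event shown above):

{
  "source": ["aws.geo"],
  "detail-type": ["Location Geofence Event"],
  "detail": {
    "EventType": ["ENTER"]
  }
}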

Finally, I can create and use place indexes so that I can work with geographical objects. I’ll use the CLI for a change of pace. I create the index:

$ aws location create-place-index \
  --index-name MyIndex1 --data-source Here

Then I query it to find the addresses and points of interest near the location:

$ aws location search-place-index-for-position --index-name MyIndex1 \
  --position "[-122.33805,47.62748]" --output json \
  |  jq .Results[].Place.Label
"Terry Ave N, Seattle, WA 98109, United States"
"900 Westlake Ave N, Seattle, WA 98109-3523, United States"
"851 Terry Ave N, Seattle, WA 98109-4348, United States"
"860 Terry Ave N, Seattle, WA 98109-4330, United States"
"Seattle Fireboat Duwamish, 860 Terry Ave N, Seattle, WA 98109-4330, United States"
"824 Terry Ave N, Seattle, WA 98109-4330, United States"
"9th Ave N, Seattle, WA 98109, United States"
...

I can also do a text-based search:

$ aws location search-place-index-for-text --index-name MyIndex1 \
  --text Coffee --bias-position "[-122.33805,47.62748]" \
  --output json | jq .Results[].Place.Label
"Mohai Cafe, 860 Terry Ave N, Seattle, WA 98109, United States"
"Starbucks, 1200 Westlake Ave N, Seattle, WA 98109, United States"
"Metropolitan Deli and Cafe, 903 Dexter Ave N, Seattle, WA 98109, United States"
"Top Pot Doughnuts, 590 Terry Ave N, Seattle, WA 98109, United States"
"Caffe Umbria, 1201 Westlake Ave N, Seattle, WA 98109, United States"
"Starbucks, 515 Westlake Ave N, Seattle, WA 98109, United States"
"Cafe 815 Mercer, 815 9th Ave N, Seattle, WA 98109, United States"
"Victrola Coffee Roasters, 500 Boren Ave N, Seattle, WA 98109, United States"
"Specialty's, 520 Terry Ave N, Seattle, WA 98109, United States"
...

Both of the searches have other options; read the Geocoding, Reverse Geocoding, and Search documentation to learn more.

Things to Know
Amazon Location is launching today as a preview, and you can get started with it right away. During the preview we plan to add an API for routing, and will also do our best to respond to customer feedback and feature requests as they arrive.

Pricing is based on usage, with an initial evaluation period that lasts for three months and lets you make numerous calls to the Amazon Location APIs at no charge. After the evaluation period you pay the prices listed on the Amazon Location Pricing page.

Amazon Location is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions.

Jeff;

 

New AWS Amplify Admin UI Helps You Develop App Backends, No Cloud Experience Required

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-amplify-admin-ui-helps-you-develop-app-backends-no-cloud-experience-required/

Today AWS Amplify announces a new Admin UI to configure an application backend and manage app users and content outside the AWS console. This new feature makes it easier to use AWS services and accelerates the development and management of full-stack web and mobile apps.

We launched AWS Amplify in November 2018, and since then it has been helping front-end web and mobile developers to quickly develop and deploy cloud-connected web and mobile applications. In order to stay ahead of the curve and deliver innovation to customers, businesses need to ship features fast. However, developers and non-developers who are unfamiliar with AWS fundamentals require training, which slows the entire process down.

AWS Amplify today launches a new Admin UI that enables team members to interface with AWS without requiring an AWS account (only the first deployment requires an AWS account).

The Admin UI provides simple yet powerful tools to model database tables, add authentication and authorization, and manage app content, users, and groups. The AWS Amplify Admin UI focuses on data types rather than backend infrastructure. All the backend resources are generated as infrastructure as code (IaC) templates that can be committed to the team repository and integrated with the AWS Amplify continuous deployment workflow to manage the different environments.

Let’s Look at an Example Using the New AWS Amplify Admin UI
Imagine that you are a front-end web developer creating a website for a local restaurant. The restaurant owner wants to have a website where they can show their daily menu, and wants a simple way to update the content of the page every day.

There are many ways to solve this problem. You can spin up a server and install a CMS for the restaurant owner to manage the menu. For this particular use case, having a server exclusively to do this is just over-provisioning resources. Or, you can create the CMS yourself using serverless tools; however, this adds a lot of complexity and extra time to the development cycle.

Another option is to use the new AWS Amplify Admin UI that allows you to take advantage of many AWS managed services to create the backend quickly and also provides the ability to manage the application users and content.

The first thing you need to do is create a new AWS Amplify app backend in the AWS console. AWS Amplify will create a backend environment called staging. When your app backend is ready, open the new Admin UI. If you would like another developer, who has neither experience with AWS nor access to the AWS account, to work on this application, you can now grant them access so they can continue the work in the UI. But for now, let’s imagine that you are going to do all the development.

[Screenshot: opening the Admin UI]

The Admin UI contains all the tools that application developers need to configure the application backend and that content managers need to update the application content.

In the sidebar of the Admin UI (as shown in the following illustration), you can find all the different options for setting up your application.

To get started with the restaurant website, you need a menu data model. For that, first go to Data (1), then create a new data model called Menu (2), add the necessary fields, and Save and deploy (3) the model. Saving and deploying the model creates all the needed AWS resources in the backend, like an AWS AppSync API and an Amazon DynamoDB table to host the menu items. Deploying takes a few minutes.

[Screenshot: data modeling in the Admin UI]

After your model is deployed, you can start working on your website. For this example I will be using React, one of the web frameworks supported by AWS Amplify, but you can do the same example with any of the supported frameworks.

First, you need to install the AWS Amplify CLI:

npm install -g @aws-amplify/cli

Then create a new React application:

npx create-react-app react-amplified
cd react-amplified

When your application is created, you can configure it to use the AWS Amplify application we just created. For that, go back to the Admin UI, select Local setup instructions (1), and execute the amplify command (2) in the directory where the web application is stored on your computer.

[Screenshot: pulling the Amplify configuration]

When you execute that command, a browser window opens and asks if you are sure that you want to log in to the AWS Amplify Admin UI. Selecting yes grants the AWS Amplify CLI access to deploy updates to the backend directly from your local desktop. The CLI will prompt you with a few questions about your local environment, and finally will ask if you plan to modify this backend locally. Choose yes.

When that process ends, you will notice some changes in your web application directory: a couple of new directories were created (amplify and src/models) and also a new file (aws-exports.js). These files and directories hold all the configuration for your AWS Amplify application.

Now it’s time to develop your application. To access the menu data model you created in the first steps, you will use the DataStore library from AWS Amplify. DataStore allows you to connect to your deployed database and perform CRUD, sort and filter operations from your UI to manipulate backend data. In the Admin UI, you can see some examples on how to create, update, delete and query the model.
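
As an illustrative sketch (the Menu model is the one generated into src/models; the name, description, and price fields here are hypothetical and depend on the fields you defined in the Admin UI):

import { DataStore } from '@aws-amplify/datastore';
import { Menu } from './models';

// Create a new menu item (field names are hypothetical).
await DataStore.save(
  new Menu({
    name: 'Soup of the day',
    description: 'Tomato and basil',
    price: 6.5,
  })
);

// Query all menu items.
const menuItems = await DataStore.query(Menu);
console.log(menuItems);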

[Screenshot: using the data model]

When the website is ready, it’s time to add some content. The restaurant owner is the one adding the menu items. In order for them to be able to add items, they need to have permissions to access the Admin UI for this application.

To do this, you need to create a new Admin UI account for the restaurant owner with the correct permissions. Go to the AWS Amplify console for your application and then to the Admin UI management and invite users.

When adding new users to the Admin UI, you can define their permission scope. If you grant them full access, they can configure and manage the application backend environment; if you want them only to be able to edit the content, you can give them the manage only access scope. For the restaurant owner, grant manage only permissions.

[Screenshot: inviting new users to the Admin UI]

After sending the invite, the restaurant owner will receive an email with a link to access the Admin UI and a username and password to log in. When they log in, they can go to the Content tab (1), start adding items to their menu (2), and see the available items in the table on the screen (3).

[Screenshot: adding new content]

From this screen, the restaurant owner can add, delete and edit items in their menu whenever they want to. These changes are reflected in the website immediately after they save.

The use cases for the Admin UI are endless: blogs, e-commerce sites, planning apps, and more. Developers can build complex and feature-rich apps by focusing on their domain-specific data model instead of spending hours deploying and stitching together cloud infrastructure. AWS Amplify gives front-end developers the fastest and easiest way to develop mobile and web apps, accessible to developers who are not familiar with the cloud, and without the need to give AWS access to everybody on the team.

Availability
AWS Amplify Admin UI is available at launch in: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London).

For more information, visit the Amplify service page. Get started building a data model without an AWS account in the sandbox experience.

Marcia

Building Serverless Land: Part 2 – An auto-building static site

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-serverless-land-part-2-an-auto-building-static-site/

In this two-part blog series, I show how serverlessland.com is built. This is a static website that brings together all the latest blogs, videos, and training for AWS serverless. It automatically aggregates content from a number of sources. The content exists in a static JSON file, which generates a new static site each time it is updated. The result is a low-maintenance, low-latency serverless website, with almost limitless scalability.

A companion blog post explains how to build an automated content aggregation workflow to create and update the site’s content. In this post, you learn how to build a static website with an automated deployment pipeline that re-builds on each GitHub commit. The site content is stored in JSON files in the same repository as the code base. The example code can be found in this GitHub repository.

The growing adoption of serverless technologies generates increasing amounts of helpful and insightful content from the developer community. This content can be difficult to discover. Serverless Land helps channel this into a single searchable location. By collating this into a static website, users can enjoy a browsing experience with fast page load speeds.

The serverless nature of the site means that developers don’t need to manage infrastructure or scalability. The use of AWS Amplify Console to automatically deploy directly from GitHub enables a regular release cadence with a fast transition from prototype to production.

Static websites

A static site is served to the user’s web browser exactly as stored. This contrasts with dynamic webpages, which are generated by a web application. Static websites often provide improved performance for end users and have fewer or no dependent systems, such as databases or application servers. They may also be more cost-effective and secure than dynamic websites by using cloud storage instead of a hosted environment.

A static site generator is a tool that generates a static website from a website’s configuration and content. Content can come from a headless content management system, through a REST API, or from data referenced within the website’s file system. The output of a static site generator is a set of static files that form the website.

Serverless Land uses a static site generator for Vue.js called Nuxt.js. Each time content is updated, Nuxt.js regenerates the static site, building the HTML for each page route and storing it in a file.

The architecture

[Diagram: Serverless Land static website architecture]

When the content.json file is committed to GitHub, a new build process is triggered in AWS Amplify Console.

Deploying AWS Amplify

AWS Amplify helps developers to build secure and scalable full stack cloud applications. AWS Amplify Console is a tool within Amplify that provides a user interface with a git-based workflow for hosting static sites. Deploy applications by connecting to an existing repository (GitHub, BitBucket Cloud, GitLab, and AWS CodeCommit) to set up a fully managed, nearly continuous deployment pipeline.

This means that any changes committed to the repository trigger the pipeline to build, test, and deploy the changes to the target environment. It also provides instant content delivery network (CDN) cache invalidation, atomic deploys, password protection, and redirects without the need to manage any servers.

Building the static website

  1. To get started, use the Nuxt.js scaffolding tool to deploy a boilerplate application. Make sure you have npx installed (npx ships by default with npm version 5.2.0 and above).
    $ npx create-nuxt-app content-aggregator

    The scaffolding tool asks some questions; answer as follows:

    [Screenshot: Nuxt.js scaffolding tool inputs]

  2. Navigate to the project directory and launch it with:
    $ cd content-aggregator
    $ npm run dev

    The application is now running on http://localhost:3000. The pages directory contains your application views and routes. Nuxt.js reads the .vue files inside this directory and automatically creates the router configuration.

  3. Create a new file in the /pages directory named blogs.vue:
    $ touch pages/blogs.vue
  4. Copy the contents of this file into pages/blogs.vue.
  5. Create a new file in the /components directory named Post.vue:
    $ touch components/Post.vue
  6. Copy the contents of this file into components/Post.vue.
  7. Create a new file in /assets named content.json and copy the contents of this file into it:
    $ touch assets/content.json

The blogs Vue component

The blogs page is a Vue component with some special attributes and functions added to make development of your application easier. The following code imports the content.json file into the variable blogPosts. This file stores the static website’s array of aggregated blog post content.

import blogPosts from '../assets/content.json'

An array named blogPosts is initialized:

data(){
    return{
      blogPosts: []
    }
  },

The array is then loaded with the contents of content.json.

 mounted(){
    this.blogPosts = blogPosts
  },

In the component template, the v-for directive renders a list of post items based on the blogPosts array. It requires a special syntax in the form of blog in blogPosts, where blogPosts is the source data array and blog is an alias for the array element being iterated on. The Post component is rendered for each iteration. Since components have isolated scopes of their own, a :post prop is used to pass the iterated data into the Post component:

<ul>
  <li v-for="blog in blogPosts" :key="blog">
     <Post :post="blog" />
  </li>
</ul>

The post data is then displayed by the following template in components/Post.vue.

<template>
    <div class="hello">
      <h3>{{ post.title }} </h3>
      <div class="img-holder">
          <img :src="post.image" />
      </div>
      <p>{{ post.intro }} </p>
      <p>Published on {{post.date}}, by {{ post.author }}</p>
      <a :href="post.link"> Read article</a>
    </div>
</template>

This forms the framework for the static website. The /blogs page displays content from /assets/content.json via the Post component. To view this, go to http://localhost:3000/blogs in your browser:

The /blogs page

Add a new item to the content.json file and rebuild the static website to display new posts on the blogs page. The previous content was generated using the aggregation workflow explained in this companion blog post.
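
For illustration, a minimal content.json entry might look like the following; the field names match what the Post component renders, and all values here are placeholders:

[
  {
    "title": "Example blog post title",
    "image": "https://example.com/header-image.png",
    "intro": "A short introduction shown on the blogs page.",
    "date": "2020-09-01",
    "author": "Jane Doe",
    "link": "https://example.com/example-blog-post"
  }
]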

Connect to Amplify Console

Clone the web application to a GitHub repository and connect it to Amplify Console to automate the rebuild and deployment process:

  1. Upload the code to a new GitHub repository named ‘content-aggregator’.
  2. In the AWS Management Console, go to the Amplify Console and choose Connect app.
  3. Choose GitHub then Continue.
  4. Authorize to your GitHub account, then in the Recently updated repositories drop-down select the ‘content-aggregator’ repository.
  5. In the Branch field, leave the default as master and choose Next.
  6. In the Build and test settings choose edit.
  7. Replace - npm run build with - npm run generate.
  8. Replace baseDirectory: / with baseDirectory: dist

    This runs the nuxt generate command each time an application build process is triggered. The nuxt.config.js file sets the target property to static (see the snippet after these steps), which generates the web application into static files. Nuxt.js creates a dist directory with everything inside ready to be deployed on a static hosting service.
  9. Choose Save then Next.
  10. Review that the Repository details and App settings are correct, then choose Save and deploy.

    Amplify Console deployment
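
For reference, the relevant setting in nuxt.config.js looks like the following minimal sketch (all other configuration options are omitted):

// nuxt.config.js: 'static' tells nuxt generate to pre-render every route as an HTML file
export default {
  target: 'static'
}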

Once the deployment process has completed and is verified, choose the URL generated by Amplify Console. Append /blogs to the URL, to see the static website blogs page.

Any edits pushed to the repository’s content.json file trigger a new deployment in Amplify Console that regenerates the static website. This companion blog post explains how to set up an automated content aggregator to add new items to the content.json file from an RSS feed.

Conclusion

This blog post shows how to create a static website with Vue.js using the Nuxt.js static site generator. The site’s content is generated from a single JSON file, stored in the site’s assets directory. It is automatically deployed and regenerated by Amplify Console each time a new commit is pushed to the GitHub repository. By automating updates to the content.json file, you can create low-maintenance, low-latency static websites with almost limitless scalability.

This application framework is used together with this automated content aggregator to pull together articles for http://serverlessland.com. Serverless Land brings together all the latest blogs, videos, and training for AWS Serverless. Download the code from this GitHub repository to start building your own automated content aggregation platform.

Building Serverless Land: Part 1 – Automating content aggregation

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-serverless-land-part-1-automating-content-aggregation/

In this two part blog series, I show how serverlessland.com is built. This is a static website that brings together all the latest blogs, videos, and training for AWS Serverless. It automatically aggregates content from a number of sources. The content exists in static JSON files, which generate a new site build each time they are updated. The result is a low-maintenance, low-latency serverless website, with almost limitless scalability.

This blog post explains how to automate the aggregation of content from multiple RSS feeds into a JSON file stored in GitHub. This workflow uses AWS Lambda and AWS Step Functions, triggered by Amazon EventBridge. The application can be downloaded and deployed from this GitHub repository.

The growing adoption of serverless technologies generates increasing amounts of helpful and insightful content from the developer community. This content can be difficult to discover. Serverless Land helps channel it into a single searchable location. By automating the collection of this content with scheduled serverless workflows, the process scales robustly to a virtually unlimited number of sources. The Step Functions MAP state allows for dynamic parallel processing of multiple content sources, without the need to alter code. Onboarding a new content source is as fast and simple as running a single CLI command.

The architecture

Automating content aggregation with AWS Step Functions

The application consists of six Lambda functions orchestrated by a Step Functions workflow:

  1. The workflow is triggered every 2 hours by a scheduled EventBridge rule. The schedule event passes an RSS feed URL to the workflow.
  2. The first task invokes a Lambda function that runs an HTTP GET request to the RSS feed. It returns an array of recent blog URLs. The array of blog URLs is provided as the input to a MAP state. The MAP state type makes it possible to run a set of steps for each element of an input array in parallel. The number of items in the array can be different for each execution. This is referred to as dynamic parallelism.
  3. The next task invokes a Lambda function that uses the GitHub REST API to retrieve the static website’s JSON content file.
  4. The first Lambda function in the MAP state runs an HTTP GET request to the blog post URL provided in the payload. The URL is scraped for content and an object containing detailed metadata about the blog post is returned in the response.
  5. The blog post metadata is compared against the website’s JSON content file in GitHub.
  6. A CHOICE state determines if the blog post metadata has already been committed to the repository.
  7. If the blog post is new, it is added to an array of “content to commit”.
  8. As the workflow exits the MAP state, the results are passed to the final Lambda function. This uses a single git commit to add each blog post object to the website’s JSON content file in GitHub. This triggers an event that rebuilds the static site.

Using Secrets in AWS Lambda

Two of the Lambda functions require a GitHub personal access token to commit files to a repository. Sensitive credentials or secrets such as this should be stored separate to the function code. Use AWS Systems Manager Parameter Store to store the personal access token as an encrypted string. The AWS Serverless Application Model (AWS SAM) template grants each Lambda function permission to access and decrypt the string in order to use it.
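
As a minimal sketch, this permission can be granted with the SSMParameterReadPolicy AWS SAM policy template; the function name here is illustrative, and the exact policy used by the application may differ:

GetRepoContents:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      # Grants read and decrypt access to the GitHubAPIKey parameter (leading slash omitted by convention)
      - SSMParameterReadPolicy:
          ParameterName: GitHubAPIKey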

  1. Follow these steps to create a personal access token that grants permission to update files in repositories in your GitHub account.
  2. Use the AWS Command Line Interface (AWS CLI) to create a new parameter named GitHubAPIKey:
aws ssm put-parameter \
--name /GitHubAPIKey \
--value ReplaceThisWithYourGitHubAPIKey \
--type SecureString

The command returns the parameter version and tier:

{
    "Version": 1,
    "Tier": "Standard"
}
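
To verify that the parameter is stored and can be decrypted, you can optionally retrieve it with the CLI:

aws ssm get-parameter \
--name /GitHubAPIKey \
--with-decryption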

Deploying the application

  1. Fork this GitHub repository to your GitHub Account.
  2. Clone the forked repository to your local machine and deploy the application using AWS SAM.
  3. In a terminal, enter:
    git clone https://github.com/aws-samples/content-aggregator-example
    sam deploy -g
  4. Enter the required parameters when prompted.

This deploys the application defined in the AWS SAM template file (template.yaml).

The business logic

Each Lambda function is written in Node.js and is stored inside a directory that contains the package dependencies in a `node_modules` folder. These are defined for each function by its relative package.json file. The function dependencies are bundled and deployed using the sam build && sam deploy -g commands.

The GetRepoContents and WriteToGitHub Lambda functions use the octokit/rest.js library to communicate with GitHub. The library authenticates to GitHub by using the GitHub API key held in Parameter Store. The AWS SDK for Node.js is used to obtain the API key from Parameter Store. With a single synchronous call, it retrieves and decrypts the parameter value. This is then used to authenticate to GitHub.

const AWS = require('aws-sdk');
const { Octokit } = require('@octokit/rest');
const SSM = new AWS.SSM();

// Get the GitHub API key from Parameter Store and authenticate to GitHub
const singleParam = { Name: '/GitHubAPIKey', WithDecryption: true };
const GITHUB_ACCESS_TOKEN = await SSM.getParameter(singleParam).promise();
const octokit = new Octokit({
  auth: GITHUB_ACCESS_TOKEN.Parameter.Value,
});

Lambda environment variables are used to store non-sensitive key-value data, such as the repository name and JSON file location. These can be entered when deploying with the AWS SAM guided deploy command:

Environment:
        Variables:
          GitHubRepo: !Ref GitHubRepo
          JSONFile: !Ref JSONFile
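
Inside the function code, these values are read from the Node.js environment. A minimal sketch:

// Non-sensitive configuration injected by the AWS SAM template
const GITHUB_REPO = process.env.GitHubRepo;
const JSON_FILE = process.env.JSONFile;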

The GetRepoContents function makes a synchronous HTTP request to the GitHub repository to retrieve the contents of the website’s JSON file. The response SHA and file contents are returned from the Lambda function and act as the input to the next task in the Step Functions workflow. The SHA is used in the final step of the workflow to save all new blog posts in a single commit.
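
A hedged sketch of that retrieval follows; the owner value is illustrative, and repos.getContent assumes a recent version of the octokit/rest.js library:

// Fetch the JSON content file and return its SHA and decoded contents
const { data } = await octokit.repos.getContent({
  owner: 'your-github-user', // hypothetical account name
  repo: process.env.GitHubRepo,
  path: process.env.JSONFile,
});

return {
  sha: data.sha, // identifies the current file revision for the later commit
  content: JSON.parse(Buffer.from(data.content, 'base64').toString('utf8')),
};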

Map state iterations

The MAP state runs concurrently for each element in the input array (each blog post URL).

Each iteration must compare a blog post URL to the existing JSON content file and decide whether to ignore the post. To do this, the MAP state requires both the input array of blog post URLs and the existing JSON file contents. The ItemsPath, ResultPath, and Parameters are used to achieve this:

  • The ItemsPath sets the input array path to $.RSSBlogs.body.
  • The ResultPath states that the output of the branches is placed in $.mapResults.
  • The Parameters block replaces the input to the iterations with a JSON node. This contains both the current item data from the context object ($$.Map.Item.Value) and the contents of the GitHub JSON file ($.RepoBlogs).
"Type":"Map",
    "InputPath": "$",
    "ItemsPath": "$.RSSBlogs.body",
    "ResultPath": "$.mapResults",
    "Parameters": {
        "BlogUrl.$": "$$.Map.Item.Value",
        "RepoBlogs.$": "$.RepoBlogs"
     },
    "MaxConcurrency": 0,
    "Iterator": {
       "StartAt": "getMeta",

The Step Functions resource

The AWS SAM template uses the following Step Functions resource definition to create a Step Functions state machine:

  MyStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: statemachine/my_state_machine.asl.json
      DefinitionSubstitutions:
        GetBlogPostArn: !GetAtt GetBlogPost.Arn
        GetUrlsArn: !GetAtt GetUrls.Arn
        WriteToGitHubArn: !GetAtt WriteToGitHub.Arn
        CompareAgainstRepoArn: !GetAtt CompareAgainstRepo.Arn
        GetRepoContentsArn: !GetAtt GetRepoContents.Arn
        AddToListArn: !GetAtt AddToList.Arn
      Role: !GetAtt StateMachineRole.Arn

The workflow definition itself is stored in a separate file (statemachine/my_state_machine.asl.json). The DefinitionSubstitutions property specifies mappings for placeholder variables. This enables the template to inject the Lambda function ARNs obtained by the GetAtt intrinsic function during template translation:

Step Functions mappings with placeholder variables
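
For illustration, a task in the definition file might reference a placeholder like this (the state names here are hypothetical); ${GetUrlsArn} is replaced with the function ARN during deployment:

"GetUrls": {
  "Type": "Task",
  "Resource": "${GetUrlsArn}",
  "Next": "GetRepoContents"
}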

A state machine execution role is defined within the AWS SAM template. It grants the lambda:InvokeFunction action, tightly scoped to the six Lambda functions used in the workflow. This is the minimum set of permissions required for Step Functions to carry out its task. Additional permissions can be granted only as necessary, following the zero-trust security model.

Action: lambda:InvokeFunction
Resource:
- !GetAtt GetBlogPost.Arn
- !GetAtt GetUrls.Arn
- !GetAtt CompareAgainstRepo.Arn
- !GetAtt WriteToGitHub.Arn
- !GetAtt AddToList.Arn
- !GetAtt GetRepoContents.Arn

The Step Functions workflow definition is authored using the AWS Toolkit for Visual Studio Code. The toolkit’s Step Functions support allows developers to quickly generate workflow definitions from selectable examples. The render tool and automatic linting can help you debug and understand the workflow during development. Read more about the toolkit in this launch post.

Scheduling events and adding new feeds

The AWS SAM template creates a new EventBridge rule on the default event bus. This rule is scheduled to invoke the Step Functions workflow every 2 hours. A valid JSON string containing an RSS feed URL is sent as the input payload. The feed URL is obtained from a template parameter and can be set on deployment. The AWS Compute Blog is set as the default feed URL. To aggregate additional blog feeds, create a new rule to invoke the Step Functions workflow, providing the RSS feed URL as a valid JSON input string in the following format:

{"feedUrl":"replace-this-with-your-rss-url"}

ScheduledEventRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "Scheduled event to trigger Step Functions state machine"
      ScheduleExpression: rate(2 hours)
      State: "ENABLED"
      Targets:
        -
          Arn: !Ref MyStateMachine
          Id: !GetAtt MyStateMachine.Name
          RoleArn: !GetAtt ScheduledEventIAMRole.Arn
          Input: !Sub
            - >
              {
                "feedUrl" : "${RssFeedUrl}"
              }
            - RssFeedUrl: !Ref RSSFeed
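
As a hedged sketch, onboarding an additional feed from the CLI might look like the following; the rule name, ARNs, and feed URL are placeholders:

aws events put-rule \
--name additional-feed-schedule \
--schedule-expression "rate(2 hours)"

aws events put-targets \
--rule additional-feed-schedule \
--targets '[{"Id":"1","Arn":"<state-machine-arn>","RoleArn":"<events-role-arn>","Input":"{\"feedUrl\":\"replace-this-with-your-rss-url\"}"}]'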

A completed workflow with step output

Conclusion

This blog post shows how to automate the aggregation of content from multiple RSS feeds into a single JSON file using serverless workflows.

The Step Functions MAP state allows for dynamic parallel processing of each item. The recent increase in the state payload size limit means that the contents of the static JSON file can be held within the workflow context. The application decision logic is separated from the business logic and events.

Lambda functions are scoped to finite business logic with Step Functions states managing decision logic and iterations. EventBridge is used to manage the inbound business events. The zero-trust security model is followed with minimum permissions granted to each service and Parameter Store used to hold encrypted secrets.

This application is used to pull together articles for http://serverlessland.com. Serverless Land brings together all the latest blogs, videos, and training for AWS Serverless. Download the code from this GitHub repository to start building your own automated content aggregation platform.

Building, bundling, and deploying applications with the AWS CDK

Post Syndicated from Cory Hall original https://aws.amazon.com/blogs/devops/building-apps-with-aws-cdk/

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages.

The post CDK Pipelines: Continuous delivery for AWS CDK applications showed how you can use CDK Pipelines to deploy a TypeScript-based AWS Lambda function. In that post, you learned how to add additional build commands to the pipeline to compile the TypeScript code to JavaScript, which is needed to create the Lambda deployment package.

In this post, we dive deeper into how you can perform these build commands as part of your AWS CDK build process by using the native AWS CDK bundling functionality.

If you’re working with Python, TypeScript, or JavaScript-based Lambda functions, you may already be familiar with the PythonFunction and NodejsFunction constructs, which use the bundling functionality. This post describes how to write your own bundling logic for instances where a higher-level construct either doesn’t already exist or doesn’t meet your needs. To illustrate this, I walk through two different examples: a Lambda function written in Golang and a static site created with Nuxt.js.

Concepts

A typical CI/CD pipeline contains steps to build and compile your source code, bundle it into a deployable artifact, push it to artifact stores, and deploy to an environment. In this post, we focus on the building, compiling, and bundling stages of the pipeline.

The AWS CDK has the concept of bundling source code into a deployable artifact. As of this writing, this works for two main types of assets: Docker images published to Amazon Elastic Container Registry (Amazon ECR) and files published to Amazon Simple Storage Service (Amazon S3). For files published to Amazon S3, this can be as simple as pointing to a local file or directory, which the AWS CDK uploads to Amazon S3 for you.
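
As a minimal sketch, a file asset can be declared by pointing at a local directory; the construct ID and directory name here are illustrative:

import * as path from 'path';
import * as s3_assets from '@aws-cdk/aws-s3-assets';

// The AWS CDK zips this directory and uploads it to its staging S3 bucket during deployment
new s3_assets.Asset(this, 'MyFileAsset', {
  path: path.join(__dirname, 'local-directory'),
});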

When you build an AWS CDK application (by running cdk synth), a cloud assembly is produced. The cloud assembly consists of a set of files and directories that define your deployable AWS CDK application. In the context of the AWS CDK, it might include the following:

  • AWS CloudFormation templates and instructions on where to deploy them
  • Dockerfiles, corresponding application source code, and information about where to build and push the images to
  • File assets and information about which S3 buckets to upload the files to

Use case

For this use case, our application consists of front-end and backend components. The example code is available in the GitHub repo. In the repository, I have split the example into two separate AWS CDK applications. The repo also contains the Golang Lambda example app and the Nuxt.js static site.

Golang Lambda function

To create a Golang-based Lambda function, you must first create a Lambda function deployment package. For Go, this consists of a .zip file containing a Go executable. Because we don’t commit the Go executable to our source repository, our CI/CD pipeline must perform the necessary steps to create it.

In the context of the AWS CDK, when we create a Lambda function, we have to tell the AWS CDK where to find the deployment package. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  runtime: lambda.Runtime.GO_1_X,
  handler: 'main',
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-go-executable')),
});

In the preceding code, the lambda.Code.fromAsset() method tells the AWS CDK where to find the Golang executable. When we run cdk synth, it stages this Go executable in the cloud assembly, which it zips and publishes to Amazon S3 as part of the PublishAssets stage.

If we’re running the AWS CDK as part of a CI/CD pipeline, this executable doesn’t exist yet, so how do we create it? One method is CDK bundling. The lambda.Code.fromAsset() method takes a second optional argument, AssetOptions, which contains the bundling parameter. With this bundling parameter, we can tell the AWS CDK to perform steps prior to staging the files in the cloud assembly.

Breaking down the BundlingOptions parameter further, we can perform the build inside a Docker container or locally.

Building inside a Docker container

For this to work, we need to make sure that we have Docker running on our build machine. In AWS CodeBuild, this means setting privileged: true. See the following code:

new lambda.Function(this, 'MyGoFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
    bundling: {
      image: lambda.Runtime.GO_1_X.bundlingDockerImage,
      command: [
        'bash', '-c', [
          'go test -v',
          'GOOS=linux go build -o /asset-output/main',
        ].join(' && '),
      ],
    },
  }),
  ...
});

We specify two parameters:

  • image (required) – The Docker image to perform the build commands in
  • command (optional) – The command to run within the container

The AWS CDK mounts the folder specified as the first argument to fromAsset at /asset-input inside the container, and mounts the asset output directory (where the cloud assembly is staged) at /asset-output inside the container.

After we perform the build commands, we need to make sure we copy the Golang executable to the /asset-output location (or specify it as the build output location like in the preceding example).

This is the equivalent of running something like the following code:

docker run \
  --rm \
  -v folder-containing-source-code:/asset-input \
  -v cdk.out/asset.1234a4b5/:/asset-output \
  lambci/lambda:build-go1.x \
  bash -c 'GOOS=linux go build -o /asset-output/main'

Building locally

To build locally (not in a Docker container), we have to provide the local parameter. See the following code:

import { spawnSync } from 'child_process';

new lambda.Function(this, 'MyGoFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
    bundling: {
      image: lambda.Runtime.GO_1_X.bundlingDockerImage,
      command: [],
      local: {
        tryBundle(outputDir: string) {
          // Verify that a local Go toolchain is available; if not, fall back to Docker bundling
          if (spawnSync('go', ['version']).error) {
            return false
          }

          // Cross-compile the executable directly into the asset output directory
          spawnSync('go', ['build', '-o', path.join(outputDir, 'main')], {
            env: { ...process.env, GOOS: 'linux' },
          });
          return true
        },
      },
    },
  })
  ...
});

The local parameter must implement the ILocalBundling interface. The tryBundle method is passed the asset output directory, and expects you to return a boolean (true or false). If you return true, the AWS CDK doesn’t try to perform Docker bundling. If you return false, it falls back to Docker bundling. Just like with Docker bundling, you must make sure that you place the Go executable in the outputDir.

Typically, you should perform some validation steps to ensure that you have the required dependencies installed locally to perform the build. This could be checking to see if you have go installed, or checking a specific version of go. This can be useful if you don’t have control over what type of build environment this might run in (for example, if you’re building a construct to be consumed by others).
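
For example, a minimal sketch of a stricter check might parse the go version output (the version-string format is an assumption about the Go toolchain's output):

import { spawnSync } from 'child_process';

// Returns true if a local Go toolchain of at least the given 1.x minor version is available
function hasGo(minMinor: number): boolean {
  const result = spawnSync('go', ['version'], { encoding: 'utf8' });
  if (result.error || result.status !== 0) {
    return false;
  }
  const match = /go1\.(\d+)/.exec(result.stdout); // e.g. "go version go1.16.3 linux/amd64"
  return match !== null && Number(match[1]) >= minMinor;
}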

If we run cdk synth on this, we see a new message telling us that the AWS CDK is bundling the asset. If we include additional commands like go test, we also see the output of those commands. This is especially useful if you want to fail a build when tests fail. See the following output:

$ cdk synth
Bundling asset GolangLambdaStack/MyGoFunction/Code/Stage...
✓  . (9ms)
✓  clients (5ms)

DONE 8 tests in 11.476s
✓  clients (5ms) (coverage: 84.6% of statements)
✓  . (6ms) (coverage: 78.4% of statements)

DONE 8 tests in 2.464s

Cloud Assembly

If we look at the cloud assembly that was generated (located in the cdk.out directory), we see the synthesized templates alongside the staged assets.

It contains our GolangLambdaStack CloudFormation template that defines our Lambda function, as well as our Golang executable, bundled at asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952/main.

Let’s look at how the AWS CDK uses this information. The GolangLambdaStack.assets.json file contains all the information necessary for the AWS CDK to know where and how to publish our assets (in this use case, our Golang Lambda executable). See the following code:

{
  "version": "5.0.0",
  "files": {
    "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952": {
      "source": {
        "path": "asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952",
        "packaging": "zip"
      },
      "destinations": {
        "current_account-current_region": {
          "bucketName": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}",
          "objectKey": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/cdk-hnb659fds-file-publishing-role-${AWS::AccountId}-${AWS::Region}"
        }
      }
    }
  }
}

The file contains information about where to find the source files (source.path) and what type of packaging (source.packaging). It also tells the AWS CDK where to publish this .zip file (bucketName and objectKey) and what AWS Identity and Access Management (IAM) role to use (assumeRoleArn). In this use case, we only deploy to a single account and Region, but if you have multiple accounts or Regions, you see multiple destinations in this file.

The GolangLambdaStack.template.json file that defines our Lambda resource looks something like the following code:

{
  "Resources": {
    "MyGoFunction0AB33E85": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": {
            "Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
          },
          "S3Key": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip"
        },
        "Handler": "main",
        ...
      }
    },
    ...
  }
}

The S3Bucket and S3Key match the bucketName and objectKey from the assets.json file. By default, the S3Key is generated by calculating a hash of the folder location that you pass to lambda.Code.fromAsset() (for this post, folder-containing-source-code). This means that any time we update our source code, this calculated hash changes and a new Lambda function deployment is triggered.

Nuxt.js static site

In this section, I walk through building a static site using the Nuxt.js framework. You can apply the same logic to any static site framework that requires you to run a build step prior to deploying.

To deploy this static site, we use the BucketDeployment construct. This is a construct that allows you to populate an S3 bucket with the contents of .zip files from other S3 buckets or from a local disk.

Typically, we simply tell the BucketDeployment construct where to find the files that it needs to deploy to the S3 bucket. See the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-directory')),
  ],
  destinationBucket: myBucket
});

To deploy a static site built with a framework like Nuxt.js, we need to first run a build step to compile the site into something that can be deployed. For Nuxt.js, we run the following two commands:

  • yarn install – Installs all our dependencies
  • yarn generate – Builds the application and generates every route as an HTML file (used for static hosting)

This creates a dist directory, which you can deploy to Amazon S3.

Just like with the Golang Lambda example, we can perform these steps as part of the AWS CDK through either local or Docker bundling.

Building inside a Docker container

To build inside a Docker container, use the following code:

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
      bundling: {
        image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
        command: [
          'bash', '-c', [
            'yarn install',
            'yarn generate',
            'cp -r /asset-input/dist/* /asset-output/',
          ].join(' && '),
        ],
      },
    }),
  ],
  ...
});

For this post, we build inside the publicly available node:lts image hosted on DockerHub. Inside the container, we run our build commands yarn install && yarn generate, and copy the generated dist directory to our output directory (the cloud assembly).

The parameters are the same as described in the Golang example we walked through earlier.

Building locally

To build locally, use the following code:

import { spawnSync } from 'child_process';
import * as fs from 'fs-extra'; // copySync comes from the fs-extra package

const projectDir = path.join(__dirname, 'path-to-nuxtjs-project');

new s3_deployment.BucketDeployment(this, 'DeployMySite', {
  sources: [
    s3_deployment.Source.asset(projectDir, {
      bundling: {
        local: {
          tryBundle(outputDir: string) {
            // Verify that yarn is available locally; if not, fall back to Docker bundling
            if (spawnSync('yarn', ['--version']).error) {
              return false
            }

            // Install dependencies and generate the static site in the project directory
            spawnSync('yarn', ['install'], { cwd: projectDir });
            spawnSync('yarn', ['generate'], { cwd: projectDir });

            // Copy the generated dist directory into the asset output directory
            fs.copySync(path.join(projectDir, 'dist'), outputDir);
            return true
          },
        },
        image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
        command: [],
      },
    }),
  ],
  ...
});

Building locally works the same as the Golang example we walked through earlier, with one exception. We have one additional command to run that copies the generated dist folder to our output directory (cloud assembly).

Conclusion

This post showed how you can easily compile your backend and front-end applications using the AWS CDK. You can find the example code for this post in this GitHub repo. If you have any questions or comments, please comment on the GitHub repo. If you have any additional examples you want to add, we encourage you to create a Pull Request with your example!

Our code also contains examples of deploying the applications using CDK Pipelines, so if you’re interested in deploying the example yourself, check out the example repo.

 

About the author

Cory Hall

Cory is a Solutions Architect at Amazon Web Services with a passion for DevOps and is based in Charlotte, NC. Cory works with enterprise AWS customers to help them design, deploy, and scale applications to achieve their business goals.