Tag Archives: Amazon EventBridge

Integrating Amazon EventBridge into your serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/integrating-amazon-eventbridge-into-your-serverless-applications/

Event-driven architecture enables developers to create decoupled services across applications. When combined with the range of managed services available in AWS, this approach can make applications highly scalable and flexible, with minimal maintenance.

Many services in the AWS Cloud produce events, including integrated software as a service (SaaS) applications. Your custom applications can also produce and consume events. With so many events from different sources, you need a way to coordinate this traffic. Amazon EventBridge is a serverless event bus that helps manage how all these events are routed throughout your applications.

The routing logic is managed by rules that evaluate the events against event expressions. EventBridge delivers matching events to targets such as AWS Lambda, so you can process events with your custom business logic.

EventBridge architecture

In this blog post, I show how you can build an event producer and consumer in AWS Lambda, and create a rule to route events. The code uses the AWS Serverless Application Model (SAM), so you can deploy the application in your own AWS Account. This walkthrough uses AWS resources that are covered by the AWS Free Tier.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file.

How the example application works

In this example, a banking application for automated teller machines (ATMs) produces events about transactions. It sends the events to EventBridge, which uses rules defined by the application to route them accordingly. There are three downstream services consuming a subset of these events.

 

Sample ATM application architecture

In the repo, the atmProducer subdirectory contains handler.js, which represents the ATM service producing events. This code is a Lambda handler written in Node.js, and publishes events to EventBridge via the AWS SDK using this line of code:

const result = await eventbridge.putEvents(params).promise()
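
For context, here is a minimal sketch of how that call might sit inside a handler (assuming the AWS SDK for JavaScript v2; the Entries array holds event objects like the one shown below):

const AWS = require('aws-sdk')
const eventbridge = new AWS.EventBridge()

exports.lambdaHandler = async (event) => {
  // putEvents accepts up to 10 entries per call
  const params = { Entries: [ /* event objects, as shown below */ ] }
  const result = await eventbridge.putEvents(params).promise()
  console.log(result)
  return result
}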

This directory also contains events.js, listing several test transactions in an Entries array. A single event is defined as follows:

    {
      // Event envelope fields
      Source: 'custom.myATMapp',
      EventBusName: 'default',
      DetailType: 'transaction',
      Time: new Date(),

      // Main event body
      Detail: JSON.stringify({
        action: 'withdrawal',
        location: 'MA-BOS-01',
        amount: 300,
        result: 'approved',
        transactionId: '123456',
        cardPresent: true,
        partnerBank: 'Example Bank',
        remainingFunds: 722.34
      })
    }

The Detail section of the event specifies transaction attributes. These include the location of the ATM, the amount, the partner bank, and the result of the transaction.

The handler.js file in the atmConsumer subdirectory contains three functions:

exports.case1Handler = async (event) => {
  console.log('--- Approved transactions ---')
  console.log(JSON.stringify(event, null, 2))
}

exports.case2Handler = async (event) => {
  console.log('--- NY location transactions ---')
  console.log(JSON.stringify(event, null, 2))
}

exports.case3Handler = async (event) => {
  console.log('--- Unapproved transactions ---')
  console.log(JSON.stringify(event, null, 2))
}

Each function receives transaction events, which are logged via the console.log statements to Amazon CloudWatch Logs. The consumer functions operate independently of the producer and are unaware of the source of the events.

The routing logic is contained in the EventBridge rules that are deployed by the application’s SAM template. The rules evaluate the incoming stream of events, and route matching events to the target Lambda functions.

Running the ATM application

After deploying the sample application, you can generate test events by invoking the atmProducer Lambda function:

  1. Open the Lambda console in the same Region where you deployed the SAM application.

    AWS Lambda console

  2. There are four Lambda functions with the prefix atm-demo. Choose the atmProducerFn function, then choose Test.

    Testing the Lambda function

  3. For Event name, enter Test, then choose Create. Choose Test once more to invoke the function.

    Invoking the Lambda function

This puts the sample events onto the EventBridge default event bus. Next, inspect the logs from the three consumer functions to see which events route to each function:

  1. Navigate to the CloudWatch console in the same Region. Select Logs, then Log groups from the menu.

    CloudWatch console

  2. Select the log group containing atmConsumerCase1. You see two streams representing the two transactions approved by the ATM. Choose a log stream to view the output.

    Log stream output

  3. Navigate back to the list of log groups, then select the log group containing atmConsumerCase2. You see streams for the two transactions matching the “New York” location filter.

    Transaction matching "New York" filter

  4. Navigate back once more to the list of log groups and select the log group containing atmConsumerCase3. Open the stream to see the denied transaction.

    Denied transaction in logs

How EventBridge rules work

From the AWS Management Console, navigate to EventBridge from the Services dropdown. Choose Rules from the menu to see the rules created by the application deployment.

Amazon EventBridge rules

Choose one of the rules to see the configuration. Each rule is associated with a single event bus (the default bus for this application), which means it evaluates every event published to the bus. Scroll down to view the Event pattern used by the rule:

EventBridge event patterns

The event pattern is a JSON object with the same structure as the events it matches. Each matching value must be wrapped in an array, and you can provide multiple values if necessary. Multiple values are compared using ‘or’ logic – only one of the values needs to match the incoming event.

You can also use content-based filtering in event patterns to create more complex rules that match dynamically. For example, the prefix operator matches any event where the detail.location value begins with “NY-”.
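
For reference, a pattern of this shape matches ATM events from New York locations (a sketch based on the sample application’s events; the exact pattern deployed by the SAM template may differ):

{
  "source": [ "custom.myATMapp" ],
  "detail-type": [ "transaction" ],
  "detail": {
    "location": [ { "prefix": "NY-" } ]
  }
}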

Integrating EventBridge into SAM templates

You can build and test rules manually in the EventBridge console, which can help in the development process as you refine event patterns. However, once you are ready to deploy your application, it’s easier to use a framework like SAM to launch all your serverless resources consistently.

In the example application, open the template.yaml file to view the SAM template, which defines the four Lambda functions. This shows two different ways to integrate the Lambda functions with EventBridge. The first approach uses the Events property to configure the EventBridge rule:

  atmConsumerCase3Fn:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: atmConsumer/
      Handler: handler.case3Handler
      Runtime: nodejs12.x
      Events:
        Trigger:
          Type: CloudWatchEvent 
          Properties:
            Pattern:
              source:
                - custom.myATMapp
              detail-type:
                - transaction                
              detail:
                result:
                  - "anything-but": "approved"

The syntax defines an event that invokes the Lambda function. In the YAML, you only need to define the pattern, and SAM automatically creates an IAM role with the required permissions. The pattern is the YAML equivalent of the Event Pattern shown in the console earlier.

This example automatically creates the rule on the default event bus, which exists in every AWS account. To associate the rule with a custom event bus, you can add the EventBusName to the template. If this property is missing, SAM uses the default bus.
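
As a sketch, assuming a custom bus named MyCustomBus (a hypothetical name), the event definition gains one property:

      Events:
        Trigger:
          Type: CloudWatchEvent
          Properties:
            # Omit EventBusName to use the default bus
            EventBusName: MyCustomBus
            Pattern:
              source:
                - custom.myATMapp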

In the second approach to defining an EventBridge configuration in SAM, you can separate the resources more clearly in the template. First, you define the Lambda function:

  atmConsumerCase1Fn:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: atmConsumer/
      Handler: handler.case1Handler
      Runtime: nodejs12.x

Next, the rule is defined using an AWS::Events::Rule resource. The properties define the event pattern as before, but can also specify targets. Whereas the first method can only create a single, implied target (the parent Lambda function), you can explicitly define multiple targets using this syntax:

  EventRuleCase1: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "Approved transactions"
      EventPattern: 
        source: 
          - "custom.myATMapp"
        detail-type:
          - transaction   
        detail: 
          result: 
            - "approved"
      State: "ENABLED"
      Targets: 
        - 
          Arn: 
            Fn::GetAtt: 
              - "atmConsumerCase1Fn"
              - "Arn"
          Id: "atmConsumerTarget1"

Finally, there is an AWS::Lambda::Permission resource that grants permission to EventBridge to invoke the target:

  PermissionForEventsToInvokeLambda: 
    Type: AWS::Lambda::Permission
    Properties: 
      FunctionName: 
        Ref: "atmConsumerCase1Fn"
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn: 
        Fn::GetAtt: 
          - "EventRuleCase1"
          - "Arn"

For simple integrations where one Lambda function is invoked by one rule, the first approach is recommended. If you have complex routing logic, or you are connecting to resources outside of your SAM template, the second method is the better choice.

Conclusion

This walkthrough shows how to build a simple serverless application that produces and consumes events, using the EventBridge event bus to manage the routing. Using event patterns in rules, you can centralize the routing logic in EventBridge, helping reduce code in your downstream consuming services.

SAM templates make it simple to create EventBridge rules and define Lambda functions as targets. I show two ways to use SAM statements to define IAM permissions implicitly or explicitly. This allows you to decouple the services within your serverless applications, and take advantage of the routing offered by EventBridge with minimal configuration in your SAM templates.

To learn more, visit the Amazon EventBridge documentation.

 

Application analytics pipeline with Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/application-analytics-pipeline-with-amazon-eventbridge/

This post is courtesy of Rajdeep Tarat, Solutions Architect and Venugopal Pai, Solutions Architect

Customers across industry verticals collect, analyze, and derive insights from end-user application analytics using solutions such as Google Analytics and MixPanel. While these solutions provide built-in dashboards for marketing analytics, it can be difficult to reuse the raw event data.

Setting up a pipeline to move the raw event data into AWS opens up possibilities for various rule-based, statistical, and machine learning algorithms to derive deep insights about end-user behavior. Additionally, the raw event data can be enriched with other transactional data points available within the customer’s AWS environment.

This post uses the Segment Partner integration in Amazon EventBridge to pipe the data into your AWS environment. Segment allows you to collect, unify, and connect end-user application analytics into AWS using Amazon EventBridge as a destination.

Segment already supports direct, optimized connections to many AWS services such as Amazon Redshift, Amazon Personalize, Amazon Kinesis, Amazon Kinesis Data Firehose, AWS Lambda, and Amazon S3. The EventBridge destination is a good choice for customers who want the flexibility and centralization that EventBridge offers.

EventBridge makes it easy to build scalable event-driven application pipelines by handling event ingestion, delivery, security, authorization, and error handling for you. The architecture of this pipeline is shown below:

Segment architecture

In the diagram, end-user applications send the data into Segment, which is routed to each of the configured destinations (for example, EventBridge). Once the data reaches EventBridge, it is again routed to multiple targets. With this approach you can continue using existing solutions supported by Segment in parallel to the Amazon EventBridge destination.

This architecture makes the pipeline highly extensible and modular. Firstly, you can configure multiple Segment destinations to fan out the event data into other existing solutions in parallel to EventBridge. Marketing teams can continue to use their existing tools without any disruptions while the data is seamlessly pumped into AWS. Within the AWS Cloud, EventBridge can again route the event data to up to five targets per rule.

The following section provides a walkthrough of setting up the Segment integration with EventBridge, and configuring two targets within the AWS Cloud.

  • The first target uses an Amazon Kinesis Data Firehose to deliver the data to an S3 bucket. From the S3 bucket, multiple AWS services can use the data (learn more about using S3 as a data lake).
  • The second target posts the event data to an SNS topic. From here, the data can be consumed by subscribers for the topic.

Walkthrough

To set up the pipeline, you must configure the Segment partner integration in EventBridge, and then set up the targets where analytics data is sent.

Amazon EventBridge – Segment partner integration:

  1. From the Amazon EventBridge console, navigate to the Partner Event Sources > Segment setup page. Copy your AWS Account ID from here.
    Segment setup
  2. On the Segment destination setup page, use the Amazon EventBridge integration. Enter the AWS Account ID and select a Region (learn more about setting up a Segment destination).
    EventBridge settings

Create the event bus:

  1. After linking the Segment Destination with the AWS Account ID, fire a test event from the Event Tester in the Segment Dashboard. This creates a Partner Event Source and an event bus with the same name.
    Event Tester
  2. After the first test event is fired, the Partner Event Source and the corresponding event bus are created in the EventBridge console.
    Partner event sources

Create rules:

  1. A rule watches for incoming events and routes them to specific targets that are configured. Start by creating a new rule and entering a name.
    Creating rules
  2. For Event Pattern, select the Predefined pattern by Service, and select Service Partners > Segment.
    Define event pattern
  3. Under Select event bus, select the Custom or partner event bus, and the name of the event bus created.
    Select event bus

Configuring multiple targets for the event bus:

  1. For Kinesis streams, select Kinesis stream from the Target dropdown, and the name of the stream. For more details on creating a Kinesis data stream, read this documentation.
    Select targets
  2. For SNS topic, choose Add Target and repeat the same steps to add an SNS topic instead. For more details on creating an SNS topic, read this documentation.
    SNS as target
  3. You can optionally tag the resource, then choose Create.

The pipeline is ready to send data to the targets configured via the event bus. You can now send test events from the Segment dashboard, and monitor the results through Kinesis Data Firehose or by setting up subscribers for the SNS topic.

Conclusion

This post shows how customers can capture end-user application analytics using the partner solution Segment in real time, and ingest data into Amazon EventBridge. The data routing is made extensible using multiple Segment destinations (for third-party solutions), and using multiple rules in EventBridge (for multiple destinations within the AWS Cloud).

To learn more about Amazon EventBridge integrations, read the EventBridge documentation.

Reducing custom code by using advanced rules in Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/reducing-custom-code-by-using-advanced-rules-in-amazon-eventbridge/

Amazon EventBridge allows you to route events between AWS services, integrated software as a service (SaaS) applications, and your own applications. Event producers publish events onto an event bus, which uses rules to determine where to send those events. The rules can specify one or more targets, which can be other AWS services or Lambda functions. This model makes it easy to develop scalable, distributed serverless applications by handling event routing and filtering.

EventBridge Content Filtering

EventBridge recently introduced additional content filtering functionality, which creates new possibilities for building sophisticated rules. This blog post explores how to use event patterns to build rules that make this routing process more powerful without needing custom code. I show how this could work with a sample ATM banking application integrating into an AWS service.

Events, rules, and filtering

In EventBridge, an event is simply a JSON structure. It contains some top-level envelope fields, such as the source, event, and timestamp, followed by a detail field containing the body of the event. Events generated from AWS services always contain a number of descriptive fields and are identifiable by the source attribute prefix “aws”.

You can also generate events from your own applications. EventBridge requires specific envelope fields, but otherwise you are free to add additional attributes as needed. A typical event structure for a custom application looks like this:

{
  "Source": string,
  "EventBusName": string,
  "DetailType": string,
  "Detail": string
}

If your application uses nested attributes, you must convert the Detail attribute into a string. In programming languages such as Node.js, you can do this using JSON.stringify to send an event, and JSON.parse when receiving it. For example, for a banking application where an ATM application sends events to EventBridge, a cash withdrawal event may look like this:

{ 
  "Source": "custom.myATMapp",
  "EventBusName": "default",
  "DetailType": "transaction",
  "Detail": "{\"action\":\"withdrawal\",\"amount\":300}"
}
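
In Node.js, the escaped Detail string above is produced with JSON.stringify, for example:

const event = {
  Source: 'custom.myATMapp',
  EventBusName: 'default',
  DetailType: 'transaction',
  // Nested attributes must be serialized into a string
  Detail: JSON.stringify({ action: 'withdrawal', amount: 300 })
}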

EventBridge rules use event patterns that are JSON structures. These match against the attributes in the events. In the rules, you only specify the fields where you want to apply filtering logic.

To see all events for a single application using an event bus, you can filter by source. Any incoming event with this source matches regardless of the content of other fields. In the ATM application example, a rule that accepts all events looks like this:

{ 
  "source": [ "custom.myATMapp" ]
}

EventBridge examines the incoming event and compares it against this rule. The rule specifies a source value of custom.myATMapp and, as this exists in the event, the pattern matches. It then routes the event to the rule’s targets:

EventBridge rules

The example above shows a static, exact match pattern – the attribute is either present, or it’s not. There are now additional operators available for dynamic matching based on specific comparison conditions. This provides functionality that’s similar to what you use in a SQL where clause for filtering records in a database.

Here is a summary of all the comparison operators available in EventBridge:

Comparison        Example                                               Rule syntax
Null              UserID is null                                        "UserID": [ null ]
Empty             LastName is empty                                     "LastName": [ "" ]
Equals            Name is "Alice"                                       "Name": [ "Alice" ]
And               Location is "New York" and Day is "Monday"            "Location": [ "New York" ], "Day": [ "Monday" ]
Or                PaymentType is "Credit" or "Debit"                    "PaymentType": [ "Credit", "Debit" ]
Not               Weather is anything but "Raining"                     "Weather": [ { "anything-but": [ "Raining" ] } ]
Numeric (equals)  Price is 100                                          "Price": [ { "numeric": [ "=", 100 ] } ]
Numeric (range)   Price is more than 10, and less than or equal to 20   "Price": [ { "numeric": [ ">", 10, "<=", 20 ] } ]
Exists            ProductName exists                                    "ProductName": [ { "exists": true } ]
Does not exist    ProductName does not exist                            "ProductName": [ { "exists": false } ]
Begins with       Region is in the US                                   "Region": [ { "prefix": "us-" } ]

Filtering events in a custom application

In this example, a bank runs software on a network of ATMs that forwards transactional information to EventBridge. This software sends all events to EventBridge, but downstream systems only want to receive a subset of ATM events:

ATM example application

The events from the ATMs have the following structure:

      {
        "Source": "custom.myATMapp",
        "EventBusName": "default",
        "DetailType": "transaction",
        "Time": "Wed Jan 29 2020 08:03:18 GMT-0500",
        "Detail":{
          "action": "withdrawal",
          "location": "NY-NYC-001",
          "amount": 300,
          "result": "approved",
          "transactionId": "123456",
          "cardPresent": true,
          "partnerBank": "Example Bank",
          "remainingFunds": 722.34
        }
      }

The downstream services can use the event patterns in EventBridge rules to ensure that they only receive specific events.

1. Transactions where the amount is over $300

The following event pattern filters for ATM transactions over $300.

{
  "source": [ "custom.myATMapp" ],
  "detail-type": [ "transaction" ],
  "detail": {
    "amount": [ { "numeric": [ ">", 300 ] } ]
  }
}

2. All ATMs in New York City

The ATM location attribute uses the format state-city-id, so NY-NYC-001 indicates that the machine is located in New York City in New York state. To filter events from only ATMs in the New York City area, I use a prefix in the filter:

{
  "source": [ "custom.myATMapp" ],
  "detail-type": [ "transaction" ],
  "detail": {
    "location": [ { "prefix": "NY-NYC-" } ]
  }
}

3. ATM customers using a third-party bank account

To filter for transactions that show a partnerBank attribute, the following event pattern checks for the existence of this attribute:

{
  "source": [ "custom.myATMapp" ],
  "detail-type": [ "transaction" ],
  "detail": {
    "partnerBank": [ { "exists": true } ]
  }
}

4. Combined filter

I can combine filters in a single event pattern to support more complex use-cases. For example, this pattern filters on approved transactions where no partnerBank attribute exists, reported from any ATM with a location other than NY-NYC-002:

{
  "source": [ "custom.myATMapp" ],
  "detail-type": [ "transaction" ],
  "detail": {
    "result": [ "approved" ],
    "partnerBank": [ { "exists": false } ],
    "location": [ { "anything-but": "NY-NYC-002" }]
  }
}

In each of these cases, EventBridge matches incoming events against the event patterns in these rules. If there is no match, it does not route the event. This eliminates the custom code that would otherwise be needed in each consumer to filter incoming events and exit early when they are not relevant.
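
As an illustration, this is the kind of guard code that an event pattern makes unnecessary in a consumer (a sketch, not code from the sample application):

exports.handler = async (event) => {
  // Without a filtering rule, the function must screen events itself
  if (event.detail.result !== 'approved' || event.detail.partnerBank) {
    return // not relevant to this consumer, so exit early
  }
  // ... process the approved, non-partner transaction
}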

Filtering AWS events to create a custom S3-to-Lambda integration

EventBridge uses a variety of AWS services as native event sources. For other AWS services, such as Amazon S3, it consumes events via AWS CloudTrail. You must first enable CloudTrail logging for the service you want to use with EventBridge. Once enabled, you can filter on any of the attributes available in an AWS event. This allows you to create dynamic, flexible integrations in your event-driven applications.

The standard S3-to-Lambda trigger allows developers to subscribe a Lambda function to an event on a single bucket. Although these events can filter on prefixes and suffixes of object keys in S3, you cannot use multiple configurations that overlap. Beyond the prefix and suffix of the key name, you cannot filter further on any other attributes of the event before invoking the Lambda function. To examine the S3 event further, you must do this within the code in the function itself.

Using EventBridge, you can configure a rule between one or more S3 buckets, and one or more Lambda functions, based upon any of the attributes available. This enables you to create much more granular filters for routing events to downstream consumers. Using a declarative approach results in greater flexibility and less custom code. In this section, I show four use-cases where this could be useful.

S3 to EventBridge

(a) Invoking a single Lambda function from events in multiple buckets

This example uses multiple buckets with a common prefix in the bucket name (for example, buckets with the names “myApp-images”, “myApp-uploads”, and “myApp-archive”). You can use all these buckets as an event source to trigger the same Lambda function. This event pattern matches all PutObject events in those buckets:

{
  "source": [ "aws.s3" ],
  "detail-type": [ "AWS API Call via CloudTrail" ],
  "detail": {
    "eventSource": [ "s3.amazonaws.com" ],
    "eventName": [ "PutObject" ],
    "requestParameters": {
      "bucketName": [ { "prefix": "myApp-" } ]
    }
  }
}

(b) Invoking multiple consumers as targets

EventBridge allows up to five targets per rule, so you can specify up to five separate Lambda functions to receive the event. All five functions are invoked in parallel when the event pattern matches. To use this, add the targets in the rule – no changes to the event pattern are required.

If you need more than five targets, use Amazon Simple Notification Service (SNS). You can define an SNS topic as the EventBridge rule target, and then fan out from SNS to a much larger number of subscribers. In either case, the event pattern stays the same – for example, this pattern matches GetObject calls on a single bucket, filtered by user agent, regardless of how many targets the rule has:

{
  "source": [ "aws.s3" ],
  "detail-type": [ "AWS API Call via CloudTrail" ],
  "detail": {
    "eventSource": [ "s3.amazonaws.com" ],
    "eventName": [ "GetObject" ],
    "userAgent": [ "userAgent" ],

    "requestParameters": {
      "bucketName": [ "mybucket" ]
    }
  }
}

Conclusion

The new content filtering syntax in EventBridge enables precise filtering of events using comparison operators and ranges of values. This allows you to filter declaratively at the event bus rather than filtering downstream using custom code. For custom applications, like the ATM example, it enables you to build precise rules for specific use-cases, reducing the number of calls to targets.

This approach enables you to route events more precisely based upon any of the attributes reported in an event. This makes it easier to handle complex routing at the EventBridge level and reduces the need for custom code across your application.

To learn more about content filtering, see the Amazon EventBridge documentation.

ICYMI: Serverless Q4 2019

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/icymi-serverless-q4-2019/

Welcome to the eighth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

The three months comprising the fourth quarter of 2019

AWS re:Invent

AWS re:Invent 2019

re:Invent 2019 dominated the fourth quarter at AWS. The serverless team presented a number of talks, workshops, and builder sessions to help customers increase their skills and deliver value more rapidly to their own customers.

Serverless talks from re:Invent 2019

Chris Munns presenting 'Building microservices with AWS Lambda' at re:Invent 2019

We presented dozens of sessions showing how customers can improve their architecture and agility with serverless. Here are some of the most popular.

Videos

Decks

You can also find decks for many of the serverless presentations and other re:Invent presentations on the AWS Events Content page.

AWS Lambda

For developers needing greater control over performance of their serverless applications at any scale, AWS Lambda announced Provisioned Concurrency at re:Invent. This feature enables Lambda functions to execute with consistent start-up latency, making them ideal for building latency-sensitive applications.

As shown in the graph below, provisioned concurrency reduces tail latency, directly impacting response times and providing a more responsive end-user experience.

Graph showing performance enhancements with AWS Lambda Provisioned Concurrency

Lambda rolled out enhanced VPC networking to 14 additional Regions around the world. This change brings dramatic improvements to startup performance for Lambda functions running in VPCs due to more efficient usage of elastic network interfaces.

Illustration of AWS Lambda VPC to VPC NAT

New VPC to VPC NAT for Lambda functions

Lambda now supports three additional runtimes: Node.js 12, Java 11, and Python 3.8. Each of these new runtimes has new version-specific features and benefits, which are covered in the linked release posts. Like the Node.js 10 runtime, these new runtimes are all based on an Amazon Linux 2 execution environment.

Lambda released a number of controls for both stream and async-based invocations:

  • You can now configure error handling for Lambda functions consuming events from Amazon Kinesis Data Streams or Amazon DynamoDB Streams. It’s now possible to limit the retry count, limit the age of records being retried, configure a failure destination, or split a batch to isolate a problem record. These capabilities help you deal with potential “poison pill” records that would previously cause streams to pause in processing.
  • For asynchronous Lambda invocations, you can now set the maximum event age and retry attempts on the event. If either configured condition is met, the event can be routed to a dead letter queue (DLQ) or a Lambda destination, or it can be discarded (see the sketch after this list).
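
For the asynchronous invocation controls in the second bullet, configuration with the AWS CLI might look like this sketch (the function name is a placeholder):

aws lambda put-function-event-invoke-config \
  --function-name my-function \
  --maximum-event-age-in-seconds 3600 \
  --maximum-retry-attempts 1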

AWS Lambda Destinations is a new feature that allows developers to designate an asynchronous target for Lambda function invocation results. You can set separate destinations for success and failure. This unlocks new patterns for distributed event-based applications and can replace custom code previously used to manage routing results.

Illustration depicting AWS Lambda Destinations with success and failure configurations

Lambda Destinations

Lambda also now supports setting a Parallelization Factor, which allows you to set multiple Lambda invocations per shard for Kinesis Data Streams and DynamoDB Streams. This enables faster processing without the need to increase your shard count, while still guaranteeing the order of records processed.

Illustration of multiple AWS Lambda invocations per Kinesis Data Streams shard

Lambda Parallelization Factor diagram

Lambda introduced Amazon SQS FIFO queues as an event source. “First in, first out” (FIFO) queues guarantee the order of record processing, unlike standard queues. FIFO queues support message batching via a MessageGroupID attribute, and allow parallel Lambda consumers of a single FIFO queue, enabling high-throughput record processing by Lambda.
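
As a sketch of the producer side (assuming the AWS SDK for JavaScript v2 and a hypothetical queue URL), the MessageGroupId attribute defines the scope within which ordering is guaranteed:

const AWS = require('aws-sdk')
const sqs = new AWS.SQS()

await sqs.sendMessage({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo', // placeholder
  MessageBody: JSON.stringify({ orderId: 123 }),
  MessageGroupId: 'customer-42',       // ordering is guaranteed within a message group
  MessageDeduplicationId: 'order-123'  // required unless content-based deduplication is enabled
}).promise()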

Lambda now supports Environment Variables in the AWS China (Beijing) Region and the AWS China (Ningxia) Region.

You can now view percentile statistics for the duration metric of your Lambda functions. Percentile statistics show the relative standing of a value in a dataset, and are useful when applied to metrics that exhibit large variances. They can help you understand the distribution of a metric, discover outliers, and find hard-to-spot situations that affect customer experience for a subset of your users.

Amazon API Gateway

Screen capture of creating an Amazon API Gateway HTTP API in the AWS Management Console

Amazon API Gateway announced the preview of HTTP APIs. In addition to significant performance improvements, most customers see an average cost savings of 70% when compared with API Gateway REST APIs. With HTTP APIs, you can create an API in four simple steps. Once the API is created, additional configuration for CORS and JWT authorizers can be added.

AWS SAM CLI

Screen capture of the new 'sam deploy' process in a terminal window

The AWS SAM CLI team simplified the bucket management and deployment process in the SAM CLI. You no longer need to manage a bucket for deployment artifacts – SAM CLI handles this for you. The deployment process has also been streamlined from multiple flagged commands to a single command, sam deploy.

AWS Step Functions

One powerful feature of AWS Step Functions is its ability to integrate directly with AWS services without you needing to write complicated application code. In Q4, Step Functions expanded its integration with Amazon SageMaker to simplify machine learning workflows. Step Functions also added a new integration with Amazon EMR, making EMR big data processing workflows faster to build and easier to monitor.

Screen capture of an AWS Step Functions step with Amazon EMR

Step Functions step with EMR

Step Functions now provides the ability to track state transition usage by integrating with AWS Budgets, allowing you to monitor trends and react to usage on your AWS account.

You can now view CloudWatch Metrics for Step Functions at a one-minute frequency. This makes it easier to set up detailed monitoring for your workflows. You can use one-minute metrics to set up CloudWatch Alarms based on your Step Functions API usage, Lambda functions, service integrations, and execution details.

Step Functions now supports higher throughput workflows, making it easier to coordinate applications with high event rates. This increases the limits to 1,500 state transitions per second and a default start rate of 300 state machine executions per second in US East (N. Virginia), US West (Oregon), and Europe (Ireland). Click the above link to learn more about the limit increases in other Regions.

Screen capture of choosing Express Workflows in the AWS Management Console

Step Functions released AWS Step Functions Express Workflows. With the ability to support event rates greater than 100,000 per second, this feature is designed for high-performance workloads at a reduced cost.

Amazon EventBridge

Illustration of the Amazon EventBridge schema registry and discovery service

Amazon EventBridge announced the preview of the Amazon EventBridge schema registry and discovery service. This service allows developers to automate the discovery and cataloging of event schemas for use in their applications. Additionally, once a schema is stored in the registry, you can generate and download a code binding that represents the schema as an object in your code.

Amazon SNS

Amazon SNS now supports the use of dead letter queues (DLQ) to help capture unhandled events. By enabling a DLQ, you can capture events that are not processed and re-submit them, or analyze them to locate processing issues.

Amazon CloudWatch

Amazon CloudWatch announced Amazon CloudWatch ServiceLens to provide a “single pane of glass” to observe health, performance, and availability of your application.

Screenshot of Amazon CloudWatch ServiceLens in the AWS Management Console

CloudWatch ServiceLens

CloudWatch also announced a preview of a capability called Synthetics. CloudWatch Synthetics allows you to test your application endpoints and URLs using configurable scripts that mimic what a real customer would do. This enables the outside-in view of your customers’ experiences, and your service’s availability from their point of view.

CloudWatch introduced Embedded Metric Format, which helps you ingest complex high-cardinality application data as logs and easily generate actionable metrics. You can publish these metrics from your Lambda function by using the PutLogEvents API or using an open source library for Node.js or Python applications.
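
As a minimal sketch of the format (namespace, dimension, and values here are illustrative), a log line containing JSON like this causes CloudWatch to extract a metric:

{
  "_aws": {
    "Timestamp": 1575000000000,
    "CloudWatchMetrics": [{
      "Namespace": "MyApp",
      "Dimensions": [["Service"]],
      "Metrics": [{ "Name": "Latency", "Unit": "Milliseconds" }]
    }]
  },
  "Service": "checkout",
  "Latency": 42
}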

Finally, CloudWatch announced a preview of Contributor Insights, a capability to identify who or what is impacting your system or application performance by identifying outliers or patterns in log data.

AWS X-Ray

AWS X-Ray announced trace maps, which enable you to map the end-to-end path of a single request. Identifiers show issues and how they affect other services in the request’s path. These can help you to identify and isolate service points that are causing degradation or failures.

X-Ray also announced support for Amazon CloudWatch Synthetics, currently in preview. With this support, X-Ray traces canary scripts throughout the application, providing metrics on performance and application issues.

Screen capture of AWS X-Ray Service map in the AWS Management Console

X-Ray Service map with CloudWatch Synthetics

Amazon DynamoDB

Amazon DynamoDB announced support for customer-managed customer master keys (CMKs) to encrypt data in DynamoDB. This allows you to bring your own key (BYOK), giving you full control over how you encrypt and manage the security of your DynamoDB data.

It is now possible to add global replicas to existing DynamoDB tables to provide enhanced availability across the globe.

Another new DynamoDB capability to identify frequently accessed keys and database traffic trends is currently in preview. With this, you can now more easily identify “hot keys” and understand usage of your DynamoDB tables.

Screen capture of Amazon CloudWatch Contributor Insights for DynamoDB in the AWS Management Console

CloudWatch Contributor Insights for DynamoDB

DynamoDB also released adaptive capacity. Adaptive capacity helps you handle imbalanced workloads by automatically isolating frequently accessed items and shifting data across partitions to rebalance them. This helps reduce cost by enabling you to provision throughput for a more balanced workload instead of over provisioning for uneven data access patterns.

Amazon RDS

Amazon Relational Database Services (RDS) announced a preview of Amazon RDS Proxy to help developers manage RDS connection strings for serverless applications.

Illustration of Amazon RDS Proxy

The RDS Proxy maintains a pool of established connections to your RDS database instances. This pool enables you to support a large number of application connections so your application can scale without compromising performance. It also increases security by enabling IAM authentication for database access and enabling you to centrally manage database credentials using AWS Secrets Manager.

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) now offers Verified Author badges. These badges enable consumers to quickly and reliably know who you are. The badge appears next to your name in the SAR and links to your GitHub profile.

Screen capture of SAR Verified developer badge in the AWS Management Console

SAR Verified developer badges

AWS Developer Tools

AWS CodeCommit launched the ability for you to enforce rule workflows for pull requests, making it easier to ensure that code has passed through specific rule requirements. You can now create an approval rule specifically for a pull request, or create approval rule templates to be applied to all future pull requests in a repository.

AWS CodeBuild added beta support for test reporting. With test reporting, you can now view the detailed results, trends, and history for tests executed on CodeBuild for any framework that supports the JUnit XML or Cucumber JSON test format.

Screen capture of AWS CodeBuild

CodeBuild test trends in the AWS Management Console

Amazon CodeGuru

AWS announced a preview of Amazon CodeGuru at re:Invent 2019. CodeGuru is a machine learning based service that makes code reviews more effective and aids developers in writing code that is more secure, performant, and consistent.

AWS Amplify and AWS AppSync

AWS Amplify added iOS and Android as supported platforms. Now developers can build iOS and Android applications using the Amplify Framework with the same category-based programming model that they use for JavaScript apps.

Screen capture of 'amplify init' for an iOS application in a terminal window

The Amplify team has also improved offline data access and synchronization by announcing Amplify DataStore. Developers can now create applications that allow users to continue to access and modify data, without an internet connection. Upon connection, the data synchronizes transparently with the cloud.

For a summary of Amplify and AppSync announcements before re:Invent, read: “A round up of the recent pre-re:Invent 2019 AWS Amplify Launches”.

Illustration of AWS AppSync integrations with other AWS services

Q4 serverless content

Blog posts

October

November

December

Tech talks

We hold several AWS Online Tech Talks covering serverless topics throughout the year. These are listed in the Serverless section of the AWS Online Tech Talks page.

Here are the ones from Q4:

Twitch

October

There are also a number of other helpful video series covering Serverless available on the AWS Twitch Channel.

AWS Serverless Heroes

We are excited to welcome some new AWS Serverless Heroes to help grow the serverless community. We look forward to some amazing content to help you with your serverless journey.

AWS Serverless Application Repository (SAR) Apps

In this edition of ICYMI, we are introducing a section devoted to SAR apps written by the AWS Serverless Developer Advocacy team. You can run these applications and review their source code to learn more about serverless and to see examples of suggested practices.

Still looking for more?

The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. We’re also kicking off a fresh series of Tech Talks in 2020 with new content providing greater detail on everything new coming out of AWS for serverless application developers.

Throughout 2020, the AWS Serverless Developer Advocates are crossing the globe to tell you more about serverless, and to hear more about what you need. Follow this blog to keep up on new launches and announcements, best practices, and examples of serverless applications in action.

You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.

Chris Munns: @chrismunns
Eric Johnson: @edjgeek
James Beswick: @jbesw
Moheeb Zara: @virgilvox
Ben Smith: @benjamin_l_s
Rob Sutter: @rts_rob
Julian Wood: @julian_wood

Happy coding!

ICYMI: Serverless re:Invent re:Cap 2019

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/icymi-serverless-reinvent-recap-2019/

Thank you for attending re:Invent 2019

In the week before AWS re:Invent 2019, we wrote about a number of service and feature launches leading up to the biggest event of the year for us at AWS. These included new features for AWS Lambda, integrations for AWS Step Functions, and other exciting service and feature launches for related product areas. But this was just the warm-up – AWS re:Invent 2019 itself saw several new serverless and serverless-related announcements.

Here’s what’s new.

AWS Lambda

For developers needing greater control over performance of their serverless applications at any scale, AWS Lambda announced Provisioned Concurrency. This feature enables Lambda functions to execute with consistent start-up latency, making them ideal for building latency-sensitive applications.

AWS Step Functions

Express Workflows

AWS Step Functions released AWS Step Functions Express Workflows. With the ability to support event rates greater than 100,000 per second, this feature is designed for high performance workloads at a reduced cost.

Amazon EventBridge

EventBridge schema registry and discovery

Amazon EventBridge announced the preview of the Amazon EventBridge schema registry and discovery service. This service allows developers to automate the discovery and cataloging of event schemas for use in their applications. Additionally, once a schema is stored in the registry, you can generate and download a code binding that represents the schema as an object in your code.

Amazon API Gateway

HTTP API

Amazon API Gateway announced the preview of HTTP APIs. With HTTP APIs, most customers see an average cost saving of up to 70% compared with API Gateway REST APIs. In addition, you see significant performance improvements in the API Gateway service overhead. With HTTP APIs, you can create an API in four simple steps. Once the API is created, additional configuration for CORS and JWT authorizers can be added.

Databases

Amazon Relational Database Services (RDS) announced a preview of Amazon RDS Proxy to help developers manage RDS connection strings for serverless applications.

RDS Proxy

The RDS Proxy maintains a pool of established connections to your RDS database instances. This pool enables you to support a large number of application connections so your application can scale without compromising performance. It also increases security by enabling IAM authentication for database access and enabling you to centrally manage database credentials using AWS Secrets Manager.

AWS Amplify

Amplify platform choices

AWS Amplify has expanded their delivery platforms to include iOS and Android. Developers can now build iOS and Android applications using the Amplify Framework with the same category-based programming model that they use for JavaScript apps.

The Amplify team has also improved offline data access and synchronization by announcing Amplify DataStore. Developers can now create applications that allow users to continue to access and modify data, without an internet connection. Upon connection, the data synchronizes transparently with the cloud.

Amazon CodeGuru

Whether you are a team of one or an enterprise with thousands of developers, code review can be difficult. At re:Invent 2019, AWS announced a preview of Amazon CodeGuru, a machine learning based service to help make code reviews more effective and aid developers in writing code that is secure, performant, and consistent.

Serverless talks from re:Invent 2019

re:Invent presentation recordings

We presented dozens of sessions showing how customers can improve their architecture and agility with serverless. Here are some of the most popular.

Videos

Decks

You can also find decks for many of the serverless presentations and other re:Invent presentations on our AWS Events Content.

Conclusion

Prior to AWS re:Invent, AWS serverless had many service and feature launches, and the pace continued throughout re:Invent itself. As we head towards 2020, follow this blog to keep up on new launches and announcements, best practices, and examples of serverless applications in action.

Additionally, the AWS Serverless Developer Advocates will be crossing the globe to tell you more about serverless, and to hear more about what you need. You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.

Chris Munns: @chrismunns
Eric Johnson: @edjgeek
James Beswick: @jbesw
Moheeb Zara: @virgilvox
Ben Smith: @benjamin_l_s
Rob Sutter: @rts_rob
Julian Wood: @julian_wood

Happy coding!

Introducing Amazon EventBridge schema registry and discovery – In preview

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-amazon-eventbridge-schema-registry-and-discovery-in-preview/

Today, AWS announces the preview of Amazon EventBridge schema registry and discovery. These are new developer tool features, which allow you to automatically find events and their structure, or schema, and store them in a shared central location. This makes it faster and easier to build event-driven applications. You can access the registry and generate code bindings for schemas directly in popular Integrated Development Environments (IDEs), including JetBrains IntelliJ and PyCharm, Microsoft Visual Studio Code, as well as through the Amazon EventBridge console and APIs.

What is a schema?

A schema represents the structure of an event, and commonly includes information such as the title and type of each piece of data. For example, in a review on a product website, a schema might include fields for the reviewer’s name, user id, and review description, and that the name is a text string, and the user id is an integer. The event schema is important for developers as it shows what data is contained in the event, and allows them to write code based on that data.
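
As an illustrative sketch in JSON Schema notation (not the registry’s exact output), the review example might be described like this:

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "userId": { "type": "integer" },
    "description": { "type": "string" }
  }
}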

Event-driven architectures

Developers embracing event-driven architectures may use an event bus such as Amazon EventBridge. EventBridge allows application decoupling without needing to write point-to-point integrations between services. This decoupling increases developer independence, as they simply subscribe to the events they’re interested in, reducing dependencies on other teams to write integrations.

However, decoupling introduces a new set of challenges. Finding events and their schema is a manual process. Developers must coordinate with the team responsible for publishing an event, or look through documentation to find its schema, and then manually create an object for the event in order to use it in their code.

EventBridge schema registry solves these problems by introducing two capabilities, a schema registry and schema discovery.

Schema registry

A schema registry stores a collection of schemas. You can use the schema registry to search for, find, and track different schemas used and generated by your applications. Schemas for all AWS sources supported in EventBridge are automatically visible in your schema registry. SaaS partner and custom schemas can be generated and added to the registry using the schema discovery feature.

Schema discovery

Schema discovery automates the process of finding schemas and adding them to your registry. When schema discovery is enabled for an EventBridge event bus, the schema of each event sent to the bus is automatically added to the registry. If the schema of an event changes, schema discovery automatically creates a new version in the registry. Once a schema is added to the registry, you can generate a code binding for the schema, either in the EventBridge console or directly in your IDE.
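
Discovery can also be enabled programmatically through the schemas API; as a sketch with the AWS CLI (the event bus ARN is a placeholder):

aws schemas create-discoverer \
  --source-arn arn:aws:events:us-east-1:123456789012:event-bus/default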

Generally, you only enable schema discovery in your development environments (AWS Free Tier includes 5 million ingested events). Schemas of any new events you create are automatically added to your registry to use when developing your application. If you need to audit all of the events going through your event bus, you can enable discovery on your production event bus, and pay $0.10 per million events ingested for any usage outside of the Free Tier.

Code bindings

Once a schema is added to the registry, you can download a code binding. This allows you to represent the event as a strongly typed object in your code, and take advantage of IDE features such as validation and auto-complete. Code bindings are available for Java, Python, or TypeScript programming languages. You can download bindings from the AWS Management Console, or directly from your IDE with the AWS Toolkit plugin for IntelliJ and VS Code.

If you use the AWS Serverless Application Model (SAM), you can now use the interactive sam init command to generate a serverless application with a schema as a trigger. This automatically adds a class file for the schema to your code, and generates a handler function that serializes the trigger event into an object. This makes it quicker to build serverless event-driven applications.

Event Schemas

Viewing the schema registry

You can view the schema registry in the Amazon EventBridge console and download code bindings.

  1. Choose Go to schema registry and then choose Schemas in the left-side navigation bar. You can view and search for built-in AWS schemas, as well as your discovered and custom schemas within the registry.
  2. For example, searching for a schema for AWS Step Functions, there is an existing schema for aws.states.StepFunctionsExecutionStatusChange. The schema details, as well as the JSON representation, are visible.

    json schema

  3. To download code bindings to use in your IDE, select a language option, download the .zip file, and manually import the schema into your IDE.

Automatically discovering schemas

Schema registry’s discovery capabilities allow you to generate schemas for your own events. EventBridge can ingest events from a number of SaaS vendors including, for example, MongoDB.

In this example, I use an online shop, which stores product reviews in a MongoDB database. A MongoDB trigger is configured to send all new database entries to the EventBridge MongoDB partner event bus. A Lambda function is then triggered for all new events and calls Amazon Comprehend to do sentiment analysis on the new reviews. Any negative reviews generate a service desk ticket for further investigation.

  1. I have previously set up MongoDB Atlas to connect as a SaaS partner to EventBridge, and configured a stitch trigger to send events to EventBridge for any updates to the database.
  2. To discover the schema, in the EventBridge console, I navigate to Events and Partner event sources.
  3. I select the MongoDB event source and choose Associate with event bus.
  4. I choose Associate.
  5. I navigate to Event buses, choose the MongoDB custom event bus, and choose Start discovery. An event bus-managed rule is created automatically.

    Custom event buses

  6. I write a new product review, creating a new database record, which triggers a new event on the event bus. You can simulate this using the AWS CLI, replacing the EventBusName with your partner event bus:

    aws events put-events --entries '[{"Source": "mystore","DetailType": "Review Created","EventBusName":"aws.partner/mongodb.com/stitch.trigger/5ddf5c9476ff0ff8b0916763","Detail": "{\"star_rating\": 5,  \"description\": \"The size and length fit me well and the design is fun. I felt very secure wearing this tshirt. \",  \"helpful_count\": 34,  \"unhelpful_count\": 1,  \"pros\": [\"lightweight\",\"fits well\"  ],  \"cons\": [],  \"customer\": {\"name\": \"Julian Wood\",\"email\": \"[email protected]\",\"phone\": \"+1 604 123 1234\"  },  \"product\": {\"product_id\": 788032119674292922,\"title\": \"Encrypt Everything Tshirt\",\"sku\": \"encrypt-everything-tshirt\",\"inventory_id\": 23190823132,\"size\": \"medium\",\"taxable\": true,\"image_url\": \"https://img.mystore.test/encrypt-tshirt.jpg\",\"weight\": 200.0}}"}]'

  7. I navigate to Schema registry and schemas, choose Discovered schema registry, and can see the discovered schema from the new review event.
  8. Choosing the schema name, I can view the generated schema. Here is an excerpt.

Schema discovery has automatically discovered the MongoDB schema from events passing through the event bus and added it to the registry.

Downloading code bindings directly into an IDE

Schema code bindings can be downloaded directly from the AWS Management Console as well as within an IDE, for example JetBrains IntelliJ.

  1. I have IntelliJ installed.
  2. I launch IntelliJ, navigate to File | Settings, and choose Plugins.
  3. On the Marketplace tab, in Search plugins, I enter AWS. When AWS Toolkit by Amazon Web Services is displayed, I select it and choose Install. Minimum version 1.9 is required.

    AWS Toolkit

  4. I accept the third-party Plugins Privacy Note and choose Restart IDE.
  5. Once the IDE has restarted, I navigate to AWS Explorer at the bottom-left of the IDE to view available schemas.
  6. I choose Configure AWS connection.

    Configure AWS Connection

  7. I choose Credentials; my local AWS account profiles are automatically loaded from the AWS credentials file.
  8. I choose Schemas\discovered-schemas, right-click, and then choose View Schema to see the schema.

    View Schema

  9. I can then use the code binding in a project. I navigate to File | New Project.
  10. A number of languages and frameworks are available to build an application. I want to build a serverless application, so I choose AWS and AWS Serverless Application, which builds an AWS Serverless Application Model application project. I choose Next.

    Create SAM App

  11. I enter a Project name and file location, choosing java8 as the Runtime.
  12. I choose AWS SAM EventBridge App from Scratch (for any Event trigger from a Schema Registry) for Gradle.
  13. I select Credentials and Region.
  14. Under Event Schema, I browse the available schemas. At the end of the list, I find the discovered MongoDB schema. I select it and choose Finish.

SAM Project

Once the project is created, I can see the event schema imported into the SAM project.

I navigate to and open mongodb-app\HelloWorldFunction\src\main\java\helloworld\App.java to see the Lambda handler created with the event schema details. I can also use full IDE auto-complete with the schema.

Code Completion

I can then add the Amazon Comprehend sentiment analysis code, which uses the schema, to the Lambda function.
The SAM template.yaml in the project root directory specifies the Lambda function triggered by an EventBridge event (previously called CloudWatchEvent) using the discovered MongoDB schema.
I change the EventBusName to the correct partner event bus.

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Environment: 
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: CloudWatchEvent 
          Properties:
            EventBusName: aws.partner/mongodb.com/stitch.trigger/5ddf5c9476ff0ff8b0916763
            Pattern:
              source:
                - aws.partner/mongodb.com.test/stitch.trigger/5ddf5c9476ff0ff8b0916
              detail-type:
                - MongoDB Database Trigger for my_store.reviews

I can then deploy the SAM application.
I navigate to File | Deploy SAM Application. I choose Create Stack, enter a name, select an S3 bucket, and choose Deploy.

Deploy SAM

The SAM application is deployed. I can also automate the process using the AWS SAM CLI. Version 0.35.0 includes the schema features in the new interactive sam init command. This uses an AWS Quick Start Template to generate a handler function and add a class file for the schema to my code.
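As a sketch, the equivalent non-interactive call might look like the following; the app template name here is an assumption, so check sam init --help for the exact options available in your CLI version:

sam init --runtime java8 --dependency-manager gradle --app-template eventBridge-schema-app --name mongodb-app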

Pricing

Usage of the schema registry is free.

Schema discovery includes a free tier of 5M ingested events per month. In use cases where discovery is used in your development environment, your usage should stay within the free tier.

For additional usage outside of the free tier:

  • $0.10 per million events ingested for discovery.

All ingested events are measured in 8 KB chunks. For example, a 20 KB event is measured as three chunks, so one million such events count as three million ingested events ($0.30).

Availability

The EventBridge schema registry preview is available in the US East (Ohio), US West (Oregon), US East (N. Virginia), Asia Pacific (Tokyo), and Europe (Ireland) Regions. For details on EventBridge availability, please see the AWS Region table (https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/).

Conclusion

Amazon EventBridge schema registry and discovery helps developers take advantage of new schema capabilities to build exciting event-driven applications. All AWS sources supported in EventBridge are visible in your schema registry. SaaS partner and custom schemas can be added automatically using schema discovery. Code bindings, which allow you to represent the event as an object in your code, can be downloaded from the console or directly within IDEs.

Happy coding with event schemas!

Introducing AWS Lambda Destinations

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-aws-lambda-destinations/

Today we’re announcing AWS Lambda Destinations for asynchronous invocations. This is a feature that provides visibility into Lambda function invocations and routes the execution results to AWS services, simplifying event-driven applications and reducing code complexity.

Asynchronous invocations

When a function is invoked asynchronously, Lambda sends the event to an internal queue. A separate process reads events from the queue and executes your Lambda function. When the event is added to the queue, Lambda previously only returned a 2xx status code to confirm that the queue had received the event. There was no additional information to confirm whether the event had been processed successfully.

A common event-driven microservices architectural pattern is to use a queue or message bus for communication. This helps with resilience and scalability. Lambda asynchronous invocations can put an event or message on Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), or Amazon EventBridge for further processing. Previously, you needed to write the SQS/SNS/EventBridge handling code within your Lambda function and manage retries and failures yourself.

With Destinations, you can route asynchronous function results as an execution record to a destination resource without writing additional code. An execution record contains details about the request and response in JSON format including version, timestamp, request context, request payload, response context, and response payload. For each execution status such as Success or Failure you can choose one of four destinations: another Lambda function, SNS, SQS, or EventBridge. Lambda can also be configured to route different execution results to different destinations.

Asynchronous Function Execution Result

Success

When a function is invoked successfully, Lambda routes the record to the destination resource for every successful invocation. You can use this to monitor the health of your serverless applications via execution status or build workflows based on the invocation result.

You no longer need to chain long-running Lambda functions together synchronously. Previously you needed to complete the entire workflow within the Lambda 15-minute function timeout, pay for idle time, and wait for a response. Destinations allows you to return a Success response to the calling function and then handle the remaining chaining functions asynchronously.

Failure

Alongside today’s announcement of Maximum Event Age and Maximum Retry Attempt for asynchronous invocations, Destinations gives you the ability to handle the Failure of function invocations along with their Success. When a function invocation fails, such as when retries are exhausted or the event age has been exceeded (hitting its TTL), Destinations routes the record to the destination resource for every failed invocation for further investigation or processing.

Dead Letter Queues (DLQ) have been available since 2016 and are a great way to handle asynchronous failure situations. Destinations provides more useful capabilities by passing additional function execution information, including code exception stack traces, to more destination services.

Destinations and DLQs can be used together at the same time, although Destinations should be considered the preferred solution. If you already have DLQs set up, existing functionality does not change and Destinations does not replace existing DLQ configurations. If both Destinations and DLQ are used for Failure notifications, function invoke errors are sent to both the DLQ and Destinations targets.

How to configure Destinations

Adding Destinations is a straightforward process. This walkthrough uses the AWS Management Console but you can also use the AWS CLI, AWS SAM, AWS CloudFormation, or language-specific SDKs for Lambda.

  1. Open the Lambda console Functions page. Choose an existing Lambda function, or create a new one. In this example, I create a new Lambda function. Choose Create Function.
  2. Enter a Function name, select Node.js 12.x for Runtime, and Choose or create an execution role. Ensure that your Lambda function execution role includes access to the destination resource.
    Basic information
  3. Choose Create function.
  4. Within the Function code pane, paste the following Lambda function code. The code generates a function execution result of either Success or Failure depending on a JSON input ("Success": true or "Success": false).
    // Lambda Destinations tester: "Success": true completes cleanly, "Success": false returns an error
    
    exports.handler = function(event, context, callback) {
        var event_received_at = new Date().toISOString();
        console.log('Event received at: ' + event_received_at);
        console.log('Received event:', JSON.stringify(event, null, 2));
    
        if (event.Success) {
            console.log("Success");
            context.callbackWaitsForEmptyEventLoop = false;
            callback(null);
        } else {
            console.log("Failure");
            context.callbackWaitsForEmptyEventLoop = false;
            callback(new Error("Failure from event, Success = false, I am failing!"), 'Destination Function Error Thrown');
        }
    };
    
  5. Choose Save.
  6. To configure Destinations, within the Designer pane, choose Add destination.
    Designer pane
  7. Select the Source as Asynchronous invocation. Select the Condition as On failure or On success, depending on your use case. In this example, I select On Success.
  8. Enter the Amazon Resource Name (ARN) for the Destination SQS queue, SNS topic, Lambda function, or EventBridge event bus. In this example, I use the ARN of an SNS topic I have already configured.
    Add destination
  9. Choose Save. The Destination is added to SNS for On Success.
    Designer
  10. Add another Destination for Failure to Lambda. Within the Designer pane, choose Add destination.
    Add destination
  11. Select the Source as Asynchronous invocation, the Condition as On failure, and enter a Destination Lambda function ARN, then choose Save.
    Enter a Destination Lambda function ARN, and choose Save
  12. The Destination is added to Lambda for On Failure.

Success testing

To test invoking the asynchronous Lambda function to generate a Success result, use the AWS CLI:

aws lambda invoke --function-name event-destinations --invocation-type Event --payload '{ "Success": true }' response.json

The Lambda function is invoked successfully with a response "StatusCode": 202.

And an SNS notification email is received, showing the invocation details with "condition":"Success" and the requestPayload.

{
	"version": "1.0",
	"timestamp": "2019-11-24T23:08:25.651Z",
	"requestContext": {
		"requestId": "c2a6f2ae-7dbb-4d22-8782-d0485c9877e2",
		"functionArn": "arn:aws:lambda:sa-east-1:123456789123:function:event-destinations:$LATEST",
		"condition": "Success",
		"approximateInvokeCount": 1
	},
	"requestPayload": {
		"Success": true
	},
	"responseContext": {
		"statusCode": 200,
		"executedVersion": "$LATEST"
	},
	"responsePayload": null
}

Failure testing

The Lambda function can be set to Failure by throwing an exception within the code. To test invoking the asynchronous Lambda function to generate a Failure result, use the AWS CLI:

aws lambda invoke --function-name event-destinations --invocation-type Event --payload '{ "Success": false }' response.json

The Lambda function is executed and reports a successful invoke on the Lambda processing queue. If Lambda is not able to add the event to the queue, the error message appears in the command output.

However, due to the exception error within the code, the function invocation will fail. Destinations then routes the invoke failure to the configured destination Lambda function. You can see the failed function invocation information in the Amazon CloudWatch Logs for the Destination function including "condition": "RetriesExhausted", along with the requestPayload, errorMessage, and stackTrace.

2019-11-24T21:52:47.855Z	d123456-c0dd-4871-a123-a356cb1b3ba6	EVENT
{
    "version": "1.0",
    "timestamp": "2019-11-24T21:52:47.333Z",
    "requestContext": {
        "requestId": "8ea123e4-1db7-4aca-ad10-d9ca1234c1fd",
        "functionArn": "arn:aws:lambda:sa-east-1:123456678912:function:event-destinations:$LATEST",
        "condition": "RetriesExhausted",
        "approximateInvokeCount": 3
    },
    "requestPayload": {
        "Success": false
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Handled"
    },
    "responsePayload": {
        "errorMessage": "Failure from event, Success = false, I am failing!",
        "errorType": "Error",
        "stackTrace": [ "exports.handler (/var/task/index.js:18:18)" ]
    }
}

Destination-specific JSON format

  • For SNS/SQS, the JSON object is passed as the Message to the destination.
  • For Lambda, the JSON is passed as the payload to the function. The destination function cannot be the same as the source function. For example, if LambdaA has a Destination configuration attached for Success, LambdaA is not a valid destination ARN. This prevents recursive functions.
  • For EventBridge, the JSON is passed as the Detail in the PutEvents call. The source is lambda, and the detail type is either Lambda Function Invocation Result - Success or Lambda Function Invocation Result - Failure. The resource fields contain the function and destination ARNs (see the pattern sketch after this list).
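For example, a rule on the destination event bus could match failed invocation records with an event pattern along these lines; a sketch based on the source and detail-type values described above:

{
    "source": ["lambda"],
    "detail-type": ["Lambda Function Invocation Result - Failure"]
}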

AWS CloudFormation configuration

Destinations CloudFormation configuration is created via the following YAML.

Resources:
  EventInvokeConfig:
    Type: AWS::Lambda::EventInvokeConfig
    Properties:
      FunctionName: "YourLambdaFunctionWithEventInvokeConfig"
      Qualifier: "$LATEST"
      MaximumEventAgeInSeconds: 600
      MaximumRetryAttempts: 0
      DestinationConfig:
        OnSuccess:
          Destination: "arn:aws:sns:us-east-1:123456789012:YourSNSTopicOnSuccess"
        OnFailure:
          Destination: "arn:aws:lambda:us-east-1:123456789012:function:YourLambdaFunctionOnFailure"
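The same configuration can also be applied from the AWS CLI using the put-function-event-invoke-config command. This sketch reuses the placeholder function name and ARNs from the template above:

aws lambda put-function-event-invoke-config \
  --function-name YourLambdaFunctionWithEventInvokeConfig \
  --maximum-event-age-in-seconds 600 \
  --maximum-retry-attempts 0 \
  --destination-config '{"OnSuccess":{"Destination":"arn:aws:sns:us-east-1:123456789012:YourSNSTopicOnSuccess"},"OnFailure":{"Destination":"arn:aws:lambda:us-east-1:123456789012:function:YourLambdaFunctionOnFailure"}}'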

Conclusion

AWS Lambda Destinations gives you more visibility and control of function execution results. This helps you build better event-driven applications, reducing code complexity and making use of Lambda's native failure-handling controls.

There are no additional costs for enabling Lambda Destinations. However, calls made to destination target services may be charged.

To learn more, see Lambda Destinations in the AWS Lambda Developer Guide.

Improving Containers by Listening to Customers

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/improving-containers/

At AWS, we build our product roadmap based upon feedback from our customers. The following three new features have all come about because customers asked us to solve specific issues they have faced when building and operating sophisticated container-based applications.

Managed Node Groups for Amazon Elastic Kubernetes Service
Our customers have told us that they want to focus on building innovative solutions for their customers, and focus less on the heavy lifting of managing Kubernetes infrastructure.

Amazon Elastic Kubernetes Service already provides you with a standard, highly-available Kubernetes cluster control plane, and now, AWS can also manage the nodes (Amazon Elastic Compute Cloud (EC2) instances) for your Kubernetes cluster. Amazon Elastic Kubernetes Service makes it easy to apply bug fixes and security patches to nodes, and updates them to the latest Kubernetes versions along with the cluster.

The Amazon Elastic Kubernetes Service console and API give you a single place to understand the state of your cluster; you no longer have to jump around different services to see all of the resources that make up your cluster.

You can provision managed nodes today when you create a new Amazon EKS cluster. There is no additional cost to use Amazon EKS managed node groups; you pay only for the Amazon EKS cluster and the AWS resources they provision. To find out more, check out this blog: Extending the EKS API: Managed Node Groups.

Managing your container Logs with AWS FireLens
Customers building container-based applications told us that they wanted more flexibility when it came to logging; however, they didn’t wish to install, configure, or troubleshoot logging agents.

AWS FireLens gives you this flexibility, as you can now forward container logs to storage and analytics tools by configuring your task definition in Amazon ECS or AWS Fargate.

This means that developers have their containers send logs to stdout, and then FireLens picks up these logs and forwards them to the destination that has been configured.

FireLens works with the open-source projects Fluent Bit and Fluentd, which means that you can send logs to any destination supported by either of those projects.

There are a lot of configuration options with FireLens, and you can choose to filter logs and even have logs sent to multiple destinations. For more information, you can take a look at the demo I wrote earlier in the week: Announcing Firelens – A New Way to Manage Container Logs.

If you would like a deeper understanding of how the technology works and was built, Wesley Pettit goes into even further depth on the Containers Blog in his article: Under the hood: FireLens for Amazon ECS Tasks.

Amazon Elastic Container Registry EventBridge Support
Customers using Amazon Elastic Container Registry have told us they want to be able to start a build process when new container images are pushed to Elastic Container Registry.

We have therefore added Amazon Elastic Container Registry EventBridge support.

Using events that Elastic Container Registry now publishes to EventBridge, you can trigger actions such as starting a pipeline or posting a message to somewhere like Amazon Chime or Slack when your image is successfully pushed.

To learn more about this new feature, check out the following blog post where I give a more detailed explanation and demo: EventBridge support in Amazon Elastic Container Registry.

More to come
These three new releases add to other great releases we have already had this year, such as Savings Plans, Amazon EKS Windows Containers support, and Native Container Image Scanning in Amazon ECR.

We are still listening, and we need your feedback, so if you have a feature request or a pain point with your container applications, please let us know by creating or commenting on issues in our public containers roadmap. Sometime in the future, I might one day be writing about a new feature that was inspired by you.

Martin

 

EventBridge Support in Amazon Elastic Container Registry

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/eventbridge-support-in-amazon-elastic-container-registry/

Many of our customers require a secure and private place to store their container images, and that’s why they use our fully managed container registry Amazon Elastic Container Registry. We recently added support for Amazon EventBridge so that you can trigger actions when images are pushed or deleted. These actions can trigger a continuous integration and continuous deployment (CI/CD) pipeline when an image is pushed, or post a message to your DevOps team Slack channel when an image has been deleted.

This new capability can even enable complicated workflows, for example, customers can use the image push event on a base image to trigger a rebuild of images built on top of that base. In this scenario, a base image might be rebuilt weekly to pick up the latest security patches. A push event from the base image repository can trigger other builds, so that all derivative images are patched, too.

To show you how to go about using this new capability, I thought I’d open up the console and work through an example of how all the pieces fit together.

In the Amazon EventBridge console, I create a new rule, and I enter a unique name and description.

Next, I scroll down to Define pattern and begin to customize the type of event pattern that I want to use. I leave the default Event pattern radio button selected and choose Pre-defined pattern by service. Since Elastic Container Registry is an AWS service, I select AWS as the Service Provider.

In the Service Name section, you can select one of the many different AWS services as the event source. I am going to choose the newest addition to this list, Elastic Container Registry (ECR). Lastly, in this section, I select ECR Image Action as the Event type. This ECR Image Action contains both DELETE and PUSH as action types.
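Behind the picker, this selection corresponds to an event pattern along these lines; a sketch that assumes you want to match both action types (you could narrow it to just one):

{
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {
        "action-type": ["PUSH", "DELETE"]
    }
}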

Next, I’m asked to configure which event bus I want to use. For this example, I select the AWS default event bus that comes with every AWS account.

Now that I have identified where my events are coming from, I now need to say where I want them to go. We call these targets, and there are plenty of options here. For example, I could send the event to a Lambda Function, a Kinesis stream, or any one of the wide variety of AWS targets.

To keep things simple, I’m going to choose to invoke an Amazon Simple Notification Service (SNS) topic. This topic is called ImageAction, and I have subscribed to this topic so that I receive an email when new messages are received by this topic.

Back over on my laptop, I push a new version of my container image to my repository in Elastic Container Registry.

If I go over to the Elastic Container Registry console, I can see that my Docker image was successfully pushed. I’m now going to select the image and choose Delete, which deletes my new image.

This sends both a PUSH and a DELETE event through to my SNS topic, which in turn delivers two emails to me as a subscriber to that topic.

 

If I open up Outlook, sure enough, I have two (admittedly not pretty) emails with the respective action-types of PUSH and DELETE.

So there you have it: you can now wire up events in Elastic Container Registry and enable exciting and wonderful things to happen as a result. Amazon EventBridge support in Amazon Elastic Container Registry is available in all public AWS Regions and AWS GovCloud (US). Try it now in the Amazon EventBridge console.

Happy Eventing!

Martin

Automating Zendesk With Amazon EventBridge and AWS Step Functions

Post Syndicated from benjasl original https://aws.amazon.com/blogs/compute/automating-zendesk-with-amazon-eventbridge-and-aws-step-functions/

In July 2019, AWS launched Amazon EventBridge, a serverless event bus that offers third-party software as a service (SaaS) integration capabilities. This service allows applications and AWS services to integrate with each other in near-real time via an event bus. Amazon EventBridge launched with a number of partner integrations, to enable you to quickly connect to some of your favorite SaaS solutions.

This post describes how to deploy an application from the AWS Serverless Application Repository that uses EventBridge to seamlessly integrate with and automate Zendesk. The application performs sentiment analysis on Zendesk support tickets with Amazon Comprehend. It then uses AWS Lambda and AWS Step Functions to categorize and orchestrate the escalation priority, based on configurable SLA wait times.

High-level architecture diagram

This application serves as a starter template for an automated ticket escalation policy. It could be extended to self-serve and remediate automatically, according to the individual tickets submitted. For example, creating database backups in response to release tickets, or creating new user accounts for user access requests.

Important: the application uses various AWS services, and there are costs associated with these services after the Free Tier usage. Please see the AWS pricing page for details. This application also requires a Zendesk account.

To show how AWS services integrate applications or third-party SaaS via EventBridge, you deploy this application from the AWS Serverless Application Repository. You then enable, connect, and configure the EventBridge rules from within the AWS Management Console before triggering the rule and running the application.

Before deploying this application from the AWS Serverless Application Repository, you must generate an API key from within Zendesk.

Creating the Zendesk API Resource

Use an API to execute actions on your Zendesk account from AWS. It’s not currently possible to sync bidirectionally between Zendesk and AWS. Follow these steps to generate a Zendesk API token that is used by the application to authenticate Zendesk API calls.

To generate an API token:

1. Log in to the Zendesk dashboard.

2. Choose the Admin icon in the sidebar, then select Channels > API.

3. Choose the Settings tab, and make sure that Token Access is enabled.

4. Choose the + button to the right of Active API Tokens.

Creating a Zendesk API token

5. Copy the token, and store it securely. Once you close this window, the full token will never be displayed again.

6. Choose Save to return to the API page, which shows a truncated version of the token.

Zendesk API token

Deploy the application from the Serverless Application Repository

1. Go to the deployment page on the Serverless Application Repository.

2. Fill out the required deployment fields:

  • ZenDeskDomain: this appears in the account’s URL: https://[yoursubdomain].zendesk.com.
  • ZenDeskPassword: the API key generated in the earlier step, “Creating the Zendesk API Resource.”
  • ZenDeskUsername: the account’s primary email address.

Deployment Fields

3. Choose Deploy.

Once the deployment process has completed, five new resources have been created. This includes four Lambda functions that perform the individual compute functionality, and one Step Functions state machine.

AWS Step Functions is a serverless orchestration service. It lets you easily coordinate multiple Lambda functions into flexible workflows that are easy to debug and easy to change. The state machine is used to manage the Lambda functions, together with business logic and wait times.

When EventBridge receives a new event, it’s directed into the pre-assigned event bus. Here, it’s compared with associated rules. Each rule has an event pattern defined, which acts as a filter to match inbound events to their corresponding rules. In this application, a matching event rule triggers an AWS Step Functions invocation, passing in the event payload from Zendesk.

To integrate a partner SaaS application with Amazon EventBridge, you must configure three components:

1. The event source

2. The event bus

3. The event rule and target

Configuring Zendesk with Amazon EventBridge

To send Zendesk events to EventBridge, you need access to the Zendesk Events connector early access program (EAP). You can register for this here.

Step 1. Configuring your Zendesk event source

1. Go to your Zendesk Admin Center and select Admin Center > Integrations.

Zendesk integrations

2. Choose Connect in Events Connector for Amazon EventBridge to open the page to configure your Zendesk event source.

3. Enter your AWS account ID in the Amazon Web Services account ID field, and select the Region to receive events.

4. Choose Save.

Step 2. Associate the Zendesk event source with a new event bus

1. Sign into the AWS Management Console and navigate to Services > Amazon EventBridge > Partner event sources.

New event source

2. Select the radio button next to the new event source and choose the Associate with event bus button.

Associating event source with event bus

3. Choose Associate.

4. Navigate to Amazon EventBridge > Events > Event buses.

Creating an event bus

5. You can see the newly-created event bus in the Custom event bus section.

Step 3. Create a new rule for the event bus

1. Navigate to the rules page in the EventBridge Console, then select Events > Rules.

2. To select the new event bus, use the drop-down arrow in the Select event bus section.

Custom event bus

3. Choose Create Rule.

4. Enter a name for the new rule, such as “New Zendesk Ticket.”

5. In the Define Pattern section, choose Event pattern. Select Custom Pattern. A new input box appears that allows you to enter a pre-defined event pattern, represented as a JSON object. This is used to match relevant events.

6. Copy and paste this JSON object into the Event Pattern input box.

{
    "account": [
        "{YourAWSAccountNumber}"
    ],
    "detail-type": [
        "Support Ticket: Ticket Created"
    ]
 }

This event pattern can be found in the list of event schemas provided by Zendesk. It’s important to test the event pattern to ensure it correctly matches the event schema that EventBridge receives.
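One way to test it is offline with the AWS CLI; the file names below are placeholders for a pattern and a sample event saved locally, and the sample event must be a complete event including the id, account, source, time, region, and detail-type envelope fields:

aws events test-event-pattern \
  --event-pattern file://pattern.json \
  --event file://sample-event.json

The command returns {"Result": true} if the pattern matches the sample event.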

7. Choose Save.

Each event has the option to forward the data input (or a filtered version) onto a wide selection of targets. This application invokes a Step Functions state machine and passes in the Zendesk event data.

8. In the Targets Section drop-down, select Step Functions state machine. Select the application’s step function.

Event target selector

9. Scroll down and choose Create.

Running the application

Once EventBridge is configured to receive Zendesk events, it’s possible to trigger the application by creating a new ticket in Zendesk. This sends the event to EventBridge, which then triggers the Step Functions state machine:

Step Function Orchestration

The Step Functions state machine holds each state object in the workflow. Some of the state objects use the Lambda functions created in the earlier steps to process data. Others use Amazon States Language (ASL) enabling the application to conditionally branch, wait, and transition to the next state.

Using a state machine this way ensures that the business logic is decoupled from the Lambda compute functionality. Each of the Step Functions states is detailed below:

ZenDeskGetFullTicket

State type: Task. Service: AWS Lambda

This function receives a ticket ID and invokes the Zendesk API to retrieve a complete record of ticket metadata. This is used for the subsequent lifecycle of the AWS Step Functions state machine.

ZenDeskDemoGetSentiment

State type: Task. Services: AWS Lambda, Amazon Comprehend

This function uses Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. For this use case, the DetectSentiment operation determines the sentiment of a Zendesk ticket. The function accepts a single text string as its input and returns a JSON object containing a sentiment score.
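A minimal sketch of that call from Node.js, assuming the ticket text arrives in an event field named description (the deployed function’s actual field names may differ):

const AWS = require('aws-sdk');
const comprehend = new AWS.Comprehend();

exports.handler = async (event) => {
    // 'description' is an assumed field carrying the ticket text
    const params = {
        Text: event.description,
        LanguageCode: 'en'
    };
    const result = await comprehend.detectSentiment(params).promise();

    // result.Sentiment is POSITIVE, NEGATIVE, NEUTRAL, or MIXED;
    // result.SentimentScore holds the per-label confidence scores
    return {
        sentiment: result.Sentiment,
        score: result.SentimentScore
    };
};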

isNegative

State type: Choice

This choice state adds branching logic to the state machine. It uses a “choice rule” to determine if the string input from the preceding task is equal to Negative. If true, it branches on to the next task. If false, the state machine’s execution ends.
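In Amazon States Language, such a choice rule looks broadly like this sketch; the state names and the $.sentiment path are assumptions, not the application’s exact definitions:

"isNegative": {
    "Type": "Choice",
    "Choices": [
        {
            "Variable": "$.sentiment",
            "StringEquals": "Negative",
            "Next": "SetTags"
        }
    ],
    "Default": "closedOrNotNegative"
}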

SetTags

State type: Task. Service: AWS Lambda

This task invokes the “ZenDeskDemoSetTags” Lambda function. A Zendesk API resource sets a new tag on the ticket before passing the returned output onto the next state.

isClosed

State type: Choice

This compares the current status input to the string “Open” to check whether a ticket has been actioned or closed. If a ticket status remains “Open”, the state machine continues along the true branch to the “GetSLAWaitTime” state. Otherwise, it exits along the false branch and ends execution.

GetSLAWaitTime

State type: Choice

This state conditionally branches to a different SLA wait time, depending on the ticket’s current priority status.

SLAUrgentWait, SLAHighWait, SLANormalWait

State type: Wait

These three states delay the state machine from continuing for a set period of time, dependent on the urgency of the ticket, allowing the ticket to be actioned by a Zendesk agent. The wait time is specified when deploying the application.
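Each of these is a standard ASL Wait state. A sketch with an assumed fixed duration and next state (the deployed application takes its wait times from deployment parameters):

"SLAHighWait": {
    "Type": "Wait",
    "Seconds": 1800,
    "Next": "ZenDeskDemoSetPriority"
}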

ZenDeskDemoSetPriority

State type: Task. Service: AWS Lambda

This Lambda function receives a ticket ID and priority value, then invokes Zendesk’s API to escalate the ticket to a higher priority value.

closedOrNotNegative

State type: Pass

This state passes its input to its output, without performing work. Pass states are useful when constructing and debugging state machines.

FinalEscalation

State type: Success

This stops the execution successfully.

The sequence shows an accelerated version of the ticket’s lifecycle in Zendesk:

Zendesk ticket lifecycle

The application runs entirely in the background. Each Step Functions invocation can last for up to a year, allowing for long wait periods before automatically escalating the ticket’s priority. There is no extra cost associated with longer wait time – you only pay for the number of state transitions and not for the idle wait time.

Conclusion

Using EventBridge to route an event directly to AWS Step Functions has reduced the need for unnecessary communication layers. It helps promote good use of compute resources, ensuring Lambda is used to transform data and not transport or orchestrate.

The implementation of AWS Step Functions adds resiliency to the orchestration layer and allows the compute processes to remain decoupled from the business logic. This application demonstrates how EventBridge can be used as a management layer for event ingestion and routing. Additional Zendesk events such as “Comment Created”, “Priority Changed”, or any number listed in the Zendesk events schema can be added using a rule.

By adding a single connection point from Zendesk to AWS, you can extend and automate your support ticketing system with a serverless application that is performant, cost-efficient, and scalable.

Combining the functionality of your favorite SaaS solutions with the power of AWS, EventBridge has the potential to trigger a new wave of serverless applications. What will you integrate with first?

ICYMI: Serverless Q3 2019

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/icymi-serverless-q3-2019/

This post is courtesy of Julian Wood, Senior Developer Advocate – AWS Serverless

Welcome to the seventh edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, checkout what happened last quarter here.

ICYMI calendar

Launches/New products

Amazon EventBridge technically launched in this quarter, although we were so excited to let you know that we squeezed it into the Q2 2019 update. If you missed it, EventBridge is the serverless event bus that connects application data from your own apps, SaaS, and AWS services. This allows you to create powerful event-driven serverless applications using a variety of event sources.

The AWS Bahrain Region has opened, the official name is Middle East (Bahrain) and the API name is me-south-1. AWS Cloud now spans 22 geographic Regions with 69 Availability Zones around the world.

AWS Lambda

In September we announced dramatic improvements in cold starts for Lambda functions inside a VPC. With this announcement, you see faster function startup performance and more efficient usage of elastic network interfaces, drastically reducing VPC cold starts.

VPC to VPC NAT

These improvements are rolling out to all existing and new VPC functions at no additional cost. Rollout is ongoing; you can track the status from the announcement post.

AWS Lambda now supports custom batch window for Kinesis and DynamoDB Event sources, which helps fine-tune Lambda invocation for cost optimization.

You can now deploy Amazon Machine Images (AMIs) and Lambda functions together from the AWS Marketplace using AWS CloudFormation with just a few clicks.

AWS IoT Events actions now support AWS Lambda as a target. Previously you could only define actions to publish messages to SNS and MQTT. Now you can define actions to invoke AWS Lambda functions and even more targets, such as Amazon Simple Queue Service and Amazon Kinesis Data Firehose, and republish messages to IoT Events.

The AWS Lambda Console now shows recent invocations using CloudWatch Logs Insights. From the monitoring tab in the console, you can view duration, billing, and memory statistics for the 10 most recent invocations.

AWS Step Functions

AWS Step Functions example

AWS Step Functions has now been extended to support probably its most requested feature, Dynamic Parallelism, which allows steps within a workflow to be executed in parallel, with a new Map state type.

One way to use the new Map state is for fan-out or scatter-gather messaging patterns in your workflows:

  • Fan-out is applied when delivering a message to multiple destinations, and can be useful in workflows such as order processing or batch data processing. For example, you can retrieve arrays of messages from Amazon SQS and have Map send each message to a separate AWS Lambda function, as the sketch after this list shows.
  • Scatter-gather broadcasts a single message to multiple destinations (scatter), and then aggregates the responses back for the next steps (gather). This is useful in file processing and test automation. For example, you can transcode ten 500-MB media files in parallel, and then join to create a 5-GB file.
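In Amazon States Language, a Map state iterating over a batch of messages might look like this sketch (the state and function names are hypothetical):

"ProcessAllMessages": {
    "Type": "Map",
    "ItemsPath": "$.messages",
    "MaxConcurrency": 10,
    "Iterator": {
        "StartAt": "ProcessMessage",
        "States": {
            "ProcessMessage": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessMessage",
                "End": true
            }
        }
    },
    "End": true
}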

Another important update is AWS Step Functions adds support for nested workflows, which allows you to orchestrate more complex processes by composing modular, reusable workflows.

AWS Amplify

A new Predictions category has been added to the Amplify Framework to quickly add machine learning capabilities to your web and mobile apps.

Amplify framework

With a few lines of code you can add and configure AI/ML services to configure your app to:

  • Identify text, entities, and labels in images using Amazon Rekognition, or identify text in scanned documents to get the contents of fields in forms and information stored in tables using Amazon Textract.
  • Convert text into a different language using Amazon Translate, text to speech using Amazon Polly, and speech to text using Amazon Transcribe.
  • Interpret text to find the dominant language, the entities, the key phrases, the sentiment, or the syntax of unstructured text using Amazon Comprehend.

AWS Amplify CLI (part of the open source Amplify Framework) has added local mocking and testing. This allows you to mock some of the most common cloud services and test your application 100% locally.

For this first release, the Amplify CLI can mock locally:

amplify mock

AWS CloudFormation

The CloudFormation team has released the much-anticipated CloudFormation Coverage Roadmap.

Styled after the popular AWS Containers Roadmap, the CloudFormation Coverage Roadmap provides transparency about our priorities, and the opportunity to provide your input.

The roadmap contains four columns:

  • Shipped – Available for use in production in all public AWS Regions.
  • Coming Soon – Generally a few months out.
  • We’re working on it – Work in progress, but further out.
  • Researching – We’re thinking about the right way to implement the coverage.

AWS CloudFormation roadmap

Amazon DynamoDB

NoSQL Workbench for Amazon DynamoDB has been released in preview. This is a free, client-side application available for Windows and macOS. It helps you more easily design and visualize your data model, run queries on your data, and generate the code for your application.

Amazon Aurora

Amazon Aurora Serverless is a dynamically scaling version of Amazon Aurora. It automatically starts up, shuts down, and scales up or down, based on your application workload.

Aurora Serverless has had a MySQL compatible edition for a while, now we’re excited to bring more serverless joy to databases with the PostgreSQL compatible version now GA.

We also have a useful post on Reducing Aurora PostgreSQL storage I/O costs.

AWS Serverless Application Repository

The AWS Serverless Application Repository has had some useful SAR apps added by Serverless Developer Advocate James Beswick.

  • S3 Auto Translator which automatically converts uploaded objects into other languages specified by the user, using Amazon Translate.
  • Serverless S3 Uploader allows you to upload JPG files to Amazon S3 buckets from your web applications using presigned URLs.

Serverless posts

July

August

September

Tech talks

We hold several AWS Online Tech Talks covering serverless tech talks throughout the year. These are listed in the Serverless section of the AWS Online Tech Talks page.

Here are the ones from Q3:

Twitch

July

August

September

There are also a number of other helpful video series covering Serverless available on the AWS Twitch Channel.

AWS re:Invent

AWS re:Invent

December 2 – 6 in Las Vegas, Nevada is peak AWS learning time with AWS re:Invent 2019. Join tens of thousands of AWS customers to learn, share ideas, and see exciting keynote announcements.

Be sure to take a look at the growing catalog of serverless sessions this year. Make sure to book time for Builders Sessions, Chalk Talks, and Workshops as these sessions will fill up quickly. The schedule is updated regularly so if your session is currently fully booked, a repeat may be scheduled.

Register for AWS re:Invent now!

What did we do at AWS re:Invent 2018? Check out our recap here: AWS re:Invent 2018 Recap at the San Francisco Loft.

Our friends at IOPipe have written 5 tips for avoiding serverless FOMO at this year’s re:Invent.

AWS Serverless Heroes

We are excited to welcome some new AWS Serverless Heroes to help grow the serverless community. We look forward to some amazing content to help you with your serverless journey.

Still looking for more?

The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

 

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

Learn about AWS Services & Solutions – September AWS Online Tech Talks

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

 

Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.

 

Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.

 

Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.

 

Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.

 

DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.

 

Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

 

IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.

 

Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.

 

AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.

 

Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.

 

Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.

 

Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.

 

Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your Windows workloads.

 

Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.

 

Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.