Tag Archives: java

Bending pause times to your will with Generational ZGC

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/bending-pause-times-to-your-will-with-generational-zgc-256629c9386b

The surprising and not so surprising benefits of generations in the Z Garbage Collector.

By Danny Thomas, JVM Ecosystem Team

The latest long term support release of the JDK delivers generational support for the Z Garbage Collector.

More than half of our critical streaming video services are now running on JDK 21 with Generational ZGC, so it’s a good time to talk about our experience and the benefits we’ve seen. If you’re interested in how we use Java at Netflix, Paul Bakker’s talk, How Netflix Really Uses Java, is a great place to start.

Reduced tail latencies

In both our GRPC and DGS Framework services, GC pauses are a significant source of tail latencies. That’s particularly true of our GRPC clients and servers, where request cancellations due to timeouts interact with reliability features such as retries, hedging and fallbacks. Each of these errors is a canceled request that results in a retry, so reducing cancellations also reduces overall service traffic by the same rate:

Error rates per second. Previous week in white vs current cancellation rate in purple, as ZGC was enabled on a service cluster on November 15

Removing the noise of pauses also allows us to identify actual sources of latency end-to-end, which would otherwise be hidden in the noise, as maximum pause time outliers can be significant:

Maximum GC pause times by cause, for the same service cluster as above. Yes, those ZGC pauses really are usually under one millisecond

Efficiency

Even after we saw very promising results in our evaluation, we expected the adoption of ZGC to be a trade off: a little less application throughput, due to store and load barriers, work performed in thread local handshakes, and the GC competing with the application for resources. We considered that an acceptable trade off, as avoiding pauses provided benefits that would outweigh that overhead.

In fact, we’ve found for our services and architecture that there is no such trade off. For a given CPU utilization target, ZGC improves both average and P99 latencies with equal or better CPU utilization when compared to G1.

The consistency in request rates, request patterns, response time and allocation rates we see in many of our services certainly help ZGC, but we’ve found it’s equally capable of handling less consistent workloads (with exceptions of course; more on that below).

Operational simplicity

Service owners often reach out to us with questions about excessive pause times and for help with tuning. We have several frameworks that periodically refresh large amounts of on-heap data to avoid external service calls for efficiency. These periodic refreshes of on-heap data are great at taking G1 by surprise, resulting in pause time outliers well beyond the default pause time goal.

This long lived on-heap data was the major contributor to us not adopting non-generational ZGC previously. In the worst case we evaluated, non-generational ZGC caused 36% more CPU utilization than G1 for the same workload. That became a nearly 10% improvement with generational ZGC.

Half of all services required for streaming video use our Hollow library for on-heap metadata. Removing pauses as a concern allowed us to remove array pooling mitigations, freeing hundreds of megabytes of memory for allocations.

Operational simplicity also stems from ZGC’s heuristics and defaults. No explicit tuning has been required to achieve these results. Allocation stalls are rare, typically coinciding with abnormal spikes in allocation rates, and are shorter than the average pause times we saw with G1.

Memory overhead

We expected that losing compressed references on heaps < 32G, due to colored pointers requiring 64-bit object pointers, would be a major factor in the choice of a garbage collector.

We’ve found that while that’s an important consideration for stop-the-world GCs, that’s not the case for ZGC, where even on small heaps the increase in allocation rate is amortized by the efficiency and operational improvements. Our thanks to Erik Österlund at Oracle for explaining the less intuitive benefits of colored pointers when it comes to concurrent garbage collectors, which led us to evaluate ZGC more broadly than initially planned.

In the majority of cases ZGC is also able to consistently make more memory available to the application:

Used vs available heap capacity following each GC cycle, for the same service cluster as above

ZGC has a fixed overhead of 3% of the heap size, requiring more native memory than G1. Except in a couple of cases, there’s been no need to lower the maximum heap size to allow for more headroom, and those were services with greater than average native memory needs.

Reference processing is also only performed in major collections with ZGC. We paid particular attention to deallocation of direct byte buffers, but we haven’t seen any impact thus far. This difference in reference processing did cause a performance problem with JSON thread dump support, but that’s an unusual situation caused by a framework accidentally creating an unused ExecutorService instance for every request.

Transparent huge pages

Even if you’re not using ZGC, you probably should be using huge pages, and transparent huge pages are the most convenient way to use them.

ZGC uses shared memory for the heap and many Linux distributions configure shmem_enabled to never, which silently prevents ZGC from using huge pages with -XX:+UseTransparentHugePages.

Here we have a service deployed with no other change but shmem_enabled going from never to advise, reducing CPU utilization significantly:

Deployment moving from 4k to 2m pages. Ignore the gap, that’s our immutable deployment process temporarily doubling the cluster capacity

Our default configuration:

  • Sets heap minimum and maximums to equal size
  • Configures -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch
  • Uses the following transparent_hugepage configuration:
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo advise | sudo tee /sys/kernel/mm/transparent_hugepage/shmem_enabled
echo defer | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
echo 1 | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/defrag

What workloads weren’t a good fit?

There is no best garbage collector. Each trades off collection throughput, application latency and resource utilization depending on the goal of the garbage collector.

For the workloads that have performed better with G1 vs ZGC, we’ve found that they tend to be more throughput oriented, with very spiky allocation rates and long running tasks holding objects for unpredictable periods.

A notable example was a service with very spiky allocation rates and large numbers of long-lived objects, which happened to be a particularly good fit for G1’s pause time goal and old region collection heuristics. That allowed G1 to avoid unproductive work in GC cycles that ZGC couldn’t.

The switch to ZGC by default has provided the perfect opportunity for application owners to think about their choice of garbage collector. Several batch/precompute cases had been using G1 by default, where they would have seen better throughput from the parallel collector. In one large precompute workload we saw a 6–8% improvement in application throughput, shaving an hour off the batch time, versus G1.

Try it for yourself!

Left unquestioned, assumptions and expectations could have caused us to miss one of the most impactful changes we’ve made to our operational defaults in a decade. We’d encourage you to try generational ZGC for yourself. It might surprise you as much as it surprised us.


Bending pause times to your will with Generational ZGC was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Re-platforming Java applications using the updated AWS Serverless Java Container

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/re-platforming-java-applications-using-the-updated-aws-serverless-java-container/

This post is written by Dennis Kieselhorst, Principal Solutions Architect.

The combination of portability, efficiency, community, and breadth of features has made Java a popular choice for businesses to build their applications for over 25 years. The introduction of serverless functions, pioneered by AWS Lambda, changed what you need in a programming language and runtime environment. Functions are often short-lived, single-purpose, and do not require extensive infrastructure configuration.

This blog post shows how you can modernize a legacy Java application to run on Lambda with minimal code changes using the updated AWS Serverless Java Container.

Deployment model comparison

Classic Java enterprise applications often run on application servers such as JBoss/WildFly, Oracle WebLogic, and IBM WebSphere, or servlet containers like Apache Tomcat. The underlying Java virtual machine typically runs 24/7 and serves multiple requests using its multithreading capabilities.

Typical long running Java application server

When building Lambda functions with Java, an HTTP server is no longer required and there are other considerations for running code in a Lambda environment. Code runs in an execution environment, which processes a single invocation at a time. Functions can run for up to 15 minutes with a maximum of 10 GB of allocated memory.

Functions are triggered by events such as an HTTP request with a corresponding payload. An Amazon API Gateway HTTP request invokes the function with the following JSON payload:

Amazon API Gateway HTTP request payload

The code to process these events is different from how you implement it in a traditional application.
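
For illustration, a hand-written Java handler for such an event, without any adapter layer, might look like the following sketch (the class name and response body are illustrative, not taken from the example application):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// Handles the API Gateway proxy event directly instead of a servlet request.
public class HelloHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("Hello from Lambda, path: " + event.getPath());
    }
}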

AWS Serverless Java Container

The AWS Serverless Java Container makes it easier to run Java applications written with frameworks such as Spring, Spring Boot, or JAX-RS/Jersey in Lambda.

The container provides adapter logic to minimize code changes. Incoming events are translated to the Servlet specification so that frameworks work as before.

AWS Serverless Java Container adapter

Version 1 of this library was released in 2018. Today, AWS is announcing the release of version 2, which supports the latest Jakarta EE specification, along with Spring Framework 6.x, Spring Boot 3.x and Jersey 3.x.

Example: Modifying a Spring Boot application

This following example illustrates how to migrate a Spring Boot 3 application. You can find the full example for Spring and other frameworks in the GitHub repository.

  1. Add the AWS Serverless Java dependency to your Maven POM build file (or Gradle accordingly):
    <dependency>
        <groupId>com.amazonaws.serverless</groupId>
        <artifactId>aws-serverless-java-container-springboot3</artifactId>
        <version>2.0.0</version>
    </dependency>
  2. Spring Boot, by default, embeds Apache Tomcat to deal with HTTP requests. The examples use Amazon API Gateway to handle inbound HTTP requests, so you can exclude the embedded Tomcat dependency:
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <configuration>
                    <createDependencyReducedPom>false</createDependencyReducedPom>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <artifactSet>
                                <excludes>
                                    <exclude>org.apache.tomcat.embed:*</exclude>
                                </excludes>
                            </artifactSet>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    The AWS Serverless Java Container accepts API Gateway proxy requests and transforms them into a plain Java object. The library also transforms outputs into a suitable API Gateway response object.

    Once you run your build process, the Maven Shade plugin produces an uber-JAR that bundles all dependencies, which you can upload to Lambda.

  3. The Lambda runtime must know which handler method to invoke. You can configure and use the SpringDelegatingLambdaContainerHandler implementation or implement your own handler Java class that delegates to AWS Serverless Java Container, which is useful if you want to add additional functionality (a sketch of such a custom handler appears after the AWS SAM snippet below).
  4. Configure the handler name in the runtime settings of your function.
    Configure the handler name

  5. Configure an environment variable named MAIN_CLASS to let the generic handler know where to find your original application main class, which is usually annotated with @SpringBootApplication.
    Configure MAIN_CLASS environment variable

    You can also configure these settings using infrastructure as code (IaC) tools such as AWS CloudFormation, the AWS Cloud Development Kit (AWS CDK), or the AWS Serverless Application Model (AWS SAM).

    In an AWS SAM template, the related changes are as follows. Full templates are part of the GitHub repository.

    Handler: com.amazonaws.serverless.proxy.spring.SpringDelegatingLambdaContainerHandler 
    Environment:
      Variables:
        MAIN_CLASS: com.amazonaws.serverless.sample.springboot3.Application
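
    If you choose to implement your own handler class instead of the generic SpringDelegatingLambdaContainerHandler, a minimal sketch following the library’s proxy pattern might look like the following. This is an illustration rather than the exact code from the repository, and the Application class name is an assumption:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    import com.amazonaws.serverless.exceptions.ContainerInitializationException;
    import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
    import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
    import com.amazonaws.serverless.proxy.spring.SpringBootLambdaContainerHandler;
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestStreamHandler;

    public class StreamLambdaHandler implements RequestStreamHandler {

        private static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

        static {
            try {
                // Initialize the Spring Boot application once per execution environment.
                handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
            } catch (ContainerInitializationException e) {
                throw new RuntimeException("Could not initialize Spring Boot application", e);
            }
        }

        @Override
        public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
            // Translate the proxy event into a servlet request, invoke Spring, and write the response back.
            handler.proxyStream(input, output, context);
        }
    }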

    Optimizing memory configuration

    When running Lambda functions, start-up time and memory footprint are important considerations. The amount of memory you configure for your Lambda function also determines the amount of virtual CPU available. Adding more memory proportionally increases the amount of CPU, and therefore increases the overall computational power available. If a function is CPU-, network- or memory-bound, adding more memory can improve performance.

    Lambda charges for the total amount of gigabyte-seconds consumed by a function. Gigabyte-seconds are a combination of total memory (in gigabytes) and duration (in seconds). Increasing memory incurs additional cost. However, in many cases, increasing the memory available causes a decrease in the function duration due to the additional CPU available. As a result, the overall cost increase may be negligible for additional performance, or may even decrease.

    Choosing the memory allocated to your Lambda functions is an optimization process that balances speed (duration) and cost. You can manually test functions by selecting different memory allocations and measuring the completion time. AWS Lambda Power Tuning is a tool to simplify and automate the process, which you can use to optimize your configuration.

    Power Tuning uses AWS Step Functions to run multiple concurrent versions of a Lambda function at different memory allocations and measures the performance. The function runs in your AWS account, performing live HTTP calls and SDK interactions, to measure performance in a production scenario.

    Improving cold-start time with AWS Lambda SnapStart

    Traditional applications often have a large tree of dependencies. Lambda loads the function code and initializes dependencies during the Lambda lifecycle initialization phase. With many dependencies, this initialization time may be too long for your requirements. AWS Lambda SnapStart for Java-based functions can deliver up to 10 times faster startup performance.

    Instead of running the function initialization phase on every cold-start, Lambda SnapStart runs the function initialization process at deployment time. Lambda takes a snapshot of the initialized execution environment. This snapshot is encrypted and persisted in a tiered cache for low latency access. When the function is invoked and scales, Lambda resumes the execution environment from the persisted snapshot instead of running the full initialization process. This results in lower startup latency.

    To enable Lambda SnapStart you must first enable the configuration setting, and also publish a function version.

    Enabling SnapStart

    Point your API Gateway endpoint to the published version or an alias to ensure you are using the SnapStart-enabled function.

    The corresponding settings in an AWS SAM template contain the following:

    SnapStart: 
      ApplyOn: PublishedVersions
    AutoPublishAlias: my-function-alias

    Read the Lambda SnapStart compatibility considerations in the documentation as your application may contain specific code that requires attention.

    Conclusion

    When building serverless applications with Lambda, you can deliver features faster, but your language and runtime must work within the serverless architectural model. AWS Serverless Java Container helps to bridge between traditional Java Enterprise applications and modern cloud-native serverless functions.

    You can optimize the memory configuration of your Java Lambda function using the AWS Lambda Power Tuning tool and enable SnapStart to reduce the initial cold-start time.

    The self-paced Java on AWS Lambda workshop shows how to build cloud-native Java applications and migrate existing Java applications to Lambda.

    Explore the AWS Serverless Java Container GitHub repo where you can report related issues and feature requests.

    For more serverless learning resources, visit Serverless Land.

Building resilient serverless applications using chaos engineering

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/compute/building-resilient-serverless-applications-using-chaos-engineering/

This post is written by Suranjan Choudhury (Head of TME and ITeS SA) and Anil Sharma (Sr PSA, Migration) 

Chaos engineering is the process of stressing an application in testing or production environments by creating disruptive events, such as outages, observing how the system responds, and implementing improvements. Chaos engineering helps you create the real-world conditions needed to uncover hidden issues and performance bottlenecks that are challenging to find in distributed applications.

You can build resilient distributed serverless applications using AWS Lambda and test Lambda functions in real world operating conditions using chaos engineering.  This blog shows an approach to inject chaos in Lambda functions, making no change to the Lambda function code. This blog uses the AWS Fault Injection Simulator (FIS) service to create experiments that inject disruptions for Lambda based serverless applications.

AWS FIS is a managed service that performs fault injection experiments on your AWS workloads. AWS FIS is used to set up and run fault experiments that simulate real-world conditions to discover application issues that are difficult to find otherwise. You can improve application resilience and performance using results from FIS experiments.

The sample code in this blog introduces random faults to existing Lambda functions, like an increase in response times (latency) or random failures. You can observe application behavior under introduced chaos and make improvements to the application.

Approaches to inject chaos in Lambda functions

AWS FIS currently does not support injecting faults in Lambda functions. However, there are two main approaches to inject chaos in Lambda functions: using external libraries or using Lambda layers.

Developers have created libraries to introduce failure conditions to Lambda functions, such as chaos_lambda and failure-lambda. These libraries allow developers to inject elements of chaos into Python and Node.js Lambda functions. To inject chaos using these libraries, developers must decorate the existing Lambda function’s code. Decorator functions wrap the existing Lambda function, adding chaos at runtime. This approach requires developers to change the existing Lambda functions.

You can also use Lambda layers to inject chaos, requiring no change to the function code, as the fault injection is separated. Since the Lambda layer is deployed separately, you can independently change the element of chaos, like latency in response or failure of the Lambda function. This blog post discusses this approach.

Injecting chaos in Lambda functions using Lambda layers

A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files. This blog creates an FIS experiment that uses Lambda layers to inject disruptions in existing Lambda functions for Java, Node.js, and Python runtimes.

The Lambda layer contains the fault injection code. It is invoked prior to invocation of the Lambda function and injects random latency or errors. Injecting random latency simulates real world unpredictable conditions. The Java, Node.js, and Python chaos injection layers provided are generic and reusable. You can use them to inject chaos in your Lambda functions.

The Chaos Injection Lambda Layers

Java Lambda Layer for Chaos Injection

The chaos injection layer for Java Lambda functions uses the JAVA_TOOL_OPTIONS environment variable. This environment variable allows specifying the initialization of tools, specifically the launching of native or Java programming language agents. The JAVA_TOOL_OPTIONS variable has a javaagent parameter that points to the chaos injection layer. This layer uses Java’s premain method and the Byte Buddy library to modify the Lambda function’s Java class at runtime.

When the Lambda function is invoked, the JVM uses the class specified with the javaagent parameter and invokes its premain method before the Lambda function’s handler invocation. The Java premain method injects chaos before Lambda runs.
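As an illustration only, the skeleton of such an agent might look like the sketch below. The class name and the latency and failure values are hypothetical, and the Byte Buddy transformation that weaves injectChaos() into the handler class is omitted:

import java.lang.instrument.Instrumentation;
import java.util.concurrent.ThreadLocalRandom;

public class ChaosAgent {

    // Invoked by the JVM before the Lambda handler runs, because JAVA_TOOL_OPTIONS
    // points -javaagent at the layer's JAR.
    public static void premain(String agentArgs, Instrumentation inst) {
        // In the real layer, Byte Buddy uses `inst` here to rewrite the handler class
        // so that each invocation first calls injectChaos().
        System.out.println("Chaos agent loaded; handler instrumentation would be installed here");
    }

    // Hypothetical fault-injection logic woven into the handler by the agent.
    static void injectChaos() {
        long latencyMs = ThreadLocalRandom.current().nextLong(100, 2000);
        try {
            Thread.sleep(latencyMs); // inject random latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (ThreadLocalRandom.current().nextDouble() < 0.1) {
            throw new RuntimeException("Injected failure for chaos experiment"); // inject random failure
        }
    }
}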

The FIS experiment adds the layer association and the JAVA_TOOL_OPTIONS environment variable to the Lambda function.

Python and Node.js Lambda Layer for Chaos Injection

When injecting chaos in Python and Node.js functions, the Lambda function’s handler is replaced with a function in the respective layers by the FIS aws:ssm:start-automation-execution action. The automation, which is an SSM document, saves the original Lambda function’s handler in AWS Systems Manager Parameter Store, so that the changes can be rolled back once the experiment is finished.

The layer function contains the logic to inject chaos. At runtime, the layer function is invoked, injecting chaos in the Lambda function. The layer function in turn invokes the Lambda function’s original handler, so that the functionality is fulfilled.

The result in all runtimes (Java, Python, or Node.js) is invocation of the original Lambda function with latency or failure injected. The observed changes are random latency or failures injected by the layer.

An SSM document is provided to roll back the changes once the experiment is completed. It removes the layer’s association with the Lambda function and, in the case of the Java runtime, removes the environment variable.

Sample FIS experiments using SSM and Lambda layers

In the sample code provided, Lambda layers are provided for Python, Node.js and Java runtimes along with sample Lambda functions for each runtime.

The sample deploys the Lambda layers and the Lambda functions, the FIS experiment template, the AWS Identity and Access Management (IAM) roles needed to run the experiment, and the AWS Systems Manager (SSM) documents. An AWS CloudFormation template is provided for deployment.

Step 1: Complete the prerequisites

  • To deploy the sample code, clone the repository locally:
    git clone https://github.com/aws-samples/chaosinjection-lambda-samples.git
  • Complete the prerequisites documented here.

Step 2: Deploy using AWS CloudFormation

The CloudFormation template provided along with this blog deploys sample code. Execute runCfn.sh.

When this is complete, it returns the StackId that CloudFormation created:

Step 3: Run the chaos injection experiment

By default, the experiment is configured to inject chaos in the Java sample Lambda function. To change it to Python or Node.js Lambda functions, edit the experiment template and configure it to inject chaos using steps from here.

Step 4: Start the experiment

From the FIS Console, choose Start experiment.

 Start experiment

Wait until the experiment state changes to “Completed”.

Step 5: Run your test

At this stage, you can inject chaos into your Lambda function. Run the Lambda functions and observe their behavior.

1. Invoke the Lambda function using the command below:

aws lambda invoke --function-name NodeChaosInjectionExampleFn out --log-type Tail --query 'LogResult' --output text | base64 -d

2. The CLI command’s output displays the logs created by the Lambda layers, showing the latency introduced in this invocation.

In this example, the output shows that the Lambda layer injected 1799ms of random latency to the function.

The experiment injects random latency or failure in the Lambda function. Running the Lambda function again results in a different latency or failure. At this stage, you can test the application, and observe its behavior under conditions that may occur in the real world, like an increase in latency or Lambda function’s failure.

Step 6: Roll back the experiment

To roll back the experiment, run the SSM document for rollback. This rolls back the Lambda function to the state before chaos injection. Run this command:

aws ssm start-automation-execution \
--document-name "InjectLambdaChaos-Rollback" \
--document-version "\$DEFAULT" \
--parameters \
'{"FunctionName":["FunctionName"],"LayerArn":["LayerArn"],"assumeRole":["RoleARN"]}' \
--region eu-west-2

Cleaning up

To avoid incurring future charges, clean up the resources created by the CloudFormation template by running the following CLI command. Update the stack name to the one you provided when creating the stack.

aws cloudformation delete-stack --stack-name myChaosStack

Using FIS experiment results

You can use FIS experiment results to validate expected system behavior. An example of expected behavior is: “If application latency increases by 10%, there is less than a 1% increase in sign in failures.” After the experiment is completed, evaluate whether the application resiliency aligns with your business and technical expectations.

Conclusion

This blog explains an approach for testing reliability and resilience in Lambda functions using chaos engineering. This approach allows you to inject chaos in Lambda functions without changing the Lambda function code, with clear segregation of chaos injection and business logic. It provides a way for developers to focus on building business functionality using Lambda functions.

The Lambda layers that inject chaos can be developed and managed separately. This approach uses AWS FIS to run experiments that inject chaos using Lambda layers and test a serverless application’s performance and resiliency. Using the insights from the FIS experiment, you can find, fix, or document risks that surface in the application while testing.

For more serverless learning resources, visit Serverless Land.

Developing with Java and Spring Boot using Amazon CodeWhisperer

Post Syndicated from Rajdeep Banerjee original https://aws.amazon.com/blogs/devops/developing-with-java-and-spring-boot-using-amazon-codewhisperer/

Developers often have to work with multiple programming languages depending on the task at hand.  Sometimes, this is a result of choosing the right tool for a specific problem, or it is mandated by adhering to a specific technology adopted by a team.  Within a specific programming language, developers may have to work with frameworks, software libraries, and popular cloud services from providers such as Amazon Web Services (AWS). This must be done while adhering to secure and best programming practices.  Despite these challenges, developers must continue to release code at a sufficiently high velocity.    

Amazon CodeWhisperer is a real-time, AI coding companion that provides code suggestions in your IDE code editor. Developers can simply write a comment that outlines a specific task in plain English, such as “method to upload a file to S3.” Based on this, CodeWhisperer automatically determines which cloud services and public libraries are best suited to accomplish the task and recommends multiple code snippets directly in the IDE. The code is generated based on the context of your file, such as comments as well as surrounding source code and import statements. CodeWhisperer is available as part of the AWS Toolkit for Visual Studio Code and the JetBrains family of IDEs. CodeWhisperer is also available for AWS Cloud9, the AWS Lambda console, JupyterLab, Amazon SageMaker Studio, and AWS Glue Studio. CodeWhisperer supports popular programming languages like Java, Python, C#, TypeScript, Go, JavaScript, Rust, PHP, Kotlin, C, C++, shell scripting, SQL, and Scala.

 In this post, we will explore how to leverage CodeWhisperer in Java applications specifically using the Spring Boot framework. Spring Boot is an extension of the Spring framework that makes it easier to develop Java applications and microservices. Using CodeWhisperer, you will be spending less time creating boilerplate and repetitive code and more time focusing on business logic. You can generate entire Java Spring Boot functions and logical code blocks without having to search for code snippets from the web and customize them according to your requirements.  CodeWhisperer will enable you to responsibly use AI to create syntactically correct and secure Java Spring Boot applications. To enable CodeWhisperer in your IDE, please see Setting up CodeWhisperer for VS Code or Setting up Amazon CodeWhisperer for JetBrains depending on which IDE you are using.

Note: CodeWhisperer uses artificial intelligence to provide code recommendations, and its output is non-deterministic. The code shown here might differ from what you get from Amazon CodeWhisperer in your case.

Creating Data Transfer Objects (DTO)

Amazon CodeWhisperer makes it easier to develop the classes as you include import statements and provide brief comments on the purpose of the class.  Let’s start with the basics and develop a simple DTO or Plain Old Java Object (POJO).  This class will contain properties representing a product.  This DTO will be referenced later as part of a REST controller we generate to serialize the output to JSON.  CodeWhisperer will create a DTO class by using the class name and comments provided in plain language. Detailed and contextual comments will enable CodeWhisperer to generate code suggestions ranging from snippets to full functions in real time. For this use case, you are going to create a product class with id, name, price, description and rating properties.

Type the following or a similar comment in the class:

package com.amazonws.demo.cart.dto;

//create a Product class with id, name, price, description and rating properties. 

Quickly develop a Java class using Amazon CodeWhisperer

After entering the comment and pressing ENTER, CodeWhisperer will start providing code suggestions. You can use the Tab key to accept a suggestion based on the context or use the left/right arrow keys to see more suggestions. As shown below, the Product class is auto-generated with five properties (id, name, price, rating, and description), default getter/setter methods, and two constructors. If you need more properties, you can either update the comment to include the new properties or add them manually in the file:

package com.amazonws.demo.cart.dto;

//create a Product class with id, name, price, description and rating properties. 

public class Product {
    private String id;
    private String name;
    private Double price;
    private String description;
    private Integer rating;
    
    public Product() {
    }
  
    public Product(String id, String name, Double price) {
      this.id = id;
      this.name = name;
      this.price = price;
    }
  
    public String getId() {
      return id;
    }
  
    public void setId(String id) {
      this.id = id;
    }
  
    public String getName() {
      return name;
    }
  
    public void setName(String name) {
      this.name = name;
    }
  
    public Double getPrice() {
      return price;
    }
  
    public void setPrice(Double price) {
      this.price = price;
    }
    
    public String getDescription(){
      return description;
    }

    public void setDescription(String description){
      this.description = description;
    }

    public Integer getRating(){
      return rating;
    }
    
    public void setRating(Integer rating){
      this.rating = rating;
    }
}

Implementing Data Access Object  (DAO) pattern

Next, we implement the DAO pattern in this case for Amazon DynamoDB. The DAO pattern allows you to decouple the application/business layers from the persistence layer. It contains all the implementation logic for interacting with the persistence layer. We will create an entity class which represents the data to persist in DynamoDB along with the DAO class which contains the persistence logic.

First, create a ProductDaoEntity class which maps to the Amazon DynamoDB table. Create a blank ProductDaoEntity class and import DynamoDB packages for annotations, attributes, and partition key, as shown below. Notice that the class has a comment about the class structure and use of the DynamoDB enhanced client, so that CodeWhisperer can provide meaningful suggestions. The enhanced client allows you to map client-side classes to DynamoDB tables.

package com.amazonws.demo.cart.dao.entity;

import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbAttribute;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbSortKey;

//create a dynamo db ProductDaoEntity class with partition Key as id, name, price, description, rating attributes using dynamo db enhanced mapper annotations

CodeWhisperer can now infer the comments, context, and import statements and start to generate the class implementation. You can accept or reject suggestions based on your requirements. Below, you can see the complete class generated by CodeWhisperer.

@DynamoDbBean
public class ProductDaoEntity {

    private String id;
    private String name;
    private double price;
    private String description;
    private int rating;
    
    public ProductDaoEntity() {
    }

    public ProductDaoEntity(String id, String name, double price, String description, int rating) {
        this.id = id;
        this.name = name;
        this.price = price;
        this.description = description;
        this.rating = rating;
    }

    @DynamoDbPartitionKey
    @DynamoDbAttribute("id")
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    @DynamoDbSortKey
    @DynamoDbAttribute("name")
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @DynamoDbAttribute("price")
    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }

    @DynamoDbAttribute("description")
    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    @DynamoDbAttribute("rating")
    public int getRating() {
        return rating;
    }

    public void setRating(int rating) {
        this.rating = rating;
    }
    
    @Override
    public String toString() {
        return "ProductDaoEntity [id=" + id + ", name=" + name + ", price=" + price + ", description=" + description
                + ", rating=" + rating + "]";
    }

}

Notice how CodeWhisperer includes the appropriate DynamoDB related annotations such as @DynamoDbBean, @DynamoDbPartitionKey, @DynamoDbSortKey and @DynamoDbAttribute. This will be used to generate a TableSchema for mapping classes to tables.

Now that you have the mapper methods completed, you can create the actual persistence logic that is specific to DynamoDB. Create a class named ProductDaoImpl. (Note: it’s a best practice for a DAO implementation class to implement a DAO interface; we left that out for brevity, but a hypothetical sketch of such an interface appears after the service class below.) Using the import statements and comments, CodeWhisperer can auto-generate most of the DynamoDB persistence logic for you. Create a ProductDaoImpl class which uses a DynamoDbEnhancedClient object as shown below.

package com.amazonws.demo.cart.dao;

import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import com.amazonws.demo.cart.dao.Mapper.ProductMapper;
import com.amazonws.demo.cart.dao.entity.ProductDaoEntity;
import com.amazonws.demo.cart.dto.Product;

import software.amazon.awssdk.core.internal.waiters.ResponseOrException;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.Key;
import software.amazon.awssdk.enhanced.dynamodb.TableSchema;


@Component
public class ProductDaoImpl{
    private static final Logger logger = LoggerFactory.getLogger(ProductDaoImpl.class);
    private static final String PRODUCT_TABLE_NAME = "Products";
    private final DynamoDbEnhancedClient enhancedClient;

    @Autowired
    public ProductDaoImpl(DynamoDbEnhancedClient enhancedClient){
        this.enhancedClient = enhancedClient;

    }

Rather than providing comments that describe the functionality of the entire class, you can provide comments for each specific method here. You will use CodeWhisperer to generate the implementation details for interacting with DynamoDB. If the Products table doesn’t already exist, you will need to create it. Based on the comment, CodeWhisperer will generate a method to create a Products table if one does not exist. As you can see, you don’t have to memorize or search through the DynamoDB API documentation to implement this logic. CodeWhisperer will save you time and effort by giving contextualized suggestions.

//Create the DynamoDB table through enhancedClient object from ProductDaoEntity. If the table already exists, log the error.
    @PostConstruct
    public void createTable() {
        try {
            DynamoDbTable<ProductDaoEntity> productTable = enhancedClient.table(PRODUCT_TABLE_NAME, TableSchema.fromBean(ProductDaoEntity.class));
            productTable.createTable();
        } catch (Exception e) {
            logger.error("Error creating table: ", e);
        }
    }

Now, you can create the CRUD operations for the Product object. You can start with the createProduct operation to insert a new product entity into the DynamoDB table. Provide a comment about the purpose of the method along with relevant implementation details.

    // Create the createProduct() method 
    // Insert the ProductDaoEntity object into the DynamoDB table
    // Return the Product object

CodeWhisperer will start auto-generating the Create operation as shown below. You can accept or reject the suggestions as needed, or select an alternate suggestion, if available, using the left/right arrow keys.

   // Create the createProduct() method
   // Insert the ProductDaoEntity object into the DynamoDB table
   // Return the Product object
    public ProductDaoEntity createProduct(ProductDaoEntity productDaoEntity) {
        DynamoDbTable<ProductDaoEntity> productTable = enhancedClient.table(PRODUCT_TABLE_NAME, TableSchema.fromBean(ProductDaoEntity.class));
        productTable.putItem(productDaoEntity);
        return productDaoEntity;
    }  

Similarly, you can generate a method to return a specific product by id. Provide a contextual comment, as shown below.

// Get a particular ProductDaoEntity object from the DynamoDB table using the
 // product id and return the Product object

Below is the auto-generated code. CodeWhisperer has correctly analyzed the comments and generated the method to get a Product by its id.

    //Get a particular ProductDaoEntity object from the DynamoDB table using the
    // product id and return the Product object
    
    public ProductDaoEntity getProduct(String productId) {
        DynamoDbTable<ProductDaoEntity> productTable = enhancedClient.table(PRODUCT_TABLE_NAME, TableSchema.fromBean(ProductDaoEntity.class));
        ProductDaoEntity productDaoEntity = productTable.getItem(Key.builder().partitionValue(productId).build());
        return productDaoEntity;
    }

Similarly, you can implement the DAO layer to delete and update products in the DynamoDB table; one possible shape for those methods is sketched below.
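
The following is an illustrative sketch only; CodeWhisperer’s actual suggestions may differ, and deleting by partition key alone mirrors the getProduct() example above:

    // Delete a ProductDaoEntity from the DynamoDB table using the product id
    public void deleteProduct(String productId) {
        DynamoDbTable<ProductDaoEntity> productTable = enhancedClient.table(PRODUCT_TABLE_NAME, TableSchema.fromBean(ProductDaoEntity.class));
        productTable.deleteItem(Key.builder().partitionValue(productId).build());
    }

    // Update an existing ProductDaoEntity in the DynamoDB table and return the updated item
    public ProductDaoEntity updateProduct(ProductDaoEntity productDaoEntity) {
        DynamoDbTable<ProductDaoEntity> productTable = enhancedClient.table(PRODUCT_TABLE_NAME, TableSchema.fromBean(ProductDaoEntity.class));
        return productTable.updateItem(productDaoEntity);
    }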

Creating a Service Object

Next, you will generate the ProductService class, which retrieves a Product using the ProductDao. In Spring Boot, annotating a class with @Service allows it to be detected through classpath scanning.

Let’s provide a comment to generate the ProductService class:

package com.amazonws.demo.cart.service;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.amazonws.demo.cart.dto.Product;
import com.amazonws.demo.cart.dao.ProductDao;

//Create a class called ProductService with methods: getProductById(string id),
//getAllProducts(), updateProduct(Product product), 
//deleteProduct(string id), createProduct(Product product)

CodeWhisperer will create the following class implementation. Note that you may have to adjust return types or method parameter types as needed. Notice the @Service annotation for this class, along with the productDao property being @Autowired.

@Service
public class ProductService {

   @Autowired
   ProductDao productDao;

   public Product getProductById(String id) {
      return productDao.getProductById(id);
   }

   public List<Product> getProducts() {
      return productDao.getAllProducts();
   }

   public void updateProduct(Product product) {
      productDao.updateProduct(product);
   }

   public void deleteProduct(String id) {
      productDao.deleteProduct(id);
   }

   public void createProduct(Product product) {
      productDao.createProduct(product);
   }

}
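
As noted earlier, the ProductDao interface was omitted from the walkthrough for brevity. A hypothetical shape, consistent with how ProductService uses it above, could be:

package com.amazonws.demo.cart.dao;

import java.util.List;

import com.amazonws.demo.cart.dto.Product;

// Hypothetical DAO contract implemented by ProductDaoImpl; the method names mirror
// the calls made from ProductService above.
public interface ProductDao {

    Product getProductById(String id);

    List<Product> getAllProducts();

    void createProduct(Product product);

    void updateProduct(Product product);

    void deleteProduct(String id);
}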

Creating a REST Controller

The REST controller typically handles incoming client HTTP requests and responses, and its output is typically serialized into JSON or XML format. Using annotations, Spring Boot maps HTTP methods such as GET, PUT, POST, and DELETE to appropriate methods within the controller. It also binds the HTTP request data to parameters defined within the controller methods.

Provide a comment as shown below specifying that the class is a REST controller that should support CORS along with the required methods.

package com.amazonws.demo.product.controller;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.amazonws.demo.product.dto.Product;
import com.amazonws.demo.product.service.ProductService;

//create a RestController called ProductController to get all
//products, get a product by id, create a product, update a product,
//and delete a product. support cross origin requests from all origins.
 

Notice how the appropriate annotations are added to support CORS along with the mapping annotations that correspond with the GET, PUT, POST and DELETE HTTP methods. The @RestController annotation is used to specify that this controller returns an object serialized as XML or JSON rather than a view.

@RestController
@RequestMapping("/product")
@CrossOrigin(origins = "*")
public class ProductController {

    @Autowired
    private ProductService productService;
    
    @GetMapping("/getAllProducts")
    public List<Product> getAllProducts() {
        return productService.getAllProducts();
    }

    @GetMapping("/getProductById/{id}")
    public Product getProductById(@PathVariable String id) {
        return productService.getProductById(id);
    }

    @PostMapping("/createProduct")
    public Product createProduct(@RequestBody Product product) {
        return productService.createProduct(product);
    }

    @PutMapping("/updateProduct")
    public Product updateProduct(@RequestBody Product product) {
        return productService.updateProduct(product);
    }

    @DeleteMapping("/deleteProduct/{id}")
    public void deleteProduct(@PathVariable String id) {
        productService.deleteProduct(id);
    }

}

Conclusion

In this post, you have used CodeWhisperer to generate DTOs, controllers, service objects, and persistence classes. By inferring your natural language comments, CodeWhisperer provides contextual code snippets to accelerate your development. CodeWhisperer also has features like the reference tracker, which detects whether a code suggestion might resemble open-source training data and can flag such suggestions with the open-source project’s repository URL, file reference, and license information for your review before deciding whether to incorporate the suggested code.

Try out Amazon CodeWhisperer today to get a head start on your coding projects.

Rajdeep Banerjee

Rajdeep Banerjee is a Senior Partner Solutions Architect at AWS helping strategic partners and clients in the AWS cloud migration and digital transformation journey. Rajdeep focuses on working with partners to provide technical guidance on AWS, collaborating with them to understand their technical requirements, and designing solutions to meet their specific needs. He is a member of the Serverless technical field community. Rajdeep is based out of Richmond, Virginia.

Jason Varghese

Jason is a Senior Solutions Architect at AWS guiding enterprise customers on their cloud migration and modernization journeys. He has served in multiple engineering leadership roles and has over 20 years of experience architecting, designing and building scalable software solutions. Jason holds a bachelor’s degree in computer engineering from the University of Oklahoma and an MBA from the University of Central Oklahoma.

Debugging SnapStart-enabled Lambda functions made easy with AWS X-Ray

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/compute/debugging-snapstart-enabled-lambda-functions-made-easy-with-aws-x-ray/

This post is written by Rahul Popat (Senior Solutions Architect) and Aneel Murari (Senior Solutions Architect) 

Today, AWS X-Ray is announcing support for SnapStart-enabled AWS Lambda functions. Lambda SnapStart is a performance optimization that significantly improves the cold startup times for your functions. Announced at AWS re:Invent 2022, this feature delivers up to 10 times faster function startup times for latency-sensitive Java applications at no extra cost, and with minimal or no code changes.

X-Ray is a distributed tracing system that provides an end-to-end view of how an application is performing. X-Ray collects data about requests that your application serves and provides tools you can use to gain insight into opportunities for optimizations. Now you can use X-Ray to gain insights into the performance improvements of your SnapStart-enabled Lambda function.

With today’s feature launch, by turning on X-Ray tracing for SnapStart-enabled Lambda functions, you see separate subsegments corresponding to the Restore and Invoke phases for your Lambda function’s execution.

How does Lambda SnapStart work?

With SnapStart, the function’s initialization is done ahead of time when you publish a function version. Lambda takes an encrypted snapshot of the initialized execution environment and persists the snapshot in a tiered cache for low latency access.

When the function is first invoked or scaled, Lambda restores the cached execution environment from the persisted snapshot instead of initializing anew. This results in reduced startup times.

X-Ray tracing before this feature launch

Using an example of a Hello World application written in Java, a Lambda function is configured with SnapStart and fronted by Amazon API Gateway:

Before today’s launch, X-Ray was not supported for SnapStart-enabled Lambda functions. So if you had enabled X-Ray tracing for API Gateway, the X-Ray trace for the sample application would look like:

The trace only shows the overall duration of the Lambda service call. You do not have insight into your function’s execution or the breakdown of different phases of Lambda function lifecycle.

Next, enable X-Ray for your Lambda function and see how you can view a breakdown of your function’s total execution duration.

Prerequisites for enabling X-Ray for SnapStart-enabled Lambda function

SnapStart is only supported for Lambda functions with Java 11 and newly launched Java 17 managed runtimes. You can only enable SnapStart for the published versions of your Lambda function. Once you’ve enabled SnapStart, Lambda publishes all subsequent versions with snapshots. You may also create a Lambda function alias, which points to the published version of your Lambda function.

Make sure that the Lambda function’s execution role has appropriate permissions to write to X-Ray.

Enabling AWS X-Ray for your Lambda function with SnapStart

You can enable X-Ray tracing for your Lambda function using AWS Management Console, AWS Command Line Interface (AWS CLI), AWS Serverless Application Model (AWS SAM), AWS CloudFormation template, or via AWS Cloud Deployment Kit (CDK).

This blog shows how you can achieve this via AWS Management Console and AWS SAM. For more information on enabling SnapStart and X-Ray using other methods, refer to AWS Lambda Developer Guide.

Enabling SnapStart and X-Ray via AWS Management Console

To enable SnapStart and X-Ray for Lambda function via the AWS Management Console:

  1. Navigate to your Lambda Function.
  2. On the Configuration tab, choose Edit and change the SnapStart attribute value from None to PublishedVersions.
  3. Choose Save.

To enable X-Ray via the AWS Management Console:

  1. Navigate to your Lambda Function.
  2. On the Configuration tab, scroll down to the Monitoring and operations tools card and choose Edit.
  3. Under AWS X-Ray, enable Active tracing.
  4. Choose Save.

To publish a new version of Lambda function via the AWS Management Console:

  1. Navigate to your Lambda Function.
  2. On the Versions tab, choose Publish new version.
  3. Verify that PublishedVersions is shown below SnapStart.
  4. Choose Publish.

To create an alias for a published version of your Lambda function via the AWS Management Console:

  1. Navigate to your Lambda Function.
  2. On the Aliases tab, choose Create alias.
  3. Provide a Name for an alias and select a Version of your Lambda function to point the alias to.
  4. Choose Save.

Enabling SnapStart and X-Ray via AWS SAM

To enable SnapStart and X-Ray for Lambda function via AWS SAM:

    1. Enable Lambda function versions and create an alias by adding an AutoPublishAlias property in the template.yaml file. AWS SAM automatically publishes a new version for each new deployment and assigns the alias to the newly published version.
      Resources:
        MyFunction:
          Type: AWS::Serverless::Function
          Properties:
            […]
            AutoPublishAlias: live
    2. Enable SnapStart on the Lambda function by adding the SnapStart property in the template.yaml file.
      Resources:
        MyFunction:
          Type: AWS::Serverless::Function
          Properties:
            […]
            SnapStart:
              ApplyOn: PublishedVersions
    3. Enable X-Ray for the Lambda function by adding the Tracing property in the template.yaml file.
      Resources:
        MyFunction:
          Type: AWS::Serverless::Function
          Properties:
            […]
            Tracing: Active

You can find the complete AWS SAM template for the preceding example in this GitHub repository.

Using X-Ray to gain insights into SnapStart-enabled Lambda function’s performance

To demonstrate X-Ray integration for your Lambda function with SnapStart, you can build, deploy, and test the sample Hello World application using AWS SAM CLI. To do this, follow the instructions in the README file of the GitHub project.

The build and deployment output with AWS SAM looks like this:

Once your application is deployed to your AWS account, note that SnapStart and X-Ray tracing are enabled for your Lambda function. You should also see an alias `live` created against the published version of your Lambda function.

You should also have an API deployed via API Gateway, which is pointing to the `live` alias of your Lambda function as the backend integration.

Now, invoke your API via the `curl` command or any other HTTP client. Make sure to replace the URL with your own API’s URL.

$ curl --location --request GET https://{rest-api-id}.execute-api.{region}.amazonaws.com/{stage}/hello

Navigate to Amazon CloudWatch and under the X-Ray service map, you see a visual representation of the trace data generated by your application.

Under Traces, you can see the individual traces, Response code, Response time, Duration, and other useful metrics.

Select a trace ID to see the breakdown of total Duration on your API call.

You can now see the complete trace for the Lambda function’s invocation with breakdown of time taken during each phase. You can see the Restore duration and actual Invocation duration separately.

Restore duration shown in the trace includes the time it takes for Lambda to restore a snapshot on the microVM, load the runtime (JVM), and run any afterRestore hooks if specified in your code. Note that the process of restoring snapshots can include time spent on activities outside the microVM. This time is not reported in the Restore sub-segment, but is part of the AWS::Lambda segment in X-Ray traces.

This helps you better understand the latency of your Lambda function’s execution, and enables you to identify and troubleshoot performance issues and errors.
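
If your function registers runtime hooks, the time spent in afterRestore is included in that Restore duration. A minimal sketch of such a hook using the org.crac package might look like the following (the class name and refresh logic are placeholders):

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Registers itself so that Lambda runs beforeCheckpoint when the snapshot is taken
// at deploy time, and afterRestore when the snapshot is resumed.
public class ConnectionManager implements Resource {

    public ConnectionManager() {
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // Placeholder: release resources that should not be captured in the snapshot.
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // Placeholder: re-establish connections or refresh state; this work is
        // reported as part of the Restore duration in the X-Ray trace.
    }
}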

Conclusion

This blog post shows how you can enable AWS X-Ray for your Lambda function enabled with SnapStart, and measure the end-to-end performance of such functions using X-Ray console. You can now see a complete breakdown of your Lambda function’s execution time. This includes Restore duration along with the Invocation duration, which can help you to understand your application’s startup times (cold starts), diagnose slowdowns, or troubleshoot any errors and timeouts.

To learn more about the Lambda SnapStart feature, visit the AWS Lambda Developer Guide.

For more serverless learning resources, visit Serverless Land.

S3 URI Parsing is now available in AWS SDK for Java 2.x

Post Syndicated from David Ho original https://aws.amazon.com/blogs/devops/s3-uri-parsing-is-now-available-in-aws-sdk-for-java-2-x/

The AWS SDK for Java team is pleased to announce the general availability of Amazon Simple Storage Service (Amazon S3) URI parsing in the AWS SDK for Java 2.x. You can now parse path-style and virtual-hosted-style S3 URIs to easily retrieve the bucket, key, region, style, and query parameters. The new parseUri() API and S3Uri class provide the highly-requested parsing features that many customers miss from the AWS SDK for Java 1.x. Please note that Amazon S3 AccessPoints and Amazon S3 on Outposts URI parsing are not supported.

Motivation

Users often need to extract important components like bucket and key from stored S3 URIs to use in S3Client operations. The new parsing APIs allow users to conveniently do so, bypassing the need for manual parsing or storing the components separately.

Getting Started

To begin, first add the dependency for S3 to your project.

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <version>${s3.version}</version>
</dependency>

Next, instantiate S3Client and S3Utilities objects.

S3Client s3Client = S3Client.create();
S3Utilities s3Utilities = s3Client.utilities();

Parsing an S3 URI

To parse your S3 URI, call parseUri() from S3Utilities, passing in the URI. This returns a parsed S3Uri object. If you have a String of the URI, you’ll need to convert it into a URI object first.

String url = "https://s3.us-west-1.amazonaws.com/myBucket/resources/doc.txt?versionId=abc123&partNumber=77&partNumber=88";
URI uri = URI.create(url);
S3Uri s3Uri = s3Utilities.parseUri(uri);

With the S3Uri, you can call the appropriate getter methods to retrieve the bucket, key, region, style, and query parameters. If the bucket, key, or region is not specified in the URI, an empty Optional will be returned. If query parameters are not specified in the URI, an empty map will be returned. If the field is encoded in the URI, it will be returned decoded.

Region region = s3Uri.region().orElse(null); // Region.US_WEST_1
String bucket = s3Uri.bucket().orElse(null); // "myBucket"
String key = s3Uri.key().orElse(null); // "resources/doc.txt"
boolean isPathStyle = s3Uri.isPathStyle(); // true

Retrieving query parameters

There are several APIs for retrieving the query parameters. You can return a Map<String, List<String>> of the query parameters. Alternatively, you can specify a query parameter to return the first value for the given query, or return the list of values for the given query.

Map<String, List<String>> queryParams = s3Uri.rawQueryParameters(); // {versionId=["abc123"], partNumber=["77", "88"]}
String versionId = s3Uri.firstMatchingRawQueryParameter("versionId").orElse(null); // "abc123"
String partNumber = s3Uri.firstMatchingRawQueryParameter("partNumber").orElse(null); // "77"
List<String> partNumbers = s3Uri.firstMatchingRawQueryParameters("partNumber"); // ["77", "88"]
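
As a minimal sketch (assuming the bucket and object from the example URI above actually exist and are reachable with your default credentials and Region), you can feed the parsed components straight back into an S3Client operation, which is the main motivation for the parsing API:

GetObjectRequest request = GetObjectRequest.builder()
        .bucket(s3Uri.bucket().orElseThrow())
        .key(s3Uri.key().orElseThrow())
        .versionId(s3Uri.firstMatchingRawQueryParameter("versionId").orElse(null))
        .build();

// Reuses the s3Client and s3Uri created in the earlier snippets.
ResponseBytes<GetObjectResponse> object = s3Client.getObjectAsBytes(request);

This snippet uses software.amazon.awssdk.services.s3.model.GetObjectRequest, software.amazon.awssdk.services.s3.model.GetObjectResponse, and software.amazon.awssdk.core.ResponseBytes.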

Caveats

Special Characters

If you work with object keys or query parameters with reserved or unsafe characters, they must be URL-encoded, e.g., replace whitespace " " with "%20".

Valid:
"https://s3.us-west-1.amazonaws.com/myBucket/object%20key?query=%5Bbrackets%5D"

Invalid:
"https://s3.us-west-1.amazonaws.com/myBucket/object key?query=[brackets]"

Virtual-hosted-style URIs

If you work with virtual-hosted-style URIs with bucket names that contain a dot, i.e., ".", the dot must not be URL-encoded.

Valid:
"https://my.Bucket.s3.us-west-1.amazonaws.com/key"

Invalid:
"https://my%2EBucket.s3.us-west-1.amazonaws.com/key"

Conclusion

In this post, I discussed parsing S3 URIs in the AWS SDK for Java 2.x and provided code examples for retrieving the bucket, key, region, style, and query parameters. To learn more about how to set up and begin using the feature, visit our Developer Guide. If you are curious about how it is implemented, check out the source code on GitHub. As always, the AWS SDK for Java team welcomes bug reports, feature requests, and pull requests on the aws-sdk-java-v2 GitHub repository.

Week in Review – AWS Verified Access, Java 17, Amplify Flutter, Conferences, and More – May 1, 2023

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/week-in-review-aws-verified-access-java-17-amplify-flutter-conferences-and-more-may-1-2023/

Conference season has started and I was happy to meet and talk with iOS and Swift developers at the New York Swifty conference last week. I will travel again to Turin (Italy), Amsterdam (Netherlands), Frankfurt (Germany), and London (UK) in the coming weeks. Feel free to stop by and say hi if you are around. But, while I was queuing for passport control at JFK airport, AWS teams continued to listen to your feedback and innovate on your behalf.

What happened on AWS last week? I counted 26 new capabilities since last Monday (not counting last Friday, since I am writing these lines before the start of the day in the US). Here are the eight that caught my attention.

Last Week on AWS

Amplify Flutter now supports web and desktop apps. You can now write Flutter applications that target six platforms, including iOS, Android, Web, Linux, macOS, and Windows with a single codebase. This update encompasses not only the Amplify libraries but also the Flutter Authenticator UI library, which has been entirely rewritten in Dart. As a result, you can now deliver a consistent experience across all targeted platforms.

AWS Lambda adds support for Java 17. AWS Lambda now supports Java 17 as both a managed runtime and a container base image. Developers creating serverless applications in Lambda with Java 17 can take advantage of new language features including Java records, sealed classes, and multi-line strings. The Lambda Java 17 runtime also has numerous performance improvements, including optimizations when running Lambda functions on Graviton 2 processors. It supports AWS Lambda SnapStart (in supported Regions) for fast cold starts, and the latest versions of the popular Spring Boot 3 and Micronaut 4 application frameworks.

AWS Verified Access is now generally available. I first wrote about Verified Access when we announced the preview at the re:Invent conference last year. This new service helps you provide secure access to your corporate applications without using a VPN. Built on AWS Zero Trust principles, Verified Access lets you implement a work-from-anywhere model with added security and scalability.

AWS Support is now available in Korean. As the number of customers speaking Korean grows, AWS Support is invested in providing the best support experience possible. You can now communicate with AWS Support engineers and agents in Korean when you create a support case at the AWS Support Center.

AWS DataSync Discovery is now generally available. DataSync Discovery enables you to understand your on-premises storage performance and capacity through automated data collection and analysis. It helps you quickly identify data to be migrated and evaluate suggested AWS Storage services that align with your performance and capacity needs. Capabilities added since preview include support for NetApp ONTAP 9.7, recommendations at cluster and storage virtual machine (SVM) levels, and discovery job events in Amazon EventBridge.

Amazon Location Service adds support for long-distance matrix routing. This makes it easier for you to quickly calculate driving time and driving distance between multiple origins and destinations, no matter how far apart they are. Developers can now make a single API request to calculate up to 122,500 routes (350 origins and 350 destinations) within a 180 km region or up to 100 routes without any distance limitation.

AWS Firewall Manager adds support for multiple administrators. You can now create up to 10 AWS Firewall Manager administrator accounts from AWS Organizations to manage your firewall policies. You can delegate responsibility for firewall administration at a granular scope by restricting access based on OU, account, policy type, and Region, thereby enabling policy management tasks to be implemented faster and more effectively.

AWS AppSync supports TypeScript and source maps in JavaScript resolvers. With this update, you can take advantage of TypeScript features when you write JavaScript resolvers. With the updated libraries, you get improved support for types and generics in AppSync’s utility functions. The updated AppSync documentation provides guidance on how to get started and how to bundle your code when you want to use TypeScript.

Amazon Athena Provisioned Capacity. Athena is a query service that makes it simple to analyze data in S3 data lakes and 30 different data sources, including on-premises data sources or other cloud systems, using standard SQL queries. Athena is serverless, so there is no infrastructure to manage, and–until today–you pay only for the queries that you run. Starting last week, you can now get dedicated capacity for your queries and use new workload management features to prioritize, control, and scale your most important queries, paying only for the capacity you provision.

X in Y – We made existing services available in additional Regions and locations:

Upcoming AWS Events
And to finish this post, I recommend you check your calendars and sign up for these AWS events:

AWS Serverless Innovation Day – Join us on May 17, 2023, for a virtual event hosted on the Twitch AWS channel. We will showcase AWS serverless technology choices such as AWS Lambda, Amazon ECS with AWS Fargate, Amazon EventBridge, and AWS Step Functions. In addition, we will share serverless modernization success stories, use cases, and best practices.

AWS re:Inforce 2023 – Now register for AWS re:Inforce, in Anaheim, California, June 13–14. AWS Chief Information Security Officer CJ Moses will share the latest innovations in cloud security and what AWS Security is focused on. The breakout sessions will provide real-world examples of how security is embedded into the way businesses operate. To learn more and get the limited discount code to register, see CJ’s blog post Gain insights and knowledge at AWS re:Inforce 2023 in the AWS Security Blog.

AWS Global Summits – Check your calendars and sign up for the AWS Summit close to where you live or work: Seoul (May 3–4), Berlin and Singapore (May 4), Stockholm (May 11), Hong Kong (May 23), Amsterdam (June 1), London (June 7), Madrid (June 15), and Milano (June 22).

AWS Community Day – Join community-led conferences driven by AWS user group leaders close to your city: Chicago (June 15), Manila (June 29–30), and Munich (September 14). Recently, we have been bringing together AWS user groups from around the world into Meetup Pro accounts. Find your group and its meetups in your city!

AWS User Group Peru Conference – There is more than a new edge location opening in Lima. The local AWS User Group announced a one-day cloud event in Spanish and English in Lima on September 23. Three of us from the AWS News blog team will attend. I will be joined by my colleagues Marcia and Jeff. Save the date and register today!

You can browse all upcoming AWS-led in-person and virtual events and developer-focused events such as AWS DevDay.

Stay Informed
That was my selection for this week! To better keep up with all of this news, don’t forget to check out the following resources:

That’s all for this week. Check back next Monday for another Week in Review!

— seb

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

AWS Lambda now supports Java 17

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/java-17-runtime-now-available-on-aws-lambda/

This post was written by Mark Sailes, Senior Specialist Solutions Architect, Serverless.

You can now develop AWS Lambda functions with the Amazon Corretto distribution of Java 17. This version of Corretto comes with long-term support (LTS), which means it will receive updates and bug fixes for an extended period, providing stability and reliability to developers who build applications on it. This runtime also supports AWS Lambda SnapStart, so you can upgrade to the latest managed runtime without losing your performance improvements.

Java 17 comes with new language features for developers, including Java records, sealed classes, and multi-line strings. It also comes with improvements to further optimize running Java on ARM CPU architectures, such as Graviton.

This blog explains how to get started using Java 17 with Lambda, how to use the new language features, and what else has changed with the runtime.

New language features

In Java, it is common to pass data using an immutable object. Before Java 17, this resulted in boilerplate code or the use of an external library like Lombok. For example, a generic Person object may look like this:

import java.util.Objects;

public class Person {
    
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }
    
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Person person = (Person) o;

        if (age != person.age) return false;
        return Objects.equals(name, person.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, age);
    }

    @Override
    public String toString() {
        return "Person{" +
                "name='" + name + '\'' +
                ", age=" + age +
                '}';
    }
}

In Java 17, you can replace this entire class with a record, expressed as:

public record Person(String name, int age) {

}

The equals, hashCode, and toString methods, as well as the private, final fields and public constructor, are generated by the Java compiler. This simplifies the code that you have to maintain.

The Java 17 managed runtime introduces a new feature that allows developers to use records as the object representing event data in the handler method. Records, introduced in Java 14, provide a simpler syntax for declaring classes that primarily store data: an immutable class with a set of named properties and accessor methods, which makes them a natural fit for event data. This simplifies your code, making it easier to read and maintain, and it can also improve performance, since records are immutable by default and the Java runtime can optimize memory allocation and garbage collection. To use a record as the parameter for the event handler method, define the record with the required properties and pass the record to the method.

For example, the following Lambda function uses a Person record to represent the event data:

import java.util.Optional;
import java.util.UUID;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class App implements RequestHandler<Person, APIGatewayProxyResponseEvent> {

    public APIGatewayProxyResponseEvent handleRequest(Person person, Context context) {

        String id = UUID.randomUUID().toString();
        // createPerson is an application-specific helper that persists the record
        // and returns the saved Person when the operation succeeds.
        Optional<Person> savedPerson = createPerson(id, person.name(), person.age());
        if (savedPerson.isPresent()) {
            return new APIGatewayProxyResponseEvent().withStatusCode(200);
        } else {
            return new APIGatewayProxyResponseEvent().withStatusCode(500);
        }
    }
}
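
The examples above focus on records. As a brief illustrative sketch (not part of the Lambda samples) of the other Java 17 language features mentioned earlier, sealed classes restrict which types may implement an interface, and text blocks make multi-line strings easier to read:

// Sealed classes: only the listed record types may implement PaymentEvent.
sealed interface PaymentEvent permits CardPayment, BankTransfer {}

record CardPayment(String cardId, long amountCents) implements PaymentEvent {}
record BankTransfer(String iban, long amountCents) implements PaymentEvent {}

class PaymentEventFormatter {

    // Text blocks (multi-line strings) avoid concatenation and escaped newlines.
    static String describe(PaymentEvent event) {
        return """
                Received payment event:
                  %s
                """.formatted(event);
    }
}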

Garbage collection

Java 17 makes available two new Java garbage collectors (GCs): Z Garbage Collector (ZGC) introduced in Java 15 and Shenandoah introduced in Java 12.

You can evaluate GCs against three axes:

  • Throughput: the amount of work that can be done.
  • Latency: how long work takes to complete.
  • Memory footprint: how much additional memory is required.

Both the ZGC and Shenandoah GCs trade throughput and footprint to focus on reducing latency where possible. They perform all expensive work concurrently, without stopping the execution of application threads for more than a few milliseconds.

In the Java 17 managed runtime, Lambda continues to use the Serial GC as it does in Java 11. This is a low footprint GC well-suited for single processor machines, which is often the case when using Lambda functions.

You can change the default GC to an alternative using the JAVA_TOOL_OPTIONS environment variable if required. For example, if you are running with more memory and therefore multiple vCPUs, consider the Parallel GC. To use this, set JAVA_TOOL_OPTIONS to -XX:+UseParallelGC.
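
As a minimal sketch (assuming the AWS CDK v2 for Java, which is also used later in this post, and a built JAR at the given path), this is one way to set JAVA_TOOL_OPTIONS so that a function uses the Parallel GC:

import java.util.Map;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.Runtime;
import software.constructs.Construct;

public class GcConfigExample {

    static Function parallelGcFunction(Construct scope) {
        return Function.Builder.create(scope, "ParallelGcFunction")
                .runtime(Runtime.JAVA_17)
                .handler("helloworld.App::handleRequest")
                .code(Code.fromAsset("target/hello-world.jar"))
                // More memory allocates more vCPUs, which is where the Parallel GC helps.
                .memorySize(2048)
                .environment(Map.of("JAVA_TOOL_OPTIONS", "-XX:+UseParallelGC"))
                .build();
    }
}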

Runtime JVM configuration changes

In the Java 17 runtime, the JVM flag for tiered compilation is now set to stop at level 1 by default. In previous versions, you would have to do this by setting the JAVA_TOOL_OPTIONS to -XX:+TieredCompilation -XX:TieredStopAtLevel=1.

This is helpful in the majority of synchronous workloads because it can reduce startup latency by up to 60%. For more information on configuring tiered compilation, see "Optimizing AWS Lambda function performance for Java".

If you are running a workload that processes large numbers of batches, simulates events, or performs any other highly repetitive action, you might find that this setting increases your function’s duration. An example of this would be Monte Carlo simulations. To change back to the previous settings, set JAVA_TOOL_OPTIONS to -XX:-TieredCompilation.

Using Java 17 in Lambda

AWS Management Console

To use the Java 17 runtime to develop your Lambda functions, set the runtime value to Java 17 when creating or updating a function.

To update an existing Lambda function to Java 17, navigate to the function in the Lambda console, then choose Edit in the Runtime settings panel. The new version is available in the Runtime dropdown:

AWS Serverless Application Model (AWS SAM)

In AWS SAM, set the Runtime attribute to java17 to use this version:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Simple Lambda Function

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java17
      MemorySize: 1024

AWS SAM supports the generation of this template with Java 17 out of the box for new serverless applications using the sam init command. Refer to the AWS SAM documentation here.

AWS Cloud Development Kit (AWS CDK)

In the AWS CDK, set the runtime attribute to Runtime.JAVA_17 to use this version. In Java:

import software.constructs.Construct;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.StackProps;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.Runtime;

public class InfrastructureStack extends Stack {

    public InfrastructureStack(final Construct parent, final String id, final StackProps props) {
        super(parent, id, props);

        Function.Builder.create(this, "HelloWorldFunction")
                .runtime(Runtime.JAVA_17)
                .code(Code.fromAsset("target/hello-world.jar"))
                .handler("helloworld.App::handleRequest")
                .memorySize(1024)
                .build();
    }
}

Application frameworks

Java application frameworks Spring and Micronaut have announced that their latest versions, Spring Boot 3 and Micronaut 4, require Java 17 as a minimum. Quarkus 3 continues to support Java 11. Java 17 is faster than Java 8 or 11, and framework developers want to pass on the performance improvements to customers. They also want to use the improvements to the Java language in their own code and show code examples with the most modern ways of working.

To try Micronaut 4 and Java 17, you can use the Micronaut launch web service to generate an example project that includes all the application code and AWS Cloud Development Kit (CDK) infrastructure as code you need to deploy it to Lambda.

The following command creates a Micronaut application, which uses the common controller pattern to handle REST requests. The infrastructure code will create an Amazon API Gateway and proxy all its requests to the Lambda function.

curl --location --request GET 'https://launch.micronaut.io/create/default/blog.example.lambda-java-17?lang=JAVA&build=MAVEN&test=JUNIT&javaVersion=JDK_17&features=amazon-api-gateway&features=aws-cdk&features=crac' --output lambda-java-17.zip

Unzip the downloaded file then run the following Maven command to generate the deployable artifact.

./mvnw package

Finally, deploy the resources to AWS with CDK:

cd infra
cdk deploy

Conclusion

This blog post describes how to create a new Lambda function running the Amazon Corretto Java 17 managed runtime. It introduces the new records language feature to model the event being sent to your Lambda function and explains how changes to the default JVM configuration might affect the performance of your functions.

If you’re interested in learning more, visit serverlessland.com. If this has inspired you to try migrating an existing application to Lambda, read our re-platforming guide.

Serverless ICYMI Q1 2023

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/serverless-icymi-q1-2023/

Welcome to the 21st edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all the most recent product launches, feature enhancements, blog posts, webinars, live streams, and other interesting things that you might have missed!


In case you missed our last ICYMI, check out what happened last quarter here.

Artificial intelligence (AI) technologies, ChatGPT, and DALL-E are creating significant interest in the industry at the moment. Find out how to integrate serverless services with ChatGPT and DALL-E to generate unique bedtime stories for children.

Example notification of a story hosted with Next.js and App Runner

Serverless Land is a website maintained by the Serverless Developer Advocate team to help you build serverless applications and includes workshops, code examples, blogs, and videos. There is now enhanced search functionality so you can search across resources, patterns, and video content.

ServerlessLand search

AWS Lambda

AWS Lambda has improved how concurrency works with Amazon SQS. You can now control the maximum number of concurrent Lambda functions invoked.

The launch blog post explains the scaling behavior of Lambda using this architectural pattern, challenges this feature helps address, and a demo of maximum concurrency in action.

Maximum concurrency is set to 10 for the SQS queue.

AWS Lambda Powertools is an open-source library to help you discover and incorporate serverless best practices more easily. Lambda Powertools for .NET is now generally available and currently focused on three observability features: distributed tracing (Tracer), structured logging (Logger), and asynchronous business and application metrics (Metrics). Powertools is also available for the Python, Java, and TypeScript/Node.js programming languages.

To learn more:

Lambda announced a new feature, runtime management controls, which provide more visibility and control over when Lambda applies runtime updates to your functions. The runtime controls are optional capabilities for advanced customers who require more control over their runtime changes. You can now specify a runtime management configuration for each function with three settings: Automatic (default), Function update, or Manual.

There are three new Amazon CloudWatch metrics for asynchronous Lambda function invocations: AsyncEventsReceived, AsyncEventAge, and AsyncEventsDropped. You can track the asynchronous invocation requests sent to Lambda functions to monitor any delays in processing and take corrective actions if required. The launch blog post explains the new metrics and how to use them to troubleshoot issues.

Lambda now supports Amazon DocumentDB change streams as an event source. You can use Lambda functions to process new documents, track updates to existing documents, or log deleted documents. You can use any programming language that is supported by Lambda to write your functions.

There is a helpful blog post suggesting best practices for developing portable Lambda functions that allow you to port your code to containers if you later choose to.

AWS Step Functions

AWS Step Functions has expanded its AWS SDK integrations with support for 35 additional AWS services including Amazon EMR Serverless, AWS Clean Rooms, AWS IoT FleetWise, AWS IoT RoboRunner and 31 other AWS services. In addition, Step Functions also added support for 1000+ new API actions from new and existing AWS services such as Amazon DynamoDB and Amazon Athena. For the full list of added services, visit AWS SDK service integrations.

Amazon EventBridge

Amazon EventBridge has launched the AWS Controllers for Kubernetes (ACK) for EventBridge and Pipes. This allows you to manage EventBridge resources, such as event buses, rules, and pipes, using the Kubernetes API and resource model (custom resource definitions).

EventBridge event buses now also support enhanced integration with Service Quotas. Your quota increase requests for limits such as PutEvents transactions-per-second, number of rules, and invocations per second among others will be processed within one business day or faster, enabling you to respond quickly to changes in usage.

AWS SAM

The AWS Serverless Application Model (SAM) Command Line Interface (CLI) has added the sam list command. You can now show resources defined in your application, including the endpoints, methods, and stack outputs required to test your deployed application.

AWS SAM has a preview of sam build support for building and packaging serverless applications developed in Rust. You can use cargo-lambda in the AWS SAM CLI build workflow and AWS SAM Accelerate to iterate on your code changes rapidly in the cloud.

You can now use AWS SAM connectors as a source resource parameter. Previously, you could only define AWS SAM connectors as an AWS::Serverless::Connector resource. Now you can add the resource attribute on a connector’s source resource, which makes templates more readable and easier to update over time.

AWS SAM connectors now also support multiple destinations to simplify your permissions. You can now use a single connector between a single source resource and multiple destination resources.

In October 2022, AWS released OpenID Connect (OIDC) support for AWS SAM Pipelines. This improves your security posture by creating integrations that use short-lived credentials from your CI/CD provider. There is a new blog post on how to implement it.

Find out how best to build serverless Java applications with the AWS SAM CLI.

AWS App Runner

AWS App Runner now supports retrieving secrets and configuration data stored in AWS Secrets Manager and AWS Systems Manager (SSM) Parameter Store in an App Runner service as runtime environment variables.

App Runner also now supports incoming requests based on the HTTP 1.0 protocol, and has added service-level concurrency, CPU, and memory utilization metrics.

Amazon S3

Amazon S3 now automatically applies default encryption to all new objects added to S3, at no additional cost and with no impact on performance.

You can now use an S3 Object Lambda Access Point alias as an origin for your Amazon CloudFront distribution to tailor or customize data to end users. For example, you can resize an image depending on the device that an end user is visiting from.

S3 has introduced Mountpoint for S3, a high performance open source file client that translates local file system API calls to S3 object API calls like GET and LIST.

S3 Multi-Region Access Points now support datasets that are replicated across multiple AWS accounts. They provide a single global endpoint for your multi-region applications, and dynamically route S3 requests based on policies that you define. This helps you to more easily implement multi-Region resilience, latency-based routing, and active-passive failover, even when data is stored in multiple accounts.

Amazon Kinesis

Amazon Kinesis Data Firehose now supports streaming data delivery to Elastic. This is an easier way to ingest streaming data to Elastic and consume the Elastic Stack (ELK Stack) solutions for enterprise search, observability, and security without having to manage applications or write code.

Amazon DynamoDB

Amazon DynamoDB now supports table deletion protection to protect your tables from accidental deletion when performing regular table management operations. You can set the deletion protection property for each table, which is set to disabled by default.

Amazon SNS

Amazon SNS now supports AWS X-Ray active tracing to visualize, analyze, and debug application performance. You can now view traces that flow through Amazon SNS topics to destination services, such as Amazon Simple Queue Service, Lambda, and Kinesis Data Firehose, in addition to traversing the application topology in Amazon CloudWatch ServiceLens.

SNS also now supports setting content-type request headers for HTTPS notifications so applications can receive their notifications in a more predictable format. Topic subscribers can create a DeliveryPolicy that specifies the content-type value that SNS assigns to their HTTPS notifications, such as application/json, application/xml, or text/plain.

EDA Visuals collection added to Serverless Land

The Serverless Developer Advocate team has extended Serverless Land and introduced EDA visuals. These are small, bite-sized visuals to help you understand concepts and patterns in event-driven architectures. Find out about batch processing vs. event streaming, commands vs. events, message queues vs. event brokers, and point-to-point messaging. Discover bounded contexts, migrations, idempotency, claims, enrichment and more!

EDA Visuals

To learn more:

Serverless Repos Collection on Serverless Land

There is also a new section on Serverless Land containing helpful code repositories. You can search for code repos to use for examples, learning or building serverless applications. You can also filter by use-case, runtime, and level.

Serverless Repos Collection

Serverless Blog Posts

January

Jan 12 – Introducing maximum concurrency of AWS Lambda functions when using Amazon SQS as an event source

Jan 20 – Processing geospatial IoT data with AWS IoT Core and the Amazon Location Service

Jan 23 – AWS Lambda: Resilience under-the-hood

Jan 24 – Introducing AWS Lambda runtime management controls

Jan 24 – Best practices for working with the Apache Velocity Template Language in Amazon API Gateway

February

Feb 6 – Previewing environments using containerized AWS Lambda functions

Feb 7 – Building ad-hoc consumers for event-driven architectures

Feb 9 – Implementing architectural patterns with Amazon EventBridge Pipes

Feb 9 – Securing CI/CD pipelines with AWS SAM Pipelines and OIDC

Feb 9 – Introducing new asynchronous invocation metrics for AWS Lambda

Feb 14 – Migrating to token-based authentication for iOS applications with Amazon SNS

Feb 15 – Implementing reactive progress tracking for AWS Step Functions

Feb 23 – Developing portable AWS Lambda functions

Feb 23 – Uploading large objects to Amazon S3 using multipart upload and transfer acceleration

Feb 28 – Introducing AWS Lambda Powertools for .NET

March

Mar 9 – Server-side rendering micro-frontends – UI composer and service discovery

Mar 9 – Building serverless Java applications with the AWS SAM CLI

Mar 10 – Managing sessions of anonymous users in WebSocket API-based applications

Mar 14 – Implementing an event-driven serverless story generation application with ChatGPT and DALL-E

Videos

Serverless Office Hours – Tues 10AM PT

Weekly office hours live stream. In each session we talk about a specific topic or technology related to serverless and open it up to helping you with your real serverless challenges and issues. Ask us anything you want about serverless technologies and applications.

January

Jan 10 – Building .NET 7 high performance Lambda functions

Jan 17 – Amazon Managed Workflows for Apache Airflow at Scale

Jan 24 – Using Terraform with AWS SAM

Jan 31 – Preparing your serverless architectures for the big day

February

Feb 07 – Visually design and build serverless applications

Feb 14 – Multi-tenant serverless SaaS

Feb 21 – Refactoring to Serverless

Feb 28 – EDA visually explained

March

Mar 07 – Lambda cookbook with Python

Mar 14 – Succeeding with serverless

Mar 21 – Lambda Powertools .NET

Mar 28 – Server-side rendering micro-frontends

FooBar Serverless YouTube channel

Marcia Villalba frequently publishes new videos on her popular serverless YouTube channel. You can view all of Marcia’s videos at https://www.youtube.com/c/FooBar_codes.

January

Jan 12 – Serverless Badge – A new certification to validate your Serverless Knowledge

Jan 19 – Step functions Distributed map – Run 10k parallel serverless executions!

Jan 26 – Step Functions Intrinsic Functions – Do simple data processing directly from the state machines!

February

Feb 02 – Unlock the Power of EventBridge Pipes: Integrate Across Platforms with Ease!

Feb 09 – Amazon EventBridge Pipes: Enrichment and filter of events Demo with AWS SAM

Feb 16 – AWS App Runner – Deploy your apps from GitHub to Cloud in Record Time

Feb 23 – AWS App Runner – Demo hosting a Node.js app in the cloud directly from GitHub (AWS CDK)

March

Mar 02 – What is Amazon DynamoDB? What are the most important concepts? What are the indexes?

Mar 09 – Choreography vs Orchestration: Which is Best for Your Distributed Application?

Mar 16 – DynamoDB Single Table Design: Simplify Your Code and Boost Performance with Table Design Strategies

Mar 23 – 8 Reasons You Should Choose DynamoDB for Your Next Project and How to Get Started

Sessions with SAM & Friends

AWS SAM & Friends

Eric Johnson is exploring how developers are building serverless applications. We spend time talking about AWS SAM as well as others like AWS CDK, Terraform, Wing, and AMPT.

Feb 16 – What’s new with AWS SAM

Feb 23 – AWS SAM with AWS CDK

Mar 02 – AWS SAM and Terraform

Mar 10 – Live from ServerlessDays ANZ

Mar 16 – All about AMPT

Mar 23 – All about Wing

Mar 30 – SAM Accelerate deep dive

Still looking for more?

The Serverless landing page has more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow the Serverless Developer Advocacy team on Twitter to see the latest news, follow conversations, and interact with the team.

How to run AWS CloudHSM workloads in container environments

Post Syndicated from Derek Tumulak original https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-docker-containers/

January 25, 2023: We updated this post to reflect the fact that CloudHSM SDK3 does not support serverless environments and we strongly recommend deploying SDK5.


AWS CloudHSM provides hardware security modules (HSMs) in the AWS Cloud. With CloudHSM, you can generate and use your own encryption keys in the AWS Cloud, and manage your keys by using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.

CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (AWS KMS), AWS Secrets Manager, and AWS Private Certificate Authority (AWS Private CA). AWS KMS, Secrets Manager, and AWS Private CA are fully managed services that are convenient to use and integrate. You’ll generally use CloudHSM only if your workload requires single-tenant HSMs under your own control, or if you need cryptographic algorithms or interfaces that aren’t available in the fully managed alternatives.

CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), OpenSSL Dynamic Engine, or Microsoft Cryptography API: Next Generation (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to HSMs in your cluster.

In this blog post, I’ll show you how to use Docker to develop, deploy, and run applications by using the CloudHSM SDK, and how to manage and orchestrate workloads by using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Kubernetes Service (Amazon EKS), and Jenkins.

Solution overview

This solution demonstrates how to create a Docker container that uses the CloudHSM JCE SDK to generate a key and use it to encrypt and decrypt data.

Note: In this example, you must manually enter the crypto user (CU) credentials as environment variables when you run the container. For production workloads, you’ll need to consider how to secure and automate the handling and distribution of these credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials. For more information on securing credentials, see AWS Secrets Manager.

Figure 1 shows the solution architecture. The Java application, running in a Docker container, integrates with JCE and communicates with CloudHSM instances in a CloudHSM cluster through HSM elastic network interfaces (ENIs). The Docker container runs in an EC2 instance, and access to the HSM ENIs is controlled with a security group.

Figure 1: Architecture diagram

Prerequisites

To implement this solution, you need to have working knowledge of the following items:

  • CloudHSM
  • Docker 20.10.17 – used at the time of this post
  • Java 8 or Java 11 – supported at the time of this post
  • Maven 3.05 – used at the time of this post

Here’s what you’ll need to follow along with my example:

  1. An active CloudHSM cluster with at least one active HSM instance. You can follow the CloudHSM getting started guide to create, initialize, and activate a CloudHSM cluster.

    Note: For a production cluster, you should have at least two active HSM instances spread across Availability Zones in the Region.

  2. An Amazon Linux 2 EC2 instance in the same virtual private cloud (VPC) in which you created your CloudHSM cluster. The Amazon Elastic Compute Cloud (Amazon EC2) instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control network access to the HSMs. To learn about attaching security groups to allow EC2 instances to connect to your HSMs, see Create a cluster in the AWS CloudHSM User Guide.
  3. A CloudHSM crypto user (CU) account. You can create a CU by following the steps in the topic Managing HSM users in AWS CloudHSM in the AWS CloudHSM User Guide.

Solution details

In this section, I’ll walk you through how to download, configure, compile, and run a solution in Docker.

To set up Docker and run the application that encrypts and decrypts data with a key in AWS CloudHSM

  1. On your Amazon Linux EC2 instance, install Docker by running the following command.

    # sudo yum -y install docker

  2. Start the docker service.

    # sudo service docker start

  3. Create a new directory and move to it. In my example, I use a directory named cloudhsm_container. You’ll use the new directory to configure the Docker image.

    # mkdir cloudhsm_container
    # cd cloudhsm_container

  4. Copy the CloudHSM cluster’s trust anchor certificate (customerCA.crt) to the directory that you just created. You can find the trust anchor certificate on a working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. The certificate is created during initialization of the CloudHSM cluster and is required to connect to the CloudHSM cluster. This enables our application to validate that the certificate presented by the CloudHSM cluster was signed by our trust anchor certificate.
  5. In your new directory (cloudhsm_container), create a new file with the name run_sample.sh that includes the following contents. The script runs the Java class that is used to generate an Advanced Encryption Standard (AES) key to encrypt and decrypt your data.
    #! /bin/bash
    
    # start application
    echo -e "\n* Entering AES GCM encrypt/decrypt sample in Docker ... \n"
    
    java -ea -jar target/assembly/aesgcm-runner.jar -method environment
    
    echo -e "\n* Exiting AES GCM encrypt/decrypt sample in Docker ... \n"

  6. In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
    • The CloudHSM client package.
    • The CloudHSM Java JCE package.
    • OpenJDK 1.8 (Java 8). This is needed to compile and run the Java classes and JAR files.
    • Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
    • The AWS CloudHSM Java JCE samples that will be downloaded and built as part of the solution.
  7. Cut and paste the following contents into Dockerfile.

    Note: You will need to customize your Dockerfile, as follows:

    • Make sure to specify the SDK version to replace the one specified in the pom.xml file in the sample code. As of the writing of this post, the most current version is 5.7.0. To find the SDK version, follow the steps in the topic Check your client SDK version. For more information, see the Building section in the README file for the Cloud HSM JCE examples.
    • Make sure to update the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command.
      # Use the Amazon Linux image
      FROM amazonlinux:2

      # Pass HSM IP address as a build argument
      ARG HSM_IP

      # Install CloudHSM client
      RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-jce-latest.el7.x86_64.rpm

      # Install Java, Maven, wget, unzip and ncurses-compat-libs
      RUN yum install -y java maven wget unzip ncurses-compat-libs

      # Create a work dir
      WORKDIR /app

      # Download sample code
      RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/refs/heads/sdk5.zip

      # Unzip sample code
      RUN unzip sdk5.zip

      # Change to the extracted sample code directory
      WORKDIR aws-cloudhsm-jce-examples-sdk5

      # Build JAR files using the installed CloudHSM JCE Provider version
      RUN export CLOUDHSM_CLIENT_VERSION=`rpm -qi cloudhsm-jce | awk -F': ' '/Version/ {print $2}'` \
          && mvn validate -DcloudhsmVersion=$CLOUDHSM_CLIENT_VERSION \
          && mvn clean package -DcloudhsmVersion=$CLOUDHSM_CLIENT_VERSION

      # Configure cloudhsm-client
      COPY customerCA.crt /opt/cloudhsm/etc/
      RUN /opt/cloudhsm/bin/configure-jce -a $HSM_IP

      # Copy the run_sample.sh script
      COPY run_sample.sh .

      # Run the script
      CMD ["bash","run_sample.sh"]

  8. Now you’re ready to build the Docker image. Run the following command, with the name jce_sample. This command will let you use the Dockerfile that you created in step 6 to create the image.

    # sudo docker build --build-arg HSM_IP="<your HSM IP address>" -t jce_sample .

  9. To run a Docker container from the Docker image that you just created, run the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, see Providing credentials to the JCE provider in the CloudHSM User Guide).

    # sudo docker run --env HSM_USER=<user> --env HSM_PASSWORD=<password> jce_sample

    If successful, the output should look like this:

    	* Entering AES GCM encrypt/decrypt sample in Docker ... 
    
    	737F92D1B7346267D329C16E
    	Successful decryption
    
    	* Exiting AES GCM encrypt/decrypt sample in Docker ...

Conclusion

This solution provides an example of how to run CloudHSM client workloads in Docker containers. You can use the solution as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to CloudHSM without compromising the flexibility that Docker provides for developing, deploying, and running applications.

If you have comments about this post, submit them in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Derek Tumulak

Derek joined AWS in May 2021 as a Principal Product Manager. He is a data protection and cybersecurity expert who is enthusiastic about assisting customers with a wide range of sophisticated use cases.

Reducing Java cold starts on AWS Lambda functions with SnapStart

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/reducing-java-cold-starts-on-aws-lambda-functions-with-snapstart/

Written by Mark Sailes, Senior Serverless Solutions Architect, AWS.

At AWS re:Invent 2022, AWS announced SnapStart for AWS Lambda functions running on Java Corretto 11. This feature enables customers to achieve up to 10x faster function startup performance for Java functions, at no additional cost, and typically with minimal or no code changes.

Overview

Today, for Lambda function invocations, the largest contributor to startup latency is the time spent initializing a function. This includes loading the function’s code and initializing dependencies. For interactive workloads that are sensitive to startup latencies, this can cause a suboptimal end-user experience.

To address this challenge, customers either provision resources ahead of time, or spend effort building relatively complex performance optimizations, such as compiling with GraalVM native-image. Although these workarounds help reduce the startup latency, users must spend time on some heavy lifting instead of focusing on delivering business value. SnapStart addresses this concern directly for Java-based Lambda functions.

How SnapStart works

With SnapStart, when a customer publishes a function version, the Lambda service initializes the function’s code. It takes an encrypted snapshot of the initialized execution environment, and persists the snapshot in a tiered cache for low latency access.

When the function is first invoked, and as it scales, Lambda resumes execution environments from the persisted snapshot instead of initializing them from scratch. This results in lower startup latency.

Lambda function lifecycle

A function version activated with SnapStart transitions to an inactive state if it remains idle for 14 days, after which Lambda deletes the snapshot. When you try to invoke a function version that is inactive, the invocation fails. Lambda sends a SnapStartNotReadyException and begins initializing a new snapshot in the background, during which the function version remains in Pending state. Wait until the function reaches the Active state, and then invoke it again. To learn more about this process and the function states, read the documentation.
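
As a minimal sketch of handling this case with the AWS SDK for Java 2.x (the exact exception class name, software.amazon.awssdk.services.lambda.model.SnapStartNotReadyException, and the function name and qualifier below are assumptions for illustration), you can retry the invocation once the version is Active again:

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.GetFunctionConfigurationRequest;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;
import software.amazon.awssdk.services.lambda.model.SnapStartNotReadyException;
import software.amazon.awssdk.services.lambda.model.State;

public class SnapStartRetryExample {

    public static InvokeResponse invokeWithReactivation(LambdaClient lambda, String functionName,
                                                        String qualifier) throws InterruptedException {
        InvokeRequest request = InvokeRequest.builder()
                .functionName(functionName)
                .qualifier(qualifier) // published version or alias that uses SnapStart
                .payload(SdkBytes.fromUtf8String("{}"))
                .build();
        try {
            return lambda.invoke(request);
        } catch (SnapStartNotReadyException e) {
            // The failed invoke triggers Lambda to initialize a new snapshot in the
            // background. Wait for the version to become Active, then invoke again.
            State state;
            do {
                Thread.sleep(5_000); // crude fixed delay; bound the retries in real code
                state = lambda.getFunctionConfiguration(GetFunctionConfigurationRequest.builder()
                        .functionName(functionName)
                        .qualifier(qualifier)
                        .build()).state();
            } while (state != State.ACTIVE);
            return lambda.invoke(request);
        }
    }
}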

Using SnapStart

Application frameworks such as Spring give developers an enormous productivity gain by reducing the amount of boilerplate code they write to accomplish common tasks. When these frameworks were first created, they didn’t have to consider startup time because they ran on long-running application servers, where startup time is minimal compared to the running duration and restarts typically only happen when there is an application version change.

If the functionality that these frameworks bring is implemented at runtime, then they often contribute to latency in startup time. SnapStart allows you to use frameworks like Spring and not compromise tail latency.

To demonstrate SnapStart, I use a sample application that saves records into Amazon DynamoDB. This Spring Boot application uses a REST controller to handle CRUD requests. This sample includes infrastructure as code to deploy the application using the AWS Serverless Application Model (AWS SAM). You must install the AWS SAM CLI to deploy this example.

To deploy:

  1. Clone the git repository and change to project directory:
    git clone https://github.com/aws-samples/serverless-patterns.git
    cd serverless-patterns/apigw-lambda-snapstart
  2. Use the AWS SAM CLI to build the application:
    sam build
  3. Use the AWS SAM CLI to deploy the resources to your AWS account:
    sam deploy -g

This project deploys with SnapStart already enabled. To enable or disable this functionality in the AWS Management Console:

  1. Navigate to your Lambda function.
  2. Select the Configuration tab.
  3. Choose Edit and change the SnapStart attribute to PublishedVersions.
  4. Choose Save.

    Lambda console configuration

  5. Select the Versions tab and choose Publish new.
  6. Choose Publish.

Once you’ve enabled SnapStart, Lambda publishes all subsequent versions with snapshots. The time it takes to publish a version depends on your initialization code, which can run for up to 15 minutes with this feature.

Considerations

Stale credentials

Using SnapStart and restoring from a snapshot often changes how you create functions. With on-demand functions, you might fetch one-time data in the init phase and then reuse it during future invokes. If this data is ephemeral, a database password for example, the password may change between the time you fetch the secret and the time you use it, leading to an error. You must write code to handle this error case.

With SnapStart, if you follow the same approach, your database password is persisted in an encrypted snapshot. All future execution environments have the same state. This can be days, weeks, or longer after the snapshot is taken. This makes it more likely that your function has the incorrect password stored. To improve this, you could move the functionality to fetch the password to the post-snapshot hook. With each approach, it is important to understand your application’s needs and handle errors when they occur.
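
As a minimal sketch of that approach (using the org.crac dependency shown later in this post, and a hypothetical fetchDatabasePassword() helper standing in for a call to AWS Secrets Manager or SSM Parameter Store):

import org.crac.Core;
import org.crac.Resource;

public class DatabaseConnector implements Resource {

    private volatile String databasePassword;

    public DatabaseConnector() {
        Core.getGlobalContext().register(this);
        databasePassword = fetchDatabasePassword(); // covers on-demand cold starts
    }

    @Override
    public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {
        // Don't bake the current password into the snapshot.
        databasePassword = null;
    }

    @Override
    public void afterRestore(org.crac.Context<? extends Resource> context) throws Exception {
        // Re-fetch after restore so each execution environment starts with a current secret.
        databasePassword = fetchDatabasePassword();
    }

    private String fetchDatabasePassword() {
        // Hypothetical helper: replace with your secret retrieval logic.
        return System.getenv("DB_PASSWORD");
    }
}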

Demo application architecture

A second challenge in sharing the initial state is with randomness and uniqueness. If random seeds are stored in the snapshot during the initialization phase, then it may cause random numbers to be predictable.

Cryptography

AWS has changed the managed runtime to help customers handle the effects of uniqueness and randomness when restoring functions.

Lambda has already incorporated updates to Amazon Linux 2 and one of the commonly used cryptographic libraries, OpenSSL (1.0.2), to make them resilient to snapshot operations. AWS has also validated that the Java runtime’s built-in RNG, java.security.SecureRandom, maintains uniqueness when resuming from a snapshot.

Software that always gets random numbers from the operating system (for example, from /dev/random or /dev/urandom) is already resilient to snapshot operations. It does not need updates to restore uniqueness. However, customers who prefer to implement uniqueness using custom code for their Lambda functions must verify that their code restores uniqueness when using SnapStart.

For more details, read Starting up faster with AWS Lambda SnapStart and refer to Lambda documentation on SnapStart uniqueness.

Runtime hooks

Runtime hooks give developers a way to react to the snapshotting process: a pre-snapshot hook runs before Lambda takes the snapshot, and a post-restore hook runs after the execution environment is restored.

For example, a function that must always preload large amounts of data from Amazon S3 should do this before Lambda takes the snapshot. This embeds the data in the snapshot so that it does not need fetching repeatedly. However, in some cases, you may not want to keep ephemeral data. A password to a database may be rotated frequently and cause unnecessary errors. I discuss this in greater detail in a later section.

The Java managed runtime uses the open-source Coordinated Restore at Checkpoint (CRaC) project to provide hook support. The managed Java runtime contains a customized CRaC context implementation that calls your Lambda function’s runtime hooks before completing snapshot creation and after restoring the execution environment from a snapshot.

The following function example shows how you can create a function handler with runtime hooks. The handler implements the CRaC Resource and the Lambda RequestHandler interface.

...
import org.crac.Resource;
import org.crac.Core;
...

public class HelloHandler implements RequestHandler<String, String>, Resource {

    public HelloHandler() {
        Core.getGlobalContext().register(this);
    }

    public String handleRequest(String name, Context context) throws IOException {
        System.out.println("Handler execution");
        return "Hello " + name;
    }

    @Override
    public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {
        System.out.println("Before Checkpoint");
    }

    @Override
    public void afterRestore(org.crac.Context<? extends Resource> context) throws Exception {
        System.out.println("After Restore");
    }
}

For the classes required to write runtime hooks, add the following dependency to your project:

Maven

<dependency>
  <groupId>io.github.crac</groupId>
  <artifactId>org-crac</artifactId>
  <version>0.1.3</version>
</dependency>

Gradle

implementation 'io.github.crac:org-crac:0.1.3'

Priming

SnapStart and runtime hooks give you new ways to build your Lambda functions for low startup latency. You can use the pre-snapshot hook to make your Java application as ready as possible for the first invoke. Do as much as possible within your function before the snapshot is taken. This is called priming.

When you upload your zip file of Java code to Lambda, the zip contains .class files of bytecode. This can be run on any machine with a JVM. When the JVM executes your bytecode, it is initially interpreted, then compiled into native machine code. This compilation stage is relatively CPU intensive and happens just in time (JIT Compiler).

You can use the before snapshot hook to run code paths before the snapshot is taken. The JVM compiles these code paths and the optimization is kept for future restores. For example, if you have a function that integrates with DynamoDB, you can make a read operation in your before snapshot hook.

This means that your function code, the AWS SDK for Java, and any other libraries used in that action are compiled and kept within the snapshot. The JVM then doesn’t need to compile this code when your function is invoked, which means lower latency the first time an execution environment is invoked.

Priming requires that you understand your application code and the consequences of executing it. The sample application includes a before snapshot hook, which primes the application by making a read operation from DynamoDB.
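
The following is a sketch of that idea rather than the sample application’s actual code (it assumes the org.crac hooks shown earlier, the AWS SDK for Java 2.x DynamoDB client, and a hypothetical table and key used only to warm code paths):

import java.util.Map;
import org.crac.Core;
import org.crac.Resource;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;

public class PrimedHandler implements Resource {

    private final DynamoDbClient dynamoDb = DynamoDbClient.create();

    public PrimedHandler() {
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {
        // Exercise the same code path the handler uses so that the SDK classes and
        // your function code are JIT-compiled and captured in the snapshot.
        dynamoDb.getItem(GetItemRequest.builder()
                .tableName("products") // hypothetical table used by the function
                .key(Map.of("id", AttributeValue.builder().s("priming-request").build()))
                .build());
    }

    @Override
    public void afterRestore(org.crac.Context<? extends Resource> context) throws Exception {
        // Nothing to restore for this example.
    }
}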

Metrics

The following chart reflects invoking the sample application Lambda function 100 times per second for 10 minutes. This test is based on this function, both with and without SnapStart.

 

              p50       p99.9
On-demand     7.87ms    5,114ms
SnapStart     7.87ms    488ms

Conclusion

This blog shows how SnapStart reduces startup (cold start) latency for Java-based Lambda functions. You can configure SnapStart using the AWS SDK, AWS CloudFormation, AWS SAM, and the AWS CDK.

To learn more, see Configuring function options in the AWS documentation. This functionality may require some minimal code changes. In most cases, the existing code is already compatible with SnapStart. You can now bring your latency-sensitive Java-based workloads to Lambda and run with improved tail latencies.

This feature allows developers to use the on-demand model in Lambda with low-latency response times, without incurring extra cost. To read more about how to use SnapStart with partner frameworks, find out more from Quarkus and Micronaut. To read more about this and other features, visit Serverless Land.

Starting up faster with AWS Lambda SnapStart

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/starting-up-faster-with-aws-lambda-snapstart/

This blog written by Tarun Rai Madan, Sr. Product Manager, AWS Lambda, and Mike Danilov, Sr. Principal Engineer, AWS Lambda.

AWS Lambda SnapStart is a new performance optimization developed by AWS that can significantly improve the startup time for applications. Announced at AWS re:Invent 2022, the first capability to feature SnapStart is Lambda SnapStart for Java. This feature delivers up to 10x faster function startup times for latency-sensitive Java applications at no extra cost, and with minimal or no code changes.

Overview

When applications start up, whether it’s an app on your phone, or a serverless Lambda function, they go through initialization. The initialization process can vary based on the application and the programming language, but even the smallest applications written in the most efficient programming languages require some kind of initialization before they can do anything useful. For a Lambda function, the initialization phase involves downloading the function’s code, starting the runtime and any external dependencies, and running the function’s initialization code. Ordinarily, for a Lambda function, this initialization happens every time your application scales up to create a new execution environment.

With SnapStart, the function’s initialization is done ahead of time when you publish a function version. Lambda takes a Firecracker microVM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access. When your application starts up and scales to handle traffic, Lambda resumes new execution environments from the cached snapshot instead of initializing them from scratch, improving startup performance.

The following diagram compares a cold start request lifecycle for a non-SnapStart function and a SnapStart function. The time it takes to initialize the function, which is the predominant contributor to high startup latency, is replaced by a faster resume phase with SnapStart.

Request lifecycle for a non-SnapStart function versus a SnapStart function

Front loading the initialization phase can significantly improve the startup performance for latency-sensitive Lambda functions, such as synchronous microservices that are sensitive to initialization time. Because Java is a dynamic language with its own runtime and garbage collector, Lambda functions written in Java can be amongst the slowest to initialize. For applications that require frequent scaling, the delay introduced by initialization, commonly referred to as a cold start, can lead to a suboptimal experience for end users. Such applications can now start up faster with SnapStart.

AWS’ work in Firecracker makes it simple to use SnapStart. Because SnapStart uses micro Virtual Machine (microVM) snapshots to checkpoint and restore full applications, the approach is adaptable and general purpose. It can be used to speed up many kinds of application starts. While microVMs have long been used for strong secure isolation between applications and environments, the ability to front-load initialization with SnapStart means that microVMs can also augment performance savings at scale.

SnapStart and uniqueness

Lambda SnapStart speeds up applications by re-using a single initialized snapshot to resume multiple execution environments. As a result, unique content included in the snapshot during initialization is reused across execution environments, and so may no longer remain unique. A class of applications where uniqueness of state is a key consideration is cryptographic software, which assumes that the random numbers are truly random (both random and unpredictable). If content such as a random seed is saved in the snapshot during initialization, it is re-used when multiple execution environments resume and may produce predictable random sequences.

To maintain uniqueness, you must verify before using SnapStart that any unique content previously generated during the initialization now gets generated after that initialization. This includes unique IDs, unique secrets, and entropy used to generate pseudo-randomness.

Multiple execution environments resumed from a shared snapshot

SnapStart life cycle

However, we have implemented a few things to make it easier for customers to maintain uniqueness.

First, it is not common or a best practice for applications to generate these unique items directly. Still, it’s worth confirming that your application handles uniqueness correctly. That’s usually a matter of checking for any unique IDs, keys, timestamps, or “homemade” entropy in the initializer methods for your function.

Lambda offers a SnapStart scanning tool that checks for certain categories of code that assume uniqueness, so customers can make changes as required. The SnapStart scanning tool is an open-source SpotBugs plugin that runs static analysis against a set of rules and reports “potential SnapStart bugs”. We are committed to engaging with the community to expand the set of rules against which the scanning tool checks the code.

As an example, the following Lambda function creates a unique log stream for each execution environment during initialization. This unique value is re-used across execution environments when they re-use a snapshot.

public class LambdaUsingUUID implements RequestHandler<Map<String, String>, String> {

    private AWSLogsClient logs;
    private final UUID sandboxId;

    public LambdaUsingUUID() {
       sandboxId = UUID.randomUUID(); // <-- unique content created
       logs = new AWSLogsClient();
    }

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
       CreateLogStreamRequest request = new CreateLogStreamRequest(
         "myLogGroup", sandboxId + ".log9.txt");
       logs.createLogStream(request);
       return "Hello world!";
    }
}

When you run the scanning tool on the previous code, the following message helps identify a potential implementation that assumes uniqueness. One way to address such cases is to move the generation of the unique ID inside your function’s handler method.

H C SNAP_START: Detected a potential SnapStart bug in Lambda function initialization code. At LambdaUsingUUID.java: [line 7]

A best practice used by many applications is to rely on the system libraries and kernel for uniqueness. These have long handled other cases where keys and IDs may be inadvertently duplicated, such as when forking or cloning processes. AWS has worked with upstream kernel maintainers and open source developers so that the existing protection mechanisms use the open standard VM Generation ID (vmgenid), which SnapStart supports. vmgenid is an emulated device that exposes a 128-bit, cryptographically random identifier to the kernel and is statistically unique across all resumed microVMs.

Lambda’s included versions of Amazon Linux 2, OpenSSL (1.0.2), and java.security.SecureRandom all automatically re-initialize their randomness and secrets after a SnapStart. Software that always gets random numbers from the operating system (for example, from /dev/random or /dev/urandom) does not need any updates to maintain randomness. Because Lambda always reseeds /dev/random and /dev/urandom when restoring a snapshot, random numbers are not repeated even when multiple execution environments resume from the same snapshot.

Lambda’s request IDs are already unique for each invocation and are available using the getAwsRequestId() method of the Lambda request object. Most Lambda functions should require no modification to run with SnapStart enabled. It’s generally recommended that for SnapStart, you do not include unique state in the function’s initialization code, and use cryptographically secure random number generators (CSPRNGs) when needed.

Second, if you do want to create unique data directly in a Lambda function initialization phase, Lambda supports two new runtime hooks. Runtime hooks are available as part of the open-source Coordinated Restore at Checkpoint (CRaC) project. You can use the beforeCheckpoint hook to run code immediately before a snapshot is taken, and use the afterRestore hook to run code immediately after restoring a snapshot. This helps you delete any unique content before the snapshot is created, and restore any unique content after the snapshot is restored. For an example of how to use CRaC with a reference application, see the CRaC GitHub repository.
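
For illustration, the following sketch uses the open-source org.crac API described above; the class and field names are illustrative. It clears a unique value before the checkpoint and regenerates it after each restore, so every resumed execution environment gets its own value:

import java.util.UUID;

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

public class UniqueStateHandler implements Resource {

    // Unique per execution environment, so it must not be captured in the shared snapshot
    private volatile UUID sandboxId = UUID.randomUUID();

    public UniqueStateHandler() {
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        // Clear the unique value so it is not stored in the snapshot
        sandboxId = null;
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        // Each execution environment resumed from the snapshot generates its own value
        sandboxId = UUID.randomUUID();
    }
}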

Conclusion

This blog describes how SnapStart optimizes startup performance under the hood and outlines considerations around uniqueness. It also introduces the interfaces that AWS Lambda provides (the scanning tool and runtime hooks) to help customers maintain uniqueness for their SnapStart functions.

SnapStart is made possible by several pieces of open-source work, including Firecracker, Linux, CRaC, OpenSSL and more. AWS is grateful to the maintainers and developers who have made this possible. With this work, we’re excited to launch Lambda SnapStart for Java as what we hope is the first amongst many other capabilities to benefit from the performance savings and enhanced security that SnapStart microVMs provide.

For more serverless learning resources, visit Serverless Land.

Build a custom Java runtime for AWS Lambda

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/compute/build-a-custom-java-runtime-for-aws-lambda/

This post is written by Christian Müller, Principal AWS Solutions Architect and Maximilian Schellhorn, AWS Solutions Architect

When running applications on AWS Lambda, you have the option to use either one of the managed runtime versions that AWS provides or bring your own custom runtime. The following blog post provides a walkthrough of how you can create and optimize a custom runtime for Java-based Lambda functions.

Builders might rely on customized or experimental runtime behavior when creating solutions in the cloud. The Java ecosystem fosters innovation and encourages experiments with the current six-month release schedule for the latest runtime versions.

However, Lambda focuses on providing stable long-term support (LTS) versions. The official Lambda runtimes are built around a combination of operating system, programming language, and software libraries that are subject to maintenance and security updates. For example, the Lambda runtime for Java supports the LTS versions Java 8 Corretto and Java 11 Corretto as of April 2022. The Java 17 Corretto version is pending. In addition, there is no provided runtime for non-LTS versions like Java 15 Corretto, Java 16 Corretto, or Java 18 Corretto.

To use other language versions, Lambda allows you to create custom runtimes. Custom runtimes allow builders to provide and configure their own runtimes for running their application code. To enable communication between your custom runtime and Lambda, you can use the runtime interface client library in Java.

With the introduction of modular runtime images in Java 9 (JEP 220), it is possible to include only the Java runtime modules that your application depends on. This reduces the overall runtime size and increases performance, especially during cold-starts. In addition, there are other techniques in Java, like class data sharing and tiered compilation, which allow you to reduce the startup time of your application even further.

To combine those capabilities, this blog post provides an overview for creating and deploying a minified Java runtime on Lambda by using Java 18 Corretto. For step-by-step instructions and prerequisites, refer to the official GitHub example.

Overview of the example

In the following example, you build a custom runtime for a basic Java application that writes request headers to Amazon DynamoDB and is fronted by Amazon API Gateway.

Application architecture

The following diagram summarizes the steps to create the application and the custom runtime:

Steps to create the application custom runtime

  1. Download the preferred Java version and take advantage of jdeps, jlink and class data sharing to create a minified and optimized Java runtime based on the application code (function.jar).
  2. Create a bootstrap file with optimized starting instructions for the application.
  3. Package the application code, the optimized Java runtime, and the bootstrap file as a zip file.
  4. Deploy the runtime, including the app, to Lambda, for example by using the AWS Cloud Development Kit (CDK).

Steps 1–3 are automated and abstracted via Docker. The following section provides a high-level walkthrough of the build and deployment process. For the full version, see the Dockerfile in the GitHub example.

Creating the optimized Java runtime

1. Download the desired Java version, copy the local application code to the Docker environment, and build it with Maven:

FROM amazonlinux:2

...

# Update packages and install Amazon Corretto 18, Maven and Zip
RUN yum -y update
RUN yum install -y java-18-amazon-corretto-devel maven zip

...

# Copy the software folder to the image and build the function
COPY software software
WORKDIR /software/example-function
RUN mvn clean package

2. This step results in an uber-jar (function.jar) that you can use as an input argument for jdeps. The output is a file containing all the Java modules that the function depends on:

RUN jdeps -q \
    --ignore-missing-deps \
    --multi-release 18 \
    --print-module-deps \
    target/function.jar > jre-deps.info

3. Create an optimized Java runtime based on those application modules with jlink. Remove unnecessary information from the runtime, for example header files or man-pages:

RUN jlink --verbose \
    --compress 2 \
    --strip-java-debug-attributes \
    --no-header-files \
    --no-man-pages \
    --output /jre18-slim \
    --add-modules $(cat jre-deps.info)

4. This creates your own custom Java 18 runtime in the /jre18-slim folder. You can apply additional optimization techniques such as Class-Data-Sharing (CDS) to generate a classes.jsa file to accelerate the class loading time of the JVM.

RUN /jre18-slim/bin/java -Xshare:dump

Adding optimized starting instructions

You must tell the Lambda execution environment how to start the application. You can achieve that with a bootstrap file that includes the necessary instructions. In addition, you can define parameters to improve the performance further. For example, you could use tiered compilation and SerialGC.

The following snippet represents an example of a bootstrap file:

#!/bin/sh

$LAMBDA_TASK_ROOT/jre18-slim/bin/java \
    --add-opens java.base/java.util=ALL-UNNAMED \
    -XX:+TieredCompilation \
    -XX:TieredStopAtLevel=1 \
    -XX:+UseSerialGC \
    -jar function.jar "$_HANDLER"

Packaging the components

Combine the bootstrap file, the custom Java runtime, and the application code in a zip file for later use as the deployment package:

RUN zip -r runtime.zip \
    bootstrap \
    function.jar \
    /jre18-slim

The GitHub example provides a build.sh script to run the above-mentioned process via Docker. This results in a runtime.zip that you can then use as a deployment package.

Deploying the application with the custom runtime

To deploy the custom runtime, use AWS CDK. This allows you to define the needed infrastructure as code more easily in your favorite programming language.

The following code snippet shows how to create a Lambda function from a custom runtime:

Function customJava18Function = new Function(this, "LambdaCustomRuntimeJava18", FunctionProps.builder()
        .functionName("custom-runtime-java-18")
.handler("com.amazon.aws.example.ExampleDynamoDbHandler::handleRequest")
        .runtime(Runtime.PROVIDED_AL2)
        .code(Code.fromAsset("../runtime.zip"))
        .memorySize(512)
        .environment(Map.of("TABLE_NAME", exampleTable.getTableName()))
        .timeout(Duration.seconds(20))
        .logRetention(RetentionDays.ONE_WEEK)
        .build());

To deploy the application and output the necessary API Gateway URL to invoke the Lambda function, use the following command or use the provided provision_infrastructure.sh script:

cdk deploy --outputs-file target/outputs.json

Testing the application and validating the example results

After deployment, you can load test the application with the open-source software project Artillery.

The following command creates 120 concurrent invocations of the Lambda function for a duration of 60 seconds. It uses the API Gateway URL that is exported after the AWS CDK successfully deployed the application:

artillery run -t $(cat infrastructure/target/outputs.json | jq -r '.LambdaCustomRuntimeMinimalJRE18InfrastructureStack.apiendpoint') -v '{ "url": "/custom-runtime" }' infrastructure/loadtest.yml

Use CloudWatch Log Insights to query the Lambda logs and gather information about the cold start (initDuration) and duration percentiles:

filter @type = "REPORT"
    | parse @log /\d+:\/aws\/lambda\/(?<function>.*)/
    | stats
    count(*) as invocations,
    pct(@duration+coalesce(@initDuration,0), 0) as p0,
    pct(@duration+coalesce(@initDuration,0), 25) as p25,
    pct(@duration+coalesce(@initDuration,0), 50) as p50,
    pct(@duration+coalesce(@initDuration,0), 75) as p75,
    pct(@duration+coalesce(@initDuration,0), 90) as p90,
    pct(@duration+coalesce(@initDuration,0), 95) as p95,
    pct(@duration+coalesce(@initDuration,0), 99) as p99,
    pct(@duration+coalesce(@initDuration,0), 100) as p100
    group by function, ispresent(@initDuration) as coldstart
    | sort by coldstart, function

The results provide an indication of how your application performs with the custom runtime. This is especially helpful when comparing different versions.

  • Invocation time (@duration) for both cold and warm starts plus function initialization time (@initDuration) if it is a cold start:

Invocation time

  • Function initialization time (@initDuration) only:

Function initialization time

Conclusion

In this blog post, you learn how to create your own optimized Java runtime for AWS Lambda by using a variety of Java optimization techniques. This allows you to tailor your Java runtime to your application needs.

See the full example on GitHub and make use of your own preferred Java version. Add additional optimization steps in the Dockerfile or tune the parameters in the bootstrap file to optimize the start of the Java virtual machine.

In case you want to re-use your custom runtime in multiple Lambda functions, you can also distribute it via a Lambda layer.

For more serverless learning resources, visit Serverless Land.

How to re-platform and modernize Java web applications on AWS

Post Syndicated from Rick Armstrong original https://aws.amazon.com/blogs/compute/re-platform-java-web-applications-on-aws/

This post is written by: Bill Chan, Enterprise Solutions Architect

According to a report from Grand View Research, “the global application server market size was valued at USD 15.84 billion in 2020 and is expected to expand at a compound annual growth rate (CAGR) of 13.2% from 2021 to 2028.” The report also suggests that Java based application servers “accounted for the largest share of around 50% in 2020.” This means that many organizations continue to rely on Java application server capabilities to deliver middleware services that underpin the web applications running their transactional, content management and business process workloads.

The maturity of the application server technology also means that many of these web applications were built on traditional three-tier web architectures running in on-premises data centers. As organizations embark on their journey to the cloud, the question arises: what is the best approach to migrating these applications?

There are seven common migration strategies when moving applications to the cloud, including:

  • Retain – keeping applications running as is and revisiting the migration at a later stage
  • Retire – decommissioning applications that are no longer required
  • Repurchase – switching from existing applications to a software-as-a-service (SaaS) solution
  • Rehost – moving applications as is (lift and shift), without making any changes to take advantage of cloud capabilities
  • Relocate – moving applications as is, but at a hypervisor level
  • Replatform – moving applications as is, but introduce capabilities that take advantage of cloud-native features
  • Refactor – re-architect the application to take full advantage of cloud-native features

Refer to Migrating to AWS: Best Practices & Strategies and the 6 Strategies for Migrating Applications to the Cloud for more details.

This blog focuses on the ‘replatform’ strategy, which suits customers who have large investments in application server technologies and the business case for re-architecting doesn’t stack up. By re-platforming their applications into the cloud, customers can benefit from the flexibility of a ‘pay-as-you-go’ model, dynamically scale to meet demand and provision infrastructure as code. Additionally, customers can increase the speed and agility to modernize existing applications and build new cloud-native applications to deliver better customer experiences.

In this post, we walk through the steps to replatform a simple contact management Java application running on an open-source Tomcat application server, along with modernization aspects that include:

  • Deploying a Tomcat web application with automatic scaling capabilities
  • Integrating Tomcat with Redis cache (using Redisson Session Manager for Tomcat)
  • Integrating Tomcat with Amazon Cognito for authentication (using Boyle Software’s OpenID Connect Authenticator for Tomcat)
  • Delegating user log in and sign up to Amazon Cognito

Overview of solution

Solution architecture overview diagram

The solution is comprised of the following components:

  • A VPC across two Availability Zones
  • Two public subnets, two private app subnets, and two private DB subnets
  • An Internet Gateway attached to the VPC
    • A public route table routing internet traffic to the Internet Gateway
    • Two private route tables routing traffic internally within the VPC
  • A frontend web server Elastic Load Balancing load balancer that routes traffic to the Apache web servers
  • An Auto Scaling group that launches additional Apache Web Servers based on defined scaling policies. Each instance of the web server is based on a launch template, which defines the same configuration for each new web server.
  • A hosted zone in Amazon Route 53 with a domain name that routes to the frontend web server Elastic Load Balancing
  • An application Elastic Load Balancing load balancer that routes traffic to the Tomcat application servers
  • An Auto Scaling group that launches additional Tomcat Application Servers based on defined scaling policies. Each instance of the Tomcat application server is based on a launch template, which defines the same configuration and software components for each new application server
  • A Redis cache cluster with a primary and replica node to store session data after the user has authenticated, making your application servers stateless
  • A Redis open-source Java client, with a Tomcat Session Manager implementation to store authenticated user session data in Redis cache
  • An Amazon Relational Database Service (Amazon RDS) for MySQL Multi-AZ deployment to store the contact management and role access tables
  • An Amazon Simple Storage Service (Amazon S3) bucket to store the application and framework artifacts, images, scripts and configuration files that are referenced by any new Tomcat application server instances provisioned by automatic scaling
  • Amazon Cognito with a sign-up Lambda function to register users and insert a corresponding entry in the user account tables. Cognito acts as an identity provider and performs the user authentication using an OpenID Connect Authenticator Java component

Walkthrough

The following steps provide an overview of how to deploy the blog solution:

  • Clone and build the Sample Web Application and AWS Signup Lambda Maven projects from the GitHub repository
  • Deploy the CloudFormation template (java-webapp-infra.yaml) to create the AWS networking infrastructure and the CloudFormation template (java-webapp-rds.yaml) to create the database instance
  • Update and build the sample web application and signup Lambda function
  • Upload the packages into your S3 bucket
  • Deploy the CloudFormation template (java-webapp-components.yaml) to create the blog solution components
  • Update the solution configuration files and upload them into your S3 bucket
  • Run a script to provision the underlying database tables
  • Validate the web application, session cache and automatic scaling functionality
  • Clean up resources

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account
  • An Amazon Elastic Compute Cloud (Amazon EC2) key pair (required for authentication). For more details, see Amazon EC2 key pairs
  • A Java Integrated Development Environment (IDE) such as Eclipse or NetBeans. AWS also offers a cloud-based IDE that lets you write, run and debug code in your browser without having to install files or configure your development machine, called AWS Cloud9. I will show how AWS Cloud9 can be used as part of a DevOps solution in a subsequent post
  • A valid domain name and SSL certificate for the deployed web application. To validate the OAuth 2.0 integration, Cognito requires the URL that the user is redirected to after successful sign-in to be HTTPS. Refer to a configuring a user pool app client for more details
  • Downloaded the following JARs:

Note: the solution was validated in the preceding versions and therefore, the launch template created for the CloudFormation solution stack refers to these specific JARs. If you decide to use different versions, then the ‘java-webapp-components.yaml’ will need to be updated to reflect the new versions. Alternatively, you can externalize the parameters in the template.

Clone the GitHub repository to your local machine

This repository contains the sample code for the Java web application and post confirmation sign-up Lambda function. It also contains the CloudFormation templates required to set up the AWS infrastructure, SQL script to create the supporting database and configuration files for the web server, Tomcat application server and Redis cache.

Deploy infrastructure CloudFormation template

  1. Log in to the AWS Management Console and open the CloudFormation service.

Diagram showing the first step in creating a CloudFormation stack.

2. Create the infrastructure stack using the java-webapp-infra.yaml template (located in the ‘config’ directory of the repo).

3. Infrastructure stack outputs:

Diagram showing the outputs generated from the infrastructure stack creation

Deploy database CloudFormation template

  1.  Log in to the AWS Management Console and open the CloudFormation service.
  2. Create the database stack using the java-webapp-rds.yaml template (located in the ‘config’ directory of the repo).
  3. Database stack outputs.

Diagram showing the outputs generated from the relational database service stack creation

Update and build sample web application and signup Lambda function

  1. Import the ‘sample-webapp’ and ‘aws-signup-lambda’ Maven projects from the repository into your IDE.
  2. Update the sample-webapp’s UserDAO class to reflect the RDSEndpoint, DBUserName, and DBPassword from the previous step:
    // Externalize and update jdbcURL, jdbcUsername, jdbcPassword parameters specific to your environment
    private String jdbcURL = "jdbc:mysql://<RDSEndpoint>:3306/webappdb?useSSL=false";
    private String jdbcUsername = "<DBUserName>";
    private String jdbcPassword = "<DBPassword>";

  3. To build the ‘sample-webapp’ Maven project, use the standard ‘clean install’ goals.
  4. Update the aws-signup-lambda’s signupHandler class to reflect RDSEndpoint, DBUserName, and DBPassword from the solution stack:
    // Update with your database connection details
    		String jdbcURL = "jdbc:mysql://<RDSEndpoint>:3306/webappdb?useSSL=false";
    		String jdbcUsername = "<DBUserName>";
    		String jdbcPassword = "<DBPassword>";

  5. To build the aws-signup-lambda Maven project, use the ‘package shade:shade’ goals to include all dependencies in the package.
  6. Two packages are created in their respective target directory: ‘sample-webapp.war’ and ‘create-user-lambda-1.0.jar’

Upload the packages into your S3 bucket

  1. Log in to the AWS Management Console and open the S3 service.
  2. Select the bucket created by the infrastructure CloudFormation template in an earlier step.

Diagram showing the S3 bucket interface with no objects.

3.  Create a ‘config’ and ‘lib’ folder in the bucket.

Diagram showing the S3 bucket interface with the new folders.

4.  Upload the ‘sample-webapp.war’ and ‘create-user-lambda-1.0.jar’ created in an earlier step (along with the downloaded packages from the prerequisites section) into the ‘lib’ folder of the bucket. The ‘lib’ folder should look like this:

Diagram showing the S3 bucket interface and objects in the lib folder

Note: the solution was validated in the preceding versions and therefore, the launch template created for the CloudFormation solution stack refers to these specific package names.

Deploy the solution components CloudFormation template

1. Log in to the AWS Management Console and open the CloudFormation service (if you aren’t already logged in from the previous step).

2. Create the web application solution stack using the ‘java-webapp-components.yaml’ template (located in the ‘config’ directory of the repo).

3. Guidance on the different template input parameters:

a. BastionSGSource – default is 0.0.0.0/0, but it is recommended to restrict this to your allowed IPv4 CIDR range for additional security

b. BucketName – the bucket name created as part of the infrastructure stack. This blog uses the bucket name ‘chanbi-java-webapp-bucket’

c. CallbackURL – the URL that the user is redirected to after successful sign up/sign in. It is composed of your domain name (blog.example.com), the application root (sample-webapp), and the authentication form action ‘j_security_check’. As noted earlier, this needs to be over HTTPS

d. CreateUserLambdaKey – the S3 object key for the signup Lambda package. This blog uses the key ‘lib/create-user-lambda-1.0.jar’

e. DBUserName – the database user name for the MySQL RDS instance. Make note of this as it will be required in a subsequent step

f. DBUserPassword – the database user password. Make note of this as it will be required in a subsequent step

g. KeyPairName – the key pair to use when provisioning the EC2 instances. This key pair was created in the prerequisites step

h. WebALBSGSource – the IPv4 CIDR range allowed to access the web app. Default is 0.0.0.0/0

i. The remaining parameters are import names from the infrastructure stack. Use default settings

4. After successful stack creation, you should see the following Java web application solution stack output:

 Diagram showing the outputs generated from the solution components stack creation.

Update configuration files

  1. The GitHub repository’s ‘config’ folder contains the configuration files for the web server, Tomcat application server and Redis cache, which needs to be updated to reflect the parameters specific to your stack output in the previous step.
  2. Update the virtual hosts in ‘httpd.conf’ to proxy web traffic to the internal app load balancer. Use the value defined by the key ‘AppALBLoadBalancerDNS’ from the stack output.
    <VirtualHost *:80>
    ProxyPass / http://<AppALBLoadBalancerDNS>:8080/
    ProxyPassReverse / http://<AppALBLoadBalancerDNS>:8080/
    </VirtualHost>

  3. Update JDBC resource for the ‘webappdb’ in the ‘context.xml, with the values defined by the RDSEndpoint, DBUserName, and DBPassword from the solution components CloudFormation stack:
    <Resource name="jdbc/webappdb" auth="Container" type="javax.sql.DataSource"
                   maxTotal="100" maxIdle="30" maxWaitMillis="10000"
                   username="<DBUserName>" password="<DBPassword>" driverClassName="com.mysql.jdbc.Driver"
                   url="jdbc:mysql://<RDSEndpoint>:3306/webappdb"/>

  4. Log in to the AWS Management Console and open the Amazon Cognito service. Select ‘Manage User Pools’ and you will notice that a ‘java-webapp-pool’ has been created by the solution components CloudFormation stack. Select the ‘java-webapp-pool’ and make note of the ‘Pool Id’, ‘App client id’ and ‘App client secret’.

Diagram showing the Cognito User Pool interface general settings

Diagram showing the Cognito User Pool interface app client settings

5.  Update ‘Valve’ configuration in the ‘context.xml’, with the ‘Pool Id’, ‘App client id’ and ‘App client secret’ values from the previous step. The Cognito IDP endpoint specific to your Region can be found here. The host base URI needs to be replaced with the domain for your web application.

    <Valve className="org.bsworks.catalina.authenticator.oidc.tomcat90.OpenIDConnectAuthenticator"
       providers="[
           {
               name: 'Amazon Cognito',
               issuer: https://<cognito-idp-endpoint-for-you-region>/<cognito-pool-id>,
               clientId: <user-pool-app-client-id>,
               clientSecret: <user-pool-app-client-secret>
           }
       ]"
        hostBaseURI="https://<your-sample-webapp-domain>" usernameClaim="email" />

6.  Update the ‘address’ parameter in ‘redisson.yaml’ with Redis cluster endpoint. Use the value defined by the key ‘RedisClusterEndpoint’ from the solution components CloudFormation stack output.

singleServerConfig:
    address: "redis://<RedisClusterEndpoint>:6379"

7.  No updates are required to the following files:

a.  server.xml – defines a data source realm for the user names, passwords, and roles assigned to users

      <Realm className="org.apache.catalina.realm.DataSourceRealm"
   dataSourceName="jdbc/webappdb" localDataSource="true"
   userTable="user_accounts" userNameCol="user_name" userCredCol="user_pass"
   userRoleTable="user_account_roles" roleNameCol="role_name" debug="9" />
      </Realm>

b.  tomcat.service – allows Tomcat to run as a service

c.  uninstall-sample-webapp.sh – removes the sample web application

Upload configuration files into your S3 bucket

  1. Upload the configuration files from the previous step into the ‘config’ folder of the bucket. The ‘config’ folder should look like this:

Diagram showing the S3 bucket interface and objects in the config folder

Update the Auto Scaling groups

  1. Auto Scaling groups manage the provisioning and removal of the web and application instances in our solution. To start an instance of the web server, update the Auto Scaling group’s desired capacity (1), minimum capacity (1) and maximum capacity (2) as shown in the following image:

Diagram showing the web server auto scaling group interface and group details.

2.  To start an instance of the application server, update the Auto Scaling group’s desired capacity (1), minimum capacity (1) and maximum capacity (2) as shown in the following image:

Diagram showing the application server auto scaling group interface and group details.

The web and application scaling groups will show a status of “Updating capacity” (as shown in the following image) as the instances start up.

Diagram showing the auto scaling groups interface and updating capacity status.

After web and application servers have started, an instance will appear under ‘Instance management’ with a ‘Healthy’ status for each Auto Scaling group (as shown in the following image).

Diagram showing the web server auto scaling group interface and instance status

Diagram showing the application server auto scaling group interface and instance status

Run the database script webappdb_create_tables.sql

  1. The database script creates the database and underlying tables required by the web application. As the database server resides in the DB private subnet and is only accessible from the application server instance, we need to first connect (via SSH) to the bastion host (using public IPv4 DNS), and from there we can connect (via SSH) to the application server instance (using its private IPv4 address). This will in turn allow us to connect to the database instance and run the database script. Refer to connecting to your Linux instance using SSH for more details. Instance details are located under the ‘Instances’ view (as shown in the following image).

Diagram showing the instances interface and the running instances for the VPC

2.  Transfer the database script webappdb_create_tables.sql to the application server instance via the Bastion Host. Refer to transferring files using a client for details.

3.  Once connected to the application server via SSH, execute the command to connect to the database instance:

mysql -h <RDSEndpoint> -P 3306 -u <DBUserName> -p

4. Enter the DB user password used when creating the database instance. You will be presented with the MySQL prompt after successful login:

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 300
Server version: 8.0.23 Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>

5. Run the following command to execute the database script webappdb_create_tables.sql:

source /home/ec2-user/webappdb_create_tables.sql

Add an HTTPS listener to the external web load balancer

  1. Log in to the AWS Management Console and select Load Balancers as part of the EC2 service
    Diagram showing the load balancer interface
  2. Add an HTTPS listener on port 443 for the web load balancer. The default action for the listener is to forward traffic to the web instance target group. Refer to create an HTTPS listener for your Application Load Balancer for more details.
    Diagram showing the load balancer add listener interface

Reference the SSL certificate for your domain. In the following example, I have used a certificate from AWS Certificate Manager (ACM) for my domain. You also have the option of using a certificate from AWS Identity and Access Management (IAM) or importing your own certificate.

Diagram showing the secure listener settings interface

Update your DNS to route traffic from the external web load balancer

  1. In this example, I use Amazon Route 53 as the Domain Name Server (DNS) service, but the steps will be similar when using your own DNS service.
  2. Create an A record type that routes traffic from your domain name to the external web load balancer. For more details, refer to creating records by using the Amazon Route 53 console.
    Diagram showing the hosted zone interface

Validate the web application

  1. In your browser, access the following URL: https://<yourdomain.example.com>/sample-webapp
    Diagram showing the log in page for the sample web application.
  2. Select “Amazon Cognito” to authenticate using Cognito as the Identity Provider (IdP). You will be redirected to the login page for your Cognito domain.
    Diagram showing the sign in page provided by Amazon Cognito
  3. Select “Sign up” to create a new user and enter your email and password. Note the password strength requirements that can be configured as part of the user pool’s policies.
    Diagram showing the sign up page provided by Amazon Cognito
  4. An email with the verification code will be sent to the sign-up email address. Enter the code on the verification code screen.
    Diagram showing the account confirmation page with verification code provided by Amazon Cognito
  5. After successful confirmation, you will be re-directed to the authenticated landing page for the web application.
    Diagram showing the main page with the list of contacts for the sample web application.
  6. The simple web application allows you to add, edit, and delete contacts as shown in the following image.
    Diagram showing the list of contacts for the sample web application with edit and delete functionality.

Validate the session data on Redis

  1. Follow the steps outlined in connecting to nodes for details on connecting to your Redis cache cluster. You will need to connect to your application server instance (via the bastion host) to perform this as the Redis cache is only accessible from the private subnet.
  2. After successfully installing the Redis client, search for your authenticated user session key in the cluster by running the command (from within the ‘redis-stable’ directory):
    src/redis-cli -c -h <RedisClusterEndpoint> -p 6379 --bigkeys

  3. You should see an output with your Tomcat authenticated session (if you can’t, perform another login via the Cognito login screen):
    # Scanning the entire keyspace to find biggest keys as well as
    # average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
    # per 100 SCAN commands (not usually needed).
    
    [00.00%] Biggest hash   found so far '"redisson:tomcat_session:AE647D93F2BECEFEE07B5B42C435E3DE"' with 8 fields

  4. Connect to the cache cluster:
    # src/redis-cli -c -h <RedisClusterEndpoint> -p 6379

  5. Run the HGETALL command to get the session details:
    java-webapp-redis-cluster.<xxxxxx>.0001.apse2.cache.amazonaws.com:6379> HGETALL "redisson:tomcat_session:AE647D93F2BECEFEE07B5B42C435E3DE"
     1) "session:creationTime"
     2) "\x04L\x00\x00\x01}\x16\x92\x1bX"
     3) "session:lastAccessedTime"
     4) "\x04L\x00\x00\x01}\x16\x92%\x9c"
     5) "session:principal"
     6) "\x04\x04\t>@org.apache.catalina.realm.GenericPrincipal$SerializablePrincipal\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x04>\x04name\x16\x00>\bpassword\x16\x00>\tprincipal\x16\x00>\x05roles\x16\x00\x16>\[email protected]>\[email protected]\x01B\x01\x14>\bstandard"
     7) "session:maxInactiveInterval"
     8) "\x04K\x00\x00\a\b"
     9) "session:isValid"
    10) "\x04P"
    11) "session:authtype"
    12) "\x04>\x04FORM"
    13) "session:isNew"
    14) "\x04Q"
    15) "session:thisAccessedTime"
    16) "\x04L\x00\x00\x01}\x16\x92%\x9c"

Scale your web and application server instances

  1. Amazon EC2 Auto Scaling provides several ways for you to scale instances in your Auto Scaling group such as scaling manually as we did in an earlier step. But you also have the option to scale dynamically to meet changes in demand (such as maintaining CPU Utilization at 50%), predictively scale in advance of daily and weekly patterns in traffic flows, or scale based on a scheduled time. Refer to scaling the size of your Auto Scaling group for more details.
    Diagram showing the auto scaling groups interface and scaling policies
  2. We will create a scheduled action to provision another application server instance.
    Diagram showing the auto scaling group's create schedule action interface.
  3. As per our scheduled action, at 11.30 am, an additional application server instance is started.
    Diagram showing the activity history for the instance.
  4. Under instance management, you will see an additional instance in ‘Pending’ state as it starts.
    Diagram showing the auto scaling groups interface and additional instances.
  5. To test the stateless nature of your application, you can manually stop the original application server instance and observe that your end-user experience is unaffected i.e. you are not prompted to re-authenticate and can continue using the application as your session data is stored in Redis ElastiCache and not tied to the original instance.

Cleaning up

To avoid incurring future charges, remove the resources by deleting the java-webapp-components, java-webapp-rds and java-webapp-infra CloudFormation stacks.

Conclusion

Customers with significant investments in Java application server technologies have options to migrate to the cloud without requiring a complete re-architecture of their applications. In this blog, we’ve shown an approach to modernizing Java applications running on the Tomcat application server in AWS. In doing so, we take advantage of cloud-native features such as automatic scaling, provisioning infrastructure as code, and managed services (such as ElastiCache for Redis and Amazon RDS) to make the application stateless. We also demonstrated modernization features such as authentication and user provisioning via an external IdP (Amazon Cognito). For more information on different re-platforming patterns, refer to the AWS Prescriptive Guidance on Migration.

Implementing mutual TLS for Java-based AWS Lambda functions

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/implementing-mutual-tls-for-java-based-aws-lambda-functions-2/

This post is written by Dhiraj Mahapatro, Senior Specialist SA, Serverless and Christian Mueller, Principal Solutions Architect

Modern secure applications establish network connections to other services through HTTPS. This ensures that the application connects to the right party and encrypts the data before sending it over the network.

As a service provider, you might not want unauthenticated clients to connect to your service. One solution to this requirement is to use mutual TLS (Transport Layer Security). Mutual TLS (or mTLS) is a common security mechanism that uses client certificates to add an authentication layer. This allows the service provider to verify the client’s identity cryptographically.

The purpose of mutual TLS in serverless

mTLS refers to two parties authenticating each other at the same time when establishing a connection. By default, the TLS protocol only proves the identity of the server to a client using X.509 certificates. With mTLS, a client must prove its identity to the server to communicate. This supports a zero-trust security posture and helps protect against threats such as man-in-the-middle attacks.

mTLS is often used in business-to-business (B2B) applications and microservices, where interservice communication needs mutual authentication of parties. In Java, you see the following error when the server expects a certificate, but the client does not provide one:

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

This blog post explains multiple ways to implement a Java-based AWS Lambda function that uses mTLS to authenticate with a third-party internal or external service. The sample application and this post explain the advantages and tradeoffs of each approach.

The KeyStore and TrustStore in Java

The TrustStore is used to store certificate public keys from a certificate authority (CA) or trusted servers. A client can verify the public certificate presented by the server in a TLS connection. A KeyStore stores private key and identity certificates that a specific application uses to prove the client’s identity.

The two stores hold complementary certificates: the TrustStore holds the certificates that identify others, while the KeyStore holds the certificates that identify the application itself.

Overview

To start, you create certificates. For brevity, this sample application uses a script that uses OpenSSL and Java’s keytool for self-signed certificates from a CA. You store the generated keys in Java KeyStore and TrustStore. However, the best practice for creating and maintaining certificates and private CA is to use AWS Certificate Manager and AWS Certificate Manager Private Certificate Authority.

You can find the details of the script in the README file.

The following diagram shows the use of KeyStore and TrustStore in the client Lambda function, and the server running on Fargate.

KeyStore and TrustStore

The demo application contains several Lambda functions. The Lambda functions act as clients to services provided by Fargate behind an Amazon Network Load Balancer (NLB) running in a private Amazon VPC. Amazon Route 53 private hosted zones are used to resolve selected hostnames. You attach the Lambda functions to this VPC to resolve the hostnames for the NLB. To learn more, read how AWS Lambda uses Hyperplane elastic network interfaces to work with custom VPC.

The following examples refer to portions of InfrastructureStack.java and the implementation in the corresponding Lambda functions.

Providing a client certificate in a Lambda function artifact

The first option is to provide the KeyStore and TrustStore in a Lambda function’s .zip artifact. You provide specific Java environment variables within the Lambda configuration to instruct the JVM to load and trust your provided KeyStore and TrustStore. The JVM uses these settings instead of the Java Runtime Environment’s (JRE) default settings (use a stronger password for your use case):

"-Djavax.net.ssl.keyStore=./client_keystore_1.jks -Djavax.net.ssl.keyStorePassword=secret -Djavax.net.ssl.trustStore=./client_truststore.jks -Djavax.net.ssl.trustStorePassword=secret"

The JRE uses this KeyStore and TrustStore to build a default SSLContext. The HttpClient uses this default SSLContext to create a TLS connection to the backend service running on Fargate.

The following architecture diagram shows the sample implementation. It consists of an Amazon API Gateway endpoint with a Lambda proxy integration that calls a backend Fargate service running behind an NLB.

Providing a client certificate in a Lambda function artifact

This is a basic approach for a prototype. However, it has a few shortcomings related to security and separation of duties. The KeyStore contains the private key, and the password is exposed to the source code management (SCM) system, which is a security concern. Also, it is the Lambda function owner’s responsibility to update the certificate before its expiration. You can address these concerns about separation of duties with the following approach.

Providing the client certificate in a Lambda layer

In this approach, you separate the responsibility between two entities: the Lambda function owner, and the KeyStore and TrustStore owner.

The KeyStore and TrustStore owner provides the certificates securely to the function developer who may be working in a separate AWS environment. For simplicity, the demo application uses the same AWS account.

The KeyStore and TrustStore owner achieves this by using AWS Lambda layers. The KeyStore and TrustStore owner packages and uploads the certificates as a Lambda layer and only allows access to authorized functions. The Lambda function owner does not access the KeyStore or manage its lifecycle. The KeyStore and TrustStore owner’s responsibility is to release a new version of this layer when necessary and inform users.

Providing the client certificate in a Lambda layer

The KeyStore and TrustStore are extracted under the path /opt as part of including a Lambda layer. The Lambda function can now use the layer as:

Function lambdaLayerFunction = new Function(this, "LambdaLayerFunction", FunctionProps.builder()
  .functionName("lambda-layer")
  .handler("com.amazon.aws.example.AppClient::handleRequest")
  .runtime(Runtime.JAVA_11)
  .architecture(ARM_64)
  .layers(singletonList(lambdaLayerForService1cert))
  .vpc(vpc)
  .code(Code.fromAsset("../software/2-lambda-using-separate-layer/target/lambda-using-separate-layer.jar"))
  .memorySize(1024)
  .environment(Map.of(
    "BACKEND_SERVICE_1_HOST_NAME", BACKEND_SERVICE_1_HOST_NAME,
    "JAVA_TOOL_OPTIONS", "-Djavax.net.ssl.keyStore=/opt/client_keystore_1.jks -Djavax.net.ssl.keyStorePassword=secret -Djavax.net.ssl.trustStore=/opt/client_truststore.jks -Djavax.net.ssl.trustStorePassword=secret"
  ))
  .timeout(Duration.seconds(10))
  .logRetention(RetentionDays.ONE_WEEK)
  .build());

The KeyStore and TrustStore passwords are still supplied as environment variables and stored in the SCM system, which is against best practices. You can address this with the next approach.

Storing passwords securely in AWS Systems Manager Parameter Store

AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data and secret management. You can use Parameter Store to store the KeyStore and TrustStore passwords instead of environment variables. The Lambda function uses an IAM policy to access Parameter Store and gets the passwords as a secure string during the Lambda initialization phase.

With this approach, you build a custom SSLContext after retrieving the KeyStore and TrustStore passwords from the Parameter Store. Once you create SSLContext, provide that to the HttpClient you use to connect with the backend service:

HttpClient client = HttpClient.newBuilder()
  .version(HttpClient.Version.HTTP_2)
  .connectTimeout(Duration.ofSeconds(5))
  .sslContext(sslContext)
  .build();
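
The password retrieval itself is not shown here; a minimal sketch using the AWS SDK for Java v2 SSM client could look like the following. The class and parameter names are assumptions, not the sample application’s exact code:

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

public class PasswordLookup {

    private static final SsmClient SSM = SsmClient.create();

    // Reads a SecureString parameter, decrypted, during the Lambda initialization phase
    static char[] readPassword(String parameterName) {
        return SSM.getParameter(GetParameterRequest.builder()
                .name(parameterName) // for example "/demo/keystore-password"
                .withDecryption(true)
                .build())
            .parameter()
            .value()
            .toCharArray();
    }
}

You then pass these passwords to the code that loads the KeyStore and TrustStore and builds the SSLContext, instead of reading them from environment variables.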

You can also use a VPC interface endpoint for AWS Systems Manager to keep the traffic from your Lambda function to Parameter Store internal to AWS. The following diagram shows the interaction between AWS Lambda and Parameter Store.

Storing passwords securely in AWS Systems Manager Parameter Store

This approach works for Lambda functions interacting with a single backend service requiring mTLS. However, it is common in a modern microservices architecture to integrate with multiple backend services. Sometimes, these services require a client to assume different identities by using different KeyStores. The next approach explains how to handle the multiple services scenario.

Providing multiple client certificates in Lambda layers

You can provide multiple KeyStore and TrustStore pairs within multiple Lambda layers. All layers attached to a function are merged when provisioning the function. Ensure your KeyStore and TrustStore names are unique. A Lambda function can use up to five Lambda layers.

Similar to the previous approach, you load multiple KeyStores and TrustStores to construct multiple SSLContext objects. You abstract the common logic to create an SSLContext object in another Lambda layer. Now, a Lambda function calling two different backend services uses three Lambda layers:

  • Lambda layer for backend service 1 (under /opt)
  • Lambda layer for backend service 2 (under /opt)
  • Lambda layer for the SSL utility that takes the KeyStore, TrustStore, and their passwords to return an SSLContext object

The SSL utility Lambda layer provides a getSSLContext default method in a Java interface. The Lambda function implements this interface. Now, you create a dedicated HTTP client per service.
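
The utility itself lives in the sample repository; as a rough sketch of the idea, an interface with a default method could look like the following. The names and signature are illustrative, not the repository’s exact code:

import java.io.FileInputStream;
import java.security.KeyStore;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public interface SslUtil {

    // Builds an SSLContext from a KeyStore/TrustStore pair, for example extracted from a Lambda layer under /opt
    default SSLContext getSSLContext(String keyStorePath, char[] keyStorePassword,
                                     String trustStorePath, char[] trustStorePassword) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(keyStorePath)) {
            keyStore.load(in, keyStorePassword);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyStorePassword);

        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, trustStorePassword);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }
}

A function that calls two backend services implements this interface, calls getSSLContext once per KeyStore, and builds a dedicated HttpClient from each resulting SSLContext.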

The following diagram shows your final architecture:

Providing multiple client certificates in Lambda layers

Prerequisites

To run the sample application, you need:

  1. CDK v2
  2. Java 11
  3. AWS CLI
  4. Docker
  5. jq

To build and provision the stack:

  1. Clone the git repository.
     git clone https://github.com/aws-samples/serverless-mutual-tls.git
     cd serverless-mutual-tls
  2. Create the two root CAs, client, and server certificates.
     ./scripts/1-create-certificates.sh
  3. Build and package all examples.
     ./scripts/2-build_and_package-functions.sh
  4. Provision the AWS infrastructure (make sure that Docker is running).
     ./scripts/3-provision-infrastructure.sh

Verification

Verify that the API endpoints are working and using mTLS by running these commands from the base directory:

export API_ENDPOINT=$(cat infrastructure/target/outputs.json | jq -r '.LambdaMutualTLS.apiendpoint')

To see the error when mTLS is not used in the Lambda function, run:

curl -i $API_ENDPOINT/lambda-no-mtls

The preceding curl command responds with an HTTP status code 500 and plain body as:

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

For successful usage of mTLS as shown in the previous use cases, run:

curl -i $API_ENDPOINT/lambda-only
curl -i $API_ENDPOINT/lambda-layer
curl -i $API_ENDPOINT/lambda-parameter-store
curl -i $API_ENDPOINT/lambda-multiple-certificates

The last curl command responds with an HTTP status code 200 and body as:

[
 {"hello": "from backend service 1"}, 
 {"hello": "from backend service 2"}
]

Additional security

You can add additional controls via Java environment variables. Compliance standards like PCI DSS in financial services require customers to exercise more control over the underlying negotiated protocol and ciphers.

Some of the useful Java environment variables to troubleshoot SSL/TLS connectivity issues in a Lambda function are:

-Djavax.net.debug=all
-Djavax.net.debug=ssl,handshake
-Djavax.net.debug=ssl:handshake:verbose:keymanager:trustmanager
-Djavax.net.debug=ssl:record:plaintext

You can enforce a specific minimum version of TLS (for example, v1.3) to meet regulatory requirements:

-Dhttps.protocols=TLSv1.3

Alternatively, programmatically construct your SSLContext inside the Lambda function:

SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
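The following minimal sketch (an assumption for illustration, not code from the sample repository) builds a TLSv1.3 context and restricts the protocols the Java 11 HTTP client will negotiate. For mTLS, initialize the context with your key and trust managers instead of the defaults used here.

import java.net.http.HttpClient;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Request a TLSv1.3 context. Pass your KeyManagers/TrustManagers to init() for mTLS;
// null arguments fall back to the JVM defaults and will not present a client certificate.
SSLContext sslContext = SSLContext.getInstance("TLSv1.3");
sslContext.init(null, null, null);

// Restrict the protocols the client is allowed to negotiate.
SSLParameters sslParameters = new SSLParameters();
sslParameters.setProtocols(new String[]{"TLSv1.3"});

HttpClient httpClient = HttpClient.newBuilder()
    .sslContext(sslContext)
    .sslParameters(sslParameters)
    .build();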

You can also use the following system property to limit the use of weak cipher suites or unapproved algorithms, and explicitly provide the supported cipher suites:

-Dhttps.cipherSuites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_256_CBC_SHA256

You achieve the same programmatically with the following code snippet:

httpClient = HttpClient.newBuilder()
  .version(HttpClient.Version.HTTP_2)
  .connectTimeout(Duration.ofSeconds(5))
  .sslContext(sslContext)
  .sslParameters(new SSLParameters(new String[]{
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
    ………
  }))
  .build();

Cleaning up

The stack creates a custom VPC and other related resources. Clean up after usage to avoid the ongoing cost of running these services. To clean up the infrastructure and the self-generated certificates, run:

./scripts/4-delete-certificates.sh
./scripts/5-deprovision-infrastructure.sh

Conclusion

mTLS in Java using a KeyStore and TrustStore is a well-established way to add a client-certificate authentication layer. This blog highlights four approaches that you can take to implement mTLS in Java-based Lambda functions.

Each approach addresses the separation of concerns required while implementing mTLS with additional security features. Use an approach that suits your needs, organizational security best practices, and enterprise requirements. Refer to the demo application for additional details.

For more serverless learning resources, visit Serverless Land.

New for Amazon CodeGuru Reviewer – Detector Library and Security Detectors for Log-Injection Flaws

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-codeguru-reviewer-detector-library-and-security-detectors-for-log-injection-flaws/

Amazon CodeGuru Reviewer is a developer tool that detects security vulnerabilities in your code and provides intelligent recommendations to improve code quality. For example, CodeGuru Reviewer introduced Security Detectors for Java and Python code to identify security risks from the top ten Open Web Application Security Project (OWASP) categories and follow security best practices for AWS APIs and common crypto libraries. At re:Invent, CodeGuru Reviewer introduced a secrets detector to identify hardcoded secrets and suggest remediation steps to secure your secrets with AWS Secrets Manager. These capabilities help you find and remediate security issues before you deploy.

Today, I am happy to share two new features of CodeGuru Reviewer:

  • A new Detector Library describes in detail the detectors that CodeGuru Reviewer uses when looking for possible defects and includes code samples for both Java and Python.
  • New security detectors have been introduced for detecting log-injection flaws in Java and Python code, similar to what happened with the recent Apache Log4j vulnerability we described in this blog post.

Let’s see these new features in more detail.

Using the Detector Library
To help you understand more clearly which detectors CodeGuru Reviewer uses to review your code, we are now sharing a Detector Library where you can find detailed information and code samples.

These detectors help you build secure and efficient applications on AWS. In the Detector Library, you can find detailed information about CodeGuru Reviewer’s security and code quality detectors, including descriptions, their severity and potential impact on your application, and additional information that helps you mitigate risks.

Note that each detector looks for a wide range of code defects. We include one noncompliant and one compliant code example for each detector. However, because CodeGuru uses machine learning and automated reasoning to identify possible issues, each detector can find a range of defects beyond the explicit code example shown on the detector’s description page.

Let’s have a look at a few detectors. One detector is looking for insecure cross-origin resource sharing (CORS) policies that are too permissive and may lead to loading content from untrusted or malicious sources.

Detector Library screenshot.
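As a hedged illustration of the kind of pattern such a detector flags (not an example copied from the Detector Library), a wildcard CORS header is overly permissive, while an explicit trusted origin narrows exposure. The servlet-based helper below is hypothetical:

import javax.servlet.http.HttpServletResponse;

public class CorsHeaders {

    // Noncompliant (illustrative): any origin may read the response.
    void addPermissiveCorsHeader(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
    }

    // Compliant (illustrative): restrict CORS to a known, trusted origin.
    void addRestrictedCorsHeader(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
    }
}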

Another detector checks for improper input validation that can enable attacks and lead to unwanted behavior.

Detector Library screenshot.

Specific detectors help you use the AWS SDK for Java and the AWS SDK for Python (Boto3) in your applications. For example, there are detectors that can detect hardcoded credentials, such as passwords and access keys, or inefficient polling of AWS resources.

New Detectors for Log-Injection Flaws
Following the recent Apache Log4j vulnerability, we introduced in CodeGuru Reviewer new detectors that check if you’re logging anything that is not sanitized and possibly executable. These detectors cover the issue described in CWE-117: Improper Output Neutralization for Logs.

These detectors work with Java and Python code and, for Java, are not limited to the Log4j library. They don’t work by looking at the version of the libraries you use, but check what you are actually logging. In this way, they can protect you if similar bugs happen in the future.

Detector Library screenshot.

According to these detectors, user-provided inputs must be sanitized before they are logged. This prevents an attacker from using such input to break the integrity of your logs, forge log entries, or bypass log monitors.
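A minimal sketch of that sanitization in Java, assuming SLF4J-style logging and a hypothetical sanitizeForLog helper, strips carriage returns and line feeds so that user input cannot forge additional log entries:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginAudit {

    private static final Logger log = LoggerFactory.getLogger(LoginAudit.class);

    // Hypothetical helper: replace CR/LF so user input cannot start a new log line.
    private static String sanitizeForLog(String input) {
        return input == null ? "null" : input.replaceAll("[\r\n]", "_");
    }

    public void recordLogin(String userName) {
        // Noncompliant: logging raw user input enables log injection.
        // log.info("Login attempt for user {}", userName);

        // Compliant: sanitize the input before logging it.
        log.info("Login attempt for user {}", sanitizeForLog(userName));
    }
}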

Availability and Pricing
These new features are available today in all AWS Regions where Amazon CodeGuru is offered. For more information, see the AWS Regional Services List.

The Detector Library is free to browse as part of the documentation. For the new detectors looking for log-injection flaws, standard pricing applies. See the CodeGuru pricing page for more information.

Start using Amazon CodeGuru Reviewer today to improve the security of your code.

Danilo

Finding code inconsistencies using Amazon CodeGuru Reviewer

Post Syndicated from Hangqi Zhao original https://aws.amazon.com/blogs/devops/inconsistency-detection-in-amazon-codeguru-reviewer/

Here we are introducing the inconsistency detector for Java in Amazon CodeGuru Reviewer. CodeGuru Reviewer automatically analyzes pull requests (created in supported repositories such as AWS CodeCommit, GitHub, GitHub Enterprise, and Bitbucket) and generates recommendations for improving code quality. For more information, see Automating code reviews and application profiling with Amazon CodeGuru.

The Inconsistency Principle

Software is repetitive, so it’s possible to mine usage specifications from large code bases. While the mined specifications are often correct, violations of them are often not bugs: code usages are contextual, so contextual analysis is required to detect genuine violations. Moreover, naive enforcement of learned specifications leads to many false positives. The inconsistency principle is based on the following observation: while deviations from frequent behavior (or specifications) are often permissible across repositories or packages, deviations from frequent behavior inside a given repository are worthy of investigation by developers, as they are often bugs or code smells.

Consider the following example. Assume that within a repository or package, we observe 10 occurrences of this statement:

private static final String stage = AppConfig.getDomain().toLowerCase();

And assume that there is one occurrence of the following statement in the same package:

private static final String newStage = AppConfig.getDomain();

The inconsistency can be described as follows: in this package, a call to toLowerCase() typically follows AppConfig.getDomain(), converting the returned string to its lower-case form. In one occurrence, however, the call is missing and should be examined.

In summary, the inconsistency detector can learn from a single package (read: your own codebase) and can identify common code usage patterns in the code, such as common patterns of logging, throwing and handling exceptions, performing null checks, etc. Next, usage patterns inconsistent with frequent usage patterns are flagged. While the flagged violations are most often bugs or code smells, it is possible that in some cases the majority usage patterns are the anti-patterns and the flagged violations are actually correct. Mining and detection are conducted on-the-fly during code analysis, and neither the code nor the patterns are persisted. This ensures that the customer code privacy requirements are preserved.

Diverse Recommendations

Perhaps the most distinguishing feature of the inconsistency detector family is that it learns from the customer’s own code, conducting analysis on the fly without persisting the code. This is rather different from pre-defined rules or dedicated ML algorithms that target specific code defects such as concurrency; those approaches also persist learned patterns or models. As a result, the inconsistency-principle-based detectors are much more diversified and cover a wide range of code problems.

Our approach aims to automatically detect inconsistent code pieces that are abnormal and deviant from common code patterns in the same package—assuming that the common behaviors are correct. This anomaly detection approach doesn’t require prior knowledge and minimizes the human effort to predefine rule sets, as it can automatically and implicitly “infer” rules, i.e., common code patterns, from the source code. While inconsistencies can be detected in the specific aspects of programs (such as synchronization, exceptions, logging, etc.), CodeGuru Reviewer can detect inconsistencies in multiple program aspects. The detections aren’t restricted to specific code defect types, but instead cover diverse code issue categories. These code defects may not necessarily be bugs, but they could also be code smells that should be avoided for better code maintenance and readability. CodeGuru Reviewer can detect 12 types of code inconsistencies in Java code. These detectors identify issues like typos, inconsistent messaging, inconsistent logging, inconsistent declarations, general missing API calls, missing null checks, missing preconditions, missing exceptions, etc. Now let’s look at several real code examples (they may not cover all types of findings).

Typo

Typos frequently appear in code messages, logs, paths, variable names, and other strings. Even simple typos can be difficult to find, and they can cause severe issues in your code. In the real code example below, the developer is setting a configuration file location.

context.setConfigLocation("com.amazom.daas.customerinfosearch.config");

The inconsistency detector identifies a typo in the path: amazom should be amazon. If this typo is not identified and fixed, the configuration location will not be found. Accordingly, the developer fixed the typo. In some other cases, certain typos are specific to the customer’s code and may be difficult to find via pre-defined rules or vocabularies. In the example below, the developer sets a bean name as a string.

public static final String COOKIE_SELECTOR_BEAN_NAME = "com.amazon.um.webapp.config.CookiesSelectorService";

The inconsistency detector learns from the entire codebase and identifies that um should instead be uam, as uam appears in multiple similar cases in the code, thereby making the use of um abnormal. The developer responded “Cool!” to this finding and accordingly made the fix.

Inconsistent Logging

In the example below, the developer is logging an exception related to a missing field, ID_FIELD.

 log.info ("Missing {} from data {}", ID_FIELD, data); 

The inconsistency detector learns the patterns in the developer’s code and identifies that, in 47 other similar cases in the same codebase, the developer’s code uses a log level of error instead of info, as shown below.

 log.error ("Missing {} from data {}", ID_FIELD, data);

Utilizing the appropriate log level can improve the application debugging process and ensure that developers are aware of all potential issues within their applications. The message from CodeGuru Reviewer was: "The detector found 47 occurrences of code with logging similar to the selected lines. However, the logging level in the selected lines is not consistent with the other occurrences. We recommend you check if this is intentional. The following are some examples in your code of the expected level of logging in similar occurrences: log.error ("Missing {} from data {}", ID_FIELD, data); …"

The developer acknowledges the recommendation to make the log level consistent, and they made the corresponding change to upgrade the level from info to error.

Apart from log levels, the inconsistency detector also identifies potentially missing log statements in your code, and it ensures that the log messages are consistent across the entire codebase.

Inconsistent Declaration

The inconsistency detector also identifies inconsistencies in variable or method declarations. A typical example is a missing final keyword for certain variables or functions. While a variable doesn’t always need to be declared final, the inconsistency detector determines that certain variables should be final by inferring from other appearances of those variables. In the example below, the developer declares a method with the variable request.

protected ExecutionContext setUpExecutionContext(ActionRequest request) {

Without prior knowledge, the inconsistency detector learns from 9 other similar occurrences in the same codebase and determines that request should be declared with the final keyword. The developer responded “Good catch!” and made the change.
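Based on the method signature shown above, the fixed declaration simply marks the parameter as final:

protected ExecutionContext setUpExecutionContext(final ActionRequest request) {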

Missing API

Missing API calls are a typical code bug, and their identification is highly valued by customers. The inconsistency detector can find these issues via the inconsistency principle. In the example below, the developer deletes stale entities in the document object.

private void deleteEntities(Document document) {
         ...
         for (Entity returnEntity : oldReturnCollection) {
             document.deleteEntity(returnEntity);
         }
     }

The inconsistency detector learns from 4 other similar methods in the developer’s codebase and determines that the API document.deleteEntity() is strongly associated with document.acceptChanges(), which is usually called after document.deleteEntity() to ensure that the document can accept other changes, but which is missing in this case. Therefore, the inconsistency detector recommends adding the document.acceptChanges() call. Another human reviewer agrees with CodeGuru’s suggestion and also refers to their platform’s coding guidelines, which echo CodeGuru’s recommendation. The developer made the corresponding fix, as shown below.

private void deleteEntities(Document document) {
         ...
         for (Entity returnEntity : oldReturnCollection) {
             document.deleteEntity(returnEntity);
         }
         document.acceptChanges();
     }

Null Check

Consider the following code snippet:

String ClientOverride = System.getProperty(MB_CLIENT_OVERRIDE_PROPERTY);
if (ClientOverride != null) {
      log.info("Found Client override {}", ClientOverride);
      config.setUnitTestOverride(ConfigKeys.Client, ClientOverride);
}
 
String ServerOverride = System.getProperty(MB_SERVER_OVERRIDE_PROPERTY);
if (ClientOverride != null) {
      log.info("Found Server override {}", ServerOverride);
      config.setUnitTestOverride(ConfigKeys.ServerMode, ServerOverride);
}

CodeGuru Reviewer indicates that the output of System.getProperty(MB_SERVER_OVERRIDE_PROPERTY) is missing a null check. A null check appears to be present on the next line; however, looking carefully, that check is not applied to the correct variable. It is a copy-paste error: instead of checking ServerOverride, the code checks ClientOverride. The developer fixed the code as the recommendation suggested.
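Applying the recommended fix to the snippet above, the corrected check tests the variable that was actually retrieved:

String ServerOverride = System.getProperty(MB_SERVER_OVERRIDE_PROPERTY);
if (ServerOverride != null) {
      log.info("Found Server override {}", ServerOverride);
      config.setUnitTestOverride(ConfigKeys.ServerMode, ServerOverride);
}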

Exception Handling

The following example shows an inconsistency finding regarding exception handling.

Detection:

try {
        if (null != PriceString) {
            final Double Price = Double.parseDouble(PriceString);
            if (Price.isNaN() || Price <= 0 || Price.isInfinite()) {
                throw new InvalidParameterException(
                  ExceptionUtil.getExceptionMessageInvalidParam(PRICE_PARAM));
             }
         }
    } catch (NumberFormatException ex) {
        throw new InvalidParameterException(ExceptionUtil.getExceptionMessageInvalidParam(PRICE_PARAM));
}

Supporting Code Block:

try {
        final Double spotPrice = Double.parseDouble(launchTemplateOverrides.getSpotPrice());
        if (spotPrice.isNaN() || spotPrice <= 0 || spotPrice.isInfinite()) {
            throw new InvalidSpotFleetRequestConfigException(
                ExceptionUtil.getExceptionMessageInvalidParam(LAUNCH_TEMPLATE_OVERRIDE_SPOT_PRICE_PARAM));
         }
      } catch (NumberFormatException ex) {
          throw new InvalidSpotFleetRequestConfigException(
                ExceptionUtil.getExceptionMessageInvalidParam(LAUNCH_TEMPLATE_OVERRIDE_SPOT_PRICE_PARAM));
 }

There are 4 occurrences of handling an exception thrown by Double.parseDouble by catching it and throwing InvalidSpotFleetRequestConfigException. The detected code snippet, in a very similar context, simply uses InvalidParameterException, which does not provide as much detail as the customized exception. The developer responded with “yup the detection sounds right” and made the revision.

Responses to the Inconsistency Detector Findings

Large repos are typically developed by more than one developer over time. Consistency among code snippets that serve the same or similar purpose, in code style, and elsewhere can therefore easily be lost, especially in repos with months or years of development history. As a result, the inconsistency detector in CodeGuru Reviewer has flagged numerous findings and alerted developers within Amazon to hundreds of inconsistency issues before they hit production.

The inconsistency detections have witnessed a high developer acceptance rate, and developer feedback of the inconsistency detector has been very positive. Some feedback from the developers includes “will fix. Didn’t realize as this was just copied from somewhere else.”, “Oops!, I’ll correct it and use consistent message format.”, “Interesting. Now it’s learning from our own code. AI is trying to take our jobs haha”, and “lol yeah, this was pretty surprising”. Developers have also concurred that the findings are essential and must be fixed.

Conclusion

Inconsistency bugs are difficult to detect or replicate during testing. They can impact production services availability. As a result, it’s important to automatically detect these bugs early in the software development lifecycle, such as during pull requests or code scans. The CodeGuru Reviewer inconsistency detector combines static code analysis algorithms with ML and data mining in order to surface only the high confidence deviations. It has a high developer acceptance rate and has alerted developers within Amazon to hundreds of inconsistencies before they reach production.

CICD on Serverless Applications using AWS CodeArtifact

Post Syndicated from Anand Krishna original https://aws.amazon.com/blogs/devops/cicd-on-serverless-applications-using-aws-codeartifact/

Developing and deploying applications rapidly to users requires a working pipeline that accepts the user code (usually via a Git repository). AWS CodeArtifact was announced in 2020. It’s a secure and scalable artifact management product that easily integrates with other AWS products and services. CodeArtifact allows you to publish, store, and view packages, list package dependencies, and share your application’s packages.

In this post, I will show how to build a simple DevOps pipeline for a sample Java application (a JAR file) built with Maven.

Solution Overview

We utilize the following AWS services, tools, and frameworks to set up our continuous integration, continuous deployment (CI/CD) pipeline:

  • AWS CodePipeline
  • AWS CodeCommit
  • AWS CodeBuild
  • AWS CodeArtifact
  • AWS Lambda
  • Amazon CloudWatch Events
  • Amazon Simple Notification Service (Amazon SNS)
  • AWS Cloud Development Kit (AWS CDK)
  • Apache Maven

The following diagram illustrates the pipeline architecture and flow:

 

aws-codeartifact-pipeline

 

Our pipeline is built on CodePipeline with CodeCommit as the source (CodePipeline Source stage). A commit to the CodeCommit repository’s main branch triggers the pipeline via a CloudWatch Events rule; the code is then fetched from the repository and passed to the next pipeline stage. This CodeBuild stage compiles, packages, and publishes the code to CodeArtifact using a package manager—in this case, Maven.

After Maven publishes the code to CodeArtifact, the pipeline waits for a manual approval, which can be granted directly in the pipeline. It can also optionally trigger an email alert via Amazon Simple Notification Service (Amazon SNS). After approval, the pipeline moves to another CodeBuild phase, which downloads the latest packaged JAR file from the CodeArtifact repository and deploys it to the AWS Lambda function.

Clone the Repository

Clone the GitHub repository as follows:

git clone https://github.com/aws-samples/aws-cdk-codeartifact-pipeline-sample.git

Code Deep Dive

After the Git repository is cloned, the directory structure is shown in the following screenshot:

aws-codeartifact-pipeline-code

Let’s study the files and code to understand how the pipeline is built.

The directory java-events is a sample Java Maven project. You can find numerous sample applications on GitHub; for this post, we use the java-events sample application.

To add your own application code, place the pom.xml and settings.xml files in the root directory for the AWS CDK project.

Let’s study the code in the file lib/cdk-pipeline-codeartifact-new-stack.ts of the stack CdkPipelineCodeartifactStack. This is the heart of the AWS CDK code that builds the whole pipeline. The stack does the following:

  • Creates a CodeCommit repository called ca-pipeline-repository.
  • References a CloudFormation template (lib/ca-template.yaml) in the AWS CDK code via the module @aws-cdk/cloudformation-include.
  • Creates a CodeArtifact domain called cdkpipelines-codeartifact.
  • Creates a CodeArtifact repository called cdkpipelines-codeartifact-repository.
  • Creates a CodeBuild project called JarBuild_CodeArtifact. This CodeBuild phase does all of the code compiling, packaging, and publishing to CodeArtifact into a repository called cdkpipelines-codeartifact-repository.
  • Creates a CodeBuild project called JarDeploy_Lambda_Function. This phase fetches the latest artifact from CodeArtifact created in the previous step (cdkpipelines-codeartifact-repository) and deploys to the Lambda function.
  • Finally, creates a pipeline with four phases:
    • Source as CodeCommit (ca-pipeline-repository).
    • CodeBuild project JarBuild_CodeArtifact.
    • A Manual approval Stage.
    • CodeBuild project JarDeploy_Lambda_Function.

 

CodeArtifact shows the domain- and repository-specific connection settings to add to the application’s pom.xml and settings.xml files, as shown below:

aws-codeartifact-repository-connections

Deploy the Pipeline

The AWS CDK code requires the following packages in order to build the CI/CD pipeline:

  • @aws-cdk/core
  • @aws-cdk/aws-codepipeline
  • @aws-cdk/aws-codepipeline-actions
  • @aws-cdk/aws-codecommit
  • @aws-cdk/aws-codebuild
  • @aws-cdk/aws-iam
  • @aws-cdk/cloudformation-include

 

Install the required AWS CDK packages as below:

npm i @aws-cdk/core @aws-cdk/aws-codepipeline @aws-cdk/aws-codepipeline-actions @aws-cdk/aws-codecommit @aws-cdk/aws-codebuild @aws-cdk/pipelines @aws-cdk/aws-iam @aws-cdk/cloudformation-include

Compile the AWS CDK code:

npm run build

Deploy the AWS CDK code:

cdk synth
cdk deploy

After the AWS CDK code is deployed, view the final output on the stack’s detail page in the AWS CloudFormation console:

aws-codeartifact-pipeline-cloudformation-stack

 

How the pipeline works with artifact versions (using SNAPSHOTS)

In this demo, I publish a SNAPSHOT to the repository. As per the documentation here and here, a SNAPSHOT refers to the most recent code along a branch. It’s a development version preceding the final release version. A snapshot version of a Maven package is identified by the -SNAPSHOT suffix appended to the package version.

The application settings are defined in the pom.xml file. For this post, we define the following:

  • The version to be used, called 1.0-SNAPSHOT.
  • The specific packaging, called jar.
  • The specific project display name, called JavaEvents.
  • The specific group ID, called JavaEvents.

The screenshot below shows the pom.xml settings we utilized in the application:

aws-codeartifact-pipeline-pom-xml

 

You can’t republish a package asset that already exists with different content, as per the documentation here.

When a Maven snapshot is published, its previous version is preserved in a new version called a build. Each time a Maven snapshot is published, a new build version is created.

When a Maven snapshot is published, its status is set to Published, and the status of the build containing the previous version is set to Unlisted. If you request a snapshot, the version with status Published is returned. This is always the most recent Maven snapshot version.

For example, the image below shows the state after the pipeline’s first run. The latest version has the status Published, and previous builds are marked Unlisted.

aws-codeartifact-repository-package-versions

 

On each subsequent pipeline run, additional Unlisted versions appear, as all previous versions of a snapshot are retained as build versions.

aws-codeartifact-repository-package-versions

 

Fetching the Latest Code

Retrieve the snapshot from the repository in order to deploy the code to an AWS Lambda function. I have used the AWS CLI to list and fetch the latest asset of package version 1.0-SNAPSHOT.

 

Listing the latest snapshot

export ListLatestArtifact=`aws codeartifact list-package-version-assets --domain cdkpipelines-codeartifact --domain-owner $Account_Id --repository cdkpipelines-codeartifact-repository --namespace JavaEvents --format maven --package JavaEvents --package-version "1.0-SNAPSHOT" | jq ".assets[].name" | grep jar | sed 's/"//g'`

NOTE: $Account_Id is a variable that represents the AWS account ID.

 

Fetching the latest code using Package Version

aws codeartifact get-package-version-asset --domain cdkpipelines-codeartifact --repository cdkpipelines-codeartifact-repository --format maven --package JavaEvents --package-version 1.0-SNAPSHOT --namespace JavaEvents --asset $ListLatestArtifact demooutput

Notice that I refer to the latest asset by using the variable $ListLatestArtifact. This always fetches the latest code, and demooutput is the output file of the AWS CLI command where the content (code) is saved.

 

Testing the Pipeline

Now clone the CodeCommit repository that we created with the following code:

git clone https://git-codecommit.<region>.amazonaws.com/v1/repos/ca-pipeline-repository

 

Enter the following commands to push the code to the CodeCommit repository:

cp -rp cdk-pipeline-codeartifact-new/* ca-pipeline-repository
cd ca-pipeline-repository
git checkout -b main
git add .
git commit -m "testing the pipeline"
git push origin main

Once the code is pushed to the Git repository, the pipeline is automatically triggered by Amazon CloudWatch Events.

The following screenshots show the second phase (AWS CodeBuild phase – JarBuild_CodeArtifact) of the pipeline, wherein the asset is successfully compiled and published to the CodeArtifact repository by Maven:

aws-codeartifact-pipeline-codebuild-jarbuild

aws-codeartifact-pipeline-codebuild-screenshot

aws-codeartifact-pipeline-codebuild-screenshot2

 

The following screenshots show the last phase (AWS CodeBuild phase – Deploy-to-Lambda) of the pipeline, wherein the latest asset is successfully pulled and deployed to the AWS Lambda function.

Asset JavaEvents-1.0-20210618.131629-5.jar is the latest snapshot code for package version 1.0-SNAPSHOT. This is the same asset version that is deployed to the Lambda function, as seen in the screenshots below:

aws-codeartifact-pipeline-codebuild-jardeploy

aws-codeartifact-pipeline-codebuild-screenshot-jarbuild

The following screenshot of the pipeline shows a successful run. The code was fetched and deployed to the existing Lambda function (codeartifact-test-function).

aws-codeartifact-pipeline-codepipeline

Cleanup

To clean up, you can either delete the entire stack through the AWS CloudFormation console or use the AWS CDK command below:

cdk destroy

For more information on the AWS CDK commands, please check the documentation here or the sample here.

Summary

In this post, I demonstrated how to build a CI/CD pipeline for your serverless application with AWS CodePipeline by utilizing AWS CDK with AWS CodeArtifact. Please check the documentation here for an in-depth explanation regarding other package managers and the getting started guide.

Introducing new self-paced courses to improve Java and Python code quality with Amazon CodeGuru

Post Syndicated from Rafael Ramos original https://aws.amazon.com/blogs/devops/new-self-paced-courses-to-improve-java-and-python-code-quality-with-amazon-codeguru/

Amazon CodeGuru icon

During the software development lifecycle, organizations have adopted peer code reviews as a common practice to keep improving code quality and prevent bugs from reaching applications in production. Developers traditionally perform those code reviews manually, which causes bottlenecks and blocks releases while waiting for the peer review. Besides impacting the teams’ agility, it’s a challenge to maintain a high bar for code reviews throughout the development workflow. This is especially true for less experienced developers, who have more difficulty identifying defects such as thread-safety issues and resource leaks.

With Amazon CodeGuru Reviewer, developers have an automated code review tool that catches critical issues, security vulnerabilities, and hard-to-find bugs during application development. CodeGuru Reviewer is powered by pre-trained machine learning (ML) models and was trained on millions of code reviews across thousands of open-source and Amazon repositories. It also provides recommendations on how to fix issues to improve code quality and reduces the time it takes to fix bugs before they reach customer-facing applications. Java and Python developers can simply add Amazon CodeGuru to their existing development pipeline, saving time and reducing the cost and burden of bad code.

If you’re new to writing code or an experienced developer looking to automate code reviews, we’re excited to announce two new courses on CodeGuru Reviewer. These courses, developed by the AWS Training and Certification team, consist of guided walkthroughs, gaming elements, knowledge checks, and a final course assessment.

About the course

During these courses, you learn how to use CodeGuru Reviewer to automatically scan your code base, identify hard-to-find bugs and vulnerabilities, and get recommendations for fixing the bugs and security issues. The course covers CodeGuru Reviewer’s main features, provides a peek into how CodeGuru finds code anomalies, describes how its ML models were built, and explains how to understand and apply its prescriptive guidance and recommendations. Besides helping to improve code quality, those recommendations help new developers learn coding best practices, such as refactoring duplicated code, correctly implementing concurrency constructs, and avoiding resource leaks.

The CodeGuru courses are designed to be completed within a 2-week time frame. The courses comprise 60 minutes of videos, which include 15 main lectures. Four of the lectures are specific to Java, and four focus on Python. The courses also include exercises and assessments at the end of each week, to provide you with in-depth, hands-on practice in a lab environment.

Week 1

During the first week, you learn the basics of CodeGuru Reviewer, including how you can benefit from ML and automated reasoning to perform static code analysis and identify critical defects and deviations from coding best practices. You also learn what kind of actionable recommendations CodeGuru Reviewer provides, such as refactoring, resource leaks, potential race conditions, deadlocks, and security analysis. In addition, the course covers how to integrate this tool into your development workflow, such as your CI/CD pipeline.

Topics include:

  • What is Amazon CodeGuru?
  • How CodeGuru Reviewer is trained to provide intelligent recommendations
  • CodeGuru Reviewer recommendation categories
  • How to integrate CodeGuru Reviewer into your workflow

Week 2

Throughout the second week, you have the chance to explore CodeGuru Reviewer in more depth. With Java and Python code snippets, you have a more hands-on experience and dive into each recommendation category. You use these examples to learn how CodeGuru Reviewer looks for duplicated lines of code to suggest refactoring opportunities, how it detects code maintainability issues, and how it prevents resource leaks and concurrency bugs.

Topics include (for both Java and Python):

  • Common coding best practices
  • Resource leak prevention
  • Security analysis

Get started

Developed at the source, these new digital courses empower you to learn about CodeGuru from the experts at AWS whenever and wherever you want. Advance your skills and knowledge to build your future in the AWS Cloud. Enroll today.

Rafael Ramos

Rafael is a Solutions Architect at AWS, where he helps ISVs on their journey to the cloud. He spent over 13 years working as a software developer, and is passionate about DevOps and serverless. Outside of work, he enjoys playing tabletop RPG, cooking and running marathons.