Tag Archives: Compute

Creating an EC2 instance in the AWS Wavelength Zone

Post Syndicated from Bala Thekkedath original https://aws.amazon.com/blogs/compute/creating-an-ec2-instance-in-the-aws-wavelength-zone/

Creating an EC2 instance in the AWS Wavelength Zone

This blog post is contributed by Saravanan Shanmugam, Lead Solution Architect, AWS Wavelength

AWS announced Wavelength at re:Invent 2019 in partnership with Verizon in the US, SK Telecom in South Korea, KDDI in Japan, and Vodafone in the UK and Europe. Following the re:Invent 2019 announcement, on August 6, 2020, AWS announced general availability of one Wavelength Zone with Verizon in Boston connected to the US East (N. Virginia) Region and one in San Francisco connected to the US West (Oregon) Region.

In this blog, I walk you through the steps required to create an Amazon EC2 instance in an AWS Wavelength Zone from the AWS Management Console. We also address questions asked by our customers regarding the different protocol traffic allowed into and out of AWS Wavelength Zones.

Customers who want to access AWS Wavelength Zones and deploy their applications to a Wavelength Zone can sign up using this link. Customers who opted in to access the AWS Wavelength Zone can confirm the status in the Account Attributes section of the EC2 console, as shown in the following image.
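
If you prefer the command line, the opt-in status can also be managed with the AWS CLI. The following is a minimal sketch; the zone group name (us-west-2-wl1) and Region are examples, so substitute the values shown for your own account:

# Opt in to the Wavelength Zone group (example group name)
aws ec2 modify-availability-zone-group \
  --region us-west-2 \
  --group-name us-west-2-wl1 \
  --opt-in-status opted-in

# Confirm the Wavelength Zones visible to your account
aws ec2 describe-availability-zones \
  --region us-west-2 \
  --all-availability-zones \
  --filters Name=zone-type,Values=wavelength-zone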

Services and features

AWS Wavelength Zones are Availability Zones inside the Carrier Service Provider network, closer to the edge of the mobile network. Wavelength Zones bring core AWS compute and storage services, like Amazon EC2 and Amazon EBS, that can be used by other services like Amazon EKS and Amazon ECS. We look at Wavelength Zones as a hub-and-spoke model, where developers can deploy latency-sensitive, high-bandwidth applications at the edge and non-latency-sensitive, data-persistent applications in the Region.

Wavelength Zones support three Nitro-based Amazon EC2 instance types, t3 (t3.medium, t3.xlarge), r5 (r5.2xlarge), and g4 (g4dn.2xlarge), with the gp2 EBS volume type. Customers can also use Amazon ECS and Amazon EKS to deploy container applications at the edge. Other AWS services, like AWS CloudFormation templates, CloudWatch, IAM resources, and Organizations, continue to work as expected, providing you a consistent experience. You can also leverage the full suite of services, like Amazon S3, in the parent Region over AWS’s private network backbone. Now that we have reviewed AWS Wavelength and the services and features associated with it, let us talk about the steps to launch an EC2 instance in the AWS Wavelength Zone.

Creating a Subnet in the Wavelength Zone

Once the Wavelength Zone is enabled for your AWS account, you can extend your existing VPC from the parent Region to a Wavelength Zone by creating a new VPC subnet assigned to the AWS Wavelength Zone. Customers can also create a new VPC and then a subnet to deploy their applications in the Wavelength Zone. The following image shows the subnet creation step, where you pick the Wavelength Zone as the Availability Zone for the subnet.
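
If you prefer to script this step, the subnet can also be created with the AWS CLI. The following sketch uses placeholder values; replace the VPC ID and CIDR block with your own, and the Wavelength Zone name shown here is only an example for the San Francisco Zone:

aws ec2 create-subnet \
  --region us-west-2 \
  --vpc-id <your-vpc-id> \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-west-2-wl1-sfo-wlz-1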

Carrier Gateway

We have introduced a new gateway type called the Carrier Gateway, which allows you to route traffic from the Wavelength Zone subnet to the CSP network and to the internet. Carrier Gateways are similar to the internet gateway in the Region. The Carrier Gateway is also responsible for NATing the traffic to and from the Wavelength Zone subnets, mapping it to the carrier IP address assigned to the instances.

Creating a Carrier Gateway

In the VPC console, you can now create a Carrier Gateway and attach it to your VPC.

You select the VPC to which the Carrier Gateway must be attached. There is also an option to select “Route subnet traffic to the Carrier Gateway” in the Carrier Gateway creation step. By selecting this option, you can pick the Wavelength subnets whose default route you want to point to the Carrier Gateway. This option automatically deletes the existing route table associated with the subnets, creates a new route table, creates a default route entry, and attaches the new route table to the subnets you selected. The following picture captures the necessary input required while creating a Carrier Gateway.
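
The same steps can be scripted with the AWS CLI. The following is a minimal sketch with placeholder IDs; it creates the Carrier Gateway, then adds a default route pointing at it in the route table associated with your Wavelength subnet:

# Create the Carrier Gateway in your VPC
aws ec2 create-carrier-gateway \
  --region us-west-2 \
  --vpc-id <your-vpc-id>

# Add a default route to the Carrier Gateway in the Wavelength subnet route table
aws ec2 create-route \
  --region us-west-2 \
  --route-table-id <your-route-table-id> \
  --destination-cidr-block 0.0.0.0/0 \
  --carrier-gateway-id <your-carrier-gateway-id>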

 

Creating an EC2 instance in a Wavelength Zone with Private IP Address

Once a VPC subnet is created for the AWS Wavelength Zone, you can launch an EC2 instance with a private IP address using the EC2 launch wizard. In the configure instance details step, you can select the Wavelength Zone subnet that you created in the “Creating a Subnet” section.

Attach an IAM instance profile that includes the SSM role, which allows you to open a shell session on the instance through AWS Systems Manager (SSM) Session Manager. This is a recommended practice for Wavelength Zone instances, as no direct SSH access is allowed from the public internet.
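
For example, once the instance profile is attached and the instance is registered with SSM, you can open a session from your workstation as follows. This assumes the Session Manager plugin for the AWS CLI is installed, and the instance ID is a placeholder:

aws ssm start-session \
  --region us-west-2 \
  --target <your-instance-id>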

Creating an EC2 instance in a Wavelength Zone with Carrier IP Address

The instances running in the Wavelength Zone subnets can obtain a Carrier IP address, which is allocated from a pool of IP addresses called a Network Border Group (NBG). To create an EC2 instance in the Wavelength Zone with a carrier-routable IP address, you can use the AWS CLI. You can use the following command to create an EC2 instance in a Wavelength Zone subnet. Note the additional network interface (NIC) option “AssociateCarrierIpAddress” as part of the EC2 run-instances command, as shown in the following command.

aws ec2 run-instances --region us-west-2 \
  --network-interfaces '[{"DeviceIndex":0, "AssociateCarrierIpAddress": true, "SubnetId": "<subnet-0d3c2c317ac4a262a>"}]' \
  --image-id <ami-0a07be880014c7b8e> \
  --instance-type t3.medium \
  --key-name <san-francisco-wavelength-sample-key>

To use the “AssociateCarrierIpAddress” option in the ec2 run-instances command, use the latest AWS CLI v2.

The carrier IP assigned to the EC2 instance can be obtained by running the following command.

aws ec2 describe-instances --instance-ids <replace-with-your-instance-id> --region us-west-2

Make the necessary changes to the default security group that is attached to the EC2 instance after running the run-instances command to allow the required protocol traffic. If you allow ICMP traffic to your EC2 instance, you can test ICMP connectivity to your instance from the public internet.
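
For example, to allow inbound ICMP from the internet for a quick ping test, you could add a rule to the instance’s security group with the AWS CLI. The security group ID is a placeholder, and you should tighten the CIDR range for anything beyond a short-lived test:

aws ec2 authorize-security-group-ingress \
  --region us-west-2 \
  --group-id <your-security-group-id> \
  --protocol icmp \
  --port -1 \
  --cidr 0.0.0.0/0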

The different protocols allowed in and out of the Wavelength Zone are captured in the following tables.

 

TCP Connection FROM | TCP Connection TO | Result*
Region | Wavelength Zones | Allowed
Wavelength Zones | Region | Allowed
Wavelength Zones | Internet | Allowed
Internet (TCP SYN) | Wavelength Zones | Blocked
Internet (TCP EST) | Wavelength Zones | Allowed
Wavelength Zones | UE (Radio) | Allowed
UE (Radio) | Wavelength Zones | Allowed

 

UDP Packets FROM | UDP Packets TO | Result*
Wavelength Zones | Wavelength Zones | Allowed
Wavelength Zones | Region | Allowed
Wavelength Zones | Internet | Allowed
Internet | Wavelength Zones | Blocked
Wavelength Zones | UE (Radio) | Allowed
UE (Radio) | Wavelength Zones | Allowed

 

ICMP FROM | ICMP TO | Result*
Wavelength Zones | Wavelength Zones | Allowed
Wavelength Zones | Region | Allowed
Wavelength Zones | Internet | Allowed
Internet | Wavelength Zones | Allowed
Wavelength Zones | UE (Radio) | Allowed
UE (Radio) | Wavelength Zones | Allowed

Conclusion

We have covered how to create and run an EC2 instance in an AWS Wavelength Zone, the core foundation for application deployments. We will continue to publish blogs to help customers create ECS and EKS clusters in AWS Wavelength Zones and deploy container applications at the mobile carrier edge. We are really looking forward to seeing what you can do with them. AWS would love to get your advice on additional local services/features or other interesting use cases, so feel free to leave us your comments!

 

EFA-enabled C5n instances to scale Simcenter STAR-CCM+

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/efa-enabled-c5n-instances-to-scale-simcenter-star-ccm/

This post was contributed by Dnyanesh Digraskar, Senior Partner SA, High Performance Computing; Linda Hedges, Principal SA, High Performance Computing

In this blog, we define and demonstrate the scalability metrics for a typical real-world application using Computational Fluid Dynamics (CFD) software from Siemens, Simcenter STAR-CCM+, running on a High Performance Computing (HPC) cluster on Amazon Web Services (AWS). This scenario demonstrates the scaling of an external aerodynamics CFD case with 97 million cells to over 4,000 cores of Amazon EC2 C5n.18xlarge instances using the Simcenter STAR-CCM+ software. We also discuss the effects of scaling on efficiency, simulation turn-around time, and total simulation costs. TLG Aerospace, a Seattle-based aerospace engineering services company, contributed the data used in this blog. For a detailed case study describing TLG Aerospace’s experience and the results they achieved, see the TLG Aerospace case study.

For HPC workloads that use multiple nodes, the cluster setup, including the network, is at the heart of scalability concerns. Some of the most common questions from CFD or HPC engineers are “how well will my application scale on AWS?”, “how do I optimize the associated costs for the best performance of my application on AWS?”, and “what are the best practices in setting up an HPC cluster on AWS to reduce the simulation turn-around time and maintain high efficiency?” This post aims to answer these questions by defining and explaining important scalability-related parameters and by illustrating the results from the CFD case. For detailed HPC-specific information, visit the High Performance Computing page and download the CFD whitepaper, Computational Fluid Dynamics on AWS.

CFD scaling on AWS

Scale-up

HPC applications, such as CFD, depend heavily on the applications’ ability to scale compute tasks efficiently in parallel across multiple compute resources. We often evaluate parallel performance by determining an application’s scale-up. Scale-up – a function of the number of processors used – is the time to complete a run on one processor, divided by the time to complete the same run on the number of processors used for the parallel run.

Scale-up formula
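
Written out from the definition in the preceding paragraph, with T_1 the time to complete the run on one processor and T_n the time on n processors:

$$ \text{Scale-up}(n) = \frac{T_1}{T_n} $$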

In addition to characterizing the scale-up of an application, scalability can be further characterized as “strong” or “weak”. Strong scaling offers a traditional view of application scaling, where a problem size is fixed and spread over an increasing number of processors. As more processors are added to the calculation, good strong scaling means that the time to complete the calculation decreases proportionally with increasing processor count. In comparison, weak scaling does not fix the problem size used in the evaluation, but purposely increases the problem size as the number of processors also increases. An application demonstrates good weak scaling when the time to complete the calculation remains constant as the ratio of compute effort to the number of processors is held constant. Weak scaling offers insight into how an application behaves with varying case size.

Figure 1, the following image, shows scale-up as a function of increasing processor count for the Simcenter STAR-CCM+ case data provided by TLG Aerospace. This is a demonstration of “strong” scalability. The blue line shows what ideal or perfect scalability looks like. The purple triangles show the actual scale-up for the case as a function of increasing processor count. The closeness of these two curves demonstrates excellent scaling to well over 3,000 processors for this mid-to-large-sized 97M cell case. This example was run on Amazon EC2 C5n.18xlarge Intel Skylake instances, 3.0 GHz, each providing 36 cores with Hyper-Threading disabled.

Figure 1. Strong scaling demonstrated for a 97M cell Simcenter STAR-CCM+ CFD calculation

Efficiency

Now that you understand the variation of scale-up with the number of processors, we discuss the relation of scale-up with number of grid cells per processor, which determines the efficiency of the parallel simulation. Efficiency is the scale-up divided by the number of processors used in the calculation. By plotting grid cells per processor, as in Figure 2, scaling estimates can be made for simulations with different grid sizes with Simcenter STAR-CCM+. The purple line in Figure 2 shows scale-up as a function of grid cells per processor. The vertical axis for scale-up is on the left-hand side of the graph as indicated by the purple arrow. The green line in Figure 2 shows efficiency as a function of grid cells per processor. The vertical axis for efficiency is on the right side of the graph and is indicated by the green arrow.
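
Using the same notation as the scale-up formula, efficiency can be written as:

$$ \text{Efficiency}(n) = \frac{\text{Scale-up}(n)}{n} = \frac{T_1}{n \, T_n} $$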

Figure 2. Scale-up and efficiency as a function of cells per processor.

Fewer grid cells per processor means reduced computational effort per processor. Maintaining efficiency while reducing cells per processor demonstrates the strong scalability of Simcenter STAR-CCM+ on AWS.

Efficiency remains at about 100% between approximately 700,000 cells per processor core and 60,000 cells per processor core. Efficiency starts to fall off at about 60,000 cells per core. An efficiency of at least 80% is maintained until 25,000 cells per core. Decreasing cells per core leads to decreased efficiency because the total computational effort per processor core is reduced. Achieving more than 100% efficiency (here, at about 250,000 cells per core) is common in scaling studies, is case-specific, and is often related to smaller effects such as timing variation and memory caching.

Turn-around time and cost

Case turn-around time and cost are what really matter to most HPC users. A plot of turn-around time versus CPU cost for this case is shown in Figure 3. As the number of cores increases, the total turn-around time decreases. But as the number of cores increases, the inefficiency also increases, which leads to increased costs. The cost, represented by the solid blue curve, is based on the On-Demand price for the C5n.18xlarge, and only includes the computational costs. Small costs are also incurred for data storage. Minimum cost and turn-around time are achieved with approximately 60,000 cells per core.

Figure 3. Cost per run for: On-Demand pricing ($3.888 per hour for C5n.18xlarge in US-East-1) with and without the Simcenter STAR-CCM+ POD license cost as a function of turn-around time [Blue]; 3-yr all-upfront pricing ($1.475 per hour for C5n.18xlarge in US-East-1) [Green]

Many users choose a cells-per-core count to achieve the lowest possible cost. Others may choose a cells-per-core count to achieve the fastest turn-around time. If a run is desired in one-third the time of the lowest price point, it can be achieved with approximately 25,000 cells per core.

Additional information about the test scenario

TLG Aerospace used the Simcenter STAR-CCM+ Power-On-Demand (POD) license for running the simulations for this case. The POD license enables flexible on-demand usage of the software on unlimited cores for a fixed price of $22 per hour. The total cost per run, which includes the computational cost plus the POD license cost, is represented in Figure 3 by the dashed blue curve. As the POD license is charged per hour, the total cost per run increases for higher turn-around times. Note that many users run Simcenter STAR-CCM+ with fewer cells per core than this case. While this increases the compute cost, other concerns, such as license costs or schedules, can be overriding factors. However, many find the reduced turn-around time well worth the price of the additional instances.

AWS also offers Savings Plans, a flexible pricing model that offers substantially lower prices on EC2 instances compared to On-Demand pricing in exchange for a committed usage of a 1- or 3-year term. For example, the 3-year all-upfront pricing of the C5n.18xlarge instance is 62% cheaper than the On-Demand pricing. The total cost per run using the 3-year all-upfront pricing model is illustrated in Figure 3 by the solid green line. The 3-year all-upfront pricing plan offers a substantial reduction in price for running the simulations.

Amazon Linux is optimized to run on AWS and offers excellent performance for running HPC applications. For the case presented here, the operating system used was Amazon Linux 2. While other Linux distributions are also performant, we strongly recommend that for Linux HPC applications, you use a current Linux kernel.

Amazon Elastic Block Store (Amazon EBS) is a persistent, block-level storage device that is often used for cluster storage on AWS. A standard EBS General Purpose SSD (gp2) volume was used for this scenario. For other HPC applications that may require faster I/O to prevent data writes from being a bottleneck to turn-around speed, we recommend Amazon FSx for Lustre. FSx for Lustre seamlessly integrates with Amazon S3, allowing users to interact efficiently with data stored in Amazon S3.

AWS customers can choose to run their applications on either threads or cores. With hyper-threading, a single CPU physical core appears as two logical CPUs to the operating system. For an application like Simcenter STAR-CCM+, excellent linear scaling can be seen when using either threads or cores, though we generally recommend disabling hyper-threading. Most HPC applications benefit from disabling hyper-threading, so it tends to be the preferred environment for running HPC workloads. For more information, see the Well-Architected Framework HPC Lens.

Elastic Fabric Adapter (EFA)

Elastic Fabric Adapter (EFA) is a network device that can be attached to Amazon EC2 instances to accelerate HPC applications by providing lower and consistent latency and higher throughput than the Transmission Control Protocol (TCP) transport. C5n.18xlarge instances used for running Simcenter STAR-CCM+ for this case support EFA technology, which is generally recommended for best scaling.
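
As a rough illustration, a single C5n.18xlarge cluster node with EFA enabled and hyper-threading disabled could be launched with a command along these lines. The AMI, subnet, security group, key pair, and placement group values are placeholders, and the exact options you need depend on how your cluster is orchestrated:

aws ec2 run-instances \
  --region us-east-1 \
  --instance-type c5n.18xlarge \
  --image-id <your-hpc-ami-id> \
  --key-name <your-key-pair> \
  --cpu-options CoreCount=36,ThreadsPerCore=1 \
  --placement GroupName=<your-cluster-placement-group> \
  --network-interfaces DeviceIndex=0,SubnetId=<your-subnet-id>,Groups=<your-security-group-id>,InterfaceType=efa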

Summary

This post demonstrates the scalability of the commercial CFD software Simcenter STAR-CCM+ for an external aerodynamics simulation performed on Amazon EC2 C5n.18xlarge instances. The availability of EFA, a high-performing network device, on these instances results in excellent scalability of the application. The case turn-around time and associated costs of running Simcenter STAR-CCM+ on AWS hardware are discussed. In general, excellent performance can be achieved on AWS for most HPC applications. In addition to low cost and quick turn-around time, important considerations for HPC also include throughput and availability. AWS offers high throughput, scalability, security, cost savings, and high availability, decreasing long queue times and reducing case turn-around time.

New EC2 T4g Instances – Burstable Performance Powered by AWS Graviton2 – Try Them for Free

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-t4g-instances-burstable-performance-powered-by-aws-graviton2/

Two years ago, Amazon Elastic Compute Cloud (EC2) T3 instances were first made available, offering a very cost-effective way to run general-purpose workloads. While current T3 instances offer sufficient compute performance for many use cases, many customers have told us that they have additional workloads that would benefit from increased peak performance and lower cost.

Today, we are launching T4g instances, a new generation of low-cost burstable instance types powered by AWS Graviton2, a processor custom built by AWS using 64-bit Arm Neoverse cores. Using T4g instances, you can enjoy a performance benefit of up to 40% at a 20% lower cost in comparison to T3 instances, providing the best price/performance for a broader spectrum of workloads.

T4g instances are designed for applications that don’t use CPU at full power most of the time, using the same credit model as T3 instances with unlimited mode enabled by default. Examples of production workloads that require high CPU performance only during times of heavy data processing are web/application servers, small/medium data stores, and many microservices. Compared to previous generations, the performance of T4g instances makes it possible to migrate additional workloads such as caching servers, search engine indexing, and e-commerce platforms.

T4g instances are available in 7 sizes providing up to 5 Gbps of network bandwidth and up to 2.7 Gbps of Amazon Elastic Block Store (EBS) performance:

Name | vCPUs | Baseline Performance/vCPU | CPU Credits Earned/Hour | Memory
t4g.nano | 2 | 5% | 6 | 0.5 GiB
t4g.micro | 2 | 10% | 12 | 1 GiB
t4g.small | 2 | 20% | 24 | 2 GiB
t4g.medium | 2 | 20% | 24 | 4 GiB
t4g.large | 2 | 30% | 36 | 8 GiB
t4g.xlarge | 4 | 40% | 96 | 16 GiB
t4g.2xlarge | 8 | 40% | 192 | 32 GiB

Free Trial
To make it easier to develop, test, and run your applications on T4g instances, all AWS customers are automatically enrolled in a free trial of the t4g.micro size. From September 2020 until December 31, 2020, you can run a t4g.micro instance and automatically get 750 free hours per month deducted from your bill, including any CPU credits used during the free 750 hours of usage. The 750 hours are calculated in aggregate across all Regions. For details on the terms and conditions of the free trial, please refer to the EC2 FAQs.

During the free trial, have a look at this getting started guide on using the Arm-based AWS Graviton processors. There, you can find suggestions on how to build and optimize your applications, using different programming languages and operating systems, and on managing container-based workloads. Some of the tips are specific for the Graviton processor, but most of the content works generally for anyone using Arm to run their code.

Using T4g Instances
You can start an EC2 instance in different ways, for example using the EC2 console, the AWS Command Line Interface (CLI), AWS SDKs, or AWS CloudFormation. For my first T4g instance, I use the AWS CLI:

$ aws ec2 run-instances \
  --instance-type t4g.micro \
  --image-id ami-09a67037138f86e67 \
  --security-groups MySecurityGroup \
  --key-name my-key-pair

The Amazon Machine Image (AMI) I am using is based on Amazon Linux 2. Other platforms are available, such as Ubuntu 18.04 or newer, Red Hat Enterprise Linux 8.0 and newer, and SUSE Enterprise Server 15 and newer. You can find additional AMIs in the AWS Marketplace, for example Fedora, Debian, NetBSD, CentOS, and NGINX Plus. For containerized applications, Amazon ECS and Amazon Elastic Kubernetes Service optimized AMIs are available as well.

The security group I selected gives me SSH access to the instance. I connect to the instance and do a general update:

$ sudo yum update -y

Since the kernel has been updated, I reboot the instance.

I’d like to set up this instance as a development environment. I can use it to build new applications, or to recompile my existing apps to the 64-bit Arm architecture. To install most development tools, such as Git, GCC, and Make, I use this group of packages:

$ sudo yum groupinstall -y "Development Tools"

AWS is working with several open source communities to drive improvements to the performance of software stacks running on AWS Graviton2. For example, you can see our contributions to PHP for Arm64 in this post.

Using the latest versions helps you obtain maximum performance from your Graviton2-based instances. The amazon-linux-extras command enables new versions for some of my favorite programming environments:

$ sudo amazon-linux-extras enable golang1.11 corretto8 php7.4 python3.8 ruby2.6

The output of the amazon-linux-extras command tells me which packages to install with yum:

$ yum clean metadata
$ sudo yum install -y golang java-1.8.0-amazon-corretto \
  php-cli php-pdo php-fpm php-json php-mysqlnd \
  python38 ruby ruby-irb rubygem-rake rubygem-json rubygems

Let’s check the versions of the tools that I just installed:

$ go version
go version go1.13.14 linux/arm64
$ java -version
openjdk version "1.8.0_265"
OpenJDK Runtime Environment Corretto-8.265.01.1 (build 1.8.0_265-b01)
OpenJDK 64-Bit Server VM Corretto-8.265.01.1 (build 25.265-b01, mixed mode)
$ php -v
PHP 7.4.9 (cli) (built: Aug 21 2020 21:45:13) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
$ python3.8 -V
Python 3.8.5
$ ruby -v
ruby 2.6.3p62 (2019-04-16 revision 67580) [aarch64-linux]

It looks like I am ready to go! Many more packages are available with yum, such as MariaDB and PostgreSQL. If you’re interested in databases, you might also want to try the preview of Amazon RDS powered by AWS Graviton2 processors.

Available Now
T4g instances are available today in US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo, Mumbai), and Europe (Frankfurt, Ireland).

You now have a broad choice of Graviton2-based instances to better optimize your workloads for cost and performance: low-cost burstable general-purpose (T4g), general-purpose (M6g), compute-optimized (C6g), and memory-optimized (R6g) instances. Local NVMe-based SSD storage options are also available.

You can use the free trial to develop new applications, or migrate your existing workloads to the AWS Graviton2 processor. Let me know how that goes!

Danilo

Jump-starting your serverless development environment

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/jump-starting-your-serverless-development-environment/

Developers building serverless applications often wonder how they can jump-start their local development environment. This blog post provides a broad guide for those developers wanting to set up a development environment for building serverless applications.

serverless development environment

AWS and open source tools for a serverless development environment.

To use AWS Lambda and other AWS services, create and activate an AWS account.

Command line tooling

Command line tools are scripts, programs, and libraries that enable rapid application development and interactions from within a command line shell.

The AWS CLI

The AWS Command Line Interface (AWS CLI) is an open source tool that enables developers to interact with AWS services using a command line shell. In many cases, the AWS CLI increases developer velocity for building cloud resources and enables automating repetitive tasks. It is an important piece of any serverless developer’s toolkit. Follow these instructions to install and configure the AWS CLI on your operating system.
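
A quick way to confirm that the CLI is installed and your credentials are configured is to check the version and call a harmless API, for example:

$ aws --version
$ aws configure        # enter your access key, secret key, default Region, and output format
$ aws sts get-caller-identity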

AWS enables you to build infrastructure with code. This provides a single source of truth for AWS resources. It enables development teams to use version control and create deployment pipelines for their cloud infrastructure. AWS CloudFormation provides a common language to model and provision these application resources in your cloud environment.

AWS Serverless Application Model (AWS SAM CLI)

AWS Serverless Application Model (AWS SAM) is an extension for CloudFormation that further simplifies the process of building serverless application resources.

It provides shorthand syntax to define Lambda functions, APIs, databases, and event source mappings. During deployment, the AWS SAM syntax is transformed into AWS CloudFormation syntax, enabling you to build serverless applications faster.

The AWS SAM CLI is an open source command line tool used to locally build, test, debug, and deploy serverless applications defined with AWS SAM templates.

Install AWS SAM CLI on your operating system.

Test the installation by initializing a new quick start project with the following command:

$ sam init
  1. Choose 1 for the “Quick Start Templates”.
  2. Choose 1 for the “Node.js runtime”.
  3. Use the default name.

The generated /sam-app/template.yaml contains all the resource definitions for your serverless application. This includes a Lambda function with a REST API endpoint, along with the necessary IAM permissions.

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Deploy this application using the AWS SAM CLI guided deploy:

$ sam deploy -g

Local testing with AWS SAM CLI

The AWS SAM CLI requires Docker containers to simulate the AWS Lambda runtime environment on your local development environment. To test locally, install Docker Engine and run the Lambda function with the following command:

$ sam local invoke "HelloWorldFunction" -e events/event.json

The first time this function is invoked, Docker downloads the lambci/lambda:nodejs12.x container image. It then invokes the Lambda function with a pre-defined event JSON file.
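
If you want to experiment with different event shapes, the AWS SAM CLI can also generate sample payloads for many event sources and emulate the REST API locally. For example (the event file name is just an example):

$ sam local generate-event apigateway aws-proxy > events/api-event.json
$ sam local invoke "HelloWorldFunction" -e events/api-event.json
$ sam local start-api
$ curl http://127.0.0.1:3000/hello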

Helper tools

There are a number of open source tools and packages available to help you monitor, author, and optimize your Lambda-based applications. Some of the most popular tools are shown in the following list.

Template validation tooling

CloudFormation Linter is a validation tool that helps with your CloudFormation development cycle. It analyzes CloudFormation YAML and JSON templates to resolve and validate intrinsic functions and resource properties. By analyzing your templates before deploying them, you can save valuable development time and build automated validation into your deployment release cycle.

Follow these instructions to install the tool.

Once installed, run the cfn-lint command with the path to your AWS SAM template provided as the first argument:

cfn-lint template.yaml

AWS SAM template validation with cfn-lint

The following example shows that the template is not valid because the !GettAtt function does not evaluate correctly.

IDE tooling

Use AWS IDE plugins to author and invoke Lambda functions from within your existing integrated development environment (IDE). AWS IDE toolkits are available for PyCharm, IntelliJ, and Visual Studio.

The AWS Toolkit for Visual Studio Code provides an integrated experience for developing serverless applications. It enables you to invoke Lambda functions, specify function configurations, locally debug, and deploy—all conveniently from within the editor. The toolkit supports Node.js, Python, and .NET.

The AWS Toolkit for Visual Studio Code

From Visual Studio Code, choose the Extensions icon on the Activity Bar. In the Search Extensions in Marketplace box, enter AWS Toolkit and then choose AWS Toolkit for Visual Studio Code as shown in the following example. This opens a new tab in the editor showing the toolkit’s installation page. Choose the Install button in the header to add the extension.

AWS Toolkit extension for Visual Studio Code

AWS Cloud9

Another option to build a development environment without having to install anything locally is to use AWS Cloud9. AWS Cloud9 is a cloud-based integrated development environment (IDE) for writing, running, and debugging code from within the browser.

It provides a seamless experience for developing serverless applications. It has a preconfigured development environment that includes AWS CLI, AWS SAM CLI, SDKs, code libraries, and many useful plugins. AWS Cloud9 also provides an environment for locally testing and debugging AWS Lambda functions. This eliminates the need to upload your code to the Lambda console. It allows developers to iterate on code directly, saving time, and improving code quality.

Follow this guide to set up AWS Cloud9 in your AWS environment.

Advanced tooling

Efficient configuration of Lambda functions is critical when expecting optimal cost and performance of your serverless applications. Lambda allows you to control the memory (RAM) allocation for each function.

Lambda charges based on the number of function requests and the duration, the time it takes for your code to run. The price for duration depends on the amount of RAM you allocate to your function. A smaller RAM allocation may reduce the performance of your application if your function is running compute-heavy workloads. If performance needs outweigh cost, you can increase the memory allocation.
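
As a rough illustration of how memory allocation and duration interact, assume a function configured with 512 MB that runs for 200 ms per invocation, and use the approximate published On-Demand rates at the time of writing (about $0.0000166667 per GB-second, plus $0.20 per million requests; check current pricing for your Region):

$$ \text{compute cost per invocation} \approx \frac{512}{1024}\,\text{GB} \times 0.2\,\text{s} \times \$0.0000166667/\text{GB-s} \approx \$0.0000017 $$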

Cost and performance optimization tooling

AWS Lambda power tuner is an open source tool that uses an AWS Step Functions state machine to suggest cost and performance optimizations for your Lambda functions. It invokes a given function with multiple memory configurations. It analyzes the execution log results to determine and suggest power configurations that minimize cost and maximize performance.

To deploy the tool:

  1. Clone the repository as follows:
    $ git clone https://github.com/alexcasalboni/aws-lambda-power-tuning.git
  2. Create an Amazon S3 bucket and enter the deployment configurations in /scripts/deploy.sh:
    # config
    BUCKET_NAME=your-sam-templates-bucket
    STACK_NAME=lambda-power-tuning
    PowerValues='128,512,1024,1536,3008'
  3. Run the deploy.sh script from your terminal. This uses the AWS SAM CLI to deploy the application:
    $ bash scripts/deploy.sh
  4. Run the power tuning tool from the terminal using the AWS CLI:
    aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-east-1:0123456789:stateMachine:powerTuningStateMachine-Vywm3ozPB6Am \
    --input "{\"lambdaARN\": \"arn:aws:lambda:us-east-1:1234567890:function:testytest\", \"powerValues\":[128,256,512,1024,2048],\"num\":50,\"payload\":{},\"parallelInvocation\":true,\"strategy\":\"cost\"}" \
    --output json
  5. The Step Functions execution output produces a link to a visual summary of the suggested results:

    AWS Lambda power tuning results

    AWS Lambda power tuning results

Monitoring and debugging tooling

Sls-dev-tools is an open source serverless tool that delivers serverless metrics directly to the terminal. It provides developers with feedback on their serverless application’s metrics and key bindings that deploy, open, and manipulate stack resources. Bringing this data directly to your terminal or IDE, reduces context switching between the developer environment and the web interfaces. This can increase application development speed and improve user experience.

Follow these instructions to install the tool onto your development environment.

To open the tool, run the following command:

$ sls-dev-tools

Follow the in-terminal interface to choose which stack to monitor or edit.

The following example shows how the tool can be used to invoke a Lambda function with a custom payload from within the IDE.

Invoke an AWS Lambda function with a custom payload using sls-dev-tools

Serverless database tooling

NoSQL Workbench for Amazon DynamoDB is a GUI application for modern database development and operations. It provides a visual IDE tool for data modeling and visualization with query development features to help build serverless applications with Amazon DynamoDB tables. Define data models using one or more tables and visualize the data model to see how it works in different scenarios. Run or simulate operations and generate the code for Python, JavaScript (Node.js), or Java.

Choose the correct operating system link to download and install NoSQL Workbench on your development machine.

The following example illustrates a connection to a DynamoDB table. A data scan is built using the GUI, with Node.js code generated for inclusion in a Lambda function:

Connecting to an Amazon DynamoDB table with NoSQL Workbench for Amazon DynamoDB

Generating query code with NoSQL Workbench for Amazon DynamoDB

Conclusion

Building serverless applications allows developers to focus on business logic instead of managing and operating infrastructure. This is achieved by using managed services. Developers often struggle with knowing which tools, libraries, and frameworks are available to help with this new approach to building applications. This post shows tools that builders can use to create a serverless developer environment to help accelerate software development.

This list represents AWS and open source tools but does not include our APN Partners. For partner offers, check here.

Read more to start building serverless applications.

New – AWS Fargate for Amazon EKS now supports Amazon EFS

Post Syndicated from Harunobu Kameda original https://aws.amazon.com/blogs/aws/new-aws-fargate-for-amazon-eks-now-supports-amazon-efs/

AWS Fargate is a serverless compute engine for containers available with both Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). With Fargate, developers are able to focus on building applications, eliminating the need to manage the infrastructure-related undifferentiated heavy lifting.

Developers specify resources for each Kubernetes pod, and are charged only for provisioned compute resources. When using Fargate, each EKS pod runs in its own kernel runtime environment, and CPU, memory, storage, and network resources are never shared with other pods, providing workload isolation and increased security.

Containers are ephemeral in nature. They are dynamically scaled in and out, and their saved state or data is cleared on exit. We’ve had many requests from our customers for data persistence and shared storage for containerized applications since launching EKS support for Fargate in 2019, and we announced Amazon Elastic File System (EFS) support for Fargate on ECS in April 2020. Now many customers are operating stateful workloads on it, and others have requested support for EFS with Fargate when used with EKS. Today we are happy to announce this EFS support.

EFS provides a simple, scalable, and fully managed shared file system for use with AWS cloud services, and can also help Kubernetes applications be highly available because all data written to EFS is written to multiple AWS Availability Zones. EFS is built for on-demand petabyte growth without application interruption, and it automatically grows and shrinks as files are added and removed, eliminating the need to provision and manage capacity to accommodate growth. EFS is also ideal for security-sensitive workloads, as it can encrypt data in the file system and data in transit.

Kubernetes supports the Container Storage Interface (CSI), a standard for exposing block and file storage systems to containerized workloads. The EFS CSI driver makes it simple to configure elastic file storage for Kubernetes clusters, and before this update customers could use EFS via Amazon EC2 worker nodes connected to a cluster. Now customers can also configure their pods running on Fargate to access an EFS file system using standard Kubernetes APIs. With this update, customers can run stateful workloads that require highly available file systems as well as workloads that require access to shared storage. Using the EFS CSI driver, all data in transit is encrypted by default.

We released a generally available version of the Amazon EFS CSI driver for EKS in July 2020. The Amazon EFS CSI driver makes it easy to configure elastic file storage for both EKS and self-managed Kubernetes clusters running on AWS using standard Kubernetes interfaces. If a Kubernetes pod is terminated and relaunched, the CSI driver reconnects the EFS file system, even if the pod is relaunched in a different AWS Availability Zone. When using standard EC2 worker nodes, the EFS CSI driver needs to be deployed as a set of pods and DaemonSets. With this new update, this step is not required for Fargate and you do not need to install the EFS CSI driver, as it is installed in the Fargate stack and support for EFS is provided out of the box. Customers can use EFS with Fargate for EKS without spending the time and resources to install and update the CSI driver.

How to configure the Fargate/EKS and EFS integration?

You need to use three Kubernetes objects to mount EFS on Fargate with EKS: StorageClass, PersistentVolume (PV), and PersistentVolumeClaim (PVC). Configuring the StorageClass and PVs are steps that an administrator (or similar) would perform to make EFS file systems available to application developers. PVCs are used to allocate PVs from the pool of existing PVs as needed to deploy applications.

The StorageClass object provides a way for a Kubernetes administrator to register a specific storage type (e.g. EFS or EBS) and configuration (e.g. throughput, backup policy). Once a StorageClass is defined the PV object is used to create actual storage volumes inside that class. StorageClass and PV are the Kubernetes mechanisms that allow actual storage subsystems to be abstracted and decoupled from the way they are consumed by Kubernetes users. For example, while a Kubernetes administrator needs to know how exactly to configure a specific storage configuration from a particular storage service, Kubernetes users do not because they only see their volumes within abstract classes of storage.

The last step is the binding: Kubernetes users requests access to said volumes via the PVC object and related API. These volumes can be created dynamically when the user requests them via the PVC or they need to be statically pre-created by an administrator for later consumption by a Kubernetes user. The current implementation of the EFS CSI driver requires the volumes to be statically pre-created for the PVC binding to work.

If you are new to Kubernetes persistent volumes and want to know more about how they work, please refer to this page in the Kubernetes documentation that has all the details.

Let’s see this in action. First, you need to create your own EFS file system in the same AWS Region. If you are not familiar with EFS this EFS getting start guide is a good resource you can start with.

Once you create an EFS file system, you get your file system ID. You can configure the mount settings using a Kubernetes StorageClass and PersistentVolume. Here is an example of the YAML files:

CSIDriver Object

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  attachRequired: false

For now you need to add the EFS CSIDriver object shown above to your cluster so Kubernetes can discover the driver that Fargate automatically installs. In the future, this manifest will be added by default to EKS clusters.

Storage Class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

PersistentVolume(PV)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: <EFS filesystem ID>

The volumeHandle is returned by the EFS service when you create a file system, and you need to use it to configure the CSI driver to create the PV. You can obtain the EFS file system ID from the AWS Management Console or with the following AWS CLI command.

aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text

Now that you have created a PV by applying the manifest above, you configure Kubernetes pods to access the EFS file system by including a PersistentVolumeClaim in the pod manifest. The following two manifest examples do that:

PersistentVolumeClaim(PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

Pod manifest

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
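
To try this end to end, you can apply the manifests and confirm that the pod is writing to the shared file system. The file names below are just examples for wherever you saved each manifest:

$ kubectl apply -f csidriver.yaml -f storageclass.yaml -f pv.yaml -f pvc.yaml -f pod.yaml
$ kubectl get pv,pvc
$ kubectl exec -it app1 -- cat /data/out1.txt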

Available Today

Today, this feature update is available for newly created EKS clusters with Kubernetes version 1.17, and we are planning to roll out support for this feature with additional Kubernetes versions on EKS in the coming weeks. This update is available in all AWS regions where Fargate with EKS is available. You can check our latest documentation for more detail.

– Kame

Amazon ECS Now Supports EC2 Inf1 Instances

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/amazon-ecs-now-supports-ec2-inf1-instances/

As machine learning and deep learning models become more sophisticated, hardware acceleration is increasingly required to deliver fast predictions at high throughput. Today, we’re very happy to announce that AWS customers can now use the Amazon EC2 Inf1 instances on Amazon ECS, for high performance and the lowest prediction cost in the cloud. For a few weeks now, these instances have also been available on Amazon Elastic Kubernetes Service.

A primer on EC2 Inf1 instances
Inf1 instances were launched at AWS re:Invent 2019. They are powered by AWS Inferentia, a custom chip built from the ground up by AWS to accelerate machine learning inference workloads.

Inf1 instances are available in multiple sizes, with 1, 4, or 16 AWS Inferentia chips, with up to 100 Gbps network bandwidth and up to 19 Gbps EBS bandwidth. An AWS Inferentia chip contains four NeuronCores. Each one implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, saving I/O time in the process. When several AWS Inferentia chips are available on an Inf1 instance, you can partition a model across them and store it entirely in cache memory. Alternatively, to serve multi-model predictions from a single Inf1 instance, you can partition the NeuronCores of an AWS Inferentia chip across several models.

Compiling Models for EC2 Inf1 Instances
To run machine learning models on Inf1 instances, you need to compile them to a hardware-optimized representation using the AWS Neuron SDK. All tools are readily available on the AWS Deep Learning AMI, and you can also install them on your own instances. You’ll find instructions in the Deep Learning AMI documentation, as well as tutorials for TensorFlow, PyTorch, and Apache MXNet in the AWS Neuron SDK repository.

In the demo below, I will show you how to deploy a Neuron-optimized model on an ECS cluster of Inf1 instances, and how to serve predictions with TensorFlow Serving. The model in question is BERT, a state-of-the-art model for natural language processing tasks. This is a huge model with hundreds of millions of parameters, making it a great candidate for hardware acceleration.

Creating an Amazon ECS Cluster
Creating a cluster is the simplest thing: all it takes is a call to the CreateCluster API.

$ aws ecs create-cluster --cluster-name ecs-inf1-demo

Immediately, I see the new cluster in the console.

New cluster

Several prerequisites are required before we can add instances to this cluster:

  • An AWS Identity and Access Management (IAM) role for ECS instances: if you don’t have one already, you can find instructions in the documentation. Here, my role is named ecsInstanceRole.
  • An Amazon Machine Image (AMI) containing the ECS agent and supporting Inf1 instances. You could build your own, or use the ECS-optimized AMI for Inferentia. In the us-east-1 region, its id is ami-04450f16e0cd20356.
  • A Security Group, opening network ports for TensorFlow Serving (8500 for gRPC, 8501 for HTTP). The identifier for mine is sg-0994f5c7ebbb48270.
  • If you’d like to have ssh access, your Security Group should also open port 22, and you should pass the name of an SSH key pair. Mine is called admin.

We also need to create a small user data file in order to let instances join our cluster. This is achieved by storing the name of the cluster in an environment variable, itself written to the configuration file of the ECS agent.

#!/bin/bash
echo ECS_CLUSTER=ecs-inf1-demo >> /etc/ecs/ecs.config

We’re all set. Let’s add a couple of Inf1 instances with the RunInstances API. To minimize cost, we’ll request Spot Instances.

$ aws ec2 run-instances \
--image-id ami-04450f16e0cd20356 \
--count 2 \
--instance-type inf1.xlarge \
--instance-market-options '{"MarketType":"spot"}' \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ecs-inf1-demo}]' \
--key-name admin \
--security-group-ids sg-0994f5c7ebbb48270 \
--iam-instance-profile Name=ecsInstanceRole \
--user-data file://user-data.txt

Both instances appear right away in the EC2 console.

Inf1 instances

A couple of minutes later, they’re ready to run tasks on the cluster.

Inf1 instances
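
You can also confirm registration from the command line instead of the console, for example:

$ aws ecs list-container-instances --cluster ecs-inf1-demo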

Our infrastructure is ready. Now, let’s build a container storing our BERT model.

Building a Container for Inf1 Instances
The Dockerfile is pretty straightforward:

  • Starting from an Amazon Linux 2 image, we open ports 8500 and 8501 for TensorFlow Serving.
  • Then, we add the Neuron SDK repository to the list of repositories, and we install a version of TensorFlow Serving that supports AWS Inferentia.
  • Finally, we copy our BERT model inside the container, and we load it at startup.

Here is the complete file.

FROM amazonlinux:2
EXPOSE 8500 8501
RUN echo $'[neuron] \n\
name=Neuron YUM Repository \n\
baseurl=https://yum.repos.neuron.amazonaws.com \n\
enabled=1' > /etc/yum.repos.d/neuron.repo
RUN rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
RUN yum install -y tensorflow-model-server-neuron
COPY bert /bert
CMD ["/bin/sh", "-c", "/usr/local/bin/tensorflow_model_server_neuron --port=8500 --rest_api_port=8501 --model_name=bert --model_base_path=/bert/"]

Then, I build and push the container to a repository hosted in Amazon Elastic Container Registry. Business as usual.

$ docker build -t neuron-tensorflow-inference .

$ aws ecr create-repository --repository-name ecs-inf1-demo

$ aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

$ docker tag neuron-tensorflow-inference 123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs-inf1-demo:latest

$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs-inf1-demo:latest

Now, we need to create a task definition in order to run this container on our cluster.

Creating a Task Definition for Inf1 Instances
If you don’t have one already, you should first create an execution role, i.e. a role allowing the ECS agent to perform API calls on your behalf. You can find more information in the documentation. Mine is called ecsTaskExecutionRole.

The full task definition is visible below. As you can see, it holds two containers:

  • The BERT container that I built,
  • A sidecar container called neuron-rtd, that allows the BERT container to access NeuronCores present on the Inf1 instance. The AWS_NEURON_VISIBLE_DEVICES environment variable lets you control which ones may be used by the container. You could use it to pin a container on one or several specific NeuronCores.
{
  "family": "ecs-neuron",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "entryPoint": [
        "sh",
        "-c"
      ],
      "portMappings": [
        {
          "hostPort": 8500,
          "protocol": "tcp",
          "containerPort": 8500
        },
        {
          "hostPort": 8501,
          "protocol": "tcp",
          "containerPort": 8501
        },
        {
          "hostPort": 0,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "command": [
        "tensorflow_model_server_neuron --port=8500 --rest_api_port=8501 --model_name=bert --model_base_path=/bert"
      ],
      "cpu": 0,
      "environment": [
        {
          "name": "NEURON_RTD_ADDRESS",
          "value": "unix:/sock/neuron-rtd.sock"
        }
      ],
      "mountPoints": [
        {
          "containerPath": "/sock",
          "sourceVolume": "sock"
        }
      ],
      "memoryReservation": 1000,
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ecs-inf1-demo:latest",
      "essential": true,
      "name": "bert"
    },
    {
      "entryPoint": [
        "sh",
        "-c"
      ],
      "portMappings": [],
      "command": [
        "neuron-rtd -g unix:/sock/neuron-rtd.sock"
      ],
      "cpu": 0,
      "environment": [
        {
          "name": "AWS_NEURON_VISIBLE_DEVICES",
          "value": "ALL"
        }
      ],
      "mountPoints": [
        {
          "containerPath": "/sock",
          "sourceVolume": "sock"
        }
      ],
      "memoryReservation": 1000,
      "image": "790709498068.dkr.ecr.us-east-1.amazonaws.com/neuron-rtd:latest",
      "essential": true,
      "linuxParameters": { "capabilities": { "add": ["SYS_ADMIN", "IPC_LOCK"] } },
      "name": "neuron-rtd"
    }
  ],
  "volumes": [
    {
      "name": "sock",
      "host": {
        "sourcePath": "/tmp/sock"
      }
    }
  ]
}

Finally, I call the RegisterTaskDefinition API to let the ECS backend know about it.

$ aws ecs register-task-definition --cli-input-json file://inf1-task-definition.json

We’re now ready to run our container, and predict with it.

Running a Container on Inf1 Instances
As this is a prediction service, I want to make sure that it’s always available on the cluster. Instead of simply running a task, I create an ECS Service that will make sure the required number of container copies is running, relaunching them should any failure happen.

$ aws ecs create-service --cluster ecs-inf1-demo \
--service-name bert-inf1 \
--task-definition ecs-neuron:1 \
--desired-count 1

A minute later, I see that both task containers are running on the cluster.

Running containers
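
The same check can be done from the CLI; for example, this returns the desired and running task counts for the service:

$ aws ecs describe-services --cluster ecs-inf1-demo --services bert-inf1 \
  --query "services[0].[desiredCount,runningCount]"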

Predicting with BERT on ECS and Inf1
The inner workings of BERT are beyond the scope of this post. This particular model expects a sequence of 128 tokens, encoding the words of two sentences we’d like to compare for semantic equivalence.

Here, I’m only interested in measuring prediction latency, so dummy data is fine. I build 100 prediction requests storing a sequence of 128 zeros. Using the IP address of the BERT container, I send them to the TensorFlow Serving endpoint via gRPC, and I compute the average prediction time.

Here is the full code.

import numpy as np
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
import time

if __name__ == '__main__':
    channel = grpc.insecure_channel('18.234.61.31:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'bert'
    i = np.zeros([1, 128], dtype=np.int32)
    request.inputs['input_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['input_mask'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))
    request.inputs['segment_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(i, shape=i.shape))

    latencies = []
    for i in range(100):
        start = time.time()
        result = stub.Predict(request)
        latencies.append(time.time() - start)
        print("Inference successful: {}".format(i))
    print ("Ran {} inferences successfully. Latency average = {}".format(len(latencies), np.average(latencies)))

For convenience, I’m running this code on an EC2 instance based on the Deep Learning AMI. It comes pre-installed with a Conda environment for TensorFlow and TensorFlow Serving, saving me from installing any dependencies.

$ source activate tensorflow_p36
$ python predict.py

On average, prediction took 56.5ms. As far as BERT goes, this is pretty good!

Ran 100 inferences successfully. Latency average = 0.05647835493087769

Getting Started
You can deploy Amazon Elastic Compute Cloud (EC2) Inf1 instances on Amazon ECS today in the US East (N. Virginia) and US West (Oregon) Regions. As Inf1 deployment progresses, you’ll be able to use them with Amazon ECS in more Regions.

Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for Amazon ECS, or on the container roadmap on GitHub.

– Julien

Introducing the CDK construct library for the serverless LAMP stack

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-cdk-construct-library-for-the-serverless-lamp-stack/

In this post, you learn how the new CDK construct library for the serverless LAMP stack is helping developers build serverless PHP applications.

The AWS Cloud Development Kit (AWS CDK) is an open source software development framework for defining cloud application resources in code. It allows developers to define their infrastructure in familiar programming languages such as TypeScript, Python, C# or Java. Developers benefit from the features those languages provide such as Interfaces, Generics, Inheritance, and Method Access Modifiers. The AWS Construct Library provides a broad set of modules that expose APIs for defining AWS resources in CDK applications.

The “Serverless LAMP stack” blog series provides best practices, code examples and deep dives into many serverless concepts and demonstrates how these are applied to PHP applications. It also highlights valuable contributions from the community to help spark inspiration for PHP developers.

Each component of this serverless LAMP stack is explained in detail in the blog post series:

The CDK construct library for the serverless LAMP stack is an abstraction created by AWS Developer Advocate, Pahud Hsieh. It offers a single high-level component for defining all resources that make up the serverless LAMP stack.

CDK construct for Serverless LAMP stack

CDK construct for Serverless LAMP stack

  1. Amazon API Gateway HTTP API.
  2. AWS Lambda with Bref-FPM runtime.
  3. Amazon Aurora for MySQL database cluster with Amazon RDS Proxy enabled.

Why build PHP applications with AWS CDK constructs?

Building complex web applications from scratch is a time-consuming process. PHP frameworks such as Laravel and Symfony provide a structured and standardized way to build web applications. Using templates and generic components helps reduce overall development effort. Using a serverless approach helps to address some of the traditional LAMP stack challenges of scalability and infrastructure management. Defining these resources with the AWS CDK construct library allows developers to apply the same framework principles to infrastructure as code.

The AWS CDK enables fast and easy onboarding for new developers. In addition to improved readability through reduced codebase size, PHP developers can use their existing skills and tools to build cloud infrastructure. Familiar concepts such as objects, loops, and conditions help to reduce cognitive overhead. Defining the LAMP stack infrastructure for your PHP application within the same codebase reduces context switching and streamlines the provisioning process. Connect CDK constructs to deploy a serverless LAMP infrastructure quickly with minimal code.

“Code is a liability and with the AWS CDK you are applying the serverless-first mindset to infra code by allowing others to create abstractions they maintain so you don’t need to. I always love deleting code,”

says Matt Coulter, creator of CDK Patterns, an open source resource for CDK-based architecture patterns.

Building a serverless Laravel application with the ServerlessLaravel construct

The cdk-serverless-lamp construct library is built with aws/jsii and published as npm and Python modules. The stack is deployed in either TypeScript or Python and includes the ServerlessLaravel construct. This makes it easier for PHP developers to deploy a serverless Laravel application.

First, follow the “Working with the AWS CDK in TypeScript” steps to prepare the AWS CDK environment for TypeScript.

Deploy the serverless LAMP stack with the following steps:

  1. Confirm the CDK CLI installation:
    $ cdk --version
  2. Create and enter a new project directory:
    $ mkdir serverless-lamp && cd serverless-lamp
  3. Create directories for AWS CDK and Laravel project:
    $ mkdir cdk codebase
  4. Create the new Laravel project with Docker:
    $ docker run --rm -ti \
    --volume $PWD:/app \
    composer create-project --prefer-dist laravel/laravel ./codebase

The cdk-serverless-lamp construct library uses the bref-FPM custom runtime to run PHP code in a Lambda function. The bref runtime performs similar functionality to Apache or NGINX by forwarding HTTP requests through the FastCGI protocol. This process is explained in detail in “The Serverless LAMP stack part 3: Replacing the web server”. In addition to this, a bref package named laravel-bridge automatically configures Laravel to work on Lambda. This saves the developer from having to manually implement some of the configurations detailed in “The serverless LAMP stack part 4: Building a serverless Laravel application”.

  1. Install bref/bref and bref/laravel-bridge packages in the vendor directories:
    $ cd codebase
    $ docker run --rm -ti \
    --volume $PWD:/app \
    composer require bref/bref bref/laravel-bridge
  2. Initialize the AWS CDK project with TypeScript.
    $ cd ../cdk
    $ cdk init -l typescript
  3. Install the cdk-serverless-lamp npm module
    $ yarn add cdk-serverless-lamp

This creates the following directory structure:

.
├── cdk
└── codebase

The cdk directory contains the AWS CDK resource definitions. The codebase directory contains the Laravel project.

Building a Laravel Project with the AWS CDK

Replace the contents of ./lib/cdk-stack.ts with:

import * as cdk from '@aws-cdk/core';
import * as path from 'path';
import { ServerlessLaravel } from 'cdk-serverless-lamp';

export class CdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new ServerlessLaravel(this, 'ServerlessLaravel', {
      brefLayerVersion: 'arn:aws:lambda:us-east-1:209497400698:layer:php-74-fpm:12',
      laravelPath: path.join(__dirname, '../../codebase'),
    });
  }
}

The brefLayerVersion argument refers to the AWS Lambda layer version ARN of the Bref PHP runtime. Select the correct ARN and corresponding Region from the bref website. This example deploys the stack into the us-east-1 Region with the corresponding Lambda layer version ARN for the Region.

  1. Deploy the stack:
    cdk deploy

Once the deployment is complete, an Amazon API Gateway HTTP API endpoint is returned in the CDK output. This URL serves the Laravel application.

CDK construct output for Serverless LAMP stack

The application is running PHP on Lambda using bref’s FPM custom runtime. This entire stack is deployed by a single instantiation of the ServerlessLaravel construct class with required properties.
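
Because the construct library is published as both npm and Python modules, a roughly equivalent stack can also be written in Python. The following is only a hedged sketch: the module name (cdk_serverless_lamp) and the snake_case property names are assumed from jsii naming conventions, so check them against the package documentation before using.

import os
from aws_cdk import core
from cdk_serverless_lamp import ServerlessLaravel  # module and class names assumed

class CdkStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Same properties as the TypeScript example, translated to snake_case (assumed)
        ServerlessLaravel(
            self, 'ServerlessLaravel',
            bref_layer_version='arn:aws:lambda:us-east-1:209497400698:layer:php-74-fpm:12',
            laravel_path=os.path.join(os.path.dirname(__file__), '..', '..', 'codebase'),
        )

app = core.App()
CdkStack(app, 'CdkStack')
app.synth()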

Adding an Amazon Aurora database

The ServerlessLaravel stack is extended with the DatabaseCluster construct class to provision an Amazon Aurora database. Pass an Amazon RDS Proxy instance for this cluster to the ServerlessLaravel construct:

  1. Edit ./lib/cdk-stack.ts:
    import * as cdk from '@aws-cdk/core';
    import { InstanceType, Vpc } from '@aws-cdk/aws-ec2';
    import * as path from 'path';
    import { ServerlessLaravel, DatabaseCluster } from 'cdk-serverless-lamp';
    
    export class CdkStack extends cdk.Stack {
      constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
        super(scope, id, props);
    
        const vpc = new Vpc(this, 'Vpc', { maxAzs: 3, natGateways: 1 });
        // the DatabaseCluster sharing the same vpc with the ServerlessLaravel
        const db = new DatabaseCluster(this, 'DatabaseCluster', {
          vpc,
          instanceType: new InstanceType('t3.small'),
          rdsProxy: true,
        });
        // the ServerlessLaravel
        new ServerlessLaravel(this, 'ServerlessLaravel', {
          brefLayerVersion: 'arn:aws:lambda:us-east-1:209497400698:layer:php-74-fpm:12',
          laravelPath: path.join(__dirname, '../../codebase'),
          vpc,
          databaseConfig: { writerEndpoint: db.rdsProxy!.endpoint },
        });
      }
    }
  2. Run cdk diff to check the differences:
    $ cdk diff

The output shows that a shared VPC is created for the ServerlessLaravel stack and the DatabaseCluster stack. An Amazon Aurora DB cluster with a single DB instance and a default secret from AWS Secrets Manager is also created. The cdk-serverless-lamp construct library configures Amazon RDS Proxy automatically with the required AWS IAM policies and connection rules.

  3. Deploy the stack:
    $ cdk deploy

The ServerlessLaravel stack is running with DatabaseCluster in a single VPC. A single Lambda function is automatically configured with the RDS Proxy DB_WRITER and DB_READER stored as Lambda environment variables.

Database authentication

The Lambda function authenticates to RDS Proxy with the execution IAM role. RDS Proxy authenticates to the Aurora DB cluster using the credentials stored in the AWS Secrets Manager. This is a more secure alternative to embedding database credentials in the application code base. Read “Introducing the serverless LAMP stack – part 2 relational databases” for more information on connecting to an Aurora DB cluster with Lambda using RDS Proxy.
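
The deployed Laravel application gets its connection details from the stack, but as an illustration of IAM-based authentication with RDS Proxy, the following Python sketch shows how a function could use its execution role to obtain an authentication token instead of a static password. The DB_WRITER environment variable name comes from the stack described above; the user name, database name, and CA bundle path are placeholders.

import os
import boto3
import pymysql

def connect_via_proxy():
    proxy_endpoint = os.environ['DB_WRITER']        # injected by the stack
    db_user = os.environ.get('DB_USER', 'admin')    # placeholder user name
    # Generate a short-lived token signed with the Lambda execution role credentials
    token = boto3.client('rds').generate_db_auth_token(
        DBHostname=proxy_endpoint,
        Port=3306,
        DBUsername=db_user,
    )
    # The token is used in place of a password; TLS is required for IAM authentication
    return pymysql.connect(
        host=proxy_endpoint,
        user=db_user,
        password=token,
        database='laravel',                             # placeholder database name
        ssl={'ca': '/opt/rds-combined-ca-bundle.pem'},  # placeholder CA bundle path
    )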

Clean up

To remove the stack, run:
$ cdk destroy

The video below demonstrates a deployment with the CDK construct for the serverless LAMP stack.

Conclusion

This post introduces the new CDK construct library for the serverless LAMP stack. It explains how to use it to deploy a serverless Laravel application. Combining this with other CDK constructs such as DatabaseCluster gives PHP developers the building blocks to create scalable, repeatable patterns at speed with minimal coding.

With the CDK construct library for the serverless LAMP stack, PHP development teams can focus on shipping code without changing the way they build.

Start building serverless applications with PHP.

Building a Pulse Oximetry tracker using AWS Amplify and AWS serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-pulse-oximetry-tracker-using-aws-amplify-and-aws-serverless/

This guide demonstrates an example solution for collecting, tracking, and sharing pulse oximetry data for multiple users. It’s built using AWS serverless technologies, enabling reliable scalability and security. The frontend application is written in VueJS and uses the Amplify Framework. It takes oxygen saturation measurements either as manual input or from a BerryMed pulse oximeter connected to the browser using Web Bluetooth.

The serverless backend that handles user data and shared access management is deployed using the AWS Serverless Application Model (AWS SAM). The backend application consists of an Amazon API Gateway REST API, which invokes AWS Lambda functions. The code is written in Python to handle the business logic of interacting with an Amazon DynamoDB database. Authentication is managed by Amazon Cognito.

A screenshot of the frontend application running in a desktop browser.

A screenshot of the frontend application running in a desktop browser.

Prerequisites

You need the following to complete the project:

Deploy the application

A high-level diagram of the full oxygen monitor application.

A high-level diagram of the full oxygen monitor application.

The solution consists of two parts, the frontend application and the serverless backend. The Amplify CLI deploys all the Amazon Cognito authentication and hosting resources for the frontend. The backend requires the Amazon Cognito user pool identifier to configure an authorizer on the API. This enables an authorization workflow, as shown in the following image.

A diagram showing how an Amazon Cognito authorization workflow works

A diagram showing how an Amazon Cognito authorization workflow works

First, configure the frontend. Complete the following steps using a terminal running on a computer or by using the AWS Cloud9 IDE. If using AWS Cloud9, create an instance using the default options.

From the terminal:

  1. Install the Amplify CLI by running this command.
    npm install -g @aws-amplify/cli
  2. Configure the Amplify CLI using this command. Follow the guided process to completion.
    amplify configure
  3. Clone the project from GitHub.
    git clone https://github.com/aws-samples/aws-serverless-oxygen-monitor-web-bluetooth.git
  4. Navigate to the amplify-frontend directory and initialize the project using the Amplify CLI command. Follow the guided process to completion.
    cd aws-serverless-oxygen-monitor-web-bluetooth/amplify-frontend
    
    amplify init
  5. Deploy all the frontend resources to the AWS Cloud using the Amplify CLI command.
    amplify push
  6. After the resources have finished deploying, make note of the aws_user_pools_id property in the src/aws-exports.js file. This is required when deploying the serverless backend.

Next, deploy the serverless backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the oxygen-monitor-backend application in the AWS Serverless Application Repository.
  2. In Application settings, name the application and provide the aws_user_pools_id from the frontend application for the UserPoolID parameter.
  3. Choose Deploy.
  4. Once complete, copy the API endpoint so that it can be configured on the frontend application in the next step.

Configure and run the frontend application

  1. Create a file, amplify-frontend/src/api-config.js, in the frontend application with the following content. Include the API endpoint from the previous step.
    const apiConfig = {
      "endpoint": "<API ENDPOINT>"
    };
    
    export default apiConfig;
  2. In a terminal, navigate to the root directory of the frontend application and run it locally for testing.
    cd aws-serverless-oxygen-monitor-web-bluetooth/amplify-frontend
    
    npm install
    
    npm run serve

    You should see an output like this:

  3. To publish the frontend application to cloud hosting, run the following command.
    amplify publish

    Once complete, a URL to the hosted application is provided.

Using the frontend application

Once the application is running locally or hosted in the cloud, navigating to it presents a user login interface with an option to register.

The registration flow requires a code sent to the provided email for verification. Once verified you’re presented with the main application interface. A sample value is displayed when the account has no oxygen saturation or pulse rate history.

To connect a BerryMed pulse oximeter to begin reading measurements, turn on the device. Choose the Connect Pulse Oximeter button and then select it from the list. A Chrome browser on a desktop or Android mobile device is required to use the Web Bluetooth feature.

If you do not have a compatible Bluetooth pulse oximeter or access to Web Bluetooth, checking the Enter Manually check box presents direct input boxes.

Once oxygen saturation and pulse rate values are available, choose the cloud upload icon. This publishes the values to the serverless backend, where they are stored in a DynamoDB table. The trend chart then updates to reflect the new data.

Access to your historical data can be shared to another user, for example a healthcare professional. Choose the share icon on the right to open sharing options. From here, you can add or remove access to others by user name.

To view data shared with you, select the user name from the drop-down and choose the refresh icon.

Understanding the serverless backend

In the GitHub project, the folder serverless-backend/ contains the AWS SAM template file and the Lambda functions. It creates an API Gateway endpoint, six Lambda functions, and two DynamoDB tables. The template also defines an Amazon Cognito authorizer for the API, using the UserPoolID passed in as a parameter.

This only allows authenticated users of the frontend application to make requests with a JWT token containing their user name and email. The backend uses that information to fetch and store data in DynamoDB that corresponds to the user making the request.

The first three endpoints handle updating and retrieving oxygen and pulse rate levels. When a user publishes a new measurement, the AddLevels function is invoked, which creates a new item in the DynamoDB “Levels” table.
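
The project’s actual handlers live in the GitHub repository; as an illustration only, an AddLevels-style handler might look roughly like the sketch below. The table name and item attribute names are assumptions; the claims path is how API Gateway exposes a validated Cognito user to a Lambda proxy integration.

import json
import time
import boto3

dynamodb = boto3.resource('dynamodb')
levels_table = dynamodb.Table('Levels')  # table name assumed for illustration

def lambda_handler(event, context):
    # API Gateway passes the validated Cognito claims in the request context
    username = event['requestContext']['authorizer']['claims']['cognito:username']
    body = json.loads(event['body'])

    levels_table.put_item(Item={
        'username': username,           # partition key
        'timestamp': int(time.time()),  # sort key
        'oxygen': body['oxygen'],       # attribute names assumed
        'pulse': body['pulse'],
    })
    return {'statusCode': 200, 'body': json.dumps({'status': 'ok'})}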

The FetchLevels function retrieves the user’s personal history. The FetchSharedUserLevels function checks the Access Table to see if the requesting user has shared access rights.

The remaining endpoints handle access management. When you add a shared user, this invokes the ManageAccess function with a user name and an action, such as share or revoke. If sharing, the item is added to the Access Table that enables the relationship. If revoking, the item is removed from the table.

The GetSharedUsers function fetches the list of users who have shared their data with the user making the request. This populates the drop-down of accessible users. FetchUsersWithAccess fetches all users that have access to the data of the user making the request; this populates the list of users in the sharing options.

The DynamoDB tables are created by the AWS SAM template with the partition key and range key defined for each table. These are used by the Lambda functions to query and sort items. See the documentation to learn more about DynamoDB table key schema.

  LevelsTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "timestamp"
          AttributeType: "N"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: timestamp
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"

  SharedAccessTable:
    Type: AWS::DynamoDB::Table
    Properties: 
      AttributeDefinitions: 
        - 
          AttributeName: "username"
          AttributeType: "S"
        - 
          AttributeName: "shared_user"
          AttributeType: "S"
      KeySchema: 
        - AttributeName: username
          KeyType: HASH
        - AttributeName: shared_user
          KeyType: RANGE
      ProvisionedThroughput: 
        ReadCapacityUnits: "5"
        WriteCapacityUnits: "5"
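
Given this key schema, a FetchLevels-style read is a straightforward DynamoDB query on the partition key, sorted by the timestamp range key. The sketch below is illustrative only; the table name and page size are assumptions.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('Levels')  # table name assumed

def fetch_levels(username, limit=50):
    # Query by partition key; ScanIndexForward=False uses the timestamp
    # range key to return the most recent measurements first
    response = table.query(
        KeyConditionExpression=Key('username').eq(username),
        ScanIndexForward=False,
        Limit=limit,
    )
    return response['Items']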

 

Understanding the frontend

In the GitHub project, the folder amplify-frontend/src/ contains all the code for the frontend application. In main.js, the Amplify VueJS modules are configured to use the resources defined in aws-exports.js. It also configures the endpoint of the serverless backend, defined in api-config.js.

In the file, components/OxygenMonitor.vue, the API module is imported and the desired API is defined.

API calls are defined as Vue methods that can be called by various other components and elements of the application.

In components/ConnectDevice.vue, the connect method initializes a Web Bluetooth connection to the pulse oximeter. It searches for a Bluetooth service UUID and device name specific to BerryMed pulse oximeters. On a successful connection it creates an event listener on the Bluetooth characteristic that notifies changes on measurements.

The handleData method parses notification events. It emits on any changes to oxygen saturation or pulse rate.

The OxygenMonitor component defines the ConnectDevice component in its template. It binds handlers on emitted events.

The handlers assign the values to the Vue data object for use throughout the application.

Further explore the project code to see how the Amplify Framework and the serverless backend are used to make a practical application.

Conclusion

Tracking patient vitals remotely has become more relevant than ever. This guide demonstrates a solution for a personal health and telemedicine application. The full solution includes multiuser functionality and a secure and scalable serverless backend. The application uses a browser to interact with a physical device to measure oxygen saturation and pulse rate. It publishes measurements to a database using a serverless API. The historical data can be displayed as a trend chart and can also be shared with other users.

Once more familiarized with the sample project you may want to begin developing an application with your team. The Amplify Framework has support for team environments, allowing all your developers to work together seamlessly.

To learn more about AWS serverless and keep up to date on the latest features, subscribe to the YouTube channel.

Deploying your first 5G enabled application with AWS Wavelength

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/deploying-your-first-5g-enabled-application-with-aws-wavelength/

This post was written by Mike Coleman, Senior Developer Advocate, Twitter handle: @mikegcoleman

Today, AWS released AWS Wavelength. Wavelength allows you to deploy applications and services at the edge of a mobile carrier’s 5G network. By combining the benefits of 5G, such as high bandwidth and low latency, with the ability to use AWS tools and services you’re already familiar with, you’re able to build next generation edge applications quickly and easily.

Rather than go into more depth about Wavelength in this blog, I’d recommend reading Jeff Barr’s blog post. His post goes into detail about why we built Wavelength, and how you can get started with deploying AWS resources in a Wavelength Zone.

In this blog, I walk you through deploying one of the most common Wavelength use cases: machine learning inference.

Why inference at the edge?

One of the tradeoffs with machine learning applications is system responsiveness. If your application must be highly responsive, you may need to deploy your inference processing application close to the end user. In the case of mobile devices, this could mean that the inference processing takes place on the device itself. This type of additional processing demand on the device often results in reduced device battery life among other tradeoffs. Additionally, if you need to update your machine learning model, you must push out an update to all the devices running your application.

As I mentioned earlier, one of the key benefits of 5G and Wavelength is significantly lower latencies compared to previous generation mobile networks. For edge applications, this implies you can actually perform inference processing in a Wavelength zone with near real-time responsiveness to the mobile device. By moving the inference processing to the Wavelength zone, you reduce power consumption and battery drain on the mobile device. Additionally, you can simplify application updates.  If you need to make a change to your training model, you simply update your servers in the Wavelength Zone instead of having to ship a new version to all the devices running your code.

Solution Overview

architecture of the wavelength zone

The following tutorial guides you through deploying an object detection application that is made up of four components:

  • A Wavelength-hosted API endpoint (using Flask)
  • A Wavelength-hosted inference server (running Torchserve)
  • A React web app, accessed through the browser of a mobile device running on the carrier’s 5G network
  • A server that acts as a bastion host, allowing you to SSH into your other instances, and as a web server for the React web application

The API server is built using Python and Flask, and runs on a t3.medium instance based upon a standard Ubuntu 18.04 image. It accepts an image from the client application running on a device connected to the carrier’s 5G mobile network, which it then forwards to the inference server. The inference server returns the detected object along with coordinates for that object (or an error if it can’t detect any objects). The API server adds a text label and bounding boxes to the image and returns it to the mobile client.
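
The actual API server lives in the GitHub repository linked later in this walkthrough; the following is only a rough sketch of the forwarding pattern described above. The route name, upload field name, and inference URL are placeholders, and unlike the real server this sketch relays the raw prediction instead of drawing the bounding boxes itself.

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder: in the real project this URL is read from a configuration file
INFERENCE_URL = 'http://<inference-server-private-ip>:8080/predictions/fasterrcnn'

@app.route('/detect', methods=['POST'])  # route name assumed
def detect():
    # Read the uploaded image and forward it, unchanged, to the Torchserve endpoint
    image_bytes = request.files['image'].read()
    result = requests.post(INFERENCE_URL, data=image_bytes)
    if result.status_code != 200:
        return jsonify({'error': 'prediction failed'}), 502
    return jsonify(result.json())

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)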

The inference server runs Torchserve, an open source project that provides a flexible and easy way to serve up PyTorch models. Object detection is done using a Faster R-CNN model, which is deployed on a g4dn.2xlarge instance running the AWS Deep Learning Amazon Machine Image (AMI).

You use the web browser on your mobile device to access the web server, which hosts the client application written in React.

Wavelength is designed to provide access to services and applications that require low latency. It’s important to note that you don’t need to deploy your entire application in a Wavelength Zone. You only need to deploy parts of your application that benefit from being deployed in the Wavelength Zone – such as application components requiring low latency.

In the case of the demo application, the API and inference servers are located in the Wavelength Zone because one of the design goals of the application is low-latency processing of the inference requests.

On the other hand, because the web server is only serving a small single page React web app, it does not have the same latency requirements as the inference processing. For that reason, it’s hosted in the Region instead of the Wavelength Zone.

Prerequisites

To complete the walkthrough below, you need:

  • To be familiar working from the command line, including editing text files.
  • The AWS CLI installed on your local machine. Ensure it’s the latest version so it supports Wavelength.
  • An administrative account with sufficient permissions to create VPC resources (instances, subnets, etc).
  • In order to access resources in a Wavelength Zone, you need a mobile device on a carrier’s 5G mobile network in a city that has access to the Zone. The following tutorial is written to be deployed in the Boston Wavelength Zone, but you can adjust the environment variables for the Zone and Region to deploy it in other areas.
  • An SSH key pair in the us-east-1 Region.
  • The commands below work on Mac and Linux machines. If you are on a Windows machine the easiest way to run through the tutorial is to spin up a Linux-based EC2 instance, install and configure the AWS CLI, and run the commands from the EC2 instance’s command line.

Create the VPC and associated resources

The first step in this tutorial is deploying to the VPC, internet gateway, and carrier gateway.

Start by configuring some environment variables, and then deploying the resources.

  1. In order to get started, you need to first set some environment variables.
    Note: replace the value for KEY_NAME with the name of the key pair you wish to use.
    Note: these values are specific to the us-east-1 Region. If you wish to deploy into another region, you’ll need to modify them as appropriate. Check the documentation for more info.

    export REGION="us-east-1"
    export WL_ZONE="us-east-1-wl1-bos-wlz-1"
    export NBG="us-east-1-wl1-bos-wlz-1"
    export INFERENCE_IMAGE_ID="ami-029510cec6d69f121"
    export API_IMAGE_ID="ami-0ac80df6eff0e70b5"
    export BASTION_IMAGE_ID="ami-027b7646dafdbe9fa"
    export KEY_NAME=<your key name>
  2. Use the AWS CLI to create the VPC.
    export VPC_ID=$(aws ec2 --region $REGION \
    --output text \
    create-vpc \
    --cidr-block 10.0.0.0/16 \
    --query 'Vpc.VpcId') \
    && echo '\nVPC_ID='$VPC_ID
  3. Create an internet gateway and attach it to the VPC.
    export IGW_ID=$(aws ec2 --region $REGION \
    --output text \
    create-internet-gateway \
    --query 'InternetGateway.InternetGatewayId') \
    && echo '\nIGW_ID='$IGW_ID
    aws ec2 --region $REGION \
    attach-internet-gateway \
    --vpc-id $VPC_ID \
    --internet-gateway-id $IGW_ID
  4. Add the carrier gateway.
    export CAGW_ID=$(aws ec2 --region $REGION \
    --output text \
    create-carrier-gateway \
    --vpc-id $VPC_ID \
    --query 'CarrierGateway.CarrierGatewayId') \
    && echo '\nCAGW_ID='$CAGW_ID

Deploy the security groups

In this section, you add three security groups:

  • Bastion SG allows SSH traffic from your local machine to the bastion host as well as HTTP web traffic from the Internet
  • API SG allows SSH traffic from the Bastion SG and opens up port 5000 to accept incoming API requests
  • Inference SG allows SSH traffic from the Bastion host and communications on port 8080 and 8081 (the ports used by the inference server) from the API SG.

 

  1. Create the bastion security group and add the ingress SSH rule. Note: SSH access is only allowed from your current IP address. You can adjust this if needed by changing the --cidr parameter in the second command.
    export BASTION_SG_ID=$(aws ec2 --region $REGION \
    --output text \
    create-security-group \
    --group-name bastion-sg \
    --description "Security group for bastion host" \
    --vpc-id $VPC_ID \
    --query 'GroupId') \
    && echo '\nBASTION_SG_ID='$BASTION_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $BASTION_SG_ID \
    --protocol tcp \
    --port 22 \
    --cidr $(curl https://checkip.amazonaws.com)/32
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $BASTION_SG_ID \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
    
  2. Create the API security group along with two ingress rules: one for SSH from the bastion security group and one opening up the port the API server communicates on (5000).
    export API_SG_ID=$(aws ec2 --region $REGION \
    --output text \
    create-security-group \
    --group-name api-sg \
    --description "Security group for API host" \
    --vpc-id $VPC_ID \
    --query 'GroupId') \
    && echo '\nAPI_SG_ID='$API_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $API_SG_ID \
    --protocol tcp \
    --port 22 \
    --source-group $BASTION_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $API_SG_ID \
    --protocol tcp \
    --port 5000 \
    --cidr 0.0.0.0/0
  3. Create the security group for the inference server along with three ingress rules: one for SSH from the bastion security group, and opening the ports the inference server communicates on (8080 and 8081) to the API security group.
    export INFERENCE_SG_ID=$(aws ec2 --region $REGION \
    --output text \
    create-security-group \
    --group-name inference-sg \
    --description "Security group for inference host" \
    --vpc-id $VPC_ID \
    --query 'GroupId') \
    && echo '\nINFERENCE_SG_ID='$INFERENCE_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $INFERENCE_SG_ID \
    --protocol tcp \
    --port 22 \
    --source-group $BASTION_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $INFERENCE_SG_ID \
    --protocol tcp \
    --port 8080 \
    --source-group $API_SG_ID
    
    aws ec2 --region $REGION \
    authorize-security-group-ingress \
    --group-id $INFERENCE_SG_ID \
    --protocol tcp \
    --port 8081 \
    --source-group $API_SG_ID
    

Add the subnets and routing tables

In the following steps you’ll create two subnets along with their associated routing tables and routes.

  1. Create the subnet for the Wavelength Zone
    export WL_SUBNET_ID=$(aws ec2 --region $REGION \
    --output text \
    create-subnet \
    --cidr-block 10.0.0.0/24 \
    --availability-zone $WL_ZONE \
    --vpc-id $VPC_ID \
    --query 'Subnet.SubnetId') \
    && echo '\nWL_SUBNET_ID='$WL_SUBNET_ID
    
  2. Create the route table for the Wavelength subnet
    export WL_RT_ID=$(aws ec2 --region $REGION \
    --output text \
    create-route-table \
    --vpc-id $VPC_ID \
    --query 'RouteTable.RouteTableId') \
    && echo '\nWL_RT_ID='$WL_RT_ID
    
  3. Associate the route table with the Wavelength subnet and add a route sending traffic to the carrier gateway, which in turn routes traffic to the carrier mobile network.
    aws ec2 --region $REGION \
    associate-route-table \
    --route-table-id $WL_RT_ID \
    --subnet-id $WL_SUBNET_ID
    
    aws ec2 --region $REGION create-route \
    --route-table-id $WL_RT_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --carrier-gateway-id $CAGW_ID
    

Next, repeat the same process to create the subnet and routing for the bastion subnet.

  1. Create the bastion subnet
    BASTION_SUBNET_ID=$(aws ec2 --region $REGION \
    --output text \
    create-subnet \
    --cidr-block 10.0.1.0/24 \
    --vpc-id $VPC_ID \
    --query 'Subnet.SubnetId') \
    && echo '\nBASTION_SUBNET_ID='$BASTION_SUBNET_ID
    
  2. Deploy the bastion subnet route table and a route to direct traffic to the internet gateway
    export BASTION_RT_ID=$(aws ec2 --region $REGION \
    --output text \
    create-route-table \
    --vpc-id $VPC_ID \
    --query 'RouteTable.RouteTableId') \
    && echo '\nBASTION_RT_ID='$BASTION_RT_ID
    
    aws ec2 --region $REGION \
    create-route \
    --route-table-id $BASTION_RT_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id $IGW_ID
    
    aws ec2 --region $REGION \
    associate-route-table \
    --subnet-id $BASTION_SUBNET_ID \
    --route-table-id $BASTION_RT_ID
    
  3. Modify the bastion’s subnet to assign public IPs by default
    aws ec2 --region $REGION \
    modify-subnet-attribute \
    --subnet-id $BASTION_SUBNET_ID \
    --map-public-ip-on-launch

Create the Elastic IPs and networking interfaces

The final step before deploying the actual instances is to create two carrier IPs, IP addresses associated with the carrier network. These IP addresses will be assigned to two Elastic Network Interfaces (ENIs), and the ENIs will be assigned to the API and inference servers (the bastion host has its public IP assigned upon creation by the bastion subnet).

  1. Create two carrier IPs, one for the API server and one for the inference server
    export INFERENCE_CIP_ALLOC_ID=$(aws ec2 --region $REGION \
    --output text \
    allocate-address \
    --domain vpc \
    --network-border-group $NBG \
    --query 'AllocationId') \
    && echo '\nINFERENCE_CIP_ALLOC_ID='$INFERENCE_CIP_ALLOC_ID
    
    export API_CIP_ALLOC_ID=$(aws ec2 --region $REGION \
    --output text \
    allocate-address \
    --domain vpc \
    --network-border-group $NBG \
    --query 'AllocationId') \
    && echo '\nAPI_CIP_ALLOC_ID='$API_CIP_ALLOC_ID
    
  2. Create two elastic network interfaces (ENIs)
    export INFERENCE_ENI_ID=$(aws ec2 --region $REGION \
    --output text \
    create-network-interface \
    --subnet-id $WL_SUBNET_ID \
    --groups $INFERENCE_SG_ID \
    --query 'NetworkInterface.NetworkInterfaceId') \
    && echo '\nINFERENCE_ENI_ID='$INFERENCE_ENI_ID
    
    export API_ENI_ID=$(aws ec2 --region $REGION \
    --output text \
    create-network-interface \
    --subnet-id $WL_SUBNET_ID \
    --groups $API_SG_ID \
    --query 'NetworkInterface.NetworkInterfaceId') \
    && echo '\nAPI_ENI_ID='$API_ENI_ID
    
  3. Associate the carrier IPs with the ENIs
    aws ec2 --region $REGION associate-address \
    --allocation-id $INFERENCE_CIP_ALLOC_ID \
    --network-interface-id $INFERENCE_ENI_ID   
    
    aws ec2 --region $REGION associate-address \
    --allocation-id $API_CIP_ALLOC_ID \
    --network-interface-id $API_ENI_ID

Deploy the API and inference instances

With the VPC and underlying networking and security deployed, you can now move on to deploying your API and inference instances. The API server is a t3.medium instance based on a standard Ubuntu 18.04 AMI. The inference server is a g4dn.2xlarge running the AWS Deep Learning AMI. You install and configure the software components in subsequent steps.

 

  1. Deploy the API instance
    aws ec2 --region $REGION \
    run-instances \
    --instance-type t3.medium \
    --network-interface '[{"DeviceIndex":0,"NetworkInterfaceId":"'$API_ENI_ID'"}]' \
    --image-id $API_IMAGE_ID \
    --key-name $KEY_NAME
    
  2. Deploy the inference instance
    aws ec2 --region $REGION \
    run-instances \
    --instance-type g4dn.2xlarge \
    --network-interface '[{"DeviceIndex":0,"NetworkInterfaceId":"'$INFERENCE_ENI_ID'"}]' \
    --image-id $INFERENCE_IMAGE_ID \
    --key-name $KEY_NAME

Deploy the bastion / web server

You must deploy a bastion server in order to SSH into your application instances. Remember that the carrier gateway in a Wavelength Zone only allows ingress from the carrier’s 5G network. This means that in order to SSH into the API and inference servers you need to first SSH into the bastion host, and then from there SSH into your Wavelength instances.

You are also going to install the client front end application onto the bastion host. You can use the webserver to test the application if you don’t want to install the React Native version of the application onto a mobile device. Remember that even though you’re not using the native application, the website must still be accessed from a device on the carrier’s 5G network.

  1. Issue the command below to create your bastion host
    aws ec2 --region $REGION run-instances \
    --instance-type t3.medium \
    --associate-public-ip-address \
    --subnet-id $BASTION_SUBNET_ID \
    --image-id $BASTION_IMAGE_ID \
    --security-group-ids $BASTION_SG_ID \
    --key-name $KEY_NAME
    

Note: It takes a few minutes for your instances to be ready. Even when the status check in the EC2 console reads 2/2 checks passed, it may still be a few minutes before the instance is done installing additional software packages and configuring itself. If you receive a lock error while running apt-get, wait several minutes and try again.

 

Configure the bastion host / web server

The last server you deployed serves two purposes. It acts as the bastion host allowing you to SSH into your other two servers, and it serves the client web app. In this section you’ll install that web app.

  1. SSH into the bastion host (the user name is bitnami). Note: In order to be able to easily SSH from the bastion host to the inference server you should use the -A (agent forwarding) parameter when starting your SSH session, e.g.:
    ssh -i /path/to/key.pem -A bitnami@<bastion ip address>
  2. Clone the GitHub repo with the React code
    git clone https://github.com/mikegcoleman/react-wavelength-inference-demo.git
  3. Install the dependencies
    cd react-wavelength-inference-demo && npm install
  4. Build the webpage
    npm run build
  5. Copy the page into the web server’s root directory
    cp -r ./build/* /home/bitnami/htdocs
  6. Test that the web app is running correctly by navigating to the public IP address of your bastion instance

 

Configure the inference server

In this section you deploy a Torchserve server running on EC2. Torchserve is configured with the fasterrcnn model. It receives the image from the API server, runs the inference, and returns the labels and bounding boxes for the items found in the image.

I’m not going to spend time going into the inner workings of Torchserve in this post. However, if you’re interested in learning more, check out my colleague Shashank’s blog.

  1. SSH into the bastion host and then SSH into the inference server instance. Note: In order to be able to easily SSH from the bastion host to the inference server you will want to use the -A (agent forwarding) parameter when starting your SSH session with the bastion host, e.g.:
    ssh -i /path/to/key.pem -A bitnami@<bastion public ip>

    To SSH from the bastion host to the inference server you do not need the -i or -A parameters, e.g.:

    ssh ubuntu@<inference server private ip>
  2. Update the packages on the server and install the necessary prerequisite packages.
    sudo apt-get update -y \
    && sudo apt-get install -y virtualenv openjdk-11-jdk gcc python3-dev
  3. Create a virtual environment.
    mkdir inference && cd inference
    virtualenv --python=python3 inference
    source inference/bin/activate
  4. Install Torchserve and its related components
    pip3 install \
    torch torchtext torchvision sentencepiece psutil \
    future wheel requests torchserve torch-model-archiver
  5. Install the inference model that the application will use.
    mkdir torchserve-examples && cd torchserve-examples
    
    git clone https://github.com/pytorch/serve.git
    
    mkdir model_store
    
    wget https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
    
    torch-model-archiver --model-name fasterrcnn --version 1.0 \
    --model-file serve/examples/object_detector/fast-rcnn/model.py \
    --serialized-file fasterrcnn_resnet50_fpn_coco-258fb6c6.pth \
    --handler object_detector \
    --extra-files serve/examples/object_detector/index_to_name.json
    
    mv fasterrcnn.mar model_store/
  6. Create a configuration file for Torchserve (config.properties) and configure Torchserve to listen on your instance’s private IP. Be sure to substitute the private IP of your instance below; you can find the private IP for your instance in the EC2 console. The contents of config.properties should look as follows:
    inference_address=http://<your instance private IP>:8080
    management_address=http://<your instance private IP>:8081

    For example:

    inference_address=http://10.0.0.253:8080
    management_address=http://10.0.0.253:8081
  7. Start the Torchserve server.
    torchserve --start --model-store model_store --models fasterrcnn=fasterrcnn.mar --ts-config config.properties

    It takes a few seconds for the server to startup, when it’s ready you should see a line that ends with:

    State change WORKER_STARTED -> WORKER_MODEL_LOADED

Leave this SSH session running so you can watch the inference server’s logs to see when it receives requests from the API server.

 

Configure the API server

In this section, you deploy the Flask-based API server.

  1. SSH into the bastion host and then SSH into the API server instance. Note: In order to be able to easily SSH from the bastion host to the API server you should use the -A (agent forwarding) parameter when starting your SSH session with the bastion host, e.g.:
    ssh -i /path/to/key.pem -A bitnami@<bastion public ip>

    To SSH from the bastion host to the API server you do not need the -i or -A parameters, e.g.:

    ssh ubuntu@<api server private ip>
  2. Test your inference server (being sure to substitute the INTERNAL IP of the inference instance in the second line below):
    curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
    
    curl -X POST \
    http://<your_inf_server_internal_IP>:8080/predictions/fasterrcnn \
    -T kitten.jpg

    You should see something similar to

    [
      {
        "cat": "[(228.7825, 82.63463), (583.77545, 677.3058)]"
      },
      {
        "car": "[(124.427414, 69.34327), (270.15457, 205.53458)]"
      }
    ]
    

    The inference server returns the labels of the objects it detected, and the corner coordinates of boxes that surround those objects.

    Now that you have verified the API server can connect to the inference server, you can configure the API server.

  3. Run the following command to update system package information and install necessary prerequisites.
    sudo apt-get update -y \
    && sudo apt-get install -y \
    libsm6 libxrender1 libfontconfig1 virtualenv
  4. Clone the Python code into the application directory
    mkdir apiserver && cd apiserver
    git clone https://github.com/mikegcoleman/flask_wavelength_api .
  5. Create and activate a virtual environment.
    virtualenv --python=python3 apiserver
    source apiserver/bin/activate
  6. Install necessary Python packages.
    pip3 install opencv-python flask pillow requests flask-cors
  7. Create a configuration file (config_values.txt) with the following line (substituting the INTERNAL IP of your inference server).
    http://<your_inf_server_internal_IP>:8080/predictions/fasterrcnn
  8. Start the application.
    python api.py

    You should see output similar to the following:

    * Serving Flask app "api" (lazy loading)
    * Environment: production
    WARNING: This is a development server. Do not use it in a production
    deployment.
    Use a production WSGI server instead
    * Debug mode: on
    * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
    * Restarting with stat
    * Debugger is active!
    * Debugger PIN: 311-750-351

 

Test the client application

To test the application, you need to have a device on the carrier’s 5G network. From your device’s web browser, navigate to the bastion / web server’s public IP address. In the text box at the top of the app, enter the public IP of your API server.

Next, choose an existing photo from your camera roll, or take a photo with the camera and press the process object button underneath the preview photo (you may need to scroll down).

The client sends the image to the API server, which forwards it to the inference server for detection. The API server then receives the prediction back from the inference server, adds a label and bounding boxes, and returns the marked-up image to the client, where it is displayed.
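
As an illustration of that markup step, the sketch below turns a response in the format shown earlier into boxes and labels on the image using Pillow. This is not the project’s actual code; the function name and styling choices are assumptions, and the coordinate strings are parsed exactly as they appear in the sample output above.

from ast import literal_eval
from PIL import Image, ImageDraw

def annotate(image_path, predictions, output_path):
    # predictions is the list returned by the inference server, e.g.
    # [{"cat": "[(228.78, 82.63), (583.77, 677.30)]"}]
    image = Image.open(image_path)
    draw = ImageDraw.Draw(image)
    for prediction in predictions:
        for label, box in prediction.items():
            # Parse the coordinate string into two (x, y) corner tuples
            (x1, y1), (x2, y2) = literal_eval(box)
            draw.rectangle([x1, y1, x2, y2], outline='red', width=3)
            draw.text((x1, max(y1 - 12, 0)), label, fill='red')
    image.save(output_path)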

example screenshot of the image

If the inference server cannot detect any objects in the image, you will receive a message indicating the prediction failed.

Conclusion and next steps

In this blog I covered some of the architectural considerations when deploying applications into Wavelength Zones. You then deployed a sample application designed to give you an idea of how you might architect an inference-at-the-edge solution. I hope this has inspired you to go off and build something new to take advantage of the exciting capabilities that Wavelength and 5G enable. Visit https://aws.amazon.com/wavelength/ to request access and check out documentation and other resources.

 

 

Building a Graylog server to run on an Amazon Lightsail instance

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/building-a-graylog-server-to-run-on-an-amazon-lightsail-instance/

This post is part of a collection by the Amazon Lightsail team to highlight how builders are using Lightsail to get started on AWS building interesting solutions. If you’re interested in contributing a post on how you’re using Lightsail, reach out to us at [email protected]! This post is guest contributed by Amazon Lightsail customer, Richard Gate.

This post reviews how to build a Graylog server on Amazon Lightsail, the easiest way to get started on AWS. Graylog is an open source log management system that allows textual logging data created by network devices, applications, and servers to be centrally stored, searched, and reported on.

This blog is relevant to those working from home with various pieces of network equipment and a need to centralize log data for these devices. My personal networking equipment includes a pfSense gateway managing a couple of broadband lines, routers, and Wi-Fi access points. With Graylog, you can centralize the log data collection for these devices and automate looking for issues raised by them in their log messages.

In this post, I walk you through how I built a Graylog server on a Lightsail instance running Ubuntu 18.04 LTS with the prerequisite packages, mainly Elasticsearch and MongoDB. This server receives log messages from my pfSense server, routers, and access points. It also takes into account that the devices being used are inside a private network NATing out to the internet, but that must be uniquely identified in Graylog.

Network design

The following diagram shows where the various parts of the network fit and provides details of the TCP and UDP ports involved at different points in the network. You can see the internal Wi-Fi AP and router behind the pfSense server with its own firewall, outbound NAT (Network Address Translation), and outbound load balancing (over two broadband lines, not shown). Traffic flows over the internet to the Lightsail edge firewall and on into the Lightsail instance running Graylog and the Elasticsearch and MongoDB services.

The following image is a simple diagram of the network.

architecture diagram

Network access to the Ubuntu instance is restricted by the Lightsail firewall, which allows TCP/UDP ports (and PING) to be allowed or blocked. The allowed ports are TCP:22 (SSH), UDP:51400 (syslog from pfSense), UDP:51401 (syslog from the Wi-Fi AP), and UDP:51402 (syslog from the router). These separate UDP ports are used so that Graylog can have a listener on each port and can tag a source on each for the individual devices. This is needed because the source IP is one of the two IPs of the two broadband lines that pfSense routes traffic through (outbound load balancing). The pfSense and other devices are configured to use the public IP of the Ubuntu Lightsail instance as their remote syslog server with the relevant destination UDP port. Recent changes to the Lightsail firewall now allow the source IP address of inbound traffic to be used to limit where the syslog data comes from. This is useful to prevent the whole internet from trying to send syslog data to the Graylog server.

Lightsail instance setup

Now that you have an idea of the network architecture, I can walk through how to set up Graylog on Amazon Lightsail.

The following section details the setup and configuration of the Lightsail instance to be used to run Graylog under the Ubuntu operating system (OS). This gets the instance ready to connect to and to start the process of installing Graylog.

The Lightsail Ubuntu 18.04 LTS instance is a 4-GB RAM instance, based on the minimum server specification given in the Graylog installation guide.

  1. From the Lightsail console, click Create instance.
  2. From Select a platform, choose Linux/Unix.
  3. From Select a blueprint, choose OS Only and then Ubuntu 18.04 LTS.

instance platform and blueprint

  1. From Choose your instance plan, choose the $20 bundle, with 4 GB, 2 vCPUs and 80 GB SSD.
  2. In Identify your instance, enter a unique name for your instance.

instance pricing plans

  1. Then click Create instance.

You are then taken back to the main Lightsail home page with your new instance showing grayed out and in a state of “Pending” until it has been created. Once it is running, the state changes to “Running.”

pending instance

instance running

  1. Click on the three dots at the top right of the new instance’s panel and select Manage.
  2. Then select Networking.
  3. Click Attach static IP in the “IP addresses” box.

create a static ip address to your instance

  1. If you already have a static IP available, select it from the dropdown list and click the green tick icon to the right of the “Select static IP” dropdown list.
  2. If not, click Create static IP, select your new instance, give the IP a unique name, and click Create.
  3. Under the firewall remove (click) the TCP:80 rule.
    As a best practice, you should restrict any incoming traffic to your Graylog server to the specific IP address (or addresses) that needs to access your instance.
  4. Click the SSH (TCP:22) rule and click the edit icon, then check the Restrict to IP address box,  enter the IP address of the system you will use to SSH into the instance in the Source IP address box, and click Save.
  5. Click on Add rule, set Application as Custom, Protocol as TCP and Range as 9000 (this is later used for web access to Graylog), specify the IP you will use to access the system as you did in the previous step, and click Create.
  6. Click on Add rule, set Application as Custom, Protocol as UDP and Range as 51400-51402 (one port for each of the devices sending syslog data), specify the IP you will use to access the system as you did in the previous step, and click Create.

add firewall rule

The static IP address used preceding should be assigned to a DNS name (“A” record) on your domain’s DNS server. The exact mechanism for doing this depends on where and how your DNS is hosted. This forms the Fully Qualified Domain Name (FQDN) used to connect to the Lightsail instance. However, you can also use the public IP address to connect via SSH, to reach the Graylog web interface, and for devices to send logging data.

Access the Lightsail instance to configure and install the software.

Having set up the Lightsail instance, the next step is to connect to the Ubuntu operating system to be able to run commands to configure Ubuntu and install Graylog. The remote command-line connection utility “SSH” is used. This secure (encrypted) connection method requires the security to be set up before use.

The Lightsail browser-based SSH client can also be used to connect and enter the command to install and configure the system without the need to manage the SSH authentication key file. However, I prefer to use a standalone SSH client for two main reasons. Firstly, I have a number of servers in different hosting environments and I prefer to use the same method to connect to them all. Secondly, I automate the installation and configuration using ansible, which connects via SSH and needs access to the authentication key file.

An SSH connection is used to enter commands into the Lightsail instance. Lightsail protects SSH connections using an authentication key (pem). The following procedure assumes you are using the default pem for SSH connections to the new Lightsail instance. The pem must be downloaded and saved for SSH use.

  1. From the Lightsail console, click Account, and select Account from the menu.

search in lightsail console

  1. Click SSH keys and Download to the right of the “Default” key.

manage ssh keys in console

  1. Download the pem file as “aws.pem” for later use by SSH.
  2. On UNIX systems, from the command line, run chmod 0600 aws.pem.

Test the SSH connection to the Lightsail instance. From the directory where you saved the “aws.pem” file, run the command “ssh -l ubuntu -i aws.pem <FQDN>”, where “<FQDN>” is the Fully Qualified Domain Name of the Lightsail instance. Your SSH client may ask for the initial connection to be confirmed, or may reject it if the name or IP of the Lightsail instance already exists in the local SSH “.ssh/known_hosts” file; if so, edit the file and delete the record.

Configuring Ubuntu from the Command Line (SSH)

Now that you created the Lightsail instance, you are ready to connect to your instance using your SSH client of choice. After you connect, there is a small amount of Ubuntu operating system configuration required to make certain the software that is pre-installed on the Lightsail instance is up to date, to set the hostname/timezone and create a swap file (which allows more memory to be used than actually exists by swapping out unused parts until needed again).

Update the operating system to the latest level and reboot:

apt -y update

apt -y upgrade

reboot

Set the hostname (e.g. mygraylog):

hostname mygraylog

Edit “/etc/hosts” and add the new host name to the “127.0.0.1” record

127.0.0.1 localhost mygraylog

Set your local timezone (mine is “Europe/London”):

timedatectl set-timezone Europe/London

Create a swap file, activate, and make available at boot time:

dd if=/dev/zero of=/swap count=8192 bs=1MiB

chmod 600 /swap

mkswap /swap

swapon /swap

Edit “/etc/fstab” add the following at the end of the file

/swap swap swap 0 0

Install Graylog and pre-requisites from the Command Line (SSH)

Finally, Graylog itself (and pre-requisite software packages that Graylog uses) can be installed.

Generate secrets to be used by Graylog:

This is required to create an encrypted version of the Graylog login password.

apt -y install pwgen

Save the string created by the next command to be used as <secret> later

pwgen -N 1 -s 96

Save the string created by the next command to be used as <password-sha2> later

<yourpassword> will be the password for the user “admin” for the Graylog web interface

echo -n "<yourpassword>" | sha256sum

The quotes around <yourpassword> are needed.

Install pre-requisite software packages:

These packages are required for the Graylog server to operate.

apt -y install apt-transport-https openjdk-8-jre-headless

apt -y install uuid-runtime curl dirmngr

Set up install for Elasticsearch:

Elasticsearch is used by Graylog to store all the received messages and for searching the stored messages in a flexible way. First, the location to install Elasticsearch from must be configured.

(the following is a single-line command)

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

(the following is a single-line command)

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

apt -y update

Install Elasticsearch, enable it to start at boot and start it:

apt -y install elasticsearch

Edit “/etc/elasticsearch/elasticsearch.yml” and change cluster.name: my-application to cluster.name: graylog

systemctl enable elasticsearch

systemctl start elasticsearch

Set up install for MongoDB:

MongoDB is used by Graylog to store its configuration. First, the location to install MongoDB from must be configured.

(the following is a single-line command)

wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | sudo apt-key add -

(the following is a single-line command)

echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list

apt -y update

Install MongoDB, enable it to start at boot and start it:

apt -y install mongodb-org

systemctl enable mongod

systemctl start mongod

Set up install for Graylog:

(the following is a single-line command)

wget https://packages.graylog2.org/repo/packages/graylog-3.2-repository_latest.deb

(the following is a single-line command)

dpkg -i graylog-3.2-repository_latest.deb

apt -y update

Install Graylog:

apt -y install graylog-server

Update the Graylog configuration:

Before starting the Graylog server, a few file updates are required for the network and security environment in which it runs.

Edit “/etc/graylog/server/server.conf” and make the following changes

  • Change “password_secret =” to “password_secret = <secret>” (see preceding)
  • Change “root_password_sha2 =” to “root_password_sha2 = <password-sha2>” (see preceding)
  • Change “elasticsearch_shards = 4” to “elasticsearch_shards = 1”
  • Change “http_bind_address = 127.0.0.1:9000” to “http_bind_address = 0.0.0.0:9000”
  • Change “http_publish_uri = …” to “http_publish_uri = http://<FQDN>:9000”, where <FQDN> is the fully qualified domain name of your Lightsail instance
  • Uncomment “#root_email = ….” and enter your email address
  • Uncomment “#root_timezone = ….” and change it to “root_timezone = UTC”

Edit “/etc/default/graylog-server” and make the following change.

  • Add “-Djava.net.preferIPv4Stack=true” at the start of the “GRAYLOG_SERVER_JAVA_OPTS”

Enable Graylog to start at boot and start it:

systemctl enable graylog-server

systemctl start graylog-server

Connect and log in to Graylog

The Graylog server is now ready to be connected to via its web interface so that the final configuration can be completed.

Assuming all the preceding ran without error, you can now log in to Graylog via:

http://<FQDN>:9000

<FQDN> is the fully qualified domain name of your Lightsail instance. Log on as the user “admin” with the password (<yourpassword>) that you used to generate <password-sha2> preceding.

enter username and password in graylog

Graylog basic configuration.

Assuming that the devices that send their syslog records to Graylog have been configured to forward to <FQDN>:51400 (51401 and 51402), Graylog listeners must be set up to receive the syslog records. Repeat the following for each of the ports (a short test script for verifying an input follows these steps):

  • From the top menu bar, go to System then Inputs.
  • From the Select input dropdown list, select Syslog UDP.
  • Click Launch new input.

syslog udp

  • On the Launch new input pop-up, tick Global, fill in the Title, Port, Override source (the source name that shows on messages received via this Listener) and click Save.

syslog udp input
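Before pointing real devices at the new inputs, you can optionally confirm that an input is receiving data by sending a test syslog message from any machine with Python 3. The following is a rough sketch that assumes a Syslog UDP input listening on port 51400, as created above; substitute the FQDN of your Lightsail instance.

import datetime
import socket

GRAYLOG_HOST = "<FQDN>"   # the FQDN of your Lightsail instance
GRAYLOG_PORT = 51400      # one of the Syslog UDP inputs created above

# Build a minimal RFC 3164-style syslog message: <priority>timestamp hostname tag: text
timestamp = datetime.datetime.now().strftime("%b %d %H:%M:%S")
message = f"<13>{timestamp} testhost graylog-test: hello from the test script"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message.encode("utf-8"), (GRAYLOG_HOST, GRAYLOG_PORT))
sock.close()

If the input is working, the test message appears in the Graylog search view within a few seconds.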

Having created and configured a Lightsail instance, configured Ubuntu, installed the Graylog server and its supporting services, and completed a small amount of Graylog configuration, you should start to see messages from the devices appearing in Graylog. Additional devices can be added, and the numerous other features of Graylog can be tried out.

Graylog provides an excellent way of bringing all the logging data from various devices into one central management server, allowing you to see the effects of issues within a network in a single timeline and making problem determination a much simpler process.

Author

Richard Gate, CommuniG8 Ltd

Email: [email protected]

Twitter: @communig8

Improving website performance with Lightsail Content Delivery Network

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/improving-website-performance-with-lightsail-content-delivery-network/

This post was written by Mike Coleman, Senior Developer Advocate 

Amazon Lightsail recently announced the release of Lightsail Content Delivery Network (CDN). With this launch customers can now distribute their content more securely to users across the globe. Content is served from the edge location closest to the end user which improves performance while reducing server load.

In this blog, I walk through exactly how to configure a Lightsail distribution to work with both a standard web server and WordPress, taking advantage of the pre-configured settings that Lightsail CDN offers for WordPress. I cover creating a new distribution, verifying that the distribution is working correctly, and using a custom domain name with Lightsail CDN.

What is a CDN?

A CDN is a globally distributed set of network endpoints that cache your website’s content so it’s closer to your end users. When a user requests content from your site, that request is first routed to one of the CDN endpoints. If the content is available in the cache, it is served from that location. If it’s not available in the cache, it is retrieved from your web server and presented to the requestor. Additionally, the content is placed in the cache so subsequent requests from that part of the world can be served from the cache without having to make a call to the web server.

Using Lightsail CDN with your websites offers a variety of benefits:

  1. End-users access your web content from the closest Lightsail CDN edge location, which greatly reduces response times.
  2. Serving content from the endpoint cache reduces the load on your actual web server since your server won’t need to service as many requests directly.
  3. Lightsail distributions make it easy to deliver content over Hypertext Transfer Protocol Secure (HTTPS) by providing SSL certificates and TLS support.

I am particularly excited about the third point. Before the release of Lightsail distributions, applying an SSL certificate to a standalone website required several manual steps. With Lightsail CDN, you can secure your web traffic with a few clicks.

One final point, Lightsail CDN is designed to cache what’s often called “static content.” Static content is content that is the same regardless of who requests it, or, stated another way, the content is not rendered on a per-user basis. This could include non-dynamic webpages, but also things like CSS stylesheets, images and videos, in addition to files containing JavaScript code.

The rest of this post covers how to set up a Lightsail distribution with either a typical web server or WordPress. Additionally, I talk about how to encrypt the traffic going from your users to your endpoint.

 

Prerequisites

You should have either a standard webserver (for example, Apache or NGINX) or a WordPress server running in your Amazon Lightsail account. Your server should also have a static IP address. Our documentation has you covered if you need some help getting a server deployed.

In order to use WordPress with Lightsail CDN, you’ll need to edit a configuration file from the Linux command line. You should be familiar with both how to SSH into your Lightsail instance in addition to how to use a Linux text editor such as Vim.

Configuring a custom domain requires the ability to manage the DNS for your domain. The DNS does not need to be managed by Lightsail or AWS, but you do need to have the ability to add domain records.

 

Creating the Lightsail distribution

The actual resource that you deploy to manage your web traffic is called a “distribution,” and its origin can be either a Lightsail instance running a web server, a Lightsail instance running WordPress, or a Lightsail load balancer. This blog covers the web server and WordPress use cases.

  1. From the Lightsail console, choose Networking.
  2. Click Create distribution.
  3. Under Select your origin choose the server you previously created. Notice that your server is automatically listed in the dropdown.
    lightsail console: select your origin
  4. If your instance does not have a static IP attached to it already, you will need to either assign an existing static IP or create a new one.
    lightsail console: assign an existing static ip                                                                                                     Note: If you’re configuring Lightsail distribution to work with a WordPress server, you will be prompted to confirm you wish to use the WordPress preset. By providing smart presets for WordPress instances, Lightsail CDN reduces the time and complexity usually associated with creating a traditional CDN distribution.Click Yes, apply.
  5. Leave caching behavior set to the default (either Best for static content for a typical web server or Best for WordPress if you are using a WordPress server). This setting controls which directories are cached on your distribution’s endpoints.
  6. Leave the rest of the settings at their defaults and click Create Distribution.

It takes several minutes for your distribution to become ready.
distribution status updating settings

 

Additional steps for WordPress

In this section, you edit your WordPress configuration file (wp-config.php) to allow HTTPS connections to your server.

  1. SSH into your WordPress server.
  2. Create a backup of your wp-config.php:

    sudo cp /opt/bitnami/apps/wordpress/htdocs/wp-config.php /opt/bitnami/apps/wordpress/htdocs/wp-config.php.backup

  3. Open wp-config.php in your text editor of choice:

    sudo vi /opt/bitnami/apps/wordpress/htdocs/wp-config.php

  4. Delete the following two lines:

    define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/');
    define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST'] . '/');

  5. Copy and paste the following into your wp-config.php:

    define('WP_SITEURL', 'https://' . $_SERVER['HTTP_HOST'] . '/');
    define('WP_HOME', 'https://' . $_SERVER['HTTP_HOST'] . '/');

    if (isset($_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO'])
        && $_SERVER['HTTP_CLOUDFRONT_FORWARDED_PROTO'] === 'https') {
        $_SERVER['HTTPS'] = 'on';
    }
  6. Save the file.
  7. Restart the Apache web server.

sudo /opt/bitnami/ctlscript.sh restart apache

After the server restarts, you can test to ensure that the Lightsail distribution is configured correctly.

 

Testing your distribution

Behind the scenes, Lightsail distributions use Amazon CloudFront. Any static content from your site will be served up by the CloudFront network of edge locations. You can verify this behavior with your browser’s developer tools. In the following steps, I use Google Chrome, but the steps are similar for other browsers.

  1. In your web browser, navigate to the URL of the distribution you just created. You can find the URL at the top of the details page for your distribution.
    distribution default domain
  2. Open the developer tools console by clicking on the three-dot menu at the end of address bar and choosing More tools and then Developer tools.
    developer tools
  3. Click the Sources tab and notice that cloudfront.net is listed as the source for the website content. This shows you that your website traffic is now being served via the Lightsail distribution.
    sources in cloudfront.net
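If you prefer the command line to browser developer tools, checking the response headers is another quick way to see whether content is being served through the distribution. The following is a small sketch that assumes Python 3 with the requests library installed; substitute the default domain of your own distribution, and note that the exact headers returned can vary.

import requests

# Replace with the default domain shown on your distribution's details page
url = "https://<distribution-id>.cloudfront.net/"

resp = requests.get(url, timeout=10)

# Responses served through the distribution typically include CloudFront headers
print("Via:    ", resp.headers.get("Via"))
print("X-Cache:", resp.headers.get("X-Cache"))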

(Optional) Adding a custom domain

At this point, your website is accessed via a randomly generated URL (for example,  d3b09eq0j1fbdq.cloudfront.net). In a production deployment, you’d want to use your own registered domain name (for example, www.example.com). In this next section, you configure Lightsail distribution to work with a custom domain by creating an SSL certificate for your domain, and a DNS CNAME record that maps your domain to the distribution URL.

As mentioned previously, your DNS does not need to be managed by Lightsail to perform the steps, but you do need to have the ability to create records for the domain on whichever provider you’re currently using.

 

  1. Select Domains and HTTPS from your distribution’s menu.
  2. Click +Create certificate
  3. Under Primary domain enter the fully qualified domain name (FQDN) you want to use for your server, and click Create.
    creating a certificate in ls console
  4. You’ll be prompted to create a DNS CNAME record to validate that you own the requested domain. Use the values in the dialog below to populate the record. If you need more assistance with this step, check out the documentation. Note: the text is truncated on the page, but the entire string is copied if you highlight the fragment.
    certificate validation pending
  5. It can take several minutes for the domain validation to occur. Once the validation has finalized, the certificate status changes to Valid, not in use. Click the Custom domains are disabled slider to activate the new certificate.

disable custom domains

Wait several minutes until the distribution status is Enabled before moving to the final step.

status is enabled

The last step is to create a CNAME record that maps your domain name to the URL for the distribution. If you’re using Lightsail to manage your DNS, follow the steps below. If your domain name is managed by a third party, consult their documentation.

  1. From the Lightsail home page click Networking on the horizontal menu.
  2. Click on the name of the DNS zone you wish to use.
  3. Click + Add record.
  4. Enter the subdomain you want to use (e.g. www or @ for an apex record). Click in the Resolves to text box, and notice that Lightsail automatically populates the name of your distribution. Click on your distribution name.
    lightsail console: DNS screenshot
  5. Click the green check mark to save your DNS record.

 

At this point, you should be able to access your domain by entering your FQDN in your browser.

Conclusion

So that’s all there is to accelerating and securing the delivery of your website content with Lightsail Content Delivery Network. If you’ve already got a web server running on Lightsail, why not take advantage of the one-year free tier and configure it to work with a Lightsail distribution? If you need more information on Lightsail distributions, be sure to check out the documentation.

ICYMI: Serverless Q2 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2020/

Welcome to the 10th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

AWS Lambda functions can now mount an Amazon Elastic File System (EFS). EFS is a scalable and elastic NFS file system storing data within and across multiple Availability Zones (AZ) for high availability and durability. In this way, you can use a familiar file system interface to store and share data across all concurrent execution environments of one, or more, Lambda functions. EFS supports full file system access semantics, such as strong consistency and file locking.

Using different EFS access points, each Lambda function can access different paths in a file system, or use different file system permissions. You can share the same EFS file system with Amazon EC2 instances, containerized applications using Amazon ECS and AWS Fargate, and on-premises servers.

Learn how to create an Amazon EFS-mounted Lambda function using the AWS Serverless Application Model in Sessions With SAM Episode 10.

With our recent launch of .NET Core 3.1 AWS Lambda runtime, we’ve also released version 2.0.0 of the PowerShell module AWSLambdaPSCore. The new version now supports PowerShell 7.

Amazon EventBridge

At AWS re:Invent 2019, we introduced a preview of Amazon EventBridge schema registry and discovery. This is a way to store the structure of the events (the schema) in a central location. It can simplify using events in your code by generating the code to process them for Java, Python, and TypeScript. In April, we announced general availability of EventBridge Schema Registry.

We also added support for resource policies. Resource policies allow sharing of a schema registry across different AWS accounts and organizations. In this way, developers on different teams can search for and use any schema that another team has added to the shared registry.

Ben Smith, AWS Serverless Developer Advocate, published a guide on how to capture user events and monitor user behavior using the Amazon EventBridge partner integration with Auth0. This enables better insight into your application to help deliver a more customized experience for your users.

AWS Step Functions

In May, we launched a new AWS Step Functions service integration with AWS CodeBuild. CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces packages that are ready for deployment. Now, during the execution of a state machine, you can start or stop a build, get build report summaries, and delete past build executions records.

With the new AWS CodePipeline support to invoke Step Functions you can customize your delivery pipeline with choices, external validations, or parallel tasks. Each of those tasks can now call CodeBuild to create a custom build following specific requirements. Learn how to build a continuous integration workflow with Step Functions and AWS CodeBuild.

Rob Sutter, AWS Serverless Developer Advocate, has published a video series on Step Functions. We’ve compiled a playlist on YouTube to help you on your serverless journey.

AWS Amplify

The AWS Amplify Framework announced in April that they have rearchitected the Amplify UI component library to enable JavaScript developers to easily add authentication scenarios to their web apps. The authentication components include numerous improvements over previous versions. These include the ability to automatically sign in users after sign-up confirmation, better customization, and improved accessibility.

Amplify also announced the availability of Amplify Framework iOS and Amplify Framework Android libraries and tools. These help mobile application developers to easily build secure and scalable cloud-powered applications. Previously, mobile developers relied on a combination of tools and SDKs along with the Amplify CLI to create and manage a backend.

These new native libraries are oriented around use cases such as authentication, data storage and access, and machine learning predictions. They provide a declarative interface that enables you to programmatically apply best practices with abstractions.

A mono-repository (monorepo) is a single repository that contains more than one logical project, each in its own folder. Monorepo support is now available for the AWS Amplify Console, allowing you to connect Amplify Console to a sub-folder in your mono-repository. Learn how to set up continuous deployment and hosting on a monorepo with the Amplify Console.

Amazon Keyspaces (for Apache Cassandra)

Amazon Managed Apache Cassandra Service (MCS) is now generally available under the new name: Amazon Keyspaces (for Apache Cassandra). Amazon Keyspaces is built on Apache Cassandra and can be used as a fully managed serverless database. Your applications can read and write data from Amazon Keyspaces using your existing Cassandra Query Language (CQL) code, with little or no changes. Danilo Poccia explains how to use Amazon Keyspace with API Gateway and Lambda in this launch post.

AWS Glue

In April we extended AWS Glue jobs, based on Apache Spark, to run continuously and consume data from streaming platforms such as Amazon Kinesis Data Streams and Apache Kafka (including the fully-managed Amazon MSK). Learn how to manage a serverless extract, transform, load (ETL) pipeline with Glue in this guide by Danilo Poccia.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest published to the AWS Compute Blog this quarter.

Introducing the new serverless LAMP stack

Ben Smith, AWS Serverless Developer Advocate, introduces the Serverless LAMP stack. He explains how to use serverless technologies with PHP. Learn about the available tools, frameworks and strategies to build serverless applications, and why now is the right time to start.

 

Building a location-based, scalable, serverless web app

James Beswick, AWS Serverless Developer Advocate, walks through building a location-based, scalable, serverless web app. Ask Around Me is an example project that allows users to ask questions within a geofence to create an engaging community driven experience.

Building well-architected serverless applications

Julian Wood, AWS Serverless Developer Advocate, published two blog series on building well-architected serverless applications. Learn how to better understand application health and lifecycle management.

Device hacking with serverless

Go beyond the browser with these creative and physical projects. Moheeb Zara, AWS Serverless Developer Advocate, published several serverless powered device hacks, all using off the shelf parts.


Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also regularly join in on podcasts and record short videos so you can learn in quick, bite-sized chunks.

Here are the highlights from Q2.

Innovator Island Workshop

Learn how to build a complete serverless web application for a popular theme park called Innovator Island. James Beswick created a video series to walk you through this popular workshop at your own pace.

Serverless First Function

In May, we held a new virtual event series, the Serverless-First Function, to help you and your organization get the most out of the cloud. The first event, on May 21, included sessions from Amazon CTO, Dr. Werner Vogels, and VP of Serverless at AWS, David Richardson. The second event, May 28, was packed with sessions with our AWS Serverless Developer Advocate team. Catch up on the AWS Twitch channel.

Live streams

The AWS Serverless Developer Advocate team hosts several weekly livestreams on the AWS Twitch channel covering a wide range of topics. You can catch up on all our past content, including workshops, on the AWS Serverless YouTube channel.

Eric Johnson hosts “Sessions with SAM” every Thursday at 10AM PST. Each week, Eric shows how to use SAM to solve different serverless challenges. He explains how to use SAM templates to build powerful serverless applications. Catch up on the last few episodes.

James Beswick, AWS Serverless Developer Advocate, has compiled a round-up of all his content from Q2. He has plenty of videos ranging from beginner to advanced topics.

AWS Serverless Heroes

We’re pleased to welcome Kyuhyun Byun and Serkan Özal to the growing list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has much more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more getting started tutorials.

Follow the AWS Serverless team on our new LinkedIn page, where we share all the latest news and events. You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.

Chris Munns: @chrismunns
Eric Johnson: @edjgeek
James Beswick: @jbesw
Moheeb Zara: @virgilvox
Ben Smith: @benjamin_l_s
Rob Sutter: @rts_rob
Julian Wood: @julian_wood

How to enable X11 forwarding from Red Hat Enterprise Linux (RHEL), Amazon Linux, SUSE Linux, Ubuntu server to support GUI-based installations from Amazon EC2

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/how-to-enable-x11-forwarding-from-red-hat-enterprise-linux-rhel-amazon-linux-suse-linux-ubuntu-server-to-support-gui-based-installations-from-amazon-ec2/

This post was written by Sivasamy Subramaniam, AWS Database Consultant.

In this post, I discuss enabling X11 forwarding from Red Hat Enterprise Linux (RHEL), Amazon Linux, SUSE Linux, and Ubuntu servers running on Amazon EC2. This is helpful for system and database administrators, and application teams that want to perform software installations on Amazon EC2 using a GUI method. This blog provides detailed steps around SSH and X11 tools, various network and operating system (OS) level settings, and best practices for achieving X11 forwarding on Amazon EC2 when installing databases like Oracle using a GUI.

There are several techniques for connecting to Amazon EC2 instances to manage OS-level configurations. You typically use an SSH client (such as PuTTY or a command-line SSH client) to establish the connection from a Windows or Mac-based laptop, bastion host, or jump server to the EC2 instances running Linux. During application installation or configuration, you might be required to install software such as an Oracle database or a third-party database using GUI methods. This article covers the steps required to forward the X11 display to your highly secure Windows-based bastion host.

 

Prerequisites

To complete this walkthrough the following is required:

 

  • Ensure that you have a bastion host running on Amazon EC2 with a Windows OS. This host must have access to the EC2 machines running Linux, such as RHEL, Amazon Linux, SUSE Linux, and Ubuntu servers. If not, configure a Windows-based bastion host with SSH access via port 22 to the EC2 instances running Linux-based operating systems.
  • I recommend having Windows-based hosts in the same Availability Zone or Region as the EC2 Linux hosts that you plan to connect and forward X11 to. This is to avoid any high latency in X11 forwarding during your application installations.
  • Install tools such as putty and Xming in the Windows based bastion host that you want to SSH to Linux EC2 host and X11 forwarding.
  • Install XQuartz if you want to redirect X11 to macOS. XQuartz is the package that provides X11 display support on macOS. To start using X11 forwarding to your Mac, use the -X switch. In other words, the SSH command looks like this: ssh -X -i "<ssh private key file name>" <user_name>@<ip-address>. You must log out and log back in after installing XQuartz for it to work properly.
  • In order to securely configure or install putty, refer to the section Configuring ssh-agent on Windows in the blog post Securely Connect to Linux Instances Running in a Private Amazon VPC.
  • I don’t recommend forwarding X11 directly to the laptop because it’s less secure, and you must resolve latency issues if the user is located in a different Region than the one hosting the EC2 instance.
  • You may need sudo permission to run X11 forwarding commands as a root user in order to complete the setup.

 

Solution

Connect to your EC2 instance using an SSH client, and perform the following setup as needed.

Step 1: Verify or install required X11 packages.

Find your OS release information, then install the required X11 packages along with xclock or xterm. Installing the xclock or xterm packages is optional; they are installed in this post only to test X11 forwarding.

  • List OS information and release with the following command:

sudo cat /etc/os-release

  • List and install X11 packages with the following commands, based on your operating system release and version:

Amazon Linux:

sudo yum list installed '*X11*'

sudo yum install xclock xterm

sudo yum install xorg-x11-xauth.x86_64 -y

sudo yum list installed '*X11*'

Red Hat Enterprise Linux:

sudo yum list installed '*X11*'

sudo yum install xterm

sudo yum install xorg-x11-xauth

sudo yum list installed '*X11*'

Note: The xorg-x11-apps package (which provides xclock) is available in the CodeReady Linux Builder repository for RHEL 8. I skipped installing this package and used only xterm to test X11 forwarding.

SUSE:

sudo zypper install xclock

sudo zypper install xauth

Ubuntu:

sudo apt list installed '*X11*'

sudo apt install x11-apps

sudo apt list installed '*X11*'

 

Step 2: Verify and configure X11 forwarding

  • On the Linux server, check the current X11Forwarding setting in the sshd_config file; you set it to yes and restart the sshd service in the next steps:

sudo  cat /etc/ssh/sshd_config |grep -i X11Forwarding

  • Set “X11Forwarding yes” if it is currently set to “no”, then restart the sshd service.

sudo  vi /etc/ssh/sshd_config

sudo  cat /etc/ssh/sshd_config |grep -i X11Forwarding

  • X11Forwarding yes

x11 forwarding yes and no

  • To restart sshd:

sudo service sshd restart

 

Step 3: Configure putty and Xming to perform X11 forwarding connect and verify X11 forwarding

Log in to your Windows bastion host. Then, open a fresh PuTTY session, and use private key or password-based authentication per your organization’s setup. Finally, test the xclock or xterm command to see X11 forwarding in action.

  • Launch the Xming utility that you installed on the Windows bastion host and leave it running.

click on xming icon

  • Select Session from the Category pane on the left. Set Host Name to the instance’s private IP, Port to 22, and Connection type to SSH. Note that you use the private IP of the EC2 instance because you connect to it from inside the VPC/network.

putty config host ip

  • Go to Connection and click Data. Then, set Auto-login username to ec2-user, ubuntu, or whichever user you are allowed to log in as.
  • Go to Connection, select SSH, and then click Auth. Then, click Browse to select the private key generated earlier if you are using key-based authentication.
  • Go to Connection, select SSH, and then click X11. Then, select Enable X11 forwarding.
  • Set X display location as localhost:0.0

putty config with x11 forwarding

  • Go back to Session and click on Save after creating a session name in Saved session.

Now that you have set up PuTTY and Xming and configured the X11 settings, you can click the Load button and then the Open button. This opens a new SSH terminal with X11 forwarding enabled. Now, I move on to testing X11 forwarding.

  • Test X11 forwarding as the user you logged in with:

Example:

xauth list

export DISPLAY=localhost:10.0

xclock or xterm

X11 forwarding on Mac

  • To start using X11 forwarding to your Mac, simply use the -X switch.

Example:

ssh -X -i "<ssh private key file name>" <user_name>@<ip-address>

xclock or xterm

 

You see an xclock or xterm window open, similar to the one following. This means your X11 forwarding setup is working as expected, and you can start GUI-based application installation or configuration by running the installer or configuration tools.

xclock to demonstrate this is correct

Step 4: Configure the EC2 Linux session to forward X11 if you are switching to a different user after login to run GUI-based installations or commands

In this example, ec2-user is the user logged in with SSH, who then switches to the oracle user.

  • As the logged-in user, identify the xauth details:

xauth list

env|grep DISPLAY

xauth list | grep unix`echo $DISPLAY | cut -c10-12` > /tmp/xauth

ll /tmp/xauth ; cat /tmp/xauth

  • Switch to the user where you want to run GUI-based installation or tools:

sudo su - oracle

xauth add `cat /tmp/xauth`

xauth list

env|grep DISPLAY

export DISPLAY=localhost:10.0

xclock

You see an xclock or xterm window open, similar to the one following. This means your X11 forwarding setup is working as expected even after switching to a different user. You can start GUI-based application installation or configuration by running the installer or configuration tools.

another xclock to show the forwarding worked  

Conclusion

In this blog, I demonstrated how to configure Amazon EC2 instances running various Linux-based operating systems to forward X11 to a Windows bastion host. This is helpful for any application that requires GUI-based installation methods. It also suits bastion hosts that provide highly secure, low-latency environments for SSH-related operations, including GUI-based installations, as it does not require any additional network configuration other than opening port 22 for standard SSH authentication. Please try this walkthrough for yourself, and leave any comments following!

 

 

Building an electronic security lock using serverless

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-an-electronic-security-lock-using-serverless/

In this guide I show how to build an electronic security lock for package delivery, securing physical documents, or granting access to a secret lab. This project uses AWS Serverless to create a touchscreen keypad lock that uses SMS to alert a recipient with a custom message and unlock code. Files are included for the lockbox shown, but the system can be installed in anything with a door.

CircuitPython is a lightweight version of Python that works on embedded hardware. It runs on an Adafruit PyPortal open-source IoT touch display. A relay wired to the PyPortal acts as an electronic switch to bridge power to an electronic solenoid lock.

I deploy the backend to the AWS Cloud using the AWS Serverless Application Repository. The code on the PyPortal makes REST calls to the backend to send a random four-digit code as a text message using Amazon Pinpoint. It also stores the lock state in AWS Systems Manager Parameter Store, a secure service for storing and retrieving sensitive information.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

The serverless backend consists of three Amazon API Gateway endpoints that invoke AWS Lambda functions. At boot, the PyPortal calls the FetchState function to access the lock state from a Parameter Store in AWS Systems Manager. For example, if the returned state is:

{ "locked": true, "code": "1234" }

the PyPortal leaves the relay open so that the solenoid lock remains locked. Once the matching “1234” code is entered, the relay circuit is closed and the solenoid lock is opened. When unlocked the PyPortal calls the UpdateState function to update the state to:

{ "locked": false, "code": "" }

In an unlocked state, the PyPortal requests a ten-digit phone number to be entered in order to lock. The SendCode function is called with the phone number so that it can generate a random four-digit code. A message is then sent to the recipient using Amazon Pinpoint, and the Parameter Store state is updated to “locked”. The state is returned in the response and the PyPortal opens the relay again and stores the unlock code locally.

Before deploying the backend, create an Amazon Pinpoint Project and request a long code. A long code is a dedicated phone number required for sending SMS.

  1. Navigate to the Amazon Pinpoint console.
  2. Ensure that you are in a Region where Amazon Pinpoint is supported. For the most up-to-date list, see AWS Service Endpoints.
  3. Choose Create Project.
  4. Name your project and choose Create.
  5. Choose Configure under SMS and Voice.
  6. Select Enable the SMS channel for this project and choose Save changes.

  7. Under Settings, SMS and Voice choose Request long codes.
  8. Enter the target country and select Transactional for Default call type. Choose Request long codes. This incurs a monthly cost of one dollar and can be canceled anytime. For a breakdown of costs, check out current pricing.
  9. Under Settings, General settings make a note of the Project ID.

I use the AWS Serverless Application Model (AWS SAM) to create the backend template. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Navigate to the aws-serverless-pyportal-lock application in the AWS Serverless Application Repository.
  2. Under Application settings, fill the parameters PinpointApplicationID and LockboxCustomMessage.
  3. Choose Deploy.
  4. Once complete, choose View CloudFormation Stack.
  5. Select the Outputs tab and make a note of the LockboxBaseApiUrl. This is required for configuring the PyPortal.
  6. Navigate to the URL listed as LockboxApiKey in the Outputs tab.
  7. Choose Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the backend.
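Before moving on to the PyPortal, you can optionally sanity-check the deployed backend from your workstation. The sketch below assumes Python 3 with the requests library, and it uses the same /state path and x-api-key header that the device code calls later in this post; substitute the LockboxBaseApiUrl and LockboxApiKey values you noted from the Outputs tab.

import requests

BASE_API = "<LockboxBaseApiUrl>"   # from the CloudFormation Outputs tab
API_KEY = "<LockboxApiKey>"        # revealed with the Show button

resp = requests.get(BASE_API + "/state", headers={"x-api-key": API_KEY}, timeout=30)
print(resp.status_code, resp.json())   # expect a JSON document with locked and code fields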

PyPortal setup

The following instructions walk through installing the latest version of the Adafruit CircuityPython libraries and firmware.

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.3.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help with troubleshooting issues.

Wiring

Electronic solenoid locks come in varying shapes, sizes, and voltages. Choose one that works for your needs and wire it according to the following instructions for the PyPortal.

  1. Gather the PyPortal, a solenoid lock, relay module, JST connectors, jumper wire, and a power source that matches the solenoid being used. For this project, a six-volt solenoid is used with a four AA battery holder.
  2. Wire the system following this diagram.
  3. Splice female jumper wires to the exposed leads of a JST connector to connect the relay module.
  4. Insert the JST connector end to the port labeled D4 on the PyPortal.
  5. Power the PyPortal using USB or by feeding a five-volt supply to the port labeled D3.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. You can flash new firmware on the PyPortal by copying a Python file and necessary assets to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the LockboxApiKey and LockboxBaseApiURL API Gateway endpoint. These can be found under Outputs in the AWS CloudFormation stack created by the AWS Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request to the FetchState function.
  5. Test the system works by entering in a phone number when prompted. An SMS message with the unlock code is sent to the provided number.
  6. Mount the system to the desired door or container, such as a 3D printed safe (files included in the GitHub project).

    Optionally, if you installed the Mu Editor, you can choose “Serial” to follow along with the device log.

 

Understanding the code

See circuit-python/code.py from the GitHub project; this is the main code for the PyPortal. When the PyPortal connects to Wi-Fi, the first thing it does is make a GET request to the API Gateway endpoint for the FetchState function.

def getState():
    endpoint = secrets['base-api'] + "/state"
    headers = {"x-api-key": secrets['x-api-key']}
    response = wifi.get(endpoint, headers=headers, timeout=30)
    handleState(response.json())
    response.close()

The FetchState Lambda function code, written in Python, gets the state from the Parameter Store and returns it in the response to the PyPortal.

import os
import json
import boto3

client = boto3.client('ssm')
parameterName = os.environ.get('PARAMETER_NAME')

def lambda_handler(event, context):
    response = client.get_parameter(
        Name=parameterName,
        WithDecryption=False
    )

    state = json.loads(response['Parameter']['Value'])

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

The getState function in the CircuitPython code passes the returned state to the handleState function, which determines whether to physically lock or unlock the device.

def handleState(newState):
    print(state)
    state['code'] = newState['code']
    state['locked'] = newState['locked']
    print(state)
    if state['locked'] == True:
        lock()
    if state['locked'] == False:
        unlock()

When the device is unlocked, and a phone number is entered to lock the device, the CircuitPython command function is called.

def command(action, num):
    if action == "unlock":
        if num == state["code"]:
            unlock()
        else:
            number_label.text = "Wrong code!"
            playBeep()
    if action == "lock":
        if validate(num) == True:
            data = sendCode(num)
            handleState(data)

The CircuitPython sendCode function makes a POST request with the entered phone number to the API Gateway endpoint for the SendCode Lambda function:

def sendCode(num):
    endpoint = secrets['base-api'] + "/lock"
    headers = {"x-api-key": secrets['x-api-key']}
    data = { "number": num }
    response = wifi.post(endpoint, json=data, headers=headers, timeout=30)
    data = response.json()
    print("Code received: ", data)
    response.close()
    return data

This Lambda function generates a random four-digit number and adds it to the custom message stored as an environment variable. It then sends a text message to the provided phone number using Amazon Pinpoint, and saves the new state in the Parameter Store. The new state is returned in the response and is used by the handleState function in the CircuitPython code.

import os
import json
import boto3
import random

pinpoint = boto3.client('pinpoint')
ssm = boto3.client('ssm')

applicationId = os.environ.get('APPLICATION_ID')
parameterName = os.environ.get('PARAMETER_NAME')
message = os.environ.get('MESSAGE')

def lambda_handler(event, context):
    print(event)
    body = json.loads(event['body'])

    number = "+1" + str(body['number'])
    code = str(random.randint(1111,9999))

    addresses = {}
    addresses[number] = {'ChannelType': 'SMS'}
    pinpoint.send_messages(
        ApplicationId=applicationId,
        MessageRequest={
            'Addresses': addresses,
            'MessageConfiguration': {
                'SMSMessage': {
                    'Body': message + code,
                    'MessageType': 'TRANSACTIONAL'
                }
            }
        }
    )

    state = { "locked": True, "code": code }

    response = ssm.put_parameter(
        Name=parameterName,
        Value=json.dumps(state),
        Type='String',
        Overwrite=True
    )

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

Entering the correct unlock code from the SMS message calls the unlock function. The unlock function closes the relay circuit to open the solenoid lock. It plays a beep sound and then calls the updateState function, which makes a POST request to the API Gateway endpoint for the UpdateState Lambda function.

def updateState(newState):
    endpoint = secrets['base-api'] + "/state"
    headers = {"x-api-key": secrets['x-api-key']}
    response = wifi.post(endpoint, json=newState, headers=headers, timeout=30)
    data = response.json()
    print("Updated state to: ", data)
    response.close()
    return data

def unlock():
    print("Unlocked!")
    number_label.text = "Enter Phone# to Lock"
    time.sleep(1)
    btn = find_button("Unlock")
    if btn is not None:
        btn.selected = True
        btn.label = "Lock"
    lock_relay.value = True
    playBeep()
    updateState({"locked": False, "code": ""})

The UpdateState Lambda function updates the Parameter Store whenever the state is changed. When the PyPortal loses power or restarts, the last known state is fetched, preventing a false lock/unlocked position.

import os
import json
import boto3

client = boto3.client('ssm')
parameterName = os.environ.get('PARAMETER_NAME')

def lambda_handler(event, context):
    state = json.loads(event['body'])

    response = client.put_parameter(
        Name=parameterName,
        Value=json.dumps(state),
        Type='String',
        Overwrite=True
    )

    return {
        "statusCode": 200,
        "body": json.dumps(state)
    }

Conclusion

I show how to build an electronic keypad lock system using a basic relay circuit and a microcontroller. The system is managed by a serverless backend API deployed using the AWS Serverless Application Repository. The backend uses API Gateway to provide a REST API for Lambda functions that handle fetching lock state, updating lock state, and sending a random four-digit code via SMS using Amazon Pinpoint. Language consistency is achieved by using CircuitPython on the PyPortal and Python 3.8 in the Lambda function code.

Use this project as a template to build out any solution that requires secure physical access control. It can be embedded in cabinet drawers to protect documents or can be used with a door solenoid to control room access. Try combining it with a serverless geohashing app to develop a treasure hunting experience. Explore how to further modify the serverless application in the GitHub project by learning about the AWS Serverless Application Model. Read my previous guide to learn how you can add voice to a CircuitPython project on a PyPortal.

 

Using WebSockets and Load Balancers Part two

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/using-websockets-and-load-balancers-part-two/

This post was written by Robert Zhu, Principal Developer Advocate at AWS. 

This article continues a blog I posted earlier about using Load Balancers on Amazon Lightsail. In this article, I demonstrate a few common challenges and solutions when combining stateful applications with load balancers. I start with a simple WebSocket application in Amazon Lightsail that counts the number of seconds the client has been connected. Then, I add a Lightsail Load Balancer, and show you how the application performs routing and retries. Let’s get started.

WebSockets

WebSockets are persistent, duplex sockets that enable bi-directional communication between a client and server. Applications often use WebSockets to provide real-time functionality such as chat and gaming. Let’s start with some sample code for a simple WebSocket server:

const WebSocket = require("ws");
const name = require("./randomName");
const server = require("http").createServer();
const express = require("express");
const app = express();

console.log(`This server is named: ${name}`);

// serve files from the public directory
server.on("request", app.use(express.static("public")));

// tell the WebSocket server to use the same HTTP server
const wss = new WebSocket.Server({
  server,
});

wss.on("connection", function connection(ws, req) {
  const clientId = req.url.replace("/?id=", "");
  console.log(`Client connected with ID: ${clientId}`);

  let n = 0;
  const interval = setInterval(() => {
    ws.send(`${name}: you have been connected for ${n++} seconds`);
  }, 1000);

  ws.on("close", () => {
    clearInterval(interval);
  });
});

const port = process.env.PORT || 80;
server.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

We serve static files from the public directory and WebSocket connection requests on the same port. An incoming HTTP request from a browser loads public/index.html, and a WebSocket connection initiated from the client triggers the wss.on(“connection”, …) code. Upon receiving a WebSocket connection, I set up a recurring callback where I tell the client how long it has been connected. Now, let’s take a look at the client code:

buttonConnect.onclick = async () => {
  const serverAddress = inputServerAddress.value;
  messages.innerHTML = "";
  instructions.parentElement.removeChild(instructions);

  appendMessage(`Connecting to ${serverAddress}`);

  try {
    let retries = 0;
    while (retries < 50) {
      appendMessage(`establishing connection... retry #${retries}`);
      await runSession(serverAddress);
      await sleep(1500);
      retries++;
    }

    appendMessage("Reached maximum retries, giving up.");
  } catch (e) {
    appendMessage(e.message || e);
  }
};

async function runSession(address) {
  const ws = new WebSocket(address);

  ws.addEventListener("open", () => {
    appendMessage("connected to server");
  });

  ws.addEventListener("message", ({ data }) => {
    console.log(data);
    appendMessage(data);
  });

  return new Promise((resolve) => {
    ws.addEventListener("close", () => {
      appendMessage("Connection lost with server.");
      resolve();
    });
  });
}

I use the WebSocket DOM API to connect to the server. Once connected, I append any received messages to console and on screen via the custom appendMessage function. If the client loses connectivity, it will try to reconnect up to 50 times. Let’s run it:

"slimy-cardinal" is a randomly generated server name

Now, suppose I am running a very demanding real-time application, and need to scale the server capacity beyond a single host. How would I do this? I create two Ubuntu 18.04 instances. Once the instances are up, I SSH to each one, and run the following commands:

sudo apt-get update
sudo apt-get -y install nodejs npm
git clone https://github.com/robzhu/ws-time
cd ws-time && npm install
node server.js

During installation, select Yes when presented with the prompt:

Select "Yes" when prompted to install libssl for npm

Keep these SSH sessions open; you need them shortly. Next, create the load balancer in Amazon Lightsail and attach the instances:

screenshot of target instances for load balancers

Note: the Lightsail load balancer only works for port 80, which is part of the reason I use the same port for HTTP and WebSocket requests.

Copy the DNS name for the load balancer, open it in a new browser tab, and paste it into the WebSocket server address with the format:

ws://<DNSName>

screenshot of what the correct server address should look like

Make sure the server address does not accidentally start with “ws://http://…”

Next, locate the SSH session that accepted the connection. It looks like this:

accepted ssh section

The server logs the client ID when it receives a connection.

If you kill this process, the client disconnects and runs its retry logic, hopefully causing the load balancer to route the client to a healthy node. Next, hit connect from the client. After a few seconds, kill the process on the server, and you should see the client reconnect to a healthy instance:

what you should see when you connect to a healthy instance

The client retry was routed to a healthy instance on the first attempt. This is due to the round-robin algorithm that the Lightsail load balancer uses. In production, you should not expect the load balancer to detect an unhealthy node immediately. If the load balancer continues to route incoming connections to an unhealthy node, the client will need more retry attempts before reconnecting. If this is a large scale system, we will want to implement an exponential backoff on the retry intervals to avoid overwhelming other nodes in the cluster (aka the thundering herd problem).
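The client in this post retries on a fixed 1.5-second interval. The sketch below illustrates what an exponential backoff schedule with jitter could look like; it is written in Python for brevity and is meant only to show the pattern, not to replace the JavaScript client.

import random
import time

def backoff_delays(base=1.5, cap=30.0, max_attempts=50):
    # Yield retry delays that grow exponentially, capped, with full jitter
    for attempt in range(max_attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0, ceiling)

for attempt, delay in enumerate(backoff_delays()):
    print(f"retry #{attempt}: waiting {delay:.1f}s before reconnecting")
    time.sleep(delay)
    # ...attempt the WebSocket reconnect here and break on success

The jitter spreads reconnect attempts out over time, so a large fleet of clients does not hammer the surviving nodes at the same instant.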

Notice that the message “you have been connected for X seconds” reset X to 0 after the client reconnected. What if you want to make the failover transparent to the user? The problem is that the connection duration (X) is stored in the NodeJS process that we killed. That state is lost if the process dies or if the host goes down. The solution is unsurprising: move the state off the WebSocket server and into a distributed cache, such as redis.
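As a rough illustration of that idea, the following Python sketch keeps the connection start time in Redis, keyed by client ID. It assumes a reachable Redis instance and the redis client library; the server in this post is Node.js, so treat this only as a sketch of the shape of the solution.

import time

import redis  # assumes: pip install redis, and a reachable Redis instance

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def on_connect(client_id):
    # Record when the client first connected, but only once, so a reconnect
    # after failover keeps the original start time
    r.setnx(f"connected_at:{client_id}", time.time())

def seconds_connected(client_id):
    started = r.get(f"connected_at:{client_id}")
    return int(time.time() - float(started)) if started else 0

Because the start time lives outside any single server process, whichever healthy node receives the reconnect can report the same elapsed duration.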

Deep health checks

When you attached your instances to the load balancer, your health checks passed because the Lightsail load balancer issues an HTTP request for the default path (where you serve index.html). However, if you expect most of your server load to come from I/O on the WebSocket connections, the ability to serve your index.html file is not a good health check. You might implement a better health check like so:

app.get('/healthcheck', (req, res) => {
  const serverHasCapacity = getAverageNetworkUsage() < 0.6;
  if (serverHasCapacity) return res.status(200).send("ok");
  res.status(400).send("server is overloaded");
});

This causes the load balancer to consider a node “unhealthy” when the target node’s network usage reaches a threshold value. In response, the load balancer stops routing new incoming connections to that node. However, note that the load balancer will not terminate existing connections to an over-subscribed node.

When working with persistent connections or sticky sessions, always leave some capacity buffer. For example, do not mark the server as unhealthy only when it reaches 100% capacity. This is because existing connections or sticky clients will continue to generate traffic for that node, and some workloads may increase server usage beyond its threshold (e.g. a chat room that suddenly gets very busy).

 

Conclusion

I hope this post has given you a clear idea of how to use load balancers to improve scalability for stateful applications, and how to implement such a solution using Amazon Lightsail instances and load balancers. Please feel free to leave comments, and try this solution for yourself.

 

Adding voice to a CircuitPython project using Amazon Polly

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/adding-voice-to-a-circuitpython-project-using-amazon-polly/

An Adafruit PyPortal displaying a quote while synthesizing and playing speech using Amazon Polly.

As a natural means of communication, voice is a powerful way to humanize an experience. What if you could make anything talk? This guide walks through how to leverage the cloud to add voice to an off-the-shelf microcontroller. Use it to develop more advanced ideas, like a talking toaster that encourages healthy breakfast habits or a house plant that can express its needs.

This project uses an Adafruit PyPortal, an open-source IoT touch display programmed using CircuitPython, a lightweight version of Python that works on embedded hardware. You copy your code to the PyPortal like you would to a thumb drive and it runs. Random quotes from the PaperQuotes API are periodically displayed on the PyPortal LCD.

A microcontroller can’t do speech synthesis on its own so I use Amazon Polly, a natural text to speech synthesis service, to generate audio. Adding speech also extends accessibility to the visually impaired. This project includes an example for requesting arbitrary speech in addition to random quotes. Use this example to add a voice to any CircuitPython project.

An Adafruit PyPortal, an external speaker, and a microSD card.

I deploy the backend to the AWS Cloud using the AWS Serverless Application Repository. The code on the PyPortal makes a REST call to the backend to fetch a quote and synthesize speech audio for playback on the device.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend when requesting speech synthesis of a text string.

The serverless backend consists of an Amazon API Gateway endpoint that invokes an AWS Lambda function. If called with a JSON object containing text and voiceId attributes, it uses Amazon Polly to synthesize speech and uploads an MP3 file as a public object to Amazon S3. Upon completion, it returns the URL for downloading the audio file. It also processes the submitted text and adds return lines so that it can appear text-wrapped when displayed on the PyPortal. For a full list of voices, see the Amazon Polly documentation. The response contains the URL of the generated MP3 file along with the text-wrapped version of the submitted text.
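If you want to exercise the endpoint outside the device, you can post to it from any machine with Python 3. The sketch below assumes the requests library, the SpeechApiUrl and SpeechApiKey values from the stack Outputs (described later in this post), and an x-api-key header; the exact shape of the response depends on the deployed application, so treat it as illustrative.

import requests

SPEECH_API_URL = "<SpeechApiUrl>"   # from the CloudFormation Outputs tab
SPEECH_API_KEY = "<SpeechApiKey>"

payload = {"text": "Hello from Amazon Polly", "voiceId": "Joanna"}
resp = requests.post(SPEECH_API_URL, json=payload,
                     headers={"x-api-key": SPEECH_API_KEY}, timeout=30)
print(resp.json())   # expected to include the MP3 URL and the text-wrapped string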

To fetch quotes instead of a text field, call the endpoint with a comma-separated list of tags as shown in the following diagram. The Lambda function then calls the PaperQuotes API. It fetches up to 50 quotes per tag and selects a random one to synthesize as speech. As with arbitrary text, it returns a URL and a text-wrapped representation of the quote.

An architecture diagram of the serverless backend when requesting a random quote from the PaperQuotes API to synthesize as speech.

I use the AWS Serverless Application Model (AWS SAM) to create the backend template. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Generate a free PaperQuotes API key at paperquotes.com. The serverless backend requires this to fetch quotes.
  2. Navigate to the aws-serverless-pyportal-polly application in the AWS Serverless Application Repository.
  3. Under Application settings, enter the parameter, PaperQuotesAPIKey.
  4. Choose Deploy.
  5. Once complete, choose View CloudFormation Stack.
  6. Select the Outputs tab and make a note of the SpeechApiUrl. This is required for configuring the PyPortal.
  7. Click the link listed for SpeechApiKey in the Outputs tab.
  8. Click Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the SpeechApiUrl.

PyPortal setup

The following instructions walk through installing the latest version of the Adafruit CircuitPython libraries and firmware. They also show how to enable an external speaker module.

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.3.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Insert the microSD card in the slot located on the back of the device.
  4. Cut the jumper pad on the back of the device labeled A0. This enables you to use an external speaker instead of the built-in speaker.
  5. Plug the external speaker connector into the port labeled SPEAKER on the back of the device.
  6. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help with troubleshooting issues.
  7. Optionally if you have a 3D printer at home, you can print a case for your PyPortal. This can protect and showcase your project.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. You can flash new firmware on the PyPortal by copying a Python file and necessary assets to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the SpeechApiKey and SpeechApiUrl API Gateway endpoint. These can be found under Outputs in the AWS CloudFormation stack created by the AWS Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request.
    Optionally, if you installed the Mu Editor, you can click on “Serial” to follow along the device log.

The PyPortal takes a few moments to connect to the Wi-Fi network and make its first request. On success, you hear it greet you and describe itself. By default, it then displays and reads a new quote every five minutes.

Understanding the CircuitPython code

See the bottom of circuit-python/code.py from the GitHub project. When the PyPortal connects to Wi-Fi, the first thing it does is synthesize an arbitrary “hello world” text for display. It then begins periodically displaying and “speaking” quotes.

# Connect to WiFi
print("Connecting to WiFi...")
wifi.connect()
print("Connected!")

displayQuote("Ready!")

speakText('Hello world! I am an Adafruit PyPortal running Circuit Python speaking to you using AWS Serverless', 'Joanna')

while True:
    speakQuote('equality, humanity', 'Joanna')
    time.sleep(60*secrets['interval'])

Both the speakText and speakQuote functions call the synthesizeSpeech function. The difference is whether text or tags are passed to the API.

def speakText(text, voice):
    data = { "text": text, "voiceId": voice }
    synthesizeSpeech(data)

def speakQuote(tags, voice):
    data = { "tags": tags, "voiceId": voice }
    synthesizeSpeech(data)

The synthesizeSpeech function posts the data to the API Gateway endpoint, which invokes the Lambda function and returns the MP3 URL and the formatted text. The downloadfile function is called to fetch the MP3 file and store it on the SD card. displayQuote is called to display the quote on the LCD. Finally, playMP3 opens the file and plays the speech audio using the built-in or external speaker.

def synthesizeSpeech(data):
    response = postToAPI(secrets['endpoint'], data)
    downloadfile(response['url'], '/sd/cache.mp3')
    displayQuote(response['text'])
    playMP3("/sd/cache.mp3")

Modifying the Lambda function

The serverless application includes a Lambda function, SynthesizeSpeechFunction, which can be modified directly in the Lambda console. The AWS SAM template used to deploy the AWS Serverless Application Repository application adds policies for accessing the S3 bucket where audio is stored, grants access to Amazon Polly for synthesizing speech, adds the PaperQuotes API token as an environment variable, and sets API Gateway as an event source.

SynthesizeSpeechFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: lambda_functions/SynthesizeSpeech/
      Handler: app.lambda_handler
      Runtime: python3.8
      Policies:
        - S3FullAccessPolicy:
            BucketName: !Sub "${AWS::StackName}-audio"
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - polly:*
              Resource: '*'
      Environment:
        Variables:
          BUCKET_NAME: !Sub "${AWS::StackName}-audio"
          PAPER_QUOTES_TOKEN: !Ref PaperQuotesAPIKey
      Events:
        Speech:
          Type: Api
          Properties:
            RestApiId: !Ref SpeechApi
            Path: /speech
            Method: post

To edit the Lambda function, navigate back to the CloudFormation stack and click on the SynthesizeSpeechFunction under the Resources tab.

From here, you can edit the Lambda function code directly. Clicking Save deploys the new code.

The getQuotes function is called to fetch quotes from the PaperQuotes API. You can change this to call from a different source, such as a custom selection of quotes. Try modifying it to fetch social media posts or study questions.
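
As a minimal sketch of that kind of change, the following static variant replaces the PaperQuotes call with a hard-coded selection. The getQuotes name comes from the post; the quote list and the ignored tags parameter are my own illustration.

import random

# Hypothetical replacement for getQuotes: instead of calling the PaperQuotes API,
# pick from a fixed list. Tags are accepted but ignored in this variant.
CUSTOM_QUOTES = [
    "Stay hungry, stay foolish.",
    "Simplicity is the ultimate sophistication.",
    "Make it work, make it right, make it fast.",
]

def getQuotes(tags):
    return [random.choice(CUSTOM_QUOTES)]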

Conclusion

I show how to add natural-sounding text-to-speech on a microcontroller using a serverless backend. This is accomplished by deploying an application through the AWS Serverless Application Repository. The deployed API uses API Gateway to securely invoke a Lambda function that fetches quotes from the PaperQuotes API and generates speech using Amazon Polly. The speech audio is uploaded to S3.

I then show how to program a microcontroller, the Adafruit PyPortal, using CircuitPython. The code periodically calls the serverless API to fetch a quote and to download speech audio for playback. The sample code also demonstrates synthesizing arbitrary text to speech, meaning it can be used for any project you can conceive. Check out my previous guide on using the PyPortal to create a Martian weather display for inspiration.

Low-latency computing with AWS Local Zones – Part 1

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/low-latency-computing-with-aws-local-zones-part-1/

This post was contributed by: Pranav Chachra, Rob Chen, Alan Goodman

AWS launched a Local Zone in Los Angeles (LA), California, in late 2019 at re:Invent. Since the launch, we have seen a lot of interest from you, and have worked to bring additional features and services based on your feedback.

In this blog series, we share best practices and recommendations for hosting your applications in Local Zones based on what has worked for customers. Part one focuses on sharing more about Local Zones and how customers are already using the LA Local Zone. In this blog, we also address key questions asked by customers since the launch of the LA Local Zone. In the upcoming parts of this blog series, we do a deep dive into specific use cases for which our customers are using the LA Local Zone. We also list best practices to host your applications in Local Zones and plan to provide related architectural considerations.

AWS Local Zones introduction

local zone base picture

Today, there are a significant number of applications that can run in the AWS Cloud. Most applications work well in our Regions. However, for workloads that require low-latency or local data processing, you need us to bring AWS infrastructure closer to you. Without local AWS presence, you have to procure, operate, and maintain IT infrastructure in your own data center or colocation facility for such workloads. And, you end up building and running such application components with a different set of APIs and tools than the other parts of your applications running in the AWS Cloud. This results in a lot of extra effort and expenses.

And that’s why, based on your feedback, we launched AWS Local Zones, a new type of AWS infrastructure deployment that places compute, storage, and other select services closer to large cities. We designed Local Zones as a powerful construct that extends our existing Regions into new locations. This gives you the ability to run applications on AWS that require single-digit millisecond latencies to your end-users or on-premises installations in the local area. Just like Regions, Local Zones are fully managed and supported by AWS, giving you the elasticity, scalability, and security benefits of running on AWS. The first Local Zone is generally available in LA, and is an extension of the US West (Oregon) Region (the parent Region).

Use cases

Like Regions, customers leverage the LA Local Zone for many different use cases. The top customer use cases include:

  1. Media and Entertainment (M&E) content creation: M&E customers are migrating expensive on-premises workstations to the LA Local Zone. They do this to accelerate content creation by getting rid of capacity constraints while improving security and operational efficiency. For artist workstations, latency is the key to a jitter-free experience on a remote instance. These customers typically require less than five milliseconds of latency from their offices to virtual instances. With Direct Connect, most of these customers are able to achieve as low as 1-2 millisecond latency from their animation hubs in LA to the Local Zone, and run latency-sensitive workloads such as live production and video editing.
  2. Enterprise migration with hybrid architecture: Enterprises have workloads running in their existing on-premises data centers in the LA metro area. These customers use the Local Zone to migrate complex legacy on-premises applications to AWS without expensive revamp of their architecture. Customers have told us that it can be daunting to migrate a portfolio of interdependent applications to the cloud. Now with a Direct Connect to the LA Local Zone, customers can establish a hybrid environment that provides ultra-low latency communication between applications running in the LA Local Zone and on-premises installations. In turn, this enables customers to migrate applications incrementally, simplifying migrations drastically and enabling on-going hybrid deployments in the LA area.
  3. Real-time multiplayer gaming: This category includes gaming companies that deploy game servers for multiplayer sessions all over the world to be closer to gamers. A latency of 20 milliseconds or less is considered ideal for a good gameplay experience. Until now, these customers were using on-premises installations in the LA area to supplement AWS presence. However, now, customers are deploying latency-sensitive game servers in the LA Local Zone to run real-time and interactive multiplayer game sessions, enabling them to provide end users in the Southern California area with a great experience.

In the next parts of the blog series, we further dive into these use cases, cover respective best practices, and review architectural guidance for you.

Services and features

AWS launched the LA Local Zone with support for seven Nitro-based Amazon EC2 instance types (T3, C5, M5, R5, R5d, I3en, and G4), two EBS volume types (io1 and gp2), Amazon FSx for Windows File Server, Amazon FSx for Lustre, Application Load Balancer, Amazon VPC, and AWS Direct Connect. Since launch, AWS has added support for Amazon RDS, Amazon EMR, AWS Shield, and Amazon EC2 Dedicated Hosts, and is adding more services based on your feedback.

Other parts of AWS, like AWS CloudFormation templates, CloudWatch, IAM resources, and Organizations, will continue to work as expected, providing you a consistent experience. You can also leverage the full suite of services like Amazon S3 in the parent Region, US West (Oregon), over AWS’s private network backbone with ~20–30 milliseconds latency.

Getting started and using the Local Zone

Now that we reviewed some common use cases and services of Local Zones, we want to get you started using the Local Zone. First, enable the LA Local Zone from the new “Zone settings” section of the EC2 console, as shown in the following image:

console Zone Settings

Once enabled, a Local Zone looks and behaves similarly to an Availability Zone (AZ). You can access Local Zones through the parent Region’s console and API endpoints. The following image shows that the LA Local Zone is visible as us-west-2-lax-1a along with other AZs in the EC2 console:

service health of zone

Once the LA Local Zone is enabled, you can extend your existing VPC from the parent Region to a Local Zone by creating a new VPC subnet assigned to the LA Local Zone:

creating subnet
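
If you prefer to script this step rather than use the console, the equivalent call with boto3 is a single create_subnet request that targets the Local Zone by name. The VPC ID and CIDR block below are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Extend an existing VPC into the LA Local Zone by creating a subnet there.
response = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC ID
    CidrBlock="10.0.128.0/24",            # placeholder CIDR within the VPC range
    AvailabilityZone="us-west-2-lax-1a",  # the LA Local Zone
)
print(response["Subnet"]["SubnetId"])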

Once a VPC subnet is established for the LA Local Zone, simply select the subnet while creating local resources. For example, you can launch an EC2 instance in the LA Local Zone by selecting the local subnet, as shown in the following image:

configuring instance details

Local resources are then ready within seconds. You can manage these resources in the LA Local Zone just like resources in AZs:

shows the local zone created

Just like Regions, you can set up an Internet Gateway to access your local resources in the LA Local Zone over the internet. Or you can use AWS Direct Connect to route your traffic over a private network connection from any Direct Connect location. To get the best latency performance, you should use one of the Direct Connect locations available in LA:

  • T5 at El Segundo, Los Angeles, CA (recommended for lowest latency to the LA Local Zone)
  • CoreSite LA1, Los Angeles, CA
  • Equinix LA3, El Segundo, CA

Pricing

From a pricing perspective, instances and other local AWS resources running in a Local Zone have their own prices, which might differ from those in the parent Region. Billing reports include a prefix, “LAX1,” or location name, “US West (Los Angeles),” that is specific to a group of Local Zones in LA. EC2 instances in Local Zones are available in On-Demand and Spot form, and you can also purchase Savings Plans. For pricing information, you can visit the pricing section on the respective services and filter pricing information by choosing the Local Zone location as “US West (Los Angeles)” in the dropdown.

Data transfer charges in AWS Local Zones are the same as in the AZs in the parent Region today. For example, data transferred between Amazon EC2 instances in the LA Local Zone and Amazon S3 in the parent Region, US West (Oregon), is free. Similarly, data transferred in and out between Amazon EC2 in the Local Zone and Amazon EC2 in the parent Region is charged at $0.01/GB in each direction. You can learn more about data transfer prices “in” and “out” of Amazon EC2 here.

Thinking ahead

Later this year, AWS plans to open a second Local Zone in LA (us-west-2-lax-1b). The two Local Zones in LA will be interconnected with high-bandwidth, low-latency AWS networking allowing you to architect your low-latency applications for high availability and fault tolerance. Based on your feedback, we are also working on adding Local Zones in other locations along with the availability of the additional services, including Amazon ECS, Amazon Elastic Kubernetes Service, Amazon ElastiCache, Amazon ES, and Amazon Managed Streaming for Apache Kafka.

Conclusion

Now that we have delivered the AWS Local Zones you requested, we are looking forward to seeing what you can do with them. AWS would love to get your advice on locations, additional local services and features, or other interesting use cases, so feel free to leave us your comments!

Introducing Instance Refresh for EC2 Auto Scaling

Post Syndicated from Ben Peven original https://aws.amazon.com/blogs/compute/introducing-instance-refresh-for-ec2-auto-scaling/

This post is contributed by: Ran Sheinberg, Principal EC2 Spot SA, and Isaac Vallhonrat, Sr. EC2 Spot Specialist SA

Today, we are launching Instance Refresh. This is a new feature in EC2 Auto Scaling that enables automatic deployments of instances in Auto Scaling Groups (ASGs), in order to release new application versions or make infrastructure updates.

Amazon EC2 Auto Scaling is used for a wide variety of workload types and applications. EC2 Auto Scaling helps you maintain application availability through a rich feature set. This feature set includes integration into Elastic Load Balancing, automatically replacing unhealthy instances, balancing instances across Availability Zones, provisioning instances across multiple pricing options and instance types, dynamically adding and removing instances, and more.

Many customers use an immutable infrastructure approach. This approach encourages replacing EC2 instances to update the application or configuration, instead of deploying into EC2 instances that are already running. This can be done by baking code and software into golden Amazon Machine Images (AMIs), and rolling out new EC2 instances that use the new AMI version. Another common pattern for rolling out application updates is changing the package version that the instance pulls when it boots (via updates to instance user data), or keeping that pointer static and pushing a new version to the code repository or another type of artifact (container, package on Amazon S3) to be fetched by an instance when it boots and gets provisioned.

Until today, EC2 Auto Scaling customers used different methods for replacing EC2 instances inside EC2 Auto Scaling groups when a deployment or operating system update was needed. For example, UpdatePolicy within AWS CloudFormation, create_before_destroy lifecycle in Hashicorp Terraform, using AWS CodeDeploy, or even custom scripts that call the EC2 Auto Scaling API.

Customers told us that they want native deployment functionality built into EC2 Auto Scaling to take away the heavy lifting of custom solutions, or deployments that are initiated from outside of Auto Scaling groups.

Introducing Instance Refresh in EC2 Auto Scaling

You can trigger an Instance Refresh using the EC2 Auto Scaling groups Management Console, or use the new StartInstanceRefresh API in the AWS CLI or any AWS SDK. All you need to do is specify the percentage of healthy instances to keep in the group while the ASG terminates and launches instances. Also specify the warm-up time, which is the time period that the ASG waits between the groups of instances it refreshes via Instance Refresh. If your ASG is using health checks, the ASG waits for the instances in the group to be healthy before it continues to the next group of instances.

Instance Refresh in action

To get started with Instance Refresh in the AWS Management Console, click on an existing ASG in the EC2 Auto Scaling Management Console. Then click the Instance refresh tab.

When I click the Start instance refresh button, I am presented with the following options:

start instance refresh

With the default configuration, the ASG works to keep 90% of the instances running and does not proceed to the next group of instances if that percentage is not kept. After each group, the ASG waits for the newly launched instances to transition into the healthy state and for the 300-second warm-up time to pass before proceeding to the next group of instances.

I can also initiate the same action from the AWS CLI by using the following code:

aws autoscaling start-instance-refresh --auto-scaling-group-name ASG-Instance-Refresh --preferences MinHealthyPercentage=90,InstanceWarmup=300

After initializing the instance refresh process, I can see ongoing instance refreshes in the console:

initialize instance refresh

The following image demonstrates how an active Instance refresh looks in the EC2 Instances console. Moreover, ASG strives to keep the capacity balanced between Availability Zones by terminating and launching instances in different Availability Zones in each group.

active instance refresh

Automate your workflow with Instance Refresh

You can now use this new functionality to create automations that work for your use case.

To get started quickly, we created an example solution based on AWS Lambda. Visit the solution page on GitHub and see the deployment instructions.

Here’s an overview of what the solution contains and how it works:

  • An EC2 Auto Scaling group with two instances running
  • An EC2 Image Builder pipeline, set up to build and test an AMI
  • An SNS topic that would get notified when the image build completes
  • A Lambda function that is subscribed to the SNS topic, which gets triggered when the image build completes
  • The Lambda function gets the new AMI ID from the SNS notification, creates a new Launch Template version, and then triggers an Instance Refresh in the ASG, which starts the rolling update of instances.
  • Because you can configure the ASG with LaunchTemplateVersion = $Latest, every new instance that is launched by the Instance Refresh process uses the new AMI from the latest version of the Launch Template.

See the automation flow in the following diagram.

instance refresh automation flow
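
To make the flow concrete, here is a minimal sketch of what such a Lambda handler could look like. The SNS message structure from EC2 Image Builder, the Launch Template name, and the ASG name are assumptions for illustration; the actual solution on GitHub is more complete.

import json
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

LAUNCH_TEMPLATE_NAME = "my-app-template"  # assumed name
ASG_NAME = "my-app-asg"                   # assumed name

def lambda_handler(event, context):
    # The Image Builder notification arrives wrapped in an SNS record; the exact
    # location of the AMI ID inside the message is an assumption in this sketch.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    ami_id = message["outputResources"]["amis"][0]["image"]

    # Create a new Launch Template version that points at the new AMI.
    ec2.create_launch_template_version(
        LaunchTemplateName=LAUNCH_TEMPLATE_NAME,
        SourceVersion="$Latest",
        LaunchTemplateData={"ImageId": ami_id},
    )

    # Start a rolling replacement of the instances in the Auto Scaling group.
    autoscaling.start_instance_refresh(
        AutoScalingGroupName=ASG_NAME,
        Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},
    )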

Conclusion

We hope that the new Instance Refresh functionality in your ASGs allows for a more streamlined approach to launching and updating your application deployments running on EC2. You can now create automations that fit your use case. This allows you to more easily refresh the EC2 instances running in your Auto Scaling groups when deploying a new version of your application or when you must replace the AMI being used. Visit the user guide to learn more and get started.

New – A Shared File System for Your Lambda Functions

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-a-shared-file-system-for-your-lambda-functions/

I am very happy to announce that AWS Lambda functions can now mount an Amazon Elastic File System (EFS), a scalable and elastic NFS file system storing data within and across multiple Availability Zones (AZs) for high availability and durability. In this way, you can use a familiar file system interface to store and share data across all concurrent execution environments of one, or more, Lambda functions. EFS supports full file system access semantics, such as strong consistency and file locking.

To connect an EFS file system with a Lambda function, you use an EFS access point, an application-specific entry point into an EFS file system that sets the operating system user and group to use when accessing the file system, enforces file system permissions, and can limit access to a specific path in the file system. This helps keep file system configuration decoupled from the application code.

You can access the same EFS file system from multiple functions, using the same or different access points. For example, using different EFS access points, each Lambda function can access different paths in a file system, or use different file system permissions.

You can share the same EFS file system with Amazon Elastic Compute Cloud (EC2) instances, containerized applications using Amazon ECS and AWS Fargate, and on-premises servers. Following this approach, you can use different computing architectures (functions, containers, virtual servers) to process the same files. For example, a Lambda function reacting to an event can update a configuration file that is read by an application running on containers. Or you can use a Lambda function to process files uploaded by a web application running on EC2.

In this way, some use cases are much easier to implement with Lambda functions. For example:

  • Processing or loading data larger than the space available in /tmp (512 MB).
  • Loading the most updated version of files that change frequently.
  • Using data science packages that require storage space to load models and other dependencies.
  • Saving function state across invocations (using unique file names, or file system locks).
  • Building applications requiring access to large amounts of reference data.
  • Migrating legacy applications to serverless architectures.
  • Interacting with data intensive workloads designed for file system access.
  • Partially updating files (using file system locks for concurrent access).
  • Moving a directory and all its content within a file system with an atomic operation.

Creating an EFS File System
To mount an EFS file system, your Lambda functions must be connected to an Amazon Virtual Private Cloud that can reach the EFS mount targets. For simplicity, I am using here the default VPC that is automatically created in each AWS Region.

Note that, when connecting Lambda functions to a VPC, networking works differently. If your Lambda functions are using Amazon Simple Storage Service (S3) or Amazon DynamoDB, you should create a gateway VPC endpoint for those services. If your Lambda functions need to access the public internet, for example to call an external API, you need to configure a NAT Gateway. I usually don’t change the configuration of my default VPCs. If I have specific requirements, I create a new VPC with private and public subnets using the AWS Cloud Development Kit, or use one of these AWS CloudFormation sample templates. In this way, I can manage networking as code.
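
As an illustration of the S3 gateway endpoint mentioned above, a single boto3 call is enough; the VPC ID, route table ID, and Region are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint so functions in the VPC can reach S3 without a NAT Gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
)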

In the EFS console, I select Create file system and make sure that the default VPC and its subnets are selected. For all subnets, I use the default security group that gives network access to other resources in the VPC using the same security group.

In the next step, I give the file system a Name tag and leave all other options to their default values.

Then, I select Add access point. I use 1001 for the user and group IDs and limit access to the /message path. In the Owner section, used to create the folder automatically when first connecting to the access point, I use the same user and group IDs as before, and 750 for permissions. With these permissions, the owner can read, write, and execute files. Users in the same group can only read. Other users have no access.

I go on, and complete the creation of the file system.
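
The same access point configuration can also be scripted. A sketch with boto3 follows; the file system ID is a placeholder, while the user and group IDs, path, and permissions mirror the console values above.

import boto3

efs = boto3.client("efs")

# Access point matching the configuration described above.
efs.create_access_point(
    FileSystemId="fs-0123456789abcdef0",  # placeholder
    PosixUser={"Uid": 1001, "Gid": 1001},
    RootDirectory={
        "Path": "/message",
        "CreationInfo": {
            "OwnerUid": 1001,
            "OwnerGid": 1001,
            "Permissions": "750",
        },
    },
)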

Using EFS with Lambda Functions
To start with a simple use case, let’s build a Lambda function implementing a MessageWall API to add, read, or delete text messages. Messages are stored in a file on EFS so that all concurrent execution environments of that Lambda function see the same content.

In the Lambda console, I create a new MessageWall function and select the Python 3.8 runtime. In the Permissions section, I leave the default. This will create a new AWS Identity and Access Management (IAM) role with basic permissions.

When the function is created, in the Permissions tab I click on the IAM role name to open the role in the IAM console. Here, I select Attach policies to add the AWSLambdaVPCAccessExecutionRole and AmazonElasticFileSystemClientReadWriteAccess AWS managed policies. In a production environment, you can restrict access to a specific VPC and EFS access point.

Back in the Lambda console, I edit the VPC configuration to connect the MessageWall function to all subnets in the default VPC, using the same default security group I used for the EFS mount points.

Now, I select Add file system in the new File system section of the function configuration. Here, I choose the EFS file system and access point I created before. For the local mount point, I use /mnt/msg and Save. This is the path where the access point will be mounted, and corresponds to the /message folder in my EFS file system.

In the Function code editor of the Lambda console, I paste the following code and Save.

import os
import fcntl

MSG_FILE_PATH = '/mnt/msg/content'


def get_messages():
    try:
        with open(MSG_FILE_PATH, 'r') as msg_file:
            fcntl.flock(msg_file, fcntl.LOCK_SH)
            messages = msg_file.read()
            fcntl.flock(msg_file, fcntl.LOCK_UN)
    except:
        messages = 'No message yet.'
    return messages


def add_message(new_message):
    with open(MSG_FILE_PATH, 'a') as msg_file:
        fcntl.flock(msg_file, fcntl.LOCK_EX)
        msg_file.write(new_message + "\n")
        fcntl.flock(msg_file, fcntl.LOCK_UN)


def delete_messages():
    try:
        os.remove(MSG_FILE_PATH)
    except:
        pass


def lambda_handler(event, context):
    method = event['requestContext']['http']['method']
    if method == 'GET':
        messages = get_messages()
    elif method == 'POST':
        new_message = event['body']
        add_message(new_message)
        messages = get_messages()
    elif method == 'DELETE':
        delete_messages()
        messages = 'Messages deleted.'
    else:
        messages = 'Method unsupported.'
    return messages

I select Add trigger and in the configuration I select the Amazon API Gateway. I create a new HTTP API. For simplicity, I leave my API endpoint open.

With the API Gateway trigger selected, I copy the endpoint of the new API I just created.

I can now use curl to test the API:

$ curl https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MessageWall
No message yet.
$ curl -X POST -H "Content-Type: text/plain" -d 'Hello from EFS!' https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MessageWall
Hello from EFS!

$ curl -X POST -H "Content-Type: text/plain" -d 'Hello again :)' https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MessageWall
Hello from EFS!
Hello again :)

$ curl https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MessageWall
Hello from EFS!
Hello again :)

$ curl -X DELETE https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MessageWall
Messages deleted.

$ curl https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MessageWall
No message yet.

It would be relatively easy to add unique file names (or specific subdirectories) for different users and extend this simple example into a more complete messaging application. As a developer, I appreciate the simplicity of using a familiar file system interface in my code. However, depending on your requirements, EFS throughput configuration must be taken into account. See the section Understanding EFS performance later in the post for more information.

Now, let’s use the new EFS file system support in AWS Lambda to build something more interesting. For example, let’s use the additional space available with EFS to build a machine learning inference API processing images.

Building a Serverless Machine Learning Inference API
To create a Lambda function implementing machine learning inference, I need to be able, in my code, to import the necessary libraries and load the machine learning model. Often, when doing so, the overall size of those dependencies goes beyond the current AWS Lambda limit on deployment package size. One way of solving this is to accurately minimize the libraries to ship with the function code, and then download the model from an S3 bucket straight to memory (up to 3 GB, including the memory required for processing the model) or to /tmp (up to 512 MB). This custom minimization and download of the model has never been easy to implement. Now, I can use an EFS file system.

The Lambda function I am building this time needs access to the public internet to download a pre-trained model and the images to run inference on. So I create a new VPC with public and private subnets, and configure a NAT Gateway and the route table used by the private subnets to give access to the public internet. Using the AWS Cloud Development Kit, it’s just a few lines of code.

I create a new EFS file system and an access point in the new VPC using similar configurations as before. This time, I use /ml for the access point path.

Then, I create a new MLInference Lambda function with the same set up as before for permissions and connect the function to the private subnets of the new VPC. Machine learning inference is quite a heavy workload, so I select 3 GB for memory and 5 minutes for timeout. In the File system configuration, I add the new access point and mount it under /mnt/inference.

The machine learning framework I am using for this function is PyTorch, and I need to put the libraries required to run inference in the EFS file system. I launch an Amazon Linux EC2 instance in a public subnet of the new VPC. In the instance details, I select one of the availability zones where I have an EFS mount point, and then Add file system to automatically mount the same EFS file system I am using for the function. For the security groups of the EC2 instance, I select the default security group (to be able to mount the EFS file system) and one that gives inbound access to SSH (to be able to connect to the instance).

I connect to the instance using SSH and create a requirements.txt file containing the dependencies I need:

torch
torchvision
numpy

The EFS file system is automatically mounted by EC2 under /mnt/efs/fs1. There, I create the /ml directory and change the owner of the path to the user and group I am using now that I am connected (ec2-user).

$ sudo mkdir /mnt/efs/fs1/ml
$ sudo chown ec2-user:ec2-user /mnt/efs/fs1/ml

I install Python 3 and use pip to install the dependencies in the /mnt/efs/fs1/ml/lib path:

$ sudo yum install python3
$ pip3 install -t /mnt/efs/fs1/ml/lib -r requirements.txt

Finally, I give ownership of the whole /ml path to the user and group I used for the EFS access point:

$ sudo chown -R 1001:1001 /mnt/efs/fs1/ml

Overall, the dependencies in my EFS file system are using about 1.5 GB of storage.

I go back to the MLInference Lambda function configuration. Depending on the runtime you use, you need to find a way to tell the runtime where to look for dependencies if they are not included with the deployment package or in a layer. In the case of Python, I set the PYTHONPATH environment variable to /mnt/inference/lib.

I am going to use PyTorch Hub to download this pre-trained machine learning model to recognize the kind of bird in a picture. The model I am using for this example is relatively small, about 200 MB. To cache the model on the EFS file system, I set the TORCH_HOME environment variable to /mnt/inference/model.
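
Both variables can be set in the console or scripted. For example, with boto3 (note that this call replaces any environment variables already set on the function):

import boto3

lambda_client = boto3.client("lambda")

# Point the runtime at the dependencies and the model cache on the EFS mount.
lambda_client.update_function_configuration(
    FunctionName="MLInference",
    Environment={
        "Variables": {
            "PYTHONPATH": "/mnt/inference/lib",
            "TORCH_HOME": "/mnt/inference/model",
        }
    },
)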

All dependencies are now in the file system mounted by the function, and I can type my code straight in the Function code editor. I paste the following code to have a machine learning inference API:

import urllib
import json
import os

import torch
from PIL import Image
from torchvision import transforms

transform_test = transforms.Compose([
    transforms.Resize((600, 600), Image.BILINEAR),
    transforms.CenterCrop((448, 448)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

model = torch.hub.load('nicolalandro/ntsnet-cub200', 'ntsnet', pretrained=True,
                       **{'topN': 6, 'device': 'cpu', 'num_classes': 200})
model.eval()


def lambda_handler(event, context):
    url = event['queryStringParameters']['url']

    img = Image.open(urllib.request.urlopen(url))
    scaled_img = transform_test(img)
    torch_images = scaled_img.unsqueeze(0)

    with torch.no_grad():
        top_n_coordinates, concat_out, raw_logits, concat_logits, part_logits, top_n_index, top_n_prob = model(torch_images)

        _, predict = torch.max(concat_logits, 1)
        pred_id = predict.item()
        bird_class = model.bird_classes[pred_id]
        print('bird_class:', bird_class)

    return json.dumps({
        "bird_class": bird_class,
    })

I add the API Gateway as trigger, similarly to what I did before for the MessageWall function. Now, I can use the serverless API I just created to analyze pictures of birds. I am not really an expert in the field, so I looked for a couple of interesting images on Wikipedia:

I call the API to get a prediction for these two pictures:

$ curl https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MLInference?url=https://path/to/image/atlantic-puffin.jpg

{"bird_class": "106.Horned_Puffin"}

$ curl https://1a2b3c4d5e.execute-api.us-east-1.amazonaws.com/default/MLInference?url=https://path/to/image/western-grebe.jpg

{"bird_class": "053.Western_Grebe"}

It works! Looking at Amazon CloudWatch Logs for the Lambda function, I see that the first invocation, when the function loads and prepares the pre-trained model for inference on CPUs, takes about 30 seconds. To avoid a slow response, or a timeout from the API Gateway, I use Provisioned Concurrency to keep the function ready. The next invocations take about 1.8 seconds.

Understanding EFS Performance
When using EFS with your Lambda function, it is very important to understand how EFS performance works. For throughput, each file system can be configured to use bursting or provisioned throughput mode.

When using bursting mode, all EFS file systems, regardless of size, can burst at least to 100 MiB/s of throughput. Those over 1 TiB in the standard storage class can burst to 100 MiB/s per TiB of data stored in the file system. EFS uses a credit system to determine when file systems can burst. Each file system earns credits over time at a baseline rate that is determined by the size of the file system that is stored in the standard storage class. A file system uses credits whenever it reads or writes data. The baseline rate is 50 KiB/s per GiB of storage.

You can monitor the use of credits in CloudWatch, each EFS file system has a BurstCreditBalance metric. If you see that you are consuming all credits, and the BurstCreditBalance metric is going to zero, you should enable provisioned throughput mode for the file system, from 1 to 1024 MiB/s. There is an additional cost when using provisioned throughput, based on how much throughput you are adding on top of the baseline rate.

To avoid running out of credits, you should think of the throughput as the average you need during the day. For example, if you have a 10GB file system, you have 500 KiB/s of baseline rate, and every day you can read/write 500 KiB/s * 3600 seconds * 24 hours = 43.2 GiB.

If the libraries and everything your function needs to load during initialization are about 2 GiB, and you only access the EFS file system during function initialization, like in the MLInference Lambda function above, that means you can initialize your function (for example because of updates or scaling up activities) about 20 times per day. That’s not a lot, and you would probably need to configure provisioned throughput for the EFS file system.

If you have 10 MiB/s of provisioned throughput, then every day you have 10 MiB/s * 3600 seconds * 24 hours = 864 GiB to read or write. If you only use the EFS file system at function initialization to read about 2 GB of dependencies, it means that you can have 400 initializations per day. That may be enough for your use case.
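
If you want to plug in your own numbers, the arithmetic above is easy to script. This sketch mirrors the simplified rounding used in the worked examples (treating 1 GiB as roughly 1000 MiB):

# Rough EFS throughput budget, following the worked examples above.
def daily_budget_gib(throughput_mib_s):
    # Simplified conversion, as in the examples (1 GiB ~ 1000 MiB).
    return throughput_mib_s * 3600 * 24 / 1000

def initializations_per_day(throughput_mib_s, init_read_gib):
    return daily_budget_gib(throughput_mib_s) / init_read_gib

print(daily_budget_gib(10))             # ~864 GiB/day with 10 MiB/s provisioned
print(initializations_per_day(10, 2))   # ~432, on the order of 400 per day
print(initializations_per_day(0.5, 2))  # baseline of a 10 GB file system: ~21 per day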

In the Lambda function configuration, you can also use the reserve concurrency control to limit the maximum number of execution environments used by a function.

If, by mistake, the BurstCreditBalance goes down to zero, and the file system is relatively small (for example, a few GiBs), there is the possibility that your function gets stuck and can’t complete before reaching the timeout. In that case, you should enable (or increase) provisioned throughput for the EFS file system, or throttle your function by setting the reserved concurrency to zero to avoid all invocations until the EFS file system has enough credits.
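
Throttling a function this way is a one-line call. For example, with boto3 and the function name used in this walkthrough:

import boto3

lambda_client = boto3.client("lambda")

# Stop all new invocations until the EFS burst credit balance recovers.
lambda_client.put_function_concurrency(
    FunctionName="MLInference",
    ReservedConcurrentExecutions=0,
)

# Later, remove the limit again:
# lambda_client.delete_function_concurrency(FunctionName="MLInference")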

Understanding Security Controls
When using EFS file systems with AWS Lambda, you have multiple levels of security controls. I’m doing a quick recap here because they should all be considered during the design and implementation of your serverless applications. You can find more info on using IAM authorization and access points with EFS in this post.

To connect a Lambda function to an EFS file system, you need:

  • Network visibility in terms of VPC routing/peering and security group.
  • IAM permissions for the Lambda function to access the VPC and mount (read only or read/write) the EFS file system.
  • You can specify in the IAM policy conditions which EFS access point the Lambda function can use.
  • The EFS access point can limit access to a specific path in the file system.
  • File system security (user ID, group ID, permissions) can limit read, write, or executable access for each file or directory mounted by a Lambda function.

The Lambda function execution environment and the EFS mount point uses industry standard Transport Layer Security (TLS) 1.2 to encrypt data in transit. You can provision Amazon EFS to encrypt data at rest. Data encrypted at rest is transparently encrypted while being written, and transparently decrypted while being read, so you don’t have to modify your applications. Encryption keys are managed by the AWS Key Management Service (KMS), eliminating the need to build and maintain a secure key management infrastructure.

Available Now
This new feature is offered in all regions where AWS Lambda and Amazon EFS are available, with the exception of the regions in China, where we are working to make this integration available as soon as possible. For more information on availability, please see the AWS Region table. To learn more, please see the documentation.

EFS for Lambda can be configured using the console, the AWS Command Line Interface (CLI), the AWS SDKs, and the Serverless Application Model. This feature allows you to build data intensive applications that need to process large files. For example, you can now unzip a 1.5 GB file in a few lines of code, or process a 10 GB JSON document. You can also load libraries or packages that are larger than the 250 MB package deployment size limit of AWS Lambda, enabling new machine learning, data modelling, financial analysis, and ETL jobs scenarios.

Amazon EFS for Lambda is supported at launch in AWS Partner Network solutions, including Epsagon, Lumigo, Datadog, HashiCorp Terraform, and Pulumi.

There is no additional charge for using EFS from Lambda functions. You pay the standard price for AWS Lambda and Amazon EFS. Lambda execution environments always connect to the right mount target in an AZ and not across AZs. You can connect to EFS in the same AZ via a cross-account VPC, but there can be data transfer costs for that. We do not support cross-Region or cross-AZ connectivity between EFS and Lambda.

Danilo

Introducing the serverless LAMP stack – part 2 relational databases

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-serverless-lamp-stack-part-2-relational-databases/

In this post, you learn how to use an Amazon Aurora MySQL relational database in your serverless applications. I show how to pool and share connections to the database with Amazon RDS Proxy, and how to choose configurations. The code examples in this post are written in PHP and can be found in this GitHub repository. The concepts can be applied to any AWS Lambda supported runtime.

The serverless LAMP stack

This serverless LAMP stack architecture is first discussed in this post. This architecture uses a PHP Lambda function (or multiple functions) to read and write to an Amazon Aurora MySQL database.

Amazon Aurora provides high performance and availability for MySQL and PostgreSQL databases. The underlying storage scales automatically to meet demand, up to 64 tebibytes (TiB). An Amazon Aurora DB instance is created inside a virtual private cloud (VPC) to prevent public access. To connect to the Aurora database instance from a Lambda function, that Lambda function must be configured to access the same VPC.

Database memory exhaustion can occur when connecting directly to an RDS database. This is caused by a surge in database connections or by a large number of connections opening and closing at a high rate. This can lead to slower queries and limited application scalability. Amazon RDS Proxy is implemented to solve this problem. RDS Proxy is a fully managed database proxy feature for Amazon RDS. It establishes a database connection pool that sits between your application and your relational database and reuses connections in this pool. This protects the database against oversubscription, without the memory and CPU overhead of opening a new database connection each time. Credentials for the database connection are securely stored in AWS Secrets Manager. They are accessed via an AWS Identity and Access Management (IAM) role. This enforces strong authentication requirements for database applications without a costly migration effort for the DB instances themselves.

The following steps show how to connect to an Amazon Aurora MySQL database running inside a VPC. The connection is made from a Lambda function running PHP. The Lambda function connects to the database via RDS Proxy. The database credentials that RDS Proxy uses are held in Secrets Manager and accessed via IAM authentication.

RDS Proxy with IAM authentication

Getting started

RDS Proxy is currently in preview and not recommended for production workloads. For a full list of available Regions, refer to the RDS Proxy pricing page.

Creating an Amazon RDS Aurora MySQL database

Before creating an Aurora DB cluster, you must meet the prerequisites, such as creating a VPC and an RDS DB subnet group. For more information on how to set this up, see DB cluster prerequisites.

  1. Call the create-db-cluster AWS CLI command to create the Aurora MySQL DB cluster.
    aws rds create-db-cluster \
    --db-cluster-identifier sample-cluster \
    --engine aurora-mysql \
    --engine-version 5.7.12 \
    --master-username admin \
    --master-user-password secret99 \
    --db-subnet-group-name default-vpc-6cc1cf0a \
    --vpc-security-group-ids sg-d7cf52a3 \
    --enable-iam-database-authentication true
  2. Add a new DB instance to the cluster.
    aws rds create-db-instance \
        --db-instance-class db.r5.large \
        --db-instance-identifier sample-instance \
        --engine aurora-mysql  \
        --db-cluster-identifier sample-cluster
  3. Store the database credentials as a secret in AWS Secrets Manager.
    aws secretsmanager create-secret \
    --name MyTestDatabaseSecret \
    --description "My test database secret created with the CLI" \
    --secret-string '{"username":"admin","password":"secret99","engine":"mysql","host":"<REPLACE-WITH-YOUR-DB-WRITER-ENDPOINT>","port":"3306","dbClusterIdentifier":"<REPLACE-WITH-YOUR-DB-CLUSTER-NAME>"}'

    Make a note of the resulting ARN for later.

    {
        "VersionId": "eb518920-4970-419f-b1c2-1c0b52062117",
        "Name": "MyTestDatabaseSecret",
        "ARN": "arn:aws:secretsmanager:eu-west-1:1234567890:secret:MyTestDatabaseSecret-JgEWv1"
    }

    This secret is used by RDS Proxy to maintain a connection pool to the database. To access the secret, the RDS Proxy service requires permissions to be explicitly granted.

  4. Create an IAM policy that provides secretsmanager permissions to the secret.
    aws iam create-policy \
    --policy-name my-rds-proxy-sample-policy \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "VisualEditor0",
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetResourcePolicy",
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret",
            "secretsmanager:ListSecretVersionIds"
          ],
          "Resource": [
            "<the-arn-of-the-secret>”
          ]
        },
        {
          "Sid": "VisualEditor1",
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetRandomPassword",
            "secretsmanager:ListSecrets"
          ],
          "Resource": "*"
        }
      ]
    }'
    

    Make a note of the resulting policy ARN, which you need to attach to a new role.

    {
        "Policy": {
            "PolicyName": "my-rds-proxy-sample-policy", 
            "PermissionsBoundaryUsageCount": 0, 
            "CreateDate": "2020-06-04T12:21:25Z", 
            "AttachmentCount": 0, 
            "IsAttachable": true, 
            "PolicyId": "ANPA6JE2MLNK3Z4EFQ5KL", 
            "DefaultVersionId": "v1", 
            "Path": "/", 
            "Arn": "arn:aws:iam::1234567890112:policy/my-rds-proxy-sample-policy", 
            "UpdateDate": "2020-06-04T12:21:25Z"
         }
    }
    
  5. Create an IAM Role that has a trust relationship with the RDS Proxy service. This allows the RDS Proxy service to assume this role to retrieve the database credentials.

    aws iam create-role --role-name my-rds-proxy-sample-role --assume-role-policy-document '{
     "Version": "2012-10-17",
     "Statement": [
      {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
        "Service": "rds.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
      }
     ]
    }'
    
  6. Attach the new policy to the role:
    aws iam attach-role-policy \
    --role-name my-rds-proxy-sample-role \
    --policy-arn arn:aws:iam::123456789:policy/my-rds-proxy-sample-policy
    

Create an RDS Proxy

  1. Use the AWS CLI to create a new RDS Proxy. Replace the --role-arn and SecretArn values with those created in the previous steps.
    aws rds create-db-proxy \
    --db-proxy-name sample-db-proxy \
    --engine-family MYSQL \
    --auth '{
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:eu-west-1:123456789:secret:exampleAuroraRDSsecret1-DyCOcC",
             "IAMAuth": "REQUIRED"
          }' \
    --role-arn arn:aws:iam::123456789:role/my-rds-proxy-sample-role \
    --vpc-subnet-ids  subnet-c07efb9a subnet-2bc08b63 subnet-a9007bcf
    

    To enforce IAM authentication for users of the RDS Proxy, the IAMAuth value is set to REQUIRED. This is a more secure alternative to embedding database credentials in the application code base.

    The Aurora DB cluster and its associated instances are referred to as the targets of that proxy.

  2. Add the database cluster to the proxy with the register-db-proxy-targets command.
    aws rds register-db-proxy-targets \
    --db-proxy-name sample-db-proxy \
    --db-cluster-identifiers sample-cluster
    

Deploying a PHP Lambda function with VPC configuration

This GitHub repository contains a Lambda function with a PHP runtime provided by a Lambda layer. The function uses the MySQLi PHP extension to connect to the RDS Proxy. The extension has been installed and compiled along with a PHP executable using this command:

The PHP executable is packaged together with a Lambda bootstrap file to create a PHP custom runtime. More information on building your own custom runtime for PHP can be found in this post.

Deploy the application stack using the AWS Serverless Application Model (AWS SAM) CLI:

sam deploy -g

When prompted, enter the SecurityGroupIds and the SubnetIds for your Aurora DB cluster.

The SAM template attaches the SecurityGroupIds and SubnetIds parameters to the Lambda function using the VpcConfig sub-resource.

Lambda creates an elastic network interface for each combination of security group and subnet in the function’s VPC configuration. The function can only access resources (and the internet) through that VPC.

Adding RDS Proxy to a Lambda Function

  1. Go to the Lambda console.
  2. Choose the PHPHelloFunction that you just deployed.
  3. Choose Add database proxy at the bottom of the page.
  4. Choose existing database proxy then choose sample-db-proxy.
  5. Choose Add.

Using the RDS Proxy from within the Lambda function

The Lambda function imports three libraries from the AWS PHP SDK. These are used to generate a password token from the database credentials stored in Secrets Manager.

The AWS PHP SDK libraries are provided by the PHP-example-vendor layer. Using Lambda layers in this way creates a mechanism for incorporating additional libraries and dependencies as the application evolves.

The function’s handler, named index, is the entry point of the function code. First, getenv() is called to retrieve the environment variables set by the SAM application’s deployment. These are saved as local variables and available for the duration of the Lambda function’s execution.

The AuthTokenGenerator class generates an RDS auth token for use with IAM authentication. This is initialized by passing in the credential provider to the SDK client constructor. The createToken() method is then invoked, with the Proxy endpoint, port number, Region, and database user name provided as method parameters. The resultant temporary token is then used to connect to the proxy.

The PHP mysqli class represents a connection between PHP and a MySQL database. The real_connect() method is used to open a connection to the database via RDS Proxy. Instead of providing the database host endpoint as the first parameter, the proxy endpoint is given. The database user name, temporary token, database name, and port number are also provided. The constant MYSQLI_CLIENT_SSL is set to ensure that the connection uses SSL encryption.
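
The pattern itself is not PHP-specific. As a rough Python equivalent, for comparison only, the token generation and TLS connection could look like the sketch below. PyMySQL, the proxy endpoint, database name, and certificate path are my assumptions, not part of this post's code.

import boto3
import pymysql

# Generate a short-lived IAM auth token for the RDS Proxy endpoint.
rds = boto3.client("rds", region_name="eu-west-1")
token = rds.generate_db_auth_token(
    DBHostname="sample-db-proxy.proxy-xxxx.eu-west-1.rds.amazonaws.com",  # placeholder
    Port=3306,
    DBUsername="admin",
)

# Connect through the proxy over TLS, using the token as the password.
connection = pymysql.connect(
    host="sample-db-proxy.proxy-xxxx.eu-west-1.rds.amazonaws.com",  # placeholder
    user="admin",
    password=token,
    database="mysql",       # placeholder database name
    port=3306,
    ssl={"ca": "/opt/rds-combined-ca-bundle.pem"},  # certificate path is an assumption
)

with connection.cursor() as cursor:
    cursor.execute("SHOW TABLES")
    print(cursor.fetchall())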

Once a connection has been established, the connection object can be used. In this example, a SHOW TABLES query is executed. The connection is then closed, and the result is encoded to JSON and returned from the Lambda function.

This is the output:

RDS Proxy monitoring and performance tuning

RDS Proxy allows you to monitor and adjust connection limits and timeout intervals without changing application code.

Limit the timeout wait period that is most suitable for your application with the connection borrow timeout option. This specifies how long to wait for a connection to become available in the connection pool before returning a timeout error.

Adjust the idle connection timeout interval to help your applications handle stale resources. This can save your application from mistakenly leaving open connections that hold important database resources.

Multiple applications using a single database can each use an RDS Proxy to divide the connection quotas across each application. Set the maximum proxy connections as a percentage of the max_connections configuration (for MySQL).

The following example shows how to change the MaxConnectionsPercent setting for a proxy target group.

aws rds modify-db-proxy-target-group \
--db-proxy-name sample-db-proxy \
--target-group-name default \
--connection-pool-config '{"MaxConnectionsPercent": 75 }'

Response:

{
    "TargetGroups": [
        {
            "DBProxyName": "sample-db-proxy",
            "TargetGroupName": "default",
            "TargetGroupArn": "arn:aws:rds:eu-west-1:####:target-group:prx-tg-03d7fe854604e0ed1",
            "IsDefault": true,
            "Status": "available",
            "ConnectionPoolConfig": {
            "MaxConnectionsPercent": 75,
            "MaxIdleConnectionsPercent": 50,
            "ConnectionBorrowTimeout": 120,
            "SessionPinningFilters": []
        	},            
"CreatedDate": "2020-06-04T16:14:35.858000+00:00",
            "UpdatedDate": "2020-06-09T09:08:50.889000+00:00"
        }
    ]
}

When RDS Proxy detects a session state change that isn’t appropriate for reuse, it keeps the session on the same underlying database connection until the session ends. This behavior is called pinning. Performance tuning for RDS Proxy involves maximizing connection reuse by minimizing pinning.

The Amazon CloudWatch metric DatabaseConnectionsCurrentlySessionPinned can be monitored to see how frequently pinning occurs in your application.

Amazon CloudWatch collects and processes raw data from RDS Proxy into readable, near real-time metrics. Use these metrics to observe the number of connections and the memory associated with connection management. This can help identify if a database instance or cluster would benefit from using RDS Proxy. For example, if it is handling many short-lived connections, or opening and closing connections at a high rate.

Conclusion

In this post, you learn how to create and configure an RDS Proxy to manage connections from a PHP Lambda function to an Aurora MySQL database. You see how to enforce strong authentication requirements by using Secrets Manager and IAM authentication. You deploy a Lambda function that uses Lambda layers to store the AWS PHP SDK as a dependency.

You can create secure, scalable, and performant serverless applications with relational databases. Do this by placing the RDS Proxy service between your database and your Lambda functions. You can also migrate your existing MySQL database to an Aurora DB cluster without altering the database. Using RDS Proxy and Lambda, you can build serverless PHP applications faster, with less code.

Find more PHP examples with the Serverless LAMP stack.