Tag Archives: .net

AWS App2Container – A New Containerizing Tool for Java and ASP.NET Applications

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-app2container-a-new-containerizing-tool-for-java-and-asp-net-applications/

Our customers are increasingly developing their new applications with containers and serverless technologies, and are using modern continuous integration and delivery (CI/CD) tools to automate the software delivery life cycle. They also maintain a large number of existing applications that are built and managed manually or using legacy systems. Maintaining these two sets of applications with disparate tooling adds to operational overhead and slows down the pace of delivering new business capabilities. As much as possible, they want to be able to standardize their management tooling and CI/CD processes across both their existing and new applications, and see the option of packaging their existing applications into containers as the first step towards accomplishing that goal.

However, containerizing existing applications requires a long list of manual tasks such as identifying application dependencies, writing dockerfiles, and setting up build and deployment processes for each application. These manual tasks are time consuming, error prone, and can slow down the modernization efforts.

Today, we are launching AWS App2Container, a new command-line tool that helps containerize existing applications that are running on-premises, in Amazon Elastic Compute Cloud (EC2), or in other clouds, without needing any code changes. App2Container discovers applications running on a server, identifies their dependencies, and generates relevant artifacts for seamless deployment to Amazon ECS and Amazon EKS. It also provides integration with AWS CodeBuild and AWS CodeDeploy to enable a repeatable way to build and deploy containerized applications.

AWS App2Container generates the following artifacts for each application component: application artifacts such as application files/folders, Dockerfiles, container images in Amazon Elastic Container Registry (ECR), ECS task definitions, Kubernetes deployment YAML, CloudFormation templates to deploy the application to Amazon ECS or EKS, and templates to set up a build/release pipeline in AWS CodePipeline, which also leverages AWS CodeBuild and CodeDeploy.

Starting today, you can use App2Container to containerize ASP.NET (.NET 3.5+) web applications running in IIS 7.5+ on Windows, and Java applications running on Linux—standalone JBoss, Apache Tomcat, and generic Java applications such as Spring Boot, IBM WebSphere, Oracle WebLogic, etc.

By modernizing existing applications using containers, you can make them portable, increase development agility, standardize your CI/CD processes, and reduce operational costs. Now let’s see how it works!

AWS App2Container – Getting Started
AWS App2Container requires that the following prerequisites be installed on the server(s) hosting your application: AWS Command Line Interface (CLI) version 1.14 or later, Docker tools, and (in the case of ASP.NET) PowerShell 5.0+ for applications running on Windows. Additionally, you need to provide appropriate IAM permissions to App2Container to interact with AWS services.

For example, let’s look at how to containerize an existing Java application. The App2Container CLI for Linux is packaged as a tar.gz archive. The archive includes an interactive shell script, install.sh, that installs the App2Container CLI. Running the script guides you through the installation steps and also updates your path to include the App2Container CLI commands.

First, run a one-time initialization of the App2Container CLI on the application server with the init command.

$ sudo app2container init
Workspace directory path for artifacts[default:  /home/ubuntu/app2container/ws]:
AWS Profile (configured using 'aws configure --profile')[default: default]:  
Optional S3 bucket for application artifacts (Optional)[default: none]: 
Report usage metrics to AWS? (Y/N)[default: y]:
Require images to be signed using Docker Content Trust (DCT)? (Y/N)[default: n]:
Configuration saved

This command sets up a workspace to store the application containerization artifacts (at least 20 GB of available disk space is recommended). Optionally, artifacts can be extracted to the Amazon Simple Storage Service (S3) bucket you specify, using the AWS profile you configured.

Next, you can view Java processes that are running on the application server by using the inventory command. Each Java application process has a unique identifier (for example, java-tomcat-9e8e4799) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

$ sudo app2container inventory
{
    "java-jboss-5bbe0bec": {
        "processId": 27366,
        "cmdline": "java ... /home/ubuntu/wildfly-10.1.0.Final/modules org.jboss.as.standalone -Djboss.home.dir=/home/ubuntu/wildfly-10.1.0.Final -Djboss.server.base.dir=/home/ubuntu/wildfly-10.1.0.Final/standalone ",
        "applicationType": "java-jboss"
    },
    "java-tomcat-9e8e4799": {
        "processId": 2537,
        "cmdline": "/usr/bin/java ... -Dcatalina.home=/home/ubuntu/tomee/apache-tomee-plume-7.1.1 -Djava.io.tmpdir=/home/ubuntu/tomee/apache-tomee-plume-7.1.1/temp org.apache.catalina.startup.Bootstrap start ",
        "applicationType": "java-tomcat"
    }
}

You can also inventory ASP.NET applications in an administrator PowerShell session on Windows Servers with IIS version 7.0 or later. Note that Docker tools and container support are available only on Windows Server 2016 and later versions. You can choose to run all app2container operations on the application server with Docker tools installed, or use a worker machine with Docker tools based on the Amazon ECS-optimized Windows Server AMIs.

PS> app2container inventory
{
    "iis-smarts-51d2dbf8": {
        "siteName": "nopCommerce39",
        "bindings": "http/*:90:",
        "applicationType": "iis"
    }
}

The inventory command displays all IIS websites on the application server that can be containerized. Each IIS website process has a unique identifier (for example, iis-smarts-51d2dbf8) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

You can choose a specific application by referring to its application ID and generate an analysis report for the application by using the analyze command.

$ sudo app2container analyze --application-id java-tomcat-9e8e4799
Created artifacts folder /home/ubuntu/app2container/ws/java-tomcat-9e8e4799
Generated analysis data in /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/analysis.json
Analysis successful for application java-tomcat-9e8e4799
Please examine the same, make appropriate edits and initiate containerization using "app2container containerize --application-id java-tomcat-9e8e4799"

You can use the analysis.json template generated by the application analysis to gather information on the analyzed application: the analysisInfo section helps you identify all system dependencies, and the containerParameters section lets you update containerization parameters to customize the container images generated for the application.

$ cat java-tomcat-9e8e4799/analysis.json
{
    "a2CTemplateVersion": "1.0",
	"createdTime": "2020-06-24 07:40:5424",
    "containerParameters": {
        "_comment1": "*** EDITABLE: The below section can be edited according to the application requirements. Please see the analyisInfo section below for deetails discoverd regarding the application. ***",
        "imageRepository": "java-tomcat-9e8e4799",
        "imageTag": "latest",
        "containerBaseImage": "ubuntu:18.04",
        "coopProcesses": [ 6446, 6549, 6646]
    },
    "analysisInfo": {
        "_comment2": "*** NON-EDITABLE: Analysis Results ***",
        "processId": 2537
        "appId": "java-tomcat-9e8e4799",
		"userId": "1000",
        "groupId": "1000",
        "cmdline": [...],
        "os": {...},
        "ports": [...]
    }
}

Also, you can run the $ app2container extract --application-id java-tomcat-9e8e4799 command to generate an application archive for the analyzed application. This step depends on the analysis.json file generated earlier in the workspace folder for the application, and adheres to any containerization parameter updates you made there. By using the extract command, you can continue the workflow on a worker machine after running the first set of commands on the application server.

Now you can generate Docker images for the selected application by using the containerize command.

$ sudo app2container containerize --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Extracted container artifacts for application
Entry file generated
Dockerfile generated under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated dockerfile.update under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated deployment file at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json
Containerization successful. Generated docker image java-tomcat-9e8e4799
You're all set to test and deploy your container image.

Next Steps:
1. View the container image with \"docker images\" and test the application.
2. When you're ready to deploy to AWS, please edit the deployment file as needed at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json.
3. Generate deployment artifacts using app2container generate app-deployment --application-id java-tomcat-9e8e4799

After the command completes, you can view the generated container images by running docker images on the machine where the containerize command was run. You can use the docker run command to launch the container and test application functionality.
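
For example, a quick local test might look like the following sketch. The port mapping assumes the application listens on port 8090, as reported in the generated deployment.json; substitute the port and image tag that App2Container reported for your application.

$ docker run -d -p 8090:8090 java-tomcat-9e8e4799:latest
$ curl http://localhost:8090/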

Note that in addition to generating container images, the containerize command also generates a deployment.json template file that you can use with the subsequent generate app-deployment command. You can edit the parameters in the deployment.json template file to change the image repository name to be registered in Amazon ECR, the ECS task definition parameters, or the Kubernetes application name.

$ cat java-tomcat-9e8e4799/deployment.json
{
       "a2CTemplateVersion": "1.0",
       "applicationId": "java-tomcat-9e8e4799",
       "imageName": "java-tomcat-9e8e4799",
       "exposedPorts": [
              {
                     "localPort": 8090,
                     "protocol": "tcp6"
              }
       ],
       "environment": [],
       "ecrParameters": {
              "ecrRepoTag": "latest"
       },
       "ecsParameters": {
              "createEcsArtifacts": true,
              "ecsFamily": "java-tomcat-9e8e4799",
              "cpu": 2,
              "memory": 4096,
              "dockerSecurityOption": "",
              "enableCloudwatchLogging": false,
              "publicApp": true,
              "stackName": "a2c-java-tomcat-9e8e4799-ECS",
              "reuseResources": {
                     "vpcId": "",
                     "cfnStackName": "",
                     "sshKeyPairName": ""
              },
              "gMSAParameters": {
                     "domainSecretsArn": "",
                     "domainDNSName": "",
                     "domainNetBIOSName": "",
                     "createGMSA": false,
                     "gMSAName": ""
              }
       },
       "eksParameters": {
              "createEksArtifacts": false,
              "applicationName": "",
              "stackName": "a2c-java-tomcat-9e8e4799-EKS",
              "reuseResources": {
                     "vpcId": "",
                     "cfnStackName": "",
                     "sshKeyPairName": ""
              }
       }
 }

At this point, the application workspace where the artifacts are generated serves as an iteration sandbox. You can edit the Dockerfile generated here to make changes to your application and use the docker build command to build new container images as needed. You can generate the artifacts needed to deploy the application containers to Amazon EKS by using the generate app-deployment command.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
EKS Cloudformation templates and additional deployment artifacts generated successfully for application java-tomcat-9e8e4799

You're all set to use AWS Cloudformation to manage your application stack.
Next Steps:
1. Edit the cloudformation template as necessary.
2. Create an application stack using the AWS CLI or the AWS Console. AWS CLI command:

       aws cloudformation deploy --template-file /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml --capabilities CAPABILITY_NAMED_IAM --stack-name java-tomcat-9e8e4799

3. Setup a pipeline for your application stack:

       app2container generate pipeline --application-id java-tomcat-9e8e4799

This command works from the deployment.json template file produced by the containerize command. App2Container now generates the ECS/EKS CloudFormation templates as well, and offers an option to deploy those stacks directly.

The command registers the container image in the user-specified ECR repository and generates CloudFormation templates for Amazon ECS and EKS deployments. You can register the ECS task definition with Amazon ECS, or use kubectl to launch the containerized application on an existing Amazon EKS or self-managed Kubernetes cluster using the App2Container-generated Kubernetes deployment YAML.
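
For example, a Kubernetes-based deployment might look like the following sketch; the file path is a placeholder for the Kubernetes deployment YAML that App2Container generates in your workspace.

$ kubectl apply -f <path-to-generated-kubernetes-deployment-yaml>
$ kubectl get pods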

Alternatively, you can deploy the containerized application directly to Amazon EKS by adding the --deploy option.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799 --deploy
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
Initiated Cloudformation stack creation. This may take a few minutes. Please visit the AWS Cloudformation Console to track progress.
Deploying application to EKS

Handling ASP.NET Applications with Windows Authentication
Containerizing ASP.NET applications follows almost the same process as for Java applications, but Windows containers cannot be directly domain joined. However, they can still use Active Directory (AD) domain identities to support various authentication scenarios.

App2Container detects whether a site is using Windows authentication, makes the IIS site’s application pool run as the network service identity accordingly, and generates CloudFormation templates for Windows-authenticated IIS applications. Those templates take care of creating the gMSA and AD security group, domain joining the ECS nodes, and configuring the containers to use the gMSA.

Also, the app2container containerize command outputs two PowerShell scripts, along with an instruction file on how to use them.

The following is an example output:

PS C:\Windows\system32> app2container containerize --application-id iis-SmartStoreNET-a726ba0b
Running AWS pre-requisite check...
Running Docker pre-requisite check...
Container build complete. Please use "docker images" to view the generated container images.
Detected that the Site is using Windows Authentication.
Generating powershell scripts into C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts required to setup Container host with Windows Authentication
Please look at C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts\WindowsAuthSetupInstructions.md for setup instructions on Windows Authentication.
A deployment file has been generated under C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b
Please edit the same as needed and generate deployment artifacts using "app2container generate-deployment"

The first PowerShell script, DomainJoinAddToSecGroup.ps1, joins the container host to the domain and adds it to an Active Directory security group. The second script, CreateCredSpecFile.ps1, creates a Group Managed Service Account (gMSA), grants it access to the Active Directory security group, generates the credential spec for this gMSA, and stores it locally on the container host. You can execute these PowerShell scripts on the ECS host. The following is an example usage of the scripts:

PS C:\Windows\system32> .\DomainJoinAddToSecGroup.ps1 -ADDomainName Dominion.com -ADDNSIp 10.0.0.1 -ADSecurityGroup myIISContainerHosts -CreateADSecurityGroup:$true
PS C:\Windows\system32> .\CreateCredSpecFile.ps1 -GMSAName MyGMSAForIIS -CreateGMSA:$true -ADSecurityGroup myIISContainerHosts

Before executing the app2container generate app-deployment command, edit the deployment.json file to change the value of dockerSecurityOption to the name of the CredentialSpec file that the CreateCredSpecFile script generated. For example:
"dockerSecurityOption": "credentialspec:file://dominion_mygmsaforiis.json"

Effectively, any access to a network resource made by the IIS server inside the container for this site now uses the gMSA above to authenticate. The final step is to authorize this gMSA account on the network resources that the IIS server will access. A common example is authorizing this gMSA inside SQL Server.

Finally, if the application must connect to a database to be fully functional and you run the container in Amazon ECS, ensure that the application container created from the Docker image generated by the tool has connectivity to the same database. You can refer to this documentation for options on migrating: MS SQL Server from Windows to Linux on AWS, Database Migration Service, and backup and restore your MS SQL Server to Amazon RDS.

Now Available
AWS App2Container is offered at no additional charge. You pay only for the actual usage of AWS services such as Amazon EC2, ECS, EKS, and S3. For details, please refer to the App2Container FAQs and documentation. Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for ECS, the AWS Forum for EKS, or on the container roadmap on GitHub.

Channy;

Field Notes: Bring your C#.NET skills to Amazon SageMaker

Post Syndicated from Haider Abdullah original https://aws.amazon.com/blogs/architecture/field-notes-bring-your-c-net-skills-to-amazon-sagemaker/

Amazon SageMaker is a fully managed service that provides developers and data scientists with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the undifferentiated heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.

Amazon SageMaker Notebooks are one-click Jupyter Notebooks with elastic compute that can be spun up quickly. Notebooks contain everything necessary to run or recreate a machine learning workflow. Notebooks in SageMaker are pre-loaded with all the common CUDA and cuDNN drivers, Anaconda packages, and framework libraries. However, there is a small amount of work required to support C# .NET code in the Notebooks.

This blog post focuses on customizing the Amazon SageMaker Notebooks environments to support C# .NET so C#.NET developers can get started with SageMaker Notebooks and machine learning on AWS. To provide support for C#.NET in our SageMaker Jupyter Notebook environment, we use the .NET interactive tool, which is an evolution of the Try .NET global tool. An installation script is provided for you, which  automatically downloads and installs these components. Note that the components are distributed under a proprietary license from Microsoft.

After we set up the environment, we walk through an end-to-end example of building, training, deploying, and invoking a built-in image classification model provided by SageMaker. The example in this blog post is modeled after the End-to-End Multiclass Image Classification Example but written entirely using the C# .NET APIs!

Procedure

The following are the high level steps to build out and invoke a fully functioning image classification model in SageMaker using C#.NET:

  • Customize the SageMaker Jupyter Notebook instances by creating a SageMaker lifecycle configuration
  • Launch a Jupyter Notebook using the SageMaker lifecycle configuration
  • Create an Amazon S3 bucket for the validation and training datasets, and the output data
  • Train a model using the sample dataset and built-in SageMaker image classification algorithm
  • Deploy and host the model in SageMaker
  • Create an inference endpoint using the trained model
  • Invoke the endpoint with the trained model to obtain real time inferences for sample images
  • Clean up the resources used in the example to stop incurring charges

Customize the Notebook instances

We use Amazon SageMaker lifecycle configurations to install the required components for C# .NET support.

Use the following steps to create the Lifecycle configuration.

  1. Sign in to the AWS Management Console.
  2. Navigate to Amazon SageMaker and select Lifecycle configurations from the left menu.
  3. Select Create configuration and provide a name for the configuration.
  4. In the Start notebook section paste the following script:
#!/bin/bash

set -e
wget https://download.visualstudio.microsoft.com/download/pr/d731f991-8e68-4c7c-8ea0-fad5605b077a/49497b5420eecbd905158d86d738af64/dotnet-sdk-3.1.100-linux-x64.tar.gz
wget https://download.visualstudio.microsoft.com/download/pr/30ab052d-dbb6-4bce-8a44-a831034589ed/7ffaad695afb7ccd778b0d3fc1c89f50/dotnet-runtime-3.0.1-linux-x64.tar.gz
mkdir -p /home/ec2-user/dotnet && tar zxf dotnet-runtime-3.0.1-linux-x64.tar.gz -C /home/ec2-user/dotnet
export DOTNET_ROOT=/home/ec2-user/dotnet
export PATH=$PATH:/home/ec2-user/dotnet
export DOTNET_CLI_HOME=/home/ec2-user/dotnet
export HOME=/home/ec2-user

tar zxf dotnet-sdk-3.1.100-linux-x64.tar.gz -C /home/ec2-user/dotnet
dotnet tool install --global Microsoft.dotnet-interactive
export PATH=$PATH:/home/ec2-user/dotnet/.dotnet/tools
dotnet interactive jupyter install
jupyter kernelspec list

touch /etc/profile.d/jupyter-env.sh
echo "export PATH='$PATH:/home/ec2-user/dotnet/.dotnet/tools:/home/ec2-user/dotnet'">>
/etc/profile.d/jupyter-env.sh

touch /etc/profile.d/dotnet-env.sh
echo "export DOTNET_ROOT='/home/ec2-user/dotnet'">>/etc/profile.d/dotnet-env.sh
sudo chmod -R 777 /home/ec2-user/.dotnet

initctl restart jupyter-server --no-wait

In the above code snippet the following actions are taken:

  • Download of  the .NET SDK and runtime from Microsoft
  • Extraction of the downloads
  • Installation of the .NET interactive global tool
  • Setting of required PATH and DOTNET_ROOT variables so they are available to the Jupyter Notebook instance

Note: Creation of the lifecycle configuration automatically downloads and installs third-party software components that are subject to a proprietary license.

Launch a Jupyter Notebook instance

After the lifecycle configuration is created, we are ready to launch a Notebook instance.

Use the following steps to create the Notebook instance:

1. In the AWS Management Console, navigate to the Notebook instances page from the left menu.

2. Select Create notebook instance.

3. Provide a name for your Notebook and select an instance type (a smaller instance type such as ml.t2.medium suffices for the purposes of this example).

4. Set Elastic interface to none.

5. Expand the Additional configuration menu to expose the Lifecycle configuration drop down list. Then select the configuration you created. Volume size in GB can be set to the default of 5.

Notebook Instance Settings

6. In the Permissions and encryption section, select Create a new IAM Role from the IAM role dropdown. This is the role that is used by your Notebook instance, and you can use the provided default permissions for the role. Select Create role.

7. The Network, Git repositories, and tags sections can be left as is.

8. Select Create notebook instance.

Create an S3 bucket to hold the validation and training datasets

It takes a few minutes for the newly launched Jupyter Notebook instance to be ‘InService’.  While it is launching, create an S3 bucket to hold the datasets required for our example. Make sure to create the S3 bucket in the same Region as your Notebook instance.
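
Because the rest of this walkthrough is written in C#, you can also create the bucket from the Notebook itself once it is running. The following is a minimal sketch using the AWSSDK.S3 package; the bucket name shown is only an example and must be replaced with a globally unique name of your own.

#r "nuget:AWSSDK.S3, 3.3.110.45"

using Amazon.S3;
using Amazon.S3.Model;

// Example bucket name; S3 bucket names must be globally unique
string bucketName = "my-sagemaker-dotnet-demo";

// The client uses the Notebook instance's default Region, keeping the bucket and Notebook co-located
var s3 = new AmazonS3Client();
PutBucketResponse putBucketResponse = await s3.PutBucketAsync(new PutBucketRequest { BucketName = bucketName });
Console.WriteLine(putBucketResponse.HttpStatusCode);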

Open the Jupyter Notebook and begin writing code

After the Notebook instance has launched, the ‘Status’ in the AWS Management Console will report as ‘InService’. Select the Notebook, and choose the Open Jupyter link in the Actions column. To begin authoring from scratch, select New -> .NET (C#) from the drop-down menu on the top right hand side.

A blank ‘Untitled’ file opens and is ready for you to start writing code. Alternatively, if you’d like to follow along with this post, you can download the full Notebook from Github and use the ‘Upload’ button to start stepping through the Notebook.

To run a block of code in the Notebook, click into the block and then select the Run button, or hold down your SHIFT Key and press ‘Enter’ on your keyboard.


We start by including the relevant NuGet packages for SageMaker, Amazon S3, and a JSON parser:

#r "nuget:AWSSDK.SageMaker, 3.3.112.3"
#r "nuget:AWSSDK.SageMakerRuntime, 3.3.101.49"
#r "nuget:AWSSDK.S3, 3.3.110.45"
#r "nuget:Newtonsoft.Json, 12.0.3"

Next, we create the service client objects that are used throughout the rest of the code:

static AmazonS3Client s3Client = new AmazonS3Client();
AmazonSageMakerClient smClient = new AmazonSageMakerClient();
AmazonSageMakerRuntimeClient smrClient = new AmazonSageMakerRuntimeClient();

Download the required training and validation datasets and store them in Amazon S3

The next step is to download the required training and validation datasets and store them in the S3 bucket that was previously created so they are accessible to our training job. In this demo, we use the Caltech-256 dataset, which contains 30,608 images of 256 object categories. For the sake of brevity, the C# code to download the web files and upload them to Amazon S3 is not shown here, but it can be found in full in the GitHub repo.
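
The essence of that code is a download followed by an upload; the following is only a minimal sketch of the idea, assuming bucketName holds the bucket created earlier (the same variable is used in the training configuration below) and using an illustrative source URL and key prefix. Refer to the GitHub repo for the exact file list and prefixes used by the training and validation channels.

using System.Net;
using Amazon.S3.Transfer;

// Download one of the RecordIO files (URL shown for illustration only)
var webClient = new WebClient();
webClient.DownloadFile(
    "http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec",
    "caltech-256-60-train.rec");

// Upload it under the prefix used by the corresponding training channel
var transferUtility = new TransferUtility(s3Client);
await transferUtility.UploadAsync("caltech-256-60-train.rec", bucketName, "train/caltech-256-60-train.rec");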

Train a model using the sample dataset and built-in SageMaker image classification algorithm

After we have the data available in the correct format for training, the next step is to actually train the model using the data. We start by retrieving the IAM role we want SageMaker to use from the currently running Notebook instance dynamically.

DescribeNotebookInstanceRequest dniReq = new DescribeNotebookInstanceRequest() {
   NotebookInstanceName = "dotNetV3-1"
};
DescribeNotebookInstanceResponse dniResp = await smClient.DescribeNotebookInstanceAsync(dniReq);
Console.WriteLine(dniResp.RoleArn);

We set all the training parameters and kick off the training job. We use the built-in image classification algorithm, which we specify in the TrainingImage parameter by providing the URI of the algorithm's Docker image from the documentation; there are a number of training images available that correspond to the desired algorithm and the Region we have chosen.

string jobName = String.Format("DEMO-imageclassification-{0}",DateTime.Now.ToString("yyyy-MM-dd-hh-mmss"));

CreateTrainingJobRequest ctrRequest = new CreateTrainingJobRequest(){
  AlgorithmSpecification = new AlgorithmSpecification(){
  TrainingImage = "433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:1",
  TrainingInputMode = "File" 
 },
 RoleArn = dniResp.RoleArn, 
 OutputDataConfig = new OutputDataConfig(){
   S3OutputPath = String.Format(@"Amazon S3s3://{0}/{1}/output",bucketName,jobName)
 },
 ResourceConfig = new ResourceConfig(){
   InstanceCount = 1,
   InstanceType = Amazon.SageMaker.TrainingInstanceType.MlP2Xlarge,
   VolumeSizeInGB = 50
 },
 TrainingJobName = jobName,
 HyperParameters = new Dictionary<string,string>() {
   {"image_shape","3,224,224"},
   {"num_layers","18"}, 
   {"num_training_samples","15420"},
   {"num_classes","257"},
   {"mini_batch_size","64"},
   {"Epochsepochs","10"},
   {"learning_rate","0.01"}
 },
 StoppingCondition = new StoppingCondition(){
    MaxRuntimeInSeconds = 360000
 },
 InputDataConfig = new List<Amazon.SageMaker.Model.Channel>(){
   new Amazon.SageMaker.Model.Channel() {
    ChannelName = "train",
    ContentType = "application/x-recordio",
    CompressionType = Amazon.SageMaker.CompressionType.None,
     DataSource = new Amazon.SageMaker.Model.DataSource(){
      S3DataSource = new Amazon.SageMaker.Model.S3DataSource(){
      S3DataType = Amazon.SageMaker.S3DataType.S3Prefix,
      S3Uri = s3Train,
      S3DataDistributionType = Amazon.SageMaker.S3DataDistribution.FullyReplicated
      }
    }
 },
 new Amazon.SageMaker.Model.Channel(){
   ChannelName = "validation",
   ContentType = "application/x-recordio",
   CompressionType = Amazon.SageMaker.CompressionType.None,
   DataSource = new Amazon.SageMaker.Model.DataSource(){
     S3DataSource = new Amazon.SageMaker.Model.S3DataSource(){
       S3DataType = Amazon.SageMaker.S3DataType.S3Prefix,
       S3Uri = s3Validation,
       S3DataDistributionType = Amazon.SageMaker.S3DataDistribution.FullyReplicated
     }
    }
   }
  }
};

Poll the training job status until it reports Completed, then proceed to the next step.
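
Note that the snippet above only builds the request object; the training job itself is started with a CreateTrainingJob call. The following sketch (the full Notebook in the repo has the authoritative version) starts the job and polls its status; the DescribeTrainingJob response (tjResp) is reused below to locate the trained model artifacts.

using Amazon.SageMaker.Model;

// Start the training job defined by ctrRequest
CreateTrainingJobResponse ctrResponse = await smClient.CreateTrainingJobAsync(ctrRequest);
Console.WriteLine(ctrResponse.TrainingJobArn);

// Poll until the job leaves the InProgress state
DescribeTrainingJobResponse tjResp;
do
{
    await Task.Delay(TimeSpan.FromSeconds(60));
    tjResp = await smClient.DescribeTrainingJobAsync(new DescribeTrainingJobRequest { TrainingJobName = jobName });
    Console.WriteLine(tjResp.TrainingJobStatus);
} while (tjResp.TrainingJobStatus == Amazon.SageMaker.TrainingJobStatus.InProgress);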

Deploy and host the model in SageMaker

After the training job has completed, it is time to build the model. Create the request object with all of the required parameters and make the API call to create the model.

String modelName = String.Format("DEMO-full-image-classification-model-{0}",DateTime.Now.ToString("yyyy-MM-dd-hh-mmss"));
Console.WriteLine(modelName);

CreateModelRequest modelRequest = new CreateModelRequest(){
  ModelName = modelName,
  ExecutionRoleArn = dniResp.RoleArn,
  PrimaryContainer = new ContainerDefinition(){
     Image = "433757028032.dkr.ecr.us-west-2.amazonaws.com/image-classification:latest",
     ModelDataUrl = tjResp.ModelArtifacts.S3ModelArtifacts
  }
};

CreateModelResponse modelResponse = await smClient.CreateModelAsync(modelRequest);
Console.WriteLine(modelResponse.ModelArn);

Create an inference endpoint using the trained model

After deploying the model, we are ready to create the endpoint that will be invoked to get real time inferences for images. This is a two-step process: first, we create an endpoint configuration and use it to create the endpoint itself.

string epConfName = String.Format("{0}-EndPointConfig", jobName); // endpoint configuration name (illustrative; the original Notebook defines this separately)

CreateEndpointConfigRequest epConfReq = new CreateEndpointConfigRequest(){
   EndpointConfigName = epConfName,
   ProductionVariants = new List<ProductionVariant>(){
     new ProductionVariant() {
       InstanceType = Amazon.SageMaker.ProductionVariantInstanceType.MlP28xlarge,
       InitialInstanceCount = 1,
       ModelName = modelName,
       VariantName = "AllTraffic"
     }
   }
 }; 

CreateEndpointConfigResponse epConfResp = await smClient.CreateEndpointConfigAsync(epConfReq);
Console.WriteLine(epConfResp.EndpointConfigArn);

string epName = String.Format("{0}-EndPoint",jobName);
Console.WriteLine(epName);

CreateEndpointRequest epReq = new CreateEndpointRequest(){
  EndpointName = epName,
  EndpointConfigName = epConfName
};

CreateEndpointResponse epResp = await smClient.CreateEndpointAsync(epReq);
Console.WriteLine(epResp.EndpointArn);

Poll the endpoint status until it reports InService, then proceed to the next step.
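
A minimal polling sketch, using the smClient and epName values defined above:

DescribeEndpointResponse depResp;
do
{
    await Task.Delay(TimeSpan.FromSeconds(30));
    depResp = await smClient.DescribeEndpointAsync(new DescribeEndpointRequest { EndpointName = epName });
    Console.WriteLine(depResp.EndpointStatus);
} while (depResp.EndpointStatus == Amazon.SageMaker.EndpointStatus.Creating);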

Invoke the endpoint with the trained model to obtain real time inferences for sample images

Load the known list of classes/categories into a List (shortened here for brevity).  We compare the inference response to this list.

String[] categoriesArray = new String[]{"ak47", "american-flag", "backpack", "baseball-bat", "baseball-glove", "basketball-hoop", "bat", "bathtub", "bear", "beer-mug", ..... "clutter"};
List<String> categories = categoriesArray.ToList();

Two images from the Caltech dataset have been chosen at random to be tested against the deployed model.  These two images are downloaded locally and loaded into memory so they can be passed as payload to the endpoint. The code below demonstrates one of these in action:

webClient.DownloadFile("http://www.vision.caltech.edu/Image_Datasets/Caltech256/images/008.bathtub/008_0007.jpg", "008_0007.jpg");

MemoryStream dataStream = new MemoryStream(File.ReadAllBytes(@"./008_0007.jpg"));
InvokeEndpointRequest invReq = new InvokeEndpointRequest(){
  EndpointName = epName,
  ContentType = "application/x-image",
  Body = dataStream
};
InvokeEndpointResponse invResp = await smrClient.InvokeEndpointAsync(invReq);

//Read the response stream back into a string so it can be reviewed
StreamReader sr = new StreamReader(invResp.Body);
String responseBody = sr.ReadToEnd();

We now have a response returned from the endpoint, and we must inspect this result to determine whether it is correct. The response is a list of probabilities; each item in the list represents the probability that the image provided to the endpoint matches the class/category at the same position in our previously loaded list.

//Load the values into a List so they can be more easily searched
List<Decimal> probabilities = JsonConvert.DeserializeObject<List<Decimal>>(responseBody);

//Determine which category returned the highest probability match and print its value and index
var indexAtMax = probabilities.IndexOf(probabilities.Max());
Console.WriteLine(String.Format("Index of Max Probability: {0}",indexAtMax));
Console.WriteLine(String.Format("Value of Max Probability: {0}",probabilities[indexAtMax]));

//Print which Category name matches with the image
Console.WriteLine(String.Format("Category of image : {0}",categories[indexAtMax]));

The response indicates that, for the given image, the highest probabilistic match (~16.9%) to one of our known classes/categories is at index 7 of the list. We inspect our list of known classes/categories at the same index value to determine the name, which returns “bathtub”. We have a successful match!

Index of Max Probability: 7
Value of Max Probability: 0.16936515271663666
Category of image : bathtub

Clean up the used resources

To avoid ongoing charges, delete the deployed endpoint and stop the Jupyter Notebook instance.
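
A sketch of the cleanup from the Notebook, using the names defined earlier (deleting the endpoint configuration and model is optional but tidy); the Notebook instance itself can then be stopped from the SageMaker console or with the StopNotebookInstance API.

await smClient.DeleteEndpointAsync(new DeleteEndpointRequest { EndpointName = epName });
await smClient.DeleteEndpointConfigAsync(new DeleteEndpointConfigRequest { EndpointConfigName = epConfName });
await smClient.DeleteModelAsync(new DeleteModelRequest { ModelName = modelName });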

Conclusion

If you are a C# .NET developer who was previously overwhelmed by the prospect of getting started with machine learning on AWS, following the guidance in this post will get you up and running quickly. The full Jupyter Notebook for this example can be found in the GitHub repo.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Deploying an ASP.NET Core web application to Amazon ECS using an Azure DevOps pipeline

Post Syndicated from John Formento original https://aws.amazon.com/blogs/devops/deploying-a-asp-net-core-web-application-to-amazon-ecs-using-an-azure-devops-pipeline/

For .NET developers, leveraging Team Foundation Server (TFS) has been the cornerstone for CI/CD over the years. As more and more .NET developers start to deploy onto AWS, they have been asking questions about using the same tools to deploy to the AWS cloud. By configuring a pipeline in Azure DevOps to deploy to the AWS cloud, you can easily use familiar Microsoft development tools to build great applications.

Solution overview

This blog post demonstrates how to create a simple Azure DevOps project, repository, and pipeline to deploy an ASP.NET Core web application to Amazon ECS using Azure DevOps. The following diagram shows the high-level architecture of the pipeline:

 

Solution Architecture Diagram

In this example, you perform the following steps:

  1. Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application.
  2. Create a pipeline in Azure DevOps
  3. Build an Amazon ECS Cluster, Task and Service.
  4. Kick off deployment of the ASP.NET Core web application using the newly created Azure DevOps pipeline.

 

Prerequisites

Ensure you have the following prerequisites set up:

  • An Amazon ECR repository
  • An IAM user with permissions for Amazon ECR and Amazon ECS (the user will need an access key and secret access key)

 

Create an Azure DevOps Project, clone project repo, and push ASP.NET Core web application

Follow these steps to deploy a .NET Core app onto your Amazon ECS cluster using the Azure DevOps (ADO) repository and pipeline:

 

  1. Login to dev.azure.com and navigate to the marketplace.
  2. Go to Visual Studio, search for “AWS”, and add the AWS Tools for Microsoft Visual Studio Team Services.
  3. Create a project in ADO: Provide a project name and choose Create.
  4. On the Project Summary page, choose Project Settings.
  5. In the Project Settings pane, navigate to the Service Connections page.
  6. Choose Create service connection, select AWS, and choose Next.
  7. Input an Access Key ID and Secret Access Key. (You’ll need an IAM user with permissions for Amazon ECR and Amazon ECS in order to deploy via the Azure DevOps pipeline.) Choose Save.
  8. Choose Repos in the left pane, then Clone in Visual Studio under Clone to your computer.
  9. Create a ASP.NET Core web application in Visual Studio, set the location to locally cloned repository, and check Enable Docker support.
  10. Once you’ve created the new project, perform an initial commit and push to the repository in Azure DevOps.

 

Creating a pipeline in Azure DevOps

Now that you have synced the repository, create a pipeline in Azure DevOps.

  1. Go to the pipeline page within Azure DevOps and choose Create Pipeline.
  2. Choose Use the classic editor.
  3. Select Azure Repos Git for the location of your code and select the repository you created earlier.
  4. On the Choose a Template page, select Docker Container and choose Apply.
  5. Remove the Push an image step.
  6. Add an Amazon ECR Push task by choosing the + symbol next to Agent job 1. You can search for “AWS” in the Add tasks pane to filter for all AWS tasks.

 

Now, configure each task:

  1. Choose the Build an image task and ensure that the action is set to Build an image. Additionally, you can modify the Image Name to your standards.
  2. Choose the Push Image task and provide the following
    • Enter a name under Display Name.
    • Select the AWS Credentials that you created in Service Connections.
    • Select the AWS Region.
    • Provide the source image name, which you can find in the setting for the Build an image task.
    • Enter the name of the repository in Amazon ECR to which the image is pushed.
  3. Choose Save and queue.

Build Amazon ECS Cluster, Task, and Service

The goal here is to test up to building the Docker image and ensure it’s pushed to Amazon ECR. Once the Docker image is in Amazon ECR, you can create the Amazon ECS cluster, task definition, and service leveraging the newly created Docker image.

  1. Create an Amazon ECS cluster.
  2. Create an Amazon ECS task definition. When you create the task definition and configure the container, use the Amazon ECR URI for the Docker image that was just pushed to Amazon ECR.
  3. Create an Amazon ECS service.

Go back and edit the pipeline:

  1. Add the last step by choosing the + symbol next to Agent job 1.
  2. Search for “AWS CLI” in the search bar and add the task.
  3. Choose AWS CLI and configure the task.
  4. Enter a name under Display Name, such as Update ECS Service.
  5. Select the AWS Credentials that you created in Service Connections.
  6. Select the AWS Region.
  7. Input the following command, which updates the Amazon ECS service after a new image is pushed to Amazon ECR. Replace <clustername> and <servicename> with your Amazon ECS cluster and service names. (For reference, the equivalent standalone AWS CLI command is shown after this list.)
    • Command: ecs
    • Subcommand: update-service
    • Options and parameters: --cluster <clustername> --service <servicename> --force-new-deployment
  8. Now choose the Triggers tab and select Enable continuous integration with the repository you created.
  9. Choose Save and queue.
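
For reference, the AWS CLI task configured in step 7 is equivalent to running the following command from a shell that has the same credentials and Region configured; <clustername> and <servicename> are placeholders for your own values:

    aws ecs update-service --cluster <clustername> --service <servicename> --force-new-deployment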

 

At this point, your build pipeline kicks off and builds a Docker image from the source code in the repository you created, pushes the image to Amazon ECR, and updates the Amazon ECS service with the new image.

You can verify this by viewing the build: choose Pipelines in Azure DevOps, select the entry for the latest run, and then choose the icon under the status column. Once it completes successfully, you can log in to the AWS console and view the updated image in Amazon ECR and the updated service in Amazon ECS.

Every time you commit and push your code through Visual Studio, this pipeline kicks off and builds and deploys your application to Amazon ECS.

Cleanup

At the end of this example, once you’ve completed all steps and are finished testing, follow these steps to disable or delete resources to avoid incurring costs:

  1. Go to the Amazon ECS console within the AWS Console.
  2. Navigate to the cluster you created, then choose the Tasks tab.
  3. Choose Stop all to turn off the tasks.

Conclusion

This blog post reviewed how to create a CI/CD pipeline in Azure DevOps to deploy a Docker Image to Amazon ECR and container to Amazon ECS. It provided detailed steps on how to set up a basic CI/CD pipeline, leveraging tools with which .NET developers are familiar and the steps needed to integrate with Amazon ECR and Amazon ECS.

I hope this post was informative and has helped you learn the basics of how to integrate Amazon ECR and Amazon ECS with Azure DevOps to create a robust CI/CD pipeline.

About the Authors

John Formento

 

 

John Formento is a Solution Architect at Amazon Web Services. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud.

Announcing AWS Lambda support for .NET Core 3.1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/announcing-aws-lambda-supports-for-net-core-3-1/

This post is courtesy of Norm Johanson, Senior Software Development Engineer, AWS SDKs and Tools.

From today, you can develop AWS Lambda functions using .NET Core 3.1. You can deploy to Lambda by setting the runtime parameter value to dotnetcore3.1. Version 1.17.0.0 of the AWS Toolkit for Visual Studio and version 4.0.0 of the .NET Core Global Tool Amazon.Lambda.Tools are also available today. These make it easy to build and deploy your .NET Core 3.1 Lambda functions.

New features of .NET Core 3.1

.NET Core 3.1 brings many new runtime features to Lambda, including C# 8.0 and F# 4.7 support, .NET Standard 2.1 support, a new JSON serializer, and a new ReadyToRun feature for ahead-of-time compilation. There are also new versions of the .NET Lambda tooling and libraries. These include the Amazon.Lambda.AspNetCoreServer package, which allows you to run ASP.NET Core 3.1 projects as Lambda functions.

New Lambda JSON serializer

.NET Core Lambda functions support JSON serialization of input and return parameters. Use this feature by registering a serializer in your Lambda code. Typically, this is done using an assembly attribute like the following, which registers the JsonSerializer class from the Amazon.Lambda.Serialization.Json NuGet package as the serializer:

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

Amazon.Lambda.Serialization.Json uses the popular NuGet package Newtonsoft.Json for serialization. Newtonsoft.Json is a powerful serializer with many built-in features. This makes it a large assembly to add to your .NET Core Lambda functions.

Starting with .NET Core 3.0, a new JSON serializer called System.Text.Json is built into the .NET Core framework. This serializer is focused on the core features of serialization and built for performance. To take advantage of this new serializer, use the new NuGet package Amazon.Lambda.Serialization.SystemTextJson. Testing with this new serializer shows significant improvements to Lambda cold start performance. The new Lambda blueprints available in Visual Studio or dotnet new via Amazon.Lambda.Templates default to this new serializer using the following assembly attribute:

[assembly:LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.LambdaJsonSerializer))]

Most .NET Lambda event packages work with either of the AWS serializer packages. For performance and simplicity reasons, the new versions of Amazon.Lambda.APIGatewayEvents and Amazon.Lambda.AspNetCoreServer only use the newer, faster Amazon.Lambda.Serialization.SystemTextJson for JSON serialization when targeting .NET Core 3.1.

ReadyToRun for better cold start performance

.NET Core 3.0 introduces a new build concept called ReadyToRun, which is also available in .NET Core 3.1. ReadyToRun performs much of the work of the just-in-time compiler used by the .NET runtime. If your project contains large amounts of code or large dependencies like the AWS SDK for .NET, this feature can significantly reduce cold start times. It has less effect on small functions using only the .NET Core base library.

To use ReadyToRun for Lambda, you must package your .NET Lambda function on Linux. You can use a Linux environment like an EC2 Linux instance, or CodeBuild, which also recently added .NET Core 3.1 support. Once you are in a Linux environment using the .NET Lambda tooling, enable ReadyToRun by setting the --msbuild-parameters switch:

"/p:PublishReadyToRun=true --self-contained false"

For example, to deploy a function with ReadyToRun enabled using the .NET Core Global Tool for Lambda Amazon.Lambda.Tools, use:

dotnet lambda deploy-function R2RExample --msbuild-parameters "/p:PublishReadyToRun=true --self-contained false"

To avoid setting this on the command line every time, you can set msbuild-parameters as a property in the aws-lambda-tools-defaults.json file.
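
For example, a minimal aws-lambda-tools-defaults.json carrying that setting might look like the following sketch; your file will typically contain other deployment properties as well, such as the function-runtime and framework fields mentioned in the migration steps below.

{
  "function-runtime": "dotnetcore3.1",
  "framework": "netcoreapp3.1",
  "msbuild-parameters": "/p:PublishReadyToRun=true --self-contained false"
}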

Updated AWS Mock .NET Lambda Test Tool

Lambda test tool

With .NET Core 2.1, AWS released the AWS .NET Core Mock Lambda Test Tool. This makes it easy to debug .NET Core Lambda functions. If you are using the AWS Toolkit for Visual Studio, the toolkit automatically installs or updates the test tool, and configures your launchSettings.json file. With the toolkit, you can use F5 debugging when the project opens.

For .NET Core 3.1, this tool offers new features. First, the way the tool loads .NET Lambda code internally has been redesigned. Previously, assemblies in customer code could collide with the test tool’s assemblies. Now the Lambda code is loaded in a separate AssemblyLoadContext, preventing this collision.

The test tool is an ASP.NET Core application that loads and executes the Lambda code. This allows the debugger that is currently attached to the test tool to debug the loaded Lambda code. Pressing F5 opens the web interface, allowing you to select the function, payload, and other parameters. Once everything is set, choose Execute to run the code inside the test tool’s process. To improve the debug turnaround cycle, there is a new switch: --no-ui. This skips the web interface after code changes, making it faster to debug your code.

My workflow for this tool is to use the web interface for the initial debug session, then save the request JSON. After the initial debug session, I edit the launchSettings.json file, which looks like this, setting up the port for the web interface:


{
  "profiles": {
    "Mock Lambda Test Tool": {
      "commandName": "Executable",
      "commandLineArgs": "--port 5050",
      "workingDirectory": ".\\bin\\$(Configuration)\\netcoreapp3.1",
      "executablePath": "C:\\Users\\%USERNAME%\\.dotnet\\tools\\dotnet-lambda-test-tool-3.1.exe"
    }
  }
}

I update this to:


{
  "profiles": {
    "Mock Lambda Test Tool": {
      "commandName": "Executable",
      "commandLineArgs": "—no-ui --payload SavedRequest",
      "workingDirectory": ".\\bin\\$(Configuration)\\netcoreapp3.1",
      "executablePath": "C:\\Users\\%USERNAME%\\.dotnet\\tools\\dotnet-lambda-test-tool-3.1.exe"
    }
  }
}

This uses the saved request from the web interface as the input payload, instead of the web interface.

For more information about this feature, see the new Documentation tab in the test tool after launching. Here is a demonstration of my debug workflow:

Lambda debug workflow

Amazon Linux 2

.NET Core 3.1, like Ruby 2.7, Python 3.8, Node.js 10 and 12, and Java 11, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Migrate to .NET Core 3.1

To migrate existing .NET Core 2.1 Lambda functions to the new 3.1 runtime, follow the steps below:

  1. Open the csproj or fsproj file.
    • Set the TargetFramework element to netcoreapp3.1.
  2. Open the aws-lambda-tools-defaults.json file.
    • If it exists, set the function-runtime field to dotnetcore3.1
    • If it exists, set the framework field to netcoreapp3.1. If you remove the field, the value is inferred from the project file.
  3. If it exists, open the serverless.template file.
    • For any AWS::Lambda::Function or AWS::Serverless::Function, set the Runtime property to dotnetcore3.1
  4. Update all Amazon.Lambda.* NuGet package references to the latest versions.

To use the new JSON serializer, follow these steps:

  1. Remove the NuGet package reference to Amazon.Lambda.Serialization.Json.
  2. Add the NuGet package reference to Amazon.Lambda.Serialization.SystemTextJson.
  3. In your code, where the LambdaSerializer attribute registers the JSON serializer, change the parameter to Amazon.Lambda.Serialization.SystemTextJson.LambdaJsonSerializer.

Conclusion

There is a blueprint in Visual Studio for detecting labels for images uploaded in S3:

Detect image labels

By converting this blueprint to .NET Core 3.1 and using the new JSON serializer and ReadyToRun features, the cold start time is reduced by 40% when using 256 MB of memory. Performance improvements vary, so be sure to try these new features in your Lambda functions.

Start building .NET Core 3.1 Lambda functions with the latest versions of the AWS Toolkit for Visual Studio or the .NET Core Global Tool Amazon.Lambda.Tools. If you are not using .NET Core Lambda tooling, specify dotnetcore3.1 as the runtime value in your preferred tool to deploy Lambda functions.

We would like to hear your feedback for AWS .NET Lambda support. Contact the AWS .NET Team for Lambda questions through our .NET Lambda GitHub repository.

How to use AWS Secrets Manager client-side caching in .NET

Post Syndicated from Sepehr Samiei original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-client-side-caching-in-dotnet/

AWS Secrets Manager now has a client-side caching library for .NET that makes it easier to access secrets from .NET applications. This is in addition to client-side caching libraries for Java, JDBC, Python, and Go. These libraries help you improve availability, reduce latency, and reduce the cost of retrieving your secrets. The Secrets Manager cache library does this by serving secrets out of a local cache and eliminating frequent Secrets Manager API calls.

AWS Secrets Manager enables you to automatically rotate, manage, and retrieve secrets throughout their lifecycle. Users and applications can access secrets through a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. It offers secret rotation with built-in integration for AWS services such as Amazon Relational Database Service (Amazon RDS) and Amazon Redshift, and it’s also extensible to other types of secrets. Secrets Manager enables you to control access to secrets using fine-grained permissions, and all actions on secrets, including retrievals, are traceable and auditable through AWS CloudTrail.

AWS Secrets Manager client-side caching for .NET extends the benefits of AWS Secrets Manager to a wider range of use cases in .NET applications. These extra benefits are now available without having to spend precious time and effort developing your own caching solution.

In this post, I’ll discuss the following topics:

  • The benefits offered by Secrets Manager client-side caching library for .NET
  • How Secrets Manager client-side caching library for .NET works
  • How to use Secrets Manager client-side library in .NET applications
  • How to extend Secrets Manager client-side library with your own custom logic

The benefits offered by Secrets Manager client-side caching library for .NET

Client-side caching is beneficial in the following ways:

  • Availability: Network links sometimes suffer slowdowns or intermittent breaks. Client-side caching can significantly improve availability by eliminating a large number of API calls.
  • Latency: Retrieving secrets through API calls includes the network latency. Retrieving secrets from the local cache eliminates that latency and, therefore, improves performance.
  • Cost: Each API call to a Secrets Manager endpoint incurs a small charge. Using a local cache saves the costs associated with those API calls.

Using a client-side cache is a best practice; however, in the same way that you don’t want to reinvent the wheel every time you need one, crafting your own client-side caching solution is suboptimal. The Secrets Manager client-side caching library relieves you from writing your own client-side caching solution while still giving you its benefits. Furthermore, it includes best practices such as:

  • Automatically refreshing cached secrets: the library periodically updates secrets to ensure your application gets the most recent version of a secret. You can control and change refresh intervals using configuration properties.
  • Integration with your applications: To use this library, just add the dependency to your .NET project and provide the identifier to the secret you want to access in your code.

How Secrets Manager client-side caching library for .NET works

The library is implemented in .NET Standard. This means you can reuse the same library in projects of all flavors of .NET, including .NET Framework, .NET Core, and Xamarin.

Note: Because the AWS Secrets Manager client-side caching library depends on Microsoft.Extensions.Caching.Memory, make sure you add it to your project dependencies.

As an extension to Secrets Manager .NET SDK, the cache library provides you an alternative to direct invocation of Secrets Manager API methods. You invoke cache library methods, and if the value doesn’t exist in the cache, the cache library invokes Secrets Manager methods on your behalf.

The default refresh interval for the “current” version of a secret (the latest value stored in Secrets Manager for that secret) is 1 hour. This is because the latest version may change from time to time. The library allows you to configure this frequency to be higher or lower per your specific application requirements.

If you request a specific version of a secret by specifying both secret ID and secret version parameters, by default the library sets refresh interval to 48 hours. Since each version of a secret is immutable, there is no need to refresh them frequently.

You can also enable “Last known good value caching” to provide some protection in cases of transient network issues or service outages. If this is enabled, the cache will keep track of the last known good secret value, and in the event of an error occurring while refreshing a secret value from the service, the cache will return the last known good value. This feature is disabled by default, and can be enabled by setting the EnableLastKnownGoodValueCaching property of SecretsManagerCacheOptions class to true. You can pass your instance of SecretsManagerCacheOptions to SecretsManagerCache constructor.
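
Based on the options described above, enabling this behavior might look like the following sketch. The SecretsManagerCacheOptions class and EnableLastKnownGoodValueCaching property names are taken from this post’s description; check the library’s API reference for the exact current shape.

var options = new SecretsManagerCacheOptions
{
    EnableLastKnownGoodValueCaching = true // fall back to the last known good value if a refresh fails
};

var secretsManager = new AmazonSecretsManagerClient();
var cache = new SecretsManagerCache(secretsManager, options);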

The cache library provides a thread-safe implementation for both cache checks and entry population. Therefore, simultaneous requests for a secret that is not available in the cache result in a single API request to Secrets Manager.

How to use Secrets Manager client-side caching library in .NET applications

You can add the Secrets Manager client-side caching library to your projects either directly or through dependency injection. The dependency package is also available through NuGet. In this example, I use NuGet to add the library to my project. Open the NuGet Package Manager console and browse for AWSSDK.SecretsManager.Caching. Select the library and install it.
 

Figure 1: Select the AWSSDK.SecretsManager.Caching library


Before using the cache, you need to have at least one secret stored in your account using AWS Secrets Manager. To create a test secret:

  1. Go to the AWS Console, and then select AWS Secrets Manager.
  2. Select Store a new secret.
  3. For secret type, select Other type of secret, and then add three key/value pairs as shown here:
    
        {
          "Domain": "<yourDomainName>",
          "UserName": "<yourUserName>",
          "Password": "<yourPassword>"
        }  
        

  4. Next, create a cache object, and then invoke its methods with appropriate parameters. Below is a code snippet that uses the AWS Secrets Manager client-side cache library to access our secret. Note that this snippet assumes you’ve added the Newtonsoft.Json library to your project:
    
        public class MyClass : IDisposable
        {
            private readonly IAmazonSecretsManager secretsManager;
            private readonly ISecretsManagerCache cache;

            public MyClass()
            {
                this.secretsManager = new AmazonSecretsManagerClient();
                this.cache = new SecretsManagerCache(this.secretsManager);
            }

            public void Dispose()
            {
                this.secretsManager.Dispose();
                this.cache.Dispose();
            }

            public async Task<NetworkCredential> GetNetworkCredential(string secretId)
            {
                var sec = await this.cache.GetSecretAsync(secretId);
                var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
                return new NetworkCredential(
                    domain: jo["Domain"].ToObject<string>(),
                    userName: jo["UserName"].ToObject<string>(),
                    password: jo["Password"].ToObject<string>());
            }
        }
        

For ASP.NET projects, you can use the library with dependency injection. To do this, first register Secrets Manager caching with the dependency injection service collection in the Startup class of your ASP.NET project:


public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSecretsManagerCaching();
    }
}

Then, you’ll be able to consume the cache using constructor injection in your classes.


public class MyClass
    {
        private readonly ISecretsManagerCache cache;

        public MyClass(ISecretsManagerCache cache)
        {
            this.cache = cache;
        }

        public async Task<NetworkCredential> GetNetworkCredential(string secretId)
        {
            var sec = await this.cache.GetSecretAsync(secretId);
            var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
            return new NetworkCredential(
                domain: jo["Domain"].ToObject<string>(),
                userName: jo["UserName"].ToObject<string>(),
                password: jo["Password"].ToObject<string>());
        }
    }

How to add in-memory encryption and other custom extensions

The Secrets Manager caching library is designed to be extensible with your own custom logic. One possibility is to extend its implementation to include in-memory encryption of cached secrets, adding another layer of protection to retrieved secrets. For this purpose, you have to implement two of the interfaces included in the library. The library includes the SecretCacheEntry class, which implements the ISecretCacheEntry interface; this is the object that stores secrets in memory. You can create another class that implements the same ISecretCacheEntry interface to add in-memory encryption and decryption.


public class EncryptedSecretCacheEntry : ISecretCacheEntry
    {
        public EncryptedSecretCacheEntry(GetSecretValueResponse response, TimeSpan expiry)
        {
            this.VersionId = response.VersionId;
            this.LastRetreived = DateTime.UtcNow;
            this.Name = response.Name;
            this.Expires = this.LastRetreived.Add(expiry);

            if (response.SecretBinary != null && response.SecretBinary.Length > 0)
            {
                using (var ms = response.SecretBinary)
                {
                    this.SecretBinary = ms.ToArray();
                }
            }
            else
            {
                this.SecretString = response.SecretString; 
            }
        }
    
        private byte[] _EncryptedSecretString;
        public string SecretString
        {
            get { return MyCustomCipherService.DecryptString(_EncryptedSecretString); }
            set { _EncryptedSecretString = MyCustomCipherService.EncryptString(value); }
        }
    
        private byte[] _EncryptedSecretBinary;
        public byte[] SecretBinary
        {
            get { return MyCustomCipherService.Decrypt(_EncryptedSecretBinary); }
            set { _EncryptedSecretBinary = MyCustomCipherService.Encrypt(value); }
        }
    
        public string VersionId { get; private set; }

        public string Name { get; private set; }

        public string LocalId => $"{this.Name}:{this.VersionId}";

        public DateTime LastRetreived { get; private set; }

        public DateTime Expires { get; private set; }
    } 

The second step is to implement the ISecretCacheEntryFactory interface:


public class EncryptedSecretCacheEntryFactory : ISecretCacheEntryFactory
    {
        public ISecretCacheEntry CreateEntry(GetSecretValueResponse response, TimeSpan expiry)
        {
            return new EncryptedSecretCacheEntry(response, expiry);
        }
    }

With these two classes in place, I can now modify the constructor of my SecretsUserClass to plug my custom encryption logic into the Secrets Manager cache library:


public SecretsUserClass()
    {
        this.secretsManager = new AmazonSecretsManagerClient();
        this.cache = new SecretsManagerCache(
            this.secretsManager, new EncryptedSecretCacheEntryFactory(), new SecretsManagerCacheOptions(), null);
    }

You could even go further and fully customize the cache by implementing ISecretsManagerCache yourself, or by creating a child class that inherits from SecretsManagerCache and adds new methods.
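As an illustration of the inheritance route, here is a hypothetical sketch. It assumes SecretsManagerCache is not sealed and exposes the constructor and GetSecretAsync method used earlier in this post; the class and method names are mine, not part of the library:

public class JsonSecretsManagerCache : SecretsManagerCache
{
    public JsonSecretsManagerCache(IAmazonSecretsManager client) : base(client) { }

    // Convenience helper: return a single field from a JSON-formatted secret.
    public async Task<string> GetSecretFieldAsync(string secretId, string fieldName)
    {
        var secret = await this.GetSecretAsync(secretId);
        var jo = Newtonsoft.Json.Linq.JObject.Parse(secret.SecretString);
        return jo[fieldName]?.ToObject<string>();
    }
}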

Conclusion

It’s critical for enterprises to protect secrets from unauthorized access and to adhere to industry and legislative compliance requirements. Mitigating the risk of compromise often involves complex techniques, significant effort, and cost: applying encryption, managing vaults and HSMs, rotating secrets, auditing access, and so on. Because the level of effort is high, many developers fall back on the much simpler, but substantially riskier, alternative of hard-coding secrets in application code or storing them in plain text. These practices are problematic from a security and compliance point of view, but they should be understood as symptoms of a more fundamental problem: the complexity of the systems enterprises have built. To address weak security and compliance practices, you have to address that complexity. Complex systems can be simplified and made more secure when they are reusable, accessible, and automated, requiring no human interaction.

In this post, I’ve shown how you can improve availability, reduce latency, and reduce the cost of using your secrets by using the Secrets Manager client-side caching library for .NET. I also showed how to extend it by implementing your own custom logic for more advanced use-cases, such as in-memory encryption of secrets.

To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secrets Manager or refer to the Secrets Manager documentation. See the AWS Region Table for the list of AWS Regions where Secrets Manager is available.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sepehr Samiei

Sepehr is currently a Senior Solutions Architect at AWS. He started his professional career as a .Net developer, which continued for more than 10 years. Early on, he quickly became a fan of cloud computing and loves to help customers utilise the power of Microsoft tech on AWS. His wife and daughter are the most precious parts of his life, and he and his wife expect to have a son soon!

Developing .NET Core AWS Lambda functions

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/developing-net-core-aws-lambda-functions/

This post is courtesy of Mark Easton, Senior Solutions Architect – AWS

One of the biggest benefits of Lambda functions is that they isolate you from the underlying infrastructure. While that makes it easy to deploy and manage your code, it’s critical to have a clearly defined approach for testing, debugging, and diagnosing problems.

There’s a variety of best practices and AWS services to help you out. When developing Lambda functions in .NET, you can follow a four-pronged approach:

  • Unit testing your business logic
  • Local integration testing with the AWS SAM CLI
  • Logging events with Amazon CloudWatch
  • Tracing execution with AWS X-Ray

This post demonstrates the approach by creating a simple Lambda function that can be called from a gateway created by Amazon API Gateway and that returns the current UTC time. The post shows you how to design your code to allow for easy debugging, logging, and tracing.

If you haven’t created Lambda functions with .NET Core before, then the following posts can help you get started:

Unit testing Lambda functions

One of the easiest ways to create a .NET Core Lambda function is to use the .NET Core CLI and create a solution using the Lambda Empty Serverless template.

If you haven’t already installed the Lambda templates, run the following command:

dotnet new -i Amazon.Lambda.Templates::*

You can now use the template to create a serverless project and unit test project, and then add them to a .NET Core solution by running the following commands:

dotnet new serverless.EmptyServerless -n DebuggingExample
cd DebuggingExample
dotnet new sln -n DebuggingExample
dotnet sln DebuggingExample.sln add */*/*.csproj

Although you haven’t added any code yet, you can validate that everything’s working by executing the unit tests. Run the following commands:

cd test/DebuggingExample.Tests/
dotnet test

One of the key principles of effective unit testing is ensuring that units of functionality can be tested in isolation. It’s good practice to decouple the Lambda function’s business logic from the plumbing code that handles the actual Lambda requests.

Using your favorite editor, create a new file, ITimeProcessor.cs, in the src/DebuggingExample folder, and create the following basic interface:

using System;

namespace DebuggingExample
{
    public interface ITimeProcessor
    {
        DateTime CurrentTimeUTC();
    }
}

Then, create a new TimeProcessor.cs file in the src/DebuggingExample folder. The file contains a concrete class implementing the interface.

using System;

namespace DebuggingExample
{
    public class TimeProcessor : ITimeProcessor
    {
        public DateTime CurrentTimeUTC()
        {
            return DateTime.UtcNow;
        }
    }
} 

Now add a TimeProcessorTest.cs file to the test/DebuggingExample.Tests folder. The file should contain the following code:

using System;
using Xunit;

namespace DebuggingExample.Tests
{
    public class TimeProcessorTest
    {
        [Fact]
        public void TestCurrentTimeUTC()
        {
            // Arrange
            var processor = new TimeProcessor();
            var preTestTimeUtc = DateTime.UtcNow;

            // Act
            var result = processor.CurrentTimeUTC();

            // Assert time moves forwards 
            var postTestTimeUtc = DateTime.UtcNow;
            Assert.True(result >= preTestTimeUtc);
            Assert.True(result <= postTestTimeUtc);
        }
    }
}

You can then execute all the tests. From the test/DebuggingExample.Tests folder, run the following command:

dotnet test

Surfacing business logic in a Lambda function

Now that you have your business logic written and tested, you can surface it as a Lambda function. Edit the src/DebuggingExample/Function.cs file so that it calls the CurrentTimeUTC method:

using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DebuggingExample
{
    public class Functions
    {
        ITimeProcessor processor = new TimeProcessor();

        public APIGatewayProxyResponse Get(APIGatewayProxyRequest request, ILambdaContext context)
        {
            var result = processor.CurrentTimeUTC();

            return CreateResponse(result);
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
                {
                    { "Content-Type", "application/json" },
                    { "Access-Control-Allow-Origin", "*" }
                }
            };

            return response;
        }
    }
}

First, an instance of the TimeProcessor class is instantiated, and a Get() method is then defined to act as the entry point to the Lambda function.

By default, .NET Core Lambda function handlers expect their input as a Stream. This can be overridden by declaring a custom serializer and then defining the handler’s method signature using a custom request and response type.

Because the project was created using the serverless.EmptyServerless template, it already overrides the default behavior. It does this by including a using reference to Amazon.Lambda.APIGatewayEvents and then declaring a custom serializer. For more information about using custom serializers in .NET, see the AWS Lambda for .NET Core repository on GitHub.
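For comparison, a handler that keeps the default behavior would take and return a raw Stream and would not need the serializer attribute at all. The following is only an illustrative sketch (it assumes a using System.IO; directive and simply echoes its input):

public Stream Passthrough(Stream input, ILambdaContext context)
{
    // With no custom serializer, you read and write the raw JSON payload yourself.
    return input;
}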

Get() takes a couple of parameters:

  • The APIGatewayProxyRequest parameter contains the request from the API Gateway fronting the Lambda function
  • The optional ILambdaContext parameter contains details of the execution context.

The Get() method calls CurrentTimeUTC() to retrieve the time from the business logic.

Finally, the result from CurrentTimeUTC() is passed to the CreateResponse() method, which converts the result into an APIGatewayProxyResponse object to be returned to the caller.

Because the updated Lambda function no longer passes the unit tests, update the TestGetMethod in the test/DebuggingExample.Tests/FunctionTest.cs file by removing the following line:

Assert.Equal("Hello AWS Serverless", response.Body);

This leaves your FunctionTest.cs file as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Xunit;
using Amazon.Lambda.Core;
using Amazon.Lambda.TestUtilities;
using Amazon.Lambda.APIGatewayEvents;
using DebuggingExample;

namespace DebuggingExample.Tests
{
    public class FunctionTest
    {
        public FunctionTest()
        {
        }

        [Fact]
        public void TestGetMethod()
        {
            TestLambdaContext context;
            APIGatewayProxyRequest request;
            APIGatewayProxyResponse response;

            Functions functions = new Functions();

            request = new APIGatewayProxyRequest();
            context = new TestLambdaContext();
            response = functions.Get(request, context);
            Assert.Equal(200, response.StatusCode);
        }
    }
}

Again, you can check that everything is still working. From the test/DebuggingExample.Tests folder, run the following command:

dotnet test

Local integration testing with the AWS SAM CLI

Unit testing is a great start for testing thin slices of functionality. But to test that your API Gateway and Lambda function integrate with each other, you can test locally by using the AWS SAM CLI, installed as described in the AWS Lambda Developer Guide.

Unlike unit testing, which allows you to test functions in isolation outside of their runtime environment, the AWS SAM CLI executes your code in a locally hosted Docker container. It can also simulate a locally hosted API gateway proxy, allowing you to run component integration tests.

After you’ve installed the AWS SAM CLI, you can start using it by creating a template that describes your Lambda function by saving a file named template.yaml in the DebuggingExample directory with the following contents:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample SAM Template for DebuggingExample

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
    Function:
        Timeout: 10

Resources:

    DebuggingExampleFunction:
        Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
        Properties:
            FunctionName: DebuggingExample
            CodeUri: src/DebuggingExample/bin/Release/netcoreapp2.1/publish
            Handler: DebuggingExample::DebuggingExample.Functions::Get
            Runtime: dotnetcore2.1
            Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
                Variables:
                    PARAM1: VALUE
            Events:
                DebuggingExample:
                    Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
                    Properties:
                        Path: /
                        Method: get

Outputs:

    DebuggingExampleApi:
      Description: "API Gateway endpoint URL for Prod stage for Debugging Example function"
      Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/DebuggingExample/"

    DebuggingExampleFunction:
      Description: "Debugging Example Lambda Function ARN"
      Value: !GetAtt DebuggingExampleFunction.Arn

    DebuggingExampleFunctionIamRole:
      Description: "Implicit IAM Role created for Debugging Example function"
      Value: !GetAtt DebuggingExampleFunctionRole.Arn

Now that you have an AWS SAM CLI template, you can test your code locally. Because the Lambda function expects a request from API Gateway, create a sample API Gateway request. Run the following command:

sam local generate-event api > testApiRequest.json

You can now publish your DebuggingExample code locally and invoke it by passing in the sample request as follows:

dotnet publish -c Release
sam local invoke "DebuggingExampleFunction" --event testApiRequest.json

The first time that you run it, it might take some time to pull down the container image in which to host the Lambda function. After you’ve invoked it one time, the container image is cached locally, and execution speeds up.

Finally, rather than testing your function by sending it a sample request, test it with a real API gateway request by running API Gateway locally:

sam local start-api

If you now navigate to http://127.0.0.1:3000/ in your browser, you can get the API gateway to send a request to your locally hosted Lambda function. See the results in your browser.

Logging events with CloudWatch

Having a test strategy allows you to execute, test, and debug Lambda functions. After you’ve deployed your functions to AWS, you must still log what the functions are doing so that you can monitor their behavior.

The easiest way to add logging to your Lambda functions is to add code that writes events to CloudWatch. To do this, add a new method, LogMessage(), to the src/DebuggingExample/Function.cs file.

void LogMessage(ILambdaContext ctx, string msg)
{
    ctx.Logger.LogLine(
        string.Format("{0}:{1} - {2}", 
            ctx.AwsRequestId, 
            ctx.FunctionName,
            msg));
}

This takes in the context object from the Lambda function’s Get() method, and sends a message to CloudWatch by calling the context object’s Logger.LogLine() method.

You can now add calls to LogMessage() in the Get() method to log events in CloudWatch. It’s also a good idea to add a try/catch block to ensure that exceptions are logged as well.

        public APIGatewayProxyResponse Get(APIGatewayProxyRequest request, ILambdaContext context)
        {
            LogMessage(context, "Processing request started");

            APIGatewayProxyResponse response;
            try
            {
                var result = processor.CurrentTimeUTC();
                response = CreateResponse(result);

                LogMessage(context, "Processing request succeeded.");
            }
            catch (Exception ex)
            {
                LogMessage(context, string.Format("Processing request failed - {0}", ex.Message));
                response = CreateResponse(null);
            }

            return response;
        }

To validate that the changes haven’t broken anything, you can now execute the unit tests again. Run the following commands:

cd test/DebuggingExample.Tests/
dotnet test

Tracing execution with X-Ray

Your code now logs events in CloudWatch, which provides a solid mechanism to help monitor and diagnose problems.

However, it can also be useful to trace your Lambda function’s execution to help diagnose performance or connectivity issues, especially if it’s called by or calling other services. X-Ray provides a variety of features to help analyze and trace code execution.

To enable active tracing on your function you need to modify the SAM template we created earlier to add a new attribute to the function resource definition. With SAM this is as easy as adding the Tracing attribute and specifying it as Active below the Timeout attribute in the Globals section of the template.yaml file:

Globals:
    Function:
        Timeout: 10
        Tracing: Active

To call X-Ray from within your .NET Core code, you must add the AWSXRayRecorder package to your solution by running the following command in the src/DebuggingExample folder:

dotnet add package AWSXRayRecorder --version 2.2.1-beta

Then, add the following using statement at the top of the src/DebuggingExample/Function.cs file:

using Amazon.XRay.Recorder.Core;

Add a new method to the Function class, which takes a function and name and then records an X-Ray subsegment to trace the execution of the function.

        private T TraceFunction<T>(Func<T> func, string subSegmentName)
        {
            AWSXRayRecorder.Instance.BeginSubsegment(subSegmentName);
            T result = func();
            AWSXRayRecorder.Instance.EndSubsegment();

            return result;
        } 

You can now update the Get() method by replacing the following line:

var result = processor.CurrentTimeUTC();

Replace it with this line:

var result = TraceFunction(processor.CurrentTimeUTC, "GetTime");

The final version of Function.cs, in all its glory, is now:

using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;
using Newtonsoft.Json;
using Amazon.XRay.Recorder.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(
typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace DebuggingExample
{
    public class Functions
    {
        ITimeProcessor processor = new TimeProcessor();

        public APIGatewayProxyResponse Get(APIGatewayProxyRequest request, ILambdaContext context)
        {
            LogMessage(context, "Processing request started");

            APIGatewayProxyResponse response;
            try
            {
                var result = TraceFunction(processor.CurrentTimeUTC, "GetTime");
                response = CreateResponse(result);

                LogMessage(context, "Processing request succeeded.");
            }
            catch (Exception ex)
            {
                LogMessage(context, string.Format("Processing request failed - {0}", ex.Message));
                response = CreateResponse(null);
            }

            return response;
        }

        APIGatewayProxyResponse CreateResponse(DateTime? result)
        {
            int statusCode = (result != null) ?
                (int)HttpStatusCode.OK :
                (int)HttpStatusCode.InternalServerError;

            string body = (result != null) ?
                JsonConvert.SerializeObject(result) : string.Empty;

            var response = new APIGatewayProxyResponse
            {
                StatusCode = statusCode,
                Body = body,
                Headers = new Dictionary<string, string>
        {
            { "Content-Type", "application/json" },
            { "Access-Control-Allow-Origin", "*" }
        }
            };

            return response;
        }

        private void LogMessage(ILambdaContext context, string message)
        {
            context.Logger.LogLine(string.Format("{0}:{1} - {2}", context.AwsRequestId, context.FunctionName, message));
        }

        private T TraceFunction<T>(Func<T> func, string actionName)
        {
            AWSXRayRecorder.Instance.BeginSubsegment(actionName);
            T result = func();
            AWSXRayRecorder.Instance.EndSubsegment();

            return result;
        }
    }
}

Since AWS X-Ray requires the X-Ray daemon to collect trace information, if you want to test the code locally you should now install the AWS X-Ray daemon. Once it’s installed, confirm that the changes haven’t broken anything by running the unit tests again:

cd test/DebuggingExample.Tests/
dotnet test

For more information about using X-Ray from .NET Core, see the AWS X-Ray Developer Guide. For information about adding support for X-Ray in Visual Studio, see the New AWS X-Ray .NET Core Support post.

Deploying and testing the Lambda function remotely

Having created your Lambda function and tested it locally, you’re now ready to package and deploy your code.

First of all you need an Amazon S3 bucket to deploy the code into. If you don’t already have one, create a suitable S3 bucket.

You can now package the .NET Lambda Function and copy it to Amazon S3.

sam package \
  --template-file template.yaml \
  --output-template debugging-example.yaml \
  --s3-bucket debugging-example-deploy

Finally, deploy the Lambda function by running the following command:

sam deploy \
   --template-file debugging-example.yaml \
   --stack-name DebuggingExample \
   --capabilities CAPABILITY_IAM \
   --region eu-west-1

After your code has deployed successfully, test it from your local machine by running the following command:

dotnet lambda invoke-function DebuggingExample --region eu-west-1

Diagnosing the Lambda function

Having run the Lambda function, you can now monitor its behavior by logging in to the AWS Management Console and then navigating to CloudWatch Logs.

You can now click on the /aws/lambda/DebuggingExample log group to view all the recorded log streams for your Lambda function.

If you open one of the log streams, you see the various messages recorded for the Lambda function, including the two events explicitly logged from within the Get() method.

To review the logs locally, you can also use the AWS SAM CLI to retrieve CloudWatch logs and then display them in your terminal.

sam logs -n DebuggingExample --region eu-west-1

As a final alternative, you can also execute the Lambda function by choosing Test on the Lambda console. The execution results are displayed in the Log output section.

In the X-Ray console, the Service Map page shows a map of the Lambda function’s connections.

Your Lambda function is essentially standalone. However, the Service Map page can be critical in helping you understand performance issues when a Lambda function is connected with a number of other services.

If you open the Traces screen, you see the trace list showing all the trace results that have been recorded. Open one of the traces to see a breakdown of the Lambda function’s performance.


Conclusion

In this post, I showed you how to develop Lambda functions in .NET Core, how unit tests can be used, how to use the AWS SAM CLI for local integration tests, how CloudWatch can be used for logging and monitoring events, and finally how to use X-Ray to trace Lambda function execution.

Put together, these techniques provide a solid foundation to help you debug and diagnose your Lambda functions effectively. Explore each of the services further, because when it comes to production workloads, great diagnosis is key to providing a great and uninterrupted customer experience.

Sunday, 3 June 2018

Post Syndicated from georgi original http://georgi.unixsol.org/diary/archive.php/2018-06-03

Everyone needs to be saved from the filth called “advertising” in all its forms. For people with a computer and a browser, this has long been a solved problem thanks to AdBlock and similar plugins (as long as you don’t use a browser like Chrome, but in that case you deserve everything that happens to you).

As a rule, I don’t leave a computer without AdBlock installed; it’s practically a public service. The annoying part is that on a mobile phone, even if you use Firefox and have the right add-ons, the apps still manage to spam you.

Now, if you’ve rooted your phone (which nobody does), you can do something about it, but it’s a hassle, and as we all know, convenience always wins over security.

Fortunately, there is a very easy way to get rid of these spammers in two simple steps:

1. Install Blokada.

2. Activate it.

Et voilà – no more ads popping up anywhere.

How does it work? It registers itself as a VPN, which lets it filter DNS queries, so when an app asks for pagead.doubleclick.net and the like, it simply answers with 0.0.0.0.

Simple, effective, requires no root, and hits the wallet of all the internet riffraff who think they can shower you with their garbage 24/7.

Monitoring with Azure and Grafana

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/05/31/monitoring-with-azure-and-grafana/

What is whitebox monitoring? Why do we monitor our systems? What is the Azure Monitor plugin, and how can I use it to monitor my Azure resources?

Recently, I spoke at Swetugg 2018, a .NET conference held in Stockholm, Sweden, to answer these questions. In this video you’ll learn some basic monitoring principles, see some of the tools we use to monitor our systems, and get an inside look at the new Azure Monitor plugin for Grafana.

Protecting your API using Amazon API Gateway and AWS WAF — Part I

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-i/

This post courtesy of Thiago Morais, AWS Solutions Architect

When you build web applications or expose any data externally, you probably look for a platform where you can build highly scalable, secure, and robust REST APIs. As APIs are publicly exposed, there are a number of best practices for providing a secure mechanism to consumers using your API.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

In this post, I show you how to take advantage of the regional API endpoint feature in API Gateway, so that you can create your own Amazon CloudFront distribution and secure your API using AWS WAF.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

As you make your APIs publicly available, you are exposed to attackers trying to exploit your services in several ways. The AWS security team published a whitepaper solution using AWS WAF, How to Mitigate OWASP’s Top 10 Web Application Vulnerabilities.

Regional API endpoints

Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution created and managed by API Gateway. Before the launch of regional API endpoints, this was the default option when creating APIs using API Gateway. It primarily helped to reduce latency for API consumers that were located in different geographical locations than your API.

When API requests predominantly originate from an Amazon EC2 instance or other services within the same AWS Region as the API is deployed, a regional API endpoint typically lowers the latency of connections. It is recommended for such scenarios.

For better control around caching strategies, customers can use their own CloudFront distribution for regional APIs. They also have the ability to use AWS WAF protection, as I describe in this post.

Edge-optimized API endpoint

The following diagram is an illustrated example of the edge-optimized API endpoint where your API clients access your API through a CloudFront distribution created and managed by API Gateway.

Regional API endpoint

For the regional API endpoint, your customers access your API from the same Region in which your REST API is deployed. This helps you to reduce request latency and particularly allows you to add your own content delivery network, as needed.

Walkthrough

In this section, you implement the following steps:

  • Create a regional API using the PetStore sample API.
  • Create a CloudFront distribution for the API.
  • Test the CloudFront distribution.
  • Set up AWS WAF and create a web ACL.
  • Attach the web ACL to the CloudFront distribution.
  • Test AWS WAF protection.

Create the regional API

For this walkthrough, use an existing PetStore API. All new APIs launch by default as the regional endpoint type. To change the endpoint type for your existing API, choose the cog icon on the top right corner:

After you have created the PetStore API on your account, deploy a stage called “prod” for the PetStore API.

On the API Gateway console, select the PetStore API and choose Actions, Deploy API.

For Stage name, type prod and add a stage description.

Choose Deploy and the new API stage is created.

Use the following AWS CLI command to update your API from edge-optimized to regional:

aws apigateway update-rest-api \
--rest-api-id {rest-api-id} \
--patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL

A successful response looks like the following:

{
    "description": "Your first API with Amazon API Gateway. This is a sample API that integrates via HTTP with your demo Pet Store endpoints", 
    "createdDate": 1511525626, 
    "endpointConfiguration": {
        "types": [
            "REGIONAL"
        ]
    }, 
    "id": "{api-id}", 
    "name": "PetStore"
}

After you change your API endpoint to regional, you can now assign your own CloudFront distribution to this API.

Create a CloudFront distribution

To make things easier, I have provided an AWS CloudFormation template to deploy a CloudFront distribution pointing to the API that you just created. Click the button to deploy the template in the us-east-1 Region.

For Stack name, enter RegionalAPI. For APIGWEndpoint, enter your API FQDN in the following format:

{api-id}.execute-api.us-east-1.amazonaws.com

After you fill out the parameters, choose Next to continue the stack deployment. It takes a couple of minutes to finish the deployment. After it finishes, the Output tab lists the following items:

  • A CloudFront domain URL
  • An S3 bucket for CloudFront access logs
Output from CloudFormation

Output from CloudFormation

Test the CloudFront distribution

To see if the CloudFront distribution was configured correctly, use a web browser and enter the URL from your distribution, with the following parameters:

https://{your-distribution-url}.cloudfront.net/{api-stage}/pets

You should get the following output:

[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Set up AWS WAF and create a web ACL

With the new CloudFront distribution in place, you can now start setting up AWS WAF to protect your API.

For this demo, you deploy the AWS WAF Security Automations solution, which provides fine-grained control over the requests attempting to access your API.

For more information about deployment, see Automated Deployment. If you prefer, you can launch the solution directly into your account using the following button.

For CloudFront Access Log Bucket Name, add the name of the bucket created during the deployment of the CloudFormation stack for your CloudFront distribution.

The solution allows you to adjust thresholds and also choose which automations to enable to protect your API. After you finish configuring these settings, choose Next.

To start the deployment process in your account, follow the creation wizard and choose Create. It takes a few minutes to finish the deployment. You can follow the creation process in the CloudFormation console.

After the deployment finishes, you can see the new web ACL deployed on the AWS WAF console, AWSWAFSecurityAutomations.

Attach the AWS WAF web ACL to the CloudFront distribution

With the solution deployed, you can now attach the AWS WAF web ACL to the CloudFront distribution that you created earlier.

To assign the newly created AWS WAF web ACL, go back to your CloudFront distribution. After you open your distribution for editing, choose General, Edit.

Select the new AWS WAF web ACL that you created earlier, AWSWAFSecurityAutomations.

Save the changes to your CloudFront distribution and wait for the deployment to finish.

Test AWS WAF protection

To validate the AWS WAF Web ACL setup, use Artillery to load test your API and see AWS WAF in action.

To install Artillery on your machine, run the following command:

$ npm install -g artillery

After the installation completes, you can check if Artillery installed successfully by running the following command:

$ artillery -V
$ 1.6.0-12

As of the time of publication, Artillery is at version 1.6.0-12.

One of the AWS WAF web ACL rules that you have set up is a rate-based rule. By default, it blocks any requester that exceeds 2,000 requests in a 5-minute period. Try this out.

First, use cURL to query your distribution and see the API output:

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]

Based on the test above, the result looks good. But what if you max out the 2000 requests in under 5 minutes?

Run the following Artillery command:

artillery quick -n 2000 --count 10  https://{distribution-name}.cloudfront.net/prod/pets

This fires 2,000 requests at your API from each of 10 concurrent users. For brevity, I am not posting the Artillery output here.

After Artillery finishes its execution, try to run the cURL request again and see what happens:

 

$ curl -s https://{distribution-name}.cloudfront.net/prod/pets

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>ERROR: The request could not be satisfied</TITLE>
</HEAD><BODY>
<H1>ERROR</H1>
<H2>The request could not be satisfied.</H2>
<HR noshade size="1px">
Request blocked.
<BR clear="all">
<HR noshade size="1px">
<PRE>
Generated by cloudfront (CloudFront)
Request ID: [removed]
</PRE>
<ADDRESS>
</ADDRESS>
</BODY></HTML>

As you can see from the output above, the request was blocked by AWS WAF. Your IP address is removed from the blocked list after it falls below the request limit rate.

Conclusion

In this first part, you saw how to use the new API Gateway regional API endpoint together with Amazon CloudFront and AWS WAF to secure your API from a series of attacks.

In the second part, I will demonstrate some other techniques to protect your API using API keys and Amazon CloudFront custom headers.

Robin “Roblimo” Miller

Post Syndicated from corbet original https://lwn.net/Articles/755563/rss

The Linux Journal mourns the passing of Robin Miller, a longtime presence in our community. Miller was perhaps best known by the community for his role as Editor in Chief of Open Source Technology Group, the company that owned Slashdot, SourceForge.net, freshmeat, Linux.com, NewsForge, and ThinkGeek from 2000 to 2008.

masscan, macOS, and firewall

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/masscan-macos-and-firewall.html

One of the more useful features of masscan is the “--banners” check, which connects to the TCP port, sends a request, and gets a basic response back. However, since masscan has its own TCP stack, it will interfere with the operating system’s TCP stack if they share the same IPv4 address: the operating system will reply with a RST packet before the TCP connection can be established.

The way to fix this is to use the built-in packet-filtering firewall to block those packets in the operating-system TCP/IP stack. The masscan program still sees everything before the packet-filter, but the operating system can’t see anything after the packet-filter.

Note that we are talking about the “packet-filter” firewall feature here. Remember that macOS, like most operating systems these days, has two separate firewalls: an application firewall and a packet-filter firewall. The application firewall is the one you see in System Settings labeled “Firewall”, and it controls things based upon the application’s identity rather than by which ports it uses. This is normally “on” by default. The packet-filter is normally “off” by default and is of little use to normal users.

Also note that macOS changed packet-filters around version 10.10.5 (“Yosemite”, October 2014). The older one is known as “ipfw“, which was the default firewall for FreeBSD (much of macOS is based on FreeBSD). The replacement is known as PF, which comes from OpenBSD. Whereas you used to use the old “ipfw” command on the command line, you now use the “pfctl” command, as well as the “/etc/pf.conf” configuration file.

What we need to filter is the source port of the packets that masscan will send, so that when replies are received, they won’t reach the operating-system stack and instead go only to masscan. To do this, we need to find a range of ports that won’t conflict with the operating system. When the operating system creates outgoing connections, it randomly chooses a source port within a certain range; we want masscan to use source ports in a different range.

To figure out the range macOS uses, we run the following command:

sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last

On my laptop, which is probably the default for macOS, I get the following range. Sniffing with Wireshark confirms this is the range used for source ports for outgoing connections.

net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535

So this means I shouldn’t use source ports anywhere in the range 49152 to 65535. On my laptop, I’ve decided to use for masscan the ports 40000 to 41023. The range masscan uses must be a power of 2, so here I’m using 1024 (two to the tenth power).

To configure masscan, I can either type the parameter “--source-port 40000-41023” every time I run the program, or I can add the following line to /etc/masscan/masscan.conf. Remember that by default, masscan looks in that configuration file for any configuration parameters, so you don’t have to keep retyping them on the command line.

source-port = 40000-41023

Next, I need to add the following firewall rule to the bottom of /etc/pf.conf:

block in proto tcp from any to any port 40000 >< 41024

However, we aren’t done yet. By default, the packet-filter firewall is off on some versions of macOS. Therefore, every time you reboot your computer, you need to enable it. The simple way to do this is on the command line run:

pfctl -e

Or, if that doesn’t work, try:

pfctl -E

If the firewall is already running, then you’ll need to load the file explicitly (or reboot):

pfctl -f /etc/pf.conf

You can check to see if the rule is active:

pfctl -s rules

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

AWS Online Tech Talks – May and Early June 2018  

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.