Tag Archives: ASP.NET

Build Next-Generation Microservices with .NET 5 and gRPC on AWS

Post Syndicated from Matt Cline original https://aws.amazon.com/blogs/devops/next-generation-microservices-dotnet-grpc/

Modern architectures use multiple microservices in conjunction to drive customer experiences. At re:Invent 2015, AWS senior project manager Rob Brigham described Amazon’s architecture of many single-purpose microservices – including ones that render the “Buy” button, calculate tax at checkout, and hundreds more.

Microservices commonly communicate with JSON over HTTP/1.1. These technologies are ubiquitous and human-readable, but they aren’t optimized for communication between dozens or hundreds of microservices.

Next-generation Web technologies, including gRPC and HTTP/2, significantly improve communication speed and efficiency between microservices. AWS offers the most compelling experience for builders implementing microservices. Moreover, the addition of HTTP/2 and gRPC support in Application Load Balancer (ALB) provides an end-to-end solution for next-generation microservices. ALBs can inspect and route gRPC calls, enabling features like health checks, access logs, and gRPC-specific metrics.

This post demonstrates .NET microservices communicating with gRPC via Application Load Balancers. The microservices run on AWS Graviton2 instances, utilizing a custom-built 64-bit Arm processor to deliver up to 40% better price/performance than x86.

Architecture Overview

Modern Tacos is a new restaurant offering delivery. Customers place orders via a mobile app, then receive real-time status updates as their order is prepared and delivered.

The tutorial includes two microservices: “Submit Order” and “Track Order”. The Submit Order service receives orders from the app, then it calls the Track Order service to initiate order tracking. The Track Order service provides streaming updates to the app as the order is prepared and delivered.

Each microservice is deployed in an Amazon EC2 Auto Scaling group. Each group is behind an ALB that routes gRPC traffic to instances in the group.

Shows the communication flow of gRPC traffic from users through an ALB to EC2 instances.

This architecture is simplified to focus on ALB and gRPC functionality. Microservices are often deployed in containers for elastic scaling, improved reliability, and efficient resource utilization. ALB, gRPC, and .NET all work equally effectively in these architectures.

Comparing gRPC and JSON for microservices

Microservices typically communicate by sending JSON data over HTTP. As a text-based format, JSON is readable, flexible, and widely compatible. However, JSON also has significant weaknesses as a data interchange format. JSON’s flexibility makes enforcing a strict API specification difficult — clients can send arbitrary or invalid data, so developers must write rigorous data validation code. Additionally, performance can suffer at scale due to JSON’s relatively high bandwidth and parsing requirements. These factors also impact performance in constrained environments, such as smartphones and IoT devices. gRPC addresses all of these issues.

gRPC is an open-source framework designed to efficiently connect services. Instead of JSON, gRPC sends messages via a compact binary format called Protocol Buffers, or protobuf. Although protobuf messages are not human-readable, they utilize less network bandwidth and are faster to encode and decode. Operating at scale, these small differences multiply to a significant performance gain.
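To make the size difference concrete, here is a minimal sketch that serializes the sample's Order message with protobuf and with JSON and compares the payload lengths; the field values are illustrative only.

using System;
using Google.Protobuf;

// Minimal sketch: compare the size of a protobuf-encoded message with its JSON representation.
// The Order message comes from the sample's generated code; the field values are illustrative.
var order = new TrackOrder.Protos.Order
{
    OrderId = 1001,
    Status = TrackOrder.Protos.OrderStatus.Placed
};

byte[] protobufBytes = order.ToByteArray();             // compact binary encoding
string json = JsonFormatter.Default.Format(order);      // equivalent JSON text, for comparison

Console.WriteLine($"protobuf: {protobufBytes.Length} bytes, JSON: {json.Length} bytes");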

gRPC APIs define a strict contract that is automatically enforced for all messages. Based on this contract, gRPC implementations generate client and server code libraries in multiple programming languages. This allows developers to use higher-level constructs to call services, rather than programming against “raw” HTTP requests.

gRPC also benefits from being built on HTTP/2, a major revision of the HTTP protocol. In addition to the foundational performance and efficiency improvements from HTTP/2, gRPC utilizes the new protocol to support bi-directional streaming data. Implementing real-time streaming prior to gRPC typically required a completely separate protocol (such as WebSockets) that might not be supported by every client.
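To illustrate what streaming looks like in application code, the following sketch consumes a server-streaming gRPC call with grpc-dotnet. The GetOrderLocation method and its request and update messages are hypothetical names used for illustration; the sample's actual contract is defined by its protobuf specification.

// Sketch: consume a server-streaming gRPC call over a single HTTP/2 connection.
// Requires the Grpc.Net.Client and Grpc.Core namespaces.
// GetOrderLocation, OrderLocationRequest, and the update fields are hypothetical names.
using var channel = GrpcChannel.ForAddress("https://track-order.example.com");
var client = new TrackOrder.Protos.TrackOrder.TrackOrderClient(channel);

using var call = client.GetOrderLocation(new OrderLocationRequest { OrderId = 1001 });

// ReadAllAsync yields each message as the server streams it, with no polling or separate protocol.
await foreach (var update in call.ResponseStream.ReadAllAsync())
{
    Console.WriteLine($"Order {update.OrderId} is at {update.Latitude}, {update.Longitude}");
}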

gRPC for .NET developers

Several recent updates have made gRPC more useful to .NET developers. .NET 5 includes significant performance improvements to gRPC, and AWS has broad support for .NET 5. In May 2021, the .NET team announced their focus on a gRPC implementation written entirely in C#, called “grpc-dotnet”, which follows C# conventions very closely.

Instead of working with JSON, dynamic objects, or strings, C# developers calling a gRPC service use a strongly-typed client, automatically generated from the protobuf specification. This obviates much of the boilerplate validation required by JSON APIs, and it enables developers to use rich data structures. Additionally, the generated code enables full IntelliSense support in Visual Studio.

For example, the “Submit Order” microservice executes this code to call the “Track Order” microservice:

using var channel = GrpcChannel.ForAddress("https://track-order.example.com");

var trackOrderClient = new TrackOrder.Protos.TrackOrder.TrackOrderClient(channel);

var reply = await trackOrderClient.StartTrackingOrderAsync(new TrackOrder.Protos.Order
{
    DeliverTo = "Address",
    LastUpdated = Timestamp.FromDateTime(DateTime.UtcNow),
    OrderId = order.OrderId,
    PlacedOn = order.PlacedOn,
    Status = TrackOrder.Protos.OrderStatus.Placed
});

This code calls the StartTrackingOrderAsync method on the Track Order client, which looks just like a local method call. The method accepts a data structure that supports rich data types like DateTime and enumerations, instead of loosely typed JSON. The methods and data structures are defined by the Track Order service’s protobuf specification, and the .NET gRPC tools automatically generate the client and data structure classes without requiring any developer effort.
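On the server side, the same protobuf specification generates a base class that the Track Order service overrides. The following sketch assumes generated types named TrackOrderBase and TrackOrderReply; the exact names depend on the sample's .proto file.

// Sketch of a server-side handler built on the generated base class (ASP.NET Core gRPC service).
// Requires the Grpc.Core namespace; TrackOrderBase and TrackOrderReply are assumed names.
public class TrackOrderService : TrackOrder.Protos.TrackOrder.TrackOrderBase
{
    public override Task<TrackOrderReply> StartTrackingOrder(
        TrackOrder.Protos.Order request, ServerCallContext context)
    {
        // Persist the tracking record (for example, to the DynamoDB table) before replying.
        return Task.FromResult(new TrackOrderReply { OrderId = request.OrderId });
    }
}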

Configuring ALB for gRPC

To make gRPC calls to targets behind an ALB, create a load balancer target group and select gRPC as the protocol version. You can do this through the AWS Management Console, AWS Command Line Interface (CLI), AWS CloudFormation, or AWS Cloud Development Kit (CDK).

Screenshot of the AWS Management Console, showing how to configure a load balancer's target group for gRPC communication.

This CDK code creates a gRPC target group:

var targetGroup = new ApplicationTargetGroup(this, "TargetGroup", new ApplicationTargetGroupProps
{
    Protocol = ApplicationProtocol.HTTPS,
    ProtocolVersion = ApplicationProtocolVersion.GRPC,
    Vpc = vpc,
    Targets = new IApplicationLoadBalancerTarget[] {...}
});

gRPC requests work with target groups utilizing HTTP/2, but the gRPC protocol enables additional features including health checks, request count metrics, access logs that differentiate gRPC requests, and gRPC-specific response headers. gRPC also works with native ALB features like stickiness, multiple load balancing algorithms, and TLS termination.
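As a rough sketch, the target group above can be extended with a gRPC-aware health check in CDK. The health-check path and the HealthyGrpcCodes value below are assumptions for illustration; the real values depend on the service's protobuf specification.

var targetGroup = new ApplicationTargetGroup(this, "TargetGroup", new ApplicationTargetGroupProps
{
    Protocol = ApplicationProtocol.HTTPS,
    ProtocolVersion = ApplicationProtocolVersion.GRPC,
    Vpc = vpc,
    HealthCheck = new HealthCheck
    {
        Enabled = true,
        // "0" is the gRPC OK status code; the path targets an assumed health-check RPC
        HealthyGrpcCodes = "0",
        Path = "/modern_taco_shop.TrackOrder/HealthCheck",
        Interval = Duration.Seconds(30)
    },
    Targets = new IApplicationLoadBalancerTarget[] {...}
});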

Deploy the Tutorial

The sample provisions AWS resources via the AWS Cloud Development Kit (CDK). The CDK code is provided in C# so that .NET developers can use a familiar language.

The solution deployment steps include:

  • Configuring a domain name in Route 53.
  • Deploying the microservices.
  • Running the mobile app on AWS Device Farm.

The source code is available on GitHub.

Prerequisites

For this tutorial, you should have these prerequisites:

Configure the environment variables needed by the CDK. In the sample commands below, replace AWS_ACCOUNT_ID with your numeric AWS account ID. Replace AWS_REGION with the name of the region where you will deploy the sample, such as us-east-1 or us-west-2.

If you’re using a *nix shell such as Bash, run these commands:

export CDK_DEFAULT_ACCOUNT=AWS_ACCOUNT_ID
export CDK_DEFAULT_REGION=AWS_REGION

If you’re using PowerShell, run these commands:

$Env:CDK_DEFAULT_ACCOUNT="AWS_ACCOUNT_ID"
$Env:CDK_DEFAULT_REGION="AWS_REGION"
Set-DefaultAWSRegion -Region AWS_REGION

Throughout this tutorial, replace the placeholder values shown in capital letters (such as AWS_ACCOUNT_ID) with the appropriate value.

Save the directory path where you cloned the GitHub repository. In the sample commands below, replace EXAMPLE_DIRECTORY with this path.

In your terminal or PowerShell, run these commands:

cd EXAMPLE_DIRECTORY/src/ModernTacoShop/Common/cdk
cdk bootstrap --context domain-name=PARENT_DOMAIN_NAME
cdk deploy --context domain-name=PARENT_DOMAIN_NAME

The CDK output includes the name of the S3 bucket that will store deployment packages. Save the name of this bucket. In the sample commands below, replace SHARED_BUCKET_NAME with this name.

Deploy the Track Order microservice

Compile the Track Order microservice for the Arm microarchitecture utilized by AWS Graviton2 processors. The TrackOrder.csproj file includes a target that automatically packages the compiled microservice into a ZIP file. You will upload this ZIP file to S3 for use by CodeDeploy. Next, you will utilize the CDK to deploy the microservice’s AWS infrastructure, and then install the microservice on the EC2 instance via CodeDeploy.

The CDK stack deploys these resources (a simplified CDK sketch of the load balancer wiring follows the list):

  • An Amazon EC2 Auto Scaling group.
  • An Application Load Balancer (ALB) using gRPC, targeting the Auto Scaling group and configured with microservice health checks.
  • A subdomain for the microservice, targeting the ALB.
  • A DynamoDB table used by the microservice.
  • CodeDeploy infrastructure to deploy the microservice to the Auto Scaling group.
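A simplified CDK sketch of how the ALB listener ties the gRPC target group to the Auto Scaling group is shown below; the certificate and autoScalingGroup variables are assumed to be defined elsewhere in the stack, and the construct names are illustrative rather than the sample's actual code.

// Simplified sketch: expose the gRPC target group through an HTTPS listener on the ALB.
// "certificate" (an ACM certificate) and "autoScalingGroup" are assumed to exist in the stack.
var loadBalancer = new ApplicationLoadBalancer(this, "TrackOrderALB", new ApplicationLoadBalancerProps
{
    Vpc = vpc,
    InternetFacing = true
});

var listener = loadBalancer.AddListener("GrpcListener", new BaseApplicationListenerProps
{
    Port = 443,
    Protocol = ApplicationProtocol.HTTPS,
    Certificates = new[] { ListenerCertificate.FromCertificateManager(certificate) }
});

listener.AddTargets("TrackOrderTargets", new AddApplicationTargetsProps
{
    Port = 443,
    Protocol = ApplicationProtocol.HTTPS,
    ProtocolVersion = ApplicationProtocolVersion.GRPC,
    Targets = new IApplicationLoadBalancerTarget[] { autoScalingGroup }
});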

If you’re using the AWS CLI, run these commands:

cd EXAMPLE_DIRECTORY/src/ModernTacoShop/TrackOrder/src/
dotnet publish --runtime linux-arm64 --self-contained
aws s3 cp ./bin/TrackOrder.zip s3://SHARED_BUCKET_NAME
etag=$(aws s3api head-object --bucket SHARED_BUCKET_NAME \
    --key TrackOrder.zip --query ETag --output text)
cd ../cdk
cdk deploy

The CDK output includes the name of the CodeDeploy deployment group. Use this name to run the next command:

aws deploy create-deployment --application-name ModernTacoShop-TrackOrder \
    --deployment-group-name TRACK_ORDER_DEPLOYMENT_GROUP_NAME \
    --s3-location bucket=SHARED_BUCKET_NAME,bundleType=zip,key=TrackOrder.zip,etag=$etag \
    --file-exists-behavior OVERWRITE

If you’re using PowerShell, run these commands:

cd EXAMPLE_DIRECTORY/src/ModernTacoShop/TrackOrder/src/
dotnet publish --runtime linux-arm64 --self-contained
Write-S3Object -BucketName SHARED_BUCKET_NAME `
    -Key TrackOrder.zip `
    -File ./bin/TrackOrder.zip
Get-S3ObjectMetadata -BucketName SHARED_BUCKET_NAME `
    -Key TrackOrder.zip `
    -Select ETag `
    -OutVariable etag
cd ../cdk
cdk deploy

The CDK output includes the name of the CodeDeploy deployment group. Use this name to run the next command:

New-CDDeployment -ApplicationName ModernTacoShop-TrackOrder `
    -DeploymentGroupName TRACK_ORDER_DEPLOYMENT_GROUP_NAME `
    -S3Location_Bucket SHARED_BUCKET_NAME `
    -S3Location_BundleType zip `
    -S3Location_Key TrackOrder.zip `
    -S3Location_ETag $etag[0] `
    -RevisionType S3 `
    -FileExistsBehavior OVERWRITE

Deploy the Submit Order microservice

The steps to deploy the Submit Order microservice are identical to the Track Order microservice. See that section for details.

If you’re using the AWS CLI, run these commands:

cd EXAMPLE_DIRECTORY/src/ModernTacoShop/SubmitOrder/src/
dotnet publish --runtime linux-arm64 --self-contained
aws s3 cp ./bin/SubmitOrder.zip s3://SHARED_BUCKET_NAME
etag=$(aws s3api head-object --bucket SHARED_BUCKET_NAME \
    --key SubmitOrder.zip --query ETag --output text)
cd ../cdk
cdk deploy

The CDK output includes the name of the CodeDeploy deployment group. Use this name to run the next command:

aws deploy create-deployment --application-name ModernTacoShop-SubmitOrder \
    --deployment-group-name SUBMIT_ORDER_DEPLOYMENT_GROUP_NAME \
    --s3-location bucket=SHARED_BUCKET_NAME,bundleType=zip,key=SubmitOrder.zip,etag=$etag \
    --file-exists-behavior OVERWRITE

If you’re using PowerShell, run these commands:

cd EXAMPLE_DIRECTORY/src/ModernTacoShop/SubmitOrder/src/
dotnet publish --runtime linux-arm64 --self-contained
Write-S3Object -BucketName SHARED_BUCKET_NAME `
    -Key SubmitOrder.zip `
    -File ./bin/SubmitOrder.zip
Get-S3ObjectMetadata -BucketName SHARED_BUCKET_NAME `
    -Key SubmitOrder.zip `
    -Select ETag `
    -OutVariable etag
cd ../cdk
cdk deploy

The CDK output includes the name of the CodeDeploy deployment group. Use this name to run the next command:

New-CDDeployment -ApplicationName ModernTacoShop-SubmitOrder `
    -DeploymentGroupName SUBMIT_ORDER_DEPLOYMENT_GROUP_NAME `
    -S3Location_Bucket SHARED_BUCKET_NAME `
    -S3Location_BundleType zip `
    -S3Location_Key SubmitOrder.zip `
    -S3Location_ETag $etag[0] `
    -RevisionType S3 `
    -FileExistsBehavior OVERWRITE

Data flow diagram

Architecture diagram showing the complete data flow of the sample gRPC microservices application.
  1. The app submits an order via gRPC.
  2. The Submit Order ALB routes the gRPC call to an instance.
  3. The Submit Order instance stores order data.
  4. The Submit Order instance calls the Track Order service via gRPC.
  5. The Track Order ALB routes the gRPC call to an instance.
  6. The Track Order instance stores tracking data.
  7. The app calls the Track Order service, which streams the order’s location during delivery.

Test the microservices

Once the CodeDeploy deployments have completed, test both microservices.

First, check the load balancers’ status. Go to Target Groups in the AWS Management Console, which will list one target group for each microservice. Click each target group, then click “Targets” in the lower details pane. Every EC2 instance in the target group should have a “healthy” status.

Next, verify each microservice via gRPCurl. This tool lets you invoke gRPC services from the command line. Install gRPCurl by following its installation instructions, and then test each microservice:

grpcurl submit-order.PARENT_DOMAIN_NAME:443 modern_taco_shop.SubmitOrder/HealthCheck
grpcurl track-order.PARENT_DOMAIN_NAME:443 modern_taco_shop.TrackOrder/HealthCheck

If a service is healthy, it will return an empty JSON object.
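For reference, a minimal server-side health-check handler could look like the following sketch, assuming the HealthCheck RPC takes and returns the well-known Empty message (consistent with the empty JSON object that gRPCurl prints):

// Sketch: minimal HealthCheck handler, assuming Empty request and response messages
// (Google.Protobuf.WellKnownTypes.Empty and Grpc.Core.ServerCallContext).
public override Task<Empty> HealthCheck(Empty request, ServerCallContext context)
{
    // Optionally verify downstream dependencies (such as the DynamoDB table) before reporting healthy.
    return Task.FromResult(new Empty());
}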

Run the mobile app

You will run a pre-compiled version of the app on AWS Device Farm, which lets you test on a real device without managing any infrastructure. Alternatively, compile your own version via the AndroidApp.FrontEnd project within the solution located at EXAMPLE_DIRECTORY/src/ModernTacoShop/AndroidApp/AndroidApp.sln.

Go to Device Farm in the AWS Management Console. Under “Mobile device testing projects”, click “Create a new project”. Enter “ModernTacoShop” as the project name, and click “Create Project”. In the ModernTacoShop project, click the “Remote access” tab, then click “Start a new session”. Under “Choose a device”, select the Google Pixel 3a running OS version 10, and click “Confirm and start session”.

Screenshot of the AWS Device Farm showing a Google Pixel 3a.

Once the session begins, click “Upload” in the “Install applications” section. Unzip and upload the APK file located at EXAMPLE_DIRECTORY/src/ModernTacoShop/AndroidApp/com.example.modern_tacos.grpc_tacos.apk.zip, or upload an APK that you created.

Screenshot of the gRPC microservices demo Android app, showing the map that displays streaming location data.

Screenshot of the gRPC microservices demo Android app, on the order preparation screen.

Once the app has uploaded, drag up from the bottom of the device screen in order to reach the “All apps” screen. Click the ModernTacos app to launch it.

Once the app launches, enter the parent domain name in the “Domain Name” field. Click the “+” and “-” buttons next to each type of taco in order to create your order, then click “Submit Order”. The order status will initially display as “Preparing”, and will switch to “InTransit” after about 30 seconds. The Track Order service will stream a random route to the app, updating with new position data every 5 seconds. After approximately 2 minutes, the order status will change to “Delivered” and the streaming updates will stop.
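On the service side, this kind of periodic update is typically produced by a server-streaming handler that writes to the response stream on an interval. The sketch below uses hypothetical method, message, and helper names rather than the sample's actual specification:

// Hypothetical server-streaming handler: push a new position to the client every 5 seconds.
public override async Task GetOrderLocation(OrderLocationRequest request,
    IServerStreamWriter<OrderLocationUpdate> responseStream, ServerCallContext context)
{
    // GenerateRandomRoute is an assumed helper that yields the simulated route points.
    foreach (var point in GenerateRandomRoute(request.OrderId))
    {
        await responseStream.WriteAsync(point);
        await Task.Delay(TimeSpan.FromSeconds(5), context.CancellationToken);
    }
}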

Once you’ve run a successful test, click “Stop session” in the console.

Cleaning up

To avoid incurring charges, use the cdk destroy command to delete the stacks in the reverse order that you deployed them.

You can also delete the resources via CloudFormation in the AWS Management Console.

In addition to deleting the stacks, you must delete the Route 53 hosted zone and the Device Farm project.

Conclusion

This post demonstrated multiple next-generation technologies for microservices, including end-to-end HTTP/2 and gRPC communication over Application Load Balancer, AWS Graviton2 processors, and .NET 5. These technologies enable builders to create microservices applications with new levels of performance and efficiency.

Matt Cline

Matt Cline is a Solutions Architect at Amazon Web Services, supporting customers in his home city of Pittsburgh PA. With a background as a full-stack developer and architect, Matt is passionate about helping customers deliver top-quality applications on AWS. Outside of work, Matt builds (and occasionally finishes) scale models and enjoys running a tabletop role-playing game for his friends.

Ulili Nhaga

Ulili Nhaga is a Cloud Application Architect at Amazon Web Services in San Diego, California. He helps customers modernize, architect, and build highly scalable cloud-native applications on AWS. Outside of work, Ulili loves playing soccer, cycling, Brazilian BBQ, and enjoying time on the beach.

Frictionless hosting of containerized ASP.NET web apps using Amazon Lightsail

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/frictionless-hosting-of-containerized-asp-net-web-apps-using-amazon-lightsail/

This post is written by Fahad Mustafa, Cloud Application Architect, AWS Professional Services

There are many ways to deploy ASP.NET web apps to AWS, each with its own use cases and pricing model. But what if you have a small website and database that you must deploy rapidly, manage, and scale? What if you want a cost-effective, simple monthly plan? In these cases, Amazon Lightsail is a great choice. This post shows you how to take a containerized ASP.NET web application that connects to a PostgreSQL database and deploy it to Lightsail, so you can get your ASP.NET web app up and running quickly.

Product Overview

Amazon Lightsail is an easy way to get started on AWS. It gives you building blocks to deploy an application or website and provision a database at an affordable, monthly price.

Lightsail is perfect for students, small businesses, and startups to get their website or application up and running in the cloud. By providing a secure, highly available, and managed environment, Lightsail does all the heavy lifting, like setting up IAM roles and policies.

Lightsail can also run containers! By pointing Lightsail to a public image on Amazon ECR or Docker Hub, or uploading an image from your local machine, you can easily run the container, scale it, monitor it and use a custom domain.

Overview of solution

To deploy an ASP.NET app that connects to a PostgreSQL database, you create a Lightsail container service and a PostgreSQL database through the AWS Management Console. You then create your app and container image, push the image to Lightsail, and finally create a Lightsail deployment to run the container.

solution diagram

Overview of steps

In this post, you create a sample ASP.NET web app through the .NET CLI. Alternatively, you can use Visual Studio to create the app.

This is the sequence of steps I review in this post:

  • Create a PostgreSQL database
  • Create a Lightsail container service
  • Create an ASP.NET web app
  • Create a Dockerfile and build image
  • Upload the image to Lightsail
  • Deploy and run the image

Prerequisites

For this walkthrough, you should have the following prerequisites:

Walkthrough

Create a PostgreSQL database

In this step, you create a PostgreSQL database through the Lightsail console.

Create the database

  1. Sign in to the Lightsail console.
  2. On the Lightsail home page, choose the Databases tab.
  3. Choose Create database.
  4. Choose the Database location by changing the AWS Region and Availability Zone.
  5. Choose the database engine. In this example, select PostgreSQL 12.6.
  6. Optional – Specify login credentials. If not changed, AWS generates a default secure password.
  7. Optional – Specify the master database name. If not changed, AWS will use “dbmaster” as the default.
  8. Choose the database plan. Compare the plan’s memory, CPU, storage, and transfer quota to decide which best fits your needs. The smallest database plan is Free Tier eligible.
  9. Identify your database by giving it a unique name.
  10. Choose Create database.

Creating and configuring the database can take a few minutes. Once ready, the status changes to Available. For more information and options on creating a database in Lightsail, see Creating a database in Amazon Lightsail.

available database

Now you are ready to connect to the database and create a table. To connect, see Connecting to your PostgreSQL database in Amazon Lightsail. This sample uses a database named aspnetlightsaildb and a table named Person that you can create by running the following script using PgAdmin. Note that the Owner value is dbmasteruser. This is the default username AWS generates. If you changed the default, then use the username you specified in step 6.

-- Database: aspnetlightsaildb
CREATE DATABASE aspnetlightsaildb
    WITH 
    OWNER = dbmasteruser
    ENCODING = 'UTF8'
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    TABLESPACE = pg_default
    CONNECTION LIMIT = -1;	
-- Table: public.Person
CREATE TABLE IF NOT EXISTS public."Person"
(
    "Id" integer NOT NULL GENERATED ALWAYS AS IDENTITY ( INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 2147483647 CACHE 1 ),
    "Name" text COLLATE pg_catalog."default",
    "DateOfBirth" date,
    "Address" text COLLATE pg_catalog."default",
    CONSTRAINT "Person_pkey" PRIMARY KEY ("Id")
)
TABLESPACE pg_default;
ALTER TABLE public."Person"
    OWNER to dbmasteruser;

Now that your database and table are created, you can create a container service.

Create a Lightsail container service

In this step, you create a Lightsail container service that is ready to accept your container images.

Create the container service

  1. Sign in to the Lightsail console.
  2. On the Lightsail home page, choose the Containers tab.
  3. Choose Create container service.
  4. In the Create a container service page, choose Change AWS Region, then choose an AWS Region for your container service.
  5. Choose a capacity for your container service. For more information, see Container service capacity (scale and power).
  6. Skip the Set up your first deployment step as you’ll create the deployment after creating the container image on your dev machine.
  7. Enter a name for your container service. Take note of this name; you’ll need it later to deploy the container to Lightsail.
  8. Click Create container service.

After a few minutes, your container service status changes from Pending to Ready. This indicates you can now deploy images. If this is the first time you created a service, it can take 10–15 minutes for the status to become Ready.

container service

Create an ASP.NET web app

Using the .NET CLI, you’ll create a sample ASP.NET web app. In an empty directory run the following command:

dotnet new webapp --name HelloWorldLightsail

The “webapp” segment of the command specifies the project template to use. In this case, it’s a default ASP.NET web app. The “--name” parameter is the name of the ASP.NET project.

To connect to your PostgreSQL database from ASP.NET, you must install the “Npgsql” NuGet package. In the root directory of the project, run the following command in the terminal:

dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL --version 5.0.6

Once installed, you create a Model class to represent the data and a DbContext class to connect and query the database.

using System;
using Microsoft.EntityFrameworkCore;

// Model class representing a row in the Person table
public class Person
{
    public int Id { get; set; }

    public string Name { get; set; }

    public DateTime DateOfBirth { get; set; }

    public string Address { get; set; }
}

// DbContext used to connect to and query the PostgreSQL database
public class PostgreSqlContext : DbContext
{
    public PostgreSqlContext(DbContextOptions<PostgreSqlContext> options) : base(options)
    {
    }

    public DbSet<Person> Person { get; set; }
}

The next step is to add the connection string to appsettings.json. In the root of the settings file, add a new ConnectionStrings property as shown below. The following placeholders are required:

  • lightsail-endpoint: The database endpoint as shown in the Lightsail console.
  • db-name: The name of the database you want to connect to.
  • db-username: The username as shown in the Lightsail console.
  • db-password: The password as shown in the Lightsail console.

"ConnectionStrings": {
    "AspnetLightsailDb": "Server=<lightsail-endpoint>;Port=5432;Database=<db-name>;User Id=<db-username>;Password=<db-password>;"
  }

The next step is to tell ASP.NET where to find the connection string and which DbContext class to use. This is done by configuring the DbContext in Startup.cs. In the ConfigureServices method, add the following line of code:

  services.AddDbContext<PostgreSqlContext>(options => options.UseNpgsql(Configuration.GetConnectionString("AspnetLightsailDb")));

 

Now you are ready to perform operations against the database through the Person property of the PostgreSqlContext instance.

For example, to fetch all records from the Person table:

public IList<Person> Person { get;set; }

        public async Task OnGetAsync()
        {
            Person = await _context.Person.ToListAsync();
        }

You now have an ASP.NET web application that can query the “Person” table against the PostgreSQL database.
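Writes work the same way through the DbContext. As a small sketch (assuming the PostgreSqlContext is injected as _context, as in the read example above, and using sample values), adding a record looks like this:

public async Task OnPostAsync()
{
    // Sample values for illustration only.
    var person = new Person
    {
        Name = "Jane Doe",
        DateOfBirth = new DateTime(1990, 1, 1),
        Address = "123 Main Street"
    };

    _context.Person.Add(person);
    await _context.SaveChangesAsync();   // persists the new row to the PostgreSQL database
}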

Create a Dockerfile and build image

In order to containerize the web app, you must create a Dockerfile. This file provides instructions to Docker on how to build the container image.

To create a Dockerfile and build image

  1. In the root directory of the project you created (where the .csproj file lives), create an empty file named “Dockerfile”. Note this file does not have an extension.
  2. Open the file with a text editor or IDE and insert the following:
# https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /source

# copy csproj and restore as distinct layers
COPY *.csproj .
RUN dotnet restore

# copy everything else and build app
COPY . .
RUN dotnet publish -c release -o /app --no-restore

# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app ./
ENTRYPOINT ["dotnet", "HelloWorldLightsail.dll"]
  3. To build the image, open a terminal in the same directory as the Dockerfile. Run the following command to build the image:
    docker build -t helloworldlightsail .
    The “-t” parameter is a human-readable tag you give the image to make it easy to identify.
  4. After the command completes, you can verify that the image exists by running:
    docker images
    You should see the newly created image.
    newly created image

Upload the image to Lightsail

In this step, you upload the newly built image to the Lightsail container service that you created earlier.

To upload the image to Lightsail

  1. Ensure you have configured the AWS CLI to access AWS.
  2. In a terminal enter the following command:
    aws lightsail push-container-image --region ap-southeast-2 --service-name aspnet-helloworld --label helloworldlightsail --image helloworldlightsail:latest
    The --region and --service-name parameters should match the container service you created through the AWS Management Console. The --label parameter is a descriptive name you give the image when it’s stored in the container service. This will help you track the different versions of the image. The --image parameter consists of the image name and tag on your local machine that you want to push to Lightsail. Read more about how to push images to Lightsail.
  3. After the command runs successfully browse your container service in the Lightsail console and click the “Images” tab. You should see the uploaded image.


Deploy and run the image

Now that your image is uploaded to the container service it’s time to create a deployment to run the app.

To create a deployment

  1. Go to the Deployments tab in the Lightsail console.
  2. Click on Create your first deployment.
  3. Enter the Container name.
  4. Click Choose stored image and select the image you uploaded in the previous step.
  5. Click on Add open ports to add a port mapping to the container. This allows Lightsail to forward web traffic to your ASP.NET web app. By default, the ASP.NET web server listens on port 80.
  6. Under the Public endpoint section, select the container from the drop-down. This specifies which container Lightsail will forward traffic to since a single deployment can have more than one container.
  7. Click Save and deploy

Your configuration should look like this. Read more about creating container services deployments in Lightsail.

configuration overview

After the deployment is complete, you can navigate to the Public domain of your container service. You will see your ASP.NET web app in action!

public domain

Conclusion

In this post, I demonstrated how easy it is to create a PostgreSQL DB and deploy an ASP.NET web app to Amazon Lightsail, going from a container on your dev machine to a publicly accessible, scalable, and secure cloud environment within minutes.

You can now add a custom domain to your web app through the Lightsail console. Additionally, you can increase the scale of your container to keep up with demand based on the useful CPU and memory metrics provided in the console.

If you have more advanced needs for your web app, you have the whole robust ecosystem of AWS at your disposal. You can deploy your ASP.NET web app to Amazon Elastic Container Service (Amazon ECS) or even decide to go completely serverless and utilize AWS Lambda and API Gateway.

Visit the Amazon Lightsail homepage to get started with your next idea and read the docs for more details about container services on Amazon Lightsail.

 

Standardizing CI/CD pipelines for .NET web applications with AWS Service Catalog

Post Syndicated from Borja Prado Miguelez original https://aws.amazon.com/blogs/devops/standardizing-cicd-pipelines-net-web-applications-aws-service-catalog/

As companies implement DevOps practices, standardizing the deployment of continuous integration and continuous deployment (CI/CD) pipelines is increasingly important. Your developer teams may not have the ability or time to create CI/CD pipelines and processes from scratch for each new project. Additionally, creating a standardized DevOps process can help your entire company ensure that all development teams are following security and governance best practices.

Another challenge that large enterprise and small organization IT departments deal with is managing their software portfolio. This becomes even harder in agile scenarios working with mobile and web applications where you need to not only provision the cloud resources for hosting the application, but also have a proper DevOps process in place.

Having a standardized portfolio of products for your development teams enables you to provision the infrastructure resources needed to create development environments, and helps reduce the operation overhead and accelerate the overall development process.

This post shows you how to provide your end-users a catalog of resources with all the functionality a development team needs to check in code and run it in a highly scalable load balanced cloud compute environment.

We use AWS Service Catalog to provision a cloud-based AWS Cloud9 IDE, a CI/CD pipeline using AWS CodePipeline, and the AWS Elastic Beanstalk compute service to run the website. AWS Service Catalog allows organizations to keep control of the services and products that can be provisioned across the organization’s AWS accounts, while CodePipeline orchestrates the application deployment to provide an effective software delivery process. The following diagram illustrates this architecture.

Architecture Diagram

You can find all the templates we use in this post on the AWS Service Catalog Elastic Beanstalk Reference architecture GitHub repo.

Provisioning the AWS Service Catalog portfolio

To get started, you must provision the AWS Service Catalog portfolio with AWS CloudFormation.

  1. Choose Launch Stack, which creates the AWS Service Catalog portfolio in your AWS account.
    Launch Stack action
  2. If you’re signed into AWS as an AWS Identity and Access Management (IAM) role, add your role name in the LinkedRole1 parameter.
  3. Continue through the stack launch screens using the defaults and choosing Next.
  4. Select the acknowledgements in the Capabilities box on the third screen.

When the stack is complete, a top-level CloudFormation stack with the default name SC-RA-Beanstalk-Portfolio (containing five nested stacks) has created the AWS Service Catalog products with the services the development team needs to implement a CI/CD pipeline and host the web application. This AWS Service Catalog reference architecture provisions the AWS Service Catalog products needed to set up the DevOps pipeline and the application environment.

Cloudformation Portfolio Stack

When the portfolio has been created, you have completed the administrator setup. As an end-user (any roles you added to the LinkedRole1 or LinkedRole2 parameters), you can access the portfolio section on the AWS Service Catalog console and review the product list, which now includes the AWS Cloud9 IDE, Elastic Beanstalk application, and CodePipeline project that we will use for continuous delivery.

Service Catalog Products

On the AWS Service Catalog administrator section, inside the Elastic Beanstalk reference architecture portfolio, we can add and remove groups, roles, and users by choosing Add groups, roles, users on the Group, roles, and users tab. This lets us enable developers or other users to deploy the products from this portfolio.

Service Catalog Groups, Roles, and Users

Solution overview

The rest of this post walks you through how to provision the resources you need for CI/CD and web application deployment. You complete the following steps:

  1. Deploy the CI/CD pipeline.
  2. Provision the AWS Cloud9 IDE.
  3. Create the Elastic Beanstalk environment.

Deploying the CI/CD pipeline

The first product you need is the CI/CD pipeline, which manages the code and deployment process.

  1. Sign in to the AWS Service Catalog console in the same Region where you launched the CloudFormation stack earlier.
  2. On the Products list page, locate the CodePipeline product you created earlier.
    Service Catalog Products List
  3. Choose Launch product.

You now provision the CI/CD pipeline. For this post, we use some name examples for the pipeline name, Elastic Beanstalk application name, and code repository, which you can of course modify.

  1. Enter a name for the provisioned CodePipeline product.
  2. Select the Windows version and click Next.
  3. For the application and repository name, enter dotnetapp.
  4. Leave all other settings at their default and click Next.
  5. Choose Launch to start the provisioning of the CodePipeline product.

When you’re finished, the provisioned pipeline should appear on the Provisioned products list.

CodePipeline Product Provisioned

  1. Copy the CloneUrlHttp output to use in a later step.

You now have the CI/CD pipeline ready, with the code repository and the continuous integration service that compiles the code, runs tests, and generates the software bundle stored in Amazon Simple Storage Service (Amazon S3) ready to be deployed. The following diagram illustrates this architecture.

CodePipeline Configuration Diagram

When the Elastic Beanstalk environment is provisioned, the deploy stage takes care of deploying the bundle application stored in Amazon S3, so the DevOps pipeline takes care of the full software delivery as shown in the earlier architecture diagram.

The Region we use should support the WINDOWS_SERVER_2019_CONTAINER build image that AWS CodeBuild uses. You can modify the environment type or create a custom one by editing the CloudFormation template used for the CodePipeline for Windows.

Provisioning the AWS Cloud9 IDE

To show the full lifecycle of the deployment of a web application with Elastic Beanstalk, we use a .NET web application, but this reference architecture also supports Linux. To provision an AWS Cloud9 environment, complete the following steps:

  1. From the AWS Service Catalog product list, choose the AWS Cloud9 IDE.
  2. Click Launch product.
  3. Enter a name for the provisioned Cloud9 product and click Next.
  4. Enter an EnvironmentName and select the InstanceType.
  5. Set LinkedRepoPath to /dotnetapp.
  6. For LinkedRepoCloneUrl, enter the CloneUrlHttp from the previous step.
  7. Leave the default parameters for tagOptions and Notifications, and click Launch.
    Cloud9 Environment Settings

Now we download a sample ASP.NET MVC application in the AWS Cloud9 IDE, move it under the folder we specified in the previous step, and push the code.

  1. Open the IDE with the Cloud9Url link from AWS Service Catalog output.
  2. Get the sample .NET web application and move it under the dotnetapp folder. See the following code:
    cp -R aws-service-catalog-reference-architectures/labs/SampleDotNetApplication/* dotnetapp/
  3. Check in the sample application to the CodeCommit repo:
    cd dotnetapp
    git add --all
    git commit -m "initial commit"
    git push

Now that we have committed the application to the code repository, it’s time to review the DevOps pipeline.

  1. On the CodePipeline console, choose Pipelines.

You should see the pipeline ElasticBeanstalk-ProductPipeline-dotnetapp running.

CodePipeline Execution

  1. Wait until the three pipeline stages are complete; this may take several minutes.

The code commit and web application build stages succeed, but the code deployment stage fails because we haven’t provisioned the Elastic Beanstalk environment yet.

If you want to deploy your own sample or custom ASP.NET web application, CodeBuild requires the build specification file buildspec-build-dotnet.yml for the .NET Framework, which is located under the elasticbeanstalk/codepipeline folder in the GitHub repo. See the following example code:

version: 0.2
env:
  variables:
    DOTNET_FRAMEWORK: 4.6.1
phases:
  build:
    commands:
      - nuget restore
      - msbuild /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /t:Package
      - dir obj\Release\Package
artifacts:
  files:
    - 'obj/**/*'
    - 'codepipeline/*'

Creating the Elastic Beanstalk environment

Finally, it’s time to provision the hosting system, an Elastic Beanstalk Windows-based environment, where the .NET sample web application runs. For this, we follow the same approach from the previous steps and provision the Elastic Beanstalk AWS Service Catalog product.

  1. On the AWS Service Catalog console, on the Product list page, choose the Elastic Beanstalk application product.
  2. Choose Launch product.
  3. Enter an environment name and click Next.
  4. Enter the application name.
  5. Enter the S3Bucket and S3SourceBundle that were generated (you can retrieve them from the Amazon S3 console).
  6. Set the SolutionStackName to 64bit Windows Server Core 2019 v2.5.8 running IIS 10.0. See the Elastic Beanstalk supported platforms documentation for up-to-date platform names.
    Elastic Beanstalk Environment Settings
  7. Launch the product.
  8. To verify that you followed the steps correctly, review that the provisioned products are all available (AWS Cloud9 IDE, Elastic Beanstalk CodePipeline project, and Elastic Beanstalk application) and that the recently created Elastic Beanstalk environment is healthy.

As in the previous step, if you’re planning to deploy your own sample or custom ASP.NET web application, AWS CodeDeploy requires the deploy specification file buildspec-deploy-dotnet.yml for the .NET Framework, which should be located under the codepipeline folder in the GitHub repo. See the following code:

version: 0.2
phases:
  pre_build:
    commands:          
      - echo application deploy started on `date`      
      - ls -l
      - ls -l obj/Release/Package
      - aws s3 cp ./obj/Release/Package/SampleWebApplication.zip s3://$ARTIFACT_BUCKET/$EB_APPLICATION_NAME-$CODEBUILD_BUILD_NUMBER.zip
  build:
    commands:
      - echo Pushing package to Elastic Beanstalk...      
      - aws elasticbeanstalk create-application-version --application-name $EB_APPLICATION_NAME --version-label v$CODEBUILD_BUILD_NUMBER --description "Auto deployed from CodeCommit build $CODEBUILD_BUILD_NUMBER" --source-bundle S3Bucket="$ARTIFACT_BUCKET",S3Key="$EB_APPLICATION_NAME-$CODEBUILD_BUILD_NUMBER.zip"
      - aws elasticbeanstalk update-environment --environment-name "EB-ENV-$EB_APPLICATION_NAME" --version-label v$CODEBUILD_BUILD_NUMBER

The same codepipeline folder contains some build and deploy specification files besides the .NET ones, which you could use if you prefer to use a different framework like Python to deploy a web application with Elastic Beanstalk.

  1. To complete the application deployment, go to the application pipeline and release the change, which triggers the pipeline with the application environment now ready.
    Deployment Succeeded

When you create the environment through the AWS Service Catalog, you can access the provisioned Elastic Beanstalk environment.

  1. In the Events section, locate the LoadBalancerURL, which is the public endpoint that we use to access the website.
    Elastic Beanstalk LoadBalancer URL
  2. In your preferred browser, check that the website has been successfully deployed.
    ASP.NET Sample Web Application

Cleaning up

When you’re finished, you should complete the following steps to delete the resources you provisioned to avoid incurring further charges and keep the account free of unused resources.

  1. The CodePipeline product creates an S3 bucket which you must empty from the S3 console.
  2. On the AWS Service Catalog console, end the provisioned resources from the Provisioned products list.
  3. As administrator, in the CloudFormation console, delete the stack SC-RA-Beanstalk-Portfolio.

Conclusions

This post has shown you how to deploy a standardized DevOps pipeline which was then used to manage and deploy a sample .NET application on Elastic Beanstalk using the Service Catalog Elastic Beanstalk reference architecture. AWS Service Catalog is the ideal service for administrators who need to centrally provision and manage the AWS services needed with a consistent governance model. Deploying web applications to Elastic Beanstalk is very simple for developers and provides built in scalability, patch maintenance, and self-healing for your applications.

The post includes the information and references on how to extend the solution with other programming languages and operating systems supported by Elastic Beanstalk.

About the Authors

Borja Prado Miguelez

Borja is a Senior Specialist Solutions Architect for Microsoft workloads at Amazon Web Services. He is passionate about web architectures and application modernization, helping customers to build scalable solutions with .NET and migrate their Windows workloads to AWS.

Chris Chapman

Chris is a Partner Solutions Architect covering AWS Marketplace, Service Catalog, and Control Tower. Chris was a software developer and data engineer for many years and now his core mission is helping customers and partners automate AWS infrastructure deployment and provisioning.

Modernizing and containerizing a legacy MVC .NET application with Entity Framework to .NET Core with Entity Framework Core: Part 2

Post Syndicated from Pratip Bagchi original https://aws.amazon.com/blogs/devops/modernizing-and-containerizing-a-legacy-mvc-net-application-with-entity-framework-to-net-core-with-entity-framework-core-part-2/

This is the second post in a two-part series in which you migrate and containerize a modernized enterprise application. In Part 1, we walked you through a step-by-step approach to re-architect a legacy ASP.NET MVC application and port it to .NET Core. In this post, you will deploy the previously re-architected application to Amazon Elastic Container Service (Amazon ECS) and run it as a task with AWS Fargate.

Overview of solution

In the first post, you ported the legacy ASP.NET MVC application to ASP.NET Core. You will now package the same application as a Docker container and host it in an ECS cluster.

The following diagram illustrates this architecture.

Architecture Diagram

 

You first launch an Amazon RDS SQL Server Express instance (1) and create a Cycle Store database on that instance, with tables for the different categories and subcategories of bikes. You use the previously re-architected and modernized ASP.NET Core application as a starting point for this post; this app uses AWS Secrets Manager (2) to fetch the database credentials needed to access the Amazon RDS instance. In the next step, you build a Docker image of the application and push it to Amazon Elastic Container Registry (Amazon ECR) (3). After this, you create an ECS cluster (4) to run the Docker image as an AWS Fargate task.
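For reference, retrieving those credentials from AWS Secrets Manager with the AWS SDK for .NET typically looks like the following sketch; the secret name used here is a placeholder rather than the name created by the sample's CloudFormation stack.

using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

// Sketch: fetch the database credentials stored by the CloudFormation stack.
// "cycle-store-db-credentials" is a placeholder secret name.
var client = new AmazonSecretsManagerClient();
var response = await client.GetSecretValueAsync(new GetSecretValueRequest
{
    SecretId = "cycle-store-db-credentials"
});

// SecretString holds a JSON document containing the user name and password for the RDS instance.
string secretJson = response.SecretString;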

Prerequisites

For this walkthrough, you should have the following prerequisites:

This post implements the solution in the us-east-1 Region.

Source Code

Clone the source code from the GitHub repo. The source code folder contains the re-architected source code, the AWS CloudFormation template to launch the infrastructure, and the Amazon ECS task definition.

Setting up the database server

To make sure that your database works out of the box, you use a CloudFormation template to create an instance of Microsoft SQL Server Express, AWS Secrets Manager secrets to store the database credentials, security groups, and IAM roles to access Amazon Relational Database Service (Amazon RDS) and Secrets Manager. This stack takes approximately 15 minutes to complete, with most of that time spent provisioning the services.

  1. On the AWS CloudFormation console, choose Create stack.
  2. For Prepare template, select Template is ready.
  3. For Template source, select Upload a template file.
  4. Upload SqlServerRDSFixedUidPwd.yaml, which is available in the GitHub repo.
  5. Choose Next.
    Create AWS CloudFormation stack
  6. For Stack name, enter SQLRDSEXStack.
  7. Choose Next.
  8. Keep the rest of the options at their default.
  9. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  10. Choose Create stack.
    Add IAM Capabilities
  11. When the status shows as CREATE_COMPLETE, choose the Outputs tab.
  12. Record the value for the SQLDatabaseEndpoint key.
    CloudFormation output
  13. Connect to the database from SQL Server Management Studio with the following credentials: User id: DBUser and Password: DBU$er2020

Setting up the CYCLE_STORE database

To set up your database, complete the following steps:

  1. In SQL Server Management Studio, connect to the DB instance using the user ID and password you defined earlier.
  2. Under File, choose New.
  3. Choose Query with Current Connection. Alternatively, choose New Query from the toolbar.
    Run database restore script
  4. Open CYCLE_STORE_Schema_data.sql from the GitHub repository and run it.

This creates the CYCLE_STORE database with all the tables and data you need.

Setting up the ASP.NET MVC Core application

To set up your ASP.NET application, complete the following steps:

  1. Open the re-architected application code that you cloned from the GitHub repo. The Dockerfile added to the solution enables Docker support.
  2. Open the appsettings.Development.json file and replace the RDS endpoint in the ConnectionStrings section with the SQLDatabaseEndpoint output of the AWS CloudFormation stack, without the port number (:1433 for SQL Server).

The ASP.NET application should now load with bike categories and subcategories. See the following screenshot.

Final run

Setting up Amazon ECR

To set up your repository in Amazon ECR, complete the following steps:

  1. On the Amazon ECR console, choose Repositories.
  2. Choose Create repository.
  3. For Repository name, enter coretoecsrepo.
  4. Choose Create repository.
    Create repository
  5. Copy the repository URI to use later.
  6. Select the repository you just created and choose View push commands.
    View Push Commands
  7. In the folder where you cloned the repo, navigate to the AdventureWorksMVCCore.Web folder.
  8. In the View push commands popup window, complete steps 1–4 to push your Docker image to Amazon ECR.
    Push to Amazon ECR

The following screenshot shows the completion of steps 1 and 2; make sure your working directory is set to AdventureWorksMVCCore.Web as shown below.

Login
The following screenshot shows completion of Steps 3 and 4.

Amazon ECR Push

Setting up Amazon ECS

To set up your ECS cluster, complete the following steps:

    1. On the Amazon ECS console, choose Clusters.
    2. Choose Create cluster.
    3. Choose the Networking only cluster template.
    4. Name your cluster cycle-store-cluster.
    5. Leave everything else as its default.
    6. Choose Create cluster.
    7. Select your cluster.
    8. Choose Task Definitions and choose Create new Task Definition.
    9. On the Select launch type compatibility page choose FARGATE and click Next step.
    10. On the Configure task and container definitions page, scroll to the bottom of the page and choose Configure via JSON.
    11. In the text area, enter the task definition JSON (task-definition.json) provided in the GitHub repo. Make sure to replace [YOUR-AWS-ACCOUNT-NO] in task-definition.json with your AWS account number on lines 44, 68, and 71. The task definition file assumes that you named your repository coretoecsrepo; if you named it something else, modify this file accordingly. It also assumes that you are using us-east-1 as your default Region, so replace the Region in task-definition.json on lines 15 and 44 if you are using a different Region.
    Replace AWS Account number
    12. Choose Save.
    13. On the Task Definitions page, select cycle-store-td.
    14. From the Actions drop-down menu, choose Run Task.
    Run task
    15. For Launch type, choose FARGATE.
    16. Choose your default VPC as Cluster VPC.
    17. Select at least one Subnet.
    18. Choose Edit Security Groups and select ECSSecurityGroup (created by the AWS CloudFormation stack).
    Select security group
    19. Choose Run Task.

Running your application

Choose the link under the task and find the public IP. When you navigate to the URL http://your-public-ip, you should see the .NET Core Cycle Store web application user interface running in Amazon ECS.

See the following screenshot.

Final run

Cleaning up

To avoid incurring future charges, delete the stacks you created for this post.

  1. On the AWS CloudFormation console, choose Stacks.
  2. Select SQLRDSEXStack.
  3. In the Stack details pane, choose Delete.

Conclusion

This post concludes your journey of modernizing a legacy enterprise ASP.NET MVC web application to .NET Core and containerizing it on Amazon ECS using the AWS Fargate compute engine on Linux containers. Porting to .NET Core lets you run enterprise workloads without any dependency on a Windows environment, and AWS Fargate gives you a way to run containers directly without managing any EC2 instances while keeping full control. AWS has also recently launched additional tools in this area.

About the Authors

Saleha Haider is a Senior Partner Solution Architect with Amazon Web Services.
Pratip Bagchi is a Partner Solutions Architect with Amazon Web Services.