Tag Archives: Migration

Unified serverless streaming ETL architecture with Amazon Kinesis Data Analytics

Post Syndicated from Ram Vittal original https://aws.amazon.com/blogs/big-data/unified-serverless-streaming-etl-architecture-with-amazon-kinesis-data-analytics/

Businesses across the world are seeing a massive influx of data at an enormous pace through multiple channels. With the advent of cloud computing, many companies are realizing the benefits of getting their data into the cloud to gain meaningful insights and save costs on data processing and storage. As businesses embark on their journey towards cloud solutions, they often come across challenges in building a serverless, streaming, real-time ETL (extract, transform, load) architecture that enables them to extract events from multiple streaming sources, correlate those streaming events, perform enrichments, run streaming analytics, and build data lakes from streaming events.

In this post, we discuss the concept of unified streaming ETL architecture using a generic serverless streaming architecture with Amazon Kinesis Data Analytics at the heart of the architecture for event correlation and enrichments. This solution can address a variety of streaming use cases with various input sources and output destinations. We then walk through a specific implementation of the generic serverless unified streaming architecture that you can deploy into your own AWS account to experiment with and evolve this architecture to address your business challenges.

Overview of solution

As data sources grow in volume, variety, and velocity, the management of data and event correlation becomes more challenging. Most of the challenges stem from data silos, in which different teams and applications manage data and events using their own tools and processes.

Modern businesses need a single, unified view of the data environment to get meaningful insights through streaming multi-joins, such as the correlation of sensory events and time-series data. Event correlation plays a vital role in automatically reducing noise and allowing the team to focus on those issues that really matter to the business objectives.

To realize this outcome, the solution proposes creating a three-stage architecture:

  • Ingestion
  • Processing
  • Analysis and visualization

The source can be a varied set of inputs comprising structured datasets like databases or raw data feeds like sensor data that can be ingested as single or multiple parallel streams. The solution envisions multiple hybrid data sources as well. After it’s ingested, the data is divided into single or multiple data streams depending on the use case and passed through a preprocessor (via an AWS Lambda function). This highly customizable processor transforms and cleanses data to be processed through the analytics application. Furthermore, the architecture allows you to enrich data or validate it against standard sets of reference data, for example validating postal codes for address data received from the source to verify its accuracy. After the data is processed, it’s sent to various sink platforms depending on your preferences, which could range from storage solutions to visualization solutions, or even a dataset in a high-performance database.

The solution is designed with flexibility as a key tenet to address multiple real-world use cases. The following diagram illustrates the solution architecture.

The architecture has the following workflow:

  1. We use AWS Database Migration Service (AWS DMS) to push records from the data source into AWS in real time or batch. For our use case, we use AWS DMS to fetch records from an on-premises relational database.
  2. AWS DMS writes records to Amazon Kinesis Data Streams. The data is split into multiple streams as needed, depending on the channels.
  3. A Lambda function picks up the data stream records and preprocesses them (adding the record type). This is an optional step, depending on your use case.
  4. Processed records are sent to the Kinesis Data Analytics application for querying and correlating in-application streams, taking into account Amazon Simple Storage Service (Amazon S3) reference data for enrichment.

Solution walkthrough

For this post, we demonstrate an implementation of the unified streaming ETL architecture using Amazon RDS for MySQL as the data source and Amazon DynamoDB as the target. We use a simple order service data model that comprises orders, items, and products, where an order can have multiple items and the product is linked to an item in a reference relationship that provides detail about the item, such as description and price.
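
To make the data model concrete, the following is a minimal sketch of what the three tables might look like. The authoritative definitions are in orderdb-setup.sql in the repo; the column names are taken from the schema mapping later in this post, while the table names and the product columns are illustrative assumptions.

-- Illustrative sketch only; see orderdb-setup.sql in the repo for the actual DDL.
CREATE TABLE product (
  productId          INT PRIMARY KEY,
  productName        VARCHAR(100),    -- product attribute names are assumptions
  productDescription VARCHAR(255),
  productPrice       DECIMAL(10,2)
);

CREATE TABLE orders (
  orderId       INT PRIMARY KEY,
  orderAmount   DECIMAL(10,2),
  orderStatus   VARCHAR(20),
  orderDateTime DATETIME,
  shipToName    VARCHAR(100),
  shipToAddress VARCHAR(200),
  shipToCity    VARCHAR(50),
  shipToState   VARCHAR(20),
  shipToZip     VARCHAR(10)
);

CREATE TABLE orderItem (
  orderId      INT,
  itemId       INT,
  productId    INT,
  itemQuantity INT,
  itemAmount   DECIMAL(10,2),
  itemStatus   VARCHAR(20),
  PRIMARY KEY (orderId, itemId),
  FOREIGN KEY (orderId) REFERENCES orders(orderId),
  FOREIGN KEY (productId) REFERENCES product(productId)
);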

We implement a streaming serverless data pipeline that ingests orders and items as they are recorded in the source system into Kinesis Data Streams via AWS DMS. We build a Kinesis Data Analytics application that correlates orders and items along with reference product information and creates a unified and enriched record. Kinesis Data Analytics outputs this unified and enriched data to Kinesis Data Streams. A Lambda function consumer processes the data stream and writes the unified and enriched data to DynamoDB.

To launch this solution in your AWS account, use the GitHub repo.

Prerequisites

Before you get started, make sure you have the following prerequisites:

Setting up AWS resources in your account

To set up your resources for this walkthrough, complete the following steps:

  1. Set up the AWS CDK for Java on your local workstation. For instructions, see Getting Started with the AWS CDK.
  2. Install Maven binaries for Java if you don’t have Maven installed already.
  3. If this is the first installation of the AWS CDK, make sure to run cdk bootstrap.
  4. Clone the following GitHub repo.
  5. Navigate to the project root folder and run the following commands to build and deploy:
    1. mvn compile
    2. cdk deploy UnifiedStreamETLCommonStack UnifiedStreamETLDataStack UnifiedStreamETLProcessStack
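
Put together, the build and deploy sequence looks like the following (a consolidated sketch of the steps above, run from the project root):

# run cdk bootstrap only for the first CDK deployment in this account and Region
mvn compile
cdk bootstrap
cdk deploy UnifiedStreamETLCommonStack UnifiedStreamETLDataStack UnifiedStreamETLProcessStack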

Setting up the orders data model for CDC

In this next step, you set up the orders data model for change data capture (CDC).

  1. On the Amazon Relational Database Service (Amazon RDS) console, choose Databases.
  2. Choose your database and make sure that you can connect to it securely for testing, using a bastion host or other mechanism (not detailed in this post).
  3. Start MySQL Workbench and connect to your database using your DB endpoint and credentials.
  4. To create the data model in your Amazon RDS for MySQL database, run orderdb-setup.sql.
  5. On the AWS DMS console, test the connections to your source and target endpoints.
  6. Choose Database migration tasks.
  7. Choose your AWS DMS task and choose Table statistics.
  8. To update your table statistics, restart the migration task (with full load) for replication.
  9. From your MySQL Workbench session, run orders-data-setup.sql to create orders and items.
  10. Verify that CDC is working by checking the Table statistics tab.

Setting up your Kinesis Data Analytics application

To set up your Kinesis Data Analytics application, complete the following steps:

  1. Upload the product reference file products.json to your S3 bucket with the logical ID prefix unifiedBucketId (which was previously created by cdk deploy).

You can now create a Kinesis Data Analytics application and map the resources to the data fields.

  1. On the Amazon Kinesis console, choose Analytics Application.
  2. Choose Create application.
  3. For Runtime, choose SQL.
  4. Connect the streaming data created using the AWS CDK as a unified order stream.
  5. Choose Discover schema and wait for it to discover the schema for the unified order stream. If discovery fails, update the records on the source Amazon RDS tables and send streaming CDC records.
  6. Save and move to the next step.
  7. Connect the reference S3 bucket you created with the AWS CDK and uploaded with the reference data.
  8. Input the following:
    1. products.json for the path to the S3 object
    2. Products for the in-application reference table name
  9. Discover the schema, then save and close.
  10. Choose SQL Editor and start the Kinesis Data Analytics application.
  11. Edit the schema for SOURCE_SQL_STREAM_001 and map the data resources as follows:
Column Name        Column Type    Row Path
orderId            INTEGER        $.data.orderId
itemId             INTEGER        $.data.itemId
itemQuantity       INTEGER        $.data.itemQuantity
itemAmount         REAL           $.data.itemAmount
itemStatus         VARCHAR        $.data.itemStatus
COL_timestamp      VARCHAR        $.metadata.timestamp
recordType         VARCHAR        $.metadata.table-name
operation          VARCHAR        $.metadata.operation
partitionkeytype   VARCHAR        $.metadata.partition-key-type
schemaname         VARCHAR        $.metadata.schema-name
tablename          VARCHAR        $.metadata.table-name
transactionid      BIGINT         $.metadata.transaction-id
orderAmount        DOUBLE         $.data.orderAmount
orderStatus        VARCHAR        $.data.orderStatus
orderDateTime      TIMESTAMP      $.data.orderDateTime
shipToName         VARCHAR        $.data.shipToName
shipToAddress      VARCHAR        $.data.shipToAddress
shipToCity         VARCHAR        $.data.shipToCity
shipToState        VARCHAR        $.data.shipToState
shipToZip          VARCHAR        $.data.shipToZip

  1. Choose Save schema and update stream samples.

When it’s complete, verify that nothing appears in the error stream for about 1 minute. If an error occurs, check that you defined the schema correctly.

  1. On the Kinesis Data Analytics console, choose your application and choose Real-time analytics.
  2. Go to the SQL results and run kda-orders-setup.sql to create in-application streams (a simplified sketch of this SQL follows this list).
  3. From the application, choose Connect to destination.
  4. For Kinesis data stream, choose unifiedOrderEnrichedStream.
  5. For In-application stream, choose ORDER_ITEM_ENRICHED_STREAM.
  6. Choose Save and Continue.
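
The in-application streams created by kda-orders-setup.sql correlate order and item records and enrich them with the product reference data. The following is a simplified sketch of the kind of SQL involved, not the actual script: the column names follow the schema mapping above, while the window size, the recordType values, the product attribute names, and the join key to the Products reference table are assumptions.

-- Sketch only; the authoritative statements are in kda-orders-setup.sql in the repo.
CREATE OR REPLACE STREAM "ORDER_ITEM_ENRICHED_STREAM" (
    "orderId"      INTEGER,
    "itemId"       INTEGER,
    "itemQuantity" INTEGER,
    "itemAmount"   REAL,
    "orderAmount"  DOUBLE,
    "orderStatus"  VARCHAR(16),
    "productName"  VARCHAR(64));   -- product attributes are assumed names

-- Pump that correlates order and item records arriving on the unified input stream
-- within a one-minute window, using the record type added by the preprocessor,
-- and enriches the result from the in-application reference table "Products".
CREATE OR REPLACE PUMP "ORDER_ITEM_ENRICHED_PUMP" AS
INSERT INTO "ORDER_ITEM_ENRICHED_STREAM"
SELECT STREAM o."orderId", i."itemId", i."itemQuantity", i."itemAmount",
              o."orderAmount", o."orderStatus", p."productName"
FROM "SOURCE_SQL_STREAM_001" AS o
JOIN "SOURCE_SQL_STREAM_001" OVER (RANGE INTERVAL '1' MINUTE PRECEDING) AS i
  ON o."orderId" = i."orderId"
 AND o."recordType" = 'orders'      -- record type values are assumptions
 AND i."recordType" = 'orderItem'
JOIN "Products" AS p
  ON i."itemId" = p."productId";    -- join key to the reference data is an assumption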

Testing the unified streaming ETL architecture

You’re now ready to test your architecture.

  1. Navigate to your Kinesis Data Analytics application.
  2. Choose your app and choose Real-time analytics.
  3. Go to the SQL results and choose Real-time analytics.
  4. Choose the in-application stream ORDER_ITEM_ENRICHED_STREAM to see the results of the real-time join of records from the order and order item streaming Kinesis events.
  5. On the Lambda console, search for UnifiedStreamETLProcess.
  6. Choose the function and choose Monitoring, Recent invocations.
  7. Verify the Lambda function run results.
  8. On the DynamoDB console, choose the OrderEnriched table.
  9. Verify the unified and enriched records that combine order, item, and product records.

The following screenshot shows the OrderEnriched table.

Operational aspects

When you’re ready to operationalize this architecture for your workloads, you need to consider several aspects:

  • Monitoring metrics for Kinesis Data Streams: GetRecords.IteratorAgeMilliseconds, ReadProvisionedThroughputExceeded, and WriteProvisionedThroughputExceeded
  • Monitoring metrics available for the Lambda function, including but not limited to Duration, IteratorAge, Error count and success rate (%), Concurrent executions, and Throttles
  • Monitoring metrics for Kinesis Data Analytics (millisBehindLatest)
  • Monitoring DynamoDB provisioned read and write capacity units
  • Using the DynamoDB automatic scaling feature to automatically manage throughput
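
For example, a CloudWatch alarm on consumer lag for the orders stream could be created with the AWS CLI along these lines (a sketch; the stream name, threshold, and evaluation settings are illustrative):

aws cloudwatch put-metric-alarm \
  --alarm-name orders-stream-iterator-age \
  --namespace AWS/Kinesis \
  --metric-name GetRecords.IteratorAgeMilliseconds \
  --dimensions Name=StreamName,Value=OrdersStream \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 60000 \
  --comparison-operator GreaterThanThreshold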

We used the solution architecture with the following configuration settings to evaluate the operational performance:

  • Kinesis OrdersStream with two shards and Kinesis OrdersEnrichedStream with two shards
  • The Lambda function processes Kinesis OrdersEnrichedStream records asynchronously, in concurrent batches of five with a batch size of 500
  • DynamoDB provisioned capacity of 3,000 WCUs and 300 RCUs

We observed the following results:

  • 100,000 order items were enriched with order event data and product reference data and persisted to DynamoDB
  • An average latency of 900 milliseconds from event ingestion into the Kinesis pipeline to the record landing in DynamoDB

The following screenshot shows the visualizations of these metrics.

Cleaning up

To avoid incurring future charges, delete the resources you created as part of this post (the AWS CDK provisioned AWS CloudFormation stacks).

Conclusion

In this post, we designed a unified streaming architecture that extracts events from multiple streaming sources, correlates and performs enrichments on events, and persists those events to destinations. We then reviewed a use case and walked through the code for ingesting, correlating, and consuming real-time streaming data with Amazon Kinesis, using Amazon RDS for MySQL as the source and DynamoDB as the target.

Managing an ETL pipeline through Kinesis Data Analytics provides a cost-effective unified solution to real-time and batch database migrations using common technical skills like SQL querying.


About the Authors

Ram Vittal is an enterprise solutions architect at AWS. His current focus is to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis, photography, and movies.

Akash Bhatia is a Sr. solutions architect at AWS. His current focus is helping customers achieve their business outcomes through architecting and implementing innovative and resilient solutions at scale.

Modernizing and containerizing a legacy MVC .NET application with Entity Framework to .NET Core with Entity Framework Core: Part 2

Post Syndicated from Pratip Bagchi original https://aws.amazon.com/blogs/devops/modernizing-and-containerizing-a-legacy-mvc-net-application-with-entity-framework-to-net-core-with-entity-framework-core-part-2/

This is the second post in a two-part series in which you migrate and containerize a modernized enterprise application. In Part 1, we walked you through a step-by-step approach to re-architect a legacy ASP.NET MVC application and port it to the .NET Core framework. In this post, you deploy the previously re-architected application to Amazon Elastic Container Service (Amazon ECS) and run it as a task with AWS Fargate.

Overview of solution

In the first post, you ported the legacy ASP.NET MVC application to ASP.NET Core. You now package the same application as a Docker container and host it in an ECS cluster.

The following diagram illustrates this architecture.


You first launch an Amazon RDS for SQL Server Express instance (1) and create a Cycle Store database on that instance, with tables for different categories and subcategories of bikes. You use the previously re-architected and modernized ASP.NET Core application as the starting point for this post; this app uses AWS Secrets Manager (2) to fetch the database credentials used to access the Amazon RDS instance. In the next step, you build a Docker image of the application and push it to Amazon Elastic Container Registry (Amazon ECR) (3). After this, you create an ECS cluster (4) to run the Docker image as an AWS Fargate task.

Prerequisites

For this walkthrough, you should have the following prerequisites:

This post implements the solution in Region us-east-1.

Source Code

Clone the source code from the GitHub repo. The source code folder contains the re-architected source code, the AWS CloudFormation template to launch the infrastructure, and the Amazon ECS task definition.

Setting up the database server

To make sure that your database works out of the box, you use a CloudFormation template to create an instance of Microsoft SQL Server Express, AWS Secrets Manager secrets to store database credentials, security groups, and IAM roles to access Amazon Relational Database Service (Amazon RDS) and Secrets Manager. This stack takes approximately 15 minutes to complete, with most of that time spent provisioning the services.

  1. On the AWS CloudFormation console, choose Create stack.
  2. For Prepare template, select Template is ready.
  3. For Template source, select Upload a template file.
  4. Upload SqlServerRDSFixedUidPwd.yaml, which is available in the GitHub repo.
  5. Choose Next.
  6. For Stack name, enter SQLRDSEXStack.
  7. Choose Next.
  8. Keep the rest of the options at their default.
  9. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  10. Choose Create stack.
  11. When the status shows as CREATE_COMPLETE, choose the Outputs tab.
  12. Record the value for the SQLDatabaseEndpoint key.
  13. Connect to the database from SQL Server Management Studio with the following credentials: User id: DBUser, Password: DBU$er2020

Setting up the CYCLE_STORE database

To set up your database, complete the following steps:

  1. On the SQL Server Management console, connect to the DB instance using the ID and password you defined earlier.
  2. Under File, choose New.
  3. Choose Query with Current Connection. Alternatively, choose New Query from the toolbar.
  4. Open CYCLE_STORE_Schema_data.sql from the GitHub repository and run it.

This creates the CYCLE_STORE database with all the tables and data you need.

Setting up the ASP.NET MVC Core application

To set up your ASP.NET application, complete the following steps:

  1. Open the re-architected application code that you cloned from the GitHub repo. The Dockerfile added to the solution enables Docker support.
  2. Open the appsettings.Development.json file and replace the RDS endpoint in the ConnectionStrings section with the output of the AWS CloudFormation stack, without the port number (:1433 for SQL Server).

The ASP.NET application should now load with bike categories and subcategories. See the following screenshot.


Setting up Amazon ECR

To set up your repository in Amazon ECR, complete the following steps:

  1. On the Amazon ECR console, choose Repositories.
  2. Choose Create repository.
  3. For Repository name, enter coretoecsrepo.
  4. Choose Create repository.
  5. Copy the repository URI to use later.
  6. Select the repository you just created and choose View push commands.
  7. In the folder where you cloned the repo, navigate to the AdventureWorksMVCCore.Web folder.
  8. In the View push commands popup window, complete steps 1–4 to push your Docker image to Amazon ECR.
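
The push commands typically look like the following (a sketch; use the exact commands from the View push commands popup, and replace 111122223333 with your AWS account number):

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com
docker build -t coretoecsrepo .
docker tag coretoecsrepo:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/coretoecsrepo:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/coretoecsrepo:latest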

The following screenshot shows the completion of Steps 1 and 2. Make sure your working directory is set to AdventureWorksMVCCore.Web, as shown below.

The following screenshot shows completion of Steps 3 and 4.


Setting up Amazon ECS

To set up your ECS cluster, complete the following steps:

    1. On the Amazon ECS console, choose Clusters.
    2. Choose Create cluster.
    3. Choose the Networking only cluster template.
    4. Name your cluster cycle-store-cluster.
    5. Leave everything else as its default.
    6. Choose Create cluster.
    7. Select your cluster.
    8. Choose Task Definitions and choose Create new Task Definition.
    9. On the Select launch type compatibility page choose FARGATE and click Next step.
    10. On the Configure task and container definitions page, scroll to the bottom of the page and choose Configure via JSON.
    11. In the text area, enter the task definition JSON (task-definition.json) provided in the GitHub repo. Make sure to replace [YOUR-AWS-ACCOUNT-NO] in task-definition.json with your AWS account number on lines 44, 68, and 71. The task definition file assumes that you named your repository coretoecsrepo; if you named it something else, modify the file accordingly. It also assumes that you’re using us-east-1 as your default Region, so replace the Region in task-definition.json on lines 15 and 44 if you’re using a different Region.
    12. Choose Save.
    13. On the Task Definitions page, select cycle-store-td.
    14. From the Actions drop-down menu, choose Run Task.
    15. For Launch type, choose FARGATE.
    16. Choose your default VPC as Cluster VPC.
    17. Select at least one Subnet.
    18. Choose Edit Security Groups and select ECSSecurityGroup (created by the AWS CloudFormation stack).
    19. Choose Run Task.

Running your application

Choose the link under the task and find the public IP. When you navigate to the URL http://your-public-ip, you should see the .NET Core Cycle Store web application user interface running in Amazon ECS.

See the following screenshot.


Cleaning up

To avoid incurring future charges, delete the stacks you created for this post.

  1. On the AWS CloudFormation console, choose Stacks.
  2. Select SQLRDSEXStack.
  3. In the Stack details pane, choose Delete.

Conclusion

This post concludes your journey of modernizing a legacy enterprise ASP.NET MVC web application using .NET Core and containerizing it on Amazon ECS using the AWS Fargate compute engine on Linux containers. Porting to .NET Core helps you run enterprise workloads without any dependency on a Windows environment, and AWS Fargate gives you a way to run containers directly without managing any EC2 instances while keeping full control. AWS has also recently launched additional tools in this area.

About the Authors

Saleha Haider is a Senior Partner Solution Architect with Amazon Web Services.
Pratip Bagchi is a Partner Solutions Architect with Amazon Web Services.

Migrating Subversion repositories to AWS CodeCommit

Post Syndicated from Iftikhar khan original https://aws.amazon.com/blogs/devops/migrating-subversion-repositories-aws-codecommit/

In this post, we walk you through migrating Subversion (SVN) repositories to AWS CodeCommit. But before diving into the migration, we do a brief review of SVN and Git based systems such as CodeCommit.

About SVN

SVN is an open-source version control system. Founded in 2000 by CollabNet, Inc., it was originally designed to be a better Concurrent Versions System (CVS), and is being developed as a project of the Apache Software Foundation. SVN is the third implementation of a revision control system: Revision Control System (RCS), then CVS, and finally SVN.

SVN is the leader in centralized version control. Systems such as CVS and SVN have a single remote server of versioned data with individual users operating locally against copies of that data’s version history. Developers commit their changes directly to that central server repository.

All the files and commit history information are stored on a central server, but working against a single central server means more chances of having a single point of failure. SVN offers few offline access features; a developer has to connect to the SVN server to make a commit, which makes commits slower. The single point of failure, security, maintenance, and the scaling of SVN infrastructure are the major concerns for any organization.

About DVCS

Distributed Version Control Systems (DVCSs) address the concerns and challenges of SVN. In a DVCS (such as Git or Mercurial), you don’t just check out the latest snapshot of the files; rather, you fully mirror the repository, including its full history. If any server dies, and these systems are collaborating via that server, you can copy any of the client repositories back up to the server to restore it. Every clone is a full backup of all the data.

DVCSs such as Git are built with speed, non-linear development, simplicity, and efficiency in mind. Git works very efficiently with large projects, which is one of the biggest reasons customers find it popular.

A significant reason to migrate to Git is branching and merging. Creating a branch is very lightweight, which allows you to work faster and merge easily.

About CodeCommit

CodeCommit is a version control system that is fully managed by AWS. CodeCommit can host secure and highly scalable private Git repositories, which eliminates the need to operate your source control system and scale its infrastructure. You can use it to securely store anything, from source code to binaries. CodeCommit features like collaboration, encryption, and easy access control make it a great choice. It works seamlessly with most existing Git tools and provides free private repositories.

Understanding the repository structure of SVN and Git

SVN has a tree model with one branch where the revisions are stored, whereas Git uses a graph structure in which each commit is a node that knows its parent. When comparing the two, consider the following features:

  • Trunk – An SVN trunk is like a primary branch in a Git repository, and contains tested and stable code.
  • Branches – In SVN, branches are treated as separate entities with their own history. You can merge revisions between branches, but they’re different entities. Because of its centralized nature, all branches are remote. In Git, branches are very cheap; a branch is a pointer to a particular commit on the tree. It can be local or be pushed to a remote repository for collaboration.
  • Tags – A tag is just another folder in the main repository in SVN and remains static. In Git, a tag is a static pointer to a specific commit.
  • Commits – To commit in SVN, you need access to the main repository, and the commit creates a new revision in the remote repository. In Git, the commit happens locally, so you don’t need to have access to the remote. You can commit the work locally and then push all the commits at one time.

So far, we have covered how SVN is different from Git-based version control systems and illustrated the layout of SVN repositories. Now it’s time to look at how to migrate SVN repositories to CodeCommit.

Planning for migration

Planning is always a good thing. Before starting your migration, consider the following:

  • Identify SVN branches to migrate.
  • Come up with a branching strategy for CodeCommit and document how you can map SVN branches.
  • Prepare build, test scripts, and test cases for system testing.

If the size of the SVN repository is big enough, consider running all migration commands on the SVN server. This saves time because it eliminates network bottlenecks.

Migrating the SVN repository to CodeCommit

When you’re done with the planning aspects, it’s time to start migrating your code.

Prerequisites

You must have the AWS Command Line Interface (AWS CLI) with an active account and Git installed on the machine that you’re planning to use for migration.

Listing all SVN users for an SVN repository

SVN uses a user name for each commit, whereas Git stores the real name and email address. In this step, we map SVN users to their corresponding Git names and email.

To list all the SVN users, run the following PowerShell command from the root of your local SVN checkout:

svn.exe log --quiet | ? { $_ -notlike '-*' } | % { "{0} = {0} <{0}>" -f ($_ -split ' \| ')[1] } | Select-Object -Unique | Out-File 'authors-transform.txt'

On a Linux based machine, run the following command from the root of your local SVN checkout:

svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors-transform.txt

The authors-transform.txt file content looks like the following code:

ikhan = ikhan <ikhan>
fbar = fbar <fbar>
abob = abob <abob>

After you transform the SVN user to a Git user, it should look like the following code:

ikhan = ifti khan <ikhan@example.com>
fbar = foo bar <fbar@example.com>
abob = aaron bob <abob@example.com>

Importing SVN contents to a Git repository

The next step in the migration from SVN to Git is to import the contents of the SVN repository into a new Git repository. We do this with the git svn utility, which is included with most Git distributions. The conversion process can take a significant amount of time for larger repositories.

The git svn clone command transforms the trunk, branches, and tags in your SVN repository into a new Git repository. The command depends on the structure of the SVN.

git svn clone may not be available in all installations; you might consider using an AWS Cloud9 environment or using a temporary Amazon Elastic Compute Cloud (Amazon EC2) instance.

If your SVN layout is standard, use the following command:

git svn clone --stdlayout --authors-file=authors-transform.txt <svn-repo>/<project> <temp-dir/project>

If your SVN layout isn’t standard, you need to map the trunk, branches, and tags folder in the command as parameters:

git svn clone <svn-repo>/<project> --prefix=svn/ --no-metadata --trunk=<trunk-dir> --branches=<branches-dir> --tags=<tags-dir> --authors-file "authors-transform.txt" <temp-dir/project>

Creating a bare Git repository and pushing the local repository

In this step, we create a blank repository and match the default branch with the SVN’s trunk name.

To create the .gitignore file, enter the following code:

cd <temp-dir/project>
git svn show-ignore > .gitignore
git add .gitignore
git commit -m 'Adding .gitignore.'

To create the bare Git repository, enter the following code:

git init --bare <git-project-dir>\local-bare.git
cd <git-project-dir>\local-bare.git
git symbolic-ref HEAD refs/heads/trunk

To update the local bare Git repository, enter the following code:

cd <temp-dir/project>
git remote add bare <git-project-dir\local-bare.git>
git config remote.bare.push 'refs/remotes/*:refs/heads/*'
git push bare

You can also add tags:

cd <git-project-dir\local-bare.git>

For Windows, enter the following code:

git for-each-ref --format='%(refname)' refs/heads/tags | % { $_.Replace('refs/heads/tags/','') } | % { git tag $_ "refs/heads/tags/$_"; git branch -D "tags/$_" }

For Linux, enter the following code:

for t in $(git for-each-ref --format='%(refname:short)' refs/remotes/tags); do git tag ${t/tags\//} $t && git branch -D -r $t; done

You can also add branches:

cd <git-project-dir\local-bare.git>

For Windows, enter the following code:

git for-each-ref --format='%(refname)' refs/remotes | % { $_.Replace('refs/remotes/','') } | % { git branch "$_" "refs/remotes/$_"; git branch -r -d "$_"; }

For Linux, enter the following code:

for b in $(git for-each-ref --format='%(refname:short)' refs/remotes); do git branch $b refs/remotes/$b && git branch -D -r $b; done

As a final touch-up, enter the following code:

cd <git-project-dir\local-bare.git>
git branch -m trunk master

Creating a CodeCommit repository

You can now create a CodeCommit repository with the following code (make sure that the AWS CLI is configured with your preferred Region and credentials):

aws configure
aws codecommit create-repository --repository-name MySVNRepo --repository-description "SVN Migration repository" --tags Team=Migration

You get the following output:

{
    "repositoryMetadata": {
        "repositoryName": "MySVNRepo",
        "cloneUrlSsh": "ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/MySVNRepo",
        "lastModifiedDate": 1446071622.494,
        "repositoryDescription": "SVN Migration repository",
        "cloneUrlHttp": "https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MySVNRepo",
        "creationDate": 1446071622.494,
        "repositoryId": "f7579e13-b83e-4027-aaef-650c0EXAMPLE",
        "Arn": "arn:aws:codecommit:us-east-2:111111111111:MySVNRepo",
        "accountId": "111111111111"
    }
}

Pushing the code to CodeCommit

To push your code to the new CodeCommit repository, enter the following code:

cd <git-project-dir\local-bare.git>
git remote add origin https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MySVNRepo

git push origin --all
git push origin --tags    # optional, if you mapped tags

Troubleshooting

When migrating SVN repositories, you might encounter a few SVN errors, which are displayed as code on the console. For more information, see Subversion client errors caused by inappropriate repository URL.

For more information about the git-svn utility, see the git-svn documentation.

Conclusion

In this post, we described the straightforward process of using the git-svn utility to migrate SVN repositories to Git or Git-based systems like CodeCommit. After you migrate an SVN repository to CodeCommit, you can use any Git-based client and start using CodeCommit as your primary version control system without worrying about securing and scaling its infrastructure.

Migrating your rules from AWS WAF Classic to the new AWS WAF

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/migrating-rules-from-aws-waf-classic-to-new-aws-waf/

In November 2019, Amazon launched a new version of AWS Web Application Firewall (WAF) that offers a richer and easier to use set of features. In this post, we show you some of the changes and how to migrate from AWS WAF Classic to the new AWS WAF.

AWS Managed Rules for AWS WAF is one of the more powerful new capabilities in AWS WAF. It helps you protect your applications without needing to create or manage rules directly in the service. The new release includes other enhancements and a brand new set of APIs for AWS WAF.

Before you start, we recommend that you review How AWS WAF works as a refresher. If you’re already familiar with AWS WAF, please feel free to skip ahead. If you’re new to AWS WAF and want to know the best practice for deploying AWS WAF, we recommend reading the Guidelines for Implementing AWS WAF whitepaper.

What’s changed in AWS WAF

Here’s a summary of what’s changed in AWS WAF:

  • AWS Managed Rules for AWS WAF – a new capability that provides protection against common web threats and includes the Amazon IP reputation list and an anonymous IP list for blocking bots and traffic that originate from VPNs, proxies, and Tor networks.
  • New API (wafv2) – allows you to configure all of your AWS WAF resources using a single set of APIs instead of two (waf and waf-regional).
  • Simplified service limits – gives you more rules per web ACL and lets you define longer regex patterns. Limits per condition have been eliminated and replaced with web ACL capacity units (WCU).
  • Document-based rule writing – allows you to write and express rules in JSON format directly to your web ACLs. You’re no longer required to use individual APIs to create different conditions and associate them to a rule, greatly simplifying your code and making it more maintainable.
  • Rule nesting and full logical operation support – lets you write rules that contain multiple conditions, including OR statements. You can also nest logical operations, creating statements such as [A AND NOT(B OR C)]; see the JSON sketch after this list.
  • Variable CIDR range support for IP sets – gives you more flexibility in defining the IP ranges you want to block. The new AWS WAF supports IPv4 ranges from /1 to /32 and IPv6 ranges from /1 to /128.
  • Chainable text transformation – allows you to perform multiple text transformations before executing a rule against incoming traffic.
  • Revamped console experience – features a visual rule builder and more intuitive design.
  • AWS CloudFormation support for all condition types – rules written in JSON can easily be converted into YAML format.
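
For example, a rule statement expressing [A AND NOT(B OR C)], such as blocking requests from a given country unless they originate from one of two allow-listed IP sets, might look like the following JSON sketch (the rule name, country code, and IP set ARNs are placeholders):

{
    "Name": "BlockCountryUnlessAllowListed",
    "Priority": 0,
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "BlockCountryUnlessAllowListed"
    },
    "Statement": {
        "AndStatement": {
            "Statements": [
                { "GeoMatchStatement": { "CountryCodes": [ "US" ] } },
                {
                    "NotStatement": {
                        "Statement": {
                            "OrStatement": {
                                "Statements": [
                                    { "IPSetReferenceStatement": { "ARN": "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/office-ips/a1b2c3d4-EXAMPLE" } },
                                    { "IPSetReferenceStatement": { "ARN": "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/partner-ips/a1b2c3d4-EXAMPLE" } }
                                ]
                            }
                        }
                    }
                }
            ]
        }
    }
}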

Although there were many changes introduced, the concepts and terminology that you’re already familiar with have stayed the same. The previous APIs have been renamed to AWS WAF Classic. It’s important to stress that resources created under AWS WAF Classic aren’t compatible with the new AWS WAF.

About web ACL capacity units

Web ACL capacity units (WCUs) are a new concept that we introduced to AWS WAF in November 2019. WCU is a measurement that’s used to calculate and control the operating resources that are needed to run the rules associated with your web ACLs. WCU helps you visualize and plan how many rules you can add to a web ACL. The number of WCUs used by a web ACL depends on which rule statements you add. The maximum WCUs for each web ACL is 1,500, which is sufficient for most use cases. We recommend that you take some time to review how WCUs work and understand how each type of rule statement consumes WCUs before continuing with your migration.

Planning your migration to the new AWS WAF

We recently announced a new API and a wizard that will help you migrate from AWS WAF Classic to the new AWS WAF. At a high level, it parses the web ACL under AWS WAF Classic and generates a CloudFormation template that, once deployed, creates an equivalent web ACL under the new AWS WAF. In this section, we explain how you can use the wizard to plan your migration.

Things to know before you get started

The migration wizard first examines your existing web ACL. It examines and records for conversion any rules associated with the web ACL, as well as any IP sets, regex pattern sets, string match filters, and account-owned rule groups. Running the wizard doesn’t delete or modify your existing web ACL configuration or any resource associated with that web ACL. Once finished, it generates an AWS CloudFormation template in your S3 bucket that represents an equivalent web ACL, with all rules, sets, filters, and groups converted for use in the new AWS WAF. You’ll need to deploy the template manually in order to recreate the web ACL in the new AWS WAF.

Please note the following limitations:

  • Only the AWS WAF Classic resources that are under the same account will be migrated over.
  • If you migrate multiple web ACLs that reference shared resources—such as IP sets or regex pattern sets—they will be duplicated under new AWS WAF.
  • Conditions associated with rate-based rules won’t be carried over. Once migration is complete, you can manually recreate the rules and the conditions.
  • Managed rules from AWS Marketplace won’t be carried over. Some sellers have equivalent managed rules that you can subscribe to in the new AWS WAF.
  • The web ACL associations won’t be carried over. This was done on purpose so that migration doesn’t affect your production environment. Once you verify that everything has been migrated over correctly, you can re-associate the web ACLs to your resources.
  • Logging for the web ACLs will be disabled by default. You can re-enable the logging once you are ready to switch over.
  • Any CloudWatch alarms that you may have will not be carried over. You will need to set up the alarms again once the web ACL has been recreated.

While you can use the migration wizard to migrate AWS WAF Security Automations, we don’t recommend doing so because it won’t convert any Lambda functions that are used behind the scenes. Instead, redeploy your automations using the new solution (version 3.0 and higher), which has been updated to be compatible with the new AWS WAF.

About AWS Firewall Manager and migration wizard

The current version of the migration API and wizard doesn’t migrate rule groups managed by AWS Firewall Manager. When you use the wizard on web ACLs managed by Firewall Manager, the associated rule groups won’t be carried over. Instead of using the wizard to migrate web ACLs managed by Firewall Manager, you’ll want to recreate the rule groups in the new AWS WAF and replace the existing policy with a new policy.

Note: In the past, the rule group was a concept that existed under Firewall Manager. However, with the latest change, we have moved the rule group under AWS WAF. The functionality remains the same.

Migrate to the new AWS WAF

Use the new migration wizard, which creates an executable AWS CloudFormation template, to migrate your web ACLs from AWS WAF Classic to the new AWS WAF. The template is used to create a new version of the AWS WAF rules and corresponding entities.

  1. From the new AWS WAF console, navigate to AWS WAF Classic by choosing Switch to AWS WAF Classic. There will be a message box at the top of the window. Select the migration wizard link in the message box to start the migration process.
     
    Figure 1: Start the migration wizard

  2. Select the web ACL you want to migrate.
     
    Figure 2: Select a web ACL to migrate

  3. Specify a new S3 bucket for the migration wizard to store the AWS CloudFormation template that it generates. The S3 bucket name needs to start with the prefix aws-waf-migration-. For example, name it aws-waf-migration-helloworld. Store the template in the region you will be deploying it to. For example, if you have a web ACL that is in us-west-2, you would create the S3 bucket in us-west-2, and deploy the stack to us-west-2.

    Select Auto apply the bucket policy required for migration to have the wizard configure the permissions the API needs to access your S3 bucket.

    Choose how you want any rules that can’t be migrated to be handled. Select either Exclude rules that can’t be migrated or Stop the migration process if a rule can’t be migrated. The ability of the wizard to migrate rules is affected by the WCU limit mentioned earlier.
     

    Figure 3: Configure migration options

    Note: If you prefer, you can run the following code to manually set up your S3 bucket with the policy below to configure the necessary permissions before you start the migration. If you do this, select Use the bucket policy that comes with the S3 bucket. Don’t forget to replace <BUCKET_NAME> and <CUSTOMER_ACCOUNT_ID> with your information.

    For all AWS Regions (waf-regional):

    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "apiv2migration.waf-regional.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/AWSWAF/<CUSTOMER_ACCOUNT_ID>/*"
            }
        ]
    }
    

    For Amazon CloudFront (waf):

    
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "apiv2migration.waf.amazonaws.com"
                },
                "Action": "s3:PutObject",
                "Resource": "arn:aws:s3:::<BUCKET_NAME>/AWSWAF/<CUSTOMER_ACCOUNT_ID>/*"
            }
        ]
    }
    

  4. Verify the configuration, then choose Start creating CloudFormation template to begin the migration. Creating the AWS CloudFormation template will take about a minute depending on the complexity of your web ACL.
     
    Figure 4: Create CloudFormation template

  5. Once completed, you have an option to review the generated template file and make modifications (for example, you can add more rules) should you wish to do so. To continue, choose Create CloudFormation stack.
     
    Figure 5: Review template

  6. Use the AWS CloudFormation console to deploy the new template created by the migration wizard. Under Prepare template, select Template is ready. Select Amazon S3 URL as the Template source. Before you deploy, we recommend that you download and review the template to ensure the resources have been migrated as expected.
     
    Figure 6: Create stack

  7. Choose Next and step through the wizard to deploy the stack. Once created successfully, you can review the new web ACL and associate it to resources.
     
    Figure 7: Complete the migration

Post-migration considerations

After you verify that migration has been completed correctly, you might want to revisit your configuration to take advantage of some of the new AWS WAF features.

For instance, consider adding AWS Managed Rules to your web ACLs to improve the security of your application. AWS Managed Rules feature three different types of rule groups:

  • Baseline
  • Use-case specific
  • IP reputation list

The baseline rule groups provide general protection against a variety of common threats, such as stopping known bad inputs from making it into your application and preventing admin page access. The use-case specific rule groups provide incremental protection for many different use cases and environments, and the IP reputation lists provide threat intelligence based on the client’s source IP.

You should also consider revisiting some of the old rules and optimizing them by rewriting the rules or removing any outdated rules. For example, if you deployed the AWS CloudFormation template from our OWASP Top 10 Web Application Vulnerabilities whitepaper to create rules, you should consider replacing them with AWS Managed Rules. While the concepts found within the whitepaper are still applicable and might assist you in writing your own rules, the rules created by the template have been superseded by AWS Managed Rules.

For information about writing your own rules in the new AWS WAF using JSON, please watch the Protecting Your Web Application Using AWS Managed Rules for AWS WAF webinar. In addition, you can refer to the sample JSON/YAML to help you get started.

Revisit your CloudWatch metrics after the migration and set up alarms as necessary. The alarms aren’t carried over by the migration API and your metric names might have changed as well. You’ll also need to re-enable logging configuration and any field redaction you might have had in the previous web ACL.

Finally, use this time to work with your application team and check your security posture. Find out what fields are parsed frequently by the application and add rules to sanitize the input accordingly. Check for edge cases and add rules to catch these cases if the application’s business logic fails to process them. You should also coordinate with your application team when you make the switch, as there might be a brief disruption when you change the resource association to the new web ACL.

Conclusion

We hope that you found this post helpful in planning your migration from AWS WAF Classic to the new AWS WAF. We encourage you to explore the new AWS WAF experience as it features new enhancements, such as AWS Managed Rules, and offers much more flexibility in creating your own rules.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Umesh Kumar Ramesh

Umesh is a Sr. Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects and topical workshops, and leads implementation projects for AWS customers. He holds a Bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Author

Kevin Lee

Kevin is a Sr. Product Manager at Amazon Web Services, currently overseeing AWS WAF.

Modernizing and containerizing a legacy MVC .NET application with Entity Framework to .NET Core with Entity Framework Core: Part 1

Post Syndicated from Pratip Bagchi original https://aws.amazon.com/blogs/devops/modernizing-and-containerizing-a-legacy-mvc-net-application-with-entity-framework-to-net-core-with-entity-framework-core-part-1/

Tens of thousands of .NET applications are running across the world, many of which are ASP.NET web applications. This number becomes interesting when you consider that the .NET framework, as we know it, will be changing significantly. The current release schedule for .NET 5.0 is November 2020, and going forward there will be just one .NET that you can use to target multiple platforms like Windows and Linux. This is important because those .NET applications running in version 4.8 and lower can’t automatically upgrade to this new version of .NET. This is because .NET 5.0 is based on .NET Core and thus has breaking changes when trying to upgrade from an older version of .NET.

This is an important step in the .NET ecosystem because it enables .NET applications to move beyond the Windows world. However, this also means that active applications need to go through a refactoring before they can take advantage of this new definition. One choice for this refactoring is to wait until the new version of .NET is released and start the refactoring process at that time. The second choice is to get an early start and begin converting your applications to .NET Core v3.1 so that the migration to .NET 5.0 will be smoother. This post demonstrates an approach to migrating an ASP.NET MVC (Model View Controller) web application using Entity Framework 6 to ASP.NET Core with Entity Framework Core.

This post shows the steps to modernize a legacy enterprise ASP.NET MVC web application using .NET Core, along with converting Entity Framework to Entity Framework Core.

Overview of the solution

The first step that you take is to get an ASP.NET MVC application and its required database server up and working in your AWS environment. We take this approach so you can run the application locally to see how it works. You first set up the database, which is SQL Server running in Amazon Relational Database Service (Amazon RDS). Amazon RDS provides a managed SQL Server experience. After you define the database, you set up the schema and data. If you already have your own SQL Server instance running, you can also load data there if desired; you simply need to ensure your connection string points to that server rather than the Amazon RDS server you set up in this walk-through.

Next you launch a legacy ASP.NET MVC web application that displays lists of bike categories and their subcategories. This legacy application uses Entity Framework 6 to fetch data from the database.

Finally, you take a step-by-step approach to convert the same use case and create a new ASP.NET Core web application. Here you use Entity Framework Core to fetch data from the database. As a best practice, you also use AWS Secrets Manager to store database login information.


Prerequisites

For this walkthrough, you should have the following prerequisites:

Setting up the database server

For this walk-through, we have provided an AWS CloudFormation template inside the GitHub repository to create an instance of Microsoft SQL Server Express, which can be downloaded from this link.

  1. On the AWS CloudFormation console, choose Create stack.
  2. For Prepare template, select Template is ready.
  3. For Template source, select Upload a template file.
  4. Upload SqlServerRDSFixedUidPwd.yaml and choose Next.
  5. For Stack name, enter SQLRDSEXStack and choose Next.
  6. Keep the rest of the options at their default.
  7. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  8. Choose Create stack.
  9. When the status shows as CREATE_COMPLETE, choose the Outputs tab and record the value from the SQLDatabaseEndpoint key.
  10. Connect to the database from SQL Server Management Studio with the following credentials: User id: DBUser, Password: DBU$er2020

Setting up the CYCLE_STORE database

To set up your database, complete the following steps:

  1. On the SQL Server Management console, connect to the DB instance using the ID and password you defined earlier.
  2. Under File, choose New.
  3. Choose Query with Current Connection.
    Alternatively, choose New Query from the toolbar.
  4. Download cycle_store_schema_data.sql  and run it.

This creates the CYCLE_STORE database with all the tables and data you need.

Setting up and validating the legacy MVC application

  1. Download the source code from the GitHub repo.
  2. Open AdventureWorksMVC_2013.sln and modify the database connection string in the web.config file by replacing the Data Source property value with the server name from your Amazon RDS setup.

The ASP.NET application should load with bike categories and subcategories. The following screenshot shows the Unicorn Bike Rentals website after configuration.

Now that you have the legacy application running locally, you can look at what it would take to refactor it so that it’s a .NET Core 3.1 application. Two main approaches are available for this:

  • Update in place – You make all the changes within a single code set
  • Move code – You create a new .NET Core solution and move the code over piece by piece

For this post, we show the second approach because it means that you have to do less scaffolding.

Creating a new MVC Core application

To create your new MVC Core application, complete the following steps:

  1. Open Visual Studio.
  2. From the Get Started page, choose Create a New Project.
  3. Choose ASP.NET Core Web Application.
  4. For Project name, enter AdventureWorksMVCCore.Web.
  5. Add a Location you prefer.
  6. For Solution name, enter AdventureWorksMVCCore.
  7. Choose Web Application (Model-View-Controller).
  8. Choose Create. Make sure that the project is set to use .NET Core 3.1.
  9. Choose Build, Build Solution.
  10. Press CTRL + Shift + B to make sure the current solution is building correctly.

You should get a default ASP.NET Core startup page.

Aligning the projects

ASP.NET Core MVC is dependent upon the use of a well-known folder structure; a lot of the scaffolding depends upon view source code files being in the Views folder, controller source code files being in the Controllers folder, etc. Some of the non-.NET specific folders are also at the same level, such as css and images. In .NET Core, the expectations are that static content should be in a new construct, the wwwroot folder. This includes Javascript, CSS, and image files. You also need to update the configuration file with the same database connection string that you used earlier.

Your first step is to move the static content.

  1. In the .NET Core solution, delete all the content created during solution creation. This includes the css, js, and lib directories and the favicon.ico file.
  2. Copy over the css, favicon, and Images folders from the legacy solution to the wwwroot folder of the new solution. When completed, your .NET Core wwwroot directory should appear like the following screenshot.
  3. Open appsettings.Development.json and add a ConnectionStrings section (replace the server with the Amazon RDS endpoint that you have already been using). See the following code:
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=sqlrdsdb.xxxxx.us-east-1.rds.amazonaws.com; Database=CYCLE_STORE;User Id=DBUser;Password=DBU$er2020;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}

Setting up Entity Framework Core

One of the changes in .NET Core is around changes to Entity Framework. Entity Framework Core is a lightweight, extensible data access technology. It can act as an object-relational mapper (O/RM) that enables interactions with the database using .NET objects, thus abstracting out much of the database access code. To use Entity Framework Core, you first have to add the packages to your project.
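
If you prefer the command line to the NuGet UI steps that follow, the equivalent commands (run from the project folder) are along these lines:

dotnet add package Microsoft.EntityFrameworkCore.SqlServer
dotnet add package Microsoft.EntityFrameworkCore.Tools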

  1. In the Solution Explorer window, choose the project (right-click) and choose Manage NuGet packages…
  2. On the Browse tab, search for the latest stable versions of these two NuGet packages. You should see a screen similar to the following screenshot, containing:
    • Microsoft.EntityFrameworkCore.SqlServer
    • Microsoft.EntityFrameworkCore.Tools
      After you add the packages, the next step is to generate database models. This is part of the O/RM functionality; these models map to the database tables and include information about the fields, constraints, and other information necessary to make sure that the generated models match the database. Fortunately, there is an easy way to generate those models. Complete the following steps:
  3. Open Package Manager Console from Visual Studio.
  4. Enter the following code (replace the server endpoint):
    Scaffold-DbContext "Server= sqlrdsdb.xxxxxx.us-east-1.rds.amazonaws.com; Database=CYCLE_STORE;User Id= DBUser;Password= DBU`$er2020;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models
    The ` in the password right before the $ is the escape character.
    You should now have a Context and several Model classes from the database stored within the Models folder. See the following screenshot.
  5. Open the CYCLE_STOREContext.cs file under the Models folder and comment out the following lines of code, as shown in the following screenshot. You instead take advantage of the middleware to read the connection string in appsettings.Development.json that you previously configured.
    if (!optionsBuilder.IsConfigured)
                {
    #warning To protect potentially sensitive information in your connection string, you should move it out of source code. See http://go.microsoft.com/fwlink/?LinkId=723263 for guidance on storing connection strings.
                    optionsBuilder.UseSqlServer(
                        "Server=sqlrdsdb.cph0bnedghnc.us-east-1.rds.amazonaws.com; " +
                        "Database=CYCLE_STORE;User Id= DBUser;Password= DBU$er2020;");
                }
  6. Open the Startup.cs file and add the following line of code in the ConfigureServices method (a sketch of the method in context follows this list). You need to add references to AdventureWorksMVCCore.Web.Models and Microsoft.EntityFrameworkCore in the using statements. This reads the connection string from the appsettings file and wires it into Entity Framework Core.
    services.AddDbContext<CYCLE_STOREContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
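
If it helps to see where that line lands, the following is a minimal sketch of Startup.cs after this step. Only the members relevant to this walkthrough are shown; depending on your project template, the MVC registration and the Configure method may differ slightly and should be kept as generated.

using AdventureWorksMVCCore.Web.Models;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace AdventureWorksMVCCore.Web
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            // Typically generated by the MVC template.
            services.AddControllersWithViews();

            // Read "DefaultConnection" from appsettings.Development.json and
            // register the scaffolded DbContext with the dependency injection container.
            services.AddDbContext<CYCLE_STOREContext>(options =>
                options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
        }
    }
}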

Setting up a service layer

Because you're working with an MVC application, the application control logic belongs in the controller. In the previous step, you created the data access layer. You now create the service layer, which is responsible for mediating communication between the controller and the data access layer. Generally, this is where you put concerns such as business logic and validation.

Setting up the interfaces for these services follows the dependency inversion and interface segregation principles.

    1. Create a folder named Service under the project directory.
    2. Create two subfolders, Interface and Implementation, under the new Service folder.
      Service setup
    3. Add a new interface, ICategoryService, to the Service\Interface folder.
    4. Add the following code to that interface:
      using AdventureWorksMVCCore.Web.Models;
      using System.Collections.Generic;
       
      namespace AdventureWorksMVCCore.Web.Service.Interface
      {
          public interface ICategoryService
          {
              List<ProductCategory> GetCategoriesWithSubCategory();
          }
      }
    5. Add a new class file, CategoryService.cs, to the Service/Implementation folder.
    6. Implement the interface you just created with the following code:
      using AdventureWorksMVCCore.Web.Models;
      using AdventureWorksMVCCore.Web.Service.Interface;
      using Microsoft.EntityFrameworkCore;
      using System.Collections.Generic;
      using System.Linq;
       
      namespace AdventureWorksMVCCore.Web.Service.Implementation
      {
        
          public class CategoryService : ICategoryService
          {
              private readonly CYCLE_STOREContext _context;
              public CategoryService(CYCLE_STOREContext context)
              {
                  _context = context;
              }
              public List<ProductCategory> GetCategoriesWithSubCategory()
              {
                  return _context.ProductCategory
                          .Include(category => category.ProductSubcategory)
                          .ToList(); 
               }
          }
      }
      

      Now that you have the interface and implementation completed, the next step is to add the dependency resolver. This adds the interface to the application's service collection and acts as a map for how to instantiate that class when it's injected into a class constructor (a consuming controller is sketched after this list).

    7. To add this mapping, open the Startup.cs file and add the following line of code below where you added the DbContext:
      services.TryAddScoped<ICategoryService, CategoryService>();

      You may also need to add the following references:

      using AdventureWorksMVCCore.Web.Service.Implementation;
      using AdventureWorksMVCCore.Web.Service.Interface;
      using Microsoft.Extensions.DependencyInjection;
      using Microsoft.Extensions.DependencyInjection.Extensions;
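
To see the mapping in action, here is a hypothetical controller that consumes the service. The HomeController name and the Index action are illustrative only; the point is that the dependency injection container sees the ICategoryService constructor parameter and supplies the CategoryService instance registered above, so the controller never touches CYCLE_STOREContext directly.

using AdventureWorksMVCCore.Web.Service.Interface;
using Microsoft.AspNetCore.Mvc;

namespace AdventureWorksMVCCore.Web.Controllers
{
    public class HomeController : Controller
    {
        private readonly ICategoryService _categoryService;

        // The DI container resolves ICategoryService to CategoryService here.
        public HomeController(ICategoryService categoryService)
        {
            _categoryService = categoryService;
        }

        public IActionResult Index()
        {
            // Business data flows through the service layer, not the DbContext.
            var categories = _categoryService.GetCategoriesWithSubCategory();
            ViewBag.CategoryCount = categories.Count;
            return View();
        }
    }
}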
      

Setting up view components

In this section, you move the UI to your new ASP.NET Core project. In ASP.NET Core, the default folder structure for managing views is different from what it was in ASP.NET MVC. The formatting of the Razor files is also slightly different.

    1. Under Views, Shared, create a folder called Components.
    2. Create a subfolder called Header.
    3. In the Header folder, create a new view called Default.cshtml and enter the following code:
      <div method="post" asp-action="header" asp-controller="home">
          <div id="hd">
              <div id="doc2" class="yui-t3 wrapper">
                  <table>
                      <tr>
                          <td>
                              <h2 class="banner">
                                  <a href="@Url.Action("Default","Home")" id="LnkHome">
                                      <img src="/Images/logo.png" style="width:125px;height:125px" alt="ComponentOne" />
       
                                  </a>
                              </h2>
                          </td>
                          <td class="hd-header"><h2 class="banner">Unicorn Bike Rentals</h2></td>
                      </tr>
                  </table>
              </div>
       
          </div>
      </div>
      
    4. Create a class within the Header folder called HeaderLayout.cs and enter the following code:
      using Microsoft.AspNetCore.Mvc;
       
      namespace AdventureWorksMVCCore.Web
      {
          public class HeaderViewComponent : ViewComponent
          {
              public IViewComponentResult Invoke()
              {
                  return View();
              }
          }
      }
      

      You can now create the content view component, which shows bike categories and subcategories.

    5. Under Views, Shared, Components, create a folder called Content.
    6. Create a class ContentLayoutModel.cs and enter the following code:
      using AdventureWorksMVCCore.Web.Models;
      using System.Collections.Generic;
       
      namespace AdventureWorksMVCCore.Web.Views.Components
      {
          public class ContentLayoutModel
          {
              public List<ProductCategory> ProductCategories { get; set; }
          }
      }
      
    7. In this folder, create a view Default.cshtml and enter the following code:
      @model AdventureWorksMVCCore.Web.Views.Components.ContentLayoutModel
       
      <div method="post" asp-action="footer" asp-controller="home">
          <div class="content">
       
              <div class="footerinner">
                  <div id="PnlExpFooter">
                      <div>
                          @foreach (var category in Model.ProductCategories)
                          {
                              <div asp-for="@category.Name" id=@($"{category.Name}Menu")>
                                  <h1>
                                      <b>@category.Name</b>
                                  </h1>
                                    <ul id=@($"{category.Name}List")>
                                      @foreach (var subCategory in category.ProductSubcategory.ToList())
                                      {
                                          <li>@subCategory.Name</li>
                                      }
                                  </ul>
                              </div>
                          }
                      </div>
                  </div>
              </div>
          </div>
      </div>
    8. Create a class ContentLayout.cs and enter the following code:
      using AdventureWorksMVCCore.Web.Models;
      using AdventureWorksMVCCore.Web.Service.Interface;
      using Microsoft.AspNetCore.Mvc;
       
      namespace AdventureWorksMVCCore.Web.Views.Components
      {
          public class ContentViewComponent : ViewComponent
          {
              private readonly CYCLE_STOREContext _context;
              private readonly ICategoryService _categoryService;
              public ContentViewComponent(CYCLE_STOREContext context,
                  ICategoryService categoryService)
              {
                  _context = context;
                  _categoryService = categoryService;
              }
       
              public IViewComponentResult Invoke()
              {
                  ContentLayoutModel content = new ContentLayoutModel();
                  content.ProductCategories = _categoryService.GetCategoriesWithSubCategory();
                  return View(content);
              }
       
       
          }
      }

      The website layout is driven by the _Layout.cshtml file.

    9. To render the header and portal the way you want, modify _Layout.cshtml and replace the existing code with the following code:
      <!DOCTYPE html>
      <html lang="en">
      <head>
          <meta name="viewport" content="width=device-width" />
          <title>Core Cycles Store</title>
          <link rel="apple-touch-icon" sizes="180x180" href="favicon/apple-touch-icon.png" />
          <link rel="icon" type="image/png" href="favicon/favicon-32x32.png" sizes="32x32" />
          <link rel="icon" type="image/png" href="favicon/favicon-16x16.png" sizes="16x16" />
          <link rel="manifest" href="favicon/manifest.json" />
          <link rel="mask-icon" href="favicon/safari-pinned-tab.svg" color="#503b75" />
          <link href="@Url.Content("~/css/StyleSheet.css")" rel="stylesheet" />
      </head>
      <body class='@ViewBag.BodyClass' id="body1">
          @await Component.InvokeAsync("Header")
       
          <div id="doc2" class="yui-t3 wrapper">
              <div id="bd">
                  <div id="yui-main">
                      <div class="content">
                          <div> 
                              @RenderBody()
                          </div>
                      </div>
                  </div>
              </div>
          </div>
      </body>
      </html>
      

      Upon completion, your directory should look like the following screenshot.
      View component setup

Modifying the index file

In this final step, you modify the Home/Index.cshtml file to hold this content ViewComponent:

@{
    ViewBag.Title = "Core Cycles";
    Layout = "~/Views/Shared/_Layout.cshtml";
}
 
<div id="homepage" class="">    
    <div class="content-mid">
        @await Component.InvokeAsync("Content")
    </div>
 
</div>

You can now build the solution. You should have an MVC .NET Core 3.1 application running with data from your database. The following screenshot shows the website view.

Re-architected code output

Securing the database user and password

The CloudFormation stack you launched also created a Secrets Manager entry to store the CYCLE_STORE database user ID and password. As an optional step, you can use that entry to retrieve the database user ID and password instead of hard-coding them in the connection string.

To do so, you can use the AWS Secrets Manager client-side caching library. The dependency package is also available through NuGet. For this post, I use NuGet to add the library to the project.

  1. In the NuGet Package Manager, browse for AWSSDK.SecretsManager.Caching.
  2. Choose the library and install it.
  3. Repeat these steps to install Newtonsoft.Json.
    Add AWS Secrets Manager NuGet
  4. Add a new class, ServicesConfiguration, to the solution and enter the following code in the class. Make sure all the required references are added to the class. This is an extension method, so the class must be static:
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Amazon.SecretsManager;
    using Amazon.SecretsManager.Extensions.Caching;
    using Microsoft.Extensions.DependencyInjection;

    public static class ServicesConfiguration
    {
        public static async Task<Dictionary<string, string>> GetSqlCredential(this IServiceCollection services, string secretId)
        {
            var credential = new Dictionary<string, string>();

            using (var secretsManager = new AmazonSecretsManagerClient(Amazon.RegionEndpoint.USEast1))
            using (var cache = new SecretsManagerCache(secretsManager))
            {
                // Retrieve the JSON secret created by the CloudFormation stack and parse it.
                var sec = await cache.GetSecretString(secretId);
                var jo = Newtonsoft.Json.Linq.JObject.Parse(sec);

                credential["username"] = jo["username"].ToObject<string>();
                credential["password"] = jo["password"].ToObject<string>();
            }

            return credential;
        }
    }
    
  5. In appsettings.Development.json, replace the DefaultConnection with the following code:
    "Server=sqlrdsdb.cph0bnedghnc.us-east-1.rds.amazonaws.com; Database=CYCLE_STORE;User Id=<UserId>;Password=<Password>;"
  6. Add the following code in Startup.cs, which replaces the placeholder user ID and password with the values retrieved from Secrets Manager:
    // Read the template connection string added earlier, then substitute the placeholders.
    string connectionString = Configuration.GetConnectionString("DefaultConnection");
    Dictionary<string, string> secrets = services.GetSqlCredential("CycleStoreCredentials").Result;
    connectionString = connectionString.Replace("<UserId>", secrets["username"]);
    connectionString = connectionString.Replace("<Password>", secrets["password"]);
    services.AddDbContext<CYCLE_STOREContext>(options => options.UseSqlServer(connectionString));

    Build the solution again.

Cleaning up

To avoid incurring future charges, on the AWS CloudFormation console, delete the SQLRDSEXStack stack.

Conclusion

This post showed the process to modernize a legacy enterprise MVC ASP.NET web application using .NET Core and convert Entity Framework to Entity Framework Core. In Part 2 of this post, we take this one step further to show you how to host this application in Linux containers.

About the Authors

Saleha Haider is a Senior Partner Solution Architect with Amazon Web Services.
Pratip Bagchi is a Partner Solutions Architect with Amazon Web Services.

 

Use MAP for Windows to Simplify your Migration to AWS

Post Syndicated from Fred Wurden original https://aws.amazon.com/blogs/compute/use-map-for-windows-to-simplify-your-migration-to-aws/

There’s no question that organizations today are being disrupted in their industry. In a previous blog post, I shared that such disruption often accelerates organizations’ decisions to move to the cloud. When these organizations migrate to the cloud, Windows workloads are often critical to their business and these workloads require a performant, reliable, and secure cloud infrastructure. Customers tell us that reducing risk, building cloud expertise, and lowering costs are important factors when choosing that infrastructure.

Today, we are announcing the general availability of the Migration Acceleration Program (MAP) for Windows, a comprehensive program that helps you execute large-scale migrations and modernizations of your Windows workloads on AWS. We have millions of customers on AWS, and have spent the last 11 years helping Windows customers successfully move to our cloud. We’ve built a proven methodology, providing you with AWS services, tools, and expertise to help simplify the migration of your Windows workloads to AWS. MAP for Windows provides prescriptive guidance, consulting support from experts, tools, trainings, and service credits to help reduce the risk and cost of migrating to the cloud as you embark on your migration journey.

MAP for Windows also helps you along the pathways to modernize current and legacy versions of Windows Server and SQL Server to cloud native and open source solutions, enabling you to break free from commercial licensing costs. With the strong price-performance of open-source solutions and the proven reliability of AWS, you can innovate quickly while reducing your risk.

With MAP for Windows, you follow a simple three-step migration process:

  1. Assess Your Readiness: The migration readiness assessment helps you identify gaps along the six dimensions of the AWS Cloud Adoption Framework: business, process, people, platform, operations, and security. This assessment helps customers identify capabilities required in the migration. MAP for Windows also includes an Optimization and Licensing Assessment, which provides recommendations on how to optimize your licenses on AWS.
  2. Mobilize Your Resources: The mobilize phase helps you build an operational foundation for your migration, with the goal of fixing the capability gaps identified in the assessment phase. The mobilize phase accelerates your migration decisions by providing clear guidance on migration plans that improve the success of your migration.
  3. Migrate or Modernize Your Workloads: APN Partners and the AWS ProServe team help customers execute the large-scale migration plan developed during the mobilize phase. MAP for Windows also offers financial incentives to help you offset migration costs such as labor, training, and the expense of sometimes running two environments in parallel.

MAP for Windows includes support from AWS Professional Services and AWS Migration Competency Partners, such as Rackspace, 2nd Watch, Accenture, Cloudreach, Enimbos Global Services, Onica, and Slalom. Our MAP for Windows partners have successfully demonstrated completion of multiple large-scale migrations to AWS. They have received the APN Migration Competency Partner and the Microsoft Workloads Competency designations.

Learn about what MAP for Windows can do for you on this page. Learn also about the migration experiences of AWS customers. And contact us to discuss your Windows migration or modernization initiatives and apply to MAP for Windows.

About the Author

Fred Wurden is the GM of Enterprise Engineering (Windows, VMware, Red Hat, SAP, benchmarking) working to make AWS the most customer-centric cloud platform on Earth. Prior to AWS, Fred worked at Microsoft for 17 years and held positions, including: EU/DOJ engineering compliance for Windows and Azure, interoperability principles and partner engagements, and open source engineering. He lives with his wife and a few four-legged friends since his kids are all in college now.

Migrating ASP.NET applications to Elastic Beanstalk with Windows Web Application Migration Assistant

Post Syndicated from Sepehr Samiei original https://aws.amazon.com/blogs/devops/migrating-asp-net-applications-to-elastic-beanstalk-with-windows-web-application-migration-assistant/

This blog post discusses benefits of using AWS Elastic Beanstalk as a business application modernization tool, and walks you through how to use the new Windows Web Application Migration Assistant. Businesses and organizations in all types of industries are migrating their workloads to the Cloud in ever-increasing numbers. Among migrated workloads, websites hosted on Internet Information Services (IIS) on Windows Server is a common pattern. Developers and IT teams that manage these workloads want their web applications to scale seamlessly based on continuously changing load, without having to guess or forecast future demand. They want their infrastructure, including the IIS server and Windows Server operating system, to automatically receive latest patches and updates, and remain compliant with their baseline policies without having to spend lots of manual effort or heavily invest in expensive commercial products. Additionally, many businesses want to tap into the global infrastructure, security, and resiliency offered by the AWS Cloud.

AWS Elastic Beanstalk is designed to address these requirements. Elastic Beanstalk enables you to focus on your application code as it handles provisioning, maintenance, health-check, autoscaling, and other common tasks necessary to keep your application running. Now, the Windows Web Application Migration Assistant makes it easier than ever to migrate ASP.NET and ASP.NET Core web applications from IIS on Windows servers running on-premises or in another cloud to Elastic Beanstalk. This blog post discusses how you can use this open-source tool to automate your migration efforts and expedite your first step into the AWS Cloud.

 

What is Windows Web Application Migration Assistant?

The Windows Web Application Migration Assistant is an interactive PowerShell script that migrates entire websites and their configurations to Elastic Beanstalk. The migration assistant is available as an open-source project on GitHub, and can migrate entire ASP.NET applications running on .NET Framework on Windows, as well as ASP.NET Core applications. Elastic Beanstalk also runs ASP.NET Core applications on the Windows platform. For ASP.NET applications, you can use this tool to migrate both classic Web Forms applications and ASP.NET MVC apps.

Windows Web Application Migration Assistant workflow

The migration assistant interactively walks you through the steps shown in the following diagram during the migration process.

Workflow of using Windows Web Application Migration Assistant. Showing steps as discover, select, DB connection string, generate, and deploy.

  1. Discover: The script automatically discovers and lists any websites hosted on the local IIS server.

  2. Select: You choose the websites that you wish to migrate.

  3. Database connection string: The script checks each selected website's web.config and iterates through its connection string settings. You are then prompted to update the connection strings so that they point to the migrated database instance.

  4. Generate: The script generates an Elastic Beanstalk deployment bundle. This bundle includes the web application binaries and instructions for Elastic Beanstalk on how it should host the application.

  5. Deploy: Finally, the script uploads the deployment bundle to the Elastic Beanstalk application for hosting.

Elastic Beanstalk application and other concepts

The most fundamental concept in Elastic Beanstalk is the application. An Elastic Beanstalk application is a logical collection of lower-level components, such as environments and versions. For example, you might have an Elastic Beanstalk application called MarketingApp, which contains two Elastic Beanstalk environments: Test and Production. Each of these environments can host one of the different versions of your application, as shown in the following diagram.

Elastic Beanstalk application, environments, and versions.

As you can see in the diagram, although it’s possible to have multiple versions of the same application uploaded to Elastic Beanstalk, only one version is deployed to each environment at any point in time. Elastic Beanstalk enables rapid rollback and roll-forward between different versions, as well as swapping environments for Blue-Green deployments.
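
To make these relationships concrete, here is a hedged sketch using the AWSSDK.ElasticBeanstalk NuGet package. The application, environment, version, bucket, and key names are placeholders, and the solution stack name is simply the one used later in this post; the console or the migration assistant achieves the same result without any code.

using Amazon.ElasticBeanstalk;
using Amazon.ElasticBeanstalk.Model;
using System.Threading.Tasks;

public static class BeanstalkConceptsSketch
{
    public static async Task CreateAsync()
    {
        var eb = new AmazonElasticBeanstalkClient();

        // The application is the top-level container for versions and environments.
        await eb.CreateApplicationAsync(new CreateApplicationRequest
        {
            ApplicationName = "MarketingApp"
        });

        // A version is a source bundle (a zip file in Amazon S3) registered under the application.
        await eb.CreateApplicationVersionAsync(new CreateApplicationVersionRequest
        {
            ApplicationName = "MarketingApp",
            VersionLabel = "v1",
            SourceBundle = new S3Location { S3Bucket = "my-bucket", S3Key = "marketingapp-v1.zip" }
        });

        // An environment runs exactly one version at any point in time.
        await eb.CreateEnvironmentAsync(new CreateEnvironmentRequest
        {
            ApplicationName = "MarketingApp",
            EnvironmentName = "Test",
            VersionLabel = "v1",
            SolutionStackName = "64bit Windows Server 2016 v2.3.0 running IIS 10.0"
        });
    }
}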

Migrating databases

The Windows Web Application Migration Assistant is specifically designed to migrate web applications only. However, it's quite likely that web applications have dependencies on one or more backend databases. You can use other tools and services provided by AWS to migrate databases; depending on your requirements, there are multiple options available.

The Windows Web Application Migration Assistant allows you to migrate your database(s) during or after migration of web applications. In either case, your migrated web applications should be modified to point any connection strings to the new database endpoints. The migration assistant script also helps you to modify connection strings.

Prerequisites

Ensure your server meets the following software requirements.

  1. IIS 8 and above running on Windows Server 2012 and above.
  2. PowerShell 3.0 and above: To verify the version of PowerShell on your server, open a PowerShell window and enter this command: $PSVersionTable.PSVersion
  3. Microsoft Web Deploy version 3.6 or above: You can verify the installed version from list of programs in Add or Remove Programs in Windows Control Panel.
  4. .NET Framework (supported versions: 4.x, 2.0, 1.x) or .NET Core (supported versions: 3.0.0, 2.2.8, 2.1.14) installed on your web server.
  5. AWSPowerShell module for MS PowerShell: You can use the following cmdlet to install it: Install-Module AWSPowerShell. Make sure the latest version of AWSPowerShell module is installed. If you have an older version already installed, you can first uninstall it using following cmdlet: Uninstall-Module AWSPowerShell -AllVersions -Force
  6. WebAdministration module for MS PowerShell: You can check for this dependency by invoking the PowerShell command Import-Module WebAdministration.
  7. Since the script automatically inspects IIS and web application settings, it needs to run as a local administrator on your web server. Make sure you have appropriate credentials to run it. If you are logged in with a user other than a local Administrator, open a PowerShell session window as an administrator.
  8. Finally, make sure your server has unrestricted internet connectivity to the AWS IP range.

Configure the destination AWS account

You need to have an AWS Identity and Access Management (IAM) user in your destination AWS account. Follow the instructions to create a new IAM user. Your user needs credentials for both the AWS Console and programmatic access, so remember to check both boxes, as shown in the following screenshot.

AWS Console screen for adding a new IAM user.

On the permissions page, attach existing policies directly to this user, as shown in the following screenshot. Select the following two AWS-managed policies:

  • IAMReadOnlyAccess
  • AWSElasticBeanstalkFullAccess

Screen to add existing policies to a user.

Before finishing the user creation, get the user’s AccessKey and SecretKey from the console, as shown in the following screenshot.

Success message for creating a new user, showing secret keys and credentials.

Open a PowerShell terminal on your Windows Server and invoke the following two commands.

PS C:\> Import-Module AWSPowerShell
PS C:\> Set-AWSCredential -AccessKey {access_key_of_the_user} -SecretKey {secret_key_of_the_user} -StoreAs {profile_name} -ProfileLocation {optional - path_to_the_new_profile_file}

 

The parameter {profile_name} is an arbitrary name to label these credentials. The optional parameter path_to_the_new_profile_file can be used to store the encrypted credentials profile in a customized path.

Setting up

Once you've verified the checklist for the web server and configured the destination AWS account, setup is as simple as:

  1. Download the script from GitHub, using clone or the download option of the repository, and place the extracted content on the local file system of your web server.
  2. Optionally, edit the utils/settings.txt JSON file, and set the following two variables (if not set, the script will use the default values of the AWS PowerShell profile name and path):
    • defaultAwsProfileFileLocation : {path_to_the_new_profile_file}
    • defaultAwsProfileName : {profile_name }

You’re now ready to run the migration assistant script to migrate your web applications.

Running the migration assistant

The migration assistant is a PowerShell script. Open a PowerShell window as administrator in your web server and run the MigrateIISWebsiteToElasticBeanstalk.ps1 script. Assuming you have copied the content of the GitHub package to your C: drive, launching the script looks as follows:

PS C:\> .\MigrateIISWebsiteToElasticBeanstalk.ps1

The migration assistant prompts you to enter the location of your AWS credentials file. If you have used the optional parameter to store your IAM user’s credentials in a file, you can enter its path here. Otherwise, choose Enter to skip this step. The assistant then prompts you to enter the profile name of your credentials. Enter the same {profile_name} that you had used for Set-AWSCredentials.

Having acquired access credentials, the assistant script then prompts you to enter an AWS Region. Enter the identifier of the AWS Region in which you want your migrated web application to run. If you are not sure which values can be used, check the list of available Regions using following cmdlet (in another PowerShell window):

Get-AWSRegion

For example, I am using ap-southeast-2 to deploy into the Sydney Region.

After specifying the Region, the migration assistant script starts scanning your web server for deployed web applications. You can then choose one of the discovered web applications by entering its application number as shown in the list.

Migration assistant lists web sites hosted in IIS and prompts to select one.

So far, you’ve kickstarted the migration assistant and selected a web application to migrate. At this point, the migration assistant begins to assess and analyze your web application. The next step is to update the connection strings for any backend databases.

 

Modifying connection strings

Once you enter the number of the web application that you want to migrate, the assistant takes a snapshot of your environment and lists any connection strings used by your application. To update a connection string, enter its number. Optionally, you could choose Enter to skip this step, but remember to manually update any connection strings before bringing the web application online.

Enter the number of the connection string you would like to update, or press ENTER:

Next, the assistant pauses and allows you to migrate your database. Choose Enter to continue.

Please separately migrate your database, if needed.

The assistant then prompts you to update any connection strings selected above. If you choose M, you can update the string manually by editing it in the file path provided by the migration assistant. Otherwise, paste the contents of the new connection string and choose Enter.

Enter "M" to manually edit the file containing the connection string, or paste the replacement string and press ENTER (default M) :

The script then continues with configuration parameters for the target Elastic Beanstalk environment.

Configuring the target Elastic Beanstalk environment using the script

The migration assistant script prompts you to enter a name for the Elastic Beanstalk application.

Please enter the name of your new EB application.

The name has to be unique:

By default, the migration assistant automatically creates an environment with a name prefixed with MigrationRun. It also creates a package containing your web application files. This package is called application source bundle. The migration assistant uploads this source bundle as a version inside the Elastic Beanstalk application.

Elastic Beanstalk automatically provisions an Amazon EC2 instance to host your web application. The default Amazon EC2 instance type is t3.medium. When the migration assistant prompts you to enter the instance type, you can choose Enter to accept the default size, or you can enter any other Amazon EC2 Instance Type.

Enter the instance type (default t3.medium) :

Elastic Beanstalk provides preconfigured solution stacks. The migration assistant script prompts you to select one of these solution stacks. These stacks come preconfigured with an optimized operating system, application server, and web server to host your application, and Elastic Beanstalk can also handle their ongoing maintenance by offering enhanced health checks, managed updates, and immutable deployments. These extra benefits are available with version 2 solution stacks (that is, v2.x.x, as in "64bit Windows Server 2016 v2.3.0 running IIS 10.0").

The migration assistant prompts you to enter the name of the Windows Server Elastic Beanstalk solution stack. For a list of all supported solution stacks, see .NET on Windows Server with IIS in the AWS Elastic Beanstalk Platforms guide.

Solution stack name (default 64bit Windows Server 2016 v2.3.0 running IIS 10.0):

That is the final step. Now, the migration assistant starts to do its magic and after a few minutes you should be able to see your application running in Elastic Beanstalk.

Advanced deployment options

The Windows Web Application Migration Assistant uses all the default values to quickly create a new Elastic Beanstalk application. However, you might want to change some of these default values to address your more specific requirements. For example, you may need to deploy your web application inside an Amazon Virtual Private Cloud (VPC), set up advanced health monitoring, or enable managed updates features.

To do this, you can stop running the migration assistant script after receiving this message:

An application bundle was successfully generated.

Now follow these steps:

  1. Sign in to your AWS console and go to the Elastic Beanstalk app creation page.
  2. From the top right corner of the page, make sure the correct AWS Region is selected.
  3. Create the application by following the instructions on the page.
  4. Choose Upload your code in the Application code section and upload the deployment bundle generated by the migration assistant. Go to your on-premises web server and open the folder to which you copied the migration assistant script. The deployment bundle is a .zip file placed in the MigrationRun-xxxxxxx/output folder. Use this .zip file as application code and upload it to Elastic Beanstalk.
  5. Choose the Configure more options button to set advanced settings as per your requirements.
  6.  Select the Create application button to instruct Elastic Beanstalk to start provisioning resources and deploy your application.

If your web application has to use existing SSL certificates, manually export those certificates from Windows IIS and then import them into AWS Certificate Manager (ACM) in your AWS account. Elastic Beanstalk creates an Elastic Load Balancer (ELB) for your application. You can configure this ELB to use your imported certificates. This way HTTPS connections will be terminated on the ELB, and you don’t need further configuration overhead on the web server. For more details, see Configuring HTTPS for your Elastic Beanstalk Environment.
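
If you would rather script the certificate import than use the console, a minimal sketch using the AWSSDK.CertificateManager package might look like the following. The PEM file paths are placeholders for the certificate, key, and chain exported from IIS; this is one possible approach, not a required part of the migration assistant.

using Amazon.CertificateManager;
using Amazon.CertificateManager.Model;
using System.IO;
using System.Threading.Tasks;

public static class AcmImportSketch
{
    public static async Task<string> ImportAsync()
    {
        using (var acm = new AmazonCertificateManagerClient())
        {
            // Certificate, private key, and chain exported from IIS and converted to PEM.
            var request = new ImportCertificateRequest
            {
                Certificate = new MemoryStream(File.ReadAllBytes("site-cert.pem")),
                PrivateKey = new MemoryStream(File.ReadAllBytes("site-key.pem")),
                CertificateChain = new MemoryStream(File.ReadAllBytes("ca-chain.pem"))
            };

            var response = await acm.ImportCertificateAsync(request);

            // Reference this ARN on the HTTPS listener of the environment's load balancer.
            return response.CertificateArn;
        }
    }
}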

Also keep in mind that Elastic Beanstalk supports a maximum of one HTTP and one HTTPS endpoint. No matter what port numbers are used for these endpoints within the on-premises IIS, Elastic Beanstalk binds them to ports 80 and 443, respectively.

Any software dependencies outside of the website directory, such as the Global Assembly Cache (GAC), have to be configured manually on the target environment.

Although Amazon EC2 supports Active Directory (AD)—both the managed AD service provided by AWS Directory Service and customer-managed AD—Elastic Beanstalk currently does not support web servers being joined to an AD domain. If your web application needs the web server to be joined to an Active Directory domain, either alter the application to eliminate the dependency, or use other approaches to migrate your application onto Amazon EC2 instances without using Elastic Beanstalk.

Cleanup

If you use the steps described in this post to test the migration assistant, it creates resources in your AWS account. You keep getting charged for these resources if they go beyond the free tier limit. To avoid that, you can delete resources to stop being billed for them. This is as easy as going to the Elastic Beanstalk console and deleting any applications that you no longer want.

Conclusion

This blog post discussed how you can use Windows Web Applications Migration Assistant to simplify the migration of your existing web applications running on Windows into Elastic Beanstalk. The migration assistant provides an automated script that’s based on best practices and automates commonly used migration steps. You don’t need in-depth skills with AWS to use this tool. Furthermore, by following the steps as performed by the Windows Web Application Migration Assistant tool and Elastic Beanstalk, you can acquire more in-depth skills and knowledge about AWS services and capabilities in practice and as your workloads move into the Cloud. These skills are essential to the ongoing modernization and evolution of your workloads in the Cloud.

Estimating Amazon EC2 instance needed when migrating ERP from IBM Power

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/estimating-amazon-ec2-instance-needed-when-migrating-erp-from-ibm-power/

This post courtesy of CK Tan, AWS, Enterprise Migration Architect – APAC

Today there are many enterprise customers who are keen on migrating their mission critical Enterprise Resource Planning (ERP) applications such as Oracle E-Business Suite (Oracle EBS) or SAP running on IBM Power System from on-premises to the AWS cloud platform. They realize running critical ERP applications on AWS will enable their business to be more agile, cost-effective, and secure than on premises. AWS provides cloud native services that streamline your ability to adopt emerging technologies, enabling greater innovation, and faster time-to-value.

However, it is essential to note that IBM Power System and Amazon Elastic Compute Cloud (Amazon EC2) use different CPU processor architectures. Amazon EC2 instances run on either x86 or ARM-based processors, while IBM Power System runs on IBM POWER processors. As a result, there are no direct performance benchmarks mapping the two platforms, or sizing guidance for enterprise customers who intend to migrate their applications running on IBM Power System to Amazon EC2.

The purpose of this blog is to share a methodology, along with an example, for estimating the size of Amazon EC2 x86 instances needed for mission critical applications currently running on IBM Power System.

Methodology

In order to normalize CPU performance between IBM POWER and Intel x86 processors, we can refer to the SAP Application Performance Standard (SAPS) for benchmarking purposes. SAPS is a hardware-independent unit of measurement that describes the performance of a system configuration in the SAP environment. It is derived from the Sales and Distribution (SD) two-tier benchmark, where 100 SAPS is defined as 2,000 fully business processed order line items per hour.

In technical terms, this throughput is achieved by processing 6,000 dialog steps (screen changes), 2,000 postings per hour in the SD Benchmark, or 2,400 SAP transactions.

In the SD benchmark, fully business processed means the full business process of an order line item: creating the order, creating a delivery note for the order, displaying the order, changing the delivery, posting a goods issue, listing orders, and creating an invoice.

In this methodology, we compare performance sizing by using SAPS, a figure that is not published for other ERP software such as Oracle EBS. However, we believe this methodology is the most pertinent indirect performance benchmark from an ERP software perspective, because most enterprise businesses have a similar order-processing workflow.

Discussion by Example

We will walk you through a real example to demonstrate how you can translate and migrate an SAP ERP 6.0 or Oracle EBS application database that is running on IBM Power 795 (IBM P795) with IBM AIX from on-premises to AWS cloud environment by choosing the right size of Amazon EC2 instances.

Step 1: IBM P795 System Performance Analysis

Users can leverage the IBM nmon Analyzer, a free tool that produces AIX performance analysis reports for IBM Power Systems. The tool is designed to work with the latest version of nmon, but it is also tested with older versions for backwards compatibility. The tool is updated whenever nmon is updated, and at irregular intervals for new functions. It allows users to:

  • View the data in spreadsheet form
  • Eliminate “bad” data
  • Automatically produce graphs for presentation to clients

Below is the performance analysis captured for CPU, memory, disk throughput, disk input/output (I/O), and network I/O during the database's peak demand at month-end closing, using the IBM nmon Analyzer.

Figure 1: Actual Number of Physical Cores of CPU Utilization Analysis on IBM P795

 

Figure 2: Memory Usage Analysis on IBM P795

 

Figure 3: Total Disk Throughput Analysis on IBM P795

 

Figure 4: Total Disk I/O on IBM P795

 

Figure 5: Network I/O Analysis on IBM P795

Step 2: CPU Normalization

Users can find the SAPS results for different hardware manufacturers from this link. Table 1 below shows how to calculate the processing power of a single CPU core for the relevant systems. We have chosen the Amazon EC2 instances r5.metal and r4.16xlarge as potential candidates because they provide memory sizing that best fits the current demand of 408 GB (refer to Figure 2).

Instance Type   | SAPS (2-Tier) | Memory (GB) | Total CPU Cores | SAPS/Core
IBM P795        | 688,630       | 4,096       | 256             | 2,690
EC2 r5.metal    | 140,000       | 768         | 48              | 2,917
EC2 r4.16xlarge | 76,400        | 512         | 36              | 2,122

Table 1: Calculate the SAPS Processing Power for Single Core of Physical CPU

In order to calculate the normalization factor for a targeted Amazon EC2 instance with respect to the source IBM Power system, we apply Equation 1:

Normalization Factor = [SAPS per core of Amazon EC2] / [SAPS per core of IBM Power System]    (1)

Therefore, by inserting the SAPS/Core values calculated in Table 1 into Equation (1):

  • The normalization factor for 1 core of Amazon EC2 of r5.metal to IBM P795 is equivalent to 1.08
  • The normalization factor for 1 core of Amazon EC2 of r4.16xlarge to IBM P795 is equivalent to 0.79

Table 2 below shows how to calculate the normalized total number of CPU cores allocated to each system. The normalized total number of CPU cores of the selected Amazon EC2 instance must be greater than or equal to the assigned entitlement capacity of the IBM P795 in this case.

 

Instance Type   | Normalization Factor (A) | Total # CPU Cores (B) | Normalized Total # CPU Cores (A x B)
IBM P795        | 1.00                     | 32                    | 32
EC2 r5.metal    | 1.08                     | 48                    | 52
EC2 r4.16xlarge | 0.79                     | 36                    | 28

Table 2: Normalized Total Number of CPU Core

Figure 1 indicates that the peak processing demand of the ERP database running on IBM P795 is 29 physical CPU cores, compared to the assigned entitlement capacity of 32. As a result, the Amazon EC2 r5.metal instance is the right candidate; it delivers sufficient processing power with enough buffer for business growth over the next 12 to 24 months.
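
As a sanity check on the arithmetic, the short sketch below reproduces the numbers in Tables 1 and 2. Every SAPS figure and core count is copied from those tables, and the 29-core peak comes from Figure 1.

using System;

class SizingSketch
{
    static void Main()
    {
        // SAPS per physical core (Table 1).
        double sapsPerCoreP795    = 688630.0 / 256; // ~2,690
        double sapsPerCoreR5Metal = 140000.0 / 48;  // ~2,917

        // Equation (1): normalization factor relative to IBM P795.
        double factorR5Metal = sapsPerCoreR5Metal / sapsPerCoreP795; // ~1.08

        // Normalized core count for r5.metal (Table 2), compared with the observed peak.
        double normalizedCores = factorR5Metal * 48; // ~52
        Console.WriteLine($"r5.metal: ~{normalizedCores:F0} normalized cores vs. a 29-core peak demand");
    }
}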

Step 3: Amazon EC2 Right Sizing

Next, we need to check that other specifications, such as network I/O, disk throughput, and disk I/O, meet the current peak demand of the application. Table 3 shows the relevant specifications of r5.metal. It is clear from Table 3 that r5.metal meets all the demand requirements of the ERP database that is running on IBM P795.

Specification              | Amazon EC2 r5.metal
Physical CPU cores         | 48 > 29 (current CPU utilization)
RAM (GiB)                  | 768 > 408 (current memory usage)
Network performance (GBps) | 3 > 0.3 (current network I/O throughput)
Dedicated EBS BW (GBps)    | ~2.4 > 2.0 (current disk throughput)
Maximum IOPS (16 KiB I/O)  | 80,000 > 25,000 (current disk I/O consumption)

Table 3: EC2 r5.metal Specifications

Conclusion

The methodology and example discussed in this blog give a pragmatic estimate for selecting the right type and size of Amazon EC2 instance when you plan to migrate a mission critical ERP application from IBM Power System in an on-premises data center to the AWS Cloud. This methodology has been applied to many enterprise cloud transformation projects and delivered more predictable performance with significant TCO savings. Additionally, this methodology can be adopted for capacity planning and helps enterprises establish strong business justifications for heterogeneous platform migration.

New AWS Program to Help Future-proof Your End-of-Support Windows Server Applications

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-program-to-future-proof-windows-server-applications/

If you have been in business a while, you might find yourself in a situation where you have legacy Windows Server applications, critical to your business, that you can't move to a newer, supported version of Windows Server. Customers give us many reasons why they can't move these legacy applications: perhaps the application has dependencies on a specific version of Windows Server, the customer has no expertise with the application, or the installation media or source code has been lost.

On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. Having an application that can run only on an unsupported version of Windows Server is problematic as you will no longer get free security patch updates, leaving you vulnerable to security and compliance risks. It is also difficult to move an application like this to the cloud without significant refactoring.

If you have legacy applications that run only on unsupported versions of Windows Server, then it’s often tempting to spend money on extended support. However, this is simply delaying the inevitable and customers tell us that they want a long-term solution which future-proofs their legacy applications.

We have a long-term solution
To help you with this, today, we are launching the AWS End-of-Support Migration Program (EMP) for Windows Server. This new program combines technology with expert guidance, to migrate your legacy applications running on outdated versions of Windows Server to newer, supported versions on AWS.

If you are facing Windows Server 2008 end-of-support, this program offers a unique solution and path forward that solves the issue for the long-term as opposed to just delaying the decision for another day. It is important to note, you don’t need to make a single code change in the legacy application and you also do not require the original installation media or source code.

In a moment, I will demonstrate how the technology portion of this program works. However, you should know that you will need to engage an AWS Partner or AWS Professional Services to actually conduct the migration; the product page lists the network of partners who you can speak to about pricing and your specific requirements.

So let us look at how this works. I'll run you through the steps that a partner might take when they migrate your application and the installation media is available.

On Windows Server 2016, I attempt to run the setup file that would install Microsoft SQL Server 2000. I am prompted by Windows that this application can’t run on this version of Windows Server. In this case, I also cannot run the application in compatibility mode.

I then move over to an old Windows Server 2003 machine that is running locally in my office. I run a tool that is a key piece of technology used in the AWS EMP for Windows Server; we use this tool to migrate the application, decoupling it from the underlying operating system. First, I have to select a folder in which to place my completed application package after the tool completes.

Next, I start a capture which takes a snapshot of my machine. This is used later to understand what has been changed by the installation process.

The tool then prompts me to install the application that I want to migrate. The tool is listening and capturing all of the changes that take place on the machine.

I run the application and go through the installation process, setting up the application as I would normally do.

The tool identifies all of the shortcuts created by the application installer, and I am asked to run the application using one of these entry points and to run through a typical workflow inside the application. The whole time, the tool is watching what process and system-level APIs are called, so it creates a picture of the application’s dependencies.

Once the capture is complete, I am presented with all of the files changed by the monitored application. I then need to work through these and manually confirm that the folders are indeed part of the installation process.

I go through the same process for registry keys. I manually verify that the registry keys are indeed related to the installation.

Finally, I give the package a name. The package includes all the application files, run times, components, deployment tools, as well as an engine that redirects the API calls from your application to files within the package. This resolves the dependencies and decouples the application from the underlying OS.

Once the package is complete, there are a few manual configuration adjustments that will need to be made. I won’t show these, but I mention it as it highlights why it’s required to have an AWS Partner or Professional Services conduct this process – some of the migration process requires in-depth knowledge and experience.

I then move over to a Windows Server 2016, and run the packaged application. Below you can see that my application is now running on a server it was previously incompatible with.

The AWS EMP for Windows Server supports even your most complex applications, including the ones with tight dependencies on older versions of the operating system, registries, libraries, and other files.

If you want to get started with future-proofing your legacy Windows Server workloads on AWS, head over to the AWS EMP for Windows Server page, where we list partners that can help you on your migration journey. You can also contact us directly about this program by completing this form.

— Martin

 

Migrating Azure VM to AWS using AWS SMS Connector for Azure

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/migrating-azure-vm-to-aws-using-aws-sms-connector-for-azure/

AWS SMS is an agentless service that facilitates and expedites the migration of your existing workloads to AWS. The service enables you to automate, schedule, and monitor incremental replications of active server volumes, which facilitates large-scale server migration coordination. Previously, you could only migrate virtual machines (VMs) running in VMware vSphere and Microsoft Hyper-V environments. Now, you can use the simplicity and ease of AWS Server Migration Service (SMS) to migrate virtual machines running on Microsoft Azure. You can discover Azure VMs, group them into applications, and migrate a group of applications as a single unit without having to coordinate the replication of the individual servers or decouple application dependencies. SMS significantly reduces application migration time and decreases the risk of errors in the migration process.

 

This post takes you step-by-step through how to provision the SMS virtual machine on Microsoft Azure, discover the virtual machines in a Microsoft Azure subscription, create a replication job, and finally launch the instance on AWS.

 

1 – Provisioning the SMS virtual machine

To provision your SMS virtual machine on Microsoft Azure, complete the following steps.

  1. Download the installation script and its hash files, listed under Step 1 of Installing the Server Migration Connector on Azure.
File                | URL
Installation script | https://s3.amazonaws.com/sms-connector/aws-sms-azure-setup.ps1
MD5 hash            | https://s3.amazonaws.com/sms-connector/aws-sms-azure-setup.ps1.md5
SHA256 hash         | https://s3.amazonaws.com/sms-connector/aws-sms-azure-setup.ps1.sha256

 

  2. To validate the integrity of the files, compare their checksums against the downloaded hash files. You can use PowerShell 5.1 or newer.

 

2.1 To validate the MD5 hash of the aws-sms-azure-setup.ps1 script, run the following command and wait for an output similar to the following result:

Command to validate the MD5 hash of the aws-sms-azure-setup.ps1 script

2.2 To validate the SHA256 hash of the aws-sms-azure-setup.ps1 file, run the following command and wait for an output similar to the following result:

Command to validate the SHA256 hash of the aws-sms-azure-setup.ps1 file

2.3 Compare the returned values by opening the aws-sms-azure-setup.ps1.md5 and aws-sms-azure-setup.ps1.sha256 files in your preferred text editor.

2.4 To validate if the PowerShell script has a valid Amazon Web Services signature, run the following command and wait for an output similar to the following result:

Command to validate if the PowerShell script has a valid Amazon Web Services signature

 

  3. Before running the script that provisions the SMS virtual machine, you must have an Azure Virtual Network and an Azure Storage Account in which SMS temporarily stores metadata for the tasks it performs against the Microsoft Azure subscription. A good recommendation is to use the same Azure Virtual Network as the Azure virtual machines being migrated, since the SMS virtual machine performs REST API communications with AWS endpoints as well as the Azure cloud services. It is not necessary for the SMS virtual machine to have a public IP or internet inbound rules.

 

4. Run the installation script: .\aws-sms-azure-setup.ps1

Screenshot of running the installation script

  5. Enter the names of the existing Storage Account and Azure Virtual Network in the subscription:

Screenshot of where to enter Storage Account Name and Azure Virtual Network

  6. The Microsoft Azure modules are imported into the local PowerShell session, and you receive a prompt for credentials to access the subscription.

Azure login credentials

  7. A summary of the created features appears, similar to the following:

Screenshot of created features

  8. Wait for the process to complete. It may take a few minutes:

screenshot of processing jobs

  9. After the provisioning completes, the script prints an output containing the Object Id of the System Assigned Identity and the private IP. Save this information; it is used to register the connector with the SMS service in step 23.

Screenshot of the information to save

  10. To check the provisioned resources, log in to the Microsoft Azure portal and select the Resource Group option. The provided AWS script created a role in Microsoft Azure IAM that allows the virtual machine to use the necessary services through REST APIs over HTTPS and to be authenticated via the Azure Instance Metadata Service (IMDS).

Screenshot of provisioned resources log in Microsoft Azure Portal

  11. As a requirement, you need to create an IAM user that has the necessary permissions for the SMS service to perform the migration. To do this, log in to your AWS account at https://aws.amazon.com/console, and under Services, select IAM. Then select Users, and click Add user.

Screenshot of AWS console. add user

 

  12. On the Add user page, enter a user name and check the option Programmatic access. Click Next: Permissions.

Screenshot of adding a username

  13. Attach the existing policy named ServerMigrationConnector. This policy allows the AWS connector to connect to and execute API requests against AWS. Click Next: Tags.

Adding policy ServerMigrationConnector

  14. Optionally add tags to the user. Click Next: Review.

Screenshot of option to add tags to the user

15. Click Create User and save the Access Key and Secret Access Key. This information is going to be used during the AWS SMS Connector setup.

Create User and save the access key and secret access key

 

  16. From a computer that has access to the Azure Virtual Network, access the SMS virtual machine configuration using a browser and the private IP previously recorded from the output of the script. In this example, the URL is https://10.0.0.4.

Screenshot of accessing the SMS Virtual Machine configuration

  17. On the main page of the SMS virtual machine, click Get Started Now.

Screenshot of the SMS virtual machine start page

  18. Read and accept the terms of the contract, then click Next.

Screenshot of accepting terms of contract

  19. Create a password that will be used later to log in to the connector management console and click Next.

Screenshot of creating a password

  20. Review the Network Info and click Next.

Screenshot of reviewing the network info

  21. Choose whether you would like to opt in to sending anonymous log data to AWS, then click Next.

Screenshot of option to add log data to AWS

  22. Insert the Access Key and Secret Access Key of the IAM user to which only the ServerMigrationConnector policy is attached. Also, select the Region in which the SMS endpoint will be used and click Next. This access key was created in steps 11 to 15.

Selet AWS Region, and Insert Access Key and Secret Key

  23. Enter the Object Id of the System Assigned Identity copied in step 9 and click Next.

Enter Object Id of System Assigned Identify

  24. Congratulations, you have successfully configured the Azure connector. Click Go to connector dashboard.

Screenshot of the successful configuration of the Azure connector

  25. Verify that the connector status is HEALTHY by clicking Connectors on the menu.

Screenshot of verifying that the connector status is healthy

 

2 – Replicating Azure Virtual Machines to Amazon Web Services

  1. Access the SMS console and go to the Servers option. Click Import Server Catalog, or Re-Import Server Catalog if it has been run previously.

Screenshot of SMS console and servers option

  2. Select the Azure virtual machines to be migrated and click Create Replication Job.

Screenshot of Azure virtual machines migration

  3. Select the type of licensing that best suits your environment:

– Auto (Current licensing autodetection)

– AWS (License Included)

– BYOL (Bring Your Own License).
See options: https://aws.amazon.com/windows/resources/licensing/

Screenshot of best type of licensing for your environment

  4. Select the appropriate replication frequency, when the replication should start, and the IAM service role. You can leave the role blank and the SMS service will use the built-in service role "sms".

Screenshot of replication jobs and IAM service role

  5. A summary of the settings is displayed. Review it and click Create.
    Screenshot of the summary of settings displayed
  6. In the SMS console, go to the Replication Jobs option and follow the replication job status:

Overview of replication jobs

  7. After completion, access the EC2 console and go to AMIs; the AMIs generated by SMS appear in this list. In the example below, several AMIs were generated because the replication frequency is 1 hour.

List of AMIs generated by SMS

  8. Now navigate to the SMS console, click Launch Instance, and follow the on-screen process for creating a new Amazon EC2 instance.

SMS console and Launch Instance screenshot

 

3 – Conclusion

This solution provides a simple, agentless, non-intrusive way to migrate Azure virtual machines to AWS with the AWS Server Migration Service.

 

For more about Windows workloads on AWS, go to http://aws.amazon.com/windows

 

About the Author

Photo of the Author

 

 

Marcio Morales is a Senior Solution Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on running their Microsoft workloads on AWS.

How to migrate symmetric exportable keys from AWS CloudHSM Classic to AWS CloudHSM

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/migrate-symmetric-exportable-keys-aws-cloudhsm-classic-aws-cloudhsm/

In August 2017, we announced the new AWS CloudHSM service, which offers many improvements over AWS CloudHSM Classic (for clarity in this post, I refer to the services as New CloudHSM and CloudHSM Classic). These advantages in security, scalability, usability, and economy include FIPS 140-2 Level 3 certification, fully managed high availability and backup, a management console, and lower costs.

Now, we turn another page. The Luna 5 HSMs used for CloudHSM Classic are reaching end of life, and the CloudHSM Classic service is subsequently being decommissioned, so CloudHSM Classic users must migrate their cryptographic key material to New CloudHSM.

In this post, I’ll show you how to use the RSA OAEP (optimal asymmetric encryption padding) wrapping mechanism, which was introduced in the CloudHSM client version 2.0.0, to move key material from CloudHSM Classic to New CloudHSM without exposing the plain text of the key material outside the HSM boundaries. You’ll use an RSA public key to wrap the key material (export it in encrypted form) on CloudHSM Classic, then use the corresponding RSA Private Key to unwrap it on New CloudHSM.

NOTE: This solution only works for symmetric exportable keys. Asymmetric keys on CloudHSM Classic can’t be exported. To replace non-exportable and asymmetric keys, you must generate new keys on New CloudHSM, then use the old keys to decrypt and the new keys to re-encrypt your data.

Solution overview

My solution shows you how to use the CKDemo utility on CloudHSM Classic, and key_mgmt_util on New CloudHSM, to: generate an RSA wrapping key pair; use it to wrap keys on CloudHSM Classic; and then unwrap the keys on New CloudHSM. These are all done via the RSA OAEP mechanism.
The following diagram provides a summary of the steps involved in the solution:

Figure 1: Solution overview

Figure 1: Solution overview

  1. Generate the RSA wrapping key pair on New CloudHSM.
  2. Export the RSA Public Key to the New CloudHSM client instance.
  3. Move the RSA public key to the CloudHSM Classic client instance.
  4. Import the RSA public key to CloudHSM Classic.
  5. Wrap the key using the imported RSA public key.
  6. Move the wrapped key to the New CloudHSM client instance.
  7. Unwrap the key on New CloudHSM with the RSA Private Key.

NOTE: You can perform the same procedure using supported libraries, such as JCE (Java Cryptography Extension) and PKCS#11. For example, you can use the wrap_with_imported_rsa_key sample to import an RSA public key into CloudHSM Classic, use that key to wrap your CloudHSM Classic keys, and then use the rsa_wrapping sample (specifically the rsa_oaep_unwrap_key function) to unwrap the keys into New CloudHSM using the RSA OAEP mechanism.

Prerequisites

  1. An active New CloudHSM cluster with at least one active hardware security module (HSM). Follow the Getting Started Guide to create and initialize a New CloudHSM cluster.
  2. An Amazon Elastic Compute Cloud (Amazon EC2) instance with the New CloudHSM client installed and configured to connect to the New CloudHSM cluster. You can refer to the Getting Started Guide to configure and connect the client instance.
  3. New CloudHSM CU (crypto user) credentials.
  4. An EC2 instance with the CloudHSM Classic client installed and configured to connect to the CloudHSM Classic partition or the high-availability (HA) partition group that contains the keys you want to migrate. You can refer to this guide to install and configure a CloudHSM Classic Client.
  5. The Password of the CloudHSM Classic partition or HA partition group that contains the keys you want to migrate.
  6. The handle of the symmetric key on CloudHSM Classic you want to migrate.

Step 1: Generate the RSA wrapping key pair on CloudHSM

1.1. On the New CloudHSM client instance, run the key_mgmt_util command line tool, and log in as the CU, as described in Getting Started with key_mgmt_util.


Command:  loginHSM -u CU -s <CU user> -p <CU password>
    
	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

1.2. Run the following genRSAKeyPair command to generate an RSA key pair with the label classic_wrap. Take note of the private and public key handles, as they’ll be used in the coming steps.


Command:  genRSAKeyPair -m 2048 -e 65537 -l classic_wrap

	Cfm3GenerateKeyPair returned: 0x00 : HSM Return: SUCCESS

	Cfm3GenerateKeyPair:    public key handle: 407    private key handle: 408

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 2: Export the RSA public key to the New CloudHSM client instance

2.1. Run the following exportPubKey command to export the RSA public key to the New CloudHSM client instance using the public key handle you received in step 1.2 (407, in my example). This will export the public key to a file named wrapping_public.pem.


Command:  exportPubKey -k <public key handle> -out wrapping_public.pem

PEM formatted public key is written to wrapping_public.pem

	Cfm3ExportPubKey returned: 0x00 : HSM Return: SUCCESS

Step 3: Move the RSA public key to the CloudHSM Classic client instance

Move the RSA Public Key to the CloudHSM Classic client instance using scp (or any other tool you prefer).
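For example, assuming SSH connectivity between the two client instances, a command similar to the following copies the public key (the user name and host placeholder are assumptions):

# Copy the exported public key to the CloudHSM Classic client instance
scp wrapping_public.pem ec2-user@<classic-client-instance-ip>:~/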

Step 4: Import the RSA public key to CloudHSM Classic

4.1. On the CloudHSM Classic instance, use the cmu command as shown below to import the RSA public key with the label classic_wrap. You’ll need the partition or HA partition group password for this command, plus the slot number of the partition or HA partition group (you can get the slot number of your partition or HA partition group using the vtl listSlots command).


# cmu import -inputFile=wrapping_public.pem -label classic_wrap
Select token
 [1] Token Label: partition1
 [2] Token Label: partition2
 [3] Token Label: partition3
 Enter choice: <slot number>
Please enter password for token in slot 1 : <password>

4.2. Run the below command to get the handle (highlighted below) of the imported key.


# cmu list -label classic_wrap
Select token
 [1] Token Label: partition1
 [2] Token Label: partition2
 [3] Token Label: partition3
 Enter choice: <slot number>
Please enter password for token in slot 1 : <password>
handle=149	label=classic_wrap

4.3. Run the CKDemo utility.


# ckdemo

4.4. Open a session to the partition or HA partition group slot.


Enter your choice : 1

Slots available:
	slot#1 - LunaNet Slot
	slot#2 - LunaNet Slot
	...
Select a slot: <slot number>

SO[0], normal user[1], or audit user[2]? 1

Status: Doing great, no errors (CKR_OK)

4.5. Log in using the partition or HA partition group pin.


Enter your choice : 3
Security Officer[0]
Crypto-Officer  [1]
Crypto-User     [2]:
Audit-User      [3]: 1
Enter PIN          : <password>

Status: Doing great, no errors (CKR_OK)

4.6. Change the CKA_WRAP attribute of the imported RSA public key to be able to use it for wrapping using the imported public key handle you received in step 4.2 above (149, in my example).


Enter your choice : 25

Which object do you want to modify (-1 to list available objects) : <imported public key handle>

Edit template for set attribute operation.

(1) Add Attribute   (2) Remove Attribute   (0) Accept Template :1

 0 - CKA_CLASS                  1 - CKA_TOKEN
 2 - CKA_PRIVATE                3 - CKA_LABEL
 4 - CKA_APPLICATION            5 - CKA_VALUE
 6 - CKA_XXX                    7 - CKA_CERTIFICATE_TYPE
 8 - CKA_ISSUER                 9 - CKA_SERIAL_NUMBER
10 - CKA_KEY_TYPE              11 - CKA_SUBJECT
12 - CKA_ID                    13 - CKA_SENSITIVE
14 - CKA_ENCRYPT               15 - CKA_DECRYPT
16 - CKA_WRAP                  17 - CKA_UNWRAP
18 - CKA_SIGN                  19 - CKA_SIGN_RECOVER
20 - CKA_VERIFY                21 - CKA_VERIFY_RECOVER
22 - CKA_DERIVE                23 - CKA_START_DATE
24 - CKA_END_DATE              25 - CKA_MODULUS
26 - CKA_MODULUS_BITS          27 - CKA_PUBLIC_EXPONENT
28 - CKA_PRIVATE_EXPONENT      29 - CKA_PRIME_1
30 - CKA_PRIME_2               31 - CKA_EXPONENT_1
32 - CKA_EXPONENT_2            33 - CKA_COEFFICIENT
34 - CKA_PRIME                 35 - CKA_SUBPRIME
36 - CKA_BASE                  37 - CKA_VALUE_BITS
38 - CKA_VALUE_LEN             39 - CKA_LOCAL
40 - CKA_MODIFIABLE            41 - CKA_ECDSA_PARAMS
42 - CKA_EC_POINT              43 - CKA_EXTRACTABLE
44 - CKA_ALWAYS_SENSITIVE      45 - CKA_NEVER_EXTRACTABLE
46 - CKA_CCM_PRIVATE           47 - CKA_FINGERPRINT_SHA1
48 - CKA_OUID                  49 - CKA_X9_31_GENERATED
50 - CKA_PRIME_BITS            51 - CKA_SUBPRIME_BITS
52 - CKA_USAGE_COUNT           53 - CKA_USAGE_LIMIT
54 - CKA_EKM_UID               55 - CKA_GENERIC_1
56 - CKA_GENERIC_2             57 - CKA_GENERIC_3
58 - CKA_FINGERPRINT_SHA256
Select which one: 16
Enter boolean value: 1

CKA_WRAP=01

(1) Add Attribute   (2) Remove Attribute   (0) Accept Template :0

Status: Doing great, no errors (CKR_OK)

Step 5: Wrap the key using the imported RSA public key

5.1. Check whether the symmetric key you want to migrate is exportable. To do this, run the command below using the handle of the key you want to migrate, and confirm that the value of the CKA_EXTRACTABLE attribute (highlighted below) is equal to 1. Otherwise, the key can’t be exported.


Enter your choice : 27

Enter handle of object to display (-1 to list available objects): <handle of the key to be migrated>
Object handle=120
CKA_CLASS=00000004
CKA_TOKEN=01
CKA_PRIVATE=01
CKA_LABEL=Generated AES Key
CKA_KEY_TYPE=0000001f
CKA_ID=
CKA_SENSITIVE=01
CKA_ENCRYPT=01
CKA_DECRYPT=01
CKA_WRAP=01
CKA_UNWRAP=01
CKA_SIGN=01
CKA_VERIFY=01
CKA_DERIVE=01
CKA_START_DATE=
CKA_END_DATE=
CKA_VALUE_LEN=00000020
CKA_LOCAL=01
CKA_MODIFIABLE=01
CKA_EXTRACTABLE=01
CKA_ALWAYS_SENSITIVE=01
CKA_NEVER_EXTRACTABLE=00
CKA_CCM_PRIVATE=00
CKA_FINGERPRINT_SHA1=f8babf341748ba5810be21acc95c6d4d9fac75aa
CKA_OUID=29010002f90900005e850700
CKA_EKM_UID=
CKA_GENERIC_1=
CKA_GENERIC_2=
CKA_GENERIC_3=
CKA_FINGERPRINT_SHA256=7a8efcbff27703e281617be3c3d484dc58df6a78f6b144207c1a54ad32a98c00

Status: Doing great, no errors (CKR_OK)

5.2. Wrap the key using the imported RSA public key. This will create a file called wrapped.key that contains the wrapped key. Make sure to use the handle of the public key you imported in step 4.2 above (149, in my example), and the handle of the key you want to migrate.


Enter your choice : 60
[1]DES-ECB        [2]DES-CBC        [3]DES3-ECB       [4]DES3-CBC
                                    [7]CAST3-ECB      [8]CAST3-CBC
[9]RSA            [10]TRANSLA       [11]DES3-CBC-PAD  [12]DES3-CBC-PAD-IPSEC
[13]SEED-ECB      [14]SEED-CBC      [15]SEED-CBC-PAD  [16]DES-CBC-PAD
[17]CAST3-CBC-PAD [18]CAST5-CBC-PAD [19]AES-ECB       [20]AES-CBC
[21]AES-CBC-PAD   [22]AES-CBC-PAD-IPSEC [23]ARIA-ECB  [24]ARIA-CBC
[25]ARIA-CBC-PAD
[26]RSA_OAEP    [27]SET_OAEP
Select mechanism for wrapping: 26

Enter filename of OAEP Source Data [0 for none]: 0

Enter handle of wrapping key (-1 to list available objects) : <imported public key handle>

Enter handle of key to wrap (-1 to list available objects) : <handle of the key to be migrated>
Wrapped key was saved in file wrapped.key

Status: Doing great, no errors (CKR_OK)

Step 6: Move the wrapped key to the New CloudHSM client instance

Move the wrapped key to the New CloudHSM client instance using scp (or any other tool you prefer).

Step 7: Unwrap the key on New CloudHSM with the RSA Private Key

7.1. On the New CloudHSM client instance, run key_mgmt_util and log in as the CU.


Command:  loginHSM -s <CU user> -p <CU password>

	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

7.2. Run the following unWrapKey command to unwrap the key using the RSA private key handle you received in step 1.2 (408, in my example). The output of the command should show the handle of the wrapped key (highlighted below).


Command:  unWrapKey -f wrapped.key -w <private key handle> -m 8 -noheader -l unwrapped_aes -kc 4 -kt 31

	Cfm3CreateUnwrapTemplate2 returned: 0x00 : HSM Return: SUCCESS

	Cfm2UnWrapWithTemplate3 returned: 0x00 : HSM Return: SUCCESS

	Key Unwrapped.  Key Handle: 410

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Conclusion

Using RSA OAEP for key migration ensures that your key material doesn’t leave the HSM boundary in plain text, as it’s encrypted using an RSA public key before being exported from CloudHSM Classic, and it can only be decrypted by New CloudHSM through the RSA private key that is generated and kept on New CloudHSM.

My post provides an example of how to use the ckdemo and key_mgmt_util utilities for the migration, but the same procedure can also be performed using the supported software libraries, such as the Java JCE library or the PKCS#11 library, to migrate larger volumes of keys in an automated manner.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Mohamed AboElKheir

Mohamed AboElKheir is an Application Security Engineer who works with different teams to ensure AWS services, applications, and websites are designed and implemented to the highest security standards. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

Automating your lift-and-shift migration at no cost with CloudEndure Migration

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/automating-your-lift-and-shift-migration-at-no-cost-with-cloudendure-migration/

This post is courtesy of Gonen Stein, Head of Product Strategy, CloudEndure 

Acquired by AWS in January 2019, CloudEndure offers a highly automated migration tool to simplify and expedite rehost (lift-and-shift) migrations. AWS recently announced that CloudEndure Migration is now available to all customers and partners at no charge.

Each free CloudEndure Migration license provides 90 days of use following agent installation. During this period, you can perform all migration steps: replicate your source machines, conduct tests, and perform a scheduled cutover to complete the migration.

Overview

In this post, I show you how to obtain free CloudEndure Migration licenses and how to use CloudEndure Migration to rehost a machine from an on-premises environment to AWS. Although I’ve chosen to focus on an on-premises-to-AWS use case, CloudEndure also supports migration from cloud-based environments. For those of you who are interested in advanced automation, I include information on how to further automate large-scale migration projects.

Understanding CloudEndure Migration

CloudEndure Migration works through an agent that you install on your source machines. No reboot is required, and there is no performance impact on your source environment. The CloudEndure Agent connects to the CloudEndure User Console, which issues an API call to your target AWS Region. The API call creates a Staging Area in your AWS account that is designated to receive replicated data.

CloudEndure Migration automated rehosting consists of three main steps:

  1. Installing the Agent: The CloudEndure Agent replicates entire machines to a Staging Area in your target.
  2. Configuration and testing: You configure your target machine settings and launch non-disruptive tests.
  3. Performing cutover: CloudEndure automatically converts machines to run natively in AWS.

The Staging Area comprises both lightweight Amazon EC2 instances that act as replication servers and staging Amazon EBS volumes. Each source disk maps to an identically sized EBS volume in the Staging Area. The replication servers receive data from the CloudEndure Agent running on the source machines and write this data onto staging EBS volumes. One replication server can handle multiple source machines replicating concurrently.

After all source disks copy to the Staging Area, the CloudEndure Agent continues to track and replicate any changes made to the source disks. Continuous replication occurs at the block level, enabling CloudEndure to replicate any application that runs on supported x86-based Windows or Linux operating systems via an installed agent.

When the target machines launch for testing or cutover, CloudEndure automatically converts the target machines so that they boot and run natively on AWS. This conversion includes injecting the appropriate AWS drivers, making appropriate bootloader changes, modifying network adapters, and activating operating systems using the AWS KMS. This machine conversion process generally takes less than a minute, irrespective of machine size, and runs on all launched machines in parallel.

CloudEndure Migration Architecture

CloudEndure Migration Architecture

Installing CloudEndure Agent

To install the Agent:

  1. Start your migration project by registering for a free CloudEndure Migration license. The registration process is quick–use your email address to create a username and password for the CloudEndure User Console. Use this console to create and manage your migration projects.

    After you log in to the CloudEndure User Console, you need an AWS access key ID and secret access key to connect your CloudEndure Migration project to your AWS account. To obtain these credentials, sign in to the AWS Management Console and create an IAM user. Enter your AWS credentials in the CloudEndure User Console.

  2. Configure and save your migration replication settings, including your migration project’s Migration Source, Migration Target, and Staging Area. For example, I selected the Migration Source: Other Infrastructure, because I am migrating on-premises machines. Also, I selected the Migration Target AWS Region: AWS US East (N. Virginia). The CloudEndure User Console also enables you to configure a variety of other settings after you select your source and target, such as subnet, security group, VPN or Direct Connect usage, and encryption.

    After you save your replication settings, CloudEndure prompts you to install the CloudEndure Agent on your source machines. In my example, the source machines consist of an application server, a web server, and a database server, and all three are running Debian GNU / Linux 9.

  3. Download the CloudEndure Agent Installer for Linux by running the following command:
    wget -O ./installer_linux.py https://console.cloudendure.com/installer_linux.py
  4. Run the Installer:
    sudo python ./installer_linux.py -t <INSTALLATION TOKEN> --no-prompt

    You can install the CloudEndure Agent locally on each machine. For large-scale migrations, use the unattended installation parameters with any standard deployment tool to remotely install the CloudEndure Agent on your machines (a minimal remote-install sketch follows this list).

    After the Agent installation completes, CloudEndure adds your source machines to the CloudEndure User Console. From there, your source machines undergo several initial replication steps. To obtain a detailed breakdown of these steps, in the CloudEndure User Console, choose Machines, and select a specific machine to open the Machine Details View page.

    details page

    Data replication consists of two stages: Initial Sync and Continuous Data Replication. During Initial Sync, CloudEndure copies all of the source disks’ existing content into EBS volumes in the Staging Area. After Initial Sync completes, Continuous Data Replication begins, tracking your source machines and replicating new data writes to the staging EBS volumes. Continuous Data Replication makes sure that your Staging Area always has the most up-to-date copy of your source machines.

  5. To track your source machines’ data replication progress, in the CloudEndure User Console, choose Machines, and see the Details view.
    When the Data Replication Progress status column reads Continuous Data Replication, and the Migration Lifecycle status column reads Ready for Testing, Initial Sync is complete. These statuses indicate that the machines are functioning correctly and are ready for testing and migration.
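As a rough sketch of unattended installation at scale, the loop below runs the documented installer command over SSH for each host in a list. The hosts file, SSH user, and key path are assumptions; the installation token comes from the CloudEndure User Console.

# Install the CloudEndure Agent on a list of Linux hosts over SSH (sketch)
TOKEN="<INSTALLATION TOKEN>"

while read -r HOST; do
  ssh -i ~/.ssh/migration_key "admin@${HOST}" \
    "wget -O ./installer_linux.py https://console.cloudendure.com/installer_linux.py && \
     sudo python ./installer_linux.py -t ${TOKEN} --no-prompt"
done < hosts.txt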

Configuration and testing

To test how your machine runs on AWS, you must configure the Target Machine Blueprint. This Blueprint is a set of configurations that define where and how the target machines are launched and provisioned, such as target subnet, security groups, instance type, volume type, and tags.

For large-scale migration projects, APIs can be used to configure the Blueprint for all of your machines within seconds.

I recommend performing a test at least two weeks before migrating your source machines, to give you enough time to identify potential problems and resolve them before you perform the actual cutover. For more information, see Migration Best Practices.

To launch machines in Test Mode:

  1. In the CloudEndure User Console, choose Machines.
  2. Select the Name box corresponding to each machine to test.
  3. Choose Launch Target Machines, Test Mode.

Launch target test machines

After the target machines launch in Test Mode, the CloudEndure User Console reports those machines as Tested and records the date and time of the test.

Performing cutover

After you have completed testing, your machines continue to be in Continuous Data Replication mode until the scheduled cutover window.

When you are ready to perform a cutover:

  1. In the CloudEndure User Console, choose Machines.
  2. Select the Name box corresponding to each machine to migrate.
  3. Choose Launch Target Machines, Cutover Mode.
    To confirm that your target machines successfully launch, see the Launch Target Machines menu. As your data replicates, verify that the target machines are running correctly, make any necessary configuration adjustments, perform user acceptance testing (UAT) on your applications and databases, and redirect your users.
  4. After the cutover completes, remove the CloudEndure Agent from the source machines and the CloudEndure User Console.
  5. At this point, you can also decommission your source machines.

Conclusion

In this post, I showed how to rehost an on-premises workload to AWS using CloudEndure Migration. CloudEndure automatically converts your machines from any source infrastructure to AWS infrastructure. That means they can boot and run natively in AWS, and run as expected after migration to the cloud.

If you have further questions see CloudEndure Migration, or Registering to CloudEndure Migration.

Get started now with free CloudEndure Migration licenses.

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

Learn about AWS Services & Solutions – September AWS Online Tech Talks

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

 

Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.

 

Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.

 

Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.

 

Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.

 

DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.

 

Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

 

IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.

 

Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.

 

AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.

 

Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.

 

Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.

 

Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.

 

Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your windows workloads.

 

Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.

 

Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.

 

 

Learn about AWS Services & Solutions – February 2019 AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-february-2019-aws-online-tech-talks/

AWS Tech Talks

Join us this February to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Application Integration

February 20, 2019 | 11:00 AM – 12:00 PM PT – Customer Showcase: Migration & Messaging for Mission Critical Apps with S&P Global Ratings – Learn how S&P Global Ratings meets the high availability and fault tolerance requirements of their mission critical applications using Amazon MQ.

AR/VR

February 28, 2019 | 1:00 PM – 2:00 PM PT – Build AR/VR Apps with AWS: Creating a Multiplayer Game with Amazon Sumerian – Learn how to build real-world augmented reality, virtual reality and 3D applications with Amazon Sumerian.

Blockchain

February 18, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Managed Blockchain – Explore the components of blockchain technology, discuss use cases, and do a deep dive into capabilities, performance, and key innovations in Amazon Managed Blockchain.

Compute

February 25, 2019 | 9:00 AM – 10:00 AM PT – What’s New in Amazon EC2 – Learn about the latest innovations in Amazon EC2, including new instances types, related technologies, and consumption options that help you optimize running your workloads for performance and cost.

February 27, 2019 | 1:00 PM – 2:00 PM PT – Deploy and Scale Your First Cloud Application with Amazon Lightsail – Learn how to quickly deploy and scale your first multi-tier cloud application using Amazon Lightsail.

Containers

February 19, 2019 | 9:00 AM – 10:00 AM PT – Securing Container Workloads on AWS Fargate – Explore the security controls and best practices for securing containers running on AWS Fargate.

Data Lakes & Analytics

February 18, 2019 | 1:00 PM – 2:00 PM PT – Amazon Redshift Tips & Tricks: Scaling Storage and Compute Resources – Learn about the tools and best practices Amazon Redshift customers can use to scale storage and compute resources on-demand and automatically to handle growing data volume and analytical demand.

Databases

February 18, 2019 | 9:00 AM – 10:00 AM PT – Building Real-Time Applications with Redis – Learn about Amazon’s fully managed Redis service and how it makes it easier, simpler, and faster to build real-time applications.

February 21, 2019 | 1:00 PM – 2:00 PM PT – Introduction to Amazon DocumentDB (with MongoDB Compatibility) – Get an introduction to Amazon DocumentDB (with MongoDB compatibility), a fast, scalable, and highly available document database that makes it easy to run, manage & scale MongoDB workloads.

DevOps

February 20, 2019 | 1:00 PM – 2:00 PM PT – Fireside Chat: DevOps at Amazon with Ken Exner, GM of AWS Developer Tools – Join our fireside chat with Ken Exner, GM of Developer Tools, to learn about Amazon’s DevOps transformation journey and latest practices and tools that support the current DevOps model.

End-User Computing

February 28, 2019 | 9:00 AM – 10:00 AM PT – Enable Your Remote and Mobile Workforce with Amazon WorkLink – Learn about Amazon WorkLink, a new, fully-managed service that provides your employees secure, one-click access to internal corporate websites and web apps using their mobile phones.

Enterprise & Hybrid

February 26, 2019 | 1:00 PM – 2:00 PM PT – The Amazon S3 Storage Classes – For cloud ops professionals, by cloud ops professionals. Wallace and Orion will tackle your toughest AWS hybrid cloud operations questions in this live Office Hours tech talk.

IoT

February 26, 2019 | 9:00 AM – 10:00 AM PT – Bring IoT and AI Together – Learn how to bring intelligence to your devices with the intersection of IoT and AI.

Machine Learning

February 19, 2019 | 1:00 PM – 2:00 PM PT – Getting Started with AWS DeepRacer – Learn about the basics of reinforcement learning, what’s under the hood and opportunities to get hands on with AWS DeepRacer and how to participate in the AWS DeepRacer League.

February 20, 2019 | 9:00 AM – 10:00 AM PT – Build and Train Reinforcement Models with Amazon SageMaker RL – Learn about Amazon SageMaker RL to use reinforcement learning and build intelligent applications for your businesses.

February 21, 2019 | 11:00 AM – 12:00 PM PT – Train ML Models Once, Run Anywhere in the Cloud & at the Edge with Amazon SageMaker Neo – Learn about Amazon SageMaker Neo where you can train ML models once and run them anywhere in the cloud and at the edge.

February 28, 2019 | 11:00 AM – 12:00 PM PT – Build your Machine Learning Datasets with Amazon SageMaker Ground Truth – Learn how customers are using Amazon SageMaker Ground Truth to build highly accurate training datasets for machine learning quickly and reduce data labeling costs by up to 70%.

Migration

February 27, 2019 | 11:00 AM – 12:00 PM PT – Maximize the Benefits of Migrating to the Cloud – Learn how to group and rationalize applications and plan migration waves in order to realize the full set of benefits that cloud migration offers.

Networking

February 27, 2019 | 9:00 AM – 10:00 AM PT – Simplifying DNS for Hybrid Cloud with Route 53 Resolver – Learn how to enable DNS resolution in hybrid cloud environments using Amazon Route 53 Resolver.

Productivity & Business Solutions

February 26, 2019 | 11:00 AM – 12:00 PM PT – Transform the Modern Contact Center Using Machine Learning and Analytics – Learn how to integrate Amazon Connect and AWS machine learning services, such as Amazon Lex, Amazon Transcribe, and Amazon Comprehend, to quickly process and analyze thousands of customer conversations and gain valuable insights.

Serverless

February 19, 2019 | 11:00 AM – 12:00 PM PT – Best Practices for Serverless Queue Processing – Learn the best practices of serverless queue processing, using Amazon SQS as an event source for AWS Lambda.

Storage

February 25, 2019 | 11:00 AM – 12:00 PM PT Introducing AWS Backup: Automate and Centralize Data Protection in the AWS Cloud – Learn about this new, fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises.

Learn about hourly-replication in Server Migration Service and the ability to migrate large data volumes

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/learn-about-hourly-replication-in-server-migration-service-and-the-ability-to-migrate-large-data-volumes/

This post courtesy of Shane Baldacchino, AWS Solutions Architect

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

In my previous blog posts, we introduced how you can use AWS Server Migration Service (AWS SMS) to migrate a popular commercial off the shelf software, WordPress into AWS.

For details and a walkthrough on how setup the AWS Server Migration Service, please see the following blog posts for Hyper-V and VMware hypervisors which will guide you through the high level process.

In this article, we are going to step it up a few notches, look past the common migration of off-the-shelf software, and provide a pattern for how you can use AWS SMS and some of its recently launched features to migrate a more complicated environment, specifically using compression, resiliency for replication jobs, and support for data volumes greater than 4 TB.

This post covers a migration of a complex, internally developed eCommerce system with a polyglot architecture. It is made up of a Microsoft IIS (Windows) presentation tier, a Tomcat application tier, and a Microsoft SQL Server database tier. All workloads run on-premises as virtual machines in a VMware vCenter 5.5 and ESX 5.5 environment.

This theoretical customer environment has various business and infrastructure requirements.

• Application downtime: During any migration activities, the application cannot be offline for more than 2 hours.
• Licensing: The customer has renewed their Microsoft SQL Server license for an additional 3 years and holds the License Mobility with Software Assurance option for Microsoft SQL Server, and therefore wants to take advantage of AWS BYOL licensing for Microsoft SQL Server and Microsoft Windows Server.
• Large data volumes: The Microsoft SQL Server database engine (.mdf, .ldf, and .ndf files) consumes 11 TB of storage.

Walkthrough

Key elements of this migration process are identical to the process outlined in my previous blog posts for Hyper-V and VMware hypervisors, so refer to those posts for more information. At a high level, you will need to:

• Establish your AWS environment.
• Download the SMS Connector from the AWS Management Console.
• Configure AWS SMS and Hypervisor permissions.
• Install and configure the SMS Connector appliance.
• Import your virtual machine inventory and create replication jobs.
• Launch your Amazon EC2 instances and associated NACLs, security groups, and Elastic Load Balancers.
• Change your DNS records to resolve the custom application to an AWS Elastic Load Balancer.

Before you start, ensure that your source systems OS and vCenter version are supported by AWS. For more information, see the Server Migration Service FAQ.

Planning the Migration

Once you have downloaded and configured the AWS SMS connector for your hypervisor, you can get started creating replication jobs.

The artifacts derived from our replication jobs with AWS SMS are AMIs (Amazon Machine Images). Because this is a three-tier architecture with commonality between servers (multiple application and web servers performing the same function), we do not need to replicate each server individually. Instead, we can leverage a common AMI per tier and create three replication jobs.

1. Microsoft SQL Server – Database Tier
2. Ubuntu Server – Application Tier
3. IIS Web server – Webserver Tier

Performing the Replication

After validating that the SMS Connector is in a “HEALTHY” state, import your server catalog from your Hypervisor to AWS SMS. This process can take up to a minute.

Select the three servers (Microsoft SQL Server, Ubuntu Server, IIS web server) to migrate and choose Create replication job. AWS SMS now supports creating replication jobs with frequencies as short as 1 hour, so to ensure the business RTO (Recovery Time Objective) of 2 hours is met, we will create our replication jobs with a frequency of 1 hour. This minimizes the risk of any delta updates not completing during the cutover window.

Given the business’s existing licensing investment in Microsoft SQL Server, they will leverage the BYOL (Bring Your Own License) option when creating the Microsoft SQL Server replication job. An equivalent AWS CLI sketch is shown below.
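The same replication jobs can also be created programmatically. The following is only a minimal AWS CLI sketch: it assumes the server ID was obtained from the imported catalog, and the server ID and timestamp values are placeholders.

# List the imported server catalog and note the server IDs
aws sms get-servers

# Create an hourly replication job for the Microsoft SQL Server VM using BYOL
aws sms create-replication-job \
    --server-id s-12345678 \
    --seed-replication-time "2019-09-21T00:00:00" \
    --frequency 1 \
    --license-type BYOL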

The AWS SMS console guides you through the process. The time that the initial replication task takes to complete is dependent on available bandwidth and the size of your virtual machines.

After the initial seed replication, network bandwidth requirement is minimized as AWS SMS replicates only incremental changes occurring on the VM.

The progress updates from AWS SMS are automatically sent to AWS Migration Hub so that you can track tasks in progress.

AWS Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, we are using AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via AWS Migration Hub.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

The dashboard reflects any status changes that occur in the linked services. You can see from the following image that two servers are complete whilst another is in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.


Testing Your Replicated Instances

Thirty hours after creating the replication jobs, notification was received via Amazon SNS (Simple Notification Service) that all three replication jobs had completed. During the 30-hour replication window, the customer’s ISP experienced downtime and sporadic flapping of the link, but this was negated by the network auto-recovery feature of AWS SMS, which recovered and resumed replication without any intervention.

With the replication tasks complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, noting that you may need to replace any host-based firewalls with security groups and NACLs, and any hardware-based load balancers with Elastic Load Balancing, to achieve fault tolerance, scalability, performance, and security.

Because this environment is a three-tier architecture with commonality between tiers (application and presentation tiers), we can create an ASG (Auto Scaling group) during the EC2 launch process to ensure that deployed capacity matches user demand. The ASG will be based on the custom AMIs generated by the replication jobs.

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance and cost requirements.

While your new EC2 instances are a replica of your on-premises VM, you should always validate that applications are functioning. How you do this differs on an application-by-application basis. You can use a combination of approaches, such as editing a local host file and testing your application, SSH, RDP and Telnet.

For our Windows presentation and database tiers, I can RDP into my systems and validate that IIS 8.0 and other services are functioning correctly.

For our Ubuntu Application tier, we can SSH in to perform validation.

After validating each individual server, we can continue to test the application end to end. Because our systems have been instantiated inside a VPC with no route back to our on-premises environment, we can test functionality without the risk of communication back to our production application.

After validating the systems, it is time to cut over. Plan your runbook accordingly to ensure you either eliminate or minimize application disruption.

Cutting Over

As the replication window specified in the AWS SMS replication jobs was 1 hour, hourly AMIs were created that provide delta updates since the initial seed replication. The customer verified the stack by executing the previously created runbook using the latest AMIs, and verified that the application behaved as expected.

After another round of testing, the customer decided to perform the cutover the following Saturday at midnight, announcing a two-hour scheduled maintenance window. During the cutover window, the customer took the application offline, shut down the Microsoft SQL Server instance, and performed an on-demand sync of all systems.

This generated a new versioned AMI that contained all on-premises data. The customer then executed the runbook on the new AMIs. For the application and presentation tiers, these AMIs were used in the ASG configuration. After application validation, Amazon Route 53 was updated to resolve the application CNAME to the Application Load Balancer CNAME used to load balance traffic to the fleet of IIS servers.

Based on the TTL (Time To Live) of the Amazon Route 53 DNS zone file, end users gradually resolved the application to AWS, in this case within 300 seconds. Once this TTL period had elapsed, the customer brought their application back online and exited their maintenance window, with time to spare. A scripted example of this kind of DNS change is shown below.
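A cutover DNS change like the one described above can be scripted with the AWS CLI. This is a sketch only: the hosted zone ID, record name, and load balancer DNS name are placeholders, and the record is assumed to be a subdomain CNAME with a 300-second TTL.

# Point the application record at the Application Load Balancer during cutover
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Comment": "Cut over application to the Application Load Balancer",
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "my-alb-1234567890.us-east-1.elb.amazonaws.com"}]
        }
      }]
    }'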

After modifying the Amazon Route 53 Zone Apex, the physical topology now looks as follows with traffic being routed to AWS.

After validation of a successful migration the customer deleted their AWS Server Migration Service replication jobs and began planning to decommission their on-premises resources.

Summary

This is an example pattern for migrating a complex, custom polyglot environment into AWS using AWS migration services, specifically leveraging many of the new features of the AWS SMS service.

Many architectures can be extended to use the inherent benefits of AWS with little effort. This article illustrated how AWS migration services can be used to migrate complex environments into AWS, and how native AWS services such as Amazon CloudWatch metrics can then drive Auto Scaling policies to ensure that deployed capacity matches user demand, while technologies such as Application Load Balancers can be used to achieve fault tolerance and scalability.

Think big and get building!

 

 

Optimizing a Lift-and-Shift for Security

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-security/

This is the third and final blog within a three-part series that examines how to optimize lift-and-shift workloads. A lift-and-shift is a common approach for migrating to AWS, whereby you move a workload from on-prem with little or no modification. This third blog examines how lift-and-shift workloads can benefit from an improved security posture with no modification to the application codebase. (Read about optimizing a lift-and-shift for performance and for cost effectiveness.)

Moving to AWS can help to strengthen your security posture by eliminating many of the risks present in on-premises deployments. It is still essential to consider how to best use AWS security controls and mechanisms to ensure the security of your workload. Security can often be a significant concern in lift-and-shift workloads, especially for legacy workloads where modern encryption and security features may not be present. By making use of AWS security features you can significantly improve the security posture of a lift-and-shift workload, even if it lacks native support for modern security best practices.

Adding TLS with Application Load Balancers

Legacy applications are often the subject of a lift-and-shift. Such migrations can help reduce risks by moving away from out-of-date hardware, but security risks are often harder to manage. Many legacy applications leverage HTTP or other plaintext protocols that are vulnerable to all manner of attacks. Often, modifying a legacy application’s codebase to implement TLS is untenable, necessitating other options.

One comparatively simple approach is to leverage an Application Load Balancer or a Classic Load Balancer to provide SSL offloading. In this scenario, the load balancer is exposed to users, while the application servers that only support plaintext protocols reside within a subnet that can only be accessed by the load balancer. The load balancer performs the decryption of all traffic destined for the application instances, forwarding the plaintext traffic to the instances. This allows you to use encryption on traffic between the client and the load balancer, leaving only internal communication between the load balancer and the application in plaintext. Often this approach is sufficient to meet security requirements; however, in more stringent scenarios it is never acceptable for traffic to be transmitted in plaintext, even within a secured subnet. In that scenario, a sidecar can be used to ensure plaintext traffic never traverses the network.
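As an illustration of this offloading approach, the following AWS CLI sketch adds an HTTPS listener to an existing Application Load Balancer that terminates TLS and forwards plaintext traffic to a target group of application instances. The load balancer ARN, certificate ARN, and target group ARN are placeholders.

# Terminate TLS at the ALB and forward decrypted traffic to the legacy target group
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/legacy-alb/50dc6c495c0c9188 \
    --protocol HTTPS \
    --port 443 \
    --certificates CertificateArn=arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id \
    --ssl-policy ELBSecurityPolicy-TLS-1-2-2017-01 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/legacy-app/943f017f100becff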

Improving Security and Configuration Management with Sidecars

One approach to providing encryption to legacy applications is to leverage what’s often termed the “sidecar pattern.” The sidecar pattern entails a second process acting as a proxy to the legacy application. The legacy application only exposes its services via the local loopback adapter and is thus accessible only to the sidecar. In turn, the sidecar acts as an encrypted proxy, exposing the legacy application’s API to external consumers via TLS. As unencrypted traffic between the sidecar and the legacy application traverses only the loopback adapter, it never traverses the network. This approach can help add encryption (or stronger encryption) to legacy applications when it’s not feasible to modify the original codebase. A common approach to implementing sidecars is through container groups, such as a pod in EKS or a task in ECS.

Implementing the Sidecar Pattern With Containers

Figure 1: Implementing the Sidecar Pattern With Containers

Another use of the sidecar pattern is to help legacy applications leverage modern cloud services. A common example of this is using a sidecar to manage files pertaining to the legacy application. This could entail a number of options including:

  • Having the sidecar dynamically modify the configuration for a legacy application based upon some external factor, such as the output of Lambda function, SNS event or DynamoDB write.
  • Having the sidecar write application state to a cache or database. Often applications write state to the local disk, which can be problematic for auto scaling or disaster recovery, where having the state easily accessible to other instances is advantageous. To facilitate this, the sidecar can write state to Amazon S3, Amazon DynamoDB, Amazon ElastiCache, or Amazon RDS.

A sidecar requires custom development, but it doesn’t require any modification of the lift-and-shifted application. A sidecar treats the application as a black box and interacts with it via its API, configuration file, or other standard mechanism.
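As a rough sketch of the container-based sidecar described above, the following registers an ECS task definition with two containers: the legacy application, assumed to bind only to 127.0.0.1:8080, and a TLS proxy sidecar exposing port 443. In awsvpc network mode, containers in the same task share a network namespace, so the proxy reaches the application over localhost. The image names, ports, and sizes are placeholders, and execution/task roles and logging are omitted for brevity.

# Two-container sidecar task definition (minimal sketch)
cat > legacy-sidecar-task.json <<'EOF'
{
  "family": "legacy-app-with-tls-sidecar",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "legacy-app",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/legacy-app:latest",
      "memory": 512,
      "essential": true
    },
    {
      "name": "tls-proxy",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/nginx-tls-proxy:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [{ "containerPort": 443, "protocol": "tcp" }]
    }
  ]
}
EOF

# Register the task definition; the proxy forwards decrypted traffic to 127.0.0.1:8080
aws ecs register-task-definition --cli-input-json file://legacy-sidecar-task.json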

Automating Security

A lift-and-shift can achieve a significantly stronger security posture by incorporating elements of DevSecOps. DevSecOps is a philosophy that argues that everyone is responsible for security and advocates for automating all parts of the security process. AWS has a number of services which can help implement a DevSecOps strategy. These services include:

  • Amazon GuardDuty: a continuous monitoring system which analyzes AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs. GuardDuty can detect threats and trigger an automated response.
  • AWS Shield: a managed DDoS protection service
  • AWS WAF: a managed Web Application Firewall
  • AWS Config: a service for assessing, tracking, and auditing changes to AWS configuration

These services can help detect security problems and implement a response in real time, achieving a significantly stronger posture than traditional security strategies. You can build a DevSecOps strategy around a lift-and-shift workload using these services, without having to modify the lift-and-shift application.
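As a small illustration of automating detection and response with these services, the sketch below enables GuardDuty and routes its findings to a notification topic via an EventBridge (CloudWatch Events) rule. The topic and rule names are examples, and the topic policy required for event delivery is omitted.

# Enable GuardDuty in the current Region
aws guardduty create-detector --enable

# Create a topic for findings (its policy must allow events.amazonaws.com to publish)
aws sns create-topic --name guardduty-findings

# Match GuardDuty findings with an EventBridge (CloudWatch Events) rule
aws events put-rule \
    --name guardduty-finding-events \
    --event-pattern '{"source":["aws.guardduty"],"detail-type":["GuardDuty Finding"]}'

# Send matched findings to the topic for notification or automated response
aws events put-targets \
    --rule guardduty-finding-events \
    --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:111122223333:guardduty-findings"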

Conclusion

There are many opportunities for taking advantage of AWS services and features to improve a lift-and-shift workload. Without any alteration to the application you can strengthen your security posture by utilizing AWS security services and by making small environmental and architectural changes that can help alleviate the challenges of legacy workloads.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Optimizing a Lift-and-Shift for Cost Effectiveness and Ease of Management

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-cost/

Lift-and-shift is the process of migrating a workload from on premise to AWS with little or no modification. A lift-and-shift is a common route for enterprises to move to the cloud, and can be a transitionary state to a more cloud native approach. This is the second blog post in a three-part series which investigates how to optimize a lift-and-shift workload. The first post is about performance.

A key concern that many customers have with a lift-and-shift is cost. If you move an application as is from on-prem to AWS, is there any possibility for meaningful cost savings? By employing AWS services, in lieu of self-managed EC2 instances, and by leveraging cloud capabilities such as auto scaling, there is potential for significant cost savings. In this blog post, we will discuss a number of AWS services and solutions that you can leverage with minimal or no change to your application codebase in order to significantly reduce management costs and overall Total Cost of Ownership (TCO).

Automate

Even if you can’t modify your application, you can change the way you deploy it. Adopting an infrastructure-as-code approach can vastly improve the ease of management of your application, thereby reducing cost. By templating your application with AWS CloudFormation, AWS OpsWorks, or open source tools, you can make deploying and managing your workloads a simple and repeatable process.

As part of the lift-and-shift process, rationalizing the workload into a set of templates means less time spent deploying and modifying the workload in the future. It enables the easy creation of dev/test environments, facilitates blue/green testing, opens up options for DR, and gives you the option to roll back in the event of error. Automation is the single step that is most conducive to improving ease of management.
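
As a rough illustration, the following sketch (Python with boto3) deploys such a template as a CloudFormation stack from a script; the template file name and stack name are placeholders.

        import boto3

        cfn = boto3.client("cloudformation")

        # Hypothetical template describing the lift-and-shift environment
        # (VPC, EC2 instances, load balancer, and so on).
        with open("lift-and-shift.yaml") as f:
            template_body = f.read()

        # Create the stack; the same template can be reused for dev/test or DR copies.
        cfn.create_stack(
            StackName="lift-and-shift-prod",
            TemplateBody=template_body,
            Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
        )

        # Wait until the stack is fully created before handing it over.
        cfn.get_waiter("stack_create_complete").wait(StackName="lift-and-shift-prod")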

Reserved Instances and Spot Instances

A first consideration around cost should be the purchasing model for any EC2 instances. Reserved Instances (RIs) represent a 1-year or 3-year commitment to EC2 instances and can enable up to 75% cost reduction (compared to On-Demand pricing) for steady-state EC2 workloads. They are ideal for 24/7 workloads that must be continually in operation. An application requires no modification to make use of RIs.

An alternative purchasing model is EC2 Spot. Spot Instances offer unused capacity at a significant discount – up to 90%. Spot Instances receive a two-minute warning when the capacity is required back by EC2 and can be suspended and resumed. Workloads that are architected for batch runs – such as analytics and big data workloads – often require little or no modification to make use of Spot Instances. Other burstable workloads such as web apps may require some modification around how they are deployed.

A final alternative is on-demand. For workloads that are not running in perpetuity, on-demand is ideal. Workloads can be deployed, used for as long as required, and then terminated. By leveraging some simple automation (such as AWS Lambda and CloudWatch alarms), you can schedule workloads to start and stop at the open and close of business (or at other meaningful intervals). This typically requires no modification to the application itself. For workloads that are not 24/7 steady state, this can provide greater cost effectiveness compared to RIs and more certainty and ease of use when compared to spot.
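
The following is a minimal sketch of that automation (Python with boto3): a single Lambda handler that starts or stops every instance carrying a hypothetical Schedule=office-hours tag, invoked by two scheduled CloudWatch Events rules (one at open, one at close of business). The tag name and event shape are assumptions for illustration.

        import boto3

        ec2 = boto3.client("ec2")

        def handler(event, context):
            """event = {"action": "start"} or {"action": "stop"} from two scheduled rules."""
            action = event.get("action", "stop")

            # Find instances opted in to scheduling via a hypothetical tag.
            reservations = ec2.describe_instances(
                Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
            )["Reservations"]
            instance_ids = [
                i["InstanceId"] for r in reservations for i in r["Instances"]
            ]
            if not instance_ids:
                return {"action": action, "instances": []}

            if action == "start":
                ec2.start_instances(InstanceIds=instance_ids)
            else:
                ec2.stop_instances(InstanceIds=instance_ids)
            return {"action": action, "instances": instance_ids}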

Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides a fully managed Windows file system that has full compatibility with SMB and DFS and full AD integration. Amazon FSx is an ideal choice for lift-and-shift architectures as it requires no modification to the application codebase in order to enable compatibility. Windows-based applications can continue to leverage standard, Windows-native protocols to access storage with Amazon FSx. It enables users to avoid having to deploy and manage their own file servers – eliminating the need to patch, automate, and manage EC2 instances. Moreover, it’s easy to scale and minimize costs, since Amazon FSx offers a pay-as-you-go pricing model.

Amazon EFS

Amazon Elastic File System (EFS) provides high-performance, highly available multi-attach storage via NFS. EFS offers a drop-in replacement for existing NFS deployments. This is ideal for a range of Linux and Unix use cases as well as cross-platform solutions such as enterprise Java applications. EFS eliminates the need to manage NFS infrastructure and simplifies storage concerns. Moreover, EFS provides high availability out of the box, which helps to reduce single points of failure and avoids the need to manually configure storage replication. Much like Amazon FSx, EFS enables customers to realize cost improvements by moving to a pay-as-you-go pricing model and requires no modification of the application.

Amazon MQ

Amazon MQ is a managed message broker service that provides compatibility with JMS, AMQP, MQTT, OpenWire, and STOMP. These are amongst the most extensively used middleware and messaging protocols and are a key foundation of enterprise applications. Rather than having to manually maintain a message broker, Amazon MQ provides a performant, highly available managed message broker service that is compatible with existing applications.

Applications that leverage a standard messaging protocol can typically use Amazon MQ without any modification. In most cases, all you need to do is update the application’s MQ endpoint in its configuration. Subsequently, the Amazon MQ service handles the heavy lifting of operating a message broker, configuring HA, fault detection, failure recovery, software updates, and so forth. This offers a simple option for reducing management overhead and improving the reliability of a lift-and-shift architecture. What’s more, applications can migrate to Amazon MQ without the need for any downtime, making this an easy and effective way to improve a lift-and-shift.
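
For example, an application that already speaks STOMP could be pointed at an Amazon MQ broker by changing nothing more than the endpoint and credentials. The sketch below uses the third-party stomp.py library; the broker hostname, port, queue name, and credentials are placeholders.

        import stomp

        # Placeholder Amazon MQ broker endpoint (STOMP over TLS is typically on port 61614).
        BROKER_HOST = "b-1234abcd-1.mq.us-east-1.amazonaws.com"
        BROKER_PORT = 61614

        conn = stomp.Connection(host_and_ports=[(BROKER_HOST, BROKER_PORT)])
        # Amazon MQ requires TLS; newer stomp.py versions may configure TLS via constructor arguments instead.
        conn.set_ssl(for_hosts=[(BROKER_HOST, BROKER_PORT)])
        conn.connect("mq-user", "mq-password", wait=True)

        # Send a message exactly as the application would have done against its old broker.
        conn.send(destination="/queue/orders", body="order-created:1234")
        conn.disconnect()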

You can also use Amazon MQ to integrate legacy applications with modern serverless applications. Lambda functions can subscribe to MQ topics and trigger serverless workflows, enabling compatibility between legacy and new workloads.


Figure 1: Integrating Lift-and-Shift Workloads with Lambda via Amazon MQ

Amazon Managed Streaming for Apache Kafka

Lift-and-shift workloads that include a streaming data component are often built around Apache Kafka. There is a certain amount of complexity involved in operating a Kafka cluster, which incurs management and operational expense. Amazon Kinesis is a managed alternative to Apache Kafka, but it is not a drop-in replacement. At re:Invent 2018, we announced the launch of Amazon Managed Streaming for Apache Kafka (Amazon MSK) in public preview. MSK provides a managed Kafka deployment with pay-as-you-go pricing and acts as a drop-in replacement for existing Kafka workloads. MSK can help reduce management costs, improve cost efficiency, and is ideal for lift-and-shift workloads.

Leveraging S3 for Static Web Hosting

A significant portion of any web application is static content. This includes videos, images, text, and other content that changes seldom, if ever. In many lift-and-shifted applications, web servers are migrated to EC2 instances and host all content – static and dynamic. Hosting static content from an EC2 instance incurs a number of costs, including the instance, EBS volumes, and likely a load balancer. By moving static content to S3, you can significantly reduce the amount of compute required to host your web applications. In many cases, this change is non-disruptive and can be done at the DNS or CDN layer, requiring no change to your application.


Figure 2: Reducing Web Hosting Costs with S3 Static Web Hosting
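
As a rough sketch of the change (Python with boto3), the following uploads a static asset and enables website hosting on a bucket; the bucket name and file paths are placeholders, and in practice you would point your DNS or CDN at the bucket or front it with CloudFront.

        import boto3

        s3 = boto3.client("s3")
        BUCKET = "example-static-assets"  # placeholder bucket name

        # Upload a static asset with the right content type so browsers render it.
        s3.upload_file(
            "site/index.html", BUCKET, "index.html",
            ExtraArgs={"ContentType": "text/html"},
        )

        # Turn on static website hosting for the bucket.
        s3.put_bucket_website(
            Bucket=BUCKET,
            WebsiteConfiguration={
                "IndexDocument": {"Suffix": "index.html"},
                "ErrorDocument": {"Key": "error.html"},
            },
        )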

Conclusion

There are numerous opportunities for reducing the cost of a lift-and-shift. Without any modification to the application, lift-and-shift workloads can benefit from cloud-native features. By using AWS services and features, you can significantly reduce the undifferentiated heavy lifting inherent in on-prem workloads and reduce resource and management overhead.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Optimizing a Lift-and-Shift for Performance

Post Syndicated from Jonathan Shapiro-Ward original https://aws.amazon.com/blogs/architecture/optimizing-a-lift-and-shift-for-performance/

Many organizations begin their cloud journey with a lift-and-shift of applications from on-premises to AWS. This approach involves migrating software deployments with little, or no, modification. A lift-and-shift avoids a potentially expensive application rewrite but can result in a less optimal workload than a cloud-native solution. For many organizations, a lift-and-shift is a transitional stage to an eventual cloud-native solution, but there are some applications that can’t feasibly be made cloud-native, such as legacy systems or proprietary third-party solutions. There are still clear benefits of moving these workloads to AWS, but how can they be best optimized?

In this blog series, we’ll look at different approaches for optimizing a black box lift-and-shift. We’ll consider how we can significantly improve a lift-and-shift application across three perspectives: performance, cost, and security. We’ll show that without modifying the application we can integrate services and features that will make a lift-and-shift workload cheaper, faster, more secure, and more reliable. In this first post, we’ll investigate how a lift-and-shift workload can achieve improved performance by leveraging AWS features and services.

Performance gains are often a motivating factor behind a cloud migration. On-premises systems may suffer from performance bottlenecks owing to legacy infrastructure or capacity constraints. When performing a lift-and-shift, how can you improve performance? Cloud computing is famous for enabling horizontally scalable architectures, but many legacy applications don’t support this mode of operation. Traditional business applications are often architected around a fixed number of servers and are unable to take advantage of horizontal scalability. Even if a lift-and-shift can’t make use of Auto Scaling groups and horizontal scalability, you can achieve significant performance gains by moving to AWS.

Scaling Up

The easiest way to scale up compute is vertical scaling. AWS provides the widest selection of virtual machine types and the largest machine types. Instances range from the small, burstable T3 series all the way to the memory-optimized X1 series. By leveraging the appropriate instance type, lift-and-shifts can benefit from significant performance gains. Depending on your workload, you can also swap out the instances used to power your workload to better meet demand. For example, on days in which you anticipate high load you could move to more powerful instances. This could be easily automated via a Lambda function.
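
A minimal sketch of that automation (Python with boto3) follows: stop the instance, change its type, and start it again. The instance ID and target type are placeholders, and the instance is briefly unavailable while stopped.

        import boto3

        ec2 = boto3.client("ec2")
        INSTANCE_ID = "i-0123456789abcdef0"   # placeholder

        def resize(instance_id, new_type):
            """Stop an instance, change its type, and start it again (brief downtime)."""
            ec2.stop_instances(InstanceIds=[instance_id])
            ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

            # The instance type can only be changed while the instance is stopped.
            ec2.modify_instance_attribute(
                InstanceId=instance_id,
                InstanceType={"Value": new_type},
            )

            ec2.start_instances(InstanceIds=[instance_id])
            ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

        # Example: move to a larger instance ahead of an anticipated busy day.
        resize(INSTANCE_ID, "r5.4xlarge")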

The X1 family of instances offers considerable CPU, memory, storage, and network performance and can be used to accelerate applications that are designed to maximize single-machine performance. The x1e.32xlarge instance, for example, offers 128 vCPUs, 4 TB of RAM, and 14,000 Mbps of EBS bandwidth. This instance is ideal for high-performance in-memory workloads such as real-time financial risk processing or SAP HANA.

Through selecting the appropriate instance type and scaling that instance up and down to meet demand, you can achieve superior performance and cost effectiveness compared to running a single static instance. This affords lift-and-shift workloads far greater efficiency than their on-prem counterparts.

Placement Groups and C5n Instances

EC2 placement groups determine how instances are deployed onto underlying hardware. You can either cluster instances into a low-latency group within a single Availability Zone or spread instances across distinct underlying hardware. Both types of placement groups are useful for optimizing lift-and-shifts.

The spread placement group is valuable for applications that rely on a small number of critical instances. If you can’t modify your application to leverage auto scaling, liveness probes, or failover, then spread placement groups can help reduce the risk of simultaneous failure while improving the overall reliability of the application.

Cluster placement groups help improve network QoS between instances. When used in conjunction with enhanced networking, cluster placement groups help to ensure low latency, high throughput, and high network packets per second. This is beneficial for chatty applications and any application that leveraged physical co-location for performance on-prem.

There is no additional charge for using placement groups.

You can extend this approach further with C5n instances. These instances offer 100 Gbps networking and can be used in a cluster placement group for the most demanding network-intensive workloads. Using both placement groups and C5n instances requires no modification to your application, only to how it is deployed – making this a strong solution for providing network performance to lift-and-shift workloads.
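
The following sketch (Python with boto3) creates a cluster placement group and launches instances into it; the AMI ID, key pair, and instance count are placeholders.

        import boto3

        ec2 = boto3.client("ec2")

        # Create a cluster placement group for low-latency, high-throughput networking.
        ec2.create_placement_group(GroupName="lift-and-shift-cluster", Strategy="cluster")

        # Launch instances into the placement group (AMI ID and key name are placeholders).
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",
            InstanceType="c5n.9xlarge",
            MinCount=2,
            MaxCount=2,
            KeyName="my-key-pair",
            Placement={"GroupName": "lift-and-shift-cluster"},
        )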

Leverage Tiered Storage to Optimize for Price and Performance

AWS offers a range of storage options, each with its own performance characteristics and price point. By leveraging a combination of storage types, lift-and-shifts can meet their performance and availability requirements in a cost-effective manner. The range of storage options includes:

Amazon EBS is the most common storage service involved with lift-and-shifts. EBS provides block storage that can be attached to EC2 instances and formatted with a typical file system such as NTFS or ext4. There are several different EBS volume types, ranging from inexpensive magnetic storage to highly performant provisioned IOPS SSDs. There are also storage-optimized instances that offer high-performance EBS access and NVMe storage. By utilizing the appropriate type of EBS volume and instance, a balance of performance and price can be achieved. RAID offers a further option to optimize EBS. EBS volumes are automatically replicated within their Availability Zone at no additional cost; in addition, an EC2 instance can apply its own RAID levels across multiple volumes. For instance, you can apply RAID 0 over a number of EBS volumes in order to improve storage performance.
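
For example, attaching a provisioned IOPS volume to an I/O-bound instance is a deployment-time change rather than an application change. A minimal sketch (Python with boto3) follows; the Availability Zone, size, IOPS figure, instance ID, and device name are placeholders.

        import boto3

        ec2 = boto3.client("ec2")

        # Create a provisioned IOPS SSD volume sized for the workload (placeholder values).
        volume = ec2.create_volume(
            AvailabilityZone="us-east-1a",
            Size=500,                # GiB
            VolumeType="io1",
            Iops=10000,
        )
        ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

        # Attach it to the lift-and-shift instance; the OS then formats and mounts it as usual.
        ec2.attach_volume(
            VolumeId=volume["VolumeId"],
            InstanceId="i-0123456789abcdef0",
            Device="/dev/sdf",
        )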

In addition to EBS, EC2 instances can utilize the EC2 instance store. The instance store provides ephemeral, directly attached storage to EC2 instances. The instance store is included with the EC2 instance and provides a facility to store non-persistent data. This makes it ideal for temporary files that an application produces and that require performant storage. Both EBS and the instance store are exposed to the EC2 instance as block-level devices, and the OS can use its native management tools to format and mount these volumes as it would a traditional disk – requiring no significant departure from the on-prem configuration. Several instance types, including the C5d and P3dn, are equipped with local NVMe storage that can support extremely I/O-intensive workloads.

Not all workloads require high-performance storage. In many cases, finding a compromise between price and performance is the top priority. Amazon S3 provides highly durable object storage at a significantly lower price point than block storage. S3 is ideal for a large number of use cases, including content distribution, data ingestion, analytics, and backup. S3, however, is accessed via a RESTful API and does not provide the conventional file system semantics of EBS. This may make S3 less viable for applications that you can’t easily modify, but there are still options for using S3 in such a scenario.

An option for leveraging S3 is AWS Storage Gateway. Storage Gateway is a virtual appliance that can be run on-prem or on EC2. The Storage Gateway appliance can operate in three configurations: file gateway, volume gateway, and tape gateway. File gateway provides an NFS interface, volume gateway provides an iSCSI interface, and tape gateway provides an iSCSI virtual tape library interface. This allows files, volumes, and tapes to be exposed to an application host through conventional protocols, with the Storage Gateway appliance persisting data to S3. This allows an application to remain agnostic to S3 while leveraging typical enterprise storage protocols.


Figure 1: Using S3 Storage via Storage Gateway

Conclusion

A lift-and-shift can achieve significant performance gains on AWS by making use of a range of instance types, storage services, and other features. Even without any modification to the application, lift-and-shift workloads can benefit from cutting edge compute, network, and IO which can help realize significant, meaningful performance gains.

About the author

Dr. Jonathan Shapiro-Ward is an AWS Solutions Architect based in Toronto. He helps customers across Canada to transform their businesses and build industry leading cloud solutions. He has a background in distributed systems and big data and holds a PhD from the University of St Andrews.

Creating an opportunistic IPSec mesh between EC2 instances

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/creating-an-opportunistic-ipsec-mesh-between-ec2-instances/


IPSec (IP Security) is a protocol for in-transit data protection between hosts. Configuration of site-to-site IPSec between multiple hosts can be an error-prone and intensive task. If you need to protect N EC2 instances, then you need a full mesh of N*(N-1) IPSec tunnels. You must manually propagate every IP change to all instances, configure credentials and configuration changes, and integrate monitoring and metrics into the operation. The effort to keep the full-mesh parameters in sync is enormous.

Full mesh IPSec, known as any-to-any, builds an underlying network layer that protects application communication. Common use cases are:

  • You’re migrating legacy applications to AWS, and they don’t support encryption. Examples of protocols without encryption are File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) or Lightweight Directory Access Protocol (LDAP).
  • You’re offloading protection to IPSec to take advantage of fast Linux kernel encryption and automated certificate management, the use case we focus on in this solution.
  • You want to segregate duties between your application development and infrastructure security teams.
  • You want to protect container or application communication that leaves an EC2 instance.

In this post, I’ll show you how to build an opportunistic IPSec mesh that sets up dynamic IPSec tunnels between your Amazon Elastic Compute Cloud (EC2) instances. The solution is based on Libreswan, an open-source project implementing opportunistic IPSec encryption (IKEv2 and IPSec) on a large scale.

Solution benefits and deliverable

The solution delivers the following benefits (versus manual site-to-site IPSec setup):

  • Automatic configuration of opportunistic IPSec upon EC2 launch.
  • Generation of instance certificates and weekly re-enrollment.
  • IPSec Monitoring metrics in Amazon CloudWatch for each EC2 instance.
  • Alarms for failures via CloudWatch and Amazon Simple Notification Service (Amazon SNS).
  • An initial generation of a CA root key if needed, including IAM Policies and two customer master keys (CMKs) that will protect the CA key and instance key.

Out of scope

This solution does not deliver IPSec protection between EC2 instances and hosts that are on-premises, or between EC2 instances and managed AWS components, like Elastic Load Balancing, Amazon Relational Database Service, or Amazon Kinesis. Your EC2 instances must have general IP connectivity to each other that is permitted by your network ACLs and security groups. This solution cannot deliver extra connectivity the way VPC peering or Transit VPC can.

Prerequisites

You’ll need the following resources to deploy the solution:

  • A trusted Unix/Linux/MacOS machine with AWS SDK for Python and OpenSSL
  • AWS admin rights in your AWS account (including API access)
  • AWS Systems Manager on EC2
  • Red Hat Enterprise Linux, Amazon Linux 2, or CentOS installed on the EC2 instances you want to configure
  • Internet access on the EC2 instances for downloading Linux packages and reaching the AWS Systems Manager endpoint
  • The AWS services used by the solution, which are AWS Lambda, AWS Key Management Service (AWS KMS), AWS Identity and Access Management (IAM), AWS Systems Manager, Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), and Amazon SNS

Solution and performance costs

This solution does not add any charges beyond standard AWS service costs, since it uses well-established open-source software. The AWS services involved are as follows:

  • AWS Lambda is used to issue the certificates. Per EC2 instance, per week, I estimate two 30-second Lambda invocations with 256 MB of allocated memory. For 100 EC2 instances, the cost will be several cents. See AWS Lambda Pricing for details.
  • Certificates have no charge, since they’re issued by the Lambda function.
  • CloudWatch Events and Amazon S3 Storage usage are within the free tier policy.
  • AWS Systems Manager has no additional charge.
  • AWS EC2 is a standard AWS service on which you deploy your workload. There are no charges for IPSec encryption.
  • EC2 CPU performance decrease due to encryption is negligible since we use hardware encryption support of the Linux kernel. The IKE negotiation that is done by the OS in your CPU may add minimal CPU overhead depending on the number of EC2 instances involved.

Installation (one-time setup)

To get started, on a trusted Unix/Linux/MacOS machine that has admin access to your AWS account and AWS SDK for Python already installed, complete the following steps:

  1. Download the installation package from https://github.com/aws-quickstart/quickstart-ec2-ipsec-mesh.
  2. Edit the following files in the package to match your network setup:
    • config/private should contain all networks with mandatory IPSec protection, such as EC2s that should only be communicated with via IPSec. All of these hosts must have IPSec installed.
    • config/clear should contain any networks that do not need IPSec protection. For example, these might include Route 53 (DNS), Elastic Load Balancing, or Amazon Relational Database Service (Amazon RDS).
    • config/clear-or-private should contain networks with optional IPSec protection. These networks will start clear and attempt to add IPSec.
    • config/private-or-clear should also contain networks with optional IPSec protection. However, these networks will start with IPSec and fail back to clear.
  3. Execute ./aws_setup.py and carefully set and verify the parameters. Use -h to view help. If you don’t provide customized options, default values will be generated. The parameters are:
    • Region to install the solution (default: your AWS Command Line Interface region)
    • Buckets for configuration, sources, published host certificates and CA storage. (Default: random values that follow the pattern ipsec-{hostcerts|cacrypto|sources}-{stackname} will be generated.) If the buckets do not exist, they will be automatically created.
    • Reuse of an existing CA? (default: no)
    • Leave encrypted backup copy of the CA key? The password will be printed to stdout (default: no)
    • CloudFormation stack name (default: ipsec-{random string}).
    • Restrict provisioning to a certain VPC (default: any)

     
    Here is an example output:

    
                ./aws_setup.py  -r ca-central-1 -p ipsec-host-v -c ipsec-crypto-v -s ipsec-source-v
                Provisioning IPsec-Mesh version 0.1
                
                Use --help for more options
                
                Arguments:
                ----------------------------
                Region:                       ca-central-1
                Vpc ID:                       any
                Hostcerts bucket:             ipsec-host-v
                CA crypto bucket:             ipsec-crypto-v
                Conf and sources bucket:      ipsec-source-v
                CA use existing:              no
                Leave CA key in local folder: no
                AWS stackname:                ipsec-efxqqfwy
                ---------------------------- 
                Do you want to proceed ? [yes|no]: yes
                The bucket ipsec-source-v already exists
                File config/clear uploaded in bucket ipsec-source-v
                File config/private uploaded in bucket ipsec-source-v
                File config/clear-or-private uploaded in bucket ipsec-source-v
                File config/private-or-clear uploaded in bucket ipsec-source-v
                File config/oe-cert.conf uploaded in bucket ipsec-source-v
                File sources/enroll_cert_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/generate_certifcate_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/ipsec_setup_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/cron.txt uploaded in bucket ipsec-source-v
                File sources/cronIPSecStats.sh uploaded in bucket ipsec-source-v
                File sources/ipsecSetup.yaml uploaded in bucket ipsec-source-v
                File sources/setup_ipsec.sh uploaded in bucket ipsec-source-v
                File README.md uploaded in bucket ipsec-source-v
                File aws_setup.py uploaded in bucket ipsec-source-v
                The bucket ipsec-host-v already exists
                Stack ipsec-efxqqfwy creation started. Waiting to finish (ca 3-5 min)
                Created CA CMK key arn:aws:kms:ca-central-1:123456789012:key/abcdefgh-1234-1234-1234-abcdefgh123456
                Certificate generation lambda arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy
                Generating RSA private key, 4096 bit long modulus
                .............................++
                .................................................................................................................................................................................................................................................................................................................................................................................++
                e is 65537 (0x10001)
                Certificate and key generated. Subject CN=ipsec.ca-central-1 Valid 10 years
                The bucket ipsec-crypto-v already exists
                Encrypted CA key uploaded in bucket ipsec-crypto-v
                CA cert uploaded in bucket ipsec-crypto-v
                CA cert and key remove from local folder
                Lambda functionarn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy updated
                Resource policy for CA CMK hardened - removed action kms:encrypt
                
                done :-)
            

Launching the EC2 Instance

Now that you’ve installed the solution, you can start launching EC2 instances. From the EC2 Launch Wizard, execute the following steps. The instructions assume that you’re using Red Hat Enterprise Linux, Amazon Linux 2, or CentOS.

Note: Steps or details that I don’t explicitly mention can be set to default (or according to your needs).

  1. Select the IAM Role already configured by the solution with the pattern Ec2IPsec-{stackname}
     

    Figure 1: Select the IAM Role

  2. (You can skip this step if you are using Amazon Linux 2.) Under Advanced Details, select User data as text and activate the AWS Systems Manager Agent (SSM Agent) by providing the following string (for 64-bit Red Hat and CentOS only):
    
        #!/bin/bash
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl start amazon-ssm-agent
        

     


    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

  3. Set the tag name to IPSec with the value todo. This is the identifier that triggers the installation and management of IPsec on the instance.
     

    Figure 3: Set the tag name to “IPSec” with the value “todo”

  4. On the Configuration page for the security group, allow ESP (Protocol 50) and IKE (UDP 500) for your network, like 172.31.0.0/16. You need to enter these values as shown in following screen:
     

    Figure 4: Enter values on the “Configuration” page

After 1-2 minutes, the value of the IPSec instance tag will change to enabled, meaning the instance is successfully set up.
 


Figure 5: Look for the “enabled” value for the IPSec key

So what’s happening in the background?

 


Figure 6: Architectural diagram

As illustrated in the solution architecture diagram, the following steps are executed automatically in the background by the solution:

  1. An EC2 launch triggers a CloudWatch event, which launches an IPSecSetup Lambda function.
  2. The IPSecSetup Lambda function checks whether the EC2 instance has the tag IPSec:todo. If the tag is present, the Lambda function issues a certificate calling a GenerateCertificate Lambda.
  3. The GenerateCertificate Lambda function downloads the encrypted CA certificate and key.
  4. The GenerateCertificate Lambda function decrypts the CA key with a customer master key (CMK).
  5. The GenerateCertificate Lambda function issues a host certificate to the EC2 instance. It encrypts the host certificate and key with a KMS generated random secret in PKCS12 structure. The secret is envelope-encrypted with a dedicated CMK.
  6. The GenerateCertificate Lambda function publishes the issued certificates to your dedicated bucket for documentation.
  7. The IPSecSetup Lambda function runs the installation on the EC2 instance via SSM.
  8. The installation downloads the configuration and installs python, aws-sdk, libreswan, and curl if needed.
  9. The EC2 instance decrypts the host key with the dedicated CMK and installs it in the IPSec database.
  10. A weekly scheduled event triggers reenrollment of the certificates via the Reenrollcertificates Lambda function.
  11. The Reenrollcertificates Lambda function triggers the IPSecSetup Lambda (call event type: execution). The IPSecSetup Lambda will renew the certificate only, leaving the rest of the configuration untouched.

Testing the connection on the EC2 instance

You can log in to the instance and ping one of the hosts in your network. This will trigger the IPSec connection and you should see successful answers.


        $ ping 172.31.1.26
        
        PING 172.31.1.26 (172.31.1.26) 56(84) bytes of data.
        64 bytes from 172.31.1.26: icmp_seq=2 ttl=255 time=0.722 ms
        64 bytes from 172.31.1.26: icmp_seq=3 ttl=255 time=0.483 ms
        

To see a list of IPSec tunnels you can execute the following:


        sudo ipsec whack --trafficstatus
        

Here is an example of the execution:
 


Figure 7: Example execution

Changing your configuration or installing it on already running instances

All configuration exists in the source bucket (default: prefix ipsec-source), in files that follow the libreswan standard. If you need to change the configuration, follow these instructions:

  1. Review and update the following files:
    1. oe-cert.conf, which is the configuration for libreswan
    2. clear, private, clear-or-private, and private-or-clear, which should contain your network ranges.
  2. Change the tag for the IPSec instance to IPSec:todo.
  3. Stop and Start the instance (don’t restart). This will retrigger the setup of the instance.
     

    Figure 8: Stop and start the instance

    1. As an alternative to step 3, if you prefer not to stop and start the instance, you can invoke the IPSecSetup Lambda function via Test Event with a test JSON event in the following format:
      
                      { "detail" :  
                          { "instance-id": "YOUR_INSTANCE_ID" }
                      }
              

      A sample of test event creation in the Lambda console is shown below; a scripted alternative using the AWS SDK appears after this list.
       


      Figure 9: Sample test event creation
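
If you’d rather script that invocation than use the console, the following sketch (Python with boto3) sends the same test event to the IPSecSetup Lambda function; the function name and instance ID are placeholders, so look up the exact function name in your CloudFormation stack’s resources.

        import json
        import boto3

        lambda_client = boto3.client("lambda")

        # Placeholder values: the stack suffix comes from your deployment (see the CloudFormation stack name).
        FUNCTION_NAME = "IPSecSetup-ipsec-efxqqfwy"
        INSTANCE_ID = "i-0123456789abcdef0"

        response = lambda_client.invoke(
            FunctionName=FUNCTION_NAME,
            InvocationType="RequestResponse",
            Payload=json.dumps({"detail": {"instance-id": INSTANCE_ID}}),
        )
        print(response["Payload"].read().decode())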

Monitoring and alarms

The solution delivers IPSec/IKE metrics and raises SNS alarms in the case of errors. To monitor your IPSec environment, you can use Amazon CloudWatch. You can see metrics for active IPSec sessions, IKE/ESP errors, and connection shunts.
 


Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts
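
For reference, per-instance metrics like these can be published with a simple put_metric_data call. The sketch below (Python with boto3) only illustrates the mechanism, not the exact script the solution ships (the solution uses a cron job on each instance); the namespace, metric names, and values are assumptions.

        import boto3

        cloudwatch = boto3.client("cloudwatch")

        # Hypothetical example of publishing IPSec metrics for one instance.
        cloudwatch.put_metric_data(
            Namespace="IPSec",  # assumed namespace
            MetricData=[
                {
                    "MetricName": "ActiveTunnels",
                    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
                    "Value": 4,
                    "Unit": "Count",
                },
                {
                    "MetricName": "IKEErrors",
                    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
                    "Value": 0,
                    "Unit": "Count",
                },
            ],
        )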

There are two SNS topics and alarms configured for IPSec setup failure or certificate reenrollment failure. You will see an alarm and an SNS message. It’s very important that your administrator subscribes to notifications so that you can react quickly. If you receive an alarm, please use the information in the “Troubleshooting” section of this post, below.
 


Figure 11: Alarms

Troubleshooting

Below, I’ve listed some common errors and how to troubleshoot them:
 

The IPSec Tag doesn’t change to IPSec:enabled upon EC2 launch.

  1. Wait 2 minutes after the EC2 instance launches, so that it becomes reachable for AWS SSM.
  2. Check that the EC2 Instance has the right role assigned for the SSM Agent. The role is provisioned by the solution named Ec2IPsec-{stackname}.
  3. Check that the SSM Agent is reachable via a NAT gateway, an Internet gateway, or a private SSM endpoint.
  4. For CentOS and Red Hat, check that you’ve installed the SSM Agent. See “Launching the EC2 instance.”
  5. Check the output of the SSM Agent command execution in the EC2 service.
  6. Check the IPSecSetup Lambda logs in CloudWatch for details.

The IPSec connection is lost after a few hours and can only be established from one host (in one direction).

  1. Check that your Security Groups allow ESP Protocol and UDP 500. Security Groups are stateful. They may only allow a single direction for IPSec establishment.
  2. Check that your network ACL allows UDP 500 and ESP Protocol.

The SNS alarm on IPSec reenrollment is triggered, but everything seems to work fine.

  1. Certificates are valid for 30 days and rotated every week. If the rotation fails, you have three weeks to fix the problem.
  2. Check that the EC2 instances are reachable over AWS SSM. If reachable, trigger the certificate rotation Lambda again.
  3. See the IPSecSetup Lambda logs in CloudWatch for details.

DNS Route 53, RDS, and other managed services are not reachable.

  1. DNS, RDS and other managed services do not support IPSec. You need to exclude them from encryption by listing them in the config/clear list. For more details see step 2 of Installation (one-time setup) in this blog.

Here are some additional general IPSec commands for troubleshooting:

Stopping IPSec can be done by executing the following unix command:


        sudo ipsec stop 
        

If you want to stop IPSec on all instances, you can execute this command via AWS Systems Manager on all instances with the tag IPSec:enabled. Stopping encryption means all traffic will be sent unencrypted.

If you want to have a fail-open case, meaning on IKE(IPSec) failure send the data unencrypted, then configure your network in config/private-or-clear as described in step 2 of Installation (one-time setup).

Debugging IPSec issues can be done using Libreswan commands. For example:


        sudo ipsec status 
        
        sudo ipsec whack --debug
        
        sudo ipsec barf 
        

Security

The CA key is encrypted using an Advanced Encryption Standard (AES) 256 CBC 128-byte secret and stored in a bucket with server-side encryption (SSE). The secret is envelope-encrypted with a CMK following the AWS KMS pattern. Only the certificate-issuing Lambda function can decrypt the secret, as enforced by the KMS key’s resource policy. The encrypted secret for the CA key is set in an encrypted environment variable of the certificate-issuing Lambda function.

The IPSec host private key is generated by the certificate-issuing Lambda function. The private key and certificate are encrypted with AES 256 CBC (PKCS12) and protected with a 128-byte secret generated by KMS. The secret is envelope-encrypted with a user CMK. Only EC2 instances with the attached IPSec IAM policy can decrypt the secret and private key.
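
The envelope-encryption pattern described above can be sketched in a few lines (Python with boto3): KMS generates a data key under the CMK, the plaintext key encrypts the material locally, and only the encrypted copy of the data key is stored alongside the ciphertext. The key alias and the use of the cryptography library’s Fernet primitive are illustrative assumptions, not the solution’s exact code (which uses AES 256 CBC and PKCS12).

        import base64
        import boto3
        from cryptography.fernet import Fernet

        kms = boto3.client("kms")

        # 1. Ask KMS for a data key under the dedicated CMK (alias is a placeholder).
        data_key = kms.generate_data_key(KeyId="alias/ipsec-host-keys", KeySpec="AES_256")

        # 2. Encrypt the host key material locally with the plaintext data key.
        fernet = Fernet(base64.urlsafe_b64encode(data_key["Plaintext"]))
        ciphertext = fernet.encrypt(b"-----BEGIN PRIVATE KEY----- ...")

        # 3. Store only the ciphertext and the *encrypted* data key; discard the plaintext key.
        stored_blob = data_key["CiphertextBlob"]

        # Later, an authorized principal (e.g. the EC2 instance role) reverses the process.
        plaintext_key = kms.decrypt(CiphertextBlob=stored_blob)["Plaintext"]
        recovered = Fernet(base64.urlsafe_b64encode(plaintext_key)).decrypt(ciphertext)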

The issuing of the certificate is a full synchronous call: One request and one corresponding response without any polling or similar sync/callbacks. The host private key is not stored in a database or an S3 bucket.

The issued certificates are valid for 30 days and are stored for auditing purposes in a certificates bucket without a private key.

Alternate subject names and multiple interfaces or secondary IPs

The certificate subject name and subject alternative name (SAN) attribute contain the private Domain Name System (DNS) name of the EC2 instance and all private IPs assigned to the instance (interfaces, primary, and secondary IPs).

The provided default libreswan configuration covers a single interface. You can adjust the configuration according to libreswan documentation for multiple interfaces, for example, to cover Amazon Elastic Container Service for Kubernetes (Amazon EKS).

Conclusion

With the solution in this blog post, you can automate the process of building an encrypted IPSec layer for your EC2 instances to protect your workloads. You don’t need to worry about configuring certificates, monitoring, and alerting. The solution uses a combination of AWS KMS, IAM, AWS Lambda, CloudWatch, and the Libreswan implementation. If you need Libreswan support, use the mailing list or GitHub. The AWS forums can give you more information on KMS and IAM. If you require a special enterprise enhancement, contact AWS Professional Services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is a senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU Darmstadt and an M.S. in electrical engineering from Bochum University in Germany.

How to map out your migration of Oracle PeopleSoft to AWS

Post Syndicated from Ashok Shanmuga Sundaram original https://aws.amazon.com/blogs/architecture/how-to-map-out-your-migration-of-oracle-peoplesoft-to-aws/

Oracle PeopleSoft Enterprise is a widely used enterprise resource planning (ERP) application. Customers run production deployments of various PeopleSoft applications on AWS, including PeopleSoft Human Capital Management (HCM), Financials and Supply Chain Management (FSCM), Interactive Hub (IAH), and Customer Relationship Management (CRM).

We published a whitepaper on Best Practices for Running Oracle PeopleSoft on AWS in December 2017. It provides architectural guidance and outlines best practices for high availability, security, scalability, and disaster recovery for running Oracle PeopleSoft applications on AWS.

It also covers highly available, scalable, and cost-effective multi-region reference architectures for deploying PeopleSoft applications on AWS, like the one illustrated below.

While migrating your Oracle PeopleSoft applications to AWS, here are some things to keep in mind:

  • Multi-AZ deployments – Deploy your PeopleSoft servers and database across multiple Availability Zones (AZs) for high availability. AWS AZs allow you to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center.
  • Use Amazon Relational Database Service (Amazon RDS) to deploy your PeopleSoft database – Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, allowing you to focus on your applications and business. Deploying an RDS for Oracle database in multiple AZs simplifies creating a highly available architecture because you’ll have built-in support for automated failover from your primary database to a synchronously replicated secondary database in an alternative AZ.
  • Migration of large databases – Migrating large databases to Amazon RDS within a small downtime window requires careful planning:
    • We recommend that you take a point-in-time export of your database, transfer it to AWS, import it into Amazon RDS, and then apply the delta changes from on-premises.
    • Use AWS Direct Connect or AWS Snowball to transfer the export dump to AWS.
    • Use AWS Database Migration Service to apply the delta changes and sync the on-premises database with the Amazon RDS instance (a scripted sketch of this step appears after this list).
  • AWS Infrastructure Event Management (IEM) – Take advantage of AWS IEM to mitigate risks and help ensure a smooth migration. IEM is a highly focused engagement where AWS experts provide you with architectural and operational guidance, assist you in reviewing and fine-tuning your migration plan, and provide real-time support for your migration.
  • Cost optimization – There are a number of ways you can optimize your costs on AWS, including:
    • Use reserved instances for environments that are running most of the time, like production environments. A Reserved Instance is an EC2 offering that provides you with a significant discount (up to 75%) on EC2 usage compared to On-Demand pricing when you commit to a one-year or three-year term.
    • Shut down resources that are not in use. For example, development and test environments are typically used for only eight hours a day during the work week. You can stop these resources when they are not in use for a potential cost savings of 75% (40 hours vs. 168 hours). Use the AWS Instance Scheduler to automatically start and stop your Amazon EC2 and Amazon RDS instances based on a schedule.
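
If you script the delta-sync step mentioned above, creating the ongoing-replication (CDC) task through the AWS DMS API looks roughly like the sketch below (Python with boto3); the ARNs and the table-mapping rule are placeholders and assume the source endpoint, target endpoint, and replication instance already exist.

        import json
        import boto3

        dms = boto3.client("dms")

        # Placeholder ARNs for endpoints and a replication instance created beforehand.
        task = dms.create_replication_task(
            ReplicationTaskIdentifier="peoplesoft-cdc",
            SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
            TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
            ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
            MigrationType="cdc",  # apply only ongoing changes after the initial import
            TableMappings=json.dumps({
                "rules": [{
                    "rule-type": "selection",
                    "rule-id": "1",
                    "rule-name": "include-all",
                    "object-locator": {"schema-name": "%", "table-name": "%"},
                    "rule-action": "include",
                }]
            }),
        )
        task_arn = task["ReplicationTask"]["ReplicationTaskArn"]

        # Wait until the task is ready, then start ongoing replication.
        dms.get_waiter("replication_task_ready").wait(
            Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
        )
        dms.start_replication_task(
            ReplicationTaskArn=task_arn,
            StartReplicationTaskType="start-replication",
        )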

The Configuring Amazon RDS as an Oracle PeopleSoft Database whitepaper has detailed instructions on configuring a backend Amazon RDS database for your Oracle PeopleSoft deployment on AWS. After you read the whitepaper, I recommend these other resources as your next step:

  • For a real-world case study on migrating a large Oracle database to AWS, check out this blog post about how AFG migrated their mission-critical Oracle Siebel CRM system running on Oracle Exadata on-premises to Amazon RDS for Oracle.
  • For more information on running Oracle Enterprise Solutions on AWS, check out this re:Invent 2017 video.
  • You can find more Oracle on AWS resources here and here.

About the author

Ashok Shanmuga Sundaram is a partner solutions architect with the Global System Integrator (GSI) team at Amazon Web Services. He works with the GSIs to provide guidance on enterprise cloud adoption, migration and strategy.