Tag Archives: AWS on Windows

Use MAP for Windows to Simplify your Migration to AWS

Post Syndicated from Fred Wurden original https://aws.amazon.com/blogs/compute/use-map-for-windows-to-simplify-your-migration-to-aws/

There’s no question that organizations today are being disrupted in their industries. In a previous blog post, I shared that such disruption often accelerates organizations’ decisions to move to the cloud. When these organizations migrate to the cloud, Windows workloads are often critical to their business, and these workloads require a performant, reliable, and secure cloud infrastructure. Customers tell us that reducing risk, building cloud expertise, and lowering costs are important factors when choosing that infrastructure.

Today, we are announcing the general availability of the Migration Acceleration Program (MAP) for Windows, a comprehensive program that helps you execute large-scale migrations and modernizations of your Windows workloads on AWS. We have millions of customers on AWS, and we have spent the last 11 years helping Windows customers successfully move to our cloud. We’ve built a proven methodology, providing you with AWS services, tools, and expertise to help simplify the migration of your Windows workloads to AWS. MAP for Windows provides prescriptive guidance, consulting support from experts, tools, training, and service credits to help reduce the risk and cost of migrating to the cloud as you embark on your migration journey.

MAP for Windows also helps you along the pathways to modernize current and legacy versions of Windows Server and SQL Server to cloud native and open source solutions, enabling you to break free from commercial licensing costs. With the strong price-performance of open-source solutions and the proven reliability of AWS, you can innovate quickly while reducing your risk.

With MAP for Windows, you follow a simple three-step migration process:

  1. Assess Your Readiness: The migration readiness assessment helps you identify gaps along the six dimensions of the AWS Cloud Adoption Framework: business, process, people, platform, operations, and security. This assessment helps customers identify capabilities required in the migration. MAP for Windows also includes an Optimization and Licensing Assessment, which provides recommendations on how to optimize your licenses on AWS.
  2. Mobilize Your Resources: The mobilize phase helps you build an operational foundation for your migration, with the goal of fixing the capability gaps identified in the assessment phase. The mobilize phase accelerates your migration decisions by providing clear guidance on migration plans that improve the success of your migration.
  3. Migrate or Modernize Your Workloads: APN Partners and the AWS ProServe team help customers execute the large-scale migration plan developed during the mobilize phase. MAP for Windows also offers financial incentives to help you offset migration costs such as labor, training, and the expense of sometimes running two environments in parallel.

MAP for Windows includes support from AWS Professional Services and AWS Migration Competency Partners, such as Rackspace, 2nd Watch, Accenture, Cloudreach, Enimbos Global Services, Onica, and Slalom. Our MAP for Windows partners have successfully demonstrated completion of multiple large-scale migrations to AWS. They have received the APN Migration Competency Partner and the Microsoft Workloads Competency designations.

Learn about what MAP for Windows can do for you on this page. Learn also about the migration experiences of AWS customers. And contact us to discuss your Windows migration or modernization initiatives and apply to MAP for Windows.

About the Author

Fred Wurden is the GM of Enterprise Engineering (Windows, VMware, Red Hat, SAP, benchmarking) working to make AWS the most customer-centric cloud platform on Earth. Prior to AWS, Fred worked at Microsoft for 17 years and held positions including EU/DOJ engineering compliance for Windows and Azure, interoperability principles and partner engagements, and open source engineering. He lives with his wife and a few four-legged friends since his kids are all in college now.

Building Windows containers with AWS CodePipeline and custom actions

Post Syndicated from Dmitry Kolomiets original https://aws.amazon.com/blogs/devops/building-windows-containers-with-aws-codepipeline-and-custom-actions/

Dmitry Kolomiets, DevOps Consultant, Professional Services

AWS CodePipeline and AWS CodeBuild are the primary AWS services for building CI/CD pipelines. AWS CodeBuild supports a wide range of build scenarios thanks to various built-in Docker images. It also allows you to bring in your own custom image in order to use different tools and environment configurations. However, there are some limitations in using custom images.

Considerations for custom Docker images:

  • AWS CodeBuild has to download a new copy of the Docker image for each build job, which may take a long time for large Docker images.
  • AWS CodeBuild provides a limited set of instance types to run the builds. You might have to use a custom image if the build job requires more memory, more CPU, a graphical subsystem, or any other functionality that is not part of the Docker images provided out of the box.

Windows-specific limitations

  • AWS CodeBuild supports Windows builds only in a limited number of AWS regions at this time.
  • AWS CodeBuild executes Windows Server containers using Windows Server 2016 hosts, which means that build containers are huge: it is not uncommon to have an image size of 15 GB or more (with the .NET Framework SDK installed). Windows Server 2019 containers, which are almost half the size, cannot be used due to the host and container version mismatch.
  • AWS CodeBuild runs build jobs inside Docker containers. You should enable privileged mode in order to build and publish Linux Docker images as part of your build job. However, Docker-in-Docker (DinD) is not supported on Windows and, therefore, AWS CodeBuild cannot be used to build Windows Server container images.

The last point is the critical one for microservice-style applications based on the Microsoft stack (.NET Framework, Web API, IIS). The usual workflow for this kind of application is to build a Docker image, push it to Amazon ECR, and update the Amazon ECS or EKS cluster deployment.

Here is what I cover in this post:

  • How to address the limitations stated above by implementing AWS CodePipeline custom actions (applicable for both Linux and Windows environments).
  • How to use the created custom action to define a CI/CD pipeline for Windows Server containers.

CodePipeline custom actions

By using Amazon EC2 instances, you can address the limitations with Windows Server containers and enable Windows build jobs in the regions where AWS CodeBuild does not provide native Windows build environments. To accommodate the specific needs of a build job, you can pick one of the many Amazon EC2 instance types available.

The downside of this approach is the additional management burden: neither AWS CodeBuild nor AWS CodePipeline supports Amazon EC2 instances directly. There are ways to set up a Jenkins build cluster on AWS and integrate it with CodeBuild and CodeDeploy, but these options are too “heavy” for the simple task of building a Docker image.

There is a different way to tackle this problem: AWS CodePipeline provides APIs that allow you to extend a build action through custom actions. This example demonstrates how to add a custom action to offload a build job to an Amazon EC2 instance.

Here is the generic sequence of steps that the custom action performs:

  • Acquire EC2 instance (see the Notes on Amazon EC2 build instances section).
  • Download AWS CodePipeline artifacts from Amazon S3.
  • Execute the build command and capture any errors.
  • Upload output artifacts to be consumed by subsequent AWS CodePipeline actions.
  • Update the status of the action in AWS CodePipeline.
  • Release the Amazon EC2 instance.

Notice that most of these steps are the same regardless of the actual build job being executed. However, the following parameters will differ between CI/CD pipelines and, therefore, have to be configurable:

  • Instance type (t2.micro, t3.2xlarge, etc.)
  • AMI (builds could have different prerequisites in terms of OS configuration, software installed, Docker images downloaded, etc.)
  • Build command line(s) to execute (MSBuild script, bash, Docker, etc.)
  • Build job timeout

Serverless custom action architecture

A CodePipeline custom build action can be implemented as an agent component installed on an Amazon EC2 instance. The agent polls CodePipeline for build jobs and executes them on the Amazon EC2 instance. There is an example of such an agent on GitHub, but this approach requires installation and configuration of the agent on all Amazon EC2 instances that carry out the build jobs.

Instead, I want to introduce an architecture that enables any Amazon EC2 instance to be a build agent without additional software and configuration required. The architecture diagram looks as follows:

Serverless custom action architecture

There are multiple components involved:

  1. An Amazon CloudWatch Event triggers an AWS Lambda function when a custom CodePipeline action is to be executed.
  2. The Lambda function retrieves the action’s build properties (AMI, instance type, etc.) from CodePipeline, along with location of the input artifacts in the Amazon S3 bucket.
  3. The Lambda function starts a Step Functions state machine that carries out the build job execution, passing all the gathered information as input payload.
  4. The Step Functions flow acquires an Amazon EC2 instance according to the provided properties, waits until the instance is up and running, and starts an AWS Systems Manager command. The Step Functions flow is also responsible for handling all the errors during build job execution and releasing the Amazon EC2 instance once the Systems Manager command execution is complete.
  5. The Systems Manager command runs on an Amazon EC2 instance, downloads CodePipeline input artifacts from the Amazon S3 bucket, unzips them, executes the build script, and uploads any output artifacts to the CodePipeline-provided Amazon S3 bucket.
  6. The polling Lambda function updates the state of the custom action in CodePipeline once it detects that the Step Functions flow is complete.

The whole architecture is serverless and requires no maintenance in terms of software installed on Amazon EC2 instances thanks to the Systems Manager command, which is essential for this solution. All the code, AWS CloudFormation templates, and installation instructions are available on the GitHub project. The following sections provide further details on the mentioned components.

Custom Build Action

The custom action type is defined as an AWS::CodePipeline::CustomActionType resource as follows:

  Ec2BuildActionType: 
    Type: AWS::CodePipeline::CustomActionType
    Properties: 
      Category: !Ref CustomActionProviderCategory
      Provider: !Ref CustomActionProviderName
      Version: !Ref CustomActionProviderVersion
      ConfigurationProperties: 
        - Name: ImageId 
          Description: AMI to use for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: InstanceType
          Description: Instance type for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: Command
          Description: Command(s) to execute.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String 
        - Name: WorkingDirectory 
          Description: Working directory for the command to execute.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
        - Name: OutputArtifactPath 
          Description: Path of the file(-s) or directory(-es) to use as custom action output artifact.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
      InputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0
      OutputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0 
      Settings: 
        EntityUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/documents/${RunBuildJobOnEc2Instance}"
        ExecutionUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/states/home#/executions/details/{ExternalExecutionId}"

The custom action type is uniquely identified by Category, Provider name, and Version.

Category defines the stage of the pipeline in which the custom action can be used, such as build, test, or deploy. Check the AWS documentation for the full list of allowed values.

Provider name and Version are the values used to identify the custom action type in the CodePipeline console or AWS CloudFormation templates. Once the custom action type is installed, you can add it to the pipeline, as shown in the following screenshot:

Adding custom action to the pipeline
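If you prefer to verify the registration from the command line, the following is a minimal sketch using the AWS Tools for PowerShell. It assumes the AWS.Tools.CodePipeline module is installed and that the ListActionTypes owner filter is exposed as the -ActionOwnerFilter parameter.

# List the action types owned by this account; the custom action should appear
# with the Category, Provider, and Version values configured above
Get-CPActionType -ActionOwnerFilter Custom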

The custom action type also defines a list of user-configurable properties—these are the properties identified above as specific for different CI/CD pipelines:

  • AMI Image ID
  • Instance Type
  • Command
  • Working Directory
  • Output artifacts

The properties are configurable in the CodePipeline console, as shown in the following screenshot:

Custom action properties

Note the last two settings in the Custom Action Type AWS CloudFormation definition: EntityUrlTemplate and ExecutionUrlTemplate.

EntityUrlTemplate defines the link to the AWS Systems Manager document that carries out the build actions. The link is visible in the AWS CodePipeline console, as shown in the following screenshot:

Custom action's EntityUrlTemplate link

ExecutionUrlTemplate defines the link to additional information related to a specific execution of the custom action. The link is also visible in the CodePipeline console, as shown in the following screenshot:

Custom action's ExecutionUrlTemplate link

This URL is defined as a link to the Step Functions execution details page, which provides high-level information about the custom build step execution, as shown in the following screenshot:

Custom build step execution

This page is a convenient visual representation of the custom action execution flow and may be useful for troubleshooting purposes, as it gives immediate access to error messages and logs.

The polling Lambda function

The Lambda function polls CodePipeline for custom actions when it is triggered by the following CloudWatch event:

  source: 
    - "aws.codepipeline"
  detail-type: 
    - "CodePipeline Action Execution State Change"
  detail: 
    state: 
      - "STARTED"

The event is triggered for every CodePipeline action that starts, so the Lambda function should verify whether there is indeed a custom action to be processed.

The rest of the Lambda function is trivial and relies on the following APIs to retrieve or update CodePipeline actions and deal with instances of Step Functions state machines:

  • CodePipeline API
  • AWS Step Functions API

You can find the complete source of the Lambda function on GitHub.

Step Functions state machine

The following diagram shows the complete Step Functions state machine. There are three main blocks on the diagram:

  • Acquiring an Amazon EC2 instance and waiting while the instance is registered with Systems Manager
  • Running a Systems Manager command on the instance
  • Releasing the Amazon EC2 instance

Note that the Amazon EC2 instance must be released even if an error or exception occurs during Systems Manager command execution; the state machine relies on Fallback States to guarantee this.

You can find the complete definition of the Step Function state machine on GitHub.

Step Functions state machine

Systems Manager Document

The AWS Systems Manager Run Command does all the magic. The Systems Manager agent is pre-installed on AWS Windows and Linux AMIs, so no additional software is required. The Systems Manager run command executes the following steps to carry out the build job:

  1. Download input artifacts from Amazon S3.
  2. Unzip artifacts in the working folder.
  3. Run the command.
  4. Upload output artifacts to Amazon S3, if any; this makes them available for the following CodePipeline stages.

The preceding steps are operating-system agnostic, and both Linux and Windows instances are supported. The following code snippet shows the Windows-specific steps.

You can find the complete definition of the Systems Manager document on GitHub.

mainSteps:
  - name: win_enable_docker
    action: aws:configureDocker
    inputs:
      action: Install

  # Windows steps
  - name: windows_script
    precondition:
      StringEquals: [platformType, Windows]
    action: aws:runPowerShellScript
    inputs:
      runCommand:
        # Ensure that if a command fails the script does not proceed to the following commands
        - "$ErrorActionPreference = \"Stop\""

        - "$jobDirectory = \"{{ workingDirectory }}\""
        # Create temporary folder for build artifacts, if not provided
        - "if ([string]::IsNullOrEmpty($jobDirectory)) {"
        - "    $parent = [System.IO.Path]::GetTempPath()"
        - "    [string] $name = [System.Guid]::NewGuid()"
        - "    $jobDirectory = (Join-Path $parent $name)"
        - "    New-Item -ItemType Directory -Path $jobDirectory"
                # Set current location to the new folder
        - "    Set-Location -Path $jobDirectory"
        - "}"

        # Download/unzip input artifact
        - "Read-S3Object -BucketName {{ inputBucketName }} -Key {{ inputObjectKey }} -File artifact.zip"
        - "Expand-Archive -Path artifact.zip -DestinationPath ."

        # Run the build commands
        - "$directory = Convert-Path ."
        - "$env:PATH += \";$directory\""
        - "{{ commands }}"
        # We need to check exit code explicitly here
        - "if (-not ($?)) { exit $LASTEXITCODE }"

        # Compress output artifacts, if specified
        - "$outputArtifactPath  = \"{{ outputArtifactPath }}\""
        - "if ($outputArtifactPath) {"
        - "    Compress-Archive -Path $outputArtifactPath -DestinationPath output-artifact.zip"
                # Upload compressed artifact to S3
        - "    $bucketName = \"{{ outputBucketName }}\""
        - "    $objectKey = \"{{ outputObjectKey }}\""
        - "    if ($bucketName -and $objectKey) {"
                    # Don't forget to encrypt the artifact - CodePipeline bucket has a policy to enforce this
        - "        Write-S3Object -BucketName $bucketName -Key $objectKey -File output-artifact.zip -ServerSideEncryption aws:kms"
        - "    }"
        - "}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"

CI/CD pipeline for Windows Server containers

Once you have a custom action that offloads the build job to the Amazon EC2 instance, you may approach the problem stated at the beginning of this blog post: how to build and publish Windows Server containers on AWS.

With the custom action installed, the solution is quite straightforward. To build a Windows Server container image, you need to provide a Windows Server with Containers AMI, the instance type to use, and the command line to execute, as shown in the following screenshot:

Windows Server container custom action properties

This example executes the Docker build command on a Windows instance with the specified AMI and instance type, using the provided source artifact. In real life, you may want to keep the build script along with the source code and push the built image to a container registry. The following is an example PowerShell script that not only produces a Docker image but also pushes it to Amazon ECR:

# Authenticate with ECR
Invoke-Expression -Command (Get-ECRLoginCommand).Command

# Build and push the image
docker build -t <ecr-repository-url>:latest .
docker push <ecr-repository-url>:latest

return $LASTEXITCODE

You can find a complete example of the pipeline that produces the Windows Server container image and pushes it to Amazon ECR on GitHub.

Notes on Amazon EC2 build instances

There are a few ways to get Amazon EC2 instances for custom build actions. Let’s take a look at a couple of them below.

Start new EC2 instance per job and terminate it at the end

This is a reasonable default strategy that is implemented in this GitHub project. Each time the pipeline needs to process a custom action, you start a new Amazon EC2 instance, carry out the build job, and terminate the instance afterwards.

This approach is easy to implement. It works well for scenarios in which you don’t have many builds and/or the builds take some time to complete (tens of minutes). In this case, the time required to provision an instance is amortized. Conversely, if the builds are fast, instance provisioning time could actually be longer than the time required to carry out the build job.
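As an illustration only, the acquire and release steps of this strategy map to a couple of AWS Tools for PowerShell calls. This is a sketch rather than the project’s actual implementation (the Step Functions flow calls the EC2 API directly), and the AMI ID and instance type are placeholders.

# Acquire: launch a fresh build instance for this job (placeholder AMI and instance type)
$reservation = New-EC2Instance -ImageId ami-0123456789abcdef0 -InstanceType t3.large -MinCount 1 -MaxCount 1
$instanceId = $reservation.Instances[0].InstanceId

# ... wait until the instance is registered with Systems Manager, then run the build command ...

# Release: terminate the instance once the Systems Manager command has finished
Remove-EC2Instance -InstanceId $instanceId -Force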

Use a pool of running Amazon EC2 instances

There are cases when it is required to keep builder instances “warm”, either due to complex initialization or merely to reduce the build duration. To support this scenario, you could maintain a pool of always-running instances. The “acquisition” phase takes a warm instance from the pool and the “release” phase returns it back without terminating or stopping the instance. A DynamoDB table can be used as a registry to keep track of “busy” instances and provide waiting or scaling capabilities to handle high demand.

This approach works well for scenarios in which there are many builds and demand is predictable (e.g. during work hours).

Use a pool of stopped Amazon EC2 instances

This is an interesting approach, especially for Windows builds. All AWS Windows AMIs are generalized using the Sysprep tool. The important implication of this is that the first start of a Windows EC2 instance is quite long: it could easily take more than 5 minutes. This is generally unacceptable for short-lived build jobs (if your build takes just a minute, it is annoying to wait 5 minutes for the instance to start).

Interestingly, once the Windows instance is initialized, subsequent starts take less than a minute. To take advantage of this, you could create a pool of initialized and stopped Amazon EC2 instances. In this case, the acquisition phase starts an instance from the pool, and the release phase stops or hibernates it.
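With a pool of stopped instances, the acquire and release steps become start and stop calls instead of launch and terminate. Here is a minimal sketch, assuming the instance ID comes from your pool registry:

# Acquire: start a pre-initialized instance taken from the pool of stopped instances
Start-EC2Instance -InstanceId $instanceId

# ... run the build job ...

# Release: stop (or hibernate) the instance and return it to the pool
Stop-EC2Instance -InstanceId $instanceId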

This approach provides substantial improvements in terms of build start-up time.

The downside is that you reuse the same Amazon EC2 instance between builds, so it is not a completely clean environment. Build jobs have to be designed to expect the presence of artifacts from previous executions on the build instance.

Using an Amazon EC2 fleet with spot instances

Another variation of the previous strategies is to use Amazon EC2 Fleet to make use of cost-efficient spot instances for your build jobs.

Amazon EC2 Fleet makes it possible to combine On-Demand Instances with Spot Instances to deliver a cost-efficient solution for your build jobs. On-Demand Instances can provide the minimum required capacity, and Spot Instances provide a cost-efficient way to improve the performance of your build fleet.

Note that because Spot Instances can be terminated at any time, the Step Functions workflow has to handle Amazon EC2 instance termination and restart the build on a different instance transparently to CodePipeline.

Limits and Cost

The following are a few final thoughts.

Custom action timeouts

The default maximum execution time for CodePipeline custom actions is one hour. If your build jobs require more than an hour, you need to request a limit increase for custom actions.

Cost of running EC2 build instances

For many scenarios, custom Amazon EC2 build instances could be more cost-effective than CodeBuild. However, it is difficult to compare the total cost of ownership of a custom build fleet with CodeBuild. CodeBuild is a fully managed build service, and you pay for each minute of using the service. In contrast, with Amazon EC2 instances you pay for the instance either per hour or per second (depending on instance type and operating system), plus EBS volumes, Lambda, and Step Functions usage. Please use the AWS Simple Monthly Calculator to estimate the total cost of your projected build solution.

Cleanup

If you ran the above steps as part of a workshop or testing, delete the resources to avoid incurring further charges. All resources are deployed as part of an AWS CloudFormation stack, so open the AWS CloudFormation console, select the stack, and choose Delete to remove it.

Conclusion

The CodePipeline custom action is a simple way to utilize Amazon EC2 instances for your build jobs and to address a number of the CodeBuild limitations described at the beginning of this post.

The CodePipeline custom action with a simple start/terminate instance strategy is available on GitHub as an AWS CloudFormation template. You can import it into your account and start using the custom action in your pipelines right away.

An example of the pipeline that produces Windows Server containers and pushes them to Amazon ECR can also be found on GitHub.

I invite you to clone the repositories to play with the custom action, and to make any changes to the action definition, Lambda functions, or Step Functions flow.

Feel free to ask any questions or comments below, or file issues or PRs on GitHub to continue the discussion.

Fact-checking GigaOm’s Microsoft-sponsored benchmark claims

Post Syndicated from Fred Wurden original https://aws.amazon.com/blogs/compute/fact-checking-gigaoms-microsoft-sponsored-benchmark-claims/

SQL Server on AWS delivers 40% price/performance advantage over Azure

In this blog, we review a recent benchmark that Microsoft sponsored and GigaOm published on 12/2/2019. This benchmark is not credible: Microsoft and GigaOm used configurations of AWS that generate weaker performance, have not been transparent about how the benchmark was run, and the results are not reproducible.

AWS is committed to providing objective, transparent, and replicable benchmarking data so you can make an informed decision about running SQL Server on AWS. The latest AWS performance benchmark shows that AWS has up to a 1.75x performance advantage and up to a 40% price/performance advantage over Azure (see Appendix).

The GigaOm/Microsoft benchmark is not an accurate, head-to-head comparison. It claims that Azure has over 3x performance and an 80% price/performance advantage compared to AWS. This claim by GigaOm and Microsoft is not reproducible, and was created by utilizing a modified TPC-E benchmark that uses Microsoft’s proprietary benchmark tool. These benchmark results are also inaccurate comparisons due to significant mismatches in the configurations and the price calculations for Azure and AWS. These inaccuracies include:

  1. The benchmark uses four striped disks for Azure but does not apply any striping to AWS. Striping is a technique that is commonly used to enhance performance.
  2. The benchmark uses an older-generation AWS instance (R4), ignoring the hardware innovations that the latest comparable-generation instance, R5d, delivers. R5d also has up to 3.6 TB of local storage for additional performance benefits.
  3. The benchmark leaves out significant cost components for Microsoft, resulting in an inaccurate price/performance result. The Microsoft cost does not account for the original cost of the Windows Server licenses or the Software Assurance required to use Azure Hybrid Benefit. It also does not take into account the AWS programs that provide similar benefits, such as the Migration Acceleration Program (MAP).

AWS ran a performance analysis with comparable instance types and storage configurations using the publicly available TPC-C HammerDB benchmark tool. This benchmark shows that SQL Server on AWS delivers 1.75x better performance and up to 40% better price/performance than Azure. You can use the same HammerDB benchmark tool and run your own TPC-C tests to replicate the latest results using this whitepaper. Similarly, you can use the same tool to replicate prior TPC-C-like benchmarks from DB Best, which showed that AWS delivered 2-3x better performance over Azure.

Performance is just one of the reasons customers such as NextGen Healthcare and Pearson choose AWS to run their Windows workloads, and according to a report by IDC, we host nearly two times as many Windows Server instances in the cloud as Microsoft. More and more enterprises are entrusting their Windows workloads to AWS because of its greater reliability, security, and performance. Publishing misleading benchmarks is just one more old-guard tactic by Microsoft, in addition to license complexity and licensing restrictions, to try to prevent customers from using the best cloud for their Windows workloads.

It is important that you have the facts when making a decision about your cloud provider. AWS encourages you to look for replicable research and to benchmark your own workloads to verify which provider offers the best price, performance, and reliability. Don’t be misled by vendor claims that cannot be validated.

Appendix

HammerDB TPC-C tests run internally on the same server and storage configurations as the GigaOm report deliver the performance improvements over Azure described above when more optimal options are chosen.

About the Author

Fred Wurden is the GM of Enterprise Engineering (Windows, VMware, Red Hat, SAP, benchmarking) working to make AWS the most customer-centric cloud platform on Earth. Prior to AWS, Fred worked at Microsoft for 17 years and held positions including EU/DOJ engineering compliance for Windows and Azure, interoperability principles and partner engagements, and open source engineering. He lives with his wife and a few four-legged friends since his kids are all in college now.

We love SQL Server running on AWS almost as much as our customers

Post Syndicated from Sandy Carter original https://aws.amazon.com/blogs/compute/we-love-sql-server-running-on-aws-almost-as-much-as-our-customers/

We love SQL Server running on AWS almost as much as our customers do. Microsoft SQL Server 2019 became generally available on November 8, 2019, and is now available on AWS. More customers run SQL Server on AWS than on any other cloud, and they trust AWS for a number of reasons.

The first is performance. Recent performance benchmarks show that AWS delivers great price/performance for running SQL Server. ZK Research points out that, using HammerDB (a TPC-C-like benchmark tool), SQL Server on AWS consistently shows price/performance over two times better than Azure. These results come from analysis done by ZK Research based on independent testing results published by DB Best. Furthermore, we offer fast, high-throughput storage options with Amazon EBS and local NVMe instance storage. We know that getting better application performance is critical for your customers’ satisfaction. In fact, excellent application performance leads to 39% higher* customer satisfaction, while poor performance may lead to damaged reputations or, even worse, customer attrition. To make sure you have the best possible experience for your customers, we have focused on pushing the boundaries around performance.

For example, the value of running SQL Server on AWS is shown in Pearson’s migration story. Pearson is a British-owned education publishing and assessment service for schools and corporations, as well as for students directly. Pearson owns educational media brands including Addison–Wesley, Prentice Hall, eCollege, and others. Schoolnet, one of their offerings, tests tens of millions of students and is used by tens of thousands of educators. Pearson migrated Schoolnet, an on-premises application that used SQL Server, to the AWS Cloud. The goal of the migration was to ensure that the high volume of tests run daily would be handled efficiently and effectively. As many customers do, Pearson had built for worst-case demand and over-provisioned a massive infrastructure for potential (real!) peaks. When they moved to AWS, not only did they see cost efficiencies, but SQL Server was far easier to manage and much faster. In fact, at our re:Invent conference, Ian Wright told the audience that the level 2 support desk received a great deal of positive feedback about the migration, in particular that Schoolnet was running faster than ever before.

Second, customers need high availability for mission-critical applications written using SQL Server. We have the best global infrastructure for running workloads that require high availability. The AWS Global Infrastructure underlies SQL Server on AWS and spans 69 Availability Zones (AZs) within 22 geographic regions around the world. These AZs are designed for physical redundancy and provide resilience, which enables uninterrupted performance. In 2018, the next-largest cloud provider had almost seven times more downtime hours than AWS.

Third, while CIOs tell us that cost is not the key factor in their decision to move to the cloud (agility and innovation are usually at the center of their motivation), they are often impressed with the cost savings they see when bringing their SQL server workloads to AWS. We typically see at least 20% savings with just a lift-and-shift. Over the first few months, you can continue to optimize your EC2 instances for an additional 10–20% savings. By adopting higher level services, you can further optimize—many customers saw 60% or more savings.

For example, Axinom ran its Windows-based applications in an on-premises environment that made it difficult to scale to meet increasing user traffic. The company also wanted to boost scalability and cut costs. Axinom moved its applications, including Axinom CMS and Axinom DRM, to the AWS Cloud. The company runs its Microsoft SQL Server–based platform on AWS and uses Spot Instances to optimize costs. As Johannes Jauch, the Chief Technology Officer of Axinom, said, “We have cut costs for supporting our digital media supply chain services by 70 percent using AWS products such as Amazon Spot Instances. As a result, we can provide more competitive pricing for our global customers.” And Axinom is not the only customer. Customers like Salesforce, Adobe, and Decisiv are benefitting from increased productivity and agility running SQL Server on AWS. You can read more about how customers are unlocking maximum business value by migrating to AWS.

And finally, not only does AWS offer more security, compliance, and governance services and key features than the next-largest cloud provider, we also have the most migration experience.

All these benefits mean that the new features of SQL Server 2019, such as big data clusters, Always Encrypted with secure enclaves, and improvements to SQL Server on Linux, run better on AWS. We are also happy to announce that you can now launch EC2 instances that run Windows Server 2019/2016 and four editions of SQL Server 2019 (Web, Express, Standard, and Enterprise). The Amazon Machine Images (AMIs) are available today in all AWS Regions and run on a wide variety of EC2 instance types. You can launch these instances from the AWS Management Console, the AWS CLI, or through AWS Marketplace. To get started with SQL Server 2019 on AWS, you can either purchase a License Included EC2 instance or, if you have Software Assurance, choose one of two Bring Your Own License (BYOL) options: BYOL SQL Server on an AWS instance with license-included Windows, or BYOL SQL Server on a dedicated host with BYOL Windows (provided the Windows license was purchased before October 1, 2019).
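As an illustration, the following AWS Tools for PowerShell sketch looks up a license-included SQL Server 2019 AMI through the public Systems Manager parameters and launches an instance from it. The exact parameter name varies by Windows version, SQL Server edition, and language, and the instance type shown is only an example.

# Look up the latest Windows Server 2019 AMI with SQL Server 2019 Standard (license included)
$param = Get-SSMParameter -Name '/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-SQL_2019_Standard'

# Launch an instance from that AMI (r5d.xlarge is just an example instance type)
New-EC2Instance -ImageId $param.Value -InstanceType r5d.xlarge -MinCount 1 -MaxCount 1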

Join the customers on SQL Server on AWS today! To learn more about SQL Server 2019 and to explore your licensing options, visit Microsoft SQL Server on AWS. If you need advice and guidance as you plan your migration effort, check out the AWS Partners who have qualified for the AWS Microsoft Workloads Competency and focus on database solutions. Please join me and the AWS team at AWS re:Invent (December 2–6 in Las Vegas).

*Source: Netmagic, https://www.netmagicsolutions.com/data/images/WP_How-End-User-Experience-Affects-Your-Bottom-Line16-08-231471935227.pdf

Migrating Azure VM to AWS using AWS SMS Connector for Azure

Post Syndicated from Emma White original https://aws.amazon.com/blogs/compute/migrating-azure-vm-to-aws-using-aws-sms-connector-for-azure/

AWS Server Migration Service (SMS) is an agentless service that facilitates and expedites the migration of your existing workloads to AWS. The service enables you to automate, schedule, and monitor incremental replications of active server volumes, which facilitates large-scale server migration coordination. Until recently, you could only migrate virtual machines (VMs) running in VMware vSphere and Microsoft Hyper-V environments. Now, you can also use the simplicity and ease of AWS SMS to migrate virtual machines running on Microsoft Azure. You can discover Azure VMs, group them into applications, and migrate a group of applications as a single unit without having to go through the hassle of coordinating the replication of the individual servers or decoupling application dependencies. SMS significantly reduces application migration time and decreases the risk of errors in the migration process.


This post takes you step-by-step through how to provision the SMS virtual machine on Microsoft Azure, discover the virtual machines in a Microsoft Azure subscription, create a replication job, and finally launch the instance on AWS.


1 – Provisioning the SMS virtual machine

To provision your SMS virtual machine on Microsoft Azure, complete the following steps.

  1. Download the installation script and hash files listed under Step 1 of Installing the Server Migration Connector on Azure:
     File                 URL
     Installation script  https://s3.amazonaws.com/sms-connector/aws-sms-azure-setup.ps1
     MD5 hash             https://s3.amazonaws.com/sms-connector/aws-sms-azure-setup.ps1.md5
     SHA256 hash          https://s3.amazonaws.com/sms-connector/aws-sms-azure-setup.ps1.sha256


  2. To validate the integrity of the files, you can compare their checksums. You can use PowerShell 5.1 or newer.


2.1 To validate the MD5 hash of the aws-sms-azure-setup.ps1 script, run the following command and wait for an output similar to the following result:

Command to validate the MD5 hash of the aws-sms-azure-setup.ps1 script
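For reference, here is a minimal PowerShell sketch of this check, assuming the files were downloaded to the current directory:

# Compute the MD5 hash of the downloaded installation script
Get-FileHash -Path .\aws-sms-azure-setup.ps1 -Algorithm MD5

# Display the expected value published by AWS for comparison
Get-Content .\aws-sms-azure-setup.ps1.md5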

2.2 To validate the SHA256 hash of the aws-sms-azure-setup.ps1 file, run the following command and wait for an output similar to the following result:

Command to validate the SHA256 hash of the aws-sms-azure-setup.ps1 file
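Again as a sketch, the SHA256 check looks like this:

# Compute the SHA256 hash of the downloaded installation script
Get-FileHash -Path .\aws-sms-azure-setup.ps1 -Algorithm SHA256

# Display the expected value published by AWS for comparison
Get-Content .\aws-sms-azure-setup.ps1.sha256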

2.3 Compare the returned values by opening the aws-sms-azure-setup.ps1.md5 and aws-sms-azure-setup.ps1.sha256 files in your preferred text editor.

2.4 To validate that the PowerShell script has a valid Amazon Web Services signature, run the following command and wait for an output similar to the following result:

Command to validate that the PowerShell script has a valid Amazon Web Services signature
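A sketch of the signature check; the Status field should report Valid, and the signer certificate should belong to Amazon Web Services:

# Check the Authenticode signature of the installation script
Get-AuthenticodeSignature -FilePath .\aws-sms-azure-setup.ps1 | Format-List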


  3. Before running the script that provisions the SMS virtual machine, you must have an Azure Virtual Network and an Azure Storage Account, which temporarily stores metadata for the tasks that SMS performs against the Microsoft Azure subscription. A good recommendation is to use the same Azure Virtual Network as the Azure virtual machines being migrated, since the SMS virtual machine makes REST API calls to AWS endpoints as well as to the Azure cloud services. It is not necessary for the SMS virtual machine to have a public IP or inbound internet rules.


  4. Run the installation script: .\aws-sms-azure-setup.ps1

Screenshot of running the installation script

  5. Enter the names of the existing Storage Account and Azure Virtual Network in the subscription:

Screenshot of where to enter Storage Account Name and Azure Virtual Network

  6. The Microsoft Azure modules are imported into the local PowerShell session, and you receive a prompt for credentials to access the subscription.

Azure login credentials

  7. A summary of the created resources appears, similar to the following:

Screenshot of created features

  8. Wait for the process to complete. It may take a few minutes:

screenshot of processing jobs

  9. After provisioning completes, an output containing the Object ID of the System Assigned Identity and the private IP address is displayed. Save this information, as it is used to register the connector with the SMS service in step 23.

Screenshot of the information to save

  10. To check the provisioned resources, log in to the Microsoft Azure Portal and select the Resource Group option. The provided AWS script also created a role in Microsoft Azure IAM that allows the virtual machine to use the necessary services through REST API calls over HTTPS and to be authenticated via the built-in Azure Instance Metadata Service (IMDS).

Screenshot of provisioned resources log in Microsoft Azure Portal

  11. As a requirement, you need to create an IAM user with the necessary permissions for the SMS service to perform the migration. To do this, log in to your AWS account at https://aws.amazon.com/console and, under Services, select IAM. Then select Users and click Add user.

Screenshot of AWS console. add user


  12. On the Add user page, enter a user name and check the Programmatic access option. Click Next: Permissions.

Screenshot of adding a username

  13. Attach the existing policy named ServerMigrationConnector. This policy allows the AWS connector to connect and execute API requests against AWS. Click Next: Tags.

Adding policy ServerMigrationConnector

  14. Optionally, add tags to the user. Click Next: Review.

Screenshot of option to add tags to the user

  15. Click Create User and save the Access Key and Secret Access Key. This information is used during the AWS SMS connector setup.

Create User and save the access key and secret access key
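If you prefer to script these IAM steps instead of using the console, here is a minimal AWS Tools for PowerShell sketch. The user name is an example, and the policy ARN assumes ServerMigrationConnector is available as an AWS managed policy.

# Create the IAM user for the connector and attach the ServerMigrationConnector managed policy
New-IAMUser -UserName sms-connector-user
Register-IAMUserPolicy -UserName sms-connector-user -PolicyArn arn:aws:iam::aws:policy/ServerMigrationConnector

# Generate the programmatic access key and secret used during the connector setup
New-IAMAccessKey -UserName sms-connector-user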


  16. From a computer that has access to the Azure Virtual Network, access the SMS virtual machine configuration using a browser and the previously recorded private IP from the output of the script. In this example, the URL is https://10.0.0.4.

Screenshot of accessing the SMS Virtual Machine configuration

  17. On the main page of the SMS virtual machine, click Get Started Now.

Screenshot of the SMS virtual machine start page

  18. Read and accept the terms of the contract, then click Next.

Screenshot of accepting terms of contract

  19. Create a password that will be used later to log in to the connector management console, and click Next.

Screenshot of creating a password

  20. Review the Network Info and click Next.

Screenshot of reviewing the network info

  21. Choose whether you would like to opt in to sending anonymous log data to AWS, then click Next.

Screenshot of option to add log data to AWS

  22. Enter the access key and secret access key of the IAM user that has only the ServerMigrationConnector policy attached. Also, select the AWS Region in which the SMS endpoint will be used and click Next. This access key was created in steps 11 through 15.

Selet AWS Region, and Insert Access Key and Secret Key

  23. Enter the Object ID of the System Assigned Identity copied in step 9 and click Next.

Enter Object Id of System Assigned Identify

  24. Congratulations, you have successfully configured the Azure connector. Click Go to connector dashboard.

Screenshot of the successful configuration of the Azure connector

  25. Verify that the connector status is HEALTHY by clicking Connectors on the menu.

Screenshot of verifying that the connector status is healthy


2 – Replicating Azure Virtual Machines to Amazon Web Services

  1. Access the SMS console and go to the Servers option. Click Import Server Catalog, or Re-Import Server Catalog if an import has been run previously.

Screenshot of SMS console and servers option

  2. Select the Azure virtual machines to be migrated and click Create Replication Job.

Screenshot of Azure virtual machines migration

  3. Select the type of licensing that best suits your environment:

  • Auto (current licensing auto-detection)
  • AWS (License Included)
  • BYOL (Bring Your Own License)

  See the licensing options at https://aws.amazon.com/windows/resources/licensing/

Screenshot of best type of licensing for your environment

  4. Select the appropriate replication frequency, when the replication should start, and the IAM service role. You can leave the role blank, and the SMS service uses the built-in service role “sms”.

Screenshot of replication jobs and IAM service role
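For automation, the same replication job can be created programmatically. The sketch below assumes that the AWS.Tools.ServerMigrationService module exposes the CreateReplicationJob API as New-SMSReplicationJob with the parameters shown; the server ID, schedule, and frequency values are examples only.

# Create a replication job for a discovered server (all values are examples)
New-SMSReplicationJob -ServerId s-12345678 `
    -SeedReplicationTime (Get-Date).AddMinutes(30) `
    -Frequency 12 `
    -LicenseType AWS `
    -RoleName sms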

  5. A summary of the settings is displayed. Click Create.
    Screenshot of the summary of settings displayed
  6. In the SMS console, go to the Replication Jobs option and follow the replication job status:

Overview of replication jobs

  7. After completion, access the EC2 console and go to AMIs; the AMIs generated by SMS appear in this list. In the example below, several AMIs were generated because the replication frequency is 1 hour.

List of AMIs generated by SMS

  8. Now navigate to the SMS console, click Launch Instance, and follow the on-screen process to create a new Amazon EC2 instance.

SMS console and Launch Instance screenshot


3 – Conclusion

This solution provides a simple, agentless, non-intrusive way to migrate your workloads to AWS with the AWS Server Migration Service.


For more about Windows workloads on AWS, go to http://aws.amazon.com/windows.


About the Author

Photo of the Author



Marcio Morales is a Senior Solution Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on running their Microsoft workloads on AWS.