Tag Archives: Migration & Transfer Services

Welcome to AWS Storage Day 2023

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/welcome-to-aws-storage-day-2023/

Welcome to the fifth annual AWS Storage Day! This virtual event is happening today starting at 9:00 AM Pacific Time (12:00 PM Eastern Time) and is available for you to watch on the AWS On Air Twitch channel. The first AWS Storage Day was hosted in 2019, and this event has grown into an innovation day that we look forward to delivering to you every year. In last year’s Storage Day post, I wrote about the constant innovations in AWS Storage aimed at helping you put your data to work while keeping it secure and protected. This year, Storage Day is focused on storage for AI/ML, data protection and resiliency, and the benefits of moving to the cloud.

AWS Storage Day Key Themes
When it comes to storage for AI/ML, data volumes are increasing at an unprecedented rate, exploding from terabytes to petabytes and even to exabytes. With a modern data architecture on AWS, you can rapidly build scalable data lakes, use a broad and deep collection of purpose-built data services, scale your systems at a low cost without compromising performance, share data across organizational boundaries, and manage compliance, security, and governance, allowing you to make decisions with speed and agility at scale.
To train machine learning models and build Generative AI applications, you must have the right data strategy in place. So, I’m happy to see that, among the list of sessions to look forward to at the live event, the Optimize generative AI and ML with AWS Infrastructure session will discuss how you can transform your data into meaningful insights.

Whether you’re just getting started with the cloud, planning to migrate applications to AWS, or already building applications on AWS, we have resources to help you protect your data and meet your business continuity objectives. Our data protection and resiliency features and solutions can help you meet your business continuity goals and deliver disaster recovery during data loss events, across recovery point and time objectives (RPO and RTO). With the unprecedented data growth happening in the world today, determining where your data is stored, how it’s secured, and who has access to it is a higher priority than ever. Be sure to join the Protect data in AWS amid a rapidly evolving cyber landscape session to learn more.

When moving data to the cloud, you need to understand where you're moving it for different use cases, the types of data you're moving, and the network resources available, among other considerations. There are many reasons to move to the cloud. Recently, Enterprise Strategy Group (ESG) validated that organizations reduced compute, networking, and storage costs by up to 66 percent by migrating on-premises workloads to AWS Cloud infrastructure. ESG confirmed that migrating on-premises workloads to AWS provides organizations with reduced costs, increased performance, improved operational efficiency, faster time to value, and improved business agility.
We have a number of sessions that discuss how to move to the cloud, based on your use case. I’m most looking forward to the Hybrid cloud storage and edge compute: AWS, where you need it session, which will discuss considerations for workloads that can’t fully move to the cloud.

Tune in to learn from experts about new announcements, leadership insights, and educational content related to the broad portfolio of AWS Storage services and features that address all these themes and more. Today, we have announcements related to Amazon Simple Storage Service (Amazon S3), Amazon FSx for Windows File Server, Amazon Elastic File System (Amazon EFS), Amazon FSx for OpenZFS, and more.

Let’s get into it.

15 Years of Amazon EBS
Not long ago, I was reading Jeff Barr’s post titled 15 Years of AWS Blogging! In this post, Jeff mentioned a few posts he wrote for the earliest AWS services and features. Amazon Elastic Block Store (Amazon EBS) is on this list as a service that simplifies the use of Amazon EC2.

Well, it’s been 15 years since the launch of Amazon EBS was announced, and today we celebrate 15 years of this service. If you were one of the original users who put Amazon EBS to good use and provided us with the very helpful feedback that helped us invent and simplify, iterate and improve, I’m sure you can’t believe how time flies. Today, Amazon EBS handles more than 100 trillion I/O operations daily, and over 390 million EBS volumes are created every day.

If you’re new to Amazon EBS, join us for a fireside chat with Matt Garman, Senior Vice President, Sales, Marketing, and Global Services at AWS, and learn the strategy and customer challenges behind the launch of the service in 2008. You’ll also hear from long-term EBS customer, Stripe, about its growth with EBS since Stripe was launched 12 years ago.

Amazon EBS has continuously improved its scalability and performance to support more customer workloads as the direct storage attachment for Amazon EC2 instances. With the launch of Amazon EC2 M7i instances, powered by custom 4th Generation Intel Xeon Scalable processors, on August 2, you can attach up to 128 Amazon EBS volumes, an increase from 28 on a previous generation M6i instance. The higher number of volume attachments means you can increase storage density per instance and improve resource utilization, reducing total compute cost.

You can host up to 127 containers per instance for larger database applications and scale them more cost-effectively before needing to provision more instances, paying only for the resources you need. With a higher number of volume attachments, you can fully utilize the memory and vCPU available on these powerful M7i instances as your database storage footprint grows. EBS is also increasing the number of multi-volume snapshots you can create, for up to 128 EBS volumes attached to an instance, enabling you to create crash-consistent backups of all volumes attached to an instance.

Join the 15 years of innovations with Amazon EBS session for a discussion about how the original vision for Amazon EBS has evolved to meet your growing demands for cloud infrastructure.

Mountpoint for Amazon S3
Now generally available, Mountpoint for Amazon S3 is a new open source file client that delivers high throughput access, lowering compute costs for data lakes on Amazon S3. Mountpoint for Amazon S3 is a file client that translates local file system API calls to S3 object API calls. Using Mountpoint for Amazon S3, you can mount an Amazon S3 bucket as a local file system on your compute instance, to access your objects through a file interface with the elastic storage and throughput of Amazon S3. Mountpoint for Amazon S3 supports sequential and random read operations on existing files, and sequential write operations for creating new files.
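
For example, here's a minimal sketch of using Mountpoint from Python, assuming the mount-s3 client is already installed and on PATH; the bucket name, mount point, and object keys are placeholders:

```python
import subprocess
from pathlib import Path

bucket = "my-data-lake-bucket"          # hypothetical bucket name
mount_point = Path("/mnt/s3-data")      # hypothetical mount point
mount_point.mkdir(parents=True, exist_ok=True)

# Mount the bucket as a local file system (requires the mount-s3 binary).
subprocess.run(["mount-s3", bucket, str(mount_point)], check=True)

# Existing objects support sequential and random reads through normal file APIs.
with (mount_point / "datasets/part-00000.csv").open("rb") as f:
    print(f.readline())

# New files are written sequentially and stored as new objects in the bucket.
with (mount_point / "results/summary.txt").open("wb") as f:
    f.write(b"processed\n")
```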

The Deep dive and demo of Mountpoint for Amazon S3 session demonstrates how to use the file client to access objects in Amazon S3 using file APIs, making it easier to store data at scale and maximize the value of your data with analytics and machine learning workloads. Read this blog post to learn more about Mountpoint for Amazon S3 and how to get started, including a demo.

Put Cold Storage to Work Faster with Amazon S3 Glacier Flexible Retrieval
Amazon S3 Glacier Flexible Retrieval improves data restore time by up to 85 percent, at no additional cost. Faster data restores automatically apply to the Standard retrieval tier when using Amazon S3 Batch Operations. These restores begin to return objects within minutes, so you can process restored data faster. Processing restored data in parallel with ongoing restores helps you accelerate data workflows and quickly respond to business needs. Now, whether you’re transcoding media, restoring operational backups, training machine learning models, or analyzing historical data, you can speed up your data restores from archive.

Coupled with the S3 Glacier improvements to restore throughput by up to 10 times for millions of objects announced in 2022, S3 Glacier data restores of all sizes now benefit from both faster starts and shorter completion times.
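
As an illustration, here's a minimal boto3 sketch of initiating a Standard-tier restore for a single archived object; at scale you would submit the same operation through S3 Batch Operations. The bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary restored copy of an object stored in S3 Glacier Flexible Retrieval.
s3.restore_object(
    Bucket="my-archive-bucket",                 # hypothetical bucket
    Key="backups/2019/archive-0001.tar",        # hypothetical key
    RestoreRequest={
        "Days": 7,                              # keep the restored copy for 7 days
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# Poll the restore status; the Restore header reports progress and expiry.
head = s3.head_object(Bucket="my-archive-bucket", Key="backups/2019/archive-0001.tar")
print(head.get("Restore"))
```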

Join the Maximize the value of cold data with Amazon S3 Glacier session to learn how Amazon S3 Glacier is helping organizations of all sizes and from all industries transform their data archiving to unlock business value, increase agility, and save on storage costs. Read this blog post to learn more about the Amazon S3 Glacier Flexible Retrieval performance improvements and follow step-by-step guidance on how to get started with faster standard retrievals from S3 Glacier Flexible Retrieval.

Supporting a Broad Spectrum of File Workloads
To serve a broad spectrum of use cases that rely on file systems, we offer a portfolio of file system services, each targeting a different set of needs. Amazon EFS is a serverless file system built to deliver an elastic experience for sharing data across compute resources. Amazon FSx makes it easier and more cost-effective for you to launch, run, and scale feature-rich, high-performance file systems in the cloud, enabling you to move to the cloud with no changes to your code, processes, or how you manage your data.

Power ML research and big data analytics with Amazon EFS
Amazon EFS offers serverless and fully scalable file storage, designed for high scalability in both storage capacity and throughput performance. Just last week, we announced enhanced support for faster read and write IOPS, making it easier to power more demanding workloads. We’ve improved the performance capabilities of Amazon EFS by adding support for up to 55,000 read IOPS and up to 25,000 write IOPS per file system. These performance enhancements help you to run more demanding workflows, such as machine learning (ML) research with KubeFlow, financial simulations with IBM Symphony, and big data processing with Domino Data Lab, Hadoop, and Spark.

Join the Build and run analytics and SaaS applications at scale session to hear how recent Amazon EFS performance improvements can help power more workloads.

Multi-AZ file systems on Amazon FSx for OpenZFS
You can now use a multi-AZ deployment option when creating file systems on Amazon FSx for OpenZFS, making it easier to deploy file storage that spans multiple AWS Availability Zones to provide multi-AZ resilience for business-critical workloads. With this launch, you can take advantage of the power, agility, and simplicity of Amazon FSx for OpenZFS for a broader set of workloads, including business-critical workloads like database, line-of-business, and web-serving applications that require highly available shared storage that spans multiple AZs.

The new multi-AZ file systems are designed to deliver high levels of performance for a broad variety of workloads, including performance-intensive workloads such as financial services analytics, media and entertainment workflows, semiconductor chip design, and game development and streaming. They provide up to 21 GB per second of throughput and over 1 million IOPS for frequently accessed cached data, and up to 10 GB per second and 350,000 IOPS for data accessed from persistent disk storage.
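
To illustrate, here's a hedged boto3 sketch of creating a multi-AZ FSx for OpenZFS file system; the subnet, security group, and sizing values are placeholders, and you should confirm the valid throughput levels for your Region:

```python
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageCapacity=2048,                               # GiB (placeholder)
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],   # two Availability Zones
    SecurityGroupIds=["sg-0123456789abcdef0"],
    OpenZFSConfiguration={
        "DeploymentType": "MULTI_AZ_1",                 # the new multi-AZ option
        "PreferredSubnetId": "subnet-aaaa1111",         # where the active file server runs
        "ThroughputCapacity": 1280,                     # MB/s (placeholder)
        "AutomaticBackupRetentionDays": 7,
    },
)
print(response["FileSystem"]["FileSystemId"])
```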

Join the Migrate NAS to AWS to reduce TCO and gain agility session to learn more about multi-AZ file systems on Amazon FSx for OpenZFS.

New, Higher Throughput Capacity Levels on Amazon FSx for Windows File Server
Performance improvements for Amazon FSx for Windows File Server help you accelerate time-to-results for performance-intensive workloads such as SQL Server databases, media processing, cloud video editing, and virtual desktop infrastructure (VDI).

We’re adding four new, higher throughput capacity levels that increase the maximum throughput available from 2 GB per second to 12 GB per second. These throughput improvements come with correspondingly higher levels of disk IOPS, designed to deliver up to 350,000 IOPS.

In addition, by using FSx for Windows File Server, you can provision IOPS higher than the default 3 IOPS per GiB for your SSD file system. This lets you scale SSD IOPS independently from storage capacity and optimize costs for performance-sensitive workloads.
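
The sketch below shows what provisioning one of the higher throughput levels and user-provisioned SSD IOPS could look like with boto3. The DiskIopsConfiguration shape is an assumption based on how FSx exposes provisioned IOPS on other file system types, and the networking and Active Directory values are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=4096,                       # GiB of SSD storage (placeholder)
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ActiveDirectoryId": "d-1234567890",    # placeholder directory
        "ThroughputCapacity": 2048,             # MB/s; the new levels go up to 12 GB/s
        "DiskIopsConfiguration": {              # assumed shape for user-provisioned IOPS
            "Mode": "USER_PROVISIONED",
            "Iops": 30000,                      # above the default 3 IOPS per GiB (4096 * 3 = 12288)
        },
    },
)
print(response["FileSystem"]["FileSystemId"])
```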

Join the Migrate NAS to AWS to reduce TCO and gain agility session to learn more about the performance improvements for Amazon FSx for Windows File Server.

Logically Air-Gapped Vault for AWS Backup
AWS Backup is a fully managed, policy-based data protection solution that enables customers to centralize and automate data protection across 19 AWS services (spanning compute, storage, and databases) and third-party applications such as VMware Cloud on AWS and on premises, as well as SAP HANA on Amazon EC2.

Today, we're announcing the preview of logically air-gapped vault, a new type of AWS Backup vault that acts as an additional layer of protection against malware events. With logically air-gapped vault, customers can recover their application data through a different trusted account.

Join the Deep dive on data recovery for ransomware events session to learn more about logically air-gapped vault for AWS Backup.

Copy Data to and from Other Clouds with AWS DataSync
AWS DataSync is an online data movement and discovery service that simplifies data migration and helps you quickly, easily, and securely transfer your file or object data to, from, and between AWS storage services. In addition to supporting data migration to and from AWS storage services, DataSync supports copying to and from other clouds such as Google Cloud Storage, Azure Files, and Azure Blob Storage. Using DataSync, you can move your object data at scale between Amazon S3 compatible storage on other clouds and AWS storage services such as Amazon S3. We're now expanding DataSync support for copying data to and from other clouds to include DigitalOcean Spaces, Wasabi Cloud Storage, Backblaze B2 Cloud Storage, Cloudflare R2 Storage, and Oracle Cloud Storage.
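
As a sketch, the boto3 calls below create an object storage location for an S3-compatible bucket hosted on another cloud, an Amazon S3 destination, and a task to copy between them; the hostnames, ARNs, and credentials are placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# Source: an S3-compatible bucket on another cloud (placeholder endpoint and credentials).
source = datasync.create_location_object_storage(
    ServerHostname="storage.example-cloud.com",
    BucketName="legacy-backups",
    AccessKey="EXAMPLE_ACCESS_KEY",
    SecretKey="EXAMPLE_SECRET_KEY",
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"],
)

# Destination: an Amazon S3 bucket, accessed through an IAM role DataSync can assume.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-landing-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="copy-from-other-cloud",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```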

Join the Identify and accelerate data migrations at scale session to learn more about this expanded support for DataSync.

Join Us Online
Join us today for the AWS Storage Day virtual event on the AWS On Air channel on Twitch. The event will be live starting at 9:00 AM Pacific Time (12:00 PM Eastern Time) on August 9. All sessions will be available on demand approximately two days after Storage Day.

We look forward to seeing you on Twitch!

– Veliswa 

New – AWS DMS Serverless: Automatically Provisions and Scales Capacity for Migration and Data Replication

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-aws-dms-serverless-automatically-provisions-and-scales-capacity-for-migration-and-data-replication/

With the vast amount of data being created today, organizations are moving to the cloud to take advantage of the security, reliability, and performance of fully managed database services. To facilitate database and analytics migrations, you can use AWS Database Migration Service (AWS DMS). First launched in 2016, AWS DMS offers a simple migration process that automates database migration projects, saving time, resources, and money.

Although you can start AWS DMS migration with a few clicks through the console, you still need to do research and planning to determine the required capacity before migrating. It can be challenging to know how to properly scale capacity ahead of time, especially when simultaneously migrating many workloads or continuously replicating data. On top of that, you also need to continually monitor usage and manually scale capacity to ensure optimal performance.

Introducing AWS DMS Serverless
Today, I’m excited to tell you about AWS DMS Serverless, a new serverless option in AWS DMS that automatically sets up, scales, and manages migration resources to make your database migrations easier and more cost-effective.

Here's a quick overview of how AWS DMS Serverless works.

AWS DMS Serverless removes the guesswork of figuring out required compute resources and handling the operational burden needed to ensure a high-performance, uninterrupted migration. It performs automatic capacity provisioning, scaling, and capacity optimization of migrations, allowing you to quickly begin migrations with minimal oversight.

At launch, AWS DMS Serverless supports Microsoft SQL Server, PostgreSQL, MySQL, and Oracle as data sources. As for data targets, AWS DMS Serverless supports a wide range of databases and analytics services, including Amazon Aurora, Amazon Relational Database Service (Amazon RDS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon DynamoDB, and more. AWS DMS Serverless continues to add support for new data sources and targets. Visit Supported Engine Versions to stay updated.

With a variety of sources and targets supported by AWS DMS Serverless, many scenarios become possible. You can use AWS DMS Serverless to migrate databases and help to build modern data strategies by synchronizing ongoing data replications into data lakes (e.g., Amazon S3) or data warehouses (e.g., Amazon Redshift) from multiple, perhaps disparate data sources.

How AWS DMS Serverless Works
Let me show you how you can get started with AWS DMS Serverless. In this post, I migrate my data from a source database running on PostgreSQL to a target MySQL database running on Amazon RDS. The following screenshot shows my source database with dummy data:

As for the target, I’ve set up a MySQL database running in Amazon RDS. The following screenshot shows my target database:

Getting started with AWS DMS Serverless is similar to how AWS DMS works today. AWS DMS Serverless requires me to complete setup tasks, from creating a virtual private cloud (VPC) to defining source and target endpoints. If this is your first time working with AWS DMS, you can learn more by visiting Prerequisites for AWS Database Migration Service.

To connect to a data store, AWS DMS needs endpoints for both source and target data stores. An endpoint provides all necessary information including connection, data store type, and location to my data stores. The following image shows an endpoint I’ve created for my target database:
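
For reference, here's a minimal boto3 sketch of creating the two endpoints; all connection details are placeholders:

```python
import boto3

dms = boto3.client("dms")

# Source: the PostgreSQL database (placeholder connection details).
source = dms.create_endpoint(
    EndpointIdentifier="serverless-demo-source",
    EndpointType="source",
    EngineName="postgres",
    ServerName="source-db.example.internal",
    Port=5432,
    DatabaseName="appdb",
    Username="dms_user",
    Password="REPLACE_ME",
)

# Target: the MySQL database on Amazon RDS (placeholder connection details).
target = dms.create_endpoint(
    EndpointIdentifier="serverless-demo-target",
    EndpointType="target",
    EngineName="mysql",
    ServerName="target-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
    Port=3306,
    Username="admin",
    Password="REPLACE_ME",
)
print(source["Endpoint"]["EndpointArn"], target["Endpoint"]["EndpointArn"])
```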

When I have finished setting up the endpoints, I can begin to create a replication by selecting the Create replication button on the Serverless replications page. Replication is a new concept introduced in AWS DMS Serverless to abstract instances and tasks that we normally have in standard AWS DMS. Additionally, the capacity resources are managed independently for each replication.

On the Create replication page, I need to define some configurations. This starts with defining Name, then specifying Source database endpoint and Target database endpoint. If you don’t find your endpoints, make sure you’re selecting database engines supported by AWS DMS Serverless.

After that, I need to specify the Replication type. There are three types of replication available in AWS DMS Serverless:

  • Full load — If I need to migrate all existing data in the source database.
  • Change data capture (CDC) — If I need to replicate only data changes from the source to the target database.
  • Full load and change data capture (CDC) — If I need to migrate existing data and replicate ongoing data changes from the source to the target database.

In this example, I chose Full load and change data capture (CDC) because I need to migrate existing data and continuously update the target database for ongoing changes on the source database.

In the Settings section, I can also enable logging with Amazon CloudWatch, which makes it easier for me to monitor replication progress over time.

As with standard AWS DMS, in AWS DMS Serverless I can also configure Selection rules in Table mappings to define filters for the tables and columns that I need to replicate from the source data store.

I can also use Transformation rules if I need to rename a schema or table or add a prefix or suffix to a schema or table.

In the Capacity section, I can set the range of capacity required to perform the replication by defining the minimum and maximum DCU (DMS capacity units). The minimum DCU setting is optional because AWS DMS Serverless determines the minimum DCU based on an assessment of the replication workload. During the replication process, AWS DMS uses this range to scale up and down based on CPU utilization, connections, and available memory.

Setting the maximum capacity allows you to manage costs by making sure that AWS DMS Serverless never consumes more resources than you have budgeted for. When you define the maximum DCU, make sure that you choose a reasonable capacity so that AWS DMS Serverless can handle large bursts of data transaction volumes. If traffic volume decreases, AWS DMS Serverless scales capacity down again, and you only pay for what you need. For cases in which you want to change the minimum and maximum DCU settings, you have to stop the replication process first, make the changes, and run the replication again.

When I’m finished with configuring replication, I select Create replication.
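
The console steps above map to the DMS Serverless API. Here's a hedged boto3 sketch of creating and starting an equivalent replication; the endpoint ARNs, subnet group, and DCU values are placeholders:

```python
import json

import boto3

dms = boto3.client("dms")

# Select everything in the public schema of the source database.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-public-schema",
        "object-locator": {"schema-name": "public", "table-name": "%"},
        "rule-action": "include",
    }]
}

config = dms.create_replication_config(
    ReplicationConfigIdentifier="pg-to-mysql-serverless",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",   # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",   # placeholder
    ReplicationType="full-load-and-cdc",
    ComputeConfig={
        "MinCapacityUnits": 1,     # optional; the service can determine this from the workload
        "MaxCapacityUnits": 16,    # DCU ceiling that caps cost
        "ReplicationSubnetGroupId": "my-subnet-group",  # placeholder
    },
    TableMappings=json.dumps(table_mappings),
)

# Start the serverless replication using the config's ARN.
dms.start_replication(
    ReplicationConfigArn=config["ReplicationConfig"]["ReplicationConfigArn"],
    StartReplicationType="start-replication",
)
```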

When my replication is created, I can view more details of my replication and start the process by selecting Start.

After my replication runs for around 40 minutes, I can monitor replication progress on the Monitoring tab. AWS DMS Serverless also has a CloudWatch metric called Capacity utilization, which indicates the use of capacity to run the replication within the range defined as minimum and maximum DCU. The following screenshot shows the capacity scaling up in the CloudWatch metrics chart.

When the replication finishes its process, I see the capacity starting to decrease. This indicates that in addition to AWS DMS Serverless successfully scaling up to the required capacity, it can also scale down within the range I have defined.

Finally, all I need to do is verify whether my data has been successfully replicated into the target data store. I need to connect to the target, run a select query, and check if all data has been successfully replicated from the source.
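
For example, a quick check against the target could look like the following, assuming the third-party PyMySQL client library and placeholder connection details and table name:

```python
import pymysql  # assumed MySQL client library

conn = pymysql.connect(
    host="target-db.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="REPLACE_ME",
    database="appdb",
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM customers")  # hypothetical table
    print("rows replicated:", cur.fetchone()[0])
conn.close()
```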

Now Available
AWS DMS Serverless is now available in all commercial AWS Regions where standard AWS DMS is available, and you can start using it today. For more information about benefits, use cases, how to get started, and pricing details, refer to AWS DMS Serverless.

Happy migrating!
Donnie

AWS Application Migration Service Major Updates: Import and Export Feature, Source Server Migration Metrics Dashboard, and Additional Post-Launch Actions

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/aws-application-migration-service-major-updates-import-and-export-feature-source-server-migration-metrics-dashboard-and-additional-post-launch-actions/

AWS Application Migration Service (AWS MGN) can simplify and expedite your migration to AWS by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. In the post How to Use the New AWS Application Migration Service for Lift-and-Shift Migrations, Channy introduced us to Application Migration Service and how to get started.

By using Application Migration Service for migration, you can minimize time-intensive, error-prone manual processes by automating replication and conversion of your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. Last year, we introduced major improvements such as new migration servers grouping, an account-level launch template, and a post-launch actions template.

Today, I’m pleased to announce three major updates of Application Migration Service. Here’s the quick summary for each feature release:

  • Import and export – You can now use Application Migration Service to import your source environment inventory list to the service from a CSV file. You can also export your source server inventory for reporting purposes, offline reviews and updates, integration with other tools and AWS services, and performing bulk configuration changes by reimporting the inventory list.
  • Server migration metrics dashboard – This new dashboard can help simplify migration project management by providing an aggregated view of the migration lifecycle status of your source servers.
  • Additional post-launch modernization actions – In this update, Application Migration Service added eight additional predefined post-launch actions. These actions are applied to your migrated applications when you launch them on AWS.

Let me share how you can use these features for your migration.

Import and Export
Before we go further into the import and export features, let's discuss two concepts within Application Migration Service: applications and waves, which you can define when migrating with Application Migration Service. An application represents a group of servers; by defining applications, you can group related servers and manage them as a single unit. Within an application, you can perform various activities with Application Migration Service, such as monitoring, specifying tags, and performing bulk operations, for example, launching test instances. Additionally, you can group your applications into waves, which represent the servers and applications that are migrated together as part of your migration plan.

With the import feature, you can now import your inventory list in CSV form into Application Migration Service. This makes it easy for you to manage large-scale migrations and ingest your inventory of source servers, applications, and waves, including their attributes.

To start using the import feature, I need to identify my servers and application inventory. I can do this manually or by using discovery tools. The next thing I need to do is download the import template, which I can access from the console.

After I download the import template, I can start mapping my inventory list into this template. While mapping my inventory, I can group related servers into applications and waves. I can also perform configurations, such as defining Amazon Elastic Compute Cloud (Amazon EC2) launch template settings, and specify tags for each wave.

The following screenshot is an example of the results of my import template:

The next step is to upload my CSV file to an Amazon Simple Storage Service (Amazon S3) bucket. Then, I can start the import process from the Application Migration Service console by referencing the CSV file containing my inventory list that I’ve uploaded to the S3 bucket.
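
The same import can be started programmatically. Here's a hedged boto3 sketch, with a placeholder bucket and key for the uploaded CSV:

```python
import boto3

mgn = boto3.client("mgn")

# Start an import from the inventory CSV previously uploaded to S3 (placeholders).
import_job = mgn.start_import(
    s3BucketSource={
        "s3Bucket": "my-migration-inventory",
        "s3Key": "imports/wave-1-inventory.csv",
    }
)
# The response describes the import task, including its ID and status.
print(import_job)
```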

When the import process is complete, I can see the details of the import results.

I can import inventory for servers that don’t have an agent installed, or haven’t yet been discovered by agentless replication. However, to replicate data, I need to use agentless replication, or install the AWS Replication Agent on my source servers.

Now I can view all my inventory on the Source servers, Applications, and Waves pages in the Application Migration Service console. The following is a screenshot of recently imported waves.

In addition, with the export feature, I can export my source servers, applications, and waves along with all configurations that I’ve defined into a CSV file.

This is helpful if you want to do reporting or offline reviews, or for bulk editing before reimporting the CSV file into Application Migration Service.
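
A hedged boto3 sketch of the export side, writing the current inventory to a placeholder S3 location:

```python
import boto3

mgn = boto3.client("mgn")

# Export the current source server, application, and wave inventory to S3 (placeholders).
export_job = mgn.start_export(
    s3Bucket="my-migration-inventory",
    s3Key="exports/current-inventory.csv",
)
# The response describes the export task, including its ID and status.
print(export_job)
```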

Server Migration Metrics Dashboard
We previously supported a migration metrics dashboard for applications and waves. In this release, we have specifically added a migration metrics dashboard for servers. Now you can view an aggregated overview of your servers' migration process on the Application Migration Service dashboard. Three topics are available in the migration metrics dashboard:

  • Alerts – Shows associated alerts for respective servers.
  • Data replication status – Shows an overview of the data replication status for your source servers, giving you a quick view of the lifecycle status of the replication process.
  • Migration lifecycle – Shows an overview of the migration lifecycle of your source servers.

Additional Predefined Post-launch Modernization Actions
Post-launch actions allow you to control and automate actions performed after your servers have been launched in AWS. You can use predefined or custom post-launch actions.

Application Migration Service now has eight additional predefined post-launch actions to run in your EC2 instances on top of the existing four predefined post-launch actions. These additional post-launch actions provide you with flexibility to maximize your migration experience.

The new additional predefined post-launch actions are as follows:

  • Convert MS-SQL license – You can easily convert Windows MS-SQL BYOL to an AWS license using the Windows MS-SQL license conversion action. The launch process includes checking the SQL edition (Enterprise, Standard, or Web) and using the right AMI with the right billing code.
  • Create AMI from instance – You can create a new Amazon Machine Image (AMI) from your Application Migration Service launched instance.
  • Upgrade Windows version – This feature allows you to easily upgrade your migrated server to Windows Server 2012 R2, 2016, 2019, or 2022. You can see the full list of available OS versions on the AWSEC2-CloneInstanceAndUpgradeWindows page.
  • Conduct EC2 connectivity checks – You can conduct network connectivity checks to a predefined list of ports and hosts using the EC2 connectivity check feature.
  • Validate volume integrity – You can use this feature to ensure that Amazon Elastic Block Store (Amazon EBS) volumes on the launched instance are the same size as the source, properly mounted on the EC2 instance, and accessible.
  • Verify process status – You can validate the process status to ensure that processes are in a running state after instance launch. You will need to provide a list of processes that you want to verify and specify how long the service should wait before testing begins. This feature lets you do the needed validations automatically and saves time by not having to do them manually.
  • CloudWatch agent installation – Use the Amazon CloudWatch agent installation feature to install and set up the CloudWatch agent and Application Insights features.
  • Join Directory Service domain – You can simplify the AWS domain join process by using this feature. If you choose to activate this action, your instance will be joined to a domain managed in the AWS Cloud (instead of on premises).

Things to Know
Keep in mind the following:

  • Updated UI/UX – We have updated the user interface with card layout and table layout views for the action list on the Application Migration Service console. This update helps you determine which post-launch actions are suitable for your use case. We have also added filter options to make it easy to find relevant actions by operating system, category, and more.
  • Support for additional OS versions – Application Migration Service now supports CentOS 5.5 and later and Red Hat Enterprise Linux (RHEL) 5.5 and later operating systems.
  • Availability – These features are available now, and you can start using them today in all Regions where Application Migration Service is supported.

Get Started Today

Visit the Application Migration Service User Guide page to learn more about these features and understand the pricing. You can also visit Getting started with AWS Application Migration Service to learn how to get started migrating your workloads.

Happy migrating!

Donnie

AWS Week in Review – March 20, 2023

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-20-2023/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

A new week starts, and Spring is almost here! If you're curious about AWS news from the previous seven days, I've got you covered.

Last Week’s Launches
Here are the launches that got my attention last week:

Amazon S3 – Last week there was AWS Pi Day 2023 celebrating 17 years of innovation since Amazon S3 was introduced on March 14, 2006. For the occasion, the team released many new capabilities.

Amazon Linux 2023 – Our new Linux-based operating system is now generally available. Sébastien’s post is full of tips and info.

Application Auto Scaling – You can now use arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies, so you can scale based on your own application-specific metrics. Read how it works with Amazon ECS services.

AWS Data Exchange for Amazon S3 is now generally available – You can now share and find data files directly from S3 buckets, without the need to create or manage copies of the data.

Amazon Neptune – Now offers a graph summary API to help understand important metadata about property graphs (PG) and resource description framework (RDF) graphs. Neptune added support for Slow Query Logs to help identify queries that need performance tuning.

Amazon OpenSearch Service – The team introduced security analytics that provides new threat monitoring, detection, and alerting features. The service now supports OpenSearch version 2.5 that adds several new features such as support for Point in Time Search and improvements to observability and geospatial functionality.

AWS Lake Formation and Apache Hive on Amazon EMR – Introduced fine-grained access controls that allow data administrators to define and enforce fine-grained table and column level security for customers accessing data via Apache Hive running on Amazon EMR.

Amazon EC2 M1 Mac Instances – You can now update guest environments to a specific or the latest macOS version without having to tear down and recreate the existing macOS environments.

AWS Chatbot – Now Integrates With Microsoft Teams to simplify the way you troubleshoot and operate your AWS resources.

Amazon GuardDuty RDS Protection for Amazon Aurora – Now generally available to help profile and monitor access activity to Aurora databases in your AWS account without impacting database performance.

AWS Database Migration Service – Now supports validation to ensure that data is migrated accurately to S3 and can now generate an AWS Glue Data Catalog when migrating to S3.

AWS Backup – You can now back up and restore virtual machines running on VMware vSphere 8 and with multiple vNICs.

Amazon Kendra – There are new connectors to index documents and search for information across these new content sources: Confluence Server, Confluence Cloud, Microsoft SharePoint OnPrem, and Microsoft SharePoint Cloud. This post shows how to use the Amazon Kendra connector for Microsoft Teams.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
A few more blog posts you might have missed:

Women founders Q&A – We're talking to six women founders and leaders about how they're making impacts in their communities, industries, and beyond.

What you missed at the 2023 IMAGINE: Nonprofit conference – Where hundreds of nonprofit leaders, technologists, and innovators gathered to learn and share how AWS can drive a positive impact for people and the planet.

Monitoring load balancers using Amazon CloudWatch anomaly detection alarms – The metrics emitted by load balancers provide crucial and unique insight into service health, service performance, and end-to-end network performance.

Extend geospatial queries in Amazon Athena with user-defined functions (UDFs) and AWS Lambda – Using a solution based on Uber’s Hexagonal Hierarchical Spatial Index (H3) to divide the globe into equally-sized hexagons.

How cities can use transport data to reduce pollution and increase safety – A guest post by Rikesh Shah, outgoing head of open innovation at Transport for London.

For AWS open-source news and updates, here’s the latest newsletter curated by Ricardo to bring you the most recent updates on open-source projects, posts, events, and more.

Upcoming AWS Events
Here are some opportunities to meet:

AWS Public Sector Day 2023 (March 21, London, UK) – An event dedicated to helping public sector organizations use technology to achieve more with less through the current challenging conditions.

Women in Tech at Skills Center Arlington (March 23, VA, USA) – Let’s celebrate the history and legacy of women in tech.

The AWS Summits season is warming up! You can sign up here to be notified when registration opens in your area.

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

AWS Application Migration Service Major Updates – New Migration Servers Grouping, Updated Launch, and Post-Launch Template

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-application-migration-service-major-updates-new-migration-servers-grouping-updated-launch-and-post-launch-template/

Last year, we introduced the general availability of AWS Application Migration Service, which simplifies and expedites your migration to AWS by automatically converting your source servers from physical, virtual, or cloud infrastructure to run natively on AWS. Since the GA launch, we have made improvements, adding features such as agentless replication, MAP 2.0 auto-tagging, and support for optional post-launch modernization actions.

Today we announce three major updates of Application Migration Service to support your migration projects of any size:

  • New Migration Servers Grouping – You can group migrating servers into “applications,” groups of servers that function together as a single application, and manage the migration stages in “waves,” a migration plan that groups servers and applications to be migrated together.
  • Updated Launch Template – You can modify the general settings and default launch template, and this template is then used to generate the Amazon Elastic Compute Cloud (Amazon EC2) instance launch template of subsequently installed source servers.
  • Updated Post-Launch Template – You can configure custom modernization actions for the post-launch template. You can associate any AWS Systems Manager documents and their parameters with a post-launch custom action.

Let’s dive deep into each launch!

New Migration Servers Grouping – Applications and Waves
Customers have clusters of servers that comprise an application, with dependencies between them. The servers within an application share the same configurations, such as network settings and security policies. Customers want to migrate complete applications and services, as well as set up and configure the application environment.

We're introducing the new concept of an “application,” representing a group of servers, so you can manage the migration at the application level.

The new application feature groups source servers that belong to the same application for integrated migration jobs. This includes configuring the environment before migrating the application's servers, creating the appropriate security groups, and performing bulk actions on all of the application's servers.

You can track and monitor the status of application migration and data replication within the migration lifecycle of your source servers.

Also, customers with large migrations plan their migration, grouping servers and applications in waves. These are logical groups that describe the migration plan over time. Waves may include multiple servers and applications that do not necessarily have dependencies between them.

We're introducing the new concept of a “wave,” which helps customers build their migration plan, as well as execute and monitor it.

Application Migration Service supports actions on waves, such as launching all servers in a testing environment or performing cutover of a wave. Application Migration Service also provides reporting and monitoring information at the wave level so that customers will be able to manage their migration projects.

Updated Launch Template – Launch Settings and Default EC2 Launch Template
The launch template allows you to control the way Application Migration Service launches instances in the AWS Cloud. You can change the settings for existing and newly added servers individually. Previously, we only supported the AWS Migration Acceleration Program (MAP) option to add tags to launched migration instances.

We added two new options to modify the global launch template, and this template is then used to generate the EC2 launch templates of subsequently installed source servers. Customers start with a global Application Migration Service launch template that predefines launch settings for newly added source servers; they then only need to modify the settings of a smaller subset of source servers, as opposed to all of them.

Here are default settings that will be used when launching target servers:

  • Activate instance type right-sizing – The service will determine the best match instance type. The default instance type defined in the EC2 template will be ignored.
  • Start instance upon launch – The service will launch instances automatically. If this option is not selected, the launched instance will need to be manually started after launch.
  • Copy private IP – This enables you to copy the private IP of your source server to the target.
  • Transfer server tags – Transfer the tags from the source server to the launched instances.
  • Operating system licensing – Specify whether to continue to use the Bring Your Own License model (BYOL) of the source server or use an AWS provided license.

Also, you can configure the default settings that will be applied to the EC2 launch template of every target server, such as default target subnet, additional security groups, default instance type, Amazon Elastic Block Store (Amazon EBS) volume type, IOPS, and throughput to associate with all instances launched by this service.
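
To illustrate how these settings look outside the console, here's a hedged boto3 sketch that applies them to an individual source server with the per-server launch configuration API; the template-level defaults expose equivalent options, and the server ID and specific values are placeholders:

```python
import boto3

mgn = boto3.client("mgn")

mgn.update_launch_configuration(
    sourceServerID="s-0123456789abcdef0",          # placeholder source server ID
    targetInstanceTypeRightSizingMethod="BASIC",   # activate instance type right-sizing
    launchDisposition="STARTED",                   # start instance upon launch
    copyPrivateIp=True,                            # copy the private IP of the source server
    copyTags=True,                                 # transfer server tags
    licensing={"osByol": True},                    # keep the Bring Your Own License model
)
```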

Updated Post-Launch Template – Custom Actions
Post-launch settings allow you to control and automate actions performed after the server has been launched in AWS. It includes four built-in actions: installing the AWS Systems Manager agent, installing the AWS Elastic Disaster Recovery agent and configuring replication, CentOS conversion, and SUSE subscription conversion.

We added a new option to configure custom actions in the post-launch template. You can associate any AWS Systems Manager document and its action parameters. You can also specify the order in which the actions will be executed and the source server operating systems for which the custom action can be configured.

Choose Add custom action to create a new post-launch custom action. For example, AWS-CopySnapshot, one of the Systems Manager Automation runbooks, copies a point-in-time snapshot of an EBS volume. You can copy the snapshot within the same AWS Region or from one Region to another.

In the Action parameters, you can assign SnapshotId and SourceRegion to run the AWS Systems Manager CopySnapshot runbook.

You can create your own Systems Manager document to define the actions that Systems Manager performs on your managed instances. Systems Manager offers more than 100 preconfigured documents that you can use by specifying parameters as the post-launch actions. To learn more, see AWS Systems Manager Automation runbook reference in the AWS documentation.
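
To see what the runbook and its parameters look like outside of the post-launch template, here's a minimal boto3 sketch that invokes AWS-CopySnapshot directly through Systems Manager Automation, with placeholder values:

```python
import boto3

ssm = boto3.client("ssm")

# Run the AWS-CopySnapshot Automation runbook with the same SnapshotId and
# SourceRegion parameters you would supply to the custom post-launch action.
execution = ssm.start_automation_execution(
    DocumentName="AWS-CopySnapshot",
    Parameters={
        "SnapshotId": ["snap-0123456789abcdef0"],  # placeholder snapshot
        "SourceRegion": ["us-east-1"],
    },
)
print(execution["AutomationExecutionId"])
```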

Now Available
The new migration servers grouping, updates on the launch, and post-launch template are available now, and you can start using them today in all Regions where AWS Application Migration Service is supported.

To learn more, see the Application Migration Service User Guide, give it a try, and please send feedback to AWS re:Post for Application Migration Service or through your usual AWS support contacts.

Channy

Welcome to AWS Storage Day 2022

Post Syndicated from Veliswa Boya original https://aws.amazon.com/blogs/aws/welcome-to-aws-storage-day-2022/

We are in the fourth year of our annual AWS Storage Day! Do you remember our first Storage Day 2019 and the subsequent Storage Day 2020? I watched Storage Day 2021, which was streamed live from downtown Seattle. We continue to hear from our customers about how powerful the Storage Day announcements and educational sessions were. With this year's lineup, we aim to share our insights on how to protect your data and put it to work. The free Storage Day 2022 virtual event is happening now on the AWS Twitch channel. Tune in to hear from experts about new announcements, leadership insights, and educational content related to the broad portfolio of AWS Storage services.

Our customers are looking to reduce and optimize storage costs, while building the cloud storage skills they need for themselves and for their organizations. Furthermore, our customers want to protect their data for resiliency and put their data to work. In this blog post, you will find our insights and announcements that address all these needs and more.

Let’s get into it…

Protect Your Data
Data protection has become an operational model to deliver the resiliency of applications and the data they rely on. Organizations use the National Institute of Standards and Technology (NIST) cybersecurity framework and its Identify->Protect->Detect->Respond->Recover process to approach data protection overall. It’s necessary to consider data resiliency and recovery upfront in the Identify and Protect functions, so there is a plan in place for the later Respond and Recover functions.

AWS is making data resiliency, including malware-type recovery, table stakes for our customers. Many of our customers use Amazon Elastic Block Store (Amazon EBS) for mission-critical applications. If you already use Amazon EBS and you regularly back up EBS volumes using EBS multi-volume snapshots, I have an announcement that you will find very exciting.

Amazon EBS
Amazon EBS scales fast for the most demanding, high-performance workloads, and this is why our customers trust Amazon EBS for critical applications such as SAP, Oracle, and Microsoft. Currently, Amazon EBS enables you to back up volumes at any time using EBS Snapshots. Snapshots retain the data from all completed I/O operations, allowing you to restore the volume to its exact state at the moment before backup.

Many of our customers use snapshots in their backup and disaster recovery plans. A common use case for snapshots is to create a backup of a critical workload such as a large database or file system. You can choose to create snapshots of each EBS volume individually or choose to create multi-volume snapshots of the EBS volumes attached to a single Amazon Elastic Compute Cloud (EC2) instance. Our customers love the simplicity and peace of mind that comes with regularly backing up EBS volumes attached to a single EC2 instance using EBS multi-volume snapshots, and today we’re announcing a new feature—crash consistent snapshots for a subset of EBS volumes.

Previously, when you wanted to create multi-volume snapshots of EBS volumes attached to a single Amazon EC2 instance, if you only wanted to include some—but not all—attached EBS volumes, you had to make multiple API calls to keep only the snapshots you wanted. Now, you can choose specific volumes you want to exclude in the create-snapshots process using a single API call or by using the Amazon EC2 console, resulting in significant cost savings. Crash consistent snapshots for a subset of EBS volumes is also supported by Amazon Data Lifecycle Manager policies to automate the lifecycle of your multi-volume snapshots.
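
Here's a minimal boto3 sketch of that single API call, excluding two placeholder data volumes from the multi-volume snapshot set:

```python
import boto3

ec2 = boto3.client("ec2")

# Crash-consistent multi-volume snapshots of an instance, excluding data volumes
# that don't need to be backed up (instance and volume IDs are placeholders).
response = ec2.create_snapshots(
    Description="Nightly backup of critical volumes only",
    InstanceSpecification={
        "InstanceId": "i-0123456789abcdef0",
        "ExcludeBootVolume": False,
        "ExcludeDataVolumeIds": ["vol-0123456789abcdef0", "vol-0fedcba9876543210"],
    },
    CopyTagsFromSource="volume",
)
for snap in response["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"])
```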

This feature is now available to you at no additional cost. To learn more, please visit the EBS Snapshots user guide.

Put Your Data to Work
We give you controls and tools to get the greatest value from your data—at an organizational level down to the individual data worker and scientist. Decisions you make today will have a long-lasting impact on your ability to put your data to work. Consider your own pace of innovation and make sure you have a cloud provider that will be there for you no matter what the future brings. AWS Storage provides the best cloud for your traditional and modern applications. We support data lakes in AWS Storage, analytics, machine learning (ML), and streaming on top of that data, and we also make cloud benefits available at the edge.

Amazon File Cache (Coming Soon)
Today we are also announcing Amazon File Cache, an upcoming new service on AWS that accelerates and simplifies hybrid cloud workloads. Amazon File Cache provides a high-speed cache on AWS that makes it easier for you to process file data, regardless of where the data is stored. Amazon File Cache serves as a temporary, high-performance storage location for your data stored in on-premises file servers or in file systems or object stores in AWS.

This new service enables you to make dispersed data sets available to file-based applications on AWS with a unified view and at high speeds with sub-millisecond latencies and up to hundreds of GB/s of throughput. Amazon File Cache is designed to enable a wide variety of cloud bursting workloads and hybrid workflows, ranging from media rendering and transcoding, to electronic design automation (EDA), to big data analytics.

Amazon File Cache will be generally available later this year. If you are interested in learning more about this service, please sign up for more information.

AWS Transfer Family
During Storage Day 2020, we announced that customers could deploy AWS Transfer Family server endpoints in Amazon Virtual Private Clouds (Amazon VPCs). AWS Transfer Family helps our customers easily manage and share data with simple, secure, and scalable file transfers. With Transfer Family, you can seamlessly migrate, automate, and monitor your file transfer workflows into and out of Amazon S3 and Amazon Elastic File System (Amazon EFS) using the SFTP, FTPS, and FTP protocols. Exchanged data is natively accessible in AWS for processing, analysis, and machine learning, as well as for integrations with business applications running on AWS.
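
For example, here's a hedged boto3 sketch of standing up an SFTP endpoint backed by Amazon S3 with a service-managed user; the role ARN, bucket, and user name are placeholders:

```python
import boto3

transfer = boto3.client("transfer")

# An SFTP endpoint backed by Amazon S3 with service-managed users.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)

# A user whose home directory maps to a prefix in the exchange bucket (placeholders).
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-acme",
    Role="arn:aws:iam::111122223333:role/TransferS3AccessRole",
    HomeDirectory="/my-exchange-bucket/acme",
)
```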

On July 26th of this year, Transfer Family launched support for the Applicability Statement 2 (AS2) protocol. Customers across verticals such as healthcare and life sciences, retail, financial services, and insurance that rely on AS2 for exchanging business-critical data can now use AWS Transfer Family’s highly available, scalable, and globally available AS2 endpoints to more cost-effectively and securely exchange transactional data with their trading partners.

With a focus on helping you work with partners of your choice, we are excited to announce the AWS Transfer Family Delivery Program as part of the AWS Partner Network (APN) Service Delivery Program (SDP). Partners that deliver cloud-native Managed File Transfer (MFT) and business-to-business (B2B) file exchange solutions using AWS Transfer Family are welcome to join the program. Partners in this program meet a high bar, with deep technical knowledge, experience, and proven success in delivering Transfer Family solutions to our customers.

Five New AWS Storage Learning Badges
Earlier I talked about how our customers are looking to add the cloud storage skills they need for themselves and for their organizations. Currently, storage administrators and practitioners don’t have an easy way of externally demonstrating their AWS storage knowledge and skills. Organizations seeking skilled talent also lack an easy way of validating these skills for prospective employees.

In February 2022, we announced digital badges aligned to Learning Plans for Block Storage and Object Storage on AWS Skill Builder. Today, we’re announcing five additional storage learning badges. Three of these digital badges align to the Skill Builder Learning Plans in English for File, Data Protection & Disaster Recovery (DPDR), and Data Migration. Two of these badges—Core and Technologist—are tiered badges that are awarded to individuals who earn a series of Learning Plan-related badges in the following progression:

Badge progression: to earn the Storage Core badge, users must first earn the Block, File, and Object badges; to earn the Storage Technologist badge, users must first earn the Core, Data Protection & Disaster Recovery, and Data Migration badges.

To learn more, please visit the AWS Learning Badges page.

Well, That’s It!
As I’m sure you’ve picked up on the pattern already, today’s announcements focused on continuous innovation and AWS’s ongoing commitment to providing the cloud storage training that your teams are looking for. Best of all, this AWS training is free. These announcements also focused on simplifying your data migration to the cloud, protecting your data, putting your data to work, and cost-optimization.

Now Join Us Online
Register for free and join us for the AWS Storage Day 2022 virtual event on the AWS channel on Twitch. The event will be live from 9:00 AM Pacific Time (12:00 PM Eastern Time) on August 10. All sessions will be available on demand approximately 2 days after Storage Day.

We look forward to seeing you on Twitch!

– Veliswa x

Augmentation patterns to modernize a mainframe on AWS

Post Syndicated from Lewis Tang original https://aws.amazon.com/blogs/architecture/augmentation-patterns-to-modernize-a-mainframe-on-aws/

Customers with mainframes want to use Amazon Web Services (AWS) to increase agility, maximize the value of their investments, and innovate faster. On June 8, 2022, AWS announced the general availability of AWS Mainframe Modernization, a new service that makes it faster and simpler for customers to modernize mainframe-based workloads.

In this post, we discuss the common use cases and augmentation architecture patterns that help liberate data from the mainframe for modern data analytics, retire expensive and unsupported tape storage solutions, build new capabilities that integrate with core mainframe workloads, and enable agile development and testing by adopting CI/CD for the mainframe.

Pattern 1: Augment mainframe data retention with backup and archival on AWS

Mainframes process and generate the most business-critical data. It's imperative to provide data protection via solutions such as data backup, archiving, and disaster recovery. Mainframes usually use automated tape libraries or virtual tape libraries for backup and archive. These tapes need to be stored, organized, and transported to vaults and disaster recovery sites. All this can be very expensive and rigid.

There is a more cost-effective approach that helps simplify the operation of tape libraries: leverage AWS Partner tools, such as Model9, to transparently migrate the data on tape storage to AWS.

As depicted in Figure 1, mainframe data can be transferred over a secured network connection using AWS Transfer Family services or AWS DataSync to AWS cloud storage services, such as Amazon Elastic File System, Amazon Elastic Block Store, and Amazon Simple Storage Service (Amazon S3). After data is stored in the AWS Cloud, you can configure and move data among these services to meet your business data processing needs. Depending on data storage requirements, data storage costs can be further optimized by configuring S3 Lifecycle policies to move data among Amazon S3 storage classes. For long-term data archiving, you can choose the S3 Glacier storage classes to achieve durability, resilience, and optimal cost effectiveness.
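
As an example of the lifecycle configuration mentioned above, here's a minimal boto3 sketch that transitions migrated backup data to colder storage classes over time; the bucket name, prefix, and transition days are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Tier migrated mainframe backup data down to colder storage classes over time.
s3.put_bucket_lifecycle_configuration(
    Bucket="mainframe-backup-archive",   # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tape-replacement-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```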

Figure 1. Mainframe data backup and archival augmentation

Pattern 2: Augment mainframe with agile development and test environments including CI/CD pipeline on AWS

For any business-critical application, a typical mainframe workload requires development and test environments to support the production workload. It's common to see a lengthy application development lifecycle, a lack of automated testing, and an absent CI/CD pipeline on most mainframes. Furthermore, the existing mainframe development processes and tools are outdated, as they are unable to keep up with the business pace, resulting in a growing backlog. Organizations with mainframes look for application development solutions to solve these challenges.

As demonstrated in Figure 2, AWS developer tools orchestrate code compilation, testing, and deployment among mainframe test environments. Mainframe test environments are either provided by the mainframe vendors as emulators or by AWS partners, such as Micro Focus. You can load the preferred developer tools and run an integrated development environment (IDE) from Amazon WorkSpaces or Amazon AppStream 2.0. Developers create or modify code in the IDE, and then commit and push their code to AWS CodeCommit. As soon as the code is pushed, an event is generated and triggers the pipeline in AWS CodePipeline to build the new code in a compilation environment via AWS CodeBuild. The pipeline pushes the new code to the test environment.

To optimize cost, you can scale the test environment capacity to meet needs. The tests are executed, and the test environment can be shut down when not in use. When the tests are successful, the pipeline pushes the code back to the mainframe via AWS CodeDeploy and an intermediary server. On the mainframe side, the code can go through a recompilation and final testing before being pushed to production.
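
As a hedged sketch of the resulting developer flow (the file, branch, and pipeline names below are hypothetical), a push to CodeCommit starts the pipeline, and its progress can be checked from the AWS CLI:

# Commit a change and push it to the CodeCommit repository; the push triggers the pipeline
git add src/cobol/ACCTUPDT.cbl
git commit -m "Update account posting logic"
git push origin main

# Inspect the pipeline run started by the push
aws codepipeline get-pipeline-state --name mainframe-ci-pipeline
aws codepipeline list-pipeline-executions --pipeline-name mainframe-ci-pipeline --max-items 1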

You can further optimize the operations and licensing costs of the mainframe emulator by using the managed integrated development and test environment provided by the AWS Mainframe Modernization service.

Mainframe CI/CD augmentation

Figure 2. Mainframe CI/CD augmentation

Pattern 3: Augment mainframe with agile data analytics on AWS

Core business applications running on mainframes generate large volumes of data over the years. Decades of historical business transactions and massive amounts of user data present an opportunity to develop deep business insight. By creating a data lake using AWS big data services, you can gain faster analytics capabilities and better insight into core business data originating from mainframe applications.

Figure 3 depicts data being pulled from relational, hierarchical, or file-based data stores on mainframes. These data come in various formats, such as DB2 for z/OS, VSAM, IMS DB, IDMS, DMS, and others. You can use AWS Partner data replication and change data capture tools from AWS Marketplace, or AWS services such as Amazon Managed Streaming for Apache Kafka for near real-time data streaming and Transfer Family or DataSync for moving data in batch from mainframes to AWS.

Once data is replicated to AWS, you can further process it using services like AWS Lambda or Amazon Elastic Container Service, and store the processed data in services such as Amazon DynamoDB, Amazon Relational Database Service, and Amazon S3.

By using AWS big data and data analytics services, such as Amazon EMR, Amazon Redshift, Amazon Athena, AWS Glue, and Amazon QuickSight, you can develop deep business insight and present flexible visuals to your customers. Read more about mainframe data integration.
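
For example, once replicated records land in Amazon S3 and are cataloged by AWS Glue, they can be queried in place with Amazon Athena. This is a sketch only; the database, table, column, and bucket names are hypothetical:

aws athena start-query-execution \
  --query-string "SELECT account_id, SUM(amount) AS total FROM transactions WHERE txn_date >= DATE '2022-01-01' GROUP BY account_id LIMIT 10" \
  --query-execution-context Database=mainframe_lake \
  --result-configuration OutputLocation=s3://mainframe-analytics-results/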

Mainframe data analytics augmentation

Figure 3. Mainframe data analytics augmentation

Pattern 4: Augment mainframe with new functions and channels on AWS

Organizations with a mainframe use AWS to innovate and iterate quickly, because the mainframe itself often lacks that agility. For example, a common scenario for a bank is providing a mobile application for customer engagement, such as supporting a marketing campaign for a new credit card.

As depicted in Figure 4, with data replicated from mainframes to AWS and analyzed by AWS big data and analytics services, new business functions can be built as cloud-native applications using Amazon API Gateway, AWS Lambda, and AWS Fargate. These new applications can interact with mainframe data, and the combination yields deeper business insight.

To add new innovation capabilities, you can feed the time-series data generated by these new applications into Amazon Forecast to predict domain-specific metrics, such as inventory, workforce, web traffic, and finances. Amazon Lex can build virtual agents, automate informational responses to customer inquiries, and improve business productivity. With Amazon SageMaker, you can prepare data gathered from the mainframe and the new business applications at scale to build, train, and deploy machine learning models for any business case.

You can further improve customer engagement by incorporating Amazon Connect and Amazon Pinpoint to build multichannel communications.

Mainframe new functions and channels augmentation

Figure 4. Mainframe new functions and channels augmentation

Conclusion

To increase agility, maximize the value of investments, and innovate faster, organizations can adopt the patterns discussed in this post to augment mainframes with AWS services: build resilient data protection solutions, provision agile CI/CD-integrated development and test environments, liberate mainframe data, and develop innovative solutions for new digital customer experiences. With the AWS Mainframe Modernization service, you can accelerate this journey and innovate faster.

Identification of replication bottlenecks when using AWS Application Migration Service

Post Syndicated from Tobias Reekers original https://aws.amazon.com/blogs/architecture/identification-of-replication-bottlenecks-when-using-aws-application-migration-service/

Enterprises frequently begin their journey by re-hosting (lift-and-shift) their on-premises workloads into AWS and running Amazon Elastic Compute Cloud (Amazon EC2) instances. A simpler way to re-host is by using AWS Application Migration Service (Application Migration Service), a cloud-native migration service.

To streamline and expedite migrations, automate reusable migration patterns that work for a wide range of applications. Application Migration Service is the recommended migration service to lift-and-shift your applications to AWS.

In this blog post, we explore key variables that contribute to server replication speed when using Application Migration Service. We will also look at tests you can run to identify these bottlenecks and, where appropriate, include remediation steps.

Overview of migration using Application Migration Service

Figure 1 depicts the end-to-end data replication flow from source servers to a target machine hosted on AWS. The diagram is designed to help visualize potential bottlenecks within the data flow, which are denoted by a black diamond.

Data flow when using AWS Application Migration Service (black diamonds denote potential points of contention)

Figure 1. Data flow when using AWS Application Migration Service (black diamonds denote potential points of contention)

Baseline testing

To determine a baseline replication speed, we recommend performing a control test between your target AWS Region and the nearest Region to your source workloads. For example, if your source workloads are in a data center in Rome and your target Region is Paris, run a test between eu-south-1 (Milan) and eu-west-3 (Paris). This will give a theoretical upper bandwidth limit, as replication will occur over the AWS backbone. If the target Region is already the closest Region to your source workloads, run the test from within the same Region.
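
One way to run such a control test, sketched here with placeholder addresses, is to launch a small EC2 instance in each of the two Regions and measure throughput between them with iperf3:

# On the instance in the target Region (for example, eu-west-3), start an iperf3 server
iperf3 -s

# On the instance in the Region closest to your source workloads (for example, eu-south-1), run the client
iperf3 -c <target-instance-ip> -t 60 -P 4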

Network connectivity

There are several ways to establish connectivity between your on-premises location and AWS Region:

  1. Public internet
  2. VPN
  3. AWS Direct Connect

This section pertains to options 1 and 2. If facing replication speed issues, the first place to look is at network bandwidth. From a source machine within your internal network, run a speed test to calculate your bandwidth out to the internet; common test providers include Cloudflare, Ookla, and Google. This is your bandwidth to the internet, not to AWS.

Next, to confirm the data flow from within your data center, run a tracert (Windows) or traceroute (Linux). Identify any network hops that are unusual or potentially throttling bandwidth (due to hardware limitations or configuration).
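
For example, tracing the route to an AWS endpoint in the target Region (the endpoint shown is only an example):

# Windows
tracert s3.eu-west-3.amazonaws.com

# Linux
traceroute s3.eu-west-3.amazonaws.com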

To measure the maximum bandwidth between your data center and the AWS subnet that is being used for data replication, while accounting for Secure Sockets Layer (SSL) encapsulation, use the CloudEndure SSL bandwidth tool (refer to Figure 1).

Source storage I/O

The next area to look for replication bottlenecks is source storage. The underlying storage for the source servers can be a point of contention: if it is maxing out its read speeds, or if storage I/O is otherwise heavily utilized, it will slow block replication by Application Migration Service. To measure storage speeds, you can use the following tools (example invocations are sketched after the list):

  • Windows: WinSat (or other third-party tooling, like AS SSD Benchmark)
  • Linux: hdparm
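
As a rough sketch (drive letters and device names are examples and will differ on your systems):

# Windows: measure sequential read performance of the C: drive
winsat disk -seq -read -drive c

# Linux: measure cached and buffered read speeds of /dev/sda
sudo hdparm -tT /dev/sda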

We suggest reducing read/write operations on your source storage when starting your migration using Application Migration Service.

Application Migration Service EC2 replication instance size

The size of the EC2 replication server instance can also have an impact on replication speed. Although it is recommended to keep the default instance size (t3.small), it can be increased if there are business requirements, such as speeding up the initial data sync. Note: using a larger instance leads to increased compute costs.


Common replication instance changes include:

  • Servers with <26 disks: change the instance type to m5.large. Increase the instance type to m5.xlarge or higher, as needed.
  • Servers with <26 disks (or servers in AWS Regions that do not support m5 instance types): change the instance type to m4.large. Increase to m4.xlarge or higher, as needed.

Note: Changing the replication server instance type will not affect data replication. Data replication will automatically pick up where it left off, using the new instance type you selected.
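
If you prefer the CLI, the replication server instance type can also be changed per source server; this is a sketch, and the source server ID is a placeholder:

aws mgn update-replication-configuration \
  --source-server-id s-1234567890abcdef0 \
  --replication-server-instance-type m5.large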

Application Migration Service Elastic Block Store replication volume

You can customize the Amazon Elastic Block Store (Amazon EBS) volume type used by each disk within each source server in that source server’s settings (change staging disk type).

By default, disks <500 GiB use Magnetic HDD volumes. AWS best practice suggests not changing the default Amazon EBS volume type unless there is a business need to do so. However, because we aim to speed up replication, we actively change the default EBS volume type.

There are two options to choose from:

  1. The lower-cost Throughput Optimized HDD (st1) option uses slower, less expensive disks.

    • Consider this option if you:
      • Want to keep costs low
      • Have large disks that do not change frequently
      • Are not concerned with how long the initial sync process will take

  2. The faster General Purpose SSD (gp2) option uses faster, but more expensive, disks.

    • Consider this option if you:
      • Have a source server with disks that have a high write rate, or need faster performance in general
      • Want to speed up the initial sync process
      • Are willing to pay more for speed

Source server CPU

The Application Migration Service agent that is installed on the source machine for data replication uses a single core in most cases (agent threads can be scheduled to multiple cores). If core utilization reaches a maximum, this can be a limitation for replication speed. In order to check the core utilization:

  • Windows: Launch the Task Manager application within Windows, and click on the “CPU” tab. Right-click on the CPU graph (which by default shows an average of all cores) > select “Change graph to” > “Logical processors”. This will show individual cores and their current utilization (Figure 2).
Logical processor CPU utilization

Figure 2. Logical processor CPU utilization

  • Linux: Install htop and run it from the terminal. The htop command will display the Application Migration Service/CE process and indicate its CPU and memory utilization percentage (of the entire machine). You can check the CPU bars to determine if a single CPU is being maxed out (Figure 3).

AWS Application Migration Service/CE process to assess CPU utilization

Figure 3. AWS Application Migration Service/CE process to assess CPU utilization
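
If htop is not available, per-core utilization can also be sampled with mpstat from the sysstat package; this is a generic sketch:

# Sample all logical CPUs once per second, five times
mpstat -P ALL 1 5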

Conclusion

In this post, we explored several key variables that contribute to server replication speed when using Application Migration Service. We encourage you to explore these key areas during your migration to determine if your replication speed can be optimized.

Related information

New for AWS DataSync – Move Data Between AWS and Google Cloud Storage or AWS and Microsoft Azure Files

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-aws-datasync-move-data-between-aws-and-google-cloud-storage-or-aws-and-microsoft-azure-files/

Moving data to and from AWS Storage services can be automated and accelerated with AWS DataSync. For example, you can use DataSync to migrate data to AWS, replicate data for business continuity, and move data for analysis and processing in the cloud. You can use DataSync to transfer data to and from AWS Storage services, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), and Amazon FSx. DataSync also integrates with Amazon CloudWatch and AWS CloudTrail for logging, monitoring, and alerting.

Today, we added to DataSync the capability to migrate data between AWS Storage services and either Google Cloud Storage or Microsoft Azure Files. In this way, you can simplify your data processing or storage consolidation tasks. This also helps if you need to import, share, and exchange data with customers, vendors, or partners who use Google Cloud Storage or Microsoft Azure Files. DataSync provides end-to-end security, including encryption and integrity validation, to ensure your data arrives securely, intact, and ready to use.

Let’s see how this works in practice.

Preparing the DataSync Agent
First, I need a DataSync agent to read from, or write to, storage located in Google Cloud Storage or Azure Files. I deploy the agent on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The latest DataSync Amazon Machine Image (AMI) ID is stored in the Parameter Store, a capability of AWS Systems Manager. I use the AWS Command Line Interface (CLI) to get the value of the /aws/service/datasync/ami parameter:

aws ssm get-parameter --name /aws/service/datasync/ami --region us-east-1
{
    "Parameter": {
        "Name": "/aws/service/datasync/ami",
        "Type": "String",
        "Value": "ami-0e244fe801cf5a510",
        "Version": 54,
        "LastModifiedDate": "2022-05-11T14:08:09.319000+01:00",
        "ARN": "arn:aws:ssm:us-east-1::parameter/aws/service/datasync/ami",
        "DataType": "text"
    }
}

Using the EC2 console, I start an EC2 instance using the AMI ID specified in the Value property of the parameter. For networking, I use a public subnet and the option to auto-assign a public IP address. The EC2 instance needs network access to both the source and the destination of a data moving task. Another requirement for the instance is to be able to receive HTTP traffic from DataSync to activate the agent.

When using AWS DataSync in a virtual private cloud (VPC) based on the Amazon VPC service, it is a best practice to use VPC endpoints to connect the agent with the DataSync service. In the VPC console, I choose Endpoints on the navigation pane and then Create endpoint. I enter a name for the endpoint and select the AWS services category.

Console screenshot.

In the Services section, I look for DataSync.

Console screenshot.

Then, I select the same VPC where I started the EC2 instance.

Console screenshot.

To reduce cross-AZ traffic, I choose the same subnet used for the EC2 instance.

Console screenshot.

The DataSync agent running on the EC2 instance needs network access to the VPC endpoint. For simplicity, I use the default security group of the VPC for both. I create the VPC endpoint and, after a few minutes, it’s ready to be used.

Console screenshot.

In the AWS DataSync console, I choose Agents from the navigation pane and then Create agent. I select Amazon EC2 for the Hypervisor.

Console screenshot.

I choose VPC endpoints using AWS PrivateLink for the Endpoint type. I select the VPC endpoint I created before and the same Subnet and Security group I used for the VPC endpoint.

I choose the option to Automatically get the activation key and type the public IP of the EC2 instance. Then, I choose Get key.

Console screenshot.

After the DataSync agent has been activated, I don’t need HTTP access anymore, and I remove that from the security groups of the EC2 instance. Now that the DataSync agent is active, I can configure tasks and locations to move my data.
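
If I wanted to script this step, I could also create the agent with the AWS CLI; this is a sketch, and the activation key, endpoint ID, and ARNs are placeholders:

aws datasync create-agent \
  --agent-name gcp-azure-agent \
  --activation-key <activation-key-from-agent> \
  --vpc-endpoint-id vpce-0123456789abcdef0 \
  --subnet-arns arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0 \
  --security-group-arns arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0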

Moving Data from Google Cloud Storage to Amazon S3
I have a few images in a Google Cloud Storage bucket, and I want to synchronize those files with an S3 bucket. In the Google Cloud console, I open the settings of the bucket. There, I create a service account with Storage Object Viewer permissions and write down the credentials (access key and secret) to access the bucket programmatically.

Back in the AWS DataSync console, I choose Tasks and then Create task.

To configure the source of the task, I create a location. I select Object storage for the Location type and choose the agent I just created. For the Server, I use storage.googleapis.com. Then, I enter the name of the Google Cloud bucket and the folder where my images are stored.

Console screenshot.

For authentication, I enter the access key and the secret I retrieved when I created the service account. I choose Next.

Console screenshot.
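
The same source location can be created from the AWS CLI; this is a sketch in which the bucket name, folder, and agent ARN are placeholders, and the access key and secret are the service account credentials I created above:

aws datasync create-location-object-storage \
  --server-hostname storage.googleapis.com \
  --bucket-name my-gcs-bucket \
  --subdirectory /images \
  --access-key <service-account-access-key> \
  --secret-key <service-account-secret> \
  --agent-arns arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0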

To configure the destination of the task, I create another location. This time, I select Amazon S3 for the Location Type. I choose the destination S3 bucket and enter a folder that will be used as a prefix for the files transferred to the bucket. I use the Autogenerate button to create the IAM role that will give DataSync permissions to access the S3 bucket.

Console screenshot.

In the next step, I configure the task settings. I enter a name for the task. Optionally, I can fine-tune how DataSync verifies the integrity of the transferred data or allocate a bandwidth for the task.

Console screenshot.

I can also choose what data to scan and what to transfer. By default, all source data is scanned, and only data that has changed is transferred. In the Additional settings, I disable Copy object tags because tags are currently not supported with Google Cloud Storage.

Console screenshot.

I can select the schedule used to run this task. For now, I leave it Not scheduled, and I will start it manually.

Console screenshot.

For logging, I use the Autogenerate button to create a log group for DataSync. I choose Next.

Console screenshot.

I review the configurations and create the task. Now, I start the data moving task from the console. After a few minutes, the files are synced with my S3 bucket and I can access them from the S3 console.

Console screenshot.
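
For repeatable runs, I could create and start the same task from the AWS CLI; this is a sketch, and the bucket name, role ARN, and location and task ARNs are placeholders:

# Destination: an S3 location with an IAM role that grants DataSync access to the bucket
aws datasync create-location-s3 \
  --s3-bucket-arn arn:aws:s3:::my-destination-bucket \
  --subdirectory /gcs-import \
  --s3-config BucketAccessRoleArn=arn:aws:iam::111122223333:role/datasync-s3-access

# Create the task from the source and destination locations, then start it
aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-source123456789ab \
  --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-dest1234567890ab \
  --name gcs-to-s3-images

aws datasync start-task-execution \
  --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0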

Moving Data from Azure Files to Amazon FSx for Windows File Server
I take a lot of pictures, and I also have a few images in an Azure file share. I want to synchronize those files with an Amazon FSx for Windows file system. In the Azure console, I select the file share and choose the Connect button to generate a PowerShell script that checks if this storage account is accessible over the network.

$connectTestResult = Test-NetConnection -ComputerName <SMB_SERVER> -Port 445
if ($connectTestResult.TcpTestSucceeded) {
    # Save the password so the drive will persist on reboot
    cmd.exe /C "cmdkey /add:`"danilopsync.file.core.windows.net`" /user:`"localhost\<USER>`" /pass:`"<PASSWORD>`""
    # Mount the drive
    New-PSDrive -Name Z -PSProvider FileSystem -Root "\\danilopsync.file.core.windows.net\<SHARE_NAME>" -Persist
} else {
    Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port."
}

From this script, I grab the information I need to configure the DataSync location:

  • SMB Server
  • Share Name
  • User
  • Password

Back in the AWS DataSync console, I choose Tasks and then Create task.

To configure the source of the task, I create a location. I select Server Message Block (SMB) for the Location Type and the agent I created before. Then, I use the information I found in the script to enter the SMB Server address, the Share name, and the User/Password to use for authentication.

Console screenshot.
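
The corresponding SMB source location can also be created from the AWS CLI; this is a sketch that reuses the values captured from the PowerShell script, with a placeholder agent ARN:

aws datasync create-location-smb \
  --server-hostname danilopsync.file.core.windows.net \
  --subdirectory /<SHARE_NAME> \
  --user <USER> \
  --password <PASSWORD> \
  --agent-arns arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0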

To configure the destination of the task, I again create a location. This time, I choose Amazon FSx for the Location type. I select an FSx for Windows file system that I created before and use the default share name. I use the default security group to connect to the file system. Because I am using AWS Directory Service for Microsoft Active Directory with FSx for Windows File Server, I use the credentials of a user member of the AWS Delegated FSx Administrators and Domain Admins groups. For more information, see Creating a location for FSx for Windows File Server in the documentation.

Console screenshot.

In the next step, I enter a name for the task and leave all other options to their default values in the same way I did for the previous task.

Console screenshot.

I review the configurations and create the task. Now, I start the data moving task from the console. After a few minutes, the files are synced with my FSx for Windows file system share. I mount the file system share from a Windows EC2 instance and see that my images are there.

EC2 screenshot.

When creating a task, I can reuse existing locations. For example, if I want to synchronize files from Azure Files to my S3 bucket, I can quickly select the two corresponding locations I created for this post.

Availability and Pricing
You can move your data using the AWS DataSync console, AWS Command Line Interface (CLI), or AWS SDKs to create tasks that move data between AWS storage and Google Cloud Storage buckets or Azure Files file systems. As your tasks run, you can monitor progress from the DataSync console or by using CloudWatch.

There are no changes to DataSync pricing with these new capabilities. Moving data to and from Google Cloud or Microsoft Azure is charged at the same rate as all other data sources supported by DataSync today.

You may be subject to data transfer out fees by Google Cloud or Microsoft Azure. Because DataSync compresses data in flight when copying between the agent and AWS, you may be able to reduce egress fees by deploying the DataSync agent in a Google Cloud or Microsoft Azure environment.

When using DataSync to move data from AWS to Google Cloud or Microsoft Azure, you are charged for data transfer out from EC2 to the internet. See Amazon EC2 pricing for more information.

Automate and accelerate the way you move data with AWS DataSync.

Danilo

Avoid affecting your production environment during migration with AWS Application Migration Service

Post Syndicated from Michael Spence original https://aws.amazon.com/blogs/architecture/avoid-affecting-your-production-environment-during-migration-with-aws-application-migration-service/

Customers commonly use AWS Application Migration Service to migrate Active Directory joined Windows or Linux servers to Amazon Web Services (AWS). However, this process can affect the production environment during testing. For example, if you update DNS addresses during testing, clients that try to reach the original server will be redirected to the testing server.

To avoid this issue, you could launch instances in a completely isolated network environment. However, this may not be an option for you if your application stack depends on other applications or services.

In this post, we offer an architecture and process that can be applied to applications that cannot be completely isolated during testing. It uses Microsoft Domain Name System (DNS) Query Policies and a Group Policy and does not affect the production environment.

Overview of solution

Using a simple application stack of a WordPress site as an example, we'll show you how to implement this solution. We'll also discuss the prerequisites required to implement it and describe potential domain issues you may encounter and how to solve them.

Figure 1 shows a view of the testing architecture on-premises and in AWS:

  1. It uses an Amazon Virtual Private Cloud (Amazon VPC) that was set up specifically for testing.
  2. Within this Amazon VPC, a management subnet that runs on Amazon Elastic Compute Cloud (Amazon EC2) contains the production environment’s self-managed domain controller. This domain controller provides authentication and DNS services to the instances launched during testing.
  3. The client that tests the application is located within the same private subnet as the applications being tested.
Testing architecture design

Figure 1. Testing architecture design

Prerequisites

To implement this solution, you will need:

  • An AWS account
  • Network connectivity between an on-premises server (or other cloud) and AWS
  • Active Directory (Windows Server 2016 or later)
  • Active Directory Integrated DNS
  • Basic understanding of the following technologies:
    • Application Migration Service
    • EC2 launch templates
    • Active Directory (Windows Version 2016, 2019, or 2022)
    • Microsoft Group Policy
    • Active Directory Integrated DNS
    • Windows PowerShell

Potential domain issues and how to solve them

Application Migration Service is a block-level replication of the original source server, meaning that the replicated server is an exact copy of the original server. During testing, the original source server and the server that was launched for testing are active on the domain at the same time.

This has the potential to create these issues:

  • Each server joined to an Active Directory domain has an associated computer object, and that object has a password. The server will change this password automatically every 30 days by default. If either server changes the password for that object, it will cause the other server to fail because it cannot establish a secure session with a domain controller.
  • Active Directory supports dynamic DNS updates by allowing servers to register and dynamically update their resource records. If the replicated server updates DNS with a new IP address, that change can propagate to the rest of the environment. This means that clients trying to reach the original server will be redirected to the testing server, which will affect the production environment.

By creating a new Group Policy Object (GPO) with the following settings (and as shown in Figure 2), you can avoid these potential issues.

  1. Disable Machine Password Changes. This setting uses reverse logic: you enable the policy setting in order to disable automatic machine account password changes on the server.
  2. Disable Dynamic Updates from the DNS Client. This setting will prevent the server from updating DNS with a new IP address when you are testing it.

Apply this newly created GPO to the servers that are being migrated. The servers can be targeted for the policy by Active Directory security group, or you can move the servers to a separate organizational unit (OU) within Active Directory and apply the GPO to that OU.

GPO settings

Figure 2. GPO settings

After you apply GPO settings to the server being migrated, verify the GPO settings from an administrative command prompt:

C:\> gpupdate /force

C:\> gpresult /r /scope computer

Active Directory replication

You can disable outbound replication from the domain controller in the testing VPC while still allowing inbound replication. We intentionally configured the network within the testing VPC to allow connectivity to on-premises and other networks.

To do this, issue the following command from an administrative command prompt with a domain administrator privileged user:

C:\> repadmin /options <Testing DC> +disable_outbound_repl

Example walkthrough

1.     Example application description

Let’s look at a simple application stack of a WordPress site. It consists of a single web server and a single database server that are being migrated from on-premises to AWS. The WordPress stack could be Linux or Windows. The WordPress configuration on the web server points to the database by hostname only (not by IP address).

When the web server starts up during the testing phase, it issues a DNS query to find the database server by name. Note that we don’t want to return the IP address of the current production database server; we want the one currently being tested. When our workstation in the testing VPC opens up a browser to the WordPress site, we also want it to return the IP address of the web server being tested.

For reference, Table 1 reflects the IP addresses, network information, and URL used in the testing example.

Table 1. Example values for the testing scenario
Testing Private Subnet CIDR: 10.0.67.0/24
Domain Zone Name: awslaunch.local
WordPress Site URL: http://wswpweb1

Hostname    Original IP Address    Testing IP Address
wswpweb1    10.0.4.171             10.0.67.171
wswpsql1    10.0.4.202             10.0.67.202

2.     Modify EC2 launch templates

To ensure that we launch the EC2 instances with the IP addresses we specified in Table 1, let’s change the EC2 launch template associated with each of the source servers from within Application Migration Service. The EC2 launch template should specify the testing subnet and the IP address to be assigned to the test instances.

EC2 launch template settings

Figure 3. EC2 launch template settings

Once you’ve modified the EC2 launch template, be sure to set it as the “Default” template because this is the version of the template that Application Migration Service will launch upon testing.

3.     Create Microsoft DNS Query Policies

Now that we’ve launched the EC2 instances for testing, let’s set up DNS Query Policies for DNS to return an alternate set of IP addresses during the testing phase.

We’ll set these policies to return alternate IP addresses from DNS for the servers involved in testing and only within the boundary of the testing private subnet (Policy 3.4). By doing this, if any client outside of the testing private subnet queries this domain controller, it will return the original IP addresses and not the testing ones.

Because DNS Query Policies are specific to an individual domain controller, let's issue the following PowerShell commands on the domain controller that is active in our testing VPC (AD-DC-2 in Figure 1) to create these policies:

Policy 3.1       A client subnet that represents our private subnet in the testing VPC. This is a one-time configuration.

Add-DnsServerClientSubnet -Name "MGN-Testing-AZ1" -IPv4Subnet "10.0.67.0/24"

Policy 3.2       A zone scope within the domain zone name. This is a one-time configuration.

Add-DnsServerZoneScope -ZoneName "awslaunch.local" -Name "MGN-Testing-AZ1-Scope"

Policy 3.3       Alternative IP addresses in the new zone scope.

Add-DnsServerResourceRecord -ZoneName "awslaunch.local" -A -Name "wswpweb1" -IPv4Address "10.0.67.171" -ZoneScope "MGN-Testing-AZ1-Scope"

Add-DnsServerResourceRecord -ZoneName "awslaunch.local" -A -Name "wswpsql1" -IPv4Address "10.0.67.202" -ZoneScope "MGN-Testing-AZ1-Scope"

Policy 3.4       A query resolution policy that returns alternate IP addresses for our testing servers without impacting the rest of the domain.

Add-DnsServerQueryResolutionPolicy -Name "MGN-Testing-AZ1-Policy" -Action ALLOW -FQDN "eq,wswpweb1.awslaunch.local,wswpsql1.awslaunch.local" -ClientSubnet "eq,MGN-Testing-AZ1" -ZoneScope "MGN-Testing-AZ1-Scope,1" -ZoneName "awslaunch.local"

4.     Verify current production IP addresses

Now that we have our policies set, let’s make sure that the application stack IP addresses are still returning correctly from the current production environment with the following command:

C:\>nslookup wswpweb1

Server: WSDC1.awslaunch.local

Address: 10.0.5.10

 

Name: wswpweb1.awslaunch.local

Address: 10.0.4.171

 

C:\>nslookup wswpsql1

Server: WSDC1.awslaunch.local

Address: 10.0.5.10

 

Name: wswpsql1.awslaunch.local

Address: 10.0.4.202

These are the correct IP addresses. They match the original IP addresses from Table 1.

5.    Verify testing IP addresses

Now let’s verify that the application stack IP addresses are returning correctly from the testing environment.

C:\>nslookup wswpweb1

Server: WSDC5.awslaunch.local

Address: 10.0.69.10

 

Name: wswpweb1.awslaunch.local

Address: 10.0.67.171

 

C:\>nslookup wswpsql1

Server: WSDC5.awslaunch.local

Address: 10.0.69.10

 

Name: wswpsql1.awslaunch.local

Address: 10.0.67.202

These are the correct addresses. They match the testing IP addresses from Table 1.

We can also verify that the WordPress site is functional by opening a browser from our testing client and browsing to the URL (http://wswpweb1, listed in Table 1).

WordPress site verification

Figure 4. WordPress site verification

Additionally, we can verify that the web server is connecting to the correct SQL server database by issuing the following command when logged into the WordPress web server:

C:\ >netstat | findstr /c:"ms-sql-s"

 TCP 10.0.67.171:50355 10.0.67.202:ms-sql-s TIME_WAIT
 TCP 10.0.67.171:50361 10.0.67.202:ms-sql-s TIME_WAIT

6.     Cleanup

After you’ve tested everything, mark the source servers in Application Migration Service as “Ready for Cutover.” This will terminate the EC2 testing instances.

The following PowerShell commands will remove the DNS resource records and the DNS Query Policy:

Remove-DnsServerResourceRecord -ZoneName "awslaunch.local" -ZoneScope "MGN-Testing-AZ1-Scope" -RRType "A" -Name "wswpweb1"

Remove-DnsServerResourceRecord -ZoneName "awslaunch.local" -ZoneScope "MGN-Testing-AZ1-Scope" -RRType "A" -Name "wswpsql1"

Remove-DnsServerQueryResolutionPolicy -Name "MGN-Testing-AZ1-Policy" -ZoneName "awslaunch.local"

This command will re-enable replication from the testing domain controller.

C:\> repadmin /options <Testing DC> -disable_outbound_repl

Once the servers are launched for final cutover, you can remove the Group Policy applied to the servers to allow the server to resume password changes and dynamic DNS updates.

Conclusion

When you migrate your server resources to AWS with Application Migration Service, the testing phase of the migration lifecycle is critical. It verifies that your application will function properly within the AWS environment before committing to a final cutover. The testing phase can uncover unexpected application dependencies, gauge application performance, and deliver the confidence that your migration to AWS will be successful without risks to your production environment.

In this blog post, we explored an architecture and process that uses Microsoft DNS Query Policies and Group Policy to address potential domain issues and avoids affecting the production environment. This testing framework persists during the entire cloud migration and across multiple application migrations. Once migration is complete, the testing architecture can be removed or maintained for future cloud migrations.

Ready to get started? Read more about the AWS Application Migration Service and other blog posts related to migrations on the AWS Cloud Enterprise Strategy Blog and the AWS Architecture Blog.

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

New – AWS Migration Hub Refactor Spaces Helps to Incrementally Refactor Your Applications

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-aws-migration-hub-refactor-spaces-helps-to-incrementally-refactor-your-applications/

I am excited to announce the preview of AWS Migration Hub Refactor Spaces, a new capability of AWS Migration Hub to let you refactor existing applications into distributed applications, typically based on microservices.

There are multiple reasons why you want to refactor existing applications. You might want to make your code more modular, use more modern frameworks, use different data storage, etc. In general, when refactoring, your objective is to make your application easier to maintain and evolve over time. Other benefits might include handling larger workloads, increasing resiliency, or lowering costs. But let’s face it, refactoring is hard. I usually compare refactoring to changing the engines, cabin seats, and entertainment system of a plane while keeping the plane in the air, fully loaded with passengers, and without having them notice any change.

When talking with customers who have successfully been through these refactoring projects, we noticed a common pattern: the Strangler Fig design pattern.

strangler fig

Strangler figs are a family of plants that grow their roots from the tops of the trees that host them, eventually enveloping or replacing their hosts. Author Martin Fowler coined the term to describe a migration design pattern. The idea is “to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled”.

How Can I Apply This Plant Behavior To My Application Migration?
Inspired by this family of plants, I might want to extract capabilities from a monolithic application and rewrite them as microservices. Then, I incrementally route traffic away from the old to the new. Over time, all of the requests are routed to microservices, and the existing application is retired.

While effective, this approach to application transformation creates hurdles. I must create the required infrastructure to separate the existing applications and the microservices. In the AWS cloud, this often involves creating multiple AWS accounts, so teams or services can more easily operate independently. Having multiple accounts is the most efficient way to separate concerns and billing across teams. When dealing with multiple AWS accounts, it is required to maintain networking infrastructure to connect my existing application and new services together. Furthermore, I must create a routing control system to route traffic gradually from the old application to the new services in different accounts. Creating and managing that infrastructure at scale is complex. It introduces additional risks and costs to the refactor project.

How Refactor Spaces Helps
AWS Migration Hub Refactor Spaces takes care of the heavy lifting for me. First, it lays down the networking infrastructure to enable connectivity between multiple AWS accounts. Second, it creates and manages a mechanism to route API calls away from my legacy application.

Let’s imagine I have a monolithic application that I want to refactor. The application is made of a web-based front-end built with ReactJS. The front-end is hosted on Amazon Simple Storage Service (Amazon S3) and distributed through Amazon CloudFront. It makes API calls to a monolithic back-end developed in NodeJS or Python and deployed on several EC2 instances. The API uses a relational database, because that is how the company has stored its data since it was founded.

The architecture of this application is illustrated by the following diagram.

refactor spaces - monolith

Each API has a distinct URI. For example the /cart API handles the shopping basket, the /order API handles the ordering system, etc. I apply the strangler fig pattern and decide to extract the /cart capabilities to a set of new microservices. I create an AWS account for these microservices. I develop and deploy a set of AWS Lambda functions to implement the cart management functionalities. I chose to use Amazon DynamoDB for the shopping basket data storage because of its low latency at scale.

The schema of my new architecture is shown in the following diagram:

Refactor Space - target architecture

But now I have two challenges. First, I have to design, code, and deploy a routing mechanism to route API calls made by the front-end application to the correct back-end: either the monolith, or the new microservices. This service will likely be deployed into a distinct AWS account. Then, I have to configure network connectivity between these multiple AWS accounts.

This is where Refactor Spaces comes into the picture.

Introducing AWS Migration Hub Refactor Spaces
Refactor Spaces makes it easy to manage application refactoring by taking care of the two challenges I just described: the routing of the API calls and the network connectivity between AWS accounts. It is made of Environments, Services, and an Application proxy. Let’s see it in action.

I open the AWS Management Console, navigate to AWS Migration Hub, and select Refactor Spaces.

I first create a Refactor Spaces Environment. An Environment is a multi-account network fabric consisting of peered VPCs. This lets AWS resources in service VPCs added to the environment communicate directly across AWS accounts. It also provides a unified view of networking and services across accounts.

In Create environment, I give my environment a name and a description, and then select Next.

Refactor Spaces - Create environment

Then, I define my application. I give my application a name, and select the VPC where the proxy will be deployed.

An application is a services container. It has a proxy that defines routes. The proxy lets your front-end application use a single endpoint to contact multiple services. All of the traffic hits the single proxy endpoint, and then it’s sent to multiple services based on your rules.

Refactor Space - Create application

You may want to use multiple AWS accounts as explained before. Typically, an application is made of one AWS Account that hosts the Refactor Spaces Application proxy, one or multiple AWS accounts to host the legacy application, and one AWS account for the first microservice. Therefore, I invite the other AWS account owners to join this Refactor Spaces environment. I add one principal per AWS Account. Refactor Spaces doesn’t reinvent the wheel, but it leverages AWS Resource Access Manager (RAM) to do so.

This step is optional. Refactor Spaces may work within one AWS Account. It is possible to share the environments with other AWS accounts at a later stage.

I enter the AWS account IDs as Principals, and then select Next.

Refactor Space - Shared Accounts

Finally, I review my choices and select Create & share environment (not shown here).
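
If I prefer scripting, I can sketch the same environment and application creation with the AWS CLI; the names and VPC ID below are placeholders, and the exact parameters should be checked against the CLI reference:

aws migration-hub-refactor-spaces create-environment \
  --name refactor-demo \
  --network-fabric-type TRANSIT_GATEWAY

aws migration-hub-refactor-spaces create-application \
  --environment-identifier <environment-id> \
  --name store-app \
  --proxy-type API_GATEWAY \
  --vpc-id vpc-0123456789abcdef0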

Assuming that the microservices are ready to use, the next step is registering them as Refactor Spaces Services. Refactor Spaces Services are entities that provide business capabilities, typically microservices. These services are reachable through unique endpoints, and they can interoperate across accounts in a Refactor Spaces Environment. In this example, there are four services:

  • The monolithic app. This is the default service where Refactor Spaces routes all API calls initiated by the front-end.
  • Three microservices to implement the /cart capability. I decided to refactor this capability with three distinct services: AddItem, RemoveItem, and ListItems.

A Refactor Spaces Service may target any compute resource type: EC2, containers deployed on AWS Fargate, an Application Load Balancer, an AWS Lambda function, etc.

I select Create service from the left menu. The service configuration is in three steps. First, I select the Refactor Spaces Environment and Application where I want to define this service. Second, I give my service a name and a description. And third, I select the service endpoint: either an HTTP/HTTPS URL in a VPC, or a Lambda function.

The monolithic application is the default route to which the Refactor Spaces Application proxy sends all of the API calls, unless otherwise specified. I enter / as Source path and select Include child paths. Then, I make sure Match all is selected for HTTP verbs.

When finished, I select Create service. I repeat this process for each of my microservices. For this demo, I create four Refactor Spaces Services in total.

AWS Migration Hub refactor Spaces - create service

The last step defines the routing rules for the Refactor Spaces Application proxy. When configured, the proxy becomes the new API endpoint for my front-end application. The sole change that I have to make in my front-end application is to point it at the Refactor Spaces Application proxy URI. The proxy routes API calls to Services, according to a route definition. An Application proxy supports routing to all compute platforms with public or private visibility. At the moment, private endpoints must be referenced through a public DNS name or their private IP address. Each API call is run against the set of routes configured in the proxy. When a path matches a rule, the request is sent to the target service configured for that path. Proxies have a default route that forwards requests to a default service if they don’t match any of the path rules.

I select the service that I just created. Then, I enter the route Source path and the HTTP Verb to support. When my service expects subpaths (such as /cart/123), I make sure to select Include child paths, as well.

Refactor Spaces - Define route

I repeat this process for the ListItems and RemoveItem microservices. They are invoked with different HTTP verbs: GET and DELETE, respectively.

Based on this configuration, Refactor Spaces creates and manages the following architecture for me. The Refactor Spaces Application proxy and network fabric are deployed in a separate AWS account. I might further configure the Amazon API Gateway based on the needs of my monolithic application or microservices.

Refactor Spaces - final architecture

The ultimate change is for the application front-end. I modify its configuration to point to the Refactor Spaces Application proxy endpoint, instead of the monolith’s endpoint. From now on, Refactor Spaces routes API calls to the monolith by default. It routes the /cart calls for GET, POST, and DELETE verbs to my new microservices implemented as Lambda functions.

Over time, I will repeat this process to move other capabilities out of the monolithic application, one by one, until the old monolith is strangled and fully replaced by the new microservices architecture.

Pricing and Availability
AWS Migration Hub Refactor Spaces is available today in the following ten AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), Europe (London), and Europe (Stockholm). As usual, we’re looking forward to expanding to additional Regions in the future.

This new capability is available today as an open preview, and no registration is necessary. You can start using it today. There is no charge for using Refactor Spaces during the preview period. However, you may be charged for the resources that it provisions in your AWS accounts: Amazon API Gateway, AWS Transit Gateway, and Network Load Balancer. The pricing details are available on AWS Migration Hub’s pricing page. Billing will start when Refactor Spaces becomes generally available.

Go and start refactoring your applications today!

— seb

New – AWS Transfer Family support for Amazon Elastic File System

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-aws-transfer-family-support-for-amazon-elastic-file-system/

AWS Transfer Family provides fully managed Secure File Transfer Protocol (SFTP), File Transfer Protocol over SSL/TLS (FTPS), and File Transfer Protocol (FTP) support for Amazon Simple Storage Service (Amazon S3), enabling you to seamlessly migrate your file transfer workflows to AWS.

Today I am happy to announce AWS Transfer Family now also supports file transfers to Amazon Elastic File System (EFS) file systems as well as Amazon S3. This feature enables you to easily and securely provide your business partners access to files stored in Amazon EFS file systems. With this launch, you now have the option to store the transferred files in a fully managed file system and reduce your operational burden, while preserving your existing workflows that use SFTP, FTPS, or FTP protocols.

Amazon EFS file systems are accessible within your Amazon Virtual Private Cloud (VPC) and VPC connected environments. With this launch, you can securely enable third parties such as your vendors, partners, or customers to access your files over the supported protocols at scale globally, without needing to manage any infrastructure. When you select Amazon EFS as the data store for your AWS Transfer Family server, the transferred files are readily available to your business-critical applications running on Amazon Elastic Compute Cloud (EC2), as well as to containerized and serverless applications run using AWS services such as Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), AWS Fargate, and AWS Lambda.

Using Amazon EFS – Getting Started
To get started with your existing Amazon EFS file system, make sure the POSIX identities you assign for your SFTP/FTPS/FTP users are owners of the files and directories you want to provide access to. You will provide access to that Amazon EFS file system through a resource-based policy. Your role also needs to establish a trust relationship. This trust relationship allows AWS Transfer Family to assume the AWS Identity and Access Management (IAM) role to access your file system so that it can service your users’ file transfer requests.

You will also need to make sure you have created a mount target for your file system. In the example below, the home directory is owned by userid 1234 and groupid 5678.

$ mkdir home/myname
$ chown 1234:5678 home/myname

When you create a server in the AWS Transfer Family console, select Amazon EFS as your storage service in the Step 4 section Choose a domain.

When the server is enabled and in an online state, you can add users to your server. On the Servers page, select the check box of the server that you want to add a user to and choose Add user.

In the User configuration section, you can specify the username, uid (e.g., 1234), gid (e.g., 5678), IAM role, and the Amazon EFS file system as the user’s home directory. You can optionally specify a directory within the file system that will be the user’s landing directory. For the identity type, you use service-managed SSH keys. If you want to use passwords instead, you can use a custom identity provider with AWS Secrets Manager.

Amazon EFS uses POSIX IDs which consist of an operating system user id, group id, and secondary group id to control access to a file system. When setting up your user, you can specify the username, user’s POSIX configuration, and an IAM role to access the EFS file system. To learn more about configuring ownership of sub-directories in EFS, visit the documentation.
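
The same server and user setup can be sketched with the AWS CLI; the server ID, file system ID, role ARN, and public key below are placeholders:

# Create an SFTP-enabled Transfer Family server backed by Amazon EFS
aws transfer create-server \
  --domain EFS \
  --protocols SFTP \
  --identity-provider-type SERVICE_MANAGED

# Create a user whose POSIX identity owns the target directory
aws transfer create-user \
  --server-id s-0123456789abcdef0 \
  --user-name myname \
  --role arn:aws:iam::111122223333:role/transfer-efs-access \
  --home-directory /fs-0123456789abcdef0/home/myname \
  --posix-profile Uid=1234,Gid=5678 \
  --ssh-public-key-body "ssh-rsa AAAAB3Nza...example"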

Once the users have been configured, you can transfer files using the AWS Transfer Family service by specifying the transfer operation in a client. When your user authenticates successfully with their file transfer client, they are placed directly in the specified home directory, or at the root of the specified EFS file system.

$ sftp myname@<your-transfer-server-endpoint>

sftp> cd /fs-23456789/home/myname
sftp> ls -l
-rw-r--r-- 1 1234 5678 3486 Jan 04 14:59 my-file.txt
sftp> put my-newfile.txt
sftp> ls -l
-rw-r--r-- 1 1234 5678 3486 Jan 04 14:59 my-file.txt
-rw-r--r-- 1 1234 5678 1002 Jan 04 15:22 my-newfile.txt

Most SFTP/FTPS/FTP commands are supported on the new EFS file system. You can refer to the list of available commands for FTP and FTPS clients in the documentation.

Command   Amazon S3                            Amazon EFS
cd        Supported                            Supported
ls/dir    Supported                            Supported
pwd       Supported                            Supported
put       Supported                            Supported
get       Supported                            Supported, including resolving symlinks
rename    Supported (only file)                Supported (file or folder)
chown     Not supported                        Supported (root only)
chmod     Not supported                        Supported (root only)
chgrp     Not supported                        Supported (root or owner only)
ln -s     Not supported                        Not supported
mkdir     Supported                            Supported
rm        Supported                            Supported
rmdir     Supported (non-empty folders only)   Supported
chmtime   Not supported                        Supported

You can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, read operations, and metrics for data uploaded and downloaded using your server. To learn more on how to enable CloudWatch logging, visit the documentation.

Available Now
AWS Transfer Family support for Amazon EFS file systems is available in all AWS Regions where AWS Transfer Family is available. There are no additional AWS Transfer Family charges for using Amazon EFS as the storage backend. With Amazon EFS storage, you pay only for what you use. There is no need to provision storage in advance and there are no minimum commitments or up-front fees.

To learn more, take a look at the FAQs and the documentation. Please send feedback to the AWS forum for AWS Transfer Family or through your usual AWS support contacts.

Learn all the details about AWS Transfer Family to access Amazon EFS file systems and get started today.

Channy;