All posts by Betsy Chernoff

Optimizing NGINX load balancing on Amazon EC2 A1 instances

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/compute/optimizing-nginx-load-balancing-on-amazon-ec2-a1-instances/

This post is contributed by Geoff Blake | Sr System Development Engineer

In a previous post, Optimizing Network Intensive Workloads on Amazon EC2 A1 Instances, I provided general guidance on tuning network-intensive workloads on A1 instances using Memcached as the example use case.

NGINX is another network-intensive application that, like Memcached, is a good fit for the A1 instances. This post describes how to configure NGINX as a load balancer on A1 instances for optimal performance using Amazon Linux 2, highlighting the important tuning parameters. You can extract up to a 30% performance benefit with these tunings over the default configuration on A1. Depending on your particular data rates, processing required per request, instance size, and chosen AMI, the exact values could differ for your scenario, but the methodology described here still applies.

IRQ affinity and receive packet steering

Turning off irqbalance, pinning IRQs, and distributing network processing to specific cores helps the performance of NGINX when it runs on A1 instances.

Unlike Memcached in the previous post, NGINX does not benefit from using receive packet steering (RPS) to spread network processing among all cores or isolating processing to a subset of cores. It is better to allow NGINX access to all cores, while keeping IRQs and network processing isolated to a subset of cores.

Using a modified version of the script from the previous post, you can set IRQs, RPS, and NGINX workers to the following mappings (a sketch of such a script appears after the table). Finding the optimal balance of IRQs, RPS, and worker mappings can improve performance by up to 10% on a1.4xlarge instances.

Instance type | IRQ settings | RPS settings | NGINX workers
a1.2xlarge    | Core 0, 4    | Core 0, 4    | Run on cores 0-7
a1.4xlarge    | Core 0, 8    | Core 0, 8    | Run on cores 0-15
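
The following is a minimal sketch of such a script for an a1.4xlarge, assuming the ENA interface is named eth0 and that its IRQs should alternate between cores 0 and 8. IRQ naming and queue counts vary by driver and instance size, so treat this as a starting point rather than a definitive script.

# Stop irqbalance so it does not overwrite the manual pinning below
sudo systemctl stop irqbalance

# Pin eth0 IRQs alternately to core 0 (mask 0x1) and core 8 (mask 0x100)
i=0
for irq in $(awk -F: '/eth0/ {gsub(/ /, "", $1); print $1}' /proc/interrupts); do
    if [ $((i % 2)) -eq 0 ]; then mask=1; else mask=100; fi
    echo $mask | sudo tee /proc/irq/$irq/smp_affinity > /dev/null
    i=$((i + 1))
done

# Steer receive packet processing to cores 0 and 8 (hex mask 0x101)
for rxq in /sys/class/net/eth0/queues/rx-*; do
    echo 101 | sudo tee $rxq/rps_cpus > /dev/null
done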

NGINX access logging

For production deployments of NGINX, logging is critically important for monitoring the health of servers and debugging issues.

On the larger a1.4xlarge instance type, logging each request can become a performance bottleneck in certain situations. To alleviate this issue, tune the access_log configuration with the buffer modifier. Only a small amount of buffering, on the order of 8 KB, is necessary to keep logging from becoming a bottleneck. This tuning parameter alone can give a significant boost of 20% or more on a1.4xlarge instances, depending on the traffic profile.
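
For example, a buffered access_log directive in the http context might look like the following; the log path and format are illustrative, so adjust them to your deployment:

# Buffer log writes in 8 KB chunks; flush at least once per minute
access_log /var/log/nginx/access.log combined buffer=8k flush=1m;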

Additional Linux tuning parameters

The Linux networking stack is tuned to conserve resources for general use cases. When running NGINX as a load balancer, the server must be tuned to give the network stack considerably more resources than the default amount, to prevent dropped connections and packets. Here are the key parameters to tune (an example sysctl configuration follows the list):

  • net.core.somaxconn: Maximum number of backlogged sockets allowed. Increase this to 4096 or more to prevent dropping connection requests.
  • net.ipv4.tcp_max_syn_backlog: Maximum number of outstanding SYN requests. Set this to the same value as net.core.somaxconn.
  • net.ipv4.ip_local_port_range: To avoid prematurely running out of connections with clients, set this to a larger ephemeral port range of 1024 65535.
  • net.core.rmem_max, net.core.wmem_max, net.ipv4.tcp_rmem, net.ipv4.tcp_wmem, net.ipv4.tcp_mem: Socket and TCP buffer settings. Tune these to be larger than the default. Setting the maximum buffer sizes to 8 MB should be sufficient.
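
The following sketch shows these settings as a sysctl drop-in file. The values are examples consistent with the guidance above; validate them against your own workload. Note that net.ipv4.tcp_mem is measured in memory pages rather than bytes, so it is omitted here and can usually be left at the distribution default.

# /etc/sysctl.d/99-nginx-lb.conf (example values)
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.ip_local_port_range = 1024 65535

# Raise the maximum socket buffer sizes to 8 MB
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608

# min / default / max TCP buffer sizes in bytes
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608

Apply the settings with sudo sysctl --system.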

Additional NGINX configuration parameters

To extract the most out of an NGINX load balancer on A1 instances, review the following NGINX parameters, most of which should be set higher than their default values (a configuration sketch follows the list):

  • worker_processes: Keeping this set to the default of auto works well on A1.
  • worker_rlimit_nofile: Set this to a high value such as 65536 to allow many connections and access to files.
  • worker_connections: Set this to a high value such as 49152 to cover most of the ephemeral port range.
  • keepalive_requests: The number of requests that a downstream client can make before a connection is closed. Setting this to a reasonably high number such as 10000 helps prevent connection churn and ephemeral port exhaustion.
  • keepalive: Set this in your upstream blocks to keep connections from each worker process to your backends open. Use a value that covers your total number of backends plus expected growth, such as 100.
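
Putting these together, a minimal sketch of the relevant nginx.conf sections might look like the following. The upstream name and backend addresses are placeholders; note that upstream keepalive connections also require HTTP/1.1 and a cleared Connection header on the proxied requests.

worker_processes auto;
worker_rlimit_nofile 65536;

events {
    worker_connections 49152;
}

http {
    upstream backends {
        server 10.0.0.10:80;   # placeholder backend addresses
        server 10.0.0.11:80;
        # Keep up to 100 idle connections open to the backends per worker
        keepalive 100;
    }

    server {
        listen 80;
        # Allow many requests per client connection to limit connection churn
        keepalive_requests 10000;
        location / {
            proxy_pass http://backends;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
        }
    }
}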

Summary

Using the above tuning parameters versus the defaults that come with Amazon Linux 2 and NGINX can increase the performance of an NGINX load balancing workload by up to 30% on the a1.4xlarge instance type. Similar, but less dramatic, performance gains were seen on the smaller a1.2xlarge instance type as well. If you have questions about your own workload running on A1 instances, contact us at [email protected].


Why AWS is the best place for your Windows workloads, and how Microsoft is changing their licensing to try to awkwardly force you into Azure

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/compute/why-aws-is-the-best-place-for-your-windows-workloads-and-how-microsoft-is-changing-their-licensing-to-try-to-awkwardly-force-you-into-azure/

This post is contributed by Sandy Carter, Vice President at AWS. It is also located on LinkedIn

Many companies today are considering how to migrate to the cloud to take advantage of the agility and innovation that the cloud brings. Having the right to choose the best provider for your business is critical.

AWS is the best cloud for running your Windows workloads, and our experience running Windows applications has earned our customers’ trust. It’s been more than 11 years since AWS first made it possible for customers to run their Windows workloads on AWS, which is longer than Azure has even been around. According to a report by IDC, we host nearly two times as many Windows Server instances in the cloud as Microsoft. More and more enterprises are entrusting their Windows workloads to AWS because of its greater reliability, higher performance, and lower cost, with the number of enterprise customers using AWS for Windows Server growing more than 400% in the past three years.

In fact, we are seeing a trend of customers moving from Azure to AWS. eMarketer started their digital transformation with Azure, but found performance challenges and higher costs that led them to migrate all of their workloads over to AWS. Why did they migrate? They found a better experience, stronger support, higher availability, and better performance, with 4x faster launch times and 35% lower costs compared to Azure. Ancestry, a leader in consumer genomics, went all-in on development in the cloud, moving 10 PB of data and 400 Windows-based applications in less than 9 months. They also modernized to Linux with .NET Core and leveraged advanced technologies including serverless and containers. With results like that, you can see why organizations like Sysco, Edwards Life Sciences, Expedia, and NextGen Healthcare have chosen AWS to upgrade, migrate, and modernize their Windows workloads.

If you are interested in seeing your cost savings over running on-premises or over running on Azure, send us an email at [email protected] or visit why AWS is the best cloud for Windows.

Deploying an Nginx-based HTTP/HTTPS load balancer with Amazon Lightsail

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/compute/deploying-an-nginx-based-http-https-load-balancer-with-amazon-lightsail/

In this post, I discuss how to configure a load balancer to route web traffic for Amazon Lightsail using NGINX. I define load balancers and explain their value. Then, I briefly weigh the pros and cons of self-hosted load balancers against Lightsail’s managed load balancer service. Finally, I cover how to set up an NGINX-based load balancer inside of a Lightsail instance.

If you feel like you already understand what load balancers are, and the pros and cons of self-hosted load balancers vs. managed services, feel free to skip ahead to the deployment section.

What is a load balancer?

Although load balancers offer many different functionalities, for the sake of this discussion, I focus on one main task: A load balancer accepts your users’ traffic and routes it to the right server.

For example, if I assign the load balancer the DNS name www.example.com, anyone visiting the site first encounters the load balancer. The load balancer then routes the request to one of the backend servers: web-1, web-2, or web-3. The backend servers then respond to the requestor.

A load balancer provides multiple benefits to your application or website. Here are a few key advantages:

  • Redundancy
  • Publicly available IP addresses
  • Horizontally scaled application capacity

Redundancy

Load balancers usually front at least two backend servers. Using something called a health check, they ensure that these servers are online and available to service user requests. If a backend server goes out of service, the load balancer stops routing traffic to that instance. By running multiple servers, you help to ensure that at least one is always available to respond to incoming traffic. As a result, your users aren’t bogged down by errors from a downed machine.

Publicly available IP addresses

Without a load balancer, a server requires a unique IP address to accept an incoming request via the internet. There are a finite number of these IP addresses, and most cloud providers limit the number of statically assigned public IP addresses that you can have.

By using a load balancer, you can task a single public IP address with servicing multiple backend servers. Later in this post, I return to this topic as I discuss configuring a load balancer.

Horizontally scaled application capacity

As your application or website becomes more popular, its performance may degrade. Adding additional capacity can be as easy as spinning up a new instance and placing it behind your load balancer. If demand drops, you can spin down any unneeded instances to save money.

Horizontal scaling means the deployment of additional identically configured servers to handle increased load. Vertical scaling means the deployment of a more powerful server to handle increased load. If you deploy an underpowered server, expect poor performance, whether you have a single server or ten.

Self-managed load balancer vs. a managed service

Now that you have a better understanding of load balancers and the key benefits that they provide, the next question is: How can you get one into your environment?

On one hand, you could spin up a Lightsail load balancer. These load balancers are all managed by AWS and don’t require any patching or maintenance on your part to stay up to date. You only need to name your load balancer and pick the instances to service, and your load balancer is then up and running. If you’re so inclined, you can also get a free SSL (Secure Sockets Layer) certificate with a few extra clicks.

Lightsail load balancers deploy easily and require essentially no maintenance after they’re operational, for $18 per month (at publication time). Because their design prioritizes easy setup and maintenance, they lack some of the advanced configuration options found in other load balancers.

Consequently, you might prefer to configure your own load balancer if you prioritize configuration flexibility or cost reduction. A self-hosted load balancer provides access to many advanced features, and your only hard cost is the instance price.

The downsides of self-hosting are that you are also responsible for the following:

  • Installing the load balancer software.
  • Keeping the software (and the host operating system) updated and secure.

Deploying an NGINX-based load balancer with Lightsail

Although many software-based load balancers are available, I recommend building a solution on NGINX because this wildly popular tool:

  • Is open source/free.
  • Offers great community support.
  • Has a custom Lightsail blueprint.

Prerequisites

This tutorial assumes that you already have your backend servers deployed. These servers should all:

  • Be identically configured.
  • Point to a central backend database.

In other words, the target servers shouldn’t each have database copies installed.

To deploy a centralized database, see Lightsail’s managed database offering.

Because I’ve tailored these instructions to generic website and web app hosting, they may not work with specific applications such as WordPress.

Required prerequisites

Before installing the optional SSL certificate, you need to have the following:

  • A purchased domain name.
  • Permissions to update the DNS servers for that domain.

Optional prerequisites

Although not required, the following prerequisites may also be helpful:

  • Familiarity with accessing instances via SSH.
  • Familiarity with basic Linux commands.

Deploy an NGINX instance

To begin, deploy an NGINX instance in Lightsail, choosing the NGINX blueprint. Make sure that you are deploying it into the same Region as the servers to load balance.

Choose an appropriate instance size for your application, being aware of the amount of RAM and CPU, as well as the data transfer. If you choose an undersized instance, you can always scale it up via a snapshot. However, an oversized instance cannot as easily be scaled down. You may need to rebuild the load balancer on a smaller-sized instance from scratch.

Configure HTTP load balancing

In the following steps, edit the NGINX configuration file to load balance HTTP requests to the appropriate servers.

First, start up an SSH session with your new NGINX instance and change into the appropriate configuration directory:

cd /opt/bitnami/nginx/conf/bitnami/

Make sure that you have the IP addresses for the servers to load balance. In most cases, traffic shouldn’t flow from your load balancer to your instances across the internet. So, make sure to use the instance’s private IP address. You can find this information on the instance management page, in the Lightsail console.

In this example, my three servers have the following private IP addresses:

  • 192.0.2.67
  • 192.0.2.206
  • 192.0.2.198

The configuration file to edit is named bitnami.conf. Open it using your preferred text editor (use sudo to edit the file):

sudo vi bitnami.conf

Clear the contents of the file and add the following code, making sure to substitute the private IP addresses of the servers to load balance:

# Define the pool of servers to load balance
upstream webservers {
    server 192.0.2.67 max_fails=3 fail_timeout=30s;
    server 192.0.2.206 max_fails=3 fail_timeout=30s;
    server 192.0.2.198 max_fails=3 fail_timeout=30s;
}

In the code, you used the keyword upstream to define a pool (named webservers) of three servers to which NGINX should route traffic. If you don’t specify how NGINX should route each request, it defaults to round-robin server routing. Two other routing methods are available:

  • Least connected, which routes to the server with the fewest number of active connections.
  • IP hash, which uses a hashing function to enable sticky sessions (otherwise called session persistence).

Discussion on these methods is out of scope for this post. For more information, see Using nginx as HTTP load balancer.
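
For reference, selecting either method is a one-directive change at the top of the upstream block. For example, least-connected routing would look like the following (substituting ip_hash; selects IP hash instead):

upstream webservers {
    # Route each request to the server with the fewest active connections
    least_conn;
    server 192.0.2.67 max_fails=3 fail_timeout=30s;
    server 192.0.2.206 max_fails=3 fail_timeout=30s;
    server 192.0.2.198 max_fails=3 fail_timeout=30s;
}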

Additionally, I recommend setting max_fails and fail_timeout to define health checks. Based on the configuration above, NGINX marks a server as down if it fails to respond, or responds with an error, three times within 30 seconds. If a server is marked down, NGINX continues to probe it every 30 seconds; when it receives a positive response, it marks the server as live.

After the code you just inserted into the file, add the following:

# Forward traffic on port 80 to one of the servers in the webservers group
server {
    listen 80;
    location / {
        proxy_pass http://webservers;
    }
}

This code tells NGINX to listen for requests on port 80, the default port for unencrypted web (HTTP) traffic, and to forward such requests to one of the servers in the webservers group defined by the upstream keyword.

Save the file and quit back to your command prompt.

For the changes to take effect, restart the NGINX service using the Bitnami control script:

sudo /opt/bitnami/ctlscript.sh restart nginx

At this point, you should be able to visit the IP address of your NGINX instance in your web browser. The load balancer then routes the request to one of the servers defined in your webservers group.

For reference, here’s the full bitnami.conf file.

# Define the pool of servers to load balance
upstream webservers {
    server 192.0.2.67 max_fails=3 fail_timeout=30s;
    server 192.0.2.206 max_fails=3 fail_timeout=30s;
    server 192.0.2.198 max_fails=3 fail_timeout=30s;
}

# Forward traffic on port 80 to one of the servers in the webservers group
server {
    listen 80;
    location / {
        proxy_pass http://webservers;
    }
}

Configure HTTPS load balancing

Configuring your load balancer to use SSL requires three steps:

  1. Ensure that you have a domain record for your NGINX load balancer instance.
  2. Obtain and install a certificate.
  3. Update the NGINX configuration file.

If you have not already done so, assign your NGINX instance an entry with your DNS provider. Remember, the load balancer is the address that your users use to reach your site. For instance, it might be appropriate to create a record that points www.yourdomain.com at your NGINX load balancer. If you need help configuring the DNS in Lightsail, see DNS in Amazon Lightsail.

Similarly, to configure your NGINX instance to use a free SSL certificate from Let’s Encrypt, follow steps 1–7 in Tutorial: Using Let’s Encrypt SSL certificates with your Nginx instance in Amazon Lightsail. You handle step 8 later in this post.

After you configure NGINX to use the SSL certificate and update your DNS, you must modify the configuration file to allow for HTTPS traffic.

Again, use a text editor to open the bitnami.conf file:

sudo vi bitnami.conf

Add the following code to the bottom of the file:

server {
     listen 443 ssl;
     location / {
          proxy_pass http://webservers;
     }
     ssl_certificate server.crt;
     ssl_certificate_key server.key;
     ssl_session_cache shared:SSL:1m;
     ssl_session_timeout 5m;
     ssl_ciphers HIGH:!aNULL:!MD5;
     ssl_prefer_server_ciphers on;
}

This code closely resembles the HTTP code added previously. However, in this case, it tells NGINX to accept SSL connections on the secure port 443 (HTTPS) and forward them to one of your web servers. The remaining directives tell NGINX where to find the SSL certificate and key, and set various SSL parameters.

Here again, restart the NGINX service:

sudo /opt/bitnami/ctlscript.sh restart nginx

Optional steps

At this point, you should be able to access your website using both HTTP and HTTPS. However, there are a couple of optional steps to consider, including:

  • Shutting off direct HTTP/HTTPS access to your web servers.
  • Automatically redirecting incoming load balancer HTTP requests to HTTPS.

It’s probably not a great idea to allow people to access your load-balanced servers directly. Fortunately, you can easily restrict access:

  1. Navigate to each instance’s management page in the Lightsail console.
  2. Choose Networking.
  3. Remove the HTTP and HTTPS (if enabled) firewall port entries.

This restriction shuts down access via the internet while still allowing communications between the load balancer and the servers over the private AWS network.

In many cases, there’s no good reason to allow access to a website or web app over unencrypted HTTP. However, the load balancer configuration described to this point still accepts HTTP requests. To automatically reroute requests from HTTP to HTTPS, make one small change to the configuration file:

  1. Edit the conf file.
  2. Find this code:
server {
    listen 80;
    location / {
        proxy_pass http://webservers;
    }
}
  3. Replace it with this code:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

The replacement code instructs NGINX to respond to HTTP requests with a 301 “page has been permanently redirected” response that includes the new page address. The new address is simply the requested one, accessed over HTTPS instead of HTTP.

For this change to take effect, you must restart NGINX:

sudo /opt/bitnami/ctlscript.sh restart nginx
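
After the restart, you can verify the redirect from any machine with curl, assuming www.example.com points at your load balancer. The response should be a 301 with a Location header containing the HTTPS version of the requested URL:

curl -I http://www.example.com/
# Expected response headers (abbreviated):
#   HTTP/1.1 301 Moved Permanently
#   Location: https://www.example.com/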

For reference, this is what the final bitnami.conf file looks like:

# Define the pool of servers to load balance
upstream webservers {
    server 192.0.2.67 max_fails=3 fail_timeout=30s;
    server 192.0.2.206 max_fails=3 fail_timeout=30s;
    server 192.0.2.198 max_fails=3 fail_timeout=30s;
}

# Redirect traffic on port 80 to use HTTPS
server {
    listen 80;
    return 301 https://$host$request_uri;
}

# Forward traffic on port 443 to one of the servers in the webservers group
server {
    listen 443 ssl;
    location / {
        proxy_pass http://webservers;
    }
    ssl_certificate server.crt;
    ssl_certificate_key server.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
}

Conclusion

This post explained how to configure a load balancer to route web traffic for Amazon Lightsail using NGINX. I defined load balancers and explained their utility. I weighed the pros and cons of self-hosted load balancers against Lightsail’s managed load balancer service. Finally, I walked you through how to set up an NGINX-based load balancer inside of a Lightsail instance.

Thanks for reading this post. If you have any questions, feel free to contact me on Twitter, @mikegcoleman or visit the Amazon Lightsail forums.

Celebrating 10 years of Microsoft Windows Server and SQL Server on AWS! Happy Birthday!

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/compute/celebrating-10-years-of-microsoft-windows-server-and-sql-server-on-aws-happy-birthday/

Contributed by Sandy Carter, Vice President of Windows on AWS and Enterprise Workloads

Happy birthday to all of our AWS customers! In particular, I want to call out Autodesk, RightScale (now part of Flexera), and Suunto (Movescount) – just a few of our customers who have been running Microsoft Windows Server and Microsoft SQL Server on AWS for 10 years! Thank you for your business and seeing the value of Windows on AWS!

So many customers trust their Windows workloads on AWS because of our experience, reliability, security, and performance. IDC (a leading IT Analyst) estimates that AWS accounted for approximately 57.7% of total Windows instances in public cloud IaaS during 2017 – nearly 2x the nearest cloud provider.

Our Windows on AWS customers benefit from the millions of active customers per month across the AWS Cloud. Ancestry, the global leader in family history and consumer genomics, has over 6,000 instances on AWS. Nat Natarajan, Executive Vice President of Product and Technology at Ancestry, just spoke with us in Seattle. I loved hearing how they are using Windows on AWS.

“AWS provides us with the flexibility we need to stay at the forefront of consumer genomics, as the science and technology in the space continues to rapidly evolve. We’re confident that AWS provides us with unmatched scalability, security, and privacy.”

Reliability is one of the reasons why NextGen Healthcare, provider of tailored healthcare solutions for ambulatory practices and healthcare providers around the world, trusts AWS to run their SQL Server databases. One of the foundations of our reliability is how we design our Regions. AWS has 18 Regions around the globe, each of which is made up of two or more Availability Zones. Availability Zones are physically separate locations with independent infrastructure, engineered to be insulated from failures in other Availability Zones. Today we have 55 Availability Zones across these 18 Regions, and we’ve announced plans for 12 more Availability Zones and four more Regions.

I talk to so many of our customers every week who tell me that their Windows and SQL Server workloads run better on AWS. For example, eMarketer enables thousands of companies around the world to better understand markets and consumer behavior, helping them get the data they need to succeed in a competitive and fast-changing digital economy. They recently told me how they started their digital transformation initiative on another public cloud.

“We chose to move our Microsoft workloads to AWS because of your extensive migration experience, higher availability, and better performance. We are seeing 35% cost savings and thrilled to see 4x faster launch times now.” – Ryan Hoffman, Senior Vice President of Engineering

One of the things I get asked about more and more is, can you modernize those Windows apps as well? Using serverless compute on AWS Lambda, Windows containers, and Amazon Machine Learning (Amazon ML), you can really take those Windows apps into the 21st century! For example, Mitek, the global leader in mobile capture and identity verification software solutions, wanted to modernize their Mobile Verify application to accelerate integration across multiple regions and environments. They leveraged Windows containers using Amazon ECS so they could focus their resources on developing more features instead of servers, VMs, and patching. They reduced their deployment time from hours to minutes!

We know .NET developers love using their existing tools. We created tools such as AWS Toolkit for Visual Studio and AWS Tools for Visual Studio Team Services (VSTS) to provide integration into many popular AWS services. Agero tells us how easy it is for their .NET developers to get started with AWS. Agero provides connected vehicle data, roadside assistance, and claims management services to over 115 million drivers and leading insurers.

“We experimented with AWS Elastic Beanstalk and found it was the simplest, fastest way to get .NET code running in AWS.” Bernie Gracy, Chief Digital Officer

Of course, most of our customers use Microsoft Active Directory on-premises for directory-based identity-related services, and some also use Azure AD to manage users with Office365. Customers use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) to easily integrate AWS resources with on-premises AD and Azure AD. That way, there’s no data to be synchronized or replicated from on-premises to AWS. AWS Managed AD lets you use the same administration tools and built-in features such as single sign-on (SSO) or Group Policy as you use on-premises. And we now enable our customers to share a single directory with multiple AWS accounts within an AWS Region!

This birthday is significant to us here at Amazon Web Services as we obsess over our customers. Over 90% of our roadmap items are driven directly from you! With hundreds of thousands of active customers running Windows on AWS, that’s a lot of great ideas. In fact, did you know that our premier serverless engine, AWS Lambda, which lets you run .NET Core without provisioning or managing servers, came directly from you, our customers?

Some of you wanted an easier way to jumpstart Windows Server projects on AWS, which led us to build Amazon Lightsail, giving you compute, storage and networking with a low, predictable price. Based on feedback from machine learning practitioners and researchers, we launched our AWS Deep Learning AMI for Windows so you can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks including Apache MXNet, Caffe and Tensorflow.

Licensing Options

Our customer obsession means that we are committed to helping you lower your total cost of ownership (TCO). When I talk to customers, they tell me they appreciate that AWS does not approach their cloud migration journey as a way to lock-in additional software license subscriptions. For example, TSO Logic, one of our AWS Partner Network (APN) partners, described in a blog the work we did with one of our joint customers, a privately-held U.S. company with more than 70,000 employees, operating in 50 countries. We helped this customer save 22 percent on their SQL Server workloads by optimizing core counts and reducing licensing costs.

Delaware North, a global leader in hospitality management and food service management, uses our pay-as-you-go licenses to scale-up their SQL Server instances during peak periods, without having to pay for those licenses for multiple years. Many customers also use License Mobility benefits to bring their Microsoft application licenses to AWS, and other customers, such as Xero, the accounting software platform for small and medium-sized businesses, reduce costs by bringing their own Windows Server Datacenter Edition and SQL Server Enterprise Edition licenses to AWS on our Amazon EC2 Dedicated Hosts. And, we also have investment programs to help qualified customers offset Microsoft licensing costs when migrating to the AWS Cloud.

We know that many of you are thinking about what to do with legacy applications still using the 2008 versions of SQL Server and Windows Server. I hear from many leaders who don’t want to base their cloud strategy on software end-of-support. AWS provides flexibility to easily upgrade and modernize your legacy workloads. ClickSoftware Technologies, the SaaS provider of field service management solutions, found how easy it was to upgrade to a current version of SQL Server on AWS.

“After migrating to AWS, we upgraded to SQL Server 2016 using SQL Server 2008 in compatibility mode, which meant we did not have to make any application changes, and now have a fully supported version of SQL Server.” – Udi Keidar, VP of Cloud Services, ClickSoftware

Did you know you can also bring your Microsoft licenses to VMware Cloud on AWS? VMware Cloud on AWS is a great solution when you need to execute a fast migration – whether that’s due to running out of data center space, an upcoming lease expiration, or a natural disaster such as the recent hurricanes. Massachusetts Institute of Technology (MIT) started with a proof of concept (POC) and moved their initial 300 VMs in less than 96 hours, with just one employee. Over the next three months, they migrated all of their 2,800 production VMs to VMware Cloud on AWS.

Looking Forward

Next month, I hope you’ll join me at AWS re:Invent to learn “What’s New with Microsoft and .NET on AWS” as well as the dozens of other sessions we have for Windows on AWS for IT leaders, DevOps engineers, system administrators, DBAs and .NET developers. We have so many new innovations to share with you!

I want to thank each of you for trusting us these last ten years with your most critical business applications and allowing us to continue to help you innovate and transform your business. If you’d like to learn more about how we can help you bring your applications built on Windows Server and SQL Server to the cloud, please check out the following resources and events or contact us!

  1. https://aws.amazon.com/windows/
  2. https://aws.amazon.com/sql/
  3. https://aws.amazon.com/directoryservice/
  4. https://aws.amazon.com/windows/windows-study-guide/
  5. AWS .NET Developer Center

Upcoming Events

  1. October 23, 2018 Webinar: Migrating Microsoft SQL Server 2008 Databases to AWS
  2. Live with AM & Nicki – A fun new twitch.tv series to show you how to build a modern web application on AWS!
  3. November 26-30 2018 re:Invent: Check out the complete list of sessions for Windows, SQL Server, Active Directory and .NET on AWS!


AWS Online Tech Talks – April & Early May 2018

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-april-early-may-2018/

We have several upcoming tech talks in the month of April and early May. Come join us to learn about AWS services and solution offerings. We’ll have AWS experts online to help answer questions in real time. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

April & early May — 2018 Schedule

Compute

April 30, 2018 | 01:00 PM – 01:45 PM PT – Best Practices for Running Amazon EC2 Spot Instances with Amazon EMR (300) – Learn best practices for scaling big data workloads, as well as processing, storing, and analyzing big data securely and cost-effectively with Amazon EMR and Amazon EC2 Spot Instances.

May 1, 2018 | 01:00 PM – 01:45 PM PT – How to Bring Microsoft Apps to AWS (300) – Learn more about how to save significant money by bringing your Microsoft workloads to AWS.

May 2, 2018 | 01:00 PM – 01:45 PM PT – Deep Dive on Amazon EC2 Accelerated Computing (300) – Get a technical deep dive on how AWS’ GPU and FPGA-based compute services can help you optimize and accelerate your ML/DL and HPC workloads in the cloud.

Containers

April 23, 2018 | 11:00 AM – 11:45 AM PT – New Features for Building Powerful Containerized Microservices on AWS (300) – Learn about how this new feature works and how you can start using it to build and run modern, containerized applications on AWS.

Databases

April 23, 2018 | 01:00 PM – 01:45 PM PT – ElastiCache: Deep Dive Best Practices and Usage Patterns (200) – Learn about the Redis-compatible in-memory data store and cache with Amazon ElastiCache.

April 25, 2018 | 01:00 PM – 01:45 PM PT – Intro to Open Source Databases on AWS (200) – Learn how to tap the benefits of open source databases on AWS without the administrative hassle.

DevOps

April 25, 2018 | 09:00 AM – 09:45 AM PT – Debug your Container and Serverless Applications with AWS X-Ray in 5 Minutes (300) – Learn how AWS X-Ray makes debugging your container and serverless applications fun.

Enterprise & Hybrid

April 23, 2018 | 09:00 AM – 09:45 AM PT – An Overview of Best Practices of Large-Scale Migrations (300) – Learn about the tools and best practices for migrating to AWS at scale.

April 24, 2018 | 11:00 AM – 11:45 AM PT – Deploy your Desktops and Apps on AWS (300) – Learn how to deploy your desktops and apps on AWS with Amazon WorkSpaces and Amazon AppStream 2.0.

IoT

May 2, 2018 | 11:00 AM – 11:45 AM PT – How to Easily and Securely Connect Devices to AWS IoT (200) – Learn how to easily and securely connect devices to the cloud and reliably scale to billions of devices and trillions of messages with AWS IoT.

Machine Learning

April 24, 2018 | 09:00 AM – 09:45 AM PT – Automate for Efficiency with Amazon Transcribe and Amazon Translate (200) – Learn how you can increase the efficiency and reach of your operations with Amazon Translate and Amazon Transcribe.

April 26, 2018 | 09:00 AM – 09:45 AM PT – Perform Machine Learning at the IoT Edge using AWS Greengrass and Amazon SageMaker (200) – Learn more about developing machine learning applications for the IoT edge.

Mobile

April 30, 2018 | 11:00 AM – 11:45 AM PT – Offline GraphQL Apps with AWS AppSync (300) – Come learn how to enable real-time and offline data in your applications with GraphQL using AWS AppSync.

Networking

May 2, 2018 | 09:00 AM – 09:45 AM PT – Taking Serverless to the Edge (300) – Learn how to run your code closer to your end users in a serverless fashion. Also, David Von Lehman from Aerobatic will discuss how they used Lambda@Edge to reduce latency and cloud costs for their customers’ websites.

Security, Identity & Compliance

April 30, 2018 | 09:00 AM – 09:45 AM PT – Amazon GuardDuty – Let’s Attack My Account! (300) – Amazon GuardDuty Test Drive – Practical steps on generating test findings.

May 3, 2018 | 09:00 AM – 09:45 AM PT – Protect Your Game Servers from DDoS Attacks (200) – Learn how to use the new AWS Shield Advanced for EC2 to protect your internet-facing game servers against network layer DDoS attacks and application layer attacks of all kinds.

Serverless

April 24, 2018 | 01:00 PM – 01:45 PM PT – Tips and Tricks for Building and Deploying Serverless Apps In Minutes (200) – Learn how to build and deploy apps in minutes.

Storage

May 1, 2018 | 11:00 AM – 11:45 AM PT – Building Data Lakes That Cost Less and Deliver Results Faster (300) – Learn how Amazon S3 Select and Amazon Glacier Select increase application performance by up to 400% and reduce total cost of ownership by extending your data lake into cost-effective archive storage.

May 3, 2018 | 11:00 AM – 11:45 AM PT – Integrating On-Premises Vendors with AWS for Backup (300) – Learn how to work with AWS and technology partners to build backup & restore solutions for your on-premises, hybrid, and cloud native environments.

Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk, and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book, Serverless Design Patterns, with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.


Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS-related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot that helps your DevOps team detect and solve incidents on AWS.

Together with his brother Andreas Wittig, Michael is actively creating AWS-related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io, where they share their knowledge about AWS with the community. The Wittig brothers have also published a number of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers co-organize the AWS user group in Stuttgart.


Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT Architecture and Management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Spanish speakers worldwide.

Fernando founded a LinkedIn Group, a Slack Community and a YouTube channel all of them named “AWS en Español”, and started to run a monthly webinar via YouTube streaming where different leaders discuss aspects and challenges around AWS Cloud.

During the last 18 months he’s been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups were founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.


Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010 and has been essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures, and with top management, where he advises on cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.