Tag Archives: Price Reduction

AWS Inter-Region Data Transfer (DTIR) Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-inter-region-data-transfer-dtir-price-reduction/

If you build AWS applications that span two or more AWS regions, this post is for you. We are reducing the cost to transfer data from the South America (São Paulo), Middle East (Bahrain), Africa (Cape Town), and Asia Pacific (Sydney) Regions to other AWS regions as follows, effective May 1, 2020:

| Region | Old Rate ($/GB) | New Rate ($/GB) |
|---|---|---|
| South America (São Paulo) | 0.1600 | 0.1380 |
| Middle East (Bahrain) | 0.1600 | 0.1105 |
| Africa (Cape Town) | 0.1800 | 0.1470 |
| Asia Pacific (Sydney) | 0.1400 | 0.0980 |

Consult the price list to see inter-region data transfer prices for all AWS regions.

Jeff;

 

EC2 Price Reduction – For EC2 Instance Savings Plans and Standard Reserved Instances

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/ec2-price-reduction-for-ec2-instance-saving-plans-and-standard-reserved-instances/

It is my great pleasure to tell you about a price reduction for Amazon Elastic Compute Cloud (EC2) customers who plan to use Standard Reserved Instances or EC2 Instance Savings Plans. The price changes are already in effect, so anyone buying new RIs or a new EC2 Instance Savings Plan will be able to take advantage of the lower prices.

Our engineering investments, coupled with our scale and our time-tested ability to manage our capacity, allow us to identify and pass on cost savings to you.

The price reduction you receive depends on the region you choose, whether you take out a 1 or 3 year term, and the instance family you commit to in your agreement. Price reductions range from 1% to as much as 18% off what you were previously paying. Below I’ve given a snapshot of some of the savings across the M5, C5, and R5 instance types; there are also price reductions for the C5n, C5d, M5a, M5n, M5ad, M5dn, R5a, R5n, R5d, R5ad, R5dn, T3, T3a, Z1d, and A1 instance types.

| Region | M5 (1-Year) | C5 (1-Year) | R5 (1-Year) | M5 (3-Year) | C5 (3-Year) | R5 (3-Year) |
|---|---|---|---|---|---|---|
| Europe (Stockholm) | 10% | 8% | 0% | 13% | 11% | 5% |
| Middle East (Bahrain) | 10% | 8% | 0% | 13% | 11% | 5% |
| Asia Pacific (Mumbai) | 2% | 7% | 0% | 2% | 5% | 4% |
| Europe (Paris) | 10% | 8% | 0% | 13% | 11% | 5% |
| US East (Ohio) | 2% | 1% | 0% | 2% | 0% | 5% |
| Europe (Ireland) | 10% | 8% | 0% | 13% | 11% | 5% |
| Europe (Frankfurt) | 12% | 9% | 0% | 18% | 13% | 5% |
| South America (São Paulo) | 0% | 12% | 0% | 4% | 7% | 4% |
| Asia Pacific (Hong Kong) | 3% | 7% | 0% | 2% | 4% | 5% |
| US East (N. Virginia) | 2% | 0% | 0% | 2% | 0% | 5% |
| Asia Pacific (Seoul) | 0% | 7% | 0% | 6% | 12% | 5% |
| Asia Pacific (Osaka) | 10% | 12% | 0% | 14% | 16% | 5% |
| Europe (London) | 10% | 8% | 0% | 14% | 11% | 5% |
| Asia Pacific (Tokyo) | 10% | 12% | 0% | 14% | 16% | 5% |
| AWS GovCloud (US-East) | 8% | 0% | 0% | 5% | 0% | 5% |
| AWS GovCloud (US-West) | 8% | 0% | 0% | 5% | 0% | 5% |
| US West (Oregon) | 2% | 0% | 0% | 2% | 0% | 5% |
| US West (N. California) | 16% | 8% | 0% | 15% | 7% | 5% |
| Asia Pacific (Singapore) | 2% | 6% | 0% | 2% | 4% | 5% |
| Asia Pacific (Sydney) | 9% | 1% | 0% | 13% | 3% | 5% |
| Canada (Central) | 1% | 6% | 0% | 2% | 0% | 5% |

There are a few caveats that I’d like to make you aware of. First, this price reduction is not available for Convertible Reserved Instances, Compute Savings Plans, or On-Demand Instances. Second, Windows instances will see a different price reduction due to the licensing involved.

The price reduction is available in all regions effective immediately. Going forward, you will pay the new lower price for EC2.

— Martin

90%+ Price Reduction for AWS IoT Jobs, Globally Available

Post Syndicated from Alejandra Quetzalli original https://aws.amazon.com/blogs/aws/new-price-reduction-for-aws-iot-jobs-globally-available/

I have good news for AWS customers using the AWS IoT Device Management service. There has been a 90%+ price reduction for AWS IoT Device Jobs! 🥳

Let’s check out the new prices:

AWS IoT Jobs Pricing Reduction table

🤖What is IoT?

IoT (Internet of Things) represents the billions (literally!) of physical devices around the world that are connected to the internet, collecting and sharing data.

🤷🏻‍♀️What is AWS IoT Device Jobs?

AWS IoT Device Jobs (‘Jobs’ for short) enables customers to trigger remote actions on one or more of their IoT devices when connected to the AWS IoT Core service. Some examples of remote actions that can be triggered with ‘Jobs’ are OTA (Over-the-Air) firmware updates, device reboots, factory resets, and configuration changes.

‘Jobs’ is a feature of the AWS IoT Device Management service. (AWS IoT Device Management is a service that enables customers to register, organize, monitor, and remotely manage devices connected to AWS IoT Core.)

👩🏻‍💻How do customers use Jobs today?

A ‘Job’ is a set of operations that customers can define in the cloud (for example, in the AWS Management Console). A single ‘Job’ can be targeted at one connected device or a group of devices. For example, a customer can define a single ‘Job’ to perform an OTA (Over-the-Air) update on 100 devices. The Job then executes 100 remote actions, one to update each individual device!

Customers are charged based on the number of remote actions executed.
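
If you prefer to work from code rather than the console, here is a minimal sketch of creating a Job with the AWS SDK for Python (boto3). The thing group name, job ID, and job document are illustrative placeholders, not values from this post:

```python
# Minimal sketch: create an AWS IoT Job that targets a thing group (names are placeholders).
import json
import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Hypothetical thing group containing the devices to update.
group_arn = iot.describe_thing_group(thingGroupName="firmware-fleet")["thingGroupArn"]

response = iot.create_job(
    jobId="ota-update-2020-04",          # must be unique in the account/Region
    targets=[group_arn],                 # one remote action is executed per targeted device
    document=json.dumps({                # job document interpreted by the device-side agent
        "operation": "ota_update",
        "firmwareUrl": "https://example.com/firmware/v1.2.3.bin",
    }),
    targetSelection="SNAPSHOT",          # run once against the group's current members
)
print("Created job:", response["jobArn"])
```

Each device that receives and executes the job document counts as one remote action for billing purposes.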

☁Lastly…

At AWS, we love to pass savings along to our customers! This is why we focus on driving down our costs over time. These new savings are now globally available 🌎🌍🌏 to our customers.

👉🏽To learn more, visit the AWS IoT Device Management page, or get started with AWS IoT Device Management documentation.

Thanks for your time!
~Alejandra 💁🏻‍♀️y Canela 🐾

AWS Data Transfer Out (DTO) 40% Price Reduction in South America (São Paulo) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-transfer-out-dto-40-price-reduction-in-south-america-sao-paulo-region/

I have good news for AWS customers using our South America (São Paulo) Region. Effective April 1, 2020 we are reducing prices for Data Transfer Out to the Internet (DTO) from the South America (São Paulo) Region by 40%. Data Transfer in remains free.

Here are the new prices for DTO from EC2, S3, and many other AWS services to the Internet:

| Monthly Usage Tier | Previous AWS Rate ($/GB) | Price Adjustment | New AWS Rate ($/GB) |
|---|---|---|---|
| Less than 10 TB | 0.250 | -40% | 0.150 |
| Less than 50 TB | 0.230 | -40% | 0.138 |
| Less than 150 TB | 0.210 | -40% | 0.126 |
| More than 150 TB | 0.190 | -40% | 0.114 |
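
To see what the tiered rates mean for a monthly bill, here is a small Python sketch. The tier boundaries and $/GB rates come from the table above; the 60 TB usage figure is just an example:

```python
# Sketch: monthly Data Transfer Out cost from São Paulo under the old and new tiered rates.
# Rates ($/GB) and tier boundaries come from the table above; the usage figure is illustrative.

OLD_TIERS = [(10_240, 0.250), (51_200, 0.230), (153_600, 0.210), (float("inf"), 0.190)]
NEW_TIERS = [(10_240, 0.150), (51_200, 0.138), (153_600, 0.126), (float("inf"), 0.114)]

def dto_cost(usage_gb: float, tiers) -> float:
    """Bill each GB at the rate of the tier it falls into (upper bounds are cumulative, in GB)."""
    cost, lower = 0.0, 0.0
    for upper, rate in tiers:
        if usage_gb <= lower:
            break
        cost += (min(usage_gb, upper) - lower) * rate
        lower = upper
    return cost

usage_gb = 60 * 1024  # 60 TB transferred out in a month
print(f"Old bill: ${dto_cost(usage_gb, OLD_TIERS):,.2f}")
print(f"New bill: ${dto_cost(usage_gb, NEW_TIERS):,.2f} (40% lower)")
```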

At AWS, we focus on driving down our costs over time. As we do this, we pass the savings along to our customers. This is our 81st price reduction since 2006.

If you want to get started with AWS, the AWS Free Tier includes 15 GB/month of global data transfer out and lets you explore more than 60 AWS services.

Jeff;

 

EC2 Price Reduction in the São Paulo Region (R5 and I3)

Post Syndicated from Julien Simon original https://aws.amazon.com/blogs/aws/ec2-price-reduction-in-the-sao-paulo-region-r5-and-i3/

I’ve got good news for AWS customers using our South America (São Paulo) Region!

Effective February 1, 2020 we are reducing prices for On-Demand, Reserved and Dedicated Instances as follows:

  • All R5 families (R5, R5a, R5d, R5ad) – Up to 25%.
  • All I3 families (I3, I3en) – 13%.

The pricing pages have been updated.

Questions?
If you need assistance or have feedback, please reach out to your usual AWS support contacts, or post a message in the AWS Forum for Amazon EC2.

– Julien

CloudEndure Highly Automated Disaster Recovery – 80% Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloudendure-highly-automated-disaster-recovery-80-price-reduction/

AWS acquired CloudEndure last year. After the acquisition we began working with our new colleagues to integrate their products into the AWS product portfolio.

CloudEndure Disaster Recovery is designed to help you minimize downtime and data loss. It continuously replicates the contents of your on-premises, virtual, or cloud-based systems to a low-cost staging area in the AWS region of your choice, within the confines of your AWS account:

The block-level replication encompasses essentially every aspect of the protected system including the operating system, configuration files, databases, applications, and data files. CloudEndure Disaster Recovery can replicate any database or application that runs on supported versions of Linux or Windows, and is commonly used with Oracle and SQL Server, as well as enterprise applications such as SAP. If you do an AWS-to-AWS replication, the AWS environment within a specified VPC is replicated; this includes the VPC itself, subnets, security groups, routes, ACLs, Internet Gateways, and other items.

Here are some of the most popular and interesting use cases for CloudEndure Disaster Recovery:

On-Premises to Cloud Disaster Recovery – This model moves your secondary data center to the AWS Cloud without downtime or performance impact. You can improve your reliability, availability, and security without having to invest in duplicate hardware, networking, or software.

Cross-Region Disaster Recovery – If your application is already on AWS, you can add an additional layer of cost-effective protection and improve your business continuity by setting up cross-region disaster recovery. You can set up continuous replication between regions or Availability Zones and meet stringent RPO (Recovery Point Objective) or RTO (Recovery Time Objective) requirements.

Cross-Cloud Disaster Recovery – If you run workloads on other clouds, you can increase your overall resilience and meet compliance requirements by using AWS as your DR site. CloudEndure Disaster Recovery will replicate and recover your workloads, including automatic conversion of your source machines so that they boot and run natively on AWS.

80% Price Reduction
Recovery is quick and robust, yet cost-effective. In fact, we are reducing the price for CloudEndure Disaster Recovery by about 80% today, making it more cost-effective than ever: $0.028 per hour, or about $20 per month per server.

If you have tried to implement a DR solution in the traditional way, you know that it requires a costly set of duplicate IT resources (storage, compute, and networking) and software licenses. By replicating your workloads into a low-cost staging area in your preferred AWS Region, CloudEndure Disaster Recovery reduces compute costs by 95% and eliminates the need to pay for duplicate OS and third-party application licenses.

To learn more, watch the Disaster Recovery to AWS Demo Video:

After that, be sure to visit the new CloudEndure Disaster Recovery page!

Jeff;

200 Amazon CloudFront Points of Presence + Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/200-amazon-cloudfront-points-of-presence-price-reduction/

Less than two years ago I announced the 100th Point of Presence for Amazon CloudFront.

The overall Point of Presence footprint is now growing at 50% per year. Since we launched the 100th PoP in 2017, we have expanded to 77 cities in 34 countries including China, Israel, Denmark, Norway, South Africa, UAE, Bahrain, Portugal, and Belgium.

CloudFront has been used to deliver many high-visibility live-streaming events including Super Bowl LIII, Thursday Night Football (via Prime Video), the Royal Wedding, the Winter Olympics, the Commonwealth Games, a multitude of soccer games (including the 2019 FIFA World Cup), and much more.

Whether used alone or in conjunction with other AWS services, CloudFront is a great way to deliver content, with plenty of options that also help to secure the content and to protect the underlying source. For example:

DDoS Protection – Amazon CloudFront customers were automatically protected against 84,289 Distributed Denial of Service (DDoS) attacks in 2018, including a 1.4 Tbps memcached reflection attack.

Attack Mitigation – CloudFront customers used AWS Shield Advanced and AWS WAF to mitigate application-layer attacks, including a flood of over 20 million requests per second.

Certificate Management – We announced CloudFront Integration with AWS Certificate Manager in 2016, and use of custom certificates has grown by 600%.

New Locations in South America
Today I am happy to announce that our global network continues to grow, and now includes 200 Points of Presence, including new locations in Argentina (198), Chile (199), and Colombia (200):

AWS customer NED is based in Chile. They are using CloudFront to deliver server-side ad injection and low-latency content distribution to their clients, and are also using Lambda@Edge to implement robust anti-piracy protection.

Price Reduction
We are also reducing the pricing for on-demand data transfer from CloudFront by 56% for all Points of Presence in South America, effective November 1, 2019. Check out the CloudFront Pricing page to learn more.

CloudFront Resources
Here are some resources to help you to learn how to make great use of CloudFront in your organization:

Jeff;

 

New – Gigabit Connectivity Options for Amazon Direct Connect

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gigabit-connectivity-options-for-amazon-direct-connect/

AWS Direct Connect gives you the ability to create private network connections between your datacenter, office, or colocation environment and AWS. The connections start at your network and end at one of 91 AWS Direct Connect locations and can reduce your network costs, increase throughput, and deliver a more consistent experience than an Internet-based connection. In most cases you will need to work with an AWS Direct Connect Partner to get your connection set up.

As I prepared to write this post, I learned that my understanding of AWS Direct Connect was incomplete, and that the name actually encompasses three distinct models. Here’s a summary:

Dedicated Connections are available with 1 Gbps and 10 Gbps capacity. You use the AWS Management Console to request a connection, after which AWS will review your request and either follow up via email to request additional information or provision a port for your connection. Once AWS has provisioned a port for you, the time for the AWS Direct Connect Partner to complete the connection will vary from days to weeks. A Dedicated Connection is a physical Ethernet port dedicated to you. Each Dedicated Connection supports up to 50 Virtual Interfaces (VIFs). To get started, read Creating a Connection.

Hosted Connections are available with 50 to 500 Mbps capacity, and connection requests are made via an AWS Direct Connect Partner. After the AWS Direct Connect Partner establishes a network circuit to your premises, capacity to AWS Direct Connect can be added or removed on demand by adding or removing Hosted Connections. Each Hosted Connection supports a single VIF; you can obtain multiple VIFs by acquiring multiple Hosted Connections. The AWS Direct Connect Partner provisions the Hosted Connection and sends you an invite, which you must accept (with a click) in order to proceed.

Hosted Virtual Interfaces are also set up via AWS Direct Connect Partners. A Hosted Virtual Interface has access to all of the available capacity on the network link between the AWS Direct Connect Partner and an AWS Direct Connect location. The network link between the AWS Direct Connect Partner and the AWS Direct Connect location is shared by multiple customers and could possibly be oversubscribed. Due to the possibility of oversubscription in the Hosted Virtual Interface model, we no longer allow new AWS Direct Connect Partner service integrations using this model and recommend that customers with workloads sensitive to network congestion use Dedicated or Hosted Connections.

Higher Capacity Hosted Connections
Today we are announcing Hosted Connections with 1, 2, 5, or 10 Gbps of capacity. These capacities will be available through a select set of AWS Direct Connect Partners who have been specifically approved by AWS. We are also working with AWS Direct Connect Partners to implement additional monitoring of the network link between the AWS Direct Connect Partners and AWS.

Most AWS Direct Connect Partners support adding or removing Hosted Connections on demand. Suppose that you archive a massive amount of data to Amazon Glacier at the end of every quarter, and that you already have a pair of resilient 10 Gbps circuits from your AWS Direct Connect Partner for use by other parts of your business. You then create a pair of resilient 1, 2, 5 or 10 Gbps Hosted Connections at the end of the quarter, upload your data to Glacier, and then delete the Hosted Connections.
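
Here is a rough sketch of what that on-demand pattern can look like from the API side, using boto3. The Region and connection ID are placeholders, and the exact workflow on the partner's side varies, so treat this as illustrative rather than a complete procedure:

```python
# Sketch: accepting and later removing a partner-provisioned Hosted Connection.
# The Region and connection ID are placeholders; the partner provisions the connection
# and sends the invite that you accept below.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")

# List connections visible to this account; a freshly provisioned Hosted Connection
# typically appears in the "ordering" state until you confirm it.
for conn in dx.describe_connections()["connections"]:
    print(conn["connectionId"], conn["connectionName"], conn["connectionState"])

# Accept the invite (the console's "click to accept" step).
dx.confirm_connection(connectionId="dxcon-EXAMPLE")

# ...create a virtual interface, move the quarterly archive, then tear the connection down.
dx.delete_connection(connectionId="dxcon-EXAMPLE")
```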

You pay AWS for the port-hour charges while the Hosted Connections are in place, along with any associated data transfer charges (see the Direct Connect Pricing page for more info). Check with your AWS Direct Connect Partner for the charges associated with their services. You get a cost-effective, elastic way to move data to the cloud while creating Hosted Connections only when needed.

Available Now
The new higher capacity Hosted Connections are available through select AWS Direct Connect Partners after they are approved by AWS.

Jeff;

PS – As part of this launch, we are reducing the prices for the existing 200, 300, 400, and 500 Mbps Hosted Connection capacities by 33.3%, effective March 1, 2019.

 

AWS Fargate Price Reduction – Up to 50%

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/aws-fargate-price-reduction-up-to-50/

AWS Fargate is a compute engine that uses containers as its fundamental compute primitive. AWS Fargate runs your application containers for you on demand. You no longer need to provision a pool of instances or manage a Docker daemon or orchestration agent. Because the infrastructure that runs your containers is invisible, you don’t have to worry about whether you have provisioned enough instances to run your containerized workload. You also don’t have to worry about whether you’re using those instances efficiently to avoid paying for resources that you don’t use. You no longer need to do undifferentiated heavy lifting to maintain the infrastructure that runs your containers. AWS Fargate automatically updates and patches underlying resources to keep you safe from vulnerabilities in the underlying operating system and software. AWS Fargate uses an on-demand pricing model that charges per vCPU and per GB of memory reserved per second, with a 1-minute minimum.

At re:Invent 2018 we announced Firecracker, an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services. Firecracker enables you to deploy workloads in lightweight virtual machines called microVMs, which start up faster and run with less overhead. Innovations such as these allow us to improve the efficiency of Fargate and help us pass on cost savings to customers.

Effective January 7th, 2019 Fargate pricing per vCPU per second is being reduced by 20%, and pricing per GB of memory per second is being reduced by 65%. Depending on the ratio of CPU to memory that you’re allocating for your containers, you could see an overall price reduction of anywhere from 35% to 50%.
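
Here is a quick sketch of how the two component reductions combine for a given task size. The old per-vCPU and per-GB hourly rates below are the pre-reduction US East (N. Virginia) list prices as I understand them, so treat them as an assumption; the computed cuts line up with the table that follows:

```python
# Sketch: combining the 20% per-vCPU cut and 65% per-GB cut into an effective task-level cut.
# The old US East (N. Virginia) hourly rates below are assumed for illustration only.
OLD_VCPU_HOUR, OLD_GB_HOUR = 0.0506, 0.0127
NEW_VCPU_HOUR, NEW_GB_HOUR = OLD_VCPU_HOUR * (1 - 0.20), OLD_GB_HOUR * (1 - 0.65)

def effective_cut(vcpu: float, gb: float) -> float:
    old = vcpu * OLD_VCPU_HOUR + gb * OLD_GB_HOUR
    new = vcpu * NEW_VCPU_HOUR + gb * NEW_GB_HOUR
    return 1 - new / old

for vcpu, gb in [(0.25, 0.5), (1, 4), (2, 16)]:
    print(f"{vcpu} vCPU / {gb} GB -> {effective_cut(vcpu, gb):.1%} cheaper")
# Prints roughly 35.0%, 42.5%, and 50.0%, matching those rows in the table below.
```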

The following table shows the price reduction for each built-in launch configuration.

| vCPU | GB Memory | Effective Price Cut |
|---|---|---|
| 0.25 | 0.5 | -35.00% |
| 0.25 | 1 | -42.50% |
| 0.25 | 2 | -50.00% |
| 0.5 | 1 | -35.00% |
| 0.5 | 2 | -42.50% |
| 0.5 | 3 | -47.00% |
| 0.5 | 4 | -50.00% |
| 1 | 2 | -35.00% |
| 1 | 3 | -39.30% |
| 1 | 4 | -42.50% |
| 1 | 5 | -45.00% |
| 1 | 6 | -47.00% |
| 1 | 7 | -48.60% |
| 1 | 8 | -50.00% |
| 2 | 4 | -35.00% |
| 2 | 5 | -37.30% |
| 2 | 6 | -39.30% |
| 2 | 7 | -41.00% |
| 2 | 8 | -42.50% |
| 2 | 9 | -43.80% |
| 2 | 10 | -45.00% |
| 2 | 11 | -46.10% |
| 2 | 12 | -47.00% |
| 2 | 13 | -47.90% |
| 2 | 14 | -48.60% |
| 2 | 15 | -49.30% |
| 2 | 16 | -50.00% |
| 4 | 8 | -35.00% |
| 4 | 9 | -36.20% |
| 4 | 10 | -37.30% |
| 4 | 11 | -38.30% |
| 4 | 12 | -39.30% |
| 4 | 13 | -40.20% |
| 4 | 14 | -41.00% |
| 4 | 15 | -41.80% |
| 4 | 16 | -42.50% |
| 4 | 17 | -43.20% |
| 4 | 18 | -43.80% |
| 4 | 19 | -44.40% |
| 4 | 20 | -45.00% |
| 4 | 21 | -45.50% |
| 4 | 22 | -46.10% |
| 4 | 23 | -46.50% |
| 4 | 24 | -47.00% |
| 4 | 25 | -47.40% |
| 4 | 26 | -47.90% |
| 4 | 27 | -48.30% |
| 4 | 28 | -48.60% |
| 4 | 29 | -49.00% |
| 4 | 30 | -49.30% |

Many engineering organizations such as Turner Broadcasting System, Veritone, and Catalytic have already been using AWS Fargate to achieve significant infrastructure cost savings for batch jobs, cron jobs, and other on-and-off workloads. Running a cluster of instances at all times to run your containers constantly incurs cost, but AWS Fargate stops charging when your containers stop.

With these new price reductions, AWS Fargate also enables significant savings for containerized web servers, API services, and background queue consumers run by organizations like KPMG, CBS, and Product Hunt. If your application is currently running on large EC2 instances that peak at 10-20% CPU utilization, consider migrating to containers in AWS Fargate. Containers give you more granularity to provision the exact amount of CPU and memory that your application needs. You no longer pay for instance resources that your application doesn’t use. If a sudden spike of traffic causes your application to require more resources you still have the ability to rapidly scale your application out by adding more containers, or scale your application up by launching larger containers.

AWS Fargate lets you focus on building your containerized application without worrying about the infrastructure. This encompasses not just the infrastructure capacity provisioning, monitoring, and maintenance but also the infrastructure price. Implementing Firecracker in AWS Fargate is just part of our journey to keep making AWS Fargate faster, more powerful, and more efficient. Running your containers in AWS Fargate allows you to benefit from these improvements without any manual intervention required on your part.

AWS Fargate has achieved SOC, PCI, HIPAA BAA, ISO, MTCS, C5, and ENS High compliance certification, and has a 99.99% SLA. You can get started with AWS Fargate in 13 AWS Regions around the world.

New – EC2 P3dn GPU Instances with 100 Gbps Networking & Local NVMe Storage for Faster Machine Learning + P3 Price Reduction

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-ec2-p3dn-gpu-instances-with-100-gbps-networking-local-nvme-storage-for-faster-machine-learning-p3-price-reduction/

Late last year I told you about Amazon EC2 P3 instances and also spent some time discussing the concept of the Tensor Core, a specialized compute unit that is designed to accelerate machine learning training and inferencing for large, deep neural networks. Our customers love P3 instances and are using them to run a wide variety of machine learning and HPC workloads. For example, fast.ai set a speed record for deep learning, training the ResNet-50 deep learning model on 1 million images for just $40.

Raise the Roof
Today we are expanding the P3 offering at the top end with the addition of p3dn.24xlarge instances, with 2x the GPU memory and 1.5x as many vCPUs as p3.16xlarge instances. The instances feature 100 Gbps network bandwidth (up to 4x the bandwidth of previous P3 instances), local NVMe storage, the latest NVIDIA V100 Tensor Core GPUs with 32 GB of GPU memory, NVIDIA NVLink for faster GPU-to-GPU communication, and AWS-custom Intel® Xeon® Scalable (Skylake) processors running at 3.1 GHz sustained all-core Turbo, all built atop the AWS Nitro System. Here are the specs:

| Model | NVIDIA V100 Tensor Core GPUs | GPU Memory | NVIDIA NVLink | vCPUs | Main Memory | Local Storage | Network Bandwidth | EBS-Optimized Bandwidth |
|---|---|---|---|---|---|---|---|---|
| p3dn.24xlarge | 8 | 256 GB | 300 GB/s | 96 | 768 GiB | 2 x 900 GB NVMe SSD | 100 Gbps | 14 Gbps |

If you are doing large-scale training runs using MXNet, TensorFlow, PyTorch, or Keras, be sure to check out the Horovod distributed training framework that is included in the Amazon Deep Learning AMIs. You should also take a look at the new NVIDIA AI Software containers in the AWS Marketplace; these containers are optimized for use on P3 instances with V100 GPUs.

With a total of 256 GB of GPU memory (twice as much as the largest of the current P3 instances), the p3dn.24xlarge allows you to explore bigger and more complex deep learning algorithms. You can rotate and scale your training images faster than ever before, while also taking advantage of the Intel AVX-512 instructions and other leading-edge Skylake features. Your GPU code can scale out across multiple GPUs and/or instances using NVLink and the NVLink Collective Communications Library (NCCL). Using NCCL will also allow you to fully exploit the 100 Gbps of network bandwidth that is available between instances when used within a Placement Group.
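
As a concrete starting point, here is a minimal Horovod training sketch using the PyTorch API. It assumes an environment (such as a Deep Learning AMI) with horovod and torch installed, the model is a stand-in, and you would launch it with something like `horovodrun -np 8 python train.py` to use all eight GPUs:

```python
# Minimal multi-GPU training sketch with Horovod's PyTorch API (model and data are stand-ins).
import torch
import horovod.torch as hvd

hvd.init()                                    # one process per GPU
torch.cuda.set_device(hvd.local_rank())       # pin each process to its local V100

model = torch.nn.Linear(1024, 10).cuda()      # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across GPUs/instances; Horovod uses NCCL (NVLink within an instance,
# and the network fabric between instances) for the allreduce.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(100):
    inputs = torch.randn(64, 1024).cuda()
    targets = torch.randint(0, 10, (64,)).cuda()
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
```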

In addition to being a great fit for distributed machine learning training and image classification, these instances provide plenty of power for your HPC jobs. You can render 3D images, transcode video in real time, model financial risks, and much more.

You can use existing AMIs as long as they include the ENA, NVMe, and NVIDIA drivers. You will need to upgrade to the latest ENA driver to get 100 Gbps networking; if you are using the Deep Learning AMIs, be sure to use a recent version that is optimized for AVX-512.

Available Today
The p3dn.24xlarge instances are available now in the US East (N. Virginia) and US West (Oregon) Regions and you can start using them today in On-Demand, Spot, and Reserved Instance form.

Bonus – P3 Price Reduction
As part of today’s launch we are also reducing prices for the existing P3 instances. The following prices went into effect on December 6, 2018:

  • 20% reduction for all prices (On-Demand and RI) and all instance sizes in the Asia Pacific (Tokyo) Region.
  • 15% reduction for all prices (On-Demand and RI) and all instance sizes in the Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions.
  • 15% reduction for Standard RIs with a three-year term for all instance sizes in all regions except Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul).

The percentages apply to instances running Linux; slightly smaller percentages apply to instances that run Microsoft Windows and other operating systems.

These reductions will help to make your machine learning training and inferencing even more affordable, and are being brought to you as we pursue our goal of putting machine learning in the hands of every developer.

Jeff;

 

 

AWS Data Transfer Price Reductions – Up to 34% (Japan) and 28% (Australia)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-data-transfer-price-reductions-up-to-34-japan-and-28-australia/

I’ve got good news for AWS customers who make use of our Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. Effective September 1, 2018 we are reducing prices for data transfer from Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), and Amazon CloudFront by up to 34% in Japan and 28% in Australia.

EC2 and S3 Data Transfer
Here are the new prices for data transfer from EC2 and S3 to the Internet:

| EC2 & S3 Data Transfer Out to Internet | Japan Old Rate | Japan New Rate | Change | Australia Old Rate | Australia New Rate | Change |
|---|---|---|---|---|---|---|
| Up to 1 GB / Month | $0.000 | $0.000 | 0% | $0.000 | $0.000 | 0% |
| Next 9.999 TB / Month | $0.140 | $0.114 | -19% | $0.140 | $0.114 | -19% |
| Next 40 TB / Month | $0.135 | $0.089 | -34% | $0.135 | $0.098 | -27% |
| Next 100 TB / Month | $0.130 | $0.086 | -34% | $0.130 | $0.094 | -28% |
| Greater than 150 TB / Month | $0.120 | $0.084 | -30% | $0.120 | $0.092 | -23% |

You can consult the EC2 Pricing and S3 Pricing pages for more information.

CloudFront Data Transfer
Here are the new prices for data transfer from CloudFront edge nodes to the Internet:

| CloudFront Data Transfer Out to Internet | Japan Old Rate | Japan New Rate | Change | Australia Old Rate | Australia New Rate | Change |
|---|---|---|---|---|---|---|
| Up to 10 TB / Month | $0.140 | $0.114 | -19% | $0.140 | $0.114 | -19% |
| Next 40 TB / Month | $0.135 | $0.089 | -34% | $0.135 | $0.098 | -27% |
| Next 100 TB / Month | $0.120 | $0.086 | -28% | $0.120 | $0.094 | -22% |
| Next 350 TB / Month | $0.100 | $0.084 | -16% | $0.100 | $0.092 | -8% |
| Next 524 TB / Month | $0.080 | $0.080 | 0% | $0.095 | $0.090 | -5% |
| Next 4 PB / Month | $0.070 | $0.070 | 0% | $0.090 | $0.085 | -6% |
| Over 5 PB / Month | $0.060 | $0.060 | 0% | $0.085 | $0.080 | -6% |

Visit the CloudFront Pricing page for more information.

We have also reduced the price of data transfer from CloudFront to your Origin. The price for CloudFront Data Transfer to Origin from edge locations in Australia has been reduced 20% to $0.080 per GB. This represents content uploads via POST and PUT.

Things to Know
Here are a couple of interesting things that you should know about AWS and data transfer:

AWS Free Tier – You can use the AWS Free Tier to get started with, and to learn more about, EC2, S3, CloudFront, and many other AWS services. The AWS Getting Started page contains lots of resources to help you with your first project.

Data Transfer from AWS Origins to CloudFront – There is no charge for data transfers from an AWS origin (S3, EC2, Elastic Load Balancing, and so forth) to any CloudFront edge location.

CloudFront Reserved Capacity Pricing – If you routinely use CloudFront to deliver 10 TB or more of content per month, you should investigate our Reserved Capacity pricing. You can receive a significant discount by committing to transfer 10 TB or more of content from a single region, with additional discounts at higher levels of usage. To learn more or to sign up, simply Contact Us.

Jeff;

 

Amazon Lightsail Update – More Instance Sizes and Price Reductions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lightsail-update-more-instance-sizes-and-price-reductions/

Amazon Lightsail gives you access to the power of AWS, with the simplicity of a VPS (Virtual Private Server). You choose a configuration from a menu and launch a virtual machine (an instance) preconfigured with SSD-based storage, DNS management, and a static IP address. You can use Linux or Windows, and can even choose between eleven Linux-powered blueprints that contain ready-to-run copies of popular web, e-commerce, and development tools:

On the Linux/Unix side, you now have six options, including CentOS:

The monthly fee for each instance includes a generous data transfer allocation, giving you the ability to host web sites, blogs, online stores and whatever else you can dream up!

Since the launch of Lightsail in late 2016, we’ve done our best to listen and respond to customer feedback. For example:

October 2017 – Microsoft Windows – This update let you launch Lightsail instances running Windows Server 2012 R2, Windows Server 2016, and Windows Server 2016 with SQL Server 2016 Express. This allowed you to build, test, and deploy .NET and Windows applications without having to set up or run any infrastructure.

November 2017 – Load Balancers & Certificate Management – This update gave you the ability to build highly scalable applications that use load balancers to distribute traffic to multiple Lightsail instances. It also gave you access to free SSL/TLS certificates and a simple, integrated tool to request and validate them, along with an automated renewal mechanism.

November 2017 – Additional Block Storage – This update let you extend your Lightsail instances with additional SSD-backed storage, with the ability to attach up to 15 disks (each holding up to 16 TB) to each instance. The additional storage is automatically replicated and encrypted.

May 2018 – Additional Regions – This update let you launch Lightsail instances in the Canada (Central), Europe (Paris), and Asia Pacific (Seoul) Regions, bringing the total region count to 13, and giving you lots of geographic flexibility.

So that’s where we started and how we got here! What’s next?

And Now for the Updates
Today we are adding two more instance sizes at the top end of the range and reducing the prices for the existing instances by up to 50%.

Here are the new instance sizes:

16 GB – 16 GB of memory, 4 vCPUs, 320 GB of storage, and 6 TB of data transfer.

32 GB – 32 GB of memory, 8 vCPUs, 640 GB of storage, and 7 TB of data transfer.

Here are the monthly prices (billed hourly) for Lightsail instances running Linux:

| Monthly Price | 512 MB | 1 GB | 2 GB | 4 GB | 8 GB | 16 GB | 32 GB |
|---|---|---|---|---|---|---|---|
| Old | $5.00 | $10 | $20 | $40 | $80 | – | – |
| New | $3.50 | $5 | $10 | $20 | $40 | $80 | $160 |

And for Lightsail instances running Windows:

| Monthly Price | 512 MB | 1 GB | 2 GB | 4 GB | 8 GB | 16 GB | 32 GB |
|---|---|---|---|---|---|---|---|
| Old | $10 | $17 | $30 | $55 | $100 | – | – |
| New | $8 | $12 | $20 | $40 | $70 | $120 | $240 |

These reductions are effective as of August 1, 2018 and take place automatically, with no action on your part.

From Our Customers
WordPress power users, developers, entrepreneurs, and people who need a place to host their personal web site are all making great use of Lightsail. The Lightsail team is always thrilled to see customer feedback on social media and shared a couple of recent tweets with me as evidence!

Emil Uzelac (@emiluzelac) is a well-respected member of the WordPress community, especially in the area of WordPress theme development and reviews. When he tried Lightsail he was super impressed with the speed of our instances, calling them “by far the fastest I’ve tried”:

As an independent developer and SaaS cofounder, Mike Rogers (@mikerogers0) hasn’t spent a lot of time working with infrastructure. However, when he moved some of his Ruby on Rails projects over to Lightsail, he realized that it was easy (and actually fun) to make the move:

Stephanie Davis (@StephanieMDavis) is a business intelligence developer and honey bee researcher who wanted to find a new home for her writings. She settled on Lightsail, and after it was all up and running she had a “. . . a much, much better grasp of the AWS cloud infrastructure and an economical, slick web host”:

If you have your own Lightsail success story to share, could I ask you to tweet it and hashtag it with #PoweredByLightsail? I can’t wait to read it!

Some new Lightsail Resources
While I have your attention, I’d like to share some helpful videos with you!

Deploying a MEAN stack Application on Amazon Lightsail – AWS Developer Advocate Mike Coleman shows you how to deploy a MEAN stack (MongoDB, Express.js, Angular, Node.js) on Lightsail:

Deploying a WordPress Instance on Amazon Lightsail – Mike shows you how to deploy WordPress:

Deploying Docker Containers on Amazon Lightsail – Mike shows you how to use Docker containers:

Jeff;

EC2 Price Reduction – H1 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-price-reduction-h1-instances/

EC2’s H1 instances offer 2 to 16 terabytes of fast, dense storage for big data applications, optimized to deliver high throughput for sequential I/O. Enhanced Networking, 32 to 256 gigabytes of RAM, and Intel Xeon E5-2686 v4 processors running at a base frequency of 2.3 GHz round out the feature set.

I am happy to announce that we are reducing the On-Demand and Reserved Instance prices for H1 instances in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions by 15%, effective immediately.

Jeff;

 

Introducing the B2 Snapshot Return Refund Program

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/b2-snapshot-return-refund-program/

B2 Snapshot Return Refund Program

What Is the B2 Snapshot Return Refund Program?

Backblaze’s mission is making cloud storage astonishingly easy and affordable. That guides our focus — making our customers’ data more usable. Today, we’re pleased to introduce a trial of the B2 Snapshot Return Refund program. B2 customers have long been able to create a Snapshot of their data and order a hard drive with that data sent via FedEx anywhere in the world. Starting today, if the customer sends the drive back to Backblaze within 30 days, they will get a full refund. This new feature is available automatically for B2 customers when they order a Snapshot. There are no extra buttons to push or boxes to check — just send back the drive within 30 days and we’ll refund your money. To put it simply, we are offering the cloud storage industry’s only refundable rapid data egress service.

You Shouldn’t be Afraid to Use Your Own Data

Last week, we cut the price of B2 downloads in half — from 2¢ per GB to 1¢ per GB. That 50% reduction makes B2’s download price 1/5 that of Amazon’s S3 (with B2 storage pricing already 1/4 that of S3). The price reduction and today’s introduction of the B2 Snapshot Return Refund program are deliberate moves to eliminate the industry’s biggest barrier to entry — the cost of using data stored in the cloud.  Storage vendors who make it expensive to restore, or place time lag impediments to access, are reducing the usefulness of your data. We believe this is antithetical to encouraging the use of the cloud in the first place.

Learning From Our Customers

Our Computer Backup product already has a Restore Return Refund program. It’s incredibly popular, and we enjoy the almost daily “you just saved my bacon” letters that come back with the returned hard drives. Our customer surveys have repeatedly demonstrated that the ability to get data back is one of the things that has made our Computer Backup service one of the most popular in the industry. So, it made sense to us that our B2 customers could use a similar program.

There are many ways B2 customers can benefit from using the B2 Snapshot Return Refund program; here is a typical scenario.

Media and Entertainment Workflow Based Snapshots

Businesses in the Media and Entertainment (M&E) industry tend to have large quantities of digital media, and the amount of data will continue to increase in the coming years with more 4K and 8K cameras coming into regular use. When an organization needs to deliver or share that data, they typically have to manually download data from their internal storage system and copy it onto a thumb drive or hard drive, or perhaps create an LTO tape. Once that is done, they take their storage device, label it, and mail it to their customer. Not only is this practice costly, time consuming, and potentially insecure, it doesn’t scale well with larger amounts of data.

With just a few clicks, you can easily distribute or share your digital media if it is stored in the B2 Cloud. Here’s how the process works:

  1. Log in to your Backblaze B2 account.
  2. Navigate to the bucket where the data is located.
  3. Select the files, or the entire bucket, you wish to send and create a “Snapshot.”
  4. Once the Snapshot is complete you have choices:
    • Download the Snapshot and pay $0.01/GB for the download
    • Have Backblaze copy the Snapshot to an external hard drive and FedEx it anywhere in the world. This stores up to 3.5 TB and costs $189.00. Return the hard drive to Backblaze within 30 days and you’ll get your $189.00 back.
    • Have Backblaze copy the Snapshot to a flash drive and FedEx it anywhere in the world. This stores up to 110 GB and costs $99.00. FedEx shipping to the specified location is included. Return the flash drive to Backblaze within 30 days and you’ll get your $99.00 back.

You can always keep the hard drive or flash drive and Backblaze, of course, will keep your money.

Each drive containing a Snapshot is encrypted. The encryption key can be found in your Backblaze B2 account after you log in. The FedEx tracking number is there as well. When the hard drive arrives at its destination you can provide the encryption key to the recipient and they’ll be able to access the files. Note that the encryption key must be entered each time the hard drive is started, so the data remains protected even if the hard drive is returned to Backblaze.

The B2 Snapshot Return Refund program supports Snapshots as large as 3.5 terabytes. That means you can send about 50 hours of 4K video to a client or partner by selecting the hard drive option. If you select the flash drive option, a Snapshot can be up to 110 gigabytes, which is about 1 hour and 45 minutes of 4K video.

While the example uses an M&E workflow, any workflow requiring the exchange or distribution of large amounts of data across distinct geographies will benefit from this service.

This is a Trial Program

Backblaze fully intends to offer the B2 Snapshot Return Refund Program for a long time. That said, there is no program like this in the industry and so we want to put some guardrails on it to ensure we can offer a sustainable program for all. Thus, the “fine print”:

  • Minimum Snapshot Size — a Snapshot must be greater than 10 GB to qualify for this program. Why? You can download a 10 GB Snapshot in a few minutes. Why pay us to do the same thing and have it take a couple of days?
  • The 30 Day Clock — The clock starts on the day the drive is marked as delivered to you by FedEx and the clock ends on the date postmarked on the package we receive. If that’s 30 days or less, your refund will be granted.
  • 5 Drive Refunds Per Year — We are initially setting a limit of 5 drive refunds per B2 account per year. By placing a cap on the number of drive refunds per year, we are able to provide a service that is responsive to our entire client base. We expect to change or remove this limit once we have enough data to understand the demand and can make sure we are staffed properly.

It is Your Data — Use It

Our industry has a habit of charging little to store data and then usurious amounts to get it back. There are certainly real costs involved in data retrieval. We outlined them in our post on the Cost of Cloud Storage. The industry rates charged for data retrieval are clearly strategic moves to try and lock customers in. To us, that runs counter to trying to do our part to make data useful and our customers’ lives easier. That viewpoint drives our efforts behind lowering our download pricing and the creation of this program.

We hope you enjoy the B2 Snapshot Return Refund program. If you have a moment, please tell us in the comments below how you might use it!

The post Introducing the B2 Snapshot Return Refund Program appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Backblaze Cuts B2 Download Price In Half

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/backblaze-b2-drops-download-price-in-half/

Backblaze B2 downloads now cost 50% less
Backblaze is pleased to announce that, effective immediately, we are reducing the price of Backblaze B2 Cloud Storage downloads by 50%. This means that B2 download pricing drops from $0.02 to $0.01 per GB. As always, the first gigabyte of data downloaded each day remains free.

If some of this sounds familiar, that’s because a little under a year ago, we dropped our download price from $0.05 to $0.02. While that move solidified our position as the affordability leader in the high performance cloud storage space, we continue to innovate on our platform and are excited to provide this additional value to our customers.

This price reduction applies immediately to all existing and new customers. In keeping with Backblaze’s overall approach to providing services, there are no tiers or minimums. It’s automatic and it starts today.

Why Is Backblaze Lowering What Is Already The Industry’s Lowest Price?

Because it makes cloud storage more useful for more people.

When we decided to use Backblaze B2 as our cloud storage service, their download pricing at the time enabled us to offer our broadcasters unlimited audio uploads so they can upload past decades of preaching to our extensive library for streaming and downloading. With Backblaze cutting the bandwidth prices 50% to just one penny a gigabyte, we are excited about offering much higher quality video. — Ian Wagner, Senior Developer, Sermon Audio

Since our founding in 2007, Backblaze’s mission has been to make storing data astonishingly easy and affordable. We have a well documented, relentless pursuit of lowering storage costs — it starts with our storage pods and runs through everything we do. Today, we have over 500 petabytes of customer data stored. B2’s storage pricing already being 1/4 that of Amazon’s S3 has certainly helped us get there. Today’s pricing reduction puts our download pricing at 1/5 that of S3. The “affordable” part of our story is well established.

I’d like to take a moment to discuss the “easy” part. Our industry has historically done a poor job of putting ourselves in our customers’ shoes. When customers are faced with the decision of where to put their data, price is certainly a factor. But it’s not just the price of storage that customers must consider. There’s a cost to download your data. The business need for providers to charge for this is reasonable — downloading data requires bandwidth, and bandwidth costs money. We discussed that in a prior post on the Cost of Cloud Storage.

But there’s a difference between the costs of bandwidth and what the industry is charging today. There’s a joke that some of the storage clouds are competing to become “Hotel California” — you can check out anytime you want, but your data can never leave.[1] Services that make it expensive to restore data or place time lag impediments to data access are reducing the usefulness of your data. Customers should not have to wonder if they can afford to access their own data.

When replacing LTO with StarWind VTL and cloud storage, our customers had only one concern left: the possible cost of data retrieval. Backblaze just wiped this concern out of the way by lowering that cost to just one penny per gig. — Max Kolomyeytsev, Director of Product Management, StarWind

Many businesses have not yet been able to back up their data to the cloud because of the costs. Many of those companies are forced to continue backing up to tape. It is clear that tape is an inefficient means of data storage. Solution providers like StarWind VTL specialize in helping businesses move off of antiquated tape libraries. However, as Max Kolomyeytsev, Director of Product Management at StarWind points out, “When replacing LTO with StarWind VTL and cloud storage our customers had only one concern left: the possible cost of data retrieval. Backblaze just wiped this concern out of the way by lowering that cost to just one penny per gig.”

Customers that have already adopted the cloud often are forced to make difficult tradeoffs between data they want to access and the cost associated with that access. Surrendering the use of your own data defeats many of the benefits that “the cloud” brings in the first place. Because of B2’s download price, Ian Wagner, a Senior Developer at Sermon Audio, is able to lower his costs and expand his product offering. “When we decided to use Backblaze B2 as our cloud storage service, their download pricing at the time enabled us to offer our broadcasters unlimited audio uploads so they can upload past decades of preaching to our extensive library for streaming and downloading. With Backblaze cutting the bandwidth prices 50% to just one penny a gigabyte, we are excited about offering much higher quality video.”

Better Download Pricing Also Helps Third Party Applications Deliver Customer Solutions

Many organizations use third party applications or devices to help manage their workflows. Those applications are the hub for customers getting their data to where it needs to go. Leaders in verticals like Media Asset Management, Server & NAS Backup, and Enterprise Storage have already chosen to integrate with B2.

With Backblaze lowering their download price to an amazing one penny a gigabyte, our CloudNAS is even a better fit for photographers, videographers and business owners who need to have their files at their fingertips, with an easy, reliable, low cost way to use Backblaze for unlimited primary storage and active archive. — Paul Tian, CEO, Morro Data

For Paul Tian, founder of Ready NAS and CEO of Morro Data, reasonable download pricing also helps his company better serve its customers. “With Backblaze lowering their download price to an amazing one penny a gigabyte, our CloudNAS is even a better fit for photographers, videographers and business owners who need to have their files at their fingertips, with an easy, reliable, low cost way to use Backblaze for unlimited primary storage and active archive.”

If you use an application that hasn’t yet integrated with B2, please ask your provider to add B2 Cloud Storage and mention the application in the comments below.

 

How Do the Major Cloud Storage Providers Compare on Pricing?

Not only is Backblaze B2 storage 1/4 the price of Amazon S3, Google Cloud, or Azure, but our download pricing is now 1/5 their price as well.

| Pricing Tier | Backblaze B2 | Amazon S3 | Microsoft Azure | Google Cloud |
|---|---|---|---|---|
| First 1 TB | $0.01 | $0.09 | $0.09 | $0.12 |
| Next 9 TB | $0.01 | $0.09 | $0.09 | $0.11 |
| Next 40 TB | $0.01 | $0.085 | $0.09 | $0.08 |
| Next 100 TB | $0.01 | $0.07 | $0.07 | $0.08 |
| Next 350 TB+ | $0.01 | $0.05 | $0.05 | $0.08 |

Using the chart above, let’s compute a few examples of download costs…

| Data | Backblaze B2 | Amazon S3 | Microsoft Azure | Google Cloud |
|---|---|---|---|---|
| 1 terabyte | $10 | $90 | $90 | $120 |
| 10 terabytes | $100 | $900 | $900 | $1,200 |
| 50 terabytes | $500 | $4,300 | $4,500 | $4,310 |
| 500 terabytes | $5,000 | $28,800 | $29,000 | $40,310 |

Not only is Backblaze B2 pricing dramatically lower, it’s also simple — one price for any amount of data downloaded to anywhere. In comparison, to compute the cost of downloading 500 TB of data with S3 you start with the following formula:
(($0.09 * 10) + ($0.085 * 40) + ($0.07 * 100) + ($0.05 * 350)) * 1,000
Want to see this comparison for the amount of data you manage?
Use our cloud storage calculator.
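
If you would rather script the comparison than work the formula by hand, here is a small Python sketch that generalizes it to the four providers’ download tiers (rates taken from the first table above); it reproduces the 500 TB figures in the second table:

```python
# Sketch: tiered download cost per provider, using the $/GB rates from the pricing table above.
# Backblaze B2 is flat, so every "tier" carries the same rate.
TIERS_TB = [1, 9, 40, 100, 350]   # first 1 TB, next 9 TB, next 40 TB, next 100 TB, next 350 TB+

RATES = {
    "Backblaze B2":    [0.01, 0.01, 0.01, 0.01, 0.01],
    "Amazon S3":       [0.09, 0.09, 0.085, 0.07, 0.05],
    "Microsoft Azure": [0.09, 0.09, 0.09, 0.07, 0.05],
    "Google Cloud":    [0.12, 0.11, 0.08, 0.08, 0.08],
}

def download_cost(tb: float, rates) -> float:
    cost = 0.0
    for width, rate in zip(TIERS_TB, rates):
        used = min(tb, width)
        cost += used * 1_000 * rate   # 1 TB billed as 1,000 GB, matching the formula above
        tb -= used
        if tb <= 0:
            break
    if tb > 0:                        # anything beyond the listed tiers stays at the last rate
        cost += tb * 1_000 * rates[-1]
    return cost

for provider, rates in RATES.items():
    print(f"{provider:16s} 500 TB -> ${download_cost(500, rates):,.0f}")
```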

Customers Want to Avoid Vendor Lock In

Halving the price of downloads is a crazy move — the kind of crazy our customers will be excited about. When using our Transmit 5 app on the Mac to upload their data to B2 Cloud Storage, our users can sleep soundly knowing they’ll be getting a truly affordable price when they need to restore that data. Cool beans, Backblaze. — Cabel Sasser, Co-Founder, Panic

As the cloud storage industry grows, customers are increasingly concerned with getting locked in to one vendor. No business wants to be fully dependent on one vendor for anything. In addition, customers want multiple copies of their data to mitigate against a vendor outage or other issues.

Many vendors offer the ability for customers to replicate data across “regions.” This enables customers to store data in two physical locations of the customer’s choosing. Of course, customers pay for storing both copies of the data and for the data transfer between regions.

At 1¢ per GB, transferring data out of Backblaze is more affordable than transferring data between most other vendor regions. For example, if a customer is storing data in Amazon S3’s Northern California region (US West) and wants to replicate data to S3 in Northern Virginia (US East), she will pay 2¢ per GB to simply move the data.

However, if that same customer wanted to replicate data from Backblaze B2 to S3 in Northern Virginia, she would pay 1¢ per GB to move the data. She can achieve her replication strategy while also mitigating against vendor risk — all while cutting the bandwidth bill by 50%. Of course, this is also before factoring in the savings on her storage bill, as B2 storage is 1/4 the price of S3.

How Is Backblaze Doing This?

Simple. We just changed our pricing table and updated our website.

The longer answer is that the cost of bandwidth is a function of a few factors, including how it’s being used and the volume of usage. With another year of data for B2, over a decade of experience in the cloud storage industry, and data growth exceeding 100 PB per quarter, we know we can sustainably offer this pricing to our customers; we also know how better download pricing can make our customers and partners more effective in their work. So it is an easy call to make.

Our pricing is simple. Storage is $0.005/GB/Month, Download costs are $0.01/GB. There are no tiers or minimums and you can get started any time you wish.

Our desire is to provide a great service at a fair price. We’re proud to be the affordability leader in the Cloud Storage space and hope you’ll give us the opportunity to show you what B2 Cloud Storage can enable for you.

Enjoy the service and I’d love to hear what this price reduction does for you in the comments below…or, if you are attending NAB this year, come by to visit and tell us in person!


[1] For those readers who don’t get the Eagles reference there, please click here…I promise you won’t regret the next 7 minutes of your life.

The post Backblaze Cuts B2 Download Price In Half appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Amazon Relational Database Service – Looking Back at 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-relational-database-service-looking-back-at-2017/

The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!

Certification & Security

Features

Engine Versions & Features

Regional Support

Instance Support

Price Reductions

And That’s a Wrap
I’m pretty sure that’s everything. As you can see, 2017 was quite the year! I can’t wait to see what the team delivers in 2018.

Jeff;

 

Some notes on Meltdown/Spectre

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/01/some-notes-on-meltdownspectre.html

I thought I’d write up some notes.

You don’t have to worry if you patch. If you download the latest update from Microsoft, Apple, or Linux, then the problem is fixed for you and you don’t have to worry. If you aren’t up to date, then there’s a lot of other nasties out there you should probably also be worrying about. I mention this because while this bug is big in the news, it’s probably not news the average consumer needs to concern themselves with.

This will force a redesign of CPUs and operating systems. While not a big news item for consumers, it’s huge in the geek world. We’ll need to redesign operating systems and how CPUs are made.

Don’t worry about the performance hit. Some, especially avid gamers, are concerned about the claims of “30%” performance reduction when applying the patch. That’s only in some rare cases, so you shouldn’t worry too much about it. As far as I can tell, 3D games aren’t likely to see more than a 1% performance degradation. If you imagine your game is suddenly slower after the patch, then something else broke it.

This wasn’t foreseeable. A common cliche is that such bugs happen because people don’t take security seriously, or that they are taking “shortcuts”. That’s not the case here. Speculative execution and timing issues with caches are inherent issues with CPU hardware. “Fixing” this would make CPUs run ten times slower. Thus, while we can tweak hardware going forward, the larger change will be in software.

There’s no good way to disclose this. The cybersecurity industry has a process for coordinating the release of such bugs, which appears to have broken down. In truth, it didn’t. Once Linus announced a security patch that would degrade performance of the Linux kernel, we knew the coming bug was going to be Big. Looking at the Linux patch, tracking backwards to the bug was only a matter of time. Hence, the release of this information was a bit sooner than some wanted. This is to be expected, and is nothing to be upset about.

It helps to have a name. Many are offended by the crassness of naming vulnerabilities and giving them logos. On the other hand, we are going to be talking about these bugs for the next decade. Having a recognizable name, rather than a hard-to-remember number, is useful.

Should I stop buying Intel? Intel has the worst of the bugs here. On the other hand, ARM and AMD alternatives have their own problems. Many want to deploy ARM servers in their data centers, but these are likely to expose bugs you don’t see on x86 servers. The software fix, “page table isolation”, seems to work, so there might not be anything to worry about. On the other hand, holding up purchases because of “fear” of this bug is a good way to squeeze price reductions out of your vendor. Conversely, later generation CPUs, “Haswell” and even “Skylake” seem to have the least performance degradation, so it might be time to upgrade older servers to newer processors.

Intel misleads. Intel has a press release that implies they are not impacted any worse than others. This is wrong: the “Meltdown” issue appears to apply only to Intel CPUs. I don’t like such marketing crap, so I mention it.


Statements from companies:

Amazon EC2 Price Reduction in the Asia Pacific (Mumbai) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-price-reduction-in-the-asia-pacific-mumbai-region/

Whew – I am just getting back in to blogging after a quick recovery from AWS re:Invent!

I’m happy to start things off with yet another AWS price reduction, this one for four instance families in the Asia Pacific (Mumbai) Region. Effective December 1, 2017 we are reducing prices for On-Demand and Reserved Instances as follows:

  • M4 – Up to 15%.
  • T2 – Up to 15%.
  • R4 – Up to 15%.
  • C4 – Up to 10%.

The pricing pages have been updated. Enjoy!

Jeff;

 

AWS IoT Update – Better Value with New Pricing Model

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-update-better-value-with-new-pricing-model/

Our customers are using AWS IoT to make their connected devices more intelligent. These devices collect & measure data in the field (below the ground, in the air, in the water, on factory floors and in hospital rooms) and use AWS IoT as their gateway to the AWS Cloud. Once connected to the cloud, customers can write device data to Amazon Simple Storage Service (S3) and Amazon DynamoDB, process data using Amazon Kinesis and AWS Lambda functions, initiate Amazon Simple Notification Service (SNS) push notifications, and much more.

New Pricing Model (20-40% Reduction)
Today we are making a change to the AWS IoT pricing model that will make it an even better value for you. Most customers will see a price reduction of 20-40%, with some receiving a significantly larger discount depending on their workload.

The original model was based on a charge for the number of messages that were sent to or from the service. This all-inclusive model was a good starting point, but also meant that some customers were effectively paying for parts of AWS IoT that they did not actually use. For example, some customers have devices that ping AWS IoT very frequently, with sparse rule sets that fire infrequently. Our new model is more fine-grained, with independent charges for each component (all prices are for devices that connect to the US East (Northern Virginia) Region):

Connectivity – Metered in 1 minute increments and based on the total time your devices are connected to AWS IoT. Priced at $0.08 per million minutes of connection (equivalent to $0.042 per device per year for 24/7 connectivity). Your devices can send keep-alive pings at 30 second to 20 minute intervals at no additional cost.

Messaging – Metered by the number of messages transmitted between your devices and AWS IoT. Pricing starts at $1 per million messages, with volume pricing falling as low as $0.70 per million. You may send and receive messages up to 128 kilobytes in size. Messages are metered in 5 kilobyte increments (up from 512 bytes previously). For example, an 8 kilobyte message is metered as two messages.

Rules Engine – Metered for each time a rule is triggered, and for the number of actions executed within a rule, with a minimum of one action per rule. Priced at $0.15 per million rules-triggered and $0.15 per million actions-executed. Rules that process a message in excess of 5 kilobytes are metered at the next multiple of the 5 kilobyte size. For example, a rule that processes an 8 kilobyte message is metered as two rules.

Device Shadow & Registry Updates – Metered on the number of operations to access or modify Device Shadow or Registry data, priced at $1.25 per million operations. Device Shadow and Registry operations are metered in 1 kilobyte increments of the Device Shadow or Registry record size. For example, an update to a 1.5 kilobyte Shadow record is metered as two operations.
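
To see how the components add up, here is a back-of-the-envelope sketch in Python using the US East (Northern Virginia) rates listed above. The fleet size and traffic profile are made-up numbers, and volume discounts and the free tier are ignored:

```python
# Sketch: rough monthly AWS IoT bill under the new component-based model (US East rates above).
# Fleet size and traffic assumptions are illustrative; volume discounts and the free tier ignored.

DEVICES = 1_000
MINUTES_PER_MONTH = 60 * 24 * 30            # always-connected devices
MESSAGES_PER_DEVICE = 4 * 24 * 30           # one 5 KB reading every 15 minutes
RULES_PER_MESSAGE, ACTIONS_PER_RULE = 1, 1
SHADOW_UPDATES_PER_DEVICE = 24 * 30         # one 1 KB shadow update per hour

connectivity = DEVICES * MINUTES_PER_MONTH / 1e6 * 0.08
messaging    = DEVICES * MESSAGES_PER_DEVICE / 1e6 * 1.00    # first tier: $1 per million messages
rules        = DEVICES * MESSAGES_PER_DEVICE * RULES_PER_MESSAGE / 1e6 * 0.15
actions      = DEVICES * MESSAGES_PER_DEVICE * ACTIONS_PER_RULE / 1e6 * 0.15
shadow       = DEVICES * SHADOW_UPDATES_PER_DEVICE / 1e6 * 1.25

total = connectivity + messaging + rules + actions + shadow
print(f"Connectivity ${connectivity:.2f}  Messaging ${messaging:.2f}  "
      f"Rules+Actions ${rules + actions:.2f}  Shadow ${shadow:.2f}  Total ${total:.2f}")
```
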

The AWS Free Tier now offers a generous allocation of connection minutes, messages, triggered rules, rules actions, Shadow, and Registry usage, enough to operate a fleet of up to 50 devices. The new prices will take effect on January 1, 2018 with no effort on your part. At that time, the updated prices will be published on the AWS IoT Pricing page.

AWS IoT at re:Invent
We have an entire IoT track at this year’s AWS re:Invent. Here is a sampling:

We also have customer-led sessions from Philips, Panasonic, Enel, and Salesforce.

Jeff;