Tag Archives: asia

DynamoDB Accelerator (DAX) Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/dynamodb-accelerator-dax-now-generally-available/

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers report that they are using DAX in production, that they love how easy it was to add DAX to their applications, and that their apps are now running 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use your cluster's endpoint. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) Regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster-aware and issues requests to the nodes in round-robin fashion so that you can make full use of the cluster's cache resources.
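To make the round-robin behavior concrete, here is a toy model of a cluster-aware client. This is only an illustration of the request-spreading idea, not the actual DAX SDK implementation, and the node names are hypothetical:

```python
from itertools import cycle

class RoundRobinRouter:
    """Toy model of a cluster-aware client spreading reads across nodes."""

    def __init__(self, nodes):
        self._nodes = cycle(nodes)  # endless round-robin over the node list

    def next_node(self):
        return next(self._nodes)

# Hypothetical three-node DAX cluster
router = RoundRobinRouter(["node-a", "node-b", "node-c"])
picks = [router.next_node() for _ in range(6)]
print(picks)  # ['node-a', 'node-b', 'node-c', 'node-a', 'node-b', 'node-c']
```

Because every node holds the cache and can serve reads, a rotation like this spreads load evenly and leaves no node idle.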

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.
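As a rough back-of-the-envelope sketch (the traffic volume and cache hit rate below are assumptions for illustration, not AWS figures), the effect on provisioned read capacity looks like this:

```python
def reads_reaching_table(total_reads_per_sec, cache_hit_rate):
    """Reads that still reach DynamoDB after the DAX cache absorbs the hits."""
    return total_reads_per_sec * (1.0 - cache_hit_rate)

# Hypothetical workload: 10,000 reads/sec with 90% of reads served from cache
remaining = reads_reaching_table(10_000, 0.90)
print(remaining)  # roughly 1,000 reads/sec still need provisioned capacity
```

Under these assumptions, the table only needs to be provisioned for about a tenth of the total read traffic.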

Jeff;

 

In the Works – AWS Region in Hong Kong

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-hong-kong/

Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris), China (Ningxia), and Sweden (Stockholm).

Coming to Hong Kong in 2018
Today, I am happy to be able to tell you that we are planning to open an AWS Region in Hong Kong in 2018. Hong Kong is a leading international financial center, well known for its service-oriented economy, and is rated highly for innovation and ease of doing business. As an evangelist, I get to visit many great cities in the world, and I was lucky enough to spend some time in Hong Kong back in 2014, where I met a number of awesome customers. Many of these customers have told us that they would like a local AWS Region.

This will be the eighth AWS Region in Asia Pacific, joining the six existing Regions there — Singapore, Tokyo, Sydney, Beijing, Seoul, and Mumbai — plus an additional Region in China (Ningxia) expected to launch in the coming months. Together, these Regions will provide our customers with a total of 19 Availability Zones (AZs) and allow them to architect highly fault-tolerant applications.

Today, our infrastructure comprises 43 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018 (see the AWS Global Infrastructure page for more info).

We are looking forward to serving new and existing customers in Hong Kong and working with partners across Asia-Pacific. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Hong Kong. Public sector organizations such as government agencies, educational institutions, and nonprofits in Hong Kong will be able to use this region to store sensitive data locally (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).

If you are a customer or a partner and have specific questions about this Region, you can contact our Hong Kong team.

Help Wanted
If you are interested in learning more about AWS positions in Hong Kong, please visit the Amazon Jobs site and set the location to Hong Kong.

Jeff;

 

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After the site taunted Hollywood and the music industry with its refusals to capitulate, the endless legal actions in which it would ordinarily have been forced to participate largely took place without The Pirate Bay present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives were also made easier with the ‘evil’ Pirate Bay still online. Immune to every anti-piracy technique under the sun, the platform’s continued existence in the face of every onslaught only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon Lightsail Update – 9 More Regions and Global Console

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lightsail-update-9-more-regions-and-global-console/

Amazon Lightsail lets you launch Virtual Private Servers on AWS with just a few clicks. With prices starting at $5 per month, Lightsail takes care of the heavy lifting and gives you a simple way to build and host applications. As I showed you in my re:Invent post (Amazon Lightsail – The Power of AWS, the Simplicity of a VPS), you can choose a configuration from a menu and launch a virtual machine preconfigured with SSD-based storage, DNS management, and a static IP address.

Since we launched in November, many customers have used Lightsail to launch Virtual Private Servers. For example, Monash University is using Amazon Lightsail to rapidly re-platform a number of CMS services in a simple and cost-effective manner. They have already migrated 50 workloads and are now thinking of creating an internal CMS service based on Lightsail to allow staff and students to create their own CMS instances in a self-service manner.

Today we are expanding Lightsail into nine more AWS Regions and launching a new, global console.

New Regions
At re:Invent we made Lightsail available in the US East (Northern Virginia) Region. Earlier this month we added support for several additional Regions in the US and Europe. Today we are launching Lightsail in four of our Asia Pacific Regions, bringing the total to ten. Here’s the full list:

  • US East (Northern Virginia)
  • US West (Oregon)
  • US East (Ohio)
  • EU (London)
  • EU (Frankfurt)
  • EU (Ireland)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)

Global Console
The updated Lightsail console makes it easy for you to create and manage resources in one or more Regions. I simply choose the desired Region when I create a new instance:

I can see all of my instances and static IP addresses on the same page, no matter what Region they are in:

And I can perform searches that span all of my resources and Regions. All of my LAMP stacks:

Or all of my resources in the EU (Ireland) Region:

I can perform a similar search on the Snapshots tab:

A new DNS zones tab lets me see my existing zones and create new ones:

Creation of SSH keypairs is now specific to a Region:

I can manage my key pairs on a Region-by-Region basis:

Static IP addresses are also specific to a particular Region:

Available Now
You can use the new Lightsail console and create resources in all ten Regions today!

Jeff;

 

EC2 Price Reductions – Reserved Instances & M4 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-price-reductions-reserved-instances-m4-instances/

As AWS grows, we continue to find ways to make it an even better value. We work with our suppliers to drive down costs while also finding ways to build hardware and software that is increasingly more efficient and cost-effective.

In addition to reducing our prices on a regular and frequent basis, we also give customers options that help them to optimize their use of AWS. For example, Reserved Instances (first launched in 2009) allow Amazon EC2 users to obtain a significant discount when compared to On-Demand Pricing, along with a capacity reservation when used in a specific Availability Zone.

Our customers use multiple strategies to purchase and manage their Reserved Instances. Some prefer to make an upfront payment and earn a bigger discount; some prefer to pay nothing upfront and get a smaller (yet still substantial) discount. In the middle, others are happiest with a partial upfront payment and a discount that falls in between the two other options. In order to meet this wide range of preferences we are adding 3 Year No Upfront Standard Reserved Instances for most of the current generation instance types. We are also reducing prices for No Upfront Reserved Instances, Convertible Reserved Instances, and General Purpose M4 instances (both On-Demand and Reserved Instances). This is our 61st AWS Price Reduction.

Here are the details (all changes and reductions are effective immediately):

New No Upfront Payment Option for 3 Year Standard RIs – We previously offered a no upfront payment option with a 1 year term for Standard RIs. Today, we are adding a No Upfront payment option with a 3 year term for C4, M4, R4, I3, P2, X1, and T2 Standard Reserved Instances.

Lower Prices for No Upfront Reserved Instances – We are lowering the prices for No Upfront 1 Year Standard and 3 Year Convertible Reserved Instances for the C4, M4, R4, I3, P2, X1, and T2 instance types by up to 17%, depending on instance type, operating system, and region.
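To put an "up to 17%" reduction in concrete terms, here is a quick sketch; the $0.10/hour base rate is purely hypothetical and used only to show the arithmetic:

```python
def reduced_hourly_rate(old_rate, reduction_pct):
    """Apply a percentage price reduction to an hourly rate."""
    return old_rate * (1 - reduction_pct / 100)

old_rate = 0.10                               # hypothetical $/hour for an RI
new_rate = reduced_hourly_rate(old_rate, 16)  # M4 averaged -16% in several regions
annual_savings = (old_rate - new_rate) * 24 * 365
print(f"${new_rate:.4f}/hour, saving ${annual_savings:.2f} per instance-year")
```

Small per-hour reductions compound quickly across a fleet of always-on instances.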

Here are the average reductions for No Upfront Reserved Instances for Linux in several representative regions:

      US East (Northern Virginia)   US West (Oregon)   EU (Ireland)   Asia Pacific (Tokyo)   Asia Pacific (Singapore)
C4    -11%                          -11%               -10%           -10%                   -9%
M4    -16%                          -16%               -16%           -16%                   -17%
R4    -10%                          -10%               -10%           -10%                   -10%

Lower Prices for Convertible Reserved Instances – Convertible Reserved Instances allow you to change the instance family and other parameters associated with the RI at any time; this allows you to adjust your RI inventory as your application evolves and your needs change. We are lowering the prices for 3 Year Convertible Reserved Instances by up to 21% for most of the current generation instances (C4, M4, R4, I3, P2, X1, and T2).

Here are the average reductions for Convertible Reserved Instances for Linux in several representative regions:

      US East (Northern Virginia)   US West (Oregon)   EU (Ireland)   Asia Pacific (Tokyo)   Asia Pacific (Singapore)
C4    -13%                          -13%               -5%            -5%                    -11%
M4    -19%                          -19%               -17%           -15%                   -21%
R4    -15%                          -15%               -15%           -15%                   -15%

Similar reductions will go into effect for nearly all of the other regions as well.

Lower Prices for M4 Instances – We are lowering the prices for M4 Linux instances by up to 7%.

Visit the EC2 Reserved Instance Pricing Page and the EC2 Pricing Page, or consult the AWS Price List API for all of the new prices.

Learn More
The following blog posts contain additional information about some of the improvements that we have made to the EC2 Reserved Instance model:

You can also read AWS Pricing and the Reserved Instances FAQ to learn more.

Jeff;

AWS X-Ray Update – General Availability, Including Lambda Integration

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-x-ray-update-general-availability-including-lambda-integration/

I first told you about AWS X-Ray at AWS re:Invent in my post, AWS X-Ray – See Inside Your Distributed Application. X-Ray allows you to trace requests made to your application as execution traverses Amazon EC2 instances, Amazon ECS containers, microservices, AWS database services, and AWS messaging services. It is designed for development and production use, and can handle simple three-tier applications as well as applications composed of thousands of microservices. As I showed you last year, X-Ray helps you to perform end-to-end tracing of requests, record a representative sample of the traces, see a map of the services and the trace data, and to analyze performance issues and errors. This helps you understand how your application and its underlying services are performing so you can identify and address the root cause of issues.

You can take a look at the full X-Ray walk-through in my earlier post to learn more.

We launched X-Ray in preview form at re:Invent and invited interested developers and architects to start using it. Today we are making the service generally available, with support in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), South America (São Paulo), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Mumbai) Regions.

New Lambda Integration (Preview)
During the preview period we fine-tuned the service and added AWS Lambda integration, which we are launching today in preview form. Now, Lambda developers can use X-Ray to gain visibility into their function executions and performance. Previously, Lambda customers who wanted to understand their application’s latency breakdown, diagnose slowdowns, or troubleshoot timeouts had to rely on custom logging and analysis.

In order to make use of this new integration, you simply ensure that the functions of interest have execution roles that give them permission to write to X-Ray, and then enable tracing on a function-by-function basis (when you create new functions using the console, the proper permissions are assigned automatically). Then you use the X-Ray service map to see how your requests flow through your Lambda functions, EC2 instances, ECS containers, and so forth. You can identify the services and resources of interest, zoom in, examine detailed timing information, and then remedy the issue.

Each call to a Lambda function generates two or more nodes in the X-Ray map:

Lambda Service – This node represents the time spent within Lambda itself.

User Function – This node represents the execution time of the Lambda function.

Downstream Service Calls – These nodes represent any calls that the Lambda function makes to other services.

To learn more, read Using X-Ray with Lambda.

Now Available
We will begin to charge for the usage of X-Ray on May 1, 2017.

Pricing is based on the number of traces that you record and the number that you analyze (each trace represents a request made to your application). You can record 100,000 traces and retrieve or scan 1,000,000 traces every month at no charge. Beyond that, you pay $5 for every million traces that you record and $0.50 for every million traces that you retrieve for analysis, with more info available on the AWS X-Ray Pricing page. You can visit the AWS Billing Console to see how many traces you have recorded or accessed (data collection began on March 1, 2017).
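Putting those rates together, a monthly bill can be estimated as follows. This is a sketch based on the free tier and per-million rates quoted in this post; the trace volumes are hypothetical:

```python
def xray_monthly_cost(traces_recorded, traces_retrieved):
    """Estimate the monthly X-Ray charge from the rates quoted in this post."""
    FREE_RECORDED, FREE_RETRIEVED = 100_000, 1_000_000
    PER_MILLION_RECORDED, PER_MILLION_RETRIEVED = 5.00, 0.50

    billable_recorded = max(0, traces_recorded - FREE_RECORDED)
    billable_retrieved = max(0, traces_retrieved - FREE_RETRIEVED)
    return (billable_recorded / 1_000_000 * PER_MILLION_RECORDED +
            billable_retrieved / 1_000_000 * PER_MILLION_RETRIEVED)

# Hypothetical month: 5M traces recorded, 3M retrieved for analysis
print(xray_monthly_cost(5_000_000, 3_000_000))  # about $25.50
```

Note that the free-tier allowances are subtracted before the per-million rates apply, so light workloads may owe nothing at all.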

Check out AWS X-Ray and the new Lambda integration today and let me know what you think!

Jeff;

 

AWS Global Summits are Coming!

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-global-summits-are-coming/

One of the first things I got to do when I joined the AWS Blog team was to attend the summit in New York City last August. Meeting all of our customers, checking out Game Day, and getting to see the enthusiasm of the AWS community made me even more excited to be starting my adventure working on the blog with Jeff.

This year’s AWS Summit dates have been announced and whether you are new to the cloud or an experienced user, you can always learn something new at an AWS Summit. These free events, held around the world, are designed to educate you about the AWS platform. Our team has built a program that offers a multitude of learning opportunities covering a broad range of topics, and technical depth. Join us to develop the skills needed to design, deploy, and operate infrastructure and applications on AWS.

We have Summits taking place across North America, Latin America, Asia Pacific, Europe, the Middle East, Japan, and Greater China. To see the full list of cities and dates, check out the AWS Summits page.

Registration is now open for seven locations: San Francisco, Sydney, Singapore, Kuala Lumpur, Seoul, Manila, and Bangkok. You can also subscribe to the AWS Events RSS feed, follow @awscloud, and find us on Facebook.

And you never know, along with learning all sorts of new things at the summit, you just might run into me or Jeff and snag a blog sticker too!

-Ana

Amazon Aurora Update – More Cross Region & Cross Account Support, T2.Small DB Instances, Another Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-aurora-update-more-cross-region-cross-account-support-t2-small-db-instances-another-region/

I’m in catch-up mode again, and would like to tell you about some recent improvements that we have made to Amazon Aurora. As a reminder, Aurora is our high-performance MySQL-compatible (and soon PostgreSQL-compatible) enterprise-class database (read Now Available – Amazon Aurora and Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS for an introduction).

Here are the newest additions to Aurora:

  • Cross Region Snapshot Copy
  • Cross Region Replication for Encrypted Databases
  • Cross Account Encrypted Snapshot Sharing
  • Availability in the US West (Northern California) Region
  • T2.Small Instance Support

Let’s take a quick look at each one!

Cross Region Snapshot Copy
You can now copy Amazon Aurora snapshots (either automatic or manual) from one region to another. Select the snapshot, choose Copy Snapshot from the Snapshot Actions menu, pick a region, and enter a name for the new snapshot:

You can also choose to encrypt the snapshot as part of this operation. To learn more, read Copying a DB Snapshot or DB Cluster Snapshot.

Cross Region Replication for Encrypted Databases
You can already enable encryption when you create a fresh Amazon Aurora DB Instance:

You can now create a read replica in another region with a couple of clicks. You can use this to build multi-region, highly available systems or to move the data closer to the user. To create a cross region read replica, simply select the existing DB Instance and choose Create Cross Region Read Replica from the menu:

Then you choose the destination region in the Network & Security settings, and click on Create:

The destination region must include a DB Subnet Group that encompasses 2 or more Availability Zones.

To learn more about this powerful new feature, read Replicating Amazon Aurora DB Clusters Across AWS Regions.

Cross Account Encrypted Snapshot Sharing
You already have the ability to configure periodic, automated snapshots when you create your Amazon Aurora DB Instance. You can also create snapshots at any desired time with a couple of clicks:

If the DB Instance is encrypted, the snapshot will be as well.

You can now share encrypted snapshots with other AWS accounts. In order to use this feature, the DB Instance (and therefore the snapshot) must be encrypted with a Master Key other than the default RDS key. Select the snapshot and choose Share Snapshot from the Snapshot Actions menu:

Then enter the target AWS Account ID(s), clicking Add after each one, and click on Save to share the snapshot:

You will also need to share the key that was used to encrypt the snapshot. To learn more about this feature, read Sharing a DB Snapshot or DB Cluster Snapshot.

Availability in the US West (Northern California) Region
You can now launch Amazon Aurora DB Instances in the US West (Northern California) Region. Here’s the full list of regions where Aurora is available:

  • US East (Northern Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (Northern California)
  • Canada (Central)
  • EU (Ireland)
  • EU (London)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)
  • Asia Pacific (Mumbai)

See the Amazon Aurora Pricing page for pricing info in each region.

T2.Small Instance Support
You can now launch t2.small DB Instances:

These economical instances are a great fit for dev & test environments and for light production workloads. You can also use them to gain some experience with Amazon Aurora. These instances (along with six others, including the t2.medium that we launched last November) are available in all AWS regions where Aurora is available.

On-Demand pricing for t2.small DB Instances starts at $0.041 per hour in the US East (Northern Virginia) Region, dropping to $0.018 per hour for an All Upfront Reserved Instance with a 3 year term (see the Amazon Aurora Pricing page for more info).
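Using the rates above, the On-Demand versus 3 Year All Upfront comparison works out as follows (the 730-hour month is a common estimating convention, and the reserved figure is the effective hourly rate quoted above):

```python
HOURS_PER_MONTH = 730  # common convention for monthly AWS cost estimates

on_demand = 0.041  # $/hour, t2.small in US East (Northern Virginia)
reserved = 0.018   # effective $/hour, 3 year All Upfront Reserved Instance

monthly_on_demand = on_demand * HOURS_PER_MONTH
monthly_reserved = reserved * HOURS_PER_MONTH
savings_pct = (1 - reserved / on_demand) * 100

print(f"On-Demand: ${monthly_on_demand:.2f}/month")
print(f"Reserved:  ${monthly_reserved:.2f}/month ({savings_pct:.0f}% lower)")
```

For a steady dev/test or light production instance that runs around the clock, the reserved rate cuts the monthly cost by more than half.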

Jeff;

 

AWS Database Migration Service – 20,000 Migrations and Counting

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-database-migration-service-20000-migrations-and-counting/

I first wrote about AWS Database Migration Service just about a year ago in my AWS Database Migration Service post. At that time I noted that over 1,000 AWS customers had already made use of the service as part of their move to AWS.

As a quick recap, AWS Database Migration Service and Schema Conversion Tool (SCT) help our customers migrate their relational data from expensive, proprietary databases and data warehouses (either on premises or in the cloud, but with restrictive licensing terms either way) to more cost-effective cloud-based databases and data warehouses such as Amazon Aurora, Amazon Redshift, MySQL, MariaDB, and PostgreSQL, with minimal downtime along the way. Our customers tell us that they love the flexibility and the cost-effective nature of these moves. For example, moving to Amazon Aurora gives them access to a database that is MySQL and PostgreSQL compatible, at 1/10th the cost of a commercial database. Take a peek at our AWS Database Migration Services Customer Testimonials to see how Expedia, Thomas Publishing, Pega, and Veoci have made use of the service.

20,000 Unique Migrations
I’m pleased to be able to announce that our customers have already used AWS Database Migration Service to migrate 20,000 unique databases to AWS and that the pace continues to accelerate (we reached 10,000 migrations in September of 2016).

We’ve added many new features to DMS and SCT over the past year. Here’s a summary:

Learn More
Here are some resources that will help you to learn more and to get your own migrations underway, starting with some recent webinars:

Migrating From Sharded to Scale-Up – Some of our customers implemented a scale-out strategy in order to deal with their relational workload, sharding their database across multiple independent pieces, each running on a separate host. As part of their migration, these customers often consolidate two or more shards onto a single Aurora instance, reducing complexity, increasing reliability, and saving money along the way. If you’d like to do this, check out the blog post, webinar recording, and presentation.

Migrating From Oracle or SQL Server to Aurora – Other customers migrate from commercial databases such as Oracle or SQL Server to Aurora. If you would like to do this, check out this presentation and the accompanying webinar recording.

We also have plenty of helpful blog posts on the AWS Database Blog:

  • Reduce Resource Consumption by Consolidating Your Sharded System into Aurora – “You might, in fact, save bunches of money by consolidating your sharded system into a single Aurora instance or fewer shards running on Aurora. That is exactly what this blog post is all about.”
  • How to Migrate Your Oracle Database to Amazon Aurora – “This blog post gives you a quick overview of how you can use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to facilitate and simplify migrating your commercial database to Amazon Aurora. In this case, we focus on migrating from Oracle to the MySQL-compatible Amazon Aurora.”
  • Cross-Engine Database Replication Using AWS Schema Conversion Tool and AWS Database Migration Service – “AWS SCT makes heterogeneous database migrations easier by automatically converting source database schema. AWS SCT also converts the majority of custom code, including views and functions, to a format compatible with the target database.”
  • Database Migration—What Do You Need to Know Before You Start? – “Congratulations! You have convinced your boss or the CIO to move your database to the cloud. Or you are the boss, CIO, or both, and you finally decided to jump on the bandwagon. What you’re trying to do is move your application to the new environment, platform, or technology (aka application modernization), because usually people don’t move databases for fun.”
  • How to Script a Database Migration – “You can use the AWS DMS console or the AWS CLI or the AWS SDK to perform the database migration. In this blog post, I will focus on performing the migration with the AWS CLI.”

The documentation includes five helpful walkthroughs:

There’s also a hands-on lab (you will need to register in order to participate).

See You at a Summit
The DMS team is planning to attend and present at many of our upcoming AWS Summits and would welcome the opportunity to discuss your database migration requirements in person.

Jeff;

 

 

Amazon RDS – 2016 in Review

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rds-2016-in-review/

Even though we published 294 posts on this blog last year, I left out quite a number of worthwhile launches! Today I would like to focus on Amazon Relational Database Service (RDS) and recap all of the progress that the teams behind this family of services made in 2016. The team focused on four major areas last year:

  • High Availability
  • Enhanced Monitoring
  • Simplified Security
  • Database Engine Updates

Let’s take a look at each of these areas…

High Availability
Relational databases are at the heart of many types of applications. In order to allow our customers to build applications that are highly available, RDS has offered multi-AZ support since early 2010 (read Amazon RDS – Multi-AZ Deployments For Enhanced Availability & Reliability for more info). Instead of spending weeks setting up multiple instances, arranging for replication, writing scripts to detect instance and network issues, making failover decisions, and bringing a new secondary instance online, you simply opt for Multi-AZ Deployment when you create the Database Instance. RDS also makes it easy for you to create cross-region read replicas.

Here are some of the other enhancements that we made in 2016 in order to help you to achieve high availability:

Enhanced Monitoring
We announced the first big step toward enhanced monitoring at the end of 2015 (New – Enhanced Monitoring for Amazon RDS) with support for MySQL, MariaDB, and Amazon Aurora and then made additional announcements in 2016:

Simplified Security
We want to make it as easy and simple as possible for you to use encryption to protect your data, whether it is at rest or in motion. Here are the enhancements that we made in this area last year:

Database Engine Updates
The open source community and the vendors of commercial databases add features and produce new releases at a rapid pace and we track their work very closely, aiming to update RDS as quickly as possible after each significant release. Here’s what we did in 2016:

Stay Tuned
We’ve already made some big announcements this year (you can find them in the AWS What’s New for 2017) with plenty more in store including the recently announced PostgreSQL-compatible version of Aurora, so stay tuned! You may also want to subscribe to the AWS Database Blog for detailed posts that will show you how to get the most from RDS, Amazon Aurora, and Amazon ElastiCache.

Jeff;

PS – This post does not include all of the enhancements that we made to AWS Database Migration Service or the Schema Conversion Tool last year. I’m working on another post on that topic.

Megaupload Case Takes Toll on Finn Batato, But He’ll Keep Fighting

Post Syndicated from Andy original https://torrentfreak.com/megaupload-case-takes-toll-on-finn-batato-but-hell-keep-fighting-170227/

Whenever there’s a new headline about the years-long prosecution of Megaupload, it is usually Kim Dotcom’s image adorning publications around the world. In many ways, the German-born entrepreneur is the face of the United States’ case against the defunct storage site, and he appears to like it that way.

Thanks to his continuous presence on Twitter, regular appearances in the media, alongside promotion of new file-sharing platforms, one might be forgiven for thinking Dotcom was fighting the US single-handedly. But quietly and very much in the background, three other men are also battling for their freedom.

Megaupload programmers Mathias Ortmann and Bram van der Kolk face a similar fate to Dotcom but have stayed almost completely silent since their arrests in 2012. Former site advertising manager Finn Batato, whose name headlines the entire case (US v. Finn Batato), has been a little more vocal though, and from recent comments we learn that the US prosecution is taking its toll.

Seven years ago, before the raid, Batato was riding the crest of a wave as Megaupload’s CMO. According to the FBI he pocketed $630,000 in 2010 and was regularly seen out with Dotcom having fun, racing around the Nürburgring’s Nordschleife track with Formula 1 star Kimi Raikkonen, for example. But things are different now.

Finn with Kimi Raikkonen

While still involved with Mega, the new file-sharing site that Dotcom founded and then left after what appears to be an acrimonious split, Batato is reportedly feeling the pressure. In a new interview with Newshub, the marketing expert says that his marriage is on the rocks, a direct result of the US case against him.

According to Batato, he’s now living in someone else’s house, something he hasn’t done “for 25 years.” It’s a far cry from the waterside luxury being enjoyed by Dotcom.

Batato met wife Anastasia back in 2012, not long after the raid and while he was still under house arrest. The pair married in 2015 and have two children, Leo and Oskar.

“The constant pressure over your head – not knowing what is there to come, is very hard, very tough,” Batato said in an earlier interview with NZHerald.

“Everything that happens in our life happens with that big black cloud over our heads which especially has an impact on me and my mood because I can’t just switch it off. If everything goes down the hill, maybe I will see [my sons] once every month in a prison cell. That breaks my heart. I can’t enjoy it as much as I would want to. It’s highly stressful.”

Since then, Batato has been busy. While working as Mega’s Chief Marketing Officer, the German citizen has been learning about the law. He’s had to. Unlike Dotcom who can retain the best lawyers in the game, Batato says he has few resources.

What savings he had were seized on the orders of the United States in Hong Kong back in 2012, and he previously admitted to having to check his bank account before buying groceries. As a result he’s been conducting his own legal defense for almost two years.

In 2015 he reportedly received praise while doing so, with lawyers appearing for his co-defendants commending him when he stood up to argue a point during a Megaupload hearing. “I was kind of proud about that,” he said.

Like Dotcom (with whom he claims to be on “good terms”), Batato insists that he’s done nothing wrong. He shares his former colleague’s optimism that he won’t be extradited and will take his case to the Supreme Court, should all else fail.

That may be necessary. Last week, the New Zealand High Court determined that Batato and his co-defendants can be extradited to the US, albeit not on copyright grounds. Justice Murray Gilbert agreed with the US Government’s position that their case has fraud at its core, an extraditable offense.

In the short term, the case is expected to move to the Court of Appeal and, depending on the outcome there, potentially to the Supreme Court. Either way, this case still has years to run, with plenty more legal appearances for Batato. He won’t be doing it with the legal backup enjoyed by Dotcom, but he does share his determination.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Now Available – I3 Instances for Demanding, I/O Intensive Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/

On the first day of AWS re:Invent I published an EC2 Instance Update and promised to share additional information with you as soon as I had it.

Today I am happy to be able to let you know that we are making six sizes of our new I3 instances available in fifteen AWS regions! Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block size and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches. When compared to the I2 instances, I3 instances deliver storage that is less expensive and more dense, with the ability to deliver substantially more IOPS and more network bandwidth per CPU core.

The Specs
Here are the instance sizes and the associated specs:

Instance Name   vCPU Count   Memory      Instance Storage (NVMe SSD)   Price/Hour
i3.large        2            15.25 GiB   0.475 TB                      $0.15
i3.xlarge       4            30.5 GiB    0.950 TB                      $0.31
i3.2xlarge      8            61 GiB      1.9 TB                        $0.62
i3.4xlarge      16           122 GiB     3.8 TB (2 disks)              $1.25
i3.8xlarge      32           244 GiB     7.6 TB (4 disks)              $2.50
i3.16xlarge     64           488 GiB     15.2 TB (8 disks)             $4.99

The prices shown are for On-Demand instances in the US East (Northern Virginia) Region; see the EC2 pricing page for more information.
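As a quick sanity check on the economics, the table above can be turned into a small cost calculator. The prices are the US East (Northern Virginia) On-Demand rates listed above; the 730-hour month is a simplifying assumption of mine:

```python
# Rough monthly On-Demand cost and storage cost density for the I3
# family, using the US East (Northern Virginia) prices listed above.
I3_INSTANCES = {
    "i3.large":    {"price_per_hour": 0.15, "storage_tb": 0.475},
    "i3.xlarge":   {"price_per_hour": 0.31, "storage_tb": 0.950},
    "i3.2xlarge":  {"price_per_hour": 0.62, "storage_tb": 1.9},
    "i3.4xlarge":  {"price_per_hour": 1.25, "storage_tb": 3.8},
    "i3.8xlarge":  {"price_per_hour": 2.50, "storage_tb": 7.6},
    "i3.16xlarge": {"price_per_hour": 4.99, "storage_tb": 15.2},
}

def monthly_cost(name, hours=730):
    """Approximate monthly On-Demand cost, assuming a 730-hour month."""
    return I3_INSTANCES[name]["price_per_hour"] * hours

def cost_per_tb_hour(name):
    """Hourly price divided by NVMe capacity, in $/TB/hour."""
    spec = I3_INSTANCES[name]
    return spec["price_per_hour"] / spec["storage_tb"]
```

One thing this makes obvious: the per-TB-hour cost is essentially flat across the family (roughly $0.31–$0.33), so you can pick a size based purely on your workload's compute and memory needs.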

I3 instances are available in On-Demand, Reserved, and Spot form in the US East (Northern Virginia), US West (Oregon), US West (Northern California), US East (Ohio), Canada (Central), South America (São Paulo), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Sydney), and AWS GovCloud (US) Regions. You can also use them as Dedicated Hosts and as Dedicated Instances.

These instances support Hardware Virtualization (HVM) AMIs only, and must be run within a Virtual Private Cloud. In order to benefit from the performance made possible by the NVMe storage, you must run one of the following operating systems:

  • Amazon Linux AMI
  • RHEL – 6.5 or better
  • CentOS – 7.0 or better
  • Ubuntu – 16.04 or 16.10
  • SUSE 12
  • SUSE 11 with SP3
  • Windows Server 2008 R2, 2012 R2, and 2016

The I3 instances offer up to 8 NVMe SSDs. In order to achieve the best possible throughput and to get as many IOPS as possible, you can stripe multiple volumes together, or spread the I/O workload across them in another way.
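For example, on an i3.16xlarge you could stripe all eight devices into a single RAID-0 volume with mdadm. The sketch below only prints the commands rather than running them; the device names are an assumption on my part (NVMe naming can vary), so confirm them with `lsblk` before doing this for real:

```shell
# Build the list of the 8 assumed NVMe device names on an i3.16xlarge,
# then print the mdadm RAID-0 / filesystem / mount commands that would
# stripe them into a single volume. Run the printed commands as root
# only after verifying device names with `lsblk`.
DEVICES=""
for i in 0 1 2 3 4 5 6 7; do
  DEVICES="$DEVICES /dev/nvme${i}n1"
done

echo "mdadm --create /dev/md0 --level=0 --raid-devices=8$DEVICES"
echo "mkfs.ext4 -F /dev/md0"
echo "mount /dev/md0 /mnt/nvme"
```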

Each vCPU (Virtual CPU) is a hardware hyperthread on an Intel E5-2686 v4 (Broadwell) processor running at 2.3 GHz. The processor supports the AVX2 instructions, along with Turbo Boost and NUMA.

Go For Launch
The I3 instances are available today in fifteen AWS regions and you can start to use them right now.

Jeff;

 

AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-direct-connect-update-link-aggregation-groups-bundles-and-reinvent-recap/

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to introduce our new Direct Connect Bundles and share more about how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.
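The minimum-ports rule can be illustrated with a toy model. This is just arithmetic for intuition, not an AWS API; the 10 Gbps port speed is an assumed default:

```python
# Toy model of a LAG: the aggregate connection is operational only when
# the number of active member ports meets the configured minimum.
def lag_state(port_up, min_links, port_speed_gbps=10):
    """port_up: list of booleans, one per member port (up to 4 in a LAG)."""
    active = sum(port_up)
    return {
        "active_ports": active,
        "operational": active >= min_links,
        "capacity_gbps": active * port_speed_gbps,
    }
```

With `min_links=2`, for instance, a four-port LAG remains operational (at reduced capacity) after losing two ports, but is taken down once a third port fails, rather than limping along below the threshold you consider usable.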

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.

Here’s how you can make use of link aggregation from the Console. First, creating a new LAG from scratch:

And second, creating a LAG from existing connections:


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience:

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud:

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and on VPN access to AWS:

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment:

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia):

It supported all of the re:Invent venues:

Here are some video resources that will help you to learn more about how we did this, and how you can do it yourself:

Jeff;

1984 is the new Bible in the age of Trump

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/1984-is-new-bible.html

In the age of Trump, Orwell’s book 1984 is becoming the new Bible: a religious text which few read, but which many claim supports their beliefs. A good demonstration is this CNN op-ed, in which the author describes Trump as being Orwellian, but mostly just because Trump is a Republican.

Trump’s populist attacks against our (classically) liberal world order are indeed cause for concern. His assault on the truth is indeed a bit Orwellian. But it’s op-eds like this one at CNN that are part of the problem.

While the author of the op-ed spends much time talking about his dogs (“Winston”, “Julia”), and how much he hates Trump, he spends little time on the core thesis of “Orwellianism”. When he does, it’s mostly about old political disagreements. For example, the op-ed calls Trump’s cabinet appointees Orwellian simply because they are Republicans:

He has provided us with Betsy DeVos, a secretary of education nominee who is widely believed to oppose public education, and who promotes the truly Orwellian-sounding concept of “school choice,” a plan that seems well-intentioned but which critics complain actually siphons much-needed funds from public to private education institutions.

Calling school-choice “Orwellian” is absurd. Republicans want to privatize more, and the Democrats want the state to run more of the economy. It’s the same disagreement that divides the two parties on almost any policy issue. When you call every little political disagreement “Orwellian” then you devalue the idea. I’m Republican, so of course I’d argue that it’s the state-run education system giving parents zero choice that is the thing that’s Orwellian here. And now we bicker, both convinced that Orwell is on our side in this debate. #WhatWouldOrwellDo

If something is “Orwellian”, then you need to do a better job demonstrating this, making the analogy clear. For example, last year I showed how, in response to a political disagreement, Wikipedia and old newspaper articles were edited in order to conform to the new political reality. This is a clear example of Winston Smith’s job of changing the past in order to match the present.

But even such clear documentation is probably powerless to change anybody’s mind. Whether “changing the text of old newspaper articles to fit modern politics” is Orwellian depends entirely on your politics, whether the changes agree with your views. Go follow the link [*] and see for yourself whether you agree with the change (replacing the word “refugee” in old articles with “asylee” instead).

It’s this that Orwell was describing. Doublethink wasn’t something forced onto us by a totalitarian government so much as something we willingly adopted ourselves. The target of Orwell’s criticism wasn’t them, the totalitarian government, but us, the people who willingly went along with it. Doublethink is what people in both parties (Democrats and Republicans) do equally, regardless of who resides in the White House.

Trump is an alt-Putin. He certainly wants to become a totalitarian. But at this point, his lies are juvenile and transparent, which even his supporters find difficult to believe [*]. The most Orwellian thing about him is what he inherits from Obama [*]: the two-Party system, perpetual war, omnipresent surveillance, the propaganda system, and our nascent cyber-police-state [*].
Conclusion

Yes, people should read 1984 in the age of Trump, not because he’s created the Orwellian system, but because he’s trying to exploit the system that’s already there. If you believe he’s Orwellian because he’s Republican, as the foolish author of that CNN op-ed believes, then you’ve missed the point of Orwell’s novel completely.

Bonus: Doing a point-by-point rebuttal gets boring, and makes the post long, but ought to be done out of a sense of completeness. The following paragraph contains the most “Orwell” points, but it’s all essentially nonsense:

We are living in this state of flux in real life. Russia was and likely is our nation’s fiercest rival, yet as a candidate, President Trump famously stated, “Russia, if you’re listening, I hope you’re able to find the 30,000 [Clinton] emails that are missing.” He praises Putin but states that perhaps he may not actually like him when they meet. WikiLeaks published DNC data alleged to have been obtained by Russian operatives, but the election was not “rigged.” A recount would be “ridiculous,” yet voter fraud was rampant. Trusted sources of information are “fake news,” and somehow Chelsea Manning, WikiLeaks’ most notable whistleblower, is now an “ungrateful traitor.”

Trump’s asking Russia to find the missing emails was clearly a joke. Trump’s speech is marked by exaggeration and jokes like this. That Trump’s rivals insist his jokes be taken seriously is the problem here, more than what he’s joking about.

The correct Orwellian analogy to draw here is the Eurasia (Russia) and Eastasia (China) parallels. Under Obama, China was a close trading partner while Russia was sanctioned for invading Ukraine. Under Trump, it’s China that is our top rival while Russia/Putin is more of a friend. What’s Orwellian is how polls [*] of what Republicans think of Russia have gone through a shift: “We’ve always been at war with Eastasia”.

The above paragraph implies Trump said the election wasn’t “rigged”. No, Trump still says the election was rigged, even after he won it. [*] It’s Democrats who’ve flip-flopped on their opinion whether the election was “rigged” after Trump’s win. Trump attacks the election system because that’s what illiberal totalitarians always do, not because it’s Orwellian.

“Recounts” and “fraudulent votes” aren’t the same thing. Somebody registered to vote, and voting, in multiple states is not something that’ll be detected with a “recount” in any one state, for example. Trump’s position on voter fraud is absurd, but it’s not Orwellian.

Instead of these small things, what’s Orwellian is Trump’s grander story of a huge popular “movement” behind him. That’s why his inauguration numbers are important. That’s why losing the popular vote is important. It’s why he keeps using the word “movement” in all his speeches. It’s the big lie he’s telling that makes him Orwellian, not all the small lies.

Trusted sources of news are indeed “fake news”. The mainstream media has problems, whether it’s their tendency to sensationalism, or the way they uncritically repeat government propaganda (“according to senior government officials”) regardless of which Party controls the White House. Indeed, Orwell himself was a huge critic of the press — sometimes what they report is indeed “fake news”, not simply a mistake but something that violates the press’s own standards.

Yes, the President or high-level government officials have no business attacking the press the way Trump does, regardless of whether it deserves it. Trump indeed had a few legitimate criticisms of the press, but his attacks have quickly devolved into attacking the press whenever it’s simply Truth disagreeing with Trump’s lies. It’s the attacks against the independent press that are the problem, not the label “fake news”.

As Wikipedia documents, “the term ‘traitor’ has been used as a political epithet, regardless of any verifiable treasonable action”. Despite being found not guilty of “aiding the enemy”, Chelsea Manning was convicted of espionage. Reasonable people can disagree about Manning’s action — while you may not like the “traitor” epithet, it’s not an Orwellian term.

Instead, what is Orwellian is insisting Manning was a “whistleblower”. Reasonable people disagree with that description. Manning didn’t release specific diplomatic cables demonstrative of official wrongdoing, but the entire dump of all cables going back more than a decade. It’s okay to call Manning a whistleblower (I might describe her as such), but it’s absurd to claim this is some objective truth. For example, the Wikipedia article [*] on Chelsea Manning documents several people calling her a whistleblower, but does not itself use that term to describe Manning. The struggle between objective and subjective “Truth” is a big part of Orwell’s work.

What I’m demonstrating here in this bonus section is the foolishness of that CNN op-ed. He hates Trump, but entirely misunderstands Orwell. He does a poor job pinning down Trump on exactly how he fits the Orwellian mode. He writes like somebody who hasn’t actually read the book at all.

Amazon Cloud Directory – A Cloud-Native Directory for Hierarchical Data

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-cloud-directory-a-cloud-native-directory-for-hierarchical-data/

Our customers have traditionally used directories (typically Active Directory Lightweight Directory Services or LDAP-based) to manage hierarchically organized data. Device registries, course catalogs, network configurations, and user directories are often represented as hierarchies, sometimes with multiple types of relationships between objects in the same collection. For example, a user directory could have one hierarchy based on physical location (country, state, city, building, floor, and office), a second one based on projects and billing codes, and a third based on the management chain. However, traditional directory technologies do not support the use of multiple relationships in a single directory; you’d have to create and maintain additional directories if you needed to do this.

Scale is another important challenge. The fundamental operations on a hierarchy involve locating the parent or the child object of a given object. Given that hierarchies can be used to represent large, nested collections of information, these fundamental operations must be as efficient as possible, regardless of how many objects there are or how deeply they are nested. Traditional directories can be difficult to scale, and the pain only grows if you are using two or more in order to represent multiple hierarchies.

New Amazon Cloud Directory
Today we are launching Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data as described above. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

Cloud Directory is a building block that already powers other AWS services including Amazon Cognito and AWS Organizations. Because it plays such a crucial role within AWS, it was designed with scalability, high availability, and security in mind (data is encrypted at rest and while in transit).

Amazon Cloud Directory is a managed service; you don’t need to think about installing or patching software, managing servers, or scaling any storage or compute infrastructure. You simply define the schemas, create a directory, and then populate your directory by making calls to the Cloud Directory API. This API is designed for speed and for scale, with efficient, batch-based read and write functions.

The long-lasting nature of a directory, combined with the scale and the diversity of use cases that it must support over its lifetime, brings another challenge to light. Experience has shown that static schemas lack the flexibility to adapt to the changes that arise with scale and with new use cases. In order to address this challenge and to make the directory future-proof, Cloud Directory is built around a model that explicitly makes room for change. You simply extend your existing schemas by adding new facets. This is a safe operation that leaves existing data intact so that existing applications will continue to work as expected. Combining schemas and facets allows you to represent multiple hierarchies within the same directory. For example, your first hierarchy could mirror your org chart. Later, you could add an additional facet to track some additional properties for each employee, perhaps a second phone number or a social network handle. After that, you could create a geographically oriented hierarchy within the same data: countries, states, buildings, floors, offices, and employees.

As I mentioned, other parts of AWS already use Amazon Cloud Directory. Cognito User Pools use Cloud Directory to offer application-specific user directories with support for user sign-up, sign-in, and multi-factor authentication. With Cognito User Pools, you can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully-managed service that scales to support hundreds of millions of users. Similarly, AWS Organizations uses Cloud Directory to support creation of groups of related AWS accounts and makes good use of multiple hierarchies to enforce a wide range of policies.

Before we dive in, let’s take a quick look at some important Amazon Cloud Directory concepts:

Directories are named, and must have at least one schema. Directories store objects, relationships between objects, schemas, and policies.

Facets model the data by defining required and allowable attributes. Each facet provides an independent scope for attribute names; this allows multiple applications that share a directory to safely and independently extend a given schema without fear of collision or confusion.

Schemas define the “shape” of data stored in a directory by making reference to one or more facets. Each directory can have one or more schemas. Schemas exist in one of three phases (Development, Published, or Applied). Development schemas can be modified; Published schemas are immutable. Amazon Cloud Directory includes a collection of predefined schemas for people, organizations, and devices. The combination of schemas and facets leaves the door open to significant additions to the initial data model and subject area over time, while ensuring that existing applications will still work as expected.

Attributes are the actual stored data. Each attribute is named and typed; data types include Boolean, binary (blob), date/time, number, and string. Attributes can be mandatory or optional, and immutable or editable. The definition of an attribute can specify a rule that is used to validate the length and/or content of an attribute before it is stored or updated. Binary and string objects can be length-checked against minimum and maximum lengths. A rule can indicate that a string must have a value chosen from a list, or that a number is within a given range.
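A rule of that kind can be pictured as a small validator. This is purely illustrative; the function name and the rule shapes below are my own, not the actual Cloud Directory rule format:

```python
# Illustrative attribute validation: length bounds for strings/binary,
# an allowed-value list, and a numeric range -- mirroring the kinds of
# rules described above. Returns True when the value passes every rule.
def validate(value, rule):
    if "min_len" in rule and len(value) < rule["min_len"]:
        return False
    if "max_len" in rule and len(value) > rule["max_len"]:
        return False
    if "allowed" in rule and value not in rule["allowed"]:
        return False
    if "range" in rule:
        lo, hi = rule["range"]
        if not (lo <= value <= hi):
            return False
    return True
```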

Objects are stored in directories, have attributes, and are defined by a schema. Each object can have multiple children and multiple parents, as specified by the schema. You can use the multiple-parent feature to create multiple, independent hierarchies within a single directory (sometimes known as a forest of trees).
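To make the multiple-parent idea concrete, here is a minimal in-memory model of a directory in which one object sits in two hierarchies at once. This is purely illustrative, not the Cloud Directory data model or API; the class and hierarchy names are made up:

```python
# Minimal DAG model: each object records its parents, so one employee
# object can appear under both a location hierarchy and a reporting
# hierarchy at the same time.
from collections import defaultdict

class TinyDirectory:
    def __init__(self):
        self.parents = defaultdict(set)
        self.children = defaultdict(set)

    def attach(self, parent, child):
        """Link child under parent; an object may have many parents."""
        self.parents[child].add(parent)
        self.children[parent].add(child)

    def list_parents(self, obj):
        return sorted(self.parents[obj])

d = TinyDirectory()
d.attach("Seattle/Floor-3", "alice")   # location hierarchy
d.attach("Eng/Platform",    "alice")   # reporting hierarchy
```

The key point is that "alice" is stored once but is reachable through both trees, which is exactly what you'd otherwise need two separate directories to express.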

Policies can be specified at any level of the hierarchy, and are inherited by child objects. Cloud Directory does not interpret or assign any meaning to policies, leaving this up to the application. Policies can be used to specify and control access permissions, user rights, device characteristics, and so forth.

Creating a Directory
Let’s create a directory! I start by opening up the AWS Directory Service Console and clicking on the first Create directory button:

I enter a name for my directory (users), choose the person schema (which happens to have two facets; more about this in a moment), and click on Next:

The predefined (AWS) schema will be copied to my directory. I give it a name and a version, and click on Publish:

Then I review the configuration and click on Launch:

The directory is created, and I can then write code to add objects to it.

Pricing and Availability
Cloud Directory is available now in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Singapore) Regions and you can start using it today.

Pricing is based on three factors: the amount of data that you store, the number of reads, and the number of writes (these prices are for US East (Northern Virginia)):

  • Storage – $0.25 / GB / month
  • Reads – $0.0040 for every 10,000 reads
  • Writes – $0.0043 for every 1,000 writes
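Putting the three dimensions together, a monthly bill is straightforward to estimate (the rates are the US East (Northern Virginia) prices listed above; the workload numbers in the test are invented for illustration):

```python
# Estimate a monthly Cloud Directory bill from the three pricing
# dimensions listed above: storage ($0.25/GB/month), reads ($0.0040
# per 10,000), and writes ($0.0043 per 1,000).
def monthly_bill(storage_gb, reads, writes):
    return (storage_gb * 0.25
            + reads / 10_000 * 0.0040
            + writes / 1_000 * 0.0043)
```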

In the Works
We have some big plans for Cloud Directory!

While the priorities can change due to customer feedback, we are working on cross-region replication, AWS Lambda integration, and the ability to create new directories via AWS CloudFormation.

Jeff;

AWS IPv6 Update – Global Support Spanning 15 Regions & Multiple AWS Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-ipv6-update-global-support-spanning-15-regions-multiple-aws-services/

We’ve been working to add IPv6 support to many different parts of AWS over the last couple of years, starting with Elastic Load Balancing, AWS IoT, Amazon Route 53, Amazon CloudFront, AWS WAF, and S3 Transfer Acceleration, all building up to last month’s announcement of IPv6 support for EC2 instances in Virtual Private Clouds (initially available for use in the US East (Ohio) Region).

Today I am happy to share the news that IPv6 support for EC2 instances in VPCs is now available in a total of fifteen regions, along with Application Load Balancer support for IPv6 in nine of those regions.

You can now build and deploy applications that can use IPv6 addresses to communicate with servers, object storage, load balancers, and content distribution services. In accord with the latest guidelines for IPv6 support from Apple and other vendors, your mobile applications can now make use of IPv6 addresses when they communicate with AWS.
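On the client side, handling both address families is straightforward. Here is a stdlib-only illustration; the addresses used in the test are from the IPv4/IPv6 documentation ranges, not real AWS endpoints:

```python
# Classify a resolved endpoint address by IP family -- the kind of
# branch a dual-stack client might make before opening a socket.
import ipaddress

def address_family(addr: str) -> str:
    return "IPv6" if ipaddress.ip_address(addr).version == 6 else "IPv4"
```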

IPv6 Now in 15 Regions
IPv6 support for EC2 instances in new and existing VPCs is now available in the US East (Northern Virginia), US East (Ohio), US West (Northern California), US West (Oregon), South America (São Paulo), Canada (Central), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Mumbai), and AWS GovCloud (US) Regions and you can start using it today!

You can enable IPv6 from the AWS Management Console when you create a new VPC:

Application Load Balancer
Application Load Balancers in the US East (Northern Virginia), US West (Northern California), US West (Oregon), South America (São Paulo), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and AWS GovCloud (US) Regions now support IPv6 in dual-stack mode, making them accessible via IPv4 or IPv6 (we expect to add support for the remaining regions within a few weeks).

Simply enable the dualstack option when you configure the ALB and then make sure that your security groups allow or deny IPv6 traffic in accord with your requirements. Here’s how you select the dualstack option:

You can also enable this option by running the set-ip-address-type command or by making a call to the SetIpAddressType function. To learn more about this new feature, read the Load Balancer Address Type documentation.

IPv6 Recap
Here are the IPv6 launches that we made in the run-up to the launch of IPv6 support for EC2 instances in VPCs:

CloudFront, WAF, and S3 Transfer Acceleration – This launch let you enable IPv6 support for individual CloudFront distributions. Newly created distributions supported IPv6 by default and existing distributions could be upgraded with a couple of clicks (if you are using Route 53 alias records, you also need to add an AAAA record to the domain). With IPv6 support enabled, the new addresses will show up in the CloudFront Access Logs. The launch also let you use AWS WAF to inspect requests that arrive via IPv4 or IPv6 addresses and to use a new, dual-stack endpoint for S3 Transfer Acceleration.
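The Route 53 side of that upgrade amounts to adding an AAAA alias record alongside the existing A record. A sketch with the AWS CLI follows; the domain name, hosted zone ID, and distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for all CloudFront alias targets:

```shell
# change-batch.json — AAAA alias record pointing at a CloudFront distribution
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "AAAA",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d111111abcdef8.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1D633PJN98FT9 \
    --change-batch file://change-batch.json
```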

Route 53 – This launch added support for DNS queries over IPv6 (support for the requisite AAAA records was already in place). A subsequent launch added support for Health Checks of IPv6 Endpoints, allowing you to monitor the health of the endpoints and to arrange for DNS failover.

IoT – This product launch included IPv6 support for message exchange between devices and AWS IoT.

S3 – This launch added support for access to S3 buckets via dual-stack endpoints.
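The dual-stack endpoints follow the pattern s3.dualstack.<region>.amazonaws.com; for example (the bucket name is a placeholder):

```shell
# List a bucket's contents over IPv4 or IPv6 via the dual-stack endpoint
aws s3 ls s3://my-bucket \
    --endpoint-url https://s3.dualstack.us-east-1.amazonaws.com
```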

Elastic Load Balancing – This launch added publicly routable IPv6 addresses for Elastic Load Balancers.

Jeff;

 

New SOC 2 Report Available: Confidentiality

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/new-soc-2-report-available-confidentiality/


As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

We’ve been publishing SOC 2 Security and Availability Trust Principle reports for years now, and the Confidentiality criteria are complementary to the Security and Availability criteria. The SOC 2 Confidentiality Trust Principle, developed by the American Institute of CPAs (AICPA) Assurance Services Executive Committee (ASEC), outlines additional criteria focused on further safeguarding data, limiting and reducing access to authorized users, and addressing the effective and timely disposal of customer content after deletion by the customer.

The AWS SOC Report covers the data centers in the US East (N. Virginia), US West (Oregon), US West (N. California), AWS GovCloud (US), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (São Paulo) Regions. See AWS Global Infrastructure for more information.

To request this report:

  1. Sign in to your AWS account.
  2. In the list of services under Security, Identity, and Compliance, choose Compliance Reports, and on the next page choose the report you would like to review. Note that you might need to request approval from Amazon for some reports. Requests are reviewed and approved by Amazon within 24 hours.

Want to know more? See answers to some frequently asked questions about the AWS SOC program.  

– Chad

AWS Hot Startups – A Year in Review

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-a-year-in-review/

It is the end of 2016! Tina Barr has a great roundup of all the startups we featured this year. Check it out and see if you managed to read about them all, then come back in January when we start all over again.

Also, I wanted to thank Elsa Mayer for her hard work in helping out with the startup posts.

-Ana


 

What a year it has been for startups! We began the Hot Startups series in March as a way to feature exciting AWS-powered startups and the motivation behind the products and services they offer. Since then we’ve featured 27 startups across a wide range of industries including healthcare, commerce, social, finance, and many more. Each startup offers a unique product to its users – whether it’s an entertaining app or website, an educational platform, or a product to help businesses grow, startups have the ability to reach customers far and wide.

The startups we showcased this year are headquartered all over the world.


In case you missed any of the posts, here is a list of all the hot startups we featured in 2016:

March

  • Intercom – One place for every team in an Internet business to see and talk to customers, personally, at scale.
  • Tile – A popular key locator product that works with an app to help people find their stuff.
  • Bugsnag – A tool to capture and analyze runtime errors in production web and mobile applications.
  • DroneDeploy – Making the sky productive and accessible for everyone.

April

  • Robinhood – Free stock trading to democratize access to financial markets.
  • Dubsmash – Bringing joy to communication through video.
  • Sharethrough – An all-in-one native advertising platform.

June

  • Shaadi.com – Helping South Asians to find a companion for life.
  • Capillary – Boosting customer engagement for e-commerce.
  • Monzo – A mobile-first bank.

July

  • Depop – A social mobile marketplace for artists and friends to buy and sell products.
  • Nextdoor – Building stronger and safer neighborhoods through technology.
  • Branch – Provides free deep linking technology for mobile app developers to gain and retain users.

August

  • Craftsvilla – Offering a platform to purchase ethnic goods.
  • SendBird – Helping developers build 1-on-1 messaging and group chat quickly.
  • Teletext.io – A solution for content management, without the system.
  • Wavefront – A cloud-based analytics platform.

September

  • Funding Circle – The leading online marketplace for business loans.
  • Karhoo – A ride comparison app.
  • nearbuy – Connecting customers and local merchants across India.

October

  • Optimizely – Providing web and mobile A/B testing for the world’s leading brands.
  • Touch Surgery – Building technologies for the global surgical community.
  • WittyFeed – Creating viral content.

November

  • AwareLabs – Helping small businesses build smart websites.
  • Doctor On Demand – Delivering fast, easy, and cost-effective access to top healthcare providers.
  • Starling Bank – Mobile banking for the next generation.
  • VigLink – Powering content-driven commerce.

Thank you for keeping up with us as we shared these startups’ amazing stories throughout the year. Be sure to check back here in January for our first hot startups of 2017!

-Tina Barr

Opening Soon – AWS Office in Dubai to Support Cloud Growth in UAE

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/opening-soon-aws-office-in-dubai-to-support-cloud-growth-in-uae/

The AWS office in Dubai, UAE, will open on January 1, 2017.

We’ve been working with the Dubai Investment Development Agency (Dubai FDI) to launch the office, and plan to support startups, government institutions, and some of the Middle East’s historic and most established enterprises as they make the transition to the AWS Cloud.

Resources in Dubai
The office will be staffed with account managers, solutions architects, partner managers, professional services consultants, and support staff to allow customers to interact with AWS in their local setting and language.

In addition to access to the AWS team, customers in the Middle East have access to several important AWS programs including AWS Activate and AWS Educate:

  • AWS Activate is designed to provide startups with resources that will help them to get started on AWS, including up to $100,000 (USD) in AWS promotional credits.
  • AWS Educate is a global initiative designed to provide students and educators with the resources needed to accelerate cloud-based learning endeavors.
  • AWS Training and Certification helps technologists to develop the skills to design, deploy, and operate infrastructure in the AWS Cloud.

We are also planning to host AWSome Days and other large-scale training events in the region.

Customers in the Middle East
Middle Eastern organizations were among the earliest adopters of cloud services when AWS launched in 2006. Customers based in the region are using AWS to run everything from development and test environments to Big Data analytics, from mobile, web and social applications to enterprise business applications and mission critical workloads.

AWS counts some of the UAE’s most well-known and fastest growing businesses as customers, including PayFort and Careem, as well as government institutions and some of the largest companies in the Middle East, such as flydubai and Middle East Broadcasting Center.

Careem is the leading ride booking app in the Middle East and North Africa. Launched in 2012, Careem runs entirely on AWS and over the past three years has grown 10x in size every year, growth that would not have been possible without AWS. After starting with one city, Dubai, Careem now serves millions of commuters in 43 cities across the Middle East, North Africa, and Asia. Careem uses over 500 EC2 instances along with a number of other services, including Amazon S3, Amazon DynamoDB, and AWS Elastic Beanstalk.

PayFort is a startup based in the United Arab Emirates that provides payment solutions to customers across the Middle East through its payments gateway, FORT. The platform enables organizations to accept online payments via debit and credit cards. PayFort counts Etihad Airways, Ferrari World, and Souq.com among its customers. PayFort chose to run FORT entirely on AWS technologies and as a result is saving 32% over their on-premises costs. Although cost was key for PayFort, it turns out that they chose AWS due to the high level of security that they could achieve with the platform. Compliance with the Payment Card Industry Data Security Standard (PCI DSS) and International Organization for Standardization (ISO) 27001 is central to PayFort’s payment services, and both are available with AWS (we were actually the first cloud provider to reach compliance with version 3.1 of PCI DSS).

flydubai is the leading low-cost airline in the Middle East, with over 90 destinations, and was launched by the government of Dubai in 2009. flydubai chose to build their online check-in platform on AWS and went from design to production in four months; the platform is now used by thousands of passengers a day, a timeline that would not have been possible without the cloud. Given the seasonal fluctuations in demand for flights, flydubai also needs an IT provider that allows it to cope with spikes in demand. Using AWS allows them to do this, and lead times for new infrastructure services have been reduced from up to 10 weeks to a matter of hours.

Partners
The AWS Partner Network of consulting and technology partners in the region helps our customers to get the most from the cloud. The network includes global members like CSC as well as prominent regional members such as Redington.

Redington is an AWS Consulting Partner and is the Master Value Added Distributor for AWS in the Middle East and North Africa. They are also an Authorized Commercial Reseller of AWS cloud technologies. Redington is helping organizations in the MEA region with cloud assessment, cloud readiness, design, implementation, migration, deployment, and optimization of cloud resources. They also have an ecosystem of partners, including ISVs, with experienced and certified AWS engineers who have cross-domain experience.

Join Us
This announcement is part of our continued expansion across Europe, the Middle East, and Asia. As part of our investment in these areas, we created over 10,000 new jobs in 2015. If you are interested in joining our team in Dubai or in any other location around the world, check out the Amazon Jobs site.

Jeff;