
Streaming Service iflix Buys Shows Based on Piracy Data

Post Syndicated from Ernesto original https://torrentfreak.com/streaming-service-iflix-buys-shows-based-on-piracy-data-170819/

When major movie and TV companies discuss piracy they often mention the massive losses incurred as a result of unauthorized downloads and streams.

However, this unofficial market also offers a valuable pool of often publicly available data on the media consumption habits of a relatively young generation.

Many believe that piracy is in part a market signal showing copyright holders what consumers want. This makes piracy statistics key business intelligence, which some companies have started to realize.

Netflix, for example, previously said that their offering is partly based on what shows do well on BitTorrent networks and other pirate sites. In addition, the streaming service also uses piracy to figure out how much they can charge in a country. They are not alone.

Other major entertainment companies also keep a close eye on piracy, using this data to their advantage. This includes the Asia-based streaming portal iflix, which recently secured $133 million in funding and boasts over five million users.

Iflix co-founder Patrick Grove says that his company actively uses piracy numbers to determine what content they acquire. The data reveal what is popular locally, and help to give viewers the TV-shows and movies they’re most interested in.

“We looked at piracy data in every market,” Grove informed CNBC’s Managing Asia. And the company doesn’t stop at looking at a few torrent download numbers.

Representatives from the Asian company actually went out on the streets to buy pirated DVDs from street vendors. In addition, iflix also received help from local Internet providers which shared a variety of streaming data.

TorrentFreak reached out to the streaming service to get more details about its data-gathering techniques. One of the main partners it uses to measure online piracy is the German company TECXIPIO, which is known to actively monitor BitTorrent traffic.

The company also maintains a close relationship with Internet providers that offer further insight, including streaming data, to determine which titles work best in each market.

While analyzing the different sets of data, the streaming service was surprised to see the diversity in different regions as well as the ever-changing consumer demand.

“Through looking at the Top 20 pirated DVDs in every market we are live in, we were surprised to find the amount of pirated K-drama content. In Ghana for example, the number one pirated title is a K-drama series called ‘Legend of the Blue Sea’,” an iflix spokesperson told us.

Iflix believes that piracy data is superior to other market intelligence. Before rolling out its service in Saudi Arabia the company made a list of the 1,000 most popular shows and used that to its advantage.

While there is a lot of piracy in emerging markets, iflix doesn’t believe that people are unwilling to pay for entertainment. It just has to be available at a decent price, and that’s where the company comes in.

“We believe that people in emerging markets do not actively want to steal content, they do so because there is no better alternative,” the company informs us.

“As consumers become more connected, gaining access to information and cultural influences on a global scale, they want to be entertained at a world-class standard. We set out with the aim of offering an alternative that is better than piracy; by providing unlimited access to high-quality, world-class entertainment, all at the price of a pirated DVD.”

There is no doubt that iflix is ambitious, and that it’s willing to employ some unusual tactics to grow its userbase. The company is quite optimistic about the future as well, judging from its co-founder’s prediction that it will welcome its billionth viewer in a few years.


Pimoroni is 5 now!

Post Syndicated from guru original https://www.raspberrypi.org/blog/pimoroni-is-5-now/

Long read written by Pimoroni’s Paul Beech, best enjoyed over a cup o’ grog.

Every couple of years, I’ve done a “State of the Fleet” update here on the Raspberry Pi blog to tell everyone how the Sheffield Pirates are doing. Half a decade has gone by in a blink, but reading back over the previous posts shows that a lot has happened in that time!

TL;DR We’re an increasingly medium-sized design/manufacturing/e-commerce business with workshops in Sheffield, UK, and Essen, Germany, and we employ almost 40 people. We’re totally lovely. Thanks for supporting us!

 

We’ve come a long way, baby

I’m sitting looking out the window at Sheffield-on-Sea and feeling pretty lucky about how things are going. In the morning, I’ll be flying east for Maker Faire Tokyo with Niko (more on him later), and to say hi to some amazing people in Shenzhen (and to visit Huaqiangbei, of course). This is after I’ve already visited this year’s Maker Faires in New York, San Francisco, and Berlin.

Pimoroni started out small, but we’ve grown like weeds, and we’re steadily sauntering towards becoming a medium-sized business. That’s thanks to fantastic support from the people who buy our stuff and spread the word. In return, we try to be nice, friendly, and human in everything we do, and to make exciting things, ideally with our own hands here in Sheffield.

Pimoroni soldering

Handmade with love

We’ve made it onto a few ‘fastest-growing’ lists, and we’re in the top 500 of the Inc. 5000 Europe list. Adafruit did it first a few years back, and we’ve never gone wrong when we’ve followed in their footsteps.

The slightly weird nature of Pimoroni means we get listed as either a manufacturing or e-commerce business. In reality, we’re about four or five companies in one shell, which is very much against the conventions of “how business is done”. However, having seen what Adafruit, SparkFun, and Seeed do, we’re more than happy to design, manufacture, and sell our stuff in-house, as well as stocking the best stuff from across the maker community.

Pimoroni stocks

Product and process

The whole process of expansion has not been without its growing pains. We’re just under 40 people strong now, and have an outpost in Germany (also hilariously far from the sea for piratical activities). This means we’ve had to change things quickly to improve and automate processes, so that the wheels won’t fall off as things get bigger. Process optimization is incredibly interesting to a geek, especially making sure that things are done well, that mistakes are easy to spot and fix, and that nothing is missed.

At the end of 2015, we had a step change in how busy we were, and our post room and support started to suffer. As a consequence, we implemented measures to become more efficient, including small but important things like checking in parcels with a barcode scanner attached to a Raspberry Pi. That Pi has been happily running on the same SD card for a couple of years now without problems 😀

Pimoroni post room

Going postal?

We also hired a full-time support ninja, Matt, to keep the experience of getting stuff from us light and breezy and to ensure that any problems are sorted. He’s had a hugely positive impact already by making the emails and replies you see more friendly. Of course, he’s also started using the laser cutters for tinkering projects. It’d be a shame to work at Pimoroni and not get to use all the wonderful toys, right?

Employing all the people

You can see some of the motley crew we employ here and there on the Pimoroni website. And if you drop by at the Raspberry Pi Birthday Party, Pi Wars, Maker Faires, Deer Shed Festival, or New Scientist Live in September, you’ll be seeing new Pimoroni faces as we start to engage with people more about what we do. On top of that, we’re starting to make proper videos (like Sandy’s soldering guide), as opposed to the 101 episodes of Bilge Tank we recorded in a rather off-the-cuff and haphazard fashion. Although that’s the beauty of Bilge Tank, right?

Pimoroni soldering

Such soldering setup

As Emma, Sandy, Lydia, and Tanya gel as a super creative team, we’re starting to create more formal educational resources, and to make kits that are suitable for a wider audience. Things like our Pi Zero W kits are products of their talents.

Emma is our new Head of Marketing. She’s really ‘The Only Marketing Person Who Would Ever Fit In At Pimoroni’, having been a core part of the Sheffield maker scene since we hung around with one Ben Nuttall, in the dark days before Raspberry Pi was a thing.

Through a series of fortunate coincidences, Niko and his equally talented wife Mena were there when we cut the first Pibow in 2012. They immediately pitched in to help us buy our second laser cutter so we could keep up with demand. They have been supporting Pimoroni with sourcing in East Asia, and now Niko has become a member of the Pirates’ Council and the Head of Engineering as we’re increasing the sophistication and scale of the things we do. The Unicorn HAT HD is one of his masterpieces.

Pimoroni devices

ALL the HATs!

We see ourselves as a wonderful island of misfit toys, and it feels good to have the best toy shop ever, and to support so many lovely people. Business is about more than just profits.

Where do we go to, me hearties?

So what are our plans? At the moment we’re still working absolutely flat-out as demand from wholesalers, retailers, and customers increases. We thought Raspberry Pi was big, but it turns out it’s just getting started. Near the end of 2016, it seemed to reach a whole new level of popularity, and still we continue to meet people to whom we have to explain what a Pi is. It’s a good problem to have.

We need a bigger space, but it’s been hard to find somewhere suitable in Sheffield that won’t mean we’re stuck on an industrial estate miles from civilisation. That would be bad for the crew; we like having world-class burritos on our doorstep.

The good news is, it looks like our search is at an end! Just in time for the arrival of our ‘Super-Turbo-Death-Star’ new production line, which will enable us to make devices in a bigger, better, faster, more ‘Now now now!’ fashion \o/

Pimoroni warehouse

Spacious, but not spacious enough!

We’ve got lots of treasure in the pipeline, but we want to pick up the pace of development even more and create many new HATs, pHATs, and SHIMs, e.g. for environmental sensing and audio applications. Picade will also be getting some love to make it slicker and more hackable.

We’re also starting to flirt with adding more engineering and production capabilities in-house. The plan is to try our hand at anodising, powder-coating, and maybe even injection-moulding if we get the space and find the right machine. Learning how to do things is amazing, and we love having an idea and being able to bring it to life in almost no time at all.

Pimoroni production

This is where the magic happens

Fanks!

There are so many people involved in supporting our success, and some people we love for just existing and doing wonderful things that make us want to do better. The biggest shout-outs go to Liz, Eben, Gordon, James, all the Raspberry Pi crew, and Limor and pt from Adafruit, for being the most supportive guiding lights a young maker company could ever need.

A note from us

It is amazing for us to witness the growth of businesses within the Raspberry Pi ecosystem. Pimoroni is a wonderful example of an organisation that is creating opportunities for makers within its local community, and the company is helping to reinvigorate Sheffield as the heart of making in the UK.

If you’d like to take advantage of the great products built by the Pirates, Monkeys, Robots, and Ninjas of Sheffield, you should do it soon: Pimoroni are giving everyone 20% off their homemade tech until 6 August.

Pimoroni, from all of us here at Pi Towers (both in the UK and USA), have a wonderful birthday, and many a grog on us!

The post Pimoroni is 5 now! appeared first on Raspberry Pi.

Time-lapse Visualizes Game of Thrones Piracy Around The Globe

Post Syndicated from Ernesto original https://torrentfreak.com/time-lapse-visualizes-game-of-thrones-piracy-around-the-globe-17-730/

Game of Thrones has been the most pirated TV-show online for years, and this isn’t expected to change anytime soon.

While most of today’s piracy takes place through streaming services, BitTorrent traffic remains significant as well. The show’s episodes are generally downloaded millions of times each, by people from all over the world.

In recent years there have been several attempts to quantify this piracy bonanza. While MILLIONS of downloads make for a good headline, there are some other trends worth looking at as well.

TorrentFreak spoke to Abigail De Kosnik, an Associate Professor at the University of California, Berkeley. Together with computer scientist and artist Benjamin De Kosnik, she runs the BitTorrent-oriented research project “alpha60.”

The goal of alpha60 is to quantify and map BitTorrent activity around various media titles, to make this “shadow economy” visible to media scholars and the general public. Over the past two weeks, they’ve taken a close look at Game of Thrones downloads.

Their tracking software collected swarm data from 72 torrents that were released shortly after the first episode premiered. Before being anonymized, the collected IP-addresses were first translated to geographical locations, to reveal various traffic patterns.

The results, summarized in a white paper, reveal that during the first five days, alpha60 registered an estimated 1.77 million downloads. Of particular interest is the five-day time-lapse of the worldwide swarm activity.

Five-day Game of Thrones piracy timelapse

The time-lapse shows that download patterns vary depending on the time of the day. There is a lot of activity in Asia, but cities such as Athens, Toronto, and Sao Paulo also pop up regularly.

When looking at the absolute numbers, Seoul comes out on top as the Game of Thrones download capital of the world, followed by Athens, São Paulo, Guangzhou, Mumbai, and Bangalore.

Perhaps more interesting is the view of the number of downloads relative to the population, or the “over-pirating” cities, as alpha60 calls them. Here, Dallas comes out on top, ahead of Brisbane, Chicago, Riyadh, Seattle, and Perth.

Of course, VPNs may skew the results somewhat, but overall the data should give a pretty accurate impression of the download traffic around the globe.

Below are the complete top tens of most active cities, both in absolute numbers and relative to the population. Further insights and additional information are available in the full white paper, which can be accessed here.

Note: The download totals reported by alpha60 are significantly lower than the MUSO figures that came out last week. Alpha60 stresses, however, that their methods and data are accurate. MUSO, for its part, has made some dubious claims in the past.

Most downloads (absolute)

1 Seoul, Rep. of Korea
2 Athens, Greece
3 São Paulo, Brazil
4 Guangzhou, China
5 Mumbai, India
6 Bangalore, India
7 Shanghai, China
8 Riyadh, Saudi Arabia
9 Delhi, India
10 Beijing, China

Most downloads (relative)

1 Dallas, USA
2 Brisbane, Australia
3 Chicago, USA
4 Riyadh, Saudi Arabia
5 Seattle, USA
6 Perth, Australia
7 Phoenix, USA
8 Toronto, Canada
9 Athens, Greece
10 Guangzhou, China


New – GPU-Powered Streaming Instances for Amazon AppStream 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-gpu-powered-streaming-instances-for-amazon-appstream-2-0/

We launched Amazon AppStream 2.0 at re:Invent 2016. This application streaming service allows you to deliver Windows applications to a desktop browser.

AppStream 2.0 is fully managed and provides consistent, scalable performance by running applications on general purpose, compute optimized, and memory optimized streaming instances, with delivery via NICE DCV – a secure, high-fidelity streaming protocol. Our enterprise and public sector customers have started using AppStream 2.0 in place of legacy application streaming environments that are installed on-premises. They use AppStream 2.0 to deliver both commercial and line of business applications to a desktop browser. Our ISV customers are using AppStream 2.0 to move their applications to the cloud as-is, with no changes to their code. These customers focus on demos, workshops, and commercial SaaS subscriptions.

We are getting great feedback on AppStream 2.0 and have been adding new features very quickly (even by AWS standards). So far this year we have added an image builder, federated access via SAML 2.0, CloudWatch monitoring, Fleet Auto Scaling, Simple Network Setup, persistent storage for user files (backed by Amazon S3), support for VPC security groups, and built-in user management including web portals for users.

New GPU-Powered Streaming Instances
Many of our customers have told us that they want to use AppStream 2.0 to deliver specialized design, engineering, HPC, and media applications to their users. These applications are generally graphically intensive and are designed to run on expensive, high-end PCs in conjunction with a GPU (Graphics Processing Unit). Due to the hardware requirements of these applications, cost considerations have traditionally kept them out of situations where part-time or occasional access would otherwise make sense. Recently, another requirement has come to the forefront. These applications almost always need shared, read-write access to large amounts of sensitive data that is best stored, processed, and secured in the cloud. In order to meet the needs of these users and applications, we are launching two new types of streaming instances today:

Graphics Desktop – Based on the G2 instance type, Graphics Desktop instances are designed for desktop applications that use CUDA, DirectX, or OpenGL for rendering. These instances are equipped with 15 GiB of memory and 8 vCPUs. You can select this instance family when you build an AppStream image or configure an AppStream fleet:

Graphics Pro – Based on the brand-new G3 instance type, Graphics Pro instances are designed for high-end, high-performance applications that can use the NVIDIA APIs and/or need access to large amounts of memory. These instances are available in three sizes, with 122 to 488 GiB of memory and 16 to 64 vCPUs. Again, you can select this instance family when you configure an AppStream fleet:

To learn more about how to launch, run, and scale a streaming application environment, read Scaling Your Desktop Application Streams with Amazon AppStream 2.0.

As I noted earlier, you can use either of these two instance types to build an AppStream image. This will allow you to test and fine-tune your applications and to see the instances in action.
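If you script your environments rather than click through the console, the same fleet configuration can be expressed with the AWS SDK for Java. The sketch below is illustrative only: the fleet and image names are placeholders, and the instance-type string and client class names are my assumptions based on the SDK’s usual naming conventions, so check the AppStream 2.0 API reference before relying on them.

    import com.amazonaws.services.appstream.AmazonAppStream;
    import com.amazonaws.services.appstream.AmazonAppStreamClientBuilder;
    import com.amazonaws.services.appstream.model.ComputeCapacity;
    import com.amazonaws.services.appstream.model.CreateFleetRequest;

    public class CreateGraphicsFleet {
        public static void main(String[] args) {
            AmazonAppStream appStream = AmazonAppStreamClientBuilder.defaultClient();

            // Create a small fleet of GPU-powered streaming instances from an
            // existing image; names and the instance-type string are assumptions.
            appStream.createFleet(new CreateFleetRequest()
                .withName("graphics-desktop-fleet")
                .withImageName("my-3d-app-image")
                .withInstanceType("stream.graphics-desktop.2xlarge")   // or a stream.graphics-pro size
                .withComputeCapacity(new ComputeCapacity().withDesiredInstances(2)));
        }
    }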

Streaming Instances in Action
We’ve been working with several customers during a private beta program for the new instance types. Here are a few stories (and some cool screen shots) to show you some of the applications that they are streaming via AppStream 2.0:

AVEVA is a world leading provider of engineering design and information management software solutions for the marine, power, plant, offshore and oil & gas industries. As part of their work on massive capital projects, their customers need to bring many groups of specialist engineers together to collaborate on the creation of digital assets. In order to support this requirement, AVEVA is building SaaS solutions that combine the streamed delivery of engineering applications with access to a scalable project data environment that is shared between engineers across the globe. The new instances will allow AVEVA to deliver their engineering design software in SaaS form while maximizing quality and performance. Here’s a screen shot of their Everything 3D app being streamed from AppStream:

Nissan, a Japanese multinational automobile manufacturer, trains its automotive specialists using 3D simulation software running on expensive graphics workstations. The training software, developed by The DiSti Corporation, allows its specialists to simulate maintenance processes by interacting with realistic 3D models of the vehicles they work on. AppStream 2.0’s new graphics capability now allows Nissan to deliver these training tools in real time, with up-to-date content, to a desktop browser running on low-cost commodity PCs. Their specialists can now interact with highly realistic renderings of a vehicle that allow them to train for and plan maintenance operations with higher efficiency.

Cornell University is an American private Ivy League and land-grant doctoral university located in Ithaca, New York. They deliver advanced 3D tools such as AutoDesk AutoCAD and Inventor to students and faculty to support their course work, teaching, and research. Until now, these tools could only be used on GPU-powered workstations in a lab or classroom. AppStream 2.0 allows them to deliver the applications to a web browser running on any desktop, where they run as if they were on a local workstation. Their users are no longer limited by available workstations in labs and classrooms, and can bring their own devices and have access to their course software. This increased flexibility also means that faculty members no longer need to take lab availability into account when they build course schedules. Here’s a copy of Autodesk Inventor Professional running on AppStream at Cornell:

Now Available
Both of the graphics streaming instance families are available in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions and you can start streaming from them today. Your applications must run in a Windows 2012 R2 environment, and can make use of DirectX, OpenGL, CUDA, OpenCL, and Vulkan.

With prices in the US East (Northern Virginia) Region starting at $0.50 per hour for Graphics Desktop instances and $2.05 per hour for Graphics Pro instances, you can now run your simulation, visualization, and HPC workloads in the AWS Cloud on an economical, pay-by-the-hour basis. You can also take advantage of fast, low-latency access to Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Lambda, Amazon Redshift, and other AWS services to build processing workflows that handle pre- and post-processing of your data.

Jeff;

 

Write and Read Multiple Objects in Amazon Cloud Directory by Using Batch Operations

Post Syndicated from Vineeth Harikumar original https://aws.amazon.com/blogs/security/write-and-read-multiple-objects-in-amazon-cloud-directory-by-using-batch-operations/

Amazon Cloud Directory is a hierarchical data store that enables you to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. For example, you can create an organizational structure that you can navigate through multiple hierarchies for reporting structure, location, and cost center.

In this blog post, I demonstrate how you can use Cloud Directory APIs to write and read multiple objects by using batch operations. With batch write operations, you can execute a sequence of operations atomically—meaning that all of the write operations must occur, or none of them do. You also can make your application efficient by reducing the number of required round trips to read and write objects to your directory. I have used the AWS SDK for Java for all the sample code in this blog post, but you can use other language SDKs or the AWS CLI in a similar way.

Using batch write operations

To demonstrate batch write operations, let’s say that AnyCompany’s warehouses are organized to determine the fastest methods to ship orders to its customers. In North America, AnyCompany plans to open new warehouses regularly so that the company can keep up with customer demand while continuing to meet the delivery times to which they are committed.

The following diagram shows part of AnyCompany’s global network, including Asian and European warehouse networks.

Let’s take a look at how I can use batch write operations to add NorthAmerica to AnyCompany’s global network of warehouses, with the first three warehouses in New York City (NYC), Las Vegas (LAS), and Phoenix (PHX).

Adding NorthAmerica to the global network

To add NorthAmerica to the global network, I can use a batch write operation to create and link all the objects in the existing network.

First, I set up a helper method, which performs repetitive tasks, for the getBatchCreateOperation object. The following lines of code help me create an NA object for NorthAmerica and then attach the three city-related nodes: NYC, LAS, and PHX. Because AnyCompany is planning to grow its network, I add a suffix of _1 to each city code (such as PHX_1), which will be helpful hierarchically when the company adds more warehouses within a city.

    private BatchWriteOperation getBatchCreateOperation(
            String warehouseName,
            String directorySchemaARN,
            String parentReference,
            String linkName) {

        SchemaFacet warehouse_facet = new SchemaFacet()
            .withFacetName("warehouse")
            .withSchemaArn(directorySchemaARN);

        AttributeKeyAndValue kv = new AttributeKeyAndValue()
            .withKey(new AttributeKey()
                .withFacetName("warehouse")
                .withName("name")
                .withSchemaArn(directorySchemaARN))
            .withValue(new TypedAttributeValue()
                .withStringValue(warehouseName));

        List<SchemaFacet> facets = Lists.newArrayList(warehouse_facet);
        List<AttributeKeyAndValue> kvs = Lists.newArrayList(kv);

        BatchCreateObject createObject = new BatchCreateObject();

        createObject.withParentReference(new ObjectReference()
            .withSelector(parentReference));
        createObject.withLinkName(linkName);

        createObject.withBatchReferenceName(UUID.randomUUID().toString());
        createObject.withSchemaFacet(facets);
        createObject.withObjectAttributeList(kvs);

        return new BatchWriteOperation().withCreateObject(createObject);
    }

The parameters of this helper method include:

  • warehouseName – The name of the warehouse to create in the getBatchCreateOperation object.
  • directorySchemaARN – The Amazon Resource Name (ARN) of the schema applied to the directory.
  • parentReference – The object reference of the parent object.
  • linkName – The unique child path from the parent reference where the object should be attached.

I then use this helper method to set up multiple create operations for NorthAmerica, NewYork, Phoenix, and LasVegas. For the sake of simplicity, I use airport codes to stand for the cities (for example, NYC stands for NewYork).

   BatchWriteOperation createObjectNA = getBatchCreateOperation(
                      "NA",
                      directorySchemaARN,
                      "/",
                      "NorthAmerica");
   BatchWriteOperation createObjectNYC = getBatchCreateOperation(
                      "NYC_1",
                      directorySchemaARN,
                      "/NorthAmerica",
                      "NewYork");
   BatchWriteOperation createObjectPHX = getBatchCreateOperation(
                       "PHX_1",
                       directorySchemaARN,
                       "/NorthAmerica",
                       "Phoenix");
   BatchWriteOperation createObjectLAS = getBatchCreateOperation(
                      "LAS_1",
                      directorySchemaARN,
                      "/NorthAmerica",
                      "LasVegas");

   BatchWriteRequest request = new BatchWriteRequest();
   request.setDirectoryArn(directoryARN);
   request.setOperations(Lists.newArrayList(
       createObjectNA,
       createObjectNYC,
       createObjectPHX,
       createObjectLAS));

   client.batchWrite(request);

Running the preceding code results in a hierarchy for the network with NA added to the network, as shown in the following diagram.

Using batch read operations

Now, let’s say that after I add NorthAmerica to AnyCompany’s global network, an analyst wants to see the updated view of the NorthAmerica warehouse network as well as some information about the newly introduced warehouse configurations for the Phoenix warehouses. To do this, I can use batch read operations to get the network of warehouses for NorthAmerica as well as specifically request the attributes and configurations of the Phoenix warehouses.

To list the children of the NorthAmerica warehouses, I use the BatchListObjectChildren API to get all the children at the path, /NorthAmerica. Next, I want to view the attributes of the Phoenix object, so I use the BatchListObjectAttributes API to read all the attributes of the object at /NorthAmerica/Phoenix, as shown in the following code example.

    BatchListObjectChildren listObjectChildrenRequest = new BatchListObjectChildren()
        .withObjectReference(new ObjectReference().withSelector("/NorthAmerica"));
    BatchListObjectAttributes listObjectAttributesRequest = new BatchListObjectAttributes()
        .withObjectReference(new ObjectReference()
            .withSelector("/NorthAmerica/Phoenix"));
    BatchReadRequest batchRead = new BatchReadRequest()
        .withConsistencyLevel(ConsistencyLevel.EVENTUAL)
        .withDirectoryArn(directoryArn)
        .withOperations(Lists.newArrayList(listObjectChildrenRequest, listObjectAttributesRequest));

    BatchReadResult result = client.batchRead(batchRead);

Exception handling

Batch operations in Cloud Directory might sometimes fail, and it is important to know how to handle such failures, which differ for write operations and read operations.

Batch write operation failures

If a batch write operation fails, Cloud Directory fails the entire batch operation and returns an exception. The exception contains the index of the operation that failed along with the exception type and message. If you see RetryableConflictException, you can try again with exponential backoff. A simple way to do this is to double the amount of time you wait each time you get an exception or failure. For example, if your first batch write operation fails, wait 100 milliseconds and try the request again. If the second request fails, wait 200 milliseconds and try again. If the third request fails, wait 400 milliseconds and try again.
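Here is one way to express that backoff loop with the same AWS SDK for Java classes used earlier in this post; treat it as a sketch rather than the canonical retry policy.

    import com.amazonaws.services.clouddirectory.AmazonCloudDirectory;
    import com.amazonaws.services.clouddirectory.model.BatchWriteRequest;
    import com.amazonaws.services.clouddirectory.model.BatchWriteResult;
    import com.amazonaws.services.clouddirectory.model.RetryableConflictException;

    // Retry a batch write, doubling the wait after each RetryableConflictException.
    static BatchWriteResult batchWriteWithBackoff(AmazonCloudDirectory client,
                                                  BatchWriteRequest request,
                                                  int maxAttempts) throws InterruptedException {
        long waitMillis = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return client.batchWrite(request);
            } catch (RetryableConflictException e) {
                if (attempt >= maxAttempts) {
                    throw e;                // give up after the configured number of attempts
                }
                Thread.sleep(waitMillis);
                waitMillis *= 2;            // 100 ms, 200 ms, 400 ms, ...
            }
        }
    }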

Batch read operation failures

If a batch read operation fails, the response contains either a successful response or an exception response. Individual batch read operation failures do not cause the entire batch read operation to fail—Cloud Directory returns individual success or failure responses for each operation.
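To make that per-operation behavior concrete, the sketch below walks the result of the batchRead call shown earlier and reports each operation’s outcome individually. The getters on the response objects are my reading of the SDK model, so verify them against the Cloud Directory API reference.

    import com.amazonaws.services.clouddirectory.model.BatchReadOperationResponse;
    import com.amazonaws.services.clouddirectory.model.BatchReadResult;

    // Each operation in the batch gets its own success or exception response,
    // in the same order as the operations in the request.
    static void inspectBatchReadResult(BatchReadResult result) {
        for (BatchReadOperationResponse response : result.getResponses()) {
            if (response.getExceptionResponse() != null) {
                System.out.println("Operation failed: "
                    + response.getExceptionResponse().getType()
                    + " - " + response.getExceptionResponse().getMessage());
            } else {
                System.out.println("Operation succeeded: " + response.getSuccessfulResponse());
            }
        }
    }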

Limits of batch operations

Batch operations are still constrained by the same Cloud Directory limits as other Cloud Directory APIs. A single batch operation does not have its own cap on the number of operations it contains, but the total number of objects written or read in one batch request is limited. For example, a total of 20 objects can be written in a single batch operation request to Cloud Directory, regardless of how many individual operations there are within that batch. Similarly, a total of 200 objects can be read in a single batch operation request to Cloud Directory. For more information, see limits on batch operations.
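If you have more writes than a single batch allows, one workable pattern is to split the operations into chunks of 20 and submit each chunk separately, as sketched below with the same SDK classes and Guava helper used earlier. Keep in mind that only each individual chunk is atomic, not the overall sequence, and that the 20-object cap maps to 20 operations only when each operation writes a single object, as in the examples above.

    import java.util.List;
    import com.amazonaws.services.clouddirectory.AmazonCloudDirectory;
    import com.amazonaws.services.clouddirectory.model.BatchWriteOperation;
    import com.amazonaws.services.clouddirectory.model.BatchWriteRequest;
    import com.google.common.collect.Lists;

    // Split a large list of single-object write operations into chunks that
    // respect the 20-object limit and submit each chunk as its own batch write.
    static void batchWriteInChunks(AmazonCloudDirectory client,
                                   String directoryArn,
                                   List<BatchWriteOperation> operations) {
        for (List<BatchWriteOperation> chunk : Lists.partition(operations, 20)) {
            client.batchWrite(new BatchWriteRequest()
                .withDirectoryArn(directoryArn)
                .withOperations(chunk));
        }
    }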

Summary

In this post, I have demonstrated how you can use batch operations to operate on multiple objects and simplify making complicated changes across hierarchies. In my next post, I will demonstrate how to use batch references within batch write operations. To learn more about batch operations, see Batches, BatchWrite, and BatchRead.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, start a new thread on the Directory Service forum.

– Vineeth

Under the Hood of Server-Side Encryption for Amazon Kinesis Streams

Post Syndicated from Damian Wylie original https://aws.amazon.com/blogs/big-data/under-the-hood-of-server-side-encryption-for-amazon-kinesis-streams/

Customers are using Amazon Kinesis Streams to ingest, process, and deliver data in real time from millions of devices or applications. Use cases for Kinesis Streams vary, but a few common ones include IoT data ingestion and analytics, log processing, clickstream analytics, and enterprise data bus architectures.

Within milliseconds of data arrival, applications (KCL, Apache Spark, AWS Lambda, Amazon Kinesis Analytics) attached to a stream are continuously mining value or delivering data to downstream destinations. Customers are then scaling their streams elastically to match demand. They pay incrementally for the resources that they need, while taking advantage of a fully managed, serverless streaming data service that allows them to focus on adding value closer to their customers.

These benefits are great; however, AWS learned that many customers could not take advantage of Kinesis Streams unless their data-at-rest within a stream was encrypted. Many customers did not want to manage encryption on their own, so they asked for a fully managed, automatic, server-side encryption mechanism leveraging centralized AWS Key Management Service (AWS KMS) customer master keys (CMK).

Motivated by this feedback, AWS added another fully managed, low cost aspect to Kinesis Streams by delivering server-side encryption via KMS managed encryption keys (SSE-KMS) in the following regions:

  • US East (N. Virginia)
  • US West (Oregon)
  • US West (N. California)
  • EU (Ireland)
  • Asia Pacific (Singapore)
  • Asia Pacific (Tokyo)

In this post, I cover the mechanics of the Kinesis Streams server-side encryption feature. I also share a few best practices and considerations so that you can get started quickly.

Understanding the mechanics

The following section walks you through how Kinesis Streams uses CMKs to encrypt a message in the PutRecord or PutRecords path before it is propagated to the Kinesis Streams storage layer, and then decrypt it in the GetRecords path after it has been retrieved from the storage layer.

When server-side encryption is enabled—which takes just a few clicks in the console—the partition key and payload for every incoming record are encrypted automatically as they flow into Kinesis Streams, using the selected CMK. When data is at rest within a stream, it’s encrypted.

When records are retrieved through a GetRecords request from the encrypted stream, they are decrypted automatically as they are flowing out of the service. That means your Kinesis Streams producers and consumers do not need to be aware of encryption. You have a fully managed data encryption feature at your fingertips, which can be enabled within seconds.
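To illustrate that point, here’s a minimal producer sketch using the AWS SDK for Java, with a placeholder stream name and payload. The code is identical whether or not encryption is enabled on the stream.

    import java.nio.ByteBuffer;
    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.PutRecordRequest;
    import com.amazonaws.services.kinesis.model.PutRecordResult;

    public class UnchangedProducer {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            // An ordinary PutRecord call; if server-side encryption is enabled on
            // "my-stream", the partition key and payload are encrypted by the
            // service with no change to this code.
            PutRecordResult result = kinesis.putRecord(new PutRecordRequest()
                .withStreamName("my-stream")
                .withPartitionKey("device-42")
                .withData(ByteBuffer.wrap("hello world".getBytes())));

            // EncryptionType in the response is "KMS" when encryption is on.
            System.out.println("EncryptionType: " + result.getEncryptionType());
        }
    }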

AWS also makes it easy to audit the application of server-side encryption. You can use the AWS Management Console for instant stream-level verification; the responses from PutRecord, PutRecords, and GetRecords; or AWS CloudTrail.

Calling PutRecord or PutRecords

When server-side encryption is enabled for a particular stream, Kinesis Streams and KMS perform the following actions when your applications call PutRecord or PutRecords on that stream. The Amazon Kinesis Producer Library (KPL) uses PutRecords.

 

  1. Data is sent from a customer’s producer (client) to a Kinesis stream using TLS via HTTPS. Data in transit to a stream is encrypted by default.
  2. After data is received, it is momentarily stored in RAM within a front-end proxy layer.
  3. Kinesis Streams authenticates the producer, then impersonates the producer to request input keying material from KMS.
  4. KMS creates key material, encrypts it by using the CMK, and sends both the plaintext and encrypted key material to the service, encrypted with TLS.
  5. The client uses the plaintext key material to derive data encryption keys (data keys) that are unique per-record.
  6. The client encrypts the payload and partition key using the data key in RAM within the front-end proxy layer and removes the plaintext data key from memory.
  7. The client appends the encrypted key material to the encrypted data.
  8. The plaintext key material is securely cached in memory within the front-end layer for reuse, until it expires after 5 minutes.
  9. The client delivers the encrypted message to a back-end store, where it is stored at rest and is fetchable by an authorized consumer through a GetRecords request. The Amazon Kinesis Client Library (KCL) calls GetRecords to retrieve records from a stream.

Calling GetRecords

Kinesis Streams and KMS perform the following actions when your applications call GetRecords on a server-side encrypted stream.

 

  1. When a GetRecords call is made, the front-end proxy layer retrieves the encrypted record from its back-end store.
  2. The consumer (client) makes a request to KMS using a token generated by the customer’s request. KMS authorizes it.
  3. The client requests that KMS decrypt the encrypted key material.
  4. KMS decrypts the encrypted key material and sends the plaintext key material to the client.
  5. Kinesis Streams derives the per-record data keys from the decrypted key material.
  6. If the calling application is authorized, the client decrypts the payload and removes the plaintext data key from memory.
  7. The client delivers the payload over TLS and HTTPS to the consumer requesting the records. Data in transit to a consumer is encrypted by default.
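For completeness, here’s the matching consumer sketch, again with placeholder names. The GetRecords path below is the same against an encrypted or unencrypted stream, because decryption happens inside the service before the records are returned.

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.GetRecordsRequest;
    import com.amazonaws.services.kinesis.model.GetRecordsResult;
    import com.amazonaws.services.kinesis.model.GetShardIteratorRequest;
    import com.amazonaws.services.kinesis.model.Record;
    import com.amazonaws.services.kinesis.model.ShardIteratorType;

    public class UnchangedConsumer {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            // Read from the first shard of the stream; records arrive already decrypted.
            String shardIterator = kinesis.getShardIterator(new GetShardIteratorRequest()
                .withStreamName("my-stream")
                .withShardId("shardId-000000000000")
                .withShardIteratorType(ShardIteratorType.TRIM_HORIZON)).getShardIterator();

            GetRecordsResult result = kinesis.getRecords(
                new GetRecordsRequest().withShardIterator(shardIterator));

            for (Record record : result.getRecords()) {
                byte[] payload = new byte[record.getData().remaining()];
                record.getData().get(payload);
                System.out.println(record.getPartitionKey() + " -> " + new String(payload));
            }
        }
    }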

Verifying server-side encryption

Auditors or administrators often ask for proof that server-side encryption was or is enabled. Here are a few ways to do this.

To check if encryption is enabled now for your streams:

  • Use the AWS Management Console or the DescribeStream API operation. You can also see what CMK is being used for encryption.
  • See encryption in action by looking at responses from PutRecord, PutRecords, or GetRecords. When encryption is enabled, the EncryptionType parameter is set to “KMS”. If encryption is not enabled, EncryptionType is not included in the response.

Sample PutRecord response

{
    "SequenceNumber": "49573959617140871741560010162505906306417380215064887298",
    "ShardId": "shardId-000000000000",
    "EncryptionType": "KMS"
}

Sample GetRecords response

{
    "Records": [
        {
            "Data": "aGVsbG8gd29ybGQ=", 
            "PartitionKey": "test", 
            "ApproximateArrivalTimestamp": 1498292565.825, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "495735762417140871741560010162505906306417380215064887298"
        }, 
        {
            "Data": "ZnJvZG8gbGl2ZXMK", 
            "PartitionKey": "3d0d9301-3c30-4c48-a9a8-e485b2982b28", 
            "ApproximateArrivalTimestamp": 1498292801.747, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "49573959617140871741560010162507115232237011062036103170"
        }
    ], 
    "NextShardIterator": "AAAAAAAAAAEvFypHZDx/4bJVAS34puwdiNcwssKqbh/XhRK7HSYRq3RS+YXJnVKJ8j0gQUt94bONdqQYHk9X9JHgefMUDKzDzndy5WbZWO4CS3hRdMdrbmJ/9KoR4lOfZvqTLt6JWQjDqXv0IaKs06/LHYcEA3oPcyQLOTJHdJl2EzplCTZnn/U295ovxvqF9g9DY8y2nVoMkdFLmdcEMVXjhCDKiRIt", 
    "MillisBehindLatest": 0
}

To check if encryption was enabled, use CloudTrail, which logs the StartStreamEncryption() and StopStreamEncryption() API calls made against a particular stream.
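If you’d rather check from code than from the console, a DescribeStream call surfaces the same information. The sketch below uses the AWS SDK for Java with a placeholder stream name; the getters mirror the EncryptionType and KeyId fields in the API response.

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.DescribeStreamRequest;
    import com.amazonaws.services.kinesis.model.StreamDescription;

    public class CheckStreamEncryption {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            StreamDescription description = kinesis.describeStream(
                new DescribeStreamRequest().withStreamName("my-stream"))
                .getStreamDescription();

            // EncryptionType is "KMS" when server-side encryption is enabled,
            // and KeyId identifies the CMK in use.
            System.out.println("EncryptionType: " + description.getEncryptionType());
            System.out.println("KeyId:          " + description.getKeyId());
        }
    }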

Getting started

It’s very easy to enable, disable, or modify server-side encryption for a particular stream.

  1. In the Kinesis Streams console, select a stream and choose Details.
  2. Select a CMK and select Enabled.
  3. Choose Save.

You can enable encryption only for a live stream, not upon stream creation. Follow the same process to disable encryption for a stream. To use a different CMK, select it and choose Save.

Each of these tasks can also be accomplished using the StartStreamEncryption and StopStreamEncryption API operations.
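Here’s what that looks like with the AWS SDK for Java rather than the console: a minimal sketch that assumes the default aws/kinesis key alias and a placeholder stream name, and that shows both calls only for illustration.

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.EncryptionType;
    import com.amazonaws.services.kinesis.model.StartStreamEncryptionRequest;
    import com.amazonaws.services.kinesis.model.StopStreamEncryptionRequest;

    public class ToggleStreamEncryption {
        public static void main(String[] args) {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

            // Enable SSE-KMS on an existing (live) stream; the key alias here is
            // the AWS managed key, but any CMK ARN or alias you own will work.
            kinesis.startStreamEncryption(new StartStreamEncryptionRequest()
                .withStreamName("my-stream")
                .withEncryptionType(EncryptionType.KMS)
                .withKeyId("alias/aws/kinesis"));

            // Disabling encryption later is symmetrical.
            kinesis.stopStreamEncryption(new StopStreamEncryptionRequest()
                .withStreamName("my-stream")
                .withEncryptionType(EncryptionType.KMS)
                .withKeyId("alias/aws/kinesis"));
        }
    }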

Considerations

There are a few considerations you should be aware of when using server-side encryption for Kinesis Streams:

  • Permissions
  • Costs
  • Performance

Permissions

One benefit of using the “(Default) aws/kinesis” AWS managed key is that every producer and consumer with permissions to call PutRecord, PutRecords, or GetRecords inherits the right permissions over the “(Default) aws/kinesis” key automatically.

However, this is not necessarily the same case for a CMK. Kinesis Streams producers and consumers do not need to be aware of encryption. However, if you enable encryption using a custom master key but a producer or consumer doesn’t have IAM permissions to use it, PutRecord, PutRecords, or GetRecords requests fail.

This is a great security feature. On the other hand, it can effectively lead to data loss if you inadvertently apply a custom master key that prevents producers and consumers from interacting with the Kinesis stream. Take precautions when applying a custom master key. For more information about the minimum IAM permissions required for producers and consumers interacting with an encrypted stream, see Using Server-Side Encryption.

Costs

When you apply server-side encryption, you are subject to KMS API usage and key costs. Unlike custom KMS master keys, the “(Default) aws/kinesis” CMK is offered free of charge. However, you still need to pay for the API usage costs that Kinesis Streams incurs on your behalf.

API usage costs apply for every CMK, including custom ones. Kinesis Streams calls KMS approximately every 5 minutes when it is rotating the data key. In a 30-day month, the total cost of KMS API calls initiated by a Kinesis stream should be less than a few dollars.

Performance

During testing, AWS discovered that there was a slight increase (typically 0.2 millisecond or less per record) in put and get record latencies due to the additional overhead of encryption.

If you have questions or suggestions, please comment below.

AWS Price Reduction – SQL Server Standard Edition on EC2

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-price-reduction-sql-server-standard-edition-on-ec2/

I’m happy to be able to announce the 62nd AWS price reduction, this one for Microsoft SQL Server Standard Edition on EC2.

Many enterprise workloads run on Microsoft Windows, primarily on-premises or in corporate data centers. We believe that AWS is the best place to build, deploy, scale, and manage Windows applications due to the breadth of services that we provide, backed up by our global reach and our partner ecosystem. Customers like Adobe, Pitney Bowes, and DeVry University have all moved core production Windows Server workloads to AWS. Their applications run the gamut from SharePoint sites to custom .NET applications and SAP, and frequently use SQL Server.

Microsoft SQL Server on AWS runs on an EC2 Windows instance and can support your application development and migration efforts. It gives you control over every setting, just as you would have if you were running your relational database on-premises, with support for 32-bit and 64-bit versions.

Today we are reducing the On-Demand and Reserved Instance prices for Microsoft SQL Server Standard Edition on EC2 running on R4, M4, I3, and X1 instances by up to 52%, depending on instance type, size, and region. You can build and run enterprise-scale applications, massively scalable websites, and mobile applications even more cost-effectively than before.

Here are the largest price reductions for each region and instance type:

Region                          R4     M4     I3     X1
US East (Northern Virginia)     -51%   -29%   -50%   -52%
US East (Ohio)                  -51%   -29%   -50%   -52%
US West (Oregon)                -51%   -29%   -50%   -52%
US West (Northern California)   -51%   -30%   -50%
Canada (Central)                -51%   -51%   -50%   -44%
South America (São Paulo)       -49%   -30%   -48%
EU (Ireland)                    -51%   -29%   -50%   -51%
EU (Frankfurt)                  -51%   -29%   -50%   -50%
EU (London)                     -51%   -51%   -50%   -44%
Asia Pacific (Singapore)        -51%   -31%   -50%   -50%
Asia Pacific (Sydney)           -51%   -30%   -50%   -50%
Asia Pacific (Tokyo)            -51%   -29%   -50%   -50%
Asia Pacific (Seoul)            -51%   -31%   -50%   -50%
Asia Pacific (Mumbai)           -51%   -33%   -50%   -50%

The new, lower prices for On-Demand instances are in effect as of July 1, 2017. The new pricing for Reserved Instances is in effect today.

Jeff;

 

DynamoDB Accelerator (DAX) Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/dynamodb-accelerator-dax-now-generally-available/

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers are reporting that they are using DAX in production, that they loved how easy it was to add DAX to their application, and have told us that their apps are now running 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use the endpoint of your cluster. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.
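To make the drop-in point concrete, here’s a hedged sketch: the working code uses the standard DynamoDB SDK for Java, and the commented-out lines show where the DAX client would be swapped in. The DAX builder and endpoint format in the comment are assumptions, so check the DAX SDK documentation for the exact class names. Table name and key are placeholders.

    import java.util.Map;
    import static java.util.Collections.singletonMap;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.GetItemRequest;

    public class DaxDropIn {
        public static void main(String[] args) {
            // Today: a plain DynamoDB client.
            AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

            // With DAX: build the client from the cluster endpoint instead.
            // (Class and method names below are assumptions about the DAX SDK;
            // the key point is that it also implements AmazonDynamoDB.)
            // AmazonDynamoDB dynamo = AmazonDaxClientBuilder.standard()
            //     .withEndpointConfiguration("my-cluster.xxxxxx.clustercfg.dax.use1.cache.amazonaws.com:8111")
            //     .build();

            // Every read/write call stays exactly the same.
            Map<String, AttributeValue> item = dynamo.getItem(new GetItemRequest()
                .withTableName("my-table")
                .withKey(singletonMap("id", new AttributeValue().withS("42"))))
                .getItem();
            System.out.println(item);
        }
    }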

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster aware and will issue round-robin requests to all nodes in the cluster so that you get to make full use of the cluster’s cache resources.

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.

Jeff;

 

In the Works – AWS Region in Hong Kong

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-hong-kong/

Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris), China (Ningxia), and Sweden (Stockholm).

Coming to Hong Kong in 2018
Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Hong Kong, in 2018. Hong Kong is a leading international financial center, well known for its service oriented economy. It is rated highly on innovation and for ease of doing business. As an evangelist, I get to visit many great cities in the world, and was lucky to have spent some time in Hong Kong back in 2014 and met a number of awesome customers there. Many of these customers have given us feedback that they wanted a local AWS Region.

This will be the eighth AWS Region in Asia Pacific joining six other Regions there — Singapore, Tokyo, Sydney, Beijing, Seoul, and Mumbai, and an additional Region in China (Ningxia) expected to launch in the coming months. Together, these Regions will provide our customers with a total of 19 Availability Zones (AZs) and allow them to architect highly fault tolerant applications.

Today, our infrastructure comprises 43 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018 (see the AWS Global Infrastructure page for more info).

We are looking forward to serving new and existing customers in Hong Kong and working with partners across Asia-Pacific. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Hong Kong. Public sector organizations such as government agencies, educational institutions, and nonprofits in Hong Kong will be able to use this region to store sensitive data locally (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).

If you are a customer or a partner and have specific questions about this Region, you can contact our Hong Kong team.

Help Wanted
If you are interested in learning more about AWS positions in Hong Kong, please visit the Amazon Jobs site and set the location to Hong Kong.

Jeff;

 

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives were also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.


Amazon Lightsail Update – 9 More Regions and Global Console

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lightsail-update-9-more-regions-and-global-console/

Amazon Lightsail lets you launch Virtual Private Servers on AWS with just a few clicks. With prices starting at $5 per month, Lightsail takes care of the heavy lifting and gives you a simple way to build and host applications. As I showed you in my re:Invent post (Amazon Lightsail – The Power of AWS, the Simplicity of a VPS), you can choose a configuration from a menu and launch a virtual machine preconfigured with SSD-based storage, DNS management, and a static IP address.

Since we launched in November, many customers have used Lightsail to launch Virtual Private Servers. For example, Monash University is using Amazon Lightsail to rapidly re-platform a number of CMS services in a simple and cost-effective manner. They have already migrated 50 workloads and are now thinking of creating an internal CMS service based on Lightsail to allow staff and students to create their own CMS instances in a self-service manner.

Today we are expanding Lightsail into nine more AWS Regions and launching a new, global console.

New Regions
At re:Invent we made Lightsail available in the US East (Northern Virginia) Region. Earlier this month we added support for several additional Regions in the US and Europe. Today we are launching Lightsail in four of our Asia Pacific Regions, bringing the total to ten. Here’s the full list:

  • US East (Northern Virginia)
  • US West (Oregon)
  • US East (Ohio)
  • EU (London)
  • EU (Frankfurt)
  • EU (Ireland)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Singapore)
  • Asia Pacific (Sydney)

Global Console
The updated Lightsail console makes it easy for you to create and manage resources in one or more Regions. I simply choose the desired Region when I create a new instance:

I can see all of my instances and static IP addresses on the same page, no matter what Region they are in:

And I can perform searches that span all of my resources and Regions. All of my LAMP stacks:

Or all of my resources in the EU (Ireland) Region:

I can perform a similar search on the Snapshots tab:

A new DNS zones tab lets me see my existing zones and create new ones:

Creation of SSH keypairs is now specific to a Region:

I can manage my key pairs on a Region-by-Region basis:

Static IP addresses are also specific to a particular Region:

Available Now
You can use the new Lightsail console and create resources in all ten Regions today!
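If you’d rather script this than click through the console, the Region choice maps to the client you build. The sketch below uses the AWS SDK for Java with placeholder instance, blueprint, and bundle IDs; list the real ones with getBlueprints() and getBundles() first, and treat the exact values shown here as assumptions.

    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.lightsail.AmazonLightsail;
    import com.amazonaws.services.lightsail.AmazonLightsailClientBuilder;
    import com.amazonaws.services.lightsail.model.CreateInstancesRequest;

    public class LaunchLightsailInstance {
        public static void main(String[] args) {
            // The target Region is chosen when the client is built; here, Tokyo.
            AmazonLightsail lightsail = AmazonLightsailClientBuilder.standard()
                .withRegion(Regions.AP_NORTHEAST_1)
                .build();

            // Blueprint and bundle IDs are placeholders; query the service for
            // the currently supported values before creating an instance.
            lightsail.createInstances(new CreateInstancesRequest()
                .withInstanceNames("my-lamp-server")
                .withAvailabilityZone("ap-northeast-1a")
                .withBlueprintId("lamp_5_6")
                .withBundleId("nano_1_0"));
        }
    }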

Jeff;

 

EC2 Price Reductions – Reserved Instances & M4 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-price-reductions-reserved-instances-m4-instances/

As AWS grows, we continue to find ways to make it an even better value. We work with our suppliers to drive down costs while also finding ways to build hardware and software that is increasingly more efficient and cost-effective.

In addition to reducing our prices on a regular and frequent basis, we also give customers options that help them to optimize their use of AWS. For example, Reserved Instances (first launched in 2009) allow Amazon EC2 users to obtain a significant discount when compared to On-Demand Pricing, along with a capacity reservation when used in a specific Availability Zone.

Our customers use multiple strategies to purchase and manage their Reserved Instances. Some prefer to make an upfront payment and earn a bigger discount; some prefer to pay nothing upfront and get a smaller (yet still substantial) discount. In the middle, others are happiest with a partial upfront payment and a discount that falls in between the two other options. In order to meet this wide range of preferences we are adding 3 Year No Upfront Standard Reserved Instances for most of the current generation instance types. We are also reducing prices for No Upfront Reserved Instances, Convertible Reserved Instances, and General Purpose M4 instances (both On-Demand and Reserved Instances). This is our 61st AWS Price Reduction.

Here are the details (all changes and reductions are effective immediately):

New No Upfront Payment Option for 3 Year Standard RIs – We previously offered a no upfront payment option with a 1 year term for Standard RIs. Today, we are adding a No Upfront payment option with a 3 year term for C4, M4, R4, I3, P2, X1, and T2 Standard Reserved Instances.

Lower Prices for No Upfront Reserved Instances – We are lowering the prices for No Upfront 1 Year Standard and 3 Year Convertible Reserved Instances for the C4, M4, R4, I3, P2, X1, and T2 instance types by up to 17%, depending on instance type, operating system, and region.

Here are the average reductions for No Upfront Reserved Instances for Linux in several representative regions:

Instance Type    US East (Northern Virginia)    US West (Oregon)    EU (Ireland)    Asia Pacific (Tokyo)    Asia Pacific (Singapore)
C4               -11%                           -11%                -10%            -10%                    -9%
M4               -16%                           -16%                -16%            -16%                    -17%
R4               -10%                           -10%                -10%            -10%                    -10%

Lower Prices for Convertible Reserved Instances – Convertible Reserved Instances allow you to change the instance family and other parameters associated with the RI at any time; this allows you to adjust your RI inventory as your application evolves and your needs change. We are lowering the prices for 3 Year Convertible Reserved Instances by up to 21% for most of the current generation instances (C4, M4, R4, I3, P2, X1, and T2).

Here are the average reductions for Convertible Reserved Instances for Linux in several representative regions:

Instance Type    US East (Northern Virginia)    US West (Oregon)    EU (Ireland)    Asia Pacific (Tokyo)    Asia Pacific (Singapore)
C4               -13%                           -13%                -5%             -5%                     -11%
M4               -19%                           -19%                -17%            -15%                    -21%
R4               -15%                           -15%                -15%            -15%                    -15%

Similar reductions will go into effect for nearly all of the other regions as well.

Lower Prices for M4 Instances – We are lowering the prices for M4 Linux instances by up to 7%.

Visit the EC2 Reserved Instance Pricing Page and the EC2 Pricing Page, or consult the AWS Price List API for all of the new prices.
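You can also pull current Reserved Instance offerings and prices programmatically. Here is a minimal boto3 sketch that looks up 3 Year, No Upfront, Standard offerings for m4.large Linux instances in US East (Northern Virginia); the instance type and Region are just illustrative choices.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Query 3 Year (94,608,000 seconds), No Upfront, Standard RI offerings.
offerings = ec2.describe_reserved_instances_offerings(
    InstanceType="m4.large",
    ProductDescription="Linux/UNIX",
    OfferingClass="standard",
    OfferingType="No Upfront",
    MinDuration=94608000,
    MaxDuration=94608000,
    MaxResults=10,
)

for o in offerings["ReservedInstancesOfferings"]:
    hourly = o["RecurringCharges"][0]["Amount"] if o["RecurringCharges"] else 0.0
    print(o["ReservedInstancesOfferingId"], o["Scope"], f"${hourly}/hour recurring")
```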

Learn More
The following blog posts contain additional information about some of the improvements that we have made to the EC2 Reserved Instance model:

You can also read AWS Pricing and the Reserved Instances FAQ to learn more.

Jeff;

AWS X-Ray Update – General Availability, Including Lambda Integration

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-x-ray-update-general-availability-including-lambda-integration/

I first told you about AWS X-Ray at AWS re:Invent in my post, AWS X-Ray – See Inside Your Distributed Application. X-Ray allows you to trace requests made to your application as execution traverses Amazon EC2 instances, Amazon ECS containers, microservices, AWS database services, and AWS messaging services. It is designed for development and production use, and can handle simple three-tier applications as well as applications composed of thousands of microservices. As I showed you last year, X-Ray helps you to perform end-to-end tracing of requests, record a representative sample of the traces, see a map of the services and the trace data, and to analyze performance issues and errors. This helps you understand how your application and its underlying services are performing so you can identify and address the root cause of issues.

You can take a look at the full X-Ray walk-through in my earlier post to learn more.

We launched X-Ray in preview form at re:Invent and invited interested developers and architects to start using it. Today we are making the service generally available, with support in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), South America (São Paulo), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Mumbai) Regions.

New Lambda Integration (Preview)
During the preview period we fine-tuned the service and added AWS Lambda integration, which we are launching today in preview form. Now, Lambda developers can use X-Ray to gain visibility into their function executions and performance. Previously, Lambda customers who wanted to understand their application’s latency breakdown, diagnose slowdowns, or troubleshoot timeouts had to rely on custom logging and analysis.

In order to make use of this new integration, you simply ensure that the functions of interest have execution roles that give them permission to write to X-Ray, and then enable tracing on a function-by-function basis (when you create new functions using the console, the proper permissions are assigned automatically). Then you use the X-Ray service map to see how your requests flow through your Lambda functions, EC2 instances, ECS containers, and so forth. You can identify the services and resources of interest, zoom in, examine detailed timing information, and then remedy the issue.
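For an existing function, turning tracing on from code is a one-line configuration change. Here is a minimal boto3 sketch; the function name is hypothetical, and its execution role still needs permission to call xray:PutTraceSegments and xray:PutTelemetryRecords.

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Switch the function from the default "PassThrough" mode to active tracing.
lambda_client.update_function_configuration(
    FunctionName="my-function",            # hypothetical function name
    TracingConfig={"Mode": "Active"},
)
```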

Each call to a Lambda function generates two or more nodes in the X-Ray map:

  • Lambda Service – This node represents the time spent within Lambda itself.
  • User Function – This node represents the execution time of the Lambda function.
  • Downstream Service Calls – These nodes represent any calls that the Lambda function makes to other services.

To learn more, read Using X-Ray with Lambda.

Now Available
We will begin to charge for the usage of X-Ray on May 1, 2017.

Pricing is based on the number of traces that you record, and the number that you analyze (each trace represents a request made to your application). You can record 100,000 traces and retrieve or scan 1,000,000 traces every month at no charge. Beyond that, you pay $5 for every million traces that you record and $0.50 for every million traces that you retrieve for analysis, with more info available on the AWS X-Ray Pricing page. You can visit the AWS Billing Console to see how many traces you have recorded or accessed (data collection began on March 1, 2017).
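As a rough worked example, recording 2 million traces in a month and scanning 5 million would cost (2,000,000 − 100,000) ÷ 1,000,000 × $5 = $9.50 for recording plus (5,000,000 − 1,000,000) ÷ 1,000,000 × $0.50 = $2.00 for retrieval and scanning, or about $11.50 in total.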

Check out AWS X-Ray and the new Lambda integration today and let me know what you think!

Jeff;

 

AWS Global Summits are Coming!

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-global-summits-are-coming/

One of the first things I got to do when I joined the AWS Blog team was to attend the summit in New York City last August. Meeting all of our customers, checking out Game Day, and getting to see the enthusiasm of the AWS community made me even more excited to be starting my adventure working on the blog with Jeff.

This year’s AWS Summit dates have been announced, and whether you are new to the cloud or an experienced user, you can always learn something new at an AWS Summit. These free events, held around the world, are designed to educate you about the AWS platform. Our team has built a program that offers a multitude of learning opportunities covering a broad range of topics and levels of technical depth. Join us to develop the skills needed to design, deploy, and operate infrastructure and applications on AWS.

We have Summits taking place across North America, Latin America, Asia Pacific, Europe, the Middle East, Japan, and Greater China. To see the full list of cities and dates, check out the AWS Summits page.

Registration is now open for several locations, including San Francisco, Sydney, Singapore, Kuala Lumpur, Seoul, Manila, and Bangkok. You can also subscribe to the AWS Events RSS feed, follow @awscloud, and find us on Facebook.

And you never know, along with learning all sorts of new things at the summit, you just might run into me or Jeff and snag a blog sticker too!

-Ana

Amazon Aurora Update – More Cross Region & Cross Account Support, T2.Small DB Instances, Another Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-aurora-update-more-cross-region-cross-account-support-t2-small-db-instances-another-region/

I’m in catch-up mode again, and would like to tell you about some recent improvements that we have made to Amazon Aurora. As a reminder, Aurora is our high-performance MySQL-compatible (and soon PostgreSQL-compatible) enterprise-class database (read Now Available – Amazon Aurora and Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS for an introduction).

Here are the newest additions to Aurora:

  • Cross Region Snapshot Copy
  • Cross Region Replication for Encrypted Databases
  • Cross Account Encrypted Snapshot Sharing
  • Availability in the US West (Northern California) Region
  • T2.Small Instance Support

Let’s take a quick look at each one!

Cross Region Snapshot Copy
You can now copy Amazon Aurora snapshots (either automatic or manual) from one region to another. Select the snapshot, choose Copy Snapshot from the Snapshot Actions menu, pick a region, and enter a name for the new snapshot:

You can also choose to encrypt the snapshot as part of this operation. To learn more, read Copying a DB Snapshot or DB Cluster Snapshot.
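The same copy can be scripted. Here is a minimal boto3 sketch that copies a cluster snapshot from EU (Ireland) to US East (Ohio); the call is made in the destination Region, and the snapshot identifiers, account ID, and KMS key are all hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-2")   # destination Region

rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier=(
        "arn:aws:rds:eu-west-1:123456789012:cluster-snapshot:my-aurora-snap"
    ),
    TargetDBClusterSnapshotIdentifier="my-aurora-snap-copy",
    SourceRegion="eu-west-1",          # boto3 pre-signs the cross-Region request
    KmsKeyId="alias/my-aurora-key",    # only needed when the copy should be encrypted
)
```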

Cross Region Replication for Encrypted Databases
You can already enable encryption when you create a fresh Amazon Aurora DB Instance:

You can now create a read replica in another region with a couple of clicks. You can use this to build multi-region, highly available systems or to move the data closer to the user. To create a cross region read replica, simply select the existing DB Instance and choose Create Cross Region Read Replica from the menu:

Then you choose the destination region in the Network & Security settings, and click on Create:

The destination region must include a DB Subnet Group that encompasses 2 or more Availability Zones.

To learn more about this powerful new feature, read Replicating Amazon Aurora DB Clusters Across AWS Regions.
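For reference, here is a rough boto3 sketch of the same operation done from code, assuming the source cluster is encrypted and a suitable DB Subnet Group already exists in the destination Region; every identifier, the subnet group, the security group, and the KMS key below are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")    # destination Region

# Create the replica cluster, pointing back at the source cluster's ARN.
rds.create_db_cluster(
    DBClusterIdentifier="my-aurora-replica-cluster",
    Engine="aurora",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster"
    ),
    SourceRegion="us-east-1",
    KmsKeyId="alias/my-replica-key",                  # key in the destination Region
    DBSubnetGroupName="my-multi-az-subnet-group",     # must span 2+ Availability Zones
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
)

# A reader instance still has to be added to the new cluster.
rds.create_db_instance(
    DBInstanceIdentifier="my-aurora-replica-1",
    DBInstanceClass="db.r4.large",
    Engine="aurora",
    DBClusterIdentifier="my-aurora-replica-cluster",
)
```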

Cross Account Encrypted Snapshot Sharing
You already have the ability to configure periodic, automated snapshots when you create your Amazon Aurora DB Instance. You can also create snapshots at any desired time with a couple of clicks:

If the DB Instance is encrypted, the snapshot will be as well.

You can now share encrypted snapshots with other AWS accounts. In order to use this feature, the DB Instance (and therefore the snapshot) must be encrypted with a Master Key other than the default RDS key. Select the snapshot and choose Share Snapshot from the Snapshot Actions menu:

Then enter the target AWS Account ID(s), clicking Add after each one, and click on Save to share the snapshot:

You will also need to share the key that was used to encrypt the snapshot. To learn more about this feature, read Sharing a DB Snapshot or DB Cluster Snapshot.
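Here is a minimal boto3 sketch of the sharing step; the snapshot name and target account ID are hypothetical, and (as noted above) the custom KMS key also has to be shared through its key policy before the other account can restore the snapshot.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Grant another account permission to restore the manual cluster snapshot.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snap",
    AttributeName="restore",
    ValuesToAdd=["210987654321"],      # target AWS account ID
)
```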

Availability in the US West (Northern California) Region
You can now launch Amazon Aurora DB Instances in the US West (Northern California) Region. Here’s the full list of regions where Aurora is available:

  • US East (Northern Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • US West (Northern California)
  • Canada (Central)
  • EU (Ireland)
  • EU (London)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Sydney)
  • Asia Pacific (Seoul)
  • Asia Pacific (Mumbai)

See the Amazon Aurora Pricing page for pricing info in each region.

T2.Small Instance Support
You can now launch t2.small DB Instances:

These economical instances are a great fit for dev & test environments and for light production workloads. You can also use them to gain some experience with Amazon Aurora. These instances (along with six others, including the t2.medium that we launched last November) are available in all AWS regions where Aurora is available.

On-Demand pricing for t2.small DB Instances starts at $0.041 per hour in the US East (Northern Virginia) Region, dropping to $0.018 per hour for an All Upfront Reserved Instance with a 3 year term (see the Amazon Aurora Pricing page for more info).

Jeff;

 

AWS Database Migration Service – 20,000 Migrations and Counting

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-database-migration-service-20000-migrations-and-counting/

I first wrote about AWS Database Migration Service just about a year ago in my AWS Database Migration Service post. At that time I noted that over 1,000 AWS customers had already made use of the service as part of their move to AWS.

As a quick recap, AWS Database Migration Service and Schema Conversion Tool (SCT) help our customers migrate their relational data from expensive, proprietary databases and data warehouses (either on premises or in the cloud, but with restrictive licensing terms either way) to more cost-effective cloud-based databases and data warehouses such as Amazon Aurora, Amazon Redshift, MySQL, MariaDB, and PostgreSQL, with minimal downtime along the way. Our customers tell us that they love the flexibility and the cost-effective nature of these moves. For example, moving to Amazon Aurora gives them access to a database that is MySQL and PostgreSQL compatible, at 1/10th the cost of a commercial database. Take a peek at our AWS Database Migration Services Customer Testimonials to see how Expedia, Thomas Publishing, Pega, and Veoci have made use of the service.

20,000 Unique Migrations
I’m pleased to be able to announce that our customers have already used AWS Database Migration Service to migrate 20,000 unique databases to AWS and that the pace continues to accelerate (we reached 10,000 migrations in September of 2016).

We’ve added many new features to DMS and SCT over the past year. Here’s a summary:

Learn More
Here are some resources that will help you to learn more and to get your own migrations underway, starting with some recent webinars:

Migrating From Sharded to Scale-Up – Some of our customers implemented a scale-out strategy in order to deal with their relational workload, sharding their database across multiple independent pieces, each running on a separate host. As part of their migration, these customers often consolidate two or more shards onto a single Aurora instance, reducing complexity, increasing reliability, and saving money along the way. If you’d like to do this, check out the blog post, webinar recording, and presentation.

Migrating From Oracle or SQL Server to Aurora – Other customers migrate from commercial databases such as Oracle or SQL Server to Aurora. If you would like to do this, check out this presentation and the accompanying webinar recording.

We also have plenty of helpful blog posts on the AWS Database Blog:

  • Reduce Resource Consumption by Consolidating Your Sharded System into Aurora – “You might, in fact, save bunches of money by consolidating your sharded system into a single Aurora instance or fewer shards running on Aurora. That is exactly what this blog post is all about.”
  • How to Migrate Your Oracle Database to Amazon Aurora – “This blog post gives you a quick overview of how you can use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to facilitate and simplify migrating your commercial database to Amazon Aurora. In this case, we focus on migrating from Oracle to the MySQL-compatible Amazon Aurora.”
  • Cross-Engine Database Replication Using AWS Schema Conversion Tool and AWS Database Migration Service – “AWS SCT makes heterogeneous database migrations easier by automatically converting source database schema. AWS SCT also converts the majority of custom code, including views and functions, to a format compatible with the target database.”
  • Database Migration—What Do You Need to Know Before You Start? – “Congratulations! You have convinced your boss or the CIO to move your database to the cloud. Or you are the boss, CIO, or both, and you finally decided to jump on the bandwagon. What you’re trying to do is move your application to the new environment, platform, or technology (aka application modernization), because usually people don’t move databases for fun.”
  • How to Script a Database Migration – “You can use the AWS DMS console or the AWS CLI or the AWS SDK to perform the database migration. In this blog post, I will focus on performing the migration with the AWS CLI.”
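Following on from that last post about scripting, here is a minimal boto3 sketch of kicking off a migration from code. The endpoint and replication instance ARNs are hypothetical and must already exist (created with create_endpoint() and create_replication_instance()), and the table mapping simply includes everything in a made-up "sales" schema.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Select every table in the "sales" schema (hypothetical schema name).
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "sales", "table-name": "%"},
        "rule-action": "include",
    }]
}

task = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-full-load-and-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",   # migrate existing data, then replicate changes
    TableMappings=json.dumps(table_mappings),
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)
```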

The documentation includes five helpful walkthroughs:

There’s also a hands-on lab (you will need to register in order to participate).

See You at a Summit
The DMS team is planning to attend and present at many of our upcoming AWS Summits and would welcome the opportunity to discuss your database migration requirements in person.

Jeff;

 

 

Amazon RDS – 2016 in Review

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-rds-2016-in-review/

Even though we published 294 posts on this blog last year, I left out quite a number of worthwhile launches! Today I would like to focus on Amazon Relational Database Service (RDS) and recap all of the progress that the teams behind this family of services made in 2016. The team focused on four major areas last year:

  • High Availability
  • Enhanced Monitoring
  • Simplified Security
  • Database Engine Updates

Let’s take a look at each of these areas…

High Availability
Relational databases are at the heart of many types of applications. In order to allow our customers to build applications that are highly available, RDS has offered multi-AZ support since early 2010 (read Amazon RDS – Multi-AZ Deployments For Enhanced Availability & Reliability for more info). Instead of spending weeks setting up multiple instances, arranging for replication, writing scripts to detect instance & network issues, making failover decisions, and bringing a new secondary instance online, you simply opt for Multi-AZ Deployment when you create the Database Instance. RDS also makes it easy for you to create cross-region read replicas.
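Here is a minimal boto3 sketch of both ideas: opting in to Multi-AZ at creation time, then adding a cross-Region read replica. The identifiers, credentials, and instance sizes are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Multi-AZ is a single flag at creation time; RDS manages the standby and failover.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,
)

# Cross-Region read replica, created from the destination Region using the source ARN.
rds_eu = boto3.client("rds", region_name="eu-west-1")
rds_eu.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:orders-db",
    DBInstanceClass="db.m4.large",
)
```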

Here are some of the other enhancements that we made in 2016 in order to help you to achieve high availability:

Enhanced Monitoring
We announced the first big step toward enhanced monitoring at the end of 2015 (New – Enhanced Monitoring for Amazon RDS) with support for MySQL, MariaDB, and Amazon Aurora and then made additional announcements in 2016:

Simplified Security
We want to make it as easy and simple as possible for you to use encryption to protect your data, whether it is at rest or in motion. Here are the enhancements that we made in this area last year:

Database Engine Updates
The open source community and the vendors of commercial databases add features and produce new releases at a rapid pace and we track their work very closely, aiming to update RDS as quickly as possible after each significant release. Here’s what we did in 2016:

Stay Tuned
We’ve already made some big announcements this year (you can find them in the AWS What’s New for 2017) with plenty more in store including the recently announced PostgreSQL-compatible version of Aurora, so stay tuned! You may also want to subscribe to the AWS Database Blog for detailed posts that will show you how to get the most from RDS, Amazon Aurora, and Amazon ElastiCache.

Jeff;

PS – This post does not include all of the enhancements that we made to AWS Database Migration Service or the Schema Conversion Tool last year. I’m working on another post on that topic.

Megaupload Case Takes Toll on Finn Batato, But He’ll Keep Fighting

Post Syndicated from Andy original https://torrentfreak.com/megaupload-case-takes-toll-on-finn-batato-but-hell-keep-fighting-170227/

Whenever there’s a new headline about the years-long prosecution of Megaupload, it is usually Kim Dotcom’s image adorning publications around the world. In many ways, the German-born entrepreneur is the face of the United States’ case against the defunct storage site, and he appears to like it that way.

Thanks to his continuous presence on Twitter, regular appearances in the media, alongside promotion of new file-sharing platforms, one might be forgiven for thinking Dotcom was fighting the US single-handedly. But quietly and very much in the background, three other men are also battling for their freedom.

Megaupload programmers Mathias Ortmann and Bram van der Kolk face a similar fate to Dotcom but have stayed almost completely silent since their arrests in 2012. Former site advertising manager Finn Batato, whose name headlines the entire case (US v. Finn Batato), has been a little more vocal though, and from recent comments we learn that the US prosecution is taking its toll.

Seven years ago, before the raid, Batato was riding the crest of a wave as Megaupload’s CMO. According to the FBI he pocketed $630,000 in 2010 and was regularly seen out with Dotcom having fun, racing around the Nürburgring’s Nordschleife track with Formula 1 star Kimi Raikkonen, for example. But things are different now.

Finn with Kimi Raikkonen

While still involved with Mega, the new file-sharing site that Dotcom founded and then left after what appears to be an acrimonious split, Batato is reportedly feeling the pressure. In a new interview with Newshub, the marketing expert says that his marriage is on the rocks, a direct result of the US case against him.

According to Batato, he’s now living in someone else’s house, something he hasn’t done “for 25 years.” It’s a far cry from the waterside luxury being enjoyed by Dotcom.

Batato met wife Anastasia back in 2012, not long after the raid and while he was still under house arrest. The pair married in 2015 and have two children, Leo and Oskar.

“The constant pressure over your head – not knowing what is there to come, is very hard, very tough,” Batato said in an earlier interview with NZHerald.

“Everything that happens in our life happens with that big black cloud over our heads which especially has an impact on me and my mood because I can’t just switch it off. If everything goes down the hill, maybe I will see [my sons] once every month in a prison cell. That breaks my heart. I can’t enjoy it as much as I would want to. It’s highly stressful.”

Since then, Batato has been busy. While working as Mega’s Chief Marketing Officer, the German citizen has been learning about the law. He’s had to. Unlike Dotcom who can retain the best lawyers in the game, Batato says he has few resources.

What savings he had were seized on the orders of the United States in Hong Kong back in 2012, and he previously admitted to having to check his bank account before buying groceries. As a result he’s been conducting his own legal defense for almost two years.

In 2015 he reportedly received praise while doing so, with lawyers appearing for his co-defendants commending him when he stood up to argue a point during a Megaupload hearing. “I was kind of proud about that,” he said.

Like Dotcom (with whom he claims to be on “good terms”), Batato insists that he’s done nothing wrong. He shares his former colleague’s optimism that he won’t be extradited and will take his case to the Supreme Court, should all else fail.

That may be necessary. Last week, the New Zealand High Court determined that Batato and his co-defendants can be extradited to the US, albeit not on copyright grounds. Justice Murray Gilbert agreed with the US Government’s position that their case has fraud at its core, an extraditable offense.

In the short term, the case is expected to move to the Court of Appeal and, depending on the outcome there, potentially to the Supreme Court. Either way, this case still has years to run with plenty more legal appearances for Batato. He won’t be doing it with the legal backup enjoyed by Dotcom, but he does share his former colleague’s determination.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.