
Attend This Free December 13 Tech Talk: “Cloud-Native DDoS Mitigation with AWS Shield”

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/register-for-and-attend-this-december-14-aws-shield-tech-talk-cloud-native-ddos-mitigation/

AWS Online Tech Talks banner

As part of the AWS Online Tech Talks series, AWS will present Cloud-Native DDoS Mitigation with AWS Shield on Wednesday, December 13. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.

Distributed Denial of Service (DDoS) mitigation can help you maintain application availability, but traditional solutions are hard to scale and require expensive hardware. AWS Shield is a managed DDoS protection service that helps you safeguard web applications running in the AWS Cloud. In this tech talk, you will learn simple techniques for using AWS Shield to help you build scalable DDoS defenses into your applications without investing in costly infrastructure. You also will learn how AWS Shield helps you monitor your applications to detect DDoS attempts and how to respond to in-progress events.

This tech talk is free. Register today.

– Craig

Managing AWS Lambda Function Concurrency

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/

One of the key benefits of serverless applications is the ease with which they can scale to meet traffic demands or requests, with little to no need for capacity planning. In AWS Lambda, which is the core of the serverless platform at AWS, the unit of scale is a concurrent execution. This refers to the number of executions of your function code that are happening at any given time.

Thinking about concurrent executions as a unit of scale is a fairly unique concept. In this post, I dive deeper into this idea and talk about how you can make use of per-function concurrency limits in Lambda.

Understanding concurrency in Lambda

Instead of diving right into the guts of how Lambda works, here’s an appetizing analogy: a magical pizza.
Yes, a magical pizza!

This magical pizza has some unique properties:

  • It has a fixed maximum number of slices, such as 8.
  • Slices automatically re-appear after they are consumed.
  • When you take a slice from the pizza, it does not re-appear until it has been completely consumed.
  • One person can take multiple slices at a time.
  • You can easily ask to have the number of slices increased, but they remain fixed at any point in time otherwise.

Now that the magical pizza’s properties are defined, here’s a hypothetical situation of some friends sharing this pizza.

Shawn, Kate, Daniela, Chuck, Ian and Avleen get together every Friday to share a pizza and catch up on their week. As there are just six of them, they can all easily enjoy a slice of pizza at a time. As they finish each slice, it re-appears in the pizza pan and they can take another. Given the magical properties of their pizza, they can continue to eat all they want, but with two very important constraints:

  • If any of them take too many slices at once, the others may not get as much as they want.
  • If they take too many slices, they might also eat too much and get sick.

One particular week, some of the friends are hungrier than the rest, taking two slices at a time instead of just one. If more than two of them try to take two pieces at a time, this can cause contention for pizza slices. Some of them would wait hungry for the slices to re-appear. They could ask for a pizza with more slices, but then run the same risk again later if more hungry friends join than planned for.

What can they do?

If the friends agreed to accept a limit on the maximum number of slices they each eat concurrently, both of these issues are avoided. Some could have a maximum of 2 of the 8 slices, and others could have higher or lower limits, as long as the total stayed at or under the 8 slices that can be eaten at one time. This would keep anyone from going hungry or eating too much. The six friends can happily enjoy their magical pizza without worry!

Concurrency in Lambda

Concurrency in Lambda actually works similarly to the magical pizza model. Each AWS Account has an overall AccountLimit value that is fixed at any point in time, but can be easily increased as needed, just like the count of slices in the pizza. As of May 2017, the default limit is 1000 “slices” of concurrency per AWS Region.

Also like the magical pizza, each concurrency “slice” can only be consumed individually one at a time. After consumption, it becomes available to be consumed again. Services invoking Lambda functions can consume multiple slices of concurrency at the same time, just like the group of friends can take multiple slices of the pizza.
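
As a rough rule of thumb from the Lambda documentation, the concurrency a function consumes is approximately its average number of invocations per second multiplied by its average duration in seconds. For example, a function invoked 10 times per second that runs for an average of 3 seconds consumes roughly 30 concurrent executions, or 30 “slices” of the account limit.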

Let’s take our example of the six friends and bring it back to AWS services that commonly invoke Lambda:

  • Amazon S3
  • Amazon Kinesis
  • Amazon DynamoDB
  • Amazon Cognito

In a single account with the default concurrency limit of 1000 concurrent executions, any of these four services could invoke enough functions to consume the entire limit or some part of it. Just like with the pizza example, there is the possibility for two issues to pop up:

  • One or more of these services could invoke enough functions to consume a majority of the available concurrency capacity. This could cause others to be starved for it, causing failed invocations.
  • A service could consume too much concurrent capacity and cause a downstream service or database to be overwhelmed, which could cause failed executions.

For Lambda functions that are launched in a VPC, you have the potential to consume the available IP addresses in a subnet or the maximum number of elastic network interfaces to which your account has access. For more information, see Configuring a Lambda Function to Access Resources in an Amazon VPC. For information about elastic network interface limits, see the Network Interfaces section in the Amazon VPC Limits topic.

One way to solve both of these problems is applying a concurrency limit to the Lambda functions in an account.

Configuring per function concurrency limits

You can now set a concurrency limit on individual Lambda functions in an account. The concurrency limit that you set reserves a portion of your account level concurrency for a given function. All of your functions’ concurrent executions count against this account-level limit by default.

If you set a concurrency limit for a specific function, then that function’s concurrency limit allocation is deducted from the shared pool and assigned to that specific function. AWS also reserves 100 units of concurrency for all functions that don’t have a specified concurrency limit set. This helps to make sure that future functions have capacity to be consumed.

Going back to the example of the consuming services, you could set throttles for the functions as follows:

Amazon S3 function = 350
Amazon Kinesis function = 200
Amazon DynamoDB function = 200
Amazon Cognito function = 150
Total = 900

With the 100 units reserved for all functions without a configured concurrency limit, this totals the account limit of 1000.

Here’s how this works. To start, create a basic Lambda function that is invoked via Amazon API Gateway. This Lambda function returns a single “Hello World” statement with an added sleep time between 2 and 5 seconds. The sleep time simulates an API providing some sort of capability that can take a varied amount of time. The goal here is to show how an API that is under load can reach its concurrency limit, and what happens when it does.
To create the example function

  1. Open the Lambda console.
  2. Choose Create Function.
  3. For Author from scratch, enter the following values:
    1. For Name, enter a value (such as concurrencyBlog01).
    2. For Runtime, choose Python 3.6.
    3. For Role, choose Create new role from template and enter a name aligned with this function, such as concurrencyBlogRole.
  4. Choose Create function.
  5. The function is created with some basic example code. Replace that code with the following:

import time
from random import randint

# The sleep time is chosen once per execution environment (at cold start),
# so a warm container reuses the same value across invocations.
seconds = randint(2, 5)

def lambda_handler(event, context):
    # Simulate an API call that takes a variable amount of time
    time.sleep(seconds)
    return {
        "statusCode": 200,
        "body": "Hello world, slept " + str(seconds) + " seconds",
        "headers": {
            "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
            "Access-Control-Allow-Methods": "GET,OPTIONS",
        },
    }

  6. Under Basic settings, set Timeout to 10 seconds. While this function should only ever take up to 5-6 seconds (with the 5-second max sleep), this gives you a little bit of room if it takes longer.

  7. Choose Save at the top right.

At this point, your function is configured for this example. Test it and confirm this in the console:

  1. Choose Test.
  2. Enter a name (it doesn’t matter for this example).
  3. Choose Create.
  4. In the console, choose Test again.
  5. You should see output similar to the following:

Now configure API Gateway so that you have an HTTPS endpoint to test against.

  1. In the Lambda console, choose Configuration.
  2. Under Triggers, choose API Gateway.
  3. Open the API Gateway icon now shown as attached to your Lambda function:

  4. Under Configure triggers, leave the default values for API Name and Deployment stage. For Security, choose Open.
  5. Choose Add, Save.

API Gateway is now configured to invoke Lambda at the Invoke URL shown under its configuration. You can take this URL and test it in any browser or command line, using tools such as “curl”:


$ curl https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Hello world, slept 2 seconds

Throwing load at the function

Now start throwing some load against your API Gateway + Lambda function combo. Right now, your function is only limited by the total amount of concurrency available in the account. For this example account, you might have 850 units of unreserved concurrency out of a full account limit of 1000, due to having configured a few concurrency limits already (plus the 100 units reserved for all functions without configured limits). You can find all of this information on the main Dashboard page of the Lambda console:
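
You can also pull the same numbers from the AWS CLI. As a quick sketch (output trimmed to the relevant fields, and the values shown are only illustrative for this example account):

$ aws lambda get-account-settings --output json --query AccountLimit
{
    "ConcurrentExecutions": 1000,
    "UnreservedConcurrentExecutions": 850
}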

For generating load in this example, use an open source tool called “hey” (https://github.com/rakyll/hey), which works similarly to ApacheBench (ab). You can run the test from an Amazon EC2 instance running the default Amazon Linux AMI, launched from the EC2 console. For more help with configuring an EC2 instance, follow the steps in the Launch Instance Wizard.

After the EC2 instance is running, SSH into the host and run the following:


sudo yum install go
go get -u github.com/rakyll/hey

“hey” is easy to use. For these tests, specify a total number of tests (5,000) and a concurrency of 50 against the API Gateway URL as follows (replace the URL here with your own):


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

The output from “hey” tells you interesting bits of information:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

Summary:
Total: 381.9978 secs
Slowest: 9.4765 secs
Fastest: 0.0438 secs
Average: 3.2153 secs
Requests/sec: 13.0891
Total data: 140024 bytes
Size/request: 28 bytes

Response time histogram:
0.044 [1] |
0.987 [2] |
1.930 [0] |
2.874 [1803] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
3.817 [1518] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
4.760 [719] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
5.703 [917] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
6.647 [13] |
7.590 [14] |
8.533 [9] |
9.477 [4] |

Latency distribution:
10% in 2.0224 secs
25% in 2.0267 secs
50% in 3.0251 secs
75% in 4.0269 secs
90% in 5.0279 secs
95% in 5.0414 secs
99% in 5.1871 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0003 secs, 0.0000 secs, 0.0332 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0046 secs
req write: 0.0000 secs, 0.0000 secs, 0.0005 secs
resp wait: 3.2149 secs, 0.0438 secs, 9.4472 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0004 secs

Status code distribution:
[200] 4997 responses
[502] 3 responses

You can see a helpful histogram and latency distribution. Remember that this Lambda function has a random sleep period in it and so isn’t entirely representative of a real-life workload. Those three 502s warrant digging deeper, but could be due to Lambda cold-start timing and the “seconds” variable being at its maximum of 5, causing the Lambda functions to time out. AWS X-Ray and the Amazon CloudWatch logs generated by both API Gateway and Lambda could help you troubleshoot this.

Configuring a concurrency reservation

Now that you’ve established that you can generate this load against the function, I show you how to limit it and protect a backend resource from being overloaded by all of these requests.

  1. In the console, choose Configure.
  2. Under Concurrency, for Reserve concurrency, enter 25.

  3. Choose Save in the top right corner.

You could also set this with the AWS CLI using the Lambda put-function-concurrency command or see your current concurrency configuration via Lambda get-function. Here’s an example command:


$ aws lambda get-function --function-name concurrencyBlog01 --output json --query Concurrency
{
"ReservedConcurrentExecutions": 25
}
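
For completeness, here is a sketch of setting the reservation with put-function-concurrency; the response shape shown is what I would expect from the API, so treat it as illustrative:

$ aws lambda put-function-concurrency --function-name concurrencyBlog01 --reserved-concurrent-executions 25
{
    "ReservedConcurrentExecutions": 25
}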

Either way, you’ve set the Concurrency Reservation to 25 for this function. This acts as both a limit and a reservation in terms of making sure that you can execute 25 concurrent functions at all times. Going above this results in the throttling of the Lambda function. Depending on the invoking service, throttling can result in a number of different outcomes, as shown in the documentation on Throttling Behavior. This change has also reduced your unreserved account concurrency for other functions by 25.

Rerun the same load generation as before and see what happens. Previously, you tested at a concurrency of 50, which worked just fine. By limiting the function to a concurrency of 25, you should see rate limiting kick in. Run the same test again:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

While this test runs, refresh the Monitoring tab on your function detail page. You see the following warning message:

This is great! It means that your throttle is working as configured and you are now protecting your downstream resources from too much load from your Lambda function.

Here is the output from a new “hey” command:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Summary:
Total: 379.9922 secs
Slowest: 7.1486 secs
Fastest: 0.0102 secs
Average: 1.1897 secs
Requests/sec: 13.1582
Total data: 164608 bytes
Size/request: 32 bytes

Response time histogram:
0.010 [1] |
0.724 [3075] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
1.438 [0] |
2.152 [811] |∎∎∎∎∎∎∎∎∎∎∎
2.866 [11] |
3.579 [566] |∎∎∎∎∎∎∎
4.293 [214] |∎∎∎
5.007 [1] |
5.721 [315] |∎∎∎∎
6.435 [4] |
7.149 [2] |

Latency distribution:
10% in 0.0130 secs
25% in 0.0147 secs
50% in 0.0205 secs
75% in 2.0344 secs
90% in 4.0229 secs
95% in 5.0248 secs
99% in 5.0629 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0004 secs, 0.0000 secs, 0.0537 secs
DNS-lookup: 0.0002 secs, 0.0000 secs, 0.0184 secs
req write: 0.0000 secs, 0.0000 secs, 0.0016 secs
resp wait: 1.1892 secs, 0.0101 secs, 7.1038 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0005 secs

Status code distribution:
[502] 3076 responses
[200] 1924 responses

This looks fairly different from the last load test run. A large percentage of these requests failed fast because the concurrency throttle rejected them (those in the 0.724-second bucket). The timing shown here in the histogram represents the entire time it took to get a response between the EC2 instance and API Gateway calling Lambda and being rejected. It’s also important to note that this example was configured with an edge-optimized endpoint in API Gateway. You see under Status code distribution that 3076 of the 5000 requests failed with a 502, showing that the backend service behind API Gateway (the Lambda function) failed the request.

Other uses

Managing function concurrency can be useful in a few other ways beyond just limiting the impact on downstream services and providing a reservation of concurrency capacity. Here are two other uses:

  • Emergency kill switch
  • Cost controls

Emergency kill switch

On occasion, due to issues with applications I’ve managed in the past, I’ve had a need to disable a certain function or capability of an application. By setting the concurrency reservation and limit of a Lambda function to zero, you can do just that.

With the reservation set to zero, every invocation of the Lambda function is throttled. You could then work on the related parts of the infrastructure or application that aren’t working, and then reconfigure the concurrency limit to allow invocations again. A CLI sketch of this pattern follows.
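
Assuming a hypothetical function named myFunction (substitute your own), the kill switch could look like this:

$ aws lambda put-function-concurrency --function-name myFunction --reserved-concurrent-executions 0

$ aws lambda delete-function-concurrency --function-name myFunction

The first command throttles every new invocation; the second removes the reservation entirely, returning the function to the shared pool of unreserved account concurrency.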

Cost controls

While I mentioned how you might want to use concurrency limits to control the downstream impact to services or databases that your Lambda function might call, another resource that you might be cautious about is money. Setting the concurrency throttle is another way to help control costs during development and testing of your application.

You might want to prevent a function from performing a recursive action too quickly, or a development workload from generating too high a concurrency. You might also want to protect development resources connected to this function from generating too much cost, such as APIs that your Lambda function calls.

Conclusion

Concurrent executions as a unit of scale are a fairly unique characteristic of Lambda functions. Placing limits on how many concurrency “slices” your function can consume can prevent a single function from consuming all of the available concurrency in an account. Limits can also prevent a function from overwhelming a backend resource that isn’t as scalable.

Unlike monolithic applications or even microservices where there are mixed capabilities in a single service, Lambda functions encourage a sort of “nano-service” of small business logic directly related to the integration model connected to the function. I hope you’ve enjoyed this post and configure your concurrency limits today!

MQTT 5: Introduction to MQTT 5

Post Syndicated from The HiveMQ Team original https://www.hivemq.com/blog/mqtt-5-introduction-to-mqtt-5/

MQTT 5 Introduction

Introduction to MQTT 5

Welcome to our brand new blog post series MQTT 5 – Features and Hidden Gems. Without doubt, the MQTT protocol is the most popular and best-received Internet of Things protocol today (see the Google Trends chart below), supporting large-scale use cases ranging from connected cars, manufacturing systems, logistics, and military applications to enterprise chat applications, mobile apps, and constrained IoT devices. Of course, with huge numbers of production deployments, the wish list for future versions of the MQTT protocol grew bigger and bigger.

MQTT 5 is by far the most extensive and most feature-rich update to the MQTT protocol specification ever. We are going to explore all hidden gems and protocol features with use case discussion and useful background information – one blog post at a time.

Be sure to read the MQTT Essentials Blog Post series first before diving into our new MQTT 5 series. To get the most out of the new blog posts, it’s important to have a basic understanding of the MQTT 3.1.1 protocol as we are going to highlight key changes as well as all improvements.

CoderDojo: 2000 Dojos ever

Post Syndicated from Giustina Mizzoni original https://www.raspberrypi.org/blog/2000-dojos-ever/

Every day of the week, we verify new Dojos all around the world, and each Dojo is championed by passionate volunteers. Last week, a huge milestone for the CoderDojo community went by relatively unnoticed: in the history of the movement, more than 2000 Dojos have now been verified!

CoderDojo banner — 2000 Dojos

2000 Dojos

This is a phenomenal achievement for a movement that’s just six years old and powered by volunteers. Presently, there are more than 1650 active Dojos running weekly, fortnightly, or monthly, and all of them are free for participants — for example, the Dojos run by Joel Bayubasire in Kampala, Uganda:

Joel Bayubasire with Ninjas at his Ugandan Dojo — 2000 Dojos

Empowering refugee children

This week, Joel set up his second Dojo and verified it on our global map. Joel is a Congolese refugee living in Kampala, Uganda, where he is currently completing his PhD in Economics at Madison International Institute and Business School.

Joel understands first-hand the challenges faced by refugees who were forced to leave their country due to war or conflict. Uganda is currently hosting more than 1.2 million refugees, 60% of whom are children (World Bank, 2017). As refugees, children are only allowed to attend local schools until the age of 12. This results in lower educational attainment, which is likely to affect their future employment prospects.

Two girls at a laptop. Joel Bayubasire CoderDojo — 2000 Dojos

Joel has the motivation to overcome these challenges, because he understands the power of education. Therefore, he initiated a number of community-based activities to provide educational opportunities for refugee children. As part of this, he founded his first Dojo earlier in the year, with the aim of giving these children a chance to compete in today’s global knowledge-based economy.

Two boys at a laptop. Joel Bayubasire CoderDojo — 2000 Dojos

Aware that securing volunteer mentors would be a challenge, Joel trained eight young people from the community to become youth mentors to their peers. He explains:

I believe that the mastery of computer coding allows talented young people to thrive professionally and enables them to not only be consumers but creators of the interconnected world of today!

Based on the success of Joel’s first Dojo, he has now expanded the CoderDojo initiative in his community; his plan is to provide computer science training for more than 300 refugee youths in Kampala by 2019. If you’d like to learn more about Joel’s efforts, head to this website.

Join the movement

If you are interested in creating opportunities for the young people in your community, then join the growing CoderDojo movement — you can volunteer to start a Dojo or to support an existing one today!

The post CoderDojo: 2000 Dojos ever appeared first on Raspberry Pi.

Screener Piracy Season Kicks Off With Louis C.K.’s ‘I Love You, Daddy’

Post Syndicated from Ernesto original https://torrentfreak.com/screener-piracy-season-kicks-off-with-louis-c-k-s-i-love-you-daddy-171211/

Towards the end of the year, movie screeners are sent out to industry insiders who cast their votes for the Oscars and other awards.

It’s a highly anticipated time for pirates who hope to get copies of the latest blockbusters early, which is traditionally what happens.

Last year the action started relatively late. It took until January before the first leak surfaced – Denzel Washington’s Fences – but more than a dozen made their way online soon after.

Today, the first leak of the new screener season, Louis C.K.’s “I Love You, Daddy,” started to populate various pirate sites. It was released by the infamous “Hive-CM8” group, which also made headlines in previous years.

“I Love You, Daddy” was carefully chosen, according to a message posted in the release notes. Last month, distributor The Orchard dropped the film from its release schedule after Louis C.K. was accused of sexual misconduct. With uncertainty surrounding the film’s release, “Hive-CM8” decided to get it out.

“We decided to let this one title go out this month, since it never made it to the cinema, and nobody knows if it ever will go to retail at all,” Hive-CM8 write in their NFO.

“Either way their is no perfect time to release it anyway, but we think it would be a waste to let a great Louis C.K. go unwatched and nobody can even see or buy it,” they add.

I Love You, Daddy

It is no surprise that the group put some thought into their decision. In 2015 they published several movies before their theatrical release, for which they later offered an apology, stating that this wasn’t acceptable.

Last year, the group reiterated this stance, noting that it would not leak any screeners before Christmas. Today’s release shows that this isn’t a golden rule, but it’s unlikely that they will push any big titles before they’re out in theaters.

“I Love You, Daddy” isn’t going to be seen in theaters anytime soon, but it might see an official release. This past weekend, news broke that Louis C.K. had bought back the rights from The Orchard and must pay back marketing costs, including a payment for the 12,000 screeners that were sent out.

Hive-CM8, meanwhile, suggest that they have more screeners in hand, although their collection isn’t yet complete.

“We are still missing some titles, anyone want to share for the collection? Yes we want to have them all if possible, we are collectors, we don’t want to release them all,” they write.

Finally, the group also has some disappointing news for Star Wars fans who are looking for an early copy of “The Last Jedi.” Hive-CM8 is not going to release it.

“Their will be no starwars from us, sorry wont happen,” they write.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

ETTV: How an Upload Bot Became a Pirate Hero

Post Syndicated from Ernesto original https://torrentfreak.com/ettv-how-an-upload-bot-became-a-pirate-hero-171210/

Earlier this year, the torrent community was hit hard when another major torrent site suddenly shut its doors.

Just a few months after celebrating its tenth anniversary, ExtraTorrent’s operator threw in the towel. While an official explanation was never provided, it’s likely that he was pressed to make this decision.

The ExtraTorrent site was a safe harbor for millions of regular users, who became homeless overnight. But it was more than that. It was also the birthplace of several popular releasers and distribution groups.

ETTV and ETHD turned into well-known brands themselves. While the “ET” prefix is derived from ExtraTorrent, the groups have shared TV and movie torrents on several other large torrent sites, and they still do. They even have their own site now.

With millions of people sharing their uploads every week, they’ve become icons and heroes to many. But how did this all come to be? We sat down with the team, virtually, to find out more.

“The idea for ettv/ethd was brought up by ExtraTorrent users,” the ETTV team says.

There was demand for a new group that would upload scene releases faster than the original EZTV, which was the dominant TV-torrent distribution group around 2011, when it all started.

“At the time the real EZTV was still active. They released stuff hours after it was released from the scene, leaving sites to wait very long for shows to arrive in public. In no way was ettv intended for competitive purposes. We had a lot of respect for Nova and the original EZTV operators.”

While ETTV is regularly referred to as a “group,” it was a one-person operation initially. Just a guy with a seedbox, grabbing scene releases and posting them on torrent sites.

It didn’t take long before people got wind of the new distribution ‘group,’ and interest in the torrents quickly exploded. This meant that a single seedbox was no longer sufficient, but help was not far away.

“It started off with one operator and a seedbox, but it became popular too fast. That’s when former ExtraTorrent owners stepped in to give ETTV the support and funding it needed to keep the story going.”

One of the earliest ETTV uploads on ExtraTorrent

In addition to the available disk space and bandwidth, the team itself expanded as well. At its height, a handful of people were working on the group. However, when things became more and more automated this number reduced again.

What many people don’t realize is that ETTV and ETHD are mostly run by lines of code. The entire distribution process is automated and requires minimal intervention from the people behind it.

“Ettv/Ethd is a bot, it doesn’t require human attention. It grabs what you tell the script to,” the team tells us.

The bot is set up to grab the latest copies of predefined shows from private servers where the latest scene release are posted. These are transferred to the seedbox and the torrents are then pushed out to the public – on ETTV.tv, but also on The Pirate Bay and elsewhere. Everything is automated.

Even most of the maintenance is taken care of by the ‘bot’ itself. When disk space is running out, older content is purged, allowing fresh releases to come through.

“The only persons involve with the bots are the bill payers of our new home ettv.tv. All they do check bot logs to see if it has any errors and correct them,” the team explains.

One problem that couldn’t be easily solved with some code was the shutdown of ExtraTorrent. While the bills for the seedboxes were paid in advance until the end of 2017, the groups had to find a new home.

“The shutdown of ExtraTorrent didn’t affect the bots from running, it just left ettv/ethd homeless and caused fans to lose their way trying to find us. Not many knew where else we uploaded or didn’t like the other sites we uploaded to.”

After a few months had passed it became clear that they were not going anywhere. Quite the contrary, they started their very own site, ETTV.tv, where all the latest releases are published.

ETTV.tv

In the near future, the team will focus on turning the site into a new home for its followers. Just a few weeks ago, for example, it launched a new release “tag,” ETMovies, which specializes in lower-resolution films with a smaller file size.

“We recently introduced ETMovies which is basically for SD Movies, other than that the only plan ettv/ethd has is to give a home to the members that suffered from the sudden shut down of ExtraTorrent.”

Just this week, the site also expanded its reach by adding new categories such as music, games, software, and books, where approved uploaders will publish content.

While they are doing their best to keep the site up and running, it’s not a given that ETTV will be around forever. As long as there are plenty of funds and no concrete legal pressure, they might be. But if recent history has shown us anything, it’s that there are no guarantees.

“No one is here seeking to be a millionaire, if the traffic pays the bills we keep going, if not then all we can say is (sorry we tried) we will not be the heroes that saved the day.

“Again and again, the troublesome history of torrent sites is clear. It’s a war no site owner can win. If we are ever in danger, we will choose freedom. It’s not like followers can bail you out if the worst were to happen,” the ETTV team concludes.

For now, however, the bot keeps on running.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Roguelike Simulator

Post Syndicated from Eevee original https://eev.ee/release/2017/12/09/roguelike-simulator/

Screenshot of a monochromatic pixel-art game designed to look mostly like ASCII text

On a recent game night, glip and I stumbled upon bitsy — a tiny game maker for “games where you can walk around and talk to people and be somewhere.” It’s enough of a genre to have become a top tag on itch, so we flicked through a couple games.

What we found were tiny windows into numerous little worlds, ill-defined yet crisply rendered in chunky two-colored pixels. Indeed, all you can do is walk around and talk to people and be somewhere, but the somewheres are strangely captivating. My favorite was the last days of our castle, with a day on the town in a close second (though it cheated and extended the engine a bit), but there are several hundred of these tiny windows available. Just single, short, minimal, interactive glimpses of an idea.

I’ve been wanting to do more of that, so I gave it a shot today. The result is Roguelike Simulator, a game that condenses the NetHack experience into about ninety seconds.


Constraints breed creativity, and bitsy is practically made of constraints — the only place you can even make any decisions at all is within dialogue trees. There are only three ways to alter the world: the player can step on an ending tile to end the game, step on an exit tile to instantly teleport to a tile on another map (or not), or pick up an item. That’s it. You can’t even implement keys; the best you can do is make an annoying maze of identical rooms, then have an NPC tell you the solution.

In retrospect, a roguelike — a genre practically defined by its randomness — may have been a poor choice.

I had a lot of fun faking it, though, and it worked well enough to fool at least one person for a few minutes! Some choice hacks follow. Probably play the game a couple times before reading them?

  • Each floor reveals itself, of course, by teleporting you between maps with different chunks of the floor visible. I originally intended for this to be much more elaborate, but it turns out to be a huge pain to juggle multiple copies of the same floor layout.

  • Endings can’t be changed or randomized; even the text is static. I still managed to implement multiple variants on the “ascend” ending! See if you can guess how. (It’s not that hard.)

  • There are no Boolean operators, but there are arithmetic operators, so in one place I check whether you have both of two items by multiplying together how many of each you have.

  • Monsters you “defeat” are actually just items you pick up. They’re both drawn in the same color, and you can’t see your inventory, so you can’t tell the difference.

Probably the best part was writing the text, which is all completely ridiculous. I really enjoy writing a lot of quips — which I guess is why I like Twitter — and I’m happy to see they’ve made people laugh!


I think this has been a success! It’s definitely made me more confident about making smaller things — and about taking the first idea I have and just running with it. I’m going to keep an eye out for other micro game engines to play with, too.

Movie Company Has No Right to Sue, Accused Pirate Argues

Post Syndicated from Ernesto original https://torrentfreak.com/movie-company-has-no-right-to-sue-accused-pirate-argues-171208/

In recent years, a group of select companies have pressured hundreds of thousands of alleged pirates to pay significant settlement fees, or face legal repercussions.

These so-called “copyright trolling” efforts have also been a common occurrence in the United States for more than half a decade, and still are today.

While copyright holders should be able to take legitimate piracy claims to court, not all cases are as strong as they first appear. Many defendants have brought up flaws, often in relation to the IP-address evidence, but an accused pirate in Oregon takes things up a notch.

Lingfu Zhang, represented by attorney David Madden, has turned the tables on the makers of the film Fathers & Daughters. The man denies having downloaded the movie but also points out that the filmmakers have signed away their online distribution rights.

The issue was brought up in previous months, but the relevant findings were only unsealed this week. They show that the movie company (F&D), through a sales agent, sold the online distribution rights to a third party.

While this is not uncommon in the movie business, it means that they no longer have the right to distribute the movie online, a right Zhang was accused of violating. This is also what his attorney pointed out to the court, asking for a judgment in favor of his client.

“ZHANG denies downloading the movie but Defendant’s current motion for summary judgment challenges a different portion of F&D’s case: Defendant argues that F&D has alienated all of the relevant rights necessary to sue for infringement under the Copyright Act,” Madden writes.

The filmmakers opposed the request and pointed out that they still had some rights. However, this is irrelevant according to the defense, since the distribution rights are not owned by them, but by a company that’s not part of the lawsuit.

“Plaintiff claims, for example, that it still owns the right to exploit the movie on airlines and oceangoing vessels. That may or may not be true – Plaintiff has not submitted any evidence on the question – but ZHANG is not accused of showing the movie on an airplane or a cruise ship.

“He is accused of downloading it over the Internet, which is an infringement that affects only an exclusive right owned by non-party DISTRIBUTOR 2,” Madden adds.

Interestingly, an undated addendum to the licensing agreement, allegedly created after the lawsuit was started, states that the filmmakers would keep their “anti-piracy” rights, as can be seen below.

Anti-Piracy rights?

This doesn’t save the filmmaker, according to the defense. The “licensor” who keeps these anti-piracy and enforcement rights refers to the sales agent, not the filmmaker, Madden writes. In addition, the case is about copyright infringement, and despite the addendum, the filmmakers don’t have the exclusive rights that apply here.

“Plaintiff represented to this Court that it was the ‘proprietor of all copyrights and interests need to bring suit’ […] notwithstanding that it had – years earlier – transferred away all its exclusive rights under Section 106 of the Copyright Act,” the defense lawyer concludes.

“Even viewing all Plaintiff’s agreements in the light most favorable to it, Plaintiff holds nothing more than a bare right to sue, which is not a cognizable right that may be exercised in the courts of this Circuit.”

While the court has yet to decide on the motion, this case could turn into a disaster for the makers of Fathers & Daughters.

If the court agrees that they don’t have the proper rights, defendants in other cases may argue the same. It’s easy to see how their entire trolling scheme would then collapse.

The original memorandum in support of the motion for summary judgment is available here (pdf) and a copy of the reply brief can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Movie & TV Companies Tackle Pirate IPTV in Australia Federal Court

Post Syndicated from Andy original https://torrentfreak.com/movie-tv-companies-tackle-pirate-iptv-in-australia-federal-court-171207/

As movie and TV show piracy has migrated from the desktop towards mobile and living room-based devices, copyright holders have found the need to adapt to a new enemy.

Dealing with streaming services is now high on the agenda, with third-party Kodi addons and various Android apps posing the biggest challenge. Alongside is the much less prevalent but rapidly growing pay IPTV market, in which thousands of premium channels are delivered to homes for a relatively small fee.

In Australia, copyright holders are treating these services in much the same way as torrent sites. They feel that if they can force ISPs to block them, the problem can be mitigated. Most recently, movie and TV show giants Village Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount filed an application targeting HDSubs+, a pirate IPTV operation servicing thousands of Australians.

Filed in October, the application for the injunction targets Australia’s largest ISPs including Telstra, Optus, TPG, and Vocus, plus their subsidiaries. The movie and TV show companies want them to quickly block HDSubs+, to prevent it from reaching its audience.

HDSubs+ IPTV package
However, blocking isn’t particularly straightforward. Due to the way IPTV services are set up, a number of domains need to be blocked, including their sales platforms, EPG (electronic program guide), software (such as an Android app), updates, and sundry other services. In the HDSubs+ case, around ten domains need to be restricted, but in court today Village Roadshow revealed that this probably won’t deal with the problem.

HDSubs+ appears to be undergoing some kind of transformation, possibly to mitigate efforts to block it in Australia. ComputerWorld reports that it is now directing subscribers to update to a new version that works in a more evasive manner.

If they agree, HDSubs+ customers are being migrated over to a service called PressPlayPlus. It works in the same way as the old system but no longer uses the domain names cited in Village Roadshow’s injunction application. This means that DNS blocks, the usual weapon of choice for local ISPs, will prove futile.

Village Roadshow says that with this in mind it may be forced to seek enhanced IP address blocking, unless it is granted a speedy hearing for its application. This, in turn, may result in the normally cooperative ISPs returning to court to argue their case.

“If that’s what you want to do, then you’ll have to amend the orders and let the parties know,” Judge John Nicholas said.

“It’s only the former [DNS blocking] that carriage service providers have agreed to in the past.”

As things stand, Village Roadshow will return to court on December 15 for a case management hearing but in the meantime, the Federal Court must deal with another IPTV-related blocking request.

In common with its Australian and US-based counterparts, Hong Kong-based broadcaster Television Broadcasts Limited (TVB) has launched a similar case asking local ISPs to block another IPTV service.

“Television Broadcasts Limited can confirm that we have commenced legal action in Australia to protect our copyright,” a TVB spokesperson told Computerworld.

TVB wants ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

Court documents list 21 URLs maintaining the services. They will all need to be blocked by DNS or other means, if the former proves futile. Online reports suggest that there are similarities among the IPTV products listed above. A demo for the FunTV IPTV service is shown below.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Running Windows Containers on Amazon ECS

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/running-windows-containers-on-amazon-ecs/

This post was developed and written by Jeremy Cowan, Thomas Fuller, Samuel Karp, and Akram Chetibi.

Containers have revolutionized the way that developers build, package, deploy, and run applications. Initially, containers only supported code and tooling for Linux applications. With the release of Docker Engine for Windows Server 2016, Windows developers have started to realize the gains that their Linux counterparts have experienced for the last several years.

This week, we’re adding support for running production workloads in Windows containers using Amazon Elastic Container Service (Amazon ECS). Now, Amazon ECS provides an ECS-Optimized Windows Server Amazon Machine Image (AMI). This AMI is based on the EC2 Windows Server 2016 Datacenter AMI and includes Docker 17.06.2-ee-5 (Enterprise Edition) and ECS Agent 1.16, which now runs as a native Windows service. It provides improved instance and container launch time performance.

In this post, I discuss the benefits of this new support, and walk you through getting started running Windows containers with Amazon ECS.

When AWS released the Windows Server 2016 Base with Containers AMI, the ECS agent ran as a process, which made it difficult to monitor and manage. As a service, the agent can be health-checked, managed, and restarted no differently than other Windows services. The AMI also includes pre-cached images for Windows Server Core 2016 and Windows Server Nano Server 2016. By caching the images in the AMI, launching new Windows containers is significantly faster. When Docker images include a layer that’s already cached on the instance, Docker re-uses that layer instead of pulling it from the Docker registry.

The ECS agent and an accompanying ECS PowerShell module used to install, configure, and run the agent come pre-installed on the AMI. This guarantees there is a specific platform version available on the container instance at launch. Because the software is included, you don’t have to download it from the internet. This saves startup time.

The Windows-compatible ECS-optimized AMI also reports CPU and memory utilization and reservation metrics to Amazon CloudWatch. Using the CloudWatch integration with ECS, you can create alarms that trigger dynamic scaling events to automatically add or remove capacity to your EC2 instances and ECS tasks.

Getting started

To help you get started running Windows containers on ECS, I’ve forked the ECS reference architecture to build an ECS cluster made up of Windows instances instead of Linux instances. You can pull the latest version of the reference architecture for Windows.

The reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. Within the stack, the ecs-windows-cluster.yaml file contains the instructions for bootstrapping the Windows instances and configuring the ECS cluster. To configure the instances outside of AWS CloudFormation (for example, through the CLI or the console), you can add the following commands to your instance’s user data:

Import-Module ECSTools
Initialize-ECSAgent

Or

Import-Module ECSTools
Initialize-ECSAgent -Cluster MyCluster -EnableIAMTaskRole

If you don’t specify a cluster name when you initialize the agent, the instance is joined to the default cluster.

Adding -EnableIAMTaskRole when initializing the agent adds support for IAM roles for tasks. Previously, enabling this setting meant running a complex script and setting an environment variable before you could assign roles to your ECS tasks.

When you enable IAM roles for tasks on Windows, it consumes port 80 on the host. If you have tasks that listen on port 80 on the host, I recommend configuring a service for them that uses load balancing. You can use port 80 on the load balancer, and the traffic can be routed to another host port on your container instances. For more information, see Service Load Balancing.

Create a cluster

To create a new ECS cluster, choose Launch stack, or pull the GitHub project to your local machine and run the following command:

aws cloudformation create-stack --template-body file://<path to master-windows.yaml> --stack-name <name>

Upload your container image

Now that you have a cluster running, step through how to build and push an image into a container repository. You use a repository hosted in Amazon Elastic Container Registry (Amazon ECR) for this, but you could also use Docker Hub. To build and push an image to a repository, install Docker on your Windows* workstation. You also create a repository and assign the necessary permissions to the account that pushes your image to Amazon ECR. For detailed instructions, see Pushing an Image.

* If you are building an image that is based on Windows layers, then you must use a Windows environment to build and push your image to the registry.
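
As a rough sketch of the build-and-push workflow described above (the repository name, region, and account ID are placeholders, and the exact login flow depends on your AWS CLI version, so check the Amazon ECR documentation):

aws ecr create-repository --repository-name windows-sample
aws ecr get-login --no-include-email --region us-east-1
# run the docker login command that get-login prints, then:
docker build -t windows-sample .
docker tag windows-sample:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/windows-sample:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/windows-sample:latest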

Write your task definition

Now that your image is built and ready, the next step is to run your Windows containers using a task.

Start by creating a new task definition based on the windows-simple-iis image from Docker Hub.

  1. Open the ECS console.
  2. Choose Task Definitions, Create new task definition.
  3. Scroll to the bottom of the page and choose Configure via JSON.
  4. Copy and paste the following JSON into that field.
  5. Choose Save, Create.
{
   "family": "windows-simple-iis",
   "containerDefinitions": [
   {
     "name": "windows_sample_app",
     "image": "microsoft/iis",
     "cpu": 100,
     "entryPoint":["powershell", "-Command"],
     "command":["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html><head><title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center><h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p></body></html>'; C:\\ServiceMonitor.exe w3svc"],
     "portMappings": [
     {
       "protocol": "tcp",
       "containerPort": 80,
       "hostPort": 8080
     }
     ],
     "memory": 500,
     "essential": true
   }
   ]
}
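
If you prefer the CLI to the console, the same JSON can be registered from a local file (the file name here is just an example):

aws ecs register-task-definition --cli-input-json file://windows-simple-iis.json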

You can now go back into the Task Definition page and see windows-simple-iis as an available task definition.

There are a few important aspects of the task definition file to note when working with Windows containers. First, the hostPort is configured as 8080, which is necessary because the ECS agent currently uses port 80 to enable IAM roles for tasks required for least-privilege security configurations.

There are also some fairly standard task parameters that are intentionally not included. For example, network mode is not available with Windows at the time of this release, so keep that setting blank to allow Docker to configure WinNAT, the only option available today.

Also, some parameters work differently with Windows than they do with Linux. The CPU limits that you define in the task definition are absolute, whereas on Linux they are weights. For information about other task parameters that are supported or possibly different with Windows, see the documentation.

Run your containers

At this point, you are ready to run containers. There are two options to run containers with ECS:

  1. Task
  2. Service

A task is typically a short-lived process that ECS creates. It can’t be configured to actively monitor or scale. A service is meant for longer-running containers and can be configured to use a load balancer, minimum/maximum capacity settings, and a number of other knobs and switches to help ensure that your code keeps running. In both cases, you are able to pick a placement strategy and a specific IAM role for your container.

  1. Select the task definition that you created above and choose Action, Run Task.
  2. Leave the settings on the next page to the default values.
  3. Select the ECS cluster created when you ran the CloudFormation template.
  4. Choose Run Task to start the process of scheduling a Docker container on your ECS cluster.
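
If you’d rather schedule the task from the CLI, a minimal equivalent looks like the following (the cluster name is a placeholder for the one created by your CloudFormation stack):

aws ecs run-task --cluster <your-cluster-name> --task-definition windows-simple-iis --count 1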

You can now go to the cluster and watch the status of your task. It may take 5–10 minutes for the task to go from PENDING to RUNNING, mostly because it takes time to download all of the layers necessary to run the microsoft/iis image. After the status is RUNNING, you should see the following results:

You may have noticed that the example task definition is named windows-simple-iis:2. This is because I created a second version of the task definition, which is one of the powerful capabilities of using ECS. You can make the task definitions part of your source code and then version them. You can also roll out new versions and practice blue/green deployment, switching to reduce downtime and improve the velocity of your deployments!

After the task has moved to RUNNING, you can see your website hosted in ECS. Find the public IP or DNS for your ECS host. Remember that you are hosting on port 8080. Make sure that the security group allows ingress from your client IP address to that port and that your VPC has an internet gateway associated with it. You should see a page that looks like the following:

This is a nice start to deploying a simple single-instance task, but what if you had a Web API to be scaled out and in based on usage? This is where you could look at defining a service and collecting CloudWatch data to add and remove instances of the task. You could also use CloudWatch alarms to add more ECS container instances and keep up with the demand. The former is built into the configuration of your service.

  1. Select the task definition and choose Create Service.
  2. Associate a load balancer.
  3. Set up Auto Scaling.
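
A bare-bones CLI version of step 1 might look like this (the service name is an example; associating a load balancer and Auto Scaling adds more parameters, so the console is the easier path for those):

aws ecs create-service --cluster <your-cluster-name> --service-name windows-simple-iis-svc --task-definition windows-simple-iis --desired-count 1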

The following screenshot shows an example where you would add an additional task instance when the CPU Utilization CloudWatch metric is over 60% on average over three consecutive measurements. This may not be aggressive enough for your requirements; it’s meant to show you the option to scale tasks the same way you scale ECS instances with an Auto Scaling group. The difference is that these tasks start much faster because all of the base layers are already on the ECS host.
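
If you want to script this behavior rather than configure it in the console, service scaling is driven by Application Auto Scaling. The sketch below uses a target tracking policy on average service CPU instead of the exact step policy described above; the cluster and service names are placeholders, and depending on your account setup you may also need to supply an IAM role for Application Auto Scaling:

aws application-autoscaling register-scalable-target --service-namespace ecs --resource-id service/<your-cluster-name>/windows-simple-iis-svc --scalable-dimension ecs:service:DesiredCount --min-capacity 1 --max-capacity 4

aws application-autoscaling put-scaling-policy --service-namespace ecs --resource-id service/<your-cluster-name>/windows-simple-iis-svc --scalable-dimension ecs:service:DesiredCount --policy-name cpu-target-tracking --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration '{"TargetValue":60.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'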

Do not confuse task dynamic scaling with ECS instance dynamic scaling. To add additional hosts, see Tutorial: Scaling Container Instances with CloudWatch Alarms.

Conclusion

This is just scratching the surface of the flexibility that you get from using containers and Amazon ECS. For more information, see the Amazon ECS Developer Guide and ECS Resources.

– Jeremy, Thomas, Samuel, Akram

Apple CEO is Optimistic VPN Apps Will Return to China App Store

Post Syndicated from Andy original https://torrentfreak.com/apple-ceo-is-optimistic-vpn-apps-will-return-to-china-app-store-171206/

As part of an emerging crackdown on tools and systems with the ability to bypass China’s ‘Great Firewall’, during the summer Chinese government pressure began to affect Apple.

During the final days of July, Apple was forced to remove many of the most-used VPN applications from its Chinese App Store. In a short email from the company, VPN providers and software developers were told that VPN applications are considered illegal in China.

“We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.

While the position on the ground doesn’t appear to have changed in the interim, Apple Chief Executive Tim Cook today expressed optimism that the VPN apps would eventually be restored to their former positions on China’s version of the App Store.

“My hope over time is that some of the things, the couple of things that’s been pulled, come back,” Cook said. “I have great hope on that and great optimism on that.”

According to Reuters, Cook said that he always tries to find ways to work together to settle differences and if he gets criticized for that “so be it.”

Speaking at the Fortune Forum in the Chinese city of Guangzhou, Cook said that he believes strongly in freedoms. But back home in the US, Apple has been strongly criticized for not doing enough to uphold freedom of speech and communication in China.

Back in October, two US senators wrote to Cook asking why the company had removed the VPN apps from the company’s store in China.

“VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet,” senators Ted Cruz and Patrick Leahy wrote.

“While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’.”

They were comments Senator Leahy underlined again yesterday.

“American tech companies have become leading champions of free expression. But that commitment should not end at our borders,” Leahy told CNBC.

“Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights.”

Whether the optimism expressed by Cook today is based on discussions with the Chinese government is unknown. However, it seems unlikely that authorities would be willing to significantly compromise on their dedication to maintaining the Great Firewall, which not only controls access to locally controversial content but also seeks to boost the success of Chinese companies.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Implementing Dynamic ETL Pipelines Using AWS Step Functions

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/implementing-dynamic-etl-pipelines-using-aws-step-functions/

This post contributed by:
Wangechi Dole, AWS Solutions Architect
Milan Krasnansky, ING, Digital Solutions Developer, SGK
Rian Mookencherry, Director – Product Innovation, SGK

Data processing and transformation is a common use case you see in our customer case studies and success stories. Often, customers deal with complex data from a variety of sources that needs to be transformed and customized through a series of steps to make it useful to different systems and stakeholders. This can be difficult due to the ever-increasing volume, velocity, and variety of data. Increasingly, these data management challenges cannot be solved with traditional databases alone.

Workflow automation helps you build solutions that are repeatable, scalable, and reliable. You can use AWS Step Functions for this. A great example is how SGK used Step Functions to automate the ETL processes for their client. With Step Functions, SGK has been able to automate changes within the data management system, substantially reducing the time required for data processing.

In this post, SGK shares the details of how they used Step Functions to build a robust data processing system based on highly configurable business transformation rules for ETL processes.

SGK: Building dynamic ETL pipelines

SGK is a subsidiary of Matthews International Corporation, a diversified organization focusing on brand solutions and industrial technologies. SGK’s Global Content Creation Studio network creates compelling content and solutions that connect brands and products to consumers through multiple assets including photography, video, and copywriting.

We were recently contracted to build a sophisticated and scalable data management system for one of our clients. We chose to build the solution on AWS to leverage advanced, managed services that help to improve the speed and agility of development.

The data management system served two main functions:

  1. Ingesting a large amount of complex data to facilitate both reporting and product funding decisions for the client’s global marketing and supply chain organizations.
  2. Processing the data through normalization and applying complex algorithms and data transformations. The system goal was to provide information in the relevant context—such as strategic marketing, supply chain, product planning, etc. —to the end consumer through automated data feeds or updates to existing ETL systems.

We were faced with several challenges:

  • Output data that needed to be refreshed at least twice a day to provide fresh datasets to both local and global markets. That constant data refresh posed several challenges, especially around data management and replication across multiple databases.
  • The complexity of reporting business rules that needed to be updated on a constant basis.
  • Data that could not be processed as contiguous blocks of typical time-series data. The measurement of the data was done across seasons (that is, combinations of dates), which often resulted in up to three overlapping seasons at any given time.
  • Input data that came from 10+ different data sources. Each data source ranged from 1–20K rows with as many as 85 columns per input source.

These challenges meant that our small Dev team heavily invested time in frequent configuration changes to the system and data integrity verification to make sure that everything was operating properly. Maintaining this system proved to be a daunting task and that’s when we turned to Step Functions—along with other AWS services—to automate our ETL processes.

Solution overview

Our solution included the following AWS services:

  • AWS Step Functions: Before Step Functions was available, we were using multiple Lambda functions for this use case and running into memory limit issues. With Step Functions, we can execute steps in parallel, in a cost-efficient manner, without running into memory limitations.
  • AWS Lambda: The Step Functions state machine uses Lambda functions to implement the Task states. Our Lambda functions are implemented in Java 8.
  • Amazon DynamoDB provides us with an easy and flexible way to manage business rules. We specify our rules as Keys. These are key-value pairs stored in a DynamoDB table.
  • Amazon RDS: Our ETL pipelines consume source data from our RDS MySQL database.
  • Amazon Redshift: We use Amazon Redshift for reporting purposes because it integrates with our BI tools. Currently we are using Tableau for reporting which integrates well with Amazon Redshift.
  • Amazon S3: We store our raw input files and intermediate results in S3 buckets.
  • Amazon CloudWatch Events: Our users expect results at a specific time. We use CloudWatch Events to trigger Step Functions on an automated schedule.

Solution architecture

This solution uses a declarative approach to defining business transformation rules that are applied by the underlying Step Functions state machine as data moves from RDS to Amazon Redshift. An S3 bucket is used to store intermediate results. A CloudWatch Event rule triggers the Step Functions state machine on a schedule. The following diagram illustrates our architecture:

Here are more details for the above diagram:

  1. A rule in CloudWatch Events triggers the state machine execution on an automated schedule.
  2. The state machine invokes the first Lambda function.
  3. The Lambda function deletes all existing records in Amazon Redshift. Depending on the dataset, the Lambda function can create a new table in Amazon Redshift to hold the data.
  4. The same Lambda function then retrieves Keys from a DynamoDB table. Keys represent specific marketing campaigns or seasons and map to specific records in RDS.
  5. The state machine executes the second Lambda function using the Keys from DynamoDB.
  6. The second Lambda function retrieves the referenced dataset from RDS. The records retrieved represent the entire dataset needed for a specific marketing campaign.
  7. The second Lambda function executes in parallel for each Key retrieved from DynamoDB and stores the output in CSV format temporarily in S3.
  8. Finally, the Lambda function uploads the data into Amazon Redshift.

To understand the above data processing workflow, take a closer look at the Step Functions state machine for this example.

We walk you through the state machine in more detail in the following sections.

Walkthrough

To get started, you need to:

  • Create a schedule in CloudWatch Events
  • Specify conditions for RDS data extracts
  • Create Amazon Redshift input files
  • Load data into Amazon Redshift

Step 1: Create a schedule in CloudWatch Events
Create rules in CloudWatch Events to trigger the Step Functions state machine on an automated schedule. The following is an example cron expression to automate your schedule:
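
A CloudWatch Events schedule expression along these lines produces that schedule; the exact rule configuration used in the original system is an assumption here:

cron(0 3,14 * * ? *)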

In this example, the cron expression invokes the Step Functions state machine at 3:00am and 2:00pm (UTC) every day.

Step 2: Specify conditions for RDS data extracts
We use DynamoDB to store Keys that determine which rows of data to extract from our RDS MySQL database. An example Key is MCS2017, which stands for Marketing Campaign Spring 2017. Each campaign has a specific start and end date and the corresponding dataset is stored in RDS MySQL. A record in RDS contains about 600 columns, and each Key can represent up to 20K records.

A given day can have multiple campaigns with different start and end dates running simultaneously. In the following example DynamoDB item, three campaigns are specified for the given date.
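
To make the shape of such an item concrete, here is a purely illustrative example; the attribute names, and the campaign codes other than MCS2017, are assumptions rather than the actual schema:

{
  "CurrentDate": "2017-12-05",
  "Keys": ["MCS2017", "MCF2017", "HOL2017"]
}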

The state machine example shown above uses Keys 31, 32, and 33 in the first ChoiceState and Keys 21 and 22 in the second ChoiceState. These keys represent marketing campaigns for a given day. For example, on Monday, there are only two campaigns requested. The ChoiceState with Keys 21 and 22 is executed. If three campaigns are requested on Tuesday, for example, then ChoiceState with Keys 31, 32, and 33 is executed. MCS2017 can be represented by Key 21 and Key 33 on Monday and Tuesday, respectively. This approach gives us the flexibility to add or remove campaigns dynamically.

Step 3: Create Amazon Redshift input files
When the state machine begins execution, the first Lambda function is invoked as the resource for FirstState, represented in the Step Functions state machine as follows:

"Comment": ” AWS Amazon States Language.", 
  "StartAt": "FirstState",
 
"States": { 
  "FirstState": {
   
"Type": "Task",
   
"Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Start",
    "Next": "ChoiceState" 
  } 

As described in the solution architecture, the purpose of this Lambda function is to delete existing data in Amazon Redshift and retrieve keys from DynamoDB. In our use case, we found that deleting existing records was more efficient and less time-consuming than finding the delta and updating existing records. On average, an Amazon Redshift table can contain about 36 million cells, which translates to roughly 65K records. The following is the code snippet for the first Lambda function in Java 8:

public class LambdaFunctionHandler implements RequestHandler<Map<String,Object>, Map<String,String>> {

    Map<String, String> keys = new HashMap<>();

    public Map<String, String> handleRequest(Map<String, Object> input, Context context) {
        Properties config = getConfig();
        // 1. Clean the Amazon Redshift table
        new RedshiftDataService(config).cleaningTable();
        // 2. Read the current keys from DynamoDB
        List<String> keyList = new DynamoDBDataService(config).getCurrentKeys();
        for (int i = 0; i < keyList.size(); i++) {
            keys.put("key" + (i + 1), keyList.get(i));
        }
        // 3. Return the key values plus the key count ("keyT")
        keys.put("keyT", String.valueOf(keyList.size()));
        return keys;
    }
}

The following JSON represents ChoiceState.

"ChoiceState": {
   "Type" : "Choice",
   "Choices": [ 
   {

      "Variable": "$.keyT",
     "StringEquals": "3",
     "Next": "CurrentThreeKeys" 
   }, 
   {

     "Variable": "$.keyT",
    "StringEquals": "2",
    "Next": "CurrentTwooKeys" 
   } 
 ], 
 "Default": "DefaultState"
}

The variable $.keyT represents the number of keys retrieved from DynamoDB. This variable determines which of the parallel branches should be executed. At the time of publication, Step Functions does not support dynamic parallel state. Therefore, choices under ChoiceState are manually created and assigned hardcoded StringEquals values. These values represent the number of parallel executions for the second Lambda function.

For example, if $.keyT equals 3, the second Lambda function is executed three times in parallel with keys $key1, $key2, and $key3 retrieved from DynamoDB. Similarly, if $.keyT equals 2, the second Lambda function is executed twice in parallel. The following JSON represents this parallel execution:

"CurrentThreeKeys": { 
  "Type": "Parallel",
  "Next": "NextState",
  "Branches": [ 
  {

     "StartAt": “key31",
    "States": { 
       “key31": {

          "Type": "Task",
        "InputPath": "$.key1",
        "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
        "End": true 
       } 
    } 
  }, 
  {

     "StartAt": “key32",
    "States": { 
     “key32": {

        "Type": "Task",
       "InputPath": "$.key2",
         "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
       "End": true 
      } 
     } 
   }, 
   {

      "StartAt": “key33",
       "States": { 
          “key33": {

                "Type": "Task",
             "InputPath": "$.key3",
             "Resource": "arn:aws:lambda:xx-xxxx-x:XXXXXXXXXXXX:function:Execution",
           "End": true 
       } 
     } 
    } 
  ] 
} 

Step 4: Load data into Amazon Redshift
The second Lambda function in the state machine extracts the records from RDS associated with the keys retrieved from DynamoDB. It processes the data and then loads it into an Amazon Redshift table. The following is a code snippet for the second Lambda function in Java 8.

public class LambdaFunctionHandler implements RequestHandler<String, String> {

    public static String key = null;

    public String handleRequest(String input, Context context) {
        key = input;
        // 1. Get the basic configuration for the next classes and the S3 client
        Properties config = getConfig();
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // 2. Export query results from RDS into the S3 bucket
        new RdsDataService(config).exportDataToS3(s3, key);
        // 3. Import query results from the S3 bucket into Amazon Redshift
        new RedshiftDataService(config).importDataFromS3(s3, key);
        System.out.println(input);
        return "SUCCESS";
    }
}

After the data is loaded into Amazon Redshift, end users can visualize it using their preferred business intelligence tools.

Lessons learned

  • At the time of publication, the 1.5-GB memory hard limit for Lambda functions was inadequate for processing our complex workload. Step Functions gave us the flexibility to chunk our large datasets and process them in parallel, saving on costs and time.
  • In our previous implementation, we assigned each key a dedicated Lambda function along with CloudWatch rules for schedule automation. This approach proved to be inefficient and quickly became an operational burden. Previously, we processed each key sequentially, with each key adding about five minutes to the overall processing time. For example, processing three keys meant that the total processing time was three times longer. With Step Functions, the entire state machine executes in about five minutes.
  • Using DynamoDB with Step Functions gave us the flexibility to manage keys efficiently. In our previous implementations, keys were hardcoded in Lambda functions, which became difficult to manage due to frequent updates. DynamoDB is a great way to store dynamic data that changes frequently, and it works perfectly with our serverless architectures.

Conclusion

With Step Functions, we were able to fully automate the frequent configuration updates to our dataset resulting in significant cost savings, reduced risk to data errors due to system downtime, and more time for us to focus on new product development rather than support related issues. We hope that you have found the information useful and that it can serve as a jump-start to building your own ETL processes on AWS with managed AWS services.

For more information about how Step Functions makes it easy to coordinate the components of distributed applications and microservices in any workflow, see the use case examples and then build your first state machine in under five minutes in the Step Functions console.

If you have questions or suggestions, please comment below.

AWS Contributes to Milestone 1.0 Release and Adds Model Serving Capability for Apache MXNet

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-contributes-to-milestone-1-0-release-and-adds-model-serving-capability-for-apache-mxnet/

Post by Dr. Matt Wood

Today AWS announced contributions to the milestone 1.0 release of the Apache MXNet deep learning engine including the introduction of a new model-serving capability for MXNet. The new capabilities in MXNet provide the following benefits to users:

1) MXNet is easier to use: The model server for MXNet is a new capability introduced by AWS, and it packages, runs, and serves deep learning models in seconds with just a few lines of code, making them accessible over the internet via an API endpoint and thus easy to integrate into applications. The 1.0 release also includes an advanced indexing capability that enables users to perform matrix operations in a more intuitive manner.

  • Model Serving enables setup of an API endpoint for prediction: It saves developers time and effort by condensing the task of setting up an API endpoint for running and integrating prediction functionality into an application to just a few lines of code. It bridges the gap between Python-based deep learning frameworks and production systems through a Docker container-based deployment model.
  • Advanced indexing for array operations in MXNet: It is now more intuitive for developers to leverage the powerful array operations in MXNet. They can use the advanced indexing capability by leveraging existing knowledge of NumPy/SciPy arrays. For example, it supports MXNet NDArray and NumPy ndarray as an index, e.g. a[mx.nd.array([1,2], dtype='int32')] (see the short sketch after this list).
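
The following is a minimal sketch of that advanced indexing behavior, assuming MXNet 1.0 or later is installed; the array values are arbitrary:

import mxnet as mx
import numpy as np

a = mx.nd.array([[1, 2], [3, 4], [5, 6]])

# Index with an MXNet NDArray ...
rows = a[mx.nd.array([1, 2], dtype='int32')]

# ... or with a NumPy ndarray
same_rows = a[np.array([1, 2])]

print(rows.asnumpy())       # rows 1 and 2: [[3. 4.], [5. 6.]]
print(same_rows.asnumpy())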

2) MXNet is faster: The 1.0 release includes implementation of cutting-edge features that optimize the performance of training and inference. Gradient compression enables users to train models up to five times faster by reducing communication bandwidth between compute nodes without loss in convergence rate or accuracy. For speech recognition acoustic modeling like the Alexa voice, this feature can reduce network bandwidth by up to three orders of magnitude during training. With the support of NVIDIA Collective Communication Library (NCCL), users can train a model 20% faster on multi-GPU systems.

  • Optimize network bandwidth with gradient compression: In distributed training, each machine must communicate frequently with others to update the weight-vectors and thereby collectively build a single model, leading to high network traffic. The gradient compression algorithm enables users to train models up to five times faster by compressing the model changes communicated by each instance.
  • Optimize the training performance by taking advantage of NCCL: NCCL implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs. NCCL provides communication routines that are optimized to achieve high bandwidth over the interconnect between multiple GPUs. MXNet supports NCCL to train models about 20% faster on multi-GPU systems.

3) MXNet provides easy interoperability: MXNet now includes a tool for converting neural network code written with the Caffe framework to MXNet code, making it easier for users to take advantage of MXNet’s scalability and performance.

  • Migrate Caffe models to MXNet: It is now possible to easily migrate Caffe code to MXNet, using the new source code translation tool for converting Caffe code to MXNet code.

MXNet has helped developers and researchers make progress with everything from language translation to autonomous vehicles and behavioral biometric security. We are excited to see the broad base of users that are building production artificial intelligence applications powered by neural network models developed and trained with MXNet. For example, the autonomous driving company TuSimple recently piloted a self-driving truck on a 200-mile journey from Yuma, Arizona to San Diego, California using MXNet. This release also includes a full-featured and performance optimized version of the Gluon programming interface. The ease of use associated with it, combined with the extensive set of tutorials, has led to significant adoption among developers new to deep learning. The flexibility of the interface has driven interest within the research community, especially in the natural language processing domain.

Getting started with MXNet
Getting started with MXNet is simple. To learn more about the Gluon interface and deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.

To get started with the Model Server for Apache MXNet, install the library with the following command:

$ pip install mxnet-model-server

The Model Server library has a Model Zoo with 10 pre-trained deep learning models, including the SqueezeNet 1.1 object classification model. You can start serving the SqueezeNet model with just the following command:

$ mxnet-model-server \
  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model \
  --service dms/model_service/mxnet_vision_service.py

Learn more about the Model Server and view the source code, reference examples, and tutorials here: https://github.com/awslabs/mxnet-model-server/

-Dr. Matt Wood

Glenn’s Take on re:Invent 2017 – Part 3

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-2017-part-3/

Glenn Gore here, Chief Architect for AWS. I was in Las Vegas last week — with 43K others — for re:Invent 2017. I checked in to the Architecture blog here and here with my take on what was interesting about some of the bigger announcements from a cloud-architecture perspective.

In the excitement of so many new services being launched, we sometimes overlook feature updates that, while perhaps not as exciting as Amazon DeepLens, have significant impact on how you architect and develop solutions on AWS.

Amazon DynamoDB is used by more than 100,000 customers around the world, handling over a trillion requests every day. From the start, DynamoDB has offered high availability by natively spanning multiple Availability Zones within an AWS Region. As more customers started building and deploying truly-global applications, there was a need to replicate a DynamoDB table to multiple AWS Regions, allowing for read/write operations to occur in any region where the table was replicated. This update is important for providing a globally-consistent view of information — as users may transition from one region to another — or for providing additional levels of availability, allowing for failover between AWS Regions without loss of information.

There are some interesting concurrency-design aspects you need to be aware of and ensure you can handle correctly. For example, we support the “last writer wins” reconciliation where eventual consistency is being used and an application updates the same item in different AWS Regions at the same time. If you require strongly-consistent read/writes then you must perform all of your read/writes in the same AWS Region. The details behind this can be found in the DynamoDB documentation. Providing a globally-distributed, replicated DynamoDB table simplifies many different use cases and allows for the logic of replication, which may have been pushed up into the application layers to be simplified back down into the data layer.

The other big update for DynamoDB is that you can now back up your DynamoDB table on demand with no impact to performance. One of the features I really like is that when you trigger a backup, it is available instantly, regardless of the size of the table. Behind the scenes, we use snapshots and change logs to ensure a consistent backup. While backup is instant, restoring the table could take some time depending on its size and ranges — from minutes to hours for very large tables.
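
As a rough sketch of how this looks in practice, an on-demand backup can be created and listed from the AWS CLI; the table and backup names below are hypothetical:

aws dynamodb create-backup --table-name Orders --backup-name Orders-2017-12-01
aws dynamodb list-backups --table-name Orders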

This feature is super important for those of you who work in regulated industries that often have strict requirements around data retention and backups of data, which sometimes limited the use of DynamoDB or required complex workarounds to implement some sort of backup feature in the past. This often incurred significant, additional costs due to increased read transactions on their DynamoDB tables.

Amazon Simple Storage Service (Amazon S3) was our first-released AWS service over 11 years ago, and it proved the simplicity and scalability of true API-driven architectures in the cloud. Today, Amazon S3 stores trillions of objects, with transactional requests per second reaching into the millions! Dealing with data as objects opened up an incredibly diverse array of use cases ranging from libraries of static images, game binary downloads, and application log data, to massive data lakes used for big data analytics and business intelligence. With Amazon S3, when you accessed your data in an object, you effectively had to write/read the object as a whole or use the range feature to retrieve a part of the object — if possible — in your individual use case.

Now, with Amazon S3 Select, you can use an SQL-like query language that works with delimited text and JSON files, including GZIP-compressed files. We don't support encryption during the preview of Amazon S3 Select.
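
As a rough sketch of what such a query can look like with the boto3 SDK — the bucket, key, and column names here are hypothetical, and the serialization options will vary with your data:

import boto3

s3 = boto3.client('s3')

resp = s3.select_object_content(
    Bucket='my-example-bucket',
    Key='logs/2017/12/events.csv.gz',
    ExpressionType='SQL',
    Expression="SELECT s.user_id, s.status FROM s3object s WHERE s.status = 'ERROR'",
    InputSerialization={'CSV': {'FileHeaderInfo': 'USE'}, 'CompressionType': 'GZIP'},
    OutputSerialization={'CSV': {}},
)

# The response is an event stream; only the Records events carry result rows
for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'))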

Amazon S3 Select provides two major benefits:

  • Faster access
  • Lower running costs

Serverless Lambda functions, where every millisecond matters when you are being charged, will benefit greatly from Amazon S3 Select as data retrieval and processing of your Lambda function will experience significant speedups and cost reductions. For example, we have seen 2x speed improvement and 80% cost reduction with the Serverless MapReduce code.

Other AWS services such as Amazon Athena, Amazon Redshift, and Amazon EMR will support Amazon S3 Select as well as partner offerings including Cloudera and Hortonworks. If you are using Amazon Glacier for longer-term data archival, you will be able to use Amazon Glacier Select to retrieve a subset of your content from within Amazon Glacier.

As the volume of data that can be stored within Amazon S3 and Amazon Glacier continues to scale on a daily basis, we will continue to innovate and develop improved and optimized services that will allow you to work with these magnificently-large data sets while reducing your costs (retrieval and processing). I believe this will also allow you to simplify the transformation and storage of incoming data into Amazon S3 in basic, semi-structured formats as a single copy vs. some of the duplication and reformatting of data sometimes required to do upfront optimizations for downstream processing. Amazon S3 Select largely removes the need for this upfront optimization and instead allows you to store data once and process it based on your individual Amazon S3 Select query per application or transaction need.

Thanks for reading!

Glenn contemplating why CSV format is still relevant in 2017 (Italy).

GoMovies/123Movies Launches Anime Streaming Site

Post Syndicated from Ernesto original https://torrentfreak.com/gomovies123movies-launches-anime-streaming-site-171204/

Pirate video streaming sites are booming. Their relative ease of use through on-demand viewing makes them a viable alternative to P2P file-sharing, which traditionally dominated the piracy arena.

The popular movie streaming site GoMovies, formerly known as 123movies, is one of the most-used streaming sites. Despite the rebranding and several domain changes, it has built a steady base of millions of users over the past year and a half.

And it’s not done yet, we learn today.

The site, currently operating from the Gostream.is domain name, recently launched a new spinoff targeting anime fans. Animehub.to is currently promoted on GoMovies and the site’s operators aim to turn it into the leading streaming site for anime content.

Animehub.to

Someone connected to GoMovies told us that they’ve received a lot of requests from users to add anime content. Anime has traditionally been a large niche on file-sharing sites and the same is true on streaming platforms.

Technically speaking, GoMovies could have easily filled up the original site with anime content, but the owners prefer a different outlet.

With a separate anime site, they hope to draw in more visitors, TorrentFreak was told by an insider. For one, this makes it possible to rank better in search engines. It also allows the operators to cater specifically to the anime audience, with anime-specific categories and release schedules.

Anime copyright holders will not be pleased with the new initiative, that’s for sure, but GoMovies is not new to legal pressure.

Earlier this year the US Ambassador to Vietnam called on the local Government to criminally prosecute people behind 123movies, the previous iteration of the site. In addition, the MPAA reported the site to the US Government in its recent overview of notorious pirate sites.

Pressure or not, it appears that GoMovies has no intention of slowing down or changing its course, although we’ve heard that yet another rebranding is on the horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Could a Single Copyright Complaint Kill Your Domain?

Post Syndicated from Andy original https://torrentfreak.com/could-a-single-copyright-complaint-kill-your-domain-171203/

It goes without saying that domain names are a crucial part of any site’s infrastructure. Without domains, sites aren’t easily findable and when things go wrong, the majority of web users could be forgiven for thinking that they no longer exist.

That was the case last week when Canada-based mashup site Sowndhaus suddenly found that its domain had been rendered completely useless. As previously reported, the site’s domain was suspended by UK-based registrar DomainBox after it received a copyright complaint from the IFPI.

There are a number of elements to this story, not least that the site’s operators believe that their project is entirely legal.

“We are a few like-minded folks from the mashup community that were tired of doing the host dance – new sites welcome us with open arms until record industry pressure becomes too much and they mass delete and ban us,” a member of the Sowndhaus team informs TF.

“After every mass deletion there are a wave of producers that just retire and their music is lost forever. We decided to make a more permanent home for ourselves and Canada’s Copyright Modernization Act gave us the opportunity to do it legally.
We just want a small quiet corner of the internet where we can make music without being criminalized. It seems insane that I even have to say that.”

But while these are all valid concerns for the Sowndhaus community, there is a bigger picture here. There is absolutely no question that sites like YouTube and Soundcloud host huge libraries of mashups, yet somehow they hang on to their domains. Why would DomainBox take such drastic action? Is the site a real menace?

“The IFPI have sent a few standard DMCA takedown notices [to Sowndhaus, indirectly], each about a specific track or tracks on our server, asking us to remove them and any infringing activity. Every track complained about has been transformative, either a mashup or a remix and in a couple of cases cover versions,” the team explains.

But in all cases, it appears that IFPI and its agents didn’t take the time to complain to the site first. They instead went for the site’s infrastructure.

“[IFPI] have never contacted us directly, even though we have a ‘report copyright abuse’ feature on our site and a dedicated copyright email address. We’ve only received forwarded emails from our host and domain registrar,” the site says.

Sowndhaus believes that the event that led to the domain suspension was caused by a support ticket raised by the “RiskIQ Incident Response Team”, who appear to have been working on behalf of IFPI.

“We were told by DomainBox…’Please remove the unlawful content from your website, or the domain will be suspended. Please reply within the next 5 working days to ensure the request was actioned’,” Sowndhaus says.

But they weren’t given five days, or even one. DomainBox chose to suspend the Sowndhaus.com domain name immediately, rendering the site inaccessible and without even giving the site a chance to respond.

“They didn’t give us an option to appeal the decision. They just took the IFPI’s word that the files were unlawful and must be removed,” the site informs us.

Intrigued at why DomainBox took the nuclear option, TorrentFreak sent several emails to the company but each time they went unanswered. We also sent emails to Mesh Digital Ltd, DomainBox’s operator, but they were given the same treatment.

We wanted to know on what grounds the registrar suspended the domain but perhaps more importantly, we wanted to know if the company is as aggressive as this with its other customers.

To that end we posed a question: If DomainBox had been entrusted with the domains of YouTube or Soundcloud, would they have acted in the same manner? We can’t put words in their mouth but it seems likely that someone in the company would step in to avoid a PR disaster on that scale.

Of course, both YouTube and Soundcloud comply with the law by taking down content when it infringes someone’s rights. It’s a position held by Sowndhaus too, even though they do not operate in the United States.

“We comply fully with the Copyright Act (Canada) and have our own policy of removing any genuinely infringing content,” the site says, adding that users who infringe are banned from the platform.

While there has never been any suggestion that IFPI or its agents asked for Sowndhaus’ domain to be suspended, it’s clear that DomainBox made a decision to do just that. In some cases that might have been warranted, but registrars should definitely aim for a clear, transparent and fair process, so that the facts can be reviewed and appropriate action taken.

It’s something for people to keep in mind when they register a domain in future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Glenn’s Take on re:Invent Part 2

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-part-2/

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We’ve got a lot of exciting announcements this week. I’m going to check in to the Architecture blog with my take on what’s interesting about some of the announcements from a cloud-architecture perspective. My first post can be found here.

The Media and Entertainment industry has been a rapid adopter of AWS due to the scale, reliability, and low costs of our services. This has enabled customers to create new, online, digital experiences for their viewers ranging from broadcast to streaming to Over-the-Top (OTT) services that can be a combination of live, scheduled, or ad-hoc viewing, while supporting devices ranging from high-def TVs to mobile devices. Creating an end-to-end video service requires many different components often sourced from different vendors with different licensing models, which creates a complex architecture and a complex environment to support operationally.

AWS Media Services
Based on customer feedback, we have developed AWS Media Services to help simplify distribution of video content. AWS Media Services is comprised of five individual services that can either be used together to provide an end-to-end service or individually to work within existing deployments: AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Elemental MediaPackage, AWS Elemental MediaStore and AWS Elemental MediaTailor. These services can help you with everything from storing content safely and durably to setting up a live-streaming event in minutes without having to be concerned about the underlying infrastructure and scalability of the stream itself.

In my role, I participate in many AWS and industry events and often work with the production and event teams that put these shows together. With all the logistical tasks they have to deal with, the biggest question is often: “Will the live stream work?” Compounding this fear is the reality that, as users, we are also quick to jump on social media and make noise when a live stream drops while we are following along remotely. Worse is when I see event organizers actively selecting not to live stream content because of the risk of failure and exposure — leading them to decide to take the safe option and not stream at all.

With AWS Media Services addressing many of the issues around putting together a high-quality media service, live streaming, and providing access to a library of content through a variety of mechanisms, I can’t wait to see more event teams use live streaming without the concern and worry I’ve seen in the past. I am excited for what this also means for non-media companies, as video becomes an increasingly common way of sharing information and adding a more personalized touch to internally- and externally-facing content.

AWS Media Services will allow you to focus more on the content and not worry about the platform. Awesome!

Amazon Neptune
As a civilization, we have been developing new ways to record and store information and model the relationships between sets of information for more than a thousand years. Government census data, tax records, births, deaths, and marriages were all recorded on medium ranging from knotted cords in the Inca civilization, clay tablets in ancient Babylon, to written texts in Western Europe during the late Middle Ages.

One of the first challenges of computing was figuring out how to store and work with vast amounts of information in a programmatic way, especially as the volume of information was increasing at a faster rate than ever before. We have seen different generations of how to organize this information in some form of database, ranging from flat files to the Information Management System (IMS) used in the 1960s for the Apollo space program, to the rise of the relational database management system (RDBMS) in the 1970s. These innovations drove a lot of subsequent innovations in information management and application development as we were able to move from thousands of records to millions and billions.

Today, as architects and developers, we have a vast variety of database technologies to select from, which have different characteristics that are optimized for different use cases:

  • Relational databases are well understood after decades of use in the majority of companies who required a database to store information. Amazon Relational Database (Amazon RDS) supports many popular relational database engines such as MySQL, Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle. We have even brought the traditional RDBMS into the cloud world through Amazon Aurora, which provides MySQL and PostgreSQL support with the performance and reliability of commercial-grade databases at 1/10th the cost.
  • Non-relational databases (NoSQL) provided a simpler method of storing and retrieving information that was often faster and more scalable than traditional RDBMS technology. The concept of non-relational databases has existed since the 1960s but really took off in the early 2000s with the rise of web-based applications that required performance and scalability that relational databases struggled with at the time. AWS published this Dynamo whitepaper in 2007, with DynamoDB launching as a service in 2012. DynamoDB has quickly become one of the critical design elements for many of our customers who are building highly-scalable applications on AWS. We continue to innovate with DynamoDB, and this week launched global tables and on-demand backup at re:Invent 2017. DynamoDB excels in a variety of use cases, such as tracking of session information for popular websites, shopping cart information on e-commerce sites, and keeping track of gamers’ high scores in mobile gaming applications, for example.
  • Graph databases focus on the relationship between data items in the store. With a graph database, we work with nodes, edges, and properties to represent data, relationships, and information. Graph databases are designed to make it easy and fast to traverse and retrieve complex hierarchical data models. Graph databases share some concepts from the NoSQL family of databases such as key-value pairs (properties) and the use of a non-SQL query language such as Gremlin (a short query sketch follows this list). Graph databases are commonly used for social networking, recommendation engines, fraud detection, and knowledge graphs. We released Amazon Neptune to help simplify the provisioning and management of graph databases as we believe that graph databases are going to enable the next generation of smart applications.
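
As a minimal sketch, a Gremlin traversal issued from Python with the gremlinpython client might look like the following; the endpoint and the person/knows data model are hypothetical:

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Hypothetical Neptune cluster endpoint
conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = Graph().traversal().withRemote(conn)

# Find the names of people that 'alice' knows
names = g.V().has('person', 'name', 'alice').out('knows').values('name').toList()
print(names)
conn.close()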

A common use case I am hearing every week as I talk to customers is how to incorporate chatbots within their organizations. Amazon Lex and Amazon Polly have made it easy for customers to experiment and build chatbots for a wide range of scenarios, but one of the missing pieces of the puzzle was how to model decision trees and knowledge graphs so the chatbot could guide the conversation in an intelligent manner.

Graph databases are ideal for this particular use case, and having Amazon Neptune simplifies the deployment of a graph database while providing high performance, scalability, availability, and durability as a managed service. Security of your graph database is critical. To help ensure this, you can run Amazon Neptune within your Amazon Virtual Private Cloud (Amazon VPC) and encrypt your data at rest using AWS Key Management Service (AWS KMS). Neptune also supports AWS Identity and Access Management (AWS IAM) to help further protect and restrict access.

Our customers now have the choice of many different database technologies to ensure that they can optimize each application and service for their specific needs. Just as DynamoDB has unlocked and enabled many new workloads that weren’t possible in relational databases, I can’t wait to see what new innovations and capabilities are enabled from graph databases as they become easier to use through Amazon Neptune.

Look for more on DynamoDB and Amazon S3 from me on Monday.

 

Glenn at Tour de Mont Blanc

Epic Games Settles First Copyright Case Against Fortnite Cheater

Post Syndicated from Ernesto original https://torrentfreak.com/epic-games-settles-first-copyright-case-against-fortnite-cheater-171201/

Frustrated by thousands of cheaters who wreak havoc in Fortnite’s “Battle Royale,” game publisher Epic Games decided to take several of them to court.

One of the defendants is Minnesota resident Charles Vraspir, a.k.a. “Joreallean.”

The game publisher accused him of copyright infringement and breach of contract, by injecting unauthorized computer code in order to cheat.

According to Epic’s allegations, Vraspir was banned at least nine times but registered new accounts to continue his cheating. In addition, he was also suspected of having written code for the cheats.

“Defendant’s cheating, and his inducing and enabling of others to cheat, is ruining the game playing experience of players who do not cheat,” Epic games wrote.

While the complaint included all the elements for an extensive legal battle, both sides chose to resolve the case without much of a fight. Yesterday, they informed the court that a settlement had been reached.

Epic Games’ counsel asked the court to enter the agreement as well as a permanent injunction, which both have agreed on.

The proposed injunction, signed today, forbids Vraspir from carrying out any copyright infringement in the future and requires him to destroy all cheats and never cheat again.

Among other things, he is prohibited from “creating, writing, developing, advertising, promoting, and/or distributing anything that infringes Epic’s works now or hereafter protected by any of Epic’s copyrights.”

While there is no mention of a settlement fee or fine, Vraspir will have to pay $5,000 if he breaches the agreement.

From the injunction

Based on the swift settlement, it can be assumed that Epic Games is not aiming to bankrupt the cheaters. Instead, it’s likely that the company wants to set an example and deter others from cheating in the future.

In addition to the settlement, Epic Games also responded to the mother of the 14-year-old cheater who was sued in a separate case. After we first covered the news last week it was quickly picked up by mainstream media, and it hasn’t gone unnoticed by the game publisher either.

The mother accused Epic of taking a minor to court and making his personal info known to the public.

In a response this week, the company notes that it had no idea of the age of the defendant when it filed the complaint. In addition, Epic notes that by handing over his full name and address in the unredacted letter, she exposed her son.

The rules dictate that filings mentioning an individual known to be a minor should use the minor’s initials only, not the full name as the mother did. While the mother may have waived this protection with her letter, Epic says it will stick to the initials going forward.

“Although there is an argument that by submitting the Letter to the Court containing Defendant’s name and address, Defendant’s mother waived this protection […] we plan to include only Defendant’s initials or redact his name entirely in all future filings with the Court, including this letter.”

Given the quick settlement in the Vraspir case, it’s likely that the case against the 14-year-old boy will also be resolved without much additional damage. That is, if both sides can come to an agreement.

A copy of the stipulation and injunction is available here (pdf). The reply to the mother can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Stretch for PCs and Macs, and a Raspbian update

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/stretch-pcs-macs-raspbian-update/

Today, we are launching the first Debian Stretch release of the Raspberry Pi Desktop for PCs and Macs, and we’re also releasing the latest version of Raspbian Stretch for your Pi.

Raspberry Pi Desktop Stretch splash screen

For PCs and Macs

When we released our custom desktop environment on Debian for PCs and Macs last year, we were slightly taken aback by how popular it turned out to be. We really only created it as a result of one of those “Wouldn’t it be cool if…” conversations we sometimes have in the office, so we were delighted by the Pi community’s reaction.

Seeing how keen people were on the x86 version, we decided that we were going to try to keep releasing it alongside Raspbian, with the ultimate aim being to make simultaneous releases of both. This proved to be tricky, particularly with the move from the Jessie version of Debian to the Stretch version this year. However, we have now finished the job of porting all the custom code in Raspbian Stretch to Debian, and so the first Debian Stretch release of the Raspberry Pi Desktop for your PC or Mac is available from today.

The new Stretch releases

As with the Jessie release, you can either run this as a live image from a DVD, USB stick, or SD card or install it as the native operating system on the hard drive of an old laptop or desktop computer. Please note that installing this software will erase anything else on the hard drive — do not install this over a machine running Windows or macOS that you still need to use for its original purpose! It is, however, safe to boot a live image on such a machine, since your hard drive will not be touched by this.

We’re also pleased to announce that we are releasing the latest version of Raspbian Stretch for your Pi today. The Pi and PC versions are largely identical: as before, there are a few applications (such as Mathematica) which are exclusive to the Pi, but the user interface, desktop, and most applications will be exactly the same.

For Raspbian, this new release is mostly bug fixes and tweaks over the previous Stretch release, but there are one or two changes you might notice.

File manager

The file manager included as part of the LXDE desktop (on which our desktop is based) is a program called PCManFM, and it’s very feature-rich; there’s not much you can’t do in it. However, having used it for a few years, we felt that it was perhaps more complex than it needed to be — the sheer number of menu options and choices made some common operations awkward. So to try to make file management easier, we have implemented a cut-down mode for the file manager.

Raspberry Pi Desktop Stretch - file manager

Most of the changes are to do with the menus. We’ve removed a lot of options that most people are unlikely to change, and moved some other options into the Preferences screen rather than the menus. The two most common settings people tend to change — how icons are displayed and sorted — are now options on the toolbar and in a top-level menu rather than hidden away in submenus.

The sidebar now only shows a single hierarchical view of the file system, and we’ve tidied the toolbar and updated the icons to make them match our house style. We’ve removed the option for a tabbed interface, and we’ve stomped a few bugs as well.

One final change was to make it possible to rename a file just by clicking on its icon to highlight it, and then clicking on its name. This is the way renaming works on both Windows and macOS, and it’s always seemed slightly awkward that Unix desktop environments tend not to support it.

As with most of the other changes we’ve made to the desktop over the last few years, the intention is to make it simpler to use, and to ease the transition from non-Unix environments. But if you really don’t like what we’ve done and long for the old file manager, just untick the box for Display simplified user interface and menus in the Layout page of Preferences, and everything will be back the way it was!

Raspberry Pi Desktop Stretch - preferences GUI

Battery indicator for laptops

One important feature missing from the previous release was an indication of the amount of battery life. Eben runs our desktop on his Mac, and he was becoming slightly irritated by having to keep rebooting into macOS just to check whether his battery was about to die — so fixing this was a priority!

We’ve added a battery status icon to the taskbar; this shows current percentage charge, along with whether the battery is charging, discharging, or connected to the mains. When you hover over the icon with the mouse pointer, a tooltip with more details appears, including the time remaining if the battery can provide this information.

Raspberry Pi Desktop Stretch - battery indicator

While this battery monitor is mainly intended for the PC version, it also supports the first-generation pi-top — to see it, you’ll only need to make sure that I2C is enabled in Configuration. A future release will support the new second-generation pi-top.

New PC applications

We have included a couple of new applications in the PC version. One is called PiServer — this allows you to set up an operating system, such as Raspbian, on the PC which can then be shared by a number of Pi clients networked to it. It is intended to make it easy for classrooms to have multiple Pis all running exactly the same software, and for the teacher to have control over how the software is installed and used. PiServer is quite a clever piece of software, and it’ll be covered in more detail in another blog post in December.

We’ve also added an application which allows you to easily use the GPIO pins of a Pi Zero connected via USB to a PC in applications using Scratch or Python. This makes it possible to run the same physical computing projects on the PC as you do on a Pi! Again, we’ll tell you more in a separate blog post this month.

Both of these applications are included as standard on the PC image, but not on the Raspbian image. You can run them on a Pi if you want — both can be installed from apt.

How to get the new versions

New images for both Raspbian and Debian versions are available from the Downloads page.

It is possible to update existing installations of both Raspbian and Debian versions. For Raspbian, this is easy: just open a terminal window and enter

sudo apt-get update
sudo apt-get dist-upgrade

Updating Raspbian on your Raspberry Pi


It is slightly more complex for the PC version, as the previous release was based around Debian Jessie. You will need to edit the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list, using sudo to do so. In both files, change every occurrence of the word “jessie” to “stretch”. When that’s done, do the following:

sudo apt-get update 
sudo dpkg --force-depends -r libwebkitgtk-3.0-common
sudo apt-get -f install
sudo apt-get dist-upgrade
sudo apt-get install python3-thonny
sudo apt-get install sonic-pi=2.10.0~repack-rpt1+2
sudo apt-get install piserver
sudo apt-get install usbbootgui

At several points during the upgrade process, you will be asked if you want to keep the current version of a configuration file or to install the package maintainer’s version. In every case, keep the existing version, which is the default option. The update may take an hour or so, depending on your network connection.

As with all software updates, there is the possibility that something may go wrong during the process, which could lead to your operating system becoming corrupted. Therefore, we always recommend making a backup first.

Enjoy the new versions, and do let us know any feedback you have in the comments or on the forums!

The post Stretch for PCs and Macs, and a Raspbian update appeared first on Raspberry Pi.