Tag Archives: scene

The AWS Cloud Goes Underground at re:Invent

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/the-aws-cloud-goes-underground-at-reinvent/

As you wander through the AWS re:Invent campus, take a minute to think about your expectations for all of the elements that need to come together…

Starting with the location, my colleagues have chosen the best venues, designed the sessions, picked the speakers, laid out the menu, selected the color schemes, programmed or printed all of the signs, and much more, all with the goal of creating an optimal learning environment for you and tens of thousands of other AWS customers.

However, as is often the case, the part that you can see is just a part of the picture. Behind the scenes, people, processes, plans, and systems come together to put all of this infrastructure into place and to make it run so smoothly that you don’t usually notice it.

Today I would like to tell you about a mission-critical aspect of the re:Invent infrastructure that is actually underground. In addition to providing great Wi-Fi for your phones, tablets, cameras, laptops, and other devices, we need to make sure that a myriad of events, from the live-streamed keynotes to the WorkSpaces-powered hands-on labs, are well-connected to each other and to the Internet. With events running at hotels up and down the Las Vegas Strip, reliable, low-latency connectivity is essential!

Thank You CenturyLink / Level3
Over the years we have been working with the great folks at Level3 to make this happen. They recently became part of CenturyLink and are now the Official Network Sponsor of re:Invent, responsible for the network fiber, circuits, and services that tie the re:Invent campus together.

To make this happen, they set up two miles of dark fiber beneath the Strip, routed to multiple Availability Zones in two separate AWS Regions. The Sands Expo Center is equipped with redundant 10 gigabit connections and the other venues (Aria, MGM, Mirage, and Wynn) are each provisioned for 2 to 10 gigabits, meaning that venues along much of the Strip are enabled for Direct Connect. According to the IT manager at one of the facilities, this may be the largest temporary hybrid network ever configured in Las Vegas.

On the Wi-Fi side, showNets is plugged into the same network; your devices are talking directly to Direct Connect access points (how cool is that?).

Here’s a simplified illustration of how it all fits together:

The CenturyLink team will be onsite at re:Invent and will be tweeting live network stats throughout the week.

I hope you have enjoyed this quick look behind the scenes and beneath the street!

Jeff;

Kodi Addon Dev Says “Show of Force” Will Be Met With Defiance

Post Syndicated from Andy original https://torrentfreak.com/kodi-addon-dev-says-show-force-will-met-defiance-171119/

For many years, the members of the MPAA have flexed their muscles all around the globe, working to prevent people from engaging in online piracy. If the last 17 years’ ‘progress’ is anything to go by, it’s a war that will go on indefinitely.

With Columbia, Disney, Paramount, Twentieth Century Fox, Universal, and Warner on board, the MPAA has historically relied on sheer power to intimidate opponents. That has certainly worked in many large piracy cases but for many peripheral smaller-scale pirates, their presence is largely ignored.

This week, however, several players in the Kodi scene discovered that these giants – and more besides – have the ability to literally turn up at their front door. As reported Thursday, UK-based Kodi addon developer The_Alpha received a hand-delivered cease-and-desist letter from all of the above, accompanied by new faces Netflix, Amazon and Sky TV.

These companies are part of the Alliance for Creativity and Entertainment (ACE), a massive and recently-formed anti-piracy coalition comprised of 30 global entertainment brands. TorrentFreak reached out to The_Alpha for his thoughts on coming under such a dazzling spotlight but perhaps understandably he didn’t want to comment.

The leader of the Ares Project was willing to go on the record, however, after he too received a hand-delivered threat during the week. His decision was to comply and shut down immediately, but TF is informed that others might not be so willing to follow suit.

A Kodi addon developer living in the UK who spoke to us on condition of anonymity told us that most people operating in the scene expected some kind of trouble – just not on this scale.

“Did you see the [company logos] across the top of Alpha’s letter? That’s some serious shit right there. The film companies are no surprise but Amazon delivers my groceries so I don’t expect this shit from them,” he said.

When the ACE partnership was formed earlier this year, it seemed pretty clear that the main drive was towards the pooling of anti-piracy resources to be more effective and efficient. However, it can’t have escaped ACE that such a broad and powerful alliance could also have a profound psychological effect on its adversaries.

“There’s no doubt in my mind that they’re turning up mob-handed to put the shits up people like Alpha and the rest of us,” the developer said. “It’s hardly a fair dust-up is it? What have we got to fight back with, a giro [state benefits]? It’s a show of force, ‘look how important we are’!”

Interestingly, however, the dev told us that it isn’t necessarily the size of the coalition that has him most concerned. What caught his eye was the inclusion of two influential UK-based companies in the alliance.

“Having Sly [a local derogatory nickname for Sky TV] and the Premier League on the letter makes it much more serious to me than seeing Warner or whatever,” he commented.

“I don’t get involved in footie but Sly is everywhere round here and I think it’s something the Brit dev scene might take notice of, even if most say ‘fuck it’ and carry on anyway.”

When questioned whether that’s likely, our source said that while ACE might be able to tackle some of the bigger targets like Ares Project or Colossus, they fundamentally misunderstand how the Kodi scene works.

“If you want a good example of a scattered pirate scene, I give you Kodi. They can bomb the base or whatever but nobody lives there,” he explained.

“There’s some older blokes like me who can do without the stress but a lot of younger coders, builders and YouTubers who thrive on it. They’re used to running around council estates with real-life problems. A faffy letter from some toff in a suit means literally nothing. Like I said, all they have to lose is a giro.”

Whether this is just bravado will remain to be seen, but our earlier discussions with others in the scene indicate a particular weakness in the UK, with many players vulnerable to being found after failing to hide their identities in the past. To a point, our source agrees that this is a problem.

“People are saying that Alpha was found after trying to raise some charity money related to his disabled son but I don’t know for sure and nor does anybody else. What strikes me is that none of us really thought things would get this on top here because all you ever hear about is America this, Canada that, whatever. Does this mean that more of us are getting done in England? You tell me,” he said.

Only time will tell but stamping out the pirate Kodi scene is going to be hard work.

Within hours of several projects disappearing Wednesday and Thursday, YouTube and myriad blogs were being flooded with guides detailing immediate replacements. This ad-hoc network of enthusiasts makes the exchange of information happen at an alarming rate and it’s hard to see how any company – no matter how powerful – will ever be able to keep up.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Ares Kodi Project Calls it Quits After Hollywood Cease & Desist

Post Syndicated from Andy original https://torrentfreak.com/ares-kodi-project-calls-it-quits-after-hollywood-cease-desist-171117/

This week has been particularly bad for those involved in the Kodi addon scene. Following cease-and-desist notices from the MPA-led anti-piracy coalition Alliance for Creativity and Entertainment, several addon developers and repositories shut down.

With Columbia, Disney, Paramount, Twentieth Century Fox, Universal, Warner, Netflix, Amazon and Sky TV all lined up for war, the third-party developers had little choice but to quit. One of those affected was the leader of the hugely popular Ares Project, which quietly disappeared mid-week.

The Ares Wizard was an extremely popular and important piece of software which allowed people to switch Kodi builds, install third-party addons, install popular repositories, change system settings, and carry out backups. It’s installed on huge numbers of machines worldwide but it will soon fall into disrepair.

The mighty Ares Wizard in action

“[This week] I was subject to a hand-delivered notice to cease-and-desist from MPA & ACE,” Ares Project leader Tekto informs TorrentFreak.

“Given the notice, we obviously shut down the repo and wizard as requested.”

The news that Ares Project is done and never coming back will be a huge blow to the community. The project just celebrated its second birthday and has grown exponentially since it first arrived on the scene.

“Ares Project started in Oct 2015. Originally it was to be a tool to set up the video cache on Kodi correctly. However, many ideas were thrown into the pot and it became a wee bit more; such as a wizard to install community provided builds, common addons and few other tweaks and options,” Tekto says.

“For my own part I started blogging earlier that year as part of a longer-term goal to be self-funding. I always disliked seeing begging bowls out to support ‘server’ costs, many of which were cheap £5-10 per month servers that were used to gain £100s in donations.

“The blog, via affiliate links and ads, could and would provide the funds to cover our hosting costs without resorting to begging for money every weekend.”

Intrigued by this first wave of actions by ACE in Europe, TorrentFreak asked for a copy of the MPA/ACE cease-and-desist notice but unfortunately, Tekto flat-out refused. All he would tell us is that he’d agreed not to give out any copies or screenshots and that he was adhering to that 100%.

That only leaves speculation as to what grounds the MPA/ACE cited for closing the project but to be fair, it doesn’t take much thought to find a direct comparison. Earlier this year, in the BREIN v Filmspeler case, the European Court of Justice (ECJ) ruled that selling “fully-loaded” Kodi boxes amounted to illegally communicating copyrighted content to the public.

With that in mind, it doesn’t take much of a leap to see how this ruling could also apply to someone distributing “fully-loaded” Kodi software builds or addons via a website. It had previously been considered a legal gray area, of course, and it was in that space that the Ares team believed it operated. After all, it took ECJ clarification for local courts in the Netherlands to be satisfied with the legal position.

“There was never any question that what we were doing was illegal. We didn’t and never have hosted any content, we always prevented discussions about illegal paid services, and never sold any devices, pre-loaded or otherwise. That used to be enough to occupy the ‘gray’ area which meant we were safe to develop our applications. That changed in 2017 as we were to discover,” Tekto notes.

Up until this week, and apparently oblivious to how the earlier ECJ ruling might affect its operation, the Ares Project had been doing extremely well. In mid-2016, the group moved to its own support forum, which attracted 100,000 signed-up members and 300,000 visitors every month.

“This was quite an achievement in terms of viral marketing but ultimately this would become part of our downfall,” Tekto says.

“The recent innovation of the ‘basket driven’ Ares Portal system seems to have triggered the legal move to shut the project down completely. This simple system gave access to hundreds of add-ons. The system removed the need for builds, blogs and YouTubers – you just shopped on the site for addons and then installed them to your device with a simple 6 digit code.”

While Ares and Tekto still didn’t believe they were doing anything illegal (addons were linked, not hosted) it is now pretty clear to them that the previous gray area has been well and truly closed, at least as far as the MPA/ACE alliance is concerned. And with that in mind, the show is over. Done. Finished.

“We are not criminals or malicious hackers, we weren’t even careful about hiding our identities. You couldn’t meet a more ordinary bunch of folks in truth,” he says.

“There was never any question we would close our doors if what we were doing crossed any boundaries of legality. So with the notice served on us, we are closing our doors and removing all our websites and applications. It’s a sad day in many ways, but nobody wants to be facing court or a potential custodial sentence, for what is essentially a hobby.”

Finally, Tekto says that others like him might want to consider their positions carefully, before they too get a knock at the door. In the meantime, he gives thanks to the project’s supporters, who have remained loyal over the past two years.

“It just leaves me to thank our users for their support and step away from the Kodi scene,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Daphne Caruana Galizia’s Murder and the Security of WhatsApp

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/daphne_caruana_.html

Daphne Caruana Galizia was a Maltese journalist whose anti-corruption investigations exposed powerful people. She was murdered in October by a car bomb.

Galizia used WhatsApp to communicate securely with her sources. Now that she is dead, the Maltese police want to break into her phone or the app, and find out who those sources were.

One journalist reports:

Part of Daphne’s destroyed smart phone was elevated from the scene.

Investigators say that Caruana Galizia had not taken her laptop with her on that particular trip. If she had done so, the forensic experts would have found evidence on the ground.

Her mobile phone is also being examined, as can be seen from her WhatsApp profile, which has registered activity since the murder. But it is understood that the data is safe.

Sources close to the newsroom said that as part of the investigation her sim card has been cloned. This is done with the help of mobile service providers in similar cases. Asked if her WhatsApp messages or any other messages that were stored in her phone will be retrieved, the source said that since the messaging application is encrypted, the messages cannot be seen. Therefore it is unlikely that any data can be retrieved.

I am less optimistic than that reporter. The FBI is providing “specific assistance.” The article doesn’t explain that, but I would not be surprised if they were helping crack the phone.

It will be interesting to see if WhatsApp’s security survives this. My guess is that it depends on how much of the phone was recovered from the bombed car.

EDITED TO ADD (11/7): The court-appointed IT expert on the case has a criminal record in the UK for theft and forgery.

Now Available – Amazon Aurora with PostgreSQL Compatibility

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-amazon-aurora-with-postgresql-compatibility/

Late last year I told you about our plans to add PostgreSQL compatibility to Amazon Aurora. We launched the private beta shortly after that announcement, and followed it up earlier this year with an open preview. We’ve received lots of great feedback during the beta and the preview and have done our best to make sure that the product meets your needs and exceeds your expectations!

Now Generally Available
I am happy to report that Amazon Aurora with PostgreSQL Compatibility is now generally available and that you can use it today in four AWS Regions, with more to follow. It is compatible with PostgreSQL 9.6.3 and scales automatically to support up to 64 TB of storage, with 6-way replication behind the scenes to improve performance and availability.

Just like Amazon Aurora with MySQL compatibility, this edition is fully managed and is very easy to set up and to use. On the performance side, you can expect up to 3x the throughput that you’d get if you ran PostgreSQL on your own (you can read Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases to learn more about how we did this).

You can launch a PostgreSQL-compatible Amazon Aurora instance from the RDS Console by selecting Amazon Aurora as the engine and PostgreSQL-compatible as the edition, and clicking on Next:

Then choose your instance class, single or Multi-AZ deployment (good for dev/test and production, respectively), set the instance name, and the administrator credentials, and click on Next:

You can choose between six instance classes (2 to 64 vCPUs and 15.25 to 488 GiB of memory):

The db.r4 instance class is a new addition to Aurora and to RDS, and gives you an additional size at the top end. The db.r4.16xlarge will give you additional write performance, and may allow you to use a single Aurora database instead of two or more sharded databases.

You can also set many advanced options on the next page, starting with network options such as the VPC and public accessibility:

You can set the cluster name and other database options. Encryption is easy to use and enabled by default; you can use the built-in default master key or choose one of your own:

You can also set failover behavior, the retention period for snapshot backups, and choose to enable collection of detailed (OS-level) metrics via Enhanced Monitoring:

After you have set it up to your liking, click on Launch DB Instance to proceed!
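If you prefer to script these steps rather than use the console, here is a minimal sketch using the AWS SDK for JavaScript (it assumes the aws-sdk package is installed and credentials are configured; the identifiers, password, and region are hypothetical placeholders, not values from this walkthrough):

'use strict';
const AWS = require('aws-sdk');
const rds = new AWS.RDS({ region: 'us-east-1' });

async function launchAuroraPostgres() {
    // Create the cluster itself (shared storage volume, endpoints, replication)
    await rds.createDBCluster({
        DBClusterIdentifier: 'my-aurora-pg-cluster',      // hypothetical name
        Engine: 'aurora-postgresql',
        MasterUsername: 'masteruser',                     // hypothetical credentials
        MasterUserPassword: 'choose-a-strong-password'
    }).promise();

    // Add a DB instance to the cluster; the first instance becomes the writer
    await rds.createDBInstance({
        DBInstanceIdentifier: 'my-aurora-pg-1',
        DBInstanceClass: 'db.r4.large',
        Engine: 'aurora-postgresql',
        DBClusterIdentifier: 'my-aurora-pg-cluster'
    }).promise();
}

launchAuroraPostgres().catch(console.error);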

The new instances (primary and secondary since I specified Multi-AZ) are up and running within minutes:

Each PostgreSQL-compatible instance publishes 44 metrics to CloudWatch automatically:

With enhanced monitoring enabled, each instance collects additional per-instance and per-process metrics. It can be enabled when the instance is launched, or afterward, via Modify Instance. Here are some of the metrics collected when enhanced monitoring is enabled:

Clicking on Manage Graphs lets you choose which metrics are shown:

Per-process metrics are also available:

You can scale your read capacity by creating up to 15 Aurora replicas:

The cluster provides a single reader endpoint that you can access in order to load-balance requests across the replicas:
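As a sketch of how an application might consume that endpoint, here is a minimal read query using the popular node-postgres (pg) client; the endpoint, credentials, and database name below are hypothetical placeholders:

'use strict';
const { Client } = require('pg');

const client = new Client({
    host: 'my-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com', // reader endpoint (hypothetical)
    port: 5432,
    user: 'masteruser',
    password: 'choose-a-strong-password',
    database: 'postgres'
});

// Read-only queries sent to the reader endpoint are balanced across the replicas
client.connect()
    .then(() => client.query('SELECT count(*) AS connections FROM pg_stat_activity'))
    .then((res) => console.log(res.rows))
    .then(() => client.end())
    .catch(console.error);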

Performance Insights
Performance Insights is turned on automatically. This Amazon Aurora feature is wired directly into the database engine and allows you to look deep inside of each query, seeing the database resources that it uses and how they contribute to the overall response time. Here’s the initial view:

I can slice the view by SQL query in order to see how many concurrent copies of each query are running:

There are more views and options than I can fit in this post; to learn more take a look at Using Performance Insights.

Migrating to Amazon Aurora with PostgreSQL Compatibility
AWS Database Migration Service and the Schema Conversion Tool are ready to help you to move data stored in commercial and open-source databases to Amazon Aurora. The Schema Conversion Tool will perform a quick assessment of your database schemas and your code in order to help you to choose between MySQL and PostgreSQL. Our new, limited-time, Free DMS program allows you to use DMS and SCT to migrate to Aurora at no cost, with access to several types of DMS Instances for up to 6 months.

If you are already using PostgreSQL, you will be happy to hear that we support a long list of extensions including PostGIS and dblink.

Available Now
You can use Amazon Aurora with PostgreSQL Compatibility today in the US East (Northern Virginia), EU (Ireland), US West (Oregon), and US East (Ohio) Regions, with others to follow as soon as possible.

Jeff;

Anti-Piracy Group Joins Internet Organization That Controls Top-Level Domain

Post Syndicated from Andy original https://torrentfreak.com/anti-piracy-group-joins-internet-organization-that-controls-top-level-domain-171019/

All around the world, content creators and rightsholders continue to protest against the unauthorized online distribution of copyrighted content.

While pirating end-users obviously share some of the burden, the main emphasis has traditionally been placed on the shuttering of illicit sites, whether torrent, streaming, or hosting based.

Over time, however, sites have become more prevalent and increasingly resilient, leaving the music, movie and publishing industries to play a frustrating game of whac-a-mole. With this in mind, their focus has increasingly shifted towards Internet gatekeepers, including ISPs and bodies with influence over domain availability.

While most of these efforts take place via cooperation or legal action, there’s regularly conflict when Hollywood, for example, wants a particular domain rendered inaccessible or the music industry wants pirates kicked off the Internet.

As a result, there’s nearly always a disconnect, with copyright holders on one side and Internet technology companies worried about mission creep on the other. In Denmark, however, those lines have just been blurred in the most intriguing way possible after an infamous anti-piracy outfit joined an organization with significant control over the Internet in the country.

RettighedsAlliancen (or Rights Alliance as it’s more commonly known) is an anti-piracy group which counts some of the most powerful local and international movie companies among its members. It also operates on behalf of IFPI and by extension, most of the world’s major recording labels.

The group has been involved in dozens of legal processes over the years against file-sharers and file-sharing sites, most recently fighting for and winning ISP blockades against most major pirate portals including The Pirate Bay, RARBG, Torrentz, and many more.

In a somewhat surprising new announcement, the group has revealed it’s become the latest member of the Danish Internet Forum (DIFO), which “works for a secure and accessible Internet” under the top-level .DK domain. Indeed, DIFO has overall responsibility for Danish internet infrastructure.

“For DIFO it is important to have a strong link to the Danish internet community. Therefore, we are very pleased that the Alliance wishes to be part of the association,” DIFO said in a statement.

Rights Alliance will be DIFO’s third new member this year but uniquely it will get the opportunity to represent the interests of more than 100,000 Danish and international rightsholders from inside an influential Internet-focused organization.

Looking at DIFO’s membership, Rights Alliance certainly stands out as unusual. The majority of the members are made up of IT-based organizations, such as the Internet Industry Association, The Association of Open Source Suppliers and DKRegistrar, the industry association for Danish domain registrars.

A meeting around a table with these players and their often conflicting interests is likely to be an experience for all involved. However, all parties seem more than happy with the new partnership.

“We want to help create a more secure internet for companies that invest in doing business online, and for users to be safe, so combating digital crime is a key and shared goal,” says Rights Alliance chief, Maria Fredenslund. “I am therefore looking forward to the future cooperation with DIFO.”

Only time will tell how this partnership will play out but if common ground can be found, it’s certainly possible that the anti-piracy scene in Denmark could step up a couple of gears in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge

Post Syndicated from Ronnie Eichler original https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/

With the recent launch of Lambda@Edge, it’s now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content delivery network (CDN) service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don’t need to worry about managing servers or underlying infrastructure—you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com. Use this name for your website by creating a CNAME record in your domain’s DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can enable HTTPS-hosted sites, by either using a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. In addition, CloudFront also offers integration with AWS WAF, a web application firewall. As you can see, it’s possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3’s static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page.
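For reference, a bucket policy that limits reads to the OAI generally takes the following shape (a sketch only; the OAI ID and bucket name below are hypothetical placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}

The CloudFront console can generate and attach an equivalent policy for you during distribution creation, which is the approach this walkthrough takes later on.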

CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would do an S3 GetObject API call against a key that does not exist.

Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex Directive in an Apache Web Server configuration) to users through CloudFront, at least not if you still want to be able to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it comes in from the client, then rewrite it so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.

When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

To get started, create an S3 bucket to be the origin for CloudFront:

Create bucket

On the other screens, you can just accept the defaults for the purposes of this walkthrough. If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access.

Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality:  A page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket

Subdirectory in bucket

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started.

On the next screen, you set up the distribution. Below are the options to configure:

  • Origin Domain Name:  Select the S3 bucket that you created earlier.
  • Restrict Bucket Access: Choose Yes.
  • Origin Access Identity: Create a new identity.
  • Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
  • Object Caching: Choose Customize (I am changing the behavior to avoid having CloudFront cache objects, as this could affect your ability to troubleshoot while implementing the Lambda code).
    • Minimum TTL: 0
    • Maximum TTL: 0
    • Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can re-visit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy (because the changes have to propagate out to all of the edge locations). After that’s done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name:

CloudFront Distribution Settings

Try to access the website using a combination of various URLs:

http://<domainname>/:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This is because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/:  Doesn’t work

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

If you use a tool such as cURL to test this, you notice that CloudFront and S3 are returning a blank response. The reason for this is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.

http://<domainname>/subdirectory/index.html:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works as expected because you are referencing the object directly. Now, you implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here’s where the magic happens:

var newuri = olduri.replace(/\/$/, '\/index.html');

You are going to use a JavaScript regular expression to match any ‘/’ that occurs at the end of the URI and replace it with ‘/index.html’. This is equivalent to what S3 does on its own with static website hosting. However, as I mentioned earlier, you can’t rely on this if you want to use a policy on the bucket to restrict it so that users must access the bucket through CloudFront. That way, all requests to the S3 bucket must be authenticated using the S3 REST API. Because of this, you implement a Lambda@Edge function that takes any client request ending in ‘/’ and appends a default ‘index.html’ to the request before requesting the object from the origin.
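If you want to convince yourself that the expression behaves as intended before deploying anything, a quick local check with Node.js looks like this (a sketch, not part of the deployed function):

// The same rewrite rule used by the Lambda@Edge handler in this post
var rewrite = function (uri) {
    return uri.replace(/\/$/, '\/index.html');
};

console.log(rewrite('/'));                        // '/index.html'
console.log(rewrite('/subdirectory/'));           // '/subdirectory/index.html'
console.log(rewrite('/subdirectory/index.html')); // unchanged; no trailing '/'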

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, as you’ll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Lambda Trigger

Next, give the function a name and a description. Then, copy and paste the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions. This creates a new IAM role for the Lambda function and grants the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

In a nutshell, these are the permissions that the function needs to create the necessary CloudWatch log group and log stream, and to put the log events so that the function is able to write logs when it executes.

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!

Summary

In this post, you used Lambda@Edge to be able to use CloudFront with an S3 origin access identity and serve a default root object on subdirectory URLs. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation.

If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

‘Pirate’ EBook Site Refuses Point Blank to Cooperate With BREIN

Post Syndicated from Andy original https://torrentfreak.com/pirate-ebook-site-refuses-point-blank-to-cooperate-with-brein-171015/

Dutch anti-piracy group BREIN is probably best known for its legal action against The Pirate Bay but the outfit also tackles many other forms of piracy.

A prime example is the case it pursued against a seller of fully-loaded Kodi boxes in the Netherlands. The subsequent landmark ruling from the European Court of Justice will reverberate around Europe for years to come.

Behind the scenes, however, BREIN persistently tries to take much smaller operations offline, and not without success. Earlier this year it revealed it had taken down 231 illegal sites and services, including 84 linking sites, 63 streaming portals, and 34 torrent sites. Some of these shut down completely and others were forced to leave their hosting providers.

Much of this work flies under the radar but some current action, against an eBook site, is now being thrust into the public eye.

For more than five years, EBoek.info (eBook) has served Internet users looking to obtain comic books in Dutch. The site informs TorrentFreak it provides a legitimate service, targeted at people who have purchased a hard copy but also want their comics in digital format.

“EBoek.info is a site about comic books in the Dutch language. Besides some general information about the books, people who have legally obtained a hard copy of the books can find a link to an NZB file which enables them to download a digital version of the books they already have,” site representative ‘Zala’ says.

For those out of the loop, NZB files are a bit like Usenet’s version of .torrent files. They contain no copyrighted content themselves but do provide software clients with information on where to find specific content, so it can be downloaded to a user’s machine.

“BREIN claims that this is illegal as it is impossible for us to verify if our visitor is telling the truth [about having purchased a copy],” Zala reveals.

Speaking with TorrentFreak, BREIN chief Tim Kuik says there’s no question that offering downloads like this is illegal.

“It is plain and simple: the site makes links to unauthorized digital copies available to the general public and therefore is infringing copyright. It is distribution of the content without authorization of the rights holder,” Kuik says.

“The unauthorized copies are not private copies. The private copy exception does not apply to this kind of distribution. The private copy has not been made by the owner of the book himself for his own use. Someone else made the digital copy and is making it available to anyone who wants to download it provided he makes the unverified claim that he has a legal copy. This harms the normal exploitation of the content.”

Zala says that BREIN has been trying to take his site offline for many years but more recently, the platform has utilized the services of Cloudflare, partly as a form of shield. As readers may be aware, a site behind Cloudflare has its originating IP addresses hidden from the public, not to mention BREIN, who values that kind of information. According to the operator, however, BREIN managed to obtain the information from the CDN provider.

“BREIN has tried for years to take our site offline. Recently, however, Cloudflare was so friendly to give them our IP address,” Zala notes.

A text copy of an email reportedly sent by BREIN to EBoek’s web host and seen by TF appears to confirm that Cloudflare handed over the information as suggested. Among other things, the email has BREIN informing the host that “The IP we got back from Cloudflare is XXX.XXX.XX.33.”

This means that BREIN was able to place direct pressure on EBoek.info’s web host, so only time will tell if that bears any fruit for the anti-piracy group. In the meantime, however, EBoek has decided to go public over its battle with BREIN.

“We have received a request from Stichting BREIN via our hosting provider to take EBoek.info offline,” the site informed its users yesterday.

Interestingly, it also appears that BREIN doesn’t appreciate that the operators of EBoek have failed to make their identities publicly known on their platform.

“The site operates anonymously which also is unlawful. Consumer protection requires that the owner/operator of a site identifies himself,” Kuik says.

According to EBoek, the anti-piracy outfit told the site’s web host that as a “commercial online service”, EBoek is required under EU law to display its “correct and complete business information” including names, addresses, and other information. But perhaps unsurprisingly, the site doesn’t want to play ball.

“In my opinion, you are confusing us with Facebook. They are a foreign commercial company with a European branch in Ireland, and therefore are subject to Irish legislation,” Zala says in an open letter to BREIN.

“Eboek.info, on the other hand, is a foreign hobby club with no commercial purpose, whose administrators have no connection with any country in the European Union. As administrators, we follow the laws of our country of residence which do not oblige us to disclose our identity through our website.

“The fact that Eboek is visible in the Netherlands does not just mean that we are going to adapt to Dutch rules, just as we don’t adapt the site to the rules of Saudi Arabia or China or wherever we are available.”

In a further snub to the anti-piracy group, EBoek says that all visitors to the site have to communicate with its operators via its guestbook, which is publicly visible.

“We see no reason to make an exception for Stichting BREIN,” the site notes.

What makes the situation more complex is that EBoek isn’t refusing dialog completely. The site says it doesn’t want to talk to BREIN but will speak to BREIN’s customers – the publishers of the comic books in question – noting that to date no complaints from publishers have ever been received.

While the parties argue about lines of communication, BREIN insists that following the European Court of Justice decision in the GS Media case, a link to a known infringing work represents copyright infringement. In this case, an NZB file – which links to a location on Usenet – would generally fit the bill.

But despite focusing on the Dutch market, the operators of EBoek say the ruling doesn’t apply to them as they’re outside of the ECJ’s jurisdiction and aren’t commercially motivated. Refusing point blank to take their site offline, EBoek’s operators say that BREIN can do its worst, nothing will have much effect.

“[W]hat’s the worst thing that can happen? That our web host hands [BREIN] our address and IP data. In that case, it will turn out that…we are actually far away,” Zala says.

“[In the case the site goes offline], we’ll just put a backup on another server and, in this case, won’t make use of the ‘services’ of Cloudflare, the provider that apparently put BREIN on the right track.”

The question of jurisdiction is indeed an interesting one, particularly given BREIN’s focus in the Netherlands. But Kuik is clear – it is the area where the content is made available that matters.

“The law of the country where the content is made available applies. In this case the EU and amongst others the Netherlands,” Kuik concludes.

To be continued…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Evil Within 2 Used Denuvo, Then Dumped it Before Launch

Post Syndicated from Andy original https://torrentfreak.com/the-evil-within-2-used-denuvo-then-dumped-it-before-launch-171013/

At the end of September we reported on a nightmare scenario for videogame anti-tamper technology Denuvo.

With cracking groups chipping away at the system for the past few months, progressing in leaps and bounds, the race to the bottom was almost complete. After aiming to hold off pirates for the first few lucrative weeks and months after launch, the Denuvo-protected Total War: Warhammer 2 fell to pirates in a matter of hours.

In the less than two weeks that have passed since, things haven’t improved much. By most measurements, in fact, the situation appears to have gotten worse.

On Wednesday, action role-playing game Middle Earth: Shadow of War was cracked a day after launch. While this didn’t beat the record set by Warhammer 2, the scene was given an unexpected gift.

Instead of the crack appearing courtesy of scene groups STEAMPUNKS or CPY, as has largely been the tradition thus far this year, old favorite CODEX stepped up to the mark with their own efforts. This means there are now close to half a dozen entities with the ability to defeat Denuvo, which isn’t a good look for the anti-piracy outfit.

A CODEX crack for Denuvo, from nowhere

Needless to say, this development was met with absolute glee by pirates, who forgave the additional day taken to crack the game in order to welcome CODEX into the anti-Denuvo club. But while this is bad news for the anti-tamper technology, there could be a worse enemy crossing the horizon – no confidence.

This Tuesday, DSO Gaming reported that it had received a review copy of Bethesda’s then-upcoming survival horror game, The Evil Within 2. The site, which is often a reliable source for Denuvo-related news, confirmed that the code was indeed protected by Denuvo.

“Another upcoming title that will be using Denuvo is The Evil Within 2,” the site reported. “Bethesda has provided us with a review code for The Evil Within 2. As such, we can confirm that Denuvo is present in it.”

As you read this, October 13, 2017, The Evil Within 2 is enjoying its official worldwide launch. Early yesterday afternoon, however, the title leaked onto the Internet, courtesy of cracking group CODEX.

At first glance, it looked like CODEX had cracked Denuvo before the game’s official launch, but the reality was somewhat different once the dust had settled. For reasons best known to developer Bethesda, Denuvo was completely absent from the title. As shown by the title’s NFO (information) file, the only protection present was that provided by Steam.

Denuvo? What Denuvo?

This raises a number of scenarios, none of them good for Denuvo.

One possibility is that all along Bethesda never intended to use Denuvo on the final release. Exactly why we’ll likely never know, but the theory doesn’t really gel with the company including it in the review code supplied to DSO Gaming earlier this week.

The other proposition is that Bethesda witnessed the fiasco around Denuvo’s ‘protection’ in recent days and decided not to invest in something that wasn’t going to provide value for money.

Of course, these theories are going to be pretty difficult to confirm. Denuvo are a pretty confident bunch when things are going their way but they go suspiciously quiet when the tide is turning. Equally, developers tend to keep quiet about their anti-piracy strategies too.

The bottom line though is that if the protection really works and turns in valuable cash, why wouldn’t Bethesda use it as they have done on previous titles including Doom and Prey?

With that question apparently answering itself at the moment, all eyes now turn to Denuvo. Although it has a history of being one of the most successful anti-piracy systems overall, it has taken a massive battering in recent times. Will it recover? Only time will tell but at the moment things couldn’t get much worse.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Spooky Halloween Video Contest

Post Syndicated from Yev original https://www.backblaze.com/blog/spooky-halloween-video-contest/

Would You Like to Play a Game? Let's make a scary movie, or at least a silly one.

Think you can create a really spooky Halloween video?

We’re giving out $100 Visa gift cards just in time for the holidays. Want a chance to win? You’ll need to make a spooky 30-second Halloween-themed video. We had a lot of fun with this the last time we did it a few years back so we’re doing it again this year.

Here’s How to Enter

  1. Prepare a short video, 30 seconds or less, recreating your favorite horror movie scene using your computer or hard drive as the victim — or make something original!
  2. Insert the following image at the end of the video (right-click and save as):
    Backblaze cloud backup
  3. Upload your video to YouTube
  4. Post a link to your video on the Backblaze Facebook wall or on Twitter with the hashtag #Backblaze so we can see it and enter it into the contest. Or, link to it in the comments below!
  5. Share your video with friends

Common Questions
Q: How many people can be in the video?
A: However many you need in order to recreate the scene!
Q: Can I make it longer than 30 seconds?
A: Maybe 32 seconds, but that’s it. If you want to make a longer “director’s cut,” we’d love to see it, but the contest video should be close to 30 seconds. Please keep it short and spooky.
Q: Can I record it on an iPhone, Android, iPad, Camera, etc?
A: You can use whatever device you wish to record your video.
Q: Can I submit multiple videos?
A: If you have multiple favorite scenes, make a vignette! But please submit only one video.
Q: How many winners will there be?
A: We will select up to three winners total.

Contest Rules

  • To upload the video to YouTube, you must have a valid YouTube account and comply with all YouTube rules for age, content, copyright, etc.
  • To post a link to your video on the Backblaze Facebook wall, you must use a valid Facebook account and comply with all Facebook rules for age, content, copyrights, etc.
  • We reserve the right to remove and/or not consider as a valid entry, any videos which we deem inappropriate. We reserve the exclusive right to determine what is inappropriate.
  • Backblaze reserves the right to use your video for promotional purposes.
  • The contest will end on October 29, 2017 at 11:59:59 PM Pacific Daylight Time. The winners (up to three) will be selected by Backblaze and will be announced on October 31, 2017.
  • We will be giving away gift cards to the top winners. The prize will be mailed to the winner in a timely manner.
  • Please keep the content of the post PG rated — no cursing or extreme gore/violence.
  • By submitting a video you agree to all of these rules.

Need an example?

The post Spooky Halloween Video Contest appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Skill up on how to perform CI/CD with AWS Developer tools

Post Syndicated from Chirag Dhull original https://aws.amazon.com/blogs/devops/skill-up-on-how-to-perform-cicd-with-aws-devops-tools/

This is a guest post from Paul Duvall, CTO of Stelligent, a division of HOSTING.

I co-founded Stelligent, a technology services company that provides DevOps Automation on AWS, as a result of my own frustration in implementing all the “behind the scenes” infrastructure (including builds, tests, deployments, etc.) on the software projects I was working on. At Stelligent, we have worked with numerous customers looking to get software delivered to users quicker and with greater confidence. This sounds simple but it often consists of properly configuring and integrating myriad tools including, but not limited to, version control, build, static analysis, testing, security, deployment, and software release orchestration. What some might not realize is that there’s a new breed of build, deploy, test, and release tools that help reduce much of the undifferentiated heavy lifting of deploying and releasing software to users.

I’ve been using AWS since 2009 and I – along with many at Stelligent – have worked with the AWS Service Teams as part of the AWS Developer Tools betas that are now generally available (including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy). I’ve combined the experience we’ve had with customers along with this specialized knowledge of the AWS Developer and Management Tools to provide a unique course that shows multiple ways to use these services to deliver software to users quicker and with confidence.

 
In DevOps Essentials on AWS, you’ll learn how to accelerate software delivery and speed up feedback loops by learning how to use AWS Developer Tools to automate infrastructure and deployment pipelines for applications running on AWS. The course demonstrates solutions for various DevOps use cases for Amazon EC2, AWS OpsWorks, AWS Elastic Beanstalk, AWS Lambda (Serverless), Amazon ECS (Containers), while defining infrastructure as code and learning more about AWS Developer Tools including AWS CodeStar, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

 
In this course, you see me use the AWS Developer and Management Tools to create comprehensive continuous delivery solutions for a sample application using many types of AWS service platforms. You can run the exact same sample and/or fork the GitHub repository (https://github.com/stelligent/devops-essentials) and extend or modify the solutions. I’m excited to share how you can use AWS Developer Tools to create these solutions for your customers as well. There’s also an accompanying website for the course (http://www.devopsessentialsaws.com/) that I use in the video to walk through the course examples which link to resources located in GitHub or Amazon S3. In this course, you will learn how to:

  • Use AWS Developer and Management Tools to create a full-lifecycle software delivery solution
  • Use AWS CloudFormation to automate the provisioning of all AWS resources
  • Use AWS CodePipeline to orchestrate the deployments of all applications (see the sketch just after this list)
  • Use AWS CodeCommit while deploying an application onto EC2 instances using AWS CodeBuild and AWS CodeDeploy
  • Deploy applications using AWS OpsWorks and AWS Elastic Beanstalk
  • Deploy an application using Amazon EC2 Container Service (ECS) along with AWS CloudFormation
  • Deploy serverless applications that use AWS Lambda and API Gateway
  • Integrate all AWS Developer Tools into an end-to-end solution with AWS CodeStar
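
To give a feel for what working with these services looks like, here is a minimal sketch using boto3, the AWS SDK for Python. It assumes a pipeline named devops-essentials-demo already exists; that name is a placeholder rather than one from the course, and in the course itself resources like this are provisioned with AWS CloudFormation rather than by hand.

import boto3

codepipeline = boto3.client("codepipeline")

# Kick off a new run of the pipeline (for example, after pushing a change).
execution = codepipeline.start_pipeline_execution(name="devops-essentials-demo")
print("Started execution:", execution["pipelineExecutionId"])

# Inspect the state of each stage (Source, Build, Deploy, ...).
state = codepipeline.get_pipeline_state(name="devops-essentials-demo")
for stage in state["stageStates"]:
    latest = stage.get("latestExecution", {})
    print(stage["stageName"], "->", latest.get("status", "not yet run"))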

To learn more, see the DevOps Essentials on AWS video course on Udemy. For a limited time, you can enroll in the course for $40, an 80% discount that saves you $160 off the $200 list price. Simply use the code AWSDEV17.

Stelligent, an AWS Partner Network Advanced Consulting Partner, holds the AWS DevOps Competency and more than 100 AWS technical certifications. To stay updated on DevOps best practices, visit www.stelligent.com.

A Million ‘Pirate’ Boxes Sold in the UK During The Last Two Years

Post Syndicated from Andy original https://torrentfreak.com/a-million-pirate-boxes-sold-in-the-uk-during-the-last-two-years-170919/

With the devices hitting the headlines on an almost weekly basis, it probably comes as no surprise that ‘pirate’ set-top boxes are quickly becoming public enemy number one with video rightsholders.

Typically loaded with the legal Kodi software but augmented with third-party addons, these often Android-based pieces of hardware drag piracy out of the realm of the computer savvy and into the living rooms of millions.

One of the countries reportedly most affected by this boom is the UK. Use of these devices among the general public is said to have reached epidemic proportions, and anecdotal evidence suggests that Kodi and Showbox are now household names.

Today we have another report to digest, this time from the Federation Against Copyright Theft, or FACT as they’re often known. Titled ‘Cracking Down on Digital Piracy,’ the report provides a general overview of the piracy scene, tackling well-worn topics such as how release groups and site operators work, among others.

The report is produced by FACT after consultation with the Police Intellectual Property Crime Unit, Intellectual Property Office, Police Scotland, and anti-piracy outfit Entura International. It begins by noting that the vast majority of the British public aren’t involved in the consumption of infringing content.

“The most recent stats show that 75% of Brits who look at content online abide by the law and don’t download or stream it illegally – up from 70% in 2013. However, that still leaves 25% who do access material illegally,” the report reads.

The report quickly heads to the topic of ‘pirate’ set-top boxes which is unsurprising, not least due to FACT’s current focus as a business entity.

While it often positions itself alongside government bodies (which no doubt boosts its status with the general public), FACT is a private limited company serving The Premier League, another company desperate to stamp out the use of infringing devices.

Nevertheless, it’s difficult to argue with some of the figures cited in the report.

“At a conservative estimate, we believe a million set-top boxes with software added to them to facilitate illegal downloads have been sold in the UK in the last couple of years,” the Intellectual Property Office reveals.

Interestingly, given a growing tech-savvy public, FACT’s report notes that ready-configured boxes are increasingly coming into the country.

“Historically, individuals and organized gangs have added illegal apps and add-ons onto the boxes once they have been imported, to allow illegal access to premium channels. However more recently, more boxes are coming into the UK complete with illegal access to copyrighted content via apps and add-ons already installed,” FACT notes.

“Boxes are often stored in ‘fulfillment houses’ along with other illegal electrical items and sold on social media. The boxes are either sold as one-off purchases, or with a monthly subscription to access paid-for channels.”

While FACT press releases regularly blur the lines when people are prosecuted for supplying set-top boxes in general, it’s important to note that there are essentially two kinds of products on offer to the public.

The first relies on Kodi-type devices, which provide ongoing free access to infringing content. The second involves premium IPTV subscriptions, which are a whole different level of criminality. Separating the two when reading news reports can be extremely difficult, but it is hugely important to recognize the difference when assessing the kinds of sentences set-top box suppliers are receiving in the UK.

Nevertheless, FACT correctly highlights that the supply of both kinds of product is on the increase, with various parties recognizing the commercial opportunities.

“A significant number of home-grown British criminals are now involved in this type of crime. Some of them import the boxes wholesale through entirely legal channels, and modify them with illegal software at home. Others work with sophisticated criminal networks across Europe to bring the boxes into the UK.

“They then sell these boxes online, for example through eBay or Facebook, sometimes managing to sell hundreds or thousands of boxes before being caught,” the company adds.

The report notes that in some cases the sale of infringing set-top boxes resembles a cottage industry, with suppliers often working alone or in small groups of friends and family. Inevitably, perhaps, larger-scale operations are reported to be part of networks with connections to other kinds of crime, such as dealing in drugs.

“In contrast to drugs, streaming devices provide a relatively steady and predictable revenue stream for these criminals – while still being lucrative, often generating hundreds of thousands of pounds a year, they are seen as a lower risk activity with less likelihood of leading to arrest or imprisonment,” FACT reports.

While there’s certainly the potential to earn large sums from ‘pirate’ boxes and premium IPTV services, operating on the “hundreds of thousands of pounds a year” scale in the UK would attract a lot of unwanted attention. That’s not to say it isn’t attracting attention already, however.

Noting that digital piracy has evolved hugely over the past three or four years, the report says that the cases investigated so far are just the “tip of the iceberg” and that many other cases are in the early stages and will only become known to the public in the months and years ahead.

Indeed, the Intellectual Property Office hints that some kind of large-scale enforcement action may be on the horizon.

“We have identified a significant criminal business model which we have discussed and shared with key law enforcement partners. I can’t go into detail on this, but as investigations take their course, you will see the scale,” an IPO spokesperson reveals.

While details are necessarily scarce, a source familiar with this area told TF that he would be very surprised if the targets aren’t the growing handful of commercial UK-based IPTV re-sellers who offer full subscription TV services for a few pounds per month.

“They’re brazen. Watch this space,” he said.

FACT’s full report, Cracking Down on Digital Piracy, can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

NSA Spied on Early File-Sharing Networks, Including BitTorrent

Post Syndicated from Andy original https://torrentfreak.com/nsa-spied-on-early-file-sharing-networks-including-bittorrent-170914/

In the early 2000s, when peer-to-peer (P2P) file-sharing was in its infancy, the majority of users had no idea that their activities could be monitored by outsiders. The reality was very different, however.

Few as they were, all of the major networks were completely open, with most operating a ‘shared folder’ type system that allowed any network participant to see exactly what another user was sharing. Nevertheless, with little to no oversight, file-sharing at least felt like a somewhat private affair.

As user volumes began to swell, software such as KaZaA (which utilized the FastTrack network) and eDonkey2000 (eD2k network) attracted attention from record labels, who were desperate to stop the unlicensed sharing of copyrighted content. The same held true for the BitTorrent networks that arrived on the scene a couple of years later.

Through the rise of lawsuits against consumers, the general public began to learn that their activities on P2P networks were not secret and they were being watched for some, if not all, of the time by copyright holders. Little did they know, however, that a much bigger player was also keeping a watchful eye.

According to a fascinating document just released by The Intercept as part of the Edward Snowden leaks, the National Security Agency (NSA) showed a keen interest in trying to penetrate early P2P networks.

Initially published by internal NSA news site SIDToday in June 2005, the document lays out the aims of a program called FAVA – File-Sharing Analysis and Vulnerability Assessment.

“One question that naturally arises after identifying file-sharing traffic is whether or not there is anything of intelligence value in this traffic,” the NSA document begins.

“By searching our collection databases, it is clear that many targets are using popular file sharing applications; but if they are merely sharing the latest release of their favorite pop star, this traffic is of dubious value (no offense to Britney Spears intended).”

Indeed, the vast majority of users of these early networks were interested only in sharing relatively small music files, which were fairly easy to manage given the bandwidth limitations of the day. However, the NSA still wanted to know what was happening on a broader scale, and that meant decoding the networks’ somewhat limited encryption.

“As many of the applications, such as KaZaA for example, encrypt their traffic, we first had to decrypt the traffic before we could begin to parse the messages. We have developed the capability to decrypt and decode both KaZaA and eDonkey traffic to determine which files are being shared, and what queries are being performed,” the NSA document reveals.

Most progress appears to have been made against KaZaA, with the NSA revealing the use of tools to parse out registry entries on users’ hard drives. This information gave up users’ email addresses, country codes, user names, the location of their stored files, plus a list of recent searches.

This gave the NSA the ability to look deeper into user behavior, which revealed some P2P users going beyond searches for basic run-of-the-mill multimedia content.

“[We] have discovered that our targets are using P2P systems to search for and share files which are at the very least somewhat surprising — not simply harmless music and movie files. With more widespread adoption, these tools will allow us to regularly assimilate data which previously had been passed over; giving us a more complete picture of our targets and their activities,” the document adds.

Today, more than 12 years later, with KaZaA long dead and eDonkey barely alive, the scanning of early pirate activities might seem like a distant memory. However, there’s little doubt that similar programs remain active today. Even in 2005, the FAVA program had lofty ambitions, targeting other networks and protocols including DirectConnect, Freenet, Gnutella, Gnutella2, JoltID, MSN Messenger, Windows Messenger and……BitTorrent.

“If you have a target using any of these applications or using some other application which might fall into the P2P category, please contact us,” the NSA document urges staff. “We would be more than happy to help.”

Confirming the continued interest in BitTorrent, The Intercept has published a couple of further documents which deal with the protocol directly.

The first details an NSA program called GRIMPLATE, which aimed to study how Department of Defense employees were using BitTorrent and whether that constituted a risk.

The second relates to P2P research carried out by Britain’s GCHQ spy agency. It details DIRTY RAT, a web application which gave the government “the capability to identify users sharing/downloading files of interest on the eMule (Kademlia) and BitTorrent networks.”

The SIDToday document detailing the FAVA program can be viewed here

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Indian Movie Actor Mobbed By Press After Arrest of Torrent Site Admin

Post Syndicated from Andy original https://torrentfreak.com/indian-movie-actor-mobbed-by-press-after-airport-torrent-site-arrest-170913/

While most of the headlines relating to Internet piracy are focused on North America and Europe, there are dozens of countries where piracy is a way of life for millions of citizens. India, with its booming economy and growth in technology, is certainly one of them.

According to a recently published report, India now has 355 million Internet users out of a population of more than 1.3 billion. Not only is there massive room for growth, but that figure is already up from 277 million just two years ago. The rate of growth is astonishing.

Needless to say, Indians love their Internet and increasing numbers of citizens are also getting involved in the piracy game. There are many large sites and prominent release groups operating out of the country, some of them targeting the international market. Carry out a search for DVDSCR (DVD screener) on most search indexes globally and one is just as likely to find Indian movie releases as those emanating from the West.

If people didn’t know it already, India is nurturing a pirate force to be reckoned with, with local torrent and streaming sites pumping out the latest movies at an alarming rate. This has caused an outcry from many in the movie industry who are determined to do something to stem the tide.

One of these is actor Vishal Krishna, who not only stars in movies but is also a producer working in the Tamil film industry. Often referred to simply by his first name, Vishal has spoken out regularly against piracy in his role at the Tamil Film Producers Council.

In May, he referred to the operators of the hugely popular torrent site TamilRockers as ‘Internet Mafias’ while demanding their arrest for leaking the blockbuster Baahubali 2, a movie that pulled in US$120 million in six days. Now, it appears, he may have gotten his way. Well, partially, at least.

Last evening, reports began to surface of an arrest at Chennai airport in southern India. According to local media, Gauri Shankar, an alleged administrator of Tamilrockers.co, was detained by Triplicane police.

This would’ve been a huge coup for Vishal, who has been warning Tamilrockers to close down for the past three years. He even claimed to know the identity of the main perpetrator behind the site, noting that it was only a matter of time before he was brought to justice.

Soon after the initial reports, however, other media outlets claimed that Gauri Shankar is actually an operator at Tamilgun, another popular pirate portal currently blocked by ISPs on the orders of the Indian government.

So was it rockers or gun? According to Indiaglitz.com, Vishal rushed to the scene in Chennai to find out.

Outside the police station

What followed were quite extraordinary scenes outside the Triplicane police station. Emerging from the building flanked by close to 20 men, some in uniform, Vishal addressed an excited crowd of reporters. A swathe of microphones from various news outlets greeted him as he held up his hands urging the crowd to calm down.

“Just give us some time, I will give you the details,” Vishal said in two languages.

“Just give us some time. It is too early. I’ll just give it to you in a bit. It’s something connected to website piracy. Just give me some time. I have to give you all the details, proper details.”

So, even after all the excitement, it’s unclear who the police have in custody. Nevertheless, the attention this event is getting from the press is on a level rarely seen in a piracy case, so more news is bound to follow soon.

In the meantime, both TamilRockers and TamilGun remain online, operating as normal. Clearly, there is much more work to be done.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SceneAccess Torrent Tracker Shuts Down

Post Syndicated from Ernesto original https://torrentfreak.com/private-torrent-tracker-sceneaccess-shuts-down-170912/

SceneAccess (SCC) has been a respected and well-connected private BitTorrent tracker for more than a decade, but a few hours ago it closed its doors.

The operators of the tracker, which recently stopped enforcing a mandatory share ratio, had been complaining about a lack of financial support for a while.

“As we stand now, we have NO money left to pay our bills and the lights WILL go out,” one of the staffers wrote earlier this year, urging the site’s members to chip in to help the site stay online.

Apparently, these frequent donation reminders were unsuccessful. Today, members of the tracker, some of whom have been with the site for more than ten years, are greeted by a farewell notice.

“After putting a decade of blood, sweat and tears – it is time to throw in the towel. It is time for us to close this chapter…” it reads, thanking all donors who helped the site over the years.

“As times change, so do peoples priorities and without continued economical support from the community, it is impossible to run a site of this size. It’s been a pleasure for all of us to serve you with pride and honor.”

SceneAccess shuts down

SceneAccess has seen its fair share of trouble over the years. The site was raided in its early days, forced by anti-piracy group BREIN to switch hosts, DDoSed on several occasions, and suffered a leak of user data, among other things.

While it recovered from all these events, a lack of financial support now means that the end has finally come.

The tracker is not the only site to run low on donations. Many trackers, including several of the big players, have complained about the same issue in recent years.

While there may always be additional factors in play when a site shuts down, it is clear that SceneAccess is not coming back, unless there is some magical turnaround. This means that its users have to find a new home, wherever that may be.

“Thank you for 11 amazing years. We wish you all the best in your future endeavors,” SCC concludes.

Another one bites the dust…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

No, Google Drive is Definitely Not The New Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/no-google-drive-is-definitely-not-the-new-pirate-bay-170910/

Now close to two decades old, the world of true mainstream file-sharing is less of a mystery to the general public than it has ever been.

Most people now understand the concept of shifting files from one place to another, and a significant majority will be aware of the opportunities to do so with infringing content.

Unsurprisingly, this is a major thorn in the side of rightsholders all over the world, who have been scrambling since the turn of the century in a considerable effort to stem the tide. The results of their work have varied, with some sectors hit harder than others.

One area that has taken a bit of a battering recently involves the dominant peer-to-peer platforms reliant on underlying BitTorrent transfers. Several large-scale sites have shut down recently, not least KickassTorrents, Torrentz, and ExtraTorrent, raising questions of what bad news may arrive next for inhabitants of Torrent Land.

Of course, like any other Internet-related activity, sharing has continued to evolve over the years, with streaming and cloud-hosting now a major hit with consumers. In the main, sites which skirt the borders of legality have been the major hosting and streaming players over the years, but more recently it’s become clear that even the most legitimate companies can become unwittingly involved in the piracy scene.

As reported here on TF back in 2014 and again several times this year (1,2,3), cloud-hosting services operated by Google, including Google Drive, are being used to store and distribute pirate content.

That news was echoed again this week, with a report on Gadgets360 reiterating that Google Drive is still being used for movie piracy. What followed were a string of follow up reports, some of which declared Google’s service to be ‘The New Pirate Bay.’

No. Just no.

While it’s always tempting for publications to squeeze a reference to The Pirate Bay into a piracy article due to the site’s popularity, it’s particularly out of place in this comparison. In no way, shape, or form can a centralized store of data like Google Drive ever replace the underlying technology of sites like The Pirate Bay.

While the casual pirate might love the idea of streaming a movie with a couple of clicks to a browser of his or her choice, the weakness of the cloud system cannot be understated. To begin with, anything hosted by Google is vulnerable to immediate takedown on demand, usually within a matter of hours.

“Google Drive has a variety of piracy counter-measures in place,” a spokesperson told Mashable this week, “and we are continuously working to improve our protections to prevent piracy across all of our products.”

When will we ever hear anything like that from The Pirate Bay? Answer: When hell freezes over. But it’s not just compliance with takedown requests that make Google Drive-hosted files vulnerable.

At the point Google Drive responds to a takedown request, it takes down the actual file. On the other hand, even if Pirate Bay responded to notices (which it doesn’t), it would be unable to do anything about the sharing going on underneath. Removing a torrent file or magnet link from TPB does nothing to negatively affect the decentralized swarm of people sharing files among themselves. Those files stay intact and sharing continues, no matter what happens to the links above.

Importantly, people sharing using BitTorrent do so without any need for central servers – the whole process is decentralized as long as a user can lay his or her hands on a torrent file or magnet link. Those using Google Drive, however, rely on a totally centralized system, where not only is Google king, but it can and will stop the entire party after receiving a few lines of text from a rightsholder.
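
To make the “treasure map” point concrete, here is a minimal Python sketch of what a magnet link actually contains. The link below is made up purely for illustration; nothing in it refers to real content.

from urllib.parse import urlparse, parse_qs

# A made-up magnet link: an info hash plus optional metadata, and no file data.
magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
          "&dn=example-file&tr=udp%3A%2F%2Ftracker.example%3A6969")

params = parse_qs(urlparse(magnet).query)
info_hash = params["xt"][0].split(":")[-1]  # a 40-hex-character content fingerprint
print(info_hash)

With nothing more than that hash, a BitTorrent client can find peers through the DHT. No central server stores the file, which is exactly why deleting the link from a site does nothing to the swarm underneath.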

There is a very good reason why sites like The Pirate Bay have been around for close to 15 years while Megaupload, Hotfile, Rapidshare, and similar platforms have all met their makers. File-hosting platforms are expensive-to-run warehouses full of files, each of which brings direct liability for their hosts once they’re made aware that those files are infringing. These days the choice is clear: take the files down or get taken down, it’s as simple as that.

The Pirate Bay, on the other hand, is nothing more than a treasure map (albeit a valuable one) that points the way to content spread all around the globe in the most decentralized way possible. There are no files to delete, no content to disappear. Comparing a vulnerable Google Drive to this kind of robust system couldn’t be further from the mark.

That being said, this is the way things are going. The cloud, it seems, is here to stay in all its forms. Everyone has access to it and uploading content is easier – much easier – than uploading it to a BitTorrent network. A Google Drive upload is simplicity itself for anyone with a mouse and a file; the same cannot be said about The Pirate Bay.

For this reason alone, platforms like Google Drive and the many dozens of others offering a similar service will continue to become havens for pirated content, until the next big round of legislative change. At the moment, each piece of content has to be removed individually but in the future, it’s possible that pre-emptive filters will kill uploads of pirated content before they see the light of day.

When this comes to pass, millions of people will understand why Google Drive, with its bots checking every file upload for alleged infringement, is not The Pirate Bay. At this point, if people have left it too long, it might be too late to reinvigorate BitTorrent networks to their former glory.

People will try to rebuild them, of course, but realizing why they shouldn’t have been left behind at all is probably the best protection.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Network Load Balancer – Effortless Scaling to Millions of Requests per Second

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/

Elastic Load Balancing (ELB) has been an important part of AWS since 2009, when it was launched as part of a three-pack that also included Auto Scaling and Amazon CloudWatch. Since that time we have added many features, and also introduced the Application Load Balancer. Designed to support application-level, content-based routing to applications that run in containers, Application Load Balancers pair well with microservices, streaming, and real-time workloads.

Over the years, our customers have used ELB to support web sites and applications that run at almost any scale — from simple sites running on a T2 instance or two, all the way up to complex applications that run on large fleets of higher-end instances and handle massive amounts of traffic. Behind the scenes, ELB monitors traffic and automatically scales to meet demand. This process, which includes a generous buffer of headroom, has become quicker and more responsive over the years and works well even for our customers who use ELB to support live broadcasts, “flash” sales, and holidays. However, in some situations such as instantaneous fail-over between regions, or extremely spiky workloads, we have worked with our customers to pre-provision ELBs in anticipation of a traffic surge.

New Network Load Balancer
Today we are introducing the new Network Load Balancer (NLB). It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

Static IP Addresses – Each Network Load Balancer provides a single IP address for each VPC subnet in its purview. If you have targets in a subnet in us-west-2a and other targets in a subnet in us-west-2c, NLB will create and manage two IP addresses (one per subnet); connections to that IP address will spread traffic across the instances in the subnet. You can also specify an existing Elastic IP for each subnet for even greater control. With full control over your IP addresses, Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.

Zonality – The IP-per-subnet feature reduces latency and improves performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single subnet while still allowing automatic failover.

Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.

Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.

Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.

Creating a Network Load Balancer
I can create a Network Load Balancer by opening up the EC2 Console, selecting Load Balancers, and clicking on Create Load Balancer:

I choose Network Load Balancer and click on Create, then enter the details. I can choose an Elastic IP address for each subnet in the target VPC and I can tag the Network Load Balancer:

Then I click on Configure Routing and create a new target group. I enter a name, and then choose the protocol and port. I can also set up health checks that go to the traffic port or to the alternate of my choice:

Then I click on Register Targets, choose the EC2 instances that will receive traffic, and click on Add to registered:

I make sure that everything looks good and then click on Create:

The state of my new Load Balancer is provisioning, switching to active within a minute or so:
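
The same setup can be scripted instead of clicked through. Here is a hedged sketch using boto3, the AWS SDK for Python; every name, subnet ID, Elastic IP allocation ID, and instance ID below is a placeholder rather than a value from this post.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Create the Network Load Balancer, pinning an Elastic IP to each subnet.
nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-11111111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-22222222"},
    ],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# Create a TCP target group and register two EC2 instances with it.
tg = elbv2.create_target_group(
    Name="my-targets", Protocol="TCP", Port=80,
    VpcId="vpc-12345678", TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],
)

# Forward TCP port 80 on the load balancer to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn, Protocol="TCP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)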

For testing purposes, I simply grab the DNS name of the Load Balancer from the console (in practice I would use Amazon Route 53 and a more friendly name):

Then I sent it a ton of traffic (I intended to let it run for just a second or two but got distracted and it created a huge number of processes, so this was a happy accident):

$ while true;
> do
>   wget http://nlb-1-6386cc6bf24701af.elb.us-west-2.amazonaws.com/phpinfo2.php &
> done

A more disciplined test would use a tool like Bees with Machine Guns, of course!

I took a quick break to let some traffic flow and then checked the CloudWatch metrics for my Load Balancer, finding that it was able to handle the sudden onslaught of traffic with ease:

I also looked at my EC2 instances to see how they were faring under the load (really well, it turns out):

It turns out that my colleagues did run a more disciplined test than I did. They set up a Network Load Balancer and backed it with an Auto Scaled fleet of EC2 instances. They set up a second fleet composed of hundreds of EC2 instances, each running Bees with Machine Guns and configured to generate traffic with highly variable request and response sizes. Beginning at 1.5 million requests per second, they quickly turned the dial all the way up, reaching over 3 million requests per second and 30 Gbps of aggregate bandwidth before maxing out their test resources.

Choosing a Load Balancer
As always, you should consider the needs of your application when you choose a load balancer. Here are some guidelines:

Network Load Balancer (NLB) – Ideal for load balancing of TCP traffic, NLB is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.

Application Load Balancer (ALB) – Ideal for advanced load balancing of HTTP and HTTPS traffic, ALB provides advanced request routing that supports modern application architectures, including microservices and container-based applications.

Classic Load Balancer (CLB) – Ideal for applications that were built within the EC2-Classic network.

For a side-by-side feature comparison, see the Elastic Load Balancer Details table.

If you are currently using a Classic Load Balancer and would like to migrate to a Network Load Balancer, take a look at our new Load Balancer Copy Utility. This Python tool will help you to create a Network Load Balancer with the same configuration as an existing Classic Load Balancer. It can also register your existing EC2 instances with the new load balancer.

Pricing & Availability
As with the Application Load Balancer, pricing is based on Load Balancer Capacity Units, or LCUs. Billing is $0.006 per LCU, based on the highest value seen across the following dimensions:

  • Bandwidth – 1 GB per LCU.
  • New Connections – 800 per LCU.
  • Active Connections – 100,000 per LCU.

Most applications are bandwidth-bound and should see a cost reduction (for load balancing) of about 25% when compared to Application or Classic Load Balancers.
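
To make the LCU arithmetic concrete, here is a small illustrative calculation in Python using the dimensions quoted above. The traffic numbers are invented, and the per-hour framing is an assumption made for the sake of the example.

# One hypothetical hour of traffic through a Network Load Balancer:
bandwidth_gb = 3.0           # GB processed (1 GB per LCU)
new_connections = 1_600      # new connections (800 per LCU)
active_connections = 50_000  # concurrent connections (100,000 per LCU)

# You are billed on whichever dimension consumes the most LCUs.
lcus = max(bandwidth_gb / 1.0,
           new_connections / 800,
           active_connections / 100_000)

print(lcus)          # 3.0 -- bandwidth is the binding dimension here
print(lcus * 0.006)  # $0.018 for this hypothetical hour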

Network Load Balancers are available today in all AWS commercial regions except China (Beijing), supported by AWS CloudFormation, Auto Scaling, and Amazon ECS.

Jeff;

MPAA: Net Neutrality Rules Should Not Hinder Anti-Piracy Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-net-neutrality-rules-should-not-hinder-anti-piracy-efforts-170907/

This summer, millions of people protested the FCC’s plan to repeal the net neutrality rules that were put in place by the former Obama administration.

Well over 22 million comments are listed on the FCC site already and among those we spotted a response from the main movie industry lobby group, the MPAA.

Acting on behalf of six major Hollywood studios, the MPAA is not getting involved in the repeal debate. It instead highlights that, if the FCC maintains any type of network neutrality rules, these shouldn’t get in the way of its anti-piracy efforts.

The Hollywood group stresses that despite an increase in legal services, online piracy remains a problem. Through various anti-piracy measures, rightsholders are working hard to combat this threat, which is their right by law.

“Copyright owners and content providers have a right under the Copyright and Communications acts to combat theft of their content, and the law encourages internet intermediaries to collaborate with content creators to do so,” the MPAA writes.

Now that the net neutrality rules are facing a possible revision or repeal, the MPAA wants to make it very clear that any future regulation should not get in the way of these anti-piracy efforts.

“The MPAA therefore asks that any network neutrality rules the FCC maintains or adopts make explicit that such rules do not limit the ability of copyright owners and their licensees to combat copyright infringement,” the group writes to the FCC.

This means that measures such as website blocking, which could be considered to violate net neutrality because they discriminate against specific traffic, should be allowed. The same is true for other filtering and blocking efforts.

The MPAA’s position doesn’t come as a surprise and given the FCC’s actions in the past, Hollywood has little to worry about. The current net neutrality rules, which were put in place by the Obama administration, specifically exclude pirate traffic.

“Nothing in this part prohibits reasonable efforts by a provider of broadband Internet access service to address copyright infringement or other unlawful activity,” the current net neutrality order reads.

“We reiterate that our rules do not alter the copyright laws and are not intended to prohibit or discourage voluntary practices undertaken to address or mitigate the occurrence of copyright infringement,” the FCC previously clarified.

Still, the MPAA is better safe than sorry.

This is not the first time that the MPAA has got involved in net neutrality debates. Behind the scenes the group has been lobbying US lawmakers on this issue for several years, previously arguing for similar net neutrality exceptions in Brazil and India.

The MPAA’s full comments can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Private Torrent Site Legal Battle Heard By Court of Appeal

Post Syndicated from Andy original https://torrentfreak.com/private-torrent-site-legal-battle-heard-by-court-of-appeal-170908/

Founded way back in 2006, SwePiracy grew to become one of the most famous private torrent sites on the Swedish scene. Needless to say, it also became a target for anti-piracy outfits.

Six years after its debut, following an investigation by anti-piracy group Antipiratbyrån (now Rights Alliance), police in Sweden and the Netherlands cooperated during 2012 to shut down the site and arrest its operator.

In early 2016, more than four years on, SwePiracy’s then 25-year-old operator appeared in court to answer charges relating to the unlawful distribution of a sample of 27 movies between March 2011 and February 2012. The prosecution demanded several years in prison and around $3.13 million (25 million kronor) in damages.

SwePiracy defense lawyer Per E. Samuelsson, who previously took part in The Pirate Bay trial, said the claims against his client were the most unreasonable he’d seen in his 35 years as a lawyer.

In October 2016, three weeks after the full trial, the Norrköping District Court handed down its decision. Given some of the big numbers being thrown around, the case seemed to turn out relatively well for the defendant.

While SwePiracy’s former operator was found guilty of copyright infringement, the prosecution’s demands for harsh punishment were largely pushed aside. A jail sentence was switched to probation plus community service, and the millions of dollars demanded in damages were reduced to ‘just’ $148,000, payable to movie outfit Nordisk Film. On top, $45,600 said to have been generated by SwePiracy was confiscated.

Almost immediately both sides announced an appeal, with the defendant demanding a more lenient sentence and the prosecution naturally leaning the other way. This week the case was heard at the Göta Court of Appeal, one of the six appellate courts in the Swedish system.

“We state that the District Court made an inaccurate assessment of the damages. So the damages claim remains at the same level as before,” Rights Alliance lawyer Henrik Pontén told Sweden’s IDG.

“There are two different approaches. We say that you have to pay for the entire license [for content when you infringe]. The District Court looked at how many times the movies were downloaded during the period.”

According to Pontén, the cost of such a license is hypothetical since there are no licenses available for distributing content through entities such as torrent sites, which have no mechanisms for control and no limits on sharing. That appears to have motivated the prosecution to demand a hefty price tag.

In addition to Rights Alliance wanting a better deal for their theoretical license, the official prosecutor also has issues with the amount of money that was confiscated from the platform.

“The operator has received donations to run the site. I have calculated how much money was received and the sum that the District Court awarded was almost half of my calculations,” Henrik Rasmusson told IDG.

Only time will tell how the Court of Appeal will rule but it’s worth noting that the decision could go either way or might even stand as it is now. In any event, this case has dragged on for far too long already and is unlikely to end positively for any of the parties involved.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.