Tag Archives: servers

Embedding a Tweet Can be Copyright Infringement, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/embedding-a-tweet-can-be-copyright-infringement-court-rules-180216/

Nowadays it’s fairly common for blogs and news sites to embed content posted by third parties, ranging from YouTube videos to tweets.

Although these publications don’t host the content themselves, they can be held liable for copyright infringement, a New York federal court has ruled.

The case in question was filed by Justin Goldman whose photo of Tom Brady went viral after he posted it on Snapchat. After being reposted on Reddit, it also made its way onto Twitter from where various news organizations picked it up.

Several of these news sites reported on the photo by embedding tweets from others. However, since Goldman never gave permission to display his photo, he went on to sue the likes of Breitbart, Time, Vox and Yahoo, for copyright infringement.

In their defense, the news organizations argued that they did nothing wrong as no content was hosted on their servers. They referred to the so-called “server test” that was applied in several related cases in the past, which determined that liability rests on the party that hosts the infringing content.

In an order that was just issued, US District Court Judge Katherine Forrest disagrees. She rejects the “server test” argument and rules that the news organizations are liable.

“[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result,” Judge Forrest writes.

Judge Forrest argues that the server test was established in the ‘Perfect 10 v. Amazon’ case, which dealt with the ‘distribution’ of content. This case is about ‘displaying’ an infringing work instead, an area where the jurisprudence is not as clear.

“The Court agrees with plaintiff. The plain language of the Copyright Act, the legislative history undergirding its enactment, and subsequent Supreme Court jurisprudence provide no basis for a rule that allows the physical location or possession of an image to determine who may or may not have “displayed” a work within the meaning of the Copyright Act.”

As a result, summary judgment was granted in favor of Goldman.

Rightsholders, including Getty Images which supported Goldman, are happy with the result. However, not everyone is pleased. The Electronic Frontier Foundation (EFF) says that if the current verdict stands it will put millions of regular Internet users at risk.

“Rejecting years of settled precedent, a federal court in New York has ruled that you could infringe copyright simply by embedding a tweet in a web page,” EFF comments.

“Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.”

Given what’s at stake, it’s likely that the news organizations will appeal this week’s order.

Interestingly, earlier this week a California district court dismissed Playboy’s copyright infringement complaint against Boing Boing, which embedded a YouTube video that contained infringing content.

A copy of Judge Forrest’s opinion can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pirates Crack Microsoft’s UWP Protection, Five Layers of DRM Defeated

Post Syndicated from Andy original https://torrentfreak.com/pirates-crack-microsofts-uwp-protection-five-layers-of-drm-defeated-180215/

Microsoft’s Universal Windows Platform (UWP) is a system that enables software developers to create applications that can run across many devices.

“The Universal Windows Platform (UWP) is the app platform for Windows 10. You can develop apps for UWP with just one API set, one app package, and one store to reach all Windows 10 devices – PC, tablet, phone, Xbox, HoloLens, Surface Hub and more,” Microsoft explains.

While the benefits of such a system are immediately apparent, critics say that UWP gives Microsoft an awful lot of control, not least since UWP software must be distributed via the Windows Store with Microsoft taking a cut.

Or that was the plan, at least.

Last evening it became clear that the UWP system, previously believed to be uncrackable, had fallen to pirates. After being released on October 31, 2017, the somewhat underwhelming Zoo Tycoon Ultimate Animal Collection became the first victim at the hands of popular scene group, CODEX.

“This is the first scene release of a UWP (Universal Windows Platform) game. Therefore we would like to point out that it will of course only work on Windows 10. This particular game requires Windows 10 version 1607 or newer,” the group said in its release notes.

CODEX release notes

CODEX says it’s important that the game isn’t allowed to communicate with the Internet so the group advises users to block the game’s executable in their firewall.

While that’s not a particularly unusual instruction, CODEX did reveal that various layers of protection had to be bypassed to make the game work. They’re listed by the group as MSStore, UWP, EAppX, XBLive, and Arxan, the latter being an anti-tamper system.

“It’s the equivalent of Denuvo (without the DRM License part),” cracker Voksi previously explained. “It still bloats the executable with useless virtual machines that only slow down your game.”

Arxan features

Arxan’s marketing comes off as extremely confident but may need amending in light of yesterday’s developments.

“Arxan uses code protection against reverse-engineering, key and data protection to secure servers and fortification of game logic to stop the bad guys from tampering. Sorry hackers, game over,” the company’s marketing reads.

What is unclear at this stage is whether Zoo Tycoon Ultimate Animal Collection represents a typical UWP release or if some particular flaw allowed CODEX to take it apart. The possibility of additional releases is certainly a tantalizing one for pirates but how long they will have to wait is unknown.

Whatever the outcome, Arxan calling “game over” is perhaps a little premature under the circumstances but in this continuing arms race, they probably have another version of their anti-tamper tech up their sleeves…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As a best practice to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I will show you how to use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume you are using an Amazon EC2 instance running Amazon Linux, launched from an Amazon Linux Amazon Machine Image (AMI).

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on Amazon EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?
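
For example, once an instance is registered with Systems Manager (as described later in this post), you can use Run Command to execute an ad-hoc command on it. The instance ID and the command below are only placeholders for illustration.

$ aws ssm send-command --document-name "AWS-RunShellScript" --targets "Key=InstanceIds,Values=YourInstanceId" --parameters 'commands=["uptime"]'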

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC

Later in this post, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the IAM user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user assigning tasks to pass its own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will be using the AWS CLI for most of the steps in this blog post. Our documentation shows you how to get started with the AWS CLI. Make sure the AWS CLI is installed and configured with an AWS access key and secret access key that belong to an IAM user that has the following AWS managed policies attached: AmazonEC2FullAccess and AmazonSSMFullAccess.
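
If those managed policies are not yet attached to your IAM user, you can attach them from the AWS CLI; the user name below is a placeholder for the IAM user you are using for this post.

$ aws iam attach-user-policy --user-name YourIAMUser --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam attach-user-policy --user-name YourIAMUser --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess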

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instances so that you can add your instances to a Systems Manager maintenance window based on tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured managed policy for this role, called AmazonEC2RoleforSSM.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM that has the AWS managed policy AmazonEC2RoleforSSM attached to it. This generates JSON-based output that describes the role and its parameters, if the command is successful.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Depending on whether you are launching a new instance or attaching the instance profile to an existing one, do one of the following:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId

At this point, you now have at least one Amazon EC2 instance you can use to configure Systems Manager.

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.
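
For example, you can confirm the agent is running by looking at the end of its log file over SSH. The exact log file name can vary with the agent version; on Amazon Linux it is typically amazon-ssm-agent.log.

$ sudo tail -n 20 /var/log/amazon/ssm/amazon-ssm-agent.log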

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole that has the AWS managed policy, AmazonSSMMaintenanceWindowRole, attached to it. This command generates JSON-based output that describes the role and its parameters, if the command is successful.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonSSMMaintenanceWindowRole) to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram shown earlier in this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you can determine the approved patches that will be included in your patch baseline. In this example, you add all Critical severity patches to the patch baseline as soon as they are released, by setting the Auto approval delay to 0 days. By setting the Auto approval delay to 2 days, you add to this patch baseline the Important, Medium, and Low severity patches two days after they are released.
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }

C.  Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with the AWS-provided patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The command shown after this list uses the following options.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}

D.  Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns, in JSON format, the number of instances that are compliant and the number that are noncompliant for each compliance category.
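
To drill down into a single instance, you can also list its individual compliance items; the instance ID below is a placeholder.

$ aws ssm list-compliance-items --resource-ids YourInstanceId --resource-types ManagedInstance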

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. You will see a visual representation of how many Amazon EC2 instances are up to date, how many Amazon EC2 instances are noncompliant, and how many Amazon EC2 instances are compliant in relation to the earlier defined patch baseline.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

Pirate Streaming Search Engine Exploits Crunchyroll Vulnerability

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-streaming-search-engine-exploits-crunchyroll-vulnerability-180213/

With 20 million members around the world, Crunchyroll is one of the largest on-demand streaming platforms for anime and manga content.

Much like Hollywood, the site has competition from pirate streaming sites which offer their content without permission. These usually stream pirated videos which are hosted on external sites.

However, this week Crunchyroll is facing a more direct attack. The people behind the new streaming meta-search engine StreamCR say they’ve found a way to stream the site’s content from its own servers, without paying.

“This works due to a vulnerability in the Crunchyroll system,” StreamCR’s operators tell TorrentFreak.

Simply put, StreamCR uses an active Crunchyroll account to locate the video streams and embeds them on its own website. This allows people to access Crunchyroll videos in the best quality without paying.

“This gives access to the full library in the region of our server, retrieving it as long as we’re not bound by the regular regional restriction. For this, we pick a US server as American Crunchyroll has the most library of content.”

Stream in various qualities

The exploit was developed in-house, the StreamCR team informs us. While it works fine at the moment the team realizes that this may not last forever, as Crunchyroll might eventually patch the vulnerability.

However, the meta-search engine will have made its point by then.

“We expect them to fix this. Why wouldn’t they? In the meantime, this can demonstrate how vulnerable Crunchyroll is at the moment,” they tell us.

The site’s ultimate plan is to become the go-to search engine for people looking to stream all kinds of pirated videos. In addition to Crunchyroll, StreamCR also indexes various pirate sites, including YesMovies, Gomovies, and 9anime.

“StreamCR’s goal is to let people access streams with ease from a universal site, we’re trying to have a Google-like experience for finding online streams,” they say.

TorrentFreak reached out to Crunchyroll asking for a comment on the issue, but at the time of publication, we have yet to hear back.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Amazon Relational Database Service – Looking Back at 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-relational-database-service-looking-back-at-2017/

The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!

Certification & Security

Features

Engine Versions & Features

Regional Support

Instance Support

Price Reductions

And That’s a Wrap
I’m pretty sure that’s everything. As you can see, 2017 was quite the year! I can’t wait to see what the team delivers in 2018.

Jeff;

 

Hosting Provider Steadfast Maintains DMCA Safe Harbor Defense For Trial

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-maintains-dmca-safe-harbor-defense-for-trial-180212/

Two years ago, adult entertainment publisher ALS Scan dragged several third-party Internet services to court.

The company targeted several companies including CDN provider CloudFlare and the Chicago-based hosting company Steadfast, accusing them of copyright infringement because they offered services to pirate sites.

The case against Steadfast is getting close to trial and to start with an advantage, ALS Scan recently asked the court for partial summary judgment, determining that the hosting company contributed to copyright infringement and that it has no safe harbor protection.

ALS argued that Steadfast refused to shut down the servers of the image sharing platform Imagebam.com, which was operated by its client Flixya. ALS Scan described the site as a repeat offender, as it had been targeted with dozens of DMCA notices, and accused Steadfast of turning a blind eye to the situation.

Steadfast, for its part, fiercely denied the allegations. The hosting provider admitted that it leased servers to Flixya for ten years but said that it forwarded all notices to its client. The hosting company could not address individual infringements, other than shutting down the entire site, which would have been disproportionate in their view.

A few days ago California District Court Judge George Wu ruled on the matter, denying ALS’s motion for summary judgment.

Both sides made sensible arguments on the contributory infringement issue, but it is by no means undisputed that the hosting provider ‘contributed’ to the infringing activities. The court, therefore, left this question open for the jury to determine at trial.

“Ultimately, both sides have raised triable issues of fact with respect to material contribution. As a result, the Court would deny Plaintiff’s Motion,” Judge Wu writes.

ALS also sought summary judgment on the DMCA safe harbor protection issue, but the court denied this request as well. While it’s clear that the hosting company never terminated a customer for repeat infringements, it’s not clear whether it was ever in a situation where it needed to.

The DMCA requires Internet services to implement a meaningful repeat infringer policy, but in this case, Steadfast’s client Imagebam reportedly had a takedown policy of its own, which complicates the issue.

“While the fact Steadfast has never terminated one of its own customers for infringement is potentially damaging to its ability to fit the safe harbor, Plaintiff has not established that Steadfast faced a situation requiring it to terminate one of its users,” Judge Wu writes.

“Even in the present case it is unclear that Steadfast needed to terminate Flixya’s account given Flixya itself had a policy that was arguably successful at removing infringing images from imagebam.com.”

Judge Wu adds that safe harbor defenses are generally left to the jury, and this is what he decided as well.

As a result, ALS’s entire motion for summary judgment is denied. This is good news for Steadfast, which will have its safe harbor defense available at the upcoming trial. However, the company will likely celebrate this win with caution, as the ultimate decision rests with the jury.

A copy of the court’s order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottleneck of infrastructure resources, capital expense, and operational costs have been significantly reduced or have totally gone away. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. My system then routes it to be stored where I need on Amazon S3 by Amazon Kinesis Data Firehose. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.

Setting up a Kinesis stream can be done with a few clicks, and the same for Kinesis Firehose. Firehose can be configured to automatically consume data from a Kinesis Data Stream, and then write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console.
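
The same trigger can be created from the AWS CLI by defining an event source mapping; the function name and stream ARN below are placeholders.

$ aws lambda create-event-source-mapping --function-name YourPreprocessorFunction --event-source-arn arn:aws:kinesis:us-east-1:123456789012:stream/YourStream --starting-position LATEST --batch-size 100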

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON data using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received but compressed, encrypted, and optimized for reading.
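
For a quick test, a single record can be pushed into a stream straight from the CLI; the stream name, partition key, and payload here are purely illustrative (newer CLI versions may expect the data to be base64-encoded).

$ aws kinesis put-record --stream-name YourStream --partition-key player-123 --data '{"event":"login","timestamp":1518422400}'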

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.
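
An ad hoc query against a table in the Glue Data Catalog can also be submitted from the CLI; the table, database, and S3 results location below are placeholders.

$ aws athena start-query-execution --query-string "SELECT * FROM your_table LIMIT 10" --query-execution-context Database=your_glue_database --result-configuration OutputLocation=s3://your-athena-results/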

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable for our use as our production environments. When the architecture was in place, scaling out was either completely handled by the service, or a matter of a simple API call, and crucially doesn’t require me to change one line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products. He is now the Head of Big Data and BI at Cerberus Technologies.

 

 

 

Cloudflare Hit With Piracy Lawsuit After Abuse Form ‘Fails’

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-hit-with-piracy-lawsuit-after-abuse-form-fails-180210/

Seattle-based artist Christopher Boffoli is no stranger when it comes to suing tech companies for aiding copyright infringement of his work.

Boffoli has filed lawsuits against Imgur, Twitter, Pinterest, Google, and others, which were dismissed and/or settled out of court under undisclosed terms.

This month he filed a new case against another intermediary, Cloudflare, which has had its fair share of piracy allegations in recent years.

In common with other companies, Cloudflare is accused of contributing to copyright infringements of Boffoli’s “Big Appetites” miniatures series. In this case, several Cloudflare customers allegedly posted these photos on their sites which were then reproduced on the servers of the CDN provider.

The lawsuit mentions that the infringing copies were posted on unique-landscape.com and baklol.com. This was also pointed out to Cloudflare by Boffoli, who sent the company DMCA takedown notices in October and November of last year.

While the photographer received an automated response, the photos in question remained online. Through the lawsuit, Boffoli hopes this will change.

“CloudFlare induced, caused, or materially contributed to the Infringing Websites’ publication,” the complaint reads. “CloudFlare had actual knowledge of the Infringing Content. Boffoli provided notice to CloudFlare in compliance with the DMCA, and CloudFlare failed to disable access to or remove the Infringing Websites.”

The photographer is asking the court to order an injunction preventing Cloudflare from making his work available. In addition, the complaint asks for actual and statutory damages for willful copyright infringement. With at least four photos in the lawsuit, the potential damages are more than half a million dollars.

While it’s not mentioned in the complaint, the email communication between Boffoli and Cloudflare goes further than just an automated response. Court records show that the photographer initially didn’t ask Cloudflare to remove the infringing photos. Instead, he asked the CDN provider to forward them to the ISP or site owner.

“I would be grateful if you would forward this DMCA takedown request to the website owner and ISP so these infringing links can immediately be removed,” it read.

Part of the email communication

From then on things escalated a bit. The emails reveal that Boffoli had trouble reporting the infringing photos through the required form.

When the photographer pointed this out in a direct email, Cloudflare urged him to try the form again as that was the only way to send the DMCA request to the designated copyright agent.

“The DMCA doesn’t require us to process reports not sent to our registered agent as per our registration with the US Copyright Office. Our registered copyright agent is the form located at cloudflare.com/abuse/form and you may proceed via that avenue,” Cloudflare wrote.

If the case moves forward, Cloudflare may use this to argue that it never received a proper DMCA takedown notice. However, Boffoli wasn’t planning on trying again and instead threatened a lawsuit, unless Cloudflare took immediate action.

“As I have said, your form did not work for me despite repeated attempts to use it. And it is insulting for you to suggest that it’s working fine when it is not. So again, this is absolutely my last attempt to get you to respond to this infringement for which you are impeding the removal,” Boffoli wrote.

“If you take no action now I will forward this to my legal team this week. It is more than enough of a burden to have to waste countless hours policing my own copyrights without organizations like Cloudflare running interference for copyright infringers. I am not averse to asking a federal judge to compel you to deal with these copyright infringements. And I will seek statutory damages for contributory infringement at that time.”

As it turns out, that was not an idle threat.

—-

A copy of the complaint is available here (pdf) and the email exhibits can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Voksi Releases Detailed Denuvo-Cracking Video Tutorial

Post Syndicated from Andy original https://torrentfreak.com/voksi-releases-detailed-denuvo-cracking-video-tutorial-180210/

Earlier this week, version 4.9 of the Denuvo anti-tamper system, which had protected Assassins Creed Origin for the past several months, was defeated by Italian cracking group CPY.

While Denuvo would probably paint four months of protection as a success, the company would certainly have preferred for things to have gone on a bit longer, not least following publisher Ubisoft’s decision to use VMProtect technology on top.

But while CPY do their thing in Italy there’s another rival whittling away at whatever the giants at Denuvo (and new owner Irdeto) can come up with. The cracker – known only as Voksi – hails from Bulgaria and this week he took the unusual step of releasing a 90-minute video (embedded below) in which he details how to defeat Denuvo’s V4 anti-tamper technology.

The video is not for the faint-hearted so those with an aversion to issues of a highly technical nature might feel the urge to look away. However, it may surprise readers to learn that not so long ago, Voksi knew absolutely nothing about coding.

“You will find this very funny and unbelievable,” Voksi says, recalling the events of 2012.

“There was one game called Sanctum and on one free [play] weekend [on Steam], I and my best friend played through it and saw how great the cooperative action was. When the free weekend was over, we wanted to keep playing, but we didn’t have any money to buy the game.

“So, I started to look for alternative ways, LAN emulators, anything! Then I decided I need to crack it. That’s how I got into reverse engineering. I started watching some shitty YouTube videos with bad quality and doing some tutorials. Then I found about Steam exploits and that’s how I got into making Steamworks fixes, allowing cracked multiplayer between players.”

Voksi says his entire cracking career began with this one indie game and his desire to play it with his best friend. Prior to that, he had absolutely no experience at all. He says he’s taken no university courses or any course at all for that matter. Everything he knows has come from material he’s found online. But the intrigue doesn’t stop there.

“I don’t even know how to code properly in high-level language like C#, C++, etc. But I understand assembly [language] perfectly fine,” he explains.

For those who code, that’s generally a little bit back to front, with low-level languages usually posing the most difficulties. But Voksi says that with assembly, everything “just clicked.”

Of course, it’s been six years since the 21-year-old was first motivated to crack a game due to lack of funds. In the more than half decade since, have his motivations changed at all? Is it the thrill of solving the puzzle or are there other factors at play?

“I just developed an urge to provide paid stuff for free for people who can’t afford it and specifically, co-op and multiplayer cracks. Of course, I’m not saying don’t support the developers if you have the money and like the game. You should do that,” he says.

“The challenge of cracking also motivates me, especially with an abomination like Denuvo. It is pure cancer for the gaming industry, it doesn’t help and it only causes issues for the paying customers.”

Those who follow Voksi online will know that as well as being known in his own right, he’s part of the REVOLT group, a collective that has Voksi’s core interests and goals as their own.

“REVOLT started as a group with one and only goal – to provide multiplayer support for cracked games. No other group was doing it until that day. It was founded by several members, from which I’m currently the only one active, still releasing cracks.

“Our great achievements are in first place, of course, cracking Denuvo V4, making us one of the four groups/people who were able to break the protection. In second place are our online fixes for several AAA games, allowing you to play on legit servers with legit players. In third place, our ordinary Steamworks fixes allowing you to play multiplayer between cracked users.”

In communities like /r/crackwatch on Reddit and those less accessible, Voksi and others doing similar work are often held up as Internet heroes, cracking games in order to give the masses access to something that might’ve been otherwise inaccessible. But how does this fame sit with him?

“Well, I don’t see myself as a hero, just another ordinary person doing what he loves. I love seeing people happy because of my work, that’s also a big motivation, but nothing more than that,” he says.

Finally, what’s up next for Voksi and what are his hopes for the rest of the year?

“In an ideal world, Denuvo would die. As for me, I don’t know, time will tell,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Migrating Your Amazon ECS Containers to AWS Fargate

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-containers-to-aws-fargate/

AWS Fargate is a new technology that works with Amazon Elastic Container Service (ECS) to run containers without having to manage servers or clusters. What does this mean? With Fargate, you no longer need to provision or manage a single virtual machine; you can just create tasks and run them directly!

Fargate uses the same API actions as ECS, so you can use the ECS console, the AWS CLI, or the ECS CLI. I recommend running through the first-run experience for Fargate even if you’re familiar with ECS. It creates all of the one-time setup requirements, such as the necessary IAM roles. If you’re using a CLI, make sure to upgrade to the latest version.

In this blog, you will see how to migrate ECS containers from running on Amazon EC2 to Fargate.

Getting started

Note: Anything with code blocks is a change in the task definition file. Screen captures are from the console. Additionally, Fargate is currently available in the us-east-1 (N. Virginia) region.

Launch type

When you create tasks (grouping of containers) and clusters (grouping of tasks), you now have two launch type options: EC2 and Fargate. The default launch type, EC2, is ECS as you knew it before the announcement of Fargate. You need to specify Fargate as the launch type when running a Fargate task.
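
From the AWS CLI, that looks roughly like the following; the cluster, task definition, subnet, and security group are placeholders, and the network configuration is explained in the Networking section below.

$ aws ecs run-task --cluster your-cluster --launch-type FARGATE --task-definition your-task:1 --count 1 --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}"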

Even though Fargate abstracts away virtual machines, tasks still must be launched into a cluster. With Fargate, clusters are a logical infrastructure and permissions boundary that allow you to isolate and manage groups of tasks. ECS also supports heterogeneous clusters that are made up of tasks running on both EC2 and Fargate launch types.

The optional new requiresCompatibilities parameter, with FARGATE in the field, ensures that your task definition only passes validation if you include Fargate-compatible parameters. Tasks can be flagged as compatible with EC2, Fargate, or both.

"requiresCompatibilities": [
    "FARGATE"
]

Networking

"networkMode": "awsvpc"

In November, we announced the addition of task networking with the network mode awsvpc. By default, ECS uses the bridge network mode. Fargate requires using the awsvpc network mode.

In bridge mode, all of your tasks running on the same instance share the instance’s elastic network interface, which is a virtual network interface, IP address, and security groups.

The awsvpc mode provides this networking support to your tasks natively. You now get the same VPC networking and security controls at the task level that were previously only available with EC2 instances. Each task gets its own elastic network interface and IP address so that multiple applications or copies of a single application can run on the same port number without any conflicts.

The awsvpc mode also provides a separation of responsibility for tasks. You can get complete control of task placement within your own VPCs, subnets, and the security policies associated with them, even though the underlying infrastructure is managed by Fargate. Also, you can assign different security groups to each task, which gives you more fine-grained security. You can give an application only the permissions it needs.

"portMappings": [
    {
        "containerPort": "3000"
    }
 ]

What else has to change? First, you only specify a containerPort value, not a hostPort value, as there is no host to manage. Your container port is the port that you access on your elastic network interface IP address. Therefore, your container ports in a single task definition file need to be unique.

"environment": [
    {
        "name": "WORDPRESS_DB_HOST",
        "value": "127.0.0.1:3306"
    }
 ]

Additionally, links are not allowed as they are a property of the “bridge” network mode (and are now a legacy feature of Docker). Instead, containers share a network namespace and communicate with each other over the localhost interface. They can be referenced using the following:

localhost/127.0.0.1:<some_port_number>

CPU and memory

"memory": "1024",
 "cpu": "256"

"memory": "1gb",
 "cpu": ".25vcpu"

When launching a task with the EC2 launch type, task performance is influenced by the instance types that you select for your cluster combined with your task definition. If you pick larger instances, your applications make use of the extra resources if there is no contention.

In Fargate, there are no instance types to pick, so we needed another way to specify resources and created task-level resources. Task-level resources define the maximum amount of memory and CPU that your task can consume.

  • memory can be defined in MB with just the number, or in GB, for example, “1024” or “1gb”.
  • cpu can be defined as the number or in vCPUs, for example, “256” or “.25vcpu”.
    • vCPUs are virtual CPUs. You can look at the memory and vCPUs for instance types to get an idea of what you may have used before.

The memory and CPU options available with Fargate are:

CPU Memory
256 (.25 vCPU) 0.5GB, 1GB, 2GB
512 (.5 vCPU) 1GB, 2GB, 3GB, 4GB
1024 (1 vCPU) 2GB, 3GB, 4GB, 5GB, 6GB, 7GB, 8GB
2048 (2 vCPU) Between 4GB and 16GB in 1GB increments
4096 (4 vCPU) Between 8GB and 30GB in 1GB increments

IAM roles

Because Fargate uses awsvpc mode, you need an Amazon ECS service-linked IAM role named AWSServiceRoleForECS. It provides Fargate with the needed permissions, such as the permission to attach an elastic network interface to your task. After you create your service-linked IAM role, you can delete the remaining roles in your services.
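
The console typically creates this role the first time you use ECS; if it doesn’t exist in your account yet, a single CLI call is enough to create it:

aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com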

"executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole"

With the EC2 launch type, an instance role gives the agent the ability to pull, publish, talk to ECS, and so on. With Fargate, the task execution IAM role is only needed if you’re pulling from Amazon ECR or publishing data to Amazon CloudWatch Logs.

The Fargate first-run experience tutorial in the console automatically creates these roles for you.
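
If you script the setup instead, a rough equivalent for the task execution role is to create it and attach the AWS managed policy yourself (the role name matches the ARN above; the inline trust policy is a minimal sketch):

aws iam create-role \
--role-name ecsTaskExecutionRole \
--assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

aws iam attach-role-policy \
--role-name ecsTaskExecutionRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy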

Volumes

Fargate currently supports non-persistent, empty data volumes for containers. When you define your container, you no longer use the host field and only specify a name.
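
As a minimal sketch, the two relevant task definition fragments could look like the following (the volume name and container path are just examples): a task-level volumes entry with only a name, and a mountPoints entry in the container definition that references it.

"volumes": [
    {
        "name": "scratch-data"
    }
]

"mountPoints": [
    {
        "sourceVolume": "scratch-data",
        "containerPath": "/scratch"
    }
]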

Load balancers

For awsvpc mode, and therefore for Fargate, use the IP target type instead of the instance target type. You define this in the Amazon EC2 console when you create the target group for your load balancer.
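
With the AWS CLI, creating such a target group could look roughly like this (the name, protocol, port, and VPC ID are placeholders):

aws elbv2 create-target-group \
--name my-fargate-targets \
--protocol HTTP \
--port 80 \
--vpc-id vpc-0123456789abcdef0 \
--target-type ip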

If you’re using a Classic Load Balancer, change it to an Application Load Balancer or a Network Load Balancer.

Tip: If you are using an Application Load Balancer, make sure that your tasks are launched in the same VPC and Availability Zones as your load balancer.

Let’s migrate a task definition!

Here is an example NGINX task definition. This type of task definition is what you’re used to if you created one before Fargate was announced. It’s what you would run now with the EC2 launch type.

{
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "hostPort": "80",
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "family": "nginx-ec2"
}

OK, so now what do you need to do to change it to run with the Fargate launch type?

  • Add FARGATE for requiresCompatibilities (not required, but a good safety check for your task definition).
  • Use awsvpc as the network mode.
  • Just specify the containerPort (the hostPort value, if specified, must be the same).
  • Add a task executionRoleARN value to allow logging to CloudWatch.
  • Provide cpu and memory limits for the task.
{
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole",
    "family": "nginx-fargate",
    "memory": "512",
    "cpu": "256"
}
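
With those changes saved to a file (the name here is just an example), you could register the new task definition and then run it with the Fargate launch type, as in the run-task example earlier:

aws ecs register-task-definition --cli-input-json file://nginx-fargate.json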

Are there more examples?

Yep! Head to the AWS Samples GitHub repo. We have several sample task definitions you can try for both the EC2 and Fargate launch types. Contributions are very welcome too :).

 

tiffany jernigan
@tiffanyfayj

Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.

In this post, we will discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples. You will then have the desired permissions in a multi-tenant environment. We also demo Amazon S3 access from HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, and they are defined in the Active Directory:

To enable all these groups and users to share the EMR cluster, you need to define the following IAM roles:

In this case, you create a separate Amazon EC2 role that doesn’t give any permission to Amazon S3. Let’s call the role the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. Then, you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback role when the user doesn’t match any user or group mapping and isn’t trying to access any of the Amazon S3 prefixes defined on the list.

Important: For all other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as the trusted entity so that it can assume other roles. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you use an EMR security configuration. Here is the configuration we use in this post.
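
The role mappings go in the EMRFS section of the security configuration. The following is a rough sketch of that section only (the Kerberos settings are omitted here, and the role ARNs simply reuse the account ID and role names from the examples above):

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
          "IdentifierType": "User",
          "Identifiers": [ "admin1" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_science" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_engineering" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
          "IdentifierType": "Prefix",
          "Identifiers": [ "s3://emrfs-auth-default-bucket-demo/" ]
        }
      ]
    }
  }
}

If you prefer to script the setup, a security configuration saved as, for example, security-config.json (containing both the Kerberos settings and the mappings above) can be registered from the AWS CLI:

aws emr create-security-configuration \
--name MyKerberosConfig \
--security-configuration file://security-config.json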

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role emrfs_auth_user_role_admin_user on the mapping list, which is a user role, is mapped and picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineer group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the grp_data_engineering group, the group role emrfs_auth_group_role_data_eng is assumed, and the user has proper access to Amazon S3 that is defined in the emrfs_auth_group_role_data_eng role.

Next, the third user comes, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, then the prefix mapping is in effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the EMR_EC2_RestrictedRole. This role is assumed by the EC2 instances in the cluster.

In this process, all the defined mappings are evaluated in order; the first role that matches is assumed, and the rest of the list is skipped.

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it supports LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created users such as admin1, dataeng1, and datascientist1, and groups such as grp_data_engineering and grp_data_science, and then added the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The following is an example of the EMR security configuration with a trusted Active Directory domain EMRKRB.TEST.COM and the EMRFS role mappings as we discussed earlier:

The EMRFS role mapping configuration is shown in this example:

We will also provide an example AWS CLI command that you can run.

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }
]

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step 1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except in the Security Options section. In Step 4: Security, under Permissions, choose Custom, and then choose the EMR_EC2_RestrictedRole role that you created before.

Choose the appropriate subnets (these should meet the base requirement in order for a successful Active Directory join—see the Amazon EMR Management Guide for more details), and choose the appropriate security groups to make sure it talks to the Active Directory. Choose a key so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
--release-label emr-5.10.0 \
--instance-type m3.xlarge \
--instance-count 3 \
--ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
--service-role EMR_DefaultRole \
--security-configuration MyKerberosConfig \
--configurations file://hue-config.json \
--applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
--kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the server where you run the CLI command.

After the cluster gets into the Waiting state, try to connect by using SSH into the cluster using the Active Directory user name and password.

ssh -l <ad_user_name>@emrkrb.test.com <EMR IP or DNS name>

Quickly run two commands to show that the Active Directory join is successful:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.

Now, you can test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineer group.

Retry the previous command to access the admin’s bucket. It should throw an Amazon S3 Access Denied exception.

When you try listing the Amazon S3 bucket that the data engineer group has access to, it triggers the group mapping.

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next we will test Apache Hive and then Apache Spark.

 

To run jobs successfully, you need to create a home directory for every user in HDFS for staging data under /user/<username>. Users can configure a step to create a home directory at cluster launch time for every user who has access to the cluster. In this example, you use Hue since Hue will create the home directory in HDFS for the user at the first login. Here Hue also needs to be integrated with the same Active Directory as explained in the example configuration described earlier.

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.
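
The exact query isn’t critical; a minimal sketch of such a statement (the table and column names here are just assumptions) could be:

CREATE EXTERNAL TABLE IF NOT EXISTS table1_data_eng (
  id INT,
  value STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/';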

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.

The creation is successful.

The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node using the datascientist1 user:

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try select on both tables. You will find that you can only select the table defined in the location that your group has access to.
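
A rough outline of that session (test2_datasci_tb comes from the step above; table1_data_eng is the assumed data engineering table name from earlier, and selecting from it should fail with an Amazon S3 Access Denied error for this user):

spark-sql
spark-sql> show tables;
spark-sql> select * from test2_datasci_tb limit 10;
spark-sql> select * from table1_data_eng limit 10;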

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing flexibility to configure a shared cluster for different teams to achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provides better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them Big Data & Machine Learning solutions and best practices on the Amazon cloud.

 

 

 

 

All-In on Unlimited Backup

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/all-in-on-unlimited-backup/

chips on computer with cloud backup

The cloud backup industry has seen its share of tumultuousness. BitCasa, Dell DataSafe, Xdrive, and a dozen others have closed up shop. Mozy, Amazon, and Microsoft offered, but later canceled, their unlimited offerings. Recently, CrashPlan for Home customers were notified that their service was being end-of-lifed. Then today we’ve heard from Carbonite customers who are frustrated by this morning’s announcement of a price increase from Carbonite.

We believe that the fundamental goal of a cloud backup is having peace-of-mind: knowing your data — all of it — is safe. For over 10 years Backblaze has been providing that peace-of-mind by offering completely unlimited cloud backup to our customers. And we continue to be committed to that. Knowing that your cloud backup vendor is not going to disappear or fundamentally change their service is an essential element in achieving that peace-of-mind.

Committed to Unlimited Backup

When Mozy discontinued their unlimited backup on Jan 31, 2011, a lot of people asked, “Does this mean Backblaze will discontinue theirs as well?” At that time I wrote the blog post Backblaze is committed to unlimited backup. That was seven years ago. Since then we’ve continued to make Backblaze cloud backup better: dramatically speeding up backups and restores, offering the unique and very popular Restore Return Refund program, enabling direct access and sharing of any file in your backup, and more. We also introduced Backblaze Groups to enable businesses and families to manage backups — all at no additional cost.

How That’s Possible

I’d like to answer the question of “How have you been able to do this when others haven’t?”

First, commitment. It’s not impossible to offer unlimited cloud backup, but it’s not easy. The Backblaze team has been committed to unlimited as a core tenet.

Second, we have pursued the technical, business, and cultural steps required to make it happen. We’ve designed our own servers, written our cloud storage software, run our own operations, and been continually focused on every place we could optimize a penny out of the cost of storage. We’ve built a culture at Backblaze that cares deeply about that.

Ensuring Peace-of-Mind

Price increases and plan changes happen in our industry, but Backblaze has consistently been the low price leader, and continues to stand by the foundational element of our service — truly unlimited backup storage. Carbonite just announced a price increase from $60 to $72/year, and while that’s not an astronomical increase, it’s important to keep in mind the service that they are providing at that rate. The basic Carbonite plan provides a service that doesn’t back up videos or external hard drives by default. We think that’s dangerous. No one wants to discover that their videos weren’t backed up after their computer dies, or have to worry about the safety and durability of their data. That is why we have continued to build on our foundation of unlimited, as well as making our service faster and more accessible. All of these serve the goal of ensuring peace-of-mind for our customers.

3 Months Free For You & A Friend

As part of our commitment to unlimited, refer your friends to receive three months of Backblaze service through March 15, 2018. When you Refer-a-Friend with your personal referral link, and they subscribe, both of you will receive three months of service added to your account. See promotion details on our Refer-a-Friend page.

Want A Reminder When Your Carbonite Subscription Runs Out?

If you’re considering switching from Carbonite, we’d love to be your new backup provider. Enter your email and the date you’d like to be reminded in the form below and you’ll get a friendly reminder email from us to start a new backup plan with Backblaze. Or, you could start a free trial today.

We think you’ll be glad you switched, and you’ll have a chance to experience some of that Backblaze peace-of-mind for your data.

Please Send Me a Reminder When I Need a New Backup Provider



 

The post All-In on Unlimited Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Anti-Piracy Video Scares Kids With ‘Fake’ Malware Info

Post Syndicated from Ernesto original https://torrentfreak.com/anti-piracy-video-scares-kids-with-fake-malware-info-180206/

Today is Safer Internet Day, a global awareness campaign to educate the public on all sorts of threats that people face online.

It is a laudable initiative supported by the Industry Trust for IP Awareness which, together with the children’s charity Into Film, has released an informative video and associated course materials.

The organizations have created a British version of an animation previously released as part of the Australian “Price of Piracy” campaign. While the video includes an informative description of the various types of malware, there appears to be a secondary agenda.

Strangely enough, the video itself contains no advice on how to avoid malware at all, other than to avoid pirate sites. In that sense, it looks more like an indirect anti-piracy ad.

While there’s no denying that kids might run into malware if they randomly click on pirate site ads, this problem is certainly not exclusive to these sites. Email and social media are frequently used to link to malware too, and YouTube comments can pose the same risk. The problem is everywhere.

What really caught our eye, however, is the statement that pirate sites are the most used propagation method for malware. “Did you know, the number one way we infect your device is via illegal pirate sites,” an animated piece of malware claims in the video.

Forget about email attachments, spam links, compromised servers, or even network attacks. Pirate sites are the number one spot through which malware spreads. According to the video at least. But where do they get this knowledge?

Meet the malwares

When we asked the Industry Trust for IP Awareness for further details, the organization checked with their Australian colleagues, who pointed us to a working paper (pdf) from 2014. This paper includes the following line: “Illegal streaming websites are now the number one propagation mechanism for malicious software as 97% of them contain malware.”

Unfortunately, there’s a lot wrong with this claim.

Through another citation, the 97% figure points to this unpublished study of which only the highlights were shared. This “malware” research looked at the prevalence of malware and other unwanted software linked to pirate sites. Not just streaming sites as the other paper said, but let’s ignore that last bit.

What the study actually found is that of the 30 researched pirate sites, “90% contained malware or other ‘Potentially Unwanted Programmes’.” Note that this is not the earlier mentioned 97%, and that this broad category not only includes malware but also popup ads, which were most popular. This means that the percentage of actual malware on these sites can be anywhere from 0.1% to 90%.

Importantly, none of the malware found in this research was installed without an action performed by the user, such as clicking on a flashy download button or installing a mysterious .exe file.

Aside from clearly erroneous references, the more worrying issue is that even the original incorrect statement that “97% of all pirate sites contain malware” provides no evidence for the claim in the video that pirate sites are “the number one way” through which malware spreads.

Even if 100% of all pirate sites link to malware, that’s no proof that it’s the most used propagation method.

The malware issue has been a popular talking point for a while, but after searching for answers for days, we couldn’t find a grain of evidence. There are a lot of malware propagation methods, including email, which traditionally is a very popular choice.

Even more confusingly, the same paper that was cited as a source for the pirate site malware claim notes that 80% of all web-based malware is hosted on “innocent” but compromised websites.

As the provided evidence gave no answers, we asked the experts to chime in. Luckily, security company Malwarebytes was willing to share its assessment. As leaders in the anti-malware industry, they should know better than researchers who have their numbers and terminology mixed up.

“These days, most common infections come from malicious spam campaigns and drive-by exploit attacks,” Adam Kujawa, Director of Malware Intelligence at Malwarebytes informs us.

“Torrent sites are still frequently used by criminals to host malware disguised as something the user wants, like an application, movie, etc. However they are really only a threat to people who use torrent sites regularly and those people have likely learned how to avoid malicious torrents,” he adds.

In other words, most people who regularly visit pirate sites know how to avoid these dangers. That doesn’t mean that they are not a threat to unsuspecting kids who visit them for the first time, of course.

“Now, if users who were not familiar with torrent and pirate sites started using these services, there is a high probability that they could encounter some kind of malware. However, many of these sites have user review processes to let other users know if a particular torrent or download is likely malicious.

“So, unless a user is completely new to this process and ignores all the warning signs, they could walk away from a pirate site without getting infected,” Kujawa says.

Overall, the experts at Malwarebytes see no evidence for the claim that pirate sites are the number one propagation method for malware.

“So in summary, I don’t think the claim that ‘pirate sites’ are the number one way to infect users is accurate at all,” Kujawa concludes.

While it’s always a good idea to avoid places that can have a high prevalence of malware, including pirate sites, the claims in the video are not backed up by real evidence. There are tens of thousands of non-pirate sites that pose similar or worse risks, so it’s always a good idea to have anti-malware and virus software installed.

The organizations and people involved in the British “Meet the Malwares” video might not have been aware of the doubtful claims, but it’s unfortunate that they didn’t opt for a broader campaign instead of the focused anti-piracy message.

Finally, since it’s still Safer Internet Day, we encourage kids to take a close look at the various guides on how to avoid “fake news” while engaging in critical thinking.

Be safe!

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Blizzard Targets Fan-Created ‘World of Warcraft’ Legacy Server

Post Syndicated from Ernesto original https://torrentfreak.com/blizzard-targets-fan-created-world-of-warcraft-legacy-server-180203/

Over the years video game developer Blizzard Entertainment has published many popular game titles, including World of Warcraft (WoW).

First released in 2004, the multiplayer online role-playing game has been a massive success. It holds the record for the most popular MMORPG in history, with over 100 million subscribers.

While the current game looks entirely different from its first release, there are many nostalgic gamers who still enjoy the earlier editions. Unfortunately, however, they can’t play them. At least not legally.

The only option WoW fans have is to go to unauthorized fan projects which recreate the early gaming experience, such as Light’s Hope.

“We are what’s known as a ‘Legacy Server’ project for World of Warcraft, which seeks to emulate the experience of playing the game in its earliest iterations, including advancing through early expansions,” the project explains.

“If you’ve ever wanted to see what World of Warcraft was like back in 2004 then this is the place to be. Our goal is to maintain the same feel and structure as the realms back then while maintaining an open platform for development and operation.”

In recent years the project has captured the hearts of tens of thousands of die-hard WoW fans. At the time of writing, the most popular realm has more than 6,000 people playing from all over the world. Blizzard, however, is less excited.

The company has asked the developer platform GitHub to remove the code repository published by Light’s Hope. Blizzard’s notice targets several SQL databases stating that the layout and structure is nearly identical to the early WoW databases.

“The LightsHope spell table has identical layout and typically identical field names as the table from early WoW. We use database tables to represent game data, like spells, in WoW,” Blizzard writes.

“In our code, we use .sql files to represent the data layout of each table […]. MaNGOS, the platform off of which Light’s Hope appears to be built, uses a similar structure. The LightsHope spell_template table matches almost exactly the layout and field names of early WoW client database tables.”

This takedown notice had some effect, as people now see a “repository unavailable due to DMCA takedown” message when they access it in their browser.

While this may slow down development temporarily, it appears that the server itself is still running just fine. There were some downtime reports earlier this week, but it’s unknown whether that was related.

In addition to the GitHub repository, the official Twitter account was also suspended recently.

TorrentFreak contacted both Blizzard and Light’s Hope earlier this week for a comment on the situation. At the time of publication, we haven’t heard back.

Blizzard’s takedown notice comes just weeks after several organizations and gaming fans asked the US Copyright Office to make a DMCA circumvention exemption for “abandoned” games, including older versions of popular MMORPGs.

While it’s possible that such an exemption is granted in the future, it’s unlikely to apply to the public at large. The more likely scenario is that it would permit libraries, researchers, and museums to operate servers for these abandoned games.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Cloudflare is Liable For Pirate Sites & Has No Safe Harbor, Publisher Says

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-is-liable-for-pirate-sites-and-has-no-safe-harbor-publisher-says-180201/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

This includes thousands of “pirate” sites, including the likes of The Pirate Bay, which rely on the U.S.-based company to keep server loads down.

Many rightsholders have complained about Cloudflare’s involvement with these sites and last year adult entertainment publisher ALS Scan took it a step further by dragging the company to court.

ALS accused the CDN service of various types of copyright and trademark infringement, noting that several customers used Cloudflare’s servers to distribute pirated content. While Cloudflare managed to have several counts dismissed, the accusation of contributory copyright infringement remains.

An upcoming trial could determine whether Cloudflare is liable or not, but ALS believes that this isn’t needed. This week, the publisher filed a request for partial summary judgment, asking the court to rule over the matter in advance of a trial.

“The evidence is undisputed,” ALS writes. “Cloudflare materially assists website operators in reproduction, distribution and display of copyrighted works, including infringing copies of ALS works. Cloudflare also masks information about pirate sites and their hosts.”

ALS anticipates that Cloudflare may argue that the company or its clients are protected by the DMCA’s safe harbor provision, but contests this claim. The publisher notes that none of the customers registered the required paperwork at the US Copyright Office.

“Cloudflare may say that the Cloudflare Customer Sites are themselves service providers entitled to DMCA protections, however, none have qualified for safe harbors by submitting the required notices to the US Copyright Office.”

Cloudflare itself has no safe harbor protection either, they argue, because it operates differently than a service provider as defined in the DMCA. It’s a “smart system” which also modifies content, instead of a “dumb pipe,” they claim.

In addition, the CDN provider is accused of failing to implement a reasonable policy that will terminate repeat offenders.

“Cloudflare has no available safe harbors. Even if any safe harbors apply, Cloudflare has lost such safe harbors for failure to adopt and reasonably implement a policy including termination of repeat infringers,” ALS writes.

Previously, the court clarified that under U.S. law the company can be held liable for caching content of copyright infringing websites. Cloudflare’s “infrastructure-level caching” cannot be seen as fair use, it ruled.

ALS now asks the court to issue a partial summary judgment ruling that Cloudflare is liable for contributory copyright infringement. If this motion is granted, a trial would only be needed to establish the damages amount.

The lawsuit is a crucial matter for Cloudflare, and not only because of the potential damages it faces in this case. If Cloudflare loses, other rightsholders are likely to make similar demands, forcing the company to actively police potential pirate sites.

Cloudflare will undoubtedly counter ALS’ claims in a future filing, so this case is far from over.

A copy of ALS Scan’s memorandum in support of the motion for partial summary judgment can be found here (pdf).

Researchers Use a Blockchain to Boost Anonymous Torrent Sharing

Post Syndicated from Ernesto original https://torrentfreak.com/researchers-use-a-blockchain-to-boost-anonymous-torrent-sharing-180129/

The Tribler client has been around for over a decade. We first covered it in 2006 and since then it’s developed into a truly decentralized BitTorrent client.

Even if all torrent sites were shut down today, Tribler users would still be able to find and add new content.

The project is not run by regular software developers but by a team of quality researchers at Delft University of Technology. There are currently more than 45 masters students, various thesis students, five dedicated scientific developers, and several professors involved.

Simply put, Tribler aims to make the torrent ecosystem truly decentralized and anonymous. A social network of peers that can survive even if all torrent sites ceased to exist.

“Search and download torrents with less worries or censorship,” Tribler’s tagline reads.

Like many other BitTorrent clients, Tribler has a search box at the top of the application. However, the search results that appear when users type in a keyword don’t come from a central index. Instead, they come directly from other peers.

Tribler’s search results

With the latest release, Tribler 7.0, the project adds another element to the mix, it’s very own blockchain. This blockchain keeps track of how much people are sharing and rewards them accordingly.

“Tribler is a torrent client for social people, who help each other. You can now earn tokens by helping others. It is specifically designed to prevent freeriding and detect hit-and-run peers,” Tribler leader Dr. Johan Pouwelse tells TF.

“You help other Tribler users by seeding and by enhancing their privacy. In return, you get faster downloads, as your tokens show you contribute to the community.”

Pouwelse, who aims to transform BitTorrent into an ethical Darknet, just presented the latest release at Stanford University. In addition, the Internet Engineering Task Force is also considering the blockchain implementation as an official Internet standard.

This recognition from academics and technology experts is welcome, of course, but Tribler’s true power comes from the users. The client has gathered a decent userbase over the years, but there is still plenty of room for improvement on this front.

The anonymity aspect is perhaps one of the biggest selling points and Pouwelse believes that this will greatly benefit from the blockchain implementation.

Tribler provides users with pseudo-anonymity by routing the transfers through other users. However, this means that the amount of bandwidth used by the application increases as well. Thus far, this hasn’t worked very well, which resulted in slow anonymous downloads.

“With the integrated blockchain release today we think we can start fixing the problem of both underseeded swarms and fast proxies,” Dr. Pouwelse says.

“Our solution is basically very simple, only social people get decent performance on Tribler. This means in a few years we will end up with only users that act nice. Others leave.”

Tribler’s trust stats

Tribler provides users with quite a bit of flexibility on the anonymity side. The feature can be turned off completely, or people can choose a protection layer ranging from one to four hops.

What’s also important to note is that users don’t operate as exit nodes by default. The IP addresses of the exit nodes are public outside the network and can be monitored, so that would only increase liability.

So who are the exit-nodes in this process then? According to Pouwelse’s rather colorful description, these appear to be volunteers that run their code through a VPN or on a VPS server.

“The past years we have created an army of bots we call ‘Self-replicating Autonomous Entities’. These are Terminator-style self-replicating pieces of code which have their own Bitcoin wallet to go out there and buy servers to run more copies of themselves,” he explains.

“They utilize very primitive genetic evolution to improve survival, buy a VPN for protection, earn credits using our experimental credit mining preview release, and sell our bandwidth tokens on our integrated decentral market for cold hard Bitcoin cash to renew the cycle of life for the next monthly billing cycle of their VPS provider.”

Some might question why there’s such a massive research project dedicated to building an anonymous BitTorrent network. What are the benefits to society?

The answer is clear, according to Pouwelse. The ethical darknet they envision will be a unique micro-economy where sharing is rewarded, without having to expose one’s identity.

“We are building the Internet of Trust. The Internet can do amazing things, it even created honesty among drugs dealers,” he says, referring to the infamous Silk Road.

“Reliability rating of drugs lords gets you life imprisonment. That’s not something we want. We are creating our own trustworthy micro-economy for bandwidth tokens and real Bitcoins,” he adds.

People who are interested in taking Tribler for a spin can download the latest version from the official website.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Task Networking in AWS Fargate

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/

AWS Fargate is a technology that allows you to focus on running your application without needing to provision, monitor, or manage the underlying compute infrastructure. You package your application into a Docker container that you can then launch using your container orchestration tool of choice.

Fargate allows you to use containers without being responsible for Amazon EC2 instances, similar to how EC2 allows you to run VMs without managing physical infrastructure. Currently, Fargate provides support for Amazon Elastic Container Service (Amazon ECS). Support for Amazon Elastic Container Service for Kubernetes (Amazon EKS) will be made available in the near future.

Despite offloading the responsibility for the underlying instances, Fargate still gives you deep control over configuration of network placement and policies. This includes the ability to use many networking fundamentals such as Amazon VPC and security groups.

This post covers how to take advantage of the different ways of networking your containers in Fargate when using ECS as your orchestration platform, with a focus on how to do networking securely.

The first step to running any application in Fargate is defining an ECS task for Fargate to launch. A task is a logical group of one or more Docker containers that are deployed with specified settings. When running a task in Fargate, there are two different forms of networking to consider:

  • Container (local) networking
  • External networking

Container Networking

Container networking is often used for tightly coupled application components. Perhaps your application has a web tier that is responsible for serving static content as well as generating some dynamic HTML pages. To generate these dynamic pages, it has to fetch information from another application component that has an HTTP API.

One potential architecture for such an application is to deploy the web tier and the API tier together as a pair and use local networking so the web tier can fetch information from the API tier.

If you are running these two components as two processes on a single EC2 instance, the web tier application process could communicate with the API process on the same machine by using the local loopback interface. The local loopback interface has a special IP address of 127.0.0.1 and hostname of localhost.

By making a networking request to this local interface, it bypasses the network interface hardware and instead the operating system just routes network calls from one process to the other directly. This gives the web tier a fast and efficient way to fetch information from the API tier with almost no networking latency.

In Fargate, when you launch multiple containers as part of a single task, they can also communicate with each other over the local loopback interface. Fargate uses a special container networking mode called awsvpc, which gives all the containers in a task a shared elastic network interface to use for communication.

If you specify a port mapping for each container in the task, then the containers can communicate with each other on that port. For example the following task definition could be used to deploy the web tier and the API tier:

{
  "family": "myapp"
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my web image url",
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10,
      "esssential": true
    },
    {
      "name": "api",
      "image": "my api image url",
      "portMappings": [
        {
          "containerPort": 8080
        }
      ],
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ]
}

ECS, with Fargate, is able to take this definition and launch two containers, each of which is bound to a specific static port on the elastic network interface for the task.

Because each Fargate task has its own isolated networking stack, there is no need for dynamic ports to avoid port conflicts between different tasks as in other networking modes. The static ports make it easy for containers to communicate with each other. For example, the web container makes a request to the API container using its well-known static port:

curl 127.0.0.1:8080/my-endpoint

This sends a local network request, which goes directly from one container to the other over the local loopback interface without traversing the network. This deployment strategy allows for fast and efficient communication between two tightly coupled containers. But most application architectures require more than just internal local networking.

External Networking

External networking is used for network communications that go outside the task to other servers that are not part of the task, or network communications that originate from other hosts on the internet and are directed to the task.

Configuring external networking for a task is done by modifying the settings of the VPC in which you launch your tasks. A VPC is a fundamental tool in AWS for controlling the networking capabilities of resources that you launch on your account.

When setting up a VPC, you create one or more subnets, which are logical groups that your resources can be placed into. Each subnet has an Availability Zone and its own route table, which defines rules about how network traffic operates for that subnet. There are two main types of subnets: public and private.

Public subnets

A public subnet is a subnet that has an associated internet gateway. Fargate tasks in that subnet are assigned both private and public IP addresses:


A browser or other client on the internet can send network traffic to the task via the internet gateway using its public IP address. The tasks can also send network traffic to other servers on the internet because the route table can route traffic out via the internet gateway.

If tasks want to communicate directly with each other, they can use each other’s private IP address to send traffic directly from one to the other so that it stays inside the subnet without going out to the internet gateway and back in.

Private subnets

A private subnet does not have direct internet access. The Fargate tasks inside the subnet don’t have public IP addresses, only private IP addresses. Instead of an internet gateway, a network address translation (NAT) gateway is attached to the subnet:

 

There is no way for another server or client on the internet to reach your tasks directly, because they don’t even have an address or a direct route to reach them. This is a great way to add another layer of protection for internal tasks that handle sensitive data. Those tasks are protected and can’t receive any inbound traffic at all.

In this configuration, the tasks can still communicate to other servers on the internet via the NAT gateway. They would appear to have the IP address of the NAT gateway to the recipient of the communication. If you run a Fargate task in a private subnet, you must add this NAT gateway. Otherwise, Fargate can’t make a network request to Amazon ECR to download the container image, or communicate with Amazon CloudWatch to store container metrics.
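
Setting that up from the CLI could look roughly like this (all IDs are placeholders; the NAT gateway itself is created in a public subnet, and the route is added to the private subnet's route table):

# Allocate an Elastic IP and create the NAT gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway \
--subnet-id subnet-0aaa1111bbb2222cc \
--allocation-id eipalloc-0123456789abcdef0

# Send the private subnet's outbound traffic through the NAT gateway
aws ec2 create-route \
--route-table-id rtb-0123456789abcdef0 \
--destination-cidr-block 0.0.0.0/0 \
--nat-gateway-id nat-0123456789abcdef0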

Load balancers

If you are running a container that is hosting internet content in a private subnet, you need a way for traffic from the public to reach the container. This is generally accomplished by using a load balancer such as an Application Load Balancer or a Network Load Balancer.

ECS integrates tightly with AWS load balancers by automatically configuring a service-linked load balancer to send network traffic to containers that are part of the service. When each task starts, the IP address of its elastic network interface is added to the load balancer’s configuration. When the task is being shut down, network traffic is safely drained from the task before removal from the load balancer.

To get internet traffic to containers using a load balancer, the load balancer is placed into a public subnet. ECS configures the load balancer to forward traffic to the container tasks in the private subnet:

This configuration allows your tasks in Fargate to be safely isolated from the rest of the internet. They can still initiate network communication with external resources via the NAT gateway, and still receive traffic from the public via the Application Load Balancer that is in the public subnet.

Another potential use case for a load balancer is for internal communication from one service to another service within the private subnet. This is typically used for a microservice deployment, in which one service such as an internet user account service needs to communicate with an internal service such as a password service. Obviously, it is undesirable for the password service to be directly accessible on the internet, so using an internet load balancer would be a major security vulnerability. Instead, this can be accomplished by hosting an internal load balancer within the private subnet:

With this approach, one container can distribute requests across an Auto Scaling group of other private containers via the internal load balancer, ensuring that the network traffic stays safely protected within the private subnet.

Best Practices for Fargate Networking

Determine whether you should use local task networking

Local task networking is ideal for communicating between containers that are tightly coupled and require maximum networking performance between them. However, when you deploy one or more containers as part of the same task, they are always deployed together, so it removes the ability to independently scale different types of workload up and down.

In the example of the application with a web tier and an API tier, it may be the case that powering the application requires only two web tier containers but 10 API tier containers. If local container networking is used between these two container types, then an extra eight unnecessary web tier containers would end up being run instead of allowing the two different services to scale independently.

A better approach would be to deploy the two containers as two different services, each with its own load balancer. This allows clients to communicate with the two web containers via the web service’s load balancer. The web service could distribute requests across the 10 backend API containers via the API service’s load balancer.
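
A sketch of creating the API service this way with the AWS CLI (the cluster, task definition, IDs, and target group ARN are placeholders; the service runs the 10 API tasks behind its own internal load balancer):

aws ecs create-service \
--cluster my-fargate-cluster \
--service-name api \
--task-definition api-fargate:1 \
--desired-count 10 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}" \
--load-balancers targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-targets/0123456789abcdef,containerName=api,containerPort=8080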

Run internet tasks that require internet access in a public subnet

If you have tasks that require internet access and a lot of bandwidth for communication with other services, it is best to run them in a public subnet. Give them public IP addresses so that each task can communicate with other services directly.

If you run these tasks in a private subnet, then all their outbound traffic has to go through an NAT gateway. AWS NAT gateways support up to 10 Gbps of burst bandwidth. If your bandwidth requirements go over this, then all task networking starts to get throttled. To avoid this, you could distribute the tasks across multiple private subnets, each with their own NAT gateway. It can be easier to just place the tasks into a public subnet, if possible.

Avoid using a public subnet or public IP addresses for private, internal tasks

If you are running a service that handles private, internal information, you should not put it into a public subnet or use a public IP address. For example, imagine that you have one task, which is an API gateway for authentication and access control. You have another background worker task that handles sensitive information.

The intended access pattern is that requests from the public go to the API gateway, which then proxies request to the background task only if the request is from an authenticated user. If the background task is in a public subnet and has a public IP address, then it could be possible for an attacker to bypass the API gateway entirely. They could communicate directly to the background task using its public IP address, without being authenticated.

Conclusion

Fargate gives you a way to run containerized tasks directly without managing any EC2 instances, but you still have full control over how you want networking to work. You can set up containers to talk to each other over the local network interface for maximum speed and efficiency. For running workloads that require privacy and security, use a private subnet with public internet access locked down. Or, for simplicity with an internet workload, you can just use a public subnet and give your containers a public IP address.

To deploy one of these Fargate task networking approaches, check out some sample CloudFormation templates showing how to configure the VPC, subnets, and load balancers.

If you have questions or suggestions, please comment below.

The Effects of the Spectre and Meltdown Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/the_effects_of_3.html

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors’ manufacturers, and patched — at least to the extent possible.

This news isn’t really any different from the usual endless stream of security vulnerabilities and patches, but it’s also a harbinger of the sorts of security problems we’re going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn’t possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren’t anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They’re the future of security — and it doesn’t look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications — or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn’t supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you’re visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they’re going to receive and execute instructions based on that. If the guess turns out to be correct, it’s a performance win. If it’s wrong, the microprocessors throw away what they’ve done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it’s a flaw in the very concept of speculative execution. There’s no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can’t make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user’s perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we’re only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel’s Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they’re guarded.

Third, these vulnerabilities will affect computers’ functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn’t work. Sometimes it’s because computers are in cheap products that don’t have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets — groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it’s because a computer chip’s functionality is so core to a computer’s design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world’s computer hardware are the new normal. That is going to leave us all much more vulnerable in the future.

This essay previously appeared on TheAtlantic.com.

MagPi 66: Raspberry Pi media projects for your home

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-66-media-pi/

Hey folks, Rob from The MagPi here! Issue 66 of The MagPi is out right now, with the ultimate guide to powering your home media with Raspberry Pi. We think the Pi is the perfect replacement or upgrade for many media devices, so in this issue we show you how to build a range of Raspberry Pi media projects.

MagPi 66

Yes, it does say Pac-Man robotics on the cover. They’re very cool.

The article covers file servers for sharing media across your network, music streaming boxes that connect to Spotify, a home theatre PC to make your TV-watching more relaxing, a futuristic Pi-powered moving photoframe, and even an Alexa voice assistant to control all these devices!

More to see

That’s not all though — The MagPi 66 also shows you how to build a Raspberry Pi cluster computer, how to control LEGO robots using the GPIO, and why your Raspberry Pi isn’t affected by Spectre and Meltdown.

In addition, you’ll also find our usual selection of product reviews and excellent project showcases.

Get The MagPi 66

Issue 66 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Want to support the Raspberry Pi Foundation and the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

I hope you enjoy this issue! See you next month.

The post MagPi 66: Raspberry Pi media projects for your home appeared first on Raspberry Pi.