Tag Archives: policy

New National Academies Report on Crypto Policy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/new_national_ac.html

The National Academies has just published “Decrypting the Encryption Debate: A Framework for Decision Makers.” It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As a best practice to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume you are using an Amazon EC2 instance running Amazon Linux, launched from an Amazon Machine Image (AMI).

Systems Manager is a collection of capabilities that helps you automate management tasks for your Amazon EC2 instances and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC

Later in this post, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the IAM user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user assigning tasks to pass their own IAM permissions to the AWS service; in this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will be using the AWS CLI for most of the steps in this blog post; our documentation shows you how to get started with the AWS CLI. Make sure the AWS CLI is installed and configured with an access key and secret access key that belong to an IAM user that has the AmazonEC2FullAccess and AmazonSSMFullAccess AWS managed policies attached.
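
If your IAM user does not yet have these policies, the following commands are one way to attach them and to confirm which identity the AWS CLI is using. This is a minimal sketch; YourIAMUserName is a placeholder for your actual IAM user name.

# Attach the two managed policies to the IAM user (YourIAMUserName is a placeholder)
$ aws iam attach-user-policy --user-name YourIAMUserName --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam attach-user-policy --user-name YourIAMUserName --policy-arn arn:aws:iam::aws:policy/AmazonSSMFullAccess

# Confirm which IAM identity the AWS CLI is configured to use
$ aws sts get-caller-identity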

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instance so that you can later add it to a Systems Manager maintenance window based on those tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured managed policy for this role, called AmazonEC2RoleforSSM.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM, to which you will attach the AWS managed policy AmazonEC2RoleforSSM. If the command is successful, it returns JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Depending on whether you are launching a new Amazon EC2 instance or using an existing one, complete the step that applies to you:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId

At this point, you now have at least one Amazon EC2 instance you can use to configure Systems Manager.

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.
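
If you want to confirm that the agent is installed and running before you continue, a quick check such as the following should work on Amazon Linux 2017.09 (which uses Upstart); this is only a sketch, and the exact commands and log file name may differ on other distributions.

# Check that the Systems Manager (SSM) agent service is running (Amazon Linux 2017.09 / Upstart)
$ sudo status amazon-ssm-agent

# Inspect the most recent agent log entries for errors
$ sudo tail -n 20 /var/log/amazon/ssm/amazon-ssm-agent.log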

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole, to which you will attach the AWS managed policy AmazonSSMMaintenanceWindowRole. If the command is successful, it returns JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonSSMMaintenanceWindowRole) to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram shown earlier in this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.
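
If you are curious which AWS-managed default baseline would apply when you do not define your own, you can look it up from the CLI. This is a hedged sketch; it assumes your CLI credentials include the SSM read permissions described earlier.

# Show the default patch baseline that AWS provides for Amazon Linux
$ aws ssm get-default-patch-baseline --operating-system AMAZON_LINUX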

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you determine which patches are approved and added to your patch baseline. In this example, the Auto approval delay for Critical severity patches is 0 days, so they are added to the patch baseline as soon as they are released; the delay for Important, Medium, and Low severity patches is 2 days, so they are added two days after release.
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }

C. Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with the AWS-provided patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The command that follows includes these options.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. After the maintenance window has run, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}

D. Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns, in JSON format, the number of compliant and noncompliant instances for each compliance category.
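
For per-instance detail, you can also query the patch state or the individual compliance items for a specific instance. The following commands are a sketch; replace YourInstanceId with your own instance ID.

# Summary of the patch state for a single managed instance
$ aws ssm describe-instance-patch-states --instance-ids YourInstanceId

# Individual compliance items (for example, missing patches) for that instance
$ aws ssm list-compliance-items --resource-ids YourInstanceId --resource-types ManagedInstance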

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. You will see a visual representation of how many Amazon EC2 instances are up to date, how many Amazon EC2 instances are noncompliant, and how many Amazon EC2 instances are compliant in relation to the earlier defined patch baseline.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

Backblaze and GDPR

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/gdpr-compliance/

GDPR General Data Protection Regulation

Over the next few months the noise over GDPR will finally reach a crescendo. For the uninitiated, “GDPR” stands for “General Data Protection Regulation” and it goes into effect on May 25th of this year. GDPR is designed to protect how personal information of EU (European Union) citizens is collected, stored, and shared. The regulation should also improve transparency as to how personal information is managed by a business or organization.

Backblaze fully expects to be GDPR compliant when May 25th rolls around and we thought we’d share our experience along the way. We’ll start with this post as an introduction to GDPR. In future posts, we’ll dive into some of the details of the process we went through in meeting the GDPR objectives.

GDPR: A Two Way Street

To ensure we are GDPR compliant, Backblaze has assembled a dedicated internal team, engaged outside counsel in the United Kingdom, and consulted with other tech companies on best practices. While it is a sizable effort on our part, we view this as a waypoint in our ongoing effort to secure and protect our customers’ data and to be transparent in how we work as a company.

In addition to the effort we are putting into complying with the regulation, we think it is important to underscore and promote the idea that data privacy and security is a two-way street. We can spend millions of dollars on protecting the security of our systems, but we can’t stop a bad actor from finding and using your account credentials left on a note stuck to your monitor. We can give our customers tools like two factor authentication and private encryption keys, but it is the partnership with our customers that is the most powerful protection. The same thing goes for your digital privacy — we’ll do our best to protect your information, but we will need your help to do so.

Why GDPR is Important

At the center of GDPR is the protection of Personally Identifiable Information, or “PII.” PII is information that can be used, on its own or in combination with other information, to identify a specific person. This includes obvious data such as name, address, and phone number; less obvious data such as email address and IP address; and other data such as credit card numbers and unique identifiers that can be traced back to a specific person.

How Will GDPR Affect You as an Individual

If you are a citizen in the EU, GDPR is designed to protect your private information from being used or shared without your permission. Technically, this only applies when your data is collected, processed, stored or shared outside of the EU, but it’s a good practice to hold all of your service providers to the same standard. For example, when you are deciding to sign up with a service, you should be able to quickly access and understand what personal information is being collected, why it is being collected, and what the business can do with that information. These terms are typically found in “Terms and Conditions” and “Privacy Policy” documents, or perhaps in a written contract you signed before starting to use a given service or product.

Even if you are not a citizen of the EU, GDPR will still affect you. Why? Because nearly every company you deal with, especially online, will have customers that live in the EU. It makes little sense for Backblaze, or any other service provider or vendor, to create a separate set of rules for just EU citizens. In practice, protection of private information should be more accountable and transparent with GDPR.

How Will GDPR Affect You as a Backblaze Customer

Over the coming months Backblaze customers will see changes to our current “Terms and Conditions,” “Privacy Policy,” and to our Backblaze services. While the changes to the Backblaze services are expected to be minimal, the “terms and privacy” documents will change significantly. The changes will include, among other things, the addition of a group of model clauses and related materials. These clauses will be generally consistent across all GDPR-compliant vendors and are meant to be easily understood, so that a customer can easily determine how their PII is being collected and used.

Common GDPR Questions:

Here are a few of the more common questions we have heard regarding GDPR.

  1. GDPR will only affect citizens in the EU.
    Answer: The changes that are being made by companies such as Backblaze to comply with GDPR will almost certainly apply to customers from all countries. And that’s a good thing. The protections afforded to EU citizens by GDPR are something all users of our service should benefit from.
  2. After May 25, 2018, a citizen of the EU will not be allowed to use any applications or services that store data outside of the EU.
    Answer: False, no one will stop you as an EU citizen from using the internet-based service you choose. But, you should make sure you know where your data is being collected, processed, and stored. If any of those activities occur outside the EU, make sure the company is following the GDPR guidelines.
  3. My business only has a few EU citizens as customers, so I don’t need to care about GDPR?
    Answer: False, even if you have just one EU citizen as a customer, and you capture, process, or store their PII outside of the EU, you need to comply with GDPR.
  4. Companies can be fined millions of dollars for not complying with GDPR.
    Answer: True, but the regulation allows companies to be fined up to €20 million or 4% of global annual revenue (whichever is greater) if they don’t comply with GDPR. In practice, the feeling is that such fines will be reserved (at least initially) for egregious violators that ignore or merely give “lip service” to GDPR.
  5. You’ll be able to tell a company is GDPR compliant because they have a “GDPR Certified” badge on their website.
    Answer: There is no official GDPR certification or an official GDPR certification program. Companies that comply with GDPR are expected to follow the articles in the regulation and it should be clear from the outside looking in that they have followed the regulations. For example, their “Terms and Conditions,” and “Privacy Policy” should clearly spell out how and why they collect, use, and share your information. At some point a real GDPR certification program may be adopted, but not yet.

For all the hoopla about GDPR, the regulation is reasonably well thought out and addresses a very important issue: people’s privacy online. Creating a best practices document, or in this case a regulation, that companies such as Backblaze can follow is a good idea. The document isn’t perfect, and over the coming years we expect there to be changes. One thing we hope for is that the countries within the EU continue to stand behind one regulation and do not fragment the document into multiple versions, each applying only to its own country. We believe that having multiple GDPR versions for different EU countries would lead to less protection overall for EU citizens.

In summary, GDPR changes are coming over the next few months. Backblaze has our internal staff and our EU-based legal counsel working diligently to ensure that we will be GDPR compliant by May 25th. We believe that GDPR will have a positive effect in enhancing the protection of personally identifiable information for not only EU citizens, but all of our Backblaze customers.

The post Backblaze and GDPR appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Can Consumers’ Online Data Be Protected?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/can_consumers_o.html

Everything online is hackable. This is true for Equifax’s data and the federal Office of Personnel Management’s data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.

But just because everything is hackable doesn’t mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker, and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details (and doesn’t care where he gets them from) or the Chinese military looking for specific data from a specific place.

The proper question isn’t whether it’s possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.

In most cases, it’s impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.

We have a feeling that these big companies do better than smaller ones. But we’re also surprised when a lone individual publishes personal data hacked from the infidelity site AshleyMadison.com, or when the North Korean government does the same with personal information in Sony’s network.

Think about all the companies collecting personal data about you (the websites you visit, your smartphone and its apps, your Internet-connected car) and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.

So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.

Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don’t protect our data online is that it’s cheaper not to. Government policy is how we change that.

This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled “Privacy and the Internet.”

Hosting Provider Steadfast Maintains DMCA Safe Harbor Defense For Trial

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-maintains-dmca-safe-harbor-defense-for-trial-180212/

Two years ago, adult entertainment publisher ALS Scan dragged several third-party Internet services to court.

The company targeted several services, including CDN provider CloudFlare and Chicago-based hosting company Steadfast, accusing them of copyright infringement because they offered services to pirate sites.

The case against Steadfast is getting close to trial. To start with an advantage, ALS Scan recently asked the court for partial summary judgment, seeking a ruling that the hosting company contributed to copyright infringement and that it has no safe harbor protection.

ALS argued that Steadfast refused to shut down the servers of the image sharing platform Imagebam.com, which was operated by its client Flixya. ALS Scan described the site as a repeat offender, as it had been targeted with dozens of DMCA notices, and accused Steadfast of turning a blind eye to the situation.

Steadfast, for its part, fiercely denied the allegations. The hosting provider admitted that it leased servers to Flixya for ten years but said that it forwarded all notices to its client. The hosting company could not address individual infringements, other than shutting down the entire site, which would have been disproportionate in their view.

A few days ago California District Court Judge George Wu ruled on the matter, denying ALS’s motion for summary judgment.

Both sides made sensible arguments on the contributory infringement issue, but it is by no means undisputed that the hosting provider ‘contributed’ to the infringing activities. The court, therefore, left this question open for the jury to determine at trial.

“Ultimately, both sides have raised triable issues of fact with respect to material contribution. As a result, the Court would deny Plaintiff’s Motion,” Judge Wu writes.

ALS also sought summary judgment on the DMCA safe harbor protection issue, but the court denied this request as well. While it’s clear that the hosting company never terminated a customer for repeat infringements, it’s not clear whether it was ever in a situation where it needed to.

The DMCA requires Internet services to implement a meaningful repeat infringer policy, but in this case, Steadfast’s client Imagebam reportedly had a takedown policy of its own, which complicates the issue.

“While the fact Steadfast has never terminated one of its own customers for infringement is potentially damaging to its ability to fit the safe harbor, Plaintiff has not established that Steadfast faced a situation requiring it to terminate one of its users,” Judge Wu writes.

“Even in the present case it is unclear that Steadfast needed to terminate Flixya’s account given Flixya itself had a policy that was arguably successful at removing infringing images from imagebam.com.”

Judge Wu adds that safe harbor defenses are generally left to the jury, and this is what he decided as well.

As a result, ALS’s entire motion for summary judgment is denied. This is good news for Steadfast, which will have its safe harbor defense available at the upcoming trial. However, the company will likely celebrate this win with caution, as the jury will make the ultimate decision.

A copy of the court’s order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Comcast Explains How It Deals With Persistent Pirates

Post Syndicated from Ernesto original https://torrentfreak.com/comcast-explains-how-it-deals-with-persistent-pirates-180210/

Dating back to the turn of the last century, copyright holders have alerted Internet providers about alleged copyright infringers on their network.

While many ISPs forwarded these notices to their subscribers, most were not very forthcoming about what would happen after multiple accusations.

This vagueness was in part shaped by law. While it’s clear that the DMCA requires Internet providers to implement a meaningful “repeat infringer” policy, the DMCA doesn’t set any clear boundaries on what constitutes a repeat infringer and when one should be punished.

With the recent Fourth Circuit Court of Appeals ruling against Cox, it is now clear that “infringers” doesn’t mean only people who have been adjudicated as such; valid accusations from copyright holders are enough. However, an ISP still has some flexibility when it comes to the rest of its “repeat infringer” policy.

In this light, it’s interesting to see that Comcast recently published details of its repeat infringer policy online. While the ISP has previously confirmed that persistent pirates could be terminated, it has never publicly spelled out its policy in such detail.

First up, Comcast clarifies that subscribers to its Xfinity service can be flagged based on reports from rightsholders alone, which is in line with the Fourth Circuit ruling.

“Any infringement of third party copyright rights violates the law. We reserve the right to treat any customer account for whom we receive multiple DMCA notifications from content owners as a repeat infringer,” the company notes.

If Comcast receives multiple notices in a calendar month, the associated subscriber moves from one policy step to the next one. This means that the ISP will issue warnings with increased visibility.

These alerts can come in the form of emails, letters to a home address, text messages, phone calls, and alerts sent to the subscriber’s web browser. The alerts then have to be acknowledged by the user, so it is clear that he or she understands what’s at stake.

From Comcast’s repeat infringer policy

Comcast doesn’t state specifically how many alerts will trigger tougher action, but it stresses that repeat infringers risk having their accounts suspended. As a result, all devices that rely on Internet access will be interrupted or stop working.

“If your XFINITY Internet account is suspended, you will have no Internet access or service during suspension. This means any services and devices that use the Internet will not properly work or will not work at all,” Comcast states.

The suspension is applied as a last warning before the lights go out completely. Subscribers who reach this stage can still reinstate their Internet connectivity by calling Comcast. It’s unclear whether they have to take any additional action, but it could be that these subscribers have to ‘promise’ to behave.

After this last warning, the subscriber risks the most severe penalty, account termination. This is not limited to regular access to the web, but also affects XFINITY TV, XFINITY Voice, and XFINITY Home, including smart thermostats and home security equipment.

“If you reach the point of service termination, we will terminate your XFINITY Internet service and related add-ons. Unreturned equipment charges will still apply. If you also have XFINITY TV and/or XFINITY Voice services, they will also be terminated,” Comcast warns.

Comcast doesn’t specify how long the Internet termination lasts but the company states that it’s typically no less than 180 days. This means that terminated subscribers will need to find an Internet subscription elsewhere if one’s available.

The good news is that other XFINITY services can be restored after termination, without Internet access. Subscribers will have to contact Comcast to request a quote for an Internet-less package.

While this policy may sound harsh to some, Comcast has few other options if it wants to avoid liability. The good news is that the company requires users to acknowledge the warnings, which means that any measures shouldn’t come as a surprise.

There is no mention of any option to contest any copyright holder notices, which may become an issue in the future. After all, when copyright holders have the power to have people’s Internet connections terminated, their accusations have to be spot on.



Comcast’s repeat infringer policy is available here and was, according to the information we have available, quietly published around December last year.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer- AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user who will be the admin of the KMS key that you create. To perform the steps listed in this post, this IAM user needs permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If your IAM user account has these privileges, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.
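
If you prefer the AWS CLI to the launch button, the following is a hedged sketch of packaging and deploying the template; the template file name, artifact bucket, and stack name are placeholders you would replace with your own values.

# Package the SAM template, uploading the local function code to an S3 bucket you own
$ aws cloudformation package --template-file template.yaml --s3-bucket YourArtifactBucket --output-template-file packaged.yaml

# Deploy the packaged template, passing your IAM user name as the template parameter
$ aws cloudformation deploy --template-file packaged.yaml --stack-name parameter-store-blog --capabilities CAPABILITY_IAM --parameter-overrides IAMUsername=YourIAMUserName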

SAM template resources

The following sections show the code for the resources defined in the template.
Lambda function

ParameterStoreBlogFunctionDev:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        -
          PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        -
          PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter

SimpleParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: '/dev/parameterStoreBlog/appConfig'
      Description: 'Sample dev config values for my app'
      Type: String
      Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key

ParameterStoreBlogDevEncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: 'alias/ParameterStoreBlogKeyDev'
      TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

  ParameterStoreBlogDevEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: 'Encryption key for secret config values for the Parameter Store blog post'
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: '2012-10-17'
        Id: 'key-default-1'
        Statement:
          -
            Sid: 'Allow administration of the key & encryption of new values'
            Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
            Action:
              - 'kms:Create*'
              - 'kms:Encrypt'
              - 'kms:Describe*'
              - 'kms:Enable*'
              - 'kms:List*'
              - 'kms:Put*'
              - 'kms:Update*'
              - 'kms:Revoke*'
              - 'kms:Disable*'
              - 'kms:Get*'
              - 'kms:Delete*'
              - 'kms:ScheduleKeyDeletion'
              - 'kms:CancelKeyDeletion'
            Resource: '*'
          -
            Sid: 'Allow use of the key'
            Effect: Allow
            Principal:
              AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
            Action:
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - 'kms:DescribeKey'
            Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)

    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
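
To see roughly what the function retrieves, you can make the equivalent call from the AWS CLI. This is only an illustrative sketch; it assumes your CLI credentials are allowed to read parameters under this path.

# List the parameters directly under the application's config path
# (the same call the function makes via get_parameters_by_path, here without decryption)
$ aws ssm get-parameters-by-path --path /dev/parameterStoreBlog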

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output showing that the function successfully fetched the unencrypted configuration from Parameter Store.
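
If you prefer to test from the command line rather than the console, a sketch like the following also works; output.json is simply a local file name of your choosing.

# Invoke the function and write its return value to a local file
$ aws lambda invoke --function-name ParameterStoreBlogFunctionDev output.json

# Display the returned configuration string
$ cat output.json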

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter.
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
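
As an alternative to the console steps in this section, the same encrypted parameter can be created with a single AWS CLI call. This is a sketch only; it reuses the parameter name, key alias, and value from the console steps above.

# Create the SecureString parameter, encrypted with the key the SAM template created
$ aws ssm put-parameter --name '/dev/parameterStoreBlog/appSecrets' --type SecureString --key-id alias/ParameterStoreBlogKeyDev --value '{"secretKey": "secretValue"}'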

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.
In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.

Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.

Chrome and Firefox Block 123movies Over “Harmful Programs”

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-block-123movies-over-harmful-programs-180209/

With millions of visitors per day, 123movies(hub), also known as Gomovies, is one of the largest pirate streaming sites on the web.

Today, however, many visitors were welcomed by a dangerous-looking red banner instead of the usual homepage.

“The site ahead contains harmful programs,” Chrome warns its users. “Attackers on 123movieshub.to might attempt to trick you into installing programs that harm your browsing experience.”

It is not clear what the problem is in this particular case, but these types of notifications are often triggered by malicious or deceptive third-party advertising that has appeared on a site.

Warning

These warning messages are triggered by Google’s Safe Browsing algorithm, which flags websites that pose a potential danger to visitors. Chrome, Firefox, and others use this service to prevent users from running into unwanted software.

In addition to the browser block, Google generally informs the site’s owners that their domain will be demoted in search results until the issue is resolved.

Google previously informed us that these kinds of warnings automatically disappear when the flagged sites no longer violate Google’s policy. This can take one or two days, but also longer.

This isn’t the first time that Google has flagged such a large website. Many pirate sites, including The Pirate Bay, have been affected by this issue in the past.

Chrome and Firefox users should be familiar with these intermittent warning notices by now. If users believe that an affected site is harmless, they can always take steps (Chrome, FF) to bypass the blocks, but that’s completely at their own risk.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Give Your WordPress Blog a Voice With Our New Amazon Polly Plugin

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/give-your-wordpress-blog-a-voice-with-our-new-amazon-polly-plugin/

I first told you about Polly in late 2016 in my post Amazon Polly – Text to Speech in 47 Voices and 24 Languages. After that AWS re:Invent launch, we added support for Korean, five new voices, and made Polly available in all Regions in the aws partition. We also added whispering, speech marks, a timbre effect, and dynamic range compression.

New WordPress Plugin
Today we are launching a WordPress plugin that uses Polly to create high-quality audio versions of your blog posts. You can access the audio from within the post or in podcast form using a feature that we call Amazon Pollycast! Both options make your content more accessible and can help you to reach a wider audience. This plugin was a joint effort between the AWS team and our friends at AWS Advanced Technology Partner WP Engine.

As you will see, the plugin is easy to install and configure. You can use it with installations of WordPress that you run on your own infrastructure or on AWS. Either way, you have access to all of Polly’s voices along with a wide variety of configuration options. The generated audio (an MP3 file for each post) can be stored alongside your WordPress content, or in Amazon Simple Storage Service (S3), with optional support for content distribution via Amazon CloudFront.

Installing the Plugin
I don’t have an existing WordPress-powered blog, so I begin by launching a Lightsail instance using the WordPress 4.8.1 blueprint:

Then I follow these directions to access my login credentials:

Credentials in hand, I log in to the WordPress Dashboard:

The plugin makes calls to AWS, and needs to have credentials in order to do so. I hop over to the IAM Console and create a new policy. The policy allows the plugin to access a carefully selected set of S3 and Polly functions (find the full policy in the README):
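The README has the authoritative policy; purely as a hedged illustration of its general shape, the same kind of policy could also be created programmatically (the actions listed and the bucket name below are assumptions, not the plugin’s actual requirements):

import json
import boto3

iam = boto3.client('iam')

# Illustrative only: check the plugin README for the exact set of required actions.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        # Polly calls needed to synthesize audio (assumed subset)
        {"Effect": "Allow",
         "Action": ["polly:SynthesizeSpeech", "polly:DescribeVoices"],
         "Resource": "*"},
        # S3 access for storing the generated MP3 files (placeholder bucket name)
        {"Effect": "Allow",
         "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
         "Resource": ["arn:aws:s3:::my-polly-audio-bucket",
                      "arn:aws:s3:::my-polly-audio-bucket/*"]},
    ],
}

iam.create_policy(PolicyName="wp-polly-plugin-policy",
                  PolicyDocument=json.dumps(policy_document))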

Then I create an IAM user (wp-polly-user). I enter the name and indicate that it will be used for Programmatic Access:

Then I attach the policy that I just created, and click on Review:

I review my settings (not shown) and then click on Create User. Then I copy the two values (Access Key ID and Secret Access Key) into a secure location. Possession of these keys allows the bearer to make calls to AWS so I take care not to leave them lying around.

Now I am ready to install the plugin! I go back to the WordPress Dashboard and click on Add New in the Plugins menu:

Then I click on Upload Plugin and locate the ZIP file that I downloaded from the WordPress Plugins site. After I find it I click on Install Now to proceed:

WordPress uploads and installs the plugin. Now I click on Activate Plugin to move ahead:

With the plugin installed, I click on Settings to set it up:

I enter my keys and click on Save Changes:

The General settings let me control the sample rate, voice, player position, the default setting for new posts, and the autoplay option. I can leave all of the settings as-is to get started:

The Cloud Storage settings let me store audio in S3 and to use CloudFront to distribute the audio:

The Amazon Pollycast settings give me control over the iTunes parameters that are included in the generated RSS feed:

Finally, the Bulk Update button lets me regenerate all of the audio files after I change any of the other settings:

With the plugin installed and configured, I can create a new post. As you can see, the plugin can be enabled and customized for each post:

I can see how much it will cost to convert to audio with a click:

When I click on Publish, the plugin breaks the text into multiple blocks on sentence boundaries, calls the Polly SynthesizeSpeech API for each block, and accumulates the resulting audio in a single MP3 file. The published blog post references the file using the <audio> tag. Here’s the post:

I can’t seem to use an <audio> tag in this post, but you can download and play the MP3 file yourself if you’d like.
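If you are curious what the flow described above looks like in code, here is a hedged sketch in Python with boto3 (the actual plugin is written in PHP; the block size, voice, and output handling are simplifications for illustration):

import boto3

polly = boto3.client('polly')

def synthesize_post(text, voice='Joanna', out_file='post.mp3'):
    # SynthesizeSpeech limits how much text it accepts per call, so split the post
    # into blocks on sentence boundaries (the block size here is an assumption)
    blocks, current = [], ''
    for sentence in text.split('. '):
        if len(current) + len(sentence) > 1500:
            blocks.append(current)
            current = ''
        current += sentence + '. '
    if current:
        blocks.append(current)

    # Accumulate the audio for every block into a single MP3 file
    with open(out_file, 'wb') as f:
        for block in blocks:
            result = polly.synthesize_speech(Text=block, OutputFormat='mp3', VoiceId=voice)
            f.write(result['AudioStream'].read())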

The Pollycast feature generates an RSS file with links to an MP3 file for each post:

Pricing
The plugin will make calls to Amazon Polly each time the post is saved or updated. Pricing is based on the number of characters in the speech requests, as described on the Polly Pricing page. Also, the AWS Free Tier lets you process up to 5 million characters per month at no charge, for a period of one year that starts when you make your first call to Polly.

Going Further
The plugin is available on GitHub in source code form and we are looking forward to your pull requests! Here are a couple of ideas to get you started:

Voice Per Author – Allow selection of a distinct Polly voice for each author.

Quoted Text – For blogs that make frequent use of embedded quotes, use a distinct voice for the quotes.

Translation – Use Amazon Translate to translate the texts into another language, and then use Polly to generate audio in that language.

Other Blogging Engines – Build a similar plugin for your favorite blogging engine.

SSML Support – Figure out an interesting way to use Polly’s SSML tags to add additional character to the audio.

Let me know what you come up with!

Jeff;

 

RIAA: Cox Ruling Shows that Grande Can Be Liable for Piracy Too

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-cox-ruling-shows-that-grande-can-be-liable-for-piracy-too-180207/

Regular Internet providers are being put under increasing pressure for not doing enough to curb copyright infringement.

Last year several major record labels, represented by the RIAA, filed a lawsuit in a Texas District Court, accusing ISP Grande Communications of turning a blind eye to its pirating subscribers.

“Despite their knowledge of repeat infringements, Defendants have permitted repeat infringers to use the Grande service to continue to infringe Plaintiffs’ copyrights without consequence,” the RIAA’s complaint read.

Grande disagreed with this assertion and filed a motion to dismiss the case. The ISP argued that it doesn’t encourage any of its customers to download copyrighted works, and that it has no control over the content subscribers access.

The Internet provider didn’t deny that it received millions of takedown notices through the piracy tracking company Rightscorp. However, it believes these notices are flawed and not worth acting upon.

The case shows a lot of similarities with the legal battle between BMG and Cox Communications, in which the Fourth Circuit Court of Appeals issued an important verdict last week.

The appeals court overturned the $25 million piracy damages verdict against Cox due to an erroneous jury instruction but held that the ISP lost its safe harbor protection because it failed to implement a meaningful repeat infringer policy.

This week, the RIAA used the Fourth Circuit ruling as further evidence that Grande’s motion to dismiss should be denied.

The RIAA points out that both Cox and Grande used similar arguments in their defense, some of which were denied by the appeals court. The Fourth Circuit held, for example, that an ISP’s substantial non-infringing uses do not immunize it from liability for contributory copyright infringement.

In addition, the appeals court also clarified that if an ISP wilfully blinds itself to copyright infringements, that is sufficient to satisfy the knowledge requirement for contributory copyright infringement.

According to the RIAA’s filing at a Texas District Court this week, Grande has already admitted that it willingly ‘ignored’ takedown notices that were submitted on behalf of third-party copyright holders.

“Grande has already admitted that it received notices from Rightscorp and, to use Grande’s own phrase, did not ‘meaningfully investigate’ them,” the RIAA writes.

“Thus, even if this Court were to apply the Fourth Circuit’s ‘willful blindness’ standard, the level of knowledge that Grande has effectively admitted exceeds the level of knowledge that the Fourth Circuit held was ‘powerful evidence’ sufficient to establish liability for contributory infringement.”

As such, the motion to dismiss the case should be denied, the RIAA argues.

What’s not mentioned in the RIAA’s filing, however, is why Grande chose not to act upon these takedown notices. In its defense, the ISP previously explained that Rightscorp’s notices lacked specificity and were incapable of detecting actual infringements.

Grande argued that if they acted on these notices without additional proof, its subscribers could lose their Internet access even though they are using it for legal purposes. The ISP may, therefore, counter that it wasn’t willfully blind, as it saw no solid proof for the alleged infringements to begin with.

“To merely treat these allegations as true without investigation would be a disservice to Grande’s subscribers, who would run the risk of having their Internet service permanently terminated despite using Grande’s services for completely legitimate purposes,” Grande previously wrote.

This brings up a tricky issue. The Fourth Circuit made it clear last week that ISPs require a meaningful policy against repeat infringers in response to takedown notices from copyright holders. But what are the requirements for a proper takedown notice? Do any and all notices count?

Grande clearly has no faith in the accuracy of Rightscorp’s technology but if their case goes in the same direction as Cox’s, that might not make much of a difference.

A copy of the RIAA’s summary of supplemental authority is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster and assume their own IAM roles respectively.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When the user is trying to access Amazon S3 resources, Amazon EMR evaluates against a predefined mappings list in EMRFS configurations and picks up the right role for the user.

In this post, we discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples, so that you end up with the desired permissions in a multi-tenant environment. We also demo Amazon S3 access from the HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, and they are defined in the Active Directory:

To enable all these groups and users to share the EMR cluster, you need to define the following IAM roles:

In this case, you create a separate Amazon EC2 role that doesn’t grant any permissions to Amazon S3. Let’s call this the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. Then, you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback when a user neither belongs to any mapped user or group nor tries to access any of the Amazon S3 prefixes defined on the list.

Important: For all other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as the trusted entity so that it can assume other roles. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you use an EMR security configuration. Here is the configuration we use in this post:
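(The configuration appears in the original post as a console screenshot. As a stand-in, here is a sketch of the same mapping structure expressed as a Python dictionary; the role ARNs, account ID, user, groups, and prefix mirror the examples in this post, but treat the exact schema and values as an approximation.)

emrfs_role_mappings = {
    "AuthorizationConfiguration": {
        "EmrFsConfiguration": {
            "RoleMappings": [
                # User mapping: admin1 assumes the admin user role
                {"Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
                 "IdentifierType": "User", "Identifiers": ["admin1"]},
                # Group mappings: members assume their group's role
                {"Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
                 "IdentifierType": "Group", "Identifiers": ["grp_data_science"]},
                {"Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
                 "IdentifierType": "Group", "Identifiers": ["grp_data_engineering"]},
                # Prefix mapping: anyone accessing this S3 path assumes the prefix role
                {"Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
                 "IdentifierType": "Prefix", "Identifiers": ["s3://emrfs-auth-default-bucket-demo/"]},
            ]
        }
    }
}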

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role emrfs_auth_user_role_admin_user on the mapping list, which is a user role, is mapped and picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineer group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of the grp_data_engineering group, the group role emrfs_auth_group_role_data_eng is assumed, and the user has proper access to Amazon S3 that is defined in the emrfs_auth_group_role_data_eng role.

Next, the third user comes, who is not an admin and doesn’t belong to any of the groups. After failing evaluation of the top three entries, EMRFS evaluates whether the user is trying to access a certain Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, then the prefix mapping is in effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths that are defined on the list—which means it failed the evaluation of all the entries—it only has the permissions defined in the EMR_EC2_RestrictedRole. This role is assumed by the EC2 instances in the cluster.

In this process, all the mappings are evaluated in the order they are defined; the first role that matches is assumed, and the rest of the list is skipped.
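Conceptually, the evaluation behaves like the following simplified sketch (an illustration of the ordering described above, not EMRFS’s actual implementation):

def resolve_role(user, groups, s3_path, role_mappings, base_role="EMR_EC2_RestrictedRole"):
    """Return the role of the first mapping entry that matches, in list order."""
    for mapping in role_mappings:
        kind, identifiers = mapping["IdentifierType"], mapping["Identifiers"]
        if kind == "User" and user in identifiers:
            return mapping["Role"]
        if kind == "Group" and any(group in identifiers for group in groups):
            return mapping["Role"]
        if kind == "Prefix" and any(s3_path.startswith(prefix) for prefix in identifiers):
            return mapping["Role"]
    return base_role  # nothing matched: fall back to the cluster's restricted EC2 role

Fed the mapping list sketched earlier, a dataeng1 user listing s3://emrfs-auth-data-engineering-bucket-demo/ would resolve to emrfs_auth_group_role_data_eng, while a user who matches no entry falls back to the restricted base role.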

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we create the Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it talks LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created users such as admin1, dataeng1, and datascientist1, and groups such as grp_data_engineering and grp_data_science, and then added the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The following is an example of the EMR security configuration with a trusted Active Directory domain EMRKRB.TEST.COM and the EMRFS role mappings as we discussed earlier:

The EMRFS role mapping configuration is shown in this example:

We will also provide an example AWS CLI command that you can run.
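In the same spirit, the following is a hedged sketch of creating such a security configuration through the API with boto3 (the field names follow the EMR security configuration schema as we understand it; the KDC host names and ticket lifetime are placeholders, and only one role mapping entry is shown for brevity):

import json
import boto3

emr = boto3.client('emr')

security_configuration = {
    "AuthenticationConfiguration": {
        "KerberosConfiguration": {
            "Provider": "ClusterDedicatedKdc",
            "ClusterDedicatedKdcConfiguration": {
                "TicketLifetimeInHours": 24,              # assumed value
                "CrossRealmTrustConfiguration": {
                    "Realm": "EMRKRB.TEST.COM",
                    "Domain": "emrkrb.test.com",
                    "AdminServer": "emrkrb.test.com",     # assumed host names
                    "KdcServer": "emrkrb.test.com",
                },
            },
        }
    },
    "AuthorizationConfiguration": {
        "EmrFsConfiguration": {
            "RoleMappings": [
                # One entry shown for brevity; include the full user/group/prefix
                # mapping list from the earlier sketch here.
                {"Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
                 "IdentifierType": "User", "Identifiers": ["admin1"]},
            ]
        }
    },
}

emr.create_security_configuration(
    Name="MyKerberosConfig",   # referenced later by the create-cluster command
    SecurityConfiguration=json.dumps(security_configuration),
)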

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }
]

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step 1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except in the Security Options section. In Step 4: Security, under Permissions, choose Custom, and then choose the EMR_EC2_RestrictedRole that you created before.

Choose the appropriate subnets (these should meet the base requirement in order for a successful Active Directory join—see the Amazon EMR Management Guide for more details), and choose the appropriate security groups to make sure it talks to the Active Directory. Choose a key so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
  --release-label emr-5.10.0 \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
  --service-role EMR_DefaultRole \
  --security-configuration MyKerberosConfig \
  --configurations file://hue-config.json \
  --applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
  --kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the server where you run the CLI command.

After the cluster reaches the Waiting state, try to connect to it over SSH using an Active Directory user name and password.

ssh -l <ADUserName>@<ADDomain> <EMR IP or DNS name>

Quickly run two commands to show that the Active Directory join is successful:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.

Now, you can test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineer group.

Retry the previous command against the same bucket. It should throw an Amazon S3 Access Denied exception, because the data engineering group has no access to the data science bucket.

When you list the Amazon S3 bucket that the data engineering group does have access to, the group mapping is triggered.

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next we will test Apache Hive and then Apache Spark.

 

To run jobs successfully, you need to create a home directory for every user in HDFS for staging data under /user/<username>. Users can configure a step to create a home directory at cluster launch time for every user who has access to the cluster. In this example, you use Hue since Hue will create the home directory in HDFS for the user at the first login. Here Hue also needs to be integrated with the same Active Directory as explained in the example configuration described earlier.

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.

The creation is successful.

The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node as the datascientist1 user:

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try select on both tables. You will find that you can only select the table defined in the location that your group has access to.

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, providing flexibility to configure a shared cluster for different teams to achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provides better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them Big Data & Machine Learning solutions and best practices on the Amazon cloud.

 

 

 

 

Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.

Addressing Data Residency with AWS

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/addressing-data-residency-with-aws/


AWS has released a new whitepaper that has been requested by many AWS customers: AWS Policy Perspectives: Data Residency. Data residency is the requirement that all customer content processed and stored in an IT system must remain within a specific country’s borders, and it is one of the foremost concerns of governments that want to use commercial cloud services. General cybersecurity concerns and concerns about government requests for data have contributed to a continued focus on keeping data within countries’ borders. In fact, some governments have determined that mandating data residency provides an extra layer of security.

This approach, however, is counterproductive to the data protection objectives and the IT modernization and global economic growth goals that many governments have set as milestones. This new whitepaper addresses the real and perceived security risks expressed by governments when they demand in-country data residency by identifying the most likely and prevalent IT vulnerabilities and security risks, explaining the native security embedded in cloud services, and highlighting the roles and responsibilities of cloud service providers (CSPs), governments, and customers in protecting data.

Large-scale, multinational CSPs, often called hyperscale CSPs, represent a transformational disruption in technology because of how they support their customers with high degrees of efficiency, agility, and innovation as part of world-class security offerings. The whitepaper explains how hyperscale CSPs, such as AWS, that might be located out of country provide their customers the ability to achieve high levels of data protection through safeguards on their own platform and with turnkey tooling for their customers. They do this while at the same time preserving nation-state regulatory sovereignty.

The whitepaper also considers the commercial, public-sector, and economic effects of data residency policies and offers considerations for governments to evaluate before enforcing requirements that can unintentionally limit public-sector digital transformation goals, in turn possibly leading to increased cybersecurity risk.

AWS continues to engage with governments around the world to hear and address their top-of-mind security concerns. We take seriously our commitment to advocate for our customers’ interests and enforce security from “ground zero.” This means that when customers use AWS, they can have the confidence that their data is protected with a level of assurance that meets, if not exceeds, their needs, regardless of where the data resides.

– Min Hyun, Cloud Security Policy Strategist

Appeals Court Throws Out $25 Million Piracy Verdict Against Cox, Doesn’t Reinstate “Safe Harbor”

Post Syndicated from Ernesto original https://torrentfreak.com/appeals-court-throws-out-25-million-piracy-verdict-against-cox-doesnt-reinstate-safe-harbor-180201/

In December 2015, a Virginia federal jury ruled that Internet provider Cox Communications was responsible for the copyright infringements of its subscribers.

The ISP was found guilty of willful contributory copyright infringement and ordered to pay music publisher BMG Rights Management $25 million in damages.

Cox swiftly filed its appeal arguing that the District Court made several errors in the jury instructions. In addition, it asked for a clarification of the term “repeat infringer” in its favor.

Today the Court of Appeals for the Fourth Circuit ruled on the matter in a mixed decision which could have great consequences.

The Court ruled that the District Court indeed made a mistake in its jury instruction. Specifically, it said that the ISP could be found liable for contributory infringement if it “knew or should have known of such infringing activity.” The Court of Appeals agrees that based on the law, the “should have known” standard is too low.

When this is the case the appeals court can call for a new trial, and that is exactly what it did. This means that the $25 million verdict is off the table, and the same is true for the millions in attorney’s fees and costs BMG was previously granted.

It’s not all good news for Cox though. The most crucial matter in the case is whether Cox has safe harbor protection under the DMCA. In order to qualify, the company is required to terminate accounts of repeat infringers, when appropriate.

Cox argued that subscribers can only be seen as repeat infringers if they’ve been previously adjudicated in court, not if they merely received several takedown notices. This was still an open question, as the term repeat infringer is not clearly defined in the DMCA.

Today, however, the appeals court is pretty clear on the matter. According to Judge Motz’s opinion, shared by HWR, the language of the DMCA suggests that the term “infringer” is not limited to adjudicated infringers.

This is supported by legislative history as the House Commerce and Senate Judiciary Committee Reports both explained that “those who repeatedly or flagrantly abuse their access to the Internet through disrespect for the intellectual property rights of others should know that there is a realistic threat of losing that access.”

“The passage does not suggest that they should risk losing Internet access only once they have been sued in court and found liable for multiple instances of infringement,” Judge Motz writes in her opinion.

Losing Internet access would hardly be a “realistic threat” that would stop someone from pirating if he or she has already been punished several times in court, the argument goes.

This leads the Court of Appeals to conclude that the District Court was right: Cox is not entitled to safe harbor protection because it failed to implement a meaningful repeat infringer policy.

“Cox failed to qualify for the DMCA safe harbor because it failed to implement its policy in any consistent or meaningful way — leaving it essentially with no policy,” Judge Motz writes.

This means that, while Cox gets a new trial, it is still at a severe disadvantage. Not only that, the Court of Appeals interpretation of the repeat infringer question is also a clear signal to other Internet service providers to disconnect pirates based on repeated copyright holder complaints.

Judge Motz’s full opinion is available here (pdf).

Cloudflare is Liable For Pirate Sites & Has No Safe Harbor, Publisher Says

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-is-liable-for-pirate-sites-and-has-no-safe-harbor-publisher-says-180201/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

This includes thousands of “pirate” sites, including the likes of The Pirate Bay, which rely on the U.S.-based company to keep server loads down.

Many rightsholders have complained about Cloudflare’s involvement with these sites and last year adult entertainment publisher ALS Scan took it a step further by dragging the company to court.

ALS accused the CDN service of various types of copyright and trademark infringement, noting that several customers used Cloudflare’s servers to distribute pirated content. While Cloudflare managed to have several counts dismissed, the accusation of contributory copyright infringement remains.

An upcoming trial could determine whether Cloudflare is liable or not, but ALS believes that this isn’t needed. This week, the publisher filed a request for partial summary judgment, asking the court to rule over the matter in advance of a trial.

“The evidence is undisputed,” ALS writes. “Cloudflare materially assists website operators in reproduction, distribution and display of copyrighted works, including infringing copies of ALS works. Cloudflare also masks information about pirate sites and their hosts.”

ALS anticipates that Cloudflare may argue that the company or its clients are protected by the DMCA’s safe harbor provision, but contests this claim. The publisher notes that none of the customers registered the required paperwork at the US Copyright Office.

“Cloudflare may say that the Cloudflare Customer Sites are themselves service providers entitled to DMCA protections, however, none have qualified for safe harbors by submitting the required notices to the US Copyright Office.”

Cloudflare itself has no safe harbor protection either, they argue, because it operates differently than a service provider as defined in the DMCA. It’s a “smart system” which also modifies content, instead of a “dumb pipe,” they claim.

In addition, the CDN provider is accused of failing to implement a reasonable policy that will terminate repeat offenders.

“Cloudflare has no available safe harbors. Even if any safe harbors apply, Cloudflare has lost such safe harbors for failure to adopt and reasonably implement a policy including termination of repeat infringers,” ALS writes.

Previously, the court clarified that under U.S. law the company can be held liable for caching content of copyright infringing websites. Cloudflare’s “infrastructure-level caching” cannot be seen as fair use, it ruled.

ALS now asks the court to issue a partial summary judgment ruling that Cloudflare is liable for contributory copyright infringement. If this motion is granted, a trial would only be needed to establish the damages amount.

The lawsuit is a crucial matter for Cloudflare, and not only because of the potential damages it faces in this case. If Cloudflare loses, other rightsholders are likely to make similar demands, forcing the company to actively police potential pirate sites.

Cloudflare will undoubtedly counter ALS’ claims in a future filing, so this case is far from over.

A copy of ALS Scan’s memorandum in support of the motion for partial summary judgment can be found here (pdf).

Huge Rightsholder Coalition Calls on New EU Presidency to Remove Safe Harbors

Post Syndicated from Andy original https://torrentfreak.com/huge-rightsholder-coalition-calls-on-new-eu-presidency-to-remove-safe-harbors-180131/

While piracy of all kinds is often viewed as a threat to the creative industries, a new type of unauthorized content distribution has been gaining prominence over the past few years.

Sites like YouTube, which allow their users to upload all kinds of material – some of it infringing – are now seen as undermining a broad range of industries that rely on both video and audio to generate revenue.

The cries against such User Uploaded Content (UUC) sites are often led by the music industry, which complains that the safe harbor provisions of copyright law are being abused while UUC sites generate revenue from infringing content. In tandem, while that free content is made available, UUC sites have little or no incentive to pay for official content licenses, and certainly not at a rate considered fair by the industry.

This mismatch, between the price that content industries would like to achieve for licenses and what they actually achieve, is now known as the ‘Value Gap’.

Today, in advance of an EU meeting on the draft Copyright Directive, a huge coalition of rightsholder groups is calling on the new EU Presidency not to pass up an “unmissable opportunity” to find a solution to their problems.

In a letter addressed to the Presidency of the Council of the European Union, which Bulgaria officially took over January 1, 2018, an army of rightsholders lay out their demands.

“We represent musical, audio-visual, literary, visual authors; performers; book, press, musical, scientific, technical and medical publishers; recorded music, film and TV producers; football leagues; broadcasters; distributors and photo agencies. These are at the very heart of Europe’s creative sector,” the groups write.

“We have formed an alliance to campaign for a solution to a major problem which is holding back our sector and jeopardizing future sustainability – the Transfer of Value, otherwise known as the Value Gap.

“User uploaded content services have become vast distributors of our creative works e.g. film, music, photos, broadcasts, text and sport content – all while refusing to negotiate fair or any copyright licences with us as right holders.”

Value Gap Coalition

Featuring groups representing many thousands of rightsholders, the coalition is the broadest yet to call for action against the ‘Value Gap’. Or, to put it another way, to demand a change in the law to prevent sites like YouTube, Facebook and other hosting platforms from “hiding” behind provisions designed to protect them from the infringing activities of others.

“This problem is caused by a lack of clarity surrounding the application of copyright to certain online services and the abuse of European copyright ‘safe harbor’ rules in the e-Commerce Directive (2000/31/EC) by those services,” the coalition writes.

Referencing the EU Copyright Directive proposal tabled by the European Commission in September 2016, the coalition says that UUC services communicating content to the public should be compelled to obtain licenses for that content. If they play an “active role” through promotion or optimization of content, UUC platforms should be denied ‘safe harbors’ under copyright law, they argue.

Noting that there is “no solution” to the problem without the above fixes, the coalition cites last year’s ruling by the Court of Justice of the European Union which found that The Pirate Bay knowingly provide users with a platform to share copyright-infringing links.

“It is important to recall that the underlying policy objective of this legislation is to address the current unfairness in the online market due to the misapplication of copyright liability rules by UUC services. We would therefore like to stress that the focus should remain on finding effective solutions to tackle this issue.

“As an alliance, we look forward to working with your Presidency to achieve an effective solution to the Value Gap problem for the benefit of Europe,” the coalition concludes.

The letter, addressed to Prime Minister Borissov, Minister Pavlova and Minister Banov, arrives in the wake of an alert sounded by several Members of the European Parliament.

Earlier this month they warned that the EU’s proposed mandatory upload filters – which could see UUC sites pre-screen user-uploaded content for infringement – amount to “censorship machines” that will do more harm than good.

The full letter can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Udemy Targets ‘Pirate’ Site Giving Away its Paid Courses For Free

Post Syndicated from Andy original https://torrentfreak.com/udemy-targets-pirate-site-giving-away-its-paid-courses-for-free-180129/

While there’s no shortage of people who advocate free sharing of movies and music, passions are often raised when it comes to the availability of educational information.

Significant numbers of people believe that learning should be open to all and that texts and associated materials shouldn’t be locked away by copyright holders trying to monetize knowledge. Of course, people who make a living creating learning materials see the position rather differently.

A clash of these ideals is brewing in the United States where online learning platform Udemy has been trying to have some of its courses taken down from FreeTutorials.us, a site that makes available premium tutorials and other learning materials for free.

In early December 2017, counsel acting for Udemy and a number of its individual and corporate instructors (Maximilian Schwarzmüller, Academind GmbH, Peter Dalmaris, Futureshock Enterprises, Jose Marcial Portilla, and Pierian Data) wrote to FreeTutorials.us with a DMCA takedown notice.

“Pursuant to 17 U.S.C. § 512(c)(3)(A) of the Digital Millennium Copyright Act (‘DMCA’), this communication serves as a notice of infringement and request for removal of certain web content available on freetutorials.us,” the letter reads.

“I hereby request that you remove or disable access to the material listed in Exhibit A in as expedient a fashion as possible. This communication does not constitute a waiver of any right to recover damages incurred by virtue of any such unauthorized activities, and such rights as well as claims for other relief are expressly retained.”

A small sample of Exhibit A

On January 10, 2018, the same law firm wrote to Cloudflare, which provides services to FreeTutorials. The DMCA notice asked Cloudflare to disable access to the same set of infringing content listed above.

It seems likely that whatever happened next wasn’t to Udemy’s satisfaction. On January 16, an attorney from the same law firm filed a DMCA subpoena at a district court in California. A DMCA subpoena can enable a copyright holder to obtain the identity of an alleged infringer without having to file a lawsuit and without needing a signature from a judge.

The subpoena ordered Cloudflare to hand over “all identifying information identifying the owner, operator and/or contact person(s) associated with the domain www.freetutorials.us, including but not limited to name(s), address(es), telephone number(s), email address(es), Internet protocol connection records, administrative records and billing records from the time the account was established to the present.”

On January 26, the date by which it was ordered to hand over the information, Cloudflare wrote to FreeTutorials with a somewhat late-in-the-day notification.

“We received the attached subpoena regarding freetutorials.us, a domain managed through your Cloudflare account. The subpoena requires us to provide information in our systems related to this website,” the company wrote.

“We have determined that this is a valid subpoena, and we are required to provide the requested information. In accordance with our Privacy Policy, we are informing you before we provide any of the requested subscriber information. We plan to turn over documents in response to the subpoena on January 26th, 2018, unless you intervene in the case.”

With that deadline having passed last Friday, it’s safe to say that Cloudflare has complied with the subpoena, as the law requires. However, TorrentFreak spoke with FreeTutorials, who told us that the company doesn’t hold anything useful on them.

“No, they have nothing,” the team explained.

Noting that they’ll soon dispense with the services of Cloudflare, the team confirmed that they had received emails from Udemy and its instructors but hadn’t done a lot in response.

“How about a ‘NO’? was our answer to all the DMCA takedown requests from Udemy and its Instructors,” they added.

FreeTutorials (FTU) are affiliated with FreeCoursesOnline (FCO) and seem passionate about what they do. In common with others who distribute learning materials online, they express a belief in free education for all, irrespective of financial resources.

“We, FTU and FCO, are a group of seven members assorted as a team from different countries and cities. We are JN, SRZ aka SunRiseZone, Letap, Lihua Google Drive, Kaya, Zinnia, Faiz MeemBazooka,” a spokesperson revealed.

“We’re all members and colleagues and we also have our own daily work and business stuff to do. We have been through that phase of life when we didn’t have enough money to buy books and get tuition or even apply for a good course that we always wanted to have, so FTU & FCO are just our vision to provide Free Education For Everyone.

“We would love to change our priorities towards our current and future projects, only if we manage to get some faithful FTU’ers to join in and help us to grow together and make FTU a place it should be.”

TorrentFreak requested comment from Udemy but, at the time of publication, we had yet to hear back. However, we did manage to get in touch with Jonathan Levi, a Udemy instructor who sent this takedown notice to the site in October 2017:

“I’m writing to you on behalf of SuperHuman Enterprises, LLC. You are in violation of our copyright, using our images, and linking to pirated copies of our courses. Remove them IMMEDIATELY or face severe legal action….You have 48 hours to comply,” he wrote, adding:

“And in case you’re going to say I don’t have evidence that I own the files, it’s my fucking face in the videos.”

Levi says that the site had been non-responsive, so now things are being taken to the next level.

“They don’t reply to takedowns, so we’ve joined a class action lawsuit against FTU lead by Udemy and a law firm specializing in this type of thing,” Levi concludes.

Piracy Can Help Music Sales of Many Artists, Research Shows

Post Syndicated from Ernesto original https://torrentfreak.com/piracy-can-help-music-sales-of-many-artists-research-shows-180128/

The debate over whether online piracy helps or hurts music sales has been dragging on for nearly two decades now.

The issue has been researched extensively with both positive and negative effects being reported, often varying based on the type of artist, music genre and media, among other variables.

One of the more extensive studies was published this month in the peer-reviewed Information Economics and Policy journal, by Queen’s University economics researcher Jonathan Lee.

In a paper titled ‘Purchase, pirate, publicize: Private-network music sharing and market album sales’, he examined the effect of BitTorrent-based piracy on both digital and physical music sales.

We covered an earlier version of the study two years ago when it was still a work in progress. With updates to the research methods and the data sample, the results are now clearer.

The file-sharing data was obtained from an unnamed private BitTorrent tracker and covers a data set of 250,000 albums and more than five million downloads. These were matched to US sales data for thousands of albums provided by Nielsen SoundScan.

By refining the estimation approach and updating the matching technique, the final version of the paper shows some interesting results.

Based on the torrent tracker data, Lee finds that piracy can boost sales for mid-tier artists, both for physical CDs and digital downloads. For the most popular artists, the effect is reversed. In both cases, the impact is largest for digital sales.

“I now find that top artists are harmed and mid-tier artists may be helped in both markets, but that these effects are larger for digital sales,” Lee tells TorrentFreak. “This is consistent with the idea that people are more willing to switch between digital piracy and digital sales than between digital piracy and physical CDs.”
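To make the tier-split result more concrete, below is a minimal, purely illustrative sketch of the kind of estimation being described: album-level download counts are paired with sales figures, and the piracy effect is allowed to differ between mid-tier and top-tier artists. To be clear, this is not Lee’s actual code, data, or model specification; every column name and number is a hypothetical stand-in, and the data is simulated only so the example runs end to end.

```python
# Illustrative sketch only -- not the paper's code, data, or specification.
# Idea: pair album-level download counts with sales figures and let the
# piracy effect differ between mid-tier and top-tier artists.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_albums = 2000

# Hypothetical album-level data: log downloads from a tracker and a flag
# marking whether the artist is "top tier" by prior popularity.
albums = pd.DataFrame({
    "log_downloads": rng.normal(5.0, 1.5, n_albums),
    "top_tier": rng.integers(0, 2, n_albums),
})

# Simulated sales so the sketch runs standalone: downloads mildly help
# mid-tier albums and mildly hurt top-tier ones, plus noise.
slope = np.where(albums["top_tier"] == 1, -0.10, 0.05)
albums["log_sales"] = 8.0 + slope * albums["log_downloads"] + rng.normal(0.0, 1.0, n_albums)

# Interacting downloads with the tier flag gives each group its own slope:
# the log_downloads coefficient is the mid-tier effect, and the interaction
# term measures how much that effect shifts for top-tier artists.
model = smf.ols("log_sales ~ log_downloads * top_tier", data=albums).fit()
print(model.params)
```

In this toy output, a positive coefficient on log_downloads alongside a negative interaction term would echo the paper’s qualitative finding: sharing helps the mid-tier and hurts the top.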

The findings lead to the conclusion that there is no ideal ‘one-size-fits-all’ response to piracy. In fact, some unauthorized sharing may be a good thing.

This is in line with observations from musicians themselves in recent years. Several top artists have acknowledged the positive effects of piracy, including Ed Sheeran, who recently said that he owes his career to it.

“I know that’s a bad thing to say, because I’m part of a music industry that doesn’t like illegal file sharing,” Sheeran said in an interview with CBS. “Illegal file sharing was what made me. It was students in England going to university, sharing my songs with each other.”

Sheeran sharing on TPB

Today, Sheeran is in a totally different position of course. As one of the top artists, he would now be hurt by piracy. However, the new stars of tomorrow may still reap the benefits.

According to the researcher, the music industry should realize that shutting down pirate sites may not always be the best option. On the contrary, file-sharing sites may be useful as promotional platforms in some cases.

“Following above, a policy of total shutdown of private file sharing networks seems excessively costly (compared with their relatively small impact on sales) and unwise (as a one-size-fits-all policy). It would be better to make legal consumption more convenient, reducing the demand for piracy as an alternative to purchasing,” Lee tells us.

“It would also be smart to experiment with releasing music onto piracy networks themselves, especially for up-and-coming artists, similar to the free promotion afforded by commercial radio.”

The researcher makes another interesting extrapolation from the findings. In recent years, some labels and artists have signed exclusive deals with particular streaming platforms. This means that content is not available everywhere, and this fragmentation may make piracy look more appealing.

“Here you can view piracy as a non-fragmented alternative platform to Spotify et al. Thus consumers will have a strong incentive to use a single non-fragmented platform (piracy) over having multiple subscriptions to fragmented platforms,” Lee says.

It would be better for the labels to publish their music on all platforms, and to make these more appealing and convenient than the pirate alternative.

The data used for the research was collected several years ago, before the big streaming boom, so the results may be different today. However, it is clear that the effect of piracy on sales is not as uniform as the music industry often portrays it.
