Tag Archives: psa

Federate Database User Authentication Easily with IAM and Amazon Redshift

Post Syndicated from Thiyagarajan Arumugam original https://aws.amazon.com/blogs/big-data/federate-database-user-authentication-easily-with-iam-and-amazon-redshift/

Managing database users through federation allows you to handle authentication and authorization centrally. Amazon Redshift now supports database authentication with IAM, enabling user authentication through enterprise federation. There is no need to manage separate database users and passwords, which further eases database administration. You can now manage users outside of AWS and authenticate them for access to an Amazon Redshift data warehouse by integrating IAM authentication with a third-party SAML 2.0 identity provider (IdP), such as AD FS, PingFederate, or Okta. In addition, database users can be created automatically at their first login, based on corporate permissions.

In this post, I demonstrate how you can extend the federation to enable single sign-on (SSO) to the Amazon Redshift data warehouse.

SAML and Amazon Redshift

AWS supports Security Assertion Markup Language (SAML) 2.0, an open standard for identity federation used by many IdPs. SAML enables federated SSO, so your users can sign in to the AWS Management Console or make programmatic calls to AWS API actions by using assertions from a SAML-compliant IdP. For example, if you use Microsoft Active Directory for corporate directories, you may be familiar with how Active Directory and AD FS work together to enable federation. For more information, see the AWS Security Blog post Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0.

Amazon Redshift now provides the GetClusterCredentials API operation, which allows you to generate temporary database user credentials for authentication. You can set up an IAM permissions policy that generates these credentials for connecting to Amazon Redshift. Extending IAM authentication, you can configure federation of AWS access through a SAML 2.0-compliant IdP. An IAM role can be configured to permit federated users to call the GetClusterCredentials action and generate temporary credentials to log in to Amazon Redshift databases. You can also set up policies to restrict access to Amazon Redshift clusters, databases, database user names, and user groups.

Amazon Redshift federation workflow

In this post, I demonstrate how you can use a JDBC- or ODBC-based SQL client to log in to the Amazon Redshift cluster using this feature. The SQL clients used with the Amazon Redshift JDBC or ODBC drivers automatically manage the process of calling the GetClusterCredentials action, retrieving the database user credentials, and establishing a connection to your Amazon Redshift database. You can also use your database application to programmatically call the GetClusterCredentials action, retrieve database user credentials, and connect to the database. I demonstrate these features using an example company to show how different database user accounts can be managed easily using federation.

The following diagram shows how the SSO process works:

  1. JDBC/ODBC
  2. Authenticate using Corp Username/Password
  3. IdP sends SAML assertion
  4. Call STS to assume role with SAML
  5. STS Returns Temp Credentials
  6. Use Temp Credentials to get Temp cluster credentials
  7. Connect to Amazon Redshift using temp credentials
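
For programmatic clients, steps 4 through 6 above can be sketched with the AWS SDK for Python (boto3). This is a minimal illustration, not a production implementation: the role ARN, principal (IdP) ARN, and base64-encoded SAML assertion are placeholders you obtain from your own IdP integration.

```python
def redshift_credentials_via_saml(saml_assertion, role_arn, principal_arn,
                                  cluster_id="examplecorp-dw",
                                  db_user="nancy", db_name="analytics",
                                  region="us-west-2"):
    """Exchange a SAML assertion for temporary Redshift database credentials."""
    import boto3  # deferred so the sketch can be read without the SDK installed

    # Steps 4-5: trade the IdP's SAML assertion for temporary AWS credentials.
    sts = boto3.client("sts", region_name=region)
    aws_creds = sts.assume_role_with_saml(
        RoleArn=role_arn,
        PrincipalArn=principal_arn,
        SAMLAssertion=saml_assertion,  # base64-encoded assertion from the IdP
    )["Credentials"]

    # Step 6: use those credentials to request temporary database credentials.
    redshift = boto3.client(
        "redshift",
        region_name=region,
        aws_access_key_id=aws_creds["AccessKeyId"],
        aws_secret_access_key=aws_creds["SecretAccessKey"],
        aws_session_token=aws_creds["SessionToken"],
    )
    # Step 7 then connects using the returned DbUser and DbPassword.
    return redshift.get_cluster_credentials(
        DbUser=db_user, DbName=db_name, ClusterIdentifier=cluster_id
    )
```

The JDBC and ODBC drivers perform this exchange for you; calling the APIs directly is useful only for custom applications.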

Walkthrough

Example Corp. is using Active Directory (IdP host: demo.examplecorp.com) to manage federated access for users in its organization. It has AWS account 123456789012 and currently manages an Amazon Redshift cluster with the cluster ID examplecorp-dw and the database analytics in the us-west-2 region for its Sales and Data Science teams. It wants the following access:

  • Sales users can access the examplecorp-dw cluster using the sales_grp database group
  • Sales users access examplecorp-dw through a JDBC-based SQL client
  • Sales users access examplecorp-dw through an ODBC connection, for their reporting tools
  • Data Science users access the examplecorp-dw cluster using the data_science_grp database group.
  • Partners access the examplecorp-dw cluster and query using the partner_grp database group.
  • Partners are not federated through Active Directory and are provided with separate IAM user credentials (with IAM user name examplecorpsalespartner).
  • Partners can connect to the examplecorp-dw cluster programmatically, using a language such as Python.
  • All users are automatically created in Amazon Redshift when they log in for the first time.
  • (Optional) Internal users do not specify database user or group information in their connection string. It is automatically assigned.
  • Data warehouse users can use SSO for the Amazon Redshift data warehouse using the preceding permissions.

Step 1:  Set up IdPs and federation

The Enabling Federation to AWS Using Windows Active Directory post demonstrated how to prepare Active Directory and enable federation to AWS. Using those instructions, you can establish trust between your AWS account and the IdP and enable user access to AWS using SSO.  For more information, see Identity Providers and Federation.

For this walkthrough, assume that Example Corp. has already configured SSO to its AWS account 123456789012 for its Active Directory domain demo.examplecorp.com. The Sales and Data Science teams are not required to specify database user and group information in the connection string. The connection string can be configured by adding SAML attribute elements to your IdP. Configuring these optional attributes enables internal users to conveniently avoid providing the DbUser and DbGroup parameters when they log in to Amazon Redshift.

The user-name attribute can be set up as follows, with a user ID (for example, nancy) or an email address (for example, [email protected]):

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbUser">  
  <AttributeValue>user-name</AttributeValue>
</Attribute>

The AutoCreate attribute can be defined as follows:

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/AutoCreate">
    <AttributeValue>true</AttributeValue>
</Attribute>

The sales_grp database group can be included as follows:

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbGroups">
    <AttributeValue>sales_grp</AttributeValue>
</Attribute>

For more information about attribute element configuration, see Configure SAML Assertions for Your IdP.

Step 2: Create IAM roles for access to the Amazon Redshift cluster

The next step is to create IAM policies with permissions to call GetClusterCredentials and provide authorization for Amazon Redshift resources. To grant a SQL client the ability to retrieve the cluster endpoint, region, and port automatically, include the redshift:DescribeClusters action with the Amazon Redshift cluster resource in the IAM role.  For example, users can connect to the Amazon Redshift cluster using a JDBC URL without the need to hardcode the Amazon Redshift endpoint:

Previous:  jdbc:redshift://endpoint:port/database

Current:  jdbc:redshift:iam://clustername:region/dbname
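
As a small illustration of the new format, the IAM URL can be assembled from the cluster name, region, and database name alone; the endpoint and port are resolved by the driver through redshift:DescribeClusters. The helper below is hypothetical, not part of the driver:

```python
def build_iam_jdbc_url(cluster_name, region, db_name):
    """Build an IAM-authentication JDBC URL for Amazon Redshift."""
    # No endpoint or port: the driver looks them up via DescribeClusters.
    return "jdbc:redshift:iam://%s:%s/%s" % (cluster_name, region, db_name)

# For the example cluster:
# build_iam_jdbc_url("examplecorp-dw", "us-west-2", "analytics")
# -> "jdbc:redshift:iam://examplecorp-dw:us-west-2/analytics"
```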

Use IAM to create the following policies. You can also use an existing user or role and assign these policies. For example, if you already created an IAM role for IdP access, you can attach the necessary policies to that role. Here is the policy created for sales users for this example:

Sales_DW_IAM_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
            ]
        }
    ]
}

The policy uses the following parameter values:

  • Region: us-west-2
  • AWS Account: 123456789012
  • Cluster name: examplecorp-dw
  • Database group: sales_grp
  • IAM role: AIDIODR4TAW7CSEXAMPLE
Each statement in the policy is described below.

{
    "Effect": "Allow",
    "Action": [
        "redshift:DescribeClusters"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
    ]
}

Allows users to retrieve the cluster endpoint, region, and port automatically for the Amazon Redshift cluster examplecorp-dw. This specification uses the resource format arn:aws:redshift:region:account-id:cluster:clustername. For example, the SQL client JDBC URL can be specified in the format jdbc:redshift:iam://clustername:region/dbname.

For more information, see Amazon Resource Names.

{
    "Effect": "Allow",
    "Action": [
        "redshift:GetClusterCredentials"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
        "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
    ],
    "Condition": {
        "StringEquals": {
            "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
        }
    }
}

Generates a temporary token to authenticate to the examplecorp-dw cluster. The resource “arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}” ensures that the database user name matches the corporate user name for that user. This resource is specified using the format arn:aws:redshift:region:account-id:dbuser:clustername/dbusername.

The Condition block requires that the AWS user ID match “AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com”, so that individual users can authenticate only as themselves. AIDIODR4TAW7CSEXAMPLE is the unique ID of the IAM role that has the Sales_DW_IAM_Policy policy attached.

{
    "Effect": "Allow",
    "Action": [
        "redshift:CreateClusterUser"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
    ]
}

Automatically creates database users in examplecorp-dw when they log in for the first time. Subsequent logins reuse the existing database user.
{
    "Effect": "Allow",
    "Action": [
        "redshift:JoinGroup"
    ],
    "Resource": [
        "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
    ]
}

Allows sales users to join the sales_grp database group through the resource “arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp”, which is specified in the format arn:aws:redshift:region:account-id:dbgroup:clustername/dbgroupname.

Similar policies can be created for Data Science users with access to join the data_science_grp group in examplecorp-dw. You can now attach the Sales_DW_IAM_Policy policy to the role that is mapped to the IdP application for SSO. For more information about how to define the claim rules, see Configuring SAML Assertions for the Authentication Response.
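
Because the Data Science policy differs from Sales_DW_IAM_Policy only in the database group it can join, one way to avoid drift between the two is to generate the policy document from a small template. The following sketch is an illustration, not part of the original setup (the account ID, region, and role unique ID are the example values from this post):

```python
def team_policy(db_group, cluster="examplecorp-dw", region="us-west-2",
                account="123456789012", role_id="AIDIODR4TAW7CSEXAMPLE"):
    """Build the per-team IAM policy document as a Python dict."""
    cluster_arn = "arn:aws:redshift:%s:%s:cluster:%s" % (region, account, cluster)
    dbuser_arn = ("arn:aws:redshift:%s:%s:dbuser:%s/${redshift:DbUser}"
                  % (region, account, cluster))
    dbgroup_arn = ("arn:aws:redshift:%s:%s:dbgroup:%s/%s"
                   % (region, account, cluster, db_group))
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["redshift:DescribeClusters"],
             "Resource": [cluster_arn]},
            {"Effect": "Allow", "Action": ["redshift:GetClusterCredentials"],
             "Resource": [cluster_arn, dbuser_arn],
             "Condition": {"StringEquals": {
                 "aws:userid": "%s:${redshift:DbUser}@examplecorp.com" % role_id}}},
            {"Effect": "Allow", "Action": ["redshift:CreateClusterUser"],
             "Resource": [dbuser_arn]},
            {"Effect": "Allow", "Action": ["redshift:JoinGroup"],
             "Resource": [dbgroup_arn]},
        ],
    }

# team_policy("data_science_grp") mirrors Sales_DW_IAM_Policy,
# scoped to the data_science_grp database group.
```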

Because partners are not authorized using Active Directory, they are provided with IAM credentials and added to the partner_grp database group. The Partner_DW_IAM_Policy is attached to the IAM users for partners. The following policy allows partners to log in using the IAM user name as the database user name.

Partner_DW_IAM_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ],
            "Condition": {
                "StringEquals": {
                    "redshift:DbUser": "${aws:username}"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/partner_grp"
            ]
        }
    ]
}

"redshift:DbUser": "${aws:username}" forces an IAM user to use the IAM user name as the database user name.

With the previous steps configured, you can now establish the connection to Amazon Redshift through JDBC– or ODBC-supported clients.

Step 3: Set up database user access

Before you start connecting to Amazon Redshift using the SQL client, set up the database groups for appropriate data access. Log in to your Amazon Redshift database as superuser to create a database group, using CREATE GROUP.

Log in to examplecorp-dw/analytics as superuser and create the following groups and users:

CREATE GROUP sales_grp;
CREATE GROUP data_science_grp;
CREATE GROUP partner_grp;

Use the GRANT command to define access permissions to database objects (tables/views) for the preceding groups.
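
For example, assuming hypothetical sales objects, the groups could be granted read access as follows (the schema and table names are illustrative, not from the original setup):

```sql
-- Illustrative object names; adjust to your own schema.
GRANT USAGE ON SCHEMA sales TO GROUP sales_grp, GROUP data_science_grp;
GRANT SELECT ON ALL TABLES IN SCHEMA sales TO GROUP sales_grp, GROUP data_science_grp;
GRANT SELECT ON sales.orders TO GROUP partner_grp;
```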

Step 4: Connect to Amazon Redshift using the JDBC SQL client

Assume that sales user “nancy” is using the SQL Workbench client and JDBC driver to log in to the Amazon Redshift data warehouse. The following steps help set up the client and establish the connection:

  1. Download the latest Amazon Redshift JDBC driver from the Configure a JDBC Connection page.
  2. Build the JDBC URL with the IAM option in the following format:
    jdbc:redshift:iam://examplecorp-dw:us-west-2/sales_db

Because the redshift:DescribeClusters action is assigned to the preceding IAM roles, it automatically resolves the cluster endpoints and the port. Otherwise, you can specify the endpoint and port information in the JDBC URL, as described in Configure a JDBC Connection.

Identify the following JDBC options for providing the IAM credentials (see the “Prepare your environment” section) and configure them in the SQL Workbench connection profile:

plugin_name=com.amazon.redshift.plugin.AdfsCredentialsProvider
idp_host=demo.examplecorp.com  (the name of the corporate identity provider host)
idp_port=443  (the port of the corporate identity provider host)
user=examplecorp\nancy  (corporate user name)
password=***  (corporate user password)

The SQL Workbench configuration looks similar to the following screenshot:

Now, “nancy” can connect to examplecorp-dw by authenticating with the corporate Active Directory. Because the SAML attribute elements are already configured for nancy, she logs in as database user nancy and is assigned to the sales_grp database group. Similarly, other Sales and Data Science users can connect to the examplecorp-dw cluster. The Amazon Redshift custom ODBC driver can also be used to connect with a SQL client. For more information, see Configure an ODBC Connection.

Step 5: Connect to Amazon Redshift using a JDBC SQL client and IAM credentials

This optional step is necessary only when you want to enable users that are not authenticated with Active Directory. Partners are provided with IAM credentials that they can use to connect to the examplecorp-dw Amazon Redshift cluster. These IAM users have the Partner_DW_IAM_Policy attached, which allows them to join the partner_grp database group. The following JDBC URL enables them to connect to the Amazon Redshift cluster:

jdbc:redshift:iam://examplecorp-dw:us-west-2/analytics?AccessKeyID=XXX&SecretAccessKey=YYY&DbUser=examplecorpsalespartner&DbGroup=partner_grp&AutoCreate=true

The AutoCreate option automatically creates a new database user the first time the partner logs in. There are several other options available to conveniently specify the IAM user credentials. For more information, see Options for providing IAM credentials.
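
These URL options map directly onto parameters of the GetClusterCredentials API if you call it yourself. A sketch of the equivalent boto3 parameters follows, with the AWS call left commented out:

```python
def partner_credentials_kwargs(db_user="examplecorpsalespartner",
                               db_name="analytics",
                               cluster_id="examplecorp-dw"):
    """Keyword arguments equivalent to the JDBC URL options above."""
    return {
        "DbUser": db_user,                # DbUser=...
        "DbName": db_name,                # database name in the URL path
        "ClusterIdentifier": cluster_id,
        "AutoCreate": True,               # AutoCreate=true
        "DbGroups": ["partner_grp"],      # DbGroup=partner_grp
        "DurationSeconds": 3600,          # credential lifetime in seconds
    }

# import boto3
# client = boto3.client("redshift", region_name="us-west-2")
# creds = client.get_cluster_credentials(**partner_credentials_kwargs())
```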

Step 6: Connect to Amazon Redshift using an ODBC client for Microsoft Windows

Assume that another sales user “uma” is using an ODBC-based client to log in to the Amazon Redshift data warehouse using Example Corp Active Directory. The following steps help set up the ODBC client and establish the Amazon Redshift connection in a Microsoft Windows operating system connected to your corporate network:

  1. Download and install the latest Amazon Redshift ODBC driver.
  2. Create a system DSN entry.
    1. In the Start menu, locate the driver folder or folders:
      • Amazon Redshift ODBC Driver (32-bit)
      • Amazon Redshift ODBC Driver (64-bit)
      • If you installed both drivers, you have a folder for each driver.
    2. Choose ODBC Administrator, and then type your administrator credentials.
    3. To configure the driver for all users on the computer, choose System DSN. To configure the driver for your user account only, choose User DSN.
    4. Choose Add.
  3. Select the Amazon Redshift ODBC driver, and choose Finish. Configure the following attributes:
    Data Source Name=any friendly name to identify the ODBC connection
    Database=analytics
    user=uma  (corporate user name)
    Auth Type=Identity Provider: AD FS
    password=leave blank (Windows automatically authenticates)
    Cluster ID: examplecorp-dw
    idp_host=demo.examplecorp.com  (the name of the corporate IdP host)

This configuration looks like the following:

  4. Choose OK to save the ODBC connection.
  5. Verify that uma is set up with the SAML attributes, as described in the “Set up IdPs and federation” section.

The user uma can now use this ODBC connection to connect to the Amazon Redshift cluster from any ODBC-based tool or reporting tool, such as Tableau. Internally, uma authenticates using the IAM role that has the Sales_DW_IAM_Policy attached and is assigned to the sales_grp database group.

Step 7: Connect to Amazon Redshift using Python and IAM credentials

To enable partners to connect to the examplecorp-dw cluster programmatically, use Python on a computer such as an Amazon EC2 instance. Reuse the IAM users that have the Partner_DW_IAM_Policy policy (defined in Step 2) attached.

The following steps show this set up on an EC2 instance:

  1. Launch a new EC2 instance with an IAM role that has the Partner_DW_IAM_Policy attached, as described in Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances. Alternatively, you can attach an existing IAM role to an EC2 instance.
  2. This example uses Python PostgreSQL Driver (PyGreSQL) to connect to your Amazon Redshift clusters. To install PyGreSQL on Amazon Linux, use the following command as the ec2-user:
    sudo easy_install pip
    sudo yum install postgresql postgresql-devel gcc python-devel
    sudo pip install PyGreSQL

  3. The following code snippet demonstrates programmatic access to Amazon Redshift for partner users:
    #!/usr/bin/env python
    """
    Usage:
    python redshiftscript.py

    * Copyright 2014, Amazon.com, Inc. or its affiliates. All Rights Reserved.
    *
    * Licensed under the Amazon Software License (the "License").
    * You may not use this file except in compliance with the License.
    * A copy of the License is located at
    *
    * http://aws.amazon.com/asl/
    *
    * or in the "license" file accompanying this file. This file is distributed
    * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
    * express or implied. See the License for the specific language governing
    * permissions and limitations under the License.
    """

    import pg      # PyGreSQL classic interface
    import boto3

    REGION = 'us-west-2'
    CLUSTER_IDENTIFIER = 'examplecorp-dw'
    DB_NAME = 'sales_db'
    DB_USER = 'examplecorpsalespartner'

    set_timeout_stmt = "set statement_timeout = 1200000"

    def conn_to_rs(host, port, db, usr, pwd, timeout=set_timeout_stmt):
        print("Connecting to %s:%s:%s as %s" % (host, port, db, usr))
        rs_conn = pg.connect(dbname=db, host=host, port=int(port),
                             user=usr, passwd=pwd)
        rs_conn.query(timeout)
        return rs_conn

    def main():
        # describe the cluster to discover its endpoint, then fetch
        # temporary database credentials through IAM
        redshift_client = boto3.client('redshift', region_name=REGION)
        cluster_details = redshift_client.describe_clusters(
            ClusterIdentifier=CLUSTER_IDENTIFIER)
        credentials = redshift_client.get_cluster_credentials(
            DbUser=DB_USER,
            DbName=DB_NAME,
            ClusterIdentifier=CLUSTER_IDENTIFIER,
            DurationSeconds=3600)
        rs_host = cluster_details['Clusters'][0]['Endpoint']['Address']
        rs_port = cluster_details['Clusters'][0]['Endpoint']['Port']
        # connect to the Amazon Redshift cluster with the temporary credentials
        conn = conn_to_rs(rs_host, rs_port, DB_NAME,
                          credentials['DbUser'], credentials['DbPassword'])
        # execute a query and fetch the results
        result = conn.query("SELECT sysdate AS dt")
        for dt_val in result.getresult():
            print(dt_val)
        # close the Amazon Redshift connection
        conn.close()

    if __name__ == "__main__":
        main()

You can save this Python program in a file (redshiftscript.py) and execute it at the command line as ec2-user:

python redshiftscript.py

Now partners can connect to the Amazon Redshift cluster using the Python script, and authentication is federated through the IAM user.

Summary

In this post, I demonstrated how to use federated access with Active Directory and IAM roles to enable single sign-on to an Amazon Redshift cluster. I also showed how partners outside an organization can be managed easily using IAM credentials. The GetClusterCredentials API action, now supported by Amazon Redshift, lets you manage a large number of database users who log in with their corporate credentials, so you don't have to maintain separate database user accounts.

Although this post demonstrated the integration of IAM with AD FS and Active Directory, you can replicate this solution with your choice of SAML 2.0 third-party identity provider (IdP), such as PingFederate or Okta. For the different supported federation options, see Configure SAML Assertions for Your IdP.

If you have questions or suggestions, please comment below.


Additional Reading

Learn how to establish federated access to your AWS resources by using Active Directory user attributes.


About the Author

Thiyagarajan Arumugam is a Big Data Solutions Architect at Amazon Web Services and designs customer architectures to process data at scale. Prior to AWS, he built data warehouse solutions at Amazon.com. In his free time, he enjoys all outdoor sports and practices the Indian classical drum mridangam.

 

Piracy Narrative Isn’t About Ethics Anymore, It’s About “Danger”

Post Syndicated from Andy original https://torrentfreak.com/piracy-narrative-isnt-about-ethics-anymore-its-about-danger-170812/

Over the years there have been almost endless attempts to stop people from accessing copyright-infringing content online. Campaigns have come and gone and almost two decades later the battle is still ongoing.

Early on, when panic enveloped the music industry, the campaigns centered around people getting sued. Grabbing music online for free could be costly, the industry warned, while parading the heads of a few victims on pikes for the world to see.

Periodically, however, the aim has been to appeal to the public’s better nature. The idea is that people essentially want to do the ‘right thing’, so once they understand that largely hard-working Americans are losing their livelihoods, people will stop downloading from The Pirate Bay. For some, this probably had the desired effect but millions of people are still getting their fixes for free, so the job isn’t finished yet.

In more recent years, notably since the MPAA and RIAA had their eyes blacked in the wake of SOPA, the tone has shifted. In addition to educating the public, torrent and streaming sites are increasingly being painted as enemies of the public they claim to serve.

Several studies, largely carried out on behalf of the Digital Citizens Alliance (DCA), have claimed that pirate sites are hotbeds of malware, baiting consumers in with tasty pirate booty only to offload trojans, viruses, and God-knows-what. These reports have been ostensibly published as independent public interest documents but this week an advisor to the DCA suggested a deeper interest for the industry.

Hemanshu Nigam is a former federal prosecutor, ex-Chief Security Officer for News Corp and Fox Interactive Media, and former VP Worldwide Internet Enforcement at the MPAA. In an interview with Deadline this week, he spoke about alleged links between pirate sites and malware distributors. He also indicated that warning people about the dangers of pirate sites has become Hollywood’s latest anti-piracy strategy.

“The industry narrative has changed. When I was at the MPAA, we would tell people that stealing content is wrong and young people would say, yeah, whatever, you guys make a lot of money, too bad,” he told the publication.

“It has gone from an ethical discussion to a dangerous one. Now, your parents’ bank account can be raided, your teenage daughter can be spied on in her bedroom and extorted with the footage, or your computer can be locked up along with everything in it and held for ransom.”

Nigam’s stance isn’t really a surprise since he’s currently working for the Digital Citizens Alliance as an advisor. In turn, the Alliance is at least partly financed by the MPAA. There’s no suggestion whatsoever that Nigam is involved in any propaganda effort, but recent signs suggest that the DCA’s work in malware awareness is more about directing people away from pirate sites than protecting them from the alleged dangers within.

That being said and despite the bias, it’s still worth giving experts like Nigam an opportunity to speak. Largely thanks to industry efforts with brands, pirate sites are increasingly being forced to display lower-tier ads, which can be problematic. On top, some sites’ policies mean they don’t deserve any visitors at all.

In the Deadline piece, however, Nigam alleges that hackers have previously reached out to pirate websites offering $200 to $5000 per day “depending on the size of the pirate website” to have the site infect users with malware. If true, that’s a serious situation and people who would ordinarily use ‘pirate’ sites would definitely appreciate the details.

For example, to which sites did hackers make this offer and, crucially, which sites turned down the offer and which ones accepted?

It’s important to remember that pirates are just another type of consumer and they would boycott sites in a heartbeat if they discovered they’d been paid to infect them with malware. But, as usual, the claims are extremely light in detail. Instead, there’s simply a blanket warning to stay away from all unauthorized sites, which isn’t particularly helpful.

In some cases, of course, operational security will prevent some details coming to light but without these, people who don’t get infected on a ‘pirate’ site (the vast majority) simply won’t believe the allegations. As the author of the Deadline piece pointed out, it’s a bit like Reefer Madness all over again.

The point here is that without hard independent evidence to back up these claims, with reports listing sites alongside the malware they’re supposed to have spread and when, few people will respond to perceived scaremongering. Free content trumps a few distant worries almost every time, whether that involves malware or the threat of a lawsuit.

It’ll be up to the DCA and their MPAA paymasters to consider whether the approach is working but thus far, not even having government heavyweights on board has helped.

Earlier this year the DCA launched a video campaign, enrolling 15 attorneys general to publish their own anti-piracy PSAs on YouTube. Thus far, interest has been minimal, to say the least.

At the time of writing, the 15 PSAs have 3,986 views in total, with 2,441 of those coming from a single video by Wisconsin Attorney General Brad Schimel. Despite that relative success, even his video got slammed, with 2 upvotes and 127 downvotes.

A few of the other videos have a couple of hundred views each, but more than half have fewer than 70. Perhaps most worryingly for the DCA, apart from the Schimel PSA, none have any upvotes at all, only downvotes. It’s unclear who the viewers were, but it seems reasonable to conclude they weren’t entertained.

The bottom line is nobody likes malware or having their banking details stolen but yet again, people who claim to have the public interest at heart aren’t actually making a difference on the ground. It could be argued that groups advocating online safety should be publishing guides on how to stay protected on the Internet period, not merely advising people to stay away from certain sites.

But of course, that wouldn’t achieve the goals of the MPAA Digital Citizens Alliance.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hackers Use Pirate Sites to Ruin Your Life, State Attorneys General Warn

Post Syndicated from Ernesto original https://torrentfreak.com/hackers-use-pirate-sites-to-ruin-your-life-state-attorneys-general-warn-170727/

In recent years copyright holders have tried many things to dissuade the public from visiting pirate websites.

They often claim that piracy costs the entertainment industry thousands of jobs, for example. Another strategy is to scare the public at large directly, by pointing out all the ills people may encounter on pirate sites.

The Digital Citizens Alliance (DCA), which has deep ties to the content industries, is a proponent of the latter strategy. The group has released a variety of reports pointing out that pirate sites are a hotbed for malware, identity theft, hacking and other evils.

To add some political weight to this message, the DCA recently helped to launch a new series of public service announcements where a group of 15 State Attorneys General warn the public about these threats.

The participating Attorneys General include Arizona’s Mark Brnovich, Kentucky’s Andy Bashear, Washington DC’s Karl Racine, and Wisconsin’s Brad Schimel, who all repeat the exact same words in their PSAs.

“Nowadays we all have to worry about cybersecurity. Hackers are always looking for new ways to break into our computers. Something as simple as visiting pirate websites can put your computer at risk.”

“Hackers use pirate websites to infect your computer and steal your ID and financial information, or even take over your computer’s camera without you knowing it,” the Attorneys General add.

Organized by the Digital Citizens Alliance, the campaign in question runs on TV and radio in several states and also appears on social media during the summer.

The warnings, while over dramatized, do raise a real concern. There are a lot of pirate sites that have lower-tier advertising, where malware regularly slips through. And some ads lead users to fake websites where people should probably not leave their credit card information.

Variety points out that the Attorneys General are tasked with keeping their citizens safe, so the PSA’s message is certainly fitting.

Still, one has to wonder whether the main driver of these ads is online safety. Could perhaps the interests of the entertainment industry play a role too? It certainly won’t be the first time that State Attorneys General have helped out Hollywood.

Just a few years ago the MPAA secretly pushed Mississippi State Attorney General Jim Hood to revive SOPA-like anti-piracy efforts in the United States. That was part of the MPAA’s “Project Goliath,” which was aimed at “convincing state prosecutors to take up the fight” against Google, under an anti-piracy umbrella.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

“Kodi Boxes Are a Fire Risk”: Awful Timing or Opportunism?

Post Syndicated from Andy original https://torrentfreak.com/kodi-boxes-are-a-fire-risk-awful-timing-or-opportunism-170618/

Anyone who saw the pictures this week couldn’t have failed to be moved by the plight of Londoners caught up in the Grenfell Tower inferno. The apocalyptic images are likely to stay with people for years to come and the scars for those involved may never heal.

As the building continued to smolder and the death toll increased, UK tabloids provided wall-to-wall coverage of the disaster. On Thursday, however, The Sun took a short break to put out yet another sensationalized story about Kodi. Given the week’s events, it was bound to raise eyebrows.

“HOT GOODS: Kodi boxes are a fire hazard because thousands of IPTV devices nabbed by customs ‘failed UK electrical standards’,” the headline reads.

Another sensational ‘Kodi’ headline

“It’s estimated that thousands of Brits have bought so-called Kodi boxes which can be connected to telly sets to stream pay-per-view sport and films for free,” the piece continued.

“But they could be a fire hazard, according to the Federation Against Copyright Theft (FACT), which has been nabbing huge deliveries of the devices as they arrive in the UK.”

As the image below shows, “Kodi box” fire hazard claims appeared next to images from other news articles about the huge London fire. While all separate stories, the pairing is not a great look.

A ‘Kodi Box’, as depicted in The Sun

FACT chief executive Kieron Sharp told The Sun that his group had uncovered two parcels of 2,000 ‘Kodi’ boxes and found that they “failed electrical safety standards”, making them potentially dangerous. While that may well be the case, the big question is all about timing.

It’s FACT’s job to reduce copyright infringement on behalf of clients such as The Premier League so it’s no surprise that they’re making a sustained effort to deter the public from buying these devices. That being said, it can’t have escaped FACT or The Sun that fire and death are extremely sensitive topics this week.

That leaves us with a few options including unfortunate opportunism or perhaps terrible timing, but let’s give the benefit of the doubt for a moment.

There’s a good argument that FACT and The Sun brought a valid issue to the public’s attention at a time when fire safety is on everyone’s lips. So, to give credit where it’s due, providing people with a heads-up about potentially dangerous devices is something that most people would welcome.

However, it’s difficult to offer congratulations on the PSA when the story as it appears in The Sun does nothing – absolutely nothing – to help people stay safe.

If some boxes are a risk (and that’s certainly likely given the level of Far East imports coming into the UK) which ones are dangerous? Where were they manufactured? Who sold them? What are the serial numbers? Which devices do people need to get out of their houses?

Sadly, none of these questions were answered or even addressed in the article, making it little more than scaremongering. Making matters worse, the piece notes that it isn’t even clear how many of the seized devices are indeed a fire risk and that more tests need to be done. Is this how we should tackle such an important issue during an extremely sensitive week?

Timing and lack of useful information aside, one then has to question the terminology employed in the article.

As a piece of computer software, Kodi cannot catch fire. So, what we’re actually talking about here is small computers coming into the country without passing safety checks. The presence of Kodi on the devices – if indeed Kodi was even installed pre-import – is absolutely irrelevant.

Anti-piracy groups warning people of the dangers associated with their piracy habits is nothing new. For years, Internet users have been told that their computers will become malware infested if they share files or stream infringing content. While in some cases that may be true, there’s rarely any effort by those delivering the warnings to inform people on how to stay safe.

A classic example can be found in the numerous reports put out by the Digital Citizens Alliance in the United States. The DCA has produced several and no doubt expensive reports which claim to highlight the risks Internet users are exposed to on ‘pirate’ sites.

The DCA claims to do this in the interests of consumers but the group offers no practical advice on staying safe nor does it provide consumers with risk reduction strategies. Like many high-level ‘drug prevention’ documents shuffled around government, it could be argued that on a ‘street’ level their reports are next to useless.

Demonizing piracy is a well-worn and well-understood strategy but if warnings are to be interpreted as representing genuine concern for the welfare of people, they have to be a lot more substantial than mere scaremongering.

Anyone concerned about potentially dangerous devices can check out these useful guides from Electrical Safety First (pdf) and the Electrical Safety Council (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Piracy is Theft! Classic Anti-Piracy Ads From the ’90s

Post Syndicated from Ernesto original https://torrentfreak.com/piracy-is-theft-classic-anti-piracy-ads-from-the-90s-170114/

Every now and then it can be quite amusing to look back at some of the anti-piracy campaigns deployed by rightsholders in the past, especially when contrasted with newer initiatives.

Last week we reported on a new UK campaign where suspected pirates will get an “educational alert” in the mail if they are ‘caught’ sharing infringing content using BitTorrent.

The initiative breaks with the more aggressive traditions of scaring pirates with high fines, and rewarding snitches who tell on them, although there are still some remnants of this around.

How different was this in the early ’90s when the (now defunct) European Leisure Software Publishers Association (ELSPA) ran a controversial series of ads, warning pirates of potential jail time.


In an attempt to connect with a predominantly young audience, ELSPA also promoted a series of cartoon PSAs in UK computer magazines.

These ads informed readers that “piracy is theft” and encouraged them to report suspicious behavior to the Federation Against Software Theft (FAST). In return, the informants could look forward to a £1,000 reward.


The cartoons showed teens how they could report suspicious software sellers at a local market, or even teachers who dare to allow students to make copies.


Or what about friends, who ‘gang up’ on people so they can score a sizable reward? It was all possible, if the cartoons were to be believed.


If ELSPA’s goal was to be noticed, the ads were definitely successful. Soon after the first ones were placed, angry parents started writing letters to computer magazines, including this one Commodore Format received in the early ’90s.

“I would like to strongly object to the advert which appeared in your magazine,” a concerned parent wrote.

“It encourages young, vulnerable children to think that a phone call will lead to £1,000 very easily. It has caused a lot of ill feeling where I live between boys who were friends and then fell out, and thought this was a way to get back at one boy causing unnecessary upset to the families.”


ELSPA responded in the magazine and argued that these types of ads were needed to counter the growing threat of piracy. While the organization suggested that the cartoons were instrumental in lowering piracy rates, we now know that they certainly didn’t stop the copying.

Not even SIIA’s Don’t Copy that Floppy!, one of the all-time anti-piracy classics that turns 25 this year, could manage that.

In the years that followed many similar campaigns were launched, some more aggressive than others. And while the “piracy is theft” mantra is still in circulation, the general sense is that a ‘scare approach’ is not all that productive.

Perhaps this is one of the reasons why the latest UK anti-piracy effort relies more on carrots than sticks. Whether that will be successful has yet to be seen, but it’s certainly less “amusing.”

You know who…


The advertising images published here were sourced from WoS, where you can find some more examples. The Commodore Format scan is courtesy of the CF Archive.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Movie Director Steals Clam Chowder to Avenge Illegal Downloads

Post Syndicated from Andy original https://torrentfreak.com/movie-director-steals-clam-chowder-to-avenge-illegal-downloads-170119/

Among most copyright holders and some artists there is an insistence that illegal downloading is tantamount to stealing. A download can result in a lost sale, they argue, thereby depriving creators (or distributors) of revenue.

File-sharers, on the other hand, have their own theories as to what their hobby amounts to. Conclusions vary, from “try before you buy” to satisfying demand unmet by authorized sources. Some merely want content for free but still argue that a copy does not amount to theft.

Director Casey Tebo, on the other hand, strongly disagrees.

Tebo, who has directed live performances for Aerosmith, Mötley Crüe, Judas Priest and Run DMC, believes that piracy is just like stealing. And, after becoming a victim himself, decided to do something about it.

According to the 42-year-old, who won an Emmy for his work on ESPN E:60’s Dream On: The Stories of Boston’s Strongest, the final straw was when he discovered people had been pirating one of his movies.

Tebo wrote and directed 2016 horror movie Happy Birthday and recently discovered that an old friend from school had watched it.

“He said ‘Bro, I saw your movie, it was amazing!’,” Tebo recalled.

When the director asked where he’d seen it – iTunes, Walmart etc – the guy dropped the bombshell. Someone at work supplied it.

“What do you mean, he brings the DVDs into work?” the director asked. “I don’t know what that means?”

In a mocking tone impersonating his former friend’s recollection, Tebo continued.

“He gets ’em off, you know, King Torrent, uTorrent, your fucking mother’s torrent, whatever the fuck it is.”

Noting that his movie only had a small budget with just three investors putting in $500K between them, Tebo said that one of his buddies came up with a plot to get revenge.

“You should go to the fuckin grocery store this cat works at and just fuckin steal some shit,” his buddy said. “And, if you get caught by the cops, just say ‘Hey, he steals my shit, why can’t I steal his shit?’”

In a video posted on YouTube, which he describes as a PSA (Pirates Suck Ass), Tebo reveals what happened next. A sped-up clip shows the director ‘shopping’ for $30 worth of ‘free’ ingredients to make a clam chowder. He then leaves the store without paying. Needless to say, Tebo (who apparently was born with six fingers on his right hand) didn’t get away with it.

Someone from the store came out to Tebo’s car after spotting him on the security camera and threatened to call the cops unless he came back and paid.

“Yeah I know, but this is like a really huge chain supermarket,” Tebo protested. Although subtle, this was almost certainly a dig at people who believe that downloading movies from big companies doesn’t hurt them.

“This is a store, you just robbed it,” the worker replied. Then, Tebo revealed his scheme.

“See, the guy who works at your store likes to download the movies and burn ’em and give them to everybody who works there. You have people in your store who are stealing from my industry, so why can’t I just steal from you guys?” he asked.

The store worker was having none of it. “Give me the bags or I’m calling the cops,” he said.

Again, Tebo underlined his point.

“I just want to know. If you’re the manager of this place and you got guys downloading illegal movies, why can’t I take groceries for free?” he questioned.

“Because it’s fucking stealing, asshole,” came the reply…..

Tebo says that he eventually gave the stuff back. He didn’t, however, explain why he thought it was fair to steal from the grocery store when his aim was to get revenge on one of its employees. Maybe this is the real-life equivalent of holding ISPs responsible for Internet pirates, who knows.

The whole bizarre affair is documented below.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BBC Scrambles to Plug Planet Earth II Finale Leak

Post Syndicated from Andy original https://torrentfreak.com/bbc-scrambles-to-plug-planet-earth-ii-finale-leak-161211/

For those who’ve keenly followed the past five weekly episodes of the BBC’s Planet Earth II series, it’s been a magical experience. In a world obsessed with the monotony of reality TV, Sunday nights in the UK have rarely been so awe-inspiring.

But while UK license-payers have been enjoying early access to the show, those elsewhere will have to wait for the show’s appearance on regular TV. The United States, for example, must wait until January 2017 to experience its wonders on BBC America. Unless they have access to the Internet, of course.

Ever since the first episode’s debut on BBC, each installment of the series has been uploaded to torrent and streaming sites after airing. However, more than a week ago someone obtained a copy of the series on Blu-ray and uploaded every episode to the Internet, including the season finale “Cities” which airs tonight.

As can be seen from the image below, one of the copies was sourced from a Blu-ray disc, produced and distributed by BBC Worldwide, before its official release.


The hand-written label suggests that the copy might have been destined for semi-private Russian tracker HD Club, although it’s possible more copies were available elsewhere.

While an early leak of an episode of Game of Thrones, for example, would have made worldwide headlines, the leak of Planet Earth II before its finale has been more low key. However, behind the scenes the BBC has been furiously trying to protect the show.

Since November 7, the day after the first episode aired, US-based anti-piracy company MarkMonitor has been regularly sending notices to Google which target many thousands of copies of the show on hundreds of file-sharing sites.

Up until the last day of November these notices targeted only copies of episodes that had already aired, but following the leak of the Blu-ray (which contains the season finale), things got more serious. The first takedown sent to Google in December targeted nearly 400 URLs, a tiny sample of which can be seen below.

A handful of the takedowns sent to Google

On December 2, another 485 URLs were hit by the BBC, 562 two days later, and a further 456 the day after that, adding to the many thousands already sent.

TorrentFreak spoke with BBC Worldwide to find out more about their efforts to keep Planet Earth’s finale fresh but perhaps understandably they didn’t want to say too much.

“Piracy is illegal wherever and whenever it happens – before or after a programme is broadcast,” Dan Phelan, BBC Worldwide’s Head of Communications, told TorrentFreak.

“We take it seriously whatever the source and will work hard to get it taken down.”

But while the BBC are unquestionably doing just that, once again there appears to be an issue with pirates fulfilling unmet demand. Planet Earth II won’t get released on TV in the States until January 28, 2017, which, considering the massive buzz around the show online, undoubtedly means that some will choose to pirate instead.

An image from Planet Earth II’s “Cities” finale

Of course, there are many avenues to do that (such as torrents and illicit streaming) but some are taking a third approach.

In common with all big shows on the BBC, Planet Earth II appears on the corporation’s iPlayer service, both live and in catch-up form after airing. The iPlayer tries to keep foreign users out but in reality, the use of VPNs and even some easily installed browser plug-ins can easily circumvent the measures in place.

The Planet Earth II finale “Cities” will air on BBC1 at 8pm tonight. For those looking for a more permanent keepsake, the Blu-ray is also now officially on sale.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I’m going through a random list of some of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not; it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, and that good security hygiene would remove it. Instead, it’s an intended feature of the product, used to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of banning backdoors, security guides need to come up with a standard for remote reset.

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP 3 ) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of new naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or a discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposed “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with public-private partnerships in cybersecurity, I’ve found them to be a time waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.

This is actually not a bad recommendation, as far as government services are involved, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for the private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-side view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t afford to even read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, those building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because of the lack of cybersecurity as part of normal IT/computers.
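To make the SQL-injection point concrete, here’s a minimal, hypothetical sketch in Python using the standard sqlite3 module — the table and values are invented for illustration. The first query blindly pastes input into the SQL string, the way the textbooks teach it; the second uses a parameterized placeholder so the input is treated as data rather than as SQL:

```python
import sqlite3

# Hypothetical toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

# A classic injection payload supplied by an "attacker".
user_input = "' OR '1'='1"

# Vulnerable: input pasted directly into the query string.
vulnerable = f"SELECT secret FROM users WHERE name = '{user_input}'"
leaked = conn.execute(vulnerable).fetchall()  # matches every row

# Safe: a parameterized query; the payload matches no username.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked), len(safe))  # prints: 1 0
```

The fix costs nothing at development time; the point is that it has to be taught as the default, not bolted on by a second wave of practitioners.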

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made from them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars in government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Various recommendations call for the appointment of various CISOs, Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blog post is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy-driven and special-interest-driven rather than reflecting technical expertise.

Fix Dirty COW on the Raspberry Pi

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/fix-dirty-cow-raspberry-pi/

Hi gang, Rob from The MagPi here. We have a new issue out on Thursday but before that, here comes a PSA.

You may have seen the news recently about a bug in the Linux kernel called Dirty COW – it’s a vulnerability that affects the ‘copy-on-write’ mechanism in Linux, which is also known as COW. This bug can be used to gain full control over a device running a version of Linux, including Android phones, web servers, and even the Raspberry Pi.

We’re not sure why a bug got a logo but we’re running with it

You don’t need to worry though, as a patch for Raspbian Jessie to fix Dirty COW has already been released, and you can get it right now. Open up a terminal window and type the following:

sudo apt-get update
sudo apt-get install raspberrypi-kernel

Once the install is done, make sure to reboot your Raspberry Pi and you’ll be Dirty COW-free!
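If you want to confirm the patch took effect, one quick check after rebooting (assuming a standard Raspbian Jessie install) is to look at the running kernel version:

```shell
# Dirty COW is CVE-2016-5195; the fix landed in the mainline 4.4 series
# with Linux 4.4.26, so the running kernel on Raspbian Jessie should
# report 4.4.26 or later once the patched raspberrypi-kernel is booted.
uname -r
```

If `uname -r` still shows an older version after installing the package, you most likely haven’t rebooted yet.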

The post Fix Dirty COW on the Raspberry Pi appeared first on Raspberry Pi.

iKeepSafe Inadvertently Gives Students a Valuable Lesson in Creators’ Rights

Post Syndicated from Andy original https://torrentfreak.com/ikeepsafe-inadvertently-gives-students-a-valuable-lesson-in-creators-rights-161023/

Children and students of all kinds are some of the most valuable assets to society. After all, they’re literally the future of the planet. As a result, hundreds of groups around the world dedicate themselves to protecting their interests, from general welfare and healthcare to Internet safety.

One of the groups dedicated to the latter is the Internet Keep Safe Coalition (iKeepSafe), an alliance of policy leaders, educators, law enforcement and technology experts.

iKeepSafe has launched a new initiative in partnership with pro-copyright/anti-piracy group Creative Future called the Contribute to Creativity Challenge.

“We know that when students are given the opportunity to be creative, they not only learn to make conscious choices about sharing their creative work, but they also understand the value of respecting the rights of other creators,” iKeepSafe says.

The challenge is a competition which requires students to submit electronic projects that center around the importance of behaving well online, such as respecting copyright and related rights.

“To participate, each entrant will need to submit an electronic project educating others about the importance of being an ethical, responsible online digital citizen,” iKeepSafe notes.

“The submissions will be judged according to the judging rubric and the winning entries will each receive a $75 Amazon gift card for books or classroom supplies.”

For those submitting entries the exercise of considering what makes a good digital citizen should be an enlightening one. Indeed, the creative process itself should also be enjoyable and educational, further sweetened by the prospect of a few bucks should the entry be a winner.

But for those young creators getting involved, there’s another equally valuable lesson to be learned from this exercise, even at the tender age of 12.

It’s quite likely that some participating students will be considering getting involved in the business of content creation, whether that’s in the music, movie, TV, or publishing sectors. With that in mind, they should consider the terms and conditions of any contracts entered into. This competition is a great place to start.

The Contribute to Creativity Challenge has five pages of T&Cs (pdf). They include rules that submitted content cannot infringe other people’s intellectual property rights or condone any illegal activities, which is fair enough.

However, since this is all about being creative and respecting creators’ rights, we took a look at what rights these young creators will have over their content after it’s submitted to the competition and what uses it will be put to thereafter.

“By entering the Competition, each Entrant hereby grants to Promoter and their assigns, licensees and designees a non-exclusive, irrevocable, perpetual license to use, copy, publish, and publicly display the Entry and all elements of the Entry (including, but not limited to, the Entrant’s name, city and country, biographical information, statements, voice, photograph and other likeness (unless prohibited by law)) in whole or in part,” the conditions read.

Of course, some kind of license is required if the competition operators are to be able to do anything with the entries. However, it also means that whether the entrant likes it or not (or even understands the legal jargon), their submitted work can be published along with their photographs until the end of time by iKeepSafe, “in any and all media either now known or not currently known, in perpetuity throughout the universe for all purposes.”

In perpetuity. Universe. All purposes. And, just to be clear, “without notification and without compensation of any kind to Entrant or any third party.” (emphasis ours)

Of course, there will be many students who will relish the thought of their projects gaining some publicity since that could really help their profile. However, it seems likely from the conditions of the competition that what iKeepSafe really wants is free material for upcoming campaigns.

“The Promoter shall have the right, without limitation, to reproduce, alter, amend, edit, publish, modify, crop and use each Entry in connection with commercials, advertisements and promotions related to the Promoter, the sale of Promoter’s products, the Competition and any other competition sponsored by Promoter, in any and all media, now or hereafter known, including but not limited to, all forms of television distribution, theatrical advertisements, radio, the Internet, newspapers, magazines and billboards,” the conditions read.

The eagle-eyed will have noticed that student entrants grant iKeepSafe a non-exclusive license, which usually means that they are also free to exploit their works themselves, a luxury that an exclusive license does not offer. While that’s a good thing, a subsequent clause could conceivably muddy the waters.

“Entrant agrees not to release any publicity or other materials on their own or through someone else regarding his or her participation in the Competition without the prior consent of the Promoter, which it may withhold in its sole discretion,” it reads.

Just to be absolutely clear, there’s no suggestion that iKeepSafe are leading students down a dark path here, since their overall goal of promoting ethical behavior online is a noble one. That being said, would it really hurt to properly compensate student creators featured in subsequent campaigns that will largely exist to help businesses?

After all, the message here is about being ethical, and with Creative Future on board – which represents rightsholders worth billions of dollars – there’s more than a little bit of cash lying around to properly compensate these young creators.

Perhaps the key lesson for students and other creators to be aware of at this early stage is that some companies and organizations will be prepared to exploit their creative work while giving little or indeed absolutely nothing back.

Today it’s a harmless school-project competition entry on ethics, but in a few years’ time it could be something worth millions. Just ask George Michael.

Finally, if being ethical and responsible really is the goal, perhaps students and competition operators alike should consider a much less restrictive Creative Commons license.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Amazon ECS Service Auto Scaling Enables Rent-A-Center SAP Hybris Solution

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/amazon-ecs-service-auto-scaling-enables-rent-a-center-sap-hybris-solution/

This is a guest post from Troy Washburn, Sr. DevOps Manager @ Rent-A-Center, Inc., and Ashay Chitnis, Flux7 architect.

-----

Rent-A-Center in their own words: Rent-A-Center owns and operates more than 3,000 rent-to-own retail stores for name-brand furniture, electronics, appliances and computers across the US, Canada, and Puerto Rico.

Rent-A-Center (RAC) wanted to roll out an ecommerce platform that would support the entire online shopping workflow using SAP’s Hybris platform. The goal was to implement a cloud-based solution with a cluster of Hybris servers which would cater to online web-based demand.

The challenge: to run the Hybris clusters in a microservices architecture. A microservices approach has several advantages, including the ability for each service to scale independently in response to fluctuations in demand. RAC also wanted to use Docker containers to package the application in a format that is easily portable and immutable. The architecture required four types of containers, each corresponding to a particular service:

1. Apache: Received requests from the external Elastic Load Balancing load balancer. Apache was used to set certain rewrite and proxy http rules.
2. Hybris: An external Tomcat was the frontend for the Hybris platform.
3. Solr Master: A product indexing service for quick lookup.
4. Solr Slave: Replication of master cache to directly serve product searches from Hybris.

To deploy the containers in a microservices architecture, RAC and AWS consultants at Flux7 started by launching Amazon ECS resources with AWS CloudFormation templates. Running containers on ECS requires the use of three primary resources: clusters, services, and task definitions. Each container refers to its task definition for the container properties, such as CPU and memory. And, each of the above services stored its container images in Amazon ECR repositories.
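For illustration, a task definition for the Apache service might look something like the fragment below (the family name, account ID, image URI, and resource sizes are invented for the example, not taken from RAC’s actual templates):

```json
{
  "family": "hybris-apache",
  "containerDefinitions": [
    {
      "name": "apache",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/rac/apache:latest",
      "cpu": 512,
      "memory": 1024,
      "essential": true,
      "portMappings": [{ "containerPort": 80, "hostPort": 80 }]
    }
  ]
}
```

The `cpu` and `memory` reservations here are exactly the properties the scheduler uses when deciding whether a task fits on an instance, which matters for the scaling discussion below.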

This post describes the architecture that we created and implemented.

Auto Scaling

At first glance, scaling on ECS can seem confusing. But the Flux7 philosophy is that complex systems only work when they are a combination of well-designed simple systems that break the problem down into smaller pieces. The key insight that helped us design our solution was understanding that there are two very different scaling operations happening. The first is the scaling up of individual tasks in each service and the second is the scaling up of the cluster of Amazon EC2 instances.

During implementation, Service Auto Scaling was released by the AWS team and so we researched how to implement task scaling into the existing solution. As we were implementing the solution through AWS CloudFormation, task scaling needed to be done the same way. However, the new scaling feature was not available for implementation through CloudFormation and so the natural course was to implement it using AWS Lambda–backed custom resources.

The corresponding Lambda function is implemented in Node.js 4.3, and automatic scaling is driven by monitoring the CPUUtilization Amazon CloudWatch metric. The ECS scaling policies are registered with CloudWatch alarms that are triggered when specific thresholds are crossed. Similarly, by using the MemoryUtilization CloudWatch metric, ECS services can be made to scale in and out on memory pressure as well.

The Lambda function and CloudFormation custom resource JSON are available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Scaling ECS services and EC2 instances automatically

The key to understanding cluster scaling is to start by understanding the problem. We are no longer running a homogeneous workload in a simple environment. We have a cluster hosting a heterogeneous workload with different requirements and different demands on the system.

This clicked for us after we phrased the problem as, “Make sure the cluster has enough capacity to launch ‘x’ more instances of a task.” This led us to realize that we were no longer looking at an overall average resource utilization problem, but rather a discrete bin packing problem.

The problem is inherently more complex. (Anyone remember from algorithms class how the discrete knapsack problem is NP-hard, while the continuous knapsack problem can easily be solved in polynomial time? Same thing.) So we have to check, for each individual instance, whether a particular task can be scheduled on it, and if the number of open slots for any task falls below the required capacity threshold, we need to allocate more instance capacity.
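The per-instance check described above can be sketched in a few lines. This is an illustrative Python reduction of the idea, not the actual Node.js code from the repository; the CPU-unit and MiB figures are made up for the example:

```python
def instances_with_room(instances, task, needed):
    """Return True if the cluster can schedule `needed` more copies of `task`.

    instances: list of dicts with remaining 'cpu' (CPU units) and 'mem' (MiB).
    task: dict with the task definition's 'cpu' and 'mem' reservations.
    This is the discrete check: a task must fit whole on a single instance;
    spare capacity spread thinly across the cluster does not count.
    """
    slots = 0
    for inst in instances:
        # How many whole copies of the task fit on this one instance?
        slots += min(inst['cpu'] // task['cpu'], inst['mem'] // task['mem'])
    return slots >= needed

# Two instances, each with 512 CPU units and 900 MiB free.
cluster = [{'cpu': 512, 'mem': 900}, {'cpu': 512, 'mem': 900}]
task = {'cpu': 400, 'mem': 500}
# Average utilization looks healthy, but only one 400-unit task fits
# per instance, so a third copy needs a new instance.
instances_with_room(cluster, task, 2)  # True: one slot on each instance
instances_with_room(cluster, task, 3)  # False: time to scale the cluster out
```

This is why averaging CPU and memory across the cluster misleads: the cluster above is barely half utilized on average, yet it cannot place a third copy of the task.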

To ensure that ECS scaling always has enough resources to scale out and has just enough resources after scaling in, it was necessary that the Auto Scaling group scales according to three criteria:

1. ECS task count in relation to the host EC2 instance count in a cluster
2. Memory reservation
3. CPU reservation

We implemented the first criterion for the Auto Scaling group. Instead of using the default scaling abilities, we set up group scale-in and scale-out using Lambda functions triggered periodically by a combination of AWS::Lambda::Permission and AWS::Events::Rule resources, as we wanted specific criteria for scaling.
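The periodic trigger pairs a scheduled AWS::Events::Rule with an AWS::Lambda::Permission that lets CloudWatch Events invoke the function. A template fragment might look like this (the resource names and the one-minute rate are illustrative, not taken from the actual templates):

```json
"ScalerSchedule": {
  "Type": "AWS::Events::Rule",
  "Properties": {
    "ScheduleExpression": "rate(1 minute)",
    "Targets": [
      { "Id": "ClusterScaler", "Arn": { "Fn::GetAtt": ["ScalerFunction", "Arn"] } }
    ]
  }
},
"ScalerPermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": { "Ref": "ScalerFunction" },
    "Principal": "events.amazonaws.com",
    "SourceArn": { "Fn::GetAtt": ["ScalerSchedule", "Arn"] }
  }
}
```

Without the permission resource, the rule fires but the invocation is silently denied, which is a common gotcha with scheduled Lambda functions.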

The Lambda function is available in the Flux7 GitHub repository: https://github.com/Flux7Labs/blog-code-samples/tree/master/2016-10-ecs-enables-rac-sap-hybris

Future versions of this piece of code will incorporate the other two criteria along with the ability to use CloudWatch alarms to trigger scaling.

Conclusion

By using advanced ECS features like Service Auto Scaling in conjunction with Lambda, RAC and Flux7 met RAC’s business requirements and Dockerized SAP Hybris in production for the first time ever.

Further, ECS and CloudFormation give users the ability to implement robust solutions while still providing the ability to roll back in case of failures. With ECS as a backbone technology, RAC has been able to deploy a Hybris setup with automatic scaling, self-healing, one-click deployment, CI/CD, and PCI compliance consistent with the company’s latest technology guidelines and meeting the requirements of their newly-formed culture of DevOps and extreme agility.

If you have any questions or suggestions, please comment below.

Ricky Gervais: Don’t Pirate My Film, I Stand to Make Millions

Post Syndicated from Ernesto original https://torrentfreak.com/ricky-gervais-psa-160701/

Anti-piracy PSAs come in all shapes and sizes, but nearly all of them fail to appeal to the public they’re intended for.

Today, as part of the Industry Trust’s “Moments Worth Paying For” campaign, British comedian Ricky Gervais gives it a shot with a special anti-piracy message of his own.

The PSA is for his upcoming movie “David Brent: Life on the Road,” in which he brings the iconic character from The Office back to life. The movie premieres later this summer and Gervais hopes that pirates will go to see it in the cinema, instead of heading to a nearby torrent site.

“We’re basically asking you not to pirate movies. The quality is bad, and it’s a lot of people’s livelihoods,” Gervais says.

Pretty classic language for a PSA, but Gervais then adds another dimension.

“For example my new movie David Brent: Life On The Road. If it does well I stand to make millions. If you pirate it, sure, you’ll save a few quid. But millions…”

Of course, the entire PSA is tongue-in-cheek, but by combining one of the classic pirate excuses with more traditional anti-piracy language, Gervais creates more food for discussion than the typical PSA does.

The short video confuses both pirates and creators, and actually provides a starting point for a decent discussion.

Amusingly, the Industry Trust stresses that Gervais himself wrote the anti-piracy message, as if they needed an excuse. To compensate, they are quick to stress that piracy seriously hurts the UK movie industry.

“The Industry Trust handed artistic control of the trailer to Gervais, who tore up the rule book and took the trailer spectacularly off-message as only he can,” they write.

“While the trailer is a light-hearted take on piracy, the reality is that in 2015, the top 20 titles made up a total of 41% of the total UK box office, meaning that a high percentage of films weren’t seen by a lot of people.”
