Tag Archives: 2014

Looming Net Neutrality Repeal Sparks BitTorrent Throttling Fears

Post Syndicated from Ernesto original https://torrentfreak.com/looming-net-neutrality-repeal-sparks-bittorrent-throttling-fears-171123/

Ten years ago we uncovered that Comcast was systematically slowing down BitTorrent traffic to ease the load on its network.

The Comcast case ignited a broad discussion about net neutrality and provided the setup for the FCC’s Open Internet Order, which came into effect three years later.

This Open Internet Order then became the foundation of the net neutrality regulation that was adopted in 2015 and still applies today. The big change compared to the earlier attempt was that ISPs can now be regulated as common carriers under Title II.

These rules provide a clear standard that prohibits ISPs from blocking or throttling “lawful” traffic, and from engaging in paid prioritization. However, this may soon be over, as the FCC is determined to repeal them.

FCC head Ajit Pai recently told Reuters that the current rules are too restrictive and hinder competition and innovation, which is ultimately not in the best interests of consumers.

“The FCC will no longer be in the business of micromanaging business models and preemptively prohibiting services and applications and products that could be pro-competitive,” Pai said. “We should simply set rules of the road that let companies of all kinds in every sector compete and let consumers decide who wins and loses.”

This week the FCC released its final repeal draft (pdf), which was met with fierce resistance from the public and various large tech companies. They fear that, if the current net neutrality rules disappear, throttling and ‘fast lanes’ for some services will become commonplace.

This could also mean that BitTorrent traffic could become a target once again, with it being blocked or throttled across many networks, as The Verge just pointed out.

Blocking BitTorrent traffic would indeed become much easier if current net neutrality safeguards were removed. However, the FCC believes that the current “no-throttling rules are unnecessary to prevent the harms that they were intended to thwart,” such as blocking entire file transfer protocols.

Instead, the FCC notes that antitrust law, FTC enforcement of ISP commitments, and consumer expectations will prevent any unwelcome blocking. This is also the reason why ISPs adopted no-blocking policies even when they were not required to, they point out.

Indeed, when the DC Circuit Court of Appeals struck down the Open Internet Order in 2014, Comcast was quick to assure subscribers that it had no plans to start throttling torrents again. Still, that offers no guarantees for the future.

The FCC goes on to mention that the current net neutrality rules don’t prevent selective blocking. They can already be bypassed by ISPs that offer “curated services,” which allows them to filter content on viewpoint grounds. And edge providers also block content that violates their “viewpoints,” the FCC notes, citing Cloudflare’s termination of The Daily Stormer.

Net neutrality supporters see these explanations as weak excuses and have less trust in the self-regulating capacity of the ISP industry than the FCC does, calling for last-minute protests to stop the repeal.

For now it appears, however, that the FCC is unlikely to change its course, as Ars Technica reports.

While net neutrality concerns are legitimate, not that much will change for BitTorrent users.

As we’ve highlighted in the past, blocking pirate sites is already an option under the current rules. The massive copyright loophole made sure of that. Targeting all torrent traffic is even an option, in theory.

If net neutrality is indeed repealed next month, blocking or throttling BitTorrent traffic across the entire network will become easier, no doubt. For now, however, there are no signs that any ISPs plan to do so.

If that changes, we will know soon enough. Under the new plan, the FCC will still require ISPs to be transparent: they have to disclose network management practices, blocking efforts, commercial prioritization, and the like.


Hollywood Studios Force ISPs to Block Popcorn Time & Subtitle Sites

Post Syndicated from Andy original https://torrentfreak.com/court-orders-isps-to-block-popcorn-time-subtitle-websites-171113/

In early 2014, a new craze was sweeping the piracy world. Instead of relatively cumbersome text-heavy torrent sites, people were turning to a brand new application called Popcorn Time.

Dubbed the Netflix for Pirates due to its beautiful interface, Popcorn Time was soon a smash hit all over the planet. But with that fame came trouble, with anti-piracy outfits all over the world seeking to shut it down or at least pour cold water on its popularity.

In the meantime, however, the popularity of Kodi skyrocketed, something which pushed Popcorn Time out of the spotlight for a while. Nevertheless, the application in several different forms never went away and it still enjoys an impressive following today. This means that despite earlier action in several jurisdictions, Hollywood still has it on the radar.

The latest development comes out of Norway, where Disney Entertainment, Paramount Pictures Corporation, Columbia Pictures, Twentieth Century Fox Film Corporation, Universal City Studios and Warner Bros. have just taken 14 local Internet service providers to court.

The studios claimed that the ISPs (including Telenor, Nextgentel, Get, Altibox, Telia, Homenet, Ice Norge, Eidsiva Bredbånd and Lynet Internet) should undertake broad blocking action to ensure that three of the most popular Popcorn Time forks (located at popcorn-time.to, popcorntime.sh and popcorn-time.is) can no longer function in the region.

Since site-blocking orders by definition target websites, there appears to have been much discussion over whether a software application can be considered a website. However, the court ultimately found that this wasn’t really an issue, since each application relies on websites in order to operate.

“Each of the three [Popcorn Time variants] must be considered a ‘site’, even though users access Popcorn Time in a way that is technically different from the way other pirate sites provide users with access to content, and although different components of the Popcorn Time service are retrieved from different domains,” the Oslo District Court’s ruling reads.

In respect of all three releases of Popcorn Time, the Court weighed the pros and cons of blocking, including whether blocking was needed at all. However, it decided that no alternative methods for dealing with the sites exist, since the rightsholders tried and ultimately failed to get cooperation from the sites’ operators.

“All sites have as their main purpose the facilitation of infringement of protected works by giving the public unauthorized access to movies and TV shows. This happens without regard to the rights of others and imposes major losses on the licensees and the cultural industry in general,” the Court writes.

The Court also supported compelling ISPs to introduce the blocks, noting that they are “an appropriate and proportionate measure” that does not interfere with the Internet service providers’ freedom to operate nor anyone else’s right to freedom of expression.

But while the websites in question are located in three places (popcorn-time.to, popcorntime.sh and popcorn-time.is) the Court’s blocking order goes much further. Not only does it cover these key domains but also other third-party sites that Popcorn Time utilizes, such as platforms offering subtitles.

Popcorn-time.to related domains to be blocked: popcorn-time.to, popcorn-time.xyz, popcorn-time.se, iosinstaller.com, video4time.info, thepopcorntime.net, timepopcorn.info, time-popcorn.com, the-pop-corn-time.net, timepopcorn.net, time4videostream.com, ukfrnlge.xyz, opensubtitles.org, onlinesubtitles.com, popcorntime-update.xyz, plus subdomains.

Popcorntime.sh related domains to be blocked: Popcorntime.sh, api-fetch.website, yts.ag, opensubtitles.org, plus subdomains.

Popcorn-time.is related domains to be blocked: popcorn-time.is, yts.ag, yify.is, yts.ph, api-fetch.website, eztvapi.ml and opensubtitles.org, plus subdomains.

Separately, the Court ordered the ISPs to block torrent site YTS.ag and onlinesubtitles.com, opensubtitles.org, plus their subdomains.

Since no one appeared to represent the sites and the ISPs can’t be held responsible if they cooperate, the Court found that the studios had succeeded in their action and are entitled to compensation.

“The Court’s conclusions mean that the plaintiffs have won the case and, in principle, are entitled to compensation for their legal costs from the operators of the sites,” the Court notes. “This means that the operators of sites are ordered to pay the plaintiffs’ costs.”

Those costs amount to 570,000 kr (around US$70,000), an amount which the Court chose to split equally between the three Popcorn Time forks ($23,359 each). It seems unlikely the amounts will ever be recovered although there is still an opportunity for the parties to appeal.

In the meantime, the ISPs have just days left to block the sites listed above. Once implemented, the blocks will remain in place for five years.


Visualize AWS CloudTrail Logs Using AWS Glue and Amazon QuickSight

Post Syndicated from Luis Caro Perez original https://aws.amazon.com/blogs/big-data/streamline-aws-cloudtrail-log-visualization-using-aws-glue-and-amazon-quicksight/

Being able to easily visualize AWS CloudTrail logs gives you a better understanding of how your AWS infrastructure is being used. It can also help you audit and review AWS API calls and detect security anomalies inside your AWS account. To do this, you must be able to perform analytics based on your CloudTrail logs.

In this post, I walk through using AWS Glue and AWS Lambda to convert AWS CloudTrail logs from JSON into a query-optimized dataset in Amazon S3. I then use Amazon Athena and Amazon QuickSight to query and visualize the data.

Solution overview

To process CloudTrail logs, you must implement the following architecture:

CloudTrail delivers log files to a folder in an Amazon S3 bucket. To correctly crawl these logs, you modify the file contents and folder structure using an Amazon S3-triggered Lambda function that stores the transformed files in a single folder of an S3 bucket. When the files are in a single folder, AWS Glue scans the data, converts it into Apache Parquet format, and catalogs it to allow for querying and visualization using Amazon Athena and Amazon QuickSight.

Walkthrough

Let’s look at the steps that are required to build the solution.

Set up CloudTrail logs

First, you need to set up a trail that delivers log files to an S3 bucket. To create a trail in CloudTrail, follow the instructions in Creating a Trail.

When you finish, the trail settings page should look like the following screenshot:

In this example, I set up log files to be delivered to the cloudtraillfcaro bucket.
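If you prefer to create the trail from code instead of the console, a minimal boto3 sketch could look like the following. It assumes the cloudtraillfcaro bucket already exists and already carries the bucket policy that CloudTrail needs for log delivery; the trail name is hypothetical.

import boto3

cloudtrail = boto3.client('cloudtrail')

# Create a trail that delivers log files to the cloudtraillfcaro bucket
# (the bucket policy granting CloudTrail write access must already be in place)
cloudtrail.create_trail(
    Name='cloudtrail-to-glue',      # hypothetical trail name
    S3BucketName='cloudtraillfcaro'
)

# A trail records no events until logging is explicitly started
cloudtrail.start_logging(Name='cloudtrail-to-glue')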

Consolidate CloudTrail reports into a single folder using Lambda

AWS CloudTrail delivers log files using the following folder structure inside the configured Amazon S3 bucket:

AWSLogs/ACCOUNTID/CloudTrail/REGION/YEAR/MONTH/DAY/filename.json.gz

Additionally, log files have the following structure:

{
    "Records": [{
        "eventVersion": "1.01",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDAJDPLRKLG7UEXAMPLE",
            "arn": "arn:aws:iam::123456789012:user/Alice",
            "accountId": "123456789012",
            "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
            "userName": "Alice",
            "sessionContext": {
                "attributes": {
                    "mfaAuthenticated": "false",
                    "creationDate": "2014-03-18T14:29:23Z"
                }
            }
        },
        "eventTime": "2014-03-18T14:30:07Z",
        "eventSource": "cloudtrail.amazonaws.com",
        "eventName": "StartLogging",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "72.21.198.64",
        "userAgent": "signin.amazonaws.com",
        "requestParameters": {
            "name": "Default"
        },
        "responseElements": null,
        "requestID": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
        "eventID": "3074414d-c626-42aa-984b-68ff152d6ab7"
    },
    ... additional entries ...
    ]
}

If AWS Glue crawlers are used to catalog these files as they are written, the following obstacles arise:

  1. AWS Glue identifies a different table for each folder, because the folders don’t follow a traditional partition format.
  2. Based on the structure of the file content, AWS Glue identifies the tables as having a single column of type array.
  3. CloudTrail logs have JSON attributes that use uppercase letters. According to the Best Practices When Using Athena with AWS Glue, it is recommended that you convert these to lowercase.

To have AWS Glue catalog all log files in a single table with all the columns describing each event, implement the following Lambda function:

from __future__ import print_function
import json
import urllib
import boto3
import gzip

s3 = boto3.resource('s3')
client = boto3.client('s3')

def convertColumnToLowerCaps(obj):
    # Rename every key of the decoded JSON object to its lowercase form
    for key in list(obj.keys()):
        new_key = key.lower()
        if new_key != key:
            obj[new_key] = obj[key]
            del obj[key]
    return obj


def lambda_handler(event, context):

    # Identify the CloudTrail log file that triggered this invocation
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    print(bucket)
    print(key)
    try:
        # Flatten the original CloudTrail folder structure into a single 'flatfiles' prefix
        newKey = 'flatfiles/' + key.replace("/", "")
        client.download_file(bucket, key, '/tmp/file.json.gz')
        with gzip.open('/tmp/out.json.gz', 'w') as output, gzip.open('/tmp/file.json.gz', 'rb') as file:
            i = 0
            for line in file:
                # Write each element of the (now lowercase) 'records' array as its own line
                for record in json.loads(line, object_hook=convertColumnToLowerCaps)['records']:
                    if i != 0:
                        output.write("\n")
                    output.write(json.dumps(record))
                    i += 1
        client.upload_file('/tmp/out.json.gz', bucket, newKey)
        return "success"
    except Exception as e:
        print(e)
        print('Error processing object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

The function goes over each element of the records array, changes uppercase letters to lowercase in column names, and writes each element of the array as a separate line in a new file. The new file is saved inside a flatfiles folder that the function creates at the top level of the S3 bucket, without any subfolders.

The function should have a role containing a policy with at least the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::cloudtraillfcaro/*",
                "arn:aws:s3:::cloudtraillfcaro"
            ],
            "Effect": "Allow"
        }
    ]
}

In this example, CloudTrail delivers logs to the cloudtraillfcaro bucket. Make sure that you replace this name with your bucket name in the policy. For more information about how to work with inline policies, see Working with Inline Policies.
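If you manage roles from code rather than the console, the same inline policy can be attached with a short boto3 sketch like the one below; the role and policy names used here are hypothetical.

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Action": ["s3:*"],
        "Resource": [
            "arn:aws:s3:::cloudtraillfcaro/*",
            "arn:aws:s3:::cloudtraillfcaro"
        ],
        "Effect": "Allow"
    }]
}

# Attach the policy inline to the Lambda function's execution role
iam.put_role_policy(
    RoleName='cloudtrail-lambda-role',      # hypothetical role name
    PolicyName='cloudtrail-s3-access',      # hypothetical policy name
    PolicyDocument=json.dumps(policy)
)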

After the Lambda function is created, you can set up the following trigger using the Triggers tab on the AWS Lambda console.

Choose Add trigger, and choose S3 as a source of the trigger.

After choosing the source, configure the following settings:

In the trigger, any file that is written to the path for the log files—which in this case is AWSLogs/119582755581/CloudTrail/—is processed. Make sure that the Enable trigger check box is selected and that the bucket and prefix parameters match your use case.
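The same trigger can also be configured outside the console. The sketch below assumes a Lambda function named cloudtrail-flattener (a hypothetical name, with a hypothetical ARN) and uses the bucket and prefix from this example.

import boto3

lambda_client = boto3.client('lambda')
s3 = boto3.client('s3')

# Hypothetical ARN of the Lambda function created earlier
function_arn = 'arn:aws:lambda:us-east-1:119582755581:function:cloudtrail-flattener'

# Allow S3 to invoke the function for events from the cloudtraillfcaro bucket
lambda_client.add_permission(
    FunctionName='cloudtrail-flattener',
    StatementId='AllowS3Invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::cloudtraillfcaro'
)

# Invoke the function for every object created under the CloudTrail prefix
s3.put_bucket_notification_configuration(
    Bucket='cloudtraillfcaro',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': function_arn,
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'AWSLogs/119582755581/CloudTrail/'}
            ]}}
        }]
    }
)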

After you set up the function and receive log files, the bucket (in this case cloudtraillfcaro) should contain the processed files inside the flatfiles folder.

Catalog source data

Once the files are processed by the Lambda function, set up a crawler named cloudtrail to catalog them.

The crawler must point to the flatfiles folder.

All the crawlers and AWS Glue jobs created for this solution must have a role with the AWSGlueServiceRole managed policy and an inline policy with permissions to modify the S3 buckets used on the Lambda function. For more information, see Working with Managed Policies.

The role should look like the following:

In this example, the inline policy named s3perms contains the permissions to modify the S3 buckets.

After you choose the role, you can schedule the crawler to run on demand.

A new database is created, and the crawler is set to use it. In this case, the cloudtrail database is used for all the tables.
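For reference, the crawler can also be created and started through the AWS Glue API. A rough boto3 sketch follows, with glue-cloudtrail-role standing in for the role described above.

import boto3

glue = boto3.client('glue')

# Crawl the flattened files and register the resulting table in the 'cloudtrail' database
glue.create_crawler(
    Name='cloudtrail',
    Role='glue-cloudtrail-role',      # placeholder for a role with AWSGlueServiceRole plus the S3 inline policy
    DatabaseName='cloudtrail',
    Targets={'S3Targets': [{'Path': 's3://cloudtraillfcaro/flatfiles/'}]}
)

# Run the crawler on demand
glue.start_crawler(Name='cloudtrail')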

After the crawler runs, a single table should be created in the catalog with the following structure:

The table should contain the following columns:

Create and run the AWS Glue job

To convert all the CloudTrail logs to a columnar store in Parquet, set up an AWS Glue job by following these steps.

Upload the following script into a bucket in Amazon S3:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [JOB_NAME, TempDir]
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'TempDir'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the flattened CloudTrail records cataloged by the crawler
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "cloudtrail", table_name = "flatfiles", transformation_ctx = "datasource0")
# Resolve ambiguous column types into structs so they can be relationalized
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "make_struct", transformation_ctx = "resolvechoice1")
# Flatten nested structs (for example, requestparameters) into individual columns
relationalized1 = resolvechoice1.relationalize("trail", args["TempDir"]).select("trail")
# Write the result to S3 as Parquet
datasink = glueContext.write_dynamic_frame.from_options(frame = relationalized1, connection_type = "s3", connection_options = {"path": "s3://cloudtraillfcaro/parquettrails"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()

In the example, you load the script as a file named cloudtrailtoparquet.py. Make sure that you modify the script and update the output path (s3://cloudtraillfcaro/parquettrails) with the destination where you want to store your results.

After uploading the script, add a new AWS Glue job. Choose a name and role for the job, and choose the option of running the job from An existing script that you provide.

To avoid processing the same data twice, enable the Job bookmark setting in the Advanced properties section of the job properties.

Choose Next twice, and then choose Finish.

If logs are already in the flatfiles folder, you can run the job on demand to generate the first set of results.
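If you would rather define and run the job through the AWS Glue API than the console, a sketch along these lines should be equivalent; the script location, temporary directory, and role name are assumptions.

import boto3

glue = boto3.client('glue')

# Define the ETL job around the uploaded script, with job bookmarks enabled
glue.create_job(
    Name='cloudtrailtoparquet',
    Role='glue-cloudtrail-role',      # placeholder role name
    Command={
        'Name': 'glueetl',
        'ScriptLocation': 's3://cloudtraillfcaro/scripts/cloudtrailtoparquet.py'   # assumed script location
    },
    DefaultArguments={
        '--job-bookmark-option': 'job-bookmark-enable',
        '--TempDir': 's3://cloudtraillfcaro/temp/'                                 # assumed temporary directory
    }
)

# Run the job on demand once files exist in the flatfiles folder
glue.start_job_run(JobName='cloudtrailtoparquet')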

Once the job starts running, wait for it to complete.

When the job is finished, its Run status should be Succeeded. After that, you can verify that the Parquet files are written to the Amazon S3 location.

Catalog results

To be able to query the results with Athena, you can use an AWS Glue crawler to catalog the output of the AWS Glue job.

In this example, the crawler is set to use the same database as the source named cloudtrail.

You can run the crawler using the console. When the crawler finishes running and has processed the Parquet results, a new table should be created in the AWS Glue Data Catalog. In this example, it’s named parquettrails.

The table should have the classification set to parquet.

It should have the same columns as the flatfiles table, with the exception of the struct type columns, which should be relationalized into several columns:

In this example, notice how the requestparameters column, which was a struct in the original table (flatfiles), was transformed to several columns—one for each key value inside it. This is done using a transformation native to AWS Glue called relationalize.

Query results with Athena

After crawling the results, you can query them using Athena. For example, to query what events took place in the time frame between 2017-10-23T12:00:00 and 2017-10-23T13:00:00, use the following select statement:

select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc;

Be sure to replace cloudtrail.parquettrails with the names of your database and the table that references the Parquet results. Replace the datetimes with an hour during which your account had activity that has already been processed by the AWS Glue job.
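The same query can also be submitted programmatically. A minimal boto3 sketch is shown below; the S3 output location for query results is an assumption.

import boto3

athena = boto3.client('athena')

query = """
select *
from cloudtrail.parquettrails
where eventtime > '2017-10-23T12:00:00Z' AND eventtime < '2017-10-23T13:00:00Z'
order by eventtime asc
"""

# Submit the query; Athena writes the results to the specified S3 location
response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'cloudtrail'},
    ResultConfiguration={'OutputLocation': 's3://cloudtraillfcaro/athena-results/'}   # assumed location
)
print(response['QueryExecutionId'])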

Visualize results using Amazon QuickSight

Once you can query the data using Athena, you can visualize it using Amazon QuickSight. Before connecting Amazon QuickSight to Athena, be sure to grant QuickSight access to Athena and the associated S3 buckets in your account. For more information, see Managing Amazon QuickSight Permissions to AWS Resources. You can then create a new data set in Amazon QuickSight based on the Athena table that you created.

After setting up permissions, you can create a new analysis in Amazon QuickSight by choosing New analysis.

Then add a new data set.

Choose Athena as the source.

Give the data source a name (in this case, I named it cloudtrail).

Choose the name of the database and the table referencing the Parquet results.

Then choose Visualize.

After that, you should see the following screen:

Now you can create some visualizations. First, search for the sourceipaddress column, and drag it to the AutoGraph section.

You can see a list of the IP addresses that you have used to interact with AWS. To review whether these IP addresses have been used from IAM users, internal AWS services, or roles, use the type value that is inside the useridentity field of the original log files. Thanks to the relationalize transformation, this value is available as the useridentity.type column. After the column is added into the Group/Color box, the visualization should look like the following:

You can now see and distinguish the most used IPs and whether they are used from roles, AWS services, or IAM users.

After following all these steps, you can use Amazon QuickSight to add different columns from CloudTrail and perform different types of visualizations. You can build operational dashboards that continuously monitor AWS infrastructure usage and access. You can share those dashboards with others in your organization who might need to see this data.

Summary

In this post, you saw how you can use a simple Lambda function and an AWS Glue script to convert text files into Parquet to improve Athena query performance and data compression. The post also demonstrated how to use AWS Lambda to preprocess files in Amazon S3 and transform them into a format that is recognizable by AWS Glue crawlers.

This example used AWS CloudTrail logs, but you can apply the proposed solution to any set of files that, after preprocessing, can be cataloged by AWS Glue.


Additional Reading

Learn how to Harmonize, Query, and Visualize Data from Various Providers using AWS Glue, Amazon Athena, and Amazon QuickSight.


About the Authors

Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

Say Hello To Our Newest AWS Community Heroes (Fall 2017 Edition)

Post Syndicated from Sara Rodas original https://aws.amazon.com/blogs/aws/say-hello-to-our-newest-aws-community-heroes-fall-2017-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these heroes share their time and knowledge across social media and through in-person events. Heroes also actively help drive community-led tracks at conferences. At this year’s re:Invent, many Heroes will be speaking during the Monday Community Day track.

This November, we are thrilled to have four Heroes joining our network of cloud innovators. Without further ado, meet our newest AWS Community Heroes!

 

Anh Ho Viet

Anh Ho Viet is the founder of AWS Vietnam User Group, Co-founder & CEO of OSAM, an AWS Consulting Partner in Vietnam, an AWS Certified Solutions Architect, and a cloud lover.

At OSAM, Anh and his enthusiastic team have helped many companies, from SMBs to Enterprises, move to the cloud with AWS. They offer a wide range of services, including migration, consultation, architecture, and solution design on AWS. Anh’s vision for OSAM is beyond a cloud service provider; the company will take part in building a complete AWS ecosystem in Vietnam, where other companies are encouraged to become AWS partners through training and collaboration activities.

In 2016, Anh founded the AWS Vietnam User Group as a channel to share knowledge and hands-on experience among cloud practitioners. Since then, the community has reached more than 4,800 members and is still expanding. The group holds monthly meetups, connects many SMEs to AWS experts, and provides real-time, free-of-charge consultancy to startups. In August 2017, Anh joined as lead content creator of a program called “Cloud Computing Lectures for Universities” which includes translating AWS documentation & news into Vietnamese, providing students with fundamental, up-to-date knowledge of AWS cloud computing, and supporting students’ career paths.

 

Thorsten Höger

Thorsten Höger is CEO and Cloud consultant at Taimos, where he is advising customers on how to use AWS. Being a developer, he focuses on improving development processes and automating everything to build efficient deployment pipelines for customers of all sizes.

Before being self-employed, Thorsten worked as a developer and CTO of Germany’s first private bank running on AWS. With his colleagues, he migrated the core banking system to the AWS platform in 2013. Since then he organizes the AWS user group in Stuttgart and is a frequent speaker at Meetups, BarCamps, and other community events.

As a supporter of open source software, Thorsten is maintaining or contributing to several projects on Github, like test frameworks for AWS Lambda, Amazon Alexa, or developer tools for CloudFormation. He is also the maintainer of the Jenkins AWS Pipeline plugin.

In his spare time, he enjoys indoor climbing and cooking.

 

Becky Zhang

Yu Zhang (Becky Zhang) is COO of BootDev, which focuses on Big Data solutions on AWS and high concurrency web architecture. Before she helped run BootDev, she was working at Yubis IT Solutions as an operations manager.

Becky plays a key role in the AWS User Group Shanghai (AWSUGSH), regularly organizing AWS UG events including AWS Tech Meetups and happy hours, gathering AWS talent together to communicate the latest technology and AWS services. As a woman in the technology industry, Becky is keen on promoting Women in Tech and encourages more women to get involved in the community.

Becky also connects the China AWS User Group with user groups in other regions, including Korea, Japan, and Thailand. She was invited as a panelist at AWS re:Invent 2016 and spoke at the Seoul AWS Summit this April to introduce AWS User Group Shanghai and communicate with other AWS User Groups around the world.

Besides events, Becky also promotes the Shanghai AWS User Group by posting AWS-related tech articles, event forecasts, and event reports to Weibo, Twitter, Meetup.com, and WeChat (which now has over 2000 official account followers).

 

Nilesh Vaghela

Nilesh Vaghela is the founder of ElectroMech Corporation, an AWS Cloud and open source focused company (the company started with an open source motto). Nilesh has been very active in the Linux community since 1998. He started working with AWS Cloud technologies in 2013, and in 2014 he trained a dedicated cloud team and started full support of AWS cloud services as an AWS Standard Consulting Partner. He always works to establish and encourage cloud and open source communities.

He started the AWS Meetup community in Ahmedabad in 2014 and as of now 12 Meetups have been conducted, focusing on various AWS technologies. The Meetup has quickly grown to include over 2000 members. Nilesh also created a Facebook group for AWS enthusiasts in Ahmedabad, with over 1500 members.

Apart from the AWS Meetup, Nilesh has delivered a number of seminars, workshops, and talks around AWS introduction and awareness, at various organizations, as well as at colleges and universities. He has also been active in working with startups, presenting AWS services overviews and discussing how startups can benefit the most from using AWS services.

Nilesh is also a trainer for Red Hat Linux technologies and AWS Cloud technologies.

 

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

Me on the Equifax Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/me_on_the_equif.html

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the

Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015). My popular newsletter Crypto-Gram and my blog Schneier on Security are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government, where I teach Internet security policy, and a Fellow at the Berkman-Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies and other businesses vulnerable to fraud. As a result, all 143 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their name and credit rating.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for their websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and that they were at increased risk of identity theft. Equifax opened a website to aid customers, but the poor security around that — the site was at a domain separate from the Equifax domain — invited fraudulent imitators and even more damage to victims. At one point, the official Equifax communications even directed people to one of those fraudulent sites.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us — almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, who they sell personal data to or why. If anyone knows about them at all, it’s about their business as a credit bureau, not their business as a data broker. Their website lists 57 different offerings for business: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt-out” don’t work with data brokers. It’s a confusing process, and it doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them — which can be months after the breaches occur — and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that have our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax, or does the blame belong to another of the thousands of companies who have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

Markets work because buyers choose from a choice of sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with the companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5%, or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals receive from these breaches.

Specifically, Congress should move forward with legislative proposals that establish a nationwide “credit freeze” — which is better described as changing the default for disclosure from opt-out to opt-in — and free lifetime credit monitoring services. By this I do not mean giving customers free credit-freeze options, a proposal by Senators Warren and Schatz, but that the default should be a credit freeze.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace the Social Security Numbers. That would make the system of identification even more brittle. Better is to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the more rigorous EU privacy laws. The EU General Data Protection Regulation will come into force in May 2018, requiring even more security and privacy controls for companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcast Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach is a huge black eye and a temporary stock dip for Equifax — this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

Cybercriminals Infiltrating E-Mail Networks to Divert Large Customer Payments

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/cybercriminals_.html

There’s a new criminal tactic involving hacking an e-mail account of a company that handles high-value transactions and diverting payments. Here it is in real estate:

The scam generally works like this: Hackers find an opening into a title company’s or realty agent’s email account, track upcoming home purchases scheduled for settlements — the pricier the better — then assume the identity of the title agency person handling the transaction.

Days or sometimes weeks before the settlement, the scammer poses as the title or escrow agent whose email accounts they’ve hijacked and instructs the home buyer to wire the funds needed to close — often hundreds of thousands of dollars, sometimes far more — to the criminals’ own bank accounts, not the title or escrow company’s legitimate accounts. The criminals then withdraw the money and vanish.

Here it is in fine art:

The fraud is relatively simple. Criminals hack into an art dealer’s email account and monitor incoming and outgoing correspondence. When the gallery sends a PDF invoice to a client via email following a sale, the conversation is hijacked. Posing as the gallery, hackers send a duplicate, fraudulent invoice from the same gallery email address, with an accompanying message instructing the client to disregard the first invoice and instead wire payment to the account listed in the fraudulent document.

Once money has been transferred to the criminals’ account, the hackers move the money to avoid detection and then disappear. The same technique is used to intercept payments made by galleries to their artists and others. Because the hackers gain access to the gallery’s email contacts, the scam can spread quickly, with fraudulent emails appearing to come from known sources.

I’m sure it’s happening in other industries as well, probably even with business-to-business commerce.

EDITED TO ADD (11/14): Brian Krebs wrote about this in 2014.

EU Court: Media Concentrations

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/11/05/kpn19560/

The judgment of the General Court in Case T‑394/15 KPN BV (Netherlands) v European Commission has become available.

The applicant, KPN BV, is active in the cable network sector for television, broadband internet, fixed telephony, and mobile telecommunications services, in particular in the Netherlands.

Liberty Global plc is an international cable operator that owns and operates cable networks offering television, broadband internet, fixed telephony, and mobile telecommunications services in eleven member states of the European Union and in Switzerland. Mr. John Malone, a US citizen, holds the largest minority stake in Liberty Global.

Ziggo NV owns and operates a broadband cable network covering more than half of the territory of the Netherlands. The company provides digital and analog cable video, broadband internet, mobile telecommunications, and digital telephony (Voice over Internet Protocol) services. Ziggo owns 50% of HBO Nederland.

On 10 October 2014, the Commission adopted Decision C(2014) 7241, declaring the concentration involving the acquisition by Liberty Global of sole control over Ziggo compatible with the internal market and with the EEA Agreement (Case COMP/M.7000 – Liberty Global/Ziggo) (OJ 2015, C 145, p. 7).

KPN challenged the decision on three legal grounds: a manifest error of assessment concerning the possible vertical effects of the concentration on the pay-TV market; breach of the obligation to analyze the possible vertical anti-competitive effects on the market for sports TV channels; and a manifest error of assessment concerning the exercise of decisive influence by Mr. Malone over Liberty Global.

The Court examined the second ground, the lack of a statement of reasons, and confirmed that the contested decision does not contain a sufficient analysis of the vertical effects that would result from the proposed concentration.

As a result, it annulled the Commission's decision.


Despite Massive Site-Blocking, Russian Pirate Video Market Doubles

Post Syndicated from Andy original https://torrentfreak.com/despite-massive-site-blocking-russian-pirate-video-market-doubles-171029/

On many occasions, outgoing MPAA chief Chris Dodd has praised countries that have implemented legislation that allows for widespread site-blocking.

In 2014, when the UK had blocked just a few dozen sites, Dodd described the practice as both “balanced and proportionate” while noting that it had made “tremendous progress in tackling infringing websites.” This, the former senator said, made it “one of the most effective anti-piracy measures in the world.”

With many thousands of ‘pirate’ sites blocked across the country, perhaps Dodd should ask the Russians how they’re getting on. Spoiler: Not that great.

According to new research published by Group-IB and reported by Izvestia, Internet pirates have been adapting to their new reality, finding new and stable ways of doing business while growing their turnover.

In fact, according to the ‘Economics of Pirate Sites Report 2016’, they’ve been so successful that the market for Internet pirate video more than doubled in value during 2016, reaching a peak of 3.3 billion rubles ($57.2m) versus just 1.5 billion rubles ($26m) in 2015. Overall Internet piracy in 2016 was valued at a billion rubles more ($74.5m), Group-IB notes.

According to the report, Russian pirates operated with impunity until 2012, at which point the government – under pressure from rightsholders – began introducing tough anti-piracy legislation, which included the blocking of pirate domains. Since then there have been a number of amendments which further tightened the law but pirates have adapted each time, protecting their revenue with new business plans.

Group-IB says that the most successful pirate site in 2016 was Seasonvar.ru, which pulled in a million visitors every day generating just over $3.3m in revenue. Second place was taken by My-hit.org, with almost 400,000 daily visitors generating an annual income of $1.2m. HDrezka.me served more than 315,000 people daily and made roughly the same to take third. Fourth and fifth spots were taken by Kinokrad.co and Baskino.Club.

Overall, it’s estimated that the average pirate video site makes around $156,000 per year via advertising, subscriptions, or via voluntary donations. They’re creative with their money channels too.

According to Maxim Ryabyko, Director General of Association for the Protection of Copyright on the Internet (AZAPO), sites use middle-men for dealing with both advertisers and payment processors, which enables operators to remain anonymous.

Like in the United States, UK and elsewhere in Europe, lists of pirate sites have been drawn up to give advertisers and networks guidance on where not to place their ads. However, plenty of companies aren’t involved in the initiative. When challenged over their ads appearing on pirate sites, some protest that it hasn’t been proven that the sites are acting illegally, so the business will continue.

“There is no negative attitude towards piracy by intermediaries. Their money is not objectionable. Sometimes they say: ‘Go and sue. There will be a court hearing, there will be a decision, and we will draw conclusions then’,” Ryabyko told Izvestia.

Dealing with pirates in the criminal arena isn’t easy either, with key players admitting that the police have other things to concentrate on.

“Law enforcers do not know how to search for pirates, they have other priorities,” says Sergei Semenov of the Film and TV Producers Association.

“Criminal prosecutions in this area are a great strain – single cases require a lot of time for investigation and going to court does not achieve the desired effect. We therefore need to diversify the grounds for bringing people to justice. We need to see how this can be applied.”

One idea is to prosecute pirates for non-payment of taxes. Well, it worked for Al Capone….


Pirate-Friendly Coinhive’s DNS Hacked, User Hashes Stolen

Post Syndicated from Andy original https://torrentfreak.com/pirate-friendly-coinhives-dns-hacked-user-hashes-stolen-171025/

Just over a month ago, a Javascript cryptocurrency miner was silently added to The Pirate Bay. Noticed by users who observed their CPU usage going through the roof, it later transpired the site was trialing a miner operated by Coinhive.

Many users were disappointed that The Pirate Bay had added the Javascript-based Monero coin miner without their permission. However, it didn’t take long for people to see the potential benefits, with a raft of other sites adding the miner in the hope of generating additional revenue.

Now, however, Coinhive has an unexpected and potentially serious problem to deal with. The company has just revealed that on Monday night its DNS records maintained at Cloudflare were accessed by a third-party, allowing an unnamed attacker to redirect user mining traffic to a server they controlled.

“The DNS records for coinhive.com have been manipulated to redirect requests for the coinhive.min.js to a third party server. This third party server hosted a modified version of the JavaScript file with a hardcoded site key. This essentially let the attacker ‘steal’ hashes from our users,” Coinhive said in a statement.

The company hasn’t revealed how long the unauthorized redirect stayed in place, but it appears that all coins mined on sites hosting Coinhive’s script were ‘stolen’ during that period, instead of being credited to their accounts.

Coinhive stresses that no user account information was leaked and that its website and database servers were uncompromised. But while that’s good news, the method that the hackers used to access the company’s DNS provider lay in a basic security error.

Back in 2014, crowdfunding platform Kickstarter – which Coinhive used – fell victim to a security breach. After being advised of the fact by law enforcement officials, Kickstarter shut down the unauthorized access, began strengthening its systems, and advised customers to do the same.

While Coinhive did respond to the warning to ensure that its data was safe, something slipped through the net. One piece of information – its Cloudflare account password – remained unchanged after the Kickstarter attack. It now seems the most likely culprit for this week’s DNS breach.

“The root cause for this incident was an insecure password for our Cloudflare account that was probably leaked with the Kickstarter data breach back in 2014,” Coinhive says.

“We have learned hard lessons about security and used 2FA and unique passwords with all services since, but we neglected to update our years old Cloudflare account.”

While not mentioning Coinhive explicitly, Kickstarter warned earlier this month that the 2014 incident may not be completely over. In an update posted on the site Oct 6, Kickstarter noted that some of its customers had recently been hearing more information about the breach from notification service Have I been pwned?.

In the meantime, Coinhive has issued an apology and indicated it will find ways to reimburse sites which have lost revenue as a result of the DNS hack.

“We’re deeply sorry about this severe oversight,” the company said. “Our current plan is to credit all sites with an additional 12 hours of their daily average hashrate. Please give us a few hours to roll this out.”

Based on earlier calculations carried out by TF, The Pirate Bay (if it was mining during the breach) could be potentially owed around $200 for the lost hashes, give or take. After turning off mining in September, the site reactivated it again in October, with no opt-out. The situation appears fluid.

While the hack is obviously a disappointment, Coinhive appears to have advised its users quickly and transparently, which under the circumstances is exactly what’s required. The fact that it’s offering compensation to users will also be welcomed.

The breach is the latest controversy to hit the company. Earlier this month, Cloudflare began banning sites which implemented Coinhive mining without informing their users. The CDN company said it considered such unannounced mining to be malware.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Lose Yourself: National Party Guilty of Eminem Copyright Infringement

Post Syndicated from Ernesto original https://torrentfreak.com/lose-yourself-national-party-guilty-of-eminem-copyright-infringement-171025/

In recent years, New Zealand has been the center stage of the largest copyright battle in Internet history; the criminal prosecution of Megaupload and several of its former employees.

In 2012, the country’s law enforcement officials helped to bring down the file-sharing site, including a military-style raid on its founder, Kim Dotcom.

While the Megaupload case is still ongoing, a separate copyright battle in New Zealand came to a conclusion this week. In this case, the country’s leading National Party was the accused.

In 2014 the party of former Prime Minister and Kim Dotcom nemesis John Key was sued for copyright infringement by Eminem’s publisher Eight Mile Style. In an advertising spot for the General Election campaign, the party used a song heavily inspired by the track “Lose Yourself.” A blatant copyright infringement, they argued.

This week the High Court agreed with the publisher, ruling that the ad indeed infringed on its copyright. The National Party must now pay a total of NZ$600,000 (around 415,000 USD), including damages and interest, NZ Herald reports.

Recognizing the irony, Kim Dotcom swiftly took the matter to Twitter. He launched a poll asking who’s guilty of copyright infringement, him or the National Party? The results are, as expected, in his favor.

Lose Yourself?

Dotcom sees the matter as something the old government is responsible for and he has more faith in the current leadership.

“All I can say is that the irony of this is hilarious and that Karma has finally caught up with the corrupt !former! National government. Honest people are now running New Zealand and the courts will be busy dealing with the crimes committed by the last government,” Dotcom informs us.

The National Party didn’t simply use the song without paying for it. They actually sought professional advice before starting the campaign and licensed a track called Eminem Esque, which is the one they used in the ad.

While the party hoped to avoid more expensive licensing fees by using the knock-off song, the High Court ruled that the similarities between Lose Yourself and Eminem Esque are so significant that the latter breached copyright.

And indeed, the music used in the ad campaign below is quite similar to the original Eminem track.

National Party president Peter Goodfellow is disappointed with the outcome and stresses that the party did not act flagrantly and properly licensed the song that was used.

“The music was licensed with one of New Zealand’s main industry copyright bodies, the Australasian Mechanical Copyright Owners Society. Being licensed and available for purchase, and having taken advice from our suppliers, the party believed the purchase was legal.”

The fact that the Party sought advice and licensed the knock-off track was taken into account. The High Court didn’t award any additional damages, but nonetheless, the copyright infringement claims stuck.

The other camp was more positive about the outcome. Adam Simpson, who represented Eminem’s publisher, described the ruling as a win for musicians and a warning to those who infringe on their rights.

“The ruling clarifies and confirms the rights of artists and songwriters. It sets a major precedent in New Zealand and will be influential in Australia, the UK and elsewhere,” Simpson said.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MPAA and RIAA’s Megaupload Lawsuits Remain on Hold

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-and-riaas-megaupload-lawsuits-remain-on-hold-171023/

More than half a decade has passed since Megaupload was shut down and it’s still unclear how the criminal proceedings will unfold.

Aside from Andrus Nomm’s plea deal, progress in the criminal proceedings has been slow.

Earlier this year there was some movement when the New Zealand High Court ruled that Kim Dotcom and his former colleagues can be extradited to the US. This extradition would not be on copyright grounds, but for conspiracy to defraud.

Following the ruling, Dotcom and his former colleagues quickly announced they would take the matter to the Court of Appeal. This process is still pending and may take several more months to complete.

While all parties await the outcome, the criminal case in the United States remains pending. The same goes for the civil cases launched by the MPAA and RIAA in 2014.

Since the civil cases may influence the criminal proceedings, Megaupload’s legal team previously managed to put these cases on hold, and last week they requested another extension.

This is not the first time that such a request has been made; there have been several extensions already.

At the time of the last request, there were concerns that the long delays could result in the destruction of evidence, as some of Megaupload’s hard drives were starting to fail. However, after the parties agreed on a solution to back up and restore the files, this is no longer an issue.

“With the preservation order in place, and there being no other objection, Defendant Megaupload hereby moves the Court to enter the attached proposed order, continuing the stay in this case for an additional six months,” Megaupload’s legal team informed the court this week.

Without any objections from the MPAA and RIAA, U.S. District Court Judge Liam O’Grady swiftly granted Megaupload’s request to stay both lawsuits until April next year.

To be continued.

Order to stay

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Denuvo DRM Cracked within a Day of Release

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/denuvo_drm_crac.html

Denuvo is probably the best digital-rights management system, used to protect computer games. It’s regularly cracked within a day.

If Denuvo can no longer provide even a single full day of protection from cracks, though, that protection is going to look a lot less valuable to publishers. But that doesn’t mean Denuvo will stay effectively useless forever. The company has updated its DRM protection methods with a number of “variants” since its rollout in 2014, and chatter in the cracking community indicates a revamped “version 5” will launch any day now. That might give publishers a little more breathing room where their games can exist uncracked and force the crackers back to the drawing board for another round of the never-ending DRM battle.

BoingBoing post.

Related: Vice has a good history of DRM.

Steal This Show S03E09: Learning To Love Your Panopticon

Post Syndicated from Ernesto original https://torrentfreak.com/steal-show-s03e09-learning-love-panopticon/

If you enjoy this episode, consider becoming a patron and getting involved with the show. Check out Steal This Show’s Patreon campaign: support us and get all kinds of fantastic benefits!

In this episode we meet Diani Barreto from the Berlin Bureau of ExposeFacts. Launched in June 2014, ExposeFacts.org supports and encourages whistleblowers to disclose information that citizens need to make truly informed decisions in a democracy.

ExposeFacts aims to shed light on concealed activities that are relevant to human rights, corporate malfeasance, the environment, civil liberties and war.

Steal This Show aims to release bi-weekly episodes featuring insiders discussing copyright and file-sharing news. It complements our regular reporting by adding more room for opinion, commentary, and analysis.

The guests for our news discussions will vary, and we’ll aim to introduce voices from different backgrounds and persuasions. In addition to news, STS will also produce features interviewing some of the great innovators and minds.

Host: Jamie King

Guest: Diani Barreto

Produced by Jamie King
Edited & Mixed by Riley Byrne
Original Music by David Triana
Web Production by Siraje Amarniss

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Federate Database User Authentication Easily with IAM and Amazon Redshift

Post Syndicated from Thiyagarajan Arumugam original https://aws.amazon.com/blogs/big-data/federate-database-user-authentication-easily-with-iam-and-amazon-redshift/

Managing database users through federation allows you to manage authentication and authorization procedures centrally. Amazon Redshift now supports database authentication with IAM, enabling user authentication through enterprise federation. This removes the need to manage separate database users and passwords, further easing database administration. You can now manage users outside of AWS and authenticate them for access to an Amazon Redshift data warehouse. Do this by integrating IAM authentication and a third-party SAML-2.0 identity provider (IdP), such as AD FS, PingFederate, or Okta. In addition, database users can also be automatically created at their first login based on corporate permissions.

In this post, I demonstrate how you can extend the federation to enable single sign-on (SSO) to the Amazon Redshift data warehouse.

SAML and Amazon Redshift

AWS supports Security Assertion Markup Language (SAML) 2.0, which is an open standard for identity federation used by many IdPs. SAML enables federated SSO, which lets your users sign in to the AWS Management Console. Users can also make programmatic calls to AWS API actions by using assertions from a SAML-compliant IdP. For example, if you use Microsoft Active Directory for corporate directories, you may be familiar with how Active Directory and AD FS work together to enable federation. For more information, see the Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0 AWS Security Blog post.

Amazon Redshift now provides the GetClusterCredentials API operation that allows you to generate temporary database user credentials for authentication. You can set up an IAM permissions policy that generates these credentials for connecting to Amazon Redshift. Extending the IAM authentication, you can configure federation of AWS access through a SAML 2.0-compliant IdP. An IAM role can be configured to permit federated users to call the GetClusterCredentials action and generate temporary credentials to log in to Amazon Redshift databases. You can also set up policies to restrict access to Amazon Redshift clusters, databases, database user names, and user groups.
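
To see the GetClusterCredentials call in isolation, here is a minimal boto3 sketch. It is not part of the original walkthrough: the cluster, database, user, and group names reuse the example values introduced later in this post, and every parameter shown is an assumption about what a deployment might pass.

import boto3

# Create a Redshift API client in the cluster's region
redshift = boto3.client('redshift', region_name='us-west-2')

# Request temporary database credentials for a federated user
creds = redshift.get_cluster_credentials(
    ClusterIdentifier='examplecorp-dw',   # example cluster used in this walkthrough
    DbName='analytics',                   # example database
    DbUser='nancy',                       # database user to generate credentials for
    DbGroups=['sales_grp'],               # optional: database groups to join for this session
    AutoCreate=True,                      # create the database user on first login
    DurationSeconds=3600,                 # credential lifetime in seconds
)

# creds['DbUser'] and creds['DbPassword'] are the temporary credentials that a
# SQL client or driver uses to open the database connection.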

Amazon Redshift federation workflow

In this post, I demonstrate how you can use a JDBC- or ODBC-based SQL client to log in to the Amazon Redshift cluster using this feature. The SQL clients used with Amazon Redshift JDBC or ODBC drivers automatically manage the process of calling the GetClusterCredentials action, retrieving the database user credentials, and establishing a connection to your Amazon Redshift database. You can also use your database application to programmatically call the GetClusterCredentials action, retrieve database user credentials, and connect to the database. I demonstrate these features using an example company to show how different database user accounts can be managed easily using federation.

The following diagram shows how the SSO process works (a short boto3 sketch of steps 4 and 5 follows the list):

  1. JDBC/ODBC
  2. Authenticate using Corp Username/Password
  3. IdP sends SAML assertion
  4. Call STS to assume role with SAML
  5. STS Returns Temp Credentials
  6. Use Temp Credentials to get Temp cluster credentials
  7. Connect to Amazon Redshift using temp credentials
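
As a rough illustration, steps 4 and 5 above can be expressed with boto3 as follows. The role ARN, SAML provider ARN, and base64-encoded assertion below are placeholders rather than values from this walkthrough, and in practice the Amazon Redshift JDBC/ODBC drivers perform this exchange for you.

import boto3

# Placeholder inputs: in a real flow the SAML assertion comes from the IdP response (step 3)
SAML_ASSERTION_B64 = '<base64-encoded SAML assertion from the IdP>'
ROLE_ARN = 'arn:aws:iam::123456789012:role/ExampleRedshiftFederatedRole'   # placeholder role name
PROVIDER_ARN = 'arn:aws:iam::123456789012:saml-provider/ExampleADFS'       # placeholder provider name

# Step 4: exchange the SAML assertion for temporary AWS credentials
sts = boto3.client('sts')
assumed = sts.assume_role_with_saml(
    RoleArn=ROLE_ARN,
    PrincipalArn=PROVIDER_ARN,
    SAMLAssertion=SAML_ASSERTION_B64,
)

# Step 5: STS returns temporary credentials for the assumed role
temp = assumed['Credentials']

# Steps 6 and 7: a Redshift client built from these temporary credentials can call
# GetClusterCredentials (as in the earlier sketch) and open the database connection.
redshift = boto3.client(
    'redshift',
    region_name='us-west-2',
    aws_access_key_id=temp['AccessKeyId'],
    aws_secret_access_key=temp['SecretAccessKey'],
    aws_session_token=temp['SessionToken'],
)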

Walkthrough

Example Corp. uses Active Directory (IdP host: demo.examplecorp.com) to manage federated access for users in its organization. It has an AWS account (123456789012) and currently manages an Amazon Redshift cluster with the cluster ID “examplecorp-dw” and database “analytics” in the us-west-2 region for its Sales and Data Science teams. It wants the following access:

  • Sales users can access the examplecorp-dw cluster using the sales_grp database group.
  • Sales users access examplecorp-dw through a JDBC-based SQL client.
  • Sales users access examplecorp-dw through an ODBC connection for their reporting tools.
  • Data Science users access the examplecorp-dw cluster using the data_science_grp database group.
  • Partners access the examplecorp-dw cluster and query using the partner_grp database group.
  • Partners are not federated through Active Directory and are provided with separate IAM user credentials (with IAM user name examplecorpsalespartner).
  • Partners can connect to the examplecorp-dw cluster programmatically, using a language such as Python.
  • All users are automatically created in Amazon Redshift when they log in for the first time.
  • (Optional) Internal users do not specify database user or group information in their connection string. It is automatically assigned.
  • Data warehouse users can use SSO for the Amazon Redshift data warehouse using the preceding permissions.

Step 1:  Set up IdPs and federation

The Enabling Federation to AWS Using Windows Active Directory post demonstrated how to prepare Active Directory and enable federation to AWS. Using those instructions, you can establish trust between your AWS account and the IdP and enable user access to AWS using SSO.  For more information, see Identity Providers and Federation.

For this walkthrough, assume that this company has already configured SSO to their AWS account: 123456789012 for their Active Directory domain demo.examplecorp.com. The Sales and Data Science teams are not required to specify database user and group information in the connection string. The connection string can be configured by adding SAML Attribute elements to your IdP. Configuring these optional attributes enables internal users to conveniently avoid providing the DbUser and DbGroup parameters when they log in to Amazon Redshift.

The user-name attribute can be set up as follows, with a user ID (for example, nancy) or an email address (for example, [email protected]):

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbUser">  
  <AttributeValue>user-name</AttributeValue>
</Attribute>

The AutoCreate attribute can be defined as follows:

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/AutoCreate">
    <AttributeValue>true</AttributeValue>
</Attribute>

The sales_grp database group can be included as follows:

<Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbGroups">
    <AttributeValue>sales_grp</AttributeValue>
</Attribute>

For more information about attribute element configuration, see Configure SAML Assertions for Your IdP.

Step 2: Create IAM roles for access to the Amazon Redshift cluster

The next step is to create IAM policies with permissions to call GetClusterCredentials and provide authorization for Amazon Redshift resources. To grant a SQL client the ability to retrieve the cluster endpoint, region, and port automatically, include the redshift:DescribeClusters action with the Amazon Redshift cluster resource in the IAM role.  For example, users can connect to the Amazon Redshift cluster using a JDBC URL without the need to hardcode the Amazon Redshift endpoint:

Previous:  jdbc:redshift://endpoint:port/database

Current:  jdbc:redshift:iam://clustername:region/dbname

Use IAM to create the following policies. You can also use an existing user or role and assign these policies. For example, if you already created an IAM role for IdP access, you can attach the necessary policies to that role. Here is the policy created for sales users for this example:

Sales_DW_IAM_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
            ]
        }
    ]
}

The policy uses the following parameter values:

  • Region: us-west-2
  • AWS Account: 123456789012
  • Cluster name: examplecorp-dw
  • Database group: sales_grp
  • IAM role: AIDIODR4TAW7CSEXAMPLE
Each statement in this policy is described below.
{
"Effect":"Allow",
"Action":[
"redshift:DescribeClusters"
],
"Resource":[
"arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
]
}

Allows users to retrieve the cluster endpoint, region, and port automatically for the Amazon Redshift cluster examplecorp-dw. This statement uses the resource format arn:aws:redshift:region:account-id:cluster:clustername. For example, the SQL client JDBC URL can be specified in the format jdbc:redshift:iam://clustername:region/dbname.

For more information, see Amazon Resource Names.

{
"Effect":"Allow",
"Action":[
"redshift:GetClusterCredentials"
],
"Resource":[
"arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
"arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
],
"Condition":{
"StringEquals":{
"aws:userid":"AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
}
}
}

Generates a temporary token to authenticate into the examplecorp-dw cluster. “arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}” restricts the corporate user name to the database user name for that user. This resource is specified using the format: arn:aws:redshift:region:account-id:dbuser:clustername/dbusername.

The Condition block enforces that the AWS user ID should match “AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com”, so that individual users can authenticate only as themselves. The AIDIODR4TAW7CSEXAMPLE role has the Sales_DW_IAM_Policy policy attached.

{
"Effect":"Allow",
"Action":[
"redshift:CreateClusterUser"
],
"Resource":[
"arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
]
}
Automatically creates database users in examplecorp-dw, when they log in for the first time. Subsequent logins reuse the existing database user.
{
"Effect":"Allow",
"Action":[
"redshift:JoinGroup"
],
"Resource":[
"arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
]
}
Allows sales users to join the sales_grp database group through the resource “arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp” that is specified in the format arn:aws:redshift:region:account-id:dbgroup:clustername/dbgroupname.

Similar policies can be created for Data Science users with access to join the data_science_grp group in examplecorp-dw. You can now attach the Sales_DW_IAM_Policy policy to the role that is mapped to the IdP application for SSO. For more information about how to define the claim rules, see Configuring SAML Assertions for the Authentication Response.

Because partners are not authenticated through Active Directory, they are provided with IAM credentials and added to the partner_grp database group. The Partner_DW_IAM_Policy is attached to the IAM users for partners. The following policy allows partners to log in using the IAM user name as the database user name.

Partner_DW_IAM_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeClusters"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:GetClusterCredentials"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ],
            "Condition": {
                "StringEquals": {
                    "redshift:DbUser": "${aws:username}"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:CreateClusterUser"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "redshift:JoinGroup"
            ],
            "Resource": [
                "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/partner_grp"
            ]
        }
    ]
}

"redshift:DbUser": "${aws:username}" forces an IAM user to use the IAM user name as the database user name.

With the previous steps configured, you can now establish the connection to Amazon Redshift through JDBC– or ODBC-supported clients.

Step 3: Set up database user access

Before you start connecting to Amazon Redshift using the SQL client, set up the database groups for appropriate data access. Log in to your Amazon Redshift database as superuser to create a database group, using CREATE GROUP.

Log in to examplecorp-dw/analytics as a superuser and create the following groups:

CREATE GROUP sales_grp;
CREATE GROUP datascience_grp;
CREATE GROUP partner_grp;

Use the GRANT command to define access permissions to database objects (tables/views) for the preceding groups.
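
As a sketch only, the grants could be issued through the same PyGreSQL driver used later in this post. The endpoint, superuser credentials, and table name below are placeholders, not objects defined in this walkthrough.

import pg

# Connect to the cluster as a superuser (all connection values are placeholders)
conn = pg.connect(dbname='analytics',
                  host='examplecorp-dw.xxxxxxxx.us-west-2.redshift.amazonaws.com',
                  port=5439,
                  user='masteruser',
                  passwd='placeholder-password')

# Grant read access on a hypothetical table to each database group
for group in ('sales_grp', 'datascience_grp', 'partner_grp'):
    conn.query("GRANT SELECT ON TABLE public.sales_orders TO GROUP %s" % group)

conn.close()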

Step 4: Connect to Amazon Redshift using the JDBC SQL client

Assume that sales user “nancy” is using the SQL Workbench client and JDBC driver to log in to the Amazon Redshift data warehouse. The following steps help set up the client and establish the connection:

  1. Download the latest Amazon Redshift JDBC driver from the Configure a JDBC Connection page
  2. Build the JDBC URL with the IAM option in the following format:
    jdbc:redshift:iam://examplecorp-dw:us-west-2/sales_db

Because the redshift:DescribeClusters action is assigned to the preceding IAM roles, it automatically resolves the cluster endpoints and the port. Otherwise, you can specify the endpoint and port information in the JDBC URL, as described in Configure a JDBC Connection.

Identify the following JDBC options for providing the IAM credentials (see the “Prepare your environment” section) and configure them in the SQL Workbench Connection Profile:

plugin_name=com.amazon.redshift.plugin.AdfsCredentialsProvider
idp_host=demo.examplecorp.com (the name of the corporate identity provider host)
idp_port=443 (the port of the corporate identity provider host)
user=examplecorp\nancy (corporate user name)
password=*** (corporate user password)

The SQL workbench configuration looks similar to the following screenshot:

Now, “nancy” can connect to examplecorp-dw by authenticating against the corporate Active Directory. Because the SAML attribute elements are already configured for nancy, she logs in as database user nancy and is assigned to the sales_grp group. Similarly, other Sales and Data Science users can connect to the examplecorp-dw cluster. A custom Amazon Redshift ODBC driver can also be used to connect using a SQL client. For more information, see Configure an ODBC Connection.

Step 5: Connect to Amazon Redshift using a JDBC SQL client and IAM credentials

This optional step is necessary only when you want to enable users that are not authenticated with Active Directory. Partners are provided with IAM credentials that they can use to connect to the examplecorp-dw Amazon Redshift cluster. These IAM users have the Partner_DW_IAM_Policy attached, which allows them to join the partner_grp database group in Amazon Redshift. The following JDBC URL enables them to connect to the Amazon Redshift cluster:

jdbc:redshift:iam://examplecorp-dw/analytics?AccessKeyID=XXX&SecretAccessKey=YYY&DbUser=examplecorpsalespartner&DbGroup=partner_grp&AutoCreate=true

The AutoCreate option automatically creates a new database user the first time the partner logs in. There are several other options available to conveniently specify the IAM user credentials. For more information, see Options for providing IAM credentials.

Step 6: Connect to Amazon Redshift using an ODBC client for Microsoft Windows

Assume that another sales user “uma” is using an ODBC-based client to log in to the Amazon Redshift data warehouse using Example Corp Active Directory. The following steps help set up the ODBC client and establish the Amazon Redshift connection in a Microsoft Windows operating system connected to your corporate network:

  1. Download and install the latest Amazon Redshift ODBC driver.
  2. Create a system DSN entry.
    1. In the Start menu, locate the driver folder or folders:
      • Amazon Redshift ODBC Driver (32-bit)
      • Amazon Redshift ODBC Driver (64-bit)
      • If you installed both drivers, you have a folder for each driver.
    2. Choose ODBC Administrator, and then type your administrator credentials.
    3. To configure the driver for all users on the computer, choose System DSN. To configure the driver for your user account only, choose User DSN.
    4. Choose Add.
  3. Select the Amazon Redshift ODBC driver, and choose Finish. Configure the following attributes:
    Data Source Name = any friendly name to identify the ODBC connection
    Database = analytics
    user = uma (corporate user name)
    Auth Type = Identity Provider: AD FS
    password = leave blank (Windows automatically authenticates)
    Cluster ID = examplecorp-dw
    idp_host = demo.examplecorp.com (the name of the corporate IdP host)

This configuration looks like the following:

  4. Choose OK to save the ODBC connection.
  5. Verify that uma is set up with the SAML attributes, as described in the “Set up IdPs and federation” section.

The user uma can now use this ODBC connection to establish a connection to the Amazon Redshift cluster from any ODBC-based or reporting tool, such as Tableau. Internally, uma authenticates using the IAM role that has the Sales_DW_IAM_Policy attached and is assigned the sales_grp database group.

Step 7: Connect to Amazon Redshift using Python and IAM credentials

To enable partners to connect to the examplecorp-dw cluster programmatically, use Python on a computer such as an Amazon EC2 instance. Reuse the IAM users that are attached to the Partner_DW_IAM_Policy policy defined in Step 2.

The following steps show this set up on an EC2 instance:

  1. Launch a new EC2 instance with an IAM role that has the Partner_DW_IAM_Policy attached, as described in Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances. Alternatively, you can attach an existing IAM role to an EC2 instance.
  2. This example uses the Python PostgreSQL driver (PyGreSQL) to connect to your Amazon Redshift cluster. To install PyGreSQL on Amazon Linux, run the following commands as the ec2-user:
    sudo easy_install pip
    sudo yum install postgresql postgresql-devel gcc python-devel
    sudo pip install PyGreSQL

  3. The following code snippet demonstrates programmatic access to Amazon Redshift for partner users:
    #!/usr/bin/env python
    """
    Usage:
    python redshift-unload-copy.py <config file> <region>
    
    * Copyright 2014, Amazon.com, Inc. or its affiliates. All Rights Reserved.
    *
    * Licensed under the Amazon Software License (the "License").
    * You may not use this file except in compliance with the License.
    * A copy of the License is located at
    *
    * http://aws.amazon.com/asl/
    *
    * or in the "license" file accompanying this file. This file is distributed
    * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
    * express or implied. See the License for the specific language governing
    * permissions and limitations under the License.
    """
    
    import sys
    import pg
    import boto3
    
    REGION = 'us-west-2'
    CLUSTER_IDENTIFIER = 'examplecorp-dw'
    DB_NAME = 'sales_db'
    DB_USER = 'examplecorpsalespartner'
    
    options = """keepalives=1 keepalives_idle=200 keepalives_interval=200
                 keepalives_count=6"""
    
    set_timeout_stmt = "set statement_timeout = 1200000"
    
    def conn_to_rs(host, port, db, usr, pwd, opt=options, timeout=set_timeout_stmt):
        rs_conn_string = """host=%s port=%s dbname=%s user=%s password=%s
                             %s""" % (host, port, db, usr, pwd, opt)
        print "Connecting to %s:%s:%s as %s" % (host, port, db, usr)
        rs_conn = pg.connect(dbname=rs_conn_string)
        rs_conn.query(timeout)
        return rs_conn
    
    def main():
        # describe the cluster and fetch the IAM temporary credentials
        global redshift_client
        redshift_client = boto3.client('redshift', region_name=REGION)
        response_cluster_details = redshift_client.describe_clusters(ClusterIdentifier=CLUSTER_IDENTIFIER)
        response_credentials = redshift_client.get_cluster_credentials(DbUser=DB_USER,DbName=DB_NAME,ClusterIdentifier=CLUSTER_IDENTIFIER,DurationSeconds=3600)
        rs_host = response_cluster_details['Clusters'][0]['Endpoint']['Address']
        rs_port = response_cluster_details['Clusters'][0]['Endpoint']['Port']
        rs_db = DB_NAME
        rs_iam_user = response_credentials['DbUser']
        rs_iam_pwd = response_credentials['DbPassword']
        # connect to the Amazon Redshift cluster
        conn = conn_to_rs(rs_host, rs_port, rs_db, rs_iam_user,rs_iam_pwd)
        # execute a query
        result = conn.query("SELECT sysdate as dt")
        # fetch results from the query
        for dt_val in result.getresult() :
            print dt_val
        # close the Amazon Redshift connection
        conn.close()
    
    if __name__ == "__main__":
        main()

You can save this Python program in a file (redshiftscript.py) and execute it at the command line as ec2-user:

python redshiftscript.py

Now partners can connect to the Amazon Redshift cluster using the Python script, authenticating through IAM rather than the corporate directory.

Summary

In this post, I demonstrated how to use federated access using Active Directory and IAM roles to enable single sign-on to an Amazon Redshift cluster. I also showed how partners outside an organization can be managed easily using IAM credentials.  Using the GetClusterCredentials API action, now supported by Amazon Redshift, lets you manage a large number of database users and have them use corporate credentials to log in. You don’t have to maintain separate database user accounts.

Although this post demonstrated the integration of IAM with AD FS and Active Directory, you can replicate this solution with your choice of SAML 2.0-compliant third-party identity provider (IdP), such as PingFederate or Okta. For the different supported federation options, see Configure SAML Assertions for Your IdP.

If you have questions or suggestions, please comment below.


Additional Reading

Learn how to establish federated access to your AWS resources by using Active Directory user attributes.


About the Author

Thiyagarajan Arumugam is a Big Data Solutions Architect at Amazon Web Services and designs customer architectures to process data at scale. Prior to AWS, he built data warehouse solutions at Amazon.com. In his free time, he enjoys all outdoor sports and practices the Indian classical drum mridangam.

 

More Raspberry Pi labs in West Africa

Post Syndicated from Rachel Churcher original https://www.raspberrypi.org/blog/pi-based-ict-west-africa/

Back in May 2013, we heard from Dominique Laloux about an exciting project to bring Raspberry Pi labs to schools in rural West Africa. Until 2012, 75 percent of teachers there had never used a computer. The project has been very successful, and Dominique has been in touch again to bring us the latest news.

A view of the inside of the new Pi lab building

Preparing the new Pi labs building in Kuma Tokpli, Togo

Growing the project

Thanks to the continuing efforts of a dedicated team of teachers, parents and other supporters, the Centre Informatique de Kuma, now known as INITIC (from the French ‘INItiation aux TIC’), runs two Raspberry Pi labs in schools in Togo, and plans to open a third in December. The second lab was opened last year in Kpalimé, a town in the Plateaux Region in the west of the country.

Student using a Raspberry Pi computer

Using the new Raspberry Pi labs in Kpalimé, Togo

More than 400 students used the new lab intensively during the last school year. Dominique tells us more:

“The report made in early July by the seven teachers who accompanied the students was nothing short of amazing: the young people covered a very impressive number of concepts and skills, from the GUI and the file system, to a solid introduction to word processing and spreadsheets, and many other skills. The lab worked exactly as expected. Its 21 Raspberry Pis worked flawlessly, with the exception of a couple of SD cards that needed re-cloning, and a couple of old screens that needed to be replaced. All the Raspberry Pis worked without a glitch. They are so reliable!”

The teachers and students have enjoyed access to a range of software and resources, all running on Raspberry Pi 2s and 3s.

“Our current aim is to introduce the students to ICT using the Raspberry Pis, rather than introducing them to programming and electronics (a step that will certainly be considered later). We use Ubuntu Mate along with a large selection of applications, from LibreOffice, Firefox, GIMP, Audacity, and Calibre, to special maths, science, and geography applications. There are also special applications such as GnuCash and GanttProject, as well as logic games including PyChess. Since December, students also have access to a local server hosting Kiwix, Wiktionary (a local copy of Wikipedia in four languages), several hundred videos, and several thousand books. They really love it!”

Pi lab upgrade

This summer, INITIC upgraded the equipment in their Pi lab in Kuma Adamé, which has been running since 2014. 21 older model Raspberry Pis were replaced with Pi 2s and 3s, to bring this lab into line with the others, and encourage co-operation between the different locations.

“All 21 first-generation Raspberry Pis worked flawlessly for three years, despite the less-than-ideal conditions in which they were used — tropical conditions, dust, frequent power outages, etc. I brought them all back to Brussels, and they all still work fine. The rationale behind the upgrade was to bring more computing power to the lab, and also to have the same equipment in our two Raspberry Pi labs (and in other planned installations).”

Students and teachers using the upgraded Pi labs in Kuma Adamé

Students and teachers using the upgraded Pi lab in Kuma Adamé

An upgrade of the organisation’s first lab, installed in 2012 in Kuma Tokpli, will be completed in December. This lab currently uses ‘retired’ laptops, which will be replaced with Raspberry Pis and peripherals. INITIC, in partnership with the local community, is also constructing a new building to house the upgraded technology, and the organisation’s third Raspberry Pi lab.

Reliable tech

Dominique has been very impressed with the performance of the Raspberry Pis since 2014.

“Our experience of three years, in two very different contexts, clearly demonstrates that the Raspberry Pi is a very convincing alternative to more ‘conventional’ computers for introducing young students to ICT where resources are scarce. I wish I could convince more communities in the world to invest in such ‘low cost, low consumption, low maintenance’ infrastructure. It really works!”

He goes on to explain that:

“Our goal now is to build at least one new Raspberry Pi lab in another Togolese school each year. That will, of course, depend on how successful we are at gathering the funds necessary for each installation, but we are confident we can convince enough friends to give us the financial support needed for our action.”

A desk with Raspberry Pis and peripherals

Reliable Raspberry Pis in the labs at Kpalimé

Get involved

We are delighted to see the Raspberry Pi being used to bring information technology to new teachers, students, and communities in Togo – it’s wonderful to see this project becoming established and building on its achievements. The mission of the Raspberry Pi Foundation is to put the power of digital making into the hands of people all over the world. Therefore, projects like this, in which people use our tech to fulfil this mission in places with few resources, are wonderful to us.

More information about INITIC and its projects can be found on its website. If you are interested in helping the organisation to meet its goals, visit the How to help page. And if you are involved with a project like this, bringing ICT, computer science, and coding to new places, please tell us about it in the comments below.

The post More Raspberry Pi labs in West Africa appeared first on Raspberry Pi.

A2SV – Auto Scanning SSL Vulnerability Tool For Poodle & Heartbleed

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/10/a2sv-auto-scanning-ssl-vulnerability-tool-poodle-heartbleed/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

A2SV – Auto Scanning SSL Vulnerability Tool For Poodle & Heartbleed

A2SV is a Python-based tool focused on SSL vulnerabilities that allows for auto-scanning and detection of common and well-known SSL vulnerabilities.

SSL Vulnerabilities Detected by A2SV

  • [CVE-2007-1858] Anonymous Cipher
  • [CVE-2012-4929] CRIME(SPDY)
  • [CVE-2014-0160] HeartBleed
  • [CVE-2014-0224] CCS Injection
  • [CVE-2014-3566] SSLv3 POODLE
  • [CVE-2015-0204] FREAK Attack
  • [CVE-2015-4000] LOGJAM Attack
  • [CVE-2016-0800] SSLv2 DROWN

Planned for future:

  • [PLAN] SSL ACCF
  • [PLAN] SSL Information Analysis

Installation & Requirements for A2SV

A.

Read the rest of A2SV – Auto Scanning SSL Vulnerability Tool For Poodle & Heartbleed now! Only available at Darknet.

Yarrrr! Dutch ISPs Block The Pirate Bay But It’s Bad Timing for Trolls

Post Syndicated from Andy original https://torrentfreak.com/yarrrr-dutch-isps-block-the-pirate-bay-but-its-bad-timing-for-trolls-171005/

While many EU countries have millions of Internet pirates, few have given citizens the freedom to plunder like the Netherlands. For many years, Dutch Internet users actually went about their illegal downloading with government blessing.

Just over three years ago, downloading and copying movies and music for personal use was not punishable by law. Instead, the Dutch compensated rightsholders through a “piracy levy” on writable media, hard drives and electronic devices with storage capacity, including smartphones.

Following a ruling from the European Court of Justice in 2014, however, all that came to an end. Along with uploading (think BitTorrent sharing), downloading was also outlawed.

Around the same time, The Court of The Hague handed down a decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

Ruling against local anti-piracy outfit BREIN, the court decided that the ISPs wouldn’t have to block The Pirate Bay after all. After a long and tortuous battle, however, the ISPs learned last month that they would have to block the site, pending a decision from the Supreme Court.

On September 22, both ISPs were given 10 business days to prevent subscriber access to the notorious torrent site, or face fines of 2,000 euros per day, up to a maximum of one million euros.

With that time nearly up, yesterday Ziggo broke cover to become the first of the pair to block the site. On a dedicated diversion page, somewhat humorously titled ziggo.nl/yarrr, the ISP explained the situation to now-blocked users.

“You are trying to visit a page of The Pirate Bay. On September 22, the Hague Court obliged us to block access to this site. The pirate flag is thus handled by us. The case is currently at the Supreme Court which judges the basic questions in this case,” the notice reads.

Ziggo Pirate Bay message (translated)

Customers of XS4ALL currently have no problem visiting The Pirate Bay but according to a statement handed to Tweakers by a spokesperson, the blockade will be implemented today.

In addition to the site’s main domains, the injunction will force the ISPs to block 155 URLs and IP addresses in total, a list that has been drawn up by BREIN to include various mirrors, proxies, and alternate access points. XS4All says it will publish a list of all the blocked items on its notification page.

While the re-introduction of a Pirate Bay blockade in the Netherlands is an achievement for BREIN, it’s potentially bad timing for the copyright trolls waiting in the wings to snare Dutch file-sharers.

As recently reported, movie outfit Dutch Filmworks (DFW) is preparing a wave of cash-settlement copyright-trolling letters to mimic those sent by companies elsewhere.

There’s little doubt that users of The Pirate Bay would’ve been DFW’s targets but it seems likely that given the introduction of blockades, many Dutch users will start to educate themselves on the use of VPNs to protect their privacy, or at least become more aware of the risks.

Of course, there will be no real shortage of people who’ll continue to download without protection, but DFW are getting into this game just as it’s likely to get more difficult for them. As more and more sites get blocked (and that is definitely BREIN’s overall plan) the low hanging fruit will sit higher and higher up the tree – and the cash with it.

Like all methods of censorship, site-blocking eventually drives communication underground. While anti-piracy outfits all say blocking is necessary, obfuscation and encryption aren’t welcomed by any of them.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] Catching up with RawTherapee 5.x

Post Syndicated from corbet original https://lwn.net/Articles/735133/rss

Free-software raw photo editor RawTherapee released a major new
revision earlier this year, followed by a string of incremental
updates. The 5.x series, released at a rapid pace, marks a
significant improvement in RawTherapee’s development tempo — the
project’s preceding update had landed in 2014. Regardless of the speed of
the releases themselves, however, the improved RawTherapee offers users a
lot of added functionality and may shake up the raw-photo-processing
workflow for many photographers.

PS4 Piracy Now Exists – If Gamers Want to Jump Through Hoops

Post Syndicated from Andy original https://torrentfreak.com/ps4-piracy-now-exists-if-gamers-want-to-jump-through-hoops-170930/

During the reign of the first few generations of consoles, gamers became accustomed to their machines being compromised by hacking groups and enthusiasts, to enable the execution of third-party software.

Often carried out under the banner of running “homebrew” code, so-called jailbroken consoles also brought with them the prospect of running pirate copies of officially produced games. Once the floodgates were opened, not much could hold things back.

With the advent of mass online gaming, however, things became more complex. Regular firmware updates mean that security holes could be fixed remotely whenever a user went online, rendering the jailbreaking process a cat-and-mouse game with continually moving targets.

This, coupled with massively improved overall security, has meant that the current generation of consoles has remained largely piracy free, at least on a do-it-at-home basis. Now, however, that position is set to change after the first decrypted PS4 game dumps began to hit the web this week.

Thanks to release group KOTF (Knights of the Fallen), Grand Theft Auto V, Far Cry 4, and Assassin’s Creed IV are all available for download from the usual places. As expected, they are pretty meaty downloads, with GTAV weighing in at 90 x 500MB files, Far Cry 4 at 54 of the same size, and ACIV sporting 84 x 250MB.

Partial NFO file for PS4 GTA V

While undoubtedly large, it’s not the filesize that will prove most prohibitive when it comes to getting these beasts to run on a PlayStation 4. Indeed, a potential pirate will need to jump through a number of hoops to enjoy any of these titles or others that may appear in the near future.

KOTF explains as much in the NFO (information) files it includes with its releases. The list of requirements is long.

First up, a gamer needs to possess a PS4 with an extremely old firmware version – v1.76 – which was released way back in August 2014. The fact this firmware is required doesn’t come as a surprise since it was successfully jailbroken back in December 2015.

The age of the firmware raises several issues, not least where people can obtain a PS4 that’s so old it still has this firmware intact. Also, newer games require later firmware, so most games released during the past two to three years won’t be compatible with v1.76. That limits the pool of games considerably.

Finally, forget going online with such an old software version. Sony will be all over it like a cheap suit, plotting to do something unpleasant to that cheeky antique code, given half a chance. And, for anyone wondering, downgrading a higher firmware version to v1.76 isn’t possible – yet.

But for gamers who want a little bit of recent PS4 nostalgia on the cheap, ‘all’ they have to do is gather the necessary tools together and follow the instructions below.

Easy – when you know how

While this is a landmark moment for PS4 piracy (which to date has mainly centered around much hocus pocus), the limitations listed above mean that it isn’t going to hit the mainstream just yet.

That being said, all things are possible when given the right people, determination, and enough time. Whether that will be anytime soon is anyone’s guess but there are rumors that firmware v4.55 has already been exploited, so you never know.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Microsoft Becomes Sponsor of Open Source Initiative

Post Syndicated from ris original https://lwn.net/Articles/734941/rss

The Open Source Initiative (OSI) has announced that Microsoft has
joined the organization as a Premium Sponsor.
Microsoft’s history with the OSI dates back to 2005 with the submission of the Microsoft Community License, then again in August of 2007 with the submission of the Microsoft Permissive License. For many in the open source software community, it was Microsoft’s release of .NET in 2014 under an open source license that may have first caught their attention. Microsoft has increasingly participated in open source projects and communities as users, contributors, and creators, and has released even more open source products like Visual Studio Code and Typescript.