Tag Archives: risk

Epic Responds to Cheating Fortnite Kid’s Mom in Court

Post Syndicated from Ernesto original https://torrentfreak.com/epic-responds-to-cheating-fortnite-kids-mom-in-court-180424/

Last fall, Epic Games released Fortnite’s free-to-play “Battle Royale” game mode, generating massive interest among gamers.

This also included thousands of cheaters, many of whom were subsequently banned. Epic Games then went a step further by taking several cheaters to court for copyright infringement.

One of the alleged cheaters turned out to be a minor, who’s referred to by his initials C.R. in the North Carolina District Court. Epic Games wasn’t aware of this when it filed the lawsuit, but the kid’s mother let the company know, loud and clear.

“This company is in the process of attempting to sue a 14-year-old child,” the mother informed the Court last fall.

Among other defenses, the mother highlighted that the EULA, which the game publisher relies heavily upon in the complaint, isn’t legally binding. The EULA states that minors require permission from a parent or legal guardian, which was not the case here.

“Please note parental consent was not issued to [my son] to play this free game produced by Epic Games, INC,” the mother wrote in her letter.

After this letter, things went quiet. Epic managed to locate and serve the defendant with help from a private investigator, but no official response to the complaint was filed. This eventually prompted Epic to request an entry of default.

However, US District Court Judge Malcolm Howard wouldn’t allow Epic to cruise to a win that easily. Instead, he ruled that the mother’s letter should be seen as a motion to dismiss the case.

“While it is true that defendant has not responded since proper service was effectuated, the letter from defendant’s mother detailing why this matter should be dismissed cannot be ignored,” Judge Howard wrote earlier this month.

As a result, Epic Games had to reply to the letter, which it did yesterday. In a redacted motion, the game publisher argues that most of the mother’s arguments fail to state a claim and are therefore irrelevant.

Epic argues that the only issue that remains is the lack of parental consent when C.R. agreed to the EULA and the Terms. The mother argued that these are not valid agreements because her son is a minor, but Epic disagrees.

“This ‘infancy defense’ is not available to C.R.,” Epic writes, pointing to jurisprudence in which another court ruled that a minor can’t use the infancy defense to void contractual obligations while keeping the benefits of the same contract.

“C.R. affirmatively agreed to abide by Epic’s Terms and EULA, and ‘retained the benefits’ of the contracts he entered into with Epic. Accordingly, C.R. should not be able to ‘use the infancy defense to void [his] contractual obligations by retaining the benefits of the contract[s]’.”

Epic further argues that it’s clear that the cheater infringed on Epic’s copyrights and helped others to do the same. As such, the company asks the Court to deny the mother’s motion to dismiss.

If the Court agrees, Epic can request an entry of default. It made the same request earlier in a related case against another minor defendant, and the Court granted it late last week.

If that happens, the underage defendants risk a default judgment. This is likely to include a claim for monetary damages as well as an injunction prohibiting the minors from any copyright infringement or cheating in the future.

A copy of Epic Games’ redacted reply is available here (pdf).


Tips for Success: GDPR Lessons Learned

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/tips-for-success-gdpr-lessons-learned/

Security is our top priority at AWS, and from the beginning we have built security into the fabric of our services. With the introduction of the GDPR (which becomes enforceable on May 25, 2018), privacy and data protection have become even more ingrained in our security-centered culture. Three weeks ago, well ahead of the deadline, we announced that all AWS services are compliant with the GDPR, meaning you can use AWS as a data processor to help address your GDPR challenges (be sure to visit our GDPR Center for additional information).

When it comes to GDPR compliance, many customers are progressing nicely and much of the initial trepidation is gone. In my interactions with customers on this topic, a few themes have emerged as universal:

  • GDPR is important. You need to have a plan in place if you process personal data of EU data subjects, not only because it’s good governance, but because GDPR does carry significant penalties for non-compliance.
  • Solving this can be complex, potentially involving a lot of personnel and multiple tools. Your GDPR process will also likely span disciplines – impacting people, processes, and technology.
  • Each customer is unique, and there are many methodologies around assessing your compliance with GDPR. It’s important to be aware of your own individual business attributes.

I thought it might be helpful to share some of our own lessons learned. In our experience in solving the GDPR challenge, the following were keys to our success:

  1. Get your senior leadership involved. We have a regular cadence of detailed status conversations about GDPR with our CEO, Andy Jassy. GDPR is high stakes, and the AWS leadership team knows it. If GDPR doesn’t have the attention it needs with the visibility of top management today, it’s time to escalate.
  2. Centralize the GDPR efforts. Driving all work streams centrally is key. This may sound obvious, but managing this in a distributed manner may result in duplicative effort and/or team members moving in different directions.
  3. The most important single partner in solving GDPR is your legal team. Having non-legal people make assumptions about how to interpret GDPR for your unique environment is both risky and a potential waste of time and resources. You want to avoid analysis paralysis by getting proper legal advice, collaborating on a direction, and then moving forward with the proper urgency.
  4. Collaborate closely with tech leadership. The “process” people in your organization, the ones who already know how to approach governance problems, are typically comfortable jumping right in to GDPR. But technical teams, including data owners, have set up their software for business applications. They may not even know what kind of data they are storing, processing, or transferring to other parts of the business. In the GDPR exercise they need to be aware of (or at least help facilitate) the tracking of data and data elements between systems. This isn’t a typical ask for technical teams, so be prepared to educate and to fully understand data flow.
  5. Don’t live by the established checklists. There are multiple methodologies to solving the compliance challenges of GDPR. At AWS, we ended up establishing core requirements, mapped out by data controller and data processor functions and then, in partnership with legal, decided upon a group of projects based on our known current state. Be careful about using a set methodology, tool or questionnaire to govern your efforts. These generic assessments can help educate, but letting them drive or limit your work could lead to missing something that is key to your own compliance. In this sense, a generic, “one size fits all” solution might not be helpful.
  6. Don’t be afraid to challenge prior orthodoxy. Many times we changed course based on new information. You shouldn’t be afraid to scrap an effort if you determine it’s not working. You should also not be afraid to escalate issues to senior leadership when needed. This is an executive issue.
  7. Look for ways to leverage your work beyond this compliance activity. GDPR requires serious effort, but are the results limited to GDPR compliance? Certainly not. You can use GDPR workflows as a way to ensure better governance moving forward. Privacy and security will require work for the foreseeable future, so make your governance program scalable and usable for other purposes.

One last tip that has made all the difference: think about protecting data subjects and work backwards from there. Customer focus drives us to ask, “what would customers and data subjects want and expect us to do?” Taking GDPR from a pure legal or compliance standpoint may be technically sufficient, but we believe the objectives of security and personal data protection require a more comprehensive view, and you can most effectively shape that view by starting with the individuals GDPR was meant to protect.

If you would like to find out more about our experiences, as well as how we can help you in your efforts, please reach out to us today.

-Chad Woolf

Vice President, AWS Security Assurance

Interested in additional AWS Security news? Follow the AWS Security Blog on Twitter.

Announcing the new AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-the-new-aws-certified-security-specialty-exam/

Good news for cloud security experts: following our most popular beta exam ever, the AWS Certified Security – Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.

About the exam
The security exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

The exam validates:

  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

How to prepare
We have training and other resources to help you prepare for the exam:

AWS Training (aws.amazon.com/training)

Additional Resources

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Announcing the new AWS Certified Security – Specialty exam

Post Syndicated from Ozlem Yilmaz original https://aws.amazon.com/blogs/security/announcing-the-new-aws-certified-security-specialty-exam/

Good news for cloud security experts: the AWS Certified Security — Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.

About the exam

The security exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

The exam validates your understanding of:

  • Specialized data classifications and AWS data protection mechanisms
  • Data encryption methods and AWS mechanisms to implement them
  • Secure Internet protocols and AWS mechanisms to implement them
  • AWS security services and features of services to provide a secure production environment
  • Making tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements
  • Security operations and risk

How to prepare

We have training and other resources to help you prepare for the exam.

AWS Training that includes:

Additional Resources

Learn more and register here, and please contact us if you have questions about exam registration.

Want more AWS Security news? Follow us on Twitter.

French Minister of Culture Calls For Pirate Streaming Blacklist

Post Syndicated from Ernesto original https://torrentfreak.com/french-minister-of-culture-calls-for-pirate-streaming-blacklist-180423/

Nearly a decade ago, France was on the anti-piracy enforcement frontline.

The country was the first to introduce a graduated response system, Hadopi, where Internet subscribers risked losing their Internet connections if they were caught sharing torrents repeatedly.

Today this approach is no longer as effective as it once was. The bulk of all online piracy has moved from P2P downloading to streaming, and the latter isn’t traceable by anti-piracy watchdogs.

This hasn’t gone unnoticed by the French Government, Minister of Culture Françoise Nyssen in particular, who highlighted the issue to reporters a few days ago.

“The Hadopi response is no longer suitable because piracy is now 80% by streaming,” she said, quoted by local media.

While Hadopi may have outgrown its usefulness, France is not giving up the piracy fight. On the contrary, the country is now pondering new measures to target the current epidemic of pirate streaming sites.

Nyssen hopes that local authorities will implement a national pirate site blocklist to address the problem. Ideally, this should be constantly updated to ensure that pirate streaming sites remain inaccessible.

The Minister told reporters that France must “act on the sites,” by implementing “a blacklist which is constantly updated to keep them offline”.

This list would be maintained by the Hadopi agency which can then circulate it among several online intermediaries. This can include Internet providers, but also search engines and advertising networks.

The tough language will be music to the ears of the film industry and the timing doesn’t appear to be a total coincidence either.

The comments from the French Minister of Culture come shortly after several film industry groups boycotted a reception at the ministry. According to the groups, France dropped the ball on enforcement against piracy, which is blamed for more than a billion euros in losses.

The renewed promise may calm the waters for a while, but for now, it’s little more than that. It will likely take time before an effective pirate site blacklist is established, if it gets that far.


Russia Blacklists 250 Pirate Sites For Displaying Gambling Ads

Post Syndicated from Andy original https://torrentfreak.com/russia-blacklists-250-pirate-sites-for-displaying-gambling-ads-180421/

Blocking alleged pirate sites is usually a question of proving that they’re involved in infringement and then applying to the courts for an injunction.

In Europe, the process is becoming easier, largely thanks to an EU ruling that permits blocking on copyright grounds.

As reported over the past several years, Russia is taking its blocking processes very seriously. Copyright holders can now have sites blocked in just a few days, if they can show their operators as being unresponsive to takedown demands.

This week, however, Russian authorities have again shown that copyright infringement doesn’t have to be the only Achilles’ heel of pirate sites.

Back in 2006, online gambling was completely banned in Russia. Three years later in 2009, land-based gambling was also made illegal in all but four specified regions. Then, in 2012, the Russian Supreme Court ruled that ISPs must block access to gambling sites, something they had previously refused to do.

That same year, telecoms watchdog Rozcomnadzor began publishing a list of banned domains and within those appeared some of the biggest names in gambling. Many shut down access to customers located in Russia but others did not. In response, Rozcomnadzor also began targeting sites that simply offered information on gambling.

Fast forward more than six years and Russia is still taking a hard line against gambling operators. However, it now finds itself in a position where the existence of gambling material can also assist the state in its quest to take down pirate sites.

Following a complaint from the Federal Tax Service of Russia, Rozcomnadzor has again added a large number of ‘pirate’ sites to the country’s official blocklist after they advertised gambling-related products and services.

“Rozcomnadzor, at the request of the Federal Tax Service of Russia, added more than 250 pirate online cinemas and torrent trackers to the unified register of banned information, which hosted illegal advertising of online casinos and bookmakers,” the telecoms watchdog reported.

Almost immediately, 200 of the sites were blocked by local ISPs since they failed to remove the advertising when told to do so. For the remaining 50 sites, breathing space is still available. Their bans can be suspended if the offending ads are removed within a timeframe specified by the authorities, which has not yet run out.

“Information on a significant number of pirate resources with illegal advertising was received by Rozcomnadzor from citizens and organizations through a hotline that operates on the site of the Unified Register of Prohibited Information, all of which were sent to the Federal Tax Service for making decisions on restricting access,” the watchdog revealed.

Links between pirate sites and gambling companies have traditionally been close over the years, with advertising for many top-tier brands appearing on portals large and small. However, in recent times the prevalence of gambling ads has diminished, in part due to campaigns conducted in the United States, Europe, and the UK.

For pirate site operators in Russia, the decision to carry gambling ads now comes with the added risk of being blocked. Only time will tell whether any reduction in traffic is considered serious enough to warrant a gambling boycott of their own.


Implement continuous integration and delivery of serverless AWS Glue ETL applications using AWS Developer Tools

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-serverless-aws-glue-etl-applications-using-aws-developer-tools/

AWS Glue is an increasingly popular way to develop serverless ETL (extract, transform, and load) applications for big data and data lake workloads. Organizations that transform their ETL applications to cloud-based, serverless ETL architectures need a seamless, end-to-end continuous integration and continuous delivery (CI/CD) pipeline: from source code, to build, to deployment, to product delivery. Having a good CI/CD pipeline can help your organization discover bugs before they reach production and deliver updates more frequently. It can also help developers write quality code and automate the ETL job release management process, mitigate risk, and more.

AWS Glue is a fully managed data catalog and ETL service. It simplifies and automates the difficult and time-consuming tasks of data discovery, conversion, and job scheduling. AWS Glue crawls your data sources and constructs a data catalog using pre-built classifiers for popular data formats and data types, including CSV, Apache Parquet, JSON, and more.

When you are developing ETL applications using AWS Glue, you might come across some of the following CI/CD challenges:

  • Iterative development with unit tests
  • Continuous integration and build
  • Pushing the ETL pipeline to a test environment
  • Pushing the ETL pipeline to a production environment
  • Testing ETL applications using real data (live test)
  • Exploring and validating data

In this post, I walk you through a solution that implements a CI/CD pipeline for serverless AWS Glue ETL applications supported by AWS Developer Tools (including AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild) and AWS CloudFormation.

Solution overview

The following diagram shows the pipeline workflow:

This solution uses AWS CodePipeline, which lets you orchestrate and automate the test and deploy stages for ETL application source code. The solution consists of a pipeline that contains the following stages:

1.) Source Control: In this stage, the AWS Glue ETL job source code and the AWS CloudFormation template file for deploying the ETL jobs are both committed to version control. I chose to use AWS CodeCommit for version control.

To get the ETL job source code and AWS CloudFormation template, download the gluedemoetl.zip file. This solution is based on a previous post, Build a Data Lake Foundation with AWS Glue and Amazon S3.

2.) LiveTest: In this stage, all resources—including AWS Glue crawlers, jobs, S3 buckets, roles, and other resources that are required for the solution—are provisioned, deployed, live tested, and cleaned up.

The LiveTest stage includes the following actions:

  • Deploy: In this action, all the resources that are required for this solution (crawlers, jobs, buckets, roles, and so on) are provisioned and deployed using an AWS CloudFormation template.
  • AutomatedLiveTest: In this action, all the AWS Glue crawlers and jobs are executed and data exploration and validation tests are performed. These validation tests include, but are not limited to, record counts in both raw tables and transformed tables in the data lake and any other business validations. I used AWS CodeBuild for this action (a rough sketch of what such a check might look like appears after this list).
  • LiveTestApproval: This action is included for the cases in which a pipeline administrator approval is required to deploy/promote the ETL applications to the next stage. The pipeline pauses in this action until an administrator manually approves the release.
  • LiveTestCleanup: In this action, all the LiveTest stage resources, including test crawlers, jobs, roles, and so on, are deleted using the AWS CloudFormation template. This action helps minimize cost by ensuring that the test resources exist only for the duration of the AutomatedLiveTest and LiveTestApproval actions.
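
To make the AutomatedLiveTest action more concrete, here is a rough sketch (not the actual buildspec or test code from the sample project) of what such a validation step could look like with the AWS SDK for Python: start a Glue job, wait for it to finish, and then compare record counts via Athena. The job name and the Athena query result location below are placeholders; the database and table names match the ones queried later in this post.

# Rough sketch of an automated live test (placeholder names, not the sample's actual code)
import time
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

def run_glue_job(job_name):
    # Start the Glue ETL job and poll until it reaches a terminal state
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)

def athena_count(database, table, output_location):
    # Run a COUNT(*) query and return the single numeric result
    qid = athena.start_query_execution(
        QueryString=f'SELECT count(*) FROM "{database}"."{table}"',
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(5)
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # rows[0] is the header row

assert run_glue_job("gluedemo-etl-job") == "SUCCEEDED"   # placeholder job name
results = "s3://my-athena-query-results/"                # placeholder output location
raw = athena_count("nycitytaxi_gluedemocicdtest", "data", results)
transformed = athena_count("nytaxiparquet_gluedemocicdtest", "datalake", results)
assert raw == transformed, f"record counts differ: {raw} vs {transformed}"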

3.) DeployToProduction: In this stage, all the resources are deployed using the AWS CloudFormation template to the production environment.

Try it out

This code pipeline takes approximately 20 minutes to complete the LiveTest stage (up to the LiveTestApproval action, in which manual approval is required).

To get started with this solution, choose Launch Stack:

This creates the CI/CD pipeline with all of its stages, as described earlier. It performs an initial commit of the sample AWS Glue ETL job source code to trigger the first release change.

In the AWS CloudFormation console, choose Create. After the template finishes creating resources, you see the pipeline name on the stack Outputs tab.

After that, open the CodePipeline console and select the newly created pipeline. Initially, your pipeline’s CodeCommit stage shows that the source action failed.

Allow a few minutes for your new pipeline to detect the initial commit applied by the CloudFormation stack creation. As soon as the commit is detected, your pipeline starts. You will see the successful stage completion status as soon as the CodeCommit source stage runs.

In the CodeCommit console, choose Code in the navigation pane to view the solution files.

Next, you can watch how the pipeline goes through the LiveTest stage of the deploy and AutomatedLiveTest actions, until it finally reaches the LiveTestApproval action.

At this point, if you check the AWS CloudFormation console, you can see that a new template has been deployed as part of the LiveTest deploy action.

At this point, make sure that the AWS Glue crawlers and the AWS Glue job ran successfully. Also check whether the corresponding databases and external tables have been created in the AWS Glue Data Catalog. Then verify the data using Amazon Athena, as shown following.

Open the AWS Glue console, and choose Databases in the navigation pane. You will see the following databases in the Data Catalog:

Open the Amazon Athena console, and run the following queries. Verify that the record counts are matching.

SELECT count(*) FROM "nycitytaxi_gluedemocicdtest"."data";
SELECT count(*) FROM "nytaxiparquet_gluedemocicdtest"."datalake";

The following shows the raw data:

The following shows the transformed data:

The pipeline pauses the action until the release is approved. After validating the data, manually approve the revision on the LiveTestApproval action on the CodePipeline console.

Add comments as needed, and choose Approve.

The LiveTestApproval stage now appears as Approved on the console.

After the revision is approved, the pipeline proceeds to use the AWS CloudFormation template to destroy the resources that were deployed in the LiveTest deploy action. This helps reduce cost and ensures a clean test environment on every deployment.

Production deployment is the final stage. In this stage, all the resources—AWS Glue crawlers, AWS Glue jobs, Amazon S3 buckets, roles, and so on—are provisioned and deployed to the production environment using the AWS CloudFormation template.

After successfully running the whole pipeline, feel free to experiment with it by changing the source code stored on AWS CodeCommit. For example, if you modify the AWS Glue ETL job to generate an error, it should make the AutomatedLiveTest action fail. Or if you change the AWS CloudFormation template to make its creation fail, it should affect the LiveTest deploy action. The objective of the pipeline is to ensure that all changes deployed to production work as expected.

Conclusion

In this post, you learned how easy it is to implement CI/CD for serverless AWS Glue ETL solutions with AWS developer tools like AWS CodePipeline and AWS CodeBuild at scale. Implementing such solutions can help you accelerate ETL development and testing at your organization.

If you have questions or suggestions, please comment below.

 


Additional Reading

If you found this post useful, be sure to check out Implement Continuous Integration and Delivery of Apache Spark Applications using AWS and Build a Data Lake Foundation with AWS Glue and Amazon S3.

 


About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS enterprise and strategic customers. His interests extend to various technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

 
Luis Caro is a Big Data Consultant for AWS Professional Services. He works with our customers to provide guidance and technical assistance on big data projects, helping them improve the value of their solutions when using AWS.

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Audit Trail Overview

Post Syndicated from Bozho original https://techblog.bozho.net/audit-trail-overview/

As part of my current project (secure audit trail) I decided to make a survey about the use of audit trail “in the wild”.

I haven’t written in detail about this project of mine (unlike with some other projects). Mostly because it’s commercial and I don’t want to use my blog as a direct promotion channel (though I am doing that at the moment, ironically). But the aim of this post is to shed some light on how audit trail is used.

The survey can be found here. The questions are basically: does your current project have audit trail functionality, and if yes, is it protected from tampering. If not – do you think you should have such functionality.

The results are interesting (although with only around 50 respondents).

So more than half of the systems (on which respondents are working) don’t have audit trail. While audit trail is recommended by information security and related standards, it may not find a place in the “busy schedule” of a software project, even though it’s fairly easy to provide a trivial implementation (e.g. I’ve written how to quickly set one up with Hibernate and Spring).

A trivial implementation might do in many cases but if the audit log is critical (e.g. access to sensitive data, performing financial operations etc.), then relying on a trivial implementation might not be enough. In other words – if the sysadmin can access the database and delete or modify the audit trail, then it doesn’t serve much purpose. Hence the next question – how is the audit trail protected from tampering:

And apparently, of the less than 50% of projects with audit trail, around 50% don’t have technical guarantees that the audit trail can’t be tampered with. My guess is it’s more, because people have different understandings of what technical measures are sufficient. E.g. someone may think that digitally signing your log files (or log records) is sufficient, but in fact it isn’t, as whole files (or records) can be deleted (or fully replaced) without a way to detect that. Timestamping can help (and a good audit trail solution should have that), but it doesn’t guarantee the order of events or prevent a malicious actor from deleting or inserting fake ones. And if timestamping is done on a log file level, then any not-yet-timestamped log file is vulnerable to manipulation.

I’ve written about event logs before and their two flavours – event sourcing and audit trail. An event log can effectively be considered audit trail, but you’d need additional security to avoid the problems mentioned above.

So, let’s see what would various levels of security and usefulness of audit logs look like. There are many papers on the topic (e.g. this and this), and they often go into the intricate details of how logging should be implemented. I’ll try to give an overview of the approaches:

  • Regular logs – rely on regular INFO log statements in the production logs to look for hints of what has happened. This may be okay, but it is harder to find evidence (as there is non-auditable data in those log files as well), and it’s not very secure – usually logs are collected (e.g. with Graylog) and whoever has access to the log collector’s database (or search engine in the case of Graylog) can manipulate the data and not be caught
  • Designated audit trail – whether it’s stored in the database or in log files. It has the proper business-event level granularity, but again doesn’t prevent or detect tampering. With lower risk systems that may be perfectly okay.
  • Timestamped logs – whether it’s log files or (harder to implement) database records. Timestamping is good, but if it’s not an external service, a malicious actor can get access to the local timestamping service and issue fake timestamps to re-timestamp tampered files. Even if the timestamping is not compromised, whole entries can be deleted. The fact that they are missing can sometimes be deduced based on other factors (e.g. hour of rotation), but regularly verifying that is extra effort and may not always be feasible.
  • Hash chaining – each entry (or sequence of log files) could be chained (just as blockchain transactions) – the next one having the hash of the previous one (a minimal sketch appears after this list). This is a good solution (whether it’s local, external or 3rd party), but it has the risk of someone modifying or deleting a record, getting your entire chain and re-hashing it. All the checks will pass, but the data will not be correct.
  • Hash chaining with anchoring – the head of the chain (the hash of the last entry/block) could be “anchored” to an external service that is outside the capabilities of a malicious actor. Ideally, a public blockchain, alternatively – paper, a public service (twitter), email, etc. That way a malicious actor can’t just rehash the whole chain, because any check against the external service would fail.
  • WORM storage (write once, read many). You could send your audit logs almost directly to WORM storage, where it’s impossible to replace data. However, that is not ideal, as WORM storage can be slow and expensive. For example, AWS Glacier has rather long retrieval times, which makes searching through recent data impractical. It’s actually cheaper than S3, for example, and you can have expiration policies. But having to support your own WORM storage is expensive. It is a good idea to eventually send the logs to WORM storage, but “fresh” audit trail should probably not be “archived” so that it’s searchable and some actionable insight can be gained from it.
  • All-in-one – applying all of the above “just in case” may be unnecessary for every project out there, but that’s what I decided to do at LogSentinel. Business-event granularity with timestamping, hash chaining, anchoring, and eventually putting to WORM storage – I think that provides both security guarantees and flexibility.
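
For illustration, here is a minimal sketch of the hash chaining idea in Python (this is the general technique only, not LogSentinel’s implementation): each record stores the hash of the previous one, so verification breaks if any record is modified, deleted, or reordered.

# Minimal hash chaining sketch (illustrative only)
import hashlib
import json

def chain_entries(entries, seed="genesis"):
    # Each record stores the hash of the previous record plus its own hash
    chained = []
    prev_hash = hashlib.sha256(seed.encode()).hexdigest()
    for entry in entries:
        record = {"data": entry, "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        chained.append(record)
        prev_hash = record["hash"]
    return chained

def verify_chain(chained, seed="genesis"):
    # Recompute every hash; a modified, deleted, or reordered record breaks the chain
    prev_hash = hashlib.sha256(seed.encode()).hexdigest()
    for record in chained:
        expected = hashlib.sha256(
            json.dumps({"data": record["data"], "prev_hash": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = chain_entries(["user 42 viewed record 7", "user 42 exported report"])
assert verify_chain(log)
# log[-1]["hash"] is the head of the chain - the value you would anchor externally

As the last two bullets note, the chain on its own doesn’t stop someone from re-hashing everything; anchoring the head externally (and eventually shipping to WORM storage) is what closes that gap.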

I hope the overview is useful and the results from the survey shed some light on how this aspect of information security is underestimated.

The post Audit Trail Overview appeared first on Bozho's tech blog.

snallygaster – Scan For Secret Files On HTTP Servers

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/04/snallygaster-scan-for-secret-files-on-http-servers/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


snallygaster is a Python-based tool that can help you to scan for secret files on HTTP servers, files that are accessible that shouldn’t be public and can pose a security risk.

Typical examples include publicly accessible git repositories, backup files potentially containing passwords or database dumps. In addition, it contains a few checks for other security vulnerabilities.

snallygaster HTTP Secret File Scanner Features

This is an overview of the tests provided by snallygaster.

Read the rest of snallygaster – Scan For Secret Files On HTTP Servers now! Only available at Darknet.

The DMCA and its Chilling Effects on Research

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/the_dmca_and_it.html

The Center for Democracy and Technology has a good summary of the current state of the DMCA’s chilling effects on security research.

To underline the nature of chilling effects on hacking and security research, CDT has worked to describe how tinkerers, hackers, and security researchers of all types both contribute to a baseline level of security in our digital environment and, in turn, are shaped themselves by this environment, most notably when things they do upset others and result in threats, potential lawsuits, and prosecution. We’ve published two reports (sponsored by the Hewlett Foundation and MacArthur Foundation) about needed reforms to the law and the myriad of ways that security research directly improves people’s lives. To get a more complete picture, we wanted to talk to security researchers themselves and gauge the forces that shape their work; essentially, we wanted to “take the pulse” of the security research community.

Today, we are releasing a third report in service of this effort: “Taking the Pulse of Hacking: A Risk Basis for Security Research.” We report findings after having interviewed a set of 20 security researchers and hackers — half academic and half non-academic — about what considerations they take into account when starting new projects or engaging in new work, as well as to what extent they or their colleagues have faced threats in the past that chilled their work. The results in our report show that a wide variety of constraints shape the work they do, from technical constraints to ethical boundaries to legal concerns, including the DMCA and especially the CFAA.

Note: I am a signatory on the letter supporting unrestricted security research.

Let’s stop talking about password strength

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/lets-stop-talking-about-password.html

Picture from EFF — CC-BY license

Near the top of most security recommendations is to use “strong passwords”. We need to stop doing this.

Yes, weak passwords can be a problem. If a website gets hacked, weak passwords are easier to crack. It’s not that this is wrong advice.

On the other hand, it’s not particularly good advice, either. It’s far down the list of important advice that people need to remember. “Weak passwords” are nowhere near the risk of “password reuse”. When your Facebook or email account gets hacked, it’s because you used the same password across many websites, not because you used a weak password.

Important websites, where the strength of your password matters, already take care of the problem. They use strong, salted hashes on the backend to protect the password. On the frontend, they force passwords to be a certain length and a certain complexity. Maybe the better advice is to not trust any website that doesn’t enforce stronger passwords (minimum of 8 characters consisting of both letters and non-letters).

To some extent, this “strong password” advice has become obsolete. A decade ago, websites had poor protection (MD5 hashes) and no enforcement of complexity, so it was up to the user to choose strong passwords. Now that important websites have changed their behavior, such as using bcrypt, there is less onus on the user.
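
As a rough illustration of what “taking care of the problem on the backend” looks like, here is a minimal sketch using the bcrypt Python package (assuming it is installed; this is generic example code, not any particular website’s implementation):

# Minimal salted-hash sketch with bcrypt (illustrative only)
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() produces a random salt that is embedded in the returned hash,
    # so two users with the same password still get different stored values
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def check_password(password: str, stored_hash: bytes) -> bool:
    # checkpw() reads the salt back out of stored_hash, so nothing else needs storing
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", stored)
assert not check_password("hunter2", stored)

The slow, salted hash is what makes cracking a stolen database expensive, and it is entirely the site’s job rather than the user’s, which is the point of the paragraph above.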

But the real issue here is that “strong password” advice reflects the evil, authoritarian impulses of the infosec community. Instead of measuring insecurity in terms of costs vs. benefits, risks vs. rewards, we insist that it’s an issue of moral weakness. We pretend that flaws happen because people are greedy, lazy, and ignorant. We pretend that security is its own goal, a benefit we should achieve, rather than a cost we must endure.

We like giving moral advice because it’s easy: just be “stronger”. Discussing “password reuse” is more complicated, forcing us to discuss password managers, writing down passwords on paper, the fact that it’s okay to reuse passwords for crappy websites you don’t care about, and so on.

What I’m trying to say is that the moral weakness here is us. Rather than giving pertinent advice, we give lazy advice. We give advice that victim-shames people for being weak while pretending that we are strong.

So stop telling people to use strong passwords. It’s crass advice on your part and largely unhelpful for your audience, distracting them from the more important things.

Cybersecurity Insurance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/cybersecurity_i_1.html

Good article about how difficult it is to insure an organization against Internet attacks, and how expensive the insurance is.

Companies like retailers, banks, and healthcare providers began seeking out cyberinsurance in the early 2000s, when states first passed data breach notification laws. But even with 20 years’ worth of experience and claims data in cyberinsurance, underwriters still struggle with how to model and quantify a unique type of risk.

“Typically in insurance we use the past as prediction for the future, and in cyber that’s very difficult to do because no two incidents are alike,” said Lori Bailey, global head of cyberrisk for the Zurich Insurance Group. Twenty years ago, policies dealt primarily with data breaches and third-party liability coverage, like the costs associated with breach class-action lawsuits or settlements. But more recent policies tend to accommodate first-party liability coverage, including costs like online extortion payments, renting temporary facilities during an attack, and lost business due to systems failures, cloud or web hosting provider outages, or even IT configuration errors.

In my new book — out in September — I write:

There are challenges to creating these new insurance products. There are two basic models for insurance. There’s the fire model, where individual houses catch on fire at a fairly steady rate, and the insurance industry can calculate premiums based on that rate. And there’s the flood model, where an infrequent large-scale event affects large numbers of people — but again at a fairly steady rate. Internet+ insurance is complicated because it follows neither of those models but instead has aspects of both: individuals are hacked at a steady (albeit increasing) rate, while class breaks and massive data breaches affect lots of people at once. Also, the constantly changing technology landscape makes it difficult to gather and analyze the historical data necessary to calculate premiums.

BoingBoing article.

Artefacts in the classroom with Museum in a Box

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/museum-in-a-box/

Museum in a Box bridges the gap between museums and schools by creating a more hands-on approach to conservation education through 3D printing and digital making.

Artefacts in the classroom with Museum in a Box || Raspberry Pi Stories


Fantastic collections and where to find them

Large, impressive statues are truly a sight to be seen. Take for example the 2.4m Hoa Hakananai’a at the British Museum. Its tall stature looms over you as you read its plaque to learn of the statue’s journey from Easter Island to the UK under the care of Captain Cook in 1774, and you can’t help but wonder at how it made it here in one piece.

Hoa Hakananai’a Captain Cook British Museum

But unless you live near a big city where museums are plentiful, you’re unlikely to see the likes of Hoa Hakananai’a in person. Instead, you have to content yourself with online photos or videos of world-famous artefacts.

And that only accounts for the objects that are on display: conservators estimate that only approximately 5 to 10% of museums’ overall collections are actually on show across the globe. The rest is boxed up in storage, inaccessible to the public due to risk of damage, or simply due to lack of space.

Museum in a Box

Museum in a Box aims to “put museum collections and expert knowledge into your hand, wherever you are in the world,” through modern maker practices such as 3D printing and digital making. With the help of the ‘Scan the World’ movement, an “ambitious initiative whose mission is to archive objects of cultural significance using 3D scanning technologies”, the Museum in a Box team has been able to print small, handheld replicas of some of the world’s most recognisable statues and sculptures.

Museum in a Box Raspberry Pi

Each 3D print gets NFC tags so it can initiate audio playback from a Raspberry Pi that sits snugly within the laser-cut housing of a ‘brain box’. Thus the print can talk directly to us through the magic of wireless technology, replacing the dense, dry text of a museum plaque with engaging speech.
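
The wiring is conceptually simple. Here is a rough, hypothetical sketch of that loop in Python, assuming the nfcpy library with a USB NFC reader and the omxplayer command-line audio player; the tag IDs and file paths are made up, and the real Museum in a Box software will differ:

# Hypothetical sketch: play an audio clip when a tagged 3D print touches the reader
import subprocess
import nfc  # nfcpy, assumed installed alongside a USB NFC reader

CLIPS = {
    "04a224d2b63a80": "/home/pi/clips/hoa_hakananaia.mp3",    # made-up tag ID
    "04b1f7c2a91c80": "/home/pi/clips/parthenon_frieze.mp3",  # made-up tag ID
}

def on_connect(tag):
    tag_id = tag.identifier.hex()
    clip = CLIPS.get(tag_id)
    if clip:
        subprocess.run(["omxplayer", clip])  # omxplayer: common command-line player on the Pi
    else:
        print("Unknown tag:", tag_id)
    return True  # keep the connection until the object is lifted off the reader

with nfc.ContactlessFrontend("usb") as clf:
    while True:
        clf.connect(rdwr={"on-connect": on_connect})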

Museum in a Box Raspberry Pi

The Museum in a Box team headed by CEO George Oates (featured in the video above) makes use of these 3D-printed figures alongside original artefacts, postcards, and more to bridge the gap between large, crowded, distant museums and local schools. Modeled after the museum handling collections that used to be sent to schools, Museum in a Box is a cheaper, more accessible alternative. Moreover, it not only allows for hands-on learning, but also encourages children to get directly involved by hacking its technology! With NFC technology readily available to the public, students can curate their own collections about their local area, record their own messages, and send their own box-sized museums on to schools in other towns or countries. In this way, Museum in a Box enables students to explore, and expand the reach of, their own histories.

Moving forward

With the technology perfected and interest in the project ever-growing, Museum in a Box has a busy year ahead. Supporting the new ‘Unstacked’ learning initiative, the team will soon be delivering ten boxes to the Smithsonian Libraries. The team has curated two collections specifically for this: an exploration of Asian Pacific American experiences of migration to the USA throughout the 20th century, and a look into the history of science.

Smithsonian Library Museum in a Box Raspberry Pi

The team will also be making a box for the British Museum to support their Iraq Scheme initiative, and another box will be heading to the V&A to support their See Red programme. While primarily installed in the Lansbury Micro Museum, the box will also take to the road to visit the local Spotlight high school.

Museum in a Box at Raspberry Fields

Lastly, by far the most exciting thing the Museum in a Box team will be doing this year — in our opinion at least — is showcasing at Raspberry Fields! This is our brand-new festival of digital making that’s taking place on 30 June and 1 July 2018 here in Cambridge, UK. Find more information about it and get your ticket here.

The post Artefacts in the classroom with Museum in a Box appeared first on Raspberry Pi.

Popular Torrent Site Loses Domain After Copyright Complaint

Post Syndicated from Ernesto original https://torrentfreak.com/popular-torrent-site-loses-domain-after-copyright-complaint-180409/

With millions of visitors per month, Yggtorrent is one of the largest torrent sites on the Internet.

Catering to a French audience, it’s not widely known everywhere, but in France, it’s getting close to a spot among the 100 most visited sites in the country.

Yggtorrent is not the typical torrent indexer. It sees itself as a community instead and has a dedicated tracker, something that’s quite rare these days. The site is really only a few months old and filled the gap T411 left behind when it closed last year.

Its popularity hasn’t gone unnoticed by copyright holders either. In addition to sending thousands of DMCA notices, local anti-piracy group SACEM went a step further a few weeks ago, asking Yggtorrent’s domain registrar Internet.bs for help.

In a letter sent on behalf of SACEM, BrandAnalytic pointed out that the torrent site is offering copyrighted content without permission from the owners, thereby violating the law.

“This contravening domain name provides users with copyright-protected works without any express or tacit permission of the societies or their authors, composers and publishers,” the complaint reads.

BrandAnalytic/SACEM’s complaint

Strangely enough, the letter also accuses the site of phishing. As evidence, BrandAnalytic sent a screenshot of the site’s registration page while mentioning that it automatically installs cookies on users’ computers.

Since Yggtorrent uses a Whois privacy service, BrandAnalytic says it can’t identify the owners. They, therefore, ask Internet.bs to step in and take the domain offline.

“As you are the Registrar of this contravening domain name, we count on your prompt and amicable collaboration to remove it from the global domain tree,” BrandAnalytic writes.

The complaint was sent late February and Internet.bs forwarded it to the torrent site at the time, so it could respond appropriately. However, Yggtorrent did not respond at all.

After a reminder, the registrar decided to put the torrent site’s .com domain name on hold a few days ago, which means that it became inaccessible.

TorrentFreak spoke to an operator of Yggtorrent who explains that the site receives thousands of DMCA complaints and that it’s impossible to answer them all. They’ll now leave the .com domain behind and move to a new one, Yggtorrent.is.

Instead of using Internet.bs as registrar, the new domain name was purchased through Njalla, the privacy-oriented domain registration service that was founded by former Pirate Bay spokesperson Peter Sunde.

“Now, we know that we should not use internet.bs anymore. This is not the first time they suspend a domain name like this. It happened to Extratorrent in the past.

“We use Njalla right now, it’s safe,” Yggtorrent’s operator adds.

While the site is indeed back online, older torrents may not function as usual, as the tracker on the .com domain is no longer accessible. The site therefore recommends that users update the tracker address manually to get them going again.
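Most desktop clients let users edit the tracker URL on a per-torrent basis. For anyone with a folder of saved .torrent files, a short script can do the rewriting in bulk. The sketch below is purely illustrative and not something Yggtorrent publishes: it assumes the third-party bencode.py package (imported as bencodepy) and uses a placeholder announce URL rather than any real tracker address.

    # Illustrative sketch: point saved .torrent files at a new announce URL.
    # Assumes: pip install bencode.py; the URL below is a placeholder.
    import glob
    import bencodepy

    NEW_ANNOUNCE = b"http://tracker.example.org:8080/announce"  # placeholder

    for path in glob.glob("*.torrent"):
        with open(path, "rb") as handle:
            data = bencodepy.decode(handle.read())
        data[b"announce"] = NEW_ANNOUNCE   # primary tracker
        data.pop(b"announce-list", None)   # drop stale backup trackers
        with open(path, "wb") as handle:
            handle.write(bencodepy.encode(data))
        print(f"updated {path}")

Torrents already loaded into a client still need their tracker changed in the client itself, since clients keep their own copy of this metadata.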

Yggtorrent, which came under new management recently, appears to come out of this issue relatively unscathed. However, being in the crosshairs of SACEM is not without risk. The organization previously took out What.cd and Zone-Telechargement, among others.

Yggtorrent’s homepage

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Piracy & Money Are Virtually Inseparable & People Probably Don’t Care Anymore

Post Syndicated from Andy original https://torrentfreak.com/piracy-money-are-virtually-inseparable-people-probably-dont-care-anymore-180408/

Long before peer-to-peer file-sharing networks were a twinkle in developers’ eyes, piracy of software and games flourished under the radar. Cassettes, floppy discs and CDs were the physical media of choice, while the BBS became the haunt of the need-it-now generation.

Sharing was the name of the game. When someone had game ‘X’ on tape, it was freely shared with friends and associates because when they got game ‘Y’, the favor had to be returned. The content itself became the currency and for most, the thought of asking for money didn’t figure into the equation.

Even when P2P networks first took off, money wasn’t really a major part of the equation. Sure, the people running Kazaa and the like were generating money from advertising but for millions of users, sharing content between friends and associates was still the name of the game.

Even when the torrent site scene began to gain traction, money wasn’t the driving force. Everything was so new that developers were much more concerned with getting half-written, half-broken tracker scripts to work than anything else. Having people care enough to simply visit the sites and share something with others was the real payoff. Ironically, it was a reward that money couldn’t buy.

But as the scene began to develop, so did the influx of minor and even major businessmen. The ratio economy of the private tracker scene meant that bandwidth could essentially be converted to cash, something which gave site operators revenue streams that had never previously existed. That was both good and bad for the scene.

The fact is that running a torrent site costs money and if time is factored in too, that becomes lots of money. If site admins have to fund everything themselves, a tipping point is eventually reached. If the site becomes unaffordable, it closes, meaning that everyone loses. So, by taking in some donations or offering users other perks in exchange for financial assistance, the whole thing remains viable.

Counter-intuitively, the success of such a venture then becomes the problem, at least as far as maintaining the old “sharing is caring” philosophy goes. A well-run private site, with enthusiastic donors, has the potential to bring in quite a bit of cash. Initially, the excess can be saved away for that rainy day when things aren’t so good. Having a few thousand in the bank when chaos rains down is rarely a bad thing.

But what happens when a site does really well and is making money hand over fist? What happens when advertisers on public sites begin to queue up, offering lots of cash to get involved? Is a site operator really expected to turn down the donations and tell the advertisers to go away? Amazingly, some do. Less amazingly, most don’t.

Although there are some notable exceptions, particularly in the niche private tracker scene, these days most ‘pirate’ sites are in it for the money.

In the current legal climate, some probably consider this their well-earned ‘danger money’ yet others are so far away from the sharing ethos it hurts. Quite often, these sites are incapable of taking in a new member due to alleged capacity issues yet a sizeable ‘donation’ miraculously solves the problem and gets the user in. It’s like magic.

As it happens, two threads on Reddit this week sparked this little rant. Both discuss whether someone should consider paying $20 and 37 euros respectively to get invitations to a pair of torrent sites.

Ask a purist and the answer is always ‘NO’, whether that’s buying an invitation from the operator of a torrent site or from someone selling invites for profit.

Aside from the fact that no one on these sites has paid content owners a dime, sites that demand cash for entry are doing so for one reason and one reason only – profit. Ridiculous when it’s the users of those sites that are paying to distribute the content.

On the other hand, others see no wrong in it.

They argue that paying a relatively small amount to access huge libraries of content is preferable to spending hundreds of dollars on a legitimate service that doesn’t carry all the content they need. Others don’t bother making any excuses at all, spending sizable sums with pirate IPTV/VOD services that dispose of sharing morals by engaging in a different business model altogether.

But the bottom line, whether we like it or not, is that money and Internet piracy have become so intertwined, so enmeshed in each other’s existence, that it’s become virtually impossible to separate them.

Even those running the handful of non-profit sites still around today would be forced to reconsider if they had to start all over again in today’s climate. The risk model is entirely different and quite often, only money tips those scales.

The same holds true for the people putting together the next big streaming portals. These days it’s about getting as many eyeballs on content as possible, making the money, and getting out the other end unscathed.

This is not what most early pirates envisioned. This is certainly not what the early sharing masses wanted. Yet arguably, through the influx of business people and the desire to generate profit among the general population, the pirating masses have never had it so good.

As revealed in a recent study, volumes of piracy are on the up and it is now possible – still possible – to access almost any item of content on pirate sites, despite the so-called “follow the money” approach championed by the authorities.

While ‘Sharing is Caring’ still lives today, it’s slowly being drowned out and at this point, there’s probably no way back. The big question is whether anyone cares anymore and the answer to that is “probably not”.

So, if the driving force isn’t sharing or love, it’ll probably have to be money. And that works everywhere else, doesn’t it?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Spooky Torrent Warns EZTV Users About “Huge Security Risk”

Post Syndicated from Ernesto original https://torrentfreak.com/spooky-torrent-warns-eztv-users-about-huge-security-risk-180408/

For more than a decade, EZTV has been a widely recognized brand among BitTorrent users, known as one of the main TV-distribution groups.

While the original EZTV shut down following a hostile takeover, the people who took over are still serving torrents to millions of people every month.

Generally speaking, EZTV takes releases from outside encoders, which it then distributes under its own nametag. It’s been like this for years and has never caused any real problems.

Last week, however, a disturbing release was added to the site, sending a message to EZTV users. What appeared to be a regular release of Lucifer S03E19 turned into something darker.

Ten minutes into the episode, a red warning appears, telling viewers that EZTV.ag is a huge security risk.

Huge Security Risk

Throughout the rest of the episode, a few dozen IP-addresses appear plastered across the screen. Needless to say, this makes the program rather unwatchable.

According to the earlier message, these IP-addresses are “used on EZTV.ag.” This seems to suggest that the website has a leak somewhere unless it refers to IP-addresses of downloaders, which are public anyway.

IP-addresses

It is hard to grasp what’s really going on here, and there is no direct evidence that the site itself has been breached. Not directly, at least.

At the end of the episode, a final message appears, adding to the intrigue. The message comes from the encoder DeXoX and offers up a complete IP-address database, email addresses of registered EZTV users, and more.

DeXoX

Again, we have not been able to verify the validity of these claims but it’s certainly not good PR for EZTV. The spooky torrent has been downloaded by thousands of people already and is still listed on the site several days after first appearing.

We are not familiar with DeXoX, but it appears that the person behind the handle is not a fan of EZTV.ag, to say the least.

It remains unclear how the torrent was added to the site. It could be that the EZTV site has indeed been breached in some way, or that DeXoX has access to the site from which EZTV sources its material. In any event, neither the release page nor the site itself carries any warning; the message appears only in the video.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

If YouTube-Ripping Sites Are Illegal, What About Tools That Do a Similar Job?

Post Syndicated from Andy original https://torrentfreak.com/if-youtube-ripping-sites-are-illegal-what-about-tools-that-do-a-similar-job-180407/

In 2016, the International Federation of the Phonographic Industry published research which claimed that half of 16 to 24-year-olds use stream-ripping tools to copy music from sites like YouTube.

While this might not have surprised those who regularly participate in the activity, IFPI said that volumes had become so vast that stream-ripping had overtaken pirate site music downloads. That was a big statement.

Probably not coincidentally, just two weeks later IFPI, RIAA, and BPI announced legal action against the world’s largest YouTube ripping site, YouTube-MP3.

“YTMP3 rapidly and seamlessly removes the audio tracks contained in videos streamed from YouTube that YTMP3’s users access, converts those audio tracks to an MP3 format, copies and stores them on YTMP3’s servers, and then distributes copies of the MP3 audio files from its servers to its users in the United States, enabling its users to download those MP3 files to their computers, tablets, or smartphones,” the complaint read.

The labels sued YouTube-MP3 for direct infringement, contributory infringement, vicarious infringement, inducing others to infringe, plus circumvention of technological measures on top. The case was big and one that would’ve been intriguing to watch play out in court, but that never happened.

A year later, in September 2017, YouTube-MP3 settled out of court. No details were made public but YouTube-MP3 apparently took all the blame and the court was asked to rule in favor of the labels on all counts.

This certainly gave the impression that what YouTube-MP3 did was illegal and a strong message was sent out to other companies thinking of offering a similar service. However, other onlookers clearly saw the labels’ lawsuit as something to be studied and learned from.

One of those was the operator of NotMP3downloader.com, a site that offers Free MP3 Recorder for YouTube, a tool offering similar functionality to YouTube-MP3 while supposedly avoiding the same legal pitfalls.

Part of that involves audio being processed on the user’s machine, not by stream-ripping as such but by stream-recording. A subtle difference, perhaps, but the site’s operator thinks it’s important.

“After examining the claims made by the copyright holders against youtube-mp3.org, we identified that the charges were based on the three main points. [None] of them are applicable to our product,” he told TF this week.

The first point involves YouTube-MP3’s acts of conversion, storage and distribution of content it had previously culled from YouTube. Copies of unlicensed tracks were clearly held on its own servers, a potent direct infringement risk.

“We don’t have any servers to download, convert or store a copyrighted or any other content from YouTube. Therefore, we do not violate any law or prohibition implied in this part,” NotMP3downloader’s operator explains.

Then there’s the act of “stream-ripping” itself. While YouTube-MP3 downloaded digital content from YouTube using its own software, NotMP3downloader claims to do things differently.

“Our software doesn’t download any streaming content directly, but only launches a web browser with the video specified by a user. The capturing happens from a local machine’s sound card and doesn’t deal with any content streamed through a network,” its operator notes.

This part also seems quite important. YouTube-MP3 was accused of unlawfully circumventing technological measures implemented by YouTube to prevent people downloading or copying content. By opening up YouTube’s own website and viewing content in the way the site demands, NotMP3downloader says it does not “violate the website’s integrity nor performs direct download of audio or video files.”
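To make the distinction concrete, the ‘stream-recording’ approach the developers describe boils down to capturing whatever audio the operating system is already playing, rather than fetching YouTube’s files. The fragment below is a rough illustration of that general idea, not NotMP3downloader’s actual code; it assumes Python with the sounddevice and scipy packages and a loopback or ‘stereo mix’ device set as the default input.

    # Rough illustration of local audio capture ("stream-recording"),
    # not NotMP3downloader's actual implementation.
    # Assumes: pip install sounddevice scipy, plus a loopback or
    # "stereo mix" device selected as the default input.
    import sounddevice as sd
    from scipy.io import wavfile

    SAMPLE_RATE = 44100   # samples per second
    DURATION = 60         # seconds to record

    recording = sd.rec(int(DURATION * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE,
                       channels=2)
    sd.wait()             # block until the recording finishes
    wavfile.write("capture.wav", SAMPLE_RATE, recording)

Whether capturing locally rather than server-side actually changes the legal analysis is, of course, exactly what the arguments that follow are about.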

Like the Betamax video recorder before it that enabled recording from analog TV, NotMP3downloader enables a user to record a YouTube stream on their local machine. This, its makers claim, means the software is completely legal and defeats all the claims made by the labels in the YouTube-MP3 lawsuit.

“What YouTube does is broadcasting content through the Internet. Thus, there is nothing wrong if users are allowed to watch such content later as they may want,” the NotMP3downloader team explain.

“It is worth noting that in Sony Corp. of America v. Universal City Studios, Inc. (464 U.S. 417) the United States Supreme Court held that such practice, also known as time-shifting, was lawful representing fair use under the US Copyright Act and causing no substantial harm to the copyright holder.”

While software that can record video and sound locally is nothing new, the developments in the YouTube-MP3 case and this response from NotMP3downloader raise interesting questions.

We put some of them to none other than former RIAA Executive Vice President, Neil Turkewitz, who now works as President of Turkewitz Consulting Group.

Turkewitz stressed that he doesn’t speak for the industry as a whole or indeed the RIAA but it’s clear that his passion for protecting creators persists. He told us that in this instance, reliance on the Betamax decision is “misplaced”.

“The content is different, the activity is different, and the function is different,” Turkewitz told TF.

“The Sony decision must be understood in its context — the time shifting of audiovisual programming being broadcast from point to multipoint. The making available of content by a point-to-point interactive service like YouTube isn’t broadcasting — or at a minimum, is not a form of broadcasting akin to that considered by the Supreme Court in Sony.

“More fundamentally, broadcasting (right of communication to the public) is one of only several rights implicated by the service. And of course, issues of liability will be informed by considerations of purpose, effect and perceived harm. A court’s judgment will also be affected by whether it views the ‘innovation’ as an attempt to circumvent the requirements of law. The decision of the Supreme Court in ABC v. Aereo is certainly instructive in that regard.”

And there are other issues too. While YouTube itself is yet to take any legal action to deter users from downloading rather than merely streaming content, its terms of service are quite specific and seem to cover all eventualities.

“[Y]ou agree not to access Content for any reason other than your personal, non-commercial use solely as intended through and permitted by the normal functionality of the Service, and solely for Streaming,” YouTube’s ToS reads.

“‘Streaming’ means a contemporaneous digital transmission of the material by YouTube via the Internet to a user operated Internet enabled device in such a manner that the data is intended for real-time viewing and not intended to be downloaded (either permanently or temporarily), copied, stored, or redistributed by the user.

“You shall not copy, reproduce, distribute, transmit, broadcast, display, sell, license, or otherwise exploit any Content for any other purposes without the prior written consent of YouTube or the respective licensors of the Content.”

In this respect, it seems that a user doing anything but real-time streaming of YouTube content is breaching YouTube’s terms of service. The big question then, of course, is whether providing a tool specifically for that purpose represents an infringement of copyright.

The people behind Free MP3 Recorder believe that the “scope of application depends entirely on the end users’ intentions” which seems like a fair argument at first view. But, as usual, copyright law is incredibly complex and there are plenty of opposing views.

We asked the BPI, which took action against YouTubeMP3, for its take on this type of tool. The official response was “No comment” which doesn’t really clarify the position, at least for now.

Needless to say, the Betamax decision – relevant or not – doesn’t apply in the UK. But that only adds more parameters into the mix – and perhaps more opportunities for lawyers to make money arguing for and against tools like this in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.