
EU Piracy Report Suppression Raises Questions Over Transparency

Post Syndicated from Andy original https://torrentfreak.com/eu-piracy-report-suppression-raises-questions-transparency-170922/

Over the years, copyright holders have made hundreds of statements against piracy, mainly that it risks bringing industries to their knees through widespread and uncontrolled downloading from the Internet.

But while TV shows like Game of Thrones have been downloaded millions of times, the big question (one could argue the only really important question) is whether this activity actually affects sales. After all, if piracy has a massive negative effect on industry, something needs to be done. If it does not, why all the panic?

Quite clearly, the EU Commission wanted to find out the answer to this potential multi-billion dollar question when it made the decision to invest a staggering 360,000 euros in a dedicated study back in January 2014.

With a final title of ‘Estimating displacement rates of copyrighted content in the EU’, the completed study runs to an intimidating 307 pages. Shockingly, until this week, few people even knew it existed because, for reasons unknown, the EU Commission decided not to release it.

However, thanks to the sheer persistence of Member of the European Parliament Julia Reda, the public now has a copy and it contains quite a few interesting conclusions. But first, some background.

The study uses data from 2014 and covers four broad types of content: music, audio-visual material, books and videogames. Unlike other reports, the study also considered live music attendance and cinema visits in the key regions of Germany, the UK, Spain, France, Poland and Sweden.

On average, 51% of adults and 72% of minors in the EU were found to have illegally downloaded or streamed some form of creative content, with Poland and Spain coming out as the worst offenders. However, here’s the kicker.

“In general, the results do not show robust statistical evidence of displacement of sales by online copyright infringements,” the study notes.

“That does not necessarily mean that piracy has no effect but only that the statistical analysis does not prove with sufficient reliability that there is an effect.”

For a study commissioned by the EU with huge sums of public money, this is a potentially damaging conclusion, not least for the countless industry bodies that lobby day in, day out, for tougher copyright law based on the “fact” that piracy is damaging to sales.

That being said, the study did find that certain sectors can be affected by piracy, notably recent top movies.

“The results show a displacement rate of 40 per cent which means that for every ten recent top films watched illegally, four fewer films are consumed legally,” the study notes.

“People do not watch many recent top films a second time but if it happens, displacement is lower: two legal consumptions are displaced by every ten illegal second views. This suggests that the displacement rate for older films is lower than the 40 per cent for recent top films. All in all, the estimated loss for recent top films is 5 per cent of current sales volumes.”
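Taken literally, the quoted rates are just a linear relationship; a throwaway sketch (the function name and the ten-view figures are mine, the rates are the study's):

```python
def displaced_sales(illegal_views: int, displacement_rate: float) -> float:
    # Number of legal transactions lost to a given volume of illegal views.
    return illegal_views * displacement_rate

assert displaced_sales(10, 0.4) == 4.0  # recent top films: 4 lost per 10 views
assert displaced_sales(10, 0.2) == 2.0  # second viewings: 2 lost per 10 views
```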

But while there is some negative effect on the movie industry, others can benefit. The study found that piracy had a slightly positive effect on the videogames industry, suggesting that those who play pirate games eventually become buyers of official content.

On top of displacement rates, the study also looked at the public’s willingness to pay for content, to assess whether price influences pirate consumption. Interestingly, the industry that had the most displaced sales – the movie industry – had the greatest number of people unhappy with its pricing model.

“Overall, the analysis indicates that for films and TV-series current prices are higher than 80 per cent of the illegal downloaders and streamers are willing to pay,” the study notes.

For other industries, where sales were not found to have been displaced or were positively affected by piracy, consumer satisfaction with pricing was greatest.

“For books, music and games, prices are at a level broadly corresponding to the
willingness to pay of illegal downloaders and streamers. This suggests that a
decrease in the price level would not change piracy rates for books, music and
games but that prices can have an effect on displacement rates for films and
TV-series,” the study concludes.

So, it appears that products that are priced fairly do not suffer significant displacement from piracy. Those that are priced too high, on the other hand, can expect to lose some sales.

Now that it’s been released, the findings of the study should help to paint a more comprehensive picture of the infringement climate in the EU, while laying to rest some of the wild claims of the copyright lobby. That being said, it shouldn’t have taken the persistent efforts of Julia Reda to bring them to light.

“This study may have remained buried in a drawer for several more years to come if it weren’t for an access to documents request I filed under the European Union’s Freedom of Information law on July 27, 2017, after having become aware of the public tender for this study dating back to 2013,” Reda explains.

“I would like to invite the Commission to become a provider of more solid and timely evidence to the copyright debate. Such data that is valuable both financially and in terms of its applicability should be available to everyone when it is financed by the European Union – it should not be gathering dust on a shelf until someone actively requests it.”

The full study can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MetalKettle Addon Repository Vulnerable After GitHub ‘Takeover’

Post Syndicated from Ernesto original https://torrentfreak.com/metalkettle-after-github-takeover-170915/

A few weeks ago MetalKettle, one of the most famous Kodi addon developers of recent times, decided to call it quits.

Worried about potential legal risks, he saw no other option than to halt all development of third-party Kodi addons.

Soon after this announcement, the developer proceeded to remove the GitHub account which was used to distribute his addons. However, he didn’t realize that this might not have been the best decision.

As it turns out, GitHub allows outsiders to re-register names of deleted accounts. While this might not be a problem in most cases, it can be disastrous when the accounts are connected to Kodi add-ons that are constantly pinging for new updates.

In essence, it means that the person who re-registered the GitHub account can load content onto the boxes of people who still have the MetalKettle repo installed. Quite a dangerous prospect, something MetalKettle realizes as well.

“Someone has re-registered metalkettle on github. So in theory could pollute any devices with the repo still installed,” he warned on Twitter.

“Warning : if any users have a metalkettle repo installed on their systems or within a build – please delete ASAP,” he added.

MetalKettle warning

It’s not clear what the intentions of the new MetalKettle user are on GitHub, if he or she has any at all. But, people should be very cautious and probably remove it from their systems.

The real MetalKettle, meanwhile, alerted TVAddons to the situation and they have placed the repository on their Indigo blacklist of banned software. This effectively disables the repository on devices with Indigo installed.

GitHub, for their part, may want to reconsider their removal policy. Perhaps it would be smarter not to make old usernames available for registration, at least not for a while, as this is clearly a vulnerability.

This is also shown by another Kodi repo controversy that appeared earlier today. A GitHub account that was reportedly deleted earlier resurfaced, pushing a new version of the Exodus addon and other sources.

According to some, the GitHub account is operated by the original Exodus developers and perfectly safe, but others warn that the name was re-registered in bad faith.


Self-Driving Cars Should Be Open Source

Post Syndicated from Bozho original https://techblog.bozho.net/self-driving-cars-open-source/

Self-driving cars are (will be) the pinnacle of consumer product automation – robot vacuum cleaners, smart fridges and TVs are just toys compared to self-driving cars, both in terms of technology and in terms of impact. We aren’t yet at level 5 self-driving cars, but they are around the corner.

But as software engineers we know how fragile software is. And self-driving cars are basically software, so we can see all the risks involved in putting our lives in the hands of anonymous (from our point of view) developers and unknown (to us) processes and quality standards. One may argue that this has been the case for every consumer product ever, but with software it’s different – software is far more complex than anything else.

So I have an outrageous proposal – self-driving cars should be open source. We have to be able to verify and trust the code that’s navigating our helpless bodies around the highways. Not only that, but we have to be able to verify if it is indeed that code that is currently running in our car, and not something else.

In fact, let me extend that – all cars should be open source. Before you say “but that will ruin the competitive advantage of manufacturers and will be deadly for business”, I don’t actually care how they trained their neural networks, or what their datasets are. That’s actually the secret sauce of the self-driving car and in my view it can remain proprietary and closed. What I’d like to see open-sourced is everything else. (Under what license – I’d be fine to even have it copyrighted and so not “real” open source, but that’s a separate discussion).

Why? This story about remote carjacking via the entertainment system of a Jeep is a scary example. Attackers who reverse engineered the car’s software could remotely control everything in the car. Why did that happen? Well, I guess it’s complicated, and we have to watch the DEFCON talk.

And also read the paper, but a paragraph in wikipedia about the CAN bus used in most cars gives us a hint:

CAN is a low-level protocol and does not support any security features intrinsically. There is also no encryption in standard CAN implementations, which leaves these networks open to man-in-the-middle packet interception. In most implementations, applications are expected to deploy their own security mechanisms; e.g., to authenticate incoming commands or the presence of certain devices on the network. Failure to implement adequate security measures may result in various sorts of attacks if the opponent manages to insert messages on the bus. While passwords exist for some safety-critical functions, such as modifying firmware, programming keys, or controlling antilock brake actuators, these systems are not implemented universally and have a limited number of seed/key pairs.
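As a rough illustration of what "deploy their own security mechanisms" could mean in practice, here is a minimal sketch of authenticating a CAN payload with a truncated HMAC. This is entirely hypothetical – real automotive schemes such as AUTOSAR SecOC are considerably more involved, and the key, frame layout, and function names here are mine:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-ecu-key"  # hypothetical pre-shared key between ECUs

def sign_frame(can_id: int, payload: bytes) -> bytes:
    # Classic CAN frames carry at most 8 data bytes; reserving 4 of them for
    # a truncated MAC leaves only 4 for payload -- one reason retrofitting
    # security onto CAN is hard.
    msg = can_id.to_bytes(4, "big") + payload
    tag = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()[:4]
    return payload + tag

def verify_frame(can_id: int, data: bytes) -> bool:
    payload, tag = data[:-4], data[-4:]
    expected = hmac.new(SECRET_KEY, can_id.to_bytes(4, "big") + payload,
                        hashlib.sha256).digest()[:4]
    return hmac.compare_digest(tag, expected)

frame = sign_frame(0x244, b"\x01\x02")
assert verify_frame(0x244, frame)
assert not verify_frame(0x245, frame)  # a frame replayed under another ID fails
```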

I don’t know in what world it makes sense to even have a link between the entertainment system and the low-level network that operates the physical controls. As apparent from the talk, the two systems are supposed to be air-gapped, but in reality they aren’t.

Rookie mistakes abounded – an unauthenticated “execute” method, running as root, unsigned firmware, hard-coded passwords, etc. How do we know there aren’t tons of those in all the cars out there right now, and in the self-driving cars of the future (which will likely reuse the legacy technologies of current cars)? Recently I heard a negative comment about the source code of one of the self-driving car “players”, and I’m pretty sure there are many of those rookie mistakes in it.

Why is this even riskier for self-driving cars? I’m not an expert in car programming, but it seems like the attack surface is bigger. I might be completely off target here, but in a conventional car you’d “just” have to properly isolate the CAN bus. In a self-driving car, the autonomous system that watches the surroundings and decides what to do next has to be connected to the CAN bus. With Tesla able to send updates over the air, the attack surface is even bigger (although that’s actually a good feature – being able to patch all cars immediately once a vulnerability is discovered).

Of course, one approach would be to introduce legislation that regulates car software. It might work, but it would rely on governments to do proper testing, which won’t always be the case.

The alternative is to open-source it and let all the white hats find your issues, so that you can close them before the car hits the road. Not only that, but consumers like me will feel safer, and geeks will be able to verify that the car is really running the software it claims to run by checking the fingerprints.
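Checking "fingerprints" could, at its simplest, mean comparing a cryptographic hash of the installed firmware image against one published alongside the open-source release. A sketch (all names and the reproducible-build assumption are mine; a real scheme would involve signed hashes and secure boot):

```python
import hashlib

def fingerprint(image: bytes) -> str:
    # SHA-256 of the exact bytes of the firmware image.
    return hashlib.sha256(image).hexdigest()

# Hypothetical: the vendor publishes the hash of each reproducible build,
# and the owner reads the installed image back from the vehicle.
published_hash = fingerprint(b"open-source build v1.2.3")
installed_image = b"open-source build v1.2.3"

assert fingerprint(installed_image) == published_hash
```

This only works if builds are reproducible, so that anyone compiling the published source gets a bit-identical image.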

Richard Stallman might be seen as a fanatic when he advocates against closed source software, but in cases like … cars, his concerns seem less extreme.

“But the Jeep vulnerability was fixed”, you may say. And that might be seen as being the way things are – vulnerabilities appear, they get fixed, life goes on. No person was injured because of the bug, right? Well, not yet. And “gaining control” is the extreme scenario – there are still pretty bad scenarios, like being able to track a car through its GPS, or cause panic by controlling the entertainment system. It might be over wifi, or over GPRS, or even by physically messing with the car by inserting a flash drive. Is open source immune to those issues? No, but it has proven to be more resilient.

One industry where the problem of proprietary software on a product that the customer bought is … tractors. It turns out farmers are hacking their tractors, because of multiple issues and the inability of the vendor to resolve them in a timely manner. This is likely to happen to cars soon, when only authorized repair shops are allowed to touch anything on the car. And with unauthorized repair shops the attack surface becomes even bigger.

In fact, I’d prefer open source not just for cars, but for all consumer products. The source code of a smart fridge or a security camera is trivial; open-sourcing it would rarely mean sacrificing competitive advantage. But refrigerators get hacked, security cameras are active parts of botnets, and the “internet of shit” is becoming ubiquitous. A huge proportion of these issues are dumb, beginner mistakes. We have the right to know what shit we are running – in our fridges, DVRs and, ultimately, our cars.

Your fridge may soon be spying on you; your vacuum cleaner may threaten your pet and demand a “ransom”. The terrorists of the future may crash planes without being armed, crash vans into crowds without being in the van, and “explode” home equipment without being in the home. And that’s not just hypothetical.

Will open source magically solve the issue? No. But it will definitely make things better and safer, as it has done with operating systems and web servers.

The post Self-Driving Cars Should Be Open Source appeared first on Bozho's tech blog.

Prime Day 2017 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prime-day-2017-powered-by-aws/

The third annual Prime Day set another round of records for global orders, topping Black Friday and Cyber Monday, making it the biggest day in Amazon retail history. Over the course of the 30 hour event, tens of millions of Prime members purchased things like Echo Dots, Fire tablets, programmable pressure cookers, espresso machines, rechargeable batteries, and much more! July 11th also set a record for the number of new Prime memberships, as people signed up in order to take advantage of hundreds of thousands of deals. Amazon customers shopped online and made heavy use of the Amazon App, with mobile orders more than doubling from last Prime Day.

Powered by AWS
Last year I told you about How AWS Powered Amazon’s Biggest Day Ever, and shared what the team had learned with regard to preparation, automation, monitoring, and thinking big. All of those lessons still apply and you can read that post to learn more. Preparation for this year’s Prime Day (which started just days after Prime Day 2016 wrapped up) began with collecting and sharing best practices and identifying areas for improvement, then proceeded to implementation and stress testing as the big day approached. Two of the best practices involve auditing and GameDay:

Auditing – This is a formal way for us to track preparations, identify risks, and to track progress against our objectives. Each team must respond to a series of detailed technical and operational questions that are designed to help them determine their readiness. On the technical side, questions could revolve around time to recovery after a database failure, including the all-important check of the TTL (time to live) for the CNAME. Operational questions address schedules for on-call personnel, points of contact, and ownership of services & instances.

GameDay – This practice (which I believe originated with former Amazonian Jesse Robbins), is intended to validate all of the capacity planning & preparation and to verify that all of the necessary operational practices are in place and work as expected. It introduces simulated failures and helps to train the team to identify and quickly resolve issues, building muscle memory in the process. It also tests failover and recovery capabilities, and can expose latent defects that are lurking under the covers. GameDays help teams to understand scaling drivers (page views, orders, and so forth) and gives them an opportunity to test their scaling practices. To learn more, read Resilience Engineering: Learning to Embrace Failure or watch the video: GameDay: Creating Resiliency Through Destruction.

Prime Day 2017 Metrics
So, how did we do this year?

The AWS teams checked their dashboards and log files, and were happy to share their metrics with me. Here are a few of the most interesting ones:

Block Storage – Use of Amazon Elastic Block Store (EBS) grew by 40% year-over-year, with aggregate data transfer jumping to 52 petabytes (a 50% increase) for the day and total I/O requests rising to 835 million (a 30% increase). The team told me that they loved the elasticity of EBS, and that they were able to ramp down on capacity after Prime Day concluded instead of being stuck with it.

NoSQL Database – Amazon DynamoDB requests from Alexa, the Amazon.com sites, and the Amazon fulfillment centers totaled 3.34 trillion, peaking at 12.9 million per second. According to the team, the extreme scale, consistent performance, and high availability of DynamoDB let them meet the needs of Prime Day without breaking a sweat.

Stack Creation – Nearly 31,000 AWS CloudFormation stacks were created for Prime Day in order to bring additional AWS resources on line.

API Usage – AWS CloudTrail processed over 50 billion events and tracked more than 419 billion calls to various AWS APIs, all in support of Prime Day.

Configuration Tracking – AWS Config generated over 14 million configuration items for AWS resources.

You Can Do It
Running an event that is as large, complex, and mission-critical as Prime Day takes a lot of planning. If you have an event of this type in mind, please take a look at our new Infrastructure Event Readiness white paper. Inside, you will learn how to design and provision your applications to smoothly handle planned scaling events such as product launches or seasonal traffic spikes, with sections on automation, resiliency, cost optimization, event management, and more.



Pirates Are Not Easily Deterred by Viruses and Malware, Study Finds

Post Syndicated from Ernesto original https://torrentfreak.com/pirates-are-not-easily-deterred-by-viruses-and-malware-study-finds-170913/

Despite the widespread availability of legal streaming services, piracy remains rampant around the world.

This is the situation in Singapore where a new study commissioned by the Cable and Satellite Broadcasting Association of Asia (CASBAA) found that 39% of all Singaporeans download or stream movies, TV shows, or live sports illegally.

The survey, conducted by Sycamore Research, polled the opinions and behaviors of a weighted sample of 1,000 respondents. The research concludes that nearly half of the population regularly pirates and also found that these people are not easily deterred.

Although the vast majority of the population knows that piracy is against the law, the lure of free content is often hard to ignore. Many simply see it as socially acceptable behavior.

“The notion that piracy is something that everybody does nowadays turns it into a socially acceptable behavior”, Sycamore Research Director Anna Meadows says, commenting on the findings.

“Numerous studies have shown that what we perceive others to be doing has a far stronger influence on our behavior than what we know we ‘ought’ to do. People know that they shouldn’t really pirate, but they continue to do so because they believe those around them do as well.”

One of the main threats pirates face is the malware and malicious ads present on some sites. This risk is recognized by 74% of active pirates, but they continue nonetheless.

The dangers of malware and viruses, a key talking point among industry groups nowadays, do have some effect. Among those who stopped pirating, 40% cited them as their primary reason. That’s more than the availability of legal services, which was mentioned in 37% of cases.

Aside from traditional download and streaming sites, the growing popularity of pirate media boxes is clearly present in Singapore as well. A total of 14% of Singaporeans admit to having such a device in their home.

So why do people continue to pirate despite the risks?

The answer is simple: because it’s free. The vast majority (63%) mention the lack of financial cost as their main motivation for using pirate sites. The ability to watch something whenever they want and a lack of legal options follow at a distance, both at 31%.

“There are few perceived downsides to piracy,” Meadows notes.

“Whilst the risk of devices being infected with viruses or malware is understood, it is underweighted. In the face of the benefit of free content, people appear to discount the risks, as the idea of getting something for nothing is so psychologically powerful.”


Disabling Intel Hyper-Threading Technology on Amazon EC2 Windows Instances

Post Syndicated from Brian Beach original https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-ec2-windows-instances/

In a prior post, Disabling Intel Hyper-Threading on Amazon Linux, I investigated how the Linux kernel enumerates CPUs. I also discussed the options to disable Intel Hyper-Threading (HT Technology) in Amazon Linux running on Amazon EC2.

In this post, I do the same for Microsoft Windows Server 2016 running on EC2 instances. I begin with a quick review of HT Technology and the reasons you might want to disable it. I also recommend that you take a moment to review the prior post for a more thorough foundation.

HT Technology

HT Technology makes a single physical processor appear as multiple logical processors. Each core in an Intel Xeon processor has two threads of execution. Most of the time, these threads can progress independently; one thread executing while the other is waiting on a relatively slow operation (for example, reading from memory) to occur. However, the two threads do share resources and occasionally one thread is forced to wait while the other is executing.

There are a few unique situations where disabling HT Technology can improve performance. One example is high performance computing (HPC) workloads that rely heavily on floating point operations. In these cases, it can be advantageous to disable HT Technology. However, such cases are rare, and for the overwhelming majority of workloads you should leave it enabled. I recommend that you test with and without HT Technology enabled, and only disable threads if you are sure it will improve performance.

Exploring HT Technology on Microsoft Windows

Here’s how Microsoft Windows enumerates CPUs. As before, I am running these examples on an m4.2xlarge. I also chose to run Windows Server 2016, but you can walk through these exercises on any version of Windows. Remember that the m4.2xlarge has eight vCPUs, and each vCPU is a thread of an Intel Xeon core. Therefore, the m4.2xlarge has four cores, each of which run two threads, resulting in eight vCPUs.

Windows does not have a built-in utility to examine CPU configuration, but you can download the Sysinternals coreinfo utility from Microsoft’s website. This utility provides useful information about the system CPU and memory topology. For this walkthrough, you enumerate the individual CPUs, which you can do by running coreinfo -c. For example:

C:\Users\Administrator>coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**------ Physical Processor 0 (Hyperthreaded)
--**---- Physical Processor 1 (Hyperthreaded)
----**-- Physical Processor 2 (Hyperthreaded)
------** Physical Processor 3 (Hyperthreaded)

As you can see from the output, the coreinfo utility displays a table where each row is a physical core and each column is a logical CPU. In other words, the two asterisks on the first line indicate that CPU 0 and CPU 1 are the two threads in the first physical core. Therefore, my m4.2xlarge has four physical processors, and each processor has two threads, resulting in eight total CPUs, just as expected.

It is interesting to note that Windows Server 2016 enumerates CPUs in a different order than Linux. Remember from the prior post that Linux enumerates the first thread in each core, followed by the second thread in each core. You can see from the output earlier that Windows Server 2016 enumerates both threads in the first core, then both threads in the second core, and so on. The diagram below shows the relationship of CPUs to cores and threads in both operating systems.
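The two enumeration orders can be expressed as simple index arithmetic. A sketch, assuming two threads per core (the function names are mine):

```python
def windows_cpu_to_core_thread(cpu: int) -> tuple:
    # Windows Server 2016 order: both threads of core 0, then core 1, ...
    return (cpu // 2, cpu % 2)

def linux_cpu_to_core_thread(cpu: int, cores: int) -> tuple:
    # Linux order: the first thread of every core, then all the second threads.
    return (cpu % cores, cpu // cores)

# On an m4.2xlarge (4 cores, 8 vCPUs), CPU 1 means different things:
assert windows_cpu_to_core_thread(1) == (0, 1)   # core 0, second thread
assert linux_cpu_to_core_thread(1, 4) == (1, 0)  # core 1, first thread
```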

In the Linux post, I disabled CPUs 4–6, leaving one thread per core, and effectively disabling HT Technology. You can see from the diagram that you must disable the odd-numbered threads (that is, 1, 3, 5, and 7) to achieve the same result in Windows. Here’s how to do that.

Disabling HT Technology on Microsoft Windows

In Linux, you can globally disable CPUs dynamically. In Windows, there is no direct equivalent that I could find, but there are a few alternatives.

First, you can disable CPUs using the msconfig.exe tool. If you choose Boot, Advanced Options, you have the option to set the number of processors. In the example below, I limit my m4.2xlarge to four CPUs. Restart for this change to take effect.

Unfortunately, Windows does not disable hyperthreaded CPUs first and then real cores, as Linux does. As you can see in the following output, coreinfo reports that my m4.2xlarge has two real cores and four hyperthreads after rebooting. Msconfig.exe is useful for disabling cores, but it does not allow you to disable HT Technology.

Note: If you have been following along, you can re-enable all your CPUs by unselecting the Number of processors check box and rebooting your system.


C:\Users\Administrator>coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**-- Physical Processor 0 (Hyperthreaded)
--** Physical Processor 1 (Hyperthreaded)

While you cannot disable HT Technology systemwide, Windows does allow you to associate a particular process with one or more CPUs. Microsoft calls this “processor affinity”. To see an example, use the following steps.

  1. Launch an instance of Notepad.
  2. Open Windows Task Manager and choose Processes.
  3. Open the context (right click) menu on notepad.exe and choose Set Affinity….

This brings up the Processor Affinity dialog box.

As you can see, all the CPUs are allowed to run this instance of notepad.exe. You can uncheck a few CPUs to exclude them. Windows is smart enough to allow any scheduled operations to continue to completion on disabled CPUs. It then saves their state at the next scheduling event and resumes them on another CPU. To ensure that only one thread in each core is able to run the process, you uncheck every other CPU. This effectively disables HT Technology for this process. For example:

Of course, this can be tedious when you have a large number of cores. Remember that the x1.32xlarge has 128 CPUs. Luckily, you can set the affinity of a running process from PowerShell using the Get-Process cmdlet. For example:

PS C:\> (Get-Process -Name 'notepad').ProcessorAffinity = 0x55;

The ProcessorAffinity attribute takes a bitmask in hexadecimal format. 0x55 in hex is equivalent to 01010101 in binary. Think of the binary encoding as 1=enabled and 0=disabled. This is slightly confusing because the bits read right to left: CPU 0 is the rightmost bit and CPU 7 is the leftmost bit. Therefore, 01010101 means that the first thread in each core is enabled, just as in the diagram earlier.
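The bitmask arithmetic is easy to get wrong by hand. Here is a small sketch (the function name is mine) that builds the "first thread of each core" mask programmatically, assuming Windows' pairwise enumeration in which the even-numbered CPUs are the first threads:

```python
def first_thread_mask(n_cpus: int) -> int:
    # Set the bit for every even-numbered CPU; under Windows' pairwise
    # enumeration those are the first threads of each core.
    mask = 0
    for cpu in range(0, n_cpus, 2):
        mask |= 1 << cpu
    return mask

assert first_thread_mask(8) == 0x55      # the value used in the example above
assert first_thread_mask(16) == 0x5555   # pattern extends to larger instances
```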

The calculator built into Windows includes a “programmer view” that helps you convert from hexadecimal to binary. Note that the ProcessorAffinity attribute is a 64-bit number, so you can only configure processor affinity this way on systems with up to 64 CPUs. At the moment, only the x1.32xlarge has more than 64 vCPUs.

In the preceding examples, you changed the processor affinity of a running process. Sometimes, you want to start a process with the affinity already configured. You can do this using the start command. The start command includes an affinity flag that takes a hexadecimal number like the PowerShell example earlier.

C:\Users\Administrator>start /affinity 55 notepad.exe

It is interesting to note that a child process inherits the affinity of its parent. For example, the following commands create a batch file that launches Notepad, and then start the batch file with the affinity set. If you examine the instance of Notepad launched by the batch file, you see that the affinity has been applied to it as well.

C:\Users\Administrator>echo notepad.exe > test.bat
C:\Users\Administrator>start /affinity 55 test.bat

This means that you can set the affinity of your task scheduler, and any tasks that the scheduler starts inherit the affinity. So, you can disable every other thread when you launch the scheduler and effectively disable HT Technology for all of the tasks as well. Be sure to test this, however, as some schedulers override the normal inheritance behavior and explicitly set processor affinity when starting a child process.


In summary, while the Windows operating system does not allow you to disable logical CPUs, you can set processor affinity on individual processes. You also learned that Windows Server 2016 enumerates CPUs in a different order than Linux. Therefore, you can effectively disable HT Technology by restricting a process to every other CPU. Finally, you learned how to set the affinity of both new and running processes using Task Manager, PowerShell, and the start command.

Note: this technical approach has nothing to do with control over software licensing, or licensing rights, which are sometimes linked to the number of “CPUs” or “cores.” For licensing purposes, those are legal terms, not technical terms. This post did not cover anything about software licensing or licensing rights.

If you have questions or suggestions, please comment below.

Pirate Sites and the Dying Art of Customer Service

Post Syndicated from Andy original https://torrentfreak.com/pirate-sites-and-the-dying-art-of-customer-service-170803/

Consumers of products and services in the West are now more educated than ever before. They often research before making a purchase and view follow-up assistance as part of the package. Indeed, many companies live and die on the levels of customer support they’re able to offer.

In this ultra-competitive world, we send faulty technology items straight back to the store, cancel our unreliable phone providers, and switch to new suppliers for the sake of a few dollars, pounds or euros per month. But does this demanding environment translate to the ‘pirate’ world?

It’s important to remember that when the first waves of unauthorized platforms appeared after the turn of the century, content on the Internet was firmly established as being ‘free’. When people first fired up KaZaA, LimeWire, or the few fledgling BitTorrent portals, few could believe their luck. Nevertheless, the fact that there was no charge for content was quickly accepted as the standard.

That’s a position that continues today but for reasons that are not entirely clear, some users of pirate sites treat the availability of such platforms as some kind of right, holding them to the same standards of service that they would their ISP, for example.

One only has to trawl the comments section on The Pirate Bay to see hundreds of examples of people criticizing the quality of uploaded movies, the fact that a software crack doesn’t work, or that some anonymous uploader failed to deliver the latest album quickly enough. That’s aside from the continual complaints screamed on various external platforms which bemoan the site’s downtime record.

For people who recall the sheer joy of finding a working Suprnova mirror for a few minutes almost 15 years ago, this attitude is somewhat baffling. Back then, people didn’t go ballistic when a site went down, they savored the moment when enthusiastic volunteers brought it back up. There was a level of gratefulness that appears somewhat absent today, in a new world where free torrent and streaming sites are suddenly held to the same standards as Comcast or McDonalds.

But while a cultural change among users has definitely taken place over the years, the way sites communicate with their users has taken a hit too. Despite the advent of platforms including Twitter and Facebook, the majority of pirate site operators today have a tendency to leave their users completely in the dark when things go wrong, leading to speculation and concern among grateful and entitled users alike.

So why does The Pirate Bay’s blog stay completely unattended these days? Why do countless sites let dust gather on Twitter accounts that last made an announcement in 2012? And why don’t site operators announce scheduled downtime in advance or let people know what’s going on when the unexpected happens?

“Honestly? I don’t have the time anymore. I also care less than I did,” one site operator told TF.

“11 years of doing this shit is enough to grind anybody down. It’s something I need to do but not doing it makes no difference either. People complain in any case. Then if you start [informing people] again they’ll want it always. Not happening.”

Rather less complimentary was the operator of a large public site. He told us that two decades ago relationships between operators and users were good but have been getting worse ever since.

“Users of pirate content 20 years ago were highly technical. 10 years ago they were somewhat technical. Right now they are fucking watermelon head puppets. They are plain stupid,” he said.

“Pirate sites don’t have customers. They have users. The definition of a customer, when related to the web, is a person that actually buys a service. Since pirates sites don’t sell services (I’m talking about public ones) they have no customers.”

Another site operator told us that his motivations for not interacting with users are based on the changing legal environment, which has become steadily and markedly worse, year upon year.

“I’m not enjoying being open like before. I used to chat keenly with the users, on the site and IRC [Internet Relay Chat] but i’m keeping my distance since a long time ago,” he told us.

“There have always been risks but now I lock everything down. I’m not using Facebook in any way personally or for the site and I don’t need the dramas of Twitter. Everytime you engage on there, problems arise with people wanting a piece of you. Some of the staff use it but I advise the contrary where possible.”

Interested in where the boundaries lie, we asked a couple of sites whether they should be doing more to keep users informed and if that should be considered a ‘customer service’ obligation these days.

“This is not Netflix and i’m not the ‘have a nice day’ guy from McDonalds,” one explained.

“If people want Netflix help then go to Netflix. There’s two of us here doing everything and I mean everything. We’re already in a pinch so spending time to answer every retarded question from kids is right out.”

Our large public site operator agreed, noting that users complain about the craziest things, including why they don’t have enough space on a drive to download, why a movie that’s due out in 2020 hasn’t been uploaded yet, and why they can’t log in – when they haven’t even opened an account yet.

While the responses aren’t really a surprise given the ‘free’ nature of the sites and the volume of visitors, things don’t get any better when moving up (we use the term loosely) to paid ‘pirate’ services.

Last week, one streaming platform in particular had an absolute nightmare with what appeared to be technical issues. Nevertheless, some of its users, despite only paying a few pounds per month, demanded their pound of flesh from the struggling service.

One, who raised the topic on Reddit, was advised to ask for his money back for the trouble caused. It raised a couple of eyebrows.

“Put in a ticket and ask [for a refund], morally they should,” the user said.

The use of the word “morally” didn’t sit well with some observers, one of whom couldn’t understand how the word could possibly be mentioned in the context of a pirate paying another pirate money, for a pirate service that had broken down.

“Wait let me get this straight,” the critic said. “You want a refund for a gray market service. It’s like buying drugs off the corner only to find out it’s parsley. Do you go back to the dealer and demand a refund? You live and you learn bud. [Shaking my head] at people in here talking about it being morally responsible…too funny.”

It’s not clear when pirate sites started being held to the same standards as regular commercial entities but from anecdotal evidence at least, the problem appears to be getting worse. That being said and from what we’ve heard, users can stop holding their breath waiting for deluxe customer service – it’s not coming anytime soon.

“There’s no way to monetize support,” one admin concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Renowned Kodi Addon Developer MetalKettle Calls it Quits

Post Syndicated from Andy original https://torrentfreak.com/renowned-kodi-addon-developer-metalkettle-calls-it-quits-170829/

After dominating the piracy landscape for more than a decade, BitTorrent now shares the accolades with web streaming. The latter is often easier to understand for novices, which has led to its meteoric rise.

Currently, software like Kodi grabs most of the headlines. The software is both free and legal but when it’s augmented with third-party addons, it’s transformed into a streaming piracy powerhouse. As a result, addon developers and distributors are coming under pressure these days.

Numerous cases are already underway, notably against addon repository TVAddons and the developer behind third-party addon ZemTV. Both are being sued in the United States by Dish Network, but the case filed against TVAddons in Canada is the most controversial. It’s already proven massively costly for its operator, who has been forced to ask the public for donations to keep up the fight.

With this backdrop of legal problems for prominent players, it’s no surprise that other people involved in the addon scene are considering their positions. This morning brings yet more news of retirement, with one of the most respected addon developers and distributors deciding to call it a day.

Over the past three to four years, the name MetalKettle has become synonymous not only with high-quality addons but also the MetalKettle addon repository, which was previously a leading go-to location for millions of Kodi users.

But now, ‘thanks’ to the increased profile of the Kodi addon scene, the entire operation will shrink away to avoid further negative attention. (Statement published verbatim)

“Over the past year or so Kodi has become more mainstream and public we’ve all seen the actions of others become highlighted legally, with authorities determined to target 3rd party addons making traction. This has eventually caused me to consider ‘what if?’ – the result of which never ends well in my mind,” MetalKettle writes.

“With all this said I have decided to actively give up 3rd party addon development (at least for the time being) and concentrate on being a husband and father.”

The news that MetalKettle will now fall silent will be sad news for the Kodi scene, after hosting plenty of addons over the years including UK Turks, UKTV Again, Xmovies8, Cartoons8, Toonmania, TV Mix, Sports Mix, Live Mix, Watch1080p, and MovieHut, to name just a few.

The distribution of these addons and others ultimately placed MetalKettle on the official Kodi repository blacklist, banned for providing access to premium content without authorization.

More recently, however, MetalKettle joined the Colossus Kodi repository but it seems likely that particular alliance will now come to an end. Whether other developers will take on any of the existing MetalKettle addons is unclear.

Signing off to his fans during the past few hours, MetalKettle (MK) thanked everyone for their support over the past several years.

“It’s much appreciated and made this all worthwhile,” MK said.

While plenty of people contribute to the Kodi scene, it can be quite a hostile environment for those who step out of line. The same cannot be said of MK, as evidenced by the outpouring of gratitude from his associates on Twitter.


Live Mayweather v McGregor Streams Will Thrive On Torrents Tonight

Post Syndicated from Andy original https://torrentfreak.com/live-mayweather-v-mcgregor-streams-will-thrive-on-torrents-tonight-170826/

Tonight, August 26, at the T-Mobile Arena in Las Vegas, Floyd Mayweather Jr. will finally meet UFC lightweight champion Conor McGregor in what is being billed as the biggest fight in boxing history.

Although tickets for inside the arena are still available for those with a lot of money to burn, most fans will be viewing on a screen of some kind, whether that’s in a cinema, sports bar, or at home in front of a TV.

The fight will be available on Showtime in the United States but the promoters also say they’ve done their best to make it accessible to millions of people in dozens of countries, with varying price tags dependent on region. Nevertheless, due to generally high prices, it’s likely that untold thousands around the world will attempt to watch the fight without paying.

That will definitely be possible. Although Showtime has won a pre-emptive injunction to stop some sites offering the fight, many hundreds of others are likely to fill in the gaps, offering generally lower-quality streams to the eager masses. Whether all of these sites will be able to cope with what could be unprecedented demand remains to be seen, but there is one method that will thrive under the pressure.

Torrent technology is best known for offering content after it’s aired, whether that’s the latest episode of Game of Thrones or indeed a recording of the big fight scheduled for the weekend. However, what most ‘point-and-click’ file-sharers won’t know is that there’s a torrent-based technology that offers live sporting events week in, week out.

Without going into too many technical details, AceStream / Ace Player HD is a torrent engine built into the ever-popular VLC media player. It’s available on Windows, Android and Linux, costs nothing to install, and is incredibly easy to use.

Where regular torrent clients handle both .torrent files and magnet links, AceStream relies on an AceStream Content ID to find streams to play instead. This ID is a hash value (similar to one seen in magnet links, but prefaced with ‘acestream://’) which relates to the stream users want to view.

Once found, these can be copied to the user’s clipboard and pasted into the ‘Open Ace Stream Content ID’ section of the player’s file menu. Click ‘play’ and it’s done – it really is that simple.
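As a rough illustration of what the player expects, a Content ID is essentially a 40-character hex hash, optionally carrying the ‘acestream://’ prefix. The following Python sketch validates a pasted link under that assumption (the exact format rules here are an assumption for illustration, not an official AceStream specification):

```python
import re
from typing import Optional

# Assumption: a Content ID is a 40-character hex hash (similar to a
# BitTorrent info-hash), optionally prefixed with 'acestream://'.
ACESTREAM_ID = re.compile(r"^(?:acestream://)?([0-9a-fA-F]{40})$")

def parse_content_id(link: str) -> Optional[str]:
    """Return the bare 40-character hash from a pasted ID, or None if it doesn't match."""
    match = ACESTREAM_ID.match(link.strip())
    return match.group(1).lower() if match else None

print(parse_content_id("acestream://" + "ab" * 20))  # the bare 40-char hash
print(parse_content_id("not-a-content-id"))          # None
```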

AceStream is simplicity itself

Of course, any kind of content – both authorized and unauthorized – can be streamed and shared using AceStream and there are hundreds of live channels available, some in very high quality, 24/7. Inevitably, however, there’s quite an emphasis on premium content from sports broadcasters around the world, with fresh links to content shared on a daily basis.

The screenshot below shows a typical AceStream Content ID indexing site, with channels on the left, AceStream Content IDs in the center, plus language and then stream speed on the far right. (Note: TF has redacted the links since many will still be live at time of publication)

A typical AceStream Content ID listing

While streams of most major TV channels are relatively easy to find, specialist channels showing PPV events are a little bit more difficult to discover. For those who know where to look, however, the big fight will be only a cut-and-paste away and in much better quality than that found on most web-based streaming portals.

All that being said, for torrent enthusiasts the magic lies in the ability of the technology to adapt to surging demand. While websites and streams wilt under the load Saturday night, it’s likely that AceStream streams will thrive under the pressure, with viewers (downloaders/streamers) also becoming distributors (uploaders) to others watching the event unfold.

With this in mind, it’s worth noting that while AceStream is efficient and resilient, using it to watch infringing content is illegal in most regions, since simultaneous uploading also takes place. Still, that’s unlikely to frighten away enthusiasts, who will already be aware of the risks and behind a VPN.

AceStream streams do have an Achilles heel, though. Unlike a regular torrent swarm, where the initial seeder can disappear once a full copy of the movie or TV show is distributed among other peers, AceStream streams are completely reliant on the initial stream seeder at all times. If he or she disappears, the live stream dies and it’s all over. For this reason, people looking to stream often have a couple of extra stream hashes standing by.

But for big fans (who also have the money to spend, of course), the decision to pirate rather than pay is one not to be taken lightly. The fight will be a huge spectacle that will probably go down in history as the biggest combat sports event of all time. If streams go down early, that moment will be gone forever, so forget telling your kids about the time you watched McGregor knock out Mayweather in Round Two.


ROI is not a cybersecurity concept

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/roi-is-not-cybersecurity-concept.html

In the cybersecurity community, much time is spent trying to speak the language of business, in order to communicate to business leaders our problems. One way we do this is trying to adapt the concept of “return on investment” or “ROI” to explain why they need to spend more money. Stop doing this. It’s nonsense. ROI is a concept pushed by vendors in order to justify why you should pay money for their snake oil security products. Don’t play the vendor’s game.

The correct concept is simply “risk analysis”. Here’s how it works.

List out all the risks. For each risk, calculate:

  • How often it occurs.
  • How much damage it does.
  • How to mitigate it.
  • How effective the mitigation is (reduces chance and/or cost).
  • How much the mitigation costs.

If you have risk of something that’ll happen once-per-day on average, costing $1000 each time, then a mitigation costing $500/day that reduces likelihood to once-per-week is a clear win for investment.
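Using the hypothetical figures above, the comparison is simple expected-cost arithmetic (a minimal sketch; the dollar amounts are the ones from this example, not real data):

```python
def expected_daily_cost(incidents_per_day, cost_per_incident, mitigation_cost_per_day=0.0):
    """Expected daily loss: incident frequency times damage, plus mitigation spend."""
    return incidents_per_day * cost_per_incident + mitigation_cost_per_day

# Without mitigation: one $1000 incident per day.
baseline = expected_daily_cost(1.0, 1000.0)
# With mitigation: $500/day spend cuts the frequency to once per week.
mitigated = expected_daily_cost(1 / 7, 1000.0, 500.0)

print(f"baseline ${baseline:.2f}/day, mitigated ${mitigated:.2f}/day")
# baseline $1000.00/day, mitigated $642.86/day -> the mitigation is a clear win
```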

Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will …

  • …reduce the remaining risk to once-per-month for an additional $10/day.
  • …replace that $500/day mitigation with a $400/day mitigation.
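Plugged into the same expected-cost arithmetic as the risk list above, those two hypothetical options could be ranked directly (a sketch using the figures from this example; in practice you would need the full risk matrix to do this):

```python
# Expected daily cost = incident frequency * damage + mitigation spend,
# using the hypothetical figures from the text.
current  = 1000 / 7  + 500       # once-per-week incidents, $500/day mitigation
option_a = 1000 / 30 + 500 + 10  # extra $10/day cuts incidents to once per month
option_b = 1000 / 7  + 400       # cheaper $400/day mitigation, same once-per-week rate

for name, cost in [("current", current), ("option A", option_a), ("option B", option_b)]:
    print(f"{name}: ${cost:.2f}/day")
```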

But this is never done. Companies don’t have a sophisticated enough risk matrix to plug in ROI numbers and reduce cost/risk. Instead, ROI is a standalone calculation done by a vendor pimping product, or by a security engineer building empires within the company.

If you haven’t done risk analysis to begin with (and almost none of you have), then ROI calculations are pointless.

But there are further problems. This is risk analysis as done in industries like oil and gas, which face inanimate risks. Almost all their risks are due to accidental failures, as in the Deepwater Horizon incident. In our industry, cybersecurity, the risks are animate: they come from hackers. Our risk models are based on trying to guess what hackers might do.

An example of this problem: when our drug company jacks up the price of an HIV drug, Anonymous hackers will break in and dump all our financial data, and our CFO will go to jail. A lot of our risks now come not from the technical side, but from the whims and fads of the hacker community.

Another example is when some Google researcher finds a vuln in WordPress, and our website gets hacked with it three months from now. We have to forecast not only what hackers can do now, but what they might be able to do in the future.

Finally, there is the problem that in cybersecurity we really can’t distinguish between pesky and existential threats. Take ransomware. A lot of large organizations have just gotten accustomed to wiping a few workers’ machines every day and restoring from backups. It’s a small, pesky problem of little consequence. Then one day a ransomware infection gains domain admin privileges and takes down the entire business for several weeks, as happened after #nPetya. Inevitably, our risk models always come down on the high side of estimates, with us claiming that all threats are existential, when in fact most companies continue to survive major breaches.

These difficulties with risk analysis lead us to punt on the problem altogether, but that’s not the right answer. No matter how faulty our risk analysis is, we still have to go through the exercise.

One model of how to do this calculation is architecture. We know we need a certain number of toilets per building, even without doing ROI on the value of those toilets. The same is true for a lot of security engineering. We know we need firewalls, encryption, and OWASP hardening, even without doing a specific calculation. Passwords and session cookies need to go across SSL. That’s the baseline from which we start to analyze risks and mitigations – what we need beyond SSL, for example.

So stop using “ROI”, or worse, the abomination “ROSI”. Start doing risk analysis.

ESET Tries to Scare People Away From Using Torrents

Post Syndicated from Andy original https://torrentfreak.com/eset-tries-to-scare-people-away-from-using-torrents-170805/

Any company in the security game can be expected to play up threats among its customer base in order to get sales.

Sellers of CCTV equipment, for example, would have us believe that criminals don’t want to be photographed and will often go elsewhere in the face of that. Car alarm companies warn us that since X thousand cars are stolen every minute, an expensive immobilizer is an anti-theft must.

Of course, they’re absolutely right to point these things out. People want to know about these offline risks since they affect our quality of life. The same can be said of those that occur in the online world too.

We ARE all at risk of horrible malware that will trash our computers and steal our banking information so we should all be running adequate protection. That being said, how many times do our anti-virus programs actually trap a piece of nasty-ware in a year? Once? Twice? Ten times? Almost never?

The truth is we all need to be informed but it should be done in a measured way. That’s why an article just published by security firm ESET on the subject of torrents strikes a couple of bad chords, particularly with people who like torrents. It’s titled “Why you should view torrents as a threat” and predictably proceeds to outline why.

“Despite their popularity among users, torrents are very risky ‘business’,” it begins.

“Apart from the obvious legal trouble you could face for violating the copyright of musicians, filmmakers or software developers, there are security issues linked to downloading them that could put you or your computer in the crosshairs of the black hats.”

Aside from the use of the phrase “very risky” (‘some risk’ is a better description), there’s probably very little to complain about in this opening shot. However, things soon go downhill.

“Merely downloading the newest version of BitTorrent clients – software necessary for any user who wants to download or seed files from this ‘ecosystem’ – could infect your machine and irreversibly damage your files,” ESET writes.

Following that scary statement, some readers will have already vowed never to use a torrent again and moved on without reading any more, but the details are really important.

To support its claim, ESET points to two incidents in 2016 (which, to its great credit, the company actually discovered) involving the Transmission torrent client. Both involved deliberate third-party infection; in the latter, hackers attacked Transmission’s servers and embedded malware in its OSX client before it was distributed to the public.

No doubt these were both miserable incidents (to which the Transmission team quickly responded) but to characterize this as a torrent client problem seems somewhat unfair.

People intent on spreading viruses and malware do not discriminate and will happily infect ANY piece of computer software they can. Sadly, many non-technical people reading the ESET post won’t read beyond the claim that installing torrent clients can “infect your machine and irreversibly damage your files.”

That’s a huge disservice to the hundreds of millions of torrent client installations that have taken place over a decade and a half and were absolutely trouble free. On a similar basis, we could argue that installing Windows is the main initial problem for people getting viruses from the Internet. It’s true but it’s also not the full picture.

Finally, the piece goes on to detail other incidents over the years where torrents have been found to contain malware. The several cases highlighted by ESET are real and pretty unpleasant for victims, but the important thing to note here is that torrent users are no different from any other online users, no matter how they use the Internet.

People who download files from the Internet, from ALL untrusted sources, are putting themselves at risk of getting a virus or other malware. Whether that content is obtained from a website or a P2P network, the risks are ever-present and only a foolish person would do so without decent security software (such as ESET’s) protecting them.

The take-home point here is to be aware of security risks and put them into perspective. It’s hard to put a percentage on these things, but of the hundreds of millions of torrent and torrent client downloads that have taken place since their inception 15 years ago, the overwhelming majority have been absolutely fine.

Security situations do arise and we need to be aware of them, but presenting things in a way that spreads unnecessary concern in a particular sector isn’t necessary to sell products.

The AV-TEST Institute registers around 390,000 new malicious programs every day that don’t involve torrents, plenty for any anti-virus firm to deal with.


Concerns About The Blockchain Technology

Post Syndicated from Bozho original https://techblog.bozho.net/concerns-blockchain-technology/

The so-called (and marketing-branded) “blockchain technology” is promised to revolutionize every industry. Anything, they say, will become decentralized, free from middle men or government control. Services will thrive on various installments of the blockchain, and smart contracts will automatically enforce any logic that is related to the particular domain.

I don’t mind having another technological leap (after the internet), and given that I’m technically familiar with the blockchain, I may even be part of it. But I’m not convinced it will happen, and I’m not convinced it’s going to be the next internet.

If we strip the hype, the technology behind Bitcoin is indeed a technical masterpiece. It combines existing techniques (like hash chains and Merkle trees) with a very good proof-of-work based consensus algorithm. And it creates a digital currency, which, on top of being worth billions now, is simply cool.

But will this technology be mass-adopted, and will mass adoption allow it to retain the technological benefits it has?

First, I’d like to nitpick a little bit – if anyone is speaking about “decentralized software” when referring to “the blockchain”, be suspicious. Bitcoin and other peer-to-peer overlay networks are in fact “distributed” (see the pictures here). “Decentralized” means having multiple providers, but doesn’t mean each user runs a full-featured node on the network. This nitpicking is actually part of another argument, but we’ll get to that.

If blockchain-based applications want to reach mass adoption, they have to be user-friendly. I know I’m being captain obvious here (and fortunately some of the people in the area have realized that), but with the current state of the technology, it’s impossible for end users to even get it, let alone use it.

My first serious concern is usability. To begin with, you need to download the whole blockchain on your machine. When I got my first bitcoin several years ago (when it was still 10 euro), the blockchain was fairly small and I didn’t notice that problem. Nowadays both the Bitcoin and Ethereum blockchains take ages to download. I still haven’t managed to download the Ethereum one – after several bugs and reinstalls of the client, I’m still at 15%. And we are just at the beginning. A user simply will not wait for days to download something in order to start using a piece of technology.

I recently proposed downloading snapshots of the blockchain via BitTorrent to be included in the Ethereum protocol itself. I know that snapshots of the Bitcoin blockchain have been distributed that way, but it has been a manual process. If a client can quickly download the huge file up to a recent point, and then only download the latest blocks in the traditional way, starting up may be easier. Of course, the whole chain would have to be verified, but maybe that can be a background process that doesn’t stop you from using whatever is built on top of the particular blockchain. (I’m not sure if that will be secure enough, and whether, say, potential Sybil attacks on the BitTorrent part won’t make it undesirable; it’s just an idea.)

But even if such an approach works and is adopted, that would still mean that for every service you’d have to download a separate blockchain. Of course, projects like Ethereum may seem like the “one stop shop” for cool blockchain-based applications, but fragmentation is already happening – there are alt-coins bundled with various services like file storage, DNS, etc. That will not be workable for end-users. And it’s certainly not an option for mobile, which is the dominant client now. If instead of downloading the entire chain, something like consistent hashing is used to distribute the content in small portions among clients, it might be workable. But how will trust work in that case, I don’t know. Maybe it’s possible, maybe not.

And yes, I know that you don’t necessarily have to install a wallet/client in order to make use of a given blockchain – you can just have a cloud-based wallet. Which is fairly convenient, but that gets me to my nitpicking from a few paragraphs above and to my second concern – this effectively turns a distributed system into a decentralized one – a limited number of cloud providers hold most of the data (just as a limited number of miners hold most of the processing power). And then, even though the underlying technology allows for a distributed deployment, we’ll end up again with something simply decentralized, or even de facto centralized if mergers and acquisitions lead us there (and they probably will). And in order to access our wallets/accounts from multiple devices, we’d use a convenient cloud service where we’d log in with our username and password (because the private key is just too technical and hard for regular users). And that seems to defeat the whole idea.

Not only that, but there is an inevitable centralization of decisions (who decides on the size of the block, who has commit rights to the client repository) as well as a hidden centralization of power – how much mining power do the Chinese mining “farms” control, and can they influence the network significantly? And will the average user ever know or care (as they don’t care that Google is centralized)? I think that overall, distributed technologies will follow the power law, and the majority of data, processing power, and decision power will be controlled by a minority of actors. And so our distributed utopia will not happen in the pure form we dream of.

My third concern is incentive. Distributed technologies that have been successful so far have had a pretty narrow set of incentives. The internet was promoted by large public institutions, including government agencies and big universities. BitTorrent was successful mainly because it allowed free movies and songs with two clicks of the mouse. And Bitcoin was successful because it offered financial benefits. I’m oversimplifying of course, but “government effort”, “free & easy” and “source of more money” seem to have been the successful incentives. On the other side of the fence there are dozens of failed distributed technologies. I’ve tried many of them – alternative search engines, alternative file storage, alternative ride-sharing, alternative social networks, even alternative “internets”. None have gained traction, because they are not easier to use than their free competitors and you can’t make money out of them (and no government bothers promoting them).

Will blockchain-based services have sufficient incentives to draw customers? Will centralized competitors simply crush the distributed alternatives by being cheaper, more user-friendly, and having sales departments that can target more than the hardcore geeks who have no problem syncing their blockchain via the command line? The utopian slogans sound very cool to idealists and futurists, but they don’t sell. “Free from centralized control, full control over your data” – we’d have to go through a long process of cultural change before these things make sense to more than a handful of people.

Speaking of services, the examples often include “the sharing economy”, where one stranger offers a service to another stranger. Blockchain technology seems like a good fit here indeed – the services are by nature distributed, so why should the technology be centralized? Here comes my fourth concern – identity. While for cryptocurrencies it’s actually beneficial to be anonymous, for most real-world services (i.e. the industries that ought to be revolutionized) this is not an option. You can’t just get in the car of publicKey=5389BC989A342…. “But there are already distributed reputation systems”, you may say. Yes, and they are based on technical, not real-world identities. That doesn’t build trust. I don’t trust that publicKey=5389BC989A342… is the same person that earned the high reputation. There may be five people behind that private key. The private key may have been stolen (e.g. in a cloud-provider breach).

The value of companies like Uber and Airbnb is that they serve as trust brokers. They verify and vouch for their drivers and hosts (and passengers and guests). They verify identities through government-issued documents, Skype calls, and selfies; they compare pictures to documents, and get access to government databases, credit records, etc. Can a fully distributed service do that? No. You’d need a centralized provider to do it. And how would the blockchain make any difference then? Well, I may not be entirely correct here. I’ve actually been thinking quite a lot about decentralized identity – e.g. a way to predictably generate a private key based on, say, biometrics+password+government-issued-documents, and use the corresponding public key as your identifier, which is then fed into reputation schemes and ultimately into real-world services. But we’re not there yet.
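That key-derivation idea can be sketched with a standard KDF. This is a purely hypothetical illustration, not an existing scheme: the function name, input factors, salt, and iteration count are all made up for the example, and real biometric readings are noisy, so an actual design would need a fuzzy extractor before anything like this could work.

```python
import hashlib

def derive_identity_key(biometric_digest: bytes, password: str,
                        document_number: str) -> bytes:
    """Deterministically derive a 32-byte private-key seed from several
    identity factors using PBKDF2 (hypothetical scheme for illustration)."""
    # Combine the factors; a real design would need a canonical,
    # collision-free encoding and error-tolerant biometric processing.
    material = biometric_digest + password.encode() + document_number.encode()
    return hashlib.pbkdf2_hmac("sha256", material, b"identity-kdf-v1", 100_000)

# The same inputs always reproduce the same seed, so the holder can
# re-derive the key on any device; changing any factor yields a new identity.
seed1 = derive_identity_key(b"fingerprint-template", "s3cret", "AB123456")
seed2 = derive_identity_key(b"fingerprint-template", "s3cret", "AB123456")
seed3 = derive_identity_key(b"fingerprint-template", "other", "AB123456")
```

The flip side of determinism is recovery: lose or change any factor and you derive a different key, which is exactly the problem such schemes still have to solve.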

And that is part of my fifth concern – the technology itself. We are not there yet. There are bugs, there are thefts and leaks. There are hard forks. There isn’t sufficient understanding of the technology (I confess I don’t fully grasp all the implementation details, and they are always the key). The technology is often advertised as “just working”, but it isn’t. The other day I read an article (lost the link) that clarifies a common misconception about smart contracts – they cannot interact with the outside world. They can’t call APIs (e.g. stock market prices, bank APIs), and they can’t push or fetch data from anywhere but the blockchain. That mandates the need, again, for a centralized service that pushes the relevant information onto the blockchain before smart contracts can pick it up. I’m pretty sure most of the cool-sounding applications are not possible without extensive research. And even if/when they are, writing distributed code is hard. Debugging a smart contract is hard. Yes, hard is cool, but that doesn’t drive economic value.

I have mostly been referring to public blockchains so far. Private blockchains may have their practical applications, but there’s one catch – they are not exactly the cool distributed technology that Bitcoin uses. They may be called “blockchains” because they…chain blocks, but they usually centralize trust. For example, the Hyperledger project uses PKI, with all its benefits and risks. In these cases, a centralized authority issues the identity “tokens”, and then nodes communicate and form a shared ledger. That’s a somewhat easier problem to solve, and the nodes would usually run on actual servers in real datacenters, and not on your uncle’s Windows XP machine.

That said, hash chaining has been around for quite a long time. I did research on the matter because of a side project of mine, and it turns out that providing a tamper-proof/tamper-evident log/database on semi-trusted machines has been discussed in computer science papers since the 90s. That alone is not “the magic blockchain” that will solve all of our problems, no matter what gossip protocols you sprinkle on top. I’m not saying that’s bad – on the contrary, any variation and combination of the building blocks of the blockchain (the hash chain, the consensus algorithm, proof-of-work (or proof-of-stake), possibly smart contracts) has the potential to make useful products.
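The hash-chain building block is simple enough to sketch in a few lines. This is a generic illustration (Python, standard library only), not any particular blockchain: each link commits to its entry and the previous hash, so editing any past record invalidates every later link.

```python
import hashlib

def chain(entries):
    """Build a hash chain: each link hashes the entry together with the previous link."""
    prev = "0" * 64  # genesis value
    links = []
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        links.append(prev)
    return links

def verify(entries, links):
    """Recompute the chain and compare; a tampered entry breaks all later links."""
    return chain(entries) == links

log = ["alice->bob:5", "bob->carol:2", "carol->alice:1"]
links = chain(log)
assert verify(log, links)

# Rewriting history in the first entry invalidates the whole tail of the chain.
tampered = ["alice->bob:50", "bob->carol:2", "carol->alice:1"]
assert not verify(tampered, links)
```

This is the tamper-evidence property on its own; consensus, proof-of-work, and smart contracts are separate layers on top of it.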

I know I sound like a naysayer here, but I hope I’ve pointed out particular issues rather than aimlessly ranting at the hype (though that’s tempting as well). I’m confident that blockchain-like technologies will have their practical applications, and we will see some successful, widely adopted services and solutions based on them, just as pointed out in this detailed report. But I’m not convinced they will be revolutionary.

I hope I’m proven wrong, though, because watching a revolutionary technology closely, and even being part of it, would be quite cool.

The post Concerns About The Blockchain Technology appeared first on Bozho's tech blog.

TVStreamCMS Brings Pirate Streaming Site Clones to The Masses

Post Syndicated from Ernesto original https://torrentfreak.com/tvstreamcms-brings-pirate-streaming-site-clones-to-the-masses-170723/

In recent years many pirates have moved from more traditional download sites and tools, to streaming portals.

These streaming sites come in all shapes and sizes, and there is fierce competition among site owners to grab the most traffic. More traffic means more money, after all.

While building a streaming site from scratch is quite an operation, there are scripts on the market that allow virtually anyone to set up their own streaming index in just a few minutes.

TVStreamCMS is one of the leading players in this area. To find out more we spoke to one of the people behind the project, who prefers to stay anonymous, but for the sake of this article, we’ll call him Rick.

“The idea came up when I wanted to make my own streaming site. I saw that they make a lot of money, and many people had them,” Rick tells us.

After discovering that there were already a few streaming site scripts available, Rick saw an opportunity. None of the popular scripts at the time offered automatic updates with freshly pirated content, a gap that was waiting to be filled.

“I found out that TVStreamScript and others on ThemeForest like MTDB were available, but these were not automatized. Instead, they were kinda generic and hard to update. We wanted to make our own site, but as we made it, we also thought about reselling it.”

Soon after, TVStreamCMS was born. In addition to using it for his own project, Rick also decided to offer it to others who wanted to run their own streaming portal, for a monthly subscription fee.

TVStreamCMS website

According to Rick, the script’s automated content management system has been its key selling point. Buyers don’t have to update or change much themselves, as pretty much everything is automated.

This has generated hundreds of sales over the years, according to the developer. And several of the sites running the script are successfully “stealing” traffic from the originals; gomovies.co, for example, ranks well above the real GoMovies in Google’s search results.

“Currently, a lot of the sites competing against the top level streaming sites are using our script. This includes 123movies.co, gomovies.co and putlockers.tv, keywords like yesmovies fmovies gomovies 123movies, even in different Languages like Portuguese, French and Italian,” Rick says.

The pirated videos that appear on these sites come from a database maintained by the TVStreamCMS team. These are hosted on their own servers, but also by third parties such as Google and Openload.

When we looked at one of the sites we noticed a few dead links, but according to Rick, these are regularly replaced.

“Dead links are maintained by our team, DMCA removals are re-uploaded, and so on. This allows users not to worry about re-uploading or adding content daily and weekly as movies and episodes release,” Rick explains.

While this all sounds fine and dandy for prospective pirates, there are some significant drawbacks.

Aside from the obvious legal risks that come with operating one of these sites, there is also a financial hurdle. The full package costs $399 plus a monthly fee of $99, and the basic option is $399 and $49 per month.

TVStreamCMS subscription plans

There are apparently plenty of site owners who don’t mind paying this kind of money. That said, not everyone is happy with the script. TorrentFreak spoke to a source at one of the larger streaming sites, who believes that these clones are misleading their users.

TVStreamCMS is not impressed by the criticism. They know very well what they are doing. Their users asked for these clone templates, and they are delivering them, so both sides can make more money.

“We’re in the business to make money and grow the sales,” Rick says.

“So we have made templates looking like 123movies, Yesmovies, Fmovies and Putlocker to accommodate the demands of the buyers. A similar design gets buyers traffic and is very, very effective for new sites, as users who come from Google they think it is the real website.”

The fact that 123Movies changed its name to GoMovies and recently changed to a GoStream.is URL, only makes it easier for clones to get traffic, according to the developer.

“This provides us with a lot of business because every time they change their name the buyers come back and want another site with the new name. GoMovies, for instance, and now Gostream,” Rick notes.

Of course, the infringing nature of the clone sites means that there are many copyright holders who would rather see the script and its associated sites gone. Previously, the Hollywood group FACT managed to shut down TVstreamScript, taking down hundreds of sites that relied on it, and it’s likely that TVStreamCMS is being watched too.

For now, however, more and more clones continue to flood the web with pirated streams.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Password Masking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/password_maskin.html

Slashdot asks if password masking — replacing password characters with asterisks as you type them — is on the way out. I don’t know if that’s true, but I would be happy to see it go. Shoulder surfing, the threat it defends against, is largely nonexistent. And it is becoming harder to type in passwords on small screens and annoying interfaces. The IoT will only exacerbate this problem, and when passwords are harder to type in, users choose weaker ones.

Just How Risky is Internet Piracy in 2017?

Post Syndicated from Andy original https://torrentfreak.com/just-how-risky-is-internet-piracy-in-2017-170715/

The world’s largest entertainment companies in the spheres of music, movies, and gaming would jump for joy if the Internet piracy phenomenon came to a crashing halt tomorrow. (Spoiler: it won’t)

As a result, large sums of money are expended every day in an effort to keep unlawful distribution under control. Over the years there have been many strategies and several of these have involved targeting end users.

The world is a very big place and the tackling of piracy differs from region to region, but what most consumers of unauthorized media want to know is whether they’re putting themselves at risk.

The short answer is that no matter where people are, there is always some level of risk attached to obtaining and using pirate content. The long answer is more nuanced.

BitTorrent and other P2P protocols

By its very nature, using BitTorrent to access copyrighted content comes with a risk. Since downloaders are also distributors and their IP addresses are necessarily public, torrent users are extremely easy to track. In fact, with a minimum of equipment, any determined rightsholder is able to spot and potentially uncover the identity of a file-sharer.

But while basic BitTorrent sharing gets a 0/10 for privacy, that’s a bit like saying that a speeding car gets 0/10 for stealth. Like the speeding car, anyone can see the pirating torrent user, but the big question is whether there’s anyone around who intends to do anything about it.

The big surprise in 2017 is that users are still statistically unlikely to face any consequences.

In the United States, for example, where copyright trolling can be a serious issue for those who get caught up in the net, the problem still only affects a tiny, tiny proportion of pirates. Even a one percent chance of getting snared would be overstating the risk, and those are odds that any gambler would be happy to take.

Surprisingly, pirates are also less likely to encounter a simple friendly warning than they were last year. The “Six Strikes” Copyright Alerts System operated by the MPAA and RIAA, which set out to advise large volumes of pirates using notices sent via their ISPs, was discontinued in January. Those behind it gave up, for reasons unknown.

This means that millions of torrent users – despite exposing their IP addresses in public while sharing copyrighted content – are doing so without significant problems. Nevertheless, large numbers are also taking precautions, by using anonymization technologies including VPNs.

That’s not to say that their actions are legal – they’re not – but outside the few thousand people caught up in trolls’ nets each year, the vast and overwhelming majority of torrent users (which number well over 100 million) are pirating with impunity.

In the UK, not even trolling is a problem anymore. After a few flurries that seemed to drag on longer than they should have, copyright trolls appear to have left the country for more lucrative shores. No cases have gone through the courts in recent times, which means that UK users are torrenting pretty much whatever they like, with no legal problems whatsoever.

It’s important to note though, that their actions aren’t going unnoticed. Unlike the United States, the UK has a warning system in place. This means that a few thousand customers of a handful of ISPs are receiving notices each month informing them that their piratey behavior has been monitored by an entertainment company.

Currently, however, there are no punishments for those who are ‘caught’, even when they’re accused of pirating on a number of occasions. At least so far, it seems that the plan is to worry pirates into submission and in some cases that will probably work. Nevertheless, things can easily change when records are being kept on this scale.

Aside from Germany, which is overrun with copyright trolling activity, a handful of other European countries have also endured relatively small troll problems (Finland, Sweden, Denmark), but overall, file-sharers go about their business as usual across the continent. There are no big projects in any country aiming to punish large numbers of BitTorrent users, and only France has an active warning notice program.

Canada and Australia have also had relatively small problems with copyright trolls (the former also has a fairly toothless ISP warning system) but neither country is considered a particularly ‘dangerous’ place to share files using BitTorrent. Like the United States, UK, and Europe, the chances of getting prosecuted for infringement are very small indeed.

Why such little enforcement?

There are a number of reasons for the apparent lack of interest in BitTorrent users but a few bubble up to the top. Firstly, there’s the question of resources required to tackle millions of users. Obviously, some scare tactics could be deployed by hitting a few people hard, but it feels like most companies have moved beyond that thinking.

That’s partly due to the more recent tendency of entertainment groups and governments to take a broader view of infringement, hitting it at its source by strangling funds to pirate sites, hitting their advertisers, blocking their websites, and attempting to forge voluntary anti-piracy schemes with search engines.

It’s also worth noting that huge numbers of people are routinely protecting themselves with VPN-like technology, which allows them to move around the Internet with much improved levels of privacy. Just recently, anti-piracy outfit Rightscorp partly blamed this for falling revenues.

Importantly, however, the nature of infringement has been changing for some time too.

A few years ago, most people were getting their movies and music from torrent sites but now they’re more likely to be obtaining their fix from a streaming source. Accessing the top blockbusters via a streaming site (perhaps via Kodi) is for the most part untraceable, as is grabbing music from one of the hundreds of MP3 portals around today.

But as recent news revealed, why bother with ‘pirate’ sites when people can simply rip music from sites like YouTube?

So-called stream-ripping is now blamed for huge swathes of piracy and as a result, torrent sites get far fewer mentions from anti-piracy groups than they did before.

While still a thorn in their side, it wouldn’t be a stretch to presume that torrent sites are no longer considered the primary problem they once were, at least in respect of music. Now, the ‘Value Gap‘ is more of a headache.

So, in a nutshell, the millions of people obtaining and sharing copyrighted content using BitTorrent are still taking some risks in every major country, and those need to be carefully weighed.

The activity is illegal almost everywhere, punishable in both civil and criminal courts, and has the potential to land people with big fines and even a jail sentence, if the scale of sharing is big enough.

In truth, however, the chances of the man in the street getting caught are so slim that many people don’t give the risks a second thought. That said, even people who drive 10mph over the limit get caught once in a while, so those that want to keep a clean sheet online often get a VPN and reduce the risks to almost 0%.

For people who stream, life is much less complicated. Streaming movies, TV shows or music from an illicit source is untraceable by any regular means, which up to now has made it almost 100% safe. Notably, there hasn’t been a single prosecution of a user who streamed infringing content anywhere in the world. In the EU it is illegal though, so something might happen in future, potentially…..possibly…..at some point….maybe.

And here’s the thing. While this is the general position today, the ‘market’ is volatile and has the ability to change quickly. A case could get filed in the US or UK next week, each targeting 50,000 BitTorrent users for downloading something that came out months ago. Nobody knows for sure so perhaps the best analogy is the one drummed into kids during high-school sex education classes.

People shouldn’t put themselves at risk at all but if they really must, they should take precautions. If they don’t, they could easily be the unlucky one and that is nearly always miserable.


Analyze OpenFDA Data in R with Amazon S3 and Amazon Athena

Post Syndicated from Ryan Hood original https://aws.amazon.com/blogs/big-data/analyze-openfda-data-in-r-with-amazon-s3-and-amazon-athena/

One of the great benefits of Amazon S3 is the ability to host, share, or consume public data sets. This provides transparency into data to which an external data scientist or developer might not normally have access. By exposing the data to the public, you can glean many insights that would have been difficult with a data silo.

The openFDA project creates easy access to the high value, high priority, and public access data of the Food and Drug Administration (FDA). The data has been formatted and documented in consumer-friendly standards. Critical data related to drugs, devices, and food has been harmonized and can easily be called by application developers and researchers via API calls. OpenFDA has published two whitepapers that drill into the technical underpinnings of the API infrastructure as well as how to properly analyze the data in R. In addition, FDA makes openFDA data available on S3 in raw format.

In this post, I show how to use S3, Amazon EMR, and Amazon Athena to analyze the drug adverse events dataset. A drug adverse event is an undesirable experience associated with the use of a drug, including serious drug side effects, product use errors, product quality problems, and therapeutic failures.

Data considerations

Keep in mind that this data does have limitations. In the United States, these adverse events are submitted to the FDA voluntarily by consumers, so there may not be reports for all events that occurred. There is no certainty that a reported event was actually due to the product. The FDA does not require that a causal relationship between a product and an event be proven, and reports do not always contain enough detail to evaluate an event. Because of this, there is no way to identify the true number of events. The important takeaway is that the information contained in this data has not been verified to establish cause-and-effect relationships. Despite this disclaimer, many interesting insights and much value can be derived from the data to accelerate drug safety research.

Data analysis using SQL

For application developers who want to perform targeted searching and lookups, the API endpoints provided by the openFDA project are “ready to go” for software integration using a standard API powered by Elasticsearch, NodeJS, and Docker. However, for data analysis purposes, it is often easier to work with the data using SQL and statistical packages that expect a SQL table structure. For large-scale analysis, APIs often have query limits, such as 5000 records per query. This can cause extra work for data scientists who want to analyze the full dataset instead of small subsets of data.

To address the concern of requiring all the data in a single dataset, the openFDA project released the full 100 GB of harmonized data files that back the openFDA project onto S3. Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. It’s a quick and easy way to answer your questions about adverse events and aspirin that does not require you to spin up databases or servers.

While you could point tools directly at the openFDA S3 files, you can get greatly improved performance and usability by following some of the preparation steps later in this post.


This post explains how to use the following architecture to take the raw data provided by openFDA, leverage several AWS services, and derive meaning from the underlying data.


  1. Load the openFDA /drug/event dataset into Spark and convert it to gzip to allow for streaming.
  2. Transform the data in Spark and save the results as a Parquet file in S3.
  3. Query the S3 Parquet file with Athena.
  4. Perform visualization and analysis of the data in R and Python on Amazon EC2.

Optimizing public data sets: A primer on data preparation

Those who want to jump right into preparing the files for Athena may want to skip ahead to the next section.

Transforming, or pre-processing, files is a common task when using many public data sets. Before you jump into the specific steps for transforming the openFDA data files into a format optimized for Athena, I thought it would be worthwhile to provide a quick exploration of the problem.

Making a dataset in S3 efficiently accessible with minimal transformation for the end user has two key elements:

  1. Partitioning the data into objects that contain a complete part of the data (such as data created within a specific month).
  2. Using file formats that make it easy for applications to locate subsets of data (for example, gzip, Parquet, ORC, etc.).

With these two key elements in mind, you can now apply transformations to the openFDA adverse event data to prepare it for Athena. You might find the data techniques employed in this post to be applicable to many of the questions you might want to ask of the public data sets stored in Amazon S3.

Before you get started, I encourage those who are interested in doing deeper healthcare analysis on AWS to first read the AWS HIPAA Compliance whitepaper. This covers the information necessary for processing and storing protected health information (PHI).

Also, the adverse event analysis shown for aspirin is strictly for demonstration purposes and should not be used for any real decision or taken as anything other than a demonstration of AWS capabilities. However, there have been robust case studies published that have explored a causal relationship between aspirin and adverse reactions using OpenFDA data. If you are seeking research on aspirin or its risks, visit organizations such as the Centers for Disease Control and Prevention (CDC) or the Institute of Medicine (IOM).

Preparing data for Athena

For this walkthrough, you will start with the FDA adverse events dataset, which is stored as JSON files within zip archives on S3. You then convert it to Parquet for analysis. Why do you need to convert it? The original data download is stored in objects that are partitioned by quarter.

Here is a small sample of what you find in the adverse events (/drug/event) section of the openFDA website.

If you were looking for events that happened in a specific quarter, this is not a bad solution. For most other scenarios, such as looking across the full history of aspirin events, it requires you to access a lot of data that you won’t need. The zip file format is not ideal for using data in place because zip readers must have random access to the file, which means the data can’t be streamed. Additionally, the zip files contain large JSON objects.

To read the data in these JSON files, a streaming JSON decoder must be used, or a computer with a significant amount of RAM must decode the JSON. Opening up these files for public consumption is a great start, but you should still prepare the data with a few lines of Spark code so that the JSON can be streamed.
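To illustrate why the gzip choice matters (a Python sketch with made-up file names and records, separate from the Spark pipeline below): a gzip stream can be decompressed line by line as bytes arrive, so gzipped JSON Lines can be read one record at a time without holding the whole file in memory.

```python
import gzip
import json
import os
import tempfile

# Write a few JSON Lines records into a gzip file (illustrative records only).
records = [{"drug": "ASPIRIN", "serious": "1"},
           {"drug": "IBUPROFEN", "serious": "2"}]
path = os.path.join(tempfile.mkdtemp(), "events.json.gz")
with gzip.open(path, "wt") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read the file back one record at a time -- no random access required,
# which is what makes this format streamable, unlike a zip archive.
streamed = []
with gzip.open(path, "rt") as f:
    for line in f:
        streamed.append(json.loads(line))
```

A zip reader, by contrast, needs to seek to the archive's central directory before it can extract members, which is why the original downloads can't be processed this way in place.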

Step 1:  Convert the file types

Using Apache Spark on EMR, you can extract all of the zip files and pull out the events from the JSON files. To do this, use the Scala code below to decompress the zip files and create text files. In addition, compress the JSON files with gzip to improve Spark’s performance and reduce your overall storage footprint. The Scala code can be run in either the Spark Shell or in an Apache Zeppelin notebook on your EMR cluster.

If you are unfamiliar with either Apache Zeppelin or the Spark Shell, the following posts serve as great references:


import scala.io.Source
import java.util.zip.ZipInputStream
import org.apache.spark.input.PortableDataStream
import org.apache.hadoop.io.compress.GzipCodec

// Input Directory
val inputFile = "s3://download.open.fda.gov/drug/event/2015q4/*.json.zip";

// Output Directory
val outputDir = "s3://{YOUR OUTPUT BUCKET HERE}/output/2015q4/";

// Extract zip files from the input directory
val zipFiles = sc.binaryFiles(inputFile);

// Process zip file to extract the json as text file and save it
// in the output directory 
val rdd = zipFiles.flatMap((file: (String, PortableDataStream)) => {
    val zipStream = new ZipInputStream(file._2.open)
    val entry = zipStream.getNextEntry
    Source.fromInputStream(zipStream).getLines
}).map(_.replaceAll("\\s+", "")).saveAsTextFile(outputDir, classOf[GzipCodec])

Step 2:  Transform JSON into Parquet

With just a few more lines of Scala code, you can use Spark’s abstractions to convert the JSON into a Spark DataFrame and then export the data back to S3 in Parquet format.

Spark requires the JSON to be in JSON Lines format to be parsed correctly into a DataFrame.
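For illustration, here is what that conversion looks like in miniature (a Python sketch with made-up records, separate from the Scala pipeline): a pretty-printed JSON array spanning multiple lines becomes one complete JSON object per line.

```python
import json

# A pretty-printed JSON document: one array, spread over many lines.
# Spark's line-oriented JSON reader cannot parse records split like this.
multiline = """[
  {"safetyreportid": "1", "serious": "1"},
  {"safetyreportid": "2", "serious": "2"}
]"""

# JSON Lines: parse the document once, then emit one object per line.
records = json.loads(multiline)
json_lines = "\n".join(json.dumps(r) for r in records)
```

Each line of `json_lines` is now independently parseable, which is what allows a reader to split the input on newlines and process records in parallel.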

// Output Parquet directory
val outputDir = "s3://{YOUR OUTPUT BUCKET NAME}/output/drugevents"
// Input JSON files
val inputJson = "s3://{YOUR OUTPUT BUCKET NAME}/output/2015q4/*"
// Load a DataFrame from the multiline JSON files
val df = spark.read.json(sc.wholeTextFiles(inputJson).values)
// Extract the results field from the dataframe
val results = df.select("results")
// Save it to Parquet
results.write.parquet(outputDir)

Step 3:  Create an Athena table

With the data cleanly prepared and stored in S3 using the Parquet format, you can now place an Athena table on top of it to get a better understanding of the underlying data.

Because the openFDA data structure incorporates several layers of nesting, it can be a complex process to try to manually derive the underlying schema in a Hive-compatible format. To shorten this process, you can load the top row of the DataFrame from the previous step into a Hive table within Zeppelin and then extract the “create table” statement from SparkSQL.


val top1 = spark.sql("select * from data tablesample(1 rows)")


val show_cmd = spark.sql("show create table drugevents").show(1, false)

This returns a “create table” statement that you can almost paste directly into the Athena console. Make some small modifications (adding the word “external” and replacing “using” with “stored as”), and then execute the code in the Athena query editor. The table is created.

For the openFDA data, the DDL returns all string fields, as the date format used in the dataset does not conform to the yyyy-mm-dd hh:mm:ss[.f…] format required by Hive. For this analysis, the string format works appropriately, but it would be possible to extend this code to use a Presto function to convert the strings into timestamps.

CREATE EXTERNAL TABLE fda.drugevents (
   companynumb  string, 
   safetyreportid  string, 
   safetyreportversion  string, 
   receiptdate  string, 
   patientagegroup  string, 
   patientdeathdate  string, 
   patientsex  string, 
   patientweight  string, 
   serious  string, 
   seriousnesscongenitalanomali  string, 
   seriousnessdeath  string, 
   seriousnessdisabling  string, 
   seriousnesshospitalization  string, 
   seriousnesslifethreatening  string, 
   seriousnessother  string, 
   actiondrug  string, 
   activesubstancename  string, 
   drugadditional  string, 
   drugadministrationroute  string, 
   drugcharacterization  string, 
   drugindication  string, 
   drugauthorizationnumb  string, 
   medicinalproduct  string, 
   drugdosageform  string, 
   drugdosagetext  string, 
   reactionoutcome  string, 
   reactionmeddrapt  string, 
   reactionmeddraversionpt  string)
STORED AS parquet
LOCATION 's3://{YOUR TARGET BUCKET}/output/drugevents'

With the Athena table in place, you can start to explore the data by running ad hoc queries within Athena or doing more advanced statistical analysis in R.

Using SQL and R to analyze adverse events

Using the openFDA data with Athena makes it very easy to translate your questions into SQL code and perform quick analysis on the data. After you have prepared the data for Athena, you can begin to explore the relationship between aspirin and adverse drug events, as an example. One of the most common metrics to measure adverse drug events is the Proportional Reporting Ratio (PRR). It is defined as:

PRR = (m/n)/( (M-m)/(N-n) )
m = #reports with drug and event
n = #reports with drug
M = #reports with event in database
N = #reports in database
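The formula translates directly into code. A minimal sketch with made-up counts (illustrative numbers only, not actual openFDA figures):

```python
def prr(m: int, n: int, big_m: int, big_n: int) -> float:
    """Proportional Reporting Ratio.
    m: reports with drug and event, n: reports with drug,
    big_m: reports with event in database, big_n: reports in database."""
    return (m / n) / ((big_m - m) / (big_n - n))

# Hypothetical example: 120 aspirin+haemorrhage reports out of 2,000 aspirin
# reports, against 3,000 haemorrhage reports in a database of 500,000.
value = prr(m=120, n=2000, big_m=3000, big_n=500_000)
```

A PRR well above 1 means the event is reported disproportionately often with the drug compared to the rest of the database, which is exactly what the Athena query below computes per year.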

Gastrointestinal haemorrhage has the highest PRR of any reaction to aspirin when viewed in aggregate. One question you may want to ask is how the PRR has trended on a yearly basis for gastrointestinal haemorrhage since 2005.

Using the following query in Athena, you can see the PRR trend of “GASTROINTESTINAL HAEMORRHAGE” reactions with “ASPIRIN” since 2005:

with drug_and_event as 
(select rpad(receiptdate, 4, 'NA') as receipt_year
    , reactionmeddrapt
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as reports_with_drug_and_event 
from fda.drugevents
where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
     and medicinalproduct = 'ASPIRIN'
     and reactionmeddrapt= 'GASTROINTESTINAL HAEMORRHAGE'
group by reactionmeddrapt, rpad(receiptdate, 4, 'NA') 
), reports_with_drug as 
(select rpad(receiptdate, 4, 'NA') as receipt_year
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as reports_with_drug 
 from fda.drugevents 
 where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
     and medicinalproduct = 'ASPIRIN'
group by rpad(receiptdate, 4, 'NA') 
), reports_with_event as 
   (select rpad(receiptdate, 4, 'NA') as receipt_year
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as reports_with_event 
   from fda.drugevents
   where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
     and reactionmeddrapt= 'GASTROINTESTINAL HAEMORRHAGE'
   group by rpad(receiptdate, 4, 'NA')
), total_reports as 
   (select rpad(receiptdate, 4, 'NA') as receipt_year
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as total_reports 
   from fda.drugevents
   where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
   group by rpad(receiptdate, 4, 'NA')
)
select drug_and_event.receipt_year, 
(1.0 * drug_and_event.reports_with_drug_and_event/reports_with_drug.reports_with_drug)/ (1.0 * (reports_with_event.reports_with_event- drug_and_event.reports_with_drug_and_event)/(total_reports.total_reports-reports_with_drug.reports_with_drug)) as prr
, drug_and_event.reports_with_drug_and_event
, reports_with_drug.reports_with_drug
, reports_with_event.reports_with_event
, total_reports.total_reports
from drug_and_event
    inner join reports_with_drug on drug_and_event.receipt_year = reports_with_drug.receipt_year
    inner join reports_with_event on drug_and_event.receipt_year = reports_with_event.receipt_year
    inner join total_reports on drug_and_event.receipt_year = total_reports.receipt_year
order by drug_and_event.receipt_year

One nice feature of Athena is that you can quickly connect to it via R or any other tool that can use a JDBC driver to visualize the data and understand it more clearly.

With the following quick R script, which can be run in RStudio either locally or on an EC2 instance, you can create a visualization of the PRR and Reporting Odds Ratio (ROR) for “GASTROINTESTINAL HAEMORRHAGE” reactions to “ASPIRIN” since 2005 to better understand these trends.

# Connect to Athena
conn <- dbConnect(drv, '<Your JDBC URL>',
                  s3_staging_dir="<Your S3 Location>",
                  user=Sys.getenv("USER_NAME"),
                  password=Sys.getenv("USER_PASSWORD"))

# Declare the adverse event of interest (single quotes included so the value
# can be pasted directly into the SQL strings below)
adverseEvent <- "'GASTROINTESTINAL HAEMORRHAGE'"
# Build SQL Blocks
sqlFirst <- "SELECT rpad(receiptdate, 4, 'NA') as receipt_year, count(DISTINCT safetyreportid) as event_count FROM fda.drugevents WHERE rpad(receiptdate,4,'NA') between '2005' and '2015'"
sqlEnd <- "GROUP BY rpad(receiptdate, 4, 'NA') ORDER BY receipt_year"

# Extract Aspirin with adverse event counts
sql <- paste(sqlFirst,"AND medicinalproduct ='ASPIRIN' AND reactionmeddrapt=",adverseEvent, sqlEnd,sep=" ")
aspirinAdverseCount = dbGetQuery(conn,sql)

# Extract Aspirin counts
sql <- paste(sqlFirst,"AND medicinalproduct ='ASPIRIN'", sqlEnd,sep=" ")
aspirinCount = dbGetQuery(conn,sql)

# Extract adverse event counts
sql <- paste(sqlFirst,"AND reactionmeddrapt=",adverseEvent, sqlEnd,sep=" ")
adverseCount = dbGetQuery(conn,sql)

# All Drug Adverse event Counts
sql <- paste(sqlFirst, sqlEnd,sep=" ")
allDrugCount = dbGetQuery(conn,sql)

# Select the rows for years present in the aspirin + adverse event results
selAll = allDrugCount$receipt_year %in% aspirinAdverseCount$receipt_year
selAspirin = aspirinCount$receipt_year %in% aspirinAdverseCount$receipt_year
selAdverse = adverseCount$receipt_year %in% aspirinAdverseCount$receipt_year

# Calculate Numbers
m <- c(aspirinAdverseCount$event_count)
n <- c(aspirinCount[selAspirin,2])
M <- c(adverseCount[selAdverse,2])
N <- c(allDrugCount[selAll,2])

# Calculate Proportional Reporting Ratio (PRR)
PRR = (m/n)/((M-m)/(N-n))

# Calculate Reporting Odds Ratio (ROR)
d = n-m
D = N-M
ROR = (m/d)/(M/D)

# Plot the PRR and ROR
g_range <- range(0, PRR,ROR)
g_range[2] <- g_range[2] + 3
yearLen = length(aspirinAdverseCount$receipt_year)
plot(PRR, type="o", col="blue", ylim=g_range, axes=FALSE, ann=FALSE)
axis(1, at=1:yearLen, lab=aspirinAdverseCount$receipt_year)
axis(2, las=1, at=1*0:g_range[2])
lines(ROR, type="o", pch=22, lty=2, col="red")

As you can see, the PRR and ROR have both remained fairly steady over this time range. With the R script above, all you need to do is change the adverseEvent variable from GASTROINTESTINAL HAEMORRHAGE to another type of reaction to analyze and compare those trends.
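The per-year arithmetic the R script performs for PRR and ROR can be sketched in a few lines of plain Python. The yearly counts below are invented placeholders standing in for the query results, so the printed ratios are illustrative only:

```python
# Hypothetical per-year counts mirroring m, n, M, N in the R script.
years = [2005, 2006, 2007]
m = [40, 55, 60]              # reports with aspirin AND the adverse event
n = [900, 1100, 1200]         # reports with aspirin
M = [1800, 2100, 2300]        # reports with the adverse event
N = [90000, 110000, 120000]   # all reports in the database

for year, mi, ni, Mi, Ni in zip(years, m, n, M, N):
    prr = (mi / ni) / ((Mi - mi) / (Ni - ni))   # Proportional Reporting Ratio
    ror = (mi / (ni - mi)) / (Mi / (Ni - Mi))   # Reporting Odds Ratio
    print(year, round(prr, 2), round(ror, 2))
```

Because PRR uses report counts while ROR mimics an odds ratio, the two track closely when the drug and event each make up a small fraction of the database, which is why the plotted lines tend to move together.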


In this walkthrough:

  • You used a Scala script on EMR to convert the openFDA zip files to gzip.
  • You then transformed the JSON blobs into flattened Parquet files using Spark on EMR.
  • You created an Athena DDL so that you could query these Parquet files residing in S3.
  • Finally, you pointed the R package at the Athena table to analyze the data without pulling it into a database or creating your own servers.

If you have questions or suggestions, please comment below.

Next Steps

Take your skills to the next level. Learn how to optimize Amazon S3 for an architecture commonly used to enable genomic data analysis. Also, be sure to read more about running R on Amazon Athena.


About the Authors

Ryan Hood is a Data Engineer for AWS. He works on big data projects leveraging the newest AWS offerings. In his spare time, he enjoys watching the Cubs win the World Series and attempting to Sous-vide anything he can find in his refrigerator.



Vikram Anand is a Data Engineer for AWS. He works on big data projects leveraging the newest AWS offerings. In his spare time, he enjoys playing soccer and watching the NFL & European Soccer leagues.



Dave Rocamora is a Solutions Architect at Amazon Web Services on the Open Data team. Dave is based in Seattle and when he is not opening data, he enjoys biking and drinking coffee outside.

More on the NSA’s Use of Traffic Shaping

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/more_on_the_nsa_2.html

“Traffic shaping” — the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveilled — is an NSA technique that has gotten much less attention than it deserves. It’s a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There’s a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term “traffic shaping” to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen’s international communications from inside Yemen itself, the agency might try to “shape” the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic — emails and chat messages sent between Americans, say — to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn’t authenticated.

“Kodi Boxes Are a Fire Risk”: Awful Timing or Opportunism?

Post Syndicated from Andy original https://torrentfreak.com/kodi-boxes-are-a-fire-risk-awful-timing-or-opportunism-170618/

Anyone who saw the pictures this week couldn’t have failed to be moved by the plight of Londoners caught up in the Grenfell Tower inferno. The apocalyptic images are likely to stay with people for years to come and the scars for those involved may never heal.

As the building continued to smolder and the death toll increased, UK tabloids provided wall-to-wall coverage of the disaster. On Thursday, however, The Sun took a short break to put out yet another sensationalized story about Kodi. Given the week’s events, it was bound to raise eyebrows.

“HOT GOODS: Kodi boxes are a fire hazard because thousands of IPTV devices nabbed by customs ‘failed UK electrical standards’,” the headline reads.

Another sensational ‘Kodi’ headline

“It’s estimated that thousands of Brits have bought so-called Kodi boxes which can be connected to telly sets to stream pay-per-view sport and films for free,” the piece continued.

“But they could be a fire hazard, according to the Federation Against Copyright Theft (FACT), which has been nabbing huge deliveries of the devices as they arrive in the UK.”

As the image below shows, “Kodi box” fire hazard claims appeared next to images from other news articles about the huge London fire. While all separate stories, the pairing is not a great look.

A ‘Kodi Box’, as depicted in The Sun

FACT chief executive Kieron Sharp told The Sun that his group had uncovered two parcels of 2,000 ‘Kodi’ boxes and found that they “failed electrical safety standards”, making them potentially dangerous. While that may well be the case, the big question is all about timing.

It’s FACT’s job to reduce copyright infringement on behalf of clients such as The Premier League so it’s no surprise that they’re making a sustained effort to deter the public from buying these devices. That being said, it can’t have escaped FACT or The Sun that fire and death are extremely sensitive topics this week.

That leaves us with a few options including unfortunate opportunism or perhaps terrible timing, but let’s give the benefit of the doubt for a moment.

There’s a good argument that FACT and The Sun brought a valid issue to the public’s attention at a time when fire safety is on everyone’s lips. So, to give credit where it’s due, providing people with a heads-up about potentially dangerous devices is something that most people would welcome.

However, it’s difficult to offer congratulations on the PSA when the story as it appears in The Sun does nothing – absolutely nothing – to help people stay safe.

If some boxes are a risk (and that’s certainly likely given the level of Far East imports coming into the UK) which ones are dangerous? Where were they manufactured? Who sold them? What are the serial numbers? Which devices do people need to get out of their houses?

Sadly, none of these questions were answered or even addressed in the article, making it little more than scaremongering. Only making matters worse, the piece notes that it isn’t even clear how many of the seized devices are indeed a fire risk and that more tests need to be done. Is this how we should tackle such an important issue during an extremely sensitive week?

Timing and lack of useful information aside, one then has to question the terminology employed in the article.

As a piece of computer software, Kodi cannot catch fire. So, what we’re actually talking about here is small computers coming into the country without passing safety checks. The presence of Kodi on the devices – if indeed Kodi was even installed pre-import – is absolutely irrelevant.

Anti-piracy groups warning people of the dangers associated with their piracy habits is nothing new. For years, Internet users have been told that their computers will become malware infested if they share files or stream infringing content. While in some cases that may be true, there’s rarely any effort by those delivering the warnings to inform people on how to stay safe.

A classic example can be found in the numerous reports put out by the Digital Citizens Alliance in the United States. The DCA has produced several and no doubt expensive reports which claim to highlight the risks Internet users are exposed to on ‘pirate’ sites.

The DCA claims to do this in the interests of consumers but the group offers no practical advice on staying safe nor does it provide consumers with risk reduction strategies. Like many high-level ‘drug prevention’ documents shuffled around government, it could be argued that on a ‘street’ level their reports are next to useless.

Demonizing piracy is a well-worn and well-understood strategy but if warnings are to be interpreted as representing genuine concern for the welfare of people, they have to be a lot more substantial than mere scaremongering.

Anyone concerned about potentially dangerous devices can check out these useful guides from Electrical Safety First (pdf) and the Electrical Safety Council (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.