Tag Archives: congress

Tech Giants Protest Looming US Pirate Site Blocking Order

Post Syndicated from Ernesto original https://torrentfreak.com/tech-giants-protest-looming-us-pirate-site-blocking-order-171013/

While domain seizures against pirate sites are relatively common in the United States, ISP and search engine blocking is not. This could change soon, though.

In an ongoing case against Sci-Hub, regularly referred to as the “Pirate Bay of Science,” a magistrate judge in Virginia recently recommended a broad order which would require search engines and Internet providers to block the site.

The recommendation followed a request from the academic publisher American Chemical Society (ACS) that wants these third-party services to make the site in question inaccessible. While Sci-Hub has chosen not to defend itself, a group of tech giants has now stepped in to prevent the broad injunction from being issued.

This week the Computer & Communications Industry Association (CCIA), which includes members such as Cloudflare, Facebook, and Google, asked the court to limit the proposed measures. In an amicus curiae brief submitted to the Virginia District Court, they share their concerns.

“Here, Plaintiff is seeking—and the Magistrate Judge has recommended—a permanent injunction that would sweep in various Neutral Service Providers, despite their having violated no laws and having no connection to this case,” CCIA writes.

According to the tech companies, neutral service providers are not “in active concert or participation” with the defendant, and should, therefore, be excluded from the proposed order.

While search engines may index Sci-Hub and ISPs pass on packets from this site, they can’t be seen as “confederates” that are working together with them to violate the law, CCIA stresses.

“Plaintiff has failed to make a showing that any such provider had a contract with these Defendants or any direct contact with their activities—much less that all of the providers who would be swept up by the proposed injunction had such a connection.”

Even if one of the third-party services could be found liable, the matter should be resolved under the DMCA, which expressly prohibits such broad injunctions, the CCIA claims.

“The DMCA thus puts bedrock limits on the injunctions that can be imposed on qualifying providers if they are named as defendants and are held liable as infringers. Plaintiff here ignores that.

“What ACS seeks, in the posture of a permanent injunction against nonparties, goes beyond what Congress was willing to permit, even against service providers against whom an actual judgment of infringement has been entered. That request must be rejected.”

The tech companies hope the court will realize that the injunction recommended by the magistrate judge would set a dangerous precedent that goes beyond what the law is intended for, and will impose limits in response to their concerns.

It will be interesting to see whether any copyright holder groups will also chime in, to argue the opposite.

CCIA’s full amicus curiae brief is available here (pdf).


"Responsible encryption" fallacies

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/responsible-encryption-fallacies.html

Deputy Attorney General Rod Rosenstein gave a speech recently calling for “Responsible Encryption” (aka. “Crypto Backdoors”). It’s full of dangerous ideas that need to be debunked.

The importance of law enforcement

The first third of the speech talks about the importance of law enforcement, as if it’s the only thing standing between us and chaos. It cites the 2016 Mirai attacks as an example of the chaos that will only get worse without stricter law enforcement.

But the Mirai case demonstrated the opposite, how law enforcement is not needed. They made no arrests in the case. A year later, they still don’t have a clue who did it.

Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity providers like Google and Amazon that can handle such large attacks easily.

In other words, we the people fixed the major Mirai problem, and law-enforcement didn’t.
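
As a rough illustration (my addition, not from the original post), you can gauge whether a domain has spread its authoritative DNS across more than one provider by listing its NS records. The sketch below uses the third-party dnspython library; the domain name is a placeholder, not a claim about any real deployment.

    # Sketch: list a domain's authoritative nameservers to gauge DNS redundancy.
    # Requires the third-party dnspython package (pip install dnspython).
    # "example.com" is a placeholder, not a claim about any real deployment.
    import dns.resolver

    def nameserver_hosts(domain):
        answer = dns.resolver.resolve(domain, "NS")
        return sorted(str(record.target).rstrip(".") for record in answer)

    if __name__ == "__main__":
        hosts = nameserver_hosts("example.com")
        # Nameservers under several distinct parent domains suggest the zone
        # would survive an outage at any single DNS provider.
        providers = {host.split(".", 1)[-1] for host in hosts}
        print(hosts)
        print("distinct provider domains:", len(providers))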

Moreover, instead of being a solution to cyber threats, law enforcement has become a threat itself. The DNC likely didn’t have the FBI investigate the attacks from Russia because they didn’t want the FBI reading all their files and finding wrongdoing by the DNC. It’s not that they did anything actually wrong; it’s more like that famous quote attributed to Richelieu: “Give me six lines written by the most honest of men and I’ll find something to hang him by.” Give all your internal emails over to the FBI and I’m certain they’ll find something to hang you by, if they want.
Or consider the case of Andrew Auernheimer. He found that AT&T’s website made public the account information of early iPad owners, so he copied some down and posted them to a news site. AT&T had denied the problem, so making it public was the only way to force the company to fix it. Such access to the website was legal, because AT&T had made the data public. However, prosecutors disagreed. In order to protect the powerful, they twisted and perverted the law to put Auernheimer in jail.

It’s not that law enforcement is bad, it’s that it’s not the unalloyed good Rosenstein imagines. When law enforcement becomes the thing Rosenstein describes, it means we live in a police state.

Where law enforcement can’t go

Rosenstein repeats the frequent claim in the encryption debate:

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection

Of course our society has places “impervious to detection”, protected by both legal and natural barriers.

An example of a legal barrier is how spouses can’t be forced to testify against each other. This barrier is impervious.

A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn’t Rosenstein prosecuting those guilty of CIA torture?

Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech to discussing the “rule of law” and how it applies to everyone, “protecting people from abuse by the government”? It obviously doesn’t: there’s one rule for government and a different rule for the people, and the rule for government means there are lots of places law enforcement can’t go to gather evidence.

Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn’t apply to the President, Congress, the NSA, the Army, or Rosenstein himself.

Then there are the natural barriers. The police can’t read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerprints. They can’t go backwards in time.

I mention this because encryption is a natural barrier. It’s their job to overcome this barrier if they can, to crack crypto and so forth. It’s not our job to do it for them.

It’s like the camera that increasingly comes with TVs for video conferencing, or the microphone on Alexa-style devices that are always recording. This suddenly creates evidence that the police want our help in gathering, such as having the camera turned on all the time, recording to disk, in case the police later get a warrant to peer backward in time at what happened in our living rooms. The “nothing is impervious” argument applies here as well, and it’s equally bogus here. By declining to record our own activities for the police, we aren’t somehow breaking some long-standing tradition.

And this is the scary part. It’s not that we are breaking some ancient tradition that there’s no place the police can’t go (with a warrant). Instead, crypto backdoors break the tradition that never before have I been forced to help the police eavesdrop on me, even before I’m a suspect, even before any crime has been committed. Sure, laws like CALEA force the phone companies to help the police against wrongdoers — but here Rosenstein is insisting I help the police against myself.

Balance between privacy and public safety

Rosenstein repeats the frequent claim that encryption upsets the balance between privacy and public safety:

Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety.

This is laughable, because technology has swung the balance alarmingly in favor of law enforcement. Far from “Going Dark” as his side claims, the problem we are confronted with is “Going Light”, where the police state monitors our every action.

You are surrounded by recording devices. If you walk down the street in town, outdoor surveillance cameras feed police facial recognition systems. If you drive, automated license plate readers can track your route. If you make a phone call or use a credit card, the police get a record of the transaction. If you stay in a hotel, they demand your ID, for law enforcement purposes.

And that’s their stuff, which is nothing compared to your stuff. You are never far from a recording device you own, such as your mobile phone, TV, Alexa/Siri/OkGoogle device, or laptop. Cars from the last few years increasingly have always-on cell connections and data recorders that record your every action (and location).

Even if you hike out into the country, when you get back, the FBI can subpoena your GPS device to track down your hidden weapons cache, or grab the photos from your camera.

And this is all offline. So much of what we do is now online. Of the photographs you own, fewer than 1% are printed out; the rest are on your computer or backed up to the cloud.

Your phone is also a GPS recorder of your exact position all the time, which, if the government wins the Carpenter case, the police can grab without a warrant. Tagging all citizens with a recording device of their position is not “balance” but the premise for a novel more dystopian than 1984.

If suspected of a crime, which would you rather the police searched? Your person, houses, papers, and physical effects? Or your mobile phone, computer, email, and online/cloud accounts?

The balance of privacy and safety has swung so far in favor of law enforcement that rather than debating whether they should have crypto backdoors, we should be debating how to add more privacy protections.

“But it’s not conclusive”

Rosenstein defends “going light” (the “Golden Age of Surveillance”) by pointing out that it’s not always enough for a conviction. Nothing secures a conviction better than a person’s own words admitting to the crime, captured by surveillance. All this other data, while copious, often fails to convince a jury beyond a reasonable doubt.
This is nonsense. Police got along well enough before the digital age, before such widespread messaging. They solved terrorist and child abduction cases just fine in the 1980s. Sure, somebody’s GPS location isn’t by itself enough — until you go there and find all the buried bodies, which leads to a conviction. “Going dark” imagines that somehow, the evidence they’ve been gathering for centuries is going away. It isn’t. It’s still here, and it matches up with even more digital evidence.
Conversely, a person’s own words are not as conclusive as you think. There’s always missing context. We quickly get back to the Richelieu “six lines” problem, where captured communications are twisted to convict people, with defense lawyers trying to untwist them.

Rosenstein’s claim may be true, that a lot of criminals will go free because the other electronic data isn’t convincing enough. But I’d need to see that claim backed up with hard studies, not thrown out for emotional impact.

Terrorists and child molesters

You can always tell the lack of seriousness of law enforcement when they bring up terrorists and child molesters.
To be fair, sometimes we do need to talk about terrorists. There are things unique to terrorism where we may need to give government explicit powers to address those unique concerns. For example, the NSA buys mobile phone 0day exploits in order to hack terrorist leaders in tribal areas. This is a good thing.
But when terrorists use encryption the same way everyone else does, it’s not a unique reason to sacrifice our freedoms to give the police extra powers. Either it’s a good idea for all crimes or for no crimes — there’s nothing particular about terrorism that makes it an exceptional crime. Dead people are dead. Any rational view of the problem relegates terrorism to being a minor problem. In the years since the 9/11 attacks, more citizens have died from their own furniture than from terrorism. According to studies, the hot water from your tap is more of a threat to you than terrorists.
Yes, government should do what it can to protect us from terrorists, but no, the threat is not so grave that it requires the imposition of a military/police state. When people use terrorism to justify their actions, it’s because they are trying to form a military/police state.
A similar argument works with child porn. Here’s the thing: the pervs aren’t exchanging child porn using the services Rosenstein wants to backdoor, like Apple’s FaceTime or Facebook’s WhatsApp. Instead, they are exchanging child porn using custom services they build themselves.
Again, I’m (mostly) on the side of the FBI. I support their idea of buying 0day exploits in order to hack the web browsers of visitors to the secret “PlayPen” site. That approach is narrow to the problem and doesn’t endanger the innocent. On the other hand, their calls for crypto backdoors endanger the innocent while doing effectively nothing to address child porn.
Terrorists and child molesters are a clichéd, non-serious excuse to appeal to our emotions to give up our rights. We should not give in to such emotions.

Definition of “backdoor”

Rosenstein claims that we shouldn’t call backdoors “backdoors”:

No one calls any of those functions [like key recovery] a “back door.”  In fact, those capabilities are marketed and sought out by many users.

He’s partly right in that we rarely refer to PGP’s key escrow feature as a “backdoor”.

But that’s because the term “backdoor” refers less to how it’s done and more to who is doing it. If I set up a recovery password with Apple, I’m the one doing it to myself, so we don’t call it a backdoor. If it’s the police, spies, hackers, or criminals, then we call it a “backdoor” — even if it’s identical technology.

Wikipedia uses the key escrow feature of the 1990s Clipper Chip as a prime example of what everyone means by “backdoor.” By “no one,” Rosenstein is including Wikipedia, which is obviously incorrect.
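
As a concrete illustration of that Clipper-style escrow idea, here is a minimal sketch (mine, using the Python cryptography library, not any vendor’s actual design): the data key is wrapped once for the owner and once for an escrow holder, and whether we call the second copy “recovery” or a “backdoor” depends entirely on who holds that escrow key.

    # Minimal key-escrow sketch (illustrative only, not any vendor's real scheme).
    # Requires the third-party cryptography package (pip install cryptography).
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    data_key = Fernet.generate_key()                    # symmetric key for the data
    ciphertext = Fernet(data_key).encrypt(b"private message")

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_for_owner = owner_key.public_key().encrypt(data_key, oaep)
    wrapped_for_escrow = escrow_key.public_key().encrypt(data_key, oaep)

    # The escrow holder never touches the owner's key, yet can read the data.
    recovered_key = escrow_key.decrypt(wrapped_for_escrow, oaep)
    print(Fernet(recovered_key).decrypt(ciphertext))    # b'private message'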

Though in truth, it’s not going to be the same technology. The needs of law enforcement are different than my personal key escrow/backup needs. In particular, there are unsolvable problems, such as a backdoor that works for the “legitimate” law enforcement in the United States but not for the “illegitimate” police states like Russia and China.

I feel for Rosenstein, because the term “backdoor” does have a pejorative connotation, which can be considered unfair. But that’s like saying the word “murder” is a pejorative term for killing people, or “torture” is a pejorative term for torture. The bad connotation exists because we don’t like government surveillance. I mean, honestly calling this feature “government surveillance feature” is likewise pejorative, and likewise exactly what it is that we are talking about.

Providers

Rosenstein focuses his arguments on “providers”, like Snapchat or Apple. But this isn’t the question.

The question is whether a “provider” like Telegram, a Russian company beyond US law, provides this feature. Or, by extension, whether individuals should be free to install whatever software they want, regardless of provider.

Telegram is a Russian company that provides end-to-end encryption. Anybody can download their software in order to communicate so that American law enforcement can’t eavesdrop. They aren’t going to put in a backdoor for the U.S. If we succeed in putting backdoors in Apple and WhatsApp, all this means is that criminals are going to install Telegram.

If, for some reason, the US is able to convince all such providers (including Telegram) to install a backdoor, it still doesn’t solve the problem, as users can just build their own end-to-end encryption app that has no provider at all. It’s like email: some people use major providers like Gmail, while others set up their own email server.
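
To illustrate how little a “provider” is actually needed, here is a minimal sketch of provider-free end-to-end encryption using the PyNaCl library (my example, not a description of Telegram or any particular app): the two endpoints exchange public keys, and nothing in the middle ever holds a decryption key.

    # Sketch: provider-free end-to-end encryption with PyNaCl (libsodium bindings).
    # Illustrative only; requires the third-party pynacl package (pip install pynacl).
    from nacl.public import PrivateKey, Box

    alice_secret = PrivateKey.generate()
    bob_secret = PrivateKey.generate()

    # Each side shares only its *public* key with the other.
    alice_to_bob = Box(alice_secret, bob_secret.public_key)
    bob_from_alice = Box(bob_secret, alice_secret.public_key)

    ciphertext = alice_to_bob.encrypt(b"meet at noon")   # nonce handled internally
    print(bob_from_alice.decrypt(ciphertext))            # b'meet at noon'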

Ultimately, this means that any law mandating “crypto backdoors” is going to target users not providers. Rosenstein tries to make a comparison with what plain-old telephone companies have to do under old laws like CALEA, but that’s not what’s happening here. Instead, for such rules to have any effect, they have to punish users for what they install, not providers.

This continues the argument I made above. Government backdoors are not something that forces Internet services to eavesdrop on us — they force us to help the government spy on ourselves.
Rosenstein tries to address this by pointing out that it’s still a win if major providers like Apple and Facebook are forced to add backdoors, because they are the most popular, and some terrorists/criminals won’t move to alternate platforms. This is false. People with good intentions, the ones unfairly targeted by a police state where police abuse is rampant, are the ones who will use the backdoored products. Those with bad intentions, who know they are guilty, will move to the safe products. Indeed, Telegram is already popular among terrorists because they believe American services are already all backdoored.
Rosenstein is essentially demanding the innocent get backdoored while the guilty don’t. This seems backwards. This is backwards.

Apple is morally weak

The reason I’m writing this post is that Rosenstein makes a few claims that cannot be ignored. One of them is how he explains why Apple responds to government insistence on weakening encryption by doing the opposite: strengthening it. He reasons this happens because:

Of course they [Apple] do. They are in the business of selling products and making money. 

We [the DoJ] use a different measure of success. We are in the business of preventing crime and saving lives. 

He swells with self-importance. His condescending tone ennobles him while debasing others. But this isn’t how things work. He’s not some white knight above the peasantry, protecting us. He’s a beat cop, a civil servant, who serves us.

A better phrasing would have been:

They are in the business of giving customers what they want.

We are in the business of giving voters what they want.

Both sides are doing the same thing: giving people what they want. Yes, voters want safety, but they also want privacy. Rosenstein imagines that he’s free to ignore our demands for privacy as long as he’s fulfilling his duty to protect us. He has explicitly rejected what people want: “we use a different measure of success.” He imagines it’s his job to tell us where the balance between privacy and safety lies. That’s not his job; that’s our job. We, the people (and our representatives), make that decision, and his job is to do what he’s told. His measure of success is how well he fulfills our wishes, not how well he satisfies his imagined criteria.

That’s why those of us on this side of the debate doubt the good intentions of people like Rosenstein. He criticizes Apple for wanting to protect our rights and freedoms, and declares that he measures success differently.

They are willing to be vile

Rosenstein makes this argument:

Companies are willing to make accommodations when required by the government. Recent media reports suggest that a major American technology company developed a tool to suppress online posts in certain geographic areas in order to embrace a foreign government’s censorship policies. 

Let me translate this for you:

Companies are willing to acquiesce to vile requests made by police-states. Therefore, they should acquiesce to our vile police-state requests.

What Rosenstein is admitting here is that his requests are those of a police-state.

Constitutional Rights

Rosenstein says:

There is no constitutional right to sell warrant-proof encryption.

Maybe. It’s something the courts will have to decide. There are many 1st, 2nd, 3rd, 4th, and 5th Amendment issues here.
The reason we have the Bill of Rights is because of the abuses of the British Government. For example, they quartered troops in our homes, as a way of punishing us, and as a way of forcing us to help in our own oppression. The troops weren’t there to defend us against the French, but to defend us against ourselves, to shoot us if we got out of line.

And that’s what crypto backdoors do. We are forced to be agents of our own oppression. The principles Rosenstein enumerates apply equally well to a wide range of additional surveillance. With little change, his speech could equally argue that the constant TV surveillance from 1984 should be made law.

Let’s go back and look at Apple. It is not some base company exploiting consumers for profit. Apple doesn’t have guns; it cannot make people buy its products. If Apple doesn’t provide customers what they want, then customers vote with their feet and go buy an Android phone. Apple isn’t providing encryption/security in order to make a profit — it’s giving customers what they want in order to stay in business.
Conversely, if we citizens don’t like what the government does, tough luck: it has the guns to enforce its edicts. We can’t easily vote with our feet and walk to another country. A “democracy” is far less democratic than capitalism. Apple is a minority, selling phones to 45% of the population, and that’s fine; the minority get the phones they want. In a democracy, where citizens vote on the issue, those 45% are screwed, as the 55% impose their unwanted will onto the remainder.

That’s why we have the Bill of Rights: to protect the 49% against abuse by the 51%. Regardless of whether the Supreme Court agrees that it’s in the current Constitution, it is the sort of right that ought to exist regardless of what the Constitution says.

Obliged to speak the truth

Here is another part of his speech that I feel cannot be ignored. We have to discuss this:

Those of us who swear to protect the rule of law have a different motivation.  We are obliged to speak the truth.

The truth is that “going dark” threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.

This is not true. Sure, he’s obliged to tell the absolute truth in court. He’s also obliged to be truthful in general about facts in his personal life, such as not lying on his tax return (the sort of thing that can get lawyers disbarred).

But he’s not obliged to tell his spouse his honest opinion about whether that new outfit makes them look fat. Likewise, Rosenstein knows his opinion on public policy doesn’t fall into this category. He can say with impunity either that global warming doesn’t exist or that it’ll cause a biblical deluge within five years. Both are factually untrue, but neither is going to get him fired.

And this particular claim is also exaggerated bunk. While everyone agrees encryption makes law enforcement’s job harder than with backdoors, nobody honestly believes it can “disable” law enforcement. While everyone agrees that encryption helps terrorists, nobody believes it can enable them to act with “impunity”.

I feel bad here. It’s a terrible thing to question your opponent’s character this way. But Rosenstein made it unavoidable when he clearly, with no ambiguity, put his integrity as Deputy Attorney General on the line behind the statement that “going dark threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.” I believe it’s a bald-faced lie, but you don’t need to take my word for it. Read his own words yourself and judge his integrity.

Conclusion

Rosenstein’s speech includes repeated references to ideas like “oath”, “honor”, and “duty”. It reminds me of Col. Jessup’s speech in the movie “A Few Good Men”.

If you’ll recall, it was a rousing speech: “you want me on that wall” and “you use words like honor as a punchline.” Of course, since he was violating his oath and sending two privates to death row in order to avoid being held accountable, it was Jessup himself who was crapping on the concepts of “honor,” “oath,” and “duty.”

And so is Rosenstein. He imagines himself on that wall, doing admittedly terrible things, justified by his duty to protect citizens. He imagines that it’s he who is honorable, while the rest of us are not, even as he utters bald-faced lies to further his own power and authority.

We activists oppose crypto backdoors not because we lack honor, or because we are criminals, or because we support terrorists and child molesters. It’s because we value privacy and fear government officials who become corrupted by power. It’s not that we fear Trump becoming a dictator; it’s that we fear bureaucrats at Rosenstein’s level becoming drunk on authority — which Rosenstein demonstrably has. His speech is a long train of corrupt ideas pursuing the same object of despotism — a despotism we oppose.

In other words, we oppose crypto backdoors because they are not a tool of law enforcement, but a tool of despotism.

“Pirate Sites Generate $111 Million In Ad Revenue a Year”

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-sites-generate-111-million-in-ad-revenue-a-year-171005/

In recent years various copyright holder groups have adopted a “follow-the-money” approach in the hope of cutting off funding to so-called pirate sites.

The Trustworthy Accountability Group (TAG) is one of the organizations that helps to facilitate these efforts. TAG coordinates an advertising-oriented Anti-Piracy Program for the advertising industry and has signed up dozens of large companies across various industries.

Today they released a new report, titled “Measuring Digital Advertising Revenue to Infringing Sites,” which shows the impact of these efforts.

The study, carried out by Ernst and Young, reveals that the top 672 piracy sites still generate plenty of revenue. A whopping $111 million per year, to be precise. But it may have been twice as much without the industry’s interventions.

“Digital ad revenue linked to infringing content was estimated at $111 million last year, the majority of which (83 percent) came from non-premium advertisers,” TAG writes.

“If the industry had not taken aggressive steps to reduce piracy, those pirate site operators would have potentially earned an additional $102-$177 million in advertising revenue, depending on the breakdown of premium and non-premium advertisers.”

Pirate revenue estimates

Taking more than $100 million away from pirate sites is pretty significant, to say the least.

It, therefore, comes as no surprise that the news is paired with positive comments from various industry insiders as well as US Congressman Adam Schiff, who co-chairs the International Creativity and Theft Prevention Caucus.

“The study recently completed by Ernst and Young on behalf of TAG shows that those efforts are bearing fruit, and that voluntary efforts by advertisers and agencies kept well over $100 million out of the pockets of pirate sites last year alone,” Schiff says.

While TAG and their partners pat themselves on the back, those who take a more critical look at the data will realize that their view is rather optimistic. There is absolutely no evidence that TAG’s efforts are responsible for the claimed millions that were kept from pirate sites.

In fact, most of these millions never ended up in the pockets of these websites to begin with.

The $102 million that pirate sites ‘didn’t get’ is simply the difference between premium and non-premium ads. In other words, the extra money these sites would have made if they had 100% premium ads, which is a purely hypothetical situation.

Long before TAG existed, pirate sites were banned by many premium advertising networks, including Google AdSense, and were mostly serving lower-tier ads.

The estimated CPM figures (earnings per 1,000 views) are rather optimistic too. TAG puts these at $2.50 for non-premium ads. We spoke to several site owners who said these were way off. Even pop-unders in premium countries make less than a dollar, we were told.

Site owners are not the only ones that have a much lower estimate. An earlier copyright industry-backed study, published by Digital Citizens Alliance (DCA), put the average CPM of these pirate site ads at $0.30, which is miles away from the $2.50 figure.

In fact, the DCA study also put the premium ads at $0.30, because these often end up as leftover inventory at pirate sites, according to experts.

“Based on MediaLink expertise and research with advertising industry members, the assumption is that where premium ads appear they are delivered programmatically by exchanges to fulfill the dregs of campaigns. As such, rates are assumed to be the same for premium and non-premium ads,” the DCA report noted.

In the TAG report, the estimate for premium ads is a bit higher, $5 per 1000 views. Video ads may be higher, but these only represent a tiny fraction of the total.
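
As a back-of-the-envelope check (my own arithmetic, using only the CPM figures quoted above), the revenue estimate scales linearly with the assumed CPM, so swapping TAG’s $2.50 non-premium rate for the DCA’s $0.30 shrinks the same traffic’s estimated earnings by roughly a factor of eight. The impression count below is a made-up placeholder; only the ratio matters.

    # Back-of-the-envelope: ad revenue = impressions / 1000 * CPM.
    # The impression count is a hypothetical placeholder; only the CPM ratio matters.
    def ad_revenue(impressions, cpm_dollars):
        return impressions / 1000 * cpm_dollars

    impressions = 10_000_000_000  # hypothetical yearly ad views across the sites
    for label, cpm in [("TAG non-premium ($2.50 CPM)", 2.50),
                       ("DCA estimate ($0.30 CPM)", 0.30)]:
        print(label, "->", "${:,.0f}".format(ad_revenue(impressions, cpm)))
    # The same traffic yields about 8.3x less revenue under the DCA assumption,
    # which is why the CPM figure drives the headline number.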

While TAG’s efforts will no doubt make a difference, it’s good to keep the caveats above in mind. Their claim that the ad industry’s anti-piracy efforts have “cut pirate ad revenue in half” is misleading, to say the least.

That doesn’t mean that all numbers released by the organization should be taken with a grain of salt. The TAG membership rates below are 100% accurate.

TAG membership fees


Perfect 10 Takes Giganews to Supreme Court, Says It’s Worse Than Megaupload

Post Syndicated from Andy original https://torrentfreak.com/perfect-10-takes-giganews-supreme-court-says-worse-megaupload-170906/

Adult publisher Perfect 10 has developed a reputation for being a serial copyright litigant.

Over the years the company targeted a number of high-profile defendants, including Google, Amazon, Mastercard, and Visa. Around two dozen of Perfect 10’s lawsuits ended in cash settlements and defaults, in the publisher’s favor.

Perhaps buoyed by this success, the company went after Usenet provider Giganews but instead of a company willing to roll over, Perfect 10 found a highly defensive and indeed aggressive opponent. The initial copyright case filed by Perfect 10 alleged that Giganews effectively sold access to Perfect 10 content but things went badly for the publisher.

In November 2014, the U.S. District Court for the Central District of California found that Giganews was not liable for the infringing activities of its users. Perfect 10 was ordered to pay Giganews $5.6m in attorney’s fees and costs. Perfect 10 lost again at the Court of Appeals for the Ninth Circuit.

As a result of these failed actions, Giganews is owed millions by Perfect 10, but the publisher has thus far refused to pay up. That resulted in Giganews filing a $20m lawsuit, accusing Perfect 10 and its president, Dr. Norman Zada, of fraud.

With all this litigation boiling around in the background and Perfect 10 already bankrupt as a result, one might think the story would be near to a conclusion. That doesn’t seem to be the case. In a fresh announcement, Perfect 10 says it has now appealed its case to the US Supreme Court.

“This is an extraordinarily important case, because for the first time, an appellate court has allowed defendants to copy and sell movies, songs, images, and other copyrighted works, without permission or payment to copyright holders,” says Zada.

“In this particular case, evidence was presented that defendants were copying and selling access to approximately 25,000 terabytes of unlicensed movies, songs, images, software, and magazines.”

Referencing an Amicus brief previously filed by the RIAA which described Giganews as “blatant copyright pirates,” Perfect 10 accuses the Ninth Circuit of allowing Giganews to copy and sell trillions of dollars of other people’s intellectual property “because their copying and selling was done in an automated fashion using a computer.”

Noting that “everything is done via computer” these days and with an undertone that the ruling encouraged others to infringe, Perfect 10 says there are now 88 companies similar to Giganews which rely on the automation defense to commit infringement – even involving content owned by people in the US Government.

“These exploiters of other people’s property are fearless. They are copying and selling access to pirated versions of pretty much every movie ever made, including films co-produced by Treasury Secretary Steven Mnuchin,” Zada says.

“You would think the justice department would do something to protect the viability of this nation’s movie and recording studios, as unfettered piracy harms jobs and tax revenues, but they have done nothing.”

But Zada doesn’t stop at blaming Usenet services, the California District Court, the Ninth Circuit, and the United States Department of Justice for his problems – Congress is to blame too.

“Copyright holders have nowhere to turn other than the Federal courts, whose judges are ridiculously overworked. For years, Congress has failed to provide the Federal courts with adequate funding. As a result, judges can make mistakes,” he adds.

For Zada, those mistakes are particularly notable, especially since at least one other super high-profile company was shut down in the most aggressive manner possible for allegedly being involved in less piracy than Giganews.

Pointing to the now-infamous Megaupload case, Perfect 10 notes that the Department of Justice completely shut that operation down, filing charges of criminal copyright infringement against Kim Dotcom and seizing $175 million “for selling access to movies and songs which they did not own.”

“Perfect 10 provided evidence that [Giganews] offered more than 200 times as many full length movies as did megaupload.com. But our evidence fell on deaf ears,” Zada complains.

In contrast, Perfect 10 adds, a California District Court found that Giganews had done nothing wrong, allowed it to continue copying and selling access to Perfect 10’s content, and awarded the Usenet provider $5.63m in attorneys fees.

“Prior to this case, no court had ever awarded fees to an alleged infringer, unless they were found to either own the copyrights at issue, or established a fair use defense. Neither was the case here,” Zada adds.

While Perfect 10 has filed a petition with the Supreme Court, the odds of being granted a review are particularly small. Only time will tell how this case will end, but it seems unlikely that the adult publisher will enjoy a happy ending, one in which it doesn’t have to pay Giganews millions of dollars in attorney’s fees.


Journalists Generally Do Not Use Secure Communication

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/journalists_gen.html

This should come as no surprise:

Alas, our findings suggest that secure communications haven’t yet attracted mass adoption among journalists. We looked at 2,515 Washington journalists with permanent credentials to cover Congress, and we found only 2.5 percent of them solicit end-to-end encrypted communication via their Twitter bios. That’s just 62 out of all the broadcast, newspaper, wire service, and digital reporters. Just 28 list a way to reach them via Signal or another secure messaging app. Only 22 provide a PGP public key, a method that allows sources to send encrypted messages. A paltry seven advertise a secure email address. In an era when anything that can be hacked will be and when the president has declared outright war on the media, this should serve as a frightening wake-up call.

[…]

When journalists don’t step up, sources with sensitive information face the burden of using riskier modes of communication to initiate contact — and possibly conduct all of their exchanges — with reporters. It increases their chances of getting caught, putting them in danger of losing their job or facing prosecution. It’s burden enough to make them think twice about whistleblowing.

I forgive them for not using secure e-mail. It’s hard to use and confusing. But secure messaging is easy.

Court Won’t Drop Case Against Alleged KickassTorrents Owner

Post Syndicated from Ernesto original https://torrentfreak.com/court-wont-drop-case-against-alleged-kickasstorrents-owner-170804/

Last summer, Polish law enforcement officers arrested Artem Vaulin, the alleged founder of KickassTorrents.

Polish authorities acted on a criminal complaint from the US Government, which accused Vaulin of criminal copyright infringement and money laundering.

While Vaulin is still awaiting the final decision in his extradition process in Poland, his US counsel tried to have the entire case thrown out with a motion to dismiss submitted to the Illinois District Court late last year.

One of the fundamental flaws of the case, according to the defense, is that torrent files themselves are not copyrighted content. In addition, they argued that any secondary copyright infringement claims would fail as these are non-existent under criminal law.

After a series of hearings and a long wait afterwards, US District Judge John Z. Lee has now issued his verdict (pdf).

In a 28-page memorandum and order, the motion to dismiss was denied on various grounds.

The court doesn’t contest that torrent files themselves are not protected content under copyright law. However, this argument ignores the fact that the files are used to download copyrighted material, the order reads.

“This argument, however, misunderstands the indictment. The indictment is not concerned with the mere downloading or distribution of torrent files,” Judge Lee writes.

“Granted, the indictment describes these files and charges Vaulin with operating a website dedicated to hosting and distributing them. But the protected content alleged to have been infringed in the indictment is a number of movies and other copyright protected media that users of Vaulin’s network purportedly downloaded and distributed…,” he adds.

In addition, the defense’s argument that secondary copyright infringement claims are non-existent under criminal law doesn’t hold either, according to the Judge’s decision.

Vaulin’s defense noted that the Government’s theory could expose other search engines, such as Google, to criminal liability. While this is theoretically possible, the court sees distinct differences and doesn’t aim to rule on all search engines in general.

“For present purposes, though, the Court need not decide whether and when a search engine operator might engage in conduct sufficient to constitute aiding and abetting criminal copyright infringement. The issue here is whether 18 U.S.C. § 2 applies to 17 U.S.C. § 506. The Court is persuaded that it does,” Judge Lee writes.

Based on these and other conclusions, the motion to dismiss was denied. This means that the case will move forward. The next step will be to see how the Polish court rules on the extradition request.

Vaulin’s lead counsel Ira Rothken is disappointed with the outcome. He stresses that while courts commonly construe indictments in a light most favorable to the government, it went too far in this case.

“Currently a person merely ‘making available’ a file on a network in California wouldn’t even be committing a civil copyright infringement under the ruling in Napster but under today’s ruling that same person doing it in Illinois could be criminally prosecuted by the United States,” Rothken informs TorrentFreak.

“If federal judges disagree on the state of the federal copyright law then people shouldn’t be criminally prosecuted absent clarification by Congress,” he adds.

The defense team is still considering the best options for appeal, and whether they want to go down that road. However, Rothken hopes that the Seventh Circuit Court of Appeals will address the issue in the future.

“We hope one day that the Seventh Circuit Court of Appeals will undo this ruling and the chilling effect it will have on internet search engines, user generated content sites, and millions of netizens globally,” Rothken notes.

For now, however, Vaulin’s legal team will likely shift its focus to preventing his extradition to the United States.


More on the NSA’s Use of Traffic Shaping

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/more_on_the_nsa_2.html

“Traffic shaping” — the practice of tricking data into flowing through a particular route on the Internet so it can be more easily surveilled — is an NSA technique that has gotten much less attention than it deserves. It’s a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There’s a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term “traffic shaping” to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen’s international communications from inside Yemen itself, the agency might try to “shape” the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic — emails and chat messages sent between Americans, say — to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn’t authenticated.
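
To make that point concrete, here is a hypothetical sketch (mine, not from the paper) of longest-prefix-match forwarding with no validation of who may announce which prefix: whoever advertises the most specific route wins, which is exactly the property traffic shaping can exploit. The prefixes and labels below are invented.

    # Sketch: longest-prefix-match routing with unauthenticated announcements.
    # Prefixes (from the TEST-NET-2 documentation range) and labels are invented.
    import ipaddress

    routes = [
        ("198.51.100.0/24", "legitimate transit path"),
        ("198.51.100.128/25", "more-specific, surveillance-friendly path"),
    ]

    def best_route(dest_ip):
        dest = ipaddress.ip_address(dest_ip)
        matches = [(ipaddress.ip_network(prefix), label)
                   for prefix, label in routes
                   if dest in ipaddress.ip_network(prefix)]
        # The longest (most specific) matching prefix wins; nothing here checks
        # whether the announcer was entitled to advertise that prefix.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(best_route("198.51.100.200"))  # more-specific, surveillance-friendly path
    print(best_route("198.51.100.10"))   # legitimate transit path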

US Opposes Kim Dotcom’s Supreme Court Petition Over Seized Millions

Post Syndicated from Ernesto original https://torrentfreak.com/us-opposes-kim-dotcoms-supreme-court-petition-over-seized-millions-170613/

Following the 2012 raid on Megaupload and Kim Dotcom, U.S. and New Zealand authorities seized millions of dollars in cash and other property.

Claiming the assets were obtained through copyright and money laundering crimes, the U.S. government launched a separate civil action in which it asked the court to forfeit the bank accounts, cars, and other seized possessions of the Megaupload defendants.

The U.S. branded Dotcom and his colleagues as “fugitives” and won their case. Dotcom’s legal team quickly appealed this verdict, but lost once more at the Fourth Circuit appeals court.

However, Dotcom didn’t give up and petitioned the US Supreme Court to hear the case. Together with the other defendants, he wants the Supreme Court to overturn the “fugitive disentitlement” ruling and the forfeiture of his assets.

The crux of the case is whether or not the District Court’s order to forfeit an estimated $67 million in assets was right. The defense argues that Dotcom and the other Megaupload defendants were wrongfully labeled as fugitives by the Department of Justice.

“If left undisturbed, the Fourth Circuit’s decision enables the Government to obtain civil forfeiture of every penny of a foreign citizen’s foreign assets based on unproven allegations of the most novel, dubious United States crimes,” Dotcom’s legal team wrote.

The United States Government disagrees with this assessment. In their opposition brief (pdf), submitted late last week and picked up by ARS, the Department of Justice asks the Supreme Court not to take on the case.

According to the US, the decision to label Dotcom and his colleagues as fugitives is how Congress intended the relevant section of the law to work. In addition, the current rulings are not incompatible with previous court decisions in similar cases.

“Petitioners also seek review of the court of appeals’ holding that they qualify as ‘fugitives’ under the federal fugitive-disentitlement statute […] because they declined to enter the United States with the specific intent to avoid prosecution,” DoJ writes in its brief.

“That contention does not warrant review. The court of appeals correctly construed Section 2466 in light of its text and purpose. Its holding applying the statute to the facts here does not conflict with any decision of another circuit,” the brief adds.

The full opposition brief responds in detail to the petition of Dotcom and his colleagues, with the US ultimately concluding that the Supreme Court should deny the request.

Dotcom and his legal team have previously stated that they need more resources to mount a proper defense against the criminal complaint. The case has been ongoing for more than half a decade and is being fought in several courts, which has proven to be rather expensive.

Whether the Supreme Court accepts or denies the case will likely be decided in the weeks to come. Until then, the waiting continues.


Surveillance Intermediaries

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/surveillance_in_2.html

Interesting law-journal article: “Surveillance Intermediaries,” by Alan Z. Rozenshtein.

Abstract: Apple’s 2016 fight against a court order commanding it to help the FBI unlock the iPhone of one of the San Bernardino terrorists exemplifies how central the question of regulating government surveillance has become in American politics and law. But scholarly attempts to answer this question have suffered from a serious omission: scholars have ignored how government surveillance is checked by “surveillance intermediaries,” the companies like Apple, Google, and Facebook that dominate digital communications and data storage, and on whose cooperation government surveillance relies. This Article fills this gap in the scholarly literature, providing the first comprehensive analysis of how surveillance intermediaries constrain the surveillance executive. In so doing, it enhances our conceptual understanding of, and thus our ability to improve, the institutional design of government surveillance.

Surveillance intermediaries have the financial and ideological incentives to resist government requests for user data. Their techniques of resistance are: proceduralism and litigiousness that reject voluntary cooperation in favor of minimal compliance and aggressive litigation; technological unilateralism that designs products and services to make surveillance harder; and policy mobilization that rallies legislative and public opinion to limit surveillance. Surveillance intermediaries also enhance the “surveillance separation of powers”; they make the surveillance executive more subject to inter-branch constraints from Congress and the courts, and to intra-branch constraints from foreign-relations and economics agencies as well as the surveillance executive’s own surveillance-limiting components.

The normative implications of this descriptive account are important and cross-cutting. Surveillance intermediaries can both improve and worsen the “surveillance frontier”: the set of tradeoffs between public safety, privacy, and economic growth from which we choose surveillance policy. And while intermediaries enhance surveillance self-government when they mobilize public opinion and strengthen the surveillance separation of powers, they undermine it when their unilateral technological changes prevent the government from exercising its lawful surveillance authorities.

Extending the Airplane Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/extending_the_a.html

The Department of Homeland Security is rumored to be considering extending the current travel ban on large electronics for Middle Eastern flights to European ones as well. The likely reaction of airlines will be to implement new traveler programs, effectively allowing wealthier and more frequent fliers to bring their computers with them. This will only exacerbate the divide between the haves and the have-nots — all without making us any safer.

In March, both the United States and the United Kingdom required that passengers from 10 Muslim countries give up their laptop computers and larger tablets, and put them in checked baggage. The new measure was based on reports that terrorists would try to smuggle bombs onto planes concealed in these larger electronic devices.

The security measure made no sense for two reasons. First, moving these computers into the baggage holds doesn’t keep them off planes. Yes, it is easier to detonate a bomb that’s in your hands than to remotely trigger it in the cargo hold. But it’s also more effective to screen laptops at security checkpoints than it is to place them in checked baggage. TSA already does this kind of screening randomly and occasionally: making passengers turn laptops on to ensure that they’re functional computers and not just bomb-filled cases, and running chemical tests on their surface to detect explosive material.

And, second, banning laptops on selected flights just forces terrorists to buy more roundabout itineraries. It doesn’t take much creativity to fly Doha-Amsterdam-New York instead of direct. Adding Amsterdam to the list of affected airports makes the terrorist add yet another itinerary change; it doesn’t remove the threat.

Which brings up another question: If this is truly a threat, why aren’t domestic flights included in this ban? Remember that anyone boarding a plane to the United States from these Muslim countries has already received a visa to enter the country. This isn’t perfect security — the infamous underwear bomber had a visa, after all — but anyone who could detonate a laptop bomb on his international flight could do it on his domestic connection.

I don’t have access to classified intelligence, and I can’t comment on whether explosive-filled laptops are truly a threat. But, if they are, TSA can set up additional security screenings at the gates of US-bound flights worldwide and screen every laptop coming onto the plane. It wouldn’t be the first time we’ve had additional security screening at the gate. And they should require all laptops to go through this screening, prohibiting them from being stashed in checked baggage.

This measure is nothing more than security theater against what appears to be a movie-plot threat.

Banishing laptops to the cargo holds brings with it a host of other threats. Passengers run the risk of their electronics being stolen from their checked baggage — something that has happened in the past. And, depending on the country, passengers also have to worry about border control officials intercepting checked laptops and making copies of what’s on their hard drives.

Safety is another concern. We’re already worried about large lithium-ion batteries catching fire in airplane baggage holds; adding a few hundred of these devices will considerably exacerbate the risk. Both FedEx and UPS no longer accept bulk shipments of these batteries after two jets crashed in 2010 and 2011 due to combustion.

Of course, passengers will rebel against this rule. Having access to a computer on these long transatlantic flights is a must for many travelers, especially the high-revenue business-class travelers. They also won’t accept the delays and confusion this rule will cause as it’s rolled out. Unhappy passengers fly less, or fly other routes on other airlines without these restrictions.

I don’t know how many passengers are choosing to fly to the Middle East via Toronto to avoid the current laptop ban, but I suspect there may be some. If Europe is included in the new ban, many more may consider adding Canada to their itineraries, as well as choosing European hubs that remain unaffected.

As passengers voice their disapproval with their wallets, airlines will rebel. Already Emirates has a program to loan laptops to their premium travelers. I can imagine US airlines doing the same, although probably for an extra fee. We might learn how to make this work: keeping our data in the cloud or on portable memory sticks and using unfamiliar computers for the length of the flight.

A more likely response will be comparable to what happened after the US increased passenger screening post-9/11. In the months and years that followed, we saw different ways for high-revenue travelers to avoid the lines: faster first-class lanes, and then the extra-cost trusted traveler programs that allow people to bypass the long lines, keep their shoes on their feet and leave their laptops and liquids in their bags. It’s a bad security idea, but it keeps both frequent fliers and airlines happy. It would be just another step to allow these people to keep their electronics with them on their flight.

The problem with this response is that it solves the problem for frequent fliers, while leaving everyone else to suffer. This is already the case; those of us enrolled in a trusted traveler program forget what it’s like to go through “normal” security screening. And since frequent fliers — likely to be more wealthy — no longer see the problem, they don’t have any incentive to fix it.

Dividing security checks into haves and have-nots is bad social policy, and we should actively fight any expansion of it. If the TSA implements this security procedure, it should implement it for every flight. And there should be no exceptions. Force every politically connected flier, from members of Congress to the lobbyists that influence them, to do without their laptops on planes. Let the TSA explain to them why they can’t work on their flights to and from D.C.

This essay previously appeared on CNN.com.

EDITED TO ADD: US officials are backing down.

SS7ware’s insights from the MVNOs World Congress

Post Syndicated from Yate Team original https://blog.yate.ro/2017/05/08/ss7ware-insights-from-the-mvnos-world-congress/

We just got back from the MVNOs World Congress in Nice, where we made quite a good impression with our YateHSS/HLR and YateUCN solutions for MVNOs. The “not so shocking” conclusion we came to was that our public pricing policy impacts the MVNO market at its core.

FBI’s Comey dangerous definition of "valid" journalism

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/fbis-comey-dangerous-definition-of.html

The First Amendment, the “freedom of speech” one, does not mention journalists. When it says “freedom of the press” it means the physical printing press. Yes, that does include newspapers, but it also includes anybody else publishing things, such as the famous agitprop pamphlets published by James Otis, John Dickinson, and Thomas Paine. There was no journalistic value to Thomas Paine’s Common Sense. The pamphlet argued for abolishing the monarchy and for American independence.

Today, in testimony before Congress, FBI Director James Comey came out in support of journalism, pointing out that the FBI would not prosecute journalists doing their jobs. But he then modified his statement, describing “valid” journalists as those who, in possession of leaks, would first check with the government to avoid publishing anything that would damage national security. It’s a power the government has abused in the past to delay or censor leaks. It’s specifically why Edward Snowden contacted Glenn Greenwald and Laura Poitras — he wanted journalists who would not kowtow to the government on publishing the leaks.

Comey's testimony today was in regard to prosecuting Assange and Wikileaks. Under the FBI's official “journalist” classification scheme, Wikileaks are not real journalists, but instead publish “intelligence porn” and are hostile to America's interests.

To be fair, there may be good reasons to prosecute Assange. Publishing leaks is one thing, but the suspicion with Wikileaks is that they do more, that they actively help getting the leaks in the first place. The original leaks that started Wikileaks may have come from hacks by Assange himself. Assange may have helped Manning grab the diplomatic cables. Wikileaks may have been involved in hacking the DNC and Podesta emails, more than simply receiving and publishing the information.

If that’s the case, then the US government would have good reason to prosecute Wikileaks.

But that's not what Comey said today. Instead, Comey referred only to Wikileaks' constitutionally protected publishing activities, and how, since they didn't fit his definition of “journalism”, they were open to prosecution. This is fundamentally wrong, and a violation of both the spirit and the letter of the First Amendment. The FBI should not have a definition of “journalism” it thinks is valid. Yes, Assange is an anti-American douchebag. Being an apologist for Putin's Russia disproves his claim of being a neutral journalist targeting the corrupt and powerful. But these activities are specifically protected by the Constitution.

If this were 1776, Comey would of course be going after Thomas Paine, for publishing “revolution porn”, and not being a real journalist.

Encryption Policy and Freedom of the Press

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/encryption_poli.html

Interesting law journal article: “Encryption and the Press Clause,” by D. Victoria Baranetsky.

Abstract: Almost twenty years ago, a hostile debate over whether government could regulate encryption — later named the Crypto Wars — seized the country. At the center of this debate stirred one simple question: is encryption protected speech? This issue touched all branches of government, percolating from Congress to the President and eventually to the federal courts. In a waterfall of cases, several United States Courts of Appeals appeared to reach a consensus that encryption was protected speech under the First Amendment, and with that the Crypto Wars appeared to be over, until now.

Nearly twenty years later, the Crypto Wars have returned. Following recent mass shootings, law enforcement has once again questioned the legal protection for encryption and tried to implement “backdoor” techniques to access messages sent over encrypted channels. In the case Apple v. FBI, the agency tried to compel Apple to grant access to the iPhone of a San Bernardino shooter. The case was never decided, but the legal arguments briefed before the court were essentially the same as they were two decades prior. Apple and amici supporting the company argued that encryption was protected speech.

While these arguments remain convincing, circumstances have changed in ways that should be reflected in the legal doctrines that lawyers use. Unlike twenty years ago, today surveillance is ubiquitous, and the need for encryption is no longer felt by a select few. Encryption has become necessary for even the most basic exchange of information, given that most Americans share “nearly every aspect of their lives – from the mundane to the intimate” over the Internet, as stated in a recent Supreme Court opinion.

Given these developments, lawyers might consider a new justification under the Press Clause. In addition to the many doctrinal concerns that exist with protection under the Speech Clause, the Press Clause is normatively and descriptively more accurate at protecting encryption as a tool for secure communication without fear of government surveillance. This Article outlines that framework by examining the historical and theoretical transformation of the Press Clause since its inception.

Congress Removes FCC Privacy Protections on Your Internet Usage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/congress_remove.html

Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T, and Verizon collected all of your browsing history and sold it on to the highest bidder. That’s what will probably happen if Congress has its way.

This week, lawmakers voted to allow Internet service providers to violate your privacy for their own profit. Not only have they voted to repeal a rule that protects your privacy, they are also trying to make it illegal for the Federal Communications Commission to enact other rules to protect your privacy online.

That this is not provoking greater outcry illustrates how much we’ve ceded any willingness to shape our technological future to for-profit companies and are allowing them to do it for us.

There are a lot of reasons to be worried about this. Because your Internet service provider controls your connection to the Internet, it is in a position to see everything you do on the Internet. Unlike a search engine or social networking platform or news site, you can’t easily switch to a competitor. And there’s not a lot of competition in the market, either. If you have a choice between two high-speed providers in the US, consider yourself lucky.

What can telecom companies do with this newly granted power to spy on everything you’re doing? Of course they can sell your data to marketers — and the inevitable criminals and foreign governments who also line up to buy it. But they can do more creepy things as well.

They can snoop through your traffic and insert their own ads. They can deploy systems that remove encryption so they can better eavesdrop. They can redirect your searches to other sites. They can install surveillance software on your computers and phones. None of these are hypothetical.

They're all things Internet service providers have done before, and they are some of the reasons the FCC tried to protect your privacy in the first place. And now they'll be able to do all of these things in secret, without your knowledge or consent. And, of course, governments worldwide will have access to these powers. And all of that data will be at risk of hacking, whether by criminals or by other governments.

Telecom companies have argued that other Internet players already have these creepy powers — although they didn’t use the word “creepy” — so why should they not have them as well? It’s a valid point.

Surveillance is already the business model of the Internet, and literally hundreds of companies spy on your Internet activity against your interests and for their own profit.

Your e-mail provider already knows everything you write to your family, friends, and colleagues. Google already knows our hopes, fears, and interests, because that’s what we search for.

Your cellular provider already tracks your physical location at all times: it knows where you live, where you work, when you go to sleep at night, when you wake up in the morning, and — because everyone has a smartphone — who you spend time with and who you sleep with.

And some of the things these companies do with that power are no less creepy. Facebook has run experiments in manipulating your mood by changing what you see on your news feed. Uber used its ride data to identify one-night stands. Even Sony once installed spyware on customers' computers to try to detect whether they copied music files.

Aside from spying for profit, companies can spy for other purposes. Uber has already considered using data it collects to intimidate a journalist. Imagine what an Internet service provider can do with the data it collects: against politicians, against the media, against rivals.

Of course the telecom companies want a piece of the surveillance capitalism pie. Despite dwindling revenues, increasing use of ad blockers, and increases in click fraud, violating our privacy is still a profitable business — especially if it's done in secret.

The bigger question is: why do we allow for-profit corporations to create our technological future in ways that are optimized for their profits and anathema to our own interests?

When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.

Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful.

Today, technology is changing the fabric of our society faster than at any other time in history. We have big questions that we need to tackle: not just privacy, but questions of freedom, fairness, and liberty. Algorithms are making decisions about policing and healthcare. Driverless vehicles are making decisions about traffic and safety. Warfare is increasingly being fought remotely and autonomously. Censorship is on the rise globally. Propaganda is being promulgated more efficiently than ever. These problems won't go away. If anything, the Internet of Things and the computerization of every aspect of our lives will make them worse.

In today’s political climate, it seems impossible that Congress would legislate these things to our benefit. Right now, regulatory agencies such as the FTC and FCC are our best hope to protect our privacy and security against rampant corporate power. That Congress has decided to reduce that power leaves us at enormous risk.

It’s too late to do anything about this bill — Trump will certainly sign it — but we need to be alert to future bills that reduce our privacy and security.

This post previously appeared on the Guardian.

EDITED TO ADD: Former FCC Commissioner Tom Wheeler wrote a good op-ed on the subject. And here’s an essay laying out what this all means to the average Internet user.

VPN Searches Soar as Congress Votes to Repeal Broadband Privacy Rules

Post Syndicated from Andy original https://torrentfreak.com/vpn-searches-soar-as-congress-votes-to-repeal-broadband-privacy-rules-170329/

In a blow to privacy advocates across the United States, the House of Representatives voted Tuesday to grant Internet service providers permission to sell subscribers’ browsing histories to third parties.

The bill repeals broadband privacy rules adopted last year by the Federal Communications Commission under the Obama administration, which required ISPs to obtain consumer consent before using their data for advertising or marketing purposes.

The House of Representatives voted 215-205 in favor of overturning the regulations, after the Senate voted to revoke the rules last week. President Donald Trump's signature is needed before the repeal can become law, but with the White House giving its full support, that's a given.

“The Administration strongly supports House passage of S.J.Res. 34, which would nullify the Federal Communications Commission’s final rule titled ‘Protecting the Privacy of Customers of Broadband and Other Telecommunication Services’,” the White House said in a statement yesterday.

“If S.J.Res. 34 were presented to the President, his advisors would recommend that he sign the bill into law.”

If that happens, the country's Internet service providers will be freed up to compete in the online advertising market with platform giants such as Google and Facebook. Of course, that will come at the expense of subscribers' privacy: their every browsing move online can be subjected to some level of scrutiny.

While supporters say that scrapping the regulations will mean that all Internet companies will operate on a level playing field when it comes to privacy protection, critics say that ISPs should be held to a higher level of accountability.

Whereas consumers have some choice over which information they share with individual websites, the browsing history visible to an ISP is total, potentially exposing sensitive matters concerning health, finances, or even sexual preferences.

With this in mind, it’s no surprise that US Internet users are beginning to realize that everything they do online could soon be exposed to third-parties intent on invading their privacy in the interests of commerce. Predictably, questions are being raised over what can be done to mitigate the threat.

Aside from cutting the cord entirely, there's only one practical way to hinder ISPs, and that's through the use of some form of encryption. Importantly, visitors to basic HTTP websites will have no browsing protection whatsoever. Those using HTTPS can assume that although ISPs will still know which domains they've visited, the content exchanged will be cloaked.
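
As a rough illustration of that difference, here is a minimal sketch, in Python, of what an on-path observer such as an ISP can see for a given URL. The example URL and the output format are invented for illustration, and the model is deliberately simplified: it assumes plain HTTP exposes the full request and content, while HTTPS leaks only the hostname (through DNS lookups and the TLS SNI field) and keeps the path, query string, and page content encrypted.

```python
from urllib.parse import urlparse

def visible_to_isp(url: str) -> dict:
    """Rough model of what an on-path observer (e.g. an ISP) sees.

    Simplifications: HTTPS still reveals the hostname via DNS and SNI,
    and an ISP can often infer hostnames from IP addresses anyway; the
    point is only that the path, query, and content stay encrypted.
    """
    parts = urlparse(url)
    if parts.scheme == "http":
        return {"hostname": parts.hostname, "path": parts.path,
                "query": parts.query, "content": "visible"}
    if parts.scheme == "https":
        return {"hostname": parts.hostname, "path": None,
                "query": None, "content": "encrypted"}
    raise ValueError("expected an http or https URL")

print(visible_to_isp("http://example.com/health/condition?q=symptoms"))
print(visible_to_isp("https://example.com/health/condition?q=symptoms"))
```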

Of course, for those looking for a more workable solution, VPNs – Virtual Private Networks – can provide a much greater level of encrypted protection, especially among providers who promise to keep no logs.

As a result, various providers, including blackVPN, ExpressVPN, LiquidVPN, StrongVPN and Torguard, have weighed in on the debate via social media. NordVPN have also spoken out against the bill in the press, and Private Internet Access even took out a full page ad in the New York Times this week.

It’s now becoming clear that while it was once a somewhat niche activity, VPN use could now be about to hit the mainstream.

Taking a look at Google Trends results for the search term ‘VPN’, we can see that interest across the United States is now double what it was back in 2012. The significant surge to the right of the chart is likely attributable to the past few weeks of debate surrounding the repeal of broadband privacy rules.

While most VPN providers have been campaigning against the changes, there can be no doubt that the signing of the bill into law will be extremely good for business. As seen from the above, record numbers of people are learning about VPNs and there’s even encouragement coming in from people at the very top of Internet commerce.

Following the vote yesterday, Twitter general counsel Vijaya Gadde took to her company's platform to suggest that citizens should take steps to protect their privacy.

Her tweet, which was later attributed to her own opinion and not company policy, was retweeted by Twitter Chief Executive Jack Dorsey.

It will be interesting to see how the new rules will affect VPN uptake longer term when the fuss around the debate this month has died down. Nevertheless, there seems little doubt that VPN use will rise to some extent and that could be bad news for copyright holders seeking to enforce their rights online.

In addition to stopping ISPs from spying on users’ browsing histories, a good VPN also prevents users being monitored online when using BitTorrent. A further handy side-effect is that they also render site-blocking efforts useless.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Music Industry Wants Piracy Filters, No Takedown Whack-a-Mole

Post Syndicated from Ernesto original https://torrentfreak.com/music-industry-wants-piracy-filters-no-takedown-whack-a-mole-170222/

Signed into law nearly twenty years ago, the DMCA is one of the best known pieces of Internet related legislation.

The law provides a safe harbor for Internet services, shielding them from copyright infringement liability as long as they process takedown notices and deal with repeat infringers.

In recent years, however, various parties have complained about shortcomings and abuse of the system. On the one hand, rightsholders believe that the law doesn’t do enough to protect creators, while the opposing side warns of increased censorship and abuse.

To address these concerns, the U.S. Copyright Office is currently running an extended public consultation.

This week a new round of comments was submitted, including a detailed response from a coalition of music industry groups such as the RIAA, the National Music Publishers' Association, and SoundExchange. When it comes to their views of the DMCA, the music groups are very clear: it's failing.

The music groups note that they are currently required to police the entire Internet in search of infringing links and files, which they then have to take down one at a time. This doesn’t work, they argue.

They say that the present situation forces rightsholders to participate in a never-ending whack-a-mole game which doesn’t fix the underlying problem. Instead, it results in a “frustrating, burdensome and ultimately ineffective takedown process.”

“…as numerous copyright owners point out in their comments, the notice and takedown system as currently configured results in an endless game of whack-a-mole, with infringing content that is removed from a site one moment reposted to the same site and other sites moments later, to be repeated ad infinitum.”

Instead of leaving all the work up to copyright holders, the music groups want Internet services to filter out and block infringing content proactively, with the use of automated hash-filtering tools, for example.

“One possible solution to this problem would be to require that, once a service provider receives a takedown notice with respect to a given work, the service provider use automated content identification technology to prevent the same work from being uploaded in the future,” the groups write.

“Automated content identification technologies are one important type of standard technical measure that should be adopted across the industry, and at a minimum by service providers who give the public access to large amounts of works uploaded by users.”

These anti-piracy filters are already in use by some companies and are relatively cheap to implement, even for smaller services, the music groups note.
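
To make the hash-filtering idea concrete, here is a minimal sketch assuming the simplest possible design: the service stores a cryptographic digest of every file it has removed in response to a takedown notice and refuses future uploads with a matching digest. The digest set and function names below are hypothetical; real content-identification systems rely on perceptual audio or video fingerprints rather than exact hashes, so they can also match re-encoded copies.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of works already taken down.
# (The entry below is just the digest of b"test", used as a placeholder.)
TAKEN_DOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_takedown(data: bytes) -> None:
    """After processing a notice, remember the removed work's digest."""
    TAKEN_DOWN_HASHES.add(sha256_of(data))

def allow_upload(data: bytes) -> bool:
    """Reject an upload whose exact bytes match a previously noticed work."""
    return sha256_of(data) not in TAKEN_DOWN_HASHES
```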

The whack-a-mole problem doesn’t only apply to hosting providers but also to search engines, the music groups complain.

While companies such as Google remove links to infringing material upon request, these links often reappear under a different URL. At the same time, pirate sites often appear before legitimate services in search results. A fix for this problem would be to stop indexing known pirate sites completely.

“One possible solution would be to require search engines to de-index structurally infringing sites that are the subject of a large number of takedown notices,” the groups recommend.
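
As a sketch of what such a de-indexing rule could look like, assuming a purely notice-count-based policy with an arbitrary cutoff (the music groups' filing does not specify a number), a search engine might tally accepted notices per domain and drop domains that cross the threshold. A production policy would at least have to account for disputed or inaccurate notices, which this toy version ignores.

```python
from collections import Counter
from urllib.parse import urlparse

DEINDEX_THRESHOLD = 10_000  # hypothetical cutoff, not taken from the filing

notice_counts = Counter()

def record_takedown_notice(infringing_url: str) -> None:
    """Tally accepted takedown notices per reported domain."""
    domain = urlparse(infringing_url).hostname
    if domain:
        notice_counts[domain] += 1

def domains_to_deindex():
    """Domains whose notice count has crossed the assumed threshold."""
    return {d for d, n in notice_counts.items() if n >= DEINDEX_THRESHOLD}
```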

Ideally, they want copyright holders and Internet services to reach a voluntary agreement on how to filter pirated content. This could be similar to YouTube’s Content-ID system, or the hash filtering mechanisms Dropbox and Google Drive employ, for example.

If service providers are not interested in helping out, however, the music industry says new legislation might be needed to give them a push.

“The Music Community stands ready to work with service providers and other copyright owners on the development and implementation of standard technical measures and voluntary measures. However, to the extent such measures are not forthcoming, legislative solutions will be necessary to restore the balance Congress intended,” the recommendation reads.

Interestingly, this collaborative stance doesn’t appear to apply to all parties. File-hosting service 4Shared previously informed TorrentFreak that several prominent music groups have shown little interest in their voluntary piracy fingerprint tool.

The notion of piracy filters isn’t new. A few months ago the European Commission released its proposal to modernize the EU’s copyright law, under which online services will also be required to install mandatory piracy filters.

Whether the U.S. Government will follow suit has yet to be seen. In any case, rightsholders are likely to keep lobbying for change until they see significant improvements.

The full submission of the music groups is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder, or at least a DVR like yours, knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn't realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices, and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as in smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and, like everything else we've just listed, it's on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and, increasingly, in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn't to record the temperature; it's to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.
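
As a toy sketch of that sense-think-act loop, consider a smart thermostat: it reads a temperature sensor, decides in its brain whether the room is too cold or too warm, and drives the furnace as its actuator. The setpoint, hysteresis band, and simulated sensor below are invented for illustration.

```python
import random
import time

TARGET_TEMP_C = 20.0  # illustrative setpoint
HYSTERESIS_C = 0.5    # dead band so the furnace doesn't toggle rapidly

def read_temperature() -> float:
    """Sensor: a real device would query hardware (or the cloud)."""
    return 18.0 + random.random() * 4.0

def set_furnace(on: bool) -> None:
    """Actuator: a real device would switch a relay."""
    print("furnace", "ON" if on else "OFF")

def control_loop(iterations: int = 5) -> None:
    furnace_on = False
    for _ in range(iterations):
        temp = read_temperature()                 # sense
        if temp < TARGET_TEMP_C - HYSTERESIS_C:   # think
            furnace_on = True
        elif temp > TARGET_TEMP_C + HYSTERESIS_C:
            furnace_on = False
        set_furnace(furnace_on)                   # act
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```

The same three-stage shape, with sensing at the edge, decision-making in the middle, and actuation back out into the world, is what the essay goes on to call the world-size robot.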

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It's a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things (or, more precisely, cyber-physical systems), autonomy, and artificial intelligence. And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems (and solutions) of securing these complex machines and systems. I'm telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There's one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have largely centered around confidentiality. We're concerned about our data and who has access to it: the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software and, in part, compete on its security. Unfortunately, this isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability, one unsecured avenue for attack, and gets to choose how and when to attack. It's simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won't be penalized in the market, and (2) security engineers don't learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It's the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It's security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They're colliding in our cars (literally), in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we've largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We've muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden's NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It's not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It's less that the new agency needs to regulate (although that's often a big part of it) and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking more like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, both have collected significant expertise by bringing in people from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation, which is both a challenge and an opportunity.

I don't think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, and secure defaults; and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Governments have the scope, scale, and balance of interests to address the problems. It's the institution we've built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can't. That we're currently in the middle of an era of low government trust, where many of us can't imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley, the mistrust of governments by tech companies and the mistrust of tech companies by governments, is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need a viable career path that ensures that even though people in this field won't make as much as they would in a high-tech start-up, they will have viable careers. The security of our computerized and networked future, meaning the security of ourselves, families, homes, businesses, and communities, depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled, or killed, by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they're also places of critical innovation. They're where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

ISPs Don’t Have Blanket Immunity From Piracy, BMG Says

Post Syndicated from Ernesto original https://torrentfreak.com/isps-dont-have-blanket-immunity-from-piracy-bmg-says-170127/

Can an Internet provider be held liable for subscribers who share pirated files? Yes, a Virginia federal jury ruled late last year.

This verdict caused great uncertainty in the ISP industry, as several companies suddenly realized that they could become the next target.

Internet provider RCN is among the companies that are worried about the fallout. With 400,000 subscribers nationwide, it is one of the larger Internet providers in the United States, and as such it regularly receives takedown notices for alleged copyright infringements on its network.

Many of these notices come from BMG and its anti-piracy partner Rightscorp, who argue that RCN can be held liable for the actions of its customers.

The Internet provider is not happy with these allegations and last year it took legal steps in response. The ISP filed a lawsuit against music rights group BMG in a New York federal court, seeking a declaratory judgment on the matter.

In its request, RCN argued that it is not liable for the alleged copyright infringements of its subscribers. The company merely passes on traffic, it claimed, which grants it protection under the DMCA’s safe harbor provision.

BMG clearly disagrees with this point of view and this week submitted a motion to dismiss (pdf) the request for a declaratory judgment at the New York federal court.

The music group stresses that a declaratory judgment is not proper in this situation. RCN wants the court to rule that it is not liable for any past, current, or future infringements, without providing essential details on its own takedown and repeat infringer policies and procedures.

While BMG admits that the DMCA provides a “safe harbor” for Internet providers, they stress that this right is not unconditional. It doesn’t provide “blanket immunity” from copyright infringement claims.

“…this [safe harbor] defense is not absolute. Rather than provide conduit ISPs with blanket immunity, Congress required that conduit ISPs adopt and reasonably implement a policy for the termination of repeat infringers in appropriate circumstances before they may claim a safe harbor defense,” BMG notes.

One of the main problems BMG highlights is that the court can’t rule on something that has yet to happen. Without knowing how RCN will deal with repeat infringers in the future, its liability may change over time, the group says.

“Crucially, the facts on which RCN’s future liability turns cannot possibly be known in advance,” BMG writes.

“The Court cannot possibly know the content of future infringement notices, whether RCN will have knowledge of future infringements, what its future practices will be, and whether it will terminate repeat infringers in appropriate circumstances.”

As such, the music rights group asks the court to dismiss the ISP’s request for a declaratory judgment.

“These sorts of contingencies render questions of infringement far too remote to justify declaratory relief. It would be extraordinarily unfair to immunize RCN against the infringement of BMG’s copyrights without regard for the facts that would be pertinent to an infringement action,” they say.

Even if the court disagrees and decides that it has jurisdiction to rule on the issue, BMG cites several other grounds to dismiss the complaint. This includes RCN’s alleged failure to state a claim for DMCA safe harbor protection.

The court will now review the arguments from both sides to decide whether the case will go ahead.

In addition to RCN’s request for a declaratory judgment regarding liability for pirating subscribers, fellow ISP Windstream has a similar case pending in court. At the same time, Cox Communications is appealing the $25 million judgment in its own case.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

RIAA and MPAA Back $25 Million Piracy Verdict Against Cox

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-and-mpaa-back-25-million-piracy-verdict-against-cox-170109/

In December 2015, a Virginia federal jury ruled that Internet provider Cox Communications was responsible for the copyright infringements of its subscribers.

The ISP was found liable for willful contributory copyright infringement and ordered to pay music publisher BMG Rights Management $25 million in damages.

Cox disagreed with the outcome, and a few weeks ago the ISP filed its appeal, arguing that the district court made several errors that may ultimately restrict the public’s access to Internet services.

The company received support from several industry associations, academic institutions, libraries and digital rights groups, who submitted amicus briefs to the court of appeals voicing their concerns. However, BMG is not fighting alone.

Late last week, several copyright industry groups, including the RIAA, MPAA, and the Copyright Alliance, rallied behind the music rights group.

The submissions, which total roughly 150 pages, all stress that the current verdict should be upheld. ISPs such as Cox should not be able to enjoy safe harbor protection if they fail to disconnect “repeat infringers” from their networks, BMG’s supporters say.

The MPAA argues that the District Court made the right decision by holding Cox liable, stressing that online piracy is a massive problem that copyright holders can’t tackle without the proper legal tools to hold intermediaries such as Cox accountable.

“Online piracy accounts for a full quarter of all internet traffic and costs the entertainment industry tens of billions of dollars per year,” the MPAA writes in its brief (pdf).

“…it is simply not feasible to combat the epidemic of online infringement unless copyright-holders have the legal tools to incentivize the cooperation of intermediaries like Cox and to hold them accountable when they knowingly facilitate widespread online infringement.”

One of the central elements in this case is the “repeat infringer” question. Under the DMCA, ISPs are required to have a policy to disconnect persistent pirates, but both sides differ on their interpretation of the term.

In its defense, Cox said that only courts can decide if someone is an infringer. Otherwise, people will be disconnected based on one-sided allegations from copyright holders, which remain untested in court.

However, the MPAA, RIAA and other rightsholder groups believe that regular takedown notices should count as well, noting that earlier court verdicts made this clear.

“If Congress meant that a subscriber should have been sued in court, had a judgment entered against her, and failed to overturn that judgment on appeal — multiple times — before facing even the threat of losing internet access as a repeat infringer, it would have said so,” the RIAA writes in its brief (pdf).

In a situation where repeat infringers only lose their Internet subscriptions following a court order, copyright holders would have to launch massive legal campaigns in the U.S. targeting individual file-sharers.

That would result in an unworkable situation which runs counter to the purpose of the DMCA, the RIAA argues.

“Under Cox’s interpretation, copyright owners would be forced to launch demanding campaigns of multiple lawsuits against every individual infringer even to hope to obtain the benefit of ISP repeat-infringer policies.

“That would require a stream of individual lawsuits in federal district courts all over the country, imposing an additional burden on the courts and draining the resources of copyright owners and individual subscribers alike,” the RIAA adds.

Siding with BMG, the copyright groups ask the appeals court to keep the district court ruling intact. This runs counter to Cox’s request, which asked the court to reverse the judgment or grant a new trial.

The recent amicus briefs illustrate the gravity of the case, which is shaping up to be crucial in determining the future of anti-piracy enforcement in the United States. As such, it would be no surprise if the case goes all the way to the Supreme Court.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Year Encryption Won (Wired)

Post Syndicated from jake original http://lwn.net/Articles/710093/rss

It’s not entirely clear that the title is justified, but Wired does cover some progress on the encryption front in 2016. “End-to-end encryption, which ensures that the only people who can see your communications are you and the person on the receiving end, certainly isn’t new. But in 2016, encryption went mainstream, reaching billions of people all over the world. Even more significantly, it overcame its most aggressive legal challenge yet, in a prolonged standoff between Apple and the FBI. And just this week, a Congressional committee affirmed the importance of encryption, giving hope that future laws around the topic will include at least a modicum of sanity.

There’s still a long way to go, and any gains that were made could potentially be rolled back, but for now it’s worth taking a step back to appreciate just how far encryption came this year. As far as silver linings go, you could do a lot worse.”
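
The Wired quote above describes the end-to-end property in words; for readers who want to see it concretely, here is a minimal sketch, assuming Python with the PyNaCl library installed. The key names and message are made up for illustration and are not from the article; the point is simply that encryption and decryption involve only the two parties’ key pairs, so any service relaying the ciphertext cannot read it.

    from nacl.public import PrivateKey, Box  # pip install pynacl

    # Each party generates a key pair; private keys never leave their own device.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts with her private key and Bob's public key.
    sealed = Box(alice_private, bob_private.public_key).encrypt(b"meet at noon")

    # Only Bob, holding his private key, can open it; any relay that sees only
    # 'sealed' (an ISP, a chat server, a cache) learns nothing about the message.
    opened = Box(bob_private, alice_private.public_key).decrypt(sealed)
    assert opened == b"meet at noon"

That opacity to intermediaries is exactly what the policy fights described in these posts keep circling back to: if the service in the middle never holds a usable key, there is nothing for it to hand over.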