Tag Archives: fbi

How Different Stakeholders Frame Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/how_different_s.html

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: for example, the FBI vs. Apple fight over iPhone security.

Yes, we can validate the Wikileaks emails

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/yes-we-can-validate-wikileaks-emails.html

Recently, WikiLeaks has released emails from Democrats. Many have repeatedly claimed that some of these emails are fake or have been modified, that there’s no way to validate each and every one of them as being true. Actually, there is, using a mechanism called DKIM.

DKIM is a system designed to stop spam. It works by verifying the sender of the email. Moreover, as a side effect, it verifies that the email has not been altered.
Hillary’s team uses “hillaryclinton.com”, which has DKIM enabled. Thus, we can verify whether some of these emails are authentic.
Recently, in response to a leaked email suggesting Donna Brazile gave Hillary’s team early access to debate questions, she defended herself by suggesting the email had been “doctored” or “falsified”. That’s not true. We can use DKIM to verify it.
You can see the email in question at the WikiLeaks site: https://wikileaks.org/podesta-emails/emailid/5205. The title suggests they have early access to debate questions, and includes one specifically on the death penalty, with the text:
since 1973, 156 people have been on death row and later set free. Since 1976, 1,414 people have been executed in the U.S

Indeed, during the debate the next day, they asked the question:

Secretary Clinton, since 1976, we have executed 1,414 people in this country.  Since 1973, 156 who were convicted have been exonerated from the death row.

It’s not a smoking gun, but at the same time, the email both claims they got questions in advance and contains a question that was indeed asked the next day. Trump gets hung on similar chains of evidence, so it’s not something we can easily ignore.
Anyway, this post isn’t about the controversy, but about the fact that we can validate the email. When an email server sends a message, it includes invisible “headers”. They aren’t especially hidden; most email programs allow you to view them. They’re just boring, so they’re hidden by default. The DKIM header in this email looks like:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=hillaryclinton.com; s=google;
How do you verify this is true? There are a zillion ways, with various “DKIM verifiers”. I use the popular Thunderbird email reader (from Mozilla). It has an addon designed specifically to verify DKIM. Normally, email readers don’t care, because it’s the email server’s job to verify DKIM, not the client’s. So we need a client addon to enable verification.
Downloading the raw email from WikiLeaks and opening it in Thunderbird with the addon, I get the following verification that the email is valid. Specifically, it validates that hillaryclinton.com sent precisely this content, with this subject, on that date.
Let’s see what happens when somebody tries to doctor the email. In the following, I added “MAKE AMERICA GREAT AGAIN” to the top of the email.
As you can see, we’ve proven that DKIM will indeed detect if anybody has “doctored” or “falsified” this email.
I was just listening to ABC News about this story. It repeated Democrat talking points that the WikiLeaks emails weren’t validated. That’s a lie. This email in particular has been validated. I just did it, and I’ve shown you how you can validate it, too.
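For those who want to check it themselves without Thunderbird, here is a minimal sketch using the dkimpy Python library. The file name is a hypothetical placeholder for the raw message downloaded from WikiLeaks; dkim.verify() looks up the public key in DNS and checks the signature over the signed headers and body.

# Minimal DKIM check of a saved raw email, assuming the dkimpy package
# (pip install dkimpy).
import dkim

# Hypothetical filename: the raw .eml downloaded from WikiLeaks
with open("podesta-5205.eml", "rb") as f:
    raw_message = f.read()

# dkim.verify() fetches google._domainkey.hillaryclinton.com from DNS
# and verifies the signature over the signed headers and body.
if dkim.verify(raw_message):
    print("DKIM signature verifies: headers and body are unmodified")
else:
    print("DKIM verification FAILED: message altered or key no longer matches")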

Btw, if you can forge an email that validates correctly as I’ve shown, I’ll give you one bitcoin. It’s the easiest way of settling arguments about whether this really validates the email — if somebody tells you this blogpost is invalid, then tell them they can earn about $600 (current value of BTC) by proving it. Otherwise, no.

Update: I’m a bit late writing this blog post. Apparently, others have validated these, too.

Update: In the future, when hillaryclinton.com changes its DKIM key, the email will no longer verify. Thus, I’m recording the domain key here:

google._domainkey.hillaryclinton.com: type TXT, class IN
v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCJdAYdE2z61YpUMFqFTFJqlFomm7C4Kk97nzJmR4YZuJ8SUy9CF35UVPQzh3EMLhP+yOqEl29Ax2hA/h7vayr/f/a19x2jrFCwxVry+nACH1FVmIwV3b5FCNEkNeAIqjbY8K9PeTmpqNhWDbvXeKgFbIDwhWq0HP2PbySkOe4tTQIDAQAB
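While the record is still published, anyone can pull it straight from DNS and compare it against the copy above. A small sketch assuming the dnspython package:

# Fetch the DKIM public key record via DNS (assumes dnspython: pip install dnspython).
import dns.resolver

answers = dns.resolver.resolve("google._domainkey.hillaryclinton.com", "TXT")
for rdata in answers:
    # TXT data comes back in quoted chunks; join them into one string
    print(b"".join(rdata.strings).decode())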

Android Pirate App Store Case Ends in Mistrial, Jury Undecided

Post Syndicated from Ernesto original https://torrentfreak.com/android-pirate-app-store-case-ends-in-mistrial-jury-undecided-161021/

Assisted by police in France and the Netherlands, the FBI took down the “pirate” Android stores Appbucket, Applanet and SnappzMarket during the summer of 2012.

During the years that followed, several people connected to the Android app sites were arrested and indicted, and slowly but surely these cases are now reaching their conclusions.

Two months ago the first sentencing was announced, and it was a big one. SnappzMarket’s ‘PR manager’ Scott Walton was handed a 46-month prison sentence for conspiracy to commit copyright infringement.

Like several others, Walton had pleaded guilty in order to get a reduced sentence. However, not all did. David Lee, a California man linked to Applanet, decided to move to trial instead.

The indictment charged Lee with aiding and abetting criminal copyright infringement (pdf). In addition, he was charged with conspiring to infringe copyrights and violating the DMCA’s anti-circumvention provision.

As the case progressed it became clear that the U.S. Government’s evidence wasn’t as strong as initially thought. Before the trial even started, the prosecution voluntarily dropped the criminal copyright infringement charge.

The “overt” acts that were scrapped due to a lack of evidence are all related to an undercover FBI agent in the Northern District of Georgia, who supposedly downloaded pirated apps from Applanet’s computer servers.

What remained was the conspiracy charge and last week both parties argued their case before the jury. Over the course of several days many witnesses were heard, including FBI agents and co-defendant Gary Sharp, who previously pleaded guilty.

Last Friday the closing arguments were presented, after which the jury retired to deliberate at 10:30 in the morning. At the end of the day, however, they still hadn’t reached a decision, so the court decided to continue after the weekend.

On Monday the jury got back together but after having failed to reach a verdict by the end of the day, a mistrial was declared. This means that David Lee has not been found guilty.



TorrentFreak reached out to Lee’s lawyers for more information but they declined to comment.

In the jury instructions the defense hammered on the fact that the government must prove that either the conspiracy or an overt act took place in the District of Georgia, even if the defendant never set foot there.

It could be that the jury couldn’t reach a unanimous decision on that point or on any of the other key issues.

TF also contacted the Department of Justice, who didn’t go into detail either, but informed us that they are still evaluating the outcome. “We are considering our options,” a DoJ spokesperson said.

In theory, the U.S. Government can ask for a retrial, which means that the case has to be tried again. For now, however, David Lee remains out of prison.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Yahoo-email-search story is garbage

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/the-yahoo-email-search-story-is-garbage.html

Joseph Menn (Reuters) is reporting that Yahoo! searched emails for the NSA. The details of the story are so mangled that it’s impossible to say what’s actually going on.

The first paragraph says this:

Yahoo Inc last year secretly built a custom software program to search all of its customers’ incoming emails

The second paragraph says this:

The company complied with a classified U.S. government demand, scanning hundreds of millions of Yahoo Mail accounts

Well? Which is it? Did they “search incoming emails” or did they “scan mail accounts”? Whether we are dealing with emails in transit, or stored on the servers, is a BFD (Big Fucking Detail) that you can’t gloss over and confuse in a story like this. Whether searches are done indiscriminately across all emails, or only for specific accounts, is another BFD.

The third paragraph seems to resolve this, but it doesn’t:

Some surveillance experts said this represents the first case to surface of a U.S. Internet company agreeing to an intelligence agency’s request by searching all arriving messages, as opposed to examining stored messages or scanning a small number of accounts in real time.

Who are these “some surveillance experts”? Why is the story keeping their identities secret? Are they some whistleblowers afraid for their jobs? If so, then that should be mentioned. In reality, they are unlikely to be real surveillance experts, but just random people who know slightly more about the subject than Joseph Menn, and their identities are being kept secret in order to prevent us from challenging these experts — which is a violation of journalistic ethics.

And, are they analyzing the raw information the author sent them? Or are they opining on the garbled version of events that we see in the first two paragraphs?

The confusion continues:

It is not known what information intelligence officials were looking for, only that they wanted Yahoo to search for a set of characters. That could mean a phrase in an email or an attachment, said the sources, who did not want to be identified.

What the fuck is a “set of characters”??? Is this an exact quote from somewhere? Or something the author of the story made up? The clarification of what this “could mean” doesn’t clear this up, because if that’s what it “actually means”, then why not say so to begin with?

It’s not just technical terms, but also legal ones:

The request to search Yahoo Mail accounts came in the form of a classified edict sent to the company’s legal team, according to the three people familiar with the matter.

What the fuck is a “classified edict”? An NSL? A FISA court order? What? This is also a BFD.

We outsiders already know about the NSA/FBI’s ability to ask for strong selectors (email addresses). What we don’t know about is their ability to search all emails, regardless of account, for arbitrary keywords/phrases. If that’s what’s going on, then this would be a huge story. But the story doesn’t make it clear that this is actually what’s going on — it just strongly implies it.
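To make concrete what “searching all arriving messages for a set of characters” would even mean, here is a toy sketch. Everything in it is hypothetical; it says nothing about what Yahoo actually built, only what the difference between account-targeted search and stream-wide scanning looks like in code.

# Toy sketch: test every arriving raw message against a set of selector strings.
# The selectors and the message source are hypothetical placeholders.
SELECTORS = [b"example-selector-phrase", b"another-indicator"]

def matches_selector(raw_message: bytes) -> bool:
    # True if any selector appears anywhere in the raw message (headers,
    # body, or attachments) -- no per-account targeting involved.
    return any(s in raw_message for s in SELECTORS)

# In a mail pipeline this check would run on every message as it arrives,
# which is exactly what makes stream-wide scanning different from pulling
# the stored mail of specific, named accounts.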

There are many other ways to interpret this story. For example, the government may simply be demanding that when Yahoo satisfies demands for emails (based on email addresses), it does so from the raw incoming stream, before it hits spam/malware filters. Or, they may be demanding that Yahoo satisfy their demands with more secrecy, so that the entire company doesn’t learn of the email addresses that a FISA order demands. Or, the government may be demanding that the normal collection happen in real time, in the seconds that emails arrive, instead of minutes later.

Or maybe this isn’t an NSA/FISA story at all. Maybe the DHS has a cybersecurity information sharing program that distributes IoCs (indicators of compromise) to companies under NDA. Because it’s a separate program under NDA, Yahoo would need to set up an email malware scanning system separate from their existing malware system in order to use those IoCs. (@declanm‘s stream has further variations on this scenario).

My point is this: the story is full of mangled details that really tell us nothing. I can come up with multiple, unrelated scenarios that are consistent with the content in the story. The story certainly doesn’t say that Yahoo did anything wrong, or that the government is doing anything wrong (at least, wronger than we already know).

I’m convinced the government is up to no good, strong-arming companies like Yahoo into compliance. The thing that’s stopping us from discovering malfeasance is poor reporting like this.

Man Who Leaked The Revenant Online Fined $1.1m

Post Syndicated from Andy original https://torrentfreak.com/man-leaked-revenant-online-fined-1-1m-160930/

In December 2015, many so-called ‘screener’ copies of the latest movies leaked online. Among them was a near-perfect copy of Alejandro G. Iñárritu’s ‘The Revenant’.

Starring Leonardo DiCaprio and slated for a Christmas day release, in a matter of hours the tale of vengeance clocked up tens of thousands of illegal downloads.

With such a high-profile leak, it was inevitable that the authorities would attempt to track down the individual responsible. It didn’t take them long.

Following an FBI investigation, former studio worker William Kyle Morarity was identified as the culprit. Known online by the username “clutchit,” the 31-year-old had uploaded The Revenant and The Peanuts Movie to private torrent tracker Pass The Popcorn.

The Revenant


Uploading a copyrighted work being prepared for commercial distribution is a felony that carries a maximum penalty of three years in prison, so his sentencing always had the potential to be punishing for the Lancaster man, despite his early guilty plea.

This week Morarity was sentenced in federal court for criminal copyright infringement after admitting to uploading screener copies of both movies to the Internet.

After being posted online six days in advance of its theatrical release, it was estimated that The Revenant was downloaded at least a million times during a six-week period, causing Twentieth Century Fox Film Corporation to suffer losses of “well over $1 million.”

United States District Court Judge Stephen V. Wilson ordered Morarity to pay $1.12 million in restitution to Twentieth Century Fox. He also sentenced the 31-year-old to eight months’ home detention and 24 months’ probation.

According to court documents, Morarity obtained the screeners and copied them to a portable hard drive. He then uploaded the movies to Pass The Popcorn on December 17 and December 19.

“The film industry creates thousands of jobs in Southern California,” said United States Attorney Eileen M. Decker commenting on the sentencing.

“The defendant’s illegal conduct caused significant harm to the victim movie studio. The fact that the defendant stole these films while working on the lot of a movie studio makes his crime more egregious.”

Deirdre Fike, the Assistant Director in Charge of the FBI’s Los Angeles Field Office, said that Morarity had abused his position of trust to obtain copies of the movies and then used them in a way that caused Fox to incur huge losses.

“The theft of intellectual property – in this case, major motion pictures – discourages creative incentive and affects the average American making ends meet in the entertainment industry,” Fike said.

As part of his punishment, Morarity also agreed to assist the FBI to produce a public service announcement aimed at educating the public about the harms of copyright infringement and the illegal uploading of movies to the Internet.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Some technical notes on the PlayPen case

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/09/some-technical-notes-on-playpen-case.html

In March of 2015, the FBI took control of a Tor onion childporn website (“PlayPen”), then used an 0day exploit to upload malware to visitors’ computers in order to identify them. There is some controversy over the warrant they used, and over government mass hacking in general. However, much of the discussion misses some technical details, which I thought I’d discuss here.

IP address
In a post on the case, Orin Kerr claims:
retrieving IP addresses is clearly a search

He is wrong, at least, in the general case. Uploading malware to gather other things (hostname, username, MAC address) is clearly a search. But discovering the IP address is a different thing.
Today’s homes contain many devices behind a single router. The home has only one public IP address, that of the router. All the other devices have local IP addresses. The router then does network address translation (NAT) in order to convert outgoing traffic to all use the public IP address.
The FBI sought the public IP address of the NAT/router, not the local IP address of the perp’s computer. The malware (“NIT”) didn’t search the computer for the IP address. Instead, the NIT generated network traffic destined for the FBI’s computers. The FBI discovered the suspect’s public IP address by looking at their own computers.
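To make that distinction concrete, here is a toy sketch of the server side. It is purely illustrative and has nothing to do with the actual NIT code: the point is that the “search” for the public address is nothing more than reading the source address of a connection arriving at your own machine.

# Toy illustration: a server learns a visitor's public IP simply by accepting
# a connection -- no inspection of the client machine is needed.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))
server.listen(1)

conn, addr = server.accept()
# addr[0] is the public (post-NAT) address of whoever connected,
# not the 192.168.x.x address of the device behind the router.
print("Connection from public IP:", addr[0])
conn.close()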
Historically, there have been similar ways of getting this IP address (from a Tor hidden user) without “hacking”. In the past, Tor used to leak DNS lookups, which would often lead to the user’s ISP, or to the user’s IP address itself. Another technique would be to provide rich content files (like PDFs) or video files that the user would have to download to view, and which would then contact the Internet (the FBI’s computers) themselves, bypassing Tor.
Since the Fourth Amendment is about where the search happens, and not what is discovered, it’s not a search to find the IP address in packets arriving at FBI servers. How the FBI discovered the IP address may be a search (running malware on the suspect’s computer), but the public IP address itself doesn’t necessarily mean a search happened.

Of course, uploading malware just to make the machine transmit packets to an FBI server, then getting the IP address from those packets, is still problematic. It’s gotta be something that requires a warrant, even though it’s not precisely the malware searching the machine for its IP address.

In any event, if not for the IP address, then PlayPen searches still happened for the hostname, username, and MAC address. Imagine the FBI gets a search warrant, shows up at the suspect’s house, and finds no child porn. They then look at the WiFi router, and find that the suspected MAC address is indeed connected. They then use other tools to find that the device with that MAC address is located in the neighbor’s house — who has been piggybacking off the WiFi.
It’s a pre-crime warrant (#MinorityReport)
The warrant allows the exploit/malware/search to be used whenever somebody logs in with a username and password.
The key thing here is that the warrant includes people who have not yet created an account on the server at the time the warrant is written. They will connect, create an account, log in, then start accessing the site.
In other words, the warrant includes people who have never committed a crime when the warrant was issued, but who first commit the crime after the warrant. It’s a pre-crime warrant. 
Sure, it’s possible in any warrant to catch pre-crime. For example, a warrant for a drug dealer may also catch a teenager making their first purchase of drugs. But this seems quantitatively different. It’s not targeting the known/suspected criminal — it’s targeting future criminals.
This could easily be solved by limiting the warrant to only accounts that have already been created on the server.
It’s more than an anticipatory warrant

People keep saying it’s an anticipatory warrant, as if this explains everything.

I’m not a lawyer, but even I can see that this explains only that the warrant anticipates future probable cause. “Anticipatory warrant” doesn’t explain that the warrant also anticipates future place to be searched. As far as I can tell, “anticipatory place” warrants don’t exist and are a clear violation of the Fourth Amendment. It makes it look like a “general warrant”, which the Fourth Amendment was designed to prevent.

Orin’s post includes some “unknown place” examples — but those specify something else in particular. A roving wiretap names a person, and the “place” is whatever phone they use. In contrast, this PlayPen warrant names no person. Orin thinks that the problem may be that more than one person is involved, but he is wrong. A warrant can (presumably) name multiple people, or you can have multiple warrants, one for each person. Instead, the problem here is that no person is named. It’s not “Rob’s computer”, it’s “the computer of whoever logs in”. Even if the warrant were ultimately for a single person, it’d still be problematic because the person is not identified.
Orin cites another case, where the FBI places a beeper into a package in order to track it. The place, in this case, is the package. Again, this is nowhere close to this case, where no specific/particular place is mentioned, only a type of place. 
This could easily have been resolved. Most accounts were created before the warrant was issued. The warrant could simply have listed all the usernames, saying the computers of those using these accounts are the places to search. It’s a long list of usernames (1,500?), but if you can’t include them all in a single warrant, in this day and age of automation, I’d imagine you could easily create 1,500 warrants.
It’s malware

As a techy, the name for what the FBI did is “hacking”, and the name for their software is “malware” not “NIT”. The definitions don’t change depending upon who’s doing it and for what purpose. That the FBI uses weasel words to distract from what it’s doing seems like a violation of some sort of principle.

I am not a lawyer, I am a revolutionary. I care less about precedent and more about how a Police State might abuse technology. That a warrant can be issued whose condition is effectively “whoever logs into the server” seems like a scary potential for abuse. That a warrant can be designed to catch pre-crime seems even scarier, like science fiction. That a warrant might not be issued for something called “malware”, but would be issued for something called “NIT”, scares me the most.
This warrant could easily have been narrower. It could have listed all the existing account holders. It could’ve been even narrower, for account holders where the server logs prove they’ve already downloaded child porn.
Even then, we need to be worried about FBI mass hacking. I agree that FBI has good reason to keep the 0day secret, and that it’s not meaningful to the defense. But in general, I think courts should demand an overabundance of transparency — the police could be doing something nefarious, so the courts should demand transparency to prevent that.

Recovering an iPhone 5c Passcode

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/recovering_an_i.html

Remember the San Bernardino killer’s iPhone, and how the FBI maintained that they couldn’t get the encryption key without Apple providing them with a universal backdoor? Many of us computer-security experts said that they were wrong, and there were several possible techniques they could use. One of them was manually removing the flash chip from the phone, extracting the memory, and then running a brute-force attack without worrying about the phone deleting the key.

The FBI said it was impossible. We all said they were wrong. Now, Sergei Skorobogatov has proved them wrong. Here’s his paper:

Abstract: This paper is a short summary of a real world mirroring attack on the Apple iPhone 5c passcode retry counter under iOS 9. This was achieved by desoldering the NAND Flash chip of a sample phone in order to physically access its connection to the SoC and partially reverse engineering its proprietary bus protocol. The process does not require any expensive and sophisticated equipment. All needed parts are low cost and were obtained from local electronics distributors. By using the described and successful hardware mirroring process it was possible to bypass the limit on passcode retry attempts. This is the first public demonstration of the working prototype and the real hardware mirroring process for iPhone 5c. Although the process can be improved, it is still a successful proof-of-concept project. Knowledge of the possibility of mirroring will definitely help in designing systems with better protection. Also some reliability issues related to the NAND memory allocation in iPhone 5c are revealed. Some future research directions are outlined in this paper and several possible countermeasures are suggested. We show that claims that iPhone 5c NAND mirroring was infeasible were ill-advised.
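To get a rough feel for why bypassing the retry counter matters, here is a back-of-envelope sketch. All of the timing parameters are illustrative assumptions, not figures from Skorobogatov’s paper.

# Back-of-envelope estimate of a NAND-mirroring brute force.
# All timing parameters below are illustrative assumptions, not measured values.
GUESSES_PER_BATCH = 6        # assumed attempts before restoring the counter
SECONDS_PER_GUESS = 5        # assumed time to enter one passcode
SECONDS_PER_RESTORE = 90     # assumed time to re-flash the mirrored NAND

def worst_case_hours(keyspace: int) -> float:
    batches = keyspace / GUESSES_PER_BATCH
    total_seconds = keyspace * SECONDS_PER_GUESS + batches * SECONDS_PER_RESTORE
    return total_seconds / 3600

print("4-digit passcode:", round(worst_case_hours(10_000), 1), "hours")   # days, not decades
print("6-digit passcode:", round(worst_case_hours(1_000_000)), "hours")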

Susan Landau explains why this is important:

The moral of the story? It’s not, as the FBI has been requesting, a bill to make it easier to access encrypted communications, as in the proposed revised Burr-Feinstein bill. Such “solutions” would make us less secure, not more so. Instead we need to increase law enforcement’s capabilities to handle encrypted communications and devices. This will also take more funding as well as redirection of efforts. Increased security of our devices and simultaneous increased capabilities of law enforcement are the only sensible approach to a world where securing the bits, whether of health data, financial information, or private emails, has become of paramount importance.

Or: The FBI needs computer-security expertise, not backdoors.

Patrick Ball writes about the dangers of backdoors.

EDITED TO ADD (9/23): Good article from the Economist.

Researcher Finds Critical Vulnerabilities in Hollywood Screener System

Post Syndicated from Andy original https://torrentfreak.com/researcher-finds-critical-vulnerabilities-in-hollywood-screener-system-160909/

oscartorrentsSo-called screener copies of the latest movies are some of Hollywood’s most valuable assets, yet every year and to the delight of pirates, many leak out onto the Internet.

Over the years, Hollywood has done its best to limit the leaks, but every 12 months without fail, many of the top titles appear online in close to perfect quality.

With that in mind, the studios have been testing Netflix-like systems that negate the need for physical discs to be sent out.

One such system has been made available at Awards-Screeners.com. Quietly referenced by companies including 20th Century Fox, the site allows SAG-AFTRA members and other industry insiders to view the latest movies in a secure environment. At least, that’s the idea.


Late August, TorrentFreak was contacted by security researcher Chris Vickery of MacKeeper.com who told us that while conducting tests, he’d discovered an exposed MongoDB database that appeared to be an integral part of Awards-Screeners.com.

“The database was running with no authentication required for access. No username. No password. Just entirely exposed to the open internet,” Vickery told TF.

The researcher’s discovery was significant as the database contained more than 1,200 user logins. Vickery did not share the full database with TF but he did provide details of a handful of the accounts it contained. Embarrassingly, many belong to senior executives including:

– Vice President of International Technology at Universal Pictures
– ‎Director of Content Technology & Security at Disney
– Vice President of Post-Production Technology at Disney
– Executive Director, Feature Mastering at Warner Bros
– Vice President of Global Business & Technology Strategy at Warner Bros
– Director of Content Protection at Paramount Pictures
– VP of corporate communications and publicity for 20th Century Fox

While the hashed passwords for the above would be difficult to crack, the database itself was publicly offering admin-level access, so it was a disaster from a security perspective.

“Any of the values in the database could have been changed to arbitrary values, i.e. create-your-own-password,” Vickery said.
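To illustrate why a no-authentication MongoDB instance is such a serious problem, here is a hedged sketch of what anyone on the Internet could have done with a few lines of pymongo. The host, database, collection, and field names are hypothetical placeholders, not details from Vickery’s report.

# Sketch of working with a MongoDB instance that requires no credentials.
# Host, database, collection and field names are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://screeners.example.com:27017/")  # no username, no password
db = client["screeners"]

# Read: list a few user accounts
for user in db["users"].find().limit(5):
    print(user.get("email"))

# Write: with admin-level access, any value could be changed,
# e.g. replacing a password hash with one the attacker knows.
db["users"].update_one({"email": "target@example.com"},
                       {"$set": {"password": "attacker-chosen-hash"}})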


According to the researcher, this vulnerability had the potential to blow a hole in the screener system and could’ve had huge piracy and subsequent law enforcement implications.

“Theoretically, it would have been possible for a malicious person to log into any of the 1,200+ user accounts, screencap an unreleased film, and torrent it to the world,” he explained.

“There’s also supposedly video watermark technology that makes it possible to trace which account it came from. So basically you could have framed any of the users for the distribution as well by using their account to do it.”

The screenshot below shows Vickery’s view of the database, in this case highlighting the availability of a screener copy of the soon-to-be-released Oliver Stone movie, Snowden.


Vision Media Management, which claims to be the largest Awards screener fulfillment operation in the world, is the outfit in charge of the system. It’s described in the company’s promotional material as a “Secure Digital Screener” platform “selected by the MPAA major studios as the preferred secure content delivery method for Awards voters.”

Like all responsible data breach hunters, Vickery did his research and decided to inform Awards-Screeners.com and Vision Media Management of his findings. Initially, they appeared somewhat grateful.

“During my telephone conversation with Vision Media Management, which consisted of me, their lead counsel (Tanya Forsheit), and their CTO (Doug Woodard), they were very surprised and worried. They didn’t understand how this could happen and claimed that the system should have nothing loaded into it currently and was purged months ago,” Vickery said.

“This is not believable due to time stamps of activity in the database. In the ‘Snowden’ screenshot, for example, you can see that the entry was updated on 7/13/2016.”


Vickery also informed the MPAA of his discoveries and was told by the organization’s Office of Technology that it was “currently working diligently” with Vision to “evaluate the situation and take appropriate remedial action.”

Meanwhile, conversations between Vickery and Vision Media Management continued. The researcher says that the company tried to downplay his findings with claims that the database had been secure and contained only test data.

However, when Vickery asked if he could release the database, he was advised it was too sensitive to be made public. The company then began a drive to convince the researcher that security at Amazon, one of Vision’s vendors, was to blame for the leak. Vision’s lawyer also suggested that Vickery had “improperly downloaded” the database.

In a follow-up mail, Vickery made it clear to Vision that allegations of “improper downloading” were incompatible with the fact that the database had been published openly to the public Internet. And, after all, he had done the responsible thing by informing them of their security issues.

“I have cooperated with and contributed to data breach-related investigations conducted by the FTC, FBI, US Navy, HHS/OCR, US Secret Service, and other similar entities,” he told the company. “Not a single regulatory or government agency I have interacted with has even suggested that what I do, downloading publicly published information, is improper.”

In subsequent discussion with Vickery, Vision Media asked for time to assess the situation but by September 4, the researcher had more bad news for the company.

Emails shared with TF show Vickery informing Vision of yet more security holes in its system, specifically a pair of publicly exposed S3 buckets located on Vision resources at Amazon. Vickery says these contained development and release builds of Vision’s Android app, development and deployment meeting notes, plus some unexplained references to Netflix.

In the run-up to this piece, Vickery advised Vision Media that a public disclosure would be likely so in an effort to provide balanced reporting, TorrentFreak reached out to Vision Media’s CEO for a statement on the researcher’s findings. At the time of publication, nothing had been received.

And after several conversations with Vision via email and on the phone, Vickery was drawing a blank this week too.

“Vision has not gotten back to me today, and we were very clear last week that they would be contacting me again by Thursday,” Vickery told TF. “I even sent them a little reminder earlier and asked if we were still planning to talk. No response all day.”

In the absence of an official statement from Vision Media, it’s impossible to say how many people accessed the Awards-Screener database before Vickery, or what their intentions were. Perhaps only time will tell but one thing is clear – a move to the digital space might not be the perfect solution for screener distribution.

Check out Chris Vickery’s report on MacKeeper

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Apple’s Cloud Key Vault

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/09/apples_cloud_ke.html

Ever since Ivan Krstić, Apple’s Head of Security Engineering and Architecture, presented the company’s key backup technology at Black Hat 2016, people have been pointing to it as evidence that the company can create a secure backdoor for law enforcement.

It’s not. Matthew Green and Steve Bellovin have both explained why not. And the same group of us that wrote the “Keys Under Doormats” paper on why backdoors are a bad idea have also explained why Apple’s technology does not enable it to build secure backdoors for law enforcement. Michael Specter did the bulk of the writing.

The problem with Tait’s argument becomes clearer when you actually try to turn Apple’s Cloud Key Vault into an exceptional access mechanism. In that case, Apple would have to replace the HSM with one that accepts an additional message from Apple or the FBI — or an agency from any of the 100+ countries where Apple sells iPhones — saying “OK, decrypt,” as well as the user’s password. In order to do this securely, these messages would have to be cryptographically signed with a second set of keys, which would then have to be used as often as law enforcement access is required. Any exceptional access scheme made from this system would have to have an additional set of keys to ensure authorized use of the law enforcement access credentials.

Managing access by a hundred-plus countries is impractical due to mutual mistrust, so Apple would be stuck with keeping a second signing key (or database of second signing keys) for signing these messages that must be accessed for each and every law enforcement agency. This puts us back at the situation where Apple needs to protect another repeatedly-used, high-value public key infrastructure: an equivalent situation to what has already resulted in the theft of Bitcoin wallets, RealTek’s code signing keys, and Certificate Authority failures, among many other disasters.

Repeated access of private keys drastically increases their probability of theft, loss, or inappropriate use. Apple’s Cloud Key Vault does not have any Apple-owned private key, and therefore does not indicate that a secure solution to this problem actually exists.

It is worth noting that the exceptional access schemes one can create from Apple’s CKV (like the one outlined above) inherently entail the precise issues we warned about in our previous essay on the danger signs for recognizing flawed exceptional access systems. Additionally, the Risks of Key Escrow and Keys Under Doormats papers describe further technical and nontechnical issues with exceptional access schemes that must be addressed. Among the nontechnical hurdles would be the requirement, for example, that Apple run a large legal office to confirm that requests for access from the government of Uzbekistan actually involved a device that was located in that country, and that the request was consistent with both US law and Uzbek law.

My colleagues and I do not argue that the technical community doesn’t know how to store high-value encryption keys­ — to the contrary that’s the whole point of an HSM. Rather, we assert that holding on to keys in a safe way such that any other party (i.e. law enforcement or Apple itself) can also access them repeatedly without high potential for catastrophic loss is impossible with today’s technology, and that any scheme running into fundamental sociotechnical challenges such as jurisdiction must be evaluated honestly before any technical implementation is considered.

Pirate Android App ‘Store’ Member Jailed For 46 Months

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-android-app-store-member-jailed-for-46-months-160831/

Assisted by police in France and the Netherlands, the FBI took down the “pirate” Android stores Appbucket, Applanet and SnappzMarket during the summer of 2012.

The domain seizures were the first ever against “rogue” mobile app marketplaces and followed similar actions against BitTorrent and streaming sites.

During the years that followed several people connected to the Android app sites were arrested and indicted, but progress has been slow. Today, we can report on what we believe to be the first sentencing in these cases.

Earlier this month, Scott Walton of Lovejoy, Georgia, was found guilty of conspiracy to commit copyright infringement and sentenced to 46 months in prison.

The sentence hasn’t been announced publicly by the Department of Justice, but paperwork (pdf) obtained by TorrentFreak confirms that it was handed down by Georgia District Court Judge Timothy Batten.

The Judgement


According to the prosecution, one of Walton’s primary tasks was to manage public relations for SnappzMarket.

“In this role, defendant Walton monitored the Facebook fan page for SnappzMarket, provided responses to support inquiries, developed new ideas for SnappzMarket, and assisted with finding solutions to technical problems,” the indictment reads.

“In addition, defendant Walton searched for and downloaded copies of copyrighted apps, burned those copies to digital media such as compact discs, and mailed them to defendant Gary Edwin Sharp.”

The sentencing itself doesn’t come as a surprise, but it took a long time to be finalized.

Together with several co-defendants, Walton had already pleaded guilty two years ago, when he admitted to being involved in the illegal copying and distribution of more than a million pirated Android apps with a retail value of $1.7 million.

Before sentencing, Walton’s attorney Jeffrey Berhold urged the court to minimize the sentence. Citing letters from family and friends, he noted that his client can be of great value to the community.

“The Court can make this world a better place by releasing Scott Walton sooner rather than later,” Berhold wrote.

Whether these pleas helped is unknown. The 46-month sentence is short of the five years maximum, but it remains a very long time.

Initially, Walton was able to await his sentencing as a free man, but last year he was incarcerated after violating his pretrial release conditions. This means that he has already served part of his sentence.

The two other SnappzMarket members who were indicted, Joshua Ryan Taylor and Gary Edwin Sharp, are expected to be sentenced later this year. The same is true for co-conspirator Kody Jon Peterson.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The NSA Is Hoarding Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/the_nsa_is_hoar.html

The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others’ computers. Those vulnerabilities aren’t being reported, and aren’t getting fixed, making your computers and networks unsafe.

On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn’t hacked; what probably happened was that a “staging server” for NSA cyberweapons — that is, a server the NSA was making use of to mask its surveillance activities — was hacked in 2013.

The NSA inadvertently resecured itself in what was coincidentally the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: “!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?”

Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee — or other high-profile data breaches — the Russians will expose NSA exploits in turn.

But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and “exploit code” that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, Watchguard, and Juniper — systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA — despite what it and other representatives of the US government say — prioritizing its ability to conduct surveillance over our security. Here’s one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls’ security. Cisco hasn’t sold these firewalls since 2009, but they’re still in use today.

Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard “zero days” — the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is “a clear national security or law enforcement” use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn’t stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

Hoarding zero-day vulnerabilities is a bad idea. It means that we’re all less secure. When Edward Snowden exposed many of the NSA’s surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It’s an inter-agency process, and it’s complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can’t use it to attack other systems.

There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there’s the bigger question of what qualifies in the NSA’s eyes as a “vulnerability.”

Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can’t use, and doing so gets its numbers up; it’s good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems — whoever “they” are. Either everyone is more secure, or everyone is more vulnerable.

Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn’t rely on zero days — very much, anyway.

Earlier this year at a security conference, Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) organization — basically the country’s chief hacker — gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

The distinction he’s referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for “nobody but us.” Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It’s an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.

The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone — another government, cybercriminals, amateur hackers — could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that — according to the standards established by the White House and the NSA — should have been disclosed and fixed, it’s these. That they have not been during the three-plus years that the NSA knew about and exploited them — despite Joyce’s insistence that they’re not very important — demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is with the recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Individual investigation should be part of the FBI, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

I doubt we’re going to see any congressional investigations this year, but we’re going to have to figure this out eventually. In my 2014 book Data and Goliath, I write that “no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find…” Our nation’s cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

This essay previously appeared on Vox.com.

EDITED TO ADD (8/27): The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.

James Bamford thinks this is the work of an insider. I disagree, but he’s right that the TAO catalog was not a Snowden document.

People are looking at the quality of the code. It’s not that good.

FBI-Controlled Megaupload Domain Now Features Soft Porn

Post Syndicated from Ernesto original https://torrentfreak.com/fbi-controlled-megaupload-domain-now-features-soft-porn-160826/

Megaupload was shut down nearly half a decade ago, but all this time there has been little progress on the legal front.

Last December a New Zealand District Court judge ruled that Kim Dotcom and his colleagues can be extradited to the United States to face criminal charges, a decision that will be appealed shortly.

With the criminal case pending, the U.S. Government also retains control over several of the company’s assets.

This includes cash and cars, but also more than a dozen of Megaupload’s former domain names, including Megastuff.co, Megaclicks.org, Megaworld.mobi, Megaupload.com, Megaupload.org, and Megavideo.com.

Initially, the domains served a banner indicating they had been seized as part of a criminal investigation. However, those who visit some of the sites today are in for a surprise.

This week we discovered that Megaupload.org is now hosting a site dedicated to soft porn advertisements. Other seized domains are also filled with ads, including Megastuff.co, Megaclicks.org, and Megaworld.mobi.



Interestingly, this all happened under the watch of the FBI, which is still listed as the administrative and technical contact for the domain names in question.

So how can this be?

Regular readers may recall that something similar happened to the main Megaupload.com domain last year. At the time we traced this back to an expired domain the FBI used for their nameservers, Cirfu.net.

After Cirfu.net expired, someone else took over the domain name and linked Megaupload.com to scammy ads. The U.S. authorities eventually fixed this by removing the nameservers altogether, but it turns out that they didn’t do this for all seized domains.

A few weeks ago the Cirfu.net domain expired once more and again it was picked up by an outsider. This unknown person or organization parked it at Rook Media, to generate some cash from the FBI-controlled domains.

As can be seen from the domain WHOIS data, Megaupload.org still uses the old Cirfu.net nameservers, which means that an outsider is now able to control several of the seized Megaupload domain names.
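Anyone can confirm this kind of delegation problem themselves. Here is a small sketch using the dnspython package to pull a domain’s NS records; the check is generic and reflects nothing beyond whatever public DNS reports at the time you run it.

# Look up which nameservers a domain delegates to (assumes dnspython).
import dns.resolver

for ns in dns.resolver.resolve("megaupload.org", "NS"):
    print(ns.target)
# If these point at a domain (like cirfu.net) that has expired and been
# re-registered by someone else, that someone now controls where the
# seized domain resolves.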


The ‘hijacked’ domains don’t get much traffic but it’s still quite embarrassing to have them linked to ads and soft porn. Commenting on our findings, Kim Dotcom notes that the sloppiness is emblematic of the entire criminal case.

“Their handling of the Megaupload domain is a reflection of the entire case: Unprofessional,” Dotcom tells us.

What’s clear is that the U.S. authorities haven’t learned from their past mistakes. It literally only takes a few clicks to update the nameserver info and reinstate the original seizure banner. One would assume that the FBI has the technical capabilities to pull that off.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Another lesson in confirmation bias

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/another-lesson-in-confirmation-bias.html

The biggest problem with hacker attribution is the confirmation bias problem. Once you develop a theory, your mind shifts to distorting evidence trying to prove the theory. After a while, only your theory seems possible as one that can fit all your carefully selected evidence.

You can watch this happen in two recent blogposts [1] [2] by Krypt3ia attributing bitcoin payments to the Shadow Broker hackers as coming from the government (FBI, NSA, TAO). These posts are absolutely wrong. Nonetheless, the press has picked up on the story and run with it [*]. [Note: click on the pictures in this post to blow them up so you can see them better].

The Shadow Brokers published their bitcoin address (19BY2XCgbDe6WtTVbTyzM9eR3LYr6VitWK), asking for donations to release the rest of their tools. They’ve received 66 transactions so far, totaling 1.78 bitcoin, or roughly $1000 at today’s exchange rate.

Bitcoin is not anonymous but pseudonymous. Bitcoin is a public ledger, with every transaction visible to everyone. Sometimes we can’t tie addresses back to people, but sometimes we can. A lot of researchers have spent a lot of time on “taint analysis” trying to track down the real identities of evildoers. Thus, it seems plausible that we might be able to discover the identities of the people making contributions to the Shadow Brokers.
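As a rough illustration of how easy that starting point is, the sketch below (my own assumption-laden example, not anything from Krypt3ia's posts) pulls the donation address's history from Blockchain.info's public API and lists the addresses that sent it money; the endpoint and field names are the ones that service exposed at the time, so treat them as assumptions.

import requests

SHADOW_BROKERS_ADDR = "19BY2XCgbDe6WtTVbTyzM9eR3LYr6VitWK"

# Blockchain.info's "rawaddr" endpoint returns the address's transaction history as JSON.
data = requests.get(f"https://blockchain.info/rawaddr/{SHADOW_BROKERS_ADDR}", timeout=30).json()

btc_received = data["total_received"] / 1e8  # satoshis -> BTC
print(f"{data['n_tx']} transactions, {btc_received:.4f} BTC received")

for tx in data["txs"]:
    # The sending addresses are the previous outputs consumed by each transaction.
    senders = {inp["prev_out"]["addr"] for inp in tx["inputs"] if "addr" in inp.get("prev_out", {})}
    print(tx["hash"][:16], "<-", ", ".join(sorted(senders)) or "unknown")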

The first of Krypt3ia’s errant blogposts tries to use the Bitcoin taint analysis plugin within Maltego in order to do some analysis on the Shadow Broker address. What he found was links to the Silk Road address — the address controlled by the FBI since they took down that darknet marketplace several years ago. Therefore, he created the theory that the government (FBI? NSA? TAO?) was up to some evil tricks, such as trying to fill the account with money so that they could then track where the money went in the public blockchain.

But he misinterpreted the links. (He was wrong.) There were no payments from the Silk Road accounts to the Shadow Broker account. Instead, there were people making payments to both accounts. As a prank.

To demonstrate how this prank works, I made my own transaction, where I pay money to the Shadow Brokers (19BY2…), to Silk Road (1F1A…), and to a few other well-known accounts controlled by the government.

The point here is that anybody can do these shenanigans. The fact that government-controlled addresses are involved means nothing. They are public, and anybody can send coin to them.
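To make that concrete, here is a hedged sketch of how anyone with a running Bitcoin Core node could do the same thing with a single "sendmany" RPC call. The RPC credentials and the second address are placeholders (the article only gives the Silk Road address in truncated form); only the Shadow Brokers address is taken from the text above.

import requests

RPC_URL = "http://rpcuser:rpcpassword@127.0.0.1:8332"  # placeholder node credentials

SHADOW_BROKERS_ADDR = "19BY2XCgbDe6WtTVbTyzM9eR3LYr6VitWK"   # from the article
SOME_WELL_KNOWN_ADDR = "1ExamplePlaceholderAddressxxxxxxxx"  # placeholder; substitute any public address

payload = {
    "jsonrpc": "1.0",
    "id": "prank",
    "method": "sendmany",  # Bitcoin Core RPC: pay several addresses in one transaction
    "params": ["", {SHADOW_BROKERS_ADDR: 0.0001337, SOME_WELL_KNOWN_ADDR: 0.0001337}],
}

print(requests.post(RPC_URL, json=payload).json())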

That blogpost points to yet more shenanigans, such as somebody “rick rolling” the address, as supposed confirmation that TAO hackers were involved. What you see in the picture below is a series of transactions using bitcoin addresses containing the phrase “never gonna give you up”, the title of Rick Astley’s song (I underlined the words in red).

Far from the government being involved, somebody else took credit for the rick rolling, under the Twitter handle @MalwareTechBlog. In a blogpost [*], he describes what he did. He then proves his identity by signing a message at the bottom of his post, using the same key (the 1never… key above) he used in his tricks. Below is a screenshot of how I verified (and how anybody can verify) the key.
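Message signing is the standard way to prove control of an address, and checking such a claim takes only a few lines with the python-bitcoinlib package. The sketch below only shows the call pattern; the actual vanity address, statement, and signature are the ones published in MalwareTechBlog's post, so they appear here as placeholders.

from bitcoin.wallet import CBitcoinAddress
from bitcoin.signmessage import BitcoinMessage, VerifyMessage

def check_claim(address_str: str, statement: str, signature_b64: str) -> bool:
    # True only if the base64 signature was made by the private key behind address_str.
    return VerifyMessage(CBitcoinAddress(address_str), BitcoinMessage(statement), signature_b64)

# Usage (placeholders -- paste the real vanity address, message text, and
# signature from the bottom of the MalwareTechBlog post):
# check_claim("1never...", "message text from the post", "base64 signature from the post")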

Moreover, these pranks should be seen in context. Goofball shenanigans on the blockchain are really, really common. An example is the following transaction:

Notice the vanity bitcoin address transferring money to the Silk Road account. There is also a “Public Note” on this transaction, a feature unique to BlockChain.info — which recently removed the feature because it was so extensively abused.

Bitcoin also has a feature where 40 bytes of a message can be added to transactions. The first transaction sending bitcoins to both Shadow Brokers and Silk Road was this one. If you tell it to “show scripts”, you see that it contains an email address for Cryptome, the biggest and oldest Internet leaks site (albeit not as notorious as Wikileaks).
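If you want to pull such an embedded note out programmatically rather than clicking “show scripts”, the sketch below fetches a transaction from Blockchain.info and decodes any OP_RETURN output as text. The transaction hash is left as a placeholder since the article doesn't spell it out, and the endpoint and field names are assumptions about that API.

import requests

def op_return_messages(tx_hash: str):
    tx = requests.get(f"https://blockchain.info/rawtx/{tx_hash}", timeout=30).json()
    messages = []
    for out in tx["out"]:
        script = bytes.fromhex(out["script"])
        if script and script[0] == 0x6A:   # 0x6a is the OP_RETURN opcode
            payload = script[2:]            # skip opcode + one-byte push length (fine for short notes)
            messages.append(payload.decode("ascii", errors="replace"))
    return messages

# Usage: op_return_messages("<hash of the transaction described above>")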

The point is this: shenanigans and pranks are common on the Internet. What we see with Shadow Brokers is normal trickery. If you are unfamiliar with Bitcoin culture, it may look like some extra special trickery just for Shadow Brokers, but it isn’t.

After much criticism of why his first blogpost was wrong, Krypt3ia published a second. The point of the second was to lambaste his critics — just because he jotted down some idle thoughts in a post doesn’t make him responsible for journalists like ZDNet picking it up as a story that’s now suddenly being passed around.

But he continues with the claim that there is somehow evidence of government involvement, even though his original claim of payments from Silk Road was wrong. As he says:

However, my contention still stands that there be some fuckery going on here with those wallet transactions by the looks of it and that the likely candidate would be the government

Krypt3ia then goes on to claim, about the Rick Astley trick:

So yeah, these accounts as far as I can tell so far without going and spending way to many fucking hours on bitcoin.ifo or some such site, were created to purposely rick roll and fuck with the ShadowBrokers. Now, they may be fractions of bitcoins but I ask you, who the fuck has bitcoin money to burn here? Any of you out there? I certainly don’t and the way it was done, so tongue in cheek kinda reminds me of the audacity of TAO…

Who has bitcoin money to burn? The answer is everyone. Krypt3ia obviously isn’t paying attention to the value of the bitcoins here, which amounts to pennies. Each transaction of 0.0001337 bitcoins is worth about 10 cents at current exchange rates, meaning this Rick Roll cost less than $1. It takes minutes to open an account (like at Circle.com) and use your credit card (or debit card) to buy $1 worth of bitcoin and carry out this prank.

He goes on to say:

If you also look at the wallets that I have marked with the super cool “Invisible Man” logo, you can see how some of those were actually transfering money from wallet to wallet in sequence to then each post transactions to Shadow. Now what is that all about huh? More wallets acting together? As Velma would often say in Scooby Doo, JINKY’S! Something is going on there.

Well, no, these are normal bitcoin transactions. (I’ve made this mistake too — learned about it, then forgot about it, then had to relearn it.) A Bitcoin transaction must consume, in full, the outputs of the previous transactions it refers to. This invariably leaves some bitcoin left over, which has to be transferred back into the user’s wallet. Thus, in my hijinx at the top of this post, you see that the address 1HFWw… receives most of the bitcoin. That address was created by my wallet back in 2014 to receive the unspent portions of transactions. While it looks strange, it’s perfectly normal.
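A made-up worked example (the amounts are invented; the mechanics are not): if the wallet has to consume a 0.01 BTC output to make three 0.0001337 BTC prank payments, almost everything flows straight back to a fresh change address the prankster also controls.

consumed_input = 0.0100000                        # BTC: previous output that must be spent in full
prank_payments = [0.0001337, 0.0001337, 0.0001337]
fee = 0.0001000                                   # paid to the miner

change = consumed_input - sum(prank_payments) - fee
print(f"returned to the sender's own change address: {change:.7f} BTC")
# -> 0.0094989 BTC, which on a transaction graph looks like "another wallet"
#    receiving most of the coins, even though it belongs to the same person.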

It’s easy to point out that Krypt3ia just doesn’t understand much about bitcoin, and is getting excited by Maltego output he doesn’t understand.

But the real issue is confirmation bias. He’s developed a theory and searches for confirmation of that theory. He says “there are connections that cannot be discounted”, when in fact all the connections can easily be discounted with more research, with more knowledge. When he gets attacked, he becomes even more motivated to search for reasons why he’s actually right. He’s not motivated to be proven wrong.

And this is the case with most “attribution” in cybersec. We don’t have smoking guns (such as bitcoin coming from the Silk Road account), and must make do with flimsy data (like here, bitcoin going to the Silk Road account). Sometimes our intuition is right, and this flimsy data does indeed point us to the hacker. In other cases, it leads us astray, as I’ve documented before on this blog. The less we understand something, the more it seems to confirm our theory rather than confirming that we just don’t understand it. “We just don’t know” is rarely an acceptable answer.

I point this out because I’m always the skeptic when the government attributes attacks to North Korea, China, Russia, Iran, and so on. I’ve seen them be right sometimes, and I’ve seen them be absolutely wrong. And when they are wrong, it’s easy to figure out why — because of things like confirmation bias.

Maltego plugin showing my Bitcoin hijinx transaction from above

Creating vanity addresses, for rickrolling or other reasons

National interest is exploitation, not disclosure

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/national-interest-is-exploitation-not.html

Most of us agree that more accountability/transparency is needed in how the government/NSA/FBI exploits 0days. However, the EFF’s positions on the topic are often absurd, which prevents our voices from being heard.

One of the EFF’s long time planks is that the government should be disclosing/fixing 0days rather than exploiting them (through the NSA or FBI). As they phrase it in a recent blog post:

as described by White House Cybersecurity Coordinator, Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.

The EFF isn’t even paying attention to what the government said. The majority of vulnerabilities are useless to the NSA/FBI. Even powerful bugs like Heartbleed or Shellshock are useless, because they can’t easily be weaponized. They can’t easily be put into a point-and-shoot tool and given to cyberwarriors.

Thus, saying that in the “majority of cases vulns should be disclosed” is a tautology. It has no bearing on the minority of bugs the NSA is actually interested in — the cases where we want more transparency and accountability.

That minority of bugs is not discovered accidentally. Accidentally found bugs have little value to the NSA, so the NSA spends a considerable amount of money hunting down bugs that would be of use, and in many cases buying useful vulns from 0day sellers. The EFF pretends the political issue is about 0days the NSA happens to come across accidentally — the real political issue is about the ones the NSA spent a lot of money on.

For these bugs, the minority the NSA actually cares about, we need to ask whether it’s in the national interest to exploit them or to disclose/fix them. And the answer to this question is clearly in favor of exploitation, not fixing. It’s basic math.

An end-to-end Apple iOS 0day (with sandbox escape and persistence) is worth around $1 million, according to recent bounties from Zerodium and Exodus Intel.

There are two competing national interests with such a bug. The first is whether such a bug should be purchased and used against terrorist iPhones in order to disrupt ISIS. The second is whether such a bug should be purchased and disclosed/fixed, to protect American citizens using iPhones.

Well, for one thing, the threat is asymmetric. As Snowden showed, the NSA has widespread control over network infrastructure, and can therefore insert exploits as part of a man-in-the-middle attack. That makes any browser-bugs, such as the iOS bug above, much more valuable to the NSA. No other intelligence organization, no hacker group, has that level of control over networks, especially within the United States. Non-NSA actors have to instead rely upon the much less reliable “watering hole” and “phishing” methods to hack targets. Thus, this makes the bug of extreme value for exploitation by the NSA, but of little value in fixing to protect Americans.

The NSA buys one bug per version of iOS. It only needs one to hack into terrorist phones. But there are many more bugs. If it were in the national interest to buy iOS 0days in order to fix them, buying just one would have little impact, since many more bugs still lurk waiting to be found. The government would have to buy many bugs to make a significant dent in the risk.

And why is the government helping Apple at the expense of competitors anyway? Why is it securing iOS with its bug-bounty program and not Android? And not Windows? And not Adobe PDF? And not the million other products people use?

The point is that no sane person can argue that it’s worth it for the government to spend $1 million per iOS 0day in order to disclose/fix. If it were in the national interest, we’d already have federal bug bounties of that order, for all sorts of products. Long before the EFF argues that it’s in the national interest that purchased bugs should be disclosed rather than exploited, the EFF needs to first show that it’s in the national interest to have a federal bug bounty program at all.

Conversely, it’s insane to argue it’s not worth $1 million to hack into terrorist iPhones. Assuming the rumors are true, the NSA has been incredibly effective at disrupting terrorist networks, reducing the collateral damage of drone strikes and such. Seriously, I know lots of people in government, and they have stories. Even if you discount the value of taking out terrorists, 0days have been hugely effective at preventing “collateral damage” — i.e. the deaths of innocents.

The NSA/DoD/FBI buying and using 0days is here to stay. Nothing the EFF does or says will ever change that. Given this constant, the only question is how We The People get more visibility into what’s going on, how our representatives get more oversight, and how the courts get clearer and more consistent rules. I’m the first to stand up and express my worry that the NSA might unleash a worm that takes down the Internet, or that the FBI secretly hacks into my home devices. Policy makers need to address these issues, not the nonsense issues promoted by the EFF.

Torrentz Gone, KAT Down, Are Torrent Giants Doomed to Fall?

Post Syndicated from Ernesto original https://torrentfreak.com/torrentz-gone-kat-down-are-torrent-giants-doomed-to-fall-160806/

At TorrentFreak we have been keeping a close eye on the torrent ecosystem for more than a decade.

During this time, many sites have shut down, either voluntarily or forced by a court order.

This week meta-search engine Torrentz joined this ever-expanding list. In what appears to be a voluntary action, the site waved its millions of users farewell without prior warning.

The site’s operators have yet to explain their motivations. However, it wouldn’t be a big surprise if the continued legal pressure on torrent sites played a major role, with KAT as the most recent example.

And let’s be honest. Running a site that could make you the target of an FBI investigation, facing over a dozen years in prison, is no joke.

Looking back at the largest torrent sites of the past 15 years, we see a familiar pattern emerge. Many of the sites that make it to the top eventually fall down, often due to legal pressure.

Suprnova (2004)

Suprnova was one of the first ever BitTorrent giants. Founded by the Slovenian-born Andrej Preston, the site dominated the torrent scene during the early days.

It was also one of the first torrent sites to be targeted by the authorities. In November 2004 the site’s servers were raided, and a month later Preston, aka Sloncek, decided to shut it down voluntarily. The police investigation was eventually dropped a few months later.

Lokitorrent (2005)

When Suprnova went down a new site was quick to fill its void. LokiTorrent soon became one of the largest torrent sites around, which also attracted the attention of the MPAA.

LokiTorrent’s owner Ed Webber said he wanted to fight the MPAA and actively collected donations to pay for the legal costs. With success: he raised over $40,000 in a few weeks.

However, not long after that, LokiTorrent was shut down, and all that was left was the iconic “You can click but you can’t hide” MPAA notice.


TorrentSpy (2008)

In 2006 TorrentSpy was more popular than any other BitTorrent site. This quickly changed when it was sued by the MPAA. In 2007 a federal judge ordered TorrentSpy to log all user data and the site opted to ban all U.S. traffic in response.

In March 2008, TorrentSpy owner Justin Bunnell decided to shut down completely, and not much later his company was ordered to pay the Hollywood studios $110 million in damages.

Mininova (2009)

After TorrentSpy’s demise, Mininova became the largest torrent site on the net. The name was inspired by Suprnova, but in 2008 the site was many times larger than its predecessor.

Its popularity eventually resulted in a lawsuit from local anti-piracy outfit BREIN, which Mininova lost. As a result, the site had to remove all infringing torrents, a move which effectively ended its reign.

Today the site is still online, limiting uploads to pre-approved publishers, making it a ghost of the giant it was in the past.

BTJunkie (2012)

In 2012, shortly after the Megaupload raid, torrent site BTJunkie shut down voluntarily.

Talking to TorrentFreak, BTjunkie’s founder said that the legal actions against other file-sharing sites played an important role in making the difficult decision. Witnessing all the trouble his colleagues got into was a constant cause of worry and stress.

“We’ve been fighting for years for your right to communicate, but it’s time to move on. It’s been an experience of a lifetime, we wish you all the best,” he wrote in a farewell message.


isoHunt (2013)

The shutdown of isoHunt a year later wasn’t much of a surprise. The site had been fighting a legal battle with the MPAA for over a decade and eventually lost, agreeing to pay the movie studios a $110m settlement.

As one of the oldest and largest sites at the time, the torrent ecosystem lost another icon. However, as is often the case, another site with the same name quickly took over and is still operating today.

EZTV (2015)

The story of EZTV’s demise is quite different from the rest. The popular TV-torrent distribution group shut down last year after a hostile takeover.

Strangely enough, many people don’t even realize that it’s “gone.” The site continued to operate under new ownership and still releases torrents. However, in solidarity with the original founders these torrents are banned on several other sites.

YIFY/YTS (2015)

What started as a simple movie release group in 2010 turned into one of the largest torrent icons. The group amassed a huge following and its website was generating millions of pageviews per day early last year.

In November 2015 this ended abruptly. Facing a million dollar lawsuit from Hollywood, the group’s founder decided to pull the plug and call it quits. Even though various copycats have since emerged, the real YIFY/YTS is no more.

KickassTorrents (2016)

Three weeks ago Polish law enforcement officers arrested Artem Vaulin, the alleged owner of KickassTorrents. The arrest resulted in the shutdown of the site, which came as a shock to millions of KAT users and the torrent community at large.

Out of nowhere, the largest torrent index disappeared and there are no signs that it’s coming back anytime soon. The site’s community, meanwhile, has found a new home at Katcr.to.

Torrentz (2016)

Torrentz is the latest torrent site to cease its operations. Although no official explanation was given, some of the stories outlined above probably factored into the founders’ decision.

So what will the future bring? Who will be the next giant to fall? It’s obvious that nearly nothing lasts forever in the torrent ecosystem. Well, apart from the ever-resilient Pirate Bay.

And there are several other alternatives still around as well. ExtraTorrent has been around for a decade now and continues to grow, and the same is true for other popular torrent sites.

At least, for now…


New Presidential Directive on Incident Response

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/new_presidentia.html

Last week, President Obama issued a policy directive (PPD-41) on cyber-incident response coordination. The FBI is in charge, which is no surprise. Actually, there’s not much surprising in the document. I suppose it’s important to formalize this stuff, but I think it’s what happens now.

News article. Brief analysis. The FBI’s perspective.

Hacking the Vote

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/hacking_the_vot.html

Russia has attacked the U.S. in cyberspace in an attempt to influence our national election, many experts have concluded. We need to take this national security threat seriously and both respond and defend, despite the partisan nature of this particular attack.

There is virtually no debate about that, either from the technical experts who analyzed the attack last month or the FBI which is analyzing it now. The hackers have already released DNC emails and voicemails, and promise more data dumps.

While their motivation remains unclear, they could continue to attack our election from now to November — and beyond.

Like everything else in society, elections have gone digital. And just as we’ve seen cyberattacks affecting all aspects of society, we’re going to see them affecting elections as well.

What happened to the DNC is an example of organizational doxing — the publishing of private information — an increasingly popular tactic against both government and private organizations. There are other ways to influence elections: denial-of-service attacks against candidate and party networks and websites, attacks against campaign workers and donors, attacks against voter rolls or election agencies, hacks of the candidate websites and social media accounts, and — the one that scares me the most — manipulation of our highly insecure but increasingly popular electronic voting machines.

On the one hand, this attack is a standard intelligence gathering operation, something the NSA does against political targets all over the world and other countries regularly do to us. The only thing different between this attack and the more common Chinese and Russian attacks against our government networks is that the Russians apparently decided to publish selected pieces of what they stole in an attempt to influence our election, and to use Wikileaks as a way to both hide their origin and give them a veneer of respectability.

All of the attacks listed above can be perpetrated by other countries and by individuals as well. They’ve been done in elections in other countries. They’ve been done in other contexts. The Internet broadly distributes power, and what was once the sole purview of nation states is now in the hands of the masses. We’re living in a world where disgruntled people with the right hacking skills can influence our elections, wherever they are in the world.

The Snowden documents have shown the world how aggressive our own intelligence agency is in cyberspace. But despite all of the policy analysis that has gone into our own national cybersecurity, we seem perpetually taken by surprise when we are attacked. While foreign interference in national elections isn’t new, and something the U.S. has repeatedly done, electronic interference is a different animal.

The Obama Administration is considering how to respond, but politics will get in the way. Were this an attack against a popular Internet company, or a piece of our physical infrastructure, we would all be together in response. But because these attacks affect one political party, the other party benefits. Even worse, the benefited candidate is actively inviting more foreign attacks against his opponent, though he now says he was just being sarcastic. Any response from the Administration or the FBI will be viewed through this partisan lens, especially because the President is a Democrat.

We need to rise above that. These threats are real and they affect us all, regardless of political affiliation. That this particular attack targeted the DNC is no indication of who the next attack might target. We need to make it clear to the world that we will not accept interference in our political process, whether by foreign countries or lone hackers.

However we respond to this act of aggression, we also need to increase the security of our election systems against all threats — and quickly.

We tend to underestimate threats that haven’t happened — we discount them as “theoretical” — and overestimate threats that have happened at least once. The terrorist attacks of 9/11 are a showcase example of that: Administration officials ignored all the warning signs, and then drastically overreacted after the fact. These Russian attacks against our voting system have happened. And they will happen again, unless we take action.

If a foreign country attacked U.S. critical infrastructure, we would respond as a nation against the threat. But if that attack falls along political lines, the response is more complicated. It shouldn’t be. This is a national security threat against our democracy, and needs to be treated as such.

This essay previously appeared on CNN.com.

Mr. Robot ‘Plugs’ uTorrent and Pirate Release Groups

Post Syndicated from Ernesto original https://torrentfreak.com/mr-robot-plugs-utorrent-and-pirate-release-groups-160729/

Earlier this month the second season of Mr. Robot premiered.

The TV-show, which portrays and appeals to a subculture of nerds, hacktivists, hackers and technology insiders, has become an instant cult hit.

Aside from classic hacker groups, the makers of the show were inspired by The Pirate Bay founders. Last year Mr. Robot creator Sam Esmail admitted that the main character Elliot is in part modeled after the illustrious trio.

In addition, Mr. Robot also includes various nods and easter eggs for the technology inclined. For example, the first episode of the second season included an online trail for people to follow in the real world.

In the most recent episode, pirates were saluted during a short scene. Without giving away any spoilers, the main character Elliot was shown playing a pirated movie via his PLEX media server.

The movie in question, The Careful Massacre of the Bourgeoisie, is “fake” but that’s not true for the other pirate references displayed.

uTorrent / PLEX and pirate groups (large)


As the screenshot above shows, Elliot uses a recent version of the popular BitTorrent client uTorrent, showing a house ad for an upgrade to uTorrent Plus.

In the “movies” folder, which is also shown, we can see various other movies complete with release group tags such as YIFY, PRiSTiNE, DiPSHiT, RARBG and CRiTERiON.

It is safe to say that these were not included by accident but as a nod towards the pirates in the audience. The same can be said for the iconic FBI warning that’s shown when the movie starts playing.

FBI warning (large)


The mention didn’t go unnoticed by the pirate groups in question. We reached out to YIFY, who quit after running into legal trouble last year, and he appreciates the mention.

“Makes me feel like a little bit of a ‘bad ass’, even though it’s a pretty minor thing in the show still a cheeky smile came about,” YIFY told TF.

“I do like the fact that the producers of Mr Robot specifically do try to get an accurate reflection of today’s real world online.”

While the names of the pirate groups are indeed accurate, there may be room for improvement. A member of another release group pictured in the episode, who commented on condition of anonymity, questioned Elliot’s BitTorrent client preference.

“I find it hard to believe that the main character in the show – a pro hacker – is using a non-open source software to download or stream his torrents,” the group member said.
