Post Syndicated from Andy original https://torrentfreak.com/apple-ceo-is-optimistic-vpn-apps-will-return-to-china-app-store-171206/
During the final days of July, Apple was forced to remove many of the most-used VPN applications from its Chinese App Store. In a short email from the company, VPN providers and software developers were told that VPN applications are considered illegal in China.
“We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.
While the position on the ground doesn’t appear to have changed in the interim, Apple Chief Executive Tim Cook today expressed optimism that the VPN apps would eventually be restored to their former positions on China’s version of the App Store.
“My hope over time is that some of the things, the couple of things that’s been pulled, come back,” Cook said. “I have great hope on that and great optimism on that.”
According to Reuters, Cook said that he always tries to find ways to work together to settle differences and if he gets criticized for that “so be it.”
Speaking at the Fortune Forum in the Chinese city of Guangzhou, Cook said that he believes strongly in freedoms. But back home in the US, Apple has been strongly criticized for not doing enough to uphold freedom of speech and communication in China.
Back in October, two US senators wrote to Cook asking why the company had removed the VPN apps from the company’s store in China.
“VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet,” senators Ted Cruz and Patrick Leahy wrote.
“While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’.”
They were comments Senator Leahy underlined again yesterday.
“American tech companies have become leading champions of free expression. But that commitment should not end at our borders,” Leahy told CNBC.
“Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights.”
Whether the optimism expressed by Cook today is based on discussions with the Chinese government is unknown. However, it seems unlikely that authorities would be willing to significantly compromise on their dedication to maintaining the Great Firewall, which not only controls access to locally controversial content but also seeks to boost the success of Chinese companies.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/nsa_red_disk_da.html
The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM’s Futures Directorate, was slated to complement the Army’s so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.
Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.
Red Disk was a modular, customizable, and scalable system for sharing intelligence, such as electronic intercepts, drone footage, satellite imagery, and classified reports, which troops could access with laptops and tablets on the battlefield. Markings on files found in several directories imply the disk is “top secret” and restricted from being shared with foreign intelligence partners.
A couple of points. One, this isn’t particularly sensitive. It’s an intelligence distribution system under development. It’s not raw intelligence. Two, this doesn’t seem to be classified data. Even the article hedges, using the unofficial term “highly sensitive.” Three, it doesn’t seem that Chris Vickery, the researcher who discovered the data, has published it.
Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.
This doesn’t feel like a big deal to me.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/warrant_protect.html
The cell phones we carry with us constantly are the most perfect surveillance device ever invented, and our laws haven’t caught up to that reality. That might change soon.
This week, the Supreme Court will hear a case with profound implications on your security and privacy in the coming years. The Fourth Amendment’s prohibition of unlawful search and seizure is a vital right that protects us all from police overreach, and the way the courts interpret it is increasingly nonsensical in our computerized and networked world. The Supreme Court can either update current law to reflect the world, or it can further solidify an unnecessary and dangerous police power.
The case centers on cell phone location data and whether the police need a warrant to get it, or if they can use a simple subpoena, which is easier to obtain. Current Fourth Amendment doctrine holds that you lose all privacy protections over any data you willingly share with a third party. Your cellular provider, under this interpretation, is a third party with whom you’ve willingly shared your movements, 24 hours a day, going back months — even though you don’t really have any choice about whether to share with them. So police can request records of where you’ve been from cell carriers without any judicial oversight. The case before the court, Carpenter v. United States, could change that.
Traditionally, information that was most precious to us was physically close to us. It was on our bodies, in our homes and offices, in our cars. Because of that, the courts gave that information extra protections. Information that we stored far away from us, or gave to other people, afforded fewer protections. Police searches have been governed by the “third-party doctrine,” which explicitly says that information we share with others is not considered private.
The Internet has turned that thinking upside-down. Our cell phones know who we talk to and, if we’re talking via text or e-mail, what we say. They track our location constantly, so they know where we live and work. Because they’re the first and last thing we check every day, they know when we go to sleep and when we wake up. Because everyone has one, they know whom we sleep with. And because of how those phones work, all that information is naturally shared with third parties.
More generally, all our data is literally stored on computers belonging to other people. It’s our e-mail, text messages, photos, Google docs, and more all in the cloud. We store it there not because it’s unimportant, but precisely because it is important. And as the Internet of Things computerizes the rest of our lives, even more data will be collected by other people: data from our health trackers and medical devices, data from our home sensors and appliances, data from Internet-connected “listeners” like Alexa, Siri, and your voice-activated television.
All this data will be collected and saved by third parties, sometimes for years. The result is a detailed dossier of your activities more complete than any private investigator – or police officer – could possibly collect by following you around.
The issue here is not whether the police should be allowed to use that data to help solve crimes. Of course they should. The issue is whether that information should be protected by the warrant process that requires the police to have probable cause to investigate you and get approval by a court.
Warrants are a security mechanism. They prevent the police from abusing their authority to investigate someone they have no reason to suspect of a crime. They prevent the police from going on “fishing expeditions.” They protect our rights and liberties, even as we willingly give up our privacy to the legitimate needs of law enforcement.
The third-party doctrine never made a lot of sense. Just because I share an intimate secret with my spouse, friend, or doctor doesn’t mean that I no longer consider it private. It makes even less sense in today’s hyper-connected world. It’s long past time the Supreme Court recognized that a months-long history of my movements is private, and my e-mails and other personal data deserve the same protections, whether they’re on my laptop or on Google’s servers.
This essay previously appeared in the Washington Post.
EDITED TO ADD (12/1): Good commentary on the Supreme Court oral arguments.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/websites_use_se.html
The security researchers at Princeton are posting:
You may know that most websites have third-party analytics scripts that record which pages you visit and the searches you make. But lately, more and more sites use “session replay” scripts. These scripts record your keystrokes, mouse movements, and scrolling behavior, along with the entire contents of the pages you visit, and send them to third-party servers. Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.
The stated purpose of this data collection includes gathering insights into how users interact with websites and discovering broken or confusing pages. However, the extent of data collected by these services far exceeds user expectations; text typed into forms is collected before the user submits the form, and precise mouse movements are saved, all without any visual indication to the user. This data can’t reasonably be expected to be kept anonymous. In fact, some companies allow publishers to explicitly link recordings to a user’s real identity.
The researchers will post more details on their blog; I’ll link to them when they’re published.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/motherboard_dig.html
This digital security guide by Motherboard is very good. I put it alongside EFF’s “Surveillance Self-Defense” and John Scott-Railton’s “Digital Security Low Hanging Fruit.” There’s also “Digital Security and Privacy for Human Rights Defenders.”
There are too many of these….
Post Syndicated from Andy original https://torrentfreak.com/us-senators-ask-apple-why-vpn-apps-were-removed-in-china-171020/
As part of what is now clearly a crackdown on Great Firewall-evading tools and services, during the summer Chinese government pressure reached technology giant Apple.
On or around July 29, Apple removed many of the most-used VPN applications from its Chinese app store. In a short email from the company, VPN providers were informed that VPN applications are considered illegal in China.
“We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.
Now, in a letter sent to Apple CEO Tim Cook, US senators Ted Cruz and Patrick Leahy express concern at the move by Apple, noting that if reports of the software removals are true, the company could be assisting China’s restrictive approach to the Internet.
“VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet.”
Describing China as a country with “an abysmal human rights record, including with respect to the rights of free expression and free access to information, both online and offline”, the senators cite Reporters Without Borders who previously labeled the country as “the enemy of the Internet”.
While senators Cruz and Leahy go on to praise Apple for its contribution to the spread of information, they criticize the company for going along with the wishes of the Chinese government as it seeks to suppress knowledge and communication.
“While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’,” the senators write.
“To the contrary, if Apple complies with such demands from the Chinese government it inhibits free expression for users across China, particularly in light of the Cyberspace Administration of China’s new regulations targeting online anonymity.”
In January, a notice published by China’s Ministry of Industry and Information Technology said that the government had indeed launched a 14-month campaign to crack down on local ‘unauthorized’ Internet platforms.
This means that all VPN services have to be pre-approved by the Government if they want to operate in China. And the aggression against VPNs and their providers didn’t stop there.
In September, a Chinese man who sold Great Firewall-evading VPN software via a website was sentenced to nine months in prison by a Chinese court. Just weeks later, a software developer who set up a VPN for his own use but later sold access to the service was arrested and detained for three days.
This emerging pattern is clearly a concern for the senators, who are now demanding that Tim Cook respond to ten questions (pdf), including whether Apple raised concerns about China’s VPN removal demands and details of how many apps were removed from its store. The senators also want to see copies of any pro-free speech statements Apple has made in China.
Whether the letter will make any difference on the ground in China remains to be seen, but the public involvement of the senators and technology giant Apple is certain to thrust censorship and privacy further into the public eye.
Post Syndicated from Andy original https://torrentfreak.com/purevpn-explains-how-it-helped-the-fbi-catch-a-cyberstalker-171016/
In early October, Ryan S. Lin, 24, of Newton, Massachusetts, was arrested on suspicion of conducting “an extensive cyberstalking campaign” against a 24-year-old Massachusetts woman, as well as her family members and friends.
The Department of Justice described Lin’s offenses as a “multi-faceted” computer hacking and cyberstalking campaign. Launched in April 2016 when he began hacking into the victim’s online accounts, Lin allegedly obtained personal photographs and sensitive information about her medical and sexual histories and distributed that information to hundreds of other people.
Details of what information the FBI compiled on Lin can be found in our earlier report but aside from his alleged crimes (which are both significant and repugnant), it was PureVPN’s involvement in the case that caused the most controversy.
In a report compiled by an FBI special agent, it was revealed that the Hong Kong-based company’s logs helped the authorities net the alleged criminal.
“Significantly, PureVPN was able to determine that their service was accessed by the same customer from two originating IP addresses: the RCN IP address from the home Lin was living in at the time, and the software company where Lin was employed at the time,” the agent’s affidavit reads.
Among many in the privacy community, this revelation was met with disappointment. On its website, PureVPN claims to carry no logs, and it’s generally expected that so-called “no-logging” VPN providers will provide people with some anonymity, at least as far as their own service goes. Now, several days after the furor, the company has responded to its critics.
In a fairly lengthy statement, the company begins by confirming that it definitely doesn’t log what websites a user views or what content he or she downloads.
However, that’s only half the problem. While it doesn’t log user activity (what sites people visit or content they download), it does log the IP addresses that customers use to access the PureVPN service. These, given the right circumstances, can be matched to external activities thanks to logs carried by other web companies.
PureVPN talks about logs held by Google’s Gmail service to illustrate its point.
“A network log is automatically generated every time a user visits a website. For the sake of this example, let’s say a user logged into their Gmail account. Every time they accessed Gmail, the email provider created a network log,” the company explains.
“If you are using a VPN, Gmail’s network log would contain the IP provided by PureVPN. This is one half of the picture. Now, if someone asks Google who accessed the user’s account, Google would state that whoever was using this IP, accessed the account.
“If the user was connected to PureVPN, it would be a PureVPN IP. The inquirer [in the Lin case, the FBI] would then share timestamps and network logs acquired from Google and ask them to be compared with the network logs maintained by the VPN provider.”
Now, if PureVPN carried no logs – literally no logs – it would not be able to help with this kind of inquiry. That was the case last year when the FBI approached Private Internet Access for information and the company was unable to assist.
However, as is made pretty clear by PureVPN’s explanation, the company does log user IP addresses and timestamps which reveal when a user was logged on to the service. It doesn’t matter that PureVPN doesn’t log what the user allegedly did online, since the third-party service already knows that information to the precise second.
Following the example, Gmail knows that a user sent an email at 10:22am on Monday, October 16, from a PureVPN IP address. So, if PureVPN is approached by the FBI, the company can confirm that User X was using the same IP address at exactly the same time, and his home IP address was XXX.XX.XXX.XX. Effectively, the combined logs link one IP address to the other and the user is revealed. It’s that simple.
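To make the correlation concrete, here is a minimal sketch in Python of the matching step. The log formats, field names, and values are entirely hypothetical; real provider logs differ, but the principle is the same: an IP address plus a timestamp on each side is all the join requires.

```python
from datetime import datetime

# Hypothetical example of what a web service (e.g., Gmail) can hand over:
# the time an account was accessed and the (VPN) IP that accessed it.
service_log = {"time": datetime(2017, 10, 16, 10, 22), "ip": "104.200.0.5"}

# Hypothetical example of what a VPN provider that logs connections holds:
# which customer held which exit IP, from which home IP, and when.
vpn_sessions = [
    {"user": "user_x", "exit_ip": "104.200.0.5", "home_ip": "73.10.20.30",
     "start": datetime(2017, 10, 16, 9, 50), "end": datetime(2017, 10, 16, 11, 5)},
    {"user": "user_y", "exit_ip": "104.200.0.9", "home_ip": "88.10.2.30",
     "start": datetime(2017, 10, 16, 10, 0), "end": datetime(2017, 10, 16, 10, 45)},
]

def match(event, sessions):
    """Return the sessions that held the event's IP at the event's time."""
    return [s for s in sessions
            if s["exit_ip"] == event["ip"]
            and s["start"] <= event["time"] <= s["end"]]

for s in match(service_log, vpn_sessions):
    print(f"{s['user']} (home IP {s['home_ip']}) held {s['exit_ip']} at that time")
```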
It is for this reason that in TorrentFreak’s annual summary of no-logging VPN providers, the very first question we ask every single company reads as follows:
Do you keep ANY logs which would allow you to match an IP-address and a time stamp to a user/users of your service? If so, what information do you hold and for how long?
Clearly, if a company says “yes we log incoming IP addresses and associated timestamps”, any claim to total user anonymity is ended right there and then.
A logging VPN is not completely useless (it will still stop the prying eyes of ISPs and similar surveillance, while also defeating throttling and site-blocking), but if you’re a whistle-blower with a job or even your life to protect, this level of protection is entirely inadequate.
The take-home points from this controversy are numerous, but perhaps the most important is for people to read and understand VPN provider logging policies.
Secondly, and just as importantly, VPN providers need to be extremely clear about the information they log. Not tracking browsing or downloading activities is all well and good, but if home IP addresses and timestamps are stored, this needs to be made clear to the customer.
Finally, VPN users should not be evil. There are plenty of good reasons to stay anonymous online but cyberstalking, death threats and ruining people’s lives are not included. Fortunately, the FBI have offline methods for catching this type of offender, and long may that continue.
PureVPN’s blog post is available here.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/my_blogging.html
Blog regulars will notice that I haven’t been posting as much lately as I have in the past. There are two reasons. One, it feels harder to find things to write about. So often it’s the same stories over and over. I don’t like repeating myself. Two, I am busy writing a book. The title is still Click Here to Kill Everybody: Peril and Promise in a Hyper-Connected World. The book is a year late, and has a very different table of contents than it had in 2016. I have been writing steadily since mid-August. The book is due to the publisher at the end of March 2018, and will be published at the beginning of September.
This is the current table of contents:
- Introduction: Everything is Becoming a Computer
- Part 1: The Trends
- 1. Capitalism Continues to Drive the Internet
- 2. Customer/User Control is Next
- 3. Government Surveillance and Control is Also Increasing
- 4. Cybercrime is More Profitable Than Ever
- 5. Cyberwar is the New Normal
- 6. Algorithms, Automation, and Autonomy Bring New Dangers
- 7. What We Know About Computer Security
- 8. Agile is Failing as a Security Paradigm
- 9. Authentication and Identification are Getting Harder
- 10. Risks are Becoming Catastrophic
- Part 2: The Solutions
- 11. We Need to Regulate the Internet of Things
- 12. We Need to Defend Critical Infrastructure
- 13. We Need to Prioritize Defense Over Offense
- 14. We Need to Make Smarter Decisions About Connecting
- 15. What’s Likely to Happen, and What We Can Do in Response
- 16. Where Policy Can Go Wrong
- Conclusion: Technology and Policy, Together
So that’s what’s been happening.
Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/responsible-encryption-fallacies.html
Deputy Attorney General Rod Rosenstein gave a speech recently calling for “Responsible Encryption” (aka. “Crypto Backdoors”). It’s full of dangerous ideas that need to be debunked.
The importance of law enforcement
But the Mirai case demonstrated the opposite: how law enforcement is not needed. They made no arrests in the case. A year later, they still haven’t a clue who did it.
Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity providers like Google and Amazon that can handle such large attacks easily.
In other words, we the people fixed the major Mirai problem, and law-enforcement didn’t.
Where law enforcement can’t go
Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection.
Of course our society has places “impervious to detection”, protected by both legal and natural barriers.
An example of a legal barrier is how spouses can’t be forced to testify against each other. This barrier is impervious.
A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn’t Rosenstein prosecuting those guilty of CIA torture?
Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech to discussing the “rule of law” and how it applies to everyone, “protecting people from abuse by the government”? It obviously doesn’t: there’s one rule for government and a different rule for the people, and the rule for government means there are lots of places law enforcement can’t go to gather evidence.
Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn’t apply to the President, Congress, the NSA, the Army, or Rosenstein himself.
Then there are the natural barriers. The police can’t read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerprints. They can’t go backwards in time.
I mention this because encryption is a natural barrier. It’s their job to overcome this barrier if they can, to crack crypto and so forth. It’s not our job to do it for them.
It’s like the camera that increasingly comes with TVs for video conferencing, or the microphone on Alexa-style devices that is always listening. These suddenly create evidence the police want our help in gathering: keeping the camera turned on all the time and recording to disk, in case the police later get a warrant to peer backward in time at what happened in our living rooms. The “nothing is impervious” argument applies here as well. And it’s equally bogus here. By declining to record our own activities for the police, we aren’t somehow breaking some long-standing tradition.
And this is the scary part. It’s not that we are breaking some ancient tradition that there’s no place the police can’t go (with a warrant). Instead, crypto backdoors break the tradition: never before have I been forced to help the police eavesdrop on me, even before I’m a suspect, even before any crime has been committed. Sure, laws like CALEA force the phone companies to help the police against wrongdoers — but here Rosenstein is insisting I help the police against myself.
Balance between privacy and public safety
Rosenstein repeats the frequent claim that encryption upsets the balance between privacy and public safety:
Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety.
This is laughable, because technology has swung the balance alarmingly in favor of law enforcement. Far from “Going Dark” as his side claims, the problem we are confronted with is “Going Light”, where the police state monitors our every action.
You are surrounded by recording devices. If you walk down the street in town, outdoor surveillance cameras feed police facial recognition systems. If you drive, automated license plate readers can track your route. If you make a phone call or use a credit card, the police get a record of the transaction. If you stay in a hotel, they demand your ID, for law enforcement purposes.
And that’s their stuff, which is nothing compared to your stuff. You are never far from a recording device you own, such as your mobile phone, TV, Alexa/Siri/OkGoogle device, laptop. Modern cars from the last few years increasingly have always-on cell connections and data recorders that record your every action (and location).
Even if you hike out into the country, when you get back, the FBI can subpoena your GPS device to track down your hidden weapons cache, or grab the photos from your camera.
And this is all offline. So much of what we do is now online. Of the photographs you own, fewer than 1% are printed out, the rest are on your computer or backed up to the cloud.
Your phone is also a GPS recorder of your exact position all the time, which, if the government wins the Carpenter case, the police can grab without a warrant. Tagging all citizens with a recording device of their position is not “balance” but the premise for a novel more dystopian than 1984.
If suspected of a crime, which would you rather the police searched? Your person, houses, papers, and physical effects? Or your mobile phone, computer, email, and online/cloud accounts?
The balance of privacy and safety has swung so far in favor of law enforcement that rather than debating whether they should have crypto backdoors, we should be debating how to add more privacy protections.
“But it’s not conclusive”
Rosenstein’s claim may be true, that a lot of criminals will go free because the other electronic data isn’t convincing enough. But I’d need to see that claim backed up with hard studies, not thrown out for emotional impact.
Terrorists and child molesters
Definition of “backdoor”
No one calls any of those functions [like key recovery] a “back door.” In fact, those capabilities are marketed and sought out by many users.
But that’s because the term “backdoor” refers less to how it’s done and more to who is doing it. If I set up a recovery password with Apple, I’m the one doing it to myself, so we don’t call it a backdoor. If it’s the police, spies, hackers, or criminals, then we call it a “backdoor” — even if it’s identical technology.
Wikipedia uses the key escrow feature of the 1990s Clipper Chip as a prime example of what everyone means by “backdoor.” By “no one,” Rosenstein is including Wikipedia, which is obviously incorrect.
Though in truth, it’s not going to be the same technology. The needs of law enforcement are different than my personal key escrow/backup needs. In particular, there are unsolvable problems, such as a backdoor that works for the “legitimate” law enforcement in the United States but not for the “illegitimate” police states like Russia and China.
The question is whether a “provider” like Telegram, a Russian company beyond US law, provides this feature. Or, by extension, whether individuals should be free to install whatever software they want, regardless of provider.
If, for some reason, the US is able to convince all such providers (including Telegram) to install a backdoor, it still doesn’t solve the problem, as users can just build their own end-to-end encryption app that has no provider. It’s like email: some use major providers like Gmail, others set up their own email servers.
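The point is easy to demonstrate. With an off-the-shelf library such as PyNaCl, the core of an end-to-end encrypted exchange is a handful of lines, and there is no provider anywhere on whom to serve an order. A minimal sketch (key exchange and transport deliberately omitted):

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair locally; no provider is involved.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts to Bob with her private key and his public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```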
Ultimately, this means that any law mandating “crypto backdoors” is going to target users not providers. Rosenstein tries to make a comparison with what plain-old telephone companies have to do under old laws like CALEA, but that’s not what’s happening here. Instead, for such rules to have any effect, they have to punish users for what they install, not providers.
Apple is morally weak
Of course they [Apple] do. They are in the business of selling products and making money.
We [the DoJ] use a different measure of success. We are in the business of preventing crime and saving lives.
A better phrasing would have been:
They are in the business of giving customers what they want.
We are in the business of giving voters what they want.
They are willing to be vile
Companies are willing to make accommodations when required by the government. Recent media reports suggest that a major American technology company developed a tool to suppress online posts in certain geographic areas in order to embrace a foreign government’s censorship policies.
Companies are willing to acquiesce to vile requests made by police-states. Therefore, they should acquiesce to our vile police-state requests.
There is no constitutional right to sell warrant-proof encryption.
That’s why we have the Bill of Rights: to protect the 49% against abuse by the 51%. Regardless of whether the Supreme Court agrees that it’s protected by the current Constitution, it is the sort of right that might exist regardless of what the Constitution says.
Obliged to speak the truth
Those of us who swear to protect the rule of law have a different motivation. We are obliged to speak the truth.
The truth is that “going dark” threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.
But he’s not obliged to tell his spouse his honest opinion of whether that new outfit makes them look fat. Likewise, Rosenstein knows his opinion on public policy doesn’t fall into this category. He can say with impunity either that global warming doesn’t exist or that it’ll cause a biblical deluge within 5 years. Both are factually untrue, but neither is going to get him fired.
And this particular claim is also exaggerated bunk. While everyone agrees encryption makes law enforcement’s job harder than with backdoors, nobody honestly believes it can “disable” law enforcement. While everyone agrees that encryption helps terrorists, nobody believes it can enable them to act with “impunity”.
I feel bad here. It’s a terrible thing to question your opponent’s character this way. But Rosenstein made this unavoidable when he clearly, with no ambiguity, put his integrity as Deputy Attorney General on the line behind the statement that “going dark threatens to disable law enforcement and enable criminals and terrorists to operate with impunity”. I feel it’s a bald-faced lie, but you don’t need to take my word for it. Read his own words yourself and judge his integrity.
Rosenstein’s speech includes repeated references to ideas like “oath”, “honor”, and “duty”. It reminds me of Col. Jessup’s speech in the movie “A Few Good Men”.
If you’ll recall, it was a rousing speech: “you want me on that wall” and “you use words like honor as a punchline.” Of course, since he was violating his oath and sending two privates to death row in order to avoid being held accountable, it was Jessup himself who was crapping on the concepts of “honor,” “oath,” and “duty.”
And so is Rosenstein. He imagines himself on that wall, doing admittedly terrible things, justified by his duty to protect citizens. He imagines that it’s he who is honorable, while the rest of us are not, even as he utters bald-faced lies to further his own power and authority.
We activists oppose crypto backdoors not because we lack honor, or because we are criminals, or because we support terrorists and child molesters. It’s because we value privacy and fear government officials who get corrupted by power. It’s not that we fear Trump becoming a dictator, it’s that we fear bureaucrats at Rosenstein’s level becoming drunk on authority — which Rosenstein demonstrably has. His speech is a long train of corrupt ideas pursuing the same object of despotism — a despotism we oppose.
In other words, we oppose crypto backdoors because they are a tool not of law enforcement, but of despotism.
Post Syndicated from Ujjwal Ratan original https://aws.amazon.com/blogs/big-data/build-a-schema-on-read-analytics-pipeline-using-amazon-athena/
A robust analytics platform is critical for an organization’s success. However, an analytical system is only as good as the data that is used to power it. Wrong or incomplete data can derail analytical projects completely. Moreover, with varied data types and sources being added to the analytical platform, it’s important that the platform is able to keep up. This means the system has to be flexible enough to adapt to changing data types and volumes.
For a platform based on relational databases, this would mean countless data model updates, code changes, and regression testing, all of which can take a very long time. For a decision support system, it’s imperative to provide the correct answers quickly. Architects must design analytical platforms with the following criteria:
- Help ingest data easily from multiple sources
- Analyze it for completeness and accuracy
- Use it for metrics computations
- Store the data assets and scale as they grow rapidly without causing disruptions
- Adapt to changes as they happen
- Have a relatively short development cycle that is repeatable and easy to implement.
Schema-on-read is a unique approach for storing and querying datasets. It reverses the order of things when compared to schema-on-write in that the data can be stored as is and you apply a schema at the time that you read it. This approach has changed the time to value for analytical projects. It means you can immediately load data and can query it to do exploratory activities.
In this post, I show how to build a schema-on-read analytical pipeline, similar to the one used with relational databases, using Amazon Athena. The approach is completely serverless, which allows the analytical platform to scale as more data is stored and processed via the pipeline.
The data integration challenge
Part of the reason why it’s so difficult to build analytical platforms is the challenge of data integration. This is usually done as a series of Extract Transform Load (ETL) jobs that pull data from multiple varied sources and integrate them into a central repository. The multiple steps of ETL can be viewed as a pipeline where raw data is fed in one end and integrated data accumulates at the other.
Two major hurdles complicate the development of ETL pipelines:
- The sheer volume of data needing to be integrated makes it very difficult to scale these jobs.
- There is an immense variety of data formats from which information needs to be gathered.
Put this in context by looking at the healthcare industry. Building an analytical pipeline for healthcare usually involves working with multiple datasets that cover care quality, provider operations, patient clinical records, and patient claims data. Customers are now looking for even more data sources that may contain valuable information that can be analyzed. Examples include social media data, data from devices and sensors, and biometric and human-generated data such as notes from doctors. These new data sources do not rely on fixed schemas, so integrating them by conventional ETL jobs is much more difficult.
The volume of data available for analysis is growing as well. According to an article published by NCBI, the US healthcare system reached a collective data volume of 150 exabytes in 2011. With the current rate of growth, it is expected to quickly reach zettabyte scale, and soon grow into the yottabytes.
The traditional data warehouses of the early 2000s, like the one shown in the following diagram, were based on a standard four-layer approach of ingest, stage, store, and report. This involved building and maintaining huge data models that suited certain sources and analytical queries. This type of design is based on a schema-on-write approach, where you write data into a predefined schema and read data by querying it.
As we progressed to more varied datasets and analytical requirements, the predefined schemas were not able to keep up. Moreover, by the time the models were built and put into production, the requirements had changed.
Schema-on-read provides much needed flexibility to an analytical project. Not having to rely on physical schemas improves the overall performance when it comes to high volume data loads. Because there are no physical constraints for the data, it works well with datasets with undefined data structures. It gives customers the power to experiment and try out different options.
How can Athena help?
Athena is a serverless analytical query engine that allows you to start querying data stored in Amazon S3 instantly. It supports standard formats like CSV and Parquet and integrates with Amazon QuickSight, which allows you to build interactive business intelligence reports.
The Amazon Athena User Guide provides multiple best practices that should be considered when using Athena for analytical pipelines at production scale. There are various performance tuning techniques that you can apply to speed up query performance. For more information, see Top 10 Performance Tuning Tips for Amazon Athena.
To demonstrate this solution, I use the healthcare dataset from the Centers for Disease Control (CDC) Behavioral Risk Factor Surveillance system (BRFSS). It gathers data via telephone surveys for health-related risk behaviors and conditions, and the use of preventive services. The dataset is available as zip files from the CDC FTP portal for general download and analysis. There is also a user guide with comprehensive details about the program and the process of collecting data.
The following diagram shows the schema-on-read pipeline that demonstrates this solution.
In this architecture, S3 is the central data repository for the CSV files, which are divided by behavioral risk factors like smoking, drinking, obesity, and high blood pressure.
Athena is used to collectively query the CSV files in S3. It uses the AWS Glue Data Catalog to maintain the schema details and applies it at the time of querying the data.
The dataset is further filtered and transformed into a subset that is specifically used for reporting with Amazon QuickSight.
As new data files become available, they are incrementally added to the S3 bucket and the subset query automatically appends them to the end of the table. The dashboards in Amazon QuickSight are refreshed with the new values of calculated metrics.
To use Athena in an analytical pipeline, you have to consider how to design it for initial data ingestion and the subsequent incremental ingestions. Moreover, you also have to decide on the mechanism to trigger a particular stage in the pipeline, which can either be scheduled or event-based.
For the purposes of this post, you build a simple pipeline where the data is:
- Ingested into a staging area in S3
- Filtered and transformed for reporting into a different S3 location
- Transformed into a reporting table in Athena
- Included into a dashboard created using Amazon QuickSight
- Incrementally ingested for updates
This approach is highly customizable and can be used for building more complex pipelines with many more steps.
Data staging in S3
The data ingestion into S3 is fairly straightforward. The files can be ingested via the S3 command line interface (CLI), API, or the AWS Management Console. The data files are in CSV format and already divided by the behavioral condition. To improve performance, I recommend that you use partitioned data with Athena, especially when dealing with large volumes. You can use pre-partitioned data in S3 or build partitions later in the process. The example you are working with has a total of 247 CSV files storing about 205 MB of data across them, but typical production scale deployment would be much larger.
To automate the pipeline, you can make it either event-based or schedule-based. If you take the event-based approach, you can use S3 events to trigger an action when files are uploaded: the event invokes an AWS Lambda function that corresponds to another step in the pipeline. Traditional ETL jobs have to rely on mechanisms like database triggers to enable this, which can cause additional performance overhead.
If you choose to go with a schedule-based approach, you can use Lambda with scheduled events. The schedule is managed via a cron expression, and the Lambda function runs the next step of the pipeline. This is suitable for workloads similar to a scheduled batch ETL job.
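As a rough illustration of both approaches, here is a minimal Lambda handler sketch in Python. The bucket, database, and query are hypothetical placeholders, not values from this post; the same function can be wired either to an S3 object-created event or to a scheduled CloudWatch Events rule.

```python
import boto3

athena = boto3.client("athena")

def handler(event, context):
    # In the event-based setup, log which staging files arrived.
    for record in event.get("Records", []):
        print("New staging file:", record["s3"]["object"]["key"])

    # Kick off the next pipeline step; the query here is a placeholder.
    athena.start_query_execution(
        QueryString="SELECT COUNT(*) FROM brfss_staging",  # placeholder query
        QueryExecutionContext={"Database": "brfss"},       # hypothetical database
        ResultConfiguration={
            "OutputLocation": "s3://your-query-results-bucket/pipeline/"
        },
    )
```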
Filter and data transformation
To filter and transform the dataset, first look at the overall counts and structure of the data. This allows you to choose the columns that are important for a given report and use the filter clause to extract a subset of the data. Transforming the data as you progress through the pipeline ensures that you are only exposing relevant data to the reporting layer, to optimize performance.
To look at the entire dataset, create a table in Athena to go across the entire data volume. This can be done using the following query:
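(A sketch only; the column names below are illustrative stand-ins for the actual BRFSS fields.)

```sql
-- Illustrative table and column names, not the post's original DDL.
CREATE EXTERNAL TABLE IF NOT EXISTS brfss_staging (
  year         INT,
  locationdesc STRING,
  topic        STRING,
  question     STRING,
  data_value   DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://YourBucket/YourPrefix/'
TBLPROPERTIES ('skip.header.line.count' = '1');
```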
Replace YourBucket and YourPrefix with your corresponding values.
In this case, there are a total of ~1.4 million records, which you get by running a simple COUNT(*) query on the table.
From here, you can run multiple analysis queries on the dataset. For example, you can find the number of records that fall into a certain behavioral risk, or the state that has the highest number of diabetic patients recorded. These metrics provide data points that help to determine the attributes that would be needed from a reporting perspective.
As you can see, this is much simpler compared to a schema-on-write approach where the analysis of the source dataset is much more difficult. This solution allows you to design the reporting platform in accordance with the questions you are looking to answer from your data. The source data analysis is the first step to design a good analytical platform and this approach allows customers to do that much earlier in the project lifecycle.
After you have completed the source data analysis, the next step is to filter out the required data and transform it to create a reporting database. This is analogous to the data mart in a standard analytical pipeline. Based on the analysis carried out in the previous step, you might notice some mismatches with the data headers. You might also identify the filter clauses to apply to the dataset to get to your reporting data.
Athena automatically saves query results in S3 for every run. The default bucket for this is created in the following format:
aws-athena-query-results-<AWS Account ID>-<AWS Region>
Athena creates a prefix for each saved query and stores the result set as CSV files organized by dates. You can use this feature to filter out result datasets and store them in an S3 bucket for reporting.
To enable this, create queries that can filter out and transform the subset of data on which to report. For this use case, create three separate queries to filter out unwanted data and fix the column headers:
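As a hedged sketch, each query might look something like the following; the filter condition and renamed columns are illustrative guesses, not the actual BRFSS fields.

```sql
-- Illustrative shape of "Query1": extract one risk factor, fix headers.
SELECT year,
       locationdesc AS state,
       data_value   AS pct_current_smokers
FROM   brfss_staging
WHERE  topic = 'Current Smoker Status'
  AND  data_value IS NOT NULL;
```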
You can save these queries in Athena so that you can get to the query results easily every time they are executed. Each execution of a saved query, such as Query1, adds a new dated result set under that query’s prefix.
The next step is to copy these results over to a new bucket for creating your reporting table. This can be done by running an S3 CP command from the CLI or API, as shown below:
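As an illustrative sketch, with placeholder account ID, Region, date, and bucket names:

```bash
# Placeholders throughout: adjust the account ID, Region, date, and buckets.
aws s3 cp \
  s3://aws-athena-query-results-123456789012-us-east-1/Query1/2017/10/01/ \
  s3://your-reporting-bucket/Reporting_Data/ \
  --recursive --exclude "*" --include "*.csv"
```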
Note the prefix structure in which Athena stores query results. It creates a separate prefix for each day on which the query is executed, and stores the corresponding CSV and metadata file for each run. Copy the result set over to a new prefix, “Reporting_Data”. Use the “exclude” and “include” options of S3 CP to copy only the CSV files, and “recursive” to copy all the files from the run.
You can replace the value of the saved query name from “Query1” to “Query2” or “Query3” to copy all data resulting from those queries to the same target prefix. For pipelines that require more complicated transformations, divide the query transformation into multiple steps and execute them based on events or schedule them, as described in the earlier data staging step.
Amazon QuickSight dashboard
After the filtered results sets are copied as CSV files into the new Reporting_Data prefix, create a new table in Athena that is used specifically for BI reporting. This can be done using a create table statement similar to the one below:
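(Again a sketch, consistent with the hypothetical column names used above.)

```sql
-- Illustrative reporting table over the copied CSV result sets.
CREATE EXTERNAL TABLE IF NOT EXISTS brfss_reporting (
  year                INT,
  state               STRING,
  pct_current_smokers DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://your-reporting-bucket/Reporting_Data/';
```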
This table can now act as a source for a dashboard on Amazon QuickSight, which is a straightforward process to enable. When you choose a new data source, Athena shows up as an option, and Amazon QuickSight automatically detects the tables in Athena that are exposed for querying.
After choosing Athena, give a name to the data source and choose the database. The tables available for querying automatically show up in the list.
If you choose “BRFSS_REPORTING”, you can create custom metrics using the columns in the reporting table, which can then be used in reports and dashboards.
For more information about features and visualizations, see the Amazon QuickSight User Guide.
Incremental data ingestion
To build a complete pipeline, think about ingesting data incrementally for new records as they become available. To enable this, make sure that the data can be incrementally ingested into the reporting schema and that the reporting metrics are refreshed on each run of the report. To demonstrate this, look at a scenario where a new dataset is ingested into S3 on a periodic basis and has to be included in the reporting schema when calculating the metrics.
First, check the number of records in the reporting table before the incremental ingestion, using the same COUNT(*) approach as before.
Use Query1 as an example transformation query that can isolate the incremental load.
After the incremental data is ingested into S3, trigger an event (or pre-schedule) an execution of Query1 in Athena, which produces a new CSV result set and metadata file in the query results bucket.
Next, trigger (or schedule) the copy command to copy the incremental records file into the reporting prefix. This is easily automated by using the predefined structure in which Athena saves query results on S3. Checking the record count in the reporting table after copying shows an increased count.
This shows that 96,919 records were added to the reporting table, ready to be used in metric calculations.
This process can be implemented programmatically to add incremental records into the reporting table every time new records are ingested into the staging area. As a result, you can simulate an end-to-end analytical pipeline that runs based on events or is scheduled to run as a batch workload.
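As a sketch of what that automation might look like, using the same placeholder bucket names as above (the date-based prefix mirrors how Athena organizes a saved query’s results):

```python
import boto3
from datetime import date

s3 = boto3.client("s3")

# Placeholder buckets and prefixes, for illustration only.
SOURCE_BUCKET = "aws-athena-query-results-123456789012-us-east-1"
SOURCE_PREFIX = f"Query1/{date.today():%Y/%m/%d}/"
TARGET_BUCKET = "your-reporting-bucket"

resp = s3.list_objects_v2(Bucket=SOURCE_BUCKET, Prefix=SOURCE_PREFIX)
for obj in resp.get("Contents", []):
    key = obj["Key"]
    if key.endswith(".csv"):  # skip Athena's .metadata files
        s3.copy_object(
            Bucket=TARGET_BUCKET,
            Key="Reporting_Data/" + key.rsplit("/", 1)[-1],
            CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
        )
```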
Using Athena, you can benefit from the advantages of a schema-on-read analytical application. You can combine other AWS services like Lambda and Amazon QuickSight to build an end-to-end analytical pipeline.
However, it’s important to note that a schema-on-read analytical pipeline may not be the answer for all use cases. Carefully consider the choices between schema-on-read and schema-on-write. The source systems you are working with play a critical role in making that decision. For example:
- Some systems have defined data structures, and the chance of variation is very low. These systems can work with a fixed relational target schema.
- If the target queries mostly involve joining across normalized tables, they work better with a relational database. For this use case, schema-on-write is a good choice.
- The query performance in a schema-on-write application is faster as the data is pre-structured. For fixed dashboards with little to no changes, a schema-on-write is a good choice.
You can also choose to go hybrid. Offload a part of the pipeline that deals with unstructured flat datasets into a schema-on-read architecture, and integrate the output into the main schema-on-write pipeline at a later stage. AWS provides multiple options to build analytical pipelines that suit various use cases. For more information, read about the AWS big data and analytics services.
If you have questions or suggestions, please comment below.
Don’t forget to check out the top 10 tips for Amazon Athena that can improve query performance.
About the Author
Ujjwal Ratan is a healthcare and life sciences Solutions Architect at AWS. He has worked with organizations ranging from large enterprises to smaller startups on problems related to distributed computing, analytics and machine learning. In his free time, he enjoys listening to (and playing) music and taking unplanned road trips with his family.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/the_data_tinder.html
Under European law, service providers like Tinder are required to show users what information they have on them when requested. This author requested, and this is what she received:
Some 800 pages came back containing information such as my Facebook “likes,” my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened…the list goes on.
“I am horrified but absolutely not surprised by this amount of data,” said Olivier Keyes, a data scientist at the University of Washington. “Every app you use regularly on your phone owns the same [kinds of information]. Facebook has thousands of pages about you!”
As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn’t the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.
“You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”
Reading through the 1,700 Tinder messages I’ve sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year’s Day, and then ghosted 16 of them.
“What you are describing is called secondary implicit disclosed information,” explains Alessandro Acquisti, professor of information technology at Carnegie Mellon University. “Tinder knows much more about you when studying your behaviour on the app. It knows how often you connect and at which times; the percentage of white men, black men, Asian men you have matched; which kinds of people are interested in you; which words you use the most; how much time people spend on your picture before swiping you, and so on. Personal data is the fuel of the economy. Consumers’ data is being traded and transacted for the purpose of advertising.”
It’s not Tinder. Surveillance is the business model of the Internet. Everyone does this.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/iso_rejects_nsa.html
A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to “insert vulnerabilities into commercial encryption systems.”
More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a “back door” into coded transmissions, according to the interviews and emails and other documents seen by Reuters.
“I don’t trust the designers,” Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden’s papers. “There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards.”
I don’t trust the NSA, either.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/what_the_nsa_co.html
New York Times reporter Charlie Savage writes about some bad statistics we’re all using:
Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act (“Section 702”) warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).
These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.
Worth reading for the details.
Lasers! Cookies! Raspberry Pi! We’re buzzing with excitement about sharing our latest YouTube video with you, which comes directly from the kitchen of maker Estefannie Explains It All!
Estefannie Explains It All + Raspberry Pi
When Estefannie visited Pi Towers earlier this year, we introduced her to the Raspberry Pi Digital Curriculum and the free resources on our website. We’d already chatted to her via email about the idea of creating a collab video for the Raspberry Pi channel. Once she’d met members of the Raspberry Pi Foundation team and listened to them wax lyrical about the work we do here, she was even more keen to collaborate with us.
Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!
Estefannie returned to the US filled with inspiration for a video for our channel, and we’re so pleased with how awesome her final result is. The video is a super addition to our Raspberry Pi YouTube channel: it shows what our resources can help you achieve, and it’s great fun. You might also have noticed that the project fits in perfectly with this season’s Pioneers challenge. A win all around!
So yeah, we’re really chuffed about this video, and we hope you all like it too!
Estefannie’s Laser Cookies guide
For those of you wanting to try your hand at building your own Cookie Jar Laser Surveillance Security System, Estefannie has provided a complete guide to talk you through it. Here she goes:
First off, you’ll need:
- 10 lasers
- 10 photoresistors
- 10 capacitors
- 1 Raspberry Pi Zero W
- 1 buzzer
- 1 Raspberry Pi Camera Module
- 12 ft PVC pipes + 4 corners
- 1 acrylic panel
- 1 battery pack
- 8 zip ties
- tons of cookies
I used the Raspberry Pi Foundation’s Laser trip wire and the Tweeting Babbage resources to get one laser working and to set up the camera and Twitter API. This took me less than an hour, and it was easy, breezy, beautiful, Raspberry Pi.
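If you want a feel for that first step before tackling the full build, here is a minimal sketch in the spirit of the laser tripwire resource. It assumes gpiozero’s LightSensor with the photoresistor-and-capacitor circuit; the GPIO pin number is a placeholder, not necessarily the one Estefannie used:

```python
from signal import pause
from gpiozero import LightSensor

# Photoresistor + capacitor circuit on GPIO 4 (placeholder pin).
# gpiozero times how long the capacitor takes to charge, which tells
# us whether the laser is currently hitting the photoresistor.
ldr = LightSensor(4)

def beam_broken():
    print("Beam broken -- someone is going for the cookies!")

ldr.when_dark = beam_broken  # fires when the laser beam is interrupted
pause()                      # keep the script alive, waiting for events
```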
I soldered the ten lasers in parallel and connected each of the ten photoresistors to its own GPIO pin. I didn’t wire the sensors in series, both for sensitivity reasons and to make debugging easier.
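Scaling up to ten beams is then mostly a matter of creating ten LightSensor objects that share one alarm callback. Below is a hedged sketch of the whole alarm path; the pin numbers, file path, and Twitter credentials are placeholders, and tweepy is just one common choice of Twitter library (the Tweeting Babbage resource uses a different one). Estefannie’s actual code is on her GitHub:

```python
from signal import pause
from gpiozero import LightSensor, Buzzer
from picamera import PiCamera
import tweepy

# Placeholder GPIO pins -- one per photoresistor (adjust to your wiring).
SENSOR_PINS = [4, 5, 6, 12, 13, 16, 19, 20, 21, 26]

buzzer = Buzzer(17)  # placeholder pin
camera = PiCamera()

# Hypothetical placeholder credentials -- substitute your own Twitter app keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
twitter = tweepy.API(auth)

def intruder_alert():
    """Any broken beam: sound the buzzer, snap a photo, tweet it."""
    buzzer.on()
    camera.capture("/home/pi/intruder.jpg")
    twitter.update_with_media("/home/pi/intruder.jpg",
                              status="Someone is stealing my cookies!")
    buzzer.off()

sensors = [LightSensor(pin) for pin in SENSOR_PINS]
for sensor in sensors:
    sensor.when_dark = intruder_alert  # fires when that beam is broken

pause()  # wait for events indefinitely
```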
Building the frame took a few tries: I actually started with a wooden frame, then tried a clear case, and finally realized the best and cleanest solution would be pipes. All the wires go inside the pipes and come out through a small window at the top to wire up to the Zero W.
Using pipes also made the build cheaper, since they were about $3 for 12 ft. Wiring inside the pipes was tricky, and to finish the circuit, I soldered some of the wires after they were already in the pipes.
I tried gluing the lasers to the frame, but the lasers melted the glue and became decalibrated. Next I tried tape, and then I found picture mounting putty. The putty worked perfectly: it was easy to mold a putty base for the lasers and to calibrate and re-calibrate them if needed. Moreover, the lasers stayed in place no matter how hot they got.
Although the lasers were not very strong, I still strained my eyes after long hours of calibrating — hence the sunglasses! Working indoors with lasers, sunglasses, and code was weird. But now I can say I’ve done that…in my kitchen.
Using all the knowledge I have shared, this project should take a couple of hours. The code you need lives on my GitHub!
Estefannie on YouTube
Estefannie made this video for us as a gift, and we’re so grateful for the time and effort she put into it! If you enjoyed it and would like to also show your gratitude, subscribe to her channel on YouTube and follow her on Instagram and Twitter. And if you make something similar, or build anything with our free resources, make sure to share it with us in the comments below or via our social media channels.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/on_the_equifax_.html
Last Thursday, Equifax reported a data breach that affects 143 million US consumers, about 44% of the population. It’s an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver’s license numbers — exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.
Many sites posted guides to protecting yourself now that it’s happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).
The market can’t fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn’t notice, you’re not Equifax’s customer. You’re its product.
This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights.
Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.
It’s not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you — almost all of them companies you’ve never heard of and have no business relationship with.
Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You’re secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don’t have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations — just in case I ever decide to join.
I also don’t have a Gmail account, because I don’t want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can’t even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if [email protected] is hosted at Gmail.
And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can’t opt out. Sometimes it’s a company like Equifax that doesn’t answer to us in any way. Sometimes it’s a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it’s our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that’s not a realistic option for most people living in 21st-century America.
The companies that collect and sell our data don’t need to keep it secure in order to maintain their market share. They don’t have to answer to us, their products. They know it’s more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?
Yes, it’s a huge black eye for the company — this week. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?
This market failure isn’t unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.
Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.
By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax’s data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of “unfair and deceptive trade practices,” there’s nothing it can do. Perhaps there will be a class-action lawsuit, but because it’s hard to draw a line between any of the many data breaches you’re subjected to and a specific harm, courts are not likely to side with you.
If you don’t like how careless Equifax was with your data, don’t waste your breath complaining to Equifax. Complain to your government.
This essay previously appeared on CNN.com.
EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn’t know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don’t remember which radio show interviewed me. I kind of hope it didn’t air.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/a_hardware_priv.html
Andrew “bunnie” Huang and Edward Snowden have designed a hardware device that attaches to an iPhone and monitors it for malicious surveillance activities, even in instances where the phone’s operating system has been compromised. They call it an Introspection Engine, and their use model is a journalist who is concerned about government surveillance:
Our introspection engine is designed with the following goals in mind:
- Completely open source and user-inspectable (“You don’t have to trust us”)
- Introspection operations are performed by an execution domain completely separated from the phone’s CPU (“don’t rely on those with impaired judgment to fairly judge their state”)
- Proper operation of introspection system can be field-verified (guard against “evil maid” attacks and hardware failures)
- Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)
- Difficult to induce a false negative, even with signed firmware updates (“don’t trust the system vendor” — state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)
- As much as possible, the introspection system should be passive and difficult to detect by the phone’s operating system (prevent black-listing/targeting of users based on introspection engine signatures)
- Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; “journalists shouldn’t have to be cryptographers to be safe”)
- Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)
This looks like fantastic work, and they have a working prototype.
Of course, this does nothing to stop all the legitimate surveillance that happens over a cell phone: location tracking, records of who you talk to, and so on.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/russian_hacking.html
Kaspersky Labs exposed a highly sophisticated set of hacking tools from Russia called WhiteBear.
From February to September 2016, WhiteBear activity was narrowly focused on embassies and consular operations around the world. All of these early WhiteBear targets were related to embassies and diplomatic/foreign affair organizations. Continued WhiteBear activity later shifted to include defense-related organizations into June 2017. When compared to WhiteAtlas infections, WhiteBear deployments are relatively rare and represent a departure from the broader Skipper Turla target set. Additionally, a comparison of the WhiteAtlas framework to WhiteBear components indicates that the malware is the product of separate development efforts. WhiteBear infections appear to be preceded by a condensed spearphishing dropper, lack Firefox extension installer payloads, and contain several new components signed with a new code signing digital certificate, unlike WhiteAtlas incidents and modules.
The exact delivery vector for WhiteBear components is unknown to us, although we have very strong suspicion the group spearphished targets with malicious pdf files. The decoy pdf document above was likely stolen from a target or partner. And, although WhiteBear components have been consistently identified on a subset of systems previously targeted with the WhiteAtlas framework, and maintain components within the same filepaths and can maintain identical filenames, we were unable to firmly tie delivery to any specific WhiteAtlas component. WhiteBear focused on various embassies and diplomatic entities around the world in early 2016 — tellingly, attempts were made to drop and display decoy pdf’s with full diplomatic headers and content alongside executable droppers on target systems.
One of the clever things the tool does is use hijacked satellite connections for command and control, helping it evade detection by broad surveillance capabilities like those the NSA uses. We’ve seen Russian attack tools that do this before. More details are in the Kaspersky blog post.
Given all the trouble Kaspersky is having because of its association with Russia, it’s interesting to speculate on this disclosure. Either they are independent, and have burned a valuable Russian hacking toolset. Or the Russians decided that the toolset was already burned — maybe the NSA knows all about it and has neutered it somehow — and allowed Kaspersky to publish. Or maybe it’s something in between. That’s the problem with this kind of speculation: without any facts, your theories just amplify whatever opinion you had previously.
Oddly, there hasn’t been much press about this. I have only found one story.
EDITED TO ADD: A colleague pointed out to me that Kaspersky announcements like this often get ignored by the press. There was very little written about ProjectSauron, for example.
EDITED TO ADD: The text I originally wrote said that Kaspersky released the attack tools, like what Shadow Brokers is doing. They did not. They just exposed the existence of them. Apologies for that error — it was sloppy wording.
Post Syndicated from Andy original https://torrentfreak.com/entire-kim-dotcom-spying-operation-was-illegal-high-court-rules-170825/
In the months that preceded the January 2012 raid on file-storage site Megaupload, authorities in New Zealand used the Government Communications Security Bureau (GCSB) spy agency to monitor Kim and Mona Dotcom, plus Megaupload co-defendant Bram van der Kolk.
When this fact was revealed it developed into a crisis. The GCSB was forbidden by law from conducting surveillance on its own citizens or permanent residents in the country, which led to former Prime Minister John Key later apologizing for the error.
With Dotcom determined to uncover the truth, the entrepreneur launched legal action in pursuit of the information illegally obtained by the GCSB and to obtain compensation. In July, the High Court determined that Dotcom wouldn’t get access to the information, but it also revealed that the spying went on for much longer than previously admitted, a fact later confirmed by the police.
This raised the specter not only that the GCSB continued to spy on Dotcom after it knew it was acting illegally, but also that an earlier affidavit from a GCSB staff member was suspect.
With the saga continuing to drag on, revelations published in New Zealand this morning indicate that not only was the spying on Dotcom illegal, the entire spying operation – which included his Megaupload co-defendants – was too.
The reports are based on documents released by lawyer Peter Spring, who is acting for Bram van der Kolk and Mathias Ortmann. Spring says that the High Court decision, which dates back to December but has only just been made available, shows that “the whole surveillance operation fell outside the authorization of the GCSB legislation as it was at the relevant time”.
Since Dotcom is a permanent resident of New Zealand, it’s long been established that the GCSB acted illegally when it spied on him. As foreigners, however, Megaupload co-defendants Finn Batato and Mathias Ortmann were previously considered valid surveillance targets.
It now transpires that the GCSB wasn’t prepared to mount a defense or reveal its methods concerning their surveillance, which strengthened the case against it.
“The circumstances of the interceptions of Messrs Ortmann and Batato’s communications are Top Secret and it has not proved possible to plead to the allegations the plaintiffs have made without revealing information which would jeopardize the national security of New Zealand,” the Court documents read.
“As a result the GCSB is deemed to have admitted the allegations in the statement of claim which relate to the manner in which the interceptions were effected.”
Speaking with RadioNZ, Grant Illingworth, a lawyer representing Ortmann and van der Kolk, said the decision calls the entire GCSB operation into doubt.
“The GCSB has now admitted that the unlawfulness was not just dependent upon residency issues, it went further. The reason it went further was because it didn’t have authorization to carry out the kind of surveillance that it was carrying out under the legislation, as it was at that time,” Illingworth said.
In comments to NZHerald, Illingworth added that the decision meant that the damages case for Ortmann and van der Kolk had come to an end. He refused to respond to questions of whether damages had been paid or a settlement reached.
He did indicate, however, that there could be implications for the battle underway to have Dotcom, Batato, Ortmann and van der Kolk extradited to the United States.
“If there was illegality in the arrest and search phase and that illegality has not previously been made known in the extradition context then it could be relevant to the extradition,” Illingworth said.
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/08/your_personal_b.html
Shonin is a personal bodycam up on Kickstarter.
There are a lot of complicated issues surrounding bodycams — for example, it’s not obvious that police bodycams reduce violence — but the one thing everyone is certain about is that they will proliferate. I’m not sure society is fully ready for the ramifications of this level of recording.