Tag Archives: surveillance

Palantir’s Surveillance Service for Law Enforcement

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/palantirs_surve.html

Motherboard got its hands on Palantir’s Gotham user’s manual, which is used by the police to get information on people:

The Palantir user guide shows that police can start with almost no information about a person of interest and instantly know extremely intimate details about their lives. The capabilities are staggering, according to the guide:

  • If police have a name that’s associated with a license plate, they can use automatic license plate reader data to find out where they’ve been, and when they’ve been there. This can give a complete account of where someone has driven over any time period.
  • With a name, police can also find a person’s email address, phone numbers, current and previous addresses, bank accounts, social security number(s), business relationships, family relationships, and license information like height, weight, and eye color, as long as it’s in the agency’s database.

  • The software can map out a suspect’s family members and business associates and, theoretically, find the above information about them, too.

All of this information is aggregated and synthesized in a way that gives law enforcement nearly omniscient knowledge over any suspect they decide to surveil.

Read the whole article — it has a lot of details. This seems like a commercial version of the NSA’s XKEYSCORE.

Boing Boing post.

Meanwhile:

The FBI wants to gather more information from social media. Today, it issued a call for contracts for a new social media monitoring tool. According to a request-for-proposals (RFP), it’s looking for an “early alerting tool” that would help it monitor terrorist groups, domestic threats, criminal activity and the like.

The tool would provide the FBI with access to the full social media profiles of persons-of-interest. That could include information like user IDs, emails, IP addresses and telephone numbers. The tool would also allow the FBI to track people based on location, enable persistent keyword monitoring and provide access to personal social media history. According to the RFP, “The mission-critical exploitation of social media will enable the Bureau to detect, disrupt, and investigate an ever growing diverse range of threats to U.S. National interests.”

US Journalist Detained When Returning to US

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/us_journalist_d.html

Pretty horrible story of a US journalist who had his computer and phone searched at the border when returning to the US from Mexico.

After I gave him the password to my iPhone, Moncivias spent three hours reviewing hundreds of photos and videos and emails and calls and texts, including encrypted messages on WhatsApp, Signal, and Telegram. It was the digital equivalent of tossing someone’s house: opening cabinets, pulling out drawers, and overturning furniture in hopes of finding something — anything — illegal. He read my communications with friends, family, and loved ones. He went through my correspondence with colleagues, editors, and sources. He asked about the identities of people who have worked with me in war zones. He also went through my personal photos, which I resented. Consider everything on your phone right now. Nothing on mine was spared.

Pomeroy, meanwhile, searched my laptop. He browsed my emails and my internet history. He looked through financial spreadsheets and property records and business correspondence. He was able to see all the same photos and videos as Moncivias and then some, including photos I thought I had deleted.

The EFF has extensive information and advice about device searches at the US border, including a travel guide:

If you are a U.S. citizen, border agents cannot stop you from entering the country, even if you refuse to unlock your device, provide your device password, or disclose your social media information. However, agents may escalate the encounter if you refuse. For example, agents may seize your devices, ask you intrusive questions, search your bags more intensively, or increase by many hours the length of detention. If you are a lawful permanent resident, agents may raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents may deny you entry.

The most important piece of advice is to think about this all beforehand, and plan accordingly.

Digital License Plates

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/07/digital_license.html

They’re a thing:

Developers say digital plates utilize “advanced telematics” — to collect tolls, pay for parking and send out Amber Alerts when a child is abducted. They also help recover stolen vehicles by changing the display to read “Stolen,” thereby alerting everyone within eyeshot.

This makes no sense to me. The numbers are static. License plates being low-tech is a feature, not a bug.

Spanish Soccer League App Spies on Fans

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/spanish_soccer_.html

The Spanish Soccer League’s smartphone app spies on fans in order to find bars that are illegally streaming its games. The app listens with the microphone for the broadcasts, and then uses geolocation to figure out where the phone is.

The Spanish data protection agency has ordered the league to stop doing this. Not because it’s creepy spying, but because the terms of service — which no one reads anyway — weren’t clear.

Maciej Cegłowski on Privacy in the Information Age

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/maciej_ceglowsk.html

Maciej Cegłowski has a really good essay explaining how to think about privacy today:

For the purposes of this essay, I’ll call it “ambient privacy” — the understanding that there is value in having our everyday interactions with one another remain outside the reach of monitoring, and that the small details of our daily lives should pass by unremembered. What we do at home, work, church, school, or in our leisure time does not belong in a permanent record. Not every conversation needs to be a deposition.

Until recently, ambient privacy was a simple fact of life. Recording something for posterity required making special arrangements, and most of our shared experience of the past was filtered through the attenuating haze of human memory. Even police states like East Germany, where one in seven citizens was an informer, were not able to keep tabs on their entire population. Today computers have given us that power. Authoritarian states like China and Saudi Arabia are using this newfound capacity as a tool of social control. Here in the United States, we’re using it to show ads. But the infrastructure of total surveillance is everywhere the same, and everywhere being deployed at scale.

Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not.

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

He’s not the first person to talk about privacy as a societal property, or to use pollution metaphors. But his framing is really cogent. And “ambient privacy” is new — and a good phrasing.

Data, Surveillance, and the AI Arms Race

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/data_surveillan.html

According to foreign policy experts and the defense establishment, the United States is caught in an artificial intelligence arms race with China — one with serious implications for national security. The conventional version of this story suggests that the United States is at a disadvantage because of self-imposed restraints on the collection of data and the privacy of its citizens, while China, an unrestrained surveillance state, is at an advantage. In this vision, the data that China collects will be fed into its systems, leading to more powerful AI with capabilities we can only imagine today. Since Western countries can’t or won’t reap such a comprehensive harvest of data from their citizens, China will win the AI arms race and dominate the next century.

This idea makes for a compelling narrative, especially for those trying to justify surveillance — whether government- or corporate-run. But it ignores some fundamental realities about how AI works and how AI research is conducted.

Thanks to advances in machine learning, AI has flipped from theoretical to practical in recent years, and successes dominate public understanding of how it works. Machine learning systems can now diagnose pneumonia from X-rays, play the games of go and poker, and read human lips, all better than humans. They’re increasingly watching surveillance video. They are at the core of self-driving car technology and are playing roles in both intelligence-gathering and military operations. These systems monitor our networks to detect intrusions and look for spam and malware in our email.

And it’s true that there are differences in the way each country collects data. The United States pioneered “surveillance capitalism,” to use the Harvard University professor Shoshana Zuboff’s term, where data about the population is collected by hundreds of large and small companies for corporate advantage — and mutually shared or sold for profit. The state picks up on that data, in cases such as the Centers for Disease Control and Prevention’s use of Google search data to map epidemics and evidence shared by alleged criminals on Facebook, but it isn’t the primary user.

China, on the other hand, is far more centralized. Internet companies collect the same sort of data, but it is shared with the government, combined with government-collected data, and used for social control. Every Chinese citizen has a national ID number that is demanded by most services and allows data to easily be tied together. In the western region of Xinjiang, ubiquitous surveillance is used to oppress the Uighur ethnic minority — although at this point there is still a lot of human labor making it all work. Everyone expects that this is a test bed for the entire country.

Data is increasingly becoming a part of control for the Chinese government. While many of these plans are aspirational at the moment — there isn’t, as some have claimed, a single “social credit score,” but instead future plans to link up a wide variety of systems — data collection is universally pushed as essential to the future of Chinese AI. One executive at search firm Baidu predicted that the country’s connected population will provide them with the raw data necessary to become the world’s preeminent tech power. China’s official goal is to become the world AI leader by 2030, aided in part by all of this massive data collection and correlation.

This all sounds impressive, but turning massive databases into AI capabilities doesn’t match technological reality. Current machine learning techniques aren’t all that sophisticated. All modern AI systems follow the same basic methods. Using lots of computing power, different machine learning models are tried, altered, and tried again. These systems use a large amount of data (the training set) and an evaluation function to distinguish between those models and variations that work well and those that work less well. After trying a lot of models and variations, the system picks the one that works best. This iterative improvement continues even after the system has been fielded and is in use.
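
To make that loop concrete, here is a minimal sketch in Python (using scikit-learn on synthetic data): a few candidate models, an evaluation function, and selection of whichever scores best. The particular models and parameters are illustrative assumptions, not a description of any fielded system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for "the training set".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A few candidate models and variations to try.
candidates = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=50, random_state=0),
    RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0),
]

def evaluate(model):
    # The "evaluation function": mean cross-validated accuracy on the training data.
    return cross_val_score(model, X_train, y_train, cv=5).mean()

# Try the variations, keep whichever the evaluation function scores highest.
best = max(candidates, key=evaluate)
best.fit(X_train, y_train)
print(type(best).__name__, "accuracy on held-out data:", best.score(X_test, y_test))
```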

So, for example, a deep learning system trying to do facial recognition will have multiple layers (hence the notion of “deep”) trying to do different parts of the facial recognition task. One layer will try to find features in the raw data of a picture that will help find a face, such as changes in color that will indicate an edge. The next layer might try to combine these lower layers into features like shapes, looking for round shapes inside of ovals that indicate eyes on a face. The different layers will try different features and will be compared by the evaluation function until the one that is able to give the best results is found, in a process that is only slightly more refined than trial and error.
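
A rough illustration of that layered structure, as a toy PyTorch sketch rather than a real face recognizer: each convolutional stage builds on the one below it, and the sizes, layer counts, and ten-identity output are assumptions made purely for the example.

```python
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self, num_identities=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features: edges, color changes
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level features: curves, simple shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher-level features: eye- and face-like parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_identities)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One random 64x64 "image", just to show how data flows through the layers.
logits = TinyFaceNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 10])
```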

Large data sets are essential to making this work, but that doesn’t mean that more data is automatically better or that the system with the most data is automatically the best system. Train a facial recognition algorithm on a set that contains only faces of white men, and the algorithm will have trouble with any other kind of face. Use an evaluation function that is based on historical decisions, and any past bias is learned by the algorithm. For example, mortgage loan algorithms trained on historic decisions of human loan officers have been found to implement redlining. Similarly, hiring algorithms trained on historical data manifest the same sexism as human staff often do. Scientists are constantly learning about how to train machine learning systems, and while throwing a large amount of data and computing power at the problem can work, more subtle techniques are often more successful. All data isn’t created equal, and for effective machine learning, data has to be both relevant and diverse in the right ways.
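
A small synthetic demonstration of that point: a classifier trained on data dominated by one group can look accurate overall while doing much worse on an underrepresented group. The groups and data below are entirely made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'group' whose correct decision boundary depends on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy on well-represented group A:", model.score(Xa_test, ya_test))
print("accuracy on underrepresented group B:", model.score(Xb_test, yb_test))
```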

Future research advances in machine learning are focused on two areas. The first is in enhancing how these systems distinguish between variations of an algorithm. As different versions of an algorithm are run over the training data, there needs to be some way of deciding which version is “better.” These evaluation functions need to balance the recognition of an improvement with not over-fitting to the particular training data. Getting functions that can automatically and accurately distinguish between two algorithms based on minor differences in the outputs is an art form that no amount of data can improve.
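
The over-fitting trade-off is easy to see in a few lines: scoring candidate models on the data they were trained on rewards memorization, which is why evaluation is done on held-out data instead. This scikit-learn sketch, with illustrative tree depths, shows the gap.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic data: some labels are flipped, so memorizing the training
# set is possible but does not generalize.
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=1)

for depth in (2, 5, None):  # None lets the tree grow until it memorizes the training set
    tree = DecisionTreeClassifier(max_depth=depth, random_state=1).fit(X_train, y_train)
    print(f"depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"held-out={tree.score(X_val, y_val):.2f}")
```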

The second is in the machine learning algorithms themselves. While much of machine learning depends on trying different variations of an algorithm on large amounts of data to see which is most successful, the initial formulation of the algorithm is still vitally important. The way the algorithms interact, the types of variations attempted, and the mechanisms used to test and redirect the algorithms are all areas of active research. (An overview of some of this work can be found here; even trying to limit the research to 20 papers oversimplifies the work being done in the field.) None of these problems can be solved by throwing more data at the problem.

The British AI company DeepMind’s success in teaching a computer to play the Chinese board game go is illustrative. Its AlphaGo computer program became a grandmaster in two steps. First, it was fed some enormous number of human-played games. Then, the program played itself an enormous number of times, improving its own play along the way. In 2016, AlphaGo beat the grandmaster Lee Sedol four games to one.

While the training data in this case, the human-played games, was valuable, even more important was the machine learning algorithm used and the function that evaluated the relative merits of different game positions. Just one year later, DeepMind was back with a follow-on system: AlphaZero. This go-playing computer dispensed entirely with the human-played games and just learned by playing against itself over and over again. It plays like an alien. (It also became a grandmaster in chess and shogi.)
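
As a toy analogue of that self-play idea (nothing like DeepMind’s actual systems), the sketch below learns the small game of Nim purely by playing against itself and updating a value table from the outcomes; every parameter in it is an assumption chosen for illustration.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2)                 # take one or two objects from the pile
values = defaultdict(float)      # (pile, action) -> learned value for the player about to move
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000

def pick(pile, explore=True):
    legal = [a for a in ACTIONS if a <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)          # occasional random move, so new lines of play get tried
    return max(legal, key=lambda a: values[(pile, a)])

for _ in range(EPISODES):
    pile, history = random.randint(5, 20), []
    while pile > 0:                          # the program plays both sides of every game
        action = pick(pile)
        history.append((pile, action))
        pile -= action
    outcome = 1.0                            # whoever took the last object won that game
    for seen_pile, action in reversed(history):
        values[(seen_pile, action)] += ALPHA * (outcome - values[(seen_pile, action)])
        outcome = -outcome                   # the result alternates between the two players' moves

# Optimal play leaves the opponent a multiple of 3; after enough self-play the
# greedy policy should usually recommend such moves.
for pile in (4, 5, 7, 8, 10):
    print(pile, "-> take", pick(pile, explore=False))
```

In the real systems the table is replaced by a deep neural network and the evaluation of positions is itself learned, but the improvement loop is the same in spirit.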

These are abstract games, so it makes sense that a more abstract training process works well. But even something as visceral as facial recognition needs more than just a huge database of identified faces in order to work successfully. It needs the ability to separate a face from the background in a two-dimensional photo or video and to recognize the same face in spite of changes in angle, lighting, or shadows. Just adding more data may help, but not nearly as much as added research into what to do with the data once we have it.

Meanwhile, foreign-policy and defense experts are talking about AI as if it were the next nuclear arms race, with the country that figures it out best or first becoming the dominant superpower for the next century. But that didn’t happen with nuclear weapons, despite research only being conducted by governments and in secret. It certainly won’t happen with AI, no matter how much data different nations or companies scoop up.

It is true that China is investing a lot of money into artificial intelligence research: The Chinese government believes this will allow it to leapfrog other countries (and companies in those countries) and become a major force in this new and transformative area of computing — and it may be right. On the other hand, much of this seems to be a wasteful boondoggle. Slapping “AI” on pretty much anything is how to get funding. The Chinese Ministry of Education, for instance, promises to produce “50 world-class AI textbooks,” with no explanation of what that means.

In the democratic world, the government is neither the leading researcher nor the leading consumer of AI technologies. AI research is much more decentralized and academic, and it is conducted primarily in the public eye. Research teams keep their training data and models proprietary but freely publish their machine learning algorithms. If you wanted to work on machine learning right now, you could download Microsoft’s Cognitive Toolkit, Google’s TensorFlow, or Facebook’s PyTorch. These aren’t toy systems; these are the state-of-the-art machine learning platforms.
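
To the point that these platforms are freely available: once one of them is installed (PyTorch, in this illustrative sketch), a complete if trivial training loop fits in a couple of dozen lines. The regression task below is made up.

```python
import torch
import torch.nn as nn

# Made-up data: a noisy linear relationship for the model to learn.
X = torch.randn(256, 4)
y = X @ torch.tensor([1.0, -2.0, 0.5, 3.0]).unsqueeze(1) + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong the current model is
    loss.backward()               # compute gradients
    optimizer.step()              # nudge the parameters to do better

print("final training loss:", loss.item())
```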

AI is not analogous to the big science projects of the previous century that brought us the atom bomb and the moon landing. AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition. It doesn’t take a massive government-funded lab for AI research, nor the secrecy of the Manhattan Project. The research conducted in the open science literature will trump research done in secret because of the benefits of collaboration and the free exchange of ideas.

While the United States should certainly increase funding for AI research, it should continue to treat it as an open scientific endeavor. Surveillance is not justified by the needs of machine learning, and real progress in AI doesn’t need it.

This essay was written with Jim Waldo, and previously appeared in Foreign Policy.

Computers and Video Surveillance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/computers_and_video.html

It used to be that surveillance cameras were passive. Maybe they just recorded, and no one looked at the video unless they needed to. Maybe a bored guard watched a dozen different screens, scanning for something interesting. In either case, the video was only stored for a few days because storage was expensive.

Increasingly, none of that is true. Recent developments in video analytics — fueled by artificial intelligence techniques like machine learning — enable computers to watch and understand surveillance videos with human-like discernment. Identification technologies make it easier to automatically figure out who is in the videos. And finally, the cameras themselves have become cheaper, more ubiquitous, and much better; cameras mounted on drones can effectively watch an entire city. Computers can watch all the video without human issues like distraction, fatigue, training, or needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

An ACLU report published Thursday called “the Dawn of Robot Surveillance” says AI-aided video surveillance “won’t just record us, but will also make judgments about us based on their understanding of our actions, emotions, skin color, clothing, voice, and more. These automated ‘video analytics’ technologies threaten to fundamentally change the nature of surveillance.”

Let’s take the technologies one at a time. First: video analytics. Computers are getting better at recognizing what’s going on in a video. Detecting when a person or vehicle enters a forbidden area is easy. Modern systems can alarm when someone is walking in the wrong direction — going in through an exit-only corridor, for example. They can count people or cars. They can detect when luggage is left unattended, or when previously unattended luggage is picked up and removed. They can detect when someone is loitering in an area, is lying down, or is running. Increasingly, they can detect particular actions by people. Amazon’s cashier-less stores rely on video analytics to figure out when someone picks an item off a shelf and doesn’t put it back.
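
Even the simplest of these capabilities can be sketched with off-the-shelf tools. The fragment below uses classical OpenCV background subtraction (assuming OpenCV 4.x), not the learned video analytics the report describes, to flag motion inside a designated zone; the video source and zone coordinates are placeholders.

```python
import cv2

ZONE = (100, 100, 300, 300)                  # x1, y1, x2, y2 of the restricted area (assumed)
cap = cv2.VideoCapture("camera_feed.mp4")    # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # foreground (moving) pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:         # ignore small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2
        if ZONE[0] <= cx <= ZONE[2] and ZONE[1] <= cy <= ZONE[3]:
            print("alert: motion inside the restricted zone")

cap.release()
```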

More than identifying actions, video analytics allow computers to understand what’s going on in a video: They can flag people based on their clothing or behavior, identify people’s emotions through body language and behavior, and find people who are acting “unusual” based on everyone else around them. Those same Amazon in-store cameras can analyze customer sentiment. Other systems can describe what’s happening in a video scene.

Computers can also identify people. AIs are getting better at identifying people in those videos. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we give to Facebook and other social media sites, and the photos governments collect in the process of issuing ID cards and driver’s licenses. The technology already exists to automatically identify everyone a camera “sees” in real time. Even without video identification, we can be identified by the unique information continuously broadcast by the smartphones we carry with us everywhere, or by our laptops or Bluetooth-connected devices. Police have been tracking phones for years, and this practice can now be combined with video analytics.

Once a monitoring system identifies people, their data can be combined with other data, either collected or purchased: from cell phone records, GPS surveillance history, purchasing data, and so on. Social media companies like Facebook have spent years learning about our personalities and beliefs by what we post, comment on, and “like.” This is “data inference,” and when combined with video it offers a powerful window into people’s behaviors and motivations.

Camera resolution is also improving. Gigapixel cameras are so good that they can capture individual faces and identify license plates in photos taken miles away. “Wide-area surveillance” cameras can be mounted on airplanes and drones, and can operate continuously. On the ground, cameras can be hidden in street lights and other regular objects. In space, satellite cameras have also dramatically improved.

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention — or counterterrorism. In countries like China, it is blatantly used to suppress political activity and for social control. In all instances, it’s being implemented without a lot of public debate by law-enforcement agencies and by corporations in public spaces they control.

This is bad, because ubiquitous surveillance will drastically change our relationship to society. We’ve never lived in this sort of world, even those of us who have lived through previous totalitarian regimes. The effects will be felt in many different areas. False positives — when the surveillance system gets it wrong — will lead to harassment and worse. Discrimination will become automated. Those who fall outside norms will be marginalized. And most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behavior, which in turn will hobble society’s ability to experiment and change. A recent ACLU report discusses these harms in more depth. While it’s possible that some of this surveillance is worth the trade-offs, we as a society need to deliberately and intelligently make decisions about it.

Some jurisdictions are starting to notice. Last month, San Francisco became the first city to ban facial recognition technology by police and other government agencies. Similar bans are being considered in Somerville, MA, and Oakland, CA. These are exceptions, and limited to the more liberal areas of the country.

We often believe that technological change is inevitable, and that there’s nothing we can do to stop it — or even to steer it. That’s simply not true. We’re led to believe this because we don’t often see it, understand it, or have a say in how or when it is deployed. The problem is that technologies of cameras, resolution, machine learning, and artificial intelligence are complex and specialized.

Laws like the one just passed in San Francisco won’t stop the development of these technologies, but they’re not intended to. They’re intended as pauses, so our policy making can catch up with technology. As a general rule, the US government tends to ignore technologies as they’re being developed and deployed, so as not to stifle innovation. But as the rate of technological change increases, so do the unanticipated effects on our lives. Just as we’ve been surprised by the threats to democracy caused by surveillance capitalism, AI-enabled video surveillance will have similar surprising effects. Maybe a pause in our headlong deployment of these technologies will allow us the time to discuss what kind of society we want to live in, and then enact rules to bring that kind of society about.

This essay previously appeared on Vice Motherboard.

Video Surveillance by Computer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/video_surveilla.html

The ACLU’s Jay Stanley has just published a fantastic report: “The Dawn of Robot Surveillance” (blog post here). Basically, it lays out a future of ubiquitous video cameras watched by increasingly sophisticated video analytics software, and discusses the potential harms to society.

I’m not going to excerpt a piece, because you really need to read the whole thing.

How Technology and Politics Are Changing Spycraft

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/how_technology_.html

Interesting article about how traditional nation-based spycraft is changing. Basically, the Internet makes it increasingly possible to generate a good cover story; cell phone and other electronic surveillance techniques make tracking people easier; and machine learning will make all of this automatic. Meanwhile, Western countries have new laws and norms that put them at a disadvantage relative to other countries. And finally, much of this has gone corporate.

Cybersecurity for the Public Interest

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/cybersecurity_f_2.html

The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption, to access devices and communications of terrorists and criminals. On the other are almost every cryptographer and computer security expert, repeatedly explaining that there’s no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It’s an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism — as practiced by the Internet companies that are already spying on everyone — matters. So do society’s underlying security needs. There is a security benefit to giving access to law enforcement, even though it would inevitably and invariably also give that access to others. However, there is also a security benefit of having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals — that occasionally become law — that are technological disasters.

This isn’t sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand — and are involved in — policy. We need public-interest technologists.

Let’s pause at that term. The Ford Foundation defines public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics recently wrote that public-interest technologists are people who “study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good.” Tim Berners-Lee has called them “philosophical engineers.” I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it’s not the best term — and I know not everyone likes it — but it’s a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staff, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved in not only the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world is critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn’t want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn’t new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it’s really not. There aren’t enough people doing it, there aren’t enough people who know it needs to be done, and there aren’t enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There’s a report titled A Pivotal Moment that includes this quote: “While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress.”

That quote speaks to the three places for intervention. One: the supply side. There just isn’t enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with “clinics” at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they’ve learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama’s US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges — places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February 2019 issue of IEEE Security & Privacy. I maintain a public-interest tech resources page here.

On Surveillance in the Workplace

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/on_surveillance.html

Data & Society just published a report entitled “Workplace Monitoring & Surveillance”:

This explainer highlights four broad trends in employee monitoring and surveillance technologies:

  • Prediction and flagging tools that aim to predict characteristics or behaviors of employees or that are designed to identify or deter perceived rule-breaking or fraud. Touted as useful management tools, they can augment biased and discriminatory practices in workplace evaluations and segment workforces into risk categories based on patterns of behavior.
  • Biometric and health data of workers collected through tools like wearables, fitness tracking apps, and biometric timekeeping systems as a part of employer-provided health care programs, workplace wellness programs, and digital tools that track work shifts. Tracking non-work-related activities and information, such as health data, may challenge the boundaries of worker privacy, open avenues for discrimination, and raise questions about consent and workers’ ability to opt out of tracking.

  • Remote monitoring and time-tracking used to manage workers and measure performance remotely. Companies may use these tools to decentralize and lower costs by hiring independent contractors, while still being able to exert control over them like traditional employees with the aid of remote monitoring tools. More advanced time-tracking can generate itemized records of on-the-job activities, which can be used to facilitate wage theft or allow employers to trim what counts as paid work time.

  • Gamification and algorithmic management of work activities through continuous data collection. Technology can take on management functions, such as sending workers automated “nudges” or adjusting performance benchmarks based on a worker’s real-time progress, while gamification renders work activities into competitive, game-like dynamics driven by performance metrics. However, these practices can create punitive work environments that place pressures on workers to meet demanding and shifting efficiency benchmarks.

In a blog post about this report, Cory Doctorow mentioned “the adoption curve for oppressive technology, which goes, ‘refugee, immigrant, prisoner, mental patient, children, welfare recipient, blue collar worker, white collar worker.'” I don’t agree with the ordering, but the sentiment is correct. These technologies are generally used first against people with diminished rights: prisoners, children, the mentally ill, and soldiers.

Detecting Shoplifting Behavior

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/detecting_shopl.html

This system claims to detect suspicious behavior that indicates shoplifting:

Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, using footage from security cameras to look for fidgeting, restlessness, and other potentially suspicious body language.

The article has no detail or analysis, so we don’t know how well it works. But this kind of thing is surely the future of video surveillance.

The Latest in Creepy Spyware

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/03/the_latest_in_c.html

The Nest home alarm system shipped with a secret microphone, which — according to the company — was only an accidental secret:

On Tuesday, a Google spokesperson told Business Insider the company had made an “error.”

“The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” the spokesperson said. “That was an error on our part.”

Where are the consumer protection agencies? They should be all over this.

And while they’re figuring out which laws Google broke, they should also look at American Airlines. Turns out that some of their seats have built-in cameras:

American Airlines spokesperson Ross Feinstein confirmed to BuzzFeed News that cameras are present on some of the airlines’ in-flight entertainment systems, but said “they have never been activated, and American is not considering using them.” Feinstein added, “Cameras are a standard feature on many in-flight entertainment systems used by multiple airlines. Manufacturers of those systems have included cameras for possible future uses, such as hand gestures to control in-flight entertainment.”

That makes it all okay, doesn’t it?

Actually, I kind of understand the airline seat camera thing. My guess is that whoever designed the in-flight entertainment system just specced a standard tablet computer, and they all came with unnecessary features like cameras. This is how we end up with refrigerators with Internet connectivity and Roombas with microphones. It’s cheaper to leave the functionality in than it is to remove it.

Still, we need better disclosure laws.

Reverse Location Search Warrants

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/02/reverse_locatio.html

The police are increasingly getting search warrants for information about all cell phones in a certain location at a certain time:

Police departments across the country have been knocking at Google’s door for at least the last two years with warrants to tap into the company’s extensive stores of cellphone location data. Known as “reverse location search warrants,” these legal mandates allow law enforcement to sweep up the coordinates and movements of every cellphone in a broad area. The police can then check to see if any of the phones came close to the crime scene. In doing so, however, the police can end up not only fishing for a suspect, but also gathering the location data of potentially hundreds (or thousands) of innocent people. There have only been anecdotal reports of reverse location searches, so it’s unclear how widespread the practice is, but privacy advocates worry that Google’s data will eventually allow more and more departments to conduct indiscriminate searches.

Of course, it’s not just Google who can provide this information.

I am also reminded of a Canadian surveillance program disclosed by Snowden.

I spend a lot of time talking about this sort of thing in Data and Goliath. Once you have everyone under surveillance all the time, many things are possible.

How Surveillance Inhibits Freedom of Expression

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/how_surveillanc_1.html

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance, but is true for corporate surveillance as well. We simply aren’t as willing to be our individual selves when others are watching.

Let’s take an example: hearing that parents and children are being separated as they cross the US border, you want to learn more. You visit the website of an international immigrants’ rights group, a fact that is available to the government through mass Internet surveillance. You sign up for the group’s mailing list, another fact that is potentially available to the government. The group then calls or e-mails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If, instead of visiting the website, you visit the group’s Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn’t matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn’t just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the Internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn’t exclusively an American event; Internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments — if found to be inoffensive or even essential to society — cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress — from ending slavery to fighting for women’s rights — began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks’ right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it’s currently sitting between somewhat moral, and — depending on the state or country in question — tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn’t really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance meant that those early pot smokers would have been arrested for doing something illegal, the movement would have been squashed before inception. Of course the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don’t yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they’re around. And they require privacy to germinate. Take away that privacy, and we’ll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do — the different things each of us does — that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don’t harm anyone. But it’s important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn’t necessarily the morality of everyone — and that that’s okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It’s easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today’s surveillance capabilities in the 1920s, it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less “what if we didn’t?” thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. They are about to further enhance their system, giving every citizen a “social credit” rating. The details are yet unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores. People without those scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the subsequent pressure to conform will be enormous. This social surveillance system is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone’s knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.

This essay originally appeared in McSweeney’s issue #54: “The End of Trust.” It was reprinted on Wired.com.

The PCLOB Needs a Director

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/the_pclob_needs.html

The US Privacy and Civil Liberties Oversight Board is looking for a director. Among other things, this board has some oversight role over the NSA. More precisely, it can examine what any executive-branch agency is doing about counterterrorism. So it can examine the program of TSA watchlists, NSA anti-terrorism surveillance, and FBI counterterrorism activities.

The PCLOB was established in 2004 (when it didn’t do much), disappeared from 2007 to 2012, and was reconstituted in 2012. It issued a major report on NSA surveillance in 2014. It has dwindled since then, having as few as one member. Last month, the Senate confirmed three new members, including Ed Felten.

So, potentially an important job if anyone out there is interested.

Israeli Surveillance Gear

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/israeli_surveil.html

The Israeli Defense Force mounted a botched raid in Gaza. They were attempting to install surveillance gear, which they ended up leaving behind. (There are photos — scroll past the video.) Israeli media is claiming that the capture of this gear by Hamas causes major damage to Israeli electronic surveillance capabilities. The Israelis themselves destroyed the vehicle the commandos used to enter Gaza. I’m guessing they did so because there was more gear in it they didn’t want falling into the Palestinians’ hands.

Can anyone intelligently speculate about what the photos show? And if there are other photos on the Internet, please post them.