Tag Archives: homeland

Securing Elections

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/securing_electi_1.html

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers’ conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It’s important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn’t be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, is that hackable. They’re computers — often ancient computers running operating systems no longer supported by the manufacturers — and they don’t have any magical security technology that the rest of the industry isn’t privy to. If anything, they’re less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We’re not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can’t use the security systems available to banking and other high-value applications.

We can securely bank online, but can’t securely vote online. If we could do away with anonymity — if everyone could check that their vote was counted correctly — then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can’t do away with the secret ballot, so we need to accept that our voting systems are insecure. We need an election system that is resilient to these threats. And for many parts of the system, that means paper.

Let’s start with the voter rolls. We know they’ve already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That’s just one possibility. A well-executed attack that deletes, for example, one in five voters at random — or changes their addresses — would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or — even better — a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.
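To make the idea of an unchangeable change record concrete, here is a minimal sketch in Python of a hash-chained, append-only log. It is an illustration only, with invented names; a real system would add digital signatures, replication, and genuine write-once media.

```python
import hashlib
import json
import time

class VoterRollChangeLog:
    """Append-only log: each entry commits to the previous entry's hash,
    so any after-the-fact edit or deletion breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value

    def record(self, action, voter_id, details):
        entry = {
            "time": time.time(),
            "action": action,       # "add", "delete", or "change"
            "voter_id": voter_id,
            "details": details,
            "prev_hash": self._prev,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._prev = digest

    def verify(self):
        """Recompute the whole chain; False means tampering."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = digest
        return True

log = VoterRollChangeLog()
log.record("change", "VOTER-0001", {"field": "address", "new": "12 Oak St"})
log.record("delete", "VOTER-0002", {"reason": "deceased"})
assert log.verify()
log.entries[0]["details"]["new"] = "99 Elm St"   # simulate tampering
assert not log.verify()
```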

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as the final true record in a recount if problems arise. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.
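To see how a risk-limiting audit's workload scales with the margin of victory, here is a toy estimate of the expected number of ballots a ballot-polling audit (in the style of BRAVO) examines. It assumes a simplified two-candidate race with no invalid ballots and a correctly reported outcome; real audit rules are more involved.

```python
import math

def expected_audit_size(winner_share: float, risk_limit: float) -> int:
    """Approximate expected ballots sampled by a ballot-polling
    risk-limiting audit (a Wald sequential test, as in BRAVO).

    The test statistic grows on average by
        ln 2 + p*ln(p) + (1-p)*ln(1-p)
    per ballot drawn, where p > 0.5 is the winner's true share,
    and the audit stops once the statistic reaches 1/risk_limit.
    """
    p = winner_share
    growth = math.log(2) + p * math.log(p) + (1 - p) * math.log(1 - p)
    return math.ceil(math.log(1 / risk_limit) / growth)

# A 10-point margin needs only a few hundred ballots on average...
print(expected_audit_size(0.55, 0.05))   # roughly 600
# ...while a 1-point margin needs tens of thousands.
print(expected_audit_size(0.505, 0.05))  # roughly 60,000
```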

It’s vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it’s easy to agree on strong security. But after the vote, someone is the presumptive winner — and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it’s too late to agree on anything.

The politicians running in the election shouldn’t have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don’t do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don’t go nearly far enough. The Constitution delegates elections to the states but allows Congress to “make or alter such Regulations”. In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Internet Security Threats at the Olympics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/internet_securi.html

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017 — according to Trend Micro and ThreatConnect, two private cybersecurity firms — eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter’s positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they’ve already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

Me on the Equifax Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/me_on_the_equif.html

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the

Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015). My popular newsletter Crypto-Gram and my blog Schneier on Security are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government — where I teach Internet security policy — and a Fellow at the Berkman-Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies and other businesses vulnerable to fraud. As a result, all 145.5 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their name and credit rating.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for their websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and they were at increased risk of identity theft. Equifax opened a website to aid customers, but the poor security around that site — it was at a domain separate from the Equifax domain — invited fraudulent imitators and even more damage to victims. At one point, official Equifax communications even directed people to one such fraudulent site.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us — almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, whom it sells personal data to, or why. If anyone knows about the company at all, it’s about its business as a credit bureau, not its business as a data broker. Its website lists 57 different offerings for business: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt out” don’t work with data brokers. The process is confusing, and doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them — which can be months after the breaches occur — and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that have our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax or does the blame belong to another of the thousands of companies who have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability-free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

Markets work because buyers choose among sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with the companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5%, or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals receive from these breaches.

Specifically, Congress should move forward with legislative proposals that establish a nationwide “credit freeze” — which is better described as changing the default for disclosure from opt-out to opt-in — and free lifetime credit monitoring services. By this I do not mean giving customers free credit-freeze options, a proposal by Senators Warren and Schatz, but that the default should be a credit freeze.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace Social Security numbers. That would make the system of identification even more brittle. Better is to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the more rigorous EU privacy laws. The EU General Data Protection Regulation will come into force in May 2018, requiring even more security and privacy controls from companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcasting Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach is a huge black eye and a temporary stock dip for Equifax — this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

Department of Homeland Security to Collect Social Media of Immigrants and Citizens

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/department_of_h_3.html

New rules give the DHS permission to collect “social media handles, aliases, associated identifiable information, and search results” as part of people’s immigration files. The Federal Register has the details, which seem to also include US citizens who communicate with immigrants.

This is part of the general trend to scrutinize people coming into the US more closely, but it’s hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.

Now Available: The First Guide in the AWS Government Handbook Series

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/now-available-the-first-guide-in-the-aws-government-handbook-series/


AWS recently released the first guide in the new AWS Government Handbook Series: Secure Network Connections: An evaluation of the US Trusted Internet Connections program. This new series examines key cybersecurity policy initiatives that have been operating in the traditional IT space, unpacks their security objectives, and identifies lessons learned and best practices of global government first movers and early adopters seeking to achieve the initiative’s security outcomes in the cloud.

In particular, “Secure Network Connections” provides guidance to government policy makers on AWS’s position and recommendations for establishing cloud-based network perimeter monitoring capabilities. Note that this guidance can be applied to any organization that requires centralized perimeter network monitoring. The guide also summarizes lessons learned from AWS’s work with the US Department of Homeland Security (DHS) through an analysis of its federal secure network connections program, Trusted Internet Connections (TIC).

If you have questions or comments about this new guide, submit them in the “Comments” section below. And note that the next guide in this series will be published later this year.

– Craig

NSA Document Outlining Russian Attempts to Hack Voter Rolls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_document_ou.html

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational — and there’s no evidence that they had any actual effect — they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia’s military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company’s network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA’s analysis ends. We don’t know whether those 122 targeted attacks were successful, or what their effects were if so. We don’t know whether other election software companies besides VR Systems were targeted, or what the GRU’s overall plan was — if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect — anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC — one of the states that VR Systems supports — but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don’t know what happened next, if anything. VR Systems isn’t commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn’t much of a smoking gun, it’s yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI’s affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner’s NSA account or her personal account, but in either case, it’s incredibly sloppy tradecraft.

With President Trump’s election, the issue of Russian interference in last year’s campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It’s interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It’s easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there’s a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don’t know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won’t be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don’t have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and — by extension — our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what’s historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

Extending the Airplane Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/extending_the_a.html

The Department of Homeland Security is rumored to be considering extending the current travel ban on large electronics for Middle Eastern flights to European ones as well. The likely reaction of airlines will be to implement new traveler programs, effectively allowing wealthier and more frequent fliers to bring their computers with them. This will only exacerbate the divide between the haves and the have-nots — all without making us any safer.

In March, both the United States and the United Kingdom required that passengers from 10 Muslim countries give up their laptop computers and larger tablets, and put them in checked baggage. The new measure was based on reports that terrorists would try to smuggle bombs onto planes concealed in these larger electronic devices.

The security measure made no sense for two reasons. First, moving these computers into the baggage holds doesn’t keep them off planes. Yes, it is easier to detonate a bomb that’s in your hands than to remotely trigger it in the cargo hold. But it’s also more effective to screen laptops at security checkpoints than it is to place them in checked baggage. TSA already does this kind of screening randomly and occasionally: making passengers turn laptops on to ensure that they’re functional computers and not just bomb-filled cases, and running chemical tests on their surface to detect explosive material.

Second, banning laptops on selected flights just forces terrorists to buy more roundabout itineraries. It doesn’t take much creativity to fly Doha-Amsterdam-New York instead of direct. Adding Amsterdam to the list of affected airports makes the terrorist add yet another itinerary change; it doesn’t remove the threat.

Which brings up another question: If this is truly a threat, why aren’t domestic flights included in this ban? Remember that anyone boarding a plane to the United States from these Muslim countries has already received a visa to enter the country. This isn’t perfect security — the infamous underwear bomber had a visa, after all — but anyone who could detonate a laptop bomb on his international flight could do it on his domestic connection.

I don’t have access to classified intelligence, and I can’t comment on whether explosive-filled laptops are truly a threat. But, if they are, TSA can set up additional security screenings at the gates of US-bound flights worldwide and screen every laptop coming onto the plane. It wouldn’t be the first time we’ve had additional security screening at the gate. And they should require all laptops to go through this screening, prohibiting them from being stashed in checked baggage.

This measure is nothing more than security theater against what appears to be a movie-plot threat.

Banishing laptops to the cargo holds brings with it a host of other threats. Passengers run the risk of their electronics being stolen from their checked baggage — something that has happened in the past. And, depending on the country, passengers also have to worry about border control officials intercepting checked laptops and making copies of what’s on their hard drives.

Safety is another concern. We’re already worried about large lithium-ion batteries catching fire in airplane baggage holds; adding a few hundred of these devices will considerably exacerbate the risk. Both FedEx and UPS no longer accept bulk shipments of these batteries after battery fires brought down two cargo jets in 2010 and 2011.

Of course, passengers will rebel against this rule. Having access to a computer on these long transatlantic flights is a must for many travelers, especially the high-revenue business-class travelers. They also won’t accept the delays and confusion this rule will cause as it’s rolled out. Unhappy passengers fly less, or fly other routes on other airlines without these restrictions.

I don’t know how many passengers are choosing to fly to the Middle East via Toronto to avoid the current laptop ban, but I suspect there may be some. If Europe is included in the new ban, many more may consider adding Canada to their itineraries, as well as choosing European hubs that remain unaffected.

As passengers voice their disapproval with their wallets, airlines will rebel. Already Emirates has a program to loan laptops to their premium travelers. I can imagine US airlines doing the same, although probably for an extra fee. We might learn how to make this work: keeping our data in the cloud or on portable memory sticks and using unfamiliar computers for the length of the flight.

A more likely response will be comparable to what happened after the US increased passenger screening post-9/11. In the months and years that followed, we saw different ways for high-revenue travelers to avoid the lines: faster first-class lanes, and then the extra-cost trusted traveler programs that allow people to bypass the long lines, keep their shoes on their feet and leave their laptops and liquids in their bags. It’s a bad security idea, but it keeps both frequent fliers and airlines happy. It would be just another step to allow these people to keep their electronics with them on their flight.

The problem with this response is that it solves the problem for frequent fliers, while leaving everyone else to suffer. This is already the case; those of us enrolled in a trusted traveler program forget what it’s like to go through “normal” security screening. And since frequent fliers — likely to be more wealthy — no longer see the problem, they don’t have any incentive to fix it.

Dividing security checks into haves and have-nots is bad social policy, and we should actively fight any expansion of it. If the TSA implements this security procedure, it should implement it for every flight. And there should be no exceptions. Force every politically connected flier, from members of Congress to the lobbyists that influence them, to do without their laptops on planes. Let the TSA explain to them why they can’t work on their flights to and from D.C.

This essay previously appeared on CNN.com.

EDITED TO ADD: US officials are backing down.

The TSA’s Selective Laptop Ban

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/the_tsas_select.html

Last Monday, the TSA announced a peculiar new security measure to take effect within 96 hours. Passengers flying into the US on foreign airlines from eight Muslim countries would be prohibited from carrying aboard any electronics larger than a smartphone. They would have to be checked and put into the cargo hold. And now the UK is following suit.

It’s difficult to make sense of this as a security measure, particularly at a time when many people question the veracity of government orders, but other explanations are either unsatisfying or damning.

So let’s look at the security aspects of this first. Laptop computers aren’t inherently dangerous, but they’re convenient carrying boxes. This is why, in the past, TSA officials have demanded passengers turn their laptops on: to confirm that they’re actually laptops and not laptop cases emptied of their electronics and then filled with explosives.

Forcing a would-be bomber to put larger laptops in the plane’s hold is a reasonable defense against this threat, because it increases the complexity of the plot. Both the shoe-bomber Richard Reid and the underwear bomber Umar Farouk Abdulmutallab carried crude bombs aboard their planes with the plan to set them off manually once aloft. Setting off a bomb in checked baggage is more work, which is why we don’t see more midair explosions like Pan Am Flight 103 over Lockerbie, Scotland, in 1988.

Security measures that restrict what passengers can carry onto planes are not unprecedented either. Airport security regularly responds to both actual attacks and intelligence regarding future attacks. After the liquid bombers were captured in 2006, the British banned all carry-on luggage except passports and wallets. I remember talking with a friend who traveled home from London with his daughters in those early weeks of the ban. They reported that airport security officials confiscated every tube of lip balm they tried to hide.

Similarly, the US started checking shoes after Reid, installed full-body scanners after Abdulmutallab and restricted liquids in 2006. But all of those measures were global, and most lessened in severity as the threat diminished.

This current restriction implies some specific intelligence of a laptop-based plot and a temporary ban to address it. However, if that’s the case, why only certain non-US carriers? And why only certain airports? Terrorists are smart enough to put a laptop bomb in checked baggage from the Middle East to Europe and then carry it on from Europe to the US.

Why not require passengers to turn their laptops on as they go through security? That would be a more effective security measure than forcing them into checked luggage. And lastly, why is there a delay between the ban being announced and its taking effect?

Even more confusing, the New York Times reported that “officials called the directive an attempt to address gaps in foreign airport security, and said it was not based on any specific or credible threat of an imminent attack.” The Department of Homeland Security FAQ page makes this general statement, “Yes, intelligence is one aspect of every security-related decision,” but doesn’t provide a specific security threat. And yet a report from the UK states the ban “follows the receipt of specific intelligence reports.”

Of course, the details are all classified, which leaves all of us security experts scratching our heads. On the face of it, the ban makes little sense.

One analysis painted this as a protectionist measure targeted at the heavily subsidized Middle Eastern airlines by hitting them where it hurts the most: high-paying business class travelers who need their laptops with them on planes to get work done. That reasoning makes more sense than any security-related explanation, but doesn’t explain why the British extended the ban to UK carriers as well. Or why this measure won’t backfire when those Middle Eastern countries turn around and ban laptops on American carriers in retaliation. And one aviation official told CNN that an intelligence official informed him it was not a “political move.”

In the end, national security measures based on secret information require us to trust the government. That trust is at historic low levels right now, so people both in the US and other countries are rightly skeptical of the official unsatisfying explanations. The new laptop ban highlights this mistrust.

This essay previously appeared on CNN.com.

EDITED TO ADD: Here are two essays that look at the possible political motivations, and fallout, of this ban. And the EFF rightly points out that letting a laptop out of your hands and sight is itself a security risk — for the passenger.

Security and Privacy Guidelines for the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_pr.html

Lately, I have been collecting IoT security and privacy guidelines. Here’s everything I’ve found:

  1. "Internet of Things (IoT) Security and Privacy Recommendations," Broadband Internet Technical Advisory Group, Nov 2016.
  2. "IoT Security Guidance," Open Web Application Security Project (OWASP), May 2016.
  3. "Strategic Principles for Securing the Internet of Things (IoT)," US Department of Homeland Security, Nov 2016.
  4. "Security," OneM2M Technical Specification, Aug 2016.
  5. "Security Solutions," OneM2M Technical Specification, Aug 2016.
  6. "IoT Security Guidelines Overview Document," GSMA, Feb 2016.
  7. "IoT Security Guidelines For Service Ecosystems," GSMA, Feb 2016.
  8. "IoT Security Guidelines for Endpoint Ecosystems," GSMA, Feb 2016.
  9. "IoT Security Guidelines for Network Operators," GSMA, Feb 2016.
  10. "Establishing Principles for Internet of Things Security," IoT Security Foundation, undated.
  11. "IoT Design Manifesto," May 2015.
  12. "NYC Guidelines for the Internet of Things," City of New York, undated.
  13. "IoT Security Compliance Framework," IoT Security Foundation, 2016.
  14. "Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development," IoTIAP, Nov 2016.
  15. "IoT Trust Framework," Online Trust Alliance, Jan 2017.
  16. "Five Star Automotive Cyber Safety Framework," I am the Cavalry, Feb 2015.
  17. "Hippocratic Oath for Connected Medical Devices," I am the Cavalry, Jan 2016.

Other, related, items:

  1. "We All Live in the Computer Now," The Netgain Partnership, Oct 2016.
  2. "Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things," Electronic Privacy Information Center, Jun 2013.
  3. "Internet of Things Software Update Workshop (IoTSU)," Internet Architecture Board, Jun 2016.

They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.
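As a sketch of what one of those recommendations, making systems patchable via signed updates, could look like in code, here is a minimal Python example. It assumes the third-party cryptography package; the names and flow are hypothetical, and a real device would keep only the vendor's public key, in hardware-backed storage, and use an established update framework.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In reality, the private key stays with the vendor and only the public
# key ships on the device; both are generated here for demonstration.
vendor_key = Ed25519PrivateKey.generate()
device_trusted_key = vendor_key.public_key()

def sign_firmware(firmware: bytes) -> bytes:
    """Vendor side: sign the firmware image before publishing it."""
    return vendor_key.sign(firmware)

def device_apply_update(firmware: bytes, signature: bytes) -> bool:
    """Device side: install the update only if the signature verifies."""
    try:
        device_trusted_key.verify(signature, firmware)
    except InvalidSignature:
        return False  # refuse unsigned or tampered images
    # ...write firmware to flash and reboot here...
    return True

image = b"firmware v2.0 image bytes"
sig = sign_firmware(image)
assert device_apply_update(image, sig)             # genuine update installs
assert not device_apply_update(image + b"!", sig)  # tampered image refused
```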

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It’ll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I’m missing, please tell me in the comments.

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder — or at least a DVR like yours — knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices — and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and — like everything else we’ve just listed — it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and — increasingly — in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.
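That sensor/smarts/actuator split maps directly onto code. Here is a toy thermostat control loop in Python; everything in it is invented for illustration, with the sensor and actuator stubbed out.

```python
import random
import time

TARGET_C = 20.0
HYSTERESIS_C = 0.5  # dead band, to avoid rapid on/off cycling

def read_temperature() -> float:
    """Sensor: stubbed with noise; a real device queries hardware."""
    return 20.0 + random.uniform(-2.0, 2.0)

def set_furnace(on: bool) -> None:
    """Actuator: a real device switches a relay; here we just log."""
    print("furnace", "ON" if on else "OFF")

for _ in range(10):                          # real device: while True
    temp = read_temperature()                # sense
    if temp < TARGET_C - HYSTERESIS_C:       # think
        set_furnace(True)                    # act
    elif temp > TARGET_C + HYSTERESIS_C:
        set_furnace(False)
    time.sleep(0.1)                          # real device: ~60 seconds
```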

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things — or, more precisely, cyber-physical systems — autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems — and solutions — of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it — the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because they make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability — one unsecured avenue for attack — and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.
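To get a feel for what that defect density means at modern scale, here is a back-of-envelope sketch. The line count is a commonly cited figure for a modern car, and the exploitable fraction is an illustrative assumption, not a measurement:

```python
# Back-of-envelope arithmetic: assumed, not measured, rates.
lines_of_code = 100_000_000   # a modern car is often cited at ~100M lines
bugs_per_kloc = 1             # the density quoted above
exploitable_share = 0.01      # assumption: 1 in 100 bugs is a vulnerability

bugs = lines_of_code // 1_000 * bugs_per_kloc
print(f"~{bugs:,} latent bugs, "
      f"~{int(bugs * exploitable_share):,} potentially exploitable")
# -> ~100,000 latent bugs, ~1,000 potentially exploitable
```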

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the results.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. They also allow them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars -­ literally -­ in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate — although that’s often a big part of it — and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both organizations are dedicated to providing digital government services, both have collected significant expertise by bringing in people from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation — both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults — and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Government has the scope, scale, and balance of interests to address them. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley — the mistrust of governments by tech companies and the mistrust of tech companies by governments — is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law, and a career path that remains viable even though people in this field won’t make as much as they would in a high-tech start-up. The security of our computerized and networked future — meaning the security of ourselves, our families, our homes, our businesses, and our communities — depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled — or killed — by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I go through a random sample of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not; it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, and that good security hygiene would remove it. Instead, it’s an intended feature of the product, used to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of security guides banning backdoors, they need to come up with a standard for remote reset.
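To make that concrete, here is a minimal sketch of the kind of credential check Mirai’s scanner performed, written as an audit helper you could point at your own equipment. The credential list includes the real Xiongmai default, but the prompt handling is a simplification: real Telnet sessions involve option negotiation, which this glosses over.

```python
import socket

# Hardcoded credentials of the kind Mirai shipped with; root/xc3511 is
# the infamous Xiongmai one. None of these are web-UI accounts, so a
# "change your password" mandate on the web interface never touches them.
DEFAULT_CREDS = [("root", "xc3511"), ("root", "vizxv"), ("admin", "admin")]

def accepts_default_creds(host: str, port: int = 23,
                          timeout: float = 3.0) -> bool:
    """Audit your own device: does its Telnet service accept any
    well-known hardcoded credential?"""
    for user, password in DEFAULT_CREDS:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.recv(1024)                         # login banner/prompt
                s.sendall(user.encode() + b"\r\n")
                s.recv(1024)                         # password prompt
                s.sendall(password.encode() + b"\r\n")
                reply = s.recv(1024)
                if b"#" in reply or b"$" in reply:   # shell prompt = success
                    return True
        except OSError:
            continue
    return False
```

Nothing the user does in the web UI changes the outcome of this loop; only removing or redesigning the Telnet reset service would.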

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP 3 ) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of naive new proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or a discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposedly “bipartisan” document, which sees government as a paternalistic entity that can help. The right wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with government public-private partnerships in cybersecurity, I’ve found them to be a time-waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.
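For context on what a software second factor actually computes, here is the standard TOTP algorithm (RFC 6238) behind most phone authenticator apps and cheap hardware tokens. The point of the sketch is that the cryptography itself is nearly free; the real costs are the identity-proofing and token-replacement logistics described above. The shared secret below is a made-up example.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC the current
    30-second interval counter with a shared secret, then truncate."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```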

This is actually not a bad recommendation, as far as government services are involved, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for the private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new; you’ve heard this privacy debate before. What I’m trying to show here is that the one-sided view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns like GE that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because the programmers building web-apps blindly paste user input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because of the lack of cybersecurity as part of normal IT/computers.
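The fix is one line of discipline, not an army of practitioners. A minimal sketch using Python’s built-in sqlite3 module, with a made-up table and input for illustration; any database API with placeholders works the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 0), ("bob", 1)])

name = "alice' OR '1'='1"   # attacker-controlled input

# The textbook pattern: paste input straight into the query string.
# The input is now SQL, and the WHERE clause is always true.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + name + "'").fetchall()
print(rows)   # [('alice',), ('bob',)] -- every user leaks

# The fix: a parameterized query treats input as data, never as code.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (name,)).fetchall()
print(rows)   # [] -- no user literally has that name
```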

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made for them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.
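To see the problem, try producing even the top layer of such a label. Here is a sketch using Python’s own package metadata; note that it covers only installed Python packages, and says nothing about the C libraries, kernel, or firmware underneath:

```python
from importlib import metadata

# Enumerate just the installed Python packages and their versions.
# On a typical developer machine this alone runs to hundreds of lines,
# and a CVE in any one of them potentially affects the whole system.
entries = sorted((dist.metadata["Name"] or "unknown", dist.version)
                 for dist in metadata.distributions())
for name, version in entries:
    print(f"{name}=={version}")

print(f"\n{len(entries)} direct entries -- before transitive dependencies,")
print("native libraries, or device firmware are even counted.")
```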

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Peiter Zatko, who has gotten hundreds of thousands of dollars from government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Several recommendations call for the appointment of various CISOs, an Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does the document mention that these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy positions, driven by special interests rather than technical expertise.

UK Encryption Backdoor Law Passed Via Investigatory Powers Act

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/mYvV-ZzHN1k/

The latest news out of my homeland is not good: the UK encryption backdoor law has passed via the Investigatory Powers Act, or the IPA Bill as it’s commonly known. The act was itself passed through a kind of backdoor route, one which avoided the scorn of the public. That was good for the lawmakers, but not for the […]

The post UK Encryption Backdoor Law…

Read the full post at darknet.org.uk

Election Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/election_securi.html

It’s over. The voting went smoothly. As of the time of writing, there are no serious fraud allegations, nor credible evidence that anyone tampered with voting rolls or voting machines. And most important, the results are not in doubt.

While we may breathe a collective sigh of relief about that, we can’t ignore the issue until the next election. The risks remain.

As computer security experts have been saying for years, our newly computerized voting systems are vulnerable to attack by both individual hackers and government-sponsored cyberwarriors. It is only a matter of time before such an attack happens.

Electronic voting machines can be hacked, and those machines that do not include a paper ballot that can verify each voter’s choice can be hacked undetectably. Voting rolls are also vulnerable; they are all computerized databases whose entries can be deleted or changed to sow chaos on Election Day.

The largely ad hoc system in states for collecting and tabulating individual voting results is vulnerable as well. While the difference between theoretical if demonstrable vulnerabilities and an actual attack on Election Day is considerable, we got lucky this year. And it’s not just presidential elections that are at risk: state and local elections are, too.

To be very clear, this is not about voter fraud. The risks of ineligible people voting, or people voting twice, have been repeatedly shown to be virtually nonexistent, and “solutions” to this problem are largely voter-suppression measures. Election fraud, however, is both far more feasible and much more worrisome.

Here’s my worry. On the day after an election, someone claims that a result was hacked. Maybe one of the candidates points to a wide discrepancy between the most recent polls and the actual results. Maybe an anonymous person announces that he hacked a particular brand of voting machine, describing in detail how. Or maybe it’s a system failure during Election Day: voting machines recording significantly fewer votes than there were voters, or zero votes for one candidate or another. (These are not theoretical occurrences; they have both happened in the United States before, though because of error, not malice.)

We have no procedures for how to proceed if any of these things happen. There’s no manual, no national panel of experts, no regulatory body to steer us through this crisis. How do we figure out if someone hacked the vote? Can we recover the true votes, or are they lost? What do we do then?

First, we need to do more to secure our elections system. We should declare our voting systems to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards. Voting-security experts can deal with the technical details, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters, counted by computer but recountable by hand. And we need a system of pre-election and postelection security audits to increase confidence in the system.
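The arithmetic behind such audits shows why they are so much cheaper than full recounts. Here is a sketch using a published first-order approximation for ballot-polling risk-limiting audits; the real BRAVO procedure is sequential, so treat these numbers as ballpark figures:

```python
import math

def ballot_polling_sample(margin: float, risk_limit: float) -> int:
    """Approximate expected sample size for a ballot-polling
    risk-limiting audit: about 2*ln(1/alpha)/margin^2 ballots
    when the reported outcome is actually correct."""
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

for m in (0.10, 0.05, 0.01):
    print(f"margin {m:>4.0%}: ~{ballot_polling_sample(m, 0.10):,} ballots")
# A 10% margin needs only a few hundred ballots; a 1% margin ~46,000 --
# still far cheaper than hand-recounting millions.
```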

Second, election tampering, either by a foreign power or by a domestic actor, is inevitable, so we need detailed procedures to follow — both technical procedures to figure out what happened, and legal procedures to figure out what to do — that will efficiently get us to a fair and equitable election resolution. There should be a board of independent computer-security experts to unravel what happened, and a board of independent election officials, either at the Federal Election Commission or elsewhere, empowered to determine and put in place an appropriate response.

In the absence of such impartial measures, people rush to defend their candidate and their party. Florida in 2000 was a perfect example. What could have been a purely technical issue of determining the intent of every voter became a battle for who would win the presidency. The debates about hanging chads and spoiled ballots and how broad the recount should be were contested by people angling for a particular outcome. In the same way, after a hacked election, partisan politics will place tremendous pressure on officials to make decisions that override fairness and accuracy.

That is why we need to agree on policies to deal with future election fraud. We need procedures to evaluate claims of voting-machine hacking. We need a fair and robust vote-auditing process. And we need all of this in place before an election is hacked and battle lines are drawn.

In response to Florida, the Help America Vote Act of 2002 required each state to publish its own guidelines on what constitutes a vote. Some states — Indiana, in particular — set up a “war room” of public and private cybersecurity experts ready to help if anything did occur. While the Department of Homeland Security is assisting some states with election security, and the F.B.I. and the Justice Department made some preparations this year, the approach is too piecemeal.

Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser — and all the supporters — that he or she lost. To achieve the first purpose, the voting system must be fair and accurate. To achieve the second one, it must be shown to be fair and accurate.

We need to have these conversations before something happens, when everyone can be calm and rational about the issues. The integrity of our elections is at stake, which means our democracy is at stake.

This essay previously appeared in the New York Times.

Regulation of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/regulation_of_t.html

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much failures of market and policy as of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

First, the facts. Those websites went down because their domain name provider — a company named Dyn — was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers — possibly millions — of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers — that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we've been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it's unlikely to be well-considered and thoughtful action. Our choice isn't between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they're coming. We can't afford to ignore these issues until it's too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn't matter — it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won't get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Notes on that StJude/MuddyWatters/MedSec thing

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/notes-on-that-stjudemuddywattersmedsec.html

I thought I’d write up some notes on the StJude/MedSec/MuddyWaters affair. Some references: [1] [2] [3] [4].

The story so far

tl;dr: hackers drop 0day on medical device company hoping to profit by shorting their stock

St Jude Medical (STJ) is one of the largest providers of pacemakers (aka cardiac devices) in the country, with around $2.5 billion in revenue from them, which accounts for about half its business. They provide "smart" pacemakers with an on-board computer that talks via radio waves to a nearby monitor that records the functioning of the device (and health data). That monitor, "Merlin@home", then talks back up to St Jude (via phone lines, 3G cell phone, or wifi). Pretty much all pacemakers work that way (my father's does, although his is from a different vendor).

MedSec is a bunch of cybersecurity researchers (white-hat hackers) who have been investigating medical devices. In theory, their primary business is to sell their services to medical device companies, to help companies secure their devices. Their CEO is Justine Bone, a long-time white-hat hacker. Despite Muddy Waters garbling the research, there’s no reason to doubt that there’s quality research underlying all this.

Muddy Waters is an investment company known for investigating companies, finding problems like accounting fraud, and profiting by shorting the stock of misbehaving companies.

Apparently, MedSec did a survey of many pacemaker manufacturers, chose the one with the most cybersecurity problems, and went to Muddy Waters with their findings, asking for a share of the profits Muddy Waters got from shorting the stock.

Muddy Waters published their findings in [1] above. St Jude published their response in [2] above. They are both highly dishonest. I point that out because people want to discuss the ethics of using 0day to short stock when we should talk about the ethics of lying.

“Why you should sell the stock” [finance issues]

In this section, I try to briefly summarize Muddy Waters' argument for why St Jude's stock will drop. I'm not an expert in this area (though I do a fair amount of investing), but the argument seems flimsy to me.
Muddy Waters' argument is that these pacemakers are half of St Jude's business, and that fixing them will first require recalling them all, then take another 2 years to fix, during which time they can't be selling pacemakers. Much of the Muddy Waters paper is taken up explaining this, citing similar medical cases, and so on.
If at all true, and if the cybersecurity claims hold up, then yes, this would be good reason to short the stock. However, I suspect they aren't true — and that Muddy Waters is simply trying to scare people about long-term consequences so it can profit in the short term.
@selenakyle on Twitter suggests this interesting document [4] about market solutions to vuln disclosure, if you are interested in this angle of things.
Update from @lippard: Abbott Labs agreed in April to buy St Jude at $85 a share (when St Jude's stock was $60/share). Presumably, for this Muddy Waters attack on St Jude's stock price to profit from anything more than a really short-term stock drop (like dumping their short position today), Muddy Waters would have to believe this effort will cause Abbott Labs to walk away from the deal. Normally, there are penalties for doing so, but material things like massive vulnerabilities in a product should allow Abbott Labs to walk away without penalties.

The 0day being dropped

Well, they didn’t actually drop 0day as such, just claims that 0day exists — that it’s been “demonstrated”. Reading through their document a few times, I’ve created a list of the 0day they found, to the granularity that one would expect from CVE numbers (CVE is group within the Department of Homeland security that assigns standard reference numbers to discovered vulnerabilities).

The first two, which can kill somebody, are the salient ones. The others are more normal cybersecurity issues, and may be of concern because they can leak HIPAA-protected info.

CVE-2016-xxxx: Pacemaker can be crashed, leading to death
From a reasonable distance (under 50 feet), pounding the pacemaker for several hours with malformed packets (either from an SDR or a hacked version of the Merlin@home monitor) can crash it. Sometimes such crashes will brick the device; other times they put it into a state that may kill the patient by zapping the heart too quickly.

CVE-2016-xxxx: Pacemaker power can be drained, leading to death
Within a reasonable distance (under 50 feet) over several days, the pacemaker's power can slowly be drained at the rate of 3% per hour. While the user will receive a warning from their Merlin@home monitoring device that the battery is getting low, it's possible the battery may be fully depleted before they can get to a doctor for a replacement. A non-functioning pacemaker may lead to death.
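To put the claimed numbers in perspective (a rough sketch: the 3%-per-hour rate is Muddy Waters' figure, while the full starting charge and the hours-per-day of radio contact are my assumptions):

    # Rough math on the claimed drain attack. 3%/hour comes from the
    # report; a full starting charge and 8 hours/day of radio contact
    # (e.g., sleeping near a compromised monitor) are assumptions.
    drain_per_hour = 0.03
    hours_in_range_per_day = 8

    hours_to_deplete = 1.0 / drain_per_hour            # ~33 hours of attack
    days = hours_to_deplete / hours_in_range_per_day   # ~4 calendar days
    print(round(hours_to_deplete, 1), round(days, 1))

That matches the report's "several days" framing: the attack doesn't need one marathon session, just repeated proximity.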

CVE-2016-xxxx: Pacemaker uses unauthenticated/unencrypted RF protocol
The above two items are possible because there is neither encryption nor authentication in the wireless protocol, allowing any evildoer access to the pacemaker device or the monitoring device.

CVE-2016-xxxx: Merlin@home contained hard-coded credentials and SSH keys
The password to connect to the St Jude network is the same for all devices, and thus easily reverse engineered.

CVE-2016-xxxx: local proximity wand not required
It’s unclear in the report, but it seems that most other products require a wand in local promixity (inches) in order to enable communication with the pacemaker. This seems like a requirement — otherwise, even with authentication, remote RF would be able to drain the device in the person’s chest.

So these are, as far as I can tell, the explicit bugs they outline. Unfortunately, none are described in detail. I don't see enough detail for any of these to actually be assigned a CVE number. I'm being generous here, trying to describe them as such and giving them the benefit of the doubt; there's enough weasel language in there to make me doubt all of them. Though, if the first two prove not to be reproducible, then there will be a great defamation case, so I presume those two are true.

The movie/TV plot scenarios

So if you wanted to use this as a realistic TV/movie plot, here are two of them.
#1 You (the executive of the acquiring company) are meeting with the CEO and executives of a smaller company you want to buy. It's a family concern, and the CEO really doesn't want to sell. But you know his/her children want to sell. Therefore, during the meeting, you pull out your notebook and an SDR device and put it on the conference room table. You start running the exploit to crash that CEO's pacemaker. It crashes; the CEO grabs his/her chest and gets carted off to the hospital. The children continue negotiations, selling off their company.
#2 You are a hacker in Russia going after a target. After many phishing attempts, you finally break into the home desktop computer. From that computer, you branch out and connect to the Merlin@home device through the hard-coded password. You then run an exploit from the device, using that device's own radio, to slowly drain the battery from the pacemaker, day after day, while the target sleeps. You patch the software so it no longer warns the user that the battery is getting low. The battery dies, and a few days later while the victim is digging a ditch, s/he falls over dead from heart failure.

The Muddy Water’s document is crap

There are many ethical issues, but the first should be the dishonesty and spin of the Muddy Waters research report.

The report is clearly designed to scare other investors into dropping St Jude's stock price in the short term so that Muddy Waters can profit. It's not designed to withstand long-term scrutiny. It's full of misleading details and outright lies.

For example, it keeps stressing how shockingly bad the security vulnerabilities are, such as saying:

We find STJ Cardiac Devices’ vulnerabilities orders of magnitude more worrying than the medical device hacks that have been publicly discussed in the past. 

This is factually untrue. The St Jude problems are no worse than the 2013 disclosure that doctors had disabled the RF capabilities of Dick Cheney's pacemaker, and no worse than that insulin pump hack. Bad cybersecurity is the norm for medical devices. St Jude may be among the worst, but not by an order of magnitude.

The term “orders of magnitude” is math, by the way, and means “at least 100 times worse”. As an expert, I claim these problems are not even one order of magnitude (10 times worse). I challenge MedSec’s experts to stand behind the claim that these vulnerabilities are at least 100 times worse than other public medical device hacks.

In many places, the language is wishy-washy. Consider this quote:

Despite having no background in cybersecurity, Muddy Waters has been able to replicate in-house key exploits that help to enable these attacks

The semantic content of this is nil. It says they weren’t able to replicate the attacks themselves. They don’t have sufficient background in cybersecurity to understand what they replicated.

Such language is pervasive throughout the document, things that aren’t technically lies, but which aren’t true, either.

Also pervasive throughout the document, repeatedly interjected for no reason in the middle of text, are statements like this, repeatedly stressing why you should sell the stock:

Regardless, we have little doubt that STJ is about to enter a period of protracted litigation over these products. Should these trials reach verdicts, we expect the courts will hold that STJ has been grossly negligent in its product design. (We estimate awards could total $6.4 billion.)

I point this out because Muddy Waters obviously doesn't feel the content of the document stands on its own well enough for you to reach this conclusion yourself. It instead feels the need to repeat the message over and over on every page.

Muddy Waters' violation of Kerckhoffs' Principle

One of the most important principles of cybersecurity is Kerckhoffs' Principle: that more openness is better. Or, phrased another way, that trying to achieve security through obscurity is bad.

The Muddy Water’s document attempts to violate this principle. Besides the the individual vulnerabilities, it makes the claim that St Jude cybersecurity is inherently bad because it’s open. it uses off-the-shelf chips, standard software (line Linux), and standard protocols. St Jude does nothing to hide or obfuscate these things.

Everyone in cybersecurity would agree this is good. Muddy Waters claims this is bad.

For example, some of their quotes:

One competitor went as far as developing a highly proprietary embedded OS, which is quite costly and rarely seen

In contrast, the other manufacturers have proprietary RF chips developed specifically for their protocols

Again, as a cybersecurity expert in this case, I challenge MedSec to publicly defend Muddy Waters on these claims.

Medical device manufacturers should do the opposite of what Muddy Waters claims. I’ll explain why.

Either your system is secure or it isn’t. If it’s secure, then making the details public won’t hurt you. If it’s insecure, then making the details obscure won’t help you: hackers are far more adept at reverse engineering than you can possibly understand. Making things obscure, though, does stop helpful hackers (i.e. cybersecurity consultants you hire) from making your system secure, since it’s hard figuring out the details.

Said another way: your adversaries (such as me) hate seeing open systems that are obviously secure. We love seeing obscure systems, because we know you couldn’t possibly have validated their security.

The point is this: Muddy Waters is trying to profit from the public’s misconception about cybersecurity, namely that obscurity is good. The actual principle is that obscurity is bad.

St Jude’s response was no better

In response to the Muddy Waters document, St Jude published this document [2]. It's equally full of lies — the sort that may deserve a shareholder lawsuit. (I see lawsuits galore over this.) It says the following:

We have examined the allegations made by Capital and MedSec on August 25, 2016 regarding the safety and security of our pacemakers and defibrillators, and while we would have preferred the opportunity to review a detailed account of the information, based on available information, we conclude that the report is false and misleading.

If that’s true, if they can prove this in court, then that will mean they could win millions in a defamation lawsuit against Muddy Waters, and millions more for stock manipulation.

But it’s almost certainly not true. Without authentication/encryption, then the fact that hackers can crash/drain a pacemaker is pretty obvious, especially since (as claimed by Muddy Waters), they’ve successfully done it. Specifically, the picture on page 17 of the 34 page Muddy Waters document is a smoking gun of a pacemaker misbehaving.

The rest of their document contains weasel-word denials that may be technically true, but which have no meaning.

St. Jude Medical stands behind the security and safety of our devices as confirmed by independent third parties and supported through our regulatory submissions. 

Our software has been evaluated and assessed by several independent organizations and researchers including Deloitte and Optiv.

In 2015, we successfully completed an upgrade to the ISO 27001:2013 certification.

These are all myths of the cybersecurity industry. Conformance with security standards, such as ISO 27001:2013, has absolutely zero bearing on whether you are secure. Having some consultants/white-hat claim your product is secure doesn’t mean other white-hat hackers won’t find an insecurity.

Indeed, having been assessed by Deloitte is a good indicator that something is wrong. It's not that they are incompetent (they've got some smart people working for them), but ultimately the way the security market works is that you demand such auditors find reasons to believe your product is secure, not that they keep hunting until they find something insecure. That's why outsiders, like MedSec, are better: they strive to find why your product is insecure. The bigger the enemy, the more resources they'll put into finding a problem.

It’s like after you get a hair cut, your enemies and your friends will have different opinions on your new look. Enemies are more honest.

The most obvious lie from the St Jude response is the following:

The report claimed that the battery could be depleted at a 50-foot range. This is not possible since once the device is implanted into a patient, wireless communication has an approximate 7-foot range. This brings into question the entire testing methodology that has been used as the basis for the Muddy Waters Capital and MedSec report.

That’s not how wireless works. With directional antennas and amplifiers, 7-feet easily becomes 50-feet or more. Even without that, something designed for reliable operation at 7-feet often works less reliably at 50-feet. There’s no cutoff at 7-feet within which it will work, outside of which it won’t.

That St Jude deliberately lies here brings into question their entire rebuttal. (see what I did there?)

ETHICS ETHICS ETHICS

First let’s discuss the ethics of lying, using weasel words, and being deliberately misleading. Both St Jude and Muddy Waters do this, and it’s ethically wrong. I point this out to uninterested readers who want to get at that other ethical issue. Clear violations of ethics we all agree interest nobody — but they ought to. We should be lambasting Muddy Waters for their clear ethical violations, not the unclear one.

So let’s get to the ethical issue everyone wants to discuss:

Is it ethical to profit from shorting stock while dropping 0day?

Let’s discuss some of the issues.

There’s no insider trading. Some people wonder if there are insider trading issues. There aren’t. While it’s true that Muddy Waters knew some secrets that nobody else knew, as long as they weren’t insider secrets, it’s not insider trading. In other words, only insiders know about a key customer contract won or lost recently. But, vulnerabilities researched by outsiders is still outside the company.

Watching a CEO walk into the building of a competitor is still outsider knowledge — you can trade on the likely merger, even though insider employees cannot.

Dropping 0day might kill/harm people. That may be true, but that's never an ethical reason not to drop it, because it's not this one event in isolation. If companies knew ethical researchers would never drop an 0day, then they'd never patch it. It's like the government's warrantless surveillance of American citizens: the courts won't let us challenge it, because we can't prove it exists, and we can't prove it exists, because the courts allow it to be kept secret, because revealing the surveillance would harm national intelligence. The fact that harm may happen shouldn't stop the right thing from happening.

In other words, in the long run, dropping this 0day doesn’t necessarily harm people — and thus profiting on it is not an ethical issue. We need incentives to find vulns. This moves the debate from an ethical one to more of a factual debate about the long-term/short-term risk from vuln disclosure.

As MedSec points out, St Jude has already proven itself an untrustworthy consumer of vulnerability disclosures. When that happens, dropping 0day is ethically permissible under "responsible disclosure". Indeed, that St Jude then lied about it in their response justifies, ex post facto, the dropping of the 0day.

No 0day was actually dropped here. In this case, what was dropped was claims of 0day. This may be good or bad, depending on your arguments. It’s good that the vendor will have some extra time to fix the problems before hackers can start exploiting them. It’s bad because we can’t properly evaluate the true impact of the 0day unless we get more detail — allowing Muddy Waters to exaggerate and mislead people in order to move the stock more than is warranted.

In other words, the lack of actual 0day here is the problem — actual 0day would’ve been better.

This 0day is not necessarily harmful. Okay, it is harmful, but it requires close proximity. It's not as if the hacker can reach out from across the world and kill everyone (barring my movie-plot section above). If you are within 50 feet of somebody, it's easier to shoot, stab, or poison them.

Shorting on bad news is common. Before we address the issue whether this is unethical for cybersecurity researchers, we should first address the ethics for anybody doing this. Muddy Waters already does this by investigating companies for fraudulent accounting practice, then shorting the stock while revealing the fraud.

Yes, it’s bad that Muddy Waters profits on the misfortunes of others, but it’s others who are doing fraud — who deserve it. [Snide capitalism trigger warning] To claim this is unethical means you are a typical socialist who believe the State should defend companies, even those who do illegal thing, in order to stop illegitimate/windfall profits. Supporting the ethics of this means you are a capitalist, who believe companies should succeed or fail on their own merits — which means bad companies need to fail, and investors in those companies should lose money.

Yes, this is bad for cybersec research. There is constant tension between cybersecurity researchers doing "responsible" (sic) research and companies lobbying Congress to pass laws against it. We saw this recently in how Detroit lobbied for DMCA (copyright) rules to bar security research, and how the DMCA regulators gave us an exemption. MedSec's action means all medical device manufacturers will now lobby Congress for rules to stop MedSec — and the rest of us security researchers. The lack of public research means medical devices will continue to be flawed, which is worse for everyone.

Personally, I don’t care about this argument. How others might respond badly to my actions is not an ethical constraint on my actions. It’s like speech: that others may be triggered into lobbying for anti-speech laws is still not constraint on what ethics allow me to say.

There were no lies or betrayal in the research. For me, “ethics” is usually a problem of lying, cheating, theft, and betrayal. As long as these things don’t happen, then it’s ethically okay. If MedSec had been hired by St Jude, had promised to keep things private, and then later disclosed them, then we’d have an ethical problem. Or consider this: frequently clients ask me to lie or omit things in pentest reports. It’s an ethical quagmire. The quick answer, by the way, is “can you make that request in writing?”. The long answer is “no”. It’s ethically permissible to omit minor things or do minor rewording, but not when it impinges on my credibility.

A life is worth about $10 million. Most people agree that "you can't put a value on a human life", and that those who do are evil. The opposite is true. Should we spend more on airplane safety, breast cancer research, or the military budget to fight ISIS? Each can be measured in the number of lives saved. Should we spend more on breast cancer research, which affects people in their 30s, or on solving heart disease, which affects people in their 70s? All these decisions mean putting value on human life, and sometimes putting different values on different human lives. Whether or not you think it's ethical, it's the way the world works.

Thus, we can measure this disclosure of 0day in terms of potential value of life lost, vs. potential value of life saved.

Is this market manipulation? This is more of a legal question than an ethical one, but people are discussing it. If the data is true, then it’s not “manipulation” — only if it’s false. As documented in this post, there’s good reason to doubt the complete truth of what Muddy Waters claims. I suspect it’ll cost Muddy Waters more in legal fees in the long run than they could possibly hope to gain in the short run. I recommend investment companies stick to areas of their own expertise (accounting fraud) instead of branching out into things like cyber where they really don’t grasp things.

This is again bad for security research. Frankly, we aren't a trusted community, because we claim the "sky is falling" too often, and are proven wrong. If this is proven to be market manipulation — as the stock recovers back to its former level and the scary stories of mass product recalls fail to emerge — we'll be blamed yet again for being wrong. That hurts our credibility.

On the other hand, if any of the scary things Muddy Waters claims actually come to pass, then maybe people will start heeding our warnings.

Ethics conclusion: I’m a die-hard troll, so therefore I’m going to vigorously defend the idea of shorting stock while dropping 0day. (Most of you appear to think it’s unethical — I therefore must disagree with you).  But I’m also a capitalist. This case creates an incentive to drop harmful 0days — but it creates an even greater incentive for device manufacturers not to have 0days to begin with. Thus, despite being a dishonest troll, I do sincerely support the ethics of this.

Conclusion

The two 0days are about crashing the device (killing the patient sooner) or draining the battery (killing them later). Both attacks require hours (if not days) in close proximity to the target. If you can get into the local network (such as through phishing), you might be able to hack the Merlin@home monitor, which is in close proximity to the target for hours every night.

Muddy Waters thinks the security problems are severe enough that it’ll destroy St Jude’s $2.5 billion pacemaker business. The argument is flimsy. St Jude’s retort is equally flimsy.

My prediction: a year from now we'll see little change in St Jude's pacemaker business earnings, though there may be some one-time costs cleaning some stuff up. This will stop the shenanigans of future 0day+shorting, even when it's valid, because nobody will believe researchers.

Tracking the Owner of Kickass Torrents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/tracking_the_ow.html

Here’s the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains: kickasstorrents.com, kat.cr, kickass.to, kat.ph, kastatic.com, thekat.tv and kickass.cr. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally gain a copy of the server’s access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).
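For reference, a reverse DNS (PTR) lookup of the kind described is a one-liner (a generic sketch using a documentation IP, not the investigators' tooling):

    import socket

    # Reverse DNS (PTR) lookup -- equivalent to `dig -x 192.0.2.1`.
    # 192.0.2.1 is a documentation address, not one from the case.
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr("192.0.2.1")
        print(hostname)
    except socket.herror:
        print("no PTR record")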

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of kickasstorrents.biz, that was Artem Vaulin from Kharkiv, Ukraine.

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It's this Apple account that appears to tie all of the pieces of Vaulin's alleged involvement together.

On July 31st 2015, records provided by Apple show that the me.com account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin’s name. That Bitcoin wallet was registered with the same me.com email address.

Another article.

2016-06-26 My talk from PlovdivConf, "ХУДЛ: една полезна дрога" ("Fiction: A Useful Drug")

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3309

(I hesitated whether to write about Brexit or about my talk, but human stupidity should somehow take a back seat to useful things)

While reading Neil Gaiman's "The View from the Cheap Seats", it occurred to me to do this talk. You can look at the slides while you read this, but be warned there isn't a single picture in them. I've tried to follow, more or less, what I said during the presentation itself.
(I added something interesting to my presentation generator: an epub version that can be viewed on an e-reader instead of notes)

Fiction: a useful drug
———————–

Good morning, good people.

I'm glad that despite the heat there are people here to listen to me, and I won't blame anyone who falls asleep.

The title was deliberately chosen so that some people would stay in the room, and I was also curious how many would flee once they saw what it was about. Maybe it was a mistake, since the more people there are in the room, the hotter it gets…

The subject is fiction.
(huh, nobody fled. Maybe because everyone is asleep?)

So why should anyone care about literature, and why should I be the one talking to you about it? Literature is badly underrated in its influence, in its usefulness, and in the incredible experience it gives us. Many people consider books something they were made to study in school, something that wasted their time, something pointless. That's rather unfortunate, because the people who plan how big prisons will need to be in the future have found a serious correlation, within a generation, between the number of people who didn't read as children and the number from the same generation who became inmates.
(this holds for American prisons; about ours nothing can be said)
As I'll describe below, there is a lot that's useful in reading, as well as reasons why it's so enjoyable.

Why am I the one telling you this? Because there aren't many other people willing to do it — there are various initiatives aimed at society at large, such as "Az cheta" ("I Read") and others I can't remember, but they're very broad and often hang around media that nobody trusts. I'll try to give a talk aimed at least partly at IT people, in the hope of having some effect (and not just keeping us out of prison 🙂 ). Also, literature has helped me a lot and I want to share that with you, because I don't think I should be one of the few who benefit 🙂

There are things I won't talk about. I'll skip all the non-fiction (which at the moment may be half of what I read) — technical, historical and all kinds of scientific literature — there are masterpieces and useful things there too, but they'd call for a whole series of talks (for example "what historical literature can teach us", "well-written technical books" or "a basic course in politics and why we need it"). I'd like to talk about comics too, but they deserve a talk of their own, and I'll also skip gamebooks, since I've had too little contact with them.

And let me start with why books, and fiction in particular, are useful to us.

The most powerful anti-depressant I have ever found (I've been prescribed a few things, without getting to the horse-grade ones) is actually to end up in a situation that makes you see how un-scary your current predicament is, to see it from the outside and put what is happening to you in perspective. My example was being placed, before they drilled the hole in my head, in the neuro-oncology ward (in translation: brain cancer). When you see people around you with a much more serious problem, your view of things starts to change. I don't recommend it to anyone (although there was an idea of sending depressed techies to help cancer patients), but it works.

A milder version of this is simply to find yourself in another world, one that helps you change your perspective. There are different ways:

You can watch movies. But they're short and don't have enough story; movies over two hours don't work, and no real story fits into two hours.
Also, there isn't enough material — only the porn industry produces enough (I think it was 4 hours for every hour of real time), but the stories there limp along and it gets boring too quickly.
Movies also make noise — somebody is always talking and you have to listen, and even if you're a fan of silent film, there's still a soundtrack, which is part of the story. That makes the whole thing not-so-social, since three people can't watch three movies in one room; it just doesn't work. Some people I gave this talk to beforehand said that's my problem alone, but I can't skip it.
And above all, these days all movies are terribly clichéd.

Another option is TV series (I actually do watch series). They have the advantage of being longer, so they can fit a book properly, build the story and the characters properly, and even convey a book more or less whole — the example being how Game of Thrones works (there was no way to do it as a movie), and in my opinion The Lord of the Rings would have been much better as a series than as 10 hours of films.
The problem, again, is that there isn't enough material — apart from there being no porn series, production is generally slow and heavy. Everything cinematographic demands far greater resources.
Also, neither leaves time for thinking. Both movies and series move at their own pace, which doesn't let you stop and think, to chew over the information. It's also hard to skip ahead — for example, I recently watched a series whose first season was bad, whose second got decent, and whose third was incredible, but there was no way to rush through the first, because it wasn't just filler but the setup of the story, which matters. Maybe there's simply no solution to this problem…
They make noise too, and again they're terribly clichéd. Every time you watch something, it's more or less clear what will happen, simply because the paths are well-trodden and nobody dares step off them (with very few exceptions, such as Black Mirror).

We also have a fallback option — alcohol and/or hallucinogens, which can be super effective (to the audience: yes, cheers, and I hope that's alcohol, not a hallucinogen, since who knows what you'd be seeing), but they're bad for the liver (and not only the liver), and on the whole too dangerous.
(we should probably do a talk on the various substances and their effects, but it won't be one of mine. If you see a talk on the tour called "gasoline", it'll be about something like that, because the guy actually drank gasoline, and I hope to talk him into it)

And books…
Books exist in enormous quantities. To read 1 GB of plain text takes you about 20 years (the math: one page is about 4 KB, which makes ~250,000 pages; even at my reading pace that's 5 years — see the quick sanity check after this list). Project Gutenberg alone (the project collecting books with expired copyright) holds 21 GB of books, not to mention the other archives, such as libgen (which has around 10 million books). By one Google estimate there were 129 million books as of 2010, which no single person can ever read; even one percent is an impossible number. With a lot of determination, I have a chance of reading at most 10,000 books in my entire life, if I try very hard.
Books can be read anywhere without bothering anyone (except in the cases where you read while people are trying to get your attention).
The diversity of ideas in books is the greatest in existence — all ideas start from some book or exist in one, and there's something like the Internet's rule 34 (which says that if something exists, there's porn of it): if something exists, there's a book on the subject. With 129 million books, that's not hard.
There are also formats that work without electricity — paper books work without power, just with light (it doesn't even have to be sunlight), and they don't even need the Internet. I assume you've still seen one here and there, if only propping up your monitor 🙂
And books are the closest thing we have to telepathy. I'll show you that a bit further on 🙂
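(A quick sanity check of the arithmetic above — the 4 KB/page figure is from the talk; the reading pace is an assumption:)

    # Sanity check of the talk's numbers: pages in 1 GB of plain text and
    # years to read them. 4 KB/page is the talk's figure; the pages-per-day
    # pace is an assumption.
    bytes_per_page = 4 * 1024
    pages = 1024**3 / bytes_per_page        # ~262,000 pages in 1 GB
    pages_per_day = 150                     # a fast reader
    years = pages / pages_per_day / 365
    print(f"{pages:,.0f} pages, ~{years:.1f} years")   # ~4.8 years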

Books give us new information and perspectives. None of us knows everything.
Authors constantly ask themselves questions like "what if this keeps going" (the refugee crisis) or "what would happen if…" (people develop supernatural abilities); they try to find answers, and in doing so make you ask the same questions and arrive at answers of your own, or dispute theirs. In general, they make you think, and I've heard (well, or read somewhere) that thinking is useful. I don't know that anyone has proven it, so I'll leave it as an axiom 🙂

Reading develops the imagination that is so important to us. Here's a quote:
'Oh, you know for years we've been making wonderful things. We make your iPods. We make phones. We make them better than anybody else, but we don't come up with any of these ideas. You bring us things and then we make them. So we went on a tour of America talking to people at Microsoft, at Google, at Apple, and we asked them a lot of questions about themselves, just the people working there. And we discovered that they all read science fiction when they were teenagers. So we think maybe it's a good thing.'

It comes from the first state-organized science fiction convention (in China). They gathered various writers and related people there, and Neil Gaiman, who attended, asked a local party official why they were doing such a thing — he'd never seen anything like it anywhere else. The answer is the quote above: however good the Chinese are at copying or manufacturing technology, nothing they make was invented there. So some people went around interviewing all kinds of workers in Silicon Valley, and the common thread they found was reading science fiction at an early age. So maybe it's a good thing?
The assumption is that it's this that develops the (important to us) imagination.

Books can also be the source of many new ideas and directions (for which I have three full slides):

Science fiction with vampires in space made me sit down and read a lot of things on human consciousness, thinking, free will and related topics. It was rather strange, because I came across it via a thread in a blog asking about a truly important book from the last 10 years, where many people said: this book is great, don't be scared off by the vampires. I'll mention it again further down.

"Night Watch" and several other Terry Pratchett books made me think about the meaning and workings of the police (and look up a few books on the subject). In our country we see the police as something that gets in the way, and their meaningful role somehow gets lost. Pratchett in general is thought-provoking on a number of such topics.

"Zen and the Art of Motorcycle Maintenance" (strange title, quite an old book) is the book that made me stop writing grades; there's a chapter inside that explains very well why.

"Fight Club" and the subject of consumerism — the book is decidedly better than the film (although the film tries very hard).

"Dead Air" by Iain Banks, on how one deals with idiots, is also quite amusing.

And yes, my mental picture of the army and the barracks from the inside is still based on "The Good Soldier Švejk" by Jaroslav Hašek. I was spared conscription, but I don't think it would have changed my picture much…

"Little Brother" and "Homeland" by Cory Doctorow gave interesting ideas for the next protests (which is their purpose; they're not so much books as manuals).

"Foucault's Pendulum" and "The Prague Cemetery" by Umberto Eco are perhaps the books that best bury conspiracy theories.

"The Satanic Verses" by Salman Rushdie made me dig deeper into Islam and try to understand what exactly those people's problem with the book is.

The Strugatsky brothers, and everything they have written, are by themselves plenty of food for thought on all kinds of social topics.

Crichton and Hailey have made me take an interest in the infrastructure of everything — factories, airports, hospitals…

And as an example that isn't about me — Arthur C. Clarke invented the communications satellite.

And all sorts of other things I can't be bothered to copy from the slide 🙂

Of course, we can also read simply for pleasure. I remember the first book I truly loved ("The Stolen Grandmother" by Atanas Mochurov — I didn't remember the author, but I tracked it down by the title): I read it 39 times, simply because I liked it. I have no idea why I counted, but I think at some point I knew it almost by heart (I must have been 6-7-8 years old).
I remember one occasion when Terry Pratchett's "Night Watch" had just come out: my flatmate and I had bought it, and in the evening when we got home we sat down to read, each in his own room. At some point I finished the book, looked outside and, without realizing it, said out loud: "Huh, the sun's come up!" From the other room I heard: "Oh, right!"…

Even the language itself can deliver enormous pleasure. Here are a few quotes:
(I owe Andrey a beer, because he remembered where several of them come from)

"And indeed, Bay Ganyo made his soup so fiery that an unaccustomed man would have been poisoned." (from Aleko Konstantinov's "Bay Ganyo", translated from the Bulgarian)
I always remember this quote when eating tripe soup.

"The house had four windows set in the front of a size and proportion which more or less exactly failed to please the eye."
("The Hitchhiker's Guide to the Galaxy")

"A long-dead prostitute was chasing a very much alive man through the crowd, demanding long-owed money for a past service."
(Steven Erikson, one of the books about Korbal Broach and Bauchelain)

"What is a flower? A giant sexual organ in its Sunday best. The truth has been known for a long time, yet, over-aged adolescents that we are, we persist in speaking sentimental drives about the delicacy of flowers. We construct idiotic phrases like "So-and-so is in the flower of his youth", which is as absurd as saying "in the vagina of his youth""
(Amélie Nothomb, "Loving Sabotage")

"Economics is the pseudoscience that examines the illusory relations of subjects of the first and second kind in connection with the hallucinatory process of their imaginary enrichment."
(Victor Pelevin, "Generation П"; a favorite quote of mine to recite to economists. Rather close to reality, and somewhat worrying in how a fiction drives a large part of society)

"Political language is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind."
(Orwell, of course)

"Some humans would do anything to see if it was possible to do it. If you put a large switch in some cave somewhere, with a sign on it saying 'End-of-the-World Switch. PLEASE DO NOT TOUCH', the paint wouldn't even have time to dry."
(Pratchett)

Honestly, I could build a presentation out of nothing but quotes and it would still sound good, but that would be a bit dishonest 🙂

Of course, I've also heard plenty of complaints. We can start with "but isn't this escapism", and I'll answer again with a quote (which is itself a quote of someone else): "C. S. Lewis wisely pointed out that the only people who inveigh against escape tend to be jailers." (Neil Gaiman). Interestingly, when you read (non-fiction) books about the rules of various totalitarian camps and prisons (the Soviet ones are a very good example), all such things as private life and escaping from reality are banned to the greatest extent possible (there's a particularly good example in the Romanian "re-education" camps), and they managed to mentally break a serious share of the people. It makes you wonder how Orwell described them so well without having been there…

For some people it's slow. I don't know why they're in a hurry, and one learns to read by reading more. It's even an advantage, because it lets us read at our own pace and think things over — nothing rushes us.

I've heard people say books bore them. That means they're reading the wrong books — with so many in existence, it's hardly possible that there's nothing interesting. There are all sorts of strange things; a few years ago someone even wrote "Pride and Prejudice and Zombies"…

And why that subtitle — where's the drug? Try it 🙂 It becomes clear very quickly: the need to find out what's happening, to revel in the language, in the ideas… As far as I know, people are only cured of the desire to read by dying (unlike with opiates, where at least there are programs and institutions to help). There's always the urge to find something else like it, just as interesting and fun (and why not more so?). In the goodreads statistics (a not-that-big social network) for last year's reading challenge there were 1,722,620 participants and 29,258,154 books read. The funny thing is that I, who read about 100 books a year, don't even make the top 20-30%, because there are people who read 900…

"What should we read" is a very common question. The correct answer (and the easiest one) is "whatever you like". Nobody else can know what you'll enjoy, so whatever you pick up and find fun — read it 🙂 I've given a few ideas below, but my taste may differ seriously from yours. That's not a problem because, as I said above — 129 million…
(I start each with a quote from the book, prefixed with >)

> "The truth is that ever since I was assigned to the toilets on the forty-fourth floor, going to the bathroom had become a political act."
"Fear and Trembling" by Amélie Nothomb is short, funny, slightly odd (like most of her books) and reads quickly and with pleasure. I'm starting with it before moving on to the bricks.

> – How long do you want these messages to remain secret?[…]
> – I want them to remain secret for as long as men are capable of evil.
I think I've mentioned Neal Stephenson's "Cryptonomicon" before — in my book talks at initLab and in a ton of other posts. Cryptography, currencies, the Second World War, the modern cypherpunks, Alan Turing, and simply a joy for the soul. For everyone who liked it, Stephenson wrote the Baroque Cycle, which is three more bricks like it, and at its end there's an argument between Newton and Leibniz on the subject of free will. It can get you interested in a great many things.

> ‘And what would humans be without love?’
> RARE, said Death.
I deliberated and couldn't pick a single Terry Pratchett book, so I recommend all of him. Some of the translations had problems; in the original he's wonderful — give him a try.

> “Bill could smell Its breath and it was a smell like exploded animals lying on the highway at midnight.”
Stephen King is the man with the telepathy. His books are full of sentences like this one, which at some point you start to visualize. There are people who claim he isn't a serious writer because he writes some sort of popular horror, but the truth is that he's one of the truly great masters of language and of reaching somewhere inside you. Perhaps even literally 🙂 From him I'd recommend, as a start, "The Dead Zone", "It", "The Green Mile", or whatever appeals to you. There are many films based on his work, and the successful ones are successful precisely because they managed to carry the telepathy to the screen well enough.

> “This is how you communicate with a fellow intelligence: You hurt it, you keep on hurting it, until you can distinguish the speech from the screams.”
"Blindsight" by Peter Watts ("Слепоглед" in Bulgarian; I haven't checked the translation) is the book with the vampires in space and one of the most rigorous pieces of hard science fiction I've come across. The author is a marine biologist who worked in his field for a decade or so, published, and so on, and at some point decided to switch to fiction. This is the first book of his second series, and it made me think a lot; in it I also found many of the things on artificial intelligence that I'd previously seen in Roger Penrose's "The Emperor's New Mind" (Penrose is a mathematician; the book is popular science on how far artificial intelligence can go, with a decent amount of mathematics).
As for the vampires, the explanation is that they're an intra-species predator that feeds on humans and has very good vision, and that they went extinct when people started building houses: if a vampire sees a right angle, too many of its visual receptors fire at once, it gets overloaded, and something close to a seizure follows. For the rest, I recommend picking up the book 🙂

> “Governments always commit their entire populations when the demands grow heavy enough. By their passive acceptance, these populations become accessories to whatever is done in their name.”
"The Dosadi Experiment" by Frank Herbert. Most people know "Dune", but this one is also worth reading, at least as an idea of the direction a society under pressure can take (and it may answer why people from heavily oppressed societies manage to "hack" those from freer ones — in plainer language, why crime in Europe after 1989 is mostly of Eastern European origin).

> “We are a wretched, petty species, and we have been given power to destroy ourselves with.”
"Worm" by Wildbow is an interesting case. If it weren't so good, and so hard to put down, I don't know whether I wouldn't have shot the person who recommended it to me — the book is about 5,000 pages long and gives you no chance to put it down. And while with other books that means roughly one sleepless night, here it meant two weeks of non-stop reading into the early hours. It can be found online at that URL, and I recommend it to everyone, and not only because I want other people to go through the same thing 🙂 You can just start it; it doesn't take much…

There were a few questions after the talk:
A working address for libgen — it isn't hard to find; there's one ending in .rus.ec…
Whether I use speed-reading techniques — I think only one: I don't subvocalize (i.e., I don't pronounce the words in my head), but I don't remember how I learned that; it happened during my vacation between first and second grade — I just started reading faster. There are methods and plenty of information on the subject; it may be worth a try. Reading speed also depends a lot on the amount of information: there are things (like fantasy novels) that can be read much faster, simply because there just isn't that much information inside.
If I were forced to choose one book to reread for the rest of my life, which would it be — no idea. I thought and thought, but it would have to be something much thicker than Worm…
Whether it has crossed my mind to write a book — I don't write particularly well, and it takes a lot of time and calm, which I don't have. From what I know, you need to write about a million words of text before you get used to writing well, and I don't know whether I've gotten there. Conversely, reading a lot is a requirement for writing well, but it isn't sufficient.

In Case You Missed These: AWS Security Blog Posts from January and February

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1J9OK26Z1WA3L/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-January-and-February

In case you missed any of the AWS Security Blog posts from January and February, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from using AWS WAF to automating HIPAA compliance.

February

February 29, AWS Compliance Announcement: Announcing Industry Best Practices for Securing AWS Resources
We are happy to announce that the Center for Internet Security (CIS) has published the CIS AWS Foundations Benchmark, a set of security configuration best practices for AWS. These industry-accepted best practices go beyond the high-level security guidance already available, providing AWS users with clear, step-by-step implementation and assessment procedures. This is the first time CIS has issued a set of security best practices specific to an individual cloud service provider.

February 24, AWS WAF How-To: How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
In this blog post, I show you how to create an AWS Lambda function that automatically parses Amazon CloudFront access logs as they are delivered to Amazon S3, counts the number of bad requests from unique sources (IP addresses), and updates AWS WAF to block further requests from those IP addresses. I also provide a CloudFormation template that creates the web access control list (ACL), rule sets, Lambda function, and logging S3 bucket so that you can try this yourself.
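To give a sense of what that automation looks like (a minimal sketch, not the post's actual code: the IPSet ID is a placeholder, and the log-parsing and counting step is elided), the Lambda function's core job is pushing offending source IPs into a classic AWS WAF IPSet that a blocking rule references:

    import boto3

    # Sketch: insert offending IPs into a (classic) AWS WAF IPSet so an
    # existing blocking rule picks them up. IP_SET_ID is a placeholder;
    # parsing the CloudFront logs to find bad IPs is elided here.
    waf = boto3.client("waf")
    IP_SET_ID = "example-ipset-id"  # placeholder

    def block_ips(bad_ips):
        token = waf.get_change_token()["ChangeToken"]
        updates = [
            {"Action": "INSERT",
             "IPSetDescriptor": {"Type": "IPV4", "Value": f"{ip}/32"}}
            for ip in bad_ips
        ]
        waf.update_ip_set(IPSetId=IP_SET_ID, ChangeToken=token,
                          Updates=updates)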

February 23, Automating HIPAA Compliance How-To: How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series
In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture by highlighting ways you can use AWS Config to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations.  

February 22, March Webinar Announcement: Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection
AWS WAF Software Development Manager Nathan Dye will share  Lambda scripts you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You can also learn how Brazilian retailer, Magazine Luiza, leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

February 22, Automating HIPAA Compliance How-To: How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series
In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, a JSON representation of your architecture that allows security administrators to provision AWS resources according to the compliance standards they define. In today’s post, I will share examples that provide a Top 10 List of CloudFormation code snippets that you can consider when trying to map the requirements of the AWS Business Associates Agreement (BAA) to CloudFormation templates.

February 17, AWS Partner Network: New AWS Partner Network Blog Post: Securely Accessing Customers’ AWS Accounts with Cross-Account IAM Roles
Building off AWS Identity and Access Management (IAM) best practices, the AWS Partner Network (APN) Blog this week published a blog post called, Securely Accessing Customer AWS Accounts with Cross-Account IAM Roles. Written by AWS Partner Solutions Architect David Rocamora, this post addresses how best practices can be applied when working with APN Partners, and describes the potential drawbacks with APN Partners having access to their customers’ AWS resources.
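The pattern in question boils down to sts:AssumeRole with an external ID, so the partner never holds long-lived customer keys (a generic sketch, not the post's code; the role ARN and external ID are placeholders):

    import boto3

    # Sketch of the cross-account pattern: assume a role in the customer's
    # account; the ExternalId guards against the confused-deputy problem.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/PartnerAccess",  # placeholder
        RoleSessionName="partner-session",
        ExternalId="example-external-id",  # placeholder
    )["Credentials"]

    # Use the temporary credentials for subsequent calls.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )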

February 16, AWS Summit in Chicago: Register for the Free AWS Summit – Chicago, April 2016
Registration for the 2016 AWS Summit – Chicago is now open. This free event will educate you about the AWS platform and offer information about architecture best practices and new cloud services. Register today to reserve your seat to hear keynote speaker Matt Wood, AWS General Manager of Product Strategy, highlight the latest AWS services and customer stories.

February 16, Automating HIPAA Compliance How-To: How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series
In my previous blog post, I discussed the idea of using the cloud to protect the cloud and improving healthcare IT by applying DevSecOps methods. In Part 2 today, I will show an architecture composed of AWS services that gives healthcare security administrators necessary controls, allows healthcare developers to interact with the system using familiar tools (such as Git), and leverages AWS managed services without the need for advanced coding or complex configuration.

February 15, Automating HIPAA Compliance How-To: How to Automate HIPAA Compliance (Part 1): Use the Cloud to Protect the Cloud
In a series of blog posts on the AWS Security Blog this month, I will provide prescriptive advice and code samples to developers, system administrators, and security specialists who wish to improve their healthcare IT by applying the DevSecOps methods that the cloud enables. I will also demonstrate AWS services that can help customers meet their AWS Business Associate Agreement obligations in an automated fashion. Consider this series a getting started guide for DevSecOps strategies you can implement as you migrate your own compliance frameworks and controls to the cloud. 

February 9, AWS WAF How-To: How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
One security challenge you may have faced is how to prevent your web servers from being flooded by unwanted requests, or by scanning tools such as bots and crawlers that don't respect the crawl-delay directive value. The main objective of this kind of distributed denial of service (DDoS) attack, commonly called an HTTP flood, is to overburden system resources and make them unavailable to your real users or customers. In this blog post, I will show you how to provision a solution that automatically detects unwanted traffic based on request rate, and then updates configurations of AWS WAF (a web application firewall that protects any application deployed on the Amazon CloudFront content delivery service) to block subsequent requests from those users.
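The detection half of that is essentially counting requests per source IP within a time window and flagging the ones over a limit (a toy sketch; the threshold and data shape are illustrative, not from the post):

    from collections import Counter

    # Toy rate-based detection: flag IPs whose request count within one
    # window exceeds a threshold. Both numbers are illustrative only.
    THRESHOLD = 400  # max requests per window before blacklisting

    def over_limit(log_entries):
        """log_entries: iterable of (timestamp, source_ip) in one window."""
        counts = Counter(ip for _ts, ip in log_entries)
        return [ip for ip, n in counts.items() if n > THRESHOLD]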

February 3, AWS Compliance Pilot Program: AWS FedRAMP-Trusted Internet Connection (TIC) Overlay Pilot Program
I’m pleased to announce a newly created resource for usage of the Federal Cloud—after successfully completing the testing phase of the FedRAMP-Trusted Internet Connection (TIC) Overlay pilot program, we’ve developed Guidance for TIC Readiness on AWS. This new way of architecting cloud solutions that address TIC capabilities (in a FedRAMP moderate baseline) comes as the result of our relationships with the FedRAMP Program Management Office (PMO), Department of Homeland Security (DHS) TIC PMO, GSA 18F, and FedRAMP third-party assessment organization (3PAO), Veris Group. Ultimately, this approach will provide US Government agencies and contractors with information assisting in the development of “TIC Ready” architectures on AWS.

February 2, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory
In my previous post, I showed how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. Today, I will show how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities.

February 1, DNS Resolution How-To: How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
As you establish private connectivity between your on-premises networks and your AWS Virtual Private Cloud (VPC) environments, the need for Domain Name System (DNS) resolution across these environments grows in importance. One common approach used to address this need is to run DNS servers on Amazon EC2 across multiple Availability Zones (AZs) and integrate them with private on-premises DNS domains. In many cases, though, a managed private DNS service (accessible outside of a VPC) with less administrative overhead is advantageous. In this blog post, I will show you two approaches that use Amazon Route 53 and AWS Directory Service to provide DNS resolution between on-premises networks and AWS VPC environments.
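Both approaches center on a Route 53 private hosted zone associated with your VPC. As a hedged sketch of that first step with boto3, the domain name and VPC ID below are hypothetical placeholders.

```python
import time
import boto3

route53 = boto3.client("route53")

# Create a private hosted zone associated with a VPC. The domain name
# and VPC ID are hypothetical placeholders.
response = route53.create_hosted_zone(
    Name="corp.example.com",
    CallerReference=str(time.time()),  # must be unique per request
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    HostedZoneConfig={
        "Comment": "Private zone for on-premises DNS integration",
        "PrivateZone": True,
    },
)
print(response["HostedZone"]["Id"])
```

The Directory Service piece then forwards on-premises queries for that zone into the VPC, which is where the two approaches in the post differ.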

January

January 26, DNS Filtering How-To: How to Add DNS Filtering to Your NAT Instance with Squid
In this post, I discuss and give an example of how Squid, a leading open-source proxy, can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet. First, I explain briefly how to create the infrastructure resources required for this approach. Then, I provide step-by-step instructions to install, configure, and test Squid as a transparent proxy.

January 25, AWS KMS How-To: How to Help Protect Sensitive Data with AWS KMS
One question AWS KMS customers frequently ask is how to encrypt Primary Account Number (PAN) data within AWS, because PCI DSS sections 3.5 and 3.6 require the encryption of credit card data at rest and impose stringent requirements on the management of encryption keys. One KMS encryption option is to encrypt your PAN data using customer data keys (CDKs) that are exportable out of KMS. Alternatively, you can use KMS to encrypt PAN data directly by using a customer master key (CMK). In this blog post, I will show you how to help protect sensitive PAN data by using KMS CMKs.
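Direct encryption under a CMK is a one-call operation, sketched below with boto3. The key alias is a hypothetical placeholder, and note that KMS direct encryption is limited to small payloads (up to 4 KB of plaintext), which suits PAN data well.

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small piece of sensitive data directly under a CMK.
# The alias is a hypothetical placeholder for a key you have created.
ciphertext = kms.encrypt(
    KeyId="alias/pan-data-key",
    Plaintext=b"4111111111111111",
)["CiphertextBlob"]

# Decrypt later; KMS identifies the CMK from the ciphertext itself,
# so no key ID is needed here.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```

Because the CMK never leaves KMS, key management and usage are fully auditable through AWS CloudTrail.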

January 21, AWS Certificate Manager Announcement: Now Available: AWS Certificate Manager
Launched today, AWS Certificate Manager (ACM) is designed to simplify and automate many of the tasks traditionally associated with provisioning and managing SSL/TLS certificates. ACM takes care of the complexity surrounding the provisioning, deployment, and renewal of digital certificates—all at no extra cost!
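Requesting a certificate is a single API call, as in this hedged boto3 sketch. The domain names are placeholders, and ACM validates that you control the domain before issuing the certificate.

```python
import boto3

acm = boto3.client("acm")

# Request a certificate covering a domain and its www variant. The
# domain names are hypothetical placeholders; ACM validates ownership
# before issuing.
response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],
)
print(response["CertificateArn"])
```

Once issued, the certificate can be attached to supported services such as Elastic Load Balancing and Amazon CloudFront, and ACM handles renewal.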

January 19, AWS Compliance Announcement: Introducing GxP Compliance on AWS
We’re happy to announce that customers can now bring the next generation of medical, health, and wellness solutions to their GxP systems by using AWS for their processing and storage needs. Compliance with healthcare and life sciences requirements is a key priority for us, and new compliance enablers are now available for customers with GxP requirements.

January 19, AWS Config How-To: How to Record and Govern Your IAM Resource Configurations Using AWS Config
Using Config rules on IAM resources, you can codify your best practices for using IAM and regularly assess compliance with those rules. In this blog post, I will show how to start recording the configuration of IAM resources, and author an example rule that checks whether all IAM users in the account are using a sample managed policy, MyIAMUserPolicy. I will also describe examples of other rules customers have authored to assess their organizations’ compliance with their own standards.
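A custom Config rule is backed by a Lambda function that evaluates each recorded configuration item and reports a verdict back to AWS Config. Below is a minimal, hedged sketch of such a handler; the policy-name check mirrors the MyIAMUserPolicy example, and the logic is illustrative rather than the exact code from the post.

```python
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    """Custom AWS Config rule: flag IAM users missing MyIAMUserPolicy.

    Illustrative sketch only; production rules must also handle
    oversized configuration items and deleted resources.
    """
    item = json.loads(event["invokingEvent"])["configurationItem"]
    attached = item["configuration"].get("attachedManagedPolicies", [])
    compliant = any(
        policy.get("policyName") == "MyIAMUserPolicy" for policy in attached
    )

    # Report the result back to AWS Config.
    config.put_evaluations(
        Evaluations=[
            {
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
                "OrderingTimestamp": item["configurationItemCaptureTime"],
            }
        ],
        ResultToken=event["resultToken"],
    )
```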

January 15, AWS Summits: Mark Your Calendar for AWS Summits in 2016
Are you ready for AWS Summits in 2016? This year we have created even more information-packed Summits that will take place across the globe, each designed to accelerate your cloud journey and help you get the most out of AWS services.

January 13, AWS IAM Announcement: The IAM Console Now Helps Prevent You from Accidentally Deleting In-Use Resources
Starting today, the IAM console shows service last accessed data as part of the process of deleting an IAM user or role. Now you have additional data that shows you when a resource was last active so that you can make a more informed decision about whether or not to delete it.

January 6, IAM Best Practices: Adhere to IAM Best Practices in 2016
As another new year begins, we encourage you to review our recommended IAM best practices. Following these best practices can help you maintain the security of your AWS resources. You can learn more by watching the IAM Best Practices to Live By presentation that Anders Samuelsson gave at AWS re:Invent 2015, or by following the links in the post to IAM documentation, blog posts, and videos.

If you have comments about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig
