Tag Archives: identity theft

CrimeStoppers Campaign Targets Pirate Set-Top Boxes & Their Users

Post Syndicated from Andy original https://torrentfreak.com/crimestoppers-campaign-targets-pirate-set-top-boxes-their-users-171209/

While many people might believe CrimeStoppers to be an official extension of the police in the UK, the truth is a little more subtle.

CrimeStoppers is a charity that operates a service through which members of the public can report crime anonymously, either using a dedicated phone line or via a website. Callers are not required to give their name, meaning that for those concerned about reprisals or becoming involved in a case for other sensitive reasons, it’s the perfect buffer between them and the authorities.

The people at CrimeStoppers deal with all kinds of crime but perhaps a little surprisingly, they’ve just got involved in the set-top box controversy in the UK.

“Advances in technology have allowed us to enjoy on-screen entertainment in more ways than ever before, with ever increasing amounts of exciting and original content,” the CrimeStoppers campaign begins.

“However, some people are avoiding paying for this content by using modified streaming hardware devices, like a set-top box or stick, in conjunction with software such as illegal apps or add-ons, or illegal mobile apps which allow them to watch new movie releases, TV that hasn’t yet aired, and subscription sports channels for free.”

The campaign has been launched in partnership with the Intellectual Property Office and unnamed “industry partners”. Who these companies are isn’t revealed but given the standard messages being portrayed by the likes of ACE, Premier League and Federation Against Copyright Theft lately, it wouldn’t be a surprise if some or all of them were involved.

Those messages are revealed in a series of four video ads, each taking a different approach towards discouraging the public from using devices loaded with pirate software.

The first video clearly targets the consumer, dispelling the myth that watching pirate video isn’t against the law. It is, that’s not in any doubt, but given the video’s relentless tone, one could be forgiven for thinking it’s an extremely serious crime rather than something which is likely to be a civil matter, if anything at all.

It also warns people who are configuring and selling pirate devices that they are breaking the law. Again, this is absolutely true, but that activity is clearly several orders of magnitude more serious than simply viewing. The video blurs the boundaries for what appears to be dramatic effect, however.

Selling and watching is illegal

The second video is all about demonizing the people and groups who may offer set-top boxes to the public.

Instead of portraying the hundreds of “cottage industry” suppliers behind many set-top box sales in the UK, the CrimeStoppers video paints a picture of dark organized crime being the main driver. By buying from these people, the charity warns, criminals are being welcomed in.

“It is illegal. You could also be helping to fund organized crime and bringing it into your community,” the video warns.

Are you funding organized crime?

The third video takes another approach, warning that set-top boxes have few if any parental controls. This could lead to children being exposed to inappropriate content, the charity warns.

“What are your children watching. Does it worry you?” the video asks.

Of course, the same can be said about the Internet, period. Web browsers don’t filter what content children have access to unless parents take pro-active steps to configure special services or software for the purpose.

There’s always the option to supervise children, of course, but Netflix is probably a safer option for parents who prefer a hands-off approach. It’s also considerably more expensive, a fact that won’t have escaped users of these devices.

Got kids? Take care….

Finally, video four picks up a theme that’s becoming increasingly common in anti-piracy campaigns – malware and identity theft.

“Why risk having your identity stolen or your bank account or home network hacked. If you access entertainment or sports using dodgy streaming devices or apps, or illegal addons for Kodi, you are increasing the risks,” the ad warns.

Danger….Danger….

Perhaps of most interest is that this entire campaign, which almost certainly has Big Media behind the scenes in advisory and financial capacities, barely mentions the entertainment industries at all.

Indeed, the success of the whole campaign hinges on people worrying about the supposed ill effects of illicit streaming on them personally and then feeling persuaded to inform on suppliers and others involved in the chain.

“Know of someone supplying or promoting these dodgy devices or software? It is illegal. Call us now and help stop crime in your community,” the videos warn.

That CrimeStoppers has taken on this campaign at all is a bit of a head-scratcher, given the bigger crime picture. Struggling with severe budget cuts, police in the UK are already de-prioritizing a number of crimes, leading to something called “screening out”, a process through which victims are given a crime number but no investigation is carried out.

This means that in 2016, 45% of all reported crimes in Greater Manchester weren’t investigated and a staggering 57% of all recorded domestic burglaries weren’t followed up by the police. But it gets worse.

“More than 62pc of criminal damage and arson offenses were not investigated, along with one in three reported shoplifting incidents,” MEN reports.

Given this backdrop, how will police suddenly find the resources to follow up lots of leads from the public and then subsequently prosecute people who sell pirate boxes? Even if they do, will that be at the expense of yet more “screening out” of other public-focused offenses?

No one is saying that selling pirate devices isn’t a crime or at least worthy of being followed up, but is this niche likely to be important to the public when they’re being told that nothing will be done when their homes are emptied by intruders? “NO” says a comment on one of the CrimeStoppers videos on YouTube.

“This crime affects multi-million dollar corporations, I’d rather see tax payers money invested on videos raising awareness of crimes committed against the people rather than the 0.001%,” it concludes.


Me on the Equifax Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/me_on_the_equif.html

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the

Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015). My popular newsletter CryptoGram and my blog Schneier on Security are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government — where I teach Internet security policy — and a Fellow at the Berkman-Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies, and other businesses vulnerable to fraud. As a result, all 145.5 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their names and credit ratings.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for their websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and that they were at increased risk of identity theft. Equifax opened a website to help affected customers, but the poor security around that — the site was at a domain separate from the Equifax domain — invited fraudulent imitators and even more damage to victims. At one point, official Equifax communications even directed people to one of those fraudulent sites.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us — almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, whom it sells their personal data to, or why. If people know about the company at all, it’s as a credit bureau, not as a data broker. Its website lists 57 different offerings for business: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt-out” don’t work with data brokers. It’s a confusing process, and doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them — which can be months after the breaches occur — and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that have our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax, or does the blame belong to another of the thousands of companies that have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

Markets work because buyers choose between sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with these companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5% or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals receive from these breaches.

Specifically, Congress should move forward with legislative proposals that establish a nationwide “credit freeze” — which is better described as changing the default for disclosure from opt-out to opt-in — and free lifetime credit monitoring services. By this I do not mean giving customers free credit-freeze options, a proposal by Senators Warren and Schatz, but that the default should be a credit freeze.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace the Social Security Numbers. That would make the system of identification even more brittle. Better is to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the more rigorous EU privacy laws. The EU General Data Protection Regulation will soon come into force, requiring even more security and privacy controls for companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcast Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach is a huge black eye and a temporary stock dip for Equifax — this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

On the Equifax Data Breach

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/on_the_equifax_.html

Last Thursday, Equifax reported a data breach that affects 143 million US customers, about 44% of the population. It’s an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver’s license numbers — exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it’s happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can’t fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn’t notice, you’re not Equifax’s customer. You’re its product.

This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights.

Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer — everyone who wants to sell you something, even governments.

It’s not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you — almost all of them companies you’ve never heard of and have no business relationship with.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You’re secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don’t have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations — just in case I ever decide to join.

I also don’t have a Gmail account, because I don’t want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can’t even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if [email protected] is hosted at Gmail.

And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can’t opt out. Sometimes it’s a company like Equifax that doesn’t answer to us in any way. Sometimes it’s a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it’s our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that’s not a realistic option for most people living in 21st-century America.

The companies that collect and sell our data don’t need to keep it secure in order to maintain their market share. They don’t have to answer to us, their products. They know it’s more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?

Yes, it’s a huge black eye for the company — this week. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

This market failure isn’t unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.

Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.

By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax’s data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of “unfair and deceptive trade practices,” there’s nothing it can do. Perhaps there will be a class-action lawsuit, but because it’s hard to draw a line between any of the many data breaches you’re subjected to and a specific harm, courts are not likely to side with you.

If you don’t like how careless Equifax was with your data, don’t waste your breath complaining to Equifax. Complain to your government.

This essay previously appeared on CNN.com.

EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn’t know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don’t remember which radio show interviewed me. I kind of hope it didn’t air.

AWS Hot Startups – March 2017

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-march-2017/

As the madness of March wraps up, take a break from all the basketball and check out the cool startups Tina Barr brings you for this month!

-Ana


The arrival of spring brings five new startups this month:

  • Amino Apps – providing social networks for hundreds of thousands of communities.
  • Appboy – empowering brands to strengthen customer relationships.
  • Arterys – revolutionizing the medical imaging industry.
  • Protenus – protecting patient data for healthcare organizations.
  • Syapse – improving targeted cancer care with shared data from across the country.

In case you missed them, check out February’s hot startups here.

Amino Apps (New York, NY)
Amino Logo
Amino Apps was founded on the belief that interest-based communities were underdeveloped and outdated, particularly when it came to mobile. CEO Ben Anderson and CTO Yin Wang created the app to give users access to hundreds of thousands of communities, each of them a complete social network dedicated to a single topic. Some of the largest communities have over 1 million members and are built around topics like popular TV shows, video games, sports, and an endless number of hobbies and other interests. Amino hosts communities from around the world and is currently available in six languages with many more on the way.

Navigating the Amino app is easy. Simply download the app (iOS or Android), sign up with a valid email address, choose a profile picture, and start exploring. Users can search for communities and join any that fit their interests. Each community has chatrooms, multimedia content, quizzes, and a seamless commenting system. If a community doesn’t exist yet, users can create it in minutes using the Amino Creator and Manager app (ACM). The largest user-generated communities are turned into their own apps, which gives communities their own piece of real estate on members’ phones, as well as in app stores.

Amino’s vast global network of hundreds of thousands of communities is run on AWS services. Every day users generate, share, and engage with an enormous amount of content across hundreds of mobile applications. By leveraging AWS services including Amazon EC2, Amazon RDS, Amazon S3, Amazon SQS, and Amazon CloudFront, Amino can continue to provide new features to their users while scaling their service capacity to keep up with user growth.
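To make the scaling pattern a bit more concrete, here is a minimal boto3 sketch of the kind of queue-based decoupling Amazon SQS enables, with API servers enqueuing content events for a separate worker fleet to process. The queue name, event fields, and region are illustrative assumptions, not details of Amino's actual architecture.

```python
# Minimal sketch: enqueue a user-content event for asynchronous processing.
# The queue name, region, and payload shape are illustrative assumptions.
import json

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# create_queue returns the existing queue's URL if the queue already exists.
queue_url = sqs.create_queue(QueueName="content-events")["QueueUrl"]

event = {
    "community_id": "example-community",
    "user_id": "12345",
    "action": "new_post",
}

# API servers enqueue work instead of processing it inline, so traffic
# spikes are absorbed by the queue rather than by the web tier.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(event))

# A separate worker fleet would poll with receive_message() and call
# delete_message() once each event has been processed.
```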

Interested in joining Amino? Check out their jobs page here.

Appboy (New York, NY)
In 2011, Bill Magnuson, Jon Hyman, and Mark Ghermezian saw a unique opportunity to strengthen and humanize relationships between brands and their customers through technology. The trio created Appboy to empower brands to build long-term relationships with their customers and today they are the leading lifecycle engagement platform for marketing, growth, and engagement teams. The team recognized that as rapid mobile growth became undeniable, many brands were becoming frustrated with the lack of compelling and seamless cross-channel experiences offered by existing marketing clouds. Many of today’s top mobile apps and enterprise companies trust Appboy to take their marketing to the next level. Appboy manages user profiles for nearly 700 million monthly active users, and is used to power more than 10 billion personalized messages monthly across a multitude of channels and devices.

Appboy creates a holistic user profile that offers a single view of each customer. That user profile in turn powers contextual cross-channel messaging, lifecycle engagement automation, and robust campaign insights and optimization opportunities. Appboy offers solutions that allow brands to create push notifications, targeted emails, in-app and in-browser messages, news feed cards, and webhooks to enhance the user experience and increase customer engagement. The company prides itself on its interoperability, connecting to a variety of complementary marketing tools and technologies so brands can build the perfect stack to enable their strategies and experiments in real time.

AWS makes it easy for Appboy to dynamically size all of their service components and automatically scale up and down as needed. They use an array of services including Elastic Load Balancing, AWS Lambda, Amazon CloudWatch, Auto Scaling groups, and Amazon S3 to help scale capacity and better deal with unpredictable customer loads.
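As a rough sketch of what "automatically scale up and down as needed" can look like in practice, the boto3 snippet below wires a CloudWatch alarm to a simple Auto Scaling policy. The group name, thresholds, and adjustment are hypothetical placeholders, not Appboy's real configuration.

```python
# Sketch: scale an Auto Scaling group out when average CPU runs hot.
# Group name, thresholds, and alarm name are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# A simple scaling policy: add two instances when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="message-workers",
    PolicyName="scale-out-on-cpu",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# A CloudWatch alarm fires the policy when average CPU exceeds 70%
# for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="message-workers-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "message-workers"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A matching scale-in policy (negative adjustment, lower threshold) would bring capacity back down when the load subsides.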

To keep up with the latest marketing trends and tactics, visit the Appboy digital magazine, Relate. Appboy was also recently featured in the #StartupsOnAir video series where they gave insight into their AWS usage.

Arterys (San Francisco, CA)
Getting test results back from a physician can often be a time consuming and tedious process. Clinicians typically employ a variety of techniques to manually measure medical images and then make their assessments. Arterys founders Fabien Beckers, John Axerio-Cilies, Albert Hsiao, and Shreyas Vasanawala realized that much more computation and advanced analytics were needed to harness all of the valuable information in medical images, especially those generated by MRI and CT scanners. Clinicians were often skipping measurements and making assessments based mostly on qualitative data. Their solution was to start a cloud/AI software company focused on accelerating data-driven medicine with advanced software products for post-processing of medical images.

Arterys’ products provide timely, accurate, and consistent quantification of images, improve speed to results, and improve the quality of the information offered to the treating physician. This allows for much better tracking of a patient’s condition, and thus better decisions about their care. Advanced analytics, such as deep learning and distributed cloud computing, are used to process images. The first Arterys product can contour cardiac anatomy as accurately as experts, but takes only 15-20 seconds instead of the 45-60 minutes required to do it manually. Their computing cloud platform is also fully HIPAA compliant.

Arterys relies on a variety of AWS services to process their medical images. Using deep learning and other advanced analytic tools, Arterys is able to render images without latency over a web browser using AWS G2 instances. They use Amazon EC2 extensively for all of their compute needs, including inference and rendering, and Amazon S3 is used to archive images that aren’t needed immediately, as well as manage costs. Arterys also employs Amazon Route 53, AWS CloudTrail, and Amazon EC2 Container Service.
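One simplified way to picture the cost-management piece is an S3 lifecycle rule that moves older studies to colder storage once they are no longer needed for interactive rendering. In this boto3 sketch the bucket name, key prefix, and 30-day cutoff are assumptions for illustration, not Arterys' actual policy.

```python
# Sketch: transition older image objects to Glacier-class storage to manage costs.
# Bucket name, prefix, and the 30-day cutoff are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="imaging-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-studies",
                "Filter": {"Prefix": "studies/"},
                "Status": "Enabled",
                # After 30 days, move the object to cheaper archival storage;
                # it can still be restored if a clinician needs it later.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```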

Check out this quick video about the technology that Arterys is creating. They were also recently featured in the #StartupsOnAir video series and offered a quick demo of their product.

Protenus (Baltimore, MD)
Protenus Logo
Protenus founders Nick Culbertson and Robert Lord were medical students at Johns Hopkins Medical School when they saw first-hand how Electronic Health Record (EHR) systems could be used to improve patient care and share clinical data more efficiently. With increased efficiency came a huge issue – an onslaught of serious security and privacy concerns. Over the past two years, 140 million medical records have been breached, meaning that approximately 1 in 3 Americans have had their health data compromised. Health records contain a repository of sensitive information and a breach of that data can cause major havoc in a patient’s life – namely identity theft, prescription fraud, Medicare/Medicaid fraud, and improper performance of medical procedures. Using their experience and knowledge from former careers in the intelligence community and involvement in a leading hedge fund, Nick and Robert developed the prototype and algorithms that launched Protenus.

Today, Protenus offers a number of solutions that detect breaches and misuse of patient data for healthcare organizations nationwide. Using advanced analytics and AI, Protenus’ health data insights platform understands appropriate vs. inappropriate use of patient data in the EHR. It also protects privacy, aids compliance with HIPAA regulations, and ensures trust for patients and providers alike.

Protenus built and operates its SaaS offering atop Amazon EC2, where Dedicated Hosts and encrypted Amazon EBS volumes are used to ensure compliance with HIPAA regulations for the storage of Protected Health Information. They use Elastic Load Balancing and Amazon Route 53 for DNS, enabling unique, secure, client-specific access points to their Protenus instance.
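For a concrete, if simplified, view of the encrypted-storage piece, here is a boto3 sketch that creates a KMS-encrypted EBS volume and attaches it to an instance. The availability zone, volume size, device name, and instance ID are placeholders, not Protenus' actual setup.

```python
# Sketch: create a KMS-encrypted EBS volume for PHI storage and attach it.
# Availability zone, size, device name, and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="gp2",
    Encrypted=True,      # data at rest is encrypted via AWS KMS
    # Omit KmsKeyId to use the account's default EBS key,
    # or pass a customer-managed key ARN instead.
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # e.g. an instance on a Dedicated Host
    Device="/dev/sdf",
)
```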

To learn more about threats to patient data, read Hospitals’ Biggest Threat to Patient Data is Hiding in Plain Sight on the Protenus blog. Also be sure to check out their recent video in the #StartupsOnAir series for more insight into their product.

Syapse (Palo Alto, CA)
Syapse provides a comprehensive software solution that enables clinicians to treat patients with precision medicine for targeted cancer therapies — treatments that are designed and chosen using genetic or molecular profiling. Existing hospital IT doesn’t support the robust infrastructure and clinical workflows required to treat patients with precision medicine at scale, but Syapse centralizes and organizes patient data and delivers it to clinicians at the point of care. Syapse offers a variety of solutions for oncologists that allow them to access the full scope of patient data longitudinally, view recommended treatments or clinical trials for similar patients, and track outcomes over time. These solutions are helping health systems across the country to improve patient outcomes by offering the most innovative care to cancer patients.

Leading health systems such as Stanford Health Care, Providence St. Joseph Health, and Intermountain Healthcare are using Syapse to improve patient outcomes, streamline clinical workflows, and scale their precision medicine programs. A group of experts known as the Molecular Tumor Board (MTB) reviews complex cases and evaluates patient data, documents notes, and disseminates treatment recommendations to the treating physician. Syapse also provides reports that give health system staff insight into their institution’s oncology care, which can be used toward quality improvement, business goals, and understanding variables in the oncology service line.

Syapse uses Amazon Virtual Private Cloud, Amazon EC2 Dedicated Instances, and Amazon Elastic Block Store to build a high-performance, scalable, and HIPAA-compliant data platform that enables health systems to make precision medicine part of routine cancer care for patients throughout the country.

Be sure to check out the Syapse blog to learn more and also their recent video on the #StartupsOnAir video series where they discuss their product, HIPAA compliance, and more about how they are using AWS.

Thank you for checking out another month of awesome hot startups!

-Tina Barr

 

Security Risks of TSA PreCheck

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/12/security_risks_12.html

Former TSA Administrator Kip Hawley wrote an op-ed pointing out the security vulnerabilities in the TSA’s PreCheck program:

The first vulnerability in the system is its enrollment process, which seeks to verify an applicant’s identity. We know verification is a challenge: A 2011 Government Accountability Office report on TSA’s system for checking airport workers’ identities concluded that it was “not designed to provide reasonable assurance that only qualified applicants” got approved. It’s not a stretch to believe a reasonably competent terrorist could construct an identity that would pass PreCheck’s front end.

The other step in PreCheck’s “intelligence-driven, risk-based security strategy” is absurd on its face: The absence of negative information about a person doesn’t mean he or she is trustworthy. News reports are filled with stories of people who seemed to be perfectly normal right up to the moment they committed a heinous act. There is no screening algorithm and no database check that can accurately predict human behavior — especially on the scale of millions. It is axiomatic that terrorist organizations recruit operatives who have clean backgrounds and interview well.

None of this is news.

Back in 2004, I wrote:

Imagine you’re a terrorist plotter with half a dozen potential terrorists at your disposal. They all apply for a card, and three get one. Guess which are going on the mission? And they’ll buy round-trip tickets with credit cards and have a “normal” amount of luggage with them.

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That’s simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

I wrote much the same thing in 2007:

Background checks are based on the dangerous myth that we can somehow pick terrorists out of a crowd if we could identify everyone. Unfortunately, there isn’t any terrorist profile that prescreening can uncover. Timothy McVeigh could probably have gotten one of these cards. So could have Eric Rudolph, the pipe bomber at the 1996 Olympic Games in Atlanta. There isn’t even a good list of known terrorists to check people against; the government list used by the airlines has been the butt of jokes for years.

And have we forgotten how prevalent identity theft is these days? If you think having a criminal impersonating you to your bank is bad, wait until they start impersonating you to the Transportation Security Administration.

The truth is that whenever you create two paths through security — a high-security path and a low-security path — you have to assume that the bad guys will find a way to exploit the low-security path. It may be counterintuitive, but we are all safer if the people chosen for more thorough screening are truly random and not based on an error-filled database or a cursory background check.

In a companion blog post, Hawley has more details about why the program doesn’t work:

In the sense that PreCheck bars people who were identified by intelligence or law enforcement agencies as possible terrorists, then it was intelligence-driven. But using that standard for PreCheck is ridiculous since those people already get extra screening or are on the No-Fly list. The movie Patriots Day, out now, reminds us of the tragic and preventable Boston Marathon bombing. The FBI sent agents to talk to the Tsarnaev brothers and investigate them as possible terror suspects. And cleared them. Even they did not meet the “intelligence-driven” definition used in PreCheck.

The other problem with “intelligence-driven” in the PreCheck context is that intelligence actually tells us the opposite; specifically that terrorists pick clean operatives. If TSA uses current intelligence to evaluate risk, it would not be out enrolling everybody they can into pre-9/11 security for everybody not flagged by the security services.

Hawley and I may agree on the problem, but we have completely opposite solutions. The op-ed was too short to include details, but they’re in a companion blog post. Basically, he wants to screen PreCheck passengers more:

In the interests of space, I left out details of what I would suggest as short-and medium-term solutions. Here are a few ideas:

  • Immediately scrub the PreCheck enrollees for false identities. That can probably be accomplished best and most quickly by getting permission from members, and then using, commercial data. If the results show that PreCheck has already been penetrated, the program should be suspended.
  • Deploy K-9 teams at PreCheck lanes.

  • Use Behaviorally trained officers to interact with and check the credentials of PreCheck passengers.

  • Use Explosives Trace Detection cotton swabs on PreCheck passengers at a much higher rate. Same with removing shoes.

  • Turn on the body scanners and keep them fully utilized.

  • Allow liquids to stay in the carry-on since TSA scanners can detect threat liquids.

  • Work with the airlines to keep the PreCheck experience positive.

  • Work with airports to place PreCheck lanes away from regular checkpoints so as not to diminish lane capacity for non-PreCheck passengers. Rental Car check-in areas could be one alternative. Also, downtown check-in and screening (with secure transport to the airport) is a possibility.

These solutions completely ignore the data from the real-world experiment PreCheck has been. Hawley writes that PreCheck tells us that “terrorists pick clean operatives.” That’s exactly wrong. PreCheck tells us that, basically, there are no terrorists. If 1) it’s an easier way through airport security that terrorists will invariably use, and 2) there have been no instances of terrorists using it in the 10+ years it and its predecessors have been in operation, then the inescapable conclusion is that the threat is minimal. Instead of screening PreCheck passengers more, we should screen everybody else less. This is me in 2012: “I think the PreCheck level of airport screening is what everyone should get, and that the no-fly list and the photo ID check add nothing to security.”

I agree with Hawley that we need to overhaul airport security. Me in 2010: “Airport security is the last line of defense, and it’s not a very good one.” We need to recognize that the actual risk is much lower than we fear, and ratchet airport security down accordingly. And then we need to continue to invest in investigation and intelligence: security measures that work regardless of the tactic or target.

Five Mistakes Everyone Makes With Cloud Backup

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/5-common-cloud-backup-mistakes/

cloud backup error

Cloud-based storage and file sync services are ubiquitous: Everywhere we turn new services pop up (and often shut down), promising free or low-cost storage of everything and anything on our computers and mobile devices.

When you depend on the cloud it’s very easy to get lulled into a false sense of security. Don’t. Here are five common mistakes all of us make with cloud backup and sync services. I’ve added suggestions for how to avoid these pitfalls.

Assuming the Cloud Is Backing Things Up

“I have iCloud or Google Drive, so everything’s backed up.”

Some cloud backup and file sync services make it really easy to put files online, but they may not be all the files you need. Don’t just assume the cloud services you use are doing a complete backup of your device – check to see what is actually being backed up. The services you use may only back up specific folders or directories on your computer’s hard drive.

Read this for more info on how Backblaze backs up.

There’s a big difference between file backup services and sync services, by the way. Which brings me to my next point:

Confusing Sync for Backup

“I don’t need backup. I’ve got my files synced.”

Sync services enable you to keep consistent contents across multiple devices – think Dropbox or iCloud Drive, for example. Make one change to the shared content, and the same change happens across all devices, including file edits and deletions. Depending on how you have syncing and sharing set up, you can delete a file on one device and have it disappear on all the other shared devices.

I’ve also found it handy to have a backup service that enables you to restore multiple versions. In point of fact, Dropbox lets you restore previous versions. Apple’s Time Machine, built into the Mac, does this too. So does Backblaze (we keep multiple versions for up to 30 days). Not to say you shouldn’t use Dropbox – we do! We wrote about how we are complementary services.

Thinking One Backup Is Enough

“Hey, I’m backing up to the cloud. That’s better than nothing, right?”

It’s better than nothing but it’s not enough. You want a local backup too. That’s why I recommend a 3-2-1 Backup strategy. In addition to the “live” copy of the data on your hard drive, make sure you have a local backup, and use the cloud for offsite storage. Likewise, if you’re only storing data on a local backup, you’re putting all your eggs in that basket. Add offsite backup to complete your backup strategy. Conversely, if you only store your data in the cloud, you’re susceptible to those services being down as well. So having a local copy can keep you productive even if your favorite service is temporarily down.
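A real backup client handles this for you, but as a toy illustration of the 3-2-1 idea, the Python sketch below keeps a second copy of a folder on another drive and pushes a third copy to an object-storage bucket offsite. The paths and bucket name are placeholders, and boto3 here simply stands in for whatever offsite service you actually use.

```python
# Toy sketch of 3-2-1: one live copy, one local backup copy on another drive,
# and one offsite copy in object storage.  Paths and bucket are placeholders;
# a real backup client (Backblaze, Time Machine, etc.) does this for you.
import shutil
from pathlib import Path

import boto3

LIVE_DIR = Path.home() / "Documents"                    # the "live" data
LOCAL_BACKUP = Path("/Volumes/BackupDrive/Documents")   # copy 2: another drive
BUCKET = "my-offsite-backup"                            # copy 3: offsite bucket

s3 = boto3.client("s3")

for src in LIVE_DIR.rglob("*"):
    if not src.is_file():
        continue
    rel = src.relative_to(LIVE_DIR)

    # Copy 2: local backup on a different drive.
    dest = LOCAL_BACKUP / rel
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)

    # Copy 3: offsite object storage.
    s3.upload_file(str(src), BUCKET, rel.as_posix())
```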

Leaving Things Insecure

“I’m not backing up anything important enough for hackers to bother with.”

With identity theft on the rise, the security of all of your data online should be paramount. Strong encryption is important, so make sure it’s supported by the services you depend on.
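Strong encryption can also start on your own machine, before anything is uploaded. Here is a small sketch using the third-party cryptography package; the file name is a placeholder, and real backup tools (including Backblaze's client) implement their own encryption rather than this exact scheme.

```python
# Sketch: encrypt a file locally before it ever leaves the machine.
# Uses the third-party "cryptography" package; the file path is a placeholder.
from pathlib import Path

from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safe (NOT next to the backup);
# losing the key means losing the data.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = Path("tax-return-2016.pdf").read_bytes()
ciphertext = cipher.encrypt(plaintext)
Path("tax-return-2016.pdf.enc").write_bytes(ciphertext)

# Later, with the same key, the file can be recovered intact.
restored = Fernet(key).decrypt(ciphertext)
assert restored == plaintext
```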

Even if a bad actor doesn’t want your data, they still may want your computer for nefarious purposes, like driving a botnet used to launch a DDOS (Distributed Denial of Service) attack. That’s exactly what recently happened to Dyn, a company that provides core Internet services for other popular Internet services like Twitter and Spotify.

Make sure to protect your computer with strong passwords, practice safe surfing and keep your computer updated with the latest software. Also check periodically for malware and get rid of it when you find it.

Thinking That it’s Taken Care Of

“I have a backup strategy in place, so I don’t have to think about it anymore.”

I think it’s wise to observe an old aphorism: “Trust but verify.”

There’s absolutely nothing wrong with developing an automated backup strategy. But it’s vitally important to periodically test your backups to make sure they’re doing what they’re supposed to.

You should test your most important, mission-critical data first. Tax returns? Important legal documents? Irreplaceable baby pictures? Make sure the files that are important to you are retrievable and intact by actually trying to recover them. Find out more about how to test your backup.
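If you want to script part of that test, a simple approach is to restore a sample of critical files and compare checksums against the originals, as in this sketch. The directory paths are placeholders for your own test set.

```python
# Sketch: verify a restore by comparing checksums of original and restored
# files.  Directory paths are placeholders for your own test set.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

originals = Path.home() / "Documents" / "critical"
restored = Path.home() / "Downloads" / "restore-test"

for src in originals.rglob("*"):
    if not src.is_file():
        continue
    copy = restored / src.relative_to(originals)
    if not copy.exists():
        print(f"MISSING from restore: {src}")
    elif sha256(src) != sha256(copy):
        print(f"CORRUPT in restore:  {src}")
    else:
        print(f"OK: {src}")
```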

That includes Backblaze. Test all your backups – we even recommend it in our Best Practices.

Got more cloud backup myths to bust? Share them with me in the comments!

 


Anatomy of a Scam – Secret Shoppers

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/7r6fHv6xQN0/anatomy-of-scam-secret-shoppers.html

Here’s a recent example of a secret shopper scam. Like many scams, this one attempts to lure people who think that accidentally receiving a secret shopper invitation is a way to free money. In the end, it is merely an attempt at identity theft – though it may also involve a fee scam as well!

If the recipient bothers to check who it is from, it purports to come from Dow Chemical, with an email address that is [email protected], with a cc to [email protected] The hsbrv.net domain points back to a Betty Prevo, with an email address listing [email protected] That sounds suspiciously like our david212 address as well. The whois results are below:

Administrative Contact: Prevo, Betty [email protected] 1368 X W. Estes Ave Chicago, Illinois 60626 United States

For those who are interested, that address points to an apartment building in Chicago. Interestingly, Betty Prevo apparently exists and does live in that area in Chicago, but she’d probably be interested to find out that she’s running various domains. Blumail? Well, it’s a free email service that, “provides global e-mail accounts, educational content, employment needs, entrepreneurship, networking, story / experience sharing, mentoring and volunteering opportunities to youth and others who are coming online in developing countries.” In this case? It’s a great place for a scammer to get free email hosting. It’s also a well known 419 scam domain. Blumail is a legitimate service, unlike the hsbrv.net domain we first looked at.

Now, the actual scam letter:

“Hello there, My Name is David Anderson and I am your group regional Instructor from within the USA.Henceforth you will be working with me on the completion of your Mystery Shopper’s Position application. Like you already know, your weekly per assignment is $300:00 Flat for working with us and will come in payments of $300 each per assignment you complete for the company.”

Note that the name actually somewhat matches the email address – that’s often a missed detail for our scammers.

“PAYMENT TERMS: Your payment would be sent ($300) per assignment , Also the company is in charge of providing you with all expense money for the shopping and other expenses incurred during the course of your assignment.All the tools you will needing would be provided to you with details every week you have an assignment. JOB Description : 1} When an assignment is given to you,You would be provided with details to execute the assignment and in a timely fashion. 2} You would be asked to visit a company or store in your area and they are mostly our competitors as a secret shopper and shop with them to know more about their sales and stock , cost sales and more details as provided by the company then report back to us with details of whatever transpired a the store. But anything you buy at the shop belongs to you,all we want is an effective/quick job and reports.”

Free money, and what sounds like a somewhat reasonable reason why the company would want you to do this. The grammar is even better than most letters of this type.

“ASSIGNMENT PACKET : Before any assignment we would provide you with the resources needed {cash}Mostly our company would send you a check which you can cash and use for the assignment. Included to the check would be your assignment packet .Then we would be providing you details on here. But you follow every single information given to you as a secret shopper .”

It starts to fall apart here with lines like “Then we would be providing you details on here”. And now for the meat of the scam:

“KINDLY RECONFIRM YOUR INFORMATION BELOW TO PROCEED ON FIRST ASSIGNMENT: Full Legal Name : Full Physical Address : City : State : Zip code : Age: Nationality : Home and Cell # : Present Occupation: Email: Thank you for reading. Yours sincerely. Contact Person: David Anderson Time: 24 Hours daily by e-mail”

And that’s the anatomy of a secret shopper scam. A simple way to hook the gullible into providing details for identity theft.

_uacct = “UA-1423386-1”;
urchinTracker();

Anatomy of a Scam – Secret Shoppers

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/7r6fHv6xQN0/anatomy-of-scam-secret-shoppers.html

Here’s a recent example of a secret shopper scam. Like many scams, this one attempts to lure people who think that accidentally receiving a secret shopper invitation is a way to free money. In the end, it is merely an attempt at identity theft – though it may also involve a fee scam as well!

If the recipient bothers to check who it is from, it purports to come from Dow Chemical, with an email address that is [email protected], with a cc to [email protected]. The hsbrv.net domain points back to a Betty Prevo, with an email address listing [email protected]. That sounds suspiciously like our david212 address as well. The whois results are below:

Administrative Contact:
   Prevo, Betty   [email protected]
   1368 X W. Estes Ave
   Chicago, Illinois 60626
   United States

For those who are interested, that address points to an apartment building in Chicago. Interestingly, Betty Prevo apparently exists and does live in that area of Chicago, but she’d probably be interested to find out that she’s running various domains.

Blumail? Well, it’s a free email service that “provides global e-mail accounts, educational content, employment needs, entrepreneurship, networking, story / experience sharing, mentoring and volunteering opportunities to youth and others who are coming online in developing countries.” In this case? It’s a great place for a scammer to get free email hosting, and the domain is well known for showing up in 419 scams. Blumail itself is a legitimate service, unlike the hsbrv.net domain we first looked at.

Now, the actual scam letter:

“Hello there, My Name is David Anderson and I am your group regional Instructor from within the USA. Henceforth you will be working with me on the completion of your Mystery Shopper’s Position application. Like you already know, your weekly per assignment is $300:00 Flat for working with us and will come in payments of $300 each per assignment you complete for the company.”

Note that the name actually somewhat matches the email address – that’s often a missed detail for our scammers.

“PAYMENT TERMS: Your payment would be sent ($300) per assignment , Also the company is in charge of providing you with all expense money for the shopping and other expenses incurred during the course of your assignment. All the tools you will needing would be provided to you with details every week you have an assignment. JOB Description : 1} When an assignment is given to you,You would be provided with details to execute the assignment and in a timely fashion. 2} You would be asked to visit a company or store in your area and they are mostly our competitors as a secret shopper and shop with them to know more about their sales and stock , cost sales and more details as provided by the company then report back to us with details of whatever transpired a the store. But anything you buy at the shop belongs to you,all we want is an effective/quick job and reports.”

Free money, and what sounds like a somewhat reasonable reason why the company would want you to do this. The grammar is even better than in most letters of this type.

“ASSIGNMENT PACKET : Before any assignment we would provide you with the resources needed {cash} Mostly our company would send you a check which you can cash and use for the assignment. Included to the check would be your assignment packet . Then we would be providing you details on here. But you follow every single information given to you as a secret shopper .”

It starts to fall apart here with lines like “Then we would be providing you details on here”. And now for the meat of the scam:

“KINDLY RECONFIRM YOUR INFORMATION BELOW TO PROCEED ON FIRST ASSIGNMENT: Full Legal Name : Full Physical Address : City : State : Zip code : Age: Nationality : Home and Cell # : Present Occupation: Email: Thank you for reading. Yours sincerely. Contact Person: David Anderson Time: 24 Hours daily by e-mail”

And that’s the anatomy of a secret shopper scam. A simple way to hook the gullible into providing details for identity theft.
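For readers who want to repeat the kind of WHOIS check done above, a few lines of scripting are enough to pull out the registrant details. This is only a rough sketch: it assumes a Unix-like system with the standard whois command-line client installed, and the field names it matches vary from registrar to registrar.

# A minimal sketch of scripting the WHOIS lookup used above to identify the
# registrant of a suspicious domain. Assumes the standard Unix `whois` client
# is installed; registrars format their output differently, so the keyword
# matching here is deliberately loose.
import subprocess

def whois_contacts(domain):
    """Return WHOIS output lines that look like registrant or contact details."""
    result = subprocess.run(["whois", domain], capture_output=True, text=True)
    keywords = ("registrant", "admin", "administrative contact", "email")
    return [line.strip() for line in result.stdout.splitlines()
            if any(key in line.lower() for key in keywords)]

if __name__ == "__main__":
    for line in whois_contacts("hsbrv.net"):
        print(line)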


A Different Angle on Identity Theft: When Identity Thieves Use Your Identity

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/v9K21jFvgs4/different-angle-on-identity-theft-when.html

The story of Dr. Gemma Meadows, as reported by MSNBC, is an intriguing one. Like many victims of identity theft, she was contacted by her bank and informed of fraudulent activity. What happened next, though, is a bit off the normal path for identity theft victims.

Various packages with a wide range of values started to show up, and have continued to show up. Now, Dr. Meadows spends time tracking and returning packages, as well as fielding calls from the various vendors from whom the items are ordered.

Why? According to the article, and what she has been able to determine, the identity thieves are using her information to test validation scripts on e-commerce websites. Her valid address, phone number, and other details are being used to make fraudulent transactions appear valid.

Interestingly, the scripts seem to work in some cases, flagging the transactions as possibly fraudulent. The article mentions that some sites note that the item is to be shipped thousands of kilometers away from the order location, and that others call to verify that she is the one placing the order. Many others, however, don’t do as well, and the stream of packages continues.

The article is well worth a read. We’re used to seeing lives disrupted by identity theft and the credit and financial issues that can go with it. Receiving packages when criminals use your identity to support their crimes in a different way is an entirely different experience, and it appears to be one that law enforcement and our database-driven society isn’t geared to handle.
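The “shipped thousands of kilometers away from the order location” check that the better sites perform is simple to approximate. The sketch below is purely illustrative and is not any particular vendor’s fraud engine; it assumes you already have coordinates for the billing and shipping addresses (from a geocoding service, say) and flags orders where the two are implausibly far apart.

# Illustrative only: a toy version of the "shipping address is thousands of
# kilometres from the order location" check mentioned in the article. Real
# e-commerce fraud scoring combines many more signals than this one.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_for_review(billing, shipping, threshold_km=2000):
    """Return True if billing and shipping coordinates are far enough apart to review."""
    return haversine_km(*billing, *shipping) > threshold_km

chicago = (41.88, -87.63)   # hypothetical billing/order location
lagos = (6.52, 3.38)        # hypothetical shipping destination
print(flag_for_review(chicago, lagos))   # True: worth a manual review

A real fraud engine would weigh this alongside many other signals, including the phone verification some of the vendors in the article performed.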



Ok, Be Afraid if Someone’s Got a Voltmeter Hooked to Your CPU

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/03/05/crypto-fear.html

Boy, do I hate it when a FLOSS project is given a hard time
unfairly. I was this morning greeted with news from many places that
OpenSSL, one of the most common FLOSS software libraries used for
cryptography, was somehow severely vulnerable.

I had a hunch what was going on. I quickly downloaded a copy of the
academic paper that was cited as the sole source for the story and
read it. As I feared, OpenSSL was getting some bad press unfairly. One
must really read this academic computer science article in the context
in which it was written; most of those commenting on this paper
probably did not.

First of all, I don’t claim to be an expert on cryptography, and I
think my knowledge level limits what I can say on this subject to a
little blog post like this and nothing more. Between college and
graduate school, I worked as a system administrator focusing on network
security. While a computer science graduate student, I did take two
cryptography courses, two theory of computation courses, and one class
on complexity theory [0]. So, when
compared to the general population I probably am an expert, but compared to
people who actually work in cryptography regularly, I’m clearly a
novice. However, I suspect that many of those who have hitherto opined
on this academic article, declaring it a severe vulnerability, have
even less knowledge than I do on the subject.

This article, of course, wasn’t written for novices like me, and
certainly not for the general public nor the technology press. It was
written by and for professional researchers who spend much time each
week reading dozens of these academic papers, a task I haven’t done
since graduate school. Indeed, the paper is written in a style I know
well; my “welcome to CS graduate school” seminar in 1997
covered the format well.

The first thing you have to note about such papers is that informed
readers generally ignore the parts that a newbie is most likely to focus
on: the Abstract, Introduction and Conclusion sections. These sections
are promotional materials; they are equivalent to a sales brochure
selling you on how important and groundbreaking the research is. Some
research is groundbreaking, of course, but most is an incremental step
forward toward understanding some theoretical concept, or some report
about an isolated but interesting experimental finding.

Unfortunately, these promotional parts of the paper are the sections
that focus on the negative implications for OpenSSL. In the rest of the
paper, OpenSSL is merely the software component of the experiment
equipment. They likely could have used GNU TLS or any other
implementation of RSA taken from a book on
cryptography [1]. But this fact
is not even the primary reason that this article isn’t really that big
of a deal for daily use of cryptography.

The experiment described in the paper is very difficult to reproduce.
You have to cause very subtle faults in computation at specific times.
As I understand it, they had to assemble a specialized hardware copy of
a SPARC-based GNU/Linux environment to accomplish the experiment.

Next, the data generated during the run of the software on the
specially-constructed faulty hardware must be collected and operated
upon by a parallel processing computing environment over the course of
many hours. If it turns out all the needed data was gathered, the
output of this whole process is the private RSA key.

The details of the fault generation process deserve special mention.
Very specific faults have to occur, and they can’t occur such that any
other parts of the computation (such as, say, the normal running of the
operating system) are interrupted or corrupted. This is somewhat
straightforward to get done in a lab environment, but accomplishing it
in a production situation would be impractical and improbable. It would
also usually require physical access to the hardware holding the private
key. Such physical access would, of course, probably give you the
private key anyway by simply copying it off the hard drive or out of
RAM!
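
For readers wondering why a corrupted computation leaks anything at all, the classic CRT fault attack of Boneh, DeMillo and Lipton is a compact illustration, though it is not the method used in this paper (which, as described above, needs hours of collected faulty data rather than a single bad signature). The toy sketch below uses laughably small primes and a hypothetical single-bit fault: one signature that is correct modulo p but wrong modulo q hands over a factor of the modulus.

# Toy illustration only: this is NOT the attack described in the paper.
# It is the classic CRT fault attack, shown here because it captures the
# principle that one corrupted RSA signature can leak the private key.
# Requires Python 3.8+ for pow(x, -1, m).
from math import gcd

p, q = 104729, 1299709             # toy primes; real keys use ~1024-bit primes
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 123456789                      # stand-in for an already hashed/padded message

def sign_crt(m, fault=False):
    """RSA signing via the Chinese Remainder Theorem, optionally with a fault."""
    sp = pow(m, d % (p - 1), p)    # half of the signature, computed mod p
    sq = pow(m, d % (q - 1), q)    # half of the signature, computed mod q
    if fault:
        sq ^= 1                    # a hypothetical single-bit fault in the mod-q half
    h = (pow(q, -1, p) * (sp - sq)) % p   # Garner recombination
    return (sq + q * h) % N

good = sign_crt(m)
bad = sign_crt(m, fault=True)
print(gcd(abs(good - bad), N) == p)  # True: the faulty signature exposes a factor of N

Defences against this class of attack are exactly the ones suggested below: verify signatures before releasing them, and retire keys that were used on hardware known to have gone faulty.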

This is interesting research, and it does suggest some changes that
might be useful. For example, if it doesn’t slow a system down too
much, the integrity of RSA signatures should be verified, on a closely
controlled proxy unit with a separate CPU, before they are sent out to a
wider audience. But even that would be a process only for the most paranoid.
If faults are occurring on production hardware enough to generate the
bad computations this cracking process relies on, likely something else
will go wrong on the hardware too and it will be declared generally
unusable for production before an interloper could gather enough data to
crack the key. Thus, another useful change to make based on this
finding is to disable and discard RSA keys that were in use on
production hardware that went faulty.
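
The “verify before sending out” idea is cheap to prototype in software, even without a separate proxy unit. Here is a sketch using the pyca/cryptography package (my choice for illustration, not something the paper prescribes): sign, immediately verify against the public key, and refuse to release the signature if the check fails.

# Sketch of the "verify before the signature leaves the box" countermeasure
# discussed above, using the pyca/cryptography package. It illustrates the
# idea only; it is not a description of how OpenSSL itself behaves.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def sign_and_self_check(message: bytes) -> bytes:
    """Sign a message, but only release the signature if it verifies."""
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        # A faulty computation produced a bad signature: never let it out, and
        # (per the advice above) treat the key on this hardware as suspect.
        raise RuntimeError("signature failed self-check; refusing to release it")
    return signature

sig = sign_and_self_check(b"hello, world")

The point of the extra verify step is that a corrupted signature never leaves the machine, which (per the footnote below) is the property the attack depends on being absent.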

Finally, I think this article does completely convince me that I would
never want to run any RSA computations on a system where the CPU was
emulated. Causing faults in an emulated CPU would only require changes
to the emulation software, and could be done with careful precision to
detect when an RSA-related computation was happening, and only give the
faulty result on those occasions. I’ve never heard of anyone running
production cryptography on an emulated CPU, since it would be too slow,
and virtualization technologies like Xen, KVM, and QEMU all pass
CPU instructions directly through to the hardware (for speed reasons)
when the virtualized guest matches the hardware architecture of the
host.

The point, however, is that proper description of the dangers of a
“security vulnerability” requires more than a single bit
field. Some security vulnerabilities are much worse than others. This
one is substantially closer to the “oh, that’s cute” end of
the spectrum, not the “ZOMG, everyone’s going to experience
identity theft tomorrow” side.

[0] Many casual
users don’t realize that cryptography — the stuff that secures your
networked data from unwanted viewers — isn’t about math problems
that are unsolvable. In fact, it’s often based on math problems that are
trivially solvable, but take a very long time to solve. This is why
algorithmic complexity questions are central to the question of
cryptographic security.
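
As a rough, concrete illustration of that point: multiplying two primes together is effectively instant, while undoing the multiplication by brute force already takes noticeable time for numbers far too small to be of any cryptographic use, and the gap widens explosively as the numbers grow. The values below are toys chosen only so the script finishes quickly.

# Toy illustration of the footnote's point: the "hard" direction is not
# unsolvable, it is just enormously slower than the "easy" direction, and
# the gap explodes as the numbers get bigger.
import time

p, q = 15485863, 179424673       # two primes, tiny by cryptographic standards
n = p * q                        # the "easy" direction: effectively instant

def smallest_factor(n):
    """Brute-force trial division: the naive "hard" direction."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

start = time.perf_counter()
print(smallest_factor(n))                          # prints 15485863
print(round(time.perf_counter() - start, 1), "s")  # a few seconds on a typical machine
# Make p and q a few digits longer and the multiplication stays instant,
# while the brute-force search quickly becomes infeasible.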

[1] I’m
oversimplifying a bit here. A key factor in the paper appears to be the
linear time algorithm used to compute cryptographic digital signatures,
and the fact that the signatures aren’t verified for integrity before
being deployed. I suspect, though, that just about any RSA system is
going to do this. (Although I do usually test the integrity of my GnuPG
signatures before sending them out, I do this as a user by hand).