Tag Archives: bug bounty

StaCoAn – Mobile App Static Analysis Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/04/stacoan-mobile-app-static-analysis-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

StaCoAn is a cross-platform tool that helps developers, bug bounty hunters and ethical hackers perform static analysis on the code of native Android and iOS applications.

This tool will look for interesting lines in the code which can contain:

  • Hardcoded credentials
  • API keys
  • URLs of APIs
  • Decryption keys
  • Major coding mistakes

This tool was created with a big focus on usability and graphical guidance in the user interface.
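
As a rough, hypothetical sketch of what such a scan boils down to (the patterns, labels, and directory name below are invented for illustration and are not StaCoAn’s actual rules), the idea is simply to walk the decompiled sources and flag lines that match secret-looking patterns:

    # Hypothetical sketch of a "look for interesting lines" scan; the
    # patterns and directory below are illustrative, not StaCoAn's rules.
    import os
    import re

    PATTERNS = {
        "hardcoded credential": re.compile(r"(password|passwd|pwd)\s*=\s*['\"].+['\"]", re.I),
        "API key":              re.compile(r"(api[_-]?key|secret)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.I),
        "API URL":              re.compile(r"https?://[^\s'\"]+/api[^\s'\"]*", re.I),
    }

    def scan(root="decompiled_app"):
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    lines = open(path, errors="ignore").read().splitlines()
                except OSError:
                    continue
                for lineno, line in enumerate(lines, 1):
                    for label, pattern in PATTERNS.items():
                        if pattern.search(line):
                            print(f"{path}:{lineno}: possible {label}: {line.strip()}")

    if __name__ == "__main__":
        scan()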

Read the rest of StaCoAn – Mobile App Static Analysis Tool now! Only available at Darknet.

Setting up bug bounties for success

Post Syndicated from Michal Zalewski original https://lcamtuf.blogspot.com/2018/03/setting-up-bug-bounties-for-success.html

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I’ve been itching to write
about that for a while – but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There are plenty of differing views, but I like to think of such programs
simply as a bid on researchers’ time. In the most basic sense, you get three benefits:

  • Improved ability to detect bugs in production before they become major incidents.
  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.
  • A robust talent pipeline for when you need to hire.

What don’t bug bounties offer?

You don’t get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios given their skills and the
very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it’s difficult to fully
eliminate this inherent skew. There’s only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.

Don’t you have to outcompete the black market to get all the “good” bugs?

There is a free market price discovery component to it all: if you’re not getting the engagement you
were hoping for, you should probably consider paying more.

That said, there are going to be researchers who’d rather hurt you than work for you, no matter how much you pay;
you don’t have to win them over, and you don’t have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don’t want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you’re paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn’t mind a researcher blogging
about their work.

In fact, while money is important, you will probably find out that it’s not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it’s natural to be wary and to try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR dept.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that’s not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it’s not a sign of weakness to say “sorry, we messed up”. And you will almost certainly not end
up in the courtroom for doing so.

It’s good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn’t their own.

So… are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, they are more random. The annual expenses tend to be fairly predictable, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer to not know about the
vulnerabilities in the first place – but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.

So… should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won’t know for sure until you try.

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it’s the ownership of a small open-source project or so. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize that work, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
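
As a small illustration of what “safe by default” means in practice (the snippet uses Python’s standard sqlite3 module purely as an example; it is not tied to any system discussed in the post): a parameterized-query API treats user input as data rather than SQL, so the injection-prone pattern becomes the awkward one to write.

    # Illustrative only: parameterized queries as a safe-by-default API.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

    user_input = "alice' OR '1'='1"   # a classic injection attempt

    # Unsafe pattern: string concatenation hands control to the input.
    # rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Safe-by-default pattern: the driver treats the value as data, never as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)   # [] -- the injection attempt matches nothing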

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.
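
To make the client-facing end of that concrete, here is a minimal sketch of the Content Security Policy idea (the header and directives are standard, but the tiny server below is an illustrative assumption, not anything from the original post): even if an XSS bug slips through, a restrictive policy limits what injected markup is allowed to do.

    # Minimal sketch: serve a page with a restrictive Content-Security-Policy
    # header so browsers refuse inline scripts injected via an XSS bug.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Hello</body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            # Only allow scripts from our own origin; no inline script, no eval,
            # which blunts many reflected/stored XSS payloads.
            self.send_header("Content-Security-Policy",
                             "default-src 'self'; script-src 'self'")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()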

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.
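
A rough sketch of that kind of feedback loop, with invented report records and field names (this is not any particular program’s data model): group incoming bounty reports by vulnerability class and component, and re-run the same tally after a hardening project ships to see whether the cluster shrinks.

    # Illustrative sketch: spotting systemic bug classes in bounty data.
    # The report records and field names are made up for this example.
    from collections import Counter

    reports = [
        {"component": "photos", "bug_class": "XSS",  "quarter": "2017Q3"},
        {"component": "photos", "bug_class": "XSS",  "quarter": "2017Q3"},
        {"component": "mail",   "bug_class": "SSRF", "quarter": "2017Q3"},
        {"component": "photos", "bug_class": "XSS",  "quarter": "2017Q4"},
    ]

    by_class = Counter((r["component"], r["bug_class"]) for r in reports)
    for (component, bug_class), count in by_class.most_common():
        print(f"{component:<8} {bug_class:<5} {count}")
    # A cluster like ("photos", "XSS") suggests a framework-level fix
    # (e.g. contextual autoescaping) rather than one-off patches; the same
    # tally run per quarter measures whether that fix actually helped.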

Uber Data Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/uber_data_hack.html

Uber was hacked, losing data on 57 million driver and rider accounts. The company kept it quiet for over a year. The details are particularly damning:

The two hackers stole data about the company’s riders and drivers – including phone numbers, email addresses and names – from a third-party server and then approached Uber and demanded $100,000 to delete their copy of the data, the employees said.

Uber acquiesced to the demands, and then went further. The company tracked down the hackers and pushed them to sign nondisclosure agreements, according to the people familiar with the matter. To further conceal the damage, Uber executives also made it appear as if the payout had been part of a “bug bounty” — a common practice among technology companies in which they pay hackers to attack their software to test for soft spots.

And almost certainly illegal:

While it is not illegal to pay money to hackers, Uber may have violated several laws in its interaction with them.

By demanding that the hackers destroy the stolen data, Uber may have violated a Federal Trade Commission rule on breach disclosure that prohibits companies from destroying any forensic evidence in the course of their investigation.

The company may have also violated state breach disclosure laws by not disclosing the theft of Uber drivers’ stolen data. If the data stolen was not encrypted, Uber would have been required by California state law to disclose that driver’s license data from its drivers had been stolen in the course of the hacking.

You don’t need printer security

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/you-dont-need-printer-security.html

So there’s this tweet:

What it’s probably referring to is this:

This is an obviously bad idea.

Well, it’s not so “obvious”, so some people have asked me to clarify the situation. After all, without “security”, couldn’t a printer just be added to a botnet of IoT devices?

The answer is this:

Fixing insecurity is almost always better than adding a layer of security.

Adding security is notoriously problematic, for three reasons:

  1. Hackers are active attackers. When presented with a barrier in front of an insecurity, they’ll often find ways around that barrier. It’s a common problem with “web application firewalls”, for example.
  2. The security software itself can become a source of vulnerabilities hackers can attack, which has happened frequently in anti-virus and intrusion prevention systems.
  3. Security features are usually snake oil: they sound great on paper, but no details and no independent evaluation are provided to the public.

It’s the last one that’s most important. HP markets features, but there’s no guarantee they work. In particular, similar features in other products have proven not to work in the past.

HP describes its three special features in a brief whitepaper [*]. They aren’t bad, but at the same time, they aren’t particularly good. Windows already offers all these features. Indeed, as far as I know, they are just using Windows as their firmware operating system, and are just slapping an “HP” marketing name onto existing Windows functionality.

HP Sure Start: This refers to the standard feature in almost all devices these days of having a secure boot process. Windows supports this in UEFI boot. Apple’s iPhones work this way, which is why the FBI needed Apple’s help to break into a captured terrorist’s phone. It’s a feature built into most IoT hardware, though most don’t enable it in software.

Whitelisting: Their description sounds like “signed firmware updates”, but if that were the case, they’d call it that. Traditionally, “whitelisting” referred to a different feature: a list of hashes of the programs that are allowed to run on the device. Either way, it’s pretty common functionality.
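
To make the traditional meaning concrete, here is a rough sketch of hash-based whitelisting (the allowlist contents and binary path are placeholders, not anything HP ships): compute a program’s digest and refuse to run anything whose hash isn’t on the approved list.

    # Rough sketch of hash-based whitelisting; the allowlist values and
    # the path below are placeholders, not a real firmware implementation.
    import hashlib

    APPROVED_SHA256 = {
        # fill with known-good digests, e.g. hashlib.sha256(binary_bytes).hexdigest()
    }

    def is_allowed(path):
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        return digest in APPROVED_SHA256

    # Example: only launch the binary if its hash is on the approved list.
    if is_allowed("/usr/bin/print_daemon"):
        print("hash approved, ok to execute")
    else:
        print("unknown binary, refusing to run")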

Run-time intrusion detection: They have numerous, conflicting descriptions on their website. It may mean scanning memory for signatures of known viruses. It may mean stack cookies. It may mean double-checking kernel modules. Windows does all these things, and it provides only a tiny benefit in stopping security threats.

As for traditional attacks against printers, none of these features is really important. What you need to secure a printer is the ability to disable services you aren’t using (close ports), enable passwords and other access controls, and delete files of old print jobs so hackers can’t grab them from the printer. HP has features to address these security problems, but then, so do its competitors.
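
As a hedged sketch of the “close ports” part (the printer address and port list are examples only), a quick TCP probe of the usual printer services shows what is exposed and therefore what should be disabled or firewalled:

    # Quick sketch: probe common printer service ports to see what's open.
    # The address and port list are illustrative; adjust for your network.
    import socket

    PRINTER = "192.0.2.50"          # RFC 5737 example address
    PORTS = {21: "ftp", 23: "telnet", 80: "http", 515: "lpd",
             631: "ipp", 9100: "jetdirect/raw"}

    for port, service in PORTS.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        is_open = s.connect_ex((PRINTER, port)) == 0
        s.close()
        print(f"{port:>5} {service:<14} {'OPEN - disable if unused' if is_open else 'closed'}")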

Lastly, printers should be behind firewalls, not only protected from the Internet, but also segmented from the corporate network, so that only the designated ports, or flows between the printer and print servers, are enabled.

Conclusion

The features HP describes are snake oil. If they worked well, they’d still only address a small part of the spectrum of attacks against printers. And, since there are no technical details or independent evaluation of the features, they are almost certainly lies.

If HP really cared about security, they’d make their software more secure. They’d use fuzzing tools like AFL to secure it. They’d enable ASLR and stack cookies. They’d compile C code with run-time buffer overflow checks. They’d have a bug bounty program. It’s not something they can easily market, but at least it’d be real.

If you care about printer security, then take the steps I outline above, especially firewalling printers from the rest of the network. Seriously, putting a $100 firewall between a VLAN for your printers and the rest of the network is a cheap and easy way to get a vast amount of security. If you can’t secure printers this way, buying snake-oil features like HP describes won’t help you.

Lamers: the problem with bounties

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/lamers-gtfo-please.html

In my last two posts, I pointed out that the anti-spam technique known as “DKIM” cryptographically verifies emails. This can be used to verify that some of the newsworthy emails are, indeed, correct and haven’t been doctored. I offer a 1 BTC (one bitcoin, around $600 at current exchange rates) bounty if anybody can challenge this assertion.

Unfortunately, bounties attract lamers who think they deserve the bounty.

This guy insists he wins the bounty because he can add spaces to the email, and add fields like “Cc:” that DKIM doesn’t check. Since DKIM ignores extra spaces and only checks important fields, these changes pass. The guy claims it’s “doctored” because technically he has changed things, even though he hasn’t actually changed any of the important things (From, Date, Subject, and body content).

No. This doesn’t qualify for the bounty. It doesn’t call into question whether the Wikileaks emails say what they appear to say. It’s so obvious that people have already contacted me and passed on it, knowing it wouldn’t win the bounty. If I paid out the bounty for this lameness, one of the 10 people who came up with the idea before this lamer would get it, not him. It’d probably go to this guy — who knows it’s lame, who isn’t seeking the bounty, but would get it anyway before the lamer above:

Let me get ahead of the lamers and point to more sophisticated stuff that also doesn’t count. The following DKIM-verified email appears to show Hillary admitting she eats kittens. This would be newsworthy if true, and a winner of this bounty if indeed it could trick people.

This is in fact also very lame. I mean, it’s damn convincing, but only to lamers. You can see my trick by looking at the email on pastebin (http://pastebin.com/wRsnz0Y6) and comparing it to the original (https://wikileaks.org/podesta-emails/emailid/2986).

The trick is that I’ve added extra From/Subject fields before the DKIM header, so DKIM doesn’t see them. DKIM only sees the fields that come after it. It tricks other validation tools, such as this online validator. However, email readers (Thunderbird, Outlook, Apple Mail) see the first headers, and display something that DKIM hasn’t checked.
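
One mechanical way to catch this particular trick, sketched below with illustrative assumptions (the third-party dkimpy package and the saved message filename are assumptions, not part of the original post): verify the DKIM signature, then flag any From/Subject/Date header that appears more than once, since the unsigned duplicates are exactly what mail clients end up displaying.

    # Rough sketch: detect duplicated From/Subject headers that a DKIM
    # signature would not cover. Assumes the third-party "dkimpy" package
    # (pip install dkimpy); the filename below is hypothetical.
    import email
    import dkim  # from the dkimpy package

    def check_raw_email(raw_bytes):
        msg = email.message_from_bytes(raw_bytes)
        # dkim.verify() returns True only if the signature validates.
        signature_ok = dkim.verify(raw_bytes)
        # Headers like From/Subject should appear exactly once; extra copies
        # placed above the DKIM-Signature header are not signed, yet they are
        # what most mail clients display.
        suspicious = {h: len(msg.get_all(h) or [])
                      for h in ("From", "Subject", "Date")
                      if len(msg.get_all(h) or []) > 1}
        return signature_ok, suspicious

    with open("podesta.eml", "rb") as f:
        ok, dupes = check_raw_email(f.read())
    print("DKIM valid:", ok, "| duplicated headers:", dupes)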

I’ve taken a screenshot of the raw email to show both From fields:

Since I don’t rely upon the “magic” of tools to verify DKIM, but look at the whole package, I’ll see the added From/Subject fields. Far from fooling anybody, such modifications would be a smoking gun that somebody has attempted illicit modifications. Not just me: almost anybody viewing the “raw source” that Wikileaks provides would instantly see the shenanigans.

The Wikileaks emails can be verified with crypto, using DKIM. Anybody who can doctor an email in a way that calls this into question, such that they could pass something incriminating through (“I eat kittens”), wins the full bounty. Any good attempt, with something interesting and innovative, wins a partial bounty.

Lamers, though, are unwelcome.

BTW, the same is true for bug bounties. Bounties are becoming standard in the infosec industry, whereby companies give people money if they can show ways hackers might hack their products. Paying bounties allows companies to fix products before they actually get hacked. Even the U.S. military offers bounties if you can show ways to hack their computers. Lamers are a pox on bug bounty systems — administrators must spend more time telling lamers to go away than they spend dealing with real issues. No bounty’s rules are so tight that lamers can’t find a way to subvert them without finding anything that matches the intent.

National interest is exploitation, not disclosure

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/national-interest-is-exploitation-not.html

Most of us agree that more accountability/transparency is needed in how the government/NSA/FBI exploits 0days. However, the EFF’s positions on the topic are often absurd, which prevent our voices from being heard.

One of the EFF’s long-time planks is that the government should be disclosing/fixing 0days rather than exploiting them (through the NSA or FBI). As they phrase it in a recent blog post:

as described by White House Cybersecurity Coordinator, Michael Daniel: “[I]n the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest.” Other knowledgeable insiders—from former National Security Council Cybersecurity Directors Ari Schwartz and Rob Knake to President Obama’s hand-picked Review Group on Intelligence and Communications Technologies—have also endorsed clear, public rules favoring disclosure.

The EFF isn’t even paying attention to what the government said. The majority of vulnerabilities are useless to the NSA/FBI. Even powerful bugs like Heartbleed or Shellshock are useless, because they can’t easily be weaponized. They can’t easily be put into a point-and-shoot tool and given to cyberwarriors.

Thus, it’s a tautology saying “majority of cases vulns should be disclosed”. It has no bearing on the minority of bugs the NSA is interested in — the cases where we want more transparency and accountability.

This minority of bugs is not discovered accidentally. Accidentally found bugs are rarely of use to the NSA, so the NSA spends a considerable amount of money hunting down bugs that would be, and in many cases buying useful vulns from 0day sellers. The EFF pretends the political issue is about 0days the NSA happens to come across accidentally — the real political issue is about the ones the NSA spent a lot of money on.

For these bugs, the minority of bugs the NSA sees, we need to ask whether it’s in the national interest to exploit them, or to disclose/fix them. And the answer to this question is clearly in favor of exploitation, not fixing. It’s basic math.

An end-to-end Apple iOS 0day (with sandbox escape and persistence) is worth around $1 million, according to recent bounties from Zerodium and Exodus Intel.

There are two competing national interests with such a bug. The first is whether such a bug should be purchased and used against terrorist iPhones in order to disrupt ISIS. The second is whether such a bug should be purchased and disclosed/fixed, to protect American citizens using iPhones.

Well, for one thing, the threat is asymmetric. As Snowden showed, the NSA has widespread control over network infrastructure, and can therefore insert exploits as part of a man-in-the-middle attack. That makes any browser-bugs, such as the iOS bug above, much more valuable to the NSA. No other intelligence organization, no hacker group, has that level of control over networks, especially within the United States. Non-NSA actors have to instead rely upon the much less reliable “watering hole” and “phishing” methods to hack targets. Thus, this makes the bug of extreme value for exploitation by the NSA, but of little value in fixing to protect Americans.

The NSA buys one bug per version of iOS. It only needs one to hack into terrorist phones. But there are many more bugs. If it were in the national interest to buy iOS 0days, buying just one would have little impact, since many more bugs still lurk waiting to be found. The government would have to buy many bugs to make a significant dent in the risk.

And why is the government helping Apple at the expense of competitors anyway? Why is it securing iOS with its bug-bounty program and not Android? And not Windows? And not Adobe PDF? And not the million other products people use?

The point is that no sane person can argue that it’s worth it for the government to spend $1 million per iOS 0day in order to disclose/fix. If it were in the national interest, we’d already have federal bug bounties of that order, for all sorts of products. Long before the EFF argues that it’s in the national interest that purchased bugs should be disclosed rather than exploited, the EFF needs to first show that it’s in the national interest to have a federal bug bounty program at all.

Conversely, it’s insane to argue it’s not worth $1 million to hack into terrorist iPhones. Assuming the rumors are true, the NSA has been incredibly effective at disrupting terrorist networks, reducing the collateral damage of drone strikes and such. Seriously, I know lots of people in government, and they have stories. Even if you discount the value of taking out terrorists, 0days have been hugely effective at preventing “collateral damage” — i.e. the deaths of innocents.

The NSA/DoD/FBI buying and using 0days is here to stay. Nothing the EFF does or says will ever change that. Given this constant, the only question is how We The People get more visibility into what’s going on, how our representatives get more oversight, and how the courts get clearer and more consistent rules. I’m the first to stand up and express my worry that the NSA might unleash a worm that takes down the Internet, or that the FBI secretly hacks into my home devices. Policy makers need to address these issues, not the nonsense issues promoted by the EFF.

Bug Bounties Reaching $500,000 For iOS Exploits

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/Oi2kUyxvVik/

It seems bug bounties are getting really serious this year, especially on the secondary market involving exploit-trading firms rather than going direct to the software producer or owner. $500,000 isn’t chump change and would be a good year for a small security team, especially one living somewhere with a weaker currency. Even for a solo security researcher…

Read the full post at darknet.org.uk

More on the Vulnerabilities Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/08/more_on_the_vul.html

The Open Technology Institute of the New America Foundation has released a policy paper on the vulnerabilities equities process: “Bugs in the System: A Primer on the Software Vulnerability Ecosystem and its Policy Implications.”

Their policy recommendations:

  • Minimize participation in the vulnerability black market.
  • Establish strong, clear procedures for disclosure when the government discovers or acquires a vulnerability.
  • Establish rules for government hacking.
  • Support bug bounty programs.
  • Reform the DMCA and CFAA so they encourage responsible vulnerability disclosure.

It’s a good document, and worth reading.

Stealing Money from ISPs Through Premium Rate Calls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/stealing_money_.html

I think the best hacks are the ones that are obvious once they’re explained, but no one has thought of them before. Here’s an example:

Instagram ($2000), Google ($0) and Microsoft ($500) were vulnerable to direct money theft via premium phone number calls. They all offer services to supply users with a token via a computer-voiced phone call, but neglected to properly verify whether supplied phone numbers were legitimate, non-premium numbers. This allowed a dedicated attacker to steal thousands of EUR/USD/GBP/… . Microsoft was exceptionally vulnerable to mass exploitation by supporting virtually unlimited concurrent calls to one premium number. The vulnerabilities were submitted to the respective Bug Bounty programs and properly resolved.
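
A hedged sketch of the missing check (using the open-source phonenumbers library as one possible approach; the example number and region are illustrative): before placing the computer-voiced call, reject any number the metadata classifies as premium-rate.

    # Sketch of the missing validation, using the "phonenumbers" package
    # (pip install phonenumbers). The example number is illustrative.
    import phonenumbers
    from phonenumbers import PhoneNumberType, number_type

    REJECTED_TYPES = {PhoneNumberType.PREMIUM_RATE, PhoneNumberType.SHARED_COST}

    def safe_to_call(raw_number, region="GB"):
        num = phonenumbers.parse(raw_number, region)
        if not phonenumbers.is_valid_number(num):
            return False
        return number_type(num) not in REJECTED_TYPES

    # Only place the computer-voiced verification call if the check passes.
    print(safe_to_call("+44 909 8790879"))   # UK 09xx premium-rate range should be rejected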

News articles. Slashdot threads.

Not All Bugs Are Created Equal

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/146077281316

Doug DePerry, Senior Security Engineer, Paranoids

In our inaugural post to The Paranoid, we discussed the human element behind online attacks–the human adversary. We sought to give some perspectives as to who is behind online threats in order to better understand how to defend against them. Yahoo’s bug bounty program applies that insight in our ongoing efforts to provide a safe environment for our users. By thinking about the economics of security, we’ve found that we can tilt the advantage in our favor by partnering with industry-leading security researchers.

We often get questions from both security researchers, and people just interested in learning about how programs like these work. We thought we’d use this opportunity to take a quick look under the hood.

First, some background. Bug bounty programs essentially crowd-source security. They allow companies to improve coverage by adding additional eyes where they need them. Bug bounty researchers also bring depth of expertise and different skill sets that can uncover hard-to-find bugs.

For the past two years, Yahoo has developed one of the largest and most successful bug bounty programs in the industry. We’ve paid out over $1.7 million in bounties, resolved more than 2,000 security bugs and maintain a “hackership” of more than 2,000 researchers, some of whom make careers out of it.

Security researchers often ask us how we decide the payout associated with a given bug report. At first it might seem logical that we pay based on the type or classification of a security bug. Some bug types tend to be bad, so you might think that all bugs of a given type would be paid the same. However, in the vast majority of cases, that’s not the complete story. So if the bug type alone is not what we use to determine the payout, what is? The missing input to the calculation is the impact of the vulnerability. We take into account what data might have been exposed, the sensitivity of that data, the role that data plays, network location and the permissions of the server involved. Those factors are of great importance.

Given the importance of the impact of a bug, the Yahoo bug bounty program does not reward researchers solely based on bug type. The type of bug a security researcher finds is mostly irrelevant. It’s what the bug allows them to do, and where, that matters most. What can an attacker actually do with this specific bug to potentially affect the security of Yahoo or our users? Furthermore, Yahoo’s application landscape is not necessarily uniform; certain properties or applications are more equal than others.
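
Purely as an illustration of the idea (the factor names, weights, and ratings below are invented for this sketch and are not Yahoo’s actual formula), an impact-based reward calculation might weigh data sensitivity, network location, and server permissions rather than keying off the bug class alone:

    # Toy illustration only: weights, factor names, and ratings are invented
    # for this sketch and are not Yahoo's actual payout formula.
    FACTOR_WEIGHTS = {
        "data_sensitivity": 5,    # what data could be exposed or changed
        "network_location": 3,    # internet-facing vs. internal-only
        "server_permissions": 2,  # privileges of the affected service
    }

    def impact_score(factors):
        """Combine 0-3 ratings for each factor into a single score."""
        return sum(FACTOR_WEIGHTS[name] * rating
                   for name, rating in factors.items())

    # A SQL injection against a database with no sensitive data scores far
    # lower than the same bug class against a sensitive store.
    boring_sqli = {"data_sensitivity": 0, "network_location": 2, "server_permissions": 1}
    scary_sqli  = {"data_sensitivity": 3, "network_location": 2, "server_permissions": 3}
    print(impact_score(boring_sqli), impact_score(scary_sqli))  # 8 vs 27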

Here’s an example to show how these factors work in practice. SQL injection bugs are often a devastating bug class because they can provide full access to a database. Odds are, if a company has a presence on the web, they are storing sensitive information in databases. But just because an attacker can access the database does not mean it’s game over. The real reason that the SQL injection bug class can be so devastating is the data stored in the database may be accessed or changed by unauthorized parties. The typical impact of a SQL injection bug is high because the data exposed is typically sensitive, except when it’s not. What if the database doesn’t contain any sensitive data?

Part of the process in determining impact can seem opaque to the researcher, and we understand that. That obscurity is an unfortunate but necessary fact of life in a bug bounty program. As an external party, it is just not possible to have all the information. The sort of testing available to participants in a public bug bounty program is inherently “black box”–no documentation, no source code, what you see is what you get.

So we encourage bug reporters to include in their reports what they believe the impact of the vulnerability to be (example report here). Submitting a report that contains a thorough and detailed explanation of a legitimate security issue is much more highly valued and rewarded.

We also work closely with the developers to ensure the bug is fixed in a timely manner, and to obtain their expert opinion on impact if necessary. If the developers that created the application tell us that no sensitive data is stored in a particular database, we take that into consideration when awarding your bug. More detailed guidelines for our bug bounty program are available at hackerone.com/yahoo.

To paraphrase a little-known quote, “bug bounty programs don’t reward you for being clever.” Users and researchers should know that we place far more weight on how impactful bugs are to our platforms.

For the hackers out there interested in how @yahoo’s Bug Bounty program works.