Tag Archives: vulnerability

[$] Finding Spectre vulnerabilities with smatch

Post Syndicated from corbet original https://lwn.net/Articles/752408/rss

The furor over the Meltdown and Spectre vulnerabilities has calmed a bit —
for now, at least — but that does not mean that developers have stopped
worrying about them. Spectre variant 1 (the bounds-check bypass
vulnerability) has been of particular concern because, while the kernel is
thought to contain numerous vulnerable spots, nobody really knows how to
find them all. As a result, the defenses that have been developed for
variant 1 have only been deployed in a few places. Recently, though,
Dan Carpenter has enhanced the smatch tool to enable it to find possibly
vulnerable code in the kernel.

My letter urging Georgia governor to veto anti-hacking bill

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/my-letter-urging-georgia-governor-to.html

February 16, 2018

Office of the Governor
206 Washington Street
111 State Capitol
Atlanta, Georgia 30334

Re: SB 315

Dear Governor Deal:

I am writing to urge you to veto SB315, the “Unauthorized Computer Access” bill.

The cybersecurity community, of which Georgia is a leader, is nearly unanimous that SB315 will make cybersecurity worse. You’ve undoubtedly heard from many of us opposing this bill. It does not help in prosecuting foreign hackers who target Georgian computers, such as our elections systems. Instead, it prevents those who notice security flaws from pointing them out, thereby getting them fixed. This law violates the well-known Kerckhoffs’s Principle: security is achieved through transparency and openness, not through secrecy and obscurity.

That the bill contains this flaw is no accident. The justification for this bill comes from an incident where a security researcher noticed a Georgia state election system had made voter information public. This remained unfixed, months after the vulnerability was first disclosed, leaving the data exposed. Those in charge decided that it was better to prosecute those responsible for discovering the flaw rather than punish those who failed to secure Georgia voter information, hence this law.

Too many security experts oppose this bill for it to go forward. Signing this bill, one that is weak on cybersecurity and favors political cover-up over the consensus of the cybersecurity community, will be part of your legacy. I urge you instead to veto this bill and direct the legislature to write a better one, this time in consultation with experts, of which Georgia’s thriving cybersecurity community has no shortage.

Thank you for your attention.

Sincerely,
Robert Graham
(formerly) Chief Scientist, Internet Security Systems

DARPA Funding in AI-Assisted Cybersecurity

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/darpa_funding_i.html

DARPA is launching a program aimed at vulnerability discovery via human-assisted AI. The new DARPA program is called CHESS (Computers and Humans Exploring Software Security), and they’re holding a proposers day in a week and a half.

This is the kind of thing that can dramatically change the offense/defense balance.

Obscure E-Mail Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/obscure_e-mail_.html

This vulnerability is a result of an interaction between two different ways of handling e-mail addresses. Gmail ignores dots in addresses, so jameshfisher@gmail.com is the same as james.hfisher@gmail.com is the same as j.a.m.e.s.hfisher@gmail.com. (Note: I do not own any of those email addresses — if they’re even valid.) Netflix doesn’t ignore dots, so those are all unique e-mail addresses and can each be used to register an account. This difference can be exploited.

I was almost fooled into perpetually paying for Eve’s Netflix access, and only paused because I didn’t recognize the declined card. More generally, the phishing scam here is:

  1. Hammer the Netflix signup form until you find a gmail.com address which is “already registered”. Let’s say you find the victim jameshfisher.
  2. Create a Netflix account with address james.hfisher.
  3. Sign up for free trial with a throwaway card number.
  4. After Netflix applies the “active card check”, cancel the card.
  5. Wait for Netflix to bill the cancelled card. Then Netflix emails james.hfisher asking for a valid card.
  6. Hope Jim reads the email to james.hfisher, assumes it’s for his Netflix account backed by jameshfisher, then enters his card **** 1234.
  7. Change the email for the Netflix account to eve@gmail.com, kicking Jim’s access to this account.
  8. Use Netflix free forever with Jim’s card **** 1234!

Obscure, yes? A problem, yes?

James Fisher, who wrote the post, argues that it’s Google’s fault. Ignoring dots might give people an enormous number of different email addresses, but it’s not a feature that people actually want. And as long as other sites don’t follow Google’s lead, these sorts of problems are possible.

I think the problem is more subtle. It’s an example of two systems without a security vulnerability coming together to create a security vulnerability. As we connect more systems directly to each other, we’re going to see a lot more of these. And like this Google/Netflix interaction, it’s going to be hard to figure out who to blame and who — if anyone — has the responsibility of fixing it.
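
The mismatch is easy to demonstrate in code. Here is a minimal Java sketch under a simplified model of Gmail’s rule (real Gmail also folds “+tag” suffixes, which is omitted here); the addresses are the ones from the scam described above:

    public class GmailCanonicalizer {
        // Gmail-style view: dots in the local part carry no meaning.
        static String gmailCanonical(String address) {
            int at = address.indexOf('@');
            String local = address.substring(0, at).replace(".", "");
            return (local + address.substring(at)).toLowerCase();
        }

        public static void main(String[] args) {
            String a = "james.hfisher@gmail.com";
            String b = "jameshfisher@gmail.com";
            // Netflix-style exact comparison: two different accounts.
            System.out.println(a.equals(b));                                  // false
            // Gmail-style comparison: one mailbox.
            System.out.println(gmailCanonical(a).equals(gmailCanonical(b)));  // true
        }
    }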

A serious Drupal security issue

Post Syndicated from corbet original https://lwn.net/Articles/750324/rss

The Drupal security team has sent out a “highly critical” alert: “A remote code execution vulnerability exists within multiple subsystems of Drupal 7.x and 8.x. This potentially allows attackers to exploit multiple attack vectors on a Drupal site, which could result in the site being completely compromised.” This seems worth avoiding; updating to the current version is the way to do that. There is an FAQ page with a little more information.

Israeli Security Company Attacks AMD by Publishing Zero-Day Exploits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/israeli_securit.html

Last week, the Israeli security company CTS Labs published a series of exploits against AMD chips. The publication came with the flashy website, detailed whitepaper, cool vulnerability names — RYZENFALL, MASTERKEY, FALLOUT, and CHIMERA — and logos we’ve come to expect from these sorts of things. What’s new is that the company only gave AMD a day’s notice, which breaks with every norm about responsible disclosure. CTS Labs didn’t release details of the exploits, only high-level descriptions of the vulnerabilities, but it is probably still enough for others to reproduce their results. This is incredibly irresponsible of the company.

Moreover, the vulnerabilities are kind of meh. Nicholas Weaver explains:

In order to use any of the four vulnerabilities, an attacker must already have almost complete control over the machine. For most purposes, if the attacker already has this access, we would generally say they’ve already won. But these days, modern computers at least attempt to protect against a rogue operating system by having separate secure subprocessors. CTS Labs discovered the vulnerabilities when they looked at AMD’s implementation of the secure subprocessor to see if an attacker, having already taken control of the host operating system, could bypass these last lines of defense.

In a “Clarification,” CTS Labs kind of agrees:

The vulnerabilities described in amdflaws.com could give an attacker that has already gained initial foothold into one or more computers in the enterprise a significant advantage against IT and security teams.

The only thing the attacker would need after the initial local compromise is local admin privileges and an affected machine. To clarify misunderstandings — there is no need for physical access, no digital signatures, no additional vulnerability to reflash an unsigned BIOS. Buy a computer from the store, run the exploits as admin — and they will work (on the affected models as described on the site).

The weirdest thing about this story is that CTS Labs describes one of the vulnerabilities, Chimera, as a backdoor. Although it doesn’t come out and say that this was deliberately planted by someone, it does make the point that the chips were designed in Taiwan. This is an incredible accusation, and honestly needs more evidence before we can evaluate it.

The upshot of all of this is that CTS Labs played this for maximum publicity: over-hyping its results and minimizing AMD’s ability to respond. And it may have an ulterior motive:

But CTS’s website touting AMD’s flaws also contained a disclaimer that threw some shadows on the company’s motives: “Although we have a good faith belief in our analysis and believe it to be objective and unbiased, you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports,” reads one line. WIRED asked in a follow-up email to CTS whether the company holds any financial positions designed to profit from the release of its AMD research specifically. CTS didn’t respond.

We all need to demand better behavior from security researchers. I know that any publicity is good publicity, but I am pleased to see the stories critical of CTS Labs outnumbering the stories praising it.

EDITED TO ADD (3/21): AMD responds:

AMD’s response today agrees that all four bug families are real and are found in the various components identified by CTS. The company says that it is developing firmware updates for the three PSP flaws. These fixes, to be made available in “coming weeks,” will be installed through system firmware updates. The firmware updates will also mitigate, in some unspecified way, the Chimera issue, with AMD saying that it’s working with ASMedia, the third-party hardware company that developed Promontory for AMD, to develop suitable protections. In its report, CTS wrote that, while one CTS attack vector was a firmware bug (and hence in principle correctable), the other was a hardware flaw. If true, there may be no effective way of solving it.

Response here.

QualysGuard – Vulnerability Management Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/03/qualysguard-vulnerability-management-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


QualysGuard is a web-based vulnerability management tool provided by Qualys, Inc., which was the first company to deliver vulnerability management services as a SaaS-based web service.

From reviews, it seems like a competent tool with a low rate of false positives that is fairly easy to work with and keeps the more ‘dangerous’ parts of vulnerability scanning out of the hands of users, while retaining the flexibility for expert users to do what they need.


Using JWT For Sessions

Post Syndicated from Bozho original https://techblog.bozho.net/using-jwt-sessions/

The topic has been discussed many times, on Hacker News, Reddit, and blogs. And the consensus is – DON’T USE JWT (for user sessions).

And I largely agree with the criticism of the typical arguments for JWT, the typical “but I can make it work…” explanations, and the flaws of the JWT standard.

I won’t repeat everything here, so please go and read those articles. You can really shoot yourself in the foot with JWT: it’s complex to get to know well, and it has few benefits for most use cases. I guess it makes sense for API calls, especially if you reuse the same API in a single-page application and for your RESTful clients, but I’ll focus on the user-session use case.

Despite all this criticism, I’ve gone against what the articles above recommend and use JWT, navigating through their arguments and claiming I’m in a sweet spot. I may very well be wrong.

I store the user ID in a JWT token stored as a cookie. Not local storage, as that’s problematic. Not the whole state, as I don’t need it and it may lead to problems (pointed out in the linked articles). In fact, I don’t have any session state apart from the user data, which I think is a good practice.

What I want to avoid in my setup is sharing sessions across nodes. And this is a very compelling reason not to use the session mechanism of your web server/framework. No, you don’t need to have millions of users in order to need your application to run on more than one node. In fact, it should almost always run on (at least) two nodes, because nodes die and you don’t want downtime. Sticky sessions at the load balancer are a solution to that problem, but you are just outsourcing the centralized session storage to the load balancer (and some load balancers might not support it). A shared session cache (e.g. Memcached, ElastiCache, Hazelcast) is also an option, and many web servers (at least in Java) support pluggable session replication mechanisms, but that introduces another component to the architecture, another part of the stack to be supported, and one more thing that can possibly break. That is not necessarily bad, but if there’s a simple way to avoid it, I’d go for it.

In order to avoid shared session storage, you need either the whole session state to be passed in the request/response cycle (as a cookie, request parameter, or header), or to receive a user ID and load the user from the database or a cache. As we’ve learned, the former might be a bad choice. Despite the fact that frameworks like ASP.NET and JSF dump the whole state in the HTML of the page, it doesn’t intuitively sound good.

As for the latter – you may say “ok, if you are going to load the user from the database on every request, this is going to be slow, and if you use a cache, then why not use the cache for the sessions themselves?”. Well, the cache can be local. Remember, we have just a few application nodes. Each node can have a local, in-memory cache for the currently active users. The fact that all nodes will have the same user loaded (after a few requests are routed to them by the load balancer in a round-robin fashion) is not important, as that cache is small. But you won’t have to do anything to replicate it across nodes, handle nodes joining and leaving the cluster, or deal with network issues between the nodes. Each application node will be an island, not caring about any other application node.
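
As a sketch of such a per-node cache, here is roughly what it could look like with Guava; the User record and the database loader are hypothetical stand-ins, and the size and expiry numbers are arbitrary:

    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import java.util.concurrent.TimeUnit;

    public class LocalUserCache {
        // Hypothetical user data; in a real application this comes from the database.
        public record User(long id, String email) {}

        // Strictly node-local: no replication, no cluster membership, no network I/O.
        private final LoadingCache<Long, User> cache = CacheBuilder.newBuilder()
                .maximumSize(10_000)                     // only currently active users, so small
                .expireAfterAccess(30, TimeUnit.MINUTES) // idle entries evict themselves
                .build(CacheLoader.from(this::loadFromDatabase));

        public User byId(long userId) {
            return cache.getUnchecked(userId);
        }

        // Stub standing in for a real database lookup.
        private User loadFromDatabase(Long userId) {
            return new User(userId, "user" + userId + "@example.com");
        }
    }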

So here goes my first objection to the linked articles – just storing the user identifier in a JWT token is not pointless, as it saves you from session replication.

What about the criticism of the JWT standard and the security implications of its cryptography? Entirely correct, it’s easy to shoot yourself in the foot. That’s why I’m using JWT only with a MAC, and only with a particular algorithm that I verify upon receiving the token, thus (allegedly) avoiding all the pitfalls. In all fairness, I’m willing to use the alternative proposed in one of the articles – PASETO – but it doesn’t have a Java library and it will take some time implementing one (might do in the future). To summarize – if there were another easy-to-use way to get authenticated encryption of cookies, I’d use it.
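
For illustration, here is roughly what that pinned-algorithm setup looks like in Java with the auth0 java-jwt library; the hard-coded secret and the 30-minute expiry are placeholders, not recommendations:

    import com.auth0.jwt.JWT;
    import com.auth0.jwt.JWTVerifier;
    import com.auth0.jwt.algorithms.Algorithm;
    import com.auth0.jwt.interfaces.DecodedJWT;
    import java.util.Date;

    public class SessionTokens {
        // Placeholder secret; load a sufficiently long random key from configuration.
        private static final Algorithm HMAC = Algorithm.HMAC256("replace-with-a-long-random-secret");

        // Built for exactly one algorithm: tokens signed any other way
        // (including alg=none) fail verification.
        private static final JWTVerifier VERIFIER = JWT.require(HMAC).build();

        public static String issue(String userId) {
            return JWT.create()
                    .withSubject(userId)
                    .withExpiresAt(new Date(System.currentTimeMillis() + 30 * 60 * 1000))
                    .sign(HMAC);
        }

        // Throws JWTVerificationException on a tampered or expired token.
        public static String verifiedUserId(String token) {
            DecodedJWT jwt = VERIFIER.verify(token);
            return jwt.getSubject();
        }
    }

The resulting string would then be set as a cookie, ideally with the HttpOnly and Secure flags.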

So I’m basically using JWT in “PASETO-mode”, with only one operation and only one algorithm. And that should be fine as a general approach – the article doesn’t criticize the idea of having a user identifier in a token (and a stateless application node); it criticizes the complexity and vulnerabilities of the standard. This is sort of my second objection – “Don’t use JWT” is widely understood to mean “Don’t use tokens”, but that is not the case.

Have I introduced some vulnerability in my quest for architectural simplicity and lack of shared state? I hope not.


Setting up bug bounties for success

Post Syndicated from Michal Zalewski original https://lcamtuf.blogspot.com/2018/03/setting-up-bug-bounties-for-success.html

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I’ve been itching to write
about that for a while – but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There are plenty of differing views, but I like to think of such programs
simply as a bid on researchers’ time. In the most basic sense, you get three benefits:

  • Improved ability to detect bugs in production before they become major incidents.
  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.
  • A robust talent pipeline for when you need to hire.

What don’t bug bounties offer?

You don’t get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios given their skills and the
very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it’s difficult to fully
eliminate this inherent skew. There’s only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.

Don’t you have to outcompete the black market to get all the “good” bugs?

There is a free market price discovery component to it all: if you’re not getting the engagement you
were hoping for, you should probably consider paying more.

That said, there are going to be researchers who’d rather hurt you than work for you, no matter how much you pay;
you don’t have to win them over, and you don’t have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don’t want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you’re paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn’t mind a researcher blogging
about their work.

In fact, while money is important, you will probably find out that it’s not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it’s natural to be wary and to try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR dept.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that’s not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it’s not a sign of weakness to say “sorry, we messed up”. And you will almost certainly not end
up in the courtroom for doing so.

It’s good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn’t their own.

So… are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, they are more random. The annual expenses tend to be fairly predictable, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer to not know about the
vulnerabilities in the first place – but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.

So… should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won’t know for sure until you try.

XSStrike – Advanced XSS Fuzzer & Exploitation Suite

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/03/xsstrike-advanced-xss-fuzzer-exploitation-suite/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


XSStrike is an advanced XSS detection suite, which contains a powerful XSS fuzzer and provides zero false positive results using fuzzy matching. XSStrike is the first XSS scanner to generate its own payloads.

It is also built in an intelligent enough manner to detect and break out of various contexts.

Features of XSStrike XSS Fuzzer & Hacking Tool

XSStrike has:

  • Powerful fuzzing engine
  • Context breaking technology
  • Intelligent payload generation
  • GET & POST method support
  • Cookie Support
  • WAF Fingerprinting
  • Handcrafted payloads for filter and WAF evasion
  • Hidden parameter discovery
  • Accurate results via the Levenshtein distance algorithm (sketched below)
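
The fuzzy-matching primitive behind that last bullet is easy to state: compute the Levenshtein distance between the payload that was sent and the string reflected in the response, and treat a small distance as a sign the payload survived mostly intact. XSStrike’s own scoring logic is internal to the tool, but a standard dynamic-programming implementation of the distance itself looks like this (Java, for illustration):

    public class Levenshtein {
        // Minimum number of single-character insertions, deletions, and
        // substitutions needed to turn a into b, using two rolling rows.
        public static int distance(String a, String b) {
            int[] prev = new int[b.length() + 1];
            int[] curr = new int[b.length() + 1];
            for (int j = 0; j <= b.length(); j++) prev[j] = j;
            for (int i = 1; i <= a.length(); i++) {
                curr[0] = i;
                for (int j = 1; j <= b.length(); j++) {
                    int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                       prev[j - 1] + cost);
                }
                int[] tmp = prev; prev = curr; curr = tmp;
            }
            return prev[b.length()];
        }
    }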

There are various other XSS security related tools you can check out like:

– XSSYA v2.0 Released – XSS Vulnerability Confirmation Tool
– xssless – An Automated XSS Payload Generator Written In Python
– XSSer v1.0 – Cross Site Scripter Framework

You can download XSStrike here:

XSStrike-master.zip

Or read more here.


AskRob: Does Tor let government peek at vuln info?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/askrob-does-tor-let-government-peek-at.html

On Twitter, somebody asked this question:

The question is about a blog post that claims Tor privately tips off the government about vulnerabilities, using as proof a “vulnerability” from October 2007 that wasn’t made public until 2011.
The tl;dr is that it’s bunk. There was no vulnerability; it was a feature request. The details were already public. There was no spy agency involved, just the agency that runs Voice of America and tries to protect activists under repressive foreign regimes.

Discussion

The issue is that Tor traffic looks like Tor traffic, making it easy to block/censor, or worse, identify users. Over the years, Tor has added features to make it look more and more like normal traffic, like the encrypted traffic used by Facebook, Google, and Apple. Tor improves this bit by bit over time, but short of actually piggybacking on website traffic, it will always leave some telltale signature.
An example showing how we can distinguish Tor traffic is the packet below, from the latest version of the Tor server:
Had this been Google or Facebook, the names would be something like “www.google.com” or “facebook.com”. Or, had this been a normal “self-signed” certificate, the names would still be recognizable. But Tor creates randomized names, with letters and numbers, making it distinctive. It’s hard to automate detection of this, because it’s only probably Tor (other self-signed certificates look like this, too), which means you’ll get occasional false positives. But still, if you compare this to the pattern of traffic, you can reliably detect that Tor is happening on your network.
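
To make that fuzziness concrete, the check reduces to something like the following toy heuristic on a certificate’s subject name. The pattern below is a made-up stand-in, not Tor’s actual name-generation scheme, which is exactly why a match is only “probably Tor” and has to be correlated with traffic patterns:

    import java.util.regex.Pattern;

    public class TorCertHeuristic {
        // Made-up approximation of a "www.<random letters/digits>.com"-style name.
        private static final Pattern RANDOMISH =
                Pattern.compile("^www\\.[a-z0-9]{8,25}\\.(com|net)$");

        // True means "possibly Tor": legitimate self-signed certificates
        // can match this too, hence the false positives mentioned above.
        public static boolean looksLikeTor(String certSubjectCn) {
            return RANDOMISH.matcher(certSubjectCn).matches();
        }
    }
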
This has always been a known issue, since the earliest days. Google the search term “detect tor traffic”, set your advanced search dates to before 2007, and you’ll see lots of discussion about this, such as this post on writing intrusion-detection signatures for Tor.
Among the things you’ll find is this presentation from 2006 where its creator (Roger Dingledine) talks about how Tor can be identified on the network with its unique network fingerprint. For a “vulnerability” they supposedly kept private until 2011, they were awfully darn public about it.
The above blog post claims Tor kept this vulnerability secret until 2011 by citing this message. That’s because Levine doesn’t understand the terminology and is just blindly searching for an exact match for “TLS normalization”. Here’s an earlier proposed change, from February 2007, for the long-term goal to “make our connection handshake look closer to a regular HTTPS [TLS] connection”. Here is another proposal from October 2007 on changing TLS certificates, from days after the email discussion (after they shipped the feature, presumably).
What we see here is a known problem from the very beginning of the project, a long-term effort to fix that problem, and a slow dribble of features added over time to preserve backwards compatibility.
Now let’s talk about the original train of emails cited in the blog post. It’s hard to see the full context here, but it sounds like BBG made a feature request to make Tor look even more like normal TLS, which is hinted at by the phrase “make our funders happy”. Of course the people giving Tor money are going to ask for improvements, and of course Tor would in turn discuss those improvements with the donor before implementing them. It’s common in project management: somebody sends you a feature request, and you then send the proposal back to them to verify that what you are building is what they asked for.
As for the subsequent salacious paragraph about “secrecy”, that too is normal. When fixing a problem, you don’t want to talk about the details until after you have a fix. But note that this is largely for PR more than anything else. The details of how to detect Tor are available to anybody who looks for them; they just aren’t readily accessible to the layman. For example, Tenable Networks had announced exactly this ability to detect Tor traffic the previous month, and any techie who wanted the details could have found them. Indeed, Tenable’s announcement may have been the impetus for BBG’s request to Tor: “can you fix it so that this new Tenable feature no longer works?”.
To be clear, there are zero secret “vulnerability details” here that some secret spy agency could use to detect Tor. They were already known, already in the Tenable product, and within the grasp of any techie who wanted to discover them. A spy agency could just buy Tenable’s product, or copy it, instead of going through this intricate conspiracy.

Conclusion

The issue isn’t a “vulnerability”. Tor traffic is recognizable on the network, and over time, the project makes it less and less recognizable. Eventually they’ll just piggyback on true HTTPS and convince CloudFlare to host ingress nodes, or something, making it completely undetectable. In the meantime, it leaves behind fingerprints, as I showed above.
What we see in the email exchanges is the normal interaction of a donor asking for a feature, not a private “tip off”. It’s likely the donor is the one who tipped off Tor, pointing out Tenable’s product to detect Tor.
Whatever secrets Tor could have tipped off to the “secret spy agency” were no more than what Tenable was already doing in a shipping product.

Update: People are trying to make it look like Voice of America is some sort of intelligence agency. That’s a conspiracy theory. It’s not a member of the American intelligence community. You’d have to come up with a solid reason explaining why the United States is hiding VoA’s membership in the intelligence community, or you’d have to believe that everything in the U.S. government is really just some arm of the C.I.A.

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it’s the ownership of a small open-source project or so. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what the last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
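
As one concrete shape of such a safe-by-default API, here is the SQL case sketched with plain JDBC and a hypothetical users table: the query structure is fixed up front, and user input can only ever be bound as data, so a malicious name cannot rewrite the statement.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SafeQuery {
        public static String emailFor(Connection db, String userName) throws SQLException {
            try (PreparedStatement stmt =
                     db.prepareStatement("SELECT email FROM users WHERE name = ?")) {
                stmt.setString(1, userName); // bound parameter, never concatenated into SQL
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }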

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.
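
As a minimal sketch of that containment idea on the client-facing side, a servlet filter can apply a restrictive Content Security Policy across an application; the exact policy string below is illustrative, not a recommendation:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // With this header in place, an XSS bug that slips through cannot
    // load attacker-controlled scripts from other origins.
    public class CspFilter implements Filter {
        @Override public void init(FilterConfig cfg) {}
        @Override public void destroy() {}

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            ((HttpServletResponse) res).setHeader(
                    "Content-Security-Policy",
                    "default-src 'self'; object-src 'none'; base-uri 'none'");
            chain.doFilter(req, res);
        }
    }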

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.

New uTorrent Web Streams and Downloads Torrents in Your Browser

Post Syndicated from Ernesto original https://torrentfreak.com/new-utorrent-web-streams-and-downloads-torrents-in-your-browser-180223/

While tens of millions of people use uTorrent as their default BitTorrent client, the software has seen few feature updates in recent years.

That doesn’t mean that the development team has been sitting still. Instead of drastically expanding the current software, they have started a new ambitious project: uTorrent Web.

This new piece of software, which launched rather quietly, allows users to download and stream torrents directly in their default web browsers, such as Chrome or Firefox.

The way it works is pretty straightforward. After installing the client, which is Windows-only at the moment, torrent and magnet links are automatically opened by uTorrent Web in a browser window.

People can use their regular torrent sites to find torrents or use the app’s search box, which redirects them to Google.

Let’s start…

TorrentFreak took the application for a spin and it works quite well. Videos may take a short while to load, depending on the download speed, but then they play just fine. As in most modern video players, subtitles are also supported, if they’re included.

The streaming functionality supports both audio and video, with the option to choose a specific file, if a torrent contains more than one.

Applications and other files can also be downloaded, but these are obviously not streamed.

uTorrent Web in action

The current Beta release comes with several basic preferences settings and users can change things such as the download location and upload speed. It’s likely that more options will follow as development matures, however.

While the quiet release comes as a surprise, BitTorrent founder Bram Cohen previously told us that the browser version was coming. In the long run, this version could even replace the “original” client, he seemed to suggest.

“We’re very, very sensitive. We know people have been using uTorrent for a very long time and love it. So we’re very, very sensitive to that and gonna be sure to make sure that people feel that it’s an upgrade that’s happening. Not that we’ve just destroyed the experience,” Bram said.

“We’re going to roll it out and get feedback and make sure that people are happy with it before we roll it out to everybody.”

For now, however, it appears that BitTorrent is offering both products side-by-side.

It’s been a turbulent week for BitTorrent Inc. The company had to deal with a serious vulnerability in its flagship software uTorrent. This same issue also affected uTorrent Web, but the most recent version is fully patched, we were told, as is the stable release.

We reached out to BitTorrent Inc. to find out more about this release, but we haven’t heard back for several days. Perhaps we’ll get an opportunity to find out more in the near future.

Until then, people are free to take uTorrent Web for a spin here.


Election Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/election_securi_2.html

I joined a letter supporting the Secure Elections Act (S. 2261):

The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:

  • Replacing insecure paperless voting systems with new equipment that will process a paper ballot;
  • Implementing post-election audits of paper ballots or records to verify electronic tallies; and
  • Conducting “cyber hygiene” scans and “risk and vulnerability” assessments and supporting state efforts to remediate identified vulnerabilities.

The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.

BitTorrent Client uTorrent Suffers Security Vulnerability

Post Syndicated from Ernesto original https://torrentfreak.com/bittorrent-client-utorrent-suffers-security-vulnerability-180220/

With tens of millions of active users a day, uTorrent has long been the most-used torrent client.

The software has been around for well over a decade and it’s still used to shift petabytes of data day after day. While there haven’t been many feature updates of late, parent company BitTorrent Inc. was recently alerted to a serious security vulnerability.

The security flaw in question was reported by Google vulnerability researcher Tavis Ormandy, who first reached out to BitTorrent in November last year. Google’s Project Zero gives developers a 90-day window to address security flaws, but with this deadline creeping up, BitTorrent had remained quiet.

Late last month Ormandy again reached out to BitTorrent Inc’s Bram Cohen, fearing that the company might not fix the vulnerability in time.

“I don’t think bittorrent are going to make a 90 day disclosure deadline, do you have any direct contacts who could help? I’m not convinced they understand the severity or urgency,” Ormandy wrote on Twitter.

Nudge

While Google’s security researcher might have expected a swifter response, the issue wasn’t ignored.

BitTorrent Inc has yet to fix the problem in the stable release, but a patch was deployed in the Beta version last week. BitTorrent’s Vice President of Engineering David Rees informed us that this will be promoted to the regular release this week, if all goes well.

While specific details about the vulnerability have yet to be released, it is likely to be a remote code execution flaw. Ormandy previously exposed a similar vulnerability in Transmission, which he said was the “first of a few remote code execution flaws in various popular torrent clients.”

BitTorrent Inc. told us that they have shared their patch with Ormandy, who confirmed that this fixes the security issues.

uTorrent Beta release notes

“We have also sent the build to Tavis and he has confirmed that it addresses all the security issues he reported,” Rees told us. “Since we have not promoted this build to stable, I will reserve reporting on the details of the security issue and its fix for now.”

BitTorrent Inc. plans to release more details about the issue when all clients are patched. At that point it will also recommend that users upgrade their clients, so they are no longer at risk, and further information will be available on Google’s Project Zero site.

Of course, people who are concerned about the issue can already upgrade to the latest uTorrent Beta release right away. Or, assuming that it’s related to the client’s remote control functionality, disable that for now.

Note: uTorrent’s Beta changelog states that the fixes were applied on January 15, but we believe that this should read February 15 instead.


How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As a best practice to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I will show you how to use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume this is an Amazon EC2 instance running Amazon Linux, launched from an Amazon Machine Image (AMI).

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on Amazon EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC

Later in this post, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the IAM user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user assigning tasks to pass their own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will be using the AWS CLI for most of the steps in this blog post. Our documentation shows you how to get started with the AWS CLI. Make sure you have the AWS CLI installed and configured with an AWS access key and secret access key that belong to an IAM user that has the following AWS managed policies attached: AmazonEC2FullAccess and AmazonSSMFullAccess.

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instances so that you can add your instances to a Systems Manager maintenance window based on tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured policy that you can use for the new role: AmazonEC2RoleforSSM.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM that has the AWS managed policy AmazonEC2RoleforSSM attached to it. This generates JSON-based output that describes the role and its parameters, if the command is successful.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Depending on whether you are launching a new instance or using an existing one, do one of the following:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId

At this point, you now have at least one Amazon EC2 instance you can use to configure Systems Manager.

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole that uses the trust policy you just created. If the command is successful, it returns JSON output that describes the new role.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy, AmazonSSMMaintenanceWindowRole, to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole
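
To verify that the policy is now attached, you can list the role's attached policies:

$ aws iam list-attached-role-policies --role-name MaintenanceWindowRole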

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet (matching the VPC diagram shown earlier in this post) so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.
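
If you want to see which AWS-managed default baselines are available before deciding to create your own, one way to list them is:

$ aws ssm describe-patch-baselines --filters "Key=OWNER,Values=AWS"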

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you determine which patches are approved for inclusion in your patch baseline. In this example, Critical severity patches are added to the patch baseline as soon as they are released (an Auto approval delay of 0 days), and Important, Medium, and Low severity patches are added two days after release (an Auto approval delay of 2 days).
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }

C. Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.
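
For illustration, maintenance windows also accept other cron schedules as well as rate expressions; the window names and schedules below are hypothetical values, not used elsewhere in this post:

$ aws ssm create-maintenance-window --name SundayNight --schedule "cron(0 30 23 ? * SUN *)" --duration 4 --cutoff 1 --allow-unassociated-targets
$ aws ssm create-maintenance-window --name WeeklyWindow --schedule "rate(7 days)" --duration 4 --cutoff 1 --allow-unassociated-targets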

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with your patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The command shown after this list uses the following options.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. After a window execution has completed, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}
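
To drill down into what a particular run actually executed, you can pass the WindowExecutionId from the previous output to the following command (YourWindowExecutionId is a placeholder):

$ aws ssm describe-maintenance-window-execution-tasks --window-execution-id YourWindowExecutionId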

D. Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns, in JSON format, the number of compliant and noncompliant instances for each compliance type.
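
For a per-instance view of patch compliance, one option is the following command, which lists the individual compliance items Systems Manager has recorded for your instance:

$ aws ssm list-compliance-items --resource-ids YourInstanceId --resource-types ManagedInstance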

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. There you will see a visual representation of how many Amazon EC2 instances are compliant and how many are noncompliant with the patch baseline you defined earlier.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

Pirate Streaming Search Engine Exploits Crunchyroll Vulnerability

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-streaming-search-engine-exploits-crunchyroll-vulnerability-180213/

With 20 million members around the world, Crunchyroll is one of the largest on-demand streaming platforms for anime and manga content.

Much like Hollywood, the site has competition from pirate streaming sites which offer their content without permission. These usually stream pirated videos which are hosted on external sites.

However, this week Crunchyroll is facing a more direct attack. The people behind the new streaming meta-search engine StreamCR say they’ve found a way to stream the site’s content from its own servers, without paying.

“This works due to a vulnerability in the Crunchyroll system,” StreamCR’s operators tell TorrentFreak.

Simply put, StreamCR uses an active Crunchyroll account to locate the video streams and embeds them on its own website. This allows people to access Crunchyroll videos in the best quality without paying.

“This gives access to the full library in the region of our server, retrieving it as long as we’re not bound by the regular regional restriction. For this, we pick a US server as American Crunchyroll has the most library of content.”

Stream in various qualities

The exploit was developed in-house, the StreamCR team informs us. While it works fine at the moment the team realizes that this may not last forever, as Crunchyroll might eventually patch the vulnerability.

However, the meta-search engine will have made its point by then.

“We expect them to fix this, Why wouldn’t they? In the meantime, this can demonstrate how vulnerable Crunchyroll is at the moment,” they tell us.

The site’s ultimate plan is to become the go-to search engine for people looking to stream all kinds of pirated videos. In addition to Crunchyroll, StreamCR also indexes various pirate sites, including YesMovies, Gomovies, and 9anime.

“StreamCR’s goal is to let people access streams with ease from a universal site, we’re trying to have a Google-like experience for finding online streams,” they say.

TorrentFreak reached out to Crunchyroll asking for a comment on the issue, but at the time of publication, we have yet to hear back.

0-Day Flash Vulnerability Exploited In The Wild

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/02/0-day-flash-vulnerability-exploited-in-the-wild/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

So another 0-day Flash vulnerability is being exploited in the wild: a previously unknown flaw, now labelled CVE-2018-4878, which affects version 28.0.0.137 and earlier of the desktop runtime for both Windows and Mac, as well as the Flash Player bundled with Chrome on Windows, Mac, Linux, and Chrome OS.

The full Adobe Security Advisory can be found here:

– Security Advisory for Flash Player | APSA18-01

Adobe warned on Thursday that attackers are exploiting a previously unknown security hole in its Flash Player software to break into Microsoft Windows computers.

Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.