Tag Archives: Malware

Credential Stealing as an Attack Vector

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/05/credential_stea.html

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group — basically the country’s chief hacker — gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2014 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist who broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.

Second, organizations need to invest in breach detection and — most importantly — incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

MISP – Malware Information Sharing Platform

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/iiOT5t53d-Y/

MISP, Malware Information Sharing Platform and Threat Sharing, is an open source software solution for collecting, storing, distributing and sharing cyber security indicators and threats about cyber security incident analysis and malware analysis. MISP is designed by and for incident analysts, security and ICT professionals, and malware reversers to…

Read the full post at darknet.org.uk

How to Import IP Address Reputation Lists to Automatically Update AWS WAF IP Blacklists

Post Syndicated from Lee Atkinson original https://blogs.aws.amazon.com/security/post/Tx8GZBDD7HJ6BS/How-to-Import-IP-Address-Reputation-Lists-to-Automatically-Update-AWS-WAF-IP-Bla

You can use AWS WAF (a web application firewall) to help protect your web applications from exploits that originate from groups of IP addresses that are known to be operated by bad actors such as spammers, malware distributors, and botnets. The IP addresses used may change over time as these bad actors attempt to avoid detection. In this post, I will show how to synchronize AWS WAF Rules with reputation lists.

A number of organizations maintain reputation lists of IP addresses used by bad actors. Their goal is to help legitimate companies block access from specific IP addresses and protect their web applications from abuse. These downloadable, plaintext reputation lists include Spamhaus’s Don’t Route Or Peer (DROP) List and Extended Drop (EDROP) List, and Proofpoint’s Emerging Threats IP list. Similarly, the Tor project’s Tor exit node list provides a list of IP addresses currently used by Tor users to access the Internet. Tor is a web proxy that anonymizes web requests and is sometimes used by malicious users to probe or exploit websites.

Solution overview

The described solution uses AWS Lambda to synchronize AWS WAF Rules with available reputation lists. The Lambda function automates parsing the text-based IP reputation lists, extracting the IP addresses they contain, and updating the AWS WAF IP Sets to create a blacklist that blocks those addresses. Amazon CloudWatch Events executes the Lambda function on a regular schedule. To simplify the deployment, I will show you how to deploy this solution by using AWS CloudFormation.

The following diagram illustrates the process covered in this post.   

Here is how the process works:

  1. Amazon CloudWatch Events executes the Lambda function on a scheduled basis.
  2. The function downloads third-party reputation lists and processes them.
  3. To create a blacklist, the function then updates AWS WAF IP Sets with the latest IP addresses and ranges defined in the reputation lists.
  4. AWS WAF then denies requests from the IP addresses that appear in the blacklist.

In addition to the reputation lists mentioned previously, you might want to include other lists maintained by a third party or by your organization. This post’s example Lambda function, written in Node.js, supports the downloading of multiple lists and updating AWS WAF in a single Lambda execution. The Lambda function supports text-based lists with a single IP address or range specified per line. The function uses regular expressions to extract the IP address in either a.b.c.d dotted-decimal notation or a.b.c.d/n Classless Inter-Domain Routing (CIDR) notation, and supports an optional regular expression to match text that is present in the line before the address range. The function also consolidates ranges duplicated across multiple reputation lists and ranges that are contained within larger ranges. As an example, suppose we have two address ranges, 192.0.2.0/24 and 192.0.2.64/28. The second range is contained within the first, and therefore is not required in the combined list.
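For illustration, the extraction and consolidation steps described above can be sketched in Python (the actual function is written in Node.js; the function names, regular expression, and sample input below are hypothetical, not taken from the real code):

```python
import ipaddress
import re

# Matches a.b.c.d or a.b.c.d/n at the start of a line (optionally after a
# required text prefix, as with the Tor exit-address list).
IP_RE = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})(?:/(\d{1,2}))?")

def extract_ranges(lines, prefix=None):
    """Return the IPv4 networks found one-per-line in a reputation list."""
    nets = []
    for line in lines:
        text = line.strip()
        if prefix:
            if not text.startswith(prefix):
                continue
            text = text[len(prefix):]
        m = IP_RE.match(text)
        if m:
            cidr = m.group(0) if m.group(2) else m.group(1) + "/32"
            nets.append(ipaddress.ip_network(cidr, strict=False))
    return nets

def consolidate(nets):
    """Drop duplicate ranges and ranges contained within larger ranges."""
    return list(ipaddress.collapse_addresses(nets))

# 192.0.2.64/28 is contained in 192.0.2.0/24, so only the larger range remains.
merged = consolidate(extract_ranges(["192.0.2.0/24 ; SBL1", "192.0.2.64/28 ; SBL2"]))
print(merged)  # [IPv4Network('192.0.2.0/24')]
```

Note that `ipaddress.collapse_addresses` also merges adjacent ranges into larger ones, which is harmless (and helpful) for a blocklist.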

AWS WAF lets you apply a web access control list (web ACL) to a CloudFront distribution. A web ACL gives you fine-grained control over the web requests that your AWS resources respond to, and it can contain up to 10 rules. Each rule can specify multiple conditions that are logically ANDed to produce a match, in order to either Allow, Block, or Count the incoming request. One of the conditions these rules can refer to is an AWS WAF IP Set. An IP Set defines up to 1,000 IP descriptors, which define the IP addresses and ranges in CIDR notation. Therefore, a single AWS WAF web ACL can block up to 10,000 IP address ranges in total.

AWS WAF supports IP addresses and ranges in the CIDR format a.b.c.d/n where n, the mask, must be on the octet boundaries of 8, 16, 24, or 32. Some of the reputation lists may include address ranges that have masks other than the ones supported by AWS WAF. For those address ranges, the Lambda function will “split” the range into multiple, smaller ranges that have one of the supported masks. For example, suppose we have a range defined as 192.0.2.0/31, which specifies a range with two IP addresses. AWS WAF does not support a network mask of 31, so the function will split 192.0.2.0/31 into two ranges, each specifying a single address, 192.0.2.0/32 and 192.0.2.1/32, together covering the original range.
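A sketch of that splitting logic in Python, assuming the four supported octet-boundary masks (the helper name is hypothetical):

```python
import ipaddress

SUPPORTED_MASKS = (8, 16, 24, 32)  # octet-boundary masks AWS WAF accepts

def split_for_waf(cidr):
    """Split a range into subnets whose mask is rounded up to the next
    supported value, so the union covers exactly the same addresses."""
    net = ipaddress.ip_network(cidr)
    mask = min(m for m in SUPPORTED_MASKS if m >= net.prefixlen)
    return [str(s) for s in net.subnets(new_prefix=mask)]

print(split_for_waf("192.0.2.0/31"))  # ['192.0.2.0/32', '192.0.2.1/32']
print(len(split_for_waf("10.0.0.0/20")))  # 16 (sixteen /24 subnets)
```

A range whose mask is already supported passes through unchanged, since it is its own single subnet at that prefix length.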

The example Lambda function shards your reputation lists across multiple AWS WAF IP Sets that are made available to it, and prioritizes the list of IP addresses by size of ranges such that it blocks as many individual IP addresses as possible.

Deploy the solution using AWS CloudFormation

Now that I have explained how the Lambda function will process reputation lists, I will show how to use CloudFormation to create a stack consisting of the AWS WAF web ACL, Rules, and IP Sets, as well as the Lambda function, CloudWatch Events Rule, and supporting resources. The sample CloudFormation template defines two AWS WAF Rules and two IP Sets. You can modify the template to include more Rules and IP Sets in order to support a larger blacklist. The template also defines a CloudWatch Events Rule, which will execute the Lambda function every hour. You can alter the rate to a different value, but please respect the wishes of the reputation list owners and do not request the lists too often.

First, I go to the CloudFormation console, click Create Stack, and specify one of the URLs for the CloudFormation templates listed at the end of this post.

Click Next and type a name for the stack.

The template defines a parameter named ReputationLists whose default JSON value specifies the Spamhaus DROP list, the Tor exit node list, and the Emerging Threats IP list, along with associated regular expression prefixes. You can edit the JSON parameter value to specify other lists.

[
     { "url": "https://www.spamhaus.org/drop/drop.txt" },
     { "url": "https://check.torproject.org/exit-addresses","prefix":"ExitAddress "},
     { "url": "https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt" }
]

I click Next and Next again. On the next page, I select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box in order for the stack to create the IAM role assumed by the Lambda function. I click Create to create the stack.
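The same stack creation can be scripted with the AWS SDK. This sketch only builds the request: the stack name and template URL are placeholders (use the template URL listed at the end of this post), and CAPABILITY_IAM corresponds to the IAM acknowledgment check box:

```python
import json

# Default value for the ReputationLists template parameter, from this post.
reputation_lists = [
    {"url": "https://www.spamhaus.org/drop/drop.txt"},
    {"url": "https://check.torproject.org/exit-addresses", "prefix": "ExitAddress "},
    {"url": "https://rules.emergingthreats.net/fwrules/emerging-Block-IPs.txt"},
]

request = {
    "StackName": "waf-reputation-lists",                 # placeholder name
    "TemplateURL": "https://example.com/template.json",  # placeholder URL
    "Parameters": [{
        "ParameterKey": "ReputationLists",
        "ParameterValue": json.dumps(reputation_lists),  # passed as a JSON string
    }],
    "Capabilities": ["CAPABILITY_IAM"],  # acknowledges creation of IAM resources
}

# Submitting it would look like:
#   boto3.client("cloudformation").create_stack(**request)
print(request["StackName"])
```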

After the CloudFormation stack is created, I select the stack in the CloudFormation console and click the Resources tab.

I make a note of the physical ID for the Lambda function. In my example, it is called WAFReputationLists-LambdaFunction-PM6D5GNW6EDD, but yours may have a different ID. I will use this physical ID later to find the function in the Lambda console.

Monitor using CloudWatch

I then go to the Lambda console and select the Lambda function created by CloudFormation with the name mentioned in the previous paragraph. I click the Monitoring tab, and from here, I can monitor the execution of the function on an hourly basis. Note that it may take an hour for the first scheduled event to execute the function.

On the Monitoring tab, I click View logs in CloudWatch, which takes me to the CloudWatch console. Here, I can view the logs generated by the function using CloudWatch Logs, as shown in the following screenshot.

Attach the AWS WAF web ACL to a CloudFront distribution

To use the blacklist defined in my web ACL to protect my web application, I must attach the web ACL to a CloudFront distribution used to deliver the website via the CloudFront CDN.

I open the CloudFront console and click Distributions. I click my CloudFront distribution’s ID.

I then click Edit. I select the web ACL to attach (I choose the web ACL I created with the stack earlier in this post), scroll to the bottom of the page, and click Yes, Edit to save changes.

Screenshot showing the Edit button

I have attached the appropriate web ACL to the CloudFront distribution, which will allow me to use the blacklist to protect my web application.

Summary

In this post, I demonstrated how you can use AWS WAF to block the IP addresses used by bad actors. I did this by creating a Lambda function that imports IP addresses and ranges from multiple third-party IP reputation lists into AWS WAF, and scheduled the import using CloudWatch Events. The function processes the addresses defined in the lists into a format suitable for AWS WAF, removing duplicates and prioritizing the list to include as many IP addresses as possible. Subsequent requests to your web application (delivered by CloudFront) from IP addresses defined in this blacklist will be denied by AWS WAF.

The CloudFormation template and Lambda function package are available in the following S3 buckets that are located in the AWS regions that currently support Lambda:

The code and other WAF samples also are available in our GitHub aws-waf-sample repository.

If you have comments about this blog post, please submit them in the “Comments” section below. If you have questions about this solution or its implementation, please start a new thread on the AWS WAF forum.

– Lee

Security Risks of Shortened URLs

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/04/security_risks_11.html

Shortened URLs, produced by services like bit.ly and goo.gl, can be brute-forced. And searching random shortened URLs yields all sorts of secret documents. Plus, many of them can be edited, and can be infected with malware.

Academic paper. Blog post with lots of detail.

Gone In Six Characters: Short URLs Considered Harmful for Cloud Services (Freedom to Tinker)

Post Syndicated from jake original http://lwn.net/Articles/683880/rss

Over at the Freedom to Tinker blog, guest poster Vitaly Shmatikov, who is a professor at Cornell Tech, writes about his study [PDF] of what URL shortening means for the security and privacy of cloud services.
TL;DR: short URLs produced by bit.ly, goo.gl, and similar services are so short that they can be scanned by brute force. Our scan discovered a large number of Microsoft OneDrive accounts with private documents. Many of these accounts are unlocked and allow anyone to inject malware that will be automatically downloaded to users’ devices. We also discovered many driving directions that reveal sensitive information for identifiable individuals, including their visits to specialized medical facilities, prisons, and adult establishments.

World Backup Day is today! Are you backed up?

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/world-backup-day-is-today-are-you-backed-up/

World Backup Day 2016
Happy World Backup Day! Are you backed up? If not, you should be!
OK, World Backup Day may not have the same popularity as Christmas or even Arbor Day, but it’s a great occasion to remind you to check your backup. Find some time today to do just that, and make sure that your files are safe.
New threats abound
These days you can never be too careful. Even when you back up regularly, you can run into problems that will stop you in your tracks, like malware. Malware is a growing problem on Macs and Windows PCs alike, and even if you’re using “defender” software to protect yourself, new exploits are being discovered all the time.
Malware infection symptoms range from the annoying – browser redirects, crashes and slowdowns – to the downright extortionate, like our own Elli’s recent experience with “ransomware,” software that hijacked her computer, encrypted her files and demanded money to make them usable again.
Some companies have had to pay thousands of dollars to overcome ransomware problems, but Elli didn’t have to, because Elli was protected with a recent Backblaze backup.
Trust but verify
If you’re already backing up your computer, terrific. If you have some time during World Backup Day, please take a look at your backup and make sure the files you need are where they’re supposed to be.
Nothing is more useless than a backup system that’s in place but isn’t doing its job. And nothing is more frustrating than running into an occasion where you need a backed up file, only to find out that it’s not there. So try not to get caught in that situation.
With Backblaze, you’re usually only a click or two away from confirming that your files are backed up. What’s more, you can check or restore files from your web browser or by using our iOS or Android apps.
Are you using Backblaze or another way to back up your Mac or PC? If so, we’ve assembled some great tips about how to test your backups to make sure that your data is safe and sound.
Don’t be an April Fool
Almost a third of us have never backed up our computer. That’s way too many! If you’re among that group, now is a really good time to make sure your files are safe.
There are a multitude of ways to do that, including Backblaze. If you need help to make sure you’ve got everything saved, we can help. Check our Computer Backup Guide to get instructions about how to back up your computer.
World Backup Day is just one day a year, but your backups should be happening all the time. Make sure that your files are safe and sound and ready for retrieval if and when you need them.
Backblaze provides you with unlimited backup for your computer for just $5 per month. Isn’t it worth it for the peace of mind? Try it for free with a 15-day trial, and see what Backblaze can do for you.
The post World Backup Day is today! Are you backed up? appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

PEiD – Detect PE Packers, Cryptors & Compilers

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/HhoMm4w7x7w/

PEiD is an intuitive application that relies on its user-friendly interface to detect PE packers, cryptors and compilers found in executable files – its detection rate is higher than that of other similar tools since the app packs more than 600 different signatures for PE files. PEiD comes with three different scanning methods, each suitable…

Read the full post at darknet.org.uk

Ransomware Visits Backblaze

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cryptowall-ransomware-recovery/

“Elli” from our accounting department was trying to go home. Traffic was starting to build, and a 45-minute trip home would soon become a 90-minute one. Her Windows 10 PC chimed: she had an email. “Last one,” she uttered as she quickly opened the message. It appeared to be a voicemail file from a caller at Quickbooks, our accounting software. “What do they want?” She double-clicked on the attached file and her PC was “toast”; she just didn’t know it yet.
Instead of a voicemail from Quickbooks, what Elli had unwittingly done was unleash a ransomware infection on her system. While she finished packing her stuff to go home, one by one, the data files on her PC were being encrypted, making them unreadable to her or to anyone else.
When she glanced back at her computer she noticed something odd: the background picture, the one of her daughters, was gone. It was replaced by a generic image of a field of flowers. Weird. She opened up a folder she kept on her desktop. Here’s what she expected to see:
Clean PC
Here’s what she actually saw:
Infected PC
She couldn’t comprehend what she was seeing. Who could? She called over to our CTO, Brian, to have him take a look at this weirdness. He grabbed the keyboard and started typing. In between the expletives he asked her what she had done on the computer recently. She pointed to the email open in the corner of the screen. Brian asked if she opened the attachment. As she nodded yes, Brian pulled the network cable from the PC, then shut off the Wi-Fi switch, disconnected her external drive, and turned off her computer. “Your PC,” he said, “is infected with ransomware.”
We removed Elli’s infected drive and put it in a sandbox, where we let it finish its “work”. Once the process was done, we accessed the system and found, besides folder after folder of unintelligible files, “help” files put there by the ransomware as it processed the files in each folder. Here’s one of them:
Cryptowall Ransomware “Help” Message
Ransomware
Ransomware is malware that infects your computer, encrypts some or all of your data, and then holds it hostage until you pay a ransom to get your files decrypted. Last year we looked at Cryptowall, a form of ransomware. In that blog post we looked at the history and future of ransomware and predicted, sadly, we’d see more attacks. Here are a few recent examples:

Hollywood Presbyterian Hospital: Paid $17,000, “It was the easy choice. I wouldn’t say it was the right choice.”
Community of Christ Church in Hillsboro: Paid $570, “…the only thing we could do was to pay the ransom.”
Europe, the Middle East, Africa and Australia: The security company Trend Micro has labeled the recent attacks a Global Threat as ransomware has invaded these regions with a vengeance.
Mac Computers: Ransomware has now made its way to Apple’s Macintosh, with the first known infection being reported this past week. In this case, it took a fair amount of skullduggery to get past the Apple security protocols. At the center of the attack was a software vendor that was hacked and their software infected with ransomware. The infected software was then available to be downloaded by unsuspecting Mac computer users.

Elli gets her data back
Elli did not pay the ransom. Instead she recovered her data files from her Backblaze backup. Her last backup was just before she downloaded the ZIP file that contained the ransomware, so it was easy to recover all her data and get up and running.
Different versions of ransomware can make the data recovery process a bit more challenging, for example:

Some ransomware attacks have been known to delay their start, instead waiting a period of time or until a specific date before unleashing the downloaded malware and starting the encryption process. In that case you’ll need to be able to roll back the clock on your backup to a date before the infection so you can recover your files.
Other ransomware attacks will also attempt to encrypt connected, accessible drives, including, for example, your local backup drive. For this reason, following the 3-2-1 backup strategy of having both an onsite and an offsite backup of your data is the best protection against data loss if ransomware strikes.

Social engineering
All of this could have been avoided had Elli not been fooled by the email and downloaded the file. As is often the case with ransomware attacks, the miscreants used social engineering to get past Elli’s defenses. Social engineering can be defined as the “psychological manipulation of people into performing actions or divulging confidential information.” In Elli’s case there were several tricks:

The “to address” on the email contained Elli’s full name.
It is normal for our office to get emails with attachments from the voicemail system.
It is normal for our office to get messages from Quickbooks.

It’s hard to know if Elli was just one of millions of people who received this email or, as is more likely, the victim of a targeted attack. Such targeted attacks, also known as spear phishing, require that the sender learn about the target so that the email message appears more authentic. For most of us, finding the information needed to create a credible socially engineered email is as easy as perusing the company website and then doing a little research on social sites like Facebook, LinkedIn, Google+, and so on.
Lessons learned by “Elli”
It is easy to blame Elli for letting her system get infected with ransomware, but there were multiple failures here. She was using a browser to access her cloud-based email. The email system didn’t block the email that contained the malware. Neither the browser nor the email system she was using caught the fact that the attached ZIP file contained an executable file as she was able to download the file without incident. Finally, the anti-virus software on her PC didn’t detect anything when she downloaded and then unzipped the malware file. No pop-ups, no notifications, nothing; she was on her own and in a moment of weakness she made a mistake. As embarrassing as it is, she let us tell her story so maybe someone else won’t make the same mistake. Thanks Elli.
Epilogue
Some of you may be wondering about the data we store for our customers. The systems and networks of our business operations and our production operations are independent, with separate access and credentials for each. While having an employee’s computer compromised by ransomware was horribly inconvenient for the employee, Backblaze’s core systems were never at risk.
The post Ransomware Visits Backblaze appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

Mac OS X Ransomware KeRanger Is Linux Encoder Trojan

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/A3QDRb8kTfE/

So there’s been a fair bit of noise this past week about the Mac OS X ransomware KeRanger, the first of its kind. It also happens to be the first popular Mac malware of any form for some time. It’s also a lesson to all the Apple fanbois that their OS is not impervious […]

The post Mac OS X Ransomware KeRanger Is Linux Encoder Trojan…

Read the full post at darknet.org.uk

How to Reduce Security Threats and Operating Costs Using AWS WAF and Amazon CloudFront

Post Syndicated from Vlad Vlasceanu original https://blogs.aws.amazon.com/security/post/Tx1G747SE1R2ZWE/How-to-Reduce-Security-Threats-and-Operating-Costs-Using-AWS-WAF-and-Amazon-Clou

Some Internet operations trust that clients are “well behaved.” As an operator of a publicly accessible web application, for example, you have to trust that the clients accessing your content identify themselves accurately, or that they only use your services in the manner you expect. However, some clients are bad actors. These bad actors are typically automated processes: some might try to scrape your content for their own profit (content scrapers), and others might misrepresent who they are to bypass restrictions (bad bots). For example, they might use a fake user agent.

Successfully blocking bad actors can help reduce security threats to your systems. In addition, you can lower your overall costs, because you no longer have to serve traffic to unintended audiences. In this blog post, I will show you how you can realize these benefits by building a process to help detect content scrapers and bad bots, and then use Amazon CloudFront with AWS WAF (a web application firewall [WAF]) to help block bad actors’ access to your content.

WAFs give you back some control. For example, with AWS WAF you can filter traffic, look for bad actors, and block their access. This is no small feat because bad actors change methods continually to mask their actions, forcing you to adapt your detection methods frequently. Because AWS is fully programmable using RESTful APIs, you can integrate it into your existing DevOps workflows, and build automations around it to react dynamically to the changing methods of bad actors.

AWS WAF works by allowing you to define a set of rules, called a web access control list (web ACL). Each rule in the list contains a set of conditions and an action. Requests received by CloudFront are handed over to AWS WAF for inspection. Individual rules are checked in order. If the request matches the conditions specified in a rule, the indicated action is taken; if not, the default action of the web ACL is taken. Actions can allow the request to be serviced, block the request, or simply count the request for later analysis. Conditions offer a range of options to match traffic based on patterns, such as the source IP address, SQL injection attempts, size of the request, or strings of text. These constructs offer a wide range of capabilities to filter unwanted traffic.
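As a toy model (not the service’s actual implementation), the ordered, first-match evaluation described above behaves like this:

```python
def evaluate_web_acl(rules, default_action, request):
    """rules is an ordered list of (condition, action) pairs; the first
    rule whose condition matches the request decides the action."""
    for condition, action in rules:
        if condition(request):
            return action
    return default_action

# Two illustrative rules: an IP Set membership check and a string match.
rules = [
    (lambda r: r["ip"] in {"198.51.100.7"}, "BLOCK"),
    (lambda r: "sqlmap" in r["user_agent"], "COUNT"),
]

print(evaluate_web_acl(rules, "ALLOW", {"ip": "198.51.100.7", "user_agent": "curl"}))   # BLOCK
print(evaluate_web_acl(rules, "ALLOW", {"ip": "203.0.113.9", "user_agent": "sqlmap"}))  # COUNT
print(evaluate_web_acl(rules, "ALLOW", {"ip": "203.0.113.9", "user_agent": "curl"}))    # ALLOW
```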

Let’s get started with the AWS services involved and an overview of the solution itself. Because AWS WAF integrates with Amazon CloudFront, your website or web application must be fronted by a CloudFront distribution for the solution to work.

How AWS services help to make this solution work

The following AWS services work together to help block content scrapers and bad bots:

  • As I already mentioned, AWS WAF helps protect your web applications from common web exploits that can affect their availability, compromise security, or consume excessive resources.
  • CloudFront is a content delivery web service. It integrates with other AWS products to give you an easy way to distribute content to end users with low latency and high data-transfer speeds.
  • AWS Lambda enables you to run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or back-end service.
  • Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as code running on Lambda or any web application.
  • AWS CloudFormation gives you an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Solution overview

Blocking content scrapers and bad bots involves two main actions:

  1. Detect an inbound request from a content scraper or bad bot.
  2. Block any subsequent requests from that content scraper or bad bot.

For the solution in today’s post to be effective, your web application must employ both of these actions. The following architecture diagram shows how you can implement this solution by using AWS services.

These are the key elements of the diagram:

  1. A bad bot requests a specifically disallowed URL on your web application. This URL is implemented outside your web application in the blocking solution.
  2. The URL invocation triggers a Lambda function that captures the IP address of the requestor (source address).
  3. The function adds the source address to an AWS WAF block list.
  4. The function also issues a notification to an Amazon SNS topic, informing recipients that a bad bot was blocked.

CloudFront will block additional requests from the source address of the bad bot by checking the AWS WAF block list.
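A minimal sketch of steps 2 and 3, assuming an API Gateway proxy-style event; the helper names are hypothetical, and the AWS WAF call itself appears only as a comment (verify the exact payload against the SDK documentation):

```python
def source_ip(event):
    """API Gateway proxy integrations expose the caller's address here."""
    return event["requestContext"]["identity"]["sourceIp"]

def ipset_insert_update(ip):
    """Build an Updates entry in the shape used by the classic AWS WAF
    UpdateIPSet API, blocking the single address as a /32."""
    return [{"Action": "INSERT",
             "IPDescriptor": {"Type": "IPV4", "Value": ip + "/32"}}]

event = {"requestContext": {"identity": {"sourceIp": "198.51.100.7"}}}
updates = ipset_insert_update(source_ip(event))
print(updates[0]["IPDescriptor"]["Value"])  # 198.51.100.7/32

# The Lambda function would then call something like:
#   waf = boto3.client("waf")
#   token = waf.get_change_token()["ChangeToken"]
#   waf.update_ip_set(IPSetId=ip_set_id, ChangeToken=token, Updates=updates)
```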

In the remainder of this post, I describe in detail how this solution works.

Detecting content scrapers and bad bots

To detect an inbound request from a content scraper or bad bot, set up a honeypot. This is usually a piece of content that good actors know they are not supposed to access, and don't. First, embed a link in your content pointing to the honeypot. You should hide this link from your regular human users, as shown in the following code.

<a href="/v1/honeypot/" style="display: none" aria-hidden="true">honeypot link</a>

Note: In production, do not call the link honeypot. Use a name that is similar to the content in your application. For example, if you are operating an online store with a product catalog, you might use a fake product name or something similar.

Next, instruct good content scrapers and bots to ignore this embedded link. Use the robots exclusion standard and protocol (a robots.txt file in the root of your website) to specify which portions of your site are off limits, and to which content scrapers and bots. Conforming content scrapers and bots, such as Google’s web-crawling bot Googlebot, will actively look for this file first, download it, and refrain from indexing any content you disallow in the file. However, because this protocol relies on trust, content scrapers and bots can ignore your robots.txt file, which is often the case with malware bots that scan for security vulnerabilities and scrape email addresses.

The following is a robots.txt example file, which disallows access to the honeypot URL described previously.

User-agent: *
Disallow: /v1/honeypot/

Between the embedded link and the robots.txt file, it is likely that any requests made to the honeypot URL do not come from a legitimate user. This is what forms the basis of the detection process.

Blocking content scrapers and bad bots

Next, set up a script that is triggered when the honeypot URL is requested. As mentioned previously, AWS WAF uses a set of rules and conditions to match traffic and trigger actions. In this case, you will use an AWS WAF IPSet filter condition to create a block list, which is a list of disallowed source IP addresses. The script captures the IP address of the requestor and adds it to the block list. Then, when CloudFront passes an inbound request over to AWS WAF for inspection, the rule is triggered if the source IP address appears in the block list. In turn, AWS WAF instructs CloudFront to block the request. Any subsequent requests for your content from that source IP address will be blocked when the honeypot URL is requested.

Note: IPSet filter lists can store up to 1,000 IP addresses or ranges expressed in Classless Inter-Domain Routing (CIDR) format. If you expect the block list to exceed this number, consider using multiple IPSet filter lists and rules. For more details on service limits, see the AWS WAF Limits documentation.

In the remainder of this post, I show you how to implement the honeypot trap using Lambda and Amazon API Gateway. The trap is a minimal microservice that enables you to implement it without having to manage compute capacity and scaling.

Solution implementation and deployment

All resources for this solution are also available for download from our GitHub repository to enable you to inspect the code and change it as needed.

Step 1: Create a RESTful API

To start, you’ll need to create a RESTful API using API Gateway. Using the AWS CLI tools, run the following command and make note of the API ID returned by the call. (For details about how to install and configure the AWS CLI tools, see Getting Set Up with the AWS Command Line Interface.)

$ aws apigateway create-rest-api --name myBotBockingApi

The output will look like this (the line that has the API ID is highlighted):

{
    "name": "myFirstApi",
    "id": "xxxxxxxxxx",
    "createdDate": 1454978163
}

Note: We recommend that you deploy all resources in the same region. Because this solution uses API Gateway and Lambda, see the AWS Global Infrastructure Region Table to check which AWS regions support these services.

Step 2: Deploy the CloudFormation stack

Download this CloudFormation template and run it in your AWS account in the desired region. For detailed steps about how to create a CloudFormation stack based on a template, see this walkthrough.

You must provide two parameters:

  1. The Base Resource Name you want to use for the created resources.
  2. The RESTful API ID of the API created in Step 1 earlier in this post.

The CloudFormation – Create Stack page looks like what is shown in the following screenshot.

CloudFormation will create a web ACL, rule, and empty IPSet filter condition. Additionally, it will create an Amazon Simple Notification Service (SNS) topic to which you can subscribe so that you can receive notifications when new IP addresses are added to the list. CloudFormation will also create a Lambda function and an IAM execution role for the Lambda function, authorizing the function to change the IPSet. The service will also add a permission allowing the RESTful API to invoke the function.

Step 3: Set up API Gateway

We also provide a convenient Swagger template that you can use to set up API Gateway, after the relevant resources have been created using CloudFormation. Swagger is a specification and complete framework implementation for representing RESTful web services, allowing for deployment of easily reproducible APIs. Use the Swagger importer tool to set up API Gateway, but make sure you change the downloaded Swagger template in JSON format, by updating all occurrences of the placeholders shown in the following table.

Placeholder Description Example

[[region]]

The desired region

us-east-1

[[account-id]]

The account ID where the resources are created

012345678901

[[honeypot-uri]]

The name of the honeypot URI endpoint

honeypot

[[lambda-function-name]]

The name of the Lambda function created by CloudFormation (check the Outputs section of the stack)

wafBadBotBlocker-rLambdaFunction-XXXXXXXXXXXXX

Clone the Swagger import tool from GitHub and follow the tool’s readme file to build the import tool using Apache Maven, as shown in the following command.

$ git clone https://github.com/awslabs/aws-apigateway-importer.git aws-apigateway-importer && cd aws-apigateway-importer

Import the customized template (make sure you use the same region as for the CloudFormation resources), and replace [api-id] with the ID from Step 1 earlier in this post, and replace [basepath] with your desired URL segment (such as v1).

$ ./aws-api-import.sh --update [api-id] --deploy [basepath] /path/to/swagger/template.json

In API Gateway terminology, our [basepath] URL segment is called a stage, and defines the path through which an API is accessible.

Step 4: Finish the configuration

Finish the configuration by connecting API Gateway to the CloudFront distribution:

  1. Create an API key, which will be used to ensure that only requests originating from CloudFront will be authorized by API Gateway.
  2. Associate the newly created API key with the deployed API stage. The following image shows an example console page with the API key selected and the recommended API Stage Association values.


     

  3. Find the API Gateway endpoint created by the Swagger import script. You will need this endpoint for the custom origin. Find the endpoint on the API Gateway console by clicking the name of the deployed stage, as highlighted in the following image.

  1. Create a new custom origin in your CloudFront distribution, using the API Gateway endpoint. The details screen in the AWS Management Console for your existing CloudFront distribution will look similar to the following image, which already contains a few distinct origins. Click Create Origin.


     

  2. As shown in the following screenshot, use the API Gateway endpoint as the Origin Domain Name. Make sure the Origin Protocol Policy is set to HTTPS Only and add the API key in the Origin Custom Headers box. Then click Create.

  1. Add a cache behavior that matches your base path (API Gateway stage) and honeypot URL segment. This will point traffic to the newly created custom origin. The following screenshot shows an example console screen that lists CloudFront distribution behaviors. Click Create Behavior.

  1. Use the value of your base path and honeypot URL to set the Path Pattern field. The honeypot URL must match the value in the robots.txt file you deploy and the API Gateway method specified. Select the Custom Origin you just created and configure additional settings, as illustrated in the following screenshot:
  • Though whitelist headers are not strictly required, creating them to match the following screenshot would provide additional identification for your blocked IP notifications.
  • I recommend that you customize the Object Caching policy to not cache responses from the honeypot. Set the values of Minimum TTL, Maximum TTL, and Default TTL to 0 (zero), as shown in the following screenshot.

  1. Register the AWS WAF web ACL with your CloudFront distribution. The General tab of your distribution (see the following screenshot) contains settings affecting the configuration of your content delivery network. Click Edit.


     

  2. Find the AWS WAF Web ACL drop-down list (see the following screenshot) and choose the correct web ACL from the list. The name of the web ACL will start with the name you assigned as the Base Resource Name when you launched the CloudFormation template earlier.

  1. To receive notifications when an IP address gets blocked, subscribe to the SNS topic created by CloudFormation. You can receive emails or even text messages, and you can use that opportunity to validate the blocking action and remove the IP address from the block list, if it was blocked in error. For more information about how to subscribe to SNS topics, see Subscribe to a Topic.

Summary

The solution explained in this blog post helps detect content scrapers and bad bots. In most production deployments, though, this is just a component of a more comprehensive web traffic filtering strategy. AWS WAF provides a highly customizable service that can be interacted with programmatically to react faster to changing threats.

If you have comments about this blog post, please submit them in the “Comments” section below. If you have questions about or issues deploying this solution, start a new thread on the AWS WAF forum.

– Vlad

How to Reduce Security Threats and Operating Costs Using AWS WAF and Amazon CloudFront

Post Syndicated from Vlad Vlasceanu original https://blogs.aws.amazon.com/security/post/Tx1G747SE1R2ZWE/How-to-Reduce-Security-Threats-and-Operating-Costs-Using-AWS-WAF-and-Amazon-Clou

Some Internet operations trust that clients are “well behaved.” As an operator of a publicly accessible web application, for example, you have to trust that the clients accessing your content identify themselves accurately, or that they only use your services in the manner you expect. However, some clients are bad actors. These bad actors are typically automated processes: some might try to scrape your content for their own profit (content scrapers), and others might misrepresent who they are to bypass restrictions (bad bots). For example, they might use a fake user agent.

Successfully blocking bad actors can help reduce security threats to your systems. In addition, you can lower your overall costs, because you no longer have to serve traffic to unintended audiences. In this blog post, I will show you how you can realize these benefits by building a process to help detect content scrapers and bad bots, and then use Amazon CloudFront with AWS WAF (a web application firewall [WAF]) to help block bad actors’ access to your content.

WAFs give you back some control. For example, with AWS WAF you can filter traffic, look for bad actors, and block their access. This is no small feat because bad actors change methods continually to mask their actions, forcing you to adapt your detection methods frequently. Because AWS is fully programmable using RESTful APIs, you can integrate it into your existing DevOps workflows, and build automations around it to react dynamically to the changing methods of bad actors.

AWS WAF works by allowing you to define a set of rules, called a web access control list (web ACL). Each rule in the list contains a set of conditions and an action. Requests received by CloudFront are handed over to AWS WAF for inspection. Individual rules are checked in order. If the request matches the conditions specified in a rule, the indicated action is taken; if not, the default action of the web ACL is taken. Actions can allow the request to be serviced, block the request, or simply count the request for later analysis. Conditions offer a range of options to match traffic based on patterns, such as the source IP address, SQL injection attempts, size of the request, or strings of text. These constructs offer a wide range of capabilities to filter unwanted traffic.
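The first-match evaluation order described above can be modeled in a few lines of Python (a simplified illustration of the semantics, not the AWS WAF API; the rule conditions below are hypothetical):

```python
# Simplified model of web ACL evaluation: rules are checked in order;
# the first matching rule's action wins, otherwise the default applies.
def evaluate_web_acl(rules, default_action, request):
    """rules: list of (condition, action); condition is a predicate on the request."""
    for condition, action in rules:
        if condition(request):
            return action
    return default_action

# Example: block requests whose source IP is on a block list,
# and count (but still allow) unusually large request bodies.
block_list = {"203.0.113.9"}
rules = [
    (lambda r: r["source_ip"] in block_list, "BLOCK"),
    (lambda r: len(r.get("body", "")) > 8192, "COUNT"),
]
print(evaluate_web_acl(rules, "ALLOW", {"source_ip": "203.0.113.9"}))   # BLOCK
print(evaluate_web_acl(rules, "ALLOW", {"source_ip": "198.51.100.7"}))  # ALLOW
```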

Let’s get started with the AWS services involved and an overview of the solution itself. Because AWS WAF integrates with Amazon CloudFront, your website or web application must be fronted by a CloudFront distribution for the solution to work.

How AWS services help to make this solution work

The following AWS services work together to help block content scrapers and bad bots:

As I already mentioned, AWS WAF helps protect your web applications from common web exploits that can affect their availability, compromise security, or consume excessive resources.

CloudFront is a content delivery web service. It integrates with other AWS products to give you an easy way to distribute content to end users with low latency and high data-transfer speeds.

AWS Lambda enables you to run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or back-end service.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. You can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as code running on Lambda or any web application.

AWS CloudFormation gives you an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Solution overview

Blocking content scrapers and bad bots involves two main actions:

  1. Detect an inbound request from a content scraper or bad bot.
  2. Block any subsequent requests from that content scraper or bad bot.

For the solution in today’s post to be effective, your web application must employ both of these actions. The following architecture diagram shows how you can implement this solution by using AWS services.

These are the key elements of the diagram:

  1. A bad bot requests a specifically disallowed URL on your web application. This URL is implemented outside your web application in the blocking solution.
  2. The URL invocation triggers a Lambda function that captures the IP address of the requestor (source address).
  3. The function adds the source address to an AWS WAF block list.
  4. The function also issues a notification to an Amazon SNS topic, informing recipients that a bad bot was blocked.

CloudFront will block additional requests from the source address of the bad bot by checking the AWS WAF block list.

In the remainder of this post, I describe in detail how this solution works.

Detecting content scrapers and bad bots

To detect an inbound request from a content scraper or bad bot, set up a honeypot. This is usually a piece of content that good actors know they are not supposed to access, and don’t. First, embed a link in your content pointing to the honeypot. You should hide this link from your regular human users, as shown in the following code.

<a href="/v1/honeypot/" style="display: none" aria-hidden="true">honeypot link</a>

Note: In production, do not call the link honeypot. Use a name that is similar to the content in your application. For example, if you are operating an online store with a product catalog, you might use a fake product name or something similar.

Next, instruct good content scrapers and bots to ignore this embedded link. Use the robots exclusion standard (a robots.txt file in the root of your website) to specify which portions of your site are off limits to which content scrapers and bots. Conforming content scrapers and bots, such as Google’s web-crawling bot Googlebot, will actively look for this file first, download it, and refrain from indexing any content you disallow in the file. However, because this protocol relies on trust, content scrapers and bots can ignore your robots.txt file, which is often the case with malware bots that scan for security vulnerabilities and scrape email addresses.

The following is a robots.txt example file, which disallows access to the honeypot URL described previously.

User-agent: *
Disallow: /v1/honeypot/

Between the embedded link and the robots.txt file, it is likely that any requests made to the honeypot URL do not come from a legitimate user. This is what forms the basis of the detection process.
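You can verify how a conforming crawler interprets such a robots.txt file using Python’s standard library robots.txt parser:

```python
from urllib import robotparser

# The robots.txt contents from the example above.
ROBOTS_TXT = """\
User-agent: *
Disallow: /v1/honeypot/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A conforming bot checks this before fetching, and so skips the honeypot;
# only non-conforming bots ever request it.
print(rp.can_fetch("Googlebot", "/v1/honeypot/"))   # False
print(rp.can_fetch("Googlebot", "/v1/products/"))   # True
```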

Blocking content scrapers and bad bots

Next, set up a script that is triggered when the honeypot URL is requested. As mentioned previously, AWS WAF uses a set of rules and conditions to match traffic and trigger actions. In this case, you will use an AWS WAF IPSet filter condition to create a block list, which is a list of disallowed source IP addresses. The script captures the IP address of the requestor and adds it to the block list. Then, when CloudFront passes an inbound request over to AWS WAF for inspection, the rule is triggered if the source IP address appears in the block list. In turn, AWS WAF instructs CloudFront to block the request. Once the honeypot URL has been requested, any subsequent requests for your content from that source IP address will be blocked.
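A minimal sketch of such a blocking function follows (Python; the event key and the WAF_IP_SET_ID environment variable are assumptions for illustration, the boto3 calls mirror the classic AWS WAF API, and the SNS notification is omitted for brevity):

```python
import ipaddress
import os

def to_cidr(source_ip):
    """AWS WAF IPSets store addresses in CIDR notation; a single host is a /32."""
    ipaddress.IPv4Address(source_ip)  # raises ValueError on malformed input
    return source_ip + "/32"

def lambda_handler(event, context):
    # Where the source IP appears in the event depends on your API Gateway
    # mapping; this key is an assumption for the sketch.
    source_ip = event["sourceIp"]

    import boto3  # available in the AWS Lambda runtime
    waf = boto3.client("waf")

    # Classic AWS WAF requires a fresh change token for each mutating call.
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=os.environ["WAF_IP_SET_ID"],
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": to_cidr(source_ip)},
        }],
    )
    return {"blocked": to_cidr(source_ip)}
```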

Note: IPSet filter lists can store up to 1,000 IP addresses or ranges expressed in Classless Inter-Domain Routing (CIDR) format. If you expect the block list to exceed this number, consider using multiple IPSet filter lists and rules. For more details on service limits, see the AWS WAF Limits documentation.

In the remainder of this post, I show you how to implement the honeypot trap using Lambda and Amazon API Gateway. The trap is a minimal microservice that enables you to implement it without having to manage compute capacity and scaling.

Solution implementation and deployment

All resources for this solution are also available for download from our GitHub repository to enable you to inspect the code and change it as needed.

Step 1: Create a RESTful API

To start, you’ll need to create a RESTful API using API Gateway. Using the AWS CLI tools, run the following command and make note of the API ID returned by the call. (For details about how to install and configure the AWS CLI tools, see Getting Set Up with the AWS Command Line Interface.)

$ aws apigateway create-rest-api --name myBotBlockingApi

The output will look like this (the id field contains the API ID):

{
    "name": "myFirstApi",
    "id": "xxxxxxxxxx",
    "createdDate": 1454978163
}

Note: We recommend that you deploy all resources in the same region. Because this solution uses API Gateway and Lambda, see the AWS Global Infrastructure Region Table to check which AWS regions support these services.

Step 2: Deploy the CloudFormation stack

Download this CloudFormation template and run it in your AWS account in the desired region. For detailed steps about how to create a CloudFormation stack based on a template, see this walkthrough.

You must provide two parameters:

  1. The Base Resource Name you want to use for the created resources.
  2. The RESTful API ID of the API created in Step 1 earlier in this post.

The CloudFormation – Create Stack page looks like what is shown in the following screenshot.

CloudFormation will create a web ACL, rule, and empty IPSet filter condition. Additionally, it will create an Amazon Simple Notification Service (SNS) topic to which you can subscribe so that you can receive notifications when new IP addresses are added to the list. CloudFormation will also create a Lambda function and an IAM execution role for the Lambda function, authorizing the function to change the IPSet. The service will also add a permission allowing the RESTful API to invoke the function.

Step 3: Set up API Gateway

We also provide a convenient Swagger template that you can use to set up API Gateway, after the relevant resources have been created using CloudFormation. Swagger is a specification and complete framework implementation for representing RESTful web services, allowing for deployment of easily reproducible APIs. Use the Swagger importer tool to set up API Gateway, but first update all occurrences of the placeholders shown in the following table in the downloaded JSON Swagger template.

  • [[region]]: The desired region (example: us-east-1)
  • [[account-id]]: The account ID where the resources are created (example: 012345678901)
  • [[honeypot-uri]]: The name of the honeypot URI endpoint (example: honeypot)
  • [[lambda-function-name]]: The name of the Lambda function created by CloudFormation; check the Outputs section of the stack (example: wafBadBotBlocker-rLambdaFunction-XXXXXXXXXXXXX)
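The placeholder substitution itself can be scripted; here is a minimal sketch (the sample string and values are illustrative, not the real template contents):

```python
# Replace the [[...]] placeholders in the downloaded Swagger template text.
def fill_placeholders(text, replacements):
    for placeholder, value in replacements.items():
        text = text.replace(placeholder, value)
    return text

# Example values; substitute your own region, account ID, and names.
replacements = {
    "[[region]]": "us-east-1",
    "[[account-id]]": "012345678901",
    "[[honeypot-uri]]": "honeypot",
    "[[lambda-function-name]]": "wafBadBotBlocker-rLambdaFunction-XXXXXXXXXXXXX",
}

sample = "/[[honeypot-uri]]/ handled in [[region]] by [[lambda-function-name]]"
print(fill_placeholders(sample, replacements))
# /honeypot/ handled in us-east-1 by wafBadBotBlocker-rLambdaFunction-XXXXXXXXXXXXX
```

In practice you would read the template file, run it through a function like this, and write the result back before importing it.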

Clone the Swagger import tool from GitHub and follow the tool’s readme file to build the import tool using Apache Maven, as shown in the following command.

$ git clone https://github.com/awslabs/aws-apigateway-importer.git aws-apigateway-importer && cd aws-apigateway-importer

Import the customized template (make sure you use the same region as for the CloudFormation resources), and replace [api-id] with the ID from Step 1 earlier in this post, and replace [basepath] with your desired URL segment (such as v1).

$ ./aws-api-import.sh --update [api-id] --deploy [basepath] /path/to/swagger/template.json

In API Gateway terminology, our [basepath] URL segment is called a stage, and defines the path through which an API is accessible.

Step 4: Finish the configuration

Finish the configuration by connecting API Gateway to the CloudFront distribution:

  1. Create an API key, which will be used to ensure that only requests originating from CloudFront will be authorized by API Gateway.
  2. Associate the newly created API key with the deployed API stage. The following image shows an example console page with the API key selected and the recommended API Stage Association values.
  3. Find the API Gateway endpoint created by the Swagger import script. You will need this endpoint for the custom origin. Find the endpoint on the API Gateway console by clicking the name of the deployed stage, as highlighted in the following image.
  4. Create a new custom origin in your CloudFront distribution, using the API Gateway endpoint. The details screen in the AWS Management Console for your existing CloudFront distribution will look similar to the following image, which already contains a few distinct origins. Click Create Origin.
  5. As shown in the following screenshot, use the API Gateway endpoint as the Origin Domain Name. Make sure the Origin Protocol Policy is set to HTTPS Only and add the API key in the Origin Custom Headers box. Then click Create.
  6. Add a cache behavior that matches your base path (API Gateway stage) and honeypot URL segment. This will point traffic to the newly created custom origin. The following screenshot shows an example console screen that lists CloudFront distribution behaviors. Click Create Behavior.
  7. Use the value of your base path and honeypot URL to set the Path Pattern field. The honeypot URL must match the value in the robots.txt file you deploy and the API Gateway method specified. Select the Custom Origin you just created and configure additional settings, as illustrated in the following screenshot:
  • Though whitelist headers are not strictly required, creating them to match the following screenshot would provide additional identification for your blocked IP notifications.
  • I recommend that you customize the Object Caching policy to not cache responses from the honeypot. Set the values of Minimum TTL, Maximum TTL, and Default TTL to 0 (zero), as shown in the following screenshot.
  8. Register the AWS WAF web ACL with your CloudFront distribution. The General tab of your distribution (see the following screenshot) contains settings affecting the configuration of your content delivery network. Click Edit.
  9. Find the AWS WAF Web ACL drop-down list (see the following screenshot) and choose the correct web ACL from the list. The name of the web ACL will start with the name you assigned as the Base Resource Name when you launched the CloudFormation template earlier.
  10. To receive notifications when an IP address gets blocked, subscribe to the SNS topic created by CloudFormation. You can receive emails or even text messages, and you can use that opportunity to validate the blocking action and remove the IP address from the block list, if it was blocked in error. For more information about how to subscribe to SNS topics, see Subscribe to a Topic.

Summary

The solution explained in this blog post helps detect content scrapers and bad bots. In most production deployments, though, this is just a component of a more comprehensive web traffic filtering strategy. AWS WAF provides a highly customizable service that can be interacted with programmatically to react faster to changing threats.

If you have comments about this blog post, please submit them in the “Comments” section below. If you have questions about or issues deploying this solution, start a new thread on the AWS WAF forum.

– Vlad

GPL enforcement is a social good

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/38992.html

The Software Freedom Conservancy is currently running a fundraising program in an attempt to raise enough money to continue funding GPL compliance work. If they don’t gain enough supporters, the majority of their compliance work will cease. And, since SFC are one of the only groups currently actively involved in performing GPL compliance work, that basically means that there will be nobody working to ensure that users have the rights that copyright holders chose to give them.

Why does this matter? More people are using GPLed software than at any point in history. Hundreds of millions of Android devices were sold this year, all including GPLed code. An unknowably vast number of IoT devices run Linux. Cameras, Blu Ray players, TVs, light switches, coffee machines. Software running in places that we would never have previously imagined. And much of it abandoned immediately after shipping, gently rotting, exposing an increasingly large number of widely known security vulnerabilities to an increasingly hostile internet. Devices that become useless because of protocol updates. Toys that have a “Guaranteed to work until” date, and then suddenly Barbie goes dead and you’re forced to have an unexpected conversation about API mortality with your 5-year-old child.

We can’t fix all of these things. Many of these devices have important functionality locked inside proprietary components, released under licenses that grant no permission for people to examine or improve them. But there are many that we can. Millions of devices are running modern and secure versions of Android despite being abandoned by their manufacturers, purely because the vendor released appropriate source code and a community grew up to maintain it. But this can only happen when the vendor plays by the rules.

Vendors who don’t release their code remove that freedom from their users, and the weapons users have to fight against that are limited. Most users hold no copyright over the software in the device and are unable to take direct action themselves. A vendor’s failure to comply dooms them to having to choose between buying a new device in 12 months or no longer receiving security updates. When yet more examples of vendor-supplied malware are discovered, it’s more difficult to produce new builds without them. The utility of the devices that the user purchased is curtailed significantly.

The Software Freedom Conservancy is one of the only organisations actively fighting against this, and if they’re forced to give up their enforcement work the pressure on vendors to comply with the GPL will be reduced even further. If we want users to control their devices, to be able to obtain security updates even after the vendor has given up, we need to keep that pressure up. Supporting the SFC’s work has a real impact on the security of the internet and people’s lives. Please consider giving them money.

The CA’s Role in Fighting Phishing and Malware

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2015/10/29/phishing-and-malware.html

Since we announced Let’s Encrypt we’ve often been asked how we’ll ensure that we don’t issue certificates for phishing and malware sites. The concern most commonly expressed is that having valid HTTPS certificates helps these sites look more legitimate, making people more likely to trust them.

Deciding what to do here has been tough. On the one hand, we don’t like these sites any more than anyone else does, and our mission is to help build a safer and more secure Web. On the other hand, we’re not sure that certificate issuance (at least for Domain Validation) is the right level on which to be policing phishing and malware sites in 2015. This post explains our thinking in order to encourage a conversation about the CA ecosystem’s role in fighting these malicious sites.

CAs Make Poor Content Watchdogs

Let’s Encrypt is going to be issuing Domain Validation (DV) certificates. On a technical level, a DV certificate asserts that a public key belongs to a domain – it says nothing else about a site’s content or who runs it. DV certificates do not include any information about a website’s reputation, real-world identity, or safety. However, many people believe the mere presence of a DV certificate ought to connote at least some of these things.

Treating a DV certificate as a kind of “seal of approval” for a site’s content is problematic for several reasons.

First, CAs are not well positioned to operate anti-phishing and anti-malware operations – or to police content more generally. They simply do not have sufficient ongoing visibility into sites’ content. The best CAs can do is check with organizations that have much greater content awareness, such as Microsoft and Google. Google and Microsoft consume vast quantities of data about the Web from massive crawling and reporting infrastructures. This data allows them to use complex machine learning algorithms (developed and operated by dozens of staff) to identify malicious sites and content.

Even if a CA checks for phishing and malware status with a good API, the CA’s ability to accurately express information regarding phishing and malware is extremely limited. Site content can change much faster than certificate issuance and revocation cycles, phishing and malware status can be page-specific, and certificates and their related browser UIs contain little, if any, information about phishing or malware status. When a CA doesn’t issue a certificate for a site with phishing or malware content, users simply don’t see a lock icon. Users are much better informed and protected when browsers include anti-phishing and anti-malware features, which typically do not suffer from any of these limitations.

Another issue with treating DV certificates as a “seal of approval” for site content is that there is no standard for CA anti-phishing and anti-malware measures beyond a simple blacklist of high-value domains, so enforcement is inconsistent across the thousands of CAs trusted by major browsers. Even if one CA takes extraordinary measures to weed out bad sites, attackers can simply shop around to different CAs. The bad guys will almost always be able to get a certificate and hold onto it long enough to exploit people. It doesn’t matter how sophisticated the best CA anti-phishing and anti-malware programs are; it only matters how good the worst are. It’s a “find the weakest link” scenario, and weak links aren’t hard to find.

Browser makers have realized all of this. That’s why they are pushing phishing and malware protection features, and evolving their UIs to more accurately reflect the assertions that certificates actually make.

TLS No Longer Optional

When they were first developed in the 1990s, HTTPS and SSL/TLS were considered “special” protections that were only necessary or useful for particular kinds of websites, like online banks and shopping sites accepting credit cards. We’ve since come to realize that HTTPS is important for almost all websites. It’s important for any website that allows people to log in with a password, any website that tracks its users in any way, any website that doesn’t want its content altered, and for any site that offers content people might not want others to know they are consuming. We’ve also learned that any site not secured by HTTPS can be used to attack other sites.

TLS is no longer the exception, nor should it be. That’s why we built Let’s Encrypt. We want TLS to be the default method for communication on the Web. It should just be a fundamental part of the fabric, like TCP or HTTP. When this happens, having a certificate will become an existential issue, rather than a value add, and content policing mistakes will be particularly costly. On a technical level, mistakes will lead to significant down time due to a slow issuance and revocation cycle, and features like HSTS. On a philosophical and moral level, mistakes (innocent or otherwise) will mean censorship, since CAs would be gatekeepers for online speech and presence. This is probably not a good role for CAs.
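HSTS is one reason content-policing mistakes become so costly: once a browser has seen the header over HTTPS, it refuses to fall back to plain HTTP until the stated period expires, so a site that suddenly cannot obtain a certificate is simply down, not merely "insecure". A minimal sketch of building the header (the directives are from RFC 6797; the helper function name is my own):

```python
def hsts_header(max_age_seconds: int = 31536000, include_subdomains: bool = True) -> tuple:
    """Build a Strict-Transport-Security header (RFC 6797).

    After a browser sees this header over HTTPS, it will refuse to load
    the site over plain HTTP for max_age_seconds, so losing the ability
    to get a certificate means downtime, not just a missing lock icon.
    """
    value = "max-age={}".format(max_age_seconds)
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)
```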

Our Plan

At least for the time being, Let’s Encrypt is going to check with the Google Safe Browsing API before issuing certificates, and refuse to issue to sites that are flagged as phishing or malware sites. Google’s API is the best source of phishing and malware status information that we have access to, and attempting to do more than query this API before issuance would almost certainly be wasteful and ineffective.
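As a sketch of what such a pre-issuance check involves, the request body below follows Google's Safe Browsing v4 Lookup API (the endpoint, client ID, and function name here are illustrative assumptions, not Let's Encrypt's actual implementation):

```python
# SOCIAL_ENGINEERING is the v4 API's threat type covering phishing.
THREAT_TYPES = ["MALWARE", "SOCIAL_ENGINEERING"]

def safe_browsing_request(domains):
    """Build a v4 threatMatches:find request body for the given domains."""
    return {
        "client": {"clientId": "example-ca", "clientVersion": "1.0"},  # hypothetical client ID
        "threatInfo": {
            "threatTypes": THREAT_TYPES,
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": "http://{}/".format(d)} for d in domains],
        },
    }

# The body would be POSTed as JSON to
# https://safebrowsing.googleapis.com/v4/threatMatches:find?key=API_KEY
# An empty response object means none of the URLs are currently flagged,
# and issuance could proceed.
```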

We’re going to implement this phishing and malware status check because many people are not comfortable with CAs entirely abandoning anti-phishing and anti-malware efforts just yet, even for DV certificates. We’d like to continue the conversation for a bit longer before we abandon what many people perceive to be an important CA behavior, even though we disagree.

Conclusion

The fight against phishing and malware content is an important one, but it does not make sense for CAs to be on the front lines, at least when it comes to DV certificates. That said, we’re going to implement checks against the Google Safe Browsing API while we continue the conversation.

We look forward to hearing what you think. Please let us know.

Anti Evil Maid 2 Turbo Edition

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/35742.html

The Evil Maid attack has been discussed for some time – in short, it’s the idea that most security mechanisms on your laptop can be subverted if an attacker is able to gain physical access to your system (for instance, by pretending to be the maid in a hotel). Most disk encryption systems will fall prey to the attacker replacing the initial boot code of your system with something that records and then exfiltrates your decryption passphrase the next time you type it, at which point the attacker can simply steal your laptop the next day and get hold of all your data.

There are a couple of ways to protect against this, and they both involve the TPM. Trusted Platform Modules are small cryptographic devices on the system motherboard[1]. They have a bunch of Platform Configuration Registers (PCRs) that are cleared on power cycle but otherwise have slightly strange write semantics – attempting to write a new value to a PCR will append the new value to the existing value, take the SHA-1 of that and then store this SHA-1 in the register. During a normal boot, each stage of the boot process will take a SHA-1 of the next stage of the boot process and push that into the TPM, a process called “measurement”. Each component is measured into a separate PCR – PCR0 contains the SHA-1 of the firmware itself, PCR1 contains the SHA-1 of the firmware configuration, PCR2 contains the SHA-1 of any option ROMs, PCR5 contains the SHA-1 of the bootloader and so on.

If any component is modified, the previous component will come up with a different measurement and the PCR value will be different. Because you can’t directly modify PCR values[2], this modified code will only be able to set the PCR back to the “correct” value if it’s able to generate a sequence of writes that will hash back to that value. SHA-1 isn’t yet sufficiently broken for that to be practical, so we can probably ignore that.
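The extend operation is easy to model. Here is a toy sketch of the TPM 1.2-style hash chaining described above (not real TPM code, just the arithmetic):

```python
import hashlib

def extend(pcr, measurement):
    """TPM 1.2-style extend: new PCR value = SHA-1(old value || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# PCRs start zeroed after a power cycle; each boot stage measures the next.
pcr = bytes(20)
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, hashlib.sha1(stage).digest())

# Changing any stage, or reordering stages, yields a different final value,
# and there is no operation that sets a PCR back to an arbitrary value.
```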
The neat bit here is that you can then use the TPM to encrypt small quantities of data[3] and ask it to only decrypt that data if the PCR values match. If you change the PCR values (by modifying the firmware, bootloader, kernel and so on), the TPM will refuse to decrypt the material.

Bitlocker uses this to encrypt the disk encryption key with the TPM. If the boot process has been tampered with, the TPM will refuse to hand over the key and your disk remains encrypted. This is an effective technical mechanism for protecting against people taking images of your hard drive, but it does have one fairly significant issue – in the default mode, your disk is decrypted automatically. You can add a password, but the obvious attack is then to modify the boot process such that a fake password prompt is presented and the malware exfiltrates the data. The TPM won’t hand over the secret, so the malware flashes up a message saying that the system must be rebooted in order to finish installing updates, removes itself and leaves anyone except the most paranoid of users with the impression that nothing bad just happened. It’s an improvement over the state of the art, but it’s not a perfect one.

Joanna Rutkowska came up with the idea of Anti Evil Maid. This can take two slightly different forms. In both, a secret phrase is generated and encrypted with the TPM. In the first form, this is then stored on a USB stick. If the user suspects that their system has been tampered with, they boot from the USB stick. If the PCR values are good, the secret will be successfully decrypted and printed on the screen. The user verifies that the secret phrase is correct and reboots, satisfied that their system hasn’t been tampered with. The downside to this approach is that most boots will not perform this verification, and so you rely on the user being able to make a reasonable judgement about whether it’s necessary on a specific boot.

The second approach is to do this on every boot.
The obvious problem here is that in this case an attacker simply boots your system, copies down the secret, modifies your system and then prints the correct secret. To avoid this, the TPM can have a password set. If the user fails to enter the correct password, the TPM will refuse to decrypt the data. This can be attacked in a similar way to Bitlocker, but can be avoided with sufficient training: if the system reboots without the user seeing the secret, the user must assume that their system has been compromised and that an attacker now has a copy of their TPM password.

This isn’t entirely great from a usability perspective. I think I’ve come up with something slightly nicer, and certainly more Web 2.0[4]. Anti Evil Maid relies on having a static secret because expecting a user to remember a dynamic one is pretty unreasonable. But most security conscious people rely on dynamic secret generation daily – it’s the basis of most two factor authentication systems. TOTP is an algorithm that takes a seed, the time of day and some reasonably clever calculations and comes up with (usually) a six digit number. The secret is known by the device that you’re authenticating against, and also by some other device that you possess (typically a phone). You type in the value that your phone gives you, the remote site confirms that it’s the value it expected and you’ve just proven that you possess the secret. Because the secret depends on the time of day, someone copying that value won’t be able to use it later.

But instead of using your phone to identify yourself to a remote computer, we can use the same technique to ensure that your computer possesses the same secret as your phone. If the PCR states are valid, the computer will be able to decrypt the TOTP secret and calculate the current value. This can then be printed on the screen and the user can compare it against their phone. If the values match, the PCR values are valid. If not, the system has been compromised.
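The TOTP calculation itself is small. A self-contained sketch, following RFC 4226/6238 (HMAC-SHA1, six digits, 30-second step):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 of the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, at=None, step=30):
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step))

# Both the TPM-sealed copy on the laptop and the phone compute this;
# the user just compares the six digits shown on each screen at boot.
```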
Because the value changes over time, merely booting your computer gives your attacker nothing – printing an old value won’t fool the user[5]. This allows verification to be a normal part of every boot, without forcing the user to type in an additional password.

I’ve written a prototype implementation of this and uploaded it here. Do pay attention to the list of limitations – without a bootloader that measures your kernel and initrd, you’re still open to compromise. Adding TPM support to grub is on my list of things to do. There are also various potential issues like an attacker being able to use external DMA-capable devices to obtain the secret, especially since most Linux distributions still ship kernels that don’t enable the IOMMU by default. And, of course, if your firmware is inherently untrustworthy there are multiple ways it can subvert this all. So treat this very much like a research project rather than something you can depend on right now. There’s a fair amount of work to do to turn this into a meaningful improvement in security.

[1] I wrote about them in more detail here, including a discussion of whether they can be used for general purpose DRM (answer: not really)
[2] In theory, anyway. In practice, TPMs are embedded devices running their own firmware, so who knows what bugs they’re hiding.
[3] On the order of 128 bytes or so. If you want to encrypt larger things with a TPM, the usual way to do it is to generate an AES key, encrypt your material with that and then encrypt the AES key with the TPM.
[4] Is that even a thing these days? What do we say instead?
[5] Assuming that the user is sufficiently diligent in checking the value, anyway

This is not the UEFI backdoor you are looking for

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/35110.html

This is currently the top story on the Linux subreddit. It links to this Tweet which demonstrates using a System Management Mode backdoor to perform privilege escalation under Linux. This is not a story.

But first, some background. System Management Mode (SMM) is a feature in most x86 processors since the 386SL back in 1990. It allows for certain events to cause the CPU to stop executing the OS, jump to an area of hidden RAM and execute code there instead, and then hand off back to the OS without the OS knowing what just happened. This allows you to do things like hardware emulation (SMM is used to make USB keyboards look like PS/2 keyboards before the OS loads a USB driver), fan control (SMM will run even if the OS has crashed and lets you avoid the cost of an additional chip to turn the fan on and off) or even more complicated power management (some server vendors use SMM to read performance counters in the CPU and adjust the memory and CPU clocks without the OS interfering).

In summary, SMM is a way to run a bunch of non-free code that probably does a worse job than your OS does in most cases, but is occasionally helpful (it’s how your laptop prevents random userspace from overwriting your firmware, for instance). And since the RAM that contains the SMM code is hidden from the OS, there’s no way to audit what it does. Unsurprisingly, it’s an interesting vector to insert malware into – you could configure it so that a process can trigger SMM and then have the resulting SMM code find that process’s credentials structure and change it so it’s running as root.

And that’s what Dmytro has done – he’s written code that sits in that hidden area of RAM and can be triggered to modify the state of the running OS. But he’s modified his own firmware in order to do that, which isn’t something that’s possible without finding an existing vulnerability in the OS or the firmware (or, more recently, in both).
It’s an excellent demonstration that what we knew to be theoretically possible is practically possible, but it’s not evidence of such a backdoor being widely deployed.

What would that evidence look like? It’s more difficult to analyse binary code than source, but it would still be possible to trace firmware to observe everything that’s dropped into the SMM RAM area and pull it apart. Sufficiently subtle backdoors would still be hard to find, but enough effort would probably uncover them. A PC motherboard vendor managed to leave the source code to their firmware on an open FTP server and copies leaked into the wild – if there’s a ubiquitous backdoor, we’d expect to see it there.

But still, the fact that system firmware is almost entirely closed remains a problem in engendering trust – the means to inspect large quantities of binary code for vulnerabilities is still beyond the vast majority of skilled developers, let alone the average user. Free firmware such as Coreboot gets part way to solving this but still doesn’t solve the case of the pre-flashed firmware being backdoored and then installing the backdoor into any new firmware you flash.

This specific case may be based on a misunderstanding of Dmytro’s work, but figuring out ways to make it easier for users to trust that their firmware is tamper-free is going to be increasingly important over the next few years. I have some ideas in that area and I hope to have them working in the near future.

On Passwords and Password Expiration

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/a-hYO-gQkP4/on-passwords-and-password-expiration.html

One of the things that I believe is an important part of my job is to answer user questions in a way that educates them about the topic they ask about in addition to providing the answer. At times, this can be frustrating, but it also challenges me to think about why I’m providing the answer that I do. It also means that I have to review the choices I, and my organization, make about policy, process, and the reasons for both.

I recently exchanged email with one of our users who questioned our password policy, which requires periodic changes of passwords. The user contended that periodic password changes encourage poor password choice, that users who are forced to choose new passwords (even on a relatively infrequent basis) will choose poor passwords, and that, in the end, password changes serve no purpose.

In my institution’s case, there are a number of reasons why password changes make sense, and I believe that these are a reasonable match for most companies, colleges, and other organizations – but not necessarily for your Amazon account, or your banking password. It is critical to understand the difference between a daily use password for institutional access that provides access to things like VPN access, email, licensed software, and the rest of the keys to the kingdom, and a single use password that accesses a service or site.
Thinking about your password policy in the context of institutional risk while remaining aware of how your users will react is critical.

The reasons that help drive password change for my institution, in no particular order, are:

- Password changes help to prevent attackers who have breached accounts, but who have not used them, or who are quietly using them, from having continued access.
- Similarly, they can help prevent shared passwords from being useful for long term access.
- They can help prevent users from using the same password in multiple locations by driving changes that don’t match the previously set passwords elsewhere.
- They can help prevent brute forcing, although this is less common in environments where there are back-off algorithms in place. In many institutions, that central monitoring may not exist, or may not be easy to implement.
- Password changes continue to be recommended by most best practice documents (including PCI-DSS and others). Including password expiration in your password policy can be an element in proving due diligence as an organization.

When you read the list from a user perspective, it is difficult to see a compelling reason for them to change their passwords. There isn’t a big, disaster level threat that is immediately obvious, and the “what’s in it for me” is hard to communicate. When you read it from an organizational perspective, you will likely see a set of reasons that, when taken as a whole, mean that a reasonable password expiration timeframe is useful at an organizational level.

Here’s why: the environment in which most of us work now has two major external threats to passwords: malware and phishing.
With malware targeting browsers and browser plugins, and institutional policies that accept that users will visit at least common sites like CNN, ESPN, and other staples of our online lives, we have to acknowledge that malware compromises that gather our users’ passwords are likely.

Similarly, despite attempts we make at user education, phishing continues to seduce a portion of our user population into clicking that tempting link, or responding to the IT department that needs to know their password to ensure that their email isn’t turned off. Again, we know that passwords will be exposed.

Bulk compromises of passwords are likely to involve captured hashes, which most organizations have spent years designing infrastructure to avoid as tools like Rainbow Tables and faster cracking hardware became available. Thus, we worry more about what access to our networks, and what individual accounts, or small groups of compromised accounts, can do. In the event of a large-scale breach of central authentication, the organization will require a password change from every user, typically with immediate expiration of all passwords.

In this environment, we will require our users to change their passwords when their account is compromised, but will we know to require that? We know that advanced persistent threats exist, and that some attackers are patient and will wait, gathering information and not abusing the accounts they collect. We can continue to fight those threats with periodic password changes for the accounts that provide access to our institutions.

It would, of course, be preferable to use biometrics, or tokens, or some other two factor authentication system. It is also expensive, and difficult to adapt into a diverse environment where credentials are used across a variety of systems that are glued (or duct taped, bubble gummed, and baling wired) together in a variety of ways.
For now, passwords – or preferably passphrases – remain the way to make these heterogeneous systems authenticate and interoperate.

In the end, I learned a lot from my exchange with the user. Over the next few months, I’ll be adding additional information to our awareness program reminding users that password changes from “Password1” to “Password2” aren’t serving a real use, we’ll add additional information about tools like Password Safe to our posters and awareness materials, and I’ll be working with our identity and access management staff to see if we can leverage their tools to prevent similar poor password practices. In addition, I’ve been using it as a learning opportunity for my staff, and as a challenge for my student employee.

I’m aware that I won’t win with every user – I’ll still have the gentleman who resets his password once a day for as many days as our password history and minimum password age will allow so he can get back to his favorite password. I’ll still have the user who changes their password to “Password1!” and claims that yes, they have used a capital and a number and a symbol, and that thus they have met the requirements for a strong password. But I also know that our population continues to grow more security aware, and that many of our users do get the point.

If you’re interested in this topic, you may enjoy this Microsoft research about users, security advice, and why they choose to ignore it, and NIST’s password guidance provides a well reasoned explanation of everything from password choice to mnemonics and password guessing.
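The back-off algorithms mentioned earlier are simple to sketch. A minimal in-memory version (illustrative only; the class and parameter names are my own, and a production system would need shared state and lockout alerting):

```python
import time

class LoginThrottle:
    """Exponential back-off after failed logins, to blunt brute forcing."""

    def __init__(self, base_delay=1.0, max_delay=300.0):
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.failures = {}  # username -> (failure count, time of last failure)

    def allowed(self, user, now=None):
        """Return True if this user may attempt a login right now."""
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        if count == 0:
            return True
        # Delay doubles with each consecutive failure, capped at max_delay.
        delay = min(self.base_delay * (2 ** (count - 1)), self.max_delay)
        return (now - last) >= delay

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user):
        self.failures.pop(user, None)
```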

Android Malware in the Android Marketplace – the dangers of free

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/nzcRAysMqeE/android-malware-in-android-marketplace.html

Android Police today reports that 21 applications in the Android Market (which have since been pulled), with between 50,000 and 200,000 downloads, included malware with capabilities including the rageagainstthecage or exploid root exploits, and that they upload data including “product ID, model, partner (provider?), language, country, and userI”. Worse, their analysis shows the ability to self-update.

Most of these apps appear to have been copies of existing apps, made available for free. This points out both the danger of the relatively open Android Market and of uncontrolled app downloads for your users.

The original article is worth a read, and includes a list of the malware-laden apps.

Blackhat, ATMs, and Money Fountains, Oh My!

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/2DV_lbEy4-Y/blackhat-atms-and-money-fountains-oh-my.html

Security blogs and websites are all buzzing with the news of Barnaby Jack’s Blackhat demonstration of ATM insecurity. Wired has coverage, our favorite security monkey has a video, and others including Tony Bradley from PC World cover the important lessons from the talk.

So does the hack tell us something truly new? I don’t really think so. For years, many ATMs have been poorly secured embedded systems, often running commodity operating systems that rely more on the physical security provided by locked boxes than on heavily secured operating systems with appropriate security controls. I’ve written about the insecurity of some ATM uplinks before, and accessing their network connection is often very simple in public locations.

What the exploit does do is point out vulnerabilities in the specific ATMs, both of which were running Windows CE. It also serves as a reminder that any operating system that can be remotely accessed, or that allows its filesystem to be written to, or USB devices to be mounted, is vulnerable. Since many ATMs run Windows XP, or even Windows NT, they make attractive targets for those who have pre-written malware that works on Windows systems.

It should also remind us to review which devices we rely on that have embedded PC platforms in them. Windows CE, NT, XP, and various flavors of Linux appear throughout our IT infrastructure, and while we’re used to locking down network access, embedded devices often don’t provide strong local security. I’ve run into everything from AV controllers and music players to embedded systems running animal feeding systems for research. Most of the time, my only ability to secure them is to lock them away, limit access to the room they live in, and to ensure that they’re on a secured network.

How do you secure your embedded systems? Have you gone so far as to modify appliances that manufacturers don’t want changed?
