Tag Archives: sysadmins

Fedora 26 released

Post Syndicated from corbet original https://lwn.net/Articles/727539/rss

The Fedora 26 release is out. “First, of course, we have thousands of improvements from the various upstream software we integrate, including new development tools like GCC 7, Golang 1.8, and Python 3.6. We’ve added a new partitioning tool to Anaconda (the Fedora installer) — the existing workflow is great for non-experts, but this option will be appreciated by enthusiasts and sysadmins who like to build up their storage scheme from basic building blocks. F26 also has many under-the-hood improvements, like better caching of user and group info and better handling of debug information. And the DNF package manager is at a new major version (2.5), bringing many new features.” More details can be found in the release notes.

Milestone: 100 Million Certificates Issued

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/06/28/hundred-million-certs.html

Let’s Encrypt has reached a milestone: we’ve now issued more than 100,000,000 certificates. This number reflects at least a few things:

First, it illustrates the strong demand for our services. We’d like to thank all of the sysadmins, web developers, and everyone else managing servers for prioritizing protecting your visitors with HTTPS.

Second, it illustrates our ability to scale. I’m incredibly proud of the work our engineering teams have done to make this volume of issuance possible. I’m also very grateful to our operational partners, including IdenTrust, Akamai, and Sumo Logic.

Third, it illustrates the power of automated certificate management. If getting and managing certificates from Let’s Encrypt always required manual steps there is simply no way we’d be able to serve as many sites as we do. We’d like to thank our community for creating a wide range of clients for automating certificate issuance and management.

The total number of certificates we’ve issued is an interesting number, but it doesn’t reflect much about tangible progress towards our primary goal: a 100% HTTPS Web. To understand that progress we need to look at this graph:

Percentage of HTTPS Page Loads in Firefox.

When Let’s Encrypt’s service first became available, less than 40% of page loads on the Web used HTTPS. It took the Web 20 years to get to that point. In the 19 months since we launched, encrypted page loads have gone up by 18%, to nearly 58%. That’s an incredible rate of change for the Web. Contributing to this trend is what we’re most proud of.

If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.

Here’s to the next 100,000,000 certificates, and a more secure and privacy-respecting Web for everyone!

Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on “cybersecurity”. The first draft, from his first weeks in power, was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I’m just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there’s probably all sorts of politics in the background that I’m missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it’s hard to see if they’ll have the nerve to implement this in practice. When the next breach happens, we’ll see if anybody gets fired.
“antiquated and difficult to defend Information Technology”

The government uses laughably old computers sometimes. Forces in government want to upgrade them. This won’t work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.
“Legacy” is a problem that money can’t solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now is big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don’t know how to build big systems. This will suck down any amount of budget you have with failed multi-million dollar projects.
It’s not the antiquated systems that are usually the problem, but more modern systems. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budget. Instead of doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST framework
Probably the biggest thing in the EO is that it forces everyone to use the NIST cybersecurity framework.
The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.
There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don’t do the things well.
Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards, so the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules and have been fighting NIST over them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see firsthand that they are doing intrusion-detection wrong. But the NIST Framework recommends it because many organizations do it; it doesn’t demand they do it well.
When this EO forces everyone to follow the NIST Framework, then, it’s likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That’s not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be long-term benefit in aligning everyone to think about the problem the same way.
Note that “following” the NIST Framework doesn’t mean “doing” everything. Instead, it means documenting how you do each thing, giving a reason why you aren’t doing it, or (most often) describing your plan to eventually do it.
preference for shared IT services for email, cloud, and cybersecurity
Different departments are hostile toward each other, with each doing things their own way. Obviously, the thinking goes, if more departments shared resources, they could cut costs with economies of scale. Also obviously, it’ll stop the many home-grown wrong solutions that individual departments come up with.
In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.
But it won’t turn out this way. Government does not have “economies of scale” but “incompetence at scale”. It means a single GMail-like service that is expensive, unreliable, and, in the end, probably insecure. It means we can look forward to government breaches that, instead of affecting one department, affect all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring is the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.
“identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities”
What this means is “how can we help secure the power grid?”.
What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IoCs (“indicators of compromise”) of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IoCs out as signatures in their “Einstein” intrusion-detection system located at many power grid locations. The next person that logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories by the Washington Post about how the Russians hacked our power grid.
The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IoCs).
Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love: mass outages across the entire Internet caused by a million machines.

But frankly, botnets don’t even make the top 10 list of problems they should be addressing. Number one is clearly “phishing” — you know, the attack that’s been getting into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame as what probably led to that huge OPM hack. Replace the entire Executive Order with “stop phishing”, and you’d go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires a rethink of how the government does email and how desktop systems should be managed. So the government avoids complex problems it can’t understand to focus on the simple things it can — botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there’s really been no attack on the American grid. After the Russian attacks against the Ukrainian power grid, the issue is heating up.

Nationwide attacks aren’t really a threat in America yet. We have 10,000 different companies involved with different systems throughout the country. Trying to hack them all at once is unlikely. What’s funny is that it’s the government’s attempts to standardize everything that are likely to be our downfall, such as sticking Einstein sensors everywhere.

Instead of trying to make the grid unhackable, they should be trying to lessen our reliance upon the grid. They should be encouraging things like Tesla PowerWalls, solar panels on roofs, backup generators, and so on. Indeed, rather than worrying about an industrial-system blackout, industrial backup power generation should be considered as a source of grid backup. Factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So “supply chain” cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as the recent article worrying about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It’s impractical for a vendor to secure a product, because it’s built from so many components the vendor doesn’t understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power because things are going well in China. If there’s a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It’s hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders’ weird sex fetishes and offshore accounts isn’t going to work either.

Also, deterrence relies upon “attribution”, which is hard. While news stories claim last year’s expulsion of Russian diplomats was due to election hacking, that wasn’t the stated reason. Instead, the claimed reason was Russia’s interference with diplomats in Europe, such as breaking into diplomats’ homes and pooping on their dining room tables. We know it’s them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation states ignores the reality that much of the hacking against our government comes from non-state actors. It’s not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while they were traveling in Spain. We can’t get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, “deterrence” is one of those real-world concepts that is hard to shoehorn into a cyber (“cyber-deterrence”) equivalent. It encourages lots of bad thinking, such as export controls on “cyber-weapons” to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows, whose sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists on making the organization buy a third-party product instead that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it as a technical expert and a government outsider.

My guess is that the most lasting, consequential thing will be making everyone follow the NIST Framework, and the rest will just be a lot of aspirational stuff that’ll be ignored.

Shadow Brokers, or the hottest security product to buy in 2018

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/04/shadow-brokers-or-hottest-security.html

For the past three years and change, the security industry has been mesmerized by a steady trickle of leaks that expose some of the offensive tooling belonging to the Western world’s foremost intelligence agencies. To some folks, the leaks are a devastating blow to national security; to others, they are a chilling peek at the inner workings of an intrusive security apparatus that could be used to attack political enemies within.

I find it difficult to get outraged at revelations such as the compromise of some of the banking exchanges in the Middle East, presumably to track the sources of funding for some of our sworn enemies; at the same time, I’m none too pleased about the reports of the agencies tapping overseas fiber cables of US companies, or indiscriminately hacking university e-mail servers in Europe to provide cover for subsequent C&C ops. Still, many words have been written on the topic, so it is not a debate I am hoping to settle here; my only thought is that if we see espionage as a legitimate task for a nation state, then the revelations seem like a natural extension of what we know about this trade from pre-Internet days. Conversely, if we think that spying is evil, we probably ought to rethink geopolitics in a more fundamental way; until then, there’s no use complaining that the NSA is keeping a bunch of 0-days at hand.

But in a more pragmatic sense, there is one consequence of the leaks that I worry about: the inevitable shifts in IT policies and the next crop of commercial tools and services meant to counter this supposedly new threat. I fear this outcome because I think that the core exploitation capabilities of the agencies – at least to the extent exposed by the leaks – are not vastly different from those of a talented teenager: somewhat disappointingly, the intelligence community accomplishes their goals chiefly by relying on public data sources, the attacks on unpatched or poorly configured systems, and the fallibility of human beings. In fact, some of the exploits exposed in the leaks were probably not developed in-house, but purchased through intermediaries from talented hobbyists – a black market that has been thriving over the past decade or so.

Of course, the NSA is a unique “adversary” in many other ways, but there is no alien technology to reckon with; and by constantly re-framing the conversation around IT security as a response to some new enemy, we tend to forget that the underlying problems that enable such hacking have been with us since the 1990s, that they are not unique to this actor, and that they have not been truly solved by any of the previous tooling and IT spending shifts.

I think that it is useful to compare computer spies to another, far better understood actor: the law enforcement community. In particular:

  1. Both the intelligence agencies and law enforcement are very patient and systematic in their pursuits. If they want to get to you but can’t do so directly, they can always convince, coerce, or compromise your friends, your sysadmins – or heck, just tamper with your supply chain.

  2. Both kinds of actors operate under the protection of the law – which means that they are taking relatively few risks in going after you, can refine their approaches over the years, and can be quite brazen in their plans. They prefer to hack you remotely, of course – but if they can’t, they might just as well break into your home or office, or plant a mole within your org.

  3. Both have nearly unlimited resources. You probably can’t outspend them and they can always source a wide range of tools to further their goals, operating more like a well-oiled machine than a merry band of hobbyists. But it is also easy to understand their goals, and for most people, the best survival strategy is not to invite their undivided attention in the first place.

Once you make yourself interesting enough to be in the crosshairs, the game changes in a pretty spectacular way, and the steps to take might have to come from the playbooks of rebels holed up in the mountains of Pakistan more than from a glossy folder of Cyberintellics Inc. There are no simple, low-cost solutions: you will find no click-and-play security product to help you, and there is no “one weird trick” to keep you safe; taping over your camera or putting your phone in the microwave won’t save the day.

And ultimately, let’s face it: if you’re scrambling to lock down your Internet-exposed SMB servers in response to the most recent revelations from Shadow Brokers, you are probably in deep trouble – and it’s not because of the NSA.

Tips on Winning the ecommerce Game

Post Syndicated from Sarah Wilson original https://www.anchor.com.au/blog/2017/02/tips-ecommerce-hosting-game/

The ecommerce world is constantly changing and evolving, which is exactly why you must keep on top of the game. Arguably, choosing a reliable host is the most important decision an eCommerce business makes, which is why we have noted five major reasons a quality hosting provider is vital.

High Availability

The most important thing to think about when choosing a host and your infrastructure is “How much is it going to cost me when my site goes down?”
If your site is down, especially over a long period of time, you could be losing customers and profits. One way to minimise this is to create a highly available environment in the cloud. This means that there is a ‘redundancy’ plan in place to minimise the chances of your site being offline for even a minute.

SEO Ranking

Having a good SEO ranking isn’t purely based on your content. If your site is extremely slow to load, or doesn’t load at all, the ‘secret Google bots’ will push your site further and further down the results page. We recommend using a CDN (Content Delivery Network) such as Cloudflare to help improve performance.

Security

This may seem like a fairly obvious concern, but making sure you have regular security updates and patches is vital, especially if credit cards or money transfers are involved on your site. Obviously there is no one way to combat every security concern on the internet; however, making sure you have regular backups and 24/7 support will help in any situation.

Scalability 

What happens when you have a sale or run an advertising campaign and suddenly have a flurry of traffic to your site? In order for your site to cope with the new influx, it needs to be scalable. A good hosting provider can make your site scalable so that there is no downtime when your site is hit with a heavy traffic load. Generally, the best direction to follow when scalability is a priority is the cloud or Amazon Web Services. The best part is that not only do you pay only for what you use, but hosting on the Amazon infrastructure also gives you an SLA (Service Level Agreement) with a 99.95% uptime guarantee.
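
To make the “pay for what you use” point concrete, here is a minimal sketch, using the AWS SDK for Python (boto3), of the kind of target-tracking scaling policy that absorbs a sale-day traffic spike; the group name and CPU target are placeholder assumptions, not details of any real Anchor setup.

```python
import boto3

# Hypothetical Auto Scaling group name; substitute your own.
ASG_NAME = "ecommerce-web-asg"

autoscaling = boto3.client("autoscaling")

# Track average CPU across the group and add or remove instances to hold it
# near 50%, so a traffic spike scales the fleet out and a quiet period
# scales it back in (and stops billing for the extra capacity).
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="keep-cpu-around-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```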

Stress-Free Support

Finally, a good hosting provider will take away any stress that is related to hosting. If your site goes down at 3am, you don’t want to be the person having to deal with it. At Anchor, we have a team of expert Sysadmins available 24/7 to take the stress out of keeping your site up and online.

With these 5 points in mind, you can now make 2017 your year, and beat the game that is eCommerce.

If you have security concerns, are experiencing slow page loads, or are even seeing downtime, we can perform a free ecommerce site assessment to help define a hosting roadmap that will allow you to speed ahead of the competition. If you would simply like to learn more about eCommerce hosting on Anchor’s award-winning hosting network, simply contact our friendly staff, who will get back to you ASAP.

The post Tips on Winning the ecommerce Game appeared first on AWS Managed Services by Anchor.

That "Commission on Enhancing Cybersecurity" is absurd

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/12/that-commission-on-enhancing.html

An Obama commission has published a report on how to “Enhance Cybersecurity”. It’s promoted as having been written by neutral, bipartisan, technical experts. Instead, it’s almost entirely dominated by special interests and the Democrat politics of the outgoing administration.

In this post, I’m going through a random list of some of the 53 “action items” proposed by the document. I show how they are policy issues, not technical issues. Indeed, much of the time the technical details are warped to conform to special interests.

IoT passwords

The recommendations include such things as Action Item 2.1.4:

Initial best practices should include requirements to mandate that IoT devices be rendered unusable until users first change default usernames and passwords. 

This recommendation for changing default passwords is repeated many times. It comes from the way the Mirai worm exploits devices by using hardcoded/default passwords.

But this is a misunderstanding of how these devices work. Take, for example, the infamous Xiongmai camera. It has user accounts on the web server to control the camera. If the user forgets the password, the camera can be reset to factory defaults by pressing a button on the outside of the camera.

But here’s the deal with security cameras. They are placed at remote sites miles away, up on the second story where people can’t mess with them. In order to reset them, you need to put a ladder in your truck and drive 30 minutes out to the site, then climb the ladder (an inherently dangerous activity). Therefore, Xiongmai provides a RESET.EXE utility for remotely resetting them. That utility happens to connect via Telnet using a hardcoded password.

The above report misunderstands what’s going on here. It sees Telnet and a hardcoded password, and makes assumptions. Some people assume that this is the normal user account — it’s not, it’s unrelated to the user accounts on the web server portion of the device. Requiring the user to change the password on the web service would have no effect on the Telnet service. Other people assume the Telnet service is accidental, and that good security hygiene would remove it. Instead, it’s an intended feature of the product, used to remotely reset the device. Fixing the “password” issue as described in the above recommendations would simply mean the manufacturer would create a different, custom backdoor that hackers would eventually reverse engineer, creating a MiraiV2 botnet. Instead of security guides banning backdoors, they need to come up with a standard for remote reset.
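
To make the distinction concrete, here is a minimal sketch (Python, illustration only) of the kind of check a defender might run against cameras they own: it probes the Telnet service directly, which a “change your web password” rule never touches. The address and credential pair are placeholders, not Xiongmai’s actual values.

```python
import socket

def telnet_accepts(host, username, password, port=23, timeout=5):
    """Return True if the device's Telnet service appears to accept the
    given credentials. Crude: it just pushes the strings and looks for a
    shell prompt, which is enough for a quick audit of your own gear."""
    try:
        s = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False                               # port closed or filtered
    try:
        s.settimeout(timeout)
        s.recv(1024)                               # login banner
        s.sendall(username.encode() + b"\r\n")
        s.recv(1024)                               # "Password:" prompt
        s.sendall(password.encode() + b"\r\n")
        reply = s.recv(1024)
        # A '#' or '$' in the reply usually means we landed in a shell.
        return b"#" in reply or b"$" in reply
    except OSError:
        return False
    finally:
        s.close()

# Placeholder address and credentials; audit only devices you own.
if telnet_accepts("192.0.2.10", "root", "factory-default"):
    print("Telnet backdoor still answers - web password changes won't help")
```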

That characterization of Mirai as an IoT botnet is wrong. Mirai is a botnet of security cameras. Security cameras are fundamentally different from IoT devices like toasters and fridges because they are often exposed to the public Internet. To stream video on your phone from your security camera, you need a port open on the Internet. Non-camera IoT devices, however, are overwhelmingly protected by a firewall, with no exposure to the public Internet. While you can create a botnet of Internet cameras, you cannot create a botnet of Internet toasters.

The point I’m trying to demonstrate here is that the above report was written by policy folks with little grasp of the technical details of what’s going on. They use Mirai to justify several of their “Action Items”, none of which actually apply to the technical details of Mirai. It has little to do with IoT, passwords, or hygiene.

Public-private partnerships

Action Item 1.2.1: The President should create, through executive order, the National Cybersecurity Private–Public Program (NCP 3 ) as a forum for addressing cybersecurity issues through a high-level, joint public–private collaboration.

We’ve had public-private partnerships to secure cyberspace for over 20 years, such as the FBI InfraGard partnership. President Clinton had a plan in 1998 to create a public-private partnership to address cyber vulnerabilities. President Bush declared public-private partnerships the “cornerstone” of his 2003 plan to secure cyberspace.

Here we are 20 years later, and this document is full of new, naive proposals for public-private partnerships. There’s no analysis of why they have failed in the past, or a discussion of which ones have succeeded.

The many calls for public-private programs reflect the left-wing nature of this supposed “bipartisan” document, which sees government as a paternalistic entity that can help. The right-wing doesn’t believe the government provides any value in these partnerships. In my 20 years of experience with government public-private partnerships in cybersecurity, I’ve found them to be a time waster at best and, at worst, a way to coerce “voluntary measures” out of companies that hurt the public’s interest.

Build a wall and make China pay for it

Action Item 1.3.1: The next Administration should require that all Internet-based federal government services provided directly to citizens require the use of appropriately strong authentication.

This would cost at least $100 per person, for 300 million people, or $30 billion. In other words, it’ll cost more than Trump’s wall with Mexico.

Hardware tokens are cheap. Blizzard (a popular gaming company) must deal with widespread account hacking from “gold sellers”, and provides second factor authentication to its gamers for $6 each. But that ignores the enormous support costs involved. How does a person prove their identity to the government in order to get such a token? To replace a lost token? When old tokens break? What happens if somebody’s token is stolen?
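
For a sense of how cheap the cryptographic part of a second factor actually is, here is a minimal sketch of the TOTP algorithm (RFC 6238) that $6 tokens and phone apps implement; the expensive part is everything around it (enrollment, identity proofing, replacing lost tokens), not the math. The base32 secret shown is made up.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238 / 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a made-up base32 secret (the same value you'd load into a token).
print(totp("JBSWY3DPEHPK3PXP"))
```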

And that’s the best case scenario. Other options, like using cellphones as a second factor, are non-starters.

This is actually not a bad recommendation, as far as government services are involved, but it ignores the costs and difficulties involved.

But then the recommendations go on to suggest this for private sector as well:

Specifically, private-sector organizations, including top online retailers, large health insurers, social media companies, and major financial institutions, should use strong authentication solutions as the default for major online applications.

No, no, no. There is no reason for a “top online retailer” to know your identity. I lie about my identity. Amazon.com thinks my name is “Edward Williams”, for example.

They get worse with:

Action Item 1.3.3: The government should serve as a source to validate identity attributes to address online identity challenges.

In other words, they are advocating a cyber-dystopic police-state wet-dream where the government controls everyone’s identity. We already see how this fails with Facebook’s “real name” policy, where everyone from political activists in other countries to LGBTQ in this country get harassed for revealing their real names.

Anonymity and pseudonymity are precious rights on the Internet that we now enjoy — rights endangered by the radical policies in this document. This document frequently claims to promote security “while protecting privacy”. But the government doesn’t protect privacy — much of what we want from cybersecurity is to protect our privacy from government intrusion. This is nothing new, you’ve heard this privacy debate before. What I’m trying to show here is that the one-side view of privacy in this document demonstrates how it’s dominated by special interests.

Cybersecurity Framework

Action Item 1.4.2: All federal agencies should be required to use the Cybersecurity Framework. 

The “Cybersecurity Framework” is a bunch of nonsense that would require another long blogpost to debunk. It requires months of training and years of experience to understand. It contains things like “DE.CM-4: Malicious code is detected”, as if that’s a thing organizations are able to do.

All the while it ignores the most common cyber attacks (SQL/web injections, phishing, password reuse, DDoS). It’s a typical example where organizations spend enormous amounts of money following process while getting no closer to solving what the processes are attempting to solve. Federal agencies using the Cybersecurity Framework are no safer from my pentests than those who don’t use it.

It gets even crazier:

Action Item 1.5.1: The National Institute of Standards and Technology (NIST) should expand its support of SMBs in using the Cybersecurity Framework and should assess its cost-effectiveness specifically for SMBs.

Small businesses can’t even afford to read the “Cybersecurity Framework”. Simply reading the doc and trying to understand it would exceed their entire IT/computer budget for the year. It would take a high-priced consultant earning $500/hour to tell them that “DE.CM-4: Malicious code is detected” means “buy antivirus and keep it up to date”.

Software liability is a hoax invented by the Chinese to make our IoT less competitive

Action Item 2.1.3: The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days. 

For over a decade, leftists in the cybersecurity industry have been pushing the concept of “software liability”. Every time there is a major new development in hacking, such as the worms around 2003, they come out with documents explaining why there’s a “market failure” and that we need liability to punish companies to fix the problem. Then the problem is fixed, without software liability, and the leftists wait for some new development to push the theory yet again.

It’s especially absurd for the IoT marketspace. The harm, as they imagine, is DDoS. But the majority of devices in Mirai were sold by non-US companies to non-US customers. There’s no way US regulations can stop that.

What US regulations will stop is IoT innovation in the United States. Regulations are so burdensome, and liability lawsuits so punishing, that they will kill all innovation within the United States. If you want to get rich with a clever IoT Kickstarter project, forget about it: your entire development budget will go to cybersecurity. The only companies that will be able to afford to ship IoT products in the United States will be large industrial concerns, like GE, that can afford the overhead of regulation/liability.

Liability is a left-wing policy issue, not one supported by technical analysis. Software liability has proven to be immaterial in any past problem and current proponents are distorting the IoT market to promote it now.

Cybersecurity workforce

Action Item 4.1.1: The next President should initiate a national cybersecurity workforce program to train 100,000 new cybersecurity practitioners by 2020. 

The problem in our industry isn’t the lack of “cybersecurity practitioners”, but the overabundance of “insecurity practitioners”.

Take “SQL injection” as an example. It’s been the most common way hackers break into websites for 15 years. It happens because programmers, those building web-apps, blindly paste input into SQL queries. They do that because they’ve been trained to do it that way. All the textbooks on how to build webapps teach them this. All the examples show them this.
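
A minimal sketch of the difference, using Python’s built-in sqlite3 module (any parameterized database API makes the same point): the first query is what the textbook string-pasting habit produces, the second is what programmers should be taught from day one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "x' OR '1'='1"   # classic injection payload

# Vulnerable: the input is pasted straight into the SQL text, so the
# attacker's quotes rewrite the query and every row comes back.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("string concatenation:", vulnerable)   # both users leak

# Safe: the ? placeholder keeps the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query: ", safe)         # no rows match
```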

So you have government programs on one hand pushing tech education, teaching kids to build web-apps with SQL injection. Then you propose to train a second group of people to fix the broken stuff the first group produced.

The solution to SQL/website injections is not more practitioners, but stopping programmers from creating the problems in the first place. The solution to phishing is to use the tools already built into Windows and networks that sysadmins use, not adding new products/practitioners. These are the two most common problems, and they happen not because of a lack of cybersecurity practitioners, but because the lack of cybersecurity as part of normal IT/computers.

I point this out to demonstrate yet again that the document was written by policy people with little or no technical understanding of the problem.

Nutritional label

Action Item 3.1.1: To improve consumers’ purchasing decisions, an independent organization should develop the equivalent of a cybersecurity “nutritional label” for technology products and services—ideally linked to a rating system of understandable, impartial, third-party assessment that consumers will intuitively trust and understand. 

This can’t be done. Grab some IoT devices, like my thermostat, my car, or a Xiongmai security camera used in the Mirai botnet. These devices are so complex that no “nutritional label” can be made from them.

One of the things you’d like to know is all the software dependencies, so that if there’s a bug in OpenSSL, for example, then you know your device is vulnerable. Unfortunately, that requires a nutritional label with 10,000 items on it.
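
As a small illustration of why the list balloons, here is a sketch that enumerates just the Python packages installed in one environment and their declared requirements; a real device’s label would need the same treatment for every firmware blob, C library, and vendored component.

```python
from importlib import metadata

# Enumerate every installed Python distribution and what it depends on.
# Even a modest environment yields a surprisingly long "label".
total = 0
for dist in metadata.distributions():
    name = dist.metadata["Name"]
    version = dist.version
    requires = dist.requires or []
    total += 1 + len(requires)
    print(f"{name} {version}")
    for req in requires:
        print(f"    requires: {req}")

print(f"{total} label entries, and that's one language runtime on one box")
```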

Or, one thing you’d want to know is that the device has no backdoor passwords. But that would miss the Xiongmai devices. The web service has no backdoor passwords. If you caught the Telnet backdoor password and removed it, then you’d miss the special secret backdoor that hackers would later reverse engineer.

This is a policy position chasing a non-existent technical issue, pushed by Pieter Zatko, who has gotten hundreds of thousands of dollars from government grants to push the issue. It’s his way of getting rich and has nothing to do with sound policy.

Cyberczars and ambassadors

Various recommendations call for the appointment of various CISOs, Assistant to the President for Cybersecurity, and an Ambassador for Cybersecurity. But nowhere does it mention these should be technical posts. This is like appointing a Surgeon General who is not a doctor.

Government’s problems with cybersecurity stem from the way technical knowledge is so disrespected. The current cyberczar prides himself on his lack of technical knowledge, because that helps him see the bigger picture.

Ironically, many of the other Action Items are about training cybersecurity practitioners, employees, and managers. None of this can happen as long as leadership is clueless. Technical details matter, as I show above with the Mirai botnet. Subtlety and nuance in technical details can call for opposite policy responses.

Conclusion

This document is promoted as being written by technical experts. However, nothing in the document is neutral technical expertise. Instead, it’s almost entirely a policy document dominated by special interests and left-wing politics. In many places it makes recommendations to the incoming Republican president. His response should be to round-file it immediately.

I only chose a few items, as this blogpost is long enough as it is. I could pick almost any of the 53 Action Items to demonstrate how they are policy- and special-interest-driven rather than reflecting technical expertise.

EQGRP tools are post-exploitation

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/08/eqgrp-tools-are-post-exploitation.html

A recent leak exposed hacking tools from the “Equation Group”, a group likely related to the NSA TAO (the NSA/DoD hacking group). I thought I’d write up some comments.

Despite the existence of 0days, these tools seem to be overwhelmingly post-exploitation. They aren’t the sorts of tools you use to break into a network — but the sorts of tools you use afterwards.

The focus of the tools appears to be hacking into network equipment, installing implants, achieving permanence, and using the equipment to sniff network traffic.

Different pentesters have different ways of doing things once they’ve gotten inside a network, and this is reflected in their toolkits. Some focus on Windows and getting domain admin control, and have tools like mimikatz. Others focus on webapps, and how to install hostile PHP scripts. In this case, these tools reflect a methodology that goes after network equipment.

It’s a good strategy. Finding equipment is easy and undetectable: just do a traceroute. As long as network equipment isn’t causing problems, sysadmins ignore it, so your implants are unlikely to be detected. Internal network equipment is rarely patched, so old exploits are still likely to work. Some tools appear to target bugs in equipment that are likely older than the Equation Group itself.
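
As a rough sketch of that reconnaissance step (assuming the scapy library, not anything from the leaked toolkit), walking the TTL outward lists every router between you and a target:

```python
from scapy.all import IP, ICMP, sr1   # scapy is an assumed dependency

def map_path(target, max_ttl=20):
    """Walk TTLs outward and print each router that answers, i.e. the
    network equipment sitting between you and the target."""
    for ttl in range(1, max_ttl + 1):
        reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")
        elif reply.type == 11:                   # ICMP time-exceeded: a hop
            print(f"{ttl:2d}  {reply.src}")
        else:                                    # reached the target itself
            print(f"{ttl:2d}  {reply.src}  (destination)")
            break

map_path("192.0.2.1")   # placeholder address; only trace hosts you may probe
```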

In particular, because network equipment is at the network center instead of the edges, you can reach out and sniff packets through the equipment. Half the time it’s a feature of the network equipment, so no special implant is needed. Conversely, when on the edge of the network, switches often prevent you from sniffing packets, and even if you exploit the switch (e.g. ARP flood), all you get are nearby machines. Getting critical machines from across the network requires remotely hacking network devices.

So you see a group of pentest-type people (TAO hackers) with a consistent methodology, and toolmakers who develop and refine tools for them. Tool development is a rare thing among pentesters — they use tools, they don’t develop them. Having programmers on staff dramatically changes the nature of pentesting.

Consider the program xml2pcap. I don’t know what it does, but it looks like similar tools I’ve written in my own pentests. Various network devices will allow you to sniff packets, but produce output in custom formats. Therefore, you need to write a quick-and-dirty tool that converts from that weird format back into the standard pcap format for use with tools like Wireshark. More than once I’ve had to convert HTML/XML output to pcap. Setting port filters for FTP (21) and Telnet (23) produces low-bandwidth traffic with high return (admin passwords) within networks — all you need is a script that can convert the packets into standard format to exploit this.
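
I can only guess at what xml2pcap itself does, but a quick-and-dirty converter of the general kind described tends to look like this sketch: pull hex packet dumps out of whatever the device emits and wrap them in the classic libpcap file format so Wireshark can open them. The XML layout here is invented for illustration.

```python
import binascii, struct, time
import xml.etree.ElementTree as ET

def write_pcap(path, raw_frames, linktype=1):
    """Write raw Ethernet frames into a classic pcap file readable by
    Wireshark/tcpdump. linktype 1 = Ethernet."""
    with open(path, "wb") as f:
        # Global header: magic, version 2.4, tz offset, sigfigs, snaplen, linktype
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, linktype))
        for frame in raw_frames:
            ts = time.time()
            sec, usec = int(ts), int((ts % 1) * 1_000_000)
            # Per-packet header: ts_sec, ts_usec, captured length, original length
            f.write(struct.pack("<IIII", sec, usec, len(frame), len(frame)))
            f.write(frame)

def frames_from_device_xml(xml_text):
    """Pull hex dumps out of a hypothetical <packet>...</packet> layout;
    real devices each have their own weird format."""
    root = ET.fromstring(xml_text)
    return [binascii.unhexlify(p.text.strip()) for p in root.iter("packet")]

sample = "<capture><packet>ffffffffffff0011223344550800</packet></capture>"
write_pcap("converted.pcap", frames_from_device_xml(sample))
```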

Also consider the tftpd tool in the dump. Many network devices support that protocol for updating firmware and configuration. That’s pretty much all it’s used for. This points to a defensive security strategy for your organization: log all TFTP traffic.
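
A minimal sketch of that defensive idea, assuming the scapy library is available: anything on UDP port 69 inside your network deserves a log line, because legitimate TFTP to your routers should be rare and predictable.

```python
from scapy.all import sniff, IP   # scapy is an assumed dependency

def log_tftp(pkt):
    # Every TFTP transfer starts with a request to UDP/69; that packet
    # alone tells you who is pushing or pulling firmware and configs.
    print(f"TFTP: {pkt[IP].src} -> {pkt[IP].dst} len={len(pkt)}")

# The BPF filter keeps the kernel doing the heavy lifting; needs root/CAP_NET_RAW.
sniff(filter="udp port 69", prn=log_tftp, store=False)
```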

The same applies to SNMP. By the way, SNMP vulnerabilities in network equipment are still low-hanging fruit. SNMP stores thousands of configuration parameters and statistics in a big tree, meaning that it has an enormous attack surface. Any settable, variable-length value (OCTET STRING, OBJECT IDENTIFIER) is something you can play with for buffer-overflows and format string bugs. The Cisco 0day in the toolkit was one example.

Some have pointed out that the code in the tools is crappy, and they make obvious crypto errors (such as using the same initialization vectors). This is nonsense. It’s largely pentesters, not software developers, creating these tools. And they have limited threat models — encryption is to avoid easy detection that they are exfiltrating data, not to prevent somebody from looking at the data.

From that perspective, then, this is fine code, with some effort spent at quality for tools that don’t particularly need it. I’m a professional coder, and my little scripts often suck worse than the code I see here.

Lastly, I don’t think it’s a hack of the NSA themselves. Those people are over-the-top paranoid about opsec. But 95% of the US cyber-industrial-complex is made of up companies, who are much more lax about security than the NSA itself. It’s probably one of those companies that got popped — such as an employee who went to DEFCON and accidentally left his notebook computer open on the hotel WiFi.

Conclusion

Despite the 0days, these appear to be post-exploitation tools. They look like the sort of tools pentesters might develop over years, where each time they pop a target, they do a little development based on the devices they find inside that new network in order to compromise more machines/data.

Computer Science Education Benefits from FLOSS

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/02/17/education-floss.html

I read with interest today when Linux Weekly News linked to Greg DeKoenigsberg’s response to Mark Guzdial’s ACM Blog post, The Impact of Open Source on Computing Education (which is mostly a summary of his primary argument on his personal blog). I must sadly admit that I was not terribly surprised to read such a post from an ACM-affiliated academic that speaks so negatively of FLOSS’s contribution to Computer Science education.

I mostly agree with (and won’t repeat) DeKoenigsberg’s arguments, but I
do have some additional points and anecdotal examples that may add
usefully to the debate. I have been both a student (high school,
graduate and undergraduate) and teacher (high school and TA) of Computer
Science. In both cases, software freedom was fundamental and frankly
downright essential to my education and to that of my students.

Before I outline my copious disagreements, though, I want to make
abundantly clear that I agree with one of Guzdial’s primary three
points: there is too much unfriendly and outright sexist (although
Guzdial does not use that word directly) behavior in the
FLOSS
community. This should not be ignored, and needs active attention.
Guzdial, however, is clearly underinformed about the extensive work that
many of us are doing to raise awareness and address that issue. In
software development terms: it’s a known bug, it’s been triaged, and
development on a fix is in progress. And, in true FLOSS fashion,
patches are welcome, too (i.e., get involved in a FLOSS community and
help address the problem).

However, the place where my disagreement with Guzdial begins is with the
notion that this sexism problem is unique to FLOSS. As an undergraduate Computer
Science major, it was quite clear to me that a sexist culture was
prevalent in my Computer Science department and in CS in general. This
had nothing to do with FLOSS culture, since there was no FLOSS in my
undergraduate department until I installed a few GNU/Linux
machines. (See below for details.)

Computer Science as a whole unfortunately remains heavily
male-dominated with problematic sexist overtones. It was common when I
was an undergraduate (in the early 1990s) that some of my fellow male
students would display pornography on the workstation screens without a
care about who felt unwelcome because of it. Many women complained that
they didn’t feel comfortable in the computer lab, and the issue became a
complicated and ongoing debate in our department. (We all frankly could
have used remedial sensitivity training!) In graduate school, a CS
professor said to me (completely straight-faced) that women didn’t major
in Computer Science because most women’s long term goals are to have
babies and keep house. Thus, I simply reject the notion that this
sexism and lack of acceptance of diversity is a problem unique to FLOSS
culture: it’s a CS-wide problem, AFAICT. Indeed,
the CRA’s
Taulbee Survey shows (see PDF page 10)
that only 22% of the tenure
track CS faculty in the USA and Canada are women, and only 12% of the
full professors are. In short, Guzdial’s corner of the computing world
shares this problem with mine.

Guzdial’s second point is the most offensive to the FLOSS community.
He argues that volunteerism in FLOSS sends a message that no good jobs
are available in computing. I admit that I have only anecdotal evidence
to go on (of course, Guzdial quotes no statistical data, either), but in
my experience, I know that I and many others in FLOSS have been
successfully and gainfully employed precisely because of past
volunteer work we’ve done. Ted
T’so
is fond of saying: Thanks to Linux, my hobby became my job
and my job became my hobby. My experience, while neither as
profound nor as important as Ted’s, is somewhat similar.

I downloaded a copy of GNU/Linux for the first time in 1992. I showed
it to my undergraduate faculty, and they were impressed that I had a
Unix-like system running on PC hardware, and they encouraged me to build
a computer lab with old PC’s. I spent the next three and half years as
the department’s volunteer [0] sysadmin and
occasional developer, gaining essential skills that later led me to a
lucrative career as a professional sysadmin and software developer. If
the lure of software freedom advocacy’s relative poverty hadn’t
sidetracked me, I’d surely still be on that same career path.

But that wasn’t even the first time I developed software and got
computers working as a volunteer. Indeed, every computer geek I know
was compelled to write code and do interesting things with computers
from the earliest of ages. We didn’t enter Computer Science because we
wanted to make money from it; we make a living in computing because we
love it and are driven to do it, regardless of how much we get paid for
it. I’ve observed that dedicated, smart people who are really serious
about something end up making a full-time living at that something, one
way or the other.

Frankly, there’s an undertone in Guzdial’s comments on this point that
I find disturbing. The idea of luring people to Computer Science
through job availability is insidious. I was an undergraduate student
right before the upward curve in CS majors, and a graduate student during the plateau (see PDF page 4 of the Taulbee Survey for graphs). As an undergraduate, I
saw the very beginnings of people majoring in Computer Science
“for the money”, and as a graduate student, I was surrounded
by these sorts of undergraduates. Ultimately, I don’t think our field
is better off for having such people in it. Software is best when it’s
designed and written by people who live to make it better
— people who really hate to go to bed with a bug still open. I
must constantly resist the urge to fix any given broken piece of
software in front of me lest I lose focus on my primary task of the
moment. Every good developer I’ve met has the same urge. In my
experience, when you see software developed by someone who doesn’t have
this drive, you see clearly that it’s (at best) substandard, and
(usually) pure junk. That’s what we’re headed for if we encourage
students to major in Computer Science “for the money”. If
students’ passion is making money for its own sake, we should encourage
them to be investment bankers, not software developers, sysadmins, and
Computer Scientists.

Guzdial’s final point is that our community is telling newcomers
that programming is all that matters. The only evidence Guzdial
gives for this assertion is a pithy quote from Linus Torvalds. If
Guzdial actually listened to interviews that Torvalds has given, Guzdial would hear that Torvalds cares
about a lot more than just code, and spends most of his time in
natural language discussions with developers. The Linux community
doesn’t just require code; it requires code plus a well-argued
position of why the code is right for the users.

Guzdial’s primary point here, though, is that FLOSS ignores usability.
Using Torvalds and the Linux community as the example here makes little
sense, since “usability” of a kernel is about APIs for
fellow programmers. Linus’ kernel is the pinnacle of usability measured
against the userbase who interacts with it directly. If a kernel is
something non-technical users are aware of “using”, then
it’s probably not a very usable kernel.

But Guzdial’s comment isn’t really about the kernel; instead, he subtly
insults the GNOME community (and other GUI-oriented FLOSS projects).
Usability work is quite expensive, but nevertheless the GNOME community
(and others) desperately want it done and try constantly to fund it. In
fact, very recently, there has
been great
worry in the GNOME community
that Oracle’s purchase of Sun means
that various usability-related projects are losing funding. I encourage
Guzdial to get in touch with projects like the GNOME accessibility and
usability projects before he assumes that one offhand quote from Linus
defines the entire FLOSS community’s position on end-user usability.

As a final anecdote, I will briefly tell the story of my year teaching
high school. I was actively recruited (again, yet another job I got
because of my involvement in FLOSS!) to teach a high school AP Computer
Science class while I was still in graduate school in Cincinnati. The
students built the computer lab themselves from scratch, which one
student still claims is one of his proudest accomplishments. I had
planned to teach only
‘A’ topics, but the students were so excited to learn, we
ended up doing the whole ‘AB’ course. All but two of the
approximately twenty students took the AP exam. All who took it at
least passed, while most excelled. Many of them now have fruitful
careers in computing and other sciences.

I realize this is one class of students in one high school. But that’s
somewhat the point here. The excitement and the “do it
yourself” inspiration of the FLOSS world pushed a random group of
high school students into action to build their own lab and get the
administration to recruit a teacher for them. I got the job as their
teacher precisely because of my involvement in FLOSS. There is no
reason to believe this success story of FLOSS in education is an
aberration. More likely, Guzdial is making oversimplifications about
something he hasn’t bothered to examine fully.

Finally, I should note that Guzdial used Michael Terry’s work as a
jumping-off point for his comments. I’ve met, seen talks by, and
exchanged email with Terry and his graduate students. I admit that I
haven’t read Terry’s most recent papers, but I have read some of the
older ones and am generally familiar with his work. I was thus not
surprised to find that Terry clarified that his position differs from
Guzdial’s, in particular noting that he and his group found that open
source developers most certainly do care about the usability of their
software, but that those developers make an error by focusing too much
on a small subset of their userbase (i.e., the loudest). I can
certainly verify that fact from the anecdotal side. Generally speaking,
I know that Terry is very concerned about FLOSS usability, and I think
that our community should work with him to see what we can learn from
his research. I have never known Terry to be dismissive of the
incredible value of FLOSS and its potential for improvement,
particularly in the area of usability. Terry’s goal, it seems to me, is
to convince and assist FLOSS developers to improve the usability of our
software, and that’s certainly a constructive goal I do support.

(BTW, I mostly used last names throughout this post because Mark,
Michael, and Greg are relatively common names and I can think of a
dozen FLOSS celebrities who have one of those first names. 🙂)

0Technically,
I was “paid” in that I was given my own office in
the department because I was willing to do the sysadmin duties.
It was nice to be the only undergraduate on campus (outside of
student government) with my own office.

More Xen Tricks

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2007/08/24/more-xen.html

In my previous post about Xen, I talked about how easy Xen is to
configure and
set up, particularly on Ubuntu and Debian. I’m still grateful that
Xen remains easy; however, I’ve lately had a few Xen-related
challenges that needed attention. In particular, I’ve needed to
create some surprisingly messy solutions when using vif-route to
route multiple IP numbers on the same network through the dom0 to a
domU.

I tend to use vif-route rather than vif-bridge, as I like the control
it gives me in the dom0. The dom0 becomes a very traditional
packet-forwarding firewall that can decide whether or not to forward
packets to each domU host. However, I recently found some deep
weirdness in IP routing when I use this approach while needing
multiple Ethernet interfaces on the domU. Here’s an example:

Multiple IP numbers for Apache

Suppose the domU host, called webserv, hosts a number of
websites, each with a different IP number, so that I have Apache
doing something like1:

Listen 192.168.0.200:80
Listen 192.168.0.201:80
Listen 192.168.0.202:80

NameVirtualHost 192.168.0.200:80
<VirtualHost 192.168.0.200:80>
    ...
</VirtualHost>

NameVirtualHost 192.168.0.201:80
<VirtualHost 192.168.0.201:80>
    ...
</VirtualHost>

NameVirtualHost 192.168.0.202:80
<VirtualHost 192.168.0.202:80>
    ...
</VirtualHost>
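
Each of those <VirtualHost> blocks then gets filled in the usual way.
Just as a sketch (the server name and paths here are placeholders, not
anything from my real configuration), one of them might look like:

<VirtualHost 192.168.0.200:80>
    ServerName www.example.org
    DocumentRoot /var/www/example.org
    ErrorLog /var/log/apache2/example.org-error.log
</VirtualHost>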

The Xen Configuration for the Interfaces

Since I’m serving all three of those sites from webserv, I
need all those IP numbers to be real, live IP numbers on the local
machine as far as the webserv is concerned. So, in
dom0:/etc/xen/webserv.cfg I list something like:

vif = [ 'mac=de:ad:be:ef:00:00, ip=192.168.0.200',
        'mac=de:ad:be:ef:00:01, ip=192.168.0.201',
        'mac=de:ad:be:ef:00:02, ip=192.168.0.202' ]

… And then make webserv:/etc/iftab look like:

eth0 mac de:ad:be:ef:00:00 arp 1
eth1 mac de:ad:be:ef:00:01 arp 1
eth2 mac de:ad:be:ef:00:02 arp 1

… And make webserv:/etc/network/interfaces (this is
probably Ubuntu/Debian-specific, BTW) look like:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.0.200
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 192.168.0.201
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 192.168.0.202
    netmask 255.255.255.0
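
One assumption worth making explicit: the dom0’s xend must be
configured for routed networking in the first place. On a typical Xen
3 setup, that amounts to enabling something like the following two
lines in dom0:/etc/xen/xend-config.sxp (and commenting out their
network-bridge / vif-bridge counterparts), then restarting xend:

(network-script network-route)
(vif-script vif-route)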

Packet Forwarding from the Dom0

But, this doesn’t get me the whole way there. My next step is to make
sure that the dom0 is routing the packets properly to
webserv. Since my dom0 is heavily locked down, all
packets are dropped by default, so I have to let through explicitly
anything I’d like webserv to be able to process. So, I
add some code to my firewall script on the dom0 that looks like:2

webIpAddresses="192.168.0.200 192.168.0.201 192.168.0.202"
UNPRIVPORTS="1024:65535"

for dport in 80 443;
do
  for sport in $UNPRIVPORTS 80 443 8080;
  do
    for ip in $webIpAddresses;
    do
      /sbin/iptables -A FORWARD -i eth0 -p tcp -d $ip \
           --syn -m state --state NEW \
           --sport $sport --dport $dport -j ACCEPT

      /sbin/iptables -A FORWARD -i eth0 -p tcp -d $ip \
           --sport $sport --dport $dport \
           -m state --state ESTABLISHED,RELATED -j ACCEPT

      /sbin/iptables -A FORWARD -o eth0 -s $ip \
           -p tcp --dport $sport --sport $dport \
           -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    done
  done
done
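
One other dom0 detail worth double-checking: none of those FORWARD
rules matter unless IP forwarding is actually enabled in the dom0
kernel. The Xen network-route script is supposed to turn it on, but
verifying costs nothing:

# should print 1; if not, enable it:
cat /proc/sys/net/ipv4/ip_forward
sysctl -w net.ipv4.ip_forward=1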

Phew! So at this point, I thought I was done. The packets should find
their way forwarded through the dom0 to the Apache instance running on
the domU, webserv. While that much was true, I now had
the additional problem that packets got lost in a bit of a black hole
on webserv. When I discovered the black hole, I quickly
realized why. It was somewhat atypical, from webserv’s
point of view, to have three “real” and different Ethernet
devices with three different IP numbers, which all talk to the exact
same network. More intelligent routing was
needed.3

Routing in the domU

While most non-sysadmins still use the route command to
set up local IP routes on a GNU/Linux host, iproute2
(available via the ip command) has been a standard part
of GNU/Linux distributions and supported by Linux for nearly ten
years. To properly support the situation of multiple (from
webserv’s point of view, at least) physical interfaces on
the same network, some special iproute2 code is needed.
Specifically, I set up separate route tables for each device. I first
encoded their names in /etc/iproute2/rt_tables (the
numbers 16-18 are arbitrary, BTW):

16 eth0-200
17 eth1-201
18 eth2-202

And here are the ip commands that I thought would work
(but didn’t, as you’ll see next):

/sbin/ip route del default via 192.168.0.1

for table in eth0-200 eth1-201 eth2-202;
do
  iface=`echo $table | perl -pe 's/^(\S+)-.*$/$1/;'`
  ipEnding=`echo $table | perl -pe 's/^.*-(\S+)$/$1/;'`
  ip=192.168.0.$ipEnding
  /sbin/ip route add 192.168.0.0/24 dev $iface table $table

  /sbin/ip route add default via 192.168.0.1 table $table
  /sbin/ip rule add from $ip table $table
  /sbin/ip rule add to 0.0.0.0 dev $iface table $table
done

/sbin/ip route add default via 192.168.0.1

The idea is that each table will use rules to force all traffic coming
in on the given IP number and/or interface to always go back out on
the same, and vice versa. The key is these two lines:

/sbin/ip rule add from $ip table $table
/sbin/ip rule add to 0.0.0.0 dev $iface table $table

The first rule says that when traffic is coming from the given IP
number, $ip, the routing rules in table $table should be used. The
second says that traffic to anywhere, when bound for interface $iface,
should use table $table.

The tables themselves are set up to always make sure the local network
traffic goes through the proper associated interface, and that the
network router (in this case, 192.168.0.1) is always
used for foreign networks, but that it is reached via the correct
interface.

This is all well and good, but it doesn’t work. Certain instructions
fail with the message, RTNETLINK answers: Network is
unreachable, because the 192.168.0.0 network cannot be found
while the instructions are running. Perhaps there is an
elegant solution; I couldn’t find one. Instead, I temporarily set
up “dummy” global routes in the main route table and
deleted them once the table-specific ones were created. Here’s the
new bash script that does that (the added lines are marked with a
trailing “# added” comment):

/sbin/ip route del default via 192.168.0.1
for table in eth0-200 eth1-201 eth2-202;
do
  iface=`echo $table | perl -pe 's/^(\S+)-.*$/$1/;'`
  ipEnding=`echo $table | perl -pe 's/^.*-(\S+)$/$1/;'`
  ip=192.168.0.$ipEnding
  /sbin/ip route add 192.168.0.0/24 dev $iface table $table

  /sbin/ip route add 192.168.0.0/24 dev $iface src $ip    # added

  /sbin/ip route add default via 192.168.0.1 table $table
  /sbin/ip rule add from $ip table $table

  /sbin/ip rule add to 0.0.0.0 dev $iface table $table

  /sbin/ip route del 192.168.0.0/24 dev $iface src $ip    # added
done
/sbin/ip route add 192.168.0.0/24 dev eth0 src 192.168.0.200    # added
/sbin/ip route add default via 192.168.0.1
/sbin/ip route del 192.168.0.0/24 dev eth0 src 192.168.0.200    # added

I am pretty sure I’m missing something here — there must be a
better way to do this, but the above actually works, even if it’s
ugly.
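
Two small diagnostic notes, for what they’re worth. First, ip rule
show and ip route show table eth0-200 (and likewise for the other
tables) are handy for confirming that the rules and per-device tables
ended up the way the script intended. Second, on kernels of this
vintage the routing cache can keep serving old decisions for a while
after the rules change, so if things still look wrong after running
the script, flushing it is worth a try:

/sbin/ip rule show
/sbin/ip route show table eth0-200
/sbin/ip route flush cache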

Alas, Only Three

There was one additional confusion I put myself through while
implementing the solution. I was actually trying to route four
separate IP addresses into webserv, but discovered that I got this
error message (found via dmesg on the domU):
netfront can’t alloc rx grant refs. A quick google
around turned up the XenFaq, which says that Xen 3 cannot handle more
than three network interfaces per domU. Seems strangely arbitrary to
me; I’d love to hear why it cuts off at three. I can imagine limits at
one and two, but it seems that once you can do three, n should be
possible (perhaps still with linear slowdown or some such). I’ll have
to ask the Xen developers (or UTSL) some day to find out what makes it
possible to have three work but not four.

1Yes, I know I could rely on client-provided Host: headers and do this
with full name-based virtual hosting, but I don’t like to do that for
good reason (as outlined in the Apache docs).

2Note that the
above firewall code must run on dom0, which has one real
Ethernet device (its eth0) that is connected properly to
the wide 192.168.0.0/24 network, and should have some IP
number of its own there — say 192.168.0.100. And,
don’t forget that dom0 is configured for vif-route, not
vif-bridge. Finally, for brevity, I’ve left out some of the
firewall code that FORWARDs through key stuff like DNS. If you are
interested in it, email me or look it up in a firewall book.
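
(To give a flavor of what that omitted code covers, the DNS piece is
nothing exotic; a rough sketch of the sort of rules meant, reusing the
webIpAddresses variable from above, is:

for ip in $webIpAddresses;
do
  /sbin/iptables -A FORWARD -o eth0 -s $ip -p udp --dport 53 -j ACCEPT
  /sbin/iptables -A FORWARD -i eth0 -d $ip -p udp --sport 53 \
       -m state --state ESTABLISHED,RELATED -j ACCEPT
done

plus the equivalent TCP rules if you expect large responses or zone
transfers.)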

3I was actually a
bit surprised at this, because I often have multiple IP numbers
serviced from the same computer and physical Ethernet interface.
However, in those cases, I use virtual interfaces
(eth0:0, eth0:1, etc.). On a normal system,
Linux does the work of properly routing the IP numbers when you attach
multiple IP numbers virtually to the same physical interface.
However, in Xen domUs, the physical interfaces are locked by Xen to
only permit specific IP numbers to come through, and while you can set
up all the virtual interfaces you want in the domU, it will only get
packets destined for the IP number specified in the vif
section of the configuration file. That’s why I added my three
different “actual” interfaces in the domU.
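
(For comparison, on a non-Xen host one of those virtual interfaces is
just another stanza in /etc/network/interfaces; roughly:

auto eth0:1
iface eth0:1 inet static
    address 192.168.0.201
    netmask 255.255.255.0

and Linux sorts out the routing among the aliases on its own, which is
exactly the convenience the Xen vif restriction takes away.)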