Tag Archives: hardware

How the IBM PC Won, Then Lost, the Personal Computer Market

Post Syndicated from James W. Cortada original https://spectrum.ieee.org/how-the-ibm-pc-won-then-lost-the-personal-computer-market

On 12 August 1981, at the Waldorf Astoria Hotel in midtown Manhattan, IBM unveiled the company’s entrant into the nascent personal computer market: the IBM PC. With that, the preeminent U.S. computer maker launched another revolution in computing, though few realized it at the time. Press coverage of the announcement was lukewarm.

Soon, though, the world began embracing little computers by the millions, with IBM dominating those sales. The personal computer vastly expanded the number of people and organizations that used computers. Other companies, including Apple and Tandy Corp., were already making personal computers, but no other machine carried the revered IBM name. IBM’s essential contributions were to position the technology as suitable for wide use and to set a technology standard. Rivals were compelled to meet a demand that they had all grossly underestimated. As such, IBM had a greater effect on the PC’s acceptance than did Apple, Compaq, Dell, and even Microsoft.

Despite this initial dominance, by 1986 the IBM PC was becoming an also-ran. And in 2005, the Chinese computer maker Lenovo Group purchased IBM’s PC business.

What occurred between IBM’s wildly successful entry into the personal computer business and its inglorious exit nearly a quarter century later? From IBM’s perspective, a new and vast market quickly turned into an ugly battleground with many rivals. The company stumbled badly, its bureaucratic approach to product development no match for a fast-moving field. Over time, it became clear that the sad story of the IBM PC mirrored the decline of the company.

At the outset, though, things looked rosy.

How the personal computer revolution was launched

IBM did not invent the desktop computer. Most historians agree that the personal computer revolution began in April 1977 at the first West Coast Computer Faire. Here, Steve Jobs introduced the Apple II, with a price tag of US $1,298 (about $5,800 today), while rival Commodore unveiled its PET. Both machines were designed for consumers, not just hobbyists or the technically skilled. In August, Tandy launched its TRS-80, which came with games. Indeed, software for these new machines was largely limited to games and a few programming tools.

Photo of Steve Jobs in front of an Apple Computer sign
Apple cofounder Steve Jobs unveiled the Apple II at the West Coast Computer Faire in April 1977.
Tom Munnecke/Getty Images

IBM’s large commercial customers faced the implications of this emerging technology: Who would maintain the equipment and its software? How secure was the data in these machines? And what was IBM’s position: Should personal computers be taken seriously or not? By 1980, customers in many industries were telling their IBM contacts to enter the fray. At IBM plants in San Diego, Endicott, N.Y., and Poughkeepsie, N.Y., engineers were forming hobby clubs to learn about the new machines.

The logical place to build a small computer was inside IBM’s General Products Division, which focused on minicomputers and the successful typewriter business. But the division had no budget or people to allocate to another machine. IBM CEO Frank T. Cary decided to fund the PC’s development out of his own budget. He turned to William “Bill” Lowe, who had given some thought to the design of such a machine. Lowe reported directly to Cary, bypassing IBM’s complex product-development bureaucracy, which had grown massively during the creation of the System/360 and S/370. The normal process to get a new product to market took four or five years, but the incipient PC market was moving too quickly for that.

Photo of IBM CEO Frank T. Cary
IBM CEO Frank T. Cary authorized a secret initiative to develop a personal computer outside of Big Blue’s product-development process.
IBM

Cary asked Lowe to come back in several months with a plan for developing a machine within a year and to find 40 people from across IBM and relocate them to Boca Raton, Fla.

Lowe’s plan for the PC called for buying existing components and software and bolting them together into a package aimed at the consumer market. There would be no homegrown operating system or IBM-made chips. The product also had to attract corporate customers, although it was unclear how many of those there would be. Mainframe salesmen could be expected to ignore or oppose the PC, so the project was kept reasonably secret.

A friend of Lowe’s, Jack Sams, was a software engineer who vaguely knew Bill Gates, and he reached out to the 24-year-old Gates to see if he had an operating system that might work for the new PC. Gates had dropped out of Harvard to get into the microcomputer business, and he ran a 31-person company called Microsoft. While he thought of programming as an intellectual exercise, Gates also had a sharp eye for business.

In July 1980, the IBMers met with Gates but were not greatly impressed, so they turned instead to Gary Kildall, president of Digital Research, the most recognized microcomputer software company at the time. Kildall then made what may have been the business error of the century. He blew off the blue-suiters so that he could fly his airplane, leaving his wife—a lawyer—to deal with them. The meeting went nowhere, with too much haggling over nondisclosure agreements, and the IBMers left. Gates was now their only option, and he took the IBMers seriously.


That August, Lowe presented his plan to Cary and the rest of the management committee at IBM headquarters in Armonk, N.Y. The idea of putting together a PC outside of IBM’s development process disturbed some committee members. The committee knew that IBM had previously failed with its own tiny machines—specifically the Datamaster and the 5110—but Lowe was offering an alternative strategy and already had Cary’s support. They approved Lowe’s plan.

Lowe negotiated terms, volumes, and delivery dates with suppliers, including Gates. To meet IBM’s deadline, Gates concluded that Microsoft could not write an operating system from scratch, so he acquired one called QDOS (“quick and dirty operating system”) that could be adapted. IBM wanted Microsoft, not the team in Boca Raton, to have responsibility for making the operating system work. That meant Microsoft retained the rights to the operating system. Microsoft paid $75,000 for QDOS. By the early 1990s, that investment had boosted the firm’s worth to $27 billion. IBM’s strategic error in not retaining rights to the operating system went far beyond that $27 billion; it meant that Microsoft would set the standards for the PC operating system. In fairness to IBM, nobody thought the PC business would become so big. Gates said later that he had been “lucky.”

Back at Boca Raton, the pieces started coming together. The team designed the new product, lined up suppliers, and were ready to introduce the IBM Personal Computer just a year after gaining the management committee’s approval. How was IBM able to do this?

Much credit goes to Philip Donald Estridge. An engineering manager known for bucking company norms, Estridge turned out to be the perfect choice to ram this project through. He wouldn’t show up at product-development review meetings or return phone calls. He made decisions quickly and told Lowe and Cary about them later. He staffed up with like-minded rebels, later nicknamed the “Dirty Dozen.” In the fall of 1980, Lowe moved on to a new job at IBM, so Estridge was now in charge. He obtained 8088 microprocessors from Intel, made sure Microsoft kept the development of DOS secret, and quashed rumors that IBM was building a system. The Boca Raton team put in long hours and built a beautiful machine.

The IBM PC was a near-instant success

The big day came on 12 August 1981. Estridge wondered if anyone would show up at the Waldorf Astoria. After all, the PC was a small product, not in IBM’s traditional space. Some 100 people crowded into the hotel. Estridge described the PC, had one there to demonstrate, and answered a few questions.

Image of an old IBM ad.
The IBM PC was aimed squarely at the business market, which compelled other computer makers to follow suit.
IBM

Meanwhile, IBM salesmen had received packets of materials the previous day. On 12 August, branch managers introduced the PC to employees and then met with customers to do the same. Salesmen weren’t given sample machines. Along with their customers, they collectively scratched their heads, wondering how they could use the new computer. For most customers and IBMers, it was a new world.

Nobody predicted what would happen next. The first shipments began in October 1981, and in its first year, the IBM PC generated $1 billion in revenue, far exceeding company projections. IBM’s original manufacturing forecasts called for 1 million machines over three years, with 200,000 the first year. In reality, customers were buying 200,000 PCs per month by the second year.

Those who ordered the first PCs got what looked to be something pretty clever. It could run various software packages and a nice collection of commercial and consumer tools, including the accessible BASIC programming language. Whimsical ads for the PC starred Charlie Chaplin’s Little Tramp and carried the tag line “A Tool for Modern Times.” People could buy the machines at ComputerLand, a popular retail chain in the United States. For some corporate customers, the fact that IBM now had a personal computing product meant that these little machines were not some crazy geek-hippie fad but in fact a new class of serious computing. Corporate users who did not want to rely on their company’s centralized data centers began turning to these new machines.

Estridge and his team were busy acquiring games and business software for the PC. They lined up Lotus Development Corp. to provide its 1-2-3 spreadsheet package; other software products followed from multiple suppliers. As developers began writing software for the IBM PC, they embraced DOS as the industry standard. IBM’s competitors, too, increasingly had to use DOS and Intel chips. And Cary’s decision to avoid the product-development bureaucracy had paid off handsomely.

IBM couldn’t keep up with rivals in the PC market

Encouraged by their success, the IBMers in Boca Raton released a sequel to the PC in early 1983, called the XT. In 1984 came the XT’s successor, the AT. That machine would be the last PC designed outside IBM’s development process. John Opel, who had succeeded Cary as CEO in January 1981, endorsed reining in the PC business. During his tenure, Opel remained out of touch with the PC and did not fully understand the significance of the technology.

We could conclude that Opel did not need to know much about the PC because business overall was outstanding. IBM’s revenue reached $29 billion in 1981 and climbed to $46 billion in 1984. The company was routinely ranked as one of the best run. IBM’s stock more than doubled, making IBM the most valuable company in the world.

The media only wanted to talk about the PC. On its 3 January 1983 cover, Time featured the personal computer, rather than its usual Man of the Year. IBM customers, too, were falling in love with the new machines, ignoring IBM’s other lines of business—mainframes, minicomputers, and typewriters.

Photo of Don Estridge.
Don Estridge was the right person to lead the skunkworks in Boca Raton, Fla., where the IBM PC was built.
IBM

On 1 August 1983, Estridge’s skunkworks was redesignated the Entry Systems Division (ESD), which meant that the PC business was now ensnared in the bureaucracy that Cary had bypassed. Estridge’s 4,000-person group mushroomed to 10,000. He protested that Corporate had transferred thousands of programmers to him who knew nothing about PCs. PC programmers needed the same kind of machine-software knowledge that mainframe programmers in the 1950s had; both had to figure out how to cram software into small memories to do useful work. By the 1970s, mainframe programmers could not think small enough.

Estridge faced incessant calls to report on his activities in Armonk, diverting his attention away from the PC business and slowing development of new products even as rivals began to speed up introduction of their own offerings. Nevertheless, in August 1984, his group managed to release the AT, which had been designed before the reorganization.

But IBM blundered with its first product for the home computing market: the PCjr (pronounced “PC junior”). The company had no experience with this audience, and as soon as IBM salesmen and prospective customers got a glimpse of the machine, they knew something had gone terribly wrong.

Unlike the original PC, the XT, and the AT, the PCjr was the sorry product of IBM’s multilayered development and review process. Rumors inside IBM suggested that the company had spent $250 million to develop it. The computer’s tiny keyboard was scornfully nicknamed the “Chiclet keyboard.” Much of the PCjr’s software, peripheral equipment, memory boards, and other extensions were incompatible with other IBM PCs. Salesmen ignored it, not wanting to make a bad recommendation to customers. IBM lowered the PCjr’s price, added functions, and tried to persuade dealers to promote it, to no avail. ESD even offered the machines to employees as potential Christmas presents for a few hundred dollars, but that ploy also failed.

IBM’s relations with its two most important vendors, Intel and Microsoft, remained contentious. Both Microsoft and Intel made a fortune selling IBM’s competitors the same products they sold to IBM. Rivals figured out that IBM had set the de facto technical standards for PCs, so they developed compatible versions they could bring to market more quickly and sell for less. Vendors like AT&T, Digital Equipment Corp., and Wang Laboratories failed to appreciate that insight about standards, and they suffered. (The notable exception was Apple, which set its own standards and retained its small market share for years.) As the prices of PC clones kept falling, the machines grew more powerful—Moore’s Law at work. By the mid-1980s, IBM was reacting to the market rather than setting the pace.


Estridge was not getting along with senior executives at IBM, particularly those on the mainframe side of the house. In early 1985, Opel made Bill Lowe head of the PC business.

Then disaster struck. On 2 August 1985, Estridge, his wife, Mary Ann, and a handful of IBM salesmen from Los Angeles boarded Delta Flight 191 headed to Dallas. Over the Dallas airport, 700 feet off the ground, a strong downdraft slammed the plane to the ground, killing 137 people including the Estridges and all but one of the other IBM employees. IBMers were in shock. Despite his troubles with senior management, Estridge had been popular and highly respected. Not since the death of Thomas J. Watson Sr. nearly 30 years earlier had employees been so stunned by a death within IBM. Hundreds of employees attended the Estridges’ funeral. The magic of the PC may have died before the airplane crash, but the tragedy at Dallas confirmed it.

More missteps doomed the IBM PC and its OS/2 operating system

While IBM continued to sell millions of personal computers, over time the profit on its PC business declined. IBM’s share of the PC market shrank from roughly 80 percent in 1982–1983 to 20 percent a decade later.

Meanwhile, IBM was collaborating with Microsoft on a new operating system, OS/2, even as Microsoft was working on Windows, its replacement for DOS. The two companies haggled over royalty payments and how to work on OS/2. By 1987, IBM had over a thousand programmers assigned to the project and to developing telecommunications, costing an estimated $125 million a year.

OS/2 finally came out in late 1987, priced at $340, plus $2,000 for additional memory to run it. By then, Windows had been on the market for two years and was proving hugely popular. Application software for OS/2 took another year to come to market, and even then the new operating system didn’t catch on. As the business writer Paul Carroll put it, OS/2 began to acquire “the smell of failure.”

Known to few outside of IBM and Microsoft, Gates had offered to sell IBM a portion of his company in mid-1986. It was already clear that Microsoft was going to become one of the most successful firms in the industry. But Lowe declined the offer, making what was perhaps the second-biggest mistake in IBM’s history up to then, following his first one of not insisting on proprietary rights to Microsoft’s DOS or the Intel chip used in the PC. The purchase price probably would have been around $100 million in 1986, an amount that by 1993 would have yielded a return of $3 billion and in subsequent decades orders of magnitude more.

In fairness to Lowe, he was nervous that such an acquisition might trigger antitrust concerns at the U.S. Department of Justice. But the Reagan administration was not inclined to tamper with the affairs of large multinational corporations.


More to the point, Lowe, Opel, and other senior executives did not understand the PC market. Lowe believed that PCs, and especially their software, should undergo the same rigorous testing as the rest of the company’s products. That meant not introducing software until it was as close to bugproof as possible. All other PC software developers valued speed to market over quality—better to get something out sooner that worked pretty well, let users identify problems, and then fix them quickly. Lowe was aghast at that strategy.

Salesmen came forward with proposals to sell PCs in bulk at discounted prices but got pushback. The sales team I managed arranged to sell 6,000 PCs to American Standard, a maker of bathroom fixtures. But it took more than a year and scores of meetings for IBM’s contract and legal teams to authorize the terms.

Lowe’s team was also slow to embrace the faster chips that Intel was producing, most notably the 80386. The new Intel chip had just the right speed and functionality for the next generation of computers. Even as rivals moved to the 386, IBM remained wedded to the slower 286 chip.

As the PC market matured, the gold rush of the late 1970s and early 1980s gave way to a more stable market. A large software industry grew up. Customers found the PC clones, software, and networking tools to be just as good as IBM’s products. The cost of performing a calculation on a PC dropped so much that it was often significantly cheaper to use a little machine than a mainframe. Corporate customers were beginning to understand that economic reality.

Opel retired in 1986, and John F. Akers inherited the company’s sagging fortunes. Akers recognized that the mainframe business had entered a long, slow decline, the PC business had gone into a more rapid fall, and the move to billable services was just beginning. He decided to trim the ranks by offering an early retirement program. But too many employees took the buyout, including too many of the company’s best and brightest.

In 1995, IBM CEO Louis V. Gerstner Jr. finally pulled the plug on OS/2. It did not matter that Microsoft’s software was notorious for having bugs or that IBM’s was far cleaner. As Gerstner noted in his 2002 book, “What my colleagues seemed unwilling or unable to accept was that the war was already over and was a resounding defeat—90 percent market share for Windows to OS/2’s 5 percent or 6 percent.”

The end of the IBM PC

IBM soldiered on with the PC until Samuel J. Palmisano, who once worked in the PC organization, became CEO in 2002. IBM was still the third-largest producer of personal computers, including laptops, but PCs had become a commodity business, and the company struggled to turn a profit from those products. Palmisano and his senior executives had the courage to set aside any emotional attachments to their “Tool for Modern Times” and end it.

In December 2004, IBM announced it was selling its PC business to Lenovo for $1.75 billion. As the New York Times explained, the sale “signals a recognition by IBM, the prototypical American multinational, that its own future lies even further up the economic ladder, in technology services and consulting, in software and in the larger computers that power corporate networks and the Internet. All are businesses far more profitable for IBM than its personal computer unit.”


IBM already owned 19 percent of Lenovo, a stake it would retain for three years under the deal, with an option to acquire more shares. The head of Lenovo’s PC business would be IBM senior vice president Stephen M. Ward Jr., while his new boss would be Lenovo’s chairman, Yang Yuanqing. Lenovo got a five-year license to use the IBM brand on the popular ThinkPad laptops and PCs, and to hire IBM employees to support existing customers in the West, where Lenovo was virtually unknown. IBM would continue to design new laptops for Lenovo in Raleigh, N.C. Some 4,000 IBMers already working in China would switch to Lenovo, along with 6,000 in the United States.

The deal ensured that IBM’s global customers had familiar support while providing a stable flow of maintenance revenue to IBM for five years. For Lenovo, the deal provided a high-profile partner. Palmisano wanted to expand IBM’s IT services business to Chinese corporations and government agencies. Now the company was partnered with China’s largest computer manufacturer, which controlled 27 percent of the Chinese PC market. The deal was one of the most creative in IBM’s history. And yet it remained for many IBMers a sad close to the quarter-century chapter of the PC.

This article is based on excerpts from IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019).

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2021 print issue as “A Tool for Modern Times.”

The Essential Question

Photo of James W. Cortada.
James W. Cortada at the IBM building in Cranford, N.J., in the late 1970s.
James W. Cortada

How many IBM PCs can you fit in an 18-wheeler? That, according to historian James W. Cortada, is the most interesting question he’s ever asked.

He first raised the question in 1985, several years after IBM had introduced its wildly successful personal computer. Cortada was then head of a sales team at IBM’s Nashville site.

“We’d arranged to sell 6,000 PCs to American Standard. They agreed to send their trucks to pick up a certain number of PCs every month. So we needed to know how many PCs would fit,” Cortada explains. “I can’t even remember what the answer was, only that I was delighted that I thought to ask the question.”

Cortada worked in various capacities at IBM for 38 years. (That’s him in the parking lot of IBM’s distinctive building in Cranford, N.J., designed by Victor Lundy.) After he retired in 2012, he became a senior research fellow at the University of Minnesota’s Charles Babbage Institute, where he specializes in the history of technology. That transition might seem odd, but shortly before he joined IBM, Cortada had earned a Ph.D. in modern European history from Florida State University. And he continued to research, write, and publish during his IBM career.

Cover of IBM: The Rise and Fall and Reinvention of a Global Icon
IEEE Spectrum

This month’s Past Forward describes the 1981 launch of the IBM PC. It’s drawn from Cortada’s award-winning history of Big Blue: IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). “I was able to take advantage of the normal skills of a trained historian,” Cortada says. “And I had witnessed a third of IBM’s history. I knew what questions to ask. I knew the skeletons in the closet.”

Even before he started the book, a big question was whether he’d reveal those skeletons or not. “I decided to be candid,” Cortada says. “I didn’t want my grandsons to be embarrassed about what I wrote.”

Cloud Computing’s Coming Energy Crisis

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/cloud-computings-coming-energy-crisis

How much of our computing now happens in the cloud? A lot. Providers of public cloud services alone take in more than a quarter of a trillion U.S. dollars a year. That’s why Amazon, Google, and Microsoft maintain massive data centers all around the world. Apple and Facebook, too, run similar facilities, all stuffed with high-core-count CPUs, sporting terabytes of RAM and petabytes of storage.

These machines do the heavy lifting to support what’s been called “surveillance capitalism”: the endless tracking, user profiling, and algorithmic targeting used to distribute advertising. All that computing rakes in a lot of dollars, of course, but it also consumes a lot of watts: Bloomberg recently estimated that about 1 percent of the world’s electricity goes to cloud computing.

That figure is poised to grow exponentially over the next decade. Bloomberg reckons that, globally, we might exit the 2020s needing as much as 8 percent of all electricity to power the future cloud. That might seem like a massive jump, but it’s probably a conservative estimate. After all, by 2030, with hundreds of millions of augmented-reality spectacles streaming real-time video into the cloud, and with the widespread adoption of smart digital currencies seamlessly blending money with code, the cloud will provide the foundation for nearly every financial transaction and user interaction with data.

How much energy can we dedicate to all this computing? In an earlier time, we could have relied on Moore’s Law to keep the power budget in check as we scaled up our computing resources. But now, as we wring out the last bits of efficiency from the final few process nodes before we reach atomic-scale devices, those improvements will hit physical limits. It won’t be long before computing and power consumption are once again strongly coupled, as they were 60 years ago, before integrated CPUs changed the game.


We seem to be hurtling toward a brick wall, as the rising demand for computing collides with decreasing efficiencies. We can’t devote the whole of the planet’s electricity generation to support the cloud. Something will have to give.

The most immediate solutions will involve processing more data at the edge, before it goes into the cloud. But that only shifts the burden, buying time for rethinking how to manage our computing in the face of limited power resources.

Software and hardware engineering will no doubt reorient their design practices around power efficiency. More code will find its way into custom silicon. And that code will find more reasons to run infrequently, asynchronously, and as minimally as possible. All of that will help, but as software progressively eats more of the world—to borrow a now-famous metaphor—we will confront this challenge in ever-wider realms.

We can already spy one face of this future in the nearly demonic coupling of energy consumption and private profit that provides the proof-of-work mechanism for cryptocurrencies like Bitcoin. Companies like Square have announced investments in solar energy for Bitcoin mining, hoping to deflect some of the bad press associated with this activity. But more than public relations is at stake.

Bitcoin asks us right now to pit the profit motive against the health of the planet. More and more computing activities will do the same in the future. Let’s hope we never get to a point where the fate of the Earth hinges on the fate of the transistor.

This article appears in the August 2021 print issue as “Cloud Computing’s Dark Cloud.”

Google’s Quantum Computer Exponentially Suppresses Errors

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/googles-quantum-computer-exponentially-suppress-errors

To develop a practical quantum computer, scientists will have to design ways to deal with the errors that inevitably pop up in its performance. Now Google has demonstrated that exponential suppression of such errors is possible, in experiments that may help pave the way for scalable, fault-tolerant quantum computers.

A quantum computer with enough of the components known as quantum bits, or “qubits,” could in theory achieve a “quantum advantage,” allowing it to find the answers to problems no classical computer could ever solve.

However, a critical drawback of current quantum computers is the way in which their inner workings are prone to errors. Current state-of-the-art quantum platforms typically have error rates near 10^-3 (or one in a thousand), but many practical applications call for error rates as low as 10^-15.

Router Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/router-security.html

This report is six months old, and I don’t know anything about the organization that produced it, but it has some alarming data about router security.

Conclusion: Our analysis showed that Linux is the most used OS running on more than 90% of the devices. However, many routers are powered by very old versions of Linux. Most devices are still powered with a 2.6 Linux kernel, which is no longer maintained for many years. This leads to a high number of critical and high severity CVEs affecting these devices.

Since Linux is the most used OS, exploit mitigation techniques could be enabled very easily. Anyhow, they are used quite rarely by most vendors except the NX feature.

A published private key provides no security at all. Nonetheless, all but one vendor spread several private keys in almost all firmware images.

Mirai used hard-coded login credentials to infect thousands of embedded devices in the last years. However, hard-coded credentials can be found in many of the devices and some of them are well known or at least easy crackable.

However, we can tell for sure that the vendors prioritize security differently. AVM does better job than the other vendors regarding most aspects. ASUS and Netgear do a better job in some aspects than D-Link, Linksys, TP-Link and Zyxel.

Additionally, our evaluation showed that large scale automated security analysis of embedded devices is possible today utilizing just open source software. To sum it up, our analysis shows that there is no router without flaws and there is no vendor who does a perfect job regarding all security aspects. Much more effort is needed to make home routers as secure as current desktop or server systems.

One comment on the report:

One-third ship with Linux kernel version 2.6.36, which was released in October 2010. You can walk into a store today and buy a brand new router powered by software that’s almost 10 years out of date! This outdated version of the Linux kernel has 233 known security vulnerabilities registered in the Common Vulnerabilities and Exposures (CVE) database. The average router contains 26 critically rated security vulnerabilities, according to the study.

We know the reasons for this. Most routers are designed offshore, by third parties, and then private labeled and sold by the vendors you’ve heard of. Engineering teams come together, design and build the router, and then disperse. There’s often no one around to write patches, and most of the time router firmware isn’t even patchable. The way to update your home router is to throw it away and buy a new one.

And this paper demonstrates that even the new ones aren’t likely to be secure.

Cloning Google Titan 2FA keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/cloning-google-titan-2fa-keys.html

This is a clever side-channel attack:

The cloning works by using a hot air gun and a scalpel to remove the plastic key casing and expose the NXP A700X chip, which acts as a secure element that stores the cryptographic secrets. Next, an attacker connects the chip to hardware and software that take measurements as the key is being used to authenticate on an existing account. Once the measurement-taking is finished, the attacker seals the chip in a new casing and returns it to the victim.

Extracting and later resealing the chip takes about four hours. It takes another six hours to take measurements for each account the attacker wants to hack. In other words, the process would take 10 hours to clone the key for a single account, 16 hours to clone a key for two accounts, and 22 hours for three accounts.

By observing the local electromagnetic radiations as the chip generates the digital signatures, the researchers exploit a side channel vulnerability in the NXP chip. The exploit allows an attacker to obtain the long-term elliptic curve digital signature algorithm private key designated for a given account. With the crypto key in hand, the attacker can then create her own key, which will work for each account she targeted.

The attack isn’t free, but it’s not expensive either:

A hacker would first have to steal a target’s account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning — were it ever to happen in the wild — would likely be done only by a nation-state pursuing its highest-value targets.

That last line about “nation-state pursuing its highest-value targets” is just not true. There are many other situations where this attack is feasible.

Note that the attack isn’t against the Google system specifically. It exploits a side-channel vulnerability in the NXP chip, which means that other systems are probably vulnerable:

While the researchers performed their attack on the Google Titan, they believe that other hardware that uses the A700X, or chips based on the A700X, may also be vulnerable. If true, that would include Yubico’s YubiKey NEO and several 2FA keys made by Feitian.

Backdoor in Zyxel Firewalls and Gateways

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/backdoor-in-zyxel-firewalls-and-gateways.html

This is bad:

More than 100,000 Zyxel firewalls, VPN gateways, and access point controllers contain a hardcoded admin-level backdoor account that can grant attackers root access to devices via either the SSH interface or the web administration panel.

[…]

Installing patches removes the backdoor account, which, according to Eye Control researchers, uses the “zyfwp” username and the “PrOw!aN_fXp” password.

“The plaintext password was visible in one of the binaries on the system,” the Dutch researchers said in a report published before the Christmas 2020 holiday.

An introduction to three-phase power and PDUs

Post Syndicated from Rob Dinh original https://blog.cloudflare.com/an-introduction-to-three-phase-power-and-pdus/


Our fleet of over 200 locations comprises various generations of servers and routers. And with the ever-changing landscape of services and computing demands, it’s imperative that we manage power in our data centers right. This blog is a brief Electrical Engineering 101 session going over specifically how power distribution units (PDUs) work, along with some good practices on how we use them. It appears to me that we could all use a bit more knowledge on this topic, and more love and appreciation of something that’s critical but usually taken for granted, like hot showers and opposable thumbs.

A PDU is a device used in data centers to distribute power to multiple rack-mounted machines. It’s an industrial-grade power strip typically designed to supply the average consumption of about seven US households. Advanced models have monitoring features and can be accessed via SSH or a web GUI to turn power outlets on and off. How we choose a PDU depends on which country the data center is in and what that country provides in terms of voltage, phase, and plug type.


In each of our racks, every dual power-supply (PSU) server is cabled to both of the two vertically mounted PDUs. As shown in the picture above, one PDU feeds a server’s PSU via a red cable, and the other PDU feeds that server’s other PSU via a blue cable. This ensures power redundancy and maximizes our service uptime: if one of the PDUs or server PSUs fails, the redundant power feed keeps the server alive.

Faraday’s Law and Ohm’s Law

Like most high-voltage applications, PDUs and servers are designed to use AC power, meaning voltage and current aren’t constant: they’re sine waves whose values alternate between positive and negative at a certain frequency. For example, a voltage feed of 100V is not constantly at 100V; it bounces between 100V and -100V like a sine wave. One complete sine wave cycle is one phase of 360 degrees, and running at 50Hz means there are 50 cycles per second.

The sine wave can be explained by Faraday’s Law and by looking at how an AC power generator works. Faraday’s Law tells us that a current is induced to flow due to a changing magnetic field. Below illustrates a simple generator with a permanent magnet rotating at constant speed and a coil coming in contact with the magnet’s magnetic field. Magnetic force is strongest at the North and South ends of the magnet. So as it rotates on itself near the coil, current flow fluctuates in the coil. One complete rotation of the magnet represents one phase. As the North end approaches the coil, current increases from zero. Once the North end leaves, current decreases to zero. The South end in turn approaches, now the current “increases” in the opposite direction. And finishing the phase, the South end leaves returning the current back to zero. Current alternates its direction at every half cycle, hence the naming of Alternating Current.


Current and voltage in AC power fluctuate in phase, or “in tandem”, with each other. So, for a resistive load, Ohm’s Law (V = I x R) combined with Power = Voltage x Current gives P = I²R, which is always positive. Notice on the graph below that AC power (watts) has two peaks per cycle. But for practical purposes, we’d like to use a constant power value. We do that by converting AC values into their “DC” equivalents using the root mean square (RMS), which for a sine wave is the peak value divided by √2. For example, in the US, our conditions are 208V 24A at 60Hz. When we look at spec sheets, all of these values can be assumed to be RMS values, that is, their constant DC equivalents. So when we say we’re fully utilizing a PDU’s max capacity of 5kW, that 5kW is the average: the instantaneous power drawn by our machines actually swings between 0 and roughly twice that, about 10kW, over each cycle.
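
To make those numbers concrete, here is a small numerical sketch (assuming an ideal, purely resistive load at unity power factor) that samples one cycle of the 208V/24A US feed and recovers both the 5kW average and the roughly 2x instantaneous peak:

```python
import math

V_RMS, I_RMS, FREQ = 208.0, 24.0, 60.0      # the US example above

v_peak = V_RMS * math.sqrt(2)               # RMS = peak / sqrt(2) for a sine wave
i_peak = I_RMS * math.sqrt(2)

# Instantaneous power p(t) = v(t) * i(t) over one full cycle, sampled at 1,000 points,
# for an ideal resistive load (voltage and current perfectly in phase).
period = 1.0 / FREQ
power = [
    (v_peak * math.sin(2 * math.pi * FREQ * t)) * (i_peak * math.sin(2 * math.pi * FREQ * t))
    for t in (n * period / 1000 for n in range(1000))
]

print(f"average power: {sum(power) / len(power):.0f} W")    # ~4992 W, i.e. the "5kW" rating
print(f"peak instantaneous power: {max(power):.0f} W")      # ~9984 W, about twice the average
```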


It’s also critical to figure out the sum of power our servers will need in a rack so that it falls under the PDU’s design max power capacity. For our US example, a PDU is typically 5kW (208 volts x 24 amps); therefore, we budget 5kW and fit as many machines as we can under that. If we need more machines and the total power goes above 5kW, we’d need to provision another power source. That could mean another set of PDUs and racks that we may not fully use, depending on demand; in other words, more underutilized cost. All we can do is abide by P = V x I.

However, there is a way we can increase the max power capacity economically: a 3-phase PDU. Compared to single phase, its max capacity is √3, or 1.7, times higher. A 3-phase PDU with the same US specs as above has a capacity of 8.6kW (5kW x √3), allowing us to power more machines from the same source. A 3-phase setup does mean thicker cables and a bigger plug, and a 3-phase PDU is more expensive than a 1-phase one; but its value is higher than that of two 1-phase rack setups, for these reasons:

  • It’s more cost-effective, because there are fewer hardware resources to buy
  • Say the computing demand adds up to 215kW of hardware: we would need 25 3-phase racks, compared with 43 1-phase racks (see the quick check after this list).
  • Each rack needs two PDUs for power redundancy. Using the example above, we would need 50 3-phase PDUs compared to 86 1-phase PDUs to power 215kW worth of hardware.
  • That also means a smaller rack footprint and fewer power sources provided and charged by the data center, saving us up to √3 or 1.7 times in opex.
  • It’s more resilient, because there are more circuit breakers in a 3-phase PDU — one more than in a 1-phase. For example, a 48-outlet PDU that is 1-phase would be split into two circuits of 24 outlets. While a 3-phase one would be split into 3 circuits of 16 outlets. If a breaker tripped, we’d lose 16 outlets using a 3-phase PDU instead of 24.
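
As a quick back-of-the-envelope check of those rack and PDU counts, here is a minimal sketch using the rounded capacity figures quoted above and two PDUs per rack for redundancy:

```python
import math

SINGLE_PHASE_KW = 5.0                              # 208V x 24A, rounded to the 5kW figure used above
THREE_PHASE_KW = SINGLE_PHASE_KW * math.sqrt(3)    # ~8.66kW, quoted above as 8.6kW
DEMAND_KW = 215                                    # the 215kW of hardware in the example above
PDUS_PER_RACK = 2                                  # two PDUs per rack for power redundancy

for label, capacity_kw in (("1-phase", SINGLE_PHASE_KW), ("3-phase", THREE_PHASE_KW)):
    racks = math.ceil(DEMAND_KW / capacity_kw)
    print(f"{label}: {racks} racks, {racks * PDUS_PER_RACK} PDUs")

# 1-phase: 43 racks, 86 PDUs
# 3-phase: 25 racks, 50 PDUs
```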

The PDU shown above is a 3-phase model with 48 outlets. We can see three pairs of circuit breakers for the three branches, which are intertwined with each other in white, grey, and black. Industry demands today pressure engineers to maximize compute performance and minimize physical footprint, making the 3-phase PDU a widely used part of operating a data center.

What is 3-phase?

A 3-phase AC generator has three coils instead of one, with the coils 120 degrees apart inside the cylindrical core, as shown in the figure below. Just like in the 1-phase generator, current flow is induced by the rotation of the magnet, creating power from each coil sequentially at every one-third of the magnet’s rotation cycle. In other words, we’re generating three 1-phase power feeds offset by 120 degrees.


A 3-phase feed is set up by joining any of its three coils into line pairs. L1, L2, and L3 coils are live wires with each on their own phase carrying their own phase voltage and phase current. Two phases joining together form one line carrying a common line voltage and line current. L1 and L2 phase voltages create the L1/L2 line voltage. L2 and L3 phase voltages create the L2/L3 line voltage. L1 and L3 phase voltages create the L1/L3 line voltage.


Let’s take a moment to clarify the terminology. Some other sources may refer to line voltage (or current) as line-to-line or phase-to-phase voltage (or current). It can get confusing, because line voltage is the same as phase voltage in 1-phase circuits, as there’s only one phase. Also, the magnitude of the line voltage is equal to the magnitude of the phase voltage in 3-phase Delta circuits, while the magnitude of the line current is equal to the magnitude of the phase current in 3-phase Wye circuits.

Conversely, the line current equals to phase current times √3 in Delta circuits. In Wye circuits, the line voltage equals to phase voltage times √3.

In Delta circuits:
Vline = Vphase
Iline = √3 x Iphase

In Wye circuits:
Vline = √3 x Vphase
Iline = Iphase

Delta and Wye circuits are the two methods that three wires can join together. This happens both at the power source with three coils and at the PDU end with three branches of outlets. Note that the generator and the PDU don’t need to match each other’s circuit types.


On PDUs, these phases join when we plug servers into the outlets. So we conceptually take the coil wirings above and replace the coils with resistors to represent servers. Below is a simplified wiring diagram of a 3-phase Delta PDU showing the three line pairs as three modular branches. Each branch carries two phase currents and one common voltage drop of its own.


And this one below is of a 3-phase Wye PDU. Note that Wye circuits have an additional line known as the neutral line where all three phases meet at one point. Here each branch carries one phase and a neutral line, therefore one common current. The neutral line isn’t considered as one of the phases.


Thanks to a neutral line, a Wye PDU can offer a second voltage source that is √3 times lower for smaller devices, like laptops or monitors. Common voltage pairs for Wye PDUs are 230V/400V, or 120V/208V in North America.
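
Those pairs are just the Wye relation above worked out numerically:

Vline = √3 x 230V ≈ 400V
Vline = √3 x 120V ≈ 208V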

Where does the √3 come from?

Why are we multiplying by √3? As the name implies, we are adding phasors. Phasors are complex numbers representing sine wave functions. Adding phasors is like adding vectors. Say your GPS tells you to walk 1 mile East (vector a), then walk 1 mile North (vector b). You walked 2 miles, but you only ended up about 1.4 miles NE of the original location (vector a+b). That 1.4 miles of “work” is what we want.

Let’s apply this to L1 and L2 in a Delta circuit. When we add phases L1 and L2, we get the L1/L2 line. We assume the two coils are identical, and let α represent the voltage magnitude of each phase. The two phases are offset by 120 degrees, as designed into the 3-phase power generator:

|L1| = |L2| = α
L1 = |L1|∠0° = α∠0°
L2 = |L2|∠-120° = α∠-120°

Using vector addition to solve for L1/L2, then converting the result into polar form:

L1/L2 = L1 + L2

Since voltage is a scalar, we’re only interested in the “work”:

|L1/L2| = √3α
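
For reference, here is the phasor arithmetic written out (a sketch: it takes the L1/L2 line quantity as the phasor difference between the two phases, i.e. L1 plus the reversed L2, which is what produces the √3 factor):

L1 - L2 = α∠0° - α∠-120°
        = α(1 + j0) - α(-1/2 - j(√3/2))
        = α(3/2 + j(√3/2))
|L1 - L2| = α x √((3/2)² + (√3/2)²) = √3 x α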

The same α applies to L3 as well. This means that for any of the three line pairs, we multiply the phase voltage by √3 to calculate the line voltage.

Vline = √3 x Vphase

Now, with the three line powers being equal, we can add them all to get the overall effective power. The derivation below is written using the Delta relations, but the end result holds for Wye circuits as well.

Poverall = 3 x Pphase
Poverall = 3 x (Vphase x Iphase)
Poverall = 3 x Vline x (Iline / √3)
Poverall = √3 x Vline x Iline

Using the US example, Vline is 208V and Iline is 24A. This leads to an overall 3-phase power of 8646W (√3 x 208V x 24A), or 8.6kW. There lies the biggest advantage of using 3-phase systems: by adding two more sets of coils and wires (ignoring the neutral wire), we get a generator that can produce √3, or 1.7, times more power!

Dealing with 3-phase

The derivation in the section above assumes that the magnitudes of all three phases are equal, but we know in practice that’s not always the case. In fact, it hardly ever is. We rarely have servers and switches evenly distributed across all three branches of a PDU. Each machine may have a different load and different specs, so power can vary wildly, potentially causing a dramatic phase imbalance. A heavily imbalanced setup can significantly reduce the PDU’s available capacity.

A perfectly balanced and fully utilized PDU at 8.6kW means that each of its three branches has 2.88kW of power consumed by machines. Laid out simply, it’s spread 2.88 + 2.88 + 2.88. This is the best-case scenario. If we take 1kW worth of machines out of one branch, the spread becomes 2.88 + 1.88 + 2.88. Imbalance is introduced and the PDU is underutilized, but we’re fine. However, if we put that 1kW back into another branch, giving 3.88 + 1.88 + 2.88, the PDU is over capacity, even though the sum is still 8.6kW. In fact, it would be over capacity even if we added just 500W instead of 1kW on the wrong branch, reaching 3.38 + 1.88 + 2.88 (8.1kW).

That’s because an 8.6kW PDU is spec’d for a maximum of 24A on each phase current. Overloading one of the branches can force phase currents over 24A. Theoretically, we could load one branch until its current reaches 24A and leave the other two branches unused, but that would effectively turn it into a 1-phase PDU, losing the benefit of the √3 multiplier. In reality, each branch has fuses rated below 24A (usually 20A) to ensure we can’t get that high and cause overcurrent issues. Therefore the same 8.6kW PDU would have one of its branches trip at 4.2kW (208V x 20A).

Loading up one branch is the easiest way to overload the PDU. Being heavily imbalanced significantly lowers PDU capacity and increases risk of failure. To help minimize that, we must:

  • Ensure that total power consumption of all machines is under the PDU’s max power capacity
  • Try to be as phase-balanced as possible by spreading cabling evenly across the three branches
  • Ensure that the sum of phase currents from powered machines at each branch is under the fuse rating at the circuit breaker.

This spreadsheet from Raritan is very useful when designing racks.

For the sake of simplicity, let’s ignore other machines like switches. Our latest 2U4N servers are rated at 1800W. That means we can only fit a maximum of four of these 2U4N chassis (8600W / 1800W = 4.7 chassis). Rounding them up to 5 would reach a total rack level power consumption of 9kW, so that’s a no-no.

Splitting 4 chassis evenly across 3 branches is impossible, forcing one of the branches to carry 2 chassis. That leads to non-ideal phase balancing:


Keeping phase currents under 24A, there’s only 1.1A (24A – 22.9A) of headroom on L1 or L2 before the PDU gets overloaded. Say we want to add as many machines as we can under the PDU’s power capacity. One solution is to add up to 242W on the L1/L2 branch, at which point both the L1 and L2 currents reach their 24A limit.
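
To see where numbers like 22.9A come from, here is a minimal sketch of that bookkeeping (assuming purely resistive loads at unity power factor, with the 208V line voltage and 24A phase-current limit used in this example; each line conductor carries the phasor difference of the two branch currents it feeds, which sit 120 degrees apart):

```python
import math

LINE_VOLTAGE = 208.0    # line voltage used in the US example above
PHASE_LIMIT_A = 24.0    # per-phase current rating of the PDU in this example

def delta_phase_currents(p12, p23, p31, v_line=LINE_VOLTAGE):
    """Return the (L1, L2, L3) conductor currents in amps for a Delta PDU.

    p12, p23 and p31 are the loads in watts plugged into the L1/L2, L2/L3 and
    L3/L1 branches. Assumes purely resistive loads at unity power factor, so
    each branch current is watts / line voltage, and each line conductor
    carries the phasor difference of its two branch currents, which are
    120 degrees apart: |A - B| = sqrt(A^2 + B^2 + A*B).
    """
    i12, i23, i31 = p12 / v_line, p23 / v_line, p31 / v_line

    def combine(a, b):
        return math.sqrt(a * a + b * b + a * b)

    return combine(i12, i31), combine(i12, i23), combine(i23, i31)

# The four-chassis example above: two 1800W chassis on L1/L2, one on each other branch.
l1, l2, l3 = delta_phase_currents(p12=2 * 1800, p23=1800, p31=1800)
print(f"L1 = {l1:.1f}A, L2 = {l2:.1f}A, L3 = {l3:.1f}A")    # ~22.9A, 22.9A, 15.0A
print(f"headroom on L1: {PHASE_LIMIT_A - l1:.1f}A")         # ~1.1A, as noted above
```

Applied to the 450W 1U layout discussed further below (assuming an even split of six servers per branch), the same function gives about 22.5A per phase, comfortably under the 24A limit.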


Alternatively, we can add up to 298W on the L2/L3 branch until L2 current reaches 24A. Note we can also add another 298W on the L3/L1 branch until L1 current reaches 24A.


In the examples above, we can see that various solutions are possible. Adding two 298W machines each at L2/L3 and L3/L1 is the most phase balanced solution, given the parameters. Nonetheless, PDU capacity isn’t optimized at 7.8kW.

Dealing with an 1800W server is not ideal, because whichever branch we choose to power it from swings the phase balance significantly and unfavorably. Thankfully, our Gen X servers take up less space and are more power efficient. A smaller footprint gives us more flexibility and fine-grained control over our racks in many of our diverse data centers. Assuming each 1U server is 450W (as if we physically split the 1800W 2U4N into four nodes, each with its own power supply), we’re now able to fit 18 nodes. That’s 2 more nodes than the four-2U4N setup:


Adding two more servers here means we’ve increased our value by 12.5%. While there are more factors not considered here to calculate the Total Cost of Ownership, this is still a great way to show we can be smarter with asset costs.

Cloudflare provides the back-end services so that our customers can enjoy the performance, reliability, security, and global scale of our edge network. Meanwhile, we manage all of our hardware in over 100 countries with various power standards and compliances, and ensure that our physical infrastructure is as reliable as it can be.

There’s no Cloudflare without hardware, and there’s no Cloudflare without power. Want to know more? Watch this Cloudflare TV segment about power: https://cloudflare.tv/event/7E359EDpCZ6mHahMYjEgQl.

ASICs at the Edge

Post Syndicated from Tom Strickx original https://blog.cloudflare.com/asics-at-the-edge/


At Cloudflare we pride ourselves on our global network, which spans more than 200 cities in over 100 countries. To handle all the traffic passing through our network, there are multiple technologies at play. So let’s have a look at one of the cornerstones that makes all of this work… ASICs. No, not the running shoes.

What’s an ASIC?

ASIC stands for Application Specific Integrated Circuit. The name already says it, it’s a chip with a very narrow use case, geared towards a single application. This is in stark contrast to a CPU (Central Processing Unit), or even a GPU (Graphics Processing Unit). A CPU is designed and built for general purpose computation, and does a lot of things reasonably well. A GPU is more geared towards graphics (it’s in the name), but in the last 15 years, there’s been a drastic shift towards GPGPU (General Purpose GPU), in which technologies such as CUDA or OpenCL allow you to use the highly parallel nature of the GPU to do general purpose computing. A good example of GPU use is video encoding, or more recently, computer vision, used in applications such as self-driving cars.

Unlike CPUs or GPUs, ASICs are built with a single function in mind. Great examples are the Google Tensor Processing Units (TPU), used to accelerate machine learning functions[1], or for orbital maneuvering[2], in which specific orbital maneuvers are encoded, like the Hohmann Transfer, used to move rockets (and their payloads) to a new orbit at a different altitude. And they are also heavily used in the networking industry. Technically, the use case in the network industry should be called an ASSP (Application Specific Standard Product), but network engineers are simple people, so we prefer to call it an ASIC.

Why an ASIC

ASICs have the major benefit of being hyper-efficient. The more complex hardware is, the more cooling and power it needs. Because ASICs contain only the hardware components needed for their function, their overall size can be reduced, and so can their power requirements. This has a positive impact on the overall physical size of the network (devices don’t need to be as bulky to provide sufficient cooling), and it helps reduce the power consumption of a data center.

Reducing hardware complexity also reduces the failure rate of the manufacturing process, and allows for easier production.

The downside is that you need to embed a lot of your features in hardware, and once a new technology or specification comes around, any chips made without that technology baked in won’t be able to support it (VXLAN, for example).

For network equipment, this works perfectly. Overall, the networking industry is slow-moving, and considerable time is taken before new technologies make it to the market (as can be seen with IPv6, MPLS implementations, xDSL availability, …). This means the chips don’t need to evolve on a yearly basis, and can instead be created on a much slower cycle, with bigger leaps in technology. For example, it took Broadcom two years to go from Tomahawk 3 to Tomahawk 4, but in that process they doubled the throughput. The benefits listed earlier are super helpful for network equipment, as they allow for considerable throughput in a small form factor.

Building an ASIC

As with chips of any kind, building an ASIC is a long-term process. Just like with CPUs, if there’s a defect in the hardware design, you have to start from scratch, and scrap the entire build line. As such, the development lifecycle is incredibly long. It starts with prototyping in an FPGA (Field Programmable Gate Array), in which chip designers can program their required functionality and confirm compatibility. All of this is done in a HDL (Hardware Description Language), such as Verilog.

Once the prototyping stage is over, they move to baking the new packet processing pipeline into the chip at a foundry. After that, no more changes can be made to the chip, as it’s literally baked into the hardware (unlike an FPGA, which can be reprogrammed). Further difficulty is added by the fact that there are a very small number of hardware companies that will buy ASICs in bulk to build equipment with; as such the unit cost can increase drastically.

All of this means that the iteration cycle of an ASIC tends to be on the slower side of things (compared to the yearly refreshes in the Intel Process-Architecture-Optimization model for example), and will usually be smaller incremental updates: For example, increases in port-speeds are incremental (1G → 10G → 25G → 40G → 100G → 400G → 800G → …), and are tied into upgrades to the SerDes (Serialiser/Deserialiser) part of the chip.

New protocol support is a lot harder, and might require multiple development cycles before it shows up in a chip.

What ASICs do

The ASICs in our network equipment are responsible for the switching and routing of packets, as well as being the first layer of defense (in the form of a stateless firewall). Because packets are switched at such high rates, fast memory access is a primary concern. Most ASICs use a special sort of memory, called TCAM (Ternary Content-Addressable Memory), to store all sorts of lookup tables. These may be forwarding tables (where does this packet go), ACL (Access Control List) tables (is this packet allowed), or CoS (Class of Service) tables (which priority should be given to this packet).

CAM, and its more advanced sibling, TCAM, are fascinating kinds of memory, as they operate fundamentally differently from traditional Random Access Memory (RAM). While you have to use a memory address to access data in RAM, with CAM and TCAM you can directly refer to the content you are looking for. It is a physical implementation of a key-value store.

In CAM you look up the exact binary representation of a word; in a network application, that word is likely to be an IP address, for example 11001011.00000000.01110001.00000000 (203.0.113.0). While this is definitely useful, networks deal with large collections of IP addresses, and storing each one individually would require significant memory. To remedy this, TCAM can store three states per bit instead of the binary two. The third state, sometimes called the ‘ignore’ state, allows multiple sequential data words to be stored as a single entry.

In networking, these sequential data words are IP prefixes. So for the previous example, if we wanted to store that IP address and all the addresses in the same /24, the TCAM entry would look as follows: 11001011.00000000.01110001.XXXXXXXX (203.0.113.0/24). This storage method means we can ask the ASIC questions such as “where should I send packets with the destination IP address of 203.0.113.19?”, to which the ASIC can have a reply ready in a single clock cycle, as it does not need to run through all of memory, but instead can directly reference the key. The reply will usually be a reference to a memory address in traditional RAM, where more data can be stored, such as the output port or firewall requirements for the packet.
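
To make this concrete, here is a small software sketch (not how the hardware is built, just the logic) of how ternary entries with ‘ignore’ bits yield a longest-prefix match; the prefixes and results are made up for illustration:

# Software model of a TCAM longest-prefix lookup. A real TCAM evaluates
# every entry in parallel in hardware; this loop only illustrates the logic.
import ipaddress

entries = []  # each entry: (value, care_mask, result stored in RAM)

def add_prefix(prefix, result):
    net = ipaddress.ip_network(prefix)
    entries.append((int(net.network_address), int(net.netmask), result))

def lookup(ip):
    key = int(ipaddress.ip_address(ip))
    # Longer prefixes (more 'care' bits) are checked first, mimicking TCAM entry order.
    for value, mask, result in sorted(entries, key=lambda e: e[1], reverse=True):
        if key & mask == value & mask:
            return result
    return None

add_prefix("203.0.113.0/24", "output port 7")
add_prefix("0.0.0.0/0", "default route")
print(lookup("203.0.113.19"))  # -> output port 7
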
ASICs at the Edge

To dig a bit deeper into what ASICs do in network equipment, let’s briefly go over some fundamentals.

Networking can be split into two primary components: routing and switching. Switching allows you to directly interconnect multiple devices, so they can talk with each other across the network. It’s what allows your phone to connect to your TV to play a new family video. Routing is the next level up. It’s the mechanism that interconnects all these switched networks into a network of networks, and eventually, the Internet.

Routers are the devices responsible for steering traffic through this complex maze of networks so that it gets to its destination safely and, hopefully, as fast as possible. On the Internet, routers will usually use a routing protocol called BGP (Border Gateway Protocol) to exchange reachability information for a prefix (a collection of IP addresses), also called NLRI (Network Layer Reachability Information).

As with navigating the roads, there are multiple ways to get from point A to point B on the Internet. To make sure the router makes the right decision, it will store all of the reachability information in the RIB (Routing Information Base). That way, if anything changes with one route, the router still has other options immediately available.

With this information, a BGP daemon can calculate the ideal path to take for any given destination from its own point-of-view. This Cisco documentation explains the decision process the daemon goes through to calculate that ideal path.

Once we have this ideal path for a given destination, we should store it, as it would be very inefficient to recalculate it every single time we need to go there. The database it is stored in is called the FIB (Forwarding Information Base). The FIB will be a subset of the RIB, as it only ever contains the best path for a destination at any given time, while the RIB keeps all the available paths, even the non-ideal ones.

With these individual components, routers can make packets go from point A to point B in a blink of an eye.
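
As a rough sketch of that relationship (not how any real BGP daemon stores its tables), the RIB holds every candidate path and the FIB keeps only the winner per prefix; the attributes and tie-breakers below are heavily simplified compared to the full decision process:

# Toy model of the RIB -> FIB relationship (illustration only; real BGP
# best-path selection uses many more tie-breakers).
rib = {
    # prefix: list of candidate paths, each with the attributes we compare on
    "203.0.113.0/24": [
        {"next_hop": "198.51.100.1", "as_path_len": 3, "local_pref": 100},
        {"next_hop": "192.0.2.254",  "as_path_len": 2, "local_pref": 100},
    ],
}

def best_path(paths):
    # Prefer higher local preference, then shorter AS path.
    return min(paths, key=lambda p: (-p["local_pref"], p["as_path_len"]))

# The FIB only ever holds the winning path per prefix.
fib = {prefix: best_path(paths) for prefix, paths in rib.items()}
print(fib["203.0.113.0/24"]["next_hop"])  # -> 192.0.2.254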

Here are some of the more specific functions our ASICs need to perform:

  1. FIB install: Once the router has calculated its FIB, it’s important that it can access this information as quickly as possible. To do so, the ASIC installs (writes) the calculated FIB into the TCAM, so lookups happen at maximum speed.

  2. Packet forwarding lookups: as we need to know where to send a received packet, we look up this information in TCAM, which is, as we mentioned, incredibly fast.

  3. Stateless Firewall: while a router routes packets between destinations, you also want to ensure that certain packets don’t reach a destination at all. This can be done using either a stateless or stateful firewall. “State” in this case refers to TCP state, so the router would need to understand if a connection is new, or already established. As maintaining state is a complex issue, which requires storing tables, and can quickly consume a lot of memory, most routers will only operate a stateless firewall.
    Instead, stateful firewalls often have their own appliances. At Cloudflare, we’ve opted to move maintaining state to our compute nodes, as that severely reduces the state table (one router holding all state vs. X metals holding it combined). A stateless firewall makes use of the TCAM again to store rules on what to do with specific types of packets. For example, one of the rules we employ at our edge is DENY-BOGON-RANGES, in which we discard traffic sourced from RFC1918 space (and other unroutable space); a conceptual sketch follows this list. As this makes use of TCAM, it can all be done at line rate (the maximum speed of the interface).

  4. Advanced features, such as GRE encapsulation: modern networking isn’t just packet switching and packet routing anymore, and more advanced features are needed. One of these is encapsulation. With packet encapsulation, a system will put a data packet into another data packet. Using this technique, it’s possible to build a network on top of an existing network (an overlay). Overlays can be used to build a virtual backbone for example, in which multiple locations can be virtually connected through the Internet.
    While you can encapsulate packets on a CPU (we do this for Magic Transit), there are considerable challenges in doing so in software. As such, the ASIC can have built-in functionality to encapsulate a packet in a multitude of protocols, such as GRE. You may not want encapsulated packets to have to take a second trip through your entire pipeline, as this adds latency, so these shortcuts can also be built into the chip.

  5. MPLS, EVPN, VXLAN, SDWAN, SDN, …: I ran out of buzzwords to enumerate here, but while MPLS isn’t new (the first RFC was created in 2001), it’s a rather advanced requirement, just as the others listed, which means not all ASIC vendors will implement this for all their chips due to the increased complexity.
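
Below is the conceptual sketch referenced in the stateless firewall item above: a source-prefix match with an action attached. The prefix list mirrors the RFC1918 ranges mentioned there; in hardware, the match itself lives in TCAM and runs at line rate, so this is just the logic, not an implementation.

# Conceptual model of a stateless ACL: match on source prefix, apply action.
import ipaddress

DENY_BOGON_RANGES = [ipaddress.ip_network(p) for p in
                     ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]  # RFC1918 space

def ingress_filter(src_ip):
    src = ipaddress.ip_address(src_ip)
    if any(src in net for net in DENY_BOGON_RANGES):
        return "discard"
    return "forward"

print(ingress_filter("192.168.1.10"))   # -> discard
print(ingress_filter("203.0.113.19"))   # -> forward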

Vendor Landscape

At Cloudflare, we interact with both hardware and software vendors on a daily basis while operating our global network. As we’re talking about ASICs today, we’ll explore the hardware landscape, but some hardware vendors also have their own NOS (Network Operating System).
There’s a vast selection of hardware out there, all with different features and pricing. It can become incredibly hard to see the wood for the trees, so we’ll focus on four important distinguishing factors:

  • Throughput: how many bits the ASIC can push through
  • Buffer size: how many bits the ASIC can store in memory in case of resource contention
  • Programmability: how easy it is for a third-party programmer like Cloudflare to interact directly with the ASIC
  • Feature set: how many advanced things outside of routing/switching the ASIC can do

The landscape is so varied because different companies have different requirements. A company like Cloudflare has different expectations for its network hardware than your typical corner shop. Even within our own network we’ll have different requirements for the different layers that make up our network.

Broadcom

The elephant in the networking room (or is it the jumbo frame in the switch?) is Broadcom. Broadcom is a semiconductor company, with their primary revenue in the wired infrastructure segment (over 50% of revenue[3]). While they’ve been around since 1991, they’ve become an unstoppable force in the last 10 years, in part due to their reliance on Apple (25% of revenue). As a semiconductor manufacturer, their market dominance is primarily achieved by acquiring other companies. A great example is the acquisition of Dune Networks, which has become an excellent revenue generator as the StrataDNX series of ASICs (Arad, QumranMX, Jericho). They have become the biggest ASIC vendor by far, owning 59% of the entire Ethernet integrated circuit market[4].

As such, they supply a lot of merchant silicon to Cisco, Juniper, Arista, and others. Up until recently, if you wanted to use the Broadcom SDK to accelerate your packet forwarding, you had to sign so many NDAs you might get a hand cramp, which made programming their chips a lot trickier. This changed recently when Broadcom open-sourced their SDK. Let’s have a quick look at some of their products.

Tomahawk

The Tomahawk line of ASICs is the bread-and-butter for the enterprise market. They’re cheap and incredibly fast. The first generation of Tomahawk chips did 3.2Tbps linerate, with low-latency switching. The latest generation of this chip (Tomahawk 4) does 25.6Tbps in a 7nm transistor footprint[5]. As you can’t have cheap, fast, and full-featured in a single package, you lose out on features: in this case, you’re missing most of the more advanced networking technologies such as VXLAN, and you have no buffer to speak of.
As an example of a different vendor using this silicon, you can have a look at the Juniper QFX5200 switching platform.

StrataDNX (Arad, QumranMX, Jericho)

These chipsets came through the acquisition of Dune Networks, and are a collection of high-bandwidth, deep-buffer (a large amount of memory available to store, or buffer, packets) chips, allowing them to be deployed in versatile environments, including the Cloudflare edge. The Arista DCS-7280SR that we run in some of our edge locations as edge routers runs on the Jericho chipset. Since then, the chips have evolved, and with Jericho2, Broadcom now has a 10Tbps deep-buffer chip[6]. With their fabric chip (which links multiple ASICs together), you can build switches with 48x400G ports[7] without much effort.
Cisco built their NCS5500 line of routers using the QumranMX[8].

Trident

This ASIC is an upgrade from the Tomahawk chipset, with a complex and extensive feature set, while maintaining high throughput rates. The latest Trident4 does 12.8Tbps at very low latencies[9], making it an incredibly flexible platform. It unfortunately has no buffer space to speak of, which limits its scope for Cloudflare, as we need the buffer space to be able to switch between the different port speeds we have on our edge routers. The Arista 7050X and 7300X are built on top of this.

Intel

Intel is well known in the network industry for building stable and high-performance 10G NICs (Network Interface Controller). They’re not known for ASICs. They made an initial attempt with their acquisition of Fulcrum[10], which built the FM6000[11] series of ASIC, but nothing of note was really built with them. Intel decided to try again in 2019 with their acquisition of Barefoot. This small manufacturer is responsible for the Barefoot Tofino ASIC, which may well be a fundamental paradigm shift in the network industry.

Barefoot Tofino

The Tofino[12] is built using a PISA (Protocol Independent Switch Architecture), and using P4 (Programming Protocol-Independent Packet Processors)[13], you can program the data-plane (packet forwarding) as you see fit. It’s a drastic move away from the traditional method of networking, in which direct programming of the ASIC isn’t easily possible, and definitely not through a standard programming language. As an added benefit, P4 also allows you to perform a formal verification of your forwarding program, and be sure that it will do what you expect it to. Caveat: OpenFlow tried this, but unfortunately never really got much traction.
[Diagram: P4 switch model[14]]

There are multiple variations of the Tofino 1 available, but the top-end ASIC has a 6.5Tbps linerate capacity. As the ASIC is programmable, its featureset is as rich as you’d want it to be. Unfortunately, the chip does not come with a lot of buffer memory, so we can’t deploy these as edge devices (yet). Both Arista (7170 Series[15]) and Cisco (Nexus 34180YC and 3464C series[16]) have built equipment with the Tofino chip inside.

Mellanox

As some of you may know, Mellanox is the vendor that was recently acquired by Nvidia; they also provide the 25G NICs in our compute nodes. Besides NICs, Mellanox has a well-established line of ASICs, mostly for switching.

Spectrum

The latest iteration of this ASIC, Spectrum 3, offers 12.8Tbps of switching capacity with an extensive feature set, including Deep Packet Inspection and NAT. This chip allows for building dense high-speed port devices, going up to 25.6Tbps[17]. Buffer-wise, there’s not much to speak of (64MB). Mellanox also builds their own hardware platforms. Unlike the other vendors below, these aren’t shipped with a Mellanox operating system; instead, they offer a variety of choices to run on top, including Cumulus Linux (which was also acquired by Nvidia 🤔).

As mentioned, while we use their NIC technology extensively, we currently don’t have any Mellanox ASIC silicon in our network.

Juniper

Juniper is a network hardware supplier, and currently the biggest supplier of network equipment for Cloudflare. As previously mentioned in the Broadcom section, Juniper buys some of their silicon from Broadcom, but they also have a significant lineup of home-grown silicon, which can be split into two families: Trio and Express.

Express

The Express family is the switching-skewed family, where bandwidth is a priority, while still maintaining a broad range of feature capabilities. These chips live in the same application landscape as the Broadcom StrataDNX chips.

Paradise (Q5)

The Q5 is the new generation of the Juniper switching ASIC[18]. While a single chip doesn’t boast a high linerate (500Gbps), when combined in a chassis with a fabric chip (a Clos network in this case), they can produce switches (or line cards) with up to 12Tbps of throughput capacity[19]. In addition to allowing for high-throughput, dense network appliances, the chip also comes with a staggering amount of buffer space (4GB per ASIC), provided by external HMC (Hybrid Memory Cube). In this HMC, they’ve also decided to put the FIB, MAC, and other tables (so no TCAM).
The Q5 chip is used in their QFX10000 lineup of switches, which includes the QFX10002-36Q, QFX10002-60C, QFX10002-72Q and QFX10008, all of which are deployed in our data centers as either edge routers or core aggregation switches.

ExpressPlus (ZX)

The ExpressPlus is the more feature-rich and faster evolution of the Paradise chip. It offers double the bandwidth per chip (1Tbps) and is built into a combined Clos-fabric reaching 6Tbps in a 2U form-factor (PTX10002). It also has an increased logical scale, which comes with bigger buffers, larger FIB storage, and more ACL space.

The ExpressPlus drives some of the PTX line of IP routers, together with its newest sibling, Triton.

Triton (BT)

Triton is the latest generation of ASIC in the Express family, with 3.6Tbps of capacity per chip, making way for some truly bandwidth-dense hardware. Both Triton and ExpressPlus are 400GE capable.

Trio

The Trio family of chips is primarily used in the feature-heavy MX routing platform, and is currently in its 5th generation.

[Image: a Juniper MPC4E-3D-32XGE line card]

Trio Eagle (Trio 4.0) (EA)

The Trio Eagle is the previous generation of the Trio Penta, and can be found on the MPC7E line cards, for example. It’s a feature-rich ASIC, with 400Gbps of forwarding capacity and significant buffer and TCAM capacity (as is to be expected from a routing platform ASIC).

Trio Penta (Trio 5.0) (ZT)

Penta is the new generation routing chip, built for the MX platform routers. On top of being a very beefy chip, capable of 500Gbps per ASIC (allowing Juniper to build line cards of up to 4Tbps of capacity), it also has a lot of features baked in, offering advanced hardware offloading for things like MACsec and Layer 3 IPsec.

The Penta chip is packaged on the MPC10E and MPC11E line card, which can be installed in multiple variations of the MX chassis routers (MX480 included).

Cisco

Last but not least, there’s Cisco. As the saying goes “nobody ever got fired for buying Cisco”, they’re the biggest vendor of network solutions around. Just like Juniper, they have a mixed product fleet of merchant silicon, as well as home-grown. While we used to operate Cisco routers as edge routers (Cisco ASR 9000), this is no longer the case. We do still use them heavily for our ToR (Top-of-Rack) switching needs, utilizing both their Nexus 5000 series and Nexus 9000 series switches.

Bigsur

Bigsur is custom silicon developed for the Nexus 6000 line of switches (confusingly, the switches themselves are called Cisco Nexus 5672UP and Cisco Nexus 6001). In our specific model, the Cisco Nexus 5672UP, there are seven of them interconnected, providing 10G and 40G connectivity. Unfortunately, Cisco is a lot more tight-lipped about their ASIC capabilities, so I can’t go as deep as I did with the Juniper chips. Feature-wise, there’s not a lot we require from them in our edge network. They’re simple Layer 2 forwarding switches, with no added requirements. Buffer-wise, they use a system called Virtual Output Queueing, just like the Juniper Express chips. Unlike the Juniper silicon, the Bigsur ASIC doesn’t come with a lot of TCAM or buffer space.

Tahoe

The Tahoe is the Cisco ASIC found in the Cisco 9300-EX switches, also known as the LSE (Leaf Spine Engine). It offers higher-density port configurations compared to the Bigsur (1.6Tbps)[20]. Overall, this ASIC is a maturation of the Bigsur silicon, offering more advanced features such as advanced VXLAN+EVPN fabrics, greater port flexibility (10G, 25G, 40G and 100G), and increased buffer sizes (40MB). We use this ASIC extensively in both our edge and core data centers.

Conclusion

A lot of different factors come into play when making the decision to purchase the next generation of Cloudflare network equipment. This post only scratches the surface of technical considerations to be made, and doesn’t come near any other factors, such as ecosystem contributions, openness, interoperability, or pricing. None of this would’ve been possible without the contributions from other network engineers—this post was written on the shoulders of giants. In particular, thanks to the excellent work by Jim Warner at UCSC, the engrossing book on the new MX platforms, written by David Roy (Day One: Inside the MX 5G), as well as the best book on the Juniper QFX lineup: Juniper QFX10000 Series by Douglas Richard Hanks Jr, and to finish it off, the Summary of Network ASICs post by Justin Pietsch.


  1. https://cloud.google.com/tpu/ ↩︎

  2. https://angel.co/company/spacex/jobs/744408-sr-fpga-asic-design-engineer ↩︎

  3. https://marketrealist.com/2017/02/wired-infrastructure-segment-protects-broadcom/ ↩︎

  4. https://www.wsj.com/articles/broadcom-lands-deals-to-place-components-in-apple-smartphones-11579821914 ↩︎

  5. https://www.globenewswire.com/news-release/2019/12/09/1958047/0/en/Broadcom-Ships-Tomahawk-4-Industry-s-Highest-Bandwidth-Ethernet-Switch-Chip-at-25-6-Terabits-per-Second.html ↩︎

  6. https://www.broadcom.com/products/ethernet-connectivity/switching/stratadnx/BCM88690 ↩︎

  7. https://www.ufispace.com/products/telco/core-edge/s9705-48d ↩︎

  8. https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2019/pdf/BRKSPG-2900.pdf ↩︎

  9. https://www.broadcom.com/products/ethernet-connectivity/switching/strataxgs/bcm56880-series ↩︎

  10. https://newsroom.intel.com/news-releases/intel-to-acquire-fulcrum-microsystems/ ↩︎

  11. https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-switch-fm5000-fm6000-datasheet.pdf ↩︎

  12. https://barefootnetworks.com/products/brief-tofino/ ↩︎

  13. http://www.sigcomm.org/node/3503 ↩︎

  14. https://github.com/p4lang/p4lang.github.io/blob/master/assets/p4_switch_model-600px.png ↩︎

  15. https://www.arista.com/assets/data/pdf/Whitepapers/7170_White_Paper.pdf ↩︎

  16. https://www.barefootnetworks.com/press-releases/barefoot-networks-to-showcase-technologies-to-build-fast-and-resilient-networks-using-deep-insight-and-tofino-powered-cisco-nexus-switches-at-cisco-live-us-2019/ ↩︎

  17. https://www.mellanox.com/products/ethernet-switches/sn4000 ↩︎

  18. https://www.juniper.net/assets/us/en/local/pdf/whitepapers/2000599-en.pdf ↩︎

  19. https://www.juniper.net/assets/us/en/local/pdf/datasheets/1000531-en.pdf#page=7 ↩︎

  20. https://www.cisco.com/c/dam/global/fr_ch/solutions/data-center-virtualization/pdf/Cisco_Nexus_9300_EX_Platform.pdf#page=8 ↩︎

Getting to the Core: Benchmarking Cloudflare’s Latest Server Hardware

Post Syndicated from Brian Bassett original https://blog.cloudflare.com/getting-to-the-core/

Maintaining a server fleet the size of Cloudflare’s is an operational challenge, to say the least. Anything we can do to lower complexity and improve efficiency has effects for our SRE (Site Reliability Engineer) and Data Center teams that can be felt throughout a server’s 4+ year lifespan.

At the Cloudflare Core, we process logs to analyze attacks and compute analytics. In 2020, our Core servers were in need of a refresh, so we decided to redesign the hardware to be more in line with our Gen X edge servers. We designed two major server variants for the core. The first is Core Compute 2020, an AMD-based server for analytics and general-purpose compute paired with solid-state storage drives. The second is Core Storage 2020, an Intel-based server with twelve spinning disks to run database workloads.

Core Compute 2020

Earlier this year, we blogged about our 10th generation edge servers or Gen X and the improvements they delivered to our edge in both performance and security. The new Core Compute 2020 server leverages many of our learnings from the edge server. The Core Compute servers run a variety of workloads including Kubernetes, Kafka, and various smaller services.

Configuration Changes (Kubernetes)

                             Previous Generation Compute    Core Compute 2020
CPU                          2 x Intel Xeon Gold 6262       1 x AMD EPYC 7642
Total Core / Thread Count    48C / 96T                      48C / 96T
Base / Turbo Frequency       1.9 / 3.6 GHz                  2.3 / 3.3 GHz
Memory                       8 x 32GB DDR4-2666             8 x 32GB DDR4-2933
Storage                      6 x 480GB SATA SSD             2 x 3.84TB NVMe SSD
Network                      Mellanox CX4 Lx 2 x 25GbE      Mellanox CX4 Lx 2 x 25GbE

Configuration Changes (Kafka)

                             Previous Generation (Kafka)    Core Compute 2020
CPU                          2 x Intel Xeon Silver 4116     1 x AMD EPYC 7642
Total Core / Thread Count    24C / 48T                      48C / 96T
Base / Turbo Frequency       2.1 / 3.0 GHz                  2.3 / 3.3 GHz
Memory                       6 x 32GB DDR4-2400             8 x 32GB DDR4-2933
Storage                      12 x 1.92TB SATA SSD           10 x 3.84TB NVMe SSD
Network                      Mellanox CX4 Lx 2 x 25GbE      Mellanox CX4 Lx 2 x 25GbE

Both previous generation servers were Intel-based platforms, with the Kubernetes server based on Xeon 6262 processors, and the Kafka server based on Xeon 4116 processors. One goal with these refreshed versions was to converge the configurations in order to simplify spare parts and firmware management across the fleet.

As the above tables show, the configurations have been converged with the only difference being the number of NVMe drives installed depending on the workload running on the host. In both cases we moved from a dual-socket configuration to a single-socket configuration, and the number of cores and threads per server either increased or stayed the same. In all cases, the base frequency of those cores was significantly improved. We also moved from SATA SSDs to NVMe SSDs.

Core Compute 2020 Synthetic Benchmarking

The heaviest user of the SSDs was determined to be Kafka. The majority of the time, Kafka is sequentially writing 2MB blocks to the disk. We created a simple FIO script with 75% sequential writes and 25% sequential reads, scaling the block size from the standard 4KB page size up to Kafka’s 2MB write size. The results aligned with what we expected from an NVMe-based drive.

[Charts: FIO benchmark results, previous generation vs. Core Compute 2020]

Core Compute 2020 Production Benchmarking

Cloudflare runs many of our Core Compute services in Kubernetes containers, some of which are multi-core. By transitioning to a single socket, problems associated with dual sockets were eliminated, and we are guaranteed to have all cores allocated for any given container on the same socket.

Another heavy workload that is constantly running on Compute hosts is the Cloudflare CSAM Scanning Tool. Our Systems Engineering team isolated a Compute 2020 compute host and the previous generation compute host, had them run just this workload, and measured the time to compare the fuzzy hashes for images to the NCMEC hash lists and verify that they are a “miss”.

Because the CSAM Scanning Tool is very compute intensive we specifically isolated it to take a look at its performance with the new hardware. We’ve spent a great deal of effort on software optimization and improved algorithms for this tool but investing in faster, better hardware is also important.

In these heatmaps, the X axis represents time, and the Y axis represents “buckets” of time taken to verify that it is not a match to one of the NCMEC hash lists. For a given time slice in the heatmap, the red point is the bucket with the most times measured, the yellow point the second most, and the green points the least. The red points on the Compute 2020 graph are all in the 5 to 8 millisecond bucket, while the red points on the previous Gen heatmap are all in the 8 to 13 millisecond bucket, which shows that on average, the Compute 2020 host is verifying hashes significantly faster.

[Heatmaps: hash verification times, previous generation vs. Core Compute 2020]

Core Storage 2020

Another major workload we identified was ClickHouse, which performs analytics over large datasets. The last time we upgraded our servers running ClickHouse was back in 2018.

Configuration Changes

                             Previous Generation             Core Storage 2020
CPU                          2 x Intel Xeon E5-2630 v4       1 x Intel Xeon Gold 6210U
Total Core / Thread Count    20C / 40T                       20C / 40T
Base / Turbo Frequency       2.2 / 3.1 GHz                   2.5 / 3.9 GHz
Memory                       8 x 32GB DDR4-2400              8 x 32GB DDR4-2933
Storage                      12 x 10TB 7200 RPM 3.5” SATA    12 x 10TB 7200 RPM 3.5” SATA
Network                      Mellanox CX4 Lx 2 x 25GbE       Mellanox CX4 Lx 2 x 25GbE

CPU Changes

For ClickHouse, we use a 1U chassis with 12 x 10TB 3.5” hard drives. At the time we were designing Core Storage 2020, our server vendor did not yet have an AMD version of this chassis, so we remained on Intel. However, we moved Core Storage 2020 to a single 20 core / 40 thread Xeon processor, rather than the previous generation’s dual-socket 10 core / 20 thread processors. By moving to the single-socket Xeon 6210U processor, we were able to keep the same core count, but gained roughly 14% higher base frequency and 26% higher max turbo frequency. Meanwhile, the total CPU thermal design power (TDP), which is an approximation of the maximum power the CPU can draw, went down from 165W to 150W.

On a dual-socket server, remote memory accesses, which are memory accesses by a process on socket 0 to memory attached to socket 1, incur a latency penalty, as seen in this table:

                                        Previous Generation    Core Storage 2020
Memory latency, socket 0 to socket 0    81.3 ns                86.9 ns
Memory latency, socket 0 to socket 1    142.6 ns               N/A

An additional advantage of having a CPU with all 20 cores on the same socket is the elimination of these remote memory accesses, which take 76% longer than local memory accesses.

Memory Changes

The memory in the Core Storage 2020 host is rated for operation at 2933 MHz; however, in the 8 x 32GB configuration we need on these hosts, the Intel Xeon 6210U processor clocks it at 2666 MHz. Compared to the previous generation, this gives us a 13% boost in memory speed. While we would get a slightly higher clock speed with a balanced six-DIMM configuration, we determined that we are willing to sacrifice that in order to have the additional RAM capacity provided by the 8 x 32GB configuration.

Storage Changes

Data capacity stayed the same, with 12 x 10TB SATA drives in a RAID 0 configuration for best throughput. Unlike the previous generation, the drives in the Core Storage 2020 host are helium filled. Helium produces less drag than air, resulting in potentially lower latency.

Core Storage 2020 Synthetic benchmarking

We performed synthetic four corners benchmarking: IOPS measurements of random reads and writes using 4k block size, and bandwidth measurements of sequential reads and writes using 128k block size. We used the fio tool to see what improvements we would get in a lab environment. The results show a 10% latency improvement and 11% IOPS improvement in random read performance. Random write testing shows 38% lower latency and 60% higher IOPS. Write throughput is improved by 23%, and read throughput is improved by a whopping 90%.

                                                     Previous Generation    Core Storage 2020    % Improvement
4k Random Reads (IOPS)                               3,384                  3,758                11.0%
4k Random Read Mean Latency (ms, lower is better)    75.4                   67.8                 10.1% lower
4k Random Writes (IOPS)                              4,009                  6,397                59.6%
4k Random Write Mean Latency (ms, lower is better)   63.5                   39.7                 37.5% lower
128k Sequential Reads (MB/s)                         1,155                  2,195                90.0%
128k Sequential Writes (MB/s)                        1,265                  1,558                23.2%

CPU frequencies

The higher base and turbo frequencies of the Core Storage 2020 host’s Xeon 6210U processor allowed that processor to achieve higher average frequencies while running our production ClickHouse workload. A recent snapshot of two production hosts showed the Core Storage 2020 host being able to sustain an average of 31% higher CPU frequency while running ClickHouse.

                       Previous generation (average core frequency)    Core Storage 2020 (average core frequency)    % improvement
Mean Core Frequency    2441 MHz                                        3199 MHz                                      31%

Core Storage 2020 Production benchmarking

Our ClickHouse database hosts are continually performing merge operations to optimize the database data structures. Each individual merge operation takes just a few seconds on average, but since they’re constantly running, they can consume significant resources on the host. We sampled the average merge time every five minutes over seven days, and then aggregated the data to find the average, minimum, and maximum merge times reported by a Core Storage 2020 host and by a previous generation host. Results are summarized below.

ClickHouse merge operation performance improvement
(time in seconds, lower is better)

Time                  Previous generation    Core Storage 2020    % improvement
Mean time to merge    1.83                   1.15                 37% lower
Maximum merge time    3.51                   2.35                 33% lower
Minimum merge time    0.68                   0.32                 53% lower

Our lab-measured CPU frequency and storage performance improvements on Core Storage 2020 have translated into significantly reduced times to perform this database operation.

Conclusion

With our Core 2020 servers, we were able to realize significant performance improvements, both in synthetic benchmarking outside production and in the production workloads we tested. This will allow Cloudflare to run the same workloads on fewer servers, saving CapEx costs and data center rack space. The similarity of the configuration of the Kubernetes and Kafka hosts should help with fleet management and spare parts management. For our next redesign, we will try to further converge the designs on which we run the major Core workloads to further improve efficiency.

Special thanks to Will Buckner and Chris Snook for their help in the development of these servers, and to Tim Bart for validating CSAM Scanning Tool’s performance on Compute.

Anchoring Trust: A Hardware Secure Boot Story

Post Syndicated from Derek Chamorro original https://blog.cloudflare.com/anchoring-trust-a-hardware-secure-boot-story/

As a security company, we pride ourselves on finding innovative ways to protect our platform to, in turn, protect the data of our customers. Part of this approach is implementing progressive methods in protecting our hardware at scale. While we have blogged about how we address security threats from application to memory, the attacks on hardware, as well as firmware, have increased substantially. The data cataloged in the National Vulnerability Database (NVD) has shown the frequency of hardware and firmware-level vulnerabilities rising year after year.

Technologies like secure boot, common in desktops and laptops, have been ported over to the server industry as a method to combat firmware-level attacks and protect a device’s boot integrity. These technologies require that you create a trust ‘anchor’, an authoritative entity for which trust is assumed and not derived. A common trust anchor is the system Basic Input/Output System (BIOS) or the Unified Extensible Firmware Interface (UEFI) firmware.

While this ensures that the device boots only signed firmware and operating system bootloaders, does it protect the entire boot process? What protects the BIOS/UEFI firmware from attacks?

The Boot Process

Before we discuss how we secure our boot process, we will first go over how we boot our machines.

[Diagram: the boot process]

The above image shows the following sequence of events:

  • After powering on the system (through a baseboard management controller (BMC) or physically pushing a button on the system), the system unconditionally executes the UEFI firmware residing on a flash chip on the motherboard.
  • UEFI performs some hardware and peripheral initialization and executes the Preboot Execution Environment (PXE) code, which is a small program that boots an image over the network and usually resides on a flash chip on the network card.
  • PXE sets up the network card, and downloads and executes a small bootloader program through an open source boot firmware called iPXE.
  • iPXE loads a script that automates a sequence of commands for the bootloader to know how to boot a specific operating system (sometimes several of them). In our case, it loads our Linux kernel, initrd (this contains device drivers which are not directly compiled into the kernel), and a standard Linux root filesystem. After loading these components, the bootloader executes and hands off the control to the kernel.
  • Finally, the Linux kernel loads any additional drivers it needs and starts applications and services.

UEFI Secure Boot

Our UEFI secure boot process is fairly straightforward, albeit customized for our environments. After loading the UEFI firmware from the bootloader, an initialization script defines the following variables:

Platform Key (PK): It serves as the cryptographic root of trust for secure boot, giving capabilities to manipulate and/or validate the other components of the secure boot framework.

Trusted Database (DB): Contains a signed (by platform key) list of hashes of all PCI option ROMs, as well as a public key, which is used to verify the signature of the bootloader and the kernel on boot.

These variables are respectively the master platform public key, which is used to sign all other resources, and an allow list database containing other certificates, binary file hashes, etc. In standard secure boot scenarios, Microsoft keys are used by default. At Cloudflare we use our own, which makes us the root of trust for UEFI:

[Image: Cloudflare keys installed as the UEFI root of trust]
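
As an aside, on a booted Linux machine you can check which of these variables are populated through efivarfs; a quick sketch (the GUID suffixes are the standard EFI global-variable and image-security-database GUIDs):

# Check which UEFI secure boot variables are present via efivarfs on Linux.
from pathlib import Path

EFIVARS = Path("/sys/firmware/efi/efivars")
VARIABLES = {
    "PK": "PK-8be4df61-93ca-11d2-aa0d-00e098032b8c",   # EFI global variable GUID
    "db": "db-d719b2cb-3d3a-4596-a3bc-dad00e67656f",   # image security database GUID
}

for name, filename in VARIABLES.items():
    path = EFIVARS / filename
    if path.exists():
        print(f"{name}: present, {path.stat().st_size} bytes")
    else:
        print(f"{name}: not set")
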

But, by setting our trust anchor in the UEFI firmware, what attack vectors still exist?

UEFI Attacks

As stated previously, firmware and hardware attacks are on the rise. It is clear from the figure below that firmware-related vulnerabilities have increased significantly over the last 10 years, especially since 2017, when the hacker community started attacking the firmware on different platforms:

[Chart: firmware-related vulnerabilities reported per year]

This upward trend, coupled with recent malware findings in UEFI, shows that trusting firmware is becoming increasingly problematic.

By tainting the UEFI firmware image, you poison the entire boot trust chain. The ability to trust firmware integrity is important beyond secure boot. For example, if you can’t trust the firmware not to be compromised, you can’t trust things like trusted platform module (TPM) measurements to be accurate, because the firmware is itself responsible for doing these measurements (e.g. a TPM is not an on-path security mechanism; it requires firmware to interact and cooperate with it). Firmware may be crafted to extend measurements that are accepted by a remote attestor, but that don’t represent what’s being locally loaded. This could result in a questionable measured boot and remote attestation procedure.

If we can’t trust firmware, then hardware becomes our last line of defense.

Hardware Root of Trust

Early this year, we made a series of blog posts on why we chose AMD EPYC processors for our Gen X servers. With security in mind, we started turning on features that were available to us and set forth the plan of using AMD silicon as a Hardware Root of Trust (HRoT).

Platform Secure Boot (PSB) is AMD’s implementation of hardware-rooted boot integrity. Why is it better than UEFI firmware-based root of trust? Because it is intended to assert, by a root of trust anchored in the hardware, the integrity and authenticity of the System ROM image before it can execute. It does so by performing the following actions:

  • Authenticates the first block of BIOS/UEFI prior to releasing x86 CPUs from reset.
  • Authenticates the System Read-Only Memory (ROM) contents on each boot, not just during updates.
  • Moves the UEFI Secure Boot trust chain to immutable hardware.

This is accomplished by the AMD Platform Security Processor (PSP), an ARM Cortex-A5 microcontroller that is an immutable part of the system on chip (SoC). The PSB consists of two components:

On-chip Boot ROM

  • Embeds a SHA384 hash of an AMD root signing key
  • Verifies and then loads the off-chip PSP bootloader located in the boot flash

Off-chip Bootloader

  • Locates the PSP directory table that allows the PSP to find and load various images
  • Authenticates first block of BIOS/UEFI code
  • Releases CPUs after successful authentication
[Diagram: AMD Platform Secure Boot flow]
  1. The PSP secures the On-chip Boot ROM code, loads the off-chip PSP firmware into PSP static random access memory (SRAM) after authenticating the firmware, and passes control to it.
  2. The Off-chip Bootloader (BL) loads and specifies applications in a specific order (whether or not the system goes into a debug state and then a secure EFI application binary interface to the BL)
  3. The system continues initialization through each bootloader stage.
  4. If each stage passes, then the UEFI image is loaded and the x86 cores are released.

Now that we know the booting steps, let’s build an image.

Build Process

Public Key Infrastructure

Before the image gets built, a public key infrastructure (PKI) is created to generate the key pairs involved for signing and validating signatures:

[Diagram: PSB public key infrastructure]

Our original design manufacturer (ODM), as a trust extension, creates a key pair (public and private) that is used to sign the first segment of the BIOS (private key) and validate that segment on boot (public key).

On AMD’s side, they have a key pair that is used to sign (the AMD root signing private key) and certify the public key created by the ODM. This is validated by AMD’s root signing public key, which is stored as a hash value (RSASSA-PSS: SHA-384 with 4096-bit key is used as the hashing algorithm for both message and mask generation) in SPI-ROM.

Private keys (both AMD and ODM) are stored in hardware security modules.

Because of the way the PKI mechanisms are built, the system cannot be compromised if only one of the keys is leaked. This is an important piece of the trust hierarchy that is used for image signing.

Certificate Signing Request

Once the PKI infrastructure is established, a BIOS signing key pair is created, together with a certificate signing request (CSR). The CSR uses the familiar subject name fields that many are used to:

  • countryName
  • stateOrProvinceName
  • localityName
  • organizationName

In addition to the fields above, the CSR will contain a serialNumber field, a 32-bit integer value represented in ASCII HEX format that encodes the following values:

  • PLATFORM_VENDOR_ID: An 8-bit integer value assigned by AMD for each ODM.
  • PLATFORM_MODEL_ID: A 4-bit integer value assigned to a platform by the ODM.
  • BIOS_KEY_REVISION_ID: is set by the ODM encoding a 4-bit key revision as unary counter value.
  • DISABLE_SECURE_DEBUG: Fuse bit that controls whether secure debug unlock feature is disabled permanently.
  • DISABLE_AMD_BIOS_KEY_USE: Fuse bit that controls if the BIOS, signed by an AMD key, (with vendor ID == 0) is permitted to boot on a CPU with non-zero Vendor ID.
  • DISABLE_BIOS_KEY_ANTI_ROLLBACK: Fuse bit that controls whether BIOS key anti-rollback feature is enabled.
[Diagram: serialNumber field encoding]

Remember these values, as we’ll show how we use them in a bit. Any of the DISABLE values are optional, but recommended based on your security posture/comfort level.
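
As an illustration of how such a field could be packed, here is a sketch; the field widths come from the list above, but the bit positions are assumptions made for the example and do not reflect AMD’s actual encoding:

# Hypothetical packing of the 32-bit serialNumber value into ASCII hex.
# Field widths follow the list above; the bit positions are assumed for
# illustration only and are NOT AMD's real layout.
def pack_serial_number(platform_vendor_id, platform_model_id, bios_key_revision,
                       disable_secure_debug=0, disable_amd_bios_key_use=0,
                       disable_bios_key_anti_rollback=0):
    assert platform_vendor_id < (1 << 8)
    assert platform_model_id < (1 << 4) and bios_key_revision < (1 << 4)
    value = (platform_vendor_id
             | (platform_model_id << 8)
             | (bios_key_revision << 12)
             | (disable_secure_debug << 16)
             | (disable_amd_bios_key_use << 17)
             | (disable_bios_key_anti_rollback << 18))
    return f"{value:08X}"  # 32-bit integer represented in ASCII HEX

print(pack_serial_number(0x12, 0x3, 0b0001, disable_secure_debug=1))  # -> "00011312"
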

AMD, upon processing the CSR, provides the public part of the BIOS signing key signed and certified by the AMD signing root key as a RSA Public Key Token file (.stkn) format.

Putting It All Together

The following is a step-by-step illustration of how signed UEFI firmware is built:

[Diagram: signed UEFI firmware build process]
  1. The ODM submits their public key used for signing Cloudflare images to AMD.
  2. AMD signs this key using their RSA private key and passes it back to ODM.
  3. The AMD public key and the signed ODM public key are part of the final BIOS SPI image.
  4. The BIOS source code is compiled and various BIOS components (PEI Volume, Driver eXecution Environment (DXE) volume, NVRAM storage, etc.) are built as usual.
  5. The PSP directory and BIOS directory are built next. PSP directory and BIOS directory table points to the location of various firmware entities.
  6. The ODM builds the signed BIOS Root of Trust Measurement (RTM) signature based on the blob of BIOS PEI volume concatenated with BIOS Directory header, and generates the digital signature of this using the private portion of ODM signing key. The SPI location for signed BIOS RTM code is finally updated with this signature blob.
  7. Finally, the BIOS binaries, PSP directory, BIOS directory and various firmware binaries are combined to build the SPI BIOS image.

Enabling Platform Secure Boot

Platform Secure Boot is enabled at boot time with a PSB-ready firmware image. PSB is configured using a region of one-time programmable (OTP) fuses, specified for the customer. OTP fuses are on-chip non-volatile memory (NVM) that permits data to be written to memory only once. There is NO way to roll the fused CPU back to an unfused one.

Enabling PSB in the field will go through two steps: fusing and validating.

  • Fusing: Fuse the values assigned in the serialNumber field that was generated in the CSR
  • Validating: Validate the fused values and the status code registers

If validation is successful, the BIOS RTM signature is validated using the ODM BIOS signing key, PSB-specific registers (MP0_C2P_MSG_37 and MP0_C2P_MSG_38) are updated with the PSB status and fuse values, and the x86 cores are released.

If validation fails, the registers above are updated with the PSB error status and fuse values, and the x86 cores stay in a locked state.

Let’s Boot!

With a signed image in hand, we are ready to enable PSB on a machine. We chose to deploy this on a few machines that had an updated, unsigned AMI UEFI firmware image, in this case version 2.16. We use a couple of different firmware update tools, so, after a quick script, we ran an update to change the firmware version from 2.16 to 2.18C (the signed image):

$ sudo ./UpdateAll.sh
Bin file name is ****.218C

BEGIN

+---------------------------------------------------------------------------+
|                 AMI Firmware Update Utility v5.11.03.1778                 |      
|                 Copyright (C)2018 American Megatrends Inc.                |                       
|                         All Rights Reserved.                              |
+---------------------------------------------------------------------------+
Reading flash ............... done
FFS checksums ......... ok
Check RomLayout ........ ok.
Erasing Boot Block .......... done
Updating Boot Block ......... done
Verifying Boot Block ........ done
Erasing Main Block .......... done
Updating Main Block ......... done
Verifying Main Block ........ done
Erasing NVRAM Block ......... done
Updating NVRAM Block ........ done
Verifying NVRAM Block ....... done
Erasing NCB Block ........... done
Updating NCB Block .......... done
Verifying NCB Block ......... done

Process completed.

After the update completed, we rebooted:

[Image: rebooting after flashing the signed image]

After a successful install, we validated that the image was correct via the sysfs information provided in the dmidecode output:

[Output: dmidecode showing the signed firmware image]

Testing

With a signed image installed, we wanted to test that it worked, meaning: what if an unauthorized user installed their own firmware image? We did this by downgrading the image back to an unsigned image, 2.16. In theory, the machine shouldn’t boot as the x86 cores should stay in a locked state. After downgrading, we rebooted and got the following:

[Image: the result of booting with an unsigned image]

This isn’t a powered down machine, but the result of booting with an unsigned image.

Flashing back to a signed image is done by running the same flashing utility through the BMC, so we weren’t bricked. Nonetheless, the results were successful.

Naming Convention

Our standard UEFI firmware images are alphanumeric, making it difficult to distinguish (by name) between a signed and an unsigned image (v2.16A vs. v2.18C, for example). There isn’t a remote attestation capability (yet) to probe the PSB status registers or to store these values by means of a signature (e.g. a TPM quote). As we transitioned to PSB, we wanted to make this easier to determine by adding a specific suffix, -sig, that we could query in userspace. This would allow us to query this information via Prometheus. Changing the file name alone wouldn’t do it, so we had to make the following changes to reflect a new naming convention for signed images:

  • Update filename
  • Update BIOS version for setup menu
  • Update post message
  • Update SMBIOS type 0 (BIOS version string identifier)

Signed images now have a -sig suffix:

~$ sudo dmidecode -t0
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 3.3.0 present.
# SMBIOS implementations newer than version 3.2.0 are not
# fully supported by this version of dmidecode.

Handle 0x0000, DMI type 0, 26 bytes
BIOS Information
	Vendor: American Megatrends Inc.
	Version: V2.20-sig
	Release Date: 09/29/2020
	Address: 0xF0000
	Runtime Size: 64 kB
	ROM Size: 16 MB
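
Since SMBIOS data is exposed through sysfs, a userspace check for the suffix (the sort of thing a Prometheus exporter could scrape) is tiny; a minimal sketch:

# Minimal sketch: flag signed firmware by checking for the -sig suffix in the
# BIOS version string the kernel exposes under sysfs.
from pathlib import Path

bios_version = Path("/sys/class/dmi/id/bios_version").read_text().strip()
bios_is_signed = int(bios_version.endswith("-sig"))
print(f'bios_version="{bios_version}" bios_is_signed={bios_is_signed}')
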

Conclusion

Finding weaknesses in firmware is a challenge that many attackers have taken on. Attacks that physically manipulate the firmware used for performing hardware initialization during the booting process can invalidate many of the common secure boot features that are considered industry standard. By implementing a hardware root of trust that is used for code signing critical boot entities, your hardware becomes a ‘first line of defense’ in ensuring that your server hardware and software integrity can derive trust through cryptographic means.

What’s Next?

While this post discussed our current, AMD-based hardware platform, how will this affect our future hardware generations? One of the benefits of working with diverse vendors like AMD and Ampere (ARM) is that we can ensure they are baking in our desired platform security by default (which we’ll speak about in a future post), making our hardware security outlook that much brighter 😀.

Swiss-Swedish Diplomatic Row Over Crypto AG

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/swiss-swedish-diplomatic-row-over-crypto-ag.html

Previously I have written about the Swedish-owned Swiss-based cryptographic hardware company: Crypto AG. It was a CIA-owned Cold War operation for decades. Today it is called Crypto International, still based in Switzerland but owned by a Swedish company.

It’s back in the news:

Late last week, Swedish Foreign Minister Ann Linde said she had canceled a meeting with her Swiss counterpart Ignazio Cassis slated for this month after Switzerland placed an export ban on Crypto International, a Swiss-based and Swedish-owned cybersecurity company.

The ban was imposed while Swiss authorities examine long-running and explosive claims that a previous incarnation of Crypto International, Crypto AG, was little more than a front for U.S. intelligence-gathering during the Cold War.

Linde said the Swiss ban was stopping “goods” — which experts suggest could include cybersecurity upgrades or other IT support needed by Swedish state agencies — from reaching Sweden.

She told public broadcaster SVT that the meeting with Cassis was “not appropriate right now until we have fully understood the Swiss actions.”

EDITED TO ADD (10/13): Lots of information on Crypto AG.

Securing the International IoT Supply Chain

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/07/securing_the_in_1.html

Together with Nate Kim (former student) and Trey Herr (Atlantic Council Cyber Statecraft Initiative), I have written a paper on IoT supply chain security. The basic problem we try to solve is: how do you enforce IoT security regulations when most of the stuff is made in other countries? And our solution is: enforce the regulations on the domestic company that’s selling the stuff to consumers. There’s a lot of detail between here and there, though, and it’s all in the paper.

We also wrote a Lawfare post:

…we propose to leverage these supply chains as part of the solution. Selling to U.S. consumers generally requires that IoT manufacturers sell through a U.S. subsidiary or, more commonly, a domestic distributor like Best Buy or Amazon. The Federal Trade Commission can apply regulatory pressure to this distributor to sell only products that meet the requirements of a security framework developed by U.S. cybersecurity agencies. That would put pressure on manufacturers to make sure their products are compliant with the standards set out in this security framework, including pressuring their component vendors and original device manufacturers to make sure they supply parts that meet the recognized security framework.

News article.

Another Intel Speculative Execution Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/another_intel_s.html

Remember Spectre and Meltdown? Back in early 2018, I wrote:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

That has turned out to be true. Here’s a new vulnerability:

On Tuesday, two separate academic teams disclosed two new and distinctive exploits that pierce Intel’s Software Guard eXtension, by far the most sensitive region of the company’s processors.

[…]

The new SGX attacks are known as SGAxe and CrossTalk. Both break into the fortified CPU region using separate side-channel attacks, a class of hack that infers sensitive data by measuring timing differences, power consumption, electromagnetic radiation, sound, or other information from the systems that store it. The assumptions for both attacks are roughly the same. An attacker has already broken the security of the target machine through a software exploit or a malicious virtual machine that compromises the integrity of the system. While that’s a tall bar, it’s precisely the scenario that SGX is supposed to defend against.

Another news article.

Used Tesla Components Contain Personal Information

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/used_tesla_comp.html

Used Tesla components, sold on eBay, still contain personal information, even after a factory reset.

This is a decades-old problem. It’s a problem with used hard drives. It’s a problem with used photocopiers and printers. It will be a problem with IoT devices. It’ll be a problem with everything, until we decide that data deletion is a priority.

Wi-Fi Chip Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/03/wi-fi_chip_vuln.html

There’s a vulnerability in Wi-Fi hardware that breaks the encryption:

The vulnerability exists in Wi-Fi chips made by Cypress Semiconductor and Broadcom, the latter a chipmaker Cypress acquired in 2016. The affected devices include iPhones, iPads, Macs, Amazon Echos and Kindles, Android devices, and Wi-Fi routers from Asus and Huawei, as well as the Raspberry Pi 3. Eset, the security company that discovered the vulnerability, said the flaw primarily affects Cypress’ and Broadcom’s FullMAC WLAN chips, which are used in billions of devices. Eset has named the vulnerability Kr00k, and it is tracked as CVE-2019-15126.

Manufacturers have made patches available for most or all of the affected devices, but it’s not clear how many devices have installed the patches. Of greatest concern are vulnerable wireless routers, which often go unpatched indefinitely.

That’s the real problem. Many of these devices won’t get patched — ever.

When Bloom filters don’t bloom

Post Syndicated from Marek Majkowski original https://blog.cloudflare.com/when-bloom-filters-dont-bloom/

I’ve known about Bloom filters (named after Burton Bloom) since university, but I haven’t had an opportunity to use them in anger. Last month this changed – I became fascinated with the promise of this data structure, but I quickly realized it had some drawbacks. This blog post is the tale of my brief love affair with Bloom filters.

While doing research about IP spoofing, I needed to examine whether the source IP addresses extracted from packets reaching our servers were legitimate, depending on the geographical location of our data centers. For example, source IPs belonging to a legitimate Italian ISP should not arrive in a Brazilian datacenter. This problem might sound simple, but in the ever-evolving landscape of the internet this is far from easy. Suffice it to say I ended up with many large text files with data like this:

[Sample line from the harvested data files]

This reads as: the IP 192.0.2.1 was recorded reaching Cloudflare data center number 107 with a legitimate request. This data came from many sources, including our active and passive probes, logs of certain domains we own (like cloudflare.com), public sources (like BGP table), etc. The same line would usually be repeated across multiple files.

I ended up with a gigantic collection of data of this kind. At some point I counted 1 billion lines across all the harvested sources. I usually write bash scripts to pre-process the inputs, but at this scale that approach wasn’t working. For example, removing duplicates from this tiny file of a meager 600MiB and 40M lines took… about an eternity:

[Screenshot: deduplicating the file with 'sort' took more than two minutes]

Suffice it to say that deduplicating lines using the usual bash commands like 'sort' in various configurations (see '--parallel', '--buffer-size' and '--unique') was not optimal for such a large data set.

Bloom filters to the rescue

[Diagram of a Bloom filter. Image by David Eppstein, public domain]

Then I had a brainwave – it’s not necessary to sort the lines! I just need to remove duplicated lines – using some kind of "set" data structure should be much faster. Furthermore, I roughly know the cardinality of the input file (number of unique lines), and I can live with some data points being lost – using a probabilistic data structure is fine!

Bloom filters are a perfect fit!

While you should go and read Wikipedia on Bloom Filters, here is how I look at this data structure.

How would you implement a "set"? Given a perfect hash function and infinite memory, we could just create an infinite bit array and set bit number 'hash(item)' for each item we encounter. This would give us a perfect "set" data structure. Right? Trivial. Sadly, hash functions have collisions and infinite memory doesn’t exist, so we have to compromise. But we can calculate and manage the probability of collisions. For example, imagine we have a good hash function and 128GiB of memory, which is 2^40 bits. The probability of the second item we add colliding with the first is then 1 in 1,099,511,627,776. The probability of collisions worsens as we add more items and fill up the bit array.

Furthermore, we could use more than one hash function, and end up with a denser bit array. This is exactly what Bloom filters optimize for. A Bloom filter is a bunch of math on top of the four variables:

  • ‘n’ – The number of input elements (cardinality)
  • ‘m’ – Memory used by the bit-array
  • ‘k’ – Number of hash functions computed for each input
  • ‘p’ – Probability of a false positive match

Given the ‘n’ input cardinality and the ‘p’ desired probability of false positive, the Bloom filter math returns the ‘m’ memory required and ‘k’ number of hash functions needed.
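
To make that concrete, here is the textbook calculation as a small standalone example (this is the standard formula, not code from the post; as described below, 'mmuniq-bloom' itself rounds 'm' up to a power of two and defaults to k=8):

#include <math.h>
#include <stdio.h>

/* Textbook Bloom filter sizing:
 *   m = -n * ln(p) / (ln 2)^2   bits of memory
 *   k = (m / n) * ln 2          hash functions
 */
int main(void) {
    double n = 40e6;   /* expected number of unique lines */
    double p = 1e-4;   /* desired false positive probability, 1 in 10k */

    double m = ceil(-n * log(p) / (log(2) * log(2)));
    double k = round((m / n) * log(2));

    printf("bits needed: %.0f (about %.0f MiB)\n", m, m / 8.0 / (1024 * 1024));
    printf("hash functions: %.0f\n", k);
    return 0;
}

For n=40M and p=1 in 10k this comes out to roughly 90 MiB and 13 hash functions; with the bit array rounded up to 128MiB, 8 hashes are enough to stay below the same false positive rate.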

Check out this excellent visualization by Thomas Hurst showing how parameters influence each other:

mmuniq-bloom

Guided by this intuition, I set out on a journey to add a new tool to my toolbox – ‘mmuniq-bloom’, a probabilistic tool that, given input on STDIN, returns only unique lines on STDOUT, hopefully much faster than ‘sort’ + ‘uniq’ combo!

Here it is:
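
The full listing was linked from the original post; a heavily stripped-down sketch of the idea looks roughly like this (FNV-1a and the rotation-derived second hash stand in for the real siphash24-based hashing, and error handling is omitted):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LOG2_BITS 30                     /* 2^30 bits = 128 MiB bit array */
#define NBITS     (1ULL << LOG2_BITS)
#define MASK      (NBITS - 1)
#define K         8                      /* number of hash functions */

/* FNV-1a, a stand-in for the siphash24-based hashing in the real tool. */
static uint64_t hash64(const char *s, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)s[i];
        h *= 1099511628211ULL;
    }
    return h;
}

int main(void) {
    uint64_t *bitmap = calloc(NBITS / 64, sizeof(uint64_t)); /* 128 MiB, zeroed */
    char line[4096];

    while (fgets(line, sizeof line, stdin)) {
        uint64_t h1 = hash64(line, strlen(line));
        uint64_t h2 = (h1 >> 32) | (h1 << 32);  /* cheap derived second hash */
        int seen = 1;
        for (int i = 0; i < K; i++) {
            /* Power-of-two sized array: mask with AND instead of '%'. */
            uint64_t bit = (h1 + (uint64_t)i * h2) & MASK;
            uint64_t w = bit / 64, b = bit % 64;
            if (!(bitmap[w] & (1ULL << b))) {
                seen = 0;                 /* at least one unset bit: new line */
                bitmap[w] |= 1ULL << b;
            }
        }
        if (!seen)
            fputs(line, stdout);
    }
    free(bitmap);
    return 0;
}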

For simplicity and speed I designed 'mmuniq-bloom' with a couple of assumptions. First, unless otherwise instructed, it uses 8 hash functions (k=8). This seems to be a close-to-optimal number for the data sizes I’m working with, and the hash function can quickly output 8 decent hashes. Then we align 'm', the number of bits in the bit array, to be a power of two. This avoids the pricey modulo operation ('%'), which compiles down to the slow 'div' assembly instruction. With power-of-two sizes we can just do a bitwise AND. (For a fun read, see how compilers can optimize some divisions by using multiplication by a magic constant.)

We can now run it against the same data file we used before:

[Screenshot: 'mmuniq-bloom' processes the file in about 12 seconds]

Oh, this is so much better! 12 seconds is much more manageable than the 2 minutes before. But hold on… The program is using an optimized data structure, a relatively limited memory footprint, optimized line-parsing and good output buffering… 12 seconds is still an eternity compared to the 'wc -l' tool:

[Screenshot: 'wc -l' counts the same file in about 0.45 seconds of CPU time]

What is going on? I understand that counting lines by ‘wc’ is easier than figuring out unique lines, but is it really worth the 26x difference? Where does all the CPU in ‘mmuniq-bloom’ go?

It must be my hash function. 'wc' doesn’t need to spend all this CPU performing all this strange math for each of the 40M input lines. I’m using the fairly non-trivial 'siphash24' hash function, so it surely burns the CPU, right? Let’s check by running code that computes the hash function but doesn’t do any of the Bloom filter operations:

[Screenshot: computing only the hashes takes about 2 seconds]

This is strange. Computing the hash function indeed costs about 2s, but the whole program took 12s in the previous run. Does the Bloom filter alone really take 10 seconds? How is that possible? It’s such a simple data structure…

A secret weapon – a profiler

It was time to use a proper tool for the task – let’s fire up a profiler and see where the CPU goes. First, let’s run 'strace' to confirm we are not making any unexpected syscalls:

[Screenshot: 'strace' output showing ten 'mmap' calls of roughly 4 ms each]

Everything looks good. The 10 calls to 'mmap', each taking about 4ms (3,971 µs), are intriguing, but that’s fine. We pre-populate memory up front with 'MAP_POPULATE' to save on page faults later.
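
For reference, pre-faulting a large anonymous mapping with 'MAP_POPULATE' looks roughly like this (a generic Linux snippet, not the tool's actual allocation code):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 128UL << 20;   /* 128 MiB, e.g. a Bloom filter bit array */

    /* MAP_POPULATE asks the kernel to fault all pages in up front, trading a
     * slower mmap() for no page faults on first access later. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    munmap(p, len);
    return 0;
}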

What is the next step? Of course Linux’s ‘perf’!

[Screenshot: recording the run with 'perf record']

Then we can see the results:

[Screenshot: 'perf report' attributing 87.2% of cycles to the hot 'process_line' function]

Right, so we indeed burn 87.2% of cycles in our hot code. Let’s see where exactly. Doing 'perf annotate process_line --source' quickly shows something I didn’t expect.

[Screenshot: 'perf annotate' output attributing 26.90% of cycles to a single 'mov' instruction]

You can see that 26.90% of the CPU was burned in the 'mov', but that’s not all of it! The compiler correctly inlined the function and unrolled the loop 8-fold. Summed up, that 'mov' (the 'uint64_t v = *p' line) accounts for the great majority of cycles!

Clearly 'perf' must be mistaken; how can such a simple line cost so much? We can repeat the benchmark with any other profiler and it will show us the same problem. For example, I like using 'google-perftools' with kcachegrind since they emit eye-candy charts:

[Screenshot: profiling the run with 'google-perftools']

The rendered result looks like this:

[Screenshot: kcachegrind visualization pointing at the same memory load]

Allow me to summarise what we found so far.

The generic 'wc' tool takes 0.45s of CPU time to process the 600MiB file. Our optimized 'mmuniq-bloom' tool takes 12 seconds. The CPU is burned on one 'mov' instruction, dereferencing memory…

[Image by Jose Nicdao, CC BY 2.0]

Oh! How could I have forgotten? Random memory access is slow! It’s very, very, very slow!

According to the rule of thumb "latency numbers every programmer should know", one RAM fetch takes about 100ns. Let’s do the math: 40 million lines, with 8 hashes computed for each line. Since our Bloom filter is 128MiB, on our older hardware it doesn’t fit into the L3 cache. The hashes are uniformly distributed across the large memory range – each hash generates a memory miss. Adding it all together, that’s…

40M lines × 8 hashes per line × 100 ns per fetch ≈ 32 seconds

That suggests 32 seconds burned just on memory fetches. The real program is faster, taking only 12s. This is because, although the Bloom filter data does not completely fit into L3 cache, it still gets some benefit from caching. It’s easy to see with ‘perf stat -d’:

[Screenshot: 'perf stat -d' reporting about 280M LLC-load-misses]

Right, so we should have had at least 320M LLC-load-misses, but we had only 280M. This still doesn’t explain why the program was running only 12 seconds, but it doesn’t really matter. What matters is that the number of cache misses is a real problem, and we can only fix it by reducing the number of memory accesses. Let’s try tuning the Bloom filter to use only one hash function:

[Screenshot: the single-hash run, using 64 GiB of memory, with about 22 seconds of system time and 11 seconds of user time]

Ouch! That really hurt! The Bloom filter required 64 GiB of memory to get our desired false positive probability of 1 error per 10k lines. This is terrible!

Also, it doesn’t seem like we improved much. It took the OS 22 seconds to prepare memory for us, but we still burned 11 seconds in userspace. I guess this time any benefits from hitting memory less often were offset by lower cache-hit probability due to drastically increased memory size. In previous runs we required only 128MiB for the Bloom filter!

Dumping Bloom filters altogether

This is getting ridiculous. To get the same false positive guarantees, we either must use many hashes in the Bloom filter (like 8) and therefore many memory operations, or we can use one hash function but enormous amounts of memory.

We aren’t really constrained by available memory; instead we want to optimize for fewer memory accesses. All we need is a data structure that requires at most one memory miss per item and uses less than 64 GiB of RAM…

While we could think of more sophisticated data structures like a Cuckoo filter, maybe we can keep it simpler. How about a good old hash table with linear probing?

[Image by Vadims Podāns]

Welcome mmuniq-hash

Here you can find a tweaked version of mmuniq-bloom, but using a hash table:

Instead of storing bits as in the Bloom filter, we now store 64-bit hashes from the 'siphash24' function. This gives us much stronger probability guarantees, with a false positive rate far better than one error in 10k lines.
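
The full source was linked from the original post; the core of a linear-probing hash set over 64-bit hashes might look roughly like this (a sketch with a fixed-size table, a stand-in hash, and no resizing or error handling):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LOG2_SLOTS 27                       /* 2^27 slots * 8 bytes = 1 GiB */
#define SLOTS      (1ULL << LOG2_SLOTS)
#define SLOT_MASK  (SLOTS - 1)

static uint64_t *table;                     /* 0 means "empty slot" */

/* FNV-1a again, standing in for siphash24. */
static uint64_t hash64(const char *s, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)s[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Returns 1 if the hash was already in the set, 0 if it was just inserted. */
static int seen_before(uint64_t h) {
    if (h == 0)
        h = 1;                              /* keep 0 reserved for "empty" */
    uint64_t i = h & SLOT_MASK;
    while (table[i] != 0) {                 /* linear probing */
        if (table[i] == h)
            return 1;
        i = (i + 1) & SLOT_MASK;
    }
    table[i] = h;
    return 0;
}

int main(void) {
    char line[4096];
    table = calloc(SLOTS, sizeof(uint64_t));
    while (fgets(line, sizeof line, stdin))
        if (!seen_before(hash64(line, strlen(line))))
            fputs(line, stdout);
    free(table);
    return 0;
}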

Let’s do the math. Adding a new item to a hash table already containing, say, 40M entries has a '40M/2^64' chance of hitting a hash collision. This is about one in 461 billion – a reasonably low probability. But we are not adding one item to a pre-filled set! Instead we are adding 40M lines to an initially empty set. As per the birthday paradox, this has a much higher chance of hitting a collision at some point. A decent approximation is '~n^2/2m', which in our case is '~(40M^2)/(2*(2^64))'. This is a chance of one in 23,000. In other words, assuming we are using a good hash function, roughly one in every 23 thousand random sets of 40M items will have a hash collision. This practical chance of hitting a collision is non-negligible, but it’s still better than a Bloom filter and totally acceptable for my use case.
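
For the skeptical, the same arithmetic as a tiny program (just the approximation above, nothing from the tool):

#include <math.h>
#include <stdio.h>

int main(void) {
    double n = 40e6;          /* number of hashes inserted */
    double m = pow(2, 64);    /* number of possible 64-bit hash values */

    /* Birthday-paradox approximation for at least one collision: n^2 / (2m). */
    double p = n * n / (2.0 * m);
    printf("P(any collision) = 1 in %.0f\n", 1.0 / p);   /* about 23000 */
    return 0;
}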

The hash table code runs faster, has better memory access patterns and better false positive probability than the Bloom filter approach.

[Screenshot: the 'mmuniq-hash' run, including its "hash conflicts" statistic]

Don’t be scared by the "hash conflicts" line; it just indicates how full the hash table was. We are using linear probing, so when a bucket is already taken, we just pick the next empty bucket. In our case we had to skip over 0.7 buckets on average to find an empty slot in the table. This is fine and, since we iterate over the buckets in linear order, we can expect the memory to be nicely prefetched.

From the previous exercise we know our hash function takes about 2 seconds of this time. Therefore, it’s fair to say that the 40M memory hits take around 4 seconds.

Lessons learned

Modern CPUs are really good at sequential memory access when it’s possible to predict memory fetch patterns (see Cache prefetching). Random memory access on the other hand is very costly.

Advanced data structures are very interesting, but beware: modern computers require cache-optimized algorithms. When working with large data sets that don’t fit in L3, prefer optimizing for a reduced number of loads over optimizing the amount of memory used.

I guess it’s fair to say that Bloom filters are great as long as they fit into the L3 cache. The moment that assumption is broken, they are terrible. This is not news: Bloom filters optimize for memory usage, not for memory access. For example, see the Cuckoo Filters paper.

Another thing is the everlasting discussion about hash functions. Frankly, in most cases it doesn’t matter. The cost of computing even a complex hash function like 'siphash24' is small compared to the cost of random memory access. In our case, simplifying the hash function would bring only small benefits. The CPU time is simply spent somewhere else – waiting for memory!

One colleague often says: "You can assume modern CPUs are infinitely fast. They run at infinite speed until they hit the memory wall".

Finally, don’t repeat my mistakes – everyone should start profiling with 'perf stat -d' and look at the "Instructions per cycle" (IPC) counter. If it’s below 1, it generally means the program is stuck waiting for memory. Values above 2 would be great; it would mean the workload is mostly CPU-bound. Sadly, I’ve yet to see high values in the workloads I’m dealing with…

Improved mmuniq

With the help of my colleagues I’ve prepared a further improved version of the 'mmuniq' hash-table-based tool. See the code:

It is able to dynamically resize the hash table, to support inputs of unknown cardinality. Then, by batching the lookups, it can make effective use of the 'prefetch' CPU hint, speeding the program up by 35-40%. Beware: sprinkling code with 'prefetch' rarely helps. Instead, I specifically changed the flow of the algorithm to take advantage of this instruction. With all the improvements, I got the run time down to 2.1 seconds:

[Screenshot: the improved tool finishing in about 2.1 seconds]
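
The batching idea, roughly, looks like this (an illustrative sketch rather than the tool's actual code; '__builtin_prefetch' is the GCC/Clang intrinsic behind the prefetch hint):

#include <stdint.h>

#define SLOT_MASK ((1ULL << 27) - 1)

/* Process hashes in small batches (n is the batch size, e.g. 16): first issue
 * prefetches for every slot the batch will touch, then do the actual probing.
 * By the time the second loop runs, the needed cache lines are, with luck,
 * already on their way in from RAM. */
void process_batch(uint64_t *table, const uint64_t *hashes, int n) {
    for (int i = 0; i < n; i++)
        __builtin_prefetch(&table[hashes[i] & SLOT_MASK]);

    for (int i = 0; i < n; i++) {
        uint64_t slot = hashes[i] & SLOT_MASK;
        /* ... linear-probe from 'slot' exactly as before ... */
        (void)slot;
    }
}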

The end

Writing this basic tool, which tries to be faster than the 'sort | uniq' combo, revealed some hidden gems of modern computing. With a bit of work we were able to speed it up from more than two minutes to 2 seconds. During this journey we learned about random memory access latency and the power of cache-friendly data structures. Fancy data structures are exciting, but in practice reducing random memory loads often brings better results.

Technical Details of Why Cloudflare Chose AMD EPYC for Gen X Servers

Post Syndicated from Nitin Rao original https://blog.cloudflare.com/technical-details-of-why-cloudflare-chose-amd-epyc-for-gen-x-servers/

From the very beginning Cloudflare used Intel CPU-based servers (and also Intel components for things like NICs and SSDs). But we’re always interested in optimizing the cost of running our service so that we can provide products at a low cost and high gross margin.

We’re also mindful of events like the Spectre and Meltdown vulnerabilities and have been working with outside parties on research into mitigation and exploitation which we hope to publish later this year.

We looked very seriously at ARM-based CPUs and continue to keep our software up to date for the ARM architecture so that we can use ARM-based CPUs when the requests-per-watt numbers are interesting to us.

In the meantime, we’ve deployed AMD’s EPYC processors as part of our Gen X server platform, and for the first time we are not using any Intel components at all. This week, we announced details of this tenth generation of servers. Below is a recap of why we’re excited about the design, specifications, and performance of our newest hardware.

Servers for an Accelerated Future

Every server can run every service. This architectural decision has helped us achieve higher efficiency across the Cloudflare network. It has also given us more flexibility to build new software or adopt the newest available hardware.

Notably, Intel is not inside. We are not using their hardware for any major server components such as the CPU, board, memory, storage, network interface card (or any type of accelerator).

This time, AMD is inside.

Compared with our prior server (Gen 9), our Gen X server processes as much as 36% more requests while costing substantially less. Additionally, it enables a ~50% decrease in L3 cache miss rate and up to 50% decrease in NGINX p99 latency, powered by a CPU rated at 25% lower TDP (thermal design power) per core.

Gen X CPU benchmarks

To identify the most efficient CPU for our software stack, we ran several benchmarks for key workloads such as cryptography, compression, regular expressions, and LuaJIT. Then, we simulated typical requests we see, before testing servers in live production to measure requests per watt.    

Based on our findings, we selected the single socket 48-core AMD 2nd Gen EPYC 7642.

Impact of Cache Locality

The single AMD EPYC 7642 performed very well during our lab testing, beating our Gen 9 server with dual Intel Xeon Platinum 6162 with the same total number of cores. Key factors we noticed were its large L3 cache, which led to a low L3 cache miss rate, as well as a higher sustained operating frequency.

Gen X Performance Tuning

Partnering with AMD, we tuned the 2nd Gen EPYC 7642 processor to achieve an additional 6% of performance. We achieved this by using power determinism and configuring the CPU’s Thermal Design Power (TDP).

Securing Memory at EPYC Scale

Finally, we described how we use Secure Memory Encryption (SME), an interesting security feature within the System on a Chip architecture of the AMD EPYC line. We were impressed by how we could achieve RAM encryption without a significant decrease in performance. This reduces the worry that any data could be exfiltrated from a stolen server.

We enjoy designing hardware that improves the security, performance and reliability of our global network, trusted by over 26 million Internet properties.

Want to help us evaluate new hardware? Join us!

Securing Memory at EPYC Scale

Post Syndicated from Derek Chamorro original https://blog.cloudflare.com/securing-memory-at-epyc-scale/

Security is a serious business, one that we do not take lightly at Cloudflare. We have invested a lot of effort into ensuring that our services, both external and internal, are protected by meeting or exceeding industry best practices. Encryption is a huge part of our strategy, as it is embedded in nearly every process we have. At Cloudflare, we encrypt data both in transit (on the network) and at rest (on the disk). Both practices address some of the most common vectors used to exfiltrate information, and these measures serve to protect sensitive data from attackers. But what about data currently in use?

Can encryption or any other technology eliminate all threats? No, but as the Infrastructure Security team it’s our job to consider worst-case scenarios. For example, what if someone were to steal a server from one of our data centers? How can we leverage the most reliable, cutting-edge, innovative technology to secure all data on that host if it were in the wrong hands? Would it be protected? And, in particular, what about the server’s RAM?

Data in random access memory (RAM) is usually stored in the clear. This can leave data vulnerable to software or hardware probing by an attacker on the system. Extracting data from memory isn’t an easy task, but with the rise of persistent memory technologies, additional attack vectors are possible:

  • Dynamic random-access memory (DRAM) interface snooping
  • Installation of hardware devices that access host memory
  • Freezing and stealing dual in-line memory modules (DIMMs)
  • Stealing non-volatile dual in-line memory modules (NVDIMMs)

So, what about enclaves? Hardware manufacturers have introduced Trusted Execution Environments (also known as enclaves) to help create security boundaries by isolating software execution at runtime, so that sensitive data can be processed in a trusted environment, such as a secure area inside an existing processor or a Trusted Platform Module.

While this allows developers to shield applications in untrusted environments, it doesn’t effectively address all of the physical system attacks mentioned previously. Enclaves were also meant to run small pieces of code. You could run an entire OS in an enclave, but there are limitations and performance issues in doing so.

This isn’t meant to bash enclave usage; we just wanted a better solution for encrypting all memory at scale. We expected performance to be compromised, and conducted tests to see just how much.

Time to get EPYC

Since we are using AMD for our tenth generation “Gen X servers”, we found an interesting security feature within the System on a Chip architecture of the AMD EPYC line. Secure Memory Encryption (SME) is an x86 instruction set extension introduced by AMD and available in the EPYC processor line. SME provides the ability to mark individual pages of memory as encrypted using standard x86 page tables. A page that is marked encrypted will be automatically decrypted when read from DRAM and encrypted when written to DRAM. SME can therefore be used to protect the contents of DRAM from physical attacks on the system.

Sounds complicated, right? Here’s the secret: It isn’t 😀

Components

SME is comprised of two components:

  • AES-128 encryption engine: Embedded in the memory controller. It is responsible for encrypting and decrypting data in main memory when an appropriate key is provided via the Secure Processor.
  • AMD Secure Processor (AMD-SP): An on-die 32-bit ARM Cortex A5 CPU that provides cryptographic functionality for secure key generation and key management. Think of this like a mini hardware security module that uses a hardware random number generator to generate the 128-bit key(s) used by the encryption engine.

How It Works

We had two options available to us when it came to enabling SME. The first option, regular SME, requires enabling the SMEE bit in the model-specific register MSR 0xC001_0010. This enables the ability to set a page table entry encryption bit:

  • 0 = memory encryption features are disabled
  • 1 = memory encryption features are enabled

After memory encryption is enabled, a physical address bit (C-Bit) is utilized to mark if a memory page is protected. The operating system sets the bit of a physical address to 1 in the page table entry (PTE) to indicate the page should be encrypted. This causes any data assigned to that memory space to be automatically encrypted and decrypted by the AES engine in the memory controller:

[Diagram: pages with the C-bit set are encrypted and decrypted by the AES engine in the memory controller]
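
As an aside, whether a CPU supports SME, and which physical address bit is the C-bit, can be read from CPUID leaf 0x8000001F. A quick check with GCC or Clang might look like this (a generic sketch, not part of our tooling):

#include <cpuid.h>
#include <stdio.h>

/* CPUID leaf 0x8000001F reports AMD memory encryption features:
 * EAX bit 0 = SME supported, EAX bit 1 = SEV supported,
 * EBX bits 5:0 = position of the C-bit in a page table entry. */
int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(0x8000001F, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 0x8000001F not available");
        return 1;
    }
    printf("SME supported: %s\n", (eax & 1) ? "yes" : "no");
    printf("SEV supported: %s\n", (eax & 2) ? "yes" : "no");
    printf("C-bit position: %u\n", ebx & 0x3F);
    return 0;
}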

Becoming More Transparent

While arbitrarily flagging which page table entries we want encrypted is nice, our objective is to ensure that we are incorporating the full physical protection of SME. This is where the second mode of SME came in, Transparent SME (TSME). In TSME, all memory is encrypted regardless of the value of the encrypt bit on any particular page. This includes both instruction and data pages, as well as the pages corresponding to the page tables themselves.

Enabling TSME is as simple as:

  1. Setting a BIOS flag:
[Screenshot: the BIOS option enabling TSME]

2. Enabling kernel support with the following flag:

CONFIG_AMD_MEM_ENCRYPT=y

After a reboot you should see the following in dmesg:

$ sudo dmesg | grep SME
[    2.537160] AMD Secure Memory Encryption (SME) active

Performance Testing

To weigh the pros and cons of implementation against the potential risk of a stolen server, we had to test the performance of enabling TSME. We took a test server that mirrored a production edge metal with the following specs:

  • Memory: 8 x 32GB 2933MHz
  • CPU: AMD 2nd Gen EPYC 7642 with SMT enabled and running NPS4 mode
  • OS: Debian 9
  • Kernel: 5.4.12

The performance tools we used were:

Stream

We used a custom STREAM binary with 24 threads, using all available cores, to measure the sustainable memory bandwidth (in MB/s). Four synthetic computational kernels are run, with the output of each kernel being used as an input to the next. The best rates observed are reported for each choice of thread count.

[Charts: STREAM bandwidth with TSME disabled vs. enabled]

The figures above show a 2.6% to 4.2% performance variation, with a mean of 3.7%. These were the highest numbers we measured, and they fell below the expected performance impact of more than 5%.

Cryptsetup

While cryptsetup is normally used for encrypting disk partitions, it has a benchmarking utility that will report on a host’s cryptographic performance by iterating key derivation functions using memory only:

Example:

$ sudo cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      1162501 iterations per second for 256-bit key
PBKDF2-sha256    1403716 iterations per second for 256-bit key
PBKDF2-sha512    1161213 iterations per second for 256-bit key
PBKDF2-ripemd160  856679 iterations per second for 256-bit key
PBKDF2-whirlpool  661979 iterations per second for 256-bit key

[Charts: cryptsetup benchmark results with TSME disabled vs. enabled]

Benchmarky

Benchmarky is a homegrown tool provided by our Performance team to run synthetic workloads against a specific target to evaluate performance of different components. It uses Cloudflare Workers to send requests and read stats on responses. In addition to that, it also reports versions of all important stack components and their CPU usage. Each test runs 256 concurrent clients, grabbing a cached 10kB PNG image from a performance testing endpoint, and calculating the requests per second (RPS).

[Charts: Benchmarky requests per second with TSME disabled vs. enabled]

Conclusion

In the majority of test results, performance decreased by a nominal amount, actually less than we expected. AMD’s official white paper on SME states that encryption and decryption of memory through the AES engine incurs a small amount of additional latency for DRAM accesses, depending on the workload. Across all 11 data points, our average performance penalty was only 0.699%. Even at scale, enabling this feature has reduced the worry that any data could be exfiltrated from a stolen server.

While we wait for other hardware manufacturers to add support for total memory encryption, we are happy that AMD has set the bar high and is protecting our next generation edge hardware.

Gen X Performance Tuning

Post Syndicated from Sung Park original https://blog.cloudflare.com/gen-x-performance-tuning/

We are using the AMD 2nd Gen EPYC 7642 for our tenth generation “Gen X” servers. We found many aspects of this processor compelling, such as its increased performance thanks to its frequency bump and cache-to-core ratio. We have partnered with AMD to get the best performance out of this processor, and today we are highlighting the tuning efforts that led to an additional 6% performance.

Thermal Design Power & Dynamic Power

Thermal design power (TDP) and dynamic power, among other factors, play a critical role when tuning a system. Many share the common belief that thermal design power is the maximum or average power drawn by the processor. The 48-core AMD EPYC 7642 has a TDP rating of 225W, which is just as high as the 64-core AMD EPYC 7742. One would expect fewer cores to translate into lower power consumption, so why is the AMD EPYC 7642 expected to draw just as much power as the AMD EPYC 7742?

[Table: TDP comparison between the EPYC 7642, EPYC 7742 and the top-end EPYC 7H12]

Let’s take a step back and understand that TDP does not always mean the maximum or average power that the processor will draw. At a glance, TDP may provide a good estimate about the processor’s power draw; TDP is really about how much heat the processor is expected to generate. TDP should be used as a guideline for designing cooling solutions with appropriate thermal capacitance. The cooling solution is expected to indefinitely dissipate heat up to the TDP, in turn, this can help the chip designers determine a power budget and create new processors around that constraint. In the case of the AMD EPYC 7642, the extra power budget was spent on retaining all of its 256 MiB L3 cache and the cores to operate at a higher sustained frequency as needed during our peak hours.

The overall power drawn by the processor depends on many different factors; dynamic or active power establishes the relationship between power and frequency. Dynamic power is a function of the capacitance of the chip itself, the supply voltage, the frequency of the processor, and the activity factor. The activity factor depends on the characteristics of the workloads running on the processor, and different workloads will have different characteristics. Some examples include Cloudflare Workers or Cloudflare for Teams; the hotspots in these or any other programs will utilize different parts of the processor, affecting the activity factor.
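
The usual first-order approximation ties these together as P_dynamic ≈ α · C · V² · f, where α is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency; since voltage itself tends to rise with frequency, power grows faster than linearly as frequency increases.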

Determinism Modes & Configurable TDP

The latest AMD processors continue to implement AMD Precision Boost, which opportunistically allows the processor to operate at a higher frequency than its base frequency. How much higher the frequency can go depends on many different factors, such as electrical, thermal, and power limitations.

AMD offers a knob known as determinism modes. This knob affects power and frequency, placing emphasis on one over the other depending on the mode selected. There is a white paper posted on AMD’s website that goes into the nuanced details of determinism; remembering how frequency and power are related, this was the simplest definition I took away from the paper:

Performance Determinism – Power is a function of frequency.

Power Determinism – Frequency is a function of power.

Another knob available to us is Configurable Thermal Design Power (cTDP), which allows the end user to reconfigure the factory-default thermal design power. The AMD EPYC 7642 is rated at 225W; however, we have been given guidance from AMD that this particular part can be reconfigured up to 240W. As mentioned previously, the cooling solution must support up to the TDP to avoid throttling. We have done our due diligence and tested that our cooling solution can reliably dissipate up to 240W, even at higher ambient temperatures.

We gave these two knobs a try and got the results shown in the figures below, serving 10 KiB web assets over HTTPS (provided by our performance team) for a sustained period of time to properly heat up the processor. The figures break down into the average operating frequency across all 48 cores, the total package power drawn from the socket, and the highest reported die temperature out of the 8 dies laid out on the AMD EPYC 7642. Each figure compares the results we obtained from power and performance determinism, and finally cTDP at 240W using power determinism.

Performance determinism clearly emphasized stabilizing the operating frequency by matching it to the lowest-performing core over time. This mode appears to be ideal for tuners who value predictability over maximizing the processor’s operating frequency. In other words, the number of cycles that the processor has at its disposal every second should be predictable. This is useful if two or more cores share data dependencies: by allowing the cores to work in unison, it can prevent one core from stalling another. Power determinism, on the other hand, maximized its power and frequency as much as it could.

[Charts: average core frequency and total package power under each determinism setting]

Heat generated by power can be compounded even further by ambient temperature. All components should stay within safe operating temperature at all times as specified by the vendor’s datasheet. If unsure, please reach out to your vendor as soon as possible. We put the AMD EPYC 7642 through several thermal tests and determined that it will operate within safe operating temperatures. Before diving into the figure below, it is important to mention that our fan speed ramped up over time, preventing the processor from reaching critical temperature; nevertheless, this meant that our cooling solution worked as intended – a combination of fans and a heatsink rated for 240W. We have yet to see the processors throttle in production. The figure below shows the highest temperature of the 8 dies laid out on the AMD EPYC 7642.

[Chart: highest reported die temperature over time]

An unexpected byproduct of performance determinism was frequency jitter. It took a longer time for performance determinism to reach a steady-state frequency, which contradicted the predictable performance that the mode was meant to achieve.

[Chart: frequency jitter under performance determinism]

Finally, here are the real-world deltas, with performance determinism and the factory-default TDP of 225W as the baseline. Deltas under 10% can be influenced by a wide variety of factors, especially in production; however, none of our data showed a negative trend. Here are the averages:

[Table: average production deltas versus the baseline]

Nodes Per Socket (NPS)

The AMD EPYC 7642 physically lays out its 8 dies across 4 quadrants on a single package. With such a layout, AMD supports dividing the dies into NUMA domains; this feature is called Nodes Per Socket, or NPS. The available NPS options differ from model to model; the AMD EPYC 7642 supports 4, 2, and 1 node(s) per socket. We thought it might be worth exploring this option, despite the fact that none of these choices yields a shared last-level cache. We did not observe any significant performance deltas from the NPS options.

[Charts: performance across the NPS settings]

Conclusion

Tuning the processor to yield an additional 6% throughput, or requests per second, out of the box has been a great start. Some parts will have more room for improvement than others. In production we were able to achieve an additional 2% requests per second with power determinism, and we achieved another 4% by reconfiguring the TDP to 240W. We will continue to work with AMD as well as internal teams within Cloudflare to identify additional areas of improvement. If you like the idea of supercharging the Internet on behalf of our customers, then come join us.

Impact of Cache Locality

Post Syndicated from Sung Park original https://blog.cloudflare.com/impact-of-cache-locality/

In the past, we didn’t have the opportunity to evaluate as many CPUs as we do today. The hardware ecosystem was simple – Intel had consistently delivered industry leading processors. Other vendors could not compete with them on both performance and cost. Recently it all changed: AMD has been challenging the status quo with their 2nd Gen EPYC processors.

This is not the first time that Intel has been challenged; previously there was Qualcomm, and we also worked with AMD and considered their 1st Gen EPYC processors based on the original Zen architecture, but ultimately Intel prevailed. AMD did not give up and unveiled their 2nd Gen EPYC processors, codenamed Rome, based on the latest Zen 2 architecture.


Rome made many improvements over its predecessor. These include a die shrink from 14nm to 7nm, a doubling of the top-end core count from 32 to 64, and a larger L3 cache. Let’s emphasize the size of that L3 cache again: 32 MiB of L3 cache per Core Complex Die (CCD).

This time around, we have taken steps to understand our workloads at the hardware level through the use of hardware performance counters and profiling tools. Using these specialized CPU registers and profilers, we collected data on the AMD 2nd Gen EPYC and Intel Skylake-based Xeon processors in a lab environment, then validated our observations in production against other generations of servers from the past.

Simulated Environment

CPU Specifications

[Table: CPU specifications, simulated environment]

We evaluated several Intel Cascade Lake and AMD 2nd Gen EPYC processors, trading off various factors between power and performance; the AMD EPYC 7642 CPU came out on top. The majority of Cascade Lake processors have 1.375 MiB of L3 cache per core, shared across all cores, a common theme that started with Skylake. On the other hand, the 2nd Gen EPYC processors start at 4 MiB per core. The AMD EPYC 7642 is a unique SKU, since it has 256 MiB of L3 cache shared across its 48 cores. Having a cache this large, approximately 5.33 MiB per core, means that a program will spend fewer cycles fetching data from RAM, since more data can be readily available in the L3 cache.

[Diagrams: cache layout before (Intel) and after (AMD)]

The traditional cache layout has also changed with the introduction of 2nd Gen EPYC, a byproduct of AMD’s multi-chip module (MCM) design. The 256 MiB L3 cache is made up of 8 individual Core Complex Dies (CCDs); each CCD consists of 2 Core Complexes (CCXs), with each CCX containing 16 MiB of L3 cache.

[Diagram: Core Complex (CCX) – up to four cores]
[Diagram: Core Complex Die (CCD) – created by combining two CCXs]
[Diagram: AMD 2nd Gen EPYC 7642 – created with 8x CCDs plus an I/O die in the center]

Methodology

Our production traffic shares many characteristics of a sustained workload, which typically does not induce large variations in operating frequency nor enter periods of idle time. We picked a simulated traffic pattern that closely resembled our production traffic behavior: a cached 10 KiB PNG served via HTTPS. We were interested in assessing the CPU’s maximum throughput, or requests per second (RPS), one of our key metrics. With that being said, we did not disable Intel Turbo Boost or AMD Precision Boost, nor did we match the frequencies clock-for-clock, while measuring requests per second, instructions retired per second (IPS), L3 cache miss rate, and sustained operating frequency.

Results

The 1P AMD 2nd Gen EPYC 7642-powered server took the lead, processing 50% more requests per second than our Gen 9 server with its 2P Intel Xeon Platinum 6162.

[Chart: requests per second in the lab]

We are running a sustained workload, so we should end up with a sustained operating frequency that is higher than the base clock. The AMD EPYC 7642’s operating frequency, that is, the number of cycles the processor had at its disposal, was approximately 20% greater than that of the Intel Xeon Platinum 6162, so frequency alone was not enough to explain the 50% gain in requests per second.

[Chart: sustained operating frequency in the lab]

Taking a closer look, the number of instructions retired over time was far greater on the AMD 2nd Gen EPYC 7642 server, thanks to its low L3 cache miss rate.

[Charts: instructions retired per second and L3 cache miss rate in the lab]

Production Environment

CPU Specifications

[Table: CPU specifications, production environment]

Methodology

Our most prominent bottleneck appears to be cache memory, and we saw a significant improvement in requests per second, as well as in the time to process a request, due to the low L3 cache miss rate. The data we present in this section was collected at a point of presence that spanned Gen 7 to Gen 9 servers. We also collected data from a secondary region to gain additional confidence that the data presented here was not unique to one particular environment. Gen 9 is the baseline, just as in the previous section.

We put the 2nd Gen EPYC-based Gen X into production with hopes that the results would closely mirror what we had previously seen in the lab. We found that the requests per second did not quite align with the results we had hoped for, but the AMD EPYC server still outperformed all previous generations, beating the Intel-based Gen 9 server by 36%.

[Chart: requests per second in production]

Sustained operating frequency was nearly identical to what we had seen back in the lab.

[Chart: sustained operating frequency in production]

Due to the lower-than-expected requests per second, we also saw fewer instructions retired over time and a higher L3 cache miss rate, but the server maintained a lead over Gen 9, with 29% better performance.

[Charts: instructions retired per second and L3 cache miss rate in production]

Conclusion

The single AMD EPYC 7642 performed very well during our lab testing, beating our Gen 9 server with dual Intel Xeon Platinum 6162 processors and the same total number of cores. Key factors we noticed were its large L3 cache, which led to a low L3 cache miss rate, as well as a higher sustained operating frequency. The AMD 2nd Gen EPYC 7642 did not have as big an advantage in production, but it nevertheless still outperformed all previous generations. The observations we made in production were based on a single PoP and could have been influenced by a number of other factors, such as ambient temperature, timing, and new products that will shape our traffic patterns in the future, like WebAssembly on Cloudflare Workers. The AMD EPYC 7642 opens up the possibility for our upcoming Gen X server to maintain the same core count while processing more requests per second than its predecessor.

Got a passion for hardware? I think we should get in touch. We are always looking for talented and curious individuals to join our team. The data presented here would not have been possible without the teamwork of many different individuals within Cloudflare. As a team, we strive to work together to create highly performant, reliable, and secure systems that will form the pillars of our rapidly growing network, which spans 200 cities in more than 90 countries, and we are just getting started.