Tag Archives: microsoft

How the IBM PC Won, Then Lost, the Personal Computer Market

Post Syndicated from James W. Cortada original https://spectrum.ieee.org/how-the-ibm-pc-won-then-lost-the-personal-computer-market

On 12 August 1981, at the Waldorf Astoria Hotel in midtown Manhattan, IBM unveiled the company’s entrant into the nascent personal computer market: the IBM PC. With that, the preeminent U.S. computer maker launched another revolution in computing, though few realized it at the time. Press coverage of the announcement was lukewarm.

Soon, though, the world began embracing little computers by the millions, with IBM dominating those sales. The personal computer vastly expanded the number of people and organizations that used computers. Other companies, including Apple and Tandy Corp., were already making personal computers, but no other machine carried the revered IBM name. IBM’s essential contributions were to position the technology as suitable for wide use and to set a technology standard. Rivals were compelled to meet a demand that they had all grossly underestimated. As such, IBM had a greater effect on the PC’s acceptance than did Apple, Compaq, Dell, and even Microsoft.

Despite this initial dominance, by 1986 the IBM PC was becoming an also-ran. And in 2005, the Chinese computer maker Lenovo Group purchased IBM’s PC business.

What occurred between IBM’s wildly successful entry into the personal computer business and its inglorious exit nearly a quarter century later? From IBM’s perspective, a new and vast market quickly turned into an ugly battleground with many rivals. The company stumbled badly, its bureaucratic approach to product development no match for a fast-moving field. Over time, it became clear that the sad story of the IBM PC mirrored the decline of the company.

At the outset, though, things looked rosy.

How the personal computer revolution was launched

IBM did not invent the desktop computer. Most historians agree that the personal computer revolution began in April 1977 at the first West Coast Computer Faire. Here, Steve Jobs introduced the Apple II, with a price tag of US $1,298 (about $5,800 today), while rival Commodore unveiled its PET. Both machines were designed for consumers, not just hobbyists or the technically skilled. In August, Tandy launched its TRS-80, which came with games. Indeed, software for these new machines was largely limited to games and a few programming tools.

Photo of Steve Jobs in front of an Apple Computer sign
Apple cofounder Steve Jobs unveiled the Apple II at the West Coast Computer Faire in April 1977.
Tom Munnecke/Getty Images

IBM’s large commercial customers faced the implications of this emerging technology: Who would maintain the equipment and its software? How secure was the data in these machines? And what was IBM’s position: Should personal computers be taken seriously or not? By 1980, customers in many industries were telling their IBM contacts to enter the fray. At IBM plants in San Diego, Endicott, N.Y., and Poughkeepsie, N.Y., engineers were forming hobby clubs to learn about the new machines.

The logical place to build a small computer was inside IBM’s General Products Division, which focused on minicomputers and the successful typewriter business. But the division had no budget or people to allocate to another machine. IBM CEO Frank T. Cary decided to fund the PC’s development out of his own budget. He turned to William “Bill” Lowe, who had given some thought to the design of such a machine. Lowe reported directly to Cary, bypassing IBM’s complex product-development bureaucracy, which had grown massively during the creation of the System/360 and S/370. The normal process to get a new product to market took four or five years, but the incipient PC market was moving too quickly for that.

Photo of IBM CEO Frank T. Cary
IBM CEO Frank T. Cary authorized a secret initiative to develop a personal computer outside of Big Blue’s product-development process.
IBM

Cary asked Lowe to come back in several months with a plan for developing a machine within a year and to find 40 people from across IBM and relocate them to Boca Raton, Fla.

Lowe’s plan for the PC called for buying existing components and software and bolting them together into a package aimed at the consumer market. There would be no homegrown operating system or IBM-made chips. The product also had to attract corporate customers, although it was unclear how many of those there would be. Mainframe salesmen could be expected to ignore or oppose the PC, so the project was kept reasonably secret.

A friend of Lowe’s, Jack Sams, was a software engineer who vaguely knew Bill Gates, and he reached out to the 24-year-old Gates to see if he had an operating system that might work for the new PC. Gates had dropped out of Harvard to get into the microcomputer business, and he ran a 31-person company called Microsoft. While he thought of programming as an intellectual exercise, Gates also had a sharp eye for business.

In July 1980, the IBMers met with Gates but were not greatly impressed, so they turned instead to Gary Kildall, president of Digital Research, the most recognized microcomputer software company at the time. Kildall then made what may have been the business error of the century. He blew off the blue-suiters so that he could fly his airplane, leaving his wife—a lawyer—to deal with them. The meeting went nowhere, with too much haggling over nondisclosure agreements, and the IBMers left. Gates was now their only option, and he took the IBMers seriously.

The normal process to get a new IBM product to market took four or five years, but the incipient PC market was moving too quickly for that.

That August, Lowe presented his plan to Cary and the rest of the management committee at IBM headquarters in Armonk, N.Y. The idea of putting together a PC outside of IBM’s development process disturbed some committee members. The committee knew that IBM had previously failed with its own tiny machines—specifically the Datamaster and the 5110—but Lowe was offering an alternative strategy and already had Cary’s support. They approved Lowe’s plan.

Lowe negotiated terms, volumes, and delivery dates with suppliers, including Gates. To meet IBM’s deadline, Gates concluded that Microsoft could not write an operating system from scratch, so he acquired one called QDOS (“quick and dirty operating system”) that could be adapted. IBM wanted Microsoft, not the team in Boca Raton, to have responsibility for making the operating system work. That meant Microsoft retained the rights to the operating system. Microsoft paid $75,000 for QDOS. By the early 1990s, that investment had boosted the firm’s worth to $27 billion. IBM’s strategic error in not retaining rights to the operating system went far beyond that $27 billion; it meant that Microsoft would set the standards for the PC operating system. In fairness to IBM, nobody thought the PC business would become so big. Gates said later that he had been “lucky.”

Back at Boca Raton, the pieces started coming together. The team designed the new product, lined up suppliers, and was ready to introduce the IBM Personal Computer just a year after gaining the management committee’s approval. How was IBM able to do this?

Much credit goes to Philip Donald Estridge. An engineering manager known for bucking company norms, Estridge turned out to be the perfect choice to ram this project through. He wouldn’t show up at product-development review meetings or return phone calls. He made decisions quickly and told Lowe and Cary about them later. He staffed up with like-minded rebels, later nicknamed the “Dirty Dozen.” In the fall of 1980, Lowe moved on to a new job at IBM, so Estridge was now in charge. He obtained 8088 microprocessors from Intel, made sure Microsoft kept the development of DOS secret, and quashed rumors that IBM was building a system. The Boca Raton team put in long hours and built a beautiful machine.

The IBM PC was a near-instant success

The big day came on 12 August 1981. Estridge wondered if anyone would show up at the Waldorf Astoria. After all, the PC was a small product, not in IBM’s traditional space. Some 100 people crowded into the hotel. Estridge described the PC, had one there to demonstrate, and answered a few questions.

Image of an old IBM ad.
The IBM PC was aimed squarely at the business market, which compelled other computer makers to follow suit.
IBM

Meanwhile, IBM salesmen had received packets of materials the previous day. On 12 August, branch managers introduced the PC to employees and then met with customers to do the same. Salesmen weren’t given sample machines. Along with their customers, they collectively scratched their heads, wondering how they could use the new computer. For most customers and IBMers, it was a new world.

Nobody predicted what would happen next. The first shipments began in October 1981, and in its first year, the IBM PC generated $1 billion in revenue, far exceeding company projections. IBM’s original manufacturing forecasts called for 1 million machines over three years, with 200,000 the first year. In reality, customers were buying 200,000 PCs per month by the second year.

Those who ordered the first PCs got what looked to be something pretty clever. It could run various software packages and a nice collection of commercial and consumer tools, including the accessible BASIC programming language. Whimsical ads for the PC starred Charlie Chaplin’s Little Tramp and carried the tag line “A Tool for Modern Times.” People could buy the machines at ComputerLand, a popular retail chain in the United States. For some corporate customers, the fact that IBM now had a personal computing product meant that these little machines were not some crazy geek-hippie fad but in fact a new class of serious computing. Corporate users who did not want to rely on their company’s centralized data centers began turning to these new machines.

Estridge and his team were busy acquiring games and business software for the PC. They lined up Lotus Development Corp. to provide its 1-2-3 spreadsheet package; other software products followed from multiple suppliers. As developers began writing software for the IBM PC, they embraced DOS as the industry standard. IBM’s competitors, too, increasingly had to use DOS and Intel chips. And Cary’s decision to avoid the product-development bureaucracy had paid off handsomely.

IBM couldn’t keep up with rivals in the PC market

Encouraged by their success, the IBMers in Boca Raton released a sequel to the PC in early 1983, called the XT. In 1984 came the XT’s successor, the AT. That machine would be the last PC designed outside IBM’s development process. John Opel, who had succeeded Cary as CEO in January 1981, endorsed reining in the PC business. During his tenure, Opel remained out of touch with the PC and did not fully understand the significance of the technology.

We could conclude that Opel did not need to know much about the PC because business overall was outstanding. IBM’s revenue reached $29 billion in 1981 and climbed to $46 billion in 1984. The company was routinely ranked as one of the best run. IBM’s stock more than doubled, making IBM the most valuable company in the world.

The media only wanted to talk about the PC. On its 3 January 1983 cover, Time featured the personal computer, rather than its usual Man of the Year. IBM customers, too, were falling in love with the new machines, ignoring IBM’s other lines of business—mainframes, minicomputers, and typewriters.

Photo of Don Estridge.
Don Estridge was the right person to lead the skunkworks in Boca Raton, Fla., where the IBM PC was built.
IBM

On 1 August 1983, Estridge’s skunkworks was redesignated the Entry Systems Division (ESD), which meant that the PC business was now ensnared in the bureaucracy that Cary had bypassed. Estridge’s 4,000-person group mushroomed to 10,000. He protested that Corporate had transferred thousands of programmers to him who knew nothing about PCs. PC programmers needed the same kind of machine-software knowledge that mainframe programmers in the 1950s had; both had to figure out how to cram software into small memories to do useful work. By the 1970s, mainframe programmers could not think small enough.

Estridge faced incessant calls to report on his activities in Armonk, diverting his attention away from the PC business and slowing development of new products even as rivals began to speed up introduction of their own offerings. Nevertheless, in August 1984, his group managed to release the AT, which had been designed before the reorganization.

But IBM blundered with its first product for the home computing market: the PCjr (pronounced “PC junior”). The company had no experience with this audience, and as soon as IBM salesmen and prospective customers got a glimpse of the machine, they knew something had gone terribly wrong.

Unlike the original PC, the XT, and the AT, the PCjr was the sorry product of IBM’s multilayered development and review process. Rumors inside IBM suggested that the company had spent $250 million to develop it. The computer’s tiny keyboard was scornfully nicknamed the “Chiclet keyboard.” Much of the PCjr’s software, peripheral equipment, memory boards, and other extensions were incompatible with other IBM PCs. Salesmen ignored it, not wanting to make a bad recommendation to customers. IBM lowered the PCjr’s price, added functions, and tried to persuade dealers to promote it, to no avail. ESD even offered the machines to employees as potential Christmas presents for a few hundred dollars, but that ploy also failed.

IBM’s relations with its two most important vendors, Intel and Microsoft, remained contentious. Both Microsoft and Intel made a fortune selling IBM’s competitors the same products they sold to IBM. Rivals figured out that IBM had set the de facto technical standards for PCs, so they developed compatible versions they could bring to market more quickly and sell for less. Vendors like AT&T, Digital Equipment Corp., and Wang Laboratories failed to appreciate that insight about standards, and they suffered. (The notable exception was Apple, which set its own standards and retained its small market share for years.) As the prices of PC clones kept falling, the machines grew more powerful—Moore’s Law at work. By the mid-1980s, IBM was reacting to the market rather than setting the pace.

For some corporate customers, the fact that IBM now had a personal computing product meant that these little machines were not some crazy geek-hippie fad but were in fact a new class of serious computing.

Estridge was not getting along with senior executives at IBM, particularly those on the mainframe side of the house. In early 1985, Opel made Bill Lowe head of the PC business.

Then disaster struck. On 2 August 1985, Estridge, his wife, Mary Ann, and a handful of IBM salesmen from Los Angeles boarded Delta Flight 191 headed to Dallas. Over the Dallas airport, 700 feet off the ground, a strong downdraft slammed the plane to the ground, killing 137 people including the Estridges and all but one of the other IBM employees. IBMers were in shock. Despite his troubles with senior management, Estridge had been popular and highly respected. Not since the death of Thomas J. Watson Sr. nearly 30 years earlier had employees been so stunned by a death within IBM. Hundreds of employees attended the Estridges’ funeral. The magic of the PC may have died before the airplane crash, but the tragedy at Dallas confirmed it.

More missteps doomed the IBM PC and its OS/2 operating system

While IBM continued to sell millions of personal computers, over time the profit on its PC business declined. IBM’s share of the PC market shrank from roughly 80 percent in 1982–1983 to 20 percent a decade later.

Meanwhile, IBM was collaborating with Microsoft on a new operating system, OS/2, even as Microsoft was working on Windows, its replacement for DOS. The two companies haggled over royalty payments and how to work on OS/2. By 1987, IBM had over a thousand programmers assigned to the project and to developing telecommunications, costing an estimated $125 million a year.

OS/2 finally came out in late 1987, priced at $340, plus $2,000 for additional memory to run it. By then, Windows had been on the market for two years and was proving hugely popular. Application software for OS/2 took another year to come to market, and even then the new operating system didn’t catch on. As the business writer Paul Carroll put it, OS/2 began to acquire “the smell of failure.”

Known to few outside of IBM and Microsoft, Gates had offered to sell IBM a portion of his company in mid-1986. It was already clear that Microsoft was going to become one of the most successful firms in the industry. But Lowe declined the offer, making what was perhaps the second-biggest mistake in IBM’s history up to then, following his first one of not insisting on proprietary rights to Microsoft’s DOS or the Intel chip used in the PC. The purchase price probably would have been around $100 million in 1986, an amount that by 1993 would have yielded a return of $3 billion and in subsequent decades orders of magnitude more.

In fairness to Lowe, he was nervous that such an acquisition might trigger antitrust concerns at the U.S. Department of Justice. But the Reagan administration was not inclined to tamper with the affairs of large multinational corporations.

Gates offered to sell IBM a portion of Microsoft in mid-1986. But Lowe declined the offer, making what was perhaps the second-biggest mistake in IBM’s history up to then.

More to the point, Lowe, Opel, and other senior executives did not understand the PC market. Lowe believed that PCs, and especially their software, should undergo the same rigorous testing as the rest of the company’s products. That meant not introducing software until it was as close to bugproof as possible. All other PC software developers valued speed to market over quality—better to get something out sooner that worked pretty well, let users identify problems, and then fix them quickly. Lowe was aghast at that strategy.

Salesmen came forward with proposals to sell PCs in bulk at discounted prices but got pushback. The sales team I managed arranged to sell 6,000 PCs to American Standard, a maker of bathroom fixtures. But it took more than a year and scores of meetings for IBM’s contract and legal teams to authorize the terms.

Lowe’s team was also slow to embrace the faster chips that Intel was producing, most notably the 80386. The new Intel chip had just the right speed and functionality for the next generation of computers. Even as rivals moved to the 386, IBM remained wedded to the slower 286 chip.

As the PC market matured, the gold rush of the late 1970s and early 1980s gave way to a more stable market. A large software industry grew up. Customers found the PC clones, software, and networking tools to be just as good as IBM’s products. The cost of performing a calculation on a PC dropped so much that it was often significantly cheaper to use a little machine than a mainframe. Corporate customers were beginning to understand that economic reality.

Opel retired in 1986, and John F. Akers inherited the company’s sagging fortunes. Akers recognized that the mainframe business had entered a long, slow decline, the PC business had gone into a more rapid fall, and the move to billable services was just beginning. He decided to trim the ranks by offering an early retirement program. But too many employees took the buyout, including too many of the company’s best and brightest.

In 1995, IBM CEO Louis V. Gerstner Jr. finally pulled the plug on OS/2. It did not matter that Microsoft’s software was notorious for having bugs or that IBM’s was far cleaner. As Gerstner noted in his 2002 book, “What my colleagues seemed unwilling or unable to accept was that the war was already over and was a resounding defeat—90 percent market share for Windows to OS/2’s 5 percent or 6 percent.”

The end of the IBM PC

IBM soldiered on with the PC until Samuel J. Palmisano, who once worked in the PC organization, became CEO in 2002. IBM was still the third-largest producer of personal computers, including laptops, but PCs had become a commodity business, and the company struggled to turn a profit from those products. Palmisano and his senior executives had the courage to set aside any emotional attachments to their “Tool for Modern Times” and end it.

In December 2004, IBM announced it was selling its PC business to Lenovo for $1.75 billion. As the New York Times explained, the sale “signals a recognition by IBM, the prototypical American multinational, that its own future lies even further up the economic ladder, in technology services and consulting, in software and in the larger computers that power corporate networks and the Internet. All are businesses far more profitable for IBM than its personal computer unit.”

As soon as IBM salesmen and prospective customers got a glimpse of the IBM PCjr, they knew something had gone terribly wrong.

IBM already owned 19 percent of Lenovo, a stake it would keep for three years under the deal, with an option to acquire more shares. The head of Lenovo’s PC business would be IBM senior vice president Stephen M. Ward Jr., while his new boss would be Lenovo’s chairman, Yang Yuanqing. Lenovo got a five-year license to use the IBM brand on the popular ThinkPad laptops and PCs, and to hire IBM employees to support existing customers in the West, where Lenovo was virtually unknown. IBM would continue to design new laptops for Lenovo in Raleigh, N.C. Some 4,000 IBMers already working in China would switch to Lenovo, along with 6,000 in the United States.

The deal ensured that IBM’s global customers had familiar support while providing a stable flow of maintenance revenue to IBM for five years. For Lenovo, the deal provided a high-profile partner. Palmisano wanted to expand IBM’s IT services business to Chinese corporations and government agencies. Now the company was partnered with China’s largest computer manufacturer, which controlled 27 percent of the Chinese PC market. The deal was one of the most creative in IBM’s history. And yet it remained for many IBMers a sad close to the quarter-century chapter of the PC.

This article is based on excerpts from IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019).

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the August 2021 print issue as “A Tool for Modern Times.”

The Essential Question

Photo of James W. Cortada.
James W. Cortada at the IBM building in Cranford, N.J., in the late 1970s.
James W. Cortada

How many IBM PCs can you fit in an 18-wheeler? That, according to historian James W. Cortada, is the most interesting question he’s ever asked.

He first raised the question in 1985, several years after IBM had introduced its wildly successful personal computer. Cortada was then head of a sales team at IBM’s Nashville site.

“We’d arranged to sell 6,000 PCs to American Standard. They agreed to send their trucks to pick up a certain number of PCs every month. So we needed to know how many PCs would fit,” Cortada explains. “I can’t even remember what the answer was, only that I was delighted that I thought to ask the question.”

Cortada worked in various capacities at IBM for 38 years. (That’s him in the parking lot of IBM’s distinctive building in Cranford, N.J., designed by Victor Lundy.) After he retired in 2012, he became a senior research fellow at the University of Minnesota’s Charles Babbage Institute, where he specializes in the history of technology. That transition might seem odd, but shortly before he joined IBM, Cortada had earned a Ph.D. in modern European history from Florida State University. And he continued to research, write, and publish during his IBM career.

Cover of IBM: The Rise and Fall and Reinvention of a Global Icon
IEEE Spectrum

This month’s Past Forward describes the 1981 launch of the IBM PC. It’s drawn from Cortada’s award-winning history of Big Blue: IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). “I was able to take advantage of the normal skills of a trained historian,” Cortada says. “And I had witnessed a third of IBM’s history. I knew what questions to ask. I knew the skeletons in the closet.”

Even before he started the book, a big question was whether he’d reveal those skeletons or not. “I decided to be candid,” Cortada says. “I didn’t want my grandsons to be embarrassed about what I wrote.”

Learn the Internet of Things with “IoT for Beginners” and Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/learn-the-internet-of-things-with-iot-for-beginners-and-raspberry-pi/

Want to dabble in the Internet of Things but don’t know where to start? Well, our friends at Microsoft have developed something fun and free just for you. Here’s Senior Cloud Advocate Jim Bennett to tell you all about their brand new online curriculum for IoT beginners.

IoT — the Internet of Things — is one of the biggest growth areas in technology, and one that, to me, is very exciting. You start with a device like a Raspberry Pi, sprinkle some sensors, dust with code, mix in some cloud services and poof! You have smart cities, self-driving cars, automated farming, robotic supermarkets, or devices that can clean your toilet after you shout at Alexa for the third time.

robot detecting a shelf restock is required
Why doesn’t my local supermarket have a restocking robot?

It feels like every week there is another survey out on what tech skills will be in demand in the next five years, and IoT always appears somewhere near the top. This is why loads of folks are interested in learning all about it.

In my day job at Microsoft, I work a lot with students and lecturers, and I’m often asked for help with content to get started with IoT. Not just how to use whatever cool-named IoT services come from your cloud provider of choice to enable digital whatnots to add customer value via thingamabobs, but real beginner content that goes back to the basics.

IoT for Beginners logo
‘IoT for Beginners’ is totally free for anyone wanting to learn about the Internet of Things

This is why a few of us have spent the last few months locked away building IoT for Beginners. It’s a free, open source, 24-lesson university-level IoT curriculum designed for teachers and students, and built by IoT experts, education experts and students.

What will you learn?

The lessons are grouped into projects that you can build with a Raspberry Pi so that you can deep-dive into use cases of IoT, following the journey of food from farm to table.

collection of cartoons of eye oh tee projects

You’ll build projects as you learn the concepts of IoT devices, sensors, actuators, and the cloud, including:

  • An automated watering system, controlling a relay via a soil moisture sensor. This starts off running just on your device, then moves to a free MQTT broker to add cloud control. It then moves on again to cloud-based IoT services to add features like security to stop Farmer Giles from hacking your watering system. (A minimal sketch of this control loop follows the list.)
  • A GPS-based vehicle tracker plotting the route taken on a map. You get alerts when a vehicle full of food arrives at a location by using cloud-based mapping services and serverless code.
  • AI-based fruit quality checking using a camera on your device. You train AI models that can detect if fruit is ripe or not. These start off running in the cloud, then you move them to the edge running directly on your Raspberry Pi.
  • Smart stock checking so you can see when you need to restock the shelves, again powered by AI services.
  • A voice-controlled smart timer so you have more devices to shout at when cooking your food! This one uses AI services to understand what you say into your IoT device. It gives spoken feedback and even works in many different languages, translating on the fly.
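
To make the first project concrete, here is a minimal sketch of the watering control loop. It is not the curriculum’s own code: it assumes a paho-mqtt 1.x client, a relay driven from GPIO 17 through gpiozero, a public test broker, and a hypothetical read_soil_moisture() helper standing in for whichever sensor you wire up.

# Minimal soil-moisture-to-relay loop with MQTT telemetry (illustrative sketch,
# not the IoT for Beginners lesson code). Assumes gpiozero and paho-mqtt 1.x are
# installed, a relay is wired to GPIO 17, and read_soil_moisture() is replaced
# with real code for your sensor.
import json
import time

import paho.mqtt.client as mqtt
from gpiozero import OutputDevice

BROKER = "test.mosquitto.org"   # public test broker; swap in your own
TELEMETRY_TOPIC = "garden/soil-moisture"
THRESHOLD = 450                 # hypothetical raw reading below which soil counts as dry

relay = OutputDevice(17)        # drives the water pump

def read_soil_moisture() -> int:
    """Placeholder: return a raw reading from your soil moisture sensor."""
    raise NotImplementedError

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

while True:
    moisture = read_soil_moisture()
    client.publish(TELEMETRY_TOPIC, json.dumps({"soil_moisture": moisture}))
    if moisture < THRESHOLD:
        relay.on()              # soil reads dry: switch the pump on
    else:
        relay.off()
    time.sleep(10)

The later lessons swap the test broker for a cloud IoT service, but the device-side loop stays recognisably the same.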

Grab your Raspberry Pi and some sensors from our friends at Seeed Studio and get building. Without further ado, please meet IoT For Beginners: A Curriculum!

The post Learn the Internet of Things with “IoT for Beginners” and Raspberry Pi appeared first on Raspberry Pi.

More Russian Hacking

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/more-russian-hacking.html

Two reports this week. The first is from Microsoft, which wrote:

As part of our investigation into this ongoing activity, we also detected information-stealing malware on a machine belonging to one of our customer support agents with access to basic account information for a small number of our customers. The actor used this information in some cases to launch highly-targeted attacks as part of their broader campaign.

The second is from the NSA, CISA, FBI, and the UK’s NCSC, which wrote that the GRU is continuing to conduct brute-force password guessing attacks around the world, and is in some cases successful. From the NSA press release:

Once valid credentials were discovered, the GTsSS combined them with various publicly known vulnerabilities to gain further access into victim networks. This, along with various techniques also detailed in the advisory, allowed the actors to evade defenses and collect and exfiltrate various information in the networks, including mailboxes.

News article.

Machine Learning made easy with Raspberry Pi, Adafruit and Microsoft

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/machine-learning-made-easy-with-raspberry-pi-adafruit-and-microsoft/

Machine learning can sound daunting even for experienced Raspberry Pi hobbyists, but Microsoft and Adafruit Industries are determined to make it easier for everyone to have a go. Microsoft’s Lobe tool takes the stress out of training machine learning models, and Adafruit have developed an entire kit around their BrainCraft HAT, featuring Raspberry Pi 4 and a Raspberry Pi Camera, to get your own machine learning project off to a flying start.

adafruit lobe kit
Adafruit developed this kit especially for the BrainCraft HAT to be used with Microsoft Lobe on Raspberry Pi

Adafruit’s BrainCraft HAT

Adafruit’s BrainCraft HAT fits on top of Raspberry Pi 4 and makes it really easy to connect hardware and debug machine learning projects. The 240 x 240 colour display screen also lets you see what the camera sees. Two microphones allow for audio input, and access to the GPIO means you can connect things like relays and servos, depending on your project.

Adafruit’s BrainCraft HAT in action detecting a coffee mug

Microsoft Lobe

Microsoft Lobe is a free tool for creating and training machine learning models that you can deploy almost anywhere. The hardest part of machine learning is arguably creating and training a new model, so this tool is a great way for newbies to get stuck in, as well as being a fantastic time-saver for people who have more experience.

Get started with one of the three tutorials (easy, medium, and hard) featured in the lobe-adafruit-kit GitHub repository.

This is just a quick snippet of Microsoft’s full Lobe tutorial video.
Look how quickly the tool takes enough photos to train a machine learning model

‘Bakery’ identifies and prices different pastries

Lady Ada demonstrated Bakery: a machine learning model that uses an Adafruit BrainCraft HAT, a Raspberry Pi camera, and Microsoft Lobe. Watch how easy it is to train a new machine learning model in Microsoft Lobe from this point in the Microsoft Build Keynote video.

A quick look at Bakery from Adafruit’s delightful YouTube channel

Bakery identifies different baked goods based on images taken by the Raspberry Pi camera, then automatically identifies and prices them, in the absence of barcodes or price tags. You can’t stick a price tag on a croissant. There’d be flakes everywhere.

Extra functionality

Running this project on Raspberry Pi means that Lady Ada was able to hook up lots of other useful tools. In addition to the Raspberry Pi camera and the HAT, she is using the following (a rough sketch of how the pieces fit together appears after this list):

  • Three LEDs that glow green when an object is detected
  • A speaker and some text-to-speech code that announces which object is detected
  • A receipt printer that prints out the product name and the price
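
Lobe can export a trained model in TensorFlow Lite format, and the real project code lives in the lobe-adafruit-kit repository. Purely to illustrate the capture-classify-react flow, here is a rough sketch using the generic tflite_runtime interpreter and the legacy picamera library; the model path, labels, prices, and the announce() stub are all hypothetical.

# Illustrative flow for a Lobe-trained classifier on Raspberry Pi: capture an
# image, run the exported TensorFlow Lite model, act on the label. Paths,
# labels, and the action stub are hypothetical; see the lobe-adafruit-kit
# repository for the actual kit code.
import numpy as np
from PIL import Image
from picamera import PiCamera
from tflite_runtime.interpreter import Interpreter

MODEL_PATH = "saved_model.tflite"                     # exported from Lobe
LABELS = ["croissant", "donut", "muffin", "nothing"]  # example labels
PRICES = {"croissant": 2.50, "donut": 1.25, "muffin": 2.00}

def classify(image_path: str) -> str:
    interpreter = Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    _, height, width, _ = inp["shape"]
    img = Image.open(image_path).convert("RGB").resize((width, height))
    tensor = np.expand_dims(np.array(img, dtype=np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]

def announce(label: str, price: float) -> None:
    # Stand-in for the LEDs, text-to-speech, and receipt printer in the demo.
    print(f"Detected {label}: ${price:.2f}")

camera = PiCamera()
camera.capture("frame.jpg")
label = classify("frame.jpg")
if label in PRICES:
    announce(label, PRICES[label])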

All of this running on Raspberry Pi, and made super easy with Microsoft Lobe and Adafruit’s BrainCraft HAT. Adafruit’s Microsoft Machine Learning Kit for Lobe contains everything you need to get started.

full adafruit lobe kit
The full Microsoft Machine Learning Kit for Lobe with Raspberry Pi 4 kit

Watch the Microsoft Build keynote

And finally, watch Microsoft CTO Kevin Scott introduce Limor Fried, aka Lady Ada, owner of Adafruit Industries. Lady Ada joins remotely from the Adafruit factory in Manhattan, NY, to show how the BrainCraft HAT and Lobe work to make machine learning accessible.

The post Machine Learning made easy with Raspberry Pi, Adafruit and Microsoft appeared first on Raspberry Pi.

Enable secure access to applications with Cloudflare WAF and Azure Active Directory

Post Syndicated from Abhi Das original https://blog.cloudflare.com/cloudflare-waf-integration-azure-active-directory/

Enable secure access to applications with Cloudflare WAF and Azure Active Directory

Enable secure access to applications with Cloudflare WAF and Azure Active Directory

Cloudflare and Microsoft Azure Active Directory have partnered to provide an integration specifically for web applications using Azure Active Directory B2C. From today, customers using both services can follow the simple integration steps to protect B2C applications with Cloudflare’s Web Application Firewall (WAF) on any custom domain. Microsoft has detailed this integration as well.

Cloudflare Web Application Firewall

The Web Application Firewall (WAF) is a core component of the Cloudflare platform and is designed to keep any web application safe. It blocks more than 70 billion cyber threats per day. That is 810,000 threats blocked every second.

Enable secure access to applications with Cloudflare WAF and Azure Active Directory

The WAF is available through an intuitive dashboard or a Terraform integration, and it enables users to build powerful rules. Every request to the WAF is inspected against the rule engine and the threat intelligence built from protecting approximately 25 million internet properties. Suspicious requests can be blocked, challenged or logged as per the needs of the user, while legitimate requests are routed to the destination regardless of where the application lives (i.e., on-premise or in the cloud). Analytics and Cloudflare Logs enable users to view actionable metrics.

The Cloudflare WAF is an intelligent, integrated, and scalable solution to protect business-critical web applications from malicious attacks, with no changes to customers’ existing infrastructure.

Azure AD B2C

Azure AD B2C is a customer identity management service that enables custom control of how your customers sign up, sign in, and manage their profiles when using iOS, Android, .NET, single-page (SPA), and other applications and web experiences. It uses standards-based authentication protocols including OpenID Connect, OAuth 2.0, and SAML. You can customize the entire user experience with your brand so that it blends seamlessly with your web and mobile applications. It integrates with most modern applications and commercial off-the-shelf software, providing business-to-customer identity as a service. Customers of businesses of all sizes use their preferred social, enterprise, or local account identities to get single sign-on access to their applications and APIs. It takes care of the scaling and safety of the authentication platform, monitoring and automatically handling threats like denial-of-service, password spray, or brute force attacks.
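
Under the hood this is standard OpenID Connect. To make the flow concrete, here is a small sketch that builds an authorization request for a hypothetical B2C user flow; the tenant, policy, client ID, and redirect URI are placeholders, and you should confirm the exact endpoint format for your tenant in the Azure AD B2C documentation.

# Sketch: building an OpenID Connect authorization request for an Azure AD B2C
# user flow. Tenant name, policy name, client ID, and redirect URI are
# placeholders; check the Azure AD B2C docs for your tenant's exact endpoints.
from urllib.parse import urlencode

TENANT = "contosob2c"                    # hypothetical B2C tenant
POLICY = "B2C_1_signupsignin"            # hypothetical user flow (policy) name
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
REDIRECT_URI = "https://store.example.com/auth/callback"

authorize_endpoint = (
    f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
    f"{POLICY}/oauth2/v2.0/authorize"
)
params = {
    "client_id": CLIENT_ID,
    "response_type": "code",
    "redirect_uri": REDIRECT_URI,
    "response_mode": "query",
    "scope": "openid offline_access",
    "state": "arbitrary-app-state",
    "nonce": "random-nonce",
}
print(f"{authorize_endpoint}?{urlencode(params)}")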

Integrated solution

When setting up Azure AD B2C, many customers prefer to customize their authentication endpoint by hosting the solution under their own domain — for example, under store.example.com — rather than using a Microsoft owned domain. With the new partnership and integration, customers can now place the custom domain behind Cloudflare’s Web Application Firewall while also using Azure AD B2C, further protecting the identity service from sophisticated attacks.

This defense-in-depth approach allows customers to leverage both Cloudflare WAF capabilities along with Azure AD B2C native Identity Protection features to defend against cyberattacks.

Instructions on how to set up the integration are provided on the Azure website and all it requires is a Cloudflare account.

Enable secure access to applications with Cloudflare WAF and Azure Active Directory

Customer benefit

Azure customers need support for a strong set of security and performance tools once they implement Azure AD B2C in their environment. Integrating the Cloudflare Web Application Firewall with Azure AD B2C gives customers the ability to write custom security rules (including rate limiting rules), mitigate DDoS attacks, and deploy advanced bot management features. The Cloudflare WAF works by proxying and inspecting traffic towards your application and analyzing the payloads to ensure only non-malicious content reaches your origin servers. By incorporating the Cloudflare integration into Azure AD B2C, customers can ensure that their application is protected against sophisticated attack vectors including zero-day vulnerabilities, malicious automated botnets, and other generic attacks such as those listed in the OWASP Top 10.
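
Those rules can be created in the dashboard, through Terraform, or against Cloudflare’s REST API. As a rough, non-authoritative sketch, the snippet below creates a challenge rule for a custom B2C sign-in domain using the older zone-level firewall-rules endpoint; the zone ID, API token, and hostname are placeholders, and newer accounts may manage this through the rulesets API instead.

# Sketch: creating a Cloudflare firewall rule over the v4 API to challenge
# suspicious requests hitting the custom Azure AD B2C domain. Zone ID, API
# token, and hostname are placeholders; treat this as illustrative rather
# than canonical, since newer zones use the rulesets API.
import requests

ZONE_ID = "023e105f4ecef8ad9ca31a8372d0c353"   # placeholder zone ID
API_TOKEN = "YOUR_API_TOKEN"                   # scoped Cloudflare API token

rule = [{
    "description": "Challenge risky traffic to the B2C sign-in domain",
    "action": "challenge",
    "filter": {
        "expression": '(http.host eq "store.example.com" and cf.threat_score gt 14)'
    },
}]

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/firewall/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())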

Conclusion

This integration is a great match for any B2C businesses that are looking to enable their customers to authenticate themselves in the easiest and most secure way possible.

Please give it a try and let us know how we can improve it. Reach out to us with other use cases for your applications on Azure. Register here to express your interest, share feedback on the Azure integration, and hear about upcoming webinars on this topic.

Chinese Hackers Stole an NSA Windows Exploit in 2014

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/03/chinese-hackers-stole-an-nsa-windows-exploit-in-2014.html

Check Point has evidence that (probably government affiliated) Chinese hackers stole and cloned an NSA Windows hacking tool years before (probably government affiliated) Russian hackers stole and then published the same tool. Here’s the timeline:

The timeline basically seems to be, according to Check Point:

  • 2013: NSA’s Equation Group developed a set of exploits including one called EpMe that elevates one’s privileges on a vulnerable Windows system to system-administrator level, granting full control. This allows someone with a foothold on a machine to commandeer the whole box.
  • 2014-2015: China’s hacking team code-named APT31, aka Zirconium, developed Jian by, one way or another, cloning EpMe.
  • Early 2017: The Equation Group’s tools were teased and then leaked online by a team calling itself the Shadow Brokers. Around that time, Microsoft cancelled its February Patch Tuesday, identified the vulnerability exploited by EpMe (CVE-2017-0005), and fixed it in a bumper March update. Interestingly enough, Lockheed Martin was credited as alerting Microsoft to the flaw, suggesting it was perhaps used against an American target.
  • Mid 2017: Microsoft quietly fixed the vulnerability exploited by the leaked EpMo exploit.

Lots of news articles about this.

Indiscriminate Exploitation of Microsoft Exchange Servers (CVE-2021-24085)

Post Syndicated from Andrew Christian original https://blog.rapid7.com/2021/03/02/indiscriminate-exploitation-of-microsoft-exchange-servers-cve-2021-24085/

Indiscriminate Exploitation of Microsoft Exchange Servers (CVE-2021-24085)

The following blog post was co-authored by Andrew Christian and Brendan Watters.

Beginning Feb. 27, 2021, Rapid7’s Managed Detection and Response (MDR) team has observed a notable increase in the automated exploitation of vulnerable Microsoft Exchange servers to upload a webshell granting attackers remote access. The suspected vulnerability being exploited is a cross-site request forgery (CSRF) vulnerability: The likeliest culprit is CVE-2021-24085, an Exchange Server spoofing vulnerability released as part of Microsoft’s February 2021 Patch Tuesday advisory, though other CVEs may also be at play (e.g., CVE-2021-26855, CVE-2021-26865, CVE-2021-26857).

The following China Chopper command was observed multiple times beginning Feb. 27 using the same DigitalOcean source IP (165.232.154.116):

cmd /c cd /d C:\inetpub\wwwroot\aspnet_client\system_web&net group "Exchange Organization administrators" administrator /del /domain&echo [S]&cd&echo [E]

Exchange or other systems administrators who see this command—or any other China Chopper command in the near future—should look for the following in IIS logs:

  • 165.232.154.116 (the source IP of the requests)
  • /ecp/y.js
  • /ecp/DDI/DDIService.svc/GetList

Indicators of compromise (IOCs) from the attacks we have observed are consistent with IOCs for publicly available exploit code targeting CVE-2021-24085 released by security researcher Steven Seeley last week, shortly before indiscriminate exploitation began. After initial exploitation, attackers drop an ASP eval webshell before (usually) executing procdump against lsass.exe in order to grab all the credentials from the box. It would also be possible to then clean some indicators of compromise from the affected machine[s]. We have included a section on CVE-2021-24085 exploitation at the end of this document.

Exchange servers are frequent, high-value attack targets whose patch rates often lag behind attacker capabilities. Rapid7 Labs has identified nearly 170,000 Exchange servers vulnerable to CVE-2021-24085 on the public internet:

Indiscriminate Exploitation of Microsoft Exchange Servers (CVE-2021-24085)

Rapid7 recommends that Exchange customers apply Microsoft’s February 2021 updates immediately. InsightVM and Nexpose customers can assess their exposure to CVE-2021-24085 and other February Patch Tuesday CVEs with vulnerability checks. InsightIDR provides existing coverage for this vulnerability via our out-of-the-box China Chopper Webshell Executing Commands detection, and will alert you about any suspicious activity. View this detection in the Attacker Tool section of the InsightIDR Detection Library.

CVE-2021-24085 exploit chain

As part of the PoC for CVE-2021-24085, the attacker will search for a specific token using a request to /ecp/DDI/DDIService.svc/GetList. If that request is successful, the PoC moves on to writing the desired token to the server’s filesystem with the request /ecp/DDI/DDIService.svc/SetObject. At that point, the token is available for downloading directly. The PoC uses a download request to /ecp/poc.png (though the name could be anything) and may be recorded in the IIS logs themselves attached to the IP of the initial attack.

Indicators of compromise would include the requests to both /ecp/DDI/DDIService.svc/GetList and /ecp/DDI/DDIService.svc/SetObject, especially if those requests were associated with an odd user agent string like python. Because the PoC utilizes SetObject to write the token to the server’s filesystem in a world-readable location, it would be beneficial for incident responders to examine any files that were created around the time of the requests, as one of those files could be the access token and should be removed or placed in a secure location. It is also possible that responders could discover the file name in question by checking to see if the original attacker’s IP downloaded any files.
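
For a quick first pass over IIS logs, responders could use something like the following sketch, which simply greps W3C-format log files for the indicators discussed above. It is not Rapid7 tooling; adjust the log directory and indicator list for your environment, and treat any hits as a prompt for deeper forensic review.

# Quick triage sketch: scan IIS W3C logs for the CVE-2021-24085 indicators
# discussed above. The log path is the IIS default; adjust for your servers.
# This is a simple string match, not a substitute for full forensic review,
# and the generic "python" user-agent check will produce some noise.
from pathlib import Path

LOG_DIR = Path(r"C:\inetpub\logs\LogFiles")
INDICATORS = [
    "165.232.154.116",                    # observed source IP
    "/ecp/y.js",
    "/ecp/DDI/DDIService.svc/GetList",
    "/ecp/DDI/DDIService.svc/SetObject",
    "python",                             # odd user agent seen with the PoC
]

for log_file in LOG_DIR.rglob("*.log"):
    with log_file.open("r", errors="ignore") as handle:
        for line_no, line in enumerate(handle, start=1):
            hits = [i for i in INDICATORS if i.lower() in line.lower()]
            if hits:
                print(f"{log_file}:{line_no}: matched {hits}")
                print(f"  {line.strip()}")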

Twelve-Year-Old Vulnerability Found in Windows Defender

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/twelve-year-old-vulnerability-found-in-windows-defender.html

Researchers found, and Microsoft has patched, a vulnerability in Windows Defender that has been around for twelve years. There is no evidence that anyone has used the vulnerability during that time.

The flaw, discovered by researchers at the security firm SentinelOne, showed up in a driver that Windows Defender — renamed Microsoft Defender last year — uses to delete the invasive files and infrastructure that malware can create. When the driver removes a malicious file, it replaces it with a new, benign one as a sort of placeholder during remediation. But the researchers discovered that the system doesn’t specifically verify that new file. As a result, an attacker could insert strategic system links that direct the driver to overwrite the wrong file or even run malicious code.

It isn’t unusual that vulnerabilities lie around for this long. They can’t be fixed until someone finds them, and people aren’t always looking.

SVR Attacks on Microsoft 365

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/svr-attacks-on-microsoft-365.html

FireEye is reporting the current known tactics that the SVR used to compromise Microsoft 365 cloud data as part of its SolarWinds operation:

Mandiant has observed UNC2452 and other threat actors moving laterally to the Microsoft 365 cloud using a combination of four primary techniques:

  • Steal the Active Directory Federation Services (AD FS) token-signing certificate and use it to forge tokens for arbitrary users (sometimes described as Golden SAML). This would allow the attacker to authenticate into a federated resource provider (such as Microsoft 365) as any user, without the need for that user’s password or their corresponding multi-factor authentication (MFA) mechanism.
  • Modify or add trusted domains in Azure AD to add a new federated Identity Provider (IdP) that the attacker controls. This would allow the attacker to forge tokens for arbitrary users and has been described as an Azure AD backdoor.
  • Compromise the credentials of on-premises user accounts that are synchronized to Microsoft 365 that have high privileged directory roles, such as Global Administrator or Application Administrator.
  • Backdoor an existing Microsoft 365 application by adding a new application or service principal credential in order to use the legitimate permissions assigned to the application, such as the ability to read email, send email as an arbitrary user, access user calendars, etc.

Lots of details here, including information on remediation and hardening.

The more we learn about this operation, the more sophisticated it becomes.

In related news, MalwareBytes was also targeted.

US Cyber Command and Microsoft Are Both Disrupting TrickBot

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/us-cyber-command-and-microsoft-are-both-disrupting-trickbot.html

Earlier this month, we learned that someone is disrupting the TrickBot botnet network.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.

Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

[…]

We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations.

During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

Brian Krebs comments:

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of trademark law.

Vulnerability Finding Using Machine Learning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/vulnerability_f.html

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high priority security bugs, 97 percent of the time.

News article.
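
Microsoft has not published the model itself, but the underlying idea, treating bug titles as text and training a classifier to separate security from non-security reports, is easy to sketch. The toy example below uses scikit-learn on invented data and is in no way Microsoft’s system.

# Toy illustration of security/non-security bug triage as text classification.
# The training data here is invented; Microsoft's production system and data
# are far larger and more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "Buffer overflow in image parser when handling crafted PNG",
    "SQL injection possible via unsanitized search parameter",
    "Crash when opening settings dialog twice",
    "Typo in onboarding email template",
    "Privilege escalation through weak ACL on service binary",
    "Dark mode colors look washed out on external monitor",
]
labels = ["security", "security", "non-security",
          "non-security", "security", "non-security"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)

print(model.predict(["Use-after-free in networking stack"]))          # likely "security"
print(model.predict(["Button label overlaps icon on small screens"])) # likely "non-security"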

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

Microsoft Buys Corp.com

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/microsoft_buys_.html

A few months ago, Brian Krebs told the story of the domain corp.com, and how it is basically a security nightmare:

At issue is a problem known as “namespace collision,” a situation where domain names intended to be used exclusively on an internal company network end up overlapping with domains that can resolve normally on the open Internet.

Windows computers on an internal corporate network validate other things on that network using a Microsoft innovation called Active Directory, which is the umbrella term for a broad range of identity-related services in Windows environments. A core part of the way these things find each other involves a Windows feature called “DNS name devolution,” which is a kind of network shorthand that makes it easier to find other computers or servers without having to specify a full, legitimate domain name for those resources.

For instance, if a company runs an internal network with the name internalnetwork.example.com, and an employee on that network wishes to access a shared drive called “drive1,” there’s no need to type “drive1.internalnetwork.example.com” into Windows Explorer; typing “\\drive1\” alone will suffice, and Windows takes care of the rest.

But things can get far trickier with an internal Windows domain that does not map back to a second-level domain the organization actually owns and controls. And unfortunately, in early versions of Windows that supported Active Directory — Windows 2000 Server, for example — the default or example Active Directory path was given as “corp,” and many companies apparently adopted this setting without modifying it to include a domain they controlled.

Compounding things further, some companies then went on to build (and/or assimilate) vast networks of networks on top of this erroneous setting.

Now, none of this was much of a security concern back in the day when it was impractical for employees to lug their bulky desktop computers and monitors outside of the corporate network. But what happens when an employee working at a company with an Active Directory network path called “corp” takes a company laptop to the local Starbucks?

Chances are good that at least some resources on the employee’s laptop will still try to access that internal “corp” domain. And because of the way DNS name devolution works on Windows, that company laptop online via the Starbucks wireless connection is likely to then seek those same resources at “corp.com.”

In practical terms, this means that whoever controls corp.com can passively intercept private communications from hundreds of thousands of computers that end up being taken outside of a corporate environment which uses this “corp” designation for its Active Directory domain.

Microsoft just bought it, so it wouldn’t fall into the hands of any bad actors:

In a written statement, Microsoft said it acquired the domain to protect its customers.

“To help in keeping systems protected we encourage customers to practice safe security habits when planning for internal domain and network names,” the statement reads. “We released a security advisory in June of 2009 and a security update that helps keep customers safe. In our ongoing commitment to customer security, we also acquired the Corp.com domain.”

Emotet Malware Causes Physical Damage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/emotat_malware_.html

Microsoft is reporting that an Emotet malware infection shut down a network by causing computers to overheat and then crash.

The Emotet payload was delivered and executed on the systems of Fabrikam — a fake name Microsoft gave the victim in their case study — five days after the employee’s user credentials were exfiltrated to the attacker’s command and control (C&C) server.

Before this, the threat actors used the stolen credentials to deliver phishing emails to other Fabrikam employees, as well as to their external contacts, with more and more systems getting infected and downloading additional malware payloads.

The malware further spread through the network without raising any red flags by stealing admin account credentials and authenticating itself on new systems, which were later used as stepping stones to compromise other devices.

Within 8 days since that first booby-trapped attachment was opened, Fabrikam’s entire network was brought to its knees despite the IT department’s efforts, with PCs overheating, freezing, and rebooting because of blue screens, and Internet connections slowing down to a crawl because of Emotet devouring all the bandwidth.

The infection mechanism was one employee opening a malicious attachment to a phishing email. I can’t find any information on what kind of attachment.

Building Windows containers with AWS CodePipeline and custom actions

Post Syndicated from Dmitry Kolomiets original https://aws.amazon.com/blogs/devops/building-windows-containers-with-aws-codepipeline-and-custom-actions/

Dmitry Kolomiets, DevOps Consultant, Professional Services

AWS CodePipeline and AWS CodeBuild are the primary AWS services for building CI/CD pipelines. AWS CodeBuild supports a wide range of build scenarios thanks to various built-in Docker images. It also allows you to bring in your own custom image in order to use different tools and environment configurations. However, there are some limitations in using custom images.

Considerations for custom Docker images:

  • AWS CodeBuild has to download a new copy of the Docker image for each build job, which can take a long time for large Docker images.
  • AWS CodeBuild provides a limited set of instance types to run the builds. You might have to use a custom image if the build job requires more memory, CPU, a graphical subsystem, or any other functionality that is not part of the Docker images provided out of the box.

Windows-specific limitations

  • AWS CodeBuild supports Windows builds only in a limited number of AWS regions at this time.
  • AWS CodeBuild executes Windows Server containers using Windows Server 2016 hosts, which means that build containers are huge—it is not uncommon to have an image size of 15 GB or more (with the .NET Framework SDK installed). Windows Server 2019 containers, which are roughly half the size, cannot be used because of the host-container version mismatch.
  • AWS CodeBuild runs build jobs inside Docker containers. You must enable privileged mode in order to build and publish Linux Docker images as part of your build job. However, Docker-in-Docker (DIND) is not supported on Windows and, therefore, AWS CodeBuild cannot be used to build Windows Server container images.

The last point is the critical one for microservice-type applications built on the Microsoft stack (.NET Framework, Web API, IIS). The usual workflow for these applications is to build a Docker image, push it to Amazon ECR, and update the Amazon ECS or Amazon EKS cluster deployment.

Here is what I cover in this post:

  • How to address the limitations stated above by implementing AWS CodePipeline custom actions (applicable for both Linux and Windows environments).
  • How to use the created custom action to define a CI/CD pipeline for Windows Server containers.

CodePipeline custom actions

By using Amazon EC2 instances, you can address the limitations with Windows Server containers and enable Windows build jobs in the regions where AWS CodeBuild does not provide native Windows build environments. To accommodate the specific needs of a build job, you can pick one of the many Amazon EC2 instance types available.

The downside of this approach is the additional management burden—neither AWS CodeBuild nor AWS CodePipeline supports Amazon EC2 instances directly. There are ways to set up a Jenkins build cluster on AWS and integrate it with CodeBuild and CodeDeploy, but these options are too “heavy” for the simple task of building a Docker image.

There is a different way to tackle this problem: AWS CodePipeline provides APIs that allow you to extend a build action through custom actions. This example demonstrates how to add a custom action to offload a build job to an Amazon EC2 instance.

Here is the generic sequence of steps that the custom action performs:

  • Acquire EC2 instance (see the Notes on Amazon EC2 build instances section).
  • Download AWS CodePipeline artifacts from Amazon S3.
  • Execute the build command and capture any errors.
  • Upload output artifacts to be consumed by subsequent AWS CodePipeline actions.
  • Update the status of the action in AWS CodePipeline.
  • Release the Amazon EC2 instance.

Notice that most of these steps are the same regardless of the actual build job being executed. However, the following parameters will differ between CI/CD pipelines and, therefore, have to be configurable:

  • Instance type (t2.micro, t3.2xlarge, etc.)
  • AMI (builds could have different prerequisites in terms of OS configuration, software installed, Docker images downloaded, etc.)
  • Build command line(s) to execute (MSBuild script, bash, Docker, etc.)
  • Build job timeout

Serverless custom action architecture

A CodePipeline custom build action can be implemented as an agent component installed on an Amazon EC2 instance. The agent polls CodePipeline for build jobs and executes them on the Amazon EC2 instance. There is an example of such an agent on GitHub, but this approach requires installation and configuration of the agent on every Amazon EC2 instance that carries out build jobs.

Instead, I want to introduce an architecture that enables any Amazon EC2 instance to act as a build agent without requiring any additional software or configuration. The architecture diagram looks as follows:

Serverless custom action architecture

There are multiple components involved:

  1. An Amazon CloudWatch Event triggers an AWS Lambda function when a custom CodePipeline action is to be executed.
  2. The Lambda function retrieves the action’s build properties (AMI, instance type, etc.) from CodePipeline, along with location of the input artifacts in the Amazon S3 bucket.
  3. The Lambda function starts a Step Functions state machine that carries out the build job execution, passing all the gathered information as input payload.
  4. The Step Functions flow acquires an Amazon EC2 instance according to the provided properties, waits until the instance is up and running, and starts an AWS Systems Manager command. The Step Functions flow is also responsible for handling all the errors during build job execution and releasing the Amazon EC2 instance once the Systems Manager command execution is complete.
  5. The Systems Manager command runs on an Amazon EC2 instance, downloads CodePipeline input artifacts from the Amazon S3 bucket, unzips them, executes the build script, and uploads any output artifacts to the CodePipeline-provided Amazon S3 bucket.
  6. The polling Lambda function updates the state of the custom action in CodePipeline once it detects that the Step Functions flow is complete.

The whole architecture is serverless and, thanks to the Systems Manager command, requires no software to be installed or maintained on the Amazon EC2 instances, which is essential for this solution. All the code, AWS CloudFormation templates, and installation instructions are available in the GitHub project. The following sections provide further details on the components mentioned above.

Custom Build Action

The custom action type is defined as an AWS::CodePipeline::CustomActionType resource as follows:

  Ec2BuildActionType: 
    Type: AWS::CodePipeline::CustomActionType
    Properties: 
      Category: !Ref CustomActionProviderCategory
      Provider: !Ref CustomActionProviderName
      Version: !Ref CustomActionProviderVersion
      ConfigurationProperties: 
        - Name: ImageId 
          Description: AMI to use for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: InstanceType
          Description: Instance type for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: Command
          Description: Command(s) to execute.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String 
        - Name: WorkingDirectory 
          Description: Working directory for the command to execute.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
        - Name: OutputArtifactPath 
          Description: Path of the file(-s) or directory(-es) to use as custom action output artifact.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
      InputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0
      OutputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0 
      Settings: 
        EntityUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/documents/${RunBuildJobOnEc2Instance}"
        ExecutionUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/states/home#/executions/details/{ExternalExecutionId}"

The custom action type is uniquely identified by Category, Provider name, and Version.

Category defines the stage of the pipeline in which the custom action can be used, such as build, test, or deploy. Check the AWS documentation for the full list of allowed values.

Provider name and Version are the values used to identify the custom action type in the CodePipeline console or AWS CloudFormation templates. Once the custom action type is installed, you can add it to the pipeline, as shown in the following screenshot:

Adding custom action to the pipeline

The custom action type also defines a list of user-configurable properties—these are the properties identified above as specific for different CI/CD pipelines:

  • AMI Image ID
  • Instance Type
  • Command
  • Working Directory
  • Output artifacts

The properties are configurable in the CodePipeline console, as shown in the following screenshot:

Custom action properties

Note the last two settings in the Custom Action Type AWS CloudFormation definition: EntityUrlTemplate and ExecutionUrlTemplate.

EntityUrlTemplate defines the link to the AWS Systems Manager document that carries out the build actions. The link is visible in the AWS CodePipeline console, as shown in the following screenshot:

Custom action's EntityUrlTemplate link

ExecutionUrlTemplate defines the link to additional information related to a specific execution of the custom action. The link is also visible in the CodePipeline console, as shown in the following screenshot:

Custom action's ExecutionUrlTemplate link

This URL is defined as a link to the Step Functions execution details page, which provides high-level information about the custom build step execution, as shown in the following screenshot:

Custom build step execution

This page is a convenient visual representation of the custom action execution flow and may be useful for troubleshooting, as it gives immediate access to error messages and logs.

The polling Lambda function

The Lambda function polls CodePipeline for custom actions when it is triggered by the following CloudWatch event:

  source: 
    - "aws.codepipeline"
  detail-type: 
    - "CodePipeline Action Execution State Change"
  detail: 
    state: 
      - "STARTED"

The event is triggered for every CodePipeline action started, so the Lambda function should verify if, indeed, there is a custom action to be processed.

The rest of the Lambda function is straightforward and relies on the following APIs to retrieve or update CodePipeline actions and to manage Step Functions state machine executions:

CodePipeline API

AWS Step Functions API

You can find the complete source of the Lambda function on GitHub.
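
To make that hand-off concrete, here is a minimal PowerShell sketch of the check-and-start logic described above. It is illustrative only: the event field names, the state machine ARN variable, and the custom action provider name are assumptions, and the actual Lambda function in the GitHub project may be implemented differently.

# Illustrative sketch only - not the implementation from the GitHub project.
# $event is the deserialized "CodePipeline Action Execution State Change" event.
param($event)

Import-Module AWSPowerShell

$stateMachineArn = $env:STATE_MACHINE_ARN   # hypothetical environment variable

# Ignore actions that are not our custom action type
$actionType = $event.detail.type
if (($actionType.owner -ne "Custom") -or ($actionType.provider -ne "EC2-Build")) {
    return
}

# Hand the pipeline/stage/action details over to the Step Functions build workflow
$input = @{
    pipeline = $event.detail.pipeline
    stage    = $event.detail.stage
    action   = $event.detail.action
} | ConvertTo-Json

Start-SFNExecution -StateMachineArn $stateMachineArn -Input $input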

Step Functions state machine

The following diagram shows the complete Step Functions state machine. There are three main blocks in the diagram:

  • Acquiring an Amazon EC2 instance and waiting while the instance is registered with Systems Manager
  • Running a Systems Manager command on the instance
  • Releasing the Amazon EC2 instance

Note that the Amazon EC2 instance must be released even if an error or exception occurs during Systems Manager command execution; the state machine relies on Fallback States to guarantee that.

You can find the complete definition of the Step Function state machine on GitHub.

Step Functions state machine

Systems Manager Document

The AWS Systems Manager Run Command does all the magic. The Systems Manager agent is pre-installed on AWS Windows and Linux AMIs, so no additional software is required. The Systems Manager run command executes the following steps to carry out the build job:

  1. Download input artifacts from Amazon S3.
  2. Unzip artifacts in the working folder.
  3. Run the command.
  4. Upload output artifacts to Amazon S3, if any; this makes them available for the following CodePipeline stages.

The preceding steps are operating-system agnostic, and both Linux and Windows instances are supported. The following code snippet shows the Windows-specific steps.

You can find the complete definition of the Systems Manager document on GitHub.

mainSteps:
  - name: win_enable_docker
    action: aws:configureDocker
    inputs:
      action: Install

  # Windows steps
  - name: windows_script
    precondition:
      StringEquals: [platformType, Windows]
    action: aws:runPowerShellScript
    inputs:
      runCommand:
        # Ensure that if a command fails the script does not proceed to the following commands
        - "$ErrorActionPreference = \"Stop\""

        - "$jobDirectory = \"{{ workingDirectory }}\""
        # Create temporary folder for build artifacts, if not provided
        - "if ([string]::IsNullOrEmpty($jobDirectory)) {"
        - "    $parent = [System.IO.Path]::GetTempPath()"
        - "    [string] $name = [System.Guid]::NewGuid()"
        - "    $jobDirectory = (Join-Path $parent $name)"
        - "    New-Item -ItemType Directory -Path $jobDirectory"
                # Set current location to the new folder
        - "    Set-Location -Path $jobDirectory"
        - "}"

        # Download/unzip input artifact
        - "Read-S3Object -BucketName {{ inputBucketName }} -Key {{ inputObjectKey }} -File artifact.zip"
        - "Expand-Archive -Path artifact.zip -DestinationPath ."

        # Run the build commands
        - "$directory = Convert-Path ."
        - "$env:PATH += \";$directory\""
        - "{{ commands }}"
        # We need to check exit code explicitly here
        - "if (-not ($?)) { exit $LASTEXITCODE }"

        # Compress output artifacts, if specified
        - "$outputArtifactPath  = \"{{ outputArtifactPath }}\""
        - "if ($outputArtifactPath) {"
        - "    Compress-Archive -Path $outputArtifactPath -DestinationPath output-artifact.zip"
                # Upload compressed artifact to S3
        - "    $bucketName = \"{{ outputBucketName }}\""
        - "    $objectKey = \"{{ outputObjectKey }}\""
        - "    if ($bucketName -and $objectKey) {"
                    # Don't forget to encrypt the artifact - CodePipeline bucket has a policy to enforce this
        - "        Write-S3Object -BucketName $bucketName -Key $objectKey -File output-artifact.zip -ServerSideEncryption aws:kms"
        - "    }"
        - "}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"
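
When the Step Functions flow invokes this document on the acquired instance, the call is conceptually equivalent to the following Send-SSMCommand sketch (the instance ID, document name, and bucket/key values are placeholders; the parameter names come from the document definition above):

Import-Module AWSPowerShell

# Conceptual equivalent of the Step Functions task that runs the build document (placeholder values)
Send-SSMCommand -InstanceId "i-0123456789abcdef0" `
                -DocumentName "RunBuildJobOnEc2Instance" `
                -Parameter @{
                    workingDirectory   = ""
                    commands           = "docker build -t my-image ."
                    inputBucketName    = "codepipeline-artifact-bucket"
                    inputObjectKey     = "input/artifact.zip"
                    outputArtifactPath = ""
                    outputBucketName   = "codepipeline-artifact-bucket"
                    outputObjectKey    = "output/artifact.zip"
                    executionTimeout   = "3600"
                }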

CI/CD pipeline for Windows Server containers

Once you have a custom action that offloads the build job to the Amazon EC2 instance, you may approach the problem stated at the beginning of this blog post: how to build and publish Windows Server containers on AWS.

With the custom action installed, the solution is quite straightforward. To build a Windows Server container image, you need to provide a Windows Server with Containers AMI, the instance type to use, and the command line to execute, as shown in the following screenshot:

Windows Server container custom action properties

This example executes the Docker build command on a Windows instance with the specified AMI and instance type, using the provided source artifact. In real life, you may want to keep the build script along with the source code and push the built image to a container registry. The following PowerShell script example not only produces a Docker image but also pushes it to Amazon ECR:

# Authenticate with ECR
Invoke-Expression -Command (Get-ECRLoginCommand).Command

# Build and push the image
docker build -t <ecr-repository-url>:latest .
docker push <ecr-repository-url>:latest

return $LASTEXITCODE

You can find a complete example of the pipeline that produces the Windows Server container image and pushes it to Amazon ECR on GitHub.

Notes on Amazon EC2 build instances

There are a few ways to get Amazon EC2 instances for custom build actions. Let’s take a look at a couple of them below.

Start new EC2 instance per job and terminate it at the end

This is a reasonable default strategy that is implemented in this GitHub project. Each time the pipeline needs to process a custom action, you start a new Amazon EC2 instance, carry out the build job, and terminate the instance afterwards.

This approach is easy to implement. It works well for scenarios in which you don’t have many builds and/or the builds take some time to complete (tens of minutes). In this case, the time required to provision an instance is amortized. Conversely, if the builds are fast, instance provisioning time could actually be longer than the time required to carry out the build job.
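
As an illustration, the acquire/release calls for this strategy with AWS Tools for PowerShell might look like the following minimal sketch (the AMI ID and instance type are placeholders, and the GitHub project drives these calls from the Step Functions state machine rather than from a single script):

Import-Module AWSPowerShell

# Acquire: launch a fresh build instance from the requested AMI (placeholder values)
$reservation = New-EC2Instance -ImageId "ami-0123456789abcdef0" -InstanceType "t3.large" -MinCount 1 -MaxCount 1
$instanceId  = $reservation.Instances[0].InstanceId

# Wait until the instance reports as running (simplified polling)
do {
    Start-Sleep -Seconds 15
    $state = (Get-EC2Instance -InstanceId $instanceId).Instances[0].State.Name.Value
} while ($state -ne "running")

# ... the build job runs on the instance via the Systems Manager command ...

# Release: terminate the instance so you only pay for the duration of the build
Remove-EC2Instance -InstanceId $instanceId -Force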

Use a pool of running Amazon EC2 instances

There are cases when you need to keep build instances “warm”, either because of complex initialization or simply to reduce build duration. To support this scenario, you could maintain a pool of always-running instances. The “acquisition” phase takes a warm instance from the pool and the “release” phase returns it without terminating or stopping the instance. A DynamoDB table can be used as a registry to keep track of “busy” instances and to provide waiting or scaling capabilities to handle high demand.

This approach works well for scenarios in which there are many builds and demand is predictable (e.g. during work hours).

Use a pool of stopped Amazon EC2 instances

This is an interesting approach, especially for Windows builds. All AWS Windows AMIs are generalized using the Sysprep tool. The important implication of this is that the first start of a Windows EC2 instance is quite long: it can easily take more than 5 minutes. This is generally unacceptable for short-lived build jobs (if your build takes just a minute, it is annoying to wait 5 minutes for the instance to start).

Interestingly, once the Windows instance is initialized, subsequent starts take less than a minute. To utilize this, you could create a pool of initialized and stopped Amazon EC2 instances. In this case, for the acquisition phase, you start the instance, and when you need to release it, you stop or hibernate it.
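
A sketch of the acquire and release calls for this strategy is shown below (the instance ID is a placeholder, and the bookkeeping that decides which pooled instance is free is omitted):

Import-Module AWSPowerShell

# Acquire: start a pre-initialized, stopped instance taken from the pool
$instanceId = "i-0123456789abcdef0"   # placeholder - would come from your pool registry
Start-EC2Instance -InstanceId $instanceId

# ... the build job runs on the instance via the Systems Manager command ...

# Release: stop (or hibernate) the instance and return it to the pool
Stop-EC2Instance -InstanceId $instanceId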

This approach provides substantial improvements in terms of build start-up time.

The downside is that you reuse the same Amazon EC2 instance between builds—it is not a completely clean environment. Build jobs have to be designed to expect the presence of artifacts from previous executions on the build instance.

Using an Amazon EC2 fleet with spot instances

Another variation of the previous strategies is to use Amazon EC2 Fleet to take advantage of cost-efficient Spot Instances for your build jobs.

Amazon EC2 Fleet makes it possible to combine On-Demand Instances with Spot Instances to deliver a cost-efficient solution. On-Demand Instances provide the minimum required capacity, and Spot Instances provide a cost-efficient way to improve the performance of your build fleet.

Note that because Spot Instances can be terminated at any time, the Step Functions workflow has to handle Amazon EC2 instance termination and restart the build on a different instance, transparently to CodePipeline.

Limits and Cost

The following are a few final thoughts.

Custom action timeouts

The default maximum execution time for CodePipeline custom actions is one hour. If your build jobs require more than an hour, you need to request a limit increase for custom actions.

Cost of running EC2 build instances

Custom Amazon EC2 instances could be even more cost effective than CodeBuild for many scenarios. However, it is difficult to compare the total cost of ownership of a custom-built fleet with CodeBuild. CodeBuild is a fully managed build service and you pay for each minute of using the service. In contrast, with Amazon EC2 instances you pay for the instance either per hour or per second (depending on instance type and operating system), EBS volumes, Lambda, and Step Functions. Please use the AWS Simple Monthly Calculator to get the total cost of your projected build solution.

Cleanup

If you are running the above steps as part of a workshop or for testing, you may delete the resources to avoid incurring further charges. All resources are deployed as part of a CloudFormation stack, so open the CloudFormation console, select the specific stack, and choose Delete to remove it.

Conclusion

The CodePipeline custom action is a simple way to utilize Amazon EC2 instances for your build jobs and to address the limitations discussed at the beginning of this post.

The CodePipeline custom action with a simple start/terminate instance strategy is available on GitHub as an AWS CloudFormation stack. You can import the stack into your account and start using the custom action in your pipelines right away.

An example of the pipeline that produces Windows Server containers and pushes them to Amazon ECR can also be found on GitHub.

I invite you to clone the repositories to play with the custom action, and to make any changes to the action definition, Lambda functions, or Step Functions flow.

Feel free to ask any questions or comments below, or file issues or PRs on GitHub to continue the discussion.

Critical Windows Vulnerability Discovered by NSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/critical_window.html

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.

This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and — to my shock — took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash — my personal favorite theory — she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And — seriously — patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability — why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

Managing domain membership of dynamic fleet of EC2 instances

Post Syndicated from whiteemm original https://aws.amazon.com/blogs/compute/managing-domain-membership-of-dynamic-fleet-of-ec2-instances/

This post is written by Alex Zarenin, Senior AWS Solution Architect, Microsoft Tech.

Updated: February 10, 2021

1.   Introduction

For most companies, a move of Microsoft workloads to AWS starts with “lift and shift,” where existing workloads are moved from the on-premises data centers to the cloud. These workloads may include web and API farms and a fleet of processing nodes, which typically depend on AD Domain membership for access to shared resources, such as file shares and SQL Server databases.

When the farms and the set of processing nodes are static, which is typical for on-premises deployments, managing domain membership is simple – new instances join the AD Domain and stay there. When some machines are periodically recycled, the respective AD computer accounts are disabled or deleted, and new accounts are added when new machines are added to the domain. However, these changes are slow and can be easily managed.

When these workloads are moved to the cloud, it is natural to set up the web and API farms as scalability groups so that membership can scale up and down, optimizing cost while meeting performance requirements. Similarly, processing nodes could be combined into scalability groups or created on demand as a set of Amazon EC2 Spot Instances.

In either case, the fleet becomes very dynamic and can expand and shrink multiple times to match the load or in response to events, which makes manual management of AD Domain membership impractical. This scenario requires an automated solution for managing domain membership.

2.   Challenges

This automated solution to manage the domain membership of a dynamic fleet of Amazon EC2 instances should provide for:

  • Seamless AD Domain joining when new instances join the fleet, working for both managed and native ADs;
  • Automatic unjoining from the AD Domain and removal of the respective computer account from AD when the instance is stopped or terminated;
  • Following best practices for protecting sensitive information – in this case, the identity of the account used to join the domain or remove computer accounts from it;
  • Extensive logging to facilitate troubleshooting if something does not work as expected.

3.   Solution overview

Joining an AD domain, whether native or managed, can be achieved by placing a PowerShell script that performs the domain join into the User Data section of the EC2 instance launch configuration.

It is much more difficult to implement unjoining the domain and deleting the computer account from AD upon instance termination, as the Windows Task Scheduler does not support an On-Shutdown trigger. However, it is possible to define an On-Shutdown script using the local Group Policy.

If defined, the On-Shutdown script runs on EVERY shutdown. However, joining a domain REQUIRES a reboot of the machine, so the On-Shutdown policy cannot be enabled on the first invocation of the User Data script; otherwise, the On-Shutdown script would remove the machine from the domain during the very reboot that completes the domain join. Thus, the User Data script must have some logic to determine whether it is the first invocation upon instance launch or a subsequent one following the domain-join reboot. The On-Shutdown policy should be enabled only on the second start-up. This also requires defining the User Data script as “persistent” by specifying <persist>true</persist> in the User Data section of the launch configuration.

Both the domain join and domain unjoin scripts require a security context that allows these operations on the domain, which is usually achieved by providing credentials for a user account with the corresponding rights. In the proposed implementation, both scripts obtain the account credentials from AWS Secrets Manager under the protection of security policies and roles – no credentials are stored in the scripts.

Both scripts generate a detailed log of their operation, stored in Amazon CloudWatch Logs.

In this post, I demonstrate a solution based on a PowerShell script that performs the Active Directory domain join on instance start-up through the EC2 launch User Data script. I also show removal from the domain, with deletion of the respective computer account, upon instance shutdown, using a script installed through the On-Shutdown policy.

User Data script overall logic:

  1. Initialize Logging
  2. Initialize On-Shutdown Policy
  3. Read fields UserID, Password, and Domain from prod/AD secret
  4. Verify machine’s domain membership
  5. If machine is already a member of the domain, then
    1. Enable On-Shutdown Policy
    2. Install RSAT for AD PowerShell
  6. Otherwise
    1. Create credentials from the secret
    2. Initiate domain join
    3. Request machine restart

On-Shutdown script overall logic:

  1. Initialize Logging
  2. Check cntrl variable; If cntrl variable is not set to value “run”, exit script
  3. Check whether machine is a member of the domain; if not, exit script
  4. Check if the RSAT for AD PowerShell installed; if not installed, exit the script
  5. Read fields UserID, Password, and Domain from prod/AD secret
  6. Create credentials from the secret
  7. Identify domain controller
  8. Remove machine from the domain
  9. Delete machine account from domain controller

Simplified Flow chart of User Data and On-Shutdown Scripts

Now that I have reviewed the overall logic of the scripts, I can examine components of each script in more details.

4.   Routines common to both UserData and On-Shutdown scripts

4.1. Processing configuration variables and parameters

The UserData script does not accept parameters and is executed exactly as provided in the UserData section of the launch configuration. However, a variable that can easily be changed is defined at the beginning of the script:

[string]$SecretAD  = "prod/AD"

This variable provides the name of the secret defined in the Secrets Manager that contains UserID, Password, and Domain.

The On-Shutdown Group Policy invokes the corresponding script with a parameter, which is stored in the registry as part of the policy setup. Thus, the first line of the On-Shutdown script defines the variable for this parameter:

param([string]$cntrl = "NotSet")

The next line in the On-Shutdown script provides the name of the secret – the same as in the User Data script. It is generated from the corresponding variable in the User Data script when the On-Shutdown script is written out.

4.2. The Logger class

Both scripts, UserData and On-Shutdown, use the same Logger class and log to the Amazon CloudWatch log group /ps/boot/configuration/. If this log group does not exist, the script attempts to create it. The name of the log group is stored in the Logger class variable $this.cwlGroup and can be changed if needed.

Each execution of either script creates a new log stream in the log group. The name of the log stream consists of three parts – machine name, script type, and date-time stamp. The script type is passed to the Logger class in the constructor. Two script types are used in the script – UserData for the script invoked through the UserData section and UnJoin for the script invoked through the On-Shutdown policy. These log stream names may look like

EC2AMAZ-714VBCO/UserData/2020-10-06_05.40.02

EC2AMAZ-714VBCO/UnJoin/2020-10-06_05.48.43

5.   The UserData script

The following are the major components of the UserData script.

5.1. Initializing the On-Shutdown policy

The SDManager class wraps the functionality necessary to create the On-Shutdown policy. The policy requires certain registry entries and a script that executes when the policy is invoked. This script must be placed in a well-defined folder on the file system.

The SDManager constructor performs the following tasks:

  • Verifies that the folder C:\Windows\System32\GroupPolicy\Machine\Scripts\Shutdown exists and, if necessary, creates it;
  • Updates On-Shutdown script stored as an array in SDManager with the parameters provided to the constructor, and then saves adjusted script in the proper location;
  • Creates all registry entries required for On-Shutdown policy;
  • Sets the parameter that will be passed to the On-Shutdown script by the policy to a value that would preclude On-Shutdown script from removing machine from the domain.

SDManager exposes two member functions, EnableUnJoin() and DisableUnJoin(). These functions update the parameter passed to the On-Shutdown script to enable or disable removing the machine from the domain, respectively.

5.2. Reading the “secret”

Using the value of the configuration variable $SecretAD, the following code example retrieves the secret value from AWS Secrets Manager and creates a PowerShell credential to be used for the operations on the domain. The Domain value from the secret is also used to verify that the machine is a member of the required domain.

Import-Module AWSPowerShell

try { $SecretObj = (Get-SECSecretValue -SecretId $SecretAD) }
catch
    {
    $log.WriteLine("Could not load secret <" + $SecretAD + "> - terminating execution")
    return
    }
[PSCustomObject]$Secret = ($SecretObj.SecretString | ConvertFrom-Json)
$log.WriteLine("Domain (from Secret): <" + $Secret.Domain + ">")

To get the secret from AWS Secrets Manager, you must use an AWS-specific cmdlet. To make it available, you must import the AWSPowerShell module.

5.3. Checking for domain membership and enabling On-Shutdown policy

To check for domain membership, we use the WMI class Win32_ComputerSystem. While checking for domain membership, we also validate that, if the machine is a member of a domain, it is the domain specified in the secret.

If the machine is already a member of the correct domain, the script enables the On-Shutdown script and installs RSAT for AD PowerShell, which the On-Shutdown script requires. The following code example achieves this:

$compSys = Get-WmiObject -Class Win32_ComputerSystem

if ( ($compSys.PartOfDomain) -and ($compSys.Domain -eq $Secret.Domain))
    {
    $log.WriteLine("Already member of: <" + $compSys.Domain + "> - Verifying RSAT Status")
    $RSAT = (Get-WindowsFeature RSAT-AD-PowerShell)
    if ($RSAT -eq $null)
        {
        $log.WriteLine("<RSAT-AD-PowerShell> feature not found - terminating script")
        return
        }
    $log.WriteLine("Enable OnShutdown task to un-join Domain")
    $sdm.EnableUnJoin()
    if ( (-Not $RSAT.Installed) -and ($RSAT.InstallState -eq "Available") )
        {
        $log.WriteLine("Installing <RSAT-AD-PowerShell> feature")
        Install-WindowsFeature RSAT-AD-PowerShell
        }
    $log.WriteLine("Terminating script - ")
    return
    }

5.4. Joining Domain

If a machine is not a member of the domain, or is a member of the wrong domain, the script creates credentials from the secret and requests a domain join with a subsequent restart of the machine. The following code example performs all these tasks:

$log.WriteLine("Domain Join required")

$log.WriteLine("Disable OnShutdown task to avoid reboot loop")

$sdm.DisableUnJoin()

$password   = $Secret.Password | ConvertTo-SecureString -asPlainText -Force

$username   = $Secret.UserID + "@" + $Secret.Domain

$credential = New-Object System.Management.Automation.PSCredential($username,$password)

$log.WriteLine("Attempting to join domain <" + $Secret.Domain + ">")

Add-Computer -DomainName $Secret.Domain -Credential $credential -Restart -Force

$log.WriteLine("Requesting restart...")

 

6.   The On-Shutdown script

Many components of the On-Shutdown script, such as logging, working with AWS Secrets Manager, and validating domain membership, are either the same as or very similar to the respective components of the UserData script.

One interesting difference is that the On-Shutdown script accepts a parameter from the respective policy. The value of this parameter is set by the EnableUnJoin() and DisableUnJoin() functions in the User Data script to control whether the domain unjoin will happen on a particular reboot – something that I discussed earlier. Thus, the following code example appears at the beginning of the On-Shutdown script:

if ($cntrl -ne "run")

      {

      $log.WriteLine("Script param <" + $cntrl + "> not set to <run> - script terminated")

      return

      }

By setting the On-Shutdown policy parameter (a value in the registry) to something other than “run”, we can stop the On-Shutdown script from executing – this is exactly what the DisableUnJoin() function does. Similarly, the EnableUnJoin() function sets the value of this parameter to “run”, thus allowing the On-Shutdown script to continue execution when invoked.

Another interesting problem with this script is how to implement removing a machine from the domain and deleting the respective computer account from Active Directory. If the script first removes the machine from the domain, it can no longer find a domain controller to delete the computer account.

Alternatively, if the script first deletes the computer account and then tries to leave the domain by switching the machine to a workgroup, that change would fail. The following code example shows how this issue was resolved in the script:

Import-Module ActiveDirectory

$DCHostName = (Get-ADDomainController -Discover).HostName
$log.WriteLine("Using Account <" + $username + ">")
$log.WriteLine("Using Domain Controller <" + $DCHostName + ">")

Remove-Computer -WorkgroupName "WORKGROUP" -UnjoinDomainCredential $credential -Force -Confirm:$false
Remove-ADComputer -Identity $MachineName -Credential $credential -Server "$DCHostName" -Confirm:$false

Before removing the machine from the domain, the script obtains the name of one of the domain controllers and stores it in a local variable. Then the computer is switched from the domain to a workgroup. As the last step, the respective computer account is deleted from AD using the host name of the domain controller obtained earlier.

7.   Managing Script Credentials

Both the User Data and On-Shutdown scripts obtain the domain name and the user credentials used to add or remove computers from the domain from an AWS Secrets Manager secret with the predefined name prod/AD. This predefined name can be changed in the script.

Details on how to create a secret are available in AWS documentation. This secret should be defined as Other type of secrets and contain at least the following fields:

  • UserID
  • Password
  • Domain

Fill in the respective fields on the Secrets Manager configuration screen and choose Next, as illustrated in the following screenshot:

Screenshot Store a new secret. UserID, Password, Domain

Give the new secret the name prod/AD (this name is referred to in the script) and capture the secret’s ARN. The latter is required for creating a policy that allows access to this secret.

Screenshot of Secret Details and Secret Name
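
If you prefer to create this secret from PowerShell rather than the console, a minimal sketch using the New-SECSecret cmdlet from AWS Tools for PowerShell might look like the following (the user ID, password, and domain values are placeholders):

Import-Module AWSPowerShell

# Placeholder values - replace with a real service account and your domain
$secretValue = @{
    UserID   = "svc-domain-join"
    Password = "REPLACE-WITH-REAL-PASSWORD"
    Domain   = "example.corp"
} | ConvertTo-Json

New-SECSecret -Name "prod/AD" -SecretString $secretValue -Description "Credentials for AD domain join/unjoin"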

8.   Creating AWS Policy and Role to access the Credential Secret

8.1. Creating IAM Policy

The next step is to use IAM to create a policy that allows access to the secret; the policy statement appears as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:us-east-1:NNNNNNNN7225:secret:prod/AD-??????"
        }
    ]
}

Below is the AWS console screenshot with the policy statement filled in:

Screenshot with policy statement filled in

The resource in the policy is identified with wildcard characters in place of the six random characters at the end of the ARN, which may change when the secret is updated. Configuring the policy with the wildcard extends the rights to ALL versions of the secret, which allows the credential information to be changed without changing the policy.

Screenshot of review policy. Name AdminReadDomainCreds

Let’s name this policy AdminReadDomainCreds so that we may refer to it when creating an IAM Role.

 

8.2. Creating IAM Role

Now that the AdminReadDomainCreds policy is defined, you can create a role, AdminDomainJoiner, that refers to it. On the Permissions tab of the role creation dialog, attach the standard SSM policy for EC2, AmazonEC2RoleforSSM; the policy that allows the required CloudWatch logging operations, CloudWatchAgentAdminPolicy; and, finally, the custom policy AdminReadDomainCreds.

The Permissions tab of the role creation dialog with the respective policies attached is shown in the following screenshot:

Screenshot of permission tab with roles

This role should include our new policy, AdminReadDomainCreds, in addition to the standard SSM policy for EC2.
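
For reference, the same role setup could be scripted with AWS Tools for PowerShell roughly as follows. This is a sketch under stated assumptions: the managed policy ARNs, the account ID in the AdminReadDomainCreds ARN, and the cmdlet choices should be verified against your account; the console flow above achieves the same result.

Import-Module AWSPowerShell

# Trust policy that lets EC2 assume the role
$trustPolicy = @'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
'@

New-IAMRole -RoleName "AdminDomainJoiner" -AssumeRolePolicyDocument $trustPolicy

# Attach the managed policies and the custom policy (ARNs are assumptions/placeholders)
Register-IAMRolePolicy -RoleName "AdminDomainJoiner" -PolicyArn "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"
Register-IAMRolePolicy -RoleName "AdminDomainJoiner" -PolicyArn "arn:aws:iam::aws:policy/CloudWatchAgentAdminPolicy"
Register-IAMRolePolicy -RoleName "AdminDomainJoiner" -PolicyArn "arn:aws:iam::123456789012:policy/AdminReadDomainCreds"

# An instance profile is required to attach the role to an EC2 instance
New-IAMInstanceProfile -InstanceProfileName "AdminDomainJoiner"
Add-IAMRoleToInstanceProfile -InstanceProfileName "AdminDomainJoiner" -RoleName "AdminDomainJoiner"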

9.   Launching the Instance

Now, you’re ready to launch the instance or create the launch configuration. When configuring the instance for launch, it’s important to assign the instance to the role AdminDomainJoiner, which you just created:

Screenshot of Configure Instance Details. IAM role as AdminDomainJoiner

In the Advanced Details section of the configuration screen, paste the script into the User Data field:

If you named your secret differently than the prod/AD name that I used, modify the script parameter $SecretAD to use the name of your secret.

10.   Conclusion

That’s it! When you launch this instance, it will automatically join the domain. Upon Stop or Termination, the instance will remove itself from the domain.

 

For your convenience we provide the full text of the UserData script:

 

<powershell>
# Script parameters
[string]$SecretAD = "prod/AD"
​
class Logger {
	#----------------------------------------------
	[string] hidden  $cwlGroup
	[string] hidden  $cwlStream
	[string] hidden  $sequenceToken
	#----------------------------------------------
	# Log Initialization
	#----------------------------------------------
	Logger([string] $Action) {
		$this.cwlGroup = "/ps/boot/configuration/"
		$this.cwlStream	= "{0}/{1}/{2}" -f $env:COMPUTERNAME, $Action,
		(Get-Date -UFormat "%Y-%m-%d_%H.%M.%S")
		$this.sequenceToken = ""
		#------------------------------------------
		if ( !(Get-CWLLogGroup -LogGroupNamePrefix $this.cwlGroup) ) {
			New-CWLLogGroup -LogGroupName $this.cwlGroup
			Write-CWLRetentionPolicy -LogGroupName $this.cwlGroup -RetentionInDays 3
		}
		if ( !(Get-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamNamePrefix $this.cwlStream) ) {
			New-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamName $this.cwlStream
		}
	}
	#----------------------------------------
	[void] WriteLine([string] $msg) {
		$logEntry = New-Object -TypeName "Amazon.CloudWatchLogs.Model.InputLogEvent"
		#-----------------------------------------------------------
		$logEntry.Message = $msg
		$logEntry.Timestamp = (Get-Date).ToUniversalTime()
		if ("" -eq $this.sequenceToken) {
			# First write into empty log...
			$this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `
				-LogStreamName $this.cwlStream `
				-LogEvent $logEntry
		}
		else {
			# Subsequent write into the log...
			$this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `
				-LogStreamName $this.cwlStream `
				-SequenceToken $this.sequenceToken `
				-LogEvent $logEntry
		}
	}
}
[Logger]$log = [Logger]::new("UserData")
$log.WriteLine("------------------------------")
$log.WriteLine("Log Started - V4.0")
$RunUser = $env:username
$log.WriteLine("PowerShell session user: $RunUser")
​
class SDManager {
	#-------------------------------------------------------------------
	[Logger] hidden $SDLog
	[string] hidden $GPScrShd_0_0 = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0"
	[string] hidden $GPMScrShd_0_0 = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0\0"
	#-------------------------------------------------------------------
	SDManager([Logger]$Log, [string]$RegFilePath, [string]$SecretName) {
		$this.SDLog = $Log
		#----------------------------------------------------------------
		[string] $SecretLine = '[string]$SecretAD    = "' + $SecretName + '"'
		#--------------- Local Variables -------------
		[string] $GPRootPath = "C:\Windows\System32\GroupPolicy"
		[string] $GPMcnPath = "C:\Windows\System32\GroupPolicy\Machine"
		[string] $GPScrPath = "C:\Windows\System32\GroupPolicy\Machine\Scripts"
		[string] $GPSShdPath = "C:\Windows\System32\GroupPolicy\Machine\Scripts\Shutdown"
		[string] $ScriptFile = [System.IO.Path]::Combine($GPSShdPath, "Shutdown-UnJoin.ps1")
		#region Shutdown script (scheduled through Local Policy)
		$ScriptBody =
		@(
			'param([string]$cntrl = "NotSet")',
			$SecretLine,
			'[string]$MachineName = $env:COMPUTERNAME',
			'class Logger {    ',
			'    #----------------------------------------------    ',
			'    [string] hidden  $cwlGroup    ',
			'    [string] hidden  $cwlStream    ',
			'    [string] hidden  $sequenceToken    ',
			'    #----------------------------------------------    ',
			'    # Log Initialization    ',
			'    #----------------------------------------------    ',
			'    Logger([string] $Action) {    ',
			'        $this.cwlGroup = "/ps/boot/configuration/"    ',
			'        $this.cwlStream = "{0}/{1}/{2}" -f $env:COMPUTERNAME, $Action,    ',
			'                                           (Get-Date -UFormat "%Y-%m-%d_%H.%M.%S")    ',
			'        $this.sequenceToken = ""    ',
			'        #------------------------------------------    ',
			'        if ( !(Get-CWLLogGroup -LogGroupNamePrefix $this.cwlGroup) ) {    ',
			'            New-CWLLogGroup -LogGroupName $this.cwlGroup    ',
			'            Write-CWLRetentionPolicy -LogGroupName $this.cwlGroup -RetentionInDays 3    ',
			'        }    ',
			'        if ( !(Get-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamNamePrefix $this.cwlStream) ) {    ',
			'            New-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamName $this.cwlStream    ',
			'        }    ',
			'    }    ',
			'    #----------------------------------------    ',
			'    [void] WriteLine([string] $msg) {    ',
			'        $logEntry = New-Object -TypeName "Amazon.CloudWatchLogs.Model.InputLogEvent"    ',
			'        #-----------------------------------------------------------    ',
			'        $logEntry.Message = $msg    ',
			'        $logEntry.Timestamp = (Get-Date).ToUniversalTime()    ',
			'        if ("" -eq $this.sequenceToken) {    ',
			'            # First write into empty log...    ',
			'            $this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `',
			'                -LogStreamName $this.cwlStream `',
			'                -LogEvent $logEntry    ',
			'        }    ',
			'        else {    ',
			'            # Subsequent write into the log...    ',
			'            $this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `',
			'                -LogStreamName $this.cwlStream `',
			'                -SequenceToken $this.sequenceToken `',
			'                -LogEvent $logEntry    ',
			'        }    ',
			'    }    ',
			'}    ',
			'[Logger]$log = [Logger]::new("UnJoin")',
			'$log.WriteLine("-----------------------------------------")',
			'$log.WriteLine("Log Started")',
			'if ($cntrl -ne "run") ',
			'    { ',
			'    $log.WriteLine("Script param <" + $cntrl + "> not set to <run> - script terminated") ',
			'    return',
			'    }',
			'$compSys = Get-WmiObject -Class Win32_ComputerSystem',
			'if ( -Not ($compSys.PartOfDomain))',
			'    {',
			'    $log.WriteLine("Not member of a domain - terminating script")',
			'    return',
			'    }',
			'$RSAT = (Get-WindowsFeature RSAT-AD-PowerShell)',
			'if ( $RSAT -eq $null -or (-Not $RSAT.Installed) )',
			'    {',
			'    $log.WriteLine("<RSAT-AD-PowerShell> feature not found - terminating script")',
			'    return',
			'    }',
			'$log.WriteLine("Removing machine <" +$MachineName + "> from Domain <" + $compSys.Domain + ">")',
			'$log.WriteLine("Reading Secret <" + $SecretAD + ">")',
			'Import-Module AWSPowerShell',
			'try { $SecretObj = (Get-SECSecretValue -SecretId $SecretAD) }',
			'catch ',
			'    { ',
			'    $log.WriteLine("Could not load secret <" + $SecretAD + "> - terminating execution")',
			'    return ',
			'    }',
			'[PSCustomObject]$Secret = ($SecretObj.SecretString  | ConvertFrom-Json)',
			'$password   = $Secret.Password | ConvertTo-SecureString -asPlainText -Force',
			'$username   = $Secret.UserID + "@" + $Secret.Domain',
			'$credential = New-Object System.Management.Automation.PSCredential($username,$password)',
			'import-module ActiveDirectory',
			'$DCHostName = (Get-ADDomainController -Discover).HostName',
			'$log.WriteLine("Using Account <" + $username + ">")',
			'$log.WriteLine("Using Domain Controller <" + $DCHostName + ">")',
			'Remove-Computer -WorkgroupName "WORKGROUP" -UnjoinDomainCredential $credential -Force -Confirm:$false ',
			'Remove-ADComputer -Identity $MachineName -Credential $credential -Server "$DCHostName" -Confirm:$false ',
			'$log.WriteLine("Machine <" +$MachineName + "> removed from Domain <" + $compSys.Domain + ">")'
		)
​
		$this.SDLog.WriteLine("Constracting artifacts required for domain UnJoin")
		#----------------------------------------------------------------
		try {
			if (!(Test-Path -Path $GPRootPath -pathType container))
			{ New-Item -ItemType directory -Path $GPRootPath }
			if (!(Test-Path -Path $GPMcnPath -pathType container))
			{ New-Item -ItemType directory -Path $GPMcnPath }
			if (!(Test-Path -Path $GPScrPath -pathType container))
			{ New-Item -ItemType directory -Path $GPScrPath }
			if (!(Test-Path -Path $GPSShdPath -pathType container))
			{ New-Item -ItemType directory -Path $GPSShdPath }
		}
		catch {
			$this.SDLog.WriteLine("Failure creating UnJoin script directory!" )
			$this.SDLog.WriteLine($_)
		}
		#----------------------------------------
		try {
			Set-Content $ScriptFile -Value $ScriptBody
		}
		catch {
			$this.SDLog.WriteLine("Failure saving UnJoin script!" )
			$this.SDLog.WriteLine($_)
		}
		#----------------------------------------
		$RegistryScript =
		@(
			'Windows Registry Editor Version 5.00',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0]',
			'"GPO-ID"="LocalGPO"',
			'"SOM-ID"="Local"',
			'"FileSysPath"="C:\\Windows\\System32\\GroupPolicy\\Machine"',
			'"DisplayName"="Local Group Policy"',
			'"GPOName"="Local Group Policy"',
			'"PSScriptOrder"=dword:00000001',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0]',
			'"Script"="Shutdown-UnJoin.ps1"',
			'"Parameters"=""',
			'"IsPowershell"=dword:00000001',
			'"ExecTime"=hex(b):00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Startup]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0]',
			'"GPO-ID"="LocalGPO"',
			'"SOM-ID"="Local"',
			'"FileSysPath"="C:\\Windows\\System32\\GroupPolicy\\Machine"',
			'"DisplayName"="Local Group Policy"',
			'"GPOName"="Local Group Policy"',
			'"PSScriptOrder"=dword:00000001',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0\0]',
			'"Script"="Shutdown-UnJoin.ps1"',
			'"Parameters"=""',
			'"ExecTime"=hex(b):00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Startup]'
		)
		try {
			[string] $RegistryFile = [System.IO.Path]::Combine($RegFilePath, "OnShutdown.reg")
			Set-Content $RegistryFile -Value $RegistryScript
			&regedit.exe /S "$RegistryFile"
		}
		catch {
			$this.SDLog.WriteLine("Failure creating policy entry in Registry!" )
			$this.SDLog.WriteLine($_)
		}
	}
	#----------------------------------------
	[void] DisableUnJoin() {
		try {
			Set-ItemProperty -Path $this.GPScrShd_0_0  -Name "Parameters" -Value "ignore"
			Set-ItemProperty -Path $this.GPMScrShd_0_0 -Name "Parameters" -Value "ignore"
			&gpupdate /Target:computer /Wait:0
		}
		catch {
			$this.SDLog.WriteLine("Failure in <DisableUnjoin> function!" )
			$this.SDLog.WriteLine($_)
		}
	}
	#----------------------------------------
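	# Set the shutdown script's "Parameters" value back to "run" and refresh Group Policy, so the unjoin executes at shutdown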
	[void] EnableUnJoin() {
		try {
			Set-ItemProperty -Path $this.GPScrShd_0_0  -Name "Parameters" -Value "run"
			Set-ItemProperty -Path $this.GPMScrShd_0_0 -Name "Parameters" -Value "run"
			&gpupdate /Target:computer /Wait:0
		}
		catch {
			$this.SDLog.WriteLine("Failure in <EnableUnjoin> function!" )
			$this.SDLog.WriteLine($_)
		}
	}
}
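# SDManager prepares and toggles the local Group Policy shutdown script (Shutdown-UnJoin.ps1) that removes the instance from the domain at shutdown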
​
[SDManager]$sdm = [SDManager]::new($Log, "C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts", $SecretAD)
​
$log.WriteLine("Loading Secret <" + $SecretAD + ">")
Import-Module AWSPowerShell
try { $SecretObj = (Get-SECSecretValue -SecretId $SecretAD) }
catch {
	$log.WriteLine("Could not load secret <" + $SecretAD + "> - terminating execution")
	return
}
[PSCustomObject]$Secret = ($SecretObj.SecretString  | ConvertFrom-Json)
$log.WriteLine("Domain (from Secret): <" + $Secret.Domain + ">")
# Verify domain membership
$compSys = Get-WmiObject -Class Win32_ComputerSystem
#------------------------------------------------------------------------------
if ( ($compSys.PartOfDomain) -and ($compSys.Domain -eq $Secret.Domain)) {
	$log.WriteLine("Already member of: <" + $compSys.Domain + "> - Verifying RSAT Status")
​
	$RSAT = (Get-WindowsFeature RSAT-AD-PowerShell)
	if ($null -eq $RSAT) {
		$log.WriteLine("<RSAT-AD-PowerShell> feature not found - terminating script")
		return
	}
​
	$log.WriteLine("Enable OnShutdown task to un-join Domain")
	$sdm.EnableUnJoin()
​
	if ( (-Not $RSAT.Installed) -and ($RSAT.InstallState -eq "Available") ) {
		$log.WriteLine("Installing <RSAT-AD-PowerShell> feature")
		Install-WindowsFeature RSAT-AD-PowerShell
	}
​
	$log.WriteLine("Terminating script - ")
	return
}
# Performing Domain Join
$log.WriteLine("Domain Join required")
​
$log.WriteLine("Disable OnShutdown task to avoid reboot loop")
$sdm.DisableUnJoin()
$password = $Secret.Password | ConvertTo-SecureString -asPlainText -Force
$username = $Secret.UserID + "@" + $Secret.Domain
$credential = New-Object System.Management.Automation.PSCredential($username, $password)
​
$log.WriteLine("Attempting to join domain <" + $Secret.Domain + ">")
Add-Computer -DomainName $Secret.Domain -Credential $credential -Restart -Force
​
$log.WriteLine("Requesting restart...")
#------------------------------------------------------------------------------
</powershell>
<persist>true</persist>

 

Malicious MS Office Macro Creator

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/malicious_ms_of.html

Evil Clippy is a tool for creating malicious Microsoft Office macros:

At BlackHat Asia we released Evil Clippy, a tool which assists red teamers and security testers in creating malicious MS Office documents. Amongst others, Evil Clippy can hide VBA macros, stomp VBA code (via p-code) and confuse popular macro analysis tools. It runs on Linux, OSX and Windows.

The VBA stomping is the most powerful feature, because it gets around antivirus programs:

VBA stomping abuses a feature which is not officially documented: the undocumented PerformanceCache part of each module stream contains compiled pseudo-code (p-code) for the VBA engine. If the MS Office version specified in the _VBA_PROJECT stream matches the MS Office version of the host program (Word or Excel) then the VBA source code in the module stream is ignored and the p-code is executed instead.

In summary: if we know the version of MS Office of a target system (e.g. Office 2016, 32 bit), we can replace our malicious VBA source code with fake code, while the malicious code will still get executed via p-code. In the meantime, any tool analyzing the VBA source code (such as antivirus) is completely fooled.
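For defenders, one practical response is to compare the VBA source that analysis tools see against the compiled p-code that Office actually executes. The PowerShell sketch below is a minimal illustration of that idea, not part of Evil Clippy; it assumes the open-source Python utilities olevba (from oletools) and pcodedmp are installed and on the PATH (for example via pip install oletools pcodedmp), the input file path is hypothetical, and the keyword list is a deliberately crude heuristic.

# Minimal VBA-stomping check (illustrative sketch; assumes olevba and pcodedmp are on PATH)
param(
	[Parameter(Mandatory = $true)]
	[string]$OfficeFile    # path to the .doc/.docm/.xls file to inspect
)

# Dump the human-readable VBA source stored in the module streams
$vbaSource = (& olevba $OfficeFile 2>$null) | Out-String

# Dump the disassembled p-code from the PerformanceCache
$pcode = (& pcodedmp $OfficeFile 2>$null) | Out-String

# Crude heuristic: keywords present in the p-code but absent from the source
# suggest the source was stomped (replaced with benign-looking fake code)
$suspects = @('Shell', 'CreateObject', 'URLDownloadToFile', 'WScript')
foreach ($keyword in $suspects) {
	if (($pcode -match $keyword) -and ($vbaSource -notmatch $keyword)) {
		Write-Warning "Possible VBA stomping: '$keyword' appears in p-code but not in the VBA source of $OfficeFile"
	}
}

Comparing the two views works because the stomped source is never what runs once the p-code is accepted; any meaningful difference between them is itself a red flag, regardless of which keywords you choose to look for.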

Windows @ AWS re:Invent 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/windows-aws-reinvent-2018/

This post is courtesy of Rodney Bozo, Senior Solutions Architect – Microsoft Technologies – AWS

Windows has been a first-class citizen at AWS for over a decade. More enterprises run Windows workloads on AWS today than on any other cloud: according to IDC, it's over 57%, 2X the next provider. Over this period, we've worked with customers across the globe and taken their feedback to build the solutions that best support their Microsoft workloads.

Since 2008, the Microsoft ecosystem on AWS has grown to much more than just running virtual machines. We have solutions for SQL, Active Directory, .NET developers and more, as well as options to bring your own licenses to extend the value of your existing investments.

During the week of AWS re:Invent, we are offering more than 75 sessions covering Microsoft technologies on AWS, including breakout sessions, workshops, chalk talks, and builder sessions.

Find the entire list of Windows and .NET sessions on the session catalog. Here are some you should try not to miss:

Leadership and Management

Windows and Active Directory

SQL

.NET

Looking to get hands-on with Microsoft?

Still looking for more?

We have an extensive list of curated content on the AWS for Microsoft Workloads Self-Study Guide, including case studies, whitepapers, previous re:Invent presentations, reference architectures, and how-to instructional videos. Check it out!