Tag Archives: powershell

How AWS Managed Microsoft AD Helps to Simplify the Deployment and Improve the Security of Active Directory–Integrated .NET Applications

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-aws-managed-microsoft-ad-helps-to-simplify-the-deployment-and-improve-the-security-of-active-directory-integrated-net-applications/

Companies using .NET applications to access sensitive user information, such as employee salary, Social Security Number, and credit card information, need an easy and secure way to manage access for users and applications.

For example, let’s say that your company has a .NET payroll application. You want your Human Resources (HR) team to manage and update the payroll data for all the employees in your company. You also want your employees to be able to see their own payroll information in the application. To meet these requirements in a user-friendly and secure way, you want to manage access to the .NET application by using your existing Microsoft Active Directory identities. This enables you to provide users with single sign-on (SSO) access to the .NET application and to manage permissions using Active Directory groups. You also want the .NET application to authenticate itself to access the database, and to limit access to the data in the database based on the identity of the application user.

Microsoft Active Directory supports these requirements through group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD). AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables you to manage gMSAs and KCD through your administrative account, helping you to migrate and develop .NET applications that need these native Active Directory features.

In this blog post, I give an overview of how to use AWS Managed Microsoft AD to manage gMSAs and KCD and demonstrate how you can configure a gMSA and KCD in six steps for a .NET application:

  1. Create your AWS Managed Microsoft AD.
  2. Create your Amazon RDS for SQL Server database.
  3. Create a gMSA for your .NET application.
  4. Deploy your .NET application.
  5. Configure your .NET application to use the gMSA.
  6. Configure KCD for your .NET application.

Solution overview

The following diagram shows the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD. The diagram also illustrates authentication and access and is numbered to show the six key steps required to use a gMSA and KCD. To deploy this solution, the AWS Managed Microsoft AD directory must be in the same Amazon Virtual Private Cloud (VPC) as RDS for SQL Server. For this example, my company name is Example Corp., and my directory uses the domain name, example.com.

Diagram showing the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD

Deploy the solution

The following six steps (numbered to correlate with the preceding diagram) walk you through configuring and using a gMSA and KCD.

1. Create your AWS Managed Microsoft AD directory

Using the Directory Service console, create your AWS Managed Microsoft AD directory in your Amazon VPC. In my example, my domain name is example.com.

Image of creating an AWS Managed Microsoft AD directory in an Amazon VPC

2. Create your Amazon RDS for SQL Server database

Using the RDS console, create your Amazon RDS for SQL Server database instance in the same Amazon VPC where your directory is running, and enable Windows Authentication. To enable Windows Authentication, select your directory in the Microsoft SQL Server Windows Authentication section in the Configure Advanced Settings step of the database creation workflow (see the following screenshot).

In my example, I create my Amazon RDS for SQL Server db-example database, and enable Windows Authentication to allow my db-example database to authenticate against my example.com directory.

Screenshot of configuring advanced settings

3. Create a gMSA for your .NET application

Now that you have deployed your directory and database, you can create a gMSA for your .NET application.

To perform the next steps, you must install the Active Directory administration tools on a Windows server that is joined to your AWS Managed Microsoft AD directory domain. If you do not have a Windows server joined to your directory domain, you can deploy a new Amazon EC2 for Microsoft Windows Server instance and join it to your directory domain.

To create a gMSA for your .NET application:

  1. Log on to the instance on which you installed the Active Directory administration tools by using a user that is a member of the Admins security group or the Managed Service Accounts Admins security group in your organizational unit (OU). For my example, I use the Admin user in the example OU.

Screenshot of logging on to the instance on which you installed the Active Directory administration tools

  2. Identify which .NET application servers (hosts) will run your .NET application. Create a new security group in your OU and add your .NET application servers as members of this new group. This allows a group of application servers to use a single gMSA, instead of creating one gMSA for each server. In my example, I create a group, App_server_grp, in my example OU. I also add Appserver1, which is my .NET application server computer name, as a member of this new group (a PowerShell sketch of this step follows this list).

Screenshot of creating a new security group

  3. Create a gMSA in your directory by running Windows PowerShell from the Start menu. The basic syntax to create the gMSA at the Windows PowerShell command prompt follows.
    PS C:\Users\admin> New-ADServiceAccount -name [gMSAname] -DNSHostName [domainname] -PrincipalsAllowedToRetrieveManagedPassword [AppServersSecurityGroup] -TrustedForDelegation $true <Enter>

    In my example, the gMSAname is gMSAexample, the DNSHostName is example.com, and the PrincipalsAllowedToRetrieveManagedPassword is the recently created security group, App_server_grp.

    PS C:\Users\admin> New-ADServiceAccount -name gMSAexample -DNSHostName example.com -PrincipalsAllowedToRetrieveManagedPassword App_server_grp -TrustedForDelegation $true <Enter>

    To confirm you created the gMSA, you can run the Get-ADServiceAccount command from the PowerShell command prompt.

    PS C:\Users\admin> Get-ADServiceAccount gMSAexample <Enter>
    
    DistinguishedName : CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com
    Enabled           : True
    Name              : gMSAexample
    ObjectClass       : msDS-GroupManagedServiceAccount
    ObjectGUID        : 24d8b68d-36d5-4dc3-b0a9-edbbb5dc8a5b
    SamAccountName    : gMSAexample$
    SID               : S-1-5-21-2100421304-991410377-951759617-1603
    UserPrincipalName :

    You also can confirm you created the gMSA by opening the Active Directory Users and Computers utility located in your Administrative Tools folder, expanding the domain (example.com in my case), and expanding the Managed Service Accounts folder.
    Screenshot of confirming the creation of the gMSA
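
If you prefer to script the security group portion of this step, here is a minimal PowerShell sketch. The OU path is an assumption based on the example OU and example.com domain used in this post; adjust it to match your directory.

    # Create the application server group and add Appserver1 to it (illustrative OU path)
    New-ADGroup -Name App_server_grp -GroupScope Global -GroupCategory Security -Path 'OU=Users,OU=example,DC=example,DC=com'
    Add-ADGroupMember -Identity App_server_grp -Members (Get-ADComputer Appserver1)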

4. Deploy your .NET application

Deploy your .NET application on IIS on Amazon EC2 for Windows Server instances. For this step, I assume you are the application’s expert and already know how to deploy it. Make sure that all of your instances are joined to your directory.

5. Configure your .NET application to use the gMSA

You can configure your .NET application to use the gMSA to enforce strong password security policy and ensure password rotation of your service account. This helps to improve the security and simplify the management of your .NET application. Configure your .NET application in two steps:

  1. Grant the gMSA the required permissions to run your .NET application in the respective application folders. This is a critical step because when you change the application pool identity account to use the gMSA, downtime can occur if the gMSA does not have the application’s required permissions. Therefore, make sure you first test the configurations in your development and test environments.
  2. Configure your application pool identity on IIS to use the gMSA as the service account. When you configure a gMSA as the service account, you include the $ at the end of the gMSA name. You do not need to provide a password because AWS Managed Microsoft AD automatically creates and rotates the password. In my example, my service account is gMSAexample$, as shown in the following screenshot.

Screenshot of configuring application pool identity
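
If you prefer to script this configuration, the WebAdministration module that ships with IIS offers one way to do it. The following is a minimal sketch; PayrollAppPool is a placeholder application pool name and the example NetBIOS domain prefix is an assumption, so substitute your own values.

    Import-Module WebAdministration
    # identityType 3 = SpecificUser; the password stays empty because AD manages the gMSA password
    Set-ItemProperty IIS:\AppPools\PayrollAppPool -Name processModel -Value @{ userName = 'example\gMSAexample$'; identityType = 3 }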

You have completed all the steps to use gMSA to create and rotate your .NET application service account password! Now, you will configure KCD for your .NET application.

6. Configure KCD for your .NET application

You now are ready to allow your .NET application to have access to other services by using the user identity’s permissions instead of the application service account’s permissions. Note that KCD and gMSA are independent features, which means you do not have to create a gMSA to use KCD. For this example, I am using both features to show how you can use them together. To configure a regular service account such as a user or local built-in account, see the Kerberos constrained delegation with ASP.NET blog post on MSDN.

In my example, my goal is to delegate to the gMSAexample account the ability to enforce the user’s permissions to my db-example SQL Server database, instead of the gMSAexample account’s permissions. For this, I have to update the msDS-AllowedToDelegateTo gMSA attribute. The value for this attribute is the service principal name (SPN) of the service instance that you are targeting, which in this case is the db-example Amazon RDS for SQL Server database.

The SPN format for the msDS-AllowedToDelegateTo attribute is a combination of the service class, the Kerberos authentication endpoint, and the port number. The Amazon RDS for SQL Server Kerberos authentication endpoint format is [database_name].[domain_name]. The value for my msDS-AllowedToDelegateTo attribute is MSSQLSvc/db-example.example.com:1433, where MSSQLSvc and 1433 are the SQL Server Database service class and port number standards, respectively.

Follow these steps to perform the msDS-AllowedToDelegateTo gMSA attribute configuration:

  1. Log on to your Active Directory management instance with a user identity that is a member of the Kerberos Delegation Admins security group. In this case, I will use admin.
  2. Open the Active Directory Users and Computers utility located in your Administrative Tools folder, choose View, and then choose Advanced Features.
  3. Expand your domain name (example.com in this example), and then choose the Managed Service Accounts folder. Right-click the gMSA account for the application pool you want to enable for Kerberos delegation, choose Properties, and choose the Attribute Editor tab.
  4. Search for the msDS-AllowedToDelegateTo attribute on the Attribute Editor tab and choose Edit.
  5. Enter the MSSQLSvc/db-example.example.com:1433 value and choose Add.
    Screenshot of entering the value of the multi-valued string
  6. Choose OK and Apply, and your KCD configuration is complete.
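
If you prefer PowerShell to the GUI for this step, the Active Directory module offers an equivalent. The following is a minimal sketch using the example names from this post; it assumes you run it as a user with the same Kerberos delegation permissions.

    # Add the SQL Server SPN to the gMSA's msDS-AllowedToDelegateTo attribute and verify it
    Set-ADServiceAccount gMSAexample -Add @{ 'msDS-AllowedToDelegateTo' = 'MSSQLSvc/db-example.example.com:1433' }
    Get-ADServiceAccount gMSAexample -Properties msDS-AllowedToDelegateTo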

Congratulations! At this point, your application is using a gMSA rather than an embedded static user identity and password, and the application is able to access SQL Server using the identity of the application user. The gMSA eliminates the need for you to rotate the application’s password manually, and it allows you to better scope permissions for the application. When you use KCD, you can enforce access to your database consistently based on user identities at the database level, which prevents improper access that might otherwise occur because of an application error.

Summary

In this blog post, I demonstrated how to simplify the deployment and improve the security of your .NET application by using a group Managed Service Account and Kerberos constrained delegation with your AWS Managed Microsoft AD directory. I also outlined the main steps to get your .NET environment up and running on a managed Active Directory and SQL Server infrastructure. This approach will make it easier for you to build new .NET applications in the AWS Cloud or migrate existing ones in a more secure way.

For additional information about using group Managed Service Accounts and Kerberos constrained delegation with your AWS Managed Microsoft AD directory, see the AWS Directory Service documentation.

To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions about this post or its solution, start a new thread on the Directory Service forum.

– Peter

Amazon Lightsail Update – Launch and Manage Windows Virtual Private Servers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lightsail-update-launch-and-manage-windows-virtual-private-servers/

I first told you about Amazon Lightsail last year in my blog post, Amazon Lightsail – the Power of AWS, the Simplicity of a VPS. Since last year’s launch, thousands of customers have used Lightsail to get started with AWS, launching Linux-based Virtual Private Servers.

Today we are adding support for Windows-based Virtual Private Servers. You can launch a VPS that runs Windows Server 2012 R2, Windows Server 2016, or Windows Server 2016 with SQL Server 2016 Express and be up and running in minutes. You can use your VPS to build, test, and deploy .NET or Windows applications without having to set up or run any infrastructure. Backups, DNS management, and operational metrics are all accessible with a click or two.

Servers are available in five sizes, with 512 MB to 8 GB of RAM, 1 or 2 vCPUs, and up to 80 GB of SSD storage. Prices (including software licenses) start at $10 per month:

You can try out a 512 MB server for one month (up to 750 hours) at no charge.

Launching a Windows VPS
To launch a Windows VPS, log in to Lightsail, click on Create instance, and select the Microsoft Windows platform. Then click on Apps + OS if you want to run SQL Server 2016 Express, or OS Only if Windows is all you need:

If you want to use a PowerShell script to customize your instance after it launches for the first time, click on Add launch script and enter the script:
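
For example, a minimal launch script might install IIS so the instance is ready to serve content on first boot. This is only an illustrative sketch; use whatever setup your workload needs.

# Illustrative launch script: install IIS and its management tools
Install-WindowsFeature -Name Web-Server -IncludeManagementTools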

Choose your instance plan, enter a name for your instance(s), and select the quantity to be launched, then click on Create:

Your instance will be up and running within a minute or so:

Click on the instance, and then click on Connect using RDP:

This will connect using a built-in, browser-based RDP client (you can also use the IP address and the credentials with another client):

Available Today
This feature is available today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (London), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Jeff;

 

How to Enable LDAPS for Your AWS Microsoft AD Directory

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-enable-ldaps-for-your-aws-microsoft-ad-directory/

Starting today, you can encrypt the Lightweight Directory Access Protocol (LDAP) communications between your applications and AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. Many Windows and Linux applications use Active Directory’s (AD) LDAP service to read and write sensitive information about users and devices, including personally identifiable information (PII). Now, you can encrypt your AWS Microsoft AD LDAP communications end to end to protect this information by using LDAP Over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS. This helps you protect PII and other sensitive information exchanged with AWS Microsoft AD over untrusted networks.

To enable LDAPS, you need to add a Microsoft enterprise Certificate Authority (CA) server to your AWS Microsoft AD domain and configure certificate templates for your domain controllers. After you have enabled LDAPS, AWS Microsoft AD encrypts communications with LDAPS-enabled Windows applications, Linux computers that use Secure Shell (SSH) authentication, and applications such as Jira and Jenkins.

In this blog post, I show how to enable LDAPS for your AWS Microsoft AD directory in six steps: 1) Delegate permissions to CA administrators, 2) Add a Microsoft enterprise CA to your AWS Microsoft AD directory, 3) Create a certificate template, 4) Configure AWS security group rules, 5) AWS Microsoft AD enables LDAPS, and 6) Test LDAPS access using the LDP tool.

Assumptions

For this post, I assume you are familiar with the following:

Solution overview

Before going into specific deployment steps, I will provide a high-level overview of deploying LDAPS. I cover how you enable LDAPS on AWS Microsoft AD. In addition, I provide some general background about CA deployment models and explain how to apply these models when deploying Microsoft CA to enable LDAPS on AWS Microsoft AD.

How you enable LDAPS on AWS Microsoft AD

LDAP-aware applications (LDAP clients) typically access LDAP servers using Transmission Control Protocol (TCP) on port 389. By default, LDAP communications on port 389 are unencrypted. However, many LDAP clients use one of two standards to encrypt LDAP communications: LDAP over SSL on port 636, and LDAP with StartTLS on port 389. If an LDAP client uses port 636, the LDAP server encrypts all traffic unconditionally with SSL. If an LDAP client issues a StartTLS command when setting up the LDAP session on port 389, the LDAP server encrypts all traffic to that client with TLS. AWS Microsoft AD now supports both encryption standards when you enable LDAPS on your AWS Microsoft AD domain controllers.

You enable LDAPS on your AWS Microsoft AD domain controllers by installing a digital certificate that a CA issued. Though Windows servers have different methods for installing certificates, LDAPS with AWS Microsoft AD requires you to add a Microsoft CA to your AWS Microsoft AD domain and deploy the certificate through autoenrollment from the Microsoft CA. The installed certificate enables the LDAP service running on domain controllers to listen for and negotiate LDAP encryption on port 636 (LDAP over SSL) and port 389 (LDAP with StartTLS).

Background of CA deployment models

You can deploy CAs as part of a single-level or multi-level CA hierarchy. In a single-level hierarchy, all certificates come from the root of the hierarchy. In a multi-level hierarchy, you organize a collection of CAs in a hierarchy and the certificates sent to computers and users come from subordinate CAs in the hierarchy (not the root).

Certificates issued by a CA identify the hierarchy to which the CA belongs. When a computer sends its certificate to another computer for verification, the receiving computer must have the public certificate from the CAs in the same hierarchy as the sender. If the CA that issued the certificate is part of a single-level hierarchy, the receiver must obtain the public certificate of the CA that issued the certificate. If the CA that issued the certificate is part of a multi-level hierarchy, the receiver can obtain a public certificate for all the CAs that are in the same hierarchy as the CA that issued the certificate. If the receiver can verify that the certificate came from a CA that is in the hierarchy of the receiver’s “trusted” public CA certificates, the receiver trusts the sender. Otherwise, the receiver rejects the sender.

Deploying Microsoft CA to enable LDAPS on AWS Microsoft AD

Microsoft offers a standalone CA and an enterprise CA. Though you can configure either as single-level or multi-level hierarchies, only the enterprise CA integrates with AD and offers autoenrollment for certificate deployment. Because you cannot sign in to run commands on your AWS Microsoft AD domain controllers, an automatic certificate enrollment model is required. Therefore, AWS Microsoft AD requires the certificate to come from a Microsoft enterprise CA that you configure to work in your AD domain. When you install the Microsoft enterprise CA, you can configure it to be part of a single-level hierarchy or a multi-level hierarchy. As a best practice, AWS recommends a multi-level Microsoft CA trust hierarchy consisting of a root CA and a subordinate CA. I cover only a multi-level hierarchy in this post.

In a multi-level hierarchy, you configure your subordinate CA by importing a certificate from the root CA. You must issue a certificate from the root CA such that the certificate gives your subordinate CA the right to issue certificates on behalf of the root. This makes your subordinate CA part of the root CA hierarchy. You also deploy the root CA’s public certificate on all of your computers, which tells all your computers to trust certificates that your root CA issues and to trust certificates from any authorized subordinate CA.

In such a hierarchy, you typically leave your root CA offline (inaccessible to other computers in the network) to protect the root of your hierarchy. You leave the subordinate CA online so that it can issue certificates on behalf of the root CA. This multi-level hierarchy increases security because if someone compromises your subordinate CA, you can revoke all certificates it issued and set up a new subordinate CA from your offline root CA. To learn more about setting up a secure CA hierarchy, see Securing PKI: Planning a CA Hierarchy.

When a Microsoft CA is part of your AD domain, you can configure certificate templates that you publish. These templates become visible to client computers through AD. If a client’s profile matches a template, the client requests a certificate from the Microsoft CA that matches the template. Microsoft calls this process autoenrollment, and it simplifies certificate deployment. To enable LDAPS on your AWS Microsoft AD domain controllers, you create a certificate template in the Microsoft CA that generates SSL and TLS-compatible certificates. The domain controllers see the template and automatically import a certificate of that type from the Microsoft CA. The imported certificate enables LDAP encryption.

Steps to enable LDAPS for your AWS Microsoft AD directory

The rest of this post is composed of the steps for enabling LDAPS for your AWS Microsoft AD directory. First, though, I explain which components you must have running to deploy this solution successfully. I also explain how this solution works and include an architecture diagram.

Prerequisites

The instructions in this post assume that you already have the following components running:

  1. An active AWS Microsoft AD directory – To create a directory, follow the steps in Create an AWS Microsoft AD directory.
  2. An Amazon EC2 for Windows Server instance for managing users and groups in your directory – This instance needs to be joined to your AWS Microsoft AD domain and have Active Directory Administration Tools installed. Active Directory Administration Tools installs Active Directory Administrative Center and the LDP tool.
  3. An existing root Microsoft CA or a multi-level Microsoft CA hierarchy – You might already have a root CA or a multi-level CA hierarchy in your on-premises network. If you plan to use your on-premises CA hierarchy, you must have administrative permissions to issue certificates to subordinate CAs. If you do not have an existing Microsoft CA hierarchy, you can set up a new standalone Microsoft root CA by creating an Amazon EC2 for Windows Server instance and installing a standalone root certification authority. You also must create a local user account on this instance and add this user to the local administrator group so that the user has permissions to issue a certificate to a subordinate CA.

The solution setup

The following diagram illustrates the setup with the steps you need to follow to enable LDAPS for AWS Microsoft AD. You will learn how to set up a subordinate Microsoft enterprise CA (in this case, SubordinateCA) and join it to your AWS Microsoft AD domain (in this case, corp.example.com). You also will learn how to create a certificate template on SubordinateCA and configure AWS security group rules to enable LDAPS for your directory.

As a prerequisite, I already created a standalone Microsoft root CA (in this case RootCA) for creating SubordinateCA. RootCA also has a local user account called RootAdmin that has administrative permissions to issue certificates to SubordinateCA. Note that you may already have a root CA or a multi-level CA hierarchy in your on-premises network that you can use for creating SubordinateCA instead of creating a new root CA. If you choose to use your existing on-premises CA hierarchy, you must have administrative permissions on your on-premises CA to issue a certificate to SubordinateCA.

Lastly, I also already created an Amazon EC2 instance (in this case, Management) that I use to manage users, configure AWS security groups, and test the LDAPS connection. I join this instance to the AWS Microsoft AD directory domain.

Diagram showing the process discussed in this post

Here is how the process works:

  1. Delegate permissions to CA administrators (in this case, CAAdmin) so that they can join a Microsoft enterprise CA to your AWS Microsoft AD domain and configure it as a subordinate CA.
  2. Add a Microsoft enterprise CA to your AWS Microsoft AD domain (in this case, SubordinateCA) so that it can issue certificates to your directory domain controllers to enable LDAPS. This step includes joining SubordinateCA to your directory domain, installing the Microsoft enterprise CA, and obtaining a certificate from RootCA that grants SubordinateCA permissions to issue certificates.
  3. Create a certificate template (in this case, ServerAuthentication) with server authentication and autoenrollment enabled so that your AWS Microsoft AD directory domain controllers can obtain certificates through autoenrollment to enable LDAPS.
  4. Configure AWS security group rules so that AWS Microsoft AD directory domain controllers can connect to the subordinate CA to request certificates.
  5. AWS Microsoft AD enables LDAPS through the following process:
    1. AWS Microsoft AD domain controllers request a certificate from SubordinateCA.
    2. SubordinateCA issues a certificate to AWS Microsoft AD domain controllers.
    3. AWS Microsoft AD enables LDAPS for the directory by installing certificates on the directory domain controllers.
  6. Test LDAPS access by using the LDP tool.

I now will show you these steps in detail. I use the names of components—such as RootCA, SubordinateCA, and Management—and refer to users—such as Admin, RootAdmin, and CAAdmin—to illustrate who performs these steps. All component names and user names in this post are used for illustrative purposes only.

Deploy the solution

Step 1: Delegate permissions to CA administrators


In this step, you delegate permissions to your users who manage your CAs. Your users then can join a subordinate CA to your AWS Microsoft AD domain and create the certificate template in your CA.

To enable use with a Microsoft enterprise CA, AWS added a new built-in AD security group called AWS Delegated Enterprise Certificate Authority Administrators that has delegated permissions to install and administer a Microsoft enterprise CA. By default, your directory Admin is part of the new group and can add other users or groups in your AWS Microsoft AD directory to this security group. If you have trust with your on-premises AD directory, you can also delegate CA administrative permissions to your on-premises users by adding on-premises AD users or global groups to this new AD security group.

To create a new user (in this case CAAdmin) in your directory and add this user to the AWS Delegated Enterprise Certificate Authority Administrators security group, follow these steps:

  1. Sign in to the Management instance using RDP with the user name admin and the password that you set for the admin user when you created your directory.
  2. Launch the Microsoft Windows Server Manager on the Management instance and navigate to Tools > Active Directory Users and Computers.
    Screnshot of the menu including the "Active Directory Users and Computers" choice
  3. Switch to the tree view and navigate to corp.example.com > CORP > Users. Right-click Users and choose New > User.
    Screenshot of choosing New > User
  4. Add a new user with the First name CA, Last name Admin, and User logon name CAAdmin.
    Screenshot of completing the "New Object - User" boxes
  5. In the Active Directory Users and Computers tool, navigate to corp.example.com > AWS Delegated Groups. In the right pane, right-click AWS Delegated Enterprise Certificate Authority Administrators and choose Properties.
    Screenshot of navigating to AWS Delegated Enterprise Certificate Authority Administrators > Properties
  6. In the AWS Delegated Enterprise Certificate Authority Administrators window, switch to the Members tab and choose Add.
    Screenshot of the "Members" tab of the "AWS Delegate Enterprise Certificate Authority Administrators" window
  7. In the Enter the object names to select box, type CAAdmin and choose OK.
    Screenshot showing the "Enter the object names to select" box
  8. In the next window, choose OK to add CAAdmin to the AWS Delegated Enterprise Certificate Authority Administrators security group.
    Screenshot of adding "CA Admin" to the "AWS Delegated Enterprise Certificate Authority Administrators" security group
  9. Also add CAAdmin to the AWS Delegated Server Administrators security group so that CAAdmin can RDP in to the Microsoft enterprise CA machine.
    Screenshot of adding "CAAdmin" to the "AWS Delegated Server Administrators" security group also so that "CAAdmin" can RDP in to the Microsoft enterprise CA machine

 You have granted CAAdmin permissions to join a Microsoft enterprise CA to your AWS Microsoft AD directory domain.
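
If you would rather script the user creation and group membership, here is a minimal PowerShell sketch. The OU path assumes the default layout that AWS Microsoft AD creates for the corp.example.com domain, and the password prompt is illustrative.

    # Create CAAdmin and add it to the two delegated groups used above
    New-ADUser -Name 'CA Admin' -GivenName CA -Surname Admin -SamAccountName CAAdmin -Path 'OU=Users,OU=CORP,DC=corp,DC=example,DC=com' -AccountPassword (Read-Host -AsSecureString 'CAAdmin password') -Enabled $true
    Add-ADGroupMember -Identity 'AWS Delegated Enterprise Certificate Authority Administrators' -Members CAAdmin
    Add-ADGroupMember -Identity 'AWS Delegated Server Administrators' -Members CAAdmin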

Step 2: Add a Microsoft enterprise CA to your AWS Microsoft AD directory


In this step, you set up a subordinate Microsoft enterprise CA and join it to your AWS Microsoft AD directory domain. I will summarize the process first and then walk through the steps.

First, you create an Amazon EC2 for Windows Server instance called SubordinateCA and join it to the domain, corp.example.com. You then publish RootCA’s public certificate and certificate revocation list (CRL) to SubordinateCA’s local trusted store. You also publish RootCA’s public certificate to your directory domain. Doing so enables SubordinateCA and your directory domain controllers to trust RootCA. You then install the Microsoft enterprise CA service on SubordinateCA and request a certificate from RootCA to make SubordinateCA a subordinate Microsoft CA. After RootCA issues the certificate, SubordinateCA is ready to issue certificates to your directory domain controllers.

Note that you can use an Amazon S3 bucket to pass the certificates between RootCA and SubordinateCA.

In detail, here is how the process works, as illustrated in the preceding diagram:

  1. Set up an Amazon EC2 instance joined to your AWS Microsoft AD directory domain – Create an Amazon EC2 for Windows Server instance to use as a subordinate CA, and join it to your AWS Microsoft AD directory domain. For this example, the machine name is SubordinateCA and the domain is corp.example.com.
  2. Share RootCA’s public certificate with SubordinateCA – Log in to RootCA as RootAdmin and start Windows PowerShell with administrative privileges. Run the following commands to copy RootCA’s public certificate and CRL to the folder c:\rootcerts on RootCA.
    New-Item c:\rootcerts -type directory
    copy C:\Windows\system32\certsrv\certenroll\*.cr* c:\rootcerts

    Upload RootCA’s public certificate and CRL from c:\rootcerts to an S3 bucket by following the steps in How Do I Upload Files and Folders to an S3 Bucket.

The following screenshot shows RootCA’s public certificate and CRL uploaded to an S3 bucket.
Screenshot of RootCA’s public certificate and CRL uploaded to the S3 bucket

  3. Publish RootCA’s public certificate to your directory domain – Log in to SubordinateCA as the CAAdmin. Download RootCA’s public certificate and CRL from the S3 bucket by following the instructions in How Do I Download an Object from an S3 Bucket? Save the certificate and CRL to the C:\rootcerts folder on SubordinateCA. Add RootCA’s public certificate and the CRL to the local store of SubordinateCA and publish RootCA’s public certificate to your directory domain by running the following commands using Windows PowerShell with administrative privileges.
    certutil -addstore -f root <path to the RootCA public certificate file>
    certutil -addstore -f root <path to the RootCA CRL file>
    certutil -dspublish -f <path to the RootCA public certificate file> RootCA
  4. Install the subordinate Microsoft enterprise CA – Install the subordinate Microsoft enterprise CA on SubordinateCA by following the instructions in Install a Subordinate Certification Authority. Ensure that you choose Enterprise CA for Setup Type to install an enterprise CA.

For the CA Type, choose Subordinate CA.

  5. Request a certificate from RootCA – Next, copy the certificate request on SubordinateCA to a folder called c:\CARequest by running the following commands using Windows PowerShell with administrative privileges.
    New-Item c:\CARequest -type directory
    Copy c:\*.req C:\CARequest

    Upload the certificate request to the S3 bucket.
    Screenshot of uploading the certificate request to the S3 bucket

  6. Approve SubordinateCA’s certificate request – Log in to RootCA as RootAdmin and download the certificate request from the S3 bucket to a folder called CARequest. Submit the request by running the following command using Windows PowerShell with administrative privileges.
    certreq -submit <path to certificate request file>

    In the Certification Authority List window, choose OK.
    Screenshot of the Certification Authority List window

Navigate to Server Manager > Tools > Certification Authority on RootCA.
Screenshot of "Certification Authority" in the drop-down menu

In the Certification Authority window, expand the ROOTCA tree in the left pane and choose Pending Requests. In the right pane, note the value in the Request ID column. Right-click the request and choose All Tasks > Issue.
Screenshot of noting the value in the "Request ID" column

  7. Retrieve the SubordinateCA certificate – Retrieve the SubordinateCA certificate by running the following command using Windows PowerShell with administrative privileges. The command includes the <RequestId> that you noted in the previous step.
    certreq -retrieve <RequestId> <drive>:\subordinateCA.crt

    Upload SubordinateCA.crt to the S3 bucket.

  8. Install the SubordinateCA certificate – Log in to SubordinateCA as the CAAdmin and download SubordinateCA.crt from the S3 bucket. Install the certificate by running the following commands using Windows PowerShell with administrative privileges.
    certutil -installcert c:\subordinateCA.crt
    start-service certsvc
  9. Delete the content that you uploaded to S3 – As a security best practice, delete all the certificates and CRLs that you uploaded to the S3 bucket in the previous steps because you already have installed them on SubordinateCA.

You have finished setting up the subordinate Microsoft enterprise CA that is joined to your AWS Microsoft AD directory domain. Now you can use your subordinate Microsoft enterprise CA to create a certificate template so that your directory domain controllers can request a certificate to enable LDAPS for your directory.

Step 3: Create a certificate template


In this step, you create a certificate template with server authentication and autoenrollment enabled on SubordinateCA. You create this new template (in this case, ServerAuthentication) by duplicating an existing certificate template (in this case, Domain Controller template) and adding server authentication and autoenrollment to the template.

Follow these steps to create a certificate template:

  1. Log in to SubordinateCA as CAAdmin.
  2. Launch Microsoft Windows Server Manager. Select Tools > Certification Authority.
  3. In the Certificate Authority window, expand the SubordinateCA tree in the left pane. Right-click Certificate Templates, and choose Manage.
    Screenshot of choosing "Manage" under "Certificate Template"
  4. In the Certificate Templates Console window, right-click Domain Controller and choose Duplicate Template.
    Screenshot of the Certificate Templates Console window
  5. In the Properties of New Template window, switch to the General tab and change the Template display name to ServerAuthentication.
    Screenshot of the "Properties of New Template" window
  6. Switch to the Security tab, and choose Domain Controllers in the Group or user names section. Select the Allow check box for Autoenroll in the Permissions for Domain Controllers section.
    Screenshot of the "Permissions for Domain Controllers" section of the "Properties of New Template" window
  7. Switch to the Extensions tab, choose Application Policies in the Extensions included in this template section, and choose Edit.
    Screenshot of the "Extensions" tab of the "Properties of New Template" window
  8. In the Edit Application Policies Extension window, choose Client Authentication and choose Remove. Choose OK to create the ServerAuthentication certificate template. Close the Certificate Templates Console window.
    Screenshot of the "Edit Application Policies Extension" window
  9. In the Certificate Authority window, right-click Certificate Templates, and choose New > Certificate Template to Issue.
    Screenshot of choosing "New" > "Certificate Template to Issue"
  10. In the Enable Certificate Templates window, choose ServerAuthentication and choose OK.
    Screenshot of the "Enable Certificate Templates" window

You have finished creating a certificate template with server authentication and autoenrollment enabled on SubordinateCA. Your AWS Microsoft AD directory domain controllers can now obtain a certificate through autoenrollment to enable LDAPS.

Step 4: Configure AWS security group rules


In this step, you configure AWS security group rules so that your directory domain controllers can connect to the subordinate CA to request a certificate. To do this, you must add outbound rules to your directory’s AWS security group (in this case, sg-4ba7682d) to allow all outbound traffic to SubordinateCA’s AWS security group (in this case, sg-6fbe7109) so that your directory domain controllers can connect to SubordinateCA for requesting a certificate. You also must add inbound rules to SubordinateCA’s AWS security group to allow all incoming traffic from your directory’s AWS security group so that the subordinate CA can accept incoming traffic from your directory domain controllers.

Follow these steps to configure AWS security group rules:

  1. Log in to the Management instance as Admin.
  2. Navigate to the EC2 console.
  3. In the left pane, choose Network & Security > Security Groups.
  4. In the right pane, choose the AWS security group (in this case, sg-6fbe7109) of SubordinateCA.
  5. Switch to the Inbound tab and choose Edit.
  6. Choose Add Rule. Choose All traffic for Type and Custom for Source. Enter your directory’s AWS security group (in this case, sg-4ba7682d) in the Source box. Choose Save.
    Screenshot of adding an inbound rule
  7. Now choose the AWS security group (in this case, sg-4ba7682d) of your AWS Microsoft AD directory, switch to the Outbound tab, and choose Edit.
  8. Choose Add Rule. Choose All traffic for Type and Custom for Destination. Enter SubordinateCA’s AWS security group (in this case, sg-6fbe7109) in the Destination box. Choose Save.

You have completed the configuration of AWS security group rules to allow traffic between your directory domain controllers and SubordinateCA.
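
If you manage security groups from the command line, the AWS Tools for PowerShell can apply the same rules. The following is a minimal sketch using the example security group IDs from this post; verify the rules in the console afterward.

    # Inbound: allow all traffic from the directory's security group to SubordinateCA's security group
    $directoryGroup = New-Object Amazon.EC2.Model.UserIdGroupPair
    $directoryGroup.GroupId = 'sg-4ba7682d'
    Grant-EC2SecurityGroupIngress -GroupId 'sg-6fbe7109' -IpPermission @( @{ IpProtocol = '-1'; UserIdGroupPairs = $directoryGroup } )

    # Outbound: allow all traffic from the directory's security group to SubordinateCA's security group
    $subordinateCAGroup = New-Object Amazon.EC2.Model.UserIdGroupPair
    $subordinateCAGroup.GroupId = 'sg-6fbe7109'
    Grant-EC2SecurityGroupEgress -GroupId 'sg-4ba7682d' -IpPermission @( @{ IpProtocol = '-1'; UserIdGroupPairs = $subordinateCAGroup } )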

Step 5: AWS Microsoft AD enables LDAPS


The AWS Microsoft AD domain controllers perform this step automatically by recognizing the published template and requesting a certificate from the subordinate Microsoft enterprise CA. The subordinate CA can take up to 180 minutes to issue certificates to the directory domain controllers. The directory imports these certificates into the directory domain controllers and enables LDAPS for your directory automatically. This completes the setup of LDAPS for the AWS Microsoft AD directory. The LDAP service on the directory is now ready to accept LDAPS connections!

Step 6: Test LDAPS access by using the LDP tool


In this step, you test the LDAPS connection to the AWS Microsoft AD directory by using the LDP tool. The LDP tool is available on the Management machine where you installed Active Directory Administration Tools. Before you test the LDAPS connection, you must wait up to 180 minutes for the subordinate CA to issue a certificate to your directory domain controllers.

To test LDAPS, you connect to one of the domain controllers using port 636. Here are the steps to test the LDAPS connection:

  1. Log in to Management as Admin.
  2. Launch the Microsoft Windows Server Manager on Management and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view and navigate to corp.example.com > CORP > Domain Controllers. In the right pane, right-click on one of the domain controllers and choose Properties. Copy the DNS name of the domain controller.
    Screenshot of copying the DNS name of the domain controller
  4. Launch the LDP tool by opening Windows PowerShell and running the LDP.exe command.
  5. In the LDP tool, choose Connection > Connect.
    Screenshot of choosing "Connnection" > "Connect" in the LDP tool
  6. In the Server box, paste the DNS name you copied in the previous step. Type 636 in the Port box. Choose OK to test the LDAPS connection to port 636 of your directory.
    Screenshot of completing the boxes in the "Connect" window
  7. You should see the following message to confirm that your LDAPS connection is now open.
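
If you prefer a scripted check to the LDP GUI, the .NET LDAP classes available from Windows PowerShell can open an LDAPS session. The following is a minimal sketch; replace the placeholder with the domain controller DNS name you copied earlier.

    # Attempt an SSL-protected bind to port 636; Bind() throws an exception if the connection or TLS handshake fails
    Add-Type -AssemblyName System.DirectoryServices.Protocols
    $identifier = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier('<domain controller DNS name>', 636)
    $connection = New-Object System.DirectoryServices.Protocols.LdapConnection($identifier)
    $connection.SessionOptions.SecureSocketLayer = $true
    $connection.Bind()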

You have completed the setup of LDAPS for your AWS Microsoft AD directory! You can now encrypt LDAP communications between your Windows and Linux applications and your AWS Microsoft AD directory using LDAPS.

Summary

In this blog post, I walked through the process of enabling LDAPS for your AWS Microsoft AD directory. Enabling LDAPS helps you protect PII and other sensitive information exchanged over untrusted networks between your Windows and Linux applications and your AWS Microsoft AD. To learn more about how to use AWS Microsoft AD, see the Directory Service documentation. For general information and pricing, see the Directory Service home page.

If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum.

– Vijay

Disabling Intel Hyper-Threading Technology on Amazon EC2 Windows Instances

Post Syndicated from Brian Beach original https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-ec2-windows-instances/

In a prior post, Disabling Intel Hyper-Threading on Amazon Linux, I investigated how the Linux kernel enumerates CPUs. I also discussed the options to disable Intel Hyper-Threading (HT Technology) in Amazon Linux running on Amazon EC2.

In this post, I do the same for Microsoft Windows Server 2016 running on EC2 instances. I begin with a quick review of HT Technology and the reasons you might want to disable it. I also recommend that you take a moment to review the prior post for a more thorough foundation.

HT Technology

HT Technology makes a single physical processor appear as multiple logical processors. Each core in an Intel Xeon processor has two threads of execution. Most of the time, these threads can progress independently; one thread executing while the other is waiting on a relatively slow operation (for example, reading from memory) to occur. However, the two threads do share resources and occasionally one thread is forced to wait while the other is executing.

There are a few unique situations where disabling HT Technology can improve performance. One example is high performance computing (HPC) workloads that rely heavily on floating point operations. In these rare cases, it can be advantageous to disable HT Technology. However, these cases are rare, and for the overwhelming majority of workloads you should leave it enabled. I recommend that you test with and without HT Technology enabled, and only disable threads if you are sure it will improve performance.

Exploring HT Technology on Microsoft Windows

Here’s how Microsoft Windows enumerates CPUs. As before, I am running these examples on an m4.2xlarge. I also chose to run Windows Server 2016, but you can walk through these exercises on any version of Windows. Remember that the m4.2xlarge has eight vCPUs, and each vCPU is a thread of an Intel Xeon core. Therefore, the m4.2xlarge has four cores, each of which runs two threads, resulting in eight vCPUs.

Windows does not have a built-in utility to examine CPU configuration, but you can download the Sysinternals coreinfo utility from Microsoft’s website. This utility provides useful information about the system CPU and memory topology. For this walkthrough, you enumerate the individual CPUs, which you can do by running coreinfo -c. For example:

C:\Users\Administrator >coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**------ Physical Processor 0 (Hyperthreaded)
--**---- Physical Processor 1 (Hyperthreaded)
----**-- Physical Processor 2 (Hyperthreaded)
------** Physical Processor 3 (Hyperthreaded)

As you can see from the output, the coreinfo utility displays a table where each row is a physical core and each column is a logical CPU. In other words, the two asterisks on the first line indicate that CPU 0 and CPU 1 are the two threads in the first physical core. Therefore, my m4.2xlarge has four physical processors, and each processor has two threads, resulting in eight total CPUs, just as expected.

It is interesting to note that Windows Server 2016 enumerates CPUs in a different order than Linux. Remember from the prior post that Linux enumerated the first thread in each core, followed by the second thread in each core. You can see from the output earlier that Windows Server 2016 enumerates both threads in the first core, then both threads in the second core, and so on. The diagram below shows the relationship of CPUs to cores and threads in both operating systems.

In the Linux post, I disabled CPUs 4–7, leaving one thread per core, and effectively disabling HT Technology. You can see from the diagram that you must disable the odd-numbered threads (that is, 1, 3, 5, and 7) to achieve the same result in Windows. Here’s how to do that.

Disabling HT Technology on Microsoft Windows

In Linux, you can globally disable CPUs dynamically. In Windows, there is no direct equivalent that I could find, but there are a few alternatives.

First, you can disable CPUs using the msconfig.exe tool. If you choose Boot, Advanced Options, you have the option to set the number of processors. In the example below, I limit my m4.2xlarge to four CPUs. Restart for this change to take effect.

Unfortunately, Windows does not disable hyperthreaded CPUs first and then real cores, as Linux does. As you can see in the following output, after rebooting, coreinfo reports that my m4.2xlarge now has two physical cores, each running two threads, for a total of four logical CPUs. Msconfig.exe is useful for disabling cores, but it does not allow you to disable HT Technology.

Note: If you have been following along, you can re-enable all your CPUs by unselecting the Number of processors check box and rebooting your system.

 

C:\Users\Administrator >coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**-- Physical Processor 0 (Hyperthreaded)
--** Physical Processor 1 (Hyperthreaded)

While you cannot disable HT Technology systemwide, Windows does allow you to associate a particular process with one or more CPUs. Microsoft calls this “processor affinity”. To see an example, use the following steps.

  1. Launch an instance of Notepad.
  2. Open Windows Task Manager and choose Processes.
  3. Open the context (right click) menu on notepad.exe and choose Set Affinity….

This brings up the Processor Affinity dialog box.

As you can see, all the CPUs are allowed to run this instance of notepad.exe. You can uncheck a few CPUs to exclude them. Windows is smart enough to allow any scheduled operations to continue to completion on disabled CPUs. It then saves its state at the next scheduling event, and resumes those operations on another CPU. To ensure that only one thread in each core is able to run a process, you uncheck every other CPU. This effectively disables HT Technology for this process. For example:

Of course, this can be tedious when you have a large number of cores. Remember that the x1.32xlarge has 128 CPUs. Luckily, you can set the affinity of a running process from PowerShell using the Get-Process cmdlet. For example:

PS C:\> (Get-Process -Name 'notepad').ProcessorAffinity = 0x55;

The ProcessorAffinity attribute takes a bitmask in hexadecimal format. 0x55 in hex is equivalent to 01010101 in binary. Think of the binary encoding as 1=enabled and 0=disabled. This is slightly confusing, but we read the bits from right to left so that CPU 0 is the rightmost bit and CPU 7 is the leftmost bit. Therefore, 01010101 means that the first thread in each core is enabled, just as it was in the diagram earlier.

The calculator built into Windows includes a “programmer view” that helps you convert from hexadecimal to binary. In addition, the ProcessorAffinity attribute is a 64-bit number. Therefore, you can only configure the processor affinity on systems up to 64 CPUs. At the moment, only the x1.32xlarge has more than 64 vCPUs.
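
If you prefer not to open the calculator, PowerShell can do the same conversions. For example:

PS C:\> [Convert]::ToString(0x55, 2)
1010101
PS C:\> [Convert]::ToInt64('01010101', 2).ToString('X')
55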

In the preceding examples, you changed the processor affinity of a running process. Sometimes, you want to start a process with the affinity already configured. You can do this using the start command. The start command includes an affinity flag that takes a hexadecimal number like the PowerShell example earlier.

C:\Users\Administrator>start /affinity 55 notepad.exe

It is interesting to note that a child process inherits the affinity from its parent. For example, the following commands create a batch file that launches Notepad and then start the batch file with the affinity set. If you examine the instance of Notepad launched by the batch file, you see that the affinity has been applied to it as well.

C:\Users\Administrator>echo notepad.exe > test.bat
C:\Users\Administrator>start /affinity 55 test.bat

This means that you can set the affinity of your task scheduler and any tasks that the scheduler starts inherits the affinity. So, you can disable every other thread when you launch the scheduler and effectively disable HT Technology for all of the tasks as well. Be sure to test this point, however, as some schedulers override the normal inheritance behavior and explicitly set processor affinity when starting a child process.

Conclusion

While the Windows operating system does not allow you to disable logical CPUs, you can set processor affinity on individual processes. You also learned that Windows Server 2016 enumerates CPUs in a different order than Linux. Therefore, you can effectively disable HT Technology by restricting a process to every other CPU. Finally, you learned how to set affinity of both new and running processes using Task Manager, PowerShell, and the start command.

Note: this technical approach has nothing to do with control over software licensing, or licensing rights, which are sometimes linked to the number of “CPUs” or “cores.” For licensing purposes, those are legal terms, not technical terms. This post did not cover anything about software licensing or licensing rights.

If you have questions or suggestions, please comment below.

New – Amazon EC2 Elastic GPUs for Windows

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-ec2-elastic-gpus-for-windows/

Today we’re excited to announce the general availability of Amazon EC2 Elastic GPUs for Windows. An Elastic GPU is a GPU resource that you can attach to your Amazon Elastic Compute Cloud (EC2) instance to accelerate the graphics performance of your applications. Elastic GPUs come in medium (1GB), large (2GB), xlarge (4GB), and 2xlarge (8GB) sizes and are lower cost alternatives to using GPU instance types like G3 or G2 (for OpenGL 3.3 applications). You can use Elastic GPUs with many instance types allowing you the flexibility to choose the right compute, memory, and storage balance for your application. Today you can provision elastic GPUs in us-east-1 and us-east-2.

Elastic GPUs start at just $0.05 per hour for an eg1.medium. A nickel an hour. If we attach that Elastic GPU to a t2.medium ($0.065/hour) we pay a total of less than 12 cents per hour for an instance with a GPU. Previously, the cheapest graphical workstation (G2/3 class) cost 76 cents per hour. That’s over an 80% reduction in the price for running certain graphical workloads.

When should I use Elastic GPUs?

Elastic GPUs are best suited for applications that require a small or intermittent amount of additional GPU power for graphics acceleration and support OpenGL. Elastic GPUs support up to and including the OpenGL 3.3 API standards with expanded API support coming soon.

Elastic GPUs are not part of the hardware of your instance. Instead they’re attached through an elastic GPU network interface in your subnet which is created when you launch an instance with an Elastic GPU. The image below shows how Elastic GPUs are attached.

Since Elastic GPUs are network attached it’s important to provision an instance with adequate network bandwidth to support your application. It’s also important to make sure your instance security group allows traffic on port 2007.

Any application that can use the OpenGL APIs can take advantage of Elastic GPUs so Blender, Google Earth, SIEMENS SolidEdge, and more could all run with Elastic GPUs. Even Kerbal Space Program!

Ok, now that we know when to use Elastic GPUs and how they work, let’s launch an instance and use one.

Using Elastic GPUs

First, we’ll navigate to the EC2 console and click Launch Instance. Next we’ll select a Windows AMI like: “Microsoft Windows Server 2016 Base”. Then we’ll select an instance type. Then we’ll make sure we select the “Elastic GPU” section and allocate an eg1.medium (1GB) Elastic GPU.

We’ll also include some userdata in the advanced details section. We’ll write a quick PowerShell script to download and install our Elastic GPU software.


<powershell>
# Log the installation steps for troubleshooting
Start-Transcript -Path "C:\egpu_install.log" -Append
# Download the latest Elastic GPU software installer
(new-object net.webclient).DownloadFile('http://ec2-elasticgpus.s3-website-us-east-1.amazonaws.com/latest', 'C:\egpu.msi')
# Install it silently and write a verbose MSI log
Start-Process "msiexec.exe" -Wait -ArgumentList "/i C:\egpu.msi /qn /L*v C:\egpu_msi_install.log"
# Add the Elastic GPU manager to the system PATH, then reboot to finish the installation
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Amazon\EC2ElasticGPUs\manager\", [EnvironmentVariableTarget]::Machine)
Restart-Computer -Force
</powershell>

This software sends all OpenGL API calls to the attached Elastic GPU.

Next, we’ll double check to make sure the instance’s security group has TCP port 2007 exposed to the VPC so the Elastic GPU can connect to the instance; the best way to do this is to create a separate SG that you can attach to the instance. Finally, we’ll click launch and wait for the instance and Elastic GPU to provision.

You can see an animation of the launch procedure below.

Alternatively we could have launched on the AWS CLI with a quick call like this:

$ aws ec2 run-instances --elastic-gpu-specification Type=eg1.2xlarge \
--image-id ami-1a2b3c4d \
--subnet subnet-11223344 \
--instance-type r4.large \
--security-groups "default" "elasticgpu-sg"

Then we could have followed the Elastic GPU software installation instructions here.

We can now see our Elastic GPU is humming along and attached by checking out the Elastic GPU status in the taskbar.

We welcome any feedback on the service and you can click on the Feedback link in the bottom left corner of the GPU Status Box to let us know about your experience with Elastic GPUs.

Elastic GPU Demonstration

Ok, so we have our instance provisioned and our Elastic GPU attached. My teammates here at AWS wanted me to talk about the amazingly wonderful 3D applications you can run, but when I learned about Elastic GPUs the first thing that came to mind was Kerbal Space Program (KSP), so I’m going to run a quick test with that. After all, if you can’t launch Jebediah Kerman into space then what was the point of all of that software? I’ve downloaded KSP and added the launch parameter of -force-opengl to make sure we’re using OpenGL to do our rendering. Below you can see my poor attempt at building a spaceship – I used to build better ones. It looks pretty smooth considering we’re going over a network with a lossy remote desktop protocol.

I’d show a picture of the rocket launch but I didn’t even make it off the ground before I experienced a rapid unscheduled disassembly of the rocket. Back to the drawing board for me.

In the mean time I can check my Amazon CloudWatch metrics and see how much GPU memory I used during my brief game.

Partners, Pricing, and Documentation

To continue to build out great experiences for our customers, our 3D software partners like ANSYS and Siemens are looking to take advantage of the OpenGL APIs on Elastic GPUs, and are currently certifying Elastic GPUs for their software. You can learn more about our partnerships here.

You can find information on Elastic GPU pricing here. You can find additional documentation here.

Now, if you’ll excuse me I have some virtual rockets to build.

Randall

Winpayloads – Undetectable Windows Payload Generation

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/y3Szx2PyNH4/

Winpayloads is a tool to provide undetectable Windows payload generation with some extras, running on Python 2.7. It provides persistence, privilege escalation, shellcode invocation and much more. Features: UACBypass (PowerShellEmpire), PowerUp (PowerShellEmpire), Invoke-Shellcode, Invoke-Mimikatz, Invoke-EventVwrBypass, Persistence – Adds payload…

Read the full post at darknet.org.uk

How to Deploy Local Administrator Password Solution with AWS Microsoft AD

Post Syndicated from Dragos Madarasan original https://aws.amazon.com/blogs/security/how-to-deploy-local-administrator-password-solution-with-aws-microsoft-ad/

Local Administrator Password Solution (LAPS) from Microsoft simplifies password management by allowing organizations to use Active Directory (AD) to store unique passwords for computers. Typically, an organization might reuse the same local administrator password across the computers in an AD domain. However, this approach represents a security risk because it can be exploited during lateral escalation attacks. LAPS solves this problem by creating unique, randomized passwords for the Administrator account on each computer and storing them encrypted in AD.

Deploying LAPS with AWS Microsoft AD requires the following steps:

  1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain. The binaries add client-side extension (CSE) functionality to the Group Policy client.
  2. Extend the AWS Microsoft AD schema. LAPS requires new AD attributes to store an encrypted password and its expiration time.
  3. Configure AD permissions and delegate the ability to retrieve the local administrator password for IT staff in your organization.
  4. Configure Group Policy on instances joined to your AWS Microsoft AD domain to enable LAPS. This configures the Group Policy client to process LAPS settings and uses the binaries installed in Step 1.

The following diagram illustrates the setup that I will be using throughout this post and the associated tasks to set up LAPS. Note that the AWS Directory Service directory is deployed across multiple Availability Zones, and monitoring automatically detects and replaces domain controllers that fail.

Diagram illustrating this blog post's solution

In this blog post, I explain the prerequisites to set up Local Administrator Password Solution, demonstrate the steps involved to update the AD schema on your AWS Microsoft AD domain, show how to delegate permissions to IT staff and configure LAPS via Group Policy, and demonstrate how to retrieve the password using the graphical user interface or with Windows PowerShell.

This post assumes you are familiar with Lightweight Directory Access Protocol Data Interchange Format (LDIF) files and AWS Microsoft AD. If you need more of an introduction to Directory Service and AWS Microsoft AD, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service, which introduces working with schema changes in AWS Microsoft AD.

Prerequisites

In order to implement LAPS, you must use AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD. Any instance on which you want to configure LAPS must be joined to your AWS Microsoft AD domain. You also need a Management instance on which you install the LAPS management tools.

In this post, I use an AWS Microsoft AD domain called example.com that I have launched in the EU (London) region. To see the regions in which Directory Service is available, see AWS Regions and Endpoints.

Screenshot showing the AWS Microsoft AD domain example.com used in this blog post

In addition, you must have at least two instances launched in the same region as the AWS Microsoft AD domain. To join the instances to your AWS Microsoft AD domain, you have two options:

  1. Use the Amazon EC2 Systems Manager (SSM) domain join feature. To learn more about how to set up domain join for EC2 instances, see joining a Windows Instance to an AWS Directory Service Domain.
  2. Manually configure the DNS server addresses in the Internet Protocol version 4 (TCP/IPv4) settings of the network card to use the AWS Microsoft AD DNS addresses (172.31.9.64 and 172.31.16.191, for this blog post) and perform a manual domain join, as sketched in PowerShell after this list.
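If you choose the manual option, the following is a rough sketch of what that looks like from an elevated Windows PowerShell prompt on the instance. The network adapter name is illustrative and the DNS addresses are the ones used in this post; adjust both to your environment.

# Point the network adapter at the AWS Microsoft AD DNS servers (adapter name is illustrative)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 172.31.9.64,172.31.16.191

# Join the instance to the domain and restart; you are prompted for domain credentials
Add-Computer -DomainName "example.com" -Credential (Get-Credential) -Restart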

For the purpose of this post, my two instances are:

  1. A Management instance, tagged as Management, on which I will install the management tools.
  2. A Web Server instance on which I will be deploying the LAPS binaries.

Screenshot showing the two EC2 instances used in this post

Implementing the solution

 

1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain by using EC2 Run Command

LAPS binaries come in the form of an MSI installer and can be downloaded from the Microsoft Download Center. You can install the LAPS binaries manually, with an automation service such as EC2 Run Command, or with your existing software deployment solution.

For this post, I will deploy the LAPS binaries on my Web Server instance (i-0b7563d0f89d3453a) by using EC2 Run Command:

  1. While signed in to the AWS Management Console, choose EC2. In the Systems Manager Services section of the navigation pane, choose Run Command.
  2. Choose Run a command, and from the Command document list, choose AWS-InstallApplication.
  3. From Target instances, choose the instance on which you want to deploy the LAPS binaries. In my case, I will be selecting the instance tagged as Web Server. If you do not see any instances listed, make sure you have met the prerequisites for Amazon EC2 Systems Manager (SSM) by reviewing the Systems Manager Prerequisites.
  4. For Action, choose Install, and then stipulate the following values:
    • Parameters: /quiet
    • Source: https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi
    • Source Hash: f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766
    • Comment: LAPS deployment

Leave the other options with the default values and choose Run. The AWS Management Console will return a Command ID, which will initially have a status of In Progress. It should take less than 5 minutes to download and install the binaries, after which the Command ID will update its status to Success.
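If you prefer to script this step instead of using the console, the same deployment can be expressed with the Send-SSMCommand cmdlet from the AWS Tools for Windows PowerShell. This is only a sketch; the parameter keys are the ones exposed by the AWS-InstallApplication document, and the instance ID is the Web Server instance used in this post.

# Deploy the LAPS installer through Run Command from PowerShell (sketch)
Send-SSMCommand -InstanceId "i-0b7563d0f89d3453a" -DocumentName "AWS-InstallApplication" -Comment "LAPS deployment" -Parameter @{
    action     = "Install"
    parameters = "/quiet"
    source     = "https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi"
    sourceHash = "f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766"
}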

Status showing the binaries have been installed successfully

If the Command ID runs for more than 5 minutes or returns an error, it might indicate a problem with the installer. To troubleshoot, review the steps in Troubleshooting Systems Manager Run Command.

To verify the binaries have been installed successfully, open Control Panel and review the recently installed applications in Programs and Features.

Screenshot of Control Panel that confirms LAPS has been installed successfully

You should see an entry for Local Administrator Password Solution with a version of 6.2.0.0 or newer.
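If you would rather verify from PowerShell than from Control Panel, a quick check of the installed-programs registry keys works on most systems (the display name may vary slightly between LAPS releases):

# Look for the LAPS entry among installed programs (covers both 64-bit and 32-bit registry views)
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*",
                 "HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*" -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like "*Local Administrator Password Solution*" } |
    Select-Object DisplayName, DisplayVersion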

2. Extend the AWS Microsoft AD schema

In the previous section, I used EC2 Run Command to install the LAPS binaries on an EC2 instance. Now, I am ready to extend the schema in an AWS Microsoft AD domain. Extending the schema is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time.

In an on-premises AD environment, you would update the schema by running the Update-AdmPwdADSchema Windows PowerShell cmdlet with schema administrator credentials. Because AWS Microsoft AD is a managed service, I do not have permissions to update the schema directly. Instead, I will update the AD schema from the Directory Service console by importing an LDIF file. If you are unfamiliar with schema updates or LDIF files, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

To make things easier for you, I am providing you with a sample LDIF file that contains the required AD schema changes. Using Notepad or a similar text editor, open the SchemaChanges-0517.ldif file and update the values of dc=example,dc=com with your own AWS Microsoft AD domain and suffix.

After I update the LDIF file with my AWS Microsoft AD details, I import it by using the AWS Management Console:

  1. On the Directory Service console, select your Microsoft AD directory from the list of directories by choosing its identifier (it will look something like d-534373570ea).
  2. On the Directory details page, choose the Schema extensions tab and choose Upload and update schema.
    Screenshot showing the "Upload and update schema" option
  3. When prompted for the LDIF file that contains the changes, choose the sample LDIF file.
  4. In the background, the LDIF file is validated for errors and a backup of the directory is created for recovery purposes. Updating the schema might take a few minutes, and the status will change to Updating Schema, as shown in the following screenshot.

Screenshot showing the schema updates in progress
When the process has completed, the status of Completed will be displayed, as shown in the following screenshot.

Screenshot showing the process has completed

If the LDIF file contains errors or the schema extension fails, the Directory Service console will generate an error code and additional debug information. To help troubleshoot error messages, see Schema Extension Errors.

The sample LDIF file triggers AWS Microsoft AD to perform the following actions:

  1. Create the ms-Mcs-AdmPwd attribute, which stores the encrypted password.
  2. Create the ms-Mcs-AdmPwdExpirationTime attribute, which stores the time of the password’s expiration.
  3. Add both attributes to the Computer class.
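To confirm the schema extension from a domain-joined instance that has the Active Directory module for Windows PowerShell installed, you can query the schema partition for the new attributes. This is just a sanity check and assumes the attribute names created by the sample LDIF file:

# List the LAPS attributes added to the schema by the LDIF import
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext -Filter 'Name -like "ms-Mcs-AdmPwd*"' |
    Select-Object Name, ObjectClass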

3. Configure AD permissions

In the previous section, I updated the AWS Microsoft AD schema with the required attributes for LAPS. I am now ready to configure the permissions for administrators to retrieve the password and for computer accounts to update their password attribute.

As part of configuring AD permissions, I grant computers the ability to update their own password attribute and specify which security groups have permissions to retrieve the password from AD. As part of this process, I run Windows PowerShell cmdlets that are not installed by default on Windows Server.

Note: To learn more about Windows PowerShell and the concept of a cmdlet (pronounced “command-let”), go to Getting Started with Windows PowerShell.

Before getting started, I need to set up the required tools for LAPS on my Management instance, which must be joined to the AWS Microsoft AD domain. I will be using the same LAPS installer that I downloaded from the Microsoft LAPS website. In my Management instance, I have manually run the installer by clicking the LAPS.x64.msi file. On the Custom Setup page of the installer, under Management Tools, for each option I have selected Install on local hard drive.

Screenshot showing the required management tools

In the preceding screenshot, the features are:

  • The fat client UI – A simple user interface for retrieving the password (I will use it at the end of this post).
  • The Windows PowerShell module – Needed to run the commands in the next sections.
  • The GPO Editor templates – Used to configure Group Policy objects.

The next step is to grant computers in the Computers OU the permission to update their own attributes. While connected to my Management instance, I go to the Start menu and type PowerShell. In the list of results, right-click Windows PowerShell and choose Run as administrator and then Yes when prompted by User Account Control.

In the Windows PowerShell prompt, I type the following command.

Import-module AdmPwd.PS

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com"

To grant the administrator group called Admins the permission to retrieve the computer password, I run the following command in the Windows PowerShell prompt I previously started.

Import-module AdmPwd.PS

Set-AdmPwdReadPasswordPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com" -AllowedPrincipals "Admins"

4. Configure Group Policy to enable LAPS

In the previous section, I deployed the LAPS management tools on my management instance, granted the computer accounts the permission to self-update their local administrator password attribute, and granted my Admins group permissions to retrieve the password.

Note: The following section addresses the Group Policy Management Console and Group Policy objects. If you are unfamiliar with or wish to learn more about these concepts, go to Get Started Using the GPMC and Group Policy for Beginners.

I am now ready to enable LAPS via Group Policy:

  1. On my Management instance (i-03b2c5d5b1113c7ac), I have installed the Group Policy Management Console (GPMC) by running the following command in Windows PowerShell.
Install-WindowsFeature -Name GPMC
  2. Next, I have opened the GPMC and created a new Group Policy object (GPO) called LAPS GPO.
  3. In the Group Policy Management Editor, I navigate to Computer Configuration > Policies > Administrative Templates > LAPS. I have configured the settings using the following values.

  • Password Settings: Enabled (Complexity: large letters, small letters, numbers, specials)
  • Do not allow password expiration time longer than required by policy: Enabled
  • Enable local admin password management: Enabled

  4. Next, I need to link the GPO to an organizational unit (OU) in which my machine accounts sit. In your environment, I recommend testing the new settings on a test OU and then deploying the GPO to production OUs.

Note: If you choose to create a new test organizational unit, you must create it in the OU that AWS Microsoft AD delegates to you to manage. For example, if your AWS Microsoft AD directory name were example.com, the test OU path would be example.com/example/Computers/Test.

  5. To test that LAPS works, I need to make sure the computer has received the new policy by forcing a Group Policy update. While connected to the Web Server instance (i-0b7563d0f89d3453a) using Remote Desktop, I open an elevated administrative command prompt and run the following command: gpupdate /force. I can check whether the policy is applied by running the command: gpresult /r | findstr /c:"LAPS GPO", where LAPS GPO is the name of the GPO created in the second step.
  6. Back on my Management instance, I can then launch the LAPS interface from the Start menu and use it to retrieve the password (as shown in the following screenshot). Alternatively, I can run the Get-ADComputer Windows PowerShell cmdlet to retrieve the password.
Get-ADComputer [YourComputerName] -Properties ms-Mcs-AdmPwd | select name, ms-Mcs-AdmPwd

Screenshot of the LAPS UI, which you can use to retrieve the password
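The AdmPwd.PS module installed on the Management instance also exposes a cmdlet for this, which avoids querying the attribute directly. A minimal example (the computer name is a placeholder):

# Retrieve the managed local administrator password for one computer account
Import-Module AdmPwd.PS
Get-AdmPwdPassword -ComputerName "WebServer1" | Select-Object ComputerName, Password, ExpirationTimestamp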

Summary

In this blog post, I demonstrated how you can deploy LAPS with an AWS Microsoft AD directory. I then showed how to install the LAPS binaries by using EC2 Run Command. Using the sample LDIF file I provided, I showed you how to extend the schema, which is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time. Finally, I showed how to complete the LAPS setup by configuring the necessary AD permissions and creating the GPO that starts the LAPS password change.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the Directory Service forum.

– Dragos

Amazon EC2 Container Service – Launch Recap, Customer Stories, and Code

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-ec2-container-service-launch-recap-customer-stories-and-code/

Today seems like a good time to recap some of the features that we have added to Amazon EC2 Container Service over the last year or so, and to share some customer success stories and code with you! The service makes it easy for you to run any number of Docker containers across a managed cluster of EC2 instances, with full console, API, CloudFormation, CLI, and PowerShell support. You can store your Linux and Windows Docker images in the EC2 Container Registry for easy access.

Launch Recap
Let’s start by taking a look at some of the newest ECS features and some helpful how-to blog posts that will show you how to use them:

Application Load Balancing – We added support for the application load balancer last year. This high-performance load balancing option runs at the application level and allows you to define content-based routing rules. It provides support for dynamic ports and can be shared across multiple services, making it easier for you to run microservices in containers. To learn more, read about Service Load Balancing.

IAM Roles for Tasks – You can secure your infrastructure by assigning IAM roles to ECS tasks. This allows you to grant permissions on a fine-grained, per-task basis, customizing the permissions to the needs of each task. Read IAM Roles for Tasks to learn more.

Service Auto Scaling – You can define scaling policies that scale your services (tasks) up and down in response to changes in demand. You set the desired minimum and maximum number of tasks, create one or more scaling policies, and Service Auto Scaling will take care of the rest. The documentation for Service Auto Scaling will help you to make use of this feature.

Blox – Scheduling, in a container-based environment, is the process of assigning tasks to instances. ECS gives you three options: automated (via the built-in Service Scheduler), manual (via the RunTask function), and custom (via a scheduler that you provide). Blox is an open source scheduler that supports a one-task-per-host model, with room to accommodate other models in the future. It monitors the state of the cluster and is well-suited to running monitoring agents, log collectors, and other daemon-style tasks.

Windows – We launched ECS with support for Linux containers and followed up with support for running Windows Server 2016 Base with Containers.

Container Instance Draining – From time to time you may need to remove an instance from a running cluster in order to scale the cluster down or to perform a system update. Earlier this year we added a set of lifecycle hooks that allow you to better manage the state of the instances. Read the blog post How to Automate Container Instance Draining in Amazon ECS to see how to use the lifecycle hooks and a Lambda function to automate the process of draining existing work from an instance while preventing new work from being scheduled for it.

CI/CD Pipeline with Code* – Containers simplify software deployment and are an ideal target for a CI/CD (Continuous Integration / Continuous Deployment) pipeline. The post Continuous Deployment to Amazon ECS using AWS CodePipeline, AWS CodeBuild, Amazon ECR, and AWS CloudFormation shows you how to build and operate a CI/CD pipeline using multiple AWS services.

CloudWatch Logs Integration – This launch gave you the ability to configure the containers that run your tasks to send log information to CloudWatch Logs for centralized storage and analysis. You simply install the Amazon ECS Container Agent and enable the awslogs log driver.

CloudWatch Events – ECS generates CloudWatch Events when the state of a task or a container instance changes. These events allow you to monitor the state of the cluster using a Lambda function. To learn how to capture the events and store them in an Elasticsearch cluster, read Monitor Cluster State with Amazon ECS Event Stream.

Task Placement Policies – This launch provided you with fine-grained control over the placement of tasks on container instances within clusters. It allows you to construct policies that include cluster constraints, custom constraints (location, instance type, AMI, and attribute), placement strategies (spread or bin pack) and to use them without writing any code. Read Introducing Amazon ECS Task Placement Policies to see how to do this!

EC2 Container Service in Action
Many of our customers, from large enterprises to hot startups and across all industries such as financial services, hospitality, and consumer electronics, are using Amazon ECS to run their microservices applications in production. Companies such as Capital One, Expedia, Okta, Riot Games, and Viacom rely on Amazon ECS.

Mapbox is a platform for designing and publishing custom maps. The company uses ECS to power their entire batch processing architecture to collect and process over 100 million miles of sensor data per day that they use for powering their maps. They also optimize their batch processing architecture on ECS using Spot Instances. The Mapbox platform powers over 5,000 apps and reaches more than 200 million users each month. Its backend runs on ECS allowing it to serve more than 1.3 billion requests per day. To learn more about their recent migration to ECS, read their recent blog post, We Switched to Amazon ECS, and You Won’t Believe What Happened Next.

Travel company Expedia designed their backends with a microservices architecture. With the popularization of Docker, they decided they would like to adopt Docker for its faster deployments and environment portability. They chose to use ECS to orchestrate all their containers because it had great integration with the AWS platform, everything from ALB to IAM roles to VPC integration. This made ECS very easy to use with their existing AWS infrastructure. ECS really reduced the heavy lifting of deploying and running containerized applications. Expedia runs 75% of all apps on AWS in ECS allowing it to process 4 billion requests per hour. Read Kuldeep Chowhan's blog post, How Expedia Runs Hundreds of Applications in Production Using Amazon ECS to learn more.

Realtor.com provides home buyers and sellers with a comprehensive database of properties that are currently for sale. Their move to AWS and ECS has helped them to support business growth that now numbers 50 million unique monthly users who drive up to 250,000 requests per second at peak times. ECS has helped them to deploy their code more quickly while increasing utilization of their cloud infrastructure. Read the Realtor.com Case Study to learn more about how they use ECS, Kinesis, and other AWS services.

Instacart talks about how they use ECS to power their same-day grocery delivery service:

Capital One talks about how they use ECS to automate their operations and their infrastructure management:

Code
Clever developers are using ECS as a base for their own work. For example:

Rack is an open source PaaS (Platform as a Service). It focuses on infrastructure automation, runs in an isolated VPC, and uses a single-tenant build service for security.

Empire is also an open source PaaS. It provides a Heroku-like workflow and is targeted at small and medium sized startups, with an emphasis on microservices.

Cloud Container Cluster Visualizer (c3vis) helps to visualize resource utilization within ECS clusters:

Stay Tuned
We have plenty of new features in the works for ECS, so stay tuned!

Jeff;

 

PowerMemory – Exploit Windows Credentials In Memory

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/xZfoefLWcs0/

PowerMemory is a PowerShell-based tool to exploit Windows credentials present in files and memory; it leverages Microsoft-signed binaries to hack Windows. The method is totally new. It proves that it can be extremely easy to get credentials or any other information from Windows memory without needing to code in C-type languages. In addition,…

Read the full post at darknet.org.uk

New – Tag EC2 Instances & EBS Volumes on Creation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-tag-ec2-instances-ebs-volumes-on-creation/

Way back in 2010, we launched Resource Tagging for EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50, and we have made tags more useful with the introduction of resource groups and a tag editor. Our customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and to control access to resources via IAM policies.

The AWS tagging model provides separate functions for resource creation and resource tagging. While this is flexible and has worked well for many of our users, it does result in a small time window where the resources exist in an untagged state. Using two separate functions means that resource creation could succeed only for tagging to fail, again leaving resources in an untagged state.

Today we are making tagging more flexible and more useful, with four new features:

Tag on Creation – You can now specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources.

Enforced Tag Usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances or EBS volumes.

Resource-Level Permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.

Enforced Volume Encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.

Tag on Creation
You now have the ability to specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources (if the call creates both instances and volumes, you can specify distinct tags for the instance and for each volume). The resource creation and the tagging are performed atomically; both must succeed in order for the operation (RunInstances, CreateVolume, and other functions that create resources) to succeed. You no longer need to build tagging scripts that run after instances or volumes have been created.

Here’s how you specify tags when you launch an EC2 instance (the CostCenter and SaveSnapshotFlag tags are also set on any EBS volumes created when the instance is launched):
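If you are scripting the launch instead of using the console, a sketch with the AWS Tools for Windows PowerShell might look like the following. The AMI ID, instance type, key name, and tag values are placeholders; the -TagSpecification parameter carries the tags that are applied when the instance and its volumes are created.

# Tags to apply at creation time (values are placeholders)
$tags = @(
    (New-Object Amazon.EC2.Model.Tag -ArgumentList "CostCenter","115")
    (New-Object Amazon.EC2.Model.Tag -ArgumentList "SaveSnapshotFlag","true")
)

# One tag specification for the instance and one for its EBS volumes
$instanceSpec = New-Object Amazon.EC2.Model.TagSpecification
$instanceSpec.ResourceType = [Amazon.EC2.ResourceType]::Instance
$instanceSpec.Tags = $tags

$volumeSpec = New-Object Amazon.EC2.Model.TagSpecification
$volumeSpec.ResourceType = [Amazon.EC2.ResourceType]::Volume
$volumeSpec.Tags = $tags

# Launch the instance; creation and tagging succeed or fail together
New-EC2Instance -ImageId "ami-1a2b3c4d" -InstanceType "t2.micro" -KeyName "my-key" -TagSpecification $instanceSpec,$volumeSpec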

To learn more, read Using Tags.

Resource-Level Permissions
CreateTags and DeleteTags now support IAM’s resource-level permissions, as requested by many customers. This gives you additional control over the tag keys and values on existing resources.

Also, RunInstances and CreateVolume now support additional resource-level permissions. This allows you to exercise control over the users and groups that can tag resources on creation.

To learn more, see Example Policies for Working with the AWS CLI or an AWS SDK.

Enforced Tag Usage
You can now write IAM policies that enforce the use of specific tags. For example, you could write a policy that blocks the deletion of tags named Owner or Account. Or, you could write a “Deny” policy that disallows the creation of new tags for specific existing resources. You could also use an IAM policy to enforce the use of Department and CostCenter tags to help you achieve more accurate cost allocation reporting. In order to implement stronger compliance and security policies, you could also restrict access to DeleteTags if the resource is not tagged with the user’s name. The ability to enforce tag usage gives you precise control over access to resources, ownership, and cost allocation.

Here’s a statement that requires the use of costcenter and stack tags (with values of “115” and “prod,” respectively) for all newly created volumes:

"Statement": [
    {
      "Sid": "AllowCreateTaggedVolumes",
      "Effect": "Allow",
      "Action": "ec2:CreateVolume",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/costcenter": "115",
          "aws:RequestTag/stack": "prod"
         },
         "ForAllValues:StringEquals": {
             "aws:TagKeys": ["costcenter","stack"]
         }
       }
     },
     {
       "Effect": "Allow",
       "Action": [
         "ec2:CreateTags"
       ],
       "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
       "Condition": {
         "StringEquals": {
             "ec2:CreateAction" : "CreateVolume"
        }
      }
    }
  ]

Enforced Volume Encryption
Using the additional IAM resource-level permissions now supported by RunInstances and CreateVolume, you can now write IAM policies that mandate the use of encryption for any EBS boot or data volumes created. You can use this to comply with regulatory requirements, enforce enterprise security policies, and to protect your data in compliance with applicable auditing requirements.

Here’s a sample statement that you can incorporate into an IAM policy for RunInstances and CreateVolume to enforce EBS volume encryption:

"Statement": [
        {
            "Effect": "Deny",
            "Action": [
                       "ec2:RunInstances",
                       "ec2:CreateVolume"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:volume/*"
            ],
            "Condition": {
                "Bool": {
                    "ec2:Encrypted": "false"
                }
            }
        },

To learn more and to see some sample policies, take a look at Example Policies for Working with the AWS CLI or an AWS SDK and IAM Policies for Amazon EC2.

Available Now
As you can see, the combination of tagging and the new resource-level permissions on the resource creation and tag manipulation functions gives you the ability to track and control access to your EC2 resources.

This new feature is available now in all regions except AWS GovCloud (US) and China (Beijing). You can start using it today from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the AWS APIs.

We are planning to add support for additional EC2 resource types over time; stay tuned for more information!

Jeff;

EC2 Run Command is Now a CloudWatch Events Target

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-run-command-is-now-a-cloudwatch-events-target/

Ok, time for another peanut butter and chocolate post! Let’s combine EC2 Run Command (New EC2 Run Command – Remote Instance Management at Scale) and CloudWatch Events (New CloudWatch Events – Track and Respond to Changes to Your AWS Resources) and see what we get.

EC2 Run Command is part of EC2 Systems Manager. It allows you to operate on collections of EC2 instances and on-premises servers reliably and at scale, in a controlled and selective fashion. You can run scripts, install software, collect metrics and log files, manage patches, and much more, on both Windows and Linux.

CloudWatch Events gives you the ability to track changes to AWS resources in near real-time. You get a stream of system events that you can easily route to one or more targets including AWS Lambda functions, Amazon Kinesis streams, Amazon SNS topics, and built-in EC2 and EBS targets.

Better Together
Today we are bringing these two services together. You can now create CloudWatch Events rules that use EC2 Run Command to perform actions on EC2 instances or on-premises servers. This opens the door to all sorts of interesting ideas; here are a few that I came up with:

Final Log Collection – Collect application or system logs from instances that are being shut down (either manually or as a result of a scale-in operation initiated by Auto Scaling).

Error Log Condition – Collect logs after an application crash or a security incident.

Instance Setup – After an instance has started, download & install applications, set parameters and configurations, and launch processes.

Configuration Updates – When a config file is changed in S3, install it on applicable instances (perhaps determined by tags). For example, you could install an updated Apache web server config file on a set of properly tagged instances, and then restart the server so that it picks up the changes. Or, update an instance-level firewall each time the AWS IP Address Ranges are updated.

EBS Snapshot Testing and Tracking – After a fresh snapshot has been created, mount it on a test instance, check the filesystem for errors, and then index the files in the snapshot.

Instance Coordination – Every time an instance is launched or terminated, inform the others so that they can update internal tracking information or rebalance their workloads.

I’m sure that you have some more interesting ideas; please feel free to share them in the comments.

Time for Action!
Let’s set this up. Suppose I want to run a specific PowerShell script every time Auto Scaling adds another instance to an Auto Scaling Group.

I start by opening the CloudWatch Events Console and clicking on Create rule:

I configure my Event Source to be my Auto Scaling Group (AS-Main-1), and indicate that I want to take action when EC2 instances are launched successfully:

Then I set up the target. I choose SSM Run Command, pick the AWS-RunShellScript document, and indicate that I want the command to be run on the instances that are tagged as coming from my Auto Scaling group:

Then I click on Configure details, give my rule a name and a description, and click on Create rule:

With everything set up, the command service httpd start will be run on each instance launched as a result of a scale out operation.
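If you would rather define the rule programmatically than in the console, the event pattern behind the Event Source selection above looks roughly like this. The sketch uses the AWS Tools for Windows PowerShell; the rule name is made up, and the Auto Scaling group name matches the one used in this walkthrough. The Run Command target would still be registered separately, as shown in the console steps.

# Event pattern: successful instance launches from one Auto Scaling group
$pattern = @'
{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance Launch Successful"],
  "detail": { "AutoScalingGroupName": ["AS-Main-1"] }
}
'@

# Create (or update) the CloudWatch Events rule
Write-CWERule -Name "RunScriptOnScaleOut" -EventPattern $pattern -State ENABLED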

Available Now
This new feature is available now and you can start using it today.

Jeff;

 

SessionGopher – Session Extraction Tool

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/di3wBU4noU8/

SessionGopher is a PowerShell Session Extraction tool that uses WMI to extract saved session information for remote access tools such as WinSCP, PuTTY, SuperPuTTY, FileZilla, and Microsoft Remote Desktop. The tool can find and decrypt saved session information for remote access tools. It has WMI functionality built in so it can be run remotely,…

Read the full post at darknet.org.uk

Streamline AMI Maintenance and Patching Using Amazon EC2 Systems Manager | Automation

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/streamline-ami-maintenance-and-patching-using-amazon-ec2-systems-manager-automation/

Here to tell you about using Automation to streamline AMI maintenance and patching is Taylor Anderson, a Senior Product Manager with EC2.

-Ana


 

Last December at re:Invent, we launched Amazon EC2 Systems Manager, which helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. These capabilities enable automated configuration and ongoing management of systems at scale, and help maintain software compliance for instances running in Amazon EC2 or on-premises.

One feature within Systems Manager is Automation, which can be used to patch, update agents, or bake applications into an Amazon Machine Image (AMI). With Automation, you can avoid the time and effort associated with manual image updates, and instead build AMIs through a streamlined, repeatable, and auditable process.

Recently, we released the first public document for Automation: AWS-UpdateLinuxAmi. This document allows you to automate patching of Ubuntu, CentOS, RHEL, and Amazon Linux AMIs, as well as automating the installation of additional site-specific packages and configurations.

More importantly, it makes it easy to get started with Automation, eliminating the need to first write an Automation document. AWS-UpdateLinuxAmi can also be used as a template when building your own Automation workflow. Windows users can expect the equivalent document, AWS-UpdateWindowsAmi, in the coming weeks.

AWS-UpdateLinuxAmi automates the following workflow:

  1. Launch a temporary EC2 instance from a source Linux AMI.
  2. Update the instance.
    • Invoke a user-provided, pre-update hook script on the instance (optional).
    • Update any AWS tools and agents on the instance, if present.
    • Update the instance’s distribution packages using the native package manager.
    • Invoke a user-provided post-update hook script on the instance (optional).
  3. Stop the temporary instance.
  4. Create a new AMI from the stopped instance.
  5. Terminate the instance.

Warning: Creation of an AMI from a running instance carries a risk that credentials, secrets, or other confidential information from that instance may be recorded to the new image. Use caution when managing AMIs created by this process.

Configuring roles and permissions for Automation

If you haven’t used Automation before, you need to configure IAM roles and permissions. This includes creating a service role for Automation, assigning a passrole to authorize a user to provide the service role, and creating an instance role to enable instance management under Systems Manager. For more details, see Configuring Access to Automation.

Executing Automation

      1. In the EC2 console, choose Systems Manager Services, Automations.
      2. Choose Run automation document.
      3. Expand Document name and choose AWS-UpdateLinuxAmi.
      4. Choose the latest document version.
      5. For SourceAmiId, enter the ID of the Linux AMI to update.
      6. For InstanceIamRole, enter the name of the instance role you created enabling Systems Manager to manage an instance (that is, it includes the AmazonEC2RoleforSSM managed policy). For more details, see Configuring Access to Automation.
      7. For AutomationAssumeRole, enter the ARN of the service role you created for Automation. For more details, see Configuring Access to Automation.
      8. Choose Run Automation.
      9. Monitor progress in the Automation Steps tab, and view step-level outputs.

After execution is complete, choose Description to view any outputs returned by the workflow. In this example, AWS-UpdateLinuxAmi returns the new AMI ID.

Next, choose Images, AMIs to view your new AMI.
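If you want to start the same workflow from PowerShell instead of the console, a minimal sketch with the AWS Tools for Windows PowerShell follows. The AMI ID and role names are placeholders; the parameter keys match the document inputs described above.

# Start the AWS-UpdateLinuxAmi workflow (AMI ID and role names are placeholders)
$executionId = Start-SSMAutomationExecution -DocumentName "AWS-UpdateLinuxAmi" -Parameter @{
    SourceAmiId          = "ami-1a2b3c4d"
    InstanceIamRole      = "ManagedInstanceRole"
    AutomationAssumeRole = "arn:aws:iam::123456789012:role/AutomationServiceRole"
}

# Check progress and, when the workflow finishes, read the new AMI ID from the outputs
Get-SSMAutomationExecution -AutomationExecutionId $executionId | Select-Object AutomationExecutionStatus, Outputs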

There is no additional charge to use the Automation service, and any resources created by a workflow incur nominal charges. Note that if you terminate AWS-UpdateLinuxAmi before it reaches the “Terminate Instance” step, be sure to shut down the temporary instance created by the workflow.

A CLI walkthrough of the above steps can be found at Automation CLI Walkthrough: Patch a Linux AMI.

Conclusion

Now that you’ve successfully run AWS-UpdateLinuxAmi, you may want to create default values for the service and instance roles. You can customize your workflow by creating your own Automation document based on AWS-UpdateLinuxAmi. For more details, see Create an Automation Document. After you’ve created your document, you can write additional steps and add them to the workflow.

Example steps include:

      • Updating an Auto Scaling group with the new AMI ID (aws:invokeLambdaFunction action type)
      • Creating an encrypted copy of your new AMI (aws:encryptedCopy action type)
      • Validating your new AMI using Run Command with the RunPowerShellScript document (aws:runCommand action type)

Automation also makes a great addition to a CI/CD pipeline for application bake-in, and can be invoked as a CLI build step in Jenkins. For details on these examples, be sure to check out the Automation technical documentation. For updates on Automation, Amazon EC2 Systems Manager, Amazon CloudFormation, AWS Config, AWS OpsWorks and other management services, be sure to follow the all-new Management Tools blog.

 

Automating the Creation of Consistent Amazon EBS Snapshots with Amazon EC2 Systems Manager (Part 2)

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/automating-the-creation-of-consistent-amazon-ebs-snapshots-with-amazon-ec2-systems-manager-part-2/

Nicolas Malaval, AWS Professional Consultant

In my previous blog post, I discussed the challenge of creating Amazon EBS snapshots when you cannot turn off the instance during backup because this might exclude any data that has been cached by any applications or the operating system. I showed how you can use EC2 Systems Manager to run a script remotely on EC2 instances to prepare the applications and the operating system for backup and to automate the creation of snapshots on a daily basis. I gave a practical example of creating consistent Amazon EBS snapshots of Amazon Linux running a MySQL database.

In this post, I walk you through another practical example to create consistent snapshots of a Windows Server instance with Microsoft VSS (Volume Shadow Copy Service).

Understanding the example

VSS (Volume Shadow Copy Service) is a Windows built-in service that coordinates backup of VSS-compatible applications (SQL Server, Exchange Server, etc.) to flush and freeze their I/O operations.

The VSS service initiates and oversees the creation of shadow copies. A shadow copy is a point-in-time, consistent snapshot of a logical volume (for example, C:), which is not the same thing as an EBS snapshot of an EBS volume. Multiple components are involved in the shadow copy creation:

  • The VSS requester requests the creation of shadow copies.
  • The VSS provider creates and maintains the shadow copies.
  • The VSS writers guarantee that you have a consistent data set to back up. They flush and freeze I/O operations, before the VSS provider creates the shadow copies, and release I/O operations, after the VSS provider has completed this action. There is usually one VSS writer for each VSS-compatible application.

I use Run Command to execute a PowerShell script on the Windows instance:

$EbsSnapshotPsFileName = "C:/tmp/ebsSnapshot.ps1"

$EbsSnapshotPs = New-Item -Type File $EbsSnapshotPsFileName -Force

Add-Content $EbsSnapshotPs '$InstanceID = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/instance-id'
Add-Content $EbsSnapshotPs '$AZ = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/placement/availability-zone'
Add-Content $EbsSnapshotPs '$Region = $AZ.Substring(0, $AZ.Length-1)'
Add-Content $EbsSnapshotPs '$Volumes = ((Get-EC2InstanceAttribute -Region $Region -Instance "$InstanceId" -Attribute blockDeviceMapping).BlockDeviceMappings.Ebs |? {$_.Status -eq "attached"}).VolumeId'
Add-Content $EbsSnapshotPs '$Volumes | New-EC2Snapshot -Region $Region -Description " Consistent snapshot of a Windows instance with VSS" -Force'
Add-Content $EbsSnapshotPs 'Exit $LastExitCode'

First, the script writes in a local file named ebsSnapshot.ps1 a PowerShell script that creates a snapshot of every EBS volume attached to the instance.

$EbsSnapshotCmdFileName = "C:/tmp/ebsSnapshot.cmd"
$EbsSnapshotCmd = New-Item -Type File $EbsSnapshotCmdFileName -Force

# Double quotes so that the path to the PowerShell script is expanded into the .cmd file
Add-Content $EbsSnapshotCmd "powershell.exe -ExecutionPolicy Bypass -file $EbsSnapshotPsFileName"
Add-Content $EbsSnapshotCmd 'exit %errorlevel%'

It writes in a second file named ebsSnapshot.cmd a shell script that executes the PowerShell script created earlier.

$VssScriptFileName = "C:/tmp/scriptVss.txt"
$VssScript = New-Item -Type File $VssScriptFileName -Force

Add-Content $VssScript 'reset'
Add-Content $VssScript 'set context persistent'
Add-Content $VssScript 'set option differential'
Add-Content $VssScript 'begin backup'

$Drives = Get-WmiObject -Class Win32_LogicalDisk |? {$_.VolumeName -notmatch "Temporary" -and $_.DriveType -eq "3"} | Select-Object DeviceID

$Drives | ForEach-Object { Add-Content $VssScript $('add volume ' + $_.DeviceID + ' alias Volume' + $_.DeviceID.Substring(0, 1)) }

Add-Content $VssScript 'create'
Add-Content $VssScript "exec $EbsSnapshotCmdFileName"
Add-Content $VssScript 'end backup'

$Drives | ForEach-Object { Add-Content $VssScript $('delete shadows id %Volume' + $_.DeviceID.Substring(0, 1) + '%') }

Add-Content $VssScript 'exit'

It creates a third file named scriptVss.txt containing DiskShadow commands. DiskShadow is a tool included in Windows Server 2008 and above, that exposes the functionality offered by the VSS service. The script creates a shadow copy of each logical volume stored on EBS, runs the shell script ebsSnapshot.cmd to create a snapshot of underlying EBS volumes, and then deletes the shadow copies to free disk space.

diskshadow.exe /s $VssScriptFileName
Exit $LastExitCode

Finally, it runs DiskShadow in script mode.

This PowerShell script is contained in a new SSM document and the maintenance window executes a command from this document every day at midnight on every Windows instance that has a tag “ConsistentSnapshot” equal to “WindowsVSS”.

Implementing and testing the example

First, use AWS CloudFormation to provision some of the required resources in your AWS account.

  1. Open Create a Stack to create a CloudFormation stack from the template.
  2. Choose Next.
  3. Enter the ID of the latest AWS Windows Server 2016 Base AMI available in the current region (see Finding a Windows AMI) in pWindowsAmiId.
  4. Follow the on-screen instructions.

CloudFormation creates the following resources:

  • A VPC with an Internet gateway attached.
  • A subnet on this VPC with a new route table, to enable access to the Internet and therefore to the AWS APIs.
  • An IAM role to grant an EC2 instance the required permissions.
  • A security group that allows RDP access from the Internet, as you need to log on to the instance later on.
  • A Windows instance in the subnet with the IAM role and the security group attached.
  • A SSM document containing the script described in the section above to create consistent EBS snapshots.
  • Another SSM document containing a script to restore logical volumes to a consistent state, as explained in the next section.
  • An IAM role to grant the maintenance window the required permissions.

After the stack creation completes, choose Outputs in the CloudFormation console and note the values returned:

  • IAM role for the maintenance window
  • Names of the two SSM documents

Then, manually create a maintenance window, if you have not already created it. For detailed instructions, see the “Example” section in the previous blog post.

After you create a maintenance window, assign a target where the task will run:

  1. In the Maintenance Window list, choose the maintenance window that you just created.
  2. For Actions, choose Register targets.
  3. For Owner information, enter WindowsVSS.
  4. Under the Select targets by section, choose Specifying tags. For Tag Name, choose ConsistentSnapshot. For Tag Value, choose WindowsVSS.
  5. Choose Register targets.

Finally, assign a task to perform during the window:

  1. In the Maintenance Window list, choose the maintenance window that you just created.
  2. For Actions, choose Register tasks.
  3. For Document, select the name of the SSM document that was returned by CloudFormation, with which to create snapshots.
  4. Under the Target by section, choose the target that you just created.
  5. Under the Role section, select the IAM role that was returned by CloudFormation.
  6. Under Execute on, for Targets, enter 1. For Stop after, enter 1 errors.
  7. Choose Register task.

You can view the history either in the History tab of the Maintenance Windows navigation pane of the Amazon EC2 console, as illustrated on the following figure, or in the Run Command navigation pane, with more details about each command executed.

Restoring logical volumes to a consistent state

DiskShadow, the VSS requester in this case, uses the Windows built-in VSS provider. To create a shadow copy, this built-in provider does not make a complete copy of the data. Instead, it keeps a copy of each block of data before a change overwrites it, in a dedicated storage area. The logical volume can be restored to its initial consistent state by combining the actual volume data with the initial data of the changed blocks.

The DiskShadow command create instructs the VSS service to proceed with the creation of shadow copies, including the release of I/O operations by the VSS writers after the shadow copies are created. Therefore, the EBS snapshots created by the next command exec may not be fully consistent.

Note: A workaround could be to build your own VSS provider in charge of creating EBS snapshots. Doing so would enable the EBS snapshots to be created before I/O operations are released. We will not develop this solution in this blog post.

Therefore, you need to “undo” any I/O operations that may have happened between the moment when the shadow copy was created and the moment when the EBS snapshots were created.

A solution consists of creating an EBS volume from the snapshot, attaching it to an intermediate Windows instance, and “reverting” the VSS shadow copy to restore the EBS volume to a consistent state. For the sake of simplicity, use the Windows instance that was backed up as the intermediate instance.

To manually restore an EBS snapshot to a consistent state:

  1. In the Amazon EC2 console, choose Instances.
  2. In the search box, enter Consistent EBS Snapshots – Windows with VSS. The search results should display a single instance. Note the Availability Zone for this instance.
  3. Choose Snapshots.
  4. Select the latest snapshot with the description “Consistent snapshot of Windows with VSS” and choose Actions, Create Volume.
  5. Select the same Availability Zone as the instance and choose Create, Volumes.
  6. Select the volume that was just created and choose Actions, Attach Volume.
  7. For Instance, choose Consistent EBS Snapshots – Windows with VSS and choose Attach.
  8. Choose Run Command, Run a command.
  9. In Command document, select the name of the SSM document for restoring snapshots that was returned by CloudFormation. For Target instances, select the Windows instance and choose Run.

Run Command executes the following PowerShell script on the Windows instance. It retrieves the list of offline disks (which corresponds in this case to the EBS volume that you just attached) and, for each offline disk, takes it online, reverts the existing shadow copies, and takes it offline again.

$OfflineDisks = (Get-Disk |? {$_.OperationalStatus -eq "Offline"})

foreach ($OfflineDisk in $OfflineDisks) {
  Set-Disk -Number $OfflineDisk.Number -IsOffline $False
  Set-Disk -Number $OfflineDisk.Number -IsReadonly $False
  Write-Host "Disk " $OfflineDisk.Signature " is now online"
}

$ShadowCopyIds = (Get-CimInstance Win32_ShadowCopy).Id
Write-Host "Number of shadow copies found: " $ShadowCopyIds.Count

foreach ($ShadowCopyId in $ShadowCopyIds) {
  "revert " + $ShadowCopyId | diskshadow
}

foreach ($OfflineDisk in $OfflineDisks) {
  $CurrentSignature = (Get-Disk -Number $OfflineDisk.Number).Signature
  if ($OfflineDisk.Signature -eq $CurrentSignature) {
    Set-Disk -Number $OfflineDisk.Number -IsReadonly $True
    Set-Disk -Number $OfflineDisk.Number -IsOffline $True
    Write-Host "Disk " $OfflineDisk.Number " is now offline"
  }
  else {
    Set-Disk -Number $OfflineDisk.Number -Signature $OfflineDisk.Signature
    Write-Host "Reverting to the initial disk signature: " $OfflineDisk.Signature
  }
}

The EBS volume is now in a consistent state and can be detached from the intermediate instance.

Conclusion

In this series of blog posts, I showed how you can use Amazon EC2 Systems Manager to create consistent EBS snapshots on a daily basis, with two practical examples for Linux and Windows. You can adapt this solution to your own requirements. For example, you may develop scripts for your own applications.

If you have questions or suggestions, please comment below.