Tag Archives: Active Directory

[$] Authentication and authorization in Samba 4

Post Syndicated from jake original https://lwn.net/Articles/747122/rss

Volker Lendecke is one of the first contributors to Samba,
having submitted his first patches in 1994. In addition to developing
other important file-sharing tools, he’s heavily involved in development of
the winbind service, which is implemented in winbindd. Although the core Active Directory (AD) domain controller
(DC) code was written by his colleague Stefan Metzmacher, winbind is a
crucial component of Samba’s AD functionality.
In his information-packed talk at FOSDEM 2018, Lendecke said he aimed to give a high-level overview of what AD and Samba authentication is, and in particular the communication pathways and trust relationships between the parts of Samba that authenticate a Samba user in an AD environment.

[$] Two FOSDEM talks on Samba 4

Post Syndicated from jake original https://lwn.net/Articles/747098/rss

Much as some of us would love never to have to deal with Windows,
it exists. It wants to authenticate its users and share
resources like files and printers over the network. Although many
enterprises use Microsoft tools to do this, there is a free alternative,
in the form of Samba. While Samba 3 has been happily providing
authentication along with file and print sharing to Windows clients for
many years,
the Microsoft world has been slowly moving toward Active Directory (AD).
Meanwhile, Samba 4, which adds a free reimplementation of AD on Linux, has
been increasingly ready for deployment. Three short talks at FOSDEM 2018
provided three different views of Samba 4, also known as Samba-AD,
and left behind a pretty clear picture that Samba 4 is truly
ready for use.

Subscribers can read on for a report from guest author Tom Yates on the first two of those talks; stay tuned for another on the third soon.

Build a Multi-Tenant Amazon EMR Cluster with Kerberos, Microsoft Active Directory Integration and EMRFS Authorization

Post Syndicated from Songzhi Liu original https://aws.amazon.com/blogs/big-data/build-a-multi-tenant-amazon-emr-cluster-with-kerberos-microsoft-active-directory-integration-and-emrfs-authorization/

One of the challenges faced by our customers—especially those in highly regulated industries—is balancing the need for security with flexibility. In this post, we cover how to enable multi-tenancy and increase security by using EMRFS (EMR File System) authorization, the Amazon S3 storage-level authorization on Amazon EMR.

Amazon EMR is an easy, fast, and scalable analytics platform enabling large-scale data processing. EMRFS authorization provides Amazon S3 storage-level authorization by configuring EMRFS with multiple IAM roles. With this functionality enabled, different users and groups can share the same cluster, each assuming their own IAM role.

Simply put, on Amazon EMR, we can now have an Amazon EC2 role per user assumed at run time instead of one general EC2 role at the cluster level. When a user tries to access Amazon S3 resources, Amazon EMR evaluates a predefined mapping list in the EMRFS configuration and picks up the right role for that user.

In this post, we discuss what EMRFS authorization is (Amazon S3 storage-level access control) and show how to configure the role mappings with detailed examples, giving you the desired permissions in a multi-tenant environment. We also demonstrate Amazon S3 access from the HDFS command line, Apache Hive on Hue, and Apache Spark.

EMRFS authorization for Amazon S3

There are two prerequisites for using this feature:

  1. Users must be authenticated, because EMRFS needs to map the current user/group/prefix to a predefined user/group/prefix. There are several authentication options. In this post, we launch a Kerberos-enabled cluster that manages the Key Distribution Center (KDC) on the master node, and enable a one-way trust from the KDC to a Microsoft Active Directory domain.
  2. The application must support accessing Amazon S3 via EMRFS. Applications that have their own S3FileSystem APIs (for example, Presto) are not supported at this time.

EMRFS supports three types of mapping entries: user, group, and Amazon S3 prefix. Let’s use an example to show how this works.

Assume that you have the following three identities in your organization, defined in Active Directory: an admin user (admin1), a data engineering group (grp_data_engineering), and a data science group (grp_data_science).

To enable all these groups and users to share the EMR cluster, you need to define the following IAM roles (all named later in this post): emrfs_auth_user_role_admin_user, emrfs_auth_group_role_data_sci, emrfs_auth_group_role_data_eng, and emrfs_auth_prefix_role_default_s3_prefix.

In this case, you create a separate Amazon EC2 role that doesn’t give any permission to Amazon S3. Let’s call this the base role (the EC2 role attached to the EMR cluster), which in this example is named EMR_EC2_RestrictedRole. Then you define all the Amazon S3 permissions for each specific user or group in their own roles. The restricted role serves as the fallback when a user doesn’t match any user or group mapping and isn’t accessing any of the Amazon S3 prefixes defined on the list.

Important: For all the other roles, like emrfs_auth_group_role_data_eng, you need to add the base role (EMR_EC2_RestrictedRole) as a trusted entity so that the base role can assume them. See the following example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::511586466501:role/EMR_EC2_RestrictedRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

The following is an example policy for the admin user role (emrfs_auth_user_role_admin_user):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

We are assuming the admin user has access to all buckets in this example.

The following is an example policy for the data science group role (emrfs_auth_group_role_data_sci):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-science-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

This role grants all Amazon S3 permissions to the emrfs-auth-data-science-bucket-demo bucket and all the objects in it. Similarly, the policy for the role emrfs_auth_group_role_data_eng is shown below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-data-engineering-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}

Example role mappings configuration

To configure EMRFS authorization, you use an EMR security configuration. Here is the configuration we use in this post.
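
In the original post, this configuration appears as a screenshot. The following is a sketch of what the EMRFS role-mapping section of such a security configuration looks like, using the roles and identities from this post (the JSON shape follows the EMR security configuration format; the account ID is the one from the trust policy above, and the mapping order matches the scenario below):

{
  "AuthorizationConfiguration": {
    "EmrFsConfiguration": {
      "RoleMappings": [
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_user_role_admin_user",
          "IdentifierType": "User",
          "Identifiers": [ "admin1" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_eng",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_engineering" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_group_role_data_sci",
          "IdentifierType": "Group",
          "Identifiers": [ "grp_data_science" ]
        },
        {
          "Role": "arn:aws:iam::511586466501:role/emrfs_auth_prefix_role_default_s3_prefix",
          "IdentifierType": "Prefix",
          "Identifiers": [ "s3://emrfs-auth-default-bucket-demo/" ]
        }
      ]
    }
  }
}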

Consider the following scenario.

First, the admin user admin1 tries to log in and run a command to access Amazon S3 data through EMRFS. The first role on the mapping list, emrfs_auth_user_role_admin_user, which is a user role, matches and is picked up. Then admin1 has access to the Amazon S3 locations that are defined in this role.

Then a user from the data engineering group (grp_data_engineering) tries to access a data bucket to run some jobs. When EMRFS sees that the user is a member of grp_data_engineering, the group role emrfs_auth_group_role_data_eng is assumed, and the user has the Amazon S3 access defined in that role.

Next comes a third user, who is not an admin and doesn’t belong to any of the groups. After the top three entries fail to match, EMRFS evaluates whether the user is trying to access an Amazon S3 prefix defined in the last mapping entry. This type of mapping entry is called the prefix type. If the user is trying to access s3://emrfs-auth-default-bucket-demo/, the prefix mapping takes effect, and the prefix role emrfs_auth_prefix_role_default_s3_prefix is assumed.

If the user is not trying to access any of the Amazon S3 paths defined on the list—that is, the user fails evaluation of all the entries—the user only has the permissions defined in EMR_EC2_RestrictedRole, the role assumed by the EC2 instances in the cluster.

In this process, the mappings are evaluated in the order defined; the first role that matches is assumed, and the rest of the list is skipped.
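
The post shows example policies for the admin user role and the two group roles, but not for the prefix role. By analogy with those policies, a sketch of what emrfs_auth_prefix_role_default_s3_prefix might contain, scoped to the default bucket named in the prefix mapping (the policy body is an assumption, not taken from the original post):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::emrfs-auth-default-bucket-demo/*",
                "arn:aws:s3:::emrfs-auth-default-bucket-demo"
            ],
            "Action": [
                "s3:*"
            ]
        }
    ]
}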

Setting up an EMR cluster and mapping Active Directory users and groups

Now that we know how EMRFS authorization role mapping works, the next thing we need to think about is how we can use this feature in an easy and manageable way.

Active Directory setup

Many customers manage their users and groups using Microsoft Active Directory or other tools like OpenLDAP. In this post, we set up Active Directory on an Amazon EC2 instance running Windows Server and create the users and groups we will be using in the example below. After setting up Active Directory, we use the Amazon EMR Kerberos auto-join capability to establish a one-way trust from the KDC running on the EMR master node to the Active Directory domain on the EC2 instance. You can use your own directory service as long as it speaks LDAP (Lightweight Directory Access Protocol).

To create and join Active Directory to Amazon EMR, follow the steps in the blog post Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory.

After configuring Active Directory, you can create all the users and groups using the Active Directory tools and add users to the appropriate groups. In this example, we created the users admin1, dataeng1, and datascientist1 and the groups grp_data_engineering and grp_data_science, and then added the users to the right groups.

Join the EMR cluster to an Active Directory domain

For clusters with Kerberos, Amazon EMR now supports automated Active Directory domain joins. You can use the security configuration to configure the one-way trust from the KDC to the Active Directory domain. You also configure the EMRFS role mappings in the same security configuration.

The following is an example of the EMR security configuration with a trusted Active Directory domain, EMRKRB.TEST.COM, together with the EMRFS role mappings we discussed earlier.
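
The original post shows this as a screenshot. A minimal sketch of the Kerberos portion of such a security configuration, with a cluster-dedicated KDC and a cross-realm trust to EMRKRB.TEST.COM (the ticket lifetime and server names here are illustrative assumptions), might look like the following:

{
  "AuthenticationConfiguration": {
    "KerberosConfiguration": {
      "Provider": "ClusterDedicatedKdc",
      "ClusterDedicatedKdcConfiguration": {
        "TicketLifetimeInHours": 24,
        "CrossRealmTrustConfiguration": {
          "Realm": "EMRKRB.TEST.COM",
          "Domain": "emrkrb.test.com",
          "AdminServer": "emrkrb.test.com",
          "KdcServer": "emrkrb.test.com"
        }
      }
    }
  }
}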

Later in this post, we also provide an example AWS CLI command that you can run.

Launching the EMR cluster and running the tests

Now you have configured Kerberos and EMRFS authorization for Amazon S3.

Additionally, you need to configure Hue with Active Directory using the Amazon EMR configuration API in order to log in using the AD users created before. The following is an example of Hue AD configuration.

[
  {
    "Classification":"hue-ini",
    "Properties":{

    },
    "Configurations":[
      {
        "Classification":"desktop",
        "Properties":{

        },
        "Configurations":[
          {
            "Classification":"ldap",
            "Properties":{

            },
            "Configurations":[
              {
                "Classification":"ldap_servers",
                "Properties":{

                },
                "Configurations":[
                  {
                    "Classification":"AWS",
                    "Properties":{
                      "base_dn":"DC=emrkrb,DC=test,DC=com",
                      "ldap_url":"ldap://emrkrb.test.com",
                      "search_bind_authentication":"false",
                      "bind_dn":"CN=adjoiner,CN=users,DC=emrkrb,DC=test,DC=com",
                      "bind_password":"Abc123456",
                      "create_users_on_login":"true",
                      "nt_domain":"emrkrb.test.com"
                    },
                    "Configurations":[

                    ]
                  }
                ]
              }
            ]
          },
          {
            "Classification":"auth",
            "Properties":{
              "backend":"desktop.auth.backend.LdapBackend"
            },
            "Configurations":[

            ]
          }
        ]
      }
    ]
  }
]

Note: In the preceding configuration JSON file, change the values as required before pasting it into the software setting section in the Amazon EMR console.

Now let’s use this configuration and the security configuration you created before to launch the cluster.

In the Amazon EMR console, choose Create cluster. Then choose Go to advanced options. On the Step 1: Software and Steps page, under Edit software settings (optional), paste the configuration in the box.

The rest of the setup is the same as an ordinary cluster setup, except for the security options. In Step 4: Security, under Permissions, choose Custom, and then choose the EMR_EC2_RestrictedRole that you created before.

Choose subnets that meet the requirements for a successful Active Directory join (see the Amazon EMR Management Guide for details), and choose security groups that allow the cluster to talk to Active Directory. Choose a key pair so that you can log in and configure the cluster.

Most importantly, choose the security configuration that you created earlier to enable Kerberos and EMRFS authorization for Amazon S3.

You can use the following AWS CLI command to create a cluster.

aws emr create-cluster --name "TestEMRFSAuthorization" \
  --release-label emr-5.10.0 \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole,KeyName=MyEC2KeyPair \
  --service-role EMR_DefaultRole \
  --security-configuration MyKerberosConfig \
  --configurations file://hue-config.json \
  --applications Name=Hadoop Name=Hive Name=Hue Name=Spark \
  --kerberos-attributes Realm=EC2.INTERNAL,KdcAdminPassword=<YourClusterKDCAdminPassword>,ADDomainJoinUser=<YourADUserLogonName>,ADDomainJoinPassword=<YourADUserPassword>,CrossRealmTrustPrincipalPassword=<MatchADTrustPwd>

Note: If you create the cluster using the CLI, you need to save the JSON configuration for Hue into a file named hue-config.json and place it on the machine where you run the CLI command.

After the cluster reaches the Waiting state, try to connect to the cluster over SSH using an Active Directory user name and password.

ssh -l <ADUserName>@emrkrb.test.com <EMR IP or DNS name>

Quickly run two commands to show that the Active Directory join is successful:

  1. id [user name] shows the mapped AD users and groups in Linux.
  2. hdfs groups [user name] shows the mapped group in Hadoop.

Both should return the current Active Directory user and group information if the setup is correct.

Now, you can test the user mapping first. Log in with the admin1 user, and run a Hadoop list directory command:

hadoop fs -ls s3://emrfs-auth-data-science-bucket-demo/

Now switch to a user from the data engineering group.

Retry the previous command to list the data science bucket. It should throw an Amazon S3 Access Denied exception.

Listing the Amazon S3 bucket that the data engineering group does have access to triggers the group mapping.

hadoop fs -ls s3://emrfs-auth-data-engineering-bucket-demo/

It successfully returns the listing results. Next, we will test Apache Hive and then Apache Spark.

To run jobs successfully, you need to create a home directory under /user/<username> in HDFS for every user, for staging data. You can configure a step that creates a home directory at cluster launch time for every user who has access to the cluster (see the sketch below). In this example, you use Hue, since Hue creates the home directory in HDFS for a user at first login. Hue also needs to be integrated with the same Active Directory, as shown in the example configuration described earlier.
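
If you prefer the launch-time step approach, the following is a sketch of a step file (the user list is the example users from this post; command-runner.jar is the standard EMR command runner). You might run it with a command like aws emr add-steps --cluster-id <cluster-id> --steps file://create-home-dirs.json:

[
  {
    "Type": "CUSTOM_JAR",
    "Name": "CreateUserHomeDirs",
    "ActionOnFailure": "CONTINUE",
    "Jar": "command-runner.jar",
    "Args": [
      "bash", "-c",
      "for u in admin1 dataeng1 datascientist1; do sudo -u hdfs hdfs dfs -mkdir -p /user/$u; sudo -u hdfs hdfs dfs -chown $u:$u /user/$u; done"
    ]
  }
]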

First, log in to Hue as a data engineer user, and open a Hive Notebook in Hue. Then run a query to create a new table pointing to the data engineer bucket, s3://emrfs-auth-data-engineering-bucket-demo/table1_data_eng/.

You can see that the table was created successfully. Now try to create another table pointing to the data science group’s bucket, where the data engineer group doesn’t have access.

It failed and threw an Amazon S3 Access Denied error.

Now insert one line of data into the successfully created table.

Next, log out, switch to a data science group user, and create another table, test2_datasci_tb.

The creation is successful.

The last task is to test Spark (it requires the user directory, but Hue created one in the previous step).

Now let’s come back to the command line and run some Spark commands.

Log in to the master node as the datascientist1 user.

Start the SparkSQL interactive shell by typing spark-sql, and run the show tables command. It should list the tables that you created using Hive.

As a data science group user, try a SELECT on both tables. You will find that you can only select from the table defined in a location that your group has access to.

Conclusion

EMRFS authorization for Amazon S3 enables you to have multiple roles on the same cluster, giving you the flexibility to configure a shared cluster for different teams and achieve better efficiency. The Active Directory integration and group mapping make it much easier for you to manage your users and groups, and provide better auditability in a multi-tenant environment.


Additional Reading

If you found this post useful, be sure to check out Use Kerberos Authentication to Integrate Amazon EMR with Microsoft Active Directory and Launching and Running an Amazon EMR Cluster inside a VPC.


About the Authors

Songzhi Liu is a Big Data Consultant with AWS Professional Services. He works closely with AWS customers to provide them with big data and machine learning solutions and best practices on the Amazon cloud.

EU Compliance Update: AWS’s 2017 C5 Assessment

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/eu-compliance-update-awss-2017-c5-assessment/

C5 logo

AWS has completed its 2017 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI)—Germany’s national cybersecurity authority—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in German member states can use the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud.

Continuing our commitment to Germany and the AWS European Regions, AWS has added 16 services to this year’s scope.

The English version of the C5 report is available through AWS Artifact. The German version of the report will be available through AWS Artifact in the coming weeks.

– Oliver

Wanted: Sales Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-sales-engineer/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze deeply into their infrastructure, so it’s time to hire our first Sales Engineer!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/mo. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team-oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Backblaze B2 cloud storage is a building block for almost any computing service that requires storage. Customers need our help integrating B2 into everything from iOS apps to Docker containers. Some customers integrate directly with the API using the programming language of their choice; others want to solve a specific problem using ready-made software that is already integrated with B2.

At the same time, our computer backup product is deepening its integration into enterprise IT systems. We are commonly asked how to set Windows policies, integrate with Active Directory, and install the client via remote management tools.

We are looking for a sales engineer who can help our customers navigate the integration of Backblaze into their technical environments.

Are you 1/2” deep into many different technologies, and unafraid to dive deeper?

Can you confidently talk with customers about their technology, even if you have to look up all the acronyms right after the call?

Are you excited to set up complicated software in a lab and write knowledge base articles about your work?

Then Backblaze is the place for you!

Enough about Backblaze already, what’s in it for me?
In this role, you will be given the opportunity to learn about the technologies that drive innovation today; diverse technologies that customers are using day in and out. And more importantly, you’ll learn how to learn new technologies.

Just as an example, in the past 12 months, we’ve had the opportunity to learn and become experts in these diverse technologies:

  • How to set up VM servers for lab environments, both on-prem and using cloud services.
  • Create an automatically “resetting” demo environment for the sales team.
  • Set up Microsoft Domain Controllers with Active Directory and AD Federation Services.
  • Learn the basics of OAUTH and web single sign-on (SSO).
  • Archive video workflows from camera to media asset management systems.
  • How to upload/download files from JavaScript by enabling CORS.
  • How to install and monitor online backup installations using RMM tools, like JAMF.
  • Tape (LTO) systems. (Yes – people still use tape for storage!)

How can I know if I’ll succeed in this role?

You have:

  • Confidence. Be able to ask customers questions about their environments and convey to them your technical acumen.
  • Curiosity. Always want to learn about customers’ situations, how they got there and what problems they are trying to solve.
  • Organization. You’ll work with customers, integration partners, and Backblaze team members on projects of various lengths. You can context switch and either have a great memory or keep copious notes. Your checklists have their own checklists.

You are versed in:

  • The fundamentals of Windows, Linux and Mac OS X operating systems. You shouldn’t be afraid to use a command line.
  • Building, installing, integrating and configuring applications on any operating system.
  • Debugging failures – reading logs, monitoring usage, and effective Google searching to fix problems excites you.
  • The basics of TCP/IP networking and the HTTP protocol.
  • Novice development skills in any programming/scripting language. Have basic understanding of data structures and program flow.
Your background contains:

  • Bachelor’s degree in computer science or the equivalent.
  • 2+ years of experience as a pre- or post-sales engineer.

The right extra credit:

There are literally hundreds of previous experiences you could have had that would make you perfect for this job. Some experiences that we know would be helpful to us are listed below, but make sure you tell us your stories!

  • Experience using or programming against Amazon S3.
  • Experience with large on-prem storage – NAS, SAN, Object. And backing up data on such storage with tools like Veeam, Veritas and others.
  • Experience with photo or video media. Media archiving is a key market for Backblaze B2.
  • Program Arduinos to automatically feed your dog.
  • Experience programming against web or REST APIs. (Point us towards your projects, if they are open source and available to link to.)
  • Experience with sales tools like Salesforce.
  • 3D print door stops.
  • Experience with Windows Servers, Active Directory, Group policies and the like.
What’s it like working with the Sales team?

The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we.” We are truly a team.

We are honest with each other and our customers, and we communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company, and we care deeply about their success.

If this all sounds like you:

  1. Send an email to [email protected] with the position in the subject line.
  2. Tell us a bit about your Sales Engineering experience.
  3. Include your resume.

The post Wanted: Sales Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How to Patch, Inspect, and Protect Microsoft Windows Workloads on AWS—Part 1

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-inspect-and-protect-microsoft-windows-workloads-on-aws-part-1/

Most malware tries to compromise your systems by using a known vulnerability that the maker of the operating system has already patched. To help prevent malware from affecting your systems, two security best practices are to apply all operating system patches to your systems and actively monitor your systems for missing patches. In case you do need to recover from a malware attack, you should make regular backups of your data.

In today’s blog post (Part 1 of a two-part post), I show how to keep your Amazon EC2 instances that run Microsoft Windows up to date with the latest security patches by using Amazon EC2 Systems Manager. Tomorrow in Part 2, I show how to take regular snapshots of your data by using Amazon EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

What you should know first

To follow along with the solution in this post, you need one or more EC2 instances. You may use existing instances or create new instances. For this blog post, I assume this is an EC2 for Microsoft Windows Server 2012 R2 instance installed from the Amazon Machine Images (AMIs). If you are not familiar with how to launch an EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. A private subnet is not directly accessible via the internet, and access to it requires either a VPN connection to your on-premises network or a jump host in a public subnet (a subnet with access to the internet). You must make sure that the EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager and Amazon Inspector. The following diagram shows how you should structure your Amazon Virtual Private Cloud (VPC). You should also be familiar with Restoring an Amazon EBS Volume from a Snapshot and Attaching an Amazon EBS Volume to an Instance.

Later on, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the AWS Identity and Access Management (IAM) user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user to assign tasks that pass their own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. This safeguard ensures that the user cannot use the creation of tasks to elevate their IAM privileges because their own IAM privileges limit which tasks they can run against an EC2 instance. You should also authorize your IAM user to use EC2, Amazon Inspector, Amazon CloudWatch, and Systems Manager. You can achieve this by attaching the following AWS managed policies to the IAM user you are using for this example: AmazonInspectorFullAccess, AmazonEC2FullAccess, and AmazonSSMFullAccess.
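
For reference, a minimal sketch of a policy statement granting iam:PassRole (the account ID is a placeholder, and MaintenanceWindowRole is the role name used later in this post; in practice, scope the resource to the role you actually pass):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<account-id>:role/MaintenanceWindowRole"
    }
  ]
}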

Architectural overview

The following diagram illustrates the components of this solution’s architecture.

Diagram showing the components of this solution's architecture

For this blog post, Microsoft Windows EC2 is Amazon EC2 for Microsoft Windows Server 2012 R2 instances with attached Amazon Elastic Block Store (Amazon EBS) volumes, which are running in your VPC. These instances may be standalone Windows instances running your Windows workloads, or you may have joined them to an Active Directory domain controller. For instances joined to a domain, you can be using Active Directory running on an EC2 for Windows instance, or you can use AWS Directory Service for Microsoft Active Directory.

Amazon EC2 Systems Manager is a scalable tool for remote management of your EC2 instances. You will use the Systems Manager Run Command to install the Amazon Inspector agent. The agent enables EC2 instances to communicate with the Amazon Inspector service and run assessments, which I explain in detail later in this blog post. You also will create a Systems Manager association to keep your EC2 instances up to date with the latest security patches.

You can use the EBS Snapshot Scheduler to schedule automated snapshots at regular intervals. You will use it to set up regular snapshots of your Amazon EBS volumes. EBS Snapshot Scheduler is a prebuilt solution by AWS that you will deploy in your AWS account. With Amazon EBS snapshots, you pay only for the actual data you store. Snapshots save only the data that has changed since the previous snapshot, which minimizes your cost.

You will use Amazon Inspector to run security assessments on your EC2 for Windows Server instance. In this post, I show how to assess if your EC2 for Windows Server instance is vulnerable to any of the more than 50,000 CVEs registered with Amazon Inspector.

In today’s and tomorrow’s posts, I show you how to:

  1. Launch an EC2 instance with an IAM role, Amazon EBS volume, and tags that Systems Manager and Amazon Inspector will use.
  2. Configure Systems Manager to install the Amazon Inspector agent and patch your EC2 instances.
  3. Take EBS snapshots by using EBS Snapshot Scheduler to automate snapshots based on instance tags.
  4. Use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any common vulnerabilities and exposures (CVEs).

Step 1: Launch an EC2 instance

In this section, I show you how to launch your EC2 instances so that you can use Systems Manager with the instances and use instance tags with EBS Snapshot Scheduler to automate snapshots. This requires three things:

  • Create an IAM role for Systems Manager before launching your EC2 instance.
  • Launch your EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  • Add tags to instances so that you can automate policies for which instances you take snapshots of and when.

Create an IAM role for Systems Manager

Before launching your EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the EC2 instance you will launch. AWS already provides a preconfigured policy that you can use for your new role, and it is called AmazonEC2RoleforSSM.

  1. Sign in to the IAM console and choose Roles in the navigation pane. Choose Create new role.
     Screenshot of choosing "Create role"
  2. In the role-creation workflow, choose AWS service > EC2 > EC2 to create a role for an EC2 instance.
     Screenshot of creating a role for an EC2 instance
  3. Choose the AmazonEC2RoleforSSM policy to attach it to the new role you are creating.
     Screenshot of attaching the AmazonEC2RoleforSSM policy to the new role you are creating
  4. Give the role a meaningful name (I chose EC2SSM) and description, and choose Create role.
     Screenshot of giving the role a name and description

Launch your EC2 instance

To follow along, you need an EC2 instance that is running Microsoft Windows Server 2012 R2 and that has an Amazon EBS volume attached. You can use any existing instance you may have or create a new instance.

When launching your new EC2 instance, be sure that:

  • The operating system is Microsoft Windows Server 2012 R2.
  • You attach at least one Amazon EBS volume to the EC2 instance.
  • You attach the newly created IAM role (EC2SSM).
  • The EC2 instance can connect to the internet through a network address translation (NAT) gateway or a NAT instance.
  • You create the tags shown in the following screenshot (you will use them later).

If you are using an already launched EC2 instance, you can attach the newly created role as described in Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console.

Add tags

The final step of configuring your EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this blog post and to configure Amazon Inspector in Part 2. For this example, I add a tag key, Patch Group, and set the value to Windows Servers. I could have other groups of EC2 instances that I treat differently by having the same tag key but a different tag value. For example, I might have a collection of other servers with the Patch Group tag key with a value of IAS Servers.

Screenshot of adding tags

Note: You must wait a few minutes until the EC2 instance becomes available before you can proceed to the next section.

At this point, you now have at least one EC2 instance you can use to configure Systems Manager, use EBS Snapshot Scheduler, and use Amazon Inspector.

Note: If you have a large number of EC2 instances to tag, you may want to use the EC2 CreateTags API rather than manually apply tags to each instance.
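
As a sketch of that approach, the AWS CLI accepts the tag list as JSON; saved as, for example, patch-group-tags.json, the file below could be applied with aws ec2 create-tags --resources <instance-id> --tags file://patch-group-tags.json (the instance ID is a placeholder):

[
  { "Key": "Patch Group", "Value": "Windows Servers" }
]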

Step 2: Configure Systems Manager

In this section, I show you how to use Systems Manager to apply operating system patches to your EC2 instances, and how to manage patch compliance.

To start, I will provide some background information about Systems Manager. Then, I will cover how to:

  • Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  • Associate a Systems Manager patch baseline with your instance to define which patches Systems Manager should apply.
  • Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  • Monitor patch compliance to verify the patch state of your instances.

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is Amazon EC2 Systems Manager?

Patch management is an important measure to prevent malware from infecting your systems. Most malware attacks look for vulnerabilities that are publicly known and in most cases are already patched by the maker of the operating system. These publicly known vulnerabilities are well documented and therefore easier for an attacker to exploit than having to discover a new vulnerability.

Patches for these new vulnerabilities are available through Systems Manager within hours after Microsoft releases them. There are two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your EC2 instance. Second, you must install the Systems Manager agent on your EC2 instance. If you have used a recent Microsoft Windows Server 2012 R2 AMI published by AWS, Amazon has already installed the Systems Manager agent on your EC2 instance. You can confirm this by logging in to an EC2 instance and looking for Amazon SSM Agent under Programs and Features in Windows. To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see the documentation about installing the Systems Manager agent. If you forgot to attach the newly created role when launching your EC2 instance or if you want to attach the role to already running EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

To make sure your EC2 instance receives operating system patches from Systems Manager, you will use the default patch baseline provided and maintained by AWS, and you will define a maintenance window so that you control when your EC2 instances should receive patches. For the maintenance window to be able to run any tasks, you also must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: Systems Manager will use this role instead of EC2. Earlier we created the EC2SSM role with the AmazonEC2RoleforSSM policy, which allowed the Systems Manager agent on our instance to communicate with the Systems Manager service. Here we need a new role with the policy AmazonSSMMaintenanceWindowRole to make sure the Systems Manager service is able to execute commands on our instance.

Create the Systems Manager IAM role

To create the new IAM role for Systems Manager, follow the same procedure as in the previous section, but in Step 3, choose the AmazonSSMMaintenanceWindowRole policy instead of the previously selected AmazonEC2RoleforSSM policy.

Screenshot of creating the new IAM role for Systems Manager

Finish the wizard and give your new role a recognizable name. For example, I named my role MaintenanceWindowRole.

Screenshot of finishing the wizard and giving your new role a recognizable name

By default, only EC2 instances can assume this new role. You must update the trust policy to enable Systems Manager to assume this role.

To update the trust policy associated with this new role:

  1. Navigate to the IAM console and choose Roles in the navigation pane.
  2. Choose MaintenanceWindowRole and choose the Trust relationships tab. Then choose Edit trust relationship.
  3. Update the policy document by copying the following policy and pasting it in the Policy Document box. As you can see, I have added the ssm.amazonaws.com service to the list of allowed Principals that can assume this role. Choose Update Trust Policy.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":[
               "ec2.amazonaws.com",
               "ssm.amazonaws.com"
            ]
         },
         "Action":"sts:AssumeRole"
      }
   ]
}

Associate a Systems Manager patch baseline with your instance

Next, you are going to associate a Systems Manager patch baseline with your EC2 instance. A patch baseline defines which patches Systems Manager should apply. You will use the default patch baseline that AWS manages and maintains. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your EC2 instance.

Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Managed Instances. Your new EC2 instance should be available there. If your instance is missing from the list, verify the following:

  1. Go to the EC2 console and verify your instance is running.
  2. Select your instance and confirm you attached the Systems Manager IAM role, EC2SSM.
  3. Make sure that you deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram at the start of this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. Check the Systems Manager Agent logs for any errors.

Now that you have confirmed that Systems Manager can manage your EC2 instance, it is time to associate the AWS maintained patch baseline with your EC2 instance:

  1. Choose Patch Baselines under Systems Manager Services in the navigation pane of the EC2 console.
  2. Choose the default patch baseline as highlighted in the following screenshot, and choose Modify Patch Groups in the Actions drop-down.
     Screenshot of choosing Modify Patch Groups in the Actions drop-down
  3. In the Patch group box, enter the same value you entered under the Patch Group tag of your EC2 instance in “Step 1: Configure your EC2 instance.” In this example, the value I enter is Windows Servers. Choose the check mark icon next to the patch group and choose Close.
     Screenshot of modifying the patch group

Define a maintenance window

Now that you have successfully set up a role and have associated a patch baseline with your EC2 instance, you will define a maintenance window so that you can control when your EC2 instances should receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your EC2 instances do not all reboot at the same time. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs.

To define a maintenance window:

  1. Navigate to the EC2 console, scroll down to Systems Manager Shared Resources in the navigation pane, and choose Maintenance Windows. Choose Create a Maintenance Window.
     Screenshot of starting to create a maintenance window in the Systems Manager console
  2. Select the Cron schedule builder to define the schedule for the maintenance window. In the example in the following screenshot, the maintenance window will start every Saturday at 10:00 P.M. UTC.
  3. To specify when your maintenance window will end, specify the duration. In this example, the four-hour maintenance window will end on the following Sunday morning at 2:00 A.M. UTC (in other words, four hours after it started).
  4. Systems Manager completes all tasks that are in process, even if the maintenance window ends. In my example, I am choosing to prevent new tasks from starting within one hour of the end of my maintenance window because I estimated my patch operations might take longer than one hour to complete. Confirm the creation of the maintenance window by choosing Create maintenance window.
     Screenshot of completing all boxes in the maintenance window creation process
  5. After creating the maintenance window, you must register the EC2 instance to the maintenance window so that Systems Manager knows which EC2 instance it should patch in this maintenance window. To do so, choose Register new targets on the Targets tab of your newly created maintenance window. You can register your targets by using the same Patch Group tag you used before to associate the EC2 instance with the AWS-provided patch baseline.
     Screenshot of registering new targets
  6. Assign a task to the maintenance window that will install the operating system patches on your EC2 instance:
     1. Open Maintenance Windows in the EC2 console, select your previously created maintenance window, choose the Tasks tab, and choose Register run command task from the Register new task drop-down.
     2. Choose the AWS-RunPatchBaseline document from the list of available documents.
     3. For Parameters:
        1. For Role, choose the role you created previously (called MaintenanceWindowRole).
        2. For Execute on, specify how many EC2 instances Systems Manager should patch at the same time. If you have a large number of EC2 instances and want to patch all EC2 instances within the defined time, make sure this number is not too low. For example, if you have 1,000 EC2 instances, a maintenance window of 4 hours, and 2 hours’ time for patching, make this number at least 500.
        3. For Stop after, specify after how many errors Systems Manager should stop.
        4. For Operation, choose Install to make sure to install the patches.
           Screenshot of stipulating maintenance window parameters

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. Note that if you don’t want to wait, you can adjust the schedule to run sooner by choosing Edit maintenance window on the Maintenance Windows page of Systems Manager. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed on the Maintenance Windows page of Systems Manager and select your maintenance window.

Screenshot of the maintenance window successfully created

Monitor patch compliance

You also can see the overall patch compliance of all EC2 instances that are part of defined patch groups by choosing Patch Compliance under Systems Manager Services in the navigation pane of the EC2 console. You can filter by Patch Group to see how many EC2 instances within the selected patch group are up to date, how many EC2 instances are missing updates, and how many EC2 instances are in an error state.

Screenshot of monitoring patch compliance

In this section, you have set everything up for patch management on your instance. Now you know how to patch your EC2 instance in a controlled manner and how to check if your EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all EC2 instances you manage.

Summary

In Part 1 of this blog post, I have shown how to configure EC2 instances for use with Systems Manager, EBS Snapshot Scheduler, and Amazon Inspector. I also have shown how to use Systems Manager to keep your Microsoft Windows–based EC2 instances up to date. In Part 2 of this blog post tomorrow, I will show how to take regular snapshots of your data by using EBS Snapshot Scheduler and how to use Amazon Inspector to check if your EC2 instances running Microsoft Windows contain any CVEs.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the EC2 forum or the Amazon Inspector forum, or contact AWS Support.

– Koen

How AWS Managed Microsoft AD Helps to Simplify the Deployment and Improve the Security of Active Directory–Integrated .NET Applications

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-aws-managed-microsoft-ad-helps-to-simplify-the-deployment-and-improve-the-security-of-active-directory-integrated-net-applications/

Companies using .NET applications to access sensitive user information, such as employee salary, Social Security Number, and credit card information, need an easy and secure way to manage access for users and applications.

For example, let’s say that your company has a .NET payroll application. You want your Human Resources (HR) team to manage and update the payroll data for all the employees in your company. You also want your employees to be able to see their own payroll information in the application. To meet these requirements in a user-friendly and secure way, you want to manage access to the .NET application by using your existing Microsoft Active Directory identities. This enables you to provide users with single sign-on (SSO) access to the .NET application and to manage permissions using Active Directory groups. You also want the .NET application to authenticate itself to access the database, and to limit access to the data in the database based on the identity of the application user.

Microsoft Active Directory supports these requirements through group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD). AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables you to manage gMSAs and KCD through your administrative account, helping you to migrate and develop .NET applications that need these native Active Directory features.

In this blog post, I give an overview of how to use AWS Managed Microsoft AD to manage gMSAs and KCD and demonstrate how you can configure a gMSA and KCD in six steps for a .NET application:

  1. Create your AWS Managed Microsoft AD.
  2. Create your Amazon RDS for SQL Server database.
  3. Create a gMSA for your .NET application.
  4. Deploy your .NET application.
  5. Configure your .NET application to use the gMSA.
  6. Configure KCD for your .NET application.

Solution overview

The following diagram shows the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD. The diagram also illustrates authentication and access and is numbered to show the six key steps required to use a gMSA and KCD. To deploy this solution, the AWS Managed Microsoft AD directory must be in the same Amazon Virtual Private Cloud (VPC) as RDS for SQL Server. For this example, my company name is Example Corp., and my directory uses the domain name, example.com.

Diagram showing the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD

Deploy the solution

The following six steps (numbered to correlate with the preceding diagram) walk you through configuring and using a gMSA and KCD.

1. Create your AWS Managed Microsoft AD directory

Using the Directory Service console, create your AWS Managed Microsoft AD directory in your Amazon VPC. In my example, my domain name is example.com.

Image of creating an AWS Managed Microsoft AD directory in an Amazon VPC

2. Create your Amazon RDS for SQL Server database

Using the RDS console, create your Amazon RDS for SQL Server database instance in the same Amazon VPC where your directory is running, and enable Windows Authentication. To enable Windows Authentication, select your directory in the Microsoft SQL Server Windows Authentication section in the Configure Advanced Settings step of the database creation workflow (see the following screenshot).

In my example, I create my Amazon RDS for SQL Server db-example database, and enable Windows Authentication to allow my db-example database to authenticate against my example.com directory.

Screenshot of configuring advanced settings

3. Create a gMSA for your .NET application

Now that you have deployed your directory, database, and application, you can create a gMSA for your .NET application.

To perform the next steps, you must install the Active Directory administration tools on a Windows server that is joined to your AWS Managed Microsoft AD directory domain. If you do not have a Windows server joined to your directory domain, you can deploy a new Amazon EC2 for Microsoft Windows Server instance and join it to your directory domain.

To create a gMSA for your .NET application:

  1. Log on to the instance on which you installed the Active Directory administration tools by using a user that is a member of the Admins security group or the Managed Service Accounts Admins security group in your organizational unit (OU). For my example, I use the Admin user in the example OU.

     Screenshot of logging on to the instance on which you installed the Active Directory administration tools

  2. Identify which .NET application servers (hosts) will run your .NET application. Create a new security group in your OU and add your .NET application servers as members of this new group. This allows a group of application servers to use a single gMSA, instead of creating one gMSA for each server. In my example, I create a group, App_server_grp, in my example OU. I also add Appserver1, which is my .NET application server computer name, as a member of this new group.

     Screenshot of creating a new security group

  3. Create a gMSA in your directory by running Windows PowerShell from the Start menu. The basic syntax to create the gMSA at the Windows PowerShell command prompt follows.

     PS C:\Users\admin> New-ADServiceAccount -name [gMSAname] -DNSHostName [domainname] -PrincipalsAllowedToRetrieveManagedPassword [AppServersSecurityGroup] -TrustedForDelegation $true <Enter>

     In my example, the gMSAname is gMSAexample, the DNSHostName is example.com, and the PrincipalsAllowedToRetrieveManagedPassword is the recently created security group, App_server_grp.

     PS C:\Users\admin> New-ADServiceAccount -name gMSAexample -DNSHostName example.com -PrincipalsAllowedToRetrieveManagedPassword App_server_grp -TrustedForDelegation $true <Enter>

To confirm you created the gMSA, you can run the Get-ADServiceAccount command from the PowerShell command prompt.

PS C:\Users\admin> Get-ADServiceAccount gMSAexample <Enter>

DistinguishedName : CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com
Enabled           : True
Name              : gMSAexample
ObjectClass       : msDS-GroupManagedServiceAccount
ObjectGUID        : 24d8b68d-36d5-4dc3-b0a9-edbbb5dc8a5b
SamAccountName    : gMSAexample$
SID               : S-1-5-21-2100421304-991410377-951759617-1603
UserPrincipalName :

You also can confirm you created the gMSA by opening the Active Directory Users and Computers utility located in your Administrative Tools folder, expanding the domain (example.com in my case), and expanding the Managed Service Accounts folder.

Screenshot of confirming the creation of the gMSA

4. Deploy your .NET application

Deploy your .NET application on IIS on Amazon EC2 for Windows Server instances. For this step, I assume you are the application’s expert and already know how to deploy it. Make sure that all of your instances are joined to your directory.

5. Configure your .NET application to use the gMSA

You can configure your .NET application to use the gMSA to enforce a strong password security policy and ensure password rotation of your service account. This helps to improve the security and simplify the management of your .NET application. Configure your .NET application in two steps:

  1. Grant the gMSA the required permissions to run your .NET application in the respective application folders. This is a critical step because when you change the application pool identity account to use the gMSA, downtime can occur if the gMSA does not have the application’s required permissions. Therefore, make sure you first test the configurations in your development and test environments.
  2. Configure your application pool identity on IIS to use the gMSA as the service account. When you configure a gMSA as the service account, you include the $ at the end of the gMSA name. You do not need to provide a password because AWS Managed Microsoft AD automatically creates and rotates the password. In my example, my service account is gMSAexample$, as shown in the following screenshot.

Screenshot of configuring application pool identity

You have completed all the steps to use gMSA to create and rotate your .NET application service account password! Now, you will configure KCD for your .NET application.

    6. Configure KCD for your .NET application

    You now are ready to allow your .NET application to have access to other services by using the user identity’s permissions instead of the application service account’s permissions. Note that KCD and gMSA are independent features, which means you do not have to create a gMSA to use KCD. For this example, I am using both features to show how you can use them together. To configure a regular service account such as a user or local built-in account, see the Kerberos constrained delegation with ASP.NET blog post on MSDN.

    In my example, my goal is to delegate to the gMSAexample account the ability to enforce the user’s permissions to my db-example SQL Server database, instead of the gMSAexample account’s permissions. For this, I have to update the msDS-AllowedToDelegateTo gMSA attribute. The value for this attribute is the service principal name (SPN) of the service instance that you are targeting, which in this case is the db-example Amazon RDS for SQL Server database.

    The SPN format for the msDS-AllowedToDelegateTo attribute is a combination of the service class, the Kerberos authentication endpoint, and the port number. The Amazon RDS for SQL Server Kerberos authentication endpoint format is [database_name].[domain_name]. The value for my msDS-AllowedToDelegateTo attribute is MSSQLSvc/db-example.example.com:1433, where MSSQLSvc and 1433 are the SQL Server Database service class and port number standards, respectively.

    Follow these steps to perform the msDS-AllowedToDelegateTo gMSA attribute configuration:

    1. Log on to your Active Directory management instance with a user identity that is a member of the Kerberos Delegation Admins security group. In this case, I will use admin.
    2. Open the Active Directory Users and Computers utility located in your Administrative Tools folder, choose View, and then choose Advanced Features.
    3. Expand your domain name (example.com in this example), and then choose the Managed Service Accounts folder. Right-click the gMSA account for the application pool you want to enable for Kerberos delegation, choose Properties, and choose the Attribute Editor tab.
    4. Search for the msDS-AllowedToDelegateTo attribute on the Attribute Editor tab and choose Edit.
    5. Enter the MSSQLSvc/db-example.example.com:1433 value and choose Add.
      Screenshot of entering the value of the multi-valued string
    6. Choose OK and Apply, and your KCD configuration is complete.
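
    If you prefer to script this change rather than use the GUI, the following is a minimal sketch using the third-party Python ldap3 library. The domain controller address, bind credentials, and distinguished name are assumptions based on the example.com domain in this post; adjust them for your environment.

      # A hedged sketch: add the SQL Server SPN to the gMSA's
      # msDS-AllowedToDelegateTo attribute with the ldap3 library
      # (pip install ldap3); all connection values are assumptions.
      from ldap3 import Server, Connection, MODIFY_ADD, NTLM

      server = Server('example.com')  # any domain controller for example.com
      conn = Connection(server, user='EXAMPLE\\admin', password='YourPassword',
                        authentication=NTLM, auto_bind=True)

      gmsa_dn = 'CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com'
      spn = 'MSSQLSvc/db-example.example.com:1433'

      conn.modify(gmsa_dn, {'msDS-AllowedToDelegateTo': [(MODIFY_ADD, [spn])]})
      print(conn.result['description'])  # expect 'success'
      conn.unbind()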

    Congratulations! At this point, your application is using a gMSA rather than an embedded static user identity and password, and the application is able to access SQL Server using the identity of the application user. The gMSA eliminates the need for you to rotate the application’s password manually, and it allows you to better scope permissions for the application. When you use KCD, you can enforce access to your database consistently based on user identities at the database level, which prevents improper access that might otherwise occur because of an application error.

    Summary

    In this blog post, I demonstrated how to simplify the deployment and improve the security of your .NET application by using a group Managed Service Account and Kerberos constrained delegation with your AWS Managed Microsoft AD directory. I also outlined the main steps to get your .NET environment up and running on a managed Active Directory and SQL Server infrastructure. This approach will make it easier for you to build new .NET applications in the AWS Cloud or migrate existing ones in a more secure way.

    For additional information about using group Managed Service Accounts and Kerberos constrained delegation with your AWS Managed Microsoft AD directory, see the AWS Directory Service documentation.

    To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions about this post or its solution, start a new thread on the Directory Service forum.

    – Peter

    Updated AWS SOC Reports Are Now Available with 19 Additional Services in Scope

    Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/updated-aws-soc-reports-are-now-available-with-19-additional-services-in-scope/

    AICPA SOC logo

    Newly updated reports are available for AWS System and Organization Control Report 1 (SOC 1), formerly called AWS Service Organization Control Report 1, and AWS SOC 2: Security, Availability, & Confidentiality Report. You can download both reports for free and on demand in the AWS Management Console through AWS Artifact. The updated AWS SOC 3: Security, Availability, & Confidentiality Report also was just released. All three reports cover April 1, 2017, through September 30, 2017.

    With the addition of the following 19 services, AWS now supports 51 SOC-compliant AWS services and is committed to increasing the number:

    • Amazon API Gateway
    • Amazon Cloud Directory
    • Amazon CloudFront
    • Amazon Cognito
    • Amazon Connect
    • AWS Directory Service for Microsoft Active Directory
    • Amazon EC2 Container Registry
    • Amazon EC2 Container Service
    • Amazon EC2 Systems Manager
    • Amazon Inspector
    • AWS IoT Platform
    • Amazon Kinesis Streams
    • AWS Lambda
    • AWS Lambda@Edge
    • AWS Managed Services
    • Amazon S3 Transfer Acceleration
    • AWS Shield
    • AWS Step Functions
    • AWS WAF

    With this release, we also are introducing a separate spreadsheet, eliminating the need to extract the information from multiple PDFs.

    If you are not yet an AWS customer, contact AWS Compliance to access the SOC Reports.

    – Chad

    Now Better Together! Register for and Attend this November 15 Tech Talk: “How to Integrate AWS Directory Service with Office 365”

    Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/now-better-together-register-for-and-attend-this-november-15-tech-talk-how-to-integrate-aws-directory-service-with-office-365/

    AWS Online Tech Talks banner

    As part of the AWS Online Tech Talks series, AWS will present How to Integrate AWS Directory Service with Office 365 on Wednesday, November 15. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.

    If you want to support Active Directory–aware workloads in AWS and Office 365 simultaneously using a managed Active Directory in the cloud, you need a nonintuitive integration to synchronize identities between deployments. AWS has recently introduced the ability for you to authenticate your Office 365 users using AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) by using a custom configuration of Active Directory Federation Services (AD FS). In this webinar, AWS Directory Service Product Manager Ron Cully shows how to configure your AWS Managed Microsoft AD environment to synchronize with Office 365. He will provide detailed configuration settings, architectural considerations, and deployment steps for a highly available, secure, and easy-to-manage solution in the AWS Cloud.

    You also will learn how to:

    • Deploy AWS Managed Microsoft AD.
    • Deploy Microsoft Azure AD Connect and AD FS with AWS Managed Microsoft AD.
    • Authenticate user access to Office 365 by using AWS Managed Microsoft AD.

    This tech talk is free. Register today.

    – Craig

    Introducing AWS Directory Service for Microsoft Active Directory (Standard Edition)

    Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/introducing-aws-directory-service-for-microsoft-active-directory-standard-edition/

    Today, AWS introduced AWS Directory Service for Microsoft Active Directory (Standard Edition), also known as AWS Microsoft AD (Standard Edition), which is managed Microsoft Active Directory (AD) that is performance optimized for small and midsize businesses. AWS Microsoft AD (Standard Edition) offers you a highly available and cost-effective primary directory in the AWS Cloud that you can use to manage users, groups, and computers. It enables you to join Amazon EC2 instances to your domain easily and supports many AWS and third-party applications and services. It also can support most of the common use cases of small and midsize businesses. When you use AWS Microsoft AD (Standard Edition) as your primary directory, you can manage access and provide single sign-on (SSO) to cloud applications such as Microsoft Office 365. If you have an existing Microsoft AD directory, you can also use AWS Microsoft AD (Standard Edition) as a resource forest that contains primarily computers and groups, allowing you to migrate your AD-aware applications to the AWS Cloud while using existing on-premises AD credentials.

    In this blog post, I help you get started by answering three main questions about AWS Microsoft AD (Standard Edition):

    1. What do I get?
    2. How can I use it?
    3. What are the key features?

    After answering these questions, I show how you can get started with creating and using your own AWS Microsoft AD (Standard Edition) directory.

    1. What do I get?

    When you create an AWS Microsoft AD (Standard Edition) directory, AWS deploys two Microsoft AD domain controllers powered by Microsoft Windows Server 2012 R2 in your Amazon Virtual Private Cloud (VPC). To help deliver high availability, the domain controllers run in different Availability Zones in the AWS Region of your choice.

    As a managed service, AWS Microsoft AD (Standard Edition) configures directory replication, automates daily snapshots, and handles all patching and software updates. In addition, AWS Microsoft AD (Standard Edition) monitors and automatically recovers domain controllers in the event of a failure.

    AWS Microsoft AD (Standard Edition) has been optimized as a primary directory for small and midsize businesses with the capacity to support approximately 5,000 employees. With 1 GB of directory object storage, AWS Microsoft AD (Standard Edition) has the capacity to store 30,000 or more total directory objects (users, groups, and computers). AWS Microsoft AD (Standard Edition) also gives you the option to add domain controllers to meet the specific performance demands of your applications. You also can use AWS Microsoft AD (Standard Edition) as a resource forest with a trust relationship to your on-premises directory.

    2. How can I use it?

    With AWS Microsoft AD (Standard Edition), you can share a single directory for multiple use cases. For example, you can share a directory to authenticate and authorize access for .NET applications, Amazon RDS for SQL Server with Windows Authentication enabled, and Amazon Chime for messaging and video conferencing.

    The following diagram shows some of the use cases for your AWS Microsoft AD (Standard Edition) directory, including the ability to grant your users access to external cloud applications and allow your on-premises AD users to manage and have access to resources in the AWS Cloud. Click the diagram to see a larger version.

    Diagram showing some ways you can use AWS Microsoft AD (Standard Edition)--click the diagram to see a larger version

    Use case 1: Sign in to AWS applications and services with AD credentials

    You can enable multiple AWS applications and services such as the AWS Management Console, Amazon WorkSpaces, and Amazon RDS for SQL Server to use your AWS Microsoft AD (Standard Edition) directory. When you enable an AWS application or service in your directory, your users can access the application or service with their AD credentials.

    For example, you can enable your users to sign in to the AWS Management Console with their AD credentials. To do this, you enable the AWS Management Console as an application in your directory, and then assign your AD users and groups to IAM roles. When your users sign in to the AWS Management Console, they assume an IAM role to manage AWS resources. This makes it easy for you to grant your users access to the AWS Management Console without needing to configure and manage a separate SAML infrastructure.

    Use case 2: Manage Amazon EC2 instances

    Using familiar AD administration tools, you can apply AD Group Policy objects (GPOs) to centrally manage your Amazon EC2 for Windows or Linux instances by joining your instances to your AWS Microsoft AD (Standard Edition) domain.

    In addition, your users can sign in to your instances with their AD credentials. This eliminates the need to use individual instance credentials or distribute private key (PEM) files. This makes it easier for you to instantly grant or revoke access to users by using AD user administration tools you already use.

    Use case 3: Provide directory services to your AD-aware workloads

    AWS Microsoft AD (Standard Edition) is an actual Microsoft AD that enables you to run traditional AD-aware workloads such as Remote Desktop Licensing Manager, Microsoft SharePoint, and Microsoft SQL Server Always On in the AWS Cloud. AWS Microsoft AD (Standard Edition) also helps you to simplify and improve the security of AD-integrated .NET applications by using group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD).

    Use case 4: SSO to Office 365 and other cloud applications

    You can use AWS Microsoft AD (Standard Edition) to provide SSO for cloud applications. You can use Azure AD Connect to synchronize your users into Azure AD, and then use Active Directory Federation Services (AD FS) so that your users can access Microsoft Office 365 and other SAML 2.0 cloud applications by using their AD credentials.

    Use case 5: Extend your on-premises AD to the AWS Cloud

    If you already have an AD infrastructure and want to use it when migrating AD-aware workloads to the AWS Cloud, AWS Microsoft AD (Standard Edition) can help. You can use AD trusts to connect AWS Microsoft AD (Standard Edition) to your existing AD. This means your users can access AD-aware and AWS applications with their on-premises AD credentials, without needing you to synchronize users, groups, or passwords.

    For example, your users can sign in to the AWS Management Console and Amazon WorkSpaces by using their existing AD user names and passwords. Also, when you use AD-aware applications such as SharePoint with AWS Microsoft AD (Standard Edition), your logged-in Windows users can access these applications without needing to enter credentials again.

    3. What are the key features?

    AWS Microsoft AD (Standard Edition) includes the features detailed in this section.

    Extend your AD schema

    With AWS Microsoft AD, you can run customized AD-integrated applications that require changes to your directory schema, which defines the structures of your directory. The schema is composed of object classes such as user objects, which contain attributes such as user names. AWS Microsoft AD lets you extend the schema by adding new AD attributes or object classes that are not present in the core AD attributes and classes.

    For example, if you have a human resources application that uses employee badge color to assign specific benefits, you can extend the schema to include a badge color attribute in the user object class of your directory. To learn more, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

    Create user-specific password policies

    With user-specific password policies, you can apply specific restrictions and account lockout policies to different types of users in your AWS Microsoft AD (Standard Edition) domain. For example, you can enforce strong passwords and frequent password change policies for administrators, and use less-restrictive policies with moderate account lockout policies for general users.

    Add domain controllers

    You can increase the performance and redundancy of your directory by adding domain controllers. This can help improve application performance by enabling directory clients to load-balance their requests across a larger number of domain controllers.

    Encrypt directory traffic

    You can use AWS Microsoft AD (Standard Edition) to encrypt Lightweight Directory Access Protocol (LDAP) communication between your applications and your directory. By enabling LDAP over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS, you encrypt your LDAP communications end to end. This helps you to protect sensitive information you keep in your directory when it is accessed over untrusted networks.

    Improve the security of signing in to AWS services by using multi-factor authentication (MFA)

    You can improve the security of signing in to AWS services, such as Amazon WorkSpaces and Amazon QuickSight, by enabling MFA in your AWS Microsoft AD (Standard Edition) directory. With MFA, your users must enter a one-time passcode (OTP) in addition to their AD user names and passwords to access AWS applications and services you enable in AWS Microsoft AD (Standard Edition).

    Get started

    To get started, use the Directory Service console to create your first directory with just a few clicks. If you have not used Directory Service before, you may be eligible for a 30-day limited free trial.

    Summary

    In this blog post, I explained what AWS Microsoft AD (Standard Edition) is and how you can use it. With a single directory, you can address many use cases for your business, making it easier to migrate and run your AD-aware workloads in the AWS Cloud, provide access to AWS applications and services, and connect to other cloud applications. To learn more about AWS Microsoft AD, see the Directory Service home page.

    If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.

    – Peter

    Federate Database User Authentication Easily with IAM and Amazon Redshift

    Post Syndicated from Thiyagarajan Arumugam original https://aws.amazon.com/blogs/big-data/federate-database-user-authentication-easily-with-iam-and-amazon-redshift/

    Managing database users through federation allows you to manage authentication and authorization procedures centrally. Amazon Redshift now supports database authentication with IAM, enabling user authentication through enterprise federation. There is no need to manage separate database users and passwords, which further eases database administration. You can now manage users outside of AWS and authenticate them for access to an Amazon Redshift data warehouse. Do this by integrating IAM authentication and a third-party SAML 2.0 identity provider (IdP), such as AD FS, PingFederate, or Okta. In addition, database users can also be created automatically at their first login based on corporate permissions.

    In this post, I demonstrate how you can extend the federation to enable single sign-on (SSO) to the Amazon Redshift data warehouse.

    SAML and Amazon Redshift

    AWS supports Security Assertion Markup Language (SAML) 2.0, which is an open standard for identity federation used by many IdPs. SAML enables federated SSO, which lets your users sign in to the AWS Management Console. Users can also make programmatic calls to AWS API actions by using assertions from a SAML-compliant IdP. For example, if you use Microsoft Active Directory for corporate directories, you may be familiar with how Active Directory and AD FS work together to enable federation. For more information, see the Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0 AWS Security Blog post.

    Amazon Redshift now provides the GetClusterCredentials API operation that allows you to generate temporary database user credentials for authentication. You can set up an IAM permissions policy that generates these credentials for connecting to Amazon Redshift. Extending the IAM authentication, you can configure federation of AWS access through a SAML 2.0–compliant IdP. An IAM role can be configured to permit federated users to call the GetClusterCredentials action and generate temporary credentials to log in to Amazon Redshift databases. You can also set up policies to restrict access to Amazon Redshift clusters, databases, database user names, and user groups.

    Amazon Redshift federation workflow

    In this post, I demonstrate how you can use a JDBC- or ODBC-based SQL client to log in to the Amazon Redshift cluster using this feature. The SQL clients used with Amazon Redshift JDBC or ODBC drivers automatically manage the process of calling the GetClusterCredentials action, retrieving the database user credentials, and establishing a connection to your Amazon Redshift database. You can also use your database application to programmatically call the GetClusterCredentials action, retrieve database user credentials, and connect to the database. I demonstrate these features using an example company to show how different database user accounts can be managed easily using federation.

    The following diagram shows how the SSO process works:

    1. JDBC/ODBC
    2. Authenticate using Corp Username/Password
    3. IdP sends SAML assertion
    4. Call STS to assume role with SAML
    5. STS Returns Temp Credentials
    6. Use Temp Credentials to get Temp cluster credentials
    7. Connect to Amazon Redshift using temp credentials
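
    To make this flow concrete, here is a minimal Python (boto3) sketch of steps 4 through 6. It assumes you have already obtained a base64-encoded SAML assertion from your IdP (step 3); the role and SAML provider ARNs are placeholders for this example.

    # A hedged sketch of steps 4-6: exchange a SAML assertion for temporary
    # AWS credentials, then request temporary Amazon Redshift credentials.
    import boto3

    saml_assertion = '...'  # base64-encoded assertion returned by your IdP (step 3)

    sts = boto3.client('sts', region_name='us-west-2')
    resp = sts.assume_role_with_saml(
        RoleArn='arn:aws:iam::123456789012:role/Sales_DW_Role',       # placeholder
        PrincipalArn='arn:aws:iam::123456789012:saml-provider/ADFS',  # placeholder
        SAMLAssertion=saml_assertion)
    creds = resp['Credentials']  # steps 4 and 5

    redshift = boto3.client('redshift', region_name='us-west-2',
                            aws_access_key_id=creds['AccessKeyId'],
                            aws_secret_access_key=creds['SecretAccessKey'],
                            aws_session_token=creds['SessionToken'])

    # Step 6: temporary database credentials for the cluster
    db_creds = redshift.get_cluster_credentials(
        DbUser='nancy', DbName='analytics', ClusterIdentifier='examplecorp-dw',
        DbGroups=['sales_grp'], AutoCreate=True)
    # db_creds['DbUser'] and db_creds['DbPassword'] are then used to connect (step 7)

    The JDBC and ODBC drivers described in the rest of this post perform these calls for you automatically.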

    Walkthrough

    Example Corp. is using Active Directory (IdP host: demo.examplecorp.com) to manage federated access for users in its organization. It has AWS account 123456789012 and currently manages an Amazon Redshift cluster with the cluster ID “examplecorp-dw” and database “analytics” in the us-west-2 Region for its Sales and Data Science teams. It wants the following access:

    • Sales users can access the examplecorp-dw cluster using the sales_grp database group.
    • Sales users access examplecorp-dw through a JDBC-based SQL client.
    • Sales users access examplecorp-dw through an ODBC connection for their reporting tools.
    • Data Science users access the examplecorp-dw cluster using the data_science_grp database group.
    • Partners access the examplecorp-dw cluster and query using the partner_grp database group.
    • Partners are not federated through Active Directory and are provided with separate IAM user credentials (with the IAM user name examplecorpsalespartner).
    • Partners can connect to the examplecorp-dw cluster programmatically, using a language such as Python.
    • All users are automatically created in Amazon Redshift when they log in for the first time.
    • (Optional) Internal users do not specify database user or group information in their connection string; it is assigned automatically.
    • Data warehouse users can use SSO for the Amazon Redshift data warehouse using the preceding permissions.

    Step 1:  Set up IdPs and federation

    The Enabling Federation to AWS Using Windows Active Directory post demonstrated how to prepare Active Directory and enable federation to AWS. Using those instructions, you can establish trust between your AWS account and the IdP and enable user access to AWS using SSO.  For more information, see Identity Providers and Federation.

    For this walkthrough, assume that this company has already configured SSO to their AWS account: 123456789012 for their Active Directory domain demo.examplecorp.com. The Sales and Data Science teams are not required to specify database user and group information in the connection string. The connection string can be configured by adding SAML Attribute elements to your IdP. Configuring these optional attributes enables internal users to conveniently avoid providing the DbUser and DbGroup parameters when they log in to Amazon Redshift.

    The user-name attribute can be set up as follows, with a user ID (for example, nancy) or an email address (for example, nancy@examplecorp.com):

    <Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbUser">  
      <AttributeValue>user-name</AttributeValue>
    </Attribute>

    The AutoCreate attribute can be defined as follows:

    <Attribute Name="https://redshift.amazon.com/SAML/Attributes/AutoCreate">
        <AttributeValue>true</AttributeValue>
    </Attribute>

    The sales_grp database group can be included as follows:

    <Attribute Name="https://redshift.amazon.com/SAML/Attributes/DbGroups">
        <AttributeValue>sales_grp</AttributeValue>
    </Attribute>

    For more information about attribute element configuration, see Configure SAML Assertions for Your IdP.

    Step 2: Create IAM roles for access to the Amazon Redshift cluster

    The next step is to create IAM policies with permissions to call GetClusterCredentials and provide authorization for Amazon Redshift resources. To grant a SQL client the ability to retrieve the cluster endpoint, region, and port automatically, include the redshift:DescribeClusters action with the Amazon Redshift cluster resource in the IAM role.  For example, users can connect to the Amazon Redshift cluster using a JDBC URL without the need to hardcode the Amazon Redshift endpoint:

    Previous:  jdbc:redshift://endpoint:port/database

    Current:  jdbc:redshift:iam://clustername:region/dbname

    Use IAM to create the following policies. You can also use an existing user or role and assign these policies. For example, if you already created an IAM role for IdP access, you can attach the necessary policies to that role. Here is the policy created for sales users for this example:

    Sales_DW_IAM_Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:DescribeClusters"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:GetClusterCredentials"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                    "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
                ],
                "Condition": {
                    "StringEquals": {
                        "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
                    }
                }
            },
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:CreateClusterUser"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:JoinGroup"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
                ]
            }
        ]
    }

    The policy uses the following parameter values:

    • Region: us-west-2
    • AWS Account: 123456789012
    • Cluster name: examplecorp-dw
    • Database group: sales_grp
    • IAM role: AIDIODR4TAW7CSEXAMPLE

    Each policy statement and its purpose are described below.

    Policy statement:

    {
        "Effect": "Allow",
        "Action": [
            "redshift:DescribeClusters"
        ],
        "Resource": [
            "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
        ]
    }

    Description: Allows users to retrieve the cluster endpoint, region, and port automatically for the Amazon Redshift cluster examplecorp-dw. This specification uses the resource format arn:aws:redshift:region:account-id:cluster:clustername. For example, the SQL client JDBC URL can be specified in the format jdbc:redshift:iam://clustername:region/dbname. For more information, see Amazon Resource Names.

    Policy statement:

    {
        "Effect": "Allow",
        "Action": [
            "redshift:GetClusterCredentials"
        ],
        "Resource": [
            "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
            "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
        ],
        "Condition": {
            "StringEquals": {
                "aws:userid": "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com"
            }
        }
    }

    Description: Generates a temporary token to authenticate into the examplecorp-dw cluster. The resource "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}" restricts the corporate user name to the database user name for that user, and is specified using the format arn:aws:redshift:region:account-id:dbuser:clustername/dbusername. The Condition block enforces that the AWS user ID must match "AIDIODR4TAW7CSEXAMPLE:${redshift:DbUser}@examplecorp.com", so that individual users can authenticate only as themselves. The AIDIODR4TAW7CSEXAMPLE role has the Sales_DW_IAM_Policy policy attached.

    Policy statement:

    {
        "Effect": "Allow",
        "Action": [
            "redshift:CreateClusterUser"
        ],
        "Resource": [
            "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
        ]
    }

    Description: Automatically creates database users in examplecorp-dw when they log in for the first time. Subsequent logins reuse the existing database user.

    Policy statement:

    {
        "Effect": "Allow",
        "Action": [
            "redshift:JoinGroup"
        ],
        "Resource": [
            "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp"
        ]
    }

    Description: Allows sales users to join the sales_grp database group through the resource "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/sales_grp", which is specified in the format arn:aws:redshift:region:account-id:dbgroup:clustername/dbgroupname.

    Similar policies can be created for Data Science users with access to join the data_science_grp group in examplecorp-dw. You can now attach the Sales_DW_IAM_Policy policy to the role that is mapped to the IdP application for SSO. For more information about how to define the claim rules, see Configuring SAML Assertions for the Authentication Response.

    Because partners are not authorized using Active Directory, they are provided with IAM credentials and added to the partner_grp database group. The Partner_DW_IAM_Policy is attached to the IAM users for partners. The following policy allows partners to log in using the IAM user name as the database user name.

    Partner_DW_IAM_Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:DescribeClusters"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:GetClusterCredentials"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:cluster:examplecorp-dw",
                    "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
                ],
                "Condition": {
                    "StringEquals": {
                        "redshift:DbUser": "${aws:username}"
                    }
                }
            },
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:CreateClusterUser"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:dbuser:examplecorp-dw/${redshift:DbUser}"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "redshift:JoinGroup"
                ],
                "Resource": [
                    "arn:aws:redshift:us-west-2:123456789012:dbgroup:examplecorp-dw/partner_grp"
                ]
            }
        ]
    }

    The condition "redshift:DbUser": "${aws:username}" forces an IAM user to use the IAM user name as the database user name.

    With the previous steps configured, you can now establish the connection to Amazon Redshift through JDBC- or ODBC-supported clients.

    Step 3: Set up database user access

    Before you start connecting to Amazon Redshift using the SQL client, set up the database groups for appropriate data access. Log in to your Amazon Redshift database as superuser to create a database group, using CREATE GROUP.

    Log in to examplecorp-dw/analytics as superuser and create the following groups:

    CREATE GROUP sales_grp;
    CREATE GROUP data_science_grp;
    CREATE GROUP partner_grp;

    Use the GRANT command to define access permissions to database objects (tables/views) for the preceding groups.
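
    As an illustration, the following minimal sketch grants the groups read access to a hypothetical sales schema, using the same PyGreSQL (pg) module as the Python example later in this post. The cluster endpoint, superuser credentials, and schema name are assumptions; substitute your own objects.

    # A hedged sketch: grant read-only access on a hypothetical schema to the
    # groups created above; endpoint, credentials, and schema are assumptions.
    import pg

    conn = pg.connect(dbname='analytics',
                      host='examplecorp-dw.xxxxxx.us-west-2.redshift.amazonaws.com',
                      port=5439, user='masteruser', passwd='YourPassword')
    for grp in ('sales_grp', 'data_science_grp', 'partner_grp'):
        conn.query('GRANT USAGE ON SCHEMA sales TO GROUP %s' % grp)
        conn.query('GRANT SELECT ON ALL TABLES IN SCHEMA sales TO GROUP %s' % grp)
    conn.close()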

    Step 4: Connect to Amazon Redshift using the JDBC SQL client

    Assume that sales user “nancy” is using the SQL Workbench client and JDBC driver to log in to the Amazon Redshift data warehouse. The following steps help set up the client and establish the connection:

    1. Download the latest Amazon Redshift JDBC driver from the Configure a JDBC Connection page
    2. Build the JDBC URL with the IAM option in the following format:
      jdbc:redshift:iam://examplecorp-dw:us-west-2/sales_db

    Because the redshift:DescribeClusters action is assigned to the preceding IAM roles, the driver automatically resolves the cluster endpoint and port. Otherwise, you can specify the endpoint and port information in the JDBC URL, as described in Configure a JDBC Connection.

    Identify the following JDBC options for providing the IAM credentials (see the “Prepare your environment” section) and configure in the SQL Workbench Connection Profile:

    plugin_name=com.amazon.redshift.plugin.AdfsCredentialsProvider
    idp_host=demo.examplecorp.com (the name of the corporate identity provider host)
    idp_port=443 (the port of the corporate identity provider host)
    user=examplecorp\nancy (corporate user name)
    password=*** (corporate user password)

    The SQL workbench configuration looks similar to the following screenshot:

    Now, “nancy” can connect to examplecorp-dw by authenticating using the corporate Active Directory. Because the SAML attributes elements are already configured for nancy, she logs in as database user nancy and is assigned the sales_grp. Similarly, other Sales and Data Science users can connect to the examplecorp-dw cluster. A custom Amazon Redshift ODBC driver can also be used to connect using a SQL client. For more information, see Configure an ODBC Connection.

    Step 5: Connect to Amazon Redshift using a JDBC SQL client and IAM credentials

    This optional step is necessary only when you want to enable users that are not authenticated with Active Directory. Partners are provided with IAM credentials that they can use to connect to the examplecorp-dw Amazon Redshift cluster. These IAM users are attached to the Partner_DW_IAM_Policy, which assigns them to the partner_grp database group. The following JDBC URL enables them to connect to the Amazon Redshift cluster:

    jdbc:redshift:iam://examplecorp-dw:us-west-2/analytics?AccessKeyID=XXX&SecretAccessKey=YYY&DbUser=examplecorpsalespartner&DbGroup=partner_grp&AutoCreate=true

    The AutoCreate option automatically creates a new database user the first time the partner logs in. There are several other options available to conveniently specify the IAM user credentials. For more information, see Options for providing IAM credentials.

    Step 6: Connect to Amazon Redshift using an ODBC client for Microsoft Windows

    Assume that another sales user, “uma”, is using an ODBC-based client to log in to the Amazon Redshift data warehouse using the Example Corp Active Directory. The following steps help set up the ODBC client and establish the Amazon Redshift connection on a Microsoft Windows operating system connected to your corporate network:

    1. Download and install the latest Amazon Redshift ODBC driver.
    2. Create a system DSN entry.
      1. In the Start menu, locate the driver folder or folders:
        • Amazon Redshift ODBC Driver (32-bit)
        • Amazon Redshift ODBC Driver (64-bit)
        • If you installed both drivers, you have a folder for each driver.
      2. Choose ODBC Administrator, and then type your administrator credentials.
      3. To configure the driver for all users on the computer, choose System DSN. To configure the driver for your user account only, choose User DSN.
      4. Choose Add.
    3. Select the Amazon Redshift ODBC driver, and choose Finish. Configure the following attributes:
      Data Source Name=any friendly name to identify the ODBC connection
      Database=analytics
      User=uma (corporate user name)
      Auth Type=Identity Provider: AD FS
      Password=leave blank (Windows automatically authenticates)
      Cluster ID=examplecorp-dw
      idp_host=demo.examplecorp.com (the name of the corporate IdP host)

    This configuration looks like the following:

    4. Choose OK to save the ODBC connection.
    5. Verify that uma is set up with the SAML attributes, as described in the “Set up IdPs and federation” section.

    The user uma can now use this ODBC connection to establish the connection to the Amazon Redshift cluster using any ODBC-based tools or reporting tools such as Tableau. Internally, uma authenticates by using the IAM role that has the Sales_DW_IAM_Policy attached and is assigned the sales_grp database group.

    Step 7: Connect to Amazon Redshift using Python and IAM credentials

    To enable partners to connect to the examplecorp-dw cluster programmatically, use Python on a computer such as an Amazon EC2 instance. Reuse the IAM users that are attached to the Partner_DW_IAM_Policy policy defined in Step 2.

    The following steps show this set up on an EC2 instance:

    1. Launch a new EC2 instance with the Partner_DW_IAM_Policy role, as described in Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances. Alternatively, you can attach an existing IAM role to an EC2 instance.
    2. This example uses Python PostgreSQL Driver (PyGreSQL) to connect to your Amazon Redshift clusters. To install PyGreSQL on Amazon Linux, use the following command as the ec2-user:
      sudo easy_install pip
      sudo yum install postgresql postgresql-devel gcc python-devel
      sudo pip install PyGreSQL

    3. The following code snippet demonstrates programmatic access to Amazon Redshift for partner users:
      #!/usr/bin/env python
      """
      Usage:
      python redshiftscript.py
      
      * Copyright 2014, Amazon.com, Inc. or its affiliates. All Rights Reserved.
      *
      * Licensed under the Amazon Software License (the "License").
      * You may not use this file except in compliance with the License.
      * A copy of the License is located at
      *
      * http://aws.amazon.com/asl/
      *
      * or in the "license" file accompanying this file. This file is distributed
      * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
      * express or implied. See the License for the specific language governing
      * permissions and limitations under the License.
      """
      
      import sys
      import pg
      import boto3
      
      REGION = 'us-west-2'
      CLUSTER_IDENTIFIER = 'examplecorp-dw'
      DB_NAME = 'sales_db'
      DB_USER = 'examplecorpsalespartner'
      
      options = """keepalives=1 keepalives_idle=200 keepalives_interval=200
                   keepalives_count=6"""
      
      set_timeout_stmt = "set statement_timeout = 1200000"
      
      def conn_to_rs(host, port, db, usr, pwd, opt=options, timeout=set_timeout_stmt):
          rs_conn_string = """host=%s port=%s dbname=%s user=%s password=%s
                               %s""" % (host, port, db, usr, pwd, opt)
          print "Connecting to %s:%s:%s as %s" % (host, port, db, usr)
          rs_conn = pg.connect(dbname=rs_conn_string)
          rs_conn.query(timeout)
          return rs_conn
      
      def main():
          # describe the cluster and fetch the IAM temporary credentials
          global redshift_client
          redshift_client = boto3.client('redshift', region_name=REGION)
          response_cluster_details = redshift_client.describe_clusters(ClusterIdentifier=CLUSTER_IDENTIFIER)
          # AutoCreate and DbGroups mirror the JDBC connection options shown earlier
          response_credentials = redshift_client.get_cluster_credentials(
              DbUser=DB_USER, DbName=DB_NAME, ClusterIdentifier=CLUSTER_IDENTIFIER,
              DurationSeconds=3600, AutoCreate=True, DbGroups=['partner_grp'])
          rs_host = response_cluster_details['Clusters'][0]['Endpoint']['Address']
          rs_port = response_cluster_details['Clusters'][0]['Endpoint']['Port']
          rs_db = DB_NAME
          rs_iam_user = response_credentials['DbUser']
          rs_iam_pwd = response_credentials['DbPassword']
          # connect to the Amazon Redshift cluster
          conn = conn_to_rs(rs_host, rs_port, rs_db, rs_iam_user,rs_iam_pwd)
          # execute a query
          result = conn.query("SELECT sysdate as dt")
          # fetch results from the query
          for dt_val in result.getresult() :
              print(dt_val)
          # close the Amazon Redshift connection
          conn.close()
      
      if __name__ == "__main__":
          main()

    You can save this Python program in a file (redshiftscript.py) and execute it at the command line as ec2-user:

    python redshiftscript.py

    Now partners can connect to the Amazon Redshift cluster using the Python script, and authentication is federated through the IAM user.

    Summary

    In this post, I demonstrated how to use federated access with Active Directory and IAM roles to enable single sign-on to an Amazon Redshift cluster. I also showed how partners outside an organization can be managed easily using IAM credentials. The GetClusterCredentials API action, now supported by Amazon Redshift, lets you manage a large number of database users and have them log in with their corporate credentials. You don’t have to maintain separate database user accounts.

    Although this post demonstrated the integration of IAM with AD FS and Active Directory, you can replicate this solution with your choice of SAML 2.0–compliant third-party identity providers (IdPs), such as PingFederate or Okta. For the different supported federation options, see Configure SAML Assertions for Your IdP.

    If you have questions or suggestions, please comment below.


    Additional Reading

    Learn how to establish federated access to your AWS resources by using Active Directory user attributes.


    About the Author

    Thiyagarajan Arumugam is a Big Data Solutions Architect at Amazon Web Services and designs customer architectures to process data at scale. Prior to AWS, he built data warehouse solutions at Amazon.com. In his free time, he enjoys all outdoor sports and practices the Indian classical drum mridangam.

     

    Join Us for AWS IAM Day on Monday, October 16, in New York City

    Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/join-us-for-aws-iam-day-on-monday-october-16-in-new-york-city/

    Join us in New York City at the AWS Pop-up Loft for AWS IAM Day on Monday, October 16, from 9:30 A.M.–4:15 P.M. Eastern Time. At this free technical event, you will learn AWS Identity and Access Management (IAM) concepts from IAM product managers, as well as tools and strategies you can use for controlling access to your AWS environment, such as the IAM policy language and IAM best practices. You also will take an IAM policy ninja deep dive into permissions and learn how to use IAM roles to delegate access to your AWS resources. Last, you will learn how to integrate Active Directory with AWS workloads.

    You can attend one session or stay for the full day.

    Learn more about the available sessions and register!

    – Craig

    Join Us for AWS IAM Day on Monday, October 9, in San Francisco

    Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/join-us-for-aws-iam-day-on-monday-october-9-in-san-francisco/

    Join us in San Francisco at the AWS Pop-up Loft for AWS IAM Day on Monday, October 9, from 9:30 A.M.–4:15 P.M. Pacific Time. At this free technical event, you will learn AWS Identity and Access Management (IAM) concepts from IAM product managers, as well as tools and strategies you can use for controlling access to your AWS environment, such as the IAM policy language and IAM best practices. You also will take an IAM policy ninja deep dive into permissions and learn how to use IAM roles to delegate access to your AWS resources. Last, you will learn how to integrate Active Directory with AWS workloads.

    You can attend one session or stay for the full day.

    Learn more about the available sessions and register!

    – Craig

    How to Enable LDAPS for Your AWS Microsoft AD Directory

    Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-enable-ldaps-for-your-aws-microsoft-ad-directory/

    Starting today, you can encrypt the Lightweight Directory Access Protocol (LDAP) communications between your applications and AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. Many Windows and Linux applications use Active Directory’s (AD) LDAP service to read and write sensitive information about users and devices, including personally identifiable information (PII). Now, you can encrypt your AWS Microsoft AD LDAP communications end to end to protect this information by using LDAP Over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS. This helps you protect PII and other sensitive information exchanged with AWS Microsoft AD over untrusted networks.

    To enable LDAPS, you need to add a Microsoft enterprise Certificate Authority (CA) server to your AWS Microsoft AD domain and configure certificate templates for your domain controllers. After you have enabled LDAPS, AWS Microsoft AD encrypts communications with LDAPS-enabled Windows applications, Linux computers that use Secure Shell (SSH) authentication, and applications such as Jira and Jenkins.

    In this blog post, I show how to enable LDAPS for your AWS Microsoft AD directory in six steps: 1) Delegate permissions to CA administrators, 2) Add a Microsoft enterprise CA to your AWS Microsoft AD directory, 3) Create a certificate template, 4) Configure AWS security group rules, 5) AWS Microsoft AD enables LDAPS, and 6) Test LDAPS access using the LDP tool.

    Assumptions

    For this post, I assume you are familiar with the following:

    Solution overview

    Before going into specific deployment steps, I will provide a high-level overview of deploying LDAPS. I cover how you enable LDAPS on AWS Microsoft AD. In addition, I provide some general background about CA deployment models and explain how to apply these models when deploying Microsoft CA to enable LDAPS on AWS Microsoft AD.

    How you enable LDAPS on AWS Microsoft AD

    LDAP-aware applications (LDAP clients) typically access LDAP servers using Transmission Control Protocol (TCP) on port 389. By default, LDAP communications on port 389 are unencrypted. However, many LDAP clients use one of two standards to encrypt LDAP communications: LDAP over SSL on port 636, and LDAP with StartTLS on port 389. If an LDAP client uses port 636, the LDAP server encrypts all traffic unconditionally with SSL. If an LDAP client issues a StartTLS command when setting up the LDAP session on port 389, the LDAP server encrypts all traffic to that client with TLS. AWS Microsoft AD now supports both encryption standards when you enable LDAPS on your AWS Microsoft AD domain controllers.
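
    To illustrate the two standards, the following minimal Python sketch uses the third-party ldap3 library to open one connection over LDAP over SSL on port 636 and one with StartTLS on port 389. The host name, credentials, and CA file are assumptions based on the corp.example.com example used later in this post.

    # A hedged sketch: connect to AWS Microsoft AD over LDAP over SSL (636)
    # and with StartTLS (389) using ldap3; all values are assumptions.
    import ssl
    from ldap3 import Server, Connection, Tls, NTLM

    tls = Tls(validate=ssl.CERT_REQUIRED, ca_certs_file='rootca.pem')  # root CA public cert

    # LDAP over SSL: all traffic on port 636 is encrypted unconditionally
    ldaps_server = Server('corp.example.com', port=636, use_ssl=True, tls=tls)
    conn = Connection(ldaps_server, user='CORP\\Admin', password='YourPassword',
                      authentication=NTLM, auto_bind=True)
    print(conn.extend.standard.who_am_i())
    conn.unbind()

    # LDAP with StartTLS: the session starts on port 389 and is then upgraded
    starttls_server = Server('corp.example.com', port=389, tls=tls)
    conn = Connection(starttls_server, user='CORP\\Admin', password='YourPassword',
                      authentication=NTLM)
    conn.open()
    conn.start_tls()  # upgrade the session to TLS before binding
    conn.bind()
    conn.unbind()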

    You enable LDAPS on your AWS Microsoft AD domain controllers by installing a digital certificate that a CA issued. Though Windows servers have different methods for installing certificates, LDAPS with AWS Microsoft AD requires you to add a Microsoft CA to your AWS Microsoft AD domain and deploy the certificate through autoenrollment from the Microsoft CA. The installed certificate enables the LDAP service running on domain controllers to listen for and negotiate LDAP encryption on port 636 (LDAP over SSL) and port 389 (LDAP with StartTLS).

    Background of CA deployment models

    You can deploy CAs as part of a single-level or multi-level CA hierarchy. In a single-level hierarchy, all certificates come from the root of the hierarchy. In a multi-level hierarchy, you organize a collection of CAs in a hierarchy and the certificates sent to computers and users come from subordinate CAs in the hierarchy (not the root).

    Certificates issued by a CA identify the hierarchy to which the CA belongs. When a computer sends its certificate to another computer for verification, the receiving computer must have the public certificate from the CAs in the same hierarchy as the sender. If the CA that issued the certificate is part of a single-level hierarchy, the receiver must obtain the public certificate of the CA that issued the certificate. If the CA that issued the certificate is part of a multi-level hierarchy, the receiver can obtain a public certificate for all the CAs that are in the same hierarchy as the CA that issued the certificate. If the receiver can verify that the certificate came from a CA that is in the hierarchy of the receiver’s “trusted” public CA certificates, the receiver trusts the sender. Otherwise, the receiver rejects the sender.

    Deploying Microsoft CA to enable LDAPS on AWS Microsoft AD

    Microsoft offers a standalone CA and an enterprise CA. Though you can configure either as single-level or multi-level hierarchies, only the enterprise CA integrates with AD and offers autoenrollment for certificate deployment. Because you cannot sign in to run commands on your AWS Microsoft AD domain controllers, an automatic certificate enrollment model is required. Therefore, AWS Microsoft AD requires the certificate to come from a Microsoft enterprise CA that you configure to work in your AD domain. When you install the Microsoft enterprise CA, you can configure it to be part of a single-level hierarchy or a multi-level hierarchy. As a best practice, AWS recommends a multi-level Microsoft CA trust hierarchy consisting of a root CA and a subordinate CA. I cover only a multi-level hierarchy in this post.

    In a multi-level hierarchy, you configure your subordinate CA by importing a certificate from the root CA. You must issue a certificate from the root CA such that the certificate gives your subordinate CA the right to issue certificates on behalf of the root. This makes your subordinate CA part of the root CA hierarchy. You also deploy the root CA’s public certificate on all of your computers, which tells all your computers to trust certificates that your root CA issues and to trust certificates from any authorized subordinate CA.

    In such a hierarchy, you typically leave your root CA offline (inaccessible to other computers in the network) to protect the root of your hierarchy. You leave the subordinate CA online so that it can issue certificates on behalf of the root CA. This multi-level hierarchy increases security because if someone compromises your subordinate CA, you can revoke all certificates it issued and set up a new subordinate CA from your offline root CA. To learn more about setting up a secure CA hierarchy, see Securing PKI: Planning a CA Hierarchy.

    When a Microsoft CA is part of your AD domain, you can configure certificate templates that you publish. These templates become visible to client computers through AD. If a client’s profile matches a template, the client requests a certificate from the Microsoft CA that matches the template. Microsoft calls this process autoenrollment, and it simplifies certificate deployment. To enable LDAPS on your AWS Microsoft AD domain controllers, you create a certificate template in the Microsoft CA that generates SSL and TLS-compatible certificates. The domain controllers see the template and automatically import a certificate of that type from the Microsoft CA. The imported certificate enables LDAP encryption.

    Steps to enable LDAPS for your AWS Microsoft AD directory

    The rest of this post is composed of the steps for enabling LDAPS for your AWS Microsoft AD directory. First, though, I explain which components you must have running to deploy this solution successfully. I also explain how this solution works and include an architecture diagram.

    Prerequisites

    The instructions in this post assume that you already have the following components running:

    1. An active AWS Microsoft AD directory – To create a directory, follow the steps in Create an AWS Microsoft AD directory.
    2. An Amazon EC2 for Windows Server instance for managing users and groups in your directory – This instance needs to be joined to your AWS Microsoft AD domain and have Active Directory Administration Tools installed. Active Directory Administration Tools installs Active Directory Administrative Center and the LDP tool.
    3. An existing root Microsoft CA or a multi-level Microsoft CA hierarchy – You might already have a root CA or a multi-level CA hierarchy in your on-premises network. If you plan to use your on-premises CA hierarchy, you must have administrative permissions to issue certificates to subordinate CAs. If you do not have an existing Microsoft CA hierarchy, you can set up a new standalone Microsoft root CA by creating an Amazon EC2 for Windows Server instance and installing a standalone root certification authority. You also must create a local user account on this instance and add this user to the local administrator group so that the user has permissions to issue a certificate to a subordinate CA.

    The solution setup

    The following diagram illustrates the setup with the steps you need to follow to enable LDAPS for AWS Microsoft AD. You will learn how to set up a subordinate Microsoft enterprise CA (in this case, SubordinateCA) and join it to your AWS Microsoft AD domain (in this case, corp.example.com). You also will learn how to create a certificate template on SubordinateCA and configure AWS security group rules to enable LDAPS for your directory.

    As a prerequisite, I already created a standalone Microsoft root CA (in this case RootCA) for creating SubordinateCA. RootCA also has a local user account called RootAdmin that has administrative permissions to issue certificates to SubordinateCA. Note that you may already have a root CA or a multi-level CA hierarchy in your on-premises network that you can use for creating SubordinateCA instead of creating a new root CA. If you choose to use your existing on-premises CA hierarchy, you must have administrative permissions on your on-premises CA to issue a certificate to SubordinateCA.

    Lastly, I also already created an Amazon EC2 instance (in this case, Management) that I use to manage users, configure AWS security groups, and test the LDAPS connection. I join this instance to the AWS Microsoft AD directory domain.

    Diagram showing the process discussed in this post

    Here is how the process works:

    1. Delegate permissions to CA administrators (in this case, CAAdmin) so that they can join a Microsoft enterprise CA to your AWS Microsoft AD domain and configure it as a subordinate CA.
    2. Add a Microsoft enterprise CA to your AWS Microsoft AD domain (in this case, SubordinateCA) so that it can issue certificates to your directory domain controllers to enable LDAPS. This step includes joining SubordinateCA to your directory domain, installing the Microsoft enterprise CA, and obtaining a certificate from RootCA that grants SubordinateCA permissions to issue certificates.
    3. Create a certificate template (in this case, ServerAuthentication) with server authentication and autoenrollment enabled so that your AWS Microsoft AD directory domain controllers can obtain certificates through autoenrollment to enable LDAPS.
    4. Configure AWS security group rules so that AWS Microsoft AD directory domain controllers can connect to the subordinate CA to request certificates.
    5. AWS Microsoft AD enables LDAPS through the following process:
      1. AWS Microsoft AD domain controllers request a certificate from SubordinateCA.
      2. SubordinateCA issues a certificate to AWS Microsoft AD domain controllers.
      3. AWS Microsoft AD enables LDAPS for the directory by installing certificates on the directory domain controllers.
    6. Test LDAPS access by using the LDP tool.

    I now will show you these steps in detail. I use the names of components—such as RootCA, SubordinateCA, and Management—and refer to users—such as Admin, RootAdmin, and CAAdmin—to illustrate who performs these steps. All component names and user names in this post are used for illustrative purposes only.

    Deploy the solution

    Step 1: Delegate permissions to CA administrators


    In this step, you delegate permissions to your users who manage your CAs. Your users then can join a subordinate CA to your AWS Microsoft AD domain and create the certificate template in your CA.

    To enable use with a Microsoft enterprise CA, AWS added a new built-in AD security group called AWS Delegated Enterprise Certificate Authority Administrators that has delegated permissions to install and administer a Microsoft enterprise CA. By default, your directory Admin is part of the new group and can add other users or groups in your AWS Microsoft AD directory to this security group. If you have trust with your on-premises AD directory, you can also delegate CA administrative permissions to your on-premises users by adding on-premises AD users or global groups to this new AD security group.

    To create a new user (in this case CAAdmin) in your directory and add this user to the AWS Delegated Enterprise Certificate Authority Administrators security group, follow these steps:

    1. Sign in to the Management instance using RDP with the user name Admin and the password that you set for the Admin user when you created your directory.
    2. Launch the Microsoft Windows Server Manager on the Management instance and navigate to Tools > Active Directory Users and Computers.
      Screenshot of the menu including the "Active Directory Users and Computers" choice
    3. Switch to the tree view and navigate to corp.example.com > CORP > Users. Right-click Users and choose New > User.
      Screenshot of choosing New > User
    4. Add a new user with the First name CA, Last name Admin, and User logon name CAAdmin.
      Screenshot of completing the "New Object - User" boxes
    5. In the Active Directory Users and Computers tool, navigate to corp.example.com > AWS Delegated Groups. In the right pane, right-click AWS Delegated Enterprise Certificate Authority Administrators and choose Properties.
      Screenshot of navigating to AWS Delegated Enterprise Certificate Authority Administrators > Properties
    6. In the AWS Delegated Enterprise Certificate Authority Administrators window, switch to the Members tab and choose Add.
      Screenshot of the "Members" tab of the "AWS Delegate Enterprise Certificate Authority Administrators" window
    7. In the Enter the object names to select box, type CAAdmin and choose OK.
      Screenshot showing the "Enter the object names to select" box
    8. In the next window, choose OK to add CAAdmin to the AWS Delegated Enterprise Certificate Authority Administrators security group.
      Screenshot of adding "CA Admin" to the "AWS Delegated Enterprise Certificate Authority Administrators" security group
    9. Also add CAAdmin to the AWS Delegated Server Administrators security group so that CAAdmin can use RDP to sign in to the Microsoft enterprise CA machine.
      Screenshot of adding "CAAdmin" to the "AWS Delegated Server Administrators" security group

     You have granted CAAdmin permissions to join a Microsoft enterprise CA to your AWS Microsoft AD directory domain.
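
    If you prefer PowerShell over the GUI for these user-management steps, here is a minimal sketch using the ActiveDirectory module available with the Active Directory Administration Tools on the Management instance. The OU path is an assumption based on the navigation above; the group names are the ones used in this post:

      # Run on the Management instance as Admin, from an elevated PowerShell session.
      Import-Module ActiveDirectory
      # Create the CAAdmin user under corp.example.com > CORP > Users.
      New-ADUser -Name "CA Admin" -GivenName "CA" -Surname "Admin" -SamAccountName "CAAdmin" `
          -Path "OU=Users,OU=CORP,DC=corp,DC=example,DC=com" `
          -AccountPassword (Read-Host -AsSecureString "Password for CAAdmin") -Enabled $true
      # Delegate CA administration and allow RDP access to domain-joined servers.
      Add-ADGroupMember -Identity "AWS Delegated Enterprise Certificate Authority Administrators" -Members "CAAdmin"
      Add-ADGroupMember -Identity "AWS Delegated Server Administrators" -Members "CAAdmin"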

    Step 2: Add a Microsoft enterprise CA to your AWS Microsoft AD directory

    In this step, you set up a subordinate Microsoft enterprise CA and join it to your AWS Microsoft AD directory domain. I will summarize the process first and then walk through the steps.

    First, you create an Amazon EC2 for Windows Server instance called SubordinateCA and join it to the domain, corp.example.com. You then publish RootCA’s public certificate and certificate revocation list (CRL) to SubordinateCA’s local trusted store. You also publish RootCA’s public certificate to your directory domain. Doing so enables SubordinateCA and your directory domain controllers to trust RootCA. You then install the Microsoft enterprise CA service on SubordinateCA and request a certificate from RootCA to make SubordinateCA a subordinate Microsoft CA. After RootCA issues the certificate, SubordinateCA is ready to issue certificates to your directory domain controllers.

    Note that you can use an Amazon S3 bucket to pass the certificates between RootCA and SubordinateCA.
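
    For example, with the AWS Tools for PowerShell installed and credentials that allow access to the bucket, the exchange might look like the following sketch (the bucket name is hypothetical):

      # On RootCA: upload the public certificate and CRL.
      Write-S3Object -BucketName "example-ca-exchange" -Folder "c:\rootcerts" -KeyPrefix "rootcerts"
      # On SubordinateCA: download them again for installation.
      Read-S3Object -BucketName "example-ca-exchange" -KeyPrefix "rootcerts" -Folder "c:\rootcerts"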

    In detail, here is how the process works, as illustrated in the preceding diagram:

    1. Set up an Amazon EC2 instance joined to your AWS Microsoft AD directory domain – Create an Amazon EC2 for Windows Server instance to use as a subordinate CA, and join it to your AWS Microsoft AD directory domain. For this example, the machine name is SubordinateCA and the domain is corp.example.com.
    2. Share RootCA’s public certificate with SubordinateCA – Log in to RootCA as RootAdmin and start Windows PowerShell with administrative privileges. Run the following commands to copy RootCA’s public certificate and CRL to the folder c:\rootcerts on RootCA.
      New-Item c:\rootcerts -type directory
      copy C:\Windows\system32\certsrv\certenroll\*.cr* c:\rootcerts

      Upload RootCA’s public certificate and CRL from c:\rootcerts to an S3 bucket by following the steps in How Do I Upload Files and Folders to an S3 Bucket?

    The following screenshot shows RootCA’s public certificate and CRL uploaded to an S3 bucket.
    Screenshot of RootCA’s public certificate and CRL uploaded to the S3 bucket

    3. Publish RootCA’s public certificate to your directory domain – Log in to SubordinateCA as the CAAdmin. Download RootCA’s public certificate and CRL from the S3 bucket by following the instructions in How Do I Download an Object from an S3 Bucket? Save the certificate and CRL to the C:\rootcerts folder on SubordinateCA. Add RootCA’s public certificate and the CRL to the local store of SubordinateCA and publish RootCA’s public certificate to your directory domain by running the following commands using Windows PowerShell with administrative privileges.
      certutil -addstore -f root <path to the RootCA public certificate file>
      certutil -addstore -f root <path to the RootCA CRL file>
      certutil -dspublish -f <path to the RootCA public certificate file> RootCA
    4. Install the subordinate Microsoft enterprise CA – Install the subordinate Microsoft enterprise CA on SubordinateCA by following the instructions in Install a Subordinate Certification Authority. Ensure that you choose Enterprise CA for Setup Type to install an enterprise CA.

    For the CA Type, choose Subordinate CA.

    5. Request a certificate from RootCA – Next, copy the certificate request on SubordinateCA to a folder called c:\CARequest by running the following commands using Windows PowerShell with administrative privileges.
      New-Item c:\CARequest -type directory
      Copy c:\*.req C:\CARequest

      Upload the certificate request to the S3 bucket.
      Screenshot of uploading the certificate request to the S3 bucket

    6. Approve SubordinateCA’s certificate request – Log in to RootCA as RootAdmin and download the certificate request from the S3 bucket to a folder called CARequest. Submit the request by running the following command using Windows PowerShell with administrative privileges.
      certreq -submit <path to certificate request file>

      In the Certification Authority List window, choose OK.
      Screenshot of the Certification Authority List window

    Navigate to Server Manager > Tools > Certification Authority on RootCA.
    Screenshot of "Certification Authority" in the drop-down menu

    In the Certification Authority window, expand the ROOTCA tree in the left pane and choose Pending Requests. In the right pane, note the value in the Request ID column. Right-click the request and choose All Tasks > Issue.
    Screenshot of noting the value in the "Request ID" column

    7. Retrieve the SubordinateCA certificate – Retrieve the SubordinateCA certificate by running the following command using Windows PowerShell with administrative privileges. The command includes the <RequestId> that you noted in the previous step.
      certreq -retrieve <RequestId> <drive>:\subordinateCA.crt

      Upload SubordinateCA.crt to the S3 bucket.

    8. Install the SubordinateCA certificate – Log in to SubordinateCA as the CAAdmin and download SubordinateCA.crt from the S3 bucket. Install the certificate by running the following commands using Windows PowerShell with administrative privileges.
      certutil -installcert c:\subordinateCA.crt
      Start-Service certsvc
    9. Delete the content that you uploaded to S3 – As a security best practice, delete all the certificates and CRLs that you uploaded to the S3 bucket in the previous steps because you already have installed them on SubordinateCA.

    You have finished setting up the subordinate Microsoft enterprise CA that is joined to your AWS Microsoft AD directory domain. Now you can use your subordinate Microsoft enterprise CA to create a certificate template so that your directory domain controllers can request a certificate to enable LDAPS for your directory.
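
    Before you continue, you can confirm that the CA service is up on SubordinateCA. Here is a quick check from an elevated PowerShell session (certutil -ping queries the local CA request interface and fails if the CA is not responding):

      # Confirm that Active Directory Certificate Services is running.
      Get-Service certsvc
      # Ask the CA to respond over its request interface.
      certutil -ping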

    Step 3: Create a certificate template

    In this step, you create a certificate template with server authentication and autoenrollment enabled on SubordinateCA. You create this new template (in this case, ServerAuthentication) by duplicating an existing certificate template (in this case, Domain Controller template) and adding server authentication and autoenrollment to the template.

    Follow these steps to create a certificate template:

    1. Log in to SubordinateCA as CAAdmin.
    2. Launch Microsoft Windows Server Manager. Select Tools > Certification Authority.
    3. In the Certification Authority window, expand the SubordinateCA tree in the left pane. Right-click Certificate Templates, and choose Manage.
      Screenshot of choosing "Manage" under "Certificate Template"
    4. In the Certificate Templates Console window, right-click Domain Controller and choose Duplicate Template.
      Screenshot of the Certificate Templates Console window
    5. In the Properties of New Template window, switch to the General tab and change the Template display name to ServerAuthentication.
      Screenshot of the "Properties of New Template" window
    6. Switch to the Security tab, and choose Domain Controllers in the Group or user names section. Select the Allow check box for Autoenroll in the Permissions for Domain Controllers section.
      Screenshot of the "Permissions for Domain Controllers" section of the "Properties of New Template" window
    7. Switch to the Extensions tab, choose Application Policies in the Extensions included in this template section, and choose Edit.
      Screenshot of the "Extensions" tab of the "Properties of New Template" window
    8. In the Edit Application Policies Extension window, choose Client Authentication and choose Remove. Choose OK to create the ServerAuthentication certificate template. Close the Certificate Templates Console window.
      Screenshot of the "Edit Application Policies Extension" window
    9. In the Certification Authority window, right-click Certificate Templates, and choose New > Certificate Template to Issue.
      Screenshot of choosing "New" > "Certificate Template to Issue"
    10. In the Enable Certificate Templates window, choose ServerAuthentication and choose OK.
      Screenshot of the "Enable Certificate Templates" window

    You have finished creating a certificate template with server authentication and autoenrollment enabled on SubordinateCA. Your AWS Microsoft AD directory domain controllers can now obtain a certificate through autoenrollment to enable LDAPS.
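
    To verify that SubordinateCA now offers the new template, you can list the templates the CA is configured to issue from an elevated PowerShell session on SubordinateCA; ServerAuthentication should appear in the output:

      # List the certificate templates this CA can issue.
      certutil -CATemplates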

    Step 4: Configure AWS security group rules

    In this step, you configure AWS security group rules so that your directory domain controllers can connect to the subordinate CA to request certificates. To do this, you add an outbound rule to your directory’s AWS security group (in this case, sg-4ba7682d) that allows all traffic to SubordinateCA’s AWS security group (in this case, sg-6fbe7109), so that your directory domain controllers can reach SubordinateCA. You also add an inbound rule to SubordinateCA’s AWS security group that allows all traffic from your directory’s AWS security group, so that SubordinateCA can accept the incoming requests.

    Follow these steps to configure AWS security group rules:

    1. Log in to the Management instance as Admin.
    2. Navigate to the EC2 console.
    3. In the left pane, choose Network & Security > Security Groups.
    4. In the right pane, choose the AWS security group (in this case, sg-6fbe7109) of SubordinateCA.
    5. Switch to the Inbound tab and choose Edit.
    6. Choose Add Rule. Choose All traffic for Type and Custom for Source. Enter your directory’s AWS security group (in this case, sg-4ba7682d) in the Source box. Choose Save.
      Screenshot of adding an inbound rule
    7. Now choose the AWS security group (in this case, sg-4ba7682d) of your AWS Microsoft AD directory, switch to the Outbound tab, and choose Edit.
    8. Choose Add Rule. Choose All traffic for Type and Custom for Destination. Enter SubordinateCA’s AWS security group (in this case, sg-6fbe7109) in the Destination box. Choose Save.

    You have completed the configuration of AWS security group rules to allow traffic between your directory domain controllers and SubordinateCA.
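
    If you manage security groups from the command line, the same two rules can be added with the AWS Tools for PowerShell. The following is a sketch only, reusing this post’s example group IDs and assuming the module is installed and your credentials permit EC2 security group changes:

      Import-Module AWSPowerShell
      # Allow all inbound traffic to SubordinateCA's group from the directory's group.
      $fromDirectory = New-Object Amazon.EC2.Model.IpPermission -Property @{
          IpProtocol       = "-1"   # all traffic
          UserIdGroupPairs = @(New-Object Amazon.EC2.Model.UserIdGroupPair -Property @{ GroupId = "sg-4ba7682d" })
      }
      Grant-EC2SecurityGroupIngress -GroupId "sg-6fbe7109" -IpPermission $fromDirectory

      # Allow all outbound traffic from the directory's group to SubordinateCA's group.
      $toSubordinateCA = New-Object Amazon.EC2.Model.IpPermission -Property @{
          IpProtocol       = "-1"
          UserIdGroupPairs = @(New-Object Amazon.EC2.Model.UserIdGroupPair -Property @{ GroupId = "sg-6fbe7109" })
      }
      Grant-EC2SecurityGroupEgress -GroupId "sg-4ba7682d" -IpPermission $toSubordinateCA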

    Step 5: AWS Microsoft AD enables LDAPS

    The AWS Microsoft AD domain controllers perform this step automatically by recognizing the published template and requesting a certificate from the subordinate Microsoft enterprise CA. The subordinate CA can take up to 180 minutes to issue certificates to the directory domain controllers. The directory imports these certificates into the directory domain controllers and enables LDAPS for your directory automatically. This completes the setup of LDAPS for the AWS Microsoft AD directory. The LDAP service on the directory is now ready to accept LDAPS connections!

    Step 6: Test LDAPS access by using the LDP tool

    In this step, you test the LDAPS connection to the AWS Microsoft AD directory by using the LDP tool. The LDP tool is available on the Management machine where you installed Active Directory Administration Tools. Before you test the LDAPS connection, you must wait up to 180 minutes for the subordinate CA to issue a certificate to your directory domain controllers.

    To test LDAPS, you connect to one of the domain controllers using port 636. Here are the steps to test the LDAPS connection:

    1. Log in to Management as Admin.
    2. Launch the Microsoft Windows Server Manager on Management and navigate to Tools > Active Directory Users and Computers.
    3. Switch to the tree view and navigate to corp.example.com > CORP > Domain Controllers. In the right pane, right-click one of the domain controllers and choose Properties. Copy the DNS name of the domain controller.
      Screenshot of copying the DNS name of the domain controller
    4. Launch the LDP.exe tool by launching Windows PowerShell and running the LDP.exe command.
    5. In the LDP tool, choose Connection > Connect.
      Screenshot of choosing "Connection" > "Connect" in the LDP tool
    6. In the Server box, paste the DNS name you copied in the previous step. Type 636 in the Port box. Choose OK to test the LDAPS connection to port 636 of your directory.
      Screenshot of completing the boxes in the "Connect" window
    7. If the connection succeeds, LDP displays a message confirming that your LDAPS connection is now open.

    You have completed the setup of LDAPS for your AWS Microsoft AD directory! You can now encrypt LDAP communications between your Windows and Linux applications and your AWS Microsoft AD directory using LDAPS.
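
    If you prefer a scripted check to the LDP GUI, here is a minimal sketch you can run from the domain-joined Management instance. The DC host name is an example (use the DNS name you copied earlier); Test-NetConnection only proves that the TCP port is open, while the LdapConnection bind actually negotiates TLS:

      # Quick reachability check of the LDAPS port (TCP only, no TLS validation).
      Test-NetConnection -ComputerName "DC1.corp.example.com" -Port 636

      # Attempt a real LDAPS bind with the current user's credentials.
      Add-Type -AssemblyName System.DirectoryServices.Protocols
      $server = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier("DC1.corp.example.com", 636)
      $ldap = New-Object System.DirectoryServices.Protocols.LdapConnection($server)
      $ldap.SessionOptions.SecureSocketLayer = $true
      $ldap.Bind()   # throws an exception if the TLS handshake or bind fails
      "LDAPS bind succeeded"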

    Summary

    In this blog post, I walked through the process of enabling LDAPS for your AWS Microsoft AD directory. Enabling LDAPS helps you protect PII and other sensitive information exchanged over untrusted networks between your Windows and Linux applications and your AWS Microsoft AD. To learn more about how to use AWS Microsoft AD, see the Directory Service documentation. For general information and pricing, see the Directory Service home page.

    If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum.

    – Vijay

    Samba 4.7.0 released

    Post Syndicated from corbet original https://lwn.net/Articles/734639/rss

    The Samba 4.7.0 release is out. New features include whole DB read locks
    (a reliability improvement), Active Directory with Kerberos support,
    detailed audit trails for authentication and authorization activities, a
    multi-process LDAP server, better read-only domain controller support, and
    more. See the release notes for details.