Tag Archives: Kibana

Configure SAML single sign-on for Kibana with AD FS on Amazon Elasticsearch Service

Post Syndicated from Sajeev Attiyil Bhaskaran original https://aws.amazon.com/blogs/security/configure-saml-single-sign-on-for-kibana-with-ad-fs-on-amazon-elasticsearch-service/

It’s a common use case for customers to integrate identity providers (IdPs) with Amazon Elasticsearch Service (Amazon ES) to achieve single sign-on (SSO) with Kibana. This integration makes it possible for users to leverage their existing identity credentials and offers administrators a single source of truth for user and permissions management. In this blog post, we’ll discuss how you can configure Security Assertion Markup Language (SAML) authentication for Kibana by using Amazon ES and Microsoft Active Directory Federation Services (AD FS).

Amazon ES now natively supports SSO authentication that uses the SAML protocol. With SAML authentication for Kibana, users can integrate directly with their existing third-party IdPs, such as Okta, Ping Identity, OneLogin, Auth0, AD FS, AWS Single Sign-on, and Azure Active Directory. SAML authentication for Kibana is powered by Open Distro for Elasticsearch, an Apache 2.0-licensed distribution of Elasticsearch, and is available to all Amazon ES customers who have enabled fine-grained access controls.

When you set up SAML authentication with Kibana, you can configure authentication that uses either service provider (SP)-initiated SSO or IdP-initiated SSO. The SP-initiated SSO flow occurs when a user directly accesses any SAML-configured Kibana endpoint, at which time Amazon ES redirects the user to their IdP for authentication, followed by a redirect back to Amazon ES after successful authentication. An IdP-initiated SSO flow typically occurs when a user chooses a link that first initiates the sign-in flow at the IdP, skipping the redirect between Amazon ES and the IdP. This blog post will focus on the SAML SP-initiated SSO flow.

Prerequisites

To complete this walkthrough, you must have the following:

  1. An Amazon ES domain (Elasticsearch version 6.7 or later) with fine-grained access control enabled, ideally deployed within a VPC.
  2. An AD FS server, backed by Active Directory, that is reachable from the network where users access Kibana.
  3. An Active Directory group for Kibana administrators (this walkthrough uses a group named admins).

Solution overview

For the solution presented in this post, you use your existing AD FS as an IdP for the user’s authentication. The SAML federation uses a claim-based authentication model in which user attributes (in this case stored in Active Directory) are passed from the IdP (AD FS) to the SP (Kibana).

Let’s walk through how a user would use the SAML protocol to access Amazon ES Kibana (the SP) while using AD FS as the IdP. In Figure 1, the user authentication request comes from an on-premises network, which is connected to Amazon VPC through a VPN connection—in this case, this could also be over AWS Direct Connect. The Amazon ES domain and AD FS are created in the same VPC.

Figure 1: A high-level view of a SAML transaction between Amazon ES and AD FS

The initial sign-in flow is as follows:

  1. Open a browser on the on-premises computer and navigate to the Kibana endpoint for your Amazon ES domain in the VPC.
  2. Amazon ES generates a SAML authentication request for the user and redirects it back to the browser.
  3. The browser redirects the SAML authentication request to AD FS.
  4. AD FS parses the SAML request and prompts the user to enter credentials.
    1. User enters credentials and AD FS authenticates the user with Active Directory.
    2. After successful authentication, AD FS generates a SAML response and returns the encoded SAML response to the browser. The SAML response contains the destination (the Assertion Consumer Service (ACS) URL), the authentication response issuer (the AD FS entity ID URL), the digital signature, and the claim (which user is authenticated with AD FS, the user’s NameID, the group, the attribute used in SAML assertions, and so on).
  5. The browser sends the SAML response to the Kibana ACS URL, and then Kibana redirects to Amazon ES.
  6. Amazon ES validates the SAML response. If all the validations pass, you are redirected to the Kibana front page. Authorization is performed by Kibana based on the role mapped to the user. The role mapping is performed based on attributes of the SAML assertion being consumed by Kibana and Amazon ES.

Deploy the solution

Now let’s walk through the steps to set up SAML authentication for Kibana single sign-on by using Amazon ES and Microsoft AD FS.

Enable SAML for Amazon Elasticsearch Service

The first step in the configuration setup process is to enable SAML authentication in the Amazon ES domain.

To enable SAML for Amazon ES

  1. Sign in to the Amazon ES console and choose any existing Amazon ES domain that meets the criteria described in the Prerequisites section of this post.
  2. Under Actions, select Modify Authentication.
  3. Select the Enable SAML authentication check box.
    Figure 2: Enable SAML authentication

    When you enable SAML, it automatically creates and displays the different URLs that are required to configure SAML support in your IdP.

    Figure 3: URLs for configuring the IdP

  4. Look under Configure your Identity Provider (IdP), and note down the URL values for Service provider entity ID and SP-initiated SSO URL.
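
If you want to confirm from a script that a domain meets the fine-grained access control prerequisite mentioned in step 1, the following minimal sketch uses boto3 to read the domain configuration. The domain name my-es-domain is a placeholder, and the snippet assumes your AWS credentials can call the Amazon ES configuration API.

import boto3

es = boto3.client("es")  # Amazon Elasticsearch Service configuration API

domain = es.describe_elasticsearch_domain(DomainName="my-es-domain")["DomainStatus"]
adv_sec = domain.get("AdvancedSecurityOptions", {})

# VPC domains report endpoints in "Endpoints"; public domains use "Endpoint".
print("Endpoint(s):", domain.get("Endpoints") or domain.get("Endpoint"))
print("Fine-grained access control enabled:", adv_sec.get("Enabled", False))

if not adv_sec.get("Enabled", False):
    raise SystemExit("Enable fine-grained access control before configuring SAML.")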

Set up and configure AD FS

During the SAML authentication process, the browser receives the SAML assertion token from AD FS and forwards it to the SP. In order to pass the claims to the Amazon ES domain, AD FS (the claims provider) and the Amazon ES domain (the relying party) have to establish a trust between them. Then you define the rules for what type of claims AD FS needs to send to the Amazon ES domain. The Amazon ES domain authorizes the user with internal security roles or backend roles, according to the claims in the token.

To configure Amazon ES as a relying party in AD FS

  1. Sign in to the AD FS server. In Server Manager, choose Tools, and then choose AD FS Management.
  2. In the AD FS management console, open the context (right-click) menu for Relying Party Trust, and then choose Add Relying Party Trust.

    Figure 4: Set up a relying party trust

  3. In the Add Relying Party Trust Wizard, select Claims aware, and then choose Start.

    Figure 5: Create a claims aware application

  4. On the Select Data Source page, choose Enter data about the relying party manually, and then choose Next.

    Figure 6: Enter data about the relying party manually

  5. On the Specify Display Name page, type in the display name of your choice for the relying party, and then choose Next. Choose Next again to move past the Configure Certificate screen. (Configuring a token encryption certificate is optional and at the time of writing, Amazon ES doesn’t support SAML token encryption.)

    Figure 7: Provide a display name for the relying party

  6. On the Configure URL page, do the following steps.
    1. Choose the Enable support for the SAML 2.0 WebSSO protocol check box.
    2. In the URL field, add the SP-initiated SSO URL that you noted when you enabled SAML authentication in Amazon ES earlier.
    3. Choose Next.

      Figure 8: Enable SAML support and provide the SP-initiated SSO URL

  7. On the Configure Identifiers page, do the following:
      1. For Relying party trust identifier, provide the service provider entity ID that you noted when you enabled SAML authentication in Amazon ES.
      2. Choose Add, and then choose Next.

     

    Figure 9: Provide the service provider entity ID

  8. On the Choose Access Control Policy page, choose the appropriate access for your domain. Depending on your requirements, choose one of these options:
    • Choose Permit Specific Group to restrict access to one or more groups in your Active Directory domain based on the Active Directory group.
    • Choose Permit Everyone to allow all Active Directory domain users to access Kibana.

    Note: This step only provides access for the users to authenticate into Kibana. You have not yet set up Open Distro security roles and permissions.

     

    Figure 10: Choose an access control policy

  9. On the Ready to Add Trust page, choose Next, and then choose Close.

Now you’ve finished adding Amazon ES as a relying party trust.

Next, configure claim issuance rules for the relying party. During the authentication process, AD FS sends user attributes (claims) to the relying party. With claim rules, you define which claims AD FS sends to the Amazon ES domain. In the following procedure, you create two claim rules: one sends the incoming Windows account name as the Name ID, and the other sends Active Directory groups as roles.

To configure claim issuance rules

  1. On the Relying Party Trusts page, right-click the relying party trust (in this case, AWS_ES_Kibana) and choose Edit Claim Issuance Policy.

    Figure 11: Edit the claim issuance policy

  2. Configure the claim rule to send the Windows account name as the Name ID, using these steps.
    1. In the Edit Claim Issuance Policy dialog box, choose Add Rule. The Add Transform Claim Rule Wizard opens.
    2. For Rule Type, choose Transform an Incoming Claim, and then choose Next.
    3. On the Configure Rule page, enter the following information:
      • Claim rule name: NameId
      • Incoming claim type: Windows account name
      • Outgoing claim type: Name ID
      • Outgoing name ID format: Unspecified
      • Pass through all claim values: Select this option
    4. Choose Finish.

     

    Figure 12: Set the claim rule for Name ID

  3. Configure Active Directory groups to send as roles, using the following steps.
    1. In the Edit Claim Issuance Policy dialog box, choose Add Rule. The Add Transform Claim Rule Wizard opens.
    2. For Rule Type, choose Send LDAP Attributes as Claims, and then choose Next.
    3. On the Configure Rule page, enter or choose the following settings:
      • Claim rule name: Send-Groups-as-Roles
      • Attribute store: Active Directory
      • LDAP attribute: Token-Groups – Unqualified Names (to select the group name)
      • Outgoing claim type: Roles (the value for Roles should match the Roles Key that you will set in the Configure SAML in the Amazon ES domain step later in this process)
    4. Choose Finish.

      Figure 13: Set claim rule for Active Directory groups as Roles

The configuration of AD FS is now complete and you can download the SAML metadata file from AD FS. The SAML metadata is in XML format and is needed to configure SAML in the Amazon ES domain. The AD FS metadata file (the IdP metadata) can be accessed from the following link (replace <AD FS FQDN> with the domain name of your AD FS server). Copy the XML and note down the value of entityID from the XML, as shown in Figure 14. You will need this information in the next steps.

https://<AD FS FQDN>/FederationMetadata/2007-06/FederationMetadata.xml

 

Figure 14: The value of entityID in the XML file
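
If you'd rather retrieve the entityID programmatically than from a browser, here is a small sketch that downloads the metadata and reads the attribute from the root element. Replace adfs.example.com with your AD FS server's FQDN; the script also keeps a local copy of the metadata for the import step that follows.

import urllib.request
import xml.etree.ElementTree as ET

url = "https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml"

with urllib.request.urlopen(url) as resp:
    metadata_xml = resp.read()

# The entityID is an attribute of the root EntityDescriptor element.
print("entityID:", ET.fromstring(metadata_xml).attrib.get("entityID"))

# Keep a copy to paste into the "Import IdP metadata" field in the next step.
with open("FederationMetadata.xml", "wb") as f:
    f.write(metadata_xml)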

Configure SAML in the Amazon ES domain

Next, you configure SAML settings in the Amazon Elasticsearch Service console. You need to import the IdP metadata, configure the IdP entity ID, configure the backend role, and set up the Roles key.

To configure SAML settings in the Amazon ES domain

    1. Sign in to the Amazon Elasticsearch Service console. On the Actions menu, choose Modify authentication.
    2. Import the IdP metadata, using the following steps.
      1. Choose Import IdP metadata, and then choose Metadata from IdP.
      2. Paste the contents of the FederationMetadata XML file (the IdP metadata) that you copied earlier in the Add or edit metadata field. You can also choose the Import from XML file button if you have the metadata file on the local disk.

        Figure 15: The imported identity provider metadata

    3. Copy and paste the value of entityID from the XML file to the IdP entity ID field, if that field isn’t autofilled.
    4. For SAML manager backend role (the console may refer to this as master backend role), enter the name of the group you created in AD FS as part of the prerequisites for this post. In this walkthrough, we set the name of the group as admins, and therefore the backend role is admins.

Optionally, you can also provide the user name instead of the backend role.

  5. Set up the Roles key, using the following steps.
    1. Under Optional SAML settings, for Roles key, enter Roles. This value must match the value for Outgoing claim type, which you set when you configured claims rules earlier.

      Figure 16: Set the Roles key

    2. Leave the Subject key field empty to use the NameID element of the SAML assertion for the user name. Keep the defaults for everything else, and then choose Submit.

It can take a few minutes for the SAML settings to update and for the domain to return to the active state.
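
The same SAML settings can be applied through the configuration API rather than the console. The following is a hedged sketch using boto3's update_elasticsearch_domain_config with the SAMLOptions structure; the domain name, entityID value, and metadata file path are placeholders, and admins is the AD FS group used in this walkthrough.

import boto3

es = boto3.client("es")

with open("FederationMetadata.xml") as f:
    idp_metadata = f.read()

es.update_elasticsearch_domain_config(
    DomainName="my-es-domain",
    AdvancedSecurityOptions={
        "SAMLOptions": {
            "Enabled": True,
            "Idp": {
                "MetadataContent": idp_metadata,
                # entityID copied from the AD FS metadata XML
                "EntityId": "http://adfs.example.com/adfs/services/trust",
            },
            "MasterBackendRole": "admins",  # AD FS group for Kibana administrators
            "RolesKey": "Roles",            # must match the outgoing claim type in AD FS
        }
    },
)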

Congratulations! You’ve completed all the SP and IdP configurations.

Sign in to Kibana

When the domain comes back to the active state, choose the Kibana URL in the Amazon ES console. You will be redirected to the AD FS sign-in page for authentication. Provide the user name and password for any of the users in the admins group. The example in Figure 17 uses the credentials for the user user1@example.com, who is a member of the admins group.

Figure 17: The AD FS sign-in screen with user credentials

AD FS authenticates the user and redirects the browser to Kibana. If the user has at least one role mapped, you go to the Kibana home page, as shown in Figure 18. In this walkthrough, you mapped the AD FS group admins as a backend role to the manager user. Internally, the Open Distro security plugin maps the backend role admins to the security roles all_access and security_manager. Therefore, an Active Directory user in the admins group is authorized with the privileges of the manager user in the domain. For more granular access, you can create different AD FS groups and map the group names (backend roles) to internal security roles by using Role Mappings in Kibana.

Figure 18: The AD FS user user1@example.com is successfully logged in to Kibana
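
As an illustration of that role mapping, the sketch below submits the same kind of mapping through the Open Distro security REST API with Python. It assumes an AD FS group named analysts and a way to authenticate to the API (for example, an internal master user, if your domain has one); otherwise, use the Role Mappings screen in Kibana as described above.

import json
import requests

endpoint = "https://my-es-domain.example.com"  # Amazon ES domain endpoint (placeholder)
auth = ("master-user", "master-password")      # internal master user (placeholder)

mapping = {
    "backend_roles": ["analysts"],  # AD FS group delivered in the Roles claim
    "hosts": [],
    "users": [],
}

# Map the backend role onto the built-in kibana_user security role.
resp = requests.put(
    f"{endpoint}/_opendistro/_security/api/rolesmapping/kibana_user",
    auth=auth,
    headers={"Content-Type": "application/json"},
    data=json.dumps(mapping),
)
print(resp.status_code, resp.text)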

Note: At the time of writing for this blog post, if you specify the <SingleLogoutService /> details in the AD FS metadata XML, when you sign out from Kibana, Kibana will call AD FS directly and try to sign the user out. This doesn’t work currently, because AD FS expects the sign-out request to be signed with a certificate that Amazon ES doesn’t currently support. If you remove <SingleLogoutService /> from the metadata XML file, Amazon ES will use its own internal sign-out mechanism and sign the user out on the Amazon ES side. No calls will be made to AD FS for signing out.

Conclusion

In this post, we covered setting up SAML authentication for Kibana single sign-on by using Amazon ES and Microsoft AD FS. The integration of IdPs with your Amazon ES domain provides a powerful way to control fine-grained access to your Kibana endpoint and integrate with existing identity lifecycle processes for create/update/delete operations, which reduces the operational overhead required to manage users.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sajeev Attiyil Bhaskaran

Sajeev works closely with AWS customers to provide them architectural and engineering assistance and guidance. He dives deep into big data technologies and streaming solutions and leads onsite and online sessions for customers to design best solutions for their use cases. Outside of work, he enjoys spending time with his wife and daughter.

Author

Jagadeesh Pusapadi

Jagadeesh is a Senior Solutions Architect with AWS working with customers on their strategic initiatives. He helps customers build innovative solutions on the AWS Cloud by providing architectural guidance to achieve desired business outcomes.

Connecting to Kibana Within an AWS VPC

Post Syndicated from Bozho original https://techblog.bozho.net/connecting-to-kibana-within-an-aws-vpc/

When you use the managed Elasticsearch service on AWS, you usually choose an encrypted connection (via KMS-managed keys), which means you can’t use just any tool to connect to your Elasticsearch cluster. In fact, in order to manually execute commands the easiest option is to use the built-in Kibana and its dev tools.

However, connecting to Kibana is also not trivial due to typical security precautions. Elasticsearch can be run outside or inside a VPC. If you run it outside a VPC, you have to modify its access policy to allow connections from a set of IPs (e.g. your office network).

But if you run it inside a VPC (which is recommended), you have to connect to the VPC. And you have a lot of options for that, but all of them are rather complicated and sometimes even introduce additional cost.

A much simpler approach is to connect via SSH to a machine in the VPC (typically your bastion/jump host) and use it as a SOCKS proxy for your browser. The steps are:

  1. Open an SSH tunnel. If you are using Windows, you can do it with PuTTY. On Linux, you can use ssh: $ ssh -D 1337 -q -C -N user@<bastion-host> (replace user@<bastion-host> with the SSH user and hostname of your bastion/jump host).
  2. Set the SOCKS proxy in the browser. In Firefox, open Options and search for “SOCKS”; there is only one matching setting (under Network Settings). Set the SOCKS host to localhost and the port to 1337 (or whatever port you’ve chosen). Here are the instructions for Chrome (or you can use a plugin).
  3. Open the Kibana URL in the browser. Note that now all your browser traffic will go through your VPC, so depending on the VPC configuration other websites might not work.
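
To confirm the tunnel works before opening Kibana, you can route a test request through the SOCKS proxy. This is a rough sketch, not from the original post: it assumes requests with the PySocks extra installed (pip install "requests[socks]") and a placeholder VPC endpoint; depending on your domain's access policy you may also need to authenticate.

import requests

proxies = {
    "http": "socks5h://localhost:1337",   # socks5h resolves DNS through the tunnel
    "https": "socks5h://localhost:1337",
}

resp = requests.get(
    "https://vpc-my-domain-abc123.us-east-1.es.amazonaws.com/_cluster/health",
    proxies=proxies,
    timeout=10,
)
print(resp.status_code, resp.json())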

That’s it, a quick tip that might potentially save a lot of time trying to get a VPC connection to work.

The post Connecting to Kibana Within an AWS VPC appeared first on Bozho's tech blog.

Masking field values with Amazon Elasticsearch Service

Post Syndicated from Prashant Agrawal original https://aws.amazon.com/blogs/security/masking-field-values-with-amazon-elasticsearch-service/

Amazon Elasticsearch Service (Amazon ES) is a fully managed service that you can use to deploy, secure, and run Elasticsearch cost-effectively at scale. The service provides support for open-source Elasticsearch APIs, managed Kibana, and integration with Logstash and other AWS services. Amazon ES provides a deep security model that spans many layers of interaction and supports fine-grained access control at the cluster, index, document, and field level, on a per-user basis. The service’s security plugin integrates with federated identity providers for Kibana login.

A common use case for Amazon ES is log analytics. Customers configure their applications to store log data to the Elasticsearch cluster, where the data can be queried for insights into the functionality and use of the applications over time. In many cases, users reviewing those insights should not have access to all the details from the log data. The log data for a web application, for example, might include the source IP addresses of incoming requests. Privacy rules in many countries require that those details be masked, wholly or in part. This post explains how to set up field masking within your Amazon ES domain.

Field masking is an alternative to field-level security that lets you anonymize the data in a field rather than remove it altogether. When creating a role, add a list of fields to mask. Field masking affects whether you can see the contents of a field when you search. You can use field masking to apply either a random hash or a pattern-based substitution to sensitive information, hiding it from users who shouldn’t have access to it.

When you use field masking, Amazon ES creates a hash of the actual field values before returning the search results. You can apply field masking on a per-role basis, supporting different levels of visibility depending on the identity of the user making the query. Currently, field masking is only available for string-based fields. A search result with a masked field (clientIP) looks like this:

{
  "_index": "web_logs",
  "_type": "_doc",
  "_id": "1",
  "_score": 1,
  "_source": {
    "agent": "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
    "bytes": 0,
    "clientIP": "7e4df8d4df7086ee9c05efe1e21cce8ff017a711ee9addf1155608ca45d38219",
    "host": "www.example.com",
    "extension": "txt",
    "geo": {
      "src": "EG",
      "dest": "CN",
      "coordinates": {
        "lat": 35.98531194,
        "lon": -85.80931806
      }
    },
    "machine": {
      "ram": 17179869184,
      "os": "win 7"
    }
  }
}

To follow along in this post, make sure you have an Amazon ES domain with Elasticsearch version 6.7 or higher, sample data loaded (this example uses the web logs data supplied by Kibana), and access to Kibana through a role with administrator privileges for the domain.

Configure field masking

Field masking is managed by defining specific access controls within the Kibana visualization system. You’ll need to create a new Kibana role, define the fine-grained access-control privileges for that role, specify which fields to mask, and apply that role to specific users.

You can use either the Kibana console or direct-to-API calls to set up field masking. In our first example, we’ll use the Kibana console.

To configure field masking in the Kibana console

  1. Log in to Kibana, choose the Security pane, and then choose Roles, as shown in Figure 1.

    Figure 1: Choose security roles

  2. Choose the plus sign (+) to create a new role, as shown in Figure 2.

    Figure 2: Create role

  3. Choose the Index Permissions tab, and then choose Add index permissions, as shown in Figure 3.

    Figure 3: Set index permissions

  4. Add index patterns and appropriate permissions for data access. See the Amazon ES documentation for details on configuring fine-grained access control.
  5. Once you’ve set Index Patterns, Permissions: Action Groups, Document Level Security Query, and Include or exclude fields, you can use the Anonymize fields entry to mask the clientIP, as shown in Figure 4.

    Figure 4: Anonymize field

  6. Choose Save Role Definition.
  7. Next, you need to create one or more users and apply the role to the new users. Go back to the Security page and choose Internal User Database, as shown in Figure 5.

    Figure 5: Select Internal User Database

  8. Choose the plus sign (+) to create a new user, as shown in Figure 6.

    Figure 6: Create user

  9. Add a username and password, and under Open Distro Security Roles, select the role es-mask-role, as shown in Figure 7.

    Figure 7: Select the username, password, and roles

  10. Choose Submit.

If you prefer, you can perform the same task by using the Amazon ES REST API using Kibana dev tools.

Use the following API call to create a role, as described in the snippet below and shown in Figure 8.

PUT _opendistro/_security/api/roles/es-mask-role
{
  "cluster_permissions": [],
  "index_permissions": [
    {
      "index_patterns": [
        "web_logs"
      ],
      "dls": "",
      "fls": [],
      "masked_fields": [
        "clientIP"
      ],
      "allowed_actions": [
        "data_access"
      ]
    }
  ]
}

Sample response:

{
  "status": "CREATED",
  "message": "'es-mask-role' created."
}

Figure 8: API to create Role

Use the following API call to create a user with that role, as described in the snippet below and shown in Figure 9.

PUT _opendistro/_security/api/internalusers/es-mask-user
{
  "password": "xxxxxxxxxxx",
  "opendistro_security_roles": [
    "es-mask-role"
  ]
}

Sample response:

{
  "status": "CREATED",
  "message": "'es-mask-user' created."
}

Figure 9: API to create User

Verify field masking

You can verify field masking by running a simple search query using Kibana dev tools (GET web_logs/_search) and retrieving the data first by using the kibana_user (with no field masking), and then by using the es-mask-user (with field masking) you just created.
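
If you want to compare the two results side by side outside of Kibana, a quick sketch like the following works. The endpoint and passwords are placeholders; it runs the same query as both users and prints the clientIP value each one sees.

import requests

endpoint = "https://my-es-domain.example.com"   # Amazon ES domain endpoint (placeholder)
query = {"size": 1, "_source": ["clientIP"]}

for user, password in [("kibana_user", "kibana-password"), ("es-mask-user", "mask-password")]:
    resp = requests.get(f"{endpoint}/web_logs/_search", auth=(user, password), json=query)
    hit = resp.json()["hits"]["hits"][0]["_source"]
    print(f"{user}: clientIP = {hit['clientIP']}")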

Query responses run by the kibana_user (all access) have the original values in all fields, as shown in Figure 10.

Figure 10: Retrieval of the full clientIP data with kibana_user

Figure 11, following, shows an example of what you would see if you logged in as the es-mask-user. In this case, the clientIP field is hidden due to the es-mask-role you created.

Figure 11: Retrieval of the masked clientIP data with es-mask-user

Use pattern-based field masking

Rather than creating a hash, you can use one or more regular expressions and replacement strings to mask a field. The syntax is <field>::/<regular-expression>/::<replacement-string>.

You can use either the Kibana console or direct-to-API calls to set up pattern-based field masking. In the following example, clientIP is masked so that the last three octets of the IP address are replaced with xxx, using the pattern clientIP::/[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$/::xxx.xxx.xxx. You see only the first octet of the IP address, as shown in Figure 12.

Figure 12: Anonymize the field with a pattern
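
For reference, here is a sketch of what the es-mask-role definition looks like with the pattern-based mask instead of the default hash, submitted over the security REST API with Python. The endpoint and admin credentials are placeholders; the body mirrors the role shown earlier, with only the masked_fields entry changed.

import json
import requests

endpoint = "https://my-es-domain.example.com"
admin_auth = ("admin-user", "admin-password")

role = {
    "cluster_permissions": [],
    "index_permissions": [{
        "index_patterns": ["web_logs"],
        "dls": "",
        "fls": [],
        # Keep the first octet; replace the last three with xxx.
        "masked_fields": ["clientIP::/[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$/::xxx.xxx.xxx"],
        "allowed_actions": ["data_access"],
    }],
}

resp = requests.put(
    f"{endpoint}/_opendistro/_security/api/roles/es-mask-role",
    auth=admin_auth,
    headers={"Content-Type": "application/json"},
    data=json.dumps(role),
)
print(resp.status_code, resp.text)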

Run the search query to verify that the last three parts of clientIP are masked by custom characters and only the first part is shown to the requester, as shown in Figure 13.

Figure 13: Retrieval of the masked clientIP (according to the defined pattern) with es-mask-user

Conclusion

Field-level security should be the primary approach for controlling access to data; however, if specific business requirements can't be met with that approach, field masking may offer a viable alternative. By using field masking, you can selectively allow or prevent your users from seeing private information such as personally identifiable information (PII) or personal health information (PHI). For more information about fine-grained access control, see the Amazon Elasticsearch Service Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Prashant Agrawal

Prashant is a Search Specialist Solutions Architect with Amazon Elasticsearch Service. He works closely with team members to help customers migrate their workloads to the cloud. Before joining AWS, he helped various customers use Elasticsearch for their search and analytics use cases.

How to visualize multi-account Amazon Inspector findings with Amazon Elasticsearch Service

Post Syndicated from Moumita Saha original https://aws.amazon.com/blogs/security/how-to-visualize-multi-account-amazon-inspector-findings-with-amazon-elasticsearch-service/

Amazon Inspector helps to improve the security and compliance of your applications that are deployed on Amazon Web Services (AWS). It automatically assesses Amazon Elastic Compute Cloud (Amazon EC2) instances and applications on those instances. From that assessment, it generates findings related to exposure, potential vulnerabilities, and deviations from best practices.

You can use the findings from Amazon Inspector as part of a vulnerability management program for your Amazon EC2 fleet across multiple AWS Regions in multiple accounts. The ability to rank and efficiently respond to potential security issues reduces the time that vulnerabilities remain unresolved, and a single pane of glass for all the accounts in your AWS environment accelerates that response.

Following AWS best practices, in a secure multi-account AWS environment, you can provision (using AWS Control Tower) a group of accounts (known as core accounts) for governing other accounts within the environment. One of the core accounts can be used as a central security account, which you designate for governing the security and compliance posture across all accounts in your environment. Another core account is a centralized logging account, which you provision and designate for central storage of log data.

In this blog post, I show you how to:

  1. Use Amazon Inspector, a fully managed security assessment service, to generate security findings.
  2. Gather findings from multiple Regions across multiple accounts using Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).
  3. Use AWS Lambda to send the findings to a central security account for deeper analysis and reporting.

In this solution, we send the findings to two services inside the central security account:

  • An Amazon Elasticsearch Service (Amazon ES) domain, for analysis and visualization with Kibana.
  • An Amazon Simple Storage Service (Amazon S3) bucket, for centralized storage of the findings.

Solution overview

Overall architecture

The flow of events to implement the solution is shown in Figure 1 and described in the following process flow.

Figure 1: Solution overview architecture

Process flow

The flow of this architecture is divided into two types of processes—a one-time process and a scheduled process. The AWS resources that are part of the one-time process are triggered the first time an Amazon Inspector assessment template is created in each Region of each application account. The AWS resources of the scheduled process are triggered at the scheduled interval of the Amazon Inspector scan in each Region of each application account.

One-time process

  1. An event-based Amazon CloudWatch rule in each Region of every application account triggers a regional AWS Lambda function when an Amazon Inspector assessment template is created for the first time in that Region.

    Note: In order to restrict this event to trigger the Lambda function only the first time an assessment template is created, you must use a specific user-defined tag to trigger the Attach Inspector template to SNS Lambda function for only one Amazon Inspector template per Region. For more information on tags, see the Tagging AWS resources documentation.

  2. The Lambda function attaches the Amazon Inspector assessment template (created in application accounts) to the cross-account Amazon SNS topic (created in the security account). The function, the template, and the topic are all in the same AWS Region.

    Note: This step is needed because Amazon Inspector templates can only be attached to SNS topics in the same account via the AWS Management Console or AWS Command Line Interface (AWS CLI).
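
The core of that Lambda function boils down to a single Amazon Inspector API call. The sketch below is illustrative rather than the solution's actual code: the SNS topic ARN is a placeholder, and the event shape assumes a CloudTrail-based CloudWatch event for CreateAssessmentTemplate.

import boto3

inspector = boto3.client("inspector")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:central-inspector-findings"  # placeholder

def handler(event, context):
    # Assumed event shape: the template ARN from the CreateAssessmentTemplate call.
    template_arn = event["detail"]["responseElements"]["assessmentTemplateArn"]

    # Ask Amazon Inspector to notify the central SNS topic for every reported finding.
    inspector.subscribe_to_event(
        resourceArn=template_arn,
        event="FINDING_REPORTED",
        topicArn=TOPIC_ARN,
    )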

Scheduled process

  1. A scheduled Amazon CloudWatch Event in every Region of the application accounts starts the Amazon Inspector scan at a scheduled time interval, which you can configure.
  2. An Amazon Inspector agent conducts the scan on the EC2 instances of the Region where the assessment template is created and sends any findings to Amazon Inspector.
  3. Once the findings are generated, Amazon Inspector notifies the Amazon SNS topic of the security account in the same Region.
  4. The Amazon SNS topics from each Region of the central security account receive notifications of Amazon Inspector findings from all application accounts. The SNS topics then send the notifications to a central Amazon SQS queue in the primary Region of the security account.
  5. The Amazon SQS queue triggers the Send findings Lambda function (as shown in Figure 1) of the security account.

    Note: Each Amazon SQS message represents one Amazon Inspector finding.

  6. The Send findings Lambda function assumes a cross-account role to fetch the following information from all application accounts:
    1. Finding details from the Amazon Inspector API.
    2. Additional Amazon EC2 attributes—VPC, subnet, security group, and IP address—from EC2 instances with potential vulnerabilities.
  7. The Lambda function then sends all the gathered data to a central S3 bucket and a domain in Amazon ES—both in the central security account.

These Amazon Inspector findings, along with additional attributes on the scanned instances, can be used for further analysis and visualization via Kibana—a data visualization dashboard for Amazon ES. Storing a copy of these findings in an S3 bucket gives you the opportunity to forward the findings data to outside monitoring tools that don’t support direct data ingestion from AWS Lambda.
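
To make the Send findings step more concrete, here is a much-simplified sketch of what that function might do for each SQS message. It is not the solution's actual Lambda code: the cross-account role name, bucket name, and SNS message fields are assumptions, and the Amazon ES indexing step is only noted in a comment.

import json
import boto3

s3 = boto3.client("s3")
sts = boto3.client("sts")
BUCKET = "central-inspector-findings"  # placeholder S3 bucket in the security account

def handler(event, context):
    for record in event["Records"]:  # one SQS message per Amazon Inspector finding
        # SQS wraps the SNS notification, which wraps the Inspector message.
        message = json.loads(json.loads(record["body"])["Message"])
        finding_arn = message["finding"]          # assumed field in the Inspector notification
        region = finding_arn.split(":")[3]
        account_id = finding_arn.split(":")[4]

        # Assume a role in the application account to read the finding details.
        creds = sts.assume_role(
            RoleArn=f"arn:aws:iam::{account_id}:role/InspectorFindingsReadRole",  # placeholder
            RoleSessionName="send-findings",
        )["Credentials"]
        inspector = boto3.client(
            "inspector",
            region_name=region,
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        finding = inspector.describe_findings(findingArns=[finding_arn])["findings"][0]

        # Store a copy in the central bucket. The real function also enriches the
        # finding with EC2 attributes and indexes the document into Amazon ES.
        key = f"{account_id}/{region}/{finding['id']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(finding, default=str))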

Prerequisites

The following resources must be set up before you can implement this solution:

  1. A multi-account structure. To learn how to set up a multi-account structure, see Setting up AWS Control Tower and AWS Landing zone.
  2. Amazon Inspector agents must be installed on all EC2 instances. See Installing Amazon Inspector agents to learn how to set up Amazon Inspector agents on EC2 instances. Additionally, keep note of all the Regions where you install the Amazon Inspector agent.
  3. An Amazon ES domain with Kibana authentication. See Getting started with Amazon Elasticsearch Service and Use Amazon Cognito for Kibana access control.
  4. An S3 bucket for centralized storage of Amazon Inspector findings.
  5. An S3 bucket for storage of the Lambda source code for the solution.

Set up Amazon Inspector with Amazon ES and S3

Follow these steps to set up centralized Amazon Inspector findings with Amazon ES and Amazon S3:

  1. Upload the solution ZIP file to the S3 bucket used for Lambda code storage.
  2. Collect the input parameters for AWS CloudFormation deployment.
  3. Deploy the base template into the central security account.
  4. Deploy the second template in the primary Region of all application accounts to create global resources.
  5. Deploy the third template in all Regions of all application accounts.

Step 1: Upload the solution ZIP file to the S3 bucket used for Lambda code storage

  1. From GitHub, download the file Inspector-to-S3ES-crossAcnt.zip.
  2. Upload the ZIP file to the S3 bucket you created in the central security account for Lambda code storage. This code is used to create the Lambda function in the first CloudFormation stack set of the solution.

Step 2: Collect input parameters for AWS CloudFormation deployment

In this solution, you deploy three AWS CloudFormation stack sets in succession. Each stack set should be created in the primary Region of the central security account. Underlying stacks are deployed across the central security account and in all the application accounts where the Amazon Inspector scan is performed. You can learn more in Working with AWS CloudFormation StackSets.

Before you proceed to the stack set deployment, you must collect the input parameters for the first stack set: Central-SecurityAcnt-BaseTemplate.yaml.

To collect input parameters for AWS CloudFormation deployment

  1. Fetch the account ID (CentralSecurityAccountID) of the AWS account where the stack set will be created and deployed. You can use the steps in Finding your AWS account ID to help you find the account ID.
  2. Values for the ES domain parameters can be fetched from the Amazon ES console.
    1. Open the Amazon ES Management Console and select the Region where the Amazon ES domain exists.
    2. Select the domain name to view the domain details.
    3. The value for ElasticsearchDomainName is displayed on the top left corner of the domain details.
    4. On the Overview tab in the domain details window, select and copy the URL value of the Endpoint to use as the ElasticsearchEndpoint parameter of the template. Make sure to exclude the https:// at the beginning of the URL.

      Figure 2: Details of the Amazon ES domain for fetching parameter values

  3. Get the values for the S3 bucket parameters from the Amazon S3 console.
    1. Open the Amazon S3 Management Console.
    2. Copy the name of the S3 bucket that you created for centralized storage of Amazon Inspector findings. Save this bucket name for the LoggingS3Bucket parameter value of the Central-SecurityAcnt-BaseTemplate.yaml template.
    3. Select the S3 bucket used for source code storage. Select the bucket name and copy the name of this bucket for the LambdaSourceCodeS3Bucket parameter of the template.

      Figure 3: The S3 bucket where Lambda code is uploaded

  4. On the bucket details page, select the source code ZIP file name that you previously uploaded to the bucket. In the detail page of the ZIP file, choose the Overview tab, and then copy the value in the Key field to use as the value for the LambdaCodeS3Key parameter of the template.

    Figure 4: Details of the Lambda code ZIP file uploaded in Amazon S3 showing the key prefix value

Note: All of the other input parameter values of the template are entered automatically, but you can change them during stack set creation if necessary.

Step 3: Deploy the base template into the central security account

Now that you’ve collected the input parameters, you’re ready to deploy the base template that will create the necessary resources for this solution implementation in the central security account.

Prerequisites for CloudFormation stack set deployment

There are two permission modes that you can choose from for deploying a stack set in AWS CloudFormation. If you’re using AWS Organizations and have all features enabled, you can use the service-managed permissions; otherwise, self-managed permissions mode is recommended. To deploy this solution, you’ll use self-managed permissions mode. To run stack sets in self-managed permissions mode, your administrator account and the target accounts must have two IAM roles—AWSCloudFormationStackSetAdministrationRole and AWSCloudFormationStackSetExecutionRole—as prerequisites. In this solution, the administrator account is the central security account and the target accounts are application accounts. You can use the following CloudFormation templates to create the necessary IAM roles:

To deploy the base template

  1. Download the base template (Central-SecurityAcnt-BaseTemplate.yaml) from GitHub.
  2. Open the AWS CloudFormation Management Console and select the Region where all the stack sets will be created for deployment. This should be the primary Region of your environment.
  3. Select Create StackSet.
    1. In the Create StackSet window, select Template is ready and then select Upload a template file.
    2. Under Upload a template file, select Choose file and select the Central-SecurityAcnt-BaseTemplate.yaml template that you downloaded earlier.
    3. Choose Next.
  4. Add stack set details.
    1. Enter a name for the stack set in StackSet name.
    2. Under Parameters, most of the values are pre-populated except the values you collected in the previous procedure for CentralSecurityAccountID, ElasticsearchDomainName, ElasticsearchEndpoint, LoggingS3Bucket, LambdaSourceCodeS3Bucket, and LambdaCodeS3Key.
    3. After all the values are populated, choose Next.
  5. Configure StackSet options.
    1. (Optional) Add tags as described in the prerequisites to apply to the resources in the stack set that these rules will be deployed to. Tagging is a recommended best practice, because it enables you to add metadata information to resources during their creation.
    2. Under Permissions, choose the Self-service permissions mode for deploying the stack set, and then select AWSCloudFormationStackSetAdministrationRole from the dropdown list.

      Figure 5: Permission mode to be selected for stack set deployment

    3. Choose Next.
  6. Add the account and Region details where the template will be deployed.
    1. Under Deployment locations, select Deploy stacks in accounts. Under Account numbers, enter the account ID of the security account that you collected earlier.

      Figure 6: Values to be provided during the deployment of the first stack set

    2. Under Specify regions, select all the Regions where the stacks will be created. This should be the list of Regions where you installed the Amazon Inspector agent. Keep note of this list of Regions to use in the deployment of the third template in an upcoming step.
      • Though an Amazon Inspector scan is performed in all the application accounts, the regional Amazon SNS topics that send scan finding notifications are created in the central security account. Therefore, this template is created in all the Regions where Amazon Inspector will notify SNS. The template has the logic needed to handle the creation of specific AWS resources only in the primary Region, even though the template executes in many Regions.
      • The order in which Regions are selected under Specify regions defines the order in which the stack is deployed in the Regions. So you must make sure that the primary Region of your deployment is the first one specified under Specify regions, followed by the other Regions of stack set deployment. This is required because global resources are created using one Region—ideally the primary Region—and so stack deployment in that Region should be done before deployment to other Regions in order to avoid any build dependencies.

        Figure 7: Showing the order of specifying the Regions of stack set deployment

  7. Review the template settings and select the check box to acknowledge the Capabilities section. This is required if your deployment template creates IAM resources. You can learn more at Controlling access with AWS Identity and Access Management.

    Figure 8: Acknowledge IAM resources creation by AWS CloudFormation

  8. Choose Submit to deploy the stack set.
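
If you script your deployments, the same stack set can be created with the CloudFormation API. The following is a hedged sketch: the stack set name, account ID, Region list, and parameter values are placeholders that mirror the values collected in step 2, and it assumes the self-managed permission roles from the prerequisites already exist.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # your primary Region

with open("Central-SecurityAcnt-BaseTemplate.yaml") as f:
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="central-security-base",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates IAM resources
    Parameters=[
        {"ParameterKey": "CentralSecurityAccountID", "ParameterValue": "111122223333"},
        {"ParameterKey": "ElasticsearchDomainName", "ParameterValue": "my-es-domain"},
        {"ParameterKey": "ElasticsearchEndpoint", "ParameterValue": "vpc-my-es-domain-abc123.us-east-1.es.amazonaws.com"},
        {"ParameterKey": "LoggingS3Bucket", "ParameterValue": "central-inspector-findings"},
        {"ParameterKey": "LambdaSourceCodeS3Bucket", "ParameterValue": "my-lambda-code-bucket"},
        {"ParameterKey": "LambdaCodeS3Key", "ParameterValue": "Inspector-to-S3ES-crossAcnt.zip"},
    ],
)

# Deploy to the security account, primary Region first so global resources exist
# before the stacks in the remaining Regions are created.
cfn.create_stack_instances(
    StackSetName="central-security-base",
    Accounts=["111122223333"],
    Regions=["us-east-1", "us-west-2"],
)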

Step 4: Deploy the second template in the primary Region of all application accounts to create global resources

This template creates the global resources required for sending Amazon Inspector findings to Amazon ES and Amazon S3.

To deploy the second template

  1. Download the template (ApplicationAcnts-RolesTemplate.yaml) from GitHub and use it to create the second CloudFormation stack set in the primary Region of the central security account.
  2. To deploy the template, follow the steps used to deploy the base template (described in the previous section) through Configure StackSet options.
  3. In Set deployment options, do the following:
    1. Under Account numbers, enter the account IDs of your application accounts as comma-separated values. You can use the steps in Finding your AWS account ID to help you gather the account IDs.
    2. Under Specify regions, select only your primary Region.

      Figure 9: Select account numbers and specify Regions

  4. The remaining steps are the same as for the base template deployment.

Step 5: Deploy the third template in all Regions of all application accounts

This template creates the resources in each Region of all application accounts needed for scheduled scanning of EC2 instances using Amazon Inspector. Notifications are sent to the SNS topics of each Region of the central security account.

To deploy the third template

  1. Download the template InspectorRun-SetupTemplate.yaml from GitHub and create the final AWS CloudFormation stack set. Similar to the previous stack sets, this one should also be created in the central security account.
  2. For deployment, follow the same steps you used to deploy the base template through Configure StackSet options.
  3. In Set deployment options:
    1. Under Account numbers, enter the same account IDs of your application accounts (comma-separated values) as you did for the second template deployment.
    2. Under Specify regions, select all the Regions where you installed the Amazon Inspector agent.

      Note: This list of Regions should be the same as the Regions where you deployed the base template.

  4. The remaining steps are the same as for the second template deployment.

Test the solution and delivery of the findings

After successful deployment of the architecture, to test the solution you can wait until the next scheduled Amazon Inspector scan or you can use the following steps to run the Amazon Inspector scan manually.

To run the Amazon Inspector scan manually for testing the solution

  1. In any one of the application accounts, go to any Region where the Amazon Inspector scan was performed.
  2. Open the Amazon Inspector console.
  3. In the left navigation menu, select Assessment templates to see the available assessments.
  4. Choose the assessment template that was created by the third template.
  5. Choose Run to start the assessment immediately.
  6. When the run is complete, Last run status changes from Collecting data to Analysis Complete.

    Figure 10: Amazon Inspector assessment run

  7. You can see the recent scan findings in the Amazon Inspector console by selecting Assessment runs from the left navigation menu.

    Figure 11: The assessment run indicates total findings from the last Amazon Inspector run in this Region

  8. In the left navigation menu, select Findings to see details of each finding, or use the steps in the following section to verify the delivery of findings to the central security account.
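
If you prefer the API to the console for starting this test run, the following sketch uses boto3's start_assessment_run. The template ARN is a placeholder that you can look up with list_assessment_templates in the Region you're testing.

import boto3

inspector = boto3.client("inspector", region_name="us-east-1")

template_arn = "arn:aws:inspector:us-east-1:111122223333:target/0-EXAMPLEa/template/0-EXAMPLEb"
run_arn = inspector.start_assessment_run(
    assessmentTemplateArn=template_arn,
    assessmentRunName="manual-test-run",
)["assessmentRunArn"]

print("Started assessment run:", run_arn)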

Test the delivery of the Amazon Inspector findings

This solution delivers the Amazon Inspector findings to two AWS services—Amazon ES and Amazon S3—in the primary Region of the central security account. You can either use Kibana to view the findings sent to Amazon ES or you can use the findings sent to Amazon S3 and forward them to the security monitoring software of your preference for further analysis.

To check whether the findings are delivered to Amazon ES

  1. Open the Amazon ES Management Console and select the Region where the Amazon ES domain is located.
  2. Select the domain name to view the domain details.
  3. On the domain details page, select the Kibana URL.

    Figure 12: Amazon ES domain details page

  4. Log in to Kibana using your preferred authentication method as set up in the prerequisites.
    1. In the left panel, select Discover.
    2. In the Discover window, select a Region to view the total number of findings in that Region.

      Figure 13: The total findings in Kibana for the chosen Region of an application account

To check whether the findings are delivered to Amazon S3

  1. Open the Amazon S3 Management Console.
  2. Select the S3 bucket that you created for storing Amazon Inspector findings.
  3. Select the bucket name to view the bucket details. The total number of findings for the chosen Region is at the top right corner of the Overview tab.

    Figure 14: The total security findings as stored in an S3 bucket for us-east-1 Region

Visualization in Kibana

The data sent to the Amazon ES index can be used to create visualizations in Kibana that make it easier to identify potential security gaps and plan the remediation accordingly.

You can use Kibana to create a dashboard that gives an overview of the potential vulnerabilities identified in different instances of different AWS accounts. Figure 15 shows an example of such a dashboard. The dashboard can help you rank the need for remediation based on criteria such as:

  • The category of vulnerability
  • The most impacted AWS accounts
  • EC2 instances that need immediate attention

Figure 15: A sample Kibana dashboard showing findings from Amazon Inspector

You can build additional panels to visualize details of the vulnerability findings identified by Amazon Inspector, such as the CVE ID of the security vulnerability, its description, and recommendations on how to remove the vulnerabilities.

Figure 16: A sample Kibana dashboard panel listing the top identified vulnerabilities and their details

Conclusion

By using this solution to combine Amazon Inspector, Amazon SNS topics, Amazon SQS queues, Lambda functions, an Amazon ES domain, and S3 buckets, you can centrally analyze and monitor the vulnerability posture of EC2 instances across your AWS environment, including multiple Regions across multiple AWS accounts. This solution is built following least privilege access through AWS IAM roles and policies to help secure the cross-account architecture.

In this blog post, you learned how to send the findings directly to Amazon ES for visualization in Kibana. These visualizations can be used to build dashboards that security analysts can use for centralized monitoring. Better monitoring capability helps analysts to identify potentially vulnerable assets and perform remediation activities to improve security of your applications in AWS and their underlying assets. This solution also demonstrates how to store the findings from Amazon Inspector in an S3 bucket, which makes it easier for you to use those findings to create visualizations in your preferred security monitoring software.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Moumita Saha

Moumita is a Security Consultant with AWS Professional Services working to help enterprise customers secure their workloads in the cloud. She assists customers in secure cloud migration, designing automated solutions to protect against cyber threats in the cloud. She is passionate about cyber security, data privacy, and new, emerging cloud-security technologies.

Get started with fine-grained access control in Amazon Elasticsearch Service

Post Syndicated from Jon Handler original https://aws.amazon.com/blogs/security/get-started-with-fine-grained-access-control-in-amazon-elasticsearch-service/

Amazon Elasticsearch Service (Amazon ES) provides fine-grained access control, powered by the Open Distro for Elasticsearch security plugin. The security plugin adds Kibana authentication and access control at the cluster, index, document, and field levels that can help you secure your data. You now have many different ways to configure your Amazon ES domain to provide access control. In this post, I offer basic configuration information to get you started.

Figure 1: A high-level view of data flow and security

Figure 1 details the authentication and access control provided in Amazon ES. The left half of the diagram details the different methods of authenticating. Looking horizontally, requests originate either from Kibana or directly access the REST API. When using Kibana, you can use a login screen powered by the Open Distro security plugin, your SAML identity provider, or Amazon Cognito. Each of these methods results in an authenticated identity: SAML providers via the response, Amazon Cognito via an AWS Identity and Access Management (IAM) identity, and Open Distro via an internal user identity. When you use the REST API, you can use AWS Signature V4 request signing (SigV4 signing), or user name and password authentication. You can also send unauthenticated traffic, but your domain should be configured to reject all such traffic.

The right side of the diagram details the access control points. You can consider the handling of access control in two phases to better understand it—authentication at the edge by IAM and authentication in the Amazon ES domain by the Open Distro security plugin.

First, requests from Kibana or direct API calls have to reach your domain endpoint. If you follow best practices and the domain is in an Amazon Virtual Private Cloud (VPC), you can use Amazon Elastic Compute Cloud (Amazon EC2) security groups to allow or deny traffic based on the originating IP address or security group of the Amazon EC2 instances. Best practice includes least privilege based on subnet ACLs and security group ingress and egress restrictions. In this post, we assume that your requests are legitimate, meet your access control criteria, and can reach your domain.

When a request reaches the domain endpoint—the edge of your domain—it can be anonymous or it can carry identity and authentication information as described previously. Each Amazon ES domain carries a resource-based IAM policy. With this policy, you can allow or deny traffic based on an IAM identity attached to the request. When your policy specifies an IAM principal, Amazon ES evaluates the request against the allowed Actions in the policy and allows or denies the request. If you don’t have an IAM identity attached to the request (SAML assertion, or user name and password), you should leave the domain policy open and pass traffic through to fine-grained access control in Amazon ES without any checks. You should employ IAM security best practices and add additional IAM restrictions for direct-to-API access control once your domain is set up.

The Open Distro for Elasticsearch security plugin has its own internal user database for user name and password authentication and handles access control for all users. When traffic reaches the Elasticsearch cluster, the plugin validates any user name and password authentication information against this internal database to identify the user and grant a set of permissions. If a request comes with identity information from either SAML or an IAM role, you map that backend role onto the roles or users that you have created in Open Distro security.

Amazon ES documentation and the Open Distro for Elasticsearch documentation give more information on all of these points. For this post, I walk through a basic console setup for a new domain.

Console set up

The Amazon ES console provides a guided wizard that lets you configure—and reconfigure—your Amazon ES domain. Step 1 offers you the opportunity to select some predefined configurations that carry through the wizard. In step 2, you choose the instances to deploy in your domain. In Step 3, you configure the security. This post focuses on step 3. See also these tutorials that explain using an IAM master user and using an HTTP-authenticated master user.

Note: At the time of writing, you cannot enable fine-grained access control on existing domains; you must create a new domain and enable the feature at domain creation time. You can use fine-grained access control with Elasticsearch versions 6.8 and later.

Set your endpoint

Amazon ES gives you a DNS name that resolves to an IP address that you use to send traffic to the Elasticsearch cluster in the domain. The IP address can be in the IP space of the public internet, or it can resolve to an IP address in your VPC. While—with fine-grained access control—you have the means of securing your cluster even when the endpoint is a public IP address, we recommend using VPC access as the more secure option, as shown in Figure 2.

Figure 2: Select VPC access

With the endpoint in your VPC, you use security groups to control which ports accept traffic and to limit access to the endpoints of your Amazon ES domain to IP addresses in your VPC. Make sure to use least privilege when setting up security group access.
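As a sketch of what least privilege can look like here, the following assumed example creates a security group for the domain that accepts only HTTPS (port 443) from the VPC's own CIDR range. The VPC ID and CIDR block are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical VPC ID and CIDR block; replace with your own.
vpc_id = "vpc-0123456789abcdef0"
vpc_cidr = "10.0.0.0/16"

sg = ec2.create_security_group(
    GroupName="es-domain-access",
    Description="HTTPS access to the Amazon ES domain endpoint",
    VpcId=vpc_id,
)

# Allow HTTPS only from addresses inside the VPC; no other ingress is opened.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": vpc_cidr, "Description": "VPC-internal HTTPS"}],
        }
    ],
)
```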

Enable fine-grained access control

Enable fine-grained access control, as shown in Figure 3.

Figure 3: Enabled fine-grained access control

Set up the master user

The master user is the administrator identity for your Amazon ES domain. This user can set up additional users in the Amazon ES security plugin, assign roles to them, and assign permissions for those roles. You can choose user name and password authentication for the master user, or use an IAM identity. User name and password authentication, shown in Figure 4, is simpler to set up and—with a strong password—may provide sufficient security depending on your use case. We recommend you follow your organization’s policy for password length and complexity. If you lose this password, you can return to the domain’s dashboard in the AWS Management Console and reset it. You’ll use these credentials to log in to Kibana. Following best practices on choosing your master user, you should move to an IAM master user once setup is complete.

Note: Password strength is a function of length, complexity of characters (e.g., upper and lower case letters, numbers, and special characters), and unpredictability to decrease the likelihood the password could be guessed or cracked over a period of time.

Figure 4: Setting up the master username and password
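If you prefer to script the same configuration instead of using the wizard, the following is a minimal sketch with the AWS SDK for Python (Boto3). The domain name, version, instance sizing, subnet, security group, and master user credentials are hypothetical placeholders; your choices will differ.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Hypothetical names, IDs, and credentials; replace with your own values.
es.create_elasticsearch_domain(
    DomainName="my-domain",
    ElasticsearchVersion="7.9",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 3,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
    # VPC access, as recommended above.
    VPCOptions={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    # Fine-grained access control with an internal master user.
    AdvancedSecurityOptions={
        "Enabled": True,
        "InternalUserDatabaseEnabled": True,
        "MasterUserOptions": {
            "MasterUserName": "master-user",
            "MasterUserPassword": "Use-a-Strong-Passw0rd!",
        },
    },
    # Encryption settings that fine-grained access control requires.
    NodeToNodeEncryptionOptions={"Enabled": True},
    EncryptionAtRestOptions={"Enabled": True},
    DomainEndpointOptions={"EnforceHTTPS": True},
)
```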

Do not enable Amazon Cognito authentication

When you use Kibana, Amazon ES includes a login experience. You currently have three choices for the source of the login screen:

  1. The Open Distro security plugin
  2. Amazon Cognito
  3. Your SAML-compliant system

You can apply fine-grained access control regardless of how you log in. However, setting up fine-grained access control for the master user and additional users is most straightforward if you use the login experience provided by the Open Distro security plugin. After your first login, and when you have set up additional users, you should migrate to either Cognito or SAML for login, taking advantage of the additional security they offer. To use the Open Distro login experience, disable Amazon Cognito authentication, as shown in Figure 5.

Figure 5: Amazon Cognito authentication is not enabled

If you plan to integrate with your SAML identity provider, check the Prepare SAML authentication box. You will complete the setup when the domain is active.

Figure 6: Choose Prepare SAML authentication if you plan to use it
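If you checked Prepare SAML authentication, one way to complete the configuration after the domain becomes active is through the configuration API, sketched below under the assumption that your IdP's metadata XML and entity ID are available. The metadata file, entity ID, backend role, and attribute key are placeholders from your own IdP, and the exact fields you need depend on your provider.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Hypothetical IdP metadata file and entity ID; replace with values from your IdP.
with open("idp-metadata.xml") as f:
    idp_metadata = f.read()

es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    AdvancedSecurityOptions={
        "SAMLOptions": {
            "Enabled": True,
            "Idp": {
                "MetadataContent": idp_metadata,
                "EntityId": "http://idp.example.com/adfs/services/trust",
            },
            # Backend role from the IdP that should receive admin rights.
            "MasterBackendRole": "es-admins",
            # SAML assertion attribute that carries the user's roles.
            "RolesKey": "Role",
            "SessionTimeoutMinutes": 60,
        }
    },
)
```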

Use an open access policy

When you create your domain, you attach an IAM policy to it that controls whether requests must be signed with AWS Signature Version 4 (SigV4) for authentication. Policies that specify an IAM principal require you to use SigV4 signing on those requests. The domain sends your traffic to IAM, which authenticates signed requests to resolve the user or role that sent them. The domain and IAM apply the policy's access controls and either accept or reject the traffic based on the actions requested. For single-index API calls, this is enforced down to the index level.

When you use fine-grained access control, your traffic is also authenticated by the Amazon ES security plugin, which makes the IAM authentication redundant. Create an open access policy, as shown in Figure 7, which doesn't specify a principal and so doesn't require request signing. An open policy may be acceptable because you can still require an authenticated identity on all traffic: the security plugin authenticates the traffic as described above and applies access control based on its internal database.

Figure 7: Selected open access policy
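For reference, here is a minimal sketch of the kind of open access policy that console choice produces; the Region, account ID, and domain name are hypothetical placeholders. Because no principal is restricted, fine-grained access control is the only gate on the traffic.

```python
import json

# Hypothetical Region, account ID, and domain name; replace with your own.
open_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": "arn:aws:es:us-east-1:111122223333:domain/my-domain/*",
        }
    ],
}

# The console attaches an equivalent JSON document to the domain.
print(json.dumps(open_access_policy, indent=2))
```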

Encrypted data

Amazon ES provides an option to encrypt data in transit and at rest for any domain. When you enable fine-grained access control, encryption is required; the corresponding checkboxes are selected automatically and can't be changed. These settings include Transport Layer Security (TLS) for requests to the domain, encryption of traffic between nodes in the domain, and encryption of data at rest through AWS Key Management Service (AWS KMS), as shown in Figure 8.

Figure 8: Enabled encryption
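As a quick check once the domain is active, the sketch below reads the domain configuration back and prints the encryption-related settings, which should all report as enabled. The domain name is a hypothetical placeholder.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Hypothetical domain name; replace with your own.
status = es.describe_elasticsearch_domain(DomainName="my-domain")["DomainStatus"]

# With fine-grained access control, all three should be enabled.
print(status["NodeToNodeEncryptionOptions"])            # encryption between nodes
print(status["EncryptionAtRestOptions"])                # at-rest encryption (AWS KMS)
print(status["DomainEndpointOptions"]["EnforceHTTPS"])  # TLS for requests to the domain
```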

Accessing Kibana

When you complete the domain creation wizard, it takes about 10 minutes for your domain to activate. Return to the console and the Overview tab of your Amazon ES dashboard. When the Domain Status is Active, select the Kibana URL. Since you created your domain in your VPC, you must be able to access the Kibana endpoint via proxy, VPN, SSH tunnel, or similar. Use the master user name and password that you configured earlier to log in to Kibana, as shown in Figure 9. As detailed above, you should only ever log in as the master user to set up additional users—administrators, users with read-only access, and others.

Figure 9: Kibana login page

Conclusion

Congratulations, you now know the basic steps to set up the minimum configuration to access your Amazon ES domain with a master user. You can examine the settings for fine-grained access control in the Kibana console Security tab. Here, you can add additional users, assign permissions, map IAM users to security roles, and set up your Kibana tenancy. We’ll cover those topics in future posts.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jon Handler

Jon is a Principal Solutions Architect at AWS. He works closely with the CloudSearch and Elasticsearch teams, providing help and guidance to a broad range of customers who have search workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon’s career as a software developer included four years of coding a large-scale, eCommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a Ph. D. in Computer Science and Artificial Intelligence from Northwestern University.

Author

Sajeev Attiyil Bhaskaran

Sajeev is a Senior Cloud Engineer focused on big data and analytics. He works with AWS customers to provide architectural and engineering assistance and guidance. He dives deep into big data technologies and streaming solutions. He also does onsite and online sessions for customers to design best solutions for their use cases. In his free time, he enjoys spending time with his wife and daughter.