Tag Archives: SDN

How AWS Managed Microsoft AD Helps to Simplify the Deployment and Improve the Security of Active Directory–Integrated .NET Applications

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-aws-managed-microsoft-ad-helps-to-simplify-the-deployment-and-improve-the-security-of-active-directory-integrated-net-applications/

Companies using .NET applications to access sensitive user information, such as employee salary, Social Security Number, and credit card information, need an easy and secure way to manage access for users and applications.

For example, let’s say that your company has a .NET payroll application. You want your Human Resources (HR) team to manage and update the payroll data for all the employees in your company. You also want your employees to be able to see their own payroll information in the application. To meet these requirements in a user-friendly and secure way, you want to manage access to the .NET application by using your existing Microsoft Active Directory identities. This enables you to provide users with single sign-on (SSO) access to the .NET application and to manage permissions using Active Directory groups. You also want the .NET application to authenticate itself to access the database, and to limit access to the data in the database based on the identity of the application user.

Microsoft Active Directory supports these requirements through group Managed Service Accounts (gMSAs) and Kerberos constrained delegation (KCD). AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, enables you to manage gMSAs and KCD through your administrative account, helping you to migrate and develop .NET applications that need these native Active Directory features.

In this blog post, I give an overview of how to use AWS Managed Microsoft AD to manage gMSAs and KCD and demonstrate how you can configure a gMSA and KCD in six steps for a .NET application:

  1. Create your AWS Managed Microsoft AD.
  2. Create your Amazon RDS for SQL Server database.
  3. Create a gMSA for your .NET application.
  4. Deploy your .NET application.
  5. Configure your .NET application to use the gMSA.
  6. Configure KCD for your .NET application.

Solution overview

The following diagram shows the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD. The diagram also illustrates authentication and access and is numbered to show the six key steps required to use a gMSA and KCD. To deploy this solution, the AWS Managed Microsoft AD directory must be in the same Amazon Virtual Private Cloud (VPC) as RDS for SQL Server. For this example, my company name is Example Corp., and my directory uses the domain name, example.com.

Diagram showing the components of a .NET application that uses Amazon RDS for SQL Server with a gMSA and KCD

Deploy the solution

The following six steps (numbered to correlate with the preceding diagram) walk you through configuring and using a gMSA and KCD.

1. Create your AWS Managed Microsoft AD directory

Using the Directory Service console, create your AWS Managed Microsoft AD directory in your Amazon VPC. In my example, my domain name is example.com.

Image of creating an AWS Managed Microsoft AD directory in an Amazon VPC
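
If you prefer to script this step instead of using the console, the directory can also be created with the AWS CLI. The following is a minimal sketch, not the exact console workflow; the password, VPC ID, and subnet IDs are placeholders that you must replace with your own values:

# Create an AWS Managed Microsoft AD directory for the example.com domain
aws ds create-microsoft-ad \
  --name example.com \
  --short-name EXAMPLE \
  --password 'Your-Strong-Password-Here' \
  --description "Example Corp. managed directory" \
  --vpc-settings VpcId=vpc-0123456789abcdef0,SubnetIds=subnet-11111111,subnet-22222222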

2. Create your Amazon RDS for SQL Server database

Using the RDS console, create your Amazon RDS for SQL Server database instance in the same Amazon VPC where your directory is running, and enable Windows Authentication. To enable Windows Authentication, select your directory in the Microsoft SQL Server Windows Authentication section in the Configure Advanced Settings step of the database creation workflow (see the following screenshot).

In my example, I create my Amazon RDS for SQL Server db-example database, and enable Windows Authentication to allow my db-example database to authenticate against my example.com directory.

Screenshot of configuring advanced settings
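
A roughly equivalent database can also be created with the AWS CLI. This is a hedged sketch rather than the exact console flow: the directory ID, IAM role name, instance class, and credentials are placeholders, and the --domain and --domain-iam-role-name parameters are what enable Windows Authentication against the managed directory:

# Create a SQL Server instance in the same VPC and join it to the managed directory
aws rds create-db-instance \
  --db-instance-identifier db-example \
  --engine sqlserver-se \
  --license-model license-included \
  --db-instance-class db.m4.large \
  --allocated-storage 200 \
  --master-username masteruser \
  --master-user-password 'Your-Strong-Password-Here' \
  --domain d-1234567890 \
  --domain-iam-role-name rds-directory-service-access-role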

3. Create a gMSA for your .NET application

Now that you have deployed your directory and database, you can create a gMSA for your .NET application.

To perform the next steps, you must install the Active Directory administration tools on a Windows server that is joined to your AWS Managed Microsoft AD directory domain. If you do not have a Windows server joined to your directory domain, you can deploy a new Amazon EC2 for Microsoft Windows Server instance and join it to your directory domain.

To create a gMSA for your .NET application:

  1. Log on to the instance on which you installed the Active Directory administration tools by using a user that is a member of the Admins security group or the Managed Service Accounts Admins security group in your organizational unit (OU). For my example, I use the Admin user in the example OU.

Screenshot of logging on to the instance on which you installed the Active Directory administration tools

  2. Identify which .NET application servers (hosts) will run your .NET application. Create a new security group in your OU and add your .NET application servers as members of this new group. This allows a group of application servers to use a single gMSA, instead of creating one gMSA for each server. In my example, I create a group, App_server_grp, in my example OU. I also add Appserver1, which is my .NET application server computer name, as a member of this new group.

Screenshot of creating a new security group

  3. Create a gMSA in your directory by running Windows PowerShell from the Start menu. The basic syntax to create the gMSA at the Windows PowerShell command prompt follows.
    PS C:\Users\admin> New-ADServiceAccount -name [gMSAname] -DNSHostName [domainname] -PrincipalsAllowedToRetrieveManagedPassword [AppServersSecurityGroup] -TrustedForDelegation $true <Enter>

    In my example, the gMSAname is gMSAexample, the DNSHostName is example.com, and the PrincipalsAllowedToRetrieveManagedPassword is the recently created security group, App_server_grp.

    PS C:\Users\admin> New-ADServiceAccount -name gMSAexample -DNSHostName example.com -PrincipalsAllowedToRetrieveManagedPassword App_server_grp -TrustedForDelegation $true <Enter>

    To confirm you created the gMSA, you can run the Get-ADServiceAccount command from the PowerShell command prompt.

    PS C:\Users\admin> Get-ADServiceAccount gMSAexample <Enter>
    
    DistinguishedName : CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com
    Enabled           : True
    Name              : gMSAexample
    ObjectClass       : msDS-GroupManagedServiceAccount
    ObjectGUID        : 24d8b68d-36d5-4dc3-b0a9-edbbb5dc8a5b
    SamAccountName    : gMSAexample$
    SID               : S-1-5-21-2100421304-991410377-951759617-1603
    UserPrincipalName :

    You also can confirm that you created the gMSA by opening the Active Directory Users and Computers utility located in your Administrative Tools folder, expanding the domain (example.com in my case), and expanding the Managed Service Accounts folder.
    Screenshot of confirming the creation of the gMSA
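
Before moving on, you can also verify from one of the application servers (Appserver1 in my example) that the host is able to retrieve the gMSA’s managed password. This is a quick sketch that assumes the Active Directory PowerShell module (part of RSAT) is installed on that server and that the server was restarted after being added to App_server_grp:

# Run on the .NET application server (a member of App_server_grp)
PS C:\> Install-ADServiceAccount -Identity gMSAexample
PS C:\> Test-ADServiceAccount -Identity gMSAexample
True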

4. Deploy your .NET application

Deploy your .NET application on IIS on Amazon EC2 for Windows Server instances. For this step, I assume you are the application’s expert and already know how to deploy it. Make sure that all of your instances are joined to your directory.

5. Configure your .NET application to use the gMSA

You can configure your .NET application to use the gMSA to enforce a strong password policy and ensure automatic rotation of your service account password. This helps to improve the security of your .NET application and simplify its management. Configure your .NET application in two steps:

  1. Grant the gMSA the required permissions to run your .NET application in the respective application folders. This is a critical step because when you change the application pool identity account to use the gMSA, downtime can occur if the gMSA does not have the application’s required permissions. Therefore, make sure you first test the configurations in your development and test environments.
  2. Configure your application pool identity on IIS to use the gMSA as the service account. When you configure a gMSA as the service account, you include the $ at the end of the gMSA name. You do not need to provide a password because AWS Managed Microsoft AD automatically creates and rotates the password. In my example, my service account is gMSAexample$, as shown in the following screenshot.

Screenshot of configuring application pool identity
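
If you manage IIS from the command line instead of IIS Manager, the same application pool change can be scripted with appcmd. The following sketch uses a hypothetical application pool named PayrollAppPool; the trailing $ on the account name and the empty password are intentional because the directory manages the gMSA’s password:

REM Run from an elevated Command Prompt on the application server
C:\Windows\System32\inetsrv\appcmd.exe set apppool "PayrollAppPool" /processModel.identityType:SpecificUser /processModel.userName:"example\gMSAexample$" /processModel.password:""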

You have completed all the steps to use gMSA to create and rotate your .NET application service account password! Now, you will configure KCD for your .NET application.

6. Configure KCD for your .NET application

You now are ready to allow your .NET application to have access to other services by using the user identity’s permissions instead of the application service account’s permissions. Note that KCD and gMSA are independent features, which means you do not have to create a gMSA to use KCD. For this example, I am using both features to show how you can use them together. To configure a regular service account such as a user or local built-in account, see the Kerberos constrained delegation with ASP.NET blog post on MSDN.

In my example, my goal is to delegate to the gMSAexample account the ability to enforce the user’s permissions to my db-example SQL Server database, instead of the gMSAexample account’s permissions. For this, I have to update the msDS-AllowedToDelegateTo gMSA attribute. The value for this attribute is the service principal name (SPN) of the service instance that you are targeting, which in this case is the db-example Amazon RDS for SQL Server database.

The SPN format for the msDS-AllowedToDelegateTo attribute is a combination of the service class, the Kerberos authentication endpoint, and the port number. The Amazon RDS for SQL Server Kerberos authentication endpoint format is [database_name].[domain_name]. The value for my msDS-AllowedToDelegateTo attribute is MSSQLSvc/db-example.example.com:1433, where MSSQLSvc and 1433 are the SQL Server Database service class and port number standards, respectively.

Follow these steps to perform the msDS-AllowedToDelegateTo gMSA attribute configuration:

  1. Log on to your Active Directory management instance with a user identity that is a member of the Kerberos Delegation Admins security group. In this case, I will use admin.
  2. Open the Active Directory Users and Computers utility located in your Administrative Tools folder, choose View, and then choose Advanced Features.
  3. Expand your domain name (example.com in this example), and then choose the Managed Service Accounts folder. Right-click the gMSA account for the application pool you want to enable for Kerberos delegation, choose Properties, and choose the Attribute Editor tab.
  4. Search for the msDS-AllowedToDelegateTo attribute on the Attribute Editor tab and choose Edit.
  5. Enter the MSSQLSvc/db-example.example.com:1433 value and choose Add.
    Screenshot of entering the value of the multi-valued string
  6. Choose OK and Apply, and your KCD configuration is complete.
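
If you prefer to script the delegation setting instead of using the Attribute Editor, the same attribute can be written with PowerShell. This is a sketch under the same assumptions as the GUI steps (the gMSAexample account, the example.com domain, and the db-example SPN); run it from the management instance with the same delegated permissions:

PS C:\Users\admin> Set-ADObject -Identity 'CN=gMSAexample,CN=Managed Service Accounts,DC=example,DC=com' -Add @{'msDS-AllowedToDelegateTo'='MSSQLSvc/db-example.example.com:1433'}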

Congratulations! At this point, your application is using a gMSA rather than an embedded static user identity and password, and the application is able to access SQL Server using the identity of the application user. The gMSA eliminates the need for you to rotate the application’s password manually, and it allows you to better scope permissions for the application. When you use KCD, you can enforce access to your database consistently based on user identities at the database level, which prevents improper access that might otherwise occur because of an application error.

Summary

In this blog post, I demonstrated how to simplify the deployment and improve the security of your .NET application by using a group Managed Service Account and Kerberos constrained delegation with your AWS Managed Microsoft AD directory. I also outlined the main steps to get your .NET environment up and running on a managed Active Directory and SQL Server infrastructure. This approach will make it easier for you to build new .NET applications in the AWS Cloud or migrate existing ones in a more secure way.

For additional information about using group Managed Service Accounts and Kerberos constrained delegation with your AWS Managed Microsoft AD directory, see the AWS Directory Service documentation.

To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions about this post or its solution, start a new thread on the Directory Service forum.

– Peter

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches and fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as AWS access and secret key, number of master and worker nodes, and instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post shows using the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster of up to 10 nodes can be created by creating a free account at Get Tectonic for Kubernetes. After signup, a CoreOS License file and a Pull Secret file are provided on your CoreOS account page. Download these files, as the installer needs them to boot the cluster.

IAM user permission

The IAM user to create the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server. This allows kubectl to be able to talk directly with the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility such as the dig web interface. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.
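
From a terminal, a quick spot check of the delegation might look like the following; substitute your own subdomain:

# Confirm that the subdomain's NS records resolve from public DNS
dig +short NS tectonic.kubernetes-aws.io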

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.

Specify the Access Key ID and Secret Access Key for the IAM role that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place updates to the Kubernetes cluster, select the checkbox next to Automated Updates. This also enables updates to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have an option to generate a new key. This allows you to later connect using SSH into the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a six-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use alpha support for cluster provisioning with the etcd operator. The etcd operator handles automated operations of the etcd nodes for the cluster itself, as well as for etcd instances that are created for application usage. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. Alternatively, you can download the Terraform assets by choosing Manually boot. This allows you to boot the cluster later.

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the console shows that the Terraform scripts were successfully applied, the domain name was resolved successfully, and that the console has started. The domain name resolves successfully if the DNS configuration from earlier is correct, and it is the address at which the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally deploys an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console versions are shown under Software Details. Cluster health shows All systems go, which means that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in all namespaces are shown. Other namespaces may be chosen from a menu at the top of the screen. Different administration tasks, such as managing namespaces, listing nodes, and configuring RBAC, can be performed here as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, Kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.
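
A minimal sketch of that file handling follows; the name of the downloaded file is a placeholder and depends on your browser and cluster name:

# Back up any existing kubectl configuration, then install the downloaded one
mkdir -p ~/.kube
[ -f ~/.kube/config ] && cp ~/.kube/config ~/.kube/config.backup
mv ~/Downloads/tectonic-kubeconfig ~/.kube/config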

Now you can run the commands to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.

Upgrade the Kubernetes cluster

Tectonic allows the in-place upgrade of the cluster. This is an experimental feature as of this release. The clusters can be updated either automatically, or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release and should only be used on clusters that can be easily replaced. It may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is by using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory like <cluster-name>_TIMESTAMP. In that directory, run the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

If you forgot to download the assets, a copy is kept in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory with the name <cluster-name>_TIMESTAMP contains your assets.

Conclusion

This post explained how to manage Kubernetes clusters using the CoreOS Tectonic graphical installer.  For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

Manage Kubernetes Clusters on AWS Using Kops

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-kops/

Any containerized application typically consists of multiple containers. There is a container for the application itself, one for the database, possibly another for the web server, and so on. During development, it’s normal to build and test this multi-container application on a single host. This approach works fine during early dev and test cycles, but it becomes a single point of failure for production, where the availability of the application is critical. In such cases, the multi-container application is deployed on multiple hosts, and an external tool is needed to manage such a multi-container, multi-host deployment. Container orchestration frameworks provide cluster management, scheduling of containers on different hosts, service discovery and load balancing, crash recovery, and other related functionality. There are multiple options for container orchestration on Amazon Web Services: Amazon ECS, Docker for AWS, and DC/OS.

Another popular option for container orchestration on AWS is Kubernetes. There are multiple ways to run a Kubernetes cluster on AWS. This multi-part blog series provides a brief overview and explains some of these approaches in detail. This first post explains how to create a Kubernetes cluster on AWS using kops.

Kubernetes and Kops overview

Kubernetes is an open source, container orchestration platform. Applications packaged as Docker images can be easily deployed, scaled, and managed in a Kubernetes cluster. Some of the key features of Kubernetes are:

  • Self-healing
    Failed containers are restarted to ensure that the desired state of the application is maintained. If a node in the cluster dies, then the containers are rescheduled on a different node. Containers that do not respond to application-defined health checks are terminated, and thus rescheduled.
  • Horizontal scaling
    The number of containers can be scaled up and down automatically based on CPU utilization, or manually using a command.
  • Service discovery and load balancing
    Multiple containers can be grouped together and made discoverable using a DNS name. The service can be load balanced through integration with the native load balancer provided by the cloud provider.
  • Application upgrades and rollbacks
    Applications can be upgraded to a newer version without an impact to the existing one. If something goes wrong, Kubernetes rolls back the change.

Kops, short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. A rolling upgrade of an older version of Kubernetes to a new version can also be performed. It also manages the cluster add-ons. After the cluster is created, the usual kubectl CLI can be used to manage resources in the cluster.

Download Kops and Kubectl

There is no need to download the Kubernetes binary distribution for creating a cluster using kops. However, you do need to download the kops CLI. It then takes care of downloading the right Kubernetes binary in the cloud, and provisions the cluster.

The different download options for kops are explained at github.com/kubernetes/kops#installing. On MacOS, the easiest way to install kops is using the brew package manager.

brew update && brew install kops

The version of kops can be verified using the kops version command, which shows:

Version 1.6.1

In addition, download kubectl. This is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

Make sure to include the directory where kubectl is downloaded in your PATH.
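
A short sketch of making the downloaded binary usable follows, assuming you want it in /usr/local/bin; adjust the destination to match your own PATH:

# Make the binary executable and move it onto the PATH
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl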

IAM user permission

The IAM user to create the Kubernetes cluster must have the following permissions:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

Alternatively, a new IAM user may be created and the policies attached as explained at github.com/kubernetes/kops/blob/master/docs/aws.md#setup-iam-user.

Create an Amazon S3 bucket for the Kubernetes state store

Kops needs a “state store” to hold configuration information about the cluster, such as the number of nodes, the instance type of each node, and the Kubernetes version. The state is stored during the initial cluster creation, and any subsequent changes to the cluster are persisted to this store as well. As of publication, Amazon S3 is the only supported storage mechanism. Create an S3 bucket and pass it to the kops CLI during cluster creation.

This post uses the bucket name kubernetes-aws-io. Bucket names must be unique; you have to use a different name. Create an S3 bucket:

aws s3api create-bucket --bucket kubernetes-aws-io

I strongly recommend versioning this bucket in case you ever need to revert or recover a previous version of the cluster. This can be enabled using the AWS CLI as well:

aws s3api put-bucket-versioning --bucket kubernetes-aws-io --versioning-configuration Status=Enabled

For convenience, you can also define the KOPS_STATE_STORE environment variable pointing to the S3 bucket. For example:

export KOPS_STATE_STORE=s3://kubernetes-aws-io

This environment variable is then used by the kops CLI.

DNS configuration

As of Kops 1.6.1, a top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. This is also needed for kubectl to be able to talk directly with the master.

This domain may be registered with AWS, in which case a Route 53 hosted zone is created for you. Alternatively, this domain may be at a different registrar. In this case, create a Route 53 hosted zone. Specify the name server (NS) records from the created zone as NS records with the domain registrar.

This post uses a kubernetes-aws.io domain registered at a third-party registrar.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name cluster.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

This shows an output such as the following:

[
"ns-94.awsdns-11.com",
"ns-1962.awsdns-53.co.uk",
"ns-838.awsdns-40.net",
"ns-1107.awsdns-10.org"
]

Create NS records for the domain with your registrar. Different options on how to configure DNS for the cluster are explained at github.com/kubernetes/kops/blob/master/docs/aws.md#configure-dns.

Experimental support to create a gossip-based cluster was added in Kops 1.6.2. This post uses a DNS-based approach, as that is more mature and well tested.

Create the Kubernetes cluster

The Kops CLI can be used to create a highly available cluster, with multiple master nodes spread across multiple Availability Zones. Workers can be spread across multiple zones as well. Some of the tasks that happen behind the scenes during cluster creation are:

  • Provisioning EC2 instances
  • Setting up AWS resources such as networks, Auto Scaling groups, IAM users, and security groups
  • Installing Kubernetes.

Start the Kubernetes cluster using the following command:

kops create cluster \
--name cluster.kubernetes-aws.io \
--zones us-west-2a \
--state s3://kubernetes-aws-io \
--yes

In this command:

  • --zones
    Defines the zones in which the cluster is going to be created. Multiple comma-separated zones can be specified to span the cluster across multiple zones.
  • --name
    Defines the cluster’s name.
  • --state
    Points to the S3 bucket that is the state store.
  • --yes
    Immediately creates the cluster. Otherwise, only the cluster specification is stored in the state store, and the cluster needs to be created explicitly using the command kops update cluster --yes. If the cluster needs to be edited, then the kops edit cluster command can be used.

This starts a single master and two worker node Kubernetes cluster. The master is in an Auto Scaling group and the worker nodes are in a separate group. By default, the master node is m3.medium and the worker node is t2.medium. Master and worker nodes are assigned separate IAM roles as well.

Wait for a few minutes for the cluster to be created. The cluster can be verified using the command kops validate cluster --state=s3://kubernetes-aws-io. It shows the following output:

Using cluster from kubectl context: cluster.kubernetes-aws.io

Validating cluster cluster.kubernetes-aws.io

INSTANCE GROUPS
NAME                 ROLE      MACHINETYPE    MIN    MAX    SUBNETS
master-us-west-2a    Master    m3.medium      1      1      us-west-2a
nodes                Node      t2.medium      2      2      us-west-2a

NODE STATUS
NAME                                           ROLE      READY
ip-172-20-38-133.us-west-2.compute.internal    node      True
ip-172-20-38-177.us-west-2.compute.internal    master    True
ip-172-20-46-33.us-west-2.compute.internal     node      True

Your cluster cluster.kubernetes-aws.io is ready

It shows the different instances started for the cluster, and their roles. If multiple cluster states are stored in the same bucket, then --name <NAME> can be used to specify the exact cluster name.

Check all nodes in the cluster using the command kubectl get nodes:

NAME                                          STATUS         AGE       VERSION
ip-172-20-38-133.us-west-2.compute.internal   Ready,node     14m       v1.6.2
ip-172-20-38-177.us-west-2.compute.internal   Ready,master   15m       v1.6.2
ip-172-20-46-33.us-west-2.compute.internal    Ready,node     14m       v1.6.2

Again, the internal DNS name of each node, its role (master or node), and its age are shown. The key information here is the Kubernetes version for each node in the cluster, 1.6.2 in this case.

The kubectl binary added to your PATH earlier is configured to manage this cluster. Resources such as pods, replica sets, and services can now be created in the usual way.
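
For example, a quick smoke test of the new cluster might look like the following; the nginx image and the resource names are only illustrative:

# Create a two-replica deployment and expose it through an Elastic Load Balancer
kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc nginx   # the EXTERNAL-IP column shows the ELB address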

Some of the common options that can be used to override the default cluster creation are:

  • --kubernetes-version
    The version of Kubernetes cluster. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.
  • --master-size and --node-size
    Define the instance of master and worker nodes.
  • --master-count and --node-count
    Define the number of master and worker nodes. By default, a master is created in each zone specified by --master-zones. Multiple master nodes can be created by specifying a higher number with --master-count or by specifying multiple Availability Zones in --master-zones.

A three-master and five-worker node cluster, with master nodes spread across different Availability Zones, can be created using the following command:

kops create cluster \
--name cluster2.kubernetes-aws.io \
--zones us-west-2a,us-west-2b,us-west-2c \
--node-count 5 \
--state s3://kubernetes-aws-io \
--yes

Both clusters share the same state store but have different names. This also requires you to create an additional Amazon Route 53 hosted zone for the name.

By default, the resources required for the cluster are directly created in the cloud. The --target option can be used to generate the AWS CloudFormation scripts instead. These scripts can then be used by the AWS CLI to create resources at your convenience.

Get a complete list of options for cluster creation with kops create cluster --help.

More details about the cluster can be seen using the command kubectl cluster-info:

Kubernetes master is running at https://api.cluster.kubernetes-aws.io
KubeDNS is running at https://api.cluster.kubernetes-aws.io/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the client and server version using the command kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Both the client and the server versions are 1.6, as shown by the Major and Minor attribute values.

Upgrade the Kubernetes cluster

Kops can be used to create a Kubernetes 1.4.x, 1.5.x, or an older version of the 1.6.x cluster using the --kubernetes-version option. The exact versions supported are defined at github.com/kubernetes/kops/blob/master/channels/stable.

Or, you may have used kops to create a cluster a while ago, and now want to upgrade to the latest recommended version of Kubernetes. Kops supports rolling cluster upgrades where the master and worker nodes are upgraded one by one.

As of kops 1.6.1, upgrading a cluster is a three-step process.

First, check and apply the latest recommended Kubernetes update.

kops upgrade cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

The --yes option immediately applies the changes. Not specifying the --yes option shows only the changes that are applied.

Second, update the state store to match the cluster state. This can be done using the following command:

kops update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Lastly, perform a rolling update for all cluster nodes using the kops rolling-update command:

kops rolling-update cluster \
--name cluster2.kubernetes-aws.io \
--state s3://kubernetes-aws-io \
--yes

Previewing the changes before updating the cluster can be done using the same command but without specifying the --yes option. This shows the following output:

NAME                 STATUS        NEEDUPDATE    READY    MIN    MAX    NODES
master-us-west-2a    NeedsUpdate   1             0        1      1      1
nodes                NeedsUpdate   2             0        2      2      2

Using --yes updates all nodes in the cluster, first the master nodes and then the worker nodes. There is a 5-minute delay between restarting master nodes, and a 2-minute delay between restarting worker nodes. These values can be altered using the --master-interval and --node-interval options, respectively.

Only the worker nodes may be updated by using the --instance-group node option.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster using the kops command. This ensures that all resources created by the cluster are appropriately cleaned up.

The command to delete the Kubernetes cluster is:

kops delete cluster --state=s3://kubernetes-aws-io --yes

If multiple clusters have been created, then specify the cluster name as in the following command:

kops delete cluster cluster2.kubernetes-aws.io --state=s3://kubernetes-aws-io --yes

Conclusion

This post explained how to manage a Kubernetes cluster on AWS using kops. Kubernetes on AWS users provides a self-published list of companies using Kubernetes on AWS.

Try starting a cluster, create a few Kubernetes resources, and then tear it down. Kops on AWS provides a more comprehensive tutorial for setting up Kubernetes clusters. Kops docs are also helpful for understanding the details.

In addition, the Kops team hosts office hours to help you get started, including guiding you through your first pull request. You can always join the #kops channel on the Kubernetes Slack to ask questions. If nothing works, then file an issue at github.com/kubernetes/kops/issues.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

— Arun

EtherApe – Graphical Network Monitor

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/DxSK15EgI5k/

EtherApe is a graphical network monitor for Unix modelled after etherman. Featuring link layer, IP and TCP modes, it displays network activity graphically. Hosts and links change in size with traffic. Colour coded protocols display. It supports Ethernet, FDDI, Token Ring, ISDN, PPP, SLIP and WLAN devices, plus several encapsulation formats. It can…

Read the full post at darknet.org.uk

Denuvo Accused of Using Unlicensed Software to Protect its Anti-Piracy Tool

Post Syndicated from Andy original https://torrentfreak.com/denuvo-accused-of-using-unlicensed-software-to-protect-it-anti-piracy-tool-170605/

Just recently, anti-piracy outfit Denuvo has been hitting the headlines every few weeks, but for reasons the Austrian company would rather forget.

As a result of providing the leading anti-piracy solution for games, the company is now well and truly in the spotlight of pirates, each desperate to defeat Denuvo protection on new games as quickly as possible. Now, however, the company has a rather different headache to contend with.

According to a post on Russian forum RSDN, Denuvo is accused of engaging in a little piracy of its own. The information comes from a user called drVano, who is a developer at VMProtect Software, a company whose tools protect against reverse engineering and cracking.

“I want to tell you a story about one very clever and greedy Austrian company called Denuvo Software Solutions GmbH,” drVano begins.

“A while ago, this company released a protection system of the same name but the most remarkable thing is that they absolutely illegally used our VMProtect software in doing so.”

drVano says that around three years ago, VMProtect Software and Denuvo entered into correspondence about the possibility of Denuvo using VMProtect in their system. VMProtect says they were absolutely clear that this would not be possible under a standard $500 license, since the cost to Denuvo of producing something similar for themselves would be several hundred thousand dollars.

However, with no proper deal set up, drVano says that Denuvo went ahead anyway, purchasing a cheap license for VMProtect and going on to “mow loot” (a Russian term for making bank) with their successful Denuvo software.

“Everything went well for Denuvo until we notified them that their VMProtect license had been canceled due to a breach of its licensing conditions. Options were offered for solving the problem, including paying modest compensation to us. Our proposal was ignored,” drVano says.

Interestingly, drVano says that VMProtect then took what appears to be a rather unorthodox measure against Denuvo. After cooperation with Sophos, the anti-virus vendor agreed to flag up the offending versions of Denuvo as potential malware. VMProtect says it has also been speaking with Valve about not featuring the work of “scammers” on its platform.

In a nutshell, Denuvo is being accused of using pirated versions of VMProtect in order to create its own anti-piracy software. It’s one of the most ironic claims ever made against an anti-piracy company and it will be intriguing to see how this plays out. According to VMProtect, legal action might not be far away.

“Through our long-standing partners from Intellect-C, we are starting to prepare an official claim against Denuvo Software Solutions GmbH with the prospect of going to court. This might be a very good lesson for ‘greedy’ developers who do not care about the intellectual property rights of their colleagues in the same trade,” drVano concludes.

Just last week, the latest version of Denuvo was cracked by rising star Baldman, who revealed what a toll the protection was taking on video gaming hardware. The protection, which according to reports is the only recent version of Denuvo that doesn’t use VMProtect, collapsed in less than a week.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

UNIFICli – a CLI tool to manage Ubiquiti’s Unifi Controller

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2017/04/unificli-cli-tool-to-manage-ubiquitis.html

As mentioned earlier, I made a nodejs library interface to the Ubiquiti Unifi Controller’s REST API which is available here – https://github.com/delian/node-unifiapi
Now I am introducing a small, demo, CLI interface, which uses that same library to remotely connect and configure Ubiquiti Unifi Controller (or Ubiquiti UC-CK Cloud Key).
This software is available on GitHub here – https://github.com/delian/unificli and its main goal for me is to be able to test the node-unifiapi library. The calls and parameters are almost 1:1 with the library and this small code provides great example how such tools could be built.
This tool is able to connect to a controller either via a direct HTTPS connection or via WebRTC through Ubiquiti’s Unifi Cloud network (they name it SDN). There is also a command you can use to connect to a wireless access point via SSH over WebRTC.
The tool is not complete, nor does it have any particular goal. Feel free to fix bugs, extend it with features, or provide suggestions. Any help with the development will be appreciated.
Commands can be executed via the CLI too:

npm start connectSSH 00:01:02:03:04:05 -l unifikeylocation

AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-direct-connect-update-link-aggregation-groups-bundles-and-reinvent-recap/

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to tell you about our new Direct Connect Bundles and to tell you more about how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.

Here’s how you can make use of link aggregation from the Console. First, creating a new LAG from scratch:

And second, creating a LAG from existing connections:
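
The console screenshots walk through both flows. If you prefer to automate this, a LAG can also be requested through the AWS CLI. The following is a hedged sketch: the location code and names are placeholders, and you should confirm the exact parameter names against the current aws directconnect create-lag reference:

# Request a LAG made up of two new 10 Gbps connections at a Direct Connect location
aws directconnect create-lag \
  --location EqDC2 \
  --number-of-connections 2 \
  --connections-bandwidth 10Gbps \
  --lag-name example-lag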


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience:

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud:

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and on VPN access to AWS:

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment:

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia):

It supported all of the re:Invent venues:

Here are some video resources that will help you to learn more about how we did this, and how you can do it yourself:

Jeff;

How to Add More Application Support to Your Microsoft AD Directory by Extending the Schema

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-to-add-more-application-support-to-your-microsoft-ad-directory-by-extending-the-schema/

Some Active Directory (AD) integrated applications require custom changes to the directory schema. Today, we have added the ability for an administrator to extend the schema of AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD. Specifically, you can modify the AD schema and enable many more applications. This feature also allows you to add new attributes and object classes to your AD that your application requires and that are not present in the core AD classes and attributes. Finally, it allows you to rename and disable attributes you create.

To update your schema, you upload a compatible Lightweight Directory Access Protocol Data Interchange Format (LDIF) file through the Directory Service console or AWS SDK. LDIF is a standard for formatted text designed to exchange data and update schemas for Lightweight Directory Access Protocol (LDAP) servers such as AD. Applications that require elevated permissions, such as Enterprise or Domain Admins, might not be supported.

In this blog post, I explain schema attributes and classes, and I give an overview of LDIF files and formatting. I then walk through a use case, which adds a new attribute to the computer class object that stores the Amazon EC2 instance identifier for my EC2 instances that are joined to my Microsoft AD domain, in three main steps:

  1. Create an LDIF file.
  2. Import an LDIF file.
  3. Validate schema updates.

I also show how to add a value to the new attribute.
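
For step 2, the LDIF import can be driven from the AWS CLI as well as from the console. The following sketch assumes the start-schema-extension command and its current parameter names; the directory ID and file name are placeholders, so check the AWS CLI reference before relying on it:

# Submit an LDIF file to extend the schema, taking a snapshot first for easy rollback
aws ds start-schema-extension \
  --directory-id d-1234567890 \
  --create-snapshot-before-schema-extension \
  --ldif-content file://add-ec2-instance-id.ldif \
  --description "Add EC2InstanceID attribute to the computer class"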

Schema concepts

Schemas define the structures of directories and are composed of object classes that contain attributes, which can be uniquely referenced by an object identifier. Before you can modify a schema, it is useful to know how schemas are defined. This section covers important concepts you need to know about AD schemas. If you are already familiar with AD schemas and LDIF files, feel free to skip ahead. Note: All links in this section go to the Microsoft Developer Network (MSDN) website.

Schema classes: Each schema class, similar to a table in a database, has several properties, such as objectClassCategory, that define the class category. Go to Characteristics of Object Classes to see the complete list of classes’ characteristics, and learn more about how to create a new class at Defining a New Class.

Schema attributes: Each schema attribute, which is similar to a field in a database, has several properties that define its characteristics. For example, the name used by LDAP clients to read and write the attribute is the lDAPDisplayName property, which must be unique across all attributes and classes. Characteristics of Attributes has a complete list of attribute characteristics; you can find additional guidance for creating a new attribute on Defining a New Attribute.

Schema linked attributes: Some attributes are linked between two classes with forward and back links. For example, when you add a user to a group, AD creates a forward link to the group, and AD adds a back link from the group to the user. A unique linkID must be generated when creating an attribute that will be linked.

Object identifier (OID): Each class and attribute must have an OID that is unique across all your objects. Software vendors must obtain their own unique OID to avoid conflicts when more than one application uses the same attribute for different purposes. To ensure uniqueness, you can obtain a root OID from an ISO Name Registration Authority. Alternatively, you can obtain a base OID from Microsoft. To learn more about OIDs and how to obtain them, go to Object Identifiers.

LDIF files and formatting

LDIF files are formatted text files that you can use to modify objects and schemas in an LDAP directory such as AD. LDIF files contain instructions to define classes and attributes for objects that are stored in the directory. An instruction to define a class or an attribute is composed of multiple lines, each specifying a different property of the class or attribute. The format for LDIF files is an Internet Engineering Task Force (IETF) standard defined in Request for Comments (RFC) 2849.

To modify the schema in your Microsoft AD directory, you must first obtain or create an LDIF file. You must also have permissions to modify any objects held within the organizational unit (OU) that your Microsoft AD administrative account is delegated to control. You can also obtain an LDIF file by exporting one from a preexisting directory. Often, software manufacturers will provide a pre-created LDIF file with their products. When you have your LDIF file, you submit the LDIF file to extend the existing schema.

LDAP directories do not read the LDIF file. Rather, a program (for example, Ldifde.exe) interprets the LDIF file. The program converts the LDIF instructions into a sequence of LDAP commands that it sends to the directory, using the LDAP protocol. In the case of Microsoft AD, the LDIF file is submitted through the Directory Service console or AWS SDK; you do not have permissions to make modifications to the schema directly from an LDAP tool or application running in your Amazon VPC.

Important: Schema modifications are a critical operation, and you must perform them with extreme caution. LDIF file errors can break your applications! To recover from an error, you can only apply another modification to disable the changes you made, or restore your entire directory to a previous state, which results in directory downtime. Carefully plan, approve, and test all schema changes in a test environment first. Please review the AWS Shared Responsibility Model.

Whether you create an LDIF file from the ground up, export an LDIF file from another AD schema, or use an LDIF file supplied by your software vendor, you must understand a few key LDIF file formatting rules. In all cases, you must be familiar with the standards to modify the file when required. For example, each modification in an LDIF file is preceded by a distinguished name (DN) that uniquely identifies the object you are changing in the schema. The DNs in your LDIF file must exactly match the DNs of your directory. If you import an LDIF file from a different directory, the DNs must be edited throughout the LDIF file. See the LDIF formatting rules in the Extending Your Microsoft AD Schema documentation.
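
If you start from an LDIF file that was exported from another directory or supplied by a vendor, a small script can rewrite the domain components so that every DN matches your own directory. The following Python sketch is illustrative only; the file names and the source domain components are placeholder values that you would replace with your own.

# Sketch: rewrite the domain components (DC=...) in an LDIF file so that
# every DN matches your own directory. File names and domain components
# below are placeholders.
SOURCE_DC = "DC=source,DC=local"      # domain components used in the exported LDIF
TARGET_DC = "DC=example,DC=com"       # domain components of your own directory

with open("schema_export.ldif", "r", encoding="utf-8") as src:
    ldif = src.read()

# Replace the domain components everywhere they appear (dn: lines and any
# attribute values that reference full DNs).
ldif = ldif.replace(SOURCE_DC, TARGET_DC)

with open("schema_EC2InstanceID.ldif", "w", encoding="utf-8") as dst:
    dst.write(ldif)

print("Rewrote DNs from", SOURCE_DC, "to", TARGET_DC)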

AD operates from a directory schema stored in a cache that is filled from the directory stored on disk. AD refreshes the cache every 5 minutes, and when schema changes are made, they are applied only to the directory stored on disk. If you need the schema changes stored on disk to take effect immediately, you must issue a command to update the schema cache after making each change.

How to extend your Microsoft AD directory schema

In the remainder of this blog post, I show how to extend your Microsoft AD directory schema by adding a new attribute to the Computer object class to store the EC2 instance identifier. This walkthrough follows three main steps: 1) create an LDIF file, 2) import the LDIF file, and 3) validate the schema updates. For this demonstration, my company name is Example, Inc., and my directory uses the domain name, example.com.

Step 1: Create an LDIF file

To create your own LDIF file:

  1. Define your new schema attribute.
  2. Define your new schema object class.
  3. Create the LDIF file.

A. Define your new schema attribute

In this example, the new attribute is named EC2InstanceID. To make my attribute name unique within the schema, I add example- as a prefix. The prefix and name act as a form of documentation about the creator and purpose of an object when an administrator is browsing the directory schema. In this case, the common name (CN) and lDAPDisplayName are the same: example-EC2InstanceID. To define your own prefix for your schema changes, you can use your DNS name, an acronym, or another string that is unique to your company. For now, I am defining the attribute and property values that I will use later when I create the LDIF file in part C.

The following table shows the complete set of properties for creating my new attribute. To learn more details about attributes and properties, go to Defining a New Attribute and System Checks and Restrictions Imposed on Schema Additions and Modifications.

Property | Property Value for My Attribute | Description
CN | example-EC2InstanceID | Attribute common name
lDAPDisplayName | example-EC2InstanceID | Name used by LDAP clients
adminDisplayName | example-EC2InstanceID | Name used by admin tools
attributeSyntax | 2.5.5.12 | OID for string (Unicode)
oMSyntax | 64 | oMSyntax for string (Unicode)
objectClass | top, attributeSchema | Is an instance of this objectClass
attributeID | 1.2.840.113556.1.8000.9999.2.1 | The OID for this attribute
isSingleValued | TRUE | Attribute has a unique value
searchFlags | 1 | Attribute is indexed for search
isMemberOfPartialAttributeSet | TRUE | Replicated to the global catalog

Note: Because this attribute does not have an oMSyntax of 127, an oMObjectClass is not required in this example.

B. Define your new schema object class

AD comes with a core schema that defines objects used by the Windows operating system and many AD-integrated applications. You should not modify core schema definitions for AD, also called Category 1 objects. Instead, you should create a new auxiliary class and add the new attributes to this new auxiliary class. Then, add the attributes defined in the new auxiliary class to the core class to extend the schema so that the directory remains compatible with applications that depend on the core schema.

In my example, my goal is to add the example-EC2InstanceID attribute to the Computer class. To do so, I create a new auxiliary class named example-Computer and add the example-EC2InstanceID attribute to this new auxiliary class. Later, I will add the example-EC2InstanceID attribute defined in the example-Computer auxiliary class to the Computer core class. For now, I define only the example-Computer class that I will translate into LDIF instructions in Step 3.

The following table shows the complete set of properties for creating my new auxiliary class. To learn more details about classes and properties, go to Defining a New Class.

Property | Property Value for My New Auxiliary Class | Description
CN | example-Computer | Class common name
lDAPDisplayName | example-Computer | Name used by LDAP clients
adminDisplayName | example-Computer | Name used by admin tools
governsID | 1.2.840.113556.1.8000.9999.1.1 | The OID for the auxiliary class
mayContain | example-EC2InstanceID | Class optionally contains this attribute
possSuperiors | organizationalUnit, container | This auxiliary class can be a child of organizationalUnit or container
objectClassCategory | 3 | Auxiliary class

C. Create the LDIF file

Now that I have defined my new example-EC2InstanceID attribute and example-Computer auxiliary class, it is time to put everything together in a single LDIF file. The sequence of LDIF instructions to extend the schema in my example is:

  1. Define the example-EC2InstanceID attribute.
  2. Update the schema cache.
  3. Define the example-Computer auxiliary class.
  4. Update the schema cache.
  5. Add the attributes defined in the example-Computer auxiliary class to the Computer class.
  6. Update the schema cache.

For each new instruction in the LDIF file, you must define the distinguished name (DN) as the first line of the instruction. The DN identifies an AD object within the AD object’s tree and must contain the domain components for your directory. In my example, the domain components for my directory are DC=example,DC=com.

The DN also must contain the common name (CN) of the AD object. The first CN entry is the attribute or class name. Next, you must use CN=Schema,CN=Configuration. This CN ensures that you are extending the AD schema. As mentioned earlier, this process extends the schema only; you cannot use it to add or modify the contents of other AD objects.

Remember: If you are creating a new file, using a file created by an application provider, or using a file exported from another domain, you must update the DNs throughout the LDIF file to match your directory’s naming conventions. The general format for a DN follows.

dn: CN=[attribute or class name],CN=Schema,CN=Configuration,DC=[domain_name]

In my example, the DN for my new attribute example-EC2InstanceID follows.

dn: CN=example-EC2InstanceID,CN=Schema,CN=Configuration,DC=example,DC=com

I create an LDIF file by using the instructions for the six previously named steps to define and add my new attribute to the AD schema. I name my LDIF file schema_EC2InstanceID.ldif, which is available for download. If you want to perform a test with the file, don’t forget to change the dn entries to specify your own domain name. The following is the code of the LDIF file.

# 1 – Define the example-EC2InstanceID attribute

dn: CN=example-EC2InstanceID,CN=Schema,CN=Configuration,DC=example,DC=com
changetype: add
objectClass: top
objectClass: attributeSchema
cn: example-EC2InstanceID
attributeID: 1.2.840.113556.1.8000.9999.2.1
attributeSyntax: 2.5.5.12
isSingleValued: TRUE
adminDisplayName: example-EC2InstanceID
adminDescription: EC2 Instance ID
oMSyntax: 64
searchFlags: 1
lDAPDisplayName: example-EC2InstanceID
systemOnly: FALSE

# 2 – Update the schema cache

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-

# 3 – Define the example-Computer auxiliary class

dn: CN=example-Computer,CN=Schema,CN=Configuration,DC=example,DC=com
changetype: add
objectClass: top
objectClass: classSchema
cn: example-Computer
governsID: 1.2.840.113556.1.8000.9999.1.1
mayContain: example-EC2InstanceID
rDNAttID: cn
adminDisplayName: example-Computer
adminDescription: Example Computer Class
objectClassCategory: 3
lDAPDisplayName: example-Computer
name: example-Computer
systemOnly: FALSE

# 4 – Update the schema cache

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1
-

# 5 – Add the attributes defined in the example-Computer auxiliary class to the Computer class

dn: CN=Computer,CN=Schema,CN=Configuration,DC=example,DC=com
changetype: modify
add: auxiliaryClass
auxiliaryClass: example-Computer
-

# 6 – Update the schema cache

dn:
changetype: modify
add: schemaUpdateNow
schemaUpdateNow: 1

Step 2: Import an LDIF file

After carefully reviewing and saving my LDIF file, I am now ready to upload it and apply it to my directory:

  1. Open the AWS Management Console, choose Directory Service, and then choose your Microsoft AD Directory ID link (as shown in the following screenshot).
    Screenshot of choosing the Directory ID
  2. Choose the Schema extensions tab. Here you can see a history of all the schema modification attempts made to the directory, including the Start Time, End Time, the Description of the update that you provided, and the Status.
    Screenshot of "Schema extensions" tab
  3. Choose Upload and update schema to upload your LDIF file.
  4. On the Upload LDIF file and update schema page, choose Browse to choose your LDIF file. Type a description for this schema update to document why you updated your schema, and choose Update Schema.
    Screenshot of "Upload LDIF file and update schema" page

When you choose Update Schema, Microsoft AD performs an initial syntax validation of your LDIF file.

Note: The most common syntax validation error is failing to update the DNs in your LDIF file to match your directory’s naming conventions.

After the initial LDIF file syntax validation, three actions are performed automatically:

  1. Microsoft AD creates a snapshot. The snapshot allows you to restore your directory just in case you face any issues in your application after updating the schema. The creation of the snapshot takes approximately 40 minutes. If you have reached your manual snapshot limit, Microsoft AD will prompt you to delete one of your manual snapshots before continuing.
  2. Your LDIF file is applied against your directory to a domain controller in isolation. Microsoft AD selects one of your DCs to be the schema master. It removes that DC from directory replication and applies your LDIF file using Ldifde.exe. This operation usually takes less than a minute.
  3. The schema updates are replicated to all domain controllers. After successfully applying your LDIF file against your domain controller in isolation, the modified DC is added back into replication, and the schema updates are replicated to all domain controllers.

While these steps are in progress, your directory is available to respond to application and user requests. However, you cannot perform directory maintenance operations such as creating trusts or adding IP routes. You can monitor the progress in the schema update history, as shown in the next screenshot.

Note: If any errors occur during the update, you receive a detailed error message and, when possible, a line number to show where the error occurred in your LDIF file. If an error occurs, your Microsoft AD schema will remain unmodified.

Screenshot showing snapshot being created
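
You can also check the same history programmatically. The following Python sketch polls the ListSchemaExtensions operation with boto3 until nothing is left in progress; the directory ID and the set of in-progress status values are assumptions drawn from the API reference, so adjust them to match your environment.

import time
import boto3

ds = boto3.client("ds")
DIRECTORY_ID = "d-1234567890"   # placeholder; use your directory ID

# Status values below are assumptions; check the Directory Service API reference.
IN_PROGRESS = {"Initializing", "CreatingSnapshot", "UpdatingSchema", "Replicating"}

while True:
    history = ds.list_schema_extensions(DirectoryId=DIRECTORY_ID)
    extensions = history.get("SchemaExtensionsInfo", [])
    for ext in extensions:
        print(ext["SchemaExtensionId"], ext["SchemaExtensionStatus"])
    if not any(ext["SchemaExtensionStatus"] in IN_PROGRESS for ext in extensions):
        break
    time.sleep(60)   # the snapshot alone takes roughly 40 minutes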

Step 3: Validate the schema updates

The last step is to validate that your schema updates have been applied to your directory. This is a key step before you migrate or update any application that relies on the schema updates. You can accomplish this with a variety of LDAP tools, or by writing a test tool that issues appropriate LDAP commands. I will show you two ways to do this: by using the AD Schema Snap-in and by using Windows PowerShell.

To verify that the schema updates are present, you must run AD tools from a computer that is joined to your Microsoft AD domain. This can be a Windows server running in your on-premises network, with access to your Amazon Virtual Private Cloud (VPC) through a Virtual Private Network (VPN) connection. You can also perform this validation from an Amazon EC2 Windows instance (see Joining a Domain Using the Amazon EC2 Launch Wizard).

To use the AD Schema Snap-in:

  1. Install the AD Schema Snap-In by following this procedure.
  2. Open the Microsoft Management Console (MMC) and expand the Active Directory Schema tree for your directory. You should be able to see your classes and attributes. The following screenshot shows the example-EC2InstanceID attribute.
    Screenshot showing the example-EC2InstanceID attribute

If you prefer Windows PowerShell, you can verify that your schema update was applied to your directory by using the Get-ADObject cmdlet. The following code shows the syntax of my example.

# New attribute EC2InstanceID
PS C:\> get-adobject -Identity 'CN=example-EC2InstanceID,CN=Schema,CN=Configuration,DC=example,DC=com' -Properties * <Enter>

adminDescription: EC2 instance ID
adminDisplayName: example-EC2InstanceID
attributeID     : 1.2.840.113556.1.8000.9999.2.1
attributeSyntax : 2.5.5.12
CanonicalName   : example.com/Configuration/Schema/example-EC2InstanceID
CN              : example-EC2InstanceID
...

# New auxiliary class example-computer
PS C:\> get-adobject -Identity 'CN=example-Computer,CN=Schema,CN=Configuration,DC=example,DC=com' -Properties * <Enter>

adminDescription: example computer class
adminDisplayName: example-Computer
CanonicalName   : example.com/Configuration/Schema/example-Computer
CN              : example-Computer
...

# New auxiliary class example-computer added to the computer class
PS C:\> get-adobject -Identity 'CN=computer,CN=Schema,CN=Configuration,DC=example,DC=com' -Properties auxiliaryClass | select -ExpandProperty auxiliaryClass <Enter>

example-Computer
ipHost

Optional: Add a value to the new attribute

Now that you have created a new attribute, let’s add a new value to the attribute of a computer in your Microsoft AD directory. Open Windows PowerShell and set the new attribute with the following command.

PS C:\> set-adcomputer -Identity <computer name> -add @{'example-EC2InstanceID'='<EC2 instance ID>'}

Now, validate that the EC2InstanceID value you added to the computer object in the previous command is present in your Microsoft AD directory:

PS C:\> get-adcomputer -Identity <computer name> -Properties example-EC2InstanceID

DistinguishedName     : CN=<computer name>,OU=Computers,DC=example,DC=com
DNSHostName           : <computer name>.example.com
Enabled               : True
example-EC2InstanceID : <EC2 instance ID>
Name                  : <computer name>
ObjectClass           : computer
ObjectGUID            : e7c2f596-4f8c-4084-a124-6fc6babaf3c8
SamAccountName        : <computer name>
SID                   : S-1-5-21-3794764471-3203795520-307520662-1104

Summary

In this blog post, I provided an overview of the key concepts, terms, and steps involved in extending a Microsoft AD schema, including how to format your LDIF file and how to apply your LDIF file against your Microsoft AD directory using the Directory Service console. Even if you are importing the LDIF file from another domain or application, these steps are important for understanding the potential impact on your directory. Extending the schema is a critical operation. Do not apply any schema update in your production environment without testing it with your application in a dev/test environment first.

For additional information about schema extensions, how to create LDIF files, and common errors, see the AWS Directory Service documentation.

To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, post them on the Directory Service forum.

– Peter

SDN and beyond

Post Syndicated from Yate Team original http://blog.yate.ro/2015/09/15/sdn-and-beyond/

Software-defined networking (SDN) and network function virtualization (NFV) are new approaches to designing and operating mobile networks, granting operators better management possibilities and better use of network capabilities. NFV represents the virtualization of network node roles, which culminates in separate software implementations performing the functions typically executed by hardware components. At the other end, […]

A Case Study in Global Fault Isolation

Post Syndicated from AWS Architecture Blog original https://www.awsarchitectureblog.com/2014/05/a-case-study-in-global-fault-isolation.html

In a previous blog post, we talked about using shuffle sharding to get magical fault isolation. Today, we'll examine a specific use case that Route 53 employs and one of the interesting tradeoffs we decided to make as part of our sharding. Then, we'll discuss how you can employ some of these concepts in your own applications.

Overview of Anycast DNS

One of our goals at Amazon Route 53 is to provide low-latency DNS resolution to customers. We do this, in part, by announcing our IP addresses using “anycast” from over 50 edge locations around the globe. Anycast works by routing packets to the closest (network-wise) location that is “advertising” a particular address. In the image below, we can see that there are three locations, all of which can receive traffic for the 205.251.194.72 address.

Anycast Routing

[Blue circles represent edge locations; orange circles represent AWS regions]

For example, if a customer has ns-584.awsdns-09.net assigned as a nameserver, issuing a query to that nameserver could result in that query landing at any one of multiple locations responsible for advertising the underlying IP address. Where the query lands depends on the anycast routing of the Internet, but it is generally going to be the closest network-wise (and hence, low latency) location to the end user.

Behind the scenes, we have thousands of nameserver names (e.g. ns-584.awsdns-09.net) hosted across four top-level domains (.com, .net, .co.uk, and .org). We refer to all the nameservers in one top-level domain as a 'stripe'; thus, we have a .com stripe, a .net stripe, a .co.uk stripe, and a .org stripe. This is where shuffle sharding comes in: each Route 53 domain (hosted zone) receives four nameserver names, one from each stripe. As a result, it is unlikely that two zones will overlap completely across all four nameservers. In fact, we enforce a rule during nameserver assignment that no hosted zone can overlap by more than two nameservers with any previously created hosted zone.
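
To make the assignment rule concrete, here is a toy Python sketch of striped shuffle sharding: it picks one nameserver from each of four stripes and rejects any combination that shares more than two nameservers with a previously created zone. The stripe contents and nameserver naming pattern are invented for illustration and are not Route 53's actual pools.

import random

# Toy stripes: four pools of made-up nameserver names, one pool per TLD.
STRIPES = {
    ".com":   [f"ns-{i}.awsdns-{i % 32:02d}.com" for i in range(0, 256)],
    ".net":   [f"ns-{i}.awsdns-{i % 32:02d}.net" for i in range(256, 512)],
    ".org":   [f"ns-{i}.awsdns-{i % 32:02d}.org" for i in range(512, 768)],
    ".co.uk": [f"ns-{i}.awsdns-{i % 32:02d}.co.uk" for i in range(768, 1024)],
}

existing_zones = []  # sets of nameserver names already assigned to zones

def assign_nameservers():
    """Pick one nameserver per stripe, retrying until no previously created
    zone shares more than two of the four names (a naive retry loop)."""
    while True:
        candidate = {random.choice(servers) for servers in STRIPES.values()}
        if all(len(candidate & zone) <= 2 for zone in existing_zones):
            existing_zones.append(candidate)
            return candidate

for zone_name in ("example.com", "example.org", "example.net"):
    print(zone_name, sorted(assign_nameservers()))

Retrying random picks is the simplest way to enforce the overlap rule in a sketch; the property being enforced is what matters, not the retry mechanism.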

DNS Resolution

Before continuing, it’s worth quickly explaining how DNS resolution works. Typically, a client, such as your laptop or desktop, has a “stub resolver.” The stub resolver simply contacts a recursive nameserver (resolver), which in turn queries authoritative nameservers on the Internet to find the answers to a DNS query. Typically, resolvers are provided by your ISP or corporate network infrastructure, or you may rely on an open resolver such as Google DNS. Route 53 is an authoritative nameserver, responsible for replying to resolvers on behalf of customers. For example, when a client program attempts to look up amazonaws.com, the stub resolver on the machine will query the resolver. If the resolver has the data in cache and the value hasn’t expired, it will use the cached value. Otherwise, the resolver will query authoritative nameservers to find the answer.
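
You can observe both halves of this process yourself by querying a recursive resolver for a zone's nameservers and then querying one of those authoritative nameservers directly. The sketch below uses the third-party dnspython package, which is my choice for illustration rather than anything the service requires; the domain is just an example.

import dns.message
import dns.query
import dns.resolver

# Ask the system's recursive resolver for the zone's authoritative
# nameservers, the way a stub resolver would ask for any record.
ns_answer = dns.resolver.resolve("amazonaws.com", "NS")
nameservers = [str(rr) for rr in ns_answer]
print("Authoritative nameservers:", nameservers)

# Then query one authoritative nameserver directly over UDP, the way a
# recursive resolver would when the answer is not already cached.
ns_ip = str(dns.resolver.resolve(nameservers[0], "A")[0])
query = dns.message.make_query("amazonaws.com", "A")
reply = dns.query.udp(query, ns_ip, timeout=3.0)
print(reply.answer)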

Route 53 Nameserver Striping

[Every location advertises one or more stripes, but we only show Sydney, Singapore, and Hong Kong in the above diagram for clarity.]

Each Route 53 edge location is responsible for serving the traffic for one or more stripes. For example, our edge location in Sydney, Australia could serve both the .com and .net stripes, while Singapore could serve just the .org stripe. Any given location can serve the same stripe as other locations. Hong Kong could serve the .net stripe, too. This means that if a resolver in Australia attempts to resolve a query against a nameserver in the .org stripe, which isn’t provided in Australia, the query will go to the closest location that provides the .org stripe (which is likely Singapore). A resolver in Singapore attempting to query against a nameserver in the .net stripe may go to Hong Kong or Sydney depending on the potential Internet routes from that resolver’s particular network. This is shown in the diagram above.

For any given domain, in general, resolvers learn the lowest latency nameserver based upon the round trip time of the query (this technique is often called SRTT, or smoothed round-trip time). Over a few queries, a resolver in Australia would gravitate toward using the nameservers on the .net and .com stripes for Route 53 customers’ domains.

Not all resolvers do this. Some choose randomly amongst the nameservers. Others may end up choosing the slowest one, but our experiments show that about 80% of resolvers use the lowest RTT nameserver. For additional information, this presentation describes how various resolvers choose which nameserver to use. Additionally, many other resolvers (such as Google Public DNS) use pre-fetching, or have very short timeouts if they fail to resolve against a particular nameserver.

The Latency-Availability Decision

Given the above resolver behavior, one option for a DNS provider like Route 53 might be to advertise all four stripes from every edge location. This would mean that no matter which nameserver a resolver chooses, it will always go to the closest network location. However, we believe this provides a poor availability model.

Why? Because edge locations can sometimes fail to provide resolution for a variety of reasons that are very hard to control: the edge location may lose power or Internet connectivity, the resolver may lose connectivity to the edge location, or an intermediary transit provider may lose connectivity. Our experiments have shown that these types of events can cause about 5 minutes of disruption as the Internet updates routing tables. In recent years another serious risk has arisen: large-scale transit network congestion due to DDoS attacks. Our colleague, Nathan Dye, has a talk from AWS re:Invent that provides more details: www.youtube.com/watch?v=V7vTPlV8P3U.

In all of these failure scenarios, advertising every nameserver from every location may result in resolvers having no fallback location. All nameservers would route to the same location and resolvers would fail to resolve DNS queries, resulting in an outage for customers.

In the diagram below, we show the difference for a resolver querying domain X, whose nameservers (NX1, NX2, NX3, NX4) are advertised from all locations and domain Y, whose nameservers (NY1, NY2, NY3, NY4) are advertised in a subset of the locations.

Route 53 Nameserver Striping

When the path from the resolver to location A is impaired, all queries to the nameservers for domain X will fail. In comparison, even if the path from the resolver to location A is impaired, there are other transit paths to reach nameservers at locations B, C, and D in order to resolve the DNS for domain Y.

Route 53 typically advertises only one stripe per edge location. As a result, if anything goes wrong with a resolver being able to reach an edge location, that resolver has three other nameservers in three other locations to which it can fall back. For example, if we deploy bad software that causes the edge location to stop responding, the resolver can still retry elsewhere. This is why we organize our deployments in "stripe order"; Nick Trebon provides a great overview of our deployment strategies in the previous blog post. It also means that queries to Route 53 gain a lot of Internet path diversity, which helps resolvers route around congestion and other intermediary problems on their path to reaching Route 53.

Route 53's foremost goal is to always meet our promise of a 100% SLA for DNS queries – that all of our customers' DNS names should resolve all the time. Our customers also tell us that latency is the next most important feature of a DNS service provider. Maximizing Internet path and edge location diversity for availability necessarily means that some nameservers will respond from farther-away edge locations. For most resolvers, our method has no impact on the minimum RTT, or fastest nameserver, and how quickly it can respond. As resolvers generally use the fastest nameserver, we’re confident that any compromise in resolution times is small and that this is a good balance between the goals of low latency and high availability.

On top of our striping across locations, you may have noticed that the four stripes use different top-level domains. We use multiple top-level domains in case one of the three TLD providers (.com and .net are both operated by Verisign) has any sort of DNS outage. While this rarely happens, it means that as a customer, you’ll have increased protection during a TLD’s DNS outage because at least two of your four nameservers will continue to work.

Applications

You, too, can apply the same techniques in your own systems and applications. If your system isn't end-user facing, you could also consider utilizing multiple TLDs for resilience. Especially in the case where you control your own API and the clients calling it, there's no reason to place all your eggs in one TLD basket.

Another application of what we've discussed is minimizing downtime during failovers. For high availability applications, we recommend customers utilize Route 53 DNS Failover. With failover configured, Route 53 will only return answers for healthy endpoints. In order to determine endpoint health, Route 53 issues health checks against your endpoint. As a result, there is a minimum of 10 seconds (assuming you configured fast health checks with a single failover interval) where the application could be down, but failover has not triggered yet. On top of that, there is the additional time incurred for resolvers to expire the DNS entry from their cache based upon the record's TTL. To minimize this failover time, you could write your clients to behave similarly to the resolver behavior described earlier. And, while you may not employ an anycast system, you can host your endpoints in multiple locations (e.g. different availability zones and perhaps even different regions). Your clients would learn the SRTT of the multiple endpoints over time and only issue queries to the fastest endpoint, but fall back to the other endpoints if the fastest is unavailable. And, of course, you could shuffle shard your endpoints to achieve increased fault isolation while doing all of the above.
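
As a rough illustration of that client-side behavior, the following Python sketch keeps a smoothed round-trip time per endpoint, always tries the fastest endpoint first, and falls back to the others on failure. The endpoint URLs and the call function are placeholders standing in for your real API clients; this sketches the idea rather than any Route 53 internals.

import random
import time

# Placeholder endpoints in different locations (e.g. different AZs or regions).
ENDPOINTS = [
    "https://api-use1.example.com",
    "https://api-usw2.example.com",
    "https://api-euw1.example.com",
]

# Smoothed round-trip time per endpoint, seeded optimistically at zero so
# that every endpoint gets tried at least once.
srtt = {ep: 0.0 for ep in ENDPOINTS}
ALPHA = 0.2   # weight of the newest latency sample in the smoothing

def call(endpoint):
    """Placeholder for a real request; simulates latency and occasional failure."""
    latency = random.uniform(0.01, 0.2)
    time.sleep(latency)
    if random.random() < 0.05:
        raise ConnectionError(f"{endpoint} unreachable")
    return f"ok from {endpoint}"

def request_with_fallback():
    # Try the endpoint with the lowest smoothed RTT first, then fall back.
    for endpoint in sorted(ENDPOINTS, key=lambda ep: srtt[ep]):
        start = time.monotonic()
        try:
            result = call(endpoint)
        except ConnectionError:
            srtt[endpoint] = max(srtt[endpoint], 1.0)   # penalize the failure
            continue
        sample = time.monotonic() - start
        srtt[endpoint] = (1 - ALPHA) * srtt[endpoint] + ALPHA * sample
        return result
    raise RuntimeError("all endpoints failed")

for _ in range(10):
    print(request_with_fallback())

Seeding every endpoint's SRTT at zero guarantees each one is measured at least once before the client settles on a favorite.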

– Lee-Ming Zen

Apache 2.0 -> 2.2 LDAP Changes on Ubuntu

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2008/01/01/apache-2-2-ldap.html

I thought the following might be of use to those of you who are still
using Apache 2.0 with LDAP and wish to upgrade to 2.2. I found this
basic information around online, but I had to search pretty hard for it.
Perhaps presenting this in a more straightforward way might help the
next searcher to find an answer more quickly. It’s probably only of
interest if you are using LDAP as your authentication system with an
older Apache (e.g., 2.0) and have upgraded to 2.2 on an Ubuntu or Debian
system (such as upgrading from dapper to gutsy.)

When running dapper on my intranet web server with Apache
2.0.55-4ubuntu2.2, I had something like this:

<Directory /var/www/intranet>
Order allow,deny
Allow from 192.168.1.0/24

Satisfy All
AuthLDAPEnabled on
AuthType Basic
AuthName "Example.Org Intranet"
AuthLDAPAuthoritative on
AuthLDAPBindDN uid=apache,ou=roles,dc=example,dc=org
AuthLDAPBindPassword APACHE_BIND_ACCT_PW
AuthLDAPURL ldap://127.0.0.1/ou=staff,ou=people,dc=example,dc=org?cn
AuthLDAPGroupAttributeIsDN off
AuthLDAPGroupAttribute memberUid

require valid-user
</Directory>

I upgraded that server to gutsy (via dapper → edgy → feisty
→ gutsy in succession, just because it’s safer), and it now has
Apache 2.2.4-3build1. The method for doing LDAP authentication is a bit
more straightforward now, but it does require this change:

<Directory /var/www/intranet>
Order allow,deny
Allow from 192.168.1.0/24

AuthType Basic
AuthName "Example.Org Intranet"
AuthBasicProvider ldap
AuthzLDAPAuthoritative on
AuthLDAPBindDN uid=apache,ou=roles,dc=example,dc=org
AuthLDAPBindPassword APACHE_BIND_ACCT_PW
AuthLDAPURL ldap://127.0.0.1/ou=staff,ou=people,dc=example,dc=org

require valid-user
Satisfy all
</Directory>

However, this wasn’t enough. When I set this up, I got rather strange
error messages such as:

[error] [client MYIP] GROUP: USERNAME not in required group(s).

I found somewhere online (I’ve now lost the link!) that you couldn’t
have standard pam auth competing with the LDAP authentication. This
seemed strange to me, since I’ve told it I want the authentication
provided by LDAP, but anyway, doing the following on the system:

a2dismod auth_pam
a2dismod auth_sys_group

solved the problem. I decided to move on rather than dig deeper into the
true reasons. Sometimes, administration life is actually better with a
mystery about.