
Introducing Email Templates and Bulk Sending

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/ses/introducing-email-templates-and-bulk-sending/

The Amazon SES team is excited to announce our latest update, which includes two related features that help you send personalized emails to large groups of customers. This post discusses these features and provides examples that you can follow to start using them right away.

Email templates

You can use email templates to create the structure of an email that you plan to send to multiple recipients, or that you will use again in the future. Each template contains a subject line, a text part, and an HTML part. Both the subject and the email body can contain variables that are automatically replaced with values specific to each recipient. For example, you can include a {{name}} variable in the body of your email. When you send the email, you specify the value of {{name}} for each recipient. Amazon SES then automatically replaces the {{name}} variable with the recipient’s first name.

Creating a template

To create a template, you use the CreateTemplate API operation. To use this operation, pass a JSON object with four properties: a template name (TemplateName), a subject line (SubjectPart), a plain text version of the email body (TextPart), and an HTML version of the email body (HtmlPart). You can include variables in the subject line or message body by enclosing the variable names in two sets of curly braces. The following example shows the structure of this JSON object.

{
  "TemplateName": "MyTemplate",
  "SubjectPart": "Greetings, {{name}}!",
  "TextPart": "Dear {{name}},\r\nYour favorite animal is {{favoriteanimal}}.",
  "HtmlPart": "<h1>Hello {{name}}</h1><p>Your favorite animal is {{favoriteanimal}}.</p>"
}

Use this example to create your own template, and save the resulting file as mytemplate.json. You can then use the AWS Command Line Interface (AWS CLI) to create your template by running the following command: aws ses create-template --cli-input-json file://mytemplate.json
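If you prefer to work in code rather than the AWS CLI, the SDKs expose the same operation. The following minimal sketch uses the AWS SDK for Python (Boto3) with the same template values as the example above; it assumes Boto3 is installed and AWS credentials and a region are already configured.

import boto3

ses = boto3.client("ses")

# Create the template; the Template structure mirrors mytemplate.json
ses.create_template(
    Template={
        "TemplateName": "MyTemplate",
        "SubjectPart": "Greetings, {{name}}!",
        "TextPart": "Dear {{name}},\r\nYour favorite animal is {{favoriteanimal}}.",
        "HtmlPart": "<h1>Hello {{name}}</h1><p>Your favorite animal is {{favoriteanimal}}.</p>",
    }
)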

Sending an email created with a template

Now that you have created a template, you’re ready to send email that uses the template. You can use the SendTemplatedEmail API operation to send email to a single destination using a template. Like the CreateTemplate operation, this operation accepts a JSON object with four properties. For this operation, the properties are the sender’s email address (Source), the name of an existing template (Template), an object called Destination that contains the recipient addresses (and, optionally, any CC or BCC addresses) that will receive the email, and a property that refers to the values that will be replaced in the email (TemplateData). The following example shows the structure of the JSON object used by the SendTemplatedEmail operation.

{
  "Source": "[email protected]",
  "Template": "MyTemplate",
  "Destination": {
    "ToAddresses": [ "[email protected]" ]
  },
  "TemplateData": "{ \"name\":\"Alejandro\", \"favoriteanimal\": \"zebra\" }"
}

Customize this example to fit your needs, and then save the resulting file as myemail.json. One important note: in the TemplateData property, you must use a backslash (\) character to escape the quotes within this object, as shown in the preceding example.

When you’re ready to send the email, run the following command: aws ses send-templated-email --cli-input-json file://myemail.json
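If you are scripting the send in Python instead of using the CLI, the equivalent Boto3 call looks roughly like the following sketch. The addresses are placeholders, and TemplateData is still a JSON string, so building it with json.dumps avoids the manual escaping described above.

import json
import boto3

ses = boto3.client("ses")

ses.send_templated_email(
    Source="sender@example.com",  # placeholder; must be a verified identity
    Destination={"ToAddresses": ["recipient@example.com"]},  # placeholder recipient
    Template="MyTemplate",
    TemplateData=json.dumps({"name": "Alejandro", "favoriteanimal": "zebra"}),
)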

Bulk email sending

In most cases, you should use email templates to send personalized emails to several customers at the same time. The SendBulkTemplatedEmail API operation helps you do that. This operation also accepts a JSON object. At a minimum, you must supply a sender email address (Source), a reference to an existing template (Template), a list of recipients in an array called Destinations (within which you specify the recipient’s email address, and the variable values for that recipient), and a list of fallback values for the variables in the template (DefaultTemplateData). The following example shows the structure of this JSON object.

{
  "Source":"[email protected]",
  "ConfigurationSetName":"ConfigSet",
  "Template":"MyTemplate",
  "Destinations":[
    {
      "Destination":{
        "ToAddresses":[
          "[email protected]"
        ]
      },
      "ReplacementTemplateData":"{ \"name\":\"Anaya\", \"favoriteanimal\":\"yak\" }"
    },
    {
      "Destination":{ 
        "ToAddresses":[
          "[email protected]"
        ]
      },
      "ReplacementTemplateData":"{ \"name\":\"Liu\", \"favoriteanimal\":\"water buffalo\" }"
    },
    {
      "Destination":{
        "ToAddresses":[
          "[email protected]"
        ]
      },
      "ReplacementTemplateData":"{ \"name\":\"Shirley\", \"favoriteanimal\":\"vulture\" }"
    },
    {
      "Destination":{
        "ToAddresses":[
          "[email protected]"
        ]
      },
      "ReplacementTemplateData":"{}"
    }
  ],
  "DefaultTemplateData":"{ \"name\":\"friend\", \"favoriteanimal\":\"unknown\" }"
}

This example sends unique emails to Anaya ([email protected]), Liu ([email protected]), Shirley ([email protected]), and a fourth recipient ([email protected]), whose name and favorite animal we didn’t specify. Anaya, Liu, and Shirley will see their names in place of the {{name}} tag in the template (which, in this example, is present in both the subject line and message body), as well as their favorite animals in place of the {{favoriteanimal}} tag in the message body. The DefaultTemplateData property determines what happens if you do not specify the ReplacementTemplateData property for a recipient. In this case, the fourth recipient will see the word “friend” in place of the {{name}} tag, and “unknown” in place of the {{favoriteanimal}} tag.

Use the example to create your own list of recipients, and save the resulting file as mybulkemail.json. When you’re ready to send the email, run the following command: aws ses send-bulk-templated-email --cli-input-json file://mybulkemail.json
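For completeness, here is a rough Boto3 equivalent of the bulk send. The addresses are placeholders, each ReplacementTemplateData value is built with json.dumps instead of hand-escaped quotes, and a recipient with empty replacement data falls back to DefaultTemplateData.

import json
import boto3

ses = boto3.client("ses")

ses.send_bulk_templated_email(
    Source="sender@example.com",  # placeholder; must be a verified identity
    Template="MyTemplate",
    DefaultTemplateData=json.dumps({"name": "friend", "favoriteanimal": "unknown"}),
    Destinations=[
        {
            "Destination": {"ToAddresses": ["anaya@example.com"]},
            "ReplacementTemplateData": json.dumps({"name": "Anaya", "favoriteanimal": "yak"}),
        },
        {
            "Destination": {"ToAddresses": ["unknown@example.com"]},
            "ReplacementTemplateData": "{}",  # this recipient gets the default values
        },
    ],
)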

Other considerations

There are a few limits and other considerations when using these features:

  • You can create up to 10,000 email templates per Amazon SES account.
  • Each template can be up to 10 MB in size.
  • You can include an unlimited number of replacement variables in each template.
  • You can send email to up to 50 destinations in each call to the SendBulkTemplatedEmail operation. A destination includes a list of recipients, as well as CC and BCC recipients. Note that the number of destinations you can contact in a single call to the API may be limited by your account’s maximum sending rate. For more information, see Managing Your Amazon SES Sending Limits in the Amazon SES Developer Guide.

We look forward to seeing the amazing things you create with these new features. If you have any questions, please leave a comment on this post, or let us know in the Amazon SES forum.

Security advisories for Tuesday

Post Syndicated from ris original http://lwn.net/Articles/708992/rss

Debian has updated php5 (multiple vulnerabilities).

Debian-LTS has updated monit (regression in previous update) and unzip (buffer overflows).

Fedora has updated golang (F25; F24: denial of service), kernel (F25; F24; F23: three vulnerabilities), perl-DBD-MySQL (F25: two vulnerabilities), php-simplesamlphp-saml2 (F25; F24; F23: incorrect signature verification), php-simplesamlphp-saml2_1 (F25; F24; F23: incorrect signature verification), and python-tornado (F24: XSRF protection bypass).

Gentoo has updated SQUASHFS (two code execution flaws from 2012), bash (code execution), botan (two vulnerabilities), elfutils (code execution from 2014), ghostscript-gpl (buffer overflow from 2015), nodejs (multiple vulnerabilities), pixman (code execution), systemd (multiple vulnerabilities from 2013), tigervnc (two vulnerabilities from 2014), webkit-gtk (many vulnerabilities, some from 2014 and 2015), xstream (code execution from 2013), and zabbix (two vulnerabilities).

openSUSE has updated Chromium (multiple vulnerabilities), ImageMagick (Leap42.2; Leap42.1: two vulnerabilities), java-1_7_0-openjdk (Leap42.2, 42.1: multiple vulnerabilities), libass (Leap42.1, 13.2: two vulnerabilities), libgit2 (Leap42.2: two vulnerabilities), pacemaker (Leap42.1: two vulnerabilities), pcre (Leap42.2, 42.1: multiple vulnerabilities, some from 2014 and 2015), perl-DBD-mysql (13.2: use after free), php5 (Leap42.2, 42.1: two vulnerabilities), php7 (Leap42.2: two vulnerabilities), qemu (Leap42.1: multiple vulnerabilities), and util-linux (Leap42.2: denial of service).

Oracle has updated kernel 3.8.13 (OL7; OL6: two vulnerabilities) and kernel 2.6.39 (OL6; OL5: denial of service).

Slackware has updated kernel (privilege escalation), loudmouth (roster push attack), and php (multiple vulnerabilities).

SUSE has updated firefox, nss (SLE11-SP2: multiple vulnerabilities).

Frequently Asked Questions About Compliance in the AWS Cloud

Post Syndicated from Chad Woolf original https://blogs.aws.amazon.com/security/post/Tx2M9XYV2FNQ483/Frequently-Asked-Questions-About-Compliance-in-the-AWS-Cloud

Every month, AWS Compliance fields thousands of questions about how to achieve and maintain compliance in the cloud. Among other things, customers are eager to take advantage of the cost savings and security at scale that AWS offers while still maintaining robust security and regulatory compliance. Because regulations across industries and geographies can be complex, we thought it might be helpful to share answers to some of the frequently asked questions we hear about compliance in the AWS cloud, as well as to clear up potential misconceptions about how operating in the cloud might affect compliance.

Is AWS compliant with [Program X]?

Context is required to answer this question. In all cases, customers operating in the cloud remain responsible for complying with applicable laws and regulations, and it is up to you to determine whether AWS services meet applicable requirements for your business. To help you make this determination, we have established assurance programs across multiple industries and jurisdictions to inform and support AWS customers. We think about these assurance programs across the following three broad categories.

1. Certifications and attestations

Compliance certifications and attestations (evidence showing that something is true) are assessed by a third-party, independent auditor and result in a certification, audit report, or attestation of compliance.

Assurance programs in this category include:

2. Laws and regulations

AWS customers remain responsible for complying with applicable compliance laws and regulations. In some cases, AWS offers functionality (such as security features), enablers, and legal agreements (such as the AWS Data Processing Agreement and Business Associate Agreement) to support customer compliance. Requirements under applicable laws and regulations may not be subject to certification or attestation.

Assurance programs in this category include:

3. Alignments and frameworks

Compliance alignments and frameworks include published security or compliance requirements for a specific purpose, such as a specific industry or function. AWS provides functionality (such as security features) and enablers (including compliance playbooks, mapping documents, and whitepapers) for these types of programs.

Requirements under specific alignments and frameworks may not be subject to certification or attestation; however, some alignments and frameworks are covered by other compliance programs (for instance, NIST guidelines can be mapped to applicable FedRAMP security baselines).

Assurance programs in this category include:

How does AWS separate the responsibilities that they cover from the ones I still need to maintain around my compliance program?

AWS operates on the AWS Shared Responsibility Model. While AWS manages security of the cloud, customers remain responsible for compliance and security in the cloud. You retain control of the security you choose to implement to protect your content, platform, applications, systems, and networks, and you are responsible for meeting specific compliance and regulatory requirements.

Learn more about the AWS Shared Responsibility Model by watching the following video.

What’s an example of an AWS community focused on compliance?

AWS recently released a publicly available GitHub repository for AWS Config Rules. All members of the AWS community can contribute to this repository to help make effective and useful Config Rules. You can tap into the collective ingenuity and expertise of the entire AWS community to automate your compliance checks. For more information, see Announcing the AWS Config Rules Repository: A New Community-Based Source of Custom Rules for AWS Config.

What is AWS’s formal security incident response plan?

AWS’s formally documented incident response plan addresses purpose, scope, roles, responsibilities, and management commitment. It has been developed in alignment with ISO 27001 and NIST 800-53 standards. AWS has implemented the following three-phased approach to incident management:

  1. AWS detects an incident.  
  2. Specialized teams address the incident.
  3. AWS conducts a postmortem and deep root-cause analysis of the incident.

Mechanisms are in place to allow the customer support team to be notified of operational issues that impact the customer experience. A Service Health Dashboard is available and maintained by the customer support team to alert customers to any issues that may be of broad impact. The AWS incident management program is reviewed by independent external auditors during audits of AWS’s SOC, PCI DSS, ISO 27001, and FedRAMP compliance.

How often does AWS issue SOC reports and when does the next one become available?

AWS issues two SOC 1 and SOC 2 reports covering 6-month periods each year (the first report covers October 1 through March 31, and the other covers April 1 through September 30). There are many factors that play into the release date of the report, but we target early May and early November each year to release new reports. Our downloadable AWS SOC 3 Report is issued annually and is released along with the May SOC 1 and SOC 2 reports.

Please contact us with questions about using AWS products in a compliant manner. If you’d like to learn more about compliance in the cloud, see the AWS Cloud Compliance website.

– Chad

CaffeOnSpark Open Sourced for Distributed Deep Learning on Big Data Clusters

Post Syndicated from yahoo original https://yahooeng.tumblr.com/post/139916828451


By Andy Feng (@afeng76), Jun Shi, and Mridul Jain (@mridul_jain), Yahoo Big ML Team
Introduction
Deep learning (DL) is a critical capability required by Yahoo product teams (for example, Flickr and Image Search) to gain intelligence from massive amounts of online data. Many existing DL frameworks require a separate cluster for deep learning, and multiple programs have to be created for a typical machine learning pipeline (see Figure 1). The separate clusters require large datasets to be transferred among them, and introduce unwanted system complexity and latency for end-to-end learning.
Figure 1: ML Pipeline with multiple programs on separate clusters

As discussed in our earlier Tumblr post, we believe that deep learning should be conducted in the same cluster along with existing data processing pipelines to support feature engineering and traditional (non-deep) machine learning. We created CaffeOnSpark to allow deep learning training and testing to be embedded into Spark applications (see Figure 2). 
Figure 2: ML Pipeline with a single program on one cluster

CaffeOnSpark: API & Configuration and CLI

CaffeOnSpark is designed to be a Spark deep learning package. Spark MLlib supports a variety of non-deep learning algorithms for classification, regression, clustering, recommendation, and so on. Deep learning is a key capability that Spark MLlib currently lacks, and CaffeOnSpark is designed to fill that gap. The CaffeOnSpark API supports DataFrames, so that you can easily interface with a training dataset that was prepared using a Spark application, and extract the predictions from the model or features from intermediate layers for results and data analysis using MLlib or SQL.
Figure 3: CaffeOnSpark as a Spark Deep Learning package

 1:  def main(args: Array[String]): Unit = {
 2:    val ctx = new SparkContext(new SparkConf())
 3:    val cos = new CaffeOnSpark(ctx)
 4:    val conf = new Config(ctx, args).init()
 5:    val dl_train_source = DataSource.getSource(conf, true)
 6:    cos.train(dl_train_source)
 7:    val lr_raw_source = DataSource.getSource(conf, false)
 8:    val extracted_df = cos.features(lr_raw_source)
 9:    val lr_input_df = extracted_df.withColumn("Label", cos.floatarray2doubleUDF(extracted_df(conf.label)))
10:      .withColumn("Feature", cos.floatarray2doublevectorUDF(extracted_df(conf.features(0))))
11:    val lr = new LogisticRegression().setLabelCol("Label").setFeaturesCol("Feature")
12:    val lr_model = lr.fit(lr_input_df)
13:    lr_model.write.overwrite().save(conf.outputPath)
14:  }

Figure 4: Scala application using both CaffeOnSpark and MLlib

The Scala program in Figure 4 illustrates how CaffeOnSpark and MLlib work together:
L1-L4 … You initialize a Spark context, and use it to create a CaffeOnSpark object and a configuration object.
L5-L6 … You use CaffeOnSpark to conduct DNN training with a training dataset on HDFS.
L7-L8 … The learned DL model is applied to extract features from a feature dataset on HDFS.
L9-L12 … MLlib uses the extracted features to perform non-deep learning (more specifically, logistic regression for classification).
L13 … You save the classification model onto HDFS.

As illustrated in Figure 4, CaffeOnSpark enables deep learning steps to be seamlessly embedded in Spark applications. It eliminates unwanted data movement in traditional solutions (as illustrated in Figure 1), and enables deep learning to be conducted on big-data clusters directly. Direct access to big data and massive computation power are critical for DL to find meaningful insights in a timely manner.
CaffeOnSpark uses the same configuration files for solvers and neural networks as standard Caffe. As illustrated in our example, the neural network has a MemoryData layer with two extra parameters:

source_class, specifying a data source class

source, specifying the dataset location.
The initial CaffeOnSpark release has several built-in data source classes (including com.yahoo.ml.caffe.LMDB for LMDB databases and com.yahoo.ml.caffe.SeqImageDataSource for Hadoop sequence files). Users can easily introduce customized data source classes to interact with existing data formats.

CaffeOnSpark applications are launched by standard Spark commands, such as spark-submit. Here are two examples of spark-submit commands. The first command uses CaffeOnSpark to train a DNN model and save it onto HDFS. The second command is a custom Spark application that embeds CaffeOnSpark along with MLlib.
First command:

spark-submit --files caffenet_train_solver.prototxt,caffenet_train_net.prototxt --num-executors 2 --class com.yahoo.ml.caffe.CaffeOnSpark caffe-grid-0.1-SNAPSHOT-jar-with-dependencies.jar -train -persistent -conf caffenet_train_solver.prototxt -model hdfs:///sample_images.model -devices 2

Second command:

spark-submit --files caffenet_train_solver.prototxt,caffenet_train_net.prototxt --num-executors 2 --class com.yahoo.ml.caffe.examples.MyMLPipeline caffe-grid-0.1-SNAPSHOT-jar-with-dependencies.jar -features fc8 -label label -conf caffenet_train_solver.prototxt -model hdfs:///sample_images.model -output hdfs:///image_classifier_model -devices 2

System Architecture
Figure 5: System Architecture

Figure 5 describes the system architecture of CaffeOnSpark. We launch Caffe engines on GPU devices or CPU devices within the Spark executor by invoking a JNI layer with fine-grained memory management. Unlike traditional Spark applications, CaffeOnSpark executors communicate with each other through an MPI allreduce-style interface over TCP/Ethernet or RDMA/InfiniBand. This Spark+MPI architecture enables CaffeOnSpark to achieve performance similar to that of dedicated deep learning clusters.
Many deep learning jobs are long running, and it is important to handle potential system failures. CaffeOnSpark snapshots training state periodically, so a job can resume from its previous state after a failure.
Open Source
In the last several quarters, Yahoo has applied CaffeOnSpark to several projects, and we have received much positive feedback from our internal users. Flickr teams, for example, made significant improvements in image recognition accuracy with CaffeOnSpark by training with millions of photos from the Yahoo Webscope Flickr Creative Commons 100M dataset on Hadoop clusters.
CaffeOnSpark is beneficial to both the deep learning community and the Spark community. In order to advance the fields of deep learning and artificial intelligence, Yahoo is happy to release CaffeOnSpark at github.com/yahoo/CaffeOnSpark under the Apache 2.0 license.
CaffeOnSpark can be tested on AWS EC2 or on your own Spark clusters. Please find the detailed instructions in the Yahoo GitHub repository, and share your feedback at [email protected]. Our goal is to make CaffeOnSpark widely available to deep learning scientists and researchers, and we welcome contributions from the community to make that happen.

How to Set Up SSO to the AWS Management Console for Multiple Accounts by Using AD FS and SAML 2.0

Post Syndicated from Alessandro Martini original https://blogs.aws.amazon.com/security/post/Tx2989L4392V75K/How-to-Set-Up-SSO-to-the-AWS-Management-Console-for-Multiple-Accounts-by-Using-A

AWS supports Security Assertion Markup Language (SAML) 2.0, an open standard for identity federation used by many identity providers (IdPs). SAML enables federated single sign-on (SSO), which enables your users to sign in to the AWS Management Console or to make programmatic calls to AWS APIs by using assertions from a SAML-compliant IdP. Many of you maintain multiple AWS accounts (for example, production, development, and test accounts), and have asked how to use SAML to enable identity federation to those accounts. Therefore, in this blog post I will demonstrate how you can enable federated users to access the AWS Management Console with multiple AWS accounts and SAML.

If you use Microsoft Active Directory for corporate directories, you may already be familiar with how Active Directory and AD FS work together to enable federation, as described in the AWS Security Blog post, Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0. As a result, I decided to use Active Directory with AD FS as the example IdP in this post.

To automate both the installation and configuration of AD FS and Active Directory, I will use Windows PowerShell in this post. By leveraging Windows PowerShell, you eliminate the manual installation and configuration steps, and allow yourself to focus on the high-level process.

If you want to manage access to all your AWS accounts with Active Directory and AD FS, you’ve come to the right place!

Background

To set up your Windows Active Directory domain, you have many options. You can use an Amazon EC2 instance and set up your own domain with dcpromo or by installing the Active Directory role (if using Windows Server 2012 and later). You can automate this process by using an AWS CloudFormation template that creates a Windows instance and sets up a domain for you. Alternatively, you may want to create a Simple AD with AWS Directory Service. Information about how to manage these directories, join EC2 instances to the domain, and create users and groups is in our documentation.

First things first

With SAML federation, AWS requires the IdP to issue a SAML assertion with some mandatory attributes (known as claims). This AWS documentation explains how to configure the SAML assertion. In short, you need the assertion to contain:

An attribute of name https://aws.amazon.com/SAML/Attributes/Role (note this is not a URL to a resource, but a custom attribute for our AWS Security Token Service [STS]). Its value must be at least one role/provider pair as a comma-separated list of their Amazon Resource Names (ARNs), for example, arn:aws:iam::123456789012:saml-provider/ADFS,arn:aws:iam::123456789012:role/ADFS-Production. Because the ARNs are unique per AWS account, this information tells AWS to which account you want to federate.

An attribute of name https://aws.amazon.com/SAML/Attributes/RoleSessionName (again, this is just a definition of type, not an actual URL) with a string value. This is the federated user’s friendly name in AWS.

A name identifier (NameId) that is used to identify the subject of a SAML assertion.

AWS has recently published troubleshooting steps in our documentation about how to debug a SAML response from your IdP. In my experience, the problem usually is related to the three attributes mentioned above: they are either missing, misspelled (remember the names are cAsE sEnSitiVe!), or they don’t contain the expected values. If you are struggling with SAML federation, you should always start by first collecting a copy of the SAML response you are sending to AWS.

Don’t know how to collect a copy of the SAML response? Check our documentation, and then decode and troubleshoot the response.
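A SAML response captured from the browser is a base64-encoded XML document, so any quick decoding step works for inspection. The following small Python sketch (not an AWS tool, just an illustration) reads a file containing the captured SAMLResponse value and pretty-prints the XML so you can check the Role, RoleSessionName, and NameID values.

import base64
import sys
import xml.dom.minidom

# Save the base64-encoded SAMLResponse form value to a file and pass its path
# as the first argument, for example: python decode_saml.py response.b64
with open(sys.argv[1]) as f:
    encoded = f.read().strip()

xml_text = base64.b64decode(encoded).decode("utf-8")
print(xml.dom.minidom.parseString(xml_text).toprettyxml(indent="  "))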

Use case

A company, Example Corp., wants:

Federated identity access for specific groups of users in its organization.

To manage federation across multiple AWS accounts.

To deal with three populations of users:

Users that will access 1 account with 1 role (1:1).

Users that will access multiple accounts with 1 role (N:1).

Users that will access multiple accounts with multiple roles (N:M).

Example Corp. is using Active Directory, and they want to use AD FS to manage federation centrally. Example Corp. wants to federate to these two AWS accounts: 123456789012 and 111122223333.

Prepare your environment

The blog post, Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0, shows how to prepare Active Directory and install AD FS 2.0 on Windows Server 2008 R2. In this blog post, we will install AD FS 3.0 on Windows Server 2012 R2. AD FS 3.0 cannot be installed on Windows Server 2008 R2 and earlier, so make sure you pick the right version of Windows Server. The AD FS 3.0 and AD FS 2.0 installations and configurations are very similar; therefore, I decided to use this blog post as a chance to show how to do the same with Windows PowerShell. I will report the steps from the GUI as well, but more as additional reading at the end of the blog post.

I like how Windows PowerShell can make the configuration steps easy. To make things even easier for you here, I have kept the same naming convention from Jeff Wierer’s blog post for the claim rules, Active Directory groups, and IAM entities.

The .zip file with the collection of related scripts contains:

Two folders: Logs (where log files are stored) and Utilities (where this PowerShell script is saved).

The following scripts:

00-Configure-AD.ps1 – It simplifies Active Directory group and user creation as well as the configuration required to leverage the federation solution explained in this post.

01-Install-ADFS.ps1 – It installs AD FS 3.0 on Windows Server 2012 R2 and downloads the federation metadata.

02-Configure-IAM.ps1 – It creates an identity provider and two IAM roles in the AWS account you choose.

03-Configure-ADFS.ps1 – It creates a relying party trust to AWS by using the following templates:

auth.txt

claims.txt

Extract the file on the Windows Server 2012 R2 computer you designated for the AD FS 3.0 installation. Also, install AWS Tools for Windows PowerShell on that computer, because this is required to complete the IAM configuration from the command line. You don’t need to configure a credential profile at this time.

General workflow

These are the steps of the general workflow:

The user goes to the AD FS sign-in page to authenticate.

AD FS authenticates the user against Active Directory.

Active Directory returns the user’s information.

AD FS dynamically builds ARNs by using Active Directory group memberships for the IAM roles and user attributes for the AWS account IDs, and sends a signed assertion to AWS STS.

The user gets to the AWS role selection page, where he can choose which account to access and which role to assume.

AWS STS is the single point of access for all SAML-federated access. The ARNs in the SAML response are used to identify your SAML provider and IAM role in your destination account. The following section explains how to simplify administration for the ARNs in your AD FS server, providing custom claim rule code examples.
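The console sign-in flow handles this exchange for you, but the same pair of ARNs also drives programmatic federation. As a rough illustration outside of Example Corp.'s console-focused setup, the following Boto3 sketch exchanges a base64-encoded SAML assertion for temporary credentials by calling AWS STS directly; the ARNs are placeholders, and the assertion file is assumed to hold a response captured from AD FS.

import boto3

# AssumeRoleWithSAML does not require AWS credentials; the SAML assertion is the proof of identity.
sts = boto3.client("sts")

with open("assertion.b64") as f:  # hypothetical file holding the base64-encoded assertion
    saml_assertion = f.read().strip()

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/ADFS-Production",      # placeholder role ARN
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",   # placeholder provider ARN
    SAMLAssertion=saml_assertion,
)

credentials = response["Credentials"]  # temporary AccessKeyId, SecretAccessKey, and SessionToken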

The solution in action

I want to start from the end to show you how the user experience will look.

I have a user called Bob who is a member of two Active Directory groups:

AWS-Dev

AWS-Production

Note: You need to enable View > Advanced Features in Active Directory Users and Computers to see the Attribute editor tab.

This user has two AWS account IDs in the url attribute, as shown in the following images.

Bob then connects to https://adfs.example.com/adfs/ls/idpinitiatedsignon.aspx, where he can pick Amazon Web Services as the destination application after he has authenticated, as shown in the following image.

When Bob gets to the AWS role selection page, he gets 4 possible choices (2 choices for each of the 2 accounts displayed) as the combination of the groups he belongs to and the AWS account IDs from the url attribute, as shown in the following image. Thanks to the new role selection for SAML-based single sign-on, it is easier for the user to understand the destination account he would access.

This workflow is summarized in the following diagram.

Let’s now see how to use my Windows PowerShell scripts to set up this solution.

Active Directory configuration (Windows PowerShell: 00-Configure-AD.ps1)

The first script, 00-Configure-AD.ps1, can be used to create two Active Directory groups (AWS-Production and AWS-Dev) and a user with a password of your choice. The script asks you many questions so that you can either create new users or assign permissions to already existing users. Let’s see how it works in more detail.

To run the scripts, I launch Windows PowerShell with administrative privileges (see the following screenshot) from the server where I will install AD FS 3.0. The machine is already joined to the example.com domain, so I connect using my Domain Administrator user, Alessandro.

I download the scripts to my desktop, and after unzipping the file, I launch the script located in my AD FS folder on my desktop:

PS C:\Users\alessandro\Desktop\ADFS> .\00-Configure-AD.ps1

The script will ask some questions about what you want to do. In order, it will ask for (based on your answers):

Active Directory AWS groups creation.

Do you want to create two AD groups called AWS-Production and AWS-Dev?

AD FS service account creation.

Do you want to create an AD FS service account? A user name and password will be requested.

How many new Active Directory users do you want to create?

List the AWS account IDs you want this user to access (for example, 123456789012, 111122223333).

Active Directory group membership for AWS access.

What level of access do you want to grant?

How many existing Active Directory users do you want to grant access to AWS?

Type the user name of the user you want to manage.

AWS account association.

Do you want to keep the existing AWS account associations?

Check the current Active Directory group membership for AWS access.

Do you want to keep [GROUP MEMBERSHIP]?

Active Directory group membership for AWS access.

What level of access do you want to grant?

The following screenshot shows the workflow for the creation of user Bob, who is assigned to the AWS accounts 123456789012 and 111122223333; he is also a member of AWS-Production.

My answers to the questions of the script are in red:

Active Directory AWS groups creation

Do you want to create two AD groups called AWS-Production and AWS-Dev? Y

AD FS service account creation

Do you want to create an AD FS service account? User name and password will be requested. Y

A credential request window allows me to type the user name and password for the user creation.

How many new Active Directory users do you want to create? 1

A credential request window allows me to type the user name and password for the user creation.

List the AWS account IDs you want this user to access (such as 123456789012,111122223333)  123456789012,111122223333

Active Directory group membership for AWS access

What level of access do you want to grant? P

How many existing Active Directory users do you want to grant access to AWS? 0

If you don’t need to create the Active Directory groups AWS-Production and AWS-Dev, you can simply type N when asked. You can do the same thing for the AD FS service account.

The provided script and the steps just outlined do the following in your domain:

Create two Active Directory Groups named AWS-Production and AWS-Dev.

Create the AD FS service account ADFSSVC. This account will be used as the AD FS service account later on. This account is not associated with any Active Directory group because this is a service account.

Create a user named Bob.

Give Bob an email address ([email protected]). This is automatically done by the script by combining the user name and the domain name.

Associate Bob with two AWS account IDs: 123456789012 and 111122223333.

Add Bob to the AWS-Production group.

If you have an existing user you want to manage, you can run the script again and tell the script how many existing users you want to manage: How many existing Active Directory users do you want to grant access to AWS?

In the following example, I don’t need to create the Active Directory groups and AD FS service account again, but I manage Bob (who now already exists) and add him to the AWS-Dev group.

These are my answers to the questions:

Active Directory AWS groups creation

Do you want to create two AD groups called AWS-Production and AWS-Dev? N

AD FS service account creation

Do you want to create an AD FS service account? User name and password will be requested. N

How many new Active Directory users do you want to create? 0

How many existing Active Directory users do you want to grant access to AWS? 1

Enter the user name of the user you want to manage. Bob

AWS account association.

Do you want to keep the existing AWS account associations? Y

Check the current Active Directory group membership for AWS access.

Do you want to keep [GROUP MEMBERSHIP]? Y

Active Directory group membership for AWS access.

What level of access do you want to grant? D

The code is available to you, and it can be adjusted to run in noninteractive mode and to accept parameters to run in a batch script.

AD FS installation (Windows PowerShell: 01-Install-ADFS.ps1)

The script I will use now is 01-Install-ADFS.ps1. This script will install the AD FS 3.0 Windows role, create a self-signed certificate, and configure AD FS for the first use. The configuration will ask for the credentials of the service account you created before (ADFSSVC). Please note you should provide the user name in NETBIOS format (for example, EXAMPLE\adfssvc).

Before you can move to the next step and create a SAML provider in IAM, you need the federation metadata document for your AD FS federation server, which you can download from https://<yourservername>/FederationMetadata/2007-06/FederationMetadata.xml. The federation metadata is automatically downloaded in the same folder as the script (the downloaded file is called federationmetadata.xml).

The warning about the SPN is a known issue that can be fixed by running the following command at the command line (make sure you run the command line as an administrator):

setspn -a host/localhost adfssvc

Note that adfssvc is the name of the service account I used.

If the command is successful, you will see output like this:

Registering ServicePrincipalNames for CN=ADFSSVC,CN=Users,DC=example,DC=com
    host/localhost

IAM configuration (Windows PowerShell: 02-Configure-IAM.ps1)

The next script, 02-Configure-IAM.ps1, will create an identity provider and 2 IAM roles in a specified AWS account. The SAML provider name that is created is ADFS, and the IAM roles are called ADFS-Production and ADFS-Dev. The roles trust the SAML provider ADFS.

The script will first ask how many AWS accounts you want to configure. Here is the question and my answer (in red): How many AWS accounts do you want to configure? 1

You will now be asked for the IAM access key and secret access keys for each of the AWS accounts you want to configure. The IAM user is used to create the required IAM entities, and it must have enough permissions to create an identity provider and a role.

Note: The script creates a SAML provider called ADFS and 2 IAM roles called ADFS-Dev and ADFS-Production. If you are creating these IAM objects manually, remember that you need to use the same names that I use in this blog post. This solution assumes the SAML provider and the IAM role names are the same across all the AWS accounts.
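If you would rather see what the script is doing (or reproduce it without PowerShell), here is a minimal, hypothetical sketch of the same IAM setup using the AWS SDK for Python (Boto3). It creates the SAML provider named ADFS from the downloaded federationmetadata.xml and one of the roles; it is an illustration of the steps, not the script itself, and assumes credentials for the target account are configured.

import json
import boto3

iam = boto3.client("iam")  # credentials must belong to the target AWS account

# Create the SAML provider from the AD FS federation metadata document
with open("federationmetadata.xml") as f:
    provider = iam.create_saml_provider(Name="ADFS", SAMLMetadataDocument=f.read())

# Trust policy that lets federated users from this provider assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider["SAMLProviderArn"]},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
    }],
}

iam.create_role(RoleName="ADFS-Dev", AssumeRolePolicyDocument=json.dumps(trust_policy))
# Repeat for ADFS-Production, then attach the permissions policies each role needs.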

Here is the output that I got, which includes the IAM roles information.

I can then run the script again for the other AWS account, 111122223333.

AD FS configuration (Windows PowerShell: 03-Configure-ADFS.ps1)

The last script (03-Configure-ADFS.ps1) configures AD FS by creating the AWS relying party trust. All the required claim rules are added. You can see the rules in the .txt files that are in the .zip file you downloaded at the beginning of this process. If you are interested in the logic behind the code and how I came up with it, see the “Under the hood” section near the end of this blog post.

auth.txt

@RuleTemplate = "AllowAllAuthzRule"
=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value =
"true");

claims.txt

@RuleTemplate = "MapClaims"
@RuleName = "NameId"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent");
 
@RuleTemplate = "LdapClaims"
@RuleName = "RoleSessionName"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("https://aws.amazon.com/SAML/Attributes/RoleSessionName"), query = ";mail;{0}", param = c.Value);
 
@RuleName = "Get AD Groups"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => add(store = "Active Directory", types = ("http://temp/variable"), query = ";tokenGroups;{0}", param = c.Value);
 
@RuleName = "Get AWS Accounts from User attributes"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => add(store = "Active Directory", types = ("http://temp/AWSAccountsFromUser"), query = ";url;{0}", param = c.Value);
 
@RuleName = "Dynamic ARN – Adding AWS Accounts"
c:[Type == "http://temp/AWSAccountsFromUser"]
 => add(Type = "http://temp/AWSAccountsFromUser2", Value = RegExReplace("arn:aws:iam::AWSACCOUNT:saml-provider/ADFS,arn:aws:iam::AWSACCOUNT:role/ADFS-", "AWSACCOUNT", c.Value));
 
@RuleName = "Dynamic ARN – Adding Roles"
c1:[Type == "http://temp/AWSAccountsFromUser2"]
 && c2:[Type == "http://temp/variable", Value =~ "(?i)^AWS-"]
 => issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = RegExReplace(c2.Value, "AWS-", c1.Value)); 

Run the script in a Windows PowerShell window launched as administrator, as shown in the following image.

The browser should be automatically launched with the AD FS sign-in page.

You will notice that an application is already set up (Amazon Web Services). You can now authenticate with the user Bob (or whoever you have previously configured with the Windows PowerShell script 00-Configure-AD.ps1), and you should be able to federate to AWS.

If you can’t access your AWS accounts, start troubleshooting by first collecting a copy of the SAML response you are sending to AWS (this is explained in our documentation). You can then decode and troubleshoot it.

How to handle exceptions

The solution presented so far works well if you have users with the same permissions across different accounts. However, what if a user must have Production access in one account and only Dev access in a second account? I will now show a few additional claim rules you can add at the end of the claim rules chain. Based on your needs, you can pick the claim rule code for the exception that you need. Because this is a case-by-case choice, I will show how to manage the exceptions in the UI. First, you need to open the AD FS Microsoft Management Console on your AD FS server.

Expand Trust Relationships, click Relying Party Trusts, right-click the relying party trust Amazon Web Services, and then click Edit Claim Rules.

You then should see the 6 rules shown in the following image.

Each of the following paragraphs will explain how to add a seventh rule. You can add as many rules as needed to manage multiple exceptions at the same time.

Exception—Static ARNs for DOMAIN\user

With the following custom claim rule, we check the Windows account name of the authenticated user. If the user matches our condition, we issue specific ARNs for him.

You can place this additional custom claim rule after all the other claim rules you have already created, as shown in the following image.

The next rule, Exception – Static ARNs for DOMAIN\user, is another custom claim rule. Follow these steps to create it.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Exception – Static ARNs for DOMAIN\user, and then in Custom rule, enter the following:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Value == "DOMAINusername"]
 => issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = "arn:aws:iam::YOURACCOUNTID:saml-provider/ADFS,arn:aws:iam::YOURACCOUNTID:role/ADFS-Dev");

Code explanation

If the user is DOMAIN\username, issue a claim of type https://aws.amazon.com/SAML/Attributes/Role with value:

"arn:aws:iam::YOURACCOUNTID:saml-provider/ADFS,arn:aws:iam::YOURACCOUNTID:role/ADFS-Dev"

In this example, user EXAMPLE\Bob would be granted access to ADFS-Dev in the specified account.

Now, Bob will be able to pick ADFS-Dev from account 444455556666 as well.

This workflow is summarized in the following diagram.

EXAMPLE\Bob goes to the AD FS sign-in page to authenticate.

AD FS authenticates the user against Active Directory.

Active Directory returns the user’s information. EXAMPLE\Bob belongs to two AD groups (AWS-Production and AWS-Dev) and his user object attribute refers to two AWS accounts (123456789012 and 111122223333).

AD FS dynamically builds four ARNs by using Active Directory group memberships for the IAM roles and user attributes for the AWS account IDs. Additionally, AD FS adds the ARNs for a third AWS account (444455556666) with a single IAM role (ADFS-Dev) and sends a signed assertion to STS.

EXAMPLE\Bob gets to the AWS role selection page, where he can choose among accounts 123456789012, 111122223333, and 444455556666. In the first two accounts, he can choose between ADFS-Production and ADFS-Dev. For account 444455556666, only ADFS-Dev is available. All of these are IAM roles created in the specific accounts.

Exception—Static ARNs for anyone

This exception is to grant access to certain AWS accounts and IAM roles to any authenticated user. Again, you can place this exception at the end of the claim rules chain you have defined so far.

Follow these steps to create this custom claim rule:

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Exception – Static ARNs for anyone, and then in Custom rule, enter the following code. Make sure to change the parts in red to your AWS account ID and the required IAM role name.

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = "arn:aws:iam::YOURACCOUNTID:saml-provider/ADFS,arn:aws:iam::YOURACCOUNTID:role/ADFS-Dev");

You can repeat the same logic for as many accounts as you need. The result is that any authenticated user is granted access to the specified account (444455556666) with the specified IAM role (ADFS-Dev).

Code explanation

If there is an incoming claim for an authenticated user (http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname), you issue a claim of type https://aws.amazon.com/SAML/Attributes/Role with value "arn:aws:iam::YOURACCOUNTID:saml-provider/ADFS,arn:aws:iam::YOURACCOUNTID:role/ADFS-Dev". In this example, any authenticated user will be granted access to ADFS-Dev in the specified account.

This workflow is summarized in the following diagram.

A domain user goes to the AD FS login page to authenticate.

AD FS authenticates the user against Active Directory.

Active Directory returns the user’s information. Let’s assume this domain user does not belong to any Active Directory group that starts with AWS- (in other words, AWS-Production, AWS-Dev), and his user object attribute contains no AWS account.

AD FS has no information to build dynamic ARNs from Active Directory group memberships and user attributes. AD FS has a rule to generate static ARNs for any authenticated user for a specific AWS account (444455556666) with a single IAM role (ADFS-Dev), and sends a signed assertion to AWS STS.

The domain user can only assume one role in one AWS account. Therefore, the AWS role selection page is automatically skipped. The domain user will get access to account 444455556666 with the ADFS-Dev role, which must be created in the AWS account before any access is attempted.

Handling exceptions with an ad hoc attribute store (Microsoft SQL Server)

AD FS can retrieve information from Active Directory. Additionally, AD FS provides built-in capabilities to read information from a SQL database. For convenience, I am using Microsoft SQL Server.

I launched an EC2 instance with SQL Server installed, and I joined it to my Active Directory Domain as SQL.example.com. Because I need a domain-joined SQL server, I am not using Amazon RDS, but you can decide to use License Mobility, or use an Amazon-provided Amazon Machine Image.

The idea is to create a new database that AD FS can access to read information from a specific table. For each Active Directory user that I want to grant access to AWS, I store the values for the role attribute. This attribute must be the comma-separated list of the SAML provider and IAM role ARNs, like arn:aws:iam::YOURACCOUNTID:saml-provider/SAMLPROVIDERNAME,arn:aws:iam::YOURACCOUNTID:role/ROLENAME. You can then customize the parts in red and store the resulting strings in the database so that you can retrieve them when the user authenticates.

The downside of using SQL Server is that you are using a third system that you need to manage for your federation (Active Directory, AD FS, and SQL Server). High availability and fault tolerance for a SQL database are challenges for DBAs. On the other hand, you won’t have to change any claim rules in AD FS if a new exception needs to be defined for a user; only an update to the DB table is required, because AD FS queries it during the claim rules chain evaluation.

SQL configuration

On the database EC2 instance, I opened the SQL Server Management Studio and connected with a user that has enough privileges to create a new database there. Then I:

Created a new database named ADFS.

Created a new table named AWS with columns UserId and RoleARN. Here is a query example to create this table:

CREATE TABLE AWS
( UserId varchar(100) NOT NULL,
  RoleARN varchar(200) NOT NULL,
  CONSTRAINT AWS_pk PRIMARY KEY (UserId,RoleARN)
); 

Added values to the new table. For example, if I want to give Bob access to the IAM role ADFS-Dev in two other AWS accounts (777788889999 and 444455556666), here are the values to add:

UserId: EXAMPLE\Bob
RoleARN: arn:aws:iam::777788889999:saml-provider/ADFS,arn:aws:iam::777788889999:role/ADFS-Dev

UserId: EXAMPLE\Bob
RoleARN: arn:aws:iam::444455556666:saml-provider/ADFS,arn:aws:iam::444455556666:role/ADFS-Dev

Note: The primary key of the table is the UserId and the RoleARN together, so you can define multiple RoleARNs for the same user. Please change the parts in red to your AWS account information.

Make sure the AD FS account has read access to the SQL database and table.

AD FS configuration

To configure a new attribute store, you first need to open the AD FS Microsoft Management Console.

Under Trust Relationships, right-click Attribute Stores, and then click Add Attribute Store.

Type the following values:

Display name: SQL

Attribute store type: SQL

Connection string: Server=SQL;Database=ADFS;Integrated Security=True

Click OK. Right-click the relying party Amazon Web Services, and then click Edit Claim Rules.

This rule—Exception – ARNs from SQL—is again a custom claim rule.

In order to create this rule, follow these steps.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Exception – ARNs from SQL, and then in Custom rule, enter the following:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(store = "SQL", types = ("https://aws.amazon.com/SAML/Attributes/Role"), query = "SELECT RoleARN from dbo.AWS where UserId= {0}", param = c.Value);

Code explanation

If there is an incoming claim that shows you are authenticated (http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname), you then issue a claim of type https://aws.amazon.com/SAML/Attributes/Role. Its value is the result of the SQL query on the AWS table in the ADFS database, where the column UserId is equal to the Windows account name of the current user. Because RoleARN in the database already contains the comma-separated list of SAML provider and IAM role ARNs, the returned value is already a valid role claim.

The entire workflow is summarized in the following diagram.

EXAMPLE\Bob goes to the AD FS login page to authenticate.

AD FS authenticates the user against Active Directory.

Active Directory returns the user’s information. EXAMPLE\Bob belongs to two AD groups (AWS-Production and AWS-Dev), and his user object attribute refers to two AWS accounts (123456789012 and 111122223333).

AD FS queries the SQL server to get possible exceptions defined for EXAMPLE\Bob.

SQL returns two Role attributes that refer to account 777788889999, IAM role ADFS-Dev, and account 444455556666, IAM role ADFS-Dev.

AD FS dynamically builds four ARNs by using Active Directory group memberships for the IAM roles and user attributes for the AWS account IDs. Additionally, AD FS adds the ARNs for two additional AWS accounts (444455556666 and 777788889999) with a single IAM role (ADFS-Dev) and sends a signed assertion to AWS STS.

EXAMPLE\Bob gets to the AWS role selection page, where he can choose among accounts 123456789012, 111122223333, 444455556666, and 777788889999. In the first two accounts, he can choose between ADFS-Production and ADFS-Dev. For accounts 444455556666 and 777788889999, only ADFS-Dev is available. All of these are IAM roles created in each specific account.

Under the hood—AD FS claim rule explanation (from the GUI)

I will start from the initial AD FS configuration so that you can understand exactly what the provided Windows PowerShell scripts do.

In these steps, I will add the claim rules so that the elements AWS requires and AD FS doesn’t provide by default (NameId, RoleSessionName, and Roles) are added to the SAML authentication response. When you’re ready, open the AD FS Microsoft Management Console (MMC).

Under Trust Relationships, click Relying Party Trusts, right-click the relying party (in this case Amazon Web Services), and then click Edit Claim Rules (see the following screenshot).

Follow the subsequent procedures to create the claim rules for NameId, RoleSessionName, and Roles, which are three mandatory attributes for the SAML response that AD FS will send to AWS STS.

Adding NameId

A name identifier, represented by the NameID element in SAML 2.0, is generally used to identify the subject of a SAML assertion. One reason for including an identifier is to enable the relying party to refer to the subject later, such as in a query or a sign-out request. You will set this attribute to the Windows account name of the user as follows.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

Select Transform an Incoming Claim, and then click Next (see the following screenshot).

Use the following settings:

Claim rule name: NameId

Incoming claim type: Windows account name

Outgoing claim type: Name ID

Outgoing name ID format: Persistent Identifier

Pass through all claim values: Select this option

Then click Finish.

Adding a RoleSessionName

You will use the email address of an authenticated user as the RoleSessionName. You can query Active Directory for this attribute as follows.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send LDAP Attributes as Claims (as shown in the following image).

Use the following settings:

Claim rule name: RoleSessionName

Attribute store: Active Directory

LDAP Attribute: E-Mail-Addresses

Outgoing Claim Type: https://aws.amazon.com/SAML/Attributes/RoleSessionName 

Then click Finish.

Adding Roles

Unlike the two previous claims, here I use custom rules to send role attributes. The role must be a comma-separated list of two ARNs: the SAML provider and the IAM role you want to assume. Generate this string by retrieving all the authenticated user’s Active Directory groups and then matching the groups that start with AWS- to IAM roles of a similar name. I used the names of these groups to create ARNs of IAM roles in the Example Corp. AWS accounts. To know if the user can access one or more of my accounts, I query a user attribute. With a few custom claim rules, you can use regular expressions to identify these special Active Directory groups, get the user attribute, and build these ARN strings.

Sending role attributes requires four custom rules. The first rule retrieves all the authenticated user’s Active Directory group memberships; the second rule retrieves the AWS accounts of the user; the third and fourth perform the transformation into the role claim.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Get AD Groups, and then in Custom rule, enter the following:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => add(store = "Active Directory", types = ("http://temp/variable"), query = ";tokenGroups;{0}", param = c.Value);

Click Finish.

This custom rule uses a script in the claim rule language that retrieves all the groups the authenticated user is a member of and places them into a temporary claim named http://temp/variable (a variable you can access later). I use this in the next rule to transform the groups into IAM role ARNs.

Dynamically generate multi-account role attributes

Of the AD FS claim rules that you will have at the end of the configuration in the AD FS MMC, NameId, RoleSessionName, and Get AD Groups are the ones just defined (see the following screenshot). Get AWS Accounts from User attributes, Dynamic ARN – Adding AWS Accounts, and Dynamic ARN – Adding Roles are custom claim rules, which I explain in this section.

Get AWS accounts from user attributes

We now define a claim rule to get the AWS accounts a user can access from his Active Directory user object attributes. We will use the Active Directory user attribute url, because this is an attribute defined by default in Active Directory, so no Active Directory schema extension is required. You can use a different user attribute instead, if url is already in use in your organization.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Get AWS Accounts from User attributes, and then in Custom rule, enter the following:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => add(store = "Active Directory", types = ("http://temp/AWSAccountsFromUser"), query = ";url;{0}", param = c.Value);

Code explanation

Let’s analyze the code in this example:

The “if statement” condition:

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]

The special operator:

=>

The add statement:

add(store = "Active Directory", types = ("http://temp/AWSAccountsFromUser"), query = ";url;{0}", param = c.Value);

For each rule defined, AD FS checks the input claims, evaluates them against the condition, and applies the statement if the condition is true. The variable c in the syntax is an incoming claim that you can test conditions against and whose values you can use in the statement that follows. In this example, the condition checks whether there is an incoming claim whose type is http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname.

Then the rule adds a claim. Using the add statement instead of the issue statement adds a claim to the incoming claim set, not to the outgoing token; it works like a temporary variable you can use in subsequent rules. In this example, the rule adds a claim of type http://temp/AWSAccountsFromUser. Its value is the result of the query = ";url;{0}" against the incoming claim http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname, which basically means, “Get the url attribute of the authenticated user object.”

Note: You can read the AD FS 2.0 Claims Rule Language Primer to learn more about claims rule language.

Dynamic ARN—Adding AWS Accounts

Starting from a template for the ARN string, this rule replaces the placeholder AWS account ID with the AWS account IDs that the authenticated user has been granted access to.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Dynamic ARN – Adding AWS Accounts, and then in Custom rule, enter the following:

c:[Type == "http://temp/AWSAccountsFromUser"]
 => add(Type = "http://temp/AWSAccountsFromUser2", Value = RegExReplace("arn:aws:iam::AWSACCOUNT:saml-provider/ADFS,arn:aws:iam::AWSACCOUNT:role/ADFS-", "AWSACCOUNT", c.Value));

Note: Copy the code as it is and make no changes. AWSACCOUNT is a placeholder that will be automatically replaced by the real AWS account IDs.

Code explanation

If there is an incoming claim of type “http://temp/AWSAccountsFromUser”, add another claim of type “http://temp/AWSAccountsFromUser2” whose value is the template string “arn:aws:iam::AWSACCOUNT:saml-provider/ADFS,arn:aws:iam::AWSACCOUNT:role/ADFS-” with AWSACCOUNT replaced by each value contained in the incoming claim. The incoming claim contains all the AWS account IDs the user can access. The output looks like the following (you would have your own AWS account IDs in place of the fictitious ones):

“arn:aws:iam::123456789012:saml-provider/ADFS,arn:aws:iam::123456789012:role/ADFS-”

“arn:aws:iam::111122223333:saml-provider/ADFS,arn:aws:iam::111122223333:role/ADFS-”

Note: Because we are adding and not issuing a claim, you won’t actually see any http://temp/AWSAccountsFromUser2 claim in your SAML response.

Dynamic ARN—Adding Roles

I will now replace the IAM role name placeholder based on the Active Directory group membership of the user.

In the Edit Claim Rules for Amazon Web Services dialog box, click Add Rule.

In the Claim rule template list, select Send Claims Using a Custom Rule, and then click Next.

For Claim rule name, type Dynamic ARN – Adding Roles, and then in Custom rule, enter the following:

c1:[Type == "http://temp/AWSAccountsFromUser2"]
 && c2:[Type == "http://temp/variable", Value =~ "(?i)^AWS-"]
 => issue(Type = "https://aws.amazon.com/SAML/Attributes/Role", Value = RegExReplace(c2.Value, "AWS-", c1.Value));

Code explanation

If there is an incoming claim of type “http://temp/AWSAccountsFromUser2” and an incoming claim of type “http://temp/variable” whose value starts with “AWS-”, issue an outgoing claim of type “https://aws.amazon.com/SAML/Attributes/Role” whose value is the value of the second condition claim (c2) with the string “AWS-” replaced by the value of the first condition claim (c1).

Claim c2 contains the groups that start with “AWS-” that the user belongs to, and claim c1 contains one string for each AWS account the user has access to, in the following form (these are fictitious AWS account IDs):

“arn:aws:iam::123456789012:saml-provider/ADFS,arn:aws:iam::123456789012:role/ADFS-”

“arn:aws:iam::111122223333:saml-provider/ADFS,arn:aws:iam::111122223333:role/ADFS-”

The claim rule replaces the substring “AWS-” in each Active Directory group name with the strings above, producing the final role ARNs. For example, with the groups mentioned at the beginning of this blog post, AWS-Production and AWS-Dev, the resulting ARNs would be:

“arn:aws:iam::123456789012:saml-provider/ADFS,arn:aws:iam::123456789012:role/ADFS-Production”
“arn:aws:iam::123456789012:saml-provider/ADFS,arn:aws:iam::123456789012:role/ADFS-Dev”

“arn:aws:iam::111122223333:saml-provider/ADFS,arn:aws:iam::111122223333:role/ADFS-Production”
“arn:aws:iam::111122223333:saml-provider/ADFS,arn:aws:iam::111122223333:role/ADFS-Dev”
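
If it helps to see the combination concretely, the following minimal JavaScript sketch (not part of the AD FS configuration; it simply reuses the example account IDs, group names, and template string from this post) mimics what the last two rules compute: for each account template string and each AWS- group, the AWS- prefix is replaced with the template, yielding one role claim per account/role pair.

var accountTemplates = [
  'arn:aws:iam::123456789012:saml-provider/ADFS,arn:aws:iam::123456789012:role/ADFS-',
  'arn:aws:iam::111122223333:saml-provider/ADFS,arn:aws:iam::111122223333:role/ADFS-'
];
var adGroups = ['AWS-Production', 'AWS-Dev'];

var roleClaims = [];
accountTemplates.forEach(function (template) {
  adGroups.forEach(function (group) {
    // Same effect as RegExReplace(c2.Value, "AWS-", c1.Value) in the claim rule
    roleClaims.push(group.replace('AWS-', template));
  });
});
console.log(roleClaims);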

Summing up

Your Active Directory users now can access multiple AWS accounts with their Active Directory credentials. They first log in by using the provided AD FS login page. After each user is authenticated, AD FS is configured to get the following information related to the user from Active Directory:

The user object attribute url, which contains the AWS account the user can access.

The user group membership, which contains the IAM roles the user can access in each account.

Bob can get production access to all the accounts defined in his user attribute url by belonging to the Active Directory group AWS-Production. To grant Bob access to a new AWS account, you can update Bob’s url attribute with the new AWS account, and AD FS will automatically combine this additional account with the Active Directory groups to which Bob belongs.

Now suppose Bob should no longer assume the Production role, but the Dev role instead. You simply remove Bob from the AWS-Production Active Directory group and add him to the AWS-Dev Active Directory group. This change propagates to all his AWS accounts.

Each AWS account you have in this configuration needs to be configured in the same way. You need to create a SAML provider called ADFS with your AD FS metadata and create IAM roles that trust this provider. The IAM roles must comply with the naming convention you have defined with AD FS (in other words, AWS-Production for the Active Directory group name and ADFS-Production as the related IAM role). To make the configuration easier, I have provided this Windows PowerShell script collection that will help you configure Active Directory, AD FS, and IAM.

If you don’t want a change in either the user attribute or group membership to affect multiple accounts at the same time, you must create an exception. You can define exceptions directly in the claim rule code in AD FS, or in an attribute store such as a SQL database. The latter approach introduces a new system to manage and is therefore more complex, but if you need to add an exception for a specific user, you don’t have to change any claim rules in AD FS; AD FS can query the database to retrieve any exception for that user.

Note that the AWS Security Blog earlier this year published How to Implement Federated API and CLI Access Using SAML 2.0 and AD FS. By combining that blog post with this one, you can achieve multi-account federated API and CLI access!

I hope you found this post useful. If you have questions, please post them on the IAM forum or in the comments area below.

– Alessandro

Node.JS module to access Cisco IOS XR XML interface

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2015/03/nodejs-module-to-access-cisco-ios-xr.html

Hello to all. This is an early version of my module for Node.JS that allows you to configure routers and retrieve information over Cisco IOS XR’s XML interface. The module is in its early phases: it does not yet read IOS XR schema files, and therefore it decodes the data (in JSON) in a slightly ugly way (too many arrays). I am planning to fix that, so the format of the responses may change. Please see below the first version of the documentation I have published on GitHub.

Module for the Cisco IOS XR XML API interface

This is a small module that implements an interface to the Cisco IOS XR XML interface. The module opens and maintains a TCP session to the router, sends requests, and receives responses.

Installation

To install the module, run:

npm install node-ciscoxml

Usage

The module is very easy to use. See the methods below.

Load the module

To load and use the module, use code similar to this:

var cxml = require('node-ciscoxml');
var c = cxml( { /* ...connect options... */ } );

Module init and connect options

host (default 127.0.0.1) – the hostname of the router we connect to
port (default 38751) – the port on the router where the XML API is listening
username (default guest) – the username used for authentication, if a username is requested by the remote side
password (default guest) – the password used for authentication, if a password is requested by the remote side
connectErrCnt (default 3) – how many times the module retries to connect in case of an error
autoConnect (default true) – whether the module should automatically connect to the remote side if a request is dispatched and there is no open session already
autoDisconnect (default 60000) – how many milliseconds the module waits for another request before the TCP session to the remote side is closed. If the value is 0, it waits forever (or until the remote side disconnects). Bear in mind that setting autoConnect to false does not imply autoDisconnect set to 0/false as well.
userPromptRegex (default (Username|Login)) – the rule used to identify that the remote side is requesting a username
passPromptRegex (default Password) – the rule used to identify that the remote side is requesting a password
xmlPromptRegex (default XML>) – the rule used to identify a successful login/connection
noDelay (default true) – disables the Nagle algorithm (true)
keepAlive (default 30000) – enables or disables (value of 0) TCP keepalive for the socket
ssl (default false) – if set to true or to an object, an SSL session is opened. The Node.js TLS module is used for that, so if ssl points to an object, the TLS options are taken from it. Be careful: enabling SSL does not change the default port from 38751 to 38752. You have to set it explicitly!

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });

connect method

This method explicitly forces a connection. It accepts any of the options above.

Example:

var cxml = require('node-ciscoxml');
var c = cxml();
c.connect({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });

The connect method does not have to be used. If autoConnect is enabled (the default), the module automatically opens and closes TCP connections when needed.

connect supports a callback. Example:

var cxml = require('node-ciscoxml');
cxml().connect({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' }, function(err) {
  if (!err) console.log('Successful connection');
});

The callback may also be the only parameter. Example:

var cxml = require('node-ciscoxml');
cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' }).connect(function(err) {
  if (!err) console.log('Successful connection');
});

Example with SSL:

var cxml = require('node-ciscoxml');
var fs = require('fs');
cxml({
  host: '10.10.1.1',
  port: 38752,
  username: 'xmlapi',
  password: 'xmlpass',
  ssl: {
    // These are necessary only if you use client certificate authentication
    key: fs.readFileSync('client-key.pem'),
    cert: fs.readFileSync('client-cert.pem'),
    // This is necessary only if the server uses a self-signed certificate
    ca: [ fs.readFileSync('server-cert.pem') ]
  }
}).connect(function(err) {
  if (!err) console.log('Successful connection');
});

disconnect method

This method explicitly disconnects a connection.

sendRaw method

.sendRaw(data, callback)

Parameters:

data – a string containing a valid Cisco XML request to be sent
callback – a function that will be called when a valid Cisco XML response is received

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });
c.sendRaw('<Request><GetDataSpaceInfo/></Request>', function(err, data) {
  console.log('Received', err, data);
});

sendRawObj method

.sendRawObj(data, callback)

Parameters:

data – a JavaScript object that will be converted to a Cisco XML request
callback – a function that will be called with the valid Cisco XML response converted to a JavaScript object

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });
c.sendRawObj({ GetDataSpaceInfo: '' }, function(err, data) {
  console.log('Received', err, data);
});

rootGetDataSpaceInfo method

.rootGetDataSpaceInfo(callback)

Equivalent to .sendRawObj for the GetDataSpaceInfo command.

getNext method

Sends a GetNext request with a specific ID, so we can retrieve the rest of a previous operation if it has been truncated.

id – the iterator ID
callback – the callback with the data (in JS object format)

Keep in mind that the next response may be truncated as well, so you have to check for IteratorID every time.

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });
c.sendRawObj({ Get: { Configuration: {} } }, function(err, data) {
  console.log('Received', err, data);
  if ((!err) && data && data.Response.$.IteratorID) {
    return c.getNext(data.Response.$.IteratorID, function(err, nextData) {
      // .. code to merge data with nextData
    });
  }
  // .. code
});

sendRequest method

This method is equivalent to sendRawObj, but it automatically detects the need for GetNext requests and issues them, so the response is absolutely complete. Therefore, this should be the preferred method for sending requests that expect very large replies.

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });
c.sendRequest({ GetDataSpaceInfo: '' }, function(err, data) {
  console.log('Received', err, data);
});

requestPath method

This method is equivalent to sendRequest, but instead of an object, the request may be formatted as a simple path string. This method is not very useful for complex requests, but its value is in simplifying very simple requests. The response is a JavaScript object.

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });
c.requestPath('Get.Configuration.Hostname', function(err, data) {
  console.log('Received', err, data);
});

reqPathPath method

This is the same method as requestPath, but the response is not an object; it is a path array. The method supports an optional filter, which has to be a RegExp object; all paths and values are tested against it, and only those that match are included in the response array.

Example:

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', port: 5000, username: 'xmlapi', password: 'xmlpass' });
c.reqPathPath('Get.Configuration.Hostname', /Hostname/, function(err, data) {
  console.log('Received', data[0]);
  // The output should be something like
  // [ 'Response("MajorVersion"="1","MinorVersion"="0").Get.Configuration.Hostname("MajorVersion"="1","MinorVersion"="0")', 'asr9k-router' ]
});

This method can be very useful for retrieving simple responses and configurations.

getConfig method

This method requests the whole configuration of the remote device and returns it as an object.

Example:

c.getConfig(function(err, config) {
  console.log(err, config);
});

cliConfig method

This method is quite simple: it executes a command (or commands) in CLI configuration mode and returns the response as a JS object. Keep in mind that a configuration change in IOS XR is not effective unless it is committed!

Example:

c.cliConfig('username testuser\ngroup operator\n', function(err, data) {
  console.log(err, data);
  c.commit();
});

cliExec method

Executes a command (or commands) in CLI exec mode and returns the response as a JS object.

c.cliExec('show interfaces', function(err, data) {
  console.log(err, data ? data.Response.CLI[0].Exec[0] : data);
});

commit method

Commits the current configuration.

Example:

c.commit(function(err, data) {
  console.log(err, data);
});

lock method

Locks the configuration mode.

Example:

c.lock(function(err, data) {
  console.log(err, data);
});

unlock method

Unlocks the configuration mode.

Example:

c.unlock(function(err, data) {
  console.log(err, data);
});

Configure Cisco IOS XR for the XML agent

To configure IOS XR for remote XML configuration you have to:

Ensure you have the mgbl package installed and activated. Without it you will have no xml agent commands!

Enable the XML agent with a configuration similar to this:

xml agent
 vrf default
  ipv4 access-list SECUREACCESS
 !
 ipv6 enable
 session timeout 10
 iteration on size 100000
!

You can enable the tty and/or ssl agents as well. (Keep in mind that full filtering of XML access has to be done by the control-plane management-plane command; the XML interface does not use VTYs!)

You have to ensure that aaa is configured correctly, because the XML agent uses the default method for both authentication and authorization, and that cannot be changed (last verified with IOS XR 5.3). You must have both aaa authentication and authorization. If authorization is not set (aaa authorization default local or none), you may not be able to log in. You should also ensure that authentication and authorization share the same source (tacacs+ or local).

The default agent port is 38751 for the default agent and 38752 for SSL.

Debugging

The module uses the "debug" module to log its output. You can enable debugging by having something like this in your code:

require('debug').enable('ciscoxml');

Or set the DEBUG environment variable to ciscoxml before starting Node.JS.
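
To close with a complete workflow, here is a hedged sketch that chains the documented methods: lock the configuration, push a change with cliConfig, commit it, and unlock. The router address, credentials, and the interface description used here are made-up examples, and the default port and prompts are assumed.

var cxml = require('node-ciscoxml');
var c = cxml({ host: '10.10.1.1', username: 'xmlapi', password: 'xmlpass' });

c.lock(function(err) {
  if (err) return console.log('Could not lock the configuration', err);
  c.cliConfig('interface GigabitEthernet0/0/0/0\n description managed-by-node-ciscoxml\n', function(err) {
    if (err) return console.log('Configuration failed', err);
    c.commit(function(err) {
      if (err) return console.log('Commit failed', err);
      c.unlock(function() {
        c.disconnect();
      });
    });
  });
});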

node-netflowv9 node.js module for processing of netflowv9 has been updated to 0.2.5

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2015/03/node-netflowv9-nodejs-module-for.html

My node-netflowv9 library has been updated to version 0.2.5. There are a few new things:

Almost all of the IETF NetFlow types are decoded now, which in practice means we support IPFIX.

An unknown NetFlow v9 type no longer throws an error. It is decoded into a property named 'unknown_type_XXX', where XXX is the ID of the type.

An unknown NetFlow v9 Option Template scope no longer throws an error. It is decoded into 'unknown_scope_XXX', where XXX is the ID of the scope.

The user can overwrite how the different NetFlow types are decoded and can define decoding for new types. The same applies to scopes. This can happen on the fly, at any time.

The library supports multiple netflow collectors running at the same time.

A lot of new options and models for using the library have been introduced.

Below is the updated README.md file, describing how to use the library.

Usage

The usage of the netflowv9 collector library is very simple. You just have to do something like this:

var Collector = require('node-netflowv9');
Collector(function(flow) {
  console.log(flow);
}).listen(3000);

Or you can use it as an event provider:

Collector({ port: 3000 }).on('data', function(flow) {
  console.log(flow);
});

The flow will be presented in a format very similar to this:

{ header: { version: 9, count: 25, uptime: 2452864139, seconds: 1401951592, sequence: 254138992, sourceId: 2081 },
  rinfo: { address: '15.21.21.13', family: 'IPv4', port: 29471, size: 1452 },
  packet: Buffer <00 00 00 00 ....>,
  flow: [ { in_pkts: 3, in_bytes: 144, ipv4_src_addr: '15.23.23.37', ipv4_dst_addr: '16.16.19.165', input_snmp: 27, output_snmp: 16, last_switched: 2452753808, first_switched: 2452744429, l4_src_port: 61538, l4_dst_port: 62348, out_as: 0, in_as: 0, bgp_ipv4_next_hop: '16.16.1.1', src_mask: 32, dst_mask: 24, protocol: 17, tcp_flags: 0, src_tos: 0, direction: 1, fw_status: 64, flow_sampler_id: 2 } ] }

There will be one callback for each packet, which may contain more than one flow.

You can also access the NetFlow decode function directly. Do something like this:

var netflowPktDecoder = require('node-netflowv9').nfPktDecode;
// ...
console.log(netflowPktDecoder(buffer));

Currently we support NetFlow versions 1, 5, 7 and 9.

Options

You can initialize the collector with either a callback function only or a group of options within an object. The following options are available during initialization:

port – defines the port where our collector will listen.

Collector({ port: 5000, cb: function (flow) { console.log(flow) } })

If no port is provided, the underlying socket will not be initialized (bound to a port) until you call the listen method with a port as a parameter:

Collector(function (flow) { console.log(flow) }).listen(port)

cb – defines a callback function to be executed for every flow. If no callback function is provided, the collector fires a 'data' event for each received flow.

Collector({ cb: function (flow) { console.log(flow) } }).listen(5000)

ipv4num – defines that we want to receive the IPv4 address as a number, instead of decoded in readable dot format.

Collector({ ipv4num: true, cb: function (flow) { console.log(flow) } }).listen(5000)

socketType – defines what socket type we will bind to. The default is udp4. You can change it to udp6 if you like.

Collector({ socketType: 'udp6', cb: function (flow) { console.log(flow) } }).listen(5000)

nfTypes – defines your own decoders for NetFlow v9+ types.

nfScope – defines your own decoders for NetFlow v9+ Option Template scopes.

Define your own decoders for NetFlow v9+ types

NetFlow v9 can be extended with vendor-specific types, and many vendors define their own. There is probably no netflow collector in the world that decodes all the vendor-specific types. By default this library decodes all the types it recognizes into readable format. All the unknown types are decoded as 'unknown_type_XXX', where XXX is the type ID, and the data is provided as a HEX string. But you can extend the library yourself. You can even replace how the current types are decoded, and you can do that on the fly (you can dynamically change how a type is decoded at different points in time).

To understand how to do that, you have to learn a bit about the internals of how this module works:

When a new flowset template is received from the NetFlow agent, this module generates and compiles (with new Function()) a decoding function.

When a netflow packet is received for a known flowset template (we have a compiled function for it), the function is simply executed.

This approach is quite simple and provides enormous performance. The function code is as small as possible, and on first execution Node.JS compiles it with the JIT, so the result is really fast.

The function code is generated from templates that contain the JavaScript code to be added for each netflow type, identified by its ID. Each template consists of an object of the following form:

{ name: 'property-name', compileRule: compileRuleObject }

compileRuleObject contains rules for how that netflow type is decoded, depending on its length. The reason is that some of the netflow types have variable length, and you may have to execute different code to decode them depending on the length. The compileRuleObject format is simple:

{ length: 'javascript code as a string that decodes this value', ... }

There is a special length property of 0. That code is used if there is no more specific decode defined for a length. For example:

{
  4: 'code used to decode this netflow type with length of 4',
  8: 'code used to decode this netflow type with length of 8',
  0: 'code used to decode ANY OTHER length'
}

Decoding code

The decoding code must be a string that contains JavaScript code. This code is concatenated to the function body before compilation. If the code contains errors or simply does not work as expected, it could crash the collector, so be careful.

There are a few variables you have to use:

$pos – this string is replaced with a number containing the current position of the netflow type within the binary buffer.
$len – this string is replaced with a number containing the length of the netflow type.
$name – this string is replaced with a string containing the name property of the netflow type (defined by you above).
buf – the Node.JS Buffer object containing the flow we want to decode.
o – the object where the decoded flow is written.

Everything else is pure JavaScript. It is good to know the restrictions of JavaScript and of the Node.JS Function() method, but that is not necessary to write simple decoders yourself.

If you want to decode a string of variable length, you could write a compileRuleObject of the form:

{ 0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)' }

The example above says that for this netflow type, whatever length it has, we decode the value as a UTF-8 string.

Example

Let's assume you want to write your own code for decoding a NetFlow type, let's say 4444, which could be of variable length and contains an integer number. You can write code like this:

Collector({
  port: 5000,
  nfTypes: {
    4444: {                       // 4444 is the NetFlow type ID whose decoding we want to replace
      name: 'my_vendor_type4444', // This will be the property name that contains the decoded value; it is also the value of $name
      compileRule: {
        1: "o['$name']=buf.readUInt8($pos);",                                        // how we decode a type of length 1 to a number
        2: "o['$name']=buf.readUInt16BE($pos);",                                     // how we decode a type of length 2 to a number
        3: "o['$name']=buf.readUInt8($pos)*65536+buf.readUInt16BE($pos+1);",         // how we decode a type of length 3 to a number
        4: "o['$name']=buf.readUInt32BE($pos);",                                     // how we decode a type of length 4 to a number
        5: "o['$name']=buf.readUInt8($pos)*4294967296+buf.readUInt32BE($pos+1);",    // how we decode a type of length 5 to a number
        6: "o['$name']=buf.readUInt16BE($pos)*4294967296+buf.readUInt32BE($pos+2);", // how we decode a type of length 6 to a number
        8: "o['$name']=buf.readUInt32BE($pos)*4294967296+buf.readUInt32BE($pos+4);", // how we decode a type of length 8 to a number
        0: "o['$name']='Unsupported Length of $len'"
      }
    }
  },
  cb: function (flow) { console.log(flow) }
});

It looks a bit complex, but actually it is not. In most cases you don't have to define a compile rule for each different length. The following example defines a decoder for a netflow type 6789 that carries a string:

var colObj = Collector(function (flow) {
  console.log(flow)
});
colObj.listen(5000);
colObj.nfTypes[6789] = {
  name: 'vendor_string',
  compileRule: {
    0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
  }
};

As you can see, we can also change the decoding on the fly by defining a property for that netflow type within the nfTypes property of colObj (the Collector object). The next time the NetFlow agent sends a NetFlow template definition containing this netflow type, the new rule will be used (routers usually send templates from time to time, so even currently compiled templates are recompiled).

You can also overwrite the default property names where the decoded data is written. For example:

var colObj = Collector(function (flow) {
  console.log(flow)
});
colObj.listen(5000);
colObj.nfTypes[14].name = 'outputInterface';
colObj.nfTypes[10].name = 'inputInterface';

Logging / Debugging the module

You can use the debug module to turn on logging in order to debug how the library behaves. The following example shows how:

require('debug').enable('NetFlowV9');
var Collector = require('node-netflowv9');
Collector(function(flow) {
  console.log(flow);
}).listen(5555);

Multiple collectors

The module allows you to define multiple collectors at the same time. For example:

var Collector = require('node-netflowv9');

Collector(function(flow) { // Collector 1 listening on port 5555
  console.log(flow);
}).listen(5555);

Collector(function(flow) { // Collector 2 listening on port 6666
  console.log(flow);
}).listen(6666);

NetFlowV9 Options Template

NetFlowV9 supports an Options Template, where an option flow set contains data for predefined fields within a certain scope. This module supports the Options Template and provides its output like any other flow. The only difference is that there is a property isOption set to true to remind your code that this data comes from an Options Template.

Currently the following nfScope values are supported: system, interface, line_card, netflow_cache. You can overwrite the decoding of them, or add others, in the same way (and using exactly the same format) as you overwrite nfTypes.
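
As a small usage sketch beyond the README, the following example aggregates bytes per source IP across all received flows. It assumes the field names in_bytes and ipv4_src_addr shown in the sample output above; the listening port and the reporting interval are arbitrary choices.

var Collector = require('node-netflowv9');

var bytesBySource = {};

Collector(function (packet) {
  // One callback per packet; packet.flow is an array of decoded flows
  packet.flow.forEach(function (f) {
    if (!f.ipv4_src_addr) return; // skip flows without an IPv4 source address
    bytesBySource[f.ipv4_src_addr] = (bytesBySource[f.ipv4_src_addr] || 0) + (f.in_bytes || 0);
  });
}).listen(9995);

// Print the accumulated counters every 10 seconds
setInterval(function () {
  console.log(bytesBySource);
}, 10000);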

NetFlow Version 9 module for Node.JS

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/06/netflow-version-9-library-for-nodejs.html

I am writing some small automation scripts to help me in my work from time to time. I needed a NetFlow collector, and I wanted to write it in JavaScript for Node.JS because of my general desire to support this platform, which brings the JavaScript language into generic application programming and system programming.

Node.JS probably has the best package manager (for a framework) on the market, named npm. It is extremely easy to install and maintain a package, to keep dependencies, or even to "scope" a package in a local installation, avoiding the need for root permissions on your machine. This is great. However, most of the packages registered in the npm database are junk. A lot of code is left without any development, has generic bugs, or is simply incomplete. I strongly suggest that the nodejs community introduce package statuses based on public voting, marking each module as "production", "stable", "unstable", or "development" quality, and make npm search look in "production" and "stable" by default. Actually, npm already has a way to do that, but it leaves the marking decision to the package owner.

Anyway, I was looking for a NetFlow v9 module that could allow me to capture netflow traffic of this version. Unfortunately, the only module supporting NetFlow was node-Netflowd. It does support NetFlow version 5, but it has a lot of issues with NetFlow v9, to say the least. After a few hours of testing it, I decided in the end to write one of my own.

So please welcome the newest Node.JS module that supports collecting and decoding of NetFlow version 9 flows, named "node-netflowv9". This module supports only NetFlow v9 and has to be used only for it.

The library is very simple, having about 250 lines of code, and supports all of the publicly defined Cisco properties, including variable-length numbers and IPv6 addressing. It is very easy to use. You just have to do something like this:

var Collector = require('node-netflowv9');
Collector(function(flow) {
  console.log(flow);
}).listen(3000);

The flow will be represented as a JavaScript object in a format very similar to this:

{ header: { version: 9, count: 25, uptime: 2452864139, seconds: 1401951592, sequence: 254138992, sourceId: 2081 },
  rinfo: { address: '15.21.21.13', family: 'IPv4', port: 29471, size: 1452 },
  flow: { in_pkts: 3, in_bytes: 144, ipv4_src_addr: '15.23.23.37', ipv4_dst_addr: '16.16.19.165', input_snmp: 27, output_snmp: 16, last_switched: 2452753808, first_switched: 2452744429, l4_src_port: 61538, l4_dst_port: 62348, out_as: 0, in_as: 0, bgp_ipv4_next_hop: '16.16.1.1', src_mask: 32, dst_mask: 24, protocol: 17, tcp_flags: 0, src_tos: 0, direction: 1, fw_status: 64, flow_sampler_id: 2 } }

There will be a callback for each flow, not just one for each packet. If a packet contains 10 flows, there will be 10 callbacks, each with a different flow. This simplifies the collector code, as you don't have to loop through the flows on your own.

Keep in mind that NetFlow v9 does not have a fixed structure (unlike NetFlow v1/v5); it is based on templates. Which properties are set in the templates, and in what order, depends on the platform. You always have to test your NetFlow v9 collector configuration. This library tries to simplify that as much as possible, but it cannot compensate for it.

My general feeling is that sFlow is much better defined and much more powerful than NetFlow in general. NetFlow v9 is the closest Cisco product that can provide (but does not necessarily provide) similar functionality. However, the behavior and the functionality of NetFlow v9 differ between the different Cisco products. On some, you can define aggregations and templates on your own. On some (IOS XR), you can't, and you use NetFlow v9 as a replacement for NetFlow v5. On some other Cisco products (Nexus 7000), there is no support for NetFlow at all, but there is sFlow. :)

In all of the Cisco products, the interfaces are sent as SNMP interface indexes. However, this index may not be persistent (between device reboots), and to associate it with an interface name you have to implement a cached SNMP GET against the interface table OID on your own.

Because of the impressive performance of modern JavaScript, this little module performs really fast in Node.JS. I have a complex collector implemented with configurable and evaluated aggregations that uses on average less than 2% CPU on a virtual machine, processing about 100 packets with flows and about 1000 flow statistics per second.

Update: http://deliantech.blogspot.com/2014/06/new-improved-version-of-node-netflowv9.html
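
To illustrate the SNMP interface index caveat, here is a hedged sketch that annotates each flow with interface names taken from a hard-coded map. The map below is hypothetical; in practice you would populate and refresh it with cached SNMP GETs against the interface table, because the indexes may change after a reboot.

var Collector = require('node-netflowv9');

// Hypothetical ifIndex-to-name map; replace with data gathered via SNMP.
var ifNames = { 16: 'GigabitEthernet0/1', 27: 'GigabitEthernet0/2' };

Collector(function (flow) {
  var f = flow.flow; // one callback per flow in this version of the module
  var inIf = ifNames[f.input_snmp] || ('ifIndex ' + f.input_snmp);
  var outIf = ifNames[f.output_snmp] || ('ifIndex ' + f.output_snmp);
  console.log(inIf + ' -> ' + outIf, f.ipv4_src_addr, f.ipv4_dst_addr, f.in_bytes + ' bytes');
}).listen(3000);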
