Tag Archives: netflow

timeShift(GrafanaBuzz, 1w) Issue 30

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/19/timeshiftgrafanabuzz-1w-issue-30/

Welcome to TimeShift

We’re only 6 weeks away from the next GrafanaCon and here at Grafana Labs we’re buzzing with excitement. We have some great talks lined up that you won’t want to miss.

This week’s TimeShift covers Grafana’s annotation functionality, monitoring with Prometheus, integrating Grafana with NetFlow and a peek inside Stream’s monitoring stack. Enjoy!


Latest Stable Release

Grafana 4.6.3 is now available. Latest bugfixes include:

  • Gzip: Fixes bug with Gravatar images when gzip was enabled #5952
  • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #99513
  • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and the OR operator was used. #93183
  • Cloudwatch: CloudWatch no longer displays metrics’ default alias #101514, thx @mtanda

Download Grafana 4.6.3 Now


From the Blogosphere

Walkthrough: Watch your Ansible deployments in Grafana!: Your graphs start spiking and your platform begins behaving abnormally. Did the config change in a deployment, causing the problem? This article covers Grafana’s new annotation functionality, and specifically, how to create deployment annotations via Ansible playbooks.

Application Monitoring in OpenShift with Prometheus and Grafana: There are many articles describing how to monitor OpenShift with Prometheus running in the same cluster, but what if you don’t have admin permissions to the cluster you need to monitor?

Spring Boot Metrics Monitoring Using Prometheus & Grafana: As the title suggests, this post walks you through how to configure Prometheus and Grafana to monitor your Spring Boot application metrics.

How to Integrate Grafana with NetFlow: Learn how to monitor NetFlow from Scrutinizer using Grafana’s SimpleJSON data source.

Stream & Go: News Feeds for Over 300 Million End Users: Stream lets you build scalable newsfeeds and activity streams via their API, which is used by more than 300 million end users. In this article, they discuss their monitoring stack and why they chose particular components and technologies.


GrafanaCon EU Tickets are Going Fast!

We’re six weeks from kicking off GrafanaCon EU! Join us for talks from Google, Bloomberg, Tinder, eBay and more! You won’t want to miss two great days of open source monitoring talks and fun in Amsterdam. Get your tickets before they sell out!

Get Your Ticket Now


Grafana Plugins

We have a couple of plugin updates to share this week that add some new features and improvements. Updating your plugins is easy. For on-prem Grafana, use the grafana-cli tool, or update with one click on your Hosted Grafana.
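
For an on-prem install, one grafana-cli command updates every installed plugin at once (standard grafana-cli usage; Grafana typically needs a restart afterwards to pick up the changes):

grafana-cli plugins update-all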

UPDATED PLUGIN

Druid Data Source – This update is packed with new features. Notable enhancements include:

  • Post Aggregation feature
  • Support for thetaSketch
  • Improvements to the Query editor

Update Now

UPDATED PLUGIN

Breadcrumb Panel – The Breadcrumb Panel is a small panel you can include in your dashboard that tracks other dashboards you have visited – making it easy to navigate back to a previously visited dashboard. The latest release adds support for dashboards loaded from a file.

Update Now


Upcoming Events

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

SnowCamp 2018: Yves Brissaud – Application metrics with Prometheus and Grafana | Grenoble, France – Jan 24, 2018:
We’ll take a look at how Prometheus, Grafana and a bit of code make it possible to obtain temporal data to visualize the state of our applications as well as to help with development and debugging.

Register Now

Women Who Go Berlin: Go Workshop – Monitoring and Troubleshooting using Prometheus and Grafana | Berlin, Germany – Jan 31, 2018: In this workshop we will learn about one of the most important topics in making apps production ready: Monitoring. We will learn how to use tools you’ve probably heard a lot about – Prometheus and Grafana, and using what we learn we will troubleshoot a particularly buggy Go app.

Register Now

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. There is no need to register; all are welcome.

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Carl Bergquist – Quickie: Monitoring? Not OPS Problem

Why should we monitor our systems? Why can’t we just rely on the operations team anymore? They used to be able to do that. What’s currently changing? Presentation content:

  • Why do we monitor our systems
  • How did it use to work?
  • What’s changing
  • Why do we need to shift focus
  • Everyone should be on call
  • Resilience is the goal (the best way of having someone care about quality is to make them responsible)

Register Now

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Leonard Gram – Presentation: DevOps Deconstructed

What’s a Site Reliability Engineer and how’s that role different from the DevOps engineer my boss wants to hire? I really don’t want to be on call – or should I? Is Docker the right place for my code, or am I better off just going straight to Serverless? And why should I care about any of it? I’ll try to answer some of these questions while looking at what DevOps is really about and how the commoditisation of servers through “the cloud” ties into it all. This session will be an opinionated piece from a developer who’s been on call for the past 6 years and would like to convince you to do the same, at least once.

Register Now

Stockholm Metrics and Monitoring | Stockholm, Sweden – Feb 7, 2018:
Observability 3 ways – Logging, Metrics and Distributed Tracing

Let’s talk about often confused telemetry tools: Logging, Metrics and Distributed Tracing. We’ll show how you capture latency using each of the tools and how they work differently. Through examples and discussion, we’ll note edge cases where certain tools have advantages over others. By the end of this talk, we’ll better understand how each of Logging, Metrics and Distributed Tracing aids us in different ways to understand our applications.

Register Now

OpenNMS – Introduction to “Grafana” | Webinar – Feb 21, 2018:
IT monitoring helps detect emerging hardware damage and performance bottlenecks in the enterprise network before any consequential damage or disruption to business processes occurs. The powerful open-source OpenNMS software monitors a network, including all connected devices, and provides logging of a variety of data that can be used for analysis and planning purposes. In our next OpenNMS webinar on February 21, 2018, we introduce “Grafana” – a web-based tool for creating and displaying dashboards from various data sources, which can be perfectly combined with OpenNMS.

Register Now


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

As we say with pie charts, use emojis wisely 😉


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

That wraps up our 30th issue of TimeShift. What do you think? Are there other types of content you’d like to see here? Submit a comment on this issue below, or post something at our community forum.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

How to Optimize and Visualize Your Security Groups

Post Syndicated from Guy Denney original https://blogs.aws.amazon.com/security/post/Tx246GOZNFIW79N/How-to-Optimize-and-Visualize-Your-Security-Groups

Many organizations start their journey with AWS by experimenting with existing applications. Those experiments may include trying to move an application to the cloud. To move an application successfully, you need to know the network ports, protocols, and IP addresses necessary for it to function. Although you can use AWS security groups to restrict access to ports and protocols in your Amazon Virtual Private Cloud (Amazon VPC), many developers determine these rules via trial and error, often resulting in overly permissive security groups.

When the experiment is complete and an application is finally functional, some organizations do not go back to narrow their security group rules to only include the necessary network ports, protocols, and IP addresses. This creates a less than optimal security posture.

In this blog post, I will present a method that uses network data to optimize and visualize your security groups. 

Overview

Removing unused rules or limiting source IP addresses requires either an in-depth knowledge of an application’s active ports on the instances, or analysis of active network traffic. The method described in this post can help you remediate security groups to only necessary source IPs, ports, and nested security groups. This can improve the security stance of your AWS resources while minimizing the potential impact to production instances. Here are the basic steps:

  1. Use VPC Flow Logs and Amazon Elasticsearch Service (Amazon ES) to capture information about the IP traffic in an Amazon VPC.
  2. Associate the network traffic with an elastic network interface (ENI), instances, and security groups.
  3. Demonstrate how to visualize and analyze network traffic from VPC Flow Logs by using Amazon ES.

Step 1: The setup

Create an Amazon ES cluster

The first step in the process is to create an Amazon ES cluster. Create the cluster first because it will take time for it to be available. If you are new to Amazon ES, you can learn more about it in the Amazon ES documentation.

To create an Amazon ES cluster:

  1. In the AWS Management Console, click Elasticsearch Service under Analytics.
  2. Click Create a new domain. Type flowlogs for the Elasticsearch domain name.
  3. Set Instance count to 2 and select the Enable zone awareness check box. (This ensures cluster stability if an Availability Zone outage occurs.) Accept the defaults for the rest of the page. Click Next.
  4. From the drop-down on the next page, select Allow access to the domain from specific IP(s).
  5. In the dialog box, type or paste the comma-separated list of valid IPv4 addresses or CIDR blocks you would like to be able to access the Amazon ES domain. For more information, see Configuring Access Policies. Click Next.
  6. On the next page click Confirm and create.

The cluster will be available in a few minutes. In the meantime, you can start the next step of the process, which is to enable VPC Flow Logs.

Enable VPC Flow Logs

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. For more information about VPC Flow Logs, see VPC Flow Logs.

To enable VPC Flow Logs:

  1. In the AWS Management Console, click VPC under Networking.
  2. Click Your VPCs (as shown in the following screenshot), and select the VPC you would like to analyze. (You can also enable VPC Flow Logs on only a subnet, if you do not want to enable it on the entire VPC.)
  3. Click the Flow Logs tab in the bottom pane.

  4. Click Create Flow Log. If this is the first time you have set up VPC Flow Logs in this account, you must click Set Up Permissions. This will open a new tab in your browser.

    1. For IAM Role, choose Create a new IAM Role.
    2. To establish the Role Name, type flowlogsRole.
    3. Click Allow. Close the tab and navigate back to the Create Flow Log dialog box from Step 4.

  5. Now you can select the Role flowlogsRole and set the Destination Log Group to FlowLogs. Click Create Flow Logs.

The VPC Flow Logs data is now streaming to CloudWatch Logs. The next step is to enable the data to stream from CloudWatch Logs to the Amazon ES cluster. You can accomplish this through a built-in Lambda function.

To flow data to your Amazon ES cluster:

  1. In the AWS Management Console, select CloudWatch under Management Tools.
  2. Click Logs in the left pane and select the check box next to FlowLogs under Log Groups.
  3. From the Actions menu at the top of the page, select Stream to Amazon Elasticsearch Service.
  4. Select the Amazon ES Cluster name flowlogs from the drop-down.
  5. For Lambda IAM Execution Role, select Create new IAM role.
  6. In the dialog box, click Allow. Then click Next.
  7. For Log Format, select Amazon VPC Flow Logs from the drop-down. Click Next.
  8. Click Next again, and then click Start Streaming.

VPC Flow Logs will now begin capturing information about the IP traffic going to and from network interfaces in your VPC, and stream that information to your Amazon ES cluster. Data is now flowing to your Amazon ES cluster, but the Amazon ES cluster is making some assumptions about the format of the data. The next step of the process is to provide formatting information to Amazon ES that is more explicit and then remove any data in the Amazon ES cluster that is not in the correct format.

Format data in the ES cluster

A flow log record is a space-separated string that has the following format.

version account-id interface-id srcaddr dstaddr srcport dstport protocol packets bytes start end action log-status
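
As a quick illustration (a minimal Node.JS sketch to match the collector examples later on this page; the sample record values are invented), each line can be split on spaces and mapped to those field names:

// Hypothetical flow log record; the values are made up for illustration.
var record = '2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 ' +
             '20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK';

var fields = ['version', 'account_id', 'interface_id', 'srcaddr', 'dstaddr',
              'srcport', 'dstport', 'protocol', 'packets', 'bytes',
              'start', 'end', 'action', 'log_status'];

// Build an object keyed by field name from the space-separated record.
var parsed = {};
record.split(' ').forEach(function (value, i) { parsed[fields[i]] = value; });

console.log(parsed.interface_id, parsed.srcaddr, '->', parsed.dstaddr, parsed.action);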

By default, Amazon ES assumes that dashes and periods in fields are separators. This causes results to be returned twice, which clutters the dashboard with partial results. To correct this behavior, we must first set interface-id, srcaddr, and dstaddr to not_analyzed by running the curl command from a shell prompt. Before accessing your Amazon ES cluster, you should review your access policy and security approach. For more information, see Securing Your Elasticsearch Cluster.

The curl command is available on Mac OS and Amazon Linux AMI, and on the curl app page for Windows. Be sure to replace the placeholder value with your Amazon ES domain endpoint here and elsewhere in this post. For more information about how to run commands on your Amazon ES cluster, see Talking to Elasticsearch.

curl -XPUT "http://YOUR_ES_DOMAIN_ENDPOINT/_template/template_1" -d'
{
    "template":"cwl-*","mappings":{
        "NetFlow": {
            "properties": {
                "interface_id": { "type": "string", "index": "not_analyzed"},
                "srcaddr": { "type": "string", "index": "not_analyzed"},
                "dstaddr": { "type": "string", "index": "not_analyzed"}
            }
        }
    }
}'
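
Optionally, you can confirm the template was stored before continuing (standard Elasticsearch API; same endpoint placeholder as above):

curl -XGET "http://YOUR_ES_DOMAIN_ENDPOINT/_template/template_1"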

After running the preceding command, remove the old data from the cluster to clear data that was indexed incorrectly. Do this by executing the following delete command.

curl -XDELETE 'http://YOUR_ES_DOMAIN_ENDPOINT/cwl*/'

Import dashboards and visualizations

Now, network traffic for your VPC is flowing into your Amazon ES cluster. To visualize and search the data, I will use a tool built into Amazon ES called Kibana. I have created a dashboard that you can import into your Amazon ES cluster to simplify and speed up your implementation.

You import dashboards and visualizations by using the curl command against the Amazon ES cluster endpoint. However, some customers find it simpler to use a handy tool to manage the saving and copying of data with Amazon ES, such as elasticdump. (If you don’t already have npm installed, you must install the npm package manager. For more information, go to What is npm?)
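
If you have npm but not elasticdump yet, the standard global install is:

npm install -g elasticdump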

After you have installed elasticdump, run the following command. (Again, be sure to replace the placeholder value with your Amazon ES domain endpoint.)

elasticdump --input=SGDashboard.json --output=http://YOUR_ES_DOMAIN_ENDPOINT/.kibana-4

You now have a dashboard to monitor the traffic in your VPC.

To find the Kibana URL:

  1. In the AWS Management Console, click Elasticsearch Service under Analytics.
  2. Click flowlogs under My Elasticsearch domains.
  3. Click the link next to Kibana, as shown in the following screenshot.
  4. Click the Dashboards tab and open FlowLogDash (as shown in the following screenshot).

You will see the Kibana FlowLogDash (as shown in the following screenshot).

Step 2: Associating ENIs with security groups

The remediation of security groups, though, is more complicated. VPC Flow Logs capture only the ENI for the traffic, so you must associate the ENIs with their related security groups.

This is where API script “magic” comes in. You must have the Amazon Command Line Interface and jq installed on the same computer to run this Bash script (create a separate subdirectory for downloading and running the script). The script queries the Amazon API to discover the associations between ENIs and security groups. It then builds the list of security groups with links to the Kibana dashboard, which filters the results.
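
The script itself is not reproduced in this post. Purely as a rough sketch of the ENI-to-security-group lookup it performs, the equivalent in Node.JS (using the aws-sdk v2 package; the region and VPC ID are placeholders, and AWS credentials are assumed to be configured in the environment) could look like this:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' }); // placeholder region

// List every ENI in the VPC together with the security groups attached to it.
ec2.describeNetworkInterfaces({
    Filters: [{ Name: 'vpc-id', Values: ['vpc-XXXXXXXX'] }] // placeholder VPC ID
}, function (err, data) {
    if (err) return console.error(err);
    data.NetworkInterfaces.forEach(function (eni) {
        eni.Groups.forEach(function (sg) {
            console.log(sg.GroupId, '->', eni.NetworkInterfaceId);
        });
    });
});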

Use the following command to change the file permissions, so you can execute the script.

chmod 744 sgremediate.sh

Edit the script to add your VPC ID and Kibana endpoint (the following screenshot shows placeholders for both values).

Now you can run the Bash script and send the output to an HTML file by using the following command.

./sgremediate.sh > index.html

An example of the resulting file is shown in the following screenshot. The file will be a list of your security groups with links to the Kibana dashboard. The links contain the information necessary to filter the dashboard to the traffic that is associated with the security group and flowing to the underlying instances.

If you click the links in the index.html file, you will return to the Kibana dashboard and see only information relative to the security group under review. Let’s first review the dashboard and how to interpret its information.

Step 3: Using the FlowLogDash dashboard

The FlowLogDash dashboard is composed of a set of visualizations. Each visualization contains a view or summarization of the underlying data in the Amazon ES cluster, as shown in the preceding screenshot. You can control the time frame for the dashboard in the upper-right corner (see the following screenshot). Clicking the time frame exposes alternative time frames that you can select. If you click the small arrow at the bottom of the page, you will collapse the time frame view.

On the FlowLogDash dashboard, the left side is divided into three sections. The top section is a list of the ENIs, a count of records, and the sum of bytes. The middle pie chart shows the percentages of accept and reject actions. The bottom pie chart shows relative percentages of protocols for the flow log data.

In the middle pane of the dashboard is a large pie chart that displays the source IP address, protocol, destination port, and destination IP address of the network traffic flowing in the VPC. These fields map to the security group’s Inbound rules tab in the AWS Management Console.

On the right side of the FlowLogDash dashboard is a list of destination ports and below it are the raw VPC Flow Log records. This information is useful because ports can be open in the security group but have no network traffic flowing to the instances on those ports. The corresponding rules probably can be removed.

Visualize and analyze VPC network traffic

Amazon ES allows you to view and filter VPC Flow Log data to determine what network traffic is flowing inside your VPC. Amazon ES can assist in narrowing ports or IP source addresses in security groups to improve your organization’s security stance.

The sgremediate.sh script I mentioned previously queries the AWS APIs, produces a list of security groups, and builds a link to the Kibana FlowLogDash dashboard, which automatically filters the results for all ENIs associated with a security group. Because VPC Flow Logs record traffic in both directions, the script also excludes the primary private IP from the results to clean up the dashboard clutter. After you click the link in the index.html file, you can see the filtered results in the search window, indicated by the red arrow in the following screenshot. You can remove or edit the text in the search box to customize the query.

Keep in mind that an ENI may be associated with two or more security groups. If two security groups are associated with the same ENI and one of them allows certain traffic, that traffic will still appear when you filter on the second security group, because the dashboard links filter by the ENI that both groups share.

Also, keep in mind that security groups are stateful, so if the instance itself is initiating traffic to a different location, the Kibana dashboard will display the return traffic. The best example of this is port 123 NTP. To remove this traffic from the display, select the port on the right side of the dashboard, and then reverse the filter, as shown in the following screenshot. By reversing the filter, you can exclude data from the view.

Summary

Ensuring that your AWS cloud environment is secure, maintainable, and only allows intended traffic can be a challenging task. By using VPC Flow Logs and Amazon ES together with Kibana dashboards, you can visualize your network traffic and better optimize your security groups and your cloud security.

If you have comments about this blog post, please submit them in the “Comments” section below. If you have questions about this solution or its implementation, contact your AWS account support team or start a new thread on the AWS WAF forum.

– Guy

Example how to use node-netflowv9 and define your own netflow type decoders

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2015/03/example-how-to-use-node-netflowv9-and.html

This is an example of how you can use the node-netflowv9 library (version >= 0.2.5) to define your own proprietary NetFlow v9 type decoders when they are not supported out of the box. The example below adds decoding for types 33000, 33001, 33002 and 40000 used by Cisco ASA/PIX netflow:

var Collector = require('node-netflowv9');

var colObj = Collector(function (flow) { console.log(flow) });
colObj.listen(5000);

var aclDecodeRule = {
    12: 'o["$name"] = { aclId: buf.readUInt32BE($pos), aclLineId: buf.readUInt32BE($pos+4), aclCnfId: buf.readUInt32BE($pos+8) };'
};

colObj.nfTypes[33000] = { name: 'nf_f_ingress_acl_id', compileRule: aclDecodeRule };
colObj.nfTypes[33001] = { name: 'nf_f_egress_acl_id', compileRule: aclDecodeRule };
colObj.nfTypes[33002] = { name: 'nf_f_fw_ext_event', compileRule: { 2: 'o["$name"]=buf.readUInt16BE($pos);' } };
colObj.nfTypes[40000] = { name: 'nf_f_username', compileRule: { 0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len);' } };

node-netflowv9 node.js module for processing of netflowv9 has been updated to 0.2.5

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2015/03/node-netflowv9-nodejs-module-for.html

My node-netflowv9 library has been updated to version 0.2.5. There are a few new things:

  • Almost all of the IETF netflow types are decoded now, which practically means we support IPFIX.
  • An unknown NetFlow v9 type does not throw an error. It is decoded into a property named ‘unknown_type_XXX’, where XXX is the ID of the type.
  • An unknown NetFlow v9 Option Template scope does not throw an error. It is decoded into ‘unknown_scope_XXX’, where XXX is the ID of the scope.
  • The user can overwrite how the different NetFlow types are decoded and can define decoding for new types. The same goes for scopes. And this can happen on the fly, at any time.
  • The library supports multiple netflow collectors running at the same time.
  • A lot of new options and models for using the library have been introduced.

Below is the updated README.md file, describing how to use the library:

Usage

The usage of the netflowv9 collector library is very simple. You just have to do something like this:

var Collector = require('node-netflowv9');

Collector(function(flow) {
    console.log(flow);
}).listen(3000);

or you can use it as an event provider:

Collector({port: 3000}).on('data', function(flow) {
    console.log(flow);
});

The flow will be presented in a format very similar to this:

{ header:
   { version: 9,
     count: 25,
     uptime: 2452864139,
     seconds: 1401951592,
     sequence: 254138992,
     sourceId: 2081 },
  rinfo:
   { address: '15.21.21.13',
     family: 'IPv4',
     port: 29471,
     size: 1452 },
  packet: Buffer <00 00 00 00 ....>,
  flow:
   [ { in_pkts: 3,
       in_bytes: 144,
       ipv4_src_addr: '15.23.23.37',
       ipv4_dst_addr: '16.16.19.165',
       input_snmp: 27,
       output_snmp: 16,
       last_switched: 2452753808,
       first_switched: 2452744429,
       l4_src_port: 61538,
       l4_dst_port: 62348,
       out_as: 0,
       in_as: 0,
       bgp_ipv4_next_hop: '16.16.1.1',
       src_mask: 32,
       dst_mask: 24,
       protocol: 17,
       tcp_flags: 0,
       src_tos: 0,
       direction: 1,
       fw_status: 64,
       flow_sampler_id: 2 } ] }

There will be one callback for each packet, which may contain more than one flow.

You can also access the NetFlow decode function directly. Do something like this:

var netflowPktDecoder = require('node-netflowv9').nfPktDecode;
....
console.log(netflowPktDecoder(buffer))

Currently we support netflow version 1, 5, 7 and 9.

Options

You can initialize the collector with either a callback function only or a group of options within an object. The following options are available during initialization:

port – defines the port where our collector will listen.

Collector({ port: 5000, cb: function (flow) { console.log(flow) } })

If no port is provided, then the underlying socket will not be initialized (bound to a port) until you call the listen method with a port as a parameter:

Collector(function (flow) { console.log(flow) }).listen(port)

cb – defines a callback function to be executed for every flow. If no callback function is provided, the collector fires a 'data' event for each received flow:

Collector({ cb: function (flow) { console.log(flow) } }).listen(5000)

ipv4num – defines that we want to receive the IPv4 address as a number, instead of decoded in a readable dot format:

Collector({ ipv4num: true, cb: function (flow) { console.log(flow) } }).listen(5000)

socketType – defines what socket type we will bind to. The default is udp4; you can change it to udp6 if you like:

Collector({ socketType: 'udp6', cb: function (flow) { console.log(flow) } }).listen(5000)

nfTypes – defines your own decoders for NetFlow v9+ types.

nfScope – defines your own decoders for NetFlow v9+ Option Template scopes.

Define your own decoders for NetFlow v9+ types

NetFlow v9 can be extended with vendor-specific types, and many vendors define their own. No netflow collector in the world decodes all the vendor-specific types. By default this library decodes in a readable format all the types it recognises. All the unknown types are decoded as ‘unknown_type_XXX’, where XXX is the type ID, and the data is provided as a HEX string. But you can extend the library yourself. You can even replace how the current types are decoded, and you can do that on the fly (dynamically change how a type is decoded at different periods of time).

To understand how to do that, you have to learn a bit about the internals of how this module works:

  • When a new flowset template is received from the NetFlow Agent, this module generates and compiles (with new Function()) a decoding function.
  • When a netflow is received for a known flowset template (one we have a compiled function for), that function is simply executed.

This approach is quite simple and provides enormous performance. The function code is as small as possible, and on first execution Node.JS compiles it with JIT, so the result is really fast.

The function code is generated from templates that contain the javascript code to be added for each netflow type, identified by its ID. Each template consists of an object of the following form:

{ name: 'property-name', compileRule: compileRuleObject }

compileRuleObject contains rules for how that netflow type is decoded, depending on its length. The reason for that is that some of the netflow types are variable length, and you may have to execute different code to decode them depending on the length. The compileRuleObject format is simple:

{ length: 'javascript code as a string that decodes this value', ... }

There is a special length property of 0. That code will be used if there is no more specific decode defined for a length. For example:

{
    4: 'code used to decode this netflow type with length of 4',
    8: 'code used to decode this netflow type with length of 8',
    0: 'code used to decode ANY OTHER length'
}

Decoding code

The decoding code must be a string that contains javascript code. This code will be concatenated to the function body before compilation, so if it contains errors or simply does not work as expected, it could crash the collector. Be careful.

There are a few variables you have to use:

  • $pos – this string is replaced with a number containing the current position of the netflow type within the binary buffer.
  • $len – this string is replaced with a number containing the length of the netflow type.
  • $name – this string is replaced with a string containing the name property of the netflow type (defined by you above).
  • buf – the Node.JS Buffer object containing the flow we want to decode.
  • o – the object where the decoded flow is written.

Everything else is pure javascript. It is good to know the restrictions of javascript and of the Node.JS Function() method, but that is not necessary for writing simple decoding yourself.

If you want to decode a string of variable length, you could write a compileRuleObject of the form:

{ 0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)' }

The example above says that for this netflow type, whatever length it has, we will decode the value as a utf8 string.

Example

Let's assume you want to write your own code for decoding a NetFlow type, say 4444, which could be of variable length and contains an integer number. You can write code like this:

Collector({
    port: 5000,
    nfTypes: {
        4444: {                          // 4444 is the NetFlow Type ID whose decoding we want to replace
            name: 'my_vendor_type4444',  // the property name that will contain the decoded value; it is also the value of $name
            compileRule: {
                1: "o['$name']=buf.readUInt8($pos);",                                        // length 1
                2: "o['$name']=buf.readUInt16BE($pos);",                                     // length 2
                3: "o['$name']=buf.readUInt8($pos)*65536+buf.readUInt16BE($pos+1);",         // length 3
                4: "o['$name']=buf.readUInt32BE($pos);",                                     // length 4
                5: "o['$name']=buf.readUInt8($pos)*4294967296+buf.readUInt32BE($pos+1);",    // length 5
                6: "o['$name']=buf.readUInt16BE($pos)*4294967296+buf.readUInt32BE($pos+2);", // length 6
                8: "o['$name']=buf.readUInt32BE($pos)*4294967296+buf.readUInt32BE($pos+4);", // length 8
                0: "o['$name']='Unsupported Length of $len'"                                 // any other length
            }
        }
    },
    cb: function (flow) { console.log(flow) }
});

It looks a bit complex, but actually it is not. In most cases you don't have to define a compile rule for each different length. The following example defines a decoding for a netflow type 6789 that carries a string:

var colObj = Collector(function (flow) {
    console.log(flow);
});
colObj.listen(5000);

colObj.nfTypes[6789] = {
    name: 'vendor_string',
    compileRule: {
        0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
    }
};

As you can see, we can also change the decoding on the fly, by defining a property for that netflow type within the nfTypes property of the colObj (the Collector object). The next time the NetFlow Agent sends us a NetFlow Template definition containing this netflow type, the new rule will be used (routers usually resend templates from time to time, so even currently compiled templates are recompiled).

You can also overwrite the default property names where the decoded data is written. For example:

var colObj = Collector(function (flow) {
    console.log(flow);
});
colObj.listen(5000);

colObj.nfTypes[14].name = 'outputInterface';
colObj.nfTypes[10].name = 'inputInterface';

Logging / Debugging the module

You can use the debug module to turn on logging, in order to debug how the library behaves. The following example shows you how:

require('debug').enable('NetFlowV9');
var Collector = require('node-netflowv9');

Collector(function(flow) {
    console.log(flow);
}).listen(5555);

Multiple collectors

The module allows you to define multiple collectors at the same time. For example:

var Collector = require('node-netflowv9');

Collector(function(flow) { // Collector 1 listening on port 5555
    console.log(flow);
}).listen(5555);

Collector(function(flow) { // Collector 2 listening on port 6666
    console.log(flow);
}).listen(6666);

NetFlowV9 Options Template

NetFlowV9 supports an Options Template, where an option Flow Set can contain data for predefined fields within a certain scope. This module supports the Options Template and provides its output like any other flow. The only difference is that a property isOption is set to true to remind your code that this data came from an Option Template.

Currently the following nfScope values are supported – system, interface, line_card, netflow_cache. You can overwrite their decoding, or add others, in the same way (and using absolutely the same format) as you overwrite nfTypes.
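
The README does not include an nfScope example. Here is a minimal, hypothetical sketch under the assumption that scopes are registered on the collector object exactly like nfTypes, keyed by scope ID (the scope ID, property name, and decode rule below are all illustrative):

var Collector = require('node-netflowv9');

var colObj = Collector(function (flow) { console.log(flow) });
colObj.listen(5000);

// Hypothetical: overwrite the decoder for an Option Template scope the same
// way as an nfType, assuming the identical { name, compileRule } format.
colObj.nfScope[2] = {
    name: 'scope_interface',
    compileRule: { 0: 'o["$name"] = buf.toString("hex", $pos, $pos + $len);' }
};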

node-netflowv9 is updated to support netflow v1, v5, v7 and v9

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/11/node-netflowv9-is-updated-to-support.html

My netflow module for Node.JS has been updated. It now supports more NetFlow versions – NetFlow ver 1, ver 5, ver 7 and ver 9. It has also been modified so it can be used as an event generator (instead of doing callbacks). Now you can do the following as well (the old model is still supported):

Collector({port: 3000}).on('data', function(flow) {
    console.log(flow);
});

Additionally, the module now supports and decodes option templates and option data flows for NetFlow v9.
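
For comparison, the old callback model looks like this (the same usage shown in the library’s README later in this archive):

var Collector = require('node-netflowv9');

// One callback per decoded flow (the original usage model).
Collector(function(flow) {
    console.log(flow);
}).listen(3000);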

New improved version of the node-netflowv9 module for Node.JS

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/06/new-improved-version-of-node-netflowv9.html

As I have mentioned before, I have implemented a NetFlowV9 compatible library for decoding Cisco NetFlow version 9 packets for Node.JS. Now I am upgrading it to version 0.1, which has a few updates:

  • bug fixes (including avoidance of an issue that happens with ASR9k and IOS XR 4.3)
  • you can now start the collector (with a second parameter of true) in a mode where you receive only one callback per packet instead of one callback per flow (the default mode). That can be useful if you want to count lost packets (otherwise the relation between netflow packet and callback is lost)
  • decreased code size
  • the module now compiles the templates dynamically into a function (using new Function). I like this approach very much, as it creates really fast functions (in contrast to eval, Function is always JIT processed) and it allows me to spare loops, function calls and memory copies. I like to do things like that with every data structure that allows it. As an effect of this, the new module is about 3 times faster in all the live tests I was able to perform
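
A minimal sketch of the per-packet mode described above (the second parameter of true is taken directly from the post; the exact shape of the callback argument is not documented here, so treat it as illustrative):

var Collector = require('node-netflowv9');

// Passing true as the second parameter requests one callback per received
// packet instead of one callback per flow (the default mode).
Collector(function(packet) {
    console.log(packet);
}, true).listen(3000);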

Simple example for Node.JS sflow collector

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/06/simple-example-for-nodejs-sflow.html

Sometimes you can use SFlow or NetFlow to add extra intelligence to your network. The collectors available on the internet are usually there just to collect and store data used for accounting or nice graphics. But the collectors either do not allow you to execute your own code when certain rules or thresholds are reached, or do not react in real time. (In general, the protocols delay you too. You cannot expect NetFlow accounting to be used in real time at all, and while SFlow has modes that are a bit faster to react by design, it is still not considered real-time sampling/accounting.)

Just imagine you have a simple goal – you want to automatically detect floods and notify the operators, or even automatically apply filters. If you have an algorithm that can distinguish the incorrect traffic from the normal traffic based on NetFlow/SFlow sampling, you may want to execute an operation immediately when that happens.

The modern DoS attacks and floods may be complex and hard to detect. But mainly, it is hard to make the currently available NetFlow/SFlow collector software do that for you and then trigger or execute an external application. However, it is very easy to program it yourself.

I am giving you a simple example that uses the node-sflow module to collect packet samples, measure how many of them match a certain destination IP address, and, if they are above certain pps thresholds, execute an external program (that is supposed to block that traffic). Then, after a period of time, it will execute another program (that is supposed to unblock the traffic).

This program is very small – about 120 lines of code – and allows you to use a complex configuration file where you can define a list of rules that can optionally match vlans and networks for the sampled packet and then count how many samples you have per destination for that rule. The rule list is executed until the first match, in the configured order within the array. That allows you to create black and white lists, different thresholds per network and vlan, or different rules for overlapped IP addresses as long as they belong to different vlans. A hypothetical sketch of such a rule list is shown below.

Keep in mind this is just example software, there to show you how to use the node-sflow and pcap modules together! It is not supposed to be used in production, unless you definitely know what you are doing! The goal of this example is just to show you how easy it is to add extra logic within your network.

The code is available on GitHub here: https://github.com/delian/sflow-collector/
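
Purely as an illustration of the first-match rule list described above (the post does not reproduce its configuration format, so every property name here is hypothetical):

// Hypothetical rule list: checked in order, first match wins.
module.exports = [
    // Whitelist first: traffic to this network on vlan 100 is never blocked.
    { vlan: 100, network: '10.0.0.0/8', maxPps: Infinity },
    // Tighter threshold for a sensitive subnet.
    { network: '192.168.1.0/24', maxPps: 10000,
      block: './block.sh', unblock: './unblock.sh', holdSeconds: 300 },
    // Catch-all threshold for everything else.
    { maxPps: 50000,
      block: './block.sh', unblock: './unblock.sh', holdSeconds: 600 }
];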

SFlow version 5 module for Node.JS

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/06/sflow-version-5-module-for-nodejs.html

Unfortunately, as with NetFlow version 9, SFlow version 5 (and SFlow in general) has not been very well supported by the Node.JS community up to now. I needed a modern SFlow version 5 compatible module, so I had to write one on my own.

Please welcome the newest module in Node.JS’s NPM that can decode SFlow version 5 packets and be used in the development of simple and easy SFlow collectors! The module is named node-sflow and you can look at its code here: https://github.com/delian/node-sflow

Please be careful, as in the next days I may change the object structure of the flow representation to simplify it! Any tests and experiments are welcome.

The sflow module is available in the public npm (npm install node-sflow) repository. To use it you have to do:

var Collector = require('node-sflow');

Collector(function(flow) {
    console.log(flow);
}).listen(3000);

In general, SFlow is a much more powerful protocol than NetFlow, even in its latest version (version 9). It can represent more complex counters; report errors, drops, and full packet headers (not only their properties); collect information from interfaces, flows, and vlans; and combine them into much more complex reports. However, the SFlow support in the agents – the networking equipment – is usually extremely simplified, far from the richness and complexity the SFlow protocol may provide. Most of the vendors just do packet sampling and send it over SFlow as a raw packet/frame header with an associated, unclear counter.

If you have the issue specified above, this module cannot help much. You will just get the raw packet header (usually Ethernet + IP header) as a Node.JS buffer, and then you have to decode it on your own. I want to keep the node-sflow module simple, and I don’t plan to decode raw packet headers there, as this feature is not a feature of SFlow itself. If you need to decode the raw packet header, I can suggest one easy solution: you can use the pcap module from the npm repository and decode the raw header with it:

var Collector = require('node-sflow');
var pcap = require('pcap');

Collector(function(flow) {
    if (flow && flow.flow.records && flow.flow.records.length > 0) {
        flow.flow.records.forEach(function(n) {
            if (n.type == 'raw' && n.protocolText == 'ethernet') {
                try {
                    var pkt = pcap.decode.ethernet(n.header, 0);
                    if (pkt.ethertype != 2048) return; // IPv4 only
                    console.log('VLAN', pkt.vlan ? pkt.vlan.id : 'none',
                        'Packet', pkt.ip.protocol_name,
                        pkt.ip.saddr, ':', pkt.ip.tcp ? pkt.ip.tcp.sport : pkt.ip.udp.sport,
                        '->',
                        pkt.ip.daddr, ':', pkt.ip.tcp ? pkt.ip.tcp.dport : pkt.ip.udp.dport);
                } catch (e) {
                    console.log(e);
                }
            }
        });
    }
}).listen(3000);

NetFlow Version 9 module for Node.JS

Post Syndicated from Delian Delchev original http://deliantech.blogspot.com/2014/06/netflow-version-9-library-for-nodejs.html

I am writing some small automation scripts to help me in my work from time to time. I needed a NetFlow collector, and I wanted to write it in JavaScript for Node.JS because of my general desire to support this platform, enabling the JavaScript language in generic application and system programming.

Node.JS probably has the best package manager on the market (for a framework), named npm. It is extremely easy to install and maintain a package, to keep dependencies, or even to "scope" a package in a local installation, avoiding the need for root permissions on your machine. This is great. However, most of the packages registered in the npm database are junk. A lot of code is left without any development, has generic bugs, or is simply incomplete. I strongly suggest the Node.JS community introduce package statuses based on public voting, marking each module as "production", "stable", "unstable" or "development" quality, and make npm search look in "production" and "stable" by default. Actually, npm already has a way to do that, but it leaves the marking decision to the package owner.

Anyway, I was looking for a NetFlow v9 module that could allow me to capture NetFlow traffic of this version. Unfortunately, the only module supporting NetFlow was node-Netflowd. It does support NetFlow version 5, but it has a lot of issues with NetFlow v9, to say the least. After a few hours of testing it, I decided in the end to write one of my own.

So please welcome the newest Node.JS module that supports collecting and decoding NetFlow version 9 flows, named "node-netflowv9". This module supports only NetFlow v9 and has to be used only for it. The library is very simple, about 250 lines of code, and supports all of the publicly defined Cisco properties, including variable-length numbers and IPv6 addressing. It is very easy to use. You just have to do something like this:

    var Collector = require('node-netflowv9');

    Collector(function(flow) {
        console.log(flow);
    }).listen(3000);

The flow will be represented as a JavaScript object in a format very similar to this:

    {
        header: {
            version: 9, count: 25, uptime: 2452864139,
            seconds: 1401951592, sequence: 254138992, sourceId: 2081
        },
        rinfo: {
            address: '15.21.21.13', family: 'IPv4', port: 29471, size: 1452
        },
        flow: {
            in_pkts: 3, in_bytes: 144,
            ipv4_src_addr: '15.23.23.37', ipv4_dst_addr: '16.16.19.165',
            input_snmp: 27, output_snmp: 16,
            last_switched: 2452753808, first_switched: 2452744429,
            l4_src_port: 61538, l4_dst_port: 62348,
            out_as: 0, in_as: 0,
            bgp_ipv4_next_hop: '16.16.1.1',
            src_mask: 32, dst_mask: 24,
            protocol: 17, tcp_flags: 0, src_tos: 0,
            direction: 1, fw_status: 64, flow_sampler_id: 2
        }
    }

There will be one callback per flow, not one per packet. If a packet contains 10 flows, there will be 10 callbacks, each with a different flow. This simplifies the collector code, as you don't have to loop through the flows on your own.

Keep in mind that NetFlow v9 does not have a fixed structure (in contrast to NetFlow v1/v5); it is based on templates. Which properties are set in the templates, and in what order, depends on the platform, so you always have to test your NetFlow v9 collector configuration. This library tries to simplify that as much as possible, but it cannot compensate for it.

My general feeling is that sFlow is much better defined and much more powerful than NetFlow in general. NetFlow v9 is the closest Cisco offering that can provide (but does not necessarily provide) similar functionality. However, the behavior and functionality of NetFlow v9 differ between Cisco products. On some you can define aggregations and templates on your own. On some (IOS XR) you can't, and you use NetFlow v9 as a replacement for NetFlow v5. On some other Cisco products (Nexus 7000) there is no NetFlow support at all, but there is sFlow :)

In all Cisco products the interfaces are reported as SNMP interface indexes. However, this index may not be persistent across device reboots, and to associate it with an interface name you have to implement a cached SNMP GET against the interface table OID on your own (see the sketch at the end of this post).

Because of the impressive performance of modern JavaScript, this little module performs really fast in Node.JS. I have a complex collector implemented with configurable, evaluated aggregations that uses on average less than 2% CPU on a virtual machine, processing about 100 packets with flows and about 1000 flow statistics per second.

Update: http://deliantech.blogspot.com/2014/06/new-improved-version-of-node-netflowv9.html
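As promised above, here is a minimal sketch of such a cached ifIndex-to-name lookup. It is my own illustration, not part of node-netflowv9, and it assumes the third-party net-snmp module from npm; the device address, community string and cache lifetime are hypothetical:

    // A minimal sketch of a cached ifIndex -> ifName lookup, assuming the
    // third-party net-snmp npm module; device, community and cache TTL
    // are hypothetical.
    var snmp = require('net-snmp');

    var session = snmp.createSession('192.0.2.1', 'public'); // hypothetical exporter
    var cache = {}; // ifIndex -> { name: string, ts: ms timestamp }

    function ifIndexToName(ifIndex, cb) {
        var hit = cache[ifIndex];
        if (hit && Date.now() - hit.ts < 3600 * 1000) return cb(null, hit.name);
        // IF-MIB::ifName lives under 1.3.6.1.2.1.31.1.1.1.1.<ifIndex>
        session.get(['1.3.6.1.2.1.31.1.1.1.1.' + ifIndex], function(err, varbinds) {
            if (err) return cb(err);
            var name = varbinds[0].value.toString();
            cache[ifIndex] = { name: name, ts: Date.now() };
            cb(null, name);
        });
    }

    // Example use inside the collector callback:
    // ifIndexToName(flow.flow.input_snmp, function(err, name) { ... });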
