Tag Archives: node.js

Sencha ExtJS grid update in real time from the back-end

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/10/sencha-extjs-grid-update-in-real-time.html

Hello to all,
I love using Sencha ExtJS in some projects, as it is the most complete JavaScript UI framework, even though it is kind of slow, not very responsive, and expensive in CPU and memory. ExtJS allows you to do very fast and lazy development of an otherwise complex UI, and especially if you use Sencha Architect you can minimize the UI development time, focusing only on the important parts of your code.
However, ExtJS has quite a few drawbacks – missing features, and some things that are overly complex and hard for an inexperienced developer to keep in mind (like their Controller concept).
Here I would like to show you a little example of how you can implement a very simple real-time update of Sencha grids (tables) from the back end in a multi-user application.
Why do you need this?
I often develop apps that have to be used by multiple people at the same time, sharing and modifying the same data.
In such a situation, a developer usually has to resolve all those conflicting cases where two users try to modify the same data. And Sencha ExtJS grids are not very helpful here. Sencha uses the concept of a Store that interacts with the data on the back end (for example through a REST API); the Store is then assigned to a visualization object like a ComboBox or a Grid (table). If you modify a table (with the help of the Cell Edit or Row Edit plugin) whose Store has the autoSync property set to true, then any modification you make automatically generates a REST POST/PUT/DELETE request to inform the back end. It could not be easier for a developer, right? But the data sent to the back end contains the whole modified row – all of its properties. At first sight, this is not an issue. But it is one if you have multiple users editing the same table at the same time. The problem is that the Sencha Store caches the data. So if User1 modifies a row, it is stored on the server. But if User2 then modifies the same row in a different column, it will do so over the old data and can overwrite User1's modification. The back end cannot know which property has been modified and which of the two modifications has to be kept.
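For reference, this is roughly what such an auto-syncing store looks like (a minimal sketch; the model, its fields and the URL are made up for the illustration):

Ext.define('MyApp.model.MyRest', {
    extend: 'Ext.data.Model',
    idProperty: '_id',
    fields: ['_id', 'name', 'value']
});

var store = Ext.create('Ext.data.Store', {
    storeId: 'myrest',            // same name as the REST method used later
    model: 'MyApp.model.MyRest',
    autoLoad: true,
    autoSync: true,               // every edit immediately fires a POST/PUT/DELETE
    proxy: { type: 'rest', url: '/rest/myrest', reader: { type: 'json' } }
});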
There are a lot of tricks a developer can use to avoid these conflicts. One is keeping a version number with each data row on the server, which the UI clients receive with every GET. When a modification arrives, it is accepted only if the client sends the same version number as the one stored on the server, and the server then increments the version. If another modification arrives carrying older cached data, it is rejected because it has an outdated version number. The client then receives an error, and the UI software may refresh its data, updating the versions and the content visualized to the user.
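As a minimal sketch of that versioning trick, using the same old MongoDB driver style as the code below (the _v field name is made up for the illustration):

function updateWithVersion(id, clientVersion, changes, cb) {
   db.collection('myrest').findAndModify(
       { _id: id, _v: clientVersion },        // match only if the client saw the latest version
       [['_id', 'asc']],
       { $set: changes, $inc: { _v: 1 } },    // apply the change and bump the version
       { safe: true, 'new': true },
       function(err, q) {
           if (err) return cb(err);
           if (!q) return cb(new Error('version conflict')); // client data was outdated
           return cb(null, q);
       }
   );
}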
This is a quite popular model, but it is not very nice for the user. The problem is that with multiple users modifying the same data at the same time, a user will constantly be outdated and will constantly receive errors, losing all of their modifications.
The only good solution, for both the users and the system in general, is to update the data in real time in all UI applications whenever a change happens. This does not remove all possibilities for conflict, but it minimizes them greatly, making the whole operation much more pleasant for the end user.
This problem, and the need to resolve it, comes up quite often. Google Spreadsheets, and later Google Docs, introduced real-time updates between the UIs of all users modifying the same document about four years ago.
Example
I want to show here that it is not really hard to update the Stores of ExtJS applications in real time.
It actually requires very little additional code.
Let's imagine we are using a UI developed in Sencha ExtJS, with Stores communicating with the back end through REST. The back end for this example will be Node.JS and MongoDB.
Between Node.JS and the ExtJS UI there will be a Socket.IO session that we will use to push updates from Node.JS to the ExtJS Stores. I love Socket.IO because it provides a simple WebSockets interface with a fallback to HTTP long polling in case a WebSocket cannot be opened (which happens a lot if you are unlucky enough to use Microsoft security software, for example – it blocks WebSockets).
On the MongoDB side we can use a capped collection. I love capped collections – not only are they limited in size, they are also tailable: you can open a tailable cursor on them and receive every new insertion immediately when it happens.
So imagine your Node.JS Express REST code looks something like this:
app.get('/rest/myrest', restGetMyrest);
app.put('/rest/myrest/:id', restPutMyrest);
app.post('/rest/myrest/:id', restPostMyrest);
app.del('/rest/myrest/:id', restDelMyrest);

function restGetMyrest(req, res) { // READ REST method
   db.collection('myrest').find().toArray(function(err, q) { return res.send(200, q) })
}

function restPutMyrest(req, res) { // UPDATE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').findAndModify({ _id: id }, [['_id', 'asc']], { $set: req.body }, { safe: true, 'new': true }, function(err, q) {
      if (err || (!q)) return res.send(500);
      db.collection('capDb').insert({ method: 'myrest', op: 'update', data: q }, function() {});
      return res.send(200, q);
  })
}

function restPostMyrest(req, res) { // CREATE REST method
  var doc = req.body;
  doc._id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').insert(doc, { safe: true }, function(err, q) {
      if (err || (!q)) return res.send(500);
      setTimeout(function() {
         db.collection('capDb').insert({ method: 'myrest', op: 'create', data: q[0] }, function() {});
      }, 250);
      return res.send(200, q);
  })
}

function restDelMyrest(req, res) { // DELETE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').remove({ _id: id }, { safe: true }, function(err, q) {
      if (err || (!q)) return res.send(500);
      db.collection('capDb').insert({ method: 'myrest', op: 'delete', data: { _id: id } }, function() {});
      return res.send(200, {});
  })
}
As you can see above, we have implemented a classical CRUD REST method named "myrest", retrieving and storing data in a MongoDB collection named 'myrest'. With every modification, however, we also store that modification in a MongoDB capped collection named "capDb".
We use this capped collection as an internal communication mechanism within Node.JS. You could use events instead, or you could send the message directly to the Socket.IO receiver. However, I like capped collections, as they bring a big advantage: multiple Node.JS processes can listen on a capped collection and receive the updates simultaneously. This makes it easier to implement clusters, including notifying Node.JS processes distributed over different machines.
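For comparison, here is a minimal sketch of the in-process event variant mentioned above, using Node's built-in EventEmitter (usable only when a single Node.JS process serves both the REST API and the Socket.IO clients):

var EventEmitter = require('events').EventEmitter;
var bus = new EventEmitter();

// In the REST handlers, instead of inserting into capDb:
//   bus.emit('change', { method: 'myrest', op: 'update', data: q });

// And next to the Socket.IO code shown below:
//   bus.on('change', function(doc) { s.emit(doc.op, doc); });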
So now, maybe in another file or anywhere else, you may have simple Node.JS Socket.IO code looking like this:
// Assuming sIo is the Socket.IO server instance and db is the open
// MongoDB connection, both created elsewhere in the application.
var s = sIo.of('/updates');
db.createCollection('capDb', { capped: true, size: 100000 }, function(err, col) {
   var stream = col.find({}, { tailable: true, awaitdata: true, numberOfRetries: -1 }).stream();
   stream.on('data', function(doc) {
       s.emit(doc.op, doc);
   });
});
 
With the little code above we are basically broadcasting the content of each new insertion in the tailable capDb to everyone connected to /updates over Socket.IO. We also create the collection if it does not already exist.
This is everything you need in Node.JS 🙂
Now we can get back to the ExtJS code. You simply need to have this code executed somewhere in your HTML application:
var socket = io.connect('/updates');
socket.on('create', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   if (!s || s.getCount() > s.pageSize || s.findRecord('id', msg.data._id)) return;
   s.suspendAutoSync();
   s.add(msg.data);
   s.commitChanges();
   s.resumeAutoSync();
});
socket.on('update', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   var r;
   if (!s || !(r = s.findRecord('id', msg.data._id))) return;
   s.suspendAutoSync();
   for (var k in msg.data) if (r.get(k) != msg.data[k]) r.set(k, msg.data[k]);
   s.commitChanges();
   s.resumeAutoSync();
});
socket.on('delete', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   var r;
   if (!s || !(r = s.findRecord('id', msg.data._id))) return;
   s.suspendAutoSync();
   s.remove(r);
   s.commitChanges();
   s.resumeAutoSync();
});
This is all.
Basically, this is what we do from end to end:
When Node.JS receives a CRUD REST operation, it updates the data in MongoDB; for Create, Update and Delete it also notifies all listening web clients about the operation over Socket.IO (in my example I use a tailable capped collection in MongoDB as an internal messaging bus, but you can emit to Socket.IO directly or use another messaging bus, like an EventEmitter).
ExtJS then receives the update over Socket.IO and assumes that the method property contains the name of the Store that has to be updated. We find the store, suspendAutoSync if it exists (otherwise we could get into an update->autosync->rest->update loop), modify the content of the record (or the store) and resume autoSync.
With this simple code you can broadcast all the modifications of your data between all the ExtJS users that are currently online, so they can see updates in real time in their grids.
A single REST method may be used by multiple stores. In that case, you have to modify the code with some association between the REST method name and all the related stores.
However, for this simple example, that is unnecessary.
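If you ever do need that association, a minimal sketch could be a simple lookup table on the client (the store IDs here are hypothetical):

var methodToStores = { myrest: ['myrestGrid', 'myrestCombo'] };

function storesFor(method) {
    return (methodToStores[method] || [method]).map(function(id) {
        return Ext.StoreMgr.get(id);
    }).filter(function(s) { return !!s; });
}

// ...and then each socket handler iterates over storesFor(msg.method)
// instead of using a single Ext.StoreMgr.get(msg.method).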
Some other day I may show you the “ExtJS WebSockets CRUD proxy” I made, where you have only one communication channel between the stores and the back end – Socket.IO. It is much faster and removes the need to have any REST code in your server at all.

New improved version of the node-netflowv9 module for Node.JS

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/06/new-improved-version-of-node-netflowv9.html

As I have mentioned before, I have implemented a NetFlowV9 compatible library for decoding Cisco NetFlow version 9 packets in Node.JS.
Now I am upgrading it to version 0.1, which has a few updates:
  • bug fixes (including avoidance of an issue that happens with ASR9k and IOS XR 4.3)
  • now you can start the collector (with a second parameter of true) in a mode where you receive only one callback per packet instead of one callback per flow (the default mode); see the sketch after this list. That can be useful if you want to count lost packets (otherwise the relation between NetFlow packet and callback is lost)
  • decreased code size
  • now the module compiles the templates dynamically into a function (using new Function). I like this approach very much, as it creates really fast functions (in contrast to eval, a Function is always JIT processed) and it allows me to spare loops, function calls and memory copies. I like to do things like that with every data structure that allows it. As an effect of this, the new module is about 3 times faster in all the live tests I was able to perform
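As a rough illustration of that packet-level mode, starting the collector could look like this. This is a sketch following the description above; the exact shape of the callback argument is an assumption, so we only log it for inspection:

var Collector = require('node-netflowv9');

// Second parameter true = one callback per NetFlow packet (as described
// above) instead of one callback per flow. Inspect the logged object to
// see the per-packet structure the module delivers.
Collector(function(packet) {
    console.log(packet);
}, true).listen(3000);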

Simple example for Node.JS sflow collector

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/06/simple-example-for-nodejs-sflow.html

Sometimes you can use SFlow or NetFlow to add extra intelligence to your network. The collectors available on the internet are usually there just to collect and store data used for accounting or nice graphics. But they either do not allow you to execute your own code when certain rules/thresholds are reached, or they do not react in real time (in general, the protocols themselves delay you as well: you cannot expect NetFlow accounting to be used in real time at all, and while SFlow has modes that are a bit faster to react, by design it is still not considered real-time sampling/accounting).
Just imagine you have a simple goal – you want to automatically detect floods and notify the operators, or even automatically apply filters.
If you have an algorithm that can distinguish incorrect traffic from normal traffic in NetFlow/SFlow samples, you may want to execute an operation immediately when it matches.
Modern DoS attacks and floods may be complex and hard to detect. But mostly it is hard to make the currently available NetFlow/SFlow collector software do that for you and then trigger/execute an external application.
However, it is very easy to program it yourself.
I am giving you a simple example that uses the node-sflow module to collect packet samples, measures how many of them match a certain destination IP address and, if they are above a certain pps threshold, executes an external program (which is supposed to block that traffic). Then, after a period of time, it executes another program (which is supposed to unblock the traffic).
This program is very small – about 120 lines of code – and allows you to use a complex configuration file where you can define a list of rules that can optionally match VLANs and networks for the sampled packet, and then count how many samples you have per destination for each rule. The rule list is executed until the first match, in the configured order within the array; that allows you to create black and white lists and different thresholds per network and VLAN, or to have different rules for overlapping IP addresses as long as they belong to different VLANs.
Keep in mind this is just example software, there to show you how to use the node-sflow and pcap modules together! It is not supposed to be used in production unless you definitely know what you are doing!
The goal of this example is just to show you how easy it is to add extra logic to your network.
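To give a flavour of the approach, here is a stripped-down sketch (not the actual collector linked below; the threshold, interval and script name are made up, and the record decoding follows the node-sflow/pcap example shown later on this page):

var Collector = require('node-sflow');
var pcap = require('pcap');
var exec = require('child_process').exec;

var counters = {};     // destination IP -> number of samples in the current interval
var THRESHOLD = 1000;  // samples per interval that we treat as a flood

Collector(function(flow) {
    if (!(flow && flow.flow.records)) return;
    flow.flow.records.forEach(function(n) {
        if (n.type != 'raw' || n.protocolText != 'ethernet') return;
        try {
            var pkt = pcap.decode.ethernet(n.header, 0);
            if (pkt.ethertype != 2048) return; // IPv4 only
            counters[pkt.ip.daddr] = (counters[pkt.ip.daddr] || 0) + 1;
        } catch (e) {}
    });
}).listen(3000);

setInterval(function() { // every 10 seconds check the counters and reset them
    Object.keys(counters).forEach(function(ip) {
        if (counters[ip] > THRESHOLD) exec('./block.sh ' + ip); // hypothetical blocking script
    });
    counters = {};
}, 10000);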
The code is available on GitHub here: https://github.com/delian/sflow-collector/

AMF0/3 encoding/decoding in Node.JS

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/06/amf03-encodingdecoding-in-nodejs.html

I am writing my own RTMP restreamer (RTMP is Adobe’s dying streaming protocol, widely used with Flash) in Node.JS.
Although there are quite a few RTMP modules, none of them is complete, operates on Node.JS buffers, or fully supports either AMF0 or AMF3 encoding and decoding.
So I had to write one of my own.
The first module is the AMF0/AMF3 utils that allow me to encode and decode AMF data. AMF is a binary encoding used in Adobe’s protocols, very similar to BER (used in ITU’s protocols) but supporting complex objects. In general, the goal of AMF is to encode ActionScript objects into binary. As ActionScript is a language belonging to the JavaScript family, ActionScript objects are basically JavaScript objects (with the exception of some simplified arrays).
My module is named node-amfutils and is now available in the public NPM repository as well as here: https://github.com/delian/node-amfutils
It is not fully complete nor very well tested, as I have a very limited environment to do the tests in. However, it works for me and provides the best AMF0 and AMF3 support currently available for Node.JS:
  • It can encode/decode all the objects defined in both AMF0 and AMF3 (the other AMF modules in the npm repository support only partial AMF0 or partial AMF3)
  • It uses Node.JS buffers (it is not necessary to do string-to-buffer-to-string conversions, as you have to do with the other modules)

It is easy to use this module. You just have to do something like this:

var amfUtils = require('node-amfutils');
var buffer = amfUtils.amf0Encode([{ a: "xxx" }, null]);
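And decoding is the mirror operation. Assuming the module exposes a matching amf0Decode helper (an assumption following the naming of the encode call above; check the README for the exact name):

var amfUtils = require('node-amfutils');

var buffer = amfUtils.amf0Encode([{ a: "xxx" }, null]);
var values = amfUtils.amf0Decode(buffer); // assumed mirror of amf0Encode
console.log(values);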

SFlow version 5 module for Node.JS

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/06/sflow-version-5-module-for-nodejs.html

Unfortunately, as with NetFlow Version 9, SFlow version 5 (and SFlow in general) has not been very well supported by the Node.JS community up to now.
I needed a modern SFlow version 5 compatible module, so I had to write one of my own.
Please welcome the newest module in Node.JS’s NPM that can decode SFlow version 5 packets and be used in the development of simple and easy SFlow collectors! The module is named node-sflow and you can look at its code here https://github.com/delian/node-sflow
Please be careful, as in the next few days I may change the object structure of the flow representation to simplify it! Any tests and experiments are welcome.
The sflow module is available in the public npm (npm install node-sflow) repository.
To use it you have to do:
var Collector = require('node-sflow');

Collector(function(flow) {
    console.log(flow);
}).listen(3000); 
In general, SFlow is a much more powerful protocol than NetFlow, even compared to NetFlow's latest version (version 9). It can represent more complex counters; report errors, drops and full packet headers (not only their properties); collect information from interfaces, flows and VLANs; and combine all of this into much more complex reports.

However, the SFlow support in the agents – the networking equipment – is usually extremely simplified, far from the richness and complexity the SFlow protocol can provide. Most vendors just do packet sampling and send the samples over SFlow as raw packet/frame headers with an associated, unclear counter.

If you are facing the issue described above, this module cannot help much. You will just get the raw packet header (usually Ethernet + IP header) as a Node.JS buffer, and then you have to decode it on your own. I want to keep the node-sflow module simple, and I don't plan to decode raw packet headers there, as this is not a feature of SFlow itself.

If you need to decode the raw packet header, I can suggest one easy solution: you can use the pcap module from the npm repository and decode the raw header with it:

var Collector = require('node-sflow');
var pcap = require('pcap');

Collector(function(flow) {
    if (flow && flow.flow.records && flow.flow.records.length>0) {
        flow.flow.records.forEach(function(n) {
            if (n.type == 'raw') {
                if (n.protocolText == 'ethernet') {
                    try {
                        var pkt = pcap.decode.ethernet(n.header, 0);
                        if (pkt.ethertype != 2048) return; // IPv4 only
                        var l4 = pkt.ip.tcp || pkt.ip.udp; // transport header, if any
                        if (!l4) return; // skip packets that are neither TCP nor UDP
                        console.log('VLAN', pkt.vlan ? pkt.vlan.id : 'none',
                            'Packet', pkt.ip.protocol_name,
                            pkt.ip.saddr, ':', l4.sport,
                            '->', pkt.ip.daddr, ':', l4.dport)
                    } catch(e) { console.log(e); }
                }
            }
        });
    }
}).listen(3000);