Tag Archives: MongoDB

Monitoring MongoDB nodes and clusters with Zabbix

Post Syndicated from Dmitry Lambert original https://blog.zabbix.com/monitoring-mongodb-nodes-and-clusters-with-zabbix/16031/

Zabbix Agent 2 enables our users to monitor a whole set of new systems with minimal configuration required on the monitored systems. Forget about writing custom monitoring scripts, deploying additional packages, or configuring ODBC. A great use case for Zabbix Agent 2 is monitoring one of the most popular NoSQL DB backends – MongoDB. Below, you can read a detailed description and step-by-step guide through the use case or refer to the video available here.

Zabbix MongoDB template

For this example, we will be using Zabbix version 5.4, but MongoDB monitoring by Zabbix Agent 2 is supported starting from version 5.0. If you have a fresh deployment of Zabbix version 5.0 or newer, you will be able to find the MongoDB template in your ‘Configuration‘ – ‘Templates‘ section.

MongoDB Node and Cluster templates

On the other hand, if you have an instance that you deployed before the release of Zabbix 5.0 and then upgraded to Zabbix 5.0 or newer, you will have to import the template manually from our git page. Let’s remember that Zabbix DOES NOT apply new templates or modify existing templates during an upgrade. Therefore, newly released templates have to be IMPORTED MANUALLY!

We can see that we have two MongoDB templates – ‘MongoDB cluster by Zabbix Agent 2’ and ‘MongoDB node by Zabbix agent 2’. Depending on your MongoDB setup – individual nodes or a cluster, apply the corresponding template. Note that the MongoDB cluster template can automatically create hosts for your config servers and shards and apply the MongoDB node template on these hosts.

Host prototypes for config servers and shards

Deploying Zabbix Agent 2 on your Host

Since the data collection is done by Zabbix Agent 2, let’s first deploy Zabbix Agent 2 on our MongoDB node or cluster host. Let’s start by adding the Zabbix 5.4 repository and installing Zabbix Agent 2 from a package.

Add the Zabbix 5.4 repository:

rpm -Uvh https://repo.zabbix.com/zabbix/5.4/rhel/8/x86_64/zabbix-release-5.4-1.el8.noarch.rpm

Install Zabbix Agent 2:

yum install zabbix-agent2
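
If the package does not enable the service automatically, you may also want to enable it so that it starts on boot (assuming a systemd-based system, as in the RHEL 8 example above):

systemctl enable zabbix-agent2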

What if you already have the regular Zabbix Agent running on this machine? In that case, we have two options for how we can proceed. The first option is to simply remove the regular Zabbix Agent and deploy Zabbix Agent 2. If you go this route, make sure you back up the Zabbix Agent configuration file and migrate all of your changes to the Zabbix Agent 2 configuration file.

The second option is running both Zabbix Agents in parallel. In this case, we need to make sure that both agents – Zabbix Agent and Zabbix Agent 2 – are listening on their own specific ports, because, by default, both agents listen for connections on port 10050. This can be configured in the agent configuration file by changing the ‘ListenPort’ parameter.

Don’t forget to specify the ‘Server‘ parameter in the Zabbix Agent 2 configuration file. This parameter should contain your Zabbix Server address or DNS name. By defining it here, you will allow Zabbix Agent 2 to accept the metric poll requests from Zabbix Server.
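
For reference, a minimal sketch of the relevant parameters in the Zabbix Agent 2 configuration file (located at /etc/zabbix/zabbix_agent2.conf by default) could look like this – the server address below is a placeholder, and ListenPort only needs to be changed if the regular Zabbix Agent already occupies port 10050:

# Address of the Zabbix Server allowed to poll this agent
Server=192.168.1.10
# Change only if another agent is already listening on 10050
ListenPort=10051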

After you have made the configuration changes in the Zabbix Agent 2 configuration file, don’t forget to restart Zabbix Agent 2 to apply the changes:

systemctl restart zabbix-agent2

Creating a MongoDB user for monitoring

Once the agent has been deployed and configured, you need to ensure that you have a MongoDB database user that can be used for monitoring purposes. Below you can find a brief example of how you can create such a MongoDB user:

Access the MongoDB shell:

mongosh

Switch to the MongoDB admin database:

use admin

Create a user with ‘userAdminAnyDatabase‘ permissions:

db.createUser(
  {
    user: "zabbix_mon",
    pwd: "zabbix_mon",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)

The username for the newly created user is ‘zabbix_mon’. The password is also ‘zabbix_mon‘ – feel free to change these as per your security policy.

Creating and configuring a MongoDB host

Next, you need to open your Zabbix frontend and create a new host representing your MongoDB node. You can see that in our example, we called our node ‘MongoDB’ and assigned it to a ‘MongoDB Servers’ host group. You can use more detailed naming in a production environment and apply your own host group assignment logic. But remember – a host needs to belong to at least one host group!

Since the metrics are collected by Zabbix Agent 2, you must also create an Agent interface on the host. Zabbix Server will connect to this interface and request the metrics from Zabbix Agent 2. Define the IP address or DNS name of your MongoDB host, where you previously deployed Zabbix Agent 2. Mind the port – by default, port 10050 is defined here, but if you have changed the ‘ListenPort’ parameter in the Zabbix Agent 2 configuration to something other than the default (10050), you need to use that same port number here.

MongoDB host configuration example

Next, navigate to the ‘Templates’ tab and assign the corresponding template – either ‘MongoDB node by Zabbix agent 2’ or ‘MongoDB cluster by Zabbix Agent 2’. In our example, we will assign the MongoDB node template.

Before adding the host, you also need to provide authentication and connection parameters by editing the corresponding User Macros. These User Macros are used by the items that define which metrics we should be collecting. Essentially, we are forwarding the connectivity and authentication information to Zabbix Agent 2, telling it to use these values when collecting the metrics from our MongoDB instance.

To do this, navigate to the ‘Macros’ tab in the host configuration screen. Then, select ‘Inherited and host macros’ to display macros inherited from the MongoDB template.

We can see a bunch of macros here – some of them are related to trigger thresholds and discovery filters, but what we’re interested in right now are the following macros:

  • {$MONGODB.PASSWORD} – MongoDB password. For our example, we will set this to zabbix_mon
  • {$MONGODB.USER} – MongoDB username. For our example, we will set this to zabbix_mon
  • {$MONGODB.CONNSTRING} – MongoDB connection string. Specify here the MongoDB address and port to which Zabbix Agent 2 should connect to perform the metric collection
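
As an illustration, the macro values for a MongoDB instance running locally on the default port could look like the following (the address, port, and credentials are assumptions – adjust them to your environment; the connection string format used by the Zabbix Agent 2 MongoDB plugin is typically tcp://<host>:<port>):

{$MONGODB.USER} = zabbix_mon
{$MONGODB.PASSWORD} = zabbix_mon
{$MONGODB.CONNSTRING} = tcp://localhost:27017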

Now we are ready to add the host. Once the host has been added, we might have to wait for a minute or so before Zabbix begins to monitor the host. This is because Zabbix Server doesn’t instantly pick up the configuration changes. By default, Zabbix Server updates the Configuration Cache once a minute.
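
If you do not want to wait for the next automatic update and you have shell access to the Zabbix Server machine, you can also reload the configuration cache manually:

zabbix_server -R config_cache_reload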

Fine-tuning MongoDB monitoring

At this point, we should see a green ZBX Icon next to our MongoDB host.

Data collection on the MongoDB host has started – note the green ‘ZBX’ icon.

This means that the Zabbix Server has successfully connected to our Zabbix Agent 2, and the metric collection has begun. You can now navigate to the ‘Monitoring’ – ‘Latest data’ section, filter the view by your MongoDB host, and you should see all of the collected metrics here.

MongoDB metrics in ‘Monitoring’ – ‘Latest data’

The final task is to tune the MongoDB monitoring on your hosts so that only the required metrics are collected. Navigate to ‘Configuration’ – ‘Hosts’, find your MongoDB hosts, and go through the different entity types on the host – items, triggers, discovery rules. See an item that you don’t wish to collect metrics for? Feel free to disable it. Open up the discovery rules – change their update intervals or disable the unnecessary ones.

Note: Be careful not to disable master items. Many of the items and discovery rules here are of type ‘Dependent item’, which means that they require a so-called ‘Master item’. Feel free to read more about dependent items here.

Remember the ‘Macros’ section in the host configuration? Let’s return to it. Here we can see some macros that are used in our trigger thresholds, such as:

  • {$MONGODB.REPL.LAG.MAX.WARN} – Maximum replication lag in seconds
  • {$MONGODB.CURSOR.OPEN.MAX.WARN} – Maximum number of open cursors

Feel free to change these as per your problem threshold requirements.

One last thing here – we can filter which elements get discovered by our discovery rules. This is once again defined by user macros like:

  • {$MONGODB.LLD.FILTER.DB.MATCHES} – Databases that should be discovered (By default, the value here is ‘.*’, which will match everything)
  • {$MONGODB.LLD.FILTER.DB.NOT_MATCHES} – Databases that should be excluded from the discovery
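
For example, to keep MongoDB’s internal databases out of the discovery, you could set the exclusion macro to a regular expression along these lines (the exact value is an assumption – adjust it to your environment):

{$MONGODB.LLD.FILTER.DB.NOT_MATCHES} = (admin|config|local)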

And that’s it! After some additional tuning has been applied, we are good to go – our MongoDB entities are being discovered, metrics are getting collected, and problem thresholds have been defined. And all of it has been done with the native Zabbix Agent 2 functionality and an out-of-the-box MongoDB template!

node.js module implementing EventEmitter interface using MongoDB tailable cursors as backend

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/03/nodejs-module-implementing-eventemitter.html

I’ve published a new module to npm that I’ve been using privately for a long time. It implements the EventEmitter interface using MongoDB tailable cursors as a backend.
This module can be used as a messaging bus between processes or even between node.js modules, as it allows implementing an EventEmitter without the need to share the object instance in advance.
Please see the first version of the README.md below:

Module for creating event bus interface based on MongoDB tailable cursors

The idea behind this module is to create an EventEmitter-like interface that uses MongoDB capped collections and tailable cursors as an internal messaging bus. This model has a lot of advantages, especially if you already use MongoDB in your project.
The advantages are:
  • You don’t have to exchange the event emitter object between different pages or even different processes (forked, clustered, living on separate machines). As long as you use the same mongoUrl and capped collection name, you can exchange information. This way you can even create applications that run on different hardware and exchange events and data as if they were a single application.
  • Your events are stored in a collection and could later be used as a transaction log (MongoDB’s own replication log, the oplog, is implemented as a capped collection).
  • It simplifies application development very much.

Installation

To install the module run the following command:
npm install node-mongotailableevents

Short

It is easy to use this module. Look at the following example:
var ev = require('node-mongotailableevents');

var e = ev( { ...options ... }, callback );

e.on('event',callback);

e.emit('event',data);

Initialization and options

The following options can be used with the module:
  • mongoUrl (default mongodb://127.0.0.1/test) – the URL of the MongoDB database
  • mongoOptions (default none) – specific options to be used for the connection to the MongoDB database
  • name (default tailedEvents) – the name of the capped collection that will be created if it does not exist
  • size (default 1000000) – the maximum size in bytes of the capped collection (when reached, the oldest records will be automatically removed)
  • max (default 1000) – the maximum number of records in the capped collection
You can call and create a new event emitter instance without options:
var ev = require('node-mongotailableevents');
var e = ev();
Or you can call and create an event emitter instance with options:
var ev = require('node-mongotailableevents');
var e = ev({
   mongoUrl: 'mongodb://127.0.0.1/mydb',
   name: 'myEventCollection'
});
Or you can call and create an event emitter instance with options and a callback, which will be called when the collection has been created successfully:
var ev = require('node-mongotailableevents');
ev({
   mongoUrl: 'mongodb://127.0.0.1/mydb',
   name: 'myEventCollection'
}, function(err, e) {
    console.log('EventEmitter',e);
});
Or you can call and create an event emitter with just a callback (and default options):
ev(function(err, e) {
    console.log('EventEmitter',e);
});

Usage

This module inherits EventEmitter, so you can use all of the EventEmitter methods. Example:
ev(function(err, e) {
    if (err) throw err;

    e.on('myevent',function(data) {
        console.log('We have received',data);
    });

    e.emit('myevent','my data');
});
The best feature is that you can exchange events between different pages or processes without having to exchange the eventEmitter object instance in advance and without any complex configuration, as long as both pages or processes use the same MongoDB database (they may even connect to different replica servers) and the same “name” (the name of the capped collection). This way you can create massive clusters and a messaging bus distributed among multiple machines without needing any separate messaging system and its configuration.
Try a simple example – start two separate node processes with the following code, and see what the results are:
var ev = require('node-mongotailableevents');
ev(function(err, e) {
    if (err) throw err;

    e.on('myevent',function(data) {
        console.log('We have received',data);
    });

    setInterval(function() {
        e.emit('myevent','my data'+parseInt(Math.random()*1000000));
    },5000);
});
You should see the messages emitted by both processes received in the output of both of them.

Sencha ExtJS grid update in real time from the back-end

Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/10/sencha-extjs-grid-update-in-real-time.html

Hello to all,
I love using Sencha ExtJS in some projects, as it is the most complete JavaScript UI framework, even though it is kind of slow, not very responsive, and expensive in CPU and memory. ExtJS allows very fast and lazy development of otherwise complex UIs, and especially if you use Sencha Architect, you can minimize the UI development time, focusing only on the important parts of your code.
However, ExtJS has quite a few drawbacks – missing features, and some things that are overly complex and hard for an inexperienced developer to keep in mind (like their Controller concept).
Here I would like to show you a little example of how you can implement a very simple real-time update of Sencha Grids (tables) from the back end for a multi-user application.
Why do you need this?
I often develop apps that have to be used by multiple people at the same time, who share and modify the same data.
In such situations, a developer usually has to resolve all those conflicting cases where two users try to modify the exact same data. And Sencha ExtJS grids are not very helpful here. Sencha uses the concept of a Store that interacts with the data on the back end (for example, through a REST API), and the Store is then assigned to a visualization object like a ComboBox or a Grid (table). If you modify a table (with the help of the Cell Editing or Row Editing plugin) whose Store has the autoSync property set to true, then any modification you make automatically generates a REST POST/PUT/DELETE request to inform the back end. It could not be easier for a developer, right? But the data sent to the back end contains the whole modified row – all of the properties. At first sight, this is not an issue. But it is, if you have multiple users editing the same table at the same time. The problem happens because the Sencha Store caches the data. So if User1 modifies a row, the change is stored on the server. But if User2 then modifies the same row in a different column, the change is made on top of the old cached data and can overwrite User1’s modification. The back end cannot know which property has been modified and which has not, and which of the two modifications should be kept.
There are a lot of tricks a developer usually uses to avoid these conflicts. One is keeping a version number with each data row on the server, which the UI clients receive in the GET response. When a modification happens, it is accepted only if the client sends the same version number as the one stored on the server, and the version on the server is then incremented. If another modification arrives carrying older cached data, it will not be accepted, as it will have a different version number. The client then receives an error, and the UI software may refresh its data, updating the versions and the content shown to the user.
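As a minimal sketch of that version-check idea (using the same old MongoDB driver API as the handlers further below; the collection and field names here are hypothetical placeholders, not part of the application described in this post):
// Accept an update only if the client sends the version it last read;
// otherwise nothing matches and we report a conflict to the client.
function updateWithVersion(db, id, clientVersion, changes, callback) {
    db.collection('myrest').findAndModify(
        { _id: id, version: clientVersion },        // match the row AND the expected version
        [['_id', 'asc']],
        { $set: changes, $inc: { version: 1 } },    // apply the changes and bump the version
        { 'new': true },
        function(err, doc) {
            if (err) return callback(err);
            if (!doc) return callback(new Error('version conflict')); // client had stale data
            callback(null, doc);
        }
    );
}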
This is quite a popular model, but it is not very nice for the user. The problem is that with multiple users working with the application and modifying the same data at the same time, a user will constantly be outdated and will constantly receive errors, losing all of their modifications.
The only good solution, for both the users and the system in general, is to update the data in real time in all UI applications whenever a change happens. This does not avoid all possibilities for conflict, but it minimizes them considerably, making the whole operation more pleasant for the end user.
This problem, and the need to resolve it, comes up quite often. Google Spreadsheets, and later Google Docs, introduced real-time updates of the UI data between all users modifying the same document about 4 years ago.
Example
I would like to show here that it is not really hard to update the Stores of ExtJS applications in real time.
It actually requires very little additional code.
Let’s imagine we are using a UI developed in Sencha ExtJS with Stores communicating with the back end through REST. The back end for this example will be Node.JS and MongoDB.
Between Node.JS and the ExtJS UI there will be a Socket.IO session that we will use to push the updates from Node.JS to the ExtJS Stores. I love Socket.IO because it provides a simple WebSockets interface with a fallback to an HTTP polling model in case WebSockets cannot be opened (which happens a lot if you are unlucky enough to use Microsoft security software, for example – it blocks WebSockets).
On the MongoDB side we will use capped collections. I love capped collections – they are not only limited in size, but they also allow you to open a tailable cursor on the collection that will receive any new insertion immediately when it happens.
So imagine your Node.JS express REST code looks something like this:
app.get('/rest/myrest', restGetMyrest);
app.put('/rest/myrest/:id', restPutMyrest);
app.post('/rest/myrest/:id', restPostMyrest);
app.del('/rest/myrest/:id', restDelMyrest);

function restGetMyrest(req, res) { // READ REST method
   db.collection('myrest').find().toArray(function(err, q) { return res.send(200, q) })
}

function restPutMyrest(req, res) { // UPDATE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').findAndModify({ _id: id }, [['_id', 'asc']], { $set: req.body }, { safe: true, 'new': true }, function(err, q) {
      if (err || (!q)) return res.send(500);
      db.collection('capDb').insert({ method: 'myrest', op: 'update', data: q }, function() {});
      return res.send(200, q);
  })
}

function restPostMyrest(req, res) { // CREATE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  req.body._id = id; // merge the id into the new document
  db.collection('myrest').insert(req.body, { safe: true }, function(err, q) {
      if (err || (!q)) return res.send(500);
      setTimeout(function() {
         db.collection('capDb').insert({ method: 'myrest', op: 'create', data: q[0] }, function() {});
      }, 250);
      return res.send(200, q);
  })
}

function restDelMyrest(req, res) { // DELETE REST method
  var id = ObjectID.createFromHexString(req.param('id'));
  db.collection('myrest').remove({ _id: id }, { safe: true }, function(err, q) {
      if (err || (!q)) return res.send(500);
      db.collection('capDb').insert({ method: 'myrest', op: 'delete', data: { _id: id } }, function() {});
      return res.send(201, {});
  })
}
As you can see above, we have implemented a classic CRUD REST method named ‘myrest’, retrieving and storing data in a MongoDB collection named ‘myrest’. However, with every modification we also store a record of that modification in a MongoDB capped collection named ‘capDb’.
We use this capped collection as an internal mechanism for communication within Node.JS. You could use events instead, or you could send this message directly to the Socket.IO receiver. However, I like capped collections, as they bring a lot of advantages – multiple Node.JS processes can listen on the same capped collection and receive the updates simultaneously. So it is easier to implement clusters that way, including notifying Node.JS processes distributed over different machines.
So now, maybe in another file or anywhere else, you may have simple Node.JS Socket.IO code looking like this:
var s = sIo.of('/updates');
db.createCollection("capDb", { capped: true, size: 100000 }, function (err, col) {
   var stream = col.find({}, { tailable: true, awaitdata: true, numberOfRetries: -1 }).stream();
   stream.on('data', function(doc) {
       s.emit(doc.op, doc);
   });
});
 
With this little bit of code we are basically broadcasting the content of every new insertion in the tailable capDb collection to everyone connected via Socket.IO to /updates. We also create this collection if it does not already exist.
This is everything you need in Node.JS 🙂
Now we can get back to the ExtJS code. You simply need to have this code executed somewhere in your HTML application:
var socket = io.connect('/updates');
socket.on('create', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   if ((!s) || (s.getCount() > s.pageSize) || s.findRecord('id', msg.data._id)) return;
   s.suspendAutoSync();
   s.add(msg.data);
   s.commitChanges();
   s.resumeAutoSync();
});
socket.on('update', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   var r;
   if ((!s) || (!(r = s.findRecord('id', msg.data._id)))) return;
   s.suspendAutoSync();
   for (var k in msg.data) if (r.get(k) != msg.data[k]) r.set(k, msg.data[k]);
   s.commitChanges();
   s.resumeAutoSync();
});
socket.on('delete', function(msg) {
   var s = Ext.StoreMgr.get(msg.method);
   var r;
   if ((!s) || (!(r = s.findRecord('id', msg.data._id)))) return;
   s.suspendAutoSync();
   s.remove(r);
   s.commitChanges();
   s.resumeAutoSync();
});
This is all.
Basically, what we do from end to end is this:
When Node.JS receives any CRUD REST operation, it updates the data in MongoDB, but for Create, Update, and Delete it also notifies all the listening web clients about the operation over Socket.IO (in my example I use a tailable capped collection in MongoDB as an internal messaging bus, but you can emit to Socket.IO directly or use another messaging bus like an EventEmitter).
Then ExtJS receives the update over Socket.IO and assumes that the method property contains the name of the Store that has to be updated. We find the store, suspend AutoSync (otherwise we could get into an update -> autosync -> REST -> update loop), modify the content of the record (or the store), and resume AutoSync.
With this simple code you can broadcast all the modifications of your data to all the ExtJS users that are currently online, so they can see the updates in real time in their grids.
A single REST method may be used by multiple stores. In that case, you have to modify the code with some association between the REST method name and all the related stores.
However, for this simple example, that is unnecessary.
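If you ever do need it, a minimal sketch of such an association could look like this (the store IDs are purely hypothetical placeholders; only the 'update' event is shown):
// Hypothetical mapping from a REST method name to all the stores that use it
var methodStores = {
   myrest: ['MyGridStore', 'MyComboStore']
};

socket.on('update', function(msg) {
   var names = methodStores[msg.method] || [];
   Ext.Array.each(names, function(name) {
      var s = Ext.StoreMgr.get(name);
      var r;
      if ((!s) || (!(r = s.findRecord('id', msg.data._id)))) return;
      s.suspendAutoSync();
      for (var k in msg.data) if (r.get(k) != msg.data[k]) r.set(k, msg.data[k]);
      s.commitChanges();
      s.resumeAutoSync();
   });
});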
Some other day I may show you the “ExtJS WebSockets CRUD proxy” I made, where there is only one communication channel between the stores and the back end – Socket.IO. It is much faster and removes the need to have any REST code at all on your server.