
Installing and configuring the Zabbix Proxy

Post Syndicated from Werner Dijkerman original https://blog.zabbix.com/installing-and-configuring-the-zabbix-proxy/13319/

In the previous blog post, we created a Zabbix Server setup, created several users, a media type, and an action. But today, we will install the Zabbix Proxy on a third node. This Zabbix Proxy will have its database running on the same host, so the “node-3” host runs both MySQL and the Zabbix Proxy.

This blog post is the second part of a three-part series in which Werner Dijkerman gives us an example of how to set up your Zabbix infrastructure by using Ansible.
You can find part 1 of the series here.

A git repository containing the code of these blog posts is available at https://github.com/dj-wasabi/blog-installing-zabbix-with-ansible. Before we run Ansible, we open a shell on the “bastion” node with the following command:

$ vagrant ssh bastion

Once we have opened the shell, go to the “/ansible” directory, where all of our Ansible files are present.

$ cd /ansible

In the previous blog post, we executed the “zabbix-server.yml” playbook. Now we use the “zabbix-proxy.yml” playbook. This playbook will deploy a MySQL database on “node-3” and install the Zabbix Proxy on the same host.

$ ansible-playbook -i hosts zabbix-proxy.yml

This playbook will run for a few minutes, creating all services on the node. While it is running, we will explain some of the configuration options we have set.

The configuration we will talk about can be found in the “/ansible/group_vars/zabbix_proxy” directory. This directory is only used when we deploy the Zabbix Proxy, and it contains 2 files: one called “secret” and one called “generic”. It doesn’t really matter what names the files in this directory have. I used the name “secret” to let you know that this file contains secrets and should be encrypted with a tool like ansible-vault. As that is out of scope for this blog, I simply left the file in plain text. So how do we know that this directory is used for the Zabbix Proxy node?

In the previous blog post, we mentioned that with the “-i” argument, we provide the location of the inventory file. This inventory file contains the hostnames and the groups that Ansible is using. If we open the inventory file “hosts”, we can see a group called “zabbix_proxy.” So Ansible uses the information in the “/ansible/group_vars/zabbix_proxy” directory as input for variables. But how does the “/ansible/zabbix-proxy.yml” file know which hosts or groups to use? At the beginning of this file, you will notice the following:

- hosts: zabbix_proxy
  become: true
  collections:
    - community.zabbix

Here you will see that the “hosts” key contains the value “zabbix_proxy”. All tasks and roles configured in this play will be applied to all hosts that are part of the zabbix_proxy group. In our case, only one host is part of that group. If you had, for example, 4 different datacenters and wanted a Zabbix Proxy running in each of them, executing this playbook against those 4 hosts would leave you with 4 running Zabbix Proxy servers at the end of the run.
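
For example, a hypothetical inventory in which the zabbix_proxy group holds four hosts (one per datacenter) could look like this; the hostnames and addresses are made up:

[zabbix_proxy]
proxy-dc1 ansible_host=10.10.1.13
proxy-dc2 ansible_host=10.20.1.13
proxy-dc3 ansible_host=10.30.1.13
proxy-dc4 ansible_host=10.40.1.13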

Within the “/ansible/group_vars/zabbix_proxy/generic” file, we have several options configured. Let’s discuss the following options:

* zabbix_server_host
* zabbix_proxy_name
* zabbix_api_create_proxy
* zabbix_proxy_configfrequency

zabbix_server_host

The first one, the “zabbix_server_host” property, tells the Zabbix Proxy where it can find the Zabbix Server, which allows the Zabbix Proxy and the Zabbix Server to communicate with each other. Normally you would also have to configure the firewall (iptables or firewalld) to allow the traffic, but in this case, there is no need for that: everything inside the environment we created with Vagrant has full access. When you deploy a production-like environment, don’t forget to configure the firewall. (Currently, firewall configuration is not yet available as part of the Ansible Zabbix Collection for either the Zabbix Server or the Zabbix Proxy, so for now you should create a playbook to configure the local firewall to allow/deny traffic.)

As you will notice, we didn’t configure the property with a literal value like an IP address or FQDN. We use some Ansible notation to do that for us, so we keep the Zabbix Server information in one place instead of several. In this case, Ansible gets the information by reading the inventory file, looking for a host entry with the name “node-1” (the host that runs the Zabbix Server), and using the value of the property named “ansible_host” (which has the value “10.10.1.11”).
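
In the “generic” file, that lookup will be something along these lines (a sketch; the exact notation in the repository may differ slightly):

zabbix_server_host: "{{ hostvars['node-1']['ansible_host'] }}"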

zabbix_proxy_name

This is the name of the Zabbix Proxy host, which will be shown in the Zabbix frontend. We will see it later in this blog post when we create a new host to be monitored. When you create a new host, you can configure whether that new host should be monitored by a proxy, and if so, you will see this name.

zabbix_api_create_proxy

When we deploy the Zabbix Proxy role, we not only install the Zabbix Proxy package and the configuration file and start the service. We also perform an API call to the Zabbix Server to create a Zabbix Proxy entry. With this entry in place, we can configure hosts to be monitored via this new Zabbix Proxy.
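
The role handles this API call for us, but for illustration, a manual equivalent would look roughly like this (“<session key>” is a placeholder for an authorization token, and status 5 stands for an active proxy):

curl -s -X POST -H 'Content-Type: application/json-rpc' -d '
{
    "jsonrpc": "2.0",
    "method": "proxy.create",
    "params": {
        "host": "node-3",
        "status": 5
    },
    "auth": "<session key>",
    "id": 1
}' http://zabbix.example.com/api_jsonrpc.php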

zabbix_proxy_configfrequency

The last one is just for demonstration purposes. With a default installation/configuration of the Zabbix Proxy, this value is 3600, meaning that the configuration is synchronized between the Zabbix Server and the Zabbix Proxy every 3600 seconds. Because we are running a small demo here in this Vagrant setup, we have set this to 60 seconds.
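
Putting the four options together, the “generic” file might contain something along these lines (an illustrative sketch, not necessarily the repository’s exact contents):

zabbix_server_host: "{{ hostvars['node-1']['ansible_host'] }}"
zabbix_proxy_name: node-3
zabbix_api_create_proxy: true
zabbix_proxy_configfrequency: 60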

By now, the deployment of our Zabbix Proxy should be ready.

When we open the Zabbix Web interface again, we go to “Administration” and click on “Proxies”. Here we see the following:

We see an overview of all available proxies, and in our case, we only have 1. We have “node-3” configured, which runs in “Active” mode. When you want to configure a proxy in “Passive” mode, you’ll have to update the “/ansible/group_vars/zabbix_proxy/generic” file and add the following entry somewhere in the file: “zabbix_proxy_status: passive”. Once you have updated and saved the file, you’ll have to rerun the “ansible-playbook -i hosts zabbix-proxy.yml” command. If you then recheck the page, you will notice that the proxy now runs in “Passive” mode.

So let’s go to “Configuration” – “Hosts”. At the moment, you will only see 1 host, which is the “Zabbix server,” like in the following picture.

Let’s open the host creation page to demonstrate that you can now set the host to be monitored by a proxy. The actual creation of a host is something that we will do automatically when we deploy the Zabbix Agent with Ansible, not something we should do manually. 😉 As you will notice, you are able to click on the dropdown menu with the option “Monitored by proxy” and see “node-3” appear. That is very good!

Summary

We have installed and configured both a Zabbix Server and a Zabbix Proxy, and we are all set now. For the Zabbix Proxy, we installed both the MySQL database and the Zabbix Proxy on the same node, whereas for the Zabbix Server we installed them on separate nodes. In the following blog post, we will install the Zabbix Agent on all nodes.

Installing the Zabbix Server with Ansible

Post Syndicated from Werner Dijkerman original https://blog.zabbix.com/installing-the-zabbix-server-with-ansible/13317/

Today we are focusing on automating the installation and configuration of software instead of using the manual approach. Installing and configuring software manually takes a lot more time, you can easily make errors by forgetting steps or making typos, and it will probably get a bit boring when you need to do this for a large number of servers.

In this blog post, I will demonstrate how to install and configure a Zabbix environment with Ansible. Ansible has the potential to simplify many of your day-to-day tasks. As an alternative to Ansible, you may also opt to use Puppet, Chef, or SaltStack to install and configure your Zabbix environment.

Ansible does not have any specific infrastructure requirements for it to do its job. We just need to make sure that a user exists on the target host, preferably configured with SSH keys. With tools like Puppet or Chef, you need to have a server running somewhere, and you will need to deploy an agent on your nodes. You can learn more about Ansible here: https://docs.ansible.com/ansible/latest/index.html.

This post is the first in a series of three articles. We will set up a (MySQL) database running on one node (“node-2”) and a Zabbix Server including frontend running on a separate node (“node-1”). Once we have built this, we will configure an action and a media type, and we will create some users. In the following image you will see the environment we will create.

The environment we will create.

In the second blog post, we will set up a Zabbix Proxy and a MySQL database on a new node (“node-3”), with both running on that same node. In the third blog post, we will install the Zabbix Agent on all three nodes we have used so far and configure some user parameters. While the Zabbix Agent on “node-3” will use the Zabbix Proxy, the Zabbix Agents on “node-1” and “node-2” will be monitored by the Zabbix Server.

Preparations

A git repository containing the code used in these blog posts is available at https://github.com/dj-wasabi/blog-installing-zabbix-with-ansible. Before we can do anything, we have to install Vagrant (https://www.vagrantup.com/downloads.html) and VirtualBox (https://www.virtualbox.org/wiki/Downloads). Once you have done that, please clone the git repository mentioned earlier somewhere on your host. For this demo, we will not run the Zabbix frontend with TLS certificates.

We have to update our hosts file with the following line to make sure that we can access the Zabbix frontend by name:

10.10.1.11 zabbix.example.com

In the root directory of the git repository which you cloned a few moments ago, you will find the Vagrantfile. This Vagrantfile contains the configuration of the virtual machines in our setup. We will create 4 virtual machines running Ubuntu 20.04, each with 1 CPU and 1 GB of RAM, which you can see in the first “config” block. In the second config block, we configure our “bastion” host, which we will discuss later. This node will get the IP 10.10.1.3, and we also mount the ansible directory into this virtual machine at the location “/ansible”. For installing and configuring this node, we use the playbook bastion.yml. With this playbook, we install some packages like Python, git, and Ansible inside the bastion virtual machine.

The third config block is part of a loop that will create and configure 3 virtual machines. Each virtual machine is also an Ubuntu node and has its own IP (10.10.1.11 for the first node, 10.10.1.12 for the second, and 10.10.1.13 for the third), and just like the “bastion” node, they each have 1 CPU and 1 GB of RAM.

You will have to execute the following command:

$ vagrant up

With this command, we start our virtual machines. This might take a while, as it will download a VirtualBox image containing Ubuntu. The “vagrant up” command starts the “bastion” node and all other nodes that are part of this demo. Once that is done, we need to open a shell on the “bastion” node:

$ vagrant ssh bastion

This “bastion” node is a fundamental node on which we will execute Ansible, but we will not be installing anything on the host itself. We have opened a shell in the virtual machine we just created; you can compare it with creating an “ssh” connection. We have to go to the following directory before we can download the dependencies:

$ cd /ansible

As mentioned before, we have to download the Ansible dependencies. The installation depends on several Ansible Roles and an Ansible Collection. With the Ansible Roles and the Ansible Collection, we can install MySQL, Apache, and the Zabbix components. We have to execute the following command to download the dependencies:

$ ansible-galaxy install -r requirements.yml
Starting galaxy role install process
- downloading role 'mysql', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-mysql/archive/3.3.0.tar.gz
- extracting geerlingguy.mysql to /home/vagrant/.ansible/roles/geerlingguy.mysql
- geerlingguy.mysql (3.3.0) was installed successfully
- downloading role 'apache', owned by geerlingguy
- downloading role from https://github.com/geerlingguy/ansible-role-apache/archive/3.1.4.tar.gz
- extracting geerlingguy.apache to /home/vagrant/.ansible/roles/geerlingguy.apache
- geerlingguy.apache (3.1.4) was installed successfully
- extracting wdijkerman.php to /home/vagrant/.ansible/roles/wdijkerman.php
- wdijkerman.php was installed successfully
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.zabbix:1.2.0' to '/home/vagrant/.ansible/collections/ansible_collections/community/zabbix'
Created collection for community.zabbix at /home/vagrant/.ansible/collections/ansible_collections/community/zabbix
community.zabbix (1.2.0) was installed successfully

Your output may vary because versions might have been updated since this blog post was written. We have now downloaded the dependencies and are ready to install the rest of our environment. But why do we need to download a role for MySQL, Apache, and PHP? A role contains all the necessary tasks and files to configure that specific service. In the case of the MySQL Ansible role, it will install the MySQL server and all other packages that MySQL requires on the host, make sure the mysqld service is created and running, and also create the databases, create and configure MySQL users, and configure the root password. Using a role helps us install our environment without having to figure out how to install and configure a MySQL server manually.

So what about the collection, the Ansible community Zabbix collection? Ansible introduced this concept with version 2.10; a collection is basically a bundle of plugins, modules, and roles for a specific service. In our case, the Zabbix collection contains the roles for installing the Zabbix Server, Proxy, Agent, Java gateway, and the frontend. But it also contains a plugin to use a Zabbix environment as our inventory, and modules for creating resources in Zabbix. All of these modules work with the Zabbix API to configure resources like actions, triggers, groups, templates, proxies, etc. Basically, everything we want to create and use can be done with a role or a collection.
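
For reference, a requirements.yml that produces the install output above would look roughly like this (a sketch based on that output; the exact versions pinned in the repository may differ):

---
roles:
  - name: geerlingguy.mysql
    version: 3.3.0
  - name: geerlingguy.apache
    version: 3.1.4
  - name: wdijkerman.php

collections:
  - name: community.zabbix
    version: 1.2.0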

Installing Zabbix Server

Now we can execute the following command, which will install the MySQL database on “node-2” and the Zabbix Server on “node-1”:

$ ansible-playbook -i hosts zabbix-server.yml

This might take a while, a minute or ten, depending on the performance of your host. We execute the “ansible-playbook” command, and with “-i” we provide the location of the inventory file. Here you see the contents of the inventory file:

[zabbix_server]
node-1 ansible_host=10.10.1.11

[zabbix_database]
node-2 ansible_host=10.10.1.12

[zabbix_proxy]
node-3 ansible_host=10.10.1.13

[database:children]
zabbix_database
zabbix_proxy

This inventory file contains all of our nodes and the groups to which the hosts belong. We can see in the file that there is a group called “zabbix_server” (the value between the [] square brackets is the name of the group), which contains the “node-1” host. Because we have a group called “zabbix_server,” we also have a directory with that name containing some files. These are all the properties (or variables) that will be used for all hosts (in our case, only “node-1”) in the “zabbix_server” group.

Web Interface

Now you can open your favorite browser and open “zabbix.example.com”, and you will see the Zabbix login screen. Please enter the default credentials:

Username: Admin
Password: zabbix

On the Dashboard, you will probably notice that it complains that it cannot connect to the Zabbix Agent running on the Zabbix Server, which is fine as we haven’t installed it yet. We will do this in a later blog post.

Dashboard overview

When we go to “Administration” and click on “Media types,” we will see a media type called “A: Ops email.” That is the one we have created. We can open the “/ansible/zabbix-server.yml” file and go to line 33, where we have configured the creation of the media type. In this case, we have configured multiple templates for sending emails via the “mail.example.com” SMTP server.

Now that we have seen the media type, we will look at the action we just created, which makes use of this media type. The action can be found in the “/ansible/zabbix-server.yml” file on line 69. When you go to “Configuration” and “Actions,” you will see our created action “A: Send alerts to Admin”. We wouldn’t want to run it like this in production, but for demonstration purposes, we have configured it to be triggered when the severity is Information or higher.

And lastly, we will see that we have also created new internal users. Navigate to “Administration” – “Users,” and you will see that we have created a user called “wdijkerman”, which can be found in the “/ansible/zabbix-server.yml” file on line 95. This user is part of a group created earlier called “ops”. The user type is Zabbix Super Admin, and we have configured the email media type to be used 24×7.

We have defined a default password for this user: “password”. When you have changed the password in the Zabbix frontend UI, executing the playbook will not change the password back to “password”, so don’t worry about that. But if you had removed, let’s say, the “ops” group, then when you execute the playbook again, the group will be re-added to the user.

Summary

As you can see, it is effortless to create and configure a Zabbix environment with Ansible. We didn’t have to do anything manually; all installations and configurations were applied automatically when we executed the ansible-playbook command. You can find more information on either the Ansible page https://docs.ansible.com/ansible/latest/collections/community/zabbix/ or on the GitHub page https://github.com/ansible-collections/community.zabbix.

In the next post, we will install and configure the Zabbix Proxy.

Save 2 clicks, test data preprocessing

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/save-2-clicks-test-data-preprocessing/13249/

This topic is related to template development from scratch, bulk data input, and a lot of dependent items, each having different preprocessing steps.

If these keywords resonate with you, keep reading.

The story starts back in the day when a “Test now” button was introduced inside the item preprocessing section. With it, we can simulate the entire preprocessing stack. A very cool feature to have.

Nevertheless, we tend to copy the data input over and over again:

While this is fine for small projects with simple preprocessing steps that match our knowledge level, it is not so OK when we have the ambition to solve the impossible: figuring out data preprocessing rule(s) which suit our needs.

For a template development process, the solution is to skip data input and inject a static value in the very first preprocessing step. Let me introduce the concept.

JavaScript preprocessing step 1:

return 'this is input text';

JavaScript preprocessing step 2:

return value.replace("text","data");

Now we have static input, no need to spend time to “click” the input data.

Sometimes the input is not just one line but multiple lines, with tabs, spaces, double quotes, single quotes, and special characters. To respect all these things, we must get our hands dirty with the base64 format.

To prepare input data as a base64 string on Windows systems, it can easily be done with Notepad++. Just select all the text and choose “Plugin commands” => “Base64 Encode” (this functionality is not included in the lite version of Notepad++):

After that, we need to copy all the content to the clipboard:

Create the first JavaScript preprocessing step with the content from the clipboard. Here is the same example:

return 'PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTE2Ij8+DQo8am9ibG9nPg0KICA8am9iX2xvZ192ZXJzaW9uIHZlcnNpb249IjIuMCIvPg0KICA8aGVhZGVyPg0KICAgIDxzdGFydF90aW1lPkpvYiBzdGFydGVkOiBNb25kYXksIEF1Z3VzdCAxMCwgMjAyMCBhdCAxOjAwOjA1IFBNPC9zdGFydF90aW1lPg0KICA8L2hlYWRlcj4NCiAgPGZvb3Rlcj4NCiAgICA8ZW5kX3RpbWU+Sm9iIGVuZGVkOiBNb25kYXksIEF1Z3VzdCAxMCwgMjAyMCBhdCAzOjE3OjUwIFBNPC9lbmRfdGltZT4NCiAgICA8T3BlcmF0aW9uRXJyb3JzIFR5cGU9ImpvYmZ0cl9qb2Jjb21wbF9zaHV0ZG93biI+Sm9iIGNvbXBsZXRpb24gc3RhdHVzOiBDYW5jZWxlZCBieSBzZXJ2aWNlIHNodXRkb3duPC9PcGVyYXRpb25FcnJvcnM+DQogICAgPGNvbXBsZXRlU3RhdHVzPjE8L2NvbXBsZXRlU3RhdHVzPg0KICAgIDxhYm9ydFVzZXJOYW1lPlRoZSBqb2Igd2FzIGNhbmNlbGVkIGJlY2F1c2UgdGhlIHJlc3BvbnNlIHRvIGEgbWVkaWEgcmVxdWVzdCBhbGVydCB3YXMgQ2FuY2VsLCBvciBiZWNhdXNlIHRoZSBhbGVydCB3YXMgY29uZmlndXJlZCB0byBhdXRvbWF0aWNhbGx5IHJlc3BvbmQgd2l0aCBDYW5jZWwsIG9yIGJlY2F1c2UgdGhlIEJhY2t1cCBFeGVjIEpvYiBFbmdpbmUgc2VydmljZSB3YXMgc3RvcHBlZC48L2Fib3J0VXNlck5hbWU+DQogIDwvZm9vdGVyPg0KPC9qb2Jsb2c+DQo=';

In the next step, the decoding must take place. Kindly copy the code 1:1 and configure it as a second preprocessing step:

var k = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
function d(e) {
    var t, n, o, r, a = "",
        i = "",
        c = "",
        l = 0;
    for (/[^A-Za-z0-9+/=]/g.exec(e) && alert("1"), e = e.replace(/[^A-Za-z0-9+/=]/g, ""); t = k.indexOf(e.charAt(l++)) << 2 | (o = k.indexOf(e.charAt(l++))) >> 4, n = (15 & o) << 4 | (r = k.indexOf(e.charAt(l++))) >> 2, i = (3 & r) << 6 | (c = k.indexOf(e.charAt(l++))), a += String.fromCharCode(t), 64 != r && (a += String.fromCharCode(n)), 64 != c && (a += String.fromCharCode(i)), t = n = i = "", o = r = c = "", l < e.length;);
    return unescape(a)
}
return d(value);

This is how it looks:

Go to the testing section and ensure the data in Zabbix looks the same as it did in Notepad++:

Data has been successfully decoded. Multiple lines, quite original stuff. The tabs are not visible to the naked eye, but they are there, I promise!

Now we can “play” with the next preprocessing steps and try out different things:

When one preprocessing sequence has been figured out, just clone the item and start developing the next one. Of course, if we succeed in our ambition, we will need to spend 5 minutes going through all the items, removing the first 2 steps and linking each item to the master key 😉

Ok. That is it for today. Bye.

By the way, on a Linux system, to get a base64 string we only need:

  1. A command where the output entertains us
  2. Pipe it to ‘base64 -w0’
systemctl list-unit-files --type=service | base64 -w0

What takes disk space

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/what-takes-disk-space/13349/

In today’s class, let’s talk about where the disk space goes: which item and host objects consume the most disk space.

The post will cover things like:

  • Biggest tables in a database
  • Biggest data coming into the instance right now
  • Biggest data inside one partition of a DB table
  • Hosts and items which consume the most disk space

Biggest tables

In general, the leading tables are:

history
history_uint
history_str
history_text
history_log
events

‘history_uint’ stores integers, while ‘history’ stores decimal numbers.
‘history_str’, ‘history_text’, and ‘history_log’ store textual data.
The ‘events’ table holds problem events, internal events, agent auto-registration events, and discovery events.

Have a look for yourself at which tables take the most space in your database. On MySQL:

SELECT table_name,
       table_rows,
       data_length,
       index_length,
       round(((data_length + index_length) / 1024 / 1024 / 1024),2) "Size in GB"
FROM information_schema.tables
WHERE table_schema = "zabbix"
ORDER BY round(((data_length + index_length) / 1024 / 1024 / 1024),2) DESC
LIMIT 8;

On PostgreSQL:

SELECT *, pg_size_pretty(total_bytes) AS total , pg_size_pretty(index_bytes) AS index ,
       pg_size_pretty(toast_bytes) AS toast , pg_size_pretty(table_bytes) AS table
FROM (SELECT *, total_bytes-index_bytes-coalesce(toast_bytes, 0) AS table_bytes
   FROM (SELECT c.oid,
             nspname AS table_schema,
             relname AS table_name ,
             c.reltuples AS row_estimate ,
             pg_total_relation_size(c.oid) AS total_bytes ,
             pg_indexes_size(c.oid) AS index_bytes ,
             pg_total_relation_size(reltoastrelid) AS toast_bytes
      FROM pg_class c
      LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE relkind = 'r' ) a) a;

Detect big data coming into the instance right now

Analyze the ‘history_log’ table for the last 30 minutes:

SELECT hosts.host,items.itemid,items.key_,
COUNT(history_log.itemid)  AS 'count', AVG(LENGTH(history_log.value)) AS 'avg size',
(COUNT(history_log.itemid) * AVG(LENGTH(history_log.value))) AS 'Count x AVG'
FROM history_log 
JOIN items ON (items.itemid=history_log.itemid)
JOIN hosts ON (hosts.hostid=items.hostid)
WHERE clock > UNIX_TIMESTAMP(NOW() - INTERVAL 30 MINUTE)
GROUP BY hosts.host,history_log.itemid
ORDER BY 6 DESC
LIMIT 1\G

With PostgreSQL:

SELECT hosts.host,history_log.itemid,items.key_,
COUNT(history_log.itemid) AS "count", AVG(LENGTH(history_log.value))::NUMERIC(10,2) AS "avg size",
(COUNT(history_log.itemid) * AVG(LENGTH(history_log.value)))::NUMERIC(10,2) AS "Count x AVG"
FROM history_log 
JOIN items ON (items.itemid=history_log.itemid)
JOIN hosts ON (hosts.hostid=items.hostid)
WHERE clock > EXTRACT(epoch FROM NOW()-INTERVAL '30 MINUTE')
GROUP BY hosts.host,history_log.itemid,items.key_
ORDER BY 6 DESC
LIMIT 5
\gx

Re-run the same query but replace ‘history_log’ (in all places) with ‘history_text’ or ‘history_str’.

Which hosts consume the most space

This is a very heavy query, so we will go back one day in time and analyze only 6 minutes of data:

SELECT ho.hostid, ho.name, count(*) AS records, 
(count(*)* (SELECT AVG_ROW_LENGTH FROM information_schema.tables 
WHERE TABLE_NAME = 'history_text' and TABLE_SCHEMA = 'zabbix')/1024/1024) AS 'Total size average (Mb)', 
sum(length(history_text.value))/1024/1024 + sum(length(history_text.clock))/1024/1024 + sum(length(history_text.ns))/1024/1024 + sum(length(history_text.itemid))/1024/1024 AS 'history_text Column Size (Mb)'
FROM history_text
LEFT OUTER JOIN items i on history_text.itemid = i.itemid 
LEFT OUTER JOIN hosts ho on i.hostid = ho.hostid 
WHERE ho.status IN (0,1)
AND clock > UNIX_TIMESTAMP(now() - INTERVAL 1 DAY - INTERVAL 6 MINUTE)
AND clock < UNIX_TIMESTAMP(now() - INTERVAL 1 DAY)
GROUP BY ho.hostid
ORDER BY 4 DESC
LIMIT 5\G

If the “6-minute query” completes in a relatively good time frame, try “INTERVAL 60 MINUTE”.
If “INTERVAL 60 MINUTE” works well, try “INTERVAL 600 MINUTE”.

Analyze in partition level (MySQL)

On MySQL, if database table partitioning is enabled, we can list the biggest partitions on the filesystem (run this inside the MySQL data directory, e.g. /var/lib/mysql/zabbix):

ls -lh history_log#*

It will print:

-rw-r-----. 1 mysql mysql  44M Jan 24 20:23 history_log#p#p2021_02w.ibd
-rw-r-----. 1 mysql mysql  24M Jan 24 21:20 history_log#p#p2021_03w.ibd
-rw-r-----. 1 mysql mysql 128K Jan 11 00:59 history_log#p#p2021_04w.ibd

From the previous output, we can take the partition name ‘p2021_02w’ and use it in a query:

SELECT ho.hostid, ho.name, count(*) AS records, 
(count(*)* (SELECT AVG_ROW_LENGTH FROM information_schema.tables 
WHERE TABLE_NAME = 'history_log' and TABLE_SCHEMA = 'zabbix')/1024/1024) AS 'Total size average (Mb)', 
sum(length(history_log.value))/1024/1024 + 
sum(length(history_log.clock))/1024/1024 +
sum(length(history_log.ns))/1024/1024 + 
sum(length(history_log.itemid))/1024/1024 AS 'history_log Column Size (Mb)'
FROM history_log PARTITION (p2021_02w)
LEFT OUTER JOIN items i on history_log.itemid = i.itemid 
LEFT OUTER JOIN hosts ho on i.hostid = ho.hostid 
WHERE ho.status IN (0,1)
GROUP BY ho.hostid
ORDER BY 4 DESC
LIMIT 10;

You can reproduce a similar scenario for the other big tables by listing:

ls -lh history_text#*
ls -lh history_str#*

Free up disk space (MySQL)

Deleting a host in the GUI will not free up disk space on MySQL. It will only create empty rows in the table where new data can be inserted. If you want to really free up disk space, you can rebuild a partition. First, list all possible partition names:

SHOW CREATE TABLE history\G

To rebuild partition:

ALTER TABLE history REBUILD PARTITION p202101160000;

Free up disk space (PostgreSQL)

On PostgreSQL, there is a process which is responsible for vacuuming the tables. To ensure a vacuum has been done lately, kindly run:

SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_all_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC;

In the output, we look at ‘n_dead_tup’, which means dead tuples.
If the last autovacuum has not occurred in the last 10 days, that’s bad, and we have to install a different configuration. We can increase the vacuum priority with the following settings:

vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.01
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = 3000
autovacuum_max_workers = 6
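
These settings typically go into postgresql.conf. Most of them can be applied with a configuration reload, but autovacuum_max_workers only takes effect after a restart (the service name may differ per distribution):

# apply the reloadable settings
psql -U postgres -c "SELECT pg_reload_conf();"

# autovacuum_max_workers requires a full restart
systemctl restart postgresql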

Alright. That is it for today.

Getting your notifications via Signal

Post Syndicated from Brian van Baekel original https://blog.zabbix.com/getting-your-notifications-via-signal/13286/

Recently, WhatsApp pushed their new privacy policy, announcing that they will share more data with Facebook and causing an exodus to other platforms, of which Signal is one of the more popular ones, along with Telegram. Both are great alternatives, but I prefer Signal because it is open source, offers end-to-end encryption, and, last but not least, because of its business model (living on donations instead of selling your data).

Typically, Zabbix sends notifications to whatever medium you’ve chosen when a problem is detected. We all know the email messages, the various webhook integrations with Slack/MS Teams/Jira, etc., and perhaps even some text message integrations. Now, if we’re migrating to Signal, we suddenly have access to the Signal API and can utilize it to receive Zabbix notifications. Nice!

There is only one drawback. You need a separate phone number to register against Signal. Don’t use your own phone number – unless you want to lose the ability to use Signal ;(

There are various ways to get a phone number for this purpose:

  • Use the phone number of your current SMS gateway
  • Use the company phone number (a lot of cloud PBXs provide the option to receive the verification message)
  • Purchase a prepaid phone number.
  • Use a service like Twilio

You just need to receive one text message; the rest of the communication will go via the internet.

Time to get rid of Whatsapp and move to Signal! But… How to use Signal to get your notifications?

Signal-cli

Although we could build everything from scratch, talking to the Signal API directly, there is a nice implementation available that lets us talk to Signal within a few minutes: signal-cli.

Although the GitHub page is very comprehensive when it comes to getting signal-cli installed, it of course does not cover anything Zabbix-specific.

Configuration tasks

For this guide, we’re using:

  • Centos 8
  • Zabbix 5.2

signal-cli installation

First, let’s install the signal-cli utility. To do so, we need to resolve its Java dependency by installing OpenJDK:

dnf -y install java-11-openjdk-devel.x86_64

After this installation, we should be good to continue with the installation of signal-cli. According to their installation guide, this should be sufficient:

export VERSION="0.7.3"
wget https://github.com/AsamK/signal-cli/releases/download/v"${VERSION}"/signal-cli-"${VERSION}".tar.gz
sudo tar xf signal-cli-"${VERSION}".tar.gz -C /opt
sudo ln -sf /opt/signal-cli-"${VERSION}"/bin/signal-cli /usr/local/bin/

At the time of writing, the most recent version is 0.7.3, and that’s what we’re installing here. If in the future a new version is released, of course you should install that!

If everything went as expected, we should be able to register ourselves with Signal.

signal-cli registration

Since these commands will be executed by Zabbix, we must make sure the registration is done as the correct user on the Zabbix server; otherwise, you will get the following error message:

Unregistered user error

(ERROR App – User +19293771253 is not registered.)

In order to prevent this error, let’s do the authentication against Signal as the zabbix user:

Important: The USERNAME (your phone number) must include the country calling code, i.e. the number must start with a “+” sign, and you must replace everything between the < > in the following examples with your own values.

runuser -l zabbix -c 'signal-cli -u <NUMBER> register'

Now, check for incoming text messages on this phone number. Within seconds you should receive a 6-digit code in the following format: xxx-xxx

Once you’ve received the text, it’s time to complete the registration:

runuser -l zabbix -c 'signal-cli -u <NUMBER> verify <CODE>'

Since we’re running these commands as a different user, we won’t see their output. Let’s just test!

Sending messages from the command line is straightforward:

runuser -l zabbix -c 'signal-cli -u <NUMBER> send -m <MESSAGE> <RECEIVER NUMBER>'

You will see the message id as output. Simply ignore it, since it’s not relevant at this point.

Within seconds:

It works! Great.

So now we’ve got this part covered, time to get the AlertScript set up, before heading to the frontend.

Zabbix AlertScript setup

OK, so now that we’ve got the registration done, we need to make sure Zabbix can utilize it. In order to do so, we use a very old method: AlertScripts. Although it would have made more sense to use the webhook option, that would have meant building the communication with Signal from scratch.

So AlertScripts it is. In your terminal/SSH session with the Zabbix server, open a new file with this command: vi /usr/lib/zabbix/alertscripts/signal.sh and insert the following contents:

#!/bin/bash
# $1 = message body ({ALERT.MESSAGE}), $2 = receiver number ({ALERT.SENDTO})
# Replace the sender number below with the number you registered earlier.
signal-cli -u '+19293771253' send -m "$1" $2

That’s right, the script is just a few lines. After saving the file, change the owner and set the permissions:

chown zabbix:zabbix /usr/lib/zabbix/alertscripts/signal.sh
chmod 700 /usr/lib/zabbix/alertscripts/signal.sh
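
Before wiring it into the frontend, it may be worth testing the script manually as the zabbix user (replace <RECEIVER NUMBER> with your own value):

runuser -l zabbix -c '/usr/lib/zabbix/alertscripts/signal.sh "Test from alertscript" <RECEIVER NUMBER>'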

Once that works, it’s time to move to our frontend.

Zabbix mediatype configuration

In the frontend, go to Administration -> Media types and create a new media type:

Signal Mediatype

Name: Signal
Type: Script
Script name: signal.sh
Script parameters:
    {ALERT.MESSAGE}
    {ALERT.SENDTO}

Don’t forget to configure some message templates as well (the second tab in the media type configuration). You can just use the defaults by clicking ‘Add’.

Zabbix media configuration

Next step. Navigate to Administration -> Users (or just open your own user profile) and create a new media:

new-media

Type: Signal
Sendto: <your number>
When active / severity as per needs

Important: The USERNAME (your phone number) must include the country calling code, i.e. the number must start with a “+” sign

We’re almost there; just some configuration of the actions is left.

Zabbix action configuration

This step is only needed if you are currently sending notifications via a specific media type. If you configured the ‘Send only to’ option as ‘- All -’, there is nothing to change, and it will work straight away!

Otherwise, navigate to Configuration -> Actions, find the action you want to change, and in the Operations, Recovery operations, and Update operations, change the ‘Send only to’ option to ‘Signal’.

Save your action, and it’s time to test: generate a problem to confirm that the implementation actually works.

Wrap up

That’s it. By now you should have a working implementation where Zabbix sends notifications to Signal. The setup was extremely straightforward and easy to configure. Nevertheless, if you need help getting this going, we (Opensource ICT Solutions) offer consultancy services as well, and are more than happy to help you out!

 

Examine Data Overview

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/examine-data-overview/13225/

In this lab, let’s practice creating an on-screen report of the data (the most recent metrics) that is most important to us.

This post presents one technique to get more out of the functionality under:
“Monitoring” => “Overview”.

To create a report of the things you fancy, we need to somehow mark those things: we need to mark items as belonging to a specific application. The best way is to modify the name of an existing application and add some extra keywords inside. Please don’t create a second application; I will explain later why not to do so.

Here is a thought process of how to mark items under a single application.

Sample 1:

Total Memory
Total amount of CPU cores

Sample 2:

Current usage CPU
Current usage Memory

Sample 3:

TCP state ESTABLISHED
TCP state LISTEN
TCP state TIME_WAIT
...

It’s always only one application. Notice that each group has a common keyword: “Total”, “Current usage”, “TCP state”.

Now to list the data coming from a specific application:

  1. “Monitoring” => “Overview”
  2. Select “Data overview”
  3. Pick a “Host groups”
  4. Set an “Application”
  5. On the right top corner set Hosts location: “Left”
  6. Apply

It is always quite challenging to think of a naming system which is very independent and not overlapping. Good luck and keep “challenge accepted” running in your heart.

Of course, you can create an “extra” application for each item, for example, an application “Overview1”, but that will create a duplicate entry while browsing data under:
“Monitoring” => “Latest data”.

It’s possible to run into a limitation on the “Data overview” page if there are more than 50 entries to show. We will see this message at the bottom of the page:

Not all results are displayed. Please provide more specific search criteria.

To solve this problem, starting with 5.2 there is an option to configure the limit (the default is 50):

On version 5.0, to customize this, we have to modify ‘defines.inc.php’:

# cd /usr/share/zabbix/include
# grep ZBX_MAX_TABLE_COLUMNS defines.inc.php
define('ZBX_MAX_TABLE_COLUMNS', 50);
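
For example, to allow up to 100 columns, that define can be raised like this (keep in mind that a frontend upgrade may overwrite this file):

define('ZBX_MAX_TABLE_COLUMNS', 100);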

Summarize devices that are not reachable

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/summarize-devices-that-are-not-reachable/13219/

In this lab, we will list all devices which are not reachable by the monitoring tool. This is good when we want to improve the overall monitoring experience and decrease the queue size (metrics which have not arrived at the instance).

Tools required for the job: Access to a database server or a Windows computer with PowerShell

To summarize devices that are not reachable at the moment, we can use a database query. Tested and working on 4.0 and 5.0, on both MySQL and PostgreSQL:

SELECT hosts.host,
       interface.ip,
       interface.dns,
       interface.useip,
       CASE interface.type
           WHEN 1 THEN 'ZBX'
           WHEN 2 THEN 'SNMP'
           WHEN 3 THEN 'IPMI'
           WHEN 4 THEN 'JMX'
       END AS "type",
       hosts.error
FROM hosts
JOIN interface ON interface.hostid=hosts.hostid
WHERE hosts.available=2
  AND interface.main=1
  AND hosts.status=0;

A very similar (but not exactly the same) outcome can be obtained via Windows PowerShell by contacting the Zabbix API. Try this snippet:

$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("Content-Type", "application/json")
$url = 'http://192.168.1.101/api_jsonrpc.php'
$user = 'api'
$password = 'zabbix'

# authorization
$key = Invoke-RestMethod $url -Method 'POST' -Headers $headers -Body "
{
    `"jsonrpc`": `"2.0`",
    `"method`": `"user.login`",
    `"params`": {
        `"user`": `"$user`",
        `"password`": `"$password`"
    },
    `"id`": 1
}
" | foreach { $_.result }
echo $key

# filter out unreachable Agent, SNMP, JMX, IPMI hosts
Invoke-RestMethod $url -Method 'POST' -Headers $headers -Body "
{
    `"jsonrpc`": `"2.0`",
    `"method`": `"host.get`",
    `"params`": {
        `"output`": [`"interfaces`",`"host`",`"proxy_hostid`",`"disable_until`",`"lastaccess`",`"errors_from`",`"error`"],
        `"selectInterfaces`": `"extend`",
        `"filter`": {`"available`": `"2`",`"status`":`"0`"}
    },
    `"auth`": `"$key`",
    `"id`": 1
}
" | foreach { $_.result }  | foreach { $_.interfaces } | Out-GridView

# log out
Invoke-RestMethod $url -Method 'POST' -Headers $headers -Body "
{
    `"jsonrpc`": `"2.0`",
    `"method`": `"user.logout`",
    `"params`": [],
    `"id`": 1,
    `"auth`": `"$key`"
}
"

Set a valid credential (URL, username, password) on the top of the code before executing it.

The benefit of PowerShell here is that we can use some on-the-fly filtering:

We can understand the exact meaning of the field ‘type’ by looking at the previous database query:

       CASE interface.type
           WHEN 1 THEN 'ZBX'
           WHEN 2 THEN 'SNMP'
           WHEN 3 THEN 'IPMI'
           WHEN 4 THEN 'JMX'
       END AS "type",

On Windows PowerShell, it is possible to save the unreachable hosts directly to a CSV file. To do that, in the code above, we need to change:

Out-GridView

to

Export-Csv c:\temp\unavailable.hosts.csv

Alright, this was the knowledge bit today. Let’s keep Zabbixing!

Staying up to date when using official Zabbix packages

Post Syndicated from Jurijs Klopovskis original https://blog.zabbix.com/staying-up-to-date-when-using-official-zabbix-packages/12806/

It is not a secret that Zabbix maintains package repositories for multiple GNU/Linux distributions to make installing the software and staying up to date with the latest releases as easy as possible. To make use of the official Zabbix packages, one should follow the instructions on https://www.zabbix.com/download. In this article, we would like to talk about some common points of confusion that people have when using Zabbix packages.

Being a Zabbix package maintainer, I often notice that people are confused about which packages are provided for which operating system. That’s why we have created a table that gives users at-a-glance info about package availability by operating system. Furthermore, we would like to clarify certain specific issues to eliminate any potentially remaining misunderstandings. In particular, let’s address the issue of packages no longer being provided for certain operating systems.

It is important to understand that Zabbix packages depend on other packages provided by the operating system. Whether those are web server and PHP packages needed for the frontend or OpenSSL required pretty much by all other Zabbix components, Zabbix is limited by the versions of these packages that are shipped with the operating system, or by how up to date these packages are.

Any professional system administrator is familiar with the need to install the latest security updates as one of the central measures to keep their systems secure. Unless the system provides the necessary security updates, that system should not be used. But there are also other aspects besides security that should be taken into consideration.

One specific case that we would like to discuss is Red Hat Enterprise Linux 7. In fact, RHEL/CentOS 7 constitutes a large chunk of Zabbix installations.

Heads Up! The same packages are used for RHEL, CentOS & Oracle Linux, thus when RHEL is mentioned, CentOS is also implied.

As many of you may have noticed, only the zabbix-agent, zabbix-sender & zabbix-get packages were provided for RHEL 7 when version 5.2 was released. What’s the deal?

Red Hat backports security fixes for older packages, and this is awesome. Despite that, the essential packages that Zabbix uses as dependencies are tremendously old.
Case in point, RHEL 7 ships with:

  • PHP 5.4.16
  • MariaDB 5.5.68 & PostgreSQL 9.2.24
  • OpenSSL 1.0.2k

Let’s talk about these in detail.

PHP 5.4

Starting with version 5.0, Zabbix frontend requires PHP version 7.2 or higher. Simply put, our frontend developers needed to make use of the new PHP features to improve the user experience. Also, 7.2 was the oldest supported version in the upstream.

Quite expectedly, this caused some problems when packaging Zabbix for RHEL 7, due to the distribution shipping PHP version 5.4. At first, the idea was to drop support for the 5.0 frontend on RHEL 7 altogether, but after consulting with the support team, it was decided to find a workaround to keep providing these packages somehow.

Enter Red Hat Software Collections. Instead of being dropped completely, Zabbix 5.0 frontend packages were based on PHP 7.2 found in RH SCL. The day was saved, but in the end, this still was not the cleanest solution. A lot of things had to be altered from the way they are usually done. Changes had to be made to configuration files and user instructions. The repository structure was altered and frontend-related packages were renamed to include the “scl” suffix to reflect the changes. As a result, these changes made package maintenance pretty difficult for us. Furthermore, extra attention was required from the users when installing these packages and especially when updating from the previous versions.

As a side note, on Debian-based distros that have the same problem, the frontend package has been deprecated altogether.

Old Databases & OpenSSL

Secure connections to the database were introduced in 5.0; however, they do not work on RHEL 7.

Try it for yourself. Put the DBTLSConnect=required option into the /etc/zabbix/zabbix_server.conf file and try to restart the Zabbix server. It will fail with the following error:

"DBTLSConnect" configuration parameter cannot be used: Zabbix server was compiled without PostgreSQL or MySQL library version that support TLS

This happens due to RHEL 7 shipping old database packages. Yes, using RH SCL is possible but implementation would be an even bigger mess than what was required for making the 5.0 frontend work. Considering that RHEL 7 is on its way out, it takes just too much effort to implement and support.

Another issue is the fact that old OpenSSL packages prevent the use of TLS 1.3 among other things.

For example, add the TLSCipherPSK13=TLS_AES_128_GCM_SHA256 setting to /etc/zabbix/zabbix_proxy.conf and restart the proxy. You will get the following error in the proxy log file:

cannot set list of TLS 1.3 PSK ciphersuites: compiled with OpenSSL version older than 1.1.1. Consider not using parameters "TLSCipherPSK13" or "--tls-cipher13"

TLS 1.3 is fully supported in RHEL 8.

The usage of HashiCorp Vault can possibly be affected by the old OpenSSL version as well.

There are potentially other issues that haven’t been discovered yet. Because of the nature of the old packages on RHEL 7, it is hard to fully predict what can go wrong.

In conclusion

Taking into consideration all of the above, it was decided to not provide server and frontend packages for 5.2 on RHEL 7. We do understand that this is super-inconvenient for some people, but the truth is that this has to be done sooner or later. It could have been done in 5.4 or 6.0, but that is simply kicking the can further down the road. It is a painful, but necessary change.

Proxy packages for 5.2 will be provided to keep some backward compatibility, but keep in mind that a lot of the modern features will not work there, including:

  • No support for TLS 1.3
  • No support for encrypted database connections

And most importantly, support for proxy on RHEL 7 will be dropped in Zabbix 5.4!

Note
RHEL 7 support for existing Zabbix customers will still be provided.

In short: upgrade to RHEL 8. This will have to be done sooner or later. Do that and forget about this type of problem for the foreseeable future.

Note
We are aware of recent change in CentOS 8 lifecycle and are investigating its impact on Zabbix packages.

Of course, the cost of upgrading RHEL may be prohibitive. So, if the upgrade is impossible for one reason or another, which options are available?

  • Use container images. Probably the most progressive option of all. Zabbix has great container images. Consider using them, if using 5.2 is an impediment.
  • Use 5.0 LTS instead. Indeed, 5.0 packages are available for RHEL 7 and will be supported for some time. Despite the known problems, described above, this can be a great option.
  • Build from source. Of course, there is always the hard way. Grab the sources and build away. If you choose to go this route, then you must take into account the potential problems caused by old packages on the system.

Ultimately, we suggest thinking of this as a motivation to make an upgrade. If you really need new features of Zabbix, consider using an up-to-date operating system.

Close problem automatically via Zabbix API

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/close-problem-automatically-via-zabbix-api/12461/

Today we are talking about a use case where it’s impossible to find a proper way to write a recovery expression for a Zabbix trigger. In other words, we know how to identify problems, but there is no good way to detect when the problem is gone.

This mostly relates to huge environments, for example:

  • Got one log file. There are hundreds of patterns inside. We respect all of them. We need them
  • SNMP trap item (snmptrap.fallback) with different patterns being written inside

In these situations, the trigger is most likely configured to “Event generation mode: Multiple.” This practically means: when a “problematic metric” hits the instance, it will open +1 additional problem.

Goal:
I just need to receive an email about the record, then close the event.

As a workaround (let’s call it a solution here), we can define an action which will:

  1. contact an API endpoint
  2. manually acknowledge the event and close it

The reason this functionality is possible is that when an event hits the action, the operation actually knows the event ID of the problem. The macro {EVENT.ID} saves the day.

To solve the problem, we need to define the API connection details as global macros:

     {$Z_API_PHP}=http://127.0.0.1/api_jsonrpc.php
    {$Z_API_USER}=api
{$Z_API_PASSWORD}=zabbix

NOTE
‘http://127.0.0.1/api_jsonrpc.php’ means the frontend runs on the same server as systemd:zabbix-server. If that is not the case, we need to use the frontend address of the Zabbix GUI and append ‘api_jsonrpc.php’.

We will have 2 actions. The first one will deliver a notification to email:

After 1 minute, a second action will close the event:

This is a full bash snippet we must put inside. No need to change anything. It works with copy and paste:

url={$Z_API_PHP}
user={$Z_API_USER}
password={$Z_API_PASSWORD}

# authorization
auth=$(curl -sk -X POST -H "Content-Type: application/json" -d "
{
	\"jsonrpc\": \"2.0\",
	\"method\": \"user.login\",
	\"params\": {
		\"user\": \"$user\",
		\"password\": \"$password\"
	},
	\"id\": 1,
	\"auth\": null
}
" $url | \
grep -E -o "([0-9a-f]{32,32})")

# acknowledge and close event
curl -sk -X POST -H "Content-Type: application/json" -d "
{
	\"jsonrpc\": \"2.0\",
	\"method\": \"event.acknowledge\",
	\"params\": {
		\"eventids\": \"{EVENT.ID}\",
		\"action\": 1,
		\"message\": \"Problem resolved.\"
	},
	\"auth\": \"$auth\",
	\"id\": 1
}" $url

# close api key
curl -sk -X POST -H "Content-Type: application/json" -d "
{
    \"jsonrpc\": \"2.0\",
    \"method\": \"user.logout\",
    \"params\": [],
    \"id\": 1,
    \"auth\": \"$auth\"
}
" $url

Zabbix API scripting via curl and jq

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/zabbix-api-scripting-via-curl-and-jq/12434/

In this lab, we will use a bash environment and the utilities ‘curl’ and ‘jq’ to perform Zabbix API calls and do some scripting.

‘curl’ is a tool to exchange JSON messages over HTTP/HTTPS.
The ‘jq’ utility helps to locate and extract specific elements from the output.
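
For example, extracting a single field from a JSON string:

echo '{"jsonrpc":"2.0","result":"0424bd59b807674191e7d77572075f33","id":1}' | jq -r '.result'
# prints: 0424bd59b807674191e7d77572075f33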

To follow the lab we need to install ‘jq’:

# On CentOS7/RHEL7:
yum install epel-release && yum install jq

# On CentOS8/RHEL8:
dnf install jq

# On Ubuntu/Debian:
apt install jq

# On any 64-bit Linux platform:
curl -skL "https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64" -o /usr/bin/jq && chmod +x /usr/bin/jq

Obtaining an authorization token

In order to operate with API calls we need to:

  • Define an API endpoint. This is a URL, a PHP file which is designed to accept requests
  • Obtain an authorization token

If you execute API calls from the frontend server, then most likely the endpoint is one of the following:

url=http://127.0.0.1/api_jsonrpc.php
# or:
url=http://127.0.0.1/zabbix/api_jsonrpc.php

Setting the url variable is required to jump to the next step. Test whether you have it configured:

echo $url

Every API call needs to be made with an authorization token. To put a token in a variable, use the command:

auth=$(curl -s -X POST -H 'Content-Type: application/json-rpc' \
-d '
{"jsonrpc":"2.0","method":"user.login","params":
{"user":"api","password":"zabbix"},
"id":1,"auth":null}
' $url | \
jq -r .result
)

Note
Notice there is user ‘api’ with password ‘zabbix’. This is a dedicated user for API calls.

Check if you have a session key. It should be a 32-character hex string:

echo $auth

Thought process

1) Visit the documentation page and pick an API method, for example alert.get:

{
"jsonrpc": "2.0",
"method": "alert.get",
"params": {
	"output": "extend",
	"actionids": "3"
},
"auth": "038e1d7b1735c6a5436ee9eae095879e",
"id": 1
}

2) Let’s use our favorite text editor and its built-in Find&Replace functionality to escape all double quotes:

{
\"jsonrpc\": \"2.0\",
\"method\": \"alert.get\",
\"params\": {
	\"output\": \"extend\",
	\"actionids\": \"3\"
},
\"auth\": \"038e1d7b1735c6a5436ee9eae095879e\",
\"id\": 1
}

NOTE
Don’t even think of doing this process manually by hand!

3) Replace session key 038e1d7b1735c6a5436ee9eae095879e with our variable $auth

{
\"jsonrpc\": \"2.0\",
\"method\": \"alert.get\",
\"params\": {
	\"output\": \"extend\",
	\"actionids\": \"3\"
},
\"auth\": \"$auth\",
\"id\": 1
}

4) Now let’s encapsulate the API command with curl:

curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \

{
\"jsonrpc\": \"2.0\",
\"method\": \"alert.get\",
\"params\": {
	\"output\": \"extend\",
	\"actionids\": \"3\"
},
\"auth\": \"$auth\",
\"id\": 1
}

" $url

Executing the previous command should already print JSON content in response.
To make the output more beautiful, we can pipe it to jq .:

curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \

{
\"jsonrpc\": \"2.0\",
\"method\": \"alert.get\",
\"params\": {
	\"output\": \"extend\",
	\"actionids\": \"3\"
},
\"auth\": \"$auth\",
\"id\": 1
}

" $url | jq .

Wrap everything together in one file

This is a ready-to-use snippet:

#!/bin/bash

# 1. set connection details
url=http://127.0.0.1/api_jsonrpc.php
user=api
password=zabbix

# 2. get authorization token
auth=$(curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
 \"jsonrpc\": \"2.0\",
 \"method\": \"user.login\",
 \"params\": {
  \"user\": \"$user\",
  \"password\": \"$password\"
 },
 \"id\": 1,
 \"auth\": null
}
" $url | \
jq -r '.result'
)

# 3. show triggers in problem state
curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
 \"jsonrpc\": \"2.0\",
    \"method\": \"trigger.get\",
    \"params\": {
        \"output\": \"extend\",
        \"selectHosts\": \"extend\",
        \"filter\": {
            \"value\": 1
        },
        \"sortfield\": \"priority\",
        \"sortorder\": \"DESC\"
    },
    \"auth\": \"$auth\",
    \"id\": 1
}
" $url | \
jq -r '.result'

# 4. logout user
curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
    \"jsonrpc\": \"2.0\",
    \"method\": \"user.logout\",
    \"params\": [],
    \"id\": 1,
    \"auth\": \"$auth\"
}
" $url

Conveniences

We can use https://jsonpathfinder.com/ to identify the path needed to extract an element.

For example, to list all Zabbix proxies we will use an API call:

curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
    \"jsonrpc\": \"2.0\",
    \"method\": \"proxy.get\",
    \"params\": {
        \"output\": [\"host\"]
    },
    \"auth\": \"$auth\",
    \"id\": 1
} 
" $url

It may print content like:

{"jsonrpc":"2.0","result":[{"host":"broceni","proxyid":"10387"},{"host":"mysql8mon","proxyid":"12066"},{"host":"riga","proxyid":"12585"}],"id":1}

Inside JSONPathFinder, clicking an element in the right panel locates a sample of the element we need to extract:

It suggests the path ‘x.result[1].host’. To extract all elements, we can remove the index number and use ‘.result[].host’ like this:

curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
    \"jsonrpc\": \"2.0\",
    \"method\": \"proxy.get\",
    \"params\": {
        \"output\": [\"host\"]
    },
    \"auth\": \"$auth\",
    \"id\": 1
} 
" $url | jq -r '.result[].host'

Now it prints only the proxy names:

broceni
mysql8mon
riga
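
The same technique extends to multiple fields. For example, jq string interpolation can print each proxy id next to its name (a small sketch building on the call above):

curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
    \"jsonrpc\": \"2.0\",
    \"method\": \"proxy.get\",
    \"params\": {
        \"output\": [\"host\"]
    },
    \"auth\": \"$auth\",
    \"id\": 1
}
" $url | jq -r '.result[] | "\(.proxyid)\t\(.host)"'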

That is it for today. Bye.

Zabbix API calls through Postman

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/zabbix-api-calls-through-postman/12198/

Zabbix API calls can also be performed through a graphical user interface (GUI), with no need to jump into scripting. One application for performing API calls is Postman.

Benefits:

  • Available on Windows, Linux, and macOS
  • Save/synchronize your collections with your Google account
  • Copy and paste examples from the official documentation page

Let’s go through the basic steps of performing API calls:

1st step – Use the API method user.login with a dedicated username and password to obtain a session token:

{
    "jsonrpc": "2.0",
    "method": "user.login",
    "params": {
        "user": "api",
        "password": "zabbix"
    },
    "id": 1
}

This is how it looks in Postman:

NOTE
We recommend using a dedicated user for API calls, for example, a user called “api”. Make sure the user type is set to “Zabbix Super Admin” so that this user can access any type of information.

2nd step – Use the API method trigger.get to list all triggers in the problem state:

{
    "jsonrpc": "2.0",
    "method": "trigger.get",
    "params": {
        "output": [
            "triggerid",
            "description",
            "priority"
        ],
        "filter": {
            "value": 1
        },
        "sortfield": "priority",
        "sortorder": "DESC"
    },
    "auth": "<session key>",
    "id": 1
}

Replace “<session key>” inside the API snippet with the token obtained in step 1, then click the “Send” button. All triggers in the problem state will be listed on the right side:
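
For reference, if you later want to replay the same request outside Postman, an equivalent curl call might look like this (assuming the $url and $auth shell variables from the earlier scripting article; this is not part of the Postman workflow itself):

curl -s -X POST \
-H 'Content-Type: application/json-rpc' \
-d " \
{
    \"jsonrpc\": \"2.0\",
    \"method\": \"trigger.get\",
    \"params\": {
        \"output\": [\"triggerid\", \"description\", \"priority\"],
        \"filter\": {\"value\": 1},
        \"sortfield\": \"priority\",
        \"sortorder\": \"DESC\"
    },
    \"auth\": \"$auth\",
    \"id\": 1
}
" $url | jq .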

Postman conveniences – Environments

Environments are “a must” if you:

  • Have a separate test, development, and production Zabbix instance
  • Plan to migrate Zabbix to the next version (for example, 4.0 to 5.0), so it’s better to test all API calls beforehand

In the top right corner, there is a Manage Environments button. Let’s click it.

Now Create an environment:

Each environment must define a url and an auth key:

Now we have one definition, prod. We can close the window with [X]:

To work with your new environment, select the newly created profile prod. Substitute the Zabbix API endpoint with {{url}} and insert {{auth}} to serve as a dynamic authorization key:

NOTE
Whenever an API procedure stops working, all we need to do is enter the Manage environments section and set a new session token.

Topic in video format:
https://youtu.be/B14tsDUasG8?t=2513

Why Zabbix throttling preprocessing is a key point for high-frequency monitoring

Post Syndicated from Dmitry Lambert original https://blog.zabbix.com/why-zabbix-throttling-preprocessing-is-a-key-point-for-high-frequency-monitoring/12364/

Sometimes we need much more than collecting generic data from our servers or network devices. For high-frequency monitoring, we need functionality that offloads the core components from the extensive load. Throttling is exactly what allows you to drop repetitive values at the preprocessing level and collect only changing values.

Contents

I. High-frequency monitoring (0:33)

1. High-frequency monitoring issues (2:25)
2. Throttling (5:55)

Throttling has been available since Zabbix 4.2 and is highly effective for high-frequency monitoring.

High-frequency monitoring

We have to set update intervals for all of the items we create in Configuration > Hosts > Items > Create item.

Setting update interval

The smallest update interval for regular items in Zabbix is one second. If we want to monitor all items, including memory usage, network bandwidth, or CPU load once per second, this can be considered a high-frequency interval. However, in the case of industrial equipment or telemetry data, we’ll most likely need the data more often, for instance, every 1 millisecond.

The easiest way to send data every millisecond is to use Zabbix sender — a small utility to send values to the Zabbix server or the proxy. But first, these values should be gathered.

High-frequency monitoring issues

Selecting an update interval for different items

We have to think about performance, as the more data we have, the more performance issues will arise and the more powerful hardware we’ll have to buy.

If the data grabbed from a host is constantly changing, it makes sense to collect it every 10 or 100 milliseconds, for instance. This means that every time we receive a new value, we have to process it with the triggers, store it in the database, and visualize it in Latest data.

There are also values that do not tend to change very frequently, but without throttling we would still collect a new value every millisecond and process it with all our triggers and internal processes, even if the value does not change for hours.

Throttling

The best way to solve this problem is throttling.

To illustrate it, in Configuration > Hosts, let’s create a ‘Throttling‘ host and add it to a group.

Creating host

Then we’ll create an item to work as a Zabbix sender item.

Creating Zabbix sender item

NOTE. For a Zabbix sender item, the Type should always be ‘Zabbix trapper’.

Then open the CLI and reload the config cache:

zabbix_server -R config_cache_reload

Now we can send values with zabbix_sender, specifying the IP address of the Zabbix server, the hostname (which is case-sensitive), the key, and finally the value, 1:

zabbix_sender -z 127.0.0.1 -s Throttling -k youtube -o 1

If we send the value “1” several times, all of them will be displayed in Monitoring > Latest data.

Displaying the values grabbed from the host

NOTE. It’s possible to filter the Latest data to display only the needed host and set a sufficient range of the last values to be displayed.
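
To see this spamming effect for yourself, a quick loop will do (a sketch reusing the host and key from above; the iteration count is arbitrary):

# send the identical value ten times, one second apart
for i in $(seq 1 10); do
  zabbix_sender -z 127.0.0.1 -s Throttling -k youtube -o 1
  sleep 1
done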

Using this method, we are spamming the Zabbix server with identical values. To avoid this, we can add throttling in the Pre-processing tab of the item’s settings in Configuration > Hosts.

NOTE. There are no other parameters to configure besides this Pre-processing step from the throttling menu.

Discard unchanged

Discard unchanged throttling option

With the ‘Discard unchanged‘ throttling option, only new values will be processed by the server, while identical values will be ignored.

Throttling ignores identical values

Discard unchanged with a heartbeat

If we change the preprocessing settings for our item in the Pre-processing tab in Configuration > Hosts to ‘Discard unchanged with a heartbeat‘, we have one additional parameter to specify: the interval at which to send values even if they are identical.

Discard unchanged with a heartbeat

So, if we specify 120 seconds, then in Monitoring > Latest data we’ll get the values once every 120 seconds, even if they are identical.

Displaying identical values with an interval

This distinction matters when we have nodata() triggers. With the ‘Discard unchanged‘ throttling option, nodata() triggers will fire because identical data is dropped. If we use ‘Discard unchanged with heartbeat‘, even identical values are still sent at the heartbeat interval, so the trigger won’t fire.

In simpler words, the ‘Discard unchanged‘ throttling option drops all identical values, while ‘Discard unchanged with heartbeat‘ still passes identical values at the specified interval.
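
To make that concrete, a hypothetical nodata() trigger for the item above could look like this (pre-Zabbix 5.4 expression syntax; the 300-second window is just an example):

{Throttling:youtube.nodata(300)}=1

With ‘Discard unchanged‘, no value arrives for 300 seconds once the data stops changing, so this trigger fires; with ‘Discard unchanged with a heartbeat‘ of 120 seconds, a value arrives at least every 120 seconds and the trigger stays quiet.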

Watch the video.

 

Our Top 4 Favorite Google Chrome DevTools Tips & Tricks

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/our-top-4-favorite-google-chrome-devtools-tips-tricks/

Welcome to the final installment of our 3-part series on Google Chrome’s DevTools. In part 1 and part 2, we gave an introduction to using DevTools and showed how you can use it to diagnose SSL and security issues on your site. In this third and final part, we will share our 4 favourite tips and tricks to help you perform a variety of useful tasks efficiently with DevTools, without ever leaving your website!

Clearing All Site Data

Perhaps one of the most frustrating things when building your website is the occasional menace that is browser caching. If you’ve put a lot of time into building websites, you probably know the feeling of making a change and then wondering why your site still shows the old page after you refresh. Beyond that, there can be any number of other reasons why you may need to clear all of your site data. Commonly, one might be inclined to just flush all cookies and cache settings in their browser, wiping their history from every website. This can lead to being suddenly logged out of all of your usual haunts – a frustrating inconvenience if you’re ever in a hurry to get something done.

Thankfully, DevTools has a handy little tool that allows you to clear all data related to the site that you’re currently on, without wiping anything else.

  1. Open up DevTools
  2. Click on the “Application” tab. If you can’t see it, just increase the width of your DevTools or click the arrow to view all available tabs.
  3. Click on the “Clear storage” tab under the “Application” heading.
  4. You will see how much local disk usage that specific website is taking up on your computer. To clear it all, just click on the “Clear site data” button.

That’s it! Your cache, cookies, and all other associated data for that website will be wiped out, without losing cached data for any other website.

Testing Device Responsiveness

In today’s web, mobile devices make up more than half of all website traffic. That means it’s more important than ever to ensure your website is fully responsive and looking sharp across all devices, not just desktop computers. Chrome DevTools has an incredibly useful tool that lets you view your website as if you were viewing it on a mobile device.

  1. Open up DevTools.
  2. Click the “Toggle device toolbar” button on the top left corner. It looks like a tablet and mobile device. Alternatively, press Ctrl + Shift + M on Windows.
  3. At the top of your screen you should now see a dropdown of available devices to pick from, such as iPhone X. Selecting a device will adjust your screen’s ratio to that of the selected device.

Much easier than sneaking down to the Apple store to test out your site on every model of iPhone or iPad, right?

Viewing Console Errors

Sometimes you may experience an error on your site, and not know where to look for more information. This is where DevTools’ Console tab can come in very handy. If you experience any form of error on your site, you can likely follow these steps to find a lead on what to do to solve it:

  1. Open up DevTools
  2. Select the “Console” tab
  3. If your console logged any errors, you can find them here. You may see a 403 error, or a 500 error, etc. The console will generally log extra information too.

If you follow the above steps and you see a 403 error, you then know your action was not completed due to a permissions issue – which can get you started on the right track to troubleshooting the issue. Whatever the error(s) may be, there is usually a plethora of information available on potential solutions by individually researching those error codes or phrases on Google or your search engine of choice.

Edit Any Text On The Page

While you can right-click on text, choose “Inspect element”, and modify text that way, this alternative method allows you to modify any text on a website as if you were editing a regular document or a Photoshop file.

  1. Open up DevTools
  2. Go to the “Console” tab
  3. Copy and paste the following into the console and hit enter:
    1. document.designMode = "on"

Once that’s done, you can select any text on your page and edit it. This is actually one of the more fun DevTools features, and it can make testing text changes on your site an absolute breeze.
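
To turn editing back off without refreshing the page, you can flip the same flag in the console:

document.designMode = "off"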

Conclusion

This concludes our entire DevTools series! We hope you’ve enjoyed it and maybe picked up a few new tools along the way. Our series only really scratches the surface of what DevTools can be used for, but we hope this has offered a useful introduction to some of the types of things you can accomplish. If you want to keep learning, be sure to head over to Google’s Chrome DevTools documentation for so much more!


Diagnosing Security Issues with Google Chrome’s DevTools

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/diagnosing-security-issues-with-google-chromes-devtools/

Welcome to part 2 of our 3 part series on delving into some of the most useful features and functions of Google Chrome’s DevTools. In part 1, we went over a brief introduction of DevTools, plus some minor customisations. In this part 2, we’ll be taking a look into the security panel section of DevTools, including some of the different things you can look at when diagnosing a website or application for security and SSL issues.

The Security Panel

One of Chrome’s most helpful features has to be the security panel. To begin, visit any website through Google Chrome and open up DevTools, then select “Security” from the list of tabs at the top. If you can’t see it, you may need to click the two arrows to display more options or increase the width of DevTools.

Inspecting Your SSL Certificate

When we talk about security on websites, one of the first things that we usually would consider is the presence of an SSL certificate. The security tab allows us to inspect the website’s SSL certificate, which can have many practical uses. For example, when you visit your website, you may see a concerning red “Unsafe” warning. If you suspect that that may be something to do with your SSL certificate, it’s very likely that you’re correct. The problem is, the issue with your SSL certificate could be any number of things. It may be expired, revoked, or maybe no SSL certificate exists at all. This is where DevTools can come in handy. With the Security tab open, go ahead and click “View certificate” to inspect your SSL certificate. In doing so, you will be able to see what domain the SSL has been issued to, what certificate authority it was issued by, and its expiration date – among various other details, such as viewing the full certification path.

For insecure or SSL warnings, viewing your SSL certificate is the perfect first step in the troubleshooting process.

Diagnosing Mixed Content

Sometimes your website may show as insecure and not have a green padlock in your address bar. You may have checked that your SSL certificate is valid using the method above, and everything is well and good there, but your site is still not displaying a padlock. This can be due to what’s called mixed content. Put simply, mixed content means that your website itself is configured to load over HTTPS://, but some resources (scripts, images, etc.) on it are set to load over HTTP://. For a website to show as fully secure, all resources must be served over HTTPS://, and your website’s URL must also be configured to load as HTTPS://.

Any resources that are not loading securely are vulnerable to man-in-the-middle attacks, whereby a malicious actor can intercept data sent through your website, potentially leaking private information. This is doubly important for eCommerce sites or any sites handling personal information, and why it’s so important to ensure that your website is fully secure, not to mention increasing users’ trust in your website.

To assist in diagnosing mixed content, head back into the security tab again. Once you have that open, go ahead and refresh the website that you’re diagnosing. If there are any non-secure resources on the page, the panel on the left-hand side will list them. Secure resources will be green, and those non-secure will be red. Oftentimes this can be one or two images with an HTTP:// URL. Whatever the case, this is one of the easiest ways to diagnose what’s preventing your site from gaining a green padlock. Once you have a list of which content is insecure, you can go ahead and manually adjust those issues on your website.

There are always sites like “Why No Padlock?” that effectively do the same thing as the steps listed above, but the beauty of DevTools is that it is one tool that can do it all for you, without having to leave your website.

Conclusion

This concludes part 2 of our 3-part DevTools series! As always, be sure to head over to Google’s Chrome DevTools documentation for further information on everything discussed here.

We hope that this has helped you gain some insight into how you might practically use DevTools when troubleshooting security and SSL issues on your own site. Now that you’re familiar with the basics of the security panel, stay tuned for part 3 where we will get stuck into some of the most useful DevTools tips and tricks of all.


An Introduction To Getting Started with Google Chrome’s DevTools

Post Syndicated from Andy Haine original https://www.anchor.com.au/blog/2020/10/an-introduction-to-getting-started-with-google-chromes-devtools/

Whether you’re a cloud administrator or developer, having a strong arsenal of dev tools under your belt will help to make your everyday tasks and website or application maintenance a lot more efficient.

One of the tools our developers use every day to assist our clients is Chrome’s DevTools. Whether you work on websites or applications for your own clients, or you manage your own company’s assets, DevTools is definitely worth spending the time to get to know. From style and design to troubleshooting technical issues, you would be hard-pressed to find a more effective tool for both.

Whether you already use Chrome’s DevTools on a daily basis, or you’re yet to discover its power and functionality, we hope we can show you something new in our 3-part DevTools series! In part 1 of this series, we will be giving you a brief introduction to DevTools. In part 2, we will cover diagnosing security issues using DevTools. Finally, in part 3, we’ll go over some of the more useful tips and tricks that you can use to enhance your workflow.

While in this series, we will be using Chrome’s DevTools, most of this advice also applies to other popular browser’s developer tools, such as Microsoft Edge or Mozilla Firefox. Although the functionality and location of the tools will differ, doing a quick Google search should help you to dig up anything you’re after.

An Introduction to Chrome DevTools

Chrome DevTools, also known as Chrome Developer Tools, is a set of tools built into the Chrome browser to assist web/application developers and novice users alike. Some of the things it can be used for include, but are not limited to:

  • Debugging and troubleshooting errors
  • Editing on the fly
  • Adjusting/testing styling (CSS) before making live changes
  • Emulating different network speeds (like 3G) to determine load times on slower networks
  • Testing responsiveness across different devices
  • Performance auditing to ensure your website or application is fast and well optimised

All of the above features can greatly enhance productivity when you’re building or editing, whether you’re a professional developer or a hobbyist looking to build your first site or application.

Chrome DevTools has been around for a long time (since Chrome’s initial release), and it has been continuously improved ever since. It is now extremely feature-rich and still being improved every day. Keep in mind, the above features are only a very brief overview of all the functionality DevTools has to offer. In this series, we’ll get you comfortably acquainted with DevTools, but you can also find very in-depth documentation over at Google’s DevTools site here, where they provide breakdowns of every feature.

How to open Chrome DevTools

There are a few different ways that you can access DevTools in your Chrome browser.

  1. Open DevTools by right-clicking on anything within the browser page, and select the “Inspect” button. This will open DevTools and jump to the specific element that you selected.
  2. Another method is via the Chrome browser menu. Simply go to the top right corner and click the three dots > More tools > Developer tools.
  3. If you prefer hotkeys, you can open DevTools by doing either of the following, depending on your operating system:

Windows = F12 or Ctrl + Shift + I

Mac = Cmd + Opt + I

Customising your Environment

Now that you know what DevTools is and how to open it, it’s worth spending a little bit of time customising DevTools to your own personal preferences.

To begin with, DevTools has a built-in dark mode. When you’re looking at code or a lot of small text all the time, using a dark theme can greatly help to reduce eye strain. Enabling dark mode can be done by following the instructions below:

  1. Open up DevTools using your preferred method above
  2. Once you’re in, click the settings cog on the top right to open up the DevTools settings panel
  3. Under the ‘Appearance’ heading, adjust the ‘Theme’ to ‘Dark’

You may wish to spend some time exploring the remainder of the preferences section of DevTools, as there are a lot of layout and functionality customisations available.

Conclusion

This concludes part 1 of our 3 part DevTools series. We hope that this has been a useful and informative introduction to getting started using DevTools for your own project! Now that you’re familiar with the basics, stay tuned for part 2 where we will show you how you can diagnose basic security issues – and more!
