Tag Archives: Integrations

Monitoring Configuration Backups with Zabbix and Oxidized

Post Syndicated from Brian van Baekel original https://blog.zabbix.com/monitoring-configuration-backups-with-zabbix-and-oxidized/28260/

As a Zabbix partner, we help customers worldwide with all their Zabbix needs, ranging from building a simple template all the way to (massive) turn-key implementations, trainings, and support contracts. Quite often during projects, we get the question, “How about making configuration backups of our network equipment? We need this, as another tool was also capable of doing this!”

The answer is always the same – yes, but no. Yes, technically it is possible to get your configuration backups into Zabbix, and it’s not even that hard to set up initially. However, you really should not want to do this. Zabbix is not made for configuration backups, and you will run into limitations within minutes. As you can imagine, the customer is never happy with this answer, and some actively start to question where we think the limitation lies, to see whether it applies to them as well. So we simply set up an SSH agent item and get that config out:
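A minimal sketch of such an item could look like this (the macro names and values here are placeholders, not the exact setup from the screenshot):

Type: SSH agent
Key: ssh.run[get-config,{HOST.CONN},22]
Authentication method: Password
User name: {$SSH_USER}
Password: {$SSH_PASSWORD}
Executed script: show full-configuration
Update interval: 1h
Type of information: Text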

Voila! Once per hour Zabbix will log in to that device, execute the command ‘show full-configuration,’ and get back the result. Frankly, it just works. You check the Monitoring -> Latest data section of the host and see that there is data for this item. Problem solved, right?

No. As a matter of fact, this is where the problems start. Zabbix allows us to store up to 64KB of data in an item value. The above screenshot is of a (small) FortiGate firewall, and its config, stored in a text file, is just over 1.1MB. So Zabbix truncates the data, which renders the backup useless – a restore will never work. At the same time, Zabbix does not sanitize the output, so all secrets are left in it.

To make it even worse, it’s challenging to diff different config versions/revisions – that feature is just not there. By this point, the customer is usually convinced that Zabbix is not the right tool, and the next question pops up – “Now what? How can we fix this?” This is where our added value comes in, as we have a solution that is also rather affordable (free).

The solution is Oxidized, which is basically RANCID on steroids. This project started years ago and is released under the Apache 2.0 license. We found it by accident, started playing around with it, and never left. The project is available on GitHub (https://github.com/ytti/oxidized) and written in Ruby. Incidentally, if you (or your company) have Ruby devs and want to give something back to the community, the Oxidized project is looking for extra maintainers!

At this point, we show our customers the GUI of Oxidized, which in our case involves just making backups of a few firewalls:

So we have the name, the model, and (in this case) just one group. The status shows whether the backup was successful, the last update, and when the last change was detected. At the same time, under actions, we can get the full config file and look at previous revisions (and diff them), combined with an ‘execute now’ option.

Looking at the versions, it’s simply showing this:

This is already giving us a nice idea of what is going on. We see the versions and dates at a glance, but the moment we check the diff option, we can easily see what was actually changed:

The perfect solution, except that it is not integrated with Zabbix. That means double administration and a lot of extra work, combined with the inevitable errors – devices not added, credential mismatches, connection errors, etc. Luckily, we can easily change the format of the above information from the GUI to JSON by just adding ‘.json’ at the end of the URL:

http://<IP/DNS>:<PORT>/nodes.json

This will give the following output:

[
  {
    "name": "fw-mid-01",
    "full_name": "fw-mid-01",
    "ip": "192.168.4.254",
    "group": "default",
    "model": "FortiOS",
    "last": {
      "start": "2024-06-13 06:46:14 UTC",
      "end": "2024-06-13 06:46:54 UTC",
      "status": "no_connection",
      "time": 40.018852483
    },
    "vars": null,
    "mtime": "unknown",
    "status": "no_connection",
    "time": "2024-06-13 06:46:54 UTC"
  },
  {
    "name": "FW-HUNZE",
    "full_name": "FW-HUNZE",
    "ip": "192.168.0.254",
    "group": "default",
    "model": "FortiOS",
    "last": {
      "start": "2024-06-13 06:46:54 UTC",
      "end": "2024-06-13 06:47:04 UTC",
      "status": "success",
      "time": 10.029043912
    },
    "vars": null,
    "mtime": "2024-06-13 06:47:05 UTC",
    "status": "success",
    "time": "2024-06-13 06:47:04 UTC"
  }
]

As you might know, Zabbix is perfectly capable of parsing JSON and creating items and triggers out of it. A master item, dependent LLD (https://blog.zabbix.com/low-level-discovery-with-dependent-items/13634/), and within minutes you’ve got Oxidized making configuration backups while Zabbix is monitoring and alerting on the status:
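As a rough outline of that setup (the item keys, host name, and LLD macro below are illustrative assumptions, not the exact configuration from this project):

Master item (HTTP agent):
  URL: http://<IP/DNS>:<PORT>/nodes.json

Discovery rule (dependent item on the master):
  LLD macro: {#NODE.NAME} <- JSONPath $.name

Item prototype (dependent item on the master):
  Key: oxidized.status[{#NODE.NAME}]
  Preprocessing: JSONPath $[?(@.name == '{#NODE.NAME}')].status.first()

Trigger prototype:
  last(/Oxidized/oxidized.status[{#NODE.NAME}])<>"success"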

At this point we’re getting close to a nice integration, but we haven’t overcome the double configuration management yet.

Oxidized can read its configuration from multiple sources, including a CSV file, SQL, SQLite, MySQL, or HTTP. The easiest is a CSV file – just make sure you’ve got all the information in the correct columns and it works. An example:

Oxidized config:

source:
  default: csv
  csv:
    file: /var/lib/oxidized/router.db
    delimiter: !ruby/regexp /:/
    map:
      name: 0
      ip: 1
      model: 2
      username: 3
      password: 4
    vars_map:
      enable: 5

CSV file:

rtr01.local:192.168.1.1:ios:oxidized:5uP3R53cR3T:T0p53cR3t

Great – now we have to configure two places (Zabbix and Oxidized) and keep a cleartext username/password in a CSV file. What about SQL as a source, letting Oxidized connect to Zabbix? From there we should be able to get information regarding the hostname, but somehow we need the credentials as well. That’s not a default piece of information in Zabbix, but user macros can save us here.

So on our host we add 2 extra macros:

At the same time, we need to tell Oxidized what kind of device it is. There are multiple ways of doing this, obviously – a tag, a user macro, host groups, you name it. In this case, we place a tag on the host:

Now we make sure that Oxidized is only taking hosts with the tag ‘oxidized’ and extract from them the host name, IP address, model, username, and password:

+-----------+----------------+---------+------------+------------------------------+
| host      | ip             | model   | username   | password                     |
+-----------+----------------+---------+------------+------------------------------+
| fw-mid-01 | 192.168.4.254  | fortios | <redacted> | <redacted>                   |
| FW-HUNZE  | 192.168.0.254  | fortios | <redacted> | <redacted>                   |
+-----------+----------------+---------+------------+------------------------------+
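The exact query depends on your schema and naming conventions, but a sketch along these lines could produce the table above – assuming the SSH credentials live in user macros named {$OXIDIZED_USER} and {$OXIDIZED_PASS} (the two extra macros added earlier) and the device model is stored as the value of the ‘oxidized’ host tag:

SELECT h.host,
       i.ip,
       t.value  AS model,
       mu.value AS username,
       mp.value AS password
FROM hosts h
JOIN interface i  ON i.hostid  = h.hostid AND i.main = 1   -- default interface
JOIN host_tag t   ON t.hostid  = h.hostid AND t.tag = 'oxidized'
JOIN hostmacro mu ON mu.hostid = h.hostid AND mu.macro = '{$OXIDIZED_USER}'
JOIN hostmacro mp ON mp.hostid = h.hostid AND mp.macro = '{$OXIDIZED_PASS}'
WHERE h.status = 0;  -- monitored hosts only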

This way, we simply add our host in Zabbix, add the SSH credentials, and Oxidized will pick it up the next time a backup is scheduled. Zabbix will immediately start monitoring the status of those jobs and alert you if something fails.

This blog post is not meant as a complete integration write-up, but rather as a way to give some insight into how we as a partner operate in the field, taking advantage of the flexibility of the products we work with. It should give you enough information to build this yourself, but of course we’re always available to help you, or to just build it as part of our consultancy offering.



Enhancing Network Synergy: rConfig’s Native Integration with Zabbix

Post Syndicated from Stephen Stack original https://blog.zabbix.com/enhancing-network-synergy-rconfigs-native-integration-with-zabbix/28283/

Native integration between two leading open-source tools – Zabbix for network monitoring and rConfig for configuration management – delivers substantial benefits to organizations. On one side, Zabbix offers a platform that maintains a Single Source of Truth for network device inventories. It provides real-time monitoring, problem detection, alerting, and other critical features that are essential for day-to-day operations, ensuring smooth and reliable network connectivity crucial for business continuity.

On the other side, there’s rConfig, renowned for its robust and reliable network automation, configuration backup, and compliance management. Integrating rConfig with Zabbix enhances its capabilities, allowing for seamless Device Inventory synchronization. This union not only simplifies the management of network configurations but also introduces more advanced Network Automation Platform features. Together, they form a powerhouse toolset that streamlines network management tasks, reduces operational overhead, and boosts overall network performance, making it easier for businesses to focus on growth and innovation without being hindered by network reliability concerns.

Optimizing Network Management with Unified Inventory

At rConfig, we are deeply embedded with our customers, and our main mission is to work with them to solve their real-world problems. One significant challenge that consistently surfaces – both from client feedback and our own experiences – is managing and accurately locating a trusted and reliable central network inventory. This challenge brings to the forefront a classic dilemma in Enterprise Architecture circles: In our scenario of network inventory, which system ought to act as the System of Record, and which should function as the System of Engagement to optimize interactions with records for various purposes, such as Network Management Systems (NMS) and Network Configuration Management (NCM)?

Enterprise Architecture circles illustrating systems of record, insight and engagement. Credit: Sharon Moore – https://samoore.me/

At rConfig, from a product perspective we’ve chosen to focus on what we do best and love most: Network Configuration Management. Therefore, integrating with an upstream Network Management System (NMS) that can act as the System of Record for network device inventory was a logical step for us. Given that many of our customers also use Zabbix network operations, it was a natural choice to begin our integration journey with them. Our platforms are highly complementary, which streamlines the integration process and enhances our ability to serve our customers better. This strategic decision allows us to offer a seamless and efficient management solution that not only meets the current needs but also scales to address future challenges in network management.

Enhanced Integration Through ETL

You might be wondering how this integration works and whether it’s straightforward or challenging to set up. Setting up the integration between rConfig and Zabbix is relatively straightforward, but, as with any complex data-driven system, it requires careful planning and diligence to ensure that the data flow between the systems is fully optimized and automated. This is where ETL – Extract, Transform, Load – plays a crucial role. ETL is a process that involves extracting data from the Zabbix API in its raw form, transforming it into a format that rConfig can readily process and validate, and then loading it into the rConfig production database. This process also efficiently handles any data conflicts and updates.

The advantages of using ETL are significant, enhancing data quality and making the data more accessible, thereby enabling rConfig to analyze information more effectively and make well-informed, data-driven decisions. At rConfig, our user interface is designed to aid in the development and troubleshooting of features, though we’re also fond of using the CLI for those who prefer it. Below is a screenshot from our lab showing the end-to-end ETL process with Zabbix in action. It illustrates the steps rConfig takes to connect to Zabbix, extract, validate, transform and map the data, load it to staging, and finally, move it to the production environment for a small set of devices.

While the screenshot below displays just a few devices as a sample integration in our lab, the most extensive integration we’ve achieved in a production environment with this new rConfig feature involved syncing a single Zabbix instance with over 5,000 host/device records. This highlights its efficiency and reliability in a real-world environment.

Screenshot of rConfig Zabbix Integration on the Command Line

Going Deeper: Understanding the Integration Process

To grasp the integration process more clearly, let’s dive into the details that will help you understand how to set everything up before we automate the task. Our documentation website, docs.rconfig.com, provides comprehensive details, and our YouTube channel features a great demonstration video of the entire process.

Initial Setup: The first step involves configuring rConfig to connect and authenticate with the Zabbix API. This setup is managed through the Configuration page in the rConfig user interface. During this phase, you can also apply filters to select specific Zabbix tags or host groups, refining exactly which host records you want to synchronize.

Screenshot of Zabbix Configuration page in rConfig V7 Professional

Data Extraction and Validation: Once the connection is established, rConfig extracts host records in raw JSON format. This stage involves validating the data to ensure that the correct tags and data mappings are in place.

Screenshot of Zabbix Raw Host Extract page in rConfig V7 Professional

Staging for Review: After validation, the data is loaded into a staging table. This allows for a thorough review to confirm that the mapped rConfig data fields are correct, ensuring that the newly imported devices are associated with the appropriate connection templates, categories, and tags.

Screenshot of Zabbix host staging table in rConfig V7 Professional

Final Loading: The final step involves transferring the staged devices to the main production devices table. After this transfer, the staging table is cleared. The devices then appear in the main device table, marked with a special icon indicating that they are synced through integration.

Screenshot of Zabbix host fully loaded to production devices table in rConfig V7 Professional

Seamless Operational Integration: Once the devices are loaded into the production table, they are automatically incorporated into standard rConfig scheduled tasks, automations, or any other rConfig feature that utilizes the device data (like categories and tags). This integration facilitates a seamless operational workflow between the platforms. Users can even access these devices directly in Zabbix from within the rConfig UI, streamlining operations management.

Once all of the above steps are completed and the initial setup is done, future loads run on a scheduled, automated basis using the rConfig Task manager.

Screenshot of rConfig Device detail view for a Zabbix integrated host

This detailed setup and validation process ensures that the integration between rConfig and Zabbix is not only effective but also enhances the functionality and efficiency of managing network devices across platforms.

Case Study: Enhancing Network Management for a Las Vegas Entertainment Organization

  1. Challenge: A prominent Las Vegas entertainment organization faced significant difficulties in managing the diverse and complex network that supports their extensive operations, including gaming, security, and hospitality services. The primary issues were outdated network inventories and inefficient management of network configurations across numerous devices, leading to operational disruptions and security vulnerabilities.
  2. Solution: To address these challenges, the organization implemented the integration of rConfig with Zabbix, focusing on automating and centralizing the network management process. This solution aimed to synchronize network device inventories across the organization’s extensive operations, ensuring accurate and real-time data availability.
  3. Implementation: The integration process began with setting up Zabbix to continuously monitor and gather data from network devices across different venues and services. This data was then extracted, standardized, and loaded into rConfig, where it could be used for automated configuration management and backup. The setup also included sophisticated mapping and validation to ensure all data transferred between Zabbix and rConfig was accurate and relevant.

Benefits:

  • Improved Network Reliability: The automated synchronization of network inventories reduced the frequency of network failures and minimized downtime, which is crucial in the high-stakes environment of Las Vegas entertainment.
  • Enhanced Security: With more accurate and timely network data, the organization could better identify and respond to security threats, protecting sensitive information and ensuring the safety of both guests and operations.
  • Operational Efficiency: The IT team was able to shift their focus from routine network maintenance to strategic initiatives that enhanced overall business operations, including integrating new technologies and improving guest experiences.
  • Scalability: The integration provided a scalable solution that could accommodate future expansion, whether adding new devices or incorporating new technologies or venues into the network.

Outcome: The implementation of the rConfig and Zabbix integration dramatically transformed the organization’s network management capabilities. The IT department noted a substantial reduction in the manpower and time required for routine maintenance, while operational uptime improved significantly. The organization now enjoys a robust, streamlined network management system that supports its dynamic environment, ensuring that both guests and staff benefit from reliable and secure network services.

This case study highlights the power of effective network management solutions in supporting complex operations and enhancing business efficiency and security within the entertainment industry.

Conclusion: Forging Ahead with Innovative Partnerships

In conclusion, the Zabbix platform stands out as a cornerstone in network monitoring, renowned for its extensive capabilities in real-time monitoring, problem detection, and alerting. Its robust architecture not only supports a broad range of network environments but also offers the flexibility and scalability necessary for today’s diverse technological landscapes. The platform’s ability to provide detailed and accurate network insights is crucial for organizations aiming to maintain optimal operational continuity and security.

The integration of Zabbix with rConfig, a globally reliable and robust network configuration management (NCM) solution, enhances these benefits significantly, creating a synergistic relationship that leverages the strengths of both platforms. For customers and partners, this integration means not only smoother and more efficient network management but also the assurance that they are supported by two of the leading solutions in the industry. Together, Zabbix and rConfig deliver a comprehensive network management experience that drives efficiency, reduces costs, and ensures a higher level of network reliability and security, positioning them as indispensable tools in the toolkit of any organization serious about its network infrastructure.

About rConfig

rConfig is an industry leader in network configuration management and automation. Founded in 2010 and based in Ireland, rConfig has been at the forefront of delivering innovative solutions that simplify the complexities of network management. Our software is designed to be both powerful and user-friendly, making it an ideal choice for IT professionals across a variety of sectors, including education, government, manufacturing, and large global enterprises.

With the capability to manage tens of thousands of devices, rConfig offers robust functionality such as automated config backups, compliance management, and network automation. Our platform is vendor-agnostic, which allows seamless integration with a diverse range of network devices and systems, from traditional IT to IoT and OT environments. This flexibility ensures that our clients can manage all aspects of their network configurations, regardless of the underlying technology.

rConfig is committed to continuous innovation and customer-centric solutions, with industry-first features such as API backups and our Script Integration Engine. Our native integration with platforms like Zabbix exemplifies our dedication to enhancing network management through strategic partnerships. This collaboration not only streamlines operations but also amplifies the benefits provided, ensuring that our customers have access to the most advanced tools in the industry.



Case Study: Monitoring Railway Infrastructure for Infrabel

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/case-study-monitoring-railway-infrastructure-for-infrabel/28035/

Infrabel is a government-owned public limited company that builds, owns, maintains, and upgrades the Belgian railway network, makes its capacity available to railway operator companies, and handles train traffic control. Headquartered in Brussels, Infrabel employs over 9,000 people and manages 3,602 kilometers of rail lines.

The challenge

Infrabel needed a monitoring solution that was flexible enough to manage not only infrastructure, but also OS level metrics, data centers, service and application states, and the availability of railway infrastructure components.

The solution

To begin with, Zabbix agents are deployed on railway station screens and broadcasting systems. This is possible because under the hood, these pieces of hardware run Debian, which means they can be monitored at the OS level by Zabbix agents right out of the box with our official templates.

This can be very easily automated together with low-level discovery, autoregistration, or network discovery. Devices can be pinged from Zabbix proxies or Zabbix servers to check if they are available. If they are unavailable, Zabbix sends a notification, after which an engineer either restores the network connectivity or replaces the hardware.

In addition, Infrabel also uses Zabbix to retrieve and monitor data collected from ActiveMQ. This is where a combination of custom Bash scripts and Zabbix sender is used: the required data (also related to the railway infrastructure and data center, hardware, and software) is retrieved from ActiveMQ via a Bash script, forwarded to Zabbix sender via a wrapper script, sent to the Zabbix server or proxy, stored and analyzed in Zabbix, and acted upon if required.
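As a simplified illustration of that pipeline (the metric script, host name, and item key are made up for the example), a wrapper around the zabbix_sender utility could look like this:

#!/bin/bash
# Fetch a metric from ActiveMQ (placeholder script) and push it to Zabbix.
# -z: Zabbix server/proxy, -s: monitored host, -k: trapper item key, -o: value
value=$(/usr/local/bin/get_activemq_metric.sh queue.depth)
zabbix_sender -z zabbix-proxy.example.local -s "activemq-broker" \
    -k activemq.queue.depth -o "$value"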

The results

Infrabel found that they could get the most out of Zabbix by integrating it with a third-party ticketing system they were already using. The integration itself is simple – when Zabbix generates a problem, the Zabbix API is then used to retrieve the problems related to a particular set of triggers that need to be forwarded to this third-party system.

These alerts are then forwarded via API to whatever system Infrabel requires – Zabbix has a variety of integrations available right out of the box using webhooks, including Slack, JIRA, Microsoft Teams, and many others. Messengers can also be used with Zabbix, but Infrabel has opted to use the Zabbix API for their custom ticketing solution.

In conclusion

Infrabel is the perfect example of how the flexibility of Zabbix allows it to adapt to any industry or need. The possibility to use Zabbix API, web hooks, or a combination of both was a game-changer for Infrabel – just as it could be for any customer in any industry.

You can learn more about what we can do for customers across a variety of industries by visiting our website or requesting a demo.


Make your interaction with Zabbix API faster: Async zabbix_utils.

Post Syndicated from Aleksandr Iantsen original https://blog.zabbix.com/make-your-interaction-with-zabbix-api-faster-async-zabbix_utils/27837/

In this article, we will explore the capabilities of the new asynchronous modules of the zabbix_utils library. Thanks to asynchronous execution, users can expect improved efficiency, reduced latency, and increased flexibility in interacting with Zabbix components, ultimately enabling them to create efficient and reliable monitoring solutions that meet their specific requirements.

There is a high demand for the Python library zabbix_utils. Since its release and up to the moment of writing this article, zabbix_utils has been downloaded from PyPI more than 15,000 times. Over the past week, the library has been downloaded more than 2,700 times. The first article about the zabbix_utils library has already gathered around 3,000 views. Among the array of tools available, the library has emerged as a popular choice, offering developers and administrators a comprehensive set of functions for interacting with Zabbix components such as Zabbix server, proxy, and agents.

Considering the demand from users, as well as the potential of asynchronous programming to optimize interaction with Zabbix, we are pleased to present a new version of the library with new asynchronous modules in addition to the existing synchronous ones. The new zabbix_utils modules are designed to provide a significant performance boost by taking advantage of the inherent benefits of asynchronous programming to speed up communication between Zabbix and your service or script.

You can read the introductory article about zabbix_utils for a more comprehensive understanding of working with the library.

Benefits and Usage Scenarios

From expedited data retrieval and real-time event monitoring to enhanced scalability, asynchronous programming empowers you to build highly efficient, flexible, and reliable monitoring solutions adapted to meet your specific needs and challenges.

The new version of zabbix_utils and its asynchronous components may be useful in the following scenarios:

  • Mass data gathering from multiple hosts: When it’s necessary to retrieve data from a large number of hosts simultaneously, asynchronous programming allows requests to be executed in parallel, significantly speeding up the data collection process (see the sketch after this list);
  • Mass resource exporting: When templates, hosts or problems need to be exported in parallel. This parallel execution reduces the overall export time, especially when dealing with a large number of resources;
  • Sending alerts from or to your system: When certain actions need to be performed based on monitoring conditions, such as sending alerts or running scripts, asynchronous programming provides rapid condition processing and execution of corresponding actions;
  • Scaling the monitoring system: With an increase in the number of monitored resources or the volume of collected data, asynchronous programming provides better scalability and efficiency for the monitoring system.
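To make the first of these scenarios concrete, here is a minimal sketch (the host names and token are placeholders) that fires several item.get requests concurrently with asyncio.gather instead of awaiting them one by one:

import asyncio
from zabbix_utils import AsyncZabbixAPI

async def main():
    api = AsyncZabbixAPI(url="127.0.0.1")
    await api.login(token="xxxxxxxx")

    hosts = ["host1", "host2", "host3"]
    # One item.get request per host, executed in parallel
    results = await asyncio.gather(
        *[api.item.get(host=h, output=["name", "lastvalue"]) for h in hosts]
    )
    for host, items in zip(hosts, results):
        print(host, len(items))

    await api.logout()

asyncio.run(main())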

Installation and Configuration

If you already use the zabbix_utils library, simply updating the library to the latest version and installing all necessary dependencies for asynchronous operation is sufficient. Otherwise, you can install the library with asynchronous support using the following methods:

  • By using pip:
~$ pip install zabbix_utils[async]

Using [async] allows you to install additional dependencies (extras) needed for the operation of asynchronous modules.

  • By cloning from GitHub:
~$ git clone https://github.com/zabbix/python-zabbix-utils
~$ cd python-zabbix-utils/
~$ pip install -r requirements.txt
~$ python setup.py install

The process of working with the asynchronous version of the zabbix_utils library is similar to the synchronous one, except for some syntactic differences of asynchronous code in Python.

Working with Zabbix API

To work with the Zabbix API in asynchronous mode, you need to import the AsyncZabbixAPI class from the zabbix_utils library:

from zabbix_utils import AsyncZabbixAPI

Similar to the synchronous ZabbixAPI, the new AsyncZabbixAPI can use the following environment variables: ZABBIX_URL, ZABBIX_TOKEN, ZABBIX_USER, ZABBIX_PASSWORD. However, when creating an instance of the AsyncZabbixAPI class you cannot specify a token or a username and password, unlike the synchronous version. They can only be passed when calling the login() method. The following usage scenarios are available here:

  • Use preset values of environment variables, i.e., not pass any parameters to AsyncZabbixAPI:
~$ export ZABBIX_URL="https://zabbix.example.local"
api = AsyncZabbixAPI()
  • Pass only the Zabbix API address as input, which can be specified as either the server IP/FQDN address or DNS name (in this case, the HTTP protocol will be used) or as a URL of the Zabbix API:
api = AsyncZabbixAPI(url="127.0.0.1")

After declaring an instance of the AsyncZabbixAPI class, you need to call the login() method to authenticate with the Zabbix API. There are two ways to do this:

  • Using environment variable values:
~$ export ZABBIX_USER="Admin"
~$ export ZABBIX_PASSWORD="zabbix"

or

~$ export ZABBIX_TOKEN="xxxxxxxx"

and then:

await api.login()
  • Passing the authentication data when calling login():
await api.login(user="Admin", password="zabbix")

Like ZabbixAPI, the new AsyncZabbixAPI class supports version getting and comparison:

# ZabbixAPI version field
ver = api.version
print(type(ver).__name__, ver) # APIVersion 6.0.29

# Method to get ZabbixAPI version
ver = api.api_version()
print(type(ver).__name__, ver) # APIVersion 6.0.29

# Additional methods
print(ver.major)     # 6.0
print(ver.minor)     # 29
print(ver.is_lts())  # True

# Version comparison
print(ver < 6.4)        # True
print(ver != 6.0)       # False
print(ver != "6.0.24")  # True

After authentication, you can make any API requests described for all supported versions in the Zabbix documentation.

The format for calling API methods looks like this:

await api_instance.zabbix_object.method(parameters)

For example:

await api.host.get()

After completing all needed API requests, it is necessary to call logout() to close the API session if authentication was done using username and password, and also close the asynchronous sessions:

await api.logout()

More examples of usage can be found here.

Sending Values to Zabbix Server/Proxy

The asynchronous class AsyncSender has been added, which also helps to send values to the Zabbix server or proxy for items of the Zabbix Trapper data type.

AsyncSender can be imported as follows:

from zabbix_utils import AsyncSender

Values can be sent in a group; to do this, it is necessary to import ItemValue:

import asyncio
from zabbix_utils import ItemValue, AsyncSender


items = [
    ItemValue('host1', 'item.key1', 10),
    ItemValue('host1', 'item.key2', 'Test value'),
    ItemValue('host2', 'item.key1', -1, 1702511920),
    ItemValue('host3', 'item.key1', '{"msg":"Test value"}'),
    ItemValue('host2', 'item.key1', 0, 1702511920, 100)
]

async def main():
    sender = AsyncSender('127.0.0.1', 10051)
    response = await sender.send(items)
    # processing the received response

asyncio.run(main())

As in the synchronous version, it is possible to specify the size of chunks when sending values in a group using the parameter chunk_size:

sender = AsyncSender('127.0.0.1', 10051, chunk_size=2)
response = await sender.send(items)

In the example, the chunk size is set to 2. So, 5 values passed in the code above will be sent in three requests of two, two, and one value, respectively.

Also it is possible to send a single value:

sender = AsyncSender(server='127.0.0.1', port=10051)
resp = await sender.send_value('example_host', 'example.key', 50, 1702511920)

If your server has multiple network interfaces, and values need to be sent from a specific one, the AsyncSender provides the option to specify a source_ip for sent values:

sender = AsyncSender(
    server='zabbix.example.local',
    port=10051,
    source_ip='10.10.7.1'
)
resp = await sender.send_value('example_host', 'example.key', 50, 1702511920)

AsyncSender also supports reading connection parameters from the Zabbix agent/agent2 configuration file. To do this, you need to set the use_config flag and specify the path to the configuration file if it differs from the default /etc/zabbix/zabbix_agentd.conf:

sender = AsyncSender(
    use_config=True,
    config_path='/etc/zabbix/zabbix_agent2.conf'
)

More usage examples can be found here.

Getting values from Zabbix Agent/Agent2 by item key

In cases where you need the functionality of our standard zabbix_get utility, but native to your Python project and working asynchronously, consider using the AsyncGetter class. A simple example of its usage looks like this:

import asyncio
from zabbix_utils import AsyncGetter

async def main():
    agent = AsyncGetter('10.8.54.32', 10050)
    resp = await agent.get('system.uname')
    print(resp.value) # Linux zabbix_server 5.15.0-3.60.5.1.el9uek.x86_64

asyncio.run(main())

Like AsyncSender, the AsyncGetter class supports specifying the source_ip address:

agent = AsyncGetter(
    host='zabbix.example.local',
    port=10050,
    source_ip='10.10.7.1'
)

More usage examples can be found here.

Conclusions

The new version of the zabbix_utils library provides users with the ability to implement efficient and scalable monitoring solutions, ensuring fast and reliable communication with the Zabbix components. Asynchronous way of interaction gives a lot of room for performance improvement and flexible task management when handling a large volume of requests to Zabbix components such as Zabbix API and others.

We have no doubt that the new version of zabbix_utils will become an indispensable tool for developers and administrators, helping them create more efficient, flexible, and reliable monitoring solutions that best meet their requirements and expectations.


Monitoring Oracle Cloud Infrastructure (OCI) with Zabbix

Post Syndicated from Kristaps Naglis original https://blog.zabbix.com/monitoring-oracle-cloud-infrastructure-oci-with-zabbix/27638/

Monitoring Oracle Cloud Infrastructure resources is crucial for maintaining optimal performance, security, and cost-efficiency. By continuously monitoring resources such as compute instances, storage, databases, and networking components, you can proactively identify and address potential issues before they escalate, ensuring uninterrupted service delivery.

Monitoring also provides valuable insights into resource utilization patterns, enabling capacity planning and optimization efforts. Furthermore, effective monitoring enhances security posture by detecting anomalies or suspicious activities that could indicate potential security breaches.

This integration consists of multiple templates, where each template covers a specific OCI resource. As of writing this blog post, the following services are supported:

  • OCI Compute;
  • OCI Autonomous Database (serverless);
  • OCI Object Storage;
  • OCI Virtual Cloud Networks (VCNs);
  • OCI Block Volumes;
  • OCI Boot Volumes.

The structure of these templates means that you do not need to configure each service separately. There is a master template, Oracle Cloud by HTTP, which needs to be configured; the rest of the services are monitored automatically using low-level discovery and host prototypes. Overall, the structure of the templates looks like this:

Prepare OCI for monitoring with Zabbix

In all of these examples, the OCI web console will be used; keep in mind that some of the UI elements or navigation menus may change in the future.

User creation

First of all, you need to set up a user in OCI that Zabbix will use to access the REST API and gather data. To do so, navigate to 'Identity & Security' -> 'Domains' from the main menu in the top-left of the screen. Select the domain whose resources you wish to monitor and head over to the Users view on the left sidebar. Here, create a new user.

API key generation

After this, OCI should automatically redirect you to the newly created user page. If for some reason it does not, go back to all users in the domain and open your previously created user.

You need to assign API keys to this user. To do so, on the left sidebar, press API keys and then Add API key. If you already have an RSA key pair that you wish to use for this monitoring user, you can select an option to either upload or paste the public key. However, in this example, I will generate a completely new key pair.

After pressing button Add, OCI will show you details about your environment and the API key. Save this information somewhere safe for now, as we will need to use this data later inside Zabbix. After saving this, you can now close this pop-up.

User group creation

To make user management more flexible, a good idea is to create a user group specifically for monitoring and assign the monitoring user to it. That way, if you need more monitoring users in the future, you can simply assign them to the common monitoring user group, and all access rights will apply automatically.

To do this, navigate back to 'Identity & Security' -> 'Domains' and select Groups. Here, press the Create group button. Give the group a name and assign your monitoring user to it while you’re at it.

Policy creation

Now that you have successfully created a monitoring user, generated API keys, and created a user group for monitoring, you need to create a new OCI policy. Policies in OCI regulate which resources users can access and what they can do with those resources. To create a new policy, open the top-left menu, navigate to 'Identity & Security' -> 'Policies', and press the Create Policy button.


This will open a new view where you can define policy settings. Give this policy a name and description and select the appropriate compartment. Inside the policy builder, enable Show manual editor and copy-paste these rules – just remember to replace zabbix-mon with your monitoring group name:

Allow group 'zabbix-mon' to read metrics in tenancy
Allow group 'zabbix-mon' to read instances in tenancy
Allow group 'zabbix-mon' to read subnets in tenancy
Allow group 'zabbix-mon' to read vcns in tenancy
Allow group 'zabbix-mon' to read vnic-attachments in tenancy
Allow group 'zabbix-mon' to read volumes in tenancy
Allow group 'zabbix-mon' to read objectstorage-namespaces in tenancy
Allow group 'zabbix-mon' to read buckets in tenancy
Allow group 'zabbix-mon' to read autonomous-databases in tenancy

This set of rules will give your monitoring group users read-only access to only those resources that Zabbix Oracle Cloud Infrastructure templates use.

After you are done, press the Create button and the policy will be created.

If you decide to modify our template to access more data from OCI, you will need to append appropriate rules here as well.

That is it for configuration on the OCI side now.

Prepare Zabbix

Create host

First, open up your Zabbix web interface, head over to the Hosts section, and create a new host. Here you will need to specify a host name of your choice, attach the Oracle Cloud by HTTP template, and assign the host to a group.

Before pressing the Add button, you should configure macros. Open the Macros tab and select Inherited and host macros. There will be quite a few of them, each with its own use case. Some of these macros are discussed in the Optional configuration section, so for now let’s focus on the ones that are necessary for the template to work at all.

Essentially, there are 3 steps to take here:

  1. This is the moment when the OCI configuration file values from the API key generation section come into play. You will need to set the macro values to the values that OCI gave you:
    • {$OCI.API.TENANCY} – here set the tenancy OCID value;
    • {$OCI.API.USER} – here set the user OCID value;
    • {$OCI.API.FINGERPRINT} – here set the fingerprint value;
  2. Now you need to find where you saved the private key of your OCI monitoring user from the same API key generation section. Copy the contents of that private key file and paste them into the {$OCI.API.PRIVATE.KEY} macro.
  3. Once the authentication credentials are entered, you need to identify the correct OCI API endpoints that match your region (the one that the OCI configuration file gave you in the API key generation section). One way to do this is to go to the Oracle Cloud Infrastructure Documentation, where on the left side you will find all the API services. Each service has a dedicated home page with the respective API endpoints available. The API service endpoints required by Zabbix are:

    In my case the OCI region is eu-stockholm-1, therefore the above API service endpoints look like this:

    +----------------------------+-----------------------------------------------+--------------------------------+
    | OCI API service            | URL                                           | Zabbix macro name              |
    +----------------------------+-----------------------------------------------+--------------------------------+
    | Core Services API          | iaas.eu-stockholm-1.oraclecloud.com           | {$OCI.API.CORE.HOST}           |
    | Database Service API       | database.eu-stockholm-1.oraclecloud.com       | {$OCI.API.AUTONOMOUS.DB.HOST}  |
    | Object Storage Service API | objectstorage.eu-stockholm-1.oraclecloud.com  | {$OCI.API.OBJECT.STORAGE.HOST} |
    | Monitoring API             | telemetry.eu-stockholm-1.oraclecloud.com      | {$OCI.API.TELEMETRY.HOST}      |
    +----------------------------+-----------------------------------------------+--------------------------------+

    Now that you have the necessary host names, you need to save them as values for their respective Zabbix macros.


    IMPORTANT! API Endpoint URLs need to be entered without the HTTP scheme (https://).

After these 3 steps, your Macros tab should look something like this.

After pressing the button Add, this host will be added to Zabbix.

This host will not have any metric items; instead, it will discover all the OCI resources and add them as separate hosts in Zabbix. This happens automatically, sometime between when the Zabbix server reloads its configuration cache and 1 hour after that (because the discovery rules have an update interval of 1 hour). However, if you want to speed this up, you can manually refresh the Zabbix server config cache with the CLI command zabbix_server -R config_cache_reload and execute all discovery rules manually.

After that, in the Hosts view, you should see all of your monitored OCI resources.

Also, each of these hosts will have their own dashboards created automatically that will give a good overview of resource utilization.

Optional configuration

When creating the OCI host in Zabbix, a few macros required modified values. As you might have noticed, there were quite a few other macros that we skipped over. The following sections explain the usage of these macros. They are optional, so depending on your requirements, you might not need to change them at all.

Use macros for low-level discovery filtering

In official Zabbix templates, you might find macros that end with MATCHES and NOT_MATCHES. These are used in low-level discovery rules (LLDs) to help you filter resources that should or should not be discovered. Their values are regular expressions, so you can use wildcard symbols for pattern matching.

Usage of these macros can be found in the Filters tab, under discovery rules.

Usually, the default value for MATCHES is .* and for NOT_MATCHES it is CHANGE_IF_NEEDED. This means that any value will be discovered as long as it is not equal to CHANGE_IF_NEEDED. For example, if such macros are filtering the compute instance name:

  • {"instance-name": "compute_instance_1"} will be discovered;
  • {"instance-name": "web-server"} will be discovered;
  • {"instance-name": "CHANGE_IF_NEEDED"} will not be discovered.

In OCI templates, such discovery macros are also used. One example is compute instance discovery where filters are checking the instance state:

  • Macro {$OCI.COMPUTE.DISCOVERY.STATE.MATCHES} has a value of .*;
  • Macro {$OCI.COMPUTE.DISCOVERY.STATE.NOT_MATCHES} has a value of TERMINATED.

This means that any instance in the terminated state will not be discovered. This is because when you terminate an instance in OCI, it is still kept around for a while, but there is no reason to monitor such an instance.

Now that you have an idea of how these filters work, you can adjust them based on your requirements.

Low-level discovery resource filtering by free-form tags of OCI resources

To add additional resource discovery filtering options, every discovery script (except VCN discovery) gathers free-form tag data about a specific resource. Since free-form tags are completely custom and their format or usage will vary between users, the free-form tag filters are not included under LLD filters by default, but they can easily be added, as the tags are already being collected by the scripts.

To show an example, let’s assume that you want to categorize OCI compute instances by their usage utilizing free-form tags.

For one of the instances, add a new free-form tag, such as usage with the value web-server.

Repeat this for the other compute instances, setting the usage free-form tag values as you see fit.

Now, switch over to Zabbix, open the Oracle Cloud by HTTP template, and go to Discovery rules. Find Compute instances discovery and open it.

Under the LLD macros tab, add a new macro that will represent this tag, for example, {#USAGE} with the JSONPath $.tags.usage.

Under the Filters tab, there will already be filters for the compute instance name and state. Click Add to add a new filter, define the previously created LLD macro, and add a matching pattern and value, for example, {#USAGE} matches web-server.

The next time Compute instances discovery is executed, it will only discover OCI compute instances whose free-form tag usage matches the regex web-server. You can also experiment with the LLD filter pattern matching value to get different matching results for a specified value.

IMPORTANT! You must have the free-form tag present on every resource in OCI that you are using this method on. In this example, you would need to set the usage free-form tag on every compute instance, or Zabbix LLD will stop working – if a tag does not exist when Zabbix LLD searches for it, it will throw an error. The free-form tag does not need to have a value; just make sure that the tag key is there.

HTTP proxy usage

If needed, you can specify an HTTP proxy for the template to use by changing the value of {$OCI.HTTP.PROXY} user macro. Every request will use this proxy.

Custom OK HTTP response

If you are using a proxy, the returned OK HTTP response code could differ from 200. In that case, change the user macro {$OCI.HTTP.RETURN.CODE.OK} to the appropriate value. By default, it is set to 200, so all request items will only accept data when the response is 200.

That is about it. Good luck and have fun!


Introducing zabbix_utils – the official Python library for Zabbix API

Post Syndicated from Aleksandr Iantsen original https://blog.zabbix.com/python-zabbix-utils/27056/

Zabbix is a flexible and universal monitoring solution that integrates with a wide variety of different systems right out of the box. Despite actively expanding the list of natively supported systems for integration (via templates or webhook integrations), there may still be a need to integrate with custom systems and services that are not yet supported. In such cases, a library taking care of implementing interaction protocols with the Zabbix API, Zabbix server/proxy, or Agent/Agent2 becomes extremely useful. Given that Python is widely adopted among DevOps and SRE engineers as well as server administrators, we decided to release a library for this programming language first.

We are pleased to introduce zabbix_utils – a Python library for seamless interaction with Zabbix API, Zabbix server/proxy, and Zabbix Agent/Agent2. Of course, there are popular community solutions for working with these Zabbix components in Python. Keeping this fact in mind, we have tried to consolidate popular issues and cases along with our experience to develop as convenient a tool as possible. Furthermore, we made sure that transitioning to the tool is as straightforward and clear as possible. Thanks to official support, you can be confident that the current version of the library is compatible with the latest Zabbix release.

In this article, we will introduce you to the main capabilities of the library and provide examples of how to use it with Zabbix components.

Usage Scenarios

The zabbix_utils library can be used in the following scenarios, but is not limited to them:

  • Zabbix automation
  • Integration with third-party systems
  • Custom monitoring solutions
  • Data export (hosts, templates, problems, etc.)
  • Integration into your Python application for Zabbix monitoring support
  • Anything else that comes to mind

You can use zabbix_utils for automating Zabbix tasks, such as scripting the automatic monitoring setup of your IT infrastructure objects. This can involve using ZabbixAPI for the direct management of Zabbix objects, Sender for sending values to hosts, and Getter for gathering data from Agents. We will discuss Sender and Getter in more detail later in this article.

For example, let’s imagine you have an infrastructure consisting of different branches. Each server or workstation is deployed from an image with an automatically configured Zabbix Agent and each branch is monitored by a Zabbix proxy since it has an isolated network. Your custom service or script can fetch a list of this equipment from your CMDB system, along with any additional information. It can then use this data to create hosts in Zabbix and link the necessary templates using ZabbixAPI based on the received information. If the information from CMDB is insufficient, you can request data directly from the configured Zabbix Agent using Getter and then use this information for further configuration and decision-making during setup. Another part of your script can access AD to get a list of branch users to update the list of users in Zabbix through the API and assign them the appropriate permissions and roles based on information from AD or CMDB (e.g., editing rights for server owners).
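As a hedged sketch of the host-creation part of such a script (the interface details, group ID, and template ID below are invented for the example – in practice they would come from your CMDB and from preliminary hostgroup.get/template.get calls):

from zabbix_utils import ZabbixAPI

api = ZabbixAPI(url="127.0.0.1", user="Admin", password="zabbix")

# One record as it might arrive from a CMDB export
device = {"name": "br01-ws-042", "ip": "10.20.1.42"}

api.host.create(
    host=device["name"],
    interfaces=[{
        "type": 1,       # 1 = Zabbix agent interface
        "main": 1,
        "useip": 1,
        "ip": device["ip"],
        "dns": "",
        "port": "10050"
    }],
    groups=[{"groupid": "2"}],           # placeholder group ID
    templates=[{"templateid": "10001"}]  # placeholder template ID
)

api.logout()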

Another use case of the library may be when you regularly export templates from Zabbix for subsequent import into a version control system. You can also establish a mechanism for loading changes and rolling back to previous versions of templates. Here a variety of other use cases can also be implemented – it’s all up to your requirements and the creative usage of the library.

Of course, if you are a developer and there is a requirement to implement Zabbix monitoring support for your custom system or tool, you can implement sending data describing any events generated by your custom system/tool to Zabbix using Sender.

Installation and Configuration

To begin with, you need to install the zabbix_utils library. You can do this in two main ways:

  • By using pip:
~$ pip install zabbix_utils
  • By cloning from GitHub:
~$ git clone https://github.com/zabbix/python-zabbix-utils
~$ cd python-zabbix-utils/
~$ python setup.py install

No additional configuration is required. But you can specify values for the following environment variables: ZABBIX_URL, ZABBIX_TOKEN, ZABBIX_USER, ZABBIX_PASSWORD if you need. These use cases are described in more detail below.

Working with Zabbix API

To work with Zabbix API, it is necessary to import the ZabbixAPI class from the zabbix_utils library:

from zabbix_utils import ZabbixAPI

If you are using one of the existing popular community libraries, in most cases, it will be sufficient to simply replace the ZabbixAPI import statement with an import from our library.

At that point you need to create an instance of the ZabbixAPI class. There are several usage scenarios:

  • Use preset values of environment variables, i.e., not pass any parameters to ZabbixAPI:
~$ export ZABBIX_URL="https://zabbix.example.local"
~$ export ZABBIX_USER="Admin"
~$ export ZABBIX_PASSWORD="zabbix"
from zabbix_utils import ZabbixAPI


api = ZabbixAPI()
  • Pass only the Zabbix API address as input, which can be specified as either the server IP/FQDN address or DNS name (in this case, the HTTP protocol will be used) or as a URL, while the authentication data should still be specified as values for environment variables:
~$ export ZABBIX_USER="Admin"
~$ export ZABBIX_PASSWORD="zabbix"
from zabbix_utils import ZabbixAPI

api = ZabbixAPI(url="127.0.0.1")
  • Pass only the Zabbix API address to ZabbixAPI, as in the example above, and pass the authentication data later using the login() method:
from zabbix_utils import ZabbixAPI

api = ZabbixAPI(url="127.0.0.1")
api.login(user="Admin", password="zabbix")
  • Pass all parameters at once when creating an instance of ZabbixAPI; in this case, there is no need to subsequently call login():
from zabbix_utils import ZabbixAPI

api = ZabbixAPI(
    url="127.0.0.1",
    user="Admin",
    password="zabbix"
)

The ZabbixAPI class supports working with various Zabbix versions, automatically checking the API version during initialization. You can also work with the Zabbix API version as an object as follows:

from zabbix_utils import ZabbixAPI

api = ZabbixAPI()

# ZabbixAPI version field
ver = api.version
print(type(ver).__name__, ver) # APIVersion 6.0.24

# Method to get ZabbixAPI version
ver = api.api_version()
print(type(ver).__name__, ver) # APIVersion 6.0.24

# Additional methods
print(ver.major)    # 6.0
print(ver.minor)    # 24
print(ver.is_lts()) # True

As a result, you will get an APIVersion object with major and minor fields returning the respective major and minor parts of the current version, as well as the is_lts() method, which returns true if the current version is LTS (Long Term Support) and false otherwise. The APIVersion object can also be compared to a version represented as a string or a float number:

# Version comparison
print(ver < 6.4)      # True
print(ver != 6.0)     # False
print(ver != "6.0.5") # True

If the account and password (or starting from Zabbix 5.4 – token instead of login/password) are not set as environment variable values or during the initialization of ZabbixAPI, then it is necessary to call the login() method for authentication:

from zabbix_utils import ZabbixAPI

api = ZabbixAPI(url="127.0.0.1")
api.login(token="xxxxxxxx")

After authentication, you can make any API requests described for all supported versions in the Zabbix documentation.

The format for calling API methods looks like this:

api_instance.zabbix_object.method(parameters)

For example:

api.host.get()

After completing all the necessary API requests, it’s necessary to execute logout() if authentication was done using login and password:

api.logout()

More examples of usage can be found here.

Sending Values to Zabbix Server/Proxy

There is often a need to send values to Zabbix Trapper. For this purpose, the zabbix_sender utility is provided. However, if your service or script sending this data is written in Python, calling an external utility may not be very convenient. Therefore, we have developed the Sender, which will help you send values to Zabbix server or proxy one by one or in groups. To work with Sender, you need to import it as follows:

from zabbix_utils import Sender

After that, you can send a single value:

from zabbix_utils import Sender

sender = Sender(server='127.0.0.1', port=10051)
resp = sender.send_value('example_host', 'example.key', 50, 1702511920)

Alternatively, you can put them into a group for simultaneous sending, for which you need to additionally import ItemValue:

from zabbix_utils import ItemValue, Sender


items = [
    ItemValue('host1', 'item.key1', 10),
    ItemValue('host1', 'item.key2', 'Test value'),
    ItemValue('host2', 'item.key1', -1, 1702511920),
    ItemValue('host3', 'item.key1', '{"msg":"Test value"}'),
    ItemValue('host2', 'item.key1', 0, 1702511920, 100)
]

sender = Sender('127.0.0.1', 10051)
response = sender.send(items)

For cases when you need to send more values than Zabbix Trapper can accept at one time, there is an option for fragmented sending, i.e. sequential sending in separate fragments (chunks). By default, the chunk size is set to 250 values. For example, if 400 values are passed to the send() method, they will be sent in two stages: 250 values will be sent first, and the remaining 150 values will be sent after a response is received. The chunk size can be changed – simply specify your value for the chunk_size parameter when initializing Sender:

from zabbix_utils import ItemValue, Sender


items = [
    ItemValue('host1', 'item.key1', 10),
    ItemValue('host1', 'item.key2', 'Test value'),
    ItemValue('host2', 'item.key1', -1, 1702511920),
    ItemValue('host3', 'item.key1', '{"msg":"Test value"}'),
    ItemValue('host2', 'item.key1', 0, 1702511920, 100)
]

sender = Sender('127.0.0.1', 10051, chunk_size=2)
response = sender.send(items)

In the example above, the chunk size is set to 2. So, 5 values passed will be sent in three requests of two, two, and one value, respectively.

If your server has multiple network interfaces, and values need to be sent from a specific one, the Sender provides the option to specify a source_ip for the sent values:

from zabbix_utils import Sender

sender = Sender(
    server='zabbix.example.local',
    port=10051,
    source_ip='10.10.7.1'
)
resp = sender.send_value('example_host', 'example.key', 50, 1702511920)

Sender also supports reading connection parameters from the Zabbix Agent/Agent2 configuration file. To do this, set the use_config flag, after which it is not necessary to pass connection parameters when creating an instance of Sender:

from zabbix_utils import Sender

sender = Sender(
    use_config=True,
    config_path='/etc/zabbix/zabbix_agent2.conf'
)
response = sender.send_value('example_host', 'example.key', 50, 1702511920)

Since the Zabbix Agent/Agent2 configuration file can specify one or even several Zabbix clusters, each consisting of multiple Zabbix server instances, Sender will send data to the first available server of each cluster specified in the ServerActive parameter of the configuration file. If the ServerActive parameter is not specified in the Zabbix Agent/Agent2 configuration file, the server address from the Server parameter is used with the standard Zabbix Trapper port – 10051.
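
For illustration, a ServerActive entry describing two clusters might look like this (node names are placeholders) – nodes within a cluster are separated by semicolons, while independent clusters are separated by commas:

ServerActive=zabbix.cluster1.node1;zabbix.cluster1.node2:10051,zabbix.cluster2.node1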

By default, Sender returns the aggregated result of sending across all clusters. But it is possible to get more detailed information about the results of sending for each chunk and each cluster:

print(response)
# {"processed": 2, "failed": 0, "total": 2, "time": "0.000108", "chunk": 2}

if response.failed == 0:
    print(f"Value sent successfully in {response.time}")
else:
    print(response.details)
    # {
    #     127.0.0.1:10051: [
    #         {
    #             "processed": 1,
    #             "failed": 0,
    #             "total": 1,
    #             "time": "0.000051",
    #             "chunk": 1
    #         }
    #     ],
    #     zabbix.example.local:10051: [
    #         {
    #             "processed": 1,
    #             "failed": 0,
    #             "total": 1,
    #             "time": "0.000057",
    #             "chunk": 1
    #         }
    #     ]
    # }
    for node, chunks in response.details.items():
        for resp in chunks:
            print(f"processed {resp.processed} of {resp.total} at {node.address}:{node.port}")
            # processed 1 of 1 at 127.0.0.1:10051
            # processed 1 of 1 at zabbix.example.local:10051

More usage examples can be found here.

Getting Values from Zabbix Agent/Agent2 by Item Key

Sometimes it can also be useful to retrieve values directly from the Zabbix Agent. To assist with this task, zabbix_utils provides the Getter. It performs the same function as the zabbix_get utility, allowing you to work natively within Python code. Getter is straightforward to use: just import it, create an instance by passing the Zabbix Agent’s address and port, and then call the get() method, providing the item key for the value you want to retrieve:

from zabbix_utils import Getter

agent = Getter('10.8.54.32', 10050)
resp = agent.get('system.uname')

In cases where your server has multiple network interfaces, and requests need to be sent from a specific one, you can specify the source_ip for the Agent connection:

from zabbix_utils import Getter

agent = Getter(
    host='zabbix.example.local',
    port=10050,
    source_ip='10.10.7.1'
)
resp = agent.get('system.uname')

The response from the Zabbix Agent will be processed by the library and returned as an object of the AgentResponse class:

print(resp)
# {
#     "error": null,
#     "raw": "Linux zabbix_server 5.15.0-3.60.5.1.el9uek.x86_64",
#     "value": "Linux zabbix_server 5.15.0-3.60.5.1.el9uek.x86_64"
# }

print(resp.error)
# None

print(resp.value)
# Linux zabbix_server 5.15.0-3.60.5.1.el9uek.x86_64
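
Since the response object exposes both the error and value fields, a simple sanity check before using the result might look like this sketch:

from zabbix_utils import Getter

agent = Getter('10.8.54.32', 10050)
resp = agent.get('system.uname')

# Use the value only if the agent did not return an error
if resp.error is None:
    print('Agent returned:', resp.value)
else:
    print('Agent error:', resp.error)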

More usage examples can be found here.

Conclusions

The zabbix_utils library for Python allows you to take full advantage of monitoring using Zabbix, without limiting yourself to the integrations available out of the box. It can be valuable for both DevOps and SRE engineers, as well as Python developers looking to implement monitoring support for their system using Zabbix.

In the next article, we will thoroughly explore integration with an external service using this library to demonstrate the capabilities of zabbix_utils more comprehensively.

Questions

Q: Which Agent versions are supported for Getter?

A: Supported versions of Zabbix Agents are the same as Zabbix API versions, as specified in the readme file. Our goal is to create a library with full support for all Zabbix components of the same version.

Q: Does Getter support Agent encryption?

A: Encryption support is not yet built into Sender and Getter, but you can create your own wrapper for both using third-party libraries.

from zabbix_utils import Sender

def psk_wrapper(sock, tls):
    # ...
    # Implementation of a TLS PSK wrapper for the socket
    # should go here; it must return the wrapped socket
    # ...
    return sock

sender = Sender(
    server='zabbix.example.local',
    port=10051,
    socket_wrapper=psk_wrapper
)

More examples can be found here.

Q: Is it possible to set a timeout value for Getter?

A: The response timeout value can be set for Getter, as well as for ZabbixAPI and Sender. In all cases, the timeout defines how long to wait for responses to requests.

from zabbix_utils import Sender, Getter

# Example of setting a timeout for Sender
sender = Sender(server='127.0.0.1', port=10051, timeout=30)

# Example of setting a timeout for Getter
agent = Getter(host='127.0.0.1', port=10050, timeout=30)
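
The timeout parameter is accepted by ZabbixAPI in the same way – a sketch with a placeholder address:

from zabbix_utils import ZabbixAPI

# Example of setting a timeout for ZabbixAPI
api = ZabbixAPI(url="127.0.0.1", timeout=30)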

Q: Is parallel (asynchronous) mode supported?

A: Currently, the library does not include asynchronous classes and methods, but we plan to develop asynchronous versions of ZabbixAPI and Sender.

Q: Is it possible to specify multiple servers when sending through Sender without specifying a configuration file (for working with an HA cluster)?

A: Yes, it’s possible in the following way:

from zabbix_utils import Sender


zabbix_clusters = [
    [
        'zabbix.cluster1.node1',
        'zabbix.cluster1.node2:10051'
    ],
    [
        'zabbix.cluster2.node1:10051',
        'zabbix.cluster2.node2:20051',
        'zabbix.cluster2.node3'
    ]
]

sender = Sender(clusters=zabbix_clusters)
response = sender.send_value('example_host', 'example.key', 10, 1702511922)

print(response)
# {"processed": 2, "failed": 0, "total": 2, "time": "0.000103", "chunk": 2}

print(response.details)
# {
#     "zabbix.cluster1.node1:10051": [
#         {
#             "processed": 1,
#             "failed": 0,
#             "total": 1,
#             "time": "0.000050",
#             "chunk": 1
#         }
#     ],
#     "zabbix.cluster2.node2:20051": [
#         {
#             "processed": 1,
#             "failed": 0,
#             "total": 1,
#             "time": "0.000053",
#             "chunk": 1
#         }
#     ]
# }

The post Introducing zabbix_utils – the official Python library for Zabbix API appeared first on Zabbix Blog.

Monitoring AWS Cost Explorer with Zabbix

Post Syndicated from evgenii.gordymov original https://blog.zabbix.com/monitoring-aws-cost-explorer-with-zabbix/26159/

Cloud-based service platforms are becoming increasingly popular, and one of the most widely adopted is Amazon Web Services (AWS). Like many cloud services, AWS charges usage fees, which has led many users to look for a breakdown of exactly which services they are being charged for. Fortunately, Zabbix has an AWS Cost Explorer over HTTP template that’s ready to run right out of the box and provides a list of daily and monthly service costs.

Why monitor AWS costs?

While AWS stores cost data for only 12 months, Zabbix allows data to be stored for up to 25 years (see the Keep lost resources period parameter). This parameter is vital for storing data longer than 12 months, since cost data removed from AWS will result in the corresponding discovered items becoming lost. Therefore, if we want to keep our cost data for longer than 12 months, the Keep lost resources period parameter needs to be adjusted accordingly.

In addition, Zabbix can show fees charged for services that are no longer available, such as test deployments for a cluster in the us-east-1 region.

Preparing to monitor in a few easy steps

I recommend visiting zabbix.com/integrations/aws for any sources referred to in this tutorial. You can also find a link to all Zabbix templates there. For the most part, we will follow the steps outlined in the readme.

The AWS Cost Explorer by HTTP template can use key-based or role-based authorization. Set the {$AWS.AUTH_TYPE} macro accordingly; possible values are role_base and access_key (used by default).

If you are using access key-based authorization, be sure to set the {$AWS.ACCESS.KEY.ID} and {$AWS.SECRET.ACCESS.KEY} macros.

Create or use an existing access key, which you can get from Identity and Access Management (IAM).

Accessing the IAM Console:
  • Log in to your AWS Management Console.
  • Navigate to the IAM service.
  • Next, go to the Users tab and select the required user.

Creating an access key for monitoring:
  • After that, go to the Security credentials tab.
  • Select Create access key.

Add the following required permissions to your Zabbix IAM policy in order to collect metrics.

Defining Permissions through IAM Policies:
  • Access the “Policies” section within IAM.
  • Click on “Create Policy”.
  • Select the JSON tab to define policy permissions.
  • Provide a meaningful name and description for the policy.
  • Structure the policy document based on the permissions needed for the AWS Cost Explorer by HTTP template.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ce:GetDimensionValues",
                "ce:GetCostAndUsage"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

Attaching Policies to the User:
  • Go back to the “Users” section within IAM.
  • Click on “Add Permissions”.
  • Search for and select the policy created in the previous step.
  • Review the attached policies to ensure they align with the intended permissions for the user.

Creating a host in Zabbix

Now, let’s create a host that will represent the metrics available via the Cost Explorer API:

  • Create a Host Group in which to put hosts related to AWS. For this example, let’s create one that we’ll call AWS Cloud.
  • Head to the host page under Configuration and click Create host. Give this host the name AWS Cost. We’ll also assign this host to the AWS Cloud group we created and attach the AWS Cost Explorer by HTTP template.
  • Click the Macros tab and select Inherited and host macros. In this case, we need to change the first two macros. The first, {$AWS.ACCESS.KEY.ID}, should be set to the received access key ID. For the second, {$AWS.SECRET.ACCESS.KEY}, the secret access key should be set to the previously retrieved value from the Security credentials tab.
  • Click Add. The AWS Cost Explorer template has three low-level discovery rules that use master items. The low-level discovery rules will start discovering resources only after the master item has collected the required data.

    The best practice is to always test such items for data. Don’t forget to fill in the required macros!

    In the AWS daily costs by services and AWS monthly costs by services discovery rules, you can filter by service, which can be specified in macros.
  • Let’s execute the master items to collect the required data on-demand. Choose both items to get data and click Execute now.

    In a few minutes, you should receive cost metrics by services for 12 months plus the current month, as well as by day. If you want the information to be stored longer, remember to change the Keep lost resources period in the LLD rule, as it’s set to 30 days by default.

Good luck!

The post Monitoring AWS Cost Explorer with Zabbix appeared first on Zabbix Blog.

Leveraging Telegram as a User Interface for Zabbix with Sven Putteneers

Post Syndicated from Michael Kammer original https://blog.zabbix.com/leveraging-telegram-as-a-user-interface-for-zabbix-with-sven-putteneers/26604/

One of the highlights of any Zabbix Summit is the diverse set of fascinating speakers who show up each year. With that in mind, we’re continuing our series of interviews with Summit 2023 speakers by sitting down with 7 to 7 CEO Sven Putteneers, who has been gracious enough to fill us in on his work, his Zabbix experience, and the details of integrating Zabbix with the popular messaging app Telegram.

Please tell us a bit about yourself and your work.

I’m a 43-year-old computer geek with a strong interest in Open Source software and programming. I work for a big telco, where I help to build and maintain a cloud telephony platform. Apart from that, I administer our Zabbix installation, which we have set up as a multi-tenant platform and which we use to provide a “monitoring as a service” offering to our customers. Aside from my day job, I founded a company (7 to 7), where Zabbix consultancy is part of the services I offer.

How long have you been using Zabbix? What kind of daily Zabbix tasks are you involved in at your company?

I have been using Zabbix daily for the last 7+ years. My daily tasks are configuring new customers and hosts, maintaining our Zabbix deployment, programming integrations with external systems, and thinking about how we can improve our Zabbix installation in any way.

For my side job, I give Zabbix-related advice, help customers solve tough Zabbix problems (e.g. “how to monitor this exotic device”) and roll out Zabbix installations from scratch to a fully functional monitoring platform. I also offer monitoring in an MSP-like fashion for customers who want their infrastructure monitored but don’t want to deploy their own Zabbix.

Can you give us a sneak peek at what we can expect to hear during your Zabbix Summit speech?

I’ll describe how a Telegram bot that is connected to your Zabbix deployment can turn your Telegram app into a small but powerful user interface for your Zabbix. This means not just using Telegram as a one-directional notification mechanism (like email), but allowing you to query your Zabbix, perform actions (like acknowledging alarms), fetch graphs, etc.

Why did you decide to write a bot for Telegram as opposed to other popular messaging systems? Was it simply a matter of preference or were technical considerations taken into account?

Preference was only a factor after we made a first selection based purely on technical criteria. Some of the criteria were that it had to be multi-platform (our Zabbix users are on Android as well as on iOS and use Mac, Windows, and Linux on their computers), that it preferably had built-in platform support for bots, and that it offered the option of sending more than just plain text.

Telegram ticked all the boxes and has some nice extra features that were not hard requirements (like in-place updating of already sent messages instead of just being able to send new messages), so we decided to go with that.

Have you written any other custom integrations for Zabbix?

Yes, but most of these are for internal systems (like our in-house CRM) and are not really interesting outside the company.

I have written some integrations for monitoring (i.e. UserParameter scripts and external scripts, plus the accompanying templates) to monitor systems that have an API that is difficult to query with vanilla Zabbix. An example would be TLS certificate monitoring that is a bit more in-depth than what Agent2 currently offers.

I have also fixed some bugs in a script called mib2zabbix, which as the name suggests takes an MIB file as input and outputs a template file that can be imported in Zabbix.
There are a few features I still want to add to the script (like generating the new walk items for efficient SNMP value gathering), but the script as it is has saved us a tremendous amount of time already.

One fun (and useful!) thing I wrote is a script that uses zabbix_sender to feed data to a “fake host” representing all the things we monitor (think of it as an item per monitored host). Because our Zabbix is multi-tenant, we have some naming conventions and rules around mandatory hostgroup membership to control where alarms for a specific host (or trigger) get sent and when.

I did a talk about how we use hostgroups to control action logic at the Zabbix Benelux Conference 2020 (and gave the same talk again at the Online Meetup in September 2020). The “fake host” alerts us when a host doesn’t conform to our conventions or is misconfigured, so that alarm notifications would be prevented from being sent, for example.

The cool thing is that since this is all based on discovery rules and a script that pulls everything from Zabbix through the API and then feeds data about potential problems back through zabbix_sender, new hosts are picked up automatically and are checked for compliance with our conventions within minutes after they’ve been added.

 

The post Leveraging Telegram as a User Interface for Zabbix with Sven Putteneers appeared first on Zabbix Blog.

How to write a webhook for Zabbix

Post Syndicated from Andrey Biba original https://blog.zabbix.com/how-to-write-a-webhook-for-zabbix/25298/

As you know, a picture is worth a thousand words. Therefore, I would like to share the process of creating a webhook from scratch. In this article, we will walk through the creation process step by step – starting with studying the target service with which Zabbix will integrate and finishing with tests for sending events from Zabbix. Although it may seem complicated, writing your own integrations is not so difficult.

Preparation

First, we need to decide what we want to see as a result of the webhook. In most cases, the services to which we will send events are divided into 2 types:

  • Messengers to which you can send messages. For example, Telegram, Slack, Discord, etc.
  • Service Desks where you can open, close, and update tickets. For example, Jira, Redmine, ServiceNow, etc.

In both cases, the principle of creating a webhook does not differ – the only difference is in the complexity of one type compared to the other.

In this article, I will describe the process of creating a webhook for messengers – and specifically for Line messenger.

After we have decided on the type, we need to find out whether this service supports the possibility of API requests and, if it does, what is required for this. Usually, all the services you want to integrate Zabbix with have somewhat detailed documentation about the API methods they support. By the way, Zabbix also has its own API, which is documented in detail.

After we are done studying the Line documentation, we find out that messages are sent using the POST method to the https://api.line.me/v2/bot/message/push endpoint, using the Line bot token in the request header for authorization and passing a specially formatted JSON in the request body with the content of the message. Confused? No problem. Let’s take a closer look.

HTTP requests

The operation of the API is based on HTTP requests, which are executed with parameters provided by the developers of this API.

Several types of HTTP requests are used more often than others:

  • GET – is perhaps the most common one, which all of us encounter on a daily basis. This request only involves getting data. For example, your browser used a GET request to fetch the article you are currently reading from the web server.
  • POST – is a request that sends data to a resource. This is exactly the case when we want to pass something to the service using API requests.
  • PUT – is much less common than the previous two, but no less important. This request replaces the values in a resource.

These are not all HTTP request methods, but these three will suffice for a general introduction.

We are done with methods. Let’s move on to the endpoint.

An endpoint is a permanent address of a resource via which we transfer, receive, or change data. In this case, https://api.line.me/v2/bot/message/push is the endpoint that accepts POST requests to send messages.

So, the method and the endpoint are defined. What’s next?

Generally, any HTTP request consists of:

  1. URL
  2. Method
  3. Headers
  4. Body
HTTP request structure

We have already dealt with the first two, but the headers and the request body remain.

Headers usually contain service information that allows you to process a request correctly. For example, the Content-Type: application/json header implies that our request body should be interpreted as a json object. Also, quite often, authorization information is passed in the headers. As in the case of Line, the Authorization: Bearer {channel access token} header contains the authorization token of the bot on behalf of which messages will be sent.

The request body usually contains the information we want to pass on to the service. In our case, this will be the subject and body of the event in Zabbix.

Checking the service API

The documentation is good, but it is necessary to check that everything we read works exactly as documented. It is not uncommon for a service to be developed faster than the documentation can keep up with. So field testing never hurts. Ruling out unexpected behavior will significantly reduce the time spent searching for problems.

I recommend using Postman to work with API requests – a handy tool that saves time. But for this article, we will use cURL due to its prevalence and ease of use.

I will not describe the process of creating the Line Bot API token because this is not directly related to the article. However, for those interested in this process, I will leave a link here.

As we have already found out, the request type will be POST, the access point URL is https://api.line.me/v2/bot/message/push, and additional headers must be passed: Content-Type: application/json which specifies the type of data to be sent (in our case it is JSON) and Authorization: Bearer {token value}. And the messages themselves are in JSON format. For example, I used 2 messages – “Hello, world1” and “Hello, world2”. As a result, I got the following query:
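
Sketched out with the token and recipient ID as placeholders, the query looks like this:

curl -X POST 'https://api.line.me/v2/bot/message/push' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <channel access token>' \
-d '{
    "to": "<user or group ID>",
    "messages": [
        {"type": "text", "text": "Hello, world1"},
        {"type": "text", "text": "Hello, world2"}
    ]
}'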

After executing the request, we got the expected result: the 2 messages from the request body arrived in the messenger.

Excellent! So half of the work has already been done: there is a ready-made request that works in manual mode and successfully sends messages to Line. The only thing left is to put the necessary information in the right places and automate the process using JS and Zabbix.

Integration with Zabbix

After successfully completing the tests, go to Zabbix, create a new notification method in the Administration section, select the webhook type, and name it Line.

For webhook integrations with external services, Zabbix uses the built-in Duktape JavaScript engine. Parameters are passed to the script, which is used to build the logic of the webhook. As a result of the script, tags can be returned that will be assigned to the event. This is usually necessary in case of integration with service desks, in order to be able to update the status of tickets.

Let’s take a closer look at the webhook setup interface.

The Media type section contains the general settings for the new media type:

  • Name – Name of the media type.
  • Type – The type of media type. There are 4 types: email, SMS, webhook, and script.
  • Parameters – This is a list of variables passed to the code. All necessary data can be passed through parameters: event id, event type, trigger severity, event source, etc. You can specify macros and text values in parameters. The parameters are passed as a JSON string, accessible through the built-in variable value.
  • Script – JS script describing the logic of the webhook.
  • Timeout – The time after which the script will be terminated.
  • Process tags   – If this option is enabled, the webhook will support generating tags for events sent using this hook.
  • Include event menu entry – This option makes the Menu Entry Name and Menu Entry URL fields available for use.
  • Menu entry name – The text displayed in the event dropdown menu for the Menu entry URL submitted using this hook.
  • Menu entry URL – A link to an external resource in the event menu.
  • Description – A text field that contains a description of the notification method.
  • Enabled – An option that allows enabling or disabling the media type.

The Message templates section contains templates that are used by webhook to send alerts. Each template contains:

  • Message type – The event type to which the message will apply. For example, Problem – when the trigger fires and Problem recovery – when the problem is resolved.
  • Subject  – The headline of the message.
  • Message – A message template that contains useful information about the event. For example, event time, date, event name, host name, etc.

The Options section contains additional options:

  • Concurrent sessions – The number of concurrent sessions to send an alert.
  • Attempts – The number of retries in case of send failure.
  • Attempt interval  – The frequency of attempts to send an alert.

When writing your own webhook, you can take an existing one as a basis – Zabbix has more than thirty ready-made webhook solutions of varying complexity. All basic functions are usually repeated from hook to hook with little or no change at all, as are the parameters passed to them.

Let’s set the following parameters:
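
In practice, the parameters map to macros along these lines (a sketch – the parameter names match those used in the script later in this article, and the bot_token value is a placeholder):

alert_message       = {ALERT.MESSAGE}
alert_subject       = {ALERT.SUBJECT}
bot_token           = <place your Line bot token here>
event_id            = {EVENT.ID}
event_nseverity     = {EVENT.NSEVERITY}
event_source        = {EVENT.SOURCE}
event_update_status = {EVENT.UPDATE.STATUS}
event_value         = {EVENT.VALUE}
send_to             = {ALERT.SENDTO}
trigger_description = {TRIGGER.DESCRIPTION}
trigger_id          = {TRIGGER.ID}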

It is convenient to set parameter values with macros. A macro is a variable in Zabbix that contains a specific value. Macros allow you to optimize and automate your work. They can be used in various places, such as triggers, filters, alerts, and so on.

A little more about each macro separately in order to understand why each of them is needed:

  • {ALERT.SUBJECT} – The subject of the event message. This value is taken from the Subject field of the corresponding Message template type.
  • {ALERT.MESSAGE} – The event message body. This value is taken from the Message field of the corresponding Message template type.
  • {EVENT.ID} – The event ID in Zabbix. Can be used for generating a link to the event.
  • {EVENT.NSEVERITY} – The numerical definition of the event’s severity from 0-5. We will use this to vary the message for different severities.
  • {EVENT.SOURCE} – The event source. Needed to handle events correctly. In most cases, we are interested in triggers; this corresponds to source value 0.
  • {EVENT.UPDATE.STATUS} – Returns 1 if it is an update event, for example, in case of acknowledge operations or a change in severity.
  • {EVENT.VALUE} – The event state: 0 for recovery and 1 for a problem.
  • {ALERT.SENDTO} – The field from the media type assigned to the user. It returns the ID of the Line user or group to which the message should be sent.
  • {TRIGGER.DESCRIPTION} – A macro that will be expanded if the event source is a trigger. Returns the description of the trigger.
  • {TRIGGER.ID} – The trigger ID. Required to generate a link to the event in Zabbix.

Webhooks can use other macros if needed. A list of all macros can be viewed on the documentation page. Be careful – not all macros can be used in webhooks.

Writing the script

Before writing the script, let’s define the main things the webhook will need to do:

  • describe the logic for sending messages
  • handle possible errors
  • provide logging for debugging

I will not describe the entire code in order not to repeat the same type of blocks and concentrate only on important aspects.

To send messages, let’s write a function that will accept messages and params variables. We got the following function:

function sendMessage(messages, params) {
    // Declaring variables
    var response,
        request = new HttpRequest();

    // Adding the required headers to the request
    request.addHeader('Content-Type: application/json');
    request.addHeader('Authorization: Bearer ' + params.bot_token);

    // Forming the request that will send the message
    response = request.post('https://api.line.me/v2/bot/message/push', JSON.stringify({
        "to": params.send_to,
        "messages": messages
    }));

    // If the response is different from 200 (OK), return an error with the content of the response
    if (request.getStatus() !== 200) {
        throw "API request failed: " + response;
    }
}

Of course, this is not a reference function, and depending on the requirements for the request may differ. There may be other required headers and a different request body. In some cases, it may be necessary to add an additional step to obtain authorization data through another API request.

In this case, the request to send a message returns an empty {} object, so it makes no sense to return it from the function. But for example, when sending a message to Telegram, an object with data about this message is returned. If you pass this data to tags, you can write logic that will change the already sent message – for example, in case of closing or updating the problem.

Now let’s describe a function that will accept webhook parameters and validate their values. In the example, we will not describe all the conditions because they are of the same type:

function validateParams(params) {
    // Checking that the bot_token parameter is a string and not empty
    if (typeof params.bot_token !== 'string' || params.bot_token.trim() === '') {
        throw 'Field "bot_token" cannot be empty';
    }

    // Checking that the event_source parameter is only a number from 0-3
    if ([0, 1, 2, 3].indexOf(parseInt(params.event_source)) === -1) {
        throw 'Incorrect "event_source" parameter given: "' + params.event_source + '".\nMust be 0-3.';
    }

    // If the event is of type "Discovery" or "Autoregistration", set event_value to 1,
    // which means "Problem", and we will process these events the same way as problems
    if (params.event_source === '1' || params.event_source === '2') {
        params.event_value = '1';
    }

    ...

    // Checking that trigger_id is a number and not equal to zero
    if (isNaN(params.trigger_id) && params.event_source === '0') {
        throw 'field "trigger_id" is not a number';
    }
}

As you can see from the code, in most cases these are simple checks that allow you to avoid errors associated with the input data. Validation is necessary because there is no guarantee that the expected value will be in the parameter.

The main block of code is placed inside the try…catch block in order to correctly handle errors:

try {
    // Declaring the params variable and writing the webhook parameters to it
    var params = JSON.parse(value);

    // Calling the validation function and passing parameters to it for verification
    validateParams(params);

    // If the event is a trigger and it is in the problem status, compose the message body
    if (params.event_source === '0' && params.event_value === '1') {
        var line_message = [
            {
                "type": "text",
                "text": params.alert_subject + '\n\n' +
                    params.alert_message + '\n' + params.trigger_description
            }
        ];
    }

    ...

    // Sending a composed message
    sendMessage(line_message, params);

    // Returning OK so that the webhook understands that the script has completed with OK status
    return 'OK';
}
catch (err) {
    // Adding a log function so in case of problems you can see the error in the Zabbix server console
    Zabbix.log(4, '[ Line Webhook ] Line notification failed : ' + err);

    // In case of an error, return it from the webhook
    throw 'Line notification failed : ' + err;
}

Here we assign parameter values to the params variable, then validate them using the validateParams() function, describe the main conditions for generating a message, and send this message to the messenger. At the same time, the try…catch block allows you to catch all errors, log them to Zabbix and return them in a readable form to the user in the web interface.

For writing webhooks in Zabbix, there is a guideline dedicated to this topic. Please read this information because it will help you write better code and avoid common mistakes.

Testing

After we’ve finished with the webhook script, it’s time to test how our code works. To do this, Zabbix provides a function for sending test messages. Go to Administration –> Media types, find Line, and click on the Test button opposite it. In the window that appears, fill in all the fields with the necessary data and press the Test button. Check the messenger to see that the message arrived with the data we specified in the test.

Ready-made Line integration can be found in the Zabbix git repository and in all recent Zabbix instance builds.

Troubleshooting

Of course, everything in the article looks as though I did it on the first attempt and did not encounter a single error or problem. Naturally, this is not the case in practice. Work with each new product includes research and development. How can you catch errors and, most importantly, understand the problem?

Well, as I wrote earlier – read the documentation and test all requests before writing code. At this stage, it is easiest to catch all the problems. The response to the HTTP request will explicitly describe the error. For example, if you make a mistake in the request body and send an object with incorrect values, the service will return the body with an error description and the response status 400 (Bad request).

There are several options for debugging in case of errors that may occur when writing a webhook script:

  • Focus on the errors displayed when the notification method is executed – for example, if you mistyped or used the wrong name for a function or variable.
  • Include logging in the code to display service information. For example, while you are at the script development stage, the result of a function can be logged using the Zabbix.log() function. Zabbix supports 6 debug levels (0-5), which can be set in this function. Usually, webhooks use level 4, which contains information for debugging.
  • Use the zabbix_js utility. You can pass a file with a script and parameters to it, as shown in the example below. You can read more about it here.
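
A minimal sketch of such a debugging run – the file name and parameter values are placeholders, and -l 4 sets the debug log level:

zabbix_js -s line.js \
-p '{"bot_token": "<token>", "send_to": "<user ID>", "event_source": "0", "event_value": "1", "event_nseverity": "3", "event_update_status": "0", "event_id": "1", "trigger_id": "42", "trigger_description": "", "alert_subject": "Test", "alert_message": "Test message"}' \
-l 4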

Conclusion

I hope this article has helped you better understand how webhooks work in Zabbix and highlighted the basic steps for creating, diagnosing, and preparing to write your integration. The Zabbix community is constantly adding custom templates and media types. I expect that after reading this article, more people will be interested in creating their own webhooks and sharing them with the community. We appreciate any contribution to the development and expansion of the base of integration solutions.

Questions

Q: I don’t know JS, but I know other languages. Is native support of other languages planned in Zabbix, such as Python?

A: For now, there are no such plans.

Q: Are there any restrictions with writing a JS script for a webhook?

A: Yes, there are. The built-in Duktape engine is used to execute the code, and it does not have all the functionality that is available in the latest JS releases. Therefore, I recommend that you read the documentation of this engine and the built-in objects to learn more about the available methods.

What’s Up, Home? – Zabbix the Storyteller

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-zabbix-the-storyteller/24629/

Can you create fairy tales with Zabbix? Of course, you can! By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana and do some weird experiments with them. Welcome to my blog about this project.

We all know how Zabbix has a never-ending list of integrations for just about everything — need to integrate it with OpsGenie, PagerDuty, Teams, Slack, or something else? No problem, there’s probably a ready-made integration for that already.

But, based on questions I’ve received over the years at work, not everyone realizes how utterly powerful the alert message templating engine is, letting you create custom messages with the help of built-in macros and, of course, the user macros you can define. The default Zabbix HTML e-mail message template is very compact in its format and, for me, easy to read, but years ago someone at work told me that the alerts were not easy for him to follow.

What I did back then was create an alert template of my own, which describes events in a slightly different format. Here’s a short snippet from those alerts.

Fairy tale time!

Now that we have our almost-three-month-old baby at home, I’m using her as the perfect excuse to make Zabbix alerts read like fairy tales. You know the drill: your kiddo wants to hear yet another story before he or she falls asleep, and you have already run out of fresh stories to read.

What if your Zabbix could generate fairy tales for you? Well, not really, but at least the following would make the stories a bit more amusing to you and very confusing to your kid.

Let’s first create a new media type via Zabbix Administration –> Media types. For this, I just cloned the default HTML e-mail media type and gave it a name.

And then, my fantastic story template looks like this:
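
As a purely illustrative sketch built from standard Zabbix macros (the exact wording is up to you), such a story template could read something like:

Subject: A bedtime story about {HOST.NAME}

Once upon a time, on {EVENT.DATE} at {EVENT.TIME}, a little host called
{HOST.NAME} ran into trouble: {EVENT.NAME}. The severity of this adventure
was {EVENT.SEVERITY}, and the wise monitoring system left a note for any
brave sysadmin passing by: {ITEM.DESCRIPTION}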

Add the template to user media type

Next, to actually receive these alerts, you need to configure your user profile and in its media types add the new media type.

Using the template

Getting the new template into use is easy; just go to Zabbix Configuration –> Actions and create a new trigger action with whatever conditions you like.

And then on Operations tab make Zabbix send the alerts via your new fairy tale media type.

The alert e-mail

So this is what the e-mail looks like.

Now go and add some CSS, pictures, or whatever you like to your stories. And, unlike me, perhaps change the {ITEM.DESCRIPTION} macro to also contain some instructions on what to do with the alert – in our custom alerts at work, I tend to add hints about how to resolve the issue.

I have been working at Forcepoint since 2014 and I would have many stories to tell you about all these years. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Zabbix the Storyteller appeared first on Zabbix Blog.

Integrating Zabbix with your existing IT solutions by Aleksandrs Larionovs / Zabbix Summit Online 2021

Post Syndicated from Alexandrs Larionovs original https://blog.zabbix.com/integrating-zabbix-with-your-existing-it-solutions-by-aleksandrs-larionovs-zabbix-summit-online-2021/18671/

Zabbix 6.0 LTS comes packed with many new integrations and templates. As the total number of templates and integrations grows, we plan to make major improvements to our template repository. This will greatly improve the workflow of developing a new community template, submitting template pull requests, following the development process of a template, and much more.

The full recording of the speech is available on the official Zabbix Youtube channel.

What are integrations?

By definition, integrations are connections between systems and applications that work together as a whole to share information and data. In Zabbix, we separate integrations into two types:

  • Out-of-the-box templates
    • Templates contain items, triggers, graphs, and other entities that allow you to monitor any device, service, application, and other monitoring endpoints.
  • Webhook integrations
    • Webhooks allow you to send the information from Zabbix to any sort of 3rd party system like ITSM or messaging applications.

Where to find the latest integrations?

If you’re installing a fresh Zabbix instance, it will come pre-packaged with the latest set of official templates and webhooks. If you wish to download and import the integrations manually, you can find them in:

How do you benefit from integrations?

What are the benefits of using the official Zabbix integrations for you as a Zabbix user?

  • Monitor your endpoints in a tested and optimized manner
  • Monitor a large variety of 3rd party systems
  • Official templates come with a guarantee of quality and official support
  • Official templates provide quick deployment of monitoring logic for your monitoring endpoints

Having supported integrations can also be important from the standpoint of a vendor. Having an official Zabbix integration can provide multiple benefits for vendors:

  • Supported vendors get the ability to engage the Zabbix community
  • Collaborate with Zabbix and receive additional recognition
  • Provide higher quality monitoring support by collaborating in the development of the integration
  • Set higher monitoring standards for your product
  • Improve your public image

What if I wish to request a new official integration?

There are multiple ways how you can approach a scenario when there is no official integration for a specific product:

  • Option 1:
    • Create a ZBXNEXT ticket with your request at support.zabbix.com
    • Ask your friends and colleagues to vote on a request
    • If there is community interest in the integration – we will develop it!
  • Option 2:
    • Sponsor the development of the integration by contacting the Zabbix Sales department
  • Option 3:
    • Look for an unofficial community template

How are the official Zabbix integrations made?

Our first step in developing a new template is prioritizing which template we should tackle first. This includes looking at the current IT landscape and deciding which of the components are widely considered essential services. Then, we look at the community requests and the number of votes behind each request. We also continuously work on improving the existing templates and evaluating the priority of the requested improvements. There is also the option to sponsor an integration by contacting our Sales department.

After we prioritize our list, we proceed to development – we do research, talk to community experts, create focus groups and proceed with the development. Once the development is finished, we proceed with validation – this includes internal reviews from the Integrations team as well as giving our colleagues from the Support team the chance to take a look at the newly developed template. Community feedback is also important for us – the feedback regarding the template can be left either in the comments under the specific feature request or in our Suggestions and Feedback forum section.

Community templates

While we pride ourselves on the rapid growth of our integrations team and the pace at which we have been delivering new official templates and integrations, we, of course, can’t instantly develop a template for every vendor and device out there. This is where our community has been of great help to us.

Moving from share.zabbix.com

Previously, if you were to find Zabbix lacking a template or an integration that you require, you would visit share.zabbix.com and look for a community solution to your problem. At this point, we have decided to migrate away from share.zabbix.com since, over the years, we have found it lacking in multiple aspects:

  • The website was hard to navigate
  • The underlying platform was outdated
  • Once uploaded, the templates were rarely maintained
  • It was hard to collaborate on templates
  • Lacking standardization – each template could use different naming conventions or metric collection approaches
  • Zombie templates – templates developed for old versions but never updated along the way.

Community template repository

The new go-to place for community templates will be our Community template repository. The repository will serve as a platform for collaboration. Once uploaded, the templates can be maintained by either the original developer or other community members. The platform will be moderated by the Integrations team, who will also provide feedback on the community templates to ensure a higher quality of the templates and additional validation. The documentation will also be generated for the community templates, containing the contents of each template – this way, you can have a transparent look at the template before downloading it.

The process

Let’s go through the whole process of developing and maintaining a community template.

1. Collaborate

  • For existing templates – you can start a discussion on Github issues about problems or potential improvements to the template.
  • You can create a new bug report related to the template
  • For older community templates – you will be able to take over the maintenance of this template and continue improving it down the line
  • Develop and publish a new template or an integration

When it comes to community development, Zabbix does provide an official set of guidelines that the developers can follow to ensure that the template uses the official best practice conventions and approaches:

  • Naming conventions for templates and template entities
  • A set of best practices helps the community developers to simplify the decision making regarding best template and integration development approaches
  • Practical and ethical framework for template and integration development
  • This enables the community developers to follow the same set of development guidelines as the official Zabbix Integrations team

2. Pull request

Once you have decided to make a new integration or modify an existing integration, create a pull request describing the proposed changes and be ready to participate in a discussion related to the proposed set of changes. We will review and moderate the discussion and assist you in ensuring a smooth template development process.

3. Validation

The validation process consists of two parts. First, we will review if the template is valid, can be imported in Zabbix, and is usable by our community members. Next, the Integrations team will check if the template is developed according to the Zabbix standard and suggest any necessary changes.

4. Merge

If the validation process has been passed, we will accept the pull request and merge the integration into the repository. Afterward, the readme file for the integration will be generated. Finally, the template will be added to the template directory, and it will be available for everyone to see and download.

The Templates directory will have a similar structure to what you are used to in share.zabbix.com, so you should feel right at home here. We tried to check and migrate each and every valid template, but if you don’t find your template in the list – simply submit a pull request to us, and we will review it.

The generated Readme file will contain a list of entities included in the template – such as User macros, Template linkages, Discovery rules, Items, and more.

Where can I find the repository?

The Zabbix community template repository can be found in https://github.com/zabbix/community-templates. All you need to participate is a Github account and the willingness to participate in the integration development process.

To report an issue with the template repository or the official integrations, feel free to use our support portal: https://support.zabbix.com/

  • To report a bug – open a ZBX ticket
  • To suggest an improvement – open a ZBXNEXT ticket

For any discussions related to the Zabbix integrations, you can use (but are not limited to) the following channels:

Questions

Q: What is the workflow for our users that wish for Zabbix to develop new integration for a specific product?

A: We’re actively listening to our community. The best way to voice your request is to look for an existing feature request on https://support.zabbix.com/ and vote on it. If there is no such feature request – feel free to create it and vote on it. Thirdly, you can always contact our sales department and use our integration services to have the required template developed for you.

 

Q: Where can I see which integrations are currently developed or scheduled for the next release?

A: You can track the development process of a template by following the particular feature request in our support portal. You can also take a look at the official Zabbix roadmap and see what features, fixes, and integrations are currently scheduled for the upcoming Zabbix versions.

 

The post Integrating Zabbix with your existing IT solutions by Aleksandrs Larionovs / Zabbix Summit Online 2021 appeared first on Zabbix Blog.

Maintaining Zabbix API token via JavaScript

Post Syndicated from Aigars Kadiķis original https://blog.zabbix.com/maintaining-zabbix-api-token-via-javascript/15561/

In this blog post, we will talk about maintaining and storing the Zabbix API session key in an automated fashion. The post builds upon the “Close problem automatically via Zabbix API” blog post and can be used as extra configuration for that particular use case. It also shares a great example of synthetic monitoring by way of JavaScript preprocessing – how to emulate a scenario in an automated fashion and get alerted in case of any problems.

Prerequisites

First, let us create the Zabbix API user and user macros where we will store our username, password, Zabbix URL and the API session key.

1) Open “Administration” => “Users”. Create a new user ‘api’ with password ‘zabbix’. At the permissions tab set User Type “Zabbix Super Admin”.

2) Go to “Administration” => “General” => “Macros”. Configure base characteristics:

      {$Z_API_PHP} = http://demo.zabbix.demo/api_jsonrpc.php
     {$Z_API_USER} = api
 {$Z_API_PASSWORD} = zabbix
{$Z_API_SESSIONID} =

It’s OK to leave {$Z_API_SESSIONID} empty for now.

3) Let’s check if the Zabbix backend server can reach the Zabbix frontend server. Make sure that you are logged into the Zabbix backend server by looking up the zabbix_server process:

ps auxww | grep "[z]abbix_server.*conf"

Ensure that we can reach the Zabbix frontend by curling the Zabbix frontend server from the Zabbix backend server:

curl -s "http://demo.zabbix.demo/index.php" | grep Zabbix

4) Download template “Check and repair Zabbix API token” and import it in your instance.

5) Create a new host with an Agent interface and link the template. The IP address of the host does not matter. The template will use an agentless check to do the monitoring, it will use an “HTTP agent” item.

How it works

Our goal for today is to figure out a way to keep the Zabbix API authentication token up to date in a user macro. This way we can reuse the macro repeatedly for items, action operations and scripts that require for us to use the Zabbix API. We need to ensure that even if the token changes, the macro gets automatically updated with the new token value! Let’s try and understand each step of the underlying workflow required for us to achieve this goal.

The first component of our workflow is the “Validate session key raw” item. This is an HTTP agent item that performs a POST request with an arbitrary method – proxy.get in this case, but we could have used ANY other method. We simply want to check if an arbitrary Zabbix API call can be executed with the current {$Z_API_SESSIONID} macro value.

The second part of the workflow is the “Repair session key” dependent item. This item utilizes the JavaScript preprocessing step with custom JavaScript code to check the values obtained by the previous item and generate a new authentication token if that is necessary.

The third item – “Status”, is another dependent item that uses regular expression preprocessing steps to check for different error messages or status codes in the value of the “Validate session key raw” item. Most of the triggers defined in this template will react to the values obtained by this item.

 

Below you can see the full underlying workflow:

Code-wise, the magic is implemented with the following JavaScript code snippet:

if (value.match(/Session terminated/)) {

    var req = new CurlHttpRequest();

    // Zabbix API
    var json_rpc = '{$Z_API_PHP}';

    // lib curl header
    req.AddHeader('Content-Type: application/json');

    // First request to get the authentication token
    var token = JSON.parse(req.Post(json_rpc,
        '{"jsonrpc":"2.0","method":"user.login","params":{"user":"{$Z_API_USER}","password":"{$Z_API_PASSWORD}"},"id":1,"auth":null}'
    ));

    // If authentication was unsuccessful
    if (token.error) {
        // Login name or password is incorrect
        return 32500;
    }
    else {
        // Update the macro

        // Get the global macro ID
        // We cannot write a native Zabbix macro here because it would be automatically expanded;
        // must use a workaround to distinguish a dollar sign from the actual macro name and merge with '+'
        var id = JSON.parse(req.Post(json_rpc,
            '{"jsonrpc":"2.0","method":"usermacro.get","params":{"output":["globalmacroid"],"globalmacro":true,"filter":{"macro":"{$'+'Z_API_SESSIONID'+'}"}},"auth":"'+token.result+'","id":1}'
        )).result[0].globalmacroid;

        // Overwrite the macro value with the fresh token obtained above
        var overwrite = JSON.parse(req.Post(json_rpc,
            '{"jsonrpc":"2.0","method":"usermacro.updateglobal","params":{"globalmacroid":"'+id+'","value":"'+token.result+'"},"auth":"'+token.result+'","id":1}'
        ));

        // Return the id (an integer) of the macro which was updated
        return overwrite.result.globalmacroids[0];
    }

} else {
    return 0;
}

Throughout the JavaScript code, we are extracting a value from one step and using that as an input for the next step.

After the data has been collected, the values are analyzed by 4 triggers. 3 out of these 4 triggers report a misconfiguration problem that requires human investigation. We also have a trigger titled “repairing Zabbix API session key in background” – the main trigger, which indicates that the token has expired and is being repaired automatically.

And that’s it – now any integration that requires a Zabbix API authentication token can receive the current token value by referencing the user macro that we created in the article. We’ve ensured that the token is always going to stay up to date and in case of any issues, we will receive an alert from Zabbix!

Getting your notifications via Signal

Post Syndicated from Brian van Baekel original https://blog.zabbix.com/getting-your-notifications-via-signal/13286/

Recently, WhatsApp pushed their new privacy policy, announcing that they would share more data with Facebook and causing an exodus to other platforms. Signal is one of the more popular destinations, along with Telegram. Both are great alternatives, but I prefer Signal due to it being open source, its end-to-end encryption, and last but not least, its business model (living on donations instead of selling your data).

Typically, Zabbix sends notifications to whatever medium you’ve chosen when a problem is detected. We all know the email messages, the various webhook integrations with Slack/MS Teams/Jira, etc., perhaps even some text message integrations and such. Now, if we’re migrating to Signal, we suddenly have access to the Signal API and can utilize it to receive Zabbix notifications. Nice!

There is only one drawback. You need a separate phone number to register against Signal. Don’t use your own phone number – unless you want to lose the ability to use Signal ;(

There are various ways to get a phone number for this purpose:

  • Use the phone number of your current SMS gateway
  • Use the company phone number (many cloud PBXs provide an option to receive the verification message)
  • Purchase a prepaid phone number.
  • Use a service like Twilio

You just need to receive one text message; the rest of the communication will go via the internet.

Time to get rid of Whatsapp and move to Signal! But… How to use Signal to get your notifications?

Signal-cli

Although we could build everything from scratch and talk to the Signal API directly, there is a nice implementation available that lets us talk to Signal within a few minutes: signal-cli.

The signal-cli GitHub page is very comprehensive when it comes to getting it installed, but of course it does not cover anything Zabbix-related.

Configuration tasks

For this guide, we’re using:

  • CentOS 8
  • Zabbix 5.2

signal-cli installation

First, let’s install the signal-cli utility. In order to do so, we need to resolve its Java dependency by installing the OpenJDK package:

dnf -y install java-11-openjdk-devel.x86_64

After this installation, we should be good to continue with signal-cli itself. According to its installation guide, the following should be sufficient:

export VERSION="0.7.3"
wget https://github.com/AsamK/signal-cli/releases/download/v"${VERSION}"/signal-cli-"${VERSION}".tar.gz
sudo tar xf signal-cli-"${VERSION}".tar.gz -C /opt
sudo ln -sf /opt/signal-cli-"${VERSION}"/bin/signal-cli /usr/local/bin/

At the time of writing, the most recent version is 0.7.3, and that’s what we’re installing here. If a newer version has been released by the time you read this, of course you should install that instead!
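
If everything is in place, the symlinked binary should now be available on your PATH. A quick sanity check (the output should simply report the version you just installed):

signal-cli --version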

Now we should be able to register ourselves with Signal.

signal-cli registration

Since these commands will be executed by Zabbix, we must make sure the registration is done as the correct user on the Zabbix server; otherwise, you will get the following error message:

Unregistered user error

(ERROR App – User +19293771253 is not registered.)

In order to prevent this error, let’s do the authentication against Signal as the zabbix user:

Important: The USERNAME (your phone number) must include the country calling code, i.e. the number must start with a “+” sign. You must also replace everything between the < > in the following examples with your own values.

runuser -l zabbix -c 'signal-cli -u <NUMBER> register'

Now, check for incoming text messages on this phone number. Within seconds you should receive a 6-digit code in the following format: xxx-xxx

Once you’ve received the text, it’s time to complete the registration:

runuser -l zabbix -c 'signal-cli -u <NUMBER> verify <CODE>'

Since we’re running these commands as a different user, we won’t see their output. Let’s just test!

Sending messages from the command line is straightforward:

runuser -l zabbix -c 'signal-cli -u <NUMBER> send -m <MESSAGE> <RECEIVER NUMBER>'

You will see the message id as output. Simply ignore it, since it’s not relevant at this point.

Within seconds:

It works! Great.

So now we’ve got this part covered, it’s time to get the AlertScript set up before heading to the frontend.

Zabbix AlertScript setup

OK, so now we’ve got the registration done, we need to make sure Zabbix can utilise it. In order to do so, we use a very old method. Although it would’ve made more sense to use the webhook option, that would mean building the communication with Signal from scratch.

So AlertScripts it is. In your terminal/SSH session with the Zabbix server, open a new file with this command: vi /usr/lib/zabbix/alertscripts/signal.sh and insert the following contents:

#!/bin/bash
# $1 = message text ({ALERT.MESSAGE}), $2 = receiver number ({ALERT.SENDTO})
signal-cli -u '+19293771253' send -m "$1" "$2"

That’s right – essentially just two lines. After saving the file, change the owner and set the permissions:

chown zabbix:zabbix /usr/lib/zabbix/alertscripts/signal.sh
chmod 700 /usr/lib/zabbix/alertscripts/signal.sh
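
Optionally, verify the script end to end by invoking it exactly the way Zabbix will, as the zabbix user (a quick sanity check; replace the receiver number with your own):

runuser -l zabbix -c '/usr/lib/zabbix/alertscripts/signal.sh "Test from Zabbix" "<RECEIVER NUMBER>"'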

Now it’s time to move to our frontend.

Zabbix mediatype configuration

In the frontend, go to Administration -> Media types and create a new media type:

Signal Mediatype

Name: Signal
Type: Script
Script name: signal.sh
Script parameters:
    {ALERT.MESSAGE}
    {ALERT.SENDTO}

Don’t forget to configure some message templates as well (the second tab in the media type configuration). You can just use the defaults by clicking ‘Add’.

Zabbix media configuration

Next step: navigate to Administration -> Users (or just open your own user profile) and create a new media:

new-media

Type: Signal
Sendto: <your number>
When active / severity as per needs

Important: The ‘Send to’ number must include the country calling code, i.e. the number must start with a “+” sign.

We’re almost there – just some configuration on the actions left.

Zabbix action configuration

This step is only needed if you are currently sending notifications via a specific media type. If you configured the ‘Send only to’ option as ‘- All -’, there is nothing to change, and it will work straight away!

Otherwise, navigate to Configuration -> Actions, find the action you want to change, and in the Operations, Recovery operations, and Update operations change the ‘Send only to’ option to ‘Signal’.

Save your action, and it’s time to test: generate a problem to confirm that the implementation actually works.

Wrap up

That’s it. By now you should have a working implementation where Zabbix sends notifications to Signal. The setup is extremely straightforward and easy to configure. Nevertheless, if you need help getting this going, we (Opensource ICT Solutions) offer consultancy services as well, and are more than happy to help you out!

 

Aranet — a wireless IoT sensor platform

Post Syndicated from Toms Reksna original https://blog.zabbix.com/aranet-a-wireless-iot-sensor-platform/12953/

Aranet is a wireless IoT sensor platform. Wherever you need to measure anything – temperature, air quality, light, or any other physical parameter – Aranet’s main mission is to deliver these measurements simply, easily, and above all – wirelessly. Aranet is manufactured by SAF Tehnika — a company with over 20 years of experience in the telecom industry, microwave radio, and test & measurement equipment manufacturing, and a certified partner of Zabbix.

Contents

I. Aranet wireless sensor network (1:41)
II. Aranet in retail (5:53)
III. Indoor air quality and COVID19 (8:20)
IV. Partnership with Aranet (12:11)
V. Questions & Answers (13:52)

Aranet wireless sensor network

Aranet is a wireless sensor network consisting of the Aranet PRO base station and sensors transmitting data to it over the 868 MHz frequency in Europe and the 920 MHz frequency in the United States. These frequencies allow for a very large distance between the sensor and the base station — up to 3 kilometers line of sight and a couple of hundred meters indoors.

Sensors are intended to measure different environmental parameters. You can connect up to a hundred sensors per base station. Sensors can be configured to send data at different intervals — once every one, two, five, or ten minutes. They are also very power-efficient: on a regular AA battery, they will last up to 10 years.

Aranet ecosystem

Aranet technology is based on the LoRa physical layer. We have built our proprietary LPWAN protocol with XXTEA encryption on top of LoRa to improve the radio parameters and increase battery life.

Aranet technology

The brain of the system is the Aranet PRO base station – a radio receiver with a built-in web server housing the SensorHUB software and internal memory for local data storage. It is made with ease of use in mind: you can connect directly to the base station with your PC, laptop, or phone over Ethernet or Wi-Fi, open up your web browser, and access the free SensorHUB software. You don’t even need to install anything.

The Aranet PRO base station offers a lot of features, such as graphing and data export. In addition, its internal memory allows for storing 10 years of readings even if the internet goes down.

The sensors send their data to the base station. Several such base stations can be aggregated into the Aranet Cloud solution, which collects data from all of them and allows you to access the data from anywhere.

Aranet architecture

With over 20 years of experience in radio manufacturing, we believe that we’ve created one of the best-in-class systems in terms of wireless connectivity with our base stations and in-house cloud. However, we are looking for strategic partnerships where the Aranet system can become part of a larger system. This brought us to the partnership with Zabbix, so that we can integrate our cloud solution with the Zabbix monitoring system.

Aranet philosophy

Aranet Example Use Cases

Aranet for retail

Rimi

Aranet has been actively used in retail, for instance by Rimi — a chain of Latvian supermarkets — where 6,500 sensors have been installed in 125 stores. Aranet is planning to expand to the other Baltic states.

Aranet equipment is primarily used for:

  • Monitoring of freezer temperatures. Earlier, the temperature had to be checked manually — somebody had to walk around with a legal pad to make sure the freezers were working properly and to report to the relevant government agencies. Aranet allowed for automating this process.
  • Alarms in case of malfunction. In the case of a malfunction, an alarm can be sent to avoid product spoilage.
  • Predictive maintenance, including machine learning algorithms that locate anomalies in the defrost cycle temperature data, helping to prevent breakages.

Aranet in retail

Benefits

  • Even the largest supermarkets (8,800 m² / 94,000 ft²) can be covered with a single base station.
  • Manual data collection can be avoided.
  • Freezer operating costs can be optimized (20% energy cost reduction).
  • Product spoilage can be avoided.
  • Litigation/fines for slip-and-fall accidents can be avoided.

Aranet for indoor air quality and COVID-19 safety

Due to COVID-19, many governments and health agencies, including the Centers for Disease Control and Prevention in the United States, have changed their guidelines and now state that COVID-19 can be transmitted through aerosols. Aerosols are small droplets that are released when we cough, sneeze, or talk. As these droplets are small — about five microns — they linger in the air for nine or more minutes. That means you don’t even have to be in contact with an infected person to catch the disease.

This calls for proper ventilation practices, which can decrease the time aerosol particles stay in the air tenfold.

Aranet4 PRO – a wireless COVID-19 safety network

One way to estimate whether ventilation is sufficient is to measure CO₂. The amount of CO₂ (air exhaled by other people) in a given room is a measure of the risk of contagion. The recommended air circulation per person is 60 m³/h, which corresponds to a CO₂ concentration of approximately 800 ppm — almost twice the outdoor value.
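
As a rough back-of-the-envelope check of that figure (assuming, for illustration, that an adult at rest exhales about 18 liters of CO₂ per hour and that outdoor air sits at roughly 400 ppm):

18 L/h ÷ 60,000 L/h ≈ 300 ppm added per person
400 ppm (outdoor) + 300 ppm ≈ 700–800 ppm steady state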

Aranet wireless CO2 sensor

Aranet offers a wireless CO₂ sensor that also measures temperature, relative humidity, and air pressure. It comes with a useful Bluetooth application, which allows you to easily get the latest readings. But most importantly, this sensor can generate alerts: whenever the value exceeds a critical level, you get a visual indication — green, yellow, or red — as well as an audible alert prompting you to manually increase the ventilation, for instance by opening windows.

Lately, these sensors have been gaining popularity, especially in schools, universities, and offices as they offer:

  • Simple plug-and-play setup with the Aranet base station.
  • Up-to-date information available locally on each sensor, as well as centrally on the base station, so that you can see which spaces need additional ventilation.
  • Free software – graphs, reports, centralized alarms.
  • Control of airborne COVID-19 spread in schools, offices, and other indoor facilities.

Partnership with Aranet

Aranet wireless network can be implemented in many other industries:

  • Horticulture,
  • Livestock,
  • Building Management,
  • Warehousing,
  • Data Centres,
  • Pharma,
  • Medical,
  • Retail.

So, Aranet is looking for integration and distribution partners who are interested in wireless monitoring. Details of the partnership are available on aranet.com or can be requested from [email protected].

Aranet’s core value is the wisdom of Lord Kelvin: “you can only improve what you can measure”. So, we strive to deliver these measurements in the easiest and most straightforward way possible, so that you can improve whatever you wish.

Questions & Answers

Question. Is there some way or some benefit to integrating Aranet with Zabbix?

Answer. Both Aranet and Zabbix have many diverse applications, so adding physical parameters on top of monitored network parameters would help. For data centers or retail stores, in addition to alerts about something being wrong with the network, alarms about something physical happening would be useful. It might be useful to be alerted, for instance, if it’s too hot.

Question. Is it possible to switch your sensors to LoRaWAN so that we can use existing networks?

Answer. We have decided to have our proprietary network based on the LoRa physical layer with proprietary communication software. This decision was made for several reasons:

  • Ease of use — the main thing our customers actually value. The Aranet system can be set up in a couple of minutes — you just place the sensors and they start working. With LoRaWAN you have the base station from one provider and sensors from another, so it takes time to make the system work. Aranet works out of the box.
  • Improved battery life due to our protocol.
  • Improved security, as with Aranet you control the whole ecosystem from the base station to the sensors. In addition, with Aranet you won’t face dependencies, password management, or communication issues.
  • A private network.

Question. Are there any electrical sensors — volts, amps, power, or anything like that?

Answer. We can monitor voltage, but this is mostly done via third-party integrations. We have pulse output sensors, which you can connect to electricity meters, for instance. So, this can be monitored.

 

Let me subscribe – Zabbix masters IoT topics

Post Syndicated from Wolfgang Alper original https://blog.zabbix.com/let-me-subscribe-zabbix-masters-iot-topics/12710/

Zabbix 5.2 supports two important protocols used in the world of the Internet of Things — MQTT and Modbus. Now we can benefit from the newest Zabbix features and integrate Zabbix network monitoring into the world of IoT.

Contents

I. What is MQTT? (3:32:13)
II. MQTT and Zabbix integration (3:39:48)

1. MQTT setup (3:40:03)
2. Node-RED (3:42:12)
3. Splitting data (3:45:45)
4. Publishing data from Zabbix (3:52:23)

III. Questions & Answers (3:55:42)

What is MQTT?

MQTT — the Message Queuing Telemetry Transport — was invented in 1999 and designed to be bandwidth-efficient, lightweight, and thus battery-efficient. Initially, it was developed for monitoring oil pipelines.

It is a well-defined ISO standard — ISO/IEC 20922 — and it is getting increasingly adopted due to its suitability for the Internet of Things (IoT), sensor networks, home automation, machine-to-machine (M2M), and mobile applications. MQTT usually uses TCP/IP as the transport protocol, over port 1883, and can be encrypted using the TLS transport mechanism, with 8883 as the default port.

There is a variation of MQTT available — MQTT-SN (MQTT for Sensor Networks), used for non-TCP/IP networks such as Zigbee (an IEEE 802.15.4 radio-based protocol) or other UDP/Bluetooth-based implementations.

There are two types of network entities available: ‘Message broker‘ and ‘Clients‘.

MQTT supports three Quality-of-Service levels:

— 0: At most once – “fire and forget”, where you might or might not receive the message.
— 1: At least once – the message can be sent/delivered multiple times.
— 2: Exactly once – the safest and slowest service.
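
For instance, with the Mosquitto command-line clients the QoS level is simply chosen per publish or subscribe. A sketch, assuming a broker on localhost (topic and value are made-up examples):

mosquitto_pub -h localhost -t office/temperature -m "21.5" -q 2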

MQTT is based on a ‘publish’ / ‘subscribe-to-topic’ mechanism:

1. Publish/subscribe.

Publish/subscribe pattern

The MQTT Message Broker consumes messages published by the clients (on the left) using two-level ‘Topics‘ (such as, for instance, office temperature, office humidity, or indoor air quality). The clients on the right side act as subscribers, receiving any information published on a particular topic. Every time a message is published to the broker, the broker notifies all of the subscribers (Clients 3 and 4), and these clients get the sensor value.

2. Combined publishing/subscribing

Combined pub/sub

A client can be a subscriber and a publisher at the same time. So, in this example, Client 1 is publishing a brightness value and Client 3 has a subscription to that brightness value. Client 3 may decide that a brightness of, for instance, 1,500 is too low, so it can publish a new message to the topic ‘office’ to let the light controller know that it should increase the brightness, while Client 2 — the light controller with a subscription — may change the brightness level on receipt of the message.

3. Wildcard subscriptions

+ = single-level, # = multi-level

Wildcards in MQTT are easy. You can subscribe, for instance, to the topic ‘office/+/brightness’, where the ‘+’ sign can be substituted by any topic name. Since the ‘+’ sign substitutes just one level in our topic, it is a single-level wildcard, while the pound sign (‘#’) works as a multi-level wildcard.
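
Both wildcard types are easy to try out with mosquitto_sub, again assuming a local broker and the example topics above:

mosquitto_sub -h localhost -t 'office/+/brightness'
mosquitto_sub -h localhost -t 'office/#'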

MQTT features:

  • Clients can publish and subscribe to one or more topics.
  • One client can publish and subscribe at the same time.
  • Clients can subscribe using single/multi-level wildcards.
  • Clients can choose between three different QoS levels.

MQTT advanced features:

  • Messages can be retained by the broker for new subscribers. So, if a new client subscribes to a particular topic, then the publisher can mark its messages as ‘Retained‘ so that the new subscriber gets the last retained message.
  • Clients can provide a “last will and testament” that will be published by the broker when the client “dies”.

MQTT and Zabbix integration

MQTT setup

Integrating Zabbix into the multiple-client mix

Integrated structure:

1. Four sensors:

    • Server room.
    • Training room.
    • Sales room.
    • Support room.

2. Four different topics:

    • office
    • bielefeld (home town)
    • serverroom
    • trainingroom

3. Mosquitto MQTT Message Broker, which is one of the well-known message brokers.

So, the sensors are publishing the data to the Mosquitto Message Broker, where any MQTT-enabled device or system can pick those values up. In our case, it’s the home automation system, which subscribes to the Message Broker and has access to all of the values published by the sensors.

Thanks to MQTT support in Zabbix 5.2, Zabbix can now subscribe to the Mosquitto Message Broker and immediately get access to all of the sensors publishing their values to the broker.

As we can have multiple subscribers, multiple clients can subscribe to one topic on the Message Broker. So both the home automation system and Zabbix can subscribe to the same values published to the Message Broker.

Node-RED

Sooner or later, you will need Node-RED — a flow-based programming tool that allows you to subscribe to the broker and publish messages to it, acting as a client, as well as to work with the data.

Data Processing in Node-RED

This setup might be useful if, for instance, some Zabbix trigger fires and passes the information over to MQTT, publishing the outcome of the trigger to the Message Broker, where it can then be picked up by the home automation system.

Zabbix publishes data to the broker

You can also have two different Zabbix instances subscribing to the same Message Broker, acting simply as two different clients.

Multiple Zabbix servers sharing the same data

Node-RED:

    • Construction kit for the Internet of Things and home automation.
    • Acts as MQTT client able to publish and subscribe.
    • Flow-based tool for visual programming based on Node.js.
    • Graphical web editor.
    • Supports input, processing, and output nodes.
    • Extensible with plugins and custom function nodes.

Different types of nodes can be connected in the workspace: for instance, nodes subscribing to a topic and transforming the data, or nodes writing the data to a log file.

Node-RED

We can get the data from the sensors as a raw JSON string containing 20–30 metrics in one payload, and view it as a parsed JSON object in the Node-RED Debug node, with easy-to-read metrics such as temperature, humidity, WiFi quality, indoor air quality, etc.

Multiple metrics in one message

Splitting data

We have different options for data splitting available:

  • Split on the MQTT level: use Node-RED to split the metrics and then publish them in their own topics (a good setup when other clients can handle only a single metric at a time).

Splitting data in Node-RED

 

  • Split on the Zabbix level: set up an MQTT item as a master item and use Zabbix JSON preprocessing with corresponding dependent items. It’s more efficient, because Zabbix needs only one subscription.

We can get the data with the brand-new mqtt.get item in Zabbix 5.2:

— Requires Agent 2.
— Requires active checks. Every time a client publishes a message to the topic, the broker must push that data to us, so mqtt.get has to listen to the subscription and get notified when new data comes in — and that only works with active checks.
— The broker URL defaults to localhost.
— User name and password are optional.
— Uses the Eclipse Paho Go client library.

One Zabbix agent in active mode sending data to multiple hosts

For our setup with four sensors — in the Sales Room, Server Room, Support Room, and Training Room — we need four hosts in Zabbix. Traditionally, you would need four different agents to handle them, as each agent running in active mode needs its own hostname configured. In our setup, however, we need just one agent, installed once and handling the different hosts by subscribing to multiple topics.

This is possible because of the new feature of running active agent checks from multiple hosts, which is now available in Zabbix 5.2. All we need (see the sketch after this list) is:

—  to set up hosts in Zabbix (as usual),
—  to define our MQTT items (as usual),
—  to set up just one agent with all of the hostnames the agent should be responsible for (the new feature),
—  to set up the master item, which is our mqtt.get item,
—  to define several dependent items and preprocessing for each of the dependent items, and
—  to start preprocessing with JSONPath.
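
A minimal sketch of these pieces (the hostnames, broker URL, and topic are examples for this setup, not defaults):

# /etc/zabbix/zabbix_agent2.conf - one active agent answering for all four hosts
ServerActive=zabbix.example.com
Hostname=SalesRoom,ServerRoom,SupportRoom,TrainingRoom

# Master item on host "ServerRoom" (type: Zabbix agent (active)):
#   mqtt.get[tcp://localhost:1883,office/serverroom]
# Dependent item "Temperature", preprocessing step 1 (JSONPath):
#   $.temperature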

NOTE. Every time the master item gets an update, so do all of the dependent items in Zabbix.

Master item and dependent items

  • Combine both methods: let other clients subscribe to single metrics using their specific topics, but publish all sensor data for Zabbix in one topic.

NOTE. Data received and displayed on the dashboard is based on the MQTT item, the payload, and the MQTT messages received from the Message Broker.

Sensor data dashboard

Publishing data from Zabbix

Now you want to publish the outcome of a Zabbix trigger so that it can be consumed by other MQTT-enabled devices. Any MQTT subscriber, like Node-RED, should receive the alert. To do that, you need:

  • to define a new media type that sends problems to the topic, i.e. passes the data over to the Message Broker;
  • to use the command-line tool for Mosquitto — mosquitto_pub — which allows us to publish the message:
#!/bin/sh
# $1 = alert message, $2 = topic suffix used to group the problems
mosquitto_pub -h yourbroker.io -m "$1" -t "zabbix/problems/$2"

  • to make sure that the data is sent to the broker in the right format. In this case, we use JSON as the transport format and define a JSON problem template and a JSON problem recovery template.
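
A rough sketch of what such a JSON problem template might look like (the field names are arbitrary choices; the macros are standard Zabbix ones):

{
  "status": "PROBLEM",
  "host": "{HOST.NAME}",
  "severity": "{EVENT.SEVERITY}",
  "problem": "{EVENT.NAME}",
  "time": "{EVENT.DATE} {EVENT.TIME}"
}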

 

In Zabbix, you’ll see the problem, the actions, and the media type firing via the subscription, and in the Debug node of Node-RED, you’ll see that the data has been received from Zabbix.

Zabbix problems published via MQTT

This model with Node-RED can be used to create sophisticated setups. For instance, you can take the data from Zabbix, forward it via actions and media types, preprocess it in Node-RED, and transform the data in many different ways.

IoT devices and other subscribers can react to issues detected by Zabbix using Node-RED

NOTE. To try out the MQTT setup and the new Zabbix features, you can use the live broker available on the IntelliTrend GitHub account, getting data from Zabbix sensors every 10 minutes. You’ll also find templates, access data, the address of the broker, etc. — everything you need to get started.

Questions & Answers

Question. If the MQTT client gets overloaded due to high message frequency on subscribe topics, how will that affect Zabbix?

Answer. Either the broker might be overloaded, or the Zabbix agent might not be able to keep up. As for the broker, quality-of-service levels are defined in the MQTT protocol — more specifically, QoS level 2 guarantees delivery. So if QoS 2 is used, messages won’t get lost but will be resent in case of failure.

Question. What else would you expect from the IoT side of Zabbix? What kind of protocols or things would get added? 

Answer. There’s always room for improvement. You can use third-party tools, custom scripts, or any other tools to enhance Zabbix — I’m sure that supporting user parameters and scripts was an excellent design decision. But the official support of MQTT is a quantum leap for Zabbix, because it opens the door to most IoT infrastructures, as MQTT is the most important IoT protocol so far.

For instance, one of our customers is monitoring the infrastructure of electricity generators, production systems, etc. They use their own monitoring platform, provided by the vendors. The request was to integrate alerts and some metrics into Zabbix. The customer’s monitoring platform used the MQTT protocol, so all we had to do was connect it to Zabbix using external scripts and the MQTT support.