All posts by Arturs Lontons

Improving SNMP monitoring performance with bulk SNMP data collection

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/improving-snmp-monitoring-performance-with-bulk-snmp-data-collection/27231/

Zabbix 6.4 introduced major improvements to SNMP monitoring, especially when it comes to collecting large numbers of metrics from a single device. This is done by utilizing master-dependent item logic and combining it with low-level discovery and newly introduced preprocessing rules. This blog post will cover the drawbacks of the legacy SNMP monitoring approach, the benefits of the new approach, and the steps required to deploy bulk SNMP metric collection.

The legacy SNMP monitoring approach – potential pitfalls

Let’s take a look at the SNMP monitoring logic that all of us are used to. For our example here, we will look at network interface discovery on a network switch.

To start off, we create a low-level discovery rule. In the discovery rule, we specify which low-level discovery macros are collected from which OIDs, creating multiple low-level discovery macro and OID pairs. Zabbix then walks through the list of indexes at the end of the specified OIDs and matches the collected values to the corresponding low-level discovery macros. The discovered indexes themselves are automatically matched to the {#SNMPINDEX} low-level discovery macro.

An example of a regular SNMP discovery key:

discovery[{#IFOPERSTATUS},1.3.6.1.2.1.2.2.1.8,{#IFADMINSTATUS},1.3.6.1.2.1.2.2.1.7,{#IFALIAS},1.3.6.1.2.1.31.1.1.1.18,{#IFNAME},1.3.6.1.2.1.31.1.1.1.1,{#IFDESCR},1.3.6.1.2.1.2.2.1.2,{#IFTYPE},1.3.6.1.2.1.2.2.1.3]
An example of a regular SNMP low-level discovery rule

The collected low-level discovery data will look something like this:

[
    {
        "{#SNMPINDEX}":"3",
        "{#IFOPERSTATUS}":"2",
        "{#IFADMINSTATUS}":"1",
        "{#IFALIAS}":"",
        "{#IFNAME}":"3",
        "{#IFDESCR}":"3",
        "{#IFTYPE}":"6"
    },
    {
        "{#SNMPINDEX}":"4",
        "{#IFOPERSTATUS}":"2",
        "{#IFADMINSTATUS}":"1",
        "{#IFALIAS}":"",
        "{#IFNAME}":"4",
        "{#IFDESCR}":"4",
        "{#IFTYPE}":"6"
    },
    {
        "{#SNMPINDEX}":"5",
        "{#IFOPERSTATUS}":"2",
        "{#IFADMINSTATUS}":"1",
        "{#IFALIAS}":"",
        "{#IFNAME}":"5",
        "{#IFDESCR}":"5",
        "{#IFTYPE}":"6"
    },
    {
        "{#SNMPINDEX}":"6",
        "{#IFOPERSTATUS}":"2",
        "{#IFADMINSTATUS}":"1",
        "{#IFALIAS}":"",
        "{#IFNAME}":"6",
        "{#IFDESCR}":"6",
        "{#IFTYPE}":"6"
    },
    {
        "{#SNMPINDEX}":"7",
        "{#IFOPERSTATUS}":"2",
        "{#IFADMINSTATUS}":"1",
        "{#IFALIAS}":"",
        "{#IFNAME}":"7",
        "{#IFDESCR}":"7",
        "{#IFTYPE}":"6"
    }
]

Once the low-level discovery rule is created, we move on to creating item prototypes.

Items created from this item prototype will collect metrics from the OIDs specified in the SNMP OID field, with one item created per index (the {#SNMPINDEX} macro) collected by the low-level discovery rule. Note that the item type is SNMP agent – each discovered and created item will be a regular SNMP item, polling the device and collecting metrics based on its OID.

Now, imagine we have hundreds of interfaces and we’re polling a variety of metrics for each of them at a rapid interval. If our device has older or slower hardware, it may simply be unable to process that many requests. To resolve this, a better way to collect SNMP metrics is required.

Bulk data collection with master and dependent items

Before we move on to the improved SNMP metric collection approach, we need to first take a look at how master-dependent item bulk metric collection and low-level discovery logic are implemented in Zabbix.

  • First, we create a master item, which collects both the metrics and low-level discovery information in a single go.
  • Next, we create a low-level discovery rule of type dependent item and point at the master item created in the previous step. At this point, we need to either ensure that the data collected by the master item is formatted in JSON or convert the data to JSON by using preprocessing.
  • Once we have ensured that our data is JSON-formatted, we can use the LLD macros tab to populate our low-level discovery macro values via JSONPath. Note: SNMP low-level discovery with bulk metric collection uses a different approach here, designed specifically for SNMP checks – it is covered in the next section.
  • Finally, we create item prototypes of type dependent item and once again point them at the master item created in the first step (Remember – our master item contains not only low-level discovery information, but also all of the required metrics). Here we use JSONPath preprocessing together with low-level discovery macros to specify which values should be collected. Remember that low-level discovery macros will be resolved to their values for each of the items created from the item prototype.

Improving SNMP monitoring performance with bulk metric collection

The SNMP bulk metric collection and discovery logic is very similar to what is discussed in the previous section, but it is more tailored to SNMP nuances.

Here, to avoid excessive polling, a new walk[] item has been introduced. The item uses GetBulk requests for SNMPv2 and v3 interfaces and GetNext requests for SNMPv1 interfaces to collect SNMP data. GetBulk requests perform much better by design: a single GetBulk request retrieves the values of all instances at the end of the OID tree in one go, instead of an individual Get request being issued for each instance.

To utilize this in Zabbix, first we have to create a walk[] master item, specifying the list of OIDs from which to collect values. The retrieved values will be used in both low-level discovery (e.g.: interface names) and items created from low-level discovery item prototypes (e.g.: incoming and outgoing traffic).

Two new preprocessing steps have been introduced to facilitate SNMP bulk data collection:

  • SNMP walk to JSON is used to specify the OIDs from which the low-level discovery macros will be populated with their values
  • SNMP walk value is used in the item prototypes to specify the OID from which the item value will be collected

The workflow for SNMP bulk data collection can be described in the following steps:

  • Create a master walk[] item containing the required OIDs
  • Create a low-level discovery rule of type dependent item which depends on the walk[] master item
  • Define low-level discovery macros by using the SNMP walk to JSON preprocessing step
  • Create item prototypes of type dependent item which depend on the walk[] master item, and use the SNMP walk value preprocessing step to specify which OID should be used for value collection

Monitoring interface traffic with bulk SNMP data collection

Let’s take a look at a simple example which you can use as a starting point for implementing bulk SNMP metric collection for your devices. In the following example we will create a master walk[] item, a dependent low-level discovery rule to discover network interfaces, and dependent item prototypes for incoming and outgoing traffic.

Creating the master item

We will start by creating the walk[] SNMP agent master item. The name and the key of the item can be specified arbitrarily. What’s important here is the OID field, where we specify a comma-separated list of OIDs whose instance values will be collected.

walk[1.3.6.1.2.1.31.1.1.1.6,1.3.6.1.2.1.31.1.1.1.10,1.3.6.1.2.1.31.1.1.1.1,1.3.6.1.2.1.2.2.1.2,1.3.6.1.2.1.2.2.1.3]

The walk[] item will collect values from the following OIDs:

  • 1.3.6.1.2.1.31.1.1.1.6 – Incoming traffic
  • 1.3.6.1.2.1.31.1.1.1.10 – Outgoing traffic
  • 1.3.6.1.2.1.31.1.1.1.1 – Interface names
  • 1.3.6.1.2.1.2.2.1.2 – Interface descriptions
  • 1.3.6.1.2.1.2.2.1.3 – Interface types

SNMP bulk metric collection master walk[] item
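Before relying on the collected data, the same OID subtrees can be sanity-checked from the command line with the snmpwalk utility (the host address and community string below are placeholders):

snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.2.1.31.1.1.1.1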

Here we can see the resulting values collected by this item:

Note: For readability, the output has been truncated and some of the interfaces have been left out.

.1.3.6.1.2.1.2.2.1.2.102 = STRING: DEFAULT_VLAN
.1.3.6.1.2.1.2.2.1.2.104 = STRING: VLAN3
.1.3.6.1.2.1.2.2.1.2.105 = STRING: VLAN4
.1.3.6.1.2.1.2.2.1.2.106 = STRING: VLAN5
.1.3.6.1.2.1.2.2.1.2.4324 = STRING: Switch loopback interface
.1.3.6.1.2.1.2.2.1.3.102 = INTEGER: 53
.1.3.6.1.2.1.2.2.1.3.104 = INTEGER: 53
.1.3.6.1.2.1.2.2.1.3.105 = INTEGER: 53
.1.3.6.1.2.1.2.2.1.3.106 = INTEGER: 53
.1.3.6.1.2.1.2.2.1.3.4324 = INTEGER: 24
.1.3.6.1.2.1.31.1.1.1.1.102 = STRING: DEFAULT_VLAN
.1.3.6.1.2.1.31.1.1.1.1.104 = STRING: VLAN3
.1.3.6.1.2.1.31.1.1.1.1.105 = STRING: VLAN4
.1.3.6.1.2.1.31.1.1.1.1.106 = STRING: VLAN5
.1.3.6.1.2.1.31.1.1.1.1.4324 = STRING: lo0
.1.3.6.1.2.1.31.1.1.1.10.102 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.10.104 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.10.105 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.10.106 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.10.4324 = Counter64: 12073
.1.3.6.1.2.1.31.1.1.1.6.102 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.6.104 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.6.105 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.6.106 = Counter64: 0
.1.3.6.1.2.1.31.1.1.1.6.4324 = Counter64: 12457

By looking at these values we can confirm that the item collects values required for both the low-level discovery rule (interface name, type, and description) and the items created from item prototypes (incoming/outgoing traffic).

Creating the low-level discovery rule

As our next step, we will create a dependent low-level discovery rule which will discover interfaces based on the data from the master walk[] item.

Interface discovery dependent low-level discovery rule

The most important part of configuring the low-level discovery rule is the SNMP walk to JSON preprocessing step. Here we can assign low-level discovery macros to OIDs. For our example, we will assign the {#IFNAME} macro to the OID containing the interface names:

Field name: {#IFNAME}
OID prefix: 1.3.6.1.2.1.31.1.1.1.1
Dependent low-level discovery rule preprocessing steps
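With just this one field defined, the low-level discovery data derived from the walk output shown earlier would look something like this (abbreviated):

[
    {"{#SNMPINDEX}":"102","{#IFNAME}":"DEFAULT_VLAN"},
    {"{#SNMPINDEX}":"104","{#IFNAME}":"VLAN3"},
    {"{#SNMPINDEX}":"4324","{#IFNAME}":"lo0"}
]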

The name and the key of the dependent item can be specified arbitrarily.

Creating item prototypes

Finally, let’s create two dependent item prototypes to collect traffic data from our master item.

Here we will provide an arbitrary name and a key containing low-level discovery macros. On items created from the item prototypes, the macros will resolve to their discovered values, giving each item a unique name and key.

Note: The {#SNMPINDEX} macro is automatically collected by the low-level discovery rule and contains the indexes from the OIDs specified in the SNMP walk to JSON preprocessing step.

The final step in creating the item prototype is using the SNMP walk value preprocessing step to define which value will be collected by the item. We will also append the {#SNMPINDEX} macro at the end of the OID. This way, each item created from the prototype will collect data from a unique OID corresponding to the correct object instance.

Incoming traffic item prototype

Incoming traffic item prototype preprocessing step:

SNMP walk value: 1.3.6.1.2.1.31.1.1.1.6.{#SNMPINDEX}
Incoming traffic item preprocessing steps
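For example, on the item discovered for the loopback interface from the earlier walk output ({#SNMPINDEX} = 4324), this step resolves to:

SNMP walk value: 1.3.6.1.2.1.31.1.1.1.6.4324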

 

Outgoing traffic item prototype

Outgoing traffic item prototype preprocessing step:

SNMP walk value: 1.3.6.1.2.1.31.1.1.1.10.{#SNMPINDEX}
Outgoing traffic item preprocessing steps

Note: Since the collected traffic values are counter values (always increasing), the Change per second preprocessing step is required to collect the traffic per second values.

Note: Since the values are collected in bytes, we will use the Custom multiplier preprocessing step to convert bytes to bits.
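Putting the notes above together, each traffic item created from the prototypes would carry a preprocessing chain along these lines:

1. SNMP walk value: 1.3.6.1.2.1.31.1.1.1.6.{#SNMPINDEX} (or 1.3.6.1.2.1.31.1.1.1.10.{#SNMPINDEX} for outgoing traffic)
2. Change per second
3. Custom multiplier: 8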

Final notes

And we’re done! Now all we have to do is wait until the master item update interval kicks in, after which we should see our items being discovered by the low-level discovery rule.

Items created from the item prototypes

After we have confirmed that our interfaces are being discovered and the items are collecting metrics from the master item, we should also implement the Discard unchanged with heartbeat preprocessing step on our low-level discovery rule. This way, the low-level discovery rule will not try to discover new entities when the master item keeps returning the same set of interfaces over and over again. This in turn improves the overall performance of the internal low-level discovery processes.

Discard unchanged with heartbeat preprocessing on the low-level discovery rule

Note that more interface parameters than just the interface name are available – the interface description and type are also collected by the master item. To use this data, we would add additional fields in the low-level discovery rule’s SNMP walk to JSON preprocessing step and assign low-level discovery macros to the corresponding OIDs. Once that is done, we can use the new macros in the item prototypes to provide additional information in the item name or key, or to filter the discovered interfaces (e.g., only discover interfaces of a particular type), as sketched below.
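As a sketch, the SNMP walk to JSON preprocessing step could be extended with two more field/OID prefix pairs, and an LLD filter could then limit discovery to Ethernet interfaces (IF-MIB type 6):

Field name: {#IFDESCR}   OID prefix: 1.3.6.1.2.1.2.2.1.2
Field name: {#IFTYPE}    OID prefix: 1.3.6.1.2.1.2.2.1.3
LLD filter: {#IFTYPE} matches ^6$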

If you have any questions, comments, or suggestions regarding a topic you wish to see covered next in our blog, don’t hesitate to leave a comment below!

The post Improving SNMP monitoring performance with bulk SNMP data collection appeared first on Zabbix Blog.

Zabbix SNMP monitoring one-day training course

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-snmp-monitoring-one-day-training-course/25746/

SNMP (Simple Network Management Protocol) is a networking protocol that is extremely prevalent in network hardware such as switches and routers, as well as in a variety of other devices such as printers, power supplies, and even regular server hosts. Depending on the device, a wide array of metrics can be retrieved over SNMP without the need to deploy any additional software on the target device. Our newly introduced Zabbix SNMP monitoring one-day extra course will teach you all you need to know about SNMP and how to use Zabbix to get the most out of your SNMP devices with the most optimal data collection approach.

Course contents

The goal of the course is to introduce the core concepts behind SNMP and provide the attendees with the skills necessary to extract the required metrics from SNMP endpoints. The course covers topics such as:

  • SNMP core concepts
  • SNMP command line utilities
  • Differences between SNMP versions
  • SNMP resource discovery with Zabbix Low-level discovery feature
  • Optimizing SNMP data collection with the SNMP walk[*] item introduced in Zabbix 6.4
  • Configuring and monitoring SNMP traps

The course will be hosted by a Zabbix Certified Trainer with thorough experience in SNMP monitoring and Zabbix configuration. The courses are designed with open Q&A sessions in mind, where attendees can discuss their current and potential use cases and how the course contents can be applied to their own real-life scenarios.

Zabbix Certified Specialist on-site training

SNMP monitoring in Zabbix 6.4 and newer versions

Zabbix 6.4 introduces a whole new approach to bulk SNMP monitoring – the walk[*] item. During the course, attendees will learn how the new bulk data collection approach greatly increases the metric collection speed and reduces the potential performance impact on the monitoring endpoint. The course covers the main differences between the old and new approaches and demonstrates how the new SNMP items can be configured in the correct manner. Trainers will also demonstrate and provide the opportunity to practice SNMP resource discovery with the new walk[*] item and Zabbix low-level discovery (LLD) features.

Practical tasks

Depending on the device and vendor, configuring SNMP monitoring can be very simple, or it may require understanding the underlying OID structure to find the data you’re looking for. To guide our students through real-life scenarios and show how SNMP monitoring can differ across devices and vendors, the course includes practical tasks during which participants get the opportunity to configure Zabbix SNMP monitoring for an array of network devices from a selection of the most popular vendors, such as Cisco, HP, and Mikrotik. By the end of the course, students will have both a theoretical and a practical understanding of SNMP as a protocol and of how Zabbix can use it for metric collection and problem detection.

At the end of the course, every attendee will receive the course attendance certificate.

Sign up for the SNMP monitoring Zabbix training course and check our other one-day extra courses by visiting the Zabbix Training Extra Courses page.

The post Zabbix SNMP monitoring one-day training course appeared first on Zabbix Blog.

Integrate Zabbix with your data pipelines by configuring real-time metric and event streaming

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/integrate-zabbix-with-your-data-pipelines-by-configuring-real-time-metric-and-event-streaming/25728/

Modern IT infrastructures tend to utilize multiple data sources to evaluate and react to the current state of the infrastructure. A set of internal solutions and dedicated software tools are used to correlate the collected information and react in a proper way to changes in the environment – be it a gradual increase or decrease in resource usage, unexpected load spikes or run-of-the-mill outages.

With the release of Zabbix 6.4, metrics collected by Zabbix and events generated based on trigger expressions can be integrated into such a data pipeline by using the new real-time metric and event streaming feature.

Zabbix real-time metric and event streaming

Before Zabbix 6.4, Zabbix supported real-time export of history, trends, and events to files in newline-delimited JSON format. Additional scripting was mandatory if a user wanted to integrate this data into their data pipeline, since Zabbix only exported the data to files, without any further transformation or streaming.

Real-time metric streaming, on the other hand, is a lot more flexible when it comes to integrating Zabbix with data pipelines, filtering the required data, and securing the connection to the third-party endpoint.

In addition to specifying the streaming endpoint URL, Zabbix users can choose between streaming item values and events. On top of that, tag filtering can be used to further narrow down the data that will be streamed to the endpoint.

Connectors

Real-time item value and event streaming can be configured by accessing the Administration – General – Connectors section. Here, Zabbix administrators will have to create connectors and specify what kind of data each connector will be streaming. A single connector can stream either item values or events – if you wish to stream both, you will have to define at least two connectors: one for values and one for events.

Configuring a new connector

Marking the Advanced configuration checkbox enables Zabbix administrators to further configure each individual connector – from specifying the number of concurrent sessions, limiting the number of attempts and specifying the connection Timeout to configuring HTTP proxies and enabling SSL connections.

An HTTP authentication method can also be specified for each connector. One of the following methods can be used:

  • None – no authentication
  • Basic – basic authentication
  • NTLM – NTLM (Windows NT LAN Manager) authentication
  • Kerberos – Kerberos authentication
  • Digest – Digest authentication
  • Bearer – Bearer authentication

In addition, the number of pre-forked connector worker instances must be specified in the Zabbix server configuration file in the StartConnectors parameter.

Setting StartConnectors parameter in Zabbix server configuration file
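A minimal sketch of the relevant lines in the Zabbix server configuration file (the worker count is an example value – size it to the number of configured connectors):

# Number of pre-forked connector worker instances
StartConnectors=2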

Protocol

Under the hood, the data is sent over HTTP using the newline-delimited JSON export protocol.

The following example shows a trigger event for Zabbix agent being unreachable:

{"clock":1519304285,"ns":123456789,"value":1,"name":"Either Zabbix agent is unreachable on Host B or pollers are too busy on Zabbix Server","severity":3,"eventid":42, "hosts":[{"host":"Host B", "name":"Host B visible"},{"host":"Zabbix Server","name":"Zabbix Server visible"}],"groups":["Group X","Group Y","Group Z","Zabbix servers"],"tags":[{"tag":"availability","value":""},{"tag":"data center","value":"Riga"}]}

Do not get confused by the value field: its value will always be 1 for problem events, while for items it will contain the item value:

{"host":{"host":"Host B","name":"Host B visible"},"groups":["Group X","Group Y","Group Z"],"item_tags":[{"tag":"foo","value":"test"}],"itemid":4,"name":"CPU Load","clock":1519304285,"ns":123456789,"value":0.1,"type":0}

Use cases

Finally, what particular use cases can we use the real-time metric streaming feature for? As I mentioned in the introduction, Zabbix item values and events could serve as an additional source of near real-time information about the current system behavior.

For example, we could stream item values and events to message brokers such as Kafka, RabbitMQ or Amazon Kinesis. Combine this with additional automation solutions and your services could dynamically scale (think K8s/Docker containers) depending on the current (or expected) load. Zabbix Kubernetes and Docker container monitoring templates very much complement such an approach.

The streamed data could also be used to gain new insights about the system behavior by streaming it to an AI engine or data lakes/data warehouses for long-term storage and analysis.

Zabbix 6.4 is out now!

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-6-4-is-out-now/25444/

Zabbix team is pleased to announce the release of the latest Zabbix major version – Zabbix 6.4. The release delivers many long-awaited improvements, such as just-in-time LDAP and SAML user provisioning; support for older Zabbix proxy versions for simplified proxy management and zero-downtime Zabbix upgrades; near-instant configuration sync across Zabbix agents and proxies; and much more!

New features and improvements

Just-in-time (JIT) user provisioning 

Zabbix 6.4 adds support for JIT user provisioning for LDAP and SAML authentication.

JIT user provisioning can be enabled in LDAP/SAML authentication settings

Zabbix administrators can now configure user provisioning by selecting the LDAP group pattern for matching and automatically assigning User groups and User roles to the discovered users. Media types can also be mapped based on LDAP/SAML attributes.

A media can be assigned to the provisioned users based on their LDAP/SAML attributes
A group and role is assigned to the provisioned users

Cause and symptom events 

Zabbix 6.4 adds the ability to mark events as cause or symptom events. This allows us to filter events so that we see only root cause problems instead of being overwhelmed by symptom events. It is also possible to pause action operations for symptom events to avoid unnecessary noise.

Multiple symptom events can be linked to a single cause event
Any event can be marked as a symptom or converted to a cause event
Action operations can be paused for symptom problems

Instant propagation of configuration changes 

Continuing to build on changes introduced in Zabbix 6.2 (Collecting only configuration change deltas), Zabbix 6.4 introduces instant configuration synchronization across passive and active agents and proxies.

  • Instead of receiving the full configuration copy every 2 minutes (old behavior), in Zabbix 6.4 active agent receives the configuration copy only when changes have been performed
  • RefreshActiveChecks parameter now supports a range 1-86400 (old range: 60-3600)
  • The ProxyConfigFrequency parameter is now used in both Zabbix server (for passive mode) and Zabbix proxy (for active mode) configuration files
  • ConfigFrequency parameter in Zabbix proxy configuration is now deprecated
  • Default ProxyConfigFrequency parameter is 10 seconds (down from 1 hour)
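For reference, a sketch of how these parameters might look in the respective configuration files (the values are examples):

# zabbix_agentd.conf – how often the active agent checks for configuration changes
RefreshActiveChecks=5

# zabbix_server.conf / zabbix_proxy.conf – proxy configuration sync interval
ProxyConfigFrequency=10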

This also improves the performance of Zabbix servers and proxies, since only configuration deltas are synced. As for active agents – an active agent receives a full configuration copy only when changes have been detected in the configuration, instead of receiving it every RefreshActiveChecks interval (the old behavior).

New SNMP walk item for bulk collection and discovery of SNMP metrics 

A new SNMP agent walk item has been introduced. The item walks the specified OID or OIDs and polls their indexes using SNMP GetBulk requests. An SNMP GetBulk request can provide better performance and more rapid metric collection and discovery from enterprise-tier SNMP devices.

For example:

walk[1.3.6.1.1,1.3.6.2]

Result:

1.3.6.1.2.1.1 = STRING: "<value1>"
1.3.6.1.2.1.2 = STRING: "<value2>"
1.3.6.1.2.1.3 = STRING: "<value3>"
1.3.6.2.1 = INTEGER: 10
1.3.6.2.2 = INTEGER: 20

The collected textual values can then be transformed to JSON, allowing the walk item to serve as a master item for low-level discovery rules:

SNMP walk to JSON transforms the obtained data to JSON

Resulting values:

[
    {"{#SNMPINDEX}":"7","{#IFALIAS}":"Uplink PT","{#IFTYPE}":"6"},
    {"{#SNMPINDEX}":"8","{#IFALIAS}":"Uplink FB","{#IFTYPE}":"6"},
    {"{#SNMPINDEX}":"473","{#IFALIAS}":"lag","{#IFTYPE}":"161"}
]

Once the data is converted to JSON, we can use the SNMP walk value preprocessing step together with LLD macros to create dependent item prototypes:

SNMP walk value preprocessing step can be used to specify value for extraction in item prototypes

Support of data collection for outdated proxies

To improve the Zabbix component upgrade workflows (especially for large environments), outdated proxies can still perform data collection with a newer Zabbix server version:

  • Proxy is fully supported if it has the same major version as the Zabbix server
  • Proxy is marked as outdated if its major version is older than the Zabbix server but not older than the previous LTS release
  • Outdated proxies still support data collection and remote command execution
  • In other scenarios, the proxy becomes not supported
Deployed proxy compatibility can be seen in Zabbix frontend

Server version | Current proxy version | Outdated proxy version | Unsupported proxy version
6.4            | 6.4                   | 6.0, 6.2               | Older than 6.0; newer than 6.4
7.0            | 7.0                   | 6.0, 6.2, 6.4          | Older than 6.0; newer than 7.0
7.2            | 7.2                   | 7.0                    | Older than 7.0; newer than 7.2

New menu layout 

Zabbix menu layout has been redesigned. The goal of the new menu layout is to provide logical and consistent access to main Zabbix features.

The new menu provides a more consistent and logical layout to Zabbix features

Real-time streaming of metrics and events over HTTP

In addition to streaming collected metrics and events to files, Zabbix 6.4 adds the option to stream metrics and events over HTTP. Zabbix administrators have the option to filter the data for streaming by using tag filters. A new Connectors section has been introduced under Administration – General. Here Zabbix administrators can define an external system where item values and events should be pushed to.

Define a new connector to stream metrics and events over HTTP

Zabbix 6.4 can be used as a source of information for other applications, analytics reports, and AI engines by streaming metrics and events over HTTP in real time. Metrics and events can be streamed to message brokers like Kafka, RabbitMQ, or Amazon Kinesis to adapt the behavior of external systems in real time. 

Template versioning 

Template versioning has been introduced to improve template management and ease of use. Templates are now marked with vendor and version fields, which are visible in the Zabbix frontend; these fields can also be added when writing a custom template.

Template version and vendor fields are visible in the frontend

Development framework for Zabbix widget creation 

Zabbix has a large developer community creating their own custom frontend modules, widgets and Go plugins. In Zabbix 6.4, our goal was to streamline this process by creating a development framework for widget creation. To achieve this, the following changes have been introduced:

  • Widgets have been converted to modules
  • Modules are now fully self-contained and modular
  • Built-in widgets reside in ui/widgets
  • Custom widgets reside in ui/modules/<widget>
  • Adding new widgets is as simple as adding new files without changing the existing files

In addition to these changes, we have also added a new Developer Center section to our documentation. The section contains guides, tutorials and code examples to guide our community in developing Frontend modules and widgets, as well as help with Zabbix agent 2 custom Go plugin development.

The Developer Center section contains guides, tutorials, and code examples for extending Zabbix

Other features and improvements 

The release includes many other changes:

  • Simple check, External check, SSH agent, and Telnet agent item types no longer require an interface to be present on the host
  • Pre-configured email media type settings for Gmail and O365 email providers 
  • Dynamic item value widget thresholds
  • Option to define custom labeled links for hosts and events
  • Ability to label trigger URLs
  • Improved preprocessing performance and thread-based preprocessing workers
  • Ability to label aggregated datasets in Graph widget
  • SQLite3 Zabbix proxies now automatically recreate the SQLite3 database file during an upgrade
  • A host status filter (enabled/disabled) has been added under Data collection – Hosts
  • Additional filtering options have been added to the Action log
  • Action log records can now be exported to CSV
  • Multiple context menu improvements to Host, Item and Event context menus
  • Old password verification is now required when changing your internal Zabbix user password
  • Value cache performance improvements when working with metrics that get updated less frequently than once per day
  • Added commands to enable profiling of rwlocks/mutexes (for debugging)

The full list of changes, bug fixes, and new features can be found in the Zabbix 6.4 release notes.

New templates and integrations

Zabbix 6.4 comes pre-packaged with many new templates and integrations for the most popular vendors and cloud providers. Multiple existing templates have also received improvements:

  • Microsoft Azure MySQL servers 
  • Microsoft Azure PostgreSQL servers 
  • Microsoft Azure virtual machines 
  • Low-level discovery improvements in AWS by HTTP template 
  • Veeam Backup Enterprise Manager 
  • Veeam Backup and Replication 
  • Cisco Nexus 9000 Series 
  • BMC Control-M 
  • Cisco Meraki dashboard 
  • OS processes by Zabbix agent 
  • Improvements to filesystem discovery in official Zabbix OS templates 

Zabbix 6.4 introduces a webhook integration for the Line messaging app, allowing Zabbix events to be forwarded to the Line messenger. 

Zabbix 6.4 adds a variety of new templates and integrations

Zabbix 6.4 packages and images

Official Zabbix packages and images are available for: 

  • Linux distributions for different hardware platforms on RHEL, CentOS, Oracle Linux, Debian, SUSE, Ubuntu, Raspbian 
  • Virtualization platforms based on VMware, VirtualBox, Hyper-V, Xen
  • Docker 
  • Packages and pre-compiled agents for the most popular platforms, including macOS and MSI packages for Microsoft Windows 

You can find the download instructions and download the new version on the Download page.

One-click deployments for the following cloud platforms are coming soon: 

  • AWS, Azure, Google Cloud Platform, Digital Ocean 

Upgrading to Zabbix 6.4

In order to upgrade to Zabbix 6.4 you need to upgrade your repository package and download and install the new Zabbix component packages (Zabbix server, proxy, frontend, and other Zabbix components). When you start the Zabbix server, an automatic database schema upgrade will be performed. Zabbix agents are backward compatible; therefore, it is not required to install the new agent versions. You can perform the agent upgrade at a later time. 

If you’re using the official Docker container images – simply deploy a new set of containers for your Zabbix components. Once the Zabbix server container connects to the backend database, the database upgrade will be performed automatically.

You can find detailed step-by-step upgrade instructions on our Upgrade procedure page. 

Join the webinar

If you wish to learn more about the Zabbix 6.4 features and improvements, we invite you to join our What’s new in Zabbix 6.4 public webinar.

During the webinar, you will get the opportunity to:

  • Learn about Zabbix 6.4 features and improvements
  • See the latest Zabbix templates and integrations
  • Participate in a Q&A session with Zabbix founder and CEO Alexei Vladishev
  • Discuss the latest Zabbix version with Zabbix community and Zabbix team members

This is a public webinar – anyone can sign up, attend and have their questions answered by the Zabbix team!

Handy Tips #40: Simplify metric pattern matching by creating global regular expressions

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/simplify-metric-pattern-matching-by-creating-global-regular-expressions/24225/

Streamline your data collection, problem detection and low-level discovery by defining global regular expressions. 

Pattern matching within unstructured data is mostly done by using regular expressions. Defining a regular expression can be a lengthy task that can be simplified by predefining a set of regular expressions, which can then be quickly referenced down the line.

Simplify pattern matching by defining global regular expressions:

  • Reference global regular expressions in log monitoring and SNMP trap items
  • Simplify pattern matching in trigger functions and calculated items
  • Global regular expressions can be referenced in low-level discovery filters
  • Combine multiple subexpressions into a single global regular expression
Check out the video to learn how to define and use global regular expressions.
Define and use global regular expressions: 
  1. Navigate to Administration → General → Regular expressions
  2. Type in your global regular expression name
  3. Select the regular expression type and provide subexpressions
  4. Press Add and provide multiple subexpressions
  5. Navigate to the Test tab and enter the test string
  6. Click on Test expressions and observe the result
  7. Press Add to save and add the global regular expression
  8. Navigate to Configuration → Hosts
  9. Find the host on which you will test the global regular expression
  10. Click on either the Items, Triggers or Discovery button to open the corresponding section
  11. Find your item, trigger or LLD rule and open it
  12. Insert the global regular expression
  13. Use the @ symbol to reference a global regular expression by its name
  14. Update the element to save the changes
Tips and best practices
  • Each subexpression and the total combined result can be tested in the Zabbix frontend
  • Zabbix uses AND logic if several subexpressions are defined 
  • Global regular expressions can be referenced by referring to their name, prefixed with the @ symbol 
  • Zabbix documentation contains a list of the locations that support the usage of global regular expressions
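For example, a global regular expression can be referenced by name in a log monitoring item key (the expression name and file path below are hypothetical):

log[/var/log/app.log,@Application error patterns]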

Sign up for the official Zabbix Certified Specialist course and learn how to optimize your data collection, enrich your alerts with useful information, and minimize the amount of noise and false alarms. During the course, you will perform a variety of practical tasks under the guidance of a Zabbix certified trainer, where you will get the chance to discuss how the current use cases apply to your own unique infrastructure. 

The post Handy Tips #40: Simplify metric pattern matching by creating global regular expressions appeared first on Zabbix Blog.

Handy Tips #39: Extracting metrics from structured data with Zabbix preprocessing

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-39-extracting-metrics-from-structured-data/24163/

Collect structured data in bulk and use Zabbix preprocessing to extract and transform the necessary metrics. 

Collecting data from custom monitoring endpoints such as web applications or custom in-house software can result in the collected data requiring further extraction or transformation to fit our requirements. 

Use Zabbix preprocessing to extract metrics from structured data: 

  • Extract data with JSONPath and XPath expressions
  • Transform XML and CSV data to JSON structures

  • Check for error messages in JSON and XML structures
  • Extract and transform metrics from Prometheus exporter endpoints

Check out the video to learn how to use Zabbix preprocessing to extract metrics from structured data.

Extract metrics from structured data with Zabbix preprocessing: 

  1. Navigate to Configuration → Hosts
  2. Find the host where structured data is collected
  3. Click on the Items button next to the host
  4. Create or open an item collecting structured data
  5. For this example, we will transform CSV to JSON
  6. Open the Preprocessing tab
  7. Select a structured data preprocessing rule
  8. If required, provide the necessary parameters
  9. Optionally, select a validation preprocessing step
  10. For this example, we will check for errors in JSON
  11. Extract a value by using JSONPath or XML XPath preprocessing steps
  12. Press Test to open the test window
  13. Press Get value and test to test the item
  14. Close the test window and press Add or Update to add or update the item
  15. Optionally, create dependent items to extract values from this item

Tips and best practices
  • Check out Handy Tips #37 to learn how to collect structured data from HTTP endpoints

  • For CSV to JSON preprocessing, the first parameter allows you to specify a CSV delimiter, while the second parameter specifies the quotation symbol

  • For CSV to JSON preprocessing, if the With header row checkbox is marked, the header line values will be interpreted as column names

  • For details on XML to JSON preprocessing, refer to the serialization rules in our documentation
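As a quick illustration of the CSV to JSON step with the With header row checkbox marked (hypothetical input data):

Input CSV:
sensor,value
cpu0,0.51

Resulting JSON:
[{"sensor":"cpu0","value":"0.51"}]

A JSONPath preprocessing step of $[0].value in a dependent item could then extract 0.51.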

Learn how to leverage the many types of data collection provided by Zabbix and empower your data collection and processing. Sign up for our Zabbix Certified Specialist course, where under the guidance of a Zabbix certified trainer you will learn more about different types and technologies of monitoring and learn how to get the most out of your Zabbix instance. 

The post Handy Tips #39: Extracting metrics from structured data with Zabbix preprocessing appeared first on Zabbix Blog.

Handy Tips #38: Automating SNMP item creation with low-level discovery

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-38-automating-snmp-item-creation-with-low-level-discovery/23521/

Let Zabbix automatically discover and start monitoring your SNMP data points.

Creating items manually for each network interface, fan, temperature sensor, and other SNMP data points can be a very time-consuming task. To save time, Zabbix administrators need to automate item, trigger, and graph creation as much as possible.

Automate item, trigger and graph creation with SNMP low-level discovery rules:

  • An entity will be created for each of the discovered indexes
  • Specify multiple OIDs to discover additional information about an entity

  • Filter entities based on any of the discovered OID values
  • Low-level discovery can be used with SNMP v1, v2c and v3

Check out the video to learn how to use Zabbix low-level discovery to discover SNMP entities.

How to use Zabbix low-level discovery to discover SNMP entities:

  1. Navigate to Configuration → Hosts and find your SNMP host
  2. Open the Discovery section and create a discovery rule
  3. Provide a name, a key, and select the Type – SNMP agent
  4. Populate the SNMP OID field with the following LLD syntax
  5. discovery[{#LLD.MACRO1},<OID1>,{#LLD.MACRO2},<OID2>]
  6. Navigate to the Filters section and provide the LLD filters
  7. Press Add to create the LLD rule
  8. Open the Item prototypes section and create an item prototype
  9. Provide the Item prototype name and key
  10. Populate the OID field, ending it with the {#SNMPINDEX} LLD macro (see the example after this list)
  11. Configure the required tags and preprocessing rules
  12. Press Add to create the item prototype
  13. Wait for the LLD rule to execute and observe the discovered items
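For instance, a discovery rule key pairing the {#IFNAME} macro with the IF-MIB interface name OID, and a matching item prototype OID for the interface operational status, could look like this (a sketch – adjust the OIDs to your device):

discovery[{#IFNAME},1.3.6.1.2.1.31.1.1.1.1]
Item prototype OID: 1.3.6.1.2.1.2.2.1.8.{#SNMPINDEX}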

Tips and best practices
  • The snmpwalk tool can be used to list the OIDs provided by the monitored device
  • If a particular entity does not have the specified OID, the corresponding macro will be omitted for it
  • OIDs can be added to your LLD rule for usage in filters and tags
  • The {#SNMPINDEX} LLD macro is discovered automatically based on the indexes listed for each OID in the LLD rule

Learn how Zabbix low-level discovery rules can be used to automate the creation of your Zabbix entities by attending our Zabbix Certified Professional course. During the course, you will learn the many use cases of low-level discovery by performing a variety of practical tasks under the guidance of a Zabbix certified trainer.

The post Handy Tips #38: Automating SNMP item creation with low-level discovery appeared first on Zabbix Blog.

Handy Tips #37: Collecting metrics from HTTP endpoints with HTTP agent items

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-37-collecting-metrics-from-http-endpoints-with-http-agent-items/23160/

Collect metrics from HTTP endpoints such as web application APIs by defining HTTP agent items.

Collecting metrics from web services and applications is a complex affair, usually done by scripting around CLIs and APIs. Organizations require an efficient way to monitor such endpoints and react to the collected data.

Collect and react to data from web services and applications with Zabbix HTTP agent items:

  • Collect metrics agentlessly using HTTP/HTTPS protocols
  • Collect metrics in bulk to reduce the number of outgoing requests

  • Zabbix preprocessing can be utilized to extract the required metrics from the response
  • Select from multiple HTTP authentication types

Check out the video to learn how to define HTTP items and collect metrics from HTTP endpoints.

Define HTTP items and collect metrics from HTTP endpoints:

  1. Navigate to Configuration → Hosts and find your host
  2. Open the Items section and press the Create item button
  3. Select the Type – HTTP agent
  4. Provide the item key, name and URL
  5. For now, set the Type of information to Text
  6. Optionally, provide the request body and required status codes
  7. Press the Test button and then press Get value and test
  8. Save the resulting value to help you define the preprocessing steps
  9. Navigate to the Preprocessing tab
  10. Define a JSONPath preprocessing step to extract a value from the previous test result (see the example after this list)
  11. Navigate to the Item section
  12. Change the Type of information to Numeric (float)
  13. Perform the item test one more time
  14. Press Add to add the item
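As an example of the preprocessing described in step 10, if the endpoint returned the (hypothetical) JSON body below, a JSONPath step of $.data.cpu_load would extract the numeric value:

Response: {"data":{"cpu_load":0.51}}
JSONPath: $.data.cpu_load
Result:   0.51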

Tips and best practices
  • HTTP item check is executed by Zabbix server or Zabbix proxy
  • Zabbix will follow redirects if the Follow redirects option is checked
  • HTTP items have their own Timeout parameter defined in the item configuration
  • Receiving a status code not listed in the Required status codes field will result in the item becoming unsupported

Learn how to automate your Zabbix configuration workflows and integrate Zabbix with external systems by signing up for the Automation and Integration with Zabbix API course. During the course, students will learn how to use the Zabbix API by implementing different use cases under the guidance of a Zabbix certified trainer.

The post Handy Tips #37: Collecting metrics from HTTP endpoints with HTTP agent items appeared first on Zabbix Blog.

Handy Tips #36: Collecting custom metrics with Zabbix agent user parameters

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-36-collecting-custom-metrics-with-zabbix-agent-user-parameters/22850/

Define custom agent keys to collect custom metrics by executing scripts or commands with Zabbix user parameters.

Having a simple way to extend the metric collection functionality of a monitoring tool can be vital if we wish to monitor custom in-house software or simply collect metrics not available out of the box.

Collect custom metrics with Zabbix agent by defining user parameters:

  • Define an unlimited number of user parameters for your Zabbix agents
  • Parameters such as usernames and passwords can be passed to flexible user parameters

  • User parameters support Zabbix agent data collection in active and passive modes
  • User parameters can collect bulk data for further processing by dependent items

Check out the video to learn how to define user parameters for Zabbix agents.

Define user parameters for Zabbix agents:

  1. Test your custom command on the host on which you will create the user parameter
  2. Open the Zabbix agent configuration file in a text editor
  3. A simple user parameter can be defined by adding the line: UserParameter=key,command
  4. A flexible user parameter can be defined by adding the line: UserParameter=key[*],command (both styles are sketched after this list)
  5. For flexible user parameters, use $1…$9 positional references to reference your custom key parameters
  6. Save the changes
  7. Reload user parameters by using the command zabbix_agentd -R userparameter_reload
  8. Open the Zabbix frontend and navigate to Configuration → Hosts
  9. Find your host and click on the Items button next to the host
  10. Press the Create item button
  11. Give your item a name and select the item type – Zabbix agent or Zabbix agent (active)
  12. Provide the key that you defined as your user parameter key
  13. For flexible user parameters, provide the key parameters
  14. Press the Test button and then press Get value and test to test your user parameter
  15. Press the Add button to add the item
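A minimal sketch of the two definition styles from steps 3 and 4 (the keys and commands are hypothetical):

# Simple: returns the number of running processes
UserParameter=proc.count,ps -e --no-headers | wc -l
# Flexible: counts lines matching the first key parameter in the file passed as the second
UserParameter=log.count[*],grep -c "$1" "$2"

The flexible key would then be used in an item as, for example, log.count[error,/var/log/app.log].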

Tips and best practices
  • User parameter commands need to be executed within the Zabbix agent Timeout parameter value
  • User parameters can be reloaded by executing the zabbix_agentd -R userparameter_reload command
  • User parameters can be defined in the Zabbix agent configuration file, or the files specified by the Include parameter
  • By default, certain symbols are not permitted to be used in user parameters
  • The usage of restricted characters can be permitted by setting the value of UnsafeUserParameters parameter to 1

Learn how to leverage the many types of data collection provided by Zabbix and empower your data collection and processing. Sign up for our Zabbix Certified Specialist course, where under the guidance of a Zabbix certified trainer you will learn more about different types and technologies of monitoring and learn how to get the most out of your Zabbix instance.

The post Handy Tips #36: Collecting custom metrics with Zabbix agent user parameters appeared first on Zabbix Blog.

Handy Tips #35: Monitoring log file entries with Zabbix agent

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-35-monitoring-log-file-entries-with-zabbix-agent/22607/

Collect and react on entries in your Windows or Linux logs with Zabbix log monitoring.

Log file entries can contain OS- or application-level information that can help you react proactively to potential issues or track down the root cause of a problem after it has occurred. For this reason, keeping a constant lookout for issues in mission-critical log files is vital.

Collect log file entries with Zabbix agent and react on them:

  • Zabbix agent can monitor log files on Windows and Unix-like operating systems
  • Decide between collecting every log entry or only entries matching your criteria

  • Monitor Windows event logs and collect entries matching specific severity, source or eventid
  • Choose between returning the whole log line or simply counting the number of matched lines

Check out the video to learn how to collect and match log file entries.

How to match and collect log file entries:

  1. Navigate to Configuration → Hosts
  2. Find your Host
  3. Click on the Items button next to the host
  4. Click the Create item button
  5. Select the item type – Zabbix agent (active)
  6. Make sure that the Type of information is selected as Log
  7. Provide the item name and key
  8. Select the log item key
  9. Use the log file as the first parameter of the key
  10. The second parameter should contain a regular expression used to match the log lines
  11. Optionally, provide the log time format to collect the local log timestamp
  12. Set the Update interval to 1s
  13. Press the Add button
  14. Generate new log line entries
  15. Navigate to Monitoring → Latest data
  16. Confirm that the matching log entries are being collected

Tips and best practices
  • Log monitoring is supported only by active Zabbix agent
  • If restarted, Zabbix agent will continue monitoring the log file from where it left off
  • The mode log item parameter can be used to specify whether monitoring should begin from the start of the file or from its latest entry
  • The logrt item can be used to monitor log files that are being rotated
  • The output parameter can be used to output specific regexp capture groups
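Putting these parameters together, a few sketched log item keys (the paths and patterns are hypothetical):

log[/var/log/app.log,"ERROR",,,skip]
logrt["/var/log/app-.*\.log","ERROR",,,skip]
log[/var/log/app.log,"ERROR: (.*)",,,skip,\1]

The first key matches new ERROR lines only, the second does the same for rotated log files, and the third returns only the text captured by the regular expression group.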

Learn how to configure and optimize your log monitoring by attending our Zabbix Certified Specialist course, where under the guidance of a Zabbix certified trainer you will obtain hands-on experience with different log file monitoring items and learn how to create trigger expressions to detect problems based on the collected log lines.

The post Handy Tips #35: Monitoring log file entries with Zabbix agent appeared first on Zabbix Blog.

Handy Tips #34: Creating context-sensitive problem thresholds with Zabbix user macros

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-34-creating-context-sensitive-problem-thresholds-with-zabbix-user-macros/22281/

Provide context and define custom problem thresholds by using Zabbix user macros.

Problem thresholds can vary for the same metric on different monitoring endpoints. We can have a server where having 10% of free space is perfectly fine, and a server where anything below 20% is a cause for concern.

Define Zabbix user macros with context:

  • Override the default macro value with a context-specific value
  • Add flexibility by using context macros as problem thresholds

  • Define a default value that will be used if a matching context is not found
  • Any low-level discovery macro value can be used as the context

Check out the video to learn how to define and use user macros with context:

How to define macros with context:

  1. Navigate to Configuration → Hosts
  2. Click on the Discovery button next to your host
  3. Press the Create discovery rule button
  4. We will use the net.if.discovery key to discover network interfaces
  5. Add the discovery rule
  6. Press the Item prototypes button
  7. Press the Create item prototype button
  8. We will use the net.if.in["{#IFNAME}"] item key
  9. Add the Change per second and Custom multiplier:8 preprocessing steps
  10. Add the item prototype
  11. Press the trigger prototypes button
  12. Press the Create trigger prototype button
  13. Create a trigger prototype: avg(/Linux server/net.if.in["{#IFNAME}"],1m)>{$IF.BAND.MAX:"{#IFNAME}"}
  14. Add the trigger prototype
  15. Click on the host and navigate to the Macros section
  16. Create macros with context
  17. Provide context for interface names: {$IF.BAND.MAX:"enp0s3"}
  18. Press the Update button
  19. Simulate a problem and check if context is taken into account

Tips and best practices
  • Macro context can be matched with static text or a regular expression
  • Only low-level discovery macros are supported in the context
  • Simple context macros are matched before matching context macros that contain regular expressions
  • Macro context must be quoted with ” if the context contains a } character or starts with a ” character
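For example, a default value with one static and one regular expression context override could look like this (the threshold values are hypothetical):

{$IF.BAND.MAX} = 100000000
{$IF.BAND.MAX:"enp0s3"} = 1000000000
{$IF.BAND.MAX:regex:"^eth[0-9]+$"} = 10000000000

The trigger prototype from step 13 then picks the most specific matching value for each discovered interface, falling back to the default when no context matches.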

Learn how to get the most out of your low-level discovery rules to create smart and flexible items, triggers, and hosts by registering for the Zabbix Certified Professional course. During the course, you will learn how to enhance your low-level discovery workflows by using overrides, filters, macro context, and receive hands-on practical experience in creating fully custom low-level discovery rules from scratch.

The post Handy Tips #34: Creating context-sensitive problem thresholds with Zabbix user macros appeared first on Zabbix Blog.

Zabbix Summit: A celebration of all things monitoring and open-source

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-summit-a-celebration-of-all-things-monitoring-and-open-source/21738/

Many of us have visited a number of different conferences over the years. The setting and the goal of a conference can vary by a large degree – from product presentations to technology stack overviews and community get-togethers. Zabbix Summit is somewhat special, as it aims to combine all of the aforementioned goals and present them in a friendly, inclusive, and approachable manner.

As an open-source product with a team consisting of open-source enthusiasts, it is essential for us to ensure that the core tenets of what we stand for are also represented in the events that we host, especially so for Zabbix Summit. Our goal is for our attendees to feel right at home and welcome during the Summit – no matter if you’re a hardened IT and monitoring professional or just a beginner looking to chat and learn from the leading industry experts.

Connecting with the Zabbix community

Networking plays a large part in achieving the goals that we have set up for the event. From friendly banter during coffee breaks and speeches (you never know when a question will turn into a full-fledged discussion) to the evening fun-part events – all of this helps us build our community and encourages people to help each other and mutually contribute to each other’s projects.

Of course, the past two years have challenged our preconceptions of how such an event can be hosted while still achieving our usual goals. While hosting a conference online can make things a bit simpler (everyone is already in the comfort of their home or office, and organizers don’t have to spend time and other resources renting a venue, for example), the novelty of “online events” can wear off quite quickly. The conversations don’t flow as naturally as they do in person. Perusing a list of attendees in Zoom isn’t quite the same as noticing a friend or recognizing an acquaintance while standing in line at the snack bar. As for the event speakers – steering your presentation in the right direction can be quite difficult without the emotional feedback of your audience. Are they bored? Are they excited? Is everyone half asleep 5 minutes in? Who knows.

With travel and on-premise events slowly becoming a part of our lives again, we’re excited to get back to our usual way of hosting Zabbix Summit. In 2022, it will be held on-premises in Riga, Latvia on October 7-8, and we can’t wait to interact with our community members, clients, and partners face-to-face again!

Making the best Zabbix Summit yet

As with every Zabbix Summit, this year’s event will build on the knowledge and feedback we have gained in previous years to make this year’s Summit the best it has ever been. This year will be special for us – we will be celebrating the 10th anniversary of the Zabbix Summit hosted on-premises! In addition to conducting the event on-site, we will also be live-streaming the event online, so if you can’t meet us in person – tune in and say hello to the Zabbix team virtually!

Zabbix Summit 2019 conference venue

Over the years we have managed to define a set of criteria for the Zabbix Summit speeches with the goal to provide content that can deliver unique value to our attendees. As a Zabbix certified trainer, a Zabbix fan, and a long-time Zabbix user, I know that there are certain types of speeches that immediately attract my attention:

  • In-depth Zabbix functionality overviews from Zabbix experts or Zabbix team members
  • Unique business monitoring use cases
  • Custom Zabbix integrations, applications, and extensions
  • How Zabbix is used in the context of the latest IT trends (e.g.: Kubernetes, cloud environments, configuration management tools such as Ansible and Chef)
  • Designing and scaling Zabbix deployments for different types of large and distributed environments

This is something that we try to put extra focus on for the Zabbix Summit. Speeches like these are bound to encourage questions from the audience and serve as a great demonstration of using Zabbix outside the proverbial box that is simple infrastructure monitoring.

Looking back at Zabbix Summit 2021, we had an abundance of truly unique speeches that can serve as guidelines for complex monitoring use cases. Some of the speeches that come to mind are Wolfgang Alper’s Zabbix meets television – Clever use of Zabbix features, where Wolfgang talked about how Zabbix is used in the broadcasting industry to collect Graylog entries and even monitor TV production trucks!

Not to mention the custom solution used for host identification and creation in Zabbix called Omnissiah, presented during the last year’s Zabbix Summit by Jacob Robinson.

As Zabbix has greatly expanded its set of features since the previous year’s summit, this year we expect the speeches to cover an even larger scope of topics related to many different industries and technology stacks.

Workshops – what to expect

Workshops are a different challenge altogether. In an environment where participants come from different IT backgrounds and have very different skill sets, it's important to make the workshop interesting while keeping it accessible to everyone.

Zabbix workshop session at the Zabbix Summit 2019

There are a few ways we go about this to ensure the best possible workshop experience for our Zabbix Summit attendees:

  • Use native Zabbix features to configure and deploy unique use cases
  • Focus on a thorough analysis of a particular feature, uncovering functionality that many users may not be aware of
  • Demonstrate the latest or even upcoming Zabbix features
  • Interact with the audience and be open to questions and discussions

In the vast majority of cases, this allows us to keep a smooth pace during the workshop while also having fun and discussing the potential use cases and functionality of the features on display.

Becoming Zabbix certified during Zabbix Summit 2022

But why stop at workshops? During the Zabbix Summit conferences, we always give our attendees a chance to test their knowledge by attempting to pass the Zabbix certified user, specialist, or professional certification exams. The exams not only test your proficiency in Zabbix but can also reveal some missing pieces in your Zabbix knowledge that you can discuss with the Zabbix community right on the spot. Receiving a brand new Zabbix certificate is also a great way to start your day, wouldn't you agree?

A moment of jubilation for our freshly certified Zabbix specialists and professionals

This year the Summit attendees will also get the chance to participate in Zabbix one-day courses focused on problem detection, Zabbix security, Zabbix API, and data pre-processing. Our trainers will walk you through each of these topics from A to Z, and the courses are worth checking out for Zabbix beginners and seasoned veterans alike. I can attest that by the end of the course you will have a list of features that you will want to try out in your own infrastructure – and I'm saying that as a Zabbix-certified expert.

As for those who already have Zabbix 5.0 certifications – we’ve got a nice surprise in store for you too. We will be holding Zabbix certified specialist and professional upgrade courses, which will get you up to speed with the latest Zabbix 6.0 features and upgrade your certification level to Zabbix 6.0 certified specialist and professional.

Scaling up the Zabbix Summit

But we haven’t slumbered for the last two years of working and hosting events remotely. We have continued growing as a team and expanding our partner and customer network. Who knows what surprises October will bring, but currently our plan is for Zabbix Summit 2022 to reflect our growth.

Zabbix team at the Zabbix Summit 2019

Currently, we expect to host approximately 500 attendees on-site, with online viewership projected to reach approximately 7000 unique viewers from over 80 countries across the globe.

With over 20 speakers from industries such as banking and finance, healthcare, and IT and telecommunications, and with an audience of system administrators, engineers, developers, technical leads, and system architects, the Zabbix Summit is the monitoring event for knowledge sharing and networking across different industries and roles.

The fun part

Spending the better part of the day networking and partaking in knowledge sharing can be an amazing experience, but when all is said and done, most of us will want to unwind after an eventful day at the conference. The Zabbix Summit fun part events are where you will get to strengthen your bonds with fellow Zabbix community members and simply relax in an informal atmosphere.

Zabbix Summit 2019 Sunset afterparty

The Zabbix Summit fun part consists of three parties.

  • Kick off Zabbix Summit 2022 by joining the Zabbix team and your fellow conference attendees for an evening of social networking and fun over cocktails and games at the Meet & Greet party.
  • Join the main networking event to mark the 10th anniversary of the Zabbix Summit. Apart from good vibes, cool music, and like-minded people, expect the award ceremony honoring the most loyal Zabbix Summit attendees, fun games to play, and other entertaining activities.
  • Celebrate the end of the Zabbix Summit 2022 by attending the closing party where you can network with conference peers and discuss the latest IT trends with like-minded people in a relaxed atmosphere.

Zabbix Summit 2019 Main party

Invite a travel companion

Zabbix Summit is also a great chance to bring a friend or a loved one to the conference. The conference premises are located in the very heart of Riga – perfect for taking strolls through the city and exploring Riga Old Town.

If you’re interested in a more guided experience for your companion, we invite you to register for the Travel companion upgrade. Your travel companion will get to enjoy the Riga city tour followed by a lunch with the rest of the guests accompanying the Zabbix conference participants. Last time, we nurtured our travel companions with a delightful tour across the Riga Central market, accompanied by the Latvian-famous chef Martins Sirmais, and full of local food tasting. Our team is preparing something special also for this year. The tour will take place on October 7 during the conference time.

Visit the Zabbix offices

Are you a fan of the product and what we stand for? Why not pay us a visit and attend the Zabbix open doors day on October 6 from 13:00 till 15:00? Take a tour of the office and sit down with us for an informal chat and a cup of coffee or tea. There won't be any speeches, workshops, or presentations – just friendly conversations with the Zabbix team, our partners, and the community to warm up before the Summit. There might even be friendly foosball and office badminton tournaments if volunteers appear.

Welcoming our community members at the Zabbix Summit 2019 Open Doors day

All things said and done – Zabbix Summit is not only about deep technical knowledge and opinion sharing on monitoring. It is, and has always been, primarily a celebration of the Zabbix community. It is the community feedback that largely shapes the Zabbix Summit and helps us build upcoming events on the foundations laid in previous years. Throughout the years, the Zabbix Summit has grown into much more than a simple conference – it's an opportunity to travel, visit us, connect with like-minded people, and spend a couple of days in a relaxed atmosphere in the heart of a beautiful Northern European city.

The post Zabbix Summit: A celebration of all things monitoring and open-source appeared first on Zabbix Blog.

Handy Tips #33: Pause unwanted alarms by suppressing your problems

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-33-pause-unwanted-alarms-by-suppressing-your-problems/21981/

Suppress problems indefinitely or until a specific point in time with the problem suppression feature.

There are plenty of use cases when detected infrastructure or business problems need to be temporarily suppressed, and the alerting workflows have to be paused. This applies to scenarios such as emergency maintenance, unexpected load on your systems, migrations to new environments, and many others.

Use Zabbix problem suppression feature to suppress unwanted problems and pause your alerts:

  • Suppress problems indefinitely or until a specific point in time
  • Suppress a single problem or a problem together with all of its related problems
  • Pause your actions until the problem suppression is over
  • Use relative or absolute time syntax to suppress problems until a specific point in time

Check out the video to learn how to use the problem suppression feature:

How to suppress unwanted problems:

  1. Open the Monitoring → Problems page or a Problems widget
  2. Find the problem that you wish to suppress
  3. Press the No button under the Ack column
  4. Select the suppression scope
  5. Mark the Suppress checkbox
  6. Select the suppression method
  7. If you have selected Until, provide the date or the suppression interval
  8. Optionally, provide a message that will be visible to others
  9. Press the Update button
  10. Once the window has been refreshed, the problem will be hidden
  11. Open the Problems widget or the Problems page configuration
  12. Mark the Show suppressed problems checkbox
  13. Inspect the suppressed problem

Tips and best practices
  • Once suppressed, the problem is marked by a blinking suppression icon in the Info column before being hidden
  • A suppressed problem may be hidden or shown, depending on the problem filter/widget settings
  • Suppression details are displayed in a popup when positioning the mouse on the suppression icon in the Actions column
  • The event.acknowledge API method can be used to suppress/unsuppress a problem via Zabbix API (see the sketch below)
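
For illustration, below is a minimal Python sketch of suppressing a problem through the API. It assumes a Zabbix 6.x JSON-RPC endpoint, an API token, the suppression action flag 32, and a Unix-timestamp suppress_until parameter – the endpoint URL and token are hypothetical, so verify the details against the event.acknowledge documentation for your version.

import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # hypothetical endpoint
API_TOKEN = "your-api-token"  # hypothetical token

def suppress_event(event_id, suppress_until):
    # Suppress a single problem event until the given Unix timestamp.
    payload = {
        "jsonrpc": "2.0",
        "method": "event.acknowledge",
        "params": {
            "eventids": event_id,
            "action": 32,                      # 32 = suppress (assumed action bitmask value)
            "suppress_until": suppress_until,  # Unix timestamp; omit for indefinite suppression (assumption)
        },
        "auth": API_TOKEN,
        "id": 1,
    }
    response = requests.post(ZABBIX_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()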

Do you wish to learn how to automatically detect and resolve complex problems in your infrastructure by creating smart problem thresholds?
Check out the Advanced Problem and Anomaly Detection with Zabbix training course, where under the guidance of a Zabbix certified trainer you will learn how to get the most out of Zabbix problem detection.

The post Handy Tips #33: Pause unwanted alarms by suppressing your problems appeared first on Zabbix Blog.

Zabbix 6.2 is out now!

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-6-2-is-out-now/21602/

The Zabbix team is pleased to announce the release of the latest Zabbix major version – Zabbix 6.2! The latest version delivers features aimed at improving configuration management and performance on large Zabbix instances as well as extending the flexibility of the existing Zabbix functionality.

New features

A brief overview of the major new features available with the release of Zabbix 6.2:

  • Ability to suppress individual problems
    • Suppress problems indefinitely or until a specific point in time
  • Support of CyberArk vault for secret storage
  • Official AWS EC2 template
    • Discover and monitor AWS EC2 performance statistics, alarms, and AWS EBS volumes
  • Ability to synchronize Zabbix proxy configuration directly from Zabbix frontend
    • Configuration synchronization is supported by active and passive proxies
  • Improved flexibility for hosts discovered from host prototypes
    • Link additional templates
    • Create and modify user macros
    • Populate the host with new tags
  • New items for VMware monitoring
  • The ability to further customize the hosts discovered by VMware discovery
  • Active agent check status can now be tracked from Zabbix frontend
  • Incremental configuration synchronization
    • Faster configuration synchronization
    • Reduced configuration synchronization performance impact
  • Newly created items are now checked within a minute after their creation
  • Execute now functionality is now available from the Latest data section
  • A warning message is now displayed when performing Execute now on items that do not support it
  • Templates are now grouped in template groups, instead of host groups
    • Improved host and template filtering
  • Multiple LDAP servers can now be defined and saved under Authentication – LDAP settings
  • Ability to collect Windows registry key values with the new registry monitoring items (see the example key after this list)
  • New item for OS process discovery and collecting individual process statistics
  • New digital clock widget
  • The default Global view dashboard has been updated with the latest Zabbix widgets
  • The Graph widget has been further improved
    • Added stacked graph support
    • Legend now provides additional information
    • Added support of simple trigger display
  • UI forms now provide direct links to the relevant documentation sections
  • Many other improvements and features
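
As an illustration of the new registry items, a key collecting a single registry value might look like the line below. This is an assumed example – the registry path and value name are arbitrary, and the exact parameter syntax should be checked against the Windows agent item documentation:

registry.data["HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion","ProductName"]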

Enhance the observability of your VMware infrastructure with the new items

Track your EC2 instances in a single pane of glass view

Suppress problems indefinitely or until a specific point in time

Track the active agent interface status from Zabbix frontend

New templates and integrations

Zabbix 6.2 comes pre-packaged with many new templates for the most popular vendors:

  • Envoy proxy
  • HashiCorp Consul
  • AWS EC2 Template
  • CockroachDB
  • TrueNAS
  • HPE MSA 2060 & 2040
  • HPE Primera
  • The S.M.A.R.T. monitoring template has received improvements

Zabbix 6.2 introduces a webhook integration for the GLPI IT Asset Management solution. This webhook can be used to forward problems created in Zabbix to the GLPI Assistance section.

Zabbix 6.2 packages and images

The official Zabbix packages and images are available for:

  • Linux distributions for different hardware platforms: RHEL, CentOS, Oracle Linux, Debian, SUSE, Ubuntu, Raspbian, Alma Linux, Rocky Linux
  • Virtualization platforms based on VMware, VirtualBox, Hyper-V, XEN
  • Docker
  • Packages and precompiled agents for most popular platforms, including macOS and MSI packages for Windows

You can find the download instructions and download the new version on the Download page: https://www.zabbix.com/download

One-click deployments for the following cloud platforms are coming soon:

  • AWS, Azure, Google Cloud, Digital Ocean, Linode, Oracle Cloud, Red Hat OpenShift

Upgrading to Zabbix 6.2

In order to upgrade to Zabbix 6.2, you need to upgrade your repository package and download and install the new Zabbix component packages (Zabbix server, proxy, frontend, and other Zabbix components). When you start the Zabbix Server, an automatic database schema upgrade will be performed. Zabbix agents are backward compatible; therefore, it is not required to install the new agent versions. You can do it at a later time if needed.

If you’re using the official Docker container images – simply deploy a new set of containers for your Zabbix components. Once the Zabbix server container connects to the backend database, the database upgrade will be performed automatically.

You can find step-by-step instructions for the upgrade process to Zabbix 6.2 in the Zabbix documentation.

Join the webinar

If you wish to learn more about the Zabbix 6.2 features and improvements, we invite you to join our What’s new in Zabbix 6.2 public webinar.

During the webinar, you will get the opportunity to:

  • Learn about the Zabbix 6.2 features and improvements
  • See the latest Zabbix templates and integrations
  • Participate in a Q&A session with Zabbix founder and CEO Alexei Vladishev
  • Discuss the latest Zabbix version with Zabbix community and Zabbix team members

Anyone can sign up and attend the webinar at absolutely no cost.

Don’t hesitate and sign up for the webinar now!

The post Zabbix 6.2 is out now! appeared first on Zabbix Blog.

Handy Tips #32: Deploying Zabbix in the Azure cloud platform

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-32-deploying-zabbix-in-the-azure-cloud-platform/21355/

Deploy your Zabbix servers and proxies in the Azure cloud.

There are many use cases where deploying your Zabbix server or Zabbix proxies in the cloud can reduce costs, provide an additional layer of security and redundancy, and improve the available management toolset.

Deploy your Zabbix instance in the Azure cloud with the official Zabbix cloud images:

  • Cloud images are available for the latest Zabbix server and proxy versions
  • Deploy a fresh Zabbix instance in 5 minutes
  • Dynamically scale the cloud resources
  • Select the deployment options based on your budget

Check out the video to learn how to deploy Zabbix in the Microsoft Azure cloud platform:

How to deploy Zabbix in the Azure cloud platform:

  1. Navigate to the Zabbix Cloud Images page
  2. Select the Microsoft Azure vendor and Zabbix server cloud image
  3. Press the Get it now button and press Continue in the next window
  4. On the deployment page press the Create button
  5. Provide the virtual machine name, resource group, region
  6. Specify the administrator account settings
  7. Provide the disk, network, tag, and advanced settings
  8. Verify the provided settings
  9. Press Create to begin deploying the virtual machine
  10. For public key authentication: download and store the private key
  11. Once the deployment is complete, press the Go to resource button
  12. Save your public IP address and connect to it via SSH
  13. Save the initial frontend username and password
  14. Use the public IP address to connect to your Zabbix frontend
  15. Log in with the saved username and password

Tips and best practices
  • The default SSH user is called azureuser
  • Remember to store your SSH private key in a secure location
  • You can access the Zabbix database by using the root user
  • The password for the MySQL database root user is stored in /root/.my.cnf configuration file

Feeling overwhelmed with deploying and managing your Zabbix instance?
Check out the Zabbix certified specialist courses, where under the guidance of a Zabbix certified trainer, you will learn how to deploy, configure and manage your Zabbix instance.

The post Handy Tips #32: Deploying Zabbix in the Azure cloud platform appeared first on Zabbix Blog.

Handy Tips #31: Detecting invalid metrics with Zabbix validation preprocessing

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-31-detecting-invalid-metrics-with-zabbix-validation-preprocessing/21036/

Monitor and react to unexpected or faulty outputs from your monitoring targets by using Zabbix validation preprocessing.

In case of a failure, some monitoring endpoints like sensors or specific application or OS level counters can start outputting faulty metrics. Such behavior needs to be detected and reacted to as soon as possible.

Use Zabbix preprocessing to validate the collected metrics:

  • Select from and combine multiple preprocessing validation steps
  • Display a custom error message in case of an unexpected metric
  • Discard or change the value in case of an unexpected metric
  • Create an internal action to react to items becoming not supported

Check out the video to learn how to use preprocessing to detect invalid metrics.

Define preprocessing steps and react on invalid metrics:

  1. Navigate to Configuration → Hosts and find your host
  2. Click on the Items button
  3. Find the item for which the preprocessing steps will be defined
  4. Open the item and click on the Preprocessing tab
  5. For our example, we will use the Temperature item
  6. Select the In range preprocessing step
  7. Define the min and max preprocessing parameters
  8. Mark the Custom on fail checkbox
  9. Press the Set error to button and enter your custom error message
  10. Press the Update button
  11. Simulate an invalid metric by sending an out-of-range value to this item
  12. Navigate to Configuration → Hosts → Your host → Items
  13. Observe the custom error message being displayed next to your item

Tips and best practices
  • Validation preprocessing can check for errors in JSON, XML, or unstructured text with JSONPath, XPath, or Regex (see the example below)
  • User macros and low-level discovery macros can be used to define the In range validation values
  • The Check for not supported value preprocessing step is always executed as the first preprocessing step
  • Internal actions can be used to define action conditions and receive alerts about specific items receiving invalid metrics
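
As a hypothetical illustration of the Check for error in JSON validation step: suppose a faulty endpoint starts returning the payload below (both the payload and the JSONPath are examples, not taken from a specific template):

{
"status":"error",
"message":"sensor failure"
}

A Check for error in JSON step configured with the JSONPath $.message would then mark the item as not supported and set "sensor failure" as its error message.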

The post Handy Tips #31: Detecting invalid metrics with Zabbix validation preprocessing appeared first on Zabbix Blog.

Handy Tips #30: Detect continuous increase or decrease of values with monotonic history functions

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-30-detect-continuous-increase-or-decrease-of-values-with-monotonic-history-functions-2/20867/

Analyze your incoming metrics and look for interruptions in continuously increasing or decreasing metrics with the monoinc and monodec history functions.

A continuous increase or decrease is the norm for metrics such as server uptime, time remaining until a task is executed, number of daily transactions, and many other such use cases. A software or hardware failure could impact these counters and we need to ensure that they are providing the data in an expected manner.

Use monoinc and monodec history functions and detect if value monotonicity is true or false:

  • Detect monotonicity over a number of values or a time period
  • Strict and weak modes of monotonicity detection
  • Receive alerts if a metric is not monotonic
  • The monotonicity check can be combined with other functions to create flexible problem generation logic (see the combined expression example below)
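
For example, the monotonicity check could be combined with nodata() so that a trigger fires only while the item is still receiving data – a sketch, assuming a Linux server host with the system.uptime item:

monoinc(/Linux server/system.uptime,1h,"strict")=0 and nodata(/Linux server/system.uptime,30m)=0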

Check out the video to learn how to use the monoinc and monodec history functions

How to configure monoinc and monodec history functions:

  1. Identify the items for which you wish to detect monotonicity
  2. For this example, the system.uptime key is used
  3. Navigate to Configuration → Hosts → Your host → Triggers
  4. Press the Create trigger button
  5. Provide the trigger name and severity
  6. Press the Add button to add the trigger expression
  7. Select the item, the monoinc function, evaluation period, mode and result
  8. For this example, we will use the strict mode
  9. An example expression: monoinc(/Linux server/system.uptime,1h,"strict")=0
  10. Simulate a problem by restarting the host
  11. Navigate to Monitoring → Problems
  12. Confirm that the problem has been generated

Tips and best practices
  • The functions return 1 if all elements in the evaluation period continuously decrease or increase, 0 otherwise
  • The default weak mode checks whether every value is bigger/smaller than or the same as the previous one
  • The strict mode checks whether every value has strictly increased/decreased
  • Relative and absolute time shifts can be used to analyze time periods for monotonicity

The post Handy Tips #30: Detect continuous increase or decrease of values with monotonic history functions appeared first on Zabbix Blog.

Handy Tips #29: Discovering hosts and services with network discovery

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-29-discovering-hosts-and-services-with-network-discovery/20484/

Automate host creation and monitoring with Zabbix network discovery.

Creating hosts for a large number of monitoring endpoints can become a menial and time-consuming task. It is important to provide the end-users with the tools to automate such tasks to create and start monitoring hosts based on a user-defined set of rules and conditions.

Automate host onboarding and offboarding with Zabbix network discovery:

  • Discover monitoring endpoints and services in user-defined IP ranges
  • Define a set of services that should be discovered
  • Provide custom workflows based on the received values
  • Onboard or offboard hosts based on the discovery status

Check out the video to learn how to discover your monitoring endpoints with Zabbix network discovery.

How to configure Zabbix network discovery:

  1. Navigate to Configuration → Discovery
  2. Press the Create discovery rule button
  3. Provide the discovery rule name, IP range and update interval
  4. Define discovery checks
  5. Press the Add button
  6. Navigate to Configuration → Actions → Discovery actions
  7. Press the Create action button
  8. Provide the action name and action conditions
  9. Navigate to the Operations tab
  10. Define operations to assign templates and host groups
  11. Press the Add button
  12. Wait for the services to be discovered
  13. Navigate to Monitoring → Discovery and confirm the discovery status
  14. Confirm that the hosts have been created in Zabbix

Tips and best practices
  • A single discovery rule will always be processed by a single Discoverer process
  • Every check of a service and a host generates one of the following events: Host or service – Discovered/Up/Lost/Down
  • The hosts discovered by different proxies are always treated as different hosts
  • A host is added even if the Add host operation is missing, as long as you select operations that act on a host, such as enable/disable, add to host group, or link template
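
Discovery rules can also be created programmatically. Below is a minimal Python sketch of a drule.create API call, assuming a Zabbix 6.x JSON-RPC endpoint and an ICMP ping discovery check; the URL and token are hypothetical, and the parameters should be verified against the drule.create documentation:

import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # hypothetical endpoint
API_TOKEN = "your-api-token"  # hypothetical token

payload = {
    "jsonrpc": "2.0",
    "method": "drule.create",
    "params": {
        "name": "LAN discovery",       # example rule name
        "iprange": "192.168.1.1-254",  # example IP range to scan
        "delay": "1h",                 # discovery interval
        "dchecks": [
            {"type": "12"},            # 12 = ICMP ping (assumed check type value)
        ],
    },
    "auth": API_TOKEN,
    "id": 1,
}

response = requests.post(ZABBIX_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())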

The post Handy Tips #29: Discovering hosts and services with network discovery appeared first on Zabbix Blog.

Handy Tips #28: Keeping track of your services with business service monitoring

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-28-keeping-track-of-your-services-with-business-service-monitoring/20307/

Configure and deploy flexible business services and monitor the availability of your business and its individual components.

The availability of a business service tends to depend on the state of many interconnected components. Therefore, detecting the current state of a business service requires a sufficiently complex and flexible monitoring logic.

Define flexible business service trees and stay informed about the state of your business services:

  • Business services can depend on an unlimited number of underlying components
  • Select from multiple business service status propagation rules
  • Calculate the business service state based on the weight of the business service components
  • Receive alerts whenever your business service is unavailable

Check out the video to learn how to configure business service monitoring.

How to configure business service monitoring:

  1. Navigate to Services → Services
  2. Click the Edit button and then click the Create service button
  3. For this example, we will define an Online store business service
  4. Name your service and mark the Advanced configuration checkbox
  5. Click the Add button under the Additional rules
  6. Set the service status and select the conditions
  7. For this example, we will set the status to High
  8. We will use the condition "If weight of child services with Warning status or above is at least 6"
  9. Set the Status calculation rule to Set status to OK
  10. Press the Add button
  11. Press the Add child service button
  12. For this example, we will define Web server child services
  13. Provide a child service name and a problem tag
  14. For our example, we will use the problem tag node name with the Equals operator and the value node #
  15. Mark the Advanced configuration checkbox and assign the service weight
  16. Press the Add button
  17. Repeat steps 11 – 16 and add additional child services
  18. Simulate a problem on your services so that the combined weight of child services in a problem state is at least 6
  19. Navigate to Services → Services and check the parent service state
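
To illustrate the weight calculation: assuming, for instance, three Web server child services each assigned a weight of 2, problems on all three add up to a combined weight of 2 + 2 + 2 = 6. This satisfies the "at least 6" condition defined above, so the parent Online store service switches to the High status; with problems on only two servers (combined weight 4), the parent service stays OK.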

Tips and best practices
  • Service actions can be defined in the Services → Service actions menu section
  • Service root cause problem can be displayed in notifications with the {SERVICE.ROOTCAUSE} macro
  • Service status will not be propagated to the parent service if the status propagation rule is set to Ignore this service
  • Service-level tags are used to identify a service. Service-level tags are not used to map problems to the service

The post Handy Tips #28: Keeping track of your services with business service monitoring appeared first on Zabbix Blog.

Handy Tips #27: Tracking changes with the improved Zabbix Audit log

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-27-tracking-changes-with-the-improved-zabbix-audit-log/20176/

Track the creation of new entities, updates to the existing configuration, and potential intrusion attempts with Zabbix audit log.

If your monitoring environment is managed by more than a single administrator, it can become hard to track the implemented changes and additions. Having a detailed audit log can help you analyze any potentially unwanted changes and detect potential intrusion attempts.

Use Zabbix audit log to track changes in your environment:

  • Track configuration changes and updates
  • Audit log displays information about Zabbix server and frontend operations
  • Identify potential intrusion attempts and their source
  • Filter the audit log by action and resource types

Check out the video to learn how to track changes in Zabbix audit log.

How to track changes in Zabbix audit log:

  1. Navigate to Administration → General → Audit log
  2. Enable and configure your Zabbix audit log settings
  3. Perform a failed login attempt
  4. Check the related entries under Reports → Audit log
  5. Navigate to Configuration → Hosts
  6. Import hosts or templates from a YAML file
  7. Check the related entries under Reports → Audit log
  8. Filter the entries by the Recordset ID
  9. Navigate to Configuration → Hosts
  10. Find a host with a low-level discovery rule on it
  11. Execute the low-level discovery rule
  12. Check the related entries under Reports → Audit log

Tips and best practices
  • Audit logging should be enabled in the Administration settings to collect audit records
  • Audit log entry storage period can be defined under Administration → General → Audit log
  • Each audit log entry belongs to a Recordset ID which is shared by entries created as a result of the same operation
  • The auditlog.get API method can be used to obtain audit log entries via the Zabbix API (see the sketch below)
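
For illustration, below is a minimal Python sketch of an auditlog.get call that fetches the ten most recent audit records from the last hour. It assumes a Zabbix 6.x JSON-RPC endpoint and that entries can be sorted by the clock field – the URL and token are hypothetical, so verify the parameters against the auditlog.get documentation:

import time

import requests

ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"  # hypothetical endpoint
API_TOKEN = "your-api-token"  # hypothetical token

payload = {
    "jsonrpc": "2.0",
    "method": "auditlog.get",
    "params": {
        "output": "extend",
        "time_from": int(time.time()) - 3600,  # entries from the last hour
        "sortfield": "clock",                  # assumed sortable field
        "sortorder": "DESC",
        "limit": 10,
    },
    "auth": API_TOKEN,
    "id": 1,
}

response = requests.post(ZABBIX_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())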

The post Handy Tips #27: Tracking changes with the improved Zabbix Audit log appeared first on Zabbix Blog.