
Data solution for solar energy application

Post Syndicated from Brad Berwald original https://blog.zabbix.com/data-solution-for-solar-energy-application/13005/

Morningstar, the world’s leading supplier of solar controllers for remote solar power systems, has partnered with Zabbix to provide pre-configured integration of their data-enabled solar power products with the Zabbix network monitoring solution. Now, both power system data and network performance metrics integrate seamlessly, allowing remote solar systems to be monitored and managed from a single software platform, on premises or in the cloud, using solutions from Zabbix.

Contents

I. About Morningstar (1:43)
II. Products and technology (6:01)

III. Data solution for solar application (10:28)

IV. Zabbix solution (15:09)

V. Conclusion (21:40)
VI. Questions and Answers (23:09)

 

This post presents an overview of Morningstar’s diverse product line and the industry applications it supports, as well as how the Zabbix network monitor can be used to manage and log time-series data using the SNMP and Modbus protocols for powerful, scalable system oversight and trend analysis.

Morningstar has been working in partnership with Zabbix to provide integration for the Morningstar products, including easy-to-use templates and pre-formatted data sets in order to speed up getting these products online, so that customers can monitor both their network data and solar power system data at remote sites.

About Morningstar

Morningstar is the leading supplier of charge controllers and inverters generally used in remote power systems around the world.

Morningstar, located in Newtown, Pennsylvania, USA, has sold over 4 million products deployed into the field since the company’s inception in 1993. Morningstar currently works in over 100 countries and provides reliable remote power for mission-critical applications.

We’d like to think of ourselves as the ‘charging experts’ because of our focus on battery life and many years of charging innovation.  We have a diverse product line and many models designed for application specific needs, such as solar lighting and telecommunications. Morningstar has one of the lowest hardware failure rates in the industry.

Some of these mission-critical applications include:

— Residential and rural electrification.

— Commercial systems.

— Industrial products, including telecommunications, oil and gas, security applications.

— Mobile and marine applications, which generally include boats, RVs, caravans, agricultural applications, etc.

Overview of Morningstar solar applications

 

— The railroad industry, where remote signaling and track management are often solar-powered because of their critical nature and the absence of a readily available electric grid.

— Traffic applications: early warning systems, signaling and messaging systems, and traffic and speed monitoring equipment can also be easily powered for mobile deployment with a battery-based system.

— The oil and gas industry is a specific and notable market for Morningstar: oilfield automation equipment that measures gas flow and pressure (RTUs), as well as methane injection points used to keep the gas flowing and avoid well freeze-ups, can be powered by solar with only a modest amount of power needed for data monitoring. Since the pipelines often traverse very remote regions, this is highly advantageous for getting power where it’s needed.

— In telecommunications, cellular base stations and the backhaul links that carry data for the sites, land mobile radio applications, and satellite-based infrastructure benefit from remote solar power. In these applications, the loads can be modest or quite significant. In the latter case, several controllers can be combined to charge a very large battery bank, often with a hybrid diesel gen-set and other renewable energy sources to form a hybrid power system. This increases reliability, provides diversity during inclement weather, and maintains the high integrity of the site link.

— In rural electrification, small amounts of DC power can be provided in remote locations with no grid access, in countries with large populations and a huge need for lighting and cell phone charging.

Recently, we did a notable project in Peru, where nearly 1 million Peruvians were provided remote power access using 200,000 DC energy boxes. These provided basic 12V DC power and USB charging and were distributed across the country in some of the most remote locations. The home systems easily met the needs for lighting, device charging, and powering other small equipment. In addition, 3,000 integrated power systems for community centers were also deployed to provide 230V AC power for more critical loads, including more substantial lighting, communication and, in some cases, health equipment for the benefit of the local population.

So, we’re very proud to have deployed probably one of the largest and most ambitious rural electrification projects in the history of the off-grid industry. The project was completed last year with our partner Tozzi Green of Italy.

Products and technology

Morningstar has a diverse product set covering all power levels — anywhere from modest 50W needs up to models that handle 3.2kW per device and can be paralleled for even greater capacity. We also provide inverter systems: the SureSine and the upcoming MultiWave, which will come to market in the near future. These inverters provide AC power and enable hybrid system charging (combining both solar and AC sources). They meet more demanding load needs and add robust high-current charging capabilities from the grid or from diesel generators.

Morningstar products and technologies

So, together all these product lines make up a diverse set of products that really fulfill the variety of needs in an off-grid remote power system. In many cases, each of our products includes open communication protocols, which can be used for remote management.

Charge controllers

A charge controller is installed between the PV modules and the battery. It monitors various system power and voltage readings and temperatures. The charge controller also manages the batteries: it provides adequate charging for long battery life, takes the batteries through their various charging stages, and manages the DC loads connected to the device. It can extend battery life significantly if the battery setpoints are configured correctly and the right choice for the battery model is made. That depends a lot on the battery chemistry, the temperatures it will experience, and how deeply it will be cycled or discharged each day while providing power to the system.

Our charge controller line includes both PWM and MPPT topologies.

 

MPPT controllers convert DC power from the PV array to the proper battery voltage. Each has an integrated DC-to-DC converter and controls the charging of the battery, preventing overcharging and extending battery life.

 

A controller can also be used with one of our inverter products to power small AC loads, such as equipment that requires 120 or 230-volt power remotely in the field.

Our product line covers a variety of PWM charge controllers.

PWM charge controllers

  • Pulse width modulation products are more cost-effective, simpler in design (from a complexity standpoint), and provide direct charging from an equally sized nominal solar array.
  • The MPPT charge controller line tracks the maximum power point (MPPT) of the array, optimizing power harvest. The modules can span a much wider range of voltages, including much higher voltages, and the controller monitors and tracks them to keep the system at its optimal operating point.
  • The SunSaver and ProStar MPPT lines are used extensively in smaller systems under a thousand watts.
  • Our TriStar family is used for 3kW or greater and can be paralleled. A notable product is our 600V controller, which allows the PV modules to be wired in series for very high voltage input, providing advantages in efficiency and PV array distance from the controller. All the MPPT controllers take the input voltage and convert it to the proper output voltage to support 12V, 24V, or 48V battery systems.
  • The Morningstar inverter line includes the SureSine and MultiWave inverter chargers. We also have an extensive line of accessories used with each of these controllers. These generally provide protocol conversion hardware, interface adapters, and other items that can control relays to support system control or actuate additional components in remote off-grid systems.

EMC-1 Morningstar’s Ethernet MeterBus converter

The EMC-1 is a simple serial-to-Ethernet converter that runs a real-time operating system and supports a variety of protocols, so Morningstar products can be connected to industry-specific applications using standard protocols — Modbus over IP or SNMP. It can also serve a simple HTML web GUI for a direct, one-to-one connection to the product, allowing basic status monitoring from any type of device, including mobile devices such as phones or tablets.
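To give a sense of how lightweight this is, a single SNMP poll from any Linux host can read a value exposed by the EMC-1. A minimal sketch (the community string and OID are placeholders, not values from Morningstar's published MIB):

# poll one value from the EMC-1 over SNMP (hypothetical OID)
snmpget -v2c -c public 192.168.1.50 1.3.6.1.4.1.33333.1.1.0

A Zabbix SNMP item created from the pre-configured templates would collect the same kind of value on a schedule instead of polling it by hand.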

Data solution for solar application

Challenges to remote monitoring of solar power sources

Power for wireless ISP infrastructure is a common off-grid application that requires network traffic and power to be monitored together. Customer access in the field over Wi-Fi or LTE must also be provided.

When these devices are deployed, their network equipment clearly has to be monitored with a network management system, or NMS. With the EMC-1, SNMP, and Zabbix, it is now far easier to integrate the power systems into the same monitoring system. So, you have a single point of software and data collection, and both power and network bandwidth and status can be monitored at the same time.

What this monitoring can help achieve:

  • Measurement of the true load consumption in the field. The power levels will vary depending on the type, amount of usage, and the technology and frequencies used. So, the load in the field throughout the day, during peak and off-peak hours can be directly monitored in real time.
  • Detection and root cause of network outages. We need to minimize network outages and to ensure that the site is reliable and the network is on at all times to avoid customer dissatisfaction and frustration by the operating carrier. The ability to monitor both power and network allows the root cause of network outages to be determined, whether it’s a system configuration, bandwidth restrictions, or something that has caused difficulty with the power system itself, such as a depleted battery, insufficient solar, even electrical faults, or possibly tampering with the system.
  • Ensuring sufficient power at the site to prevent deep battery discharge. It also ensures that you have adequate PV to cycle the battery properly. So, when the battery is depleted each day from powering the loads, it can be fully recharged the next day when PV power is available again. This balance is difficult to manage because you have to always ensure power for the battery, protect the loads, but you may or may not have adequate sun each day. So, reserve power is often provided in the system to ensure that the site will remain up during lower than average or uneven periods of PV supply.

  • Measurement of the current system status, as well as historical data. The network monitoring software captures all of this data, sometimes with a high level of granularity, so you can see what is happening in the system on a minute-by-minute or hour-by-hour basis. This helps detect system faults that would otherwise be missed.

  • Ensuring system resiliency during periods of low production. At peak times, you usually have more than adequate power. However, off-grid systems are often sized to handle a worst-case scenario, for instance the winter months or off-peak months with fewer sun hours and lower levels of solar insolation than expected during the summer. During these worst-case periods of the year, monitoring is critical because that is when you are most likely to experience an outage due to inadequate solar.

  • When long-term data is available, you can compare, for instance, month-to-month or season-to-season power output, as well as look at trends over the system’s lifetime of operation to detect anomalies and negative trends that indicate pending battery failure.

With a lot of lead-acid batteries, a minimum of five years is generally acceptable. With newer lithium technologies, battery life is extended to 10 or more years when adequately sized. So, the batteries can have a robust life as long as they are sized correctly and given adequate power.

But monitoring the end stage of a battery, which most likely will occur at some time in the system, is really critical. Many remote sites that are deployed for an extended period of time can go through one or two battery replacement cycles. So, a downward trend with the power declining over time and the batteries beginning to show signs that their health is no longer adequate to support the system can be detected with this long-term data analysis.

Zabbix solution

Remote monitoring of a Morningstar EMC-1 adapter’s IP connection through Zabbix monitoring platform

A typical system in this diagram shows one of the ProStar MPPT controllers connected to a solar array and a battery storage system. The loads are typical of many of the applications described above. An EMC-1 can be connected to the device to provide IP connectivity, which can then be used with a variety of services:

  • Modbus protocol to connect to SCADA or other HMI Data viewing solutions, which are common in automation and oil and gas.
  • Simple HTTP or HTML web pages to get a simple look at a dashboard to understand what’s going on in the system.
  • SNMP can be used with network monitoring software, and the Zabbix network monitoring software can monitor the entire site with just one tool.

A cloud-based or server-based deployment works well for data storage, data logging, notifications, and alarms. Native or external databases in the cloud can be used to archive the large amounts of data that will be accumulated. The sites can grow to hundreds or even thousands of deployed systems, so the software tool and the server must be scalable to grow over time and keep up with the needs of the data.

Advantages:

  • The benefit of an IP-based solution is its compatibility with any network transport layer. In addition, there is a variety of wireless options in the field, including point-to-point licensed and unlicensed links, Wi-Fi, proprietary wireless protocols, and cellular.
  • Recently, notable gains in the satellite industry have provided lower latency and higher bandwidth. With satellite, you can often reach almost any part of the world, which gives it great benefits for solar power applications.
  • SNMP is a very lightweight protocol. On a metered or wireless connection, especially in these hard-to-reach locations, low-overhead UDP packets and minimal monitoring infrastructure ensure minimal impact on the system itself in terms of overhead.

Zabbix dashboards for Morningstar solar systems

  • Native Morningstar SNMP support is provided by these tools. We worked to review use cases, system needs for solar applications, and data sets. The MIB files have already been imported and device templates pre-configured for the Morningstar product solutions that support the EMC-1.
  • Dedicated templates allow you to easily connect the hardware to an existing system and start monitoring Morningstar devices using your existing Zabbix instance.
  • Performance visibility of solar-powered systems is available:

— on a very high level to see if there are any systems that have needs or are in a fault state, or

— in greater detail, to analyze the time-series data, correlate it with other aspects of the system, determine the root cause of a problem, and see how the data is trending. Time-series data correlation provides for accelerated troubleshooting.

  • An at-a-glance dashboard management tool makes it easy to monitor the status of all Morningstar devices on the network and to scale to hundreds of sites.
  • Advanced and custom alerts, sent out by Zabbix and triggered by power system events, ensure proactive notification when there is a pending issue at the site, hopefully before critical loads drop. Once notified, you can use the bi-directional nature of some of the other protocols to make system changes and corrections, extend runtimes, or even activate auxiliary charging systems, such as generators, to prevent an outage. Such proactive monitoring can only take place at scale using a tool such as Zabbix.

Advantages of monitoring with Zabbix

  • Morningstar provides some simple PC-based utilities that run on Windows and offer direct Modbus communication and very simple data logging for a small number of devices. The Morningstar MSView utility allows configuration files to be uploaded and deployed to controllers in the field and supports basic troubleshooting.
  • Morningstar Live View is our built-in web dashboard that runs on the EMC. It serves a simple web page with everything displayed in HTML so that it can be viewed on any device regardless of the operating system.

These two products are meant for troubleshooting, site deployment, and configuration of small-scale systems. They are not designed to scale.

  • With Zabbix, an almost unlimited number of devices can be connected depending on your computing resources and power.
  • Zabbix supports SNMP and Modbus, which is beneficial for both telecom and industrial automation or smart city applications.
  • Zabbix gives you a real-time data display, as well as custom alerts and notifications. You can set up custom log intervals, downloading of extensive amounts of log data, etc. Reports can be generated based on custom filtering, as well as long-term historical data, which becomes more critical to understanding the site’s longevity.
  • For cloud-based systems where APIs are available, cloud-to-cloud integrations can be utilized, and advanced data management, analysis, or intelligence can be added on top of existing servers using additional third-party tools.

So, it’s really the only way to manage data of this scale and size.

Conclusion

Zabbix adds a great deal of value and capability to Morningstar products in the field. When access is provided via satellite, cellular, or fixed wireless technologies, the charge controllers can perform their duty of providing remote power while integrating easily, using the existing protocols, into monitoring across the entire system deployment.

Solar equipment is often used to provide power for remote network infrastructure. Integrating data from the network components and power systems into a centralized NMS provides an essential management tool to optimize system health and increase uptime. Zabbix also adds configuration options and valuable data analytics to ensure full system visibility. More information on the Zabbix network monitoring tools or Morningstar data-enabled remote power products is available at https://www.zabbix.com and www.morningstarcorp.com or can be requested from [email protected] and [email protected], respectively.

Questions and Answers

Question. Are Morningstar templates shared somewhere? Are they available to the public?

Answer. A part of the partnership with Zabbix is to get all of this integrated. We’re putting the finishing touches on how it will be made available and easily downloadable as part of our SNMP support documentation. In addition, we can do some cross-referencing to help get our products online and plugged into the major network monitoring systems. All of that will probably be available within the next month.

Question.  Zabbix starting from 5.2 natively supports Modbus and MQTT. Do you plan on using that in your environment?

Answer. Yes, MQTT has come up quite recently and is an ideal solution where IP addressing challenges exist and pub/sub style data reporting, with sessions initiated from within the network, is preferred. Currently, we support Modbus and SNMP, though we are considering other protocols. Modbus has been used within the solar industry for a long time for automation and control. We also have an extensive market in the oil and gas industry, where Modbus is utilized both for polling data and for real-time control by actually making configuration changes to the product remotely.

SNMP is a more recent development and it helps to get on the bandwagon with telecommunications and IT-related markets. So, it’s an easy transition using a protocol that customers are already familiar with.

Question. How do you use report generation? How do you enable it and implement it in Zabbix?

Answer. A lot of our customers are looking for trending data over a certain period of time. So, they would set up regular intervals for the data to be collected and reported because the long-term trending data is about looking at the same site during different periods of time or looking at the same site next to its peers to see how the power system may be varying from what is expected. So, regular report intervals can be executed and filtered based on certain conditions.

There are really a few key parameters of a solar site to look at to understand the health of the system. You need to focus on the battery levels, the maximum power of the solar panel, and a quick diagnostic check to find out if the controller shows any faults or alarms. So, if you have a simple report you can quickly be sure that hundreds of sites are in good shape. If one of them isn’t, you could drill down into more detail on just that specific site.

Aranet — a wireless IoT sensor platform

Post Syndicated from Toms Reksna original https://blog.zabbix.com/aranet-a-wireless-iot-sensor-platform/12953/

Aranet — wireless IoT sensor platform. Wherever you need to measure anything – temperature, air quality, light, or any other physical parameter – Aranet’s main mission is to deliver these measurements simply, easily, and above all – wirelessly. Aranet is manufactured by SAF Tehnika — a company with over 20 years of experience in the telecom industry, microwave radio, and test & measurement equipment manufacturing, and a certified partner of Zabbix.

Contents

I. Aranet wireless sensor network (1:41)
II. Aranet in retail (5:53)
III. Indoor air quality and COVID-19 (8:20)
IV. Partnership with Aranet (12:11)
V. Questions & Answers (13:52)

Aranet wireless sensor network

Aranet is a wireless sensor network consisting of the Aranet PRO base station and sensors transmitting data to it over the 868 MHz frequency in Europe and the 920 MHz frequency in the United States. These frequencies allow a very large distance between the sensor and the base station: up to 3 kilometers line of sight and a couple of hundred meters indoors.

Sensors are intended to measure different environmental parameters. You can connect up to a hundred sensors per base station. Sensors can be configured to send the data over different intervals — once every minute, two minutes, five, or 10 minutes. Sensors are very power efficient — with a regular AA battery, they will last up to 10 years.

Aranet ecosystem

Aranet technology is based on the LoRa physical layer. We have built our proprietary LPWAN protocol with XXTEA encryption on top of LoRa to make the radio parameters better and to increase the battery life.

Aranet technology

The brain of the system is the Aranet PRO base station – the radio receiver with a built-in web server housing SensorHUB software and internal memory for local data storage. It is made with ease of use in mind – you can connect directly to the base station with your PC, laptop, or phone over Ethernet or Wi-Fi, open up your web browser and access the free SensorHUB software. You don’t even need to install anything.

Aranet PRO base station offers a lot of features such as graphing, exporting data, etc. In addition, its internal memory allows for storing 10 years of readings even if the Internet goes down.

The sensors send data to the base station. Several such base stations can be aggregated into the Aranet Cloud solution, which collects data from all of them and allows you to access the data from anywhere.

Aranet architecture

With over 20 years of experience in radio manufacturing, we believe that we’ve created one of the best-in-class systems in terms of wireless connectivity with our base stations and in-house cloud. However, we are looking for a strategic partnership where the Aranet system can become a part of a larger system. This brought us to the partnership with Zabbix so that we can integrate our cloud solution with the Zabbix monitoring system.

Aranet philosophy

Aranet Example Use Cases

Aranet for retail

Rimi

Aranet has been actively used in retail, for instance by Rimi — a chain of Latvian supermarkets, where 6,500 sensors have been installed in 125 stores. Aranet is planning to expand to other Baltic states.

Aranet equipment is primarily used for:

  • Monitoring of freezer temperatures. Earlier, the temperature had to be checked manually — somebody had to walk around with a legal pad to make sure that the freezers were working properly and to report to the relevant government agencies. Aranet allowed this process to be automated.
  • Alarms in case of malfunction. In the case of a malfunction, an alarm can be sent to avoid product spoilage.
  • Working on predictive maintenance, including machine learning algorithms for predictive maintenance to locate anomalies in the defrost cycle temperature data helping to prevent breakages.

Aranet in retail

Benefits

  • Even the largest supermarkets (8,800 m²/94,000 ft²) can be covered with a single base station.
  • Manual data collection can be avoided.
  • Freezer temperature operating costs can be optimized (20% energy costs reduction).
  • Product spoilage can be avoided.
  • Litigation/fines for slip and fall accidents can be avoided.

Aranet for indoor air quality and COVID-19 safety

Due to COVID-19, many governments and health agencies, including the Centers for Disease Control and Prevention in the United States, have changed their guidelines and now state that COVID-19 can be transmitted through aerosols. Aerosols are small droplets released when we cough, sneeze, or talk. As these droplets are small (about five microns), they linger in the air for nine minutes or more. That means you don’t even have to be in direct contact with an infected person to catch the disease.

This calls for proper ventilation practices, which can reduce the time aerosol particles stay in the air by a factor of 10.

Aranet4 PRO – a wireless COVID-19 safety network

One way to estimate whether ventilation is sufficient is to measure CO2. The amount of CO2 (air exhaled by other people) in a room is a measure of the risk of contagion. The recommended air circulation per person is 60 m³/h, which corresponds to approximately 800 ppm CO2 concentration — about twice the outdoor value.
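As a rough sanity check of that figure (assuming a typical exhaled CO2 rate of about 0.018 m³/h per person and an outdoor level of roughly 400 ppm): with 60 m³/h of fresh air per person, the steady-state rise is about 0.018 / 60 ≈ 0.0003, i.e. roughly 300 ppm above outdoor, which lands near the 700-800 ppm range quoted above.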

Aranet wireless CO2 sensor

Aranet offers a wireless CO2 sensor that also measures temperature, relative humidity, and air pressure. It comes with a useful Bluetooth application, which allows you to easily get the latest readings. But the most important thing is that this sensor can generate alerts. So, whenever the value exceeds the critical level, you have a visual indication — green, yellow, or red, as well as an audible alert prompting to manually increase the ventilation, for instance, by opening windows.

Lately, these sensors have been gaining popularity, especially in schools, universities, and offices as they offer:

  • Simple plug-and-play setup with the Aranet base station.
  • Updating information available locally on each sensor, as well as centrally on the base station, so that you can see what spaces need additional ventilation.
  • Free software – graphs, reports, centralized alarms.
  • Control of airborne COVID-19 spread in schools, offices, and other indoor facilities.

Partnership with Aranet

Aranet wireless network can be implemented in many other industries:

  • Horticulture,
  • Livestock,
  • Building Management,
  • Warehousing,
  • Data Centres,
  • Pharma,
  • Medical,
  • Retail.

So, Aranet is looking for integration and distribution partners, which are interested in wireless monitoring. Details of the partnership are available on aranet.com or can be requested from [email protected].

Aranet’s core value is the wisdom of Lord Kelvin: “you can only improve what you can measure”. So, we strive to deliver these measurements in the easiest and most straightforward way possible so that you can improve whatever you wish.

Questions & Answers

Question. Is there some way or some benefit to integrating Aranet with Zabbix?

Answer. Aranet has many diverse applications, and so does Zabbix. Adding physical parameters on top of the network parameters already monitored would help. For data centers or retail stores, in addition to alerts about something wrong with the network, alarms about something physical happening would be useful — for instance, being alerted if it’s too hot.

Question. Is it possible to switch your sensors to LoRaWAN so that we can use existing networks?

Answer. We have decided to have our proprietary network based on the LoRa physical layer with proprietary communication software. This decision was made for several reasons:

  • Ease of use — the main thing our customers actually value. An Aranet system can be set up in a couple of minutes: you just place the sensors and they start working. With LoRaWAN you have the base station from one provider and sensors from another, so it takes time to make the system work. Aranet works out of the box.
  • Improved battery life due to our protocol.
  • Improved security, as with Aranet you control the whole ecosystem from the base station to the sensors. In addition, with Aranet you won’t face dependency, password management, or communication issues.
  • A private network.

Question. Are there any electrical sensors — volts, amps, power, or anything like that?

Answer. We can monitor voltage, but mostly through third-party integrations. We also have pulse output sensors, which you can connect to electricity meters, for instance, so that consumption can be monitored.

 

Let me subscribe – Zabbix masters IoT topics

Post Syndicated from Wolfgang Alper original https://blog.zabbix.com/let-me-subscribe-zabbix-masters-iot-topics/12710/

Zabbix 5.2 supports two important protocols used in the world of the Internet of Things — MQTT and Modbus. Now we can benefit from the newest Zabbix features and integrate Zabbix network monitoring in the world of IoT.

Contents

I. What is MQTT? (3:32:13)
II. MQTT and Zabbix integration (3:39:48)

1. MQTT setup (3:40:03)
2. Node-RED (3:42:12)
3. Splitting data (3:45:45)
4. Publishing data from Zabbix (3:52:23)

III. Questions & Answers (3:55:42)

What is MQTT?

MQTT (Message Queuing Telemetry Transport) was invented in 1999 and designed to be bandwidth-efficient, lightweight, and thus battery-efficient. Initially, it was developed for monitoring oil pipelines.

It is a well-defined ISO standard — ISO/IEC 20922, and it is increasingly adopted due to its suitability for the Internet of Things (IoT), sensor networks, home automation, machine-to-machine (M2M), and mobile applications. MQTT usually uses TCP/IP as the transport protocol — over port 1883, and can be encrypted using TLS, with 8883 as the default port.

There is a variation of MQTT available, MQTT-SN (MQTT for Sensor Networks), used for non-TCP/IP networks, such as Zigbee (an IEEE 802.15.4 radio-based protocol) or other UDP/Bluetooth-based implementations.

There are two types of network entities: ‘message brokers‘ and ‘clients‘.

MQTT supports three Quality-of-Service (QoS) levels:

— 0: At most once – “Fire and forget” where you might or might not receive the message.
— 1: At least once – The message can be sent/delivered multiple times.
— 2: Exactly once – Safest and slowest service.

MQTT is based on a ‘publish’ / ‘subscribe-to-topic’ mechanism:

1. Publish/subscribe.

Publish/subscribe pattern

The MQTT Message Broker consumes messages published by clients (on the left) using two-level ‘topics‘ (such as, for instance, office temperature, office humidity, or indoor air quality). The clients on the right side act as subscribers, receiving any information published on a particular topic. Every time a message is published to the broker, the broker notifies all of the subscribers (Clients 3 and 4), and these clients get the sensor value.

2. Combined publishing/subscribing

Combined pub/sub

A client can be a subscriber and a publisher at the same time. In this example, Client 1 is publishing a brightness value and Client 3 has a subscription to that value. Client 3 may decide that a brightness of, say, 1,500 is too low, so it publishes a new message to the topic ‘office’ to let the light controller know that it should increase the brightness. Client 2, for instance the light controller with a subscription, may then change the brightness level on receipt of the message.

3. Wildcard subscriptions

+ = single-level, # = multi-level

Wildcards in MQTT are easy. You can, for instance, combine the ‘office’ and ‘brightness’ levels of a topic with a ‘+’ sign in between, where the ‘+’ can be substituted by any topic name. Because the ‘+’ sign substitutes just one level in the topic, it is a single-level wildcard, while the pound sign (‘#’) works as a multi-level wildcard.
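A minimal sketch using the standard Mosquitto command-line clients (the broker address and topic names are placeholders):

# single-level wildcard: any one topic level under "office"
mosquitto_sub -h broker.example.com -t "office/+" -v
# multi-level wildcard: everything below "office", any depth
mosquitto_sub -h broker.example.com -t "office/#" -v
# publish a retained value that new subscribers receive immediately
mosquitto_pub -h broker.example.com -t "office/brightness" -m "1500" -r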

MQTT features:

  • Clients can publish and subscribe to one or more topics.
  • One client can publish and subscribe at the same time.
  • Clients can subscribe using single/multi-level wildcards.
  • Clients can choose between three different QoS levels.

MQTT advanced features:

  • Messages can be retained by the broker for new subscribers. So, if a new client subscribes to a particular topic, then the publisher can mark its messages as ‘Retained‘ so that the new subscriber gets the last retained message.
  • Clients can provide a “last will and testament” that will be published by the broker when the client “dies”.

MQTT and Zabbix integration

MQTT setup

Integrating Zabbix into the multiple-client mix

Integrated structure:

1. Four sensors:

    • Server room.
    • Training room.
    • Sales room.
    • Support room.

2. Four different topics:

    • office
    • bielefeld (home town)
    • serverroom
    • trainingroom

3. Mosquitto MQTT Message Broker, which is one of the well-known message brokers.

So, the sensors are publishing the data to the Mosquitto Message Broker, where any MQTT-enabled device or system can pick those values up. In our case, it’s the home automation system, which subscribes to the Message Broker and has access to all of the values published by the sensors.

Thanks to MQTT support in Zabbix 5.2, Zabbix can now subscribe to the Mosquitto Message Broker and immediately get access to all of the sensors publishing their values to the broker.

As we can have multiple subscribers, multiple clients can subscribe to one topic on the Message Broker. So both the home automation system and Zabbix can subscribe to the same values published to the Message Broker.

Node-RED

Sooner or later, you will need Node-RED, a flow-based programming tool that allows you to subscribe to the broker and publish messages to it, acting as a client, as well as to work with the data.

Data Processing in Node-RED

This setup might be useful, if, for instance, some Zabbix trigger fires and passes the information over to the MQTT to publish the outcome of the trigger to the Message Broker, which will be then picked up by the home automation system.

Zabbix publishes data to the broker

You can have two different Zabbix instances subscribing to the same Message Broker acting just as two different clients.

Multiple Zabbix servers sharing the same data

Node-RED:

    • Construction kit for the Internet of Things and home automation.
    • Acts as MQTT client able to publish and subscribe.
    • Flow-based tool for visual programming based on Node.js.
    • Graphical web editor.
    • Supports input, processing, and output nodes.
    • Extensible with plugins and custom function nodes.

Different types of nodes can be connected in the workspace. For instance, the nodes subscribing to a topic and transforming the data, or the nodes writing the data to a log file.

Node-RED

We can get the data from the sensors as the raw JSON string containing 20-30 metrics in a payload, and as a parsed JSON object in the Node-RED Debug node with easy-to-read metrics, such as, for instance, temperature, humidity, WiFi quality, indoor air quality, etc.

Multiple metrics in one message

Splitting data

We have different options for data splitting available:

  • Split on MQTT level: use Node-RED to split metrics and then publish them in their own topics (a good setup when other clients can handle only a single metric at a time).

Splitting data in Node-RED

 

  • Split on Zabbix level: set up an MQTT item as a master item and use Zabbix JSON preprocessing with corresponding dependent items. It’s more efficient because Zabbix needs only one subscription.

We can get the data with the brand-new mqtt.get item in Zabbix 5.2:

— Requires Agent 2.
— Requires active checks. Every time a client publishes a message to the topic, we need the broker to push that data to us, so mqtt.get must listen on the subscription and get notified when new data comes in.
— Broker URL default is localhost.
— User name and password are optional.
— Uses Eclipse Paho Go client library.

One Zabbix agent in active mode sending data to multiple hosts

For our setup with four sensors in the Sales Room, Server Room, Support Room, and Training Room, we need four hosts in Zabbix. Traditionally, you would need four different agents, since each agent running in active mode is configured with its own hostname. In our setup, however, we need just one agent, which handles the different hosts by subscribing to multiple topics.

This is possible because of the new feature, running active agent checks from multiple hosts, which is now available in Zabbix 5.2; a configuration sketch follows the list below. All we need is:

—  to set up hosts in Zabbix (as usual),
—  to define our MQTT items (as usual),
—  to set up just one agent with all of the hostnames the agent should be responsible for (the new feature),
—  to set up the master item, which is our mqtt.get item,
—  to define several dependent items and preprocessing for each of the dependent items, and
—  to start preprocessing with JSONPath.
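A minimal sketch of that setup (host names, broker address, topic, and the JSON field are placeholders; it assumes the comma-separated Hostname list supported by the active agent in Zabbix 5.2):

# zabbix_agent2.conf: one active agent serving four Zabbix hosts
Hostname=SalesRoom,ServerRoom,SupportRoom,TrainingRoom
ServerActive=zabbix.example.com

# master item key on each host (Zabbix agent (active) item)
mqtt.get["tcp://localhost:1883","sensors/serverroom"]

# preprocessing step of a dependent item: JSONPath
$.temperature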

NOTE. Every time the master item gets an update, so do all of the dependent items in Zabbix.

Master item and dependent items

  • Combine both methods: let other clients subscribe to a single metric using their specific topic, but publish all sensor data for Zabbix in one topic.

NOTE. Data received and displayed on the dashboard is based on the MQTT item, the payload, and the MQTT messages received from the Message Broker.

Sensor data dashboard

Publishing data from Zabbix

Now you want to publish the outcome of a Zabbix trigger, so it can be consumed by other MQTT-enabled devices. Any MQTT subscriber, like Node-RED, should receive the alert. To do that, you need:

  • to define a new media type to send problems to the topic, that is, to pass the data over to the Message Broker:
  • to use the command-line tool for Mosquitto — mosquitto_pub allowing us to publish the message.
#!/bin/sh
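# $1 - first script parameter (e.g. the problem message built from the media type template), sent as the MQTT payload
# $2 - second script parameter (e.g. the host name), appended to the zabbix/problems/ topic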
mosquitto_pub -h yourbroker.io -m "$1" -t "zabbix/problems/$2"

  • to make sure that the data is sent to the broker in the right format. In this case, we use JSON as transport and define a JSON problem template and a JSON problem recovery template.

 

In Zabbix, you’ll see the problem, the actions, and the media type firing using the subscription, and in the Debug node of Node-RED, you’ll see that the data is received from Zabbix.

Zabbix problems  published via MQTT

This model with Node-RED can be used to create sophisticated setups. For instance, you can take the data from Zabbix, forward it by actions and media types, preprocess them in Node-RED, and transform the data in many different ways.

IoT devices and other subscribers can react to issues detected by Zabbix using Node-RED

NOTE. To try out the MQTT setup and the new Zabbix features, you can use the live broker available on IntelliTrend’s new GitHub account, which gets data from Zabbix sensors every 10 minutes. You’ll also find templates, access data, the address of the broker, etc. — everything you need to get started.

Questions & Answers

Question. If the MQTT client gets overloaded due to high message frequency on subscribe topics, how will that affect Zabbix?

Answer. Here the broker might be overloaded or the Zabbix agent might not be able to keep up. For the broker side, quality-of-service levels are defined in the MQTT protocol, and QoS level 2 in particular guarantees delivery. So if QoS 2 is used, the messages won’t get lost but will be resent in case of failure.

Question. What else would you expect from the IoT side of Zabbix? What kind of protocols or things would get added? 

Answer. There’s always room for improvement. You can use third-party tools, custom scripts, or any tools to enhance Zabbix. I’m sure that using user script parameters was an excellent design decision. But the official support of MQTT is a quantum leap for Zabbix because it opens the door to most IoT infrastructures, as MQTT is the most important IoT protocol so far.

For instance, one of our customers is monitoring the infrastructure of electricity generators, production systems, etc. They use their own monitoring platform provided by vendors. The request was to integrate alerts or some metrics into Zabbix. The customer’s monitoring platform used MQTT protocol. So, all we had to do was to make their monitoring platform use external scripts and MQTT support.

Lift and shift your Zabbix to Oracle Cloud with MySQL database service

Post Syndicated from Vittorio Cioe original https://blog.zabbix.com/lift-and-shift-your-zabbix-to-oracle-cloud-with-mysql-database-service/12792/

 

If you are tired of administering the infrastructure on your own and would prefer to gain time to focus on real monitoring activities rather than costly platform upgrades, you can easily lift and shift your MySQL-based Zabbix installation stack to Oracle Cloud.

Contents

I. Moving to the Cloud (1:46)
II. Moving Zabbix to Oracle Cloud (2:41)

1. Planning migration (3:22)
2. Migrating Zabbix to Oracle Cloud (6:17)
3. Migrating the database to MySQL Database Service (8:47)

III. Questions & Answers (15:12)

Moving to the Cloud

Data is increasingly moving to the cloud — consumer data followed by enterprise data, as enterprises are always a bit slower in adopting new technologies.

Data moving to the cloud

Oracle Cloud Infrastructure, OCI, is the 4th cloud provider in the Cloud Infrastructure Ranking of the Gartner Magic Quadrant based on ‘Completeness of Vision’ and ‘Ability to Execute’.

OCI is available in 26 regions and has 26 data centers across the world with 12 more planned.

26 Regions Live, 12+ Planned

24+ Industry and Regional Certifications

Moving Zabbix to Oracle Cloud

With Zabbix in the Oracle Cloud you can:

  1. get the latest updates on the technology stack, minimizing downtime and service windows.
  2. convert the time you spend managing your monitoring platform into the time you spend monitoring your platforms.
  3. leverage the most secure and cost-effective cloud platform in the market, including security information and security updates made available by OCI.

Planning migration

To plan effective migration of the on-premise Zabbix instance with clients, proxies, management server, interface, and database, we need to migrate the last three instance components. Basically, we need:

  • the server configuration;
  • on-premise network topology, to understand what can communicate with the outside or what would eventually go over VPN, that is, the network topology of clients and proxies; and
  • the database.

Migration requirements

We also need to set up the following in the OCI tenancy:

  • MySQL Database System,
  • Compute instance for the Zabbix Server,
  • storage for database and backup,
  • networking/load balancing.

The target architecture involves setting up the VPN from your data center to the Oracle cloud tenancy and deploying the load balancer, the Zabbix server in redundancy over availability domains, and the MySQL database in a separate subnet.

Required Components:
• Cloud Networking,
• Zabbix Cloud Image,
• MySQL Database Service,
• VPN Connection for client/proxies.

Oracle Cloud target architecture for Zabbix

You can also have a lighter setup, for instance, with proxies communicating over TLS connections over the Internet or communicating directly with the Zabbix Server in the Oracle Cloud, and the Zabbix server interfacing with the database. Here, you will need fewer elements: server, database, and VCN.

Oracle Cloud target architecture for Zabbix — a simpler solution

Migrating Zabbix to Oracle Cloud

Zabbix migration to the Oracle Cloud is straightforward.

1. Before you begin:

  • set up tenancy and compartments,
  • set up cloud networking — public and private VCN.

2. Zabbix deployment on the VM:

  • select one-click deployment or DIY — use the official Zabbix OCI Marketplace Image or deploy an OCI Compute Instance and install manually,
  • choose the desired Compute ‘shape’ during deployment.

3. Configuration:

  • start the instance,
  • edit the config file,
  • point to the database with the IP address, username, and password (to do that, you’ll need to open several ports in the cloud network via the GUI); a sketch of the relevant settings follows this list.
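The sketch below shows the zabbix_server.conf parameters involved (the endpoint address, schema name, and credentials are placeholders for your MySQL Database Service values):

# zabbix_server.conf: point the Zabbix server at the MySQL DB System endpoint
DBHost=10.0.1.10
DBName=zabbix
DBUser=zabbix
DBPassword=<password>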

The OCI infrastructure allows for multiple choices. The Zabbix server is lightweight software with modest resource requirements, and in the majority of cases a powerful VM will be enough. Otherwise, the full range of Oracle Cloud compute options is available.

Compute services for any enterprise use case

In the Oracle Cloud you’ll have the bare metal option — the physical machines dedicated to a single customer, Kubernetes container engine, and a lot of fast storage possibilities, which end up being quite cheap.

Migrating the database to MySQL Database Service

MySQL Database Service is the managed offer for MySQL in Oracle Cloud, fully developed, managed, and supported by the MySQL team. It is secure and provides the latest features as it leverages the Oracle Cloud, which has been rated by various sources as one of the most secure cloud platforms.

In addition, the platform is built on the MySQL Enterprise Edition binaries, so it is fully compatible with the platform you might be using. Finally, it costs way less on a yearly basis than a full-blown on-premise MySQL Enterprise subscription.

MySQL Database Service — 100% developed, managed, and supported by the MySQL team

Considerations before migration

Before you begin:

  • check your MySQL 8.0 compatibility,
  • check your database size (to assess the time needed to migrate), and
  • plan a service window.

High-level migration plan

  1. Set up cloud networking.
  2. Set up your (on-premise) networking secure connection (to communicate with the cloud).
  3. Create MySQL Database Service DB System with storage.
  4. Move the data using MySQL Shell Dump & Load utility.

Creating MySQL DB system with just a few clicks

  • Create a customized configuration.
  • Start the wizard to create DB system.
  • Select Virtual Cloud Network (VCN).
  • Select subnet to place your MySQL endpoint.
  • Select MySQL configuration (or create customized instances for your workload).
  • The shape for the DB System (CPU and RAM) will be set automatically.
  • Select the size of the storage for data and backup.
  • Create a backup policy or accept the default.

Creating MySQL instances

You can use the MySQL Shell Upgrade Checker utility to check compatibility with MySQL 8.0:

util.checkForServerUpgrade()
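For instance, from MySQL Shell in JavaScript mode (the connection string is a placeholder for your current on-premise server):

shell.connect('zabbix@onprem-db:3306')   // prompts for the password
util.checkForServerUpgrade()             // reports anything incompatible with MySQL 8.0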

Loading the data

To move the data, you can use the MySQL Shell Dump & Load utility, which is capable of multi-threading and is callable with the JavaScript methods from MySQL Shell.

So, you can dump to what can be a bastion machine and then load the data into your cloud instance. Loading a database of several gigabytes will take several minutes, so it is necessary to plan the service maintenance window accordingly.

In addition, the utility is easy to use. You just need to connect to an instance and dump.
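A minimal sketch of such a migration from MySQL Shell in JavaScript mode (host names, the dump directory, and the thread count are placeholders; the Dump & Load utilities require MySQL Shell 8.0.21 or later):

// 1. dump the Zabbix schema from the on-premise server to a directory on the bastion host
shell.connect('zabbix@onprem-db:3306')
util.dumpSchemas(['zabbix'], '/backup/zabbix-dump', {threads: 8})

// 2. connect to the MySQL Database Service endpoint and load the dump
shell.connect('admin@10.0.1.10:3306')
util.loadDump('/backup/zabbix-dump', {threads: 8})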

MySQL Shell Dump & Load

The operation is pretty straightforward and the migration time will depend on the size of the database.

Free trial

You can have a test drive of the MySQL Database Service with $300 in cloud credits, which you can spend in the Oracle Cloud on MySQL Database Service or other cloud services.

 

Questions & Answers

Question. Do you help with migrating the databases from older versions to MySQL 8.0?

Answer. Yes, this is the thing we normally do for our customers — providing guidance, though data migration is normally straightforward.

Question. Does the database size matter? How efficient is MySQL Shell Dump? What if my database is terabytes in size?

Answer. The MySQL Shell Dump & Load utility is much more efficient than the old mysqldump. The database size still matters: a larger database will require more time, though still far less than it used to take.

What’s new in Zabbix 5.2

Post Syndicated from Alexei Vladishev original https://blog.zabbix.com/whats-new-in-zabbix-5-2/12550/

Zabbix is a universal open-source enterprise-level monitoring solution, and it includes all the enterprise-grade features: SSO, distributed monitoring, Zabbix Insights, advanced security, no data storage limits, and much more. Zabbix 5.2 offers over 35 new features and functional improvements.

Contents

I. Introduction
II. New features and functional improvements

1. Synthetic monitoring
2. Keep secrets in the external vault
3. Zabbix insights
4. User roles
5. IoT Monitoring
6. Load balancing
7. User Timezones
8. Yaml for import/export
9. Template improvements
10. Discovery and cloud monitoring
11. Usability improvements
12. Preprocessing improvements
13. Other improvements

III. Questions & Answers

Introduction

Zabbix gives you freedom, as it offers:

  • no per-metric fees,
  • no license fees,
  • deployment anywhere, and
  • easy migration from on-premise to the cloud and vice versa.

Zabbix also offers business benefits for companies that need centralized monitoring and data collection across their IT infrastructure and other sources.

  • Umbrella monitoring, as Zabbix is flexible to replace most of the monitoring solutions already in use.
  • Free and open-source solution with 24×7 vendor support worldwide.
  • Technical Support at fixed prices for unlimited monitoring regardless of the number of devices monitored and extremely low TCO.

Business benefits

New features and functional improvements

Zabbix 5.2 offers over 35 new features and functional improvements.

Synthetic monitoring

1. Zabbix 5.2 supports complex multi-step scripted data collection, advanced availability checks, and complex interaction with different HTTP APIs.

Multi-step data collection

  • Multiple steps to get data.

Multi-step data collection is needed if, for instance, you need to authenticate first and then retrieve data from different APIs.

Authentication and retrieving data from different APIs.

  • Check if the whole service works: Zabbix API.

Advanced availability checks and APIs

  • Calculate the sum of unknown parts.

For a list of customers retrieved from an API with URLs behind each customer, Zabbix allows for checking the availability of all URLs.

2. New item-type script

Now the process of data collection can be scripted. It is no longer a one-step process, so we can take advantage of loops, conditional statements, and all the power of JavaScript to retrieve the data.

New item-type scripts
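As a rough sketch of what such a scripted collection step could look like (assuming the CurlHttpRequest object available to JavaScript in Zabbix 5.2; the API URLs, credentials, and field names are hypothetical):

// authenticate first, then use the returned token to retrieve the actual data
var req = new CurlHttpRequest();
req.AddHeader('Content-Type: application/json');
var login = JSON.parse(req.Post('https://api.example.com/login',
    JSON.stringify({ user: 'monitor', password: 'secret' })));
req.AddHeader('Authorization: Bearer ' + login.token);
return req.Get('https://api.example.com/stats');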

Keep secrets in the external vault

The ability to store secrets in a vault is valuable when handling sensitive information, for instance in the financial, military, or government sectors, as:

  • all sensitive information is kept outside of Zabbix in a secure place: HashiCorp Vault,
  • no secret data is stored in Zabbix DB, and
  • all sensitive data, such as passwords, API tokens, user names, etc., shall be secured.

So, in Zabbix 5.2 a new user macro type is introduced — Vault secret. In Zabbix 5.0, secret text macros were introduced; they are stored in the Zabbix DB but never exposed to end users. With the Vault secret macro, the data is stored externally.

Vault secret macro
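A minimal sketch of how the pieces fit together (the Vault address, token, path, and key are placeholders; parameter names as introduced for HashiCorp Vault support in Zabbix 5.2):

# zabbix_server.conf: where to find the vault
VaultURL=https://vault.example.com:8200
VaultToken=<token with read access to the secret>

# user macro of type "Vault secret": the value points at path:key inside the vault
{$DB.PASSWORD} = secret/zabbix:password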

Now security measures in Zabbix comply with the best security standards possible:

  • all communications with Zabbix Agent or Zabbix Proxy are encrypted using HTTPS, TLS, or PSK;
  • Agent key restrictions can be used on the Zabbix Agent side;
  • communication with the Zabbix Web Interface is encrypted using HTTPS; and
  • integration with HashiCorp Vault is now possible to keep secrets externally.

Security enhancements

NOTE. The one-day security training course is now held by Zabbix with no prerequisites and simple signup.

  • Recommended for experienced Zabbix users.
  • Does not require existing Zabbix certification.
  • Will cover security options on an expert level.
  • Secret macros and Vault.
  • Securing connections using PSK or certificates.
  • Restricting Agent keys.
  • Granular user permissions.

Zabbix insights

  • Ability to analyze long term data efficiently using new trigger functions.
  • Zabbix will provide you with information about anomalies.
  • More value out of Zabbix trend data, which is kept longer.

This new feature allows Zabbix to generate alerts, for instance, “Average number of transactions increased by 24% in September”.

Zabbix 5.2 new functions

  • The new functions allow you to specify which trend data is needed and then compare it with data for another period.
trendavg(period, period_shift)
trendcount(period, period_shift)
trenddelta(period, period_shift)
trendmax(period, period_shift)
trendmin(period, period_shift)
trendsum(period, period_shift)
  • Trends tables instead of history. The time shift function is already available in Zabbix, but it has significant limitations, as it works only with history tables, that is, involves heavy processing, and doesn’t allow for specifying an absolute period of time.

  • Use the Gregorian calendar for period and period_shift.

— h (hour), d (day), w (week), M (month), and y (year).

  • Calculate upon the end of a period.
  • Customized event name — a new field in the trigger definition, which is:

— optional, can use Trigger Name instead,
— displaying problem with a context,
— supports a new macro {? … } (“Expression macro”):

      • fmtnum(digits)

— applicable to ITEM.VALUE, ITEM.LASTVALUE and expression macros;
fmtnum(2) gives 14.85 instead of 14.8512345.

      • fmttime(format, time_shift)

— applicable to {TIME};
— uses strftime format codes;
{TIME}.fmttime(“%B,%Y”) gives October 2020.

For instance, to detect abnormal traffic, we can define the expression to compare traffic for different periods. If the difference exceeds the abnormality factor defined by the user macro, Zabbix will generate the event defined by the user.
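A sketch of such a trigger expression (the host, item key, and {$ABNORMALITY_FACTOR} macro are placeholders; trend function syntax as described for Zabbix 5.2), comparing last month's average inbound traffic with the same month a year earlier:

{router:net.if.in[eth0].trendavg(1M,now/M)} > {$ABNORMALITY_FACTOR} * {router:net.if.in[eth0].trendavg(1M,now/M-1y)}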

Triggers

Then Zabbix will generate the following message:

Problems

Use cases
  • Trend functions can be used to detect abnormal behavior of IT metrics and non-IT KPIs.
  • Real-world applications:
    — business performance,
    — sales and marketing,
    — warehousing,
    — human resources,
    — customer support.

User roles

Granular control of user permissions

  • Customer portal, read-only users.
  • Different parts of UI can be made accessible for different user roles.
  • Control what user operations are accessible: maintenance, editing of dashboards, etc.
  • Fine-grained access control to the API and its methods for extra security.

In Zabbix 5.2, the ability to define user roles is introduced. It is possible to define as many user roles as you need. For each role, it's necessary to specify:

  • User type (User, Admin, Super Admin),
  • Access to UI elements (what the user can do),
  • Access to API (if enabled, we may filter by API methods),
  • Access to actions (define which user actions are available to different users).

User roles defined

IoT Monitoring

Zabbix is a universal solution used to monitor not only IT infrastructure, so the capability to monitor factory equipment and sensors is really important. Zabbix 5.2 now offers out-of-the-box support for Modbus and MQTT — the most important IoT protocols. It is now possible to monitor sensors and hardware equipment, and integration with building management systems, factory equipment, and IoT gateways is available without using external scripts.

Modbus

Modbus has become a de facto standard communication protocol and a commonly available means of connecting industrial electronic devices. It is supported by Zabbix Agent and Agent 2 over TCP or serial connections.

The new item key takes the form modbus.get[endpoint,slave id,function,address,count,type,endianness,offset], where:

modbus.get — new item key,
endpoint — endpoint defined as protocol://connection_string,
slave id — slave ID,
function — Modbus function,
address — address of first registry, coil or input,
count — number of records to read,
type — type of data,
endianness — endianness configuration,
offset — number of registers, starting from ‘address’, the results of which will be discarded.

modbus.get retrieves information over Modbus and returns it as JSON:

modbus.get["tcp://192.168.6.1:511"]
modbus.get["rtu://COM1:9600:8n"]
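For a fuller illustration of the parameter list, reading two holding registers (function 3) starting at address 0 from a hypothetical device at 192.168.6.1:502, as big-endian 16-bit unsigned integers and with no offset, might look like this (all values are placeholders, not taken from the presentation):

modbus.get["tcp://192.168.6.1:502",1,3,0,2,uint16,be,0]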
MQTT
  • MQTT is a standard messaging protocol for the Internet of Things (IoT) among others.
  • Native solution for monitoring messages published by MQTT brokers.
  • Supported by Agent 2 Active Check only.

broker_url — MQTT broker URL (if empty, localhost with port 1883 is used),
topic — MQTT topic (mandatory). Wildcards (+,#) are supported,
username, password — authentication credentials (if required).

  • mqtt.get subscribes to a specific topic or topics (with wildcards) of the provided broker and waits for publications.
mqtt.get["tcp://host:1883","path/to/topic"]
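For example, subscribing to every topic under a hypothetical sensors/ hierarchy on a broker (the host name and topic are placeholders) can be done with a wildcard:

mqtt.get["tcp://broker.example.com:1883","sensors/#"]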

Load balancing

Starting from Zabbix 5.2, it has become easy to scale the Zabbix UI and API components horizontally. You just need to set up HAProxy or another load-balancing solution in front of several frontend nodes running as containers, on physical or virtual machines, or in the cloud, and you get redundancy, high availability, and load balancing out of the box for the Zabbix UI and API components.

Horizontal scaling for Zabbix UI and API

User Timezones

Zabbix 5.2 supports setting a time zone for each user. This feature is appreciated by larger companies with users connecting to the Zabbix UI from different countries or continents.

User timezones for each user

YAML for import/export

For import and export operations in Zabbix, YAML is now used by default, though JSON and XML are still supported.

YAML is more user-friendly and easier to edit manually, while JSON and XML make heavy use of special characters. So if you keep your templates in a repository, you can modify them using a text editor. All official Zabbix templates have already been converted to YAML.

YAML for import/export

Template improvements

  • Simpler template names, which are also easier to search for.

  • Templated screens have been converted to template dashboards. When modifying dashboards, you now work with dashboard widgets instead of screen elements.

  • See all hosts linked to a specific template.

  • The number of templates is now shown in System information.

Discovery and cloud monitoring

  • Host interfaces can be discovered from LLD. It is now possible to define how host interfaces are discovered when a host prototype is created. This feature is especially useful for discovering cloud resources.

  • Hosts without interfaces. We can create virtual hosts or hosts with no interface for service checks, for instance.
  • Tags on host prototypes from any discovery macro. Tags play an increasingly important role in Zabbix, and now in addition to tags on the template level, on the host level, and on the trigger level, it is possible to define tags on the host prototype level as well.

Usability improvements

  • Saved filters. This feature is implemented for the Problems and Hosts views. In Zabbix 5.2, you can save and name filters. The functionality is similar to tabs in modern browsers, such as Firefox or Safari: there are different tabs, every tab displays the number of matching problems in real time, and you can easily switch from one filter to another.

Filter tabs

  • Any tab in the Zabbix UI now clearly shows whether it contains a non-empty list, for instance, the number of preprocessing rules. This functionality is implemented for all tabs in the Zabbix UI.

Number of lists displayed in the tabs

  • The default language can now be defined for the system.

Defining system default language

  • Essential configuration parameters moved from defines.inc.php to Zabbix UI, which allows for finer tuning.

Finer tuning

  • SNMP settings are now available in the test item window, for instance, before adding an item to a template.

Testing SNMP parameters

  • Filters and additional details in the list of dashboards.

Additional information in dashboards

Preprocessing improvements

  • Macros in JavaScript preprocessing (also backported to 5.0).
  • The 'Check for not supported value' preprocessing step can match and override items that became unsupported for any reason, which is useful for advanced availability checks: any problem -> service is down.

Other improvements

  • In larger environments, there may be performance issues, and understanding what's happening inside Zabbix is vitally important. Now it is possible to specify which diagnostic information should be retrieved from Zabbix. This information can also be retrieved via the Zabbix API.
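A minimal sketch of requesting such data from the command line, assuming the diaginfo runtime control option and the value cache section shown in the screenshot below (the exact option and section names should be checked against the documentation for your version):

zabbix_server -R diaginfo=valuecache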

Retrieving diagnostic information from the value cache log file

  • The UI is now protected against checking whether a given user exists.
  • Simpler schedule for unsupported items.
  • Ability to mass-update item Timeout.
  • Ability to retrieve HTTP response headers in Webhooks.
  • Ability to specify the default search path for user parameters.
  • Max length of user macro values increased to 2048 characters.
  • Active Agent can work as multiple hosts (Hostname=host1,host2,host3), which might be useful if you run different services on one host and need to split them.
  • Official support of Docker images.
  • Eventlog-related macros in operational data.
  • Support of user macros in the item description.

Out of the box monitoring and alerting

We have increased the number of integrations supported in Zabbix out-of-the-box and the number of officially supported monitoring templates and plugins for Zabbix Agent 2.

  • Ticketing.

  • Alerting.

  • Monitoring.

Deployment

You can deploy Zabbix anywhere: on-premise or in the cloud.

Deploy on-premise

Deploy in the cloud

How to upgrade

The procedure for upgrading from Zabbix 5.0 is the same as for any other Zabbix release:

  • Backup DB.
  • Upgrade packages (Zabbix Server, Frontend).
  • Restart zabbix_server.
  • Watch the log file, Zabbix will start DB schema upgrade automatically.
  • Upgrade all proxies.
  • Update agents (optional).

Alternatively, contact Zabbix engineers, order an upgrade to the new release, and enjoy the new features effortlessly.

Questions & Answers

Question. Does Zabbix plan to support other scripting languages?

Answer. No, we don't have such plans. We analyzed other languages but selected JavaScript as the Zabbix embedded language. However, you can now use any scripting language in Zabbix via external scripts, including PowerShell, Python, etc.

Question. Does Zabbix plan to support other vaults besides HashiCorp?

Answer. We might support other solutions. This new Zabbix functionality allows for implementing other vaults. If you need some other vault to be supported, you need to register the respective Zabbix feature request.

Question. Does Zabbix plan to improve the existing graphs and provide official Grafana integration?

Answer. We do plan to provide more advanced visualization options for dashboards. Now we are merging the screens and dashboard functionality, and we plan to release new widgets for more advanced visualization in Zabbix 5.4.

The existing Zabbix plugin for Grafana works smoothly, and we don’t plan to introduce another solution.

Question. Does Zabbix plan to support another database backend, for instance, time-series databases?

Answer. According to the Zabbix roadmap, in Zabbix 5.4 we plan to introduce a generic API for connecting to any storage for time-series data, that is, to create official connectors to external storage solutions.

Question. Does Zabbix plan to natively support integration with LDAP? At the moment Zabbix provides LDAP support, but we still have to manually create users and so on. Does Zabbix plan to automate it in some way?

Answer. We created this functionality a couple of years ago, but we designed it in a complex way and decided not to implement it yet. It’s not on our shortlist, but we plan to implement it, as it is one of the top-voted features.

Question. Can Agent secrets be stored in the vault?

Answer. At this moment we don’t support this feature. In a highly-distributed environment, where agents are distributed all across your IT infrastructure, you’ll have to maintain a connection between Zabbix Agents and the vault. Still, if you feel the feature should be in Zabbix, feel free to register the respective Zabbix feature request.

Question. Does Zabbix have a Kubernetes operator?

Answer. We don’t have the Kubernetes operator officially supported yet, but there are a few operators available from our community.

Question. Do we plan to improve our report functionality?

Answer. Absolutely. This is the primary focus of Zabbix 5.4 and Zabbix 6.0. We are exploring two directions: improving the widgets to enrich visualization in Zabbix and supporting scheduled report generation, so that Zabbix would generate PDF reports and send them out on a regular basis.

Question. Do we plan to enable changing server configuration parameters without the need to restart the server?

Answer. That depends on the configuration parameters. It can be implemented for some configuration parameters. What is really needed is the ability to change parameters related to performance in real-time, the ability to change the number of pollers, trappers, escalators, etc. I think this functionality will be implemented soon.

Question. Can we create a Zabbix instance as a code via JSON, XML, or some other way?

Answer. We are moving in this direction. For instance, the transition to the YAML format is a step in this direction. So, you will be able to keep your templates in a git repository. The missing piece is versioning for templates in order to manage them, as well as the ability to export the whole Zabbix configuration to YAML. Versioning is on the roadmap for Zabbix 5.4.

Question. Do we plan to support metric gathering from Spring Actuator and Spring Boot? As at the moment, Prometheus is to be used to gather metrics.

Answer. If Prometheus can be used to gather metrics from these systems, Zabbix can do it as well, since Zabbix supports data collection from Prometheus out of the box.

Question. How can someone become a partner of Zabbix?

Answer. The best way to become a partner is to contact Zabbix by email at [email protected]

Question. How does Zabbix see interaction with Grafana? As that of competitors or friendly entities?

Answer. Grafana focuses on the visualization of data coming from different sources. Though Grafana provides some monitoring options, I see Grafana as an add-on to Zabbix. If you need a better visualization from Zabbix or Zabbix doesn’t deliver the visualization you expect, you are free to use Grafana.

Zabbix Summit Online 2020: Remote Experience of Sharing Knowledge and Being Together

Post Syndicated from Jekaterina Petruhina original https://blog.zabbix.com/zabbix-summit-online-2020-remote-experience-of-sharing-knowledge-and-being-together/12526/

Zabbix Summit 2020 was supposed to be the greatest Zabbix event of the decade – we planned to celebrate the 10th anniversary Zabbix Summit. But the very different 2020 circumstances intervened, and we had to adjust to the new reality and shift our focus. It so happened that in 2020 we held not the tenth anniversary but the first online Zabbix summit.


Available for everyone

When it became evident that an on-site event was not an option, we made a decision – the event should be available for everyone, and we made it free of charge. The other task was to manage the timing so that the Summit would be available for attendees from all over the globe. We did this by making the event long enough to be convenient for users from Japan and China, Europe, the USA, and the Latin American region. Yes, it was quite a long day for the Zabbix team. However, we achieved what we were aiming for – about 8,000 Zabbix enthusiasts from around the world joined the Zabbix Summit live stream. This year, it was also possible to include speakers from all regions, because the travel issue was no longer in the way.

We focused much of the agenda on the recently released Zabbix 5.2. And of course, we left enough room for use cases and professional tips as well. If, for some reason, you couldn't join us on October 30, you are always welcome to watch the recorded speeches.

 

Traditionally, every Zabbix Summit delivers an option to attend hands-on workshops. Due to the online format, it was possible to run more workshop sessions than in previous years, and attendees could join as many sessions as they wanted. Moreover, the workshops were recorded and are now available on the Zabbix website.

Summit fun

Every Zabbix Summit is all about networking and fun. Let's be honest, this unofficial part means a lot to the community, along with the agenda. Unfortunately, we couldn't meet in person this time and have fun all together at parties. Still, the community chat in Telegram made it clear – there are no boundaries for you guys to keep in touch, discuss ideas, and communicate. You made the networking part happen this year, and we are delighted and grateful to see such enthusiasm, activity, and interest in Zabbix. We provided Summit attendees with the opportunity to communicate with the Zabbix Sales and Technical teams, get acquainted with the event's sponsors, and ask questions via dedicated Zoom rooms, and it worked well. Even though there were hundreds or even thousands of kilometers between the visitors of the event, there was a feeling that it was happening here and now – with an audience full of interested people looking for opportunities to learn new things and help others.

What about next year?

Well, we think positively but stay realistic, so Zabbix Summit 2021 will also be held online.

If you would like to help us make future Zabbix events even better, we encourage you to fill out this post-Summit survey. It will help us understand what we have to improve for Zabbix Summit Online 2021.

PS: Take a break and look at some behind-the-scenes photos to see how Zabbix Summit 2020 looked from the inside.

Security and Human Behavior (SHB) 2020

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/06/security_and_hu_9.html

Today is the second day of the thirteenth Workshop on Security and Human Behavior. It’s being hosted by the University of Cambridge, which in today’s world means we’re all meeting on Zoom.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The forty or so attendees include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

Our goal is always to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to six to eight minutes, with the rest of the time for open discussion. We’ve done pretty well translating this format to video chat, including using the random breakout feature to put people into small groups.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s schedule is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, and twelfth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Speakers Censored at AISA Conference in Melbourne

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/10/speakers_censor.html

Two speakers were censored at the Australian Information Security Association’s annual conference this week in Melbourne. Thomas Drake, former NSA employee and whistleblower, was scheduled to give a talk on the golden age of surveillance, both government and corporate. Suelette Dreyfus, lecturer at the University of Melbourne, was scheduled to give a talk on her work — funded by the EU government — on anonymous whistleblowing technologies like SecureDrop and how they reduce corruption in countries where that is a problem.

Both were put on the program months ago. But just before the event, the Australian government’s ACSC (the Australian Cyber Security Centre) demanded they both be removed from the program.

It’s really kind of stupid. Australia has been benefiting a lot from whistleblowers in recent years — exposing corruption and bad behavior on the part of the government — and the government doesn’t like it. It’s cracking down on the whistleblowers and reporters who write their stories. My guess is that someone high up in ACSC saw the word “whistleblower” in the descriptions of those two speakers and talks and panicked.

You can read details of their talks, including abstracts and slides, here. Of course, now everyone is writing about the story. The two censored speakers spent a lot of the day yesterday on the phone with reporters, and they have a bunch of TV and radio interviews today.

I am at this conference, speaking on Wednesday morning (today in Australia, as I write this). ACSC used to have its own government cybersecurity conference. This is the first year it combined with AISA. I hope it’s the last. And that AISA invites the two speakers back next year to give their censored talks.

EDITED TO ADD (10/9): More on the censored talks, and my comments from the stage at the conference.

Slashdot thread.

Security and Human Behavior (SHB) 2019

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/06/security_and_hu_8.html

Today is the second day of the twelfth Workshop on Security and Human Behavior, which I am hosting at Harvard University.

SHB is a small, annual, invitational workshop of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, criminologists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating two days of my professional year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks — remotely, because he was denied a visa earlier this year.

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, and eleventh SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops. Ross also maintains a good webpage of psychology and security resources.

Security and Human Behavior (SHB 2018)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/security_and_hu_7.html

I’m at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year’s program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I’ll be hosting the event at Harvard.

Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.

 

 

 

Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS-related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot that helps your DevOps team detect and resolve incidents on AWS.

In close collaboration with his brother Andreas Wittig, the Wittig brothers are actively creating AWS related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io where they share their knowledge about AWS with the community. The Wittig brothers also published a bunch of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers are co-organizing the AWS user group in Stuttgart.

 

 

 

 

Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT Architecture and Management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando built an online community for Hispanic speakers worldwide.

Fernando founded a LinkedIn Group, a Slack Community and a YouTube channel all of them named “AWS en Español”, and started to run a monthly webinar via YouTube streaming where different leaders discuss aspects and challenges around AWS Cloud.

During the last 18 months he’s been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups were founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.

 

 

 

Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010, and has been essential in building Webstep's core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures and with top management where he advises about cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.

 

 

 

 

 

 

 

 

Success at Apache: A Newbie’s Narrative

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/170536010891

yahoodevelopers:

Kuhu Shukla (bottom center) and team at the 2017 DataWorks Summit


By Kuhu Shukla

This post first appeared here on the Apache Software Foundation blog as part of ASF’s “Success at Apache” monthly blog series.

As I sit at my desk on a rather frosty morning with my coffee, looking up new JIRAs from the previous day in the Apache Tez project, I feel rather pleased. The latest community release vote is complete, the bug fixes that we so badly needed are in and the new release that we tested out internally on our many thousand strong cluster is looking good. Today I am looking at a new stack trace from a different Apache project process and it is hard to miss how much of the exceptional code I get to look at every day comes from people all around the globe. A contributor leaves a JIRA comment before he goes on to pick up his kid from soccer practice while someone else wakes up to find that her effort on a bug fix for the past two months has finally come to fruition through a binding +1.

Yahoo – which joined AOL, HuffPost, Tumblr, Engadget, and many more brands to form the Verizon subsidiary Oath last year – has been at the frontier of open source adoption and contribution since before I was in high school. So while I have no historical trajectories to share, I do have a story on how I found myself in an epic journey of migrating all of Yahoo jobs from Apache MapReduce to Apache Tez, a then-new DAG based execution engine.

Oath grid infrastructure is through and through driven by Apache technologies, be it storage through HDFS, resource management through YARN, job execution frameworks with Tez, or user interface engines such as Hive, Hue, Pig, Sqoop, Spark, and Storm. Our grid solution is specifically tailored to Oath's business-critical data pipeline needs using the polymorphic technologies hosted, developed and maintained by the Apache community.

On the third day of my job at Yahoo in 2015, I received a YouTube link on An Introduction to Apache Tez. I watched it carefully trying to keep up with all the questions I had and recognized a few names from my academic readings of YARN ACM papers. I continued to ramp up on YARN and HDFS, the foundational Apache technologies Oath heavily contributes to even today. For the first few weeks I spent time picking out my favorite (necessary) mailing lists to subscribe to and getting started on setting up a pseudo-distributed Hadoop cluster. I continued to find my footing with newbie contributions and being ever more careful with whitespaces in my patches. One thing was clear – Tez was the next big thing for us. By the time I could truly call myself a contributor in the Hadoop community, nearly 80-90% of the Yahoo jobs were now running with Tez. But just like hiking up the Grand Canyon, the last 20% is where all the pain was. Being a part of the solution to this challenge was a happy prospect and thankfully contributing to Tez became a goal in my next quarter.

The next sprint planning meeting ended with me getting my first major Tez assignment – progress reporting. The progress reporting in Tez was non-existent – “Just needs an API fix,”  I thought. Like almost all bugs in this ecosystem, it was not easy. How do you define progress? How is it different for different kinds of outputs in a graph? The questions were many.

I, however, did not have to go far to get answers. The Tez community actively came to a newbie’s rescue, finding answers and posing important questions. I started attending the bi-weekly Tez community sync up calls and asking existing contributors and committers for course correction. Suddenly the team was much bigger, the goals much more chiseled. This was new to anyone like me who came from the networking industry, where the most open part of the code are the RFCs and the implementation details are often hidden. These meetings served as a clean room for our coding ideas and experiments. Ideas were shared, to the extent of which data structure we should pick and what a future user of Tez would take from it. In between the usual status updates and extensive knowledge transfers were made.

Oath uses Apache Pig and Apache Hive extensively and most of the urgent requirements and requests came from Pig and Hive developers and users. Each issue led to a community JIRA and as we started running Tez at Oath scale, new feature ideas and bugs around performance and resource utilization materialized. Every year most of the Hadoop team at Oath travels to the Hadoop Summit where we meet our cohorts from the Apache community and we stand for hours discussing the state of the art and what is next for the project. One such discussion set the course for the next year and a half for me.

We needed an innovative way to shuffle data. Frameworks like MapReduce and Tez have a shuffle phase in their processing lifecycle wherein the data from upstream producers is made available to downstream consumers. Even though Apache Tez was designed with a feature set corresponding to optimization requirements in Pig and Hive, the Shuffle Handler Service was retrofitted from MapReduce at the time of the project’s inception. With several thousands of jobs on our clusters leveraging these features in Tez, the Shuffle Handler Service became a clear performance bottleneck. So as we stood talking about our experience with Tez with our friends from the community, we decided to implement a new Shuffle Handler for Tez. All the conversation points were tracked now through an umbrella JIRA TEZ-3334 and the to-do list was long. I picked a few JIRAs and as I started reading through I realized, this is all new code I get to contribute to and review. There might be a better way to put this, but to be honest it was just a lot of fun! All the whiteboards were full, the team took walks post lunch and discussed how to go about defining the API. Countless hours were spent debugging hangs while fetching data and looking at stack traces and Wireshark captures from our test runs. Six months in and we had the feature on our sandbox clusters. There were moments ranging from sheer frustration to absolute exhilaration with high fives as we continued to address review comments and fixing big and small issues with this evolving feature.

As much as owning your code is valued everywhere in the software community, I would never go on to say “I did this!” In fact, “we did!” It is this strong sense of shared ownership and fluid team structure that makes the open source experience at Apache truly rewarding. This is just one example. A lot of the work that was done in Tez was leveraged by the Hive and Pig community and cross Apache product community interaction made the work ever more interesting and challenging. Triaging and fixing issues with the Tez rollout led us to hit a 100% migration score last year and we also rolled the Tez Shuffle Handler Service out to our research clusters. As of last year we have run around 100 million Tez DAGs with a total of 50 billion tasks over almost 38,000 nodes.

In 2018 as I move on to explore Hadoop 3.0 as our future release, I hope that if someone outside the Apache community is reading this, it will inspire and intrigue them to contribute to a project of their choice. As an astronomy aficionado, going from a newbie Apache contributor to a newbie Apache committer was very much like looking through my telescope - it has endless possibilities and challenges you to be your best.

About the Author:

Kuhu Shukla is a software engineer at Oath and did her Masters in Computer Science at North Carolina State University. She works on the Big Data Platforms team on Apache Tez, YARN and HDFS with a lot of talented Apache PMCs and Committers in Champaign, Illinois. A recent Apache Tez Committer herself she continues to contribute to YARN and HDFS and spoke at the 2017 Dataworks Hadoop Summit on “Tez Shuffle Handler: Shuffling At Scale With Apache Hadoop”. Prior to that she worked on Juniper Networks’ router and switch configuration APIs. She likes to participate in open source conferences and women in tech events. In her spare time she loves singing Indian classical and jazz, laughing, whale watching, hiking and peering through her Dobsonian telescope.

Community Profile: Dr. Lucy Rogers

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-lucy-rogers/

This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

Dr Lucy Rogers calls herself a Transformer. “I transform simple electronics into cool gadgets, I transform science into plain English, I transform problems into opportunities. I am also a catalyst. I am interested in everything around me, and can often see ways of putting two ideas from very different fields together into one package. If I cannot do this myself, I connect the people who can.”

Among many other projects, Dr Lucy Rogers currently focuses much of her attention on reducing the damage from space debris

It’s a pretty wide range of interests and skills for sure. But it only takes a brief look at Lucy’s résumé to realise that she means it. When she says she’s interested in everything around her, this interest reaches from electronics to engineering, wearable tech, space, robotics, and robotic dinosaurs. And she can be seen talking about all of these things across various companies’ social media, such as IBM, websites including the Women’s Engineering Society, and books, including her own.

With her bright LED boots, Lucy was one of the wonderful Pi community members invited to join us and HRH The Duke of York at St James’s Palace just over a year ago

When not attending conferences as guest speaker, tinkering with electronics, or creating engaging IoT tutorials, she can be found retrofitting Raspberry Pis into the aforementioned robotic dinosaurs at Blackgang Chine Land of Imagination, writing, and judging battling bots for the BBC’s Robot Wars.

First broadcast in the UK between 1998 and 2004, Robot Wars was revived in 2016 with a new look and new judges, including Dr Lucy Rogers. Competitors battle their home-brew robots, and Lucy, together with the other two judges, awards victories among the carnage of robotic remains

Lucy graduated from Lancaster University with a degree in Mechanical Engineering. After that, she spent seven years at Rolls-Royce Industrial Power Group as a graduate trainee before becoming a chartered engineer and earning her PhD in bubbles.

Bubbles?

“Foam formation in low‑expansion fire-fighting equipment. I investigated the equipment to determine how the bubbles were formed,” she explains. Obviously. Bubbles!

Lucy graduated from the Singularity University Graduate Studies Program in 2011, focusing on how robotics, nanotech, medicine, and various technologies can tackle the challenges facing the world

She then went on to become a fellow of the Royal Astronomical Society (RAS) in 2005 and, later, a fellow of both the Institution of Mechanical Engineers (IMechE) and British Interplanetary Society. As a member of the Association of British Science Writers, Lucy wrote It’s ONLY Rocket Science: an Introduction in Plain English.

In It’s Only Rocket Science: An Introduction in Plain English Lucy explains that ‘hard to understand’ isn’t the same as ‘impossible to understand’, and takes her readers through the journey of building a rocket, leaving Earth, and travelling the cosmos

As a standout member of the industry, and all-round fun person to be around, Lucy has quickly established herself as a valued member of the Pi community.

In 2014, with the help of Neil Ford and Andy Stanford-Clark, Lucy worked with the UK’s oldest amusement park, Blackgang Chine Land of Imagination, on the Isle of Wight, with the aim of updating its animatronic dinosaurs. The original Blackgang Chine dinosaurs had a limited range of behaviour: able to roar, move their heads, and stomp a foot in a somewhat repetitive action.

When she contacted Raspberry Pi back in the November of that same year, the team were working on more creative, varied behaviours, giving each dinosaur a new Raspberry Pi-sized brain. This later evolved into a very successful dino-hacking Raspberry Jam.

Lucy, Neil Ford, and Andy Stanford-Clark used several Raspberry Pis and Node-RED to visualise flows of events when updating the robotic dinosaurs at Blackgang Chine. They went on to create the successful WightPi Raspberry Jam event, where visitors could join in with the unique hacking opportunity.

Given her love for tinkering with tech, and a love for stand-up comedy that can be uncovered via a quick YouTube search, it’s no wonder that Lucy was asked to help judge the first round of the ‘Make us laugh’ Pioneers challenge for Raspberry Pi. Alongside comedian Bec Hill, Code Club UK director Maria Quevedo, and the face of the first challenge, Owen Daughtery, Lucy lent her expertise to help name winners in the various categories of the teens event, and offered her support to future Pioneers.

The post Community Profile: Dr. Lucy Rogers appeared first on Raspberry Pi.

How I coined the term ‘open source’ (Opensource.com)

Post Syndicated from jake original https://lwn.net/Articles/746207/rss

Over at Opensource.com, Christine Peterson has published her account of coining the term “open source”. Originally written in 2006, her story on the origin of the term has now been published for the first time. The 20 year anniversary of the adoption of “open source” is being celebrated this year by the Open Source Initiative at various conferences (recently at linux.conf.au, at FOSDEM on February 3, and others). “Between meetings that week, I was still focused on the need for a better name and came up with the term “open source software.” While not ideal, it struck me as good enough. I ran it by at least four others: Eric Drexler, Mark Miller, and Todd Anderson liked it, while a friend in marketing and public relations felt the term “open” had been overused and abused and believed we could do better. He was right in theory; however, I didn’t have a better idea, so I thought I would try to go ahead and introduce it. In hindsight, I should have simply proposed it to Eric Raymond, but I didn’t know him well at the time, so I took an indirect strategy instead.

Todd had agreed strongly about the need for a new term and offered to assist in getting the term introduced. This was helpful because, as a non-programmer, my influence within the free software community was weak. My work in nanotechnology education at Foresight was a plus, but not enough for me to be taken very seriously on free software questions. As a Linux programmer, Todd would be listened to more closely.”