Tag Archives: IOT

Is That Smart Home Technology Secure? Here’s How You Can Find Out.

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2023/10/30/is-that-smart-home-technology-secure-heres-how-you-can-find-out/


As someone who likes the convenience of smart home Internet of Things (IoT) technology, I am regularly on the lookout for products that meet my expectations while also addressing security and privacy concerns. Smart home technology should be evaluated no differently than other products we buy, such as an automobile. When shopping for a car, we search for the vehicle that meets our visual and performance expectations but that will also keep us and our families safe. With that said, shouldn’t we also seek smart home technologies that are secure and protect our privacy?

I can’t tell you which solution will work for your specific case, but I can give you some pointers on technology security to help you do that research, determine which solution best meets your needs, and stay secure while using it. Many of these recommendations apply no matter what IoT product you’re looking to purchase, so I recommend taking the time to perform these basic product security research steps.

The first thing I recommend is to visit the vendor site and search to see what they have to say about their products’ security. Also, do they have a vulnerability disclosure program (VDP)? If an organization that manufactures and sells IoT technology doesn’t have much to say about their products’ security or an easy way for you or someone else to report a security issue, then I highly recommend you move on.

This would indicate that product security probably doesn’t matter to them as much as it should. I also say this to the product vendors out there: If you don’t take product security seriously enough to help educate us consumers on why your products are the best when it comes to security, then why should we buy your products?

Next, I always recommend searching the Common Vulnerabilities and Exposures (CVE) database and the Internet for the product you’re looking to buy and/or the vendor’s name. The information you find is sometimes very telling in terms of how an organization handles security vulnerability disclosure and follow-up patching of their products.

The existence of a vulnerability in an IoT product isn’t necessarily a bad thing; we’re always going to find vulnerabilities within IoT products. The question we’re looking to answer by doing this search is this: How does this vendor handle reported vulnerabilities? For example, do they patch them quickly, or does it take months (or years!) for them to react – or will they ultimately do nothing? If there is no vulnerability information published on a specific IoT product, it may be that no one has bothered to test the security of the product. It’s also possible that the vendor has silently patched their issues and never issued any CVEs.

It is unlikely, but not impossible, that a product will never contain a vulnerability. Over the years I’ve encountered products where I was unsuccessful in finding any issues; however, not being successful in finding vulnerabilities within a product doesn’t mean they couldn’t possibly exist.

Recently, I became curious to learn how vendors that produce and/or retrofit garage door openers stack up in terms of security, so I followed the research process discussed above. I looked at multiple vendors to see whether any of them are following my recommendations. The sad part is that practically none of them even mentioned the word “security” on their websites. One clear exception was Tuya, a global IoT hardware and IoT software-as-a-service (SaaS) organization.

When I examined the Tuya website, I quickly located their security page and it was full of useful information. On this page, Tuya points out their security policies, standards, and compliance. Along with having a VDP, they also run a bug bounty program. Bug bounty programs allow researchers to work with a vendor to report security issues – and get paid to do it. Tuya’s bug bounty information is located at the Tuya Security Response Center. Vendors take note: This is how an IoT product vendor should present themselves and their security program.

In closing, consumers, if you’re looking to spend your hard-earned money, please take the time to do some basic research to see if the vendor has a proactive security program. Also, vendors, remember that consumers are becoming more aware and concerned about product security. If you want your product to rise to the status of “best solution around,” I highly recommend you start taking product security seriously as well as share details and access to your security program for your business and products. This data will help consumers make more informed decisions on which product best meets their needs and expectations.

Evolving cyber threats demand new security approaches – The benefits of a unified and global IT/OT SOC

Post Syndicated from Stuart Gregg original https://aws.amazon.com/blogs/security/evolving-cyber-threats-demand-new-security-approaches-the-benefits-of-a-unified-and-global-it-ot-soc/

In this blog post, we discuss some of the benefits and considerations organizations should think through when looking at a unified and global information technology and operational technology (IT/OT) security operations center (SOC). Although this post focuses on the IT/OT convergence within the SOC, you can use the concepts and ideas discussed here when thinking about other environments such as hybrid and multi-cloud, Industrial Internet of Things (IIoT), and so on.

The scope of assets has vastly expanded as organizations transition to remote work and as interconnectivity increases through the Internet of Things (IoT), edge devices, and cyber-physical systems coming online from around the globe. For many organizations, the IT and OT SOCs have been separate, but there is a strong argument for convergence, which provides better business context for responding to unexpected activity. In the ten security golden rules for IIoT solutions, AWS recommends deploying security audit and monitoring mechanisms across OT and IIoT environments, collecting security logs, and analyzing them using security information and event management (SIEM) tools within a SOC. SOCs are used to monitor, detect, and respond; this has traditionally been done separately for each environment. In this blog post, we explore the benefits and potential trade-offs of converging these environments in the SOC. Although organizations should carefully consider the points raised throughout this post, the benefits of a unified SOC outweigh the potential trade-offs: visibility into the full threat chain propagating from one environment to another is critical for organizations as daily operations become more connected across IT and OT.

Traditional IT SOC

Traditionally, the SOC was responsible for security monitoring, analysis, and incident management of the entire IT environment within an organization—whether on-premises or in a hybrid architecture. This traditional approach has worked well for many years and ensures the SOC has the visibility to effectively protect the IT environment from evolving threats.

Note: Organizations should be aware of the considerations for security operations in the cloud which are discussed in this blog post.

Traditional OT SOC

Traditionally, OT, IT, and cloud teams have worked on separate sides of the air gap as described in the Purdue model. This can result in siloed OT, IIoT, and cloud security monitoring solutions, creating potential gaps in coverage or missing context that could otherwise have improved the response capability. To realize the full benefits of IT/OT convergence, IIoT, IT and OT must collaborate effectively to provide a broad perspective and the most effective defense. The convergence trend applies to newly connected devices and to how security and operations work together.

As organizations explore how industrial digital transformation can give them a competitive advantage, they’re using IoT, cloud computing, artificial intelligence and machine learning (AI/ML), and other digital technologies. This increases the potential threat surface that organizations must protect and requires a broad, integrated, and automated defense-in-depth security approach delivered through a unified and global SOC.

Without full visibility and control of traffic entering and exiting OT networks, the operations function might not be able to get full context or information that can be used to identify unexpected events. If a control system or connected assets such as programmable logic controllers (PLCs), operator workstations, or safety systems are compromised, threat actors could damage critical infrastructure and services or compromise data in IT systems. Even in cases where the OT system isn’t directly impacted, the secondary impacts can result in OT networks being shut down due to safety concerns over the ability to operate and monitor OT networks.

The SOC helps improve security and compliance by consolidating key security personnel and event data in a centralized location. Building a SOC is a significant undertaking because it requires a substantial upfront and ongoing investment in people, processes, and technology. However, the value of an improved security posture generally outweighs the cost.

In many OT organizations, operators and engineering teams may not be used to focusing on security; in some cases, organizations set up an OT SOC that’s independent from their IT SOC. Many of the capabilities, strategies, and technologies developed for enterprise and IT SOCs apply directly to the OT environment, such as security operations (SecOps) and standard operating procedures (SOPs). While there are clearly OT-specific considerations, the SOC model is a good starting point for a converged IT/OT cybersecurity approach. In addition, technologies such as a SIEM can help OT organizations monitor their environment with less effort and time to deliver maximum return on investment. For example, by bringing IT and OT security data into a SIEM, IT and OT stakeholders share access to the information needed to complete security work.

Benefits of a unified SOC

A unified SOC offers numerous benefits for organizations. It provides broad visibility across the entire IT and OT environments, enabling coordinated threat detection, faster incident response, and immediate sharing of indicators of compromise (IoCs) between environments. This allows for better understanding of threat paths and origins.

Consolidating data from IT and OT environments in a unified SOC can bring economies of scale with opportunities for discounted data ingestion and retention. Furthermore, managing a unified SOC can reduce overhead by centralizing data retention requirements, access models, and technical capabilities such as automation and machine learning.

Operational key performance indicators (KPIs) developed within one environment can be used to enhance another, promoting operational efficiency such as reducing mean time to detect security events (MTTD). A unified SOC enables integrated and unified security, operations, and performance, which supports comprehensive protection and visibility across technologies, locations, and deployments. Sharing lessons learned between IT and OT environments improves overall operational efficiency and security posture. A unified SOC also helps organizations adhere to regulatory requirements in a single place, streamlining compliance efforts and operational oversight.

By using a security data lake and advanced technologies like AI/ML, organizations can build resilient business operations, enhancing their detection and response to security threats.

Creating cross-functional teams of IT and OT subject matter experts (SMEs) helps bridge the cultural divide and foster collaboration, enabling the development of a unified security strategy. Implementing an integrated and unified SOC can improve the maturity of industrial control systems (ICS) for IT and OT cybersecurity programs, bridging the gap between the domains and enhancing overall security capabilities.

Considerations for a unified SOC

There are several important aspects of a unified SOC for organizations to consider.

First, the separation of duty is crucial in a unified SOC environment. It’s essential to verify that specific duties are assigned to individuals based on their expertise and job function, allowing the most appropriate specialists to work on security events for their respective environments. Additionally, the sensitivity of data must be carefully managed. Robust access and permissions management is necessary to restrict access to specific types of data, maintaining that only authorized analysts can access and handle sensitive information. You should implement a clear AWS Identity and Access Management (IAM) strategy following security best practices across your organization to verify that the separation of duties is enforced.

Another critical consideration is the potential disruption to operations during the unification of IT and OT environments. To promote a smooth transition, careful planning is required to minimize any loss of data, visibility, or disruptions to standard operations. It’s crucial to recognize the differences in IT and OT security. The unique nature of OT environments and their close ties to physical infrastructure require tailored cybersecurity strategies and tools that address the distinct missions, challenges, and threats faced by industrial organizations. A copy-and-paste approach from IT cybersecurity programs will not suffice.

Furthermore, the level of cybersecurity maturity often varies between IT and OT domains. Investment in cybersecurity measures might differ, with OT cybersecurity often being relatively less mature than IT cybersecurity. This discrepancy should be considered when designing and implementing a unified SOC. Baselining the technology stack from each environment, defining clear goals, and carefully architecting the solution can help ensure this discrepancy has been accounted for. After the solution has moved into the proof-of-concept (PoC) phase, you can start testing for readiness to move the convergence to production.

You also must address the cultural divide between IT and OT teams. Lack of alignment between an organization’s cybersecurity policies and procedures with ICS and OT security objectives can impact the ability to secure both environments effectively. Bridging this divide through collaboration and clear communication is essential. This has been discussed in more detail in the post on managing organizational transformation for successful IT/OT convergence.

Unified IT/OT SOC deployment

Figure 1 shows the deployment that would be expected in a unified IT/OT SOC. This is a high-level view of a unified SOC. In part 2 of this post, we will provide prescriptive guidance on how to design and build a unified and global SOC on AWS using AWS services and AWS Partner Network (APN) solutions.

Figure 1: Unified IT/OT SOC architecture


The parts of the IT/OT unified SOC are the following:

Environment: There are multiple environments, including a traditional IT on-premises organization, OT environment, cloud environment, and so on. Each environment represents a collection of security events and log sources from assets.

Data lake: A centralized place for data collection, normalization, and enrichment to verify that raw data from the different environments is standardized into a common schema. The data lake should support data retention and archiving for long-term storage.

Visualize: The SOC includes multiple dashboards based on organizational and operational needs. Dashboards can cover scenarios for multiple environments including data flows between IT and OT environments. There are also specific dashboards for the individual environments to cover each stakeholder’s needs. Data should be indexed in a way that allows humans and machines to query the data to monitor for security and performance issues.

Security analytics: Security analytics aggregate and analyze security signals to generate higher-fidelity alerts and to contextualize OT signals against concurrent IT signals and threat intelligence from reputable sources.

Detect, alert, and respond: Alerts can be set up for events of interest based on data across both individual and multiple environments. Machine learning should be used to help identify threat paths and events of interest across the data.

Conclusion

Throughout this blog post, we’ve talked through the convergence of IT and OT environments from the perspective of optimizing your security operations. We looked at the benefits and considerations of designing and implementing a unified SOC.

Visibility into the full threat chain propagating from one environment to another is critical for organizations as daily operations become more connected across IT and OT. A unified SOC is the nerve center for incident detection and response and can be one of the most critical components in improving your organization’s security posture and cyber resilience.

If unification is your organization’s goal, you must fully consider what this means and design a plan for what a unified SOC will look like in practice. Running a small proof of concept and migrating in steps often helps with this process.

In the next blog post, we will provide prescriptive guidance on how to design and build a unified and global SOC using AWS services and AWS Partner Network (APN) solutions.


Stuart Gregg


Stuart enjoys providing thought leadership and being a trusted advisor to customers. In his spare time, Stuart can be seen either training for an Ironman or snacking.

Ryan Dsouza


Ryan is a Principal IIoT Security Solutions Architect at AWS. Based in New York City, Ryan helps customers design, develop, and operate more secure, scalable, and innovative IIoT solutions using AWS capabilities to deliver measurable business outcomes. Ryan has over 25 years of experience in multiple technology disciplines and industries and is passionate about bringing security to connected devices.

What’s Up, Home? – Does your phone battery drain faster in summertime?

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-does-your-phone-battery-drain-faster-in-summertime/26084/

Can you verify if your phone battery drains faster during summer compared to other seasons with Zabbix? Of course, you can!

I have always felt that during summer my phone battery drains faster than during the darker seasons. This would only be logical, as in summer the phone display must be brighter than during the darker seasons. Additionally, in summer we tend to spend more time outdoors, so instead of home Wi-Fi the phone is using mobile data. On the other hand, in wintertime if you are outdoors, the cold weather might affect battery life as well.

But how severe is the drainage difference? Let’s check it out!

Analyzing the data

In June, my iPhone’s average battery level has been 51%.

In April, it was around 67%.

In March, about 71%.

What does this prove?

Well, not necessarily much, as this is not a very scientific method. However, my day-to-day phone usage does not vary much; if anything, during March/April the drainage should have been worse, as I have a tendency to participate in one daily afternoon meeting from outdoors, strolling around with our baby and getting some fresh air whilst being online in the meeting. Now in June, I’ve been enjoying my summer holiday. But clearly, something is going on, as the difference in average level is that large. For my Apple Watch, the difference is there but not as significant.

Why am I not comparing the data with the iPhone activity? Heh — to prove my point with my earlier “Things that WILL go wrong when monitoring IoT devices” post; something has happened as my Home Assistant won’t provide that information anymore. The same happened with connection-type data. I’ll have to take a look at that someday, but lately, I’ve been busy as a bee with 1) the summer holiday and 2) preparing material for Zabbix Summit 2023.

Real-world applications for this kind of cherry-picking

Even though this post is very thin in content, I’m posting it as a hint on how you can use Zabbix for more serious monitoring targets in the same way. Need to compare UPS battery depletion rate? Disk space usage rate? CPU usage? Concurrent connections? Whatever the data, many times it’s useful to stop for a moment and compare how things were (some time units) ago.

Yes, you can simply pick a longer time period in Zabbix time picker and see the data for one year or whatever, and most of the time it will show you a change in pattern, but if the changes are more subtle or the graph is very busy, sometimes zooming in to shorter time periods in history will show you something that you might otherwise miss.

For example, if I expand the time range for the battery usage, not only will you notice that the Home Assistant iCloud ride has been a bumpy one, but also that the details get lost when the time range is longer.

More ways to compare historical data

I didn’t build any new graphs or dashboards for this quick experiment; I merely used the time picker. In case you need to have comparison data available at any time, using time shift and additional data sets would help you out. And, like many of us do, one way to dive deeper into data collected by Zabbix would be to analyze it in Grafana, but that’s a story for another day.
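For instance, here is a rough sketch of what that comparison could look like with Zabbix history functions (the host and item names below are made up, and the syntax assumes Zabbix 6.x):

# Average battery level over the last 30 days (hypothetical host and item key):
avg(/iPhone/battery.level,30d)

# The same metric, shifted three months back in time:
avg(/iPhone/battery.level,30d:now-90d)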

I have been working at Forcepoint since 2014 and I’m happy that my work won’t drain my personal battery. — Janne Pikkarainen

This post was originally published on the author’s page.

The post What’s Up, Home? – Does your phone battery drain faster in summertime? appeared first on Zabbix Blog.

What’s Up, Home? – 7 things to beware of if you monitor your home

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-7-things-to-beware-of-if-you-monitor-your-home/26035/

When reading this blog, you could easily think that everything is smooth sailing all the time. No. When you monitor your home IoT — or frankly, just USE your home IoT — you have plenty of small details to watch out for. I list them for you, so you don’t have to find them out the hard way like I have over this 1+ year journey.

1. The status is not what it seems

This is especially true with IoT devices operating on the 433 MHz radio frequency. Your home smart hub sends the signal like a radio station, hoping your IoT device catches it, and to my understanding, it does not get a reply back from the device. If anything is interfering with the signal, your device will miss it, and thus your home smart hub will show the wrong status.

So, you will need to either get rid of these devices and replace them with ones that use a two-way communication protocol such as ZigBee, or, if that’s not possible, set up extra monitoring to try to guess whether the command your home smart hub sent actually went through. Did you attempt to power a smart power socket connected to a radiator on or off? Keep an eye on the smart temperature meter and react quickly if the temperature does not start to rise after the power socket was powered on.
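As a sketch of what that extra monitoring could look like in Zabbix (the hosts and item keys below are hypothetical, not my actual setup), a trigger could fire if the socket was commanded on but the temperature has not risen within half an hour:

last(/Radiator socket/socket.state)=1
and (last(/Bedroom sensor/temperature) - avg(/Bedroom sensor/temperature,30m)) < 0.5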

2. Battery-low messages can be deceiving

Two of my Philips Hue motion sensors have been complaining about low battery status for about six months now, but they are still operating just fine. I’ll let you know when I finally have to replace the batteries on them. 

On the contrary, the batteries in some 433 MHz Telldus thermometers can just die without much warning. For those, your monitoring needs to react fast if the values stop coming in. To make things more complicated, not TOO fast though, as sometimes these thermometers can hibernate for some time before reporting new values; possibly when there’s no change in temperature, they enter some power-save mode or something. I don’t know.
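A hedged example of such a check, using Zabbix’s nodata() function with a window long enough to tolerate the hibernation (the host and item key are made up):

nodata(/Telldus outdoor/temperature,2h)=1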

3. Bluetooth devices and 2.4 GHz Wi-Fi can interfere with each other

Even though my devices do not use 2.4 GHz Wi-Fi too much, I have some devices, like a Sonos smart speaker, where it’s a must. So, for example, when playing music through that speaker, it’s possible that my Raspberry Pi 4 cannot hear the RuuviTag environmental sensor very reliably. It did help somewhat when I found out that on my Asus router it was possible to enable some kind of “Bluetooth coexistence” mode, but it’s not a 100% solution for my issue.

4. Make sure any helper components are really up

Along with Zabbix and Grafana, my Raspberry Pi 4 runs Home Assistant to harvest some values about my iPhone and so on. It runs as a Docker image and generally is stable, but sometimes it just stops working. I have an automatic daily restart of that Docker image and so far that has been a relatively good way to keep the image running.
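As a minimal sketch, that daily restart can be a single crontab entry; the container name here is an assumption:

# Restart the Home Assistant container every night at 04:00
0 4 * * * /usr/bin/docker restart homeassistant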

5. APIs can and will change

Monitoring something over some API? Or through web scenarios? Rest assured that your joy won’t last forever. This is IT, and things just won’t remain the same. SOMETHING is guaranteed to change every now and then, and the more your monitoring relies on 3rd-party things, the less you can trust that your monitoring will just keep on working. No, it’s likely you will need to alter things from time to time.

6. Monitor your monitoring

Even though the Raspberry Pi 4 and Zabbix are very reliable and unlikely to cause you any trouble, they can of course fail, or more likely, something else will not be as it should. Your home router or Internet connection can die. Electricity can go out. Hardware can die. If you want to be really sure, monitor your monitoring from the outside somehow. Have separate monitoring running in the cloud, for example.

In our case, the electricity and ISP are very reliable, and the Cozify smart home hub has a nice feature where the Cozify cloud will text me if the hub loses connectivity; that’s usually a good indication that either the ISP or power went down. Also, I’m about to roll out a small cron job on this site that checks whether my Zabbix has updated a test file recently. If not, that would indicate my Zabbix is down or otherwise unreachable, and whatsuphome.fi could then e-mail me.
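A rough sketch of that check, assuming Zabbix keeps touching a heartbeat file on the web server (the paths and address below are made up for illustration):

#!/bin/sh
# Alert if the heartbeat file has not been updated in the last 15 minutes
HEARTBEAT=/var/www/whatsuphome/zabbix-heartbeat.txt
if [ -z "$(find "$HEARTBEAT" -mmin -15)" ]; then
    echo "Zabbix heartbeat is stale" | mail -s "Zabbix may be down" me@example.com
fi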

7. You will get paranoid

With more knowledge comes more pain. With some devices, you’ll start to think that they are going to break soon. As an example, the freezer I keep referring to: sometimes it has short periods when its temperature for some reason rises a bit and then goes down again. I don’t know if that has something to do with the fact that our freezer is one of those that does not form ice everywhere, so it’s maintenance-free, or if it’s something else, but we keep observing spikes like this about once a week.

I have been working at Forcepoint since 2014 and have learnt not to trust the technology. — Janne Pikkarainen

This post was originally published on the author’s page.

The post What’s Up, Home? – 7 things to beware of if you monitor your home appeared first on Zabbix Blog.

What’s Up, Home? – Can ChatGPT help set up monitoring a USB-connected printer with Zabbix?

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-can-chatgpt-help-set-up-monitoring-a-usb-connected-printer-with-zabbix/25980/

Can you monitor a USB-connected printer with Zabbix? Of course, you can! But can ChatGPT help set up the monitoring? Well… erm… maybe! By day, I am a Lead Site Reliability Engineer in a global cyber security company, Forcepoint. By night, I monitor my home with Zabbix & Grafana and do some weird experiments with them. Welcome to my blog about the project.

When it comes to printing, I am not laser-sharp. That’s simply because I have not printed anything in a long, long time, and even if I have, it’s been a printer maintained by Someone Else. Yes, I know how to add paper and how to do a ritual dance whilst printing to prevent the printer from doing an annoying paper jam. Yes, I have added some printer servers under Zabbix monitoring at the office. That’s about where my printer wisdom ends.

Nobody buys a printer anymore, except for us

It’s 2023, and nobody prints anymore, or at least that’s my personal impression. Well, that changed at our home as we entered another domain I understand 0% about. Occasionally, my wife makes clothes for our baby, me, and herself. For that, she prints out the source code… the CAD models… well… the blueprints for the clothes. I kid you not, the clothes schematics look so complicated to me and the text-based instructions so alien that I’m sure the instructions are not from this planet.

Anyway, my wife found a used HP LaserJet Pro MFP M28a for us for a steal. No, not a literal steal, but for 40 euros, which did sound cheap to me, so we bought it. Of course, normal human beings would just connect the printer to their laptop/desktop and be done with it. However, you know how this story continues.

Flirting with ChatGPT

I went to Zabbix integrations and was sure I would easily find out how to monitor a USB-connected printer. I’ll just search and… what? Nothing? No worries! We have the community templates! I’ll just quickly grab a template from there…. an empty result set, AGAIN? Monitoring SNMP-enabled printers is a different story, and for that Zabbix can suit you very well, but just try to find something for USB-connected printers. I know I could probably monitor my printer by spying on CUPS, but I’m not sure if it could return the details I wanted.

With my experiments, I’m used to the fact that I cannot get direct answers to my monitoring needs using search engines, as I’m doing stuff that not too many have done. Even if they have, they might not have blogged about it. That’s OK, but as this was about printing, which is a common topic, I decided to see what would happen if I tried ChatGPT. I did the tests with the free ChatGPT 3.5; the new ChatGPT 4 would probably be a different ride.

Here’s our lovely little chat.

Me: I have an HP LaserJet MFP M28a monochrome laser printer connected to USB. How do I get its remaining toner level through the command line on Ubuntu 22.04?

ChatGPT:  You can get the remaining toner level of your HP LaserJet MFP M28a monochrome laser printer through command line on Ubuntu 22.04 by following these steps:

  1. Install the “hplip” package by running the following command in the terminal:

sudo apt-get install hplip

  2. Once the installation is complete, run the following command to set up your printer:

sudo hp-setup -i

Follow the instructions provided by the setup wizard to configure your printer.

  3. After the printer is set up, run the following command to get the remaining toner level:

hp-systray -x

This will launch the HP Device Manager, which displays the toner level for your printer.

Alternatively, you can also run the following command to get the toner level directly in the terminal:

hp-levels -n

This will display the toner level in a percentage format.

Me: None of those worked as hp-levels and hp-systray require QT4 and D-Bus. Let’s use ipptool instead.

ChatGPT: Sure, we can use the ipptool command to get the remaining toner level of your HP LaserJet MFP M28a monochrome laser printer on Ubuntu 22.04.

Here are the steps to follow:

  1. Make sure your printer is connected to the computer and turned on.
  2. Open the terminal on your Ubuntu 22.04 machine.
  3. Run the following command to install the CUPS and IPP utilities:

sudo apt-get install cups-ipp-utils

  4. Run the following command to find the printer’s URI:

lpinfo -v

The output should contain a line with the URI of your printer, which will look something like this:

ipp://HP_LaserJet_MFP_M28a.local:631/ipp/print

  5. Run the following command to get the remaining toner level:

ipptool -t -v -H ipp://localhost:631/ipp/print -I /usr/share/cups/model/HP/hp-laserjet_pro_mfp_m26a.ppd get-printer-attributes.test | grep marker-levels

Replace “get-printer-attributes.test” with the actual name of the test file used by your printer.

  6. The output of the command should contain a line with the toner level, which will look something like this:

marker-levels: marker-high-levels=(100) marker-low-levels=(5) marker-names=(Black Cartridge) marker-current-levels=(30)

In this example, the toner level is 30%.

That’s it! This is how you can use the ipptool command to get the remaining toner level of your HP LaserJet MFP M28a monochrome laser printer on Ubuntu 22.04.

* * * 

The latest tip really did work, so we’re off to a very good start with ChatGPT! Next, I tried asking if it could create a Zabbix template for me, but even though it really tried, it did fail.

Adding printer to Zabbix

So, before continuing with the failed ChatGPT attempt, let’s add this printer to monitoring, as that’s the reason you are reading this blog anyway. I was very surprised about the detail level of the ipptool output: everything from supported paper sizes to printing options to other usual details, but then also some dynamic details like printer uptime.

I wanted to add those dynamic details, so I read through the output and added the details in the way you already know me doing:

  • I first added a cronjob that runs every minute 

    ipptool -t -v "ipp://HP%20LaserJet%20MFP%20M28a%20(7C69CB)%20(USB)._ipp._tcp.local/" get-printer-attributes.test >/tmp/hp-printer.txt

  • Zabbix then reads that text file into a master item, and with dependent items and item preprocessing cherry-picks the interesting details

In screenshots, like this.
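In case the screenshots don’t come through, here’s a rough text version of the idea; the item key and the regular expression below are assumptions based on the ipptool output above, not my exact configuration:

# Master item (Zabbix agent), reads the file written by the cron job:
vfs.file.contents[/tmp/hp-printer.txt]

# Dependent item, preprocessing step "Regular expression":
#   pattern: marker-current-levels=\((\d+)\)
#   output:  \1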

Does it work?

Of course it does, as this routine is what I’ve done so many times before in this blog. Here’s the result. And yes, the toner will likely run out soon: this is a used printer, and it complains about a low toner level every time we print. It will be an interesting experiment to see how many pages we can still print before it actually runs out of juice. For a printer reporting “0% left” (as other tools also confirm), it does an excellent job.

Back to ChatGPT we go

If I were to copy-paste my complete chat with ChatGPT, this blog post would become ridiculously long. Communicating with ChatGPT was like working with a hyperactive intern who proceeds to do SOMETHING, only to realize moments later that whatever it did was completely different from what you asked for. Probably I’m just a sucky ChatGPT prompter.

To give you an idea, here’s a summary of how ChatGPT failed:

  • It really attempted to create YAML-based template files for me.
  • Unfortunately, when attempting to import the templates to Zabbix, that part failed every time 
  • When I told ChatGPT about the error messages, it attempted to fix its errors, but in weird ways. Sometimes it changed the template drastically, even when it was only supposed to add or modify a single line. Multiple times, it decided to change the format from YAML to XML unless I demanded that it stay with YAML.

Here are a few snippets from the chat. Maybe at some point, I’ll throw in some money to try out ChatGPT 4.

… this went on and on until I gave up. In conclusion: this time ChatGPT nudged me in the correct direction to get the desired output about the printer info (although a sharp-eyed reader might notice I hinted at the tool I’d like to use, and that I might after all know more about printers than I pretended during this blog post…). Then, ChatGPT ran out of ink when it tried to generate the Zabbix templates. It’s scarily advanced anyway, and someday I will try out the more advanced ChatGPT 4.

I have been working at Forcepoint since 2014 and luckily I don’t suffer from empty paper syndrome very often. — Janne Pikkarainen

This post was originally published on the author’s page.

The post What’s Up, Home? – Can ChatGPT help set up monitoring a USB-connected printer with Zabbix? appeared first on Zabbix Blog.

What’s Up, Home? – Monitor your mobile data usage

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-monitor-your-mobile-data-usage/25856/

Can you monitor your mobile data usage with Zabbix? Of course, you can! By day, I am a Lead Site Reliability Engineer in a global cyber security company Forcepoint. By night, I monitor my home with Zabbix & Grafana Labs and do some weird experiments with them. Welcome to my blog about this project.

As it is Easter (the original blog post was published two months ago), this entry is a bit short, but as I was remoting into my home systems over VPN on my phone, I got this blog post idea.

When on the go, I tend to stay connected to my home network over VPN. Or rather, an iOS Shortcut pretty much runs my OpenVPN home profile whenever I exit my home.

My Zabbix collects statistics from my home router over SNMP, and as usual, the data includes per-port traffic statistics. VPN clients are shown as tunnel interfaces so Zabbix LLD picks them up nicely.

So, as a result, I get to see how much traffic my phone consumes whenever I’m using a mobile VPN. Here are seven-day example screenshots from ZBX Viewer mobile app. 

VPN connection bits sent.

VPN connection bits received.

So, from this data I can get all the statistics I need. And, using my ElastiFlow setup, I could likely see where my phone has been connecting most.

This post was originally published on the author’s page.

The post What’s Up, Home? – Monitor your mobile data usage appeared first on Zabbix Blog.

What’s Up, Home? – Monitor your iPhone & Apple Watch with Zabbix

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-monitor-your-iphone-amp-apple-watch-with-zabbix/25817/

I’m entering a whole new level of monitoring, and “What’s up, home?” could now also be called “What’s up, me?”. Recently, a colleague hinted to me about Home Assistant’s HomeKit Controller integration as a way to get my HomeKit-compatible Netatmo environmental monitoring device to return values back to Zabbix without my Siri kludge. One thing led to another, and now I’m monitoring my iPhone and Apple Watch; so, practically, I’m monitoring myself.

But how to get to this level? Let’s rewind a bit.

Home Assistant

Home Assistant is a nice home automation software. It is open source and provides many, many integrations for automating your home. I now have my Netatmo comfortably monitored through that…

Bye-bye, mobile app and my Siri kludge. This screenshot is from Home Assistant.

… but while exploring Home Assistant’s integrations, I came upon its iCloud integration. Oh boy. This takes my monitoring to a whole new level.

But how to get this data to Zabbix?

On Home Assistant, you can go to your account settings and create a long-lived access token. With that, you just pass the authorization bearer header as part of your HTTP request, and you are done. So, like this.
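A minimal sketch of such a request (the URL, port, and token are placeholders; in Zabbix, the same call maps naturally to an HTTP agent item with an Authorization header):

curl -s \
  -H "Authorization: Bearer YOUR_LONG_LIVED_TOKEN" \
  -H "Content-Type: application/json" \
  http://homeassistant.local:8123/api/states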

This way you’ll receive your Home Assistant data back in JSON format. As the output is really really really long, and I needed just a relatively small set of data for myself, I cherry-picked those using the above item as the master item and then created a bunch of dependent items.

… and here’s a single item so you get the idea.

Let’s create some dashboards

Now that I have my data in Zabbix, it’s time to create some dashboards. Fascinating that I can now truly monitor my iPhone and Apple Watch like this.

I also created a Grafana dashboard.

Observations

This has been now running for roughly a day for me. Already some observations:

  • While driving, at traffic lights I tried to see what would happen if I disable the Bluetooth connection between my car and my iDevices. My status was reported as Cycling instead of Automotive for the rest of the trip. Hmm.
  • Not all the data will be updated in real-time, but there’s a significant lag. Also, it seems I might need to VPN to my home so the data would be updated sooner while I’m not at home.
  • iPhone’s custom focus modes are not updated to Home Assistant. During the sleep focus mode, the focus mode was reported as On, but for any other mode I tried it only shows Off. Shame, I would have loved to start tracking things like how long it takes for me to put our baby to sleep or how much of the time I’m spending with this blog. That has to wait for now.

But anyway, this thing just opened a whole new Pandora’s box for me to explore. 

This post was originally published on the author’s page.

Understanding the Ecosystem of Smart Cities for the Purpose of Security Testing

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/12/29/understanding-the-ecosystem-of-smart-cities-for-the-purpose-of-security-testing/


Is there a defined ecosystem, similar to what we encountered with the Internet of Things (IoT), that can be charted out as it relates to smart city technology and its security implications?

While evaluating IoT, I struggled with defining what IoT is. I found that there were varying definitions out there, but none that helped me fully understand what constitutes IoT and how to approach evaluating its security posture. To solve that dilemma in my mind, and to better discuss it with vendors and consumers, I finally landed on the concept that IoT is better defined as a series of traits that can be used to explain it and its structure, and to better understand its components and their interactions with each other. This concept and approach also allowed me to properly map out all of the interlinking mechanisms as they relate to security testing of an IoT technology’s full ecosystem.

Looking at it from this perspective, we see that Smart Cities leverage IoT technology and concepts at their core, but in many cases with a much more defined relationship to data. With this in mind, I have started looking at the various components that make up Smart Cities, abstracting out their specific purposes, with the goal of having a model to help better understand the various security concerns as we plan for our Smart City future.

Through general observation we can see that Smart City solutions consist of the following five general areas:

Embedded technology

  • Sensors
  • Actuators
  • Aggregators & Edge or Fog appliances

Management and control

  • Client-side application
  • Cloud application
  • APIs
  • Server application

Data storage

  • Cloud storage
  • On-premises storage
  • Edge or Fog storage

Data access

  • Cloud application
  • Client-side application
  • Server-side application

Communication

  • Ethernet
  • Wi-Fi (802.11 a/b/g/n)
  • Radio frequency (RF) (BLE, Zigbee, Z-wave, LoRa, etc.)
  • Cellular communication (GSM, LTE, 3G/4G/5G)

Mapping these various components to a specific smart city solutions ecosystem, we can better establish the relationship between all the components in that solution. This in turn allows us to improve our threat modeling processes by including those interrelationships. Also, similar to general IoT security testing, understanding the interconnected relationships allows us to take a more holistic approach to security testing.

Testing each component only as a stand-alone entity is a short-sighted approach that misses the mark when identifying attack vectors and associated risks, which often come to light only when security testing takes the interaction of these components into consideration. This approach leads us to always ask the question: what happens to the security posture of one set of components if there is a security failure in another set? Also, the holistic approach helps us better map the security risk levels across the entire ecosystem. Just because a low-risk condition is found in a single item does not mean that the risk is not compounded into a higher risk category due to some interaction with other components.

So, in conclusion, the key to establishing a solid security testing approach for Smart City technologies is to map out the entire ecosystem of the solution being designed and deployed. Develop a solid understanding of the various components and their interaction with each other; then, conduct threat modeling to determine the possible risks and threats that are expected to come against the smart city solution.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/11/08/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-4/


Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. In our previous posts, we covered how to achieve access to flash memory, how to extract file system data from the device, and how to modify the data we’ve extracted. In this post, we’ll cover how to gain root access over the device’s secure shell protocol (SSH).

Gaining root access over SSH

Before we move on to establishing an SSH connection as root, you may need to set the IP address on your local host to allow access to the cable modem at its default IP address of 192.168.100.1. In our example, we set the local IP address to 192.168.100.100 to allow this connection.

To set the local IP address on your host, the first thing to do is identify the local ethernet interface. You can do this from the Linux CLI terminal by running the ifconfig command:

  • ifconfig
Figure 10: IFCONFIG showing Local Ethernet Interfaces

In our example, the ethernet interface is enp0s25, as shown above. Using that interface name (enp0s25), we can set the local IP address to 192.168.100.100 using the following command:

  • ifconfig enp0s25 192.168.100.100

To validate that you’ve set the correct IP address, you can rerun the ifconfig command and examine the results to confirm:

Figure 11: Ethernet Interface Set To 192.168.100.100

It’s also possible to connect your host system directly to the cable modem’s ethernet port and have your host interface set up for DHCP – the cable modem should assign an IP address to your host device.

Once you have a valid IP address assigned and/or configured on your host system, power up the cable modem to see whether your changes were made correctly and whether you now have root access. Again, ensure the SD Card reader is disconnected before plugging the 12V power supply into the cable modem.


Once you’ve confirmed that the SD Card reader is disconnected, power up the cable modem and wait for the boot-up sequence to complete. Boot-up is complete when only the top LED is lit and the second LED is flashing:


From the CLI terminal on your host, you can run the nmap command to show the open ports on the cable modem. This will also show if your changes to the cable modem firmware were made correctly.

  • nmap -sS 192.168.100.1 -p 22,80,443,1337
Figure 12: NMAP Scan Results

At a minimum, you should see TCP port 1337 as open as shown above in Figure 12. If not, then most likely an error was made either when copying the dropbear_rsa_key file or making changes to the inittab file.

If TCP port 1337 is open, the next step is to attempt to log in to the cable modem as root with the following SSH command. When prompted for a password, use “arris” in all lower case.
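The exact command isn’t reproduced here, but based on the details above (root user, port 1337, and the -T switch explained in the note below), it would be along the lines of:

  • ssh -T -p 1337 root@192.168.100.1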

Note: Since the kernel on this device is believed to have created an environment restriction to prevent console access, we were only successful in getting around that restriction with the -T switch. The -T switch disables all pseudo-terminal allocation in SSH, and without it, no functioning console can be established. Also, when connected, you will not receive a typical command-line prompt, but the device should still accept and execute commands properly.

If you receive a “no matching key exchange method found” error (Figure 13), you will need to either define that Diffie-hellman-group-sha1 in the SSH command or create a config file to do this automatically.

Figure 13: Key Exchange Error

Defining a config file is the easier method. We did this prior to the DEF CON IoT Village, so participants in the exercise would not need to. Since others may be using this write-up to recreate the exercise, I decided to add this step here to prevent any unnecessary confusion.

To create a config file that supports SSH login to this cable modem without errors, you will need to create a “.ssh” folder and a “config” file within the home directory of the user you are logging in as. In our example, we were logged in as root. To get to the home folder, the simplest method is to enter the “cd” command without any arguments. This will take you to the home directory of the logged-in user.

  • cd

Once in your home directory, try to change directory “cd” to the “.ssh” folder to see if one exists:

  • cd .ssh

If it does, you won’t need to create one and can skip over the creation steps below. If not, then you will need to create that folder in your home directory with the following command:

  • mkdir .ssh

Once you have changed directory “cd” to the .ssh folder, you can use vi to create and edit a config file.

  • vi config

Once in vi, make the entries in the config file shown below in Figure 14. These entries enable access to the cable modem at 192.168.100.1 for the user root, with a cipher of aes256-cbc and the Diffie-Hellman key exchange.

Figure 14: Config File
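Figure 14 isn’t reproduced here, but based on the description above, the config file would look roughly like the following (the exact cipher and key exchange names may differ on your system):

Host 192.168.100.1
    User root
    Ciphers aes256-cbc
    KexAlgorithms +diffie-hellman-group1-sha1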

Once the config is created and saved, you should be able to login over SSH to the cable modem and not receive any more errors.

When you connect and log in, the SSH console will not show you a typical command prompt. So, once you hit the return key after the above SSH command, run the command “ls -al” to show a directory and file listing on the cable modem, as shown below in Figure 15. This should indicate whether or not you successfully logged in.

Figure 15: Cable Modem Root Console

At this point, you should now have root-level access to the cable modem over SSH.

You may ask, “What do I gain from getting this level of root access to an IoT device?” This level of access allows us to do more advanced and detailed security testing on a device. This is not as easily done when sitting on the outside of the IoT device or attempting to emulate it on a virtual machine, because the original hardware often contains components and features that are difficult to emulate. With root-level access, we can interact more directly with running services and applications and better monitor the results of any testing we may be conducting.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/11/01/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-3/


Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. In our previous posts, we covered how to achieve access to flash memory and how to extract file system data from the device. In this post, we’ll cover how to modify the data we’ve extracted.

Modify extracted file systems data

Now that you have unsquashfs’d the squash file system, the next step is to take a look at the extracted file system and its general structure. In our example, the unsquashed file system is located at /root/Desktop/Work/squashfs-root. To see the structure, while in the folder ~/Desktop/Work, you can run the following commands to change directory and then list the files and folders:

  • cd squashfs-root
  • ls -al

As you can see, we have unpacked a copy of the squash file system containing the embedded Linux root file system that was installed on the cable modem for the ARM processor.

The next goal will be to make the following three changes so we can eventually gain access to the cable modem via SSH:

  1. Create or add a dropbear_rsa_key to squashfs.
  2. Remove symbolic link to passwd and recreate it.
  3. Modify the inittab file to launch dropbear on startup.

To make these changes, you will first need to change the directory to the squashfs-root folder. In our IoT Village exercise example, that folder was “~/Desktop/Work/squashfs-root/etc”, and the attendees used the following command:

  • cd ~/Desktop/Work/squashfs-root/etc

It is critical that you are in the correct directory and not in the etc directory of your local host before running any of the following commands to avoid potential damage to your laptop or desktop’s configuration. You can validate this by entering the command “pwd”, which allows you to examine the returned path results as shown below:

  • pwd

The next thing we need to do is locate a copy of the dropbear_rsa_key file and copy it over to “~/Desktop/Work/squashfs-root/etc”. This RSA key is needed to support the dropbear service, which allows SSH communication to the cable modem. It turns out that a usable copy of the dropbear_rsa_key file is located within the etc folder on Partition 12, which in our example was found to be mounted at /media/root/disk/etc. You can use the application Disks to confirm the location of the mount point for Partition 12, similar to the method we used for Partition 5 shown in Figure 4.

By running the following command during the IoT Village exercise, attendees were successfully able to copy the dropbear_rsa_key from Partition 12 into “~/Desktop/Work/squashfs-root/etc/”:

  • cp /media/root/disk/etc/dropbear_rsa_key .

Next, we had participants remove the symbolically linked passwd file and create a new passwd file that points to the correct shell. Originally, the symbolic link pointed to a passwd file that assigned the root user a shell that prevented root from accessing the system. By changing the passwd file, we can assign a usable shell environment to the root user.

Like above, the first step is to make sure you are in the correct folder listed below:

“~/Desktop/Work/squashfs-root/etc”

Once you have validated that you are in the correct folder, you can run the following command to delete the passwd file from the squash file systems:

  • rm passwd

Once this is deleted, you can next create a new passwd file and add the following data to this file as shown below using vi:

Note: Here is a list of common vi interaction commands that are useful when working within vi:

  • i = insert mode. This allows you to enter data in the file
  • esc key will exit insert mode
  • esc key followed by entering :wq and then the enter key will write and exit vi
  • esc key followed by dd will delete the line you are on
  • esc key followed by x will delete a single character
  • esc key followed by shift+a will place you in append insert mode at the end of the line
  • esc key followed by a will place you in append insert mode at the point of your cursor
  • vi passwd

Once in vi, add the following line:

root:x:0:0:root:/:/bin/sh


Next, we need to alter the inittab file. Again, make sure you are currently in the folder “/root/Desktop/Work/squashfs-root/etc”. Once this is validated, you can use vi to open and edit the inittab file.

  • vi inittab

The file should originally look like this:


You will need to add the following line to the file:

::respawn:/usr/sbin/dropbear -p 1337 -r /etc/dropbear_rsa_key

Add the line as shown below, and save the file:


Once you’ve completed all these changes, double-check them to make sure they are correct before moving on to the next sections. This will help avoid the need to redo everything if you made an incorrect alteration.

If everything looks correct, it's time to repack the squash file system and write the data back to Partition 5 on the cable modem.

Repacking the squash file system and writing it back to the modem

In this step, you will be repacking the squash file system and using the Linux dd command to write the image back to the cable modem's NAND flash memory.

The first thing you will need to do is change the directory back to the working folder – in our example, that is "~/Desktop/Work". This can be done from the current location of "~/Desktop/Work/squashfs-root/etc" by running the following command:

  • cd ../../

Next, you'll use the command "mksquashfs" to repack the squashfs-root folder into a binary file called new-P5.bin. To do this, run the following command from the "~/Desktop/Work" folder that you should currently be in.

  • mksquashfs squashfs-root/ new-P5.bin

Once the above command has completed, you should have a file called new-P5.bin. This binary file contains the squashfs-root folder properly packed and ready to be copied back onto the cable modem partition 5.

Note: If you think you have made a mistake and need to rerun the "mksquashfs" command, make sure you delete the new-P5.bin file first. "mksquashfs" will not overwrite an existing file; it will append the data to it instead. If that happens, the new-P5.bin image will contain duplicates of every file, which will cause your attempt to gain root access to fail.
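
If you do need to rerun it, a safe way (using the same file name as in this exercise) is to remove any stale image in the same command:

  • rm -f new-P5.bin && mksquashfs squashfs-root/ new-P5.bin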

Once you have run mksquashfs and have a new-P5.bin file containing the repacked squashfs, you'll use the Linux dd command to write the binary file back to Partition 5 on the cable modem.

To complete this step, first make sure you have identified the correct “Device:” location using the method shown in Figure 7 from part 2 of this blog series.


In this example, the "Device:" was determined to be sdd5. So we can write the binary image by running the following dd command:

  • dd if=new-P5.bin of=/dev/sdd5
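
As an optional sanity check before disconnecting anything (a suggested extra step, not part of the original exercise), you can flush the write and compare the partition's first bytes against your image; sdd5 is the device from our example:

  • sync
  • cmp -n $(stat -c%s new-P5.bin) new-P5.bin /dev/sdd5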

Once the dd command completes, the modified squash file system image should now be written to the modem's NAND flash memory chip. Before proceeding, disconnect the SD Card reader from the cable modem.

Note: Attaching the SD Card Reader and 12V power supply to the cable modem at the same time will damage the cable modem and render it nonfunctional.


In our next and final post in this series, we’ll cover how to gain root access over the device’s secure shell protocol (SSH). Check back with us next week!

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/10/25/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-2/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. Last week, we covered the basics of the exercise and achieving access to flash memory. In this post, we’ll cover how to extract partition data.

Extracting partition data

The next step in our hands-on IoT hacking exercise is to identify the active partition and extract the filesystems for modification. The method I have used for this is to examine the file date stamps – the one with the most current date is likely the current active file system.

Note: Curious why there are multiple or duplicate filesystem partitions on an embedded device? Typically, with embedded device firmware, there are two of each partition: two kernel images and two root file system images. This is so the device's firmware can be updated safely, preventing the device from being bricked if an error or power outage occurs during the firmware update. The firmware update process updates the files on the offline partitions, and once that completes properly, the boot process loads the newly updated partitions as active and takes the old partitions offline. This way, if a failure occurs, the old unchanged partition can be reloaded, preventing the device from being bricked.

On this cable modem, we have seven partitions. The key ones we want to work with are Partitions 5, 6, 12, and 13. This cable modem has two MCUs: an ARM and an ATOM processor. Partitions 5 and 6 are the root filesystems for the ARM processor, which is what we will be hacking. Partitions 12 and 13 are the root filesystems for the ATOM processor, which controls the RF communication on the cable.

To ensure we alter the active partition, the first thing we need to do is check the date stamps on the mounted file systems to see whether partition 5 or partition 6 is the active one. There are several ways to do this, but the method we use is to click on partition 5 or 6 in the Disks application to highlight it, and then click on the "Mounted at:" link, as shown below in Figure 4, to open the mounted partition in the file manager, as shown in Figure 5:


Figure 4: Partition File System Mount Locations


Figure 5: Mounted File System Partition 5

Once File Manager opens the folders, you can right click on the “etc” folder, select “Properties,” and check the date listed in “Modified:” as shown below in Figure 6:


Figure 6: Folder Date Stamp

You will want to do this on both partitions 5 and 6 for the cable modem to identify the partition with the most current date stamp, as this is typically the active partition. For the example cable modems we used at DEF CON IoT Village, partition 5 was found to have the most current date stamp.
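
If you prefer the command line to the Disks GUI, the same comparison can be made with ls; replace the two paths below with the "Mounted at:" locations Disks reports for partitions 5 and 6 (the paths here are only placeholders):

  • ls -ld --time-style=long-iso /media/root/PARTITION5_MOUNT/etc /media/root/PARTITION6_MOUNT/etc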

Extracting the active partition

The next step is to extract the partition with the newest date stamp (Partition 5). To do this, we first need to identify which Small Computer System Interface (SCSI) disk partition 5 is attached to. This can be identified by selecting Partition 5 in the Disks application and then reading the "Device:" field, as shown below in Figure 7:


Figure 7: Device

Also remember to record the “Device:” information. You’ll need this during several of the future steps. In our example, we see that this is /dev/sdd5, as shown in Figure 7.

To extract the partition image, launch the Terminal application on your Linux host to gain access to the command line interface (CLI). Once the CLI is open, create a storage area to hold the partition binary and file system data. In our example, we created a folder for this called ~/Desktop/Work.

From the CLI, within the ~/Desktop/Work folder, we ran the Linux dd command to make a binary copy of Partition 5, using the device location that Partition 5 was connected to (/dev/sdd5). A sample output of this dd command is shown below in Figure 8:

  • dd command arguments:
    if=file Read input
    of=file Write output
  • dd if=/dev/sdd5 of=part5.bin

Figure 8: dd Command

Note: Before proceeding, we highly recommend that you make a binary copy of the cable modem's full NAND flash. This may come in handy if anything goes wrong and you need to return the device to its original operating state. This can be done by running the following dd command against the full "Device:". In this example, that would be /dev/sdd:

  • dd if=/dev/sdd of=Backup_Image.bin
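
If you ever need to roll the modem back, that same backup can be written back in full with dd. Double-check the device path first, since writing to the wrong device will destroy data on your workstation; /dev/sdd is only the device from our example:

  • dd if=Backup_Image.bin of=/dev/sdd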

Unsquashfs the partition binary

Next, we’ll extract the file system from the Partition 5 image that we dd’d from the cable modem in the previous steps. This file system was determined to be a Squash file system, a type commonly used on embedded devices.

Note: A Squash file system is a compressed read-only file system commonly found on embedded devices. Since the file system is read-only, it is not possible to make alterations to the file system while it is in place. To make modifications, the file system will need to be extracted from the device's storage and uncompressed, which is what we'll do in the following exercise steps.

So, your first question may be, "How do we know that this is a squash file system?" If you look at the partition data in the Disks application, you will see that "Content:" shows (squashfs 4.0):


Another simple option to identify the content of the file is to run the file command against the binary file extracted from the modem with the dd command and view the output. An example of this output is shown below in Figure 9:

  • file part5.bin

Figure 9: Output from File Command

To gain access to the partition binary squash file system, we will be extracting/unpacking the squash file system using the unsquashfs command. A squash file system is read-only and cannot be altered in place – the only way to alter a squashfs is to unpack it first. This is a simple process and is done by running the unsquashfs command against the binary file part5.bin:

  • unsquashfs part5.bin

Once the command completes, you should have a folder called "squashfs-root". This folder contains the extracted embedded Linux root file system for the cable modem.

In our next post, we’ll cover how to modify the data we just extracted. Check back with us next week!

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/10/18/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-part-1/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

Rapid7 was back this year at DEF CON 30, participating in the IoT Village with another hands-on hardware hacking exercise, with the goal of teaching attendees various concepts and methods for IoT hacking. Over the years, these exercises have covered several different embedded device topics, including how to use a Logic Analyzer, extracting firmware, and gaining root access to an embedded IoT device.

Like last year, we had many IoT Village attendees request a copy of our exercise manual, so again I decided to create an in-depth write-up about the exercise we ran, with some expanded context to answer several questions and expand on the discussion we had with attendees at this year’s DEF CON IoT Village.

This year’s exercise focused on the following key areas:

  • Interacting with eMMC in circuit
  • Using the Linux dd command to make a binary copy of flash memory
  • Using the unsquashfs and mksquashfs commands to unpack and repack read-only squash file systems
  • Altering startup files within the embedded Linux operating system to execute code during device startup
  • Leveraging dropbear to enable SSH access

Summary of exercise

The goal of this year's hands-on hardware hacking exercise was to gain root access to an Arris SB6190 cable modem without needing to install any external code. To do this, the user interacted with the device via a PHISON PS8211-0 embedded multimedia controller (eMMC) to mount up and gain access to the NAND flash memory storage. With NAND flash memory access, the user was able to identify the partitions of interest and extract those partitions using the Linux dd command.

Next, the user extracted the filesystem from the partition binary files and was then able to modify key elements to enable SSH access over the ethernet connection. After the modifications were completed, the filesystems were repacked and written back to the modem. Finally, the attendee was able to power up the device and log in over ethernet using SSH with root access and the default device password.


eMMC access to flash memory

In this first section of the exercise, we focused on understanding the process of gaining access to the NAND flash memory by interacting with a PHISON PS8211-0 embedded multimedia controller (eMMC).

Wiring up eMMC and SD card breakout board

To interact with eMMC devices, we typically need the following connections:

  • CMD Command
  • DAT Data
  • CLK Clock
  • VCC Voltage 3.3v
  • VCCq Controller Voltage 1.8v – 3.3v
  • GND Ground

As shown in the above bullets, there are typically two different voltages required to interact with eMMC chips. However, in this case, we determined that the PHISON PS8211-0 eMMC chip did not have a different controller voltage for VCCq, meaning that the voltage used was only 3.3v for this example.

When connecting to and interacting with an eMMC device, we can usually utilize the internal power supply of the device. This often works well when different VCC and VCCq voltages are required, but in those cases, we also have to hold the microcontroller unit (MCU) in a reset state to prevent the processor from causing interruptions when we try to read memory. In this example, we found that the PHISON eMMC chip and NAND memory could be powered by supplying the voltage externally via the SD Card reader.

When using an SD Card reader to supply voltage, we must not hook up the device's normal power source as well. Connecting both sources – normal and SD Card – to the device will lead to permanent damage.

When it comes to soldering the needed wiring for this exercise, we realized that having attendees do the soldering would be much more complex than we could support. So, all the wiring was presoldered before the IoT Village event using 30-gauge color-coded wirewrap wire. This wiring was then attached to an SD Card breakout board, as shown below in Figure 1:

  • White = Data
  • Blue = Clock
  • Yellow = Command
  • Red = Voltage (VCC)
  • Black = Ground
Figure 1: Wiring Hookups

Also, as you can see in the above images, the wires do not run parallel against each other, but have a reasonable gap between them and pass over each other perpendicularly when they cross. This is because we found during testing that if we ran wires directly next to each other, the partitions failed to mount properly, most likely because noise induced from the adjacent lines affected the signal.

Note: If you are looking to do your own wiring, the 30-gauge wirewrap wire I used has polyvinylidene fluoride insulation sold under the brand name Kynar. The benefit of using Kynar wirewrap is that this insulation does not melt or shrink as easily from the heat of a soldering iron. When heated by a soldering iron, standard plastic-coated insulation will shrink back, exposing uninsulated wire. This can lead to wires shorting out on the circuit board.

Connect SD card reader

With the modem wired up to the SD Card breakout as shown above, we can mount the NAND flash memory by connecting an SD Card reader. Note that not all SD Card readers will work: I used simple trial and error with several SD Card readers I had on hand until I found that an inexpensive DYNEX brand reader worked. It should be attached as shown below in Figure 2:

Figure 2: Connected SD Card Reader

Once plugged in, the various partitions on the cable modem's NAND flash memory should start loading; in this case, a total of seven partitions mounted up. This can take a few minutes to complete. If your system opens a window for each volume as it mounts, I recommend closing them to avoid cluttering your desktop. To see the layout of the various partitions on the NAND flash and gather the information needed for reading and writing to the correct partitions, we used the Linux application Disks. Once Disks is opened, you can click on the 118 MB drive in the left column, and it will show all of the partitions; it should look something like Figure 3 below:

Figure 3: Disks NAND Flash Partitions
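
As a cross-check against the Disks view, lsblk on the command line shows the same layout; look for the roughly 118 MB device that appears when the reader is plugged in (the column selection below is just one convenient choice):

  • lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT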

In our second installment of this 4-part blog series, we’ll discuss the step of extracting partition data. Check back with us next week!

Addressing the Evolving Attack Surface Part 1: Modern Challenges

Post Syndicated from Bria Grangard original https://blog.rapid7.com/2022/10/17/addressing-the-evolving-attack-surface-part-1-modern-challenges/

Addressing the Evolving Attack Surface Part 1: Modern Challenges

Lately, we’ve been hearing a lot from our customers requesting help on how to manage their evolving attack surface. As new 0days appear, new applications are spun up, and cloud instances change hourly, it can be hard for our customers to get a full view of risk into their environments.

We put together a webinar to chat more about how Rapid7 can help customers meet this challenge, featuring two amazing presenters: Cindy Stanton, SVP of Product and Customer Marketing, and Peter Scott, VP of Product Marketing.

At the beginning of this webcast, Cindy highlights where the industry started: traditional vulnerability management (VM), which was heavily focused on infrastructure but has evolved significantly over the last couple of years. Cindy discusses how this rapid expansion of the attack surface has been accelerated by remote workforces during the pandemic, the convergence of IT and IoT initiatives, modern application development leveraging containers and microservices, adoption of the public cloud, and much more. Today, security teams face the daunting challenge of having so many layers up and down the stack, from traditional infrastructure to cloud environments, applications, and beyond. They need a way to understand their full attack surface. Cindy gives an example of this evolving challenge of increasing resources and the growing complexity of cloud adoption in the webinar.




Cindy then turns things over to Peter Scott to walk us through the many challenges security teams are facing. For example, traditional tools aren't purpose-built to keep pace with cloud environments, getting complete coverage of the assets in your environment requires multiple solutions from different vendors that all speak different languages, and no solution provides a unified view of an organization's risk. These challenges, on top of growing economic pressures, often force security teams to choose between continued investment in traditional infrastructure and applications or investing more in securing cloud environments. Peter then discusses the challenges security teams face from expanded roles, disjointed security stacks, and an expanding threat landscape. Some of these challenges are highlighted further in the webinar.




After spending some time discussing the challenges organizations and security teams are facing, Cindy and Peter dive deeper into the steps organizations can take to expand their existing VM programs to include cloud environments. We will cover these steps and more in the next blog post of this series. Until then, if you’re curious to learn more about Rapid7’s InsightCloudSec solution feel free to check out the demo here, or watch the replay of this webinar at any time!

What’s Up, Home? – Staring at the Video Stream

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-staring-at-the-video-stream/23882/

Can you make sure your video streams are up with Zabbix? Of course, you can! By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana Labs and do some weird experiments with them. Welcome to my weekly blog about the project.

You might have a surveillance camera at home to record suspicious activity in your yard while you are away. Most of the time the cameras work just fine, but they might require a hard reboot from time to time, for example after harsh weather or after failing to come back from a network outage. A networked camera responding to ping does not necessarily mean the camera is actually functional. I have seen our camera go black and refuse to connect to its stream even though it thinks it's working just fine.

Zabbix to the rescue!

Connecting to your camera

My post for this week is mostly to maybe give you a new approach for monitoring your cameras, not so much a functional solution as I’m still figuring out how to do this properly.

For example, I can connect to our camera via the RTSP protocol and pass some credentials with it, like so: rtsp://myusername:mypassword@mycamera.address:443/myAddress

To figure out a connection address for your camera model, iSpyConnect has a nice camera database.

Playing the stream

To test if the video stream works, VLC and mplayer are good options; for visually verifying the stream works, try something like

mplayer 'rtsp://myusername:mypassword@mycamera.address:443/myAddress'

or for those who like to use a GUI, in VLC, File –> Open Network –> enter your camera address.

For obvious reasons, I am not posting here an image from our camera. Anyway, trust me, this method should work if you have a compatible camera.

Let’s go next for the neat tricks part, which I’m still figuring out myself, too.

Making sure the stream works

To make sure the video stream is up and running, make your Zabbix server, Zabbix proxy, or a dedicated media server continuously stream your video feed. For example:

mplayer -vo null 'rtsp://myusername:mypassword@mycamera.address:443/myAddress'

The combination above would make mplayer play the stream with a null video driver; thus, the stream will be continuously played, but just with no visual video output generated. In other words, under perfect conditions, the mplayer process should be running on the server all the time. If anything goes wrong with the stream, mplayer quits itself, and the process goes away from the process list, too.
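
To keep that player running unattended on a headless box, one option (just a sketch; the URL is the same placeholder as above) is to start it in the background with nohup:

nohup mplayer -vo null -really-quiet 'rtsp://myusername:mypassword@mycamera.address:443/myAddress' >/dev/null 2>&1 &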

Using Zabbix to check the player status

Now that you have some server continuously playing the stream, it’s time to check the status with Zabbix.

From here, checking the stream status with Zabbix is simple, just

  • create a new item that checks whether the mplayer process is running, using the Zabbix agent item type with the proc.num[,mplayer] key, and
  • make Zabbix alert you if the number of mplayer processes is <1 (a trigger sketch follows below)
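
A minimal trigger expression for that item in Zabbix 6.0 syntax might look like the line below; the host name "mediaserver" is just a placeholder for whichever host runs the player:

last(/mediaserver/proc.num[,mplayer])<1
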
Camera screenshots to your Zabbix user interface

Both mplayer and VLC can be controlled remotely, so here's an idea I have not yet implemented but am testing out.

If a motion sensor, either an external unit or a built-in one, detects movement, make Zabbix send a command to the camera to capture a screenshot of the camera stream, or possibly a short video. Then have the script save the photo or video in a directory that Zabbix can access and display it with the URL widget type.

mplayer has a slave mode for receiving commands from external programs, which might work together with a FIFO pipe.

Real-time video stream in your Zabbix user interface

At least VLC can transcode RTSP to an HTTP stream in real time, so in theory, embedding the resulting stream into your Zabbix user interface should be doable with a short HTML file and the Zabbix URL widget type. I have not yet even started to try this one out, though.

So, that’s all for this week’s blog post. I’m still building this thing out, but if you have successfully done something similar, please let me know!

I have been working at Forcepoint since 2014 and am a true fan of functional testing. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Staring at the Video Stream appeared first on Zabbix Blog.

Securing the Internet of Things

Post Syndicated from Matt Silverlock original https://blog.cloudflare.com/rethinking-internet-of-things-security/

Securing the Internet of Things


It’s hard to imagine life without our smartphones. Whereas computers were mostly fixed and often shared, smartphones meant that every individual on the planet became a permanent, mobile node on the Internet — with some 6.5B smartphones on the planet today.

While that represents an explosion of devices on the Internet, it will be dwarfed by the next stage of the Internet’s evolution: connecting devices to give them intelligence. Already, Internet of Things (IoT) devices represent somewhere in the order of double the number of smartphones connected to the Internet today — and unlike smartphones, this number is expected to continue to grow tremendously, since they aren’t bound to the number of humans that can carry them.

But the exponential growth in devices has brought with it an explosion in risk. We’ve been defending against DDoS attacks from Internet of Things (IoT) driven botnets like Mirai and Meris for years now. They keep growing, because securing IoT devices still remains challenging, and manufacturers are often not incentivized to secure them. This has driven NIST (the U.S. National Institute of Standards and Technology) to actively define requirements to address the (lack of) IoT device security, and the EU isn’t far behind.

It’s also the type of problem that Cloudflare solves best.

Today, we’re excited to announce our Internet of Things platform: with the goal to provide a single pane-of-glass view over your IoT devices, provision connectivity for new devices, and critically, secure every device from the moment it powers on.

Not just lightbulbs

It’s common to immediately think of lightbulbs or simple motion sensors when you read “IoT”, but that’s because we often don’t consider many of the devices we interact with on a daily basis as an IoT device.

Think about:

  • Almost every payment terminal
  • Any modern car with an infotainment or GPS system
  • Millions of industrial devices that power — and are critical to — logistics services, industrial processes, and manufacturing businesses

You especially may not realize that nearly every one of these devices has a SIM card, and connects over a cellular network.

Cellular connectivity has become increasingly ubiquitous, and if the device can connect independently of Wi-Fi network configuration (and work out of the box), you’ve immediately avoided a whole class of operational support challenges. If you’ve just read our earlier announcement about the Zero Trust SIM, you’re probably already seeing where we’re headed.

Hundreds of thousands of IoT devices already securely connect to our network today using mutual TLS and our API Shield product. Major device manufacturers use Workers and our Developer Platform to offload authentication, compute and most importantly, reduce the compute needed on the device itself. Cloudflare Pub/Sub, our programmable, MQTT-based messaging service, is yet another building block.

But we realized there were still a few missing pieces: device management, analytics and anomaly detection. There are a lot of “IoT SIM” providers out there, but the clear majority are focused on shipping SIM cards at scale (great!) and less so on the security side (not so great) or the developer side (also not great). Customers have been telling us that they wanted a way to easily secure their IoT devices, just as they secure their employees with our Zero Trust platform.

Cloudflare’s IoT Platform will build in support for provisioning cellular connectivity at scale: we’ll support ordering, provisioning and managing cellular connectivity for your devices. Every packet that leaves each IoT device can be inspected, approved or rejected by policies you create before it reaches the Internet, your cloud infrastructure, or your other devices.

Emerging standards like IoT SAFE will also allow us to use the SIM card as a root-of-trust, storing device secrets (and API keys) securely on the device, whilst raising the bar to compromise.

This also doesn't mean we're leaving the world of mutual TLS behind: we understand that not every device makes sense to connect solely over a cellular network, be it due to per-device costs, lack of coverage, or the need to support an existing deployment that can't just be re-deployed.

Bringing Zero Trust security to IoT

Unlike humans, who need to be able to access a potentially unbounded number of destinations (websites), the endpoints that an IoT device needs to speak to are typically far more bounded. But in practice, there are often few controls in place (or available) to ensure that a device only speaks to your API backend, your storage bucket, and/or your telemetry endpoint.

Our Zero Trust platform, however, has a solution for this: Cloudflare Gateway. You can create DNS, network or HTTP policies, and allow or deny traffic based not only on the source or destination, but on richer identity- and location- based controls. It seemed obvious that we could bring these same capabilities to IoT devices, and allow developers to better restrict and control what endpoints their devices talk to (so they don’t become part of a botnet).


At the same time, we also identified ways to extend Gateway to be aware of IoT device specifics. For example, imagine you’ve provisioned 5,000 IoT devices, all connected over cellular directly into Cloudflare’s network. You can then choose to lock these devices to a specific geography if there’s no need for them to “travel”; ensure they can only speak to your API backend and/or metrics provider; and even ensure that if the SIM is lifted from the device it no longer functions by locking it to the IMEI (the serial of the modem).

Building these controls at the network layer raises the bar on IoT device security and reduces the risk that your fleet of devices becomes the tool of a bad actor.

Get the compute off the device

We’ve talked a lot about security, but what about compute and storage? A device can be extremely secure if it doesn’t have to do anything or communicate anywhere, but clearly that’s not practical.

Simultaneously, doing non-trivial amounts of compute “on-device” has a number of major challenges:

  • It requires a more powerful (and thus, more expensive) device. Moderately powerful (e.g. ARMv8-based) devices with a few gigabytes of RAM might be getting cheaper, but they’re always going to be more expensive than a lower-powered device, and that adds up quickly at IoT-scale.
  • You can’t guarantee (or expect) that your device fleet is homogenous: the devices you deployed three years ago can easily be several times slower than what you’re deploying today. Do you leave those devices behind?
  • The more business logic you have on the device, the greater the operational and deployment risk. Change management becomes critical, and the risk of “bricking” — rendering a device non-functional in a way that you can’t fix it remotely — is never zero. It becomes harder to iterate and add new features when you’re deploying to a device on the other side of the world.
  • Security continues to be a concern: if your device needs to talk to external APIs, you have to ensure you have explicitly scoped the credentials they use to avoid them being pulled from the device and used in a way you don’t expect.

We’ve heard other platforms talk about “edge compute”, but in practice they either mean “run the compute on the device” or “in a small handful of cloud regions” (introducing latency) — neither of which fully addresses the problems highlighted above.

Instead, by enabling secure access to Cloudflare Workers for compute, Analytics Engine for device telemetry, D1 as a SQL database, and Pub/Sub for massively scalable messaging — IoT developers can both keep the compute off the device, but still keep it close to the device thanks to our global network (275+ cities and counting).

On top of that, developers can use modern tooling like Wrangler to both iterate more rapidly and deploy software more safely, avoiding the risk of bricking or otherwise breaking part of your IoT fleet.

Where do I sign up?

You can register your interest in our IoT Platform today: we’ll be reaching out over the coming weeks to better understand the problems teams are facing and working to get our closed beta into the hands of customers in the coming months. We’re especially interested in teams who are in the throes of figuring out how to deploy a new set of IoT devices and/or expand an existing fleet, no matter the use-case.

In the meantime, you can start building on API Shield and Pub/Sub (MQTT) if you need to start securing IoT devices today.

What’s Up, Home? – How Zabbix Can Help You with Rising Electricity Bills

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-how-zabbix-can-help-you-with-rising-electricity-bills/23582/

Can you monitor your upcoming electricity bills with Zabbix? Of course, you can! By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana Labs and make some weird experiments with them. Welcome to my weekly blog about the project.

With the current world events, energy prices are soaring. But how much do I need to really pay next month for my electricity? Zabbix to the rescue!

(Yes, in Finland I can check that from my electricity company’s page, but where’s the fun in that?)

Fixed vs spot price

There are two kinds of electricity contracts you can subscribe to in Finland. With a fixed price, you can be sure your bill does not fluctuate that much from month to month, as you pay the same price per kilowatt-hour for every hour of the day. In this kind of deal, the electricity company adds some extra to each kilowatt-hour, so you will automatically pay some extra compared to the electricity market price, but at least you don't get so severely surprised by market price peaks.

Then there’s the spot price, where you pay only the electricity market price. This can and will vary a lot depending on the hour of the day, but at least in theory, this is the cheapest option in the long run. But, if the market price goes WAY up, like it tends to do in the winter, and has now been peaking due to world events, this can add to your bill.

Nordpool, please respond

There’s Nord Pool (“Nord Pool runs the leading power market in Europe, offering day-ahead and intraday markets to our customers”), and there’s a Python library for accessing Nord Pool electricity prices. With it, I could get hour-by-hour prices, but for this experiment, let’s stick with the average kWh price. The example script on the GitHub page shows all kinds of data, and for fun let’s use Zabbix item preprocessing to parse the average price from its output.

I now have the below script running as a cron task every night, so my results are updated once every 24 hours.

So, Zabbix then reads the file contents, like in so many of my previous blog posts.

Next, let's add some preprocessing. The regular expression part gets the Average value from the script output, and the custom multiplier changes the value from euros per megawatt-hour to euros per kilowatt-hour, so it matches the familiar figure from my electricity bills.
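
As a rough sketch of how these pieces fit together (the script name, file path, item key, and regular expression below are all assumptions; adapt them to whatever your own script actually outputs):

  • cron entry: 5 0 * * * /usr/bin/python3 /home/zabbix/nordpool_average.py > /var/lib/zabbix/nordpool.txt
  • Zabbix item key: vfs.file.contents[/var/lib/zabbix/nordpool.txt]
  • Preprocessing: a regular expression such as Average:\s*([0-9.]+) with output \1, followed by a custom multiplier of 0.001 to convert EUR/MWh into EUR/kWh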

And… it’s working! As I know our average consumption, let’s add a new Grafana dashboard.

Four seasons

During summer, we don’t actually use very much electricity compared to our harsh winter; for example, keeping our garage “warm” (about +10C) during winter contributes to our electricity bill quite a lot.
Here's a dashboard showing some guesstimations of how expensive the different seasons will be for us. Or hopefully cheap, if the long-overdue new Olkiluoto 3 nuclear plant could finally operate at its full capacity here in Finland.

The guesstimate above is missing some taxes and electricity transfer prices, so the reality will be a bit more expensive than this. Maybe I should also add some triggers to Zabbix to alert me about any really crazy price changes.

Anyway, now I can start gathering nuts for the cold winter as it seems that it will be an expensive one.

I have been working at Forcepoint since 2014 and I’m happy that my laptop does not consume too much electricity. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – How Zabbix Can Help You with Rising Electricity Bills appeared first on Zabbix Blog.

What’s Up, Home? – Automatic Temperature Control

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-automatic-temperature-control/23401/

Can you automatically control the temperature of your home in a time- and room-based manner using Zabbix? Of course, you can!

By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana and make some weird experiments with them. Welcome to my weekly blog about this project.

Earlier in this blog series, I made Zabbix read the status of our air conditioner and made it possible to use Zabbix as a manual remote control for the device. But let's take it a step further and make Zabbix control the AC based on the time of day and whether I am at home or not.

Forget the sweaty nights

Usually in Finland during the summer, the nights are not so hot that an AC would be needed. However, that can happen during any rare heat wave we get. It’s annoying to wake up in the middle of the night all sweaty and turn on the AC when it’s already too late.

Of course, I could just leave the AC on when I go to bed, but let’s make Zabbix do some good for our electricity bill and for the environment by not using the AC when it’s not needed.

Detecting if I am at home

Like so many times before in this blog series, the Cozify smart home hub is the true star of this story. It detects whether anyone is at home based on whether a specific phone or, for example, a smart key fob is present and reachable within Cozify's range. For this case, I will be using my smart key fob in Zabbix, too. This is how it looks in Cozify.

… and here’s the key fob reachability status in Zabbix.

Surprisingly enough, it shows 1 (or "True") as my status, since I am typing this blog entry at home and my keys are at home.

A deeper dive into my key fob Zabbix item

To make this all work, I have a set of Python scripts gathering data from Cozify via an unofficial Cozify API Python library. One of the scripts gets the reachability status for all the items, and here’s the configuration for my key fob Zabbix item.

… and some preprocessing …

Let’s add some triggers

Now that we have the key fob data, let’s create some triggers to combine the data about my presence with the temperature information.

I created the triggers by using Zabbix expression constructor:

.. and when I was done, this is how it looked.

I made a similar trigger for our living room, too.
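
Written out as a Zabbix 6.0 trigger expression, the bedroom rule could look roughly like the sketch below. The host name, item keys, and threshold are placeholders from my setup, so treat this as an illustration rather than a copy-paste recipe:

min(/Cozify/cozify.temp.bedroom,#10)>23 and last(/Cozify/cozify.keyfob.reachable)=1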

Next, some scripts

Next, I added some scripts under Zabbix Administration → Scripts and used them as action operations.

This one turns on the AC:

… and this one turns it off.

Lights, camera, action!

We have our triggers and scripts, great! Next, it’s time to add some actions.

  • During the daytime, Zabbix will be interested in the living room temperature and will turn on AC if the temperature goes over 23C for ten consecutive times
  • During the nighttime, Zabbix will be reading the temperature of our bedroom and turn on AC if the temperature goes over 23C for ten consecutive times

We will see how well my attempt at this will work. Here’s what the operations look like — if it’s too hot, turn on the AC, and when the temperature comes down enough, turn off the AC.

As I built this thing while I was writing this blog entry, it’s possible I would need to fine-tune the thresholds somewhat to not make my automatic AC control too aggressive. Anyway, this now works in theory.

Oh, by the way, Cozify could also implement similar rules, but since it does not directly support our air conditioner (it would require a separate Air Patrol device for that), this is again a great example of how I can utilize Cozify and, with Zabbix, extend my home's IoT functionality even further for free.

I have been working at Forcepoint since 2014 and always try to find out new ways to automate things. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Automatic Temperature Control appeared first on Zabbix Blog.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/09/08/baxter-sigma-spectrum-infusion-pumps-multiple-vulnerabilities-fixed/

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Rapid7, Inc. (Rapid7) discovered vulnerabilities in two TCP/IP-enabled medical devices produced by Baxter Healthcare. The affected products are:

  • SIGMA Spectrum Infusion Pump (Firmware Version 8.00.01)
  • SIGMA Wi-Fi Battery (Firmware Versions 16, 17, 20 D29)

Rapid7 initially reported these issues to Baxter on April 20, 2022. Since then, members of our research team have worked alongside the vendor to discuss the impact, resolution, and a coordinated response for these vulnerabilities.

Product description

Baxter’s SIGMA Spectrum product is a commonly used brand of infusion pumps, which are typically used by hospitals to deliver medication and nutrition directly into a patient’s circulatory system. These TCP/IP-enabled devices deliver data to healthcare providers to enable more effective, coordinated care.

Credit

The vulnerabilities in two TCP/IP-enabled medical devices were discovered by Deral Heiland, Principal IoT Researcher at Rapid7. They are being disclosed in accordance with Rapid7’s vulnerability disclosure policy after coordination with the vendor.

Vendor statement

“In support of our mission to save and sustain lives, Baxter takes product security seriously. We are committed to working with the security researcher community to verify and respond to legitimate vulnerabilities and ask researchers to participate in our responsible reporting process. Software updates to disable Telnet and FTP (CVE-2022-26392) are in process. Software updates to address the format string attack (CVE-2022-26393) are addressed in WBM version 20D30 and all other WBM versions. Authentication is already available in Spectrum IQ (CVE-2022-26394). Instructions to erase all data and settings from WBMs and pumps before decommissioning and transferring to other facilities (CVE-2022-26390) are in process for incorporation into the Spectrum Operator’s Manual and are available in the Baxter Security Bulletin.”

Exploitation and remediation

This section details the potential for exploitation and our remediation guidance for the issues discovered and reported by Rapid7, so that defenders of this technology can gauge the impact of, and mitigations around, these issues appropriately.

Battery units store Wi-Fi credentials (CVE-2022-26390)

Rapid7 researchers tested Spectrum battery units for vulnerabilities. We found all units that were tested store Wi-Fi credential data in non-volatile memory on the device.

When a Wi-Fi battery unit is connected to the primary infusion pump and the infusion pump is powered up, the pump will transfer the Wi-Fi credential to the battery unit.

Exploitation

An attacker with physical access to an infusion pump could install a Wi-Fi battery unit (easily purchased on eBay), and then quickly power-cycle the infusion pump and remove the Wi-Fi battery – allowing them to walk away with critical Wi-Fi data once a unit has been disassembled and reverse-engineered.

Also, since these battery units store Wi-Fi credentials in non-volatile memory, there is a risk that when the devices are de-acquisitioned and no efforts are made to overwrite the stored data, anyone acquiring these devices on the secondary market could gain access to critical Wi-Fi credentials of the organization that de-acquisitioned the devices.

Remediation

To mitigate this vulnerability, organizations should restrict physical access by any unauthorized personnel to the infusion pumps or associated Wi-Fi battery units.

In addition, before de-acquisitioning the battery units, batteries should be plugged into a unit with invalid or blank Wi-Fi credentials configured and the unit powered up. This will overwrite the Wi-Fi credentials stored in the non-volatile memory of the batteries. Wi-Fi must be enabled on the infusion pump unit for this overwrite to work properly.

Format string vulnerabilities

“Hostmessage” (CVE-2022-26392)

When running a telnet session on the Baxter Sigma Wi-Fi Battery Firmware Version 16, the command "hostmessage" is vulnerable to a format string attack.

Exploitation

An attacker could trigger this format string vulnerability by entering the following command during a telnet session:

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

To view the output of this format string vulnerability, `settrace state=on` must be enabled in the telnet session. `set trace` does not need to be enabled for the format string vulnerability to be triggered, but it does need to be enabled if the output of the vulnerability is to be viewed.

Once `set trace` is enabled and showing output within the telnet session screen, the output of the vulnerability can be viewed, as shown below, where each `%x` returned data from the device’s process stack.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

SSID (CVE-2022-26393)

Rapid7 also found another format string vulnerability on Wi-Fi battery software version 20 D29. This vulnerability is triggered within SSID processing by the `get_wifi_location (20)` command being sent via XML to the Wi-Fi battery at TCP port 51243 or UDP port 51243.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Exploitation

This format string vulnerability can be triggered by first setting up a Wi-Fi access point containing format string specifiers in the SSID. Next, an attacker could send a `get_wifi_location (20)` command via TCP port 51243 or UDP port 51243 to the infusion pump. This causes the device to process the SSID name of the nearby access point and trigger the exploit. The results of the format string injection can be viewed in the trace log output within a telnet session, as shown below.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

The SSID of `AAAA%x%x%x%x%x%x%x%x%x%x%x%x%x%x` allows for control of 4 bytes on the stack, as shown above, using the `%x` to walk the stack until it reaches 41414141. By changing the leading `AAAA` in the SSID, a malicious actor could potentially use the format string injection to read and write arbitrary memory. At a minimum, using format strings of `%s` and `%n` could allow for a denial of service (DoS) by triggering an illegal memory read (`%s`) and/or illegal memory write (`%n`).

Note that in order to trigger this DoS effect, the attacker would need to be within normal radio range and either be on the device's network or wait for an authorized `get_wifi_location` command (the latter would itself be an unusual, non-default event).

Remediation

To prevent exploitation, organizations should restrict access to the network segments containing the infusion pumps. They should also monitor network traffic for any unauthorized host communicating over TCP and UDP port 51243 to infusion pumps. In addition, be sure to monitor Wi-Fi space for rogue access points containing format string specifiers within the SSID name.
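
As a concrete starting point for that traffic monitoring, a simple packet capture filter such as the one below will surface any host talking to the pumps on that port. The interface name is an assumption, and the same logic can be expressed as an IDS rule instead:

tcpdump -nn -i eth0 '(tcp or udp) and port 51243'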

Unauthenticated network reconfiguration via TCP/UDP (CVE-2022-26394)

All Wi-Fi battery units tested (versions 16, 17, and 20 D29) allowed for remote unauthenticated changing of the SIGMA GW IP address. The SIGMA GW setting is used for configuring the back-end communication services for the device's operation.

Exploitation

An attacker could accomplish a remote redirect of SIGMA GW by sending an XML command 15 to TCP or UDP port 51243. During testing, only the SIGMA GW IP was found to be remotely changeable using this command. An example of this command and associated structure is shown below:

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

This could be used by a malicious actor to man-in-the-middle (MitM) all the communication initiated by the infusion pump, which could lead to information leakage and/or data manipulation.

Remediation

Organizations using SIGMA Spectrum products should restrict access to the network segments containing the infusion pumps. They should also monitor network traffic for any unauthorized host communicating over TCP and UDP port 51243 to the infusion pumps.

UART configuration access to Wi-Fi configuration data (additional finding)

The SIGMA Spectrum infusion pump unit transmits data unencrypted to the Wi-Fi battery unit via universal asynchronous receiver-transmitter (UART). During the power-up cycle of the infusion pump, the first block of data contains the Wi-Fi configuration data. This communication contains the SSID and 64-Character hex PSK.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Exploitation

A malicious actor with physical access to an infusion pump can place a communication shim between the units (i.e., the pump and the Wi-Fi battery) and capture this data during the power-up cycle of the unit.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Remediation

To help prevent exploitation, organizations should restrict physical access by unauthorized persons to the infusion pumps and associated Wi-Fi battery units.

Note that this is merely an additional finding based on physical, hands-on access to the device. While Baxter has addressed this finding through better decommissioning advice to end users, this particular issue does not rank for its own CVE identifier, as local encryption is beyond the scope of the hardware design of the device.

Disclosure timeline

Baxter is an exemplary medical technology company with an obvious commitment to patient and hospital safety. While medtech vulnerabilities can be tricky and expensive to work through, we’re quite pleased with the responsiveness, transparency, and genuine interest shown by Baxter’s product security teams.

  • April, 2022: Issues discovered by Deral Heiland of Rapid7
  • Wed, April 20, 2022: Issues reported to Baxter product security
  • Wed, May 11, 2022: Update requested from Baxter
  • Wed, Jun 1, 2022: Teleconference with Baxter and Rapid7 presenting findings
  • Jun-Jul 2022: Several follow up conversations and updates between Baxter and Rapid7
  • Tue, Aug 2, 2022: Coordination tracking over VINCE and more teleconferencing involving Baxter, Rapid7, CERT/CC, and ICS-CERT (VU#142423)
  • Wed, Aug 31, 2022: Final review of findings and mitigations
  • Thu Sep 8, 2022: Baxter advisory published
  • Thu, Sep 8, 2022: Public disclosure of these issues
  • Thu, Sep 8, 2022: ICS-CERT advisory published


What’s Up, Home? – The Relaxing Breeze

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-the-relaxing-breeze/22031/

Can you monitor a home air conditioner with Zabbix? Of course, you can! By day, I am a monitoring tech lead in a global cyber security company. By night, I monitor my home. Welcome to my weekly blog about how I monitor my home with Zabbix & Grafana and do some weird experiments.

At the moment I was writing this blog post, the summer — and thus maybe the heat wave season — was only approaching. For the past two summers, our home has been a hot place to be. Enough is enough, so that day we got an air conditioner. It’s not a very high-end model, but these things currently come with built-in Wi-Fi.

Wouldn’t it be cool to monitor that with Zabbix? Yes.

Wi-Fi, do you read me?

Getting the air conditioner connected to our Wi-Fi was as easy as the manual promised: press the Health button eight times in a row and wait until the air conditioner says “be-be-be-be-be-beep”. Sure enough, that happened, and moments later the AC Freedom app I installed on my iPhone started to show this.

For a normal person, this would be more than enough. For me, this was only the beginning and the next step would be adding the thing to Zabbix.

Encountering headwind

Checking the first things first — no, Zabbix does not seem to support this AC out of the box. No worries, that is not the end of the world, it just slightly slows things down, and also makes things a bit more interesting.

Now that the AC was connected to our Wi-Fi, I first went to check out some data about the AC from the Wi-Fi admin interface. It revealed that the device contains network hardware by Broadlink. Ah-ha! Search engine, here I come!

Moments later, I found out Broadlink Air Conditioners to Mqtt.

Okay, MQTT it is. That’s a lightweight protocol designed for IoT device communication, and I had absolutely no clue how that worked, as this was the first time I got to use it. It would blow if this step proved to be too cumbersome.

Luckily, thanks to open source and 2022, getting it to run was not that hard.

It’s nearly summer, welcome mosquitos

The aforementioned Broadlink AC to MQTT quickly raised my confidence, as it immediately found my new device. Yes, I can do this!

… that’s nice, but how to use this any further? I could not see any MQTT messages anywhere.

Soon enough I realized I needed to install an MQTT message broker to catch the messages, and I found Eclipse Mosquitto.

An apt install mosquitto mosquitto-clients and some config-file guessing later, my jaw dropped as I saw this:
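
If you want to retrace this, the mosquitto-clients package includes mosquitto_sub, which you can point at the broker to watch the raw messages. The broker host and the catch-all topic below are assumptions; adjust them to your setup:

mosquitto_sub -h localhost -t '#' -v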

Wow, that’s a wind-wind situation. It returns sane values! My next step was then to find out if I can somehow access those URL-like paths with Zabbix.

With Zabbix, MQTT is just a breeze

I remembered from some ancient Zabbix Summit that Zabbix 5.x gained Modbus/MQTT support. My Raspberry Pi 4 is running Zabbix 6.0.4, so certainly that part should be covered.

In the end, getting MQTT to run with Zabbix was almost too easy. Zabbix agent 2 has native MQTT support with its mqtt.get active check, so I tried to add an item like this:
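
As a rough sketch of what such an mqtt.get key looks like (the broker URL and topic here are placeholders, since the actual topics depend on how the Broadlink-to-MQTT bridge is configured):

mqtt.get[tcp://localhost:1883,"aircon/livingroom/ambient_temp"]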

And, as this is Zabbix, of course, it works:

Yay! From now on, my home Zabbix can alert me about the AC as well and generate some fancy graphs.

What’s next?

As I’ve got the AC unit recently and this is just the beginning, I still have more things to add later.

  • Add some “Yikes! It’s too hot!” triggers
  • Create a Grafana dashboard
  • Try out if I can adjust the air conditioner settings using Zabbix

Anyway, in the end, this certainly was easier than I expected.

I have been working at Forcepoint since 2014 and monitoring has never been cooler. — Janne Pikkarainen

The post What’s Up, Home? – The Relaxing Breeze appeared first on Zabbix Blog.

Keep Things Cool with Zabbix

Post Syndicated from Laura Schilder original https://blog.zabbix.com/keep-things-cool-with-zabbix/21534/

Do your friends, colleagues or maybe even your significant other have a nasty habit of leaving the fridge half-open causing you a frustrating evening and potentially even ruining your cherished batch of pistachio-flavored ice cream?

With the right thermometer and a little Zabbix knowledge, you can configure Zabbix to keep a watchful eye on the temperature of your fridge and alert you whenever things in your fridge are about to stop being cool.

IoT

The Internet of Things represents objects that are capable of autonomously transferring data over a network. These objects can be something like a temperature sensor, a smart fridge, or an electric scooter. Even a garbage can or a vending machine equipped with the proper sensors can be an IoT object.

Well, let’s go back to the thermometer that I was talking about. That thermometer is also an IoT device and it uses a specific protocol; for this specific one, we will use an aggregator: The Things Network (TTN).

But why do we need an aggregator? If you plan on monitoring a large number of sensors you will have to establish connections to each of these sensors individually. An aggregator can be used as the central point of communication, instead of directly connecting to each of the sensors.

In this blog post we will be using the following components:

  • Mini hub TBMH100 (the gateway)
  • Dragino LHT65 (the thermometer)

The Things Network

Now, not just any thermometer can connect to the internet. But the thermometer I used is one from The Things Network. The Things Network is open source, just like Zabbix, and works with LoRaWAN. If you do not know what LoRaWAN is just keep reading and I will explain what it is.

LoRaWAN stands for Long Range Wide Area Network, and it is a protocol made for long-distance communication with low power consumption. Nodes use this protocol to send information via radio; in Europe the frequency used for transferring data is 868 MHz. This is how the thermometer sends the temperature to The Things Network.

Before we are able to see the values being sent, we have to configure a gateway and add it to The Things Network. After adding a gateway to TTN, the only thing remaining is to add the thermometers; all of this is done in The Things Network console. We will then set up an MQTT connection via the console and configure Zabbix to collect, process, and visualize the IoT data, as well as alert us whenever the temperature in our fridge gets too hot or too cold.

What is MQTT? In short, MQTT is a lightweight publish/subscribe network protocol designed for remote locations and for devices with limited resources and bandwidth. It runs over a transport protocol, typically TCP/IP, and provides ordered, lossless, bi-directional connections. It is also an OASIS standard and an ISO recommendation.
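Communication revolves around topics: a publisher sends a message to a topic, and every subscriber to that topic receives it. The Things Network, for example, publishes every uplink from a device on a topic shaped roughly like this:

v3/<application-id>@ttn/devices/<device-id>/up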

TTN configuration

Let’s start by adding a gateway to The Things Network. To do that, you will have to create an account on The Things Stack and own a gateway. Before we get started, check what kind of gateway you have; we will be using a gateway that is meant to be used indoors. With that done, let’s add it to TTN.

Let’s start at the beginning: open the TTN webpage and log in. Easy as that. When you see this screen, click on the Go to gateways button.

After that, you click on the white Claim gateway button. Do not confuse it with the Add gateway button – we need to press Claim gateway.

All the fields you see on the next page will have to be filled in:

As I mentioned before, the frequency should be around 868 MHz; for this example, I will just use the recommended one. After that, click on the Claim gateway button, and the gateway should work. If you do not know what to fill in, you can find all the information you need on the back of your gateway.

This is what it will look like when you have successfully claimed the gateway:

 

Since we now have a gateway, we can add the thermometer to The Things Network. To do this, we have to go to the Application tab in the console. When we first click on it, it will be empty; we have to create our own application before we can add the thermometer, which we do by pressing the Add application button. Once clicked, you should see the following:

 

After you have created your application, click on it and you will see a screen like this:

There you will have to go to End devices and click on Add devices. It will bring you to a screen like this:

Now, you will see just one drop-down menu, but once you start filling it in, additional menus will show up. In our case, we are using a thermometer from Dragino. After filling in the model and region, the screen should look something like this:

 

For step two, grab the box in which the sensor was shipped; inside it is a sticker with all the information that you need. When you have filled in all the fields, click on the Add device button. After adding the device, it will look something like this:

 

Now, that’s all for adding the thermometers. If everything works, all that remains is to set up an MQTT connection between The Things Network and Zabbix. On the TTN side, we have to go to Integrations and then MQTT. Everything on that page can simply be copied: generate an API key, copy it, and save it, as we will need it later.
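If you want to verify the credentials before involving Zabbix at all, the mosquitto_sub client (from the mosquitto-clients package) can subscribe to the same broker over TLS. This is purely an optional sanity check; substitute your own username and the API key you just generated:

mosquitto_sub -h eu1.cloud.thethings.network -p 8883 --capath /etc/ssl/certs -u 'thermometers@ttn' -P '<your API key>' -t '#' -v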

Zabbix

After all these steps on The Things Network side, we finally move to Zabbix. The first thing we will do in Zabbix is make sure that we can get the information from The Things Network. This will be done via MQTT, and for that we will need Zabbix agent 2. There are of course more steps than just that, so let me explain.

Zabbix MQTT

Let’s start by installing Zabbix agent 2 (if you already have it, you can skip this step). For that, we will use this command:

dnf install zabbix-agent2

Once the agent is installed, we will have to modify the config file:

vim /etc/zabbix/zabbix_agent2.conf

I am using vim, but feel free to use another text editor. Once the configuration file is open, we will change the Hostname parameter to this:

Hostname=TTN

Don’t forget to start (or restart, if agent 2 was already installed) the agent 2 service.

systemctl start zabbix-agent2
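Since mqtt.get is an active check, the agent also needs to know which Zabbix server to report to. The relevant part of my zabbix_agent2.conf ends up looking roughly like this (the server address is an example value, not something to copy as-is):

ServerActive=192.168.1.10
Hostname=TTN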

Now that we have that out of the way we can start by making a new host. It will be a regular Zabbix host. This is what mine looks like:

Note that the host name here matches the Hostname parameter which we edited in the previous step. Do you recall when I said that you have to copy all the MQTT information from The Things Network? Well, we will use it here. We have to create an item that uses the Zabbix agent (active) item type to get the information. For the key, we can select the mqtt.get key from the Zabbix key helper, but it will be missing some of the required parameters. The key will have to look something like this:

mqtt.get[broker,topic,username,password]

In the end, the item itself will look something like this:

In our case, the key looks like this:

mqtt.get[tls://eu1.cloud.thethings.network:8883, #, thermometers@ttn, NNSXS.EMK3T5FLBB2YPLYWXLP7BYOG7JHFSBKEUG23BMY.IJSZ4AC475CU5JJOLRJRYLDU6MXEODWCUYIOLZSAWSXP4L32473Q].

To check if it works, navigate to Monitoring – Latest data, find our host, and you should see the collected data. It should look something like this:

{"v3/thermometers@ttn/devices/eui-a84041a4e10000/up":"{\"end_device_ids\":{\"device_id\":\"eui-a84041a4e1000000\",\"application_ids\":{\"application_id\":\"thermometers\"},\"dev_eui\":\"A84041A4E10000\",\"join_eui\":\"A000000000000100\",\"dev_addr\":\"260B4F08\"},\"correlation_ids\":[\"as:up:01G7CJFS1180WT7M2GHQRWVFKA\",\"gs:conn:01G7A3RFY7CT62SGBH2BGJ7T31\",\"gs:up:host:01G7A3RG2EWWCHEW9HVBQ6KA5A\",\"gs:uplink:01G7CJFRTGY0NV6R4Y8AV9XKGG\",\"ns:uplink:01G7CJFRTHSDCK3DVR7EDGJY5V\",\"rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01G7CJFRTHP5JMRVSNP8ZZR1X1\",\"rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01G7CJFS10A5RAD5SE864Q99R8\"],\"received_at\":\"2022-07-07T14:54:39.137181192Z\",\"uplink_message\":{\"session_key_id\":\"AYGu5fFGW+vxth9cFIw2+g==\",\"f_port\":2,\"f_cnt\":601,\"frm_payload\":\"y/kH5QIoAX//f/8=\",\"decoded_payload\":{\"BatV\":3.065,\"Bat_status\":3,\"Ext_sensor\":\"Temperature Sensor\",\"Hum_SHT\":55.2,\"TempC_DS\":327.67,\"TempC_SHT\":20.21},\"rx_metadata\":[{\"gateway_ids\":{\"gateway_id\":\"gateway7\",\"eui\":\"58A0CBFFFE803D17\"},\"time\":\"2022-07-07T14:54:38.903268098Z\",\"timestamp\":945990219,\"rssi\":-61,\"channel_rssi\":-61,\"snr\":7.5,\"uplink_token\":\"ChYKFAoIZ2F0ZXdheTcSCFigy//+gD0XEMvUisMDGgwIrueblgYQ/fHaugMg+Omki8TiEioMCK7nm5YGEIKO264D\"}],\"settings\":{\"data_rate\":{\"lora\":{\"bandwidth\":125000,\"spreading_factor\":7}},\"coding_rate\":\"4/5\",\"frequency\":\"868500000\",\"timestamp\":945990219,\"time\":\"2022-07-07T14:54:38.903268098Z\"},\"received_at\":\"2022-07-07T14:54:38.929160377Z\",\"consumed_airtime\":\"0.061696s\",\"version_ids\":{\"brand_id\":\"dragino\",\"model_id\":\"lht65\",\"hardware_version\":\"_unknown_hw_version_\",\"firmware_version\":\"1.8\",\"band_id\":\"EU_863_870\"},\"network_ids\":{\"net_id\":\"000013\",\"tenant_id\":\"ttn\",\"cluster_id\":\"eu1\",\"cluster_address\":\"eu1.cloud.thethings.network\"}}}"}

Zabbix LLD with Dependent items

Now, after seeing all that raw data, you want to be able to read it in a usable form. For that, we will use Low-Level Discovery (LLD), which will also take care of adding the thermometers to Zabbix.

To achieve our goal, we start by navigating to the Configuration – Hosts page and selecting the host that we created earlier. Once there, select Discovery rules at the top. Now we are going to create a new low-level discovery rule of type Dependent item, with the master item being the mqtt.get item we made in the previous step. Once you have done that, it should look like this:

But we are not finished yet. We also need to add a preprocessing step, and for that we need to provide a JavaScript script. The data that is sent is not ‘native’ Zabbix LLD data, so we need to make it suitable for Zabbix.

We will use a script like this to format our data:

var lld = [];
var regexp = /@ttn\/devices\/([\w-]+)/g;
var lines = value.split("\n");
var lines_num = lines.length;
for (var i = 0; i < lines_num; i++)
{
  var match = regexp.exec(value);
  var row = {};
  row["{#SENSOR}"] = match[1];
  lld.push(row);
}
return JSON.stringify(lld);

In the script above, we are transforming the data into a format that Zabbix can use. Let’s break it down line by line:
Line 1: Declare a new array named lld
Line 2: Declare a regex that matches the device ID in the topic path
Line 3: Split the received value into an array of substrings; splitting happens on "\n", which represents a newline
Line 4: Count the number of lines
Line 5: A for loop to populate the array declared in line 1
Line 7: Find the next regex match in the value (the g flag makes each call return the next match)
Line 8: Declare an object named row
Line 9: Add the {#SENSOR} macro with the first capture group of the match
Line 10: Push the row object into the lld array
Line 12: Convert the lld array into a JSON string and return it

After Line 12, you will get something like this returned:

[{"{#SENSOR}":"eui-a84041a4exxxxxxx"},{"{#SENSOR}":"eui-a84041a4eyyyyyyy"}]

Now the data is formatted into the Zabbix LLD format, ready to be parsed.

Once the preprocessing step is added, the rule is complete. This means that Zabbix will start discovering the thermometers, but no items are created just by adding the LLD rule; we also need to add item prototypes.

I will use the temperature from the internal sensor as an example here. Let’s start at the beginning and go to Item prototypes, where we will add a new item prototype. In both the name and key fields, we will use the low-level discovery macro {#SENSOR}. The key itself is arbitrary – we will put the LLD macro in as a parameter to make each item created from the prototype unique. For units, we will use C, since the value is a temperature in Celsius. When finished, it should look like this:
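In text form, the prototype boils down to something like this (the key name itself is my own arbitrary choice):

Name:        Temperature SHT {#SENSOR}
Type:        Dependent item
Master item: the mqtt.get item created earlier
Key:         temp.sht[{#SENSOR}]
Units:       C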

Now, if you look closely at the screenshot, I also have a tag and a preprocessing step. You can see the tag configuration in the image below. The tag will be used for filtering and for providing additional information – the sensor ID.

As for the item prototype preprocessing step – it is a little bit harder. Do you remember the data that we got from the first item we made? If you take that and apply a regex to it, you can build the preprocessing step. What I did was go to https://regex101.com, paste the complete string we received from the master item, and start matching the temperatures.

Once the regex is done, go to the Preprocessing tab in Zabbix. Add one step and choose Regular expression as the step name. The parameters will be (in the case of this thermometer):

TempC_SHT\\":(\d+.\d+)}

and in the output field we will use the first capture group – \1. It should look like this:
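To make it concrete, here is what that step does to the sample value collected earlier; the capture group is what ends up stored in the item:

Fragment of the master item value:  ...\"TempC_DS\":327.67,\"TempC_SHT\":20.21}...
Value stored after the regex step:  20.21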

If we take a careful look at the data provided by the Master item, there is a “decoded payload”:

decoded_payload\":{\"BatV\":3.056,\"Bat_status\":3,\"Ext_sensor\":\"Temperature Sensor\",\"Hum_SHT\":50.8,\"TempC_DS\":21.75,\"TempC_SHT\":21.95}

From that payload, we are cherry-picking the TempC_SHT value. There are more values to collect here, like battery status, voltage, humidity, etc. This highly depends on the sensors used, of course.
In the Low-Level Discovery rule, we can keep on adding more item prototypes to parse all of these metrics and let the LLD automatically create the items from the prototypes.
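For example, humidity and battery voltage prototypes could use patterns along these lines; these are untested sketches based on the decoded payload above, again with \1 as the output:

Hum_SHT\\":(\d+\.\d+)
BatV\\":(\d+\.\d+)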

After adding the low-level discovery rule and the preprocessing step you will see something like this:

Now, as you can see, multiple items have been created from our prototypes. If you look closely, you will also notice that I get two of everything – this is because the low-level discovery rule found two thermometers.

Conclusion

Now that everything has been configured, we can finally track the temperature of our IoT thermometers. The next time somebody leaves your fridge open, you can find out in time. Cool, right? Well, that’s just one of many IoT examples that we can start to monitor – the potential for discovering and monitoring IoT devices is practically unlimited. If you wish to check out the template used for this example, feel free to visit our GitHub page.

The post Keep Things Cool with Zabbix appeared first on Zabbix Blog.