Tag Archives: IOT

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/11/08/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-4/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. In our previous posts, we covered how to achieve access to flash memory, how to extract file system data from the device, and how to modify the data we’ve extracted. In this post, we’ll cover how to gain root access over the device’s secure shell protocol (SSH).

Gaining root access over SSH

Before we move on to establishing an SSH connection as root, you may need to set the IP address on your local host to allow access to the cable modem at its default IP address of 192.168.100.1. In our example, we set the local IP address to 192.168.100.100 to allow this connection.

To set the local IP address on your host, the first thing to do is identify the local ethernet interface. You can do this from the Linux CLI terminal by running the ifconfig command:

  • ifconfig
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4
Figure 10: IFCONFIG showing Local Ethernet Interfaces

In our example, the ethernet interface is enp0s25, as shown above. Using that interface name (enp0s25), we can set the local IP address to 192.168.100.100 using the following command

  • ifconfig enp0s25 192.168.100.100
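If your Linux distribution ships without ifconfig, the equivalent iproute2 command (an alternative, not part of the original exercise) would be:

  • ip addr add 192.168.100.100/24 dev enp0s25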

To validate that you’ve set the correct IP address, you can rerun the ifconfig command and examine the results to confirm:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4
Figure 11: Ethernet Interface Set To 192.168.100.100

It’s also possible to connect your host system directly to the cable modem’s ethernet port and have your host interface set up for DHCP – the cable modem should assign an IP address to your host device.

Once you have a valid IP address assigned and/or configured on your host system, power up the cable modem and see if your changes were made correctly and if you now have root access. Again, ensure the SD Card reader is disconnected before plugging the 12V power supply into the cable modem.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4

Once you’ve confirmed that the SD Card reader is disconnected, power up the cable modem and wait for the boot-up sequence to complete. Boot-up is complete when only the top LED is lit and the second LED is flashing:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4

From the CLI terminal on your host, you can run the nmap command to show the open ports on the cable modem. This will also show if your changes to the cable modem firmware were made correctly.

  • nmap -sS 192.168.100.1 -p 22,80,443,1337
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4
Figure 12: NMAP Scan Results

At a minimum, you should see TCP port 1337 as open as shown above in Figure 12. If not, then most likely an error was made either when copying the dropbear_rsa_key file or making changes to the inittab file.

If TCP port 1337 is open, the next step is to attempt to log in to the cable modem as root with the following SSH command. When prompted for the password, use “arris” in all lower case.
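A command along these lines should work (a sketch based on the modem’s default address, the root user, and the dropbear service we bound to TCP port 1337 earlier; the -T switch is explained in the note below):

  • ssh -T -p 1337 root@192.168.100.1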

Note: The kernel on this device appears to enforce an environment restriction that prevents console access, and we were only able to get around that restriction with the -T switch. The -T switch tells SSH to disable pseudo-terminal allocation; without it, no functioning console can be established. Also, when connected, you will not receive a typical command line prompt, but the device should still accept and execute commands properly.

If you receive a “no matching key exchange method found” error (Figure 13), you will need to either specify the diffie-hellman-group1-sha1 key exchange algorithm in the SSH command or create a config file to do this automatically.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4
Figure 13: Key Exchange Error

Creating a config file is the easier method. We did this prior to the DEF CON IoT Village so that participants in the exercise would not need to. Since others may be using this write-up to recreate the exercise, I decided to include the steps here to prevent any unnecessary confusion.

To create a config file that supports SSH login to this cable modem without error, you will need to create a “.ssh” folder and a “config” file within the home directory of the user you are logging in as. In our example, we were logged in as root. To get to the home folder, the simplest method is to enter the “cd” command without any arguments. This will take you to the home directory of the logged-in user.

  • cd

Once in your home directory, try to change directory “cd” to the “.ssh” folder to see if one exists:

  • cd .ssh

If it does, you won’t need to create one and can skip over the creation steps below. If not, then you will need to create that folder in your home directory with the following command:

  • mkdir .ssh

Once you have changed directory “cd” to the .ssh folder, you can use vi to create and edit a config file.

  • vi config

Once in vi, make the entries in the config file shown below in Figure 14. These entries enable access to the cable modem at 192.168.100.1 as the user root, using the aes256-cbc cipher and the diffie-hellman-group1-sha1 key exchange.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4
Figure 14: Config File
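For reference, a config along these lines should reproduce the entries described above (a sketch — the exact file used in the exercise is the one shown in Figure 14, and the cipher and key exchange names are inferred from the text):

Host 192.168.100.1
    User root
    Ciphers +aes256-cbc
    KexAlgorithms +diffie-hellman-group1-sha1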

Once the config file is created and saved, you should be able to log in to the cable modem over SSH without receiving any more errors.

When you connect and log in, the SSH console will not show you a typical command prompt. So, once you hit the return key after the above SSH command, run the command “ls -al” to show a directory and file listing on the cable modem as shown below in Figure 15. This should indicate whether you successfully logged in or not.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4
Figure 15: Cable Modem Root Console

At this point, you should now have root-level access to the cable modem over SSH.

You may ask, “What do I gain from getting this level of root access to an IoT device?” This level of access allows us to do more advanced and detailed security testing on a device. This is not as easily done when sitting on the outside of the IoT device or attempting to emulate it on a virtual machine, because the original hardware often contains components and features that are difficult to emulate. With root-level access, we can interact more directly with running services and applications and better monitor the results of any testing we may be conducting.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/11/01/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-3/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. In our previous posts, we covered how to achieve access to flash memory and how to extract file system data from the device. In this post, we’ll cover how to modify the data we’ve extracted.

Modify extracted file systems data

Now that you have unsquashfs’d the Squash file system, the next step is to take a look at the extracted file system and its general structure. In our example, the unsquashed file system is located at /root/Desktop/Work/squashfs-root. To see the structure, while in the folder /Desktop/Work, you can run the following commands to change directory and then list the files and folders:

  • cd squashfs-root
  • ls -al

As you can see, we have unpacked a copy of the squash file system containing the embedded Linux root file system that was installed on the cable modem for the ARM processor.

The next goal will be to make the following three changes so we can eventually gain access to the cable modem via SSH:

  1. Create or add a dropbear_rsa_key to squashfs.
  2. Remove symbolic link to passwd and recreate it.
  3. Modify the inittab file to launch dropbear on startup.

To make these changes, you will first need to change directory to the etc folder within squashfs-root. In our IoT Village exercise example, that folder was “~/Desktop/Work/squashfs-root/etc”, and the attendees used the following command:

  • cd ~/Desktop/Work/squashfs-root/etc

It is critical that you are in the correct directory and not in the etc directory of your local host before running any of the following commands to avoid potential damage to your laptop or desktop’s configuration. You can validate this by entering the command “pwd”, which allows you to examine the returned path results as shown below:

  • pwd
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

The next thing we need to do is locate a copy of the dropbear_rsa_key file and copy it over to “~/Desktop/Work/squashfs-root/etc”. This RSA key is needed to support the dropbear service, which allows SSH communication to the cable modem. It turns out that a usable copy of the dropbear_rsa_key file is located within the etc folder on Partition 12, which in our example was found to be mounted at /media/root/disk/etc. You can use the Disks application to confirm the location of the mount point for Partition 12, similar to the method we used for Partition 5 shown in Figure 4.

By running the following command during the IoT Village exercise, attendees were successfully able to copy the dropbear_rsa_key from Partition 12 into “~/Desktop/Work/squashfs-root/etc/”:

  • cp /media/root/disk/etc/dropbear_rsa_key .
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Next, we had participants remove the symbolic linked passwd file and create a new passwd file that points to the correct shell. Originally, the symbolic link pointed to a passwd file that pointed the root user to a shell that prevented root from accessing the system. By changing the passwd file, we can assign a usable shell environment to the root user.

Like above, the first step is to make sure you are in the correct folder listed below:

“~/Desktop/Work/squashfs-root/etc”

Once you have validated that you are in the correct folder, you can run the following command to delete the passwd file from the squash file system:

  • rm passwd
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Once this is deleted, you can create a new passwd file using vi and add the data shown below:

Note: Here is a list of common vi interaction commands that are useful when working within vi:

  • i = insert mode. This allows you to enter data into the file
  • esc key will exit insert mode
  • esc key followed by entering :wq and then the enter key will write the file and exit vi
  • esc key followed by dd will delete the line you are on
  • esc key followed by x will delete a single character
  • esc key followed by shift+A will place you in append insert mode at the end of the line
  • esc key followed by a will place you in append insert mode at the point of your cursor

To create the new passwd file, open it with vi:

  • vi passwd

Once in vi, add the following line:

root:x:0:0:root:/:/bin/sh

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Next, we need to alter the inittab file. Again, make sure you are currently in the folder “/root/Desktop/Work/squashfs-root/etc”. Once this is validated, you can use vi to open and edit the inittab file.

  • vi inittab

The file should originally look like this:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

You will need to add the following line to the file:

::respawn:/usr/sbin/dropbear -p 1337 -r /etc/dropbear_rsa_key

Add the line as shown below, and save the file:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Once you’ve completed all these changes, double-check them to make sure they are correct before moving on to the next sections. This will help avoid the need to redo everything if you made an incorrect alteration.

If everything looks correct, it’s time to repack the squash file system and write the data back to Partition 5 on the cable modem.

Repacking squash file system and rewriting back to Modem

In this step, you will be repacking the squash file system and using the Linux dd command to write the image back to the cable modem’s NAND flash memory.

The first thing you will need to do is change the directory back to the working folder – in our example, that is “/Desktop/Work”. This can be done from the current location of “~/Desktop/Work/squashfs-root/etc” by running the following command:

  • cd ../../
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Next, you’ll use the command “mksquashfs” to repack the squash folder into a binary file called new-P5.bin. To do this, run the following command from the “/Desktop/Work” folder that you should currently be in.

  • mksquashfs squashfs-root/ new-P5.bin
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Once the above command has completed, you should have a file called new-P5.bin. This binary file contains the squashfs-root folder properly packed and ready to be copied back onto the cable modem partition 5.

Note: If for some reason you think you have made a mistake and need to rerun the “mksquashfs” command, make sure you delete the new-P5.bin file first. “mksquashfs” will not overwrite the file; it will append the data to it. If that happens, the new-P5.bin image will contain duplicates of all the files, which will cause your attempt to gain root access to fail. So, if you rerun mksquashfs, make sure to delete the old new-P5.bin file first.
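If you want to sanity-check the repacked image before writing it back (an optional step, not part of the original exercise), unsquashfs can list its contents without unpacking them:

  • unsquashfs -ls new-P5.bin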

Once you have run mksquashfs and have a new-P5.bin file containing the repacked squashfs, you’ll use the Linux dd command to write the binary file back to the cable modem’s partition 5.

To complete this step, first make sure you have identified the correct “Device:” location using the method shown in Figure 7 from part 2 of this blog series.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

In this example, the “Device:” was determined to be sdd5, so we can write the binary image by running the following dd command:

  • dd if=new-P5.bin of=/dev/sdd5
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Once the dd command completes, the modified squash file system image should now be written on the modem’s NAND flash memory chip. Before proceeding, disconnect the SD Card reader from the cable modem as shown below.
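It is also a good idea to flush any buffered writes before unplugging the reader (a small precaution, not called out in the original write-up):

  • sync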

Note: Attaching the SD Card Reader and 12V power supply to the cable modem at the same time will damage the cable modem and render it nonfunctional.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

In our next and final post in this series, we’ll cover how to gain root access over the device’s secure shell protocol (SSH). Check back with us next week!

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/10/25/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-2/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. Last week, we covered the basics of the exercise and achieving access to flash memory. In this post, we’ll cover how to extract partition data.

Extracting partition data

The next step in our hands-on IoT hacking exercise is to identify the active partition and extract the filesystems for modification. The method I have used for this is to examine the file date stamps – the one with the most current date is likely the current active file system.

Note: Curious why there are multiple or duplicate filesystem partitions on an embedded device? Typically, with embedded device firmware, there are two of each partition, two kernel images, and two root file system images. This is so the device’s firmware can be updated safely and prevent the device from being bricked if an error or power outage occurs during the firmware update. The firmware update process updates the files on the offline partitions, and once that is completed properly, the boot process loads the new updated partitions as active and takes the old partitions offline. This way, if a failure occurs, the old unchanged partition can be reloaded, preventing the device from being bricked.

On this cable modem, we have 7 partitions. The key ones we want to work with are Partitions 5, 6, 12, and 13. This cable modem has two MCUs: an ARM and an ATOM processor. Partitions 5 and 6 are root filesystems for the ARM processor, which is what we will be hacking. Partitions 12 and 13 are root filesystems for the ATOM processor, which controls the RF communication on the cable.

To ensure we alter the active partition, the first thing we need to do is check the date stamps on the mounted file systems to see whether partition 5 or partition 6 is the active partition. There are several ways to do this, but the method we use is to click on partition 5 or 6 in the Disks application to highlight it, and then click on the “Mounted at:” link as shown below in Figure 4 to open the mounted file partition shown in Figure 5:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Figure 4: Partition File System Mount Locations

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Figure 5: Mounted File System Partition 5

Once File Manager opens the folders, you can right click on the “etc” folder, select “Properties,” and check the date listed in “Modified:” as shown below in Figure 6:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Figure 6: Folder Date Stamp

You will want to do this on both partitions 5 and 6 for the cable modem to identify the partition with the most current date stamp, as this is typically the active partition. For the example cable modems we used at DEF CON IoT Village, partition 5 was found to have the most current date stamp.

Extracting the active partition

The next step is to extract the partition with the newest date stamp (Partition 5). To do this, we first need to identify which Small Computer System Interface (SCSI) disk partition 5 is attached to. This can be identified by selecting Partition 5 in the Disks application and then reading the “Device:” field as shown below in Figure 7:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Figure 7: Device

Also remember to record the “Device:” information. You’ll need this during several of the future steps. In our example, we see that this is /dev/sdd5, as shown in Figure 7.

To extract the partition image, launch the Terminal application on your Linux host to gain access to the command line interface (CLI). Once the CLI is open, create a storage area to hold the partition binary and file system data. In our example, we created a folder for this called /Desktop/Work.

From CLI within the /Desktop/Work folder, we ran the Linux dd command to make a binary copy of Partition 5. We used the following command to make sure we used the device location that Partition 5 was connected to: /dev/sdd5. A sample output of this dd command is shown below in Figure 8:

  • dd command arguments:
    if=file Read input
    of=file Write output
  • dd if=/dev/sdd5 of=part5.bin
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Figure 8: dd Command

Note: Before proceeding, we highly recommend that you make a binary copy of the cable modem’s full NAND flash. This may come in handy if anything goes wrong and you need to return the device to its original operating mode. This can be done by running the following dd command against the full “Device:”. In this example, that would be /dev/sdd:

  • dd if=/dev/sdd of=Backup_Image.bin

Unsquashfs the partition binary

Next, we’ll extract the file system from the Partition 5 image that we dd’d from the cable modem in the previous steps. This file system was determined to be a Squash file system, a type commonly used on embedded devices.

Note: A Squash file system is a compressed read-only file system commonly found on embedded devices. Since the file system is read-only, it is not possible to make alterations to it while it is in place. To make modifications, the file system needs to be extracted from the device’s storage and uncompressed, which is what we’ll do in the following exercise steps.

So, your first question may be, “How do we know that this is a squash file system?” If you look at the partition data in the Disks application, you will see that “Content:” shows (squashfs 4.0):

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Another simple option to identify the content is to run the file command against the binary file extracted from the modem with the dd command and view the output. An example of this output is shown below in Figure 9:

  • file part5.bin
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Figure 9: Output from File Command

To gain access to the partition binary squash file system, we will be extracting/unpacking the squash file system using the unsquashfs command. A squash file system is read-only and cannot be altered in place – the only way to alter a squashfs is to unpack it first. This is a simple process and is done by running the unsquashfs command against the binary file part5.bin:

  • unsquashfs part5.bin
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Once the command completes, you should have a folder called “squashfs-root”. This folder contains the extracted embedded Linux root file system for the cable modem.

In our next post, we’ll cover how to modify the data we just extracted. Check back with us next week!

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/10/18/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-part-1/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

Rapid7 was back this year at DEF CON 30, participating in the IoT Village with another hands-on hardware hacking exercise, with the goal of teaching attendees various concepts and methods for IoT hacking. Over the years, these exercises have covered several different embedded device topics, including how to use a logic analyzer, extracting firmware, and gaining root access to an embedded IoT device.

Like last year, we had many IoT Village attendees request a copy of our exercise manual, so again I decided to create an in-depth write-up about the exercise we ran, with some expanded context to answer several questions and expand on the discussion we had with attendees at this year’s DEF CON IoT Village.

This year’s exercise focused on the following key areas:

  • Interacting with eMMC in circuit
  • Using the Linux dd command to make a binary copy of flash memory
  • Using the unsquashfs and mksquashfs commands to unpack and repack read-only squash file systems
  • Altering startup files within the embedded Linux operating system to execute code during device startup
  • Leveraging dropbear to enable SSH access

Summary of exercise

The goal of this year’s hands-on hardware hacking exercise was to gain root access to an Arris SB6190 cable modem without needing to install any external code. To do this, the user interacted with the device via a PHISON PS8211-0 embedded multimedia controller (eMMC) to mount and gain access to the NAND flash memory storage. With NAND flash memory access, the user was able to identify the partitions of interest and extract them using the Linux dd command.

Next, the user extracted the filesystem from the partition binary files and was then able to modify key elements to enable SSH access over the ethernet connection. After the modifications were completed, the filesystems were repacked and written back to the modem device. Finally, the attendee was able to power up the device and log in over ethernet using SSH with root access and the default device password.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

eMMC access to flash memory

In this first section of the exercise, we focused on understanding the process of gaining access to the NAND flash memory by interacting with a PHISON PS8211-0 embedded multimedia controller (eMMC).

Wiring up eMMC and SD card breakout board

To interact with a typical eMMC device, we need the following connections:

  • CMD Command
  • DAT Data
  • CLK Clock
  • VCC Voltage 3.3v
  • VCCq Controller Voltage 1.8v – 3.3v
  • GND Ground

As shown in the above bullets, there are typically two different voltages required to interact with eMMC chips. However, in this case, we determined that the PHISON PS8211-0 eMMC chip did not have a different controller voltage for VCCq, meaning that the voltage used was only 3.3v for this example.

When connecting to and interacting with an eMMC device, we usually can utilize the internal power supply of the device. This often works well when different VCC and VCCq voltages are required, but in those cases, we also have to hold the microcontroller unit (MCU) at reset state to prevent the processor from causing interruption when trying to read memory. In this example, we found that the PHISON eMMC chip and NAND memory could be powered by supplying the voltage externally via the SD Card reader.

When using an SD Card reader to supply voltage, we must avoid hooking up the device’s normal source of power as well. Connecting both sources – normal and SD Card – to the device will lead to permanent damage.

When it came to soldering the needed wiring for this exercise, we realized that having attendees do the soldering themselves would be more complex than we could support. So, all the wiring was presoldered before the IoT Village event using 30-gauge color-coded wirewrap wire. This wiring was then attached to an SD Card breakout board as shown below in Figure 1:

  • White = Data
  • Blue = Clock
  • Yellow = Command
  • Red = Voltage (VCC)
  • Black = Ground
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1
Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1
Figure 1: Wiring Hookups

Also, as you can see in the above images, the wires do not run parallel against each other, but have a reasonable gap between them and pass over each other perpendicularly when they cross over. This is because we found during testing that if we ran wires directly next to each other, it caused the partitions to fail to mount properly, most likely because noise was induced into the lines from the other lines affecting the signal.

Note: If you are looking to do your own wiring, the 30-gauge wirewrap wire I used is a Polyvinylidene fluoride coated insulation wire under the brand name of Kynar. The benefit of using Kynar wirewrap is that this insulation does not melt or shrink as easily from heat from the solder iron. When heated by a solder iron, standard plastic-coated insulation will shrink back, exposing uninsulated wire. This can lead to wires shorting out on the circuit board.

Connect SD card reader

With the modem wired up to the SD Card breakout board as shown above, we can mount the NAND flash memory by connecting an SD Card reader. Note that not all SD Card readers will work: I used trial and error with several SD Card readers I had in my possession until I found that an inexpensive DYNEX brand reader worked. It should be attached as shown below in Figure 2:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1
Figure 2: Connected SD Card Reader

Once plugged in, the various partitions on the cable modem’s NAND flash memory should start loading. In this case, a total of seven partitions mounted up. This can take a few minutes to complete. If your system opens each volume as it mounts, I typically close those windows to avoid cluttering the desktop. To see the layout of the various partitions on the NAND flash and gather the information needed for reading and writing to the correct partitions, we used the Linux application Disks. Once Disks is opened, you can click on the 118 MB drive in the left column, and it will show all of the partitions; it should look something like Figure 3 below:

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1
Figure 3: Disks NAND Flash Partitions

In our second installment of this 4-part blog series, we’ll discuss the step of extracting partition data. Check back with us next week!

Addressing the Evolving Attack Surface Part 1: Modern Challenges

Post Syndicated from Bria Grangard original https://blog.rapid7.com/2022/10/17/addressing-the-evolving-attack-surface-part-1-modern-challenges/

Addressing the Evolving Attack Surface Part 1: Modern Challenges

Lately, we’ve been hearing a lot from our customers requesting help on how to manage their evolving attack surface. As new 0days appear, new applications are spun up, and cloud instances change hourly, it can be hard for our customers to get a full view of risk into their environments.

We put together a webinar to chat more about how Rapid7 can help customers meet this challenge with two amazing presenters Cindy Stanton, SVP of Product and Customer Marketing, and Peter Scott, VP of Product Marketing.

At the beginning of this webcast, Cindy highlights how the industry started from traditional vulnerability management (VM), which was heavily focused on infrastructure but has evolved significantly over the last couple of years. Cindy discusses how this rapid expansion of the attack surface has been accelerated by remote workforces during the pandemic, the convergence of IT and IoT initiatives, modern application development leveraging containers and microservices, adoption of the public cloud, and much more. Today, security teams face the daunting challenge of having many layers up and down the stack, from traditional infrastructure to cloud environments, applications, and beyond. They need a way to understand their full attack surface. Cindy gives an example of this evolving challenge of increasing resources and the complexity of cloud adoption below.



Addressing the Evolving Attack Surface Part 1: Modern Challenges

Cindy then turns things over to Peter Scott to walk us through the many challenges security teams are facing. For example, traditional tools aren’t purpose-built to keep pace with cloud environments, getting complete coverage of assets in your environment requires multiple solutions from different vendors that all speak different languages, and no solution provides a unified view of an organization’s risk. These challenges, on top of growing economic pressures, often make security teams choose between continued investment in traditional infrastructure and applications or investing more in securing cloud environments. Peter then discusses the challenges security teams face from expanded roles, disjointed security stacks, and increases in the threat landscape. Some of these challenges are highlighted in the video below.



Addressing the Evolving Attack Surface Part 1: Modern Challenges

After spending some time discussing the challenges organizations and security teams are facing, Cindy and Peter dive deeper into the steps organizations can take to expand their existing VM programs to include cloud environments. We will cover these steps and more in the next blog post of this series. Until then, if you’re curious to learn more about Rapid7’s InsightCloudSec solution feel free to check out the demo here, or watch the replay of this webinar at any time!

What’s Up, Home? – Staring at the Video Stream

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-staring-at-the-video-stream/23882/

Can you make sure your video streams are up with Zabbix? Of course, you can! By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana Labs and do some weird experiments with them. Welcome to my weekly blog about the project.

You might have a surveillance camera at home to record suspicious activities in your yard while you are away or so. Most of the time the cameras do work just fine but might require a hard reboot from time to time, for example, due to harsh weather, or not coming back after a network outage. A networked camera responding to ping does not 100% mean the camera is actually functional. I have seen our camera going black and refusing to connect to its stream even though it thinks it’s working just fine.

Zabbix to the rescue!

Connecting to your camera

My post for this week is mostly to maybe give you a new approach for monitoring your cameras, not so much a functional solution as I’m still figuring out how to do this properly.

For example, I can connect to our camera via the RTSP protocol and pass some credentials with it, like so: rtsp://myusername:mypassword@mycamera:443/myAddress

To figure out a connection address for your camera model, iSpyConnect has a nice camera database.

Playing the stream

To test if the video stream works, VLC and mplayer are good options; for visually verifying the stream works, try something like

mplayer 'rtsp://myusername:mypassword@mycamera:443/myAddress'

or for those who like to use a GUI, in VLC, File –> Open Network –> enter your camera address.

For obvious reasons, I am not posting here an image from our camera. Anyway, trust me, this method should work if you have a compatible camera.

Let’s go next for the neat tricks part, which I’m still figuring out myself, too.

Making sure the stream works

To make sure the video stream is up and running, have your Zabbix server, Zabbix proxy, or a dedicated media server continuously stream your video feed. For example:

mplayer -vo null 'rtsp://myusername:mypassword@mycamera:443/myAddress'

The combination above would make mplayer play the stream with a null video driver; thus, the stream will be continuously played, but just with no visual video output generated. In other words, under perfect conditions, the mplayer process should be running on the server all the time. If anything goes wrong with the stream, mplayer quits itself, and the process goes away from the process list, too.
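One simple way to have such a player start automatically in the background on the monitoring server (a hedged example, not from the original post) is a cron @reboot entry, for example:

  • @reboot mplayer -vo null 'rtsp://myusername:mypassword@mycamera:443/myAddress' >/dev/null 2>&1

Since cron will not restart the player when it exits, the process disappearing remains a reliable signal that the stream has died.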

Using Zabbix to check the player status

Now that you have some server continuously playing the stream, it’s time to check the status with Zabbix.

From here, checking the stream status with Zabbix is simple, just

  • create a new item that checks whether, for example, the mplayer process is running, using the Zabbix Agent item type and the proc.num[,mplayer] key, and
  • create a trigger that alerts you if the number of mplayer processes is <1 (see the expression sketch below)
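As a concrete sketch (assuming Zabbix 6.x trigger expression syntax and a hypothetical host name), the trigger expression could look like:

last(/mediaserver/proc.num[,mplayer])<1
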
Camera screenshots to your Zabbix user interface

Both mplayer and VLC can be controlled remotely, so here’s an idea I have not yet implemented but testing out.

If a motion sensor, either an external unit or a built-in one, detects movement, make Zabbix send a command to the camera to record a screenshot of the camera stream, or possibly a short video. Then make the script save the photo or video in a directory that Zabbix can access and show it with the URL widget type.

mplayer has a slave mode for receiving commands from external programs, which might work together with a FIFO pipe.
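A rough sketch of how that could work (untested, like the idea itself — the file names here are hypothetical, and the screenshot video filter must be loaded for the slave command to do anything):

  • mkfifo /tmp/mplayer.fifo
  • mplayer -slave -input file=/tmp/mplayer.fifo -vf screenshot 'rtsp://myusername:mypassword@mycamera:443/myAddress'
  • echo "screenshot 0" > /tmp/mplayer.fifo

The screenshot command writes a shot0001.png-style file into mplayer’s working directory, which a small script could then copy to a location served through the Zabbix URL widget.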

Real-time video stream in your Zabbix user interface

At least VLC can transcode RTSP to an HTTP stream in real time, so in theory, embedding the resulting stream in your Zabbix user interface should very much be doable with a short HTML file and the Zabbix URL widget type. This one I did not yet even start to try out, though.
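As a starting point (untested — this particular stream output chain is only one possible configuration, not something from the original post), VLC’s command line interface could restream the camera over HTTP like this:

  • cvlc 'rtsp://myusername:mypassword@mycamera:443/myAddress' --sout '#transcode{vcodec=h264,acodec=none}:standard{access=http,mux=ts,dst=:8080/stream}'

The resulting http://<server>:8080/stream address could then be referenced from the HTML file shown by the URL widget.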

So, that’s all for this week’s blog post. I’m still building this thing out, but if you have successfully done something similar, please let me know!

I have been working at Forcepoint since 2014 and am a true fan of functional testing. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Staring at the Video Stream appeared first on Zabbix Blog.

Securing the Internet of Things

Post Syndicated from Matt Silverlock original https://blog.cloudflare.com/rethinking-internet-of-things-security/

Securing the Internet of Things

Securing the Internet of Things

It’s hard to imagine life without our smartphones. Whereas computers were mostly fixed and often shared, smartphones meant that every individual on the planet became a permanent, mobile node on the Internet — with some 6.5B smartphones on the planet today.

While that represents an explosion of devices on the Internet, it will be dwarfed by the next stage of the Internet’s evolution: connecting devices to give them intelligence. Already, Internet of Things (IoT) devices represent somewhere in the order of double the number of smartphones connected to the Internet today — and unlike smartphones, this number is expected to continue to grow tremendously, since they aren’t bound to the number of humans that can carry them.

But the exponential growth in devices has brought with it an explosion in risk. We’ve been defending against DDoS attacks from Internet of Things (IoT) driven botnets like Mirai and Meris for years now. They keep growing, because securing IoT devices still remains challenging, and manufacturers are often not incentivized to secure them. This has driven NIST (the U.S. National Institute of Standards and Technology) to actively define requirements to address the (lack of) IoT device security, and the EU isn’t far behind.

It’s also the type of problem that Cloudflare solves best.

Today, we’re excited to announce our Internet of Things platform: with the goal to provide a single pane-of-glass view over your IoT devices, provision connectivity for new devices, and critically, secure every device from the moment it powers on.

Not just lightbulbs

It’s common to immediately think of lightbulbs or simple motion sensors when you read “IoT”, but that’s because we often don’t consider many of the devices we interact with on a daily basis to be IoT devices.

Think about:

  • Almost every payment terminal
  • Any modern car with an infotainment or GPS system
  • Millions of industrial devices that power — and are critical to — logistics services, industrial processes, and manufacturing businesses

You especially may not realize that nearly every one of these devices has a SIM card, and connects over a cellular network.

Cellular connectivity has become increasingly ubiquitous, and if the device can connect independently of Wi-Fi network configuration (and work out of the box), you’ve immediately avoided a whole class of operational support challenges. If you’ve just read our earlier announcement about the Zero Trust SIM, you’re probably already seeing where we’re headed.

Hundreds of thousands of IoT devices already securely connect to our network today using mutual TLS and our API Shield product. Major device manufacturers use Workers and our Developer Platform to offload authentication, compute and most importantly, reduce the compute needed on the device itself. Cloudflare Pub/Sub, our programmable, MQTT-based messaging service, is yet another building block.

But we realized there were still a few missing pieces: device management, analytics and anomaly detection. There are a lot of “IoT SIM” providers out there, but the clear majority are focused on shipping SIM cards at scale (great!) and less so on the security side (not so great) or the developer side (also not great). Customers have been telling us that they wanted a way to easily secure their IoT devices, just as they secure their employees with our Zero Trust platform.

Cloudflare’s IoT Platform will build in support for provisioning cellular connectivity at scale: we’ll support ordering, provisioning and managing cellular connectivity for your devices. Every packet that leaves each IoT device can be inspected, approved or rejected by policies you create before it reaches the Internet, your cloud infrastructure, or your other devices.

Emerging standards like IoT SAFE will also allow us to use the SIM card as a root-of-trust, storing device secrets (and API keys) securely on the device, whilst raising the bar to compromise.

This also doesn’t mean we’re leaving the world of mutual TLS behind: we understand that not every device makes sense to connect solely over a cellular network, be it due to per-device costs, lack of coverage, or the need to support an existing deployment that can’t just be re-deployed.

Bringing Zero Trust security to IoT

Unlike humans, who need to be able to access a potentially unbounded number of destinations (websites), the endpoints that an IoT device needs to speak to are typically far more bounded. But in practice, there are often few controls in place (or available) to ensure that a device only speaks to your API backend, your storage bucket, and/or your telemetry endpoint.

Our Zero Trust platform, however, has a solution for this: Cloudflare Gateway. You can create DNS, network or HTTP policies, and allow or deny traffic based not only on the source or destination, but on richer identity- and location- based controls. It seemed obvious that we could bring these same capabilities to IoT devices, and allow developers to better restrict and control what endpoints their devices talk to (so they don’t become part of a botnet).

Securing the Internet of Things

At the same time, we also identified ways to extend Gateway to be aware of IoT device specifics. For example, imagine you’ve provisioned 5,000 IoT devices, all connected over cellular directly into Cloudflare’s network. You can then choose to lock these devices to a specific geography if there’s no need for them to “travel”; ensure they can only speak to your API backend and/or metrics provider; and even ensure that if the SIM is lifted from the device it no longer functions by locking it to the IMEI (the serial of the modem).

Building these controls at the network layer raises the bar on IoT device security and reduces the risk that your fleet of devices becomes the tool of a bad actor.

Get the compute off the device

We’ve talked a lot about security, but what about compute and storage? A device can be extremely secure if it doesn’t have to do anything or communicate anywhere, but clearly that’s not practical.

Simultaneously, doing non-trivial amounts of compute “on-device” has a number of major challenges:

  • It requires a more powerful (and thus, more expensive) device. Moderately powerful (e.g. ARMv8-based) devices with a few gigabytes of RAM might be getting cheaper, but they’re always going to be more expensive than a lower-powered device, and that adds up quickly at IoT-scale.
  • You can’t guarantee (or expect) that your device fleet is homogenous: the devices you deployed three years ago can easily be several times slower than what you’re deploying today. Do you leave those devices behind?
  • The more business logic you have on the device, the greater the operational and deployment risk. Change management becomes critical, and the risk of “bricking” — rendering a device non-functional in a way that you can’t fix it remotely — is never zero. It becomes harder to iterate and add new features when you’re deploying to a device on the other side of the world.
  • Security continues to be a concern: if your device needs to talk to external APIs, you have to ensure you have explicitly scoped the credentials they use to avoid them being pulled from the device and used in a way you don’t expect.

We’ve heard other platforms talk about “edge compute”, but in practice they either mean “run the compute on the device” or “in a small handful of cloud regions” (introducing latency) — neither of which fully addresses the problems highlighted above.

Instead, by enabling secure access to Cloudflare Workers for compute, Analytics Engine for device telemetry, D1 as a SQL database, and Pub/Sub for massively scalable messaging — IoT developers can both keep the compute off the device, but still keep it close to the device thanks to our global network (275+ cities and counting).

On top of that, developers can use modern tooling like Wrangler to both iterate more rapidly and deploy software more safely, avoiding the risk of bricking or otherwise breaking part of your IoT fleet.

Where do I sign up?

You can register your interest in our IoT Platform today: we’ll be reaching out over the coming weeks to better understand the problems teams are facing and working to get our closed beta into the hands of customers in the coming months. We’re especially interested in teams who are in the throes of figuring out how to deploy a new set of IoT devices and/or expand an existing fleet, no matter the use-case.

In the meantime, you can start building on API Shield and Pub/Sub (MQTT) if you need to start securing IoT devices today.

What’s Up, Home? – How Zabbix Can Help You with Rising Electricity Bills

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-how-zabbix-can-help-you-with-rising-electricity-bills/23582/

Can you monitor your upcoming electricity bills with Zabbix? Of course, you can! By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana Labs and make some weird experiments with them. Welcome to my weekly blog about the project.

With the current world events, energy prices are soaring. But how much do I need to really pay next month for my electricity? Zabbix to the rescue!

(Yes, in Finland I can check that from my electricity company’s page, but where’s the fun in that?)

Fixed vs spot price

There are two kinds of electricity contracts you can subscribe to in Finland. With a fixed price, you can be sure your bill does not fluctuate that much from month to month, as you pay the same price per kilowatt-hour for every hour of the day. In this kind of deal, the electricity company adds some margin to each kilowatt-hour, so you will automatically pay a bit more than the electricity market price, but at least you don’t get severely surprised by market price peaks.

Then there’s the spot price, where you pay only the electricity market price. This can and will vary a lot depending on the hour of the day, but at least in theory, this is the cheapest option in the long run. But, if the market price goes WAY up, like it tends to do in the winter, and has now been peaking due to world events, this can add to your bill.

Nordpool, please respond

There’s Nord Pool (“Nord Pool runs the leading power market in Europe, offering day-ahead and intraday markets to our customers”), and there’s a Python library for accessing Nord Pool electricity prices. With it, I could get hour-by-hour prices, but for this experiment, let’s stick with the average kWh price. The example script on the GitHub page shows all kinds of data, and for fun let’s use Zabbix item preprocessing to parse the average price from its output.

I now have the script below running as a cron task every night, so my results are updated once every 24 hours.
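The script itself appeared as a screenshot in the original post; a minimal sketch of what such a script could look like (untested, assuming the nordpool Python package and a hypothetical output path for Zabbix to read) is:

from statistics import mean
from nordpool import elspot

# Fetch day-ahead ("Elspot") hourly spot prices for Finland and dump them,
# plus an "Average" line, into a file for a Zabbix item to read and parse.
prices = elspot.Prices()
hours = prices.hourly(areas=["FI"])["areas"]["FI"]["values"]
with open("/var/lib/zabbix/nordpool.txt", "w") as f:
    for hour in hours:
        f.write(f"{hour['start']} {hour['value']}\n")  # price in EUR/MWh
    f.write(f"Average: {mean(h['value'] for h in hours)}\n")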

So, Zabbix then reads the file contents, like in so many of my previous blog posts.

Next, let’s add some preprocessing. The regular expression step extracts the Average value from the script output, and the custom multiplier converts the value from euros per megawatt-hour to euros per kilowatt-hour, so it matches the familiar figure from my electricity bills.

And… it’s working! As I know our average consumption, let’s add a new Grafana dashboard.

Four seasons

During summer, we don’t actually use very much electricity compared to our harsh winter; for example, keeping our garage “warm” (about +10C) during winter contributes quite a lot to our electricity bill.

Here’s a dashboard showing some guesstimates of how expensive the different seasons will be for us. Or, hopefully, how cheap, if the long-overdue new Olkiluoto 3 nuclear plant can finally operate at its full capacity here in Finland.

The guesstimate above is missing some taxes and electricity transfer prices, so the reality will be a bit more expensive than this. Maybe I should also add some triggers to Zabbix to alert me about any really crazy price changes.

Anyway, now I can start gathering nuts for the cold winter as it seems that it will be an expensive one.

I have been working at Forcepoint since 2014 and I’m happy that my laptop does not consume too much electricity. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – How Zabbix Can Help You with Rising Electricity Bills appeared first on Zabbix Blog.

What’s Up, Home? – Automatic Temperature Control

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-automatic-temperature-control/23401/

Can you automatically control the temperature of your home in a time- and room-based manner using Zabbix? Of course, you can!

By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix & Grafana and make some weird experiments with them. Welcome to my weekly blog about this project.

Earlier in this blog series, I made Zabbix read the status of our air conditioner and made it possible to use Zabbix as a manual remote control for the device. But now we need to take a step further and make Zabbix control the AC based on the time of day and whether I am at home or not.

Forget the sweaty nights

Usually in Finland during the summer, the nights are not so hot that an AC would be needed. However, that can happen during any rare heat wave we get. It’s annoying to wake up in the middle of the night all sweaty and turn on the AC when it’s already too late.

Of course, I could just leave the AC on when I go to bed, but let’s make Zabbix do some good for our electricity bill and for the environment by not using the AC when it’s not needed.

Detecting if I am at home

Like so many times before in this blog series, the Cozify smart home hub is the true star of this story. It detects whether anyone is at home based on whether a specific phone or, for example, a smart key fob is present and reachable within Cozify’s range. For this case, I will be using my smart key fob in Zabbix, too. This is how it looks in Cozify.

… and here’s the key fob reachability status in Zabbix.

Surprisingly enough, it shows 1 (or “True”) as my status, since I am typing this blog entry at home and my keys are at home.

A deeper dive into my key fob Zabbix item

To make this all work, I have a set of Python scripts gathering data from Cozify via an unofficial Cozify API Python library. One of the scripts gets the reachability status for all the items, and here’s the configuration for my key fob Zabbix item.

… and some preprocessing …

Let’s add some triggers

Now that we have the key fob data, let’s create some triggers to combine the data about my presence with the temperature information.

I created the triggers by using Zabbix expression constructor:

.. and when I was done, this is how it looked.

I made a similar trigger for our living room, too.
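The actual expressions were shown as screenshots in the original post; as a rough sketch (assuming Zabbix 6.x expression syntax and hypothetical host and item keys), the bedroom trigger might look something like:

min(/Home/cozify.temperature[bedroom],#10)>23 and last(/Home/cozify.reachable[keyfob])=1

In other words, fire only when the last ten bedroom temperature readings are above 23C and my key fob is reachable at home.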

Next, some scripts

Next I added some scripts under Zabbix Administration → Scripts and made them as Action operations.

This one turns on the AC:

… and this one turns it off.

Lights, camera, action!

We have our triggers and scripts, great! Next, it’s time to add some actions.

  • During the daytime, Zabbix will watch the living room temperature and turn on the AC if the temperature goes over 23C for ten consecutive readings
  • During the nighttime, Zabbix will read the temperature of our bedroom and turn on the AC if the temperature goes over 23C for ten consecutive readings

We will see how well my attempt at this will work. Here’s what the operations look like — if it’s too hot, turn on the AC, and when the temperature comes down enough, turn off the AC.

As I built this thing while writing this blog entry, it’s possible I will need to fine-tune the thresholds somewhat so that my automatic AC control isn’t too aggressive. Anyway, this now works in theory.

Oh, BTW, Cozify could also implement similar rules, but as it does not directly support our air conditioner (it would require a separate Air Patrol device for that), this is again a great example of how I can utilize Cozify and still extend my home’s IoT functionality even further with Zabbix, for free.

I have been working at Forcepoint since 2014 and always try to find out new ways to automate things. — Janne Pikkarainen

This post was originally published on the author’s LinkedIn account.

The post What’s Up, Home? – Automatic Temperature Control appeared first on Zabbix Blog.

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/09/08/baxter-sigma-spectrum-infusion-pumps-multiple-vulnerabilities-fixed/

Baxter SIGMA Spectrum Infusion Pumps: Multiple Vulnerabilities (FIXED)

Rapid7, Inc. (Rapid7) discovered vulnerabilities in two TCP/IP-enabled medical devices produced by Baxter Healthcare. The affected products are:

  • SIGMA Spectrum Infusion Pump (Firmware Version 8.00.01)
  • SIGMA Wi-Fi Battery (Firmware Versions 16, 17, 20 D29)

Rapid7 initially reported these issues to Baxter on April 20, 2022. Since then, members of our research team have worked alongside the vendor to discuss the impact, resolution, and a coordinated response for these vulnerabilities.

Product description

Baxter’s SIGMA Spectrum product is a commonly used brand of infusion pumps, which are typically used by hospitals to deliver medication and nutrition directly into a patient’s circulatory system. These TCP/IP-enabled devices deliver data to healthcare providers to enable more effective, coordinated care.

Credit

The vulnerabilities in two TCP/IP-enabled medical devices were discovered by Deral Heiland, Principal IoT Researcher at Rapid7. They are being disclosed in accordance with Rapid7’s vulnerability disclosure policy after coordination with the vendor.

Vendor statement

“In support of our mission to save and sustain lives, Baxter takes product security seriously. We are committed to working with the security researcher community to verify and respond to legitimate vulnerabilities and ask researchers to participate in our responsible reporting process. Software updates to disable Telnet and FTP (CVE-2022-26392) are in process. Software updates to address the format string attack (CVE-2022-26393) are addressed in WBM version 20D30 and all other WBM versions. Authentication is already available in Spectrum IQ (CVE-2022-26394). Instructions to erase all data and settings from WBMs and pumps before decommissioning and transferring to other facilities (CVE-2022-26390) are in process for incorporation into the Spectrum Operator’s Manual and are available in the Baxter Security Bulletin.”

Exploitation and remediation

This section details the potential for exploitation and our remediation guidance for the issues discovered and reported by Rapid7, so that defenders of this technology can gauge the impact of, and mitigations around, these issues appropriately.

Battery units store Wi-Fi credentials (CVE-2022-26390)

Rapid7 researchers tested Spectrum battery units for vulnerabilities. We found all units that were tested store Wi-Fi credential data in non-volatile memory on the device.

When a Wi-Fi battery unit is connected to the primary infusion pump and the infusion pump is powered up, the pump will transfer the Wi-Fi credential to the battery unit.

Exploitation

An attacker with physical access to an infusion pump could install a Wi-Fi battery unit (easily purchased on eBay), and then quickly power-cycle the infusion pump and remove the Wi-Fi battery – allowing them to walk away with critical Wi-Fi data once a unit has been disassembled and reverse-engineered.

Also, since these battery units store Wi-Fi credentials in non-volatile memory, there is a risk that when the devices are de-acquisitioned and no efforts are made to overwrite the stored data, anyone acquiring these devices on the secondary market could gain access to critical Wi-Fi credentials of the organization that de-acquisitioned the devices.

Remediation

To mitigate this vulnerability, organizations should restrict physical access by any unauthorized personnel to the infusion pumps or associated Wi-Fi battery units.

In addition, before de-acquisitioning the battery units, batteries should be plugged into a unit with invalid or blank Wi-Fi credentials configured and the unit powered up. This will overwrite the Wi-Fi credentials stored in the non-volatile memory of the batteries. Wi-Fi must be enabled on the infusion pump unit for this overwrite to work properly.

Format string vulnerabilities

“Hostmessage” (CVE-2022-26392)

When running a telnet session against the Baxter SIGMA Wi-Fi battery firmware version 16, the `hostmessage` command is vulnerable to a format string attack.

Exploitation

An attacker could trigger this format string vulnerability by entering the following command during a telnet session:

[Image: the vulnerable hostmessage command entered during a telnet session]

To view the output of this format string injection, trace output must be enabled in the telnet session with `settrace state=on`. Tracing does not need to be enabled for the vulnerability to be triggered, only for its output to be viewed.

Once tracing is enabled and showing output within the telnet session, the result can be viewed as shown below, where each `%x` returned data from the device's process stack.

[Image: trace output in the telnet session, with each %x returning data from the process stack]

SSID (CVE-2022-26393)

Rapid7 also found another format string vulnerability, this one in Wi-Fi battery software version 20 D29. It is triggered during SSID processing when the `get_wifi_location (20)` command is sent via XML to the Wi-Fi battery on TCP port 51243 or UDP port 51243.

[Image: the get_wifi_location (20) XML command sent to the Wi-Fi battery]

Exploitation

This format string vulnerability can be triggered by first setting up a Wi-Fi access point whose SSID contains format string specifiers. Next, an attacker could send a `get_wifi_location (20)` command via TCP port 51243 or UDP port 51243 to the infusion pump. This causes the device to process the SSID of the nearby access point and triggers the exploit. The results can be viewed in the trace log output within a telnet session, as shown below.

[Image: trace log output showing stack data returned by the format string specifiers in the SSID]

The SSID of `AAAA%x%x%x%x%x%x%x%x%x%x%x%x%x%x` allows for control of 4 bytes on the stack, as shown above, using the `%x` to walk the stack until it reaches 41414141. By changing the leading `AAAA` in the SSID, a malicious actor could potentially use the format string injection to read and write arbitrary memory. At a minimum, using format strings of `%s` and `%n` could allow for a denial of service (DoS) by triggering an illegal memory read (`%s`) and/or illegal memory write (`%n`).

Note that in order to trigger this DoS effect, the attacker would need to be within normal radio range and either be on the device’s network or wait for an authorized `get_wifi_location` command (the latter would itself be an unusual, non-default event).

Remediation

To prevent exploitation, organizations should restrict access to the network segments containing the infusion pumps. They should also monitor network traffic for any unauthorized host communicating with the infusion pumps over TCP or UDP port 51243. In addition, monitor the Wi-Fi space for rogue access points with format string specifiers in their SSIDs.
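
As a rough illustration of the traffic-monitoring advice, a small packet capture script can flag any host that starts talking to the pumps on that port. This is only a sketch using the Python Scapy library; the allow-list and the addresses in it are placeholders you would fill in yourself.

# Sketch: flag hosts sending traffic to TCP/UDP port 51243.
# The allow-list below is a placeholder for your own SIGMA GW / management hosts.
from scapy.all import sniff, IP, TCP, UDP

AUTHORIZED_HOSTS = {"192.0.2.10"}  # placeholder addresses

def check_packet(pkt):
    if IP not in pkt:
        return
    layer = TCP if TCP in pkt else UDP if UDP in pkt else None
    if layer and pkt[layer].dport == 51243 and pkt[IP].src not in AUTHORIZED_HOSTS:
        print("Unauthorized host %s -> %s:51243" % (pkt[IP].src, pkt[IP].dst))

sniff(filter="tcp port 51243 or udp port 51243", prn=check_packet, store=False)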

Unauthenticated network reconfiguration via TCP/UDP (CVE-2022-26394)

All Wi-Fi battery units tested (versions 16, 17, and 20 D29) allowed for remote unauthenticated changing of the SIGMA GW IP address. The SIGMA GW setting is used for configuring the back-end communication services for the device's operation.

Exploitation

An attacker could accomplish a remote redirect of the SIGMA GW by sending XML command 15 to TCP or UDP port 51243. During testing, only the SIGMA GW IP was found to be remotely changeable using this command. An example of this command and associated structure is shown below:

[Image: example of XML command 15 and its associated structure]

A malicious actor could use this to man-in-the-middle (MitM) all communication initiated by the infusion pump, which could lead to information leakage and/or data manipulation.

Remediation

Organizations using SIGMA Spectrum products should restrict access to the network segments containing the infusion pumps. They should also monitor network traffic for any unauthorized host communicating with the infusion pumps over TCP or UDP port 51243.

UART configuration access to Wi-Fi configuration data (additional finding)

The SIGMA Spectrum infusion pump unit transmits data unencrypted to the Wi-Fi battery unit via universal asynchronous receiver-transmitter (UART). During the power-up cycle of the infusion pump, the first block of data contains the Wi-Fi configuration data. This communication contains the SSID and the 64-character hex PSK.

[Image: Wi-Fi configuration data, including the SSID and PSK, captured from the UART communication]

Exploitation

A malicious actor with physical access to an infusion pump can place a communication shim between the units (i.e., the pump and the Wi-Fi battery) and capture this data during the power-up cycle of the unit.

[Image: data captured via a shim placed between the pump and the Wi-Fi battery unit]
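
For readers who want to see how low the bar is here, reading that UART traffic takes little more than a logic-level serial adapter and a few lines of Python. This is only a sketch using the pyserial library; the device path and baud rate are assumptions on my part, not values from the advisory.

# Sketch: dump whatever crosses the UART lines during the pump's power-up cycle.
# /dev/ttyUSB0 and 115200 baud are placeholders; the real settings would need to
# be confirmed with a logic analyzer.
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    while True:
        data = port.read(256)
        if data:
            print(data.hex(" "))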

Remediation

To help prevent exploitation, organizations should restrict physical access by unauthorized persons to the infusion pumps and associated Wi-Fi battery units.

Note that this is merely an additional finding based on physical, hands-on access to the device. While Baxter has addressed this finding through better decommissioning advice to end users, this particular issue does not rank for its own CVE identifier, as local encryption is beyond the scope of the hardware design of the device.

Disclosure timeline

Baxter is an exemplary medical technology company with an obvious commitment to patient and hospital safety. While medtech vulnerabilities can be tricky and expensive to work through, we’re quite pleased with the responsiveness, transparency, and genuine interest shown by Baxter’s product security teams.

  • April, 2022: Issues discovered by Deral Heiland of Rapid7
  • Wed, April 20, 2022: Issues reported to Baxter product security
  • Wed, May 11, 2022: Update requested from Baxter
  • Wed, Jun 1, 2022: Teleconference with Baxter and Rapid7 presenting findings
  • Jun-Jul 2022: Several follow-up conversations and updates between Baxter and Rapid7
  • Tue, Aug 2, 2022: Coordination tracking over VINCE and more teleconferencing involving Baxter, Rapid7, CERT/CC, and ICS-CERT (VU#142423)
  • Wed, Aug 31, 2022: Final review of findings and mitigations
  • Thu, Sep 8, 2022: Baxter advisory published
  • Thu, Sep 8, 2022: Public disclosure of these issues
  • Thu, Sep 8, 2022: ICS-CERT advisory published


What’s Up, Home? – The Relaxing Breeze

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-the-relaxing-breeze/22031/

Can you monitor a home air conditioner with Zabbix? Of course, you can! By day, I am a monitoring tech lead in a global cyber security company. By night, I monitor my home. Welcome to my weekly blog about how I monitor my home with Zabbix & Grafana and do some weird experiments.

At the time I was writing this blog post, the summer — and thus maybe the heat wave season — was only approaching. For the past two summers, our home has been a hot place to be. Enough is enough, so that day we got an air conditioner. It’s not a very high-end model, but these things currently come with built-in Wi-Fi.

Wouldn’t it be cool to monitor that with Zabbix? Yes.

Wi-Fi, do you read me?

Getting the air conditioner connected to our Wi-Fi was as easy as the manual promised: press the Health button eight times in a row and wait until the air conditioner says “be-be-be-be-be-beep”. Sure enough, that happened, and moments later the AC Freedom app I installed on my iPhone started to show this.

For a normal person, this would be more than enough. For me, this was only the beginning and the next step would be adding the thing to Zabbix.

Encountering headwind

Checking the first things first — no, Zabbix does not seem to support this AC out of the box. No worries, that is not the end of the world, it just slightly slows things down, and also makes things a bit more interesting.

Now that the AC was connected to our Wi-Fi, I first went to check out some data about it from the Wi-Fi admin interface. It revealed that the device contains network hardware by Broadlink. Ah-ha! Search engine, here I come!

Moments later, I found the Broadlink Air Conditioners to MQTT project.

Okay, MQTT it is. That’s a lightweight protocol designed for IoT device communication, and I had absolutely no clue how that worked, as this was the first time I got to use it. It would blow if this step proved to be too cumbersome.

Luckily, thanks to open source and 2022, getting it to run was not that hard.

It’s nearly summer, welcome mosquitos

The aforementioned Broadlink AC to MQTT quickly raised my confidence, as it immediately found my new device. Yes, I can do this!

… that’s nice, but how to use this any further? I could not see any MQTT messages anywhere.

Soon enough, I realized I needed to install an MQTT message broker to catch the messages, and I found Eclipse Mosquitto.

An apt install mosquitto mosquitto-clients and some config-file guessing later, my jaw dropped as I saw this:

Wow, that’s a wind-wind situation. It returns sane values! My next step was then to find out if I can somehow access those URL-like paths with Zabbix.
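
If you want to peek at those messages outside of Zabbix first, a few lines of Python will print everything the bridge publishes to the local broker. This is just a sketch using the paho-mqtt library, and it assumes Mosquitto is running on the same host with its default settings.

# Sketch: print every message the Broadlink-to-MQTT bridge publishes locally.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode(errors="replace"))

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("#")  # listen to all topics to see what the AC reports
client.loop_forever()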

With Zabbix, MQTT is just a breeze

I remembered from some ancient Zabbix Summit that Zabbix 5.x gained Modbus/MQTT support. My Raspberry Pi 4 is running Zabbix 6.0.4, so certainly that part should be covered.

In the end, getting MQTT to run with Zabbix was almost too easy. Zabbix agent 2 has native MQTT support with its mqtt.get active check, so I tried to add an item like this:

And, as this is Zabbix, of course, it works:

Yay! From now on, my home Zabbix can alert me about the AC as well and generate some fancy graphs.

What’s next?

As I’ve got the AC unit recently and this is just the beginning, I still have more things to add later.

  • Add some “Yikes! It’s too hot!” triggers
  • Create a Grafana dashboard
  • Try out if I can adjust the air conditioner settings using Zabbix

Anyway, in the end, this certainly was easier than I expected.

I have been working at Forcepoint since 2014 and monitoring has never been cooler. — Janne Pikkarainen

The post What’s Up, Home? – The Relaxing Breeze appeared first on Zabbix Blog.

Keep Things Cool with Zabbix

Post Syndicated from Laura Schilder original https://blog.zabbix.com/keep-things-cool-with-zabbix/21534/

Do your friends, colleagues or maybe even your significant other have a nasty habit of leaving the fridge half-open causing you a frustrating evening and potentially even ruining your cherished batch of pistachio-flavored ice cream?

With the right thermometer and a little Zabbix knowledge, you can configure Zabbix to keep a watchful eye on the temperature of your fridge and alert you whenever things in your fridge are about to stop being cool.

IoT

The Internet of Things refers to objects that are capable of autonomously transferring data over a network. These objects can be something like a temperature sensor, a smart fridge, or an electric scooter. Even a garbage can or a vending machine equipped with the proper sensors can be an IoT object.

Well, let’s go back to the thermometer that I was talking about. That thermometer is also an IoT device and it uses a specific protocol; for this specific one, we will use an aggregator: The Things Network (TTN).

But why do we need an aggregator? If you plan on monitoring a large number of sensors you will have to establish connections to each of these sensors individually. An aggregator can be used as the central point of communication, instead of directly connecting to each of the sensors.

In this blog post we will be using the following components:

  • Mini hub TBMH100 (the gateway)
  • Dragino LHT65 (the thermometer)

The Things Network

Now, not just any thermometer can connect to the internet. But the thermometer I used is one from The Things Network. The Things Network is open source, just like Zabbix, and works with LoRaWAN. If you do not know what LoRaWAN is just keep reading and I will explain what it is.

LoRaWAN stands for Long Range Wide Area Network, and it's a protocol made for long-distance communication and low power consumption. Certain nodes use this protocol and send information via radio. For Europe, the frequency used for transferring data is 868 MHz. This is how the thermometer sends the temperature to The Things Network.

Before we are able to see the sent values, we have to configure a gateway and add it to The Things Network. After adding a gateway to TTN, the only thing remaining is to add the thermometers. All of this information is also available in The Things Network console. We're going to set up an MQTT connection via The Things Network console and configure it so Zabbix can collect, process, and visualize your IoT data, as well as alert whenever the temperature in our fridge gets too high or too low.

What is MQTT? In short, MQTT is a lightweight network protocol designed for remote locations and for devices with limited resources and bandwidth. It has to run over a transport protocol that provides ordered, lossless, bi-directional connections; typically, TCP/IP connections are used for this. It is also an OASIS standard and an ISO recommendation.

TTN configuration

Let's start by adding a gateway to The Things Network. To do that, you will have to create an account on The Things Stack and own a gateway. But, before we get started, check what kind of gateway you have. We will be using the gateway that is meant to be used inside a building. Once that is done, let's add it to TTN.

Let’s start at the beginning. Open the TTN webpage and log in. Easy as that. Now when you see this screen: click on the Go to gateways button.

After that, you click on the white Claim gateway button. Do not confuse it with the Add gateway button – we need to press Claim gateway.

All the fields you see on the next page will have to be filled in:

As I mentioned before, the frequency should be around 868 MHz. For this example, I will just use the recommended frequency. After that, click on the Claim gateway button. The gateway should work after this. If you do not know what to fill in the form fields, you can find all the information you need on the back of your gateway.

This is what it will look like when you have successfully claimed the gateway:

 

Since we now have a gateway, we can add the thermometer to The Things Network. To do this, we have to go to the Application tab in the console. When we first open the Application tab, it will be empty. We will have to create our own application before we can add the thermometer, and we will do that by pressing the Add application button. Once clicked, you should see the following:

 

After you have created your application, click on it and you will see a screen like this:

There you will have to go to End devices and click on Add devices. It will bring you to a screen like this:

Now, you will see just one drop-down menu, but once you start filling it in, additional menus will show up. In our case, we're using a thermometer from Dragino. After filling in the model and region, the screen should look something like this:

 

For step two, we had to grab the box in which the sensor was shipped. Inside the box is a sticker with all the information that you need. When you have filled in all the fields, click on the Add device button. After adding the device it will look something like this:

 

Now, that's all for adding the thermometers. If everything works, we will just have to set up an MQTT connection between The Things Network and Zabbix. On the TTN side, we have to go to Integrations, and then MQTT. Everything on that page can simply be copied. Generate an API key and copy it; save it, as we will need it later.
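
Before touching Zabbix, it can be worth sanity-checking those MQTT details with a short Python script. This is only a sketch using the paho-mqtt library; the username follows TTN's <application id>@ttn convention and the API key is the one you just generated, both shown here as placeholders.

# Sketch: verify the TTN MQTT credentials work before configuring Zabbix.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode(errors="replace"))

client = mqtt.Client()
client.username_pw_set("your-application-id@ttn", "NNSXS.your-api-key")  # placeholders
client.tls_set()  # the public TTN broker uses TLS on port 8883
client.on_message = on_message
client.connect("eu1.cloud.thethings.network", 8883)
client.subscribe("#")
client.loop_forever()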

Zabbix

After all these steps on The Things Network side, we will finally move to Zabbix. What we will do first in Zabbix is make sure that we can get the information from The Things Network. This will be done via MQTT. For that, we will need Zabbix agent 2. Now there are of course more steps than just that. So, let me explain.

Zabbix MQTT

Let's start by downloading Zabbix agent 2 (if you already have it, you can skip this step). For that, we will use this command:

dnf install zabbix-agent2

Once the agent is installed, we will have to modify the config file:

vim /etc/zabbix/zabbix_agent2.conf

I am using vim, but if you want to use something else, feel free to use another text editor. Once the configuration file has been opened, we will go ahead and change the Hostname parameter. We will be changing it to this:

Hostname=TTN

Don’t forget to start (or restart, if the agent 2 has already been installed) your agent 2 service.

systemctl start zabbix-agent2

Now that we have that out of the way we can start by making a new host. It will be a regular Zabbix host. This is what mine looks like:

Note that the Host name here matches the Hostname parameter which we edited in the previous step. Do you recall when I said that you have to copy all the MQTT information from The Things Network? Well, we will use it here. We will have to make an item that uses the Zabbix agent (active) item type to get the information. Now, for the key, we can select the mqtt key from Zabbix, but we will be missing some of the required parameters. The key will have to look something like this:

mqtt.get[broker,topic,username,password]

In the end, the item itself will look something like this:

In our case, the key looks like this:

mqtt.get[tls://eu1.cloud.thethings.network:8883, #, [email protected], NNSXS.EMK3T5FLBB2YPLYWXLP7BYOG7JHFSBKEUG23BMY.IJSZ4AC475CU5JJOLRJRYLDU6MXEODWCUYIOLZSAWSXP4L32473Q].

To check if it works, just navigate to Monitoring – Latest data, find your host, and you should see the collected data. It should look something like this:

{"v3/[email protected]/devices/eui-a84041a4e10000/up":"{\"end_device_ids\":{\"device_id\":\"eui-a84041a4e1000000\",\"application_ids\":{\"application_id\":\"thermometers\"},\"dev_eui\":\"A84041A4E10000\",\"join_eui\":\"A000000000000100\",\"dev_addr\":\"260B4F08\"},\"correlation_ids\":[\"as:up:01G7CJFS1180WT7M2GHQRWVFKA\",\"gs:conn:01G7A3RFY7CT62SGBH2BGJ7T31\",\"gs:up:host:01G7A3RG2EWWCHEW9HVBQ6KA5A\",\"gs:uplink:01G7CJFRTGY0NV6R4Y8AV9XKGG\",\"ns:uplink:01G7CJFRTHSDCK3DVR7EDGJY5V\",\"rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01G7CJFRTHP5JMRVSNP8ZZR1X1\",\"rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01G7CJFS10A5RAD5SE864Q99R8\"],\"received_at\":\"2022-07-07T14:54:39.137181192Z\",\"uplink_message\":{\"session_key_id\":\"AYGu5fFGW+vxth9cFIw2+g==\",\"f_port\":2,\"f_cnt\":601,\"frm_payload\":\"y/kH5QIoAX//f/8=\",\"decoded_payload\":{\"BatV\":3.065,\"Bat_status\":3,\"Ext_sensor\":\"Temperature Sensor\",\"Hum_SHT\":55.2,\"TempC_DS\":327.67,\"TempC_SHT\":20.21},\"rx_metadata\":[{\"gateway_ids\":{\"gateway_id\":\"gateway7\",\"eui\":\"58A0CBFFFE803D17\"},\"time\":\"2022-07-07T14:54:38.903268098Z\",\"timestamp\":945990219,\"rssi\":-61,\"channel_rssi\":-61,\"snr\":7.5,\"uplink_token\":\"ChYKFAoIZ2F0ZXdheTcSCFigy//+gD0XEMvUisMDGgwIrueblgYQ/fHaugMg+Omki8TiEioMCK7nm5YGEIKO264D\"}],\"settings\":{\"data_rate\":{\"lora\":{\"bandwidth\":125000,\"spreading_factor\":7}},\"coding_rate\":\"4/5\",\"frequency\":\"868500000\",\"timestamp\":945990219,\"time\":\"2022-07-07T14:54:38.903268098Z\"},\"received_at\":\"2022-07-07T14:54:38.929160377Z\",\"consumed_airtime\":\"0.061696s\",\"version_ids\":{\"brand_id\":\"dragino\",\"model_id\":\"lht65\",\"hardware_version\":\"_unknown_hw_version_\",\"firmware_version\":\"1.8\",\"band_id\":\"EU_863_870\"},\"network_ids\":{\"net_id\":\"000013\",\"tenant_id\":\"ttn\",\"cluster_id\":\"eu1\",\"cluster_address\":\"eu1.cloud.thethings.network\"}}}"}

Zabbix LLD with Dependent items

Now, after seeing all that data, you want to be able to read it in a human-friendly way. Well, for that we will use Low-Level Discovery. It will also help add the thermometers to Zabbix.

To achieve our goal we will start by navigating to the Configuration – Hosts page. Select the host that you created earlier. Once there, select Discovery rules at the top. Now we are going to create a new Low-level discovery rule. It will be a dependent item. The master item is the item we made in the previous step. Once you have done that, it should look like so:

But we have not finished yet. We will also need to add a preprocessing step. For the preprocessing step, we need to provide a JavaScript script. The data that has been sent is not 'native' Zabbix LLD data, so we need to make it suitable for Zabbix.

We will use a script like this to format our data:

var lld = [];
var regexp = /@ttn\/devices\/([\w-]+)/g;
var lines = value.split("\n");
var lines_num = lines.length;
for (i = 0; i < lines_num; i++)
{
    var match = regexp.exec(lines);
    var row = {};
    row["{#SENSOR}"] = match[1];
    lld.push(row);
}
return JSON.stringify(lld);

In the script above, we are transforming the data into a format that Zabbix can use. Let's break it down line by line:
Line 1: Declare a new array with name lld
Line 2: Declare a regex with a specific value
Line 3: Let’s split the received value into an array of substrings. Splitting happens on the value “\n” which represents a newline
Line 4: Count the number of lines
Line 5: A For loop to populate the array that is declared in line 1.
Line 7: Match the regex in the lines.
Line 8: Declare an object with the name ‘row’
Line 9: Set the {#SENSOR} key to the first capture group of the 'match' variable
Line 10: Push the row object into the lld array
Line 12: Convert the lld array into a JSON string

After Line 12, you will get something like this returned:

[{"{#SENSOR}":"eui-a84041a4exxxxxxx"},{"{#SENSOR}":"eui-a84041a4eyyyyyyy"}]

Now the data is formatted into the Zabbix LLD format, ready to be parsed.

Once the preprocessing step is added, the rule should be complete. This means that Zabbix will start discovering the thermometers, but no items are created by just adding the LLD rule like we have done so far. We also need to add the item prototypes.

I will use the temperature from the internal sensor as an example here. So, let's start at the beginning and go to Item prototypes. We will add a new item prototype. In the name and key fields, we will use the Low-level discovery macro: {#SENSOR}. The key is arbitrary – we will put our LLD macro as a parameter, to make each item created from the prototypes unique. For units, we will use C, as the value is a temperature in Celsius. When finished, it should look like this:

Now, if you look closely at the screenshot, I also have a tag and a preprocessing step. You can see the tag configuration in the image below. The tag will be used for filtering and for providing additional information – the sensor ID.

As for the item prototype preprocessing step –  it is a little bit harder. Do you remember the data that you got from the first item we made? Well, if you take that and throw in a regex, you can make the preprocessing step. What I did was go to https://regex101.com and paste the complete string we received from the master item and start matching the temperatures.

Once the regex is done, go to the Preprocessing tab in Zabbix. Add one step, and choose Regular Expression as the Name. Now the parameters will be (in case of this thermometer):

TempC_SHT\\":(\d+.\d+)}

and in the output field we will use the first capture group – \1. It should look like this:

If we take a careful look at the data provided by the Master item, there is a “decoded payload”:

decoded_payload\":{\"BatV\":3.056,\"Bat_status\":3,\"Ext_sensor\":\"Temperature Sensor\",\"Hum_SHT\":50.8,\"TempC_DS\":21.75,\"TempC_SHT\":21.95}

From that payload, we are cherry-picking the TempC_SHT value. There are more values to collect here, like battery status, voltage, humidity, etc. This highly depends on the sensors used, of course.
In the Low-Level Discovery rule, we can keep on adding more item prototypes to parse all of these metrics and let the LLD automatically create the items from the prototypes.
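
If the regular expression approach ever feels fragile, the same cherry-picking can be expressed against the JSON structure instead, either with a JSONPath preprocessing step in Zabbix or, outside Zabbix, with a few lines of Python. The sketch below assumes the uplink JSON layout shown in the master item above.

# Sketch: extract per-sensor temperatures from the mqtt.get output shown earlier.
# 'raw' is the JSON object the master item received (topic -> payload string).
import json

def temperatures(raw):
    readings = {}
    for topic, payload in json.loads(raw).items():
        uplink = json.loads(payload)  # each payload is itself a JSON string
        device = uplink["end_device_ids"]["device_id"]
        decoded = uplink["uplink_message"]["decoded_payload"]
        readings[device] = decoded["TempC_SHT"]
    return readings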

After adding the low-level discovery rule and the preprocessing step you will see something like this:

Now, as you can see, multiple items have been created from our prototypes. If you look closely, you will also notice that I get two of everything. This is because the Low-Level Discovery discovered two thermometers.

Conclusion

Now that everything has been configured, we can finally track the temperature of our IoT thermometers. The next time somebody leaves your fridge open, you can find out in time. Cool, right? Well, that's just one of many IoT examples that we can start to monitor – the potential for discovering and monitoring IoT devices is unlimited. If you wish to check out the template used for this example, feel free to visit our GitHub page.

The post Keep Things Cool with Zabbix appeared first on Zabbix Blog.

What’s Up, Home? – Razor-sharp Thinking

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-razor-sharp-thinking/21507/

Can you monitor a Philips OneBlade shaver with Zabbix? Of course, you can! But why do that and how to monitor a dumb device with zero IoT capabilities?

Welcome to my weekly blog: I get my bread and butter by being a monitoring tech lead in a global cyber security company, but I monitor my home for fun with Zabbix & Grafana and do some weird experiments.

Staying Alive

We all know how the battery-operated shavers, toothbrushes and similar devices sound very energetic and trustworthy immediately after you have charged their battery to full. Over time (over not so long time) they start to sound tired, but technically you can still use them. Or, you think you can still use them, but instead, they will betray you and die in the middle of the operation. Zabbix to the rescue!

Sing to me, bad boy

To get an idea about the battery runtime left, I needed to somehow capture the sound frequency and analyze it. The recording part was easy — after I had charged my razor to full, I left it running and recorded the sound with my iPhone Voice Memos.

But how to get the sound frequency? This is the part where the audio engineers of the world can laugh at me in unison.

At first, I tried with Audacity as traditionally it has done all the tricks I possibly need to do with audio. Unfortunately, I could not find a way to accomplish my dream with it, and even if I would have, I fear I would have to manually do something with it, instead of the automated fashion I’m wishing for.

I could see all kinds of frequencies with Audacity, but I was not able to isolate the humming sound of the Philips OneBlade, at least not in a format I could use with Zabbix. Yes, Audacity has macros and some functionality available from the command line, but I abandoned my attempts with it. If you can do stuff like this with Audacity, drop me a note, I'm definitely interested!

Here come the numbers

Then, after a bit of searching, I found aubiopitch. It analyzes the sample and returns a proper heckton of numbers back to you.

Those are not GPS coordinates or lottery numbers. That's a timestamp in seconds and the sound frequency in Hz. And, just by peeking at the file manually, I found that values around 100 Hz, plus or minus something, were constantly present in the file. Yes, my brain has developed a very good pattern-matching algorithm when it comes to log files, as that's what I have been staring at for the last 20+ years.

As my 30+ minute sample contained over 300,000 lines of these numbers, I did not want to bother my poor little home Zabbix with this kind of data volume for my initial analysis. I hate spreadsheet programs, especially with data that spans hundreds of thousands of rows or more, so how to analyze my data? I possibly could have utilized Grafana's CSV plugin, but to make things more interesting (for me, anyway), I called on my old friend gnuplot instead. Well, a friend in the sense that I know it exists and that I occasionally used it two decades ago for simple plotting.

There it is, my big long needle in a haystack! Among some other environmental sounds, aubiopitch did recognize the Philips soundtrack as well! What if I filter out those higher frequencies? Or at least attempt to, my gnuplot-fu is not strong.

Yes, there it is, the upper line steadily coming down. After my first recording, it looks like with a full battery the captured frequency starts from about 115 Hz, and everything goes well until about 93 Hz, but if I started to shave around that point, I had better be quick, as I would only have two to three minutes left before the frequency quickly spirals down.
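
For the curious, the same filtering can be done without gnuplot. Here is a sketch in Python that assumes aubiopitch's two-column output (timestamp in seconds, pitch in Hz), keeps only the rough 80-130 Hz band the razor hums in, and prints an average per 10-second window; the file name is a placeholder.

# Sketch: keep only the razor's hum and average it per 10-second window.
from collections import defaultdict

buckets = defaultdict(list)
with open("oneblade-pitch.txt") as f:
    for line in f:
        ts, hz = (float(x) for x in line.split())
        if 80 <= hz <= 130:          # drop everything but the motor hum
            buckets[int(ts // 10)].append(hz)

for bucket in sorted(buckets):
    avg = sum(buckets[bucket]) / len(buckets[bucket])
    print("%5ds  %6.1f Hz" % (bucket * 10, avg))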

Production show-stoppers

This thing is not in “production” yet, because

  • I need to do more recordings to see if I get similar frequencies each time
  • I need to fiddle with iPhone Shortcuts to make this as automated as possible.

Anyway, I did start building a preliminary Zabbix template with some macros already filled in…

… and I have a connection established between my dear Siri and Zabbix, too; this will be a topic for another blog entry in the future.

I am hoping that I can get Siri to upload the Voice Memo automatically to my Zabbix Raspberry Pi, which would then immediately analyze the data with aubiopitch, maybe with a simple incron hook, and Zabbix would parse the values. That part is yet to be implemented, but I am getting there. It's just numbers, and in the end, I will just point Zabbix to a simple text file to gather its numbers or use zabbix_sender to send in the values. Been there, done that.

I have been working at Forcepoint since 2014 and for this post to happen I needed to use some razor-sharp thinking. — Janne Pikkarainen

The post What’s Up, Home? – Razor-sharp Thinking appeared first on Zabbix Blog.

What’s Up, Home? Welcome to my Zabbixverse

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-welcome-to-my-zabbixverse/21353/

By day, I am a monitoring technical lead in a global cyber security company. By night, I monitor my home with Zabbix and Grafana in very creative ways. But what has Zabbix to do with Blender 3D software or virtual reality? Read on.

Full-stack monitoring is an old concept — in the IT world, it means your service is monitored all the way from physical level (data center environmental status like temperature or smoke detection, power, network connectivity, hardware status…) to operating system status, to your application status, enriched with all kinds of data such as application logs or end-to-end testing performance. Zabbix has very mature support for that, but how about… full house monitoring in 3D and, possibly, in virtual reality?

Slow down, what are you talking about?

The catacombs of my heart do have a place for 3D modeling. I am not a talented 3D artist, not by a long shot, but I have flirted with 3D apps since the Amiga 500 and its Real 3D 1.4, then later on the Amiga 1200 with a legally purchased Tornado 3D and a not-so-legally downloaded Lightwave. With Linux, so after 1999 for me, I used POV-Ray about 20 years ago, and as Blender went open source a long time ago, I have tried it out every now and then.

So, in theory, I can do 3D. In practice, it’s the “Hmm, I wonder what happens if I press this button” approach I use.

Not so slow, get to the point, please

Okay. There are several reasons why I am doing this whole home monitoring thing.

  1. I have been doing IT monitoring for 20+ years, so really, there is not much new for me. Don’t get that wrong — boring is GOOD when it comes to business monitoring. Your business does count on it, and it’s perfect that whatever you need to monitor, you can do it reliably and easily. But for me, it does not challenge my brain or get my creative juices flowing. Monitoring the 3D world sure does.
  2. With my home Zabbix & Grafana, I can get as wild and childish as I ever want. Of course, not so much at work. (Though I admit that at work I did set up an easter egg Grafana dashboard called OnlyFans — it is literally showing how the cooling fans of our servers and other devices are doing).
  3. I want to give y’all new ideas and motivation to take your monitoring to the next level.
  4. I want to help raise Zabbix as a product to a whole new level from traditional IT monitoring to monitoring the environment we all live in — anyway, the future of monitoring will more and more be in the real world, too

2D or not 2D, that is the question

For traditional IT monitoring, 2D interface and 2D alerts are OK, maybe apart from physical rack location visualization, where it definitely helps if a sysadmin can locate a malfunctioning server easily from a picture.

For the Real World monitoring, it is a different story. I'm sure an electrician would appreciate it if the alert contained pictures or animations visualizing the exact location of whatever was broken. The same goes for plumbers, guards, and whoever else needs to get to fix something in huge buildings, fast.

Let’s get to it

Now that you know my motivation, let’s finally get started!

In my case, leaping Zabbix from 2D to 3D meant just a bunch of easy steps:

  1. Model my home in Sweet Home 3D; it’s very easy to use and definitely easier for my back than my wife requesting “could we try out how the sofa would look like over there…?”
  2. Import the Sweet Home 3D object to Blender
  3. In Blender, relabel the interesting objects to match with the names in Zabbix
  4. Hook Zabbix and Blender together with Python and Blender Python API, so Zabbix can change the alerting object somehow for its properties — change material, change color, add a glow effect, make it fire/smoke/explode, whatever
  5. Ask Blender Python API to export the rendered results as PNG images and as X3D files (a sketch of steps 4 and 5 follows this list)
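
To make steps 4 and 5 concrete, here is a minimal sketch of the kind of script Zabbix can ask Blender to run headlessly. The object name, file paths, and the choice of turning the object red are my own assumptions, and depending on the Blender version the X3D exporter add-on may need to be enabled separately.

# Sketch: recolor the alerting object, render a still frame, export X3D.
# Run headlessly, e.g.:  blender --background home.blend --python alert_render.py
import bpy

ALERTING_OBJECT = "LivingRoomTV"          # must match the label given in the scene

obj = bpy.data.objects[ALERTING_OBJECT]
mat = obj.active_material
if mat.use_nodes and "Principled BSDF" in mat.node_tree.nodes:
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (1.0, 0.0, 0.0, 1.0)
else:
    mat.diffuse_color = (1.0, 0.0, 0.0, 1.0)  # viewport display color fallback

scene = bpy.context.scene
scene.render.engine = "BLENDER_EEVEE"     # engine name varies by Blender version
scene.render.filepath = "/usr/share/zabbix/assets/3d/home.png"
bpy.ops.render.render(write_still=True)

# Export the whole scene for the in-browser X3D walkthrough
bpy.ops.export_scene.x3d(filepath="/usr/share/zabbix/assets/3d/home.x3d")
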
Home sweet home

Sweet Home 3D is a relatively easy-to-use home modeling application. It’s free, and already contains a generous bunch of furniture, and with a small sum, you’ll get access to many, many more items.

After a few moments, I had my home modeled in Sweet Home 3D.

 

Next, I exported the file to .obj format, recognized by Blender.

Will it blend?

In Blender, I created a new scene, removed the meme-worthy default cube, and imported the Sweet Home 3D model to Blender.

Oh wow, it worked! Next, I needed to label the interesting items, such as our living room TV to match the names in Zabbix.

You modeled your home. Great! But does this Zabbix → Blender integration work?

Yes, it does. Here is my first “let’s throw in some random objects into a Blender scene and try to manipulate it from Zabbix” attempt before any Sweet Home 3D business.

Fancy? No. Meaningful? Yes. There’s a lot going on in here.

  1. Through Python, Zabbix was able to modify a Blender scene and change some colors to red.
  2. Blender rendered the scene in its headless server mode (without GUI), and saved the resulting PNG still frame.
  3. The script run by Zabbix copied the image so that it is available to the Zabbix UI (in my case, I created an /assets/3d/ directory which contains everything relevant to this experiment).
  4. Zabbix URL widget is showing the image.

My Zabbix is now consulting Blender for every severity >=Average trigger, and I can also run the rendering manually any time I want.

First, here’s the manual refresh.

 

Next, here is the trigger:

 

Static image result

Here is a static PNG rendering produced by the Blender Eevee rendering engine. Like gaming engines, Eevee cuts some corners when it comes to accuracy, but with a powerful GPU it can do wonders in real-time or at least in near-real-time.

The “I am not a 3D artist” part will hit you now hard. Cover your eyes, this will hurt. Here’s the Eevee rendering result.

That green color? No, our home is not like that. I just tried to make this thing look more futuristic, perhaps Matrix-like… but now it looks like… well… like I had used a 3D program. The red Rudolph the Red-Nosed Reindeer nose-like thing? I imagined it would be a neatly glowing red sphere along with the TV glowing, indicating an alert with our TV. Fail for the visual part, but at least the alert logic works! And don't ask why the TV looks so strange.

But you get the point. Imagine if a warehouse/factory/whatever monitoring center would see something like this in their alerts. No more cryptic “Power socket S1F1A255DU not working” alerts, instead, the alert would pinpoint the alert in a visual way.

There was supposed to be an earth-shattering VR! Where’s the VR?

Mark Zuckerberg, be very afraid for your Metaverse, as Zabbixverse will rule the world. Among many other formats, Blender can export its scenes to the X3D format. It's one of the virtual world formats our web browsers support, and it is dead simple to embed inside Zabbix/Grafana. Blender would support WebGL, too, but getting X3D to run only needed the use of the <x3d> tag, so for my experiment, it was super easy.

The video looks crappy because I have not done any texture/light work yet, but the concept works! In the video, it is me controlling the movement.

In my understanding, X3D/WebGL supports VR headsets, too, so in theory you could be observing the status of whatever physical facility you monitor through your VR headset.

Of course, this works in Grafana, too.

How much does this cost to implement?

It's free! I mean, Zabbix is free, Python is free, and Blender is free and open source. If you have some 3D blueprints of your facility in a format Blender can support — it supports plenty — you're all set! Have an engineer or two or ten do the 3D scene labeling work, and pretty soon you will be doing your monitoring in a 3D world.

What are the limitations?

The new/resolved alerts are not updated to the scene in real-time. For PNG files that does not matter much, as those are static and Zabbix can update those as often as needed, but for the interactive X3D files it’s a shame that for now the scene will only be updated whenever you refresh the page, or Zabbix does it for you. I need to learn if I can update X3D properties in real-time instead of a forced page load.

Coming up next week: monitoring Philips OneBlade

Next week I will show you how I monitor a Philips OneBlade shaver for its estimated runtime left. The device does not have any IoT functionality, so how do I monitor it? Tune in to this blog next week at the same Zabbix time.

I have been working at Forcepoint since 2014 and never get bored of inventing new ways to visualize data.

The post What’s Up, Home? Welcome to my Zabbixverse appeared first on Zabbix Blog.

What’s Up, Home? – Observe!

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-observe/21201/

By day, I monitor a global cyber security company for a living. By night, I monitor my home with Zabbix and Grafana. In this weekly blog series, I’m sharing my weird experiments and new ideas on how to utilize monitoring.

On Easter, we were not at home but doing Easter stuff, so that week I did not implement any major new functionality in my home monitoring environment. But while I'm brewing new weird features, here are some bits and pieces of what I have learned about my home, and not shown here earlier.

I’m watching you, TV

We have one of those ‘smart’ TVs, just like about every recent TV happens to be. The one we have is a Samsung 2021 model. And, of course, I monitor it.

On the last two-day graph above, value 1.0 means that our TV is awake and responding to ICMP ping. During the annotated short spikes the TV does not have its screen on, but it is just silently awake and doing something with the network — maybe checking for firmware updates or sending telemetry?

Anyway, it is definitely doing that many times per day. I will need to snoop more closely on what the heck it is doing.

A longer period of responding to ping indicates that we are actually watching the TV (or me playing PS5).

Garage, or not to garage?

At the time I was writing this blog post, spring had finally come, so we were doing some spring cleaning at home; no need for heavy winter jackets in our hallway closet anymore, and so forth. For some items, my wife wondered what the humidity percentage in our garage would be.

Zabbix & Grafana to the rescue! The graph below shows the humidity levels of our living room and garage.

So, our garage definitely is a more humid place, and for now, some humidity-sensitive items were left inside our house instead of the garage.

Don’t get lost, get a map

This part is very much a work in progress and is lacking the majority of the IoT devices we have, but I am also building a visual network map of my home environment. The map below uses the traditional Zabbix network map, but if I manage to pull a rabbit or two out of my hat, during the upcoming weeks you will see something Completely Else. Stay tuned!

Next week I will show you a definitely very weird target to monitor if I just manage to figure out how to do it.

There’s an app for that

But what if I am not at home? Sure, for any serious situation, like a freezer temperature rapidly rising, my Zabbix will e-mail me, but what if I just want to browse around? Using the web interface via iPhone could be done but is definitely not very convenient, so I am using the ZBX Viewer app for iPhone instead. It's handy, it's free, and it works.

I have been working at Forcepoint since 2014 and never get bored of staying up to date about the status of my house. — Janne Pikkarainen

* Please note that this blog post was originally written in April and some events mentioned do not correspond to the actual date at the time of publication.

The post What’s Up, Home? – Observe! appeared first on Zabbix Blog.

What’s Up, Home? – Don’t Forget the Facial Cream

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-dont-forget-the-facial-cream/21063/

Can you monitor the regular use of facial cream with Zabbix? Of course, you can! Here's how. This same method could be very useful for monitoring whether the elderly remember to take their medication, for example.

What the heck?

A little background story. My forehead has a tendency for dry skin, so I should be using facial cream daily. Of course, as a man, I can guarantee you that 100% of the days I remember to use the cream, I apply it, so in practice, this means about 40-50% hit ratio.

As I have lately been adding more monitored targets to my home Zabbix, one night my wife probably thought she was being snarky or funny when she said, "One monitor I could happily receive data about would be how often you remember to use your facial cream."

A monitoring nerd does not take such ideas lightly.

Howdy door sensor, would you like to do some work?

I found a spare magnetic door sensor and a handy box in which to store the cream.

You can see where this is going. This totally beautiful prototype of my Facial Cream Smart Storage Box is now deployed for testing. If I open or close the box, the door sensor status changes, and thus the facial cream mercy countdown timer resets.

How does it work? And does it really work?

The Cozify smart IoT hub is keeping an eye on the magnetic door sensor's last status change. And look, that awesome brown tape does not bother the magnets at all; Cozify reported the status as changed.

Now that I have the Cozify part working, my Zabbix can receive the last change time as Unix time.

On my Grafana, there’s now this absolutely gorgeous new panel, converting the Unix time to the “How long ago the last event happened?” indicator.

So the dashboard part is now working. But that is not all we need to do.

Alerting and escalation

Dashboards and monitoring are not useful at all if proper alerts are not being sent out. I now have this new alert trigger action rule in place.

In other words, if I forget to apply the facial cream, I have a one-hour time window to apply it, or otherwise, the alert gets escalated to my wife.

Will this method work? Is my prototype box reliable? I will tell you next time.

I have been working at Forcepoint since 2014 and never get tired of finding out new areas to monitor. — Janne Pikkarainen

The post What’s Up, Home? – Don’t Forget the Facial Cream appeared first on Zabbix Blog.

Evaluating the Security of an Enterprise IoT Deployment at Domino’s Pizza

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/06/06/evaluating-the-security-of-an-enterprise-iot-deployment-at-dominos-pizza/


Recently, I had a great opportunity to work with Domino’s Pizza to evaluate an internally conceived Internet of Things (IoT)-based business solution they had designed and deployed throughout their US store locations. The goal of this research project was to understand the security implications around a large-scale enterprise IoT project and processes related to:

  • Acquisition, implementation, and deployment
  • Technology and functionality
  • Management and support

Laying the groundwork

I sat down with each of the internal teams involved with this project, and we discussed those key areas and how security was defined and applied within each. I gained valuable new insight into how security should play into the design and construction of a large IoT business solution, especially within the planning and acquisition phases. This opportunity allowed me to see how a security-driven organization like Domino’s approaches a large-scale project like this.

I walked away from this phase of the project with some great takeaways that should be considered on all like-minded projects:

  • Always consider vendor security in your risk planning and modeling
  • Security “must-haves” should map to your organization’s internal security policies

Assessing the security status quo

Also, as part of this research project, I conducted a full ecosystem security assessment, examining all the critical hardware components, operational software, and associated network communications. As with any large-scale enterprise implementation, we did find a few security problems. This is the main reason all projects, even those with security built in from the start, should go through a wide-ranging security assessment to flush out any shortcomings that could be lurking under the hood. Once completed, I delivered a comprehensive report, which the security teams and project developers then used to quickly create solutions for fixing the identified issues.

This also allowed me the chance to observe and discuss the processes and methodologies used by this enterprise organization for building and deploying fixes into production and doing that in a safe way to avoid impacting production.

During a typical security assessment of an enterprise-wide business solution like this, we are reminded of a couple key best-practice items that should always be considered, such as:

  • When testing the security of a new technology, use a holistic approach that targets the entire solution's ecosystem.
  • Conduct regular testing of documented security procedures — security is a moving target, and testing these procedures regularly can help identify deficiencies.

Bringing the idea to life

Once an idea is designed, built, and deployed into production, we have to make sure the deployed solution remains fully functional and secure. To accomplish that, the deployed enterprise IoT solution was moved under a structured management and support plan at Domino's. This support structure was designed, as expected, to help avoid or prevent outages and security incidents that could impact production or lead to loss of services or data, focusing on several key areas.

Again, it was nice to sit down with the various teams involved in the support infrastructure and talk security and to also see how it was not only applied to this specific project, but how the organization applied these same security methodologies across the whole enterprise.

During this final evaluation phase of this project, I was reminded of one of the most critical takeaways that many organizations — unlike Domino’s, who did it correctly — fail to apply: When deploying new embedded technology within your enterprise environment, make sure the technology is properly integrated into your organization’s patch management.

At the conclusion of this research project, I took away a greatly improved understanding of the complexity, difficulties, and security best-practice challenges a large enterprise IoT project could demand. I was pleased to see, and work with, an organization that was up to that challenge and who successfully delivered this project to their business.

If you’d like to read more detail on this security research project, check out my report here.


Announcing Pub/Sub: Programmable MQTT-based Messaging

Post Syndicated from Matt Silverlock original https://blog.cloudflare.com/announcing-pubsub-programmable-mqtt-messaging/


One of the underlying questions that drives Platform Week is “how do we enable developers to build full stack applications on Cloudflare?”. With Workers as a serverless environment for easily deploying distributed-by-default applications, KV and Durable Objects for caching and coordination, and R2 as our zero-egress cost object store, we’ve continued to discuss what else we need to build to help developers both build new apps and/or bring existing ones over to Cloudflare’s Developer Platform.

With that in mind, we’re excited to announce the private beta of Cloudflare Pub/Sub, a programmable message bus built on the ubiquitous and industry-standard MQTT protocol supported by tens of millions of existing devices today.

In a nutshell, Pub/Sub allows you to:

  • Publish event, telemetry or sensor data from any MQTT capable client (and in the future, other client-facing protocols)
  • Write code that can filter, aggregate and/or modify messages as they’re published to the broker using Cloudflare Workers, and before they’re distributed to subscribers, without the need to ferry messages to a single “cloud region”
  • Push events from applications in other clouds, or from on-prem, with Pub/Sub acting as a programmable event router or a hook into persistent data storage (such as R2 or KV)
  • Move logic out of the client, where it can be hard (or risky!) to push updates, or where running code on devices raises the materials cost (CPU, memory), while still keeping latency as low as possible (your code runs in every location).

And there’s likely a long list of things we haven’t even predicted yet. We’ve seen developers build incredible things on top of Cloudflare Workers, and we’re excited to see what they build with the power of a programmable message bus like Pub/Sub, too.

Why, and what is, MQTT?

If you haven’t heard of MQTT before, you might be surprised to know that it’s one of the most pervasive “messaging protocols” deployed today. There are tens of millions (at least!) of devices that speak MQTT today, from connected payment terminals through to autonomous vehicles, cell phones, and even video games. Sensor readings, telemetry, financial transactions and/or mobile notifications & messages are all common use-cases for MQTT, and the flexibility of the protocol allows developers to make trade-offs around reliability, topic hierarchy and persistence specific to their use-case.

We chose MQTT as the foundation for Cloudflare Pub/Sub as we believe in building on top of open, accessible standards, as we did when we chose the Service Worker API as the foundation for Workers, and with our recently announced participation in the Winter Community Group around server-side runtime APIs. We also wanted to enable existing clients an easy path to benefit from Cloudflare’s scale and programmability, and ensure that developers have a rich ecosystem of client libraries in languages they’re familiar with today.

Beyond that, however, we also think MQTT meets the needs of a modern “publish-subscribe” messaging service. It has flexible delivery guarantees, TLS for transport encryption (no bespoke crypto!), a scalable topic creation and subscription model, extensible per-message metadata, and importantly, it provides a well-defined specification with clear error messages.

With that in mind, we expect to support many more “on-ramps” to Pub/Sub: a lot of the best parts of MQTT can be abstracted away from clients who might want to talk to us over HTTP or WebSockets.

Building Blocks

Given the ability to write code that acts on every message published to a Pub/Sub Broker, what does it look like in practice?

Here’s a simple-but-illustrative example of handling Pub/Sub messages directly in a Worker. We have clients (in this case, payment terminals) reporting back transaction data, and we want to capture the number of transactions processed in each region, so we can track transaction volumes over time.

Specifically, we:

  1. Filter on a specific topic prefix for messages we care about
  2. Parse the message for a specific key:value pair as a metric
  3. Write that metric directly into Workers Analytics Engine, our new serverless time-series analytics service, so we can directly query it with GraphQL.

This saves us having to stand up and maintain an external metrics service, configure another cloud service, or think about how it will scale: we can do it all directly on Cloudflare.

# language: TypeScript

async function pubsub(
  messages: Array<PubSubMessage>,
  env: any,
  ctx: ExecutionContext
): Promise<Array<PubSubMessage>> {
  
  for (let msg of messages) {
    // Extract a value from the payload and write it to Analytics Engine
    // In this example, a transactionsProcessed counter that our clients are sending
    // back to us.
    if (msg.topic.startsWith("/transactions/")) {
      // This is non-blocking, and doesn’t hold up our message
      // processing.
      env.TELEMETRY.writeDataPoint({
        // We label this metric so that we can query against these labels
        labels: [`${msg.broker}.${msg.namespace}`, msg.payload.region, msg.payload.merchantId],
        metrics: [msg.payload.transactionsProcessed ?? 0]
      });
    }
  }

  // Return our messages back to the Broker
  return messages;
}

const worker = {
  async fetch(req: Request, env: any, ctx: ExecutionContext) {
    // Critical: you must validate the incoming request is from your Broker
    // In the future, Workers will be able to do this on your behalf for Workers
    // in the same account as your Pub/Sub Broker.
    if (await isValidBrokerRequest(req)) {

      // Parse the incoming PubSub messages
      let incomingMessages: Array<PubSubMessage> = await req.json();
      
      // Pass the message to our pubsub handler, and capture the returned
      // messages
      let outgoingMessages = await pubsub(incomingMessages, env, ctx);

      // Re-serialize the messages and return a HTTP 200 so our Broker
      // knows we’ve successfully handled them
      return new Response(JSON.stringify(outgoingMessages), { status: 200 });
    }

    return new Response("not a valid Broker request", { status: 403 });
  },
};

export default worker;

We can then query these metrics directly using a familiar language: SQL. Our query takes the metrics we’ve written and gives us a breakdown of transactions processed by our payment devices, grouped by merchant (and again, all on Cloudflare):

SELECT
  label_2 as region,
  label_3 as merchantId,
  sum(metric_1) as total_transactions
FROM TELEMETRY
WHERE
  metric_1 > 0
  AND timestamp >= now() - 604800
GROUP BY
  region,
  merchantId
ORDER BY
  total_transactions DESC
LIMIT 10

You could replace or augment the calls to Analytics Engine with any number of examples:

  • Asynchronously writing messages (using ctx.waitUntil) on specific topics to our R2 object storage without blocking message delivery
  • Rewriting messages on-the-fly with data populated from KV, before the message is pushed to subscribers
  • Aggregate messages based on their payload and HTTP POST them to legacy infrastructure hosted outside of Cloudflare

Pub/Sub gives you a way to get data into Cloudflare’s network, filter, aggregate and/or mutate it, and push it back out to subscribers — whether there’s 10, 1,000 or 10,000 of them listening on that topic.

Where are we headed?

As we often like to say: we’re just getting started. The private beta for Pub/Sub is just the beginning of our journey, and we have a long list of capabilities we’re already working on.

Critically, one of our priorities is to cover as much of the MQTT v5.0 specification as we can, so that customers can migrate existing deployments and have them “just work”. Useful capabilities like shared subscriptions that allow you to load-balance messages across many subscribers, wildcard subscriptions (both single- and multi-level) for aggregation use cases, stronger delivery guarantees (QoS), and support for additional authentication modes (specifically, Mutual TLS) are just a few of the things we’re working on.

Beyond that, we’re focused on making sure Pub/Sub’s developer experience is the best it can be, and during the beta we’ll be:

  • Supporting a new set of “pubsub” sub-commands in Wrangler, our developer CLI, so that getting started is as low-friction as possible
  • Building ‘native’ bindings (similar to how Workers KV operates) that allow you to publish messages and subscribe to topics directly from Worker code, regardless of whether the message originates from (or is destined for) a client beyond Cloudflare
  • Exploring more ways to publish & subscribe from non-MQTT based clients, including HTTP requests and WebSockets, so that integrating existing code is even easier.

Our developer documentation will cover these capabilities as we land them.

We’re also aware that pricing is a huge part of developer experience, and are committed to ensuring that there is an accessible and flexible free tier. We want to enable developers to experiment, prototype and solve problems we haven’t thought of yet. We’ll be sharing more on pricing during the course of the beta.

Getting Started

If you want to start using Pub/Sub, sign up for the private beta: we plan to start enabling access within the next month. We’re looking forward to collecting feedback from developers and seeing what folks start to build.

In the meantime, review the brand-new Pub/Sub developer documentation to understand how Pub/Sub works under the hood, the MQTT protocol, and how it integrates with Cloudflare Workers.

Marrying Zabbix and Cozify IoT hub

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/marrying-zabbix-and-cozify-iot-hub/20377/

For those of you who know me, this should not come as a surprise: I absolutely love Zabbix. It gives me the ultimate freedom to monitor whatever I need to monitor and is flexible enough to be able to monitor absolutely everything you can imagine. It’s free, it’s open-source, and scales to whatever needs you might have.

What does a monitoring nerd who is a technical lead for monitoring in a global cyber security company do during his downtime? That’s a silly question, mind you. Of course, he monitors his home with a home Zabbix instance.

Temperatures at our home, measured by Cozify, data collected by Zabbix

Hello, Cozify

We have had a Cozify home automation system in our household since 2017. It is a nice central hub that supports IoT devices from a plethora of vendors and a vast selection of device categories, ranging from Philips Hue lights to motion sensors to cameras to fire alarm systems. You can then configure actions on one device based on events from another: for example, turning on a light if a motion sensor detects movement.

Cozify is a very capable device, but where it definitely falls short is in monitoring and analytics about what’s going on underneath.

As a monitoring addict, that is something I simply cannot stand.

Let’s build a bridge between Cozify and Zabbix

Someone has built an unofficial Python library for communicating with the Cozify API. The library is a bit limited in functionality, the most limiting factor being that it only supports read-only operations. However, for my monitoring purposes, that does not matter, as I only need to read data anyway.

For my initial testing purposes, I wrote a couple of small Python scripts to gather temperature and humidity data from our temperature sensors, and one script to monitor the general availability of the different IoT devices we have around. The scripts run from cron every five minutes, and the results are written to text files that Zabbix reads. Zabbix has master items for the temperature, humidity, and reachability files, and using dependent items, it can populate all the 40+ data points I have now with just three polls.

Benefits of such a project

Other than the cool geek factor, what’s the benefit of monitoring your home IoT hub? There’s plenty!

  • I get to learn all kinds of patterns about our home status: temperatures, reliability of individual devices, and the amount of time any device has been on/off
  • I get notified immediately if a critical device, like a smoke alarm, does not function properly
  • I get notified if the battery level on any battery-operated IoT device is getting low and can react before a device dies
  • I can follow how quickly the battery is draining on some device

Still for me to do

The current implementation is way too manual. It would be possible to utilize Zabbix low-level discovery to parse the JSON received from Cozify, but if I just dump everything from it, it contains all the possible device categories with different parameters: Philips Hue lights report everything from their current brightness and color settings to whether their firmware has been upgraded, while the temperature and motion sensors report back a completely different set of data. That makes creating the monitored items automatically in a sane way a bit difficult.

So, I need to think a bit and figure out how to make my Cozify template more automatic.

I also need to set up a home Grafana instance speaking to Zabbix. Zabbix is excellent at collecting the monitoring data and sending out alerts, but Grafana is the perfect partner for Zabbix to do all the analytics and eye candy.

I have 20+ years of sysadmin/monitoring experience. Forcepoint has been my landing spot since 2014, and there I have been a monitoring technical lead since 2016. Everything Linux/FreeBSD, Zabbix, Grafana and open source in general is close to my heart. So close, in fact, that monitoring is also my hobby and I do weird experiments with Zabbix & Grafana at home. — Janne Pikkarainen

The post Marrying Zabbix and Cozify IoT hub appeared first on Zabbix Blog.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/04/07/lessons-in-iot-hacking-how-to-dead-bug-a-bga-flash-memory-chip/

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Dead-bugging — what is that, you ask? The concept comes from the idea that a memory chip, once it’s flipped over so you can attach wires to it, looks a little like a dead bug on its back.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

So why would we do this for the purposes of IoT hacking? The typical reason is if you want to extract the memory from the device, and you either don’t have a chip reader socket for that chip package type or your chip reader and socket pinouts don’t match the device.

I encounter this issue regularly with Ball Grid Array (BGA) memory devices. BGA devices don’t have legs like the chip shown above, but they do have small pads on the bottom, with small solder balls for attaching the device to a circuit board. The following BGA chip has 162 of these pads — here it is placed on a penny for size comparison.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Sometimes, I encounter memory chips and don’t have a socket for attaching it to my chip reader. Sourcing the correct socket could take months, often from China, and I need to extract the data today. Other times, it’s just not cost-effective to purchase one of these sockets for my lab because I don’t encounter that chip package type very often. However, I do encounter the chip package type shown above all the time on embedded Multi Chip Packages (eMCP), and I have a chip reader for that device type.

Unfortunately, further research on this flash memory chip revealed that it is a Multi-Chip Package (MCP), meaning it does not have a built-in embedded controller, so my chip readers can’t interact with it. Also, I couldn’t find a chip reader socket that was even available to support this. This is where a little research and the dead-bugging method came in handy.

Getting started

The first step was to track down a datasheet for this Macronix memory chip MX63U1GC12HA. Once I located the datasheet, I searched it to identify key characteristics of the chip that would help me match it to another chip package type, which I could target with my chip reader, an RT809H.

Although this MCP chip package has 162 pads on the bottom, most of those aren’t necessary for us to be able to access the flash memory. MCP packages contain both RAM and NAND flash memory, so I only needed to find the pads associated with the NAND flash, along with the ground and power connections.

Next, I identified the correct chip type using the datasheet and the identification number MX63U1GC12HA. Here’s what the components of that number mean:

  • MX = Macronix
  • 63 = NAND + LPDRAM
  • U = NAND Voltage: 1.8V
  • 1G = 1Gig NAND Density
  • C = x8 Bus

Next, the NAND flash pads I needed to identify and connect to were:

  • I/O 0-7 = Data Input/Output x8
  • CLE = Command Latch Enable
  • ALE = Address Latch Enable
  • CE# = Chip Enable
  • WE# = Write Enable
  • RE# = Read Enable
  • WP# = Write Protect
  • R/B# = Ready / Busy Out
  • VCC = Voltage
  • VSSm = Ground
  • PT = Chip Protection Enable

With the datasheet, I also identified the above-listed connections on the actual chip pad surface.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Typically, the hardest part is soldering the wires to these pads. This is the part that often scares most people away, but it looks harder than it really is. To avoid making it any harder than it has to be, I recommend going light on the coffee that morning – a recommendation I often don’t follow myself, which I end up regretting.

I have found one trick that works well to make attaching wires easier. This adds an extra step to the process but will speed things up later and remove much of the frustration. I recommend first attaching BGA balls to pads you need to attach wires to. Since the pads on this MCP chip are only 0.3 mm, I recommend using a microscope. I typically lay the balls by hand — once flux is placed on the chip surface, it’s simple to move the balls onto the pads one at a time and have them stay in place. Of course, this can also be done with solder paste and stencil. So, pick your favorite poison.

Once the balls have been placed on the correct pads, I place the chip in an InfraRed (IR) reflow oven to fix the balls to the pads. The lead-based BGA balls I use are Sn63/Pb37 and should melt at 183°C or 361°F.  I use the following temperature curve set on my IR oven, which I determined using a thermal probe along with some trial-and-error methods. During the reflow process, it’s easy to accidentally damage a chip by overheating it, so take caution. My curve tops out just above 200°C, which has worked well, and I have yet to damage the chips using this curve.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Once the oven has run through its cycle and the chip has cooled down, I clean the chip with alcohol to remove any remaining flux. If all goes well with the reballing process, the chip should have balls attached at each of the required locations, as shown below.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Attaching the wires

The next part is attaching wires to each of these pads. The wire I use for this is 40-gauge magnet wire, which is small enough to be attached to pads that are often 0.25 to 0.35 mm in size. This magnet wire is insulated with a thin coat of clear enamel, which can be problematic when soldering it to very small pads while trying to keep the heat at a reasonable level. To resolve this issue, I burn the enamel insulation away and also coat the end of the wire with a thin coat of solder during that process. To do this, I melt solder onto the end of my solder iron and then stick the end of the magnet wire into the ball of solder on the end of the iron. This method removes the enamel insulation and tins the end of the wire, as shown below.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Once the magnet wire has been tinned, I next cut off the excess tinned area with wire cutters. How much you clip off depends on how big the pads are on the chip you’re attaching it to. The goal is to leave enough to properly solder it but not enough overhanging that could cause it to electrically short to other pads.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

By pre-tinning the wire and adding solder balls to the chip pads, the process of attaching the wires becomes much quicker and less frustrating. To attach the wires, I take the tinned magnet wire and place a small amount of flux on the tinned area. Then, I push the wire against the solder ball on the chip pad I am attaching it to, and with the hot solder iron, I just barely touch the solder ball on the pad – instantly, the wire is attached. I use a micro-tip solder iron and set the heat high, so it is instant when I do this process. An example of this is shown below:

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

For the MX63U1GC12HA MCP chip, I used this process to attach all 17 of the needed wires, as shown below, and then held them in place using E6000 brand glue to prevent accidentally knocking the wires loose from mechanical stress on the solder joints.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Reading the chip

Next, it’s time to figure out how to read this chip to extract the firmware data from it. First, we need to attach the 17 wires to the chip reader. To do this, I custom-built a 48-pin Zero insertion force (ZIF) plug with screw terminals that I could attach to the ZIF socket of my RT809H chip programmer. This jig allows each wire to be attached via the screw terminals to any of the 48 pins as needed.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

How we wire up a dead-bugged memory chip for reading depends on several things.

  • Do we have a datasheet?
  • Does the chip we are dead-bugging come in other package styles?
  • Does the chip reader support the chip we have, and we just don’t have the correct socket?
  • Does the manufacturer of our chip produce an unrelated chip that has a similar memory size, bus width, and layout?

Since I didn’t have a chip reader that supports this 162-ball BGA MCP device, I started looking for another Macronix chip that:

  • Had 48 pins or fewer, so I could wire it up to my chip reader
  • Was a NAND Single Level Cell (SLC) device
  • Had a density of 1Gb
  • Had an 8-bit bus
  • Had an operational voltage of 1.8V

After a little time Googling, followed by digging through several different datasheets, I found an MX30UF1G18AC-TI, which came in a 48-pin TSOP package and appeared to match the key characteristics I was looking for.

Here’s what the name MX30UF1G18AC-TI tells us:

  • MX = Macronix
  • 30 = NAND
  • U = 1.7V to 1.95V
  • F = SLC
  • 1G= 1G-bit
  • 18A= 4-bit ECC with standard feature, x8

The diagrams found in the MX30UF1G18AC datasheet showed the pinout for the TSOP48 NAND memory chip. Using that data, I was able to match each of the required pins to the 162-ball BGA MCP MX63U1GC12HA so I could correctly wire each connection to the 48-pin ZIF socket for my RT809H chip programmer.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

Once all of the connecting wires were properly attached to the screw terminals of my ZIF socket, I selected the MX30UF1G18AC chip from the drop-down on the chip programmer and clicked “read.” As expected, the chip programmer first queried the chip for its ID. If the ID does not match, it will prompt you with “Chip ID does not match,” as shown below.

In this case, I selected “Ignore,” and the device successfully extracted the data from the NAND flash chip. Some chip readers allow you to turn this check off before attempting to read the chip. Also, if the chip you’re reading differs only in package style, the chip ID will probably match.

Lessons in IoT Hacking: How to Dead-Bug a BGA Flash Memory Chip

The perfect solution is always to have all the proper equipment needed to read all memory chips you encounter, but very few pockets are that deep — or maybe the correct socket is months out for delivery, and you need the data from the chip today. In those cases, having the skills to do this work is important.  

I have successfully used this process in a pinch many times to extract firmware from chips when I didn’t have the proper sockets at hand – and in some cases, I didn’t have full datasheets either. If you have not done this, I recommend giving it a try. Expand those soldering skills, and build out test platforms and methods to further simplify the process. Eventually, you may need to use this method, and it’s always better to be prepared.
