Tag Archives: IoT

New Research: The Proliferation of Cellular in IoT

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2024/07/30/new-research-the-proliferation-of-cellular-in-iot/

Researchers explain the trend and argue for deeper understanding

Analysis of Cellular Based Internet of Things (IoT) Technology is a new whitepaper co-authored by Rapid7 principal security researcher Deral Heiland and Thermo Fisher Scientific lead product security researcher Carlota Bindner.

In this new research, the authors dive deep into the fairly recent uptick in the use of cellular communications in IoT-based devices like GPS trackers and certain types of medical equipment. Their main goal is to provide context into the pervasive nature of cellular technology embedded within modern devices all over the world.

They go on to demonstrate the importance of breaking open these IoT devices with the goal of penetration testing (pentesting) the strength of the security — or lack thereof — built into the onboard tech. Absent a Wi-Fi connection, they say, it’s critical that these devices be able to leverage cellular as a backup communications method, particularly in the category of potentially life-saving medical devices.

Testing the Tech

Indeed, 2022 saw shipments of IoT cellular modules grow a substantial 14% year-over-year, signaling the ubiquity of IoT in today’s devices as producers hope the daily-life conveniences the technology enables will continue to propel significant growth in cellular module shipments.

When an industry is experiencing significant growth, it’s important that pentesting teams have the ability to appropriately test the technology for security vulnerabilities. This research helps the Rapid7 pentesting team and others continually examine the technology, test its boundaries, and learn how to keep it safe.

Let’s take a look at some key IoT security testing scenarios and takeaways from this whitepaper.

CAT-M and NB-IoT

Cellular technologies for IoT are often high-priced, despite being extremely common in 2024. CAT-M and NB-IoT have helped to facilitate cellular communications for IoT devices, bringing down costs at scale. Both focus on providing low-power wide area network (LPWAN) connectivity that supports the radio communications used by IoT devices.

According to the paper, CAT-M and NB-IoT are complementary standards that excel in different use cases, each helping enable IoT direct-to-cloud communications via cellular services. There are several subsets of these technologies — such as CAT-M1 and CAT-NB1/CAT-NB2 — and the research makes clear why it’s critical to comprehend how each of these enables cellular-based IoT communications so that practitioners can better secure the devices and tech.

Cellular Modules

The whitepaper then gets into the nitty-gritty of reviewing how the researchers deconstructed several cellular module devices to test how they function and communicate with each other.

From discovering module-based GPS trackers to examining cellular modules in smart camera systems, this highly technical process covers everything from the orientation of cellular modules on circuit boards to the proprietary commands manufacturers can implement for use with their own cellular modules.

What’s Next

To reiterate, it’s vital that pentesting professionals understand as much of this cellular technology as possible in order to effectively test devices that leverage these capabilities. In this way, security is put at the forefront of these marvelous little gadgets that aim to make all of our lives just a little easier.

Ready to learn more? Dive into the deep technical details contained in the whitepaper now.

Monitoring Self-Service Markets with Zabbix and IoT

Post Syndicated from Aurea Araujo original https://blog.zabbix.com/monitoring-self-service-markets-with-zabbix-and-iot/28422/

QU!CK Scan & Go, a startup specializing in self-service markets, required a monitoring system that could allow a comprehensive view of operations. Read on to see how Zabbix provided them with a solution that positively impacted their operations as well as their finances.

The convenience of having access to an establishment supplying staple foods around the clock is the motivating factor behind the rise of QU!CK Scan & Go. Since 2021, QU!CK Scan & Go has been developing self-service mini market systems, available in residential complexes and corporate buildings.

Available 24 hours a day, 7 days a week, the technology developed by QU!CK Scan & Go allows markets to be open at all times, with 100% self-service. Customers select the products they want, confirm the price by scanning a barcode, and complete the purchase in their own app with a credit card or virtual wallet.

QU!CK Scan & Go was the first company in the self-service market segment to operate in Argentina. As of this writing, they have 25 self-service stores located in Argentina and 2 in the United States.

The challenge

With the rapid growth in their business, QU!CK Scan & Go needed to be able to easily visualize operations in order to handle environmental issues and avoid product loss due to external factors. In the event of a power outage, for instance, refrigerators and freezers will fail to function, a problem that may take considerable time and effort to fix.

This scenario isn’t an abstract hypothetical – power outages are a recurring issue in Argentina. In 2021 and 2022, the average length of a power outage was 5 hours. For freezers storing products such as ice cream, frozen processed foods, and other perishable items, that’s more than enough time for the products to thaw and become unusable, resulting in severe financial losses.

The solution

QU!CK Scan & Go’s search for a solution led them to Zabbix by way of CTL Information Technology, a Zabbix Certified Partner in Argentina.  Juan Guido Camaño, CEO of CTL, immediately grasped the fact that Zabbix provided the perfect solution for what QU!CK Scan & Go needed to monitor.

“Zabbix was our first, second and third choice, due to our extensive experience with the tool. We did not believe that there would be any better alternative.”

– Juan Guido Camaño, CEO of CTL

At the beginning of the implementation project, CTL identified all possible variables necessary for monitoring that should generate alarms in the case of an extraordinary event. These included:

  • Power outages
  • Internet connection status
  • Opened doors
  • Ambient and air conditioning temperatures
  • Refrigerator and freezer temperatures

In 2021 and 2022, the team at CTL carried out the proof of concept and the implementation of the tool in the first self-service markets, following a stage-by-stage plan.

First, they had to configure the Zabbix Agent on the monitoring device. After that, they created a standard monitoring model to be used in all establishments, according to data collection and alarm triggering needs. The alarms were subsequently adjusted, with possible responses implemented according to each variable identified. At that point, data visualization was organized in an external system just for reviewing the integrated dashboards.

Thanks to the implementation of IoT devices to control the temperature and the opening and closing of doors, alerts are sent to Zabbix in the event of unusual activity, such as very high or low temperatures, doors opened without supervision, and refrigerator doors open longer than the stipulated time, among other issues.

The results

Since the implementation of the Zabbix project in QU!CK Scan & Go’s self-service markets, a variety of benefits have been apparent, including:

  • Increased control of self-service establishments
  • Faster resolution of incidents
  • Improved visualization of operations
  • Increased availability of services

However, the biggest returns on investment were observed at the financial level. With power outage monitoring and quick corrective actions, losses of perishable products have decreased by 75%.

“Losses of refrigerated products ceased to be an issue due to constant monitoring and immediate alerts in case of incidents during power outages.” – Juan Guido Camaño, CEO of CTL

Additionally, with real-time visualization of operations and business monitoring, the profitability of refrigerated products during power outage incidents has increased by 100%. Currently, QU!CK Scan & Go is the leading company in the self-service market segment in Argentina in terms of turnover, with rapidly increasing brand value.

“In a 100% self-service business model, investments made in incident identification technologies have a direct impact on the company’s results.” – Marcos Acuña, QU!CK Scan & Go

What’s next

While successful, the Zabbix project carried out by CTL and QU!CK Scan & Go is far from finished. The implementation of Zabbix in the company is accelerating at the same rate that new establishments are opened, and the proposal is to continue expanding this monitoring project by completely migrating data visualization to Zabbix.

“Having already managed to ensure the availability of the services associated with QU!CK operations, we are now focusing on the continuous infrastructure optimization.” – Juan Guido Camaño, CEO of CTL

For QU!CK Scan & Go, Zabbix has become much more than an IT infrastructure monitoring provider. Our solutions have improved their business and brought added value to their brand.

“With Zabbix, the return on investment after opening a new location is achieved 50% faster than it used to be.” – Marcos Acuña, Founder of QU!CK Scan & Go

Our goal of promoting seamless services to the technology market together with our partners is most visible in situations like this one, when we’re able to go beyond basic monitoring and position Zabbix as a vital support service for strategic decision making. To find out more about what Zabbix can do for customers in the retail sector, visit us here.

The post Monitoring Self-Service Markets with Zabbix and IoT appeared first on Zabbix Blog.

How PostNL processes billions of IoT events with Amazon Managed Service for Apache Flink

Post Syndicated from Çağrı Çakır original https://aws.amazon.com/blogs/big-data/how-postnl-processes-billions-of-iot-events-with-amazon-managed-service-for-apache-flink/

This post is co-written with Çağrı Çakır and Özge Kavalcı from PostNL.

PostNL is the designated universal postal service provider for the Netherlands and has three main business units offering postal delivery, parcel delivery, and logistics solutions for ecommerce and cross-border solutions. With 5,800 retail points, 11,000 mailboxes, and over 900 automated parcel lockers, the company plays an important role in the logistics value chain. It aims to be the delivery organization of choice by making it as easy as possible to send and receive parcels and mail. With almost 34,000 employees, PostNL is at the heart of society. On a typical weekday, the company delivers an average of 1.1 million parcels and 6.9 million letters across Belgium, the Netherlands, and Luxembourg.

In this post, we describe the legacy PostNL stream processing solution, its challenges, and why PostNL chose Amazon Managed Service for Apache Flink to help modernize their Internet of Things (IoT) data stream processing platform. We provide a reference architecture, describe the steps we took to migrate to Apache Flink, and the lessons learned along the way.

With this migration, PostNL has been able to build a scalable, robust, and extendable stream processing solution for their IoT platform. Apache Flink is a perfect fit for IoT: because it scales horizontally, it can process the sheer volume of data generated by IoT devices. With event time semantics, you can correctly handle events in the order they were generated, even from occasionally disconnected devices.

PostNL is excited about the potential of Apache Flink, and now plans to use Managed Service for Apache Flink with other streaming use cases and shift more business logic upstream into Apache Flink.

Apache Flink and Managed Service for Apache Flink

Apache Flink is a distributed computation framework that allows for stateful real-time data processing. It provides a single set of APIs for building batch and streaming jobs, making it straightforward for developers to work with bounded and unbounded data. Managed Service for Apache Flink is an AWS service that provides a serverless, fully managed infrastructure for running Apache Flink applications. Developers can build highly available, fault-tolerant, and scalable Apache Flink applications with ease and without needing to become an expert in building, configuring, and maintaining Apache Flink clusters on AWS.
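
As a rough illustration of this unified API, the same DataStream pipeline can run as a streaming or a batch job simply by switching the runtime mode. The following is a minimal, hypothetical sketch, not code from PostNL:

    import org.apache.flink.api.common.RuntimeExecutionMode;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class UnifiedApiSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // The same pipeline definition works for bounded (batch) and
            // unbounded (streaming) sources; only the runtime mode changes.
            env.setRuntimeMode(RuntimeExecutionMode.STREAMING); // or RuntimeExecutionMode.BATCH

            env.fromElements("sensor-1", "sensor-2", "sensor-1")
               .map(id -> "seen: " + id)
               .returns(Types.STRING) // help Flink's type extraction for the lambda
               .print();

            env.execute("unified-api-sketch");
        }
    }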

The challenge of real-time IoT data at scale

Today, PostNL’s IoT platform, the Roller Cages solution, tracks more than 380,000 assets with Bluetooth Low Energy (BLE) technology in near real time. The IoT platform was designed to provide availability, geofencing, and bottom state events for each asset by using telemetry sensor data, such as GPS points and accelerometer readings, coming from Bluetooth devices. Those events are used by different internal consumers to make logistical operations straightforward to plan, more efficient, and sustainable.

PostNL Roller cages tracking solution

Tracking this high volume of assets emitting different sensor readings inevitably creates billions of raw IoT events for the IoT platform as well as for the downstream systems. Handling this load repeatedly both within the IoT platform and throughout the downstream systems was neither cost-efficient nor easy to maintain. To reduce the cardinality of events, the IoT platform uses stream processing to aggregate data over fixed time windows. These aggregations must be based on the moment when the device emitted the event. This type of aggregation based on event time becomes complex when messages may be delayed and arrive out of order, which may frequently happen with IoT devices that can get disconnected temporarily.

The following diagram illustrates the overall flow from edge to the downstream systems.

PostNL IoT workflow

The workflow consists of the following components:

  1. The edge architecture includes IoT BLE devices that serve as sources of telemetry data, and gateway devices that connect these IoT devices to the IoT platform.
  2. Inlets contain a set of AWS services such as AWS IoT Core and Amazon API Gateway to collect IoT detections using MQTTS or HTTPS and deliver them to the source data stream using Amazon Kinesis Data Streams.
  3. The aggregation application filters IoT detections, aggregates them for a fixed time window, and sinks the aggregations to the destination data stream (a minimal sketch of this stage follows the list).
  4. Event producers are the combination of different stateful services that generate IoT events such as geofencing, availability, bottom state, and in-transit.
  5. Outlets, including services such as Amazon EventBridge, Amazon Data Firehose, and Kinesis Data Streams, deliver produced events to consumers.
  6. Consumers, which are internal teams, interpret IoT events and build business logic based on them.
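
To make step 3 a little more concrete, the following is a minimal, hypothetical sketch of how an aggregation application can consume the source data stream with Flink’s Kinesis connector; the stream name and region are placeholders, not PostNL’s actual configuration:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
    import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

    public class AggregationAppSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties consumerConfig = new Properties();
            consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "eu-west-1"); // placeholder region
            consumerConfig.setProperty(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

            // Read raw IoT detections from the source Kinesis data stream (step 2),
            // then filter, aggregate over fixed time windows, and sink the results.
            env.addSource(new FlinkKinesisConsumer<>(
                    "source-iot-detections", // hypothetical stream name
                    new SimpleStringSchema(),
                    consumerConfig))
               .print(); // the filtering, windowing, and sink logic would go here

            env.execute("aggregation-app-sketch");
        }
    }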

The core component of this architecture is the aggregation application. This component was originally implemented using a legacy stream processing technology. For several reasons, as we discuss shortly, PostNL decided to evolve this critical component. The journey of replacing the legacy stream processing with Managed Service for Apache Flink is the focus of the rest of this post.

The decision to migrate the aggregation application to Managed Service for Apache Flink

As the number of connected devices grows, so does the necessity for a robust and scalable platform capable of handling and aggregating massive volumes of IoT data. After thorough analysis, PostNL opted to migrate to Managed Service for Apache Flink, driven by several strategic considerations that align with evolving business needs:

  • Enhanced data aggregation – Using Apache Flink’s strong capabilities in real-time data processing enables PostNL to efficiently aggregate raw IoT data from various sources. The ability to extend the aggregation logic beyond what was provided by the current solution can unlock more sophisticated analytics and more informed decision-making processes.
  • Scalability – The managed service provides the ability to scale your application horizontally. This allows PostNL to handle increasing data volumes effortlessly as the number of IoT devices grows. This scalability means that data processing capabilities can expand in tandem with the business.
  • Focus on core business – By adopting a managed service, the IoT platform team can focus on implementing business logic and develop new use cases. The learning curve and overhead of operating Apache Flink at scale would have diverted valuable energies and resources of the relatively small team, slowing down the adoption process.
  • Cost-effectiveness – Managed Service for Apache Flink employs a pay-as-you-go model that aligns with operational budgets. This flexibility is particularly beneficial for managing costs in line with fluctuating data processing needs.

Challenges of handling late events

Common stream processing use cases require aggregating events based on when they were generated. This is called event time semantics. When implementing this type of logic, you may encounter the problem of delayed events, in which events reach your processing system late, long after other events generated around the same time.

Late events are common in IoT due to reasons inherent to the environment, such as network delays, device failures, temporarily disconnected devices, or downtime. IoT devices often communicate over wireless networks, which can introduce delays in transmitting data packets. And sometimes they may experience intermittent connectivity issues, resulting in data being buffered and sent in batches after connectivity is restored. This may result in events being processed out of order—some events may be processed several minutes after other events that were generated around the same time.

Imagine you want to aggregate events generated by devices within a specific 10-second window. If events can be several minutes late, how can you be sure you have received all events that were generated in those 10 seconds?

A simple implementation may just wait for several minutes, allowing late events to arrive. But this method means that you can’t calculate the result of your aggregation until several minutes later, increasing the output latency. Another solution would be waiting a few seconds, and then dropping any events arriving later.

Increasing latency or dropping events that may contain critical information are not palatable options for the business. The solution must be a good compromise, a trade-off between latency and completeness.

Apache Flink offers event time semantics out of the box. In contrast to other stream processing frameworks, Flink offers multiple options for dealing with late events. We dive into how Apache Flink deals with late events next.

A powerful stream processing API

Apache Flink provides a rich set of operators and libraries for common data processing tasks, including windowing, joins, filters, and transformations. It also includes over 40 connectors for various data sources and sinks, including streaming systems like Apache Kafka, Amazon Managed Streaming for Apache Kafka, and Kinesis Data Streams; databases; and file systems and object stores like Amazon Simple Storage Service (Amazon S3).

But the most important characteristic for PostNL is that Apache Flink offers different APIs at different levels of abstraction. You can start with a higher level of abstraction, SQL, or the Table API. These APIs abstract streaming data as more familiar tables, making them easier to learn for simpler use cases. If your logic becomes more complex, you can switch to the lower level of abstraction of the DataStream API, where streams are represented natively, closer to the processing happening inside Apache Flink. If you need the finest-grained level of control over how each single event is handled, you can switch to the Process Function.

A key learning has been that choosing one level of abstraction for your application is not an irreversible architectural decision. In the same application, you can mix different APIs, depending on the level of control you need at that specific step.

Scaling horizontally

To process billions of raw events and grow with the business, the ability to scale was an essential requirement for PostNL. Apache Flink is designed to scale horizontally, distributing processing and application state across multiple processing nodes, with the ability to scale out further when the workload grows.

For this particular use case, PostNL had to aggregate the sheer volume of raw events with similar characteristics over time, to reduce their cardinality and make the data flow manageable for the other systems downstream. These aggregations go beyond simple transformations that handle one event at a time. They require a framework capable of stateful stream processing. This is exactly the type of use case Apache Flink was designed for.

Advanced event time semantics

Apache Flink emphasizes event time processing, which enables accurate and consistent handling of data with respect to the time it occurred. By providing built-in support for event time semantics, Flink can handle out-of-order events and late data gracefully. This capability was fundamental for PostNL. As mentioned, IoT generated events may arrive late and out of order. However, the aggregation logic must be based on the moment the measurement was actually taken by the device—the event time—and not when it’s processed.

Resiliency and guarantees

PostNL had to make sure no data sent from the device is lost, even in case of failure or restart of the application. Apache Flink offers strong fault tolerance guarantees through its distributed snapshot-based checkpointing mechanism. In the event of failures, Flink can recover the state of the computations and achieve exactly-once semantics of the result. For example, each event from a device is never missed nor counted twice, even in the event of an application failure.
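
In code, this guarantee rests on Flink’s checkpointing mechanism, sketched below for a self-managed setup; note that Managed Service for Apache Flink configures and manages checkpointing through the application settings rather than in code, so treat this as an illustration only:

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointingSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Snapshot all operator state every 60 seconds. On failure, Flink
            // restores the latest snapshot and replays the source from that
            // point, so each device event is reflected exactly once in the result.
            env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);

            // ... pipeline definition and env.execute() would follow ...
        }
    }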

The journey of choosing the right Apache Flink API

A key requirement of the migration was reproducing exactly the behavior of the legacy aggregation application, as expected by the downstream systems that can’t be modified. This introduced several additional challenges, in particular around windowing semantics and late event handling.

As we have seen, in IoT, events may be out of order by several minutes. Apache Flink offers two high-level concepts for implementing event time semantics with out-of-order events: watermarks and allowed lateness.

Apache Flink provides a range of flexible APIs with different levels of abstraction. After some initial research, Flink SQL and the Table API were discarded. These higher levels of abstraction provide advanced windowing and event time semantics, but couldn’t provide the fine-grained control PostNL needed to reproduce exactly the behavior of the legacy application.

The lower level of abstraction of the DataStream API also offers windowing aggregation capabilities, and allows you to customize the behaviors with custom triggers, evictors, and handling late events by setting an allowed lateness.
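
For illustration, a windowed aggregation using these primitives might look like the following sketch; the Detection type, the 10-second window, and the out-of-orderness and lateness bounds are hypothetical choices, not PostNL’s production settings:

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WindowingSketch {
        // Hypothetical raw detection: device identifier, reading, and the
        // timestamp the device itself wrote on the event (the event time).
        public static class Detection {
            public String deviceId;
            public double value;
            public long eventTimeMillis;
        }

        public static void apply(DataStream<Detection> detections) {
            detections
                // Watermarks: assume events arrive at most 2 minutes out of order.
                .assignTimestampsAndWatermarks(
                    WatermarkStrategy.<Detection>forBoundedOutOfOrderness(Duration.ofMinutes(2))
                        .withTimestampAssigner((d, ts) -> d.eventTimeMillis))
                .keyBy(d -> d.deviceId)
                // Aggregate per device over fixed 10-second event-time windows.
                .window(TumblingEventTimeWindows.of(Time.seconds(10)))
                // Keep window state so events up to 5 minutes late still
                // update the window result instead of being dropped.
                .allowedLateness(Time.minutes(5))
                .sum("value") // placeholder aggregation
                .print();
        }
    }

With allowed lateness, a late event re-fires the window and emits an updated result, which downstream systems must be prepared to handle.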

Unfortunately, the legacy application was designed to handle late events in a peculiar way. The result was a hybrid event time and processing time logic that couldn’t be easily reproduced using high-level Apache Flink primitives.

Fortunately, Apache Flink offers a further lower level of abstraction, the ProcessFunction API. With this API, you have the finest-grained control on application state, and you can use timers to implement virtually any custom time-based logic.

PostNL decided to go in this direction. The aggregation was implemented using a KeyedProcessFunction that provides a way to perform arbitrary stateful processing on keyed streams—logically partitioned streams. Raw events from each IoT device are aggregated based on their event time (the timestamp written on the event by the source device), and the result of each window is emitted based on processing time (the current system time).
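
A heavily simplified sketch of this pattern follows; the field names and time constants are hypothetical rather than PostNL’s actual implementation, and Detection is the hypothetical input type from the earlier windowing sketch:

    import org.apache.flink.api.common.state.MapState;
    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Keyed by device ID. Events are counted per event-time window, and
    // results are emitted on a processing-time timer.
    public class WindowedAggregator extends KeyedProcessFunction<String, Detection, String> {

        private static final long WINDOW_MS = 10_000L;     // hypothetical 10 s event-time windows
        private static final long EMIT_DELAY_MS = 30_000L; // hypothetical processing-time grace period

        private transient MapState<Long, Long> countPerWindow; // window start -> event count
        private transient MapState<Long, Long> windowForTimer; // timer timestamp -> window start

        @Override
        public void open(Configuration parameters) {
            countPerWindow = getRuntimeContext().getMapState(
                    new MapStateDescriptor<>("countPerWindow", Types.LONG, Types.LONG));
            windowForTimer = getRuntimeContext().getMapState(
                    new MapStateDescriptor<>("windowForTimer", Types.LONG, Types.LONG));
        }

        @Override
        public void processElement(Detection d, Context ctx, Collector<String> out) throws Exception {
            // The window is chosen by *event time*: the timestamp the device
            // wrote on the detection, not the time it arrived.
            long windowStart = d.eventTimeMillis - (d.eventTimeMillis % WINDOW_MS);
            Long count = countPerWindow.get(windowStart);
            countPerWindow.put(windowStart, count == null ? 1L : count + 1L);

            if (count == null) {
                // First event seen for this window (or a late event re-opening it):
                // schedule the emission a fixed *processing-time* delay from now.
                long emitAt = ctx.timerService().currentProcessingTime() + EMIT_DELAY_MS;
                ctx.timerService().registerProcessingTimeTimer(emitAt);
                windowForTimer.put(emitAt, windowStart);
            }
        }

        @Override
        public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
            Long windowStart = windowForTimer.get(timestamp);
            if (windowStart == null) {
                return;
            }
            out.collect("device=" + ctx.getCurrentKey()
                    + " windowStart=" + windowStart
                    + " count=" + countPerWindow.get(windowStart));
            // Clean up both state entries; leaving them behind is exactly the
            // kind of state leak discussed later in this post.
            countPerWindow.remove(windowStart);
            windowForTimer.remove(timestamp);
        }
    }

The processing-time emission is what keeps results timely even when a quiet or disconnected device would otherwise stall event-time progress, while late events can still re-open a window and produce an updated result.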

This fine-grained control finally allowed PostNL to reproduce exactly the behavior expected by the downstream applications.

The journey to production readiness

Let’s explore the journey of migrating to Managed Service for Apache Flink, from the start of the project to the rollout to production.

Identifying requirements

The first step of the migration process focused on thoroughly understanding the existing system’s architecture and performance metrics. The goal was to provide a seamless transition to Managed Service for Apache Flink with minimal disruption to ongoing operations.

Understanding Apache Flink

PostNL needed to familiarize themselves with the Managed Service for Apache Flink application and its stream processing capabilities, including built-in windowing strategies, aggregation functions, the differences between event time and processing time, and finally the KeyedProcessFunction and mechanisms for handling late events.

Different options for event time logic and late events were considered, using primitives provided by Apache Flink out of the box. The biggest requirement was to reproduce exactly the behavior of the legacy application. The ability to switch to a lower level of abstraction helped. Using the finest-grained control allowed by the ProcessFunction API, PostNL was able to handle late events exactly as the legacy application did.

Designing and implementing ProcessFunction

The business logic was designed using ProcessFunction to emulate the peculiar behavior of the legacy application in handling late events without excessively delaying the initial results. PostNL decided to use Java for the implementation, because Java is the primary language for Apache Flink. Apache Flink allows you to develop and test your application locally, in your preferred integrated development environment (IDE), using all the available debug tools, before deploying it to Managed Service for Apache Flink. Java 11 with Maven was used for the implementation. For more information about IDE requirements, refer to Getting started with Amazon Managed Service for Apache Flink (DataStream API).

Testing and validation

The following diagram shows the architecture used to validate the new application.

Testing architecture

To validate the behavior of the ProcessFunction and late event handling mechanisms, integration tests were designed to run both the legacy application and the Managed Service for Flink application in parallel (Steps 3 and 4). This parallel execution allowed PostNL to directly compare the results generated by each application under identical conditions. Multiple integration test cases push data to the source stream (2) in parallel (7) and wait until their aggregation window is complete, then they pull the aggregated results from the destination stream to compare (8). Integration tests are automatically triggered by the CI/CD pipeline after deployment of the infrastructure is complete. During the integration tests, the primary focus was on achieving data consistency and processing accuracy between the legacy application and the Managed Service for Flink application. The output streams, aggregated data, and processing latencies were compared to validate that the migration didn’t introduce any unexpected discrepancies. For writing and running the integration tests, Robot Framework, an open source automation framework, was utilized.

After the integration tests pass, there is one more validation layer: end-to-end tests. Similar to the integration tests, end-to-end tests are automatically invoked by the CI/CD pipeline after the deployment of the platform infrastructure is complete. This time, multiple end-to-end test cases send data to AWS IoT Core (1) in parallel (9) and check the aggregated results from the destination S3 bucket (5, 6) dumped from the output stream to compare (10).

Deployment

PostNL decided to run the new Flink application in shadow mode. The new application ran for some time in parallel with the legacy application, consuming exactly the same inputs, and sending output from both applications to a data lake on Amazon S3. This allowed them to compare the results of the two applications using real production data, and also to test the stability and performance of the new one.

Performance optimization

During migration, the PostNL IoT platform team learned how the Flink application can be fine-tuned for optimal performance, considering factors such as data volume, processing speed, and efficient late event handling. A particularly interesting aspect was verifying that the state size wasn’t growing unbounded over the long term. A risk of using the finest-grained control of ProcessFunction is state leakage. This happens when your implementation, directly controlling the state in the ProcessFunction, misses some corner cases where a state entry is never deleted. This causes the state to grow unbounded. Because streaming applications are designed to run continuously, an expanding state can degrade performance and eventually exhaust memory or local disk space.
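
One general safety net worth knowing about here (a standard Flink facility, not necessarily what PostNL used) is state time-to-live, which lets the state backend expire entries that explicit cleanup logic might miss:

    import org.apache.flink.api.common.state.MapStateDescriptor;
    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.api.common.typeinfo.Types;

    public class StateTtlSketch {
        // Returns a state descriptor whose entries expire automatically,
        // as a backstop against cleanup corner cases in a ProcessFunction.
        public static MapStateDescriptor<Long, Long> descriptorWithTtl() {
            MapStateDescriptor<Long, Long> descriptor =
                    new MapStateDescriptor<>("countPerWindow", Types.LONG, Types.LONG);

            StateTtlConfig ttl = StateTtlConfig.newBuilder(Time.hours(6)) // hypothetical TTL
                    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                    .build();
            descriptor.enableTimeToLive(ttl);
            return descriptor;
        }
    }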

With this phase of testing, PostNL found the right balance of application parallelism and resources—including compute, memory, and storage—to process the normal daily workload profile without lag, and handle occasional peaks without over-provisioning, optimizing both performance and cost-effectiveness.

Final switch

After running the new application in shadow mode for some time, the team decided the application was stable and emitting the expected output. The PostNL IoT platform finally switched over to production and shut down the legacy application.

Key takeaways

Among the several learnings gathered in the journey of adopting Managed Service for Apache Flink, some are particularly important and are proving key when expanding to new and diverse use cases:

  • Understand event time semantics – A deep understanding of event time semantics is crucial in Apache Flink for accurately implementing time-dependent data operations. This knowledge makes sure events are processed correctly relative to when they actually occurred.
  • Use the powerful Apache Flink API – Apache Flink’s API allows for the creation of complex, stateful streaming applications beyond basic windowing and aggregations. It’s important to fully grasp the extensive capabilities offered by the API to tackle sophisticated data processing challenges.
  • With power comes more responsibility – The advanced functionality of Apache Flink’s API brings significant responsibility. Developers must make sure applications are efficient, maintainable, and stable, requiring careful resource management and adherence to best practices in coding and system design.
  • Don’t mix event time and processing time logic – Combining event time and processing time for data aggregation presents unique challenges. It prevents you from using higher-level functionalities provided out of the box by Apache Flink. The lowest levels of abstraction among the Apache Flink APIs allow for implementing custom time-based logic, but require careful design to achieve accurate and timely results, alongside extensive testing to validate performance.

Conclusion

In the journey of adopting Apache Flink, the PostNL team learned how the powerful Apache Flink APIs allow you to implement complex business logic. The team came to appreciate how Apache Flink can be used to solve many diverse problems, and they are now planning to extend it to more stream processing use cases.

With Managed Service for Apache Flink, the team was able to focus on the business value and implementing the required business logic, without worrying about the heavy lifting of setting up and managing an Apache Flink cluster.

To learn more about Managed Service for Apache Flink and choosing the right managed service option and API for your use case, see What is Amazon Managed Service for Apache Flink. To experience hands-on how to develop, deploy, and operate Apache Flink applications on AWS, see the Amazon Managed Service for Apache Flink Workshop.


About the Authors

Çağrı Çakır is the Lead Software Engineer for the PostNL IoT platform, where he manages the architecture that processes billions of events each day. As an AWS Certified Solutions Architect Professional, he specializes in designing and implementing event-driven architectures and stream processing solutions at scale. He is passionate about harnessing the power of real-time data, and dedicated to optimizing operational efficiency and innovating scalable systems.

Ozge KavalciÖzge Kavalcı works as Senior Solution Engineer for the PostNL IoT platform and loves to build cutting-edge solutions that integrate with the IoT landscape. As an AWS Certified Solutions Architect, she specializes in designing and implementing highly scalable serverless architectures and real-time stream processing solutions that can handle unpredictable workloads. To unlock the full potential of real-time data, she is dedicated to shaping the future of IoT integration.

Amit Singh works as a Senior Solutions Architect at AWS with enterprise customers on the value proposition of AWS, and participates in deep architectural discussions to make sure solutions are designed for successful deployment in the cloud. This includes building deep relationships with senior technical individuals to enable them to be cloud advocates. In his free time, he likes to spend time with his family and learn more about everything cloud.

Lorenzo Nicora works as Senior Streaming Solutions Architect at AWS helping customers across EMEA. He has been building cloud-centered, data-intensive systems for several years, working in the finance industry both through consultancies and for fintech product companies. He has used open-source technologies extensively and contributed to several projects, including Apache Flink.

Helpful tools to get started in IoT Assessments

Post Syndicated from Tommy Yowell original https://blog.rapid7.com/2024/06/18/helpful-tools-to-get-started-in-iot-assessments/

The Internet of Things (IoT) can be a daunting field to get into. With many different tools and products available on the market, it can be confusing to even know where to start. Having performed dozens of IoT assessments, I felt it would be beneficial to compile a basic list of items that are essential for getting started in testing embedded devices. The tools covered in this post are primarily used to interact with the debug interfaces of embedded devices; however, many of them have multiple functions, from reading data from a memory chip to removing components from the physical circuit board. I would like to note that neither I, nor Rapid7, benefit in any way from the sale of any of these products. We honestly believe they are useful tools for any beginner.

1) Serial Debugger

One of the most used items when it comes to IoT testing is a device for interfacing with the low-speed interfaces available on embedded devices. Gaining access to the debug interface on an embedded device is the easiest way to get a look under the hood of how the device is operating. One of the most popular and readily available devices on the market currently is the Tigard.

The Tigard is a great open-source tool that has support for all the commonly used interfaces you might encounter on modern-day embedded devices. It has support for Universal Asynchronous Receiver-Transmitter (UART), Joint Test Action Group (JTAG), Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), and Serial Wire Debug (SWD) connections. This device allows you to connect to various serial consoles or even extract the contents of commonly found flash memory chips. It is powered by a USB-C connection and also has the ability to select commonly used voltage supplies to power components when needed.
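
To give a flavor of what interacting with a UART console through an adapter like this looks like in code, here is a minimal Java sketch using the open-source jSerialComm library; the device path and the 115200 8N1 settings are assumptions (a common default on embedded consoles) and will vary by target:

    import com.fazecast.jSerialComm.SerialPort;

    public class UartConsoleSketch {
        public static void main(String[] args) {
            // Assumed device path; on Linux a USB serial adapter such as the
            // Tigard typically enumerates as a /dev/ttyUSB* device.
            SerialPort port = SerialPort.getCommPort("/dev/ttyUSB0");
            port.setComPortParameters(115200, 8, SerialPort.ONE_STOP_BIT, SerialPort.NO_PARITY);
            port.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, 1000, 0);

            if (!port.openPort()) {
                System.err.println("Could not open serial port");
                return;
            }
            try {
                // Dump whatever the target prints (e.g., boot messages) to stdout.
                byte[] buffer = new byte[1024];
                while (true) {
                    int n = port.readBytes(buffer, buffer.length);
                    if (n > 0) {
                        System.out.print(new String(buffer, 0, n));
                    }
                }
            } finally {
                port.closePort();
            }
        }
    }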

Link: https://www.crowdsupply.com/securinghw/tigard

2) PCBite Probes

A tool that saves a ton of time when it comes to connecting to serial interfaces and on-board components is a set of PCBite probes. Without these probes, you would often have to resort to soldering on header pins or trying to attach to onboard components using probe connectors.

The starter-level probe set includes 4 hands-free probes, a set of PCB holders, a magnetic base, and accessories. Oftentimes, embedded devices contain small components on the circuit board that are not easily accessible due to their size. These probes allow quick, solder-free connections to be made to embedded devices. All you need to do is position the spring-loaded probes on areas of the circuit board and connect the included DuPont wires to either a logic analyzer or a serial debugger to interface with the target device. The included circuit board holders are a nice touch that ensures the circuit board is kept firmly in position while working.

Link: https://sensepeek.com/pcbite-20

3) Rework Station

While working with embedded devices, you might run into scenarios that involve removing small components from the device for offline analysis. There are many options for rework stations on the internet, all with various levels of price and functionality. A model that hits the sweet spot of price and functionality is the Aoyue 968A+ Professional SMD Digital Hot Air Rework Station.

This rework station includes a number of tools to make any reworking job easy in one simple package. It includes a soldering iron, hot air rework gun, vacuum pickup tool, and a fume extractor. There are many times when performing embedded testing that it is necessary to either solder wires onto connections or remove components from the board for data extraction. The 70-watt soldering iron and 550-watt hot air gun provide plenty of power for quick soldering jobs and component rework.

Link: https://www.amazon.com/Aoyue-968A-Digital-Rework-Station/dp/B006FA481G?th=1

4) Logic Analyzer

Another important tool to have on hand when testing embedded devices is a logic analyzer. Many times, you will find that the debug port on an embedded device is not labeled on the circuit board. That is when a logic analyzer comes in handy, letting you identify what the various components on the board are without unnecessary guesswork. Logic analyzers capture and decode signals found on the board to identify protocols such as UART, SPI, and I2C. There are many on the market, but the sweet spot for price and functionality would be the Saleae Logic 8.

Saleae offers many different models of logic analyzers that come in at different price points. Typically, the base model, which supports 8 channels at a max speed of 100 MS/s, is sufficient for the majority of use cases; however, they do offer additional models that support a larger number of channels at higher speeds. Saleae includes the Logic 2 software, which allows you to seamlessly interact with the device, identify protocols, and decode signals on the board.

Link: https://usd.saleae.com/products/saleae-logic-8

As we’ve explored in this blog post, there are many options on the market for conducting detailed analysis of embedded devices. Many of these tools are available at different price points and offer various levels of functionality and ease of interacting and interfacing with embedded devices. The goal of this guide is not to provide a comprehensive list of all available options, but rather to cover the basic tools used to begin your IoT journey.

Free network flow monitoring for all enterprise customers

Post Syndicated from Chris Draper original https://blog.cloudflare.com/free-network-monitoring-for-enterprise


A key component of effective corporate network security is establishing end-to-end visibility across all traffic that flows through the network. Every network engineer needs a complete overview of their network traffic to confirm their security policies work, to identify new vulnerabilities, and to analyze any shifts in traffic behavior. Often, it’s difficult to build out effective network monitoring as teams struggle with problems like configuring and tuning data collection, managing storage costs, and analyzing traffic across multiple visibility tools.

Today, we’re excited to announce that a free version of Cloudflare’s network flow monitoring product, Magic Network Monitoring, is available to all Enterprise Customers. Every Enterprise Customer can configure Magic Network Monitoring and immediately improve their network visibility in as little as 30 minutes via our self-serve onboarding process.

Enterprise Customers can visit the Magic Network Monitoring product page, click “Talk to an expert”, and fill out the form. You’ll receive access within 24 hours of submitting the request. Over the next month, the free version of Magic Network Monitoring will be rolled out to all Enterprise Customers. The product will automatically be available by default without the need to submit a form.

How it works

Cloudflare customers can send their network flow data (either NetFlow or sFlow) from their routers to Cloudflare’s network edge.

Magic Network Monitoring will pick up this data, parse it, and instantly provide insights and analytics on your network traffic. These analytics include traffic volume over time in bytes and packets, top protocols, sources, destinations, ports, and TCP flags.

Dogfooding Magic Network Monitoring during the remediation of the Thanksgiving 2023 security incident

Let’s review a recent example of how Magic Network Monitoring improved Cloudflare’s own network security and traffic visibility during the Thanksgiving 2023 security incident. Our security team needed a lightweight method to identify malicious packet characteristics in our core data center traffic. We monitored for any network traffic sourced from or destined to a list of ASNs associated with the bad actor. Our security team set up Magic Network Monitoring and established visibility into our first core data center within 24 hours of the project kick-off. Today, Cloudflare continues to use Magic Network Monitoring to monitor for traffic related to bad actors and to provide real-time traffic analytics on more than 1 Tbps of core data center traffic.

Magic Network Monitoring – Traffic Analytics

Monitoring local network traffic from IoT devices

Magic Network Monitoring also improves visibility on any network traffic that doesn’t go through Cloudflare. Imagine that you’re a network engineer at ACME Corporation, and it’s your job to manage and troubleshoot IoT devices in a factory that are connected to the factory’s internal network. The traffic generated by these IoT devices doesn’t go through Cloudflare because it is destined to other devices and endpoints on the internal network. Nonetheless, you still need to establish network visibility into device traffic over time to monitor and troubleshoot the system.

To solve the problem, you configure a router or other network device to securely send encrypted traffic flow summaries to Cloudflare via an IPSec tunnel. Magic Network Monitoring parses the data, and instantly provides you with insights and analytics on your network traffic. Now, when an IoT device goes down, or a connection between IoT devices is unexpectedly blocked, you can analyze historical network traffic data in Magic Network Monitoring to speed up the troubleshooting process.

Monitoring cloud network traffic

As cloud networking becomes increasingly prevalent, it is essential for enterprises to invest in visibility across their cloud environments. Let’s say you’re responsible for monitoring and troubleshooting your corporation’s cloud network operations which are spread across multiple public cloud providers. You need to improve visibility into your cloud network traffic to analyze and troubleshoot any unexpected traffic patterns like configuration drift that leads to an exposed network port.

To improve traffic visibility across different cloud environments, you can export cloud traffic flow logs from any virtual device that supports NetFlow or sFlow to Cloudflare. In the future, we are building support for native cloud VPC flow logs in conjunction with Magic Cloud Networking. Cloudflare will parse this traffic flow data and provide alerts plus analytics across all your cloud environments in a single pane of glass on the Cloudflare dashboard.

Improve your security posture today in less than 30 minutes

If you’re an existing Enterprise customer, and you want to improve your corporate network security, you can get started right away. Visit the Magic Network Monitoring product page, click “Talk to an expert”, and fill out the form. You’ll receive access within 24 hours of submitting the request. You can begin the self-serve onboarding tutorial, and start monitoring your first batch of network traffic in less than 30 minutes.

Over the next month, the free version of Magic Network Monitoring will be rolled out to all Enterprise Customers. The product will be automatically available by default without the need to submit a form.

If you’re interested in becoming an Enterprise Customer, and have more questions about Magic Network Monitoring, you can talk with an expert. If you’re a free customer, and you’re interested in testing a limited beta of Magic Network Monitoring, you can fill out this form to request access.

Privacy, Security, and Connected Devices: Key Takeaways From CES 2024

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2024/01/18/privacy-security-and-connected-devices-key-takeaways-from-ces-2024/

The topic of data privacy has become increasingly relevant in our age of smart technology. With everything becoming connected, including our homes, workplaces, cities, and even our cars, those who develop this technology are obligated to identify consumers’ expectations for privacy and then find the best ways to meet those expectations. This of course includes determining how to best secure the data with which these technologies interact. As you can imagine, accomplishing these requirements is no easy feat.

Yes, connected technology developers have their work cut out for them, and that’s why CES 2024 included a panel to discuss this very topic: “Safeguarding Your Sanctuary: Expectations for Data Privacy in the Smart Home Era.” I had the privilege of being a part of this four-person panel, and if you weren’t in the room with us, here’s your chance to get some of the key takeaways from our discussion.

Putting the Consumer’s Needs First

What do consumers expect? The answer to this question is not black and white because individual consumers have different views of what privacy means to them. Therefore, defining a baseline that puts control of much of this data back in the hands of the consumer becomes critical.

That said, if consumers are going to have the ability to make their own data decisions then it’s important that easily understood mechanisms for managing data privacy are embedded within their smart technology. The greater technology community should also do its part to educate consumers on the overall importance of privacy and security — and the role they play in ensuring it for themselves.

Another example of putting the consumer’s needs first is when vendors have an online presence where they share details about their security and privacy policies and processes, along with a point of contact so that security researchers and consumers can report potential security issues within a product. The vendor’s website is also a perfect place for them to step in and play a role in educating consumers on privacy and security topics. I pointed out that if a consumer is researching product brands for purchase and a vendor has nothing to say about their privacy policies or their security program, then I typically recommend steering away from that product brand.

The Do’s and Don’ts of Data Collection and Sharing

User data collection and sharing is a central theme in consumer privacy and data security, and our CES panel discussed this at length. Consumer opt-in for data sharing is becoming the rule rather than the exception, and our panel agreed with this practice.

One good example of data sharing in which many consumers would choose to opt in is home security vendors sharing customer data with insurance companies, thereby allowing for the consumer to potentially get a discount on their homeowner’s insurance premiums.

We also discussed data collected by the product vendor for the purpose of improving product performance and capabilities. This process should be expected, but we also pointed out that vendors should have a data retention policy and process in place that includes purging data past a certain age. For one, most data typically loses value over time as it relates to product enhancement purposes; if this data isn’t purged, it could create a higher level of risk for the vendor should the data be stolen in a breach. Also, collecting and storing data that may not have any apparent business value is a risky move that vendors should avoid.

Outsmarting Connected Devices

Where do smart devices go to die… or to be reborn? The fact is, many consumers don’t always consider the serious privacy and security implications, which explains why over the last five years more than 30% of the previously used Internet of Things (IoT) devices I have purchased from eBay for research and training purposes were delivered to me still containing consumer data, including product account passwords and Wi-Fi pre-shared key data.

Consumers need to ensure that they do a factory reset on the devices they are disposing of. Not only that, in today’s smart home and smart car scenarios, consumers need to be extra mindful of the connected devices they’re using that will change hands. Selling your home or car means more than just turning over the keys; it means factory resetting anything that’s interacted with your personal data. Vendors can also play a role here by properly documenting the processes for factory resetting their products and also making sure those processes are easy for a consumer to perform.

Genie Aladdin Connect Retrofit Garage Door Opener: Multiple Vulnerabilities

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2024/01/03/genie-aladdin-connect-retrofit-garage-door-opener-multiple-vulnerabilities/

Rapid7, Inc. (Rapid7) discovered vulnerabilities in the Aladdin Connect retrofit kit garage door opener and Android mobile application produced by Genie. The affected products are:

  • Aladdin Garage door smart retrofit kit, Model ALDCM
  • Android Mobile application ALADDIN Connect, Version 5.65 Build 2075

Rapid7 initially reported these issues to Overhead Door — the parent company of The Genie Company — on August 22, 2023. Since then, members of our research team have worked alongside the vendor to discuss the impact, resolution, and a coordinated response for these vulnerabilities.

Product description

The Aladdin Connect garage door opener (retrofit kit) is a smart IoT solution that allows standard electric garage doors to be upgraded to support smart technology for remote access and the use of mobile applications for opening and closing the garage door.

Credit

The vulnerabilities in Genie Aladdin Connect retrofit garage door opener and mobile application were discovered by Deral Heiland, Principal IoT Researcher at Rapid7. They are being disclosed in accordance with Rapid7’s vulnerability disclosure policy after coordination with the vendor.

Vendor statement

Trusted for generations by millions of homeowners, The Genie Company is committed to security, and we collaborate with valued researchers, such as Rapid7, to respond to and resolve vulnerabilities on behalf of our customers.

Exploitation and remediation

This section details the potential for exploitation and our remediation guidance for the issues discovered and reported by Rapid7, so that defenders of this technology can gauge the impact of, and mitigations around, these issues appropriately.

Android Application Insecure Storage (CVE-2023-5879) – FIXED

While examining the Android mobile application, Aladdin Connect, for general security issues, Rapid7 found that the user’s password was stored in clear text in the following file:

  • /data/data/com.geniecompany.AladdinConnect/shared_prefs/com.genie.gdocntl.MainActivity.xml

The persistence of this data was tested by logging out and rebooting the device. Typically, logging out and rebooting a mobile device leads to this kind of data being purged from the device. In this case, neither the file nor its contents were purged. Figure 2 is a copy of the file contents after logout and reboot:

Figure 2: Clear text Stored User Credentials

Exploitation

An attacker with physical access to the user’s smartphone (i.e., via a lost or stolen phone) could potentially extract this critical data, allowing access to the user’s service account to control the garage door opener.

Remediation

To mitigate this vulnerability, users should set a password or PIN code on their mobile devices to restrict access.

Additional Note from Vendor

This vulnerability is tied to the biometric capability (touch or face recognition).

Mitigation: Update to the latest app version available in the Play Store (v5.73).

Cross-site Scripting (XSS) injected into Aladdin Connect garage door opener (Retrofit-Kit) configuration setup web server console via broadcast SSID name (CVE-2023-5880)

When the Aladdin Connect device is placed into Wi-Fi configuration mode, the user web interface used for configuring the device is vulnerable to XSS injection via broadcast SSID names containing HTML and/or JavaScript.

Exploitation

This XSS attack via the SSID injection method can be carried out by running a software-based Wi-Fi access point that broadcasts HTML or JavaScript as the SSID name, such as:

  • </script><svg onload=alert(1)>

An example of this is shown in Figure 3, using airbase-ng to broadcast the HTML and/or JavaScript code:

Figure 3: SSID Name Injection Method

In the example shown in Figure 4, a simple alert box is triggered on the Aladdin base unit’s Wi-Fi configuration webpage from the above SSID name. Also, the image on the right of Figure 4 shows the actual web page source delivered to the end user. No user interaction is needed to trigger this; the victim only needs to view the web page during configuration mode.

Figure 4: XSS Injection using SSID Injection Method

Also, a denial of service (DoS) of the Wi-Fi configuration page can be accomplished by simply broadcasting an SSID containing </script>, preventing the web page from being used to configure the device’s setup. This corrupted web page is shown in Figure 5:

Figure 5: Corrupted Wi-Fi Configuration Page

Remediation

To mitigate this vulnerability, users should avoid running setup if any oddly named SSIDs are being broadcast in the general vicinity, such as SSIDs containing HTML markup and/or JavaScript code in their names.

Also, in general, the mobile application can be used to set up and configure the garage door opener. This will avoid any direct interaction with the vulnerable “Garage Door Control Setup” configuration page.

Additional Notes from the Vendor

This is a very low-impact vulnerability with minimal risk. This can only occur when the owner places the device in Wi-Fi configuration mode for a limited period and the intruder operates within 2.4 GHz band range during that limited configuration period. The device will not be impacted by the misconfiguration if that were to occur, and it is fully capable of recovering from misconfiguration. The device cannot be operated with a misconfigured SSID, as the device can only be claimed by the owner using the mobile app. There is no vulnerability in the mobile app, which is the approved mode of device provisioning.

Mitigation: Use the mobile app to configure the device.

Unauthenticated access allowed to web interface for “Garage Door Control Module Setup” page (CVE-2023-5881) – FIXED

This vulnerability allows a user with network access to connect to the Aladdin Connect device web server’s “Garage Door Control Module Setup” page and alter the garage door’s connected Wi-Fi SSID settings without authenticating.

Exploitation

The device allows unauthenticated access to the Garage Door Control Module Setup configuration page on TCP port 80. This allows anyone with network access to reconfigure the Wi-Fi settings without being challenged to authenticate. A sample of this access to the configuration web page is shown in Figure 6:

Figure 6: Unauthenticated Configuration Services Access Port 80
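
As a quick, hedged way to confirm this behavior from the same network, you could probe the page with curl; the device’s IP address is a placeholder to substitute for your environment:

  • curl -i http://<device-ip>/

A 200 response containing the setup page, with no authentication challenge, demonstrates the exposed configuration interface.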

Remediation

To prevent exploitation, users should only attach the Aladdin garage door smart retrofit kit to a network they own and control. Also, access to this network should not be allowed from any other network source, such as the Internet.

Additional Notes from the Vendor

This is a very low-impact vulnerability with minimal risk. This can only occur when the intruder has access to the same local network as the retrofit kit (i.e., uses the same network router), so the attack vector is limited to local. This web interface is not accessible from the internet. The device cannot be operated with a misconfigured SSID, as the device can only be claimed using the mobile app that an owner would use.

Mitigation: Update the Retrofit device to the latest software version, 14.1.1. Fix was automatically updated on all online devices as of December 2023. Please reach out to customer service to confirm if your device has the update.

Authenticated user access to other users’ data via service API – FIXED

An authenticated user can gain unauthorized access to other users’ data by querying the following API using a different device ID than their own.

  • https://pxdqkls7aj.execute-api.us-east-1.amazonaws.com/Android/devices/879267
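
As a sketch of the request pattern (the bearer-token authentication header is an assumption, and the device ID shown is arbitrary):

  • curl -H 'Authorization: Bearer <session-token>' https://pxdqkls7aj.execute-api.us-east-1.amazonaws.com/Android/devices/<other-device-id>

Substituting a device ID other than your own returned that device’s data.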

Here are sample fields potentially viewable using this method:

Figure 1: Enumeration of User Data

Additional Notes from the Vendor

This was resolved immediately after our internal penetration testing detected the issue. This happened because of a recent software update. The fix was applied to the API on 07/25/2023.

Mitigation: None

There’s One Last Gift Under the Tree, It’s Hands-On IoT!

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/12/27/theres-one-last-gift-under-the-tree-its-hands-on-iot/

It’s the holiday season and since we’re in a giving mood we thought we’d surprise our loyal readers with a fun, hands-on hardware exercise to enjoy during some well-earned downtime.

But first, a little background. Every year Rapid7 has a pretty solid presence at DefCon in Las Vegas. This year was no exception. One of the cornerstones of our DefCon experience is participating in the IoT Village. Deral Heiland, our Principal Security Researcher for IoT, takes attendees through each of the steps of breaking into a particular piece of IoT hardware. And every year we release his talk (with a few additions) for those who couldn’t make it to Vegas for the conference.

What we have here is this year’s Hands-On IoT presentation on hacking an IP camera over Universal Asynchronous Receiver/Transmitter (UART). It’s Deral’s original presentation with some added details and context. In this paper, Deral takes you step by step through the process, offering insight into how UART and U-Boot operate, as well as some troubleshooting techniques should your attempts not work as seamlessly as Deral’s.

Typically, we would release Deral’s presentation in a series of blog posts over a few weeks. But this year we decided to spare y’all the suspense each week and release it as one comprehensive paper. We hope you enjoy reading it as much as we enjoyed making it and we wish you all the best this holiday season.

Click here to download the paper.

Is That Smart Home Technology Secure? Here’s How You Can Find Out.

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2023/10/30/is-that-smart-home-technology-secure-heres-how-you-can-find-out/

As someone who likes the convenience of smart home Internet of Things (IoT) technology, I am regularly on the lookout for products that meet my expectations while also considering security and privacy concerns. Smart technology should be treated no differently than other products we buy as consumers, like purchasing an automobile, for example. In the case of automobiles, we search for the vehicle that meets our visual and performance expectations but that will also keep us and our families safe. With that said, shouldn’t we also seek smart home technologies that are secure and protect our privacy?

I can’t tell you which solution will work for your specific case, but I can give you some pointers around technology security to help you do that research and determine which solution best meets your needs while helping you stay secure. Many of these recommendations will work no matter what IoT product you’re looking to purchase, so I recommend taking the time to perform some of these basic product security research steps.

The first thing I recommend is to visit the vendor site and search to see what they have to say about their products’ security. Also, do they have a vulnerability disclosure program (VDP)? If an organization that manufactures and sells IoT technology doesn’t have much to say about their products’ security or an easy way for you or someone else to report a security issue, then I highly recommend you move on.

This would indicate that product security probably doesn’t matter to them as much as it should. I also say this to the product vendors out there: If you don’t take product security seriously enough to help educate us consumers on why your products are the best when it comes to security, then why should we buy your products?

Next, I always recommend searching the Common Vulnerabilities and Exposures (CVE) database and the Internet for the product you’re looking to buy and/or the vendor’s name. What you find is often very telling in terms of how an organization handles security vulnerability disclosure and follow-up patching of its products.

The existence of a vulnerability in an IoT product isn’t necessarily a bad thing; we’re always going to find vulnerabilities within IoT products. The question we’re looking to answer with this search is: How does this vendor handle reported vulnerabilities? For example, do they patch them quickly, does it take months (or years!) for them to react, or will they ultimately do nothing? If there is no vulnerability information published on a specific IoT product, it may be that no one has bothered to test the security of the product. It’s also possible that the vendor has silently patched its issues and never issued any CVEs.

It is unlikely, but not impossible, for a product to never contain a vulnerability. Over the years I’ve encountered products where I was unsuccessful in finding any issues; however, failing to find vulnerabilities within a product doesn’t mean they couldn’t possibly exist.

Recently, I became curious to learn how vendors that produce and/or retrofit garage door openers stack up in terms of security, so I followed the research process discussed above. I took a look at multiple vendors to see whether any of them follow my recommendations. The sad part is that practically none of them even mention the word “security” on their websites. One clear exception was Tuya, a global IoT hardware and IoT software-as-a-service (SaaS) organization.

When I examined the Tuya website, I quickly located their security page and it was full of useful information. On this page, Tuya points out their security policies, standards, and compliance. Along with having a VDP, they also run a bug bounty program. Bug bounty programs allow researchers to work with a vendor to report security issues – and get paid to do it. Tuya’s bug bounty information is located at the Tuya Security Response Center. Vendors take note: This is how an IoT product vendor should present themselves and their security program.

In closing, consumers, if you’re looking to spend your hard-earned money, please take the time to do some basic research to see if the vendor has a proactive security program. Also, vendors, remember that consumers are becoming more aware and concerned about product security. If you want your product to rise to the status of “best solution around,” I highly recommend you start taking product security seriously as well as share details and access to your security program for your business and products. This data will help consumers make more informed decisions on which product best meets their needs and expectations.

Evolving cyber threats demand new security approaches – The benefits of a unified and global IT/OT SOC

Post Syndicated from Stuart Gregg original https://aws.amazon.com/blogs/security/evolving-cyber-threats-demand-new-security-approaches-the-benefits-of-a-unified-and-global-it-ot-soc/

In this blog post, we discuss some of the benefits and considerations organizations should think through when looking at a unified and global information technology and operational technology (IT/OT) security operations center (SOC). Although this post focuses on the IT/OT convergence within the SOC, you can use the concepts and ideas discussed here when thinking about other environments such as hybrid and multi-cloud, Industrial Internet of Things (IIoT), and so on.

The scope of assets has vastly expanded as organizations transition to remote work and as interconnectivity increases through the Internet of Things (IoT) and edge devices, such as cyber-physical systems, coming online from around the globe. For many organizations, the IT and OT SOCs have been separate, but there is a strong argument for convergence, which provides better context for the business outcomes of being able to respond to unexpected activity. In the ten security golden rules for IIoT solutions, AWS recommends deploying security audit and monitoring mechanisms across OT and IIoT environments, collecting security logs, and analyzing them using security information and event management (SIEM) tools within a SOC. SOCs are used to monitor, detect, and respond; this has traditionally been done separately for each environment. In this blog post, we explore the benefits and potential trade-offs of converging these environments in the SOC. Although organizations should carefully consider the points raised throughout this blog post, the benefits of a unified SOC outweigh the potential trade-offs: visibility into the full threat chain propagating from one environment to another is critical for organizations as daily operations become more connected across IT and OT.

Traditional IT SOC

Traditionally, the SOC was responsible for security monitoring, analysis, and incident management of the entire IT environment within an organization—whether on-premises or in a hybrid architecture. This traditional approach has worked well for many years and ensures the SOC has the visibility to effectively protect the IT environment from evolving threats.

Note: Organizations should be aware of the considerations for security operations in the cloud which are discussed in this blog post.

Traditional OT SOC

Traditionally, OT, IT, and cloud teams have worked on separate sides of the air gap, as described in the Purdue model. This can result in siloed OT, IIoT, and cloud security monitoring solutions, creating potential gaps in coverage or missing context that could otherwise have improved the response capability. To realize the full benefits of IT/OT convergence, the IIoT, IT, and OT teams must collaborate effectively to provide a broad perspective and the most effective defense. The convergence trend applies to newly connected devices and to how security and operations work together.

As organizations explore how industrial digital transformation can give them a competitive advantage, they’re using IoT, cloud computing, artificial intelligence and machine learning (AI/ML), and other digital technologies. This increases the potential threat surface that organizations must protect and requires a broad, integrated, and automated defense-in-depth security approach delivered through a unified and global SOC.

Without full visibility and control of traffic entering and exiting OT networks, the operations function might not be able to get full context or information that can be used to identify unexpected events. If a control system or connected assets such as programmable logic controllers (PLCs), operator workstations, or safety systems are compromised, threat actors could damage critical infrastructure and services or compromise data in IT systems. Even in cases where the OT system isn’t directly impacted, the secondary impacts can result in OT networks being shut down due to safety concerns over the ability to operate and monitor OT networks.

The SOC helps improve security and compliance by consolidating key security personnel and event data in a centralized location. Building a SOC is significant because it requires a substantial upfront and ongoing investment in people, processes, and technology. However, the value of an improved security posture generally outweighs these costs.

In many OT organizations, operators and engineering teams may not be used to focusing on security; in some cases, organizations set up an OT SOC that’s independent from their IT SOC. Many of the capabilities, strategies, and technologies developed for enterprise and IT SOCs apply directly to the OT environment, such as security operations (SecOps) and standard operating procedures (SOPs). While there are clearly OT-specific considerations, the SOC model is a good starting point for a converged IT/OT cybersecurity approach. In addition, technologies such as a SIEM can help OT organizations monitor their environment with less effort and time to deliver maximum return on investment. For example, by bringing IT and OT security data into a SIEM, IT and OT stakeholders share access to the information needed to complete security work.

Benefits of a unified SOC

A unified SOC offers numerous benefits for organizations. It provides broad visibility across the entire IT and OT environments, enabling coordinated threat detection, faster incident response, and immediate sharing of indicators of compromise (IoCs) between environments. This allows for better understanding of threat paths and origins.

Consolidating data from IT and OT environments in a unified SOC can bring economies of scale with opportunities for discounted data ingestion and retention. Furthermore, managing a unified SOC can reduce overhead by centralizing data retention requirements, access models, and technical capabilities such as automation and machine learning.

Operational key performance indicators (KPIs) developed within one environment can be used to enhance another, promoting operational efficiency such as reducing mean time to detect security events (MTTD). A unified SOC enables integrated and unified security, operations, and performance, which supports comprehensive protection and visibility across technologies, locations, and deployments. Sharing lessons learned between IT and OT environments improves overall operational efficiency and security posture. A unified SOC also helps organizations adhere to regulatory requirements in a single place, streamlining compliance efforts and operational oversight.

By using a security data lake and advanced technologies like AI/ML, organizations can build resilient business operations, enhancing their detection and response to security threats.

Creating cross-functional teams of IT and OT subject matter experts (SMEs) helps bridge the cultural divide and foster collaboration, enabling the development of a unified security strategy. Implementing an integrated and unified SOC can improve the maturity of industrial control systems (ICS) for IT and OT cybersecurity programs, bridging the gap between the domains and enhancing overall security capabilities.

Considerations for a unified SOC

There are several important aspects of a unified SOC for organizations to consider.

First, the separation of duty is crucial in a unified SOC environment. It’s essential to verify that specific duties are assigned to individuals based on their expertise and job function, allowing the most appropriate specialists to work on security events for their respective environments. Additionally, the sensitivity of data must be carefully managed. Robust access and permissions management is necessary to restrict access to specific types of data, maintaining that only authorized analysts can access and handle sensitive information. You should implement a clear AWS Identity and Access Management (IAM) strategy following security best practices across your organization to verify that the separation of duties is enforced.

Another critical consideration is the potential disruption to operations during the unification of IT and OT environments. To promote a smooth transition, careful planning is required to minimize any loss of data, visibility, or disruptions to standard operations. It’s crucial to recognize the differences in IT and OT security. The unique nature of OT environments and their close ties to physical infrastructure require tailored cybersecurity strategies and tools that address the distinct missions, challenges, and threats faced by industrial organizations. A copy-and-paste approach from IT cybersecurity programs will not suffice.

Furthermore, the level of cybersecurity maturity often varies between the IT and OT domains. Investment in cybersecurity measures might differ, resulting in OT cybersecurity being relatively less mature than IT cybersecurity. This discrepancy should be considered when designing and implementing a unified SOC. Baselining the technology stack from each environment, defining clear goals, and carefully architecting the solution can help ensure this discrepancy is accounted for. After the solution has moved into the proof-of-concept (PoC) phase, you can start testing for readiness to move the convergence to production.

You also must address the cultural divide between IT and OT teams. Lack of alignment between an organization’s cybersecurity policies and procedures with ICS and OT security objectives can impact the ability to secure both environments effectively. Bridging this divide through collaboration and clear communication is essential. This has been discussed in more detail in the post on managing organizational transformation for successful IT/OT convergence.

Unified IT/OT SOC deployment

Figure 1 shows the deployment that would be expected in a unified IT/OT SOC. This is a high-level view of a unified SOC. In part 2 of this post, we will provide prescriptive guidance on how to design and build a unified and global SOC on AWS using AWS services and AWS Partner Network (APN) solutions.

Figure 1: Unified IT/OT SOC architecture

The parts of the IT/OT unified SOC are the following:

Environment: There are multiple environments, including a traditional IT on-premises organization, OT environment, cloud environment, and so on. Each environment represents a collection of security events and log sources from assets.

Data lake: A centralized place for data collection, normalization, and enrichment to verify that raw data from the different environments is standardized into a common schema. The data lake should support data retention and archiving for long-term storage.

Visualize: The SOC includes multiple dashboards based on organizational and operational needs. Dashboards can cover scenarios for multiple environments including data flows between IT and OT environments. There are also specific dashboards for the individual environments to cover each stakeholder’s needs. Data should be indexed in a way that allows humans and machines to query the data to monitor for security and performance issues.

Security analytics: Security analytics are used to aggregate and analyze security signals, generate higher-fidelity alerts, and contextualize OT signals against concurrent IT signals and against threat intelligence from reputable sources.

Detect, alert, and respond: Alerts can be set up for events of interest based on data across both individual and multiple environments. Machine learning should be used to help identify threat paths and events of interest across the data.

Conclusion

Throughout this blog post, we’ve talked through the convergence of IT and OT environments from the perspective of optimizing your security operations. We looked at the benefits and considerations of designing and implementing a unified SOC.

Visibility into the full threat chain propagating from one environment to another is critical for organizations as daily operations become more connected across IT and OT. A unified SOC is the nerve center for incident detection and response and can be one of the most critical components in improving your organization’s security posture and cyber resilience.

If unification is your organization’s goal, you must fully consider what this means and design a plan for what a unified SOC will look like in practice. Running a small proof of concept and migrating in steps often helps with this process.

In the next blog post, we will provide prescriptive guidance on how to design and build a unified and global SOC using AWS services and AWS Partner Network (APN) solutions.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Stuart Gregg

Stuart enjoys providing thought leadership and being a trusted advisor to customers. In his spare time, Stuart can be seen either training for an Ironman or snacking.

Ryan Dsouza

Ryan is a Principal IIoT Security Solutions Architect at AWS. Based in New York City, Ryan helps customers design, develop, and operate more secure, scalable, and innovative IIoT solutions using AWS capabilities to deliver measurable business outcomes. Ryan has over 25 years of experience in multiple technology disciplines and industries and is passionate about bringing security to connected devices.

What’s Up, Home? – Does your phone battery drain faster in summertime?

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-does-your-phone-battery-drain-faster-in-summertime/26084/

Can you verify if your phone battery drains faster during summer compared to other seasons with Zabbix? Of course, you can!

I have always felt that during summer my phone battery drains faster than during the darker seasons. This would only be logical, as in summer the phone display must be brighter than in the darker seasons. Additionally, in summer we tend to be outdoors more, so instead of home Wi-Fi the phone is using mobile data. On the other hand, during wintertime if you are outdoors, the cold weather might affect battery life as well.

But how severe is the drainage difference? Let’s check it out!

Analyzing the data

In June, my iPhone’s average battery level has been 51%.

In April, it was around 67%.

In March, about 71%.

What does this prove?

Well, not necessarily much, as this is not a very scientific method. However, my day-to-day phone usage does not vary much; if anything, during March/April the drainage should have been worse, as I have a tendency to participate in one daily afternoon meeting from outdoors; I’ll stroll around with our baby and get some fresh air whilst being online in the meeting. Now in June, I’ve been enjoying my summer holiday. But clearly something is going on, as the difference in average percentage is so high. For my Apple Watch, the difference is there but not as significant.

Why am I not comparing the data with the iPhone activity? Heh, to prove the point of my earlier “Things that WILL go wrong when monitoring IoT devices” post: something has happened, and my Home Assistant won’t provide that information anymore. The same happened with connection-type data. I’ll have to take a look at that someday, but lately I’ve been busy as a bee with 1) the summer holiday and 2) preparing material for Zabbix Summit 2023.

Real-world applications for this kind of cherry-picking

Even though this post is very thin in its contents, I’m posting it as a hint for how you can utilize Zabbix for more serious monitoring targets in this same way. Need to compare UPS battery depletion rate? Disk space usage rate? CPU usage? Concurrent connections? Whatever the data, it’s often useful to stop for a moment and compare how things were (some time units) ago.

Yes, you can simply pick a longer time period in the Zabbix time picker and see the data for one year or whatever, and most of the time it will show you a change in pattern. But if the changes are more subtle or the graph is very busy, sometimes zooming in to shorter time periods in history will show you something that you might otherwise miss.

For example, if I expand the time range for the battery usage, not only will you notice that the Home Assistant iCloud ride has been a bumpy one, but also that the details get lost when the time range is longer.

More ways to compare historical data

I didn’t build any new graphs or dashboards for this quick experiment; I merely used the time picker. In case you need comparison data available at any time, using time shift and additional data sets would help you out. And, like many of us do, one way to dive deeper into data collected by Zabbix is to analyze it in Grafana, but that’s a story for another day.
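
As a sketch of the time shift idea (the host and item key are hypothetical), a calculated item in recent Zabbix versions could compare this week’s average against last week’s:

avg(/iPhone/battery.level,7d) - avg(/iPhone/battery.level,7d:now-7d)

A clearly negative result would mean the battery has, on average, sat lower this week than the week before.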

I have been working at Forcepoint since 2014 and I’m happy that my work won’t drain my personal battery. — Janne Pikkarainen

This post was originally published on the author’s page.

The post What’s Up, Home? – Does your phone battery drain faster in summertime? appeared first on Zabbix Blog.

What’s Up, Home? – 7 things to beware of if you monitor your home

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-7-things-to-beware-of-if-you-monitor-your-home/26035/

When reading this blog, you could easily think that everything is smooth sailing all the time. No. When you monitor your home IoT — or frankly, just USE your home IoT — you have plenty of small details to watch out for. I list them for you, so you don’t have to find them out the hard way like I’ve done over this 1+ year of journey.

1. The status is not what it seems

This is especially true with IoT devices operating on the 433 MHz radio frequency. Your home smart hub sends the signal like a radio station, hoping your IoT device catches it; to my understanding, it does not get a reply back from the device. If anything interferes with the signal, your device will miss it, and your home smart hub will show the wrong status.

So, you will need to either get rid of these devices and replace them with devices that use a two-way communication protocol such as Zigbee or, if that’s not possible, set up extra monitoring to try to guess whether the command your home smart hub sent actually went through. Did you attempt to power on/off a smart power socket connected to a radiator? Keep an eye on the smart temperature meter and react soon if the temperature does not start to rise after the power socket was powered on, as in the sketch below.
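
A rough sketch of such a guard in Zabbix trigger syntax might look like the following; the host and item keys are hypothetical:

last(/home/socket.radiator.state)=1 and (last(/home/temp.livingroom) - avg(/home/temp.livingroom,30m:now-30m)) < 0.5

This would fire if the socket has reportedly been switched on, yet the temperature hasn’t risen by at least half a degree compared to the preceding half hour.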

2. Battery-low messages can be deceiving

Two of my Philips Hue motion sensors have been complaining about low battery status for about six months now, but they are still operating just fine. I’ll let you know when I finally have to replace the batteries on them. 

On the contrary, the batteries on some 433 MHz Telldus thermometers can just die without much warning. For them, your monitoring needs to react fast if values stop coming in. To make things more complicated, not TOO fast, though, as sometimes these thermometers can hibernate for some time before reporting new values; possibly when there’s no change in temperature, they enter some power-save mode or something. I don’t know.

3. Bluetooth devices and 2.4 GHz Wi-Fi can interfere with each other

Even though my devices do not use 2.4 GHz Wi-Fi much, I have some devices, like a Sonos smart speaker, where it’s a must. So, for example, when playing music through that speaker, it’s possible that my Raspberry Pi 4 cannot hear the RuuviTag environmental sensor very reliably. It helped somewhat when I found out that my Asus router could enable some kind of “Bluetooth coexistence” mode, but it’s not a 100% solution for my issue.

4. Make sure any helper components are really up

Along with Zabbix and Grafana, my Raspberry Pi 4 runs Home Assistant to harvest some values about my iPhone and so on. It runs as a Docker image and is generally stable, but sometimes it just stops working. I have an automatic daily restart of that Docker image, and so far that has been a relatively good way to keep the image running.
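
Such a daily restart can be a one-line cron entry; the container name here is an assumption:

0 4 * * * docker restart homeassistant

This restarts the container every morning at 04:00, whatever state it happens to be in.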

5. APIs can and will change

Monitoring something over some API? Or through web scenarios? Rest assured, your joy won’t last forever. This is IT, and things just won’t remain the same. SOMETHING is guaranteed to change every now and then, and the more your monitoring relies on third-party things, the less you can trust that your monitoring will just keep on working. It’s likely you will need to alter things every now and then.

6. Monitor your monitoring

Even though Raspberry Pi 4 and Zabbix are very reliable and unlikely to cause you any trouble, of course they can fail, or, more likely, something else will not be as it should. Your home router or Internet connection can die. Electricity can go down. Hardware can die. If you want to be really sure, monitor your monitoring from the outside somehow, for example with separate monitoring running in the cloud.

In our case, the electricity and ISP are very reliable, and the Cozify smart home hub has a nice feature where the Cozify cloud will text me if the hub loses connectivity; that’s usually a good indication that either the ISP or power went down. Also, I’m about to roll out a small cron job on this site that checks whether my Zabbix has updated a test file recently. If not, that would indicate my Zabbix is down or otherwise unreachable, so whatsuphome.fi could e-mail me.
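
A minimal sketch of that kind of dead man’s switch, assuming Zabbix touches /var/www/heartbeat.txt every few minutes (the file path, threshold, and e-mail address are all assumptions):

*/15 * * * * [ $(( $(date +\%s) - $(stat -c \%Y /var/www/heartbeat.txt) )) -gt 900 ] && echo "Zabbix heartbeat stale" | mail -s "Monitoring down?" me@example.com

If the file hasn’t been modified in over 900 seconds, the cron job mails an alert from outside the monitored system. (The percent signs are escaped because cron treats a bare % specially.)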

7. You will get paranoid

With more knowledge comes more pain. With some devices, you’ll start to think that they are going to break soon. As an example, the freezer I keep referring to: sometimes it has short periods when its temperature rises a bit for some reason and then goes down again. I don’t know if that has something to do with the fact that our freezer is one of those frost-free models that does not form ice everywhere, or if it’s something else, but we keep observing spikes like this about once a week.

I have been working at Forcepoint since 2014 and have learnt not to trust the technology. — Janne Pikkarainen

This post was originally published on the author’s page.

The post What’s Up, Home? – 7 things to beware of if you monitor your home appeared first on Zabbix Blog.

What’s Up, Home? – Can ChatGPT help set up monitoring a USB-connected printer with Zabbix?

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-can-chatgpt-help-set-up-monitoring-a-usb-connected-printer-with-zabbix/25980/

Can you monitor a USB-connected printer with Zabbix? Of course, you can! But can ChatGPT help set up the monitoring? Well… erm… maybe! By day, I am a Lead Site Reliability Engineer in a global cyber security company, Forcepoint. By night, I monitor my home with Zabbix & Grafana and do some weird experiments with them. Welcome to my blog about the project.

When it comes to printing, I am not laser-sharp. That’s simply because I have not printed anything in a long, long time, and even when I have, it’s been on a printer maintained by Someone Else. Yes, I know how to add paper and how to do a ritual dance whilst printing to prevent the printer from causing an annoying paper jam. Yes, I have added some print servers to Zabbix monitoring at the office. That’s about where my printer wisdom ends.

Nobody buys a printer anymore, except for us

It’s 2023, and nobody prints anymore, or at least that’s my personal impression. Well, that changed at our home as we entered another domain I understand 0% about. Occasionally my wife makes some clothes for our baby, me, and herself. For that, she’s printing out the source code… the CAD models… well… the blueprints for the clothes. I kid you not, the clothes schematics look so complicated to me and the text-based instructions so alien that I’m sure the instructions are not from this planet.

Anyway, my wife found a used HP LaserJet Pro MFP M28a for us for a steal. No, not a literal steal, but for 40 euros, which did sound cheap to me, so we bought it. Of course, normal human beings would just connect the printer to their laptop/desktop and be done with it. However, you know how this story continues.

Flirting with ChatGPT

I went to Zabbix integrations, sure that I would easily find out how to monitor a USB-connected printer. I’ll just search and… what? Nothing? No worries! We have the community templates! I’ll just quickly grab a template from there…. an empty result set, AGAIN? Monitoring SNMP-enabled printers is a different story (for that, Zabbix can suit you very well), but just try to find something for USB-connected printers. I know I could probably monitor my printer by spying on CUPS, but I’m not sure it could return the details I wanted.

With my experiments, I’m used to the fact that I cannot get direct answers to my monitoring needs using search engines, as I’m doing stuff that not too many have done. Even if they have, they might not have blogged about it. That’s OK, but as this was about printing, which is a common topic, I decided to see what happens when I try ChatGPT. I did the tests with the free ChatGPT 3.5; the new ChatGPT 4 would probably be a different ride.

Here’s our lovely little chat.

Me: I have an HP LaserJet MFP M28a monochrome laser printer connected to USB. How do I get its remaining toner level through the command line on Ubuntu 22.04?

ChatGPT:  You can get the remaining toner level of your HP LaserJet MFP M28a monochrome laser printer through command line on Ubuntu 22.04 by following these steps:

  1. Install the “hplip” package by running the following command in the terminal:

sudo apt-get install hplip

  2. Once the installation is complete, run the following command to set up your printer:

sudo hp-setup -i

Follow the instructions provided by the setup wizard to configure your printer.

  3. After the printer is set up, run the following command to get the remaining toner level:

hp-systray -x

This will launch the HP Device Manager, which displays the toner level for your printer.

Alternatively, you can also run the following command to get the toner level directly in the terminal:

hp-levels -n

This will display the toner level in a percentage format.

Me: None of those worked as hp-levels and hp-systray require QT4 and D-Bus. Let’s use ipptool instead.

ChatGPT: Sure, we can use the ipptool command to get the remaining toner level of your HP LaserJet MFP M28a monochrome laser printer on Ubuntu 22.04.

Here are the steps to follow:

  1. Make sure your printer is connected to the computer and turned on.
  2. Open the terminal on your Ubuntu 22.04 machine.
  3. Run the following command to install the CUPS and IPP utilities:

sudo apt-get install cups-ipp-utils

  4. Run the following command to find the printer’s URI:

lpinfo -v

The output should contain a line with the URI of your printer, which will look something like this:

ipp://HP_LaserJet_MFP_M28a.local:631/ipp/print

  5. Run the following command to get the remaining toner level:

ipptool -t -v -H ipp://localhost:631/ipp/print -I /usr/share/cups/model/HP/hp-laserjet_pro_mfp_m26a.ppd get-printer-attributes.test | grep marker-levels

Replace “get-printer-attributes.test” with the actual name of the test file used by your printer.

  6. The output of the command should contain a line with the toner level, which will look something like this:

marker-levels: marker-high-levels=(100) marker-low-levels=(5) marker-names=(Black Cartridge) marker-current-levels=(30)

In this example, the toner level is 30%.

That’s it! This is how you can use the ipptool command to get the remaining toner level of your HP LaserJet MFP M28a monochrome laser printer on Ubuntu 22.04.

* * * 

The latest tip really did work, so we’re off to a very good start with ChatGPT! Next, I asked if it could create a Zabbix template for me, but even though it really tried, it failed.

Adding printer to Zabbix

So, before continuing with the failed ChatGPT attempt, let’s add this printer to monitoring, as that’s the reason you are reading this blog anyway. I was very surprised by the level of detail in the ipptool output: everything from supported paper sizes to printing options to other usual details, but also some dynamic details like printer uptime.

I wanted to add those dynamic details, so I read through the output and added the details in the way you already know me doing:

  • I first added a cronjob that runs every minute 

    ipptool -t -v "ipp://HP%20LaserJet%20MFP%20M28a%20(7C69CB)%20(USB)._ipp._tcp.local/" get-printer-attributes.test >/tmp/hp-printer.txt

  • Zabbix then reads that text file into a master item, and with dependent items and item preprocessing cherry-picks the interesting details

In screenshots, like this.
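
As a rough sketch of that setup (the item keys are hypothetical), the master item could read the file via the Zabbix agent, and a dependent item could extract the toner percentage with a regular-expression preprocessing step:

Master item key: vfs.file.contents[/tmp/hp-printer.txt]
Dependent item key: printer.toner.level
Preprocessing: Regular expression, pattern marker-current-levels=\((\d+)\), output \1

Note that vfs.file.contents has a size limit, so if the full ipptool output grows too large, the cron job may need to grep out just the interesting lines first.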

Does it work?

Of course it does, as this routine is one I’ve done so many times before in this blog. Here’s the result. And yes, the toner will likely run out soon; this is a used printer, and it complains about a low toner level every time we print. It will be an interesting experiment to see how many pages we can still print before it actually runs out of juice. For something reported as “0% left” by other tools as well, the printer does an excellent job.

Back to ChatGPT we go

If I copy-pasted my complete chat with ChatGPT, this blog post would become ridiculously long. Communicating with ChatGPT was like working with a hyperactive intern who proceeds to do SOMETHING, only to realize moments later that whatever it did was something completely different from what you asked for. Probably I’m just a sucky ChatGPT prompter.

To give you an idea, here’s a summary of how ChatGPT failed:

  • It really attempted to create YAML-based template files for me.
  • Unfortunately, importing the templates into Zabbix failed every time.
  • When I fed the error messages back to ChatGPT, it attempted to fix its errors, but in weird ways. Sometimes it changed the template drastically, even when it was supposed to only add or modify a single line. Multiple times, it decided to change the format from YAML to XML unless I demanded it stay with YAML.

Here are a few snippets from the chat. Maybe at some point, I’ll throw in some money to try out ChatGPT 4.

… this went on and on until I gave up. In conclusion: this time, ChatGPT nudged me in the correct direction to get the desired output about the printer info (although a sharp-eyed reader might notice I hinted at the tool I’d like to use and how I might, after all, know more about printers than I pretended during this blog post…). Then ChatGPT ran out of ink when it tried to generate the Zabbix templates. It’s scarily advanced anyway, and someday I will try out the more advanced ChatGPT 4.

I have been working at Forcepoint since 2014 and luckily I don’t suffer from empty paper syndrome very often. — Janne Pikkarainen

This post was originally published on the author’s page.

The post What’s Up, Home? – Can ChatGPT help set up monitoring a USB-connected printer with Zabbix? appeared first on Zabbix Blog.

What’s Up, Home? – Monitor your mobile data usage

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-monitor-your-mobile-data-usage/25856/

Can you monitor your mobile data usage with Zabbix? Of course, you can! By day, I am a Lead Site Reliability Engineer in a global cyber security company, Forcepoint. By night, I monitor my home with Zabbix & Grafana and do some weird experiments with them. Welcome to my blog about this project.

As it is Easter (the original blog post was published two months ago), this entry is a bit short, but as I was remoting into my home systems over VPN on my phone, I got this blog post idea.

When on the go, I tend to stay connected to my home network over VPN. Or rather, an iOS Shortcut pretty much runs my OpenVPN home profile for me whenever I exit my home.

My Zabbix collects statistics from my home router over SNMP, and as usual, the data includes per-port traffic statistics. VPN clients are shown as tunnel interfaces so Zabbix LLD picks them up nicely.
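
As a sketch, assuming a router reachable at 192.168.1.1 with the SNMP community string public, you can preview what interfaces a Zabbix discovery rule would see with snmpwalk:

snmpwalk -v2c -c public 192.168.1.1 IF-MIB::ifDescr

The tunnel interfaces for VPN clients appear in that list, and Zabbix’s network interface LLD rule creates per-interface traffic items from them.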

So, as a result, I get to see how much traffic my phone consumes whenever I’m using the VPN on mobile. Here are seven-day example screenshots from the ZBX Viewer mobile app.

VPN connection bits sent.

VPN connection bits received.

So, from this data I can get all the statistics I need. And, using my ElastiFlow setup, I could likely see where my phone has been connecting most.

This post was originally published on the author’s page.

The post What’s Up, Home? – Monitor your mobile data usage appeared first on Zabbix Blog.

What’s Up, Home? – Monitor your iPhone & Apple Watch with Zabbix

Post Syndicated from Janne Pikkarainen original https://blog.zabbix.com/whats-up-home-monitor-your-iphone-amp-apple-watch-with-zabbix/25817/

I’m entering a whole new level of monitoring, and “What’s up, home?” could now also be called “What’s up, me?”. Recently a colleague hinted to me about Home Assistant’s HomeKit Controller integration as a way to get my HomeKit-compatible Netatmo environmental monitoring device to return values back to Zabbix without my Siri kludge. One thing led to another, and now I’m monitoring my iPhone and Apple Watch; so, practically, I’m monitoring myself.

But how to get to this level? Let’s rewind a bit.

Home Assistant

Home Assistant is a nice piece of home automation software. It is open source and provides many, many integrations for automating your home. I now have my Netatmo comfortably monitored through it…

Bye-bye, mobile app and my Siri kludge. This screenshot is from Home Assistant.

… but while exploring Home Assistant’s integrations, I came upon its iCloud integration. Oh boy. This takes my monitoring to a whole new level.

But how to get this data to Zabbix?

In Home Assistant, you can go to your account settings and create a long-lived access token. With that, you just pass the authorization bearer token as part of your HTTP request, and you are done. So, like this.
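
Since the screenshot doesn’t carry over here, a sketch of the request against Home Assistant’s REST API looks like this (the host and token are placeholders):

curl -H "Authorization: Bearer <long-lived-token>" -H "Content-Type: application/json" http://homeassistant.local:8123/api/states

This returns the state of every entity Home Assistant knows about as one large JSON array.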

This way you’ll receive your Home Assistant data back in JSON format. As the output is really, really, really long and I needed just a relatively small set of data for myself, I cherry-picked the interesting parts using the above item as the master item and then created a bunch of dependent items.

… and here’s a single item so you get the idea.
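
As a sketch of one such dependent item (the entity name is an assumption), a JSONPath preprocessing step can pluck a single value, such as the iPhone battery level, out of the master item’s JSON:

$[?(@.entity_id=='sensor.iphone_battery_level')].state.first()

The dependent item then receives only that one value each time the master item polls Home Assistant.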

Let’s create some dashboards

Now that I have my data in Zabbix, it’s time to create some dashboards. Fascinating that I can now truly monitor my iPhone and Apple Watch like this.

I also created a Grafana dashboard.

Observations

This has been now running for roughly a day for me. Already some observations:

  • While driving, at traffic lights I tried to see what would happen if I disable the Bluetooth connection between my car and my iDevices. My status was reported as Cycling instead of Automotive for the rest of the trip. Hmm.
  • Not all data updates in real time; there can be a significant lag. Also, it seems I might need to VPN into my home network for the data to update sooner while I’m not at home.
  • iPhone’s custom focus modes are not passed on to Home Assistant. During the sleep focus mode, the focus mode was reported as On, but for any other mode I tried, it only shows Off. Shame, as I would have loved to start tracking things like how long it takes me to put our baby to sleep or how much time I’m spending on this blog. That has to wait for now.

But anyway, this thing just opened a whole new Pandora’s box for me to explore. 

This post was originally published on the author’s page.

Understanding the Ecosystem of Smart Cities for the Purpose of Security Testing

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/12/29/understanding-the-ecosystem-of-smart-cities-for-the-purpose-of-security-testing/

Is there a defined ecosystem, similar to what we encountered with the Internet of Things (IoT), that can be charted out as it relates to smart city technology and its security implications?

While evaluating IoT, I struggled with defining what IoT is. I found varying definitions out there, but none that helped me fully understand what constitutes IoT and how to approach evaluating its security posture. To solve that dilemma in my mind, and to better discuss it with vendors and consumers, I finally landed on the concept that IoT is best defined as a series of traits that can be used to explain its structure and to better understand its components and their interactions with each other. This concept and approach also allowed me to properly map out all of the interlinking mechanisms as they relate to security testing of an IoT technology’s full ecosystem.

Looking at it from this perspective, we see that Smart Cities leverage IoT technology and concepts at their core, but in many cases with a much more defined relationship to data. With this in mind, I have started looking at the various components that make up Smart Cities, abstracting out their specific purposes, with the goal of having a model to help better understand the various security concerns as we plan for our Smart City future.

Through general observation, we can see that Smart City solutions consist of the following five general areas:

Embedded technology

  • Sensors
  • Actuators
  • Aggregators & Edge or Fog appliances

Management and control

  • Client-side application
  • Cloud application
  • APIs
  • Server application

Data storage

  • Cloud storage
  • On-premises storage
  • Edge or Fog storage

Data access

  • Cloud application
  • Client-side application
  • Server-side application

Communication

  • Ethernet
  • Wi-Fi (802.11a/b/g/n)
  • Radio frequency (RF) (BLE, Zigbee, Z-Wave, LoRa, etc.)
  • Cellular communication (GSM, LTE, 3G/4G/5G)

Mapping these various components to a specific smart city solution’s ecosystem, we can better establish the relationship between all the components in that solution. This in turn allows us to improve our threat modeling processes by including those interrelationships. Also, similar to general IoT security testing, understanding the interconnected relationships allows us to take a more holistic approach to security testing.

The typical approach of testing each component only as a stand-alone entity is short-sighted and misses the mark when identifying attack vectors and associated risk that often come to light only when security testing takes the interaction of these components into consideration. This approach leads us to always ask the question: what happens to the security posture of one set of components if there is a security failure in another set? The holistic approach also helps us better map security risk levels across the entire ecosystem. Just because a low-risk condition is found in a single item does not mean that the risk is not compounded into a higher risk category by some interaction with other components.

So, in conclusion, the way to establish solid security testing for Smart City technologies is to map out the entire ecosystem of the solution being designed and deployed, develop a solid understanding of the various components and their interaction with each other, and then conduct threat modeling to determine the possible risks and threats expected to come against the Smart City solution.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 4

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/11/08/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-4/

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. In our previous posts, we covered how to achieve access to flash memory, how to extract file system data from the device, and how to modify the data we’ve extracted. In this post, we’ll cover how to gain root access over the device’s secure shell protocol (SSH).

Gaining root access over SSH

Before we move on to establishing an SSH connection as root, you may need to set the IP address on your local host to allow access to the cable modem at its default IP address of 192.168.100.1. In our example, we set the local IP address to 192.168.100.100 to allow this connection.

To set the local IP address on your host, first identify the local ethernet interface. You can do this from the Linux CLI terminal by running the ifconfig command:

  • ifconfig
Figure 10: IFCONFIG showing Local Ethernet Interfaces

In our example, the ethernet interface is enp0s25, as shown above. Using that interface name (enp0s25), we can set the local IP address to 192.168.100.100 with the following command:

  • ifconfig enp0s25 192.168.100.100

To validate that you’ve set the correct IP address, you can rerun the ifconfig command and examine the results to confirm:

Figure 11: Ethernet Interface Set To 192.168.100.100

It’s also possible to connect your host system directly to the cable modem’s ethernet port with your host interface set up for DHCP; the cable modem should assign an IP address to your host device.

Once you have a valid IP address assigned and/or configured on your host system, power up the cable modem to see whether your changes were made correctly and whether you now have root access. Again, ensure the SD card reader is disconnected before plugging the 12V power supply into the cable modem.


Once you’ve confirmed that the SD card reader is disconnected, power up the cable modem and wait for the boot-up sequence to complete. Boot-up is complete when only the top LED is lit and the second LED is flashing:


From the CLI terminal on your host, run the nmap command to show the open ports on the cable modem. This will also show whether your changes to the cable modem firmware were made correctly.

  • nmap -sS 192.168.100.1 -p 22,80,443,1337
Figure 12: NMAP Scan Results

At a minimum, you should see TCP port 1337 as open, as shown above in Figure 12. If not, then most likely an error was made either when copying the dropbear_rsa_key file or when making changes to the inittab file.

If TCP port 1337 is open, the next step is to attempt to log in to the cable modem as root with the following SSH command. When prompted for a password, use “arris” in all lowercase.
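
The command itself didn’t survive the formatting here; reconstructed from the surrounding details (the default device IP, dropbear listening on TCP port 1337, and the -T switch explained in the note below), it would be:

  • ssh -T -p 1337 root@192.168.100.1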

Note: Since the kernel on this device is believed to create an environment restriction that prevents console access, we were only successful in getting around that restriction with the -T switch. The -T switch disables all pseudo-terminal allocation in SSH; without it, no functioning console can be established. Also, when connected, you will not receive a typical command-line prompt, but the device should still accept and execute commands properly.

If you receive a “no matching key exchange method found” error (Figure 13), you will need to either define diffie-hellman-group1-sha1 in the SSH command or create a config file to do this automatically.

Figure 13: Key Exchange Error

Defining a config file is the easier method. We did this prior to the DEF CON IoT Village so participants in the exercise would not need to. Since others may be using this writeup to recreate the exercise, I decided to include it here to prevent any unnecessary confusion.

To create a config file that supports SSH login to this cable modem without error, you will need to create a “.ssh” folder and a “config” file within the home directory of the user you are logging in as. In our example, we were logged in as root. To get to the home folder, the simplest method is to enter the “cd” command without any arguments, which takes you to the home directory of the logged-in user.

  • cd

Once in your home directory, try to change directory “cd” to the “.ssh” folder to see if one exists:

  • cd .ssh

If it does, you won’t need to create one and can skip over the creation steps below. If not, then you will need to create that folder in your home directory with the following command:

  • mkdir .ssh

Once you have changed directory “cd” to the .ssh folder, you can use vi to create and edit a config file.

  • vi config

Once in vi, make the entries in the config file shown below in Figure 14. These entries enable access to the cable modem at 192.168.100.1 for the user root, with a cipher of aes256-cbc and the Diffie-Hellman key exchange.

Figure 14: Config File
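
Since the figure doesn’t reproduce here, a sketch of a config file matching that description would be the following; the option values are reconstructed from the text above:

Host 192.168.100.1
    User root
    Ciphers aes256-cbc
    KexAlgorithms +diffie-hellman-group1-sha1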

Once the config file is created and saved, you should be able to log in to the cable modem over SSH without receiving any more errors.

When you connect and log in, the SSH console will not show you a typical command prompt. So, once you hit the return key after the SSH command above, run “ls -al” to show a directory and file listing on the cable modem, as shown below in Figure 15. This should indicate whether you successfully logged in.

Figure 15: Cable Modem Root Console

At this point, you should now have root-level access to the cable modem over SSH.

You may ask, “What do I gain from this level of root access to an IoT device?” This level of access allows us to do more advanced and detailed security testing on a device. That is not as easily done when sitting on the outside of the IoT device or attempting to emulate it on a virtual machine, because the original hardware often contains components and features that are difficult to emulate. With root-level access, we can interact more directly with running services and applications and better monitor the results of any testing we may be conducting.

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 3

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/11/01/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-3/

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. In our previous posts, we covered how to achieve access to flash memory and how to extract file system data from the device. In this post, we’ll cover how to modify the data we’ve extracted.

Modify extracted file systems data

Now that you have unsquashfs’d the SquashFS file system, the next step is to take a look at the extracted file system and its general structure. In our example, the unsquashed file system is located at /root/Desktop/Work/squashfs-root. To see the structure while in the folder ~/Desktop/Work, run the following commands to change directory and then list the files and folders:

  • cd squashfs-root
  • ls -al

As you can see, we have unpacked a copy of the squash file system containing the embedded Linux root file system that was installed on the cable modem for the ARM processor.

The next goal will be to make the following three changes so we can eventually gain access to the cable modem via SSH:

  1. Create or add a dropbear_rsa_key to squashfs.
  2. Remove symbolic link to passwd and recreate it.
  3. Modify the inittab file to launch dropbear on startup.

To make these changes, you will first need to change directory into the etc folder of the extracted squashfs-root. In our IoT Village exercise example, that folder was “~/Desktop/Work/squashfs-root/etc”, and the attendees used the following command:

  • cd ~/Desktop/Work/squashfs-root/etc

It is critical that you are in the correct directory, and not in the etc directory of your local host, before running any of the following commands, to avoid potential damage to your laptop or desktop’s configuration. You can validate this by entering the command “pwd” and examining the returned path, as shown below:

  • pwd
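Assuming the directory layout used in this exercise, the returned path should look like this:

  /root/Desktop/Work/squashfs-root/etc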

The next thing we need to do is locate a copy of the dropbear_rsa_key file and copy it over to “~/Desktop/Work/squashfs-root/etc”. This RSA key is needed to support the dropbear service, which allows SSH communication to the cable modem. It turns out that a usable copy of the dropbear_rsa_key file is located within the etc folder on Partition 12, which in our example was found to be mounted at /media/root/disk/etc. You can use the Disks application to confirm the location of the mount point for Partition 12, similar to the method we used for Partition 5 shown in Figure 4.

By running the following command during the IoT Village exercise, attendees were successfully able to copy the dropbear_rsa_key from Partition 12 into “~/Desktop/Work/squashfs-root/etc/”:

  • cp /media/root/disk/etc/dropbear_rsa_key .

Next, we had participants remove the symbolically linked passwd file and create a new passwd file that points to the correct shell. Originally, the symbolic link pointed to a passwd file that assigned the root user a shell that prevented root from accessing the system. By replacing the passwd file, we can assign a usable shell environment to the root user.

Like above, the first step is to make sure you are in the correct folder listed below:

“~/Desktop/Work/squashfs-root/etc”

Once you have validated that you are in the correct folder, you can run the following command to delete the passwd file from the squash file system:

  • rm passwd

Once this is deleted, you can create a new passwd file with vi and add the data shown below.

Note: Here is a list of common vi interaction commands that are useful when working within vi:

  • i = insert mode. This allows you to enter data into the file
  • esc key will exit insert mode
  • esc key followed by entering :wq and then the enter key will write the file and exit vi
  • esc key followed by dd will delete the line you are on
  • esc key followed by x will delete a single character
  • esc key followed by shift+a will place you in append insert mode at the end of the line
  • esc key followed by a will place you in append insert mode at the point of your cursor

To create the new passwd file, open it with vi:

  • vi passwd

Once in vi, add the following line:

root:x:0:0:root:/:/bin/sh
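
For context, this entry uses the standard seven colon-separated passwd fields:

  • root = username
  • x = password placeholder (the actual credential is stored elsewhere, such as /etc/shadow)
  • 0 = user ID (0 is root)
  • 0 = group ID (0 is root)
  • root = comment field
  • / = home directory
  • /bin/sh = login shell, which is the key change that gives root a usable shell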


Next, we need to alter the inittab file. Again, make sure you are currently in the folder “/root/Desktop/Work/squashfs-root/etc”. Once this is validated, you can use vi to open and edit the inittab file.

  • vi inittab

The file should originally look like this:

[Figure: original inittab file]

You will need to add the following line to the file:

::respawn:/usr/sbin/dropbear -p 1337 -r /etc/dropbear_rsa_key
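
Breaking that entry down (this follows the BusyBox-style id:runlevels:action:process inittab format, which is what the empty leading fields suggest):

  • :: = the id (controlling tty) and runlevel fields are left empty, so the entry applies generally
  • respawn = init restarts the process whenever it exits, keeping the service available
  • /usr/sbin/dropbear -p 1337 -r /etc/dropbear_rsa_key = launch dropbear listening on port 1337, using the RSA host key we copied into /etc earlier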

Add the line as shown below, and save the file:

[Figure: inittab file with the dropbear line added]

Once you’ve completed all these changes, double-check them to make sure they are correct before moving on to the next sections. This will help avoid the need to redo everything if you made an incorrect alteration.

If everything looks correct, it’s time to repack the squash file system and write the data back to Partition 5 on the cable modem.

Repacking the squash file system and writing it back to the modem

In this step, you will be repacking the squash file system and using the Linux dd command to write the image back to the cable modem’s NAND flash memory.

The first thing you will need to do is change the directory back to the working folder – in our example, that is “/Desktop/Work”. This can be done from the current location of “~/Desktop/Work/squashfs-root/etc” by running the following command:

  • cd ../../

Next, you’ll use the command “mksquashfs” to repack the squash folder into a binary file called new-P5.bin. To do this, run the following command from the “/Desktop/Work” folder that you should currently be in.

  • mksquashfs squashfs-root/ new-P5.bin

Once the above command has completed, you should have a file called new-P5.bin. This binary file contains the squashfs-root folder properly packed and ready to be copied back onto the cable modem partition 5.

Note: If for some reason you think you have made a mistake and need to rerun the “mksquashfs” command, make sure you delete the new-P5.bin file first. “mksquashfs” will not overwrite the file; it will append the data to it, leaving the new-P5.bin image with duplicates of all the files, which will cause your attempt to gain root access to fail.
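
Alternatively, mksquashfs has a “-noappend” option that overwrites the destination file instead of appending to it, which avoids this failure mode entirely:

  • mksquashfs squashfs-root/ new-P5.bin -noappend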

Once you have run mksquashfs and have a new-P5.bin file containing the repacked squashfs, you’ll use the Linux dd command to write the binary file back to Partition 5 of the cable modem.

To complete this step, first make sure you have identified the correct “Device:” location using the method shown in Figure 7 from part 2 of this blog series.


In this example, the “Device:” was determined to be sdd5, so we can write the binary image by running the following dd command:

  • dd if=new-P5.bin of=/dev/sdd5
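If you want feedback while the write runs, GNU dd accepts a “status=progress” argument, and running sync afterward ensures all buffered data has been flushed to the flash before you disconnect anything:

  • dd if=new-P5.bin of=/dev/sdd5 status=progress
  • sync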

Once the dd command completes, the modified squash file system image should be written to the modem’s NAND flash memory chip. Before proceeding, disconnect the SD Card reader from the cable modem as shown below.

Note: Attaching the SD Card Reader and 12V power supply to the cable modem at the same time will damage the cable modem and render it nonfunctional.

[Figure: SD Card reader disconnected from the cable modem]

In our next and final post in this series, we’ll cover how to gain root access over the device’s secure shell protocol (SSH). Check back with us next week!

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/10/25/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-pt-2/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Pt. 2

Welcome back to our blog series on Rapid7’s IoT Village exercise from DEF CON 30. Last week, we covered the basics of the exercise and achieving access to flash memory. In this post, we’ll cover how to extract partition data.

Extracting partition data

The next step in our hands-on IoT hacking exercise is to identify the active partition and extract the filesystems for modification. The method I have used for this is to examine the file date stamps – the one with the most current date is likely the current active file system.

Note: Curious why there are multiple or duplicate filesystem partitions on an embedded device? Typically, embedded device firmware includes two of each partition: two kernel images and two root file system images. This allows the device’s firmware to be updated safely, preventing the device from being bricked if an error or power outage occurs during the firmware update. The firmware update process updates the files on the offline partitions, and once that completes properly, the boot process loads the newly updated partitions as active and takes the old partitions offline. This way, if a failure occurs, the old unchanged partition can be reloaded, preventing the device from being bricked.

On this cable modem, we have seven partitions. The key ones we want to work with are Partitions 5, 6, 12, and 13. This cable modem has two MCUs: an ARM processor and an ATOM processor. Partitions 5 and 6 are root file systems for the ARM processor, which is what we will be hacking. Partitions 12 and 13 are root file systems for the ATOM processor, which controls the RF communication on the cable.

To ensure we alter the active partition, the first thing we need to do is check the date stamps on the mounted file systems to see whether partition 5 or partition 6 is the active partition. There are several ways to do this, but the method we use is to click on partition 5 or 6 in the Disks application to highlight it, and then click on the “Mounted at:” link as shown below in Figure 4 to open the mounted file partition shown in Figure 5:


Figure 4: Partition File System Mount Locations


Figure 5: Mounted File System Partition 5

Once File Manager opens the folders, you can right click on the “etc” folder, select “Properties,” and check the date listed in “Modified:” as shown below in Figure 6:


Figure 6: Folder Date Stamp

You will want to do this on both partitions 5 and 6 of the cable modem to identify the partition with the most current date stamp, as this is typically the active partition. For the example cable modems we used at DEF CON IoT Village, partition 5 was found to have the most current date stamp.
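
If you prefer the CLI, the stat command will report the same modification dates. Run it against each partition’s “Mounted at:” path; the path below is a placeholder, so substitute whatever Disks reports for your system:

  • stat <mounted-at-path>/etc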

Extracting the active partition

The next step is to extract the partition with the newest date stamp (Partition 5). To do this, we first need to identify which Small Computer System Interface (SCSI) disk Partition 5 is attached to. This can be identified by selecting Partition 5 in the Disks application and then reading the “Device:” entry, as shown below in Figure 7:


Figure 7: Device

Also, remember to record the “Device:” information; you’ll need it during several future steps. In our example, we see that this is /dev/sdd5, as shown in Figure 7.

To extract the partition image, launch the Terminal application on your Linux host to gain access to the command line interface (CLI). Once the CLI is open, create a storage area to hold the partition binary and file system data. In our example, we created a folder for this called /Desktop/Work.

From the CLI, within the /Desktop/Work folder, we ran the Linux dd command to make a binary copy of Partition 5, making sure to use the device location that Partition 5 was connected to: /dev/sdd5. A sample output of this dd command is shown below in Figure 8:

  • dd command arguments:
    if=file Read input
    of=file Write output
  • dd if=/dev/sdd5 of=part5.bin

Figure 8: dd Command

Note: Before proceeding, we highly recommend that you make a binary copy of the cable modem’s full NAND flash. This may come in handy if anything goes wrong and you need to return the device to its original operational state. This can be done by running the following dd command against the full “Device:”, which in this example would be /dev/sdd:

  • dd if=/dev/sdd of=Backup_Image.bin
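
If anything does go wrong later, that backup can be written back with the inverse command, again assuming the device is still at /dev/sdd:

  • dd if=Backup_Image.bin of=/dev/sdd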

Unsquashfs the partition binary

Next, we’ll extract the file system from the Partition 5 image that we dd’d from the cable modem in the previous steps. This file system was determined to be a Squash file system, a type commonly used on embedded devices.

Note: A Squash file system is a compressed read-only file system commonly found on embedded devices. Since the file system is read-only, it is not possible to make alterations to it while it is in place. To make modifications, the file system will need to be extracted from the device’s storage and uncompressed, which is what we’ll do in the following exercise steps.

So, your first question may be, “How do we know that this is a squash file system?” If you look at the partition data in the Disks application, you will see that “Content:” shows (squashfs 4.0):


Another simple option for identifying the content is to run the file command against the binary file extracted from the modem with the dd command and view the output. An example of this output is shown below in Figure 9:

  • file part5.bin

Figure 9: Output from File Command

To gain access to the partition binary’s squash file system, we will extract/unpack it using the unsquashfs command. A squash file system is read-only and cannot be altered in place; the only way to alter a squashfs is to unpack it first. This is a simple process, done by running the unsquashfs command against the binary file part5.bin:

  • unsquashfs part5.bin

Once the command completes, you should have a folder called “squashfs-root”. This folder contains the extracted embedded Linux root file system for the cable modem.

In our next post, we’ll cover how to modify the data we just extracted. Check back with us next week!

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2022/10/18/hands-on-iot-hacking-rapid7-at-def-con-30-iot-village-part-1/

Hands-On IoT Hacking: Rapid7 at DEF CON 30 IoT Village, Part 1

Rapid7 was back this year at DEF CON 30, participating in the IoT Village with another hands-on hardware hacking exercise, with the goal of teaching attendees various concepts and methods for IoT hacking. Over the years, these exercises have covered several different embedded device topics, including how to use a logic analyzer, extracting firmware, and gaining root access to an embedded IoT device.

Like last year, we had many IoT Village attendees request a copy of our exercise manual, so again I decided to create an in-depth write-up about the exercise we ran, with some expanded context to answer several questions and expand on the discussion we had with attendees at this year’s DEF CON IoT Village.

This year’s exercise focused on the following key areas:

  • Interacting with eMMC in circuit
  • Using the Linux dd command to make a binary copy of flash memory
  • Using the unsquashfs and mksquashfs commands to unpack and repack read-only squash file systems
  • Altering startup files within the embedded Linux operating system to execute code during device startup
  • Leveraging dropbear to enable SSH access

Summary of exercise

The goal of this year’s hands-on hardware hacking exercise was to gain root access to an Arris SB6190 cable modem without needing to install any external code. To do this, the user interacted with the device via a PHISON PS8211-0 embedded multimedia controller (eMMC) to mount and gain access to the NAND flash memory storage. With NAND flash memory access, the user was able to identify the partitions of interest and extract them using the Linux dd command.

Next, the user extracted the file system from the partition binary files and was then able to modify key elements to enable SSH access over the ethernet connection. After the modifications were completed, the file systems were repacked and written back to the modem. Finally, the attendee was able to power up the device and log in over ethernet using SSH, with root access and the default device password.


eMMC access to flash memory

In this first section of the exercise, we focused on understanding the process of gaining access to the NAND flash memory by interacting with a PHISON PS8211-0 embedded multimedia controller (eMMC).

Wiring up eMMC and SD card breakout board

To interact with a typical eMMC device, we need the following connections:

  • CMD (Command)
  • DAT (Data)
  • CLK (Clock)
  • VCC (Supply voltage, 3.3v)
  • VCCq (Controller voltage, 1.8v to 3.3v)
  • GND (Ground)

As shown in the above bullets, there are typically two different voltages required to interact with eMMC chips. However, in this case, we determined that the PHISON PS8211-0 eMMC chip did not have a different controller voltage for VCCq, meaning that the voltage used was only 3.3v for this example.

When connecting to and interacting with an eMMC device, we can usually utilize the internal power supply of the device. This often works well when different VCC and VCCq voltages are required, but in those cases, we also have to hold the microcontroller unit (MCU) in a reset state to prevent the processor from causing interruptions when we try to read memory. In this example, we found that the PHISON eMMC chip and NAND memory could be powered by supplying the voltage externally via the SD Card reader.

When using an SD Card reader to supply voltage, we must avoid hooking up the device’s normal source of power as well. Connecting both sources (normal and SD Card) at once will lead to permanent damage to the device.

When it came to soldering the needed wiring for this exercise, we realized that having attendees solder the connections themselves would be much more complex than we could support. So, all the wiring was presoldered before the IoT Village event using 30-gauge color-coded wirewrap wire. This wiring was then attached to an SD Card breakout board as shown below in Figure 1:

  • White = Data
  • Blue = Clock
  • Yellow = Command
  • Red = Voltage (VCC)
  • Black = Ground
Figure 1: Wiring Hookups

Also, as you can see in the above images, the wires do not run parallel to each other, but have a reasonable gap between them and cross perpendicularly where they pass over each other. This is because we found during testing that running wires directly next to each other caused the partitions to fail to mount properly, most likely because of noise induced from neighboring lines affecting the signal.

Note: If you are looking to do your own wiring, the 30-gauge wirewrap wire I used has polyvinylidene fluoride insulation, sold under the brand name Kynar. The benefit of using Kynar wirewrap is that this insulation does not melt or shrink back as easily from the heat of a soldering iron. When heated by a soldering iron, standard plastic insulation will shrink back, exposing uninsulated wire, which can lead to wires shorting out on the circuit board.

Connect SD card reader

With the modem wired up to the SD Card breakout as shown above, we can mount the NAND flash memory by connecting an SD Card reader. Note that not all SD Card readers will work: I used simple trial and error with several SD Card readers I had in my possession until I found that an inexpensive DYNEX brand reader worked. It should be attached as shown below in Figure 2:

Figure 2: Connected SD Card Reader

Once it is plugged in, the various partitions on the cable modem’s NAND flash memory should start loading; in this case, a total of seven partitions mounted up. This can take a few minutes to complete. If your system opens each volume as it mounts, you can close those windows to avoid cluttering your desktop. To see the layout of the various partitions on the NAND flash and gather the information needed for reading and writing to the correct partitions, we used the Linux application Disks. Once Disks is open, you can click on the 118 MB drive in the left column, and it will show all of the partitions, looking something like Figure 3 below:

Figure 3: Disks NAND Flash Partitions
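
If you prefer the command line, running lsblk at this point gives a similar view of the reader and its partitions (the device name will vary from system to system):

  • lsblk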

In our second installment of this 4-part blog series, we’ll discuss the step of extracting partition data. Check back with us next week!