When it comes to the Internet of Things (IoT), many developers think in terms of microcontrollers, system-on-chip boards, single-board computers, sensors, and various other electronic components. While devices are undoubtedly the foundation of IoT, the core value of a connected solution lies in the data generated by these devices.

The devices layer is only the tip of the iceberg; the underlying data platform sits below the waterline and does the heavy lifting. One of the key pillars of a robust IoT data platform is Apache Kafka, open source software designed to handle massive amounts of data ingestion. Kafka acts as the gateway to the data processing pipeline, which is powered in the data center by Apache Storm, Apache Spark, and Apache Hadoop clusters.

If you are a developer considering IoT as a career option, it is time to start investing in Apache Kafka skills. This article explores the role that Apache Kafka plays in deploying a scalable IoT solution.

Kafka: A High-Performance Ingestion Layer for Sensor Data

IoT devices comprise a variety of sensors capable of generating multiple data points, which are collected at a high frequency. A simple thermostat may generate a few bytes of data per minute, while a connected car or a wind turbine generates gigabytes of data in just a few seconds. These massive data sets are ingested into the data processing pipeline for storage, transformation, processing, querying, and analysis.

Each data set consists of multiple data points representing specific metrics. For example, a connected heating, ventilation, and air conditioning (HVAC) system would report ambient temperature, desired temperature, humidity, air quality, blower speed, load, and energy consumption metrics.
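To make this concrete, here is a minimal sketch of one such data point modeled as a Java record; the field names, types, and units are illustrative assumptions, not a standard schema.

```java
// Illustrative sketch of a single HVAC telemetry data point.
// Field names and units are assumptions for this example, not a standard schema.
public record HvacDataPoint(
        String deviceId,          // unique ID of the HVAC unit
        long timestampMillis,     // time of the reading (epoch milliseconds)
        double ambientTempC,      // ambient temperature in Celsius
        double desiredTempC,      // thermostat set point in Celsius
        double humidityPercent,   // relative humidity
        int airQualityIndex,      // air quality index reading
        int blowerSpeedRpm,       // blower speed
        double loadPercent,       // current load on the unit
        double energyKwh          // energy consumed since the last reading
) {}
```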

In a large shopping complex, these data points are collected at frequent intervals from hundreds of HVAC units. Since these devices may not be powerful enough to run a full TCP networking stack, they use protocols such as Z-Wave and ZigBee to send the data to a central gateway, which aggregates the data points and ingests them into the system.

The gateway pushes the data set to an Apache Kafka cluster, where the data takes multiple paths. Data points that need to be monitored in real time go through the hot path. In our HVAC scenario, it is important to track metrics like temperature, humidity, and air quality in real time so that corrective action can be taken. These data points may go through an Apache Storm or Apache Spark cluster for near real-time processing.
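On the gateway side, publishing into this pipeline can be as simple as producing to a Kafka topic. Below is a minimal sketch using the official Java client; the broker address, the hvac-telemetry topic name, and the JSON payload are assumptions made for this example, and a real gateway would batch readings and likely use a binary serializer such as Avro.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GatewayProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Address of any broker in the Kafka cluster (assumed host/port).
        props.put("bootstrap.servers", "kafka-broker-1:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by device ID so all readings from one unit land in the same partition.
            String deviceId = "hvac-042";
            String reading = "{\"tempC\":22.5,\"humidity\":48,\"airQuality\":31}";
            producer.send(new ProducerRecord<>("hvac-telemetry", deviceId, reading));
        }
    }
}
```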

Metrics such as load and power consumption are analyzed after being collected over a period of time. Data points that are collected and analyzed through a batch process typically take the cold path of the data processing pipeline. A MapReduce job may be run within a Hadoop cluster to analyze the energy efficiency of the HVAC units.
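As a sketch of what such a batch job might look like, the hypothetical MapReduce pair below computes the average energy consumption per HVAC unit; it assumes input lines of the form deviceId,kWh, which is an invented format used here only for illustration.

```java
import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical cold-path job: average energy consumption per HVAC unit.
// Assumes input lines of the form "deviceId,kWh".
public class EnergyEfficiency {
    public static class EnergyMapper
            extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            // Emit (deviceId, kWh) for each reading.
            context.write(new Text(fields[0]),
                    new DoubleWritable(Double.parseDouble(fields[1])));
        }
    }

    public static class AverageReducer
            extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text deviceId, Iterable<DoubleWritable> readings,
                Context context) throws IOException, InterruptedException {
            double sum = 0;
            long count = 0;
            for (DoubleWritable r : readings) {
                sum += r.get();
                count++;
            }
            // Emit the average consumption for this device.
            context.write(deviceId, new DoubleWritable(sum / count));
        }
    }
}
```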

Irrespective of the path the data points take, they need to be ingested into the system. Apache Kafka acts as the high-performance data ingestion layer that deals with these massive data sets. The components of the data processing pipeline responsible for hot path and cold path analytics become subscribers of Apache Kafka.
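The subscription side is just as straightforward. Here is a minimal sketch of a hot-path component consuming with the Java consumer API, reusing the assumed broker address and hvac-telemetry topic from the producer sketch above; in practice this role would be played by a Storm spout or a Spark streaming receiver.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HotPathConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker-1:9092");  // assumed broker address
        props.put("group.id", "hot-path-analytics");            // consumer group for this path
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("hvac-telemetry"));
            while (true) {
                // Poll the brokers for new readings.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Hand each reading to the real-time processing logic.
                    System.out.printf("device=%s reading=%s%n",
                            record.key(), record.value());
                }
            }
        }
    }
}
```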

Kafka vs. MQTT

Apache Kafka is not a replacement for MQTT, a lightweight publish/subscribe messaging protocol typically used for machine-to-machine (M2M) communication. The design goals of Kafka are very different from those of MQTT.

In an IoT solution, devices can be classified into sensors and actuators. Sensors generate data points, while actuators are mechanical components that can be controlled through commands. For example, the ambient light level in a room may be used to adjust the brightness of an LED bulb. In this scenario, the light sensor needs to talk to the LED bulb, which is an example of M2M communication. MQTT is a protocol optimized for sensor networks and M2M.

Since MQTT is designed for low-power devices, it cannot handle the ingestion of massive data sets. Apache Kafka, on the other hand, can handle high-velocity data ingestion but is not designed for M2M communication.

Scalable IoT solutions use MQTT for explicit device-to-device communication while relying on Apache Kafka for ingesting sensor data. It is possible to bridge Kafka and MQTT so one system handles both ingestion and M2M, but it is recommended to keep the two concerns separate: configure the devices or gateways as Kafka producers while they still participate in the M2M network managed by an MQTT broker.
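To illustrate this separation, here is a minimal sketch of a gateway that participates in the MQTT network through the Eclipse Paho client while forwarding readings into Kafka as a producer; the broker URLs, client ID, and topic names are all assumptions made for this example.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class MqttToKafkaBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker-1:9092");  // assumed Kafka broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Join the M2M network as an ordinary MQTT client (assumed broker URL).
        MqttClient mqtt = new MqttClient("tcp://mqtt-broker:1883", "gateway-1");
        mqtt.connect();

        // Forward every sensor reading arriving over MQTT into Kafka for ingestion.
        mqtt.subscribe("sensors/#", (topic, message) ->
                producer.send(new ProducerRecord<>("hvac-telemetry",
                        topic, new String(message.getPayload()))));
    }
}
```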

Kafka vs. HTTP/REST

Apache Kafka exposes a TCP port that speaks a binary protocol. A client that pushes data initiates a socket connection, writes a sequence of request messages, and reads back the corresponding response messages. The protocol does not require a handshake for each connection or disconnection.

Since Kafka doesn’t use HTTP for ingestion, it delivers better performance and scale. A client can connect to any one of the instances in the cluster to ingest data. This architecture, combined with raw TCP sockets, offers maximum scalability and throughput.

While it may be tempting to use an HTTP proxy for communicating with a Kafka cluster, it is recommended that the solution use a native client. Since Kafka is written in Java, the native Java client library delivers the best possible performance. The community has also built optimized client libraries for Go, Python, and even Node.js. Shopify has contributed an open source Go library for Kafka called Sarama, and the Mailgun team at Rackspace has built Kafka-Pixy, an open source HTTP proxy for Kafka. There are multiple libraries for Python, C#, Ruby, and other languages.

Most IoT gateways are powerful enough to run Java, Go, or Python. For the best performance and throughput, use a client library designed natively for Kafka.

Getting Started with Kafka

Apache Kafka is developed in Java, and its cluster coordination is handled by Apache ZooKeeper. Any OS capable of running a JVM can be used to deploy a Kafka cluster. To test the waters, you may want to run Kafka in Docker.
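As one way to do that, the commands below start a single-node ZooKeeper and a single Kafka broker using the Confluent community Docker images; the image names and environment variables are assumptions that may vary between image versions.

```
docker network create kafka-net

# Single-node ZooKeeper for cluster coordination
docker run -d --name zookeeper --network kafka-net \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper

# Single Kafka broker pointed at ZooKeeper
docker run -d --name kafka --network kafka-net -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka
```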

If you don’t want to deal with the infrastructure, you can get started with a managed Kafka service in the cloud. IBM Bluemix has Message Hub, a fully managed, cloud-based messaging service based on Kafka. CloudKarafka is another streaming platform in the public cloud designed for Apache Kafka workloads. Aiven.io offers hosted Kafka along with InfluxDB, Grafana, and Elasticsearch. If you are an existing Salesforce.com or Heroku developer, you can take advantage of Apache Kafka on Heroku.

Apache Kafka is the foundation of many Big Data deployments. In the upcoming articles of this series, I will introduce the key concepts, architecture, and terminology of Kafka. Stay tuned.

InApps is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Feature image: Salt Creek Falls, Oregon, by Nathan Anderson, via Unsplash.