MQTT is a lightweight, standard publish-subscribe messaging protocol used to transfer data over a network. MQTT is supported by a wide range of Internet of Things (IoT) devices and manufacturing execution systems (MES).

You can use MQTT to send predictions from a computer vision model to a different device on your network or an MES. You can run your vision model on one device, such as an NVIDIA Jetson, and send the results for further processing elsewhere.

In this guide, we are going to walk through an example of how to broadcast computer vision predictions over MQTT. Here is an example of predictions being broadcast over MQTT:

[Video: computer vision predictions broadcast over MQTT]

To follow this guide, you will need:

  1. A free Roboflow account, and;
  2. An MQTT broker set up to receive information.

We will walk through an example of sending predictions from a bottle cap inspection model over MQTT. But you can follow this guide with any model hosted on Roboflow or run with Roboflow Inference.

Without further ado, let’s get started!

Step #1: Prepare a Model

Before we can send model predictions, we need to deploy a model. For this guide, we will use Roboflow Inference, an open source computer vision inference server. The software behind Inference powers millions of API calls a month for enterprises. You can deploy any model trained on or uploaded to Roboflow.

We are going to deploy a bottle cap quality assurance model. This model can be used to ensure that a cap is properly sealed on a bottle. We will deploy this model on a webcam.

To learn more about training a model on Roboflow, refer to the Roboflow Getting Started guide.

Once you have a model ready, you are ready to install Inference:

pip install inference

Then, create a new Python file and add the following code:

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="bottle-cap-integrity/3",
    video_reference=0, # Path to video, device id (int, usually 0 for built in webcams), or RTSP stream url
    on_prediction=render_boxes,
)

pipeline.start()
pipeline.join()

In this code, we use InferencePipeline to run our model on a video stream. The value 0 is the device ID of our webcam, where 0 is the default webcam on a device. You can also pass an RTSP stream URL if you want to run inference on an RTSP stream, as shown in the sketch below.
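For example, here is a sketch of the same pipeline pointed at an RTSP stream; the URL below is a placeholder, so substitute your own stream address:

pipeline = InferencePipeline.init(
    model_id="bottle-cap-integrity/3",
    video_reference="rtsp://192.168.1.50:554/stream", # placeholder RTSP URL
    on_prediction=render_boxes,
)

pipeline.start()
pipeline.join()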

To run the code above, you will need to set your Roboflow API key in an environment variable called ROBOFLOW_API_KEY:

export ROBOFLOW_API_KEY=""

Learn how to retrieve your API key.

We can run the code above to test our model:

[Video: testing the bottle cap model on a webcam]

Our model successfully identifies when the bottle in frame is not properly sealed. Now, we can work on broadcasting information from our model across MQTT.

Step #2: Broadcast Predictions Over MQTT

A minimal MQTT deployment involves two components:

  1. A broker, which receives and processes messages, and;
  2. A client, which sends messages to the broker.

For this guide, we assume you already have a broker set up. This may be a broker provided by your manufacturing execution system or a broker deployed using another system in your facility.
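If you do not yet have a broker and want one for local testing, a common choice is Eclipse Mosquitto. For example, here is a minimal sketch using Docker (note that Mosquitto 2.x ships with a locked-down default configuration, so you may need to supply a config file that allows connections from your client):

docker run -it -p 1883:1883 eclipse-mosquitto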

Let’s write the client code we need to send messages to the broker.

For this, we are going to use the Paho MQTT client, a well-maintained Python package for publishing MQTT messages.

To install Paho, run:

pip3 install paho-mqtt

Next, create a new Python file and add the following code:

import json

import paho.mqtt.client as mqtt
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)

client.connect(
    host="localhost",
    port=1883,
)
# Start a background network loop so queued messages are reliably sent
client.loop_start()


def on_prediction(predictions, video_frame):
    # Render bounding boxes locally, then publish the raw predictions
    render_boxes(predictions=predictions, video_frame=video_frame)
    payload = json.dumps(predictions)
    client.publish(
        topic="bottle-cap-integrity", payload=payload, qos=0, retain=False
    )


pipeline = InferencePipeline.init(
    model_id="bottle-cap-integrity/7",
    video_reference=0,
    on_prediction=on_prediction,
    confidence=0.3,
)

pipeline.start()
pipeline.join()

In the code above, replace:

  1. localhost with your broker URL;
  2. 1883 with your broker port;
  3. bottle-cap-integrity with the name of the MQTT topic to which you want to send messages, and;
  4. 0 with your video input. 0 is your default webcam.

The code above does not have TLS (SSL) enabled, since TLS configuration varies between brokers. Make sure you enable TLS authentication with your broker when deploying your code in production, and adjust the code above as necessary.
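For reference, here is a sketch of how TLS is typically enabled in Paho before connecting; the certificate paths and hostname below are hypothetical:

# Hypothetical certificate paths; use the credentials issued for your broker.
client.tls_set(
    ca_certs="/path/to/ca.crt",
    certfile="/path/to/client.crt",
    keyfile="/path/to/client.key",
)
client.connect(host="broker.example.com", port=8883) # 8883 is the conventional MQTT-over-TLS port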

In the code above, we connect to our MQTT broker. Then, we define a callback function that sends every prediction from our model over to MQTT. This is done using the client.publish() call. We then use the InferencePipeline API offered in the Roboflow Inference package to run our model on a video stream.
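The payload published to the topic is the serialized prediction dictionary from Inference. An illustrative object detection payload (the values are made up and exact keys can vary by model type) looks something like this:

{
  "predictions": [
    {
      "x": 320.5,
      "y": 240.0,
      "width": 85.0,
      "height": 40.0,
      "confidence": 0.91,
      "class": "unsealed cap"
    }
  ]
}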

Let’s run our code above. In the video below, we have two streams open: the predictions from our model displayed in real time and the messages our broker is receiving.

[Video: model predictions and broker messages side by side]

On the left window in the video, you can see predictions from our model. On the right window, predictions are coming in every time there is a bottle on the screen.
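If you want to inspect the messages your broker receives, a minimal Paho subscriber sketch (assuming the same broker and topic as above) could look like this:

import paho.mqtt.client as mqtt


def on_message(client, userdata, message):
    # Print the topic and JSON payload of each prediction message
    print(message.topic, message.payload.decode())


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
client.on_message = on_message
client.connect(host="localhost", port=1883)
client.subscribe("bottle-cap-integrity")
client.loop_forever()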

You can customize the code above to meet your business requirements. For example, you can send predictions only if a defective object comes into frame. This can be done using supervision, a Python package that provides a range of utilities for use in building computer vision applications. 

With supervision, you can use ByteTrack, an object tracking algorithm, to track objects between frames. Then, you can trigger an MQTT message when a new object comes into view. This is ideal on an assembly line where you only want to trigger one evaluation per product (in this case, one evaluation for each bottle that passes the camera).
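Here is a sketch of that idea; the tracker set and callback below are illustrative, not part of the code above. A message is published only when a tracker ID appears for the first time:

import json

import paho.mqtt.client as mqtt
import supervision as sv
from inference import InferencePipeline

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
client.connect(host="localhost", port=1883)
client.loop_start()

tracker = sv.ByteTrack()
seen_tracker_ids = set() # tracker IDs we have already published


def on_prediction(predictions, video_frame):
    # Convert Inference predictions to a supervision Detections object
    detections = sv.Detections.from_inference(predictions)
    detections = tracker.update_with_detections(detections)
    for tracker_id in detections.tracker_id:
        tracker_id = int(tracker_id)
        if tracker_id not in seen_tracker_ids:
            seen_tracker_ids.add(tracker_id)
            # Publish one message per newly tracked object
            client.publish(
                topic="bottle-cap-integrity", payload=json.dumps(predictions)
            )


pipeline = InferencePipeline.init(
    model_id="bottle-cap-integrity/7",
    video_reference=0,
    on_prediction=on_prediction,
)

pipeline.start()
pipeline.join()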

Conclusion

MQTT is a standard protocol for exchanging data. MQTT is commonly used in Internet of Things and manufacturing applications to send messages on a local network. You can use MQTT to broadcast computer vision model predictions.

In this guide, we walked through how to broadcast computer vision predictions over MQTT. We showed how to deploy a model trained on or uploaded to Roboflow. We deployed a model using Roboflow Inference, which lets you run models on your own device. We then sent MQTT messages for all predictions returned by our model.

If you need assistance integrating MQTT into your manufacturing pipeline, contact the Roboflow sales team to find out more about how Roboflow can help.