
6 posts tagged with "aws"


Michael Hart · 13 min read

This post is about how to build an AWS Step Functions state machine and how you can use it to interact with IoT edge devices. In this case, we are sending a smoothie order to a "robot" and waiting for it to make that smoothie.

The state machine works by chaining together a series of Lambda functions and defining how data should be passed between them (if you're not sure about Lambda functions, take a look at this blog post!). There's also a step where the state machine needs to wait for the smoothie to be made, which is slightly more complicated - we'll cover that later in this post.

This post is also available in video form - check the video link below if you want to follow along!

AWS Step Functions Service

AWS Step Functions is an AWS service that allows users to build serverless workflows. Serverless came up in my post on Lambda functions - it means that you can run applications in the cloud without provisioning any servers or constantly-running resources. That in turn means you only pay for the time that something is executing in the cloud, which is often much cheaper than provisioning a server, but with the same performance.

To demonstrate Step Functions, we're building a state machine that accepts smoothie orders from customers and sends them to an available robot to make that smoothie. Our state machine will look for an available robot, send it the order, and wait for the order to complete. The state machine will be built in AWS Step Functions, which we can access using the console.

State Machine Visual Representation

First, we'll look at the finished state machine to get an idea of how it works. Clicking the edit button within the state machine will open the workflow Design tab for a visual representation of the state machine:

Visual representation of Step Functions State Machine

Each box in the diagram is a stage of the Step Functions state machine. Most of the stages are Lambda functions, which are configured to interface with AWS resources. For example, the first stage (GetRobot) scans a DynamoDB table for the first robot with the ONLINE status, meaning that it is ready for work.
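
The functions in the repository are written in Rust (more on that later), but purely to illustrate the logic, a Python/boto3 equivalent of that scan might look like the sketch below - the TABLE_NAME environment variable name is an assumption for illustration:

import os

import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["TABLE_NAME"])  # table name passed in as an environment variable

def get_available_robot():
    # Scan for robots whose status is ONLINE and take the first one found
    response = table.scan(FilterExpression=Attr("status").eq("ONLINE"))
    items = response.get("Items", [])
    if not items:
        raise RuntimeError("No available robots")
    return items[0]["name"]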

If at least one robot is available, GetRobot will pass its name to the next stage - SetRobotWorking. This function updates that robot's entry in the DynamoDB table to WORKING, so future invocations don't try to give that robot another smoothie order.

From there, the robot name is again passed on to TellRobotOrder, which is responsible for sending an MQTT message via AWS IoT Core to tell the robot its new smoothie order. This is where the state machine gets slightly more complicated - we need the state machine to pause and wait for the smoothie to be made.

Activities

While we're waiting for the smoothie to be made, we could have the Lambda function wait for a response, but we would be paying for the entire time that function is sitting and waiting. If the smoothie takes 5 minutes to complete, that would be over 6000x the price!

Instead, we can use the Activities feature of Step Functions to allow the state machine to wait at no extra cost. The system follows this setup:

IoT Rule to Robot Diagram

When the state machine sends the smoothie order to the robot, it includes a generated task token. The robot then makes the smoothie and, when it is finished, publishes a message saying it was successful, including that same task token. An IoT Rule forwards that message to another Lambda function, which tells the state machine that the task was a success. Finally, the state machine updates the robot's status back to ONLINE, so it can receive more orders, and the state machine completes successfully.
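
The Lambda function that reports success is written in Rust in the repository, but the key API call is Step Functions' SendTaskSuccess. As a rough Python illustration (the TaskToken field name in the forwarded payload is an assumption):

import json

import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    # The IoT Rule forwards the robot's MQTT message, which carries the same
    # task token the robot was given alongside its smoothie order.
    task_token = event["TaskToken"]
    sfn.send_task_success(
        taskToken=task_token,
        output=json.dumps({"status": "SUCCESS"}),
    )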

Why go through Lambda and IoT Core?

The robot could directly call the Task Success API, but we would need to give it permission to do so - as well as a direct internet connection. This version of the system means that the robot only ever communicates using MQTT messages via AWS IoT Core. See my video on AWS IoT Core to see how to set this up.

Testing the Smoothie State Machine

To test the state machine, we start with a table with two robots, both with ONLINE status. If you follow the setup instructions in the README, your table will have these entries:

Robots with ONLINE state

Successful Execution

If we now request any kind of smoothie using the test_stepfunction.sh script, we start an execution of the state machine. It will find that Robot1 is free to perform the function and update its status to WORKING:

Robot1 with WORKING state

Then it will send an MQTT message requesting the smoothie. After a few seconds, the mock robot script will respond with a success message. We can see this in the MQTT test client:

MQTT Test Client showing order and success messages

This allows the state machine to finish its execution successfully:

Successful step function execution

If we click on the execution, we can see the successful path lit up in green:

State machine diagram with successful states

Smoothie Complete!

We've made our first fake smoothie! Now we should make sure we can handle errors that happen during smoothie making.

Robot Issue during Execution

What happens if there is an issue with the robot? Here we can use error handling in Step Functions. We define a timeout on the smoothie making task, and if that timeout is reached before the task is successful, we catch the error - in this case, we update the robot's state to BROKEN and fail that state machine's execution.

To test this, we can kill the mock robot script, which simulates all robots being offline. In this case, running the test_stepfunction.sh script will request the smoothie from Robot1, but the request will then time out after 10 seconds. This updates the robot's state to BROKEN, ensuring that future executions do not request smoothies from Robot1.

Robot Status shown as BROKEN

The overall state machine execution also fails, allowing us to alert the customer of the failure:

Execution fails from time out

We can also see what happened to cause the failure by clicking on the execution and scrolling to the diagram:

State Machine diagram of timeout failure

Another execution will have the same effect for Robot2, leaving us with no available robots.

No Available Robots

If we never add robots into the table, or all of our robots are BROKEN or WORKING, we won't have a robot to make a smoothie order. That means our state machine will fail at the first step - getting an available robot:

State Machine diagram with no robots available

That's our state machine defined and tested. In the next section, we'll take a look at how it's built.

Building a State Machine

To build the Step Functions state machine, we have a few options, but I would recommend using CDK for the definition and the visual designer in the console for prototyping. If you're not sure what the benefits of using CDK are, I invite you to watch my video where I discuss how to use CDK with SiteWise:

The workflow goes something like this:

  1. Make a base state machine with functions and AWS resources using CDK
  2. Use the visual designer to prototype and build the stages of the state machine up further
  3. Define the stages back in the CDK code to make the state machine reproducible and recover from any breaking changes made in the previous step

Once complete, you should be able to deploy the CDK stack to any AWS account and have a fully working serverless application! To make this step simpler, I've uploaded my CDK code to a Github repository. Setup instructions are in the README, so I'll leave them out of this post. Instead, we'll break down some of the code in the repository to see how it forms the full application.

CDK Stack

This time, I've split the CDK stack into multiple files to make the dependencies and interactions clearer. In this case, the main stack is at lib/cdk-stack.ts, and refers to the four components:

  1. RobotTable - the DynamoDB table containing robot names and statuses
  2. Functions - the Lambda functions with the application logic, used to interact with other AWS services
  3. IoTRules - the IoT Rule used to forward the MQTT message from a successful smoothie order back to the Step Function
  4. SmoothieOrderHandler - the definition of the state machine itself, referring to the Lambda functions in the Functions construct

We can take a look at each of these in turn to understand how they work.

RobotTable

This construct is simple; it defines a DynamoDB table where the name of the robot is the primary key. The table will be filled by a script after stack deployment, so this is as much as is needed. Once filled, the table will have the same contents as shown in the testing section.

Functions

This construct defines four Lambda functions. All four are written using Rust to minimize the execution time - the benefits are discussed more in my blog post on Lambda functions. Each handler function is responsible for one small task to show how the state machine can pass data around.

Combining Functions

We could simplify the state machine by combining functions together, or using Step Functions to call AWS services directly. I'll leave it to you to figure out how to simplify the state machine!

The functions are as follows:

  1. Get Available Robot - scans the DynamoDB table to find the first robot with ONLINE status. Requires the table name as an environment variable, and permission to read the table.
  2. Update Status - updates the robot name to the given status in the DynamoDB table. Also requires the table name as an environment variable, and permission to write to the table.
  3. Send MQTT - sends a smoothie order to the given robot name. Requires IoT data permissions to connect to IoT Core and publish a message.
  4. Send Task Success - called by an IoT Rule when a robot publishes that it has successfully finished a smoothie. Requires permission to send the task success message to the state machine; this permission can only be granted after the state machine is defined, hence it is added in a separate function.

IoT Rules

This construct defines an IoT Rule that listens on the topic filter robots/+/success for any messages, then pulls out the contents of the MQTT message and calls the Send Task Success Lambda function. The only additional permission it needs is to invoke the Send Task Success Lambda function.

Smoothie Order Handler

This construct pulls all the Lambda functions together into our state machine. Each stage corresponds to one of the stages in the State Machine Visual Representation section.

The actual state machine is defined as a chain of functions:

const orderDef = getAvailableRobot
    .next(setRobotWorking)
    .next(tellRobotOrder
        .addCatch(setRobotBroken.next(finishFailure),
            {
                errors: [step.Errors.TIMEOUT],
                resultPath: step.JsonPath.DISCARD,
            })
    )
    .next(setRobotFinished)
    .next(finishSuccess);

Defining each stage as a constant, then chaining them together, allows us to see the logic of the state machine more easily. However, it does hide the information that is being passed between stages - Step Functions will store metadata while executing and pass the output of one function to the next. We don't always want to pass the output of one function directly to another, so we define how to modify the data for each stage.

For example, the Get Robot function looks up a robot name, so the entire output payload should be saved for the next function:

const getAvailableRobot = new steptasks.LambdaInvoke(this, 'GetRobot', {
    lambdaFunction: functions.getAvailableRobotFunction,
    outputPath: "$.Payload",
});

However, the Set Robot Working stage does not produce any relevant output for future stages, so its output can be discarded. Also, it needs a new Status field defined for the function to work, so the payload is defined in the stage. To set one of the fields based on the output of the previous function, we use .$ to tell Step Functions to fill it in automatically. Hence, the result is:

const setRobotWorking = new steptasks.LambdaInvoke(this, 'SetRobotWorking', {
    lambdaFunction: functions.updateStatusFunction,
    payload: step.TaskInput.fromObject({
        "RobotName.$": "$.RobotName",
        "Status": "WORKING",
    }),
    resultPath: step.JsonPath.DISCARD,
});

Another interesting thing to see in this construct is how to define a stage that waits for a task to complete before continuing. This is done by changing the integration pattern, plus passing the task token to the task handler - in this case, our mock robot. The definition is as follows:

const tellRobotOrder = new steptasks.LambdaInvoke(this, 'TellRobotOrder', {
    lambdaFunction: functions.sendMqttFunction,
    // Define the task token integration pattern
    integrationPattern: step.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
    // Define the task timeout
    taskTimeout: step.Timeout.duration(cdk.Duration.seconds(10)),
    payload: step.TaskInput.fromObject({
        // Pass the task token to the task handler
        "TaskToken": step.JsonPath.taskToken,
        "RobotName.$": "$.RobotName",
        "SmoothieName.$": "$.SmoothieName",
    }),
    resultPath: step.JsonPath.DISCARD,
});

This tells the state machine to generate a task token and give it to the Lambda function as defined, then wait for a task success signal before continuing. We can also define a catch route in case the task times out, which is done using the addCatch function:

.addCatch(setRobotBroken.next(finishFailure),
    {
        errors: [step.Errors.TIMEOUT],
        resultPath: step.JsonPath.DISCARD,
    })

With that, we've seen how the state machine is built, seen how it runs, and seen how to completely define it in CDK code.

Challenge!

Do you want to test your understanding? Here are a couple of challenges for you to extend this example:

  1. Retry making the smoothie! If a robot times out making the smoothie, just cancelling the order is not a good customer experience - ideally, the system should give the order to another robot instead. See if you can set up a retry path from the BROKEN robot status update back to the start of the state machine.
  2. Add a queue to the input! At present, if we have more orders than robots, the later orders will simply fail immediately. Try adding a queue that starts executing the state machine using Amazon Simple Queue Service (SQS).

Summary

Step Functions can be used to build serverless applications as state machines that call other AWS resources. In particular, a powerful combination is Step Functions with AWS Lambda functions for the application logic.

We can use other serverless AWS resources to access more cloud functionality or interface with edge devices. In this case, we use MQTT messages via IoT Core to message robots with smoothie orders, then listen for the responses to those messages to continue execution. We can also use a DynamoDB table - a serverless database table - to store each robot's current status as the state machine executes.

Best of all, this serverless application runs in the cloud, giving us all of the advantages of running using AWS - excellent logging and monitoring, fine-grained permissions, and modifying the application on demand, to name a few!

Michael Hart · 17 min read

This is the second part of the "ROS2 Control with the JetBot" series, where I show you how to get a JetBot working with ROS2 Control! This is a sequel to the part 1 blog post, where I showed how to drive the JetBot's motors using I2C and PWM with code written in C++.

In this post, I show the next step in making ROS2 Control work with the WaveShare JetBot - wrapping the motor control code in a System. I'll walk through some concepts, show the example repository for ROS2 Control implementations, and then show how to implement the System for JetBot and see it running.

This post is also available in video form - check the video link below if you want to follow along!

ROS2 Control Concepts

First, before talking about any of these concepts, there's an important distinction to make: ROS Control and ROS2 Control are different frameworks, and are not compatible with one another. This post is focused on ROS2 Control - or as their documentation calls it, ros2_control.

ros2_control's purpose is to simplify integrating new hardware into ROS2. The central idea is to separate controllers from systems, actuators, and sensors. A controller is responsible for controlling the movement of a robot; an actuator is responsible for moving a particular joint, like a motor moving a wheel. There's a good reason for this separation: it allows us to write a controller for a wheel configuration, without knowing which specific motors are used to move the wheels.

Let's take an example: the Turtlebot and the JetBot are both driven using one wheel on each side and casters to keep the robots level. These are known as differential drive robots.

Turtlebot image with arrows noting wheels

Turtlebot 3 Burger image edited from Robotis

JetBot image with arrows noting wheels and caster

WaveShare JetBot AI Kit image edited from NVIDIA

As the motor configuration is the same, the mathematics for controlling them is also the same, which means we can write one controller to control either robot - assuming we can abstract away the code to move the motors.

In fact, this is exactly what's provided by the ros2_controllers library. This library contains several standard controllers, including our differential drive controller. We could build a JetBot and a Turtlebot by setting up this standard controller to be able to move their motors - all we need to do is write the code for moving the motors when commanded to by the controller.
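
To make "the mathematics" a little more concrete, here's a minimal sketch of the kinematics a differential drive controller applies - this is just the underlying idea, not the ros2_control implementation:

def wheel_speeds(linear_x, angular_z, wheel_separation, wheel_radius):
    """Convert a body velocity command into left/right wheel angular velocities (rad/s)."""
    # Linear velocity of each wheel along the ground (m/s)
    v_left = linear_x - angular_z * wheel_separation / 2.0
    v_right = linear_x + angular_z * wheel_separation / 2.0
    # Divide by the wheel radius to get each wheel's angular velocity (rad/s)
    return v_left / wheel_radius, v_right / wheel_radius

# Example: drive straight ahead at 0.016 m/s with the JetBot measurements used later in this post
print(wheel_speeds(0.016, 0.0, wheel_separation=0.104, wheel_radius=0.032))  # (0.5, 0.5)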

ros2_control also provides the controller manager, which is used to manage resources and activate/deactivate controllers, to allow for advanced functionality like switching between controllers. Our use case is simple, so we will only use it to activate the controller. This architecture is explained well in the ros2_control documentation - see the architecture page for more information.

This post shows how to perform this process for the JetBot. We're going to use the I2C and motor classes from the previous post in the series to define a ros2_control system that will work with the differential drive controller. We use a System rather than an Actuator because we want to define one class that can control both motors in one write call, instead of having two separate Actuators.

ROS2 Control Demos Repository

To help us with our ros2_control system implementation, the ros2_control framework has helpfully provided us with a set of examples. One of these examples is exactly what we want - building a differential drive robot (or diffbot, in the examples) with a custom System for driving the motors.

The repository has a great many examples available. If you're here to learn about ros2_control, but not to build a diffbot, there are examples of building simulations, building URDF files representing robots, externally connected sensors, and many more.

We will be using example 2 from this demo repository as a basis, but stripping out anything we don't require right now, like supporting simulation; we can add these parts back in later iterations as we come to understand them.

JetBot System Implementation

In this section, I'll take you through the key parts of my JetBot System implementation for ros2_control. The code is available on Github - remember that this repository will be updated over time, so select the tag jetbot-motors-pt2 to get the same code version as in this article!

Components are libraries, not nodes

ros2_control uses a different method of communication from the standard ROS2 publish/subscribe messaging. Instead, the controller will load the code for the motors as a plugin library, and directly call functions inside it. This is the reason we had to rewrite the motor driver in C++ - it has to be a library that can be loaded by ros2_control, which is written in C++.

Previously, we wrote an example node that spun the wheels using the motor driver; now we are replacing this executable with a library that can be loaded by ros2_control. In CMakeLists.txt, we can see:

add_library(${PROJECT_NAME}
  SHARED
  hardware/src/jetbot_system.cpp
  hardware/src/i2c_device.cpp
  hardware/src/motor.cpp
)

...

pluginlib_export_plugin_description_file(hardware_interface jetbot_control.xml)

These are the lines that build the JetBot code as a library instead of a standalone executable, and export definitions that show it is a valid plugin library to be loaded by ros2_control. A new file, jetbot_control.xml, tells ros2_control more information about this library to allow it to be loaded - in this case, the library name and ros2_control plugin type (SystemInterface - we'll discuss this more in the Describing the JetBot section).

Code Deep Dive

For all of the concepts in ros2_control, the actual implementation of a System is quite simple. Our JetBotSystemHardware class extends the SystemInterface class:

class JetBotSystemHardware : public hardware_interface::SystemInterface {

In the private fields of the class, we create the fields that we will need during execution. This includes the I2CDevice and two Motor classes from the previous post, along with two vectors for the hardware commands and hardware velocities:

 private:
  std::vector<MotorPins> motor_pin_sets_;
  std::vector<Motor> motors_;
  std::shared_ptr<I2CDevice> i2c_device_;
  std::vector<double> hw_commands_;
  std::vector<double> hw_velocities_;

Then, a number of methods need to be overridden from the base class. Take a look at the full header file to see them, but essentially it boils down to three concepts:

  1. export_state_interfaces/export_command_interfaces: report the state and command interfaces supported by this system class. These interfaces can then be checked by the controller for compatibility.
  2. on_init/on_activate/on_deactivate: lifecycle methods automatically called by the controller. Different setup stages for the System occur in these methods, including enabling the motors in the on_activate method and stopping them in on_deactivate.
  3. read/write: methods called every controller update. read is for reading the velocities from the motors, and write is for writing requested speeds into the motors.

From these, we use the on_init method to:

  1. Initialize the base SystemInterface class
  2. Read the pin configuration used for connecting to the motors from the parameters
  3. Check that the provided hardware information matches the expected information - for example, that there are two velocity command interfaces
  4. Initialize the I2CDevice and Motors

This leaves the System initialized, but not yet activated. Once on_activate is called, the motors are enabled and ready to receive commands. The read and write methods are then repeatedly called for reading from and writing to the motors respectively. When it's time to shut down, on_deactivate will stop the motors, and the destructors of the classes perform any required cleanup. There are more lifecycle states that could potentially be used for a more complex system - these are documented in the ros2 demos repository.

This System class, together with the I2CDevice and Motor classes, is compiled into the plugin library, ready to be loaded by the controller.

Describing the JetBot

The SystemInterface then comes into play when describing the robot. The description folder from the example contains the files that define the robot, including its ros2_control configuration, simulation configuration, and materials used to represent it during simulation. As this implementation has been pared down to basics, only the ros2_control configuration with the mock hardware flag has been kept.

The jetbot.ros2_control.xacro file defines the ros2_control configuration needed to control the robot. It uses xacro files to define this configuration, where xacro is a tool that extends XML files by allowing us to define macros that can be referenced in other files:

<xacro:macro name="jetbot_ros2_control" params="name prefix use_mock_hardware">

In this case, we are defining a macro for the ros2_control part of the JetBot that can be used in the overall robot description.

We then define the ros2_control portion with type system:

<ros2_control name="${name}" type="system">

Inside this block, we give the path to the plugin library, along with the parameters needed to configure it. You may recognize the pin numbers in this section!

<hardware>
  <plugin>jetbot_control/JetBotSystemHardware</plugin>
  <param name="pin_enable_0">8</param>
  <param name="pin_pos_0">9</param>
  <param name="pin_neg_0">10</param>
  <param name="pin_enable_1">13</param>
  <param name="pin_pos_1">12</param>
  <param name="pin_neg_1">11</param>
</hardware>

This tells any controller loading our JetBot system hardware which pins are used to drive the PWM chip. But, we're not done yet - we also need to tell ros2_control the command and state interfaces available.

ros2_control Joints, Command Interfaces, and State Interfaces

ros2_control uses joints to understand what the movable parts of a robot are. In our case, we define one joint for each motor.

Each joint then defines a number of command and state interfaces. Each command interface accepts velocity, position, or effort commands, which allows ros2_control controllers to command the joints to move as they need. State interfaces report a measurement from the joint - velocity, position, or effort - which allows ros2_control to monitor how much the joint has actually moved and adjust itself. In our case, each joint accepts velocity commands and reports measured velocity - although we configure the controller to ignore the reported velocity, because we don't actually have a sensor like an encoder in the JetBot. This means we're using open loop control, as opposed to closed loop control.

<joint name="${prefix}left_wheel_joint">
  <command_interface name="velocity"/>
  <state_interface name="velocity"/>
</joint>

Closed loop control is far more accurate than open loop control. Imagine you're trying to sprint exactly 100 metres from a starting line, but you have to do it once blindfolded, and once without a blindfold and with line markings every ten metres - which run is likely to be more accurate? In the JetBot, there's no sensor to measure how much it has moved, so the robot is effectively blindfolded and guessing how far it has travelled. This means our navigation won't be as accurate - we are limited by hardware.

JetBot Description

With the ros2_control part of the JetBot defined, we can import and use this macro in the overall JetBot definition. As we've stripped out all other definitions, such as simulation parameters, this forms the only part of the overall JetBot definition:

<xacro:include filename="$(find jetbot_control)/ros2_control/jetbot.ros2_control.xacro" />
<xacro:jetbot_ros2_control
name="JetBot" prefix="$(arg prefix)" use_mock_hardware="$(arg use_mock_hardware)"/>

Let's summarize what we've created so far:

  1. A plugin library capable of writing commands to the JetBot motors
  2. A ros2_control xacro file, describing the plugin to load and the parameters to give it
  3. One joint per motor, each with a velocity command and state interface
  4. An overall description file that imports the ros2_control file and calls the macro

Now when we use xacro to build the overall description file, it will import the ros2_control file macro and expand it, giving a complete robot description that we can add to later. It's now time to look at creating a controller manager and a differential drive controller.

Creating A Controller

So far, we've defined a JetBot using description files. Now we want to be able to launch ros2_control and tell it what controller to create, how to configure it, and how to load our defined JetBot. For this, we use the jetbot_controllers.yaml file.

We start with the controller_manager. This is used to load one or more controllers and swap between them. It also makes sure that resources are only used by one controller at a time and manages the change between controllers. In our case, we're only using it to load and run one controller:

controller_manager:
  ros__parameters:
    update_rate: 10 # Hz

    jetbot_base_controller:
      type: diff_drive_controller/DiffDriveController

We tell the manager to update at 10Hz and to load the diff_drive_controller/DiffDriveController controller. This is the standard differential drive controller discussed earlier. If we take a look at the information page, we can see a lot of configuration for it - we provide this configuration in the same file.

We define that the controller is open loop, as there is no feedback. We give the names of the joints for the controller to control - this is how the controller knows it can send velocities to the two wheels implemented by our system class. We also set velocity limits on both linear and angular movement:

linear.x.max_velocity: 0.016
linear.x.min_velocity: -0.016
angular.z.max_velocity: 0.25
angular.z.min_velocity: -0.25

These numbers were obtained through experimentation! ros2_control operates using target velocities specified in radians per second [source]. However, the velocity we send to the motors doesn't correspond to radians per second - the range of -1 to +1 maps from the minimum to the maximum velocity of the motors, which changes with the battery level of the robot. The limits given above move the robot at a reasonable pace.

Finally, we supply the wheel separation and radius, specified in metres. I measured these from my own robot. The separation is the minimum separation between the wheels, and the radius is measured from the centre of one wheel to its edge:

wheel_separation: 0.104
wheel_radius: 0.032

With this, we have described how to configure a controller manager with a differential drive controller to control our JetBot!

Launching the Controller

The last step here is to provide a launch script to bring everything up. The example again provides us with the launch script, including a field that allows us to launch with mock hardware if we want - this is great for testing that everything loads correctly on a system that doesn't have the right hardware.

The launch script goes through a few steps to get to the full ros2_control system, starting with loading the robot description. We specify the path to the description file relative to the package, and use the xacro tool to generate the full XML for us:

# Get URDF via xacro
robot_description_content = Command(
    [
        PathJoinSubstitution([FindExecutable(name="xacro")]),
        " ",
        PathJoinSubstitution(
            [FindPackageShare("jetbot_control"), "urdf", "jetbot.urdf.xacro"]
        ),
        " ",
        "use_mock_hardware:=",
        use_mock_hardware,
    ]
)
robot_description = {"robot_description": robot_description_content}

Following this, we load the jetbot controller configuration:

robot_controllers = PathJoinSubstitution(
    [
        FindPackageShare("jetbot_control"),
        "config",
        "jetbot_controllers.yaml",
    ]
)

With the robot description and the robot controller configuration loaded, we can pass these to the controller manager:

control_node = Node(
    package="controller_manager",
    executable="ros2_control_node",
    parameters=[robot_description, robot_controllers],
    output="both",
)

Finally, we ask the launched controller manager to start up the jetbot_base_controller:

robot_controller_spawner = Node(
    package="controller_manager",
    executable="spawner",
    arguments=[
        "jetbot_base_controller",
        "--controller-manager",
        "/controller_manager",
    ],
)

All that remains is to build the package and launch the new launch file!

ros2_control Launch Execution

This article has been written from the bottom up, but now we have the full story, we can look from the top down:

  1. We launch the JetBot launch file defined in the package
  2. The launch file spawns the controller manager, which is used to load controllers and manage resources
  3. The launch file requests that the controller manager launches the differential drive controller
  4. The differential drive controller loads the JetBot System as a plugin library
  5. The System connects to the I2C bus, and hence, the motors
  6. The controller can then command the System to move the motors as requested by ROS2 messaging

Hooray! We have defined everything we need to launch ros2_control and configure it to control our JetBot! Now we have a controller that is able to move our robot around.

Running on the JetBot

To try the package out, we first need a working JetBot. If you're not sure how to do the initial setup, I've created a video on exactly that:

With the JetBot working, we can create a workspace and clone the code into it. Use VSCode over SSH to execute the following commands:

mkdir ~/dev_ws
cd ~/dev_ws
git clone https://github.com/mikelikesrobots/jetbot-ros-control -b jetbot-motors-pt2
cp -r ./jetbot-ros-control/.devcontainer .

Then use the Dev Containers plugin to rebuild and reload the container. This will take a few minutes, but the step is crucial to allow us to run ROS2 Humble on the JetBot, which uses an older version of Ubuntu. Once complete, we can build the workspace, source it, and launch the controller:

source /opt/ros/humble/setup.bash
colcon build --symlink-install
source install/setup.bash
ros2 launch jetbot_control jetbot.launch.py

This should launch the controller and allow it to connect to the motors successfully. Now we can use teleop_twist_keyboard to test it - but with a couple of changes.

First, we now expect messages to go to the /jetbot_base_controller/cmd_vel topic instead of the previous /cmd_vel topic. We can fix that by asking teleop_twist_keyboard to remap the topic it normally publishes to.

Secondly, we normally expect /cmd_vel to accept Twist messages, but the controller expects TwistStamped messages. There is a parameter for teleop_twist_keyboard that turns its messages into TwistStamped messages, but while trying it out I found that the node ignored that parameter. Checking it out and building from source fixed it for me, so in order to run the keyboard test, I recommend building and running from source:

git clone https://github.com/ros2/teleop_twist_keyboard
colcon build --symlink-install
source install/setup.bash
ros2 run teleop_twist_keyboard teleop_twist_keyboard \
--ros-args \
-p stamped:=true \
-r /cmd_vel:=/jetbot_base_controller/cmd_vel

Once running, you should be able to use the standard keyboard controls written on screen to move the robot around. Cool!

Let's do one more experiment, to see how the configuration works. Go into the jetbot_controllers.yaml file and play with the maximum velocity and acceleration fields, to see how the robot reacts. Relaunch after every configuration change to see the result. You can also tune these parameters to match what you expect more closely.

That's all for this stage - we have successfully integrated our JetBot's motors into a ros2_control System interface!

Next Steps

Having this setup gives us a couple of options going forwards.

First, we stripped out a lot of configuration that supported simulation - we could add this back in to support Gazebo simulation, where the robot in the simulation should act nearly identically to the real life robot. This allows us to start developing robotics applications purely in simulation, which is likely to be faster due to the reset speed of the simulation, lack of hardware requirements, and so on.

Second, we could start running a navigation stack that can move the robot for us; for example, we could request that the robot reaches an end point, and the navigation system will plan a path to take the robot to that point, and even face the right direction.

Stay tuned for more posts in this series, where we will explore one or both of these options, now that we have the robot integrated into ROS2 using ros2_control.

Michael Hart · 14 min read

This post shows how to build two simple functions, running in the cloud, using AWS Lambda. The purpose of these functions is the same - to update the status of a given robot name in a database, allowing us to view the current statuses in the database or build tools on top of it. This is one way we could coordinate robots in one or more fleets - using the cloud to store the state and run the logic to co-ordinate those robots.

This post is also available in video form - check the video link below if you want to follow along!

What is AWS Lambda?

AWS Lambda is a service for executing serverless functions. That means you don't need to provision any virtual machines or clusters in the cloud - just trigger the Lambda with some kind of event, and your pre-built function will run. It runs on inputs from the event and could give you some outputs, make changes in the cloud (like database modifications), or both.

AWS Lambda charges based on the time taken to execute the function and the memory assigned to the function. The compute power available for a function scales with the memory assigned to it. We will explore this later in the post by comparing the memory and execution time of two Lambda functions.

In short, AWS Lambda allows you to build and upload functions that will execute in the cloud when triggered by configured events. Take a look at the documentation if you'd like to learn more about the service!

How does that help with robot co-ordination?

Moving from one robot to multiple robots helping with the same task means that you will need a central system to co-ordinate between them. The system may distribute orders to different robots, tell them to go and recharge their batteries, or alert a user when something goes wrong.

This central service can run anywhere that the robots are able to communicate with it - on one of the robots, on a server near the robots, or in the cloud. If you want to avoid standing up and maintaining a server that is constantly online and reachable, the cloud is an excellent choice, and AWS Lambda is a great way to run function code as part of this central system.

Let's take an example: you have built a prototype robot booth for serving drinks. Customers can place an order at a terminal next to the robot and have their drink made. Now that your booth is working, you want to add more booths with robots and distribute orders among them. That means your next step is to add two new features:

  1. Customers should be able to place orders online through a digital portal or webapp.
  2. Any order should be dispatched to any available robot at a given location, and alert the user when complete.

Suddenly, you have gone from one robot capable of accepting orders through a terminal to needing a central database with an ordering system. Not only that, but if you want to be able to deploy to a new location, having a single server per site makes it more difficult to route online orders to the right location. One central system in the cloud to manage the orders and robots is perfect for this use case.

Building Lambda Functions

Convinced? Great! Let's start by building a simple Lambda function - or rather, two simple Lambda functions. We're going to build one Python function and one Rust function. That's to allow us to explore the differences in memory usage and runtime, both of which increase the cost of running Lambda functions.

All of the code used in this post is available on Github, with setup instructions in the README. In this post, I'll focus on relevant parts of the code.

Python Function

Firstly, what are the Lambda functions doing? In both cases, they accept a name and a status as arguments, attached to the event object passed to the handler; check the status is valid; and update a DynamoDB table for the given robot name with the given robot status. For example, in the Python code:

def lambda_handler(event, context):
    # ...
    name = str(event["name"])
    status = str(event["status"])

We can see that the event is passed to the lambda handler and contains the required fields, name and status. If valid, the DynamoDB table is updated:

ddb = boto3.resource("dynamodb")
table = ddb.Table(table_name)
table.update_item(
    Key={"name": name},
    AttributeUpdates={
        "status": {
            "Value": status
        }
    },
    ReturnValues="UPDATED_NEW",
)

Rust Function

Here is the equivalent for checking the input arguments for Rust:

#[derive(Deserialize, Debug, Serialize)]
#[serde(rename_all = "UPPERCASE")]
enum Status {
    Online,
}
// ...
#[derive(Deserialize, Debug)]
struct Request {
    name: String,
    status: Status,
}

The difference here is that Rust states its allowed arguments using an enum, so no extra code is required for checking that arguments are valid. The arguments are obtained by accessing event.payload fields:

let status_str = format!("{}", &event.payload.status);
let status = AttributeValueUpdate::builder().value(AttributeValue::S(status_str)).build();
let name = AttributeValue::S(event.payload.name.clone());

With the fields obtained and checked, the DynamoDB table can be updated:

let request = ddb_client
    .update_item()
    .table_name(table_name)
    .key("name", name)
    .attribute_updates("status", status);
tracing::info!("Executing request [{request:?}]...");

let response = request
    .send()
    .await;
tracing::info!("Got response: {:#?}", response);

CDK Build

To make it easier to build and deploy the functions, the sample repository contains a CDK stack. I've talked more about Cloud Development Kit (CDK) and the advantages of Infrastructure-as-Code (IaC) in my video "From AWS IoT Core to SiteWise with CDK Magic!":

In this case, our CDK stack is building and deploying a few things:

  1. The two Lambda functions
  2. The DynamoDB table used to store the robot statuses
  3. An IoT Rule per Lambda function that will listen for MQTT messages and call the corresponding Lambda function

The DynamoDB table comes from Amazon DynamoDB, another service from AWS that keeps a NoSQL database in the cloud. This service is also serverless, again meaning that no servers or clusters are needed.

There are also two IoT Rules, which are from AWS IoT Core, and define an action to take when an MQTT message is published on a particular topic filter. In our case, it allows robots to publish an MQTT message saying they are online, and will call the corresponding Lambda function. I have used IoT Rules before for inserting data into AWS IoT SiteWise; for more information on setting up rules and seeing how they work, take a look at the video I linked just above.

Testing the Functions

Once the CDK stack has been built and deployed, take a look at the Lambda console. You should have two new functions built, just like in the image below:

Two new Lambda functions in the AWS console

Great! Let's open one up and try it out. Open the function name that has "Py" in it and scroll down to the Test section (top red box). Enter a test name (center red box) and a valid input JSON document (bottom red box), then save the test.

Test configuration for Python Lambda function

Now run the test event. You should see a box pop up saying that the test was successful. Note the memory assigned and the billed duration - these are the main factors in determining the cost of running the function. The actual memory used is not important for cost, but can help optimize the right settings for cost and speed of execution.

Test result for Python Lambda function

You can repeat this for the Rust function, only with the test event name changed to TestRobotRs so we can tell them apart. Note that the memory used and duration taken are significantly lower.

Test result for Rust Lambda function

Checking the Database Table

We can now access the DynamoDB table to check the results of the functions. Access the DynamoDB console and click on the table created by the stack.

DynamoDB Table List

Select the button in the top right to explore items.

Explore Table Items button in DynamoDB

This should reveal a screen with the current items in the table - the two test names you used for the Lambda functions:

DynamoDB table with Lambda test items

Success! We have used functions run in the cloud to modify a database to contain the current status of two robots. We could extend our functions to allow different statuses to be posted, such as OFFLINE or CHARGING, then write other applications to work using the current statuses of the robots, all within the cloud. One issue is that this is a console-heavy way of executing the functions - surely there's something more accessible to our robots?

Executing the Functions

Lambda functions have a huge variety of ways that they can be executed. For example, we could set up an API Gateway that is able to accept API requests and forward them to the Lambda, then return the results. One way to check the possible input types is to access the Lambda, then click the "Add trigger" button. There are far too many options to list them all here, so I encourage you to take a look for yourself!

Lambda add trigger button

There's already one input for each Lambda - the AWS IoT trigger. This is an IoT Rule set up by the CDK stack, which is watching the topic filter robots/+/status. We can test this using either the MQTT test client or by running the test script in the sample repository:

./scripts/send_mqtt.sh
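
Alternatively, if you'd rather publish the message from Python, a boto3 equivalent looks something like the sketch below - the topic and payload here are based on the topic filter and fields described above, but check the script in the repository for the values it actually uses:

import json

import boto3

iot = boto3.client("iot-data")
iot.publish(
    topic="robots/FakeRobot/status",  # matches the robots/+/status topic filter
    qos=1,
    payload=json.dumps({"name": "FakeRobot", "status": "ONLINE"}),
)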

One message published on the topic will trigger both functions to run, and we can see the update in the table.

DynamoDB Table Contents after MQTT

There is only one extra entry, and that's because both functions executed on the same input. That means "FakeRobot" had its status updated to ONLINE once by each function.

If we wanted, we could set up the robot to call the Lambda function when it comes online - it could make an API call, or it could connect to AWS IoT Core and publish a message with its ONLINE status. We could also set up more Lambda functions to take customer orders, dispatch them to robots, and so on - the Lambda functions and accompanying AWS services allow us to build a completely serverless robot co-ordination system in the cloud. If you want to see more about connecting ROS2 robots to AWS IoT Core, take a look at my video here:

Lambda Function Cost

How much does Lambda cost to run? For this section, I'll give rough numbers using the AWS Price Calculator. We will assume a rough estimate of 100 messages per minute - accounting for customer orders arriving, robots reporting their status when it changes, and orders being distributed - with each message triggering 1 Lambda function invocation.

For our functions, we can run the test case a few times for each function to get a small spread of numbers. We can also edit the configuration in the console to set higher memory limits, to see if the increase in speed will offset the increased memory cost.

Edit Lambda general configuration

Edit Lambda memory setting

Finally, we will use an ARM architecture, as this currently costs less than x86 in AWS.

I will run a valid test input for each test function 4 times each for 3 different memory values - 128MB, 256MB, and 512MB - and take the latter 3 invocations, as the first invocation takes much longer. I will then take the median billed runtime and calculate the cost per month for 100 invocations per minute at that runtime and memory usage.

My results are as follows:

| Test | Python (128MB) | Python (256MB) | Python (512MB) | Rust (128MB) | Rust (256MB) | Rust (512MB) |
|---|---|---|---|---|---|---|
| 1 | 594 ms | 280 ms | 147 ms | 17 ms | 5 ms | 6 ms |
| 2 | 574 ms | 279 ms | 147 ms | 15 ms | 6 ms | 6 ms |
| 3 | 561 ms | 274 ms | 133 ms | 5 ms | 5 ms | 6 ms |
| Median | 574 ms | 279 ms | 147 ms | 15 ms | 5 ms | 6 ms |
| Monthly Cost | $5.07 | $4.95 | $5.17 | $0.99 | $0.95 | $1.06 |

There is a lot of information to pull out from this table! The first thing to notice is the monthly cost. This is the estimated cost per month for Lambda - 100 invocations per minute for the entire month costs a maximum total of $5.17. These are rough numbers, and other services will add to that cost, but that's still very low!
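
To see roughly where these monthly figures come from, here's a back-of-the-envelope calculation. The pricing constants below are approximate ARM Lambda prices at the time of writing - check the price calculator for current numbers:

# Rough Lambda cost estimate: 100 invocations per minute on the ARM architecture
invocations = 100 * 60 * 24 * 30       # ~4.32 million invocations per month
price_per_gb_second = 0.0000133334     # approximate ARM duration price
price_per_request = 0.20 / 1_000_000   # approximate request price

def monthly_cost(billed_ms, memory_mb):
    gb_seconds = invocations * (billed_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * price_per_gb_second + invocations * price_per_request

print(monthly_cost(279, 256))  # Python at 256MB -> roughly $4.9
print(monthly_cost(5, 256))    # Rust at 256MB -> roughly $0.9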

Next, in the Python function, we can see that multiplying the memory will divide the runtime by roughly the same factor. The cost stays roughly the same as well. That means we can configure the function to use more memory to get the fastest runtime, while still paying the same price. In some further testing, I found that 1024MB is a good middle ground. It's worth experimenting to find the best price point and speed of execution.

If we instead look at the Rust function, we find that the execution time is pretty stable from 256MB onwards. Adding more memory doesn't speed up our function - it is most likely limited by the response time of DynamoDB. The optimal point seems to be 256MB, which gives very stable (and snappy) response times.

Finally, when we compare the two functions, we can see that Rust is much faster to respond (5ms instead of 279 ms at 256MB), and costs ~20% as much per month. That's a large difference in execution time and in cost, and tells us that it's worth considering a compiled language (Rust, C++, Go etc) when building a Lambda function that will be executed many times.

The main point to take away from this comparison is that memory and execution time are the major factors when estimating Lambda cost. If we can minimize these parameters, we will minimize cost of Lambda invocation. The follow-up to that is to consider using a compiled language for frequently-run functions to minimize these parameters.

Summary

Once you move from one robot working alone to multiple robots working together, you're very likely to need some central management system, and the cloud is a great option for this. What's more, you can use serverless technologies like AWS Lambda and Amazon DynamoDB to only pay for the transactions - no upkeep, and no server provisioning. This makes the management process easy: just define your database and the functions to interact with it, and your system is good to go!

AWS Lambda is a great way to define one or more of these functions. It can react to events like API calls or MQTT messages by integrating with other services. By combining IoT, DynamoDB, and Lambda, we can allow robots to send an MQTT message that triggers a Lambda, allowing us to track the current status of robots in our fleet - all deployed using CDK.

Lambda functions are charged by invocation, where the cost for each invocation depends on the memory assigned to the function and the time taken for that function to complete. We can minimize the cost of Lambda by reducing the memory required and the execution time for a function. Because of this, using a compiled language could translate to large savings for functions that run frequently. With that said, the optimal price point might not be the minimum possible memory - the Python function seems to be cheapest when configured with 1024MB.

We could continue to expand this system by adding more possible statuses, defining the fleet for each robot, and adding more functions to manage distributing orders. This is the starting point of our management system. See if you can expand one or both of the Lambda functions to define more possible statuses for the robots!

Michael Hart · 15 min read

Welcome to a new series - setting up the JetBot to work with ROS2 Control interfaces! Previously, I showed how to set up the JetBot to work from ROS commands, but that was a very basic motor control method. It didn't need to be advanced because a human was remote controlling it. However, if we want autonomous control, we need to be able to travel a specific distance or follow a defined path, like a spline. A better way of moving a robot using ROS is by using the ROS Control interfaces; if done right, this means your robot can autonomously follow a path sent by the ROS navigation stack. That's our goal for this series: move the JetBot using RViz!

The first step towards this goal is giving ourselves the ability to control the motors using C++. That's because the controllers in ROS Control require extending C++ classes. Unfortunately, the existing drivers are in Python, meaning we'll need to rewrite them in C++ - which is a good opportunity to learn how the serial control works. We use I2C to talk to the motor controller chip, an AdaFruit DC Motor + Stepper FeatherWing, which sets the PWM duty cycle that makes the motors move. I'll refer to this chip as the FeatherWing for the rest of this article.

First, we'll look at how I2C works in general. We don't need to know this, but it helps to understand how the serial communication works so we can understand the function calls in the code better.

Once we've seen how I2C works, we'll look at the commands sent to set up and control the motors. This will help us understand how to translate the ROS commands into something our motors will understand.

The stage after this will be in another article in this series, so stay tuned!

This post is also available in video form - check the video link below if you want to follow along!

Inter-Integrated Circuit (IIC/I2C)

I2C can get complicated! If you want to really dive deep into the timings and circuitry needed to make it work, this article has great diagrams and explanations. The image I use here is from the same site.

SDA and SCLK

I2C is a serial protocol, meaning that it sends bits one at a time. It uses two wires called SDA and SCLK; together, these form the I2C bus. Multiple devices can be attached to these lines and take it in turns to send data. We can see the bus in the image below:

I2C Bus with SCLK and SDA lines

Data is sent on the SDA line, and a clock signal is sent on the SCLK line. The clock helps the devices know when to send the next bit. This is a very helpful part of I2C - the speed doesn't need to be known beforehand! Compare this with UART communication, which has two lines between every pair of devices: one to send data from A to B, and one to send data from B to A. Both devices must know in advance how fast to send their data so the other side can understand it. If they don't agree on timing, or even if one side's timing is off, the communication fails. By using a line for the clock in I2C, all devices are given the timing to send data - no prior knowledge required!

The downside of this is that there's only one line to send data on: SDA. The devices must take it in turns to send data. I2C solves this by designating a master device and one or more slave devices on the bus. The master device is responsible for sending the clock signal and telling the slave devices when to send data. In our case, the master device is the Jetson Nano, and the slave device is the FeatherWing. We could add extra FeatherWing boards to the bus, each with extra motors, and I2C would allow the Jetson to communicate with all of them - but this brings a new problem: how would each device know when it is the one meant to respond to a request?

Addressing

The answer is simple. Each slave device on the bus has a unique address. In our case, the FeatherWing has a default address of 0x60, which is hex notation for the number 96. In fact, if we look at the Python version of the JetBot motor code, we can see the following:

if 96 in addresses:

Aha! So when we check what devices are available on the bus, we see device 96 - the FeatherWing.

When the Jetson wants to talk to a specific device, it starts by selecting the address. It sends the address it wants on the SDA line before making a request, and each device on the bus can compare that address with the address it is expecting. If it's the wrong address, it ignores the request. For example, if the FeatherWing has an address of 0x61, and the Jetson sends the address 0x60, the FeatherWing should ignore that request - it's for a different address.
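
As a rough illustration of what "talking to address 0x60" looks like from a program, here's a sketch using the Python smbus2 package - the register number and value are placeholders rather than real FeatherWing registers, and the bus number is an assumption (use whichever I2C bus the FeatherWing is wired to):

from smbus2 import SMBus

FEATHERWING_ADDRESS = 0x60  # default address of the FeatherWing

with SMBus(1) as bus:
    # Every transaction names the target address, so other devices on the bus ignore it
    bus.write_byte_data(FEATHERWING_ADDRESS, 0x00, 0x00)   # write a value to a register
    value = bus.read_byte_data(FEATHERWING_ADDRESS, 0x00)  # read a register back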

But, how do we assign an address to each device?

The answer comes by looking at the documentation for the FeatherWing:

FeatherWing I2C Addressing

By soldering different pins on the board together, we can tell the board a new address to take when it starts up. That way, we can have multiple FeatherWing boards, each with a different address, all controllable from the Jetson. Cool!

Pulse Width Modulation (PWM)

With that, we have a basic understanding of how the Jetson controls I2C devices connected to it, including our FeatherWing board. Next we want to understand how the FeatherWing controls the motors, so we can program the Jetson to issue corresponding commands to the FeatherWing.

The first step of this is PWM itself - how does the board run a motor at a particular speed? The following step is the I2C commands needed to make the FeatherWing do that. I'll start with the speed.

Motor Wires

Each JetBot motor is a DC motor with two wires. By making one wire a high voltage with the other wire at 0V, the motor will run at full speed in one direction; if we flip which wire is the high voltage, the motor will turn in the opposite direction. We will say that the wire that makes the motor move forwards is the positive terminal, and the backwards wire is the negative terminal.

We can see the positive (red) wire and the negative (black) wire from the product information page:

DC Motor with red and black wires

That means we know how to move the motor at full speed:

  1. Forwards - red wire has voltage, black wire is 0V
  2. Backwards - black wire has voltage, red wire is 0V

There are another couple of modes that we should know about:

  1. Motor off - both wires are 0V
  2. Motor brakes - both wires have voltage

Which gives us full speed forwards, full speed backwards, brake, and off. How do we move the motor at a particular speed? Say, half speed forwards?

Controlling Motor Speed

The answer is PWM. Essentially, instead of holding a wire constantly at high voltage, we turn the voltage on and off. For half speed forwards, we have the wire on for 50% of the time and off for 50% of the time. By switching really fast, we effectively make the motor move at half speed: it can't physically start and stop quickly enough to follow each individual pulse, so it responds to the average voltage - which is half the full voltage!

That, in essence, is PWM: switch the voltage on the wire very fast from high to low and back again. The proportion of time spent high determines how much of the time the motor is on.

We can formalize this a bit more with some language. The frequency is how quickly the signal changes from high to low and back. The duty cycle is the proportion of time the wire is on. We can see this in the following diagram from Cadence:

PWM signal with mean voltage, duty cycle, and frequency

We can use this to set a slower motor speed. We choose a high enough frequency, which in our case is 1.6 kHz; this means the PWM signal goes through a high-low cycle 1600 times per second. Then, if we want to go forwards at 25% speed, we can set the duty cycle of the positive wire to our desired speed - 25% speed means a 25% duty cycle. We can go backwards at 60% speed by setting a 60% duty cycle on the negative wire.
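As a quick worked example - just a sketch of the arithmetic, with a 5V supply assumed purely for illustration - here is what a 25% duty cycle at 1.6 kHz works out to:

#include <cstdio>

int main() {
  const double frequency_hz = 1600.0;           // 1.6 kHz - 1600 high-low cycles per second
  const double period_us = 1e6 / frequency_hz;  // each cycle lasts 625 microseconds
  const double duty_cycle = 0.25;               // 25% speed forwards

  const double on_time_us = period_us * duty_cycle;  // high for ~156 microseconds per cycle
  const double mean_voltage = 5.0 * duty_cycle;      // assumed 5V supply -> 1.25V average

  printf("Period: %.0f us, high for %.1f us, average voltage %.2f V\n",
         period_us, on_time_us, mean_voltage);
  return 0;
}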

Producing this signal sounds very manual, which is why the FeatherWing comes with a dedicated PWM chip. We can use I2C commands from the Jetson to set the PWM frequency and duty cycle for a motor, and it handles generating the signal for us, driving the motor. Excellent!

Controlling Motors through the FeatherWing

Now that we know how to move a particular motor at a particular speed, forwards or backwards, we need to understand how to command the FeatherWing to do so. I struggled with this part! I couldn't find this information on the product page, which is where I would ordinarily look when setting up an embedded system like this. That's because Adafruit provides libraries for using the FeatherWing without needing any of this I2C or PWM knowledge.

Thankfully, the Adafruit MotorKit library and its dependencies contain all of the code I needed to write a basic driver in C++ - thank you, Adafruit! The following are the links I used as a reference for controlling the FeatherWing:

  1. Adafruit_CircuitPython_MotorKit
  2. Adafruit_CircuitPython_PCA9685
  3. Adafruit_CircuitPython_Motor
  4. Adafruit_CircuitPython_BusDevice
  5. Adafruit_CircuitPython_Register

Thanks to those links, I was able to put together a basic C++ driver, available here on GitHub.

Git Tag

Note that this repository will be updated in the future to add ROS control of the JetBot. To use the code quoted in this article, make sure you check out the git tag jetbot-motors-pt1.

FeatherWing Initial Setup

To get the PWM chip running, we first do some chip setup - resetting, and setting the clock value. This is done in the I2CDevice constructor by opening the I2C bus with the Linux I2C driver; selecting the FeatherWing device by address; setting its mode to reset it; and setting its clock using a few reads and writes of the mode and prescaler registers. Once this is done, the chip is ready to move the motors.

I2CDevice::I2CDevice() {
i2c_fd_ = open("/dev/i2c-1", O_RDWR);
if (!i2c_fd_) {
std::__throw_runtime_error("Failed to open I2C interface!");
}

// Select the PWM device
if (!trySelectDevice())
std::__throw_runtime_error("Failed to select PWM device!");

// Reset the PWM device
if (!tryReset()) std::__throw_runtime_error("Failed to reset PWM device!");

// Set the PWM device clock
if (!trySetClock()) std::__throw_runtime_error("Failed to set PWM clock!");
}

Let's break this down into its separate steps. First, we use the Linux driver to open the I2C device.

// Headers needed for Linux I2C driver
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <linux/i2c.h>
#include <linux/types.h>
#include <sys/ioctl.h>
#include <unistd.h>

// ...

// Open the I2C bus
i2c_fd_ = open("/dev/i2c-1", O_RDWR);
// Check that the open was successful
if (!i2c_fd_) {
std::__throw_runtime_error("Failed to open I2C interface!");
}

We can now use this i2c_fd_ with Linux system calls to select the device address and read/write data. To select the device by address:

bool I2CDevice::trySelectDevice() {
return ioctl(i2c_fd_, I2C_SLAVE, kDefaultDeviceAddress) >= 0;
}

This uses the ioctl function to select a slave device by the default device address 0x60. It checks the return code to see if it was successful. Assuming it was, we can proceed to reset:

bool I2CDevice::tryReset() { return tryWriteReg(kMode1Reg, 0x00); }

The reset is done by writing a 0 into the Mode1 register of the device. Finally, we can set the clock. I'll omit the repository code for brevity - you can take a look at the source code for yourself - but the idea is to put the chip into a mode that accepts a new clock value, write the new clock prescaler, then restore the mode and wait 5ms for the new value to take effect. After this, the PWM should run at 1.6 kHz.
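For reference, here is a hedged sketch of what that sequence typically looks like for the PCA9685, the PWM chip family the FeatherWing libraries are built around. The register addresses, bit masks, and the tryReadReg helper are assumptions based on the chip's datasheet rather than quotes from the repository, so expect the real source to differ in detail:

// Assumed register address and bits, taken from the PCA9685 datasheet.
// Requires <cmath>, <chrono>, <cstdint>, and <thread>.
constexpr uint8_t kPrescaleReg = 0xFE;
constexpr uint8_t kSleepBit = 0x10;
constexpr uint8_t kRestartBit = 0x80;
constexpr uint8_t kAutoIncrementBit = 0x20;

bool I2CDevice::trySetClock() {
  uint8_t old_mode = 0;
  if (!tryReadReg(kMode1Reg, old_mode)) return false;

  // The prescaler can only be written while the chip is asleep
  if (!tryWriteReg(kMode1Reg, (old_mode & ~kRestartBit) | kSleepBit)) return false;

  // prescale = round(25 MHz oscillator / (4096 steps * target frequency)) - 1
  uint8_t prescale = static_cast<uint8_t>(
      std::round(25000000.0 / (4096.0 * 1600.0)) - 1);
  if (!tryWriteReg(kPrescaleReg, prescale)) return false;

  // Wake the chip, give the oscillator time to settle, then restart PWM
  if (!tryWriteReg(kMode1Reg, old_mode)) return false;
  std::this_thread::sleep_for(std::chrono::milliseconds(5));
  return tryWriteReg(kMode1Reg, old_mode | kRestartBit | kAutoIncrementBit);
}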

Once the setup is complete, the I2CDevice exposes two methods: one to enable the motors, and one to set a duty cycle. The motor enable sets a particular pin on, so I'll skip that function. The duty cycle setter has more logic:

buf_[0] = kPwmReg + 4 * pin;

if (duty_cycle == 0xFFFF) {
// Special case - fully on
buf_[1] = 0x00;
buf_[2] = 0x10;
buf_[3] = 0x00;
buf_[4] = 0x00;
} else if (duty_cycle < 0x0010) {
// Special case - fully off
buf_[1] = 0x00;
buf_[2] = 0x00;
buf_[3] = 0x00;
buf_[4] = 0x10;
} else {
// Shift by 4 to fit 12-bit register
uint16_t value = duty_cycle >> 4;
buf_[1] = 0x00;
buf_[2] = 0x00;
buf_[3] = value & 0xFF;
buf_[4] = (value >> 8) & 0xFF;
}

Here we can see that the function checks the requested duty cycle. If it is at the maximum (0xFFFF), it sets the chip's special fully-on value - 0x1000. If it is below a small minimum, it sets the output fully off. Anything in between is shifted right by 4 bits to scale the 16-bit request down to the chip's 12-bit register, then transmitted. Between the three if blocks, the I2CDevice has the ability to set any duty cycle for a particular pin. It's then up to the Motor class to decide which pins should be set, and what duty cycle to set them to.

Initialize Motors

Following the setup of I2CDevice, each Motor gets a reference to the I2CDevice to allow it to talk to the PWM chip, as well as a set of pins that correspond to the motor. The pins are as follows:

| Motor   | Enable Pin | Positive Pin | Negative Pin |
| ------- | ---------- | ------------ | ------------ |
| Motor 1 | 8          | 9            | 10           |
| Motor 2 | 13         | 11           | 12           |

The I2CDevice and Motors are constructed in the JetBot control node. Note that the pins are passed inline:

device_ptr_ = std::make_shared<JetBotControl::I2CDevice>();
motor_1_ = JetBotControl::Motor(device_ptr_, std::make_tuple(8, 9, 10), 1);
motor_2_ =
JetBotControl::Motor(device_ptr_, std::make_tuple(13, 11, 12), 2);

Each motor can then ask the chip to enable it via the enable pin, which again is done in the constructor:

Motor::Motor(I2CDevicePtr i2c, MotorPins pins, uint32_t motor_number)
: i2c_{i2c}, pins_{pins}, motor_number_{motor_number} {
u8 enable_pin = std::get<0>(pins_);
if (!i2c_->tryEnableMotor(enable_pin)) {
std::string error =
"Failed to enable motor " + std::to_string(motor_number) + "!";
std::__throw_runtime_error(error.c_str());
}
}

Once each motor has enabled itself, it is ready to send the command to spin forwards or backwards, brake, or turn off. This example only allows the motor to set itself to spinning or not spinning. The command is sent by the control node once more:

motor_1_.trySetSpinning(spinning_);
motor_2_.trySetSpinning(spinning_);

To turn on, the positive pin is set to fully on, or 0xFFFF, while the negative pin is set to off:

if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
return false;
}
if (!i2c_->trySetDutyCycle(neg_pin, 0)) {
return false;
}

To turn off, both the positive and negative pins are set to fully on, or 0xFFFF - in terms of the modes listed earlier, this brakes the motor rather than letting it coast:

if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
return false;
}
if (!i2c_->trySetDutyCycle(neg_pin, 0xFFFF)) {
return false;
}

Finally, when the node is stopped, the motors make sure they stop spinning. This is done in the destructor of the Motor class:

Motor::~Motor() {
trySetSpinning(false);
}

While the Motor class currently only turns the motors fully on or fully off, the underlying I2CDevice already has the code to set any duty cycle between 0 and 0xFFFF - which means we can set any speed, in either direction, that we want the motors to spin!
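For example, a hypothetical extension (not in the repository at this tag) could drive motor 1 forwards at roughly half speed by setting a 50% duty cycle on its positive pin and switching its negative pin fully off:

// Motor 1 uses pin 9 as its positive pin and pin 10 as its negative pin (see the table above)
if (!i2c_->trySetDutyCycle(9, 0x7FFF)) {  // ~50% of 0xFFFF
  return false;
}
if (!i2c_->trySetDutyCycle(10, 0)) {      // negative pin off
  return false;
}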

Trying it out

If you want to give it a try, and you have a JetBot to do it with, you can follow my setup guide in this video:

Once this is done, follow the instructions in the README to get set up. This means cloning the code onto the JetBot, opening the folder inside the dev container, and then running:

source /opt/ros/humble/setup.bash
colcon build
source install/setup.bash
ros2 run jetbot_control jetbot_control

This will set the motors spinning for half a second, then off for half a second.
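For context, that on/off toggle is driven by a timer in the control node. The following is a rough, self-contained sketch of the pattern - not the repository's actual node - with the motor calls commented out so it compiles without the jetbot_control classes:

#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"

class SpinToggleNode : public rclcpp::Node {
public:
  SpinToggleNode() : Node("jetbot_control_sketch") {
    // Fire every half second, flipping the motors between spinning and stopped
    timer_ = create_wall_timer(std::chrono::milliseconds(500), [this]() {
      spinning_ = !spinning_;
      // In the real node, the motors are commanded here, e.g.:
      // motor_1_.trySetSpinning(spinning_);
      // motor_2_.trySetSpinning(spinning_);
      RCLCPP_INFO(get_logger(), "Motors %s", spinning_ ? "on" : "off");
    });
  }

private:
  bool spinning_{false};
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<SpinToggleNode>());
  rclcpp::shutdown();
  return 0;
}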

Congratulations!

If you followed along to this point, you have successfully moved your JetBot's motors using a driver written in C++!

· 15 min read
Michael Hart

In this post, I'll show how to use two major concepts together:

  1. Docker images that can be privately hosted in Amazon Elastic Container Registry (ECR); and
  2. AWS IoT Greengrass components containing Docker Compose files.

These Docker Compose files can be used to run public Docker components, or pull private images from ECR. This means that you can deploy your own system of microservices to any platform compatible with AWS Greengrass.

This post is also available in video form - check the video link below if you want to follow along!

What is Docker?

Docker is very widely known in the DevOps world, but if you haven't heard of it, it's a way of taking a software application and bundling it up with all its dependencies so it can be easily moved around and run as an application. For example, if you have a Python application, you have two options:

  1. Ask the user to install Python, the pip dependencies needed to run the application, and give instructions on downloading and running the application; or
  2. Ask the user to install Docker, then provide a single command to download and run your application.

Either option is viable, but it's clear to see the advantages of the Docker method.

Docker Terminology

An image is an application packaged up so it can be run again and again, and a container is one running instance of that image. You can have multiple containers based on the same image. Think of it as a movie saved on a hard disk: the file on disk is the "image", and each playback of that movie is a "container".

Docker Compose

On top of bundling software into images, Docker has a plugin called Docker Compose, which is a way of defining a set of containers that run together. With the right configuration, the containers can talk to each other or to the host computer. For instance, you might want to run a web server, API server, and database at the same time; with Docker Compose, you can define these services in one file and give them permission to talk to each other.

Building Docker Compose into Greengrass Components

We're now going to put this into practice by deploying and running an application using Greengrass and Docker Compose. Let's take a look at the code.

The code we're using comes from my sample repository.

Clone the Code

The first step is to check the code out on your local machine. The build scripts use Bash, so I'd recommend something Linux-based, like Ubuntu. Execute the following to check out the code:

git clone https://github.com/mikelikesrobots/greengrass-docker-compose.git

Check Dependencies

The next step is to make sure all of our dependencies are installed and set up. Execute all of the following and check the output is what you expect - a help or version message.

aws --version
gdk --version
jq --version
docker --version

The AWS CLI will also need credentials set up such that the following call works:

aws sts get-caller-identity

Docker will need to be able to run containers as a super user. Check this using:

sudo docker run hello-world

Finally, we need to make sure Docker Compose is installed. This is available either as a standalone script or as a plugin to Docker, where the latter is the more recent method of installing. If you have access to the plugin version, I would recommend using that, although you will need to update the Greengrass component build script - docker-compose is currently used.

# For the script version
docker-compose --version

# For the plugin version
docker compose --version

More information can be found for any of these components using:

  1. AWS CLI
  2. Greengrass Development Kit (GDK)
  3. jq: sudo apt install jq
  4. Docker
  5. Docker Compose

Greengrass Permissions Setup

The developer guide for Docker in Greengrass tells us that we may need to add permissions to our Greengrass Token Exchange Role to be able to deploy components using either/both of ECR and S3, for storing private Docker images and Greengrass component artifacts respectively. We can check this by navigating to the IAM console, searching for Greengrass, and selecting the TokenExchangeRole. Under this role, we should see one or more policies granting us permission to use ECR and S3.

Token Exchange Role Policies

For the ECR policy, we expect a JSON block similar to the following:

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
],
"Resource": [
"*"
],
"Effect": "Allow"
}
]
}

For the S3 policy, we expect:

{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:GetObject"
],
"Resource": [
"*"
],
"Effect": "Allow"
}
]
}

With these policies in place, any system we deploy Greengrass components to should have permission to pull Docker images from ECR and Greengrass component artifacts from S3.

Elastic Container Registry Setup

By default, this project builds and pushes a Docker image tagged python-hello-world:latest. It expects an ECR repository of the same name to exist. We can create this by navigating to the ECR console and clicking "Create repository". Set the name to python-hello-world and keep the settings default otherwise, then click Create repository. This should create a new entry in the Repositories list:

ECR Repository Created

Copy the URI from the repo and strip off the python-hello-world ending to get the base URI. Then, back in your cloned repository, open the .env file and replace the ECR_REPO variable with your base URI. It should look like the following:

ECR_REPO=012345678901.dkr.ecr.us-west-2.amazonaws.com

Any new Docker image you create locally will need its own ECR repository, which you can make by following the same creation steps. Setting the base URI in the .env file is a one-time operation.

Building the Docker Images and Greengrass Components

At this point, you can build all the components and images by running:

./build_all.sh

Alternatively, any individual image/component can be built by changing directory into the component and running:

source ../../.env && ./build.sh

Publishing Docker Images and Greengrass Components

Just as with building the images/components, you can now publish them using:

./publish_all.sh

If any image or component fails to publish, the script will stop so that you can investigate further.

Deploying Your Component

With the images and component pushed, you can now use the component in a Greengrass deployment.

First, make sure you have Greengrass running on a system. If you don't, you can follow the guide in the video below:

Once you have Greengrass set up, you can navigate to the core device in the console. Open the Greengrass console, select core devices, and then select your device.

Select Greengrass Core Device

From the deployments tab, click the current deployment.

Select current deployment

In the top right, select Actions, then Revise.

Revise Greengrass Deployment

Click Select components, then tick the new component to add it to the deployment. Skip to Review and accept.

Add Component to Deployment

This will now deploy the component to your target Greengrass device. Assuming all the setup is correct, you can log in to the Greengrass device and view the component's logs using:

sudo vim /greengrass/v2/logs/com.docker.PythonHelloWorld.log

This will show all current logs. It may take a few minutes to get going, so keep checking back! Once the component is active, you should see some log lines similar to the following:

2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1  |^[[0m Received new message on topic /topic/local/pubsub: Hello from local pubsub topic. {scriptName=services.com.docker.PythonHelloWorld.lifecycle.Run.Script, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1 |^[[0m Successfully published 999 message(s). {scriptName=services.com.docker.PythonHelloWorld.lifecycle.Run.Script, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1 |^[[0m Received new message on topic /topic/local/pubsub: Hello from local pubsub topic. {scriptName=services.com.docker.PythonHelloWorld.lifecycle.Run.Script, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1 |^[[0m Successfully published 1000 message(s). {scriptName=services.com.docker.PythonHelloWorld.lifecycle.Run.Script, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:05.306Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mcomdockerpythonhelloworld_python-hello-world_1 exited with code 0. {scriptName=services.com.docker.PythonHelloWorld.lifecycle.Run.Script, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}

From these logs, we can see both "Successfully published" messages and "Received new message" messages, showing that the component is running correctly and has all the permissions it needs.

This isn't the only way to check the component is running! We could also use the Local Debug Console, a locally-hosted web UI, to publish/subscribe to local topics. Take a look at this excellent video if you want to set this method up for yourself:

Congratulations!

If you got to this point, you have successfully deployed a Docker Compose application using Greengrass!

Diving into the code

To understand how to extend the code, we need to first understand how it works.

From the top-level directory, we can see a couple of important folders (components, docker) and a couple of important scripts (build_all.sh, publish_all.sh).

components contains all of the Greengrass components; each component goes in a separate folder and is built using GDK. We can see this from the folder inside, com.docker.PythonHelloWorld. docker contains all of the Docker images, where each image is in a separate folder and is built using Docker. We have already seen build_all.sh and publish_all.sh, but if we take a look inside, we see that both scripts source the .env file, then go through all Docker folders followed by all Greengrass folders, executing the build.sh or publish.sh script inside each one. The only exception is publishing Greengrass components, where the standard gdk component publish command is used directly instead of adding an extra script.

Let's take a deeper dive into the Docker image and the Greengrass component in turn.

Docker Image (docker/python-hello-world)

Inside this folder, we can see the LocalPubSub sample application from Greengrass (see the template), with some minor modifications. Instead of passing in the topic to publish on and the message to publish as arguments, we use environment variables.

topic = os.environ.get("MQTT_TOPIC", "example/topic")
message = os.environ.get("MQTT_MESSAGE", "Example Hello!")

Passing command line arguments directly to Greengrass components is easy, but passing those same arguments through Docker Compose is more difficult. It's an easier pattern to use environment variables specified by the Docker Compose file and modified by Greengrass configuration - we will see more on this in the Greengrass Component deep dive.

Therefore, the component retrieves its topic and message from the environment, then publishes 1000 messages and listens for those same messages.

We also have a simple Dockerfile showing how to package the application. From a base image of python, we add the application code into the app directory, and then specify the entrypoint as the main.py script.

Finally, we have build.sh and publish.sh. The build simply uses the docker build command, tagging the image with the ECR repo by default. The publish step does slightly more work: it logs in to the ECR repository with Docker before pushing the image. Note that both scripts use the ECR_REPO variable set in the .env file.

If we want to add other Docker images, we can add a new folder with our component name and copy the contents of the python-hello-world image. We can then update the image name in the build and publish scripts and change the application code and Dockerfile as required. A new ECR repo will also be required, matching the name given in the build and publish scripts.

Greengrass Component (components/com.docker.PythonHelloWorld)

Inside our Greengrass component, we can see a build script, the Greengrass component files, and the docker-compose.yml that will be deployed using Greengrass.

The build script is slightly more complicated than the Docker equivalent: the ECR repository placeholder needs to be substituted into the other files before the build, and then reset afterwards to avoid committing changes to the source code. These lines...

find . -maxdepth 1 -type f -not -name "*.sh" -exec sed -i "s/{ECR_REPO}/$ECR_REPO/g" {} \;
gdk component build
find . -maxdepth 1 -type f -not -name "*.sh" -exec sed -i "s/$ECR_REPO/{ECR_REPO}/g" {} \;

...replace the ECR_REPO placeholder with the actual repo, build the component with GDK, then swap the value back to the placeholder. As a result, the built files contain the real repo, while the source files are restored to their original state.

Next we have the GDK configuration file, which shows that our build system is set to zip. We could push only the Docker Compose file, but zipping lets us include other supporting files if we want to extend the component. We also have the version tag, which needs to be incremented for each new component version.

After that, we have the Docker Compose file. This contains one single service, some environment variables, and a volume. The service refers to the python-hello-world Docker image built by docker/python-hello-world by specifying the image name.

warning

This component references the latest tag of python-hello-world. If you want your Greengrass component version to be meaningful, you should extend the build scripts to give a version number as the Docker image tag, so that each component version references a specific Docker image version.

We can see the MQTT_TOPIC and MQTT_MESSAGE environment variables that need to be passed to the container. These can be overridden in the recipe.yaml by Greengrass configuration, allowing us to pass configuration through to the Docker container.

Finally, we can see some other parameters which are needed for the Docker container to be able to publish and subscribe to local MQTT topics:

environment:
- SVCUID
- AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT
volumes:
- ${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}:${AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT}

These will need to be included in any Greengrass component where the application needs to use MQTT. Other setups are available in the Greengrass Docker developer guide.

If we want to add a new Docker container to our application, we can create a new service block, just like python-hello-world, and change our environment variables and image tags. Note that we don't need to reference images stored in ECR - we can also access public Docker images!

The last file is the recipe.yaml, which contains a lot of important information for our component. Firstly, the default configuration allows our component to publish and subscribe to MQTT, but also specifies the environment variables we expect to be able to override:

ComponentConfiguration:
DefaultConfiguration:
Message: "Hello from local pubsub topic"
Topic: "/topic/local/pubsub"

This allows us to override the message and topic using Greengrass configuration, set in the cloud.

The recipe also specifies that the Docker Application Manager and Token Exchange Service are required to function correctly. Again, see the developer guide for more information.

We also need to look at the Manifests section, which specifies the Artifacts required and the Lifecycle for running the application. Within Artifacts, we can see:

- URI: "docker:{ECR_REPO}/python-hello-world:latest"

This line specifies that a Docker image is required from our ECR repo. Each new private Docker image added to the Compose file will need a line like this to grant permission to access it. However, public Docker images can be freely referenced.

- URI: "s3://BUCKET_NAME/COMPONENT_NAME/COMPONENT_VERSION/com.docker.PythonHelloWorld.zip"
Unarchive: ZIP

This section specifies that the component files are in a zip, with the S3 location supplied during the GDK build. We are able to use files from this zip by referencing the {artifacts:decompressedPath}/com.docker.PythonHelloWorld/ path.

In fact, we do this during the Run lifecycle stage:

Lifecycle:
Run:
RequiresPrivilege: True
Script: |
MQTT_TOPIC="{configuration:/Topic}" \
MQTT_MESSAGE="{configuration:/Message}" \
docker-compose -f {artifacts:decompressedPath}/com.docker.PythonHelloWorld/docker-compose.yml up

This uses privilege, as Docker requires super user privilege in our current setup. It is possible to set it up to work without super user, but this method is the simplest. We also pass MQTT_TOPIC and MQTT_MESSAGE as environment variables to the docker-compose command. With the up command, we tell the component to start the application in the Docker Compose file.

tip

If we want to change to use Compose as a plugin, we can change the run command here to start with docker compose.

And that's the important parts of the source code! I encourage you to read through and check your understanding of the parameters - not setting permissions and environment variables correctly can lead to some confusing errors.

Where to go from here

Given this setup, we should be able to deploy private or public Docker containers, which paves the way for deploying our robot software using Greengrass. We can run a number of containers together for the robot software. This method of deployment gives us the advantages of Greengrass: an easier route to the cloud, more fault-tolerant deployments with a roll-back mechanism and version tracking, and the ability to deploy other components like the CloudWatch Log Manager.

In the future, we can extend this setup to build ROS2 containers, allowing us to migrate our robot software to Docker images and Greengrass components. We could install Greengrass on each robot, then deploy the full software with configuration options. We then also have a mechanism to update components or add more as needed, all from the cloud.

Give the repository a clone and try it out for yourself!

· 7 min read
Michael Hart

This post shows how to build a Robot Operating System 2 node using Rust, a systems programming language built for safety, security, and performance. In the post, I'll tell you about Rust - the programming language, not the video game! I'll tell you why I think it's useful in general, then specifically in robotics, and finally show you how to run a ROS2 node written entirely in Rust that will send messages to AWS IoT Core.

This post is also available in video form - check the video link below if you want to follow along!

Why Rust?

The first thing to talk about is, why Rust in particular over other programming languages? Especially given that ROS2 has strong support for C++ and Python, we should think carefully about whether it's worth travelling off the beaten path.

There are much more in-depth articles and videos about the language itself, so I'll keep my description brief. Rust is a systems-level programming language - the same class of language as C and C++ - but with a very strict compiler that blocks you from doing "unsafe" operations. That means the language is built for high performance, but with a greatly diminished risk of doing the kinds of unsafe things that C and C++ allow.

Rust is also steadily gaining traction. It is the only language other than C to make its way into the Linux kernel - and the Linux kernel was originally written in C! Microsoft is also rewriting some Windows kernel modules in Rust - check here to see what they have to say:

The major tech companies are adopting Rust, including Google, Facebook, and Amazon. This 2023 keynote from Dr Werner Vogels, Vice President and CTO of Amazon.com, had some notable things to say about Rust. Take a look here to hear from this expert in the industry:

Why isn't Rust used more?

That's a great question. Really, I've presented the best parts in this post so far. Some of the drawbacks include:

  1. Being a newer language means less community support and fewer components provided out of the box. For example, writing a desktop GUI in Rust is possible, but the libraries are still maturing.
  2. It's harder to learn than most languages. The stricter compiler means some normal programming patterns don't work, which means relearning some concepts and finding different ways to accomplish the same task.
  3. It's hard for a new language to gain traction! Rust has to prove it will stand the test of time.

Having said that, I believe learning the language is worth it for safety, security, and sustainability reasons. Safety and security come from the strict compiler, and sustainability comes from being a low-level language that does the same task faster and with fewer resources.

That's true for robotics as much as it is for general applications. Some robot software can afford to be slow, like high-level message passing and decision making, but a lot of it needs to be real-time and high-performance, like processing Lidar data. My example today is perfectly acceptable in Python because it's passing non-urgent messages, but it is a good use case to explore using Rust in.

With that, let's stop talking about Rust, and start looking at building that ROS2 node.

Building a ROS2 Node

The node we're building replicates the Python-based node from this blog post. The same setup is required, meaning the setup of X.509 certificates, IoT policies, and so on will be used. If you want to follow along, make sure to run through that setup to the point of running the code - at which point, we can switch over to the Rust-based node. If you prefer to follow instructions from a README, please follow this link - it is the repository containing the source code we'll be using!

Prerequisites

The first part of our setup is making sure all of our tools are installed. This node can be built on any operating system, but instructions are given for Ubuntu, so you may need some extra research for other systems.

Execute the following to install Rust using Rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

There are further dependencies taken from the ROS2 Rust repository as follows:

sudo apt install -y git libclang-dev python3-pip python3-vcstool # libclang-dev is required by bindgen
# Install these plugins for cargo and colcon:
cargo install --debug cargo-ament-build # --debug is faster to install
pip install git+https://github.com/colcon/colcon-cargo.git
pip install git+https://github.com/colcon/colcon-ros-cargo.git

Source Code

Assuming your existing ROS2 workspace is at ~/ros2_ws, the following commands can be used to check out the source code:

cd ~/ros2_ws/src
git clone https://github.com/mikelikesrobots/aws-iot-node-rust.git
git clone https://github.com/ros2-rust/ros2_rust.git
git clone https://github.com/aws-samples/aws-iot-robot-connectivity-samples-ros2.git

ROS2 Rust then uses vcs to import the other repositories it needs:

cd ~/ros2_ws
vcs import src < src/ros2_rust/ros2_rust_humble.repos

That concludes checking out the source code.

Building the workspace

The workspace can now be built. It takes around 10 minutes to build ROS2 Rust, which should only need to be done once. Following that, changes to the code from this repository build very quickly. To build the workspace, execute:

cd ~/ros2_ws
colcon build
source install/setup.bash

The build output should look something like this:

Colcon Build Complete

Once the initial build has completed, the following command can be used for subsequent builds:

colcon build --packages-select aws_iot_node

Here it is in action:

build-only-iot

Now, any changes that are made to this repository can be built and tested with cargo commands, such as:

cargo build
cargo run --bin mock-telemetry

The cargo build log will look something like:

cargo-build-complete

Multi-workspace Setup

The ROS2 Rust workspace takes a considerable amount of time to build, and often gets built as part of the main workspace when it's not required, slowing down development. A different way of structuring workspaces is to separate the ROS2 Rust library from your application, as follows:

# Create and build a workspace for ROS2 Rust
mkdir -p ~/ros2_rust_ws/src
cd ~/ros2_rust_ws/src
git clone https://github.com/ros2-rust/ros2_rust.git
cd ~/ros2_rust_ws
vcs import src < src/ros2_rust/ros2_rust_humble.repos
colcon build
source install/setup.bash

# Check out application code into main workspace
cd ~/ros2_ws/src
git clone https://github.com/mikelikesrobots/aws-iot-node-rust.git
git clone https://github.com/aws-samples/aws-iot-robot-connectivity-samples-ros2.git
cd ~/ros2_ws
colcon build
source install/local_setup.bash

This method means that the ROS2 Rust workspace only needs to be updated with new releases for ROS2 Rust, and otherwise can be left. Furthermore, you can source the setup script easily by adding a line to your ~/.bashrc:

echo "source ~/ros2_rust_ws/install/setup.bash" >> ~/.bashrc

The downside of this method is that you can only source further workspaces using the local_setup.bash script, or it will overwrite the variables needed to access the ROS2 Rust libraries.

Running the Example

To run the example, you will need the IOT_CONFIG_FILE variable set from the Python repository.

Open two terminals. In each terminal, source the workspace, then run one of the two nodes as follows:

source ~/ros2_ws/install/setup.bash  # Both terminals
source ~/ros2_ws/install/local_setup.bash # If using the multi-workspace setup method
ros2 run aws_iot_node mqtt-telemetry --ros-args --param path_for_config:=$IOT_CONFIG_FILE # One terminal
ros2 run aws_iot_node mock-telemetry # Other terminal

Using a split terminal in VSCode, this looks like the following:

Both MQTT and Mock nodes running

You should now be able to see messages appearing in the MQTT test client in AWS IoT Core. This will look like the following:

MQTT Test Client

Conclusion

We've demonstrated that it's possible to build nodes in Rust just as with C++ and Python - although there's an extra step of setting up ROS2 Rust so our node can link against it. We can now build other nodes in Rust when we're on a resource-constrained system, such as a Raspberry Pi or other small dev kit, and we want the guarantees from the Rust compiler that the C++ compiler can't give - while staying more secure and sustainable than a Python-based version.

Check out the repo and give it a try for yourself!