
· 11 min read
Michael Hart

This post gives you my top five tips on how to stand out as a Software Engineer. These tips will help at any career level, not just when you're starting out.

If you prefer a video format, check out my YouTube video below:

Knowing When to Ask Questions

My first tip is knowing when to ask questions. The phrasing of this title sounds like you need to ask fewer questions, but most likely you need to ask more. The truth is that when you start on a new team, that team is expecting you to ask a lot of questions. This is especially true when you're just starting out, so my advice to you is this:

  1. When you need to know something that you can't find out online, don't waste your time. Ask another team member straight away.
  2. When you need to know something that you could possibly find online, try setting yourself a time limit before asking for help. Give it 15-30 minutes, try and work through it, then find a team member to take a look with you.

This strikes a good balance between pestering people and taking up all their time, and actually being able to complete your work. The last thing your new team members want is for you to sit there wasting hours or even days on something they could have helped you with in two minutes.

Example - Ask Straight Away

You want to find some documentation for your project. It's very unlikely you could find this information by yourself, so it's best to find a team member and ask for help.

Example - Wait, Then Ask

You've changed something in the code and it won't compile any more. This is something you could probably figure out by yourself given enough time, so set a timer for 15 minutes, then try to work through it. If the timer goes off, and you haven't made any progress, find someone who can help.

Taking Responsibility

My second tip is to take responsibility, and there are two kinds of responsibility I'm talking about: first, taking responsibility for tasks that you don't normally do as part of your work; second, taking responsibility when you make a mistake.

Volunteering for Tasks

As far as tasks outside your normal work go, it's very common for your manager or your team to have a task come up that needs to be completed, but doesn't naturally fall to a particular person. Chances are, your team would prefer one person to be responsible for it and drive it to completion. If that's something you could do, but it's outside of your normal work area, it's a great idea to consider taking it on. It's a way to stand out as an engineer, learn something new, and grow in your career.

Example - Running a Hackathon

I had a teammate who wanted to participate in a hackathon, but when I encouraged them to organise it themselves, they weren't willing to take on that responsibility. Instead, I took on the task: I arranged it, chose the theme, and made sure it went ahead. Now I'm much better prepared to run other hackathons in the future.

Owning Your Mistakes

The second way that you should take responsibility is when you've made a mistake. Especially when you're starting out, but all throughout your career, you can and will make mistakes. The best thing that you can do is learn from them and try to make sure they don't happen again.

The best response you can give if you get called out in a meeting for something you've done wrong is to avoid getting upset, and to simply say, "yes, I made a mistake there, and here's what I'm going to do to stop it happening again."

That could be something you do differently, or it could be a process that you or your team put in place to make sure that no one will make that mistake again.

Example - Not Paying Attention

I took part in an informational meeting with a lot of distractions in the house. I couldn't pay full attention, I wasn't able to ask questions at the end, and I didn't even realise how distracted I was until both my manager and one of my colleagues commented on it. I realised at the time how disrespectful it was to the presenter. To this day, I make sure there are as few distractions in the house as possible when I'm attending a meeting, out of respect for other people's time.

Actively Pursue Advancement

My third tip for you is to actively pursue advancement. By that, I mean you need to go after what you want for your next career step.

In my experience, many people are content to receive tasks from their team, and do their work well and on time, but that's not the way to grow your career the fastest. The best thing you can do is have an honest conversation with your manager about where you want to be. Is there more responsibility that you want to take on? Do you want a raise, or a promotion? These are things you need to bring attention to if you want to make them happen - and you need to make them happen yourself.

To understand this better, try to think of it from your manager's point of view - or their manager's point of view. They have teams to manage, projects to get out on time, customers they need to talk to and keep happy; how much of their attention do you think is solely on you? The answer is probably not that much, which is why you need to bring their attention onto you. You need to make it happen, and that's what this conversation does.

Talk to your manager, tell them what you need, and ask them for feedback so you understand exactly where you are and what weaknesses you need to work on in order to progress.

Example - Asking for a Raise

My most recent example of this was when I was working on a team and I felt like I was taking on more responsibility than my level required - even acting in a team lead role. I had a conversation with my manager and asked him for a raise. Not only did he respond positively to this, he actually helped me to get promoted instead so that I became the official team lead. It was a benefit to me and a benefit to him because it showed how he was growing his team.

Document Your Wins

My fourth tip is to document your wins, by which I mean writing down what you're doing as you're doing it, including any wins that you have in that process.

You can start this by taking notes every day of what you're doing. Open a note with the date and your responsibilities for the day, then log what you do during the day.

Daily Note Template

I've configured my note-taking software Obsidian to automatically open this every day.

This is a great way to keep a log and look back on how you fixed something in the past, but it will also help you with the next step: periodically updating a document that contains all of your wins.

Daily Notes List

My list of daily notes since starting digital notes.

My recommendation for this is something called a brag doc. I use a modified version of the document suggested by Julia Evans. To use this document effectively, set time aside every 2-4 weeks and update it with what you've been doing and what has gone well, using the daily notes you've been taking to jog your memory. By doing it bit by bit, it's much easier to keep track of what you've done over a long period of time. You'll also have a great body of evidence if you need to pursue advancement - proof that you've been working above your level. Bonus points if you can write down some sort of data; numbers are even more convincing than quotes when you're trying to prove something.

The next step after this is to use that brag doc to keep your resume, CV, or LinkedIn profile up to date. This is another point where it's much easier to do it little by little over time so it's always up to date, instead of making one large effort when you need it.

Example - Resource Feedback Spreadsheet

For example, I have a spreadsheet that keeps track of everyone from the company who's reached out to me about the resources I've put online. It's a great way to figure out which resources are best and what's been most helpful - plus, if I need to prove that my work has been helpful, I have all the evidence right there.

Remember to set that time aside for writing your daily notes and updating your brag doc and LinkedIn profile. Don't dismiss this - keeping my LinkedIn profile up to date was what landed me my job at Amazon in the first place!

Know Your Worth

My fifth and final tip is to know your worth. Getting into software engineering is no easy feat. It takes a lot of training and technical knowledge, so getting where you are is already a battle - not to mention any experience that you can get on top of that.

You've earned the right to be confident. You should be confident in your statements and your decisions while being prepared to learn from your mistakes. Even if you don't feel confident from your amount of experience, I advise you to act like you're confident. Enough time acting like you're confident and you will eventually feel that confidence.

Example - High-Level Meeting

My most recent example of this was taking part in a meeting with leaders several levels above me. I was nervous and didn't want to speak in case I didn't sound like I knew what I was talking about. I spent a lot of the meeting sitting and taking notes, distilling what I had heard down right up until the point where I made some realisations. When I eventually spoke up about them, the leaders listened to me and the conversation took a whole different direction.

Another part of knowing your worth is being aware of what other options you have. Keep an eye on your career field and see what other job opportunities there are, as well as the kind of salary your role normally pays. It's a great idea to know what's out there - either you'll find something that you think is more exciting and is a better opportunity, or you can be satisfied that where you are is the best place for you.

Putting the Tips into Action

So there you have my top five tips on how to stand out as a software engineer.

Some things you can do periodically, like updating your brag doc or your LinkedIn profile. You can start straight away by opening up a daily note, writing the date, and starting to take notes. You can also make an empty brag doc, ready to start filling in your first entries.

Another way you can get started is by arranging a talk with your manager, where you can have an honest conversation about where you want to be and what kind of feedback your manager can provide you. Arrange the meeting, go in knowing what you want, and write down the result of the meeting so you can look back on it in your log.

The last part is interacting with your team. Start acting with more confidence around your team, take responsibility for something that's outside of your comfort zone, and take responsibility for your mistakes as they happen. A few good examples of tasks you can take responsibility for are being the Scrum Master for your team, writing up documentation that is currently missing, or leading a meeting that no one has volunteered for.

Any of these options will help you to get started and to grow in your career. Good luck standing out as a software engineer!

· 13 min read
Michael Hart

This post is about how to build an AWS Step Functions state machine and how you can use it to interact with IoT edge devices. In this case, we are sending a smoothie order to a "robot" and waiting for it to make that smoothie.

The state machine works by chaining together a series of Lambda functions and defining how data should be passed between them (if you're not sure about Lambda functions, take a look at this blog post!). There's also a step where the state machine needs to wait for the smoothie to be made, which is slightly more complicated - we'll cover that later in this post.

This post is also available in video form - check the video link below if you want to follow along!

AWS Step Functions Service

AWS Step Functions is an AWS service that allows users to build serverless workflows. Serverless came up in my post on Lambda functions - it means that you can run applications in the cloud without provisioning any servers or constantly-running resources. That in turn means you only pay for the time that something is executing in the cloud, which is often much cheaper than provisioning a server, but with the same performance.

To demonstrate Step Functions, we're building a state machine that accepts smoothie orders from customers and sends them to an available robot to make that smoothie. Our state machine will look for an available robot, send it the order, and wait for the order to complete. The state machine will be built in AWS Step Functions, which we can access using the console.

State Machine Visual Representation

First, we'll look at the finished state machine to get an idea of how it works. Clicking the edit button within the state machine will open the workflow Design tab for a visual representation of the state machine:

Visual representation of Step Functions State Machine

Each box in the diagram is a stage of the Step Functions state machine. Most of the stages are Lambda functions, which are configured to interface with AWS resources. For example, the first stage (GetRobot) scans a DynamoDB table for the first robot with the ONLINE status, meaning that it is ready for work.

If at least one robot is available, GetRobot will pass its name to the next stage - SetRobotWorking. This function updates that robot's entry in the DynamoDB table to WORKING, so future invocations don't try to give that robot another smoothie order.
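Conceptually, the GetRobot stage boils down to a "first robot with ONLINE status" lookup. Here's a rough sketch of that logic (in TypeScript for illustration - the real stage is a Lambda function scanning DynamoDB, and the field names are assumptions):

```typescript
interface Robot {
  RobotName: string;
  Status: "ONLINE" | "WORKING" | "BROKEN";
}

// What GetRobot does conceptually: return the name of the first
// robot that is ready for work, or undefined if none are available.
function firstAvailableRobot(robots: Robot[]): string | undefined {
  return robots.find((robot) => robot.Status === "ONLINE")?.RobotName;
}
```

If nothing is returned, there's no robot to take the order - which is exactly the failure path we'll see when testing the state machine with no available robots.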

From there, the robot name is again passed on to TellRobotOrder, which is responsible for sending an MQTT message via AWS IoT Core to tell the robot its new smoothie order. This is where the state machine gets slightly more complicated - we need the state machine to pause and wait for the smoothie to be made.


While we're waiting for the smoothie to be made, we could have the Lambda function wait for a response, but we would be paying for the entire time that function is sitting and waiting. If the smoothie takes 5 minutes to complete, that would be over 6000x the price!
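To put a rough number on that claim: Lambda bills per millisecond of execution, so a function that would normally finish in around 50 ms (an assumed figure for illustration) but instead sits waiting out a 5-minute smoothie is billed for 300,000 ms:

```typescript
// Rough billed-time multiplier for a Lambda that waits for the smoothie
// instead of returning immediately. The 50 ms baseline is an assumption.
const normalRuntimeMs = 50;
const smoothieWaitMs = 5 * 60 * 1000; // 5 minutes
const multiplier = smoothieWaitMs / normalRuntimeMs;
console.log(multiplier); // 6000
```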

Instead, we can use the task token ("wait for a callback") integration in Step Functions, which lets the state machine pause at no extra cost. The system follows this setup:

IoT Rule to Robot Diagram

When the state machine sends the smoothie order to the robot, it includes a generated task token. The robot then makes the smoothie and, when it is finished, publishes a success message containing that same task token. An IoT Rule forwards that message to another Lambda function, which tells the state machine that the task was a success. Finally, the state machine updates the robot's status back to ONLINE, so it can receive more orders, and the execution completes successfully.
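The Lambda function at the end of that chain is small: it pulls the task token out of the forwarded message and reports success back to Step Functions. A hedged sketch (the field names here are assumptions - check the repository for the real payload shape):

```typescript
// Hypothetical shape of the robot's MQTT success message, as forwarded
// by the IoT Rule into the Lambda event.
interface RobotSuccessMessage {
  TaskToken: string;
  RobotName: string;
}

// Pure helper: validate the forwarded message and extract the token
// that the state machine generated for this order.
function extractTaskToken(message: RobotSuccessMessage): string {
  if (!message.TaskToken) {
    throw new Error(`No task token in message from ${message.RobotName}`);
  }
  return message.TaskToken;
}

// The handler would then hand the token back to Step Functions, e.g.
// with the AWS SDK for JavaScript v3:
//   await sfn.send(new SendTaskSuccessCommand({
//     taskToken: extractTaskToken(event),
//     output: JSON.stringify({ Status: "SUCCESS" }),
//   }));
```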

Why go through Lambda and IoT Core?

The robot could directly call the Task Success API, but we would need to give it permission to do so - as well as a direct internet connection. This version of the system means that the robot only ever communicates using MQTT messages via AWS IoT Core. See my video on AWS IoT Core to see how to set this up.

Testing the Smoothie State Machine

To test the state machine, we start with a table with two robots, both with ONLINE status. If you follow the setup instructions in the README, your table will have these entries:

Robots with ONLINE state

Successful Execution

If we now request any kind of smoothie using the script, we start an execution of the state machine. It will find that Robot1 is free to perform the function and update its status to WORKING:

Robot1 with WORKING state

Then it will send an MQTT message requesting the smoothie. After a few seconds, the mock robot script will respond with a success message. We can see this in the MQTT test client:

MQTT Test Client showing order and success messages

This allows the state machine to finish its execution successfully:

Successful step function execution

If we click on the execution, we can see the successful path lit up in green:

State machine diagram with successful states

Smoothie Complete!

We've made our first fake smoothie! Now we should make sure we can handle errors that happen during smoothie making.

Robot Issue during Execution

What happens if there is an issue with the robot? Here we can use error handling in Step Functions. We define a timeout on the smoothie making task, and if that timeout is reached before the task is successful, we catch the error - in this case, we update the robot's state to BROKEN and fail that state machine's execution.

To test this, we can kill the mock robot script, which simulates all robots being offline. In this case, running the order script will request the smoothie from Robot1, but the request will then time out after 10 seconds. The execution then updates the robot's state to BROKEN, ensuring that future executions do not request smoothies from Robot1.

Robot Status shown as BROKEN

The overall state execution also fails, allowing us to alert the customer of the failure:

Execution fails from time out

We can also see what happened to cause the failure by clicking on the execution and scrolling to the diagram:

State Machine diagram of timeout failure

Another execution will have the same effect for Robot2, leaving us with no available robots.

No Available Robots

If we never add robots into the table, or all of our robots are BROKEN or WORKING, we won't have a robot to make a smoothie order. That means our state machine will fail at the first step - getting an available robot:

State Machine diagram with no robots available

That's our state machine defined and tested. In the next section, we'll take a look at how it's built.

Building a State Machine

To build the Step Functions state machine, we have a few options, but I would recommend using CDK for the definition and the visual designer in the console for prototyping. If you're not sure what the benefits of using CDK are, I invite you to watch my video on the benefits, where I discuss how to use CDK with SiteWise:

The workflow goes something like this:

  1. Make a base state machine with functions and AWS resources using CDK
  2. Use the visual designer to prototype and build the stages of the state machine up further
  3. Define the stages back in the CDK code to make the state machine reproducible and recover from any breaking changes made in the previous step

Once complete, you should be able to deploy the CDK stack to any AWS account and have a fully working serverless application! To make this step simpler, I've uploaded my CDK code to a Github repository. Setup instructions are in the README, so I'll leave them out of this post. Instead, we'll break down some of the code in the repository to see how it forms the full application.

CDK Stack

This time, I've split the CDK stack into multiple files to make the dependencies and interactions clearer. In this case, the main stack is at lib/cdk-stack.ts, and refers to the four components:

  1. RobotTable - the DynamoDB table containing robot names and statuses
  2. Functions - the Lambda functions with the application logic, used to interact with other AWS services
  3. IoTRules - the IoT Rule used to forward the MQTT message from a successful smoothie order back to the Step Function
  4. SmoothieOrderHandler - the definition of the state machine itself, referring to the Lambda functions in the Functions construct

We can take a look at each of these in turn to understand how they work.


Robot Table

This construct is simple: it defines a DynamoDB table where the name of the robot is the primary key. The table will be filled by a script after stack deployment, so this is all that's needed. Once filled, the table will have the same contents as shown in the testing section.


Functions

This construct defines four Lambda functions. All four are written in Rust to minimize execution time - the benefits are discussed more in my blog post on Lambda functions. Each handler function is responsible for one small task, showing how the state machine can pass data around.

Combining Functions

We could simplify the state machine by combining functions together, or using Step Functions to call AWS services directly. I'll leave it to you to figure out how to simplify the state machine!

The functions are as follows:

  1. Get Available Robot - scans the DynamoDB table to find the first robot with ONLINE status. Requires the table name as an environment variable, and permission to read the table.
  2. Update Status - updates the robot name to the given status in the DynamoDB table. Also requires the table name as an environment variable, and permission to write to the table.
  3. Send MQTT - sends a smoothie order to the given robot name. Requires IoT data permissions to connect to IoT Core and publish a message.
  4. Send Task Success - called by an IoT Rule when a robot publishes that it has successfully finished a smoothie. Requires permission to send the task success message to the state machine, which has to be done after the state machine is defined, hence updating the permission in a separate function.

IoT Rules

This construct defines an IoT Rule that listens on the topic filter robots/+/success, pulls out the contents of any matching MQTT message, and calls the Send Task Success Lambda function. The only additional permission it needs is to invoke that function.
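As an aside, the + in robots/+/success is MQTT's single-level wildcard, so the rule matches a success message from any robot name. The matching behaviour can be sketched like this (illustrative only - this is not how IoT Core implements it, and it ignores the multi-level # wildcard):

```typescript
// Single-level wildcard matching: '+' matches exactly one topic segment.
function topicMatches(filter: string, topic: string): boolean {
  const filterSegments = filter.split("/");
  const topicSegments = topic.split("/");
  if (filterSegments.length !== topicSegments.length) return false;
  return filterSegments.every(
    (segment, i) => segment === "+" || segment === topicSegments[i],
  );
}

console.log(topicMatches("robots/+/success", "robots/Robot1/success")); // true
console.log(topicMatches("robots/+/success", "robots/Robot1/failure")); // false
```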

Smoothie Order Handler

This construct pulls all the Lambda functions together into our state machine. Each stage corresponds to one of the stages in the State Machine Visual Representation section.

The actual state machine is defined as a chain of functions:

const orderDef = getAvailableRobot
  .next(setRobotWorking)
  .next(tellRobotOrder
    .addCatch(setRobotBroken, {
      errors: [step.Errors.TIMEOUT],
      resultPath: step.JsonPath.DISCARD,
    }))
  .next(setRobotOnline);

(The stage constant names here are illustrative - see the repository for the exact chain.)

Defining each stage as a constant, then chaining them together, allows us to see the logic of the state machine more easily. However, it does hide the information that is being passed between stages - Step Functions will store metadata while executing and pass the output of one function to the next. We don't always want to pass the output of one function directly to another, so we define how to modify the data for each stage.

For example, the Get Robot function looks up a robot name, so the entire output payload should be saved for the next function:

const getAvailableRobot = new steptasks.LambdaInvoke(this, 'GetRobot', {
  lambdaFunction: functions.getAvailableRobotFunction,
  outputPath: "$.Payload",
});

However, the Set Robot Working stage does not produce any relevant output for future stages, so its output can be discarded. Also, it needs a new Status field defined for the function to work, so the payload is defined in the stage. To set one of the fields based on the output of the previous function, we use .$ to tell Step Functions to fill it in automatically. Hence, the result is:

const setRobotWorking = new steptasks.LambdaInvoke(this, 'SetRobotWorking', {
  lambdaFunction: functions.updateStatusFunction,
  payload: step.TaskInput.fromObject({
    "RobotName.$": "$.RobotName",
    "Status": "WORKING",
  }),
  resultPath: step.JsonPath.DISCARD,
});

Another interesting thing to see in this construct is how to define a stage that waits for a task to complete before continuing. This is done by changing the integration pattern, plus passing the task token to the task handler - in this case, our mock robot. The definition is as follows:

const tellRobotOrder = new steptasks.LambdaInvoke(this, 'TellRobotOrder', {
  lambdaFunction: functions.sendMqttFunction,
  // Define the task token integration pattern
  integrationPattern: step.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
  // Define the task timeout
  taskTimeout: step.Timeout.duration(cdk.Duration.seconds(10)),
  payload: step.TaskInput.fromObject({
    // Pass the task token to the task handler
    "TaskToken": step.JsonPath.taskToken,
    "RobotName.$": "$.RobotName",
    "SmoothieName.$": "$.SmoothieName",
  }),
  resultPath: step.JsonPath.DISCARD,
});

This tells the state machine to generate a task token and give it to the Lambda function as defined, then wait for a task success signal before continuing. We can also define a catch route in case the task times out, which is done using the addCatch function:

// setRobotBroken: the stage that updates the robot's status to BROKEN
tellRobotOrder.addCatch(setRobotBroken, {
  errors: [step.Errors.TIMEOUT],
  resultPath: step.JsonPath.DISCARD,
});

With that, we've seen how the state machine is built, seen how it runs, and seen how to completely define it in CDK code.


Challenges

Do you want to test your understanding? Here are a couple of challenges for you to extend this example:

  1. Retry making the smoothie! If a robot times out making the smoothie, just cancelling the order is not a good customer experience - ideally, the system should give the order to another robot instead. See if you can set up a retry path from the BROKEN robot status update back to the start of the state machine.
  2. Add a queue to the input! At present, if we have more orders than robots, the later orders will simply fail immediately. Try adding a queue that starts executing the state machine using Amazon Simple Queue Service (SQS).


Summary

Step Functions can be used to build serverless applications as state machines that call other AWS resources. In particular, Step Functions combined with AWS Lambda functions for the application logic is a powerful pattern.

We can use other serverless AWS resources to access more cloud functionality or interface with edge devices. In this case, we use MQTT messages via IoT Core to send robots their smoothie orders, then listen for the responses to those messages to continue execution. We also use a DynamoDB table - a serverless database - to store each robot's current status as the state machine executes.

Best of all, this serverless application runs in the cloud, giving us all of the advantages of running using AWS - excellent logging and monitoring, fine-grained permissions, and modifying the application on demand, to name a few!

· 17 min read
Michael Hart

This is the second part of the "ROS2 Control with the JetBot" series, where I show you how to get a JetBot working with ROS2 Control! This is a sequel to the part 1 blog post, where I showed how to drive the JetBot's motors using I2C and PWM with code written in C++.

In this post, I show the next step in making ROS2 Control work with the WaveShare JetBot - wrapping the motor control code in a System. I'll walk through some concepts, show the example repository for ROS2 Control implementations, and then show how to implement the System for JetBot and see it running.

This post is also available in video form - check the video link below if you want to follow along!

ROS2 Control Concepts

First, before talking about any of these concepts, there's an important distinction to make: ROS Control and ROS2 Control are different frameworks, and are not compatible with one another. This post is focused on ROS2 Control - or as their documentation calls it, ros2_control.

ros2_control's purpose is to simplify integrating new hardware into ROS2. The central idea is to separate controllers from systems, actuators, and sensors. A controller is responsible for controlling the movement of a robot; an actuator is responsible for moving a particular joint, like a motor moving a wheel. There's a good reason for this separation: it allows us to write a controller for a wheel configuration, without knowing which specific motors are used to move the wheels.

Let's take an example: the Turtlebot and the JetBot are both driven using one wheel on each side and casters to keep the robots level. These are known as differential drive robots.

Turtlebot image with arrows noting wheels

Turtlebot 3 Burger image edited from Robotis

JetBot image with arrows noting wheels and caster

WaveShare JetBot AI Kit image edited from NVIDIA

As the motor configuration is the same, the mathematics for controlling them is also the same, which means we can write one controller to control either robot - assuming we can abstract away the code to move the motors.
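That shared mathematics is small enough to sketch. A differential drive controller converts a desired forward speed and turn rate into per-wheel speeds (shown in TypeScript for brevity here - real ROS2 controllers are written in C++, and wheelSeparation stands in for whatever your robot's track width is):

```typescript
// Differential drive kinematics: convert a body velocity command
// (linear m/s, angular rad/s) into left/right wheel linear speeds.
function wheelSpeeds(
  linear: number,
  angular: number,
  wheelSeparation: number,
): { left: number; right: number } {
  return {
    left: linear - (angular * wheelSeparation) / 2,
    right: linear + (angular * wheelSeparation) / 2,
  };
}

// Driving straight ahead at 0.5 m/s: both wheels turn at the same speed.
console.log(wheelSpeeds(0.5, 0.0, 0.1)); // { left: 0.5, right: 0.5 }
```

Note that nothing in this calculation depends on which motors are attached - that's exactly the abstraction boundary the framework draws between controllers and hardware.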

In fact, this is exactly what's provided by the ros2_controllers library. This library contains several standard controllers, including our differential drive controller. We could build a JetBot and a Turtlebot by setting up this standard controller to be able to move their motors - all we need to do is write the code for moving the motors when commanded to by the controller.

ros2_control also provides the controller manager, which is used to manage resources and activate/deactivate controllers, to allow for advanced functionality like switching between controllers. Our use case is simple, so we will only use it to activate the controller. This architecture is explained well in the ros2_control documentation - see the architecture page for more information.

This post shows how to perform this process for the JetBot. We're going to use the I2C and motor classes from the previous post in the series to define a ros2_control system that will work with the differential drive controller. We use a System rather than an Actuator because we want to define one class that can control both motors in one write call, instead of having two separate Actuators.

ROS2 Control Demos Repository

To help us with our ros2_control system implementation, the ros2_control framework has helpfully provided us with a set of examples. One of these examples is exactly what we want - building a differential drive robot (or diffbot, in the examples) with a custom System for driving the motors.

The repository has a great many examples available. If you're here to learn about ros2_control, but not to build a diffbot, there are examples of building simulations, building URDF files representing robots, externally connected sensors, and many more.

We will be using example 2 from this demo repository as a basis, but stripping out anything we don't require right now, like simulation support; we can bring these parts back in later iterations as we come to understand them.

JetBot System Implementation

In this section, I'll take you through the key parts of my JetBot System implementation for ros2_control. The code is available on Github - remember that this repository will be updated over time, so select the tag jetbot-motors-pt2 to get the same code version as in this article!

Components are libraries, not nodes

ros2_control uses a different method of communication from the standard ROS2 publish/subscribe messaging. Instead, the controller will load the code for the motors as a plugin library, and directly call functions inside it. This is the reason we had to rewrite the motor driver in C++ - it has to be a library that can be loaded by ros2_control, which is written in C++.

Previously, we wrote an example node that spun the wheels using the motor driver; now we are replacing that executable with a library that can be loaded by ros2_control. In CMakeLists.txt, we can see:



pluginlib_export_plugin_description_file(hardware_interface jetbot_control.xml)

These are the lines that build the JetBot code as a library instead of a system, and export definitions that show it is a valid plugin library to be loaded by ros2_control. A new file, jetbot_control.xml, tells ros2_control more information about this library to allow it to be loaded - in this case, the library name and ros2_control plugin type (SystemInterface - we'll discuss this more in the Describing the JetBot section).

Code Deep Dive

For all of the concepts in ros2_control, the actual implementation of a System is quite simple. Our JetBotSystemHardware class extends the SystemInterface class:

class JetBotSystemHardware : public hardware_interface::SystemInterface {

In the private fields of the class, we create the fields that we will need during execution. This includes the I2CDevice and two Motor classes from the previous post, along with two vectors for the hardware commands and hardware velocities:

std::vector<MotorPins> motor_pin_sets_;  // pin configuration for each motor
std::vector<Motor> motors_;              // one Motor instance per wheel
std::shared_ptr<I2CDevice> i2c_device_;  // I2C connection to the motor driver
std::vector<double> hw_commands_;        // commanded wheel velocities
std::vector<double> hw_velocities_;      // reported wheel velocities

Then, a number of methods need to be overridden from the base class. Take a look at the full header file to see them, but essentially it boils down to three concepts:

  1. export_state_interfaces/export_command_interfaces: report the state and command interfaces supported by this system class. These interfaces can then be checked by the controller for compatibility.
  2. on_init/on_activate/on_deactivate: lifecycle methods automatically called by the controller. Different setup stages for the System occur in these methods, including enabling the motors in the on_activate method and stopping them in on_deactivate.
  3. read/write: methods called every controller update. read is for reading the velocities from the motors, and write is for writing requested speeds into the motors.

From these, we use the on_init method to:

  1. Initialize the base SystemInterface class
  2. Read the pin configuration used for connecting to the motors from the parameters
  3. Check that the provided hardware information matches the expected information - for example, that there are two velocity command interfaces
  4. Initialize the I2CDevice and Motors

This leaves the System initialized, but not yet activated. Once on_activate is called, the motors are enabled and ready to receive commands. The read and write methods are then repeatedly called for reading from and writing to the motors respectively. When it's time to shut down, on_deactivate stops the motors, and the destructors of the classes perform any required cleanup. There are more lifecycle states that could be used for a more complex system - these are documented in the ros2 demos repository.
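As a mental model, that flow of calls can be sketched outside ROS entirely. The toy Python below is my own illustration, not the real API - the actual implementation is the C++ class described above, whose methods take time arguments and return status codes:

```python
# Toy model of the ros2_control lifecycle and update loop (illustration only).
class FakeJetBotSystem:
    def __init__(self):
        self.activated = False
        self.hw_commands = [0.0, 0.0]    # target wheel velocities
        self.hw_velocities = [0.0, 0.0]  # last reported wheel velocities

    def on_activate(self):
        self.activated = True  # real code: enable the motors over I2C

    def on_deactivate(self):
        self.hw_commands = [0.0, 0.0]  # real code: stop the motors
        self.activated = False

    def read(self):
        # Real code would query sensors; with no encoders, the JetBot can
        # only echo the last commands back as its "measured" velocities.
        self.hw_velocities = list(self.hw_commands)

    def write(self):
        pass  # real code: send self.hw_commands to the motors over I2C

def controller_update(system, left, right):
    """One controller cycle: read the state, then write the new commands."""
    system.read()
    system.hw_commands = [left, right]
    system.write()

system = FakeJetBotSystem()
system.on_activate()
controller_update(system, 0.5, 0.5)   # drive forward
controller_update(system, 0.2, -0.2)  # turn on the spot
system.on_deactivate()
```

The key point is the ordering: the controller manager activates the hardware once, then alternates read and write on every update at the configured rate.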

This System class, plus the I2CDevice and Motor classes, are compiled into the plugin library, ready to be loaded by the controller.

Describing the JetBot

The SystemInterface then comes into play when describing the robot. The description folder from the example contains the files that define the robot, including its ros2_control configuration, simulation configuration, and materials used to represent it during simulation. As this implementation has been pared down to basics, only the ros2_control configuration and the mock hardware flag have been kept.

The jetbot.ros2_control.xacro file defines the ros2_control configuration needed to control the robot. It uses xacro files to define this configuration, where xacro is a tool that extends XML files by allowing us to define macros that can be referenced in other files:

<xacro:macro name="jetbot_ros2_control" params="name prefix use_mock_hardware">

In this case, we are defining a macro for the ros2_control part of the JetBot that can be used in the overall robot description.

We then define the ros2_control portion with type system:

<ros2_control name="${name}" type="system">

Inside this block, we give the path to the plugin library, along with the parameters needed to configure it. You may recognize the pin numbers in this section!

<param name="pin_enable_0">8</param>
<param name="pin_pos_0">9</param>
<param name="pin_neg_0">10</param>
<param name="pin_enable_1">13</param>
<param name="pin_pos_1">12</param>
<param name="pin_neg_1">11</param>

This tells any controller loading our JetBot system hardware which pins are used to drive the PWM chip. But, we're not done yet - we also need to tell ros2_control the command and state interfaces available.

ros2_control Joints, Command Interfaces, and State Interfaces

ros2_control uses joints to understand what the movable parts of a robot are. In our case, we define one joint for each motor.

Each joint then defines a number of command and state interfaces. Each command interface accepts velocity, position, or effort commands, which allows ros2_control controllers to command the joints as they need. State interfaces report a velocity, position, or effort measurement from the joint, which allows ros2_control to monitor how much the joint has actually moved and adjust accordingly. In our case, each joint accepts velocity commands and reports measured velocity - although we configure the controller to ignore the reported velocity, because we don't actually have a sensor like an encoder on the JetBot. This means we're using open loop control, as opposed to closed loop control.

<joint name="${prefix}left_wheel_joint">
  <command_interface name="velocity"/>
  <state_interface name="velocity"/>
</joint>

Closed loop control is far more accurate than open loop control. Imagine you're trying to sprint exactly 100 metres from a starting line, once blindfolded, and once without a blindfold and with line markings every ten metres - which run is likely to be more accurate? The JetBot has no sensor to measure how much it has moved, so the robot is effectively blindfolded, guessing how far it has travelled. This means our navigation won't be as accurate - we are limited by the hardware.
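A quick numerical illustration of the difference (a toy model, not JetBot code): suppose the motors actually run at 80% of the commanded speed because the battery is low. Open loop control never notices, while closed loop control measures the error and corrects for it:

```python
# Toy comparison: a motor that actually runs at 80% of the commanded
# speed - for example, because the battery is low.
def motor(command):
    return 0.8 * command

target = 1.0  # desired speed

# Open loop: command the target and hope for the best.
open_loop_speed = motor(target)

# Closed loop: measure the real speed and correct the command.
command = target
closed_loop_speed = motor(command)
for _ in range(20):
    error = target - closed_loop_speed  # requires a sensor to measure!
    command += 0.5 * error              # simple proportional correction
    closed_loop_speed = motor(command)

print(round(open_loop_speed, 3))    # stuck well below the target
print(round(closed_loop_speed, 3))  # converges on the target
```

With feedback, the commanded value creeps up until the measured speed matches the target; without it, the error is never even detected.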

JetBot Description

With the ros2_control part of the JetBot defined, we can import and use this macro in the overall JetBot definition. As we've stripped out all other definitions, such as simulation parameters, this forms the only part of the overall JetBot definition:

<xacro:include filename="$(find jetbot_control)/ros2_control/jetbot.ros2_control.xacro" />
<xacro:jetbot_ros2_control
    name="JetBot" prefix="$(arg prefix)" use_mock_hardware="$(arg use_mock_hardware)"/>

Let's summarize what we've created so far:

  1. A plugin library capable of writing commands to the JetBot motors
  2. A ros2_control xacro file, describing the plugin to load and the parameters to give it
  3. One joint per motor, each with a velocity command and state interface
  4. An overall description file that imports the ros2_control file and calls the macro

Now when we use xacro to build the overall description file, it will import the ros2_control file macro and expand it, giving a complete robot description that we can add to later. It's now time to look at creating a controller manager and a differential drive controller.

Creating A Controller

So far, we've defined a JetBot using description files. Now we want to be able to launch ros2_control and tell it what controller to create, how to configure it, and how to load our defined JetBot. For this, we use the jetbot_controllers.yaml file.

We start with the controller_manager. This loads one or more controllers and manages swapping between them, making sure that each resource is only used by one controller at a time. In our case, we're only using it to load and run one controller:

controller_manager:
  ros__parameters:
    update_rate: 10 # Hz

    jetbot_base_controller:
      type: diff_drive_controller/DiffDriveController

We tell the manager to update at 10Hz and to load the diff_drive_controller/DiffDriveController controller. This is the standard differential drive controller discussed earlier. If we take a look at the information page, we can see a lot of configuration for it - we provide this configuration in the same file.

We define that the controller is open loop, as there is no feedback. We give the names of the joints for the controller to control - this is how the controller knows it can send velocities to the two wheels implemented by our system class. We also set velocity limits on both linear and angular movement:

linear.x.max_velocity: 0.016
linear.x.min_velocity: -0.016
angular.z.max_velocity: 0.25
angular.z.min_velocity: -0.25

These numbers were obtained through experimentation! ros2_control expects target velocities specified in radians per second [source]. However, the velocity we send to the motors doesn't correspond to radians per second - the range of -1 to +1 maps from the minimum to the maximum velocity of the motors, which varies with the battery level of the robot. The limits above move the robot at a reasonable pace.

Finally, we supply the wheel separation and radius, specified in metres. I measured these from my own robot - the separation is the minimum distance between the wheels, and the radius is from the centre of a wheel to its edge:

wheel_separation: 0.104
wheel_radius: 0.032
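These two measurements are what let the controller convert a body velocity command into individual wheel velocities. As an illustration, here is my own sketch of the standard differential drive equations (not the DiffDriveController source), using the numbers from the configuration above:

```python
# Differential drive kinematics sketch using the measured values above.
WHEEL_SEPARATION = 0.104  # metres, between the wheels
WHEEL_RADIUS = 0.032      # metres

def body_to_wheel_velocities(linear_x, angular_z):
    """Convert a body twist (m/s, rad/s) into wheel angular velocities (rad/s)."""
    left = (linear_x - angular_z * WHEEL_SEPARATION / 2) / WHEEL_RADIUS
    right = (linear_x + angular_z * WHEEL_SEPARATION / 2) / WHEEL_RADIUS
    return left, right

# Driving straight at the configured linear limit of 0.016 m/s asks each
# wheel for 0.5 rad/s; turning on the spot spins the wheels in opposition.
print(body_to_wheel_velocities(0.016, 0.0))  # → (0.5, 0.5)
print(body_to_wheel_velocities(0.0, 0.25))
```

This also shows why the linear limit of 0.016 m/s is so small: with 32 mm wheels, it already corresponds to a wheel speed of 0.5 rad/s.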

With this, we have described how to configure a controller manager with a differential drive controller to control our JetBot!

Launching the Controller

The last step here is to provide a launch script to bring everything up. The example again provides us with the launch script, including a field that allows us to launch with mock hardware if we want - this is great for testing that everything loads correctly on a system that doesn't have the right hardware.

The launch script goes through a few steps to get to the full ros2_control system, starting with loading the robot description. We specify the path to the description file relative to the package, and use the xacro tool to generate the full XML for us:

# Get URDF via xacro
robot_description_content = Command(
    [
        PathJoinSubstitution([FindExecutable(name="xacro")]),
        " ",
        PathJoinSubstitution(
            [FindPackageShare("jetbot_control"), "urdf", "jetbot.urdf.xacro"]
        ),
    ]
)
robot_description = {"robot_description": robot_description_content}

Following this, we load the jetbot controller configuration:

robot_controllers = PathJoinSubstitution(
    # Directory within the package - check the repository for the exact path
    [FindPackageShare("jetbot_control"), "config", "jetbot_controllers.yaml"]
)

With the robot description and the robot controller configuration loaded, we can pass these to the controller manager:

control_node = Node(
    package="controller_manager",
    executable="ros2_control_node",
    parameters=[robot_description, robot_controllers],
)

Finally, we ask the launched controller manager to start up the jetbot_base_controller:

robot_controller_spawner = Node(
    package="controller_manager",
    executable="spawner",
    arguments=["jetbot_base_controller", "--controller-manager", "/controller_manager"],
)

All that remains is to build the package and launch the new launch file!

ros2_control Launch Execution

This article has been written from the bottom up, but now that we have the full story, we can look at it from the top down:

  1. We launch the JetBot launch file defined in the package
  2. The launch file spawns the controller manager, which is used to load controllers and manage resources
  3. The launch file requests that the controller manager launches the differential drive controller
  4. The differential drive controller loads the JetBot System as a plugin library
  5. The System connects to the I2C bus, and hence, the motors
  6. The controller can then command the System to move the motors as requested by ROS2 messaging

Hooray! We have defined everything we need to launch ros2_control and configure it to control our JetBot! Now we have a controller that is able to move our robot around.

Running on the JetBot

To try the package out, we first need a working JetBot. If you're not sure how to do the initial setup, I've created a video on exactly that:

With the JetBot working, we can create a workspace and clone the code into it. Use VSCode over SSH to execute the following commands:

mkdir ~/dev_ws
cd ~/dev_ws
git clone -b jetbot-motors-pt2
cp -r ./jetbot-ros-control/.devcontainer .

Then use the Dev Containers plugin to rebuild and reload the container. This will take a few minutes, but the step is crucial: it allows us to run ROS2 Humble in a container on the JetBot, whose stock OS uses an older version of Ubuntu. Once complete, we can build the workspace, source it, and launch the controller:

source /opt/ros/humble/setup.bash
colcon build --symlink-install
source install/setup.bash
ros2 launch jetbot_control

This should launch the controller and allow it to connect to the motors successfully. Now we can use teleop_twist_keyboard to test it - but with a couple of changes.

First, we now expect messages to go to the /jetbot_base_controller/cmd_vel topic instead of the previous /cmd_vel topic. We can fix that by asking teleop_twist_keyboard to remap the topic it normally publishes to.

Secondly, we normally expect /cmd_vel to accept Twist messages, but the controller expects TwistStamped messages. There is a parameter for teleop_twist_keyboard that turns its messages into TwistStamped messages, but while trying it out, I found that the node ignored that parameter. Building the node from source fixed this for me, so to run the keyboard test, I recommend building and running from source:

git clone
colcon build --symlink-install
source install/setup.bash
ros2 run teleop_twist_keyboard teleop_twist_keyboard \
--ros-args \
-p stamped:=true \
-r /cmd_vel:=/jetbot_base_controller/cmd_vel

Once running, you should be able to use the standard keyboard controls written on screen to move the robot around. Cool!

Let's do one more experiment, to see how the configuration works. Go into the jetbot_controllers.yaml file and play with the maximum velocity and acceleration fields, to see how the robot reacts. Relaunch after every configuration change to see the result. You can also tune these parameters to match what you expect more closely.

That's all for this stage - we have successfully integrated our JetBot's motors into a ros2_control System interface!

Next Steps

Having this setup gives us a couple of options going forwards.

First, we stripped out a lot of configuration that supported simulation - we could add this back in to support Gazebo simulation, where the robot in the simulation should act nearly identically to the real life robot. This allows us to start developing robotics applications purely in simulation, which is likely to be faster due to the reset speed of the simulation, lack of hardware requirements, and so on.

Second, we could start running a navigation stack that can move the robot for us; for example, we could request that the robot reaches an end point, and the navigation system will plan a path to take the robot to that point, and even face the right direction.

Stay tuned for more posts in this series, where we will explore one or both of these options, now that we have the robot integrated into ROS2 using ros2_control.

· 14 min read
Michael Hart

This post shows how to build two simple functions, running in the cloud, using AWS Lambda. The purpose of these functions is the same - to update the status of a given robot name in a database, allowing us to view the current statuses in the database or build tools on top of it. This is one way we could coordinate robots in one or more fleets - using the cloud to store the state and run the logic to co-ordinate those robots.

This post is also available in video form - check the video link below if you want to follow along!

What is AWS Lambda?

AWS Lambda is a service for executing serverless functions. That means you don't need to provision any virtual machines or clusters in the cloud - just trigger the Lambda with some kind of event, and your pre-built function will run. It runs on inputs from the event and could give you some outputs, make changes in the cloud (like database modifications), or both.

AWS Lambda charges based on the time taken to execute the function and the memory assigned to the function. The compute power available for a function scales with the memory assigned to it. We will explore this later in the post by comparing the memory and execution time of two Lambda functions.

In short, AWS Lambda allows you to build and upload functions that will execute in the cloud when triggered by configured events. Take a look at the documentation if you'd like to learn more about the service!

How does that help with robot co-ordination?

Moving from one robot to multiple robots helping with the same task means that you will need a central system to co-ordinate between them. The system may distribute orders to different robots, tell them to go and recharge their batteries, or alert a user when something goes wrong.

This central service can run anywhere that the robots are able to communicate with it - on one of the robots, on a server near the robots, or in the cloud. If you want to avoid standing up and maintaining a server that is constantly online and reachable, the cloud is an excellent choice, and AWS Lambda is a great way to run function code as part of this central system.

Let's take an example: you have built a prototype robot booth for serving drinks. Customers can place an order at a terminal next to the robot and have their drink made. Now that your booth is working, you want to add more booths with robots and distribute orders among them. That means your next step is to add two new features:

  1. Customers should be able to place orders online through a digital portal or webapp.
  2. Any order should be dispatched to any available robot at a given location, and alert the user when complete.

Suddenly, you have gone from one robot capable of accepting orders through a terminal to needing a central database and ordering system. Not only that, but if you want to deploy to a new location, having a single server per site makes it more difficult to route online orders to the right location. One central system in the cloud to manage the orders and robots is perfect for this use case.
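As a sketch of what that central logic might look like (entirely hypothetical - the names, locations, and statuses here are illustrative, not from the repository):

```python
# Hypothetical central dispatch logic: pick an available robot at the
# requested location for a new order, and reserve it.
robots = {
    "robot1": {"location": "downtown", "status": "BUSY"},
    "robot2": {"location": "downtown", "status": "ONLINE"},
    "robot3": {"location": "airport", "status": "ONLINE"},
}

def dispatch_order(robots, location):
    """Return the name of an available robot at the location, or None."""
    for name, info in robots.items():
        if info["location"] == location and info["status"] == "ONLINE":
            info["status"] = "BUSY"  # reserve the robot for this order
            return name
    return None

print(dispatch_order(robots, "downtown"))  # → robot2
```

The dictionary of robot statuses is exactly the kind of state the rest of this post stores in DynamoDB, with Lambda functions playing the role of dispatch_order.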

Building Lambda Functions

Convinced? Great! Let's start by building a simple Lambda function - or rather, two simple Lambda functions. We're going to build one Python function and one Rust function. That's to allow us to explore the differences in memory usage and runtime, both of which increase the cost of running Lambda functions.

All of the code used in this post is available on Github, with setup instructions in the README. In this post, I'll focus on relevant parts of the code.

Python Function

Firstly, what are the Lambda functions doing? In both cases, they accept a name and a status as arguments, attached to the event object passed to the handler; check the status is valid; and update a DynamoDB table for the given robot name with the given robot status. For example, in the Python code:

def lambda_handler(event, context):
# ...
name = str(event["name"])
status = str(event["status"])

We can see that the event is passed to the lambda handler and contains the required fields, name and status. If valid, the DynamoDB table is updated:

ddb = boto3.resource("dynamodb")
table = ddb.Table(table_name)
table.update_item(
    Key={"name": name},
    AttributeUpdates={
        "status": {
            "Value": status
        }
    },
)

Rust Function

Here is the equivalent for checking the input arguments for Rust:

#[derive(Deserialize, Debug, Serialize)]
#[serde(rename_all = "UPPERCASE")]
enum Status {
    // ...
}

#[derive(Deserialize, Debug)]
struct Request {
    name: String,
    status: Status,
}

The difference here is that Rust states its allowed arguments using an enum, so no extra code is required for checking that arguments are valid. The arguments are obtained by accessing event.payload fields:

let status_str = format!("{}", &event.payload.status);
let status = AttributeValueUpdate::builder()
    .value(AttributeValue::S(status_str))
    .build();
let name = AttributeValue::S(;

With the fields obtained and checked, the DynamoDB table can be updated:

let request = ddb_client
    .update_item()
    .table_name(table_name)
    .key("name", name)
    .attribute_updates("status", status);
tracing::info!("Executing request [{request:?}]...");

let response = request.send().await?;
tracing::info!("Got response: {:#?}", response);

CDK Build

To make it easier to build and deploy the functions, the sample repository contains a CDK stack. I've talked more about Cloud Development Kit (CDK) and the advantages of Infrastructure-as-Code (IaC) in my video "From AWS IoT Core to SiteWise with CDK Magic!":

In this case, our CDK stack is building and deploying a few things:

  1. The two Lambda functions
  2. The DynamoDB table used to store the robot statuses
  3. An IoT Rule per Lambda function that will listen for MQTT messages and call the corresponding Lambda function

The DynamoDB table comes from Amazon DynamoDB, another AWS service, which provides a NoSQL database in the cloud. This service is also serverless, again meaning that no servers or clusters need to be provisioned.

There are also two IoT Rules, which are from AWS IoT Core, and define an action to take when an MQTT message is published on a particular topic filter. In our case, it allows robots to publish an MQTT message saying they are online, and will call the corresponding Lambda function. I have used IoT Rules before for inserting data into AWS IoT SiteWise; for more information on setting up rules and seeing how they work, take a look at the video I linked just above.
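The `+` in robots/+/status is MQTT's single-level wildcard: it matches exactly one topic segment, so robots/robot1/status triggers the rule but robots/robot1/battery does not. A simplified matcher (ignoring MQTT's multi-level `#` wildcard) shows the idea:

```python
# Simplified MQTT topic-filter matching: '+' matches exactly one level.
# The real MQTT spec also defines a multi-level '#' wildcard, ignored here.
def matches_filter(topic_filter, topic):
    filter_parts = topic_filter.split("/")
    topic_parts = topic.split("/")
    if len(filter_parts) != len(topic_parts):
        return False
    return all(f == "+" or f == t for f, t in zip(filter_parts, topic_parts))

assert matches_filter("robots/+/status", "robots/robot1/status")
assert not matches_filter("robots/+/status", "robots/robot1/battery")
assert not matches_filter("robots/+/status", "robots/a/b/status")
```

This is why any robot can publish its status to its own topic, and a single rule still routes every message to the Lambda function.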

Testing the Functions

Once the CDK stack has been built and deployed, take a look at the Lambda console. You should have two new functions built, just like in the image below:

Two new Lambda functions in the AWS console

Great! Let's open one up and try it out. Open the function name that has "Py" in it and scroll down to the Test section (top red box). Enter a test name (center red box) and a valid input JSON document (bottom red box), then save the test.

Test configuration for Python Lambda function

Now run the test event. You should see a box pop up saying that the test was successful. Note the memory assigned and the billed duration - these are the main factors in determining the cost of running the function. The actual memory used does not affect the cost, but it can help you choose settings that balance cost against speed of execution.

Test result for Python Lambda function

You can repeat this for the Rust function, only with the test event name changed to TestRobotRs so we can tell them apart. Note that the memory used and duration taken are significantly lower.

Test result for Rust Lambda function

Checking the Database Table

We can now access the DynamoDB table to check the results of the functions. Access the DynamoDB console and click on the table created by the stack.

DynamoDB Table List

Select the button in the top right to explore items.

Explore Table Items button in DynamoDB

This should reveal a screen with the current items in the table - the two test names you used for the Lambda functions:

DynamoDB table with Lambda test items

Success! We have used functions run in the cloud to modify a database to contain the current status of two robots. We could extend our functions to allow different statuses to be posted, such as OFFLINE or CHARGING, then write other applications to work using the current statuses of the robots, all within the cloud. One issue is that this is a console-heavy way of executing the functions - surely there's something more accessible to our robots?

Executing the Functions

Lambda functions have a huge variety of ways that they can be executed. For example, we could set up an API Gateway that is able to accept API requests and forward them to the Lambda, then return the results. One way to check the possible input types is to access the Lambda, then click the "Add trigger" button. There are far too many options to list them all here, so I encourage you to take a look for yourself!

Lambda add trigger button

There's already one input for each Lambda - the AWS IoT trigger. This is an IoT Rule set up by the CDK stack, which is watching the topic filter robots/+/status. We can test this using either the MQTT test client or by running the test script in the sample repository:


One message published on the topic will trigger both functions to run, and we can see the update in the table.

DynamoDB Table Contents after MQTT

There is only one extra entry, and that's because both functions executed on the same input. That means "FakeRobot" had its status updated to ONLINE once by each function.

If we wanted, we could set up the robot to call the Lambda function when it comes online - it could make an API call, or it could connect to AWS IoT Core and publish a message with its ONLINE status. We could also set up more Lambda functions to take customer orders, dispatch them to robots, and so on - the Lambda functions and accompanying AWS services allow us to build a completely serverless robot co-ordination system in the cloud. If you want to see more about connecting ROS2 robots to AWS IoT Core, take a look at my video here:

Lambda Function Cost

How much does Lambda cost to run? For this section, I'll give rough numbers using the AWS Price Calculator. We will assume roughly 100 messages per minute - accounting for customer orders arriving, robots reporting status changes, and orders being distributed - with each message triggering one Lambda function invocation.

For our functions, we can run the test case a few times for each function to get a small spread of numbers. We can also edit the configuration in the console to set higher memory limits, to see if the increase in speed will offset the increased memory cost.

Edit Lambda general configuration

Edit Lambda memory setting

Finally, we will use an ARM architecture, as this currently costs less than x86 in AWS.

I will run a valid test input for each test function 4 times each for 3 different memory values - 128MB, 256MB, and 512MB - and take the latter 3 invocations, as the first invocation takes much longer. I will then take the median billed runtime and calculate the cost per month for 100 invocations per minute at that runtime and memory usage.

My results are as follows:

| Test | Python (128MB) | Python (256MB) | Python (512MB) | Rust (128MB) | Rust (256MB) | Rust (512MB) |
|---|---|---|---|---|---|---|
| 1 | 594 ms | 280 ms | 147 ms | 17 ms | 5 ms | 6 ms |
| 2 | 574 ms | 279 ms | 147 ms | 15 ms | 6 ms | 6 ms |
| 3 | 561 ms | 274 ms | 133 ms | 5 ms | 5 ms | 6 ms |
| Median | 574 ms | 279 ms | 147 ms | 15 ms | 5 ms | 6 ms |
| Monthly Cost | $5.07 | $4.95 | $5.17 | $0.99 | $0.95 | $1.06 |

There is a lot of information to pull out from this table! The first thing to notice is the monthly cost. This is the estimated cost per month for Lambda - 100 invocations per minute for the entire month costs a maximum total of $5.17. These are rough numbers, and other services will add to that cost, but that's still very low!

Next, in the Python function, we can see that multiplying the memory will divide the runtime by roughly the same factor. The cost stays roughly the same as well. That means we can configure the function to use more memory to get the fastest runtime, while still paying the same price. In some further testing, I found that 1024MB is a good middle ground. It's worth experimenting to find the best price point and speed of execution.

If we instead look at the Rust function, we find that the execution time is pretty stable from 256MB onwards. Adding more memory doesn't speed up our function - it is most likely limited by the response time of DynamoDB. The optimal point seems to be 256MB, which gives very stable (and snappy) response times.

Finally, when we compare the two functions, we can see that Rust is much faster to respond (5 ms instead of 279 ms at 256MB) and costs around 20% as much per month. That's a large difference in both execution time and cost, and tells us that it's worth considering a compiled language (Rust, C++, Go, etc.) when building a Lambda function that will be executed many times.

The main point to take away from this comparison is that memory and execution time are the major factors when estimating Lambda cost. If we can minimize these parameters, we will minimize cost of Lambda invocation. The follow-up to that is to consider using a compiled language for frequently-run functions to minimize these parameters.
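We can sanity-check the table by writing the billing model down directly. The rates below are the published ARM prices at the time of writing (around $0.0000133334 per GB-second plus $0.20 per million requests) - treat them as assumptions and check current regional pricing; the free tier is ignored here:

```python
# Lambda billing model sketch - rates are assumptions, check current pricing.
GB_SECOND_RATE = 0.0000133334    # USD per GB-second on ARM
REQUEST_RATE = 0.20 / 1_000_000  # USD per invocation

def monthly_cost(memory_mb, billed_ms, invocations_per_minute=100):
    """Estimate monthly USD cost for a Lambda at a steady invocation rate."""
    invocations = invocations_per_minute * 60 * 24 * 30  # one 30-day month
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * GB_SECOND_RATE + invocations * REQUEST_RATE

print(round(monthly_cost(256, 279), 2))  # Python median at 256MB
print(round(monthly_cost(256, 5), 2))    # Rust median at 256MB
```

Plugging in the 256MB medians gives roughly $4.88 for Python and $0.94 for Rust - within rounding of the table above. Note also that the fixed per-request charge dominates the Rust cost, which is why shrinking an already-fast function saves little.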


Once you move from one robot working alone to multiple robots working together, you're very likely to need some central management system, and the cloud is a great option for this. What's more, you can use serverless technologies like AWS Lambda and Amazon DynamoDB to only pay for the transactions - no upkeep, and no server provisioning. This makes the management process easy: just define your database and the functions to interact with it, and your system is good to go!

AWS Lambda is a great way to define one or more of these functions. It can react to events like API calls or MQTT messages by integrating with other services. By combining IoT, DynamoDB, and Lambda, we can allow robots to send an MQTT message that triggers a Lambda, allowing us to track the current status of robots in our fleet - all deployed using CDK.

Lambda functions are charged by invocation, where the cost for each invocation depends on the memory assigned to the function and the time taken for that function to complete. We can minimize the cost of Lambda by reducing the memory required and the execution time for a function. Because of this, using a compiled language could translate to large savings for functions that run frequently. With that said, the optimal price point might not be the minimum possible memory - the Python function seems to be cheapest when configured with 1024MB.

We could continue to expand this system by adding more possible statuses, defining the fleet for each robot, and adding more functions to manage distributing orders. This is the starting point of our management system. See if you can expand one or both of the Lambda functions to define more possible statuses for the robots!

· 21 min read
Michael Hart

This post is about how beginners can make the most out of every tutorial by digging deep into the code to understand it. This is the best foundation you can give yourself for continuing to work on the code and making your own modifications. It follows on from Getting Started as a Robotics Software Engineer!, where I give the advice:

First, look for and use every resource you have available to you. Look online, ask people, work in the field; anything you can to make your journey easier.

Following on from that, I wanted to show how to take a tutorial and use various resources to understand what's happening in the provided code. I'll take the tutorial from ROS about writing a simple publisher/subscriber, and I'll use C++ to build it, as this is less well-known than Python and so a better way to demonstrate self-learning.

If you'd prefer to follow along, I've built a video demonstrating everything in this article, available here:

To understand the tutorial code, we'll be using the following resources:

  1. Version control (git) - to check the differences between versions and understand what's changed
  2. Explaining the code - try to explain as much of the code as you can, either in comments or out loud to someone or something else.
  3. Setting up your IDE - using your editor to help you move around the code and look up parts of it will help you find out what's going on.
  4. Using online resources - tutorials, blogs, videos, and a number of other resources can help explain parts that you don't understand up to this point.
  5. Debugging - using a debugger to attach to running code and stop it to interrogate variables and step through each line.
  6. Testing - using unit tests to check your understanding or set up simple test cases to focus on a particular area of code.

Using Source Control to your advantage

This isn't strictly speaking about understanding your code - what it does do is help you get back to a known state, and see what's changed since then. Let's follow the tutorial until there's a blank project checked out, then commit our progress using Git.

Installing Git

First, we need to install Git and do some configuration. Follow the install instructions, then set up your username and e-mail address:

# Replace with your username and email!
git config --global "Mike Likes Robots"
git config --global ""

Committing the template project

Follow the tutorial up until Create a package, then execute the pkg create command:

ros2 pkg create --build-type ament_cmake --license Apache-2.0 cpp_pubsub

Once this is done, let's stop and commit our progress:

cd cpp_pubsub
# This initializes a git repository
git init
git add --all
git commit -m "Initial commit"

Checking Git Status

Our project is now in source control. If we check the status now, we should see that there are no changes:

$ cd ~/dev_ws/src/cpp_pubsub
$ git status
On branch main
nothing to commit, working tree clean

Great! There are no file changes since the last commit. Try running this command again after changing a file!

Adding New Files

Let's follow through the next few steps - work through the whole of step 2. From here, we can run git status again and see some changes.

$ git status
On branch main
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: CMakeLists.txt
modified: package.xml

Untracked files:
(use "git add <file>..." to include in what will be committed)
src/

no changes added to commit (use "git add" and/or "git commit -a")

Git tracks files, but won't track empty folders, so adding the new source file tells git that the src folder exists. As the whole file is new, we won't take a look, but we can take a look at the package.xml changes to see what has been updated:

# Tell me the difference between this file and the last committed version
$ git diff package.xml
diff --git a/package.xml b/package.xml
index 909dca9..5504dc9 100644
--- a/package.xml
+++ b/package.xml
@@ -12,6 +12,9 @@

+ <depend>rclcpp</depend>
+ <depend>std_msgs</depend>

The plus signs tell us that lines have been added. If there were any minus signs, they would show lines that had been deleted. So not only can we restore a previous version of a file (or even the whole project), we can also see the exact changes that have happened!

Any time we get something working, we should make sure to commit it so we can compare against it later. Let's do that now:

$ git add --all
# Use a meaningful commit message - not just "made changes"
$ git commit -m "Add talker node"

Now we can carry on with the tutorial!

Try to Explain the Code

At this point, you should read through the only source file and see how much of it you understand. But, not only should you understand it - you should be able to explain it. For each line, see if you can write a comment above the line with exactly what is happening. You could also do "rubber duck debugging" - take a rubber duck (or another toy), and tell the duck what the code is doing in detail. This is to force you to slow down and read each line thoroughly instead of skimming over lines you think you understand.

Let's take an example from the code and explain it together.

// Gives access to time-related functions
#include <chrono>
// Functional?
#include <functional>
// Gives access to smart pointers
#include <memory>
// Gives access to string functions like string length, string concatenation
#include <string>

Work through the file and see how much of it you understand just from reading the tutorial. Even in the block above, it's not very clear what #include <functional> does - let's see if we can use our IDE to explain it.

Using the IDE

IDE stands for Integrated Development Environment, which means a set of tools to help development integrated into one place. There are a few examples with different amounts of setup required, such as:

  1. Visual Studio Code (aka VSCode)
  2. CLion (or other JetBrains IDEs)
  3. NeoVim

These are just a few - there are a great many IDEs out there, and you should experiment to find the one you prefer. The ones I've listed here are some of the most popular in my experience, but to start with I would recommend VSCode with the C/C++ extensions.

Using an IDE will help us to look up functions and variables more easily, highlight issues in the code, and run everything all in one place - including a debugger. In this case, if I open the project in VSCode, I can right click on <functional> and Go To Definition:

Go to definition of functional

This is hugely complicated! We're not getting the information we need from the IDE here. Let's take another example - hover over the using namespace std::chrono_literals line and wait for the explanation:

IDE help for chrono literals

That's much more help! Now we can see that this line helps define time periods, like the 500ms on line 38. We are getting some red underlines in the same image, though - this is the IDE telling us that something is wrong. In this case, the IDE can't find the two underlined headers that the code includes, even though the build system can; it just means that the IDE isn't configured correctly yet. We can fix this by pressing Ctrl+Shift+P (on Windows/Linux) and searching for C/C++ configuration:

C/C++ edit configuration menu option

Then scroll down to the Include path section and add /opt/ros/humble/include/**:

Add include path for IDE

Success! Our red squiggles have disappeared. That's our IDE configured. Try typing some code into functions and see the autocomplete working!

Using Online Resources

The IDE explained chrono literals for us - but what about that functional header? This is where we need to leave the IDE and get some extra information. Here are a few examples of resources available online that you can use to supplement your understanding:

  1. Documentation: in particular for ROS2, the ROS2 docs are extensive and have a lot of guides to get you started.
  2. Tutorials: plenty of articles exist to get you started. This blog post alone uses two tutorials - Writing a simple publisher and subscriber (C++) and Writing Basic Tests with C++ with GTest. With tutorials, try not to just race through the steps until they work - take the time to understand what's going on, and read the explanations.
  3. Open Source Examples: there are plenty of open source projects in general that can help provide examples for how to do something. For example, Husky has a lot of ROS2 code showing how to organize packages, write launch files, write tests, and so on. It can be very difficult to enter someone else's project, so my advice here is to figure out the project structure, then search for the specific thing you're looking for. Understanding a full package with no help takes a long time, even for seasoned developers.
  4. Blogs/Videos/Podcasts: these are great for general knowledge about coding or for searching for a specific thing to learn. I recommend finding blogs, video channels, or podcasts that cover interesting topics with good explanations, then follow those resources for new updates. I like Joel on Software as a general blog, and Articulated Robotics gives great explanations while going through examples.
  5. Forums: sites like Stack Overflow and Reddit can directly answer your questions, or have many previously answered questions that may help with your issue. Don't be afraid to post your own question! Folks on these sites are there to help. One tip if you do post a question is to make a small example of your problem and include it in your post so others can instantly see what the issue is. If you don't provide enough information, or you ask a question with an answer that's easily found through a Google search, you won't get a good reception.
  6. Online courses: sites like Codecademy and LeetCode Explore have courses available to guide you through different languages and concepts, explaining every step along the way. These are great to build a great foundation and understand what you're reading and writing better.
  7. Coding challenges: sites like LeetCode and Project Euler provide sample problems that need you to write code in order to solve them. They're good practice and good fun too! My particular favorite is the Advent of Code, which runs every year during advent; I try to keep up until I inevitably run out of time and give up (normally about 11 days in).
  8. AI: the recent surge of GenAI technologies is an amazing benefit to coders, as these tools can explain tricky concepts. For any part of the code you don't understand, you can usually plug it into ChatGPT and get a thorough explanation. There are also coding companions, like Amazon CodeWhisperer, which suggest lines of code while you're programming so you don't need to look up everything in documentation - which can be a significant speedup!

Find the resources that suit you best, and keep them available when you're trying to understand the code. In this case, we can find the C++ reference guide for the functional header, which explains that our header is likely to give access to the std::bind expression in our code.

See if you can go through every line in the code and explain it thoroughly. Even if you can explain it, it's still worth checking through the next sections on debugging and testing to understand the flow of the code.


Debugging

Broadly speaking, debugging is the process of finding and correcting issues in your code, but there's a special tool called a debugger that is REALLY helpful with this process. A debugger can be used to pause actively running code and let you look inside!

You can use a debugger to stop the code at a particular line, or when a variable has a particular value, or when a line has been "hit" a certain number of times - all from within your IDE. You can also step through, line by line, to see how the variables are changing. Let's try it out in our simple talker.


A common method of debugging is to add print/log statements throughout your code and then run it. This is effective a lot of the time, but doesn't give you the same amount of control as a full debugger - so the debugger is still well worth learning!

Setting up the Debugger

Normally for a project with a standard build system, it's fairly easy to set up debugging. For our system, as it's using colcon build, there's a couple of extra configuration steps to go through.

First, open up the debug menu. Select "create a launch.json file".

Open debug and create launch file

Select the C++ (GDB/LLDB) option. This will open a blank file with no configuration, like this:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit:
    "version": "0.2.0",
    "configurations": []
}

Install GDB

We will be using gdb as a debugger, which means we will need to install it:

sudo apt install gdb

Add Debugger Launch Configuration

With that installed, we can add a new configuration for debugging our talker. Click the Add Configuration button in the bottom right and select C/C++: (gdb) Launch.

Add GDB launch configuration

This will create a template launch file. Now change the program entry to "${workspaceFolder}/install/cpp_pubsub/lib/cpp_pubsub/talker". Once finished, your debug configuration file should look like this:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit:
    "version": "0.2.0",
    "configurations": [
        {
            "name": "(gdb) Launch",
            "type": "cppdbg",
            "request": "launch",
            "program": "${workspaceFolder}/install/cpp_pubsub/lib/cpp_pubsub/talker",
            "args": [],
            "stopAtEntry": false,
            "cwd": "${fileDirname}",
            "environment": [],
            "externalConsole": false,
            "MIMode": "gdb",
            "setupCommands": [
                {
                    "description": "Enable pretty-printing for gdb",
                    "text": "-enable-pretty-printing",
                    "ignoreFailures": true
                },
                {
                    "description": "Set Disassembly Flavor to Intel",
                    "text": "-gdb-set disassembly-flavor intel",
                    "ignoreFailures": true
                }
            ]
        }
    ]
}

That's the debug configuration done - time to build and debug.

Build with Debug Information

One more thing to bear in mind is that we have to build with debug information enabled - this is how the debugger can tell which line corresponds to which part of the running code. We can do this by using a different build command - instead of colcon build, we will instead use:

colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo

Now we're ready to launch!

Pausing on a Breakpoint

Let's tell our debugger to pause inside the timer callback function by adding a breakpoint. That's the red dot to the left of the line number. Open the publish_member_function.cpp file and click to the left of the line number to add the breakpoint - it will look like this:

Add timer callback breakpoint

Now launch the debugger with the green triangle in the debug menu:

Launch debugger task

The program should launch and hit your breakpoint. There's a lot of information here, so let's keep it simple: stepping through the code and watching the message variable change. Take a look at the debugger window:

Debugger stopped on breakpoint

In the top left, numbered 1, we can see the controls for stepping through the code. Number 2 shows the current value of the message object, which is empty at this point. Number 3 shows the point in the program where the debugger has stopped. Bonus: in the bottom left, there's a watch panel, where you can add specific variables to watch - I've added count_ for easier viewing.

The controls for stepping through the code allow us to:

  1. Resume - continue execution. Keeps running until another breakpoint is hit.
  2. Step over - step to the next line.
  3. Step into - if there is a function call, step into the function.
  4. Step out - run to the end of the current function and step back out.
  5. Stop - quit the program entirely.

We can also edit the breakpoint to have more control over when it stops. Right click the breakpoint and click Edit Breakpoint to see the options available - I'll leave these out for brevity.

Now we can click the step over button a few times to see the variable update with the string that we're going to publish. We can also resume a few times to see the count_ variable increment every time.

From this, we can stop the code, check what variables are doing, and look at the flow of the code. Any time we don't understand how code links together, or we're not sure what kind of data is passed around, debugging can tell us exactly!

However, if we're looking at code that requires a callback to do anything, it can be less convenient to run multiple different programs at the same time. For example, the other half of this tutorial is to create a listener node that listens for messages from the talker node - what happens if we only run the listener node? Nothing, because it doesn't have any messages to listen to. One way we can debug the subscriber code with the same data every single time is to use testing.

Testing [Advanced]

Testing is an incredibly useful tool in software development. It doesn't just mean verifying the code works correctly at the time of writing - the tests can be run automatically for any future update to ensure that the code doesn't break in unexpected places. In this case, we're looking at understanding the code, so you're not likely to be writing your own complete test suite - instead, we'll set up one test that will let us check how the listener node works without having to run the talker node at all!


This is a more advanced way of diving deeper into a file. It's very useful once you can get it working, but can be difficult to set up.

Adding Tests to the Listener

First, to get this working, we need to download the listener file. Follow the tutorial to the end of step 4 - and remember to commit your code once you're sure it works!

With the new node downloaded and working, we're next going to hack our subscriber source file a bit by changing it to run GTest instead of the usual main function when a particular symbol is defined. That basically means we can compile the file twice and get two different outcomes depending on how we set up the compiler. Replace the subscriber source file with the following:

#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

using std::placeholders::_1;

class MinimalSubscriber : public rclcpp::Node
{
public:
  MinimalSubscriber()
  : Node("minimal_subscriber")
  {
    subscription_ = this->create_subscription<std_msgs::msg::String>(
      "topic", 10, std::bind(&MinimalSubscriber::topic_callback, this, _1));
  }

private:
  void topic_callback(const std_msgs::msg::String & msg) const
  {
    RCLCPP_INFO(this->get_logger(), "I heard: '%s'", msg.data.c_str());
  }
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr subscription_;
};

// TESTING_BUILD is the symbol we define in CMakeLists.txt for the test
// target - the name is our own choice; any unique symbol works
#ifndef TESTING_BUILD

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MinimalSubscriber>());
  rclcpp::shutdown();
  return 0;
}

#else

#include <gtest/gtest.h>

// Initializes rclcpp on construction and shuts it down on destruction
class RclCppRunner {
public:
  RclCppRunner(int* argc, char** argv) {
    rclcpp::init(*argc, argv);
  }
  ~RclCppRunner() {
    rclcpp::shutdown();
  }
};

// A minimal publisher we control directly, standing in for the talker node
class TestPublisher : public rclcpp::Node {
public:
  TestPublisher() : Node("test_publisher") {
    publisher_ = this->create_publisher<std_msgs::msg::String>("topic", 10);
  }
  void publish(const std::string contents) {
    auto message = std_msgs::msg::String();
    message.data = contents;
    publisher_->publish(message);
  }
private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr publisher_;
};

TEST(package_name, debug_listener)
{
  auto listener = std::make_shared<MinimalSubscriber>();
  TestPublisher talker;
  talker.publish("Test Message 1");
  // Let the listener process the message we just published
  rclcpp::spin_some(listener);
}

int main(int argc, char** argv)
{
  testing::InitGoogleTest(&argc, argv);
  RclCppRunner runner(&argc, argv);

  return RUN_ALL_TESTS();
}

#endif

A useful resource for learning how the testing works here is from the ROS2 documentation on writing tests. We're not following this because the test is only temporary - we're going to remove it once we understand the code.

Compiling our Test Function

We also need to set up the CMakeLists.txt to compile the file differently so we can run our test. To do that, add the following to the end of the CMakeLists.txt file:

find_package(ament_cmake_gtest REQUIRED)
ament_add_gtest(${PROJECT_NAME}_tutorial_test src/subscriber_member_function.cpp)
target_include_directories(${PROJECT_NAME}_tutorial_test PUBLIC src)
# Define the switch symbol so the test version of the file is compiled
# (TESTING_BUILD is our own choice of name)
target_compile_definitions(${PROJECT_NAME}_tutorial_test PUBLIC TESTING_BUILD)
ament_target_dependencies(${PROJECT_NAME}_tutorial_test rclcpp std_msgs)

Now we can run the build command again to build our new test file:

colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo

Running the Test

From here, you can run the tests either using:

colcon test
colcon test-result --all

Or my preferred method for seeing the GTest output:

$ cd ~/dev_ws
$ ./build/cpp_pubsub/cpp_pubsub_tutorial_test
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from package_name
[ RUN ] package_name.debug_listener
[INFO] [1707940864.492059362] [minimal_subscriber]: I heard: 'Test Message 1'
[ OK ] package_name.debug_listener (31 ms)
[----------] 1 test from package_name (31 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (31 ms total)
[ PASSED ] 1 test.

At this point, our subscriber printed a message saying it heard the Test Message. Great! We can even change the test message to whatever we want by changing the test:

TEST(package_name, debug_listener)
{
  auto listener = std::make_shared<MinimalSubscriber>();
  TestPublisher talker;
  // Change the string in the following line
  talker.publish("Test Message 1");
}

Rebuild, and the new message will be used instead.

Launch the Test with a Debugger

Finally, we can set up the debugger to run with our new test. Go back to the debug tab and add a new configuration. This time, the program name will be ${workspaceFolder}/build/cpp_pubsub/cpp_pubsub_tutorial_test, and we will change the name to be something more meaningful as well. The end result is something like:

{
    "name": "Launch tutorial test",
    "type": "cppdbg",
    "request": "launch",
    "program": "${workspaceFolder}/build/cpp_pubsub/cpp_pubsub_tutorial_test",
    "args": [],
    "stopAtEntry": false,
    "cwd": "${fileDirname}",
    "environment": [],
    "externalConsole": false,
    "MIMode": "gdb",
    "setupCommands": [
        {
            "description": "Enable pretty-printing for gdb",
            "text": "-enable-pretty-printing",
            "ignoreFailures": true
        },
        {
            "description": "Set Disassembly Flavor to Intel",
            "text": "-gdb-set disassembly-flavor intel",
            "ignoreFailures": true
        }
    ]
}

Now set a breakpoint on line 20 inside the topic_callback function and launch the debugger. It should break on the line and let you check the contents of the message!

Break on log line in subscriber

Now we can write whatever test data we want, or write multiple tests with different data or testing different code paths.

This seems complicated!

I don't disagree. Setting up debugging for compiled languages already gets a bit tricky, especially when the language strictly enforces whether something is private or public - this isn't an issue we would face in Python! Python is much easier to debug, which is another reason it's a great choice for beginners.

With that said, once the initial setup is done, it becomes much easier to extend and get some really valuable information. It's certainly worth learning.


Summary

In short, my advice is to try to thoroughly understand the code in a tutorial, rather than doing the minimum work to get it working. This will help you in the long run, even if it takes you more time now.

Use git to help you track changes and revert to working versions as you experiment. Thoroughly explain the code as you work through it, preferably out loud to a rubber duck. Use your IDE, debugger, and testing setup to help read and run the code so you can see how it works. Last of all, and perhaps most obviously, use all of the resources you can from online to learn more about the code and the technology around it - take a look at my shortlist to figure out the best option for your use case.

Good luck with your future tutorials!

· 12 min read
Michael Hart

This post is all about my advice on getting started as a Robotics Software Engineer. I want to tell you a little of my journey to get to this point, then what you should do to practise and give yourself the best start possible.

If you prefer a video format, check out my YouTube video below:

Who is this post for?

This post is aimed at absolute beginners. If you've seen Boston Dynamics robots running through assault courses or automated drone deliveries and thought, "this is the kind of stuff that I want to work on my whole career" - this post is for you.

Robotics contains a lot of engineering disciplines. If you're interested in the intelligence behind a robot - the way it messages, figures out where it is and where to go, and how it makes decisions - this post is for you.

With that said, if you're interested in building your own arm, 3D printed parts, or printed circuit boards, this post isn't likely to be much help to you. Feel free to read through anyway!

Atlas jumping with a package

Atlas Gets a Grip | Boston Dynamics by Boston Dynamics

Who am I?

I'm a Software Development Engineer focused on robotics. I have a master's degree in Electrical & Electronic Engineering from Imperial College London in the UK. I have 11 years experience in software engineering, with 7 years of that specialising in robotics.

Throughout my career, I've worked on:

  1. A robot arm to cook steak and fries
  2. A maze-exploring rover
  3. A robot arm to tidy your room by picking up pens, toys, and other loose items
  4. Amazon Scout, a delivery rover

That's to name just a few! Suffice to say, I've worked on a lot of different projects, but I'm not a deep expert in any particular field. What I specialise in now is connecting robots to the cloud and getting value from it. That's why I'm working for Amazon Web Services as a Senior Software Development Engineer specialised in robotics. With that history, I'm a good person for getting you started in the world of robotics - starting with the hardware you need.

What hardware do you need?

Thankfully, not a lot! All you really need to get going is a computer. It doesn't need to be especially powerful while you're learning. If you want to be able to run simulations, a GPU will help, but you don't need one.

For the operating system (OS), you'll have a slightly easier time if it's running Linux or Mac OS, but Windows is very close to being as good because of great steps in recent years building the Windows Subsystem for Linux (WSL2). It's basically a way of running Linux inside your Windows computer. Overall, any OS will work just fine, so don't sweat this part.

That's all you need to get started, because the first step to becoming a software engineer for robotics is the software engineer part. You need to learn how to program.

How should you learn to program?

Pick a Language

First, pick a starting language. I do mean a starting language - a good way to think of programming languages is as tools in a toolbox. Each tool can perform multiple jobs, but there's usually a tool that's better suited for completing a given task. Try to fill out your toolbox instead of learning how to use one tool for every job.

Before you can fill out your toolbox, you need to start with one tool. You want to use your tools for robotics at some point, which helps narrow options down to Python and C++ - those are the most common options in robotics. I would recommend Python, as it's easier to understand in the beginning, and learning to program is hard enough without the difficult concepts that come with C++. There are tons of tutorials online for starting in Python. I would recommend trying a Codecademy course, which you can work through to get the beginning concepts.

Once you grasp the concepts, it's time to practise. The important part to understand here is that programming is a different way of thinking - you need to train your brain. You will have to get used to the new concepts and fully understand them before you can use them without thinking about them.

To practise, you could try Leetcode or Project Euler, but those are problem-solving puzzles that teach computer science concepts, and they aren't the best tool for learning a language in my eyes. I believe that the best way to learn is to come up with a project idea and build it. It doesn't have to be in robotics. You could build a text-based RPG, like:

$ You are in a forest, what do you do?
1. Go forward
2. Look around

Or, you could build a text-based pokemon battle simulator, where you pick a move that damages the other pokemon. The important part is that the project is something you're interested in - that's what will motivate you to keep building it and practising.


Try writing some post-it notes of features you want to add to your project and stick them on the wall. Try to complete one post-it note before starting another.

Spend time building your project. Look online for different concepts and how they can fit your project. Use sites like Stack Overflow to help if you get stuck. If you can, find someone with more experience who's able to help guide you through the project. You can ask people you know or look online for help, like joining a discord community. There will be times when you're absolutely stumped as to why your project isn't working, and while you could figure it out yourself eventually, a mentor would not only help you past those issues more quickly but also teach you more about the underlying concept.

In essence, that's all you have to do. Get a computer, pick a language, and build a project in it. Look at tutorials and forums online when you get stuck, and find a mentor if you can. If you do this for a while, you'll train your brain to use programming concepts as naturally as thinking.

Other Tools

On top of Python, a couple of tools that you need to learn:

  1. Version control
  2. Linux Terminal

Version Control

Version control is software that allows you to store different versions of files and easily switch between them. Every change to the file can be "committed" as a new version, allowing you to see the differences between every version of your code. There's a lot more it can do, and it is absolutely invaluable to software developers.

Git is by far the most commonly used version control software. Check out a course on how to use it, then get into the habit of committing code versions whenever you get something working - you'll thank yourself when you break your code and can easily reset it to a working version with the tool.

Linux Terminal

The Linux Terminal is a way of typing commands for the computer to execute. You will be using the Terminal extensively when developing software. You don't need a course to learn it, but when you do have to write commands in the terminal, try not to just copy and paste it without looking at the command - read through it and figure out what it's doing. Pretty soon, you'll be writing your own commands.


Use Ctrl+R to search back through history for a command you've already executed and easily run it again.

Work with Others

You don't just have to learn alone - it is hugely helpful to learn from other people. You could look into making a project with friends or an online community, or better yet, join a robotics competition as part of a team.

If you practise enough with what I've told you here, you'll also be eligible for internships. Look around and see what's available. Try to go for companies with experienced software engineers to learn from, and if you're successful in your application, learn as much as you can from them. Software development as a job is very different from doing it at home, and you will learn an incredible amount from the people around you and the processes that the company uses.

What about robots?

So far, we've not talked about robots very much. Getting a solid grounding in programming is very important before going on to the next step. But, once you have the basics, this part is how to get going with programming robots.

Robot Operating System (ROS)

The best starting point is ROS. This is the most popular robotics framework, although far from the only one. It is free, will run on any system, and has a ton of tools available that you can learn from. Also, because it's the most popular, there's a lot of help available when you're struggling. Follow the documentation, install it on your system, and get it passing messages around. Then understand the publish and subscribe system it uses for messages - this is crucial for robotics in general. The resources on this blog can help, or you can think of a project you want to build in ROS. Start small, like getting a robot moving around with an Xbox controller: you'll learn pretty quickly during this process that developing robots is really difficult, so set achievable goals for yourself!


Simulation

If you want to work with robots in simulation, that's great! You can get going with just your computer. You need to understand that it's incredibly difficult to make robots behave the same in simulation as they do in real life, so don't expect it to transfer easily. However, it is better for developing robotics software quickly - it's faster to run and quicker to reset, so it's easier to work with. Because of that, it's a valuable skill to have.

If you're looking for somewhere to start, there are a few options. Gazebo is well-known as a ROS simulation tool, but there are also third party simulation software applications that still support ROS. I would start by looking at either NVIDIA Isaac SIM or O3DE - both are user-friendly applications that would be a great starting point.

NVIDIA Isaac Sim Example

NVIDIA - Narrowing the Sim2Real Gap with NVIDIA Isaac Sim


Embedded Development

As far as embedded development goes, I consider this optional - but helpful. It's good for understanding how computers work, and you may need it if you want to get closer to the electronics. However, I don't think you need it; most programming is on development kits, like Raspberry Pi and Jetson Nano boards. These are running full Linux operating systems, so you don't need to know embedded to use them. If you do want to learn embedded, consider buying a development kit - I would recommend a NUCLEO board (example here) - and work with it to understand how UART, I2C, and other serial communications work, plus operating its LEDs. If you want more advice, let me know.

NUCLEO Product Page

NUCLEO-F302R8 Product Page on Amazon

Real Robot Hardware

How about real robots? This is a bit of an issue - a lot of the cheaper robots you see don't have good computers running on them. You want to find something with at least a Raspberry Pi or Jetson Nano making it work, and that's getting into the hundreds of dollars. It's possible to go cheaper, like with an $80 kit and a $20 board bought separately - but I wouldn't recommend that for a beginner; it's a lot harder to get working. If you are interested in a kit that you can add your own board to, take a look at the Elegoo Robot Kit on Amazon.

My recommended option here would be the JetBot that I've already been making blogs and videos about. It should cost just under $300, and comes with everything you need to start making robotics applications. There will also be a lot of resources and videos on it to get it going.

JetBot Product Information

WaveShare JetBot Product Information

If your budget is a bit higher and you want something more advanced, you could take a look at a TurtleBot, like the TurtleBot 3 Burger. That will cost nearer $700, but it also comes with a lidar, which is great for mapping its environment.

TurtleBot Product Information

Turtlebot 3 Burger RPi4

If your budget is lower than a JetBot, I would recommend either working in simulation or trying to build your own robot. Building your own will be a lot tougher and take a lot longer, but you should learn quite a bit from it too.

Some Final Advice

Before finishing up this post, I wanted to give some more general advice.

First, look for and use every resource you have available to you. Look online, ask people, work in the field; anything you can to make your journey easier.

Second, you should get a mentor. This is related to the first point, but it's so important. Find someone you can respect and learn as much as you can from them. This is really the secret to learning a lot - use others' experience to jump ahead instead of learning it yourself the slow, hard way. Finding the right mentor can be a challenge, and you may need to go through a few people before you get to the most helpful person, so be prepared!

I'm sure there's a lot more advice I could give you, but this is my best advice for beginners. At this stage, you need to learn how to learn - finding resources and taking advantage of them. It is the best possible foundation you can give yourself for the rest of your career.

· 15 min read
Michael Hart

Welcome to a new series - setting up the JetBot to work with ROS2 Control interfaces! Previously, I showed how to set up the JetBot to work from ROS commands, but that was a very basic motor control method. It didn't need to be advanced because a human was remote controlling it. However, if we want autonomous control, we need to be able to travel a specific distance or follow a defined path, like a spline. A better way of moving a robot using ROS is by using the ROS Control interfaces; if done right, this means your robot can autonomously follow a path sent by the ROS navigation stack. That's our goal for this series: move the JetBot using RViz!

The first step towards this goal is giving ourselves the ability to control the motors using C++, because writing controllers for ROS Control requires extending C++ classes. Unfortunately, the existing drivers are in Python, meaning we'll need to rewrite them in C++ - which is a good opportunity to learn how the serial control works. We use I2C to talk to the motor controller chip, an Adafruit DC Motor + Stepper FeatherWing, which sets the PWM duty cycle that makes the motors move. I'll refer to this chip as the FeatherWing for the rest of this article.

First, we'll look at how I2C works in general. We don't strictly need this knowledge, but it helps us understand the serial communication, and therefore the function calls in the code.

Once we've seen how I2C works, we'll look at the commands sent to set up and control the motors. This will help us understand how to translate the ROS commands into something our motors will understand.

The stage after this will be in another article in this series, so stay tuned!

This post is also available in video form - check the video link below if you want to follow along!

Inter-Integrated Circuit (IIC/I2C)

I2C can get complicated! If you want to really dive deep into the timings and circuitry needed to make it work, this article has great diagrams and explanations. The image I use here is from the same site.


I2C is a serial protocol, meaning that it sends bits one at a time. It uses two wires called SDA and SCLK; together, these form the I2C bus. Multiple devices can be attached to these lines and take it in turns to send data. We can see the bus in the image below:

I2C Bus with SCLK and SDA lines

Data is sent on the SDA line, and a clock signal is sent on the SCLK line. The clock helps the devices know when to send the next bit. This is a very helpful part of I2C - the speed doesn't need to be known beforehand! Compare this with UART communication, which has two lines between every pair of devices: one to send data from A to B, and one to send data from B to A. Both devices must know in advance how fast to send their data so the other side can understand it. If they don't agree on timing, or even if one side's timing is off, the communication fails. By using a line for the clock in I2C, all devices are given the timing to send data - no prior knowledge required!

The downside of this is that there's only one line to send data on: SDA. The devices must take it in turns to send data. I2C solves this by designating a master device and one or more slave devices on the bus. The master device is responsible for sending the clock signal and telling the slave devices when to send data. In our case, the master device is the Jetson Nano, and the slave device is the FeatherWing. We could add extra FeatherWing boards to the bus, each with extra motors, and I2C would allow the Jetson to communicate with all of them - but this brings a new problem: how would each device know when it is the one meant to respond to a request?


The answer is simple. Each slave device on the bus has a unique address. In our case, the FeatherWing has a default address of 0x60, which is hex notation for the number 96. In fact, if we look at the Python version of the JetBot motor code, we can see the following:

if 96 in addresses:

Aha! So when we check what devices are available on the bus, we see device 96 - the FeatherWing.

When the Jetson wants to talk to a specific device, it starts by selecting its address. It sends the address it wants on the SDA line before making a request, and each device on the bus compares that address with the address it is expecting. If the addresses don't match, the device ignores the request. For example, if the FeatherWing has an address of 0x61, and the Jetson sends the address 0x60, the FeatherWing should ignore that request - it's meant for a different device.
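To make this concrete: in standard I2C framing, the 7-bit address is combined with a read/write bit to form the first byte sent on the bus after the start condition. This is part of the I2C protocol itself rather than anything specific to the JetBot code - a quick sketch, in Python for illustration:

```python
def address_byte(address: int, read: bool) -> int:
    """Build the first byte of an I2C transaction: 7-bit address, then R/W bit."""
    if not 0 <= address <= 0x7F:
        raise ValueError("I2C addresses are 7 bits")
    return (address << 1) | (1 if read else 0)

# Writing to the FeatherWing at its default address 0x60:
print(hex(address_byte(0x60, read=False)))  # 0xc0
# Reading from the same device:
print(hex(address_byte(0x60, read=True)))   # 0xc1
```

Every device on the bus sees this byte and compares the top 7 bits with its own address; only the matching device responds.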

But, how do we assign an address to each device?

The answer comes by looking at the documentation for the FeatherWing:

FeatherWing I2C Addressing

By soldering different pins on the board together, we can tell the board a new address to take when it starts up. That way, we can have multiple FeatherWing boards, each with a different address, all controllable from the Jetson. Cool!

Pulse Width Modulation (PWM)

With that, we have a basic understanding of how the Jetson controls I2C devices connected to it, including our FeatherWing board. Next we want to understand how the FeatherWing controls the motors, so we can program the Jetson to issue corresponding commands to the FeatherWing.

The first step of this is PWM itself - how does the board run a motor at a particular speed? The following step is the I2C commands needed to make the FeatherWing do that. I'll start with the speed.

Motor Wires

Each JetBot motor is a DC motor with two wires. By making one wire a high voltage with the other wire at 0V, the motor will run at full speed in one direction; if we flip which wire is the high voltage, the motor will turn in the opposite direction. We will say that the wire that makes the motor move forwards is the positive terminal, and the backwards wire is the negative terminal.

We can see the positive (red) wire and the negative (black) wire from the product information page:

DC Motor with red and black wires

That means we know how to move the motor at full speed:

  1. Forwards - red wire has voltage, black wire is 0V
  2. Backwards - black wire has voltage, red wire is 0V

There are another couple of modes that we should know about:

  1. Motor off - both wires are 0V
  2. Motor brakes - both wires have voltage

That gives us full speed forwards, full speed backwards, brake, and off. But how do we move the motor at a particular speed - say, half speed forwards?

Controlling Motor Speed

The answer is PWM. Essentially, instead of holding the wire at a constant high voltage, we switch the voltage on and off. For half speed forwards, the wire is on for 50% of the time and off for 50% of the time. By switching really fast, we effectively make the motor move at half speed: the motor can't start and stop quickly enough to follow the switching, so it responds to the average voltage - half the full voltage!

That, in essence, is PWM: switch the voltage on the wire very fast from high to low and back again. The proportion of time spent high determines how much of the time the motor is on.

We can formalize this a bit more with some terminology. The frequency is how quickly the signal cycles from high to low and back again. The duty cycle is the proportion of time the wire spends high. We can see this in the following diagram from Cadence:

PWM signal with mean voltage, duty cycle, and frequency

We can use this to set a slower motor speed. We choose a high enough frequency - in our case, 1.6 kHz - meaning the PWM signal goes through a high-low cycle 1600 times per second. Then, if we want to go forwards at 25% speed, we can set the duty cycle of the positive wire to our desired speed: 25% speed means 25% duty cycle. We can go backwards at 60% speed by setting a 60% duty cycle on the negative wire.
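The arithmetic here is simple enough to sketch. Assuming, as in the driver code later in this article, that duty cycles are expressed as 16-bit values (0 to 0xFFFF), a hypothetical helper converting a speed percentage to a duty cycle value might look like this (Python, for illustration):

```python
FULL_DUTY = 0xFFFF  # 16-bit duty cycle range used by the driver

def speed_to_duty_cycle(percent: float) -> int:
    """Convert a speed percentage (0-100) to a 16-bit duty cycle value."""
    if not 0 <= percent <= 100:
        raise ValueError("speed must be between 0 and 100 percent")
    return round(percent / 100 * FULL_DUTY)

print(speed_to_duty_cycle(100))  # 65535 - full speed
print(speed_to_duty_cycle(25))   # 16384 - quarter speed
```

Applied to the positive wire this moves the motor forwards at that speed; applied to the negative wire, backwards.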

Producing this signal sounds very manual, which is why the FeatherWing comes with a dedicated PWM chip. We can use I2C commands from the Jetson to set the PWM frequency and duty cycle for a motor, and it handles generating the signal for us, driving the motor. Excellent!

Controlling Motors through the FeatherWing

Now that we know how to move a particular motor at a particular speed, forwards or backwards, we need to understand how to command the FeatherWing to do so. I struggled with this part! I couldn't find this information on the product page, which is where I would ordinarily look when setting up an embedded system like this. That's because Adafruit provides libraries that let you use the FeatherWing without needing any of this I2C or PWM knowledge.

Thankfully, the Adafruit MotorKit library and its dependencies had all of the code I needed to write a basic driver in C++ - thank you, Adafruit! The following is the list of links I used as a reference for controlling the FeatherWing:

  1. Adafruit_CircuitPython_MotorKit
  2. Adafruit_CircuitPython_PCA9685
  3. Adafruit_CircuitPython_Motor
  4. Adafruit_CircuitPython_BusDevice
  5. Adafruit_CircuitPython_Register

Thanks to those links, I was able to put together a basic C++ driver, available here on GitHub.

Git Tag

Note that this repository will receive updates in the future as the ROS Control work on the JetBot continues. To use the code quoted in this article, make sure you use the git tag jetbot-motors-pt1.

FeatherWing Initial Setup

To get the PWM chip running, we first do some chip setup - resetting, and setting the clock value. This is done in the I2CDevice constructor by opening the I2C bus with the Linux I2C driver; selecting the FeatherWing device by address; setting its mode to reset it; and setting its clock using a few reads and writes of the mode and prescaler registers. Once this is done, the chip is ready to move the motors.

I2CDevice::I2CDevice() {
  // Open the I2C bus (open returns -1 on failure)
  i2c_fd_ = open("/dev/i2c-1", O_RDWR);
  if (i2c_fd_ < 0) {
    std::__throw_runtime_error("Failed to open I2C interface!");
  }

  // Select the PWM device
  if (!trySelectDevice())
    std::__throw_runtime_error("Failed to select PWM device!");

  // Reset the PWM device
  if (!tryReset()) std::__throw_runtime_error("Failed to reset PWM device!");

  // Set the PWM device clock
  if (!trySetClock()) std::__throw_runtime_error("Failed to set PWM clock!");
}

Let's break this down into its separate steps. First, we use the Linux driver to open the I2C device.

// Headers needed for Linux I2C driver
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <linux/i2c.h>
#include <linux/types.h>
#include <sys/ioctl.h>
#include <unistd.h>

// ...

// Open the I2C bus
i2c_fd_ = open("/dev/i2c-1", O_RDWR);
// Check that the open was successful (open returns -1 on failure)
if (i2c_fd_ < 0) {
  std::__throw_runtime_error("Failed to open I2C interface!");
}

We can now use this i2c_fd_ with Linux system calls to select the device address and read/write data. To select the device by address:

bool I2CDevice::trySelectDevice() {
  return ioctl(i2c_fd_, I2C_SLAVE, kDefaultDeviceAddress) >= 0;
}
This uses the ioctl function to select a slave device by the default device address 0x60. It checks the return code to see if it was successful. Assuming it was, we can proceed to reset:

bool I2CDevice::tryReset() { return tryWriteReg(kMode1Reg, 0x00); }

The reset is done by writing a 0 into the Mode1 register of the device. Finally, we can set the clock. I'll omit the code for brevity, but you can take a look at the source code for yourself. It involves setting the mode register to accept a new clock value, then setting the clock, then setting the mode to accept the new value and waiting 5ms. After this, the PWM should run at 1.6 kHz.
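To give an idea of what the clock setup computes: the FeatherWing's PWM is generated by a PCA9685 chip (as the Adafruit_CircuitPython_PCA9685 reference above suggests), which derives its output frequency from a 25 MHz internal oscillator and a 12-bit counter. Per the PCA9685 datasheet, the prescaler is round(25 MHz / (4096 × frequency)) - 1; a quick Python check shows the value written for our 1.6 kHz target:

```python
OSC_CLOCK_HZ = 25_000_000  # PCA9685 internal oscillator
COUNTER_SIZE = 4096        # 12-bit PWM counter

def pwm_prescaler(freq_hz: float) -> int:
    """Prescaler value per the PCA9685 datasheet formula."""
    return round(OSC_CLOCK_HZ / (COUNTER_SIZE * freq_hz)) - 1

print(pwm_prescaler(1600))  # 3
```

Note that the achievable frequency is quantized by the integer prescaler, so the real output for a 1.6 kHz target is roughly 1.5 kHz - close enough for driving motors.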

Once the setup is complete, the I2CDevice exposes two methods: one to enable the motors, and one to set a duty cycle. The motor enable sets a particular pin on, so I'll skip that function. The duty cycle setter has more logic:

buf_[0] = kPwmReg + 4 * pin;

if (duty_cycle == 0xFFFF) {
  // Special case - fully on
  buf_[1] = 0x00;
  buf_[2] = 0x10;
  buf_[3] = 0x00;
  buf_[4] = 0x00;
} else if (duty_cycle < 0x0010) {
  // Special case - fully off
  buf_[1] = 0x00;
  buf_[2] = 0x00;
  buf_[3] = 0x00;
  buf_[4] = 0x10;
} else {
  // Shift by 4 to fit 12-bit register
  uint16_t value = duty_cycle >> 4;
  buf_[1] = 0x00;
  buf_[2] = 0x00;
  buf_[3] = value & 0xFF;
  buf_[4] = (value >> 8) & 0xFF;
}
Here we can see that the function checks the requested duty cycle. If at the maximum, it sets the motor PWM signal at its maximum - 0x1000. If below a minimum, it sets the duty cycle to its minimum. Anywhere in between will shift the value by 4 bits to match the size of the register in the PWM chip, then transmit that. Between the three if blocks, the I2CDevice has the ability to set any duty cycle for a particular pin. It's then up to the Motor class to decide which pins should be set, and the duty cycle to set them to.
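To see the register math at work, here's the same three-way logic re-expressed as a small Python sketch. The register base address is illustrative - the real value of kPwmReg lives in the driver source:

```python
PWM_REG_BASE = 0x06  # illustrative stand-in for the driver's kPwmReg

def pack_duty_cycle(pin: int, duty_cycle: int) -> list[int]:
    """Mirror the driver's buffer packing for a 16-bit duty cycle request."""
    buf = [PWM_REG_BASE + 4 * pin, 0x00, 0x00, 0x00, 0x00]
    if duty_cycle == 0xFFFF:
        buf[2] = 0x10            # special case - fully on
    elif duty_cycle < 0x0010:
        buf[4] = 0x10            # special case - fully off
    else:
        value = duty_cycle >> 4  # shift by 4 to fit the 12-bit register
        buf[3] = value & 0xFF
        buf[4] = (value >> 8) & 0xFF
    return buf

print(pack_duty_cycle(9, 0xFFFF))  # [42, 0, 16, 0, 0] - fully on
print(pack_duty_cycle(9, 0x8000))  # [42, 0, 0, 0, 8] - 50% duty cycle
```

The first byte selects the four PWM registers for the given pin; the remaining four bytes are the on/off counts written over I2C.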

Initialize Motors

Following the setup of I2CDevice, each Motor gets a reference to the I2CDevice to allow it to talk to the PWM chip, as well as a set of pins that correspond to the motor. The pins are as follows:

| Motor   | Enable Pin | Positive Pin | Negative Pin |
|---------|------------|--------------|--------------|
| Motor 1 | 8          | 9            | 10           |
| Motor 2 | 13         | 11           | 12           |

The I2CDevice and Motors are constructed in the JetBot control node. Note that the pins are passed inline:

device_ptr_ = std::make_shared<JetBotControl::I2CDevice>();
motor_1_ = JetBotControl::Motor(device_ptr_, std::make_tuple(8, 9, 10), 1);
motor_2_ =
JetBotControl::Motor(device_ptr_, std::make_tuple(13, 11, 12), 2);

Each motor can then request that the chip enable it via the enable pin, which again is done in the constructor:

Motor::Motor(I2CDevicePtr i2c, MotorPins pins, uint32_t motor_number)
    : i2c_{i2c}, pins_{pins}, motor_number_{motor_number} {
  u8 enable_pin = std::get<0>(pins_);
  if (!i2c_->tryEnableMotor(enable_pin)) {
    std::string error =
        "Failed to enable motor " + std::to_string(motor_number) + "!";
    std::__throw_runtime_error(error.c_str());
  }
}

Once each motor has enabled itself, it is ready to send the command to spin forwards or backwards, brake, or turn off. This example only allows the motor to set itself to spinning or not spinning; the command is sent by the control node once more.


To turn on, the positive pin is set to fully on, or 0xFFFF, while the negative pin is set to off:

if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
  return false;
}
if (!i2c_->trySetDutyCycle(neg_pin, 0)) {
  return false;
}

To turn off, both positive and negative pins are set to fully on, or 0xFFFF - which, per the modes listed earlier, brakes the motor to bring it to a stop:

if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
  return false;
}
if (!i2c_->trySetDutyCycle(neg_pin, 0xFFFF)) {
  return false;
}

Finally, when the node is stopped, the motors make sure they stop spinning. This is done in the destructor of the Motor class:

Motor::~Motor() {

While the Motor class currently only turns the motors fully on or fully off, the underlying code can set a duty cycle anywhere between 0 and 0xFFFF - which means we can set any speed in whichever direction we want the motors to spin!

Trying it out

If you want to give it a try, and you have a JetBot to do it with, you can follow my setup guide in this video:

Once this is done, follow the instructions in the README to get set up. This means cloning the code onto the JetBot, opening the folder inside the dev container, and then running:

source /opt/ros/humble/setup.bash
colcon build
source install/setup.bash
ros2 run jetbot_control jetbot_control

This will set the motors spinning for half a second, then off for half a second.


If you followed along to this point, you have successfully moved your JetBot's motors using a driver written in C++!

· 15 min read
Michael Hart

In this post, I'll show how to use two major concepts together:

  1. Docker images that can be privately hosted in Amazon Elastic Container Registry (ECR); and
  2. AWS IoT Greengrass components containing Docker Compose files.

These Docker Compose files can be used to run public Docker components, or pull private images from ECR. This means that you can deploy your own system of microservices to any platform compatible with AWS Greengrass.

This post is also available in video form - check the video link below if you want to follow along!

What is Docker?

Docker is very widely known in the DevOps world, but if you haven't heard of it, it's a way of taking a software application and bundling it up with all its dependencies so it can be easily moved around and run as an application. For example, if you have a Python application, you have two options:

  1. Ask the user to install Python, the pip dependencies needed to run the application, and give instructions on downloading and running the application; or
  2. Ask the user to install Docker, then provide a single command to download and run your application.

Either option is viable, but it's clear to see the advantages of the Docker method.

Docker Terminology

An image is a saved bundle of an application and its dependencies, and a container is a running instance created from an image. You can have multiple containers based on the same image. Think of it like a movie saved on a hard disk: the file on disk is the "image", and a playback of that movie is a "container".

Docker Compose

On top of bundling software into images, Docker has a plugin called Docker Compose, which is a way of defining a set of containers that run together. With the right configuration, the containers can talk to each other or to the host computer. For instance, you might want to run a web server, API server, and database at the same time; with Docker Compose, you can define these services in one file and give them permission to talk to each other.

Building Docker Compose into Greengrass Components

We're now going to put Docker Compose to work, showing how to deploy and run an application using Greengrass. Let's take a look at the code.

The code we're using comes from my sample repository.

Clone the Code

The first step is to check the code out on your local machine. The build scripts use Bash, so I'd recommend something Linux-based, like Ubuntu. Execute the following to check out the code:

git clone

Check Dependencies

The next step is to make sure all of our dependencies are installed and set up. Execute all of the following and check the output is what you expect - a help or version message.

aws --version
gdk --version
jq --version
docker --version

The AWS CLI will also need credentials set up such that the following call works:

aws sts get-caller-identity

Docker will need to be able to run containers as a super user. Check this using:

sudo docker run hello-world

Finally, we need to make sure Docker Compose is installed. It is available either as a standalone script or as a plugin to Docker, with the plugin being the more recent installation method. If you have access to the plugin version, I would recommend using it, although you will need to update the Greengrass component build script, which currently uses docker-compose.

# For the script version
docker-compose --version

# For the plugin version
docker compose --version

More information can be found for any of these components using:

  1. AWS CLI
  2. Greengrass Development Kit (GDK)
  3. jq: sudo apt install jq
  4. Docker
  5. Docker Compose

Greengrass Permissions Setup

The developer guide for Docker in Greengrass tells us that we may need to add permissions to our Greengrass Token Exchange Role to be able to deploy components using either/both of ECR and S3, for storing private Docker images and Greengrass component artifacts respectively. We can check this by navigating to the IAM console, searching for Greengrass, and selecting the TokenExchangeRole. Under this role, we should see one or more policies granting us permission to use ECR and S3.

Token Exchange Role Policies

For the ECR policy, we expect a JSON block similar to the following:

"Version": "2012-10-17",
"Statement": [
"Action": [
"Resource": [
"Effect": "Allow"

For the S3 policy, we expect:

"Version": "2012-10-17",
"Statement": [
"Action": [
"Resource": [
"Effect": "Allow"

With these policies in place, any system we deploy Greengrass components to should have permission to pull Docker images from ECR and Greengrass component artifacts from S3.

Elastic Container Registry Setup

By default, this project builds and pushes a Docker image tagged python-hello-world:latest. It expects an ECR repository of the same name to exist. We can create this by navigating to the ECR console and clicking "Create repository". Set the name to python-hello-world and keep the settings default otherwise, then click Create repository. This should create a new entry in the Repositories list:

ECR Repository Created

Copy the URI from the repo and strip off the python-hello-world ending to get the base URI. Then, back in your cloned repository, open the .env file and replace the ECR_REPO variable with your base URI.

Any new Docker image you attempt to create locally will need a new ECR repository. You can follow the creation steps up to the base URI step - this is a one-time operation.

Building the Docker Images and Greengrass Components

At this point, you can build all the components and images by running the top-level build script.


Alternatively, any individual image/component can be built by changing directory into the component and running:

source ../../.env && ./

Publishing Docker Images and Greengrass Components

Just as with building the images/components, you can now publish them using the top-level publish script.


If any image/component fails to publish, it should prevent the script execution to allow you to investigate further.

Deploying Your Component

With the images and component pushed, you can now use the component in a Greengrass deployment.

First, make sure you have Greengrass running on a system. If you don't, you can follow the guide in the video below:

Once you have Greengrass set up, you can navigate to the core device in the console. Open the Greengrass console, select core devices, and then select your device.

Select Greengrass Core Device

From the deployments tab, click the current deployment.

Select current deployment

In the top right, select Actions, then Revise.

Revise Greengrass Deployment

Click Select components, then tick the new component to add it to the deployment. Skip to Review and accept.

Add Component to Deployment

This will now deploy the component to your target Greengrass device. Assuming all the setup is correct, you can access the Greengrass device and inspect the most recent logs using:

sudo vim /greengrass/v2/logs/com.docker.PythonHelloWorld.log

This will show all current logs. It may take a few minutes to get going, so keep checking back! Once the component is active, you should see some log lines similar to the following:

2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1  |^[[0m Received new message on topic /topic/local/pubsub: Hello from local pubsub topic. {, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1 |^[[0m Successfully published 999 message(s). {, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1 |^[[0m Received new message on topic /topic/local/pubsub: Hello from local pubsub topic. {, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:03.514Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mpython-hello-world_1 |^[[0m Successfully published 1000 message(s). {, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}
2024-01-19T21:46:05.306Z [INFO] (Copier) com.docker.PythonHelloWorld: stdout. [36mcomdockerpythonhelloworld_python-hello-world_1 exited with code 0. {, serviceName=com.docker.PythonHelloWorld, currentState=RUNNING}

From these logs, we can see both "Successfully published" messages and "Received new message" messages, showing that the component is running correctly and has all the permissions it needs.

This isn't the only way to check the component is running! We could also use the Local Debug Console, a locally-hosted web UI, to publish/subscribe to local topics. Take a look at this excellent video if you want to set this method up for yourself:


If you got to this point, you have successfully deployed a Docker Compose application using Greengrass!

Diving into the code

To understand how to extend the code, we need to first understand how it works.

From the top-level directory, we can see two important folders (components, docker) and two important scripts: one that builds every image and component, and one that publishes them.

components contains all of the Greengrass components; each component goes in a separate folder and is built using GDK. We can see this from the folder inside, com.docker.PythonHelloWorld. docker contains all of the Docker images, where each image is in a separate folder and is built using Docker. If we take a look inside the top-level build and publish scripts, we see that both source the .env file, then go through all Docker folders followed by all Greengrass folders, executing the corresponding build or publish script inside each one. The only exception is for publishing Greengrass components, where the standard gdk component publish command is used directly instead of adding an extra script.

Let's take a deeper dive into the Docker image and the Greengrass component in turn.

Docker Image (docker/python-hello-world)

Inside this folder, we can see the LocalPubSub sample application from Greengrass (see the template), with some minor modifications. Instead of passing in the topic to publish on and the message to publish as arguments, we use environment variables.

topic = os.environ.get("MQTT_TOPIC", "example/topic")
message = os.environ.get("MQTT_MESSAGE", "Example Hello!")

Passing command-line arguments directly to Greengrass components is easy, but passing those same arguments through Docker Compose is more difficult. It's easier to use environment variables, specified in the Docker Compose file and overridden by Greengrass configuration - we will see more on this in the Greengrass component deep dive.

Therefore, the component retrieves its topic and message from the environment, then publishes 1000 messages and listens for those same messages.

We also have a simple Dockerfile showing how to package the application. From a Python base image, we add the application code into the app directory, then specify the application script as the entrypoint.

Finally, we have the build and publish scripts. The build simply uses the docker build command with a default tag pointing at the ECR repo. The publish step does slightly more work, logging in to the ECR repository with Docker before pushing the image. Note that both scripts use the ECR_REPO variable set in the .env file.

If we want to add other Docker images, we can add a new folder with our component name and copy the contents of the python-hello-world image. We can then update the image name in the build and publish scripts and change the application code and Dockerfile as required. A new ECR repo will also be required, matching the name given in the build and publish scripts.

Greengrass Component (components/com.docker.PythonHelloWorld)

Inside our Greengrass component, we can see a build script, the Greengrass component files, and the docker-compose.yml that will be deployed using Greengrass.

The build script is slightly more complicated than the Docker equivalent, because the ECR repository environment variable needs to be substituted into the other files, then reset after the component build to avoid committing changes to the source code. These lines...

find . -maxdepth 1 -type f -not -name "*.sh" -exec sed -i "s/{ECR_REPO}/$ECR_REPO/g" {} \;
gdk component build
find . -maxdepth 1 -type f -not -name "*.sh" -exec sed -i "s/$ECR_REPO/{ECR_REPO}/g" {} \;

...replace the ECR_REPO placeholder with the actual repo, then build the component with GDK, then swap the value back to the placeholder. As a result, the built files contain the real repo, but the source files are restored to their original state.

Next we have the GDK configuration file, which shows that our build system is set to zip. We could push only the Docker Compose file, but this method allows us to zip other files that support it if we want to extend the component. We also have the version tag, which needs to be incremented with new component versions.

After that, we have the Docker Compose file. This contains one single service, some environment variables, and a volume. The service refers to the python-hello-world Docker image built by docker/python-hello-world by specifying the image name.
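The Compose file isn't reproduced in full here, but based on the description above, a minimal sketch would look something like the following. The exact service and volume entries are illustrative - check the repository for the real file:

```yaml
version: "3"
services:
  python-hello-world:
    image: "{ECR_REPO}/python-hello-world:latest"
    environment:
      - MQTT_TOPIC
      - MQTT_MESSAGE
```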


This component references the latest tag of python-hello-world. If you want your Greengrass component version to be meaningful, you should extend the build scripts to give a version number as the Docker image tag, so that each component version references a specific Docker image version.

We can see the MQTT_TOPIC and MQTT_MESSAGE environment variables that need to be passed to the container. These can be overridden in the recipe.yaml by Greengrass configuration, allowing us to pass configuration through to the Docker container.

Finally, we can see some other parameters which are needed for the Docker container to be able to publish and subscribe to local MQTT topics:


These will need to be included in any Greengrass component where the application needs to use MQTT. Other setups are available in the Greengrass Docker developer guide.

If we want to add a new Docker container to our application, we can create a new service block, just like python-hello-world, and change our environment variables and image tags. Note that we don't need to reference images stored in ECR - we can also access public Docker images!

The last file is the recipe.yaml, which contains a lot of important information for our component. Firstly, the default configuration allows our component to publish and subscribe to MQTT, but also specifies the environment variables we expect to be able to override:

Message: "Hello from local pubsub topic"
Topic: "/topic/local/pubsub"

This allows us to override the message and topic using Greengrass configuration, set in the cloud.

The recipe also specifies that the Docker Application Manager and Token Exchange Service are required to function correctly. Again, see the developer guide for more information.

We also need to look at the Manifests section, which specifies the Artifacts required and the Lifecycle for running the application. Within Artifacts, we can see:

- URI: "docker:{ECR_REPO}/python-hello-world:latest"

This line specifies that a Docker image is required from our ECR repo. Each new private Docker image added to the Compose file will need a line like this to grant permission to access it. However, public Docker images can be freely referenced.

Unarchive: ZIP

This section specifies that the component files are in a zip, whose S3 location is supplied during the GDK build. We are able to use files from this zip by referencing the {artifacts:decompressedPath}/com.docker.PythonHelloWorld/ path.
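Taken together, the Artifacts section looks roughly like the sketch below; the S3 URI is a placeholder that GDK substitutes at build time:

```yaml
Manifests:
  - Platform:
      os: all
    Artifacts:
      # Private Docker image; this entry grants permission to pull it
      - URI: "docker:{ECR_REPO}/python-hello-world:latest"
      # Zipped component files; GDK fills in the real S3 location on build
      - URI: "s3://BUCKET_NAME/COMPONENT_NAME/COMPONENT_VERSION/com.docker.PythonHelloWorld.zip"
        Unarchive: ZIP
```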

In fact, we do this during the Run lifecycle stage:

RequiresPrivilege: True
Script: |
MQTT_TOPIC="{configuration:/Topic}" \
MQTT_MESSAGE="{configuration:/Message}" \
docker-compose -f {artifacts:decompressedPath}/com.docker.PythonHelloWorld/docker-compose.yml up

This script requires privilege, as Docker needs superuser permissions in our current setup. It is possible to configure Docker to run without superuser permissions, but this method is the simplest. We also pass MQTT_TOPIC and MQTT_MESSAGE as environment variables to the docker-compose command. With the up command, we tell the component to start the application defined in the Docker Compose file.


If we want to change to use Compose as a plugin, we can change the run command here to start with docker compose.
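As a sketch, only the final line of the Run script changes, swapping docker-compose for docker compose:

```yaml
Lifecycle:
  Run:
    RequiresPrivilege: True
    Script: |
      MQTT_TOPIC="{configuration:/Topic}" \
      MQTT_MESSAGE="{configuration:/Message}" \
      docker compose -f {artifacts:decompressedPath}/com.docker.PythonHelloWorld/docker-compose.yml up
```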

And that's the important parts of the source code! I encourage you to read through and check your understanding of the parameters - not setting permissions and environment variables correctly can lead to some confusing errors.

Where to go from here

Given this setup, we should be able to deploy private or public Docker containers, which paves the way for deploying our robot software using Greengrass. We can run a number of containers together for the robot software. This method of deployment gives us the advantages of Greengrass, such as an easier route to the cloud, more fault-tolerant deployments with roll-back mechanisms and version tracking, and the ability to deploy other components like the CloudWatch Log Manager.

In the future, we can extend this setup to build ROS2 containers, allowing us to migrate our robot software to Docker images and Greengrass components. We could install Greengrass on each robot, then deploy the full software with configuration options. We then also have a mechanism to update components or add more as needed, all from the cloud.

Give the repository a clone and try it out for yourself!

· 7 min read
Michael Hart

This post shows how to build a Robot Operating System 2 node using Rust, a systems programming language built for safety, security, and performance. In the post, I'll tell you about Rust - the programming language, not the video game! I'll tell you why I think it's useful in general, then specifically in robotics, and finally show you how to run a ROS2 node written entirely in Rust that will send messages to AWS IoT Core.

This post is also available in video form - check the video link below if you want to follow along!

Why Rust?

The first thing to talk about is, why Rust in particular over other programming languages? Especially given that ROS2 has strong support for C++ and Python, we should think carefully about whether it's worth travelling off the beaten path.

There are far more in-depth articles and videos about the language itself, so I'll keep my description brief. Rust is a systems-level programming language, the same class of language as C and C++, but with a very strict compiler that blocks you from performing "unsafe" operations. That means the language is built for high performance, but with a greatly diminished risk of the unsafe operations that C and C++ allow.

Rust is also steadily gaining traction. It is the only language other than C to make its way into the Linux kernel - and the Linux kernel was originally written in C! Microsoft is also rewriting some Windows kernel modules in Rust - check here to see what they have to say:

The major tech companies are adopting Rust, including Google, Facebook, and Amazon. This recent 2023 keynote from Dr Werner Vogels, Vice President and CTO of Amazon, had some choice words to say about Rust. Take a look here to hear this expert in the industry:

Why isn't Rust used more?

That's a great question. Really, I've presented the best parts in this post so far. Some of the drawbacks include:

  1. Being a newer language means less community support and fewer components provided out of the box. For example, writing a desktop GUI in Rust is possible, but the libraries are still maturing.
  2. It's harder to learn than most languages. The stricter compiler means some normal programming patterns don't work, which means relearning some concepts and finding different ways to accomplish the same task.
  3. It's hard for a new language to gain traction! Rust has to prove it will stand the test of time.

Having said that, I believe learning the language is worth it for safety, security, and sustainability reasons. Safety and security come from the strict compiler, and sustainability comes from being a low-level language that accomplishes the same task faster and with fewer resources.
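As a small illustration (not from this post's source code) of what the strict compiler buys you, Rust's ownership rules turn a whole class of use-after-free and use-after-move bugs into compile errors:

```rust
// Minimal sketch of Rust's ownership rules in action.
fn main() {
    let readings = vec![1.2, 3.4, 5.6]; // simulated sensor data

    // Borrow the data rather than moving it; the compiler guarantees
    // `readings` remains valid while the iterator uses it.
    let sum: f64 = readings.iter().sum();
    println!("sum of {} readings = {}", readings.len(), sum);

    // let moved = readings;        // moving ownership here...
    // println!("{:?}", readings);  // ...would make this line a compile error
}
```

In C++, the equivalent mistake (reading from a moved-from vector) compiles without complaint; in Rust it is rejected before the program ever runs.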

That's as true for robotics as it is for general applications. Some robot software can afford to be slow, like high-level message passing and decision making, but a lot of it needs to be real-time and high-performance, like processing Lidar data. My example today is perfectly acceptable in Python because it only passes non-urgent messages, but it is still a good use case for exploring Rust.

With that, let's stop talking about Rust, and start looking at building that ROS2 node.

Building a ROS2 Node

The node we're building replicates the Python-based node from this blog post. The same setup is required, meaning the X.509 certificates, IoT policies, and so on from that post will be reused. If you want to follow along, make sure to run through that setup up to the point of running the code - at which point, we can switch over to the Rust-based node. If you prefer to follow instructions from a README, please follow this link - it is the repository containing the source code we'll be using!


The first part of our setup is making sure all of our tools are installed. This node can be built on any operating system, but the instructions here are for Ubuntu, so you may need to do some extra research for other systems.

Execute the following to install Rust using Rustup:

curl --proto '=https' --tlsv1.2 -sSf | sh

There are further dependencies taken from the ROS2 Rust repository as follows:

sudo apt install -y git libclang-dev python3-pip python3-vcstool # libclang-dev is required by bindgen
# Install these plugins for cargo and colcon:
cargo install --debug cargo-ament-build # --debug is faster to install
pip install git+
pip install git+

Source Code

Assuming your existing ROS2 workspace is at ~/ros2_ws, the following commands can be used to check out the source code:

cd ~/ros2_ws/src
git clone
git clone
git clone

ROS2 Rust then uses vcs to import the other repositories it needs:

cd ~/ros2_ws
vcs import src < src/ros2_rust/ros2_rust_humble.repos

That concludes checking out the source code.

Building the workspace

The workspace can now be built. ROS2 Rust takes around 10 minutes to build, but this should only need to be done once. After that, changes to the code from this repository build very quickly. To build the workspace, execute:

cd ~/ros2_ws
colcon build
source install/setup.bash

The build output should look something like this:

Colcon Build Complete

Once the initial build has completed, the following command can be used for subsequent builds:

colcon build --packages-select aws_iot_node

Here it is in action:


Now, any changes that are made to this repository can be built and tested with cargo commands, such as:

cargo build
cargo run --bin mock-telemetry

The cargo build log will look something like:


Multi-workspace Setup

The ROS2 Rust workspace takes a considerable amount of time to build, and often gets built as part of the main workspace when it's not required, slowing down development. A different way of structuring workspaces is to separate the ROS2 Rust library from your application, as follows:

# Create and build a workspace for ROS2 Rust
mkdir -p ~/ros2_rust_ws/src
cd ~/ros2_rust_ws/src
git clone
cd ~/ros2_rust_ws
vcs import src < src/ros2_rust/ros2_rust_humble.repos
colcon build
source install/setup.bash

# Check out application code into main workspace
cd ~/ros2_ws/src
git clone
git clone
cd ~/ros2_ws
colcon build
source install/local_setup.bash

This method means that the ROS2 Rust workspace only needs to be updated with new releases for ROS2 Rust, and otherwise can be left. Furthermore, you can source the setup script easily by adding a line to your ~/.bashrc:

echo "source ~/ros2_rust_ws/install/setup.bash" >> ~/.bashrc

The downside of this method is that you can only source further workspaces using the local_setup.bash script, or it will overwrite the variables needed to access the ROS2 Rust libraries.

Running the Example

To run the example, you will need the IOT_CONFIG_FILE variable set from the Python repository.

Open two terminals. In each terminal, source the workspace, then run one of the two nodes as follows:

source ~/ros2_ws/install/setup.bash  # Both terminals
source ~/ros2_ws/install/local_setup.bash # If using the multi-workspace setup method
ros2 run aws_iot_node mqtt-telemetry --ros-args --param path_for_config:=$IOT_CONFIG_FILE # One terminal
ros2 run aws_iot_node mock-telemetry # Other terminal

Using a split terminal in VSCode, this looks like the following:

Both MQTT and Mock nodes running

You should now be able to see messages appearing in the MQTT test client in AWS IoT Core. This will look like the following:

MQTT Test Client


We've demonstrated that it's possible to build ROS2 nodes in Rust just as with C++ and Python, although there's an extra step of setting up ROS2 Rust so our node can link to it. We can now build other nodes in Rust when we're on a resource-constrained system, such as a Raspberry Pi or other small dev kit, and want the guarantees from the Rust compiler that the C++ compiler doesn't offer, while being more secure and using fewer resources than a Python-based version.

Check out the repo and give it a try for yourself!