9 posts tagged with "robotics"

· 15 min read
Michael Hart

Congratulations! You have a whole lab full of robots running your latest software. Now you want an overall view of how they're all doing. It's time to build a fleet overview, and this post will show you how to use Fleet Indexing from AWS IoT Device Management to get started.

Fleet Indexing is a feature of AWS IoT Core that collects and indexes information about all of your selected Things and allows you to execute queries on them and aggregate data about them. For example, you can check which of your Things are online, giving you an easy way to determine which of your robots are connected.

I want to walk you through this process and show you what it looks like in the console and using Command Line Interface (CLI) commands. We'll be using the sample code from aws-iot-robot-connectivity-samples-ros2, but I've forked it to add some helper scripts that make setup a little easier for multiple robots. It also adds a launch script so multiple robots can be launched at the same time.

This guide is also available in video form - see the link below!

Fleet Indexing

Fleet Indexing is a feature of IoT Device Management that allows you to index, search, and aggregate your device data from multiple AWS IoT sources. Once the index has been enabled and built, you can run queries such as:

  • How many devices do I have online?
  • How many devices have less than 30% battery left?
  • Which devices are currently on a mission?
  • What is the average metres travelled for my fleet of robots at this location?

To perform these queries, AWS provides a query language that can be used in the console or in CLI commands. There is also the option to aggregate data over time, post that aggregated data to CloudWatch (allowing you to build dashboards), and enable alarms on the aggregate state of your fleet based on pre-defined thresholds.

Device Management has a suite of features, including bulk registration, device jobs (allowing for Over The Air updates, for example), and secure tunnelling. I won't go into depth on these - Fleet Indexing is the focus of this post. If you are interested in these other features, you can read more in the docs, or let me know directly!

Pricing

Fleet Indexing is a paid service at AWS, and it's worth understanding where the costs come from. The Device Management pricing page has the most detail. In short, there is a very low charge for registering devices, then additional charges for updating the index and querying the index.

Fleet Indexing is opt-in. Every service added to the index, such as connectivity or shadow indexing, will increase the size of the index and the operations that will trigger an index update. For example, if connectivity is not enabled, then a device coming online or offline will not update the index, so there will be no additional fleet indexing charge. If shadow indexing is enabled, then every update of an indexed shadow will incur a charge. At time of writing, update charges are measured in a few USD per million updates, and queries cost a few cents per thousand queries.

Overall, we can keep this in mind when deciding which features to include in the Fleet Index given a particular budget. Frequent shadow updates from a large robot fleet will have a larger associated cost.
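As a rough, illustrative example: a hypothetical fleet of 100 robots each publishing one indexed shadow update per second would generate around 8.6 million index updates per day, or roughly 260 million per month - at a few USD per million updates, that's on the order of a few hundred USD per month. Indexing connectivity alone for the same fleet would only trigger index updates when robots connect or disconnect, which is typically far cheaper.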

If you'd like to learn more about pricing estimates and monitoring your usage at AWS, take a look at my video on the topic:

Setup Guide

Now that we know a bit more about what Fleet Indexing is, I want to show you how to set it up. I'll use a sample application with multiple ROS2 nodes posting frequent shadow updates so we can see the changes in the shadow index. Each pair of ROS2 nodes is acting as one robot. I'll show you how to enable Fleet Indexing first only with Connectivity data, then add named shadows.

Sample Application Setup

For the sample application, use a computer with Ubuntu 22.04 and ROS2 Humble installed. Install ros-dev-tools as well as ros-humble-desktop. Also install the AWS CLI and the AWS IoT SDK - instructions are in the repository's README and my video on setting up the sample application:

Once the necessary tools are installed, clone the repository:

cd ~
git clone https://github.com/mikelikesrobots/aws-iot-robot-connectivity-samples-ros2.git

Make sure that the application builds correctly by building the ROS2 workspace:

cd ~/aws-iot-robot-connectivity-samples-ros2/workspace
source /opt/ros/humble/setup.bash
colcon build --symlink-install
source install/setup.bash

Next, set the CERT_FOLDER_LOCATION so that the certificates are written to the right place:

export CERT_FOLDER_LOCATION=~/aws-iot-robot-connectivity-samples-ros2/iot_certs_and_config/
echo "export CERT_FOLDER_LOCATION=$CERT_FOLDER_LOCATION" >> ~/.bashrc

At this point we can set up a few robots for the sample application. For this, we can use the new script in the forked repository:

cd ~/aws-iot-robot-connectivity-samples-ros2
# Install python dependencies for the script
python3 -m pip install -r scripts/requirements.txt
# Create certificates, policies etc for robot1, robot2, and robot3
python3 scripts/make_robots.py robot1 robot2 robot3

This will create Things, certificates, policies and so on for three robots.
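If you're curious what the helper script is doing on the AWS side, the sketch below shows the kind of boto3 calls involved for each robot name. This is not the actual script - the policy document is deliberately minimal and overly permissive, the naming is an assumption, and a real setup also needs to save the certificate and key files locally so each robot can use them.

import json
import boto3

iot = boto3.client("iot")

def make_robot(name: str) -> None:
    # Create the Thing that represents one robot
    iot.create_thing(thingName=name)
    # Create an active certificate and key pair for the robot to authenticate with
    cert = iot.create_keys_and_certificate(setAsActive=True)
    # Create a policy for the robot (illustration only - far too permissive for real use)
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "iot:*", "Resource": "*"}],
    }
    iot.create_policy(policyName=f"{name}-policy", policyDocument=json.dumps(policy_doc))
    # Attach the policy to the certificate, and the certificate to the Thing
    iot.attach_policy(policyName=f"{name}-policy", target=cert["certificateArn"])
    iot.attach_thing_principal(thingName=name, principal=cert["certificateArn"])

for robot in ["robot1", "robot2", "robot3"]:
    make_robot(robot)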

Cleaning Up Resources

Once you're finished working with the sample application, you can clean all of these resources up using the corresponding cleanup script:

cd ~/aws-iot-robot-connectivity-samples-ros2
python3 scripts/delete_robots.py robot1 robot2 robot3

With your robots created, you can now launch the shadow service for all of them. Execute the command:

ros2 launch iot_shadow_service crack_all_safes.launch.py certs_path:=$CERT_FOLDER_LOCATION

This should find all the robots you've created in the certificates folder and launch an instance of the sample application for each one. In this case, three "safe cracker" robots will start up. The logs will show shadow updates for each one. We now have three robots in our fleet, each connecting to AWS IoT Core to post their shadow updates.

You can now leave this application running while following the rest of the instructions.

Enabling Fleet Index with Connectivity

For each step, I'll show how to use the console and CLI for enabling Fleet Indexing.

Enabling in Console

In the console, navigate to the AWS Console IoT Core page, and scroll to the bottom of the navigation bar. Open the Settings page.

IoT Settings Navbar Menu

Fleet indexing is partway down the page. Click the Manage indexing button.

Manage Fleet Indexing

In the title bar of the Thing indexing box, there's a check box to enable indexing. Check this box.

Enable Fleet indexing

At this point, you could confirm the update and see the Fleet Index include the names of your Things. However, to get some utility from it, we can also include Connectivity to see whether devices are online or not. To do this, scroll down and check the box next to "Add thing connectivity".

Enable Connectivity

Now scroll to the bottom of the page and click Update.

Confirm Indexing Update

Once enabled, it may take a few minutes to build the Index. You can proceed to the next section, and if the searches are not returning results, wait for a few minutes before trying again.

Enabling using CLI

To enable Fleet Indexing with only Connectivity, execute the following command:

aws iot update-indexing-configuration --thing-indexing-configuration '{
  "thingIndexingMode": "REGISTRY",
  "thingConnectivityIndexingMode": "STATUS"
}'

The index may take a few minutes to build. If the following section does not immediately work, wait for a few minutes before trying again.

Querying the Fleet Index

Now that connectivity is enabled, we can query our Fleet Index to find out how many devices are online.

In the AWS Console, go to the IoT Core page and open the Things page.

Things Navigation Menu Item

In the controls at the top, two buttons are available that require Fleet Indexing to work. Let's start with Advanced Search.

Advanced Search Button

Within the Query box, enter connectivity.connected = true. Press enter to add it as a query, then click Search.

Query Connected Devices

This should give all three robots as connected devices.

Connected Device Results

Success! We have listed three devices as connected. You can also experiment with the query box to see what other options are available. We can also increase the options available by allowing other IoT services to be indexed.

caution

The console uses queries with a slightly different syntax to the CLI and examples page. If you use an example, be careful to translate the syntax!

As a next step, we can see how to aggregate data. Go back to the Things page and instead click the "Run aggregations" button.

Run Aggregations Button

Search for "thingName = robot*", and under aggregation properties, select "connectivity.connected" with aggregation type "Bucket". This search will return that 3 devices are connected. Explore the page - in particular the Fleet metrics section - to see what else this tool is able to do.

Count Connected Devices

You can use the console to explore the data available.

CLI Queries and Aggregations

To search for the names of connected devices, the CLI command is:

aws iot search-index \
--index-name "AWS_Things" \
--query-string "connectivity.connected:true"

This will return a JSON structure containing all of the Things that are currently connected. This can be processed further, for example with the tool jq:

sudo apt install jq
aws iot search-index \
--index-name "AWS_Things" \
--query-string "connectivity.connected:true" \
| jq '.things[].thingName'

This returns the following lines:

"robot1"
"robot2"
"robot3"

Success! We can even wrap this whole command in watch to view the connected devices over time:

watch "aws iot search-index --index-name 'AWS_Things' --query-string 'connectivity.connected:true' | jq '.things[].thingName'"

We could get the number of connected devices by counting the lines in the response:

aws iot search-index \
--index-name 'AWS_Things' \
--query-string 'connectivity.connected:true' \
| jq '.things[].thingName' \
| wc -l

Or, we can use an aggregation. To search for Things with a name starting with robot that are connected, we can use this command:

aws iot get-buckets-aggregation \
--query-string "thingName:robot*" \
--aggregation-field connectivity.connected \
--buckets-aggregation-type '{"termsAggregation": {"maxBuckets": 10}}'

This gives the following response:

{
  "totalCount": 3,
  "buckets": [
    {
      "keyValue": "true",
      "count": 3
    }
  ]
}

In both cases, we can see that 3 devices are connected. The latter command is more complicated, but can be altered to provide other aggregated data - you can now explore the different aggregation types to perform more queries on your Fleet Index!
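As an example of a different aggregation type, the boto3 sketch below pulls statistics for a numeric field. Note that shadow.reported.batteryLevel is a hypothetical field - it assumes your indexed shadows report a numeric battery level, which the safe-cracker robots in this sample do not.

import boto3

iot = boto3.client("iot")

# Statistics aggregation over a (hypothetical) numeric shadow field
stats = iot.get_statistics(
    indexName="AWS_Things",
    queryString="thingName:robot*",
    aggregationField="shadow.reported.batteryLevel",
)
# For numeric fields this includes count, average, sum, minimum, maximum, and so on
print(stats["statistics"])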

Indexing Named Shadows

Next, we will see how to add Named Shadows to the Fleet Index. Again, we will see both the console and CLI methods of accomplishing this.

caution

As described in the pricing section, anything that increases the number of updates to the index will incur additional charges. In this sample application, each robot updates its shadow multiple times per second. If you plan to index shadows in your own system, be careful about how many shadow updates your design requires!

Enabling named shadows is a two-step process: first, you must enable named shadow indexing; then, you must specify which shadows should be indexed. This allows you to optimize the fleet index for cost and performance - you can select which shadows are included to avoid unnecessary updates.

Enabling Named Shadows using the Console

Back in the Fleet Indexing settings, check the box for Add named shadows, then add the shadow names robot1-shadow, robot2-shadow, robot3-shadow.

Adding Shadows to Index

Click Update. This will result in a banner on the page saying that the index is updating. Wait for a couple of minutes; if the banner persists, continue with the next steps regardless - the banner can remain on screen even after indexing is complete.

Enabling Named Shadows using the CLI

To add named shadows to the index, you need to both enable the named shadow indexing and specify the shadow names to be indexed. This must be done all in one command - the configuration in the command overwrites the previous configuration, meaning that if you omit the connectivity configuration argument, it will remove connectivity information from the index.

The command to add the shadows to the existing configuration is as follows:

aws iot update-indexing-configuration --thing-indexing-configuration '{
  "thingIndexingMode": "REGISTRY",
  "namedShadowIndexingMode": "ON",
  "thingConnectivityIndexingMode": "STATUS",
  "filter": {"namedShadowNames": ["robot1-shadow", "robot2-shadow", "robot3-shadow"]}
}'

The command keeps the indexing mode and connectivity mode the same, but adds named shadow indexing and the specific shadow names to the index.

Querying shadow data

Shadow data can be queried using the console and the CLI - however, wildcards in named shadow queries are not supported at time of writing. It is possible to query data for a particular shadow name, but you currently can't query shadow data across the whole fleet.

Querying Shadow via Console

To query for robot1-shadow, open the Advanced search page again. Enter the query shadow.name.robot1-shadow.hasDelta = true, then click search. It may take a few searches depending on the shadow's delta, but this should return robot1 as a Thing.

Shadow Query Results

You can then select any of the Things returned to see more information about that Thing, including viewing its shadow.

Querying Shadow via CLI

With the console, searching for your robot* things will return a table of links, which means clicking through each Thing to see the shadow. The query result from the CLI has more detail in one place, including the entire indexed shadow contents for all things matching the query. For example, the following query will return the thing name, ID, shadow, and connectivity status for all robot* things.

aws iot search-index --index-name 'AWS_Things' --query-string 'connectivity.connected:true'

Example output:

{
  "things": [
    {
      "thingName": "robot1",
      "thingId": "13032b27-e770-4146-8706-8ec1249b7015",
      "shadow": "{\"name\":{\"robot1-shadow\":{\"desired\":{\"digit\":59},\"reported\":{\"digit\":59},\"metadata\":{\"desired\":{\"digit\":{\"timestamp\":1718135446}},\"reported\":{\"digit\":{\"timestamp\":1718135446}}},\"hasDelta\":false,\"version\":55137}}}",
      "connectivity": {
        "connected": true,
        "timestamp": 1718134574557
      }
    },
    {
      "thingName": "robot2",
      "thingId": "7e396088-6d02-461e-a7ca-a64076f8a0ca",
      "shadow": "{\"name\":{\"robot2-shadow\":{\"desired\":{\"digit\":70},\"reported\":{\"digit\":70},\"metadata\":{\"desired\":{\"digit\":{\"timestamp\":1718135462}},\"reported\":{\"digit\":{\"timestamp\":1718135463}}},\"hasDelta\":false,\"version\":55353}}}",
      "connectivity": {
        "connected": true,
        "timestamp": 1718134574663
      }
    },
    {
      "thingName": "robot3",
      "thingId": "b5cf7c73-3b97-4d8e-9ccf-a1d5788873c8",
      "shadow": "{\"name\":{\"robot3-shadow\":{\"desired\":{\"digit\":47},\"reported\":{\"digit\":22},\"delta\":{\"digit\":47},\"metadata\":{\"desired\":{\"digit\":{\"timestamp\":1718135446}},\"reported\":{\"digit\":{\"timestamp\":1718135447}},\"delta\":{\"digit\":{\"timestamp\":1718135446}}},\"hasDelta\":true,\"version\":20380}}}",
      "connectivity": {
        "connected": true,
        "timestamp": 1718134574573
      }
    }
  ]
}

We can filter this again using jq to get the current desired digits for the shadows. The following command uses a few chained Linux commands to parse out the desired data. The purpose of the command is to show that the data returned by the query can be further parsed for particular fields, or used by a program to take further action.

aws iot search-index \
--index-name 'AWS_Things' \
--query-string 'connectivity.connected:true' \
| jq -r '.things[].shadow | fromjson | .name | to_entries[] | {name: .key, desired: .value.desired.digit}'

The above command searches for all connected devices, then, for each device, retrieves the desired digit field from its shadows and prints an object in the following format:

{
  "name": "robot1-shadow",
  "desired": 24
}
{
  "name": "robot2-shadow",
  "desired": 54
}
{
  "name": "robot3-shadow",
  "desired": 8
}

We can again wrap this in a watch command to see how frequently the values update:

watch "aws iot search-index --index-name 'AWS_Things' --query-string 'connectivity.connected:true' | jq -r '.things[].shadow | fromjson | .name | to_entries[] | {name: .key, desired: .value.desired.digit}'"

Success! We can see the desired digit of our three safe cracking robots update live on screen, all from the fleet index.

CloudWatch Metrics and Device Management Alarms

Fleet Indexing is a launching point for sending specific data to CloudWatch Metrics or activating Device Management Alarms. AWS allows you to set up queries and aggregated data that you're interested in, such as the average battery level across a fleet of mobile robots, and perform actions based on that. You could set an alarm to get an alert when the average battery drops too low, emit metrics based on queries that are tracked in CloudWatch, and even start building a CloudWatch Dashboard to view a graph of the average battery level over time.
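As a hedged sketch of what that setup could look like programmatically, the boto3 snippet below creates a fleet metric that periodically counts connected robots and publishes the result to CloudWatch. The metric name, query string, and period are arbitrary choices for illustration.

import boto3

iot = boto3.client("iot")

# Create a fleet metric: every `period` seconds, run the query against the
# fleet index and publish the aggregated result as a CloudWatch metric.
iot.create_fleet_metric(
    metricName="ConnectedRobots",
    queryString="thingName:robot* AND connectivity.connected:true",
    aggregationField="connectivity.connected",
    aggregationType={"name": "Statistics", "values": ["count"]},
    period=300,
)

From there, a CloudWatch alarm on that metric could alert you whenever the connected count drops below your expected fleet size.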

These are just a few examples of how valuable it is to connect robots to the cloud. Once they are connected and data starts flowing, you can set up monitoring, alarms, and actions to take directly in the cloud, without needing to deploy any of your own infrastructure.

Summary

Overall, Fleet Indexing from IoT Device Management is a useful tool for collecting data from your fleet into one place, then allowing you to query and aggregate data across the fleet or even subsections of the fleet. This post shows how to index device connectivity and named shadow data, plus how to query and aggregate data, both using the console and using the command line. Finally, I briefly mentioned some of the future possibilities with the Fleet Index data, such as setting up alarms, metrics, and dashboards - all from data already flowing into AWS!

Have a try for yourself using the sample code - but remember, frequent updates of the index will incur higher costs. Be careful with your system design and the IoT services you select to index to optimize costs and performance.

· 7 min read
Michael Hart

This post is to show how to set up the Boston Dynamics Spot SDK.

If you prefer a video format, or you want to see the samples in action, check out my YouTube video below:

Boston Dynamics Spot SDK

Boston Dynamics released the Spot robot, and I was able to get my hands on one in the lab. I also released the video below to show the basics of getting it unpacked and moving around.

This post is about showing examples from the Boston Dynamics Spot SDK, including mapping its environment and moving autonomously to key waypoints in the map, or detecting and following a person around. Unfortunately, it's hard to demonstrate the samples working in this post! I'll talk about the setup of the SDK and using it to connect to the robot, then leave it to the video for showing the samples themselves.

Downloading the SDK

The SDK is available from this Github repository, and Boston Dynamics host the documentation for the SDK. Most importantly, we're interested in the Python examples.

Start by cloning the SDK in your chosen terminal using git:

git clone https://github.com/boston-dynamics/spot-sdk

Make sure you have Python installed - if not, follow the instructions for your system to install it.

The next step is to set up a virtual environment for dependencies. This is entirely optional, but it makes it easier to track Python dependencies if they're all installed like this.

To set it up, make sure pip3 is available:

pip3 --version

Then use it to install virtualenv:

pip3 install virtualenv

Once virtualenv is installed, use it to create a virtual environment and activate it:

cd spot-sdk
virtualenv venv
# On Windows:
venv\Scripts\activate
# On Mac:
source venv/bin/activate

Now that we have a clean virtual environment, we can install any dependencies we want into it. For example, we can install the dependencies for the hello-spot example:

pip install -r ./python/examples/hello_spot/requirements.txt

This same process can be used for every example that you want to run. You do need an internet connection to do this, so it's a good idea to install dependencies you're likely to need now to save reconnecting later.

Hello Spot

To understand how to connect to the robot, the Hello Spot example is a great place to start. Let's take a look at how it works! The code we're looking at is the hello_spot.py file. In addition, we can look up any concepts in the Boston Dynamics Concepts documentation - the code provides a great overview, with the documentation going into more detail about each part.

The interesting parts are all in the hello_spot function. I'll go through a few lines and describe what they're doing. First, a minor but useful step: setting up logging.

bosdyn.client.util.setup_logging(config.verbose)

This first line is using the Python logging module, as described in the code comments. It's using the verbose argument set in config so you can change how verbose the logging is when you start the script.

The next step is to create the standard SDK object. This is the object we use to interact with the rest of the SDK - for example, to create robot objects.

sdk = bosdyn.client.create_standard_sdk('HelloSpotClient')

The SDK object is used to create the robot object, which is a Python object representing the robot that we can interact with. This is where the host name is used to contact the robot, which is one of the command line parameters. Get this wrong, or be connected to the wrong network so that you can't contact the robot, and this stage will fail.

robot = sdk.create_robot(config.hostname)

The next step is to authenticate with the robot using the username and password. The SDK provides a utility method which prompts the user (you) for the username and password to connect to the robot.

bosdyn.client.util.authenticate(robot)

Once authenticated, we need to synchronize time with the robot. This is because the robot has its own idea of the current time and will refuse commands sent with a time difference that's too large. For more information here, see the concepts page on time-sync in the documentation.

robot.time_sync.wait_for_sync()

The example then asserts that the robot is not estopped (i.e. emergency stopped). To be able to drive the robot, someone must be able to press the estop. This can be done using the tablet with the Spot app or the estop example from the SDK. If someone is able to press the estop, and the estop has not already been pressed, then execution can continue.

assert not robot.is_estopped(), 'assertion message'

The comments explain the concept of leasing very well, so I'll include the relevant comments:

# Only one client at a time can operate a robot. Clients acquire a lease to
# indicate that they want to control a robot. Acquiring may fail if another
# client is currently controlling the robot. When the client is done
# controlling the robot, it should return the lease so other clients can
# control it. The LeaseKeepAlive object takes care of acquiring and returning
# the lease for us.
lease_client = robot.ensure_client(bosdyn.client.lease.LeaseClient.default_service_name)
with bosdyn.client.lease.LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):

The rest of the code stays within the with block, meaning that it can access the robot and move it around. From there, it is possible to give the robot movement commands, including being able to stand up.

This is the basic pattern for all robot scripts that involve movement:

  1. Create SDK object
  2. Create robot object from SDK
  3. Authenticate with robot
  4. Synchronize time with robot
  5. Check estop
  6. Obtain robot lease
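Stitched together, a minimal sketch of that pattern looks like the snippet below - every call here appears in the walkthrough above, and the client name and assertion message are arbitrary.

import bosdyn.client
import bosdyn.client.lease
import bosdyn.client.util

def connect_and_lease(hostname):
    # 1. Create the SDK object
    sdk = bosdyn.client.create_standard_sdk('MinimalExampleClient')
    # 2. Create the robot object from the SDK
    robot = sdk.create_robot(hostname)
    # 3. Authenticate with the robot (prompts for username and password)
    bosdyn.client.util.authenticate(robot)
    # 4. Synchronize time with the robot
    robot.time_sync.wait_for_sync()
    # 5. Check that the robot is not estopped
    assert not robot.is_estopped(), 'Robot is estopped - release the estop and try again.'
    # 6. Obtain the lease and keep it alive while controlling the robot
    lease_client = robot.ensure_client(bosdyn.client.lease.LeaseClient.default_service_name)
    with bosdyn.client.lease.LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
        # Movement commands go here
        pass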

Message Format and Request/Response

The way that our application sends messages to the Spot robot is by building Protocol Buffers (aka protobuf) messages and sending them using gRPC. For the most part, we don't need to understand this, as the SDK does a good job of hiding the messaging behind SDK methods - but once we get to the more advanced examples, understanding how protobuf and gRPC work can help understand the code.

We can use protobuf to define the structure of a message, then generate the code for serializing and deserializing an instance of that message to bytes. The SDK provides the messages and the pre-generated methods for serializing and deserializing those messages for us.

At the same time, we use gRPC to send and receive messages. You can think of this as similar to a ROS service call, which also has a request/response flow. For the Spot SDK, there are defined services available, which we can interact with by using the relevant Request and Response messages.

Take the arm_joint_move example, which has a method to build an arm movement command:

def make_robot_command(arm_joint_traj):
    """ Helper function to create a RobotCommand from an ArmJointTrajectory.
    The returned command will be a SynchronizedCommand with an ArmJointMoveCommand
    filled out to follow the passed in trajectory. """

    joint_move_command = arm_command_pb2.ArmJointMoveCommand.Request(trajectory=arm_joint_traj)
    arm_command = arm_command_pb2.ArmCommand.Request(arm_joint_move_command=joint_move_command)
    sync_arm = synchronized_command_pb2.SynchronizedCommand.Request(arm_command=arm_command)
    arm_sync_robot_cmd = robot_command_pb2.RobotCommand(synchronized_command=sync_arm)
    return RobotCommandBuilder.build_synchro_command(arm_sync_robot_cmd)

We can see in this code that we construct an arm_command_pb2.ArmCommand.Request, so we can reasonably expect an arm_command_pb2.ArmCommand.Response in response. We can also see that the SDK provides a RobotCommandBuilder to help us build messages for particular services.

This flow appears in many places that don't have specific SDK methods. Anywhere you see _pb2 in the example code is generated by protobuf. Have fun reading through the samples!

Examples

As written earlier in this post, it's difficult to show the working examples in a blog post! Please do take a look at the video if you want to see the following samples in action:

  1. hello_spot
  2. estop
  3. graph_nav_command_line
  4. graph_nav_view_map
  5. spot_detect_and_follow

Summary

In short, that's the basics of interacting with Spot using Python! From there, it's up to you what you can get the robot to do. In the future, I'm eager to get my robot connected to AWS to show how the cloud can provide value for it - maybe I can control it using a Lambda function in the cloud!

· 11 min read
Michael Hart

This post gives you my five top tips on how to stand out as a Software Engineer. These are tips that will help at any career level, not just when you're starting out.

If you prefer a video format, check out my YouTube video below:

Knowing When to Ask Questions

My first tip is knowing when to ask questions. The phrasing of this title sounds like you need to ask fewer questions, but most likely you need to ask more questions. The truth is that when you start on a new team, that team is expecting you to ask a lot of questions. This is especially true when you're just starting out, so my advice to you is this:

  1. When you need to know something that you can't find out online, don't waste your time. Ask another team member straight away.
  2. When you need to know something that you could possibly find online, try setting yourself a time limit before asking for help. Give it 15-30 minutes, try and work through it, then find a team member to take a look with you.

This will give you a good balance between feeling like you're pestering people and taking up all their time, and being able to actually complete your work. The last thing your new team members want is for you to sit there wasting hours or even days on something they could have helped you with in two minutes.

Example - Ask Straight Away

You want to find some documentation for your project. It's very unlikely you could find this information by yourself, so it's best to find a team member to ask for help.

Example - Wait, Then Ask

You've changed something in the code and it won't compile any more. This is something you could probably figure out by yourself given enough time, so set a timer for 15 minutes, then try to work through it. If the timer goes off, and you haven't made any progress, find someone who can help.

Taking Responsibility

My second tip is to take responsibility, and there are two ways of taking responsibility that I'm talking about: first, taking responsibility for tasks that you don't normally do as part of your work; second, taking responsibility when you make a mistake.

Volunteering for Tasks

As far as tasks outside your normal work goes, it's very common for your manager or your team to have a task come up that needs to be completed, but doesn't naturally fall to a particular person. Chances are, your team would prefer one person to be responsible for it and drive it to completion. If that's something you could do, but is outside of your normal work area, it's a great idea to consider taking it on. It's a way that you can stand out as an engineer, learn something new, and grow in your career.

Example - Running a Hackathon

I had a teammate who wanted to participate in the Hackathon, but when I encouraged them to try and organise it for themself, they weren't willing to take on that responsibility. Instead, I took on the task: I arranged it, chose the theme for it, and made sure it went ahead. Now, I'm much more prepared to run other Hackathons in the future.

Owning Your Mistakes

The second way that you should take responsibility is when you've made a mistake. Especially when you're starting out, but all throughout your career, you can and will make mistakes. The best thing that you can do is learn from them and try to make sure they don't happen again.

The best response you can give if you get called out in a meeting for something you've done wrong is to avoid getting upset, and to simply say, "yes, I made a mistake there, and here's what I'm going to do to stop it happening again."

That could be something you do differently, or it could be a process that you or your team put in place to make sure that no one will make that mistake again.

Example - Not Paying Attention

I took part in an informational meeting with a lot of distractions in the house. I couldn't pay full attention, I wasn't able to ask questions at the end, and I didn't even realise how distracted I was until both my manager and one of my colleagues commented on it. I realised how disrespectful it was to the presenter at the time. To this day, I make sure that there are as few distractions in the house as possible when I'm attending a meeting, out of respect for other people's time.

Actively Pursue Advancement

My third tip for you is to actively pursue advancement. By that, I mean you need to go after what you want for your next career step.

In my experience, many people are content to receive tasks from their team, and do their work well and on time, but that's not the way that you can grow your career the fastest. The best thing that you can do is have an honest conversation with your manager about where you want to be. Is there more responsibility that you want to take on? Do you want a raise, or a promotion? These are things that you need to bring attention to if you want to make them happen and you need to make them happen yourself.

To understand this better, try to think of it from your manager's point of view - or their manager's point of view. They have teams to manage, projects to get out on time, and customers they need to talk to and keep happy; how much of their attention do you think is solely on you? The answer is probably not that much, which is why you need to bring their attention onto you. You need to make it happen, and that's what this conversation would do.

Talk to your manager, tell them what you need, and ask them for feedback so you understand exactly where you are and what weaknesses you need to work on in order to progress.

Example - Asking for a Raise

My most recent example of this was when I was working on a team and I felt like I was taking on more responsibility than my level required - even acting in a team lead role. I had a conversation with my manager and asked him for a raise. Not only did he respond positively to this, he actually helped me to get promoted instead so that I became the official team lead. It was a benefit to me and a benefit to him because it showed how he was growing his team.

Document Your Wins

My fourth tip is to document your wins, by which I mean writing down what you're doing as you're doing it, including any wins that you have in that process.

You can start this by taking notes every day of what you're doing. Open a note with the date and your responsibilities for the day, then log what you do during the day.

Daily Note Template

I've configured my note-taking software Obsidian to automatically open this every day.

This is a great way to keep a log and look back on how you fixed something in the past, but it will also help you with the next step: periodically updating a document that contains all of your wins.

Daily Notes List

My list of daily notes since starting digital notes.

My recommendation for this is something called a brag doc. I use a modified version of the document suggested by Julia Evans. To use this document effectively, set time aside every 2-4 weeks and update the document with what you've been doing and what has gone well. Use the daily notes you've been taking to help supplement this. By doing it bit by bit, it's a lot easier to keep track of what you've done over a long period of time. You'll also have a great body of evidence if you need to pursue advancement; you can show evidence that you've been working above your level. Bonus points if you can write some sort of data down - numbers are even more convincing than quotes when you're trying to prove something.

The next step after this is to use that brag doc to keep your resume, CV, or LinkedIn profile up to date. This is another point where it's much easier to do it little by little over time so it's always up to date, instead of making one large effort when you need it.

Example - Resource Feedback Spreadsheet

For example, I have a spreadsheet that keeps track of everyone from the company who has reached out to me to say something about the resources I put online. This is a great way for me to figure out which resources are best and what's been most helpful - plus, if I need to prove that what I've done has been helpful, I have all the evidence right there.

Remember to set that time aside for writing your daily notes and updating your brag doc and LinkedIn profile. Don't dismiss this - keeping my LinkedIn profile up to date was what landed me my job at Amazon in the first place!

Know Your Worth

My fifth and final tip is to know your worth. Getting into software engineering is no easy feat. It takes a lot of training and technical knowledge, so getting where you are is already a battle - not to mention any experience that you can get on top of that.

You've earned the right to be confident. You should be confident in your statements and your decisions while being prepared to learn from your mistakes. Even if you don't feel confident from your amount of experience, I advise you to act like you're confident. Enough time acting like you're confident and you will eventually feel that confidence.

Example - High-Level Meeting

My most recent example of this was taking part in a meeting with leaders several levels above me. I was nervous and didn't want to speak in case I didn't sound like I knew what I was talking about. I spent a lot of the meeting sitting and taking notes, distilling what I had heard down right up until the point where I made some realisations. When I eventually spoke up about them, the leaders listened to me and the conversation took a whole different direction.

Another part of knowing your worth is being aware of what other options you have. Keep an eye on your career field and see what other job opportunities there are, as well as the kind of salary your job normally makes. It's a great idea to know what's out there - either you'll find something that you think is more exciting and is a better opportunity, or you can be satisfied that where you are is the best place for you.

Putting the Tips into Action

So there you have my top five tips on how to stand out as a software engineer.

Some things you can do periodically, like updating your brag doc or your LinkedIn profile. You can start straight away by opening up a daily note, writing the date, and starting to take notes. You can also make an empty brag doc, ready to start filling in your first entries.

Another way you can get started is by arranging a talk with your manager, where you can have an honest conversation about where you want to be and what kind of feedback your manager can provide you. Arrange the meeting, go in knowing what you want, and write down the result of the meeting so you can look back on it in your log.

The last part is interacting with your team. Start acting with more confidence around your team, take responsibility for something that's outside of your comfort zone, and take responsibility for your mistakes as they happen. A few good examples of tasks you can take responsibility for are being the Scrum Master for your team, writing up documentation that is currently missing, or leading a meeting that no one has volunteered for.

Any of these options will help you to get started and to grow in your career. Good luck standing out as a software engineer!

· 13 min read
Michael Hart

This post is about how to build an AWS Step Functions state machine and how you can use it to interact with IoT edge devices. In this case, we are sending a smoothie order to a "robot" and waiting for it to make that smoothie.

The state machine works by chaining together a series of Lambda functions and defining how data should be passed between them (if you're not sure about Lambda functions, take a look at this blog post!). There's also a step where the state machine needs to wait for the smoothie to be made, which is slightly more complicated - we'll cover that later in this post.

This post is also available in video form - check the video link below if you want to follow along!

AWS Step Functions Service

AWS Step Functions is an AWS service that allows users to build serverless workflows. Serverless came up in my post on Lambda functions - it means that you can run applications in the cloud without provisioning any servers or constantly-running resources. That in turn means you only pay for the time that something is executing in the cloud, which is often much cheaper than provisioning a server, but with the same performance.

To demonstrate Step Functions, we're building a state machine that accepts smoothie orders from customers and sends them to an available robot to make that smoothie. Our state machine will look for an available robot, send it the order, and wait for the order to complete. The state machine will be built in AWS Step Functions, which we can access using the console.

State Machine Visual Representation

First, we'll look at the finished state machine to get an idea of how it works. Clicking the edit button within the state machine will open the workflow Design tab for a visual representation of the state machine:

Visual representation of Step Functions State Machine

Each box in the diagram is a stage of the Step Functions state machine. Most of the stages are Lambda functions, which are configured to interface with AWS resources. For example, the first stage (GetRobot) scans a DynamoDB table for the first robot with the ONLINE status, meaning that it is ready for work.

If at least one robot is available, GetRobot will pass its name to the next stage - SetRobotWorking. This function updates that robot's entry in the DynamoDB table to WORKING, so future invocations don't try to give that robot another smoothie order.
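The actual Lambda functions in the repository are written in Rust (more on that below); purely as an illustration of the logic, a Python/boto3 sketch of these first two stages might look like the following. The table and attribute names are assumptions based on the stage payloads.

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("RobotTable")  # hypothetical table name

def get_available_robot():
    # GetRobot: find the first robot whose status is ONLINE
    response = table.scan(FilterExpression=Attr("Status").eq("ONLINE"))
    items = response.get("Items", [])
    return items[0]["RobotName"] if items else None

def set_robot_status(robot_name, status):
    # SetRobotWorking (and the later status updates): write the new status back
    table.update_item(
        Key={"RobotName": robot_name},
        UpdateExpression="SET #s = :s",
        ExpressionAttributeNames={"#s": "Status"},
        ExpressionAttributeValues={":s": status},
    )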

From there, the robot name is again passed on to TellRobotOrder, which is responsible for sending an MQTT message via AWS IoT Core to tell the robot its new smoothie order. This is where the state machine gets slightly more complicated - we need the state machine to pause and wait for the smoothie to be made.

Activities

While we're waiting for the smoothie to be made, we could have the Lambda function wait for a response, but we would be paying for the entire time that function is sitting and waiting. If the smoothie takes 5 minutes to complete, that would be over 6000x the price!

Instead, we can use the Activities feature of Step Functions to allow the state machine to wait at no extra cost. The system follows this setup:

IoT Rule to Robot Diagram

When the state machine sends the smoothie order to the robot, it includes a generated task token. The robot then makes the smoothie, and when it is finished, publishes a message saying it was successful with that same task token. An IoT Rule forwards that message to another Lambda function, which tells the state machine that the task was a success. Finally, the state machine updates the robot's status back to ONLINE, so it can receive more orders, and the state machine completes successfully.
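For reference, the core of that "task success" Lambda function is a single Step Functions API call. The real handler in the repository is written in Rust; the Python sketch below only illustrates the shape of it, and the event field names are assumptions based on the MQTT payload.

import json
import boto3

stepfunctions = boto3.client("stepfunctions")

def handler(event, context):
    # The IoT Rule passes along the robot's MQTT message, which includes the
    # task token originally generated by the state machine.
    stepfunctions.send_task_success(
        taskToken=event["TaskToken"],
        output=json.dumps({"RobotName": event.get("RobotName")}),
    )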

Why go through Lambda and IoT Core?

The robot could directly call the Task Success API, but we would need to give it permission to do so - as well as a direct internet connection. This version of the system means that the robot only ever communicates using MQTT messages via AWS IoT Core. See my video on AWS IoT Core to see how to set this up.

Testing the Smoothie State Machine

To test the state machine, we start with a table with two robots, both with ONLINE status. If you follow the setup instructions in the README, your table will have these entries:

Robots with ONLINE state

Successful Execution

If we now request any kind of smoothie using the test_stepfunction.sh script, we start an execution of the state machine. It will find that Robot1 is free to perform the function and update its status to WORKING:

Robot1 with WORKING state

Then it will send an MQTT message requesting the smoothie. After a few seconds, the mock robot script will respond with a success message. We can see this in the MQTT test client:

MQTT Test Client showing order and success messages

This allows the state machine to finish its execution successfully:

Successful step function execution

If we click on the execution, we can see the successful path lit up in green:

State machine diagram with successful states

Smoothie Complete!

We've made our first fake smoothie! Now we should make sure we can handle errors that happen during smoothie making.

Robot Issue during Execution

What happens if there is an issue with the robot? Here we can use error handling in Step Functions. We define a timeout on the smoothie making task, and if that timeout is reached before the task is successful, we catch the error - in this case, we update the robot's state to BROKEN and fail that state machine's execution.

To test this, we can kill the mock robot script, which simulates all robots being offline. In this case, running the test_stepfunction.sh will request the smoothie from Robot1, but will then time out after 10 seconds. This then updates the robot's state to BROKEN, ensuring that future executions do not request smoothies from Robot1.

Robot Status shown as BROKEN

The overall state execution also fails, allowing us to alert the customer of the failure:

Execution fails from time out

We can also see what happened to cause the failure by clicking on the execution and scrolling to the diagram:

State Machine diagram of timeout failure

Another execution will have the same effect for Robot2, leaving us with no available robots.

No Available Robots

If we never add robots into the table, or all of our robots are BROKEN or WORKING, we won't have a robot to make a smoothie order. That means our state machine will fail at the first step - getting an available robot:

State Machine diagram with no robots available

That's our state machine defined and tested. In the next section, we'll take a look at how it's built.

Building a State Machine

To build the Step Functions state machine, we have a few options, but I would recommend using CDK for the definition and the visual designer in the console for prototyping. If you're not sure what the benefits of using CDK are, I invite you to watch my video on the benefits, where I discuss how to use CDK with SiteWise:

The workflow goes something like this:

  1. Make a base state machine with functions and AWS resources using CDK
  2. Use the visual designer to prototype and build the stages of the state machine up further
  3. Define the stages back in the CDK code to make the state machine reproducible and recover from any breaking changes made in the previous step

Once complete, you should be able to deploy the CDK stack to any AWS account and have a fully working serverless application! To make this step simpler, I've uploaded my CDK code to a Github repository. Setup instructions are in the README, so I'll leave them out of this post. Instead, we'll break down some of the code in the repository to see how it forms the full application.

CDK Stack

This time, I've split the CDK stack into multiple files to make the dependencies and interactions clearer. In this case, the main stack is at lib/cdk-stack.ts, and refers to the four components:

  1. RobotTable - the DynamoDB table containing robot names and statuses
  2. Functions - the Lambda functions with the application logic, used to interact with other AWS services
  3. IoTRules - the IoT Rule used to forward the MQTT message from a successful smoothie order back to the Step Function
  4. SmoothieOrderHandler - the definition of the state machine itself, referring to the Lambda functions in the Functions construct

We can take a look at each of these in turn to understand how they work.

RobotTable

This construct is simple; it defines a DynamoDB table where the name of the robot is the primary key. The table will be filled by a script after stack deployment, so this is as much as is needed. Once filled, the table will have the same contents as shown in the testing section.

Functions

This construct defines four Lambda functions. All four are written using Rust to minimize the execution time - the benefits are discussed more in my blog post on Lambda functions. Each handler function is responsible for one small task to show how the state machine can pass data around.

Combining Functions

We could simplify the state machine by combining functions together, or using Step Functions to call AWS services directly. I'll leave it to you to figure out how to simplify the state machine!

The functions are as follows:

  1. Get Available Robot - scans the DynamoDB table to find the first robot with ONLINE status. Requires the table name as an environment variable, and permission to read the table.
  2. Update Status - updates the robot name to the given status in the DynamoDB table. Also requires the table name as an environment variable, and permission to write to the table.
  3. Send MQTT - sends a smoothie order to the given robot name. Requires IoT data permissions to connect to IoT Core and publish a message.
  4. Send Task Success - called by an IoT Rule when a robot publishes that it has successfully finished a smoothie. Requires permission to send the task success message to the state machine, which has to be done after the state machine is defined, hence updating the permission in a separate function.

IoT Rules

This construct defines an IoT Rule that listens on topic filter robots/+/success for any messages, then pulls out the contents of the MQTT message and calls the Send Task Success Lambda function. The only additional permission it needs is to call a Lambda function, so it can call the Send Task Success function.

Smoothie Order Handler

This construct pulls all the Lambda functions together into our state machine. Each stage corresponds to one of the stages in the State Machine Visual Representation section.

The actual state machine is defined as a chain of functions:

const orderDef =
  getAvailableRobot
    .next(setRobotWorking)
    .next(tellRobotOrder
      .addCatch(setRobotBroken.next(finishFailure),
        {
          errors: [step.Errors.TIMEOUT],
          resultPath: step.JsonPath.DISCARD,
        })
    )
    .next(setRobotFinished)
    .next(finishSuccess);

Defining each stage as a constant, then chaining them together, allows us to see the logic of the state machine more easily. However, it does hide the information that is being passed between stages - Step Functions will store metadata while executing and pass the output of one function to the next. We don't always want to pass the output of one function directly to another, so we define how to modify the data for each stage.

For example, the Get Robot function looks up a robot name, so the entire output payload should be saved for the next function:

const getAvailableRobot = new steptasks.LambdaInvoke(this, 'GetRobot', {
  lambdaFunction: functions.getAvailableRobotFunction,
  outputPath: "$.Payload",
});

However, the Set Robot Working stage does not produce any relevant output for future stages, so its output can be discarded. Also, it needs a new Status field defined for the function to work, so the payload is defined in the stage. To set one of the fields based on the output of the previous function, we use .$ to tell Step Functions to fill it in automatically. Hence, the result is:

const setRobotWorking = new steptasks.LambdaInvoke(this, 'SetRobotWorking', {
  lambdaFunction: functions.updateStatusFunction,
  payload: step.TaskInput.fromObject({
    "RobotName.$": "$.RobotName",
    "Status": "WORKING",
  }),
  resultPath: step.JsonPath.DISCARD,
});

Another interesting thing to see in this construct is how to define a stage that waits for a task to complete before continuing. This is done by changing the integration pattern, plus passing the task token to the task handler - in this case, our mock robot. The definition is as follows:

const tellRobotOrder = new steptasks.LambdaInvoke(this, 'TellRobotOrder', {
  lambdaFunction: functions.sendMqttFunction,
  // Define the task token integration pattern
  integrationPattern: step.IntegrationPattern.WAIT_FOR_TASK_TOKEN,
  // Define the task timeout
  taskTimeout: step.Timeout.duration(cdk.Duration.seconds(10)),
  payload: step.TaskInput.fromObject({
    // Pass the task token to the task handler
    "TaskToken": step.JsonPath.taskToken,
    "RobotName.$": "$.RobotName",
    "SmoothieName.$": "$.SmoothieName",
  }),
  resultPath: step.JsonPath.DISCARD,
});

This tells the state machine to generate a task token and give it to the Lambda function as defined, then wait for a task success signal before continuing. We can also define a catch route in case the task times out, which is done using the addCatch function:

.addCatch(setRobotBroken.next(finishFailure),
  {
    errors: [step.Errors.TIMEOUT],
    resultPath: step.JsonPath.DISCARD,
  })

With that, we've seen how the state machine is built, seen how it runs, and seen how to completely define it in CDK code.

Challenge!

Do you want to test your understanding? Here are a couple of challenges for you to extend this example:

  1. Retry making the smoothie! If a robot times out making the smoothie, just cancelling the order is not a good customer experience - ideally, the system should give the order to another robot instead. See if you can set up a retry path from the BROKEN robot status update back to the start of the state machine.
  2. Add a queue to the input! At present, if we have more orders than robots, the later orders will simply fail immediately. Try adding a queue that starts executing the state machine using Amazon Simple Queue Service (SQS).

Summary

Step Functions can be used to build serverless applications as state machines that call other AWS resources. In particular, a powerful combination is Step Functions with AWS Lambda functions for the application logic.

We can use other serverless AWS resources to access more cloud functionality or interface with edge devices. In this case, we use MQTT messages via IoT Core to message robots with smoothie orders, then listen for the responses to those messages to continue execution. We also use a DynamoDB table - a serverless database - to store each robot's current status as the state machine executes.

Best of all, this serverless application runs in the cloud, giving us all of the advantages of running using AWS - excellent logging and monitoring, fine-grained permissions, and modifying the application on demand, to name a few!

· 17 min read
Michael Hart

This is the second part of the "ROS2 Control with the JetBot" series, where I show you how to get a JetBot working with ROS2 Control! This is a sequel to the part 1 blog post, where I showed how to drive the JetBot's motors using I2C and PWM with code written in C++.

In this post, I show the next step in making ROS2 Control work with the WaveShare JetBot - wrapping the motor control code in a System. I'll walk through some concepts, show the example repository for ROS2 Control implementations, and then show how to implement the System for JetBot and see it running.

This post is also available in video form - check the video link below if you want to follow along!

ROS2 Control Concepts

First, before talking about any of these concepts, there's an important distinction to make: ROS Control and ROS2 Control are different frameworks, and are not compatible with one another. This post is focused on ROS2 Control - or as their documentation calls it, ros2_control.

ros2_control's purpose is to simplify integrating new hardware into ROS2. The central idea is to separate controllers from systems, actuators, and sensors. A controller is responsible for controlling the movement of a robot; an actuator is responsible for moving a particular joint, like a motor moving a wheel. There's a good reason for this separation: it allows us to write a controller for a wheel configuration, without knowing which specific motors are used to move the wheels.

Let's take an example: the Turtlebot and the JetBot are both driven using one wheel on each side and casters to keep the robots level. These are known as differential drive robots.

Turtlebot image with arrows noting wheels

Turtlebot 3 Burger image edited from Robotis

JetBot image with arrows noting wheels and caster

WaveShare JetBot AI Kit image edited from NVIDIA

As the motor configuration is the same, the mathematics for controlling them is also the same, which means we can write one controller to control either robot - assuming we can abstract away the code to move the motors.
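For reference, that shared mathematics is just the standard differential drive kinematics: with wheel radius r and wheel separation d, the robot's forward velocity is v = r * (ω_right + ω_left) / 2 and its turning rate is ω = r * (ω_right - ω_left) / d, where ω_left and ω_right are the wheel angular velocities. The controller only needs these relationships and the two wheel joints - it never needs to know how those wheels are actually driven.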

In fact, this is exactly what's provided by the ros2_controllers library. This library contains several standard controllers, including our differential drive controller. We could build a JetBot and a Turtlebot by setting up this standard controller to be able to move their motors - all we need to do is write the code for moving the motors when commanded to by the controller.

ros2_control also provides the controller manager, which is used to manage resources and activate/deactivate controllers, to allow for advanced functionality like switching between controllers. Our use case is simple, so we will only use it to activate the controller. This architecture is explained well in the ros2_control documentation - see the architecture page for more information.

This post shows how to perform this process for the JetBot. We're going to use the I2C and motor classes from the previous post in the series to define a ros2_control system that will work with the differential drive controller. We use a System rather than an Actuator because we want to define one class that can control both motors in one write call, instead of having two separate Actuators.

ROS2 Control Demos Repository

To help us with our ros2_control system implementation, the ros2_control framework has helpfully provided us with a set of examples. One of these examples is exactly what we want - building a differential drive robot (or diffbot, in the examples) with a custom System for driving the motors.

The repository has a great many examples available. If you're here to learn about ros2_control, but not to build a diffbot, there are examples of building simulations, building URDF files representing robots, externally connected sensors, and many more.

We will be using example 2 from this demo repository as a basis, but stripping out anything we don't require right now, like simulation support; we can add those parts back in later iterations as we come to understand them.

JetBot System Implementation

In this section, I'll take you through the key parts of my JetBot System implementation for ros2_control. The code is available on Github - remember that this repository will be updated over time, so select the tag jetbot-motors-pt2 to get the same code version as in this article!

Components are libraries, not nodes

ros2_control uses a different method of communication from the standard ROS2 publish/subscribe messaging. Instead, the controller will load the code for the motors as a plugin library, and directly call functions inside it. This is the reason we had to rewrite the motor driver in C++ - it has to be a library that can be loaded by ros2_control, which is written in C++.

Previously, we wrote an example node that spun the wheels using the motor driver; now we are replacing that executable with a library that can be loaded by ros2_control. In CMakeLists.txt, we can see:

add_library(${PROJECT_NAME}
  SHARED
  hardware/src/jetbot_system.cpp
  hardware/src/i2c_device.cpp
  hardware/src/motor.cpp
)

...

pluginlib_export_plugin_description_file(hardware_interface jetbot_control.xml)

These are the lines that build the JetBot code as a shared library instead of an executable, and export definitions that mark it as a valid plugin library for ros2_control to load. A new file, jetbot_control.xml, gives ros2_control the extra information it needs to load the library - in this case, the library name and the ros2_control plugin type (SystemInterface - we'll discuss this more in the Describing the JetBot section).

Code Deep Dive

For all of the concepts in ros2_control, the actual implementation of a System is quite simple. Our JetBotSystemHardware class extends the SystemInterface class:

class JetBotSystemHardware : public hardware_interface::SystemInterface {

In the private fields of the class, we create the fields that we will need during execution. This includes the I2CDevice and two Motor classes from the previous post, along with two vectors for the hardware commands and hardware velocities:

 private:
  std::vector<MotorPins> motor_pin_sets_;
  std::vector<Motor> motors_;
  std::shared_ptr<I2CDevice> i2c_device_;
  std::vector<double> hw_commands_;
  std::vector<double> hw_velocities_;

Then, a number of methods need to be overridden from the base class. Take a look at the full header file to see them, but essentially it boils down to three concepts:

  1. export_state_interfaces/export_command_interfaces: report the state and command interfaces supported by this system class. These interfaces can then be checked by the controller for compatibility.
  2. on_init/on_activate/on_deactivate: lifecycle methods automatically called by the controller. Different setup stages for the System occur in these methods, including enabling the motors in the on_activate method and stopping them in on_deactivate.
  3. read/write: methods called every controller update. read is for reading the velocities from the motors, and write is for writing requested speeds into the motors.

From these, we use the on_init method to:

  1. Initialize the base SystemInterface class
  2. Read the pin configuration used for connecting to the motors from the parameters
  3. Check that the provided hardware information matches the expected information - for example, that there are two velocity command interfaces
  4. Initialize the I2CDevice and Motors

This leaves the System initialized, but not yet activated. Once on_activate is called, the motors are enabled and ready to receive commands. The read and write methods are then called repeatedly to read from and write to the motors respectively. When it's time to shut down, on_deactivate stops the motors, and the destructors of the classes perform any required cleanup. There are more lifecycle states that could potentially be used for a more complex system - these are documented in the ros2 demos repository.

This System class, plus the I2CDevice and Motor classes, are compiled into the plugin library, ready to be loaded by the controller.

Describing the JetBot

The SystemInterface then comes into play when describing the robot. The description folder from the example contains the files that define the robot, including its ros2_control configuration, simulation configuration, and the materials used to represent it during simulation. As this implementation has been pared down to basics, only the ros2_control configuration and the mock hardware flag have been kept.

The jetbot.ros2_control.xacro file defines the ros2_control configuration needed to control the robot. It uses xacro files to define this configuration, where xacro is a tool that extends XML files by allowing us to define macros that can be referenced in other files:

<xacro:macro name="jetbot_ros2_control" params="name prefix use_mock_hardware">

In this case, we are defining a macro for the ros2_control part of the JetBot that can be used in the overall robot description.

We then define the ros2_control portion with type system:

<ros2_control name="${name}" type="system">

Inside this block, we give the path to the plugin library, along with the parameters needed to configure it. You may recognize the pin numbers in this section!

<hardware>
  <plugin>jetbot_control/JetBotSystemHardware</plugin>
  <param name="pin_enable_0">8</param>
  <param name="pin_pos_0">9</param>
  <param name="pin_neg_0">10</param>
  <param name="pin_enable_1">13</param>
  <param name="pin_pos_1">12</param>
  <param name="pin_neg_1">11</param>
</hardware>

This tells any controller loading our JetBot system hardware which pins are used to drive the PWM chip. But, we're not done yet - we also need to tell ros2_control the command and state interfaces available.

ros2_control Joints, Command Interfaces, and State Interfaces

ros2_control uses joints to understand what the movable parts of a robot are. In our case, we define one joint for each motor.

Each joint then defines a number of command and state interfaces. Each command interface accepts velocity, position, or effort commands, which allows ros2_control controllers to command the joints as they need. Each state interface reports a velocity, position, or effort measurement from the joint, which allows ros2_control to monitor how much the joint has actually moved and adjust accordingly. In our case, each joint accepts velocity commands and reports measured velocity - although we configure the controller to ignore the reported velocity, because we don't actually have a sensor like an encoder on the JetBot. This means we're using open loop control, as opposed to closed loop control.

<joint name="${prefix}left_wheel_joint">
  <command_interface name="velocity"/>
  <state_interface name="velocity"/>
</joint>

Closed loop control is far more accurate than open loop control. Imagine you're trying to sprint exactly 100 metres from a starting line, once blindfolded, and once with no blindfold and line markings every ten metres - which run is likely to be more accurate? The JetBot has no sensor to measure how much it has moved, so the robot is effectively blindfolded, guessing how far it has travelled. This means our navigation won't be as accurate - we are limited by the hardware.

JetBot Description

With the ros2_control part of the JetBot defined, we can import and use this macro in the overall JetBot definition. As we've stripped out all other definitions, such as simulation parameters, this forms the only part of the overall JetBot definition:

<xacro:include filename="$(find jetbot_control)/ros2_control/jetbot.ros2_control.xacro" />
<xacro:jetbot_ros2_control
name="JetBot" prefix="$(arg prefix)" use_mock_hardware="$(arg use_mock_hardware)"/>

Let's summarize what we've created so far:

  1. A plugin library capable of writing commands to the JetBot motors
  2. A ros2_control xacro file, describing the plugin to load and the parameters to give it
  3. One joint per motor, each with a velocity command and state interface
  4. An overall description file that imports the ros2_control file and calls the macro

Now when we use xacro to build the overall description file, it will import the ros2_control file macro and expand it, giving a complete robot description that we can add to later. It's now time to look at creating a controller manager and a differential drive controller.
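Before moving on, it can be handy to inspect the expanded description yourself, since the same xacro tool can be run outside the launch file. Here's a minimal sketch using xacro's Python API - the file path is just an example, so point it at wherever the description lives in your checkout:

import xacro

# Expand the top-level JetBot description and print the resulting XML.
# The path and argument value here are examples, not the package's exact layout.
doc = xacro.process_file(
    "urdf/jetbot.urdf.xacro",
    mappings={"use_mock_hardware": "false"},
)
print(doc.toprettyxml(indent="  "))

Printing the result is a quick way to check that the macro expanded with the pins and joints you expect before handing the description to the controller manager.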

Creating A Controller

So far, we've defined a JetBot using description files. Now we want to be able to launch ros2_control and tell it which controller to create, how to configure it, and how to load our defined JetBot. For this, we use the jetbot_controllers.yaml file.

We start with the controller_manager. This is used to load one or more controllers and swap between them, making sure that resources are only used by one controller at a time. In our case, we're only using it to load and run one controller:

controller_manager:
  ros__parameters:
    update_rate: 10 # Hz

    jetbot_base_controller:
      type: diff_drive_controller/DiffDriveController

We tell the manager to update at 10Hz and to load the diff_drive_controller/DiffDriveController controller. This is the standard differential drive controller discussed earlier. If we take a look at the information page, we can see a lot of configuration for it - we provide this configuration in the same file.

We define that the controller is open loop, as there is no feedback. We give the names of the joints for the controller to control - this is how the controller knows it can send velocities to the two wheels implemented by our system class. We also set velocity limits on both linear and angular movement:

linear.x.max_velocity: 0.016
linear.x.min_velocity: -0.016
angular.z.max_velocity: 0.25
angular.z.min_velocity: -0.25

These numbers were obtained through experimentation! ros2_control operates using target velocities specified in radians per second [source]. However, the velocity we send to the motors doesn't correspond to radians per second - the range of -1 to +1 maps from the minimum to the maximum velocity of the motors, which changes with the battery level of the robot. The values given here move the robot at a reasonable pace.

Finally, we supply the wheel separation and radius, specified in metres. I measured these from my own robot. The separation is the minimum distance between the wheels, and the radius is measured from the centre of a wheel to its outer edge:

wheel_separation: 0.104
wheel_radius: 0.032
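These two measurements are what the controller uses to turn a body velocity command into individual wheel velocities. As a rough sketch of the underlying differential drive maths (not the controller's actual implementation), the conversion looks like this:

# Standard differential drive kinematics, using the JetBot measurements above.
WHEEL_SEPARATION = 0.104  # metres
WHEEL_RADIUS = 0.032      # metres

def wheel_velocities(linear_x, angular_z):
    """Convert a body twist (m/s, rad/s) into left/right wheel speeds in rad/s."""
    left = (linear_x - angular_z * WHEEL_SEPARATION / 2.0) / WHEEL_RADIUS
    right = (linear_x + angular_z * WHEEL_SEPARATION / 2.0) / WHEEL_RADIUS
    return left, right

# Driving straight ahead at the configured maximum of 0.016 m/s
# asks both wheels for 0.5 rad/s.
print(wheel_velocities(0.016, 0.0))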

With this, we have described how to configure a controller manager with a differential drive controller to control our JetBot!

Launching the Controller

The last step here is to provide a launch script to bring everything up. The example again provides us with a launch script, including an argument that lets us launch with mock hardware if we want - this is great for testing that everything loads correctly on a system that doesn't have the real hardware.

The launch script goes through a few steps to get to the full ros2_control system, starting with loading the robot description. We specify the path to the description file relative to the package, and use the xacro tool to generate the full XML for us:

# Get URDF via xacro
robot_description_content = Command(
    [
        PathJoinSubstitution([FindExecutable(name="xacro")]),
        " ",
        PathJoinSubstitution(
            [FindPackageShare("jetbot_control"), "urdf", "jetbot.urdf.xacro"]
        ),
        " ",
        "use_mock_hardware:=",
        use_mock_hardware,
    ]
)
robot_description = {"robot_description": robot_description_content}

Following this, we load the jetbot controller configuration:

robot_controllers = PathJoinSubstitution(
    [
        FindPackageShare("jetbot_control"),
        "config",
        "jetbot_controllers.yaml",
    ]
)

With the robot description and the robot controller configuration loaded, we can pass these to the controller manager:

control_node = Node(
    package="controller_manager",
    executable="ros2_control_node",
    parameters=[robot_description, robot_controllers],
    output="both",
)

Finally, we ask the launched controller manager to start up the jetbot_base_controller:

robot_controller_spawner = Node(
    package="controller_manager",
    executable="spawner",
    arguments=[
        "jetbot_base_controller",
        "--controller-manager",
        "/controller_manager",
    ],
)
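All of these pieces are then collected and returned by the launch file's generate_launch_description function. As a minimal sketch of that final step (the real file also declares launch arguments such as use_mock_hardware, so treat this as illustrative):

from launch import LaunchDescription

def generate_launch_description():
    # control_node and robot_controller_spawner are the Node objects defined
    # above; the real launch file also declares its arguments before returning.
    return LaunchDescription([control_node, robot_controller_spawner])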

All that remains is to build the package and launch the new launch file!

ros2_control Launch Execution

This article has been written from the bottom up, but now that we have the full story, we can look at it from the top down:

  1. We launch the JetBot launch file defined in the package
  2. The launch file spawns the controller manager, which is used to load controllers and manage resources
  3. The launch file requests that the controller manager launches the differential drive controller
  4. The differential drive controller loads the JetBot System as a plugin library
  5. The System connects to the I2C bus, and hence, the motors
  6. The controller can then command the System to move the motors as requested by ROS2 messaging
success

Hooray! We have defined everything we need to launch ros2_control and configure it to control our JetBot! Now we have a controller that is able to move our robot around.

Running on the JetBot

To try the package out, we first need a working JetBot. If you're not sure how to do the initial setup, I've created a video on exactly that:

With the JetBot working, we can create a workspace and clone the code into it. Use VSCode over SSH to execute the following commands:

mkdir ~/dev_ws
cd ~/dev_ws
git clone https://github.com/mikelikesrobots/jetbot-ros-control -b jetbot-motors-pt2
cp -r ./jetbot-ros-control/.devcontainer .

Then use the Dev Containers plugin to rebuild and reload the container. This will take a few minutes, but the step is crucial to allow us to run ROS2 Humble on the JetBot, which uses an older version of Ubuntu. Once complete, we can build the workspace, source it, and launch the controller:

source /opt/ros/humble/setup.bash
colcon build --symlink-install
source install/setup.bash
ros2 launch jetbot_control jetbot.launch.py

This should launch the controller and allow it to connect to the motors successfully. Now we can use teleop_twist_keyboard to test it - but with a couple of changes.

First, messages now need to go to the /jetbot_base_controller/cmd_vel topic instead of the previous /cmd_vel topic. We can fix that by asking teleop_twist_keyboard to remap the topic it normally publishes to.

Secondly, we normally expect /cmd_vel to accept Twist messages, but the controller expects TwistStamped messages. There is a parameter for teleop_twist_keyboard that turns its messages into TwistStamped messages, but while trying it out I found that the node ignored that parameter. Checking it out from source fixed it for me, so in order to run the keyboard test, I recommend building and running from source:

git clone https://github.com/ros2/teleop_twist_keyboard
colcon build --symlink-install
source install/setup.bash
ros2 run teleop_twist_keyboard teleop_twist_keyboard \
--ros-args \
-p stamped:=true \
-r /cmd_vel:=/jetbot_base_controller/cmd_vel

Once running, you should be able to use the standard keyboard controls written on screen to move the robot around. Cool!

Let's do one more experiment, to see how the configuration works. Go into the jetbot_controllers.yaml file and play with the maximum velocity and acceleration fields, to see how the robot reacts. Relaunch after every configuration change to see the result. You can also tune these parameters to match what you expect more closely.

That's all for this stage - we have successfully integrated our JetBot's motors into a ros2_control System interface!

Next Steps

Having this setup gives us a couple of options going forwards.

First, we stripped out a lot of configuration that supported simulation - we could add this back in to support Gazebo simulation, where the robot in the simulation should act nearly identically to the real life robot. This allows us to start developing robotics applications purely in simulation, which is likely to be faster due to the reset speed of the simulation, lack of hardware requirements, and so on.

Second, we could start running a navigation stack that can move the robot for us; for example, we could request that the robot reaches an end point, and the navigation system will plan a path to take the robot to that point, and even face the right direction.

Stay tuned for more posts in this series, where we will explore one or both of these options, now that we have the robot integrated into ROS2 using ros2_control.

· 14 min read
Michael Hart

This post shows how to build two simple functions, running in the cloud, using AWS Lambda. The purpose of these functions is the same - to update the status of a given robot name in a database, allowing us to view the current statuses in the database or build tools on top of it. This is one way we could coordinate robots in one or more fleets - using the cloud to store the state and run the logic to co-ordinate those robots.

This post is also available in video form - check the video link below if you want to follow along!

What is AWS Lambda?

AWS Lambda is a service for executing serverless functions. That means you don't need to provision any virtual machines or clusters in the cloud - just trigger the Lambda with some kind of event, and your pre-built function will run. It runs on inputs from the event and could give you some outputs, make changes in the cloud (like database modifications), or both.

AWS Lambda charges based on the time taken to execute the function and the memory assigned to the function. The compute power available for a function scales with the memory assigned to it. We will explore this later in the post by comparing the memory and execution time of two Lambda functions.

In short, AWS Lambda allows you to build and upload functions that will execute in the cloud when triggered by configured events. Take a look at the documentation if you'd like to learn more about the service!

How does that help with robot co-ordination?

Moving from one robot to multiple robots helping with the same task means that you will need a central system to co-ordinate between them. The system may distribute orders to different robots, tell them to go and recharge their batteries, or alert a user when something goes wrong.

This central service can run anywhere that the robots are able to communicate with it - on one of the robots, on a server near the robots, or in the cloud. If you want to avoid standing up and maintaining a server that is constantly online and reachable, the cloud is an excellent choice, and AWS Lambda is a great way to run function code as part of this central system.

Let's take an example: you have built a prototype robot booth for serving drinks. Customers can place an order at a terminal next to the robot and have their drink made. Now that your booth is working, you want to add more booths with robots and distribute orders among them. That means your next step is to add two new features:

  1. Customers should be able to place orders online through a digital portal or webapp.
  2. Any order should be dispatched to any available robot at the given location, and the customer should be alerted when their order is complete.

Suddenly, you have gone from one robot capable of accepting orders through a terminal to needing a central database and ordering system. Not only that, but if you want to deploy to a new location, having a single server per site makes it harder to route online orders to the right place. One central system in the cloud to manage the orders and robots is perfect for this use case.

Building Lambda Functions

Convinced? Great! Let's start by building a simple Lambda function - or rather, two simple Lambda functions. We're going to build one Python function and one Rust function. That's to allow us to explore the differences in memory usage and runtime, both of which increase the cost of running Lambda functions.

All of the code used in this post is available on Github, with setup instructions in the README. In this post, I'll focus on relevant parts of the code.

Python Function

Firstly, what are the Lambda functions doing? In both cases, they accept a name and a status as arguments, attached to the event object passed to the handler; check the status is valid; and update a DynamoDB table for the given robot name with the given robot status. For example, in the Python code:

def lambda_handler(event, context):
    # ...
    name = str(event["name"])
    status = str(event["status"])

We can see that the event is passed to the lambda handler and contains the required fields, name and status. If valid, the DynamoDB table is updated:

ddb = boto3.resource("dynamodb")
table = ddb.Table(table_name)
table.update_item(
    Key={"name": name},
    AttributeUpdates={
        "status": {
            "Value": status
        }
    },
    ReturnValues="UPDATED_NEW",
)
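Before the update, the handler also rejects anything that isn't a recognised status. The repository has its own version of that check; a minimal sketch of the idea might look like the following, where the exact set of allowed statuses is an assumption:

# Hypothetical guard - the real handler defines its own list of valid statuses.
VALID_STATUSES = {"ONLINE", "OFFLINE", "CHARGING"}

def check_status(status):
    # Refuse to touch the table if the status isn't one we recognise
    if status not in VALID_STATUSES:
        raise ValueError(f"Unknown status: {status}")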

Rust Function

Here is the equivalent for checking the input arguments for Rust:

#[derive(Deserialize, Debug, Serialize)]
#[serde(rename_all = "UPPERCASE")]
enum Status {
    Online,
}
// ...
#[derive(Deserialize, Debug)]
struct Request {
    name: String,
    status: Status,
}

The difference here is that Rust states its allowed arguments using an enum, so no extra code is required for checking that arguments are valid. The arguments are obtained by accessing event.payload fields:

let status_str = format!("{}", &event.payload.status);
let status = AttributeValueUpdate::builder().value(AttributeValue::S(status_str)).build();
let name = AttributeValue::S(event.payload.name.clone());

With the fields obtained and checked, the DynamoDB table can be updated:

let request = ddb_client
    .update_item()
    .table_name(table_name)
    .key("name", name)
    .attribute_updates("status", status);
tracing::info!("Executing request [{request:?}]...");

let response = request
    .send()
    .await;
tracing::info!("Got response: {:#?}", response);

CDK Build

To make it easier to build and deploy the functions, the sample repository contains a CDK stack. I've talked more about Cloud Development Kit (CDK) and the advantages of Infrastructure-as-Code (IaC) in my video "From AWS IoT Core to SiteWise with CDK Magic!":

In this case, our CDK stack is building and deploying a few things:

  1. The two Lambda functions
  2. The DynamoDB table used to store the robot statuses
  3. An IoT Rule per Lambda function that will listen for MQTT messages and call the corresponding Lambda function

The DynamoDB table comes from Amazon DynamoDB, another service from AWS that keeps a NoSQL database in the cloud. This service is also serverless, again meaning that no servers or clusters are needed.

There are also two IoT Rules, which are from AWS IoT Core, and define an action to take when an MQTT message is published on a particular topic filter. In our case, it allows robots to publish an MQTT message saying they are online, and will call the corresponding Lambda function. I have used IoT Rules before for inserting data into AWS IoT SiteWise; for more information on setting up rules and seeing how they work, take a look at the video I linked just above.

Testing the Functions

Once the CDK stack has been built and deployed, take a look at the Lambda console. You should have two new functions built, just like in the image below:

Two new Lambda functions in the AWS console

Great! Let's open one up and try it out. Open the function name that has "Py" in it and scroll down to the Test section (top red box). Enter a test name (center red box) and a valid input JSON document (bottom red box), then save the test.

Test configuration for Python Lambda function

Now run the test event. You should see a box pop up saying that the test was successful. Note the memory assigned and the billed duration - these are the main factors in determining the cost of running the function. The actual memory used is not important for cost, but can help optimize the right settings for cost and speed of execution.

Test result for Python Lambda function

You can repeat this for the Rust function, only with the test event name changed to TestRobotRs so we can tell them apart. Note that the memory used and duration taken are significantly lower.

Test result for Rust Lambda function

Checking the Database Table

We can now access the DynamoDB table to check the results of the functions. Access the DynamoDB console and click on the table created by the stack.

DynamoDB Table List

Select the button in the top right to explore items.

Explore Table Items button in DynamoDB

This should reveal a screen with the current items in the table - the two test names you used for the Lambda functions:

DynamoDB table with Lambda test items

Success! We have used functions run in the cloud to modify a database to contain the current status of two robots. We could extend our functions to allow different statuses to be posted, such as OFFLINE or CHARGING, then write other applications to work using the current statuses of the robots, all within the cloud. One issue is that this is a console-heavy way of executing the functions - surely there's something more accessible to our robots?

Executing the Functions

Lambda functions have a huge variety of ways that they can be executed. For example, we could set up an API Gateway that is able to accept API requests and forward them to the Lambda, then return the results. One way to check the possible input types is to access the Lambda, then click the "Add trigger" button. There are far too many options to list them all here, so I encourage you to take a look for yourself!

Lambda add trigger button

There's already one input for each Lambda - the AWS IoT trigger. This is an IoT Rule set up by the CDK stack, which is watching the topic filter robots/+/status. We can test this using either the MQTT test client or by running the test script in the sample repository:

./scripts/send_mqtt.sh

One message published on the topic will trigger both functions to run, and we can see the update in the table.

DynamoDB Table Contents after MQTT

There is only one extra entry, and that's because both functions executed on the same input. That means "FakeRobot" had its status updated to ONLINE once by each function.

If we wanted, we could set up the robot to call the Lambda function when it comes online - it could make an API call, or it could connect to AWS IoT Core and publish a message with its ONLINE status. We could also set up more Lambda functions to take customer orders, dispatch them to robots, and so on - the Lambda functions and accompanying AWS services allow us to build a completely serverless robot co-ordination system in the cloud. If you want to see more about connecting ROS2 robots to AWS IoT Core, take a look at my video here:
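As a sketch of what that could look like from the robot's side, here is one way to publish the status message with boto3 - assuming the robot has AWS credentials and a region configured, and using a topic that matches the robots/+/status filter above:

import json
import boto3

# Publish an ONLINE status that the IoT Rule will forward to the Lambda functions.
iot = boto3.client("iot-data")
iot.publish(
    topic="robots/FakeRobot/status",
    qos=1,
    payload=json.dumps({"name": "FakeRobot", "status": "ONLINE"}),
)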

Lambda Function Cost

How much does Lambda cost to run? For this section, I'll give rough numbers using the AWS Price Calculator. We will assume roughly 100 messages per minute - accounting for customer orders arriving, robots reporting status changes, and orders being distributed - with each message triggering one Lambda function invocation.

For our functions, we can run the test case a few times for each function to get a small spread of numbers. We can also edit the configuration in the console to set higher memory limits, to see if the increase in speed will offset the increased memory cost.

Edit Lambda general configuration

Edit Lambda memory setting

Finally, we will use an ARM architecture, as this currently costs less than x86 in AWS.

I will run a valid test input for each test function 4 times each for 3 different memory values - 128MB, 256MB, and 512MB - and take the latter 3 invocations, as the first invocation takes much longer. I will then take the median billed runtime and calculate the cost per month for 100 invocations per minute at that runtime and memory usage.

My results are as follows:

| Test | Python (128MB) | Python (256MB) | Python (512MB) | Rust (128MB) | Rust (256MB) | Rust (512MB) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 594 ms | 280 ms | 147 ms | 17 ms | 5 ms | 6 ms |
| 2 | 574 ms | 279 ms | 147 ms | 15 ms | 6 ms | 6 ms |
| 3 | 561 ms | 274 ms | 133 ms | 5 ms | 5 ms | 6 ms |
| Median | 574 ms | 279 ms | 147 ms | 15 ms | 5 ms | 6 ms |
| Monthly Cost | $5.07 | $4.95 | $5.17 | $0.99 | $0.95 | $1.06 |

There is a lot of information to pull out from this table! The first thing to notice is the monthly cost. This is the estimated cost per month for Lambda - 100 invocations per minute for the entire month costs a maximum total of $5.17. These are rough numbers, and other services will add to that cost, but that's still very low!
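If you want to sanity-check these estimates yourself, the arithmetic is straightforward. Here's a rough sketch - the per-request and per-GB-second rates are assumptions based on ARM Lambda pricing at the time of writing, so check the price calculator for current values:

# Rough reproduction of the monthly cost estimate. The rates are assumptions -
# confirm them against the AWS Price Calculator before relying on them.
GB_SECOND_RATE = 0.0000133334    # USD per GB-second on ARM
REQUEST_RATE = 0.20 / 1_000_000  # USD per request

def monthly_cost(memory_mb, billed_ms, invocations_per_minute=100):
    invocations = invocations_per_minute * 60 * 24 * 30  # one month
    gb_seconds = invocations * (billed_ms / 1000.0) * (memory_mb / 1024.0)
    return invocations * REQUEST_RATE + gb_seconds * GB_SECOND_RATE

print(f"Python 256MB: ${monthly_cost(256, 279):.2f}")  # roughly $4.88
print(f"Rust 256MB:   ${monthly_cost(256, 5):.2f}")    # roughly $0.94

Running this lands within a few percent of the calculator's numbers in the table; the small differences come down to rounding and the exact rates used.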

Next, in the Python function, we can see that multiplying the memory will divide the runtime by roughly the same factor. The cost stays roughly the same as well. That means we can configure the function to use more memory to get the fastest runtime, while still paying the same price. In some further testing, I found that 1024MB is a good middle ground. It's worth experimenting to find the best price point and speed of execution.

If we instead look at the Rust function, we find that the execution time is pretty stable from 256MB onwards. Adding more memory doesn't speed up our function - it is most likely limited by the response time of DynamoDB. The optimal point seems to be 256MB, which gives very stable (and snappy) response times.

Finally, when we compare the two functions, we can see that Rust is much faster to respond (5ms instead of 279 ms at 256MB), and costs ~20% as much per month. That's a large difference in execution time and in cost, and tells us that it's worth considering a compiled language (Rust, C++, Go etc) when building a Lambda function that will be executed many times.

The main point to take away from this comparison is that memory and execution time are the major factors when estimating Lambda cost. If we can minimize these parameters, we will minimize cost of Lambda invocation. The follow-up to that is to consider using a compiled language for frequently-run functions to minimize these parameters.

Summary

Once you move from one robot working alone to multiple robots working together, you're very likely to need some central management system, and the cloud is a great option for this. What's more, you can use serverless technologies like AWS Lambda and Amazon DynamoDB to only pay for the transactions - no upkeep, and no server provisioning. This makes the management process easy: just define your database and the functions to interact with it, and your system is good to go!

AWS Lambda is a great way to define one or more of these functions. It can react to events like API calls or MQTT messages by integrating with other services. By combining IoT, DynamoDB, and Lambda, we can allow robots to send an MQTT message that triggers a Lambda, allowing us to track the current status of robots in our fleet - all deployed using CDK.

Lambda functions are charged by invocation, where the cost for each invocation depends on the memory assigned to the function and the time taken for that function to complete. We can minimize the cost of Lambda by reducing the memory required and the execution time for a function. Because of this, using a compiled language could translate to large savings for functions that run frequently. With that said, the optimal price point might not be the minimum possible memory - the Python function seems to be cheapest when configured with 1024MB.

We could continue to expand this system by adding more possible statuses, defining the fleet for each robot, and adding more functions to manage distributing orders. This is the starting point of our management system. See if you can expand one or both of the Lambda functions to define more possible statuses for the robots!

· 12 min read
Michael Hart

This post is all about my advice on getting started as a Robotics Software Engineer. I want to tell you a little of my journey to get to this point, then what you should do to practise and give yourself the best start possible.

If you prefer a video format, check out my YouTube video below:

Who is this post for?

This post is aimed at absolute beginners. If you've seen Boston Dynamics robots running through assault courses or automated drone deliveries and thought, "this is the kind of stuff that I want to work on my whole career" - this post is for you.

Robotics contains a lot of engineering disciplines. If you're interested in the intelligence behind a robot - the way it messages, figures out where it is and where to go, and how it makes decisions - this post is for you.

With that said, if you're interested in building your own arm, 3D printed parts, or printed circuit boards, this post isn't likely to be much help to you. Feel free to read through anyway!

Atlas jumping with a package

Atlas Gets a Grip | Boston Dynamics by Boston Dynamics

Who am I?

I'm a Software Development Engineer focused on robotics. I have a master's degree in Electrical & Electronic Engineering from Imperial College London in the UK. I have 11 years of experience in software engineering, with 7 of those years specialising in robotics.

Throughout my career, I've worked on:

  1. A robot arm to cook steak and fries
  2. A maze-exploring rover
  3. A robot arm to tidy your room by picking up pens, toys, and other loose items
  4. Amazon Scout, a delivery rover

That's to name just a few! Suffice to say, I've worked on a lot of different projects, but I'm not a deep expert in any particular field. What I specialise in now is connecting robots to the cloud and getting value from it. That's why I'm working for Amazon Web Services as a Senior Software Development Engineer specialised in robotics. With that history, I'm a good person for getting you started in the world of robotics - starting with the hardware you need.

What hardware do you need?

Thankfully, not a lot! All you really need to get going is a computer. It doesn't need to be especially powerful while you're learning. If you want to be able to run simulations, a GPU will help, but you don't need one.

For the operating system (OS), you'll have a slightly easier time if it's running Linux or Mac OS, but Windows is very close to being as good thanks to great strides in recent years building the Windows Subsystem for Linux (WSL2). It's basically a way of running Linux inside your Windows computer. Overall, any OS will work just fine, so don't sweat this part.

That's all you need to get started, because the first step to becoming a software engineer for robotics is the software engineer part. You need to learn how to program.

How should you learn to program?

Pick a Language

First, pick a starting language. I do mean a starting language - a good way to think of programming languages is as tools in a toolbox. Each tool can perform multiple jobs, but there's usually a tool that's better suited for completing a given task. Try to fill out your toolbox instead of learning how to use one tool for every job.

Before you can fill out your toolbox, you need to start with one tool. You want to use your tools for robotics at some point, which helps narrow options down to Python and C++ - those are the most common options in robotics. I would recommend Python, as it's easier to understand in the beginning, and learning to program is hard enough without the difficult concepts that come with C++. There are tons of tutorials online for starting in Python. I would recommend trying a Codecademy course, which you can work through to get the beginning concepts.

Once you grasp the concepts, it's time to practise. The important part to understand here is that programming is a different way of thinking - you need to train your brain. You will have to get used to the new concepts and fully understand them before you can use them without thinking about them.

To practise, you could try Leetcode or Project Euler, but those are problem-solving puzzles and computer science exercises, and in my eyes they aren't the best tool for learning a language. I believe that the best way to learn is to come up with a project idea and build it. It doesn't have to be in robotics. You could build a text-based RPG, like:

$ You are in a forest, what do you do?
1. Go forward
2. Look around
>>>

Or, you could build a text-based pokemon battle simulator, where you pick a move that damages the other pokemon. The important part is that the project is something you're interested in - that's what will motivate you to keep building it and practising.
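To show how little code such a project needs to get going, here's a tiny Python sketch of how that RPG loop might start - just an illustration, not a finished game:

# A tiny starting point for a text-based RPG - grow it as you learn new concepts.
def main():
    while True:
        print("You are in a forest, what do you do?")
        print("1. Go forward")
        print("2. Look around")
        print("q. Quit")
        choice = input(">>> ").strip()
        if choice == "1":
            print("You walk deeper into the forest...")
        elif choice == "2":
            print("Tall trees surround you in every direction.")
        elif choice == "q":
            break
        else:
            print("You hesitate, unsure what to do.")

if __name__ == "__main__":
    main()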

tip

Try writing some post-it notes of features you want to add to your project and stick them on the wall. Try to complete one post-it note before starting another.

Spend time building your project. Look online for different concepts and how they can fit your project. Use sites like Stack Overflow to help if you get stuck. If you can, find someone with more experience who's able to help guide you through the project. You can ask people you know or look online for help, like joining a discord community. There will be times when you're absolutely stumped as to why your project isn't working, and while you could figure it out yourself eventually, a mentor would not only help you past those issues more quickly but also teach you more about the underlying concept.

In essence, that's all you have to do. Get a computer, pick a language, and build a project in it. Look at tutorials and forums online when you get stuck, and find a mentor if you can. If you do this for a while, you'll train your brain to use programming concepts as naturally as thinking.

Other Tools

On top of Python, there are a couple of tools that you need to learn:

  1. Version control
  2. Linux Terminal

Version Control

Version control is software that allows you to store different versions of files and easily switch between them. Every change to the file can be "committed" as a new version, allowing you to see the differences between every version of your code. There's a lot more it can do, and it is absolutely invaluable to software developers.

Git is by far the most commonly used version control software. Check out a course on how to use it, then get into the habit of committing code versions whenever you get something working - you'll thank yourself when you break your code and can easily reset it to a working version.

Linux Terminal

The Linux Terminal is a way of typing commands for the computer to execute. You will be using the Terminal extensively when developing software. You don't need a course to learn it, but when you do have to write commands in the terminal, try not to just copy and paste them without looking - read through each command and figure out what it's doing. Pretty soon, you'll be writing your own commands.

tip

Use Ctrl+R to search back through history for a command you've already executed and easily run it again.

Work with Others

You don't just have to learn alone - it is hugely helpful to learn from other people. You could look into making a project with friends or an online community, or better yet, join a robotics competition as part of a team.

If you practise enough with what I've told you here, you'll also be eligible for internships. Look around and see what's available. Try to go for companies with experienced software engineers to learn from, and if you're successful in your application, learn as much as you can from them. Software development as a job is very different from doing it at home, and you will learn an incredible amount from the people around you and the processes that the company uses.

What about robots?

So far, we've not talked about robots very much. Getting a solid ground in programming is very important before going on to the next step. But, once you have the basics, this part is how to get going with programming robots.

Robot Operating System (ROS)

The best starting point is ROS. This is the most popular robotics framework, although far from the only one. It is free, will run on any system, and has a ton of tools available that you can learn from. Also, because it's the most popular, there's a lot of help available when you're struggling. Follow the documentation, install it on your system, and get it passing messages around. Then understand the publish and subscribe system it uses for messages - this is crucial for robotics in general. The resources on this blog can help, or you can think of a project you want to build in ROS. Start small, like getting a robot moving around with an Xbox controller: you'll learn pretty quickly during this process that developing robots is really difficult, so set achievable goals for yourself!

Simulation

If you want to work with robots in simulation, that's great! You can get going with just your computer. You need to understand that it's incredibly difficult to make robots behave the same in simulation as they do in real life, so don't expect it to transfer easily. However, it is better for developing robotics software quickly - it's faster to run and quicker to reset, so it's easier to work with. Because of that, it's a valuable skill to have.

If you're looking for somewhere to start, there are a few options. Gazebo is well-known as a ROS simulation tool, but there are also third party simulation software applications that still support ROS. I would start by looking at either NVIDIA Isaac SIM or O3DE - both are user-friendly applications that would be a great starting point.

NVIDIA Isaac Sim Example

NVIDIA - Narrowing the Sim2Real Gap with NVIDIA Isaac Sim

Embedded

As far as embedded development goes, I consider this optional - but helpful. It's good for understanding how computers work, and you may need it if you want to get closer to the electronics. However, I don't think you need it; most programming is on development kits, like Raspberry Pi and Jetson Nano boards. These are running full Linux operating systems, so you don't need to know embedded to use them. If you do want to learn embedded, consider buying a development kit - I would recommend a NUCLEO board (example here) - and work with it to understand how UART, I2C, and other serial communications work, plus operating its LEDs. If you want more advice, let me know.

NUCLEO Product Page

NUCLEO-F302R8 Product Page on Amazon

Real Robot Hardware

How about real robots? This is a bit of an issue - a lot of the cheaper robots you see don't have good computers running on them. You want to find something with at least a Raspberry Pi or Jetson Nano making it work, and that's getting into the hundreds of dollars. It's possible to go cheaper, like with an $80 kit and a $20 board bought separately - but I wouldn't recommend that for a beginner; it's a lot harder to get working. If you are interested in a kit that you can add your own board to, take a look at the Elegoo Robot Kit on Amazon.

My recommended option here would be the JetBot that I've already been making blogs and videos about. It should come to just under $300, and comes with everything you need to start making robotics applications. There will also be a lot of resources and videos on it to get it going.

JetBot Product Information

WaveShare JetBot Product Information

If your budget is a bit higher and you want something more advanced, you could take a look at Turtlebot, like a Turtlebot Burger. That will cost nearer $700, but also comes with a Lidar, which is great for mapping its environment.

TurtleBot Product Information

Turtlebot 3 Burger RPi4

If your budget is lower than a JetBot, I would recommend either working in simulation or trying to build your own robot. Building your own will be a lot tougher and take a lot longer, but you should learn quite a bit from it too.

Some Final Advice

Before finishing up this post, I wanted to give some more general advice.

First, look for and use every resource you have available to you. Look online, ask people, work in the field; anything you can to make your journey easier.

Second, you should get a mentor. This is related to the first point, but it's so important. Find someone you can respect and learn as much as you can from them. This is really the secret to learning a lot - use others' experience to jump ahead instead of learning it yourself the slow, hard way. Finding the right mentor can be a challenge, and you may need to go through a few people before you get to the most helpful person, so be prepared!

I'm sure there's a lot more advice I could give you, but this is my best advice for beginners. At this stage, you need to learn how to learn - finding resources and taking advantage of them. It is the best possible foundation you can give yourself for the rest of your career.

· 15 min read
Michael Hart

Welcome to a new series - setting up the JetBot to work with ROS2 Control interfaces! Previously, I showed how to set up the JetBot to work from ROS commands, but that was a very basic motor control method. It didn't need to be advanced because a human was remote controlling it. However, if we want autonomous control, we need to be able to travel a specific distance or follow a defined path, like a spline. A better way of moving a robot using ROS is by using the ROS Control interfaces; if done right, this means your robot can autonomously follow a path sent by the ROS navigation stack. That's our goal for this series: move the JetBot using RViz!

The first step towards this goal is giving ourselves the ability to control the motors using C++. That's because ROS Control interfaces require extending C++ classes. Unfortunately, the existing drivers are in Python, meaning we'll need to rewrite them in C++ - which is a good opportunity to learn how the serial control works. We use I2C to talk to the motor controller chip, an AdaFruit DC Motor + Stepper FeatherWing, which sets the PWM duty cycle that makes the motors move. I'll refer to this chip as the FeatherWing for the rest of this article.

First, we'll look at how I2C works in general. We don't strictly need to know this, but understanding how the serial communication works will make the function calls in the code much easier to follow.

Once we've seen how I2C works, we'll look at the commands sent to set up and control the motors. This will help us understand how to translate the ROS commands into something our motors will understand.

The stage after this will be in another article in this series, so stay tuned!

This post is also available in video form - check the video link below if you want to follow along!

Inter-Integrated Circuit (IIC/I2C)

I2C can get complicated! If you want to really dive deep into the timings and circuitry needed to make it work, this article has great diagrams and explanations. The image I use here is from the same site.

SDA and SCLK

I2C is a serial protocol, meaning that it sends bits one at a time. It uses two wires called SDA and SCLK; together, these form the I2C bus. Multiple devices can be attached to these lines and take it in turns to send data. We can see the bus in the image below:

I2C Bus with SCLK and SDA lines

Data is sent on the SDA line, and a clock signal is sent on the SCLK line. The clock helps the devices know when to send the next bit. This is a very helpful part of I2C - the speed doesn't need to be known beforehand! Compare this with UART communication, which has two lines between every pair of devices: one to send data from A to B, and one to send data from B to A. Both devices must know in advance how fast to send their data so the other side can understand it. If they don't agree on timing, or even if one side's timing is off, the communication fails. By using a line for the clock in I2C, all devices are given the timing to send data - no prior knowledge required!

The downside of this is that there's only one line to send data on: SDA. The devices must take it in turns to send data. I2C solves this by designating a master device and one or more slave devices on the bus. The master device is responsible for sending the clock signal and telling the slave devices when to send data. In our case, the master device is the Jetson Nano, and the slave device is the FeatherWing. We could add extra FeatherWing boards to the bus, each with extra motors, and I2C would allow the Jetson to communicate with all of them - but this brings a new problem: how would each device know when it is the one meant to respond to a request?

Addressing

The answer is simple. Each slave device on the bus has a unique address. In our case, the FeatherWing has a default address of 0x60, which is hex notation for the number 96. In fact, if we look at the Python version of the JetBot motor code, we can see the following:

if 96 in addresses:

Aha! So when we check what devices are available on the bus, we see device 96 - the FeatherWing.
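If you'd like to check your own bus from Python, a quick scan only takes a few lines. This is a rough sketch assuming the smbus2 package and bus 1 (/dev/i2c-1), as on the Jetson Nano - some devices don't answer this kind of probe, but this is how device 96 shows up in practice:

from smbus2 import SMBus

# Probe each legal 7-bit address and record the ones that acknowledge.
found = []
with SMBus(1) as bus:  # bus 1 corresponds to /dev/i2c-1 on the Jetson Nano
    for address in range(0x03, 0x78):
        try:
            bus.read_byte(address)
            found.append(address)
        except OSError:
            pass  # no device acknowledged this address

print(found)  # expect [96] (0x60) with a single FeatherWing attached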

When the Jetson wants to talk to a specific device, it starts by selecting the address. It sends the address it wants on the SDA line before making a request, and each device on the bus compares that address with the one it is expecting. If it's the wrong address, the device ignores the request. For example, if the FeatherWing has an address of 0x61 and the Jetson sends the address 0x60, the FeatherWing should ignore that request - it's for a different address.

But, how do we assign an address to each device?

The answer comes by looking at the documentation for the FeatherWing:

FeatherWing I2C Addressing

By soldering different pins on the board together, we can tell the board a new address to take when it starts up. That way, we can have multiple FeatherWing boards, each with a different address, all controllable from the Jetson. Cool!

Pulse Width Modulation (PWM)

With that, we have a basic understanding of how the Jetson controls I2C devices connected to it, including our FeatherWing board. Next we want to understand how the FeatherWing controls the motors, so we can program the Jetson to issue corresponding commands to the FeatherWing.

The first step of this is PWM itself - how does the board run a motor at a particular speed? The following step is the I2C commands needed to make the FeatherWing do that. I'll start with the speed.

Motor Wires

Each JetBot motor is a DC motor with two wires. By making one wire a high voltage with the other wire at 0V, the motor will run at full speed in one direction; if we flip which wire is the high voltage, the motor will turn in the opposite direction. We will say that the wire that makes the motor move forwards is the positive terminal, and the backwards wire is the negative terminal.

We can see the positive (red) wire and the negative (black) wire from the product information page:

DC Motor with red and black wires

That means we know how to move the motor at full speed:

  1. Forwards - red wire has voltage, black wire is 0V
  2. Backwards - black wire has voltage, red wire is 0V

There are another couple of modes that we should know about:

  1. Motor off - both wires are 0V
  2. Motor brakes - both wires have voltage

Which gives us full speed forwards, full speed backwards, brake, and off. How do we move the motor at a particular speed? Say, half speed forwards?

Controlling Motor Speed

The answer is PWM. Essentially, instead of holding a wire constantly at high voltage, we turn the voltage on and off. For half speed forwards, we have the wire on for 50% of the time, and off for 50% of the time. By switching between the two states really fast, we effectively make the motor move at half speed - it can't switch on and off fast enough to match the wire, so it responds to the average voltage, which is half the full voltage!

That, in essence, is PWM: switch the voltage on the wire very fast from high to low and back again. The proportion of time spent high determines how much of the time the motor is on.

We can formalize this a bit more with some language. The frequency is how quickly the signal changes from high to low and back. The duty cycle is the proportion of time the wire is on. We can see this in the following diagram from Cadence:

PWM signal with mean voltage, duty cycle, and frequency

We can use this to set a slower motor speed. We choose a high enough frequency, which in our case is 1.6 kHz. This means the PWM signal goes through a high-low cycle 1600 times per second. Then if we want to go forwards at 25% speed, we can set the duty cycle of the positive wire to our desired speed - 25% speed means 25% duty cycle. We can go backwards at 60% speed by setting a 60% duty cycle on the negative wire.
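As a quick worked example, here's how a signed speed fraction could map onto the 16-bit duty cycle values that the driver code later writes - the helper name here is made up purely for illustration:

def speed_to_duty(speed):
    """Map a speed in [-1.0, 1.0] to (positive_pin_duty, negative_pin_duty).

    Duty cycles are expressed as 16-bit values, matching the range the C++
    driver later shifts down to fit the chip's 12-bit register.
    """
    speed = max(-1.0, min(1.0, speed))
    duty = int(abs(speed) * 0xFFFF)
    return (duty, 0) if speed >= 0 else (0, duty)

print(speed_to_duty(0.25))   # 25% forwards  -> (16383, 0)
print(speed_to_duty(-0.6))   # 60% backwards -> (0, 39321)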

Producing this signal sounds very manual, which is why the FeatherWing comes with a dedicated PWM chip. We can use I2C commands from the Jetson to set the PWM frequency and duty cycle for a motor, and it handles generating the signal for us, driving the motor. Excellent!

Controlling Motors through the FeatherWing

Now that we know how to move a particular motor at a particular speed, forwards or backwards, we need to understand how to command the FeatherWing to do so. I struggled with this part! I couldn't find information about it on the product page, which is what I would ordinarily use to set up an embedded system like this. That's because AdaFruit provides libraries for using the FeatherWing without needing any of this I2C or PWM knowledge.

Thankfully, the AdaFruit MotorKit library and its dependencies had all of the code I needed to write a basic driver in C++ - thank you, AdaFruit! The following is the list of links I used for a reference on controlling the FeatherWing:

  1. Adafruit_CircuitPython_MotorKit
  2. Adafruit_CircuitPython_PCA9685
  3. Adafruit_CircuitPython_Motor
  4. Adafruit_CircuitPython_BusDevice
  5. Adafruit_CircuitPython_Register

Thanks to those links, I was able to put together a basic C++ driver, available here on Github.

Git Tag

Note that this repository will have updates in future to add to the ROS Control of the JetBot. To use the code quoted in this article, ensure you use the git tag jetbot-motors-pt1.

FeatherWing Initial Setup

To get the PWM chip running, we first do some chip setup - resetting, and setting the clock value. This is done in the I2CDevice constructor by opening the I2C bus with the Linux I2C driver; selecting the FeatherWing device by address; setting its mode to reset it; and setting its clock using a few reads and writes of the mode and prescaler registers. Once this is done, the chip is ready to move the motors.

I2CDevice::I2CDevice() {
  i2c_fd_ = open("/dev/i2c-1", O_RDWR);
  if (!i2c_fd_) {
    std::__throw_runtime_error("Failed to open I2C interface!");
  }

  // Select the PWM device
  if (!trySelectDevice())
    std::__throw_runtime_error("Failed to select PWM device!");

  // Reset the PWM device
  if (!tryReset()) std::__throw_runtime_error("Failed to reset PWM device!");

  // Set the PWM device clock
  if (!trySetClock()) std::__throw_runtime_error("Failed to set PWM clock!");
}

Let's break this down into its separate steps. First, we use the Linux driver to open the I2C device.

// Headers needed for Linux I2C driver
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <linux/i2c.h>
#include <linux/types.h>
#include <sys/ioctl.h>
#include <unistd.h>

// ...

// Open the I2C bus
i2c_fd_ = open("/dev/i2c-1", O_RDWR);
// Check that the open was successful
if (!i2c_fd_) {
  std::__throw_runtime_error("Failed to open I2C interface!");
}

We can now use this i2c_fd_ with Linux system calls to select the device address and read/write data. To select the device by address:

bool I2CDevice::trySelectDevice() {
  return ioctl(i2c_fd_, I2C_SLAVE, kDefaultDeviceAddress) >= 0;
}

This uses the ioctl function to select a slave device by the default device address 0x60. It checks the return code to see if it was successful. Assuming it was, we can proceed to reset:

bool I2CDevice::tryReset() { return tryWriteReg(kMode1Reg, 0x00); }

The reset is done by writing a 0 into the Mode1 register of the device. Finally, we can set the clock. I'll omit the code for brevity, but you can take a look at the source code for yourself. It involves setting the mode register to accept a new clock value, then setting the clock, then setting the mode to accept the new value and waiting 5ms. After this, the PWM should run at 1.6 kHz.

Once the setup is complete, the I2CDevice exposes two methods: one to enable the motors, and one to set a duty cycle. The motor enable sets a particular pin on, so I'll skip that function. The duty cycle setter has more logic:

buf_[0] = kPwmReg + 4 * pin;

if (duty_cycle == 0xFFFF) {
  // Special case - fully on
  buf_[1] = 0x00;
  buf_[2] = 0x10;
  buf_[3] = 0x00;
  buf_[4] = 0x00;
} else if (duty_cycle < 0x0010) {
  // Special case - fully off
  buf_[1] = 0x00;
  buf_[2] = 0x00;
  buf_[3] = 0x00;
  buf_[4] = 0x10;
} else {
  // Shift by 4 to fit 12-bit register
  uint16_t value = duty_cycle >> 4;
  buf_[1] = 0x00;
  buf_[2] = 0x00;
  buf_[3] = value & 0xFF;
  buf_[4] = (value >> 8) & 0xFF;
}

Here we can see that the function checks the requested duty cycle. If it's at the maximum, it sets the channel fully on by writing the special 0x1000 value into the ON register. If it's below the smallest representable value, it sets the channel fully off in the same way. Anything in between is shifted right by 4 bits to fit the chip's 12-bit registers, then transmitted. Between the three branches, the I2CDevice can set any duty cycle for a particular pin; it's then up to the Motor class to decide which pins should be set, and which duty cycle to set them to.
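
For completeness, the filled buffer then has to be transmitted to the chip. The repository wraps this up, but the final step of the setter is essentially a single Linux write() call - the register address in buf_[0] followed by the four ON/OFF data bytes, relying on the chip's auto-increment mode. The error handling below is my assumption:

// Sketch of the tail of the duty cycle setter: send the 5-byte buffer
// (register address plus four LEDn_ON/LEDn_OFF bytes) over I2C.
if (write(i2c_fd_, buf_, 5) != 5) {
  return false;  // The transfer failed or was incomplete
}
return true;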

Initialize Motors

Following the setup of I2CDevice, each Motor gets a reference to the I2CDevice to allow it to talk to the PWM chip, as well as a set of pins that correspond to the motor. The pins are as follows:

| Motor   | Enable Pin | Positive Pin | Negative Pin |
| ------- | ---------- | ------------ | ------------ |
| Motor 1 | 8          | 9            | 10           |
| Motor 2 | 13         | 11           | 12           |

The I2CDevice and Motors are constructed in the JetBot control node. Note that the pins are passed inline:

device_ptr_ = std::make_shared<JetBotControl::I2CDevice>();
motor_1_ = JetBotControl::Motor(device_ptr_, std::make_tuple(8, 9, 10), 1);
motor_2_ =
    JetBotControl::Motor(device_ptr_, std::make_tuple(13, 11, 12), 2);

Each motor can then request that the chip enable it via its enable pin, which again is done in the constructor:

Motor::Motor(I2CDevicePtr i2c, MotorPins pins, uint32_t motor_number)
    : i2c_{i2c}, pins_{pins}, motor_number_{motor_number} {
  u8 enable_pin = std::get<0>(pins_);
  if (!i2c_->tryEnableMotor(enable_pin)) {
    std::string error =
        "Failed to enable motor " + std::to_string(motor_number) + "!";
    std::__throw_runtime_error(error.c_str());
  }
}

Once each motor has enabled itself, it is ready to send the command to spin forwards or backwards, brake, or turn off. This example only allows the motor to set itself to spinning or not spinning. The command is sent by the control node once more:

motor_1_.trySetSpinning(spinning_);
motor_2_.trySetSpinning(spinning_);

To turn on, the positive pin is set to fully on, or 0xFFFF, while the negative pin is set to off:

if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
  return false;
}
if (!i2c_->trySetDutyCycle(neg_pin, 0)) {
  return false;
}

To turn off, both positive and negative pins are set to fully on, or 0xFFFF:

if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
  return false;
}
if (!i2c_->trySetDutyCycle(neg_pin, 0xFFFF)) {
  return false;
}
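
Putting those two cases together, the spinning setter boils down to choosing which duty cycle to apply to the negative pin. A condensed sketch is below - the details may differ from the repository's actual trySetSpinning implementation:

// Condensed sketch of the on/off logic shown above; details may differ
// from the repository's trySetSpinning.
bool Motor::trySetSpinning(bool spinning) {
  u8 pos_pin = std::get<1>(pins_);
  u8 neg_pin = std::get<2>(pins_);
  // Spinning: positive pin fully on, negative pin off.
  // Stopped: both pins fully on, which the chip treats as off.
  uint16_t neg_duty = spinning ? 0 : 0xFFFF;
  if (!i2c_->trySetDutyCycle(pos_pin, 0xFFFF)) {
    return false;
  }
  return i2c_->trySetDutyCycle(neg_pin, neg_duty);
}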

Finally, when the node is stopped, the motors make sure they stop spinning. This is done in the destructor of the Motor class:

Motor::~Motor() {
  trySetSpinning(false);
}

While the Motor class currently only turns the motors fully on or fully off, the underlying I2CDevice can set any duty cycle between 0 and 0xFFFF - which means we can set any speed, in either direction we want the motors to spin!
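
As an illustration, a hypothetical speed setter (not part of the jetbot-motors-pt1 tag) could map a signed speed onto the two pins using the same trySetDutyCycle calls:

#include <algorithm>
#include <cmath>

// Hypothetical extension - not in the repository at jetbot-motors-pt1.
// Maps a signed speed in [-1.0, 1.0] onto the positive and negative pins.
bool Motor::trySetSpeed(float speed) {
  u8 pos_pin = std::get<1>(pins_);
  u8 neg_pin = std::get<2>(pins_);
  // Clamp the magnitude and scale it to a 16-bit duty cycle
  float magnitude = std::min(std::fabs(speed), 1.0f);
  auto duty = static_cast<uint16_t>(magnitude * 0xFFFF);
  if (speed >= 0.0f) {
    // Forwards: drive the positive pin, hold the negative pin low
    return i2c_->trySetDutyCycle(pos_pin, duty) &&
           i2c_->trySetDutyCycle(neg_pin, 0);
  }
  // Backwards: drive the negative pin, hold the positive pin low
  return i2c_->trySetDutyCycle(neg_pin, duty) &&
         i2c_->trySetDutyCycle(pos_pin, 0);
}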

Trying it out

If you want to give it a try, and you have a JetBot to do it with, you can follow my setup guide in this video:

Once this is done, follow the instructions in the README to get set up. This means cloning the code onto the JetBot, opening the folder inside the dev container, and then running:

source /opt/ros/humble/setup.bash
colcon build
source install/setup.bash
ros2 run jetbot_control jetbot_control

This will set the motors spinning for half a second, then off for half a second.

Congratulations!

If you followed along to this point, you have successfully moved your JetBot's motors using a driver written in C++!

· 7 min read
Michael Hart

This post shows how to build a Robot Operating System 2 node using Rust, a systems programming language built for safety, security, and performance. In the post, I'll tell you about Rust - the programming language, not the video game! I'll tell you why I think it's useful in general, then specifically in robotics, and finally show you how to run a ROS2 node written entirely in Rust that will send messages to AWS IoT Core.

This post is also available in video form - check the video link below if you want to follow along!

Why Rust?

The first thing to talk about is, why Rust in particular over other programming languages? Especially given that ROS2 has strong support for C++ and Python, we should think carefully about whether it's worth travelling off the beaten path.

There are much more in-depth articles and videos about the language itself, so I'll keep my description brief. Rust is a systems-level programming language, the same class of language as C and C++, but with a very strict compiler that blocks you from performing "unsafe" operations. That means the language is built for high performance, but with a greatly diminished risk of the unsafe behaviour that C and C++ allow.

Rust is also steadily growing in traction. It is the only language other than C to make its way into the Linux kernel - and the Linux kernel was originally written in C! Microsoft is also rewriting some Windows kernel modules in Rust - check here to see what they have to say:

The major tech companies are adopting Rust, including Google, Facebook, and Amazon. This recent 2023 keynote from Dr Werner Vogels, Vice President and CTO of Amazon.com, had some choice words to say about Rust. Take a look here to hear from this expert in the industry:

Why isn't Rust used more?

That's a great question. Really, I've presented the best parts in this post so far. Some of the drawbacks include:

  1. Being a newer language means less community support and fewer components provided out of the box. For example, writing a desktop GUI in Rust is possible, but the libraries are still maturing.
  2. It's harder to learn than most languages. The stricter compiler means some normal programming patterns don't work, which means relearning some concepts and finding different ways to accomplish the same task.
  3. It's hard for a new language to gain traction! Rust has to prove it will stand the test of time.

Having said that, I believe learning the language is worth it for safety, security, and sustainability reasons. Safety and security come from the strict compiler, and sustainability comes from being a low-level language that does the same task faster and with fewer resources.

That's true for robotics as much as it is for general applications. Some robot software can afford to be slow, like high-level message passing and decision making, but a lot of it needs to be real-time and high-performance, like processing Lidar data. My example today would be perfectly acceptable in Python because it only passes non-urgent messages, but it's still a good use case for exploring Rust.

With that, let's stop talking about Rust, and start looking at building that ROS2 node.

Building a ROS2 Node

The node we're building replicates the Python-based node from this blog post. The same setup is required, meaning the setup of X.509 certificates, IoT policies, and so on will be used. If you want to follow along, make sure to run through that setup to the point of running the code - at which point, we can switch over to the Rust-based node. If you prefer to follow instructions from a README, please follow this link - it is the repository containing the source code we'll be using!

Prerequisites

The first part of our setup is making sure all of our tools are installed. This node can be built on any operating system, but the instructions are given for Ubuntu, so you may need to do some extra research for other systems.

Execute the following to install Rust using Rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

There are further dependencies taken from the ROS2 Rust repository as follows:

sudo apt install -y git libclang-dev python3-pip python3-vcstool # libclang-dev is required by bindgen
# Install these plugins for cargo and colcon:
cargo install --debug cargo-ament-build # --debug is faster to install
pip install git+https://github.com/colcon/colcon-cargo.git
pip install git+https://github.com/colcon/colcon-ros-cargo.git

Source Code

Assuming your existing ROS2 workspace is at ~/ros2_ws, the following commands can be used to check out the source code:

cd ~/ros2_ws/src
git clone https://github.com/mikelikesrobots/aws-iot-node-rust.git
git clone https://github.com/ros2-rust/ros2_rust.git
git clone https://github.com/aws-samples/aws-iot-robot-connectivity-samples-ros2.git

ROS2 Rust then uses vcs to import the other repositories it needs:

cd ~/ros2_ws
vcs import src < src/ros2_rust/ros2_rust_humble.repos

That concludes checking out the source code.

Building the workspace

The workspace can now be built. It takes around 10 minutes to build ROS2 Rust, which should only need to be done once. Following that, changes to the code in this repository can be built very quickly. To build the workspace, execute:

cd ~/ros2_ws
colcon build
source install/setup.bash

The build output should look something like this:

Colcon Build Complete

Once the initial build has completed, the following command can be used for subsequent builds:

colcon build --packages-select aws_iot_node

Here it is in action:

build-only-iot

Now, any changes that are made to this repository can be built and tested with cargo commands, such as:

cargo build
cargo run --bin mock-telemetry

The cargo build log will look something like:

cargo-build-complete

Multi-workspace Setup

The ROS2 Rust workspace takes a considerable amount of time to build, and often gets built as part of the main workspace when it's not required, slowing down development. A different way of structuring workspaces is to separate the ROS2 Rust library from your application, as follows:

# Create and build a workspace for ROS2 Rust
mkdir -p ~/ros2_rust_ws/src
cd ~/ros2_rust_ws/src
git clone https://github.com/ros2-rust/ros2_rust.git
cd ~/ros2_rust_ws
vcs import src < src/ros2_rust/ros2_rust_humble.repos
colcon build
source install/setup.bash

# Check out application code into main workspace
cd ~/ros2_ws/src
git clone https://github.com/mikelikesrobots/aws-iot-node-rust.git
git clone https://github.com/aws-samples/aws-iot-robot-connectivity-samples-ros2.git
cd ~/ros2_ws
colcon build
source install/local_setup.bash

This method means that the ROS2 Rust workspace only needs to be rebuilt when there is a new ROS2 Rust release, and can otherwise be left alone. Furthermore, you can source its setup script automatically by adding a line to your ~/.bashrc:

echo "source ~/ros2_rust_ws/install/setup.bash" >> ~/.bashrc

The downside of this method is that any further workspaces must be sourced with their local_setup.bash script - sourcing a full setup.bash would overwrite the variables needed to access the ROS2 Rust libraries.

Running the Example

To run the example, you will need the IOT_CONFIG_FILE environment variable set, as described in the Python repository's setup.

Open two terminals. In each terminal, source the workspace, then run one of the two nodes as follows:

source ~/ros2_ws/install/setup.bash  # Both terminals
source ~/ros2_ws/install/local_setup.bash # If using the multi-workspace setup method
ros2 run aws_iot_node mqtt-telemetry --ros-args --param path_for_config:=$IOT_CONFIG_FILE # One terminal
ros2 run aws_iot_node mock-telemetry # Other terminal

Using a split terminal in VSCode, this looks like the following:

Both MQTT and Mock nodes running

You should now be able to see messages appearing in the MQTT test client in AWS IoT Core. This will look like the following:

MQTT Test Client

Conclusion

We've demonstrated that it's possible to build ROS2 nodes in Rust just as with C++ and Python, although there's an extra step of setting up ROS2 Rust so our node can link against it. We can now build other nodes in Rust when we're working on a resource-constrained system, such as a Raspberry Pi or other small dev kit, and we want the guarantees from the Rust compiler that the C++ compiler doesn't provide, while being more secure and using fewer resources than a Python-based version.

Check out the repo and give it a try for yourself!