
ROBOTCORE Transform

Speed up your ROS Coordinate System Transformations (tf)

ROBOTCORE Transform is an optimized robotics transform library that efficiently manages the transformations between coordinate systems in a robot. API-compatible with the ROS 2 transform (tf2) library, ROBOTCORE Transform delivers higher throughput and lower latency while aligning with the standard way of keeping track of coordinate frames and transform data within a ROS robotic system.

Get ROBOTCORE® Transform
Robotics consulting
ROBOTCORE TRANSFORM

Accelerated coordinate transformations (tf)

The ROS tf2 library manages the transformations between coordinate systems in a robot. It does so by using a directed, single-rooted tree with methods for 1) registration of transformation information and 2) computation of coordinate transformations. The cool thing about tf2 is that it allows you to transform data in time as well as in space, which makes it a central component in the ROS ecosystem, used in popular robotics stacks including navigation2, moveit2 and Autoware.Auto.
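
For illustration, here is a minimal sketch of those two operations using the standard ROS 2 tf2_ros API that ROBOTCORE Transform stays compatible with. Frame names, offsets and the single-file structure are assumptions for the example; in a real system the broadcaster and the listener typically live in separate, spinning nodes.

```cpp
// Sketch of the two tf2 tree operations: (1) registering transform
// information by broadcasting it, (2) computing a coordinate transformation
// by looking it up. Illustrative only; spinning and retries are omitted.
#include <rclcpp/rclcpp.hpp>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_broadcaster.h>
#include <tf2_ros/transform_listener.h>
#include <tf2/exceptions.h>
#include <geometry_msgs/msg/transform_stamped.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("tf2_sketch");

  // (1) Registration: publish the pose of the laser relative to the base.
  tf2_ros::TransformBroadcaster broadcaster(node);
  geometry_msgs::msg::TransformStamped t;
  t.header.stamp = node->get_clock()->now();
  t.header.frame_id = "base_link";        // parent frame
  t.child_frame_id = "base_link_laser";   // child frame
  t.transform.translation.z = 0.3;        // laser mounted 30 cm above the base
  t.transform.rotation.w = 1.0;           // identity rotation
  broadcaster.sendTransform(t);

  // (2) Computation: query the tree for the transform between two frames.
  tf2_ros::Buffer buffer(node->get_clock());
  tf2_ros::TransformListener listener(buffer);
  try {
    auto base_to_laser = buffer.lookupTransform(
      "base_link", "base_link_laser", tf2::TimePointZero);  // latest available
    RCLCPP_INFO(node->get_logger(), "laser height: %.2f m",
                base_to_laser.transform.translation.z);
  } catch (const tf2::TransformException & ex) {
    // In a real node you would spin and retry until the data arrives.
    RCLCPP_WARN(node->get_logger(), "lookup failed: %s", ex.what());
  }

  rclcpp::shutdown();
  return 0;
}
```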

ROBOTCORE Transform introduces architectural upgrades in the tf2 implementation and leverages hardware acceleration to create customized transform data pipelines that deliver higher throughput and lower latency when reading from (or publishing to) the tf2 tree. ROBOTCORE Transform is served either as source code or as an IP core, which allows your ROS computational graphs to integrate it easily. It is API-compatible with the ROS 2 tf2 library, which simplifies its integration into existing robots, so that you don't spend time reinventing the wheel and re-developing what already works.

Benchmarks

Why tf2?

To keep track of a robot's position, and of the rest of the world in relation to it

tf2 allows roboticists to keep track of multiple coordinate frames over time. It maintains the relationship between coordinate frames in a graph-like data structure (a tree) that is buffered in time, and lets users of the ROS graph transform data between any two coordinate frames at any desired point in time, provided that point is available within the buffers.
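
As a hypothetical illustration of that time-travel capability, the sketch below queries the standard tf2_ros::Buffer for the map-to-base_link transform as it was 0.5 s ago; the lookup only succeeds while that instant remains inside the buffered history. The frame names, the 0.5 s offset, the 100 ms timeout and the helper name are assumptions for the example.

```cpp
// Hypothetical sketch: looking up a transform at a specific past time.
// `buffer` is assumed to be a tf2_ros::Buffer already fed by a
// tf2_ros::TransformListener inside a spinning node.
#include <rclcpp/rclcpp.hpp>
#include <tf2_ros/buffer.h>
#include <geometry_msgs/msg/transform_stamped.hpp>

geometry_msgs::msg::TransformStamped lookup_half_second_ago(
  tf2_ros::Buffer & buffer, const rclcpp::Clock::SharedPtr & clock)
{
  // Desired time: 0.5 s in the past. The query succeeds only if that
  // instant still falls within the buffered history of the tf tree;
  // otherwise lookupTransform() throws a tf2::TransformException.
  rclcpp::Time when = clock->now() - rclcpp::Duration::from_seconds(0.5);

  // Wait up to 100 ms for the required transform to become available.
  return buffer.lookupTransform(
    "map",        // target frame
    "base_link",  // source frame
    when,
    rclcpp::Duration::from_seconds(0.1));
}
```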

Transform data between coordinate frames in time and in space

tf2 is generally best understood via example: consider a robot fleet consisting of 3 robots in a warehouse that start from similar positions. Each robot senses the environment using the laser mounted on its top (base_link_laser frame), as well as the laser readings from the other robots. Integrating all these laser readings on each robot requires knowledge of the relationship, over time and over space, between the laser (base_link_laser frame), the base (base_link frame), the ground level (base_footprint frame), the world frame (map frame), and back to whichever frame ends up being used to compute the sensor fusion, as sketched below. After all, sensor readings can only be integrated when expressed in the same coordinate frame.
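
A minimal sketch of that re-expression step, assuming a detection already available as a geometry_msgs PointStamped in the base_link_laser frame and a tf2_ros::Buffer fed by a listener; the 100 ms timeout and the helper name are illustrative.

```cpp
// Hypothetical sketch: re-express a laser detection in the shared `map`
// frame so that readings from different robots can be fused together.
#include <tf2_ros/buffer.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.hpp>  // doTransform() for geometry_msgs types
#include <geometry_msgs/msg/point_stamped.hpp>
#include <tf2/time.h>

geometry_msgs::msg::PointStamped to_map_frame(
  tf2_ros::Buffer & buffer,
  const geometry_msgs::msg::PointStamped & detection_in_laser)
{
  // detection_in_laser.header.frame_id is expected to be "base_link_laser"
  // and its stamp the acquisition time of the scan. tf2 resolves the chain
  // base_link_laser -> base_link -> base_footprint -> ... -> map at that time.
  return buffer.transform(detection_in_laser, "map",
                          tf2::durationFromSec(0.1));  // wait up to 100 ms
}
```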

Up to 5x faster
tf2 tree operations

Transforming data across coordinate frames is key to ROS computational graphs. tf2 sits at the core of navigation2, moveit2 and Autoware.Auto (among others). For every motion computation, the tf2 tree gets accessed many times. This makes tf2 a very time-sensitive component in high-performance robots, wherein both latency and throughput must be carefully considered. That's where ROBOTCORE Transform comes into play. It introduces architectural upgrades in the tf2 implementation and leverages hardware acceleration, delivering a 5x latency speedup in coordinate transformations.
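
To make the latency sensitivity concrete, here is a hypothetical micro-benchmark sketch that times repeated lookupTransform() calls with std::chrono. It is not the methodology behind the plots below; `buffer`, the frame names and the iteration count are assumptions for illustration.

```cpp
// Hypothetical micro-benchmark: average latency of repeated tf tree lookups,
// as a motion pipeline would issue them on every control or planning cycle.
// `buffer` is assumed to be a populated tf2_ros::Buffer; lookupTransform()
// throws tf2::TransformException if the transform is unavailable.
#include <chrono>
#include <cstdio>
#include <tf2_ros/buffer.h>

void time_lookups(tf2_ros::Buffer & buffer, int iterations = 1000)
{
  using clock = std::chrono::steady_clock;
  const auto start = clock::now();
  for (int i = 0; i < iterations; ++i) {
    // Latest available map -> base_link transform.
    buffer.lookupTransform("map", "base_link", tf2::TimePointZero);
  }
  const auto elapsed = clock::now() - start;
  const double us_per_lookup =
    std::chrono::duration<double, std::micro>(elapsed).count() / iterations;
  std::printf("average lookup latency: %.2f us\n", us_per_lookup);
}
```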

Developer-ready documentation and support

ROBOTCORE Transform is built by seasoned ROS developers, for ROS development. It ships as a complement to ROBOTCORE, either as an IP core or as source code. It includes documentation, examples, reference designs and optional support at various levels.

Ask about support levels

Benchmarks


ROBOTCORE Transform vs. ROS 2 (tf2):

tf tree subscription latency (µs), 2 subscribers
(Worst-case subscription latency measured in a graph with 2 tf tree subscribers. AMD's KV260 board was used for benchmarking.)

tf tree subscription latency (µs), 20-100 subscribers
(Worst-case subscription latency measured in a graph with multiple tf tree subscribers. AMD's KV260 board was used for benchmarking.)

Do you have any questions?

Get in touch with our team.

Let's talk
Case studies