
Welcome to RobotFlow Docs

The RobotFlow project launched in mid-2020, initiated by the Robot AI Lab at MVIG, SJTU. It started as an internal project for labmates. We soon found that, for a lab with a pure computer-vision background, bridging the engineering gap toward robotics takes a lot of time. So we decided to also make it useful for the community: researchers who, like us, want to steer their work toward embodied AI, or who may find some of our tools useful for their own research.

Writing documentation is a tedious and time-consuming process. Since we still need to focus on research, documentation may progress rather slowly; we are trying to catch up. Please bear with us, thanks! If you have any problem, you can post it in our forum.

Target audience

RobotFlow is determined to democratize robotics research. We hope to lower the barrier to entry for non-roboticists to participate in the exciting field of embodied AI and robot learning. For roboticists, RobotFlow works with ROS.

Due to our research focus, we have validated RobotFlow for manipulation, but not for locomotion. We welcome experts in locomotion to help us improve the system.

Wrapping the Loop in a unified interface

We aim to wrap all the components of RobotFlow in unified interfaces:

  1. Python API
  2. Unity-based GUI

With unified interfaces, users can call all the components with a consistent experience. This unified interface is still under development; we expect to release it late this year.
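Since the unified Python API is not yet released, nothing below reflects the actual RobotFlow interface. Purely as an illustrative sketch, a "consistent experience" across components could mean every component sharing the same lifecycle methods; all class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch only: the unified RobotFlow Python API is unreleased,
# and none of these names come from RobotFlow. The idea illustrated is that
# every component (perception, planning, control, ...) exposes the same
# small set of lifecycle methods, so users interact with each one the same way.

class Component(ABC):
    """Common interface shared by all components (illustrative)."""

    @abstractmethod
    def setup(self, config: dict) -> None:
        """Configure the component before use."""

    @abstractmethod
    def step(self, observation: dict) -> dict:
        """Consume one observation and return this component's output."""

class Perception(Component):
    """Toy stand-in for a multi-modal perception component."""

    def setup(self, config: dict) -> None:
        self.modalities = config.get("modalities", ["rgb"])

    def step(self, observation: dict) -> dict:
        # A real perception module would run inference here; this stub
        # just echoes which modalities it was configured with.
        return {"detections": [], "modalities": self.modalities}

perception = Perception()
perception.setup({"modalities": ["rgb", "depth"]})
result = perception.step({"rgb": None, "depth": None})
print(result["modalities"])  # ['rgb', 'depth']
```

The design choice sketched here (a shared abstract base class) is one common way to make heterogeneous components feel uniform from Python; the actual released API may take a different shape.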

Until then, we suggest you refer to the docs of each component.

  • RFIMU: Visual-inertial localization setup. Hardware design included.
  • RFDigit
  • RFVision: Multi-modal perception library.

Plan & Control