3. Cortex Modes: Belief, Sim, Real

3.1. Learning Objectives

In this tutorial, we will cover Cortex’s different modes of operation. We will see how to:

  1. Bring the system up in belief-only mode.

  2. Bring it up in belief-sim mode for testing the control communication protocol.

  3. Connect to a physical robot.

In each of these modes, we'll test that the system is working properly using the manual control feature.

Prerequisites

  • Completed the Overview and Fundamentals tutorial to understand the basic concepts, especially the separation between the belief robot (the robot making decisions) and sim / real robot (the robot being controlled, either in simulation or reality).

  • Completed the Manually Commanding the Robot tutorial to learn how to manually control the robot’s target prim.

All commands are run from the cortex examples directory:

cd standalone_examples/cortex

3.2. Preliminaries

We need the cortex_control ROS package built for the first half of this tutorial, and we’ll need the cortex_control_franka package for connecting to a physical robot.

We'll describe these instructions in terms of the ros_workspace that comes with Isaac Sim, but for a minimal setup you can create your own workspace (or use an existing one) and simply copy or symlink cortex_control and cortex_control_franka into it.

Build and verify the packages:

source /opt/ros/noetic/setup.bash

cd ../../ros_workspace
catkin_make

source devel/setup.bash
roscd cortex_control  # This should get you into the cortex_control package directory.
roscd cortex_control_franka  # This should get you into the cortex_control_franka package directory.

3.3. Launching into belief-only mode

This is the default mode (and the mode used in the Manually Commanding the Robot tutorial).

# Use the default assets root (see the overview tutorial) and usd env (Franka blocks world):
./cortex launch

# Or equivalently, and more explicitly:
./cortex launch --usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief.usd

The second command parallels how we will bring up belief-sim mode below. Try manually controlling the robot from here.

3.4. Launching into belief-sim mode

To bring Cortex up in belief-sim mode, point it to the belief-sim variant of the environment (which contains two robots, a belief robot and a sim robot) and pass the --enable_ros flag.

./cortex launch --usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief_sim.usd --enable_ros

The belief robot is the robot in front, and the sim robot is the robot in back. Try manually controlling the belief robot. Note that only the belief robot will move because the control communication to the sim robot has not yet been established.

A video accompanying this tutorial shows launching into belief-sim mode and establishing control communication (described next).

3.4.1. Establish control communication

Establish control communication by running the sim_controller from the ROS workspace. Make sure you've built the ros_workspace; see Preliminaries and the Overview and Fundamentals tutorial for instructions.

rosrun cortex_control sim_controller

When the controller runs, you will see the belief robot snap to the configuration of the sim robot. From then on, controlling the belief robot controls the sim robot as well. Note that there is a slight delay of about 0.1 seconds between the two robots' motion that will be visible as you move the belief robot. This delay is set in

../../ros_workspace/src/cortex_control/config/command_stream_interpolator.yaml

as the interpolation_delay parameter; it lets commands be buffered so they can be interpolated robustly despite jitter in ROS messaging. This is a realistic buffer delay, reflective of the interpolation delay we use when communicating with physical robots as well.
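To see why the delay helps, here is a minimal Python sketch of delayed interpolation (illustrative only, not the cortex_control implementation): by evaluating the command stream interpolation_delay seconds in the past, the consumer almost always has received samples on both sides of the query time, even when messages arrive with jitter.

import bisect

# Minimal sketch of delayed interpolation (illustrative only -- not the
# cortex_control implementation). Commands arrive with jittered timestamps;
# evaluating at (now - interpolation_delay) keeps the query time bracketed
# by samples that have already arrived.
class DelayedInterpolator:
    def __init__(self, interpolation_delay=0.1):
        self.delay = interpolation_delay
        self.times = []      # command timestamps in seconds, assumed increasing
        self.positions = []  # joint position commands (lists of floats)

    def add_command(self, stamp, positions):
        self.times.append(stamp)
        self.positions.append(positions)

    def eval(self, now):
        # Interpolate the buffered command stream at (now - delay).
        if not self.times:
            return None
        t = now - self.delay
        i = bisect.bisect_left(self.times, t)
        if i == 0:
            return self.positions[0]
        if i == len(self.times):
            return self.positions[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        alpha = (t - t0) / (t1 - t0)
        return [(1.0 - alpha) * q0 + alpha * q1
                for q0, q1 in zip(self.positions[i - 1], self.positions[i])]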

3.4.2. Details of ROS communication

The --enable_ros flag starts Cortex with two additional extensions, cortex_ros and cortex_sim, which leverage the ROS bridge to establish ROS connections. cortex_ros connects the belief robot and environment to either the sim world or the real world. cortex_sim, when a sim world exists, creates the ROS messaging that mimics what we would expect from a real robot and a perception system: it streams the sim robot's joint states (simulating proprioception) along with ground-truth object transforms (mimicking the output of a perception module), and it receives position commands from the simulated controller.

The full list of (relevant) topics is:

/cortex/gripper/command  # discrete json gripper command
/rmpflow/commands/gripper  # continuous streaming gripper commands
/rmpflow/commands/joint_command  # continuous streaming joint commands
/rmpflow/commands/joint_command/ack  # joint command ack returned from the CommandStreamInterpolator
/rmpflow/commands/joint_command/interpolated  # interpolated commands streamed from the CommandStreamInterpolator
/rmpflow/commands/joint_command/suppress  # cortex suppression commands from CommandStreamInterpolator
/robot/joint_state  # joint states published from sim robot via cortex_sim
/tf  # object transforms streamed from the sim perception via cortex_sim
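If you want to sanity-check these streams, a few lines of rospy will do. The sketch below is illustrative; it assumes /robot/joint_state carries standard sensor_msgs/JointState messages and /tf carries tf2_msgs/TFMessage, which the tutorial does not spell out.

#!/usr/bin/env python
# Minimal monitor for the cortex_sim streams. The message types here
# (sensor_msgs/JointState, tf2_msgs/TFMessage) are assumptions for
# illustration.
import rospy
from sensor_msgs.msg import JointState
from tf2_msgs.msg import TFMessage

def on_joint_state(msg):
    rospy.loginfo_throttle(1.0, "sim joints: %s" % list(msg.position))

def on_tf(msg):
    frames = [t.child_frame_id for t in msg.transforms]
    rospy.loginfo_throttle(1.0, "perceived frames: %s" % frames)

if __name__ == "__main__":
    rospy.init_node("cortex_stream_monitor")
    rospy.Subscriber("/robot/joint_state", JointState, on_joint_state)
    rospy.Subscriber("/tf", TFMessage, on_tf)
    rospy.spin()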

Inside the behavior scripts, the gripper is generically (robot-independently) controlled by joint commands similar to the arm joints. However, in many cases, it’s important to re-interpret those continuous control signals as discrete gripper commands. For instance, the Franka robot has its own ROS service interface for commanding the gripper to move to a set point or close until it feels a force. To support these types of grippers, we transform the continuous gripper joint commands into discrete commands and publish them on /cortex/gripper/command as a json string. The mapping is:

  1. A commanded gripper width less than 0 translates to a close_to_grasp command.

  2. A commanded gripper width greater than or equal to 0 translates to a move_to command with the commanded width as its parameter.

Specifically, these json strings look like:

data: "{\"command\": \"move_to\", \"width\": 0.1}"
data: "{\"command\": \"close_to_grasp\"}"

When communicating with a physical Franka robot, we use franka_gripper_command_relay.py to translate these generic discrete commands into Franka-specific gripper commands.

3.5. Controlling physical robots

Controlling a physical robot requires a controller that implements the connection and communication protocol used to sync with Cortex. The cortex_control tools provide a reference implementation of this protocol in an interpolation tool:

Warning

To run Cortex in this mode you need a physical Franka robot with the Franka Control Interface (FCI) and franka_ros installed, along with a real-time control workstation set up as described in the franka_ros documentation. Without that setup, the cortex_control_franka commands will error.

cortex_control/src/cortex/control/command_stream_interpolator.h
cortex_control/src/cortex/control/command_stream_interpolator.cpp
cortex_control/src/cortex/control/command_stream_interpolator_main.cpp

The header and implementation files define the CommandStreamInterpolator class, which encapsulates an interpolator that can connect to Cortex via ROS following the required connection and synchronization protocol. The _main.cpp file uses that tool to implement the sim_controller binary used above to connect with the simulated robot.
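Schematically, the controller side of the protocol subscribes to the command stream and acknowledges what it receives. The sketch below only illustrates that message flow using the topics listed earlier; the message types are assumptions, and the real synchronization logic lives in CommandStreamInterpolator.

#!/usr/bin/env python
# Schematic sketch of the controller side of the command stream protocol.
# Illustration only: the real logic lives in CommandStreamInterpolator, and
# the message types below are assumptions, not the actual wire types.
import rospy
from sensor_msgs.msg import JointState

class MinimalCommandSink:
    def __init__(self):
        # Receive the continuous joint command stream from Cortex.
        self.sub = rospy.Subscriber(
            "/rmpflow/commands/joint_command", JointState, self.on_command)
        # Acknowledge received commands back to Cortex (placeholder type).
        self.ack_pub = rospy.Publisher(
            "/rmpflow/commands/joint_command/ack", JointState, queue_size=10)

    def on_command(self, msg):
        # A real controller would buffer and interpolate the stream (see the
        # interpolation sketch above) before forwarding it to the robot.
        self.ack_pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("minimal_command_sink")
    MinimalCommandSink()
    rospy.spin()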

To launch the system for controlling a physical robot, we’ll launch into belief-only mode and use the --enable_ros flag:

# Terminal 1: Start a roscore
source /opt/ros/noetic/setup.bash
roscore

# Terminal 2: Launch cortex loop runner passing in the blocks world USD env.
./cortex launch \
    --usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief.usd \
    --enable_ros

Launching into belief-only mode this way loads the belief robot and establishes its ROS interface for connecting with a physical robot. It's important not to use the belief-sim environment here, since both the sim robot and the physical robot would then be trying to connect to the belief robot.

At this point, we can run behaviors, but the system will only command the belief robot. The physical robot isn’t yet connected.

Now start up the physical Franka robot, and start the cortex_control_franka controllers on the Franka real-time machine using the commands in the code block below. See the franka_ros documentation for instructions on how to set up Franka ROS and the real-time machine.

Note that at the point where we launch the joint position controller in Terminal 3 below, you should see the Cortex belief robot synchronize with the physical robot. The controller engages the physical robot at that point, so you may see some slight movement, but it shouldn't be much since the belief has synchronized with the physical robot's configuration. Make sure you have the e-stop ready in case anything goes wrong.

Warning

Make sure you have the e-stop in hand when running the following commands in case anything goes wrong. In particular, the command in Terminal 3 (starting the position controller) will engage the physical robot.

# Terminal 1: Start the Franka controller manager
roslaunch cortex_control_franka franka_control_cortex.launch

# Terminal 2: Set high torque thresholds for Franka
rosrun cortex_control_franka set_high_collision_thresholds

# Terminal 3: Start the position controller -- launching this controller syncs the belief with the physical robot.
roslaunch cortex_control_franka joint_position_controller.launch

# Terminal 4: Start the gripper command relay
rosrun cortex_control_franka franka_gripper_command_relay.py

Note

You will have to source the appropriate ROS workspace to run the above commands.

Now when we run behaviors we’ll see the physical robot following the simulated robot. Try the following:

./cortex activate franka/open_gripper.py  # Opens the physical gripper
./cortex activate franka/close_gripper.py  # Closes the physical gripper
./cortex activate go_home.py  # Sends the robot to its home position

Once the robot gets to its home position, you’ll also be able to control the physical robot manually by moving the motion_controller_target prim as before.