1. Overview and Fundamentals

Cortex is currently a preview release intended to give users a feel for where we’re headed with models of behavioral design and deployment on physical robots. The framework and its APIs are still evolving and are likely to change over time.

Cortex uses Isaac Sim to create a belief representation of the robot that uses its understanding of the world to make decisions. You can think of this belief as what lives inside the mind of the robot. The basic belief is entirely independent of ROS, but we can use ROS to connect the physical world to this internal belief world and synchronize the belief with reality in real time. This synchronization involves perception information streaming in, so the belief system knows where objects really are in the physical world, and control signals streaming out, so the physical robot mimics the movement of the belief robot in real time.
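
To make this loop concrete, here is a minimal sketch of one synchronization step. All class and method names here (`Belief`, `sync_step`, and so on) are illustrative stand-ins, not actual Cortex APIs:

```python
# Illustrative sketch of the Cortex synchronization loop; the classes and
# method names are hypothetical stand-ins, not omni.isaac.cortex APIs.

class Belief:
    """The robot's internal model of the world."""
    def __init__(self):
        self.object_poses = {}
        self.joint_positions = [0.0] * 7

    def tick_behavior(self):
        # A real behavior would decide and command motion here; this stub
        # just nudges the first joint to show the belief robot "moving".
        self.joint_positions[0] += 0.01

def sync_step(belief, observed_poses, send_command):
    # Perception streaming in: snap believed object poses to observations.
    belief.object_poses.update(observed_poses)
    # Decision making happens against the updated belief.
    belief.tick_behavior()
    # Control streaming out: the sim/physical robot mimics the belief robot.
    send_command(belief.joint_positions)

commands = []
belief = Belief()
sync_step(belief, {"blue_block": (0.4, 0.0, 0.05)}, commands.append)
print(belief.object_poses["blue_block"])  # the observed pose is now believed
print(commands[0][0])                     # first streamed joint command
```

Each tick, observations overwrite the believed object poses before the behavior decides, and the resulting joint command is streamed to whichever robot (sim or physical) is connected.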

Isaac Cortex architecture.

To test these ROS connections, we can use a sim world: a simulated replica of the physical robot and environment that implements the required ROS communication protocols. With both the belief and sim robots running, the belief robot can make decisions based on what it believes about the world while the simulated and belief worlds remain in sync.

Then, we can simply swap the sim world for the physical world to execute the same behavior on a physical robot that implements the same ROS communication protocol.

An important part of Cortex is the decision framework for programming the robot’s behavior. Many of the tutorials here focus on different aspects of the decision framework, especially decider networks and design patterns for reactive behavior.


The decision framework is meant to be flexible rather than prescriptive. We provide tooling around common design patterns we’ve found invaluable at NVIDIA, especially decider networks for designing reactive behavior for collaborative robots. But not all problems will fit these patterns. We expect, and hope, that the framework will be used in creative ways we haven’t anticipated. Our aim is to keep the framework flexible and inviting to new ideas and patterns in behavioral design.

For more information, see Nathan Ratliff’s Isaac Cortex GTC22 talk.

1.1. Learning Objectives

The goal of this tutorial is to give a high-level picture of what Cortex is and how to run it. We’ll take a look at a block stacking example and step through the different ways we can set up Cortex for behavior generation, simulated control, and real-world control.


  • The reader should be familiar with Isaac Sim and its core Python scripting API.

1.2. Terminology

Common terms used throughout this tutorial series.

  • Belief world/robot The model of the world and robot used to make decisions. You can think of this model as living inside the mind of the robot.

  • Sim world/robot A simulated version of the real world. The belief robot/world synchronizes with the sim robot/world using perception (or ground truth) observations and control signals.

  • Real world The physical world where the actual robot and objects live. To synchronize with Cortex, the physical robot and perception module need to communicate with Cortex in the same way the sim world/robot does. In that sense, the ROS tooling around the sim world/robot can be viewed as a reference implementation of the required real-world protocol.

  • Belief-only mode Cortex is launched with only the belief robot in the scene and no ROS communication. This mode can be used for basic programming of the behaviors.

  • Belief-sim mode Cortex is launched with both a belief and a sim robot in the scene and ROS communication turned on. This mode can be used for testing the full communication loop with realistic delays before controlling a physical robot. It’s usually used after designing a behavior in belief-only mode.

  • Belief-physical mode Cortex is launched with only a belief robot, but with ROS communication turned on. This mode is used to communicate and synchronize with physical robots.

  • cortex_control The ROS package used to connect belief robots to either simulated or physical robots. It provides a tool called CommandStreamInterpolator, which implements the synchronization protocol for syncing with Cortex and interpolates the stream of commands for control. That tool is also used in a sim_controller binary to connect the belief robot to the sim robot.

  • cortex_control_franka A library, depending on cortex_control, that implements a real-time Franka controller using franka_ros and the CommandStreamInterpolator. This controller is the real-world counterpart to the sim_controller reference implementation and can be used to control physical Franka robots from Cortex.

  • Behavior Any Python object with a tick() method. Behaviors control the belief robot to perform tasks.

  • Decision framework The Cortex framework that loads and ticks behaviors, along with the tooling for designing them.

  • State machine Standard finite-state machine model which can be used to implement behaviors.

  • Decider network A behavior model better suited to reactive decision making, characterized by its direct modeling of hierarchical decisions. It’s similar to a decision tree, but can be a more general acyclic graph, and has a built-in notion of statefulness, making it compatible with state machines.
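
To make the Behavior and decider network terminology concrete, here is a minimal, framework-free sketch. These classes are illustrative only; they are not the omni.isaac.cortex base classes:

```python
# Minimal illustration of "a behavior is any object with a tick() method"
# and of a tiny hierarchical decider. Illustrative only; not Cortex APIs.

class Behavior:
    """Any object with a tick() method; here tick() just counts calls."""
    def __init__(self, name):
        self.name, self.ticks = name, 0

    def tick(self, state):
        self.ticks += 1  # a real behavior would command the belief robot

class Decider:
    """Routes each tick to one child based on the current state."""
    def __init__(self, choose, children):
        self.choose = choose      # callable: state -> child key
        self.children = children  # dict: key -> behavior (or sub-decider)

    def tick(self, state):
        self.children[self.choose(state)].tick(state)

pick, lift = Behavior("pick"), Behavior("lift")
root = Decider(lambda s: "lift" if s["grasped"] else "pick",
               {"pick": pick, "lift": lift})

root.tick({"grasped": False})  # decision routes to pick
root.tick({"grasped": True})   # decision routes to lift
```

Because the decision is re-evaluated on every tick, a change in the world state (here, the `grasped` flag) immediately reroutes control, which is the essence of the reactive style decider networks support.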

1.3. Launching Cortex with a USD Environment

This tutorial runs a demo of Cortex (Franka block stacking). Generally, it assumes scripts are run from the standalone_examples/cortex directory. Note that many of the commands listed below will have to be run in separate terminals. A convenient tool for organizing the terminals is Terminator.

When launching Cortex below using ./cortex launch, the environment path specified by --usd_env should be given relative to the assets root. On most installations, that root is omniverse://localhost/NVIDIA/Assets/Isaac/2022.1, but by default the root is detected automatically, so you don’t need to specify it. For instance, below we use the flag

--usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief.usd

rather than specifying the full path

--usd_env=omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief.usd

If your assets are installed or copied elsewhere, you can specify the root explicitly using the --assets_root flag. For instance,

./cortex launch --assets_root=<your_assets_root> \
    --usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief.usd

See the section Details of launching Cortex below for more information.

1.4. Basic startup and manual control

These first two sections will start Cortex with a belief robot only. We’ll look at starting both a belief and a sim robot connected by ROS after that.


All file paths in this tutorial and others in the Cortex series are given relative to standalone_examples/cortex.

Start up Cortex:

cd standalone_examples/cortex  # This line will be omitted in future code blocks.
./cortex launch --usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief.usd

Note that Cortex defaults to this environment (Franka, belief-only) if --usd_env isn’t specified.

Select the belief world’s motion controller target prim (see Manually Commanding the Robot for more information), and use the Move tool from the toolbar on the left of the viewport to manually drag it around. The belief robot’s end-effector should follow.

Send the robot home:

./cortex activate go_home.py

Once the robot arrives at its home configuration, you can again manually control it by dragging the motion controller target.

1.5. Block stacking demo

Launch the block stacking demo behavior:

./cortex activate franka/build_block_tower.py

While the block stacking demo is running, you can interact with the blocks. Move them around and see the robot react. You can even disturb the block tower.

Note that there are no singulation behaviors currently (pushing blocks away from each other or away from the tower region), so if blocks end up too close to each other, there might be errors in execution. The system should still be pretty robust to those – try helping it out and see the robot react and pick up from where it needs to.

If the tower ends up in the wrong order, the robot will deconstruct the tower before reconstructing it in the right order. Try the following.

Let the robot build at least part of the block tower. Then select the bottom block and use the Move widget to drag it out from under the other blocks. The upper portion of the block tower will fall into the gap left by the removal of the lower block, and the resulting tower will be out of order. The robot will immediately begin deconstructing the tower and reconstructing it in the right order.

1.6. Using ROS communication for synchronization

This section steps through how to start up Cortex with both belief and sim robots and how to connect them via ROS.

See Cortex Modes: Belief, Sim, Real for details on leveraging the ROS communication and synchronization capabilities of Cortex.

1.6.1. Preliminaries: Build the Cortex control ROS package

For this tutorial, we need to build the cortex_control ROS package. We’ll give instructions here for building the complete ros_workspace that comes with Isaac Sim, but for a minimal workspace, you can create your own (or use an existing one) and simply copy or symlink cortex_control into it. Note that for connecting to physical robots, you’ll also want to build cortex_control_franka, as described in Cortex Modes: Belief, Sim, Real.

Building and verifying cortex_control:

source /opt/ros/noetic/setup.bash

cd ../../ros_workspace
catkin_make

source devel/setup.bash
roscd cortex_control  # This should get you into the package directory.

1.6.2. Launching a belief-sim environment and establishing control

Try the following.

Start a roscore.

Start cortex with a belief and sim variant of the blocks world:

./cortex launch \
    --usd_env=Isaac/Samples/Cortex/Franka/BlocksWorld/cortex_franka_blocks_belief_sim.usd \
    --enable_ros
Note the differences between this command and the one above. We pass it the ..._belief_sim.usd variant of the world along with the flag --enable_ros. You should see the environment load with two robots. The robot in front is the belief robot (under prim /cortex/belief); the robot in back is the sim robot (under prim /cortex/sim).

Start the block stacking demo:

./cortex activate franka/build_block_tower.py

You should see the belief robot start into the block stacking procedure. But since the controller isn’t currently running, the sim robot won’t be following the belief robot.

The sim environment is constantly streaming the ground truth poses of the blocks to the belief, and the block stacking behavior is set up to constantly synchronize the belief with those ground truth poses it receives. Therefore, you’ll see the belief robot try to pick up the first block, realize that the block isn’t moving in the (simulated version of the) real world, snap that block belief back to its original location, then try again.

It’ll repeat trying to pick up the block and failing until we connect the two robots and get the sim robot to follow the belief to make a real change to the simulated world.
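
The snap-back described above can be sketched as a simple pose correction: if the observed (ground truth) pose disagrees with the believed pose by more than a tolerance, the belief is reset to the observation. The function below is hypothetical and only illustrates the idea, not Cortex’s actual implementation:

```python
import math

def correct_belief(believed, observed, tol=0.01):
    """Snap the believed pose back to the observed (ground truth) pose
    whenever they diverge by more than `tol` meters. Illustrative only."""
    return tuple(observed) if math.dist(believed, observed) > tol else tuple(believed)

# The belief thinks it lifted the block, but the observation says the
# block never moved, so the belief snaps back to the observed location.
believed = (0.4, 0.0, 0.30)   # belief: block in the gripper, lifted
observed = (0.4, 0.0, 0.05)   # ground truth: block still on the table
print(correct_belief(believed, observed))  # (0.4, 0.0, 0.05)
```

Once the sim robot actually follows the belief robot, the believed and observed poses agree within tolerance, so the correction becomes a no-op and the procedure makes progress.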

Additionally, try manually moving the block the robot is trying to manipulate (initially the blue block) in the simulated world. You’ll see the belief block follow and the robot react to that. But still, the belief robot is unable to affect the simulated world because control hasn’t been launched.

Now start the controller. From a terminal with the cortex_control ROS package built and sourced, run

rosrun cortex_control sim_controller

This simulated controller will accept commands from the belief robot, interpolate them, and stream low-level commands to the simulated robot at a higher rate, mimicking the control flow used on physical robots.
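
The interpolation idea can be sketched as follows: upsample a low-rate command stream by linearly interpolating between consecutive commands. This is a simplification of the role CommandStreamInterpolator plays, not its actual implementation:

```python
# Sketch of upsampling a low-rate command stream to a higher control rate
# by linear interpolation. This simplifies the role of
# CommandStreamInterpolator; it is not the cortex_control implementation.

def interpolate_stream(commands, substeps):
    """Yield `substeps` linearly interpolated setpoints per command interval.

    commands: list of joint-position vectors arriving at the low rate.
    substeps: high-rate steps per low-rate interval (e.g. 1000 Hz / 60 Hz).
    """
    for prev, nxt in zip(commands, commands[1:]):
        for k in range(substeps):
            alpha = k / substeps
            yield [(1 - alpha) * p + alpha * n for p, n in zip(prev, nxt)]
    yield list(commands[-1])  # finish exactly on the last command

# Two low-rate commands for a 2-DOF robot, upsampled 4x.
lowrate = [[0.0, 1.0], [0.4, 1.0]]
highrate = list(interpolate_stream(lowrate, 4))
print(highrate[0])  # [0.0, 1.0]
print(highrate[2])  # [0.2, 1.0]
```

On a real system the interpolator must also handle command timestamps and network jitter; the point here is only that low-rate decisions become a smooth high-rate setpoint stream for the low-level controller.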

At this point, you’ll see the belief robot snap to a configuration matching the simulated robot, and then continue reaching toward that first block. This time the simulated robot will follow the belief and actually pick up that first block. Now, with control running, the belief robot is making a real impact on (the simulated version of) reality so the procedure can make progress.

If you leave it running, both robots will run in synchrony and build the block tower. The belief robot is the one making the decisions, and the simulated robot is following that belief robot closely, in real time, performing the same operations in (the simulated version of) reality.

1.7. Details of launching Cortex

Here’s a full listing of the flags for launching Cortex:

usage: cortex launch [-h] [--assets_root ASSETS_ROOT] [--usd_env USD_ENV]
                     [--enable_ros] [--position_only] [--loop_fast]
                     [--print_stage_prims_on_startup] [--print_diagnostics]
                     [--suppress_behaviors] [--test]

optional arguments:
  -h, --help            show this help message and exit
  --assets_root ASSETS_ROOT
                        Assets root path. If None (default), defaults to using
                        the built-in get_assets_root_path() helper, which
                        typically reports
                        'omniverse://localhost/NVIDIA/Assets/Isaac/2022.1' on
                        most installations.
  --usd_env USD_ENV     Relative path to the USD environment to load. This
                        path will be relative to the --assets_root. By
                        default, it points to the example Franka blocks world.
  --enable_ros          Enable cortex ROS-based extensions for communicating
                        with physical robots.
  --position_only       Control only the position, not the orientation.
  --loop_fast           Usually uses a steady step of 60 Hz. Setting this flag
                        tells the system to step as fast as it can.
  --print_stage_prims_on_startup
                        Prints the stage prims when the environment is first
                        loaded during startup.
  --print_diagnostics   Print diagnostic information, including profiling.
  --suppress_behaviors  If set, suppresses the behaviors. Useful for
                        diagnosing issues.
  --test                Run a simple bringup test to make sure the cortex
                        system starts.

The most relevant are --usd_env and --enable_ros, as described above. In addition, --position_only sets the Cortex commander into position-only mode from startup; it can be set back to full pose using set_commander_to_full_pose.py. By default, the system loops at 60 Hz; use --loop_fast to let the loop runner spin as quickly as it can.

The rest of the flags are for diagnostics.

1.8. Available pre-made environments

Here’s a list of pre-made environments:


The belief robot (under belief) will be controlled by the behaviors in all cases. When using a ..._belief_sim.usd world, a sim version of the world is loaded as well, offset from the belief world. The sim robot, when present in the environment, is only accessible from Cortex if --enable_ros is passed. If it is, starting the sim_controller will synchronize the two robots, and you’ll see the simulated robot follow the belief robot.

See Using ROS communication for synchronization as well as Cortex Modes: Belief, Sim, Real for more details on connecting the sim and belief robots using control. See also exts/omni.isaac.cortex/docs/README.md for details on the USD conventions used to set up the worlds.

1.9. More information

See the Cortex extension’s README.md file (exts/omni.isaac.cortex/docs/README.md) for detailed documentation.