9.1. Overview & Getting Started

A variety of reinforcement learning tasks are provided in OmniIsaacGymEnvs. Please pull the latest from the main branch so that the contents stay in sync with the latest Isaac Sim release. If you have a previous release of OmniIsaacGymEnvs checked out, run git pull origin main to update to the latest.

The Omniverse Isaac Gym extension provides an interface for performing reinforcement learning training and inferencing in Isaac Sim. This framework simplifies the process of connecting reinforcement learning libraries and algorithms with other components in Isaac Sim. Similar to existing frameworks and environment wrapper classes that inherit from gym.Env, the Omniverse Isaac Gym extension also provides an interface inheriting from gym.Env and implements a simple set of APIs required by most common RL libraries. This interface can be used as a bridge connecting RL libraries with physics simulation and tasks running in the Isaac Sim framework.

We can view the RL ecosystem as three main pieces: the Task, the RL policy, and the Environment wrapper that provides an interface for communication between the task and the RL policy.

The Task is where the main task logic is implemented, such as computing observations and rewards. This is where we can collect states of actors in the scene and apply controls or actions to our actors. Omniverse Isaac Gym allows tasks to be defined following the BaseTask definition in omni.isaac.core. This provides flexibility for users to re-use task implementations for both RL and non-RL use cases.
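
As an illustration, a task in OmniIsaacGymEnvs typically subclasses the framework's RLTask helper (built on top of BaseTask) and fills in a small set of callbacks. The outline below is only a sketch: the method names follow the callbacks described in the OmniIsaacGymEnvs framework documentation, but the base-class import path and signatures may vary between releases, and the method bodies are placeholders.

# Minimal outline of an OmniIsaacGymEnvs-style task (sketch only, not a complete task).
from omniisaacgymenvs.tasks.base.rl_task import RLTask

class MyCartpoleTask(RLTask):
    def set_up_scene(self, scene) -> None:
        # add robots/props to the stage and register views for vectorized access
        super().set_up_scene(scene)

    def pre_physics_step(self, actions) -> None:
        # apply the policy's actions (e.g. joint efforts) before each physics step
        ...

    def get_observations(self) -> dict:
        # gather actor states into the observation buffer
        ...

    def calculate_metrics(self) -> None:
        # compute per-environment rewards
        ...

    def is_done(self) -> None:
        # flag environments that should be reset
        ...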

The main purpose of the Omniverse Isaac Gym extension is to provide Environment Wrapper interfaces that allow RL policies to communicate with simulation in Isaac Sim. As a base interface, we provide a class named VecEnvBase, a vectorized interface inheriting from gym.Env that implements common RL APIs. This class can also be easily extended to support RL libraries that require additional APIs by creating a new derived class.

Commonly used APIs provided by the base wrapper class VecEnvBase include the following (a short usage sketch follows the list):

  • render(self, mode: str = "human"): renders the current frame

  • close(self): closes the simulator

  • seed(self, seed: int = -1): sets a seed. Use -1 for a random seed.

  • step(self, actions: Union[np.ndarray, torch.Tensor]): triggers task pre_physics_step with actions, steps simulation and renderer, computes observations, rewards, dones, and returns state buffers

  • reset(self): triggers task reset(), steps simulation, and re-computes observations
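
To make the API list above concrete, the snippet below sketches how these calls fit together in a standalone script. It is a minimal sketch, not a full training script: the task module imported here is hypothetical, and the action shape depends on the task and the number of environments.

# Sketch of the gym.Env-style loop exposed by VecEnvBase.
import torch
from omni.isaac.gym.vec_env import VecEnvBase

env = VecEnvBase(headless=True)      # creates the SimulationApp and the env wrapper

# Isaac Sim modules (and tasks that use them) must be imported after the app is created.
from my_cartpole_task import CartpoleTask   # hypothetical module with a BaseTask-style task
task = CartpoleTask(name="Cartpole")
env.set_task(task, backend="torch")  # attach the task to the simulation

env.seed(42)                         # fixed seed; use -1 for a random seed
obs = env.reset()                    # triggers task reset() and recomputes observations
for _ in range(100):
    actions = torch.zeros(env.action_space.shape, dtype=torch.float32)  # placeholder zero actions
    obs, rewards, dones, info = env.step(actions)  # pre_physics_step -> simulate -> buffers
env.close()                          # shuts down the simulator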

For more details on the RL tasking framework, please refer to the RL Framework page in OmniIsaacGymEnvs.

9.1.1. Learning Objectives

In this tutorial, we will set up our reinforcement learning example repository: OmniIsaacGymEnvs. We will

  1. Install OmniIsaacGymEnvs for Isaac Sim

  2. Run inferencing and training examples in OmniIsaacGymEnvs

  3. Install OmniIsaacGymEnvs in Docker

  4. Run OmniIsaacGymEnvs examples with LiveStream

10-15 Minute Tutorial

9.1.2. Getting Started

9.1.3. Installing Examples Repository

To set up these examples, first clone the repository:

git clone https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs.git

We can install the examples as a python module in Isaac Sim. Locate the Isaac Sim python executable: by default, this is python.sh on Linux or python.bat on Windows, located at the root of the Isaac Sim directory. We will refer to this path as PYTHON_PATH.

To set a PYTHON_PATH variable in the terminal that links to the python executable, run the command below that matches your platform, updating the path to your local installation.

For Linux: alias PYTHON_PATH=~/.local/share/ov/pkg/isaac_sim-*/python.sh
For Windows: doskey PYTHON_PATH=C:\Users\user\AppData\Local\ov\pkg\isaac_sim-*\python.bat $*
For IsaacSim Docker: alias PYTHON_PATH=/isaac-sim/python.sh

Install OmniIsaacGymEnvs to PYTHON_PATH by running the following from the root of OmniIsaacGymEnvs:

PYTHON_PATH -m pip install -e .

The following error may appear during the initial installation. This error is harmless and can be ignored.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
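
To verify that the editable install succeeded, one quick check (assuming the package name omniisaacgymenvs used by the repository's setup) is:

PYTHON_PATH -m pip show omniisaacgymenvs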

Below is a list of current environments in OmniIsaacGymEnvs:

  • Dexterous Manipulation Tasks:
    • AllegroHand

    • ShadowHand

    • ShadowHandOpenAI_FF

    • ShadowHandOpenAI_LSTM

  • Locomotion Tasks:
    • Ant

    • Anymal

    • AnymalTerrain

    • Humanoid

  • Factory Environments:
    • FactoryTaskNutBoltPick

    • FactoryTaskNutBoltPlace

    • FactoryTaskNutBoltScrew

  • Deformable Tasks:
    • FrankaDeformable

  • Copter Environments:
    • Crazyflie

    • Ingenuity

    • Quadcopter

  • Others:
    • BallBalance

    • Cartpole

    • FrankaCabinet

For a more detailed explanation of each task, please refer to the RL Examples page in OmniIsaacGymEnvs.

9.1.4. Running Examples

Example scripts should be launched from the OmniIsaacGymEnvs/omniisaacgymenvs directory.

9.1.4.1. Launching Training Examples

To train your first policy, run:

PYTHON_PATH scripts/rlgames_train.py task=Cartpole

We will see an Isaac Sim window pop up. Once Isaac Sim initialization completes (which may take a few minutes if launching for the first time), the Cartpole scene will be constructed and simulation will start running automatically. The process will terminate once training finishes.

[Animation: Cartpole training running in Isaac Sim]
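
Training behavior can be adjusted through Hydra-style overrides on the command line. For example, the viewer can be disabled to speed up training; additional overrides (such as num_envs) are described in the OmniIsaacGymEnvs documentation.

PYTHON_PATH scripts/rlgames_train.py task=Cartpole headless=True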

9.1.4.2. Running Inference

To load a trained checkpoint and perform inference (no training), pass test=True as an argument, along with the checkpoint name.

PYTHON_PATH scripts/rlgames_train.py task=Cartpole test=True checkpoint=runs/Cartpole/nn/Cartpole.pth

9.1.4.3. Inferencing with Pre-Trained Checkpoints

Pre-trained checkpoints are provided for each task on the Nucleus server, under Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints.

To load a pre-trained checkpoint and run inferencing, run:

PYTHON_PATH scripts/rlgames_train.py task=Cartpole test=True checkpoint=omniverse://localhost/NVIDIA/Assets/Isaac/2023.1.1/Isaac/Samples/OmniIsaacGymEnvs/Checkpoints/cartpole.pth

9.1.5. Installing Examples Repository in Isaac Sim Docker

OmniIsaacGymEnvs (OIGE) provides utility scripts for running its environments in an Isaac Sim Docker container. The latest Isaac Sim Docker image can be found on NGC. A utility script is provided at docker/run_docker.sh to help initialize the OIGE repository and launch the Isaac Sim Docker container. The script can be run from the OIGE repo root with the following command:

./docker/run_docker.sh

Then, training can be launched from the container with:

/isaac-sim/python.sh scripts/rlgames_train.py headless=True task=Ant

To run the Isaac Sim Docker container with the UI enabled, use the following script:

./docker/run_docker_viewer.sh

Then, training can be launched from the container with:

/isaac-sim/python.sh scripts/rlgames_train.py task=Ant

Alternatively, a Dockerfile is also provided for building an image with OIGE pre-installed, which avoids re-installing OIGE each time a container is launched. To build the image, run:

docker build -t isaac-sim-oige -f docker/dockerfile .

Then, start a container with the built image:

./docker/run_dockerfile.sh

Then, training can be launched from the container with:

/isaac-sim/python.sh scripts/rlgames_train.py task=Ant headless=True

9.1.6. Running with LiveStream

OmniIsaacGymEnvs also supports livestreaming through the Omniverse Streaming Client. To enable this feature, add the command-line argument enable_livestream=True:

PYTHON_PATH scripts/rlgames_train.py task=Ant headless=True enable_livestream=True

Connect from the Omniverse Streaming Client once the SimulationApp has been created. Note that enabling livestream is equivalent to training with the viewer enabled, so training/inferencing will be slower than in headless mode.

9.1.7. Summary

This tutorial covered the following topics:

  1. Installation of OmniIsaacGymEnvs

  2. Running training examples in OmniIsaacGymEnvs

  3. Running inferencing examples in OmniIsaacGymEnvs

  4. Setting up OmniIsaacGymEnvs in Docker

  5. Running OmniIsaacGymEnvs examples with LiveStream

9.1.7.1. Next Steps

Continue on to the next tutorial in our Reinforcement Learning Tutorials series, Creating a new RL Example in OmniIsaacGymEnvs, to learn about the tasking framework for OmniIsaacGymEnvs.

If you are interested in setting up a new reinforcement learning task outside of OmniIsaacGymEnvs, please see the tutorial, Custom RL Example using Stable Baselines, to learn about using the stable baselines library with Isaac Sim.

9.1.7.2. Further Learning

  • For more details on the RL examples, please refer to the README page in OmniIsaacGymEnvs.

  • For more information on the frameworks used in OmniIsaacGymEnvs, please refer to the docs directory.