2. Getting Started with Cloner¶

Training reinforcement learning policies can often benefit from collecting trajectories from vectorized copies of environments performing the same task. The Cloner interface is designed to simplify the environment design process for such a scene by providing APIs that allow users to clone a given environment as many times as desired.

In addition to providing cloning functionality, the Cloner interface also provides utilities to generate target paths, automatically compute target transforms, and filter out collisions between clones.

5-10 Minute Tutorial

2.1. Learning Objectives¶

In this tutorial, we will walk through the Cloner interface. We will

1. Set up an example using the Cloner class

2. Set up an example using the GridCloner class


2.2. Getting Started¶

We will first launch Isaac Sim and enable the Cloner extension. Open the Extensions window from the UI by navigating to Window > Extensions from the top menu bar. Find the Isaac Sim Cloner extension (omni.isaac.cloner) and enable it via the toggle switch to the right of the extension name.

Next, open the Script Editor window from the UI by navigating to Window > Script Editor from the top menu bar. All example code in this tutorial can be pasted into the Script Editor window and executed by clicking on Run.

2.3. Introduction to Cloner¶

Please make sure omni.isaac.cloner is enabled from the Extensions window before running the snippets.

Let’s first start with a simple use case of the Cloner interface. In this example, we will create a scene with 4 cubes.

```python
from omni.isaac.cloner import Cloner  # import Cloner interface
from omni.isaac.core.utils.stage import get_current_stage
from pxr import UsdGeom

# create our base environment with one cube
base_env_path = "/World/Cube_0"
UsdGeom.Cube.Define(get_current_stage(), base_env_path)

# create a Cloner instance
cloner = Cloner()

# generate 4 paths that begin with "/World/Cube" - path will be appended with _{index}
target_paths = cloner.generate_paths("/World/Cube", 4)

# clone the cube at target paths
cloner.clone(source_prim_path="/World/Cube_0", prim_paths=target_paths)
```

We should now have 4 cubes in our stage: “/World/Cube_0”, “/World/Cube_1”, “/World/Cube_2”, “/World/Cube_3”. But you may have noticed that the cubes have all been created at the same position.

We can add a transform to each cube. Simply replace the last line of the previous code with the following:

```python
import numpy as np

cube_positions = np.array([[0, 0, 0], [3, 0, 0], [6, 0, 0], [9, 0, 0]])

# clone the cube at target paths at specified positions
cloner.clone(source_prim_path="/World/Cube_0", prim_paths=target_paths, positions=cube_positions)

It is also possible to specify the orientation of each clone by passing in an orientations argument, which should also be an np.ndarray.
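As an illustration, the sketch below builds one orientation per clone with plain NumPy, assuming the orientations array holds one (w, x, y, z) quaternion per clone; here each cube is rotated about the z-axis by a different angle. The final clone call is shown as a comment since it must run inside Isaac Sim with the earlier snippet loaded.

```python
import numpy as np

# build one quaternion per clone, rotating each cube about the z-axis
# assumption: the orientations argument expects (w, x, y, z) quaternions
angles = np.radians([0, 45, 90, 135])
cube_orientations = np.stack(
    [np.cos(angles / 2), np.zeros(4), np.zeros(4), np.sin(angles / 2)], axis=-1
)

# pass the array alongside positions (run inside Isaac Sim with the earlier snippet):
# cloner.clone(source_prim_path="/World/Cube_0", prim_paths=target_paths,
#              positions=cube_positions, orientations=cube_orientations)
```

Each row is a unit quaternion, so the first clone (angle 0) keeps the identity orientation.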

2.4. Grid Cloner¶

GridCloner is a specialized Cloner class that automatically places clones in a grid, without requiring pre-computed translations and orientations from the user.

To use the GridCloner, we specify the desired spacing between clones at initialization.

```python
from omni.isaac.cloner import GridCloner  # import GridCloner interface
from omni.isaac.core.utils.stage import get_current_stage
from pxr import UsdGeom

# create our base environment with one cube
base_env_path = "/World/Cube_0"
UsdGeom.Cube.Define(get_current_stage(), base_env_path)

# create a GridCloner instance
cloner = GridCloner(spacing=3)

# generate 4 paths that begin with "/World/Cube" - path will be appended with _{index}
target_paths = cloner.generate_paths("/World/Cube", 4)

# clone the cube at target paths
cloner.clone(source_prim_path="/World/Cube_0", prim_paths=target_paths)
```

Now we have a scene with 4 cubes placed in a grid!
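To build intuition for what the spacing parameter does, here is a rough NumPy sketch of a square-grid layout, not GridCloner's actual internals: clones are laid out row by row with the given spacing and the grid is centered on the origin.

```python
import numpy as np

def grid_positions(num_clones: int, spacing: float) -> np.ndarray:
    """Illustrative sketch of grid placement (not the actual GridCloner algorithm)."""
    # choose the number of columns for a roughly square grid
    cols = int(np.ceil(np.sqrt(num_clones)))
    idx = np.arange(num_clones)
    # row-major placement: walk across columns, then down rows
    positions = np.stack(
        [(idx % cols) * spacing, (idx // cols) * spacing, np.zeros(num_clones)], axis=-1
    )
    # center the grid around the origin
    positions[:, :2] -= positions[:, :2].mean(axis=0)
    return positions

positions_4 = grid_positions(num_clones=4, spacing=3.0)
```

With 4 clones and a spacing of 3, this yields a 2x2 grid whose neighboring cubes sit 3 units apart, which matches the scene the snippet above produces.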

2.5. Summary¶

This tutorial covered the following topics:

1. How to use the Cloner interface

2. How to use the GridCloner interface

2.5.1. Next Steps¶

Continue on to the next tutorial in our Reinforcement Learning Tutorials series, Creating New RL Environment, to learn about how to set up a new reinforcement learning environment.