Basic Samples

Omniverse Kit Helper

This sample shows how to use the OmniKitHelper class to configure, start, stop, and shut down an application

The sample can be executed by running the following:

./python.sh python_samples/core/helper.py

See the OmniKitHelper Class API docs for more info

Time Stepping

This sample shows how to launch an Omniverse Kit Python app and then create callbacks that are called on each rendering frame and each physics timestep.

The sample can be executed by running the following:

./python.sh python_samples/simple/time_stepping.py

And it will output

Rendering and Physics with 1 second step size:
physics update step: 1.0 seconds
kit update step: 1.0 seconds
Rendering and Physics with 1/60 seconds step:
physics update step: 0.01666666753590107 seconds
kit update step: 0.01666666753590107 seconds
Rendering 1/60 seconds step size and Physics 1/120 seconds step size:
physics update step: 0.008333333767950535 seconds
physics update step: 0.008333333767950535 seconds
physics update step: 0.008333333767950535 seconds
physics update step: 0.008333333767950535 seconds
kit update step: 0.03333333507180214 seconds

time_stepping.py

There are a few key sections in this Python sample. The first is initializing the OmniKitHelper object:

CONFIG = {
    "experience": f'{os.environ["EXP_PATH"]}/omni.isaac.sim.python.kit',
    "renderer": "RayTracedLighting",
    "headless": True,
}

if __name__ == "__main__":
    kit = OmniKitHelper(config=CONFIG)
    ...

This launches the Omniverse Kit application in headless mode with the RayTracedLighting renderer.

Once launched, we can set the callbacks for the rendering step and the physics step. These are called on every rendering and physics step, respectively.

# Create callbacks for both the editor update step and the physics step
def editor_update(dt):
    print("kit update step:", dt, "seconds")

def physics_update(dt):
    print("physics update step:", dt, "seconds")

# start simulation
kit.play()

# assign callbacks
update_sub = kit.editor.subscribe_to_update_events(editor_update)
physics_sub = omni.physx._physx.acquire_physx_interface().subscribe_physics_step_events(physics_update)

Finally, we can run the app with different timestep values to see how the callbacks are executed:

# perform step experiments
print("Rendering and Physics with 1 second step size:")
kit.update(1.0)
print("Rendering and Physics with 1/60 seconds step:")
kit.update(1.0 / 60.0)
print("Rendering 1/60 seconds step size and Physics 1/120 seconds step size:")
kit.update(1.0 / 30.0, 1.0 / 120.0, 4)

First we run the app with a step size of one second, then with a step size of 1.0/60.0; in both cases the rendering and physics step sizes match. Note that the physics step callback always executes before the rendering step callback. Finally, the app runs with a rendering step size of 1.0/30.0, a physics step size of 1.0/120.0, and 4 substeps. The number of substeps must be at least the minimum required to perform the correct number of physics steps per rendering step.
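The substep requirement is simple arithmetic; a minimal sketch (plain Python, no Kit required, with a hypothetical helper name) of computing the minimum substep count:

```python
import math

def min_substeps(render_dt, physics_dt):
    """Minimum number of physics substeps so that
    substeps * physics_dt covers one rendering step."""
    return math.ceil(render_dt / physics_dt)

# Rendering at 1/30 s with physics at 1/120 s needs 4 substeps,
# matching the four physics callbacks per kit update in the output above.
print(min_substeps(1.0 / 30.0, 1.0 / 120.0))  # 4
```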

Load USD Stage

This sample demonstrates how to load a USD stage and start simulating it

The sample can be executed by running the following; set --usd_path to a location in the /Isaac folder on the Nucleus server:

./python.sh python_samples/simple/load_stage.py --usd_path /Environments/Simple_Room/simple_room.usd

Below is code for the core part of the sample where we:

  • Load a USD stage

  • Wait for it to finish loading

  • Start the simulation and wait for the app to be closed

omni.usd.get_context().open_stage("path/to/stage.usd", None)
# Wait two frames so that stage starts loading
kit.app.update()
kit.app.update()

# Wait for stage to load
while kit.is_loading():
    kit.update(1.0 / 60.0)

# Start simulation
kit.play()
# Run indefinitely until the app is closed
while kit.app.is_running():
    # Run in realtime mode; we don't specify a step size
    kit.update()
kit.stop()
kit.shutdown()

To exit the sample, terminate it from the terminal with CTRL-C

Articulation Information

This sample loads the Franka Panda robot articulation and then prints its information using the Dynamic Control API

The sample can be executed by running the following:

./python.sh python_samples/simple/franka_articulation.py
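A minimal sketch of querying an articulation through the Dynamic Control API. The interface and query function names below are taken from the Dynamic Control extension as an assumption for this release, and the "/panda" prim path is a hypothetical stage location; it presumes a running OmniKitHelper instance named kit with the Franka stage already loaded:

```python
from omni.isaac.dynamic_control import _dynamic_control

dc = _dynamic_control.acquire_dynamic_control_interface()

# Simulation must be playing before articulations can be queried
kit.play()
kit.update(1.0 / 60.0)

# Look up the articulation by its prim path ("/panda" is a placeholder)
art = dc.get_articulation("/panda")

# Print basic articulation information
print("body count:", dc.get_articulation_body_count(art))
print("joint count:", dc.get_articulation_joint_count(art))
print("DOF count:", dc.get_articulation_dof_count(art))
```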

URDF Import

This sample demonstrates how to import a robot via the URDF Python API, configure its physics, and then simulate it for a fixed number of frames.

The sample can be executed by running the following:

./python.sh python_samples/simple/urdf_import.py
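A rough sketch of the import step, hedged heavily: the URDFParseAndImportFile command name and the ImportConfig fields are assumptions about the omni.isaac.urdf extension in this release, the URDF path is a placeholder, and a running OmniKitHelper instance named kit is presumed:

```python
import omni.kit.commands
from omni.isaac.urdf import _urdf

# Configure physics-related import options (field names are assumptions)
import_config = _urdf.ImportConfig()
import_config.fix_base = True  # weld the base link to the world

# Import the robot; the path is a placeholder
omni.kit.commands.execute(
    "URDFParseAndImportFile",
    urdf_path="path/to/robot.urdf",
    import_config=import_config,
)

# Simulate for a fixed number of frames
kit.play()
for _ in range(100):
    kit.update(1.0 / 60.0)
kit.stop()
```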

Change Resolution

This sample demonstrates how to change the resolution of the viewport at runtime

The sample can be executed by running the following:

./python.sh python_samples/simple/change_resolution.py
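A minimal sketch of what the sample does, assuming the resolution is exposed through the /app/renderer/resolution/* carb settings (the setting paths are assumptions) and a running OmniKitHelper instance named kit:

```python
# Change the render resolution at runtime via settings
kit.set_setting("/app/renderer/resolution/width", 1920)
kit.set_setting("/app/renderer/resolution/height", 1080)
# Step once so the new resolution takes effect
kit.update(1.0 / 60.0)
```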

Convert Assets to USD

This sample demonstrates how to batch convert OBJ/STL/FBX assets to USD.

To execute it with sample data, run the following:

./python.sh python_samples/simple/asset_usd_converter.py --folders python_samples/data/cube python_samples/data/torus

The input folders containing OBJ/STL/FBX assets are specified as arguments, and the sample prints the paths of the converted USD files to the terminal:

Converting folder python_samples/data/cube...
---Added python_samples/data/cube_converted/cube_fbx.usd

Converting folder python_samples/data/torus...
---Added python_samples/data/torus_converted/torus_stl.usd
---Added python_samples/data/torus_converted/torus_obj.usd
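The output paths follow a simple convention: the input folder name plus _converted, with the source extension folded into the file name. A sketch of that mapping in plain Python (the exact rule and helper name are assumptions inferred from the output above):

```python
import os

def converted_usd_path(folder, asset_file):
    """Map e.g. ('python_samples/data/cube', 'cube.fbx') to
    'python_samples/data/cube_converted/cube_fbx.usd'."""
    stem, ext = os.path.splitext(asset_file)
    return f"{folder}_converted/{stem}_{ext.lstrip('.').lower()}.usd"

print(converted_usd_path("python_samples/data/cube", "cube.fbx"))
# python_samples/data/cube_converted/cube_fbx.usd
```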

asset_usd_converter.py

This sample leverages Python APIs from the Asset Importer extension.

The key section of the Python sample is setting up the optional import options in the convert function:

async def convert(in_file, out_file, load_materials=False):
    # This import causes conflicts when global
    import omni.kit.asset_converter

    def progress_callback(progress, total_steps):
        pass

    converter_context = omni.kit.asset_converter.AssetConverterContext()
    # setup converter and flags
    converter_context.ignore_materials = not load_materials
    # converter_context.ignore_animation = False
    # converter_context.ignore_cameras = True
    # converter_context.single_mesh = True
    # converter_context.smooth_normals = True
    # converter_context.preview_surface = False
    # converter_context.support_point_instancer = False
    # converter_context.embed_mdl_in_usd = False
    # converter_context.use_meter_as_world_unit = True
    # converter_context.create_world_as_default_root_prim = False

The details about the import options can be found here.
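Once the context is configured, the conversion itself is a single task. A sketch of the remainder of the convert coroutine, assuming the create_converter_task / wait_until_finished API of omni.kit.asset_converter:

```python
    # Continuation of convert(): create and await the conversion task
    instance = omni.kit.asset_converter.get_instance()
    task = instance.create_converter_task(
        in_file, out_file, progress_callback, converter_context
    )
    success = await task.wait_until_finished()
    if not success:
        print("Conversion failed:", task.get_status())
    return success
```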

Livestream

This sample demonstrates how to enable livestreaming when running in native Python

See the livestream documentation for more information on running the client

The sample supports the Kit Remote livestream backend:

./python.sh python_samples/simple/livestream.py

livestream.py

The additions to run the native livestream server are minimal:

# Start the omniverse application
kit = OmniKitHelper(config=CONFIG)

# Enable Livestream extension
ext_manager = omni.kit.app.get_app().get_extension_manager()
kit.set_setting("/app/window/drawMouse", True)
kit.set_setting("/app/livestream/proto", "ws")
ext_manager.set_extension_enabled_immediate("omni.kit.livestream.core", True)
ext_manager.set_extension_enabled_immediate("omni.kit.livestream.native", True)
...
# Continue usage as normal

Change the following lines to set the application window resolution for livestreaming. This does not affect the viewport resolution, which is used for rendering images.

"window_width": 1920,
"window_height": 1080,

When running the remote viewer client, make sure its resolution matches so the stream is scaled correctly.

  • On Linux:

    $ ./kit-remote.sh -s <remote_ip_address> -w 1920 -h 1080
    
  • On Windows:

    > kit-remote.exe -s <remote_ip_address> -w 1920 -h 1080
    

You can also use this with the websocket livestream server by replacing

ext_manager.set_extension_enabled_immediate("omni.kit.livestream.native", True)

with

ext_manager.set_extension_enabled_immediate("omni.services.streamclient.websocket", True)

Note

Only one livestream server can be enabled at startup; enabling more than one will not work correctly

To exit the sample, terminate it from the terminal with CTRL-C