Adding Semantics to a Scene

Semantic Schema Editor

In order to generate annotations such as segmentation and bounding boxes, semantic information, such as the class of each object in the scene, must be specified. The Semantic Schema Editor tool allows users to assign semantic labels to those objects.

The Semantic Schema Editor is an Omniverse extension, accessible via the Extension Manager. Go to Window -> Extensions, find the Semantic Schema Editor as shown below, and enable it. There are two ways to assign semantic labels to objects.

Note

If you are using Isaac Sim, the Semantic Schema Editor extension is already installed. To open it, click the Synthetic Data tab in the top left and select Semantics Schema Editor.

[Animation: labeling objects in a scene with the Semantic Schema Editor]

Apply semantic data on selected objects

As the title suggests, select a group of objects and then, for each object, set the Type field to class and the Data field to the semantic label.
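
Under the hood, this writes the USD Semantics schema onto each selected prim. Below is a minimal sketch of the equivalent operation in a script, assuming the pxr Semantics schema is available and using the hypothetical prim path /World/Avocado:

import omni.usd
from pxr import Semantics

# Get the current stage and the prim to label (the path is hypothetical)
stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath('/World/Avocado')

# Apply the Semantics schema, then set the Type and Data attributes
sem = Semantics.SemanticsAPI.Apply(prim, 'Semantics')
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
sem.GetSemanticTypeAttr().Set('class')
sem.GetSemanticDataAttr().Set('avocado')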

Apply semantic data on entire stage

A heuristic-based approach assigns semantic labels to objects in the scene when the Generate Labels button is pressed. The Prim types to label field specifies which USD prim types will be assigned a semantic label. The Class list field specifies a list of class names. If an object's name contains one of these class names, that class name is assigned as the object's semantic label.
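
A minimal sketch of this heuristic in script form is shown below, assuming Mesh as the prim type to label and an illustrative class list:

import omni.usd
from pxr import Semantics, UsdGeom

# Illustrative inputs: only label Mesh prims, using these class names
class_list = ['avocado', 'plane']

stage = omni.usd.get_context().get_stage()
for prim in stage.Traverse():
    if not prim.IsA(UsdGeom.Mesh):
        continue
    name = prim.GetName().lower()
    for class_name in class_list:
        if class_name in name:
            # The matching class name becomes the prim's semantic label
            sem = Semantics.SemanticsAPI.Apply(prim, 'Semantics')
            sem.CreateSemanticTypeAttr()
            sem.CreateSemanticDataAttr()
            sem.GetSemanticTypeAttr().Set('class')
            sem.GetSemanticDataAttr().Set(class_name)
            break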

Programmatically defining Semantics

To add semantics, you need to modify the semantics of the mesh or reference you want to label. The code below shows how to add a semantic class, but you can attach any other semantics you want by modifying the tuple.

rep.modify.semantics([('class', 'avocado')])
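
For example, several key-value pairs can be attached at once by extending the list of tuples; the quality key below is purely illustrative:

rep.modify.semantics([('class', 'avocado'), ('quality', 'ripe')])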

Below is a script that moves a camera around an avocado from the NVIDIA library of assets and adds the semantic class avocado. To run it, follow the same instructions as in Getting started with Omniverse Replicator.

import omni.replicator.core as rep

with rep.new_layer():

    # Defining a plane to place the avocado
    plane = rep.create.plane(scale=100, visible=True)

    # Define the avocado from the NVIDIA Residential assets; its position and semantics are modified below
    AVOCADO = 'omniverse://localhost/NVIDIA/Assets/ArchVis/Residential/Food/Fruit/Avocado01.usd'
    avocado = rep.create.from_usd(AVOCADO)
    with avocado:
        rep.modify.semantics([('class', 'avocado')])
        rep.modify.pose(
            position=(-50, 0, -50),
            rotation=(-90, -45, 0),
        )

    # Setup camera and attach it to render product
    camera = rep.create.camera(focus_distance=80)
    render_product = rep.create.render_product(camera, resolution=(1024, 1024))

    # Creating 30 frames with the camera changing positions around the avocado
    with rep.trigger.on_frame(num_frames=30):
        with camera:
            rep.modify.pose(position=rep.distribution.uniform((-20, 20, 80), (20, 50, 100)), look_at=avocado)
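
The script above sets up the scene and randomization but does not save any data by itself. To write the rendered frames and their semantic annotations to disk, a writer can be attached to the render product; below is a minimal sketch using Replicator's BasicWriter, where the output directory name is an assumption:

# Attach a BasicWriter to store RGB frames and semantic segmentation
# (output_dir is an arbitrary example path)
writer = rep.WriterRegistry.get('BasicWriter')
writer.initialize(
    output_dir='_output_avocado',
    rgb=True,
    semantic_segmentation=True,
)
writer.attach([render_product])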

Visualize Semantics

See the Visualizing Semantic Data tutorial for how to use and visualize semantics applied with this extension.