PYTHON API
Core Functions
- omni.replicator.core.get_global_seed()
Return the global seed value
- Returns
(int) The current global seed value
- omni.replicator.core.set_global_seed(seed: int)
Set a global seed.
- Parameters
seed – Seed to use as initialization for the pseudo-random number generator. Seed is expected to be a non-negative integer.
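By way of analogy, seeding a pseudo-random number generator is what makes randomized output reproducible. The sketch below uses Python's built-in random module as a stand-in for Replicator's internal generator (it is not the Replicator API itself):

```python
import random

# Two generators initialized with the same non-negative seed produce
# identical pseudo-random sequences -- the reproducibility property
# that set_global_seed provides for Replicator's randomizers.
a = random.Random(12)
b = random.Random(12)
print([a.randint(0, 100) for _ in range(3)] == [b.randint(0, 100) for _ in range(3)])  # True
```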
- omni.replicator.core.new_layer(name: str = None)
Create a new authoring layer context. Use new_layer to keep Replicator changes in a contained layer. If a layer of the same name already exists, it will be cleared before new changes are applied.
- Parameters
name – Name of the layer to be created. If omitted, the name “Replicator” is used.
Example
>>> import omni.replicator.core as rep
>>> with rep.new_layer():
...     rep.create.cone(count=100, position=rep.distribution.uniform((-100,-100,-100),(100,100,100)))
Create
create methods are helpers to place objects onto the USD stage.
- omni.replicator.core.create.render_product(camera: Union[ReplicatorItem, str, List[str], Path, List[Path]], resolution: Tuple[int, int], force_new: bool = False, name: Optional[Union[str, List[str]]] = None) Union[str, List]
Create a render product. A RenderProduct describes images or other file-like artifacts produced by a render, such as rgb (LdrColor), normals, depth, etc. If an existing render product has the same resolution and camera attached, it is returned. If no matching render product is found, or if force_new is set to True, a new render product is created.
Note: When using Viewport 2.0, viewports are not generated to draw the render product on screen.
Note: Render products can utilize a large amount of VRAM. Render products no longer in use should be destroyed.
- Parameters
camera – The camera to attach to the render product. If a list of cameras is provided, a list of render products is created.
resolution – (width, height) resolution of the render product
force_new – If True, force creation of a new render product. If False, existing render products will be re-used if currently assigned to the same camera and of the same resolution. Is overridden to True if a name is provided.
name – Optionally specify the name(s) of the render product(s). Name must produce a valid USD path. If no name is provided, defaults to Replicator. The render product will be created at the following path within the Session Layer: /Render/OmniverseKit/HydraTextures/<name>. If multiple cameras are provided or if a render product of the specified name already exists, a _<num> suffix is added starting at _01. If specifying unique names for multiple cameras, name can be supplied as a list of strings of the same length as camera.
Example
>>> import omni.replicator.core as rep
>>> render_product = rep.create.render_product(rep.create.camera(), resolution=(1024, 1024), name="MyRenderProduct")
- omni.replicator.core.create.register(fn: Callable[[...], Union[ReplicatorItem, Node]], override: bool = True, fn_name: Optional[str] = None) None
Register a new function under omni.replicator.core.create. Extend the default capabilities of omni.replicator.core.create by registering new functionality. New functions must return a ReplicatorItem or an OmniGraph node.
- Parameters
fn – A function that returns a ReplicatorItem or an OmniGraph node.
override – If True, will override existing functions of the same name. If False, an error is raised.
fn_name – Optional, specify the registration name. If not specified, the function name is used. fn_name must only contain letters (a-z), numbers (0-9), or underscores (_), and cannot start with a number or contain any spaces.
Example
>>> import omni.replicator.core as rep
>>> def light_cluster(num_lights: int = 10):
...     lights = rep.create.light(
...         light_type="sphere",
...         count=num_lights,
...         position=rep.distribution.uniform((-500, -500, -500), (500, 500, 500)),
...         intensity=rep.distribution.uniform(10000, 20000),
...         temperature=rep.distribution.uniform(1000, 10000),
...     )
...     return lights
>>> rep.create.register(light_cluster)
>>> lights = rep.create.light_cluster(50)
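The fn_name constraints above can be captured in a small check. This is an illustrative sketch of the stated rule (letters, digits, underscores, no leading digit, no spaces); is_valid_fn_name is not part of the Replicator API:

```python
import re

def is_valid_fn_name(name: str) -> bool:
    # Letters (a-z), digits (0-9), and underscores only;
    # the first character must not be a digit. The character
    # class also rules out spaces anywhere in the name.
    return re.fullmatch(r"[a-z_][a-z0-9_]*", name) is not None

print(is_valid_fn_name("light_cluster"))  # True
print(is_valid_fn_name("1cluster"))       # False (starts with a number)
print(is_valid_fn_name("my cluster"))     # False (contains a space)
```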
Lights
- omni.replicator.core.create.light(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, light_type: str = 'Distant', color: Union[ReplicatorItem, Tuple[float, float, float]] = (1.0, 1.0, 1.0), intensity: Union[ReplicatorItem, float] = 1000.0, exposure: Union[ReplicatorItem, float] = None, temperature: Union[ReplicatorItem, float] = 6500, texture: Union[ReplicatorItem, str] = None, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a light
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value. Ignored for dome and distant light types.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value. Ignored for dome and distant light types.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
light_type – Light type. Select from [“cylinder”, “disk”, “distant”, “dome”, “rect”, “sphere”]
color – Light color in (R, G, B). Float values in the range [0.0, 1.0].
intensity – Light intensity. Scales the power of the light linearly.
exposure – Scales the power of the light exponentially as a power of 2. The result is multiplied with intensity.
temperature – Color temperature in degrees Kelvin indicating the white point. Lower values are warmer, higher values are cooler. Valid range [1000, 10000].
texture – Image texture to use for a dome light, such as an HDR (High Dynamic Range) image intended for IBL (Image Based Lighting). Ignored for other light types.
count – Number of objects to create.
name – Name of the light.
parent – Optional parent prim path. The object will be created as a child of this prim.
Examples
>>> import omni.replicator.core as rep
>>> distant_light = rep.create.light(
...     rotation=rep.distribution.uniform((0,-180,-180), (0,180,180)),
...     intensity=rep.distribution.normal(10000, 1000),
...     temperature=rep.distribution.normal(6500, 1000),
...     light_type="distant")
>>> dome_light = rep.create.light(
...     rotation=rep.distribution.uniform((0,-180,-180), (0,180,180)),
...     texture=rep.distribution.choice(rep.example.TEXTURES),
...     light_type="dome")
Misc
- omni.replicator.core.create.group(items: List[Union[ReplicatorItem, str, Path]], semantics: List[Tuple[str, str]] = None, name=None) ReplicatorItem
Group assets into a common node. Grouping assets makes it easier and faster to apply randomizations to multiple assets simultaneously.
- Parameters
items – Assets to be grouped together.
semantics – List of semantic type-label pairs.
name (optional) – A name for the given group node
Example
>>> import omni.replicator.core as rep
>>> cones = [rep.create.cone() for _ in range(100)]
>>> group = rep.create.group(cones, semantics=[("class", "cone")])
Cameras
- omni.replicator.core.create.camera(position: Union[ReplicatorItem, float, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, focal_length: Union[ReplicatorItem, float] = 24.0, focus_distance: Union[ReplicatorItem, float] = 400.0, f_stop: Union[ReplicatorItem, float] = 0.0, horizontal_aperture: Union[ReplicatorItem, float] = 20.955, horizontal_aperture_offset: Union[ReplicatorItem, float] = 0.0, vertical_aperture_offset: Union[ReplicatorItem, float] = 0.0, clipping_range: Union[ReplicatorItem, Tuple[float, float]] = (1.0, 1000000.0), projection_type: Union[ReplicatorItem, str] = 'pinhole', fisheye_nominal_width: Union[ReplicatorItem, float] = 1936.0, fisheye_nominal_height: Union[ReplicatorItem, float] = 1216.0, fisheye_optical_centre_x: Union[ReplicatorItem, float] = 970.94244, fisheye_optical_centre_y: Union[ReplicatorItem, float] = 600.37482, fisheye_max_fov: Union[ReplicatorItem, float] = 200.0, fisheye_polynomial_a: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_b: Union[ReplicatorItem, float] = 0.00245, fisheye_polynomial_c: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_d: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_e: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_f: Union[ReplicatorItem, float] = 0.0, fisheye_p0: Union[ReplicatorItem, float] = -0.00037, fisheye_p1: Union[ReplicatorItem, float] = -0.00074, fisheye_s0: Union[ReplicatorItem, float] = -0.00058, fisheye_s1: Union[ReplicatorItem, float] = -0.00022, fisheye_s2: Union[ReplicatorItem, float] = 0.00019, fisheye_s3: Union[ReplicatorItem, float] = -0.0002, cross_camera_reference_name: str = None, count: int = 1, parent: Union[ReplicatorItem, str, Path, Prim] = None, name: str = None) ReplicatorItem
Create a camera
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
focal_length – Physical focal length of the camera in units equal to 0.1 * world units.
focus_distance – Distance from the camera to the focus plane in world units.
f_stop – Lens aperture. Default 0.0 turns off focusing.
horizontal_aperture – Horizontal aperture in units equal to 0.1 * world units. Default simulates a 35mm spherical projection aperture.
horizontal_aperture_offset – Horizontal aperture offset in units equal to 0.1 * world units.
vertical_aperture_offset – Vertical aperture offset in units equal to 0.1 * world units.
clipping_range – (Near, Far) clipping distances of the camera in world units.
projection_type – Camera projection model. Select from [“pinhole”, “fisheye_polynomial”, “fisheyeOrthographic”, “fisheyeEquidistant”, “fisheyeEquisolid”, “fisheyeSpherical”, “fisheyeKannalaBrandtK3”, “fisheyeRadTanThinPrism”].
fisheye_nominal_width – Nominal width of fisheye lens model.
fisheye_nominal_height – Nominal height of fisheye lens model.
fisheye_optical_centre_x – Horizontal optical centre position of fisheye lens model.
fisheye_optical_centre_y – Vertical optical centre position of fisheye lens model.
fisheye_max_fov – Maximum field of view of fisheye lens model.
fisheye_polynomial_a – First polynomial coefficient of fisheye camera.
fisheye_polynomial_b – Second polynomial coefficient of fisheye camera.
fisheye_polynomial_c – Third polynomial coefficient of fisheye camera.
fisheye_polynomial_d – Fourth polynomial coefficient of fisheye camera.
fisheye_polynomial_e – Fifth polynomial coefficient of fisheye camera.
fisheye_polynomial_f – Sixth polynomial coefficient of fisheye camera.
fisheye_p0 – Distortion coefficient to calculate tangential distortion for rad tan thin prism camera.
fisheye_p1 – Distortion coefficient to calculate tangential distortion for rad tan thin prism camera.
fisheye_s0 – Distortion coefficient to calculate thin prism distortion for rad tan thin prism camera.
fisheye_s1 – Distortion coefficient to calculate thin prism distortion for rad tan thin prism camera.
fisheye_s2 – Distortion coefficient to calculate thin prism distortion for rad tan thin prism camera.
fisheye_s3 – Distortion coefficient to calculate thin prism distortion for rad tan thin prism camera.
count – Number of objects to create.
parent – Optional parent prim path. The camera will be created as a child of this prim.
name – Name of the camera
Example
>>> import omni.replicator.core as rep
>>> # Create camera
>>> camera = rep.create.camera(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     rotation=(45, 45, 0),
...     focus_distance=rep.distribution.normal(400.0, 100),
...     f_stop=1.8,
... )
>>> # Attach camera to render product
>>> render_product = rep.create.render_product(camera, resolution=(1024, 1024))
- omni.replicator.core.create.stereo_camera(stereo_baseline: Union[ReplicatorItem, float], position: Optional[Union[ReplicatorItem, float, Tuple[float]]] = None, rotation: Optional[Union[ReplicatorItem, float, Tuple[float]]] = None, look_at: Optional[Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]]] = None, look_at_up_axis: Optional[Union[ReplicatorItem, Tuple[float]]] = None, focal_length: Union[ReplicatorItem, float] = 24.0, focus_distance: Union[ReplicatorItem, float] = 400.0, f_stop: Union[ReplicatorItem, float] = 0.0, horizontal_aperture: Union[ReplicatorItem, float] = 20.955, horizontal_aperture_offset: Union[ReplicatorItem, float] = 0.0, vertical_aperture_offset: Union[ReplicatorItem, float] = 0.0, clipping_range: Union[ReplicatorItem, Tuple[float, float]] = (1.0, 1000000.0), projection_type: Union[ReplicatorItem, str] = 'pinhole', fisheye_nominal_width: Union[ReplicatorItem, float] = 1936.0, fisheye_nominal_height: Union[ReplicatorItem, float] = 1216.0, fisheye_optical_centre_x: Union[ReplicatorItem, float] = 970.94244, fisheye_optical_centre_y: Union[ReplicatorItem, float] = 600.37482, fisheye_max_fov: Union[ReplicatorItem, float] = 200.0, fisheye_polynomial_a: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_b: Union[ReplicatorItem, float] = 0.00245, fisheye_polynomial_c: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_d: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_e: Union[ReplicatorItem, float] = 0.0, fisheye_polynomial_f: Union[ReplicatorItem, float] = 0.0, fisheye_p0: Union[ReplicatorItem, float] = -0.00037, fisheye_p1: Union[ReplicatorItem, float] = -0.00074, fisheye_s0: Union[ReplicatorItem, float] = -0.00058, fisheye_s1: Union[ReplicatorItem, float] = -0.00022, fisheye_s2: Union[ReplicatorItem, float] = 0.00019, fisheye_s3: Union[ReplicatorItem, float] = -0.0002, count: int = 1, name: Optional[str] = None, parent: Optional[Union[ReplicatorItem, str, Path, Prim]] = None) ReplicatorItem
Create a stereo camera pair.
- Parameters
stereo_baseline – Distance between stereo camera pairs.
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
focal_length – Physical focal length of the camera in units equal to 0.1 * world units.
focus_distance – Distance from the camera to the focus plane in world units.
f_stop – Lens aperture. Default 0.0 turns off focusing.
horizontal_aperture – Horizontal aperture in units equal to 0.1 * world units. Default simulates a 35mm spherical projection aperture.
horizontal_aperture_offset – Horizontal aperture offset in units equal to 0.1 * world units.
vertical_aperture_offset – Vertical aperture offset in units equal to 0.1 * world units.
clipping_range – (Near, Far) clipping distances of the camera in world units.
projection_type – Camera projection model. Select from [“pinhole”, “fisheye_polynomial”, “fisheyeOrthographic”, “fisheyeEquidistant”, “fisheyeEquisolid”, “fisheyeSpherical”, “fisheyeKannalaBrandtK3”, “fisheyeRadTanThinPrism”].
fisheye_nominal_width – Nominal width of fisheye lens model.
fisheye_nominal_height – Nominal height of fisheye lens model.
fisheye_optical_centre_x – Horizontal optical centre position of fisheye lens model.
fisheye_optical_centre_y – Vertical optical centre position of fisheye lens model.
fisheye_max_fov – Maximum field of view of fisheye lens model.
fisheye_polynomial_a – First component of fisheye polynomial (only valid for fisheye_polynomial projection type).
fisheye_polynomial_b – Second component of fisheye polynomial (only valid for fisheye_polynomial projection type).
fisheye_polynomial_c – Third component of fisheye polynomial (only valid for fisheye_polynomial projection type).
fisheye_polynomial_d – Fourth component of fisheye polynomial (only valid for fisheye_polynomial projection type).
fisheye_polynomial_e – Fifth component of fisheye polynomial (only valid for fisheye_polynomial projection type).
count – Number of objects to create.
name – Name of the cameras. _L and _R will be appended for the Left and Right cameras, respectively.
parent – Optional parent prim path. The cameras will be created as children of this prim.
Example
>>> import omni.replicator.core as rep
>>> # Create stereo camera
>>> stereo_camera_pair = rep.create.stereo_camera(
...     stereo_baseline=10,
...     position=(10, 10, 10),
...     rotation=(45, 45, 0),
...     focus_distance=rep.distribution.normal(400.0, 100),
...     f_stop=1.8,
... )
>>> # Attach camera to render product
>>> render_product = rep.create.render_product(stereo_camera_pair, resolution=(1024, 1024))
Materials
- omni.replicator.core.create.material_omnipbr(diffuse: Tuple[float] = None, diffuse_texture: str = None, roughness: float = None, roughness_texture: str = None, metallic: float = None, metallic_texture: str = None, specular: float = None, emissive_color: Tuple[float] = None, emissive_texture: str = None, emissive_intensity: float = 0.0, project_uvw: bool = False, semantics: List[Tuple[str, str]] = None, count: int = 1) ReplicatorItem
Create an OmniPBR Material
- Parameters
diffuse – Diffuse/albedo color in RGB colorspace
diffuse_texture – Path to diffuse texture
roughness – Material roughness in the range [0, 1]
roughness_texture – Path to roughness texture
metallic – Material metallic value in the range [0, 1]. Typically, metallic is assigned either 0.0 or 1.0
metallic_texture – Path to metallic texture
specular – Intensity of specular reflections in the range [0, 1]
emissive_color – Color of emissive light emanating from the material in RGB colorspace
emissive_texture – Path to emissive texture
emissive_intensity – Emissive intensity of the material. Setting to 0.0 (default) disables emission.
project_uvw – When True, UV coordinates will be generated by projecting them from a coordinate system.
semantics – Assign semantics to the material
count – Number of objects to create.
Example
>>> import omni.replicator.core as rep
>>> mat1 = rep.create.material_omnipbr(
...     diffuse=rep.distribution.uniform((0, 0, 0), (1, 1, 1)),
...     roughness=rep.distribution.uniform(0, 1),
...     metallic=rep.distribution.choice([0, 1]),
...     emissive_color=rep.distribution.uniform((0, 0, 0.5), (0, 0, 1)),
...     emissive_intensity=rep.distribution.uniform(0, 1000),
... )
>>> mat2 = rep.create.material_omnipbr(
...     diffuse_texture=rep.distribution.choice(rep.example.TEXTURES),
...     roughness_texture=rep.distribution.choice(rep.example.TEXTURES),
...     metallic_texture=rep.distribution.choice(rep.example.TEXTURES),
...     emissive_texture=rep.distribution.choice(rep.example.TEXTURES),
...     emissive_intensity=rep.distribution.uniform(0, 1000),
... )
>>> cone = rep.create.cone(material=mat1)
>>> torus = rep.create.torus(material=mat2)
- omni.replicator.core.create.projection_material(proxy_prim: Union[ReplicatorItem, str, Path], semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, str, Path] = None, offset_scale: float = 0.01, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) ReplicatorItem
Project a texture onto a target prim. ProjectPBRMaterial is used to facilitate these projections. The proxy prim is a prim used to control the position, rotation and scale of the projection. There can only be one proxy/projection pair, so a proxy prim can only modify a single projection. The projection will happen in the direction of the negative x-axis. This node only sets up the projection material; the rep.modify.projection_material node should be used to update the projection itself.
- Parameters
proxy_prim – The prim which will be used to manipulate the projection.
material – Projection material to apply to the projection. If not provided, use ‘ProjectPBRMaterial’.
semantics – Semantics to apply to the defect.
offset_scale – Scale factor when extruding target_prim points.
input_prims – The prims which will be projected onto. If using with syntax, this argument can be omitted.
name (optional) – A name for the given projection node.
Example
>>> import omni.replicator.core as rep
>>> torus = rep.create.torus()
>>> cube = rep.create.cube(position=(50, 100, 0), rotation=(0, 0, 90), scale=(0.2, 0.2, 0.2))
>>> sem = [('class', 'shape')]
>>> with torus:
...     rep.create.projection_material(cube, sem)
- omni.replicator.core.create.mdl_from_json(material_def: Dict = None, material_def_path: str = None) ReplicatorItem
Create an MDL ShaderGraph material defined in a JSON dictionary.
- Parameters
material_def – A dictionary object defining the MDL material graph.
material_def_path – A path to a JSON file to decode and generate an MDL material graph from.
Example
>>> import omni.replicator.core as rep
>>> gen_mat = rep.create.mdl_from_json(material_def=rep.example.MDL_JSON_EXAMPLE)
>>> cube = rep.create.cube(material=gen_mat)
Shapes
- omni.replicator.core.create.cone(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, as_mesh: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a cone
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the cone.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
as_mesh – If False, create a Usd.Cone prim. If True, create a mesh.
count – Number of objects to create.
name – Name of the object.
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> cone = rep.create.cone(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "cone")],
... )
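The pivot normalization described above can be sketched in plain Python. This is an illustrative helper under the stated assumption that a pivot of -1/0/+1 per axis maps to the min/center/max of the prim's axis-aligned extents; pivot_to_world is not part of the Replicator API:

```python
def pivot_to_world(pivot, extents_min, extents_max):
    # Map a pivot normalized to [-1, 1] per axis onto a point inside an
    # axis-aligned bounding box: -1 -> min, 0 -> center, +1 -> max.
    return tuple(
        (mx + mn) / 2.0 + p * (mx - mn) / 2.0
        for p, mn, mx in zip(pivot, extents_min, extents_max)
    )

# Center of a 100-unit cube rooted at the origin:
print(pivot_to_world((0, 0, 0), (0, 0, 0), (100, 100, 100)))  # (50.0, 50.0, 50.0)
```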
- omni.replicator.core.create.cube(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, as_mesh: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a cube
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the cube.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
as_mesh – If False, create a Usd.Cube prim. If True, create a mesh.
count – Number of objects to create.
name – Name of the object
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> cube = rep.create.cube(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "cube")],
... )
- omni.replicator.core.create.cylinder(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, as_mesh: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a cylinder
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the cylinder.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
as_mesh – If False, create a Usd.Cylinder prim. If True, create a mesh.
count – Number of objects to create.
name – Name of the object
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> cylinder = rep.create.cylinder(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "cylinder")],
... )
- omni.replicator.core.create.disk(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a disk
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the disk.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
count – Number of objects to create.
name – Name of the object.
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> disk = rep.create.disk(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "disk")],
... )
- omni.replicator.core.create.plane(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a plane
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the plane.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
count – Number of objects to create.
name – Name of the object
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> plane = rep.create.plane(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "plane")],
... )
- omni.replicator.core.create.sphere(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, as_mesh: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a sphere
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis-aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the sphere.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
as_mesh – If False, create a Usd.Sphere prim. If True, create a mesh.
count – Number of objects to create.
name – Name of the object.
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> sphere = rep.create.sphere(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "sphere")],
... )
- omni.replicator.core.create.torus(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, material: Union[ReplicatorItem, Prim] = None, visible: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a torus
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
pivot – Pivot that sets the center point of translate and rotate operations. Pivot values are normalized between [-1, 1] for each axis based on the prim’s axis-aligned extents.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
material – Material to attach to the torus.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
count – Number of objects to create.
name – Name of the object
parent – Optional parent prim path. The object will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> torus = rep.create.torus(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     scale=2,
...     rotation=(45, 45, 0),
...     semantics=[("class", "torus")],
... )
- omni.replicator.core.create.xform(position: Union[ReplicatorItem, float, Tuple[float]] = None, scale: Union[ReplicatorItem, float, Tuple[float]] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[ReplicatorItem, Tuple[float]] = None, semantics: List[Tuple[str, str]] = None, visible: bool = True, count: int = 1, name: str = None, parent: Union[ReplicatorItem, str, Path, Prim] = None) ReplicatorItem
Create a Xform
- Parameters
position – XYZ coordinates in world space. If a single value is provided, all axes will be set to that value.
scale – Scaling factors for XYZ axes. If a single value is provided, all axes will be set to that value.
rotation – Euler angles in degrees in XYZ order. If a single value is provided, all axes will be set to that value.
look_at – Look-at target, specified either as a ReplicatorItem, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.
look_at_up_axis – Look-at up axis of the created prim.
semantics – List of semantic type-label pairs.
visible – If False, the prim will be invisible. This is often useful when creating prims to use as bounds with other randomizers.
count – Number of objects to create.
name – Name of the object.
parent – Optional parent prim path. The xform will be created as a child of this prim.
Example
>>> import omni.replicator.core as rep
>>> xform = rep.create.xform(
...     position=rep.distribution.uniform((0,0,0), (100, 100, 100)),
...     semantics=[("class", "thing")],
... )
USD
- omni.replicator.core.create.from_dir(dir_path: str, recursive: bool = False, path_filter: Optional[str] = None, semantics: Optional[List[Tuple[str, str]]] = None) ReplicatorItem
Create a group of assets from the USD files found in dir_path
- Parameters
dir_path – The root path to search from.
recursive – If True, search through sub-folders.
path_filter – A Regular Expression (RegEx) string to filter paths with.
semantics – List of semantic type-label pairs.
Example
>>> import omni.replicator.core as rep
>>> asset_path = rep.example.ASSETS_DIR
>>> asset = rep.create.from_dir(asset_path, path_filter="rocket")
- omni.replicator.core.create.from_usd(usd: str, semantics: List[Tuple[str, str]] = None, count: int = 1) ReplicatorItem
Reference a USD into the current USD stage.
- Parameters
usd – Path to a USD file (*.usd, *.usdc, *.usda)
semantics – List of semantic type-label pairs.
Example
>>> import omni.replicator.core as rep
>>> usd_path = rep.example.ASSETS[0]
>>> asset = rep.create.from_usd(usd_path, semantics=[("class", "example")])
Get
get
methods are helpers to get objects from the USD stage, either by path or by semantic label.
get.prims
is very broad with its regex matching on the USD stage, so individual helper methods are provided
to narrow the search field to different USD types (mesh, light, etc.).
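The practical difference between `path_pattern` (regex) and `path_match` (plain string matching) can be sketched in plain Python. The stage paths below are made up for illustration, and this is a conceptual analogy, not Replicator's actual implementation:

```python
import re

# Hypothetical stage paths, for illustration only.
paths = ["/World/Camera_01", "/World/Cameras/CamLeft", "/World/DomeLight"]

# path_pattern: a regex search over each path (flexible, slower).
regex_hits = [p for p in paths if re.search(r"Camera_\d+$", p)]

# path_match: plain substring matching (faster, no regex features).
substr_hits = [p for p in paths if "Cam" in p]
```

Here the regex isolates only numbered cameras, while the substring check also picks up `/World/Cameras/CamLeft`.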
- omni.replicator.core.get.camera(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘camera’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.curve(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘curve’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.geomsubset(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘geomsubset’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.graph(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get all ‘graph’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.light(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘light’ types based on specified constraints.
Matches types RectLight, SphereLight, CylinderLight, DiskLight, DistantLight
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.listener(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd listener types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.material(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd material types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.mesh(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd mesh types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.physics(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get physics/physicsscene types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.prim_at_path(path: Union[str, List[str], ReplicatorItem], name: Optional[str] = None) ReplicatorItem
Get the prim at the exact path
- Parameters
path – USD path to the desired prim. Defaults to None.
name (optional) – A name for the graph node.
- omni.replicator.core.get.prims(path_pattern: str = None, path_match: str = None, path_pattern_exclusion: str = None, prim_types: Union[str, List[str]] = None, prim_types_exclusion: Union[str, List[str]] = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, ignore_case: bool = True, name: Optional[str] = None) ReplicatorItem
Get prims based on specified constraints.
Search the stage for stage paths with matches to the specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_match – Python string matching. Faster than regex matching.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
prim_types – List of prim types to include
prim_types_exclusion – List of prim types to ignore
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
ignore_case – Case-insensitive regex matching
name (optional) – A name for the graph node.
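How the include/exclude constraints compose can be sketched with a toy filter over made-up prim records. This mimics the documented filtering behavior under stated assumptions; it is not Replicator's implementation:

```python
import re

# Hypothetical stand-ins for stage prims: (path, type, semantics) tuples.
prims = [
    ("/World/Sphere", "Sphere", [("class", "ball")]),
    ("/World/Cube", "Cube", [("class", "box")]),
    ("/World/Old/Sphere", "Sphere", [("class", "ball")]),
]

def select(prims, path_pattern=None, path_pattern_exclusion=None,
           prim_types=None, semantics=None):
    # Each constraint narrows the result; exclusions remove matches.
    out = []
    for path, ptype, sem in prims:
        if path_pattern and not re.search(path_pattern, path):
            continue
        if path_pattern_exclusion and re.search(path_pattern_exclusion, path):
            continue
        if prim_types and ptype not in prim_types:
            continue
        if semantics and not any(s in sem for s in semantics):
            continue
        out.append(path)
    return out

result = select(prims, path_pattern="Sphere", path_pattern_exclusion="Old",
                semantics=[("class", "ball")])
```

With these constraints only `/World/Sphere` survives: the exclusion pattern removes the prim under `/World/Old`.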
- omni.replicator.core.get.renderproduct(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd renderproduct types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.rendervar(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd rendervar types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.scope(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘scope’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.shader(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘shader’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.shape(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘shape’ types based on specified constraints.
Includes Capsule, Cone, Cube, Cylinder, Plane, Sphere
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.skelanimation(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘skelanimation’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.skeleton(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘skeleton’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.sound(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘sound’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.xform(path_pattern: str = None, path_pattern_exclusion: str = None, semantics: Union[List[Tuple[str, str]], Tuple[str, str]] = None, semantics_exclusion: Union[List[Tuple[str, str]], Tuple[str, str]] = None, cache_result: bool = True, name: Optional[str] = None, path_match: str = None) ReplicatorItem
Get Usd ‘xform’ types based on specified constraints.
- Parameters
path_pattern – The RegEx (Regular Expression) path pattern to match.
path_pattern_exclusion – The RegEx (Regular Expression) path pattern to ignore.
semantics – Semantic type-value pairs of semantics to include
semantics_exclusion – Semantic type-value pairs of semantics to ignore
cache_result – Run get prims a single time, then return the cached result
name (optional) – A name for the graph node.
path_match – Python string matching. Faster than regex matching.
- omni.replicator.core.get.register(fn: Callable[[...], Union[ReplicatorItem, Node]], override: bool = True, fn_name: Optional[str] = None) None
Register a new function under omni.replicator.core.get. Extend the default capabilities of omni.replicator.core.get by registering new functionality. New functions must return a ReplicatorItem or an OmniGraph node.
- Parameters
fn – A function that returns a ReplicatorItem or an OmniGraph node.
override – If True, override existing functions of the same name. If False, an error is raised.
fn_name – Optional, specify the registration name. If not specified, the function name is used. fn_name must contain only letters (a-z), numbers (0-9), or underscores (_), and cannot start with a number or contain any spaces.
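The fn_name constraints amount to a Python-identifier-style rule; a minimal sketch of an equivalent check (illustrative only, not Replicator's actual validator):

```python
import re

def is_valid_fn_name(name: str) -> bool:
    # Letters, digits, underscores only; must not start with a digit;
    # no spaces. (Hedged reading of the documented fn_name rules.)
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None
```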
Distribution
distribution
methods are helpers that set a range of values to simulate complex behavior.
- omni.replicator.core.distribution.choice(choices: List[str], weights: List[float] = None, num_samples: Union[ReplicatorItem, int] = 1, seed: Optional[int] = -1, with_replacements: bool = True, name: Optional[str] = None) ReplicatorItem
Provides sampling from a list of values
- Parameters
choices – Values in the distribution to choose from.
weights – Matching list of weights for each choice.
num_samples – The number of times to sample.
seed (optional) – A seed to use for the sampling.
with_replacements – If True, allow re-sampling the same element. If False, each element can only be sampled once; in that case, the number of elements being sampled from must be larger than the sampling size. Default is True.
name (optional) – A name for the given distribution. Named distributions will have their values available to the Writer.
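The with/without-replacement semantics mirror weighted sampling in Python's standard library; a minimal stand-in using the stdlib (an assumed analogy, not the Replicator sampler itself):

```python
import random

random.seed(7)  # plays the role of the `seed` parameter

choices = ["red", "green", "blue"]
weights = [0.7, 0.2, 0.1]

# with_replacements=True: weighted sampling, repeats allowed.
with_repl = random.choices(choices, weights=weights, k=5)

# with_replacements=False: each element at most once, so the pool
# must not be smaller than the sample count (weights not shown here).
without_repl = random.sample(choices, k=3)
```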
- omni.replicator.core.distribution.combine(distributions: List[Union[ReplicatorItem, Tuple[ReplicatorItem]]], name: Optional[str] = None) ReplicatorItem
Combine input from different distributions.
- Parameters
distributions – List of Replicator distribution nodes or numbers.
name (optional) – A name for the given distribution. Named distributions will have their values available to the Writer.
- omni.replicator.core.distribution.log_uniform(lower: Tuple, upper: Tuple, num_samples: int = 1, seed: Optional[int] = None, name: Optional[str] = None) ReplicatorItem
Provides sampling with a log uniform distribution
- Parameters
lower – Lower end of the distribution.
upper – Upper end of the distribution.
num_samples – The number of times to sample.
seed (optional) – A seed to use for the sampling.
name (optional) – A name for the given distribution. Named distributions will have their values available to the Writer.
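A log-uniform distribution samples uniformly in log space, so each order of magnitude between lower and upper is equally likely. A plain-Python sketch of that math, assuming the standard definition (not Replicator's internal sampler):

```python
import math
import random

random.seed(0)

def log_uniform(lower: float, upper: float) -> float:
    # Draw uniformly between log(lower) and log(upper), then exponentiate.
    return math.exp(random.uniform(math.log(lower), math.log(upper)))

samples = [log_uniform(0.01, 100.0) for _ in range(1000)]
```

With bounds 0.01 and 100, roughly half of the samples fall below 1.0 (the geometric mean), which a plain uniform distribution would not produce.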
- omni.replicator.core.distribution.normal(mean: Tuple, std: Tuple, num_samples: int = 1, seed: Optional[int] = None, name: Optional[str] = None) ReplicatorItem
Provides sampling with a normal distribution
- Parameters
mean – Average value for the distribution.
std – Standard deviation value for the distribution.
num_samples – The number of times to sample.
seed (optional) – A seed to use for the sampling.
name (optional) – A name for the given distribution. Named distributions will have their values available to the Writer.
- omni.replicator.core.distribution.sequence(items: Union[List, ReplicatorItem], ordered: Optional[bool] = True, seed: Optional[int] = -1, name: Optional[str] = None) ReplicatorItem
Provides sampling sequentially
- Parameters
items – Ordered list of items to sample sequentially.
ordered – Whether to return items in order.
seed (optional) – A seed to use for the sampling.
name (optional) – A name for the given distribution. Named distributions will have their values available to the Writer.
Example
>>> import omni.replicator.core as rep
>>> cube = rep.create.cube(count=1)
>>> with cube:
...     rep.modify.pose(position=rep.distribution.sequence([(0.0, 0.0, 200.0), (0.0, 200.0, 0.0), (200.0, 0.0, 0.0)]))
- omni.replicator.core.distribution.uniform(lower: Tuple, upper: Tuple, num_samples: int = 1, seed: Optional[int] = None, name: Optional[str] = None) ReplicatorItem
Provides sampling with a uniform distribution
- Parameters
lower – Lower end of the distribution.
upper – Upper end of the distribution.
num_samples – The number of times to sample.
seed (optional) – A seed to use for the sampling.
name (optional) – A name for the given distribution. Named distributions will have their values available to the Writer.
- omni.replicator.core.distribution.register(fn: Callable[[...], Union[ReplicatorItem, Node]], override: bool = True, fn_name: Optional[str] = None) None
Register a new function under omni.replicator.core.distribution. Extend the default capabilities of omni.replicator.core.distribution by registering new functionality. New functions must return a ReplicatorItem or an OmniGraph node.
- Parameters
fn – A function that returns a ReplicatorItem or an OmniGraph node.
override – If True, override existing functions of the same name. If False, an error is raised.
fn_name – Optional, specify the registration name. If not specified, the function name is used. fn_name must contain only letters (a-z), numbers (0-9), or underscores (_), and cannot start with a number or contain any spaces.
Modify
modify
methods are helpers to change objects on the USD stage.
- omni.replicator.core.modify.animation(values: Union[ReplicatorItem, List[str], List[Path], List[usdrt.Sdf.Path]], reset_timeline: bool = False, input_prims: Union[ReplicatorItem, List[str]] = None) ReplicatorItem
Modify the bound animation on a skeleton. This does not do any retargeting.
- Parameters
values – The animation to set to the skeleton. If a list of values is provided, one will be chosen at random.
reset_timeline – Reset the timeline after changing the animation.
input_prims – The skeleton to modify. If using with syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep
>>> from pxr import Sdf
>>> person = rep.get.skeleton('/World/Worker/Worker')
>>> new_anim = Sdf.Path('/World/other_anim')
>>> with person:
...     rep.modify.animation([new_anim])
- omni.replicator.core.modify.attribute(name: str, value: Union[Any, ReplicatorItem], attribute_type: str = None, input_prims: Union[ReplicatorItem, List[str]] = None) ReplicatorItem
Modify the attribute of the prims specified in input_prims.
- Parameters
name – The name of the attribute to modify.
value – The value to set the attribute to.
attribute_type – The data type of the attribute. This parameter is required if the attribute specified does not already exist and must be created.
input_prims – The prims to be modified. If using with syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep
>>> sphere = rep.create.sphere(as_mesh=False)
>>> with sphere:
...     rep.modify.attribute("radius", rep.distribution.uniform(1, 5))
- omni.replicator.core.modify.material(value: Union[ReplicatorItem, List[str]] = None, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) ReplicatorItem
Modify the material bound to the prims specified in input_prims.
- Parameters
value – The material to bind to the prims. If multiple materials are provided, a random one will be chosen.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Example
>>> import omni.replicator.core as rep >>> mat = rep.create.material_omnipbr() >>> sphere = rep.create.sphere(as_mesh=False) >>> with sphere: ... rep.modify.material(["/Replicator/Looks/OmniPBR"]) omni.replicator.core.modify.material
- omni.replicator.core.modify.pose(position: Union[ReplicatorItem, float, Tuple[float]] = None, position_x: Union[ReplicatorItem, float] = None, position_y: Union[ReplicatorItem, float] = None, position_z: Union[ReplicatorItem, float] = None, rotation: Union[ReplicatorItem, float, Tuple[float]] = None, rotation_x: Union[ReplicatorItem, float] = None, rotation_y: Union[ReplicatorItem, float] = None, rotation_z: Union[ReplicatorItem, float] = None, rotation_order: str = 'XYZ', scale: Union[ReplicatorItem, float, Tuple[float]] = None, size: Union[ReplicatorItem, float, Tuple[float]] = None, pivot: Union[ReplicatorItem, Tuple[float]] = None, look_at: Union[ReplicatorItem, str, Path, usdrt.Sdf.Path, Tuple[float, float, float], List[Union[str, Path, usdrt.Sdf.Path]]] = None, look_at_up_axis: Union[str, Tuple[float, float, float]] = None, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) ReplicatorItem
Modify the position, rotation, scale, and/or look-at target of the prims specified in
input_prims
.- Parameters
position – XYZ coordinates in world space.
position_x – Coordinate value along the X axis.
position_y – Coordinate value along the Y axis.
position_z – Coordinate value along the Z axis.
rotation – Rotation in degrees for the axes specified in
rotation_order
.rotation_x – Rotation in degrees for the X axis.
rotation_y – Rotation in degrees for the Y axis.
rotation_z – Rotation in degrees for the Z axis.
rotation_order – Order of rotation. Select from [XYZ, XZY, YXZ, YZX, ZXY, ZYX]
scale – Scale factor for each of XYZ axes.
size – Desired size of the input prims. Each input prim is scaled to match the specified
size
extents in each of the XYZ axes.pivot – Pivot that sets the center point of translate and rotate operation.
look_at – The look at target to orient towards specified as either a
ReplicatorItem
, a prim path, or world coordinates. If multiple prims are set, the target point will be the mean of their positions.look_at_up_axis – The up axis used in look_at function
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Note
position
and any of (position_x
,position_y
, andposition_z
) cannot both be specified.rotation
andlook_at
cannot both be specified.size
andscale
cannot both be specified.size
is converted to scale based on the prim’s current axis-aligned bounding box size. If a scale is already applied, it might not be able to reflect the true size of the prim.
Example
>>> import omni.replicator.core as rep >>> with rep.create.cube(): ... rep.modify.pose(position=rep.distribution.uniform((0, 0, 0), (100, 100, 100)), ... scale=rep.distribution.uniform(0.1, 10), ... look_at=(0, 0, 0)) omni.replicator.core.modify.pose
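The size-to-scale conversion mentioned in the note above can be sketched in pure Python. This is an illustration of the documented behaviour, not Replicator's internal implementation; `size_to_scale` is a hypothetical helper:

```python
# Sketch: convert a target size to a per-axis scale factor from the prim's
# current axis-aligned bounding box extents, as the note above describes.
# Illustration only -- not Replicator's internal code.
def size_to_scale(target_size, aabb_extents, current_scale=(1.0, 1.0, 1.0)):
    # The AABB already reflects any applied scale, which is why the note
    # warns the result may not match the true unscaled size of the prim.
    return tuple(
        s * (t / e) for t, e, s in zip(target_size, aabb_extents, current_scale)
    )

print(size_to_scale((100, 100, 100), (50, 25, 100)))  # (2.0, 4.0, 1.0)
```

The prim is scaled so that each axis of its bounding box matches the requested size extent.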
- omni.replicator.core.modify.pose_camera_relative(camera: Union[ReplicatorItem, List[str]], render_product: ReplicatorItem, distance: float, horizontal_location: float = 0, vertical_location: float = 0, input_prims=None) ReplicatorItem
Modify the position of the prims relative to a camera.
- Parameters
camera – Camera that the prim is relative to.
horizontal_location – Horizontal location in the camera space, which is in the range
[-1, 1]
.vertical_location – Vertical location in the camera space, which is in the range
[-1, 1]
.distance – Distance from the prim to the camera.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> camera = rep.create.camera() >>> render_product = rep.create.render_product(camera, (1024, 512)) >>> with rep.create.cube(): ... rep.modify.pose_camera_relative(camera, render_product, distance=500, horizontal_location=0, vertical_location=0) omni.replicator.core.modify.pose_camera_relative
- omni.replicator.core.modify.pose_orbit(barycentre: Union[ReplicatorItem, Tuple[float, float, float], str], distance: Union[ReplicatorItem, float], azimuth: Union[ReplicatorItem, float], elevation: Union[ReplicatorItem, float], look_at_barycentre: bool = True, input_prims: Optional[Union[ReplicatorItem, List[str]]] = None) Node
Position the
input_prims
in an orbit around a point.- Parameters
barycentre – The point around which to position the input prims. The barycentre can be specified as either coordinates or as prim paths. If more than one prim path is provided, the barycentre will be set to the mean of the prim centres.
distance – Distance from barycentre
azimuth – Horizontal angle (in degrees).
elevation – Vertical angle (in degrees).
look_at_barycentre – If
True
, orient theinput_prims
towards the barycentre. DefaultTrue
.input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> cube = rep.create.cube() >>> camera = rep.create.camera() >>> with camera: ... rep.modify.pose_orbit( ... barycentre=cube, ... distance=rep.distribution.uniform(400, 500), ... azimuth=45, ... elevation=rep.distribution.uniform(-180, 180), ... ) omni.replicator.core.modify.pose_orbit
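The distance/azimuth/elevation placement above is standard spherical positioning around the barycentre. The sketch below assumes a Y-up axis convention in degrees; the exact convention used by rep.modify.pose_orbit is not stated here, so treat `orbit_position` as a hypothetical illustration:

```python
import math

# Sketch of an orbit position from distance/azimuth/elevation around a
# barycentre, assuming a Y-up convention with angles in degrees.
# Illustration only -- the axis convention is an assumption.
def orbit_position(barycentre, distance, azimuth_deg, elevation_deg):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.sin(el)
    z = distance * math.cos(el) * math.cos(az)
    bx, by, bz = barycentre
    return (bx + x, by + y, bz + z)

p = orbit_position((0, 0, 0), 500, 45, 30)
# The sampled point always sits exactly `distance` from the barycentre.
print(round(math.dist(p, (0, 0, 0)), 6))  # 500.0
```

Whatever the convention, the invariant is that the result lies on a sphere of radius `distance` centred on the barycentre, which is what makes `look_at_barycentre=True` useful for camera orbits.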
- omni.replicator.core.modify.projection_material(position: Union[ReplicatorItem, List[str]] = None, rotation: Union[ReplicatorItem, List[str]] = None, scale: Union[ReplicatorItem, List[str]] = None, texture_group: Union[ReplicatorItem, List[str]] = None, diffuse: Union[ReplicatorItem, List[str]] = None, normal: Union[ReplicatorItem, List[str]] = None, roughness: Union[ReplicatorItem, List[str]] = None, metallic: Union[ReplicatorItem, List[str]] = None, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) Node
Modify values on a projection and update the transform via updates to the proxy prim.
The proxy prims’ transforms can be modified outside this function, and this function can then be used to update the projection position, scale, and rotation when they are not manually provided.
- Parameters
position – Manually update the position of the projection, this will override the position from the proxy.
rotation – Manually update the rotation of the projection, this will override the rotation from the proxy.
scale – Manually update the scale of the projection, this will override the scale from the proxy.
texture_group – Update the diffuse, normal, roughness, and/or metallic textures simultaneously. Use when there are diffuse, normal, roughness, and/or metallic textures in a set. If using this arg, the diffuse, normal, roughness and/or metallic args should be set to the suffix used to denote each type.
diffuse – Update the diffuse texture used on the projection material. Will not change if not provided.
normal – Update the normal texture used on the projection material. Will not change if not provided.
roughness – Update the roughness texture used on the projection material. Will not change if not provided.
metallic – Update the metallic texture used on the projection material. Will not change if not provided.
input_prims – The projection prim to modify. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Example
>>> import omni.replicator.core as rep >>> import os >>> torus = rep.create.torus() >>> cube = rep.create.cube(position=(50, 100, 0), rotation=(0, 0, 90), scale=(0.2, 0.2, 0.2)) >>> sem = [('class', 'shape')] >>> with torus: ... projection = rep.create.projection_material(cube, sem) >>> with projection: ... rep.modify.projection_material(diffuse=os.path.join(rep.example.TEXTURES_DIR, "smiley_albedo.png")) omni.replicator.core.modify.projection_material
- omni.replicator.core.modify.semantics(semantics: List[Union[str, Tuple[str, str]]] = None, input_prims: Union[ReplicatorItem, List[str]] = None, mode: str = 'add') ReplicatorItem
Add semantics to the target prims.
- Parameters
semantics –
TYPE,VALUE
pairs of semantic labels to include on the prim. (Ex: (‘class’, ‘sphere’))input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.mode – Semantics modification mode. Select from [
add
,replace
,clear
]. Inadd
mode, semantic labels are added to the prim, labels with the sameTYPE:VALUE
will be skipped. (eg.class:car, class:sedan
->class:car, class:sedan, class:automobile
). Inreplace
mode, the semanticsVALUE
specified will replace any existing value of the same semanticTYPE
(eg.class:car, class:sedan, subclass:emergency
->class:automobile, subclass:emergency
). Inclear
mode, ALL existing semantics are cleared before adding the specified semantics. (eg.class:car, subclass:emergency, region:usa
->class:automobile
).
Example
>>> import omni.replicator.core as rep >>> with rep.create.sphere(): ... rep.modify.semantics([("class", "sphere")]) omni.replicator.core.modify.semantics
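The add/replace/clear modes described above can be mirrored with a small pure-Python sketch operating on (TYPE, VALUE) pairs. `apply_semantics` is a hypothetical helper that follows the documented rules, not Replicator's implementation:

```python
# Sketch of the documented semantics modes on (TYPE, VALUE) pairs.
# Illustration only -- not Replicator's internal code.
def apply_semantics(existing, new, mode="add"):
    if mode == "clear":
        # ALL existing semantics are dropped before adding the new ones.
        return list(new)
    if mode == "replace":
        # New VALUEs replace any existing value of the same TYPE.
        new_types = {t for t, _ in new}
        kept = [(t, v) for t, v in existing if t not in new_types]
        return kept + list(new)
    if mode == "add":
        # Labels with an identical TYPE:VALUE pair are skipped.
        return existing + [(t, v) for t, v in new if (t, v) not in existing]
    raise ValueError(f"unknown mode: {mode}")

existing = [("class", "car"), ("class", "sedan")]
print(apply_semantics(existing, [("class", "automobile")], "add"))
# [('class', 'car'), ('class', 'sedan'), ('class', 'automobile')]
```

Running the documented replace example, `class:car, class:sedan, subclass:emergency` with `class:automobile` yields `class:automobile` plus the untouched `subclass:emergency`.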
- omni.replicator.core.modify.variant(name: str, value: Union[List[str], ReplicatorItem], input_prims: Union[ReplicatorItem, List[str]] = None) ReplicatorItem
Modify the variant of the prims specified in
input_prims
.- Parameters
name – The name of the variant set to modify.
value – The value to set the variant to.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import os >>> import omni.replicator.core as rep >>> sphere = rep.create.from_usd(os.path.join(rep.example.ASSETS_DIR, "variant.usd")) >>> with rep.trigger.on_frame(max_execs=10): ... with sphere: ... rep.modify.variant("colorVariant", rep.distribution.choice(["red", "green", "blue"])) omni.replicator.core.modify.variant
- omni.replicator.core.modify.visibility(value: Union[ReplicatorItem, List[bool], bool] = None, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) ReplicatorItem
Modify the visibility of prims.
- Parameters
value – True or False, or a list of
bools
for each prim to be modified, or a Replicator Distribution.input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Example
>>> import omni.replicator.core as rep >>> sphere = rep.create.sphere(position=(100, 0, 100)) >>> with sphere: ... rep.modify.visibility(False) omni.replicator.core.modify.visibility >>> with rep.trigger.on_frame(max_execs=10): ... with sphere: ... rep.modify.visibility(rep.distribution.sequence([True, False])) omni.replicator.core.modify.visibility
- omni.replicator.core.modify.register(fn: Callable[[...], Union[ReplicatorItem, Node]], override: bool = True, fn_name: Optional[str] = None) None
Register a new function under
omni.replicator.core.modify
. Extend the default capabilities ofomni.replicator.core.modify
by registering new functionality. New functions must return aReplicatorItem
or anOmniGraph
node.- Parameters
fn – A function that returns a
ReplicatorItem
or anOmniGraph
node.override – If
True
, will override existing functions of the same name. IfFalse
, an error is raised.fn_name – Optional, specify the registration name. If not specified, the function name is used.
fn_name
must only contain alphanumeric letters(a-z)
, numbers(0-9)
, or underscores(_)
, and cannot start with a number or contain any spaces.
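The fn_name constraint above can be checked with a simple regex. This sketch mirrors the documented rule (letters, digits, underscores, no leading digit, no spaces); Replicator's actual validation may differ, and `is_valid_fn_name` is a hypothetical helper:

```python
import re

# Sketch of the documented fn_name rule: letters (a-z), numbers (0-9), and
# underscores (_) only, with no leading digit and no spaces. Not necessarily
# Replicator's exact check.
def is_valid_fn_name(name: str) -> bool:
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None

print(is_valid_fn_name("scatter_points"))  # True
print(is_valid_fn_name("2fast"))           # False
print(is_valid_fn_name("has space"))       # False
```

In practice this is the same restriction as a Python identifier limited to ASCII, which is what lets the registered function be called as rep.modify.&lt;fn_name&gt;.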
Time
- omni.replicator.core.modify.time(value: Union[float, ReplicatorItem]) ReplicatorItem
Set the timeline time value (in seconds).
- Parameters
value – The value to set the time to.
Example
>>> import omni.replicator.core as rep >>> with rep.trigger.on_frame(max_execs=10): ... rep.modify.time(rep.distribution.uniform(0, 500)) omni.replicator.core.modify.time
- omni.replicator.core.modify.timeline(value: Union[float, ReplicatorItem], modify_type: str = None) ReplicatorItem
Modify the timeline by frame number or time value (in seconds).
- Parameters
value – The value to set the frame number or time to.
modify_type – The method with which to modify the timeline by. Valid types are [time, start_time, end_time, frame, start_frame, end_frame]
Example
>>> import omni.replicator.core as rep >>> with rep.trigger.on_frame(max_execs=10): ... rep.modify.timeline(rep.distribution.uniform(0, 500), "frame") omni.replicator.core.modify.timeline
Randomizer
- omni.replicator.core.randomizer.color(colors: Union[ReplicatorItem, List[Tuple[float]]], per_sub_mesh: bool = False, seed: int = None, input_prims: Union[ReplicatorItem, List[str]] = None) ReplicatorItem
Randomize colors. Creates and binds an OmniPBR material to each prim in input_prims and randomizes colors.
- Parameters
colors – List of colors, or a
ReplicatorItem
that outputs a list of colors. If supplied as a list, a choice sampler is automatically created to sample from the supplied color list.per_sub_mesh – If
True
, bind a color to each mesh and geom_subset. IfFalse
, a color is bound only to the specified prim.seed – If colors is specified as a list, optionally provide seed for color sampler. Unused if colors is a
ReplicatorItem
.input_prims – List of input_prims. If constructing using
with
structure, set to None to bindinput_prims
to the current context.
Example
>>> import omni.replicator.core as rep >>> cones = rep.create.cone(position=rep.distribution.uniform((-100,-100,-100),(100,100,100)), count=100) >>> with cones: ... rep.randomizer.color(colors=rep.distribution.uniform((0, 0, 0), (1, 1, 1))) omni.replicator.core.randomizer.color
- omni.replicator.core.randomizer.instantiate(paths: Union[ReplicatorItem, List[Union[str, Path, usdrt.Sdf.Path, Prim, ReplicatorItem]]], size: Union[ReplicatorItem, int], weights: List[float] = None, mode: str = 'scene_instance', with_replacements=True, seed: int = None, name: str = None, use_cache: bool = True, semantics: List[Tuple[str, str]] = None) ReplicatorItem
Sample
size
number of prims from the paths provided.- Parameters
paths – The list of USD paths pointing to the assets to sample from.
size – The number of prims to sample. NOTE: if paths is a
ReplicatorItem
, size will be ignored.weights – The weights to use for sampling. If provided, the length of
weights
must match the length ofpaths
. If omitted, uniform sampling will be used. NOTE: if the paths is aReplicatorItem
, weights will be ignored.mode – The instantiation mode. Choose from [scene_instance, point_instance, reference]. Defaults to scene_instance. Scene Instance creates a prototype in the cache, and new instances reference the prototype. Point Instances are best suited for situations requiring a very large number of samples, but only pose attributes can be modified per instance. Reference mode is used for asset references that need to be modified (WARNING: this mode has known material loading issue.)
with_replacements – When
False
, avoids duplicates when sampling. DefaultTrue
. NOTE: if the paths is a ReplicatorItem, with_replacements will be ignored.seed – Seed to use as initialization for the pseudo-random number generator. If not specified, the global seed will be used. NOTE: if the paths is a
ReplicatorItem
, seed will be ignored.name – Optionally prepend a name to the population.
use_cache – If
True
, cache the assets inpaths
to speed up randomization. Set to False if the size of the population is too large to be cached. Default: True.semantics – List of semantic type-label pairs.
Example
>>> import omni.replicator.core as rep >>> usds = rep.utils.get_usd_files(rep.example.ASSETS_DIR) >>> with rep.randomizer.instantiate(usds, size=100): ... rep.modify.pose(position=rep.distribution.uniform((-50,-50,-50),(50,50,50))) omni.replicator.core.modify.pose
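The size/weights/with_replacements semantics above behave like ordinary weighted sampling. A pure-Python sketch of that behaviour (illustration only; `sample_paths` is hypothetical, and Replicator seeds from the global seed when none is given):

```python
import random

# Sketch of the documented sampling: draw `size` paths, optionally weighted,
# with or without replacement. Illustration only -- not Replicator's code.
def sample_paths(paths, size, weights=None, with_replacements=True, seed=0):
    rng = random.Random(seed)
    if with_replacements:
        # Weighted sampling with replacement; uniform when weights is None.
        return rng.choices(paths, weights=weights, k=size)
    # Without replacement, duplicates are avoided (weights ignored here
    # for simplicity; size cannot exceed len(paths) in this mode).
    return rng.sample(paths, k=size)

paths = ["/World/A.usd", "/World/B.usd", "/World/C.usd"]
print(sample_paths(paths, size=5, weights=[0.7, 0.2, 0.1], seed=42))
```

Note the documented caveat: when paths is itself a ReplicatorItem, size, weights, with_replacements, and seed are all ignored.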
- omni.replicator.core.randomizer.materials(materials: Union[ReplicatorItem, List[str]], seed: int = None, max_cached_materials: int = 0, input_prims=None, name: Optional[str] = None) ReplicatorItem
Sample materials from provided materials and bind to the input_prims.
Note that binding materials is a relatively expensive operation. It is generally more efficient to modify materials already bound to prims.
- Parameters
materials – The list of materials to sample from and bind to the input prims. The materials can be prim paths, MDL paths or a
ReplicatorItem
.seed – Seed to use as initialization for the pseudo-random number generator. If not specified, the global seed will be used.
max_cached_materials – Maximum number of materials allowed to remain in the scene when not attached to a prim. A larger value allows more materials to remain in the scene, reducing the number of materials that need to be re-created each call at the expense of memory usage. Only applies to materials created from MDL paths specified in materials. The default value of 0 removes all cached materials at the end of each call.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Example
>>> import omni.replicator.core as rep >>> mats = rep.create.material_omnipbr(diffuse=rep.distribution.uniform((0,0,0), (1,1,1)), count=100) >>> spheres = rep.create.sphere( ... scale=0.2, ... position=rep.distribution.uniform((-100,-100,-100), (100,100,100)), ... count=100 ... ) >>> with spheres: ... rep.randomizer.materials(mats) omni.replicator.core.randomizer.materials
- omni.replicator.core.randomizer.rotation(min_angle: Tuple[float, float, float] = (-180.0, -180.0, -180.0), max_angle: Tuple[float, float, float] = (180.0, 180.0, 180.0), seed: int = None, input_prims: Union[ReplicatorItem, List[str]] = None) ReplicatorItem
Randomize the rotation of the input prims
This randomizer applies truly uniformly distributed rotations to the input prims. In contrast, rotations are not truly uniformly distributed when each rotation axis is simply sampled uniformly.
- Parameters
min_angle – Minimum value for Euler angles in XYZ form (degrees)
max_angle – Maximum value for Euler angles in XYZ form (degrees)
seed – Seed to use as initialization for the pseudo-random number generator. If not specified, the global seed will be used.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> cubes = rep.create.cube(position=rep.distribution.uniform((-100,-100,-100),(100,100,100)), count=100) >>> with cubes: ... rep.randomizer.rotation() omni.replicator.core.randomizer.rotation
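The "truly uniform" claim above means sampling uniformly over the rotation group SO(3), rather than sampling each Euler axis uniformly (which biases the result). One standard way to do this is Shoemake's uniform quaternion method, sketched below; this illustrates the concept and is not Replicator's implementation:

```python
import math
import random

# Sketch: uniform random rotations as unit quaternions via Shoemake's
# subgroup algorithm, which samples uniformly over SO(3). Illustration only.
def random_quaternion(rng):
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    a, b = math.sqrt(1.0 - u1), math.sqrt(u1)
    return (
        a * math.sin(2 * math.pi * u2),
        a * math.cos(2 * math.pi * u2),
        b * math.sin(2 * math.pi * u3),
        b * math.cos(2 * math.pi * u3),
    )

q = random_quaternion(random.Random(0))
print(round(math.fsum(c * c for c in q), 9))  # 1.0 -- a unit quaternion
```

Per-axis uniform Euler sampling, by contrast, concentrates rotations near the poles of the middle axis, which is the distortion this randomizer avoids.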
- omni.replicator.core.randomizer.scatter_2d(surface_prims: Union[ReplicatorItem, List[str]], no_coll_prims: Union[ReplicatorItem, List[str]] = None, min_samp: Tuple[float, float, float] = (None, None, None), max_samp: Tuple[float, float, float] = (None, None, None), seed: int = None, offset: int = 0, check_for_collisions: bool = False, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) ReplicatorItem
Scatter input prims across the surface of the specified surface prims.
- Parameters
surface_prims – The prims across which to scatter the input prims. These can be meshes or GeomSubsets which specify a subset of a mesh’s polygons on which to scatter.
no_coll_prims – Existing prim(s) to prevent collisions with. If any prims are passed, they will be checked for collisions, which may slow down compute regardless of whether
check_for_collisions
isTrue
orFalse
.min_samp – The minimum position in global space to sample from.
max_samp – The maximum position in global space to sample from.
seed – Seed to use as initialization for the pseudo-random number generator. If not specified, the global seed will be used.
offset – The distance the prims should be offset along the normal of the surface of the mesh.
check_for_collisions –
Whether the scatter operation should ensure that objects are not intersecting.
0
: No collision checking (fastest)1
: Check for collisions among the sampled input prims,2
: No collision checking among sampled input prims, but compute collision convex meshes for all the prims on the stage by recursively traversing the stage, and make sure the sampled prims do not collide with any of them.3
: Make sure the sampled prims don’t collide with anything (slowest)
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Example
>>> import omni.replicator.core as rep >>> spheres = rep.create.sphere(count=100) >>> surface_prim = rep.create.torus(scale=20, visible=False) >>> with spheres: ... rep.randomizer.scatter_2d(surface_prim) omni.replicator.core.randomizer.scatter_2d
- omni.replicator.core.randomizer.scatter_3d(volume_prims: Union[ReplicatorItem, List[str]] = None, no_coll_prims: Union[ReplicatorItem, List[str]] = None, volume_excl_prims: Union[ReplicatorItem, List[str]] = None, min_samp: Tuple[float, float, float] = (None, None, None), max_samp: Tuple[float, float, float] = (None, None, None), resolution_scaling: float = 1.0, voxel_size: float = 0.0, check_for_collisions: bool = False, prevent_vol_overlap: bool = True, viz_sampled_voxels: bool = False, seed: int = None, input_prims: Union[ReplicatorItem, List[str]] = None, name: Optional[str] = None) ReplicatorItem
Scatter input prims within the bounds of the specified volume prims.
- Parameters
volume_prims – The prims within which to scatter the input prims. Currently, only meshes are supported, and they must be watertight. If no prims are provided, you must specify min_samp and max_samp bounds.
no_coll_prims – Existing prim(s) to prevent collisions with. If any prims are passed, they will be checked for collisions using rejection sampling. This may slow down compute, regardless of whether check_for_collisions is True or False.
volume_excl_prims – Prim(s) from which to exclude from sampling. Must have watertight meshes. Similar effect to
no_coll_prims
, but more efficient and less accurate. Rather than performing rejection sampling based on collision with the provided volume (asno_coll_prims
does), this prunes off the voxelized sampling space enclosed byvolume_excl_prims
so the rejection rate is 0 because it never tries to sample in the excluded space. However, some objects may get sampled very close to the edge of a mesh in
, where the sampled root point is outsidevolume_excl_prims
but parts of the mesh extend to overlap the space. To get the best of both worlds, you can pass the same volume prim to bothno_coll_prims
and tovolume_excl_prims
, providing a high accuracy and a low rejection rate.min_samp – The minimum position in global space to sample from.
max_samp – The maximum position in global space to sample from.
resolution_scaling – Amount by which the default voxel resolution used in sampling should be scaled. More complex meshes may require a higher resolution. The default voxel resolution is 30 for the longest side of the mean-sized volume prim mesh provided. Higher values will ensure more fine-grained voxels, but will come at the cost of performance.
voxel_size – Voxel size used to compute the resolution. If this is provided, then resolution_scaling is ignored, otherwise (if it is
0
by default) resolution_scaling is used.check_for_collisions – Whether the scatter operation should ensure that sampled objects are not intersecting.
prevent_vol_overlap – If
True
, prevents double sampling even when multiple enclosing volumes overlap, so that the entire enclosed volume is sampled uniformly. IfFalse
, it allows overlapped sampling with higher density in overlapping areas.viz_sampled_voxels – If
True
, creates semi-transparent green cubes in all voxels in the scene that the input prim positions are sampled from.seed – Seed to use as initialization for the pseudo-random number generator. If not specified, the global seed will be used.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.name (optional) – A name for the graph node.
Example
>>> import omni.replicator.core as rep >>> spheres = rep.create.sphere(count=100) >>> volume_prim = rep.create.torus(scale=20, visible=False) >>> with spheres: ... rep.randomizer.scatter_3d(volume_prim) omni.replicator.core.randomizer.scatter_3d
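The precedence between voxel_size and resolution_scaling described above can be sketched as follows. `effective_voxel_size` is a hypothetical helper assuming the documented default of 30 voxels along the longest side; the exact internal computation may differ:

```python
# Sketch of the documented precedence: an explicit voxel_size wins; otherwise
# the default resolution of 30 voxels along the longest side is scaled by
# resolution_scaling. Illustration only -- an assumption, not Replicator's code.
def effective_voxel_size(longest_side, voxel_size=0.0, resolution_scaling=1.0):
    if voxel_size > 0.0:
        # An explicit voxel_size overrides resolution_scaling entirely.
        return voxel_size
    return longest_side / (30.0 * resolution_scaling)

print(effective_voxel_size(300.0))                          # 10.0
print(effective_voxel_size(300.0, voxel_size=5.0))          # 5.0
print(effective_voxel_size(300.0, resolution_scaling=2.0))  # 5.0
```

Smaller effective voxels track complex meshes more closely but increase the cost of building and sampling the voxel grid.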
- omni.replicator.core.randomizer.texture(textures: Union[ReplicatorItem, List[str]], texture_scale: Union[ReplicatorItem, List[Tuple[float, float]]] = None, texture_rotate: Union[ReplicatorItem, List[int]] = None, per_sub_mesh: bool = False, project_uvw: bool = False, seed: int = None, input_prims: Union[ReplicatorItem, List[str]] = None) ReplicatorItem
Randomize textures. Creates and binds an OmniPBR material to each prim in input_prims and modifies textures.
- Parameters
textures – List of texture paths, or a
ReplicatorItem
that outputs a list of texture paths. If a list of texture paths is provided, they will be sampled uniformly using the global seed.texture_scale – List of texture scales in (X, Y) represented by positive floats. Larger values will make the texture appear smaller on the asset.
texture_rotate – Rotation in degrees of the texture.
per_sub_mesh – If
True
, bind a material to each mesh and geom_subset. IfFalse
, a material is bound only to the specified prim.project_uvw – When
True
, UV coordinates will be generated by projecting them from a coordinate system.seed – Seed to use as initialization for the pseudo-random number generator. If not specified, the global seed will be used.
input_prims – List of input_prims. If constructing using
with
structure, set to None to bind input_prims to the current context.
Example
>>> import omni.replicator.core as rep >>> with rep.create.cone(position=rep.distribution.uniform((-100,-100,-100),(100,100,100)), count=100): ... rep.randomizer.texture(textures=rep.example.TEXTURES, texture_scale=[(0.5, 0.5)], texture_rotate=[45]) omni.replicator.core.randomizer.texture
- omni.replicator.core.randomizer.register(fn: Callable[[...], Union[ReplicatorItem, Node]], override: bool = True, fn_name: Optional[str] = None) None
Register a new function under
omni.replicator.core.randomizer
. Extend the default capabilities ofomni.replicator.core.randomizer
by registering new functionality. New functions must return aReplicatorItem
or anOmniGraph
node.- Parameters
fn – A function that returns a
ReplicatorItem
or anOmniGraph
node.
override – If
True
, will override existing functions of the same name. IfFalse
, an error is raised.fn_name – Optional, specify the registration name. If not specified, the function name is used.
fn_name
must only contain alphanumeric letters(a-z)
, numbers(0-9)
, or underscores(_)
, and cannot start with a number or contain any spaces.
Example
>>> import omni.replicator.core as rep >>> def scatter_points(points): ... return rep.modify.pose(position=rep.distribution.choice(points)) >>> rep.randomizer.register(scatter_points) >>> with rep.create.cone(): ... rep.randomizer.scatter_points([(0, 0, 0), (0, 0, 100), (0, 0, 200)]) omni.replicator.core.randomizer.scatter_points
Physics
- omni.replicator.core.physics.collider(approximation_shape: str = 'convexHull', contact_offset: float = None, rest_offset: float = None, input_prims: Union[ReplicatorItem, List] = None) None
Applies the Physx Collision API to the prims specified in
input_prims
.- Parameters
approximation_shape – The approximation used in the collider (by default, convex hull). Other approximations include “convexDecomposition”, “boundingSphere”, “boundingCube”, “meshSimplification”, and “none”. “none” will just use default mesh geometry.
contact_offset – Offset used when generating contact points. If it is
None
, it will determined by scene’s currentmeters_per_unit
. Default:None
.rest_offset – Offset used when generating rest contact points. If it is
None
, it will determined by scene’s currentmeters_per_unit
. Default:None
.input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> with rep.create.cube(): ... rep.physics.collider() omni.replicator.core.physics.collider
- omni.replicator.core.physics.drive_properties(stiffness: Union[ReplicatorItem, float] = 0.0, damping: Union[ReplicatorItem, float] = 0.0, input_prims: Union[ReplicatorItem, List] = None) None
Applies the Drive API to the prims specified in
input_prims
, if necessary. Prims must be either revolute or prismatic joints. For D6 joint randomization, please refer toomni.replicator.core.modify.attribute
and provide the exact attribute name of the drive parameter to be randomized.- Parameters
stiffness – The stiffness of the drive (unitless).
damping – The damping of the drive (unitless).
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
- omni.replicator.core.physics.mass(mass: Optional[float] = None, density: Optional[float] = None, center_of_mass: Optional[List] = None, diagonal_inertia: Optional[List] = None, principal_axes: Optional[List] = None, input_prims: Union[ReplicatorItem, List] = None) None
Applies the Physx Mass API to the prims specified in
input_prims
, if necessary. This function sets up randomization parameters for various mass-related properties in the mass API.- Parameters
mass – The mass of the prim. By default mass is derived from the volume of the collision geometry multiplied by a density.
density – The density of the prim.
center_of_mass – Center of the mass of the prim in local coordinates.
diagonal_inertia – Constructs a diagonalized inertia tensor along the principal axes.
principal_axes – A quaternion (wxyz) representing the orientation of the principal axes in the local coordinate frame.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> with rep.create.cube(): ... rep.physics.mass(mass=rep.distribution.uniform(1.0, 50.0)) omni.replicator.core.physics.mass
- omni.replicator.core.physics.physics_material(static_friction: Union[ReplicatorItem, float] = None, dynamic_friction: Union[ReplicatorItem, float] = None, restitution: Union[ReplicatorItem, float] = None, input_prims: Union[ReplicatorItem, List] = None) None
If the input prim is a material, the physics material API will be applied to it if necessary. Otherwise, if the prim has a bound material, randomizations will be made on that material (again with the physics material API applied if necessary). If the prim does not have a bound material, then a physics material will be created at
<prim_path>/PhysicsMaterial
and bound at the prim.- Parameters
static_friction – Static friction coefficient (unitless).
dynamic_friction – Dynamic friction coefficient (unitless).
restitution – Restitution coefficient (unitless).
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> with rep.create.cube(): ... rep.physics.physics_material( ... static_friction=rep.distribution.uniform(0.0, 1.0), ... dynamic_friction=rep.distribution.uniform(0.0, 1.0), ... restitution=rep.distribution.uniform(0.0, 1.0) ... ) omni.replicator.core.physics.physics_material
- omni.replicator.core.physics.rigid_body(velocity: Union[ReplicatorItem, Tuple[float, float, float]] = (0.0, 0.0, 0.0), angular_velocity: Union[ReplicatorItem, Tuple[float, float, float]] = (0.0, 0.0, 0.0), contact_offset: float = None, rest_offset: float = None, overwrite: bool = False, input_prims: Union[ReplicatorItem, List] = None) None
Randomizes the velocity and angular velocity of the prims specified in
input_prims
. If they do not have theRigidBodyAPI
then one will be created for the prim.- Parameters
velocity – The velocity of the prim.
angular_velocity – The angular velocity of the prim (degrees / time).
contact_offset – Offset used when generating contact points. If it is
None
, it will determined by scene’s currentmeters_per_unit
. Default:None
.rest_offset – Offset used when generating rest contact points. If it is
None
, it will determined by scene’s currentmeters_per_unit
. Default:None
.overwrite – If True, apply rigid body to the input prim and remove any rigid body already applied to a descendent of the input prim. If False, rigid body is only be applied to the input prim if no descendent is already specified as a rigid body. This is because PhysX does not allow nested rigid body hierarchies.
input_prims – The prims to be modified. If using
with
syntax, this argument can be omitted.
Example
>>> import omni.replicator.core as rep >>> with rep.create.cube(): ... rep.physics.rigid_body( ... velocity=rep.distribution.uniform((0, 0, 0), (100, 100, 100)), ... angular_velocity=rep.distribution.uniform((30, 30, 30), (300, 300, 300)) ... ) omni.replicator.core.physics.rigid_body
Annotators
- omni.replicator.core.annotators.get(name: str, init_params: Optional[dict] = None, render_product_idxs: Optional[List[int]] = None, device: Optional[str] = None, do_array_copy: bool = True) Annotator
Get annotator from registry
- Parameters
name – Name of annotator to be retrieved from registry
init_params – Annotator initialization parameters
render_product_idxs – Index of render products to utilize
device – If set, make annotator data available to the specified device if possible. Select from ['cpu', 'cuda', 'cuda:<device_index>']. Defaults to cpu.
do_array_copy – If True, retrieve a copy of the data array. This is recommended for workflows using asynchronous backends to manage the data lifetime. Can be set to False to gain performance if the data is expected to be used immediately within the writer. Defaults to True.
- omni.replicator.core.annotators.get_augmentation(name: str) Augmentation
Get Augmentation from registry
- Parameters
name – Name of augmentation to retrieve from registry
- omni.replicator.core.annotators.get_registered_annotators() List[str]
Returns a list of names of registered annotators.
- Returns
List of registered annotators.
- omni.replicator.core.annotators.register(name: str, annotator: Union[Annotator, str]) None
Register annotator
- Parameters
name – Name under which to register annotator
annotator – Annotator to be registered
- omni.replicator.core.annotators.register_augmentation(name: str, augmentation: Union[Augmentation, str]) None
Register an augmentation operation.
- Parameters
name – Name under which to register augmentation
augmentation – Augmentation to be registered. Can be specified as an Augmentation, the name of a registered augmentation, or the node type id of an OmniGraph node to be used as an augmentation.
Example
>>> import omni.replicator.core as rep
>>> def make_opaque(data_in):
...     data_in[..., 3] = 255
>>> rep.annotators.register_augmentation("makeOpaque", rep.annotators.Augmentation(make_opaque))
- omni.replicator.core.annotators.unregister_augmentation(name: str) None
Unregister a registered augmentation
- Parameters
name – Name of augmentation to unregister
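Augmentation callables like the make_opaque example above are plain array functions, so they can be sanity-checked outside the registry. A minimal numpy sketch, assuming RGBA np.uint8 image data:

```python
import numpy as np

def make_opaque(data_in):
    # Force the alpha channel of an RGBA image buffer to fully opaque, in place.
    data_in[..., 3] = 255

rgba = np.zeros((2, 2, 4), dtype=np.uint8)  # fully transparent black image
make_opaque(rgba)
print(rgba[..., 3].min())  # 255: every pixel is now opaque
```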
- class omni.replicator.core.annotators.Annotator(name: str, init_params: Optional[dict] = None, render_product_idxs: Optional[List[int]] = None, device: Optional[str] = None, render_products: Optional[list] = None, template_name: Optional[str] = None, do_array_copy: bool = True)
Annotator class. Annotator instances identify the annotator name, its initialization parameters, the render products it is tied to, and the name of the OmniGraph template.
Initialization parameters can be overridden with initialize(), and render products can be set with attach(). Once attached, the data from an annotator can be retrieved with get_data().
- Parameters
name – Annotator name
init_params – Optional parameters specifying the parameters to initialize the annotator with
render_product_idxs – Optionally specify the index of render products to utilize
device – If set, make annotator data available to the specified device if possible. Select from ['cpu', 'cuda', 'cuda:<device_index>']. Defaults to cpu.
render_products[List] – If set, attach annotator to the specified render products
template_name – Optional name of the template describing the annotator graph
do_array_copy – If True, retrieve a copy of the data array. This is recommended for workflows using asynchronous backends to manage the data lifetime. Can be set to False to gain performance if the data is expected to be used immediately within the writer. Defaults to True.
- class omni.replicator.core.annotators.AnnotatorRegistry
Registry of annotators providing groundtruth data to writers.
Default Annotators
The current annotators that are available through the registry are:
| Standard Annotators | RT Annotators | PathTracing Annotators |
|---|---|---|
| LdrColor/rgb | SmoothNormal | PtDirectIllumation |
| HdrColor | BumpNormal | PtGlobalIllumination |
| camera_params/CameraParams | Motion2d | PtReflections |
| normals | DiffuseAlbedo | PtRefractions |
| motion_vectors | SpecularAlbedo | PtSelfIllumination |
| cross_correspondence | Roughness | PtBackground |
| distance_to_image_plane | DirectDiffuse | PtWorldNormal |
| distance_to_camera | DirectSpecular | PtRefractionFilter |
| primPaths | Reflections | PtMultiMatte<0-7> |
| bounding_box_2d_tight_fast | IndirectDiffuse | PtWorldPos |
| bounding_box_2d_tight | DepthLinearized | PtZDepth |
| bounding_box_2d_loose_fast | EmissionAndForegroundMask | PtVolumes |
| bounding_box_2d_loose | AmbientOcclusion | PtDiffuseFilter |
| bounding_box_3d_360 | | PtReflectionFilter |
| bounding_box_3d_fast | | |
| bounding_box_3d | | |
| semantic_segmentation | | |
| instance_segmentation_fast | | |
| instance_segmentation | | |
| skeleton_data | | |
| pointcloud | | |
| CrossCorrespondence | | |
| MotionVectors | | |
| Occlusion | | |
Some annotators support initialization parameters. For example, segmentation annotators can be parametrized with a colorize attribute specifying the output format:
omni.replicator.core.annotators.get("semantic_segmentation", init_params={"colorize": True})
To see how annotators are used within a writer, we have prepared scripts that implement the basic writer, which covers all standard annotators.
Standard Annotators
These annotators can be used in any rendering mode. Each annotator’s usage and outputs are described below.
LdrColor
Annotator Name: LdrColor (alternative name: rgb)
The LdrColor or rgb annotator produces the low dynamic range output image as an array of type np.uint8 with shape (height, width, 4), where the four channels correspond to R, G, B, A.
Example
import omni.replicator.core as rep
async def test_ldr():
# Add Default Light
distance_light = rep.create.light(rotation=(315,0,0), intensity=3000, light_type="distant")
cone = rep.create.cone()
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
ldr = rep.AnnotatorRegistry.get_annotator("LdrColor")
ldr.attach(rp)
await rep.orchestrator.step_async()
data = ldr.get_data()
    print(data.shape, data.dtype)  # (512, 1024, 4) uint8
import asyncio
asyncio.ensure_future(test_ldr())
Normals
Annotator Name: normals
The normals annotator produces an array of type np.float32 with shape (height, width, 4). The first three channels correspond to (x, y, z). The fourth channel is unused.
Example
import omni.replicator.core as rep
async def test_normals():
# Add Default Light
distance_light = rep.create.light(rotation=(315,0,0), intensity=3000, light_type="distant")
cone = rep.create.cone()
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
normals = rep.AnnotatorRegistry.get_annotator("normals")
normals.attach(rp)
await rep.orchestrator.step_async()
data = normals.get_data()
    print(data.shape, data.dtype)  # (512, 1024, 4) float32
import asyncio
asyncio.ensure_future(test_normals())
Distance to Camera
Annotator Name: distance_to_camera
Outputs a depth map from objects to the camera position. The distance_to_camera annotator produces a 2d array of type np.float32 with 1 channel.
Data Details
The unit for distance to camera is meters (for example, if the object is 1000 units from the camera and the meters_per_unit variable of the scene is 100, the distance to camera is 10).
A value of 0 in the 2d array represents infinity, meaning there is no object at that pixel.
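Because 0 is the sentinel for infinity, a common post-processing step is to replace it before doing any distance math. A minimal numpy sketch (the depth values here are illustrative, not annotator output):

```python
import numpy as np

# Hypothetical distance_to_camera output: (height, width) float32 in meters,
# where 0 marks pixels that hit no geometry.
depth = np.array([[0.0, 2.5],
                  [10.0, 0.0]], dtype=np.float32)

# Map the 0 sentinel to np.inf so min/percentile queries ignore empty pixels.
depth_inf = np.where(depth == 0.0, np.inf, depth)
hit_mask = np.isfinite(depth_inf)   # True where geometry was hit
nearest = depth_inf.min()           # nearest hit distance in meters
```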
Distance to Image Plane
Annotator Name: distance_to_image_plane
Outputs a depth map from objects to the image plane of the camera. The distance_to_image_plane annotator produces a 2d array of type np.float32 with 1 channel.
Data Details
The unit for distance to image plane is meters (for example, if the object is 1000 units from the image plane of the camera and the meters_per_unit variable of the scene is 100, the distance to image plane is 10).
A value of 0 in the 2d array represents infinity, meaning there is no object at that pixel.
Motion Vectors
Annotator Name: motion_vectors
Outputs a 2D array of motion vectors representing the relative motion of a pixel in the camera’s viewport between frames.
The MotionVectors annotator returns the per-pixel motion vectors in image space.
Output Format
array((height, width, 4), dtype=<np.float32>)
The components of each entry in the 2D array represent four different values encoded as floating point values:
x: motion distance in the horizontal axis (image width) with movement to the left of the image being positive and movement to the right being negative.
y: motion distance in the vertical axis (image height) with movement towards the top of the image being positive and movement to the bottom being negative.
z: unused
w: unused
Example
import asyncio
import omni.replicator.core as rep
async def test_motion_vectors():
# Add an object to look at
cone = rep.create.cone()
# Add motion to object
cone_prim = cone.get_output_prims()["prims"][0]
cone_prim.GetAttribute("xformOp:translate").Set((-100, 0, 0), time=0.0)
cone_prim.GetAttribute("xformOp:translate").Set((100, 50, 0), time=10.0)
camera = rep.create.camera()
render_product = rep.create.render_product(camera, (512, 512))
motion_vectors_anno = rep.annotators.get("MotionVectors")
motion_vectors_anno.attach(render_product)
# Take a step to render the initial state (no movement yet)
await rep.orchestrator.step_async()
# Capture second frame (now the timeline is playing)
await rep.orchestrator.step_async()
data = motion_vectors_anno.get_data()
print(data.shape, data.dtype, data.reshape(-1, 4).min(axis=0), data.reshape(-1, 4).max(axis=0))
    # (512, 512, 4) float32 [-93.80073 -1. -1. -1. ] [ 0. 23.450201 1. 1. ]
asyncio.ensure_future(test_motion_vectors())
Note
The values represent motion relative to camera space.
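Per-pixel motion magnitude in pixels follows directly from the x and y channels. A numpy sketch with made-up values (not annotator output):

```python
import numpy as np

# Hypothetical motion_vectors output: (height, width, 4) float32,
# with only the x and y channels carrying data.
mv = np.zeros((4, 4, 4), dtype=np.float32)
mv[1, 2, 0] = 3.0    # 3 px of leftward motion (positive x)
mv[1, 2, 1] = -4.0   # 4 px of motion toward the bottom (negative y)

magnitude = np.hypot(mv[..., 0], mv[..., 1])  # Euclidean length per pixel
```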
bounding_box_2d_tight_fast
Outputs the tight 2d bounding box of each entity with semantics in the camera’s viewport. Tight bounding boxes bound only the visible pixels of entities. Completely occluded entities are omitted.
Initialization Parameters
semanticTypes: List of allowed semantic types. For example, if semanticTypes is [“class”], only the bounding boxes for prims with semantics of type “class” will be retrieved.
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
("semanticId", "<u4"),
("x_min", "<i4"),
("y_min", "<i4"),
("x_max", "<i4"),
("y_max", "<i4"),
("occlusionRatio", "<f4"),
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding_box_2d_tight_fast bounds only visible pixels.
Example
import omni.replicator.core as rep
async def test_bbox_2d_tight_fast():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_2d_tight_fast = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_tight_fast", init_params={"semanticTypes": ["prim"]})
bbox_2d_tight_fast.attach(rp)
await rep.orchestrator.step_async()
data = bbox_2d_tight_fast.get_data()
print(data)
# {
# 'data': array([
# (0, 442, 198, 581, 357, 0.),
# (1, 245, 94, 368, 220, 0.38),
# dtype=[('semanticId', '<u4'),
# ('x_min', '<i4'),
# ('y_min', '<i4'),
# ('x_max', '<i4'),
# ('y_max', '<i4'),
# ('occlusionRatio', '<f4')]),
# 'info': {
# 'bboxIds': array([0, 1], dtype=uint32),
# 'idToLabels': {'0': {'prim': 'cone'}, '1': {'prim': 'sphere'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform']}
# }
import asyncio
asyncio.ensure_future(test_bbox_2d_tight_fast())
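The structured "data" array can be filtered with plain numpy field access. A sketch using the record layout shown above (the two records are illustrative values):

```python
import numpy as np

# Two hypothetical records in the bounding-box structured layout.
dtype = np.dtype([("semanticId", "<u4"),
                  ("x_min", "<i4"), ("y_min", "<i4"),
                  ("x_max", "<i4"), ("y_max", "<i4"),
                  ("occlusionRatio", "<f4")])
boxes = np.array([(0, 442, 198, 581, 357, 0.0),
                  (1, 245, 94, 368, 220, 0.38)], dtype=dtype)

# Keep boxes that are at most 25% occluded, then compute their pixel areas.
visible = boxes[boxes["occlusionRatio"] <= 0.25]
areas = (visible["x_max"] - visible["x_min"]) * (visible["y_max"] - visible["y_min"])
```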
bounding_box_2d_tight
Outputs the tight 2d bounding box of each entity with semantics in the camera’s viewport. Tight bounding boxes bound only the visible pixels of entities. Completely occluded entities are omitted.
Initialization Parameters
semanticTypes: List of allowed semantic types. For example, if semanticTypes is [“class”], only the bounding boxes for prims with semantics of type “class” will be retrieved.
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
("semanticId", "<u4"),
("x_min", "<i4"),
("y_min", "<i4"),
("x_max", "<i4"),
("y_max", "<i4"),
("occlusionRatio", "<f4"),
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding_box_2d_tight bounds only visible pixels.
Example
import omni.replicator.core as rep
async def test_bbox_2d_tight():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_2d_tight = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_tight", init_params={"semanticTypes": ["prim"]})
bbox_2d_tight.attach(rp)
await rep.orchestrator.step_async()
data = bbox_2d_tight.get_data()
print(data)
# {
# 'data': array([
# (0, 442, 198, 581, 357, 0.),
# (1, 245, 94, 368, 220, 0.38),
# dtype=[('semanticId', '<u4'),
# ('x_min', '<i4'),
# ('y_min', '<i4'),
# ('x_max', '<i4'),
#            ('y_max', '<i4'),
#            ('occlusionRatio', '<f4')]),
# 'info': {
# 'bboxIds': array([0, 1], dtype=uint32),
# 'idToLabels': {'0': {'prim': 'cone'}, '1': {'prim': 'sphere'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform']}
# }
import asyncio
asyncio.ensure_future(test_bbox_2d_tight())
bounding_box_2d_loose_fast
Outputs loose 2d bounding box of each entity with semantics in the camera’s field of view. Loose bounding boxes bound the entire entity regardless of occlusions.
Initialization Parameters
semanticTypes: List of allowed semantic types. For example, if semanticTypes is [“class”], only the bounding boxes for prims with semantics of type “class” will be retrieved.
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
("semanticId", "<u4"),
("x_min", "<i4"),
("y_min", "<i4"),
("x_max", "<i4"),
("y_max", "<i4"),
("occlusionRatio", "<f4"),
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding_box_2d_loose_fast will produce the loose 2d bounding box of any prim in the viewport, whether it is partially or fully occluded.
Example
import omni.replicator.core as rep
async def test_bbox_2d_loose_fast():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_2d_loose_fast = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_loose_fast", init_params={"semanticTypes": ["prim"]})
bbox_2d_loose_fast.attach(rp)
await rep.orchestrator.step_async()
data = bbox_2d_loose_fast.get_data()
print(data)
# {
# 'data': array([
# (0, 442, 198, 581, 357, 0.),
# (1, 245, 92, 375, 220, 0.38),
# dtype=[('semanticId', '<u4'),
# ('x_min', '<i4'),
# ('y_min', '<i4'),
# ('x_max', '<i4'),
#            ('y_max', '<i4'),
#            ('occlusionRatio', '<f4')]),
# 'info': {
# 'bboxIds': array([0, 1], dtype=uint32),
# 'idToLabels': {'0': {'prim': 'cone'}, '1': {'prim': 'sphere'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform']}
# }
import asyncio
asyncio.ensure_future(test_bbox_2d_loose_fast())
bounding_box_2d_loose
Outputs loose 2d bounding box of each entity with semantics in the camera’s field of view. Loose bounding boxes bound the entire entity regardless of occlusions.
Initialization Parameters
semanticTypes: List of allowed semantic types. For example, if semanticTypes is [“class”], only the bounding boxes for prims with semantics of type “class” will be retrieved.
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
("semanticId", "<u4"),
("x_min", "<i4"),
("y_min", "<i4"),
("x_max", "<i4"),
("y_max", "<i4"),
("occlusionRatio", "<f4"),
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding_box_2d_loose will produce the loose 2d bounding box of any prim in the viewport, whether it is partially or fully occluded.
Example
import omni.replicator.core as rep
async def test_bbox_2d_loose():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_2d_loose = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_loose", init_params={"semanticTypes": ["prim"]})
bbox_2d_loose.attach(rp)
await rep.orchestrator.step_async()
data = bbox_2d_loose.get_data()
print(data)
# {
# 'data': array([
# (0, 442, 198, 581, 357, 0.),
# (1, 245, 92, 375, 220, 0.38),
# dtype=[('semanticId', '<u4'),
# ('x_min', '<i4'),
# ('y_min', '<i4'),
# ('x_max', '<i4'),
#            ('y_max', '<i4'),
#            ('occlusionRatio', '<f4')]),
# 'info': {
# 'bboxIds': array([0, 1], dtype=uint32),
# 'idToLabels': {'0': {'prim': 'cone'}, '1': {'prim': 'sphere'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform']}
# }
import asyncio
asyncio.ensure_future(test_bbox_2d_loose())
bounding_box_3d_360
Outputs the 3D bounding box of each entity with semantics for the entire world, including outside the sensor’s field of view.
Initialization Parameters
None
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
('semanticId', '<u4'), # Semantic ID of the entity
('x_min', '<f4'), # Minimum bound in x axis in local reference frame (in world units)
('y_min', '<f4'), # Minimum bound in y axis in local reference frame (in world units)
('z_min', '<f4'), # Minimum bound in z axis in local reference frame (in world units)
('x_max', '<f4'), # Maximum bound in x axis in local reference frame (in world units)
('y_max', '<f4'), # Maximum bound in y axis in local reference frame (in world units)
('z_max', '<f4'), # Maximum bound in z axis in local reference frame (in world units)
('transform', '<f4', (4, 4)), # Local to world transformation matrix (transforms the bounds from local frame to world frame)
('occlusionRatio', '<f4'), # Occlusion (visible pixels / total pixels), where `0.0` is fully visible and `1.0` is fully occluded. See additional notes below.
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding boxes are retrieved regardless of occlusion.
bounding box dimensions (<axis>_min, <axis>_max) are expressed in stage units.
occlusionRatio can only provide valid values for prims composed of a single mesh. Multi-mesh labelled prims will return a value of -1, indicating that no occlusion value is available.
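The local-frame bounds plus the transform field give world-space box corners. A numpy sketch using the row-major local-to-world layout from the example below; the bounds and transform are illustrative values:

```python
import numpy as np

# Local-frame bounds and row-major local-to-world transform for one
# hypothetical record (the translation sits in the last row).
x_min, y_min, z_min = -50.0, -50.0, -50.0
x_max, y_max, z_max = 50.0, 50.0, 50.0
transform = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0, 0.0],
                      [100.0, 0.0, 0.0, 1.0]])

# Eight local corners as homogeneous row vectors, transformed as local @ T.
corners = np.array([[x, y, z, 1.0]
                    for x in (x_min, x_max)
                    for y in (y_min, y_max)
                    for z in (z_min, z_max)])
world_corners = (corners @ transform)[:, :3]  # translated by (100, 0, 0)
```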
Example
import omni.replicator.core as rep
async def test_bbox_3d_360():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
cube = rep.create.cube(semantics=[("prim", "cube")], position=(1000, 1000, 1000))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_3d_360 = rep.AnnotatorRegistry.get_annotator("bounding_box_3d_360")
bbox_3d_360.attach(rp)
await rep.orchestrator.step_async()
data = bbox_3d_360.get_data()
print(data)
# {
# 'data': array([
# (0, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 100., 0., 0., 1.]], 0. ),
#         (1, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [-100., 0., 0., 1.]], 0.38),
# (2, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [1000., 1000., 1000., 1.]], nan)],
# dtype=[
# ('semanticId', '<u4'),
# ('x_min', '<f4'),
# ('y_min', '<f4'),
# ('z_min', '<f4'),
# ('x_max', '<f4'),
# ('y_max', '<f4'),
# ('z_max', '<f4'),
# ('transform', '<f4', (4, 4)),
# ('occlusionRatio', '<f4')]),
# 'info': {
# 'bboxIds': array([0, 1, 2], dtype=uint32),
# 'idToLabels': {0: {'prim': 'cone'}, 1: {'prim': 'sphere'}, 2: {'prim': 'cube'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform', '/Replicator/Cube_Xform']
# }
# }
import asyncio
asyncio.ensure_future(test_bbox_3d_360())
bounding_box_3d_fast
Outputs 3D bounding box of each entity with semantics for entities within the sensor’s field of view.
Initialization Parameters
None
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
("semanticId", "<u4"),
("x_min", "<f4"),
("y_min", "<f4"),
("z_min", "<f4"),
("x_max", "<f4"),
("y_max", "<f4"),
("z_max", "<f4"),
("transform", "<f4", (4, 4)),
("occlusionRatio", "<f4"),
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding boxes are retrieved regardless of occlusion.
bounding box dimensions (<axis>_min, <axis>_max) are expressed in stage units.
Example
import omni.replicator.core as rep
async def test_bbox_3d_fast():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
cube = rep.create.cube(semantics=[("prim", "cube")], position=(1000, 1000, 1000))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_3d_fast = rep.AnnotatorRegistry.get_annotator("bounding_box_3d_fast")
bbox_3d_fast.attach(rp)
await rep.orchestrator.step_async()
data = bbox_3d_fast.get_data()
print(data)
# {
# 'data': array([
# (0, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 100., 0., 0., 1.]], 0. ),
# (1, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [-100., 0., 0., 1.]], 0.38)],
# dtype=[
# ('semanticId', '<u4'),
# ('x_min', '<f4'),
# ('y_min', '<f4'),
# ('z_min', '<f4'),
# ('x_max', '<f4'),
# ('y_max', '<f4'),
# ('z_max', '<f4'),
# ('transform', '<f4', (4, 4)),
# ('occlusionRatio', '<f4')]),
# 'info': {
#     'bboxIds': array([0, 1], dtype=uint32),
#     'idToLabels': {0: {'prim': 'cone'}, 1: {'prim': 'sphere'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform']
# }
# }
import asyncio
asyncio.ensure_future(test_bbox_3d_fast())
bounding_box_3d
Outputs 3D bounding box of each entity with semantics for entities within the sensor’s field of view.
Initialization Parameters
None
Output Format
The bounding box annotator returns a dictionary with the bounds and semantic id found under the “data” key, while other information is under the “info” key: “idToLabels”, “bboxIds” and “primPaths”.
{
"data": np.dtype(
[
("semanticId", "<u4"),
("x_min", "<f4"),
("y_min", "<f4"),
("z_min", "<f4"),
("x_max", "<f4"),
("y_max", "<f4"),
("z_max", "<f4"),
("transform", "<f4", (4, 4)),
("occlusionRatio", "<f4"),
]
),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from integer semantic ID to a comma delimited list of associated semantics
"bboxIds": [<bbox_id_0>, ..., <bbox_id_n>], # ID specific to bounding box annotators allowing easy mapping between different bounding box annotators.
"primPaths": [<prim_path_0>, ... <prim_path_n>], # prim path tied to each bounding box
}
}
Note
bounding boxes are retrieved regardless of occlusion.
bounding box dimensions (<axis>_min, <axis>_max) are expressed in stage units.
Example
import omni.replicator.core as rep
async def test_bbox_3d():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
cube = rep.create.cube(semantics=[("prim", "cube")], position=(1000, 1000, 1000))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
bbox_3d = rep.AnnotatorRegistry.get_annotator("bounding_box_3d")
bbox_3d.attach(rp)
await rep.orchestrator.step_async()
data = bbox_3d.get_data()
print(data)
# {
# 'data': array([
# (0, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 100., 0., 0., 1.]], 0. ),
# (1, -50., -50., -50., 50., 50., 50., [[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [-100., 0., 0., 1.]], 0.38)],
# dtype=[
# ('semanticId', '<u4'),
# ('x_min', '<f4'),
# ('y_min', '<f4'),
# ('z_min', '<f4'),
# ('x_max', '<f4'),
# ('y_max', '<f4'),
# ('z_max', '<f4'),
# ('transform', '<f4', (4, 4)),
# ('occlusionRatio', '<f4')]),
# 'info': {
#     'bboxIds': array([0, 1], dtype=uint32),
#     'idToLabels': {0: {'prim': 'cone'}, 1: {'prim': 'sphere'}},
# 'primPaths': ['/Replicator/Cone_Xform', '/Replicator/Sphere_Xform']
# }
# }
import asyncio
asyncio.ensure_future(test_bbox_3d())
instance_id_segmentation_fast
Development segmentation node. Instance segmentation that returns the renderer instance ID; used for debugging.
instance_id_segmentation
Development segmentation node. Instance segmentation that returns the renderer instance ID; used for debugging.
instance_segmentation_fast
Outputs instance segmentation of each entity in the camera’s viewport. Only semantically labelled entities are returned.
Initialization Parameters
colorize (bool): whether to output colorized instance segmentation or raw instance IDs.
Output Format
{
"data": array((height, width), dtype=<np.uint32>),
"info": {
        "idToLabels": {<instanceId>: <prim_path>}, # mapping from instance ID to the instance's prim path
        "idToSemantics": {<instanceId>: <semantic_labels>}, # mapping from instance ID to a comma delimited list of associated semantics
}
}
Note
Two prims with the same semantic labels but different USD paths will have different IDs.
If two prims have no semantic labels of their own and share a parent that has semantic labels, they will be classified as the same instance.
The semantic labels of an entity are its own labels plus all the labels it inherits from its parents; labels of the same type are concatenated, separated by a comma. For example, if an entity has a semantic label of [{“class”: “cube”}] and its parent has [{“class”: “rectangle”}], the final semantic labels of that entity will be [{“class”: “rectangle, cube”}].
import omni.replicator.core as rep
async def test_instance_segmentation_fast():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
instance_seg = rep.AnnotatorRegistry.get_annotator("instance_segmentation_fast")
instance_seg.attach(rp)
await rep.orchestrator.step_async()
data = instance_seg.get_data()
print(data)
# {
# 'data': array([[0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# ...,
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
#                 [0, 0, 0, ..., 0, 0, 0]], dtype=uint32),
# 'info': {
#     'idToLabels': {0: 'BACKGROUND', 1: 'UNLABELLED', 3: '/Replicator/Sphere_Xform', 2: '/Replicator/Cone_Xform', 4: '/Replicator/Cube_Xform'},
#     'idToSemantics': {0: {'class': 'BACKGROUND'}, 1: {'class': 'UNLABELLED'}, 3: {'prim': 'sphere'}, 2: {'prim': 'cone'}, 4: {'shape': 'boxy'}}
# }
# }
import asyncio
asyncio.ensure_future(test_instance_segmentation_fast())
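Given the ID image in "data" and the idToLabels mapping in "info", a per-prim binary mask is a single comparison. A sketch with illustrative values (not real annotator output):

```python
import numpy as np

# Hypothetical instance_segmentation_fast outputs: per-pixel instance IDs
# plus the instance-ID -> prim-path mapping from the "info" dict.
id_image = np.array([[0, 0, 2],
                     [2, 3, 3]], dtype=np.uint32)
id_to_labels = {0: "BACKGROUND",
                2: "/Replicator/Cone_Xform",
                3: "/Replicator/Sphere_Xform"}

# Invert the mapping and extract a boolean mask for one prim.
path_to_id = {path: i for i, path in id_to_labels.items()}
cone_mask = id_image == path_to_id["/Replicator/Cone_Xform"]
```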
instance_segmentation
Outputs instance segmentation of each entity in the camera’s viewport. Only semantically labelled entities are returned.
Initialization Parameters
colorize (bool): whether to output colorized instance segmentation or raw instance IDs.
Output Format
{
"data": array((height, width), dtype=<np.uint32>),
"info": {
        "idToLabels": {<instanceId>: <prim_path>}, # mapping from instance ID to the instance's prim path
        "idToSemantics": {<instanceId>: <semantic_labels>}, # mapping from instance ID to a comma delimited list of associated semantics
}
}
Note
Two prims with the same semantic labels but different USD paths will have different IDs.
If two prims have no semantic labels of their own and share a parent that has semantic labels, they will be classified as the same instance.
The semantic labels of an entity are its own labels plus all the labels it inherits from its parents; labels of the same type are concatenated, separated by a comma. For example, if an entity has a semantic label of [{“class”: “cube”}] and its parent has [{“class”: “rectangle”}], the final semantic labels of that entity will be [{“class”: “rectangle, cube”}].
import omni.replicator.core as rep
async def test_instance_segmentation():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
instance_seg = rep.AnnotatorRegistry.get_annotator("instance_segmentation")
instance_seg.attach(rp)
await rep.orchestrator.step_async()
data = instance_seg.get_data()
print(data)
# {
# 'data': array([[0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# ...,
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0]],
# 'info': {
# 'idToLabels': {'idToLabels': {'0': 'BACKGROUND', '1': 'UNLABELLED', '3': '/Replicator/Sphere_Xform', '2': '/Replicator/Cone_Xform', '4': '/Replicator/Cube_Xform'},
# 'idToSemantics': {'0': {'class': 'BACKGROUND'}, '1': {'class': 'UNLABELLED'}, '3': {'prim': 'sphere'}, '2': {'prim': 'cone'}, '4': {'shape': 'boxy'}}
# }
# }
import asyncio
asyncio.ensure_future(test_instance_segmentation())
semantic_segmentation
Outputs semantic segmentation of each entity in the camera’s field of view that has semantic labels.
Initialization Parameters
colorize (bool): whether to output colorized semantic segmentation (True) or raw semantic IDs (False).
Output Format
{
"data": array((height, width), dtype=<np.uint32>),
"info": {
"idToLabels": {<semanticId>: <semantic_labels>}, # mapping from semantic ID to a comma delimited list of associated semantics
}
}
- data (semantic segmentation array):
If colorize is set to True, the image will be a 2D array of type np.uint8 with 4 channels. The uint32 array can be converted using semantic_seg_data["data"].view(np.uint8).reshape(height, width, -1). Different colors represent different semantic labels.
If colorize is set to False, the image will be a 2D array of type np.uint32 with 1 channel, which is the semantic ID of the entities.
- info:
idToLabels
If colorize is set to True, it will be the mapping from color to semantic labels.
If colorize is set to False, it will be the mapping from semantic ID to semantic labels.
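A minimal sketch of the conversion above (NumPy assumed; the buffer here is a zero-filled stand-in for the annotator's colorized output):

```python
import numpy as np

height, width = 4, 4
# Stand-in for the colorized semantic segmentation output (RGBA packed into uint32)
seg = np.zeros((height, width), dtype=np.uint32)
# Reinterpret each uint32 pixel as 4 uint8 channels
rgba = seg.view(np.uint8).reshape(height, width, -1)  # (height, width, 4) uint8
```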
Note
The semantic labels of an entity are its own labels plus all the labels it inherits from its parents; labels of the same type are concatenated, separated by a comma. For example, if an entity has a semantic label of [{“class”: “cube”}] and its parent has [{“class”: “rectangle”}], the final semantic labels of that entity will be [{“class”: “rectangle, cube”}].
import omni.replicator.core as rep
async def test_semantic_segmentation():
cone = rep.create.cone(semantics=[("prim", "cone")], position=(100, 0, 0))
sphere = rep.create.sphere(semantics=[("prim", "sphere")], position=(-100, 0, 0))
invalid_type = rep.create.cube(semantics=[("shape", "boxy")], position=(0, 100, 0))
cam = rep.create.camera(position=(500,500,500), look_at=cone)
rp = rep.create.render_product(cam, (1024, 512))
semantic_seg = rep.AnnotatorRegistry.get_annotator("semantic_segmentation")
semantic_seg.attach(rp)
await rep.orchestrator.step_async()
data = semantic_seg.get_data()
print(data)
# {
# 'data': array([[0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# ...,
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0],
# [0, 0, 0, ..., 0, 0, 0]],
# 'info': {
# 'idToLabels': {'0': {'class': 'BACKGROUND'}, '2': {'prim': 'cone'}, '3': {'shape': 'boxy'}, '4': {'prim': 'sphere'}}
# }
# }
import asyncio
asyncio.ensure_future(test_semantic_segmentation())
CameraParams
The Camera Parameters annotator returns the camera details for the camera corresponding to the render product to which the annotator is attached.
Data Details
cameraFocalLength: Camera focal length
cameraFocusDistance: Camera focus distance
cameraFStop: Camera fStop value
cameraAperture: Camera horizontal and vertical aperture
cameraApertureOffset: Camera horizontal and vertical aperture offset
renderProductResolution: RenderProduct resolution
cameraModel: Camera model name
cameraViewTransform: Camera to world transformation matrix
cameraProjection: Camera projection matrix
cameraFisheyeNominalWidth: Camera fisheye nominal width
cameraFisheyeNominalHeight: Camera fisheye nominal height
cameraFisheyeOpticalCentre: Camera fisheye optical centre
cameraFisheyeMaxFOV: Camera fisheye maximum field of view
cameraFisheyePolynomial: Camera fisheye polynomial
cameraNearFar: Camera near/far clipping range
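The cameraViewTransform and cameraProjection entries are returned as flat 16-element arrays; a minimal sketch of reshaping one into a 4x4 matrix (NumPy assumed, values illustrative):

```python
import numpy as np

# Flat 16-element view transform as returned by the annotator (values illustrative)
view_flat = np.array([1., 0., 0., 0.,
                      0., 1., 0., 0.,
                      0., 0., 1., 0.,
                      -100., 0., 0., 1.])
view_mat = view_flat.reshape(4, 4)  # row-major 4x4 matrix
```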
Example
import asyncio
import omni.replicator.core as rep
async def test_camera_params():
camera_1 = rep.create.camera()
camera_2 = rep.create.camera(
position=(100, 0, 0),
projection_type="fisheye_polynomial"
)
render_product_1 = rep.create.render_product(camera_1, (1024, 512))
render_product_2 = rep.create.render_product(camera_2, (800, 600))
anno_1 = rep.annotators.get("CameraParams").attach(render_product_1)
anno_2 = rep.annotators.get("CameraParams").attach(render_product_2)
await rep.orchestrator.step_async()
print(anno_1.get_data())
# {'cameraAperture': array([20.95 , 15.29], dtype=float32),
# 'cameraApertureOffset': array([0., 0.], dtype=float32),
# 'cameraFisheyeLensP': array([], dtype=float32),
# 'cameraFisheyeLensS': array([], dtype=float32),
# 'cameraFisheyeMaxFOV': 0.0,
# 'cameraFisheyeNominalHeight': 0,
# 'cameraFisheyeNominalWidth': 0,
# 'cameraFisheyeOpticalCentre': array([0., 0.], dtype=float32),
# 'cameraFisheyePolynomial': array([0., 0., 0., 0., 0.], dtype=float32),
# 'cameraFocalLength': 24.0,
# 'cameraFocusDistance': 400.0,
# 'cameraFStop': 0.0,
# 'cameraModel': 'pinhole',
# 'cameraNearFar': array([1., 1000000.], dtype=float32),
# 'cameraProjection': array([ 2.29, 0. , 0. , 0. ,
# 0. , 4.58, 0. , 0. ,
# 0. , 0. , 0. , -1. ,
# 0. , 0. , 1. , 0. ]),
# 'cameraViewTransform': array([1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1.]),
# 'metersPerSceneUnit': 0.009999999776482582,
# 'renderProductResolution': array([1024, 512], dtype=int32)
# }
print(anno_2.get_data())
# {
# 'cameraAperture': array([20.955 , 15.291], dtype=float32),
# 'cameraApertureOffset': array([0., 0.], dtype=float32),
# 'cameraFisheyeLensP': array([-0., -0.], dtype=float32),
# 'cameraFisheyeLensS': array([-0., -0., 0., -0.], dtype=float32),
# 'cameraFisheyeMaxFOV': 200.0,
# 'cameraFisheyeNominalHeight': 1216,
# 'cameraFisheyeNominalWidth': 1936,
# 'cameraFisheyeOpticalCentre': array([970.9424, 600.375 ], dtype=float32),
# 'cameraFisheyePolynomial': array([0. , 0.002, 0. , 0. , 0. ], dtype=float32),
# 'cameraFocalLength': 24.0,
# 'cameraFocusDistance': 400.0,
# 'cameraFStop': 0.0,
# 'cameraModel': 'fisheyePolynomial',
# 'cameraNearFar': array([1., 1000000.], dtype=float32),
# 'cameraProjection': array([ 2.29, 0. , 0. , 0. ,
# 0. , 3.05, 0. , 0. ,
# 0. , 0. , 0. , -1. ,
# 0. , 0. , 1. , 0. ]),
# 'cameraViewTransform': array([1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., -100., 0., 0., 1.]),
# 'metersPerSceneUnit': 0.009999999776482582,
# 'renderProductResolution': array([800, 600], dtype=int32)
# }
asyncio.ensure_future(test_camera_params())
skeleton_data
The skeleton data annotator outputs pose information about the skeletons in the scene view.
Output Format
Parameter | Data Type | Description
---|---|---
animationVariant | list(<num_skeletons>, dtype=str) | Animation variant name for each skeleton
assetPath | list(<num_skeletons>, dtype=str) | Asset path for each skeleton
globalTranslations | array((<num_joints>, 3), dtype=float32) | Global translation of each joint
globalTranslationsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
inView | array(<num_skeletons>, dtype=bool) | If the skeleton is in view of the camera
jointOcclusions | array(<num_joints>, dtype=bool) | For each joint, True if joint is occluded, otherwise False
jointOcclusionsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
localRotations | array((<num_joints>, 4), dtype=float32) | Local rotation of each joint
localRotationsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
numSkeletons | <num_skeletons> | Number of skeletons in scene
occlusionTypes | list(str) | For each joint, the type of occlusion
occlusionTypesSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
restGlobalTranslations | array((<num_joints>, 3), dtype=float32) | Global translation of each joint at rest
restGlobalTranslationsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
restLocalRotations | array((<num_joints>, 4), dtype=float32) | Local rotation of each joint at rest
restLocalRotationsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
restLocalTranslations | array((<num_joints>, 3), dtype=float32) | Local translation of each joint at rest
restLocalTranslationsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
skeletonJoints | list(str) | List of skeleton joints, encoded as a string
skeletonParents | array(<num_joints>, dtype=int32) | Which joint is the parent of the index, -1 is root
skeletonParentsSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
skelName | list(<num_skeletons>, dtype=str) | Name of each skeleton
skelPath | list(<num_skeletons>, dtype=str) | Path of each skeleton prim
translations2d | array((<num_joints>, 2), dtype=float32) | Screen space joint position in pixels
translations2dSizes | array(<num_skeletons>, dtype=int32) | Size of each set of joints per skeleton
This annotator returns additional data as a single string held in a dictionary with the key skeleton_data
for backwards compatibility with the
original implementation of this annotator. Use eval(data["skeleton_data"])
to extract the attributes from this string.
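Because eval executes arbitrary code, ast.literal_eval is a safer way to parse this string when it contains only Python literals (a sketch with an illustrative, truncated stand-in string):

```python
import ast

# Illustrative stand-in for data["skeleton_data"]
skeleton_str = "{'numSkeletons': 1, 'skelName': ['Worker']}"
# Parses Python literals only; no code execution
attrs = ast.literal_eval(skeleton_str)
```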
Example
Below is an example script that outputs 10 images with skeleton pose annotation.
import asyncio
import omni.replicator.core as rep
# Define paths for the character
PERSON_SRC = 'omniverse://localhost/NVIDIA/Assets/Characters/Reallusion/Worker/Worker.usd'
async def test_skeleton_data():
# Human Model
person = rep.create.from_usd(PERSON_SRC, semantics=[('class', 'person')])
# Area to scatter cubes in
area = rep.create.cube(scale=2, position=(0.0, 0.0, 100.0), visible=False)
# Create the camera and render product
camera = rep.create.camera(position=(25, -421.0, 182.0), rotation=(77.0, 0.0, 3.5))
render_product = rep.create.render_product(camera, (1024, 1024))
def randomize_spheres():
spheres = rep.create.sphere(scale=0.1, count=100)
with spheres:
rep.randomizer.scatter_3d(area)
return spheres.node
rep.randomizer.register(randomize_spheres)
with rep.trigger.on_frame(interval=10, max_execs=5):
rep.randomizer.randomize_spheres()
# Attach annotator
skeleton_anno = rep.annotators.get("skeleton_data")
skeleton_anno.attach(render_product)
await rep.orchestrator.step_async()
data = skeleton_anno.get_data()
print(data)
# {
# 'animationVariant': ['None'],
# 'assetPath': ['Bones/Worker.StandingDiscussion_LookingDown_M.usd'],
# 'globalTranslations': array([[ 0. , 0. , 0. ], ..., [-21.64, 2.58, 129.8 ]], dtype=float32),
# 'globalTranslationsSizes': array([101], dtype=int32),
# 'inView': array([ True]),
# 'jointOcclusions': array([ True, False, ..., False, False]),
# 'jointOcclusionsSizes': array([101], dtype=int32),
# 'localRotations': array([[ 1. , 0. , 0. , 0. ], ..., [ 1. , 0. , -0.09, -0. ]], dtype=float32),
# 'localRotationsSizes': array([101], dtype=int32),
# 'numSkeletons': 1,
# 'occlusionTypes': ["['BACKGROUND', 'None', ..., 'None', 'None']"],
# 'occlusionTypesSizes': array([101], dtype=int32),
# 'restGlobalTranslations': array([[ 0. , 0. , 0. ], ..., [-31.86, 8.96, 147.72]], dtype=float32),
# 'restGlobalTranslationsSizes': array([101], dtype=int32),
# 'restLocalRotations': array([[ 1. , 0. , 0. , 0. ], ..., [ 1. , 0. , 0. , -0. ]], dtype=float32),
# 'restLocalRotationsSizes': array([101], dtype=int32),
# 'restLocalTranslations': array([[ 0. , 0. , 0. ], ..., [ -0. , 12.92, 0.01]], dtype=float32),
# 'restLocalTranslationsSizes': array([101], dtype=int32),
# 'skeletonJoints': ["['RL_BoneRoot', 'RL_BoneRoot/Hip', ..., 'RL_BoneRoot/Hip/Waist/Spine01/Spine02/R_Clavicle/R_Upperarm/R_UpperarmTwist01/R_UpperarmTwist02']"],
# 'skeletonParents': array([-1, 0, 1, ..., 97, 78, 99], dtype=int32),
# 'skeletonParentsSizes': array([101], dtype=int32),
# 'skelName': ['Worker'],
# 'skelPath': ['/Replicator/Ref_Xform/Ref/ManRoot/Worker/Worker'],
# 'translations2d': array([[513.94, 726.03],
# [514.42, 480.42],
# [514.42, 480.42],
# ...,
# [499.45, 450.9 ],
# [466.3 , 354.6 ],
# [455.09, 388.56]], dtype=float32),
# 'translations2dSizes': array([101], dtype=int32),
# 'skeletonData': ... # string data representation for backward compatibility
# }
asyncio.ensure_future(test_skeleton_data())
pointcloud
Outputs a 2D array of shape (N, 3) representing the points sampled on the surface of the prims in the viewport, where N is the number of points.
Output Format
The pointcloud annotator returns positions of the points found under the “data” key, while other information is under the “info” key: “pointRgb”, “pointNormals”, “pointSemantic” and “pointInstance”.
{
'data': array([...], shape=(<num_points>, 3), dtype=float32),
'info': {
'pointNormals': array([...], shape=(<num_points> * 4), dtype=float32),
'pointRgb': array([...], shape=(<num_points> * 4), dtype=uint8),
'pointSemantic': array([...], shape=(<num_points>), dtype=uint8),
'pointInstance': array([...], shape=(<num_points>), dtype=uint8),
}
}
Data Details
Point positions are in the world space.
Sample resolution is determined by the resolution of the render product.
Note
To get the mapping from semantic IDs to semantic labels, use the pointcloud annotator together with the semantic_segmentation annotator and extract the idToLabels data from the latter.
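A minimal sketch of that pairing, using stand-in outputs (the dictionaries below are illustrative placeholders, not live annotator data):

```python
# Stand-in for pointcloud annotator output
pc_data = {"data": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
           "info": {"pointSemantic": [2, 2]}}
# Stand-in for the semantic_segmentation annotator's info
seg_info = {"idToLabels": {"2": {"class": "cube"}}}

# Map each point's semantic ID to its semantic labels
labels = [seg_info["idToLabels"][str(sid)]
          for sid in pc_data["info"]["pointSemantic"]]
```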
Example 1
The pointcloud annotator captures prims seen by the camera and samples points on their surfaces, based on the resolution of the render product attached to the camera. In addition to the sampled point positions, it outputs the rgb, normal and semantic id values associated with the prim each point belongs to. Prims without any valid semantic labels are ignored.
import asyncio
import omni.replicator.core as rep
async def test_pointcloud():
# Pointcloud only capture prims with valid semantics
W, H = (1024, 512)
cube = rep.create.cube(position=(0, 0, 0), semantics=[("class", "cube")])
camera = rep.create.camera(position=(200., 200., 200.), look_at=cube)
render_product = rep.create.render_product(camera, (W, H))
pointcloud_anno = rep.annotators.get("pointcloud")
pointcloud_anno.attach(render_product)
await rep.orchestrator.step_async()
pc_data = pointcloud_anno.get_data()
print(pc_data)
# {
# 'data': array([[-49.96, 50. , -49.28],
# [-49.74, 50. , -49.51],
# [-49.51, 50. , -49.74],
# ...,
# [ 50. , -49.33, 27.51],
# [ 50. , -49.67, 27.08],
# [ 50. , -50. , 26.65]], dtype=float32),
# 'info': {
# 'pointNormals': array([ 0., 1., -0., ..., 0., -0., 1.], dtype=float32),
# 'pointRgb': array([154, 154, 154, ..., 24, 24, 255], dtype=uint8),
# 'pointSemantic': array([2, 2, 2, ..., 2, 2, 2], dtype=uint8),
# 'pointInstance': array([1, 1, 1, ..., 1, 1, 1], dtype=uint8)}
# }
asyncio.ensure_future(test_pointcloud())
Example 2
In this example, we demonstrate a scenario where multiple camera captures are taken to produce a more complete pointcloud, utilizing the excellent open3d library to export a coloured .ply file.
import os
import asyncio
import omni.replicator.core as rep
import open3d as o3d
import numpy as np
async def test_pointcloud():
# Pointcloud only capture prims with valid semantics
cube = rep.create.cube(semantics=[("class", "cube")])
camera = rep.create.camera()
render_product = rep.create.render_product(camera, (1024, 512))
pointcloud_anno = rep.annotators.get("pointcloud")
pointcloud_anno.attach(render_product)
# Camera positions to capture the cube
camera_positions = [(500, 500, 0), (-500, -500, 0), (500, 0, 500), (-500, 0, -500)]
with rep.trigger.on_frame(max_execs=len(camera_positions)):
with camera:
rep.modify.pose(position=rep.distribution.sequence(camera_positions), look_at=cube) # make the camera look at the cube
# Accumulate points
points = []
points_rgb = []
for _ in range(len(camera_positions)):
await rep.orchestrator.step_async()
pc_data = pointcloud_anno.get_data()
points.append(pc_data["data"])
points_rgb.append(pc_data["info"]["pointRgb"].reshape(-1, 4)[:, :3])
# Output pointcloud as .ply file
ply_out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "out")
os.makedirs(ply_out_dir, exist_ok=True)
pc_data = np.concatenate(points)
pc_rgb = np.concatenate(points_rgb)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pc_data)
pcd.colors = o3d.utility.Vector3dVector(pc_rgb)
o3d.io.write_point_cloud(os.path.join(ply_out_dir, "pointcloud.ply"), pcd)
asyncio.ensure_future(test_pointcloud())
ReferenceTime
Outputs the reference time corresponding to the render and associated annotations.
Output Format
The reference time annotator returns a numerator and denominator representing the time corresponding to the render and associated annotations.
{
'referenceTimeNumerator': int,
'referenceTimeDenominator': int,
}
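The rational representation avoids floating-point drift; converting it to seconds is a simple division (a sketch, guarding against a zero denominator):

```python
def reference_time_seconds(numerator: int, denominator: int) -> float:
    # Convert the rational reference time to seconds
    if denominator == 0:
        raise ValueError("referenceTimeDenominator must be non-zero")
    return numerator / denominator
```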
Example
import asyncio
import omni.replicator.core as rep
async def test_reference_time():
W, H = (1024, 512)
camera = rep.create.camera()
render_product = rep.create.render_product(camera, (W, H))
ref_time_anno = rep.annotators.get("ReferenceTime")
ref_time_anno.attach(render_product)
await rep.orchestrator.step_async()
ref_time_data = ref_time_anno.get_data()
print(ref_time_data)
# {
# 'referenceTimeNumerator': <numerator>,
# 'referenceTimeDenominator': <denominator>,
# }
asyncio.ensure_future(test_reference_time())
CrossCorrespondence
The cross correspondence annotator outputs a 2D array representing the camera optical flow map of the camera’s viewport against a reference viewport.
To enable the cross correspondence annotation, the camera attached to the render product annotated with cross correspondence must have the attribute crossCameraReferenceName set to the (unique) name (not path) of a second camera (itself attached to a second render product). The Projection Type of the two cameras needs to be of type fisheyePolynomial (Camera –> Fisheye Lens –> Projection Type –> fisheyePolynomial).
Output Format
The Cross Correspondence annotator produces the cross correspondence between pixels seen from two cameras.
The components of each entry in the 2D array represent four different values encoded as floating point values:
x: dx - difference to the x value of the corresponding pixel in the reference viewport. This value is normalized to [-1.0, 1.0]
y: dy - difference to the y value of the corresponding pixel in the reference viewport. This value is normalized to [-1.0, 1.0]
z: occlusion mask - boolean signifying that the pixel is occluded or truncated in one of the cross referenced viewports. The floating point value represents a boolean (1.0 = True, 0.0 = False)
w: geometric occlusion calculated - boolean signifying whether the pixel can be tested as having occluded geometry (e.g. no occlusion testing is performed on missed rays) (1.0 = True, 0.0 = False)
array((height, width, 4), dtype=<np.float32>)
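A minimal sketch of decoding the four channels (NumPy assumed; the array is a zero-filled stand-in for the annotator output, and the pixel scaling assumes the normalized [-1.0, 1.0] range spans the full image extent):

```python
import numpy as np

H, W = 8, 8
# Stand-in for cross correspondence output: (height, width, 4) float32
cc = np.zeros((H, W, 4), dtype=np.float32)
cc[..., 3] = 1.0  # mark all pixels as having occlusion testing performed

dx_px = cc[..., 0] * (W / 2.0)  # normalized dx -> pixel offset
dy_px = cc[..., 1] * (H / 2.0)  # normalized dy -> pixel offset
# Keep only pixels that were tested and found unoccluded
valid = (cc[..., 2] == 0.0) & (cc[..., 3] == 1.0)
```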
Example
import asyncio
import omni.replicator.core as rep
from pxr import Sdf
async def test_cross_correspondence():
# Add an object to look at
rep.create.cone()
# Add stereo camera pair
stereo = rep.create.stereo_camera(position=(20, 0, 300), projection_type="fisheye_polynomial", stereo_baseline=20)
# Add cross correspondence attribute
stereo_L_prim = stereo.get_output_prims()["prims"][0].GetChildren()[0].GetChildren()[0]
stereo_L_prim.CreateAttribute("crossCameraReferenceName", Sdf.ValueTypeNames.String)
# Set attribute to refer to second camera name - beware of scenes with multiple cameras that share the same name!
stereo_L_prim.GetAttribute("crossCameraReferenceName").Set("StereoCam_R")
render_products = rep.create.render_product(stereo, (512, 512))
# Add annotator to left render product
anno = rep.annotators.get("cross_correspondence")
anno.attach(render_products[0])
await rep.orchestrator.step_async()
data = anno.get_data()
print(data.shape, data.dtype)
# (512, 512, 4), float32
asyncio.ensure_future(test_cross_correspondence())
Note
Both cameras must have the cameraProjectionType attribute set to fisheyePolynomial
The annotated camera must have the crossCameraReferenceName attribute set to the name of the second camera
To avoid unexpected results, ensure that the referenced camera has a unique name
MotionVectors
Outputs a 2D array of motion vectors representing the relative motion of a pixel in the camera’s viewport between frames.
The MotionVectors annotator returns the per-pixel motion vectors in image space.
Output Format
array((height, width, 4), dtype=<np.float32>)
The components of each entry in the 2D array represent four different values encoded as floating point values:
x: motion distance in the horizontal axis (image width) with movement to the left of the image being positive and movement to the right being negative.
y: motion distance in the vertical axis (image height) with movement towards the top of the image being positive and movement to the bottom being negative.
z: unused
w: unused
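A minimal sketch of turning the x/y channels into a per-pixel motion magnitude (NumPy assumed; the array here is a constant-filled stand-in for the annotator output):

```python
import numpy as np

# Stand-in for MotionVectors output: (height, width, 4) float32
mv = np.zeros((8, 8, 4), dtype=np.float32)
mv[..., 0] = 3.0  # horizontal motion component
mv[..., 1] = 4.0  # vertical motion component

# Per-pixel motion length in pixels; z and w channels are unused
magnitude = np.hypot(mv[..., 0], mv[..., 1])
```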
Example
import asyncio
import omni.replicator.core as rep
async def test_motion_vectors():
# Add an object to look at
cone = rep.create.cone()
# Add motion to object
cone_prim = cone.get_output_prims()["prims"][0]
cone_prim.GetAttribute("xformOp:translate").Set((-100, 0, 0), time=0.0)
cone_prim.GetAttribute("xformOp:translate").Set((100, 50, 0), time=10.0)
camera = rep.create.camera()
render_product = rep.create.render_product(camera, (512, 512))
motion_vectors_anno = rep.annotators.get("MotionVectors")
motion_vectors_anno.attach(render_product)
# Take a step to render the initial state (no movement yet)
await rep.orchestrator.step_async()
# Capture second frame (now the timeline is playing)
await rep.orchestrator.step_async()
data = motion_vectors_anno.get_data()
print(data.shape, data.dtype, data.reshape(-1, 4).min(axis=0), data.reshape(-1, 4).max(axis=0))
    # (512, 512, 4), float32, [-93.80073 -1. -1. -1. ] [ 0. 23.450201 1. 1. ]
asyncio.ensure_future(test_motion_vectors())
Note
The values represent motion relative to camera space.
Attribute
Outputs the value of an attribute attached to one or more prims.
The Attribute annotator retrieves the attribute value(s) of one or more prims at the time of render. On attach, the attribute specified will be automatically pushed to Fabric to ensure it can be retrieved. Note that the output type of the attribute must be identical in all specified prims.
Output Format
array((attribute_size * number_of_prims,))
The Attribute annotator retrieves the data from the attribute and flattens it, creating a 1D array of shape (attribute_size * number_of_prims,).
Currently it can retrieve the attribute with following Sdf data types:
Int, IntArray, Int2, Int2Array, Int3, Int3Array
Float, FloatArray, Float2, Float2Array, Float3, Float3Array
Double, DoubleArray, Double2, Double2Array, Double3, Double3Array
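Since the output is flattened across prims, it can be viewed per prim with a reshape when the prim count is known from the init_params (a sketch; the values are illustrative):

```python
import numpy as np

# Flattened Attribute annotator output for 2 prims, each a Float2Array of 2 entries
data = np.array([12.34, 56.78, 56.78, 12.34,
                 12.34, 56.78, 56.78, 12.34], dtype=np.float32)
num_prims = 2
per_prim = data.reshape(num_prims, -1)  # one row of attribute values per prim
```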
Example
import asyncio
import omni.replicator.core as rep
from pxr import Sdf
async def test_attribute_anno():
cube1 = rep.create.cube(as_mesh=False)
cube2 = rep.create.cube(as_mesh=False)
cube_prim_path = "/Replicator/Cube_Xform/Cube"
cube_prim_path_2 = "/Replicator/Cube_Xform_01/Cube"
for path in [cube_prim_path, cube_prim_path_2]:
stage = omni.usd.get_context().get_stage()
cube_prim = stage.GetPrimAtPath(path)
cube_prim.CreateAttribute("float2Arr", Sdf.ValueTypeNames.Float2Array).Set([(12.34, 56.78), (56.78, 12.34)])
await omni.kit.app.get_app().next_update_async()
rp = rep.create.render_product("/OmniverseKit_Persp", (1024, 1024))
fabric_reader_anno = rep.annotators.get(
"Attribute",
init_params={
"prims": [cube_prim_path, cube_prim_path_2],
"attribute": "float2Arr",
},
)
fabric_reader_anno.attach(rp)
await rep.orchestrator.step_async()
data = fabric_reader_anno.get_data()
print(data, data.shape, data.dtype)
# [12.34 56.78 56.78 12.34 12.34 56.78 56.78 12.34] (8,) float32
asyncio.ensure_future(test_attribute_anno())
RT Annotators
RT Annotators are only available in RayTracedLighting rendering mode (RTX - Real-Time)
Example
import asyncio
import omni.replicator.core as rep
async def test_pt_anno():
    # Set render mode to RayTracedLighting (RTX Real-Time)
    rep.settings.set_render_rtx_realtime()
# Create an interesting scene
red_diffuse = rep.create.material_omnipbr(diffuse=(1, 0, 0.2), roughness=1.0)
metallic_reflective = rep.create.material_omnipbr(roughness=0.01, metallic=1.0)
glow = rep.create.material_omnipbr(emissive_color=(1.0, 0.5, 0.4), emissive_intensity=100000.0)
rep.create.cone(material=metallic_reflective)
rep.create.cube(position=(100, 50, -100), material=red_diffuse)
rep.create.sphere(position=(-100, 50, 100), material=glow)
ground = rep.create.plane(scale=(100, 1, 100), position=(0, -50, 0))
# Attach render product
W, H = (1024, 512)
camera = rep.create.camera(position=(400., 400., 400.), look_at=ground)
render_product = rep.create.render_product(camera, (W, H))
anno = rep.annotators.get("SmoothNormal")
anno.attach(render_product)
await rep.orchestrator.step_async()
data = anno.get_data()
print(data.shape, data.dtype)
# (512, 1024, 4), float32
asyncio.ensure_future(test_pt_anno())
SmoothNormal
Output Format
np.ndtype(np.float32) # shape: (H, W, 4)
BumpNormal
Output Format
np.ndtype(np.float32) # shape: (H, W, 4)
AmbientOcclusion
Output Format
np.ndtype(np.float16) # shape: (H, W, 4)
Motion2d
Output Format
np.ndtype(np.float32) # shape: (H, W, 4)
DiffuseAlbedo
Output Format
np.ndtype(np.uint8) # shape: (H, W, 4)
SpecularAlbedo
Output Format
np.ndtype(np.float16) # shape: (H, W, 4)
Roughness
Output Format
np.ndtype(np.uint8) # shape: (H, W, 4)
DirectDiffuse
Output Format
np.ndtype(np.float16) # shape: (H, W, 4)
DirectSpecular
Output Format
np.ndtype(np.float16) # shape: (H, W, 4)
Reflections
Output Format
np.ndtype(np.float32) # shape: (H, W, 4)
IndirectDiffuse
Output Format
np.ndtype(np.float16) # shape: (H, W, 4)
DepthLinearized
Output Format
np.ndtype(np.float32) # shape: (H, W, 1)
EmissionAndForegroundMask
Output Format
np.ndtype(np.float16) # shape: (H, W, 1)
PathTracing Annotators
PathTracing Annotators are only available in PathTracing rendering mode (RTX - Interactive). In addition, the following carb settings must be set on app launch:
rtx-transient.aov.enableRtxAovs = true
rtx-transient.aov.enableRtxAovsSecondary = true
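These are carb settings, so one way to apply them at launch (assuming a Kit-based app started from the command line; the app file name is illustrative) is via setting overrides:

```shell
# Enable the path-tracing AOVs at app launch (app file name is illustrative)
./kit my_app.kit \
  --/rtx-transient/aov/enableRtxAovs=true \
  --/rtx-transient/aov/enableRtxAovsSecondary=true
```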
Example
import asyncio
import omni.replicator.core as rep
async def test_pt_anno():
# Set rendermode to PathTracing
rep.settings.set_render_pathtraced()
# Create an interesting scene
red_diffuse = rep.create.material_omnipbr(diffuse=(1, 0, 0.2), roughness=1.0)
metallic_reflective = rep.create.material_omnipbr(roughness=0.01, metallic=1.0)
glow = rep.create.material_omnipbr(emissive_color=(1.0, 0.5, 0.4), emissive_intensity=100000.0)
rep.create.cone(material=metallic_reflective)
rep.create.cube(position=(100, 50, -100), material=red_diffuse)
rep.create.sphere(position=(-100, 50, 100), material=glow)
ground = rep.create.plane(scale=(100, 1, 100), position=(0, -50, 0))
# Attach render product
W, H = (1024, 512)
camera = rep.create.camera(position=(400., 400., 400.), look_at=ground)
render_product = rep.create.render_product(camera, (W, H))
anno = rep.annotators.get("PtGlobalIllumination")
anno.attach(render_product)
await rep.orchestrator.step_async()
data = anno.get_data()
print(data.shape, data.dtype)
# (512, 1024, 4), float16
asyncio.ensure_future(test_pt_anno())
PtDirectIllumation
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtGlobalIllumination
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtReflections
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtRefractions
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtSelfIllumination
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtBackground
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtWorldNormal
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtWorldPos
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtZDepth
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtVolumes
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtDiffuseFilter
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtReflectionFilter
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtRefractionFilter
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte0
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte1
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte2
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte3
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte4
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte5
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte6
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
PtMultiMatte7
Output Format
np.ndtype(np.float16) # Shape: (Height, Width, 4)
Annotator Exceptions
- exception omni.replicator.core.annotators.AnnotatorError(msg=None)
Bases:
Exception
Base exception for errors raised by annotators
Annotator Utils
- class omni.replicator.core.utils.annotator_utils.AnnotatorCache
Cache static annotator parameters to improve performance
Accessing OmniGraph attributes is relatively expensive. Cache static attributes where possible to improve performance in certain scenarios.
- annotators = {}
- classmethod get_data(annotator_id: int, node_params: Dict, annotator_params: Tuple, target_device: str, **kwargs)
Retrieve annotator array data.
- Parameters
annotator_id – Unique annotator identifier.
node_params – Node parameters.
annotator_params – Annotator parameters as an AnnotatorParams object.
target_device – Target device onto which to return the data (e.g. “cpu”, “cuda:0”)
- classmethod get_common_params(annotator_id: int, node_params: Dict, annotator_params: Tuple, target_device: str, use_cache: bool = False)
Get annotator common parameters.
Retrieves common parameters that rarely change so they can be cached for faster data retrieval.
- Parameters
annotator_id – Unique annotator identifier.
node_params – Node parameters.
annotator_params – Annotator parameters as an AnnotatorParams object.
target_device – Target device onto which to return the data (e.g. “cpu”, “cuda:0”)
use_cache – If True, cache common params for faster data retrieval. Defaults to False.
- omni.replicator.core.utils.annotator_utils.get_extra_data(params: Dict)
Get all other annotator outputs, excluding array data.
- Parameters
params – Annotator parameters
- omni.replicator.core.utils.annotator_utils.get_annotator_data(node: Node, annotator_params: Tuple, from_inputs: bool = False, device: str = 'cpu', annotator_id: Optional[Tuple[str]] = None, do_copy: bool = False, use_legacy_structure: bool = True) Dict
Retrieve data from annotator node.
- Parameters
node – The OmniGraph annotator node object from which to retrieve data.
annotator_params – AnnotatorParams tuple specifying annotator metadata.
from_inputs – If True, annotator data is extracted from the node inputs. Defaults to False.
device – Specifies the device onto which to return array data. Valid values: ["cpu", "cuda", "cuda:<index>"]. Defaults to "cpu".
annotator_id – Unique annotator identifier. Defaults to None.
do_copy – If True, arrays are copied before being returned. This copy ensures that the data lifetime can be managed by downstream processes. Defaults to False.
use_legacy_structure – Specifies the output structure to return. If True, the legacy structure is returned. The legacy structure changes depending on the data being returned:
only array data: <array>
only non-array data: {<anno_attribute_0>: <anno_output_0>, <anno_attribute_n>: <anno_output_n>}
array data and non-array data: {“data”: <array>, “info”: {<anno_attribute_0>: <anno_output_0>, <anno_attribute_n>: <anno_output_n>}}
If False, a consistent data structure is returned in all cases: {<anno_attribute_0>: <anno_output_0>, <anno_attribute_n>: <anno_output_n>}
Defaults to True.
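The difference between the two structures can be illustrated with plain dictionaries (the attribute names and array below are hypothetical stand-ins, not real annotator output):

```python
import numpy as np

# Hypothetical annotator output: one array attribute plus two scalar attributes.
array = np.zeros((480, 640, 4), dtype=np.uint8)
extra = {"width": 640, "height": 480}

# use_legacy_structure=True, with both array data and non-array data present:
legacy = {"data": array, "info": extra}

# use_legacy_structure=False, same attributes returned in one flat dictionary:
consistent = {"data": array, **extra}

assert legacy["info"]["width"] == 640
assert consistent["width"] == 640
```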
- omni.replicator.core.utils.annotator_utils.script_node_check(nodes: list = [])
Prompt the user before enabling a script node
Call this function from any node capable of executing arbitrary scripts, within the node’s initialize() call.
Note
Only one prompt attempt is made per session
- omni.replicator.core.utils.annotator_utils.check_should_run_script()
Augmentations
- class omni.replicator.core.annotators.Augmentation(node_type_id: str, data_out_shape: Optional[Tuple[int]] = None, documentation: Optional[str] = None, **kwargs)
Augmentation class
Augmentations are defined by a Python function or Warp kernel which manipulates the array held in a required data_in argument. Augmentations can be applied to any annotator which outputs a data attribute along with corresponding width and height attributes.
- Parameters
augmentation –
Python function or Warp kernel describing the augmentation operation. Details on required and supported optional arguments are as follows:
data_in: Required by both Python and Warp augmentations; populated with the input data array.
data_out: Required by Warp kernels; holds the output array. Not supported in conjunction with Python functions.
data_out_shape: Required for Warp kernels if the shape of the output array does not match that of the input array. An axis value of -1 indicates that the axis matches the input’s axis dimension.
seed: Optional argument that can be used with both Python and Warp functions. If set to None or <0, will use Replicator’s global seed together with the node identifier to produce a repeatable unique seed. When used with Warp kernels, the seed is used to initialize a random number generator that produces a new integer seed value for each Warp kernel call.
data_out_shape – Specifies the shape of the output array if the augmentation is specified as a Warp kernel and the output array is a different shape than that of the input array. An axis value of -1 indicates that the axis is the same size as the corresponding axis in the input array.
documentation – Optionally document augmentation functionality, input parameters and output format.
kwargs – Optional parameters specifying the parameters with which to initialize the augmentation
Example
>>> import omni.replicator.core as rep >>> import warp as wp >>> @wp.kernel ... def rgba_to_rgb(data_in: wp.array3d(dtype=wp.uint8), data_out: wp.array3d(dtype=wp.uint8)): ... i, j = wp.tid() ... data_out[i, j, 0] = data_in[i, j, 0] ... data_out[i, j, 1] = data_in[i, j, 1] ... data_out[i, j, 2] = data_in[i, j, 2] >>> augmentation = rep.annotators.Augmentation.from_function(rgba_to_rgb, data_out_shape=(-1, -1, 3))
- omni.replicator.core.annotators.augment(source_annotator: Union[Annotator, str], augmentation: Union[str, Augmentation], data_out_shape: Optional[Tuple] = None, name: Optional[str] = None, device: Optional[str] = None, **kwargs) Annotator
Create an augmented annotator
- Parameters
source_annotator – Annotator to be augmented
augmentation – Augmentation to be applied to the source annotator. Can be specified as an Augmentation, the name of a registered augmentation, or the node type id of an OmniGraph node to be used as an augmentation.
data_out_shape – Specifies the shape of the output array if the augmentation is specified as a Warp kernel and the output array is a different shape than that of the input array. An axis value of -1 indicates that the axis is the same size as the corresponding axis in the input array.
name – Optional augmentation name. The augmentation name serves as the key in a writer payload dictionary. If set to None, the augmentation will take the name of the source annotator. Defaults to None.
device – Optionally specify the target device. If the augmentation is a Warp kernel, the device will automatically default to "cuda".
kwargs – Optional parameters specifying the parameters with which to initialize the augmentation.
Example
>>> import omni.replicator.core as rep >>> import warp as wp >>> @wp.kernel ... def rgba_to_rgb(data_in: wp.array3d(dtype=wp.uint8), data_out: wp.array3d(dtype=wp.uint8)): ... i, j = wp.tid() ... data_out[i, j, 0] = data_in[i, j, 0] ... data_out[i, j, 1] = data_in[i, j, 1] ... data_out[i, j, 2] = data_in[i, j, 2] >>> rgb_anno = rep.annotators.augment( ... source_annotator="rgb", ... augmentation=rep.annotators.Augmentation.from_function(rgba_to_rgb, data_out_shape=(-1, -1, 3)) ... )
- omni.replicator.core.annotators.augment_compose(source_annotator: Union[Annotator, str], augmentations: List[Union[Augmentation, str]], name: Optional[str] = None) Annotator
Compose an augmented Annotator from multiple augmentation operations
Chain one or more augmentation operations together to be applied to a source annotator.
- Parameters
source_annotator – Annotator to be augmented
augmentations – List of augmentations to be applied in sequence to the source annotator
name – Optional augmentation name. The augmentation name serves as the key in a writer payload dictionary. If set to None, the augmentation will take the name of the source annotator. Defaults to None
Example
>>> import omni.replicator.core as rep >>> import numpy as np >>> import warp as wp >>> @wp.kernel ... def rgba_to_rgb(data_in: wp.array3d(dtype=wp.uint8), data_out: wp.array3d(dtype=wp.uint8)): ... i, j = wp.tid() ... data_out[i, j, 0] = data_in[i, j, 0] ... data_out[i, j, 1] = data_in[i, j, 1] ... data_out[i, j, 2] = data_in[i, j, 2] >>> def rgb_to_greyscale(data_in): ... r, g, b = data_in[..., 0], data_in[..., 1], data_in[..., 2] ... return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8) >>> greyscale_anno = rep.annotators.augment_compose( ... source_annotator="rgb", ... augmentations=[ ... rep.annotators.Augmentation.from_function(rgba_to_rgb, data_out_shape=(-1, -1, 3)), ... rep.annotators.Augmentation.from_function(rgb_to_greyscale), ... ] ... )
Default Augmentations
RgbaToRgb
Remove the alpha (last) channel from an RGBA image
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 3): RGB image
Example
import asyncio
import omni.replicator.core as rep

async def test_rgba_to_rgb():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("RgbaToRgb")
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 3)

asyncio.ensure_future(test_rgba_to_rgb())
AdjustSigmoid
Perform sigmoid correction on an image
A form of contrast adjustment; transforms each pixel I (normalized to be between 0 and 1) of an image according to the equation O = 1 / (1 + exp(gain * (cutoff - I))) [1] [2].
Initialization Parameters
cutoff (float): Shifts the characteristic sigmoid curve horizontally
gain (float): Multiplier in the exponent of the sigmoid function.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
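The transform can be sketched in NumPy; the following is a minimal illustration of the equation applied to a uint8 RGBA image (not the annotator's GPU implementation; leaving alpha untouched is an assumption here):

```python
import numpy as np

def adjust_sigmoid(image: np.ndarray, cutoff: float = 0.5, gain: float = 10.0) -> np.ndarray:
    """Sigmoid contrast correction on a uint8 RGBA image; alpha left untouched."""
    rgb = image[..., :3].astype(np.float32) / 255.0       # normalize to [0, 1]
    out = 1.0 / (1.0 + np.exp(gain * (cutoff - rgb)))     # sigmoid correction
    result = image.copy()
    result[..., :3] = (out * 255.0).astype(np.uint8)
    return result

img = np.full((480, 640, 4), 128, dtype=np.uint8)
out = adjust_sigmoid(img, cutoff=0.2, gain=20.0)
print(out.shape)  # (480, 640, 4)
```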
Example
import asyncio
import omni.replicator.core as rep

async def test_adjust_sigmoid():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("AdjustSigmoid", cutoff=0.2, gain=20.0)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_adjust_sigmoid())

References
- 1
Gustav J. Braun, “Image Lightness Rescaling Using Sigmoidal Contrast Enhancement Functions”, http://markfairchild.org/PDFs/PAP07.pdf
- 2
Stéfan van der Walt, Johannes L. Schönberger, Juan Nunez-Iglesias, François Boulogne, Joshua D. Warner, Neil Yager, Emmanuelle Gouillart, Tony Yu and the scikit-image contributors. scikit-image: Image processing in Python. PeerJ 2:e453 (2014), https://doi.org/10.7717/peerj.453
Brightness
Modify the brightness of an image.
Initialization Parameters
brightness_factor (float): Value between [-100, 100] that determines the brightness modification.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_brightness():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("Brightness", brightness_factor=5.0)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_brightness())
SpeckleNoise
Add speckle noise to an RGBA image
Provided a noise scaling factor sigma, add speckle noise to each pixel of the image.
Initialization Parameters
sigma (float): determines the amount of noise to add. A larger value produces a noisier image.
seed (int): Seed to use as initialization for the pseudo-random number generator. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
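Speckle noise is conventionally modelled as multiplicative Gaussian noise scaled by sigma; the NumPy sketch below follows that convention (an illustration of the effect, not the annotator's exact kernel; leaving alpha untouched is an assumption):

```python
import numpy as np

def speckle_noise(image: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add multiplicative Gaussian (speckle) noise to the RGB channels."""
    rng = np.random.default_rng(seed)
    rgb = image[..., :3].astype(np.float32)
    noise = rng.normal(0.0, sigma, size=rgb.shape)            # zero-mean, scale sigma
    noisy = np.clip(rgb + rgb * noise, 0, 255).astype(np.uint8)
    result = image.copy()
    result[..., :3] = noisy
    return result

img = np.full((480, 640, 4), 128, dtype=np.uint8)
out = speckle_noise(img, sigma=0.2)
print(out.shape)  # (480, 640, 4)
```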
Example
import asyncio
import omni.replicator.core as rep

async def test_speckle_noise():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("SpeckleNoise", sigma=0.2)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_speckle_noise())
ShotNoise
Add shot noise to an RGBA image
Provided a noise scaling factor sigma, add shot noise to each pixel of the image.
Initialization Parameters
sigma (float): Determines the amount of noise to add. A larger value produces a noisier image.
seed (int): Seed to use as initialization for the pseudo-random number generator. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_shot_noise():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("ShotNoise", sigma=0.2)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_shot_noise())
RgbToHsv
Converts an RGB image to HSV
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_rgb_to_hsv():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("RgbToHsv")
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_rgb_to_hsv())
HsvToRgb
Converts an HSV image to RGB
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_hsv_to_rgb():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment_compose(["RgbToHsv", "HsvToRgb"])
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_hsv_to_rgb())
GlassBlur
Applies a Glass Blur augmentation to an RGBA image.
To simulate glass blur, each pixel is swapped with another sampled from within a window whose size is determined by the delta parameter.
Initialization Parameters
delta (int): determines the maximum window size from which to sample a pixel. A larger value produces a blurrier effect.
seed (int): Seed to use as initialization for the pseudo-random number generator. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_glass_blur():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("GlassBlur", delta=5)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data.shape)  # (480, 640, 4)

asyncio.ensure_future(test_glass_blur())
BackgroundRand
Randomize and apply a background image.
Given a folder path, valid images are randomly selected and applied as the background to the current image.
Initialization Parameters
folderpath (str): Path to directory containing images to be used as backgrounds.
seed (int): Seed to use as initialization for the pseudo-random number generator for the sampler controlling image selection. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_background_rand():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("BackgroundRand", folderpath=rep.example.TEXTURES_DIR)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_background_rand())
Contrast
Adjust the contrast of an image.
Initialization Parameters
contrastFactor (float): Positive float value specifying how much to adjust the contrast. A value of 0.0 produces a solid grey image, 1.0 results in the original input image, and 2.0 increases the contrast by a factor of 2.0.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
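The described behavior of contrastFactor matches a linear interpolation about the image's mean grey value; the NumPy sketch below works under that assumption (illustrative, not the annotator's implementation):

```python
import numpy as np

def adjust_contrast(image: np.ndarray, contrast_factor: float) -> np.ndarray:
    """Blend between a solid grey image (factor 0.0) and the input (factor 1.0)."""
    rgb = image[..., :3].astype(np.float32)
    grey = rgb.mean()                                        # solid grey reference
    out = np.clip(grey + contrast_factor * (rgb - grey), 0, 255)
    result = image.copy()
    result[..., :3] = out.astype(np.uint8)
    return result

# Two-tone test image: top half 200, bottom half 100
img = np.zeros((480, 640, 4), dtype=np.uint8)
img[..., :3] = 100
img[:240, ..., :3] = 200
out = adjust_contrast(img, contrast_factor=2.0)
print(out[0, 0, 0], out[479, 0, 0])  # 250 50
```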
Example
import asyncio
import omni.replicator.core as rep

async def test_adjust_contrast():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("Contrast", contrastFactor=1.5)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_adjust_contrast())
Conv2d
Apply a 2D convolution to an image
Initialization Parameters
kernel (float[]): The kernel to convolve with an image. The kernel is provided as a flattened array of size [N * N] where N is the kernel size.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import numpy as np
import omni.replicator.core as rep

async def test_conv2d():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    # Create a gaussian blur kernel
    gaussian_blur = np.array([
        [0.0625, 0.1250, 0.0625],
        [0.1250, 0.2500, 0.1250],
        [0.0625, 0.1250, 0.0625],
    ])
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("Conv2d", kernel=gaussian_blur)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_conv2d())
CropResize
Crop, resize and translate an image.
Initialization Parameters
cropFactor (float): Value between >0.0 and 1.0 specifying the amount of the image to crop. A value of 1.0 indicates no crop and a value of 0.5 will crop the image by half.
offsetFactor (float[2]): Values between -1.0 and 1.0 indicating the translation offset factor in (vertical, horizontal) directions. A value of (-1.0, -1.0) will translate the cropped image to the bottom-most and left-most position, and a value of (1.0, 1.0) to the top-most and right-most. Note that if cropFactor is set to 1.0, no translation is possible.
seed (int): Seed to use as initialization for the pseudo-random number generator for the sampler controlling image selection. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_crop_resize():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("CropResize", cropFactor=0.5, offsetFactor=(-0.2, 0.2))
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([ 2., 0., 320., 0., 2., -720., 0., 0., 1.])}

asyncio.ensure_future(test_crop_resize())
CutMix
Randomly apply a rectangular patch from another image onto the input image.
The augmentation takes in a random rectangular patch from another image and superimposes it on the input image. The rectangular patch is encoded in a binary mask where the pixels belonging to the rectangle have a mask value of 1 and 0 otherwise.
Initialization Parameters
folderpath (str): Path to directory containing images to be used as patches.
seed (int): Seed to use as initialization for the pseudo-random number generator for the sampler controlling image selection. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_cut_mix():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("CutMix", folderpath=rep.example.TEXTURES_DIR)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_cut_mix())
ImageBlend
Blend an input image with a sampled blend image.
Initialization Parameters
blendFactor (float): Blend amount. A value of 0.0 will return the original image and a value of 1.0 will return the blend image.
folderpath (str): Path to directory containing images to be used for blending.
seed (int): Seed to use as initialization for the pseudo-random number generator for the sampler controlling image selection. Seed is expected to be a non-negative integer.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_image_blend():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("ImageBlend", blendFactor=0.2, folderpath=rep.example.TEXTURES_DIR)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_image_blend())
MotionBlur
Apply a motion blur effect to an input image.
Initialization Parameters
motionAngle (float): Angle in degrees where 0 indicates motion towards the left, 90 towards the bottom, and 270 towards the top.
strength (float): Motion blur strength from -1 to 1.
kernelSize (int): Size of the conv kernel which controls the size of blur that will be produced.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_motion_blur():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("MotionBlur", motionAngle=45.0, strength=0.8, kernelSize=25)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_motion_blur())
Pixellate
Pixellate an input image.
Initialization Parameters
kernelSize (int): Size of the conv kernel which controls how many original pixels get consolidated into a single larger pixel.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_pixellate():
    rp = rep.create.render_product(rep.create.camera(), (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("Pixellate", kernelSize=25)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([1., 0., 0., 0., 1., 0., 0., 0., 1.])}

asyncio.ensure_future(test_pixellate())
Rotate
Rotate an input image.
Initialization Parameters
rotation (float): Clockwise image rotation in degrees. A value of 0.0 corresponds to no rotation.
Input Format - (height, width, 4): RGBA image
Output Format - (height, width, 4): RGBA image
Example
import asyncio
import omni.replicator.core as rep

async def test_rotate():
    camera = rep.create.camera()
    rp = rep.create.render_product(camera, (640, 480))
    augmented_anno = rep.annotators.get("LdrColor", device="cuda").augment("Rotate", rotation=90)
    augmented_anno.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    print(data["data"].shape)  # (480, 640, 4)
    print(data["info"])  # {'xform': array([ 0.0, 1.0, 80.0, -1.0, 0.0, 559.0, 0.0, 0.0, 1.0])}

    # Test bounding box transform based on rotation xform
    # Add a labelled sphere
    with rep.create.sphere(semantics=[("class", "sphere")]):
        rep.modify.pose_camera_relative(camera=camera, render_product=rp, distance=700, horizontal_location=0.2, vertical_location=0.4)
    # Add a labelled cone
    with rep.create.cone(semantics=[("class", "cone")]):
        rep.modify.pose_camera_relative(camera=camera, render_product=rp, distance=500, horizontal_location=-0.2, vertical_location=0.1)
    # Add bounding box annotator
    bbox_2d = rep.annotators.get("bounding_box_2d_tight_fast")
    bbox_2d.attach(rp)
    await rep.orchestrator.step_async()
    data = augmented_anno.get_data()
    xform = data["info"]["xform"]
    bbox_data = bbox_2d.get_data()["data"]
    visualization = rep.tools.colorize_bbox_2d(data["data"].numpy(), bbox_data, xform, draw_rotated_boxes=True)
    from PIL import Image
    Image.fromarray(visualization).save("rotated_bbox.png")

asyncio.ensure_future(test_rotate())
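The 3×3 xform reported in info can be read as a row-major affine matrix over homogeneous pixel coordinates, which is how the bounding-box visualization in the example carries boxes into the rotated image. A small sketch using the matrix values printed in the example (the interpretation as point mapping is an assumption):

```python
import numpy as np

# xform as printed by the Rotate example above (row-major 3x3 affine)
xform = np.array([0.0, 1.0, 80.0, -1.0, 0.0, 559.0, 0.0, 0.0, 1.0]).reshape(3, 3)

def apply_xform(xform: np.ndarray, x: float, y: float) -> tuple:
    """Map a pixel coordinate through the augmentation's affine transform."""
    px = xform @ np.array([x, y, 1.0])
    return (px[0] / px[2], px[1] / px[2])

# A bounding box is carried over by transforming its corners
corners = [(10, 20), (110, 20), (10, 80), (110, 80)]
moved = [apply_xform(xform, x, y) for x, y in corners]
print(moved[0])  # (100.0, 549.0)
```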
Writers
Writers are how to get data from Omniverse Replicator out to disk.
Writer Base class
- class omni.replicator.core.writers.Writer
Base Writer class.
Writers must specify a list of required annotators which will be called during data collection. Annotator data is packaged in a data dictionary of the form <annotator_name>: <annotator_data> and passed to the writer’s write function.
__init__() and write() must be implemented by custom writers that inherit from this class.
An optional on_final_frame() function can be defined to run once data generation is stopped.
- backend
Optionally specify a rep.backends.Backend object. The backend specified here is used to automatically write metadata.
- version
Writer version number.
- annotators
Required list of annotators to attach to writer.
- num_written
Integer that is incremented with every call to write()
- data_structure
Specifies the writer’s output data structure. Valid values: ["legacy", "annotator", "renderProduct"]
annotator: {“annotators”: {<anno>: {<render_product>: <annotator_data>}}}
renderProduct: {“renderProducts”: {<render_product>: {<anno>: <annotator_data>}}}
legacy (multi renderProduct): {<render_product>_<anno>: <annotator_data>}
legacy (single renderProduct): {<anno>: <annotator_data>}
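A minimal custom writer follows the contract above: declare the required annotators, implement __init__() and write(), and optionally on_final_frame(). The sketch below uses a stand-in base class so it runs outside Omniverse; in practice you would inherit from omni.replicator.core.Writer and register the class with the WriterRegistry (registration call assumed from the BasicWriter example below):

```python
# Stand-in for omni.replicator.core.writers.Writer so this sketch runs anywhere;
# the real base class additionally provides attach()/detach() and schedules
# write() calls during data generation.
class Writer:
    def __init__(self):
        self.annotators = []
        self.version = "0.0.1"
        self.num_written = 0

class MyWriter(Writer):
    def __init__(self, output_dir: str):
        super().__init__()
        self.annotators = ["rgb"]        # required annotator list
        self._output_dir = output_dir
        self._frames = []

    def write(self, data: dict):
        # data is keyed by annotator name: {"rgb": <annotator_data>}
        self._frames.append(data["rgb"])
        self.num_written += 1

    def on_final_frame(self):
        print(f"wrote {self.num_written} frames to {self._output_dir}")

w = MyWriter("/tmp/out")
w.write({"rgb": b"\x00" * 16})           # hypothetical payload
w.on_final_frame()                        # prints: wrote 1 frames to /tmp/out
```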
Default Writers
BasicWriter
- class omni.replicator.core.writers_default.BasicWriter(output_dir: Optional[str] = None, s3_bucket: Optional[str] = None, s3_region: Optional[str] = None, s3_endpoint: Optional[str] = None, semantic_types: Optional[List[str]] = None, rgb: bool = False, bounding_box_2d_tight: bool = False, bounding_box_2d_loose: bool = False, semantic_segmentation: bool = False, instance_id_segmentation: bool = False, instance_segmentation: bool = False, distance_to_camera: bool = False, distance_to_image_plane: bool = False, bounding_box_3d: bool = False, occlusion: bool = False, normals: bool = False, motion_vectors: bool = False, camera_params: bool = False, pointcloud: bool = False, pointcloud_include_unlabelled: bool = False, image_output_format: str = 'png', colorize_semantic_segmentation: bool = True, colorize_instance_id_segmentation: bool = True, colorize_instance_segmentation: bool = True, colorize_depth: bool = False, skeleton_data: bool = False, frame_padding: int = 4, semantic_filter_predicate: Optional[str] = None, use_common_output_dir: bool = False, backend: Optional[BaseBackend] = None)
Basic writer capable of writing built-in annotator groundtruth.
- Parameters
output_dir – Output directory string that indicates the directory to save the results.
s3_bucket – The S3 bucket name to write to. If not provided, the disk backend will be used instead. Default: None. This backend requires that AWS credentials are set up in ~/.aws/credentials. See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration
s3_region – If provided, this is the region the S3 bucket will be set to. Default: us-east-1
s3_endpoint – If provided, this endpoint URL will be used instead of the default.
semantic_types – List of semantic types to consider when filtering annotator data. Default: ["class"]
rgb – Boolean value that indicates whether the rgb/LdrColor annotator will be activated and its data written. Default: False.
bounding_box_2d_tight – Boolean value that indicates whether the bounding_box_2d_tight annotator will be activated and its data written. Default: False.
bounding_box_2d_loose – Boolean value that indicates whether the bounding_box_2d_loose annotator will be activated and its data written. Default: False.
semantic_segmentation – Boolean value that indicates whether the semantic_segmentation annotator will be activated and its data written. Default: False.
instance_id_segmentation – Boolean value that indicates whether the instance_id_segmentation annotator will be activated and its data written. Default: False.
instance_segmentation – Boolean value that indicates whether the instance_segmentation annotator will be activated and its data written. Default: False.
distance_to_camera – Boolean value that indicates whether the distance_to_camera annotator will be activated and its data written. Default: False.
distance_to_image_plane – Boolean value that indicates whether the distance_to_image_plane annotator will be activated and its data written. Default: False.
bounding_box_3d – Boolean value that indicates whether the bounding_box_3d annotator will be activated and its data written. Default: False.
occlusion – Boolean value that indicates whether the occlusion annotator will be activated and its data written. Default: False.
normals – Boolean value that indicates whether the normals annotator will be activated and its data written. Default: False.
motion_vectors – Boolean value that indicates whether the motion_vectors annotator will be activated and its data written. Default: False.
camera_params – Boolean value that indicates whether the camera_params annotator will be activated and its data written. Default: False.
pointcloud – Boolean value that indicates whether the pointcloud annotator will be activated and its data written. Default: False.
pointcloud_include_unlabelled – If True, the pointcloud annotator will capture any prim in the camera’s perspective, no matter whether it has semantics or not. If False, only prims with semantics will be captured. Defaults to False.
image_output_format – String that indicates the format of saved RGB images. Default: "png"
colorize_semantic_segmentation – If True, semantic segmentation is converted to an image where semantic IDs are mapped to colors and saved as a uint8 4-channel PNG image. If False, the output is saved as a uint32 PNG image. Defaults to True.
colorize_instance_id_segmentation – If True, instance id segmentation is converted to an image where instance IDs are mapped to colors and saved as a uint8 4-channel PNG image. If False, the output is saved as a uint32 PNG image. Defaults to True.
colorize_instance_segmentation – If True, instance segmentation is converted to an image where instances are mapped to colors and saved as a uint8 4-channel PNG image. If False, the output is saved as a uint32 PNG image. Defaults to True.
colorize_depth – If True, an additional PNG depth image is output for visualization. Defaults to False.
frame_padding – Pad the frame number with leading zeroes. Default: 4
semantic_filter_predicate – A string specifying a semantic filter predicate as a disjunctive normal form of semantic types and labels.
- Examples:
“typeA : labelA & !labelB | labelC , typeB: labelA ; typeC: labelD”
“typeA : * ; * : labelA”
use_common_output_dir – If True, output for each annotator coming from multiple render products is saved under a common directory with the render product as the filename prefix (e.g. <render_product_name>_<annotator_name>_<sequence>.<format>). If False, multiple render product outputs are placed into their own directory (e.g. <render_product_name>/<annotator_name>_<sequence>.<format>). Setting is ignored if using the writer with a single render product. Defaults to False.
.backend – Optionally pass a backend to use. If specified, output_dir and s3_<> arguments may be omitted. If both are provided, the backends will be grouped.
Example
>>> import omni.replicator.core as rep >>> import carb >>> camera = rep.create.camera() >>> render_product = rep.create.render_product(camera, (1024, 1024)) >>> writer = rep.WriterRegistry.get("BasicWriter") >>> tmp_dir = carb.tokens.get_tokens_interface().resolve("${temp}/rgb") >>> writer.initialize(output_dir=tmp_dir, rgb=True) >>> writer.attach([render_product]) >>> rep.orchestrator.run()
FPSWriter
- class omni.replicator.core.writers_default.FPSWriter
Record Writer FPS
Writer that can be attached to record and print out writer FPS. Typically attached together with another writer.
Note
Writer does not write any data.
KittiWriter
- class omni.replicator.core.writers_default.KittiWriter(output_dir: str, s3_bucket: Optional[str] = None, s3_region: Optional[str] = None, s3_endpoint: Optional[str] = None, semantic_types: Optional[List[str]] = None, omit_semantic_type: bool = False, bbox_height_threshold: int = 25, partly_occluded_threshold: float = 0.5, fully_visible_threshold: float = 0.95, renderproduct_idxs: Optional[List[tuple]] = None, mapping_path: Optional[str] = None, mapping_dict: Optional[dict] = None, colorize_instance_segmentation: bool = False, semantic_filter_predicate: Optional[str] = None, use_kitti_dir_names: bool = False)
Writer outputting data in the KITTI annotation format: http://www.cvlibs.net/datasets/kitti/
Note
Development work to provide full support is ongoing.
Supported Annotations:
- RGB
- Object Detection (partial 2D support, see notes)
- Depth
- Semantic Segmentation
- Instance Segmentation
- Parameters
output_dir – Output directory to which KITTI annotations will be saved.
semantic_types – List of semantic types to consider. If None, only the semantic type "class" is considered.
omit_semantic_type – If True, only record the semantic data (i.e. class: car becomes car).
bbox_height_threshold – The minimum valid bounding box height, in pixels. Value must be a positive integer.
partly_occluded_threshold – Minimum occlusion factor for bounding boxes to be considered partly occluded.
fully_visible_threshold – Minimum occlusion factor for bounding boxes to be considered fully visible.
mapping_path – File path to a JSON file to use as the label-to-color mapping for KITTI. ex: {'car': (155, 255, 74, 255)}. If no mapping_path is supplied, the default semantics specified in the KITTI spec will be used. Note that semantics not specified in the mapping will be labelled as "unlabelled". The mapping may include both "unlabelled" and "background" labels to specify how each is colored when colorize_instance_segmentation is True.
mapping_dict – Dictionary of labels and their colors in (R, G, B, A). ex: {"my_semantic": (12, 7, 83, 255)}. mapping_dict and mapping_path cannot both be specified.
colorize_instance_segmentation – If True, save an additional colorized instance segmentation image to the instance_rgb directory.
use_kitti_dir_names – If True, use standard KITTI directory names: rgb -> image_02, semantic_segmentation -> semantic, instance_segmentation -> instance, object_detection -> label_02
Note
Object Detection
Bounding boxes with a height smaller than 25 pixels are discarded
Supported: bounding box extents, semantic labels
Partial Support: occluded (occlusion is estimated from the area ratio of tight / loose bounding boxes)
Unsupported: alpha, dimensions, location, rotation_y, truncated (all set to the default value of 0.0)
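The note above estimates occlusion from the area ratio of tight to loose bounding boxes, and the writer's thresholds decide how a box is kept or classified. A minimal sketch of how those thresholds could interact, assuming the ratio is interpreted as a visible fraction; `estimate_occlusion` and `classify_box` are hypothetical names, not the writer's internals:

```python
def estimate_occlusion(tight_area, loose_area):
    """Estimate the visible fraction as the area ratio of tight / loose boxes.

    Assumption for illustration: a ratio near 1.0 means the object is
    fully visible, a small ratio means it is largely occluded.
    """
    if loose_area <= 0:
        return 0.0
    return tight_area / loose_area


def classify_box(height_px, tight_area, loose_area,
                 bbox_height_threshold=25,
                 partly_occluded_threshold=0.5,
                 fully_visible_threshold=0.95):
    """Classify a bounding box using the writer's documented thresholds."""
    if height_px < bbox_height_threshold:
        return "discarded"  # boxes shorter than the threshold are dropped
    visible = estimate_occlusion(tight_area, loose_area)
    if visible >= fully_visible_threshold:
        return "fully_visible"
    if visible >= partly_occluded_threshold:
        return "partly_occluded"
    return "largely_occluded"
```

For example, a 40-pixel-tall box whose tight box covers 60% of its loose box would be classified as partly occluded under these defaults.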
COCOWriter
- class omni.replicator.core.writers_default.CocoWriter(output_dir: str, semantic_types: Optional[List[str]] = None, coco_categories: Optional[dict] = None, s3_bucket: Optional[str] = None, s3_region: Optional[str] = None, s3_endpoint: Optional[str] = None, dataset_id: Optional[str] = None, frame_padding: int = 4, image_output_format: str = 'png', coco_license_info: Optional[List[dict]] = None, **kwargs)
Writer outputting data in the COCO annotation format: https://cocodataset.org/#format-data
Note
Development work to provide full support is ongoing.
Supported: Object Detection, Stuff Segmentation, Panoptic Segmentation
Unsupported: Keypoints, Image Captioning, DensePose
- Supported Annotations:
RGB
Object Detection
Semantic Segmentation
- Parameters
output_dir – Output directory to which Coco annotations will be saved.
semantic_types – List of semantic types to consider. If None, only consider semantic types ['class', 'coco', 'stuff', 'thing'].
coco_categories – Dictionary of COCO-compatible labels. Required keys: name, id. If None, will use the built-in COCO labels from https://cocodataset.org. ex: {"semantic_name": {'name': 'semantic_name', 'id': 1234, 'supercategory': 'super_name', 'color': (3, 23, 15), 'isthing': 1}}
s3_bucket – The S3 bucket name to write to. If not provided, the disk backend will be used instead. Default: None. This backend requires that AWS credentials are set up in ~/.aws/credentials. See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration
s3_region – If provided, this is the region the S3 bucket will be set to. Default: us-east-1
s3_endpoint – If provided, this endpoint URL will be used instead of the default.
dataset_id – An identifier to be added to the output file names. If None, a random string is used.
image_output_format – Image filetype to write to disk. Default is PNG.
coco_license_info – List of license information dictionaries to add: [{"id": int, "name": str, "url": str}]. See: https://cocodataset.org/#format-data
Writer Utils
- omni.replicator.core.writers_default.tools.data_to_colour(data)
- omni.replicator.core.writers_default.tools.colorize_distance(distance_data: Union[array, ndarray], near: float = 1e-05, far: float = 100.0)
Convert distance in meters to a grayscale image.
- Parameters
distance_data – Distance data, in meters, returned by the annotator.
near – Near clipping distance, in meters. Defaults to 1e-05.
far – Far clipping distance, in meters. Defaults to 100.0.
- Returns
Data converted to uint8 in the range (0, 255)
- Return type
(ndarray)
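As a rough illustration of the conversion above, assuming a simple linear normalization with clipping between `near` and `far` (the shipped helper's exact mapping may differ):

```python
import numpy as np


def colorize_distance_sketch(distance, near=1e-5, far=100.0):
    """Map metric distance to a uint8 grayscale image.

    Illustrative sketch only: clips to [near, far] and normalizes
    linearly, so near maps to 0 and far maps to 255.
    """
    d = np.clip(np.asarray(distance, dtype=np.float64), near, far)
    normalized = (d - near) / (far - near)  # 0.0 at near, 1.0 at far
    return (normalized * 255).astype(np.uint8)
```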
- omni.replicator.core.writers_default.tools.binary_mask_to_rle(binary_mask)
Convert a binary mask to the RLE format needed by pycocotools.
- Parameters
binary_mask – numpy.array of 0s and 1s representing the current segmentation.
- Returns
Data in RLE format to be used by pycocotools.
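The RLE layout pycocotools expects can be sketched in plain numpy. `binary_mask_to_rle_sketch` below is an illustrative stand-in for the helper above, assuming COCO's uncompressed RLE convention: run lengths counted in column-major (Fortran) order, starting with the count of zeros:

```python
import numpy as np


def binary_mask_to_rle_sketch(binary_mask):
    """Convert a binary mask to COCO-style uncompressed RLE (sketch).

    Counts runs in column-major order; the first count is always the
    number of leading zeros (possibly 0 if the mask starts with a 1).
    """
    flat = np.asarray(binary_mask, dtype=np.uint8).ravel(order="F")
    counts = []
    prev = 0  # RLE starts by counting zeros
    run = 0
    for value in flat:
        if value == prev:
            run += 1
        else:
            counts.append(run)
            prev = value
            run = 1
    counts.append(run)
    return {"counts": counts, "size": list(binary_mask.shape)}
```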
- omni.replicator.core.writers_default.tools.colorize_segmentation(data, labels, mapping=None)
Convert segmentation data into colored image.
- Parameters
data (numpy.array) – data returned by the annotator.
labels (dict) – label data mapping semantic IDs to semantic labels. ex: {"0": {"class": "cube"}, "1": {"class": "sphere"}}
mapping (dict) – mapping from colors to labels used for retrieving each label's color. ex: {(255, 0, 0): {"class": "cube"}, (0, 255, 0): {"class": "sphere"}}
- Returns
Data converted to a uint8 RGBA image and remapped labels
- Return type
Tuple[np.array, dict]
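A simplified sketch of the colorization step, assuming a plain mapping from integer semantic IDs to RGBA tuples (the actual helper works with the label/mapping dictionaries described above):

```python
import numpy as np


def colorize_segmentation_sketch(data, mapping):
    """Paint per-pixel semantic IDs with RGBA colours from ``mapping``.

    ``data`` is an (H, W) array of integer IDs; ``mapping`` maps each ID
    to an (R, G, B, A) tuple. Unmapped IDs stay transparent black.
    """
    h, w = data.shape
    out = np.zeros((h, w, 4), dtype=np.uint8)
    for semantic_id, colour in mapping.items():
        out[data == semantic_id] = colour  # boolean-mask assignment
    return out
```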
- omni.replicator.core.writers_default.tools.colorize_motion_vector(data)
Convert motion vector data into a colored image. The conversion is done by mapping the 3D direction vector to HLS space, which is then converted to RGB.
- Parameters
data (numpy.array) – data returned by the annotator of shape (H, W, 4).
- Returns
Data converted to uint8 RGBA image.
- Return type
(ndarray)
- omni.replicator.core.writers_default.tools.colorize_bbox_2d(image: ndarray, data: ndarray, xform: Optional[ndarray] = None, draw_rotated_boxes: bool = False) ndarray
Colorizes 2D bounding box data for visualization.
- Parameters
image – RGBA Image that bounding boxes are drawn onto.
data – 2D bounding box data from the annotator.
xform – Optional 3x3 transform matrix to apply to the points. The Xform expects height, width ordering.
draw_rotated_boxes – If True, draw bounding boxes with orientation using four corners when transformed using xform. Ignored if xform is None. Defaults to False.
- Returns
Data converted to a uint8 RGB image in which the outline of each bounding box is colored.
- Return type
(ndarray)
- omni.replicator.core.writers_default.tools.colorize_normals(data)
Convert normals data into colored image.
- omni.replicator.core.writers_default.tools.random_colours(N, enable_random=True, num_channels=4)
Generate random colours. Visually distinct colours are generated by linearly spacing the hue channel in HSV space and then converting to RGB space.
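The documented approach (linearly spaced hues in HSV, converted to RGB) can be sketched with the standard library. `random_colours_sketch` is illustrative only and omits the shuffling that the helper's `enable_random` flag presumably controls:

```python
import colorsys


def random_colours_sketch(n, num_channels=4):
    """Generate n visually distinct colours by linearly spacing the hue
    channel in HSV space, then converting to RGB (sketch of the helper's
    documented approach)."""
    colours = []
    for i in range(n):
        hue = i / n  # evenly spaced hues in [0, 1)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        colour = [int(r * 255), int(g * 255), int(b * 255)]
        if num_channels == 4:
            colour.append(255)  # opaque alpha
        colours.append(tuple(colour))
    return colours
```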
Triggers
- omni.replicator.core.trigger.on_frame(interval: int = 1, num_frames: int = 0, name: str = 'on_frame', rt_subframes: int = 1, max_execs: int = 0) ReplicatorItem
Execute on a specific generation frame.
- Parameters
interval – The generation frame interval to execute on.
num_frames – (Will be deprecated) Replaced by max_execs. The number of times to activate the trigger. Generation automatically stops when all triggers have reached their maximum activation number. Note that this determines the number of times that the trigger is activated and not the number of times data is written.
name – The name of the trigger.
rt_subframes – If rendering in RTX Realtime mode, specifies the number of subframes to render in order to reduce artifacts caused by large changes in the scene.
max_execs – The number of times to activate the trigger. Generation automatically stops when all triggers have reached their maximum activation number.
Example
>>> import omni.replicator.core as rep
>>> spheres = rep.create.sphere(count=10, scale=rep.distribution.uniform(1., 3.))
>>> with rep.trigger.on_frame(max_execs=10):
...     with spheres:
...         mod = rep.modify.pose(position=rep.distribution.uniform((-500., -500., -500.), (500., 500., 500.)))
- omni.replicator.core.trigger.on_key_press(key: str, modifier: str = None, name: str = None) ReplicatorItem
Execute when a keyboard key is input.
- Parameters
key – The key to listen for.
modifier – Optionally add a modifier [shift, alt, ctrl]
name – The name of the trigger.
Example
>>> import omni.replicator.core as rep
>>> spheres = rep.create.sphere(count=10, scale=rep.distribution.uniform(1., 3.))
>>> with rep.trigger.on_key_press(key="P", modifier="shift"):
...     with spheres:
...         mod = rep.modify.pose(position=rep.distribution.uniform((-500., -500., -500.), (500., 500., 500.)))
- omni.replicator.core.trigger.on_time(interval: float = 1, num: int = 0, name: str = 'on_time', rt_subframes: int = 32, enable_capture_on_play: bool = True, reset_physics: bool = True, max_execs: int = 0) ReplicatorItem
Execute on a specific time interval.
- Parameters
interval – The interval of elapsed time to execute on.
num – (Will be deprecated) Replaced by max_execs. The number of times to activate the trigger. Generation automatically stops when all triggers have reached their maximum activation number. Note that this determines the number of times that the trigger is activated and not the number of times data is written.
name – The name of the trigger.
rt_subframes – If rendering in RTX Realtime mode, specifies the number of subframes to render in order to reduce artifacts caused by large changes in the scene.
enable_capture_on_play – Enable CaptureOnPlay, which ties replicator capture to the timeline state. Defaults to True.
reset_physics – If True, the physics simulation is reset on each trigger activation. Defaults to True.
max_execs – The number of times to activate the trigger. Generation automatically stops when all triggers have reached their maximum activation number. Note that this determines the number of times that the trigger is activated and not the number of times data is written.
Example
>>> import omni.replicator.core as rep
>>> spheres = rep.create.sphere(count=10, scale=rep.distribution.uniform(1., 3.))
>>> with rep.trigger.on_time(max_execs=10):
...     with spheres:
...         mod = rep.modify.pose(position=rep.distribution.uniform((-500., -500., -500.), (500., 500., 500.)))
- omni.replicator.core.trigger.on_custom_event(event_name: str) ReplicatorItem
Execute when a specified event is received.
- Parameters
event_name – The name of the event to listen for.
Example
>>> import omni.replicator.core as rep
>>> spheres = rep.create.sphere(count=10, scale=rep.distribution.uniform(1., 3.))
>>> with rep.trigger.on_custom_event(event_name="Randomize!"):
...     with spheres:
...         mod = rep.modify.pose(position=rep.distribution.uniform((-500., -500., -500.), (500., 500., 500.)))
>>> # Send event
>>> rep.utils.send_og_event("Randomize!")
- omni.replicator.core.trigger.on_condition(condition: Union[partial, Callable], max_execs: int = 0, rt_subframes: int = 1) ReplicatorItem
Execute when a specified condition is met.
Create an OnCondition trigger which activates when condition returns True.
- Parameters
condition – The function or partial defining the condition to be met. Must return a bool. Function parameters are automatically added to the node as inputs. If default parameters are provided, these default values will be used.
max_execs – The number of times to activate the trigger. Generation automatically stops when all triggers have reached their maximum activation number.
rt_subframes – If rendering in RTX Realtime mode, specifies the number of subframes to render in order to reduce artifacts caused by large changes in the scene. Default is 1.
Example
>>> import omni.usd
>>> import omni.replicator.core as rep
>>> from functools import partial
>>> # Create a condition that returns ``True`` whenever a prim reaches the specified threshold
>>> def has_reached_ground(prim_paths, threshold=0.):
...     import omni.usd
...     from pxr import UsdGeom
...     stage = omni.usd.get_context().get_stage()
...     up_axis = UsdGeom.GetStageUpAxis(stage)
...     op = "xformOp:translate"
...     idx = 1 if up_axis == "Y" else 2
...     for prim_path in prim_paths:
...         prim = stage.GetPrimAtPath(str(prim_path))
...         if prim.HasAttribute(op) and prim.GetAttribute(op).Get()[idx] <= threshold:
...             return True
...     return False
>>> spheres = rep.create.sphere(count=10, scale=rep.distribution.uniform(1., 3.))
>>> # Reposition the spheres when condition is met
>>> with rep.trigger.on_condition(condition=partial(has_reached_ground, prim_paths=spheres.get_output("prims"))):
...     with spheres:
...         mod = rep.modify.pose(position=rep.distribution.uniform((-500., 200., 200.), (500., 500., 500.)))
...         phys = rep.physics.rigid_body()
- omni.replicator.core.trigger.register(fn: Callable[[...], Union[ReplicatorItem, Node]], override: bool = True, fn_name: Optional[str] = None) None
Register a new function under omni.replicator.core.trigger. Extend the default capabilities of omni.replicator.core.trigger by registering new functionality. New functions must return a ReplicatorItem or an OmniGraph node.
- Parameters
fn – A function that returns a ReplicatorItem or an OmniGraph node.
override – If True, will override existing functions of the same name. If False, an error is raised.
fn_name – Optional, specify the registration name. If not specified, the function name is used. fn_name must only contain letters (a-z), numbers (0-9), or underscores (_), and cannot start with a number or contain any spaces.
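The `fn_name` constraint can be captured as a regular expression. This validator is a sketch of the documented rule, not Replicator's actual check (whether a leading underscore is accepted is an assumption here):

```python
import re

# Letters, digits and underscores only; may not start with a digit.
# Assumption: a leading underscore is allowed, as in Python identifiers.
_FN_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")


def is_valid_fn_name(fn_name: str) -> bool:
    """Return True if fn_name satisfies the documented naming rule."""
    return bool(_FN_NAME_RE.match(fn_name))
```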
Orchestrator
Simulation Control
- omni.replicator.core.orchestrator.pause() None
Pause a running replicator scenario
Submit the Pause command to replicator without waiting for its status to change to Paused.
- omni.replicator.core.orchestrator.preview() None
Run the replicator scenario for a single iteration
Submit the Preview command to replicator without waiting for a frame to be previewed. Writers are disabled during preview.
- omni.replicator.core.orchestrator.resume() None
Resume a paused replicator scenario
Submit the Resume command to replicator without waiting for its status to change to Started.
- omni.replicator.core.orchestrator.run(num_frames: Optional[int] = None, start_timeline: bool = False) None
Run the replicator scenario
Submit the Start command to replicator without waiting for its status to change to Started.
- Parameters
num_frames – Optionally specify the maximum number of frames to capture. Note that num_frames does not override the number of frames specified in triggers defined in the scene.
start_timeline – Optionally start the timeline when Replicator starts.
- omni.replicator.core.orchestrator.run_until_complete(num_frames: Optional[int] = None, start_timeline: bool = False) None
Run the replicator scenario until stopped (standalone workflow)
Synchronous function: only use from standalone workflow (controlling Kit app from python). Generation ends when all triggers have reached their end condition or when a stop event is published.
- Parameters
num_frames – Optionally specify the maximum number of frames to capture. Note that num_frames does not override the number of frames specified in triggers defined in the scene.
- omni.replicator.core.orchestrator.step(rt_subframes: int = -1, pause_timeline: bool = True, delta_time: Optional[float] = None) None
Step one frame (standalone workflow)
Synchronous step function: only use from standalone workflow (controlling Kit app from python). If Replicator is not yet started, an initialization step is first taken to ensure the necessary settings are set for data capture. The renderer will then render as many subframes as required by current settings and schedule a frame to be captured by any active annotators and writers.
- Parameters
rt_subframes – Specify the number of subframes to render. During subframe generation, the simulation is paused. This is often beneficial when large scene changes occur to reduce rendering artifacts or to allow materials to fully load. This setting is enabled for both RTX Real-Time and Path Tracing render modes. Values must be greater than 0.
pause_timeline – If True, pause the timeline after the step. Defaults to True.
delta_time – The amount of time that the timeline advances for each step call. When delta_time == None, the default timeline rate is used. When delta_time == 0.0, the timeline will not advance. When delta_time > 0.0, the timeline will advance by the custom delta_time.
Simulation Control (async)
- async omni.replicator.core.orchestrator.preview_async() None
Run the replicator scenario for a single iteration
Submit the Preview command to replicator and wait for a frame to be previewed. Writers are disabled during preview.
- async omni.replicator.core.orchestrator.run_async(num_frames: Optional[int] = None, start_timeline: bool = False) None
Run the replicator scenario and wait for orchestrator to start
Submit the Start command to replicator and wait for its status to change to Started.
- Parameters
num_frames – Optionally specify the maximum number of frames to capture. Note that num_frames does not override the number of frames specified in triggers defined in the scene.
start_timeline – Optionally start the timeline when Replicator starts.
- async omni.replicator.core.orchestrator.run_until_complete_async(num_frames: Optional[int] = None, start_timeline: bool = False) None
Run the replicator scenario until stopped Generation ends when all triggers have reached their end condition or when a stop event is published.
- Parameters
num_frames – Optionally specify the maximum number of frames to capture. Note that num_frames does not override the number of frames specified in triggers defined in the scene.
start_timeline – Optionally start the timeline when Replicator starts.
- async omni.replicator.core.orchestrator.step_async(rt_subframes: int = -1, pause_timeline: bool = True, delta_time: Optional[float] = None) None
Step one frame
If Replicator is not yet started, an initialization step is first taken to ensure the necessary settings are set for data capture. The renderer will then render as many subframes as required by current settings and schedule a frame to be captured by any active annotators and writers.
- Parameters
rt_subframes – Specify the number of subframes to render. During subframe generation, the simulation is paused. This is often beneficial when large scene changes occur to reduce rendering artifacts or to allow materials to fully load. This setting is enabled for both RTX Real-Time and Path Tracing render modes. Values must be greater than 0.
pause_timeline – If True, pause the timeline after the step. Defaults to True.
delta_time – The amount of time that the timeline advances for each step call. When delta_time == None, the default timeline rate is used. When delta_time == 0.0, the timeline will not advance. When delta_time > 0.0, the timeline will advance by the custom delta_time.
Orchestrator info
Other
- omni.replicator.core.orchestrator.register_status_callback(callback: Callable) StatusCallback
Register a callback on orchestrator status changed.
Register a callback and return a StatusCallback object that automatically unregisters callback when destroyed.
- Parameters
callback – Callback function that will be called whenever orchestrator status changes
- Returns
StatusCallback object which will automatically unregister callback when destroyed
Example
>>> import omni.replicator.core as rep
>>> def my_callback(status):
...     print(f"Orchestrator Status: {status}")
>>> # register callbacks
>>> handle = rep.orchestrator.register_status_callback(my_callback)
>>> handle2 = rep.orchestrator.register_status_callback(my_callback)
>>> # unregister callback manually
>>> handle.unregister()
>>> # unregister callback automatically
>>> del handle2
- omni.replicator.core.orchestrator.set_capture_on_play(value: bool) None
Set Replicator to capture on timeline playing.
When capture on play is enabled, timeline operations (i.e. Play, Stop, Pause) will trigger the corresponding Replicator commands.
- Parameters
value (bool) – If True, Replicator will engage when the timeline is playing to capture frames.
- omni.replicator.core.orchestrator.set_minimum_next_rt_subframes(rt_subframes: int) None
Specify the minimum number of subframes to render
Specify the minimum number of subframes to render. During subframe generation, the simulation is paused. This is often beneficial when large scene changes occur to reduce rendering artifacts or to allow materials to fully load. This setting is enabled for both RTX Real-Time and Path Tracing render modes. Values must be greater than 0.
- Parameters
rt_subframes – Minimum number of subframes to render for the next frame. Resets on every frame.
- omni.replicator.core.orchestrator.set_next_rt_subframes(rt_subframes: int) None
Specify the number of subframes to render
Specify the number of subframes to render. During subframe generation, the simulation is paused. This is often beneficial when large scene changes occur to reduce rendering artifacts or to allow materials to fully load. This setting is enabled for both RTX Real-Time and Path Tracing render modes. Values must be greater than 0.
- Parameters
rt_subframes – Number of subframes to render for the next frame. Resets on every frame.
- exception omni.replicator.core.orchestrator.OrchestratorError(msg=None)
Base exception for errors raised by the orchestrator
Backends
Backend Registry
- omni.replicator.core.backends.registry.get(name: str, init_params: Optional[dict] = None) BaseBackend
Get backend from registry
- Parameters
name – Backend name
init_params – Dictionary of initialization parameters with which to initialize writer
- Example
>>> import omni.replicator.core as rep
>>> class PassBackend(rep.backends.BaseBackend):
...     def __init__(self, param1):
...         pass
...     def write_blob(self, **kwargs):
...         pass
...     def read_blob(self, **kwargs):
...         pass
>>> rep.backends.register(PassBackend)
>>> backend = rep.backends.get("PassBackend", init_params={"param1": 1})
>>> backend.get_name()
'PassBackend'
- omni.replicator.core.backends.registry.register(backend: BaseBackend) None
Register a backend.
- Parameters
backend – Backend to register, must be derived from BaseBackend
Example
>>> import omni.replicator.core as rep
>>> class PassBackend(rep.backends.BaseBackend):
...     def __init__(self, **kwargs):
...         pass
...     def write_blob(self, **kwargs):
...         pass
...     def read_blob(self, **kwargs):
...         pass
>>> rep.backends.register(PassBackend)
- omni.replicator.core.backends.registry.unregister(backend: Union[str, BaseBackend]) None
Unregister a backend.
- Parameters
backend – The backend to unregister, or its registered name.
Example
>>> import omni.replicator.core as rep
>>> class PassBackend(rep.backends.BaseBackend):
...     def __init__(self, **kwargs):
...         pass
...     def write_blob(self, **kwargs):
...         pass
...     def read_blob(self, **kwargs):
...         pass
>>> rep.backends.register(PassBackend)
>>> rep.backends.unregister("PassBackend")
- class omni.replicator.core.backends.BackendRegistry
Registry of backends
Backends define how to read/write blobs of bytes data.
- classmethod register_backend(backend: BaseBackend) None
Register a backend.
- Parameters
backend – Backend to register, must be derived from BaseBackend. The backend is registered under its class name.
Example
>>> import omni.replicator.core as rep
>>> class PassBackend(rep.backends.BaseBackend):
...     def __init__(self, **kwargs):
...         pass
...     def write_blob(self, **kwargs):
...         pass
...     def read_blob(self, **kwargs):
...         pass
>>> rep.backends.BackendRegistry.register_backend(PassBackend)
- classmethod unregister_backend(backend: BaseBackend) None
Unregister a backend.
- Parameters
backend – The backend to unregister, or its registered name.
Example
>>> import omni.replicator.core as rep
>>> class PassBackend(rep.backends.BaseBackend):
...     def __init__(self, **kwargs):
...         pass
...     def write_blob(self, **kwargs):
...         pass
...     def read_blob(self, **kwargs):
...         pass
>>> rep.backends.BackendRegistry.register_backend(PassBackend)
>>> rep.backends.BackendRegistry.unregister_backend("PassBackend")
- classmethod get_registered_backends() List
Returns a list of registered backends.
- Returns
List of the names of registered backends.
Example
>>> import omni.replicator.core as rep
>>> registered_backends = rep.backends.BackendRegistry.get_registered_backends()
- classmethod get_backend(name: str, init_params: Optional[dict] = None) BaseBackend
Get backend from registry
- Parameters
name – Backend name
init_params – Dictionary of initialization parameters with which to initialize writer
- Example
>>> import omni.replicator.core as rep
>>> class PassBackend(rep.backends.BaseBackend):
...     def __init__(self, param1):
...         pass
...     def write_blob(self, **kwargs):
...         pass
...     def read_blob(self, **kwargs):
...         pass
>>> rep.backends.BackendRegistry.register_backend(PassBackend)
>>> backend = rep.backends.BackendRegistry.get_backend("PassBackend")
>>> backend.initialize(param1=1)
>>> backend.get_name()
'PassBackend'
IO Queue
- omni.replicator.core.backends.io_queue.wait_until_done() None
Wait until data queue is fully processed
Blocks execution until is_done_writing() == True.
- omni.replicator.core.backends.io_queue.set_max_queue_size(value: int) None
Set maximum queue size
On systems with more available memory, increasing the queue size can reduce instances where I/O bottlenecks data generation.
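The benefit of a larger queue comes from the standard bounded producer/consumer pattern: the producing (simulation) thread only blocks when the queue is full, so extra capacity absorbs I/O latency spikes. A self-contained sketch of that pattern (not Replicator's implementation):

```python
import queue
import threading


def run_writer(items, max_queue_size):
    """Push items through a bounded queue drained by a background writer.

    ``put`` blocks only when the queue holds ``max_queue_size`` items,
    which is why a larger queue reduces producer stalls.
    """
    q = queue.Queue(maxsize=max_queue_size)
    written = []

    def writer():
        while True:
            item = q.get()
            if item is None:  # sentinel: no more work
                break
            written.append(item)  # stand-in for the actual disk write
            q.task_done()

    t = threading.Thread(target=writer)
    t.start()
    for item in items:
        q.put(item)  # blocks here if the queue is full
    q.put(None)
    t.join()
    return written
```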
- class omni.replicator.core.backends.sequential.Sequential(*tasks: List[Callable])
Set up a sequence of tasks. Tasks may be defined as lambdas, functions, or classes. Calling a Sequential instance returns a partial function that can be scheduled to run on background threads. This is often desirable to prevent I/O or post-processing tasks from blocking the simulation thread.
- Parameters
*tasks – List of functions representing tasks to be executed in sequence. Tasks must have compatible inputs/outputs to be sequenced together.
Note: Calling a Sequential instance returns a Python partial function.
- Example 1:
>>> import omni.replicator.core.functional as F
>>> import numpy as np
>>> from functools import partial
>>> from omni.replicator.core import backends
>>> # Define a function and a class to go into a task sequence
>>> def pad(x, pad_value, pad_length):
...     return f"{x:{pad_value}>{pad_length}}"
>>> class add_prefix:
...     def __init__(self, prefix):
...         self.prefix = prefix
...     def __call__(self, x):
...         return f"{self.prefix}{x}"
>>> sequence_path = backends.Sequential(
...     lambda x: x * 2,
...     partial(pad, pad_value=0, pad_length=5),
...     add_prefix("frame_"),
...     lambda x, ext="png": f"{x}.{ext}",
... )
>>> # Test sequence
>>> sequence_path(1)()
'frame_00002.png'
>>> # Setup data task sequence
>>> sequence_data = backends.Sequential(
...     lambda x, factor=2: x * factor,
...     lambda x, offset=10: x + offset,
... )
>>> # Test sequence
>>> seq_partial = sequence_data(np.ones(1))
>>> type(seq_partial)
<class 'functools.partial'>
>>> seq_partial()
array([12.])
>>> # Setup backend
>>> backend_abs = backends.BackendDispatch(output_dir="_out")
>>> test_frame_num = 5
>>> test_data = np.ones((100, 100, 4))
>>> backend_abs.schedule(F.write_image, path=sequence_path(test_frame_num), data=sequence_data(test_data))
For more complex operations, sequenced tasks can accept and return tuples or dictionaries to further parameterize downstream tasks. Note that sequenced functions should return and ingest the same arguments to be compatible with each other.
- Example 2:
>>> import omni.replicator.core.functional as F
>>> import numpy as np
>>> from omni.replicator.core import backends
>>> # Define functions to go into a task sequence
>>> def add_empty_suffix(path, data):
...     import os  # import goes here as it will go out of scope on execution
...     if data.sum() == 0:
...         path_og, ext = os.path.splitext(path)
...         path = f"{path_og}_empty{ext}"
...     return path, data
>>> def empty_data_message(path, data):
...     if data.sum() == 0:
...         print(f"Writing empty image of size {data.shape} to {path}")
...     return {"path": path, "data": data}
>>> sequence_write_image = backends.Sequential(
...     add_empty_suffix,
...     empty_data_message,
...     F.write_image,
... )
>>> # Test sequence
>>> sequence_write_image("path/to/image.png", np.zeros((10, 10, 3)))()
Writing empty image of size (10, 10, 3) to path/to/image_empty.png
>>> # Setup backend
>>> backend_abs = backends.BackendDispatch(output_dir="_out")
>>> test_path = "exr_data.exr"
>>> test_data = np.zeros((10, 10, 3), dtype=np.float32)
>>> backend_abs.schedule(sequence_write_image, path=test_path, data=test_data)
- execute(*args, backend_instance=None, **kwargs) Any
Executes sequence of tasks with specified parameters
- Parameters
backend_instance – Optionally specify the backend to use. This parameter is automatically provided when called from a <backend>.schedule() call.
*args – Optional positional parameters.
**kwargs – Optional keyword parameters.
Functional
I/O
- omni.replicator.core.functional.io_functions.write_image(path: str, data: ~typing.Union[~numpy.ndarray, ~warp.types.array, ~PIL.Image.Image], backend_instance: ~omni.replicator.core.scripts.backends.base.BaseBackend = <class 'omni.replicator.core.scripts.backends.disk.DiskBackend'>, **kwargs) None
Write image data to a specified path. Supported image extensions include: [jpeg, jpg, png, exr]
- Parameters
path – Write path URI
data – Image data
backend_instance – Backend to use to write. Defaults to DiskBackend.
kwargs – Specify additional save parameters, typically specific to the image file type.
- omni.replicator.core.functional.io_functions.write_jpeg(path: str, data: ~typing.Union[~numpy.ndarray, ~warp.types.array], backend_instance: ~omni.replicator.core.scripts.backends.base.BaseBackend = <class 'omni.replicator.core.scripts.backends.disk.DiskBackend'>, quality: int = 75, progressive: bool = False, optimize: bool = False, **kwargs) None
Write image data to JPEG.
- Parameters
path – Write path URI
data – Data to write
backend_instance – Backend to use to write. Defaults to DiskBackend.
quality – The image quality, on a scale from 0 (worst) to 95 (best), or the string 'keep'. The default is 75. Values above 95 should be avoided; 100 disables portions of the JPEG compression algorithm and results in large files with hardly any gain in image quality. The value 'keep' is only valid for JPEG files and will retain the original image quality level, subsampling, and qtables.
progressive – Indicates that this image should be stored as a progressive JPEG file.
optimize – Reduce file size, may be slower. Indicates that the encoder should make an extra pass over the image in order to select optimal encoder settings.
kwargs – Additional parameters may be specified and can be found within the PILLOW documentation: https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#jpeg-saving
- omni.replicator.core.functional.io_functions.write_png(path: str, data: ~typing.Union[~numpy.ndarray, ~warp.types.array], backend_instance: ~omni.replicator.core.scripts.backends.base.BaseBackend = <class 'omni.replicator.core.scripts.backends.disk.DiskBackend'>, compress_level: int = 3, **kwargs) None
Write image data to PNG.
- Parameters
path – Write path URI
data – Data to write
backend_instance – Backend to use to write. Defaults to DiskBackend.
compress_level – Specifies the ZLIB compression level, a value between [0, 9] where 1 is fastest and 9 provides the best compression. A value of 0 provides no compression. Defaults to 3.
kwargs – Additional parameters may be specified and can be found within the PILLOW documentation: https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#png-saving
- omni.replicator.core.functional.io_functions.write_exr(path: str, data: ~typing.Union[~numpy.ndarray, ~warp.types.array], backend_instance: ~omni.replicator.core.scripts.backends.base.BaseBackend = <class 'omni.replicator.core.scripts.backends.disk.DiskBackend'>, exr_flag=None, **kwargs) None
Write data to EXR.
- Parameters
path – Write path URI
data – Data to write
backend_instance – Backend to use to write. Defaults to DiskBackend.
exr_flag – Flag from imageio.plugins.freeimage.IO_FLAGS:
EXR_DEFAULT: Save data as half with piz-based wavelet compression
EXR_FLOAT: Save data as float instead of as half (not recommended)
EXR_NONE: Save with no compression
EXR_ZIP: Save with zlib compression, in blocks of 16 scan lines
EXR_PIZ: Save with piz-based wavelet compression
EXR_PXR24: Save with lossy 24-bit float compression
EXR_B44: Save with lossy 44% float compression - goes to 22% when combined with EXR_LC
EXR_LC: Save images with one luminance and two chroma channels, rather than as RGB (lossy compression)
- omni.replicator.core.functional.io_functions.write_json(path, data, backend_instance=None, encoding='utf-8', errors='strict', **kwargs) None
Write json data to a specified path.
- Parameters
path – Write path URI
data – Data to write
backend_instance – Backend used to write. Defaults to DiskBackend.
encoding – The text encoding to be used. For a list of all encoding schemes, see: https://docs.python.org/3/library/codecs.html#standard-encodings
errors – Error handling scheme used when encoding the JSON string data. The default, ‘strict’, means that encoding errors raise a UnicodeError. Other possible values are ‘ignore’, ‘replace’, ‘xmlcharrefreplace’, ‘backslashreplace’ and any other name registered via codecs.register_error().
**kwargs – Additional JSON encoding parameters may be supplied. See https://docs.python.org/3/library/json.html#json.dump for full list.
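The encoding and errors parameters behave like the identically named arguments of Python's str.encode. A standalone sketch of how a JSON payload might be serialized to bytes before being handed to a backend (encode_json is a hypothetical helper, not the Replicator implementation):

```python
import json

def encode_json(data, encoding="utf-8", errors="strict", **kwargs):
    # Serialize to a JSON string, then encode to bytes for the backend's write_blob.
    return json.dumps(data, **kwargs).encode(encoding, errors=errors)

blob = encode_json({"frame": 1, "label": "cube"}, indent=2)
print(blob.decode("utf-8"))
```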
- omni.replicator.core.functional.io_functions.write_pickle(path: str, data: ~typing.Union[~numpy.ndarray, ~warp.types.array], backend_instance: ~omni.replicator.core.scripts.backends.base.BaseBackend = <class 'omni.replicator.core.scripts.backends.disk.DiskBackend'>, **kwargs) None
Write pickle data to a specified path.
- Parameters
path – Write path URI
data – Data to write
backend_instance – Backend used to write. Defaults to DiskBackend.
kwargs – Additional pickle encoding parameters may be supplied. See https://docs.python.org/3/library/pickle.html#pickle.Pickler for the full list.
- omni.replicator.core.functional.io_functions.write_np(path: str, data: ~typing.Union[~numpy.ndarray, ~warp.types.array], backend_instance: ~omni.replicator.core.scripts.backends.base.BaseBackend = <class 'omni.replicator.core.scripts.backends.disk.DiskBackend'>, allow_pickle: bool = True, fix_imports: bool = True) None
Write numpy data to a specified path. Save parameters are detailed here: https://numpy.org/doc/stable/reference/generated/numpy.save.html
- Parameters
path – Write path URI
data – Data to write
backend_instance – Backend used to write. Defaults to DiskBackend.
allow_pickle – bool, optional. Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between Python 2 and Python 3). Defaults to True.
fix_imports – bool, optional. Only useful in forcing objects in object arrays on Python 3 to be pickled in a Python 2 compatible way. If fix_imports is True, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2. Defaults to True.
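allow_pickle mirrors the parameter of the same name on numpy.save. A standalone numpy sketch showing why object arrays need it enabled:

```python
import io
import numpy as np

# Plain numeric arrays save fine without pickle support.
buf = io.BytesIO()
np.save(buf, np.arange(6).reshape(2, 3), allow_pickle=False)

# Object arrays fall back to pickling, so saving them with
# allow_pickle=False raises a ValueError.
try:
    np.save(io.BytesIO(), np.array([{"a": 1}, None], dtype=object), allow_pickle=False)
    raised = False
except ValueError:
    raised = True
assert raised
```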
Default Backends
- class omni.replicator.core.backends.BaseBackend
Backend abstract class
Backends define how to write and read data. A backend defines a write_blob function that specifies how to write bytes to a given path, and a read_blob function that specifies how to read bytes from a path.
- Note: While backends are most often used to write data, they can also be used to stream data or process data in some other way.
- schedule(fn: Callable, *args, **kwargs) None
Schedule a task to be executed asynchronously
Append a task to a data queue that will be executed by multithreaded workers at a later time. This is often desirable so as to avoid bottlenecking the simulation thread with I/O tasks.
- Note: Because scheduled tasks are not executed immediately, special care must be given to manage the lifetime of passed objects.
- Parameters
fn – Task function to be performed asynchronously.
*args – Positional arguments to parametrize the task.
**kwargs – Keyword arguments to parametrize the task.
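A minimal sketch of the schedule pattern described above, using a stdlib queue and worker thread (an illustration of the concept only; SketchBackend is hypothetical and not Replicator's actual implementation):

```python
import queue
import threading

class SketchBackend:
    """Toy backend: schedule() enqueues tasks that a worker thread executes."""

    def __init__(self):
        self._tasks = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            fn, args, kwargs = self._tasks.get()
            if fn is None:  # sentinel used by wait_until_done()
                break
            fn(*args, **kwargs)

    def schedule(self, fn, *args, **kwargs):
        # Returns immediately; the I/O happens later on the worker thread,
        # keeping the simulation thread free.
        self._tasks.put((fn, args, kwargs))

    def wait_until_done(self):
        self._tasks.put((None, (), {}))
        self._worker.join()

results = []
backend = SketchBackend()
backend.schedule(results.append, "frame_0000.png")
backend.schedule(results.append, "frame_0001.png")
backend.wait_until_done()
print(results)  # ['frame_0000.png', 'frame_0001.png']
```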
- class omni.replicator.core.backends.DiskBackend(output_dir: str, overwrite: bool = True)
Disk writing backend
Backend to write data to disk to a specified output directory.
- Parameters
output_dir – Root output directory. If specified as a relative path, output will be relative to the path specified by the setting /omni/replicator/backends/disk/root_dir. If no root directory is specified, root_dir defaults to <home_dir>/omni.replicator_out.
overwrite – If True, overwrite an existing folder at the same output path. If False, a suffix in the format _000N is added to the output directory name, where N is the next available number. Defaults to True.
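The overwrite=False suffixing behaviour can be sketched in plain Python (next_output_dir is a hypothetical helper illustrating the documented _000N naming rule, not DiskBackend's actual code):

```python
import os
import tempfile

def next_output_dir(root, name):
    # Mirror the documented rule: if the directory exists, append _000N,
    # where N is the next available number.
    candidate = os.path.join(root, name)
    n = 0
    while os.path.exists(candidate):
        n += 1
        candidate = os.path.join(root, f"{name}_{n:04d}")
    return candidate

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "out"))                 # simulate an existing run
print(os.path.basename(next_output_dir(root, "out")))  # out_0001
```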
- read_blob(path) bytes
Return blob of bytes
- Parameters
path – Path of file to read.
- Returns
Bytes data.
- resolve_path(path: str) str
Join path to output directory
- Parameters
path – Partial path to resolve with output_dir
- Returns
Full file path
- static write_blob(path: str, data: bytes) None
Write blob to disk at the specified path without an initialized backend instance.
- Parameters
path – Path to write data to. If specified as a relative path, output will be relative to the path specified by the setting /omni/replicator/backends/disk/root_dir. If no root directory is specified, root_dir defaults to <home_dir>/omni.replicator_out.
data – Data to write to disk.
- class omni.replicator.core.backends.S3Backend(bucket: str, key_prefix: str, region: Optional[str] = None, endpoint_url: Optional[str] = None, overwrite: bool = False)
Writer backend for saving generated data to an S3 bucket.
This backend requires that AWS credentials are set up in ~/.aws/credentials or the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables be defined. See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
- Parameters
bucket – S3 bucket name. The bucket must follow the naming rules: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
key_prefix – Prefix path within the S3 bucket. When calling write_blob or read_blob, key_prefix is joined to the path argument of either method to produce the full Key denoting the file location in the bucket.
region – Optionally specify the S3 region name (e.g. us-east-2)
endpoint_url – Optionally specify the S3 endpoint URL (e.g. s3.us-east-2.amazonaws.com)
overwrite – If True, overwrite the existing key_prefix path. If False, a suffix in the format _000N is added to the key_prefix name, where N is the next available number. Defaults to False.
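How key_prefix combines with a write path to form the final object Key can be sketched with posixpath (an illustration of the documented joining behaviour; the exact normalization inside S3Backend may differ):

```python
import posixpath

def full_key(key_prefix, path):
    # S3 keys always use forward slashes, so posixpath is the right joiner.
    return posixpath.join(key_prefix, path.lstrip("/"))

print(full_key("datasets/run_01", "rgb/frame_0000.png"))
# datasets/run_01/rgb/frame_0000.png
```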
- class omni.replicator.core.backends.BackendGroup(backends: List[BaseBackend])
Group multiple backends
Group multiple backends to write data to multiple endpoints simultaneously. For example, you may want to stream data to a robot while also writing the data to local disk as a backup or to create an offline dataset.
- Parameters
backends – List of backends to group together.
- schedule(fn: Callable, *args, **kwargs)
Schedule a task to be executed asynchronously
Append a task to a data queue that will be executed by multithreaded workers at a later time. This is often desirable so as to avoid bottlenecking the simulation thread with I/O tasks.
- Note: Because scheduled tasks are not executed immediately, special care must be given to manage the lifetime of passed objects.
- Parameters
fn – Task function to be performed asynchronously.
*args – Positional arguments to parametrize the task.
**kwargs – Keyword arguments to parametrize the task.
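The fan-out behaviour of BackendGroup can be sketched as follows (a simplified stand-in with a toy in-memory backend; the real class groups instances such as DiskBackend and S3Backend):

```python
class MemoryBackend:
    """Toy backend that records writes in a dict."""
    def __init__(self):
        self.blobs = {}
    def write_blob(self, path, data):
        self.blobs[path] = data

class SketchBackendGroup:
    """Fan a single write out to every grouped backend."""
    def __init__(self, backends):
        self.backends = backends
    def write_blob(self, path, data):
        # Every grouped backend receives the same write.
        for backend in self.backends:
            backend.write_blob(path, data)

a, b = MemoryBackend(), MemoryBackend()
group = SketchBackendGroup([a, b])
group.write_blob("rgb_0000.png", b"\x89PNG...")
assert a.blobs == b.blobs == {"rgb_0000.png": b"\x89PNG..."}
```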
Replicator Utils
- omni.replicator.core.utils.compute_aabb(bbox_cache: BBoxCache, prim: str, include_children: bool = True) array
Compute an AABB for a given prim path. A combined AABB is computed if include_children is True.
- Parameters
bbox_cache (UsdGeom.BboxCache) – Existing Bounding box cache to use for computation
prim_path (UsdPrim) – prim to compute AABB for
include_children (bool, optional) – include children of specified prim in calculation. Defaults to True.
- Returns
Bounding box for this prim, [min x, min y, min z, max x, max y, max z]
- Return type
np.array
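Combining per-child AABBs into one [min x, min y, min z, max x, max y, max z] array, as compute_aabb does with include_children=True, reduces to element-wise min/max. A pure-Python sketch of that reduction (not the UsdGeom.BBoxCache implementation):

```python
def combine_aabbs(aabbs):
    # Each AABB is [min_x, min_y, min_z, max_x, max_y, max_z].
    mins = [min(box[i] for box in aabbs) for i in range(3)]
    maxs = [max(box[i + 3] for box in aabbs) for i in range(3)]
    return mins + maxs

boxes = [
    [-1.0, 0.0, 0.0, 1.0, 2.0, 1.0],   # child mesh 1
    [0.0, -3.0, 0.5, 0.5, 1.0, 4.0],   # child mesh 2
]
print(combine_aabbs(boxes))  # [-1.0, -3.0, 0.0, 1.0, 2.0, 4.0]
```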
- omni.replicator.core.utils.create_node(node_type_id: str, graph: Optional[Graph] = None, node_name=None, **kwargs)
Helper function to create a replicator node of type node_type_id
- Parameters
node_type_id – Node type ID
graph – Optionally specify the graph onto which to create node
kwargs – Node attributes can optionally be set by specifying them as kwargs
- omni.replicator.core.utils.find_prims(prim_paths: List[Union[str, Path]], mode: str = 'instances') List[Prim]
Find prims based on specified mode
- Parameters
prim_paths – List of paths to prims in the current stage
mode – Choose from one of the following modes: [‘prims’, ‘prototypes’, ‘materials’, ‘meshes’, ‘instances’]. Defaults to “instances”.
prims - Returns the prims corresponding to the paths in prim_paths.
prototypes - Returns the prims corresponding to the paths in prim_paths and parses point instancers to retrieve their prototypes.
meshes - Traverses the prims in prim_paths and returns all meshes and geomsubsets.
materials - Traverses the prims in prim_paths and returns all bound materials.
- Raises
ValueError – Invalid mode choice
- Returns
List of Usd prim objects
- omni.replicator.core.utils.get_files_group(folder_path: str, file_suffixes: Optional[List[str]] = None, ignore_case: bool = True, file_type: str = 'png') List[dict]
Retrieve all the files in a folder and group them based on the suffixes.
- Parameters
folder_path – The folder where to search.
file_prefix – The prefix to filter the files by.
file_type – The texture file type.
- Returns
A list of tuples, with each tuple of the format (<prefix>, {<suffix>: <filepath>, ...}).
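The grouping described above can be sketched in plain Python (group_files is a hypothetical re-implementation of the documented behaviour for illustration, operating on file names rather than a folder):

```python
import os
from collections import defaultdict

def group_files(filenames, suffixes):
    # Group e.g. "wood_albedo.png" and "wood_normal.png" under prefix "wood".
    groups = defaultdict(dict)
    for name in filenames:
        stem, _ = os.path.splitext(name)
        for suffix in suffixes:
            if stem.lower().endswith("_" + suffix):
                prefix = stem[: -len(suffix) - 1]
                groups[prefix][suffix] = name
    return sorted(groups.items())

files = ["wood_albedo.png", "wood_normal.png", "metal_albedo.png"]
print(group_files(files, ["albedo", "normal"]))
# [('metal', {'albedo': 'metal_albedo.png'}),
#  ('wood', {'albedo': 'wood_albedo.png', 'normal': 'wood_normal.png'})]
```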
- omni.replicator.core.utils.get_graph(graph_path: Optional[str] = None, is_hidden: bool = False)
Get or create the replicator graph
Retrieve a graph at the specified path. If no graph exists, an execution (aka action) graph in the SIMULATION stage is created. If no graph_path is specified, /Replicator/SDGPipeline is used.
- Parameters
graph_path – Path at which the graph is created.
is_hidden – If True, hide the created graph in the Stage panel. Ignored if the graph already exists.
- omni.replicator.core.utils.get_node_targets(node: Node, attribute: str, replicatorXform: bool = True) List[str]
Get node targets from prims bundle - DEPRECATED
This function provided a convenient way to retrieve targets from a node attribute and is now deprecated. Use get_non_xform_prims for similar functionality.
- Parameters
node – Node from which to get targets
attribute – Attribute name
replicatorXform – If False and a target has the replicatorXform attribute, return its leaf children.
- omni.replicator.core.utils.get_non_xform_prims(prim_paths: List[Union[str, Path]]) List[str]
Return non-xform prim paths
For each prim_path specified, return its child prim if it has the replicatorXform attribute.
- Parameters
prim_paths – list of prim paths
- omni.replicator.core.utils.get_prim_variant_values(prim_path: Union[str, Path], variant_name: str) List[str]
- omni.replicator.core.utils.get_prims_from_paths(prim_paths: Union[Path, str]) List[Prim]
Convert prim paths to prim objects
- Parameters
prim_paths – list of prim paths
- omni.replicator.core.utils.get_usd_files(path: str, recursive: bool = False, path_filter: Optional[str] = None) List[str]
Retrieve a list of USD files at the provided path
- Parameters
path – Path or URL to search from.
recursive – If True, recursively search through sub-directories.
path_filter – Optional regex filter to refine the search.
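A local-disk sketch of the search behaviour using pathlib and re (the real function also accepts URLs; the set of USD extensions below is an assumption for illustration):

```python
import re
import tempfile
from pathlib import Path

USD_EXTS = {".usd", ".usda", ".usdc", ".usdz"}  # assumed extension set

def find_usd_files(path, recursive=False, path_filter=None):
    pattern = "**/*" if recursive else "*"
    files = [str(p) for p in Path(path).glob(pattern) if p.suffix.lower() in USD_EXTS]
    if path_filter is not None:
        files = [f for f in files if re.search(path_filter, f)]
    return sorted(files)

root = Path(tempfile.mkdtemp())
(root / "props").mkdir()
(root / "scene.usd").touch()
(root / "props" / "chair.usda").touch()
(root / "notes.txt").touch()

assert len(find_usd_files(root)) == 1                  # top level only
assert len(find_usd_files(root, recursive=True)) == 2  # includes props/
```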
- omni.replicator.core.utils.read_prim_transform(prim_ref: Prim)
Return the prim’s local to world transform for current time
- Parameters
prim_ref – Prim to compute transform for
- omni.replicator.core.utils.send_og_event(event_name: str) None
Send an OmniGraph event that can be received by the omni.graph.action.OnCustomEvent node.
Sends an empty payload to signal the omni.graph.action.OnCustomEvent node to activate
- Parameters
event_name – Name of the omnigraph event to send. The event name sent is always in the format omni.graph.action.{event_name}.
- omni.replicator.core.utils.set_target_prims(node: Node, attribute: Attribute, target_prims: List[Union[ReplicatorItem, Node, str]])
Set node targets to attribute
This function provides a convenient way to set targets to a node attribute, and will be deprecated once multi-prim node attributes are supported.
- Parameters
node – Node to which to set targets
attribute – Attribute name
target_prims – Targets to attach to attribute
Settings
- omni.replicator.core.settings.carb_settings(setting: str, value: Union[List, float, ReplicatorItem]) ReplicatorItem
Set a specific carb setting
Carb settings control anything from render parameters to specific extension behaviours. Any of these can be controlled and randomized through Replicator. Because settings can be introduced and removed by extensions, providing an exhaustive list of available settings is not possible. Below is a subset of settings that may be useful for SDG workflows:
- RealTime Render Settings:
- Antialiasing/DLSS:
/rtx/post/aa/op (int): Specify the antialiasing or DLSS strategy to use. Select from None (0), TAA (1), FXAA (2), DLSS (3) and DLAA (4)
/rtx/post/scaling/staticRatio (float): TAA static ratio
/rtx/post/taa/samples (int): TAA samples
/rtx/post/taa/alpha (float): TAA alpha
/rtx/post/fxaa/qualitySubPix (float): FXAA sub-pixel quality
/rtx/post/fxaa/qualityEdgeThreshold (float): FXAA edge threshold
/rtx/post/fxaa/qualityEdgeThresholdMin (float): FXAA edge threshold minimum
/rtx/post/dlss/execMode (int): DLSS mode. Select from: Performance (0): Higher performance than balanced mode. Balanced (1): Balanced for optimized performance and image quality. Quality (2): Higher image quality than balanced mode.
/rtx/post/aa/sharpness (float): DLSS/DLAA sharpness. Higher values produce sharper results.
/rtx/post/aa/autoExposureMode (int): DLSS/DLAA auto exposure mode. Select from “Force self-evaluated” (0), “PostProcess AutoExposure” (1) and “Fixed” (2)
/rtx/post/aa/exposureMultiplier (float): DLSS/DLAA factor with which to multiply the selected exposure mode. Requires autoExposureMode=1
/rtx/post/aa/exposure (float): DLSS/DLAA exposure level. Requires autoExposureMode=2
- Direct Lighting:
/rtx/newDenoiser/enabled (bool): Enables the new experimental denoiser.
/rtx/shadows/enabled (bool): When disabled, lights will not cast shadows.
/rtx/directLighting/sampledLighting/enabled (bool): Mode which favors performance with many lights (10 or more), at the cost of image quality.
/rtx/directLighting/sampledLighting/autoEnable (bool): Automatically enables Sampled Direct Lighting when the light count is greater than the Light Count Threshold.
/rtx/directLighting/sampledLighting/autoEnableLightCountThreshold (int): Light count threshold above which Sampled Direct Lighting is automatically enabled.
/rtx/shadows/sampleCount (int): Higher values increase the quality of shadows at the cost of performance.
/rtx/shadows/denoiser/quarterRes (bool): Reduces the shadow denoiser resolution to gain performance at the cost of quality.
/rtx/directLighting/domeLight/enabled (bool): Enables dome light contribution to diffuse BSDFs if Dome Light mode is IBL.
/rtx/directLighting/domeLight/enabledInReflections (bool): Enables Dome Light sampling in reflections at the cost of performance.
/rtx/directLighting/domeLight/sampleCount (int): Higher values increase dome light sampling quality at the cost of performance.
/rtx/directLighting/sampledLighting/samplesPerPixel (int): Higher values increase the direct lighting quality at the cost of performance.
/rtx/directLighting/sampledLighting/clampSamplesPerPixelToNumberOfLights (bool): When enabled, clamps the “Samples per Pixel” to the number of lights in the scene
/rtx/directLighting/sampledLighting/maxRayIntensity (float): Clamps the brightness of a sample, which helps reduce fireflies, but may result in some loss of energy.
/rtx/reflections/sampledLighting/clampSamplesPerPixelToNumberOfLights (bool): When enabled, clamps the “Reflections: Light Samples per Pixel” to the number of lights in the scene
/rtx/reflections/sampledLighting/samplesPerPixel (int): Higher values increase the reflections quality at the cost of performance.
/rtx/reflections/sampledLighting/maxRayIntensity (float): Clamps the brightness of a sample, which helps reduce fireflies, but may result in some loss of energy.
/rtx/lightspeed/ReLAX/fireflySuppressionType (int): Choose the filter type (None 0, Median 1, RCRS 2). Clamps overly bright pixels to a maximum value.
/rtx/lightspeed/ReLAX/historyClampingEnabled (bool): Reduces temporal lag.
/rtx/lightspeed/ReLAX/aTrousIterations (int): Number of times the frame is denoised.
/rtx/directLighting/sampledLighting/ris/meshLights (bool): Enables direct illumination sampling of geometry with emissive materials.
- Indirect Diffuse Lighting:
/rtx/indirectDiffuse/enabled (bool): Enables indirect diffuse GI sampling.
/rtx/indirectDiffuse/fetchSampleCount (int): Number of samples made for indirect diffuse GI. A higher number gives better GI quality, but worse performance.
/rtx/indirectDiffuse/maxBounces (int): Number of bounces approximated with indirect diffuse GI.
/rtx/indirectDiffuse/scalingFactor (float): Multiplier for the indirect diffuse GI contribution.
/rtx/indirectDiffuse/denoiser/method (int): Select from NVRTD (0) or NRD:Reblur (1)
/rtx/indirectDiffuse/denoiser/kernelRadius (int): NVRTD: controls the spread of the local denoising area. Higher values result in smoother GI. Requires /rtx/indirectDiffuse/denoiser/method=0
/rtx/indirectDiffuse/denoiser/iterations (int): NVRTD: number of denoising passes. Higher values result in smoother looking GI. Requires /rtx/indirectDiffuse/denoiser/method=0
/rtx/indirectDiffuse/denoiser/temporal/maxHistory (int): NVRTD: controls latency in GI updates. Higher values result in smoother looking GI. Requires /rtx/indirectDiffuse/denoiser/method=0
/rtx/lightspeed/NRD_ReblurDiffuse/maxAccumulatedFrameNum (int): NRD:Reblur maximum accumulated frame number. Requires /rtx/indirectDiffuse/denoiser/method=1
/rtx/lightspeed/NRD_ReblurDiffuse/maxFastAccumulatedFrameNum (int): NRD:Reblur maximum fast accumulated frame number. Requires /rtx/indirectDiffuse/denoiser/method=1
/rtx/lightspeed/NRD_ReblurDiffuse/planeDistanceSensitivity (float): NRD:Reblur plane distance sensitivity. Requires /rtx/indirectDiffuse/denoiser/method=1
/rtx/lightspeed/NRD_ReblurDiffuse/blurRadius (float): NRD:Reblur blur radius. Requires /rtx/indirectDiffuse/denoiser/method=1
/rtx/ambientOcclusion/enabled (bool): Enables ambient occlusion.
/rtx/ambientOcclusion/rayLength (float): The radius around the intersection point which the ambient occlusion affects.
/rtx/ambientOcclusion/minSamples (int): Minimum number of samples per frame for ambient occlusion sampling.
/rtx/ambientOcclusion/maxSamples (int): Maximum number of samples per frame for ambient occlusion sampling.
/rtx/ambientOcclusion/denoiserMode (int): Allows for increased AO denoising at the cost of more blurring. Select from None (0), Aggressive (1) and Simple (2).
/rtx/sceneDb/ambientLightColor (color3): Color of the global environment lighting.
/rtx/sceneDb/ambientLightIntensity (float): Brightness of the global environment lighting.
- Reflections:
/rtx/reflections/maxRoughness (float): Roughness threshold for approximated reflections. Higher values result in better quality, at the cost of performance.
/rtx/reflections/maxReflectionBounces (int): Number of bounces for reflection rays.
/rtx/reflections/importantLightsOnly (bool): Process important lights only.
- Translucency:
/rtx/translucency/maxRefractionBounces (int): Number of bounces for refraction rays.
/rtx/translucency/reflectAtAllBounce (bool): When enabled, reflection seen through refraction is rendered. When disabled, reflection is limited to the first bounce only. More accurate, but worse performance.
/rtx/translucency/reflectionThroughputThreshold (float): Threshold below which reflection paths due to fresnel are no longer traced. Lower values result in higher quality at the cost of performance.
/rtx/raytracing/fractionalCutoutOpacity (bool): Enables fractional cutout opacity values, resulting in a translucency-like effect similar to alpha-blending.
/rtx/translucency/virtualDepth (bool): Improves DoF for translucent (refractive) objects, but can result in worse performance.
/rtx/translucency/virtualMotion (bool): Enables motion vectors for translucent (refractive) objects, which can improve temporal rendering such as denoising, but can result in worse performance.
/rtx/translucency/worldEps (float): Threshold below which image-based reprojection is used to compute refractions. Lower values result in higher quality at the cost of performance.
/rtx/translucency/sampleRoughness (bool): Enables sampling roughness, such as for simulating frosted glass, but can result in worse performance.
- Subsurface Scattering:
/rtx/raytracing/subsurface/maxSamplePerFrame (int): Max samples per frame for the infinitely-thick geometry SSS approximation.
/rtx/raytracing/subsurface/fireflyFiltering/enabled (bool): Enables firefly filtering for subsurface scattering. The maximum filter intensity is determined by ‘/rtx/directLighting/sampledLighting/maxRayIntensity’.
/rtx/raytracing/subsurface/denoiser/enabled (bool): Enables denoising for subsurface scattering.
/rtx/directLighting/sampledLighting/irradiance/denoiser/enabled (bool): Denoises the irradiance output from the sampled lighting pass before it is used; helps in complex lighting conditions or when large area lights make irradiance estimation difficult with a low sampled lighting sample count.
/rtx/raytracing/subsurface/transmission/enabled (bool): Enables transmission of light through the medium, but requires additional samples and denoising.
/rtx/raytracing/subsurface/transmission/bsdfSampleCount (int): Transmission sample count per frame.
/rtx/raytracing/subsurface/transmission/perBsdfScatteringSampleCount (int): Transmission sample count per BSDF sample. Samples per pixel per frame = BSDF Sample Count * Samples Per BSDF Sample.
/rtx/raytracing/subsurface/transmission/screenSpaceFallbackThresholdScale (float): Transmission threshold for screen-space fallback.
/rtx/raytracing/subsurface/transmission/halfResolutionBackfaceLighting (bool): Renders transmission at half resolution to improve performance at the expense of quality.
/rtx/raytracing/subsurface/transmission/ReSTIR/enabled (bool): Enables transmission sample guiding, which may help with complex lighting scenarios.
/rtx/raytracing/subsurface/transmission/denoiser/enabled (bool): Enables transmission denoising.
- Caustics:
/rtx/raytracing/caustics/photonCountMultiplier (int): Factor multiplied by 1024 to compute the total number of photons to generate from each light.
/rtx/raytracing/caustics/photonMaxBounces (int): Maximum number of bounces to compute for each light/photon path.
/rtx/raytracing/caustics/positionPhi (float): Position Phi
/rtx/raytracing/caustics/normalPhi (float): Normal Phi
/rtx/raytracing/caustics/eawFilteringSteps (int): Number of iterations for the denoiser applied to the results of the caustics tracing pass.
- Global Volumetric Effects:
/rtx/raytracing/inscattering/maxAccumulationFrames (int): Number of frames over which samples accumulate temporally. High values reduce noise, but increase lighting update times.
/rtx/raytracing/inscattering/depthSlices (int): Number of layers in the voxel grid to be allocated. High values result in higher precision at the cost of memory and performance.
/rtx/raytracing/inscattering/pixelRatio (int): Higher values result in higher fidelity volumetrics at the cost of performance and memory (depending on the number of depth slices).
/rtx/raytracing/inscattering/sliceDistributionExponent (float): Controls the number (and relative thickness) of the depth slices.
/rtx/raytracing/inscattering/inscatterUpsample (int): Inscatter Upsample
/rtx/raytracing/inscattering/blurSigma (float): Sigma parameter for the Gaussian filter used to spatially blur the voxel grid. 1 = no blur, higher values blur further.
/rtx/raytracing/inscattering/ditheringScale (float): The scale of the noise dithering. Used to reduce banding from quantization on smooth gradients.
/rtx/raytracing/inscattering/spatialJitterScale (float): Spatial jitter scale. 1 = the entire voxel’s volume.
/rtx/raytracing/inscattering/temporalJitterScale (float): Temporal jitter scale
/rtx/raytracing/inscattering/enableFlowSampling (bool): Enable flow sampling
/rtx/raytracing/inscattering/minFlowLayer (int): Minimum flow layer
/rtx/raytracing/inscattering/maxFlowLayer (int): Maximum flow layer
/rtx/raytracing/inscattering/flowDensityScale (float): Flow density scale
/rtx/raytracing/inscattering/flowDensityOffset (float): Flow density offset
- PathTracing Render Settings:
- Pathtracing:
/rtx/pathtracing/spp (int): Total number of samples for each rendered pixel, per frame.
/rtx/pathtracing/totalSpp (int): Maximum number of samples to accumulate per pixel. When this count is reached the rendering stops until a scene or setting change is detected, restarting the rendering process. Set to 0 to remove this limit.
/rtx/pathtracing/adaptiveSampling/enabled (bool): When enabled, a noise value is computed for each pixel; once the threshold level is reached, the pixel is no longer sampled.
/rtx/pathtracing/adaptiveSampling/targetError (float): The noise value threshold after which the pixel is no longer sampled.
/rtx/pathtracing/maxBounces (int): Maximum number of ray bounces for any ray type. Higher values give more accurate results, but worse performance.
/rtx/pathtracing/maxSpecularAndTransmissionBounces (int): Maximum number of ray bounces for specular and transmission.
/rtx/pathtracing/maxVolumeBounces (int): Maximum number of ray bounces for SSS.
/rtx/pathtracing/ptfog/maxBounces (int): Maximum number of bounces for volume scattering within a fog/sky volume.
/rtx/pathtracing/fractionalCutoutOpacity (bool): If enabled, fractional cutout opacity values are treated as a measure of surface ‘presence’, resulting in a translucency effect similar to alpha-blending. Path-traced mode uses stochastic sampling based on these values to determine whether a surface hit is valid or should be skipped.
/rtx/resetPtAccumOnAnimTimeChange (bool): If enabled, rendering is restarted every time the MDL animation time changes.
- Anti-Aliasing:
/rtx/pathtracing/aa/op (int): Sampling pattern used for anti-aliasing. Select between Box (0), Triangle (1), Gaussian (2) and Uniform (3).
/rtx/pathtracing/aa/filterRadius (float): Sampling footprint radius, in pixels, when generating samples with the selected antialiasing sample pattern.
- Firefly Filtering:
/rtx/pathtracing/fireflyFilter/maxIntensityPerSample (float): Clamps the maximum ray intensity for glossy bounces. Can help prevent fireflies, but may result in energy loss.
/rtx/pathtracing/fireflyFilter/maxIntensityPerSampleDiffuse (float): Clamps the maximum ray intensity for diffuse bounces. Can help prevent fireflies, but may result in energy loss.
- Denoising:
/rtx/pathtracing/optixDenoiser/blendFactor (float): A blend factor indicating how much to blend the denoised image with the original non-denoised image. 0 shows only the denoised image; 1.0 shows the image with no denoising applied.
/rtx/pathtracing/optixDenoiser/AOV (bool): If enabled, the OptiX Denoiser will also denoise the AOVs.
- Non-Uniform Volumes:
/rtx/pathtracing/ptvol/transmittanceMethod (int): Choose between Biased Ray Marching (0) or Ratio Tracking (1). Biased ray marching is the ideal option in all cases.
/rtx/pathtracing/ptvol/maxCollisionCount (int): Maximum delta tracking steps between bounces. Increase to more than 32 for highly scattering volumes like clouds.
/rtx/pathtracing/ptvol/maxLightCollisionCount (int): Maximum ratio tracking delta steps for shadow rays. Increase to more than 32 for highly scattering volumes like clouds.
/rtx/pathtracing/ptvol/maxBounces (int): Maximum number of bounces in non-uniform volumes.
- Global Volumetric Effects:
/rtx/pathtracing/ptvol/raySky (bool): Enables an additional medium of Rayleigh-scattering particles to simulate a physically-based sky.
/rtx/pathtracing/ptvol/raySkyScale (float): Scales the size of the Rayleigh sky.
/rtx/pathtracing/ptvol/raySkyDomelight (bool): If a domelight is rendered for the sky color, the Rayleigh atmosphere is applied to the foreground while the background sky color is left unaffected.
- PostProcess Render Settings:
- Tonemapping:
/rtx/post/tonemap/maxWhiteLuminance (float): Maximum HDR luminance value that will map to 1.0 post tonemap.
/rtx/post/tonemap/whiteScale (float): Maximum white value that will map to 1.0 post tonemap.
/rtx/post/tonemap/enableSrgbToGamma (bool): Available with Linear/Reinhard/Modified Reinhard/HejiHableAlu/HableUc2 tone mapping.
/rtx/post/tonemap/cm2Factor (float): Use this factor to adjust for scene units being different from centimeters.
/rtx/post/tonemap/filmIso (float): Simulates the effect on exposure of a camera’s ISO setting.
/rtx/post/tonemap/cameraShutter (float): Simulates the effect on exposure of a camera’s shutter open time.
/rtx/post/tonemap/fNumber (float): Simulates the effect on exposure of a camera’s f-stop aperture.
/rtx/post/tonemap/whitepoint (color3): A color mapped to white on the output.
/rtx/post/tonemap/colorMode (int): Tone mapping color space selector. Select from sRGBLinear (0) or ACEScg (1)
/rtx/post/tonemap/wrapValue (float): Offset
/rtx/post/tonemap/dither (float): Removes banding artifacts in final images.
- Auto Exposure:
/rtx/post/histogram/filterType (int): Select a method to filter the histogram. Options are Median (0) and Average (1).
/rtx/post/histogram/tau (float): How fast automatic exposure compensation adapts to changes in overall light intensity.
/rtx/post/histogram/whiteScale (float): Higher values result in darker images.
/rtx/post/histogram/useExposureClamping (bool): Clamps the exposure to a range within a specified minimum and maximum Exposure Value.
/rtx/post/histogram/minEV (float): Minimum Exposure Value used when exposure clamping is enabled.
/rtx/post/histogram/maxEV (float): Maximum Exposure Value used when exposure clamping is enabled.
- Color Correction:
/rtx/post/colorcorr/saturation (color3): Higher values increase color saturation; lower values desaturate.
/rtx/post/colorcorr/contrast (color3): Higher values increase the contrast of darks/lights and colors.
/rtx/post/colorcorr/gamma (color3): Gamma value in the inverse gamma curve applied before output.
/rtx/post/colorcorr/gain (color3): A factor applied to the color values.
/rtx/post/colorcorr/offset (color3): An offset applied to the color values.
- Color Grading:
/rtx/post/colorgrad/blackpoint (color3): Defines the Black Point value.
/rtx/post/colorgrad/whitepoint (color3): Defines the White Point value.
/rtx/post/colorgrad/contrast (color3): Higher values increase the contrast of darks/lights and colors.
/rtx/post/colorgrad/lift (color3): Color is multiplied by (Lift - Gain) and later Lift is added back.
/rtx/post/colorgrad/gain (color3): Color is multiplied by (Lift - Gain) and later Lift is added back.
/rtx/post/colorgrad/multiply (color3): A factor applied to the color values.
/rtx/post/colorgrad/offset (color3): An offset applied to the color values.
/rtx/post/colorgrad/gamma (color3): Gamma value in the inverse gamma curve applied before output.
- Chromatic Aberration:
/rtx/post/chromaticAberration/strengthR (float): The strength of the distortion applied to the Red channel.
/rtx/post/chromaticAberration/strengthG (float): The strength of the distortion applied to the Green channel.
/rtx/post/chromaticAberration/strengthB (float): The strength of the distortion applied to the Blue channel.
/rtx/post/chromaticAberration/modeR (int): Selects between Radial (0) and Barrel (1) distortion for the Red channel.
/rtx/post/chromaticAberration/modeG (int): Selects between Radial (0) and Barrel (1) distortion for the Green channel.
/rtx/post/chromaticAberration/modeB (int): Selects between Radial (0) and Barrel (1) distortion for the Blue channel.
/rtx/post/chromaticAberration/enableLanczos (bool): Use a Lanczos sampler when sampling the input image being distorted.
- Depth of Field:
/rtx/post/dof/enabled
(bool): Enables the Depth of Field effect; when disabled, camera parameters affecting Depth of Field are ignored.
/rtx/post/dof/subjectDistance
(float): Objects at this distance from the camera will be in focus.
/rtx/post/dof/focalLength
(float): The focal length of the lens (in mm). The focal length divided by the f-stop is the aperture diameter.
/rtx/post/dof/fNumber
(float): F-stop (aperture) of the lens. Lower f-stop numbers decrease the distance range from the Subject Distance where objects remain in focus.
/rtx/post/dof/anisotropy
(float): Anisotropy of the lens. A value of -0.5 simulates the depth of field of an anamorphic lens.
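As a quick sanity check of the relationship noted for /rtx/post/dof/focalLength above (focal length divided by f-stop gives the aperture diameter), here is a minimal sketch; the helper name is illustrative only and not part of the Replicator API:

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Aperture diameter implied by the DoF focalLength and fNumber settings."""
    return focal_length_mm / f_number

# A 50 mm lens at f/2.0 has a 25 mm aperture; raising fNumber shrinks it,
# which widens the in-focus range around the Subject Distance.
print(aperture_diameter_mm(50.0, 2.0))  # 25.0
print(aperture_diameter_mm(50.0, 8.0))  # 6.25
```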
- Motion Blur (RealTime render mode):
/rtx/post/motionblur/maxBlurDiameterFraction
(float): The fraction of the largest screen dimension to use as the maximum motion blur diameter.
/rtx/post/motionblur/exposureFraction
(float): Exposure time fraction in frames (1.0 = one frame duration) to sample.
/rtx/post/motionblur/numSamples
(int): Number of samples to use in the filter. A higher number improves quality at the cost of performance.
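Because /rtx/post/motionblur/exposureFraction is expressed in frame durations (1.0 = one full frame), converting a physical shutter time to a setting value is a single multiplication. This hypothetical convenience helper is not part of the API, just a sketch of the unit conversion:

```python
def exposure_fraction(shutter_time_s: float, fps: float) -> float:
    """Convert a shutter time in seconds to a motion-blur exposure fraction."""
    return shutter_time_s * fps

# A 180-degree shutter at 24 fps exposes for 1/48 s, i.e. half a frame.
print(exposure_fraction(1 / 48, 24.0))  # 0.5
```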
- FFT Bloom:
/rtx/post/lensFlares/flareScale
(float): Overall intensity of the bloom effect.
/rtx/post/lensFlares/cutoffPoint
(double3): A cutoff color value to tune the radiance range for which Bloom will have any effect.
/rtx/post/lensFlares/cutoffFuzziness
(float): Controls how sharply the cutoff is applied; a smooth transition between 0 and the original values is used.
/rtx/post/lensFlares/alphaExposureScale
(float): Alpha channel intensity of the bloom effect.
/rtx/post/lensFlares/energyConstrainingBlend
(bool): Constrains the total light energy generated by bloom.
/rtx/post/lensFlares/physicalSettings
(bool): Choose between a Physical or Non-Physical bloom model.
/rtx/post/lensFlares/blades
(int): The number of physical blades of a simulated camera diaphragm causing the bloom effect.
/rtx/post/lensFlares/apertureRotation
(float): Rotation of the camera diaphragm.
/rtx/post/lensFlares/sensorDiagonal
(float): Diagonal of the simulated sensor.
/rtx/post/lensFlares/sensorAspectRatio
(float): Aspect ratio of the simulated sensor; a non-square ratio results in the bloom effect stretching in one direction.
/rtx/post/lensFlares/fNumber
(float): Increases/Decreases the sharpness of the bloom effect.
/rtx/post/lensFlares/focalLength
(float): Focal length of the lens modeled to simulate the bloom effect.
/rtx/post/lensFlares/haloFlareRadius
(double3): Controls the size of each RGB component of the halo flare effect.
/rtx/post/lensFlares/haloFlareFalloff
(double3): Controls the falloff of each RGB component of the halo flare effect.
/rtx/post/lensFlares/haloFlareWeight
(float): Controls the intensity of the halo flare effect.
/rtx/post/lensFlares/anisoFlareFalloffY
(double3): Controls the falloff of each RGB component of the anisotropic flare effect in the Y direction.
/rtx/post/lensFlares/anisoFlareFalloffX
(double3): Controls the falloff of each RGB component of the anisotropic flare effect in the X direction.
/rtx/post/lensFlares/anisoFlareWeight
(float): Controls the intensity of the anisotropic flare effect.
/rtx/post/lensFlares/isotropicFlareFalloff
(double3): Controls the falloff of each RGB component of the isotropic flare effect.
/rtx/post/lensFlares/isotropicFlareWeight
(float): Controls the intensity of the isotropic flare effect.
- TV Noise Grain:
/rtx/post/tvNoise/grainSize
(float): The size of the film grains.
/rtx/post/tvNoise/enableScanlines
(bool): Emulates a Scanline Distortion typical of old televisions.
/rtx/post/tvNoise/scanlineSpread
(float): How wide the Scanline distortion will be.
/rtx/post/tvNoise/enableScrollBug
(bool): Emulates sliding typical on old televisions.
/rtx/post/tvNoise/enableVignetting
(bool): Blurred darkening around the screen edges.
/rtx/post/tvNoise/vignettingSize
(float): Controls the size of the vignette region.
/rtx/post/tvNoise/vignettingStrength
(float): Controls the intensity of the vignette.
/rtx/post/tvNoise/enableVignettingFlickering
(bool): Enables a slight flicker effect on the vignette.
/rtx/post/tvNoise/enableGhostFlickering
(bool): Introduces a blurred flicker to help emulate an old television.
/rtx/post/tvNoise/enableWaveDistortion
(bool): Introduces a Random Wave Flicker to emulate an old television.
/rtx/post/tvNoise/enableVerticalLines
(bool): Introduces random vertical lines to emulate an old television.
/rtx/post/tvNoise/enableRandomSplotches
(bool): Introduces random splotches typical of an old dirty television.
/rtx/post/tvNoise/enableFilmGrain
(bool): Enables a film grain effect to emulate the graininess in high-speed (high-ISO) film.
/rtx/post/tvNoise/grainAmount
(float): The intensity of the film grain effect.
/rtx/post/tvNoise/colorAmount
(float): The amount of color offset each grain will be allowed to use.
/rtx/post/tvNoise/lumAmount
(float): The amount of offset in luminance each grain will be allowed to use.
- Reshade:
/rtx/reshade/presetFilePath
(string): The path to a preset .ini file containing the Reshade preset to use.
/rtx/reshade/effectSearchDirPath
(string): The path to a directory containing the Reshade effect files that the preset can reference.
/rtx/reshade/textureSearchDirPath
(string): The path to a directory containing the Reshade texture files that the preset can reference.
- Replicator Settings:
/omni/replicator/captureMotionBlur
(bool): Capture a motion blur effect. In RealTime render mode, this is equivalent to enabling /rtx/post/motionblur/enabled. In PathTracing render mode, a timestep is split into N subframes, where N is equal to /rtx/pathtracing/totalSpp.
/omni/replicator/pathTracedMotionBlurSubSamples
(float): Number of sub-samples to render if in PathTracing render mode and motion blur is enabled.
/omni/replicator/totalRenderProductPixels
(int): Number of total pixels created when calling rep.create.render_product. Used to calculate maxSamplePerLaunch inside orchestrator.py.
- Parameters
setting – Carb setting to modify.
value – Value to set the carb setting to.
Example
>>> import omni.replicator.core as rep
>>> # Randomize film grain post process effect
>>> tv_noise = rep.settings.carb_settings("/rtx/post/tvNoise/enabled", True)
>>> with rep.trigger.on_frame():
...     flicker = rep.settings.carb_settings(
...         "/rtx/post/tvNoise/enableGhostFlickering",
...         rep.distribution.choice([True, False]),
...     )
...     grain_size = rep.settings.carb_settings(
...         "/rtx/post/tvNoise/grainSize",
...         rep.distribution.uniform(1.5, 5.0),
...     )
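The setting names used throughout this section are slash-separated paths into a hierarchical carb settings tree. As a rough mental model only (this toy class is NOT the real carb.settings API), a path like "/rtx/post/tvNoise/grainSize" resolves the way keys do in a nested dictionary:

```python
class SettingsStore:
    """Toy stand-in for a hierarchical settings tree keyed by '/'-paths."""

    def __init__(self):
        self._root = {}

    def set(self, path: str, value):
        # Walk/create intermediate nodes, then assign the leaf value.
        keys = path.strip("/").split("/")
        node = self._root
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value

    def get(self, path: str, default=None):
        # Walk the tree; return `default` for any missing segment.
        node = self._root
        for key in path.strip("/").split("/"):
            if not isinstance(node, dict) or key not in node:
                return default
            node = node[key]
        return node

store = SettingsStore()
store.set("/rtx/post/tvNoise/grainSize", 2.5)
print(store.get("/rtx/post/tvNoise/grainSize"))  # 2.5
```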
- omni.replicator.core.settings.set_render_pathtraced(samples_per_pixel: Union[int, ReplicatorItem] = 64) None
Set up the PathTraced render mode.
- Parameters
samples_per_pixel – The total number of samples to take for each pixel per frame. Valid range: [1, inf).
Example
>>> import omni.replicator.core as rep
>>> rep.settings.set_render_pathtraced(samples_per_pixel=512)
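To build intuition for why large samples_per_pixel values get expensive, note that the per-frame sample budget scales with the total render-product pixel count (cf. /omni/replicator/totalRenderProductPixels above). A back-of-the-envelope sketch, not the actual scheduler math:

```python
def pathtraced_samples_per_frame(samples_per_pixel: int, resolutions) -> int:
    """Total path-tracing samples per frame across all render products."""
    total_pixels = sum(width * height for width, height in resolutions)
    return samples_per_pixel * total_pixels

# One 1920x1080 render product at 64 spp already needs ~133M samples per frame.
print(pathtraced_samples_per_frame(64, [(1920, 1080)]))  # 132710400
```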
- omni.replicator.core.settings.set_render_rtx_realtime(antialiasing: Union[str, ReplicatorItem] = 'FXAA') None
Set up the RTX Realtime render mode.
- Parameters
antialiasing – Antialiasing algorithm. Select from [Off, FXAA, DLSS, TAA, DLAA]. FXAA is recommended for non-sequential data generation as it does not accumulate samples across frames.
Example
>>> import omni.replicator.core as rep
>>> rep.settings.set_render_rtx_realtime(antialiasing="DLSS")