Omniverse Replicator is a highly extensible framework built on a scalable Omniverse platform that enables physically accurate 3D synthetic data generation to accelerate training and performance of AI perception networks.
Omniverse Replicator provides deep learning engineers and researchers with a set of tools and workflows to bootstrap model training, improve the performance of existing models, or develop new types of models that were previously not possible due to a lack of datasets or required annotations. It allows users to easily import simulation-ready assets and build contextually aware 3D scenes, enabling a data-centric approach through new types of datasets and annotations that were previously unavailable.
Built on open standards such as Universal Scene Description (USD), PhysX, and Material Definition Language (MDL), Omniverse Replicator can be easily integrated into or connected to existing pipelines via its extensible Python APIs.
Omniverse Replicator is built on the highly extensible OmniGraph architecture, which allows users to easily extend the built-in functionality to create datasets for their own needs. It provides an extensible registry of annotators and writers to address custom requirements around the types of annotations and output formats needed to train AI models. In addition, extensible randomizers allow the creation of programmable datasets that enable a data-centric approach to training these models.
The Omniverse Replicator Developer Overview page also covers high level aspects of getting started developing with Synthetic Data and Replicator.
Theory behind training with synthetic data
A typical process for training a deep neural network for perception tasks involves manually collecting data (images, in most cases), followed by a manual process of annotating those images and, optionally, augmenting them. The images are then converted into a format usable by the DNN, and the network is trained for the perception task. Hyperparameter tuning or changes to the network architecture are typical steps for optimizing network performance. Analysis of model performance may suggest changes to the dataset, but those changes may require another cycle of manual data collection and annotation. This is an expensive, manual process.
Synthetic data generation enables large-scale training data generation with accurate annotations in a cost-effective manner. Furthermore, synthetic data generation also addresses challenges related to long-tail anomalies, bootstraps model training where no training data is available, and supports online reinforcement learning.
Some more difficult perception tasks require annotations of images that are extremely difficult to do manually (e.g. images with occluded objects). Programmatically generated synthetic data can address this very effectively since all generated data is perfectly labeled. The programmatic nature of data generation also allows the creation of non-standard annotations and indirect features that can be beneficial to DNN performance.
As described above, synthetic data generation has many advantages; however, there is a set of challenges that must be addressed for it to be effective.
Synthetic data sets are generated using simulation; hence it is critical that we close the gap between the simulation and real world. This gap is called the domain gap, which can be divided into two parts:
The appearance gap is the set of pixel-level differences between real and synthetic images. These differences can result from differences in object detail and materials, or, in the case of synthetic data, from limitations of the rendering system used.
The content gap refers to the difference between the domains. This includes factors like the number of objects in the scene, the diversity in type and placement, and similar contextual information.
A critical tool for overcoming these domain gaps is domain randomization. Domain randomization increases the size of the domain that we generate for a synthetic dataset to try to ensure that we include the range that best matches reality including long tail anomalies. By generating a wider distribution of data than we might find in reality, a neural network may be able to learn to better generalize across the full scope of the problem.
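The idea behind domain randomization can be sketched in plain Python. The parameter names and ranges below are illustrative assumptions, not Replicator APIs: the synthetic sampling ranges are chosen deliberately wider than the assumed real-world ranges, so that reality, including its long tail, falls inside the generated domain.

```python
import random

# Hypothetical scene-parameter ranges. The synthetic range is deliberately
# wider than the assumed real-world range (domain randomization).
REAL_LIGHT_INTENSITY = (800, 1200)    # assumed range seen in real data
SYNTH_LIGHT_INTENSITY = (200, 5000)   # deliberately over-covers reality

def sample_scene_params(rng: random.Random) -> dict:
    """Sample one randomized scene configuration."""
    return {
        "light_intensity": rng.uniform(*SYNTH_LIGHT_INTENSITY),
        "object_count": rng.randint(1, 50),
        "camera_height_m": rng.uniform(0.5, 10.0),
        "hue_shift_deg": rng.uniform(-180, 180),
    }

rng = random.Random(42)
dataset_params = [sample_scene_params(rng) for _ in range(1000)]

# Because the synthetic distribution spans and exceeds the assumed real
# range, every real-world configuration lies inside the sampled domain.
covered = sum(
    REAL_LIGHT_INTENSITY[0] <= p["light_intensity"] <= REAL_LIGHT_INTENSITY[1]
    for p in dataset_params
)
print(f"{covered} of 1000 samples fall inside the assumed real-world range")
```

In a real pipeline, these sampled parameters would drive scene construction and rendering; Replicator exposes the same pattern through its `rep.distribution` sampling utilities.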
The appearance gap can be further addressed with high fidelity 3D assets and ray-tracing or path-tracing based rendering, using physically based materials such as those defined with the MDL material language. Validated sensor models and domain randomization of their parameters can also help here.
On the content side, a large pool of assets relevant to the scene is needed. Omniverse provides a wide variety of connectors to other 3D applications. Developers can also write tools to generate diverse scenes applicable to their specific domain.
These challenges introduce a layer of complexity to training with synthetic data, since it is not possible to know if the randomizations done in the synthetic dataset were able to encapsulate the real domain. To successfully train a network with synthetic data, the network has to be tested on a real dataset. To address any model performance issues, we adopt a data-centric approach as a first step where we tune our dataset before attempting to change model architecture or hyperparameters.
This means that the process of training with synthetic data is highly iterative. Replicator allows for this kind of data-centric AI training by converting simulated worlds into a set of learnable parameters. Throughout the training, the scene can be modified, randomized, and the distribution of the assets can be changed iteratively.
Replicator is composed of six components that enable you to generate synthetic data:
Semantics Schema Editor: Semantic annotations (data "of interest" pertaining to a given mesh) are required to use the synthetic data extension properly. These annotations tell the extension which objects in the scene need bounding boxes, pose estimations, and so on. The Semantics Schema Editor provides a way to apply these annotations to prims on the stage through a UI.
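Semantics can also be applied programmatically. A minimal sketch, to be run inside Omniverse Kit (for example, in the Script Editor); the prim path pattern below is a placeholder assumption about your stage layout:

```python
import omni.replicator.core as rep

# Find prims by path pattern and attach a semantic label to them, so
# annotators can produce bounding boxes, segmentation, etc. for these objects.
# "/World/Cars/*" is a hypothetical path for illustration.
cars = rep.get.prims(path_pattern="/World/Cars/*")
with cars:
    rep.modify.semantics([("class", "car")])
```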
Visualizer: The Replicator visualizer enables you to visualize the semantic labels for 2D/3D bounding boxes, normals, depth, and more.
Randomizers: Replicator’s randomization tools allow developers to easily create domain randomized scenes, quickly sampling from assets, materials, lighting, and camera positions.
Omni.syntheticdata: omni.syntheticdata is the lowest-level component of the Replicator software stack, and it will ship as a built-in extension in all future versions of the Omniverse Kit SDK. The omni.syntheticdata extension provides low-level integration with the RTX renderer and the OmniGraph computation graph system. This is the component that powers the computation graphs for Replicator's ground-truth extraction annotators, passing Arbitrary Output Variables (AOVs) from the renderer through to the annotators.
Annotators: The annotation system itself ingests the AOVs and other output from the omni.syntheticdata extension to produce precisely labeled annotations for DNN training.
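A sketch of using an annotator directly, to be run inside Omniverse Kit; `bounding_box_2d_tight` is one of the built-in annotators:

```python
import omni.replicator.core as rep

# A render product ties a camera to an output resolution.
camera = rep.create.camera()
render_product = rep.create.render_product(camera, (512, 512))

# Fetch a built-in annotator from the registry and attach it.
bbox_annot = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_tight")
bbox_annot.attach([render_product])

# After stepping generation (e.g. rep.orchestrator.step()), the labeled
# data for the latest frame can be retrieved with bbox_annot.get_data().
```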
Writers: Writers process the images and other annotations from the annotators and produce DNN-specific data formats for training. Writers can output to local storage or, over the network, to cloud-based storage backends such as SwiftStack; in the future we will provide backends for live on-GPU training, allowing generated data to stay on the GPU and avoiding additional I/O entirely.
Throughout generation of a dataset, the most common workflow is to randomize a scene, select your annotators, and then write to your desired format. However, if you need more customization, you have access to omni.syntheticdata.
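The workflow above can be sketched end to end in a single script, to be run inside Omniverse Kit (for example, via the Script Editor or the Replicator menu). The scene contents, frame count, and output directory below are illustrative choices:

```python
import omni.replicator.core as rep

with rep.new_layer():
    # Scene: a camera, its render product, and a semantically labeled cube.
    camera = rep.create.camera(position=(0, 0, 1000))
    render_product = rep.create.render_product(camera, (1024, 1024))
    cube = rep.create.cube(semantics=[("class", "cube")])

    # Randomize: give the cube a new pose on every generated frame.
    with rep.trigger.on_frame(num_frames=10):
        with cube:
            rep.modify.pose(
                position=rep.distribution.uniform(
                    (-200, -200, -200), (200, 200, 200)),
                rotation=rep.distribution.uniform(
                    (0, -180, 0), (0, 180, 0)),
            )

    # Write: RGB images plus tight 2D bounding boxes to local storage.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_output",
                      rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])
```

Generation is then started with `rep.orchestrator.run()` or from the Replicator menu in the Kit UI.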
API documentation and Changelogs
Python API Documentation for Omniverse Replicator can be found here.
Changelogs for Omniverse Replicator are available at this link.
Replicator Examples on Github
We now offer convenient Replicator Examples on GitHub. There are snippets, full scripts, and USD scenes where content examples are needed. This repo will grow as we add to it.
Courses and Videos
To help you get started with Replicator, we have created a handful of hands-on tutorials.
- Getting started with Replicator
- Core functionalities - "Hello World" of Replicator
- Camera Examples
- Running Replicator headlessly
- Adding semantics with Semantics Schema Editor and programmatically
- Interactive live visualization
- Randomizers examples
- Data Augmentation
- Replicator Materials
- Annotating with Transparent Materials
- Annotators information
- Visualizing output folder with annotated data programmatically
- Using existing 3D assets with Replicator
- Using Replicator with a fully developed scene
- Using physics with Replicator
- Randomizing appearance, placement and orientation of existing 3D assets with a built-in writer
- Writer Examples
- Create a custom writer
- Distribution Examples
- Rendering with Subframes
- I/O Optimization Guide
- Advanced Scattering
- Working with Layers
- Replicator YAML
- Replicator YAML Manual and Syntax Guide
Replicator On Cloud
Instructions for setting up Replicator on the cloud are given below.
Materials or textures will sometimes not be loaded in time for capture when in RTX - Real-Time mode. If this occurs, you can increase the interval between captures by raising the /omni/replicator/RTSubframes setting (default: 3). To set it in Python, use carb.settings.get_settings().set("/omni/replicator/RTSubframes", <new value>). Similarly, capture speed can be increased when no materials are randomized by setting the value to its minimum of 1.
Errors in annotator visualization and data generation may occur when running on a multi-GPU system. To disable multi-GPU, launch with the --/renderer/multiGpu/enabled=false flag.
In scenes with a large number of 3D bounding boxes, the visualizer flickers due to the rendering order of the boxes. This rendering issue is purely aesthetic and will not have any effect when writing the data.