DeepTag is a set of services and clients that allow Omniverse users to tag their 3D models automatically using AI, add additional tags manually, and search through their entire content with tags. The system relies on two services that are designed to run on a Kubernetes cluster:



Deeptag Inference Service

Classification of existing assets using a pre-trained Deep Learning (DL) model.

Deeptag Search Service

Indexing of assets in Omniverse and efficient retrieval of objects given an input tag.

Each of these services is briefly described below.

Registering and Access

Omniverse Deeptag service containers and Helm charts are available to members of the Omniverse Early Access Program.

To register, please proceed to EAP Registration and follow the steps on the page to join the NVIDIA Developer Program, then submit your application to the Omniverse Early Access Program.

You will receive a notification email when you join the NVIDIA Developer Program, and once your application to the EAP is approved, you will receive another notification email with further instructions.

Deeptag Inference Service

The Inference Service is designed to run in the background on a Kubernetes cluster and automatically generate tags for assets found in Omniverse. More precisely, tags are generated for 3D assets in one of the following cases:


A full Omniverse re-scan performed on the first initialization of the service


Incremental asset modification updates received as notifications from Omniverse

When a modification update is received from Omniverse (e.g. an asset was uploaded, copied, or modified), the inference service first checks the cache database to see whether an asset with the same hash value has already been processed. If so, the service directly updates the tags for the given asset with the ones from the cache; otherwise, the asset is added to the cache miss queue. If the number of assets in the cache miss queue exceeds a predefined amount, or a timeout from the first insertion expires, the service spawns an inference job that generates tags for all the assets in the queue. These tags are then pushed to Omniverse.
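The cache-hit/miss flow described above can be sketched as follows. This is an illustrative model only, not the actual DeepTag implementation: the class name, hashing scheme, batch size, and timeout values are assumptions made for the example.

```python
import hashlib
import time

class CacheService:
    """Hypothetical sketch of the caching logic described above."""

    def __init__(self, batch_size=16, timeout_s=60.0):
        self.cache = {}            # asset hash -> previously generated tags
        self.miss_queue = []       # assets awaiting GPU inference
        self.first_insert = None   # time of the oldest queued asset
        self.batch_size = batch_size
        self.timeout_s = timeout_s

    def on_asset_update(self, asset_id, asset_bytes):
        """Handle a modification notification for one asset."""
        digest = hashlib.sha256(asset_bytes).hexdigest()
        if digest in self.cache:
            # Cache hit: reuse tags computed for an identical asset.
            return {asset_id: self.cache[digest]}
        # Cache miss: queue the asset for the batched inference job.
        self.miss_queue.append((asset_id, digest))
        if self.first_insert is None:
            self.first_insert = time.monotonic()
        if (len(self.miss_queue) >= self.batch_size
                or time.monotonic() - self.first_insert >= self.timeout_s):
            return self._spawn_inference_job()
        return {}

    def _spawn_inference_job(self):
        # Stand-in for launching the GPU inference job on the queued batch.
        results = {}
        for asset_id, digest in self.miss_queue:
            tags = ["untagged"]  # placeholder for model-generated tags
            self.cache[digest] = tags
            results[asset_id] = tags
        self.miss_queue.clear()
        self.first_insert = None
        return results
```

The key design point the sketch captures is that hashing and queueing are cheap CPU work, while the expensive GPU inference only runs once a batch is full or stale.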

The complete overview of the service workflow is schematically illustrated in the figure below:

DeepTag inference service overview

As we can see from the figure above, the service is divided into two parts: the Cache Service and the Inference Job. This split makes efficient use of system resources: the caching service is very lightweight and only uses the CPU, while the inference job renders and classifies assets from Omniverse and requires a GPU for processing. The latter is only spawned when enough samples have accumulated in the cache miss queue.

The inference job classification workflow is schematically illustrated in the figure below.

DeepTag classification job overview

The Inference Service has a plugin-based structure, where different plugins process different types of assets in Omniverse (.usd, .mdl, .jpg, etc.). The currently supported plugins are listed below:

Plugin type                                      Supported formats

Rendering-based classification of 3D assets      .usd, .usda, .usdc
(rendering is performed using Omniverse Kit)
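A plugin structure keyed by file extension might look like the sketch below. The registry, decorator, and plugin function names here are assumptions for illustration, not DeepTag's actual plugin API.

```python
import os

# Illustrative plugin registry: maps a file extension to a classifier.
PLUGINS = {}

def register_plugin(extensions):
    """Register a classifier function for the given file extensions."""
    def wrap(func):
        for ext in extensions:
            PLUGINS[ext] = func
        return func
    return wrap

@register_plugin([".usd", ".usda", ".usdc"])
def classify_usd(path):
    # Placeholder for rendering the asset (e.g. with Omniverse Kit)
    # and running the pre-trained classifier on the rendered views.
    return ["3d-asset"]

def classify(path):
    """Dispatch an asset to the plugin that handles its format."""
    ext = os.path.splitext(path)[1].lower()
    plugin = PLUGINS.get(ext)
    if plugin is None:
        return []  # unsupported format: no tags generated
    return plugin(path)
```

New asset types can then be supported by registering another plugin, without touching the dispatch logic.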

Deeptag Search Service

Similarly to the Deeptag Inference Service, the Search Service runs in the background and provides the following functionality:


Indexing all the assets in Omniverse on the first execution of the service, then updating the index database on tag updates


Searching for assets given an input tag

Search functionality is integrated with, and extends, the results produced by the Nucleus Search Service. An example of a search query is illustrated below:

DeepTag search query example
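One plausible way to combine a tag index with Nucleus results is sketched below. The in-memory index layout and function signature are assumptions made for the example, not the actual Search Service internals.

```python
def search(tag, tag_index, nucleus_results):
    """Return Nucleus results extended with assets matching the tag.

    tag_index maps a lower-cased tag to the set of asset paths that
    carry it; nucleus_results is the ordered list returned by the
    Nucleus Search Service for the same query.
    """
    tagged = tag_index.get(tag.lower(), set())
    # Preserve the Nucleus ordering, then append tag-only hits.
    merged = list(nucleus_results)
    merged += sorted(tagged - set(nucleus_results))
    return merged
```

The sketch reflects the "integrated and extends" behavior: name-based hits from Nucleus come first, and tag-based hits that Nucleus missed are appended.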



Prerequisites

  • Hardware: An NGC-Ready system with one or more GPUs.

  • Software: Ubuntu 18.04 Server or Ubuntu 20.04 Server (no graphical UI)

Installing DeepTag on a single machine (bare metal)

You will need a hardware machine as specified in Prerequisites.

Install EGX Ubuntu Server

You can select among several different versions of the EGX Stack for Ubuntu Server.

We tested the following configurations:

  • Ubuntu 20.04 (suggested) v3.1

    • a minor change is needed, as you’ll have to install GPU Operator 1.6.2 instead of 1.2.0

  • Ubuntu 18.04 v3.1

    • a minor change is needed, as you’ll have to install GPU Operator 1.6.2 instead of 1.2.0

Install Nucleus (Optional)

The same machine that you use can also host your Omniverse Nucleus Stack. Please refer to the instructions in the Enterprise Nucleus Server section of the Omniverse Nucleus Documentation.

Install DeepTag

You will need to install the following Helm charts:

Omniverse DeepTag Inference Service

Omniverse DeepTag Search Service

(Optional) Omniverse DeepTag Metrics

Users can run multiple instances of DeepTag on a single server or cluster; however, to avoid conflicts, the instances need to be configured to work on different Nucleus instances, or to generate tags in different namespaces on a single Nucleus instance.

The DeepTag Metrics service is configured to collect metrics from all the services running on a single server or cluster. It deploys Prometheus and Grafana services and automatically collects metrics from the existing Deeptag services (each service publishes its metrics on port 8000). Users may decide to use their own Prometheus server; however, this requires additional configuration.


Docker cannot access the network after Kubernetes is installed

When installing Docker and Kubernetes, some IP address ranges will be reserved for Docker and Kubernetes. In Kubernetes, the reserved range is specified in the kubeadm init parameters. For example, if your local network uses addresses in the range 192.168.[0-191].x, we suggest initializing Kubernetes with this command:

sudo kubeadm init --pod-network-cidr=
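Before running kubeadm init, it is worth checking that the pod network CIDR you intend to pass does not overlap your local network. A minimal sketch using Python's standard ipaddress module (the concrete ranges below are examples only, not a recommendation for your network):

```python
import ipaddress

def overlaps(local_cidr, pod_cidr):
    """Return True if the pod network CIDR overlaps the local network."""
    local = ipaddress.ip_network(local_cidr)
    pods = ipaddress.ip_network(pod_cidr)
    return local.overlaps(pods)

# Example: a local /17 does not collide with pods placed in the upper /18,
# while a pod range carved out of the local /8 would.
print(overlaps("192.168.0.0/17", "192.168.192.0/18"))  # False
print(overlaps("10.0.0.0/8", "10.244.0.0/16"))         # True
```

If the check reports an overlap, pick a pod network CIDR outside the ranges your LAN uses before initializing the cluster.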