At the core of Omniverse is a set of fundamental services known as Omniverse Nucleus, which allows a variety of Omniverse-enabled client applications (Apps, Connectors, and others) to share and modify authoritative representations of virtual worlds.
Ultimately, Nucleus sits on a network somewhere, with clients connected to it, sharing the same authoritative world state.
Nucleus allows live syncing. This follows a publish/subscribe model: multiple Clients connect to Nucleus in such a manner that once one of those Clients submits (publishes) a change, every other Client (subscriber) immediately receives that change.
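The publish/subscribe flow can be sketched conceptually as follows. This is an illustrative model of the pattern only, not Nucleus's actual API; all class and method names are hypothetical.

```python
# Minimal conceptual sketch of the publish/subscribe pattern used for
# live syncing. All names here are hypothetical, not the Nucleus API.

class Channel:
    """Fan-out channel: every published change reaches all subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, change):
        # Deliver the change to every subscriber; in a real system this
        # would reach every connected Client sharing the world state.
        for callback in self._subscribers:
            callback(change)


# Two "clients" subscribe to the same authoritative world state.
received_a, received_b = [], []
world = Channel()
world.subscribe(received_a.append)
world.subscribe(received_b.append)

# One client publishes a change; both subscribers receive it.
world.publish({"prim": "/World/Car", "attr": "xform", "value": (1, 0, 0)})
```

In a real deployment the subscribers live in separate processes on the network, but the fan-out shape of the delivery is the same.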
Nucleus represents assets in a hierarchical, tree-like structure. To an end user it looks just like a familiar file tree - with directories and files inside them.
Files can be uploaded, downloaded, and moved around; directories created, deleted, and listed; permissions can be managed as desired; etc.
Nucleus utilizes a single file tree similar to Unix systems, with a “path” to a file being a forward-slash (/) separated string of “nodes”: for example, a (hypothetical) path such as /Projects/Vehicles/car.usd, and so on.
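Because Nucleus paths follow the Unix convention, standard POSIX path tooling applies to them. A quick illustration (the path below is hypothetical):

```python
from pathlib import PurePosixPath

# A hypothetical Nucleus path; forward slashes separate the "nodes".
path = PurePosixPath("/Projects/Vehicles/car.usd")

print(path.parts)   # ('/', 'Projects', 'Vehicles', 'car.usd')
print(path.parent)  # /Projects/Vehicles
print(path.suffix)  # .usd
```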
Though Nucleus itself does not care what kind of files are in there (one can upload anything to Nucleus as if it were a fileserver), the most common data formats used across Omniverse are USD (Universal Scene Description), MDL (Material Definition Language), various kinds of images, and similar.
Ultimately, Nucleus is a collection of services exposed on a network that Client applications connect to. Note that Clients can be desktop, user-operated applications (e.g., CAD or content-creation software) as well as microservices: automated processes (rendering, content manipulation and generation, or whatever else one might desire).
Within Nucleus, there are a number of components, each one communicating with multiple others. The most important of those communications are shown on the diagram; some lines are omitted for clarity. One example of such an omission is the Tagging Service obtaining an authentication ticket for later communication with Nucleus Core.
Externally, some of those services expose API endpoints (open ports) for Clients to talk to them directly. The exact ports and endpoints depend on your distribution.
At the center of Nucleus is its Core: a set of services for storing and retrieving data (files).
Nucleus Core is exposed to other parts of Omniverse via its API, over HTTP and WebSocket connections.
On the backend, it utilizes a data directory configured by an administrator to store its data. This directory is opaque to the user and does not mirror the actual file tree in Nucleus.
Nucleus Core consists of the following components:
Nucleus Core API Responder: the primary component exposing Nucleus Core API
Nucleus Core LFT (Large File Transfer) Service: in Enterprise deployments, exposes an HTTP endpoint for upload and download of files of larger sizes. LFT Service can be scaled to run more than one instance
In Enterprise Nucleus Server installations, Core also includes some miscellanea for exposing metrics, processing and rotating logs, etc.
Discovery service rides “alongside” Nucleus and enables other services to register and advertise themselves to Clients.
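Conceptually, a discovery service is a registry that services write to and Clients read from. A minimal sketch of that idea (the names, endpoint shape, and port number are hypothetical, not the actual Discovery API):

```python
# Conceptual sketch of service discovery: services register their
# endpoints under well-known names, and Clients look them up by name.
# All names and values below are hypothetical.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint):
        """A service advertises itself under a well-known name."""
        self._services[name] = endpoint

    def lookup(self, name):
        """A Client asks where a named service lives; None if absent."""
        return self._services.get(name)


registry = ServiceRegistry()
registry.register("tagging", {"host": "nucleus.example.com", "port": 3020})
print(registry.lookup("tagging"))
```

The real Discovery service additionally handles things like liveness and advertisement over the network, but the register/lookup shape is the essence.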
Auth and User Management Service
The name of this service speaks for itself; its configuration and operation are described more fully in its own section.
Other services included in Nucleus are:
Search Service: indexes items in Nucleus, and provides API for searching them
Thumbnail Service: creates thumbnails for data formats it supports
Tagging Service: exposes API to allow users to tag files in Nucleus file tree
Client Assumptions and Expectations
All Omniverse components that connect to Nucleus make the following assumptions:
Nucleus Core API is available on port
Nucleus Discovery is available on the same host on port
If an Enterprise Nucleus Server is
deployed with an SSL gateway (ingress) in front of it, Clients will
make the following assumptions when told to connect to a given host (where host is the desired DNS hostname of the Nucleus Server):
If a port is specified when connecting, connections will be made via HTTPS and WSS on that host and port
If no port is specified, an HTTP connection to port 80 and an HTTPS connection to port 443 will be attempted. The former is necessary to support redirects to the correct FQDNs in sophisticated setups.
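The assumptions above translate into roughly the following endpoint-selection logic on the Client side. This is a sketch of the decision rules as described here, not Client source code:

```python
def candidate_endpoints(host, port=None):
    """Return the (scheme, host, port) combinations a Client will try.

    - With an explicit port: HTTPS (and WSS) on that host and port.
    - Without a port: plain HTTP on 80 (needed for redirects to the
      correct FQDN in sophisticated setups) and HTTPS on 443.
    """
    if port is not None:
        return [("https", host, port)]
    return [("http", host, 80), ("https", host, 443)]


print(candidate_endpoints("nucleus.example.com", 8443))
print(candidate_endpoints("nucleus.example.com"))
```

Keeping this logic in mind helps explain why, for example, blocking port 80 on a gateway that relies on redirects can break Clients that were given no explicit port.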
Understanding the above assumptions is of utmost importance when planning Nucleus deployments. Things can be moved around and configured as desired; however, it is possible to create an unworkable configuration if Client Assumptions are broken.
Note that we talk about ports and HTTP/WebSockets in the architectural diagram and its elaborations above. This is by design.
Nucleus itself does not concern itself with SSL; it simply exposes its services on specific, configurable ports.
If transport security is desired, Enterprise Nucleus Server allows SSL to be implemented via a standard gateway we call the Ingress, a basic HTTPS-termination endpoint that acts as a reverse proxy. Clients talk SSL to the Ingress Gateway, and the Ingress uses plain HTTP/WebSockets to talk to Nucleus services.
The term Ingress will be very familiar to Kubernetes experts; we use it in precisely the same meaning as Kubernetes does, and it works in the same manner.
SSL setup is sophisticated and warrants its own document.
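As an illustration only, a TLS-terminating reverse proxy of this shape could be sketched with nginx as follows. The hostname, ports, and certificate paths are placeholders, not a supported or recommended configuration:

```nginx
# Hypothetical TLS-terminating ingress in front of a Nucleus service.
# All names, ports, and paths below are placeholders.
server {
    listen 443 ssl;
    server_name nucleus.example.com;

    ssl_certificate     /etc/ssl/nucleus.crt;
    ssl_certificate_key /etc/ssl/nucleus.key;

    location / {
        # Plain HTTP to the backend service; SSL terminates here.
        proxy_pass http://127.0.0.1:8080;

        # Required for WebSocket upgrades.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

A real ingress would carry one such stanza (or route) per exposed Nucleus service, which is why the dedicated SSL document covers the full layout.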
This distribution contains the most essential services to check out Nucleus.
It supports both Linux and Windows.
It supports all essential features of Nucleus and can be connected to by regular Omniverse Client applications; however, this distribution is missing typical “infrastructure” aspects such as:
Caching: files served by Nucleus Workstation setups cannot be cached by their clients’ Caches
Optimal data transfer mechanisms: HTTP, multithreading, and similar optimizations are not possible with Workstation setups
Specialized backup and restore functionality
Enterprise Nucleus Server
This is the production-grade distribution of Nucleus, intended for enterprise deployments. It includes all the services we have and supports all available features. This is the distribution we run internally in production.
Currently, the Enterprise distribution is available as Docker Compose stacks; we plan to create Kubernetes artifacts in the future.