Omniverse Farm Installation and Deployments


Farm has been implemented as a set of services and can be deployed in a variety of ways. Some deployments are easy to get started with, while others require knowledge of more advanced infrastructure such as Kubernetes.

There is a large variety of options available to further tune Omniverse Farm and adapt its behaviour to a wide range of use cases. These are not described here, but can be found in the configuration files and Helm charts.

Deployment & Installation options

In-depth installation and deployment guides are available from the menu on the left. Below are some of the considerations when choosing one deployment strategy over another.

Version Compatibility

It’s recommended to use the following Farm versions when performing batch rendering with Composer:

Omniverse Farm and Composer Compatibility Matrix
Omniverse Launcher

The easiest option to try out Omniverse Farm is by using the applications available on the Launcher. This uses a traditional Queue & Agent setup, where the Queue runs as a single process and an Agent is deployed to each node. Scale and redundancy are limiting factors of this deployment.

Headless deployment

The headless deployment is similar to the Omniverse Launcher version in that it has a Queue process and Agents deployed to nodes. The main difference is that both the Queue and the Agents run completely headless, and no GPU resources are required. It does require familiarity with the terminal.
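For a sense of what a headless setup involves, the sketch below shows the shape of the process: one Queue on a management node and an Agent on each worker, pointed at the Queue over HTTP. The launch script names and the default Queue port shown here are assumptions for illustration; consult the headless deployment guide for the exact commands.

```shell
# Hypothetical sketch: script names and the default port are assumptions,
# not the exact layout of the Farm packages.
QUEUE_HOST="localhost"
QUEUE_PORT="8222"
QUEUE_URL="http://${QUEUE_HOST}:${QUEUE_PORT}"

# On the management node, the Queue would be started headlessly, e.g.:
#   ./queue/run_queue.sh --port "${QUEUE_PORT}"
# On each worker node, an Agent would be started and pointed at the Queue:
#   ./agent/run_agent.sh --queue-url "${QUEUE_URL}"

echo "Agents register against: ${QUEUE_URL}"
```

Since neither process needs a GPU, the same commands work on render nodes and on plain CPU machines hosting the Queue.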


Kubernetes

The recommended way of deploying Omniverse Farm is by using Kubernetes. When deployed on Kubernetes, tasks are, by default, scheduled using the Kubernetes scheduler. Farm is deployed behind a load balancer, with failover and redundancy for all of the services making up Omniverse Farm. This is the most scalable option.
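Since this deployment is driven by the Helm charts mentioned earlier, tuning typically happens through chart value overrides. The fragment below is a hypothetical sketch of what such overrides might look like; the key names are assumptions for illustration, not the published chart's actual schema.

```yaml
# Hypothetical values.yaml overrides -- key names are assumptions,
# not the exact schema of the Omniverse Farm Helm chart.
controller:
  serviceConfig:
    k8s:
      jobTemplateSpecOverrides:
        resources:
          limits:
            nvidia.com/gpu: 1   # schedule Farm tasks onto GPU nodes
replicaCount: 2                 # redundancy for the Farm services
service:
  type: LoadBalancer            # expose the Farm APIs behind a load balancer
```

Overrides like these would be passed at install time (for example via `helm upgrade --install -f values.yaml`), letting the Kubernetes scheduler and load balancer provide the scaling and failover described above.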


Cloud

It is possible to deploy the Kubernetes-based version of Omniverse Farm on Azure, AWS, GCP and OCI. At the moment, security needs to be managed by restricting access to the management APIs, using security groups or similar.


Hybrid

It is possible to mix and match some of these deployments. For example, the management services (Queue) can be deployed by themselves in Kubernetes, while an agent-based approach is used to distribute work across nodes.

Everything is API driven and, as long as the services can talk to each other, any deployment scenario is possible. Below is a diagram of the interactions between the various services.
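Because everything is API driven, submitting work is ultimately a plain HTTP call against the Queue, regardless of which deployment option hosts it. The Python sketch below illustrates the idea; the endpoint path, payload fields, and port are assumptions for illustration and should be checked against the API reference of your Farm release.

```python
import json
import urllib.request

# Assumed Queue address; any of the deployments above would expose it over HTTP.
FARM_QUEUE = "http://localhost:8222"


def build_task_submission(job_type, task_args, user="docs-example"):
    """Assemble a task-submission payload (field names are assumptions)."""
    return {
        "user": user,
        "task_type": job_type,    # name of a job definition registered with Farm
        "task_args": task_args,   # arguments forwarded to that job
        "task_comment": "submitted via the HTTP API",
    }


def submit_task(payload, queue_url=FARM_QUEUE):
    """POST the payload to the Queue; the endpoint path is an assumed example."""
    req = urllib.request.Request(
        f"{queue_url}/queue/management/tasks/submit",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


payload = build_task_submission("create-render", {"usd_file": "scene.usd"})
print(payload["task_type"])  # create-render
```

The same payload could be sent from a CI pipeline, a DCC plugin, or another service, which is what makes the mixed deployment scenarios above possible.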