RTX – Interactive (Path Tracing) mode

The Omniverse RTX Renderer provides the RTX – Interactive (Path Tracing) mode, in which a single path tracing pass per frame incrementally samples the lighting contributions from all possible lighting interactions in the scene. This is followed by a single de-noising step using the NVIDIA OptiX™ AI-Accelerated Denoiser. Post-processing effects such as bloom and tone mapping are applied after de-noising.

RTX – Interactive (Path Tracing) mode is the most accurate Omniverse RTX Renderer rendering mode and can produce photo-quality images, at the expense of a lower framerate than RTX – Real-Time mode.

Path Tracing



Samples per Pixel per Frame (1 to 32)

Total number of samples for each rendered pixel, per frame.

Total Samples per Pixel (0 = inf)

Maximum number of samples to accumulate per pixel. When this sample count is reached the rendering stops until
a scene or setting change is detected, restarting the rendering process. Set to 0 to remove this limit.

Max Bounces

Maximum number of ray bounces for any ray type. Higher values give more accurate results, but worse performance.

Max Specular and Transmission Bounces

Maximum number of ray bounces for specular reflection and transmission.

Max SSS Volume Scattering Bounces

Maximum number of ray bounces for subsurface scattering (SSS).

Max Fog Scattering Bounces

Maximum number of bounces for volume scattering within a fog volume.

Max Non-Uniform Volume Scattering Bounces

Maximum number of ray bounces for non-uniform volumes.

Fractional Cutout Opacity

If enabled, fractional cutout opacity values are treated as a measure of surface ‘presence’, resulting in a translucency effect similar to alpha-blending.
Path-traced mode uses stochastic sampling based on these values to determine whether a surface hit is valid or should be skipped.
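The stochastic test can be sketched as follows (a minimal illustration of the idea, not the renderer's actual implementation):

```python
import random

def surface_hit_is_valid(cutout_opacity, rng=random.random):
    """Stochastic alpha test: treat fractional opacity as a hit probability."""
    # Opacity 1.0 -> the hit is always valid; 0.0 -> the surface is always skipped.
    return rng() < cutout_opacity

# Averaged over many samples, a 0.25-opacity surface registers a hit ~25% of
# the time, which converges to an alpha-blended appearance.
hits = sum(surface_hit_is_valid(0.25) for _ in range(100_000))
```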

Reset Accumulation on Time Change

If enabled, Path-Tracer accumulation is restarted every time the MDL animation time changes. Setting this to false is useful to prevent the accumulation from resetting every frame when the
animation time advances with wall-clock time (which can be the case when an animation is not being played and wall-clock time is used instead of animation time).


While using a higher number of bounces increases accuracy of the final image, it can quickly reduce performance while achieving diminishing returns in terms of image quality.

Adaptive Sampling

Example of Adaptive Sampling debug view

The RTX – Interactive (Path Tracing) mode supports Adaptive Sampling. With Adaptive Sampling, samples are non-uniformly distributed where most beneficial for further convergence, which can result in less noise for the same number of samples and also provides a more consistent noise level across multiple frames.

Adaptive Sampling can be enabled in the RTX – Interactive (Path Tracing) render settings, where its Target Error value can also be adjusted; once the target error is reached, no further samples are accumulated.

An Adaptive Sampling Error debug view can be selected in the Debug View render settings, which allows visualizing the normalized standard deviation of the Monte Carlo estimator of the pixels: warm colors represent high variance, which indicate that additional samples would lead to improved convergence for those pixels.
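Conceptually, the per-pixel convergence test works like this minimal sketch. The normalized standard error of the Monte Carlo mean is an assumed metric here; the renderer's exact estimator is not documented in this section:

```python
import math

def pixel_converged(samples, target_error=0.05):
    """Return True once a pixel's estimated relative error drops below target."""
    n = len(samples)
    if n < 2:
        return False  # not enough samples to estimate variance
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    # Standard error of the Monte Carlo mean, normalized by the mean
    # (a hypothetical formulation for illustration).
    err = math.sqrt(var / n) / max(mean, 1e-6)
    return err <= target_error

# A flat pixel converges immediately; a noisy one keeps accumulating samples.
```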

Note that Movie Capture does not currently support ending the rendering of a frame early when the Target Error threshold has been reached; this will be supported in a future release.



Adaptive Sampling

When enabled, a noise value is computed for each pixel, and once the threshold is reached, the pixel is no longer sampled.
  • Target Error

The noise value threshold below which the pixel is no longer sampled.

Sampling & Caching




Enables caching path tracing results for improved performance at the cost of some accuracy.

Many-Light Sampling

Enables a technique that improves the sampling quality (and therefore rendering convergence) in scenes with many light primitives.

Mesh-Light Sampling

Enables direct illumination sampling of geometry with emissive materials.




Anti-Aliasing Sample Pattern

Sampling pattern used for Anti-Aliasing. Select between Box, Triangle, Gaussian and Uniform.

Anti-Aliasing Radius

The sampling footprint radius, in pixels, when generating samples with the selected antialiasing sample pattern.
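The effect of the pattern and radius can be illustrated with a small sketch. Treating the radius as the standard deviation of the Gaussian pattern is an assumption made for this example:

```python
import random

def aa_sample_offset(pattern="Box", radius=0.5, rng=random):
    """Generate one sub-pixel (dx, dy) jitter for the chosen filter footprint."""
    if pattern == "Box":
        # Uniform over a square with the given half-width.
        return rng.uniform(-radius, radius), rng.uniform(-radius, radius)
    if pattern == "Gaussian":
        # Normally distributed; the radius is used as the standard deviation
        # here (an assumption, the renderer's parameterization may differ).
        return rng.gauss(0.0, radius), rng.gauss(0.0, radius)
    raise ValueError(f"unknown pattern: {pattern}")
```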

Firefly Filtering



Firefly Filtering

Enables image filtering to reduce the presence of excessively bright “firefly” pixel artifacts.

Max Ray Intensity Glossy

Clamps the maximum ray intensity for glossy bounces. Can help prevent fireflies, but may result in energy loss.

Max Ray Intensity Diffuse

Clamps the maximum ray intensity for diffuse bounces. Can help prevent fireflies, but may result in energy loss.
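The clamping both settings describe can be sketched as follows (a simplified illustration; the renderer's actual clamp may operate differently):

```python
def clamp_ray_radiance(rgb, max_intensity):
    """Scale a radiance sample so its brightest channel stays under the clamp."""
    peak = max(rgb)
    if peak <= max_intensity:
        return rgb
    # Scaling the whole sample discards energy above the clamp: this trades a
    # small amount of bias (darkening) for far fewer firefly outliers.
    scale = max_intensity / peak
    return tuple(c * scale for c in rgb)
```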





Enable to apply the OptiX Denoiser to the radiance image generated by the renderer. The OptiX Denoiser can reduce rendering times by an order of magnitude for a target image quality.

Optix Denoiser Blend Factor

A blend factor indicating how much to blend the denoised image with the original non-denoised image. 0 shows only the denoised image, 1.0 shows the image with no denoising applied.
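The blend is a simple linear interpolation between the two images, sketched here per pixel value:

```python
def blend_denoised(denoised, noisy, blend_factor):
    """Blend factor 0.0 -> fully denoised image, 1.0 -> original noisy image."""
    return [(1.0 - blend_factor) * d + blend_factor * n
            for d, n in zip(denoised, noisy)]
```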

Denoise AOVs

If enabled, the OptiX Denoiser will also denoise the AOVs.

Non-uniform Volumes

This feature enables path-traced volume rendering of both VDB files (internally converted to NanoVDB, a faster and more compact GPU-friendly volume representation) and procedural MDL-based volumes. VDB files can contain either an SDF (signed distance field/level set) or a density volume. Currently, the VDB volume material can only be applied to a cube mesh, while procedural volume materials can be applied to any kind of mesh (cube, sphere, torus, etc.). Volumes can also overlap with other volumes; the maximum number of overlaps between volumes is currently limited to four.



Non-uniform Volumes

The number of bounces is controlled by Max Heterogeneous Volume Scattering Bounces (under the Path Tracing section in Path-Traced Mode Settings).
• When set to 1: performs single scattering. Fast, and suitable for weakly scattering volumes like fog.
• When set to a value greater than 1: performs multiple scattering. Slower, but suitable for highly scattering volumes like clouds.
Important: Make sure to increase the number of bounces to avoid darkening artifacts.

Transmittance Method

Choose between Biased Ray Marching or Ratio Tracking. Biased Ray Marching is the recommended option in all cases.

Max Collision Count

Maximum delta tracking iterations. Increase to more than 32 for highly scattering volumes like clouds.
Important: if set too low, parts of the volume will disappear.

Max Light Collision Count

Maximum ratio tracking iterations. Increase to more than 32 for highly scattering volumes like clouds.
Important: if set too low, parts of the volume will disappear.
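Ratio tracking itself can be sketched as follows; the iteration cap plays the role of Max Light Collision Count (a simplified, single-channel illustration, not the renderer's implementation):

```python
import math
import random

def ratio_tracking_transmittance(density_at, t_max, majorant,
                                 max_collisions=32, rng=random):
    """Estimate transmittance through a non-uniform medium by ratio tracking."""
    t, weight = 0.0, 1.0
    for _ in range(max_collisions):
        # Sample a free-flight distance against the constant majorant density.
        t -= math.log(1.0 - rng.random()) / majorant
        if t >= t_max:
            return weight  # ray left the volume: estimate is complete
        # Tentative collision: attenuate by the ratio of real to majorant density.
        weight *= 1.0 - density_at(t) / majorant
    # Iteration cap reached: the estimate is truncated, which is why too low a
    # cap makes dense volumes appear to darken or disappear.
    return weight
```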

Global Volumetric Effects



Rayleigh Atmosphere

Enables an additional medium of Rayleigh-scattering particles to simulate a physically-based sky.
  • Rayleigh Atmosphere Scale

Scales the size of the Rayleigh sky.
  • Skip Background

If a Dome Light is rendered for the sky color, the Rayleigh Atmosphere is applied to the foreground while the background sky color is left unaffected.



AOVs (Arbitrary Output Variables) are data known to the renderer and used to compute illumination. Typically, AOVs contain decomposed lighting information such as:

  • Direct and indirect illumination

  • Reflections and refractions

  • Objects with self-illumination

But they can also contain geometric and scene information, such as:

  • Surface position in world-space

  • Orientation of normals

  • View-relative depth

As the final image is computed, the intermediate information used during rendering can be optionally written to disk, providing opportunities to modify the final image during compositing and additional insights through 2D analysis. The auxiliary images, called “passes” in Omniverse and “Render Products” in USD, are just named outputs. The AOV data used by the renderer is referred to as a “Render Variable” and defines what is written for each pass.

If the OptiX Denoiser is enabled (true by default) the AOVs will be denoised. An option to control this is available in Denoising settings.

Debug View contains a list of the AOV passes which can be displayed if enabled for previewing.

AOV Common Settings



Minimum Z-Depth

The minimum z-depth for AOVs. The depth range can be shortened when necessary to increase precision.

Maximum Z-Depth

The maximum z-depth for AOVs. The depth range can be shortened when necessary to increase precision.

32-Bit Depth AOV

Uses a 32-bit format for the depth AOV.

AOV Passes




Shading of the background, such as the background resulting from rendering a Dome Light.

Diffuse Filter

The raw color of the diffuse texture.

Direct Illumination

Shading from direct paths to light sources.

Global Illumination

Diffuse shading from indirect paths to light sources.


Shading from indirect reflection paths to light sources.

Reflection Filter

The raw color of the reflection, before being multiplied for its final intensity.


Shading from refraction paths to light sources.

Refraction Filter

The raw color of the refraction, before being multiplied for its final intensity.


Shading of the surface’s own emission value.


Shading from VDB volumes.

World Normal

The surface’s normal in world-space.

World Position

The surface’s position in world-space.


The surface’s depth relative to the view position.

Multi Matte


Multi Matte extends AOV support by enabling rendering masked mesh geometry to AOVs.

The Multi Matte channel count defines the total number of channels available, and each is assigned to a Multi Matte AOV’s color channel (red, green, or blue). Each channel has an index, and Mesh geometry with a matching Multi Matte ID index will be rendered to the first Multi Matte AOV channel found with a matching index.

Debug View contains a list of all Multi Matte AOV passes which can be previewed.
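Assuming channels are assigned to the Multimatte AOVs' color channels in order (red, green, blue of the first AOV, then the second, and so on — a layout inferred from the description above), the mapping from channel index to output location can be sketched as:

```python
def multimatte_location(channel_index):
    """Map a Multimatte channel index to (AOV index, color channel).

    Assumes channels fill each AOV's red, green, and blue slots in order
    before moving to the next AOV.
    """
    aov_index, slot = divmod(channel_index, 3)
    return aov_index, ("red", "green", "blue")[slot]
```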



Channel Count

Multimatte allows rendering AOVs of meshes which have a Multimatte ID index matching a Multimatte AOV’s channel index.
Channel Count determines how many channels can be used, which are distributed among the Multimatte AOVs’ color channels.
You can preview a Multimatte AOV by selecting one in the Debug View render settings.

Multimatte AOV

  • Red Channel Multimatte ID Index

The Multimatte ID index to match for the red channel of this Multimatte AOV.
  • Green Channel Multimatte ID Index

The Multimatte ID index to match for the green channel of this Multimatte AOV.
  • Blue Channel Multimatte ID Index

The Multimatte ID index to match for the blue channel of this Multimatte AOV.


Multi-GPU rendering in RTX – Interactive (Path Tracing) mode distributes the image across the GPUs while automatically balancing the workload. Automatic Load Balancing can improve performance, particularly at high resolution and with mixed GPU models of varying capacity.

The primary GPU performs additional tasks beyond rendering pixels, such as sample aggregation, denoising, post-processing, and UI rendering. The default GPU 0 Weight value is usually ideal.




Enables using multiple GPUs (when available). This splits the rendering of the image into a large tile per GPU with a small overlap region between them.

Automatic Load Balancing

Automatically balances the amount of total path tracing work to be performed by each GPU in a multi-GPU configuration.

GPU 0 Weight

The amount of total Path Tracing work (between 0 and 1) to be performed by the first GPU in a Multi-GPU configuration.
A value of 1 means the first GPU will perform the same amount of work assigned to any other GPU.
Ignored if Automatic Load Balancing is enabled.
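As an illustration of how relative weights translate into a workload split, here is a minimal sketch dividing image rows proportionally (the renderer's actual tiling, including the overlap region between tiles, is more involved):

```python
def split_rows(height, weights):
    """Split image rows among GPUs proportionally to per-GPU weights."""
    total = sum(weights)
    rows = [round(height * w / total) for w in weights]
    rows[-1] += height - sum(rows)  # absorb rounding so every row is covered
    return rows

# With GPU 0 Weight = 0.5 and two other GPUs at weight 1.0, GPU 0 renders
# half as many rows as each of the others.
```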

Compress Radiance

Enables lossy compression of per-pixel output radiance values.

Compress Albedo

Enables lossy compression of per-pixel output albedo values (needed by OptiX denoiser).

Compress Normals

Enables lossy compression of per-pixel output normal values (needed by OptiX denoiser).


Enabling multi-threading improves UI responsiveness.


In simple scenes, Automatic Load Balancing may not make a significant difference, and may take more time in scenes with low frame rates.


For efficiency’s sake, in some contexts rendering will switch to single-GPU automatically until conditions change to warrant multi-GPU rendering, for example when rendering at low resolution.

Multi-GPU rendering is enabled by default if the system has multiple NVIDIA RTX-enabled GPUs of the same model. GPUs which don’t support ray tracing are skipped automatically.

If the GPU models are not identical, multi-GPU can be enabled with the command line: --/renderer/multiGpu/enabled=true

Per-GPU memory usage is limited to 48GB. The GPU with the lowest memory capacity will limit the amount of memory the other GPUs can leverage. You can disable the lower-capacity GPU to avoid this limitation.

To limit the maximum number of GPUs, Omniverse Kit must be launched with the following argument: --/renderer/multiGpu/maxGpuCount


SLI mode is unstable and should be disabled globally in the NVIDIA Control Panel when using multi-GPU rendering. This will be addressed in future releases.