Cameras#

Cameras in Omniverse are used to simulate real world cameras and their functionality.

Note

Some camera properties may be overridden by a renderer.

Lens#

| Lens | Description | Units |
|---|---|---|
| Focal Length | Longer focal lengths narrow the field of view; shorter focal lengths widen it. | Millimeters |
| Focus Distance | The distance at which perfect sharpness is achieved. | World Units |
| fStop | Controls depth-of-field blur. Lower values decrease the in-focus range; larger values increase it. | N/A |
| Projection | Sets the camera to perspective or orthographic mode. | N/A |
| Stereo Role | Sets the camera's role in stereoscopic renders (left or right eye), or mono (default) for non-stereo renders. | N/A |

Aperture#

| Aperture | Description | Units |
|---|---|---|
| Aperture (Horizontal) | Emulates the sensor/film width on a camera. | Millimeters (or tenths of a world unit) |
| Aperture Offset (Horizontal) | Offsets the resolution/film gate horizontally. | Millimeters (or tenths of a world unit) |
| Aperture (Vertical) | Emulates the sensor/film height on a camera. | Millimeters (or tenths of a world unit) |
| Aperture Offset (Vertical) | Offsets the resolution/film gate vertically. | Millimeters (or tenths of a world unit) |
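
Focal length and aperture together determine the field of view. As a rough sketch of the standard pinhole relationship (not a specific Omniverse API; the 20.955 mm film back used in the example is an assumed common default):

import math

def fov_degrees(focal_length_mm: float, aperture_mm: float) -> float:
    """Field of view implied by a focal length and a film-back (aperture) size."""
    return math.degrees(2.0 * math.atan(aperture_mm / (2.0 * focal_length_mm)))

# e.g., a 35 mm lens on a 20.955 mm-wide film back gives roughly a 33.4 degree horizontal FOV
print(fov_degrees(35.0, 20.955))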

Clipping#

| Clipping | Description | Units |
|---|---|---|
| Clipping Planes | Optional list of additional, arbitrarily oriented planes that clip the view. | N/A |
| Clipping Range | Clips the view outside of both the near and far range values. | World Units |
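
A minimal sketch of setting the clipping range through USD (assuming a camera prim at /World/Camera; the attribute comes from the standard UsdGeom.Camera schema):

from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.Open("scene.usd")  # assumed stage; in Kit, use the currently open stage
camera = UsdGeom.Camera.Get(stage, "/World/Camera")

# Near/far clipping range in world units; the view outside this range is clipped.
camera.GetClippingRangeAttr().Set(Gf.Vec2f(1.0, 1000000.0))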

Fisheye Lens#

The RTX Renderer supports simulating corrected wide-angle and 360-degree projections for accurate fisheye lens modeling. Pinhole projection is also supported.

| Projection Type | Description |
|---|---|
| Pinhole | Standard camera projection (disables fisheye). |
| Fisheye Polynomial | 360-degree spherical projection. |
| Fisheye Spherical | 360-degree full-frame projection. |
| Fisheye KannalaBrandt K3 | Kannala-Brandt generic fisheye model (see References). |
| Fisheye Rad Tan Thin Prism | Radial, tangential, and thin-prism distortion model. |
| Omnidirectional Stereo | 360-degree stereo projection for left/right eye pairs. |
| Generalized Projection | Uses projection textures for arbitrary lens distortion modeling. |

Support per Projection Type

| Parameter | Pinhole | Fisheye Polynomial | Fisheye Spherical | Fisheye KannalaBrandt K3 | Fisheye Rad Tan Thin Prism | Omnidirectional Stereo | Generalized Projection |
|---|---|---|---|---|---|---|---|
| Nominal Width (pixels) | | YES | | YES | YES | | YES |
| Nominal Height (pixels) | | YES | | YES | YES | | YES |
| Optical Center X (pixels) | | YES | | YES | YES | | YES |
| Optical Center Y (pixels) | | YES | | YES | YES | | YES |
| Max FOV | | YES | YES | | | | YES |
| Poly k0 | | YES | | YES | YES | | |
| Poly k1 | | YES | | YES | YES | | |
| Poly k2 | | YES | | YES | YES | | |
| Poly k3 | | YES | | YES | YES | | |
| Poly k4 | | YES | | | YES | | |
| Poly k5 | | | | | YES | | |
| p0 | | | | | YES | | |
| p1 | | | | | YES | | |
| s0 | | | | | YES | | |
| s1 | | | | | YES | | |
| s2 | | | | | YES | | |
| s3 | | | | | YES | | |
| Interpupillary Distance (cm) | | | | | | YES | |
| Is left eye | | | | | | YES | |
| Generalized Projection Direction Texture | | | | | | | YES |
| Generalized Projection NDC Texture | | | | | | | YES |

Generalized Projection#

The Generalized Projection model enables arbitrary lens distortion modeling, both parametric and non-parametric, at any precision level (given adequately precise calibration data), using user-provided textures.

How to use the Generalized Projection model

This model depends on two user-generated textures that define its octahedral encoding operations (a single-vector sketch of the encoding follows the list below).

  • Direction Texture: Defines the unproject from NDC (Normalized Device Coordinate) to view direction operation. At runtime, for a given pixel, NDC is used as UVs to sample the Direction Texture and get the ray direction to trace primary rays.

  • NDC Texture: Defines the project from view direction or view position to NDC operation. At runtime, for a given view direction or view position, the octahedral-encoded view direction is used as UVs to look up the NDC, which is used to compute the corresponding pixel position on screen.
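
The NDC Texture is addressed by an octahedral encoding of the view direction. As a single-vector sketch of that mapping (equivalent to unit_vector_to_oct in the sample script further below):

import numpy as np

def unit_vector_to_oct(d):
    """Map a unit vector to octahedral coordinates on the [-1, 1] square."""
    x, y, z = d / np.sum(np.abs(d))      # project onto the L1 unit octahedron
    if z <= 0:                           # fold the lower hemisphere outward
        sx = 1.0 if x >= 0 else -1.0
        sy = 1.0 if y >= 0 else -1.0
        x, y = (1 - abs(y)) * sx, (1 - abs(x)) * sy
    return x, y

# A direction 45 degrees off the +Z axis encodes to (0.0, 0.5):
print(unit_vector_to_oct(np.array([0.0, np.sqrt(0.5), np.sqrt(0.5)])))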

Only the following Fisheye Lens camera parameters need to be set; all others are ignored by this model (a scripting sketch follows the list):

  1. Projection Type (cameraProjectionType): Set as Generalized Projection.

  2. Generalized Projection Direction Texture (generalizedProjectionDirectionTexturePath) and Generalized Projection NDC Texture (generalizedProjectionNDCTexturePath): set the path for the respective textures.

  3. Nominal Width (fthetaWidth) and Nominal Height (fthetaHeight).

  4. Optical Center X (fthetaCx) and Optical Center Y (fthetaCy).

  5. Max FOV (fthetaMaxFov).
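
As a sketch, these can be authored as raw attributes on the camera prim through USD. The attribute names come from the list above; the value types, file names, and the projection-type token used here are assumptions to verify against your build:

from pxr import Sdf, Usd

stage = Usd.Stage.Open("scene.usd")              # assumed stage
prim = stage.GetPrimAtPath("/World/Camera")      # assumed camera prim path

# Assumed token spelling for the Generalized Projection mode.
prim.CreateAttribute("cameraProjectionType", Sdf.ValueTypeNames.Token).Set("generalizedProjection")

# Paths to the user-generated textures (hypothetical file names).
prim.CreateAttribute("generalizedProjectionDirectionTexturePath", Sdf.ValueTypeNames.Asset).Set(Sdf.AssetPath("direction.exr"))
prim.CreateAttribute("generalizedProjectionNDCTexturePath", Sdf.ValueTypeNames.Asset).Set(Sdf.AssetPath("ndc.exr"))

# Nominal resolution, optical center, and max FOV (assumed float-typed).
prim.CreateAttribute("fthetaWidth", Sdf.ValueTypeNames.Float).Set(1920.0)
prim.CreateAttribute("fthetaHeight", Sdf.ValueTypeNames.Float).Set(1280.0)
prim.CreateAttribute("fthetaCx", Sdf.ValueTypeNames.Float).Set(960.0)
prim.CreateAttribute("fthetaCy", Sdf.ValueTypeNames.Float).Set(640.0)
prim.CreateAttribute("fthetaMaxFov", Sdf.ValueTypeNames.Float).Set(200.0)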

Note

Optical Center values should be defined either in the texture-generation process or in the camera parameters, but not both. If defined by the textures, set the camera's Optical Center values to half the Nominal Width and half the Nominal Height to avoid double-application. If not defined by the textures (i.e., the texture is perfectly centered, with (0.5, 0.5) as the center), set the Optical Center in the camera parameters.

Direction Texture

  1. The texture should be a 32-bit float texture with three channels (RGB) which encodes the three components of a normalized direction.

  2. The texture should store information for the unproject from NDC (Normalized Device Coordinate) to view direction operation.

  3. For each texel, with coordinates ranging from (0,0) to (textureWidth, textureHeight), do the following (a minimal sketch follows this list):

    • Use the normalized texture coordinate (UV) as the NDC.

    • Call a user-defined unproject function, which should return a normalized view direction in float3.

    • Store the three components of the resulting view direction in R, G, and B channels of the image data array respectively.

    • Pack into an EXR image by calling save_img (provided in the sample script).
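
As a minimal sketch of these steps, assuming an ideal equidistant fisheye (radius proportional to angle) as the user-defined unproject function; the sample script below implements the same flow for the Kannala-Brandt model:

import numpy as np

def unproject_equidistant(u, v, max_fov_deg=180.0):
    """Assumed user-defined unproject: NDC (u, v) in [0, 1] -> unit view direction."""
    x, y = 2.0 * u - 1.0, 2.0 * v - 1.0           # recenter the NDC around (0, 0)
    r = np.sqrt(x**2 + y**2)
    theta = r * np.radians(max_fov_deg) / 2.0     # equidistant: angle linear in radius
    s = np.where(r > 0, np.sin(theta) / np.maximum(r, 1e-12), 0.0)
    return x * s, y * s, -np.cos(theta)           # -Z forward, matching the sample script

w, h = 1024, 1024
u, v = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
dir_img = np.stack(unproject_equidistant(u, v), axis=-1).astype(np.float32)
# save as a 3-channel EXR with save_img from the sample script below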

NDC Texture

  1. The texture should be a 32-bit float texture with two channels (RG) which encodes the two components of a normalized screen position (NDC).

  2. The texture should store information for the project from view-space direction or position to NDC operation.

  3. For each texel, with coordinates ranging from (0,0) to (textureWidth, textureHeight), do the following (a minimal sketch follows this list):

    • Use the normalized texture coordinate (UV) as the octahedral encoded view direction in float2.

    • Decode the encoded view direction using the script’s provided function get_octahedral_directions.

    • Call a user-defined project function; the function should return a normalized device/screen coordinate in float2, with a range from (0,0) to (1,1).

    • Store the two components of the resulting coordinate in the R and G channels of the image data array respectively.

    • Pack into an EXR by calling save_img (provided in the sample script).
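
Similarly, a minimal sketch of the NDC Texture flow, again assuming an ideal equidistant fisheye as the user-defined project function; the octahedral decode here is a compact equivalent of get_octahedral_directions in the sample script below:

import numpy as np

def oct_decode(width, height):
    """Decode each texel's octahedral UV (on the [-1, 1] square) to a unit view direction."""
    x, y = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    z = 1 - np.abs(x) - np.abs(y)
    sx = np.where(x >= 0, 1.0, -1.0)
    sy = np.where(y >= 0, 1.0, -1.0)
    # fold the lower hemisphere (z <= 0) back onto the octahedron
    x, y = np.where(z <= 0, (1 - np.abs(y)) * sx, x), np.where(z <= 0, (1 - np.abs(x)) * sy, y)
    n = np.sqrt(x**2 + y**2 + z**2)
    return x / n, y / n, z / n

def project_equidistant(dx, dy, dz, max_fov_deg=180.0):
    """Assumed user-defined project: unit view direction -> NDC in [0, 1]."""
    sin_theta = np.sqrt(dx**2 + dy**2)
    theta = np.arctan2(sin_theta, -dz)           # angle from the -Z optical axis
    r = theta / (np.radians(max_fov_deg) / 2.0)  # equidistant: radius linear in angle
    s = np.where(sin_theta > 0, r / np.maximum(sin_theta, 1e-12), 0.0)
    return 0.5 * (dx * s + 1.0), 0.5 * (dy * s + 1.0)

w, h = 1024, 1024
ndc_img = np.stack(project_equidistant(*oct_decode(w, h)), axis=-1).astype(np.float32)
# save as a 2-channel EXR with save_img from the sample script below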

Note

Inaccurate projection artifacts may occur if the textures’ resolution is lower than the target view resolution.

Texture Generation Python Script Example

An example Python script is provided below. It generates compatible texture pairs that follow the model's required conventions by baking the OpenCV fisheye (Kannala-Brandt) distortion model into textures.

Python Script
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.


# This script can generate textures compatible with the "GeneralizedProjectionTexture" lens projection.
# It generates a forward/inverse texture pair that drives the projection.
# It does so by mimicking the existing Omniverse fisheye projections
# and baking them as textures.
# This script requires the following libraries (install via pip):
# OpenEXR, Imath, numpy, and scipy (imported locally below).
import os

import OpenEXR
import Imath
import numpy as np

# Compression for the .exr output file
# Should be one of:
# NO_COMPRESSION | RLE_COMPRESSION | ZIPS_COMPRESSION | ZIP_COMPRESSION | PIZ_COMPRESSION
# g_compression = Imath.Compression.NO_COMPRESSION
g_compression = Imath.Compression.ZIP_COMPRESSION


# util functions
def save_img(img_path: str, img_data):
    """ Save a numpy array of shape (height, width, channels) to an .exr file """

    if img_data.ndim != 3:
        raise ValueError("The input image must be a 3-dimensional array (height, width, channels)")

    height = img_data.shape[0]
    width = img_data.shape[1]
    channel_count = img_data.shape[2]

    channel_names = ['R', 'G', 'B', 'A']

    channel_map = {}
    channel_data_types = {}

    for i in range(channel_count):
        channel_map[channel_names[i]] = img_data[:, :, i].tobytes()
        channel_data_types[channel_names[i]] = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

    header = OpenEXR.Header(width, height)
    header['compression'] = g_compression
    header['channels'] = channel_data_types
    exrfile = OpenEXR.OutputFile(img_path, header)
    exrfile.writePixels(channel_map)
    exrfile.close()


# decode an array of X and Y UV coordinates in range (-1, 1) into directions
def oct_to_unit_vector(x, y):
    dirX, dirY = np.meshgrid(x, y)
    dirZ = 1 - np.abs(dirX) - np.abs(dirY)

    sx = 2 * np.heaviside(dirX, 1) - 1
    sy = 2 * np.heaviside(dirY, 1) - 1

    # fold the lower hemisphere (dirZ <= 0) back onto the octahedron
    tmpX = dirX
    dirX = np.where(dirZ <= 0, (1 - abs(dirY)) * sx, dirX)
    dirY = np.where(dirZ <= 0, (1 - abs(tmpX)) * sy, dirY)

    n = 1 / np.sqrt(dirX**2 + dirY**2 + dirZ**2)
    dirX *= n
    dirY *= n
    dirZ *= n
    return dirX, dirY, dirZ


# encode an array of view directions (unit vectors) into octahedral encoding
def unit_vector_to_oct(dirX, dirY, dirZ):
    n = 1 / (np.abs(dirX) + np.abs(dirY) + np.abs(dirZ))
    octX = dirX * n
    octY = dirY * n

    sx = 2 * np.heaviside(octX, 1) - 1
    sy = 2 * np.heaviside(octY, 1) - 1

    tmpX = octX
    octX = np.where(dirZ <= 0, (1 - abs(octY)) * sx, octX)
    octY = np.where(dirZ <= 0, (1 - abs(tmpX)) * sy, octY)
    return octX, octY


# take the texture width and height, generate the view directions for each texel in an array
def get_octahedral_directions(width, height):
    # octahedral vectors are encoded on the [-1,+1] square
    x = np.linspace(-1, 1, width)
    y = np.linspace(-1, 1, height)

    # octahedral to unit vector
    dirX, dirY, dirZ = oct_to_unit_vector(x, y)

    return dirX, dirY, dirZ


# Kannala-Brandt forward polynomial: r(theta) = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
def poly_KB(theta, coeffs_KB):
    theta2 = theta**2
    dist = 1 + theta2 * (coeffs_KB[1] + theta2 * (coeffs_KB[2] + theta2 * (coeffs_KB[3] + theta2 * coeffs_KB[4])))
    return theta * dist


# -- unproject from sensor plane normalized coordinates to ray cosine directions
def create_backward_KB_distortion(textureWidth, textureHeight, Width, Height, fx, fy, cx, cy, coeffs_KB):
    from scipy.optimize import root_scalar

    # Create a 2d array of values from [0-cx, 1-cx] x [0-cy, 1-cy], normalized to OpenCV
    # angle space. Both axes are scaled by Width to match the forward function below.
    screen_x = (np.linspace(0, 1, textureWidth) - cx) * Width / fx
    screen_y = (np.linspace(0, 1, textureHeight) - cy) * Width / fy

    # y is negated here to match the inverse function's behavior.
    # Otherwise, rendering artifacts occur because the functions don't match.
    screen_y = -screen_y

    X, Y = np.meshgrid(screen_x, screen_y)

    # Compute the radial distance on the screen from its center point
    R = np.sqrt(X**2 + Y**2)

    def find_theta_for_R(R, coeffs_KB):
        if R > 0:
            # Define a function of theta for the current R
            func = lambda theta: poly_KB(theta, coeffs_KB) - R
            # Use root_scalar to find the root
            theta_solution = root_scalar(func,
                                         bracket=[0, np.pi],
                                         method='brentq',
                                         xtol=1e-12,  # Controls absolute tolerance
                                         rtol=1e-12,  # Controls relative tolerance
                                         maxiter=1000)  # Set max iteration
            return theta_solution.root
        else:
            return 0  # Principal point maps to z-axis

    # Vectorize the root finder over R; the coefficient argument (position 1) is
    # excluded from broadcasting.
    vectorized_find_theta = np.vectorize(find_theta_for_R, excluded={1})
    theta = vectorized_find_theta(R, coeffs_KB)

    # compute direction cosines (guarding against division by zero at the principal point)
    X = np.divide(X, R, out=np.zeros_like(X), where=R > 0)
    Y = np.divide(Y, R, out=np.zeros_like(Y), where=R > 0)
    sin_theta = np.sin(theta)
    dirX = X * sin_theta
    dirY = Y * sin_theta
    dirZ = -np.cos(theta)

    # set the out of bound angles so that we can clip them in shaders
    dirX = np.where(abs(theta) >= np.pi, 0, dirX)
    dirY = np.where(abs(theta) >= np.pi, 0, dirY)
    dirZ = np.where(abs(theta) >= np.pi, 1, dirZ)

    return dirX, dirY, dirZ


# -- Project
def create_forward_KB_distortion_given_directions(dirX, dirY, dirZ, Width, Height, fx, fy, cx, cy, coeffs_KB):
    # compute theta between ray and optical axis
    sin_theta = np.sqrt(dirX**2 + dirY**2)
    theta = np.arctan2(sin_theta, -dirZ)

    # apply forward distortion model
    y = poly_KB(theta, coeffs_KB)

    # normalize to render scale
    Rx = y / Width * fx
    Ry = y / Width * fy
    dirX = np.where(sin_theta != 0, dirX * (Rx / sin_theta), dirX)
    dirY = np.where(sin_theta != 0, dirY * (Ry / sin_theta), dirY)

    return dirX + cx, dirY + cy


def create_forward_KB_distortion_octahedral(width, height, Width, Height, fx, fy, cx, cy, coeffs):
    # convert each texel coordinate into a view direction, using octahedral decoding
    dirX, dirY, dirZ = get_octahedral_directions(width, height)

    # project the decoded view direction into NDC
    # TODO replace this call with your own `project` function
    return create_forward_KB_distortion_given_directions(dirX, dirY, dirZ, Width, Height, fx, fy, cx, cy, coeffs)


def generate_hq_fisheye_KB(textureWidth, textureHeight, Width, Height, fx, fy, centerX, centerY, polynomials):
    from datetime import datetime
    # Create date string as part of texture file name
    date_time_string = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")

    # Make sure the output directory exists
    os.makedirs("texture", exist_ok=True)

    # Create encoding for "unproject", i.e. NDC->direction
    dirX, dirY, dirZ = create_backward_KB_distortion(textureWidth, textureHeight, Width, Height, fx, fy, centerX, centerY, polynomials)
    dirX = dirX.astype("float32")
    dirY = dirY.astype("float32")
    dirZ = dirZ.astype("float32")

    # image data has 3 channels
    direction_img_data = np.zeros((textureHeight, textureWidth, 3), dtype=np.float32)
    direction_img_data[:, :, 0] = dirX
    direction_img_data[:, :, 1] = dirY
    direction_img_data[:, :, 2] = dirZ
    save_img(f"texture/fisheye_direction_{textureWidth}x{textureHeight}_unproject_KB_{date_time_string}.exr", direction_img_data)

    # Create encoding for "project", i.e. direction->NDC
    x, y = create_forward_KB_distortion_octahedral(textureWidth, textureHeight, Width, Height, fx, fy, centerX, centerY, polynomials)
    x = x.astype("float32")
    y = y.astype("float32")

    # image data has 2 channels (NDC.xy)
    NDC_img_data = np.zeros((textureHeight, textureWidth, 2), dtype=np.float32)
    NDC_img_data[:, :, 0] = x
    NDC_img_data[:, :, 1] = y

    # save the image as exr format
    save_img(f"texture/fisheye_NDC_{textureWidth}x{textureHeight}_project_KB_{date_time_string}.exr", NDC_img_data)


def main():
    # Generates a high quality example texture based on the OpenCV fisheye (KB) distortion model.

    # Renderer resolution settings as desired
    textureWidth = 3840
    textureHeight = 2560

    # Camera parameters from OpenCV calibration output or other sources
    # Image resolution
    Width = 1920
    Height = 1280

    # Optical center (normalized; (0.5, 0.5) is perfectly centered)
    centerX = 0.5
    centerY = 0.5

    # Focal length
    fx = 731
    fy = 731

    # OpenCV fisheye model coefficients: [1, k1, k2, k3, k4]
    polynomials_KB = [1, -0.054776250681940974, -0.0024398746462049982, -0.001661261528356045, 0.0002956774267707282]
    generate_hq_fisheye_KB(textureWidth, textureHeight, Width, Height, fx, fy, centerX, centerY, polynomials_KB)


if __name__ == "__main__":
    main()

Note

The textures are generated in the EXR format, which can be viewed with apps such as RenderDoc. The OpenEXR package can be installed via pip install OpenEXR.

Shutter#

| Shutter | Description | Units |
|---|---|---|
| Shutter Open | Used with motion blur to control the blur amount; increasing the value delays the shutter opening. | Seconds |
| Shutter Close | Used with motion blur to control the blur amount; increasing the value pushes the shutter close later. | Seconds |
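
A minimal sketch of authoring the shutter interval through USD (assuming a camera prim at /World/Camera; attributes from the standard UsdGeom.Camera schema). A wider open-to-close interval produces more motion blur:

from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("scene.usd")  # assumed stage
camera = UsdGeom.Camera.Get(stage, "/World/Camera")

# Open the shutter at the frame start and close it later to lengthen the blur interval.
camera.GetShutterOpenAttr().Set(0.0)
camera.GetShutterCloseAttr().Set(0.5)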

References#

Kannala, J., and S.S. Brandt. A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, no. 8 (August 2006): 1335–40. https://doi.org/10.1109/TPAMI.2006.153.