Omniverse Acoustic Extension#
Introduction#
In Omniverse, the Acoustic Sensor extension consists of a WPM-based simulation plugin and supporting utilities. The sensor is configured using USD schemas and simulates high-fidelity acoustic wave propagation via the Wave Propagation Model (WPM).
The extension supports two operating modes:
Basic Mode: Standard acoustic simulation without interference effects (default)
Interference Mode: Includes interference modeling between concurrent transmissions
Note
Naming Convention:
This sensor is now referred to as the “Acoustic Sensor” in documentation and future releases. It was called “Ultrasonic” or “USS” in previous releases.
Schema-Based Configuration#
The acoustic sensor is configured using USD schemas. The sensor prim uses the following schemas:
OmniAcoustic: Base typed schema for acoustic sensors
OmniSensorGenericAcousticWpmAPI: Main single-apply API containing all sensor parameters and default topology (single sensor mount, single receiver group, single firing sequence)
OmniSensorWpmAcousticSensorMountAPI: Multiple-apply API for sensor mount definitions (e.g., m001, m002, …)
OmniSensorWpmAcousticRxGroupAPI: Multiple-apply API for receiver group configurations (e.g., g001, g002, …)
OmniSensorWpmAcousticFiringSeqAPI: Multiple-apply API for firing sequence cycles (e.g., seq001, seq002, …)
Default Configuration:
By default, OmniSensorGenericAcousticWpmAPI provides a minimal single-sensor configuration:
One sensor mount (m001): at origin (0, 0, 0) with no rotation
One receiver group (g001): with single receiver [0]
One firing sequence (seq001): with single event at time 0
Users can extend the configuration by prepending additional multiple-apply API schemas (m002, m003, g002, g003, seq002, etc.) to add more sensor mounts, receiver groups, or firing sequences.
Note
Migration from JSON Configuration:
Previous versions used JSON files for acoustic sensor configuration (e.g., Example.json). The schema-based approach provides type safety, validation, native USD tooling support, better composability, and standardization with other sensors.
Example Acoustic Sensor Prim#
Minimal Configuration#
The simplest way to define an acoustic sensor uses only the main API schema, which provides complete defaults:
```usda
def OmniAcoustic "acoustic_sensor" (
    doc = "Minimal acoustic sensor using WPM with schema defaults"
    prepend apiSchemas = ["OmniSensorGenericAcousticWpmAPI"]
)
{
    string sensorModelPluginName = "rtx.sensors.acoustic.plugin"

    # Optional: Override interference mode (defaults to false)
    bool omni:sensor:WpmAcoustic:enableInterference = false

    # RenderProduct configuration
    def RenderProduct "RenderedOutputs"
    {
        uniform int2 resolution = (1, 1)
        rel camera = </acoustic_sensor>
        rel orderedVars = [
            <SupportedOutputs/RtxSensorGmo>,
            <SupportedOutputs/RtxSensorMetadata>,
        ]

        def Scope "SupportedOutputs"
        {
            def RenderVar "RtxSensorGmo"
            {
                uniform string sourceName = "GenericModelOutput"
            }
            def RenderVar "RtxSensorMetadata"
            {
                string sourceName = "RtxSensorMetadata"
            }
        }
    }
}
```
This minimal configuration provides:
Single sensor mount at origin (0, 0, 0) with no rotation
Single receiver group with receiver [0]
Single firing sequence with one event at time 0
All ray tracing, physical, and signal processing parameters set to schema defaults
Multi-Mount Configuration#
To add multiple sensor mounts, receiver groups, and firing sequences, prepend additional multiple-apply API schemas and define their attributes:
```usda
def OmniAcoustic "acoustic_sensor" (
    doc = "Multi-mount acoustic sensor configuration"
    prepend apiSchemas = [
        "OmniSensorGenericAcousticWpmAPI",
        "OmniSensorWpmAcousticSensorMountAPI:m002",
        "OmniSensorWpmAcousticRxGroupAPI:g002",
        "OmniSensorWpmAcousticFiringSeqAPI:seq002"
    ]
)
{
    string sensorModelPluginName = "rtx.sensors.acoustic.plugin"

    # Override default mount positions (m001 provided by schema at origin)
    float3 omni:sensor:WpmAcoustic:sensorMount:m001:position = (4.0, 0.5, 0.5)
    float3 omni:sensor:WpmAcoustic:sensorMount:m001:rotation = (0.0, -1.0, 0.0)

    # Add second mount (m002 prepended above)
    float3 omni:sensor:WpmAcoustic:sensorMount:m002:position = (4.0, -0.5, 0.5)
    float3 omni:sensor:WpmAcoustic:sensorMount:m002:rotation = (0.0, -1.0, 0.0)

    # Override default receiver group (g001 provided by schema)
    uint[] omni:sensor:WpmAcoustic:rxGroup:g001:receiverIndices = [0, 1]

    # Add second receiver group (g002 prepended above)
    uint[] omni:sensor:WpmAcoustic:rxGroup:g002:receiverIndices = [0, 1]

    # Override default firing sequence (seq001 provided by schema)
    float[] omni:sensor:WpmAcoustic:firingSeq:seq001:eventTimeNs = [0.0, 5000.0]
    uint[] omni:sensor:WpmAcoustic:firingSeq:seq001:txSensorId = [0, 1]
    uint[] omni:sensor:WpmAcoustic:firingSeq:seq001:rxGroupId = [0, 1]
    uint[] omni:sensor:WpmAcoustic:firingSeq:seq001:channel = [0, 1]

    # (RenderProduct configuration as above)
}
```
Schema Attributes Reference#
The following tables document the main schema attributes for the acoustic sensor.
Sensor Identification
| Attribute | Type | Description | Default |
|---|---|---|---|
| tickRate | float | Sensor update rate in Hz | 30.0 |
| modelName | string | Sensor model name | “AcousticWPM” |
| modelVersion | string | Sensor model version | “0.0.0” |
| modelVendor | string | Sensor vendor name | “NVIDIA” |
| marketName | string | Sensor market name | “Generic” |
Ray Tracing Parameters (Runtime updatable)
| Attribute | Type | Description | Default |
|---|---|---|---|
| azSpanDeg | float | Azimuth span for ray tracing in degrees | 90.0 |
| elSpanDeg | float | Elevation span for ray tracing in degrees | 90.0 |
| raysPerDeg | uint | Ray density per degree | 3 |
| traceTreeDepth | uint | Maximum number of ray bounces | 2 |
Physical Sensor Parameters (Runtime updatable)
| Attribute | Type | Description | Default |
|---|---|---|---|
| membraneDiameter | float | Membrane diameter in meters | 0.03 |
| centerFrequency | float | Signal center frequency in Hz | 51200.0 |
| bandwidth | float | Signal bandwidth in Hz | 3000.0 |
| pulseDuration | float | Duration of pulse in seconds | 0.0025 |
| pulsePower | float | Dimensionless pulse power | 513.57 |
| sampleDuration | float | Time sampling of waveform in seconds | 0.0001024 |
Close Range Parameters (Runtime updatable)
The default values in the table below are rounded for display.
| Attribute | Type | Description | Default |
|---|---|---|---|
| closeRange | float | Close range threshold in meters | 1.42 |
| closeRangeDecay | float | Distance beyond which close range amplification starts decaying | 1.26 |
| closeIndirectAmplBase | float | Base amplitude for indirect close range reflections | 1.12 |
| closeDirectAmplBase | float | Base amplitude for direct close range reflections | 1.39 |
| closeIndirectAmpl | float | Amplitude multiplier for indirect close range reflections | 17.64 |
| closeDirectAmpl | float | Amplitude multiplier for direct close range reflections | 12.66 |
Noise Parameters (Runtime updatable)
| Attribute | Type | Description | Default |
|---|---|---|---|
| noiseMin | float | Dimensionless minimum amplitude of noise | 0.3 |
| noiseMax | float | Dimensionless maximum amplitude of noise | 1.7 |
Signal Processing Parameters
| Attribute | Type | Description | Default |
|---|---|---|---|
| signalMode | token | Signal mode: “CHIRP” or “AM”. Runtime updatable | “CHIRP” |
| enableInterference | bool | Enable interference model. Not runtime updatable | false |
| gainCoeffs | float[] | Gain polynomial coefficients (7 elements). Runtime updatable | Schema default |
| directivityCoeffs | float[] | Directivity polynomial coefficients (7 elements). Runtime updatable | Schema default |
Note
For gainCoeffs and directivityCoeffs, the exact default arrays are defined in the USD schema (OmniSensorGenericAcousticWpmAPI); see the schema source for the full 7-element values.
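The schema does not state here what variable the 7-element polynomials are evaluated over. Purely as an illustration, assuming the coefficients are ordered from the constant term upward and evaluated over some normalized argument (e.g. an angle), such a polynomial can be evaluated efficiently with Horner's method:

```python
def eval_poly7(coeffs, x):
    """Evaluate a 7-coefficient polynomial c0 + c1*x + ... + c6*x**6
    using Horner's method. The coefficient ordering and the meaning
    of x are assumptions, not taken from the schema."""
    if len(coeffs) != 7:
        raise ValueError("expected exactly 7 coefficients")
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# Hypothetical coefficients, for illustration only
gain = eval_poly7([1.0, 0.0, -0.5, 0.0, 0.0, 0.0, 0.0], 0.5)
```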
Sensor Mount Parameters (Multiple-Apply API, Runtime updatable)
Each sensor mount is defined using the OmniSensorWpmAcousticSensorMountAPI with an instance name (e.g., m001, m002).
| Attribute | Type | Description | Default |
|---|---|---|---|
| position | float3 | Sensor mount position in meters [x, y, z] | (0, 0, 0) |
| rotation | float3 | Sensor mount rotation in degrees [roll, pitch, yaw] | (0, 0, 0) |
Receiver Group Parameters (Multiple-Apply API, Runtime updatable)
Receiver groups define collections of sensor mounts that act as receivers for acoustic transmissions. Each group is identified by an instance name (e.g., g001, g002) and specifies which sensor mounts (by their index) belong to that group.
When a firing event references a receiver group (via rxGroupId), the WPM model computes signal propagation from the transmitter to all sensor mounts in that group. This allows modeling scenarios like:
Single receiver listening to a transmission: receiverIndices = [0]
Multiple receivers listening together: receiverIndices = [0, 1, 2]
Selective receiver subsets for different firing patterns
Each receiver group is defined using the OmniSensorWpmAcousticRxGroupAPI with an instance name (e.g., g001, g002).
| Attribute | Type | Description | Default |
|---|---|---|---|
| receiverIndices | uint[] | Array of sensor mount indices forming this group | [0] |
Firing Sequence Parameters (Multiple-Apply API, Runtime updatable)
Firing sequences define the temporal pattern of acoustic transmissions and receptions. Each firing sequence is a repeating cycle of firing events that specifies when transmitters fire, which receivers listen, and which output channel to use.
Each firing event within a sequence is defined by four parallel arrays (all must have the same length):
eventTimeNs: When the event occurs (nanosecond offset within the cycle)
txSensorId: Which sensor mount acts as the transmitter (index into sensor mounts array)
rxGroupId: Which receiver group listens (index into receiver groups array)
channel: Which output channel the resulting signal way is assigned to
For example, a firing sequence with 3 events might fire from sensor mount 0 at time 0, then sensor mount 2 at 5000 ns, then sensor mount 4 at 10000 ns, each using different receiver groups and channels. The WPM model simulates acoustic wave propagation for each event and outputs the resulting signal ways.
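Because the four arrays are parallel, a length mismatch or out-of-range index in one of them silently misaligns every event. A minimal consistency check one might run before authoring the attributes (the function and the bounds it enforces are illustrative, not part of the extension API):

```python
def check_firing_seq(event_time_ns, tx_sensor_id, rx_group_id, channel,
                     num_mounts, num_rx_groups):
    """Validate that the four parallel firing-sequence arrays describe
    a consistent set of firing events."""
    n = len(event_time_ns)
    if not (len(tx_sensor_id) == len(rx_group_id) == len(channel) == n):
        raise ValueError("all four firing-sequence arrays must have the same length")
    if any(t < 0 for t in event_time_ns):
        raise ValueError("event times must be non-negative nanosecond offsets")
    if any(tx >= num_mounts for tx in tx_sensor_id):
        raise ValueError("txSensorId must index an existing sensor mount")
    if any(rx >= num_rx_groups for rx in rx_group_id):
        raise ValueError("rxGroupId must index an existing receiver group")
    return n  # number of firing events in the cycle

# Example: the 3-event sequence described above
n_events = check_firing_seq([0.0, 5000.0, 10000.0], [0, 2, 4], [0, 1, 0],
                            [0, 1, 2], num_mounts=5, num_rx_groups=2)
```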
Each firing sequence cycle is defined using the OmniSensorWpmAcousticFiringSeqAPI with an instance name (e.g., seq001, seq002).
| Attribute | Type | Description | Default |
|---|---|---|---|
| eventTimeNs | float[] | Time offset in nanoseconds for each firing event | [0.0] |
| txSensorId | uint[] | Transmitter sensor mount index for each event | [0] |
| rxGroupId | uint[] | Receiver group index for each event | [0] |
| channel | uint[] | Channel ID for each event | [0] |
Runtime Parameter Updates#
The acoustic sensor supports dynamic parameter updates at runtime, allowing sensor reconfiguration without restarting the simulation. Nearly all parameters can be updated at runtime, with one exception.
Runtime Updatable Parameters:
Ray tracing parameters (azSpanDeg, elSpanDeg, raysPerDeg, traceTreeDepth)
Physical parameters (membraneDiameter, centerFrequency, bandwidth, pulseDuration, pulsePower, sampleDuration)
Close range parameters (all closeRange* and closeIndirect*/closeDirect* parameters)
Noise parameters (noiseMin, noiseMax)
Signal processing (signalMode, gainCoeffs, directivityCoeffs; changing signalMode triggers full reinitialization)
Sensor mount positions and rotations (all sensor mount API instances)
Receiver groups (receiverIndices arrays)
Firing sequences (all firing sequence API instance arrays)
Non-Updatable Parameters:
The following parameter cannot be changed at runtime and must be set when the sensor is created:
enableInterference: Operating mode (basic vs. interference) is fixed at sensor creation
Warning
When parameters requiring memory reallocation are updated (e.g., changing firing sequences or adding sensor mounts), the sensor automatically handles reallocation. This may cause a brief processing delay during the update.
Operating Modes#
The acoustic sensor supports two operating modes, selected via the enableInterference parameter:
- Basic Mode (enableInterference = false)
Standard acoustic simulation without interference effects between concurrent transmissions. This is the default mode and is suitable for most use cases. It provides high performance while maintaining accurate signal propagation modeling.
- Interference Mode (enableInterference = true)
Includes interference modeling between concurrent transmissions from multiple sensor mounts. This mode is more computationally expensive but provides higher fidelity when multiple sensors transmit simultaneously and their signals may interfere.
Note
The enableInterference parameter cannot be changed at runtime. The operating mode must be set when the sensor prim is created and remains fixed for the lifetime of that sensor instance.
Output Formats#
Understanding Signal Ways#
The fundamental output unit of the acoustic sensor is a signal way. A signal way represents the acoustic waveform received at one receiver from one transmitter on a specific channel. It consists of:
Metadata: Transmitter ID, receiver ID, channel ID, and time offset
Waveform samples: A sequence of amplitude values representing the received acoustic signal over time
When a firing event occurs (as defined in the firing sequences), the WPM model simulates acoustic wave propagation from the transmitting sensor mount to each receiver in the specified receiver group. Each transmission-to-receiver pair produces one signal way containing the sampled waveform.
Example: If a firing event has:
Transmitter: sensor mount 0
Receiver group: g001 with receivers [0, 1, 2]
Channel: 0
The model will generate 3 signal ways: one for each receiver (0→0, 0→1, 0→2), all on channel 0. Each signal way contains the waveform samples showing how the acoustic pulse propagated from the transmitter to that specific receiver.
The total number of signal ways per frame depends on the firing sequence configuration. A complex firing sequence with multiple events and large receiver groups will produce more signal ways than a simple sequence.
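The count follows directly from the configuration: each firing event contributes one signal way per receiver in the receiver group it references. A small sketch (function and variable names are illustrative, not part of the extension API):

```python
def signal_ways_per_cycle(rx_group_ids, receiver_groups):
    """Total signal ways produced by one firing-sequence cycle.

    rx_group_ids    -- rxGroupId array of the firing sequence (one entry per event)
    receiver_groups -- list of receiverIndices arrays, indexed by group ID
    """
    return sum(len(receiver_groups[g]) for g in rx_group_ids)

# Example from above: one event transmitting to group g001 = [0, 1, 2],
# plus a second event transmitting to a single-receiver group [3]
total = signal_ways_per_cycle([0, 1], [[0, 1, 2], [3]])  # 3 + 1 = 4
```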
Output Format Types#
The acoustic sensor provides multiple output formats to support different use cases and data access patterns.
GenericModelOutput (GMO)#
The traditional unified output buffer containing all acoustic sensor data. This format provides a comprehensive output structure with all signal way data and metadata in a single buffer.
Configuring GMO Output:
```usda
def RenderProduct "RenderedOutputs"
{
    rel orderedVars = [
        <SupportedOutputs/RtxSensorGmo>,
        <SupportedOutputs/RtxSensorMetadata>,
    ]

    def Scope "SupportedOutputs"
    {
        def RenderVar "RtxSensorGmo"
        {
            uniform string sourceName = "GenericModelOutput"
        }
        def RenderVar "RtxSensorMetadata"
        {
            string sourceName = "RtxSensorMetadata"
        }
    }
}
```
GMO Data Structure
The GMO buffer contains all signal ways for the current frame, organized with metadata and waveform samples. The number of signal ways and samples per signal way vary based on the sensor’s firing sequence configuration.
The auxiliary data structure (USSAuxiliaryData) contains:
| Field Name | Type | Description |
|---|---|---|
| numSgws | uint32_t | Number of signal ways |
| numSamplesPerSgw | uint32_t | Number of samples in each signal way |
The BasicElements structure should be interpreted as follows:
| Field Name | Type | Description |
|---|---|---|
| timeOffsetNs | int32_t | Time offset of the signal way in nanoseconds |
| x | float | Transmitting sensor ID |
| y | float | Receiving sensor ID |
| z | float | Channel ID |
| scalar | float | Amplitude value in sample |
| flags | uint8_t | Element status flags |
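The exact binary layout of the GMO buffer is defined by the plugin and the GenericModelOutput headers. Purely as a hypothetical sketch of how one element with the fields above might be decoded with Python's struct module (the field order, packing, and absence of padding here are assumptions; consult the shipped parser for the real format):

```python
import struct

# Hypothetical packed little-endian layout mirroring the BasicElements
# fields above: int32 timeOffsetNs, float x (tx ID), float y (rx ID),
# float z (channel), float scalar (amplitude), uint8 flags.
# Real buffers may differ in ordering, padding, and alignment.
ELEMENT_FMT = "<iffffB"
ELEMENT_SIZE = struct.calcsize(ELEMENT_FMT)

def decode_element(buf, offset=0):
    t, x, y, z, scalar, flags = struct.unpack_from(ELEMENT_FMT, buf, offset)
    return {"timeOffsetNs": t, "txId": x, "rxId": y,
            "channelId": z, "amplitude": scalar, "flags": flags}

# Round-trip a synthetic element
raw = struct.pack(ELEMENT_FMT, 5000, 0.0, 1.0, 0.0, 0.42, 1)
elem = decode_element(raw)
```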
A sample parser for the generic dump format can be found in: kit/source/extensions/omni.sensors.nv.acoustic/python/parse.py
Granular AOVs#
Individual tensor outputs for flexible data access. The acoustic sensor supports granular AOVs (Arbitrary Output Variables) that provide individual data channels. These can be requested as separate RenderVars in addition to or instead of the combined GenericModelOutput.
Available Granular AOVs:
| AOV Name | Data Type | Buffer Semantics | Description |
|---|---|---|---|
| TransmitterIds | float | 1D Array | Transmitter sensor ID for each element |
| ReceiverIds | float | 1D Array | Receiver sensor ID for each element |
| ChannelIds | float | 1D Array | Channel ID for each element |
| Amplitudes | float | 1D Array | Signal amplitude values for each sample |
| TimeOffsetNs | int32 | 1D Array | Time offset in nanoseconds for each element |
| Flags | uint8 | 1D Array | Element status flags |
| SignalWayShape | uint32 | 1D Array (2 elements) | Signal way dimensions: [numSignalWays, numSamplesPerSignalWay] |
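SignalWayShape lets consumers recover the 2D structure of the flat Amplitudes array. A sketch in plain Python (with NumPy one would use reshape; the array contents here are synthetic):

```python
def amplitudes_by_signal_way(amplitudes, signal_way_shape):
    """Split the flat Amplitudes AOV into per-signal-way sample lists
    using the [numSignalWays, numSamplesPerSignalWay] shape AOV."""
    num_sgws, samples_per_sgw = signal_way_shape
    if len(amplitudes) != num_sgws * samples_per_sgw:
        raise ValueError("Amplitudes length does not match SignalWayShape")
    return [amplitudes[i * samples_per_sgw:(i + 1) * samples_per_sgw]
            for i in range(num_sgws)]

# Synthetic example: 2 signal ways, 3 samples each
ways = amplitudes_by_signal_way([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], [2, 3])
```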
Configuring Granular AOV Output:
```usda
def RenderProduct "RenderedOutputs"
{
    rel orderedVars = [
        <SupportedOutputs/TransmitterIds>,
        <SupportedOutputs/ReceiverIds>,
        <SupportedOutputs/ChannelIds>,
        <SupportedOutputs/Amplitudes>,
        <SupportedOutputs/TimeOffsetNs>,
        <SupportedOutputs/Flags>,
        <SupportedOutputs/SignalWayShape>,
    ]

    def Scope "SupportedOutputs"
    {
        def RenderVar "TransmitterIds"
        {
            string sourceName = "TransmitterIds"
        }
        def RenderVar "ReceiverIds"
        {
            string sourceName = "ReceiverIds"
        }
        def RenderVar "ChannelIds"
        {
            string sourceName = "ChannelIds"
        }
        def RenderVar "Amplitudes"
        {
            string sourceName = "Amplitudes"
        }
        def RenderVar "TimeOffsetNs"
        {
            string sourceName = "TimeOffsetNs"
        }
        def RenderVar "Flags"
        {
            string sourceName = "Flags"
        }
        def RenderVar "SignalWayShape"
        {
            string sourceName = "SignalWayShape"
        }
    }
}
```
Note
You can request any combination of granular AOVs. Only the requested AOVs will be included in the output, reducing memory footprint for applications that don’t need all data channels.
Plugins#
WpmAcousticPlugin#
Introduction
This is the generic acoustic sensor simulation plugin using WPM (Wave Propagation Model). The plugin provides high-fidelity acoustic wave propagation modeling with support for multiple ray bounces, material interactions, and both basic and interference modes.
Plugin Name
rtx.sensors.acoustic.plugin
Configuration
The sensor is configured using USD schema attributes as described in the “Schema-Based Configuration” section above. All parameters are specified through the OmniSensorGenericAcousticWpmAPI and related multiple-apply APIs.
Acoustic Data Converter Plugin#
Introduction
This is a data converter plugin implementing the conversion of the binary model output into an acoustic output object.
Plugin Name
omni.sensors.nv.acoustic.data_converter.plugin
Usage
Using this plugin is straightforward and can be illustrated via the following code snippet:
```cpp
void* modelOutputBinaryBuffer = getBufferSomehow();
omni::sensors::GenericModelOutput convertedBuffer =
    carb::getCachedInterface<omni::sensors::acoustic::IAcousticCycleConverter>()->convertBuffer(modelOutputBinaryBuffer);
```
Omnigraph Nodes#
The extension provides OmniGraph nodes for transcoding acoustic sensor data into various formats and for visualization.
TranscoderAcoustic#
Introduction
The Transcoder node encodes the acoustic sensor data stream into a specified format and writes it into a file.
Omnigraph Node Name
omni.sensors.nv.acoustic.TranscoderAcoustic
Parameters & Attributes
| Parameter Name | Description | Type | Default | Value Range |
|---|---|---|---|---|
| signalScaler | Scaling factor applied to signals (amplification) | float | 1.0 | |
| timestampMode | Mode for the sent timestamp | string | “VENDOR” | |
| dumpPackets | Dump packets into a file | bool | false | |
| vendor | Vendor of the sensor model | string | “generic” | “generic” |
| fileFormat | File format to dump | string | “bin” | “bin”, “pcap” |
| fileName | Name of the file to write binary dumps into | string | “acoustic.h5” | |
| format | Packet format | string | “” | |
Acoustic Encoder Plugin#
Introduction
This plugin contains packet encoders for acoustic sensor data (generic and vendor-specific formats).
Plugin Name
omni.sensors.nv.acoustic.acoustic_encoder.plugin