Range Sensors

Simulating range-based sensors is done primarily through the Omniverse Isaac Sim range sensor extension. This extension provides the API for creating these sensor prims in a stage, setting their parameters, and querying depth data from them.

The extension is enabled by default. If it is not, enable the omni.isaac.range_sensor extension by selecting Window -> Extensions from the menu at the top of the Isaac Sim window.
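The extension can also be enabled from script. Below is a minimal sketch using Kit's extension manager; it assumes you run it from the Script Editor described later in this section.

import omni.kit.app

# enable the range sensor extension programmatically
ext_manager = omni.kit.app.get_app().get_extension_manager()
ext_manager.set_extension_enabled_immediate("omni.isaac.range_sensor", True)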

Lidar

How to create from UI

You should be able to find the lidar sensor under Create -> Isaac -> Sensors -> Lidar in the drop-down menus at the top of the Isaac Sim window.

The lidar will execute a set of line PhysX traces every tick, so we need a physics scene available on the stage. To create a physics scene, select Create -> Physics -> Physics Scene from the menu bar. You should see PhysicsScene appear in the Stage panel on the right.

Create the lidar by selecting Create -> Isaac -> Sensors -> Lidar -> Rotating. Select the lidar in the Stage panel, then check the “drawLines” box in the Property panel under the Raw USD Properties section. In that same panel, set the “rotationRate” to 1.0 and press the play button.

Note that you can update all of the lidar parameters on the fly while the stage is running. When the rotation rate is set to zero or less, the lidar casts rays in all directions at once, based on its FOV and resolution. Set the “rotationRate” to zero for now.
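These raw USD attributes can also be updated from Python. The sketch below assumes the UI-created lidar lives at /World/Lidar; check the Stage panel for the actual path.

import omni.usd

# fetch the lidar prim and update its raw USD attributes on the fly
stage = omni.usd.get_context().get_stage()
lidar = stage.GetPrimAtPath("/World/Lidar")
lidar.GetAttribute("drawLines").Set(True)
lidar.GetAttribute("rotationRate").Set(0.0)  # zero or less casts all rays at once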

Lidar line traces will ignore anything that doesn’t have a collision API attached to it. Let’s make something that will interact with our lidar sensor. Right click in the viewport and select Create -> Mesh -> Cube. You should see a cube appear in your viewport where you clicked. Select the cube and set its position to (200, 0, 0) in the Property panel. Next, in the Property panel, click the + Add menu and select Physics -> Collider. You can use the mouse to move the cube around and watch the lidar rays interact with it.

For most use cases, the lidar will be attached to another prim (cars, robots, etc.). To do this, we first need a prim to attach the lidar to. Right click in the viewport and select Create -> Mesh -> Cylinder. In the Property panel, set the position of the cylinder to (0, 0, 0). Finally, in the Stage panel, click and drag the lidar onto the cylinder. When you move the cylinder, the lidar prim will move with it. You can also set a transform for the lidar relative to its parent cylinder; this relative transform is preserved whenever the parent cylinder is moved.
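The relative transform can also be set from Python with UsdGeom.XformCommonAPI, the same API used in the segmented point cloud sample below. This sketch assumes the lidar was dragged under the cylinder so that its path is /World/Cylinder/Lidar.

import omni.usd
from pxr import UsdGeom

# offset the lidar 50 units above its parent cylinder;
# the offset is preserved when the cylinder moves
stage = omni.usd.get_context().get_stage()
lidar = stage.GetPrimAtPath("/World/Cylinder/Lidar")
UsdGeom.XformCommonAPI(lidar).SetTranslate((0.0, 0.0, 50.0))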

Python API Documentation

See the API Documentation for usage information.

The lidar Python API can be used to interact with the lidar through scripts and extensions, and it allows you to retrieve the depth data from the last sweep. To demonstrate this API we will use Omniverse Kit’s Python interpreter rather than writing our own extension, so we can focus on the lidar itself.

First, we will need a fresh stage. You can make a new, empty stage by selecting File -> New from the menu bar at the top of Omniverse Isaac Sim. Next, we need to open the Python interpreter: select Window -> Script Editor from the menu bar. A new tab opens in the same frame as the viewport. Drag the Script Editor tab down and dock it in a different frame so that both the viewport and the Script Editor are visible. At any time, you can click the Run button at the bottom of the Script Editor window.

Make sure to start with a clean stage each time you press the Run button.
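If you prefer to reset from script rather than through File -> New, one minimal option is:

import omni.usd

# replace the current stage with a new, empty one
omni.usd.get_context().new_stage()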

First, our imports…

# provides the core omniverse apis
import omni
# used to run sample asynchronously to not block rendering thread
import asyncio
# import the python bindings to interact with lidar sensor
from omni.isaac.range_sensor import _range_sensor
# pxr usd imports used to create cube
from pxr import UsdGeom, Gf, UsdPhysics

Next, we need to get the stage, the lidar interface (so we can interact with the lidar), and the timeline interface (so we can play and pause the stage):

stage = omni.usd.get_context().get_stage()
lidarInterface = _range_sensor.acquire_lidar_sensor_interface()
timeline = omni.timeline.get_timeline_interface()

omni.kit.commands.execute('AddPhysicsSceneCommand', stage=stage, path='/World/PhysicsScene')
lidarPath = "/LidarName"
result, prim = omni.kit.commands.execute(
    "RangeSensorCreateLidar",
    path=lidarPath,
    parent="/World",
    min_range=0.4,
    max_range=100.0,
    draw_points=False,
    draw_lines=True,
    horizontal_fov=360.0,
    vertical_fov=30.0,
    horizontal_resolution=0.4,
    vertical_resolution=4.0,
    rotation_rate=0.0,
    high_lod=False,
    yaw_offset=0.0,
    enable_semantics=False
)

Remember, our lidar uses PhysX for its line traces, so we first need to create a physics scene. When we execute this code, a prim named “PhysicsScene” of type “PhysicsScene” and a prim named “LidarName” of type “Lidar” appear in the Stage panel. Attributes for the lidar can be set directly in the RangeSensorCreateLidar command. For illustrative purposes, the rotation rate is set to zero so that all rays are simulated at once.

We also need an obstacle for our lidar. For this we will create a cube and place it next to the lidar. We also need to increase its size, since by default the cube is one unit on a side (and one unit is 1 cm by default). The obstacle needs the physics collision API applied in order to interact with the lidar line traces.

CubePath = "/World/CubeName"
cubeGeom = UsdGeom.Cube.Define(stage, CubePath)
cubePrim = stage.GetPrimAtPath(CubePath)
cubeGeom.AddTranslateOp().Set(Gf.Vec3f(200.0, 0.0, 0.0))
cubeGeom.CreateSizeAttr(100)
collisionAPI = UsdPhysics.CollisionAPI.Apply(cubePrim)

The lidar needs several cycles of simulation to produce data for the first frame, so we will start the simulation by “playing” it for a second and then pause the stage to populate the depth buffers in the lidar. The simulation needs to run outside of the Python script, so we use an asynchronous function together with asyncio.ensure_future to make sure data is only retrieved after the sweep is complete; otherwise we may try to retrieve data that is not there yet. Note that the editor does not need to be paused to retrieve the buffer: it can be fetched live at any time.

We can pull the depth data from the C++ lidar object wrapped by our Python bindings. The depth data is formatted as uint16 so it can be consumed by the Isaac SDK and various microcontrollers. To convert a value to a fraction of the max depth, divide by 65535.0 (the maximum uint16 value); a conversion sketch follows the code block below.

async def get_lidar_param():
    # let the simulation run for a second so the lidar can complete a sweep
    await asyncio.sleep(1.0)
    timeline.pause()
    # query the buffers from the last completed sweep
    depth = lidarInterface.get_depth_data("/World" + lidarPath)
    zenith = lidarInterface.get_zenith_data("/World" + lidarPath)
    azimuth = lidarInterface.get_azimuth_data("/World" + lidarPath)

    print("depth", depth)
    print("zenith", zenith)
    print("azimuth", azimuth)

timeline.play()
asyncio.ensure_future(get_lidar_param())
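As promised above, here is a minimal sketch of the uint16 conversion. It assumes numpy is available, and it scales by the max_range of 100.0 set when the lidar was created to get a distance in stage units.

import numpy as np

# depth values are fractions of max range encoded as uint16
depth = np.array(lidarInterface.get_depth_data("/World" + lidarPath))
fraction = depth.astype(np.float32) / 65535.0  # fraction of max depth
distance = fraction * 100.0                    # max_range was 100.0 above
print(distance)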

Segmented Point Cloud

We extend the above sample to get segmented point cloud data from the lidar sensor.

  1. Create a new, empty stage by selecting File -> New from the menu bar at the top of Omniverse Isaac Sim.

  2. Open Window -> Script Editor, paste the code below, and click Run to execute it.

# provides the core omniverse apis
import omni
# used to run sample asynchronously to not block rendering thread
import asyncio
# import the python bindings to interact with lidar sensor
from omni.isaac.range_sensor import _range_sensor
# pxr usd imports used to create cube, add semantics
from pxr import UsdGeom, Gf, UsdPhysics, Semantics

stage = omni.usd.get_context().get_stage()
lidarInterface = _range_sensor.acquire_lidar_sensor_interface()
timeline = omni.timeline.get_timeline_interface()

omni.kit.commands.execute('AddPhysicsSceneCommand',stage = stage, path='/World/PhysicsScene')
lidarPath = "/LidarName"

# Create lidar prim
result, prim = omni.kit.commands.execute(
    "RangeSensorCreateLidar",
    path=lidarPath,
    parent="/World",
    min_range=0.4,
    max_range=100.0,
    draw_points=True,
    draw_lines=False,
    horizontal_fov=360.0,
    vertical_fov=60.0,
    horizontal_resolution=0.4,
    vertical_resolution=0.4,
    rotation_rate=0.0,
    high_lod=True,
    yaw_offset=0.0,
    enable_semantics=True
)
UsdGeom.XformCommonAPI(prim).SetTranslate((200.0, 0.0, 0.0))

# Create a cube, sphere, add collision and different semantic labels
primType = ["Cube", "Sphere"]
for i in range(2):
    prim = stage.DefinePrim("/World/"+primType[i], primType[i])
    UsdGeom.XformCommonAPI(prim).SetTranslate((-200.0, -200.0 + i * 400.0, 0.0))
    UsdGeom.XformCommonAPI(prim).SetScale((100, 100, 100))
    collisionAPI = UsdPhysics.CollisionAPI.Apply(prim)

    # Add semantic label
    sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
    sem.CreateSemanticTypeAttr()
    sem.CreateSemanticDataAttr()
    sem.GetSemanticTypeAttr().Set("class")
    sem.GetSemanticDataAttr().Set(primType[i])

# Get point cloud and semantic id for lidar hit points
async def get_lidar_param():
    await asyncio.sleep(1.0)
    timeline.pause()
    pointcloud = lidarInterface.get_point_cloud_data("/World"+lidarPath)
    semantics = lidarInterface.get_semantic_data("/World"+lidarPath)

    print("Point Cloud", pointcloud)
    print("Semantic ID", semantics)

timeline.play()
asyncio.ensure_future(get_lidar_param())

Now, let’s discuss the main differences compared to the previous sample. First, the lidar sensor is created with the enable_semantics flag set to True. Next, a cube and a sphere are created with the physics collision API applied and assigned different semantic labels via Semantics.SemanticsAPI. Finally, we get the point cloud data (lidar hit positions relative to the sensor origin) and the corresponding semantic IDs using get_point_cloud_data and get_semantic_data. A sketch for pairing the two buffers follows below.
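If you want to group hits by label, a minimal sketch like the following can pair the two buffers inside get_lidar_param. It assumes the pointcloud and semantics arrays returned above share their leading dimensions, with the point cloud carrying an extra xyz component; verify the shapes with print before relying on this.

import numpy as np

# flatten both buffers and count hits per semantic id
points = np.array(pointcloud).reshape(-1, 3)
ids = np.array(semantics).reshape(-1)
for sem_id in np.unique(ids):
    print("id", sem_id, "->", int((ids == sem_id).sum()), "hits")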

The segmented point cloud from the lidar sensor should look like the image below.

Segmented Lidar Point Cloud

Ultrasonic

Base Sample app

To run the Ultrasonic sample, navigate to Isaac Examples -> Sensing -> Ultrasonic and click the following three buttons in order:
  • Clean Stage And Spawn an Ultrasonic Sensor

  • Spawn an Obstacle for the Ultrasonic Sensor

  • Click Play

../_images/isaacsim_ultrasonic_menu.png

This spawns an ultrasonic sensor (USS) with obstacles. The flickering white lines show how the ultrasonic sensor fires different beams at different times, based on the UltrasonicFiringGroups. The blue lines show where the sensor hits obstacles.

../_images/isaacsim_ultrasonic_firing.gif

The UltrasonicArray specifies the sensor’s properties, such as the number of emitters, the firing groups, the resolution, and the number of bins per USS envelope. An envelope is the wedge-shaped region in which an emitter casts its beams: horizontalFov and verticalFov control how big the envelope is, while minRange and maxRange specify the emitter’s distance range. The distance from minRange to maxRange is further divided into a number of bins (numBins).
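To make the bin concept concrete, here is a minimal sketch that maps a bin index back to a distance. The minRange and maxRange values are illustrative assumptions; read the actual values from the UltrasonicArray prim’s properties.

# each envelope divides [minRange, maxRange] into numBins equal slices
min_range = 0.4   # illustrative value
max_range = 4.5   # illustrative value
num_bins = 224    # matches the numBins in the example below

bin_width = (max_range - min_range) / num_bins

def bin_to_distance(bin_index):
    # distance to the center of the given bin
    return min_range + (bin_index + 0.5) * bin_width

print(bin_to_distance(150))  # a hit around bin 150, as in the example below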

This sample’s structure can guide you on how to create your own ultrasonic sensor with Create -> Isaac -> Sensors -> Ultrasonic.

../_images/isaacsim_uss_properties.png

To see debug drawing of the hits, enable drawLines and drawPoints. You can also click the “Get data from Ultrasonic Sensor” button to get more USS info.

../_images/isaacsim_uss_getinfo.png

You can mouse over the white bars. They tell you which emitters are seeing the obstacle hits (emitters 8 and 9 here) and how far away those hits are (around bin 150 of the 224 numBins above). The value 126 is the intensity of the hit; enabling useBRDF and useDistAttenuation will affect this intensity.