.. _isaac_sim_app_tutorial_ros_camera:

=============
Cameras
=============

Learning Objectives
=======================

In this example, we will learn how to

- Add additional cameras to the scene and onto the robot
- Add camera publishers
- Send ground truth synthetic perception data through rostopics

**Prerequisite**

- Completed :ref:`isaac_sim_app_install_ros`: installed ROS, enabled the ROS extension, built the provided *Isaac Sim* ROS workspace, and set up the necessary environment variables.
- It is also helpful to have a basic understanding of `ROS topics `_ and how `publishers and subscribers `_ work.
- Completed the tutorials on :ref:`isaac_sim_app_tutorial_gui_omnigraph` and :ref:`isaac_sim_app_tutorial_gui_camera_sensors`.
- Completed :ref:`isaac_sim_app_tutorial_ros_turtlebot` and :ref:`isaac_sim_app_tutorial_ros_drive_turtlebot`, so that a Turtlebot is ready on the stage.
- ``roscore`` is running.

Camera Publisher
===============================

Setup Cameras
^^^^^^^^^^^^^^^^^^^^

The default camera displayed in the Viewport is the *Perspective* camera. You can verify this with the *Camera* button in the top left corner *inside* the Viewport display. Click the Camera button and you will see a few other preset camera positions: Top, Front, and Right side views.

For the purpose of this tutorial, let's add two stationary cameras, named *Camera_1* and *Camera_2*, viewing the room from two different perspectives. The procedure for adding cameras to the stage can be found in :ref:`isaac_sim_app_tutorial_gui_camera_sensors`. You may want to open additional Viewports to see multiple camera views at the same time. To open an additional Viewport, go to *Window -> Viewport -> Viewport 2*, then select the desired camera view from the *Cameras* button in the upper left corner of the viewport.

.. image:: /content/images/isaac_tutorial_ros_camera_add_viewport.gif
    :align: center

Build the Graph for an RGB publisher
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

#. Open Visual Scripting: *Window > Visual Scripting > Action Graph*.
#. Click on the *New Action Graph* icon in the middle of the Action Graph window, or *Edit Action Graph* if you want to append the camera publisher to an existing action graph.
#. Build an Action Graph with the nodes and connections shown in the following image, using the parameters in the table below.

.. figure:: /content/images/isaac_tutorial_ros_camera_rgb_graph.png
    :align: center
    :width: 800
    :alt: ROS camera RGB

Parameters:

+----------------------+---------------------+----------------+
|Node                  | Input Field         | Value          |
+----------------------+---------------------+----------------+
|Isaac Create Viewport |name                 |New Viewport    |
+----------------------+---------------------+----------------+
|Isaac Set Camera      |cameraPrim           |/World/Camera_1 |
+----------------------+---------------------+----------------+
|ROS1 Camera Helper    |type                 | rgb            |
|                      +---------------------+----------------+
|                      |topicName            | rgb            |
|                      +---------------------+----------------+
|                      |frameId              | turtle         |
+----------------------+---------------------+----------------+

Ticking this graph will automatically create a new viewport titled "New Viewport", assign it to *Camera_1*, and display the viewport.

.. important::

    The publisher is tied to the images rendered in the assigned viewport, so the corresponding viewport must be open and rendering for the publisher to work. If there are multiple Viewport tabs, the ones you wish to publish from must be open and visible, however small, to stream data.

.. _isaac_sim_app_tutorial_ros_camera_graph_explained:

Graph Explained
^^^^^^^^^^^^^^^^^^^

- **Isaac Create Viewport**: Creates a Viewport with a specific name, given as a string.
- **Isaac Get Viewport Render Product**: Outputs the path to the render product prim used to acquire the rendered data.
- **Isaac Set Camera**: Sets the render product's camera to the camera found at the path given by **Get Prim Path**.
- **ROS1 Camera Helper**: Indicates which type of data to publish, and which rostopic to publish it on.

**Camera Helper Node**

The *Camera Helper Node* abstracts a complex postprocessing network away from the user. Once you press *Play* with a Camera Helper Node connected, you may notice that a new graph, ``/Render/PostProcessing/SDGPipeline``, appears in the list of Action Graphs when you click on the icon in the upper left corner of the Action Graph window. This graph is automatically created by the Camera Helper Node. The pipeline retrieves the relevant data from the renderer, processes it, and sends it to the corresponding ROS publisher. This graph is only created in the session you are running. It will not be saved as part of your asset and will not appear in the Stage tree.

Depth and other Perception Ground Truth data
===============================================

In addition to the RGB image, the following synthetic sensor and perception data are also available for any camera:

- Camera Info
- Depth
- Point Cloud
- BoundingBox 2D Tight
- BoundingBox 2D Loose
- BoundingBox 3D
- Semantic labels
- Instance Labels

Each Camera Helper node can only retrieve one type of data. You can indicate which type you wish to assign to the node in the dropdown menu for the ``type`` field in the Camera Helper node's Property tab.

.. note::

    Once you have specified a type for a Camera Helper node and activated it (i.e. started the simulation, so that the underlying SDGPipeline has been generated), you cannot change the type and reuse the node. Either use a new node, or reload your stage and regenerate the SDGPipeline with the modified type.
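When inspecting these topics from the ROS side, it helps to know which message type to expect for each kind of data. The sketch below is a rough, stdlib-only lookup table; both the ``type`` strings and the message choices are assumptions based on common ROS conventions, not values taken from this tutorial, so confirm them against the node's dropdown and ``rostopic info <topic>``.

```python
# A sketch of which ROS 1 message type to expect for a few Camera Helper
# "type" values. Both the keys and the message names are illustrative
# assumptions; verify them with `rostopic info <topic>` on a live system.
EXPECTED_MSG_TYPE = {
    "rgb": "sensor_msgs/Image",
    "depth": "sensor_msgs/Image",
    "camera_info": "sensor_msgs/CameraInfo",
    "pointcloud": "sensor_msgs/PointCloud2",
}

def expected_type(helper_type):
    """Return the expected message type, or None for types not sketched here."""
    return EXPECTED_MSG_TYPE.get(helper_type)
```

For instance, ``expected_type("rgb")`` returns ``"sensor_msgs/Image"``, which is why ``image_view`` can display the ``/rgb`` topic directly in the verification step below.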
An example of publishing multiple rostopics for multiple cameras can be found in our asset ``Isaac/Samples/ROS/Scenario/simple_room_turtlebot.usd``.

Verify ROS connection
^^^^^^^^^^^^^^^^^^^^^^^^

Use ``rostopic echo`` on a published topic (e.g. ``rostopic echo /rgb``) to see the raw information that is being passed along. To view the published image, open a new terminal with the ROS environment sourced, make sure ``roscore`` is running and the simulator is playing, and run ``rosrun image_view image_view image:=/rgb`` to see the image published on the ``/rgb`` topic. It should reflect what is seen in the viewport. If you move the camera around in the viewport, the published image should follow.
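The raw output of ``rostopic echo`` on an image topic follows the ``sensor_msgs/Image`` layout: ``height``, ``width``, ``encoding``, ``step`` (bytes per row), and a flat ``data`` buffer. As a plain-Python illustration of how pixels are laid out in that buffer (no ROS required; the 2x2 image below is made up):

```python
# A made-up 2x2 "rgb8" image flattened row-major, mirroring the layout of
# sensor_msgs/Image: height, width, step (bytes per row), and a flat buffer.
height, width = 2, 2
step = width * 3                         # 3 bytes per pixel for rgb8
data = bytes([255, 0, 0,   0, 255, 0,    # row 0: red, green
              0, 0, 255,   9, 9, 9])     # row 1: blue, dark gray

def pixel(row, col):
    """Return the (r, g, b) tuple stored at (row, col) in the flat buffer."""
    i = row * step + col * 3
    return tuple(data[i:i + 3])

print(pixel(0, 1))  # -> (0, 255, 0): the green pixel in row 0
```

The same indexing applies to a real ``/rgb`` message, except that ``step`` may include row padding, which is why it is carried in the message rather than derived from ``width``.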
We can also view the image in RViz. Open a new terminal with the ROS environment sourced, and type ``rviz`` to open RViz. Add an Image display in the Displays window, and make sure its Image Topic matches the topic name set in the *Camera Helper* node, in this case ``/rgb``.

.. figure:: /content/images/isaac_sim_ros_rviz_image.png
    :align: center
    :width: 800

.. note::

    If your depth image only shows black and white sections, it is likely because part of the field of view has "infinite" depth, which skews the contrast. Adjust the field of view so that the depth range in the image is limited.

Additional Publishing Options
===============================================

To publish images on demand or periodically at a specified rate, you will need to use Python scripting. Go to :ref:`isaac_sim_app_tutorial_ros_python_camera` for examples.

Summary
=======================

This tutorial introduced how to publish camera and perception data in ROS.

Next Steps
^^^^^^^^^^^^^^^^^^^^^^

- Continue on to the next tutorial in our ROS Tutorials series, :ref:`isaac_sim_app_tutorial_ros_sensors`, to learn how to add a lidar sensor to the Turtlebot3.
- Go to :ref:`isaac_sim_app_tutorial_ros_camera_noise` to learn how to publish camera images with noise.

Further Learning
^^^^^^^^^^^^^^^^^^^^^^

- Additional information about synthetic data generation can be found in the Replicator Tutorial Series.
- An example of running a similar environment using the Standalone Python workflow is outlined :ref:`here`.