Review Client Features and UI#

Now that the client is running, it’s a great time to familiarize yourself with some of the features you can build with the SDK.

Portals & Volumes#

Omniverse-rendered content on visionOS can be displayed in Portal mode, which acts as a view into another environment, or Volume mode, which places objects directly into the user’s space.

Portal mode in Omniverse acts like a view into another environment, rendered in stereo with head tracking, making objects appear just beyond the portal opening. This mode is ideal for art-directed environments or when dealing with digital twins that are too large for comfortable viewing in volume mode.

_images/portal-mode.png

Volume mode places objects directly into a user’s space, allowing them to cast shadows and be manipulated with gestures. Users can walk around the volume, approach it closely, and even explore its interior. This mode offers a more immersive and interactive experience with the rendered objects.

_images/volume-mode.png

To switch between portal and volume modes, select the Cube icon in the top left corner of the Configuration window.

_images/switch-mode.png

Buttons and Sliders#

SwiftUI provides effective controls such as buttons, sliders, and lists that can trigger custom JSON messages to Omniverse for dynamic updates and interactions. In the Configuration window, thumbnails represent different variants, such as leather types and metal colors. Selecting a thumbnail sends a JSON message to Omniverse, which updates the variant.
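
The following is a minimal sketch of that pattern. The `OmniverseSession.send(_:)` helper and the `setVariant` message shape are assumptions for illustration; the sample project and SDK define the actual messaging API and schema.

```swift
import SwiftUI

/// Hypothetical message helper; the real sample exposes its own send API.
struct OmniverseSession {
    static func send(_ message: [String: Any]) {
        // Serialize to JSON and forward over the active streaming connection.
        guard let data = try? JSONSerialization.data(withJSONObject: message),
              let json = String(data: data, encoding: .utf8) else { return }
        print("Sending to Omniverse: \(json)") // placeholder for the real transport
    }
}

/// A thumbnail button that switches a variant on the server.
struct VariantThumbnail: View {
    let variantSet: String   // e.g. "leatherType"
    let variant: String      // e.g. "brown"
    let image: String        // asset name for the thumbnail

    var body: some View {
        Button {
            // Assumed message shape; match it to what the stage expects.
            OmniverseSession.send([
                "message": "setVariant",
                "variantSet": variantSet,
                "variant": variant
            ])
        } label: {
            Image(image)
                .resizable()
                .frame(width: 64, height: 64)
        }
    }
}
```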

The Configuration window also includes a Camera icon in the top right corner, which opens a list of camera views. Selecting a view moves the user’s position in the stage, changing the portal view. A lighting intensity slider sends its updated value to the Omniverse server when released (see the sketch below). Buttons can also trigger animations and control object visibility, giving you a robust set of interactive features for visionOS development.

_images/buttons-sliders.png
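
Sending the slider value only on release keeps message traffic down while the user drags. A sketch, reusing the hypothetical `OmniverseSession` helper from above and an assumed `setLightIntensity` message name:

```swift
import SwiftUI

/// Lighting intensity slider that only notifies the server on release,
/// avoiding a flood of messages while the user drags.
struct LightIntensitySlider: View {
    @State private var intensity: Double = 1.0

    var body: some View {
        Slider(value: $intensity, in: 0...2) { editing in
            // `editing` is false when the user lets go of the thumb.
            if !editing {
                OmniverseSession.send([
                    "message": "setLightIntensity",  // assumed message name
                    "value": intensity
                ])
            }
        }
    }
}
```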

Gestures#

Gestures are available in the Volume context. In volume mode, you can pinch and hold with one hand while moving the hand left or right to rotate the Purse; this is the common “click and drag” gesture in visionOS. Likewise, you can pinch and hold with both hands and move them closer together or further apart to scale the Purse. These gestures are calculated locally on the Vision Pro using the built-in visionOS gestures, and the resulting rotation and scale values are sent back to Omniverse to update the stage.
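
A sketch of how those two gestures might map to outgoing messages using SwiftUI’s built-in `DragGesture` and `MagnifyGesture`. The mapping factor and message names are assumptions; the sample defines its own:

```swift
import SwiftUI

/// Volume-mode gesture sketch: a one-hand drag maps to rotation,
/// a two-hand pinch (MagnifyGesture) maps to scale.
struct PurseGestures: ViewModifier {
    @State private var rotationDegrees: Double = 0
    @State private var scale: Double = 1.0

    func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // Map horizontal hand movement to rotation about Y.
                        rotationDegrees = Double(value.translation.width) * 0.5
                    }
                    .onEnded { _ in
                        OmniverseSession.send([
                            "message": "setRotation",  // assumed message name
                            "degrees": rotationDegrees
                        ])
                    }
            )
            .gesture(
                MagnifyGesture()
                    .onChanged { value in
                        scale = Double(value.magnification)
                    }
                    .onEnded { _ in
                        OmniverseSession.send([
                            "message": "setScale",  // assumed message name
                            "scale": scale
                        ])
                    }
            )
    }
}

// Usage: attach to the view hosting the rendered asset.
// myVolumeView.modifier(PurseGestures())
```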

Placement Tool#

The Placement Tool on Apple Vision Pro allows you to position and anchor a virtual asset onto flat, horizontal surfaces within an AR environment. To use it, ensure the configurator project is running on a physical Vision Pro device, as it won’t work in the visionOS Simulator. Activate placement mode by pressing the “Place” button, which brings up a placement puck that follows your headset’s direction. Move the puck to the desired location, then use a pinch gesture to anchor the asset in place. Both the client app and the server-side configurator USD file must have active, connected sessions for placement to succeed.

_images/placement-tool.png
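
Under the hood, placement of this kind relies on horizontal plane detection. A minimal sketch of what that looks like with visionOS ARKit (the sample’s actual implementation will differ):

```swift
import ARKit

/// Minimal horizontal-plane detection loop: the kind of surface data a
/// placement puck can hover over and anchor to. Requires the
/// NSWorldSensingUsageDescription key (see below) and a physical device;
/// plane detection is unavailable in the Simulator.
func detectHorizontalPlanes() async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal])

    try await session.run([planes])

    for await update in planes.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // A candidate surface for anchoring the asset.
            let transform = update.anchor.originFromAnchorTransform
            print("Horizontal plane at: \(transform.columns.3)")
        case .removed:
            print("Plane removed: \(update.anchor.id)")
        }
    }
}
```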

Xcode Project Configuration for the Placement Tool#

  1. In Xcode, select the Configurator project, then choose Targets > Configurator > Info

  2. Verify that the following keys exist:

    • NSWorldSensingUsageDescription: Needed to track model position in the world

    • NSHandsTrackingUsageDescription: Required to accurately place models and to stream hand-tracking data to a remote server

      _images/placement-tool-config.png

      These keys are required for AR placement functionality. They inform the user why the app needs access to world sensing and hand tracking capabilities.
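
As an optional safeguard, you can verify at runtime that both keys made it into the built app. This check is an illustration, not part of the sample:

```swift
import Foundation

/// Debug check that the usage-description keys are present in the bundle.
/// Without them, the system will not grant world-sensing or hand-tracking
/// authorization and AR placement will fail.
func verifyPlacementPermissionKeys() {
    let requiredKeys = [
        "NSWorldSensingUsageDescription",
        "NSHandsTrackingUsageDescription"
    ]
    for key in requiredKeys {
        assert(Bundle.main.object(forInfoDictionaryKey: key) != nil,
               "Missing \(key) in Info.plist; AR placement will not work.")
    }
}
```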