Spatial Streaming for Omniverse Digital Twins Workflow
In this example, you'll learn how to use Omniverse to author OpenUSD stages that can be manipulated and streamed to a client application running on the Apple Vision Pro. We'll show you how to set up your authoring and streaming environment. Additionally, you'll learn how to get started creating front-end clients for these streams that run natively on the Vision Pro using Swift and SwiftUI.
By combining these technologies, you can create state-of-the-art hybrid applications that take advantage of the unique device capabilities of the Apple Vision Pro while allowing users to interact with an accurate simulation of their digital twin.
We’ll also demonstrate how you can scale your experiences by deploying your OpenUSD content to the cloud via NVIDIA’s Graphics Delivery Network (GDN) so that users of your visionOS application can experience your digital twin without the need for a powerful workstation.
To create hybrid applications for Apple Vision Pro, you need to implement multiple SDKs across multiple platforms and then establish communication between them. Omniverse renders and manipulates your data, while the Apple Vision Pro sends inputs such as head pose and custom JSON events to an Omniverse server. These inputs are mapped to scene logic, which you create using ActionGraph and Python. You'll use ActionGraph to listen for new commands from the visionOS client. When a command arrives, it can be routed to a Python script node, which makes a change to the stage, such as changing a prim's variant or orientation, or moving the user's location.
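As an illustration, here is a minimal sketch of what the Python side of that logic might look like, assuming the client's custom events are delivered to the Kit message bus by your messaging setup. The event name changeVariant, the prim path /World/Purse, and the payload field names are placeholders for this sketch, not part of the sample; adjust them to match your own stage and client messages.

```python
# Minimal sketch: subscribe to a custom client event on the Kit message bus
# and apply the requested variant selection to a prim on the stage.
import carb.events
import omni.kit.app
import omni.usd

# Event type the visionOS client sends (hypothetical name).
CHANGE_VARIANT_EVENT = carb.events.type_from_string("changeVariant")


def _on_change_variant(event: carb.events.IEvent):
    """Apply the variant selection requested by the client."""
    payload = event.payload
    variant_set_name = payload["variantSet"]   # e.g. "material"
    variant_name = payload["variant"]          # e.g. "leather_black"

    stage = omni.usd.get_context().get_stage()
    prim = stage.GetPrimAtPath("/World/Purse")  # placeholder prim path
    if prim.IsValid():
        variant_set = prim.GetVariantSets().GetVariantSet(variant_set_name)
        variant_set.SetVariantSelection(variant_name)


# Keep a reference to the subscription so it isn't garbage collected.
message_bus = omni.kit.app.get_app().get_message_bus_event_stream()
subscription = message_bus.create_subscription_to_pop_by_type(
    CHANGE_VARIANT_EVENT, _on_change_variant
)
```

The same pattern works for other commands: each custom event type gets its own handler, and the handler edits the stage through the USD API.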
To generate these commands on the client, you'll use native visionOS gestures and SwiftUI. SwiftUI menus and buttons can be customized to represent how they will update the scene, such as with a thumbnail of a leather swatch. When a user interacts with them, the Omniverse server responds and displays the result with low latency.
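For reference, the JSON command behind such a swatch button might look like the sketch below. The envelope and field names are illustrative assumptions and must match whatever your Kit-side logic listens for; on the device you would build the equivalent dictionary in Swift, but it is shown in Python here to mirror the server-side sketch above.

```python
import json

# Hypothetical command sent when the user taps a leather swatch button.
# The event type and field names must agree with the Kit application's handler.
command = {
    "event_type": "changeVariant",
    "payload": {
        "variantSet": "material",
        "variant": "leather_black",
    },
}

print(json.dumps(command))
# {"event_type": "changeVariant", "payload": {"variantSet": "material", "variant": "leather_black"}}
```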
You can do all of this development on your local network using a workstation, a Mac, and an Apple Vision Pro. When your experience is ready for users who don't have a local development environment, you can deploy your data to GDN. You'll package your USD data into a single zip file and publish it to GDN. For Apple Vision Pro use cases, you'll use a specific GDN machine configuration for immersive streaming. You'll reserve machines that load Omniverse and your data so that users can open your visionOS client and connect automatically, and you can add more reservations to support more simultaneous users.
This example is made up of several consecutive steps:
Build an Omniverse Kit application to open and stream your USD content, and verify that the Kit application is ready to stream to the headset.
Build the Purse Configurator visionOS application and simulate it in Xcode.
Establish a connection between the visionOS application and the Omniverse Kit application.
Learn how to communicate between the server and client, and add a new feature to the experience.
Rebuild and test the new feature on the device.