wrnch AI

Overview

You can simplify the process of creating animated 3D characters that mimic human movement using a human-centric computer vision (AI) platform delivered by wrnch Inc. To use this platform, you don’t need your actors to wear hardware sensors. You can simply capture motion with a camera.

_images/ext_wrnch_overview-diagram.png

The Omniverse wrnch Extension Includes

  • wrnch CaptureStream is a free application that you can download to your mobile device to perform marker-less motion capture. With this application, you can capture an actor’s movement without requiring the person to wear tracking sensors. When CaptureStream is running, you will see the actor’s movements mimicked by an avatar, enabling you to capture the motion you want to transform into 3D animation in your application. As you capture human performance using wrnch CaptureStream, the wrnch Engine detects humans in the video feed and uses powerful human pose estimation algorithms to track skeletal joints to infer human poses and motion. The wrnch Engine outputs 2D and 3D pose data using the wrnch eXchange (wrX) data protocol.

  • wrnch AI Pose Estimator extension is an Omniverse extension. With the extension, you can search for and find a wrnch CaptureStream application running on a local network. As the human pose data is transmitted to Omniverse in real time, the extension translates the wrX data stream into USD (Universal Scene Description), the 3D scene description and file format developed by Pixar for content interchange, where it can be mapped to a 3D virtual character in Omniverse.
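
Conceptually, the wrX-to-USD translation step maps each streamed 3D joint position onto a joint transform in the USD scene. The following Python sketch illustrates the idea; the joint names, pose layout, and prim paths are illustrative assumptions, not the actual wrnch or USD APIs.

```python
# Hypothetical sketch: mapping a wrX-style 3D pose sample onto USD-style
# joint transforms. Joint names, the pose layout, and prim paths are
# assumptions for illustration only.

# A wrX-like pose sample: joint name -> (x, y, z) position in meters.
pose_sample = {
    "pelvis": (0.0, 0.95, 0.0),
    "left_knee": (0.12, 0.50, 0.02),
    "right_knee": (-0.12, 0.51, 0.01),
}

def pose_to_usd_ops(pose, skeleton_root="/World/Sol/Skeleton"):
    """Translate a pose sample into (prim_path, translation) pairs that a
    USD layer could author as xformOp:translate values on each joint."""
    ops = []
    for joint, (x, y, z) in sorted(pose.items()):
        prim_path = f"{skeleton_root}/{joint}"
        ops.append((prim_path, (x, y, z)))
    return ops

for path, translation in pose_to_usd_ops(pose_sample):
    print(path, translation)
```

In a real extension this mapping would be done through the USD (pxr) API against the character's skeleton; the sketch only shows the shape of the data flowing from the wrX stream into the scene.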

To get up and running with the tools and extension, and add lifelike 3D characters that enhance your storytelling, complete the following steps.

  1. Set Up wrnch CaptureStream

  2. Set Up wrnch AI Pose Estimator extension for Omniverse

  3. Stream AI Pose Data to Animate 3D Characters in Real Time


User Manual

wrnch CaptureStream allows you to capture the human motion that you’d like to reproduce in an application.

Prerequisites

Supported devices

  • Apple iPhone XS/XR and later devices with A12 or later processors

  • Apple iPad 2019 or later

  • Platform/OS: Apple iOS 14 or later

  • Phone/Tablet mount; for example, tripod with iPhone adapter

CaptureStream Setup

  1. Download the wrnch CaptureStream application from the AppStore to capture and digitize human motion using just an iPhone or iPad.

  2. When you open the application:
    • Select OK to allow CaptureStream to find and connect to devices on your local network.

    • Select OK to allow CaptureStream to access the camera to detect human poses.

  3. Press Start, and toggle on “Pro Mode” to use the rear-facing camera.

  4. In landscape orientation:

    • Point the device at an angle towards the floor to establish a ground plane. Walk around, scanning the performance area. You will see a grid appear as an overlay on the floor. Once the grid covers the area of the floor that the actor will perform on, tap the grid to select it. The grid changes from gray to blue.

    • Make sure the blue grid covers all the space for your intended performance.

  5. Wait up to 30 seconds while the human pose estimation models are loaded onto the device.

Set up the performance area

For marker-less motion capture, you need to set up a performance area.

  1. Choose a well-lit, clear space where an actor will perform the motion that you want to mimic in your Omniverse application. A space of about 10 feet x 10 feet (3 m x 3 m) works well.

  2. Mount the camera device in landscape orientation in a fixed position, for example on a tripod. The camera needs to stay still as you capture the performance area where the actor will move. Position the mounted camera far enough from the performance space to capture the actor’s whole body. Take a test video using the rear-facing camera to make sure it can see the entire performance.

Launch and calibrate

The wrnch CaptureStream application now needs to be launched and calibrated for best results.

  1. When you start wrnch CaptureStream, you are first asked to give permission for the application to access the device’s camera and network. Once you’ve done that, click Start on the landing screen.

  2. On the launch screen, use the slider to select Pro Mode to calibrate and enable root motion. This enables you to maximize the fidelity of the performance capture by tracking the actor as they move around the space. Click Next.

  3. At this point, you need to inform CaptureStream about the floor of your performance space. Tilt the camera at a 45-degree angle to the floor and sweep it across the floor. CaptureStream will display a blue grid as it detects the floor and will then instruct you to position the camera in a stable position. Click Next.

  4. Instruct the actor to walk into the scene from the side, facing sideways, to the center of the performance area so that CaptureStream can estimate body size. As long as wrnch CaptureStream does not detect a face, the view looks similar to the following image.

    _images/ext_wrnch_calibrate.png
  5. To finalize initialization, ask the actor to hold an A-pose (arms out at an angle to each side while standing straight) while facing the camera. When CaptureStream has calibrated the body size, the overlay turns green as shown in the following image.

_images/ext_wrnch_calibrate-2.png

At this point, CaptureStream is ready to stream 3D poses of the actor’s performance across your local network. Next you need to set up the wrnch AI Pose Estimator extension in Omniverse so that it can receive the human pose information.

AI Pose Estimator Setup

The wrnch AI Pose Estimator extension for Omniverse now needs to be set up.

The wrnch AI Pose Estimator extension allows you to see devices running the wrnch CaptureStream application on your local network. Once you connect to an app and select the character you want to animate, the extension is ready to receive streaming pose data. The AI Pose Estimator extension supports three different character sets: Sol, GenericRig, and Squad.
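
The device-discovery step can be pictured as CaptureStream devices announcing themselves on the local network and the extension parsing those announcements into a selectable list. The Python sketch below assumes a simple JSON announcement format purely for illustration; the actual wrnch discovery protocol is not documented in this guide.

```python
# Hypothetical sketch of local-network discovery. The announcement format
# below ({"app": ..., "name": ..., "port": ...}) is an assumption made
# for illustration; it is not the real wrnch discovery message.
import json

def parse_announcement(datagram: bytes):
    """Parse a JSON announcement datagram; return (device_name, port)
    for CaptureStream devices, or None for anything else."""
    msg = json.loads(datagram.decode("utf-8"))
    if msg.get("app") != "CaptureStream":
        return None
    return (msg["name"], msg["port"])

# Example: a datagram a device might broadcast on the LAN.
sample = json.dumps(
    {"app": "CaptureStream", "name": "iPhone-Studio", "port": 8800}
).encode("utf-8")
print(parse_announcement(sample))
```

The extension would collect such announcements (e.g. from UDP broadcasts or mDNS) and present the matching devices in its CaptureStream source dropdown.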

Prerequisites

  • NVIDIA Omniverse

  • NVIDIA Omniverse Machinima

Launch NVIDIA Omniverse Machinima

  1. Launch NVIDIA Omniverse Machinima.

  2. Load a scene to which you want to add characters, materials, and so on.

  3. Load the characters you want to animate in your story. The AI Pose Estimator extension supports three different character sets: Sol, GenericRig, and Squad as shown in the following image.

_images/ext_wrnch_characters.png

Set Up AI Pose Estimator

Select and set up the wrnch AI Pose Estimator extension.

_images/ext_wrnch_icon.png
  1. From the Extensions catalog, select the wrnch AI Pose Estimator extension. The extension can be found by entering “wrnch” in the Search menu.

    _images/ext_wrnch_extension.png
  2. Enable the wrnch AI Pose Estimator extension by clicking the toggle switch as shown. The red light indicates that further setup is needed. You can dock the pop-up anywhere you’d like in the Omniverse app.

    _images/ext_wrnch_extension-2.png
  3. Select “Click here to get what you need to wrnch it”.

    _images/ext_wrnch_extension-3.png
  4. In the CaptureStream source field, press the down-arrow to find and select the application streaming pose data across the local area network. If you do not see any applications listed, ensure that you have set up and started wrnch CaptureStream as described in “Set Up wrnch CaptureStream”.

    _images/ext_wrnch_extension-4.png

Note

You also need to make sure your Omniverse application has local network access rights. This is controlled by Windows Firewall: navigate to Windows Defender Firewall -> Allow an app or feature through Windows Defender Firewall, locate the Omniverse app, and make sure it has local network access.

  5. Choose the skeleton you want to drive by clicking on the character in the scene (for example, the Sol character as highlighted in yellow in the following figure) and then selecting “Use highlighted skeleton”.

_images/ext_wrnch_extension-4.png

At this point, the wrnch setup indicator turns green, which means that the extension can start receiving AI pose data streaming from a camera in real-time.

Stream AI Pose Data

Now let’s stream the pose data to animate 3D characters in real time.

Once everything is set up, you use both the wrnch CaptureStream application and the wrnch AI Pose Estimator extension to stream human pose metadata from a camera and animate 3D characters in an Omniverse application.

  1. Within the wrnch AI Pose Estimator extension, press “Start streaming motion” when you are ready to capture an actor’s performance.

    _images/ext_wrnch_pose-estimator-1.png
  2. Using the wrnch CaptureStream application, begin capturing the actor’s performance that you’d like to reproduce in the Omniverse application. Within the CaptureStream application, the wrnch Engine is running AI algorithms against the live video feed. The human pose estimation algorithms track key skeletal joints to infer 3D human motion.

The wrnch Engine then outputs human pose metadata using the wrnch eXchange (wrX) data protocol across the local area network to the wrnch AI Pose Estimator extension in Omniverse in real time.
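
To make the idea of a streaming pose protocol concrete, the sketch below unpacks a hypothetical binary pose packet: a frame id and joint count, followed by x, y, z floats for each joint. This layout is an assumption made for illustration only; it is not the actual wrX wire format.

```python
import struct

# Hypothetical packet layout (NOT the real wrX format):
#   header: frame_id (uint32), joint_count (uint16), little-endian
#   body:   joint_count * (x, y, z) float32 triples
HEADER = struct.Struct("<IH")
JOINT = struct.Struct("<3f")

def unpack_pose(packet: bytes):
    """Unpack one pose packet into (frame_id, [(x, y, z), ...])."""
    frame_id, count = HEADER.unpack_from(packet, 0)
    joints = []
    offset = HEADER.size
    for _ in range(count):
        joints.append(JOINT.unpack_from(packet, offset))
        offset += JOINT.size
    return frame_id, joints

# Round-trip example: pack a two-joint frame, then unpack it.
packet = HEADER.pack(42, 2) + JOINT.pack(0.0, 1.0, 0.0) + JOINT.pack(0.1, 0.5, 0.0)
fid, joints = unpack_pose(packet)
print(fid, len(joints))
```

A receiver like the extension would apply such decoded joint positions to the selected skeleton once per incoming frame.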

  3. Within Omniverse, you will see your selected animated character mimic the movements of the human actor being recorded.

_images/ext_wrnch_pose-estimator-2.png
  4. If necessary, you can adjust the “Height Offset” slider to align the character’s feet with the ground within your scene. When you have finished, press “Stop Streaming Motion”.

_images/ext_wrnch_pose-estimator-3.png
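
The effect of the “Height Offset” slider can be thought of as a constant vertical shift applied to every streamed joint position before it drives the character. A minimal sketch, assuming y is the vertical axis (the real extension’s internals are not documented here):

```python
# Hypothetical sketch of a height offset: shift every streamed
# (x, y, z) joint position by a fixed amount along the vertical axis
# so the character's feet line up with the scene's ground plane.
def apply_height_offset(joints, offset):
    """Return joints shifted by `offset` along y."""
    return [(x, y + offset, z) for (x, y, z) in joints]

# Example: feet hovering 2 cm above the ground get dropped to y = 0.
pose = [(0.0, 0.02, 0.0), (0.0, 0.95, 0.0)]
print(apply_height_offset(pose, -0.02))
```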

For additional information about the wrnch AI platform, including the wrnch Engine, wrnch CaptureStream, and wrnch extensions for Omniverse, Unreal Engine 4, or Unity, visit www.wrnch.ai.