Pose Tracker

Overview

Pose Tracker uses our pose tracking AI model, a fully convolutional neural network, to let you animate a character using video as a motion source. Connect your character to the Pose Tracker with the automatic Retargeting tool. You can load a pre-recorded movie file as the source and generate animation from it to save as anim clips, or you can stream a live camera to animate your character in real time.

To get off to a quick start, you can find a pose estimation demo scene under the Animation/Pose Tracker/OpenDemoScene menu.

_images/ext_pose_tracker.jpg

Video Source

Video Source is where you determine the source video for your session.

_images/ext_pose_tracker_video_source.jpg

Stream Live Camera

Allows the user to stream a video source from a camera connected to their computer.

Video Source

Allows the user to load a movie file from disk as the source motion.

Retargeting

Retargeting allows you to connect your own character SkelMesh to the Pose Tracker's output skeleton. The retargeting tool will automatically resolve the mapping to your skeleton. In cases where it fails to map automatically, you can create the mapping manually, with additional tools to assist in solving issues that can arise from non-standard skeleton structures and orientations.
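
Conceptually, the mapping the tool resolves is a correspondence between the tracker's output joints and your character's joints. The sketch below is illustrative only (the joint names and the naive name-matching strategy are assumptions, not the actual algorithm the Retargeting tool uses):

    # Illustrative joint correspondence: tracker joint -> character joint.
    TRACKER_TO_CHARACTER = {
        "pelvis":     "Hips",
        "spine_01":   "Spine",
        "upperarm_l": "LeftArm",
        "lowerarm_l": "LeftForeArm",
        "hand_l":     "LeftHand",
    }

    def map_joint(tracker_joint, character_joints):
        """Very naive automatic matching by normalized name. The real tool is
        more robust and also handles orientation differences."""
        wanted = TRACKER_TO_CHARACTER.get(tracker_joint, tracker_joint).lower().replace("_", "")
        for joint in character_joints:
            if joint.lower().replace("_", "") == wanted:
                return joint
        return None  # no match: fall back to manual mapping in the Retargeting window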

_images/ext_pose_tracker_retargeting.jpg

Target Skeleton

This field displays any valid SkelRoot found in the current stage.

Skeleton Connected

_images/ext_pose_tracker_ready.jpg

The green check mark means the selected skeleton has retargeting set up and is connected.

Skeleton Not Connected

_images/ext_pose_tracker_warning.jpg

Indicates the currently assigned skeleton is not ready for retargeting. Clicking the icon exposes the Run AutoRetarget command.

Auto Retarget

On success, a green check mark appears. On failure, you will be prompted to open the Retargeting window.

For a comprehensive look at the Retargeting tool, please refer to the Retargeting documentation.

Open Retargeting Window

_images/ext_pose_tracker_link.jpg

Opens the Retargeting tool for more comprehensive setup of characters.

Focus Target Skeleton

_images/ext_pose_tracker_focus.jpg

Focuses the viewport on the selected skeleton.

Preview

_images/ext_pose_tracker_play_control.jpg

The preview window is a movie player that allows the user to see their source footage and provides playback controls.

When using a streamed video source, this player changes to a single play button that starts and stops the feed from the selected camera.

Pose Tracker

_images/ext_pose_tracker_pose_tracker.jpg

Pose Tracker is the user-facing interface for the AI solver that runs behind the scenes. The model is a fully convolutional neural network whose architecture consists of a backbone network, an initial estimation stage that performs pixel-wise prediction of confidence maps, and multistage refinement of the initial predictions.
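
The extension does not expose the network itself, but as a rough illustration of this style of architecture (layer widths, kernel sizes, joint count, and stage count below are assumptions, not the shipped model), a backbone / initial-stage / refinement pipeline can be sketched as:

    import torch
    import torch.nn as nn

    NUM_JOINTS = 18   # assumed keypoint count, illustration only
    NUM_STAGES = 3    # assumed number of refinement stages

    def conv_block(c_in, c_out):
        return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                             nn.ReLU(inplace=True))

    class PoseNetSketch(nn.Module):
        def __init__(self):
            super().__init__()
            # Backbone: extracts shared image features.
            self.backbone = nn.Sequential(conv_block(3, 64), conv_block(64, 128))
            # Initial estimation stage: pixel-wise per-joint confidence maps.
            self.initial = nn.Conv2d(128, NUM_JOINTS, 1)
            # Refinement stages: each sees the backbone features plus the previous maps.
            self.refine = nn.ModuleList(
                nn.Sequential(conv_block(128 + NUM_JOINTS, 128),
                              nn.Conv2d(128, NUM_JOINTS, 1))
                for _ in range(NUM_STAGES))

        def forward(self, image):                      # image: (N, 3, H, W)
            features = self.backbone(image)
            maps = self.initial(features)              # initial confidence maps
            for stage in self.refine:
                maps = stage(torch.cat([features, maps], dim=1))
            return maps                                # refined per-joint confidence maps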

Display options

  • 3D Points Visualizer - Displays the 3D points generated by the AI from the 2D video input.

  • 2D Visualizer - Displays the 2D points on the source video input.

Start Engine

Initiates the pose tracking engine.

Post Processing

Post processing allows the user to adjust various filters to tweak the pose tracking result to best suit individual circumstances.

_images/ext_pose_tracker_post_processing.jpg

Calibrate

Recenters the character to the origin.

Smoothing

Smooths out the rotational motion of each joint.
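
As a rough illustration of the idea (not the actual filter the extension applies), rotational smoothing can be thought of as blending each new joint rotation toward the previous frame's value:

    import numpy as np

    def smooth_rotation(prev_quat, new_quat, smoothing):
        """Blend the new joint rotation toward the previous one.

        Illustration only: quaternions are (w, x, y, z) numpy arrays and
        'smoothing' in [0, 1] plays the role of the Smoothing slider
        (0 = no smoothing, 1 = frozen). Assumed, not the shipped filter.
        """
        if np.dot(prev_quat, new_quat) < 0.0:   # keep both quaternions in the same hemisphere
            new_quat = -new_quat
        blended = smoothing * prev_quat + (1.0 - smoothing) * new_quat
        return blended / np.linalg.norm(blended)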

Filter Level

Filters out joint values with low confidence, for example joints that are occluded.

Compared with the preview skeleton drawn over the source image, the output skeleton in the viewport filters more aggressively by confidence when this value is higher, and less when it is lower.

Generally, a higher value gives a more stable result by accepting only high-confidence values, but it can also filter out more of the motion.
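
As an illustration of the idea (the hold-last-value strategy and names below are assumptions, not the exact implementation), the confidence filter can be sketched as a per-joint gate that keeps the previous value whenever confidence falls below the Filter Level:

    def filter_joints(joints, confidences, last_accepted, filter_level):
        """Keep only joint values whose confidence clears the Filter Level.

        'joints' and 'last_accepted' map joint name -> pose value, and
        'confidences' maps joint name -> confidence in [0, 1].
        Illustrative sketch only.
        """
        output = {}
        for name, value in joints.items():
            if confidences.get(name, 0.0) >= filter_level:
                output[name] = value              # confident enough: accept the new value
                last_accepted[name] = value
            else:
                # Low confidence (e.g. the joint is occluded): keep the previous value.
                output[name] = last_accepted.get(name, value)
        return output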

Hand Pose

This overrides the character's finger pose. Make sure you have retarget tags for the fingers on the target character.

Recorder

The animation recorder generates a time-sampled USD SkelAnimation clip and writes it to disk. This allows you to convert your video source to an animation clip for use in the Sequencer, or anywhere else SkelAnimation clips can be used.

_images/ext_pose_tracker_recorder.jpg
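
For reference, the sketch below shows what a time-sampled SkelAnimation clip looks like when authored with the USD Python API. This is not the extension's internal code; the joint names, frame count, and file path are illustrative assumptions.

    from pxr import Usd, UsdSkel, Gf, Vt

    stage = Usd.Stage.CreateNew("walk_take.usd")        # hypothetical output file
    stage.SetTimeCodesPerSecond(30)
    stage.SetStartTimeCode(1)
    stage.SetEndTimeCode(90)

    anim = UsdSkel.Animation.Define(stage, "/WalkTake")  # hypothetical prim path
    anim.CreateJointsAttr(Vt.TokenArray(["hips", "hips/spine"]))

    rotations = anim.CreateRotationsAttr()
    translations = anim.CreateTranslationsAttr()
    scales = anim.CreateScalesAttr()

    for frame in range(1, 91):
        # One rotation / translation / scale per joint, sampled at this frame.
        rotations.Set(Vt.QuatfArray([Gf.Quatf(1, 0, 0, 0)] * 2), frame)
        translations.Set(Vt.Vec3fArray([Gf.Vec3f(0, 0, 0)] * 2), frame)
        scales.Set(Vt.Vec3hArray([Gf.Vec3h(1, 1, 1)] * 2), frame)

    stage.GetRootLayer().Save()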

Target Skeleton

Displays the target skeleton to be recorded, which is synchronized with the Retargeting skeleton field above.

If the skeleton is not specified or not retarget-ready, a warning icon will appear and the record button will be disabled.

If the skeleton is specified and retarget-ready, a green check icon will appear and the record button will be enabled.

Destination path

Specify a folder on disk to write your animation clip. Press the folder button to open a browser window and select the folder. Press the link button to browse to the folder in your file explorer.

Take Name

Create a name for your animation clip.

The output USD will be: {destination_path}/{take_name}.usd

The output USD will contain one SkelAnimation named with the Take Name.

Note

Kit tokens can be used in Destination Path and Take Name, along with two special tokens:

  • ${target_skel} - Resolves to the target skeleton prim name.

  • ${target_skel_root} - Resolves to the target skeleton root prim name.
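
For example (hypothetical names), if Take Name is set to walk_${target_skel} and the target skeleton prim is named Biped, the recorded clip would be written to:

    {destination_path}/walk_Biped.usd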

Auto Increment

Auto Increment will automatically increment the output file name when the output file already exists. With this option disabled, a confirmation prompt is shown before overwriting an existing file.

Record

Clicking Record starts playing the movie file and starts the engine, producing a clean recording of the full animation.