User Manual

Hot Keys

  P: Play
  R: Rewind
  L: Loop
  [: Set Range Start
  ]: Set Range End

Audio Player and Recorder

Allows you to load custom audio and serves as the primary playback mechanism for Audio2Face.

../_images/a2f_audio-player_overview.jpg

Audio Source Directory
  Browse to a folder to load all of the audio files in that folder into the Track menu.

Track
  Selects which audio track to play back from the specified library location. Only .wav files are currently supported.

Range
  The first box trims the audio file to a desired start time; the second box sets the end point.

Timeline/Audio Waveform
  A visual timeline of the audio track. Use the mouse to scrub back and forth.

Play Controls
  Play, Return to Head, Loop Playback, and Record.

Recording

Pressing the record button (red dot) in the Play Controls panel reveals the recording tools and live mode for Audio2Face.

Mute
  Stops the audio stream to A2F.

Rec
  Click to record a new .wav audio file.

Live
  Click to enable live mode, in which A2F is driven by real-time voice input.

New
  Creates a new .wav file to record; enter the desired name in the panel to the right. Names auto-increment when new tracks are created with the same name.

Save
  Saves your audio recording as a .wav file.

Attached Meshes

Allows the selection of the desired source training asset.

../_images/a2f_attached-meshes.jpg
  • Nvidia’s Audio2Face asset loads by default.

  • This feature will allow you to assign custom meshes for custom A2F neural networks (NN). (coming soon)

Network

../_images/a2f_network.jpg

Network Name
  Selects which neural network to use.

Processing Time
  Displays the latency of the selected network.

Pre Processing

Allows the adjustment of key variables that influence the animation result.

../_images/a2f_pre-process.jpg

Prediction Delay
  Tweaks the alignment between the audio input and the animation output.

Input Strength
  Adjusts the audio volume, influencing the range of motion of the animation result. It can be lowered for a subtle performance or amplified for larger, louder performances.
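The two pre-processing controls can be pictured as a time shift plus an amplitude gain on the incoming audio. This is an assumed formulation for illustration, not A2F's internal implementation:

```python
def preprocess_audio(samples, strength=1.0, delay_samples=0):
    """Illustrative sketch of the two pre-processing controls:
    - Prediction Delay: shifts the audio relative to the animation output
      (modeled here as prepending silence).
    - Input Strength: scales the amplitude, widening or narrowing the
      resulting range of motion."""
    shifted = [0.0] * delay_samples + list(samples)
    return [s * strength for s in shifted]
```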

Post Processing

Allows the user to independently control smoothness and strength of the motions on upper and lower face.

../_images/a2f_post-process.jpg

Upper and Lower Face Smoothing
  Smooths the motion on the upper and lower regions of the face.

Upper and Lower Face Strength
  Sets the amplification of the motion on the upper and lower regions of the face.

Face Mask Level
  Sets the boundary between the upper and lower regions of the face, along the height axis.

Face Mask Softness
  Sets how smoothly the upper and lower face regions blend at the boundary.
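Face Mask Level and Face Mask Softness can be pictured as a per-vertex blend weight along the height axis. A minimal sketch, assuming a smoothstep falloff (A2F's exact falloff function is not documented):

```python
def upper_face_weight(height, mask_level, mask_softness):
    """Blend weight (0 = fully lower face, 1 = fully upper face) for a
    vertex at the given height. mask_level positions the boundary along
    the height axis; mask_softness widens the transition band.
    Hypothetical formulation for illustration only."""
    if mask_softness <= 0.0:
        return 1.0 if height >= mask_level else 0.0
    t = (height - (mask_level - mask_softness / 2.0)) / mask_softness
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep
```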

Emotion

Imposes a specific face shape, selected from the animation training sources, on the default face, giving the user more control to inject particular emotional expressions into the performance.

../_images/a2f_emotion.jpg

Source Shot
  Selects the animation clip.

Source Frame
  Selects the specific frame of the animation source to use as the shape of influence.

Multiple instances of Audio2Face

You can create multiple instances of Audio2Face to run multiple characters in the same stage/scene.

../_images/a2f_multi-instance.jpg

A2F Pipeline
  Adds a connected set of A2F modules.

Male Head
  Adds another default head asset to the scene.

Audio Player
  Creates a new audio player.

A2F Instance
  Creates a new instance of the Audio2Face network.

OmniGraph Editor

The relationships between Audio2Face instances, audio players, and target geometries can be rearranged in OmniGraph, allowing flexible control over which audio player drives which character or Audio2Face instance.

../_images/a2f_node-graph.jpg

Character Transfer - Retargeting Pipeline

The Mesh Retargeting tool is available on the Character Transfer Tool tab.

../_images/a2f_character-transfer.jpg

The status message at the bottom of the widget informs the user of any missing steps required to execute the retargeting successfully.

You can access sample assets used in the online tutorials for the character transfer process in the following locations.

  • Omniverse mount: \Nvidia\Assets\Audio2Face\Samples\

  • \<Audio2Face Install Directory>\assets\

Note

Hidden directories are currently not visible in the content browser / the app’s file browser; this will be fixed in an upcoming version.

Meshes

In this panel, users can select the meshes to use in the character transfer process.

../_images/a2f_meshes.jpg

Driver A2F Mesh
  Selects the character mesh that will be driven by A2F. It should have a neutral, closed-mouth expression. This mesh is not used for picking correspondence points. (In the mark retarget example, this would be /World/charTransfer/mark.)

Target Mesh
  Selects which character head to retarget onto.

After the target mesh is assigned, the “openMouthA2Fmouth” field appears.

OpenMouth A2F Mesh
  Selects the character mesh that shares the same topology as the Driver A2F Mesh but has an open-mouth expression. (In the mark retarget example, this would be /World/charTransfer/mark_openMouth.)

Mesh Fitting - Correspondence

This is the authoring panel for the correspondence points. This panel remains minimized until the meshes are correctly assigned in the Meshes panel.

../_images/a2f_mesh-fit.jpg

Correspondence Visibility
  Visibility switch for the correspondence points and lines drawn in the viewport.

Preset
  Loads and saves correspondence setups for custom characters, alongside the preloaded presets.

Display Size
  Controls the draw size of the correspondence points.

Correspondence List
  Shows the authored correspondences. Select a correspondence pair by clicking it in the list; hold Shift or Ctrl while clicking to make multiple selections.

Add Mode
  Enters Add Mode, in which the user can pick correspondence points between the meshes.

Edit Mode
  Enters Edit Mode, in which the user can reposition existing correspondence points by clicking and dragging a point to a new position on the mesh.

Delete Mode
  Deletes the correspondence points selected in the list.

Fitting Iterations
  Number of shape-fitting iterations. A higher number can be more accurate but takes longer to execute. 3 is a good starting point and is used on all our example characters.

Fitting Stiffness
  Shape-fitting stiffness, which makes the mesh try to keep its original contour. 100 is a good starting point and is used on all our example characters.

Begin Mesh Fitting
  Executes the retarget operations when clicked.

Post Wrap

The Post Wrap panel shows the available fitted mesh for the post-wrap process. The Smooth Level and Begin Post Wrap controls are hidden until a Fitted Mesh is selected from the drop-down.

../_images/a2f_post-wrap.jpg

Option

Effect

Fitted Mesh
  The fitted mesh is assigned in this field. The mesh created by the Mesh Fitting process above is selected automatically when that process finishes.

Smooth Level
  Smoothing level applied to the target mesh. A low number preserves the original details of the A2F mesh; a high number smooths out the details and possible artifacts. It is recommended to start low and increase as needed.

Begin Post Wrap
  Starts the post-wrap stage of the retarget. This process applies multiple chained deformers to make the live connection between the A2F mesh and the target mesh.
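The effect of the Smooth Level slider can be illustrated with a 1-D analogue: repeated neighbor averaging, where more passes smooth more aggressively and zero passes preserve the original detail. This is a sketch of the idea, not the mesh-smoothing code A2F uses:

```python
def smooth(values, level):
    """Neighbor-averaging smoothing: level is the number of passes.
    Endpoints are pinned; interior samples are replaced by the average
    of themselves and their two neighbors. (1-D illustration only.)"""
    out = list(values)
    if len(out) < 3:
        return out
    for _ in range(level):
        out = [out[0]] + [
            (out[i - 1] + out[i] + out[i + 1]) / 3.0
            for i in range(1, len(out) - 1)
        ] + [out[-1]]
    return out
```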

Tools

../_images/a2f_tools.jpg

Male Template
  Adds the A2F neutral “mark” head and “mark_openMouth” meshes to the stage. Both are necessary for the character transfer pipeline.

Delete Setup
  Removes all Character Transfer related nodes and connections, and resets the meshes to the A2F neutral state.

Delete Correspondence
  Deletes all user-created correspondence points assigned to the meshes.

Wrap UI
  Opens the Wrap UI, which offers additional options for the wrapping process.

Wrap UI

The Wrap UI lets users drive one or more secondary meshes with a single driver mesh; for example, a face mesh can drive the eyebrow and moustache meshes. The tool operates on the current scene selection: the first selected mesh is the driver, and it drives the subsequently selected meshes.

../_images/a2f_wrap-ui.jpg

Use Max Distance
  Toggles whether the Max Distance field below is considered when setting the weights of the deformer. When enabled, any driven point farther than Max Distance from the driver mesh gets a weight of 0 and is therefore unaffected by the Wrap deformer.

Max Distance
  The cutoff distance from each driven point to the driver mesh surface. This field is disabled until the Use Max Distance checkbox above is enabled.

Face Tangent Space
  Uses face tangent space to store and apply the offset vector of each point to the surface. When disabled, the offset vector is applied in the object space of the driven mesh.
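The Max Distance cutoff described above amounts to a simple per-point weighting rule. A sketch of that rule (illustrative only; the deformer's real weighting may include a falloff that the manual does not describe):

```python
def wrap_weight(distance, use_max_distance, max_distance):
    """Weight of a driven point given its distance to the driver surface.
    With Use Max Distance on, points beyond the cutoff get weight 0 and
    are left untouched by the Wrap deformer."""
    if use_max_distance and distance > max_distance:
        return 0.0
    return 1.0
```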

When the tool detects that an intended driven mesh already has an incoming connection, a popup appears for each such mesh so the user can decide the next action.

Skip
  Leaves the specified mesh as-is. No new Wrap deformer is created for this particular mesh.

Replace
  Deletes the upstream node of the mesh and replaces it with the newly created Wrap deformer.

Add
  Adds a new Wrap deformer and connects it to the mesh. The previous upstream node is disconnected from the driven mesh but not deleted.

Data Conversion tab

This tab provides all of the export and format-conversion functionality.

../_images/A2F_data_conversion_tab.jpg

Common Settings

../_images/a2f_export_commonsettings.png

Export Directory
  Selects a destination for the point-cache export.

File Name
  Specifies the filename of the exported cache.

Playback FPS
  Selects the frames per second at which the cache is exported.

Geometry cache

../_images/a2f_geomcache.png

Export Filter
  Selects the mesh filter used to choose which meshes in the stage to export the animation cache for. (Only the animation data is exported; the source mesh is not.)

Export As
  Choose between a USD point-cache export and a Maya point-cache export.

Additional Export Notes

  1. When exporting a custom mesh used in the character transfer process, be sure to select the mesh in the stage that is labelled with “_result”.

  2. When exporting a Maya cache, you can import the source mark head into Maya to assign the cache to. A mark asset (male_head_model_hi_published.mb) can be found in your install directory under /audio2face/assets/.

Blendshape Conversion

Converts the output A2F animation into a blendshape-driven animation.

../_images/a2f_blendshapeConversion.jpg

Input Anim Mesh
  The mesh to convert into a blendshape animation, usually the mesh from the A2F pipeline.

Blendshape Mesh
  The neutral mesh with blendshape prims; it should be based on the UsdSkel schema.

Set Up Blendshape Solve
  Sets up an interactive blendshape-solve pipeline in the stage.
../_images/a2f_blendsolve.png

Export as JSON
  Exports the resulting blendshape weight values as a .json file.

Export as USD SkelAnimation
  Exports the resulting blendshape weight values as a .usd file in a SkelAnimation prim.

Additional notes

When exporting animation, the file path and playback FPS are taken from the Common Settings panel.

MetaHuman

Audio2Face now supports Unreal Engine’s MetaHuman via the Omniverse Unreal Engine Connector version 103.1. The exporter creates custom “animation curves” that are imported as an animation sequence aligned with the MetaHuman skeleton.

The export writes two arrays:

custom:mh_curveNames
  This token array contains the names of the control curves, such as CTRL_expressions_eyeLookUpR.

custom:mh_curveValues
  This float array contains the time-sampled curve value data.
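The two arrays are parallel: for a given time sample, the value at index i of mh_curveValues belongs to the curve named at index i of mh_curveNames. A minimal pairing sketch in plain Python, standing in for the actual USD attribute reads:

```python
def pair_curves(curve_names, curve_values):
    """Map each MetaHuman control-curve name to its value for one time
    sample. Both arrays must be the same length and in the same order.
    (Illustrative helper; not part of the A2F exporter.)"""
    if len(curve_names) != len(curve_values):
        raise ValueError("curve name/value arrays are misaligned")
    return dict(zip(curve_names, curve_values))
```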

Pose & pose option table

Pose
  Name of the blendshape pose.

Use
  Whether to include or exclude the pose in the blendshape solve.

Cancel Pair
  Tags two poses that cancel each other. They cannot be used together on the same frame; only one of them can get a non-zero weight.

Symmetry
  Tags two poses that are symmetric. They try to have the same weights during the blendshape conversion.

Select All, Selection Options
  (Right-click menu) Handy pose-selection options.
../_images/a2F_select_options.png

Toggle Use
  Toggles the selected “use” checkboxes in the table.

Tag/Untag Cancel Pair
  Tags or untags the two selected poses as a cancel pair.

Tag/Untag Symmetry Pair
  Tags or untags the two selected poses as a symmetry pair.

Weight Regularization
  A stronger value prevents the solve from computing large weight values (weight regularization).

Temporal Smoothing
  A stronger value prevents pops between consecutive frames (temporal weight regularization).

Weight Sparsity
  A stronger value generates sparser blendshape weights, which is preferable for animator edits (L1 weight regularization).

Symmetric Pose
  A stronger value makes symmetric poses have more similar weight values.

Load Preset
  Loads preset blendshape options from a .json file.

Save Preset
  Saves the blendshape options as a preset .json file.
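The four solve sliders above can be read as terms of a regularized objective that the blendshape solve minimizes per frame. This is an assumed formulation for illustration (the symmetric-pose term is omitted and the actual A2F solver is not public):

```python
def solve_objective(residual_sq, weights, prev_weights,
                    reg, temporal, sparsity):
    """Illustrative objective value for one frame of the blendshape solve:
    data term (residual_sq) plus the three penalty terms controlled by
    Weight Regularization, Temporal Smoothing, and Weight Sparsity."""
    l2 = reg * sum(w * w for w in weights)                 # weight regularization
    tmp = temporal * sum((w - p) ** 2                      # temporal smoothing
                         for w, p in zip(weights, prev_weights))
    l1 = sparsity * sum(abs(w) for w in weights)           # L1 sparsity
    return residual_sq + l2 + tmp + l1
```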