Audio2Face Overview¶
NVIDIA Omniverse™ Audio2Face is a combination of artificial intelligence (AI)-based technologies that derive facial motion and lip sync animations from an audio source.
With Audio2Face, you can:
Analyze an audio sample and automatically animate emotions in a character’s performance
Animate all the features of your character’s face, including their eyes and tongue
and more!
You can use Audio2Face at runtime or in more traditional content creation pipelines. It offers various output formats and includes the ability to connect a custom blend shape mesh and export the resulting blend weights.
Minimum Mesh Requirements for Full Face Character Setup¶
Audio2Face requires that the head be broken down into its individual mesh components: the head, left eye, right eye, lower teeth, and tongue must each be a separate mesh and cannot contain sub-meshes. See the Online Documentation and the NVOD A2F Tutorial videos for further guidance.
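As a pre-flight sanity check, the requirement above can be sketched as a small script that verifies an asset exposes every required face component as its own mesh. This is illustrative only and not part of Audio2Face; the mesh names used here are assumptions about your own naming convention, not names mandated by A2F.

```python
# Hypothetical pre-flight check (not part of Audio2Face itself): verify that
# a character asset provides each required face component as a separate mesh
# before attempting a full face character setup.
REQUIRED_MESHES = {"head", "left_eye", "right_eye", "lower_teeth", "tongue"}

def missing_face_meshes(mesh_names):
    """Return the required components absent from an asset's mesh list.

    `mesh_names` is assumed to be a collection of mesh names exported from
    your DCC tool; the naming is illustrative, not mandated by A2F.
    """
    return sorted(REQUIRED_MESHES - {name.lower() for name in mesh_names})

# Example: an asset that is missing a separate tongue mesh
print(missing_face_meshes(["Head", "Left_Eye", "Right_Eye", "Lower_Teeth"]))
# -> ['tongue']
```

A check like this is easiest to run at export time in your content pipeline, before the asset ever reaches Audio2Face.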
Requirements¶
To use Audio2Face, you must have:
Windows 10 (64-bit), version 1909 or newer
Ubuntu Linux 20.04 or newer
Omniverse Nucleus
If you’re trying to access sample assets in the local Omniverse mount, you need to install the Nucleus application from the Omniverse Launcher.
What’s new in Audio2Face 2022?¶
Read about the latest features in Audio2Face 2022.
Linux Support
Audio2Face and all of its features are now fully supported on Linux.
Headless Mode and REST API
Headless Audio2Face supports advanced batch-export capabilities through a REST API, providing a new solution and interface for multiple blend shape solves and batch exporting.
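A batch-export call against the headless service could be sketched as follows. Note that the base URL, port, route, and payload fields below are all assumptions for illustration, not the documented Audio2Face REST API; consult the headless mode documentation for the actual endpoints and parameters.

```python
import json

# Illustrative sketch only: the port, route, and payload fields are
# assumptions, not the documented Audio2Face REST API.
A2F_BASE_URL = "http://localhost:8011"  # assumed local headless service address

def build_export_request(usd_path, audio_files, out_dir):
    """Assemble a hypothetical batch blend-weight export request."""
    return {
        # Hypothetical route name, for illustration only.
        "url": f"{A2F_BASE_URL}/A2F/Export/BatchBlendshapeSolve",
        "body": json.dumps({
            "scene": usd_path,        # character scene to load
            "tracks": list(audio_files),  # audio clips to solve in batch
            "output_dir": out_dir,    # where blend weights are written
        }),
    }

req = build_export_request("/assets/character.usd",
                           ["line01.wav", "line02.wav"],
                           "/exports")
```

The assembled request could then be sent with any HTTP client, such as `urllib.request` or `curl`, and the same pattern repeated per batch of audio tracks.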