====================
|a2f_short| Overview
====================
Overview
--------

|a2f_long| is a combination of artificial intelligence (AI)-based technologies that derive facial motion and lip-sync animation from an audio source. With |a2f_short|, you can:

- Analyze an audio sample and automatically animate emotions in a character's performance
- Animate all the features of your character's face, including the eyes and tongue
- and more!

You can use |a2f_short| at runtime or in more traditional content creation pipelines. It offers various output formats and lets you connect a custom blend shape mesh and export the resulting blend weights.

Minimum Mesh Requirements for Full Face Character Setup
-------------------------------------------------------

A2F requires that the head mesh be broken down into its individual mesh components: the head, left eye, right eye, lower teeth, and tongue must each be a separate mesh and cannot contain sub-meshes. A minimal verification sketch is shown at the end of this page. See the online documentation and the NVOD A2F tutorial videos for further guidance.

Requirements
------------

In order to use |a2f_short|, you must have:

- Windows 10 64-bit, version 1909 or newer
- Ubuntu Linux 20.04 or newer
- |nuc|

If you're trying to access sample assets in the local |omni| :doc:`mount`, you need to `install <../../launcher/install.html>`__ the |nuc_short| application from the |launcher|.
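
As a rough illustration of the mesh requirements above, the following is a minimal sketch of how you might check that each required face part is a single ``Mesh`` prim with no nested sub-meshes. It assumes the USD Python bindings (``pxr``) are available; the prim paths and the ``check_a2f_meshes`` helper are hypothetical placeholders, not part of |a2f_short|.

.. code-block:: python

   from pxr import Usd

   # Hypothetical prim paths -- replace these with the paths used in your own character asset.
   REQUIRED_PARTS = [
       "/World/character/head",
       "/World/character/eye_left",
       "/World/character/eye_right",
       "/World/character/teeth_lower",
       "/World/character/tongue",
   ]


   def check_a2f_meshes(stage_path: str) -> bool:
       """Return True if every required part is a single Mesh prim without sub-meshes."""
       stage = Usd.Stage.Open(stage_path)
       ok = True
       for path in REQUIRED_PARTS:
           prim = stage.GetPrimAtPath(path)
           if not prim or prim.GetTypeName() != "Mesh":
               print(f"Missing or non-Mesh prim: {path}")
               ok = False
               continue
           # Each part must not contain further meshes nested beneath it.
           nested = [c.GetName() for c in prim.GetChildren() if c.GetTypeName() == "Mesh"]
           if nested:
               print(f"{path} contains nested meshes: {nested}")
               ok = False
       return ok


   if __name__ == "__main__":
       check_a2f_meshes("character.usd")

Running a check like this before loading a character can catch combined or nested meshes early, before you begin the full face setup.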