FACEGOOD / Audio2Face
http://www.facegood.cc
The test part and the UE project for xiaomei created by FACEGOOD are not available for commercial use. They are for testing purposes only.
We created a project that transforms audio into blendshape weights and drives the digital human, xiaomei, in a UE project.
The framework we use contains three parts. In the formant network step, we perform fixed-function analysis of the input audio clip. In the articulation network, we concatenate an emotional state vector to the output of each convolution layer after the ReLU activation. The fully-connected layers at the end expand the 256+E abstract features to blendshape weights.
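To make the articulation-network idea concrete, the sketch below (TensorFlow 1.x, matching the tensorflow-gpu 1.15 dependency) shows an emotion vector being tiled over time and concatenated to each convolution output after the ReLU, with fully-connected layers mapping the resulting 256+E features to blendshape weights. The filter counts, hidden width, and time pooling are placeholders, not the layout of our released model.

    import tensorflow as tf  # written against tensorflow-gpu 1.15

    def articulation_net(audio_features, emotion, emotion_size, n_blendshapes):
        # audio_features: [batch, time, channels] abstract features from the formant stage
        # emotion: [batch, emotion_size] learned emotional state vector
        x = audio_features
        for filters in (128, 192, 256):                        # placeholder filter counts
            x = tf.layers.conv1d(x, filters, kernel_size=3, strides=2,
                                 padding='same', activation=tf.nn.relu)
            # tile the emotion vector over time and append it after every ReLU
            e = tf.tile(emotion[:, tf.newaxis, :], [1, tf.shape(x)[1], 1])
            e.set_shape([None, None, emotion_size])
            x = tf.concat([x, e], axis=-1)
        x = tf.reduce_mean(x, axis=1)                          # -> [batch, 256 + E]
        x = tf.layers.dense(x, 150, activation=tf.nn.relu)     # placeholder hidden width
        return tf.layers.dense(x, n_blendshapes)               # final blendshape weights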
This pipeline shows how we use FACEGOOD Audio2Face.
Use ExportBsWeights.py to export the blendshape weights from Maya. This produces BS_name.npy and BS_value.npy.
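For reference, the kind of per-frame export ExportBsWeights.py performs can be sketched with maya.cmds roughly as below. The blendshape node name, the frame-range handling, and the availability of numpy inside Maya's Python are assumptions here, so treat this as an outline rather than the script itself.

    import numpy as np                  # assumes numpy is available to Maya's Python
    import maya.cmds as cmds

    def export_bs_weights(blendshape_node, out_prefix='BS'):
        # one weight attribute per blendshape target on the deformer node
        names = cmds.listAttr(blendshape_node + '.w', multi=True)
        start = int(cmds.playbackOptions(q=True, min=True))
        end = int(cmds.playbackOptions(q=True, max=True))

        values = []
        for frame in range(start, end + 1):
            cmds.currentTime(frame, edit=True)
            values.append([cmds.getAttr('%s.%s' % (blendshape_node, n)) for n in names])

        np.save(out_prefix + '_name.npy', np.array(names))      # -> BS_name.npy
        np.save(out_prefix + '_value.npy', np.array(values))    # -> BS_value.npy, [frames, targets]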
Use step1_LPC.py to process the wav file and get lpc_*.npy, preprocessing the wav into 2D feature data.
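Conceptually, this step slices the audio into short frames and computes LPC coefficients for each one, so every wav becomes a 2D [frames, order] array. A rough sketch of that idea is below; the sample rate, frame length, and LPC order are assumptions and may differ from what step1_LPC.py actually uses.

    import numpy as np
    import librosa                        # used here only for its LPC routine

    def wav_to_lpc_features(wav_path, frame_len=0.016, order=32):
        audio, sr = librosa.load(wav_path, sr=16000, mono=True)
        hop = int(frame_len * sr)
        features = []
        for start in range(0, len(audio) - hop, hop):
            frame = audio[start:start + hop]
            # librosa.lpc returns order+1 coefficients; the leading one is always 1
            coeffs = librosa.lpc(frame.astype(np.float64), order=order)[1:]
            features.append(coeffs)
        return np.asarray(features, dtype=np.float32)            # shape: [n_frames, order]

    if __name__ == '__main__':
        feats = wav_to_lpc_features('speech.wav')                # placeholder wav path
        np.save('lpc_speech.npy', feats)                         # analogous to the lpc_*.npy files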
We recommend using FACEGOOD Avatary to produce training data; it is fast and accurate. http://www.avatary.com
The training data is stored in dataSet1.
python step14_train.py --epochs 8 --dataSet dataSet1
In the folder /test, we supply a test application named AiSpeech.
We provide a pretrained model, zsmeif.pb.
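If you want to call the pretrained model directly rather than through AiSpeech, a frozen .pb graph can be loaded with standard TensorFlow 1.x calls as sketched below. The input and output tensor names here are placeholders, so inspect zsmeif.pb (for example via graph.get_operations()) for the real ones before running it.

    import numpy as np
    import tensorflow as tf              # tensorflow-gpu 1.15

    with tf.io.gfile.GFile('zsmeif.pb', 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())

    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name='')

    # placeholder tensor names -- check the graph for the real ones
    input_tensor = graph.get_tensor_by_name('input:0')
    output_tensor = graph.get_tensor_by_name('output:0')

    with tf.compat.v1.Session(graph=graph) as sess:
        lpc_features = np.load('lpc_speech.npy')                 # from the preprocessing step
        weights = sess.run(output_tensor,
                           feed_dict={input_tensor: lpc_features[np.newaxis, ...]})
        print(weights.shape)                                     # per-frame blendshape weights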
In the folder /example/ueExample, we provide a packaged UE project containing a digital human created by FACEGOOD that can be driven by /AiSpeech/zsmeif.py.
You can follow the steps below to use it:
python zsmeif.py
tensorflow-gpu 1.15
Python libs: pyaudio, requests, websocket, websocket-client
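If any of these are missing, they can usually be installed with pip, for example as below; pyaudio may also need the system PortAudio library, and versions other than TensorFlow 1.15 are not pinned by this README.

    pip install tensorflow-gpu==1.15 pyaudio requests websocket websocket-client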
The testing data, Maya model, and UE4 test project can be downloaded from the link below.
data_all code : n6ty
Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion
Wechat: FACEGOOD_CHINA
Email: support@facegood.cc
Discord: https://discord.gg/V46y6uTdw8
Audio2Face Core is released under the terms of the MIT license. See COPYING for more information or see https://opensource.org/licenses/MIT.