ReSyncer: Rewiring Style-based Generator for Unified Audio-Visually Synced Facial Performer
3. Zhongguancun Laboratory, 4. S-Lab, Nanyang Technological University.
Abstract
Lip-syncing videos with given audio is the foundation for various applications, including the creation of virtual presenters or performers. While recent studies explore high-fidelity lip-sync with different techniques, their task-oriented models either require long-term videos for clip-specific training or retain visible artifacts. In this paper, we propose ReSyncer, a unified and effective framework that synchronizes generalized audio-visual facial information. The key design is revisiting and rewiring the Style-based generator to efficiently adopt 3D facial dynamics predicted by a principled style-injected Transformer. By simply re-configuring the information-insertion mechanisms within the noise and style spaces, our framework fuses motion and appearance with unified training. Extensive experiments demonstrate that ReSyncer not only produces high-fidelity lip-synced videos according to audio, but also supports multiple appealing properties suitable for creating virtual presenters and performers, including fast personalized fine-tuning, video-driven lip-syncing, the transfer of speaking styles, and even face swapping.
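To make the "rewiring" idea more concrete, the sketch below illustrates, under our own assumptions, how motion and appearance could be inserted through different pathways of a StyleGAN2-style synthesis block: a spatial map rendered from the predicted 3D facial dynamics replaces the usual per-pixel noise injection, while the style vector continues to carry appearance. All names (DynamicsInjectedBlock, dyn_map, dyn_proj), tensor shapes, and the specific encoding of the dynamics are hypothetical; this is not the ReSyncer implementation, only a minimal PyTorch illustration of the kind of insertion the abstract describes.

# Hypothetical sketch of the rewired insertion idea; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedConv(nn.Module):
    """Simplified StyleGAN2-style modulated 3x3 convolution (style branch)."""

    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.02)
        self.affine = nn.Linear(style_dim, in_ch)  # style w -> per-channel scales

    def forward(self, x, w):
        b, c, h, wd = x.shape
        scale = self.affine(w).view(b, 1, c, 1, 1)                 # (B, 1, Cin, 1, 1)
        weight = self.weight.unsqueeze(0) * scale                  # modulate by style
        demod = torch.rsqrt((weight ** 2).sum(dim=(2, 3, 4), keepdim=True) + 1e-8)
        weight = (weight * demod).view(-1, c, 3, 3)                # demodulate, fold batch
        x = x.view(1, b * c, h, wd)
        out = F.conv2d(x, weight, padding=1, groups=b)             # grouped conv per sample
        return out.view(b, -1, h, wd)


class DynamicsInjectedBlock(nn.Module):
    """One synthesis block where the random-noise injection is replaced by a
    spatial map derived from 3D facial dynamics (motion), while the style
    vector keeps carrying appearance."""

    def __init__(self, in_ch, out_ch, style_dim, dyn_ch):
        super().__init__()
        self.conv = ModulatedConv(in_ch, out_ch, style_dim)
        self.dyn_proj = nn.Conv2d(dyn_ch, out_ch, kernel_size=1)   # motion -> feature space
        self.act = nn.LeakyReLU(0.2)

    def forward(self, feat, style_w, dyn_map):
        out = self.conv(feat, style_w)
        dyn = F.interpolate(dyn_map, size=out.shape[-2:],
                            mode="bilinear", align_corners=False)
        out = out + self.dyn_proj(dyn)   # inject motion where noise would normally go
        return self.act(out)


if __name__ == "__main__":
    block = DynamicsInjectedBlock(in_ch=64, out_ch=64, style_dim=512, dyn_ch=3)
    feat = torch.randn(2, 64, 32, 32)       # intermediate generator features
    style_w = torch.randn(2, 512)           # appearance code in W space
    dyn_map = torch.randn(2, 3, 64, 64)     # e.g. a rendered map of predicted 3D dynamics
    print(block(feat, style_w, dyn_map).shape)  # torch.Size([2, 64, 32, 32])

Routing motion through the spatial (noise) pathway and appearance through the modulation (style) pathway is one plausible reading of "re-configuring the information-insertion mechanisms", and it matches the abstract's claim that motion and appearance are fused under a single, unified training scheme.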
Demo Video
Materials
Citation
@inproceedings{guan2024resyncer,
  title     = {ReSyncer: Rewiring Style-based Generator for Unified Audio-Visually Synced Facial Performer},
  author    = {Guan, Jiazhi and Xu, Zhiliang and Zhou, Hang and Wang, Kaisiyuan and He, Shengyi and Zhang, Zhanwang and Liang, Borong and Feng, Haocheng and Ding, Errui and Liu, Jingtuo and Wang, Jingdong and Zhao, Youjian and Liu, Ziwei},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024}
}