0nutation / SpeechGPT
- Tuesday, May 23, 2023, 00:00:09
SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities.
SpeechGPT is a large language model with intrinsic cross-modal conversational abilities, capable of perceiving and generating multi-modal content following human instructions. Building on discrete speech representations, we first construct SpeechInstruct, a large-scale cross-modal speech instruction dataset. We then employ a three-stage training strategy: modality-adaptation pre-training, cross-modal instruction fine-tuning, and chain-of-modality instruction fine-tuning. Experimental results demonstrate that SpeechGPT has an impressive capacity to follow multi-modal human instructions and highlight the potential of handling multiple modalities with a single model.
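To make "discrete speech representations" concrete, here is a minimal sketch of how continuous speech features can be mapped to a sequence of discrete units by nearest-centroid quantization (e.g. k-means over self-supervised features). The 2-D frames and centroids below are toy values for illustration, not the actual codebook used by SpeechGPT.

```python
import math

def quantize(frames, centroids):
    """Map each continuous feature frame to the index of its nearest
    centroid, yielding a sequence of discrete speech units."""
    units = []
    for frame in frames:
        dists = [math.dist(frame, c) for c in centroids]
        units.append(dists.index(min(dists)))
    # Collapse consecutive repeats, a common step for unit sequences
    return [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]

# Toy example: 2-D "features" and three hypothetical centroids
centroids = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
frames = [(0.1, 0.0), (0.9, 1.1), (1.1, 0.9), (2.1, -0.1)]
print(quantize(frames, centroids))  # → [0, 1, 2]
```

Once speech is expressed as such unit tokens, it can share a vocabulary and training objective with text, which is what enables a single model to handle both modalities.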
SpeechGPT demos are shown on our project page. As the demos illustrate, SpeechGPT has strong cross-modal instruction-following and spoken-dialogue abilities. SpeechGPT can be a talking encyclopedia, your personal assistant, your chat partner, a poet, a psychologist, your educational assistant, and more.
SpeechGPT’s capabilities to tackle multiple cross-modal tasks
Left: SpeechInstruct construction process. Right: SpeechGPT model structure
We will release the SpeechInstruct dataset.
We will release the modality-adaptation pre-trained model, the cross-modal instruction fine-tuned model, and the chain-of-modality instruction fine-tuned model.
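The three released checkpoints correspond to the three training stages. The following sketch only illustrates how the stages chain together, with each stage producing the checkpoint the next one starts from; the stage names come from the paper, while the data descriptions and `train` function are hypothetical stand-ins, not the released training code.

```python
# Each tuple: (stage name from the paper, illustrative data description)
STAGES = [
    ("modality-adaptation pre-training", "unlabeled speech units"),
    ("cross-modal instruction fine-tuning", "paired speech-text instructions"),
    ("chain-of-modality instruction fine-tuning", "speech-to-text-to-speech chains"),
]

def train(model, stage, data):
    # Placeholder for an actual optimization loop: record which stages
    # this checkpoint has been through.
    return model + [stage]

model = []  # stands in for the base LLM weights
checkpoints = []
for stage, data in STAGES:
    model = train(model, stage, data)
    checkpoints.append(list(model))  # one releasable checkpoint per stage

print(len(checkpoints))  # → 3
```

The design point is that each stage specializes the previous checkpoint rather than training from scratch, so intermediate models remain useful on their own.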
Cases of cross-modal instruction-following results
Cases of spoken dialogue results
If you find SpeechGPT useful for your research and applications, please cite it using the following BibTeX entry:
@misc{zhang2023speechgpt,
      title={SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities},
      author={Dong Zhang and Shimin Li and Xin Zhang and Jun Zhan and Pengyu Wang and Yaqian Zhou and Xipeng Qiu},
      year={2023},
      eprint={2305.11000},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}