MOSS-RLHF
Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers, which hinders the development of technical alignment and the safe deployment of LLMs. Stable RLHF training remains a puzzle. In this technical report, we aim to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the conda virtual environment to run the code.
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
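As a quick sanity check after installation (a minimal sketch; the exact versions printed depend on your setup), you can verify that the core packages import correctly:
# Sanity check inside the rlhf environment (illustrative)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # expect 1.13.1 and True
python -c "import transformers, deepspeed; print(transformers.__version__, deepspeed.__version__)"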
Run the code in a few steps.
We cannot directly release the full weights of the reward model because of license restrictions. You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We have uploaded the diff models; thanks to tatsu-lab, you can recover the reward model by following these steps:
1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
2) Merge the weight diff with the original Llama-7B:
# For English:
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
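For intuition, the merge step conceptually adds each released diff tensor to the corresponding original Llama-7B tensor. The sketch below illustrates that idea only; it is not the repository's merge_weight_zh.py, and the file paths and the assumption that the diff is a plain state dict of additive deltas are hypothetical.
# Illustrative additive weight-diff merge (not the official merge_weight_zh.py;
# paths and the diff format are assumptions).
import torch

base = torch.load("path/to/llama-7b-hf/pytorch_model.bin", map_location="cpu")   # original Llama-7B state dict
diff = torch.load("path/to/diff/pytorch_model.bin", map_location="cpu")          # released diff state dict
recovered = {name: base[name] + delta if name in base else delta                 # recovered = base + diff
             for name, delta in diff.items()}
torch.save(recovered, "path/to/recover/pytorch_model.bin")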
Because of some limitations, we cannot currently release the Chinese SFT model. You can use your own SFT model, or a strong base model, in place of our SFT model.
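Whatever model you substitute, it should presumably be a Llama-style causal language model loadable with Hugging Face transformers; a minimal loading sketch (the local path is a placeholder):
# Minimal sketch of loading a substitute SFT/base model (path is a placeholder).
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("./models/your-sft-model")
model = LlamaForCausalLM.from_pretrained("./models/your-sft-model")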
Run the command below.
# For Chinese:
bash run_zh.sh
# For English:
TODO
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}