---
inference: false
---

# longchat-7b-16k Model Card

## Model details

**Model type:**

longchat-7b-16k is an open-source chatbot trained by fine-tuning llama-7b on user-shared conversations collected from ShareGPT, using the condensing rotary embedding technique described in this [blog post](https://lmsys.org/blog/2023-06-29-longchat).
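
The condensing technique rescales rotary position embedding (RoPE) indices so that a 16K-token context maps onto the 2K-position range llama-7b was pretrained on, i.e. a condensing ratio of 16384 / 2048 = 8. Below is a minimal sketch of the idea; the function name and signature are illustrative, not the repository's actual API:

```python
import torch

def condensed_rope_tables(seq_len: int, head_dim: int,
                          ratio: float = 8.0, base: float = 10000.0):
    """Illustrative sketch of condensed rotary embeddings: position
    indices are divided by `ratio` (16384 / 2048 = 8 for a 16K context)
    so long inputs stay within the pretrained position range."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() / ratio  # the condensing step
    freqs = torch.outer(positions, inv_freq)           # (seq_len, head_dim/2)
    emb = torch.cat((freqs, freqs), dim=-1)            # (seq_len, head_dim)
    return emb.cos(), emb.sin()                        # used by attention as usual

```

With a ratio of 8, token position 16000 is treated as position 2000, which lies inside the range the pretrained model has already learned to handle.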

**Model date:**

longchat-7b-16k was trained in June 2023.

**Organizations developing the model:**

The LongChat developers: Dacheng Li*, Rulin Shao*, Anze Xie, Ying Sheng, Lianmin Zheng, Ion Stoica, Xuezhe Ma, and Hao Zhang

**Paper or resources for more information:**

https://github.com/DachengLi1/LongChat

**Where to send questions or comments about the model:**

https://github.com/DachengLi1/LongChat

## Intended use

**Primary intended uses:**

The primary use of longchat-7b-16k is for research purposes.

**Primary intended users:**

The primary intended users of the model are researchers in natural language processing, machine learning, and artificial intelligence.

## Training dataset

80K conversations collected from ShareGPT.com.

## Evaluation dataset

A preliminary evaluation of model quality was conducted with our released [LongEval](https://github.com/DachengLi1/LongChat) benchmark.
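
LongEval measures long-range recall with retrieval-style tasks over long prompts. The sketch below is a hypothetical line-retrieval-style probe in that spirit; the prompt template and helper function here are illustrative and do not reproduce LongEval's actual task format:

```python
import random

def make_line_retrieval_prompt(num_lines: int = 600, seed: int = 0):
    """Hypothetical probe in the spirit of LongEval's line-retrieval
    task: bury key-value lines in a long context and ask the model to
    recall one of them. Illustrative only; the real task construction
    lives in the LongChat repository."""
    rng = random.Random(seed)
    lines = [f"line {i}: REGISTER_CONTENT is <{rng.randint(10000, 99999)}>"
             for i in range(num_lines)]
    target = rng.randrange(num_lines)
    expected = lines[target].split("<")[1].rstrip(">")
    prompt = ("Below is a record of lines I want you to remember.\n"
              + "\n".join(lines)
              + f"\nWhat is the REGISTER_CONTENT in line {target}? "
                "Answer with the number only.")
    return prompt, expected

prompt, expected = make_line_retrieval_prompt()
# Feed `prompt` to the model and check that its reply contains `expected`.
```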