---
license: apache-2.0
inference: false
---

**NOTE: This "delta model" cannot be used directly.**
Users must apply it on top of the original LLaMA weights to obtain the actual Vicuna weights.
See https://github.com/lm-sys/FastChat#vicuna-weights for instructions.
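
For intuition, the sketch below shows what applying the delta amounts to: the delta stores elementwise weight differences, which are added back to the base LLaMA weights. This is a minimal illustration with placeholder paths, assuming matching parameter shapes; follow the FastChat instructions linked above for the supported procedure.

```python
# Minimal sketch of delta application; paths are placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "/path/to/llama-13b", torch_dtype=torch.float16
)
delta = AutoModelForCausalLM.from_pretrained(
    "/path/to/vicuna-13b-delta", torch_dtype=torch.float16
)

delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    # The delta stores (Vicuna - LLaMA), so adding it to the base
    # LLaMA weights reconstructs the actual Vicuna weights.
    param.add_(delta_state[name])

base.save_pretrained("/path/to/vicuna-13b")
```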

# Vicuna Model Card

## Model details

**Model type**
Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. 

**Organizations developing the model**
The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego.

**Paper or resources for more information**
https://vicuna.lmsys.org/

**License**
Apache License 2.0

**Where to send questions or comments about the model**
https://github.com/lm-sys/FastChat/issues

## Intended use
**Primary intended uses**
The primary use of Vicuna is research on large language models and chatbots.

**Primary intended users**
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## Training dataset
70K conversations collected from ShareGPT.com.

## Evaluation dataset
A preliminary evaluation of model quality was conducted by creating a set of 80 diverse questions and using GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details.
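
For illustration, a hypothetical sketch of this GPT-4-as-judge setup is shown below; the actual prompts and scoring rubric are described at https://vicuna.lmsys.org/, and the prompt wording and `judge` function here are assumptions.

```python
# Hypothetical sketch of GPT-4-as-judge scoring (legacy openai<1.0 API).
import openai

JUDGE_PROMPT = (
    "You are an impartial judge. Rate each assistant's answer to the "
    "question on a scale of 1 to 10 and briefly justify the scores."
)

def judge(question: str, answer_a: str, answer_b: str) -> str:
    # Ask GPT-4 to compare two model outputs for the same question.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": (
                f"Question: {question}\n\n"
                f"Assistant A: {answer_a}\n\n"
                f"Assistant B: {answer_b}"
            )},
        ],
        temperature=0,  # deterministic judging
    )
    return response["choices"][0]["message"]["content"]
```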