jphme committed 3dd9487 (parent 9764da5): Create README.md
---
language:
- de
- en
pipeline_tag: text-generation
inference: false
---

# Vicuna 13b v1.3 Ger

vicuna-13b-v1.3-ger is a variant of [LMSYS](https://huggingface.co/lmsys)'s [Vicuna 13b v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) model, finetuned on an additional German-language dataset. The original model was trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT (see the original model card below).

This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German-language content. However, the model is not yet fully optimized for German, as it has been trained on a small, experimental dataset and its capabilities are limited by the small parameter count.

I am working on improving the model's capabilities and will update the model if there is sufficient interest.

## Results

I have only evaluated the output on a small, handcrafted set of German test prompts, which confirmed that the model's ability to understand and generate German text is well above that of the base model.

## Problems

There might be inconsistencies in multi-turn chat applications, as there was a small problem with the `<eos>` tokens during preparation of the finetuning dataset.
Please report any problems so I can fix them for the next version.
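
As a workaround when preparing your own finetuning data, you can check that every turn ends with the end-of-sequence marker before training. A minimal sketch (the `</s>` literal is LLaMA's default EOS string; in practice use your tokenizer's `eos_token` attribute):

```python
EOS = "</s>"  # LLaMA's default EOS string; in practice use tokenizer.eos_token

def ensure_eos(turns):
    """Append the EOS marker to every turn that does not already end with it."""
    return [t if t.endswith(EOS) else t + EOS for t in turns]

fixed = ensure_eos(["Hallo!", "Wie geht es dir?</s>"])
# → ["Hallo!</s>", "Wie geht es dir?</s>"]
```

Running a pass like this over the dataset before tokenization avoids the missing-EOS issue described above.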

# Original Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971)

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api
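
If you prefer querying the model directly from Python, a minimal sketch is shown below. It builds a single-turn prompt in the Vicuna v1.1 conversation format used by FastChat; the model id in the commented-out generation call is an assumption based on this repository's name.

```python
def build_vicuna_prompt(user_message: str) -> str:
    """Wrap a single user message in the Vicuna v1.1 conversation template."""
    system = ("A chat between a curious user and an artificial intelligence assistant. "
              "The assistant gives helpful, detailed, and polite answers to the user's questions.")
    return f"{system} USER: {user_message} ASSISTANT:"

prompt = build_vicuna_prompt("Was ist die Hauptstadt von Deutschland?")

# Generation with 🤗 Transformers (downloads ~26 GB of weights; model id is an assumption):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("jphme/vicuna-13b-v1.3-ger")
# model = AutoModelForCausalLM.from_pretrained("jphme/vicuna-13b-v1.3-ger", device_map="auto")
# out = model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```

Multi-turn conversations follow the same pattern, alternating `USER:` and `ASSISTANT:` turns after the system line.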

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and the [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).