GGUF
Japanese
English
Inference Endpoints
conversational
LoneStriker committed on
Commit
519acb3
1 Parent(s): 398354d

Upload folder using huggingface_hub

Browse files
.gitattributes CHANGED
@@ -1,35 +1,6 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ SambaLingo-Japanese-Chat-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ SambaLingo-Japanese-Chat-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ SambaLingo-Japanese-Chat-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ SambaLingo-Japanese-Chat-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ SambaLingo-Japanese-Chat-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ SambaLingo_Logo.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,136 @@
+ ---
+ license: llama2
+ datasets:
+ - HuggingFaceH4/ultrachat_200k
+ - HuggingFaceH4/ultrafeedback_binarized
+ - HuggingFaceH4/cai-conversation-harmless
+ language:
+ - ja
+ - en
+ ---
+
+ # SambaLingo-Japanese-Chat
+
+ <img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
+
+ <!-- Provide a quick summary of what the model is/does. -->
+ SambaLingo-Japanese-Chat is a human-aligned chat model trained in Japanese and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Japanese-Base](https://huggingface.co/sambanovasystems/SambaLingo-Japanese-Base). The base model adapts [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Japanese by training on 42 billion tokens from the Japanese split of the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
+
+ ## Model Description
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [SambaNova Systems](https://sambanova.ai/)
+ - **Model type:** Language Model
+ - **Language(s):** Japanese, English
+ - **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
+ - **Try this model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
+ - **Blog post:** [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
+
+ ## Getting Started
+
+ ### Loading the Model with Hugging Face
+ Make sure to set `use_fast=False` when loading the tokenizer.
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Japanese-Chat", use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Japanese-Chat", device_map="auto", torch_dtype="auto")
+ ```
+
+ ### Interacting with the Model via Pipeline
+ Make sure to set `use_fast=False` when loading the tokenizer.
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Japanese-Chat", device_map="auto", use_fast=False)
+ messages = [
+     {"role": "user", "content": "YOUR_QUESTION"},  # replace with your question
+ ]
+ prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ outputs = pipe(prompt)[0]
+ outputs = outputs["generated_text"]
+ ```
+
+ ### Suggested Inference Parameters
+ - Temperature: 0.8
+ - Repetition penalty: 1.0
+ - Top-p: 0.9
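The parameters above map onto the `transformers` generation keyword arguments roughly as follows. This is a sketch: `do_sample=True` and the `max_new_tokens` value are our assumptions (temperature and top-p only take effect when sampling is enabled), not part of the model card.

```python
# Suggested sampling settings expressed as transformers generation kwargs.
# do_sample=True is assumed: temperature and top_p are ignored under
# greedy decoding. max_new_tokens is an illustrative value.
generation_kwargs = {
    "do_sample": True,
    "temperature": 0.8,
    "repetition_penalty": 1.0,
    "top_p": 0.9,
    "max_new_tokens": 256,
}

# These can be passed straight through to a text-generation pipeline, e.g.:
# outputs = pipe(prompt, **generation_kwargs)
```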
+
+ ### Prompting Guidelines
+ To prompt this model, please use the following chat template:
+ ```
+ <|user|>\n{question}</s>\n<|assistant|>\n
+ ```
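The template above can be applied with a small helper. This function is a hypothetical convenience, not part of the model card; the tokenizer's `apply_chat_template` (shown earlier) should produce the same layout.

```python
def format_prompt(question: str) -> str:
    """Wrap a single user question in the SambaLingo chat template."""
    return f"<|user|>\n{question}</s>\n<|assistant|>\n"

prompt = format_prompt("What is nodate in tea ceremony?")
# prompt == "<|user|>\nWhat is nodate in tea ceremony?</s>\n<|assistant|>\n"
```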
+
+ ### Example Prompts and Generations
+ ```
+ <|user|>
+ 茶道用語で、茶室ではなく、屋外でお茶を点てることを何というでしょう?</s>
+ <|assistant|>
+ 茶道用語で、茶室ではなく屋外でお茶を点てることを「野点(のだて)」といいます。
+ ```
+ (Translation: "In tea-ceremony terminology, what is it called when tea is prepared outdoors rather than in a tea room?" Answer: "It is called nodate (野点).")
+
+ ## Training Details
+ The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and direct preference optimization (DPO).
+
+ The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with its Google-translated version. It was trained for one epoch with a global batch size of 512 and a maximum sequence length of 2048 tokens. We used a linearly decaying learning rate of 2e-5 and 10% warmup.
+
+ The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) datasets, mixed with 10% of the data Google-translated. It was trained for three epochs with a global batch size of 32. We used a linearly decaying learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.
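For reference, the hyperparameters above can be collected in one place. The values are taken directly from the two paragraphs; the dictionary layout itself is just illustrative.

```python
# Alignment hyperparameters as stated in the Training Details section.
training_config = {
    "sft": {
        "epochs": 1,
        "global_batch_size": 512,
        "max_seq_len": 2048,
        "learning_rate": 2e-5,  # linear decay
        "warmup_ratio": 0.10,
    },
    "dpo": {
        "epochs": 3,
        "global_batch_size": 32,
        "learning_rate": 5e-7,  # linear decay
        "warmup_ratio": 0.10,
        "beta": 0.1,            # DPO regularization factor
    },
}
```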
+
+
+ ## Tokenizer Details
+ We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
+
+ ## Uses
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+ Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
+
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+ SambaLingo should NOT be used for:
+
+ - Mission-critical applications
+ - Applications that involve the safety of others
+ - Making highly important decisions
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ Like all LLMs, SambaLingo has certain limitations:
+ - Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
+ - Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
+ - Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
+ - Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
+ - Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
+
+ ## Acknowledgments
+ We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
+
+ We would like to give special thanks to the following groups:
+ - Meta for open-sourcing Llama 2 and the FLORES-200 dataset
+ - Nguyen et al. for open-sourcing the CulturaX dataset
+ - CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
+ - EleutherAI for their open-source evaluation framework
+ - The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo
+
+
+ ## Cite SambaLingo
+ ```
+ @software{sambalingo,
+   title = {{SambaLingo: Open Source Language Experts}},
+   author = {SambaNova Systems},
+   url = {https://huggingface.co/sambanovasystems/SambaLingo-Japanese-Chat},
+   month = {2},
+   year = {2024},
+   version = {1.0},
+ }
+ ```
SambaLingo-Japanese-Chat-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b78e2e2516d25e5c68e0b537273277faca879c638bde9adaae755836c5a455b
+ size 3727512000
SambaLingo-Japanese-Chat-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1fc29994a2bfc5110a51b407bf1e3fd4a80a5deb8eec2d68cc7d7daae92c89b
+ size 4225192384
SambaLingo-Japanese-Chat-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75c42d29991cad56d4245fab823ee561f211d9e06b696b735befc01432b78faf
+ size 4940321216
SambaLingo-Japanese-Chat-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:efb1930d1441f88e542fcc4a1ce3bbf1fc238fa355a2a10c1c3f12b098b4d57a
+ size 5700145600
SambaLingo-Japanese-Chat-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83364a121c4f1d39e54e27248fb86efc2fdcdef77b46bc2b435cd12ed0d181b2
+ size 7382323648
SambaLingo_Logo.png ADDED

Git LFS Details

  • SHA256: 12134a10d8250af8f27c6e541744cf2c2b563a286abb462def466bf4c1691a7f
  • Pointer size: 132 Bytes
  • Size of remote file: 1.46 MB