sho-takase committed on
Commit 24e42c1
1 Parent(s): eabe8fa

Add readme

Files changed (1)
  1. README.md +38 -0
README.md ADDED
---
language:
- ja
- en
---

# Sarashina2-8x70B

This repository provides large language models trained by [SB Intuitions](https://www.sbintuitions.co.jp/).

## Required Hardware

BF16 Inference:
- 16x H100
- 16x A100 80GB
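
The following is a minimal sketch of BF16 inference with Hugging Face `transformers`. It assumes the checkpoint at `sbintuitions/sarashina2-8x70B` loads with `AutoModelForCausalLM` and that `accelerate` is installed so `device_map="auto"` can shard the model across all visible GPUs; the prompt and generation settings are illustrative only.

```python
# Sketch only: BF16 inference across multiple GPUs (assumes transformers + accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sbintuitions/sarashina2-8x70B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # BF16 inference, as listed above
    device_map="auto",           # shard the >450B parameters over all visible GPUs
)

# Illustrative prompt; generation settings are not recommended values.
inputs = tokenizer("日本の首都は", return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```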

## Model Description

We constructed this Sarashina2-8x70B model, which consists of over 450 billion parameters, by applying the [sparse upcycling technique](https://arxiv.org/abs/2212.05055) to our [Sarashina2-70B](https://huggingface.co/sbintuitions/sarashina2-70b) model in order to build a Mixture-of-Experts model efficiently.
We trained the Sarashina2-8x70B model on a mix of Japanese and English corpora collected from web data.
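
To illustrate the general idea behind sparse upcycling (this is not the actual Sarashina2-8x70B architecture or training code), the sketch below copies a pretrained dense FFN into several identical experts and pairs them with a freshly initialized router; names such as `UpcycledMoE`, `num_experts`, and `top_k` are illustrative assumptions.

```python
# Toy illustration of sparse upcycling: dense FFN -> MoE layer with copied experts.
import copy
import torch
import torch.nn as nn

class UpcycledMoE(nn.Module):
    def __init__(self, dense_ffn: nn.Module, hidden_size: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        # Every expert starts as an exact copy of the pretrained dense FFN weights.
        self.experts = nn.ModuleList(copy.deepcopy(dense_ffn) for _ in range(num_experts))
        # The router is new and is trained from scratch during continued training.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_size); each token is routed to its top-k experts.
        scores = torch.softmax(self.router(x), dim=-1)       # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)   # (num_tokens, top_k)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

# Example: upcycle a small dense FFN into an 8-expert MoE layer.
dense_ffn = nn.Sequential(nn.Linear(64, 256), nn.SiLU(), nn.Linear(256, 64))
moe = UpcycledMoE(dense_ffn, hidden_size=64, num_experts=8, top_k=2)
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```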

## Tokenization

We use a [sentencepiece](https://github.com/google/sentencepiece) tokenizer with a unigram language model and byte fallback.
We do not apply pre-tokenization with a Japanese tokenizer.
Thus, users can feed raw sentences directly into the tokenizer.
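
As a brief sketch (the exact pieces and ids depend on the released vocabulary), raw text can be passed straight to the tokenizer loaded via `AutoTokenizer`, with no Japanese pre-tokenization step:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sbintuitions/sarashina2-8x70B")

# Raw Japanese text goes straight in; no MeCab-style pre-tokenization is required.
text = "吾輩は猫である。名前はまだない。"
print(tokenizer.tokenize(text))   # subword pieces from the unigram model
print(tokenizer(text).input_ids)  # corresponding token ids
```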

## Ethical Considerations and Limitations

Sarashina2 has not yet been tuned to follow instructions.
Therefore, Sarashina2 may generate meaningless sequences, inaccurate content, or biased/objectionable outputs.
Before using Sarashina2, we ask developers to tune the model based on human preferences and safety considerations.

## License

[Sarashina Model NonCommercial License Agreement](https://huggingface.co/sbintuitions/sarashina2-8x70B/blob/main/Sarashina%20Model%20NonCommercial%20License%20Agreement)