jieunhan committed
Commit 6b271d2 • 1 Parent(s): 5494c6e

Create README.md

Files changed (1)
README.md +54 -0
README.md ADDED
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- yanolja/KoSOLAR-10.7B-v0.2
- yanolja/Bookworm-10.7B-v0.4-DPO
base_model:
- yanolja/KoSOLAR-10.7B-v0.2
- yanolja/Bookworm-10.7B-v0.4-DPO
---

# solar_merge_test_1

## 🧩 Configuration

```yaml
base_model: yanolja/KoSOLAR-10.7B-v0.2
dtype: float16
experts:
  - source_model: yanolja/KoSOLAR-10.7B-v0.2
    positive_prompts: ["당신은 사람들에게 도움을 주는 어시스턴트이다."]  # "You are an assistant who helps people."
  - source_model: yanolja/Bookworm-10.7B-v0.4-DPO
    positive_prompts: ["당신은 다방면으로 답변을 잘하는 어시스턴트이다."]  # "You are an assistant who answers well across many domains."
gate_mode: cheap_embed
tokenizer_source: base
```
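Each expert is selected via a Korean positive prompt; with `gate_mode: cheap_embed`, mergekit initializes the router weights from the raw token embeddings of those prompts rather than from computed hidden states, which is cheaper. Assuming mergekit is installed (`pip install mergekit`), a config like this is typically built into a model with the `mergekit-moe` CLI, e.g. `mergekit-moe config.yaml ./merge`.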

## 💻 Usage

```python
# pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jieunhan/solar_merge_test_1"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
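
Since both experts are gated on Korean prompts, the model is primarily aimed at Korean chat. A minimal follow-up sketch reusing the `pipeline` object built above (the question text is illustrative):

```python
# Illustrative Korean-language query; reuses the pipeline created in the snippet above.
messages = [{"role": "user", "content": "전문가 혼합(Mixture of Experts) 모델이 무엇인지 100단어 이내로 설명해 주세요."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```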