First, download the original LLaMA weights, then convert them using the [weight conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py):

```shell
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 13B --output_dir /output/LLaMA_hf/13B
```
## Step2: Use the [decryption script](https://github.com/icalk-nlp/EduChat/blob/main/decrypt.py) to add the delta weights to the original LLaMA weights.

```shell
python ./decrypt.py --base /path/to/LLAMA_hf/13B --target ./educhat-sft-002-13b-decrypt --delta /path/to/educhat-sft-002-13b
```
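Conceptually, this decryption step recovers the released model by adding each delta parameter to the matching base LLaMA parameter. A minimal pure-Python sketch of that idea, with a hypothetical `apply_delta` helper and plain lists standing in for tensors (the real `decrypt.py` additionally handles checkpoint I/O):

```python
def apply_delta(base, delta):
    """Recover target weights by element-wise adding delta to base.

    A hypothetical, simplified stand-in for the decryption step:
    both checkpoints share parameter names, and values are summed.
    """
    assert base.keys() == delta.keys(), "checkpoints must share parameter names"
    return {name: [b + d for b, d in zip(base[name], delta[name])]
            for name in base}

# Toy checkpoints: one parameter "w" holding two scalar weights each.
base = {"w": [0.5, -1.0]}
delta = {"w": [0.25, 0.5]}
print(apply_delta(base, delta))  # {'w': [0.75, -0.5]}
```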
# Usage Example

After converting the weights, see the usage examples at: https://github.com/icalk-nlp/EduChat#%E4%BD%BF%E7%94%A8%E7%A4%BA%E4%BE%8B