---
license: other
---
# OpenAssistant LLaMa-Based Models
Due to the license attached to LLaMa models by Meta AI, it is not possible to distribute LLaMa-based models directly. Instead, we provide XOR weights for the OA models.
Thanks to Mick for writing the `xor_codec.py` script which enables this process.
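Roughly speaking, the published XOR files combine the OA model data with the corresponding original LLaMa-derived data, which is why a local copy of the original weights is required to reconstruct the OA weights. A minimal sketch of the idea (illustration only, not the actual `xor_codec.py` implementation):
```
# Illustration of XOR-based weight distribution (not the real xor_codec.py,
# which works on whole files/directories and handles more details).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR is its own inverse: applying it twice with the same key
    # recovers the original data.
    return bytes(x ^ y for x, y in zip(a, b))

oa_bytes = b"hypothetical OpenAssistant bytes"     # what OA cannot distribute directly
llama_bytes = b"hypothetical original LLaMa data"  # what you must already have

xor_release = xor_bytes(oa_bytes, llama_bytes)   # this is what gets published
recovered = xor_bytes(xor_release, llama_bytes)  # this is what you reconstruct
assert recovered == oa_bytes
```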
## The Process
Note: This process applies to the `oasst-rlhf-2-llama-30b-7k-steps` model. The same process can be applied to other models in the future, but the checksums will be different.
To use OpenAssistant LLaMa-Based Models, you need to have a copy of the original LLaMa model weights and add them to a `llama` subdirectory here.
Ensure your LLaMa 30B checkpoint matches the correct md5sums:
```
f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth
d9dbfbea61309dc1e087f5081e98331a consolidated.01.pth
2b2bed47912ceb828c0a37aac4b99073 consolidated.02.pth
ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth
4babdbd05b8923226a9e9622492054b6 params.json
```
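To compute these checksums programmatically, here is a minimal sketch using Python's `hashlib` (assuming the checkpoint files sit in a `llama/` subdirectory as described above):
```
import hashlib
from pathlib import Path

# Hash each file in chunks so the multi-GB checkpoints are not loaded
# into memory at once; compare the output with the list above.
for path in sorted(Path("llama").iterdir()):
    if not path.is_file():
        continue
    md5 = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    print(md5.hexdigest(), path.name)
```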
These can be converted to HuggingFace Transformers-compatible weights using the script available [here](https://github.com/huggingface/transformers/blob/28f26c107b4a1c5c7e32ed4d9575622da0627a40/src/transformers/models/llama/convert_llama_weights_to_hf.py).
**Important**: This process was tested with the git version of transformers, 4.28.0.dev0 (commit hash: **28f26c107b4a1c5c7e32ed4d9575622da0627a40**). Make sure the `tokenizers` package is at version 0.13.3. Using different versions may result in broken outputs.
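A quick way to confirm that the installed packages match these pins before converting:
```
# Sanity-check the environment before running the conversion script.
import tokenizers
import transformers

print("transformers:", transformers.__version__)  # expected: 4.28.0.dev0
print("tokenizers:", tokenizers.__version__)      # expected: 0.13.3
```
Then run the conversion script: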
```
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python python convert_llama_weights_to_hf.py --input_dir ~/llama/ --output_dir ~/llama30b_hf/ --model_size 30B
```
Run `find -type f -exec md5sum "{}" + > checklist.chk` in the conversion target directory. This should produce a `checklist.chk` with exactly the following content if your files are correct:
```
d0e13331c103453e9e087d59dcf05432 ./pytorch_model-00001-of-00007.bin
29aae4d31a0a4fe6906353001341d493 ./pytorch_model-00002-of-00007.bin
b40838eb4e68e087b15b3d653ca1f5d7 ./pytorch_model-00003-of-00007.bin
f845ecc481cb92b8a0586c2ce288b828 ./pytorch_model-00004-of-00007.bin
f3b13d089840e6caf22cd6dd05b77ef0 ./pytorch_model-00005-of-00007.bin
12e0d2d7a9c00c4237b1b0143c48a05e ./pytorch_model-00007-of-00007.bin
1348f7c8bb3ee4408b69305a10bdfafb ./pytorch_model-00006-of-00007.bin
aee09e21813368c49baaece120125ae3 ./generation_config.json
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
598538f18fed1877b41f77de034c0c8a ./config.json
fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
b77e99aa2ddc3df500c2b2dc4455a6af ./pytorch_model.bin.index.json
edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
```
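To compare your generated `checklist.chk` against the list above without worrying about line order, you can save the expected list to a file (here called `expected.chk`, a name chosen only for this example) and diff the two sets of entries, e.g.:
```
# Compare two md5sum checklists independent of line order and whitespace.
def load(path):
    return {tuple(line.split()) for line in open(path) if line.strip()}

expected = load("expected.chk")  # the reference list above, saved by you
actual = load("checklist.chk")   # produced by the find/md5sum command

print("missing or mismatched:", expected - actual)
print("unexpected entries:", actual - expected)
```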
Once you have LLaMa weights in the correct format, you can apply the XOR decoding:
```
python xor_codec.py oasst-rlhf-2-llama-30b-7k-steps/ oasst-rlhf-2-llama-30b-7k-steps-xor/ llama30b_hf/
```
You should expect to see one warning message during execution:
`Exception when processing 'added_tokens.json'`
This is normal. If similar messages appear for other files, something has gone wrong.
Now run `find -type f -exec md5sum "{}" + > checklist.chk` in the output directory (here `oasst-rlhf-2-llama-30b-7k-steps`). You should get a file with exactly these contents:
```
d08594778f00abe70b93899628e41246 ./pytorch_model-00007-of-00007.bin
f11acc069334434d68c45a80ee899fe5 ./pytorch_model-00003-of-00007.bin
9f41bd4d5720d28567b3e7820b4a8023 ./pytorch_model-00001-of-00007.bin
27b0dc092f99aa2efaf467b2d8026c3f ./added_tokens.json
148bfd184af630a7633b4de2f41bfc49 ./generation_config.json
b6e90377103e9270cbe46b13aed288ec ./pytorch_model-00005-of-00007.bin
4c5941b4ee12dc0d8e6b5ca3f6819f4d ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
2c92d306969c427275f34b4ebf66f087 ./pytorch_model-00006-of-00007.bin
9a4d2468ecf85bf07420b200faefb4af ./config.json
deb33dd4ffc3d2baddcce275a00b7c1b ./tokenizer.json
13a3641423840eb89f9a86507a90b2bf ./pytorch_model.bin.index.json
ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json
704373f0c0d62be75e5f7d41d39a7e57 ./special_tokens_map.json
ed991042b2a449123824f689bb94b29e ./pytorch_model-00002-of-00007.bin
```
If so, you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers.
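For example, a minimal load-and-generate check (a sketch: it assumes the decoded weights are in `oasst-rlhf-2-llama-30b-7k-steps/` and that you have enough memory for a 30B model; pass `torch_dtype`/`device_map` as appropriate for your hardware):
```
from transformers import LlamaForCausalLM, LlamaTokenizer

model_dir = "oasst-rlhf-2-llama-30b-7k-steps"  # directory with the decoded weights

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = LlamaForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("Hello, I am", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```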