marcelbinz committed
Commit 1151431 • Parent(s): 412fa13

Update README.md
README.md CHANGED
@@ -27,6 +27,9 @@ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloa
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 ```
 
+Alternatively, you can run the model with unsloth on a single 80GB GPU using the [low-rank adapter](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B-adapter).
+
+
 ### Licensing Information
 
 [Llama 3.1 Community License Agreement](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
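
The line added by this commit points to an alternative loading path via unsloth and the low-rank adapter. Below is a minimal sketch of what that could look like with unsloth's `FastLanguageModel`, assuming the adapter repository linked above; the sequence length, 4-bit setting, and prompt are illustrative assumptions, not values taken from this commit.

```python
# Minimal sketch: loading the low-rank adapter with unsloth on a single 80GB GPU.
# Assumes unsloth is installed; max_seq_length, load_in_4bit, and the prompt are
# illustrative choices, not taken from this commit.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="marcelbinz/Llama-3.1-Centaur-70B-adapter",  # adapter repo linked in the README
    max_seq_length=32768,   # assumed context length
    dtype=None,             # let unsloth pick a suitable dtype for the GPU
    load_in_4bit=True,      # 4-bit base weights keep the 70B model within 80GB
)
FastLanguageModel.for_inference(model)  # enable unsloth's faster inference path

prompt = "You will see a series of choices ..."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Compared with the full-precision transformers path shown in the diff context, this route trades some precision for a much smaller memory footprint, which is what makes a single 80GB GPU sufficient.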