marcelbinz committed (verified)
Commit 159600d · 1 Parent(s): d510712

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED

@@ -13,7 +13,9 @@ tags:
 
 ### Model Summary:
 
- <img src="https://marcelbinz.github.io/imgs/centaur.png" width="200"/>
+ <p align="center">
+ <img src="https://marcelbinz.github.io/imgs/centaur.png" width="200"/>
+ </p>
 
 Llama-3.1-Centaur-70B is a foundation model of cognition that can predict and simulate human behavior in any behavioral experiment expressed in natural language.
 
@@ -41,7 +43,7 @@ model, tokenizer = FastLanguageModel.from_pretrained(
 FastLanguageModel.for_inference(model)
 ```
 
- This requires 80 GB GPU memory.
+ This requires 80 GB of GPU memory. More details are provided in this [**example script**](https://github.com/marcelbinz/Llama-3.1-Centaur-70B/blob/main/test_adapter.py).
 
 Alternatively, you can directly use the less-tested [merged model](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B).
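
As a supplement to the second hunk: the diff only shows the tail of the unsloth-based adapter loading and points to the merged model as an alternative. Below is a minimal sketch of loading that merged model with plain transformers; the repository id is taken from the link in the diff, while the dtype, device placement, prompt text, and generation settings are illustrative assumptions rather than anything prescribed by the linked example script.

```python
# Minimal sketch (not from the repository): loading the merged model with
# plain transformers instead of unsloth. The repository id comes from the
# "merged model" link above; everything else here is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "marcelbinz/Llama-3.1-Centaur-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model in bfloat16 needs roughly 140 GB of accelerator memory;
# device_map="auto" lets accelerate shard it across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt only: per the model summary, Centaur is meant to be
# prompted with behavioral experiments expressed in natural language.
prompt = "You will now take part in a decision-making experiment."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```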