---
license: llama3.1
tags:
- Psychology
- unsloth
pipeline_tag: text-generation
library_name: transformers
---


### Model Summary:

Llama-3.1-Centaur-70B is a foundation model of cognition that can predict and simulate human behavior in any behavioral experiment expressed in natural language.


- **Paper:** [Centaur: a foundation model of human cognition](https://marcelbinz.github.io/imgs/Centaur__preprint_.pdf)
- **Point of Contact:** [Marcel Binz](mailto:marcel.binz@helmholtz-munich.de)
         
### Usage:

Note that Centaur was trained on a data set in which human choices are encapsulated by "<<" and ">>" tokens. For optimal performance, it is recommended to format prompts accordingly (see the generation sketch below).

You can run the model with the Hugging Face Transformers library on two or more 80GB GPUs (NVIDIA Ampere or newer), with at least 150GB of free disk space to accommodate the download.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "marcelbinz/Llama-3.1-Centaur-70B"

# Load in bfloat16 and shard the weights across all available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
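
Once the model is loaded, you can simulate behavior by letting Centaur complete a choice left open after an opening "<<" marker. The following is a minimal sketch; the two-armed bandit prompt is a hypothetical illustration, not taken from the training data.

```python
# Hypothetical bandit-style prompt; human choices are wrapped in "<<" and ">>".
# The prompt ends right after an opening "<<" so the model fills in the choice.
prompt = (
    "You will be presented with two slot machines, J and F.\n"
    "You press <<J>> and receive 54 points.\n"
    "You press <<F>> and receive 21 points.\n"
    "You press <<"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1, do_sample=True)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```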

Alternatively, you can run the model with unsloth on a single 80GB GPU using the [low-rank adapter](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B-adapter).
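
A minimal loading sketch, assuming unsloth resolves the base model from the adapter repository; the `max_seq_length` value is an assumption, adjust it to your use case:

```python
from unsloth import FastLanguageModel

# Loads the base model in 4-bit precision and applies the low-rank adapter.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="marcelbinz/Llama-3.1-Centaur-70B-adapter",
    max_seq_length=32768,  # assumed context length; adjust as needed
    dtype=None,            # auto-detect (bfloat16 on Ampere or newer)
    load_in_4bit=True,     # fit the 70B model on a single 80GB GPU
)
FastLanguageModel.for_inference(model)  # enable unsloth's fast inference path
```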


### Licensing Information

[Llama 3.1 Community License Agreement](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)

### Citation Information

```
@misc{binz2024centaurfoundationmodelhuman,
      title={Centaur: a foundation model of human cognition}, 
      author={Marcel Binz and Elif Akata and Matthias Bethge and Franziska Brändle and Fred Callaway and Julian Coda-Forno and Peter Dayan and Can Demircan and Maria K. Eckstein and Noémi Éltető and Thomas L. Griffiths and Susanne Haridi and Akshay K. Jagadish and Li Ji-An and Alexander Kipnis and Sreejan Kumar and Tobias Ludwig and Marvin Mathony and Marcelo Mattar and Alireza Modirshanechi and Surabhi S. Nath and Joshua C. Peterson and Milena Rmus and Evan M. Russek and Tankred Saanum and Natalia Scharfenberg and Johannes A. Schubert and Luca M. Schulze Buschoff and Nishad Singhi and Xin Sui and Mirko Thalmann and Fabian Theis and Vuong Truong and Vishaal Udandarao and Konstantinos Voudouris and Robert Wilson and Kristin Witte and Shuchen Wu and Dirk Wulff and Huadong Xiong and Eric Schulz},
      year={2024},
      eprint={2410.20268},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2410.20268}, 
}
```

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)