---
license: llama3.1
tags:
- Psychology
- unsloth
---

### Model Summary:

Llama-3.1-Centaur-70B is a foundation model of cognition that can predict and simulate human behavior in any behavioral experiment expressed in natural language.

- **Paper:** [Centaur: a foundation model of human cognition](https://arxiv.org/abs/XXX.XXXX)
- **Point of Contact:** [Marcel Binz](mailto:marcel.binz@helmholtz-munich.de)

### Usage:

Note that Centaur is trained on a data set in which human choices are encapsulated by "<<" and ">>" tokens. For optimal performance, it is recommended to adjust prompts accordingly (an illustrative prompt is sketched at the end of this card).

You can run the model with the Hugging Face Transformers library on two or more 80GB GPUs (NVIDIA Ampere or newer), with at least 150GB of free disk space to accommodate the download.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "marcelbinz/Llama-3.1-Centaur-70B"

# Load the model in bfloat16 and shard it across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Alternatively, you can run the model with unsloth on a single 80GB GPU using the [low-rank adapter](https://huggingface.co/marcelbinz/Llama-3.1-Centaur-70B-adapter).

### Licensing Information

[Llama 3.1 Community License Agreement](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)

### Citation Information

Forthcoming.

[](https://github.com/unslothai/unsloth)
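As an illustration of the "<<"/">>" prompt convention described under Usage, the sketch below phrases a hypothetical two-armed bandit experiment in natural language and samples the next choice with the standard `transformers` generation API. The prompt wording, the task itself, and the generation settings (sampling, temperature, single-token continuation) are assumptions made for this example, not recommendations from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "marcelbinz/Llama-3.1-Centaur-70B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Hypothetical bandit-style prompt (illustrative only, not taken from the
# training data). Previous human choices are wrapped in "<<" and ">>", and
# the prompt ends right before the next choice to be predicted.
prompt = (
    "You will repeatedly choose between two slot machines, J and F, to win points.\n"
    "You press <<J>> and win 60 points.\n"
    "You press <<F>> and win 20 points.\n"
    "You press <<"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=1,  # here the next choice ("J" or "F") fits in one token
        do_sample=True,    # sample to simulate a participant rather than argmax
        temperature=1.0,
    )

# Decode only the newly generated token, i.e. the simulated choice.
choice = tokenizer.decode(
    output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(choice)
```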