
DanteLLM

DanteLLM is a Large Language Model developed at Sapienza University of Rome. In October 2023 we submitted a paper titled DanteLLM: Let's Push Italian LLM Research Forward! 🤌 🇮🇹

The paper was accepted with review scores of 5, 4, and 4 out of 5.

How to run the model (Ollama)

This repo contains the model in GGUF format. You can run DanteLLM with Ollama by following these steps:

Make sure you have Ollama correctly installed and ready to use.

Then, you can download DanteLLM's weights using:

```shell
huggingface-cli download rstless-research/DanteLLM-7B-Instruct-Italian-v0.1-GGUF dantellm-merged-hf.q8_0.gguf Modelfile --local-dir . --local-dir-use-symlinks False
```

Load the model using:

```shell
ollama create dante -f Modelfile
```
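For reference, the `Modelfile` downloaded above follows Ollama's Modelfile format. A minimal sketch of such a file, pointing at the quantized GGUF weights, could look like this (the actual Modelfile shipped in this repo may set additional parameters and a prompt template):

```
# Minimal Ollama Modelfile sketch — the repo's actual Modelfile may differ
FROM ./dantellm-merged-hf.q8_0.gguf
```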

Finally, to run the model, use:

```shell
ollama run dante
```
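Besides the interactive CLI, you can also query the model programmatically through Ollama's local REST API. The sketch below assumes Ollama is serving on its default port 11434 and that the model was created under the name `dante`, as in the steps above:

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks Ollama to return one complete JSON object
    instead of a stream of partial responses.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask_dante(prompt: str) -> str:
    """Send a prompt to the local DanteLLM instance and return its reply."""
    payload = json.dumps(build_generate_request("dante", prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model loaded):
# print(ask_dante("Chi è Dante Alighieri?"))
```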

Authors

  • Andrea Bacciu* (work done prior to joining Amazon)
  • Cesare Campagnano*
  • Giovanni Trappolini
  • Prof. Fabrizio Silvestri

* Equal contribution

Model details

  • Format: GGUF (8-bit quantization)
  • Model size: 7.24B params
  • Architecture: llama