OpenBuddy

Requirements

To use this model, you need llama.cpp installed on your machine. You can get llama.cpp from its GitHub repository. To clone and build it, run:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
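Before running the steps above, you can confirm the required tools are available. A minimal sketch, assuming a POSIX shell (`git` and `make` are the only prerequisites the steps above rely on):

```shell
# Check that the build prerequisites are on PATH before cloning.
MISSING=""
for tool in git make; do
  command -v "$tool" >/dev/null || MISSING="$MISSING $tool"
done
if [ -z "$MISSING" ]; then
  echo "all build prerequisites found"
else
  echo "missing:$MISSING"
fi
```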

Using the model

The model's prompt template is:

User: {prompt} Assistant:
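The `{prompt}` placeholder is replaced with the user's message. A minimal sketch of filling the template in a POSIX shell (variable names here are illustrative):

```shell
# Substitute a question into the template; the result is the string
# you would pass to llama.cpp with -p.
TEMPLATE='User: %s Assistant:'
QUESTION='What is your name?'
PROMPT=$(printf "$TEMPLATE" "$QUESTION")
echo "$PROMPT"
```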

You can run the model in llama.cpp with the following command:

./main -m ggml-model-Q8_0.gguf -p "User: ¿Cómo te llamas?\nAssistant:" --log-disable
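One quoting detail worth noting: inside ordinary shell quotes, `\n` is passed through as two literal characters rather than a newline. Recent builds of llama.cpp's `main` offer an `-e` (`--escape`) flag to process such escapes in the prompt itself; check `./main --help` on your build. Alternatively, let `printf` produce a real newline. A small sketch of the difference (POSIX shell):

```shell
# "\n" inside quotes stays a backslash followed by n;
# printf interprets it into an actual newline character.
LITERAL='User: Hi\nAssistant:'
EXPANDED=$(printf 'User: Hi\nAssistant:')
if [ "$LITERAL" = "$EXPANDED" ]; then
  echo "strings are identical"
else
  echo "strings differ: one contains a literal backslash-n"
fi
```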

LM Studio config-presets

Filename: openbuddy.preset.json

{
  "name": "OpenBuddy",
  "inference_params": {
    "input_prefix": "User:",
    "input_suffix": "\nAssistant:",
    "antiprompt": [
      "User:",
      "\nAssistant:"
    ],
    "pre_prompt": "You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user).\nAlways answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2023-04.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.",
    "pre_prompt_prefix": "",
    "pre_prompt_suffix": ""
  },
  "load_params": {
    "rope_freq_scale": 0,
    "rope_freq_base": 0
  }
}
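A typo in the preset JSON will make LM Studio reject the file, so it can help to validate it before loading. A minimal sketch, assuming `python3` is available (the snippet written here is a small stand-in, not the full preset above; point the same `json.tool` command at your real `openbuddy.preset.json`):

```shell
# Write a small JSON snippet and check that it parses cleanly.
printf '%s\n' '{"name": "OpenBuddy"}' > /tmp/preset-check.json
python3 -m json.tool /tmp/preset-check.json >/dev/null && echo "valid JSON"
```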

Model details

Format: GGUF
Model size: 7.28B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

This model is part of the HirCoir/openbuddy-mistral2-7b-v20.3-32k-GGUF collection.