
Index-1.9B-Chat-GGUF

This repository contains the GGUF version of Index-1.9B-Chat, adapted for llama.cpp, and also provides a ModelFile for Ollama.

For more details, see our GitHub repository and the Index-1.9B Technical Report.

LLAMA.CPP

# Install llama.cpp (https://github.com/ggerganov/llama.cpp)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Install llama-cpp-python (https://github.com/abetlen/llama-cpp-python)
pip install llama-cpp-python

Run an interactive chat from the llama.cpp terminal (-if is short for --interactive-first):

./build/bin/llama-cli -m models/Index-1.9B-Chat/ggml-model-bf16.gguf --color -if

Note: llama.cpp does not support custom chat templates, so you need to splice the prompt together yourself. The chat template of Index-1.9B is:

# The three delimiters are <unk> (token_id=0), reserved_0 (token_id=3), reserved_1 (token_id=4)
[<unk>]system_message[reserved_0]user_message[reserved_1]response
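
For example, here is a minimal sketch of splicing the prompt yourself and feeding it to the plain completion API of llama-cpp-python. The literal delimiter spellings "<unk>", "reserved_0", and "reserved_1" are an assumption here; verify that the tokenizer in your GGUF file maps them to token ids 0, 3, and 4.

from llama_cpp import Llama

model_path = "Index-1.9B-Chat-GGUF/ggml-model-Q6_K.gguf"
llm = Llama(model_path=model_path)

# Assumed delimiter spellings; check that they tokenize to ids 0, 3 and 4.
system_message = "You are Index, a large language model independently developed by Bilibili."
user_message = "Hello, who are you?"
prompt = f"<unk>{system_message}reserved_0{user_message}reserved_1"

# The model generates the `response` part of the template.
output = llm.create_completion(prompt=prompt, max_tokens=256)
print(output["choices"][0]["text"])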

Alternatively, llama-cpp-python supports custom chat templates; the template is already embedded in the GGUF file and can be used directly:

from llama_cpp import Llama

model_path = "Index-1.9B-Chat-GGUF/ggml-model-Q6_K.gguf"
llm = Llama(model_path=model_path, verbose=True)
output = llm.create_chat_completion(
    messages=[
        # System prompt, in Chinese: "You are a large language model independently developed by Bilibili, named 'Index'. Based on the information the user provides, you can help the user complete specified tasks and generate appropriate replies that meet the requirements."
        {"role": "system", "content": "你是由哔哩哔哩自主研发的大语言模型,名为“Index”。你能够根据用户传入的信息,帮助用户完成指定的任务,并生成恰当的、符合要求的回复。"},
        # Alternative role-play system prompt, in Chinese: "You need to play a Bilibili comment-section regular, replying in the comment section's passive-aggressive style; don't say you are an AI."
        # {"role": "system", "content": "你需要扮演B站评论区老哥,用评论区阴阳怪气的话术回复,不要说你是AI"},
        # User message, in Chinese: "What do basketball and chicken have to do with each other?"
        {"role": "user", "content": "篮球和鸡有什么关系"}
    ]
)
print(output)
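
create_chat_completion returns an OpenAI-style completion dict; to print only the reply text instead of the whole object:

print(output["choices"][0]["message"]["content"])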

OLLAMA

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Start the server
ollama serve

# Create the model; the GGUF file and System Message can be modified in OllamaModelFile
ollama create Index-1.9B-Chat -f Index-1.9B-Chat-GGUF/OllamaModelFile
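
For orientation, here is a hypothetical sketch of what such a Modelfile can contain. The actual OllamaModelFile shipped in this repository is authoritative; the GGUF file name and the template below are illustrative assumptions based on the chat template described above.

# Hypothetical Modelfile sketch; the real OllamaModelFile is authoritative.
FROM ./ggml-model-Q6_K.gguf
# Default System Message; edit to change the model's persona.
SYSTEM """You are Index, a large language model independently developed by Bilibili."""
# Prompt template following the delimiters described above (assumed spelling).
TEMPLATE """<unk>{{ .System }}reserved_0{{ .Prompt }}reserved_1"""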

# Start an interactive chat in the terminal
ollama run Index-1.9B-Chat

# The System Message can also be specified per request ("续写 金坷垃" means "Continue writing: Jinkela", a Chinese internet meme)
curl http://localhost:11434/api/chat -d '{
  "model": "Index-1.9B-Chat",
  "messages": [
    { "role": "system", "content": "你是由哔哩哔哩自主研发的大语言模型,名为“Index”。你能够根据用户传入的信息,帮助用户完成指定的任务,并生成恰当的、符合要求的回复。" },
    { "role": "user", "content": "续写 金坷垃" }
  ]
}'
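
The same endpoint can be called from Python. Below is a minimal sketch using the requests package; setting "stream" to false makes the server return a single JSON object instead of a token stream:

import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "Index-1.9B-Chat",
        "messages": [
            {"role": "user", "content": "Hello, who are you?"}
        ],
        "stream": False,  # one JSON object instead of a token stream
    },
)
print(resp.json()["message"]["content"])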
Format: GGUF
Model size: 2.17B params
Architecture: llama
Available quantizations: 4-bit, 6-bit, 8-bit, 16-bit