
llm-jp-13b-v2.0-gguf

This is a GGUF-format conversion of the llm-jp-13b-v2.0 model published by llm-jp.

Model list

GGUF v2.0 series
mmnga/llm-jp-13b-v2.0-gguf
mmnga/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-gguf
mmnga/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-gguf
mmnga/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-gguf

GGUF v1.0 series
mmnga/llm-jp-13b-instruct-dolly-en-ja-oasst-v1.1-gguf
mmnga/llm-jp-13b-v1.0-gguf
mmnga/llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0-gguf
mmnga/llm-jp-13b-instruct-full-dolly-oasst-v1.0-gguf
mmnga/llm-jp-1.3b-v1.0-gguf

Convert Script

convert-hf-to-gguf_llmjp_v2.py
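
The linked script is a model-specific variant of llama.cpp's converter. As a rough sketch of the usual flow it plugs into (the checkpoint path and `q4_0` target below are assumptions, not taken from the card):

```shell
# Sketch: convert an HF checkpoint to GGUF with llama.cpp, then quantize.
# The card ships its own converter variant; this uses the stock script name.
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt

BASE=llm-jp-13b-v2.0
# HF checkpoint -> f16 GGUF (checkpoint path is a placeholder)
python convert-hf-to-gguf.py /path/to/${BASE} --outfile ${BASE}-f16.gguf
# Build the quantizer and produce the q4_0 file used in Usage below
make -j quantize
./quantize ${BASE}-f16.gguf ${BASE}-q4_0.gguf q4_0
```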

Usage

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'llm-jp-13b-v2.0-q4_0.gguf' -n 128 -p '以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n### 指示:\n自然言語処理とは何か\n\n### 応答:\n' --top_p 0.95 --temp 0.7 --repeat-penalty 1.1
```
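
The prompt passed via `-p` follows a fixed instruction template. A small helper (`build_prompt` is hypothetical, not part of llama.cpp or llm-jp) can assemble it for arbitrary instructions:

```python
def build_prompt(instruction: str) -> str:
    """Assemble the instruction-following prompt template shown in the
    ./main example above. (Hypothetical helper for illustration only.)"""
    return (
        "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。\n\n"
        f"### 指示:\n{instruction}\n\n### 応答:\n"
    )

print(build_prompt("自然言語処理とは何か"))
```

Note that in the shell command the `\n` sequences inside single quotes are passed literally; llama.cpp's `main` only expands them into real newlines when the `-e` (escape) flag is given.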
Model details

Format: GGUF
Model size: 13.7B params
Architecture: llama
Quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit