---
license: llama2
model_name: Vicuna 7B v1.5
base_model: lmsys/vicuna-7b-v1.5
inference: false
model_creator: lmsys
model_type: llama
prompt_template: >
  A chat between a curious user and an artificial intelligence assistant. The
  assistant gives helpful, detailed, and polite answers to the user's questions.
  USER: {prompt} ASSISTANT:
prepared_by: TheMarmot
---

## Model Details

### Model Description

This is an unquantized version of Vicuna 7B v1.5 from lmsys in the GGUF format, which is compatible with llama.cpp.
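As a minimal sketch of how the prompt template above might be filled in before passing text to the model (the helper name and example message are illustrative, not part of this repo):

```python
# The Vicuna v1.5 chat template, as given in this card's metadata.
PROMPT_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {prompt} ASSISTANT:"
)

def build_prompt(user_message: str) -> str:
    """Fill the user's message into the Vicuna chat template (hypothetical helper)."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

prompt = build_prompt("What is the GGUF format?")

# With the llama-cpp-python bindings installed, the GGUF file could then be
# loaded roughly like this (model_path is a hypothetical filename):
# from llama_cpp import Llama
# llm = Llama(model_path="vicuna-7b-v1.5.gguf")
# output = llm(prompt, max_tokens=256)
```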

- **Developed by:** [Joseph Bejjani]