---
license: llama3.1
inference: false
---

# DRAGON-LLAMA-3.1-GGUF

dragon-llama-3.1-gguf is RAG-instruct trained on top of a Llama-3.1 base model.

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)

One test run (temperature=0.0, sample=False), scored with 1 point for a correct answer, 0.5 points for a partially correct or blank / "not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.

--**Accuracy Score**: **94.0** correct out of 100
--Not Found Classification: 70.0%
--Boolean: 90.0%
--Math/Logic: 72.5%
--Complex Questions (1-5): 4 (Above Average - table-reading, causal)
--Summarization Quality (1-5): 4 (Above Average)
--Hallucinations: No hallucinations, but a few instances of drawing on 'background' knowledge.

For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.

The inference accuracy tests were performed on this model (GGUF Q4_K_M), not the original PyTorch version, and it is possible that the original PyTorch model may score higher. We have chosen to use the quantized version because it is most representative of how the model is likely to be used for inference.

Please compare with [dragon-llama2-7b](https://www.huggingface.co/llmware/dragon-llama-7b-v0) or the most recent [dragon-mistral-0.3](https://www.huggingface.co/llmware/dragon-mistral-0.3-gguf).

### Model Description

- **Developed by:** llmware
- **Model type:** Llama-3.1-8B-Base
- **Language(s) (NLP):** English
- **License:** Llama-3.1 Community License
- **Finetuned from model:** Llama-3.1-Base

## Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

To pull the model via API:

```python
from huggingface_hub import snapshot_download
snapshot_download("llmware/dragon-llama-3.1-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```

Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
# 'query' is the question to answer; 'text_sample' is the context passage
model = ModelCatalog().load_model("llmware/dragon-llama-3.1-gguf", temperature=0.0, sample=False)
response = model.inference(query, add_context=text_sample)
```

Details on the prompt wrapper and other configurations are in the config.json file in the files repository.

## Model Card Contact

Darren Oberst & llmware team
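
For convenience, below is a minimal end-to-end sketch combining the download and inference steps above, assuming the huggingface_hub and llmware packages are installed. The local path, sample passage, and question are illustrative placeholders, not part of the model or repo.

```python
# Minimal end-to-end sketch: download the GGUF file, load it through the
# llmware ModelCatalog, and run a single RAG-style inference.
from huggingface_hub import snapshot_download
from llmware.models import ModelCatalog

# pull the model files into a local directory (path is a placeholder)
snapshot_download("llmware/dragon-llama-3.1-gguf",
                  local_dir="/path/on/your/machine/",
                  local_dir_use_symlinks=False)

# load with deterministic decoding, matching the benchmark settings above
model = ModelCatalog().load_model("llmware/dragon-llama-3.1-gguf",
                                  temperature=0.0, sample=False)

# example context passage and question (illustrative only)
text_sample = ("The services agreement was signed on March 3, 2023, and "
               "provides for a total fee of $250,000, payable in four "
               "equal quarterly installments.")
query = "What is the total fee under the agreement?"

# inference returns a dict; the generated text is under 'llm_response'
response = model.inference(query, add_context=text_sample)
print(response["llm_response"])
```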