---
language:
- en
library_name: peft
pipeline_tag: text-generation
tags:
- Mistral
license: llama2
model-index:
- name: SpeechlessCoder
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0
      verified: false
---
# Mistral-7b-OpenOrca-lora
This is a test.
This LoRA adapter was extracted from the parameter-efficient fine-tuned model Mistral-7B-OpenOrca; it still needs to be verified whether the LoRA model can achieve performance comparable to the original model.
The final goal is to create a toolkit that can load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules based on the user's query to generate the best answer.
The merged LoRA model is here
The source code is here
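As a hedged sketch (the base-model id and adapter path below are placeholders, not confirmed identifiers), the adapter can be attached to the base model with PEFT, and PEFT's multi-adapter API (`load_adapter` / `set_adapter`) provides the building blocks for the query-based switching toolkit described above:

```python
# Minimal sketch: attach this LoRA adapter to the base model with PEFT.
# Repo ids/paths below are illustrative; substitute the actual adapter location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Open-Orca/Mistral-7B-OpenOrca"           # assumed base model
adapter_path = "path/to/Mistral-7B-OpenOrca-lora"   # this adapter (placeholder path)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Load the extracted LoRA weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_path, adapter_name="openorca")

# The longer-term toolkit would keep several adapters resident and switch per query;
# PEFT already exposes the primitives for that:
# model.load_adapter("path/to/another-lora", adapter_name="coder")
# model.set_adapter("coder")   # route the next generation through that adapter

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```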
## lm-evaluation-harness
| Metric     | Mistral-7B-OpenOrca | Mistral-7B-OpenOrca-lora |
| ---------- | ------------------- | ------------------------ |
| ARC        | 64.08               |                          |
| HellaSwag  | 83.99               |                          |
| MMLU       | 62.24               |                          |
| TruthfulQA | 53.05               |                          |
| Average    | 65.84               |                          |
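The LoRA column has not been filled in yet. A minimal sketch of how it could be reproduced with the lm-evaluation-harness Python API follows; it assumes a recent harness release whose `hf` backend accepts a `peft=` model argument, and the adapter path, batch size, and few-shot settings are illustrative (the Open LLM Leaderboard uses task-specific few-shot counts).

```python
# Sketch: score this LoRA adapter on the same tasks with lm-evaluation-harness.
# Assumes a recent release whose `hf` backend supports the `peft=` model argument.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=Open-Orca/Mistral-7B-OpenOrca,"
        "peft=path/to/Mistral-7B-OpenOrca-lora,"   # this adapter (placeholder path)
        "dtype=bfloat16"
    ),
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2"],
    batch_size=8,
)
print(results["results"])
```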
## HumanEval
| Metric           | Mistral-7B-OpenOrca | Mistral-7B-OpenOrca-lora |
| ---------------- | ------------------- | ------------------------ |
| humaneval-python | 35.976              |                          |
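For the humaneval-python number, a hedged sketch of generating completions with OpenAI's human-eval package is shown below; `model` and `tokenizer` refer to the PEFT model loaded in the earlier snippet, and the decoding settings are illustrative (greedy decoding for a single-sample pass@1).

```python
# Sketch: generate HumanEval completions for pass@1 scoring with the
# human-eval package. Paths and decoding settings are illustrative.
import torch
from human_eval.data import read_problems, write_jsonl

problems = read_problems()
samples = []
for task_id, problem in problems.items():
    inputs = tokenizer(problem["prompt"], return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens as the completion.
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Score afterwards with the package's evaluator, e.g.:
#   evaluate_functional_correctness samples.jsonl
```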
## Training procedure
The following `bitsandbytes` quantization config was used during training (a matching `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
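For reference, the same settings can be expressed as a `transformers` `BitsAndBytesConfig`; this is a minimal sketch, and the base-model id used below is an assumption.

```python
# Minimal sketch: the quantization config above expressed as a
# transformers BitsAndBytesConfig (requires bitsandbytes to be installed).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed base model id; swap in the checkpoint being fine-tuned.
model = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/Mistral-7B-OpenOrca",
    quantization_config=bnb_config,
    device_map="auto",
)
```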
### Framework versions
- PEFT 0.5.0