# Model Card for mistral-ko-OpenOrca-2000
A fine-tuned version of the Mistral-7B model trained on Korean data.
## Model Details
- Model Developers : shleeeee(Seunghyeon Lee), oopsung(Sungwoo Park)
- Repository : To be added
- Model Architecture : shleeeee/mistral-ko-OpenOrca-2000 is a fine-tuned version of Mistral-7B-v0.1.
- LoRA target modules : q_proj, k_proj, v_proj, o_proj, gate_proj
- train_batch : 4
- epochs : 2
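
The LoRA settings above list only the target modules. A minimal sketch of the corresponding PEFT configuration is shown below; the rank, alpha, and dropout values are assumptions for illustration and are not stated in this card.

```python
# Hypothetical LoRA configuration matching the target modules listed above.
# r, lora_alpha, and lora_dropout are assumed values, not taken from the card.
from peft import LoraConfig

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    r=8,                # rank: assumed
    lora_alpha=16,      # scaling: assumed
    lora_dropout=0.05,  # assumed
)
```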
## Dataset
Fine-tuned on 2,000 examples from the ko-OpenOrca dataset.
## Prompt template: Mistral
```
<s>[INST]{instruction}[/INST]{output}</s>
```
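
The template can be applied with a small helper like the one below; the function name and example strings are illustrative, not part of this repository.

```python
# Illustrative helper that renders the Mistral-style prompt template above.
# With no output, it produces an inference prompt; with an output, a full
# training example.
def build_prompt(instruction: str, output: str = "") -> str:
    prompt = f"<s>[INST]{instruction}[/INST]"
    if output:
        prompt += f"{output}</s>"
    return prompt

print(build_prompt("한국의 수도는 어디인가요?"))
# <s>[INST]한국의 수도는 어디인가요?[/INST]
```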
## Usage
```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-OpenOrca-2000")

# Or use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-OpenOrca-2000")
```
## Evaluation
To be added