
Explanation

  • Applied DPO to a small number of layers of the base model using an open dataset, and saved only the adapter weights
  • Merged the tuned adapter back into the base model
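
The two steps above, training a low-rank adapter and then folding it back into the base weights, can be sketched in miniature. This is an illustrative sketch only, not the repository's actual code: the matrix names, toy sizes, and scale factor are assumptions standing in for a real LoRA-style adapter.

```python
import numpy as np

# Toy illustration of an adapter merge: a LoRA-style adapter stores
# low-rank matrices A and B, and "merging" folds scale * (B @ A) into
# the frozen base weight so no adapter is needed at inference time.
rng = np.random.default_rng(0)
d, r = 8, 2                      # hidden size and adapter rank (toy values)
W_base = rng.standard_normal((d, d))
A = rng.standard_normal((r, d))  # down-projection (trained, e.g. via DPO)
B = rng.standard_normal((d, r))  # up-projection (trained, e.g. via DPO)
scale = 1.0                      # plays the role of lora_alpha / r

# Merge: the adapter update is absorbed into a single dense weight.
W_merged = W_base + scale * (B @ A)

# The adapter-attached forward pass equals the merged forward pass.
x = rng.standard_normal(d)
assert np.allclose(W_base @ x + scale * (B @ (A @ x)), W_merged @ x)
```

In practice, with a PEFT-format adapter checkpoint this merge is typically done by loading the adapter onto the base model and calling `merge_and_unload()`, then saving the resulting standalone model.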

Base Model

  • beomi/OPEN-SOLAR-KO-10.7B

Used Corpus

Score

Average  Ko-ARC  Ko-HellaSwag  Ko-MMLU  Ko-TruthfulQA  Ko-CommonGen V2
52.83    50.00   60.55         48.80    71.51          43.65

Log

  • 2024.01.25: Initial version uploaded
  • 2024.02.10: README updated
  • 2024.02.11: Scores updated

LICENSE

  • Apache 2.0

Citation

  • beomi/OPEN-SOLAR-KO-10.7B
    @misc {solar_ko_junbum_2023,
        author       = { {L. Junbum} },
        title        = { Solar-Ko-10.7b },
        year         = 2024,
        url          = { https://huggingface.co/beomi/SOLAR-KO-10.7B },
        publisher    = { Hugging Face }
    }
    
Model size: 10.9B params (Safetensors, FP16)
