
DataVortexS-10.7B-dpo-v1.6


Our Team

  • Research & Engineering: Kwangseok Yang, Jeongwon Choi
  • Product Management: Seunghyun Choi, Hyoseok Choi

Model Details

Base Model

LDCC/LDCC-SOLAR-10.7B

Trained On

  • OS: Ubuntu 22.04
  • GPU: H100 80GB 4ea
  • transformers: v4.36.2

Instruction format

It follows the ChatML format.

E.g.

text = """\
<|im_start|>system
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.<|im_end|>
<|im_start|>user
대한민국의 수도는 어디야?<|im_end|>
<|im_start|>assistant
대한민국의 수도는 서울입니다.<|im_end|>
<|im_start|>user
서울 인구는 총 몇 명이야?<|im_end|>
<|im_start|>assistant
"""

Model Benchmark

Ko LM Eval Harness

Task              0-shot    5-shot    10-shot   50-shot
kobest_boolq      0.920118  0.92442   0.929443  0.927317
kobest_copa       0.727263  0.778936  0.804812  0.815761
kobest_hellaswag  0.433039  0.465922  0.459741  0.471022
kobest_sentineg   0.764909  0.93946   0.937002  0.931962
Average           0.711332  0.777185  0.78275   0.786516
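
A run over these tasks can be reproduced with EleutherAI's lm-evaluation-harness, which ships the kobest_* tasks listed above. A minimal sketch, assuming the harness's current Python API; the exact harness version and settings behind the reported numbers are not stated, so results may differ slightly:

import lm_eval

# 5-shot column; change num_fewshot for the 0/10/50-shot columns.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Edentns/DataVortexS-10.7B-dpo-v1.6,dtype=float16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])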

Ko-LLM-Leaderboard

Average  Ko-ARC  Ko-HellaSwag  Ko-MMLU  Ko-TruthfulQA  Ko-CommonGen V2
59.22    53.84   67.9          52.37    64.6           57.38

Implementation Code

The instruction format above is bundled with the tokenizer as a chat_template, so you can use the code below.

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.6")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.6")

messages = [
    # "You are an AI assistant that helps people find information."
    {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
    # "Where is the capital of South Korea?"
    {"role": "user", "content": "대한민국의 수도는 어디야?"},
    # "The capital of South Korea is Seoul."
    {"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
    # "What is the total population of Seoul?"
    {"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag so the
# model continues with a new assistant turn rather than a new user turn.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
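
The snippet above loads the weights at the default precision. A lower-memory loading variant (a sketch, not from the original card) keeps the weights in FP16 and lets accelerate place layers automatically:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Edentns/DataVortexS-10.7B-dpo-v1.6",
    torch_dtype=torch.float16,  # ~2 bytes per parameter instead of 4
    device_map="auto",          # requires the accelerate package
)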

License

The model is licensed under the cc-by-nc-sa-4.0 license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
