
Synatra-7B-Instruct-v0.2

Made by StableFluffy

Contact (do not contact for personal matters):
Discord: is.maywell
Telegram: AlzarTakkarsen

License

This model is for strictly non-commercial (cc-by-nc-4.0) use only. The model (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the cc-by-nc-4.0 license included in any parent repository and the non-commercial use clause remain in place, regardless of the licenses of other models involved. The license may change after a new model is released. If you want to use this model for commercial purposes, contact me.

Model Details

Base Model
mistralai/Mistral-7B-Instruct-v0.1

Trained On
A6000 48GB * 8

TODO

  • Build an RP (roleplay)-focused fine-tuned model
  • Refine the dataset
  • Improve language comprehension
  • Supplement common-sense knowledge
  • Change the tokenizer

Instruction format

To leverage instruction fine-tuning, your prompt should be wrapped in [INST] and [/INST] tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.

E.g. (the Korean prompt asks "Tell me about Isaac Newton's achievements."):

text = "<s>[INST] μ•„μ΄μž‘ λ‰΄ν„΄μ˜ 업적을 μ•Œλ €μ€˜. [/INST]"
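
For multi-turn use, the assistant's previous reply is closed with the EOS token and the next instruction follows without a new BOS token. A minimal sketch of the resulting string (the reply and follow-up texts are placeholders, not output from this model):

text = "<s>[INST] first instruction [/INST] assistant reply</s>[INST] follow-up instruction [/INST]"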

Model Benchmark

Ko-LLM-Leaderboard

Model                                                Ko-ARC  Ko-HellaSwag  Ko-MMLU  Ko-TruthfulQA  Ko-CommonGen V2  Avg
kyujinpy/KoT-platypus2-13B (No. 1 as of 2023/10/12)  43.69   53.05         42.29    43.34          65.38            49.55
Synatra-V0.1-7B-Instruct                             41.72   49.28         43.27    43.75          39.32            43.47
Synatra-7B-Instruct-v0.2                             41.81   49.35         43.99    45.77          42.96            44.78

The model is strong on Ko-MMLU, but markedly weak on Ko-CommonGen V2.

Implementation Code

Since the chat_template already encodes the instruction format above, you can use the code below.

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# The chat template wraps the message in [INST] ... [/INST] automatically.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])

If you run it in oobabooga, your prompt (here, "Tell me about Lincoln.") would look like this:

[INST] 링컨에 λŒ€ν•΄μ„œ μ•Œλ €μ€˜. [/INST]
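
As a sanity check, you can have the tokenizer render the template as a plain string instead of token ids; a minimal sketch, assuming the tokenizer loaded above:

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "링컨에 λŒ€ν•΄μ„œ μ•Œλ €μ€˜."}],
    tokenize=False,  # return the formatted string rather than token ids
)
print(prompt)  # should print something like: <s>[INST] 링컨에 λŒ€ν•΄μ„œ μ•Œλ €μ€˜. [/INST]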

Readme format: beomi/llama-2-ko-7b

