gollm-12.8b-instruct-v2.3
This model is a fine-tuned version of EleutherAI/polyglot-ko-12.8b, trained on a custom mix of datasets (see Training and evaluation data below).
Model description
- No-context template (the original prompt is in Korean; an English rendering is shown here)

Below is a question that describes a task. Write a response that appropriately completes the request.

### Question:
{instruction}

### Answer:
- With-context template (the original prompt is in Korean; an English rendering is shown here)

Below is a question that describes a task, together with a context that provides additional information. Write a response that appropriately completes the request.

### Context:
{input}

### Question:
{instruction}

### Answer:
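The snippet below is a minimal usage sketch, not an official example from the model authors: it loads the model with the transformers library and fills in the with-context template above. The generation settings, the example question and context, and the English rendering of the template are illustrative assumptions.

```python
# Minimal usage sketch: load the model and query it with the with-context template.
# Assumes a GPU with enough memory for a 12.8B-parameter model in fp16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tlphams/gollm-12.8b-instruct-v2.3"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# With-context template from the model card (English rendering; for best results
# substitute the original Korean template verbatim).
TEMPLATE = (
    "Below is a question that describes a task, together with a context that "
    "provides additional information. Write a response that appropriately "
    "completes the request.\n\n"
    "### Context:\n{input}\n\n"
    "### Question:\n{instruction}\n\n"
    "### Answer:\n"
)

prompt = TEMPLATE.format(
    input="Polyglot-Ko is a family of Korean language models released by EleutherAI.",
    instruction="Who released Polyglot-Ko?",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated answer, not the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```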
Intended uses & limitations
More information needed
Training and evaluation data
- self-introduction (20 samples)
- High-quality reasoning dataset built from private documents, with QA pairs generated by Claude AI (1.3k samples)
- EverythingLM-v2 (0.9k samples)
- KoCoT (2k samples)
- Private MRC dataset with answers generated by GPT-4 (32k samples). The original data contain ~12k question-answer pairs with context, and augmentation is applied to produce a further 20k samples with triplet contexts (1 correct context out of 3); see the sketch after this list.
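As an illustration of the triplet-context augmentation described above, the sketch below pairs each question with its gold context plus two distractor contexts drawn from other samples. The function name and the sample schema are hypothetical, since the actual augmentation code is not published.

```python
# Hypothetical sketch of the triplet-context augmentation described above:
# each QA pair keeps its gold context and gains two distractor contexts
# sampled from other examples, giving 1 correct context out of 3.
import random

def augment_with_triplet_contexts(samples, seed=42):
    """samples: list of dicts with 'question', 'answer', 'context' keys (assumed schema)."""
    rng = random.Random(seed)
    augmented = []
    for i, sample in enumerate(samples):
        # Pick two distractor contexts from other samples.
        other_indices = [j for j in range(len(samples)) if j != i]
        distractors = [samples[j]["context"] for j in rng.sample(other_indices, k=2)]
        contexts = distractors + [sample["context"]]
        rng.shuffle(contexts)  # the gold context ends up at a random position
        augmented.append(
            {
                "question": sample["question"],
                "answer": sample["answer"],
                "context": "\n\n".join(contexts),  # 3 contexts, only 1 is relevant
            }
        )
    return augmented
```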
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- saved_checkpoint_at_epoch: 1 (condition: loss < 0.3)
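The hyperparameters listed above map directly onto transformers.TrainingArguments, as sketched below. This is an assumption about the setup rather than the authors' actual training script; output_dir and anything not in the list are placeholders.

```python
# Sketch only: the listed hyperparameters expressed as transformers.TrainingArguments.
# output_dir (and anything else not in the list above) is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gollm-12.8b-instruct-v2.3",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=8,   # 2 * 8 = total train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```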
Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3