yongtaek-lim committed on
Commit
4b8b974
1 Parent(s): be25af3

Model save

Files changed (2)
  1. README.md +2 -7
  2. trainer_log.jsonl +1 -0
README.md CHANGED
@@ -1,13 +1,11 @@
 ---
 library_name: transformers
-license: other
+license: llama3
 base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
 tags:
 - llama-factory
 - full
 - generated_from_trainer
-metrics:
-- accuracy
 model-index:
 - name: pogny
   results: []
@@ -18,10 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # pogny
 
-This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the alpaca_en_demo dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.2285
-- Accuracy: 0.6567
+This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on an unknown dataset.
 
 ## Model description
 
trainer_log.jsonl ADDED
@@ -0,0 +1 @@
+{"current_steps": 1, "total_steps": 1, "epoch": 0.8, "percentage": 100.0, "elapsed_time": "0:03:41", "remaining_time": "0:00:00", "throughput": "0.00", "total_tokens": 0}
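
For reference, each line of the added trainer_log.jsonl is a standalone JSON record, so the log can be read record by record. A minimal Python sketch, assuming the file is opened from the repository root; the path and printed fields are illustrative, with field names taken from the record above:

import json

# trainer_log.jsonl is a JSON Lines file: one JSON object per line,
# so each training-progress record can be parsed independently.
with open("trainer_log.jsonl", encoding="utf-8") as log_file:
    for line in log_file:
        record = json.loads(line)
        print(
            f"step {record['current_steps']}/{record['total_steps']} "
            f"({record['percentage']:.1f}%) | epoch {record['epoch']} | "
            f"elapsed {record['elapsed_time']} | remaining {record['remaining_time']}"
        )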