worde-byte committed
Commit e4114d1
1 parent: 2e6f1ee

GOAT-12-14-ss
README.md ADDED
@@ -0,0 +1,66 @@
---
license: cc-by-nc-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
tags:
- generated_from_trainer
model-index:
- name: output
  results: []
library_name: peft
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# output

This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
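The settings above can be expressed as a `transformers` `BitsAndBytesConfig` — a minimal sketch, assuming `transformers` (≥ 4.36) and `torch` are installed; the variable name is illustrative:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization matching the config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```

Such a config is typically passed as `quantization_config=bnb_config` when loading the base model with `AutoModelForCausalLM.from_pretrained`.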
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
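These hyperparameters map onto `transformers.TrainingArguments` roughly as follows — a sketch under the assumption that the standard `Trainer` was used; `output_dir` is a placeholder, not taken from the source:

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; output_dir is illustrative
args = TrainingArguments(
    output_dir="output",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
)
```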

### Training results



### Framework versions

- PEFT 0.5.0
- Transformers 4.36.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.15.0
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fb69fa44b4ecf564f6d446ac60af5e001591c38603874f04fb316657999dfa7b
+ oid sha256:896f0a667d8e0038822d7723f8f21526cf1e4efb03d18bad3883a244933f507f
  size 163603896
emissions.csv ADDED
@@ -0,0 +1,2 @@
timestamp,project_name,run_id,duration,emissions,emissions_rate,cpu_power,gpu_power,ram_power,cpu_energy,gpu_energy,ram_energy,energy_consumed,country_name,country_iso_code,region,cloud_provider,cloud_region,os,python_version,codecarbon_version,cpu_count,cpu_model,gpu_count,gpu_model,longitude,latitude,ram_total_size,tracking_mode,on_cloud,pue
2023-12-15T03:06:03,codecarbon,119c2ff3-8008-4581-b7a3-d9dfe67d8517,38115.6783618927,1.902573047700639,4.991575985179878e-05,90.0,347.245,47.1240119934082,0.9528742625713376,3.6553566216007463,0.4986943486287465,5.10692523280082,United States,USA,massachusetts,,,Linux-5.10.0-21-amd64-x86_64-with-glibc2.31,3.10.13,2.2.3,32,AMD Ryzen Threadripper 2950X 16-Core Processor,1,1 x NVIDIA GeForce RTX 3090,-71.8262,42.2821,125.66403198242188,machine,N,1.0
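The CodeCarbon row is internally consistent: `energy_consumed` is the sum of the CPU, GPU and RAM energy columns, and `emissions_rate` is `emissions` divided by `duration`. A quick check in plain Python, with the values copied from the row above:

```python
# Values from the emissions.csv row above (energies in kWh, duration in seconds)
duration_s = 38115.6783618927
emissions_kg = 1.902573047700639
cpu_kwh, gpu_kwh, ram_kwh = 0.9528742625713376, 3.6553566216007463, 0.4986943486287465

total_kwh = cpu_kwh + gpu_kwh + ram_kwh    # reported energy_consumed: 5.10692523280082
rate_kg_per_s = emissions_kg / duration_s  # reported emissions_rate: 4.991575985179878e-05

print(total_kwh, rate_kg_per_s)
```

That is roughly 1.9 kg of CO2-equivalent over a ~10.6-hour run on the single RTX 3090 listed in the row.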
runs/Dec14_16-30-35_secretsauce/events.out.tfevents.1702589446.secretsauce.85280.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:91ffcff32d960d0e5c6280f319804859a8353ef70984439acb794ebeee513a6f
- size 6690
+ oid sha256:455c251d8b41a3ddc443bee8cafaada4b02d3cdbd09ee91fb5be3116fd087cf8
+ size 7044