EdBerg committed
Commit d8cf9b2
1 Parent(s): 40d5c2c

EdBerg/outputs4

README.md CHANGED
@@ -1,60 +1,57 @@
  ---
- library_name: peft
- license: llama3
  base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ library_name: transformers
+ model_name: outputs4
  tags:
+ - generated_from_trainer
  - trl
  - sft
- - generated_from_trainer
- model-index:
- - name: outputs4
-   results: []
+ licence: license
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # outputs4
-
- This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ # Model Card for outputs4
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="EdBerg/outputs4", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```

  ## Training procedure

- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 1
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 4
- - optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 2
- - training_steps: 2000
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- ### Framework versions
-
- - PEFT 0.13.3.dev0
- - Transformers 4.46.1
- - Pytorch 2.5.0+cu121
- - Datasets 3.0.2
- - Tokenizers 0.20.1
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/harpermia882/huggingface/runs/lwtdy8ey)
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 0.12.0
+ - Transformers: 4.46.1
+ - Pytorch: 2.5.0+cu121
+ - Datasets: 3.1.0
+ - Tokenizers: 0.20.1
+
+ ## Citations
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
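
Taken together, the removed hyperparameter list and the new card's "trained with SFT" line pin down most of the training recipe. A minimal sketch of reproducing it with TRL 0.12's `SFTTrainer` follows; the dataset (the old card only says "an unknown dataset"), the LoRA rank and alpha, and the fp16 flag are assumptions, while the optimizer, scheduler, warmup, step count, seed, and batch settings come straight from the old card.

```python
# Hypothetical reconstruction of the training run, not the author's script.
# Dataset, LoRA rank/alpha, and precision are assumptions; the rest mirrors
# the hyperparameters listed in the removed model card.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the actual one is not recorded in this commit.
dataset = load_dataset("trl-lib/Capybara", split="train")

# target_modules mirror adapter_config.json; r/lora_alpha are assumed values
# since the diff hunk does not show them.
peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,           # assumption
    lora_alpha=32,  # assumption
    target_modules=["k_proj", "o_proj", "q_proj", "down_proj",
                    "up_proj", "gate_proj", "v_proj"],
)

args = SFTConfig(
    output_dir="outputs4",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # total train batch size: 4
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=2000,
    optim="paged_adamw_8bit",
    fp16=True,                       # "Native AMP"; could equally have been bf16
    seed=42,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```

The `paged_adamw_8bit` optimizer plus a small PEFT adapter is consistent with a QLoRA-style run, but the diff shows no quantization config, so none is assumed in the sketch.
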
adapter_config.json CHANGED
@@ -21,13 +21,13 @@
    "rank_pattern": {},
    "revision": null,
    "target_modules": [
-     "q_proj",
-     "v_proj",
-     "up_proj",
      "k_proj",
      "o_proj",
+     "q_proj",
+     "down_proj",
+     "up_proj",
      "gate_proj",
-     "down_proj"
+     "v_proj"
    ],
    "task_type": "CAUSAL_LM",
    "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d575391d318a8faae70ca2c8643382d3ef778bf97e7b5d6e06c1f4e534228c17
+ oid sha256:6d3d224ace6e068a0cc01676ade4ecdff25487716c46c39ed68a6c3985e77506
  size 83945296
runs/Nov03_12-45-23_83b6b7a2b296/events.out.tfevents.1730637925.83b6b7a2b296.276.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:124be3d5a30a93f1f06d1bc1c51515c1ac71f20eb49495183adb5d67173224d4
+ size 216734
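
The added `tfevents` blob holds the TensorBoard scalars for this run. After downloading the `runs/` directory, something along these lines would read the logged curves back; a sketch assuming the `tensorboard` package is installed, and `train/loss` is a guessed tag name — check `acc.Tags()` for the real ones:

```python
# Hedged sketch: inspect the committed TensorBoard event file locally.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Nov03_12-45-23_83b6b7a2b296")
acc.Reload()                             # parse the events.out.tfevents.* file
print(acc.Tags()["scalars"])             # actual tag names logged by the Trainer
for event in acc.Scalars("train/loss"):  # assumed tag; verify with Tags() first
    print(event.step, event.value)
```
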
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:14c26ffde9a9345d6ba285a5bce2d54f2312c4e8efb714bd93fab217940e9f15
+ oid sha256:3d20b0521e765cab269f15515ab5d118472e9a12548b083709486dd7e02fa6ab
  size 5496
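
`training_args.bin` is the pickled `TrainingArguments` (here, TRL's `SFTConfig`) that `transformers` saves alongside checkpoints, so the changed hash above most likely reflects changed run arguments. A sketch for diffing the two revisions locally; the filenames are hypothetical (download each revision first), and `torch.load` unpickles arbitrary objects, so only run this on files you trust:

```python
# Hedged sketch: compare the two pickled training-argument blobs from this
# commit. training_args_old.bin / training_args_new.bin are hypothetical
# filenames for the two downloaded revisions.
import torch

old_args = torch.load("training_args_old.bin", weights_only=False)
new_args = torch.load("training_args_new.bin", weights_only=False)

# Report every attribute whose value changed between the two runs.
for key in sorted(vars(new_args)):
    if getattr(old_args, key, None) != getattr(new_args, key, None):
        print(f"{key}: {getattr(old_args, key, None)!r} -> {getattr(new_args, key)!r}")
```
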