---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: falcon-7b-ft-adapters
  results: []
---

# falcon-7b-ft-adapters

This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5185
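
A minimal inference sketch for loading these adapters on top of the base model. The adapter repo id below is a placeholder; substitute the actual path where this card is hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "<your-namespace>/falcon-7b-ft-adapters"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned PEFT adapters to the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Question: What is parameter-efficient fine-tuning?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```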

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 6
- mixed_precision_training: Native AMP
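
For reference, a hedged sketch of a TRL `SFTTrainer` setup consistent with the hyperparameters above. The LoRA settings, dataset, and text field are assumptions (not recorded in this card); only the `TrainingArguments` values mirror the list:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# LoRA settings are assumptions; the card only records that PEFT was used.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)

# These values mirror the hyperparameter list above.
args = TrainingArguments(
    output_dir="falcon-7b-ft-adapters",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 16 x 4 = 64 total train batch size
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=6,
    fp16=True,  # mixed_precision_training: Native AMP (fp16 assumed)
)

model = AutoModelForCausalLM.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumed field name
)
trainer.train()
```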

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0563        | 0.95  | 15   | 0.9446          |
| 0.7495        | 1.97  | 31   | 0.6337          |
| 0.5959        | 2.98  | 47   | 0.5719          |
| 0.5474        | 4.0   | 63   | 0.5410          |
| 0.545         | 4.95  | 78   | 0.5240          |
| 0.4553        | 5.71  | 90   | 0.5185          |


### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- PyTorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0