---
license: other
language:
- en
pipeline_tag: text-generation
datasets:
- LDJnr/Puffin
- pvduy/rm_hh_helpful_only
library_name: peft
widget:
- text: "USER: What's better, farming, or using computers (which suck)\nASSISTANT:"
---
<table>
<tr>
<td style="width: 30%; text-align: left; vertical-align: middle">

# CurtGPT
Using Microsoft's Phi 1.5 model like it was never intended.

</td>
<td style="text-align: center;">
<img src="https://github.com/tim-a-davis/silly_little_language_modeling_thing_at_utd/blob/main/curtgpt%20logo.png?raw=true" width="300" height="auto">
</td>
</tr>
</table>

# Main Procedure
This model is an adapter on [Puffin Phi v2](https://huggingface.co/teknium/Puffin-Phi-v2), trained with [QLoRA](https://arxiv.org/pdf/2305.14314.pdf) and [DPO](https://arxiv.org/pdf/2305.18290.pdf) on 60,000 samples from the [Anthropic helpful-only](https://huggingface.co/datasets/pvduy/rm_hh_helpful_only) dataset.
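
As a minimal sketch of how you might load and prompt the adapter with `peft` and `transformers` (the adapter repo id below is a placeholder for this repository, and `trust_remote_code=True` assumes the Phi-1.5-based base model ships custom modeling code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Puffin Phi v2 base model in fp16
base = AutoModelForCausalLM.from_pretrained(
    "teknium/Puffin-Phi-v2",
    torch_dtype=torch.float16,
    trust_remote_code=True,  # Phi-1.5-derived models use custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained(
    "teknium/Puffin-Phi-v2", trust_remote_code=True
)

# Attach this DPO-trained adapter ("<this-repo-id>" is a placeholder)
model = PeftModel.from_pretrained(base, "<this-repo-id>")

# Prompt in the USER/ASSISTANT format shown in the widget above
prompt = "USER: What's better, farming, or using computers (which suck)\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```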


## Training procedure


The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
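
For reference, this configuration corresponds to the following `BitsAndBytesConfig` from `transformers` (a sketch matching the list above; fields left out fall back to their defaults):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```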
### Framework versions


- PEFT 0.5.0