---
license: mit
---
### Model Card: **TinyLlama-1.1B-Chat-v1.0-Unfiltered**

---

**Model Name**: TinyLlama-1.1B-Chat-v1.0-Unfiltered  
**Model Type**: Conversational AI Model  
**Architecture**: 1.1B-parameter TinyLlama  

**Training Data**:  
- Fine-tuned on the "dan_remixed" dataset (2.7MB).  
- The dataset was edited to improve spelling, grammar, and consistency; references to violent crimes were replaced with non-violent activities, and self-censorship was removed from expletives.

**Training Time**: Approximately 30-45 minutes. Each validation epoch takes ~322 seconds.  
**Hardware**: Trained on a Google Colab Pro A100 GPU (40GB).

---

**Training Performance**:
- **Epoch Losses**:
  - Epoch 1: 0.7209
  - Epoch 2: 0.4441
  - Epoch 3: 0.3683
  - Epoch 4: 0.3358
  - Epoch 5: 0.3145
- **Final Training Loss (Epoch 5)**: 0.3145

---

**Validation Performance** (5 Epochs):  
- **Epoch 1**:  
  - Training Loss: 0.2921  
  - Validation Loss: 0.7962  
  - Perplexity: 2.22  
  - Epoch completed in 321.64 seconds  

- **Epoch 2**:  
  - Training Loss: 0.2872  
  - Validation Loss: 0.7672  
  - Perplexity: 2.15  
  - Epoch completed in 321.91 seconds  

- **Epoch 3**:  
  - Training Loss: 0.2874  
  - Validation Loss: 0.7821  
  - Perplexity: 2.19  
  - Epoch completed in 321.94 seconds  

- **Epoch 4**:  
  - Training Loss: 0.2864  
  - Validation Loss: 0.7796  
  - Perplexity: 2.18  
  - Epoch completed in 322.01 seconds  

- **Epoch 5**:  
  - Training Loss: 0.2831  
  - Validation Loss: 0.8017  
  - Perplexity: 2.23  
  - Epoch completed in 322.01 seconds
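
The reported perplexities follow the standard definition, perplexity = exp(validation loss); a quick check in plain Python reproduces the numbers above:

```python
import math

# Per-epoch validation losses reported above.
val_losses = [0.7962, 0.7672, 0.7821, 0.7796, 0.8017]
for epoch, loss in enumerate(val_losses, start=1):
    print(f"Epoch {epoch}: perplexity = {math.exp(loss):.2f}")
# Epoch 1: perplexity = 2.22
# Epoch 2: perplexity = 2.15
# Epoch 3: perplexity = 2.19
# Epoch 4: perplexity = 2.18
# Epoch 5: perplexity = 2.23
```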

---

**Optimizer**: AdamW (learning rate 1e-5)  
**Loss Function**: Cross-Entropy Loss, ignoring padding tokens (ignore_index=-100)  
**Use Case**: Conversational AI designed for general, unrestricted conversation, with no filtering on the nature of responses, provided the content is non-violent.
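
A minimal PyTorch sketch of the optimizer and loss configuration above; the `nn.Linear` stand-in, vocabulary size, and batch shapes are placeholders for illustration, not the actual fine-tuning code:

```python
import torch
from torch import nn

vocab_size = 32000  # placeholder; assumes a Llama-style vocabulary

# Stand-in module; the actual run optimizes the 1.1B TinyLlama parameters.
model = nn.Linear(64, vocab_size)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)  # labels of -100 are excluded

# Dummy batch: [batch, seq, hidden] inputs and [batch, seq] labels.
hidden = torch.randn(2, 16, 64)
labels = torch.randint(0, vocab_size, (2, 16))
labels[:, -4:] = -100  # mark padding positions so they do not contribute to the loss

logits = model(hidden)                                        # [batch, seq, vocab]
loss = loss_fn(logits.view(-1, vocab_size), labels.view(-1))  # flatten for CE
loss.backward()
optimizer.step()
optimizer.zero_grad()
```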
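
For inference, a hedged usage sketch with the `transformers` library; the Hub repo ID and prompt below are placeholders for wherever this model is published:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama-1.1B-Chat-v1.0-Unfiltered"  # placeholder Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Tell me about yourself."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```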

---

**Limitations**:
- Due to the small fine-tuning dataset (2.7MB), the model may be prone to **overfitting** and **bias**; the validation loss rising from 0.7672 (epoch 2) to 0.8017 (epoch 5) while training loss falls is consistent with mild overfitting.
- The dataset has been modified to avoid violent language, but the model might still exhibit strong or explicit responses.

**Metrics**:
- Loss and perplexity were tracked during training; conversational metrics such as BLEU, ROUGE, or human evaluation could be explored, as in the sketch below.
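
For example, ROUGE can be computed with the Hugging Face `evaluate` library; the prediction and reference strings below are illustrative only:

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
# Illustrative strings; a real evaluation would compare model outputs to held-out replies.
predictions = ["The model replied politely and stayed on topic."]
references = ["The model replied very politely and stayed on topic."]
print(rouge.compute(predictions=predictions, references=references))
```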