---
language:
- fr
license: mit
tags:
- generated_from_trainer
datasets:
- allocine
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: Un film magnifique avec un duo d'acteurs excellent.
- text: Grosse déception pour ce thriller qui peine à convaincre.
base_model: cmarkea/distilcamembert-base
model-index:
- name: distilcamembert-allocine
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: allocine
      type: allocine
      config: allocine
      split: validation
      args: allocine
    metrics:
    - type: accuracy
      value: 0.9714
      name: Accuracy
    - type: f1
      value: 0.9709909727152854
      name: F1
    - type: precision
      value: 0.9648256399919372
      name: Precision
    - type: recall
      value: 0.9772356063699469
      name: Recall
---

# distilcamembert-allocine

This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on the [allocine](https://huggingface.co/datasets/allocine) dataset.
It achieves the following results on the validation split:
- Loss: 0.1066
- Accuracy: 0.9714
- F1: 0.9710
- Precision: 0.9648
- Recall: 0.9772

## Model description

This is a binary sentiment-analysis model for French text. The backbone, DistilCamemBERT, is a distilled version of CamemBERT (a French RoBERTa-style language model) with half the transformer layers of the base model, which makes it noticeably cheaper to run while retaining most of its accuracy. A two-label sequence-classification head (negative/positive) was fine-tuned on top of it using the allocine movie-review corpus.

## Intended uses & limitations

The model is intended for sentiment classification of French text, in particular movie reviews like those it was trained on. Because the training data consists solely of Allociné movie reviews, performance may degrade on other domains (product reviews, social media, formal prose) and on non-French input. Texts longer than the tokenizer's 512-token limit are truncated.
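
A minimal inference sketch using the `transformers` pipeline API. The hub id below is a placeholder (this card does not state where the model is published); substitute the actual repository name:

```python
from transformers import pipeline

# Placeholder repository id -- replace with the repo this card is published under.
classifier = pipeline("text-classification", model="your-username/distilcamembert-allocine")

print(classifier("Un film magnifique avec un duo d'acteurs excellent."))
# Output shape: [{'label': ..., 'score': ...}]; label names depend on the
# model's id2label config (e.g. LABEL_0/LABEL_1 or NEGATIVE/POSITIVE).
```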

## Training and evaluation data

The model was trained and evaluated on [allocine](https://huggingface.co/datasets/allocine), a corpus of French movie reviews collected from Allociné.fr and labeled for binary sentiment (negative/positive). The dataset ships with 160,000 training, 20,000 validation, and 20,000 test examples; the results above are reported on the validation split.
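
The data loads directly with the `datasets` library; a short sketch:

```python
from datasets import load_dataset

dataset = load_dataset("allocine")
print(dataset)               # DatasetDict with train / validation / test splits
print(dataset["train"][0])   # {'review': '<French review text>', 'label': 0 or 1}
```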

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
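
These values map directly onto `transformers.TrainingArguments`; the Adam betas and epsilon above are the library defaults. A minimal sketch (the output path and evaluation cadence are assumptions; the results table below suggests evaluation every 500 steps):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilcamembert-allocine",  # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 16 * 4 = 64 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    evaluation_strategy="steps",    # assumption, consistent with the 500-step results table
    eval_steps=500,
)
```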

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |   F1   | Precision | Recall |
| :-----------: | :---: | :---: | :-------------: | :------: | :----: | :-------: | :----: |
|    0.1504     |  0.2  |  500  |     0.1290      |  0.9555  | 0.9542 |  0.9614   | 0.9470 |
|    0.1334     |  0.4  | 1000  |     0.1049      |  0.9624  | 0.9619 |  0.9536   | 0.9703 |
|    0.1158     |  0.6  | 1500  |     0.1052      |  0.9630  | 0.9627 |  0.9498   | 0.9760 |
|    0.1153     |  0.8  | 2000  |     0.0949      |  0.9661  | 0.9653 |  0.9686   | 0.9620 |
|    0.1053     |  1.0  | 2500  |     0.0936      |  0.9666  | 0.9663 |  0.9542   | 0.9788 |
|    0.0755     |  1.2  | 3000  |     0.0987      |  0.9700  | 0.9695 |  0.9644   | 0.9748 |
|    0.0716     |  1.4  | 3500  |     0.1078      |  0.9688  | 0.9684 |  0.9598   | 0.9772 |
|    0.0688     |  1.6  | 4000  |     0.1051      |  0.9673  | 0.9670 |  0.9552   | 0.9792 |
|    0.0691     |  1.8  | 4500  |     0.0940      |  0.9709  | 0.9704 |  0.9688   | 0.9720 |
|    0.0733     |  2.0  | 5000  |     0.1038      |  0.9686  | 0.9683 |  0.9558   | 0.9812 |
|    0.0476     |  2.2  | 5500  |     0.1066      |  0.9714  | 0.9710 |  0.9648   | 0.9772 |
|    0.0470     |  2.4  | 6000  |     0.1098      |  0.9689  | 0.9686 |  0.9587   | 0.9788 |
|    0.0431     |  2.6  | 6500  |     0.1110      |  0.9711  | 0.9706 |  0.9666   | 0.9747 |
|    0.0464     |  2.8  | 7000  |     0.1149      |  0.9697  | 0.9694 |  0.9592   | 0.9798 |
|    0.0342     |  3.0  | 7500  |     0.1122      |  0.9703  | 0.9699 |  0.9621   | 0.9778 |
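
The per-step metrics above were computed on the validation split. A `compute_metrics` function along these lines reproduces the four columns (a sketch assuming the standard `evaluate` metrics were used; the original implementation is not preserved in this card):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
precision = evaluate.load("precision")
recall = evaluate.load("recall")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels)["f1"],
        "precision": precision.compute(predictions=preds, references=labels)["precision"],
        "recall": recall.compute(predictions=preds, references=labels)["recall"],
    }
```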


### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2