---
tags: 
- autotrain
language: en
widget:
- text: "I love AutoTrain \U0001F917"
datasets:
- lewtun/autotrain-data-acronym-identification
- acronym_identification
co2_eq_emissions: 10.435358044493652
model-index:
- name: autotrain-demo
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: acronym_identification
      type: acronym_identification
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9708090976211485
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: acronym_identification
      type: acronym_identification
      config: default
      split: train
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9790777669399117
      verified: true
    - name: Precision
      type: precision
      value: 0.9197835301644851
      verified: true
    - name: Recall
      type: recall
      value: 0.946479027789208
      verified: true
    - name: F1
      type: f1
      value: 0.9329403493591477
      verified: true
    - name: loss
      type: loss
      value: 0.06360606849193573
      verified: true
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: acronym_identification
      type: acronym_identification
      config: default
      split: validation
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9758354452761242
      verified: true
    - name: Precision
      type: precision
      value: 0.9339674814732883
      verified: true
    - name: Recall
      type: recall
      value: 0.9159344831326608
      verified: true
    - name: F1
      type: f1
      value: 0.9248630887185104
      verified: true
    - name: loss
      type: loss
      value: 0.07593930512666702
      verified: true
---

# Model Trained Using AutoTrain

- Problem type: Entity Extraction
- Model ID: 7324788
- CO2 Emissions (in grams): 10.435358044493652

## Validation Metrics

- Loss: 0.08991389721632004
- Accuracy: 0.9708090976211485
- Precision: 0.8998421675654347
- Recall: 0.9309429854401959
- F1: 0.9151284109149278

## Usage

You can use cURL to access this model (replace `YOUR_API_KEY` with your Hugging Face API token):

```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/lewtun/autotrain-acronym-identification-7324788
```

Or the Python API:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer
# (use_auth_token=True is only needed while the repo is private)
model = AutoModelForTokenClassification.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)

# Tokenize the input and run a forward pass
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
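
The forward pass above returns per-token logits in `outputs.logits`; taking the argmax over the label dimension and mapping each index through `model.config.id2label` gives the predicted tag for each token. A minimal sketch of that decoding step, using plain lists and an illustrative label map (the real map lives in `model.config.id2label`):

```python
# Sketch (assumptions): `logits` stands in for one sequence from
# outputs.logits, shape (seq_len, num_labels); the label names below
# are illustrative BIO tags for acronym identification.
id2label = {0: "O", 1: "B-long", 2: "B-short"}

logits = [
    [2.0, 0.1, 0.3],  # token 1 -> index 0
    [0.2, 3.0, 0.1],  # token 2 -> index 1
    [0.5, 0.2, 2.5],  # token 3 -> index 2
]

# Argmax over each token's label scores, then map to label strings
labels = [id2label[max(range(len(row)), key=row.__getitem__)] for row in logits]
print(labels)  # ['O', 'B-long', 'B-short']
```

In practice you would also skip special tokens and aggregate subword pieces; the `transformers` `pipeline("token-classification")` helper handles both for you.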