---
language: en
license: mit
---

# model-card-testing

## Table of Contents
1. [Model Details](#model-details)
2. [How To Get Started With the Model](#how-to-get-started-with-the-model)
3. [Uses](#uses)
4. [Limitations](#limitations)
5. [Training](#training)
6. [Evaluation Results](#evaluation-results)
7. [Environmental Impact](#environmental-impact)
8. [Citation Information](#citation-information)

## Model Details

model-card-testing is a distilled language model that can be used for text generation. Users of this model card should also consider information about the design, training, and limitations of gpt2.

- **Developed by:** author1, author2
- **Model type:** testing type
- **Language(s):** English
- **License:** MIT
- **Model Description:** testing description
- **Related Models:** 
    - **Parent Model**: gpt2
    - **Sibling Models**: TO DO (could we do this automatically somehow?)


## How to Get Started with the Model 

Use the code below to get started with the model. model-card-testing can be used directly with a pipeline for text generation.
Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='model-card-testing')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=20, num_return_sequences=5)
```




Here is how to use this model to get the features of a given text in PyTorch:

NOTE: This will need customization/fixing.

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
model = GPT2Model.from_pretrained('model-card-testing')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow: 

NOTE: This will need customization/fixing.

```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
model = TFGPT2Model.from_pretrained('model-card-testing')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Uses
LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.

#### Direct Use
This model can be used for:
- Text generation
- Exploring characteristics of language generated by a language model
    - Examples: Cloze tests, counterfactuals, generations with reframings

#### Downstream Use
Tasks that leverage language models, including: 
- Information Extraction, Question Answering, Summarization
- Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
- Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
- Entertainment: Creation of games, chat bots, and amusing generations.

#### Misuse and Out-of-scope Use
Using the model in high-stakes settings is out of scope. The model is not designed for critical decisions or for uses with material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.

## Limitations

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with models for language generation (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). This model also has persistent bias issues, as the demonstrative examples below highlight. Note that these examples are not a comprehensive stress test of the model. Readers considering using the model should conduct more rigorous evaluations appropriate to their use case and context.


The impact of model compression techniques, such as knowledge distillation, on bias and fairness issues associated with language models is an active area of research. For example:
- [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
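
To make the notion of "effect size" in the studies above concrete, here is a minimal sketch of computing Cohen's d over two groups of scores (for example, regard or sentiment scores for generations from two prompt groups). The scores below are hypothetical placeholders for illustration, not results from this model:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized difference between two group means."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation across both groups
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical scores for generations from two demographic prompt groups
scores_group_a = [0.61, 0.55, 0.70, 0.58, 0.66]
scores_group_b = [0.42, 0.50, 0.39, 0.47, 0.44]
d = cohens_d(scores_group_a, scores_group_b)
```

By a common rule of thumb, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 or more large; a rigorous bias evaluation would also report statistical significance.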




NOTE: This code will need customization/fixing.


```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='model-card-testing')
>>> set_seed(48)
>>> generator("The White man worked as a", max_length=20, num_return_sequences=3)

>>> set_seed(48)
>>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
```




## Training 

#### Training Data

model-card-testing was trained using . See the data card for additional information.

#### Training Procedure

Preprocessing, hardware used, hyperparameters...

## Evaluation Results

This model achieves the following results:

NOTE: This will need customization.


| Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
|:-------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
|         |               |               |              |              |                 |           |               |             |                   |           |
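
For reference, the metrics above are related: perplexity (PPL) is the exponential of the average negative log-likelihood per token, while bits per character (BPC) and bits per byte (BPB) express the same quantity in base-2 bits per character or byte. A minimal sketch of the arithmetic, using illustrative placeholder values rather than results for this model:

```python
import math

def perplexity(nll_per_token):
    """Perplexity from the average negative log-likelihood (nats per token)."""
    return math.exp(nll_per_token)

def bits_per_char(nll_per_token, chars_per_token):
    """Convert nats per token into base-2 bits per character."""
    return nll_per_token / math.log(2) / chars_per_token

nll = 3.0                      # hypothetical average NLL, nats per token
ppl = perplexity(nll)          # about 20.1
bpc = bits_per_char(nll, 4.0)  # assuming roughly 4 characters per token
```

Lower is better for PPL, BPC, and BPB; higher is better for accuracy (ACC).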




## Environmental Impact

You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 
- **Hours used:** 
- **Cloud Provider:** 
- **Compute Region:** 
- **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*:
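
The parenthetical formula above can be sketched directly. The numbers here are illustrative placeholders, not measurements for this model:

```python
def carbon_emitted_kg(power_kw, hours, grid_intensity_kg_per_kwh):
    """Carbon emitted = power consumption x time x carbon intensity of the grid."""
    return power_kw * hours * grid_intensity_kg_per_kwh

# Hypothetical example: 0.3 kW of hardware running for 100 hours
# on a grid emitting 0.4 kg CO2eq per kWh
emissions = carbon_emitted_kg(power_kw=0.3, hours=100, grid_intensity_kg_per_kwh=0.4)
# 12.0 kg CO2eq
```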

## Citation Information

```bibtex
@inproceedings{...,
  year={2020}
}
```