Marissa committed on
Commit
430e87b
1 Parent(s): 56c3628

testing creating PRs

Files changed (1)
  1. README.md +16 -137
README.md CHANGED
@@ -1,161 +1,40 @@
  ---
- language:
- language: en
- license: mit
  ---

  # model-card-testing

- ## Table of Contents
- 1. [Model Details](#model-details)
- 2. [How To Get Started With the Model](#how-to-get-started-with-the-model)
- 3. [Uses](#uses)
- 4. [Limitations](#limitations)
- 5. [Training](#training)
- 6. [Evaluation Results](#evaluation-results)
- 7. [Environmental Impact](#environmental-impact)
- 8. [Citation Information](#citation-information)
 
- ## Model Details

- model-card-testing is a distilled language model that can be used for text generation. Users of this model card should also consider information about the design, training, and limitations of gpt2.
-
- - **Developed by:** author1, author2
- - **Model type:** testing type
- - **Language(s):** # not working right now
- - **License:** # not working right now
- - **Model Description:** testing description
- - **Related Models:**
-   - **Parent Model**: gpt2
-   - **Sibling Models**: TO DO (could we do this automatically somehow?)
-
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model. model-card-testing can be used directly with a pipeline for text generation.
- Since the generation relies on some randomness, we set a seed for reproducibility:
- ```python
- >>> from transformers import pipeline, set_seed
- >>> generator = pipeline('text-generation', model='model-card-testing')
- >>> set_seed(42)
- >>> generator("Hello, I'm a language model,", max_length=20, num_return_sequences=5)
- ```
-
- Here is how to use this model to get the features of a given text in PyTorch:
-
- NOTE: This will need customization/fixing.
-
- ```python
- from transformers import GPT2Tokenizer, GPT2Model
- tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
- model = GPT2Model.from_pretrained('model-card-testing')
- text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='pt')
- output = model(**encoded_input)
- ```
-
- and in TensorFlow:
-
- NOTE: This will need customization/fixing.
-
- ```python
- from transformers import GPT2Tokenizer, TFGPT2Model
- tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
- model = TFGPT2Model.from_pretrained('model-card-testing')
- text = "Replace me by any text you'd like."
- encoded_input = tokenizer(text, return_tensors='tf')
- output = model(encoded_input)
- ```
-
- ## Uses
- LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. The use cases below are not exhaustive.
-
- #### Direct Use
- This model can be used for:
- - Text generation
- - Exploring characteristics of language generated by a language model
-   - Examples: Cloze tests, counterfactuals, generations with reframings
-
- #### Downstream Use
- Tasks that leverage language models, including (a minimal fine-tuning sketch follows this list):
- - Information Extraction, Question Answering, Summarization
- - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
- - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
- - Entertainment: Creation of games, chat bots, and amusing generations.
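As a rough illustration of the downstream fine-tuning mentioned above, here is a minimal sketch. The checkpoint name `model-card-testing` is the placeholder used throughout this card; `train.txt`, the hyperparameters, and the output directory are assumptions for illustration only:

```python
# Hypothetical fine-tuning sketch, not an official recipe: 'train.txt', the
# hyperparameters, and the output directory are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('model-card-testing')
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers define no pad token
model = AutoModelForCausalLM.from_pretrained('model-card-testing')

# Tokenize a plain-text training file for causal language modeling.
dataset = load_dataset('text', data_files={'train': 'train.txt'})
tokenized = dataset.map(
    lambda batch: tokenizer(batch['text'], truncation=True, max_length=128),
    batched=True, remove_columns=['text'])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='model-card-testing-finetuned',
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized['train'],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```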
-
- #### Misuse and Out-of-scope Use
- Using the model in high-stakes settings is out of scope for this model. The model is not designed for critical decisions nor for uses with any material consequences on an individual's livelihood or wellbeing. The model can output content that appears factual but is not correct.
-
- ## Limitations
-
- **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
-
- Significant research has explored bias and fairness issues with models for language generation (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). This model also has persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-test of the model; readers considering the model should conduct more rigorous evaluations appropriate to their use case and context.
-
- The impact of model compression techniques, such as knowledge distillation, on bias and fairness issues associated with language models is an active area of research. For example:
- - [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models.
- - [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias).
- - [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2.
-
- NOTE: This code will need customization/fixing.
 
 

  ```python
- >>> from transformers import pipeline, set_seed
- >>> generator = pipeline('text-generation', model='model-card-testing')
- >>> set_seed(48)
- >>> generator("The White man worked as a", max_length=20, num_return_sequences=3)
-
- >>> set_seed(48)
- >>> generator("The Black man worked as a", max_length=20, num_return_sequences=3)
  ```
 
 

- ## Training

- #### Training Data
-
- model-card-testing was trained using . See the data card for additional information.
-
- #### Training Procedure

  Preprocessing, hardware used, hyperparameters...

- ## Evaluation Results
-
- This model achieves the following results:
-
- NOTE: This will need customization.
-
- | Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
- |:-------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
- |         |               |               |              |              |                 |           |               |             |                   |           |
-
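The table above reports perplexity (PPL), accuracy (ACC), bits per byte (BPB), and bits per character (BPC). As a hedged illustration of how a single perplexity figure could be computed with this checkpoint (the passage below is a stand-in, not one of the benchmark datasets listed above):

```python
# Illustrative only: scores one stand-in passage rather than the benchmarks in the table.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('model-card-testing')
model = GPT2LMHeadModel.from_pretrained('model-card-testing')
model.eval()

text = "Replace me by any evaluation text you'd like."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss over the tokens.
    loss = model(**inputs, labels=inputs['input_ids']).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```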
- ## Environmental Impact
-
- You can estimate carbon emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:**
- - **Hours used:**
- - **Cloud Provider:**
- - **Compute Region:**
- - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid; a worked sketch of this arithmetic follows below)*:
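The formula in the last bullet reduces to simple arithmetic. A minimal sketch, assuming placeholder values rather than measurements from this model's training run:

```python
# Placeholder numbers only; substitute the real hardware draw, runtime, and grid intensity.
power_kw = 0.3             # average power draw of the training hardware, in kW (assumed)
hours_used = 8.0           # total training time, in hours (assumed)
grid_kg_co2_per_kwh = 0.4  # carbon intensity of the compute region's grid (assumed)

energy_kwh = power_kw * hours_used                    # 2.4 kWh
carbon_emitted_kg = energy_kwh * grid_kg_co2_per_kwh  # ~0.96 kg CO2eq
print(f"Estimated emissions: {carbon_emitted_kg:.2f} kg CO2eq")
```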
 
- ## Citation Information

  ```bibtex
  @inproceedings{...,
 
  ---
+ language: en
+ license: mit
  ---

  # model-card-testing

+ ## Model description

+ testing description

+ ## Intended uses & limitations

+ #### How to use

  ```python
+ # You can include sample code which will be formatted
  ```
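One possible snippet for the block above, a minimal sketch that assumes the checkpoint is published under the placeholder name used throughout this card:

```python
# Minimal generation example; 'model-card-testing' is this card's placeholder model name.
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='model-card-testing')
set_seed(42)
print(generator("Hello, I'm a language model,", max_length=20, num_return_sequences=3))
```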

+ #### Limitations and bias

+ Provide examples of latent issues and potential remediations.

+ ## Training data

+ Describe the data you used to train the model.
+ If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with a description of the pre-training data.

+ ## Training procedure

  Preprocessing, hardware used, hyperparameters...

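One hedged way to make the hyperparameters concrete is to record the training configuration directly; every value below is a placeholder to be replaced with the real run's settings:

```python
# Placeholder training configuration; none of these values describe an actual run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='model-card-testing',
    per_device_train_batch_size=8,   # batch size per device (assumed)
    learning_rate=5e-5,              # optimizer learning rate (assumed)
    num_train_epochs=3,              # number of passes over the training data (assumed)
    weight_decay=0.01,               # regularization strength (assumed)
    warmup_steps=500,                # learning-rate warmup steps (assumed)
)
```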
+ ## Eval results

+ Provide some evaluation results.

+ ### BibTeX entry and citation info

  ```bibtex
  @inproceedings{...,