Creating initial model card (#1)
- Creating initial model card (90ca3c071408368b2a7926da21b37759091da3e3)
Co-authored-by: Ezi Ozoani <Ezi@users.noreply.huggingface.co>
README.md
ADDED
@@ -0,0 +1,139 @@
---
datasets:
- wikitext-103

tags:
- text-generation

language: en
model-index:
- name: transfo-xl-wt103
  results: []
task:
  name: Text Generation
  type: text-generation
---

# Transfo-xl-wt103

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details

**Model Description:**
The Transformer-XL model is a causal (uni-directional) transformer with relative positional (sinusoïdal) embeddings that can reuse previously computed hidden states to attend to a longer context (memory). The model also uses adaptive softmax inputs and outputs (tied). A short sketch for inspecting these architectural settings follows the list below.
- **Developed by:** [Zihang Dai](dzihang@cs.cmu.edu), [Zhilin Yang](zhiliny@cs.cmu.edu), [Yiming Yang](yiming@cs.cmu.edu), [Jaime Carbonell](jgc@cs.cmu.edu), [Quoc V. Le](qvl@google.com), [Ruslan Salakhutdinov](rsalakhu@cs.cmu.edu)
- **Shared by:** HuggingFace team
- **Model Type:** Text Generation
- **Language(s):** English
- **License:** [More information needed]
- **Resources for more information:**
    - [Research Paper](https://arxiv.org/pdf/1901.02860.pdf)
    - [GitHub Repo](https://github.com/kimiyoung/transformer-xl)
    - [HuggingFace Documentation](https://huggingface.co/docs/transformers/model_doc/transfo-xl#transformers.TransfoXLModel)
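The settings mentioned above (memory length for hidden-state reuse and the adaptive-softmax cutoffs) can be read off the model configuration. This is a minimal sketch; the attribute names are assumed from the `transformers` Transfo-XL config and may differ between library versions.

```python
# Inspect the architectural settings of the pretrained checkpoint.
# Note: attribute names (n_layer, d_model, n_head, mem_len, adaptive, cutoffs, div_val)
# are assumptions about the transformers Transfo-XL config and may vary by version.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("transfo-xl-wt103")

print(config.n_layer, config.d_model, config.n_head)   # depth, hidden size, attention heads
print(config.mem_len)                                   # length of the reusable memory (cached hidden states)
print(config.adaptive, config.cutoffs, config.div_val)  # adaptive softmax settings
```
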
## Uses

#### Direct Use

This model can be used for text generation; a minimal example is sketched below.
The authors note possible applications in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):

> We envision interesting applications of Transformer-XL in the fields of text generation, unsupervised feature learning, image and speech modeling.
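The sketch below uses the high-level pipeline API and assumes a `transformers` version that still ships Transfo-XL support; the prompt and generation length are illustrative.

```python
# Minimal text-generation sketch (illustrative prompt and length).
from transformers import pipeline

generator = pipeline("text-generation", model="transfo-xl-wt103")
print(generator("The history of natural language processing", max_new_tokens=40))
```
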
#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## Training

#### Training Data

The authors describe, in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf), how samples were generated from the best model trained on this data (a sketch of that procedure follows the corpus list below):

> [...] best model trained on the Wikitext-103 dataset. We seed our Transformer-XL with a context of at most 512 consecutive tokens randomly sampled from the test set of Wikitext-103. Then, we run Transformer-XL to generate a pre-defined number of tokens (500 or 1,000 in our case). For each generation step, we first find the top-40 probabilities of the next-step distribution and sample from the top-40 tokens based on the re-normalized distribution. To help reading, we detokenize the context, the generated text and the reference text.

The authors use the following pretraining corpus for the model, described in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):
- WikiText-103 (Merity et al., 2016)

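The quoted sampling procedure can be approximated with the library's generation utilities. The sketch below is illustrative only: the dataset identifier (`wikitext`, config `wikitext-103-v1`), the whitespace-based context windowing, and the use of `generate` with `top_k=40` are assumptions about the tooling, not the authors' original code.

```python
# Hedged sketch: seed the model with up to 512 consecutive WikiText-103 test tokens,
# then sample a continuation with top-40 sampling, loosely following the quote above.
import random

from datasets import load_dataset
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

# Build a whitespace-token stream from the test split and pick a random 512-token window.
test_set = load_dataset("wikitext", "wikitext-103-v1", split="test")
tokens = " ".join(line for line in test_set["text"] if line.strip()).split()
start = random.randrange(len(tokens) - 512)
context = " ".join(tokens[start : start + 512])

inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(
    inputs["input_ids"],
    max_new_tokens=500,   # the paper generates 500 or 1,000 tokens
    do_sample=True,
    top_k=40,             # sample from the re-normalized top-40 distribution
)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:]))
```
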
#### Training Procedure

##### Preprocessing

The authors provide additional notes about the preprocessing used in the [associated paper](https://arxiv.org/pdf/1901.02860.pdf):

> Similar to but different from enwik8, text8 contains 100M processed Wikipedia characters created by lowering case the text and removing any character other than the 26 letters a through z, and space. Due to the similarity, we simply adapt the best model and the same hyper-parameters on enwik8 to text8 without further tuning.

## Evaluation

#### Results

| Method          | enwik8 (bpc) | text8 (bpc) | One Billion Word (ppl) | WT-103 (ppl) | PTB (ppl, w/o finetuning) |
|:---------------:|:------------:|:-----------:|:----------------------:|:------------:|:-------------------------:|
| Transformer-XL  | 0.99         | 1.08        | 21.8                   | 18.3         | 54.5                      |

## Citation Information

```bibtex
@misc{https://doi.org/10.48550/arxiv.1901.02860,
  doi       = {10.48550/ARXIV.1901.02860},
  url       = {https://arxiv.org/abs/1901.02860},
  author    = {Dai, Zihang and Yang, Zhilin and Yang, Yiming and Carbonell, Jaime and Le, Quoc V. and Salakhutdinov, Ruslan},
  keywords  = {Machine Learning (cs.LG), Computation and Language (cs.CL), Machine Learning (stat.ML), FOS: Computer and information sciences},
  title     = {Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context},
  publisher = {arXiv},
  year      = {2019},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```

## How to Get Started With the Model

```python
from transformers import TransfoXLTokenizer, TransfoXLModel
import torch

# Load the word-level tokenizer and the base model (no language-modeling head).
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")

# Encode a prompt and run a forward pass.
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Hidden states of the last layer, shape (batch_size, sequence_length, hidden_size).
last_hidden_states = outputs.last_hidden_state
```