---
language:
- de
tags:
- nsp
- next-sentence-prediction
- t5
datasets:
  - wikipedia
metrics:
  - accuracy
---

# T5-german-nsp

T5-german-nsp is fine-tuned for the Next Sentence Prediction (NSP) task on the [wikipedia dataset](https://huggingface.co/datasets/wikipedia), starting from the [GermanT5/t5-efficient-gc4-german-base-nl36](https://huggingface.co/GermanT5/t5-efficient-gc4-german-base-nl36) model. It was introduced in this [paper](https://arxiv.org/abs/2307.07331) and first released on this page.

## Model description

T5-german-nsp is a Transformer-based model fine-tuned for the Next Sentence Prediction task on 4,000 German Wikipedia articles.

## Intended uses

- Apply it to Next Sentence Prediction tasks and compare the results with BERT models, which support this task natively (a BERT-based sketch follows this list).
- See how to fine-tune a T5 model using our [code](https://github.com/slds-lmu/stereotypes-multi/tree/main).
- Check our [paper](https://arxiv.org/abs/2307.07331) for the results.
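
Since one of the intended uses is comparison with BERT, here is a minimal sketch of scoring a sentence pair with a BERT model that has a native NSP head. The checkpoint `bert-base-german-cased` is only an illustrative assumption and is not part of this work:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Illustrative checkpoint choice; any German BERT with a pre-trained NSP head can be used.
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased")
bert_model = BertForNextSentencePrediction.from_pretrained("bert-base-german-cased").eval()

first = "In Italien wird Pizza ungeschnitten angeboten."
second = "In der Türkei wird es aber in Scheiben geschnitten serviert."

encoding = bert_tokenizer(first, second, return_tensors="pt")
with torch.no_grad():
    logits = bert_model(**encoding).logits
# For BERT's NSP head, index 0 means "the second sentence follows the first", index 1 means it does not.
print(torch.argmax(logits, dim=-1))
```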

## How to use

You can use this model directly for next sentence prediction in PyTorch. Note that each first sentence carries the task prompt `binäre Klassifikation:` ("binary classification:"), as shown in the inference example below.

### Necessary Initialization
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from huggingface_hub import hf_hub_download

class ModelNSP(torch.nn.Module):
    def __init__(self, pretrained_model, tokenizer, nsp_dim=300):
        super(ModelNSP, self).__init__()
        # Token ids the T5 decoder uses for the class labels "0" and "1".
        self.zero_token, self.one_token = (self.find_label_encoding(x, tokenizer).item() for x in ["0", "1"])
        self.core_model = T5ForConditionalGeneration.from_pretrained(pretrained_model)
        # Small classification head on top of the T5 hidden size (kept for weight compatibility).
        self.nsp_head = torch.nn.Sequential(torch.nn.Linear(self.core_model.config.hidden_size, nsp_dim),
                                            torch.nn.Linear(nsp_dim, nsp_dim), torch.nn.Linear(nsp_dim, 2))

    def forward(self, input_ids, attention_mask=None):
        outputs = self.core_model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=3,
                                           output_scores=True, return_dict_in_generate=True)
        # Compare the scores of the label tokens "0" and "1" at the second decoding step.
        logits = [torch.Tensor([score[self.zero_token], score[self.one_token]]) for score in outputs.scores[1]]
        return torch.stack(logits).softmax(dim=-1)

    @staticmethod
    def find_label_encoding(input_str, tokenizer):
        encoded_str = tokenizer.encode(input_str, add_special_tokens=False, return_tensors="pt")
        # Some tokenizers split the label into two pieces; keep only the digit token.
        return (torch.index_select(encoded_str, 1, torch.tensor([1])) if encoded_str.size(dim=1) == 2 else encoded_str)

tokenizer = T5Tokenizer.from_pretrained("tolga-ozturk/t5-german-nsp")
model = torch.nn.DataParallel(ModelNSP("GermanT5/t5-efficient-gc4-german-base-nl36", tokenizer).eval())
model.load_state_dict(torch.load(hf_hub_download(repo_id="tolga-ozturk/t5-german-nsp", filename="model_weights.bin")))
```

### Inference
```python
batch_texts = [("binäre Klassifikation: In Italien wird Pizza ungeschnitten angeboten.", "Der Himmel ist blau."),
    ("binäre Klassifikation: In Italien wird Pizza ungeschnitten angeboten.", "In der Türkei wird es aber in Scheiben geschnitten serviert.")]
encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=batch_texts, truncation="longest_first", padding=True, return_tensors="pt", return_attention_mask=True, max_length=256)
print(torch.argmax(model(encoded_dict.input_ids, attention_mask=encoded_dict.attention_mask), dim=-1))
```
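
The printed tensor contains one predicted class index per sentence pair, obtained as the argmax over the two label probabilities returned by the model.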

### Training Metrics
<img src="https://huggingface.co/tolga-ozturk/t5-german-nsp/resolve/main/metrics.png" alt="Training metrics"/>

## BibTeX entry and citation info

```bibtex
@misc{ozturk2023different,
      title={How Different Is Stereotypical Bias Across Languages?},
      author={Ibrahim Tolga Öztürk and Rostislav Nedelchev and Christian Heumann and Esteban Garces Arias and Marius Roger and Bernd Bischl and Matthias Aßenmacher},
      year={2023},
      eprint={2307.07331},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

This work was done with the Statistics group at Ludwig-Maximilians-Universität München; don't forget to check out [their Hugging Face page](https://huggingface.co/misoda) for other interesting works!