---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: "Dette er en test!"
---
# Norwegian mT5 - Bokmål to Nynorsk Translation
## Description
This is a sample reference model.
Here is an example of how to use the model from Python:
```python
# Import the model and tokenizer classes
from transformers import T5ForConditionalGeneration, AutoTokenizer

# Load the model from the Flax checkpoint and the matching tokenizer
model = T5ForConditionalGeneration.from_pretrained('andrek/nb2nn', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('andrek/nb2nn')

# Encode the input text
text = "Hun vil ikke gi bort sine personlige data."
inputs = tokenizer.encode(text, return_tensors="pt")

# Generate the translation with beam search
outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True)

# Decode and print the result
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Or, if you would prefer to use the pipeline instead:
```python
# Set up the translation pipeline
from transformers import pipeline, T5ForConditionalGeneration, AutoTokenizer

# Load the model from the Flax checkpoint and the matching tokenizer
model = T5ForConditionalGeneration.from_pretrained('andrek/nb2nn', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('andrek/nb2nn')
translator = pipeline("translation", model=model, tokenizer=tokenizer)

# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
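
Because the published weights are in Flax format, every load with `from_flax=True` converts them on the fly. Below is a minimal sketch, not part of the original card, showing how the converted model could be saved as PyTorch weights so later loads skip the conversion; the local path `./nb2nn-pt` is just an example.

```python
from transformers import T5ForConditionalGeneration, AutoTokenizer

# Convert the Flax checkpoint to PyTorch weights once
model = T5ForConditionalGeneration.from_pretrained('andrek/nb2nn', from_flax=True)
tokenizer = AutoTokenizer.from_pretrained('andrek/nb2nn')

# Save locally; './nb2nn-pt' is an arbitrary example path
model.save_pretrained('./nb2nn-pt')
tokenizer.save_pretrained('./nb2nn-pt')

# Later loads can use the saved PyTorch weights directly
model = T5ForConditionalGeneration.from_pretrained('./nb2nn-pt')
tokenizer = AutoTokenizer.from_pretrained('./nb2nn-pt')
```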