---
license: cc-by-nc-sa-4.0
language:
- 'no'
---

NorLlama-3B is a Generative Pretrained Transformer with 3 billion parameters for Norwegian. It is based on the Llama architecture and was pretrained with the [Tencent Pre-training Framework](https://github.com/Tencent/TencentPretrain).

It belongs to NorGLM, a suite of pretrained Norwegian Generative Language Models. NorGLM can be used for non-commercial purposes.

## Datasets
All models in NorGLM are trained on a 200 GB dataset of nearly 25 billion tokens, covering Norwegian, Danish, Swedish, German, and English text.

## Run the Model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NorGLM/NorLlama-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16
)

text = "Tom ønsket å gå på barene med venner"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
```
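
For quick experiments, the same generation can also be run through the high-level `pipeline` API. The sketch below is an illustrative alternative; the sampling settings are example values, not recommendations from the NorGLM authors.

```python
import torch
from transformers import pipeline

# Illustrative sketch: text generation via the pipeline API.
# Sampling parameters here are assumptions, not tuned for NorLlama-3B.
generator = pipeline(
    "text-generation",
    model="NorGLM/NorLlama-3B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(
    "Tom ønsket å gå på barene med venner",
    max_new_tokens=20,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```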

## Citation Information
If you find our work helpful, please cite our paper:

```
@article{liu2023nlebench+,
  title={NLEBench+ NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian},
  author={Liu, Peng and Zhang, Lemei and Farup, Terje Nissen and Lauvrak, Even W and Ingvaldsen, Jon Espen and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong},
  journal={arXiv preprint arXiv:2312.01314},
  year={2023}
}
```