---
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
tags:
- multilingual
- nlp
- indicnlp
---

MultiIndicHeadlineGenerationSS is a multilingual sequence-to-sequence pre-trained model focused exclusively on Indic languages. It currently supports 11 Indian languages and is fine-tuned from the [IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) checkpoint. You can use MultiIndicHeadlineGenerationSS to build natural language generation applications in Indian languages for headline generation, summarization, and related tasks. Some salient features of MultiIndicHeadlineGenerationSS are:

<ul>
<li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.</li>
<li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive to fine-tune and decode.</li>
<li>Trained on a large Indic-language corpus (1.316 million paragraphs and 5.9 million unique tokens).</li>
<li>Unlike ai4bharat/MultiIndicHeadlineGeneration, each language is written in its own script, so you do not need to perform any script mapping to/from Devanagari.</li>
</ul>

# Usage:

```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)

# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS", do_lower_case=False, use_fast=False, keep_accents=True)

model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")

# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicHeadlineGenerationSS")

# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']

# First tokenize the input and outputs. The format below is how MultiIndicHeadlineGenerationSS was trained, so the input should be "Paragraph </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".

inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids  # tensor([[58232, 76, 14514, 53, 5344, 10605, 1052, 680, 83, 648, ..., 12126, 725, 19, 13635, 17, 7, 64001, 64007]])

out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids  # tensor([[64007, 329, 1906, 15429, ..., 17, 329, 1906, 27241, 64001]])

model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])

# For loss
model_outputs.loss  # This is not label smoothed.

# For logits
model_outputs.logits

# For generation. Note the decoder_start_token_id: it must be the target-language tag (here Hindi).

model.eval()  # Set dropouts to zero

model_output = model.generate(inp, use_cache=True, num_beams=4, max_length=32, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))

# Decode to get output strings
decoded_output = tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)  # अगस्त के अंत तक '5G' इंटरनेट लॉन्च हो जाएगा : अश्विनी वैष्णव
```
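The input/output conventions and the one-token shift used above (`decoder_input_ids=out[:, 0:-1]`, `labels=out[:, 1:]`) can be sketched in plain Python without downloading the model. The helper names below (`format_input`, `format_target`, `shift_for_teacher_forcing`) are illustrative, not part of the transformers API:

```python
# Language tags accepted by MultiIndicHeadlineGenerationSS (from the comment above).
LANG_TAGS = ["<2as>", "<2bn>", "<2gu>", "<2hi>", "<2kn>", "<2ml>",
             "<2mr>", "<2or>", "<2pa>", "<2ta>", "<2te>"]

def format_input(paragraph: str, lang: str) -> str:
    """Model input: 'Paragraph </s> <2xx>' where xx is the language code."""
    tag = f"<2{lang}>"
    assert tag in LANG_TAGS, f"unsupported language: {lang}"
    return f"{paragraph} </s> {tag}"

def format_target(headline: str, lang: str) -> str:
    """Training target: '<2yy> Headline </s>'."""
    return f"<2{lang}> {headline} </s>"

def shift_for_teacher_forcing(token_ids):
    """Mirrors decoder_input_ids=out[:, 0:-1], labels=out[:, 1:]:
    the decoder input is the target shifted right by one token."""
    return token_ids[:-1], token_ids[1:]

print(format_input("Some paragraph.", "hi"))  # Some paragraph. </s> <2hi>
print(format_target("Some headline", "hi"))   # <2hi> Some headline </s>
dec_inp, labels = shift_for_teacher_forcing([64007, 10, 11, 12, 64001])
print(dec_inp, labels)  # [64007, 10, 11, 12] [10, 11, 12, 64001]
```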

# Benchmarks
Scores on the IndicHeadlineGeneration test sets are as follows:

Language | Rouge-1 / Rouge-2 / Rouge-L
---------|----------------------------
as | 48.10 / 32.41 / 46.82
bn | 35.71 / 18.93 / 33.49
gu | 32.41 / 16.95 / 30.87
hi | 38.48 / 18.44 / 33.60
kn | 65.22 / 54.23 / 64.50
ml | 58.52 / 47.02 / 57.60
mr | 34.11 / 18.36 / 33.04
or | 24.83 / 11.00 / 23.74
pa | 45.15 / 27.71 / 42.12
ta | 47.15 / 31.09 / 45.72
te | 36.80 / 20.81 / 35.58
average | 42.41 / 27.00 / 40.64
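The reported averages are the unweighted (macro) means over the 11 languages; a quick sanity check in plain Python, with the values copied from the table above:

```python
# Per-language (Rouge-1, Rouge-2, Rouge-L) scores from the table above.
scores = {
    "as": (48.10, 32.41, 46.82), "bn": (35.71, 18.93, 33.49),
    "gu": (32.41, 16.95, 30.87), "hi": (38.48, 18.44, 33.60),
    "kn": (65.22, 54.23, 64.50), "ml": (58.52, 47.02, 57.60),
    "mr": (34.11, 18.36, 33.04), "or": (24.83, 11.00, 23.74),
    "pa": (45.15, 27.71, 42.12), "ta": (47.15, 31.09, 45.72),
    "te": (36.80, 20.81, 35.58),
}

# Unweighted mean of each metric column, rounded to two decimals.
averages = tuple(round(sum(col) / len(scores), 2) for col in zip(*scores.values()))
print(averages)  # (42.41, 27.0, 40.64) -- matches the "average" row
```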

# Contributors
<ul>
<li> Aman Kumar </li>
<li> Prachi Sahu </li>
<li> Himani Shrotriya </li>
<li> Raj Dabre </li>
<li> Anoop Kunchukuttan </li>
<li> Ratish Puduppully </li>
<li> Mitesh M. Khapra </li>
<li> Pratyush Kumar </li>
</ul>

# Paper
If you use MultiIndicHeadlineGenerationSS, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
  title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
  author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
  year={2022},
  url={https://arxiv.org/abs/2203.05437}
}
```