Update README.md
README.md CHANGED
@@ -24,4 +24,12 @@ To generate text using the model:
 
 tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-russian-grammar-corrector", src_lang="ru_RU", tgt_lang="ru_RU")
 input = tokenizer("I was here yesterday to studying",text_target="I was here yesterday to study", return_tensors='pt')
-output = model.generate(input["input_ids"],attention_mask=input["attention_mask"],forced_bos_token_id=tokenizer_it.lang_code_to_id["ru_RU"])
+output = model.generate(input["input_ids"], attention_mask=input["attention_mask"], forced_bos_token_id=tokenizer.lang_code_to_id["ru_RU"])
+
+
+
+Training of the model is performed using the following loss computation, based on the model output h:
+
+h = model(input_ids=input["input_ids"],
+          attention_mask=input["attention_mask"],
+          labels=input["labels"])  # h.loss is the training loss; h.logits are the token logits
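
For reference, here is how the generation snippet in this hunk fits together end to end. This is a minimal sketch rather than the card's verbatim code: it assumes the checkpoint loads with MBartForConditionalGeneration (the model variable used in the hunk is defined somewhere above it), and it decodes the result with tokenizer.batch_decode.

from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-russian-grammar-corrector")
tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-russian-grammar-corrector",
                                                 src_lang="ru_RU", tgt_lang="ru_RU")

# Encode only the erroneous source sentence; the corrected target is needed for training, not inference.
input = tokenizer("I was here yesterday to studying", return_tensors="pt")

# Force the decoder to start with the Russian language code, as in the snippet above.
output = model.generate(input["input_ids"],
                        attention_mask=input["attention_mask"],
                        forced_bos_token_id=tokenizer.lang_code_to_id["ru_RU"])

# Turn the generated token ids back into text.
print(tokenizer.batch_decode(output, skip_special_tokens=True))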
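
The loss line added at the end of the hunk is meant for fine-tuning. Below is one illustrative training step built around it; the optimizer choice (AdamW), the learning rate, and the single hand-made example pair are assumptions for the sketch, not part of the model card.

import torch
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-russian-grammar-corrector")
tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-russian-grammar-corrector",
                                                 src_lang="ru_RU", tgt_lang="ru_RU")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# A single (erroneous source, corrected target) pair; text_target populates batch["labels"].
batch = tokenizer("I was here yesterday to studying",
                  text_target="I was here yesterday to study",
                  return_tensors="pt")

model.train()
h = model(input_ids=batch["input_ids"],
          attention_mask=batch["attention_mask"],
          labels=batch["labels"])  # h.loss: cross-entropy over the target tokens

h.loss.backward()       # back-propagate the loss
optimizer.step()        # update the model parameters
optimizer.zero_grad()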