modelId (string, 4-112 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 21 classes) | files (sequence) | publishedBy (string, 2-37 chars) | downloads_last_month (int32, 0-9.44M) | library (string, 15 classes) | modelCard (string, 0-100k chars) |
---|---|---|---|---|---|---|---|---|
yechen/bert-base-chinese | 2021-05-01T04:00:07.000Z | [
"pytorch",
"tf",
"masked-lm",
"zh",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | yechen | 316 | transformers | ---
language: zh
---
|
yechen/bert-large-chinese | 2021-05-20T09:22:07.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"masked-lm",
"zh",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | yechen | 299 | transformers | ---
language: zh
---
|
yechen/question-answering-chinese | 2021-05-20T09:25:57.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"zh",
"transformers"
] | question-answering | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tf_model.h5",
"vocab.txt"
] | yechen | 642 | transformers | ---
language: zh
---
|
yeop/gpt2-pride-and-prejudice | 2021-04-27T09:27:03.000Z | [] | [
".gitattributes"
] | yeop | 0 | |||
yerevann/m3-gen-only-generator | 2020-05-04T13:37:40.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | yerevann | 11 | transformers | ||
yhavinga/gpt-neo-micro-nl-a | 2021-06-06T15:34:00.000Z | [
"pytorch",
"gpt_neo",
"causal-lm",
"dutch",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
] | yhavinga | 10 | transformers | ---
language:
- dutch
widget:
- text: "'We waren allemaal verheugd om"
---
# gpt-neo-micro-nl-a
This model is a test GPT Neo model created from scratch with its own tokenizer
on Dutch texts with the aitextgen toolkit.
See https://aitextgen.io/ for more info.
```
GPTNeoConfig {
"activation_function": "gelu_new",
"attention_dropout": 0.1,
"attention_layers": [ "global", "local", "global", "local", "global", "local", "global", "local" ],
"attention_types": [ [ [ "global", "local" ], 4 ] ],
"bos_token_id": 0,
"embed_dropout": 0.0,
"eos_token_id": 0,
"gradient_checkpointing": false,
"hidden_size": 256,
"initializer_range": 0.02,
"intermediate_size": 256,
"layer_norm_epsilon": 1e-05,
"max_position_embeddings": 256,
"model_type": "gpt_neo",
"num_heads": 8,
"num_layers": 8,
"resid_dropout": 0.0,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.6.1",
"use_cache": true,
"vocab_size": 5000,
"window_size": 32
}
```
|
yhavinga/gpt-nl-a | 2021-06-06T11:49:13.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"dutch",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
] | yhavinga | 13 | transformers | ---
language:
- dutch
widget:
- text: "De brand"
---
# gpt-nl-a micro
This model is a test GPT2 model created from scratch with its own tokenizer
on Dutch texts with the aitextgen toolkit.
See https://aitextgen.io/ for more info.
```
GPT2Config {
"activation_function": "gelu_new",
"attn_pdrop": 0.1,
"bos_token_id": 0,
"embd_pdrop": 0.1,
"eos_token_id": 0,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 32,
"n_embd": 256,
"n_head": 8,
"n_inner": null,
"n_layer": 8,
"n_positions": 32,
"resid_pdrop": 0.1,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"transformers_version": "4.6.1",
"use_cache": true,
"vocab_size": 5000
}
```
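As a quick way to try the checkpoint, the snippet below is a minimal sketch that loads it through the standard `transformers` text-generation pipeline; the prompt reuses the widget text above, and the sampling settings are illustrative rather than taken from the original training setup.
```
from transformers import pipeline

# Load the micro Dutch GPT-2 checkpoint from the Hugging Face Hub.
generator = pipeline("text-generation", model="yhavinga/gpt-nl-a")

# The config above sets n_ctx / n_positions to 32, so keep prompts and outputs short.
output = generator("De brand", max_length=32, do_sample=True)
print(output[0]["generated_text"])
```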
|
yhavinga/mt5-base-mixednews-nl | 2021-03-13T08:19:42.000Z | [
"pytorch",
"mt5",
"seq2seq",
"dutch",
"dataset:xsum_nl",
"transformers",
"summarization",
"text2text-generation"
] | summarization | [
".gitattributes",
"README.md",
"all_results.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"test_results.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"val_results.json"
] | yhavinga | 257 | transformers | ---
tags:
- summarization
language:
- dutch
datasets:
- xsum_nl
widget:
- text: "Onderzoekers ontdekten dat vier van de vijf kinderen in Engeland die op school lunches hadden gegeten, op school voedsel hadden geprobeerd dat ze thuis niet hadden geprobeerd.De helft van de ondervraagde ouders zei dat hun kinderen hadden gevraagd om voedsel dat ze op school hadden gegeten om thuis te worden gekookt.De enquête, van ongeveer 1.000 ouders, vond dat de meest populaire groenten wortelen, suikermaïs en erwten waren.Aubergine, kikkererwten en spinazie waren een van de minst populaire.Van de ondervraagde ouders, 628 hadden kinderen die lunches op school aten. (% duidt op een deel van de ouders die zeiden dat hun kind elke groente zou eten) England's School Food Trust gaf opdracht tot het onderzoek na een onderzoek door de Mumsnet-website suggereerde dat sommige ouders hun kinderen lunchpakket gaven omdat ze dachten dat ze te kieskeurig waren om iets anders te eten. \"Schoolmaaltijden kunnen een geweldige manier zijn om ouders te helpen hun kinderen aan te moedigen om nieuw voedsel te proberen en om de verscheidenheid van voedsel in hun dieet te verhogen. \"Mumsnet medeoprichter, Carrie Longton, zei: \"Het krijgen van kinderen om gezond te eten is de droom van elke ouder, maar maaltijdtijden thuis kan vaak een slagveld en emotioneel geladen zijn. \"Vanuit Mumsnetters' ervaring lijkt het erop dat eenmaal op school is er een verlangen om in te passen bij iedereen anders en zelfs een aantal positieve peer pressure om op te scheppen over de verscheidenheid van wat voedsel je kunt eten. \"Schoolmaaltijden zijn ook verplaatst op nogal een beetje van toen Mumsnetters op school waren, met gezondere opties en meer afwisseling. \"Schoolmaaltijden in Engeland moeten nu voldoen aan strenge voedingsrichtlijnen.Ongeveer vier op de tien basisschoolkinderen in Engeland eten nu schoollunches, iets meer dan op middelbare scholen.Meer kinderen in Schotland eten schoollunches - ongeveer 46%.Het onderzoek werd online uitgevoerd tussen 26 februari en 5 maart onder een panel van ouders die ten minste één kind op school hadden van 4-17 jaar oud."
- text: "Het Londense trio staat klaar voor de beste Britse act en beste album, evenals voor twee nominaties in de beste song categorie. \"We kregen te horen zoals vanmorgen 'Oh I think you're genomineerd',\" zei Dappy. \"En ik was als 'Oh yeah, what one?' En nu zijn we genomineerd voor vier awards. Ik bedoel, wow! \"Bandmate Fazer voegde eraan toe: \"We dachten dat het het beste van ons was om met iedereen naar beneden te komen en hallo te zeggen tegen de camera's.En nu vinden we dat we vier nominaties hebben. \"De band heeft twee shots bij de beste song prijs, het krijgen van het knikje voor hun Tyncy Stryder samenwerking nummer één, en single Strong Again.Their album Uncle B zal ook gaan tegen platen van Beyonce en Kany \"Aan het eind van de dag zijn we dankbaar om te zijn waar we zijn in onze carrières. \"Als het niet gebeurt dan gebeurt het niet - live om te vechten een andere dag en blijven maken albums en hits voor de fans. \"Dappy onthulde ook dat ze kunnen worden optreden live op de avond.De groep zal doen Nummer Een en ook een mogelijke uitlevering van de War Child single, I Got Soul.Het liefdadigheidslied is een re-working van The Killers' All These Things That I've Done en is ingesteld op artiesten als Chipmunk, Ironik en Pixie Lott.Dit jaar zal Mobos worden gehouden buiten Londen voor de eerste keer, in Glasgow op 30 september.N-Dubz zei dat ze op zoek waren naar optredens voor hun Schotse fans en bogen over hun recente shows ten noorden van de Londense We hebben Aberdeen ongeveer drie of vier maanden geleden gedaan - we hebben die show daar verbrijzeld! Overal waar we heen gaan slaan we hem in elkaar!\""
---
# mt5-base-mixednews-nl
mt5-base finetuned on three mixed news sources:
1. CNN DM translated to Dutch with MarianMT.
2. XSUM translated to Dutch with MarianMT.
3. News article summaries distilled from the nu.nl website.
Config:
* Learning rate 1e-3
* Trained for one epoch
* Max source length 1024
* Max target length 142
* Min target length 75
Scores:
* rouge1 28.8482
* rouge2 9.4584
* rougeL 20.1697
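For reference, running the model for Dutch summarization can look like the minimal sketch below; it uses the standard `transformers` summarization pipeline, and the generation lengths mirror the max/min target lengths listed above (the input text is a placeholder, not part of the original card).
```
from transformers import pipeline

# Load the Dutch summarization checkpoint from the Hugging Face Hub.
summarizer = pipeline("summarization", model="yhavinga/mt5-base-mixednews-nl")

article = "..."  # any Dutch news article, up to roughly 1024 source tokens
summary = summarizer(article, max_length=142, min_length=75)
print(summary[0]["summary_text"])
```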
|
yhk04150/SBERT | 2021-05-20T09:27:40.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"vocab.json"
] | yhk04150 | 11 | transformers | hello
|
yhk04150/YBERT | 2021-04-13T10:36:11.000Z | [] | [
".gitattributes"
] | yhk04150 | 0 | |||
yhk04150/yhkBERT | 2021-05-20T09:28:34.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"training_args.bin",
"vocab.json",
"checkpoint-17000/config.json",
"checkpoint-17000/optimizer.pt",
"checkpoint-17000/pytorch_model.bin",
"checkpoint-17000/scheduler.pt",
"checkpoint-17000/trainer_state.json",
"checkpoint-17000/training_args.bin"
] | yhk04150 | 13 | transformers | |
yhk04150/yhkBERT03 | 2021-05-20T09:28:48.000Z | [
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"merges.txt",
"vocab.json"
] | yhk04150 | 10 | transformers | |
yigitbekir/turkish-bert-uncased-sentiment | 2021-05-20T09:29:34.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"sentiment-no-neutral-extended_test.csv",
"sentiment-no-neutral-extended_train.csv",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | yigitbekir | 27 | transformers | |
yihanlin/scibert_scivocab_uncased | 2021-05-20T09:30:31.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
] | yihanlin | 113 | transformers | ||
yjc/roberta-base | 2021-03-09T09:53:07.000Z | [] | [
".gitattributes"
] | yjc | 0 | |||
yjernite/bart_eli5 | 2021-03-09T22:31:11.000Z | [
"pytorch",
"bart",
"seq2seq",
"en",
"dataset:eli5",
"transformers",
"license:apache-2.0",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"tokenizer.json",
"vocab.json"
] | yjernite | 705 | transformers | ---
language: en
license: apache-2.0
datasets:
- eli5
---
## BART ELI5
Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
|
yjernite/retribert-base-uncased | 2021-03-10T02:54:37.000Z | [
"pytorch",
"retribert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer.json",
"vocab.txt"
] | yjernite | 1,435 | transformers | ||
ykacer/bert-base-cased-imdb-sequence-classification | 2021-05-20T09:31:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:imdb",
"transformers",
"sequence",
"classification",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ykacer | 31 | transformers | ---
language:
- en
thumbnail: https://raw.githubusercontent.com/JetRunner/BERT-of-Theseus/master/bert-of-theseus.png
tags:
- sequence
- classification
license: apache-2.0
datasets:
- imdb
metrics:
- accuracy
---
|
yluisfern/FDR | 2021-04-02T16:40:25.000Z | [] | [
".gitattributes",
"README.md"
] | yluisfern | 0 | https://www.geogebra.org/m/cwcveget
https://www.geogebra.org/m/b8dzxk6z
https://www.geogebra.org/m/nqanttum
https://www.geogebra.org/m/pd3g8a4u
https://www.geogebra.org/m/jw8324jz
https://www.geogebra.org/m/wjbpvz5q
https://www.geogebra.org/m/qm3g3ma6
https://www.geogebra.org/m/sdajgph8
https://www.geogebra.org/m/e3ghhcbf
https://www.geogebra.org/m/msne4bfm
https://www.geogebra.org/m/nmcv2te5
https://www.geogebra.org/m/hguqx6cn
https://www.geogebra.org/m/jnyvpgqu
https://www.geogebra.org/m/syctd97g
https://www.geogebra.org/m/nq9erdby
https://www.geogebra.org/m/au4har8c
https://network.aza.org/network/members/profile?UserKey=811de229-7f08-4360-863c-ac04181ba9c0
https://network.aza.org/network/members/profile?UserKey=31b495a0-36f7-4a50-ba3e-d76e3487278c
https://network.aza.org/network/members/profile?UserKey=753c0ddd-bded-4b03-8c68-11dacdd1f676
https://network.aza.org/network/members/profile?UserKey=db9d0a25-1615-4e39-b61f-ad68766095b3
https://network.aza.org/network/members/profile?UserKey=59279f52-50cf-4686-9fb0-9ab613211ead
https://network.aza.org/network/members/profile?UserKey=67b3ce20-cc3a-420f-8933-10796f301060
https://network.aza.org/network/members/profile?UserKey=f5e610c3-6400-4429-b42b-97eeeeb284a9
https://network.aza.org/network/members/profile?UserKey=ccda0739-f5f5-4ecc-a729-77c9a6825897
https://network.aza.org/network/members/profile?UserKey=3983471f-cf43-4a4a-90d3-148040f92dd9
https://network.aza.org/network/members/profile?UserKey=9f16d7a8-3502-4904-a99a-38362de78973
https://network.aza.org/network/members/profile?UserKey=961981d5-9743-44ac-8525-d4c8b708eb5a
https://network.aza.org/network/members/profile?UserKey=178276d7-c64d-408e-af52-96d1ebd549fc |
||
ylwz/listen_bert | 2021-01-03T14:17:18.000Z | [] | [
".gitattributes"
] | ylwz | 0 | |||
ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli | 2020-10-17T02:05:17.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | ynie | 427 | transformers | |
ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli | 2020-10-17T02:00:14.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ynie | 446 | transformers | |
ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli | 2020-10-17T02:00:30.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ynie | 51 | transformers | |
ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli | 2021-05-20T23:17:23.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"transformers",
"license:mit"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ynie | 2,025 | transformers | ---
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
license: mit
---
This is a strong pre-trained RoBERTa-Large NLI model.
The training data is a combination of well-known NLI datasets: [`SNLI`](https://nlp.stanford.edu/projects/snli/), [`MNLI`](https://cims.nyu.edu/~sbowman/multinli/), [`FEVER-NLI`](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [`ANLI (R1, R2, R3)`](https://github.com/facebookresearch/anli).
Other pre-trained NLI models, including `RoBERTa`, `ALBERT`, `BART`, `ELECTRA`, and `XLNet`, are also available.
Trained by [Yixin Nie](https://easonnie.github.io), [original source](https://github.com/facebookresearch/anli).
Try the code snippet below.
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
if __name__ == '__main__':
    max_length = 256
    premise = "Two women are embracing while holding to go packages."
    hypothesis = "The men are fighting outside a deli."
    hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
    # hg_model_hub_name = "ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli"
    # hg_model_hub_name = "ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli"
    # hg_model_hub_name = "ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli"
    # hg_model_hub_name = "ynie/xlnet-large-cased-snli_mnli_fever_anli_R1_R2_R3-nli"
    tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
    model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
    tokenized_input_seq_pair = tokenizer.encode_plus(premise, hypothesis,
                                                     max_length=max_length,
                                                     return_token_type_ids=True, truncation=True)
    input_ids = torch.Tensor(tokenized_input_seq_pair['input_ids']).long().unsqueeze(0)
    # remember bart doesn't have 'token_type_ids', remove the line below if you are using bart.
    token_type_ids = torch.Tensor(tokenized_input_seq_pair['token_type_ids']).long().unsqueeze(0)
    attention_mask = torch.Tensor(tokenized_input_seq_pair['attention_mask']).long().unsqueeze(0)
    outputs = model(input_ids,
                    attention_mask=attention_mask,
                    token_type_ids=token_type_ids,
                    labels=None)
    # Note:
    # "id2label": {
    #     "0": "entailment",
    #     "1": "neutral",
    #     "2": "contradiction"
    # },
    predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()  # batch_size only one
    print("Premise:", premise)
    print("Hypothesis:", hypothesis)
    print("Entailment:", predicted_probability[0])
    print("Neutral:", predicted_probability[1])
    print("Contradiction:", predicted_probability[2])
```
More details are available [here](https://github.com/facebookresearch/anli/blob/master/src/hg_api/interactive_eval.py).
Citation:
```
@inproceedings{nie-etal-2020-adversarial,
title = "Adversarial {NLI}: A New Benchmark for Natural Language Understanding",
author = "Nie, Yixin and
Williams, Adina and
Dinan, Emily and
Bansal, Mohit and
Weston, Jason and
Kiela, Douwe",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
year = "2020",
publisher = "Association for Computational Linguistics",
}
```
|
ynie/roberta-large_conv_contradiction_detector_v0 | 2021-05-20T23:20:34.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ynie | 492 | transformers | |
ynie/xlnet-large-cased-snli_mnli_fever_anli_R1_R2_R3-nli | 2020-10-17T01:54:45.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
] | ynie | 75 | transformers | |
yoonseob/yaiBERT-v2 | 2020-12-04T00:40:42.000Z | [
"pytorch",
"transformers"
] | [
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | yoonseob | 11 | transformers | ||
yoonseob/yaiBERT | 2020-12-03T17:23:58.000Z | [
"pytorch",
"transformers"
] | [
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | yoonseob | 11 | transformers | ||
yoonseob/ysBERT | 2021-05-20T09:31:54.000Z | [
"pytorch",
"bert",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | yoonseob | 11 | transformers | ||
yorko/scibert_scivocab_uncased_long_4096 | 2021-06-18T13:41:31.000Z | [
"pytorch",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | yorko | 0 | transformers | |
yoshitomo-matsubara/bert-base-uncased-cola | 2021-05-29T21:40:15.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:cola",
"transformers",
"cola",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 15 | transformers | ---
language: en
tags:
- bert
- cola
- glue
- torchdistill
license: apache-2.0
datasets:
- cola
metrics:
- matthew's correlation
---
`bert-base-uncased` fine-tuned on CoLA dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
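For inference, the checkpoint loads like any other `transformers` sequence-classification model; the snippet below is a minimal sketch (the example sentence is illustrative, and the other checkpoints in this series load the same way with their respective model ids).
```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yoshitomo-matsubara/bert-base-uncased-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# CoLA is binary acceptability classification; in the GLUE convention,
# label 0 = unacceptable and label 1 = acceptable.
inputs = tokenizer("The boys was playing outside.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs.tolist())
```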
|
yoshitomo-matsubara/bert-base-uncased-cola_from_bert-large-uncased-cola | 2021-06-03T05:00:03.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:cola",
"transformers",
"cola",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 3 | transformers | ---
language: en
tags:
- bert
- cola
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- cola
metrics:
- matthew's correlation
---
`bert-base-uncased` fine-tuned on CoLA dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-mnli | 2021-05-29T21:43:56.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mnli",
"dataset:ax",
"transformers",
"mnli",
"ax",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 10 | transformers | ---
language: en
tags:
- bert
- mnli
- ax
- glue
- torchdistill
license: apache-2.0
datasets:
- mnli
- ax
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on MNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-mnli_from_bert-large-uncased-mnli | 2021-06-03T05:02:16.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mnli",
"dataset:ax",
"transformers",
"mnli",
"ax",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 3 | transformers | ---
language: en
tags:
- bert
- mnli
- ax
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- mnli
- ax
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on MNLI dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-mrpc | 2021-05-29T21:47:37.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mrpc",
"transformers",
"mrpc",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 22 | transformers | ---
language: en
tags:
- bert
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-mrpc_from_bert-large-uncased-mrpc | 2021-06-03T05:03:57.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mrpc",
"transformers",
"mrpc",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 3 | transformers | ---
language: en
tags:
- bert
- mrpc
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on MRPC dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-qnli | 2021-05-29T21:49:44.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qnli",
"transformers",
"qnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 8 | transformers | ---
language: en
tags:
- bert
- qnli
- glue
- torchdistill
license: apache-2.0
datasets:
- qnli
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on QNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-qnli_from_bert-large-uncased-qnli | 2021-06-03T05:05:26.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qnli",
"transformers",
"qnli",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 3 | transformers | ---
language: en
tags:
- bert
- qnli
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- qnli
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on QNLI dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-qqp | 2021-05-29T21:52:35.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qqp",
"transformers",
"qqp",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 8 | transformers | ---
language: en
tags:
- bert
- qqp
- glue
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-qqp_from_bert-large-uncased-qqp | 2021-06-03T05:06:46.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qqp",
"transformers",
"qqp",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 4 | transformers | ---
language: en
tags:
- bert
- qqp
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on QQP dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-rte | 2021-05-29T21:55:13.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:rte",
"transformers",
"rte",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 8 | transformers | ---
language: en
tags:
- bert
- rte
- glue
- torchdistill
license: apache-2.0
datasets:
- rte
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on RTE dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-rte_from_bert-large-uncased-rte | 2021-06-03T05:08:12.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:rte",
"transformers",
"rte",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 4 | transformers | ---
language: en
tags:
- bert
- rte
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- rte
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on RTE dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-sst2 | 2021-05-29T21:57:09.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sst2",
"transformers",
"sst2",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 10 | transformers | ---
language: en
tags:
- bert
- sst2
- glue
- torchdistill
license: apache-2.0
datasets:
- sst2
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on SST-2 dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-sst2_from_bert-large-uncased-sst2 | 2021-06-03T05:09:20.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sst2",
"transformers",
"sst2",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 9 | transformers | ---
language: en
tags:
- bert
- sst2
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- sst2
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on SST-2 dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-stsb | 2021-05-29T21:58:50.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:stsb",
"transformers",
"stsb",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 9 | transformers | ---
language: en
tags:
- bert
- stsb
- glue
- torchdistill
license: apache-2.0
datasets:
- stsb
metrics:
- pearson correlation
- spearman correlation
---
`bert-base-uncased` fine-tuned on STS-B dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/mse/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-stsb_from_bert-large-uncased-stsb | 2021-06-03T05:10:42.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:stsb",
"transformers",
"stsb",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 5 | transformers | ---
language: en
tags:
- bert
- stsb
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- stsb
metrics:
- pearson correlation
- spearman correlation
---
`bert-base-uncased` fine-tuned on STS-B dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-base-uncased-wnli | 2021-05-29T22:00:50.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:wnli",
"transformers",
"wnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 8 | transformers | ---
language: en
tags:
- bert
- wnli
- glue
- torchdistill
license: apache-2.0
datasets:
- wnli
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on WNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
|
yoshitomo-matsubara/bert-base-uncased-wnli_from_bert-large-uncased-wnli | 2021-06-03T05:12:16.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:wnli",
"transformers",
"wnli",
"glue",
"kd",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 3 | transformers | ---
language: en
tags:
- bert
- wnli
- glue
- kd
- torchdistill
license: apache-2.0
datasets:
- wnli
metrics:
- accuracy
---
`bert-base-uncased` fine-tuned on WNLI dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation.
The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/kd/bert_base_uncased_from_bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
|
yoshitomo-matsubara/bert-large-uncased-cola | 2021-05-29T21:32:06.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:cola",
"transformers",
"cola",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 170 | transformers | ---
language: en
tags:
- bert
- cola
- glue
- torchdistill
license: apache-2.0
datasets:
- cola
metrics:
- matthew's correlation
---
`bert-large-uncased` fine-tuned on CoLA dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/cola/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-mnli | 2021-05-29T21:32:31.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mnli",
"dataset:ax",
"transformers",
"mnli",
"ax",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 87 | transformers | ---
language: en
tags:
- bert
- mnli
- ax
- glue
- torchdistill
license: apache-2.0
datasets:
- mnli
- ax
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on MNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-mrpc | 2021-05-29T21:32:51.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:mrpc",
"transformers",
"mrpc",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 38 | transformers | ---
language: en
tags:
- bert
- mrpc
- glue
- torchdistill
license: apache-2.0
datasets:
- mrpc
metrics:
- f1
- accuracy
---
`bert-large-uncased` fine-tuned on MRPC dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/mrpc/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-qnli | 2021-05-29T21:33:19.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qnli",
"transformers",
"qnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 58 | transformers | ---
language: en
tags:
- bert
- qnli
- glue
- torchdistill
license: apache-2.0
datasets:
- qnli
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on QNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-qqp | 2021-05-29T21:33:37.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qqp",
"transformers",
"qqp",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 22 | transformers | ---
language: en
tags:
- bert
- qqp
- glue
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-large-uncased` fine-tuned on QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-rte | 2021-05-29T21:33:55.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:rte",
"transformers",
"rte",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 48 | transformers | ---
language: en
tags:
- bert
- rte
- glue
- torchdistill
license: apache-2.0
datasets:
- rte
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on RTE dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/rte/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-sst2 | 2021-05-29T21:34:13.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:sst2",
"transformers",
"sst2",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 51 | transformers | ---
language: en
tags:
- bert
- sst2
- glue
- torchdistill
license: apache-2.0
datasets:
- sst2
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on SST-2 dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/sst2/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-stsb | 2021-05-29T21:34:30.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:stsb",
"transformers",
"stsb",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 195 | transformers | ---
language: en
tags:
- bert
- stsb
- glue
- torchdistill
license: apache-2.0
datasets:
- stsb
metrics:
- pearson correlation
- spearman correlation
---
`bert-large-uncased` fine-tuned on STS-B dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/mse/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yoshitomo-matsubara/bert-large-uncased-wnli | 2021-05-29T21:34:53.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:wnli",
"transformers",
"wnli",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"training.log",
"vocab.txt"
] | yoshitomo-matsubara | 98 | transformers | ---
language: en
tags:
- bert
- wnli
- glue
- torchdistill
license: apache-2.0
datasets:
- wnli
metrics:
- accuracy
---
`bert-large-uncased` fine-tuned on WNLI dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/wnli/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
|
yosiasz/amharic | 2021-02-19T03:12:41.000Z | [] | [
".gitattributes"
] | yosiasz | 0 | |||
yosuke/bert-base-japanese-char | 2021-05-20T09:32:29.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"model.ckpt.data-00000-of-00001",
"model.ckpt.index",
"model.ckpt.meta",
"pytorch_model.bin",
"vocab.txt"
] | yosuke | 35 | transformers | |
young/BertForFinance | 2021-03-17T05:13:04.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
] | young | 21 | transformers | ||
youngfan918/bert_cn_finetuning | 2021-05-20T09:33:15.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | youngfan918 | 11 | transformers | |
youngfan918/bert_finetuning_test | 2021-05-20T09:34:11.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
] | youngfan918 | 12 | transformers | |
youscan/ukr-roberta-base | 2021-05-20T23:23:40.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"uk",
"transformers",
"fill-mask"
] | fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | youscan | 166 | transformers | ---
language:
- uk
---
# ukr-roberta-base
## Pre-training corpora
Below is the list of corpora used, along with the output of the wc command (counting lines, words, and characters). These corpora were concatenated and tokenized with the Hugging Face RoBERTa tokenizer.
| Corpus | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Ukrainian Wikipedia - May 2020](https://dumps.wikimedia.org/ukwiki/latest/ukwiki-latest-pages-articles.xml.bz2) | 18 001 466| 201 207 739 | 2 647 891 947 |
| [Ukrainian OSCAR deduplicated dataset](https://oscar-public.huma-num.fr/shuffled/uk_dedup.txt.gz) | 56 560 011 | 2 250 210 650 | 29 705 050 592 |
| Sampled mentions from social networks | 11 245 710 | 128 461 796 | 1 632 567 763 |
| Total | 85 807 187 | 2 579 880 185 | 33 985 510 302 |
## Pre-training details
* Ukrainian RoBERTa was trained with the code provided in the [HuggingFace tutorial](https://huggingface.co/blog/how-to-train)
* The currently released model follows the roberta-base-cased architecture (12-layer, 768-hidden, 12-heads, 125M parameters)
* The model was trained on 4× V100 GPUs (85 hours)
* The training configuration can be found in the [original repository](https://github.com/youscan/language-models); a minimal usage sketch is shown below
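A minimal masked-language-modelling sketch (not from the original release notes); the example sentence is illustrative only.
```python
from transformers import pipeline

# RoBERTa-style tokenizers use <mask> as the mask token.
fill_mask = pipeline("fill-mask", model="youscan/ukr-roberta-base")
fill_mask("Київ — столиця <mask>.")  # returns the top predictions for the masked token
```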
## Author
Vitalii Radchenko - contact me on Twitter [@vitaliradchenko](https://twitter.com/vitaliradchenko)
|
ysyang2002/test-model | 2021-03-25T22:21:51.000Z | [] | [
".gitattributes"
] | ysyang2002 | 0 | |||
ytlin/16l3xf7a_1 | 2021-05-23T13:47:19.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ytlin | 17 | transformers | |
ytlin/18ygyqcn_4 | 2021-05-23T13:48:01.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ytlin | 18 | transformers | |
ytlin/19rdmhqc | 2020-10-06T06:39:21.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | ytlin | 14 | transformers | |
ytlin/1klqb7u9_35 | 2021-05-23T13:48:32.000Z | [
"pytorch",
"gpt2",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ytlin | 17 | transformers | ||
ytlin/1pm2c7qw_5 | 2021-05-23T13:49:02.000Z | [
"pytorch",
"gpt2",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ytlin | 18 | transformers | ||
ytlin/1pm2c7qw_6 | 2021-05-23T13:49:27.000Z | [
"pytorch",
"gpt2",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ytlin | 15 | transformers | ||
ytlin/1riatc43 | 2020-10-05T21:26:03.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | ytlin | 20 | transformers | |
ytlin/21qspw2p | 2021-05-23T13:49:48.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ytlin | 14 | transformers | |
ytlin/2jgyqp5g | 2020-10-06T06:54:48.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | ytlin | 17 | transformers | |
ytlin/2sk5p244 | 2020-10-06T06:38:22.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | ytlin | 14 | transformers | |
ytlin/31r11ahz_2 | 2020-10-04T10:44:59.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | ytlin | 15 | transformers | |
ytlin/329vcm1b_4 | 2020-10-05T06:03:46.000Z | [
"pytorch",
"mbart",
"seq2seq",
"transformers",
"text2text-generation"
] | text2text-generation | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"sentencepiece.bpe.model",
"special_tokens_map.json",
"tokenizer_config.json"
] | ytlin | 21 | transformers | |
ytlin/35oote4t_52 | 2021-05-23T13:50:14.000Z | [
"pytorch",
"gpt2",
"transformers"
] | [
".gitattributes",
"added_tokens.json",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ytlin | 20 | transformers | ||
ytlin/38hbj3w7_10 | 2021-05-23T13:50:35.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ytlin | 18 | transformers | |
ytlin/38hbj3w7_13 | 2021-05-23T13:50:57.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ytlin | 16 | transformers | |
ytlin/46695u38_3 | 2021-05-23T13:51:39.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | ytlin | 16 | transformers | |
ytlin/CDial-GPT2_LCCC-base | 2020-10-05T14:39:38.000Z | [
"pytorch",
"transformers"
] | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
] | ytlin | 13 | transformers | ||
ytlin/distilbert-base-cased-sgd_qa-step5000 | 2021-02-09T14:39:56.000Z | [] | [
".gitattributes"
] | ytlin | 0 | |||
ytlin/q4b4siil | 2021-05-23T13:52:22.000Z | [
"pytorch",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"added_tokens.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
] | ytlin | 22 | transformers | |
yuanbit/finbert-qa | 2020-12-02T14:08:41.000Z | [] | [
".gitattributes"
] | yuanbit | 0 | |||
yucahu/len1 | 2021-05-23T13:54:23.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"tf_model.h5",
"vocab.json"
] | yucahu | 19 | transformers | |
yuifuku1118/wav2vec2-large-xlsr-japanese-roma-demo | 2021-06-03T16:41:40.000Z | [] | [
".gitattributes"
] | yuifuku1118 | 0 | |||
yumi/yumitask | 2021-03-24T02:55:00.000Z | [] | [
".gitattributes"
] | yumi | 0 | |||
yunusemreemik/turkish_financial_qna_model | 2021-04-15T00:05:04.000Z | [] | [
".gitattributes",
"README.md"
] | yunusemreemik | 0 | |||
yuv4r4j/model_name | 2021-06-17T15:04:01.000Z | [] | [
".gitattributes"
] | yuv4r4j | 0 | |||
yuvraj/summarizer-cnndm | 2020-12-11T22:04:58.000Z | [
"pytorch",
"bart",
"seq2seq",
"en",
"transformers",
"summarization",
"text2text-generation"
] | summarization | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | yuvraj | 200 | transformers | ---
language: "en"
tags:
- summarization
---
# Summarization
## Model description
BartForConditionalGeneration model fine-tuned for summarization on 10,000 samples from the CNN/DailyMail dataset
## How to use
PyTorch model available
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("yuvraj/summarizer-cnndm")
model = AutoModelWithLMHead.from_pretrained("yuvraj/summarizer-cnndm")
summarizer = pipeline('summarization', model=model, tokenizer=tokenizer)
summarizer("<Text to be summarized>")
```
## Limitations and bias
Trained on only 10,000 samples from the CNN/DailyMail dataset
|
yuvraj/xSumm | 2020-12-11T22:05:01.000Z | [
"pytorch",
"bart",
"seq2seq",
"en",
"transformers",
"summarization",
"extreme summarization",
"text2text-generation"
] | summarization | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
] | yuvraj | 28 | transformers | ---
language: "en"
tags:
- summarization
- extreme summarization
---
## Model description
BartForConditionalGeneration model for extreme summarization: it creates a one-line abstractive summary of a given article
## How to use
PyTorch model available
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("yuvraj/xSumm")
model = AutoModelWithLMHead.from_pretrained("yuvraj/xSumm")
xsumm = pipeline('summarization', model=model, tokenizer=tokenizer)
xsumm("<text to be summarized>")
```
## Limitations and bias
Trained on a small fraction of the XSum training dataset
|
yyelirr/CalvinoCosmicomicsGEN | 2021-05-07T16:49:32.000Z | [] | [
".gitattributes"
] | yyelirr | 0 | |||
yyua689/yiny | 2021-05-21T01:25:24.000Z | [] | [
".gitattributes"
] | yyua689 | 0 | |||
zachgray/doctor | 2021-04-03T04:54:53.000Z | [] | [
".gitattributes"
] | zachgray | 0 | |||
zachzhang/relevance_models | 2021-04-14T18:26:16.000Z | [] | [
".gitattributes",
"multilingual_zero.bin",
"multilingual_zero2.bin"
] | zachzhang | 0 | |||
zafer247/hgpt2 | 2021-01-06T12:18:04.000Z | [] | [
".gitattributes"
] | zafer247 | 0 | |||
zakiyaakter6/bfgbfgbfgbfgb | 2021-04-03T12:10:42.000Z | [] | [
".gitattributes",
"README.md"
] | zakiyaakter6 | 0 | |||
zalogaaa/test | 2020-11-21T13:45:38.000Z | [] | [
".gitattributes"
] | zalogaaa | 0 | |||
zanderbush/DebateWriting | 2021-01-13T20:39:45.000Z | [] | [
".gitattributes",
"config1.json",
"merges1.txt",
"pytorch_model1.bin",
"vocab.json"
] | zanderbush | 7 | |||
zanderbush/ForceWords | 2021-05-23T13:56:01.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
] | text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"log_history.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
] | zanderbush | 22 | transformers |