Model Name (string, 5-122 chars) | URL (string, 28-145 chars) | Crawled Text (string, 1-199k chars, may be null ⌀) | text (string, 180-199k chars) |
---|---|---|---|
EvilGirlfriend/Ua | https://huggingface.co/EvilGirlfriend/Ua | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : EvilGirlfriend/Ua
### Model URL : https://huggingface.co/EvilGirlfriend/Ua
### Model Description : No model card New: Create and edit this model card directly on the website! |
Evye/Eve | https://huggingface.co/Evye/Eve | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Evye/Eve
### Model URL : https://huggingface.co/Evye/Eve
### Model Description : No model card New: Create and edit this model card directly on the website! |
Ewan1011/phoenixbot-chat | https://huggingface.co/Ewan1011/phoenixbot-chat | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Ewan1011/phoenixbot-chat
### Model URL : https://huggingface.co/Ewan1011/phoenixbot-chat
### Model Description : No model card New: Create and edit this model card directly on the website! |
Ewan1011/phoenixbotchat | https://huggingface.co/Ewan1011/phoenixbotchat | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Ewan1011/phoenixbotchat
### Model URL : https://huggingface.co/Ewan1011/phoenixbotchat
### Model Description : No model card New: Create and edit this model card directly on the website! |
ExEngineer/DialoGPT-medium-jdt | https://huggingface.co/ExEngineer/DialoGPT-medium-jdt | #jdt chat bot | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : ExEngineer/DialoGPT-medium-jdt
### Model URL : https://huggingface.co/ExEngineer/DialoGPT-medium-jdt
### Model Description : #jdt chat bot |
ExSol/Alex | https://huggingface.co/ExSol/Alex | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : ExSol/Alex
### Model URL : https://huggingface.co/ExSol/Alex
### Model Description : No model card New: Create and edit this model card directly on the website! |
Exelby/Exe | https://huggingface.co/Exelby/Exe | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Exelby/Exe
### Model URL : https://huggingface.co/Exelby/Exe
### Model Description : No model card New: Create and edit this model card directly on the website! |
Exelby/Exelbyexe | https://huggingface.co/Exelby/Exelbyexe | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Exelby/Exelbyexe
### Model URL : https://huggingface.co/Exelby/Exelbyexe
### Model Description : No model card New: Create and edit this model card directly on the website! |
Exilon/DialoGPT-large-quirk | https://huggingface.co/Exilon/DialoGPT-large-quirk | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Exilon/DialoGPT-large-quirk
### Model URL : https://huggingface.co/Exilon/DialoGPT-large-quirk
### Model Description : |
Exor/DialoGPT-small-harrypotter | https://huggingface.co/Exor/DialoGPT-small-harrypotter | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Exor/DialoGPT-small-harrypotter
### Model URL : https://huggingface.co/Exor/DialoGPT-small-harrypotter
### Model Description : No model card New: Create and edit this model card directly on the website! |
Extreole/test | https://huggingface.co/Extreole/test | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Extreole/test
### Model URL : https://huggingface.co/Extreole/test
### Model Description : No model card New: Create and edit this model card directly on the website! |
EyeSeeThru/txt2img | https://huggingface.co/EyeSeeThru/txt2img | read me | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : EyeSeeThru/txt2img
### Model URL : https://huggingface.co/EyeSeeThru/txt2img
### Model Description : read me |
Eyvaz/wav2vec2-base-russian-big-kaggle | https://huggingface.co/Eyvaz/wav2vec2-base-russian-big-kaggle | This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Eyvaz/wav2vec2-base-russian-big-kaggle
### Model URL : https://huggingface.co/Eyvaz/wav2vec2-base-russian-big-kaggle
### Model Description : This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. More information needed More information needed More information needed The following hyperparameters were used during training: |
Eyvaz/wav2vec2-base-russian-demo-kaggle | https://huggingface.co/Eyvaz/wav2vec2-base-russian-demo-kaggle | This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Eyvaz/wav2vec2-base-russian-demo-kaggle
### Model URL : https://huggingface.co/Eyvaz/wav2vec2-base-russian-demo-kaggle
### Model Description : This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Eyvaz/wav2vec2-base-russian-modified-kaggle | https://huggingface.co/Eyvaz/wav2vec2-base-russian-modified-kaggle | This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid:
Schema validation error. "model-index" must be an array | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Eyvaz/wav2vec2-base-russian-modified-kaggle
### Model URL : https://huggingface.co/Eyvaz/wav2vec2-base-russian-modified-kaggle
### Model Description : This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid:
Schema validation error. "model-index" must be an array |
EzioDD/DialoGPT-small-house | https://huggingface.co/EzioDD/DialoGPT-small-house | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : EzioDD/DialoGPT-small-house
### Model URL : https://huggingface.co/EzioDD/DialoGPT-small-house
### Model Description : No model card New: Create and edit this model card directly on the website! |
EzioDD/house | https://huggingface.co/EzioDD/house | #house small GPT | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : EzioDD/house
### Model URL : https://huggingface.co/EzioDD/house
### Model Description : #house small GPT |
FAN-L/HM_model001 | https://huggingface.co/FAN-L/HM_model001 | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FAN-L/HM_model001
### Model URL : https://huggingface.co/FAN-L/HM_model001
### Model Description : No model card New: Create and edit this model card directly on the website! |
FFF000/dialogpt-FFF | https://huggingface.co/FFF000/dialogpt-FFF | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FFF000/dialogpt-FFF
### Model URL : https://huggingface.co/FFF000/dialogpt-FFF
### Model Description : |
FFZG-cleopatra/bert-emoji-latvian-twitter | https://huggingface.co/FFZG-cleopatra/bert-emoji-latvian-twitter | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FFZG-cleopatra/bert-emoji-latvian-twitter
### Model URL : https://huggingface.co/FFZG-cleopatra/bert-emoji-latvian-twitter
### Model Description : No model card New: Create and edit this model card directly on the website! |
FOFer/distilbert-base-uncased-finetuned-squad | https://huggingface.co/FOFer/distilbert-base-uncased-finetuned-squad | This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FOFer/distilbert-base-uncased-finetuned-squad
### Model URL : https://huggingface.co/FOFer/distilbert-base-uncased-finetuned-squad
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
FPTAI/velectra-base-discriminator-cased | https://huggingface.co/FPTAI/velectra-base-discriminator-cased | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FPTAI/velectra-base-discriminator-cased
### Model URL : https://huggingface.co/FPTAI/velectra-base-discriminator-cased
### Model Description : No model card New: Create and edit this model card directly on the website! |
FPTAI/vibert-base-cased | https://huggingface.co/FPTAI/vibert-base-cased | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FPTAI/vibert-base-cased
### Model URL : https://huggingface.co/FPTAI/vibert-base-cased
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fabby/gpt2-english-light-novel-titles | https://huggingface.co/Fabby/gpt2-english-light-novel-titles | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fabby/gpt2-english-light-novel-titles
### Model URL : https://huggingface.co/Fabby/gpt2-english-light-novel-titles
### Model Description : No model card New: Create and edit this model card directly on the website! |
FabianGroeger/HotelBERT-small | https://huggingface.co/FabianGroeger/HotelBERT-small | This model was trained on reviews from a well-known German hotel platform. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FabianGroeger/HotelBERT-small
### Model URL : https://huggingface.co/FabianGroeger/HotelBERT-small
### Model Description : This model was trained on reviews from a well-known German hotel platform. |
FabianGroeger/HotelBERT | https://huggingface.co/FabianGroeger/HotelBERT | This model was trained on reviews from a well-known German hotel platform. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FabianGroeger/HotelBERT
### Model URL : https://huggingface.co/FabianGroeger/HotelBERT
### Model Description : This model was trained on reviews from a well-known German hotel platform. |
FabioDataGeek/distilbert-base-uncased-finetuned-emotion | https://huggingface.co/FabioDataGeek/distilbert-base-uncased-finetuned-emotion | This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FabioDataGeek/distilbert-base-uncased-finetuned-emotion
### Model URL : https://huggingface.co/FabioDataGeek/distilbert-base-uncased-finetuned-emotion
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Faky/DialoGPT-small-RickBot | https://huggingface.co/Faky/DialoGPT-small-RickBot | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Faky/DialoGPT-small-RickBot
### Model URL : https://huggingface.co/Faky/DialoGPT-small-RickBot
### Model Description : No model card New: Create and edit this model card directly on the website! |
Famaral97/distilbert-base-uncased-finetuned-ner | https://huggingface.co/Famaral97/distilbert-base-uncased-finetuned-ner | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Famaral97/distilbert-base-uncased-finetuned-ner
### Model URL : https://huggingface.co/Famaral97/distilbert-base-uncased-finetuned-ner
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fan-s/reddit-tc-bert | https://huggingface.co/Fan-s/reddit-tc-bert | This model is a fine-tuned version of bert-base-uncased on a Reddit-dialogue dataset.
This model can be used for Text Classification: Given two sentences, see if they are related.
It achieves the following results on the evaluation set: The following hyperparameters were used during training: You can use the model like this: This model's model-index metadata is invalid:
Schema validation error. "model-index[0].results" is required | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fan-s/reddit-tc-bert
### Model URL : https://huggingface.co/Fan-s/reddit-tc-bert
### Model Description : This model is a fine-tuned version of bert-base-uncased on a Reddit-dialogue dataset.
This model can be used for Text Classification: Given two sentences, see if they are related.
It achieves the following results on the evaluation set: The following hyperparameters were used during training: You can use the model like this: This model's model-index metadata is invalid:
Schema validation error. "model-index[0].results" is required |
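The Fan-s/reddit-tc-bert entry above says "You can use the model like this:" but the snippet itself was not captured in the crawl. Below is a minimal sketch of sentence-pair classification with this checkpoint, assuming it loads as a standard transformers sequence-classification model; the example sentences and the meaning of the label indices are illustrative assumptions, not taken from the original card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical usage sketch for Fan-s/reddit-tc-bert (pairwise "are these related?" classification).
model_id = "Fan-s/reddit-tc-bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two sentences as one sequence pair, as BERT-style classifiers expect.
post = "Any recommendations for a good sci-fi novel?"
reply = "You should try Dune, it holds up really well."
inputs = tokenizer(post, reply, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted label index (e.g. related vs. not related)
```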
Fang/Titania | https://huggingface.co/Fang/Titania | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fang/Titania
### Model URL : https://huggingface.co/Fang/Titania
### Model Description : No model card New: Create and edit this model card directly on the website! |
FangLee/DialoGPT-small-Kirito | https://huggingface.co/FangLee/DialoGPT-small-Kirito | @Kirito DialoGPT Small Model | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FangLee/DialoGPT-small-Kirito
### Model URL : https://huggingface.co/FangLee/DialoGPT-small-Kirito
### Model Description : @Kirito DialoGPT Small Model |
FardinSaboori/bert-finetuned-squad-accelerate | https://huggingface.co/FardinSaboori/bert-finetuned-squad-accelerate | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FardinSaboori/bert-finetuned-squad-accelerate
### Model URL : https://huggingface.co/FardinSaboori/bert-finetuned-squad-accelerate
### Model Description : No model card New: Create and edit this model card directly on the website! |
FardinSaboori/bert-finetuned-squad | https://huggingface.co/FardinSaboori/bert-finetuned-squad | This model is a fine-tuned version of bert-base-cased on the squad dataset. More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FardinSaboori/bert-finetuned-squad
### Model URL : https://huggingface.co/FardinSaboori/bert-finetuned-squad
### Model Description : This model is a fine-tuned version of bert-base-cased on the squad dataset. More information needed More information needed More information needed The following hyperparameters were used during training: |
FarhanAli/RoBERT_healthFact | https://huggingface.co/FarhanAli/RoBERT_healthFact | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FarhanAli/RoBERT_healthFact
### Model URL : https://huggingface.co/FarhanAli/RoBERT_healthFact
### Model Description : No model card New: Create and edit this model card directly on the website! |
FarhanAli/health_fact_data_models | https://huggingface.co/FarhanAli/health_fact_data_models | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FarhanAli/health_fact_data_models
### Model URL : https://huggingface.co/FarhanAli/health_fact_data_models
### Model Description : No model card New: Create and edit this model card directly on the website! |
FarisHijazi/wav2vec2-large-xls-r-300m-arabic-colab | https://huggingface.co/FarisHijazi/wav2vec2-large-xls-r-300m-arabic-colab | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FarisHijazi/wav2vec2-large-xls-r-300m-arabic-colab
### Model URL : https://huggingface.co/FarisHijazi/wav2vec2-large-xls-r-300m-arabic-colab
### Model Description : No model card New: Create and edit this model card directly on the website! |
FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab | https://huggingface.co/FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab | This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab
### Model URL : https://huggingface.co/FarisHijazi/wav2vec2-large-xls-r-300m-turkish-colab
### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. More information needed More information needed More information needed The following hyperparameters were used during training: |
FarisHijazi/wav2vec2-large-xlsr-turkish-demo-colab | https://huggingface.co/FarisHijazi/wav2vec2-large-xlsr-turkish-demo-colab | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FarisHijazi/wav2vec2-large-xlsr-turkish-demo-colab
### Model URL : https://huggingface.co/FarisHijazi/wav2vec2-large-xlsr-turkish-demo-colab
### Model Description : No model card New: Create and edit this model card directly on the website! |
Farjami/Modal1 | https://huggingface.co/Farjami/Modal1 | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Farjami/Modal1
### Model URL : https://huggingface.co/Farjami/Modal1
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fatemah/salamBERT | https://huggingface.co/Fatemah/salamBERT | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fatemah/salamBERT
### Model URL : https://huggingface.co/Fatemah/salamBERT
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fauzan/autonlp-judulberita-32517788 | https://huggingface.co/Fauzan/autonlp-judulberita-32517788 | You can use cURL to access this model: Or Python API: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fauzan/autonlp-judulberita-32517788
### Model URL : https://huggingface.co/Fauzan/autonlp-judulberita-32517788
### Model Description : You can use cURL to access this model: Or Python API: |
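The Fauzan/autonlp-judulberita-32517788 entry mentions cURL and Python API access, but both snippets were lost in the crawl. A minimal sketch of calling the model through the hosted Hugging Face Inference API follows; the token placeholder and the Indonesian example headline are assumptions for illustration.

```python
import requests

# Hosted Inference API endpoint for the AutoNLP model; replace the token with your own.
API_URL = "https://api-inference.huggingface.co/models/Fauzan/autonlp-judulberita-32517788"
HEADERS = {"Authorization": "Bearer YOUR_HF_API_TOKEN"}

def query(payload: dict) -> dict:
    """Send a JSON payload to the Inference API and return the parsed response."""
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

# Example news headline to classify (illustrative input).
print(query({"inputs": "Pemerintah umumkan kebijakan baru untuk sektor pendidikan"}))
```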
FelipeV/bert-base-spanish-uncased-sentiment | https://huggingface.co/FelipeV/bert-base-spanish-uncased-sentiment | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FelipeV/bert-base-spanish-uncased-sentiment
### Model URL : https://huggingface.co/FelipeV/bert-base-spanish-uncased-sentiment
### Model Description : No model card New: Create and edit this model card directly on the website! |
Felipehonorato/storIA | https://huggingface.co/Felipehonorato/storIA | This model was fine-tuned to generate horror stories in a collaborative way.
Check it out on our repo. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Felipehonorato/storIA
### Model URL : https://huggingface.co/Felipehonorato/storIA
### Model Description : This model was fine-tuned to generate horror stories in a collaborative way.
Check it out on our repo. |
Fengkai/distilbert-base-uncased-finetuned-emotion | https://huggingface.co/Fengkai/distilbert-base-uncased-finetuned-emotion | This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fengkai/distilbert-base-uncased-finetuned-emotion
### Model URL : https://huggingface.co/Fengkai/distilbert-base-uncased-finetuned-emotion
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Fengkai/xlm-roberta-base-finetuned-panx-de | https://huggingface.co/Fengkai/xlm-roberta-base-finetuned-panx-de | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fengkai/xlm-roberta-base-finetuned-panx-de
### Model URL : https://huggingface.co/Fengkai/xlm-roberta-base-finetuned-panx-de
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fenshee/Tania | https://huggingface.co/Fenshee/Tania | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fenshee/Tania
### Model URL : https://huggingface.co/Fenshee/Tania
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fera/HakaiMono | https://huggingface.co/Fera/HakaiMono | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fera/HakaiMono
### Model URL : https://huggingface.co/Fera/HakaiMono
### Model Description : No model card New: Create and edit this model card directly on the website! |
Ferch423/gpt2-small-portuguese-wikipediabio | https://huggingface.co/Ferch423/gpt2-small-portuguese-wikipediabio | This is a fine-tuned version of gpt2-small-portuguese (https://huggingface.co/pierreguillou/gpt2-small-portuguese) by pierreguillou. It was trained on a dataset of person abstracts extracted from DBpedia (over 100,000 people's abstracts). The model is intended as a simple and fun experiment for generating text abstracts based on ordinary people's names. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Ferch423/gpt2-small-portuguese-wikipediabio
### Model URL : https://huggingface.co/Ferch423/gpt2-small-portuguese-wikipediabio
### Model Description : This is a fine-tuned version of gpt2-small-portuguese (https://huggingface.co/pierreguillou/gpt2-small-portuguese) by pierreguillou. It was trained on a dataset of person abstracts extracted from DBpedia (over 100,000 people's abstracts). The model is intended as a simple and fun experiment for generating text abstracts based on ordinary people's names. |
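The Ferch423/gpt2-small-portuguese-wikipediabio entry describes generating biography-style abstracts from a person's name but includes no usage snippet. A minimal sketch with the transformers text-generation pipeline follows; the prompt name and sampling settings are illustrative assumptions.

```python
from transformers import pipeline

# Generate a DBpedia-style biography abstract from a (fictional) Portuguese name.
generator = pipeline("text-generation", model="Ferch423/gpt2-small-portuguese-wikipediabio")
result = generator("Maria da Silva é", max_new_tokens=60, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```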
Ferial/distilbert-base-uncased-finetuned-ner | https://huggingface.co/Ferial/distilbert-base-uncased-finetuned-ner | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Ferial/distilbert-base-uncased-finetuned-ner
### Model URL : https://huggingface.co/Ferial/distilbert-base-uncased-finetuned-ner
### Model Description : No model card New: Create and edit this model card directly on the website! |
Ferran/pk-bert | https://huggingface.co/Ferran/pk-bert | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Ferran/pk-bert
### Model URL : https://huggingface.co/Ferran/pk-bert
### Model Description : No model card New: Create and edit this model card directly on the website! |
Fhrozen/test_an4 | https://huggingface.co/Fhrozen/test_an4 | This model was trained by Fhrozen using an4 recipe in espnet. | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fhrozen/test_an4
### Model URL : https://huggingface.co/Fhrozen/test_an4
### Model Description : This model was trained by Fhrozen using an4 recipe in espnet. |
Fiddi/distilbert-base-uncased-finetuned-ner | https://huggingface.co/Fiddi/distilbert-base-uncased-finetuned-ner | This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fiddi/distilbert-base-uncased-finetuned-ner
### Model URL : https://huggingface.co/Fiddi/distilbert-base-uncased-finetuned-ner
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Fiftyzed/Fiftyzed | https://huggingface.co/Fiftyzed/Fiftyzed | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fiftyzed/Fiftyzed
### Model URL : https://huggingface.co/Fiftyzed/Fiftyzed
### Model Description : No model card New: Create and edit this model card directly on the website! |
Film8844/wangchanberta-ner | https://huggingface.co/Film8844/wangchanberta-ner | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Film8844/wangchanberta-ner
### Model URL : https://huggingface.co/Film8844/wangchanberta-ner
### Model Description : No model card New: Create and edit this model card directly on the website! |
FilmonK/DialoGPT-small-harrypotter | https://huggingface.co/FilmonK/DialoGPT-small-harrypotter | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FilmonK/DialoGPT-small-harrypotter
### Model URL : https://huggingface.co/FilmonK/DialoGPT-small-harrypotter
### Model Description : No model card New: Create and edit this model card directly on the website! |
Filosofas/DialoGPT-medium-PALPATINE | https://huggingface.co/Filosofas/DialoGPT-medium-PALPATINE | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Filosofas/DialoGPT-medium-PALPATINE
### Model URL : https://huggingface.co/Filosofas/DialoGPT-medium-PALPATINE
### Model Description : |
Finka/model_name | https://huggingface.co/Finka/model_name | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finka/model_name
### Model URL : https://huggingface.co/Finka/model_name
### Model Description : No model card New: Create and edit this model card directly on the website! |
Finnish-NLP/convbert-base-finnish | https://huggingface.co/Finnish-NLP/convbert-base-finnish | Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
this paper
and first released at this page. Note: this model is the ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ConvBERT generator model intended to be used for the fill-mask task is released here Finnish-NLP/convbert-base-generator-finnish Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs. Compared to BERT and ELECTRA models, the ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning. You can use the raw model for extracting features or fine-tune it to a downstream task like text classification. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ConvBERT model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ConvBERT repository, and some instructions were also used from here. Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512, but Eduskunta only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models: To conclude, this ConvBERT model outperforms the ELECTRA model while losing to the other models, but it is still fairly competitive compared to our roberta-large models when taking into account that this ConvBERT model has 106M parameters while the roberta-large models have 355M parameters. ConvBERT outperforming ELECTRA is also in line with the findings of the ConvBERT paper. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/convbert-base-finnish
### Model URL : https://huggingface.co/Finnish-NLP/convbert-base-finnish
### Model Description : Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
this paper
and first released at this page. Note: this model is the ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ConvBERT generator model intended to be used for the fill-mask task is released here Finnish-NLP/convbert-base-generator-finnish Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs. Compared to BERT and ELECTRA models, the ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning. You can use the raw model for extracting features or fine-tune it to a downstream task like text classification. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ConvBERT model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ConvBERT repository, and some instructions were also used from here. Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512, but Eduskunta only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models: To conclude, this ConvBERT model outperforms the ELECTRA model while losing to the other models, but it is still fairly competitive compared to our roberta-large models when taking into account that this ConvBERT model has 106M parameters while the roberta-large models have 355M parameters. ConvBERT outperforming ELECTRA is also in line with the findings of the ConvBERT paper. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
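The Finnish-NLP/convbert-base-finnish entry says "Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow:" but the original code blocks were stripped during crawling. A minimal PyTorch feature-extraction sketch follows; the example Finnish sentence is an assumption.

```python
from transformers import AutoTokenizer, AutoModel

# Extract contextual features with the Finnish ConvBERT discriminator (PyTorch).
model_id = "Finnish-NLP/convbert-base-finnish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="pt")  # illustrative Finnish sentence
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```

For the TensorFlow path mentioned in the card, swapping in TFAutoModel and return_tensors="tf" should produce the equivalent features.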
Finnish-NLP/convbert-base-generator-finnish | https://huggingface.co/Finnish-NLP/convbert-base-generator-finnish | Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
this paper
and first released at this page. Note: this model is the ConvBERT generator model intended to be used for the fill-mask task. The ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/convbert-base-finnish Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs. Compared to BERT and ELECTRA models, the ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning. You can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/convbert-base-finnish model instead. Here is how to use this model directly with a pipeline for the fill-mask task: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ConvBERT model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ConvBERT repository, and some instructions were also used from here. For evaluation results, check the Finnish-NLP/convbert-base-finnish model repository instead. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/convbert-base-generator-finnish
### Model URL : https://huggingface.co/Finnish-NLP/convbert-base-generator-finnish
### Model Description : Pretrained ConvBERT model on Finnish language using a replaced token detection (RTD) objective. ConvBERT was introduced in
this paper
and first released at this page. Note: this model is the ConvBERT generator model intended to be used for the fill-mask task. The ConvBERT discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/convbert-base-finnish Finnish ConvBERT is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ConvBERT model as inputs. Compared to BERT and ELECTRA models, the ConvBERT model utilizes a span-based
dynamic convolution to replace some of the global self-attention heads for modeling local input sequence
dependencies. These convolution heads, together with the rest of the self-attention
heads, form a new mixed attention block that should be more efficient at both global
and local context learning. You can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/convbert-base-finnish model instead. Here is how to use this model directly with a pipeline for the fill-mask task: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ConvBERT model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 1e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ConvBERT repository, and some instructions were also used from here. For evaluation results, check the Finnish-NLP/convbert-base-finnish model repository instead. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
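The Finnish-NLP/convbert-base-generator-finnish entry promises a fill-mask pipeline example that was lost in the crawl. A minimal sketch follows; the Finnish prompt and the [MASK] token (the usual choice for WordPiece vocabularies) are assumptions.

```python
from transformers import pipeline

# Fill-mask with the Finnish ConvBERT generator checkpoint.
unmasker = pipeline("fill-mask", model="Finnish-NLP/convbert-base-generator-finnish")
for prediction in unmasker("Helsinki on Suomen [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```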
Finnish-NLP/electra-base-discriminator-finnish | https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish | Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
this paper
and first released at this page. Note: this model is the ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ELECTRA generator model intended to be used for the fill-mask task is released here Finnish-NLP/electra-base-generator-finnish Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs. You can use the raw model for extracting features or fine-tune it to a downstream task like text classification. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ELECTRA model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ELECTRA repository, and some instructions were also used from here. Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512, but Eduskunta only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models: To conclude, this ELECTRA model loses to the other models but is still fairly competitive compared to our roberta-large models when taking into account that this ELECTRA model has 110M parameters while the roberta-large models have 355M parameters. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/electra-base-discriminator-finnish
### Model URL : https://huggingface.co/Finnish-NLP/electra-base-discriminator-finnish
### Model Description : Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
this paper
and first released at this page. Note: this model is the ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification. The ELECTRA generator model intended to be used for the fill-mask task is released here Finnish-NLP/electra-base-generator-finnish Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs. You can use the raw model for extracting features or fine-tune it to a downstream task like text classification. Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ELECTRA model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ELECTRA repository, and some instructions were also used from here. Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512, but Eduskunta only with a sequence length of 128.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our other models: To conclude, this ELECTRA model loses to the other models but is still fairly competitive compared to our roberta-large models when taking into account that this ELECTRA model has 110M parameters while the roberta-large models have 355M parameters. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
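The Finnish-NLP/electra-base-discriminator-finnish entry also references PyTorch and TensorFlow feature-extraction snippets that were stripped. To complement the PyTorch sketch above, here is a TensorFlow-flavoured sketch; the example sentence is an assumption, and if the repository only ships PyTorch weights you would need from_pt=True.

```python
from transformers import AutoTokenizer, TFAutoModel

# Extract contextual features with the Finnish ELECTRA discriminator (TensorFlow).
model_id = "Finnish-NLP/electra-base-discriminator-finnish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModel.from_pretrained(model_id)  # add from_pt=True if only PyTorch weights exist

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```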
Finnish-NLP/electra-base-generator-finnish | https://huggingface.co/Finnish-NLP/electra-base-generator-finnish | Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
this paper
and first released at this page. Note: this model is the ELECTRA generator model intended to be used for the fill-mask task. The ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/electra-base-discriminator-finnish Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs. You can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/electra-base-discriminator-finnish model instead. Here is how to use this model directly with a pipeline for the fill-mask task: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ELECTRA model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad-quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased, so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on a TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with a learning rate of 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ELECTRA repository, and some instructions were also used from here. For evaluation results, check the Finnish-NLP/electra-base-discriminator-finnish model repository instead. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/electra-base-generator-finnish
### Model URL : https://huggingface.co/Finnish-NLP/electra-base-generator-finnish
### Model Description : Pretrained ELECTRA model on Finnish language using a replaced token detection (RTD) objective. ELECTRA was introduced in
this paper
and first released at this page. Note: this model is the ELECTRA generator model intended to be used for the fill-mask task. The ELECTRA discriminator model intended to be used for fine-tuning on downstream tasks like text classification is released here Finnish-NLP/electra-base-discriminator-finnish Finnish ELECTRA is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the replaced token detection (RTD) objective. Instead of masking the input like in BERT's masked language modeling (MLM) objective, this approach corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator model. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained that predicts whether each token in the corrupted input was replaced by a generator model's sample or not. Thus, this training approach resembles Generative Adversarial Nets (GAN). This way, the model learns an inner representation of the Finnish language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ELECTRA model as inputs. You can use this generator model mainly just for the fill-mask task. For other tasks, check the Finnish-NLP/electra-base-discriminator-finnish model instead. Here is how to use this model directly with a pipeline for the fill-mask task: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. This Finnish ELECTRA model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using WordPiece and a vocabulary size of 50265. The inputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 1M steps. The optimizer used was AdamW with learning rate 2e-4, learning rate warmup for 20000 steps and linear decay of the learning rate after. Training code was from the official ELECTRA repository and some instructions were also used from here. For evaluation results, check the Finnish-NLP/electra-base-discriminator-finnish model repository instead. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
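The fill-mask pipeline example referenced in the card did not survive the crawl. Below is a minimal sketch of that usage; the Finnish example sentence is illustrative and not taken from the original card.

```python
from transformers import pipeline

# Fill-mask pipeline with the generator checkpoint; [MASK] is the WordPiece mask token.
unmasker = pipeline("fill-mask", model="Finnish-NLP/electra-base-generator-finnish")
for prediction in unmasker("Moi, olen suomalainen [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```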
Finnish-NLP/gpt2-finnish | https://huggingface.co/Finnish-NLP/gpt2-finnish | Pretrained GPT-2 model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page. Note: this model is quite small 117M parameter variant as in Huggingface's GPT-2 config, so not the famous big 1.5B parameter variant by OpenAI. We also have bigger 345M parameter variant gpt2-medium-finnish and 774M parameter variant gpt2-large-finnish available which perform better compared to this model. Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token i only uses the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This Finnish GPT-2 model was pretrained on the combination of six datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 300k steps (a bit over 2 epochs, 256 batch size). The optimizer used was a second-order optimization method called Distributed Shampoo with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after. At first, commonly used Adam optimizer was tried but there were significant issues getting the model to converge even with multiple different learning rate trials so then Adam optimizer was replaced with the Distributed Shampoo which worked a lot better. Evaluation was done using the validation split of the mc4_fi_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/gpt2-finnish
### Model URL : https://huggingface.co/Finnish-NLP/gpt2-finnish
### Model Description : Pretrained GPT-2 model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page. Note: this model is quite small 117M parameter variant as in Huggingface's GPT-2 config, so not the famous big 1.5B parameter variant by OpenAI. We also have bigger 345M parameter variant gpt2-medium-finnish and 774M parameter variant gpt2-large-finnish available which perform better compared to this model. Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token i only uses the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This Finnish GPT-2 model was pretrained on the combination of six datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 300k steps (a bit over 2 epochs, 256 batch size). The optimizer used was a second-order optimization method called Distributed Shampoo with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after. At first, commonly used Adam optimizer was tried but there were significant issues getting the model to converge even with multiple different learning rate trials so then Adam optimizer was replaced with the Distributed Shampoo which worked a lot better. Evaluation was done using the validation split of the mc4_fi_cleaned dataset with Perplexity (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
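The text-generation pipeline example referenced in the card was stripped during crawling. A minimal sketch follows; the Finnish prompt and the generation settings are illustrative assumptions, not the original example.

```python
from transformers import pipeline

# Causal text generation with the small Finnish GPT-2 checkpoint.
generator = pipeline("text-generation", model="Finnish-NLP/gpt2-finnish")
outputs = generator("Suomi on maa, jossa", max_length=30, do_sample=True, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```

The same pipeline call works for the medium and large checkpoints by swapping the model id.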
Finnish-NLP/gpt2-large-finnish | https://huggingface.co/Finnish-NLP/gpt2-large-finnish | Pretrained GPT-2 large model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page. Note: this model is 774M parameter variant as in Huggingface's GPT-2-large config, so not the famous big 1.5B parameter variant by OpenAI. Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token i only uses the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This Finnish GPT-2 model was pretrained on the combination of six datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was AdamW with learning rate 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after. Evaluation was done using the validation split of the mc4_fi_cleaned dataset with Perplexity (the smaller the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/gpt2-large-finnish
### Model URL : https://huggingface.co/Finnish-NLP/gpt2-large-finnish
### Model Description : Pretrained GPT-2 large model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page. Note: this model is 774M parameter variant as in Huggingface's GPT-2-large config, so not the famous big 1.5B parameter variant by OpenAI. Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token i only uses the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This Finnish GPT-2 model was pretrained on the combination of six datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 640k steps (a bit over 1 epoch, 64 batch size). The optimizer used was AdamW with learning rate 4e-5, learning rate warmup for 4000 steps and cosine decay of the learning rate after. Evaluation was done using the validation split of the mc4_fi_cleaned dataset with Perplexity (the smaller the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller model variants. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
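The PyTorch feature-extraction snippet referenced in the card is missing from the crawled text. Here is a minimal sketch using the standard 🤗 Transformers auto classes; the input sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Extract hidden states from the large Finnish GPT-2 checkpoint in PyTorch.
checkpoint = "Finnish-NLP/gpt2-large-finnish"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```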
Finnish-NLP/gpt2-medium-finnish | https://huggingface.co/Finnish-NLP/gpt2-medium-finnish | Pretrained GPT-2 medium model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page. Note: this model is 345M parameter variant as in Huggingface's GPT-2-medium config, so not the famous big 1.5B parameter variant by OpenAI. We also have bigger 774M parameter variant gpt2-large-finnish available which performs better compared to this model. Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token i only uses the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This Finnish GPT-2 model was pretrained on the combination of six datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after. Evaluation was done using the validation split of the mc4_fi_cleaned dataset with Perplexity (the smaller the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller gpt2-finnish model variant but loses to our bigger gpt2-large-finnish model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/gpt2-medium-finnish
### Model URL : https://huggingface.co/Finnish-NLP/gpt2-medium-finnish
### Model Description : Pretrained GPT-2 medium model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
this paper
and first released at this page. Note: this model is 345M parameter variant as in Huggingface's GPT-2-medium config, so not the famous big 1.5B parameter variant by OpenAI. We also have bigger 774M parameter variant gpt2-large-finnish available which performs better compared to this model. Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences. Inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token i only uses the inputs from 1 to i but not the future tokens. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt. You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you. You can use this model directly with a pipeline for text generation: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. This Finnish GPT-2 model was pretrained on the combination of six datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens. The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 360k steps (a bit over 1 epoch, 128 batch size). The optimizer used was AdamW with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after. Evaluation was done using the validation split of the mc4_fi_cleaned dataset with Perplexity (the smaller the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) performs better than our smaller gpt2-finnish model variant but loses to our bigger gpt2-large-finnish model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
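The TensorFlow feature-extraction snippet referenced in the card is likewise missing. A minimal sketch follows, assuming TensorFlow weights are published for this checkpoint (otherwise add from_pt=True); the input sentence is illustrative.

```python
from transformers import AutoTokenizer, TFAutoModel

# Extract hidden states from the medium Finnish GPT-2 checkpoint in TensorFlow.
checkpoint = "Finnish-NLP/gpt2-medium-finnish"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModel.from_pretrained(checkpoint)  # add from_pt=True if only PyTorch weights exist

inputs = tokenizer("Tämä on esimerkkilause.", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```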
Finnish-NLP/roberta-large-finnish-v2 | https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2 | This Finnish-NLP/roberta-large-finnish-v2 model is a new version of the previously trained Finnish-NLP/roberta-large-finnish model. Training hyperparameters were the same but the training dataset was cleaned better, with the goal of getting a better performing language model through the better cleaned data. Based on the model evaluations (check the table at the end), the slightly better cleaned data didn't seem to produce a better performing model. Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between finnish and Finnish. Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions. This Finnish RoBERTa model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, β1=0.9\beta_{1} = 0.9β1=0.9, β2=0.98\beta_{2} = 0.98β2=0.98 and ϵ=1e−6\epsilon = 1e-6ϵ=1e−6, learning rate warmup for 1500 steps and linear decay of the learning rate after. Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish model: To conclude, this model didn't significantly improve compared to our previous Finnish-NLP/roberta-large-finnish model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/roberta-large-finnish-v2
### Model URL : https://huggingface.co/Finnish-NLP/roberta-large-finnish-v2
### Model Description : This Finnish-NLP/roberta-large-finnish-v2 model is a new version of the previously trained Finnish-NLP/roberta-large-finnish model. Training hyperparameters were the same but the training dataset was cleaned better, with the goal of getting a better performing language model through the better cleaned data. Based on the model evaluations (check the table at the end), the slightly better cleaned data didn't seem to produce a better performing model. Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between finnish and Finnish. Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions. This Finnish RoBERTa model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 520k train steps (2 epochs, batch size 512) with a sequence length of 128 and continuing for 520k steps (1 epoch, batch size 64) with a sequence length of 512. The optimizer used for the 128 sequence training was AdamW, and for the 512 sequence training it was Adafactor (to save memory). Learning rate was 2e-4, β1=0.9\beta_{1} = 0.9β1=0.9, β2=0.98\beta_{2} = 0.98β2=0.98 and ϵ=1e−6\epsilon = 1e-6ϵ=1e−6, learning rate warmup for 1500 steps and linear decay of the learning rate after. Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish model: To conclude, this model didn't significantly improve compared to our previous Finnish-NLP/roberta-large-finnish model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
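The masked-language-modeling pipeline example referenced in the card was lost in the crawl. Below is a minimal sketch; the Finnish sentence is illustrative, and <mask> is the RoBERTa-style mask token for this byte-level BPE tokenizer.

```python
from transformers import pipeline

# Fill-mask with the Finnish RoBERTa-large v2 checkpoint; the mask token is <mask>.
unmasker = pipeline("fill-mask", model="Finnish-NLP/roberta-large-finnish-v2")
for prediction in unmasker("Helsinki on Suomen <mask>."):
    print(prediction["sequence"], round(prediction["score"], 3))
```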
Finnish-NLP/roberta-large-finnish | https://huggingface.co/Finnish-NLP/roberta-large-finnish | Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between finnish and Finnish. Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions. This Finnish RoBERTa model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, β1=0.9\beta_{1} = 0.9β1=0.9, β2=0.98\beta_{2} = 0.98β2=0.98 and ϵ=1e−6\epsilon = 1e-6ϵ=1e−6, learning rate warmup for 1500 steps and linear decay of the learning rate after. Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) and to our previous Finnish RoBERTa-large trained during the Hugging Face JAX/Flax community week: To conclude, this model improves on our previous Finnish RoBERTa-large model trained during the Hugging Face JAX/Flax community week but is still slightly (~ 1%) losing to the FinBERT (Finnish BERT) model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/roberta-large-finnish
### Model URL : https://huggingface.co/Finnish-NLP/roberta-large-finnish
### Model Description : Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective. RoBERTa was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between finnish and Finnish. Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions. This Finnish RoBERTa model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 78GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 2 epochs with a sequence length of 128 and continuing for one more epoch with a sequence length of 512. The optimizer used is Adafactor with a learning rate of 2e-4, β1=0.9\beta_{1} = 0.9β1=0.9, β2=0.98\beta_{2} = 0.98β2=0.98 and ϵ=1e−6\epsilon = 1e-6ϵ=1e−6, learning rate warmup for 1500 steps and linear decay of the learning rate after. Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) and to our previous Finnish RoBERTa-large trained during the Hugging Face JAX/Flax community week: To conclude, this model improves on our previous Finnish RoBERTa-large model trained during the Hugging Face JAX/Flax community week but is still slightly (~ 1%) losing to the FinBERT (Finnish BERT) model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 |
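The feature-extraction snippets referenced in this card did not survive crawling either. Here is a minimal PyTorch sketch with the auto classes; the input text and the mean-pooling step are illustrative choices, not from the original card.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Get contextual features from the Finnish RoBERTa-large encoder.
checkpoint = "Finnish-NLP/roberta-large-finnish"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("Tekstiä, josta halutaan piirteet.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # simple mean pooling over tokens
print(sentence_embedding.shape)  # (batch, hidden_size)
```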
Finnish-NLP/roberta-large-wechsel-finnish | https://huggingface.co/Finnish-NLP/roberta-large-wechsel-finnish | Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective with WECHSEL method. RoBERTa was introduced in
this paper and first released in
this repository. WECHSEL method (Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models) was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between finnish and Finnish. Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. Using the WECHSEL method, we first took the pretrained English roberta-large model, replaced its tokenizer with our Finnish tokenizer and initialized the model's token embeddings such that they are close to semantically similar English tokens by utilizing multilingual static word embeddings (by fastText) covering English and Finnish. We were able to confirm the WECHSEL paper's findings that using this method you can save pretraining time and thus computing resources. To get an idea of the WECHSEL method's training time savings you can check the table below illustrating the MLM evaluation accuracies during the pretraining compared to the Finnish-NLP/roberta-large-finnish-v2 which was trained from scratch: Downstream fine-tuning text classification tests can be found at the end, but there this model trained with the WECHSEL method didn't significantly improve the downstream performance. However, based on tens of qualitative fill-mask task example tests we noticed that for the fill-mask task this WECHSEL model significantly outperforms our other models trained from scratch. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions. This Finnish RoBERTa model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 270k steps (a bit over 1 epoch, 512 batch size) with a sequence length of 128 and continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). Learning rate was 2e-4, β1=0.9\beta_{1} = 0.9β1=0.9, β2=0.98\beta_{2} = 0.98β2=0.98 and ϵ=1e−6\epsilon = 1e-6ϵ=1e−6, learning rate warmup for 2500 steps and linear decay of the learning rate after. Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish-v2 and Finnish-NLP/roberta-large-finnish models: To conclude, this model didn't significantly improve compared to our previous models which were trained from scratch instead of using the WECHSEL method as in this model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model. This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud. Feel free to contact us for more details 🤗 | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Finnish-NLP/roberta-large-wechsel-finnish
### Model URL : https://huggingface.co/Finnish-NLP/roberta-large-wechsel-finnish
### Model Description : Pretrained RoBERTa model on Finnish language using a masked language modeling (MLM) objective with WECHSEL method. RoBERTa was introduced in
this paper and first released in
this repository. WECHSEL method (Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models) was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between finnish and Finnish. Finnish RoBERTa is a transformers model pretrained on a large corpus of Finnish data in a self-supervised fashion. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the RoBERTa model as inputs. Using the WECHSEL method, we first took the pretrained English roberta-large model, replaced its tokenizer with our Finnish tokenizer and initialized the model's token embeddings such that they are close to semantically similar English tokens by utilizing multilingual static word embeddings (by fastText) covering English and Finnish. We were able to confirm the WECHSEL paper's findings that using this method you can save pretraining time and thus computing resources. To get an idea of the WECHSEL method's training time savings you can check the table below illustrating the MLM evaluation accuracies during the pretraining compared to the Finnish-NLP/roberta-large-finnish-v2 which was trained from scratch: Downstream fine-tuning text classification tests can be found at the end, but there this model trained with the WECHSEL method didn't significantly improve the downstream performance. However, based on tens of qualitative fill-mask task example tests we noticed that for the fill-mask task this WECHSEL model significantly outperforms our other models trained from scratch. You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2. You can use this model directly with a pipeline for masked language modeling: Here is how to use this model to get the features of a given text in PyTorch: and in TensorFlow: The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions. This Finnish RoBERTa model was pretrained on the combination of five datasets: Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text. The texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with <s> and the end of one by </s> The details of the masking procedure for each sentence are the following: 15% of the tokens are masked. In 80% of the cases, the masked tokens are replaced by <mask>. In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. In the 10% remaining cases, the masked tokens are left as is. Contrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed). The model was trained on TPUv3-8 VM, sponsored by the Google TPU Research Cloud, for 270k steps (a bit over 1 epoch, 512 batch size) with a sequence length of 128 and continuing for 180k steps (batch size 64) with a sequence length of 512. The optimizer used was Adafactor (to save memory). Learning rate was 2e-4, β1=0.9\beta_{1} = 0.9β1=0.9, β2=0.98\beta_{2} = 0.98β2=0.98 and ϵ=1e−6\epsilon = 1e-6ϵ=1e−6, learning rate warmup for 2500 steps and linear decay of the learning rate after. Evaluation was done by fine-tuning the model on downstream text classification task with two different labeled datasets: Yle News and Eduskunta. Yle News classification fine-tuning was done with two different sequence lengths: 128 and 512 but Eduskunta only with 128 sequence length.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to the FinBERT (Finnish BERT) model and to our previous Finnish-NLP/roberta-large-finnish-v2 and Finnish-NLP/roberta-large-finnish models: To conclude, this model didn't significantly improve compared to our previous models which were trained from scratch instead of using the WECHSEL method as in this model. This model is also slightly (~ 1%) losing to the FinBERT (Finnish BERT) model. This project would not have been possible without compute generously provided by Google through the
This project would not have been possible without compute generously provided by Google through the TPU Research Cloud. Feel free to contact us for more details 🤗 |
Fiona99/distilbert-base-uncased-finetuned-cola | https://huggingface.co/Fiona99/distilbert-base-uncased-finetuned-cola | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Fiona99/distilbert-base-uncased-finetuned-cola
### Model URL : https://huggingface.co/Fiona99/distilbert-base-uncased-finetuned-cola
### Model Description : No model card New: Create and edit this model card directly on the website! |
Firat/albert-base-v2-finetuned-squad | https://huggingface.co/Firat/albert-base-v2-finetuned-squad | This model is a fine-tuned version of albert-base-v2 on the squad dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Firat/albert-base-v2-finetuned-squad
### Model URL : https://huggingface.co/Firat/albert-base-v2-finetuned-squad
### Model Description : This model is a fine-tuned version of albert-base-v2 on the squad dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Firat/distilbert-base-uncased-finetuned-squad | https://huggingface.co/Firat/distilbert-base-uncased-finetuned-squad | This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Firat/distilbert-base-uncased-finetuned-squad
### Model URL : https://huggingface.co/Firat/distilbert-base-uncased-finetuned-squad
### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Firat/roberta-base-finetuned-squad | https://huggingface.co/Firat/roberta-base-finetuned-squad | This model is a fine-tuned version of roberta-base on the squad dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Firat/roberta-base-finetuned-squad
### Model URL : https://huggingface.co/Firat/roberta-base-finetuned-squad
### Model Description : This model is a fine-tuned version of roberta-base on the squad dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
Firestawn/Ru | https://huggingface.co/Firestawn/Ru | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Firestawn/Ru
### Model URL : https://huggingface.co/Firestawn/Ru
### Model Description : No model card New: Create and edit this model card directly on the website! |
FirmanBr/FirmanBrilianBert | https://huggingface.co/FirmanBr/FirmanBrilianBert | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FirmanBr/FirmanBrilianBert
### Model URL : https://huggingface.co/FirmanBr/FirmanBrilianBert
### Model Description : No model card New: Create and edit this model card directly on the website! |
FirmanBr/FirmanIndoLanguageModel | https://huggingface.co/FirmanBr/FirmanIndoLanguageModel | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FirmanBr/FirmanIndoLanguageModel
### Model URL : https://huggingface.co/FirmanBr/FirmanIndoLanguageModel
### Model Description : No model card New: Create and edit this model card directly on the website! |
FirmanBr/chibibot | https://huggingface.co/FirmanBr/chibibot | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FirmanBr/chibibot
### Model URL : https://huggingface.co/FirmanBr/chibibot
### Model Description : No model card New: Create and edit this model card directly on the website! |
FisherYu/test_code_nlp | https://huggingface.co/FisherYu/test_code_nlp | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FisherYu/test_code_nlp
### Model URL : https://huggingface.co/FisherYu/test_code_nlp
### Model Description : |
FitoDS/wav2vec2-large-xls-r-300m-guarani-colab | https://huggingface.co/FitoDS/wav2vec2-large-xls-r-300m-guarani-colab | This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FitoDS/wav2vec2-large-xls-r-300m-guarani-colab
### Model URL : https://huggingface.co/FitoDS/wav2vec2-large-xls-r-300m-guarani-colab
### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
FitoDS/wav2vec2-large-xls-r-300m-spanish-large | https://huggingface.co/FitoDS/wav2vec2-large-xls-r-300m-spanish-large | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FitoDS/wav2vec2-large-xls-r-300m-spanish-large
### Model URL : https://huggingface.co/FitoDS/wav2vec2-large-xls-r-300m-spanish-large
### Model Description : No model card New: Create and edit this model card directly on the website! |
FitoDS/wav2vec2-large-xls-r-300m-turkish-colab | https://huggingface.co/FitoDS/wav2vec2-large-xls-r-300m-turkish-colab | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FitoDS/wav2vec2-large-xls-r-300m-turkish-colab
### Model URL : https://huggingface.co/FitoDS/wav2vec2-large-xls-r-300m-turkish-colab
### Model Description : No model card New: Create and edit this model card directly on the website! |
FitoDS/xls-r-ab-test | https://huggingface.co/FitoDS/xls-r-ab-test | This model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid:
Schema validation error. "model-index[0].name" is not allowed to be empty | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FitoDS/xls-r-ab-test
### Model URL : https://huggingface.co/FitoDS/xls-r-ab-test
### Model Description : This model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid:
Schema validation error. "model-index[0].name" is not allowed to be empty |
Flakko/FlakkoDaniel | https://huggingface.co/Flakko/FlakkoDaniel | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Flakko/FlakkoDaniel
### Model URL : https://huggingface.co/Flakko/FlakkoDaniel
### Model Description : No model card New: Create and edit this model card directly on the website! |
Flampt/DialoGPT-medium-Sheldon | https://huggingface.co/Flampt/DialoGPT-medium-Sheldon | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Flampt/DialoGPT-medium-Sheldon
### Model URL : https://huggingface.co/Flampt/DialoGPT-medium-Sheldon
### Model Description : |
Flarrix/gpt11-lmao | https://huggingface.co/Flarrix/gpt11-lmao | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Flarrix/gpt11-lmao
### Model URL : https://huggingface.co/Flarrix/gpt11-lmao
### Model Description : No model card New: Create and edit this model card directly on the website! |
FloKit/bert-base-uncased-finetuned-squad | https://huggingface.co/FloKit/bert-base-uncased-finetuned-squad | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FloKit/bert-base-uncased-finetuned-squad
### Model URL : https://huggingface.co/FloKit/bert-base-uncased-finetuned-squad
### Model Description : No model card New: Create and edit this model card directly on the website! |
FloZe92/DialoGPT-small-harrypotter_ | https://huggingface.co/FloZe92/DialoGPT-small-harrypotter_ | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FloZe92/DialoGPT-small-harrypotter_
### Model URL : https://huggingface.co/FloZe92/DialoGPT-small-harrypotter_
### Model Description : No model card New: Create and edit this model card directly on the website! |
Flyguy/model_name | https://huggingface.co/Flyguy/model_name | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Flyguy/model_name
### Model URL : https://huggingface.co/Flyguy/model_name
### Model Description : No model card New: Create and edit this model card directly on the website! |
FlyingFrog/Model1 | https://huggingface.co/FlyingFrog/Model1 | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FlyingFrog/Model1
### Model URL : https://huggingface.co/FlyingFrog/Model1
### Model Description : No model card New: Create and edit this model card directly on the website! |
For/sheldonbot | https://huggingface.co/For/sheldonbot | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : For/sheldonbot
### Model URL : https://huggingface.co/For/sheldonbot
### Model Description : |
Forax/For | https://huggingface.co/Forax/For | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Forax/For
### Model URL : https://huggingface.co/Forax/For
### Model Description : No model card New: Create and edit this model card directly on the website! |
Forest/gpt2-fanfic | https://huggingface.co/Forest/gpt2-fanfic | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Forest/gpt2-fanfic
### Model URL : https://huggingface.co/Forest/gpt2-fanfic
### Model Description : No model card New: Create and edit this model card directly on the website! |
Forost/Out | https://huggingface.co/Forost/Out | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Forost/Out
### Model URL : https://huggingface.co/Forost/Out
### Model Description : No model card New: Create and edit this model card directly on the website! |
ForutanRad/bert-fa-QA-v1 | https://huggingface.co/ForutanRad/bert-fa-QA-v1 | Persian question answering model based on BERT. This model is a fine-tuned version of ParsBERT on the PersianQA dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : ForutanRad/bert-fa-QA-v1
### Model URL : https://huggingface.co/ForutanRad/bert-fa-QA-v1
### Model Description : Persian question answering model based on BERT. This model is a fine-tuned version of ParsBERT on the PersianQA dataset.
It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: |
FosterPatch/GoT-test | https://huggingface.co/FosterPatch/GoT-test | null | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : FosterPatch/GoT-test
### Model URL : https://huggingface.co/FosterPatch/GoT-test
### Model Description : |
Francesco/dummy | https://huggingface.co/Francesco/dummy | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Francesco/dummy
### Model URL : https://huggingface.co/Francesco/dummy
### Model Description : No model card New: Create and edit this model card directly on the website! |
Francesco/resnet101-224-1k | https://huggingface.co/Francesco/resnet101-224-1k | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Francesco/resnet101-224-1k
### Model URL : https://huggingface.co/Francesco/resnet101-224-1k
### Model Description : No model card New: Create and edit this model card directly on the website! |
Francesco/resnet101 | https://huggingface.co/Francesco/resnet101 | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Francesco/resnet101
### Model URL : https://huggingface.co/Francesco/resnet101
### Model Description : No model card New: Create and edit this model card directly on the website! |
Francesco/resnet152-224-1k | https://huggingface.co/Francesco/resnet152-224-1k | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Francesco/resnet152-224-1k
### Model URL : https://huggingface.co/Francesco/resnet152-224-1k
### Model Description : No model card New: Create and edit this model card directly on the website! |
Francesco/resnet152 | https://huggingface.co/Francesco/resnet152 | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Francesco/resnet152
### Model URL : https://huggingface.co/Francesco/resnet152
### Model Description : No model card New: Create and edit this model card directly on the website! |
Francesco/resnet18-224-1k | https://huggingface.co/Francesco/resnet18-224-1k | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents
### Model Name : Francesco/resnet18-224-1k
### Model URL : https://huggingface.co/Francesco/resnet18-224-1k
### Model Description : No model card New: Create and edit this model card directly on the website! |