DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch6
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch6
No model card.
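None of the DoyyingFace BERT checkpoints in this group ship a model card, so their label sets and preprocessing are undocumented. The snippet below is only a minimal sketch of how a tweet-classification checkpoint like this is typically loaded with the transformers pipeline API; the assumption that the repository exposes a standard sequence-classification head is mine, not the model card's.

```python
from transformers import pipeline

# Assumption: the checkpoint carries a standard text-classification head;
# its labels are not documented anywhere in the listing above.
classifier = pipeline(
    "text-classification",
    model="DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch6",
)

print(classifier("example tweet text"))  # e.g. [{'label': '...', 'score': 0.97}]
```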
DoyyingFace/bert-asian-hate-tweets-self-clean-small-more-epoch
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean-small-more-epoch
No model card.
DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-100
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-100
No model card.
DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-50
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-50
No model card.
DoyyingFace/bert-asian-hate-tweets-self-clean-small
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean-small
No model card.
DoyyingFace/bert-asian-hate-tweets-self-clean-with-unclean-valid
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean-with-unclean-valid
No model card.
DoyyingFace/bert-asian-hate-tweets-self-clean
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-clean
No model card.
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-12
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-12
No model card.
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-4
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-4
No model card.
DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-8
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-unclean-freeze-8
No model card.
DoyyingFace/bert-asian-hate-tweets-self-unclean-small
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-unclean-small
No model card.
DoyyingFace/bert-asian-hate-tweets-self-unclean
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-unclean
No model card.
DoyyingFace/bert-asian-hate-tweets-self-unlean-with-clean-valid
https://huggingface.co/DoyyingFace/bert-asian-hate-tweets-self-unlean-with-clean-valid
No model card.
DoyyingFace/bert-cola-finetuned
https://huggingface.co/DoyyingFace/bert-cola-finetuned
No model card.
DoyyingFace/bert-tweets-semeval-clean
https://huggingface.co/DoyyingFace/bert-tweets-semeval-clean
No model card.
DoyyingFace/bert-tweets-semeval-unclean
https://huggingface.co/DoyyingFace/bert-tweets-semeval-unclean
No model card.
DoyyingFace/bert-wiki-comments-finetuned
https://huggingface.co/DoyyingFace/bert-wiki-comments-finetuned
No model card.
DoyyingFace/doyying_bert_first
https://huggingface.co/DoyyingFace/doyying_bert_first
No model card.
DoyyingFace/doyying_bert_first_again
https://huggingface.co/DoyyingFace/doyying_bert_first_again
This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
DoyyingFace/dummy-model
https://huggingface.co/DoyyingFace/dummy-model
This model is a fine-tuned version of camembert-base on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
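The dummy-model card only states that it is a fine-tuned version of camembert-base on an unknown dataset, so the downstream task is unknown. A minimal sketch, assuming the checkpoint keeps CamemBERT's masked-language-modelling head (an assumption on my part, since no task is documented):

```python
from transformers import pipeline

# Assumes the fine-tuned checkpoint still exposes a fill-mask head,
# as camembert-base does; the example sentence is illustrative only.
fill_mask = pipeline("fill-mask", model="DoyyingFace/dummy-model")
print(fill_mask("Le camembert est <mask> !"))
```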
DoyyingFace/test-dummy-model
https://huggingface.co/DoyyingFace/test-dummy-model
No model card.
DrMatters/rubert_cased
https://huggingface.co/DrMatters/rubert_cased
No model card.
DrOz/DialoGPT-small-RickAndMorty
https://huggingface.co/DrOz/DialoGPT-small-RickAndMorty
No model card.
DrSploit/DrFars
https://huggingface.co/DrSploit/DrFars
No model card.
Drackyyy/TLM
https://huggingface.co/Drackyyy/TLM
No model card.
Drackyyy/ag-large-scale
https://huggingface.co/Drackyyy/ag-large-scale
No model card.
Dragoniod1596/DialoGPT-small-Legacies
https://huggingface.co/Dragoniod1596/DialoGPT-small-Legacies
Dragonjack/test
https://huggingface.co/Dragonjack/test
No model card.
Dreyzin/DialoGPT-medium-avatar
https://huggingface.co/Dreyzin/DialoGPT-medium-avatar
# Uncle Iroh DialoGPT Model
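The card above identifies this checkpoint only as an Uncle Iroh DialoGPT model. A hedged sketch of the generic multi-turn DialoGPT chat loop, not taken from this model's card, would look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic DialoGPT-style chat loop; the model ID comes from the listing above,
# the decoding settings are illustrative defaults.
model_id = "Dreyzin/DialoGPT-medium-avatar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for user_input in ["Hello, Uncle Iroh!", "Any advice about tea?"]:
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```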
Dri/Dri
https://huggingface.co/Dri/Dri
No model card.
DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7 --dataset mozilla-foundation/common_voice_7_0 --config ab --split test --log_outputs NA The following hyperparameters were used during training:
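The eval.py call in this and the following DrishtiSharma entries appears to be the shared evaluation script from the Hugging Face speech-recognition community event. For plain transcription, a fine-tuned XLS-R CTC checkpoint like this one is normally driven through the automatic-speech-recognition pipeline; a minimal sketch, assuming a local 16 kHz audio clip (the file name is a placeholder):

```python
from transformers import pipeline

# Minimal inference sketch: assumes the checkpoint has a CTC head and
# that the input audio is 16 kHz mono; the clip name is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-ab-CV7",
)
print(asr("sample_abkhaz_clip.wav")["text"])
```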
DrishtiSharma/wav2vec2-large-xls-r-300m-ab-v4
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-ab-v4
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid: Schema validation error. "model-index[0].name" is not allowed to be empty
DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AS dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs Assamese language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs Assamese (as) language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BAS dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bas-v1 --dataset mozilla-foundation/common_voice_8_0 --config bas --split test --log_outputs Basaa (bas) language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
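The second command for this checkpoint evaluates on speech-recognition-community-v2/dev_data with --chunk_length_s 10 --stride_length_s 1. Those flags map onto the chunked long-form decoding options of the transformers ASR pipeline; a hedged sketch of the equivalent direct call (the recording path is a placeholder):

```python
from transformers import pipeline

# Chunked long-form decoding, mirroring --chunk_length_s 10 --stride_length_s 1
# from the evaluation command above; the recording path is a placeholder.
asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-bg-d2",
)
result = asr("long_bulgarian_recording.wav", chunk_length_s=10, stride_length_s=1)
print(result["text"])
```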
DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BG dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset mozilla-foundation/common_voice_8_0 --config bg --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-bg-v1 --dataset speech-recognition-community-v2/dev_data --config bg --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d10 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs Breton language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - BR dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-br-d2 --dataset mozilla-foundation/common_voice_8_0 --config br --split test --log_outputs Breton language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - GN dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-gn-k1 --dataset mozilla-foundation/common_voice_8_0 --config gn --split test --log_outputs NA The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs NA The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8-b2 --dataset mozilla-foundation/common_voice_8_0 --config hi --split test --log_outputs Hindi language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HI dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset mozilla-foundation/common_voice_8_0 --config hi --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-cv8 --dataset speech-recognition-community-v2/dev_data --config hi --split validation --chunk_length_s 10 --stride_length_s 1 Note: Hindi language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set: Evaluation Commands: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs Hindi language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set: Evaluation Commands: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-wx1 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs NA The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v1 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs Upper Sorbian language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v2 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs Upper Sorbian (hsb) not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HSB dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hsb-v3 --dataset mozilla-foundation/common_voice_8_0 --config hsb --split test --log_outputs Upper Sorbian (hsb) language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs Kazakh language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training: !python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs
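This is one of several "-with-LM" checkpoints in the list, which normally bundle an n-gram language model for CTC beam-search decoding. A minimal sketch, assuming the repository actually ships a Wav2Vec2ProcessorWithLM (which in turn needs the pyctcdecode and kenlm packages installed); the audio path is a placeholder:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

# Assumes the repo bundles an n-gram LM next to the CTC model, so that
# batch_decode runs LM-boosted beam search instead of greedy decoding.
model_id = "DrishtiSharma/wav2vec2-large-xls-r-300m-kk-with-LM"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("kazakh_sample.wav", sr=16_000)  # placeholder audio path
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(logits.numpy()).text[0])
```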
DrishtiSharma/wav2vec2-large-xls-r-300m-maltese
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-maltese
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: !python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-maltese --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs This model's model-index metadata is invalid: Schema validation error. "model-index[0].results[0].metrics" is required
DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MR dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset mozilla-foundation/common_voice_8_0 --config mr --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-mr-v2 --dataset speech-recognition-community-v2/dev_data --config mr --split validation --chunk_length_s 10 --stride_length_s 1 Note: Marathi language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs Erzya language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev_data --config or --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-dx12 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs Oriya language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-pa-IN-dx1 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs Punjabi language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-a3 --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs Note: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SAT dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset mozilla-foundation/common_voice_8_0 --config sat --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sat-final --dataset speech-recognition-community-v2/dev_data --config sat --split validation --chunk_length_s 10 --stride_length_s 1 Note: Santali (Ol Chiki) language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sl-with-LM-v2 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SR dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common_voice_8_0 --config sr --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev_data --config sr --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SR dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset mozilla-foundation/common_voice_8_0 --config sr --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-sr-v4 --dataset speech-recognition-community-v2/dev_data --config sr --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2
https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - VOT dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common_voice_8_0 --config vot --split test --log_outputs Votic language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - VOT dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-vot-final-a2 --dataset mozilla-foundation/common_voice_8_0 --config vot --split test --log_outputs Votic language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-300m-kk-n2
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-kk-n2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs Kazakh language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - KK dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-kk-n2 --dataset mozilla-foundation/common_voice_8_0 --config kk --split test --log_outputs Kazakh language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-300m-mt-o1
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-mt-o1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs Maltese language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MT dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-mt-o1 --dataset mozilla-foundation/common_voice_8_0 --config mt --split test --log_outputs Maltese language not found in speech-recognition-community-v2/dev_data! The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs Punjabi language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-pa-IN-r5 --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test --log_outputs Punjabi language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-SURSILV dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test --log_outputs Romansh-Sursilv language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-SURSILV dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-sursilv-d11 --dataset mozilla-foundation/common_voice_8_0 --config rm-sursilv --split test --log_outputs Romansh-Sursilv language isn't available in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test --log_outputs Romansh-Vallader language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - RM-VALLADER dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-300m-rm-vallader-d1 --dataset mozilla-foundation/common_voice_8_0 --config rm-vallader --split test --log_outputs Romansh-Vallader language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-myv-a1
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-myv-a1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset. It achieves the following results on the evaluation set: 1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data: the Erzya language is not found in speech-recognition-community-v2/dev_data. The following hyperparameters were used during training: !python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-myv-a1 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-myv-a1 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MYV dataset. It achieves the following results on the evaluation set: 1. To evaluate on mozilla-foundation/common_voice_8_0 with the test split: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-myv-a1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs 2. To evaluate on speech-recognition-community-v2/dev_data: the Erzya language is not found in speech-recognition-community-v2/dev_data. The following hyperparameters were used during training: !python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-myv-v1 --dataset mozilla-foundation/common_voice_8_0 --config myv --split test --log_outputs
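The eval.py commands in entries like the one above are typically used to report word error rate (WER); a rough sketch of that metric computation with the evaluate library follows. The prediction and reference strings are hypothetical placeholders, not outputs of this model.
```python
# Word error rate (WER), the metric these eval.py-style scripts commonly report.
from evaluate import load

wer_metric = load("wer")

predictions = ["the cat sat on the mat", "hello word"]    # hypothetical model outputs
references = ["the cat sat on the mat", "hello world"]    # hypothetical ground truth

# compute() returns a single float: the fraction of word-level errors.
print(wer_metric.compute(predictions=predictions, references=references))
```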
DrishtiSharma/wav2vec2-xls-r-pa-IN-a1
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-pa-IN-a1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid: Schema validation error. "model-index[0].name" is not allowed to be empty
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-pa-IN-a1 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-pa-IN-a1 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: This model's model-index metadata is invalid: Schema validation error. "model-index[0].name" is not allowed to be empty
DrishtiSharma/wav2vec2-xls-r-sl-a1
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-sl-a1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-sl-a1 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-sl-a1 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a1 --dataset speech-recognition-community-v2/dev_data --config sl --split validation --chunk_length_s 10 --stride_length_s 1 The following hyperparameters were used during training:
DrishtiSharma/wav2vec2-xls-r-sl-a2
https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-sl-a2
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: Evaluation commands: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs Votic language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DrishtiSharma/wav2vec2-xls-r-sl-a2 ### Model URL : https://huggingface.co/DrishtiSharma/wav2vec2-xls-r-sl-a2 ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SL dataset. It achieves the following results on the evaluation set: Evaluation commands: python eval.py --model_id DrishtiSharma/wav2vec2-xls-r-sl-a2 --dataset mozilla-foundation/common_voice_8_0 --config sl --split test --log_outputs Votic language not found in speech-recognition-community-v2/dev_data The following hyperparameters were used during training:
Duael/RRHood
https://huggingface.co/Duael/RRHood
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Duael/RRHood ### Model URL : https://huggingface.co/Duael/RRHood ### Model Description :
Duc/distilbert-base-uncased-finetuned-ner
https://huggingface.co/Duc/distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Duc/distilbert-base-uncased-finetuned-ner ### Model URL : https://huggingface.co/Duc/distilbert-base-uncased-finetuned-ner ### Model Description : This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
DuckMeme/Eve
https://huggingface.co/DuckMeme/Eve
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DuckMeme/Eve ### Model URL : https://huggingface.co/DuckMeme/Eve ### Model Description : No model card New: Create and edit this model card directly on the website!
Duda/Duda
https://huggingface.co/Duda/Duda
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Duda/Duda ### Model URL : https://huggingface.co/Duda/Duda ### Model Description : No model card New: Create and edit this model card directly on the website!
Dudu/DialoGPT-small-harrypotter
https://huggingface.co/Dudu/DialoGPT-small-harrypotter
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dudu/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/Dudu/DialoGPT-small-harrypotter ### Model Description : No model card New: Create and edit this model card directly on the website!
DueLinx0402/DialoGPT-small-harrypotter
https://huggingface.co/DueLinx0402/DialoGPT-small-harrypotter
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : DueLinx0402/DialoGPT-small-harrypotter ### Model URL : https://huggingface.co/DueLinx0402/DialoGPT-small-harrypotter ### Model Description :
Dumiiii/wav2vec2-xls-r-300m-romanian
https://huggingface.co/Dumiiii/wav2vec2-xls-r-300m-romanian
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Common Voice RO and RSS datasets. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: Used the following code for evaluation: Credits for evaluation: https://huggingface.co/anton-l This model's model-index metadata is invalid: Schema validation error. "model-index" must be an array
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dumiiii/wav2vec2-xls-r-300m-romanian ### Model URL : https://huggingface.co/Dumiiii/wav2vec2-xls-r-300m-romanian ### Model Description : This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Common Voice RO and RSS datasets. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training: Used the following code for evaluation: Credits for evaluation: https://huggingface.co/anton-l This model's model-index metadata is invalid: Schema validation error. "model-index" must be an array
Duugu/alexia-bot-test
https://huggingface.co/Duugu/alexia-bot-test
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Duugu/alexia-bot-test ### Model URL : https://huggingface.co/Duugu/alexia-bot-test ### Model Description :
Duugu/jakebot3000
https://huggingface.co/Duugu/jakebot3000
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Duugu/jakebot3000 ### Model URL : https://huggingface.co/Duugu/jakebot3000 ### Model Description :
Duy/wav2vec2_malay
https://huggingface.co/Duy/wav2vec2_malay
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Duy/wav2vec2_malay ### Model URL : https://huggingface.co/Duy/wav2vec2_malay ### Model Description : No model card New: Create and edit this model card directly on the website!
Dynamo14324/macow
https://huggingface.co/Dynamo14324/macow
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dynamo14324/macow ### Model URL : https://huggingface.co/Dynamo14324/macow ### Model Description : No model card New: Create and edit this model card directly on the website!
Dyzi/DialoGPT-small-landcheese
https://huggingface.co/Dyzi/DialoGPT-small-landcheese
#Landcheese
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Dyzi/DialoGPT-small-landcheese ### Model URL : https://huggingface.co/Dyzi/DialoGPT-small-landcheese ### Model Description : #Landcheese
E312/t5-small-finetuned-xsum
https://huggingface.co/E312/t5-small-finetuned-xsum
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : E312/t5-small-finetuned-xsum ### Model URL : https://huggingface.co/E312/t5-small-finetuned-xsum ### Model Description : No model card New: Create and edit this model card directly on the website!
ECHO123/1
https://huggingface.co/ECHO123/1
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ECHO123/1 ### Model URL : https://huggingface.co/ECHO123/1 ### Model Description : No model card New: Create and edit this model card directly on the website!
EColi/sponsorblock-base-v1
https://huggingface.co/EColi/sponsorblock-base-v1
This model is a fine-tuned version of /1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/ on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EColi/sponsorblock-base-v1 ### Model URL : https://huggingface.co/EColi/sponsorblock-base-v1 ### Model Description : This model is a fine-tuned version of /1TB_SSD/SB_AI/out_epoch1/out/checkpoint-1115000/ on an unknown dataset. It achieves the following results on the evaluation set: More information needed More information needed More information needed The following hyperparameters were used during training:
EColi/sponsorblock-base
https://huggingface.co/EColi/sponsorblock-base
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EColi/sponsorblock-base ### Model URL : https://huggingface.co/EColi/sponsorblock-base ### Model Description : No model card New: Create and edit this model card directly on the website!
EEE/DialoGPT-medium-brooke
https://huggingface.co/EEE/DialoGPT-medium-brooke
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EEE/DialoGPT-medium-brooke ### Model URL : https://huggingface.co/EEE/DialoGPT-medium-brooke ### Model Description :
EEE/DialoGPT-small-aang
https://huggingface.co/EEE/DialoGPT-small-aang
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EEE/DialoGPT-small-aang ### Model URL : https://huggingface.co/EEE/DialoGPT-small-aang ### Model Description :
EEE/DialoGPT-small-yoda
https://huggingface.co/EEE/DialoGPT-small-yoda
null
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EEE/DialoGPT-small-yoda ### Model URL : https://huggingface.co/EEE/DialoGPT-small-yoda ### Model Description :
EEE/TrumpSpeechGen
https://huggingface.co/EEE/TrumpSpeechGen
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EEE/TrumpSpeechGen ### Model URL : https://huggingface.co/EEE/TrumpSpeechGen ### Model Description : No model card New: Create and edit this model card directly on the website!
EGOIST/XM
https://huggingface.co/EGOIST/XM
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EGOIST/XM ### Model URL : https://huggingface.co/EGOIST/XM ### Model Description : No model card New: Create and edit this model card directly on the website!
EL1u/distilbert-base-uncased-finetuned-ner
https://huggingface.co/EL1u/distilbert-base-uncased-finetuned-ner
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EL1u/distilbert-base-uncased-finetuned-ner ### Model URL : https://huggingface.co/EL1u/distilbert-base-uncased-finetuned-ner ### Model Description : No model card New: Create and edit this model card directly on the website!
ELiRF/NASCA
https://huggingface.co/ELiRF/NASCA
IMPORTANT: On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding. Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most widely used ROUGE measure but also other more semantic ones such as BertScore, do not allow correct evaluation of the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. News Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, for summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus, and Wikipedia articles in Catalan were used for pre-training the model (9.3 GB of raw text, 2.5 million documents). NASca is fine-tuned for the summarization task on 636,596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ELiRF/NASCA ### Model URL : https://huggingface.co/ELiRF/NASCA ### Model Description : IMPORTANT: On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding. Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most widely used ROUGE measure but also other more semantic ones such as BertScore, do not allow correct evaluation of the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. News Abstractive Summarization for Catalan (NASca) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, for summarization of Catalan news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Catalan newspapers, the Catalan subset of the OSCAR corpus, and Wikipedia articles in Catalan were used for pre-training the model (9.3 GB of raw text, 2.5 million documents). NASca is fine-tuned for the summarization task on 636,596 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
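As a usage illustration for the entry above, a minimal summarization sketch follows. It assumes the ELiRF/NASCA checkpoint loads as a standard BART-style seq2seq model through the transformers summarization pipeline; the input article is a placeholder for Catalan text and the generation settings are illustrative only.
```python
# Minimal summarization sketch for ELiRF/NASCA; generation settings are illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="ELiRF/NASCA")

article = "Text d'una notícia en català..."  # placeholder Catalan news article
summary = summarizer(article, max_length=64, min_length=10, truncation=True)
print(summary[0]["summary_text"])
```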
ELiRF/NASES
https://huggingface.co/ELiRF/NASES
IMPORTANT: On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding. Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most widely used ROUGE measure but also other more semantic ones such as BertScore, do not allow correct evaluation of the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, for summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers and Wikipedia articles in Spanish were used for pre-training the model (21 GB of raw text, 8.5 million documents). NASes is fine-tuned for the summarization task on 1,802,919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : ELiRF/NASES ### Model URL : https://huggingface.co/ELiRF/NASES ### Model Description : IMPORTANT: On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you had used the model until that day, we would be glad if you would re-evaluate the model if you are publishing some results with it. We apologize for the inconvenience and thank you for your understanding. Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most widely used ROUGE measure but also other more semantic ones such as BertScore, do not allow correct evaluation of the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out exhaustive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, for summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers and Wikipedia articles in Spanish were used for pre-training the model (21 GB of raw text, 8.5 million documents). NASes is fine-tuned for the summarization task on 1,802,919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA).
EMBEDDIA/bertic-tweetsentiment
https://huggingface.co/EMBEDDIA/bertic-tweetsentiment
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EMBEDDIA/bertic-tweetsentiment ### Model URL : https://huggingface.co/EMBEDDIA/bertic-tweetsentiment ### Model Description : No model card New: Create and edit this model card directly on the website!
EMBEDDIA/crosloengual-bert
https://huggingface.co/EMBEDDIA/crosloengual-bert
CroSloEngual BERT is a trilingual model, using the bert-base architecture, trained on Croatian, Slovenian, and English corpora. By focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer that a monolingual model would not. Evaluation is presented in our article; the preprint is available at arxiv.org/abs/2006.07890.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EMBEDDIA/crosloengual-bert ### Model URL : https://huggingface.co/EMBEDDIA/crosloengual-bert ### Model Description : CroSloEngual BERT is a trilingual model, using the bert-base architecture, trained on Croatian, Slovenian, and English corpora. By focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer that a monolingual model would not. Evaluation is presented in our article; the preprint is available at arxiv.org/abs/2006.07890.
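A hedged fill-mask sketch for the trilingual checkpoint above follows; the masked sentence is a made-up example, and [MASK] is assumed to be the tokenizer's mask token, as is standard for bert-base models.
```python
# Fill-mask sketch for EMBEDDIA/crosloengual-bert.
from transformers import pipeline

fill = pipeline("fill-mask", model="EMBEDDIA/crosloengual-bert")

# Print the top candidate tokens for the masked position with their scores.
for candidate in fill("Ljubljana is the capital of [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```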
EMBEDDIA/english-tweetsentiment
https://huggingface.co/EMBEDDIA/english-tweetsentiment
No model card New: Create and edit this model card directly on the website!
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EMBEDDIA/english-tweetsentiment ### Model URL : https://huggingface.co/EMBEDDIA/english-tweetsentiment ### Model Description : No model card New: Create and edit this model card directly on the website!
EMBEDDIA/est-roberta
https://huggingface.co/EMBEDDIA/est-roberta
Load it with the transformers library (see the sketch below). Est-RoBERTa is a monolingual Estonian BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used for training the model contain 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens. Est-RoBERTa was trained for 40 epochs.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EMBEDDIA/est-roberta ### Model URL : https://huggingface.co/EMBEDDIA/est-roberta ### Model Description : Load it with the transformers library (see the sketch below). Est-RoBERTa is a monolingual Estonian BERT-like model. It is closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used for training the model contain 2.51 billion tokens in total. The subword vocabulary contains 40,000 tokens. Est-RoBERTa was trained for 40 epochs.
```python
# One plausible way to load Est-RoBERTa with the transformers auto classes
# (a sentencepiece tokenizer is assumed, as for CamemBERT-style models).
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")

inputs = tokenizer("Tallinn on Eesti pealinn.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```
EMBEDDIA/finest-bert
https://huggingface.co/EMBEDDIA/finest-bert
FinEst BERT is a trilingual model, using the bert-base architecture, trained on Finnish, Estonian, and English corpora. By focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer that a monolingual model would not. Evaluation is presented in our article; the preprint is available at arxiv.org/abs/2006.07890.
Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : EMBEDDIA/finest-bert ### Model URL : https://huggingface.co/EMBEDDIA/finest-bert ### Model Description : FinEst BERT is a trilingual model, using the bert-base architecture, trained on Finnish, Estonian, and English corpora. By focusing on three languages, the model performs better than multilingual BERT, while still offering an option for cross-lingual knowledge transfer that a monolingual model would not. Evaluation is presented in our article; the preprint is available at arxiv.org/abs/2006.07890.